The past few elections’ focus on polling has led to a proliferation of theories about the polls, with varying levels of evidence and plausibility.
One late-breaking theory is that right-wing pollsters are deliberately releasing skewed polls to improve the perception of former president Donald Trump’s standing. One writer has dubbed these pollsters “Red Wavers,” in reference to the red wave predicted in 2022 that failed to materialize. Two pieces I’ve seen covering it are VP Harris And Her Campaign Are Working Hard And Closing Strong, Trump Is Unfit, Unwell and Unraveling by Simon Rosenberg at his Substack The Hopium Chronicles and “Red Wave” Redux: Are GOP Polls Rigging the Averages in Trump’s Favor by Greg Sargent and Michael Tomasky for The New Republic.
While this theory has a degree of plausibility, I’m not sure these efforts are having a significant impact on the aggregates, or that removing the suspect polls would show a meaningfully different race.
It’s very likely that there are bad-faith pollsters out there who are deliberately trying to bias their results to favor their preferred candidate. Rasmussen Reports, for example, has shown a consistent bias toward Republicans in past elections, including being off by 10 points in Republicans’ favor in 2010. Both Rosenberg and Sargent and Tomasky cite an example where Quantus Insights posted on X (formerly Twitter) bragging that their poll flipped North Carolina back to Trump in 538’s polling average. Sargent and Tomasky also quote multiple sources, both people familiar with polls and Republicans themselves, saying that this kind of deliberate narrative-shaping polling happens.
I also find it plausible that certain media cycles would have played out differently without a shift in polling that was entirely manufactured. It’s unclear whether these significantly impact the overall race, though.
Month | All Polls | Two Star and Higher Polls | Two and a Half Star Polls | ‘Red Waver’ Removed |
---|---|---|---|---|
July | -1.3% | -1.0% | -1.2% | -1.1% |
August | 3.1% | 2.5% | 2.7% | 3.7% |
September | 3.4% | 2.6% | 2.5% | 3.4% |
October | 2.0% | 1.7% | 1.5% | 2.5% |
November | 0.6% | 0.8% | 0.6% | 1.0% |
Polling averages by month for selected subsets of polls. Negative (red) percentages are more pro-Trump, positive (blue) percentages are more pro-Harris. Source: 538
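To make the comparison above concrete, here is a minimal sketch of how such a subset average can be computed. The poll margins below are made up for illustration, not 538’s actual data, and the flagged list is just an example:

```python
# Sketch of the subset comparison: average margin over all polls
# versus the average with flagged pollsters removed.
# Margins are Harris minus Trump in percentage points; positive favors Harris.
polls = [
    {"pollster": "Pollster A", "margin": 3.0},
    {"pollster": "Pollster B", "margin": 2.0},
    {"pollster": "Quantus Insights", "margin": -2.0},   # hypothetical flagged poll
    {"pollster": "Rasmussen Reports", "margin": -3.0},  # hypothetical flagged poll
]
flagged = {"Quantus Insights", "Rasmussen Reports"}

def average_margin(poll_list):
    """Simple unweighted mean; real aggregators weight by recency, quality, etc."""
    return sum(p["margin"] for p in poll_list) / len(poll_list)

all_avg = average_margin(polls)
filtered_avg = average_margin([p for p in polls if p["pollster"] not in flagged])
print(f"All polls: {all_avg:+.1f}, flagged removed: {filtered_avg:+.1f}")
```

Real aggregators like 538 apply weighting and house-effect adjustments on top of this, so removing a pollster shifts their published average less mechanically than this unweighted mean suggests.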
It’s true that if you remove “red wavers,” Harris’ margins improve. That’s not really decisive, though, since the same thing would happen if the theory is wrong and the list has essentially just cherry-picked pollsters that happen to show worse numbers for Harris. To be clear, I don’t think the source of my list, Rosenberg, is intentionally cherry-picking, just that cherry-picking might be hard to avoid once you have a compelling theory.
Even if you think that’s unlikely, there are some problems.
One oddity is that the highest-rated pollsters (as judged by 538) show worse margins for Harris than either all polls or the set with the suspect pollsters removed. That suggests to me there are plausible choices that would make it an even tighter race than the one the average has converged to. (To be clear, there are also plausible choices that favor Harris more than the average.)
Another is that the magnitude isn’t huge. Removing the pollsters Rosenberg flagged moved August’s average 0.6 points toward Harris, left September’s exactly even, and moved October’s and November’s 0.4 points each. This isn’t nothing if you’re trying to build a narrative among poll-focused pundits, strategists, and political news junkies, but it still results in a pretty tight race. Sargent and Tomasky’s article had a good example of how it could affect perceptions, noting that at the time, aggregators estimated Trump was leading Pennsylvania by 0.3 points while the allegedly manipulative polls showed a pro-Trump lead of 0.8 points, enough that removing them could flip the state.
I also looked at these results for each pollster in Rosenberg’s list, comparing them to the average for that month, expressed as a difference in percentage points. For example, Beacon/Shaw’s polls showed Trump one point over Harris in July, whereas the average (with the “red wavers” removed) was 1.06 points of Trump over Harris. Beacon/Shaw was therefore 0.06 percentage points more favorable to Harris (rounded to 0.1).
Pollster | July | August | September | October | November | Total |
---|---|---|---|---|---|---|
Beacon/Shaw | 0.1% | -4.7% | -1.4% | -4.5% | — | -2.6% |
Cygnal | — | -3.1% | -1.3% | 0.7% | — | -1.3% |
Echelon Insights | -0.9% | -4.2% | 3.6% | -0.5% | — | -0.5% |
Echelon Insights/GBAO | — | — | -0.4% | — | — | -0.4% |
Emerson | -5.1% | 0.9% | 0.3% | -1.9% | -0.8% | -1.3% |
Fabrizio/GBAO | -0.9% | -2.2% | — | -5.5% | — | -2.9% |
Hart/POS | -0.9% | -5.7% | 1.6% | -3.5% | -1.0% | -1.9% |
McLaughlin | -0.9% | — | -3.5% | — | — | -2.2% |
Noble Predictive Insights | -2.9% | — | — | 0.0% | — | -1.5% |
OnMessage Inc. | — | — | — | -3.8% | — | -3.8% |
Quantus Insights | — | — | — | -2.5% | — | -2.5% |
Quantus Polls and News | — | -2.0% | — | — | — | -2.0% |
RMG Research | — | -3.4% | -0.9% | -1.2% | — | -1.8% |
Redfield & Wilton Strategies | — | -1.7% | -0.9% | -2.6% | — | -1.7% |
TIPP | — | -1.7% | 0.6% | -1.3% | -1.5% | -1.0% |
co/efficient | — | — | — | -0.5% | — | -0.5% |
All | -1.7% | -2.8% | -0.3% | -2.1% | -1.1% | -1.7% |
Difference between polling average and “red waver” pollsters, by pollster and date. Negative (red) percentages are more pro-Trump, positive (blue) percentages are more pro-Harris. Source: 538
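The per-pollster comparison above boils down to a subtraction of margins, which is easy to get sign-confused about. A small sketch, using the Beacon/Shaw numbers from the text (the function name is mine, not 538’s):

```python
# Difference between a pollster's margin and the monthly baseline average
# (the average with "red wavers" removed). Margins are Harris minus Trump
# in percentage points; a positive difference is more pro-Harris,
# a negative difference is more pro-Trump.
def pollster_difference(pollster_margin, baseline_average):
    return pollster_margin - baseline_average

# July example from the text: Beacon/Shaw had Trump +1 (margin -1.0),
# while the baseline average was Trump +1.06 (margin -1.06).
diff = pollster_difference(-1.0, -1.06)
print(round(diff, 1))  # 0.1 (more favorable to Harris)
```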
In an ideal world, I would look at this state by state, since national polling only tells you so much (and there’s more variation at the state level that bad actors could exploit). It would also be good to look at the pollsters’ numbers after 538 makes its adjustments, since 538 does account for past bias. There’s actually a column in 538’s released data that might contain the adjusted values, but I wasn’t able to find documentation clarifying its meaning.
I’m also heavily relying on 538’s data. While it would probably be best to have some outside data for the sake of comparison (particularly for rating polls), the exact question is how these pollsters affect aggregators and modelers like 538, so it’s appropriate in that way. Whatever you think of 538, they do deserve credit for releasing their input data (and much of their output data, including the final aggregates and projections).
What Other Sources Are Saying
538 didn’t look at this question directly, but they did examine the state of polling during the 2024 election. Their takeaway was that the quality of polling has improved while the quantity has decreased, the opposite of what the “red waver” theory would suggest. An earlier 538 article did address the question more directly, with an analysis similar to mine and similar results. They’re somewhat more skeptical, placing the effect in the larger context of polling variability:
In most places, the pollsters in question are indeed more pro-Trump than other pollsters. However, this has just a mild effect on our averages, moving them toward Trump by just 0.3 points on average. (The biggest difference is in Pennsylvania, where our published average gives Harris a 0.1-point lead over Trump, but the nonpartisan average gives her a 0.9-point edge.) That’s not a significant difference in a world where the average polling error in presidential elections is 4.3 points, and it’s small enough that it could easily be attributed to sampling error or some methodological factor other than partisan bias. As a point of comparison, our averages regularly move by 0.1-0.3 points on a daily basis, and we don’t recommend that anyone read into those shifts.
To be fair to them, they frequently bring this up alongside coverage of their averages, so these points about small shifts are not something they pull out only when people question them.
Electoral-Vote.com looked at the role of pollsters’ choices and, as part of that analysis, produced a table showing each pollster’s average results. They found pollsters varied by seven points. Since this covers a span of time, some of that spread is genuine movement in the race. Still, while this leaves open the possibility that manipulation accounts for some of it, I think their explanation, that pollsters necessarily make many judgment calls that push results in different directions, covers most of the variation. In their analysis, two of the pollsters flagged as “red wavers,” TIPP and RMG, tend to give Harris an advantage of around 2 percent. That’s smaller than some pollsters show, but in line with many.
My Takeaways
I think there’s a reasonable case that bad-faith pollsters have affected perceptions at certain moments during the election. For example, it’s theoretically possible that a campaign would operate differently based on a small change in a swing state’s polling, and that could affect a few thousand votes in a state with margins of tens of thousands of votes.
Still, the overall picture wouldn’t be that much different. If the election turns out not to be particularly close, it may not matter at all. Of course, polling aggregators should still take the issue seriously as they reflect on their results and make any methodological changes.
One thing I absolutely agree with (and virtually everyone on this issue seems to agree with, except perhaps the pollsters accused of operating in bad faith) is that it’s important not to overemphasize small differences in polling. Not only does that give undue power to individual pollsters, it doesn’t help inform your audience. Even in a world with only scrupulous pollsters, partisan demagogues could easily cherry-pick polls that present the narrative they like. A reduced emphasis, and a better sense of what a tiny lead in the polls actually means, would partly inoculate the public against this manipulation.
I will join the chorus of every media critic and say it’s also important not to overemphasize polling in general. I think people will always be interested in the current status of the race, and it’s reasonable to address that curiosity with polling aggregates and analysis. However, it’s far from the most important task media has in this election, and it’s important to keep in mind the limits of polls. Every election has consequences, and the primary role of media in a democracy during an election is to communicate those consequences.