Election modeling has been a serious racket ever since Nate Silver, founder of FiveThirtyEight and now working independently, correctly predicted every state in the 2012 presidential election.
Since then, with the company he founded sold to ABC, which still operates the FiveThirtyEight brand sans Silver, modeling an election has become a lot more difficult. For starters, the arrival of Donald Trump on the scene thoroughly scrambled and rewrote the typical assumptions about voting coalitions and demographics.
In 2024, Silver’s model went back and forth throughout August, September, and October, eventually landing at a 50/50 split on Election Day. A coin-flipping monkey would’ve produced the same result with fewer words:
This morning's update: Welp. https://t.co/vsGVG189Sa pic.twitter.com/dQocuD763I
— Nate Silver (@NateSilver538) November 4, 2024
While early voting analysis pointed to a likely Trump victory, Silver’s model was built on polls that in many cases proved wildly inaccurate. To make matters worse, Silver put his finger on the scale by labeling some polls “high quality” and giving them stronger weighting in the model. That turned out to be a bad decision, since the polls given normal weight tended to be more accurate on Election Day.
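To see how that kind of thumb-on-the-scale weighting can move a polling average, consider a minimal sketch; the margins and weights below are invented for illustration and are not Silver’s actual model, weights, or data:

```python
# Toy illustration of weighted poll averaging -- invented numbers,
# not Nate Silver's actual model, weights, or data.

# Each poll: (Trump margin in points, weight). A "high quality" label
# here simply means the poll gets a larger weight in the average.
polls = [
    (-1.0, 2.0),  # "high quality" poll, Harris +1, double weight
    (-0.5, 2.0),  # "high quality" poll, Harris +0.5, double weight
    ( 2.0, 1.0),  # normal-weight poll, Trump +2
    ( 1.5, 1.0),  # normal-weight poll, Trump +1.5
]

weighted = sum(m * w for m, w in polls) / sum(w for _, w in polls)
unweighted = sum(m for m, _ in polls) / len(polls)

print(f"Weighted average:   Trump {weighted:+.2f}")    # roughly Trump +0.08
print(f"Unweighted average: Trump {unweighted:+.2f}")  # roughly Trump +0.50
```

In this toy example, upweighting the Harris-friendly polls pulls a clear Trump edge down to a near coin flip, which is exactly the kind of effect critics of the “high quality” designation complained about.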
Critics of the “coin toss” result were not forgiving:
Nate Silver: “I’m the best statistician in the world. I am so smart.”
Everyone: “Who’s going to win the election?”
Nate Silver: “Um, it’s a coin toss” https://t.co/ws1GBNh5Cv
— Jason Andrews (@BasedHypnotist) November 6, 2024
Heading into election night, Silver posted a new model meant to predict the eventual winner based on reported results. Throughout the night, the model showed Harris as the likely winner even as Trump put points on the board and became the clear favorite. The public embarrassment led Silver to take the model down, as Newsweek reports:
In a post on Silver Bulletin, Silver wrote: “We are taking the model down for two reasons. One, it isn’t capturing the story of this election night well. It’s based only on called states and the timing of those calls. So far, all the calls have been predictable. But no swing states have been called and there is a lot of information it doesn’t capture, information that is mostly good for Donald Trump and bad for Kamala Harris—not the 50/50 race the ‘called’ states might imply. Something like The New York Times needle is a much better product.”
At least he admitted the election night model was broken, but that doesn’t excuse clinging to an obviously flawed way of modeling races on fickle polling. It’s the old garbage-in, garbage-out rule, and a majority of election polls are garbage.
The obvious reality leading up to Election Day was that the race was anything but a coin toss. Donald Trump clearly held a commanding lead, and all the “high quality” polls from the New York Times, with their overly rosy numbers, couldn’t save Harris.
Relying on polls to model election outcomes seems to have gone by the wayside in 2024. With so many states voting early, and a subset of those states publishing early voting data by party registration, it’s now easier to build a model on actual voter behavior rather than on potential voter behavior.
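As a rough illustration of the idea, here is a minimal sketch of projecting a margin from early ballots broken out by party registration; the ballot counts and assumed support rates are hypothetical placeholders, and real early-vote models presumably layer far more on top of this basic arithmetic:

```python
# Toy projection from early-vote party registration data -- all figures
# and assumed support rates are hypothetical, not any analyst's actual model.

# Early ballots returned, by party registration (hypothetical counts).
early = {"REP": 420_000, "DEM": 380_000, "OTH": 200_000}

# Assumed share of each registration group voting for the Republican candidate.
gop_support = {"REP": 0.93, "DEM": 0.06, "OTH": 0.55}

gop_votes = sum(early[g] * gop_support[g] for g in early)
total = sum(early.values())
margin = (gop_votes - (total - gop_votes)) / total * 100

print(f"Projected early-vote margin: R {margin:+.1f} points")  # R +4.7 in this toy case
```

The appeal of this approach is that the inputs are actual returned ballots rather than survey responses, though the support-rate assumptions still have to come from somewhere.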
In Virginia, for example, Christian Heiens successfully called the state within half a percent of the final margin by modeling early voting behavior. Keep in mind that Virginia doesn’t register voters by party, making this modeled result all the more impressive:
The final margin in Virginia this year was Harris by 5.21%.
My model was ultimately off by 0.46%. https://t.co/f3K5gJzJnE pic.twitter.com/vdt9GL6oqY
— Christian Heiens 🏛 (@ChristianHeiens) November 7, 2024
Other data crunchers did similar modeling for Nevada and Arizona, and it showed a clear Trump advantage heading into Election Day. Those estimates turned out to be entirely correct, as he comfortably won both states.
Meanwhile, as the early vote models were producing projections that wound up almost entirely correct, one noted analyst was warning against even looking at the early voting trends:
OK, a lot of people have been asking for this one.
Just Say No to analysis of early voting. It probably won't help you to make better predictions. But you may fool yourself. Here's why. https://t.co/YCusV6z525
— Nate Silver (@NateSilver538) October 30, 2024
It was early voting analysis that correctly called a Trump victory, while Silver’s model, built on public polls, was little more than a three-month experiment in coin toss theory.
Does this mean early vote modeling will always be correct? Of course not. It does mean that early voting data deserves at least as much weight as, if not more than, traditional election modeling built on subjectively weighted polls.