Editor’s note: This story is one in a series, #PolicyForward, that spotlights how faculty, students and alumni at the Harris School of Public Policy are driving impact for the next generation. Leading up to the May 3 grand opening of Harris’ new home at the Keller Center, these stories will examine three of the most critical issues facing our world: strengthening democracy, fighting poverty and inequality, and combating climate change.

One week before last year’s midterm elections, Emily Krone, MPP Class of 2019, turned in her final Science of Elections and Campaigns course assignment: a statistical model that allowed her to predict the outcome of the 2018 senatorial elections more than a full point better than the average national poll. 

When she first received the assignment from her instructor, Alexander Fouirnaies, Krone was excited by the challenge. 

“I follow politics very closely and listen to Nate Silver's FiveThirtyEight Politics podcast where this is exactly what they do,” she said. “They spend months out of the year building state-by-state models in the House and the Senate…so I feel like I had picked up some tips and tricks, and I just knew what to think about as I was building my model.”

A record 49% of eligible voters turned out to vote in the 2018 midterms, according to CBS News. That heightened interest in national politics has also led the public to pay closer attention to election polling and prediction models as a way to feel more engaged and informed. 

Nate Silver's FiveThirtyEight is known for its data-driven election predictions.

On the same day his students submitted their assignments, Fouirnaies had the TA for his class download Silver’s predictions. 

“On average, Nate Silver had an error of 2.1% and Emily had an error of 3.2%,” Fouirnaies said. 

National polls were, on average, biased toward Democrats in the midterms, according to The New York Times’s Upshot. And during the final three weeks leading up to the 2018 midterm election, the average Senate poll missed the actual outcome by 4.3 percentage points.
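The article does not spell out how these error figures were computed, but a common way to score a forecast is its mean absolute error: the average gap, in percentage points, between predicted and actual vote margins across the races being forecast. A minimal sketch, using made-up margins purely for illustration:

```python
def mean_absolute_error(predicted, actual):
    """Average absolute gap between predicted and actual margins, in points."""
    if len(predicted) != len(actual):
        raise ValueError("predicted and actual must be the same length")
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Hypothetical Senate-race margins (percentage points, Democrat minus
# Republican); these numbers are illustrative, not the 2018 results.
predicted_margins = [5.0, -2.0, 10.0, 1.5]
actual_margins = [3.0, -4.0, 12.0, 4.5]

print(mean_absolute_error(predicted_margins, actual_margins))  # 2.25
```

By this kind of measure, a 2.1-point average error (Silver) edges out a 3.2-point error (Krone), and both beat the 4.3-point average miss of the final Senate polls.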

On February 7 at the University of Chicago Harris School of Public Policy’s Keller Center, Silver, AB’00, and Krone were able to connect for a meeting of statistical minds. The occasion was a well-attended conversation between Silver and Austan D. Goolsbee, the Robert P. Gwinn Professor of Economics at the University of Chicago Booth School of Business, on the merits of election prediction heading into the 2020 election. In the fallout from the 2016 election, the polling community at large had taken considerable flak for failing to predict Trump’s win. 

Emily Krone, Kaylen Ralph, and Nate Silver discuss the perils of election prediction.

FiveThirtyEight’s 2016 prediction model was more accurate than most. Their final forecast, issued the evening of the 2016 election, had Trump with a 29 percent chance of winning the Electoral College (just below the 30 percent chance Trump’s own campaign predicted he had, as reported by FiveThirtyEight). “By comparison, other models tracked by The New York Times put Trump’s odds at: 15 percent, 8 percent, 2 percent and less than 1 percent,” according to Silver’s post-mortem election analysis.

When they were able to touch base on February 7, Krone wanted to know — was Silver making any changes to his prediction model in light of the 2016 outcome?

“Not a lot, because we think that we have a really good model, and it was a model that gave Trump a much higher chance than other models did,” Silver said. “I mean, every year, we go back and look at every single piece of code, because you always get smarter, and hopefully you become wiser after four years, and you realize, ‘Oh, here is a technique I think I was using before that I can make a little better.’ But philosophically, [there probably won’t be] too many changes.”

Krone asked Silver what he thought has changed the most regarding the relationship between data and politics since he first successfully predicted the outcome of the presidential election in 2008.

“Number one, the extent to which election forecasting, as opposed to polling itself has become a central part of the discourse around campaigns,” Silver said. “Which in some ways is not what you want, right? In some ways you want to vacuum seal the campaign and say, ‘Okay, I'm just looking at it from the outside, and nothing I could possibly do would affect the discourse around the campaign, would affect campaign strategies.’ We're just scientist[s or reporters] trying to understand the world. So the fact now that [predictions] affect perception of the campaign – that I'm influential in some ways – is something that you don't want necessarily.”

Krone herself is cognizant of the need for responsible polling, especially as the general voting public gives such figures ever greater weight. 

“It's always great to have data out there, but you can twist the numbers any way you want to make them say what you want to say,” Krone said. “So, to have a robust understanding of how that's done, and to make sure that it's done in the most scientifically sound way possible — not biased — is important. And it's important to help people who don't have a policy degree or an economics degree to understand what goes into that.”

Silver agreed that it falls primarily to journalists and scholars to distill and digest data that lies beyond most laypeople’s statistical training. 

“I think people should have responsibility to be numerate so they understand what a probability means,” Silver said. “And you can coach and nudge them and whatever else, but that takes a lot of work. At the same time, you have to make sure you're communicating clearly. You have to understand that people are going to engage with your forecast, or your article, or your interactive, in varying levels of detail, meaning if someone is looking at it for only five seconds, or 20 seconds, what's the impression that they get from it? What kind of headlines do you have? What kind of social media campaign do you have around it?”

Despite her love of politics, Krone said she doesn’t have an interest in taking her statistical talents to D.C. She’s currently working as a research assistant at the University of Chicago Urban Labs, where since October she’s assisted on the Supportive Release Center project, a randomized controlled trial aimed at helping individuals with mental illness transition to services in their communities after their release from Cook County Jail.

“The only definite thing I have in my future is I want to stay in Chicago,” Krone said. “As a research assistant, I’m testing out policies in the Chicago setting and doing [rigorous] trials of those policies in a space where it's not a lawmaker [who] makes something, and then five years later, we figure [out] how it affected people. I’m on the ground getting results pretty much immediately on how these policies are working and seeing how we can scale them up to all of Chicago or implement them in different cities and different contexts.”