Government, Probability

Predicting the 2022 Midterm Elections with Cook Race Ratings

Today is Election Day in the United States and I thought it would be fun to dip my toes into election forecasting. I’ll build a very simple model based on race ratings published by The Cook Political Report. I’ll check how accurate Cook’s ratings have been in the past and use that information to predict the 2022 Congressional elections.

We can take all the ratings and boil them down to a single number—the number everyone cares about—how many seats each party will win.


1. Check Cook’s Historical Accuracy.

There’s a dataset available on Kaggle that will make things easy for us. It doesn’t cover every rating, but it holds more than 2,000 of them, which should give a reasonable estimate of Cook’s track record.

Start by reading the dataset with pandas and dropping any irrelevant bits.

import pandas as pd

df = pd.read_csv("2002-2018_house_election_ratings_results.csv")

df = df.dropna(subset=["cook_rating"])
df = df.drop(df[~df.winner.isin(["d", "r"])].index)
df = df.drop(df[df.cook_rating == "tossup"].index)

Ratings are absent pre-2008, so the dropna clears those rows. There are a couple of races involving Independent candidates, but not enough to meaningfully affect the measurement. We’ll also ignore any races classified as “tossup,” which we’ll later split 50-50.

With our dataset now trimmed to the relevant 2,427 rows, we’ll add a column recording whether each prediction came true.

def check_prediction(row):
    # Ratings look like "<confidence>-<party>", e.g. "likely-r"
    _, prediction = row.cook_rating.split("-")
    return row.winner == prediction


df.loc[:, "correct"] = df.apply(check_prediction, axis=1)

The ratings are coded as strings, e.g. “likely-r”, “lean-d”, etc. We’ll use an apply and pass each row through a function. This is generally an inefficient way of doing things in pandas, but with a dataset this small, readability outweighs the performance cost.
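As an aside, the same column can be built without apply using pandas’ vectorized string methods; a minimal sketch on a toy frame with made-up rows, assuming the same column formats:

```python
import pandas as pd

# Toy frame mirroring the two relevant columns (hypothetical rows)
df = pd.DataFrame({
    "cook_rating": ["solid-r", "lean-d", "likely-r"],
    "winner": ["r", "d", "d"],
})

# The predicted party is the text after the hyphen; compare it
# directly against the winner column, no row-wise apply needed.
predicted_party = df.cook_rating.str.split("-").str[-1]
df["correct"] = predicted_party == df.winner

print(df.correct.tolist())  # [True, True, False]
```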

Finally, check how often each confidence level (solid, likely, and lean) was correct.

df_solid = df[df.cook_rating.isin(["solid-d", "solid-r"])]
solid_accuracy = df_solid[df_solid.correct].shape[0] / df_solid.shape[0]

df_likely = df[df.cook_rating.isin(["likely-d", "likely-r"])]
likely_accuracy = df_likely[df_likely.correct].shape[0] / df_likely.shape[0]

df_lean = df[df.cook_rating.isin(["lean-d", "lean-r"])]
lean_accuracy = df_lean[df_lean.correct].shape[0] / df_lean.shape[0]

We find:

Solid:  99.95%
Likely: 98.47%
Lean:   92.35%

Cook seems to know what they’re doing! Of course it helps to simply toss all the close races into a “tossup” bin, but they do deserve some credit.
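For a sense of how firm those percentages are, a normal-approximation interval takes only a few lines; a sketch with hypothetical counts (the real ones would come from the bucket sizes, e.g. df_lean.shape[0]):

```python
from math import sqrt

def accuracy_interval(correct, total, z=1.96):
    """95% normal-approximation (Wald) interval for a hit rate."""
    p = correct / total
    half_width = z * sqrt(p * (1 - p) / total)
    return p - half_width, p + half_width

# Hypothetical bucket: 483 of 523 "lean" ratings correct (~92.4%)
lo, hi = accuracy_interval(483, 523)
print(f"{lo:.3f} to {hi:.3f}")
```

With a few hundred races per bucket, the intervals stay narrow enough that the solid/likely/lean ordering is not in doubt.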

2. Simulate the Election.

Now we can put these figures into action. Let’s boil the race ratings down to a single numerical prediction.

We’ll do this by generating a random number between 0 and 1 to represent each individual race. For example, “lean” ratings are correct about 92% of the time. So if the associated random number is, say, 0.57 (below 0.92), we count the race as going the way Cook predicted; draws above 0.92, about 8% of them, flip the result, matching the historical miss rate calculated above.

Repeat this process 10,000 times and see how it shakes out.

from random import random

def get_random_race_result(cook_prediction, thresholds_dict):
    level, party = cook_prediction.split("-")
    # Below the accuracy threshold: the rated party wins.
    if random() < thresholds_dict[level]:
        return party
    # Otherwise flip the seat to the other party.
    return {"r": "d", "d": "r"}[party]


accuracy_dict = {"solid": solid_accuracy,
                 "likely": likely_accuracy,
                 "lean": lean_accuracy}

cook_final_predictions = {"solid-d": 159,
                          "likely-d": 13,
                          "lean-d": 15,
                          "lean-r": 13,
                          "likely-r": 11,
                          "solid-r": 188}

simulated_output = []

for _ in range(10000):
    # 36 tossup races, split 50-50 as promised earlier
    house_seats = ["d"] * 18 + ["r"] * 18

    for rating, count in cook_final_predictions.items():
        for _ in range(count):
            house_seats.append(get_random_race_result(rating, accuracy_dict))

    simulated_output.append(house_seats.count("d"))

I want to mention an important caveat here, one that has gotten election forecasting into trouble in the past. We assume that every race is an independent random event, but in reality the errors tend to be correlated: if Republicans exceed expectations in Ohio, they’re likely to exceed them in Virginia as well. I’m happy to ignore this complexity and present my humble blog model for entertainment purposes only.
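One crude way to see how much correlation matters is to draw a single national “swing” per simulated election and shift every race’s threshold by it; a sketch, not part of the model above, with the swing scale chosen arbitrarily:

```python
from random import random, gauss

def get_correlated_race_result(cook_prediction, thresholds_dict, swing):
    """Like the independent version, but a per-simulation 'swing'
    pushes every race toward the same party at once."""
    level, party = cook_prediction.split("-")
    # A pro-R swing (positive) makes R-rated seats safer and
    # D-rated seats shakier, and vice versa.
    threshold = thresholds_dict[level] + (swing if party == "r" else -swing)
    if random() < threshold:
        return party
    return {"r": "d", "d": "r"}[party]

# One simulated election: draw a single swing, apply it to every race.
accuracy = {"solid": 0.9995, "likely": 0.9847, "lean": 0.9235}
swing = gauss(0, 0.02)  # arbitrary scale; a real model would fit this
results = [get_correlated_race_result("lean-d", accuracy, swing)
           for _ in range(15)]
```

Correlated errors widen the distribution of outcomes without moving its center much, which is exactly why independent-race models tend to look overconfident.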

With 10,000 simulations in hand, let’s check the average result and call it a day.

from numpy import mean

predicted_d_seats = mean(simulated_output)
predicted_r_seats = 435 - predicted_d_seats

The output:

D seats: 204.8
R seats: 230.2

So the most likely outcome, based on Cook’s historical accuracy, is Republicans winning a 230-205 majority. We say nothing about the margin of error on these predictions and that’s okay.
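That said, the 10,000 simulated seat counts already contain a spread if you want one; a sketch of pulling percentiles from them, using a stand-in distribution here since simulated_output lives in the simulation above:

```python
import numpy as np

# Stand-in for the 10,000 simulated D-seat counts produced above;
# the shape and center are invented for illustration.
rng = np.random.default_rng(0)
sims = rng.binomial(n=399, p=0.465, size=10_000) + 18

low, mid, high = np.percentile(sims, [5, 50, 95])
print(f"90% of simulations fall between {low:.0f} and {high:.0f} D seats")
```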

Repeating the same process for Senate elections (using this dataset), the model finds Republicans most likely to win a 51-49 majority.


Update (December 7th, 2022): After the Georgia Senate Runoff Election we now have final results for both chambers. Republicans won a 222-213 majority in the House, and Democrats won 51-49 in the Senate. So our model did pretty well in the House by predicting a modest GOP victory. However, it overestimated Republican gains in the Senate by two seats—enough to swing majority control.

Full code:

import pandas as pd
from random import random
from numpy import mean


def check_prediction(row):
    # Ratings look like "<confidence>-<party>", e.g. "likely-r"
    _, prediction = row.cook_rating.split("-")
    return row.winner == prediction


def get_random_race_result(cook_prediction, thresholds_dict):
    level, party = cook_prediction.split("-")
    # Below the accuracy threshold: the rated party wins.
    if random() < thresholds_dict[level]:
        return party
    # Otherwise flip the seat to the other party.
    return {"r": "d", "d": "r"}[party]


df = pd.read_csv("2002-2018_house_election_ratings_results.csv")

df = df.dropna(subset=["cook_rating"])
df = df.drop(df[~df.winner.isin(["d", "r"])].index)
df = df.drop(df[df.cook_rating == "tossup"].index)

df.loc[:, "correct"] = df.apply(check_prediction, axis=1)

df_solid = df[df.cook_rating.isin(["solid-d", "solid-r"])]
solid_accuracy = df_solid[df_solid.correct].shape[0] / df_solid.shape[0]

df_likely = df[df.cook_rating.isin(["likely-d", "likely-r"])]
likely_accuracy = df_likely[df_likely.correct].shape[0] / df_likely.shape[0]

df_lean = df[df.cook_rating.isin(["lean-d", "lean-r"])]
lean_accuracy = df_lean[df_lean.correct].shape[0] / df_lean.shape[0]

print(f"Solid:  {solid_accuracy * 100:.2f}%")
print(f"Likely: {likely_accuracy * 100:.2f}%")
print(f"Lean:   {lean_accuracy * 100:.2f}%")

accuracy_dict = {"solid": solid_accuracy,
                 "likely": likely_accuracy,
                 "lean": lean_accuracy}

cook_final_predictions = {"solid-d": 159,
                          "likely-d": 13,
                          "lean-d": 15,
                          "lean-r": 13,
                          "likely-r": 11,
                          "solid-r": 188}

simulated_output = []

for _ in range(10000):
    # 36 tossup races, split 50-50
    house_seats = ["d"] * 18 + ["r"] * 18

    for rating, count in cook_final_predictions.items():
        for _ in range(count):
            house_seats.append(get_random_race_result(rating, accuracy_dict))

    simulated_output.append(house_seats.count("d"))

predicted_d_seats = mean(simulated_output)
predicted_r_seats = 435 - predicted_d_seats

print(f"D seats: {predicted_d_seats:.1f}")
print(f"R seats: {predicted_r_seats:.1f}")