The RAND American Life Panel (ALP) offers a unique perspective on voter intent as we near the 2012 presidential election.
The Election Forecast provides our best forecast of the popular vote based on the responses that panelists provided in the past week. The gray band indicates whether the difference between the estimates for the two candidates is statistically significant. If the lines for Obama and Romney lie outside the gray band, then we can say with at least 95-percent confidence that one candidate would win the election if citizens vote on election day as they now anticipate. If the lines fall within the gray band, the observed differences may be due to chance.
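The logic behind the gray band can be sketched as follows. This is purely illustrative: the function name and sample size are our inventions, and the standard error here assumes simple random sampling, whereas the actual ALP calculation accounts for the survey weights.

```python
import math

def significant_lead(p_a, p_b, n, z=1.96):
    """Is the gap between two estimated vote shares from the same
    sample significant at the 95% level? (Illustrative sketch only.)"""
    # The two shares come from one sample, so they are negatively
    # correlated; the covariance term is added, not subtracted.
    se = math.sqrt((p_a * (1 - p_a) + p_b * (1 - p_b)
                    + 2 * p_a * p_b) / n)
    return abs(p_a - p_b) > z * se

# A 49-45 split among ~3,500 respondents clears the bar; 48-46 does not.
print(significant_lead(0.49, 0.45, 3500))
print(significant_lead(0.48, 0.46, 3500))
```

With roughly equal shares near 50 percent, a sample of this size needs a gap of a bit over three percentage points before the lines escape the band.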
It is also important to note that the predictions combine the percent chance of voting for a candidate with the percent chance that a respondent will actually vote. For example, if someone reports a 50-percent chance of actually voting, then that person's answer to the question of whom they will vote for gets a “weight” of 50 percent in the calculation of our prediction (and is further weighted by the percent chance that they say they will vote for their chosen candidate).
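The two-way weighting described above can be written out in a few lines. This is a minimal sketch under our own naming and a hypothetical data layout, not the ALP's actual code:

```python
def forecast_share(responses):
    """Forecast candidate A's share of the expected vote.

    Each response is a pair (percent chance of voting, percent chance
    of voting for candidate A), matching the two probabilities the
    survey elicits. Hypothetical sketch, not the ALP's implementation."""
    # A respondent's contribution is P(votes) * P(votes for A), so a
    # 50-percent-likely voter carries half the weight of a sure voter.
    votes_a = sum(pv / 100 * pa / 100 for pv, pa in responses)
    expected_turnout = sum(pv / 100 for pv, _ in responses)
    return votes_a / expected_turnout

# Three toy respondents with differing turnout and candidate chances.
share = forecast_share([(50, 80), (100, 40), (90, 55)])
```

The division by expected turnout expresses the forecast as a share of the votes actually expected to be cast, so unlikely voters dilute their own influence rather than the total.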
This graph presents the panelists' forecast of who will win the election. This need not be the same as the candidate one votes for: one can vote for one candidate but still think that the other candidate will win.
This graph shows how many panelists shifted their vote from Obama to Romney, or vice versa, in the past week. These are generally small percentages, which makes this particular graph rather “noisy”.
The next graph shows the “intention to vote” by candidate preference. Supporters of one candidate may be more likely to vote than supporters of the other, and the graph shows how this differs between Obama supporters and Romney supporters. Recall that, as explained above, the weekly poll takes these differences in voting intentions into account.
The fifth graph shows one of the first three graphs explained above, but broken out by respondent characteristics. Each day we present the data broken out by a different characteristic, such as age, education, income, position in the labor market, race, sex, and state (Pennsylvania, Florida, Ohio). Anyone interested in knowing more about the results broken down by respondent characteristics, or about other ways the survey data can be accessed and used, is invited to contact Krishna Kumar, director of RAND Labor and Population, at firstname.lastname@example.org.
First, the panel design allows us to ask the same people for their opinion repeatedly over time. Compared with most polls, this leads to much more stable outcomes; changes that we see are true changes in people's opinions, not the result of random fluctuations in who gets asked the questions.
Second, we may more accurately capture the likely votes of a greater number of voters in the crucial “middle” (i.e., not closely aligned with either candidate) by allowing respondents to assign their own numerical probability (or percent chance) to both the likelihood that they will vote and the likelihood that they will vote for a particular candidate. By comparison, traditional polls may not fully capture the intentions of these voters because they rely on less precise qualitative categories (such as “somewhat likely” and “somewhat unlikely”) when asking respondents to indicate for whom they may vote and how likely they are to vote.
Since July 5, 3,500 participants in the RAND American Life Panel (all U.S. citizens over the age of 18) have been invited to answer three questions every week:
Every day, one-seventh of the panel members receive an email inviting them to answer the three questions above within a week. So one-seventh of the panel members always get their email on Monday, one-seventh always on Tuesday, and so on. In total about 3,500 panel members participate, so every day about 500 panel members get an email inviting them to “vote.”
By asking respondents for probabilities (i.e., “What is the percent chance...”), we acknowledge the fact that respondents may not be 100-percent sure who they will vote for or whether they will even vote. We use both of these probabilities to weight their responses in the final poll data. This approach was developed by Adeline Delavande (University of Essex, United Kingdom) and Charles Manski (Northwestern University).
As with nearly all survey data, we weight our sample to best match the population of interest (on factors such as sex, age category, race-ethnicity, education, household size, and family income). Based on the premise that the best predictor of future voting behavior is past voting behavior, we also reweight each daily sample separately so that its 2008 voting behavior matches the known 2008 voting behavior of the population.
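A one-variable version of this reweighting looks like the sketch below. Real survey weighting adjusts over several factors at once (often by raking), and the target shares here are made-up numbers, not the actual 2008 results:

```python
def reweight(sample, target_shares):
    """Rescale base weights so the sample's distribution over one
    characteristic (here: reported 2008 vote) matches known population
    shares. One-variable sketch with hypothetical targets."""
    # Current weighted total for each 2008-vote category.
    totals = {}
    for category, weight in sample:
        totals[category] = totals.get(category, 0.0) + weight
    grand_total = sum(totals.values())
    # Multiply each respondent's weight by (target share / sample share).
    return [(category, weight * target_shares[category] * grand_total
             / totals[category])
            for category, weight in sample]

# Four equally weighted respondents; Obama voters are overrepresented
# relative to the (hypothetical) population targets, so their weights shrink.
sample = [("obama", 1.0), ("obama", 1.0), ("mccain", 1.0), ("nonvoter", 1.0)]
adjusted = reweight(sample, {"obama": 0.35, "mccain": 0.30, "nonvoter": 0.35})
```

Note that the total weight is preserved; only its distribution across the 2008-vote categories changes.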
The results are updated every day (to be precise, at 1 a.m. PDT), and each daily figure refers to the average over the week ending on that particular day. In other words, each data point in the graph is a rolling tabulation of that day's responses together with those from the previous six days. It is important to keep this in mind when comparing the numbers from one day to the next, since in that time only one-seventh of the respondents would have had an opportunity to change their answer.
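The rolling seven-day window can be sketched as follows, assuming a hypothetical layout in which each day's responses are stored under their calendar date (the function name and data shape are our assumptions):

```python
from datetime import date, timedelta

def seven_day_pool(daily_responses, day):
    """Pool the responses from the seven days ending on `day`,
    mirroring the rolling tabulation described above."""
    window = [day - timedelta(days=k) for k in range(7)]
    pooled = []
    for d in window:
        # Days with no responses recorded simply contribute nothing.
        pooled.extend(daily_responses.get(d, []))
    return pooled

# Eight days of toy data; the window ending Nov 5 drops the oldest day.
data = {date(2012, 10, 29) + timedelta(days=k): [k] for k in range(8)}
pooled = seven_day_pool(data, date(2012, 11, 5))
```

Moving the window forward one day swaps out a single day's respondents, which is why day-to-day changes in the published numbers are necessarily gradual.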
The microdata on which the forecasts are based can be found here.
For more information, contact Krishna Kumar, director of RAND Labor and Population, at email@example.com.