Comparison with Other Data Sources and Alternative Interview Modes
Estimates of Internet access in the U.S. are on the order of 75%. Thus, even though the ALP provides Internet access to respondents who previously lacked it, this group is underrepresented. In itself it is not unusual for different groups in the population to exhibit different response rates; the main question is whether one can correct for differential response rates by reweighting. Couper, Kapteyn, Schonlau, and Winter (2007) find that, conditional on Internet use, both the stated willingness to participate in an Internet survey and actual participation are only weakly linked to individual characteristics. This suggests that once we weight on individual characteristics, the resulting sample, combining individuals with and without prior Internet access, will be representative of the population. This can be further investigated by comparing weighted frequencies in the ALP with external benchmarks.
For each ALP dataset we provide weights. Figures 1-3 compare weighted ALP variables with weighted CPS variables (CPS March 2014) for males, females, and by number of household members; the ALP sample here is the set of active members of the American Life Panel as of September 2014 who are considered randomly sampled. The weights are calculated using a raking algorithm, as explained in the weighting section. Figure 4 compares the ALP and the CPS on variables of interest that are present in both surveys but were not used for weighting.
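The raking procedure behind such weights can be sketched as iterative proportional fitting: sample weights are repeatedly rescaled so that the weighted marginal distribution of each demographic dimension matches a population benchmark (e.g., from the CPS). The sketch below is illustrative only; the function name, category labels, and target shares are hypothetical and do not reflect the ALP's actual implementation.

```python
import numpy as np

def rake(categories, targets, n_iter=100):
    """Illustrative raking (iterative proportional fitting).
    categories: dict mapping dimension name -> list of category codes,
                one entry per respondent (hypothetical example data).
    targets:    dict mapping dimension name -> {category: population share}.
    Returns per-respondent weights whose weighted marginals match the
    target shares on every dimension."""
    arrays = {d: np.asarray(c) for d, c in categories.items()}
    n = len(next(iter(arrays.values())))
    w = np.ones(n)
    for _ in range(n_iter):
        for dim, codes in arrays.items():
            total = w.sum()
            for cat, share in targets[dim].items():
                mask = codes == cat
                cur = w[mask].sum()
                if cur > 0:
                    # Scale this category so its weighted share
                    # equals the population benchmark.
                    w[mask] *= share * total / cur
    return w

# Hypothetical toy sample in which males are overrepresented
# relative to the benchmark shares.
cats = {"sex": ["M", "M", "M", "M", "F", "F"],
        "age": ["Y", "Y", "O", "Y", "O", "O"]}
tgts = {"sex": {"M": 0.5, "F": 0.5},
        "age": {"Y": 0.4, "O": 0.6}}
w = rake(cats, tgts)
```

Each pass rescales one dimension at a time, slightly disturbing the dimensions matched earlier; iterating until the adjustments stabilize yields weights consistent with all benchmarks simultaneously, provided the targets are mutually compatible.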
The recruiting method is based on random digit dialing (RDD). In that respect the ALP is no different from most telephone surveys (including, of course, the Michigan Monthly Survey). Alternative interview modes are in-person and mail. Although in theory both can provide 100% coverage of the U.S. population, each has obvious drawbacks. Mail surveys tend to have low response rates. In-person interviews can generate high response rates, as the Health and Retirement Study (HRS) illustrates, but they are also very costly and slow to implement, even with a large team of interviewers.
Different modes have different characteristics that make them more or less desirable. Interviewer-driven modes (telephone and in-person) may lead to social desirability bias (e.g., Holbrook, Green, and Krosnick, 2003), while auditory modes tend to produce potentially strong primacy and recency effects (e.g., Schwarz, 2005). When it comes to collecting factual information, mode effects generally appear to be mild; this has been demonstrated by the HRS, which uses a mixture of modes (in-person, telephone, mail, and more recently Internet). In Kapteyn and Van Soest (2009) we show this to be the case for the measurement of wealth in the HRS by comparing wealth measured in the core interview (a telephone interview) with wealth measured over the Internet, while also pointing to the importance of question format and question order.
Figure 1: Comparison of weighted frequencies in ALP and CPS, Males
Figure 2: Comparison of weighted frequencies in ALP and CPS, Females
Figure 3: Comparison of weighted frequencies in ALP and CPS, By number of household members
Figure 4: Comparison of weighted frequencies in ALP and CPS, for variables not matched in the weighting algorithm