
September 28, 2004

skepticism about surveys

I keep encountering new reasons to be skeptical about survey results. The moral is not to dismiss everything you read that's based on polling, but to use surveys very carefully.

1. We looked at the percentage of young people who said they were registered to vote at several points during each of the last four presidential campaigns. The results show lots of up-and-down movement, including quick declines of as much as 10 points. This doesn't make sense, since people register during the campaign season and don't lose or drop their registration in large numbers. Furthermore, self-reported registration rates in any given month do not predict turnout in November--at all. For example, self-reported registration rates were consistently the highest in 1996, the year when we saw the lowest youth turnout ever. September of 2000 looked terrible, but then the registration rate rose to the highest ever recorded that November--even though actual turnout was poor that year. The registration number seems to move randomly and isn't meaningful.

Since all election polls use registration questions to screen voters, this finding should make one skeptical of horse-race polls.

2. Some states (e.g., Michigan and Minnesota) collect hard data about voters, such as their ages. In these states, one can compare the demographics of the actual voters against exit poll data. We have found striking discrepancies in past years. Presumably, problems arise because people are not equally likely to participate in exit polls, and many now vote absentee.

3. When pollsters call random phone numbers, in theory they should reach a representative sample of Americans. In fact, as I know from bitter personal experience, they tend to reach samples whose demographics differ greatly from the Census Bureau's--and not in predictable ways. Therefore, pollsters almost always "weight" their samples. If they reach half as many African American men as they should, then each Black man in the sample counts for two. But there are huge questions about which variables one should weight, and by how much.
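To make that concrete, here is a minimal sketch of the kind of weighting described above. All of the group names and percentages are invented for illustration; they are not from any actual poll or Census table.

```python
# Minimal sketch of demographic weighting (all numbers are hypothetical).

# Share of each group in the target population (e.g., from Census figures)
population_share = {
    "black_men": 0.06,
    "black_women": 0.07,
    "white_men": 0.40,
    "white_women": 0.47,
}

# Share of each group among the respondents the pollster actually reached
sample_share = {
    "black_men": 0.03,   # only half as many reached as the population warrants
    "black_women": 0.07,
    "white_men": 0.42,
    "white_women": 0.48,
}

# Each respondent's weight is the ratio of population share to sample share:
# underrepresented groups count for more, overrepresented groups for less.
weights = {
    group: population_share[group] / sample_share[group]
    for group in population_share
}

for group, w in weights.items():
    print(f"{group}: weight = {w:.2f}")
# black_men: weight = 2.00  -> each such respondent counts roughly twice
```

The open questions in the text are exactly the ones this sketch glosses over: which variables to include in the weighting scheme, and what population shares to weight toward.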

I put more faith in trends than in snapshots. For example, I'm very skeptical of claims like "Bush has 52% of the vote," because they rest on calculations about who is registered; but I'm more persuaded by a finding that Kerry has lost three points since the last Gallup poll. However, even an apparently identical survey does not give you comparable results if the sample is weighted differently each time.

One can improve the quality of a survey by spending the time and money necessary to reach a high proportion of the people on the original, random list of phone numbers; or, even better, by supplementing phone calls with home visits. Such efforts show up as a high "response rate," as we see in Census surveys. But the response rates of other polls are rarely disclosed and vary enormously. Many respectable firms have disturbingly low response rates. The lesson, I think, is to distinguish the few solid polls from the many dubious ones, and to pay attention only to the former.
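For readers unfamiliar with the term, a response rate (in its simplest form) is just the share of the originally sampled numbers that yields a completed interview; the counts below are hypothetical, and real firms use more elaborate formulas that adjust for ineligible numbers.

```python
# Simplified response-rate calculation (hypothetical counts).
eligible_numbers_sampled = 2000   # valid phone numbers drawn at random
completed_interviews = 500        # people actually reached and interviewed

response_rate = completed_interviews / eligible_numbers_sampled
print(f"Response rate: {response_rate:.0%}")  # 25%
```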

Posted by peterlevine at 11:17 AM
