Sometimes, people respond in strange ways to survey questions.
For a recent project with Jim Stimson and Elizabeth Coggins, I spent a fair amount of time analyzing data from the Cooperative Congressional Election Study (CCES). Here’s a fun nugget from my exploration: a sizable proportion (21 percent) of respondents both support and oppose Obamacare. Simultaneously.
We can speculate wildly about why a fifth of respondents — in a sample that is disproportionately educated and interested in politics! — would give such a puzzling answer.
But in a bigger sense, surveys, as useful as they are, offer highly artificial settings in which respondents give answers. Not attitudes, nor opinions, nor preferences per se, just answers. We should keep that in mind before reading too much into public opinion reports.
Part of the CCES is a set of “roll call” votes. These present respondents with a policy position and require a simple yea/nay answer. Two of these questions ask about the Affordable Care Act: one asks respondents to vote for or against Obamacare, and the second asks them to vote for or against repealing it.
There is a logical connection between these two questions. In general, someone who wants to repeal the law would probably not vote for it; and those who want to keep the law around should vote for it to begin with.
Generally that works… but as the “jitter” plot above shows, it doesn’t work that way for everyone. Each dot on the figure represents a single respondent. (I like imagining that I’m assigning people to stand in a corner of the room depending on their answers to questions. Maybe I have a power complex…) There are clearly a good number of respondents in two of the quadrants: those who support Obamacare and want to keep it, and those who oppose Obamacare and want to repeal it. Makes sense.
But who are the respondents in the other two quadrants? Slightly more than 12 percent of the sample want to repeal Obamacare despite saying they would vote for the bill, and another 9 percent would vote against the bill but wouldn’t repeal it.
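If you want to poke at something like this yourself, here’s a rough sketch of how the cross-tab and jitter plot could be produced in Python. To be clear, the file name, variable names, and answer codings below are hypothetical placeholders, not the actual CCES codebook entries.

```python
# Rough sketch of the cross-tab behind the jitter plot. The file name,
# column names (aca_vote, aca_repeal), and answer labels are hypothetical
# stand-ins for the real CCES roll-call variables.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

cces = pd.read_csv("cces_unc_module.csv")

# Keep only respondents who gave a clear yea/nay on both items.
answered = cces.dropna(subset=["aca_vote", "aca_repeal"])

# Percentage of respondents in each of the four quadrants.
quadrants = pd.crosstab(
    answered["aca_vote"],      # "For" / "Against" the ACA
    answered["aca_repeal"],    # "Repeal" / "Keep"
    normalize="all",
) * 100
print(quadrants.round(1))

# Jittered scatter: map each answer to 0/1 and add a little noise so the
# points don't all stack on four spots.
rng = np.random.default_rng(42)
x = answered["aca_vote"].map({"Against": 0, "For": 1}) + rng.uniform(-0.2, 0.2, len(answered))
y = answered["aca_repeal"].map({"Keep": 0, "Repeal": 1}) + rng.uniform(-0.2, 0.2, len(answered))

plt.scatter(x, y, alpha=0.3, s=10)
plt.xticks([0, 1], ["Vote against ACA", "Vote for ACA"])
plt.yticks([0, 1], ["Don't repeal", "Repeal"])
plt.show()
```

The jitter is purely cosmetic: both items are binary, so without the added noise every respondent would sit on one of four points.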
The latter group, the Vote Against / Don’t Repeal group, may be reasoning through the path dependency of Obamacare. Something like, “Well, I don’t like it, but it would endanger the health care system to repeal it now.” Or maybe they’re just ardent believers in the democratic process: elected officials passed the bill, and who would I be to usurp them? I doubt either of these stories, but neither is impossible.
The other group — the Vote For / Repeal It! group — is weirder, though. There’s really no logical connection between the two answers.
Surveys are weird…
Well, they are! Despite having used public opinion data in research for several years now, I took my first “real” political survey over the winter holidays. Gallup called and wanted to talk to me about global warming, and that sounded like fun.
It wasn’t. First, you get pretty tired of answering questions after the first twenty. Second, even though I’m a well-educated, highly informed, and engaged observer of the political world, the survey made me feel dumb. There’s an unusual pressure in a survey to answer questions promptly, which is fine, except that sometimes you don’t have an easy answer at the top of your mind. Besides, these issues are complicated! Global warming? Economics? Coal, nuclear, wind, oil? Health care mandates?
Stressed yet? Even informed and engaged respondents get a bit overwhelmed by the survey items, and by the need to provide clean answers to complicated questions. And sometimes the questions aren’t entirely clear. Are we asking whether you would have voted for Obamacare back in 2010, or whether you would vote for it today? Do some respondents miss the “repeal” part of the question? These are all possible points of confusion, introduced in a highly artificial environment, but ones we can’t test for without an instrument designed to catch them.
Here’s the uncomfortable truth about polls: we use them because they’re what we have. On many questions, they do a reasonable job of capturing the general feeling of the public. “Will you vote for Mitt Romney, the Republican, or Barack Obama, the Democrat?” isn’t terribly difficult, and most respondents can give a decent answer.
But as the questions become more complicated, responses become less reliable. Accessing “true” attitudes on policy questions with a survey can sometimes be like removing a splinter from your finger with an axe: In a sense it works, but it’s awfully messy.
And it gets messier when we try to draw relationships between multiple items, all of which have some weird characteristics, like nonattitudes, weak attitudes, and nonresponse. Aggregating to reduce the high dimensionality of multiple responses can help filter out some of the noise, but that’s a topic for another post.
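Just to gesture at the idea before that future post: one simple version is to standardize a handful of related items and average them for each respondent, so the idiosyncratic noise in any single item partially cancels out. The item names below are, again, hypothetical.

```python
import pandas as pd

# Continuing with the (hypothetical) CCES extract from the sketch above.
cces = pd.read_csv("cces_unc_module.csv")

# Z-score a few related items, then average them per respondent so that
# idiosyncratic noise in any single item partially washes out.
items = cces[["healthcare_item", "spending_item", "regulation_item"]].astype(float)
z_scores = (items - items.mean()) / items.std()
cces["policy_index"] = z_scores.mean(axis=1)  # row-wise mean; NaNs are skipped by default
```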
Pundits and commentators roll out polls daily to claim support for some position or another. Being an informed consumer of surveys means going beyond “What’s the Margin of Error?” (We are the Margin of Error, duh!)
It means realizing that a fair number of responses might carry little objective meaning: when pressed, I’ll answer, but I honestly don’t know, don’t care, or haven’t quite figured out my views yet. Treating these responses as some true-to-life measure of how the American people feel, or how they’ll act, can lead us pretty far afield.
Note: The CCES sample above is limited to the UNC module of 1,000 respondents. Expanding to the full CCES sample of 55,000+ respondents doesn’t change the substantive picture, but it does make the figure a bit messier.