Margin of Σrror

Anthony Weiner climbs as NYC mayor race heads for tightest finish since 1977

When the candidates are polling this poorly and closely to each other, anything can happen

The New York City mayoral race is a zoo, and we’re all witnesses to it. Over the past two months, I said Anthony Weiner was stronger than initial polling suggested, Christine Quinn might not make the runoff (held if no candidate reaches 40% in the first round) and Bill Thompson probably would end up in the second round. The polling that came out this week makes me doubt none of these beliefs, yet the race remains very unsettled.

This week Marist and Quinnipiac released polls with slightly different, though mostly consistent, results. Marist had Weiner at 25%, Quinn at 20%, Thompson at 13%, Bill de Blasio at 10% and John Liu at 8%. Quinnipiac put Quinn at 19%, Weiner at 17%, Thompson at 16%, de Blasio at 10% and Liu at 7%. You’ll note that both surveys put Quinn, Thompson, de Blasio and Liu in nearly the same place, and any differences are within the margin of error. Weiner’s higher percentage in the Marist poll may, as Mark Blumenthal noted on Wednesday, be because he’s in the news a lot and is benefiting from Marist pushing undecideds harder.
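To see why gaps of a few points should be read cautiously, here is a minimal sketch of the standard margin-of-error calculation for a poll proportion. The 600-respondent sample size is a placeholder assumption, not the actual size of either survey.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a poll share p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 600  # hypothetical sample size; the real Marist/Quinnipiac samples may differ
for name, share in [("Quinn", 0.19), ("Weiner", 0.17), ("Thompson", 0.16)]:
    print(f"{name}: {share:.0%} ± {margin_of_error(share, n):.1%}")
```

With roughly 600 respondents, each share carries a margin of error of about 3 points, so the 1-3 point gaps separating Quinn, Weiner and Thompson are well within sampling noise.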

Quinn’s nominal edge in the Quinnipiac poll should not be mistaken for any sort of strong support. The Marist poll finds that among those who strongly support a candidate, Weiner’s lead over Quinn extends to 17pt – 40% to 23%. Quinnipiac has her unfavorable rating among Democrats climbing to 31% – 12pt higher than in May and by far her highest of the year. What is going on?

While many analysts were stuck claiming that Weiner was a product of name recognition, they failed to recognize that the same was true, perhaps to an even greater extent, of Quinn. I wrote in March that Quinn didn’t have a record that some Democrats would like once they got to know it. Anti-Quinn advertisements have been running in New York, and they clearly have had an effect.

Quinn had also been polling particularly strongly among African-Americans and Latinos in prior Marist surveys, which didn’t make much sense. Quinn is seen as a kind of heir apparent to Mayor Michael Bloomberg, who polls worst among minorities. The latest Marist poll has her falling back to third among blacks at 19% and second among Latinos at 16%. Those numbers may fall further.

Weiner, meanwhile, has seen his numbers climb. In the ballot test, he’s at his high point in both the Marist and Quinnipiac polls. He’s cut his deficit in a potential runoff against Quinn from 15pt down to 2pt, per Marist. Marist has his net favorable rating among Democrats rising from 0 to +16pt in the last month alone, while Quinnipiac shows the percentage of Democrats thinking he should run for mayor up to 52% from 41% last month.

It’s easy to say that Weiner’s rise is because more people are hearing his name, but I don’t think that’s necessarily true. Weiner’s rise in the last month has occurred despite the percentage of Democrats with an opinion of him staying the same. That means he’s changing minds, as I suggested he might in April. This transformation has occurred even as a number of stories hit the press about Weiner’s lack of a solid congressional record, his past racially tinged campaigns, and the damage his sexual escapades did to the women he had conversations with.

The biggest story from the polling, however, is the predictable rise of Bill Thompson. It’s been my contention since day one that Thompson, an African-American, would pick up the lion’s share of votes from African-Americans, who will make up about 30% of the electorate. That would put him in prime contention for a runoff spot that will probably be earned with just a little more than 20% of the vote. Both the Marist and Quinnipiac polls have Thompson rising from the mid-teens to 21% among African-Americans in the past month. That number should climb even more.

The underlying voter sentiment gives Thompson even more hope. Thompson has the best net favorable ratings of any candidate in both the Marist and Quinnipiac polls among all Democratic voters – including whites. We can see this at work in potential runoffs, in which he is in statistical dead heats with Quinn and Weiner per Marist, even as Marist has him trailing both of them in the initial round.

Thompson may also benefit from the fact that pollsters in New York City seem to have a difficult time measuring support for minority candidates and among minority voters. The leading minority candidate has significantly over-performed his final polling in every Democratic primary since 1989. Pollsters underestimated the percentage of minority voters going for the minority candidate in the past two general elections – including Thompson in 2009.

At the end of the day, though, any of the top three candidates could advance. I went through the polling that I could find since 1989, and I can’t find a single poll this late in a mayoral primary campaign in which the leading candidate had less than 26%, let alone less than 20%. There simply is no precedent for this in the past 30 years.

Indeed, the only race I can remember that bears even the slightest resemblance to this one is 1977. That race featured Democrats Bella Abzug, Herman Badillo, Abe Beame, Mario Cuomo, Ed Koch and Percy Sutton. Abzug was thought of as the favorite, with Beame close behind. Polling in that race had Abzug leading with right around 20% until mid-August. Then Koch “surged” forward to win the first round with less than 20%, with Cuomo close behind, while none of the six earned less than 10%.

The lesson from that campaign that should be applied to this one is that when the candidates are polling this poorly and this close to each other, anything can happen. I wouldn’t even count out Bill de Blasio, who is lurking at 10%. If you buy the Quinnipiac poll, he’s less than 10pt back. With two months to go and most voters not yet tuned into the race, it could be 1977 all over again, with someone we wouldn’t think of coming from behind. I don’t expect it, but in this race, expect the unexpected.

theguardian.com © 2013 Guardian News and Media Limited or its affiliated companies. All rights reserved.

“Trust, But Verify:” Quinnipiac and the GOP surge in the States

Recent weeks have seen a spate of good news for Republicans in Quinnipiac polls of several 2014 governor’s races. In Colorado, Democratic Governor John Hickenlooper leads conservative ex-Congressman Tom Tancredo (R-CO) by just one point, Republican Colorado Secretary of State Scott Gessler by only two points, and Republican State Senator Greg Brophy by just six points. In Connecticut, Democratic Governor Dan Malloy actually trails 2010 opponent and former U.S. Ambassador Tom Foley by three points. (Malloy also has single-digit leads over other possible opponents with low name identification.)

In Ohio, Republican Governor John Kasich leads his likely Democratic opponent, Cuyahoga County Executive Ed FitzGerald, by fourteen percentage points. Finally, in what qualifies as good news for Republicans in Florida, unpopular Republican Governor Rick Scott trails now-Democratic former Governor Charlie Crist by ten points (previously, Scott had trailed by even larger margins).

So what is one to make of these polls? Are Republicans poised for a midterm rebound in these states or are these polls too rosy for Republicans?

First, it should be said that there is certainly a case to be made that the Democratic Party will fare at least somewhat poorly in the 2014 midterms. Traditionally, two-term presidents experience a six-year itch as their party loses offices around the country in their second midterm election (Bill Clinton in 1998 was an exception). Furthermore (and separate from the six-year itch), the Cook Political Report’s David Wasserman notes that Republicans have a built-in turnout advantage in midterm elections. This built-in Republican edge in midterm turnout has grown especially large in recent years.

Second, at the same time, it is also unwise to uncritically accept the results of a single poll (or a series of polls from the same polling firm). As Nate Silver and Real Clear Politics have shown in recent elections, poll averages tend to perform well in forecasting election results (Silver weights polls based on the past quality of the polling firm, while Real Clear Politics uses an unweighted average). Indeed, it is not coincidental that the two Senate races that Silver’s model incorrectly predicted—North Dakota and Montana—suffered from a dearth of polling data.
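For concreteness, here is a minimal sketch of the two averaging approaches mentioned above. The pollster names, leads and quality weights are invented for illustration and are not tied to any actual race.

```python
# Unweighted (Real Clear Politics-style) vs. quality-weighted (Silver-style) poll
# averaging. All numbers below are hypothetical.
polls = [
    # (pollster, Democratic lead in points, quality weight)
    ("Firm A", +3.0, 1.0),
    ("Firm B", -1.0, 0.6),
    ("Firm C", +2.0, 0.8),
]

unweighted = sum(lead for _, lead, _ in polls) / len(polls)
weighted = sum(lead * w for _, lead, w in polls) / sum(w for _, _, w in polls)

print(f"Unweighted average lead: {unweighted:+.1f}pt")
print(f"Quality-weighted average lead: {weighted:+.1f}pt")
```

Either way, the point is that a single outlying poll moves an average far less than it moves a headline.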

This is not to say that Quinnipiac is necessarily any better or worse than any other polling firm. Indeed, a post-election analysis of polling accuracy from Nate Silver’s 538 blog rated Quinnipiac as 11th of the 23 polling firms that conducted at least 5 polls in the last three weeks of the campaign.

Yet, like every polling firm, Quinnipiac conducts a poll from time to time that seems to be an outlier. For example, a mid-September 2010 Quinnipiac Ohio poll showed John Kasich leading Ted Strickland by a whopping 17 points. According to Real Clear Politics, several other polling firms that conducted polls at roughly the same time showed Kasich with a substantially smaller lead (Kasich ultimately won by 2 points in November).

In keeping with what recent polls from Quinnipiac suggest, Republicans may well be surging in the states. But one should at least be somewhat skeptical of these results until they are confirmed by results from other polling firms (or even future polls from Quinnipiac in the same states).

A complete lack of skepticism, best epitomized by a recent piece from POLITICO declaring John Kasich a model for GOP success in swing states, results in too great a willingness to accept the results of a single poll as fact. At the same time, a complete dismissal of these Quinnipiac polls would be equally (or perhaps even more) silly.

So what is the right approach? Acceptance of results combined with a healthy dose of skepticism until confirmed by other polls. Or as our 40th President was famous for saying about U.S. relations with the Soviet Union, “trust, but verify.”

Flexing those Conservative Muscles, Pt. II

In Part I, we discussed popular media coverage of a forthcoming Psychological Science article. The work by Michael Bang Petersen and his colleagues claims to show an evolutionary link between physical strength and the intensity of political beliefs. In Part II, we’ll push that causal claim a bit further.

In the past few days, the furor over the NSA surveillance program has sparked new debates between liberals and conservatives. Interestingly, much of the defense-versus-liberty argument pits conservatives against one another: defense hawks who want expansive powers to root out terrorists, and libertarian-minded conservatives who abhor anything smelling of a police state.

This calls to mind the more traditional “liberals are like mothers, conservatives are like fathers” trope. Conservatives are tough and strong while liberals are warm and nurturing. The recent piece by Petersen and his colleagues plays, in part, into that stereotype. But how reliable is the research?

Revisiting Causation

Petersen and his colleagues find a correlation between bicep size (their measure of fighting aptitude) and the congruence between individual economic conditions and political conservatism. Why might the relationship exist? The authors argue that this clearly points to evolutionary and thus genetic bases of political attitudes. That’s not impossible, but it’s hardly the only reasonable conclusion to draw.[1]

Before we dig too deeply into the research, it might serve to review the basics of causal inference. Causation, as the name implies, seeks to establish that some phenomenon causes another. On its surface, this is simple to do. I flip the light switch, the light comes on. Boom: cause produces effect. Sure, there are mechanisms that allow the switch to work, but for all intents and purposes I caused the lights to come on.

Unfortunately, causal inference is rarely so simple. The cleanest way to observe causation is through randomized experiments, where some subjects receive a treatment (some rooms get the light switch flipped) and others don’t (some rooms’ switches are left untouched). If the treatment group experiences different outcomes than the control group, we may have observed a robust causal process.

With observational data (i.e., data collected by observation and not by experimentation), testing causal theories gets stickier. We can observe correlations, but we often cannot determine if x caused y, if y caused x, or if both x and y were caused by some other unobserved phenomenon z.
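To make the confounding problem concrete, here is a minimal simulation with entirely invented variables: an unobserved factor z drives both x and y, producing a strong correlation even though neither one causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Unobserved common cause z drives both x and y; x and y never affect each other.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(scale=0.5, size=n)
y = 0.8 * z + rng.normal(scale=0.5, size=n)

print(f"corr(x, y) = {np.corrcoef(x, y)[0, 1]:.2f}")  # strong, despite no causal link
```

An observer who never measures z would see a correlation of roughly 0.7 and might be tempted to conclude that x causes y.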

In political terms, we might ask: Does my evaluation of the economy determine my partisanship? Does my partisanship determine my evaluation of the economy? Does my political ideology cause both? Or might they all three interact in a more complicated way? With observational data, it’s hard to tell.

We don’t just abandon causal inference here, though. We can still make causal claims, but we must (a) provide thorough theoretical explanations for how our posited cause produces a certain effect; and (b) exhaustively examine alternate explanations that might falsify our theory.[2] Petersen and his colleagues tentatively satisfy the first condition, but hardly attempt the latter.

Examining the Research

Turning back to the research, we can note first that the effects proffered by the authors are quite small. This illuminates the frequently overlooked difference between statistical and substantive significance. The effect may exist, but it’s so small as to be swamped by other influences in the real world.

The more important challenge, however, concerns the authors’ causal claims. The authors posit that survival likelihood in men should cause more self-interested behavior, but they support their case rather lazily. In fact, the authors don’t even get close to providing an exhaustive test of their causal story.

The authors instead show a correlation between the two phenomena. Yet other reasonable explanations abound. Bicep size, after all, is not purely innate, but depends in large part on resistance training and even body fat percentage. And the predispositions toward having large arms may reflect little about evolution or genetics.

Certain people are more likely to believe strongly in physical aptitude and healthfulness, and perhaps these people are also more likely to hold strong political views. Education, for instance, tends to predict both political sophistication and physical exercise, health and lower obesity. Political science research also shows that knowledge and interest in politics play an important role in our ability to match economic policies and political elites to our personal economic well-being.

Or maybe the story does involve assertiveness, which makes individuals more likely to care about politics, behave more self-interestedly and/or hit the gym more regularly. The mechanism here could be nature, but it could also be nurture. Guys who are raised to assert their opinions on any manner of subjects, politics included, may also be raised to be physically strong.

Or perhaps self-interest causes physical fitness. It could be that people who are more self-interested also tend to care about physical strength. From a survival perspective, that would make sense. In fact, none of these alternative explanations is wholly unreasonable, and any of them could produce correlations similar to those reported in the study.

Of Science and Old Sayings

The age-old bromide “correlation does not imply causation” should be ringing in our ears about now. The authors have a mildly interesting finding, but at root all they’ve presented is a correlation which hardly supports their causal story.

Image courtesy of http://xkcd.com

Unless future research shows that other fitness characteristics condition political attitudes, the theory remains weak.

Even in that case, researchers would need to rule out competing theories. Strength might cause self-interest; but self-interest could cause strength, or both could be the effects of some other evolutionary or behavioral cause.

What To Do?

How might researchers better examine the connection between strength and political attitudes? Let’s consider a few things to try.

1. Measure strength, not (just) biceps: The authors argue that biceps are the best single predictor of fighting ability. But why use just one? Surely there are reasonable ways to measure strength, like with compression springs, that would give us a better idea of how strong participants are. This eliminates concerns that bicep size might be a poor indicator of strength.

2. Measure fighting ability, not just strength: Bicep size (or strength, otherwise defined) only matters to the degree that it taps into fighting ability. Other characteristics help us fight, though. Height and longer arms are useful, as are stronger legs and faster reflexes. The authors argue that biceps are used by others to assess a person’s fighting ability, but that should not matter for this study. The theory suggests that good fighters, not people who just look like good fighters, should take what they want from the political system.

3. Explore innate characteristics: Behavior, personality and interpersonal influence (parents, peers, et cetera) affect our tendency to lift heavy things. That makes it nearly impossible to say that strength per se influences our political views. More innate qualities, not as subject to behavioral manipulation, could be helpful.

4. Consider other dimensions of assertiveness: The authors argue that strong men will be more politically assertive; but why stop there? Asking about, or experimenting with, other facets of assertiveness may shed light on an interesting question, namely whether strength predicts certain personality types, politics included.

 

In sum, it’s not surprising that conservative outlets latched onto the research without reading it thoroughly. Conservatives on my Facebook feed were certainly thrilled to learn that they were, in fact, ruggedly strong and athletic, standing in stark contrast to their pantywaist liberal counterparts. That’s motivated reasoning at its best, but it’s also wrong.

And frankly, I’m not even too surprised that a peer-reviewed article would oversell its findings. The research garnered a lot of attention and, if true, could constitute a generally interesting finding.

But science isn’t about pithy titles or provocative theories. Causal claims must be made carefully, especially when working with observational data without randomization and controlled treatments. Time will expose the authors’ theory to better tests, and I suspect that when that happens, the theory will find its way to the scientific dustbin.

 

Notes:

1. I’m fairly agnostic about using biological research in political science. Like all research subfields, it produces some good and some mediocre work. Some of the bio-politics research is actually interesting, some reflects a fetish for new data with little theoretical development, and some just mines for significance stars (see § 3).

2. This is actually a more optimistic view of causal inference in the social sciences than I tend to embrace. For an interesting read on causation writ large, I recommend Causality by Judea Pearl (see also here). For a grounded, if slightly depressing, view of statistical models and causal inference, David Freedman’s posthumous collection is a must read.

Gallup’s 2012 election polling errors were only part of the problem | Harry Enten

Gallup was caught out badly, but other national pollsters were off, too. It’s time to look at different methods and new technologies

We all know that Gallup screwed the pooch in the 2012 presidential election. It had Mitt Romney leading through most of October and in its final poll by a point – a 5pt error. Gallup sought to prove to the polling world that it was seriously investigating its 2012 polling errors by issuing a report on Tuesday. In the write-up, Gallup noted that although there was no single cause, a likely faulty voter screen and too few Hispanics were among the problems. This comes as no surprise to others including Mark Blumenthal and myself.

It’s worth the time, though, to point out, as I have and Gallup did on Tuesday, that the Gallup effect was only half the problem.

The average of polls done in the final week, excluding Gallup and Rasmussen, had Obama’s lead over Romney more than 2pt too low. I might be willing to look the other way, except that the polling average in 2000 had George W Bush winning and was off on the margin by, again, more than 2pt. The error in the margin in 1996 was 3pt. The 1980 average saw an error of more than 5pt. The years between 1980 and 1996 were not much better. In other words, the “high” error in national polling, even when taking an average, isn’t new; in fact, it has been rather consistent over the years.

Worse than the error in the final polls was how the national polls took the consumer for a ride in October 2012 before finally settling in the final week. Anyone remember when Pew Research published a poll after the first debate in 2012 that had Mitt Romney up by 4pt among likely voters? I don’t mean to single out Pew, but because of Pew’s sterling reputation, this poll got an outsized amount of attention even as most of us suspected that it probably didn’t reflect the truth. Other pollsters, too, showed a bounce for Romney that propelled him into the lead after the first debate, though not all to the same extent.

The state polling, meanwhile, did not show an analogous large bounce. It consistently had Obama leading in the states he needed to be leading in. Moreover, it showed Obama holding very similar positions to those he did prior to the first debate in the non-battleground states.

Look at YouGov, for example, which polled before and after the first debate. In Florida, Obama was ahead by 2pt before the debate and 1pt after it. In blue New York, Obama was ahead by 22pt before the first debate and 24pt after. In red Georgia, Romney was up by 7pt before the debate and 8pt afterward. Pollsters like ABC/Washington Post, CNN/ORC, and Public Policy Polling (PPP) all did better on the state level throughout October than they did at the national level.

It’s not the first time the state polling beat the national surveys. Back in 2000, for instance, state poll followers knew that Al Gore had a really good shot at winning. National survey followers, though, were surprised when Gore won the national vote. That’s why smart poll aggregators like Drew Linzer, Nate Silver and Sam Wang barely looked at national polling in 2012 when trying to project the winner. It’s also why the Obama campaign didn’t conduct national surveys.

I asked Gallup about state polling on Tuesday, and why it didn’t try to do individual state polls and then sum them up, as Silver did, to calculate the national vote. After all, beyond overall polling accuracy, the real ball game of presidential elections for pollsters is the states. Gallup’s response was telling. First, it said that polling 50 individual states via live interviewer to come up with a national estimate would be too time-consuming and cost too much money. That’s fair. Second, Gallup said that it didn’t just poll the swing states because it was interested in knowing what all Americans thought, not just swing-state voters. (I agree, and made the same point in an earlier column.)

But for those of us who are interested in knowing who is going to win, Gallup’s answer is not satisfying. Other live pollsters like CNN/ORC, Marist, Quinnipiac and the Washington Post did very good statewide polling in 2012. Gallup hasn’t conducted a statewide general election poll since 2006 and hasn’t done so in a general presidential election since 2004. (Those 2004 polls, by the way, weren’t very good.)

Moreover, the option now exists for pollsters to use other technologies to poll most states, if not all 50. We have interactive voice response (IVR) or robo-polls that are relatively cheap and can survey many people quickly. As long as you properly weight in younger voters, as PPP and SurveyUSA do, these polls work quite well in predicting who is going to win the national election. We also have the somewhat less expensive, randomly selected internet surveys such as Knowledge Networks, and the cheaper volunteer internet polling that YouGov and Ipsos have implemented successfully. These volunteer surveys hold a lot of promise for the future as more and more people get rid of landlines and have computers.

The point is that there are proven ways to poll that produce more consistently accurate portrayals of the election than a single live telephone survey of a randomly selected national sample. In fact, it’s already being done. That’s not to say that good-quality probabilistic national surveys don’t have a place. No one has proven to me that IVR or non-random internet surveys are as good as probabilistic telephone surveys on issue questions beyond the ballot test. The problem, again, is that I’d look to other sources in preference to a survey that interviews one set of respondents in one survey, a different set in the next survey, and so on.

One of the biggest takeaways from the American Association for Public Opinion Research (AAPOR) conference is the usefulness of panel research. That is, you have a set number of respondents, weighted to the correct population parameters, who get interviewed over and over again. This leads to less volatility, and you can actually see how individual respondents are reacting to the campaign. Panels can be difficult to do by phone, yet are rather easily obtained in randomly selected internet samples, which pretty much everyone, including those against volunteer internet samples, agrees do just as good a job at finding true public opinion on issue questions. You can actually see how well panels worked with the RAND American Life Panel. It was the only national tracking survey in 2012 that showed both convention bounces and Obama leading throughout the month of October.
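As a rough sketch of what “weighted to the correct population parameters” means in practice, here is a toy post-stratification example. The age groups, sample counts and population shares are all invented for illustration.

```python
# Toy post-stratification: each respondent's weight = population share / sample share
# for their demographic cell. All numbers here are hypothetical.
sample_counts = {"18-29": 80, "30-44": 220, "45-64": 400, "65+": 300}   # who responded
population_share = {"18-29": 0.21, "30-44": 0.25, "45-64": 0.34, "65+": 0.20}

n = sum(sample_counts.values())
weights = {
    group: population_share[group] / (count / n)
    for group, count in sample_counts.items()
}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")  # >1 up-weights underrepresented groups
```

Reweighting the same respondents each wave is what keeps a panel from bouncing around simply because one wave happened to reach a different mix of people.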

None of this is to say that live telephone surveying is bad or useless, by any stretch. Most of the national telephone polls in 2012 were better than Gallup’s. It just seems to me that we shouldn’t only be examining Gallup for 2012’s polling failings. It might be time for even the most ardent defenders of live telephone national interviews to look at other methods in greater depth. Whether for the presidential horserace or for more in-depth issue questions, different and (in some cases) less expensive survey styles have tended to do better, or at least as well.

guardian.co.uk © 2013 Guardian News and Media Limited or its affiliated companies. All rights reserved.

Flexing those Conservative Muscles, Pt. I

Several media outlets recently published an astonishing finding: strong men are more conservative. Based on an article in Psychological Science, the popular accounts stray far from the research and probably even further from the truth. In this two-part post, we’ll discuss what the researchers actually said (Pt. I) and what their research might actually mean (Pt. II).

Strong Men are Conservative!

Last week, my Facebook feed brought me some intriguing news: researchers apparently discovered that physically stronger men tend to hold conservative economic views.

A couple of quick clicks and a Google search showed ample coverage, much of it in conservative media, of the peer-reviewed article. Michael Bang Petersen and his coauthors did, indeed, find a relationship between physical strength and political attitudes.

The study’s authors examine the connection between strength and politics by interviewing men and women in three countries (Argentina, Denmark and the United States). They measure the flexed bicep of each participant’s dominant arm and ask a battery of political questions. The relationship is “statistically significant” at conventional levels.

So, are beefy men more conservative? Nope. Popular accounts of the research misinterpret the findings, which are themselves oversold by the authors. At no point do the data provide exhaustive support for the theory that physical strength causes any part of one’s political views, much less that it makes men more conservative.

Leveling the mountain to uncover the molehill

The study’s authors nest their theory in the evolutionary advantage strong men enjoy in physical conflicts. Members of the conservative press seemed to love this story, and thus extracted the evolutionary thread without reading beyond the article’s abstract.

The logic, according to the popular accounts, is that since nature favors strong males, these same males will eschew social safety nets. They don’t need them, after all, and would rather not see the fruits of their strength redistributed to the ninnies.

Here’s the kicker: that’s not at all what the authors claimed to find. For the story above to be supported by the data, we would expect bicep size to correlate positively with conservative views. It presumably doesn’t, given that the authors don’t publish that finding.

The authors posit instead that strength interacts with other characteristics to affect political views. They point out that traditional rational decision making models don’t perform so well in the electorate, especially when using economic wellbeing to predict attitudes. That is, rich people tend to support redistribution less than poorer people, but the relationship between wealth and economic attitudes isn’t as clean as we may expect.

Here enters strength: stronger males may be more willing to engage in self-interested behavior than their weaker counterparts. The authors explain that strong males would, in nature, be more likely to claim resources because they are more able to defend them, and the same may be true of political resources. Strong, rich men may be more willing to fight against redistribution, while stronger, poor men may be more willing to claim resources through redistribution.

That’s a horse of a different color. According to the authors’ account, strength does not predict conservatism; instead, it predicts the congruence of personal economic status and political views. The authors do not present a correlation between strength and attitudes, but an interaction term between bicep size and wealth that together predict conservatism.[1]
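To illustrate what that kind of interaction term looks like in a regression, here is a generic sketch with simulated data. This is my own illustration, not the authors’ actual model, variables or data.

```python
# Generic sketch of testing an interaction effect: attitudes depend on
# strength * economic status, not on strength alone. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "strength": rng.normal(size=n),   # stand-in for bicep circumference
    "ses": rng.normal(size=n),        # stand-in for personal economic status
})
# Simulate attitudes in which only the interaction matters.
df["support_redistribution"] = -0.5 * df["strength"] * df["ses"] + rng.normal(size=n)

model = smf.ols("support_redistribution ~ strength * ses", data=df).fit()
print(model.params)  # 'strength:ses' is clearly negative; main effects hover near zero
```

In a setup like this, looking only at the simple correlation between strength and attitudes would show essentially nothing, which is exactly why the popular “strong men are conservative” framing misreads the finding.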

How did it go so wrong?

I have several quibbles with the original research, which we’ll explore in Part II, but we should be clear: the authors did not claim to have found that strength relates to conservatism. That’s wholly an invention of either lazy or ulteriorly motivated journalists.

On one level, this shouldn’t be surprising. Of my friends who posted stories about the research, all are quite conservative. The story provides a feel-good boost to people who identify with conservative politics, and it’s not odd that they would share the good news.

Second, the original theory matches our collective political notions. Cowboys are conservative, academics are liberals; soldiers are conservative, environmentalists are liberals. Given this natural frame, it’s easy enough to package the findings in a way that calls to mind these biases. Of course strong men oppose redistribution! They’re rugged, self-sufficient, independent, and thus the perfect candidates for conservative political views. The more nuanced story, of strength conditioning self-interest, is more difficult to tell and less intuitive, so it gets left by the roadside.

Third, media face powerful incentives to hyperbolize. “Strong men are more conservative” is flashier and more provocative than “strong men more likely to act self-interested in economic policy preferences.” A story guaranteed to tickle conservatives and enrage liberals will also attract more shares, tweets, and trackbacks in the blogosphere. If there’s no such thing as bad publicity, perhaps there’s also no such thing as bad web traffic.

Surprising or not, the coverage should still be depressing and alarming. In an era where we’re debating the role of social science research in society, it’s important to understand how the public accesses major findings and how these may improve the democratic process. News media play an important role here, but that responsibility is corrupted when journalists choose sexy over scientific.

In Part II, we’ll discuss the research by Petersen and his colleagues, and explore the relationship between strength, economics and political attitudes.

Notes:


Did Under Voting Cost Mount Vernon Schools the November Levy Election? (Part Two)

In my first post on under voting in Knox County, Ohio, I introduced the concept of under voting and discussed patterns of under voting in races in Knox County involving candidates. I found that the Gambier precincts exhibited levels of under voting that were below the Knox County norm in the presidential race, but that under voting rates in Gambier were much higher than the Knox County norm in other races down ballot.

This piece examines the effect of under voting on an issue race, focusing on the Knox County School Levy election that took place on November 6, 2012.

The Mount Vernon School Levy failed narrowly on November 6th, losing by a margin of 6813 votes in favor (49.3%) to 7014 votes against (50.7%). Had the levy received 202 more yes votes (a tie results in a loss), it would have passed. In the Gambier precincts, 241 votes, or ~18.1% of votes cast, were under votes. In the non-Gambier precincts, 390 votes, or ~3.2% of all votes cast, were under votes.

So, getting back to the central question, did the high rate of under voting in the Gambier precincts cost Mount Vernon Schools the November Levy Election? The answer to that question, of course, is complicated. Below, I will examine four alternative scenarios, each of which results in a slightly different answer.

Scenario One- Everyone votes, under voters all vote for the levy: This scenario, while perhaps unrealistic, is the most optimistic for the levy. Had the under voters in Gambier all voted for the levy, the levy would have passed by a margin of 7054 votes to 7014 votes (pending automatic recount). This scenario, however, is probably overly optimistic; unless the school levy could have generated the sort of enthusiasm that Barack Obama did, it is at least somewhat unreasonable to expect that there would be no under votes at all in this race. It is also somewhat optimistic to assume that every under voter would have voted for the levy had they cast ballots.

Scenario Two- Everyone votes, under voters support levy at rate of voters: What if one assumes that everyone votes, but that the under voters support the levy at the same rate as those who already voted? This may be a more reasonable assumption than assuming that every under voter would naturally support the levy. In the Gambier precincts, 91.2% of voters supported the school levy. Had 91.2% of the under voters supported the school levy, the levy would have gotten approximately 220 more yes votes for a total of 7033 yes votes. However, under this assumption, approximately 21 of the under voters (~8.8%) would have voted no, giving the no side a total of 7035 no votes. Under this scenario, the levy would have failed by three (!) votes (a tie results in a loss). Obviously, the levy would have gone to recount under this scenario; the only thing that would be sure under this scenario is a lengthy legal battle.

Scenario Three- Under voting falls to norm outside Gambier, under voters support levy at rate of voters: The assumption that everyone votes is also somewhat optimistic; after all, outside of the Gambier (and College Township) precincts there was some under voting in this race. If under voting in Gambier fell to the non-Gambier average of 3.2%, ~43 under votes would still have been cast, meaning 198 of the actual under voters would have voted. Allocating those votes by the same formula as in Scenario Two, 6994 total votes (an increase of 181) would have been cast for the levy and 7031 votes would have been cast against it. As a result, the levy would have needed 38 more yes votes to pass under this scenario; as with the previous scenario, however, the result falls within the 0.5% margin that triggers an automatic recount in a local, county, or municipal election.

Scenario Four- Relaxing the Assumptions of Scenarios Two and Three: While the assumptions in Scenario One were likely too loose, the assumptions in Scenarios Two and Three may be too rigid. (Goldilocks had a similar problem with temperature and pudding!) In Scenario Two, I used the 91.2% support rate among all voters. However, it is likely that most of the under voters were Kenyon students as opposed to year-round Gambier townspeople (who make up a small portion of the Gambier vote). I also suspect that Kenyon-affiliated people may have supported the levy at a slightly higher rate than the year-round Gambier townspeople (although support must have been widespread in the village among all residents for the levy to get 91.2% of the vote). Therefore, I average Scenario 1 and Scenario 2 and say that 95.6% of under voters would support the levy.

Let me also relax the assumption about under voting: what if under voting in Gambier took place at a rate of 1.6% in the school levy election, half the 3.2% average for non-Gambier precincts? After all, the Gambier precincts showed in the presidential race that their voters are quite adept at filling out ballots when they want to make their voices heard. Is this assumption reasonable? Perhaps.

Under the relaxed assumption about under voting, ~220 under voters would be converted into voters. Using the assumption of 95.6% support for the levy, I find that supporters would gain ~210 votes and opponents would gain ~10 votes. As a result, the levy would have received 7023 votes in favor and 7024 against, failing by only two (!) votes (again, tie=loss). Once again, the election would have been decided by a recount.
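For readers who want to check the arithmetic, here is a minimal sketch that reproduces the four scenarios from the reported totals. The rounding conventions are mine, so individual counts may differ from the text by a vote or two.

```python
# Reproduce the four under-vote scenarios from the reported levy totals.
yes, no = 6813, 7014        # actual votes for and against the levy
gambier_under = 241          # under votes cast in the Gambier precincts

def outcome(extra_voters, support_rate):
    """Allocate extra_voters at support_rate and report the resulting totals."""
    new_yes = yes + round(extra_voters * support_rate)
    new_no = no + round(extra_voters * (1 - support_rate))
    verdict = "passes" if new_yes > new_no else "fails (tie = loss)"
    return new_yes, new_no, verdict

print("Scenario 1:", outcome(gambier_under, 1.0))         # every under voter votes yes
print("Scenario 2:", outcome(gambier_under, 0.912))       # support at the Gambier rate
print("Scenario 3:", outcome(gambier_under - 43, 0.912))  # under voting falls to 3.2%
print("Scenario 4:", outcome(220, 0.956))                 # relaxed assumptions
```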

So did under voting cost Mount Vernon Schools the November 2012 election? The answer to that question is a definitive “maybe.” That all depends on a.) which of the above scenarios one finds most convincing and b.) what one assumes would have happened in a recount.

The only other conclusion that one can draw is that, had a lower rate of under voting taken place, the election administrator’s prayer most certainly would not have been answered. Most likely, a lengthy recount process would have taken place that could have dragged on for weeks, if not months.

 

 

Patterns of Under Voting in Gambier and the rest of Knox County, Ohio (Part One)

Among residents of Knox County, Ohio, the political differences between Gambier (home of Kenyon College) and the rest of the county are well-known. Gambier is populated by generally liberal students and faculty who (mostly) vote Democratic; Michelle Obama even visited the Kenyon campus in 2012. In contrast, the rest of the county is largely filled with generally conservative voters who tend to vote Republican. Indeed, 2012 Republican candidate Mitt Romney held a campaign event at the Ariel Corporation in Mount Vernon. Overall, Knox County voted for Governor Romney over President Obama by a 61 to 37 percent margin. Outside of Gambier and surrounding College Township, President Obama won the most votes in only one precinct (there was a tie in another precinct).

Using precinct-level data from the Knox County Board of Elections, this post focuses on another noticeable difference in voting patterns that exists between Gambier and the rest of Knox County: the extent to which “under voting” takes place in various contests. According to Wikipedia, an “under vote” occurs when, “the number of choices selected by a voter in a contest is less than the maximum number allowed for that contest or when no selection is made for a single choice contest.”

A close look at the Knox County Board of Elections website reveals an interesting pattern when one examines under voting by precinct. In the 2012 presidential race, not a single “presidential under vote” was cast in either Gambier precinct (the surrounding College Township precinct also saw no under votes). What makes this so interesting? In the rest of the county every other precinct had at least one under vote in the race for president.  Indeed, 213 votes (~0.8% of all votes cast) in the rest of the county were under votes.

What makes this pattern even more remarkable is that it begins to reverse itself in other races down ballot. Outside of the race for president, the under vote rate in Gambier exceeded the norm for the rest of the county.

For example:

  • In the Senate Race between Senator Sherrod Brown (D) and State Treasurer Josh Mandel (R), there were 87 under votes in Gambier or ~6.5% of all votes cast. Outside of the Gambier precincts, there were 619 under votes or ~2.3% of all votes cast.
  • In the House Race between Representative Bob Gibbs (R) and Challenger Joyce Healy-Abrams, there were 140 under votes in Gambier or ~10.5% of all votes cast. Outside of the Gambier precincts, there were 1360 under votes or ~5% of all votes cast. This despite the fact that the only debate between Gibbs and Healy-Abrams was actually held at Kenyon College in Gambier!
  • In the “Nonpartisan” State Supreme Court Race between Incumbent Robert Cupp (“R”) and Challenger Bill O’Neill (“D”), there were 730 under votes in Gambier or ~54.8% (!) of all votes cast. Outside of the Gambier precincts, there were 6453 under votes or ~23.6% of all votes cast. (Note: I called this race “nonpartisan” because, although no partisan labels appear on ballots, candidates are nominated through partisan primaries.)
  • The pattern is similar in other races down ballot.

So what implications can be drawn from this?

Here are three initial takeaways:

  • The Power of the Obama Campaign: Young voters really connected with President Obama and his campaign did a great job of reaching out to these voters and getting them to turn out to the polls. These voters were excited to vote for President Obama and filled out their ballots in such a way as to act on this excitement. This excitement about voting for President Obama, however, did not represent increased loyalty to the Democratic Party as a whole; this was made clear in the 2010 midterms as turnout among young voters remained relatively constant with historical patterns and did not experience any noticeable surge.
  • Importance of Partisan Cues: The substantial drop off that took place in the Gambier precincts for the State Supreme Court race underscores the odd things that can happen in ostensibly non-partisan judicial races. While some Kenyon students were willing to vote for a candidate with a “D” next to their name, they weren’t about to go searching for the partisan affiliation of a non-partisan candidate. (Good work on non-partisan judicial elections is being done by University of Pittsburgh Professor Chris Bonneau and UNC Graduate Student John Lappie.)
  • Under voting isn’t a liberal thing, it’s a college student thing: While under voting rates were above average in the Gambier precincts, this was not the case in the College Township Precinct. Home to some Kenyon employees, College Township has an ever-so-slight Democratic tilt. Furthermore, under voting in College Township was in line with the rates for the rest of the county. For example, 5 voters or ~2.2% under voted in the U.S. Senate race between Senator Brown and State Treasurer Mandel in College Township.

These implications are certainly not the only ones that can (or should) be drawn from this data. Indeed, the next post in this series will examine the practical implications of under voting for low turnout races, focusing specifically on the Mount Vernon School Levy.

Americans Secretly Oppose Gay Marriage

If you’ve struggled to find humor in politics recently, rejoice. At least the skewed-polls people are still around.

Yesterday, Chris Stirewalt blogged for Fox News that polls overstate support for gay marriage. Voicing a similar belief, leading social conservative Gary Bauer showed little concern over public opinion, telling Fox’s Chris Wallace:

“No, I’m not worried about it because the polls are skewed, Chris. Just this past November, four states, very liberal states, voted on this issue. And my side lost all four of those votes. But my side had 45, 46 percent of the vote in all four of those liberal states.”

As with many fallacies, there’s an iota of truth here. Stirewalt draws on work by New York University political scientist Patrick Egan that shows that late-season polls typically overestimate support for gay marriage compared with the election returns.

I don’t really have a problem so far. A Pollster article by Harry back in 2009 made a similar point and explored some ways to improve predictive models. The gap between pre-election polls and election returns, in other words, is well documented.

So, the polls are skewed…

Here’s where I depart from most interpretations of this observation. The poll-vote gap does not necessarily imply that the polls are “skewed.” Could it? Yes. But it doesn’t need to. I suspect a good bit of the bias comes from who votes, not how they vote.

Stirewalt argues that the polls are skewed and mainly blames social desirability bias. In this line of reasoning, respondents do not want to admit opposition to gay rights for fear of social judgment; instead, they act supportive but cast their secret ballots against. In other words, the “true” level of support is systematically lower than the polls show.

What’s crazy to me is that Stirewalt, even after basing his entire argument on Egan’s research, ignores the part where Egan dismisses social desirability as the primary cause of the polls’ inaccuracy. And Egan couldn’t be much plainer about it: “On the whole, these analyses fail to pin the blame for the inaccuracy of polling on same‐sex marriage bans on social desirability bias” (p. 7).[1]

What seems most likely is that pollsters haven’t figured out how to calibrate their samples to match who actually turns out. Ballot measures only attract at least moderately engaged voters. On an issue like gay marriage, it’s not surprising that some who ostensibly support gay rights aren’t nearly as motivated as those who have social, cultural or religious objections to it. The polls may decently represent the “true” proportion of citizens who support gay marriage, but not the narrower class of voters who cast a ballot on the issue.

We’re Missing the Point

But far, far more importantly, any potential skew in the polls misses the true point here. Let’s assume that the polls are skewed, and that “true” support for gay marriage is actually seven points (the best guess from the Egan research) lower than the polls say.

So what?

Those who invoke public opinion aren’t really that worried about crossing 50 percent. Even if the polls exaggerate support for gay marriage, the trend favors the equal-rights argument. The above figure[2] shows general sentiment (“thermometer” scores) toward gays and lesbians in the American National Election Study.[3] This figure by Nate Silver shows a similar rise in support for gay marriage. And this figure from Gallup shows a widening gap favoring general rights for gays and lesbians.

In this light, even yelling “Skewed Polling!” doesn’t change the fact that support for gays and their ability to marry is rising steadily.

Now, I know that race and sexual orientation are not the same, but there are some similarities between the above kernel density plot and the one at the top of the post. In general, support for rights and general sentiment co-evolve. Sentiment toward black Americans has increased even in the post-Civil Rights era. We see a smaller but similar “swell” in sentiment for homosexuals, with every reason to think it will continue on its current trajectory.

Even if support today is really, say, 51 percent instead of 58 percent, it’s much higher than it used to be.

Could we just be getting more politically correct, instead of more ‘liberal’, on gay rights? Sure, but the green line in the time series doesn’t show any real change in the rate of respondents opting out. No, young people are coming of age with a more permissive view on this issue.

Skew or no, the trend speaks for itself.

Notes:
[1] Now, as a brief aside, Egan’s first test for social desirability bias makes no sense to me. I can imagine plenty of reasons why a state’s gay population wouldn’t predict the poll-election gap. But the second test is much stronger: even as social acceptance of LGBT people has grown, the gap has become smaller. All in all, I’m sure social desirability is part of the story, but it’s most likely not the primary factor.

[2] The figure shows thermometers scaled on the interval [0, 1], as well as the proportion of respondents who respond to gays warmly (therm > 0.5), coolly (therm < 0.5), and those who opt not to answer. Confidence bands are generated using 1,000 bootstraps from the survey margin of error. The margin around “skip” seems odd, but for convenience I’m treating “skip” as an expression of a desire not to answer, and thus as a random variable in its own right.
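(For readers curious what a bootstrap band looks like in code, here is a rough sketch of resampling a warm-response share. It is my own illustration with placeholder data, not the code behind the figure above.)

```python
# Rough sketch: bootstrap a 95% band for the share of "warm" thermometer responses.
# Placeholder data only; not the ANES data or the code used for the figure above.
import numpy as np

rng = np.random.default_rng(42)
therm = rng.uniform(size=800)   # stand-in for one year's thermometer scores on [0, 1]

boot_shares = [
    np.mean(rng.choice(therm, size=therm.size, replace=True) > 0.5)
    for _ in range(1000)
]
lo, hi = np.percentile(boot_shares, [2.5, 97.5])
print(f"Warm share: {np.mean(therm > 0.5):.3f} (95% band: {lo:.3f}-{hi:.3f})")
```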

[3] The ANES, funded by the National Science Foundation, could be at risk thanks to recent Congressional targeting of political science. Contact your representatives in Congress because (I promise!) most scholars use the study for more consequential research than I.

Bill and Claire’s Unconstitutional Adventure

“No law, varying the compensation for the services of the Senators and Representatives, shall take effect, until an election of Representatives shall have intervened.”

-The 27th Amendment to the U.S. Constitution (1992)

Senators Bill Nelson (D-FL) and Claire McCaskill (D-MO) introduced a bill today that would force members of Congress to take pay cuts equal to the cuts affecting other government agencies under sequestration. This proposal is likely to be hugely popular with the American people. What’s the problem with this plan? It’s clearly unconstitutional, as well as simply being a bad idea.

The 27th Amendment to the U.S. Constitution (posted above) makes it unconstitutional for congressional pay to be changed until an election has taken place. While ratified in 1992, this amendment was originally proposed as part of the Bill of Rights and was supported by James Madison.

Madison supported this amendment because he did not want members to vote themselves a huge pay raise before the voters were allowed a chance to register their approval or disapproval of Congress. To quote Madison, “there is a seeming impropriety in leaving any set of men without control to put their hand into the public coffers, to take out money to put in their pockets.”

This amendment also protects members of Congress who express minority opinions on legislation or nominations. Imagine this scenario: The majority in Congress wants to pass a bill that it has the votes for, but it wants to look bipartisan in passing the bill. The speaker goes to the minority leader and says, “Have your members vote yes on my bill or we will vote to cut the pay of minority party members by half.” Seem implausible? Maybe, but the 27th Amendment is an important protection against the tyranny of the majority.

In addition to being unconstitutional, cutting the pay of members of Congress is a misplaced (albeit popular) reform. While the $174,000 salary that members of Congress receive is certainly a decent salary, it is not excessive if one considers the other potential job options for members of Congress. Who can forget the million dollars that former Senator Jim DeMint (R-SC) was offered to run the Heritage Foundation? Dozens of other examples exist of members getting huge pay raises to work as lobbyists or in other private-sector positions after leaving Congress.

As the old saying goes, “You get what you pay for.” If you offer second rate pay to members of Congress, you will get second rate members. (Yes, even worse than now!) In other words, there needs to be a competitive salary for members of Congress in order to maximize the probability of attracting high quality individuals to the job.

Furthermore, if the pay of members of Congress is cut too much, then serving in Congress will become even more difficult for middle and lower income Americans than it is now. Members of Congress have to maintain two residences and must have an extensive professional wardrobe (among other living expenses beyond that of the average American). This isn’t a problem if you are one of the 50 richest members of Congress, but if you are a clean energy expert/advocate,  high school teacher, or farmer/gospel music singer then a drastic pay cut might make it financially difficult to serve in Congress. At the very least, such an individual would be unable to build a financial nest egg in case of an election loss, which would disincentivize them from running in the first place.

All in all, a substantial pay cut for members of Congress would serve to make the membership of Congress even less economically representative of the country as a whole than it is now. While some people might be willing to make significant financial sacrifices to serve in Congress, enlightened statesmen will not always be at the helm.

In conclusion, even if you don’t agree that Mr. Nelson and Ms. McCaskill’s proposal is a bad idea, it is clear that it is unconstitutional. And when laws are legitimately unconstitutional, the Supreme Court has a way of striking that whole thing down.

[Note: The title of the article is a pun on the movie “Bill and Ted’s Excellent Adventure.” The final line of this piece is a (hopefully satirical) reference to Rep. Todd Akin’s disastrous comments about “legitimate rape” in the 2012 election.]