
Category Archives: Polling

The meaning of success

How should we define the success of the Affordable Care Act (ACA)? In recent months, news reports focused on the number of new enrollees as a key test of the law. Although the troubled performance of the healthcare.gov website during October and November delayed enrollment for hundreds of thousands of potential subscribers, Obama administration officials and Congressional Democrats hailed a surge in enrollment at the end of the year as proof that the law would fulfill its promise of providing affordable coverage to millions of uninsured Americans.

To date, enrollment numbers paint a decidedly mixed portrait of the ACA’s impact. Speaking on September 30, 2013, HHS Secretary Kathleen Sebelius declared that “success looks like at least 7 million people having signed up by the end of March 2014.” By late December, however, Sebelius hailed the fact that 2.1 million people had signed up for coverage through the new exchanges as evidence that the law was now working well. Earlier in the month, President Obama cited the increased pace of enrollment as proof that “the demand is there, and the product is good.” Even the most optimistic estimates, however, suggest that signups continue to lag far behind the administration’s own goals.

Obama administration officials responded to criticism about the widespread cancellation of individual insurance market policies in late 2013 by exempting from the individual mandate millions of Americans who faced “unexpected natural or human-caused events” that prevented them from obtaining coverage. Ironically, this decision, which sought to mollify Congressional critics and their outraged constituents, further undermines the administration’s prospects of meeting its enrollment targets and exacerbates an already serious credibility gap for Democratic candidates in the upcoming Congressional elections. Democrats continue to emphasize a “moving average” approach to measuring the success of the health insurance exchanges, pointing out that the pace of enrollments increased steadily once the website’s “glitches” were ironed out in late November. However, a failure to meet the administration’s own goal of 7 million new enrollees by the end of March 2014 will provide Republicans with a new policy story just in time for the 2014 campaign season.

Unfortunately for Congressional Democrats, increased enrollments did little to rehabilitate the image of the ACA in the eyes of the public. In a CNN poll released on December 23, support for the law fell to 35% – a new low – despite significant improvements to healthcare.gov as a result of the “tech surge” in late November. The new polls highlight a troublesome trend for Democratic candidates who heed President Obama’s call to close ranks behind the ACA. Core Democratic constituencies now oppose the law, including 60% of women. Furthermore, in an ironic twist, 63% of those polled expected to pay more for health care after the implementation of the Affordable Care Act. In its current form, the ACA promises to be a millstone around the necks of vulnerable Congressional Democrats in 2014. Unless the Obama administration and other supporters of reform can reassure a doubtful public about the problem-solving capacity of American political institutions, the ACA may prove to be a classic Pyrrhic victory. In short, administration officials may win small battles over improving the performance of the website, but lose the larger war over public support for government-led health care reforms.

The continued unpopularity of ObamaCare more than three and a half years after its enactment reflects a much deeper concern than simply website snafus or insurance cancellations. As I’ve argued elsewhere, ObamaCare has done little to restore public faith in the ability of government to solve social problems. Unless and until the administration begins to meet its own targets, the political fallout of the ACA will cast a long shadow over the 2014 elections … and beyond.


In Health Reform Polling, Even Undecideds Have Opinions

If you’re at all familiar with opinion polls, you know a few things: how the questions are asked matters; the results always carry a margin of error because they rely on probability sampling; and there are always some people who profess to have no opinion on the question being asked. When we interpret the results of polls, it’s easy to keep the margin of error in mind, and it’s not even that difficult to assess the impact of question wording by comparing results from differently worded polls about the same topic. No, the real challenge is figuring out what those undecideds are actually thinking. A little example shows why this can be so important.

Let’s say a poll asks respondents about their support of health reform. Now imagine two sets of results, both with a margin of error of +/- 3%.

Support Health Reform – 25%
Oppose Health Reform – 25%
No Opinion – 50%

In this case, we clearly want to know what the undecided respondents are thinking, because there are so many of them. The results simply aren’t very informative when we only know where half of the respondents stand. Now, consider a case where we might be inclined to ignore this lack of opinion:

Support Health Reform – 46%
Oppose Health Reform – 49%
No Opinion – 5%

In this case, we would consider the public evenly divided on the issue, because the 3-point difference is well within our combined margin of error. We probably wouldn’t think much of the 5% with no opinion, although the reality is that how that group would break, if people were forced to choose one position or the other, could make a difference. If all 5% went to the support camp, we’d still have a statistical tie. If they all broke for the opposition, the gap would widen to 8 points and the two figures would no longer overlap within the margin of error. The undecideds matter.
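For anyone who wants to check that arithmetic, here is a minimal sketch of the margin-of-error calculation behind the 46%–49% example. The sample size of 1,000 and the 95% confidence level are assumptions for illustration, not figures from any actual poll discussed here.

```python
# Quick sketch of the margin-of-error arithmetic behind the 46% vs. 49% example.
# The sample size (n = 1,000) and the 95% confidence level are assumptions for
# illustration, not figures from an actual poll.
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a single proportion."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000
support, oppose, undecided = 0.46, 0.49, 0.05

moe = margin_of_error(support, n)
print(f"margin of error: +/- {moe * 100:.1f} points")   # about +/- 3.1 points

# 46% vs. 49% is a statistical tie: the 3-point gap is smaller than the
# combined uncertainty. But if every undecided respondent broke toward
# opposition, the gap would be 8 points, well beyond the margin of error.
print(f"if undecideds all oppose: {support:.0%} vs. {oppose + undecided:.0%}")
```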

Fortunately, there’s something that researchers can do to get around this. As Adam Berinsky and Michele Margolis from MIT write in a recent issue of the Journal of Health Politics, Policy and Law, it is possible to impute the opinions of those who profess not to have one. Here’s how it works: most people don’t answer “no opinion” to every question. So you use their responses to the questions they did answer to match them up with other respondents who answered those questions the same way. Then you look at how those matched respondents answered the question that the others skipped. That gives you a sense of how the undecideds would have answered if they had expressed an opinion. Make sense?
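In code, that matching step might look something like the sketch below: a simple hot-deck-style imputation in pandas. To be clear, this is not Berinsky and Margolis’s actual code or statistical model, just a rough illustration of the matching idea; the function and column names are hypothetical.

```python
# Rough sketch of matching-based ("hot deck") imputation in the spirit of the
# approach described above. Not the authors' actual code or model; the function
# and column names are hypothetical.
import pandas as pd

def impute_by_matching(df, target, predictors):
    """Fill missing ("no opinion") answers on `target` with the most common
    answer given by respondents who answered the `predictors` questions the
    same way. `predictors` should be a list of two or more question columns."""
    imputed = df[target].copy()

    # Modal answer to the target question within each combination of
    # predictor answers, using only respondents who gave an answer.
    modes = (df.dropna(subset=[target])
               .groupby(predictors)[target]
               .agg(lambda s: s.mode().iloc[0]))
    lookup = modes.to_dict()  # keys are tuples of predictor answers

    missing = df[target].isna()
    keys = df.loc[missing, predictors].apply(tuple, axis=1)
    imputed.loc[missing] = keys.map(lookup)  # unmatched combinations stay missing
    return imputed
```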

Here’s a silly example: Let’s say you asked people three questions about which foods they liked to eat. First, you ask if they like lettuce. 80% say yes, and 20% say no. Then you ask if they like cheese. 50% say yes, and 50% say no. Then you ask if they like hamburger. 40% say yes, 20% say no, and 40% have no opinion. When you look at the data, you see that the 40% with no opinion all like lettuce and dislike cheese. Others in the sample who like lettuce and dislike cheese report that they dislike hamburger (it turns out they are strict vegetarians). Therefore, it’s not a certainty, but there’s a good chance that the folks with no opinion actually dislike hamburgers. They may also be vegetarians who answered no opinion because they felt the question wasn’t relevant to them, for example. Make better sense? Good.
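To make that concrete, the sketch above can be run on a tiny made-up sample that mirrors the example (the rows below are invented and only roughly track the percentages in the post):

```python
import numpy as np
import pandas as pd

# Tiny invented sample mirroring the lettuce/cheese/hamburger example.
# NaN stands in for "no opinion" on the hamburger question.
toy = pd.DataFrame({
    "lettuce":   ["yes", "yes",  "yes", "yes", "no"],
    "cheese":    ["no",  "no",   "yes", "yes", "yes"],
    "hamburger": ["no",  np.nan, "yes", "yes", "yes"],
})

print(impute_by_matching(toy, "hamburger", ["lettuce", "cheese"]))
# The respondent with no opinion answered lettuce = yes and cheese = no, just
# like the respondent who reported disliking hamburger, so the imputed answer
# for that respondent is "no".
```

As in the example, the imputed answer is a best guess rather than a certainty: it simply borrows the answer given by look-alike respondents.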

 
Posted on February 10, 2012 in Polling, Recent Research

 
