“The Devil Is In The Data”: Simply Adding More Observational Data Won’t Expand The Ranks Of The GOP
The GOP’s “Growth and Opportunity Project,” which details a plan for revitalizing the Republican Party in the aftermath of the 2012 defeat, is necessarily broader than it is deep. There is, however, a topic that will need to be thoroughly explored if the Republican Party is to successfully execute this ambitious plan.
Two words were used nearly 300 times throughout the report: “data” and “testing.” One word—“experiment”—was not mentioned at all. But experiments are the only type of test that can produce the kind of data the GOP needs.
Put simply, “data” is information about the world we live in, and it comes in two types: “observational” and “experimental.” “Observational” data is static; it’s information about things as they are, or were. For example, voters who are pro-life are also less supportive of gun control. That’s the world as it is. But it doesn’t tell us whether being pro-life causes people to be more pro-gun or whether a pro-life message will decrease support for gun control.
“Experimental” data is dynamic; it’s information about what causes things to change and how things could be. Experiments show us how specific messages or modes of contact—like telephone calls, mailers or TV ads—push or pull on voter opinion and behavior. Experiments open our eyes to a counterfactual universe: what if every citizen watched this ad, knew that fact, or was visited at their door by a volunteer? Will it shift the vote or turn more people out to the polls? Will it work with some voters, but not others, or even cause a backlash?
The experimental method is simple in concept, but difficult in practice. The core of a true experiment is random assignment of a large number of test subjects to “treatment” and “control” groups, like a clinical drug trial. With large numbers, random assignment ensures there is no systematic bias in who ends up in each group. We can then attribute any difference in the outcome between the “treatment” and “control” groups, whether that’s blood pressure or support for a candidate, to the effect of the “treatment.” It’s the only way to confidently identify a causal relationship.
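The logic of random assignment can be sketched in a few lines of Python. This is an illustration, not anything from the article: the voters, the baseline support scores, and the size of the persuasion effect are all invented. The point is that a coin-flip assignment makes the two groups alike on average, so the simple difference in group means recovers the effect we built in.

```python
import random
import statistics

random.seed(42)
TRUE_EFFECT = 3.0  # hypothetical average persuasion effect, in points

# Baseline support varies across voters for many unobserved reasons.
voters = [random.gauss(50, 10) for _ in range(100_000)]

# Random assignment: a coin flip decides treatment vs. control, so the
# groups carry the same mix of unobserved traits on average.
treatment, control = [], []
for baseline in voters:
    if random.random() < 0.5:
        treatment.append(baseline + TRUE_EFFECT)  # receives the mailer
    else:
        control.append(baseline)                  # does not

# The difference in group means estimates the causal effect.
estimate = statistics.mean(treatment) - statistics.mean(control)
print(f"Estimated effect: {estimate:.2f} points (true effect: {TRUE_EFFECT})")
```

With 100,000 simulated voters, the estimate lands very close to the true effect; without random assignment (say, if more persuadable voters self-selected into receiving the mailer), the same comparison of means would be biased.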
This all might sound far too fussy and academic, even philosophical, to be a core part of a political effort. But this is the new world of politics in which we’re already living.
What made the Obama campaign so accurate in their prediction of the vote across contested states was the use of experimental results from the “lab” and the “field” in their voter modeling. Because they had a large amount of experimental data, showing them how different kinds of people shifted in response to various messages (toward or away from Obama, greater or lesser likelihood of voting), they could predict with astonishing accuracy the aggregate results of their efforts.
Simply adding more observational data won’t expand the ranks of the GOP. In Iowa’s 2008 caucus, the Romney campaign turned out just under 30,000 votes and lost badly to a late-surging Mike Huckabee. Romney maintained his database on the state’s voters. In 2011, his campaign commenced a quiet but ambitious “data-driven” effort to win Iowa. All the experience, information and algorithms hard-won over the last four years were plowed into a massive persuasion and turnout effort. But when their work was completed and the counting was done, Romney received just under 30,000 votes once again. Four years and millions of dollars later Romney had earned about 140 fewer votes and a loss to yet another late-surging social conservative.
Observational data and the modeling it generates are cold and static. And no statistical technique, regardless of its sophistication, can overcome the inherent limitations of observational data. In contrast, experimental data and the modeling it generates are alive and dynamic.
We will never know what messages, digital tactics or other campaign tools work or are a waste without experiments. As Alan Gerber, Donald Green and Edward Kaplan—two of whom are political scientists from Yale who brought experiments out of academia and into Democratic politics—conclude, “unless researchers have prior information about the biases associated with observational research, observational findings are accorded zero weight [in a test of a causal proposition] regardless of sample size, and researchers learn about causality exclusively through experimental results.”
Big, integrated, and clean observational data are a necessity. But they aren’t sufficient. Mathematician and physicist Henri Poincaré claimed, “experiment alone can teach us something new; it alone can give us certainty.” I’d only caution that certainty is not something we can expect of this world. But experiments bring us as close to glimpsing it as we can hope.
By: Adam Schaeffer, U. S. News and World Report, March 22, 2013
Justifying Cuts: A Well-Used But Misleading Medicaid Statistic
“Cash-strapped states are also feeling the burden of the Medicaid entitlement. The program consumes nearly 22 percent of states’ budgets today, and things are about to get a whole lot worse.”
— Sen. Orrin Hatch (R-Utah), June 23, 2011, at a hearing of the Senate Finance Committee
“Medicaid is the lion’s share of that spending burden as it now consumes about 22 percent of state budgets now and will consume $4.6 trillion of Washington’s budget over the next ten years.”
— Former Kentucky governor Ernest Lee Fletcher (R), June 23, 2011, at the same hearing
“Across the country, governors are concerned about the burgeoning cost of Medicaid, which in fiscal 2010 consumed nearly 22 percent of state budgets, according to the National Association of State Budget Officers. That’s larger than what states spent on K-12 public schools.”
— Washington Post front page article, June 14, 2011
When a statistic is universally tossed around as a certified fact, it’s time to get suspicious.
Such is the case with this oft-cited statistic that 22 percent of state budgets is being gobbled up by Medicaid, the state-federal program that provides health coverage for the poor and the disabled. Medicaid supposedly is even dwarfing what is spent on educating children and teenagers.
But note the phrase “state-federal.” There’s billions of dollars in federal money involved, and the “22-percent” statistic obscures that fact. Let’s dig a little deeper into the numbers.
The Facts
Medicaid was a central part of President Lyndon Johnson’s “Great Society” initiative in the mid-1960s. Each state administers its own Medicaid program, but with federal oversight, federal requirements—and plenty of federal dollars. On average, the federal government provides 57 percent of Medicaid funds.
Initially, Medicaid was focused on low-income Americans, but elderly nursing home care has also become a big part of it. The new health care law would also greatly expand eligibility to people up to 133 percent of the official poverty line.
There’s no question that the recession has put pressure on Medicaid spending, as more people lost jobs or income and so became eligible for coverage. The new requirements of the health care law also will boost Medicaid spending.
The assertion that Medicaid is 22 percent of state spending, and thus now exceeds education spending, comes from an annual survey of the National Association of State Budget Officers (NASBO). But if you dig into the report — if you just go to page one — you will see that this number includes the federal contribution, in what is known as “total funds.”
If you want to see what states themselves are spending on Medicaid —“general funds” — you have to use another set of statistics.
As NASBO says on page one, “For estimated fiscal 2010, components of general fund spending are elementary and secondary education, 35.7 percent; Medicaid, 15.4 percent; higher education, 12.1 percent; corrections, 7.2 percent; public assistance, 1.9 percent; transportation, 0.8 percent; and all other expenditures, 27.0 percent.”
In other words, without the federal dollars included, Medicaid falls to second place, far behind education. It turns out that on average, states spend 15.4 percent of their funds on Medicaid — not 22 percent.
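The mechanics behind the two competing percentages can be sketched with simple arithmetic. In the snippet below, only the roughly 57 percent federal match rate and the 15.4 percent general-funds share come from the article; the dollar amounts are hypothetical round numbers invented for illustration. The key is that federal Medicaid dollars enter both the numerator and the denominator of the “total funds” calculation, which inflates Medicaid’s apparent share of the budget.

```python
# Hypothetical illustration of "general funds" vs. "total funds" shares.
# Dollar amounts are invented; only the ~57% federal match rate and the
# 15.4% general-funds share come from the article.

state_general_funds = 100.0  # state-raised revenue (hypothetical, $bn)
medicaid_state = 15.4        # states' own Medicaid spending: 15.4% of general funds

# If the federal government pays ~57% of total Medicaid, the states' 43%
# share implies a total program of medicaid_state / 0.43.
medicaid_total = medicaid_state / (1 - 0.57)
fed_medicaid = medicaid_total - medicaid_state

fed_other = 15.0  # hypothetical federal grants for other programs

# "Total funds" adds federal dollars to both numerator and denominator.
total_funds = state_general_funds + fed_medicaid + fed_other

share_general = medicaid_state / state_general_funds  # the 15.4% view
share_total = medicaid_total / total_funds            # the inflated view
print(f"General-funds share: {share_general:.1%}")
print(f"Total-funds share:   {share_total:.1%}")
```

Because Medicaid attracts proportionally more federal money than most other state programs, its share of “total funds” always comes out well above its share of what states raise and spend themselves.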
Brian Sigritz, NASBO’s director of state fiscal studies, said, “You are correct that there are several different ways of looking at Medicaid spending that you can use. If you consider just general funds, K-12 easily remains the largest component of general fund spending, as it historically has been.”
Indeed, when you look at NASBO’s historical data (table three of this report), it becomes clear that Medicaid spending, as a proportion of general funds, has remained relatively consistent since 1995 — about 15 percent — in contrast to the popular image of being a drain on state budgets.
Sigritz said that the two figures provide a different picture of state spending. “General funds gives you a sense of spending deriving from state revenue, while total funds gives you a sense of total state expenditures,” he said. “Typically when you discuss overall state budgets you examine the various funding sources that go into them including general funds, other state funds, bonds, and federal funds.”
The Office of the Actuary for Medicare and Medicaid makes this distinction. The 2010 Actuarial Report for Medicaid notes the broad figure, but then takes pains to add: “This amount, however, includes all Federal contributions to State Medicaid spending, as well as spending from State general revenue funds and other State funds (which for Medicaid consists of provider taxes, fees, donations, assessments, and local funds).” The report concludes: “When only State general revenues are considered, however, Medicaid spending constitutes an estimated 16.2 percent of expenditures in 2009, placing it well behind education.”
Antonia Ferrier, a spokeswoman for Hatch, defended the 22-percent figure, noting its wide use. “It is part of their budgets, and there are many different streams of funding that fund those state budgets (including federal funding, taxes, etc.) that fund their many programs,” she said.
But Colleen Chapman, a spokeswoman for the Georgetown University Center for Children and Families, a policy and research center, said, “In the current budget debate, the data are being misused to argue that the Medicaid program in states is out of control and needs to be cut dramatically, when in fact, Medicaid is still much less of state spending than education and has not grown, as a portion of state budgets, in any way close to the mammoth way that others argue it has.”
The Pinocchio Test
We will label this with one of our rarely used categories: TRUE BUT FALSE.
(We still need to get an appropriate icon for this one — suggestions are welcome.)
Yes, the 22-percent figure is a valid number. But it is being used in an inappropriate way, and therefore is misleading. Hatch and Fletcher are only the latest in a long line of public figures — and news outlets — who have seized on this number without apparently realizing that it is the wrong statistic to use. If people want to understand the impact Medicaid is having on state budgets, politicians should begin to use the 15-percent figure — or at the least offer a caveat to the 22-percent number. Otherwise, there might be some Pinocchios in their future.
By: Glenn Kessler, The Fact Checker, The Washington Post, July 5, 2011