Another good paper for a journal club recently appeared in JAMA. What makes this one worth discussing is the research question the investigators posed and how they addressed it. Although the study is not focused on cardiac care, the issues are germane to the literature in our field, and cardiac care certification is also spreading.
The article, titled “Association Between Stroke Center Hospitalization for Acute Ischemic Stroke and Mortality,” focuses on the New York State Stroke Center Designation program — a collaboration among the New York State Department of Health, the American Heart Association (AHA), and the New York State Quality Improvement Organization. Starting in 2004, the program allowed New York hospitals to apply for certification as a “stroke center” if they met a set of Brain Attack Coalition (BAC) criteria and passed an on-site review and inspection.
Findings and conclusions
The researchers analyzed data from roughly 15,000 patients with acute ischemic stroke who were admitted to designated stroke centers and roughly 16,000 who were admitted to nondesignated sites in 2005 and 2006. They found that mortality rates were modestly better at the stroke centers than at the other centers (e.g., 30-day mortality, 10.1% vs. 12.5%; P<0.001). The authors’ conclusion: “Our study suggests that the implementation and establishment of a BAC-recommended stroke system of care was associated with improvement in some outcomes for patients with acute ischemic stroke.”
Interest in these systems is likely to grow. Indeed, an editorialist writing in JAMA uses the study results to support his final statement: “Through the collaborative work of many medical professionals, supportive hospital administrators, EMS personnel, and state legislatures, stroke centers have helped reduce death rates one stroke at a time.” And leaders in the AHA have published an assessment of these types of programs and concluded, “As a part of its commitment to promoting high-quality, evidence-based care for cardiovascular and stroke patients, it is recommended that the American Heart Association/American Stroke Association explore hospital certification programs to develop truly meaningful programs to facilitate improvements in and recognition for cardiovascular disease and stroke quality of care and outcomes.” The JAMA study was funded in part by an AHA grant.
Assessing the study’s purpose
In their abstract, the authors frame the context for their observational study this way: “Although stroke centers are widely accepted and supported, little is known about their effect on patient outcomes.” In the introduction to the article proper, they mention that there is relatively little information “on whether better care at stroke centers improves acute or long-term mortality.” Then, in contrast, is their stated objective: “to evaluate the association between admission to stroke centers for acute ischemic stroke and mortality.” (The italics in all three quotations are mine, for emphasis.)
The authors went to great lengths to take into account that hospitals may provide care for patients with different risk profiles (selection is an important issue because New York mandates that EMS transport patients suspected of having a stroke to a stroke center). Specifically, the authors employed an instrumental variable, which is often used in economic analyses, as a way to limit confounding. We can explore instrumental variables in another journal club; using one was a reasonable approach in this case, although the presumption is that patients go to the closest hospital. If the only stroke patients that go to a nondesignated hospital are those too ill to be transported a longer distance to a stroke center, the instrumental-variable analysis would be invalid.
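For readers who want to see the mechanics, here is a minimal, hypothetical sketch of an instrumental-variable analysis done as two-stage least squares in Python. The choice of instrument (extra travel distance to the nearest designated center), the variable names, and the simulated data are all illustrative assumptions of mine, not the study's actual specification or data.

```python
# Hypothetical two-stage least-squares (2SLS) sketch of an instrumental-variable
# analysis. All variable names and data below are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Simulated data: the instrument (extra travel distance to the nearest designated
# stroke center) influences where a patient is admitted but, by assumption, has
# no direct effect on mortality except through that choice of hospital.
differential_distance = rng.exponential(scale=5.0, size=n)             # miles
unmeasured_severity = rng.normal(size=n)                                # confounder
stroke_center = (rng.normal(size=n) - 0.2 * differential_distance
                 - 0.3 * unmeasured_severity > -1.0).astype(float)     # admission
died_30d = (0.12 + 0.05 * unmeasured_severity
            - 0.02 * stroke_center + rng.normal(0, 0.3, n) > 0.5).astype(float)

# Stage 1: predict stroke-center admission from the instrument alone.
stage1 = sm.OLS(stroke_center, sm.add_constant(differential_distance)).fit()
predicted_admission = stage1.fittedvalues

# Stage 2: regress mortality on the *predicted* admission. The coefficient on
# predicted admission is the IV estimate of the mortality difference associated
# with stroke-center admission, purged (in principle) of confounding by severity.
stage2 = sm.OLS(died_30d, sm.add_constant(predicted_admission)).fit()
print(stage2.params)
```

The whole design rests on the assumption that distance affects outcomes only through the choice of hospital; if the sickest patients are systematically kept at the nearest nondesignated hospital, that assumption breaks down.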
That having been said, the bigger problem with this paper is not in the approach to differences in case mix among the hospitals that were studied. What’s hard to determine is whether the authors are interested in cause and effect (do stroke centers “improve” mortality?) or association (do designated stroke centers simply have better outcomes than centers without that designation?). This distinction — cause/effect versus association — is key to evaluating the study design and the appropriateness of the conclusions. The authors use the word “association” in their stated objective, but the language that they — and the editorialist — use around it points to an interest in the “effect” of stroke centers. So which is it?
Assessing cause and effect
A 2009 study by Lichtman and colleagues had shown that hospitals certified by the Joint Commission as stroke centers performed better (on mortality and readmission metrics) than noncertified hospitals — but that was the case both before and after the stroke-certification program began. In the conclusion of their abstract, Lichtman and her colleagues state, “Cross-sectional studies assessing the effects of stroke center certification need to account for these pre-existing differences.”
The newer JAMA study did not determine whether differences between the designated stroke centers and the other hospitals existed before the former received their designations. Without a before/after analysis, it’s impossible to determine whether the program had any effect. All one can safely claim is that during the period studied, patients admitted to the stroke centers had a modestly better survival rate than patients admitted to the other centers. Given that (1) more centers have received the designation since then, (2) the performance of hospitals designated late might differ from that of hospitals designated early, and (3) the effect of joining is unknown, we cannot fairly assess whether the mortality finding persists today.
Lacking a before/after analysis, the authors tried to prove cause and effect in another way. They compared the stroke centers and the non-stroke centers according to the outcomes of patients with GI hemorrhage and those with acute MI — and found no mortality difference for those conditions. At first glance, that sounds like proof that the stroke designation is what made the difference. The problem is that hospitals do not perform the same across all conditions and may be better at one type of care than another. Indeed, you would expect centers that gravitate toward joining a stroke-center program to be those that are better at caring for stroke patients, not those that excel at care for GI hemorrhage or acute MI. The lack of mortality difference between designated stroke centers and nondesignated centers for patients with each of those conditions does not convince me at all that the association between stroke-center designation and mortality is causal.
A better approach
To determine whether the stroke-center designation itself mattered, the authors might have used a technique called “difference in differences.” It would have involved analyzing — for both the designated stroke centers and the nondesignated centers — the change in mortality rates from the period before to the period after the stroke-designation program was established. Comparing the changes over time between the two types of centers is where the nugget lies. If the designation program were effective, you would expect a greater diminution of mortality (i.e., a greater rate of improvement in outcomes) at the designated centers than at the nondesignated ones. But even that approach has limitations, given that the designated centers might have been improving at a faster rate all along.
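To make the arithmetic concrete, here is a back-of-the-envelope sketch of a difference-in-differences calculation in Python. The mortality figures are invented for illustration; they are not results from the JAMA study.

```python
# Hypothetical difference-in-differences sketch. The mortality figures below are
# invented for illustration; they are not taken from the JAMA study.

# Mortality rates (%) before and after the designation program began.
designated_before, designated_after = 13.0, 10.1          # designated stroke centers
nondesignated_before, nondesignated_after = 13.5, 12.5    # nondesignated hospitals

# Change within each group over time.
change_designated = designated_after - designated_before              # -2.9 points
change_nondesignated = nondesignated_after - nondesignated_before     # -1.0 point

# The difference-in-differences estimate: how much more the designated centers
# improved than the nondesignated ones. A negative value would suggest the
# program was associated with a larger drop in mortality at designated centers.
did_estimate = change_designated - change_nondesignated
print(f"DiD estimate: {did_estimate:.1f} percentage points")          # -1.9
```

Even this comparison assumes the two groups of hospitals would have improved at the same rate absent the program, which is exactly the limitation noted above.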
Back to the research question
What, then, was the aim of the study? If it was to assist a patient in determining the best place to get care in 2007, then the authors probably showed that a stroke-center designation signaled a better-performing hospital, depending on your view of the instrumental-variable analysis. (The relevance to patients in 2011 is also not clear because of the potential difference between late designees and early designees.) If instead the authors sought, as it appears, to prove the value of the stroke-center designation program in improving care, then they clearly fell far short.
When evaluating research articles, start with the aim. Sometimes you need to examine how the authors frame and contextualize that aim — and also how they cast their conclusions — to figure out what they really intended to investigate. With that understanding in hand, you can proceed to assess whether the study design provides evidence that actually answers the research question the authors pose.
What are your thoughts about the consistency between the questions posed by researchers and the answers that their findings provide? Given the JAMA study just discussed, what would you do if you were a policymaker in New York?
Harlan M. Krumholz is a professor of cardiology, epidemiology and public health at Yale University School of Medicine. He blogs at Forbes.