At first, scheduling for COVID-19 vaccinations was tough, but even a month or so into it, chances are you waltzed through an organized, uncrowded series of rooms in a hospital or medical center or drugstore. It’s little short of a miracle. Half the population in the U.S. has had at least one dose of the vaccine, and though the pace is slowing, the number of the vaccinated continues to rise.

Yet there are big questions. For example, reach out to Gavin Yamey, professor of the practice of global health and public policy and director of Duke’s Center for Policy Impact in Global Health. He helped design COVAX, the vaccines element of the Access to COVID-19 Tools (ACT) Accelerator created by the World Health Organization to help the world collaborate to create, manufacture, and equitably distribute tests, treatments, vaccines, and other tools to respond to COVID-19. He’s working constantly on the crisis in India, where around 3 percent of people are fully vaccinated, but he finds a second to respond to a reporter. “Every request just makes me cry inside, to be honest,” Yamey says of the tide of calls he receives asking him to explain what’s going on. “We all need some time off. But we’re never going to get it.

“Obviously the rapid development of astoundingly safe, high-efficacy vaccines in under a year, from lab to jab in eleven months, is one of the most extraordinary recent scientific achievements.”

The problem is that, once the vaccines were developed, the system seems to have broken down. Yamey gives a rapid-fire rundown of problems with international vaccine distribution. Addressing the current situation that he has heard called “vaccine apartheid” and “vaccine nationalism,” he says, “Only 0.3 percent of all global vaccine doses have gone to low-income countries.” He’s glad the Biden administration has backed the intellectual-property protection waiver. This will enable plants around the world to produce vaccines for their own countries’ use, without fear of a lawsuit by the vaccines’ developers, like Moderna and Pfizer. But it comes late, and poor countries around the world are suffering.

“It’s unethical,” he says of the current situation, but then makes a point that, for outcomes, is even more important: “It’s terrible public health.” That is, despite the astonishing scientific accomplishment of the development of the vaccine, the point of the science was to protect people from COVID-19, and what seem like the simple next steps to make that happen aren’t being taken. A scientific miracle itself doesn’t accomplish anything. If it’s not implemented, the breakthrough is only conceptual.

Welcome to implementation science, a somewhat new discipline that addresses how science affects the world outside the study. “Implementation science is bridging that gap between what science tells us we should do and actually getting it done in whatever setting we choose,” says Leah Zullig, associate professor in population health sciences and in medicine at Duke’s School of Medicine. All kinds of great science address all kinds of amazing things. But implementation science focuses on getting the benefits of the studies to the real world. Similar to that outworn 1980s term “technology transfer,” which focused on getting science into companies and the economy, implementation science focuses more on people, addressing uptake by populations and policymakers.

The issues implementation science examines vary from situation to situation. Regarding COVID-19, in rich countries like the United States and the United Kingdom, governments hoard doses they cannot get their populations to take. Yamey says fear that booster shots may eventually be needed may be keeping rich countries from sharing doses they’ve acquired, but he’s not impressed: “The 1.5 billion excess doses that the rich world isn’t using needs to be given to COVAX.” He adds, “We have got to get better at understanding some of the drivers of scale-up related to the intervention itself.” That’s implementation science.

Zullig puts it in simple terms. She commonly works in community-based health-care centers that don’t have the resources of a Duke or UNC hospital system. “What works in academic medicine,” she says, “doesn’t necessarily work in these places.” Whether it’s vaccine hesitancy, mask-wearing, or getting a cancer patient to take all of rather than some of his or her medication, implementation science tries to figure out how best to address things like worries people have about vaccines, complexities policymakers face in deciding what course to pursue, and other hurdles between science and the good it can do.

It’s a new area of study, only generating literature for the last decade or so. Says Zullig, “The MeSH term was only introduced in 2019.” That’s Medical Subject Heading, the thesaurus of terms used by the National Institutes of Health’s National Library of Medicine. Like many people working in implementation science, Zullig says the area has developed because when things like pandemics arise, standard scientific practice has a hard time responding quickly. “There’s a statistic that says only 7 percent of science is ever translated and makes its way into the real world,” she says, citing a widely shared 2011 paper from the Journal of the Royal Society of Medicine. “And when it does, it takes an average of seventeen years.” That pace isn’t going to cut it when a new virus shows up or the planet is on fire.

To explain how implementation science studies not just whether an intervention works in a lab but whether it can work in practice, Zullig cites her own work on an intervention for cancer patients. Whether the patients’ health improved was one outcome measured, but the other was “Can we actually deliver the intervention using only resources available in a normal clinic? Not air-dropping in research staff, but actually accomplishing it with resources” that the clinic would have on hand. She says she’ll encounter clinicians who say, “Gosh, we don’t actually have a nurse or a pharmacist in our clinic who has free time, or who has the training, who knows how to handle oncology drugs.” Often a study will provide an online resource for participants, but the local clinic will say, “Well, that’s super, but our patients don’t actually have Internet access at home. Forget wifi—we don’t even have dial-up.

“And as these things are uncovered, we continually reassess these outcomes,” Zullig says. “Both the clinical outcomes and the implementation outcomes, and we make changes.” She describes a study of a hospital-based mobility program. It was a standard intervention: patients were required to move, and their movement was correlated with muscle health. “Super basic, right?” But during COVID-19, with patients confined to their rooms and physical contact restricted, they couldn’t walk in corridors or be helped in their walking without adding risk. “So there was this whole adaptation process.” The program changed to a self-directed movement program, with patients moving within their room with the help of a family member or caregiver rather than a scientist.

They’re still studying the effect of movement on muscle, “but we’re doing it entirely differently.” The protocol changed mid-study to adapt to the needs of the moment—and to help get to the health outcome the study was designed to pursue. She describes this as hybrid study design: “In parallel, we’re collecting information about the clinical effectiveness of an intervention, and in tandem, also collecting information about how it can be implemented better. So we’ve got two primary outcomes.”

This is essential if science needs to work fast, and it will help, she thinks, improve people’s response to science in the first place. Because of the necessity of getting grants and focusing on small problems that seem certain to yield results, science can seem, on the surface, not to be accomplishing much. According to Zullig: “I think one of the reasons society has such disdain for science is because science has historically been for the sake of science. We’re still focused on the next fancy publication and the largest grant. That isn’t actually helping people, and you can see that with our promotion track record.” That is, researchers advance by publishing papers. “People think, ‘Ta-da! I got my Annals of Internal Medicine paper…everything will change!’ And the truth is the people who are actually going to use this knowledge are too darn busy to take the time to adjust to the Annals paper, which likely didn’t have enough detail in it for them to do anything with it anyway.”

In her work on medication adherence among cancer survivors, under common circumstances, “we have lots of opportunity to construct really beautiful, pragmatic, and hybrid and adaptive trials” that can gather evidence for decades about very small changes given one group of patients or another. “We’re not really harming patients by withholding that, because they’re still getting the best cutting-edge cancer treatment. We can wait for a super-strong evidence base stemming from trials.”

In a pandemic—or, say, with current crises in the climate—that won’t work. “We need to be geared toward moving more rapidly. And our infrastructures are set up on an NIH timeline, and we’re really not nimble enough to move when we have the need for rapid change. So we need to understand how much evidence do we really need to practically be able to move forward? Because our goal is to make societal impact.”

Making societal impact is the central point of implementation science. Lavanya Vasudevan, assistant professor of family medicine and community health and assistant research professor of global health, does research on immunization demand, equity, and acceptance, so she’s been busy in recent months. “In the vaccine hesitancy space,” she says, “there’s not usually just one factor.” Vaccine hesitancy grows from “political climate, historical influences, and you have things related to individual or social experiences.” One study she’s working on involves creating an online resource that physicians offer patients to consult, with the goal of helping physicians identify “where on the hesitancy spectrum [patients] fall,” and providing information appropriate to their positions.

“So the study involves not just the efficacy of the adaptive components, but do we screen accurately, do we assign the right intervention components to the right people? And does it actually translate to vaccination, because ultimately that’s your goal: You want people to be vaccinated.”

Like Zullig, she talks about hybrid studies, “in which while you’re assessing the effectiveness of an intervention, you also collect implementation outcomes.” That is, instead of just studying, say, which kind of information is most convincing to which kind of person, the same study works on identifying the best way to convert that convincing information into an actual vaccination.

“And the Web-based resource I talked about, our approach to that was to actually include our end-users in the design of that resource. That falls within one of the implementation-science constructs: in sort of engaging your stakeholders, engaging your end-users, from the beginning.” Leaving stakeholders out—and leaving the hoped-for outcome out—ends up with the kind of small, repetitive study Zullig was describing, which may get funding but doesn’t yield useful results and doesn’t inspire the population with confidence.

“We just don’t have time to do it the way we used to do it,” says Subhrendu Pattanayak, Oak Foundation Distinguished Professor of environmental and energy policy at the Sanford School of Public Policy. “I want to make a plug for avoiding type III errors,” he says. In science, type I errors are false positives—believing you’ve found a result in data that don’t actually support it; a type II error is a false negative—believing data do not show a looked-for effect, when in fact they do. He describes a type III error as “precisely answering a pointless question.” Doing science, writing papers, getting trustworthy results on tiny topics because real-world complexity has been removed. “We control away the things that are challenging about the world, and that’s what’s making our science irrelevant at some point, because they’ve become so precisely right, they are exactly wrong.

“It’s like no one cares about this answer at this point, right?” Pattanayak says. He, too, notes that science moves slowly, and onto that he piles arrogance: “Scientists who are not going to the field, are saying, ‘Hey, you dumb policymakers or you practitioners on the ground, why don’t you wait for us to generate the evidence around your practice?’ Congress calls for something important. The funding agency gets the money, it puts out an RFP, the best scientists respond. Five, six, seven, eight years to do the study.” Then follow-ups, publication battles, and then, “eventually, the paper gets published, right? Congress asked for this result eighteen years ago. It’s totally irrelevant. I mean, the glaciers have melted, the penguins are gone.” Evidence-based practice has become an accepted principle in health care, embracing the most up-to-date science in the practice of medicine. “I’m saying flip it around,” Pattanayak says. “Practice-based evidence. Build the evidence base around what people are already trying to do on the ground. I would much rather get a rough answer to the right question.”

He’s looked into issues surrounding the adoption of cleaner-burning cookstoves in India and Senegal, where smoke from traditional stoves causes significant health problems, and into toilet adoption in places where sanitation is an issue. He’s critical of research that draws simple conclusions, like the finding that giving people money to spend on cleaner cookstoves or better toilets furthers their adoption. Another study showing that people with limitless funds will buy toilets doesn’t help, nor does yet another expensive toilet designed by engineers funded by philanthropists. What will help is figuring out how to get people to adopt the needed change. “We’ve learned over the years that you need to educate, you need to bribe, you need to shame them.” He published a paper called “Shame or Subsidy,” addressing how to get people in a part of rural India with high child mortality to adopt improved toilets. (Conclusion: Subsidies sure help, but the social pressure of shaming “can improve sanitation worldwide.”)

The point, once again, isn’t abstractly learning that paying people or shaming them is more likely to get them to improve sanitation; it’s getting them to actually improve sanitation.

He returns to the case of COVID-19. “The vaccine is a beautiful thing,” Pattanayak says, an example of science at its best, identifying and rapidly solving a problem. “But the trials should also include the human dimensions of it. Giving it to people who are Republican, giving it to people who are Democrat, giving it to people who are religious, you know: The variation should be built in. Because once the technology is in the field, its rollout is going to face not only the supply chain, and keeping the vaccine available, but prejudices, biases, opinion, because that is the field reality of vaccination rollout, right?”

Right. Both Zullig and Vasudevan speak directly about that issue, and their approach is the same: You start by accepting what Pattanayak calls field reality. “The role of implementation science is to really think about what sort of rules and strategies can we use to communicate with people where they are, to help them understand what science would direct them to do,” Zullig says. “We have to take in this really contextually rich situation. In this instance, whether it’s right or wrong doesn’t really matter. It’s important that there is this social-political thing that’s impacting different populations differently. We don’t control for it in the way that trials will control for a variable, but we recognize it as a variable we have to address. We embrace it as part of the beautiful chaos of implementation science.”

Some of the work of implementation science rolls back on itself, yielding extra dividends. Many people cite the confusing recommendations about face masks as an example of early failure in communication about COVID-19. And now that masks are becoming optional, the message seems equally muddled, and many people seem reluctant to give them up. Implementation science has looked at this. “Amazingly,” Zullig says, “it’s actually easier to get people to adopt a new behavior than it is to get them to stop one they think is good or reasonable or scientifically sound.” Though Duke scientists focus on actually practicing implementation science, the discipline “has suffered, ironically, from so many folks focusing on the science of implementation science so much that they’re not actually doing it. It’s in its adolescence,” she says. “Trying to bulk up the science, but at the same time really focus on the transition work. It’s tough to have those two tracks going at the same time.” Yamey notes that implementation science still doesn’t get the funding other scientific disciplines get, so the applicable, boots-on-the-ground solutions it can help put in place don’t come. “Knowledge continues to sort of be left on the shelf.”

Just the same, Vasudevan points out that steps toward a less-iterative, more-responsive science are being taken. “I think the field is slowly moving toward the idea that implementation is not just something you study at the end. Implementation is something you start thinking about from the beginning, instead of just once you have an effective and proven intervention.” As it should be, Zullig says. “If we could just require that this implementation piece be part of science, and really have that be a required perspective,” implementation would stop being an afterthought and would start guiding science. “Then when we find something that works, we could actually hit the ground running, and in an informed way, to actually include people’s health. Like, how novel would that be?”

Grant proposals currently require a dissemination plan, discussing where results will appear, what conferences researchers might attend. “What if we actually changed that dissemination plan to how we’re actually going to tell people about our research findings and what we’re actually going to do with it?” she said. “I think that would really kind of shift what’s important to us in the scientific community.” 
