It’s no secret that the United States is in the midst of an opiate epidemic. Almost 60,000 people died last year from overdoses, and overdose is now the leading cause of death among Americans under the age of 50. Physicians and our patients have finally started the difficult conversation about what it will take to stop the suffering. But what’s been missing from this conversation is that this is not the first opiate epidemic to strike the United States. And what happened during that first epidemic almost 100 years ago offers valuable lessons for doctors and patients trying to navigate the second.
Opium is an ancient drug. Poppies have been cultivated since the beginning of civilization, and the ancient Egyptians used opium as a medicine. By the time Avicenna’s Canon of Medicine was written around the turn of the first millennium CE, opium’s pain-relieving and cough-suppressing effects had been well-described, as had its side effects. The drug spread to Europe and the New World in the form of laudanum, a mixture of opium, alcohol, and spices that was used to treat just about anything. By 1834, opium was the most commonly prescribed medication in the United States. And no wonder: In many ways, it was a miracle drug. From a medical system that often inflicted torture in the form of healing, opium comforted the suffering and dying, stopped the disabling coughs of tuberculosis, and slowed the gastrointestinal ailments that were common prior to modern sanitation.
But opium would soon transform from a miracle drug to a national scourge. The historian David Courtwright estimates that 0.7 people per 1,000 were addicted to opiates in 1842, a rate that ballooned to 4.6 per 1,000 by the 1890s. To put this in context, the CDC currently estimates an addiction rate of 6 people per 1,000. So what happened? Courtwright offers several compelling arguments in his 2001 book Dark Paradise.
Cholera, dysentery, and yellow fever swept the United States in the 1840s, and the treatment of choice for all three was opium. Then the suffering of the Civil War led doctors to prescribe copious amounts of the drug: Union requisitions alone show 10 million pills and three million ounces of powders. As a result, hundreds of thousands of patients were exposed to opiates. New technologies also made opiates far more effective and addictive. Morphine was first isolated in 1817, but it really caught on after the introduction of the hypodermic syringe in 1856. In an era before automobiles or effective public transportation, physicians would often leave a syringe of morphine with their patients to self-administer. And finally, quack cure joints opened where addicts were given copious supplies of the drugs.
As the human toll of the opium habit became evident, doctors led the charge against the epidemic. They developed alternative painkillers, including new medications like aspirin and the antipyretics from which acetaminophen descends. They developed new theories of addiction which, while colored by the prejudices of the day, remain some of the first attempts to understand addiction beyond simple moralism. More importantly, they set about reforming medical education; “Opiates are the lazy physician’s remedy” was their rallying cry. They also advocated for new laws, including restricting the sale of opiates to those with a valid prescription. In fact, by the time heroin was introduced to the American market as a cough medication, the medical establishment was firmly against it, and the American Medical Association even called for a ban. In 1914, the Harrison Act was passed, dramatically decreasing opiate prescribing, but even before then the epidemic of addiction had largely petered out.
So how did this all happen again? The parallels are striking. Physicians who wanted to treat their patients with an admittedly effective medication exposed hundreds of thousands to opiates. And for a variety of reasons, the prescribing got out of hand: war and disease in the nineteenth century; the designation of pain as the so-called fifth vital sign, patient satisfaction scores, and misleading research in the twentieth. Technological innovation made the risk of addiction higher, from the morphine syringe to “crush and snort” OxyContin. Even the pill mills churning out opiates by the thousands are basically copies of the quack cure joints. Probably the biggest difference between the two epidemics is the role of drug companies. Bayer’s heroin was never accepted by the medical community, but by the 1990s, Purdue Pharma had drug marketing, including the funding of friendly physicians, down to a science.
But our predecessors offer valuable lessons as we try to muddle our way through the second opiate epidemic. Medical schools are increasingly teaching about safe opiate prescribing. Doctors have again lobbied for laws restricting the drugs, including limits on the duration of new prescriptions, mandatory monitoring, and prescriber databases to help detect abuse. And there’s been a renewed focus on multidisciplinary pain treatment, including opiate alternatives. The drugs might be new, but the idea is old-fashioned. But there are also warnings we should heed. The Harrison Act has been blamed for snuffing out nascent drug treatment centers; fortunately, in the second epidemic, we’ve recognized the importance of drugs like methadone and buprenorphine in treating addiction. The first opiate epidemic also took a long time to pass — almost half a century. We’re barely twenty years into this epidemic. Experience suggests we have a long way to go. In any event, physicians helped get us into both the first and second opiate epidemics. And just like before, we’re going to have to help get all of us out.
Adam Rodman is a hospitalist and the host of the podcast Bedside Rounds, which can be found on iTunes and Stitcher. He can be reached on Twitter @AdamRodmanMD.
Image credit: Shutterstock.com