The development of toxicology as a distinct specialty began during the 18th and 19th centuries (Table 1–2).118 The mythological and magical mystique of poisoners began to be gradually replaced by an increasingly rational, scientific, and experimental approach to these agents. Much of the poison lore that had survived for almost 2000 years was finally debunked and discarded. The 18th-century Italian Felice Fontana was one of the first to usher in the modern age. He was an early experimental toxicologist who studied the venom of the European viper and wrote the classic text Traité sur le Venin de la Vipère in 1781.79 Through his exacting experimental study on the effects of venom, Fontana brought a scientific insight to toxicology previously lacking and demonstrated that clinical symptoms resulted from the poison (venom) acting on specific target organs. During the 18th and 19th centuries, attention focused on the detection of poisons and the study of toxic effects of xenobiotics in animals.111 Issues relating to adverse effects of industrialization and unintentional poisoning in the workplace and home environment were raised. Also during this time, early experience and experimentation with methods of GI decontamination took place.
Development of Analytical Toxicology and the Study of Poisons
The French physician Mathieu Joseph Bonaventure Orfila (1787–1853) is often called the father of modern toxicology.111 He emphasized toxicology as a distinct, scientific discipline, separate from clinical medicine and pharmacology.11 He was also an early medicolegal expert who championed the use of chemical analysis and autopsy material as evidence to prove that a poisoning had occurred. His treatise Traité des Poisons (1814)116 evolved over five editions and was regarded as the foundation of experimental and forensic toxicology.154 This text classified poisons into six groups: acrids, astringents, corrosives, narcoticoacrids, septics and putrefiants, and stupefacients and narcotics.
A number of other landmark works on poisoning also appeared during this period. In 1829, Robert Christison (1797–1882), a professor of medical jurisprudence and Orfila's student, wrote A Treatise on Poisons.32 This work simplified Orfila's poison classification schema by categorizing poisons into three groups: irritants, narcotics, and narcoticoacrids. Less concerned with jurisprudence than with clinical toxicology, O.H. Costill's A Practical Treatise on Poisons, published in 1848, was the first modern clinically oriented text to emphasize the symptoms and treatment of poisoning.36 In 1867, Theodore Wormley (1826–1897) published the first American book devoted exclusively to poisons, Micro-Chemistry of Poisons.48,157
During this time, important breakthroughs in the chemical analysis of poisons resulted from the search for a more reliable assay for arsenic. Arsenic was widely available and was the suspected cause of a large number of deaths. In one study, arsenic was used in 31% of 679 homicidal poisonings.149 A reliable means of detecting arsenic was much needed by the courts.
Until the 19th century, poisoning was mainly diagnosed by its resultant symptoms rather than by analytic tests. The first use of a chemical test as evidence in a poisoning trial occurred in the 1752 trial of Mary Blandy, who was accused of poisoning her father with arsenic.99 Although Blandy was convicted and hanged publicly, the test used in this case was not very sensitive and depended in part on eliciting a garlic odor upon heating the gruel that the accused had fed to her father.
During the 19th century, James Marsh (1794–1846), Hugo Reinsch (1809–1884), and Max Gutzeit (1847–1915) each worked on this problem. Assays bearing their names are important contributions to the early history of analytic toxicology.100,111 The "Marsh test" to detect arsenic was first used in a criminal case in 1840 during the trial of Marie Lafarge, who was accused of using arsenic to murder her husband.139 Orfila's trial testimony that the victim's viscera contained minute amounts of arsenic helped to convict the defendant, although subsequent debate suggested that contamination of the forensic specimen may have also played a role.
In a further attempt to curtail criminal poisoning by arsenic, the British Parliament passed the Arsenic Act in 1851. This bill, which was one of the first modern laws to regulate the sale of poisons, required that the retail sale of arsenic be restricted to chemists, druggists, and apothecaries and that a poison book be maintained to record all arsenic sales.15
Homicidal poisonings remained common during the 19th century and early 20th century. Infamous poisoners of that time included William Palmer, Edward Pritchard, Harvey Crippen, and Frederick Seddon.149 Many of these poisoners were physicians who used their knowledge of medicine and toxicology in an attempt to solve their domestic and financial difficulties by committing the "perfect" murder. Some of the poisons used were aconitine (by Lamson, who was a classmate of Christison), Amanita phalloides (by Girard), arsenic (by Maybrick, Seddon, and others), antimony (by Pritchard), cyanide (by Molineux and Tawell), digitalis (by Pommerais), hyoscine (by Crippen), and strychnine (by Palmer and Cream) (Table 1–3).24,86,147,149
In the early 20th century, forensic investigation into suspicious deaths, including poisonings, was significantly advanced by the development of the medical examiner system, which replaced the much-flawed coroner system that had been subject to widespread corruption. In 1918, the first centrally controlled medical examiner system was established in New York City. Alexander Gettler, considered the father of forensic toxicology in the United States, established a toxicology laboratory within the newly created New York City Medical Examiner's Office. Gettler pioneered new techniques for the detection of a variety of substances in biologic fluids, including carbon monoxide, chloroform, cyanide, and heavy metals.49,111
Systematic investigation into the underlying mechanisms of toxic substances also commenced during the 19th century. François Magendie (1783–1855) studied the mechanisms of toxicity and sites of action of cyanide, emetine, and strychnine.47 Claude Bernard (1813–1878), a pioneering physiologist and a student of Magendie, made important contributions to the understanding of the toxicity of carbon monoxide and curare.85 Rudolf Kobert (1854–1918) studied digitalis and ergot alkaloids and authored a textbook on toxicology for physicians and students.83,114 Louis Lewin (1850–1929) was the first person to intensively study the differences between the pharmacologic and toxicologic actions of xenobiotics. Lewin studied chronic opium intoxication, as well as the toxicity of carbon monoxide, chloroform, lead, methanol, and snake venom. He also developed a classification system for psychoactive drugs, dividing them into euphorics, phantastics, inebriants, hypnotics, and excitants.93
The Origin of Occupational Toxicology
The origins of occupational toxicology can be traced to the early 18th century and to the contributions of Bernardino Ramazzini (1633–1714). Considered the father of occupational medicine, Ramazzini wrote De Morbis Artificum Diatriba (Diseases of Workers) in 1700, which was the first comprehensive text discussing the relationship between disease and workplace hazards.53 Ramazzini's essential contribution to patient care is epitomized by the addition of a standard question to a patient's medical history: "What occupation does the patient follow?"51 Altogether Ramazzini described diseases associated with 54 occupations, including hydrocarbon poisoning in painters, mercury poisoning in mirror makers, and pulmonary diseases in miners.
In 1775, Sir Percivall Pott proposed the first association between workplace exposure and cancer when he noticed a high incidence of scrotal cancer in English chimney sweeps. Pott's belief that the scrotal cancer was caused by prolonged exposure to tar and soot was confirmed by further investigation in the 1920s, indicating the carcinogenic nature of the polycyclic aromatic hydrocarbons contained in coal tar (including benzo[a]pyrene).72
Dr. Alice Hamilton (1869–1970) was another pioneer in occupational toxicology whose rigorous scientific inquiry had a profound impact on linking chemical xenobiotics with human disease. A physician, scientist, humanitarian, and social reformer, Hamilton became the first female professor at Harvard University and conducted groundbreaking studies of many different occupational exposures and problems, including carbon monoxide poisoning in steelworkers, mercury poisoning in hatters, and wrist drop in lead workers. Hamilton's overriding concerns about these "dangerous trades" and her commitment to improving the health of workers led to extensive voluntary and regulatory reforms in the workplace.60,65
Advances in Gastrointestinal Decontamination
Using gastric lavage and activated charcoal to treat poisoned patients was introduced in the late 18th and early 19th century. A stomach pump was first designed by Alexander Monro secundus in 1769 to administer neutralizing substances to sheep and cattle for the treatment of bloat.24 The American surgeon Philip Physick (1768–1837) and the French surgeon Baron Guillaume Dupuytren (1777–1835) were two of the first physicians to advocate gastric lavage for the removal of poisons.25 As early as 1805, Physick demonstrated the use of a "stomach tube" for this purpose. Using brandy and water as the irrigation fluid, he performed stomach washings in twins to wash out excessive doses of tincture of opium.25 Dupuytren performed gastric emptying by first introducing warm water into the stomach via a large syringe attached to a long flexible sound and then withdrawing the "same water charged with poison."25 Edward Jukes, a British surgeon, was another early advocate of poison removal by gastric lavage. Jukes first experimented on animals, performing gastric lavage after the oral administration of tincture of opium. Attempting to gain human experience, he experimented on himself, by first ingesting 10 drams (600 grains, approximately 39 g) of tincture of opium and then performing gastric lavage using a 25-inch-long, 0.5-inch-diameter tube, which became known as Jukes' syringe.105 Other than some nausea and a 3-hour sleep, he suffered no ill effects, and the experiment was deemed a success.
The principle of using activated charcoal to adsorb xenobiotics was first described by Scheele (1773) and Lowitz (1785), but the medicinal use of activated charcoal dates to ancient times.35 The earliest reference to the medicinal uses of activated charcoal is found in Egyptian papyrus from about 1500 b.c.35 The activated charcoal used during Greek and Roman times, referred to as "wood charcoal," was used to treat those with anthrax, chlorosis, epilepsy, and vertigo. By the late 18th century, topical application of activated charcoal was recommended for gangrenous skin ulcers, and internal use of an activated charcoal-water suspension was recommended for use as a mouthwash and in the treatment of bilious conditions.35
The first hint that activated charcoal might have a role in the treatment of poisoning came from a series of courageous self-experiments in France during the early 19th century. In 1813, the French chemist Bertrand publicly demonstrated the antidotal properties of activated charcoal by surviving a 5 g ingestion of arsenic trioxide that had been mixed with activated charcoal.68 Eighteen years later, before the French Academy of Medicine, the pharmacist Touery survived an ingestion consisting of 10 times the lethal dose of strychnine mixed with 15 g of activated charcoal.68 One of the first reports of activated charcoal used in a poisoned patient was in 1834 by the American Hort, who successfully treated a mercury bichloride–poisoned patient with large amounts of powdered activated charcoal.3
In the 1840s, Garrod performed the first controlled study of activated charcoal when he examined its utility on a variety of poisons in animal models.68 Garrod used dogs, cats, guinea pigs, and rabbits to demonstrate the potential benefits of activated charcoal in the management of strychnine poisoning. He also emphasized the importance of early use of activated charcoal and the proper ratio of activated charcoal to poison. Other toxic substances, such as aconite, hemlock, mercury bichloride, and morphine, were also studied during this period. The first activated charcoal efficacy studies in humans were performed by the American physician B. Rand in 1848.68
But it was not until the early 20th century that an activation process was added to the manufacture of activated charcoal to increase its effectiveness. In 1900, the Russian Ostrejko demonstrated that treating activated charcoal with superheated steam significantly enhanced its adsorbing power.35 Despite this improvement and the favorable reports mentioned, activated charcoal was only occasionally used in GI decontamination until the early 1960s, when Holt and Holz repopularized its use.63
The Increasing Recognition of the Perils of Drug Abuse
Although the medical use of opium was promoted by Paracelsus in the 16th century, the popularity of this agent was given a significant boost when the distinguished British physician Thomas Sydenham (1624–1689) formulated laudanum, which was a tincture of opium containing cinnamon, cloves, saffron, and sherry. Sydenham also formulated a different opium concoction known as "syrup of poppies."82 A third opium preparation called Dover's powder was designed by Sydenham's protégé, Thomas Dover; this preparation contained syrup of ipecac, licorice, opium, saltpeter, and tartaric acid.
John Jones, the author of the 18th century text The Mysteries of Opium Reveal'd, was another enthusiastic advocate of its "medicinal" uses.82 A well-known opium user himself, Jones provided one of the earliest descriptions of opioid addiction. He insisted that opium offered many benefits if the dose was moderate but that discontinuation or a decrease in dose, particularly after "leaving off after long and lavish use," would result in such symptoms as sweating, itching, diarrhea, and melancholy. His recommendation for the treatment of these withdrawal symptoms included decreasing the dose of opium by 1% each day until the drug was totally withdrawn. During this period, English writers who became well-known opium addicts included Elizabeth Barrett Browning, Samuel Taylor Coleridge, and Thomas De Quincey. De Quincey, author of Confessions of an English Opium Eater, was an early advocate of the recreational use of opiates. The famed Coleridge poem Kubla Khan referred to opium as the "milk of paradise," and De Quincey's Confessions suggested that opium held the "key to paradise." In many of these cases, the initiation of opium use for medical reasons led to recreational use, tolerance, and dependence.82
Although opium was first introduced to Asian societies by Arab physicians some time after the fall of the Roman Empire, the use of opium in Asian countries grew considerably during the 18th and 19th centuries. China's growing dependence on opium was spurred on by the English desire to establish and profit from a flourishing drug trade.133 Opium was grown in India and exported east. Despite Chinese protests and edicts against this practice, the importation of opium persisted throughout the 19th century, with the British going to war twice in order to maintain their right to sell opium. Not surprisingly, by the beginning of the 20th century, opium abuse in China was endemic.
In England, opium use continued to increase during the first half of the 19th century. During this period, opium was legal and freely available from the neighborhood grocer. To many, its use was considered no more problematic than alcohol use.58 The Chinese usually self-administered opium by smoking, a custom that was brought to the United States by Chinese immigrants in the mid-19th century; the English use of opium was more often by ingestion, that is, "opium eating."
The liberal use of opioids as infant-soothing agents was one of the most unfortunate aspects of this period of unregulated opioid use.83 Godfrey's Cordial, Mother's Friend, Mrs. Winslow's Soothing Syrup, and Quietness were among the most popular opioids for children.88 They were advertised as producing a natural sleep and recommended for teething and bowel regulation, as well as for crying. Because of the wide availability of opioids during this period, the number of acute opioid overdoses in children was substantial and would remain problematic until these unsavory remedies were condemned and removed from the market.
With the discovery of morphine in 1805 and Alexander Wood's invention of the hypodermic syringe in 1853, parenteral administration of morphine became the preferred route of opioid administration for therapeutic use and abuse.70 A legacy of the generous use of opium and morphine during the United States Civil War was "soldiers' disease," referring to a rather large veteran population that returned from the war with a lingering opioid habit.125 One hundred years later, opioid abuse and addiction would again become common among the US military serving during the Vietnam War. Surveys indicated that as many as 20% of American soldiers in Vietnam were addicted to opioids during the war, partly because of their widespread availability and high purity there.130
Growing concerns about opioid abuse in England led to the passing of the Pharmacy Act of 1868, which restricted the sale of opium to registered chemists. But in 1898, the Bayer Pharmaceutical Company of Germany synthesized heroin from opium (Bayer also introduced aspirin that same year).140 Although initially touted as a nonaddictive morphine substitute, problems with heroin use quickly became evident in the United States. Illicit heroin use reached epidemic proportions after World War II and again in the late 1960s.71 Although heroin use appeared to have leveled off by the end of the 20th century, an epidemic of prescription opioid abuse exploded during the first decade of the 21st century.96
Ironically, during the later part of the 19th century, Sigmund Freud and Robert Christison, among others, promoted cocaine as a treatment for opiate addiction. After Albert Niemann's isolation of cocaine alkaloid from coca leaf in 1860, growing enthusiasm for cocaine as a panacea ensued.78 Some of the most important medical figures of the time, including William Halsted, the famed Johns Hopkins surgeon, also extolled the virtues of cocaine use. Halsted championed the anesthetic properties of this drug, although his own use of cocaine and subsequent morphine use in an attempt to overcome his cocaine dependency would later take a considerable toll.115 In 1884, Freud wrote Über Coca,27 advocating cocaine as a cure for opium and morphine addiction and as a treatment for fatigue and hysteria.
During the last third of the 19th century, cocaine was added to many popular nonprescription tonics. In 1863, Angelo Mariani, a Frenchman, introduced a new wine, "Vin Mariani," that consisted of a mixture of cocaine and wine (6 mg of cocaine alkaloid per ounce) and was sold as a digestive aid and restorative.106 In direct competition with the French tonic was the American-made Coca-Cola, developed by J.S. Pemberton. It was originally formulated with coca and caffeine and marketed as a headache remedy and invigorator. With the public demand for cocaine increasing, patent medicine manufacturers were adding cocaine to thousands of products. One such asthma remedy was "Dr. Tucker's Asthma Specific," which contained 420 mg of cocaine per ounce and was applied directly to the nasal mucosa.78 By the end of the 19th century, the first American cocaine epidemic was underway.108
Similar to the medical and societal adversities associated with opiate use, the increasing use of cocaine led to a growing concern about comparable adverse effects. In 1886, the first reports of cocaine-related cardiac arrest and stroke were published.126 Reports of cocaine habituation occurring in patients using cocaine to treat their underlying opiate addiction also began to appear. In 1902, a popular book Eight Years in Cocaine Hell described some of these problems. Century Magazine called cocaine "the most harmful of all habit-forming drugs," and a report in The New York Times stated that cocaine was destroying "its victims more swiftly and surely than opium."42 In 1910, President William Taft proclaimed cocaine to be "public enemy number one."
In an attempt to curb the increasing problems associated with drug abuse and addiction, the 1914 Harrison Narcotics Act mandated stringent control over the sale and distribution of narcotics (defined as opium, opium derivatives, and cocaine).42 It was the first federal law in the United States to criminalize the nonmedical use of drugs. The bill required doctors, pharmacists, and others who prescribed narcotics to register and to pay a tax. A similar law, the Dangerous Drugs Act, was passed in the United Kingdom in 1920.58 To help enforce these drug laws in the United States, the Narcotics Division of the Prohibition Unit of the Internal Revenue Service (a progenitor of the Drug Enforcement Administration) was established in 1920. In 1924, the Harrison Act was further strengthened with the passage of new legislation that banned the importation of opium for the purpose of manufacturing heroin, essentially outlawing the medicinal uses of heroin. With the legal venues to purchase these drugs now eliminated, users were forced to buy from illegal street dealers, creating a burgeoning black market that still exists today.
The introduction to medical practice of the anesthetic agents nitrous oxide, ether, and chloroform during the 19th century was accompanied by the recreational use of these agents and the first reports of volatile substance abuse. Chloroform "jags," ether "frolics," and nitrous oxide parties became a new type of entertainment. Humphry Davy was an early self-experimenter with the exhilarating effects associated with nitrous oxide inhalation. In certain Irish towns, especially where the temperance movement was strong, ether drinking became quite popular.102 Horace Wells, the American dentist who pioneered the use of nitrous oxide as an anesthetic, became dependent on chloroform and later committed suicide.
Until the last half of the 19th century, aconite, alcohol, hemlock, opium, and prussic acid (cyanide) were the primary agents used for sedation.33 During the 1860s, new, more specific sedative–hypnotics, such as chloral hydrate and potassium bromide, were introduced into medical practice. In particular, chloral hydrate was hailed as a wonder drug that was relatively safe compared with opium and was recommended for insomnia, anxiety, and delirium tremens, as well as for scarlet fever, asthma, and cancer. But within a few years, problems with acute toxicity of chloral hydrate, as well as its potential to produce tolerance and physical dependence, became apparent.33 Mixing chloral hydrate with ethanol, each of which inhibits the other's metabolism through competition for alcohol dehydrogenase, was noted to produce a rather powerful "knockout" combination that became known as a "Mickey Finn," allegedly named after a Chicago saloon proprietor.16 Abuse of chloral hydrate, as well as other new sedatives such as potassium bromide, would prove to be a harbinger of 20th-century sedative–hypnotic abuse.
Absinthe, an ethanol-containing beverage that was manufactured with an extract from wormwood (Artemisia absinthium), was very popular during the last half of the 19th century.84 This emerald-colored, very bitter drink was memorialized in the paintings of Degas, Toulouse-Lautrec, and Van Gogh and was a staple of French society during this period.12 α-Thujone, a psychoactive component of wormwood and a noncompetitive γ-aminobutyric acid type A (GABAA) receptor blocker, is thought to be responsible for the pleasant feelings, hyperexcitability, and significant neurotoxicity associated with this drink.67 Van Gogh's debilitating episodes of psychosis were likely exacerbated by absinthe drinking.144 Because of the medical problems associated with its use, absinthe was banned throughout most of Europe by the early 20th century.
Native Americans have used peyote in religious ceremonies since at least the 17th century. Hallucinogenic mushrooms, particularly Psilocybe mushrooms, were also used in the religious life of Native Americans. These were called "teonanacatl," which means "God's sacred mushrooms" or "God's flesh."121 Interest in the recreational use of cannabis also accelerated during the 19th century after Napoleon's troops brought the drug back from Egypt, where its use among the lower classes was widespread. In 1843, several French Romantics, including Balzac, Baudelaire, Gautier, and Hugo, formed a hashish club called "Le Club des Hachichins" in the Parisian apartment of a young French painter. Fitz Hugh Ludlow's The Hasheesh Eater, published in 1857, was an early American text espousing the virtues of marijuana.91
A more recent event that had significant impact on modern-day hallucinogen use was the synthesis of lysergic acid diethylamide (LSD) by Albert Hofmann in 1938.66 Working for Sandoz Pharmaceutical Company, Hofmann synthesized LSD while investigating the pharmacologic properties of ergot alkaloids. Subsequent self-experimentation by Hofmann led to the first description of its hallucinogenic effects and stimulated research into the therapeutic use of LSD. Hofmann is also credited with isolating psilocybin as the active ingredient in Psilocybe mexicana mushrooms in 1958.106