Are Generics the Same As Brand Name Drugs?
An excerpt from “Generic: The Unbranding of Modern Medicine,” by Jeremy A. Greene.
Faced with a broad shelf of nearly identical allergy remedies, how do you know which one to choose? Perhaps you choose the least expensive, thinking, if they are all the same, why pay more? Yet in the back of your head there is a lingering concern that buying a cut-rate drug will expose you to untold risks. Or perhaps you choose the most expensive, thinking, why take chances with my health? But this purchase, too, comes with the nagging concern that you are simply being fleeced. The generic drug provokes a paradox of similarity: we believe, and yet we do not believe, that similar things are the same. We believe, and yet we do not believe, that similar things are different. And we are not sure which forms of proof should convince us of their exchangeability.
When generic drugs were broadly introduced to the American public in Senator Gaylord Nelson’s hearings on the competitive problems of the drug industry in the late 1960s, their interchangeability with brand-name drugs was assumed on the grounds of chemical identity. If a tablet of Parke-Davis’s antibiotic Chloromycetin were ground up with mortar and pestle, subjected to mass spectroscopy alongside a similar tablet of generic chloramphenicol, and both were found to contain 250 mg of the active compound, should these two tablets not be fully exchangeable as therapeutics?
But chemistry is not the only relevant science of similarity. Two pills with the same amount of the same active therapeutic ingredient can cause different effects on the human body if they dissolve at different times in the stomach, if their active principles appear at different rates in the bloodstream, or if their binders, fillers, dyes, and shellac coatings influence their action in the human body in different ways. A drug is not reducible to a molecule. Even the simplest tablet needs to be understood as a complex technology for delivering a molecular agent from the outside world to a series of inner bodily sites necessary for pharmacologic action.
In the second half of the twentieth century, chemical claims of drug equivalence were complicated by a host of other scientific demonstrations of difference, from disciplines as disparate as physiology, epidemiology, economics, and marketing sciences. Entirely new fields of investigation emerged to document the differences between brand-name and generic drugs, and these fields took entirely different objects of study to demonstrate similarity: the physiology of absorption, the molecular biology of cell-surface drug receptors, the managerial sciences of quality assurance. These new sciences of difference could also be inverted to form sciences of similarity: the set of rules, laws, assays, and metrics used to prove when two objects were similar enough to be exchangeable.
In an insightful exploration of the role regulators play in the development of new scientific fields, Dominique Tobbell and Daniel Carpenter have narrated the pathway by which older physical and chemical proofs of similarity were displaced by more complex protocols for demonstrating biological equivalence, or “bioequivalence.” They point to the Hatch-Waxman Act of 1984, which included bioavailability testing in healthy volunteers as a part of the Food and Drug Administration’s Abbreviated New Drug Application pathway for the approval of all generic drugs, as proof of the arrival of biological over chemical modes of proving similarity.
But the problem of generic equivalence was not simply resolved by political bipartisanship nor by replacing a chemical proof of similarity with a biological proof of similarity. A few months after the passage of the Hatch-Waxman Act, Medicine in the Public Interest, a group headed by clinical pharmacologist Louis Lasagna, filed a Freedom of Information Act request demanding “greater transparency” into how the FDA actually determined the bioequivalence of the first three generic versions of Roche’s Valium. On the first anniversary of the passage of the bill that bore his name, Senator Orrin Hatch issued a denial that the science of bioequivalence had solved the problems of brand/generic difference: “It was never my understanding that the acceptance of the terms of last year’s bill would somehow preclude this discussion. Some of the research-based companies accepted the generic drug title out of necessity because they thought it was worth it in order to obtain patent restoration; but not, as did Representative Waxman or the generics, because they believe that bioequivalence by FDA standards always means therapeutic equivalence.” According to Hatch, the “underlying premise” of the bill was not the ratification of bioequivalence as a gold standard but rather “the principle that generic drugs must be the same as the innovator in all significant respects.”
Generic drugs have never been identical in all respects to their original counterparts; rather, as Hatch suggests, their exchangeability depends on being similar in ways that are agreed to matter. Beyond bioequivalence, many forms of similarity and difference were still open for contest. Competing proofs of therapeutic difference could involve laboratory reagents, high sensitivity measuring scales, gas chromatography, the digestion of a tablet with simulated gastric juices in a glass beaker, spot checks on tablet shapes on assembly lines, bloodstream measurements of therapeutic biomarkers, or the epidemiology of patient experience with brand-generic switching. This book charts the emergence of these multiple sciences of therapeutic similarity as practices that constitute very different ways of proving that biomedical objects are, or are not, the same.
MAKING THINGS THE SAME
For centuries, pharmacists, physicians, manufacturers, and regulators have looked to pharmacopoeiae to adjudicate claims of therapeutic similarity and difference. In earlier chapters we explored the role of these compendia in ordering and (generically) naming the world of therapeutics.
But the pharmacopoeia was always much more than just a book of official names. It was a technology of standardization. The listings under each entry linked words with things, describing protocols of identity, purity, and accuracy so that the reader could determine that the object one held in one’s hands was indeed the therapeutic compound called for by the prescription.
To take an early twentieth-century example, the 1900 (8th) edition of the United States Pharmacopoeia listed the following chemical proofs of identity as part of its entry for the narcotic morphine:
When heated slowly to about 200° C. (392° F.) it assumes a brown color, and when heated rapidly it melts at 254° C. (489.2° F.). Upon ignition, it is slowly consumed without leaving a residue. Its aqueous solution shows an alkaline reaction to red litmus paper . . . Sulphuric acid containing a crystal of potassium iodate gives with Morphine a dark brown color. (Codeine yields a moss-green color, changing to brown, and narcotine a cherry-red color) . . . On adding 4 Cc. of potassium hydroxide T. S. to 0.2 Gm of Morphine, a clear solution, free from any undissolved residue, should result (absence of, and difference from, various other alkaloids), and no odor of ammonia should be noticeable (absence of ammonium salts).
These proofs of therapeutic identity fell into five basic categories: identification, assay, weight variation, content uniformity, and purity. Of these five, identification tests ranked foremost. Some tests were qualitative, in which the addition of readily available reagents like sulphuric acid and potassium iodate should provoke a predictable response. Other tests, like the measurements of melting points or the chromatographic analysis of a drug in solution, were quantitative. Assays evaluated the quantity of drug present in a sample by physical or biological means. Weight variation tests measured pill-to-pill changes in size and therefore in dose.
Content uniformity tests set forth acceptable limits of dose variation within and among drug samples. Purity tests functioned largely to identify substances like codeine, ammonium, or “other alkaloids” that were not supposed to be present in a sample of morphine. In 1950, a sixth standard was added, the disintegration test, which measured the ability of a tablet to break apart in solution.
The standards of the USP were adopted as the official protocols for proof of identity, purity, and uniformity by the Pure Food and Drug Act of 1906. Yet, even though the United States Pharmacopoeial Convention had a quasi-public status since its initial meeting in the US Senate chambers in 1820, it remained a private concern of physicians, pharmacists, and ethical drug manufacturers. Because the USP was private, its ability to articulate standards relied on cooperation and collaboration among the major drug firms, all of whom nonetheless sought to differentiate their own product lines from one another in their own marketing strategies.
As discussed in chapter 1, until the middle of the twentieth century, the ethical drug industry largely sold versions of the same standard articles of the materia medica. All ethical firms benefitted from some minimal consumer confidence in pharmacopoeial standards, but the standards were understood to be a double-edged sword. The prominent placement of the mark of U.S.P. or N.F. on one’s product separated the scientific marketing of ethical drugs from the proprietary drug manufacturer. But at the same time individual firms also sought to distinguish their own in-house assays and quality control procedures as somehow superior to the standard minimum. As historians of technology have noted in many other fields, from rifle making to electrical engineering, industrial standards can increase popular and professional confidence in products while still reserving to individual manufacturers some specific know-how, allowing them to claim that their own products differ in the very criteria of quality, uniformity, and identity that the standards are supposed to establish as the same.
Each tablet or spoonful of Squibb’s morphine products was therefore comparable on one register with the basic standards described in the United States Pharmacopoeia, but only fully exchangeable with other versions of the product made by Squibb—or so Squibb’s marketing department would have you believe. The distance between the two standards, public (USP) and private (Squibb), described the added value a physician, pharmacist, or consumer would derive from purchasing a Squibb-branded product. Yet Squibb’s executives and scientific staff, along with their counterparts at competing firms like Eli Lilly, Parke-Davis, and Upjohn, were at the same time members of the USP Revision Committee and played a key role in the formation of the public USP standards against which they measured their own products. As in many other industries, the same people who created the private standards also created the public standards.
Architects of drug standards built in a buffer between these public and private specifications, a vague space to be occupied by trade secret and know-how. For example, the specifications for digitalis—a prominent cardioactive agent used widely over the course of the twentieth century—did not include biological assays until 1916, even though firms like Parke-Davis had been using them for decades to claim their own digitalis was produced to higher specifications. When the first biological assay standard for digitalis was published in United States Pharmacopoeia IX (1916), it recommended that the drug be assayed by the “one-hour frog method,” in which digitalis-poisoned frogs were compared with a control group; the details of the frog method were left intentionally vague. Only in 1939, after extensive critique of the variability of the United States Pharmacopoeia Digitalis Reference Standard, did ten collaborating laboratories pool the results of more than sixty thousand frogs to establish a clear standard of digitalis performance. In the meantime, firms like Parke-Davis could continue to claim that their product conformed to higher standards.
In the middle decades of the twentieth century, the intersection of physiology and pharmacology produced new problems for the framers of public pharmacopoeial standards. Addressing the American Society of Hospital Pharmacists in August of 1960, a few months after the Kefauver hearings on generic names, Gerhard Levy of the University of Buffalo School of Pharmacy complained that pharmacists possessed a very limited set of tools to evaluate similarity and difference in pharmaceutical products. Levy pointed to the pioneering work conducted by researchers at the Canadian Food and Drug Laboratories in Ottawa—including J. A. Campbell and D. G. Chapman—who began in the early 1950s to study the absorption of micronutrients like riboflavin from different commercially available multivitamin products on the market and found that none of the tablets delivered an adequate dose of riboflavin into the bloodstream.
Though all of the products contained adequate amounts of the chemical riboflavin in tablet form, the best of these tablets delivered only 80 percent of the expected dose to the bloodstream, while the worst of them delivered only 14 percent. The limited absorption of vitamins might have minimal public health significance, Levy pointed out, but the Canadian researchers had also found similar variability among long-acting preparations of the important antituberculosis drug p-aminosalicylate. As the identity, purity, and dosage of the active agent could be verified in all products, the problem with p-aminosalicylate had not been anticipated by existing standards of similarity. The shellac coatings that manufacturers used to produce extended-release formulations varied widely in their application and affected the disintegration time of the capsule in the stomach and small intestine, and therefore its absorption and overall clinical effectiveness.
As Chapman and colleagues investigated the situation further, they noted that pharmacopoeial proofs of equivalence had no criteria for measuring how and when these drugs were actually absorbed into the body.
Chapman and Campbell’s work laid the foundations for a new science of “biopharmaceutics” that asked what kinds of knowledge besides the quality, purity, and dosage of the active ingredient might be necessary to make sure that two drugs were the same. Smith, Kline & French, for example, had by 1957 patented and trademarked their own shellacking technique—the Spansule—which consisted of a series of individually shellacked pellets contained within a gelatin capsule shell. When Chapman’s group compared SKF’s Dexedrine Spansule with seven other shellacked amphetamine products, they found the amount of drug excreted in the urine after consuming 15 mg extended-release capsules of amphetamine from different manufacturers varied considerably. In one preparation, only 5 mg of the dose was ever delivered; in another preparation, the full 15 mg was absorbed all at once.
The problem was not limited to shellac. Almost any aspect of a drug’s physical manifestation, it seemed, could affect absorption. In 1960 a Canadian manufacturer of the blood-thinning agent dicoumarol reported complaints by patients and physicians after it changed the shape of its tablet to make it easier to break in half. Patients on dicoumarol required repeated blood and urine tests to titrate a precise dose response: too much drug and the patient might bleed to death; too little drug and the patient risked a life-threatening blood clot. Yet patients on the new tablets immediately experienced a drop in their blood levels, even though the new tablets scored identically on USP tests of purity, identity, content, and disintegration compared with the old tablets. Even after the company reformulated the shape of its tablets again with new attention to what it called “dissolution time,” it received new complaints from physicians that the resultant tablet was now too potent. After advising physicians to retitrate their patients on the new formulation, the research laboratory published a letter to the editor in the Canadian Medical Association Journal noting that two lessons had been learned from this episode:
1. In vitro data cannot be used to interpret what may happen in vivo.
2. Different brands of products, although similarly composed with respect to active ingredient content, may not provide similar physiological responses. A brand name has implications beyond commercialism.
As he addressed the community of hospital pharmacists in 1960, Gerhard Levy presented these and other new proofs of significant differences among drugs that met all compendial standards as evidence for the inadequacy of existing sciences of similarity. His plea was joined by several other physiologically minded pharmacologists such as John G. Wagner, the Upjohn Professor of Clinical Pharmacology at the University of Michigan. Like Levy, Wagner became interested in the new problems posed by timed-release capsules and enteric-coated tablets. Wagner had created an animal model for drug absorption by training starved dogs to lie quietly on X-ray tables, feeding them enteric-coated tablets filled with radiopaque contrast agent, and then recording serial X-rays of their bellies to see how fast the contents of the tablets were actually released into the dogs’ digestive tracts. Wagner’s X-rays produced undeniable visual proof that the fate of seemingly identical pills could vary widely once they were inside the body.
Chiding those physicians and pharmacists still willing to believe that therapeutic actions are “due only to the inherent activity of the molecular structure of the compound,” Wagner urged the medical and pharmaceutical profession to demand further proofs of therapeutic similarity and difference. “Since dissolution, diffusion, absorption, transport, binding, distribution, adsorption on and transfer into cells, metabolism, and excretion are also intimately involved in drug action,” he concluded, “the molecular structure, although vitally important, is only one factor in drug action.”
It is not surprising that Levy and Wagner’s research found strong financial support from the research-based pharmaceutical industry. Wagner’s connections to industry were evident in the title of his endowed Upjohn chair at the University of Michigan and in his cross-appointment at the Pharmacy Research Section of the Upjohn Corporation. At both Upjohn and the University of Michigan, Wagner explored the ways in which therapeutic activity might be influenced by the delivery of a drug to its target site. The new science of biopharmaceutics documented variations in the biological availability, or bioavailability, of a drug once consumed. For most orally consumed drugs, this meant studying the process by which the material inside a given capsule found its way out of the capsule and into the gut, how content in the lumen of the gut found its way across a series of membranes into the bloodstream, and how a pharmaceutical agent in the bloodstream made its way to its ultimate site of action.
Over the course of a prolific research career, Wagner documented in detail that the path from tablet to target often was not a straight line but an S-shaped curve: flat on both sides, but steep in the middle. Very low doses of drugs resulted in very low absorption, very high doses of drugs resulted in very high levels of absorption, but in the middle—especially in drugs that were relatively insoluble in water and had a tight margin between an insufficient dose and a toxic dose—therapeutic function could be dramatically altered by tiny changes in dissolvability or absorbability of otherwise molecularly equivalent drugs. Wagner’s S-shaped curves formed a positive critique as well—expressible in a gathering series of laboratory practices, pharmacological pedagogy, and regulatory pathways—of how to create new in vitro and in vivo therapeutic tests. One subset of this field concerned the measurement and modeling of how drugs circulated through different compartments of the body, which Wagner and University of California, San Francisco, pharmacologist Eino Nelson began to call pharmacokinetics. Though Wagner was not the first to coin the term, his work with Nelson on the biologically relevant differences of chemically identical drugs would play a key role in the spread of this new basic science of pharmacology.
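For readers who want to see the shape Wagner described, an S-shaped dose-absorption relationship can be sketched with a logistic function. The snippet below is purely illustrative and not drawn from Wagner’s work or from this book; the midpoint and steepness parameters are invented for demonstration. It simply shows why, on such a curve, a small change in effective dose matters far more in the steep middle region than at either flat extreme.

```python
import math

def absorbed_fraction(dose_mg, midpoint_mg=10.0, steepness=0.8):
    """Illustrative logistic (S-shaped) absorption curve: nearly flat
    at very low and very high doses, steep in the middle region.
    Parameters are hypothetical, chosen only to show the shape."""
    return 1.0 / (1.0 + math.exp(-steepness * (dose_mg - midpoint_mg)))

# Near the midpoint, a 1 mg shift moves the absorbed fraction substantially...
mid_change = absorbed_fraction(10.5) - absorbed_fraction(9.5)
# ...while the same 1 mg shift at a low dose barely registers.
low_change = absorbed_fraction(1.5) - absorbed_fraction(0.5)
print(round(mid_change, 3), round(low_change, 5))
```

This is the sense in which, for a drug with a narrow margin between an insufficient and a toxic dose, tiny differences in dissolution between chemically identical tablets could push a patient off the flat shoulders of the curve and into its steep middle.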
From Generic: The Unbranding of Modern Medicine by Jeremy A. Greene. Reprinted by permission of Johns Hopkins University Press.
Jeremy A. Greene is author of Generic: The Unbranding of Modern Medicine (Johns Hopkins University Press, 2014) and an associate professor in medicine and history of medicine at Johns Hopkins University School of Medicine in Baltimore, Maryland.