
Do Physicians Have a Duty to Warn a Patient’s Family?

Two medical professionals discuss a patient's medical record.

Exploring the ethical and legal issues around doctors sharing medical records and providing recommendations to the family members of the patients they treat.

Published June 1, 2002

By Fred Moreno, Dana Van Atta, Jill Stolarik, and Jennifer Tang

Image courtesy of Freedomz via stock.adobe.com

What guidelines should doctors follow regarding the disclosure of information to potentially affected family members and the genetic testing of children? In two widely discussed cases, Pate v. Threlkel and Safer v. Estate of Pack, judges grappled with whether physicians have a duty to warn family members of patients. Heidi Pate’s mother suffered from medullary thyroid carcinoma, an autosomal dominant disorder. Three years after Pate’s mother received treatment, Pate was diagnosed with the same disease. She sued, arguing that her mother’s doctors had a duty to warn her mother of the risk of genetic transmission and to recommend testing of any children.

The Florida Supreme Court ruled that if the standard of care were to warn a patient of the genetically transferable nature of a condition, as Pate alleged, then the intended beneficiaries of the standard would include the patient’s children. In other words, the patient’s children would be entitled to recover for a breach of the standard of care. However, in light of state laws protecting the confidentiality of medical information, the court found no requirement that a doctor warn a patient’s children directly. Rather, the court held that in any circumstances in which a doctor has a duty to warn of a risk of inherited disease, that duty is satisfied by warning the patient.

Donna Safer’s father was treated over an extended period of time for colon cancer associated with adenomatous polyposis coli, another autosomal dominant disorder. Almost two decades after his death, Safer was diagnosed with metastatic colon cancer associated with adenomatous polyposis coli. Safer also sued, arguing that her father’s doctor had a duty to warn her of the risk to her health.

Two Additional Significant Features

Two additional features of the case were significant. First, Safer’s mother testified that on at least one occasion she asked the doctor if what he referred to as an “infection” would affect her children and was told not to worry. Second, Safer contended that careful monitoring of her condition would have improved her medical outcome.

The New Jersey Superior Court found no essential difference between this case, with its “genetic threat,” and traditional duty-to-warn cases involving the menace of infection or threat of physical harm. The court concluded that a duty to warn in the genetics context would be manageable, commenting that those at risk are easily identified. The court failed to state how the duty to warn might be discharged, especially in cases involving small children.

This decision is of concern to many physicians who are worried about the scope and ramifications of a broad duty to warn. In 1996, following the Safer decision, the New Jersey legislature passed a law prohibiting disclosure of genetic information about an individual. The limited exceptions include disclosure to blood relatives for purposes of medical diagnosis, but only after the individual is dead.

It is important to remember that with predictive information, more is not necessarily better. In addition to possible psychological harms, family members may face discrimination if genetic information finds its way into their medical records or becomes part of their knowledge base and so must be disclosed on applications for insurance. An awareness of this problem may be behind a New York law that prohibits any person in possession of information derived from a genetic test from incorporating that information into the records of a non-consenting individual who may be genetically related to the tested individual.

Also read: Genetic Privacy: A War Fought on Many Fronts

A Case Against ‘Genetic Over-Simplification’

A graphical representation of a DNA helix and chromosomes.

Who are we? Why do we behave as we do? What explains why some die of illness at the age of 50 while others live past 100? How can we improve the human condition?

Published June 1, 2002

By Fred Moreno, Dana Van Atta, Jill Stolarik, and Jennifer Tang

Image courtesy of ustas via stock.adobe.com.

The answers to these questions are coded in our genes — or so the story goes in the popular media and in some corners of the scientific establishment.

“It’s a heroic story with a dark side,” said Garland E. Allen, Ph.D., Professor of Biology at Washington University and a specialist in the history and philosophy of biology, at a recent gathering at The New York Academy of Sciences (the Academy). Harking back to the eugenics movement of the early 20th century, modern genetic science is fraught with both promise and danger, Allen said, and “genomic enthusiasm” should be tempered with a good dose of historical awareness.

Eugenics in Context

Charles B. Davenport, the father of the eugenics movement in the United States, defined his fledgling field as “the science of human improvement by better breeding.” In attempting to apply Mendelian genetics to society’s ills, Davenport and his fellow eugenicists believed the problem — whether alcoholism, mental illness, or the tendency to simply “make trouble” — was in the person, not the system. The real culprit, therefore, was the individual’s defective biology, and biologists held the key to fixing the defect.

During the first four decades of the 20th century, eugenics gave credibility to American elites in their efforts to restrict the inflow of immigrants of “inferior biological stock” from southern and eastern Europe, culminating in the Immigration Restriction Act of 1924. The new science also provided a rationale for the compulsory sterilization of institutionalized individuals considered unfit for reproduction.

By 1935, 30 states had enacted sterilization laws that targeted habitual criminals, epileptics, the “feebleminded,” and “morally degenerate persons.” Their proponents saw them as preventive, not punitive. In their view, higher fertility rates among the less productive, genetically defective members of the population posed a threat to society, not least because of the high cost of maintaining them in prisons, in mental institutions, or on the dole.

“Social history in the United States between 1870 and 1930 was characterized by a search for order,” said Allen. “It was a period characterized by the maturation of the Industrial Revolution, rapid urbanization and growing social problems. There was a widespread sense of disorder, and many felt there was a need to do something about it.” This collective malaise made eugenics the “magic bullet” of its day.

As American as Apple Pie

Eugenics peaked during the 1930s, at the height of the Depression. Interestingly, the new science and its attendant policy program appealed to members of all social classes. Eugenics validated wealth and privilege as the birthright of the genetically superior. The rising union movement, arguably the greatest threat to the status quo, was rife with Italians and Jews, two of the groups deemed “socially inadequate.” At the same time, with competition over scarce jobs at an all-time high, eugenics fed into the anti-immigrant sentiments of the working class.

With their blatant racism, xenophobia, questionable ethics and tendency to blame the victim, eugenicists might impress us today as screwballs on the lunatic fringe of science. Actually, however, nothing could be further from the truth.

Theodore Roosevelt was just one of many highly regarded Americans who praised the science of eugenics. In his 1913 letter to Charles Davenport, Roosevelt wrote: “Any group of farmers who permitted their best stock not to breed, and let all the increase come from their worst stock, would be treated as fit inmates for an asylum.” Alexander Graham Bell himself served on the Board of Scientific Directors of the Eugenics Record Office, founded in 1910 as the country’s leading eugenics research and education center. In its day, the eugenics movement was mainstream and as American as apple pie.

Scientific Underpinnings

Taking its cue from advances in agriculture, eugenic science also emulated the efficiency movement in industry. “Eugenic reproductive scientists were the counterparts of the efficiency experts on the factory floor,” said Allen. In the early 20th century, farmers and industrialists alike turned to science for guidance in bringing about control and standardization.

If popular for the wrong reasons, eugenics nonetheless increased our understanding of human beings as genetic organisms. Davenport and other eugenically-minded human geneticists helped illuminate the genetic origins of a number of physical disabilities, for example, including color blindness, epilepsy and Huntington chorea. Instead of proceeding cautiously, however, Davenport and his colleagues applied the new genetic paradigm zealously and indiscriminately. All human intellectual and personality traits, they hypothesized, were ultimately reducible to heredity.

As it turns out, their methods were just as flawed as their theories. Commenting on a family study of epilepsy — rigorous for its time — Allen pointed to two methodological weaknesses: First, humans have small families compared to animals, which makes statistical modeling difficult at best. Second, research in the early 20th century was hampered by a lack of accurate information. Interviews, anecdotal accounts, and rumor were the stuff of scientific data at a time when medical record keeping was relatively haphazard.

Finally, the absolute privileging of heredity over environment trapped eugenicists in a form of circular thinking. If pellagra, a condition caused by vitamin B deficiency, was observed to run in a family, the disease must be genetically based, they thought, rather than rooted in poverty and shared nutritional deficits.

A Call for Balance

Allen warned that the genetic myopia of yesterday’s science is being recapitulated today. From shyness to homosexuality and from depression to infidelity, everything is in our genes, if we’re to trust the information in recent cover stories in Time, Business Week, and U.S. News & World Report, among other reputable publications. “These claims are as tenuously based now,” asserted Allen, “as they were in the 1920s.”

The most serious dangers of all, however, lie in the policy implications of the new genetic determinism. If a person is genetically predisposed to sensitivity to smog, why should the government commit itself to cleaning it up? Why should parents bother spending time and energy on raising a child who carries the criminality gene? And why should insurance companies pay for the care of those with genetic mutations that “cause” bipolar disorder, diabetes or cancer? We’ve seen this unhealthy marriage of scientific and political agendas before, Allen said.

Allen also argued for a more integrated approach to research. Social and biological scientists have been studying different groups, and never the twain shall meet. We’d gain a more complete picture of problems and their causation by funding integrated studies that join the perspectives of sociologists and biologists, he said. This approach would correct the current fixation on genes as bearers of the whole truth.

When it comes to the lessons of eugenics, Allen said the “that was then, this is now” attitude is worst of all. It can, indeed, happen today. He concluded by encouraging scientists who reject simplistic genetic ideas to step forward, articulate a balanced point of view and oppose the “geneticization” of the public discussion and its potentially dangerous consequences, sooner rather than later.

Also read: The Primordial Lab for the Origin of Life

The Enigma Surrounding the Brain’s Amygdala

An old black and white diagram denoting parts of the human brain.

By studying the amygdala’s function in both human and animal brains, we can better understand drug treatment and addiction.

By Brian A. McCool, PhD

Image courtesy of Sergey Kohl via stock.adobe.com.

About 180 years ago, not long after the New York Academy of Sciences was founded as the Lyceum of Natural History in New York City, the amygdala, an almond-shaped structure within the basal ganglia of the brain, was first described as a discrete anatomical entity deep in each of the temporal lobes. But the behaviors governed by the left and right amygdala have remained subject to interpretation ever since.

While it is generally accepted that the amygdala is somehow responsible for regulating emotions, diverse experimental systems and approaches have until now prevented a unified appreciation of its function. To contribute to the ongoing, evolutionary process that is shaping our understanding of this important brain region, 205 basic and clinical scientists recently attended an important conference on the subject in Galveston, Texas.

Ultimately, it was agreed that the amygdala generally appears to be an arbitrary collection of some 20 different cell groups that can be divided into at least four behaviorally functional units. Together, these units determine how the brain integrates sensory and cognitive information to interpret the emotional significance of an event or thought.

Regulating Human Behaviors

Several scientific sessions focused on the behaviors regulated by the human amygdala. A number of the sessions highlighted the amygdala’s role in the emotionally motivated assessment of environment and memory.

Using patients with amygdala damage, the University of Iowa’s Ralph Adolphs, PhD, described studies indicating that this brain region is active when individuals make socially-relevant subjective judgments, in this case related to the interpretation of facial expressions associated with negative emotions. Importantly, the interpretation or expression of declarative “facts” regarding negative emotion appears intact in these individuals.

The Amygdala’s Role in Cognitive Processing

Using PET scans, Raymond Dolan, MD, Institute of Neurology in London, U.K., found that this subjective interpretation of negative facial expressions by normal individuals did not require cognitive recognition of the face. Together, these findings suggest that the amygdala’s role in the cognitive process relating to these judgments could occur independent of attention or awareness.

A number of presentations focused on the potential role of the amygdala in human behavioral and neuropathologic disorders. For example, Scott Rauch, MD, PhD, Massachusetts General Hospital, Wayne Drevets, MD, National Institute of Mental Health, and Michael Trimble, MD, Institute for Neurology, London, U.K., reported that amygdala activity or anatomy is altered in a number of different psychological/neurological disorders. A presentation by Anna Rose Childress, PhD, VA Addiction Treatment Research Center, University of Pennsylvania, clearly illustrated this point. Childress presented data indicating that experimentally induced drug craving in recovering cocaine addicts was associated with increased activity in both right and left amygdala and in anterior cingulate cortex.

Importantly, in preliminary studies, both drug-based and behavioral interventions that attenuate self-reported desire for cocaine appear to inhibit amygdala activation during these craving states. However, in contrast to pharmacologic treatment, behavioral modification therapy increased brain activity in the orbito-frontal cortex, suggesting that the relative levels of activity between the “emotional” amygdala and the “cognitive” cortex may be an important determinant in the process leading to both drug addiction and recovery.

Animal Models of Behavior

Extensive studies of the amygdala in several mammalian species have provided substantial insight into animal correlates of human amygdala function. This is especially true of the non-human primate studies presented by David Amaral, PhD, University of California, Davis.

In these studies, experimental bilateral lesions in the amygdala of adult nonhuman primates demonstrate that this brain region is intimately involved with the subjective evaluation of novel environmental or social stimuli. Specifically, animals with lesions were less reluctant than normal controls to approach and interact with novel objects, and were more “uninhibited” during social interactions with unknown monkeys.

While these results clearly complement findings in humans with amygdala damage, Amaral reported that, in contrast to adults, bilateral lesions in infant monkeys did not affect responsiveness to novel objects but did cause more reluctance to participate in social interactions. These latter findings emphasize our lack of understanding regarding the long-term influence of social and physical development on amygdala function and underscore the need for additional investigations in non-human primates.

A number of reports focused on the behavioral role of the amygdala in rodents. Historically, studies using this animal system have provided the impetus for most of the human studies described above. In addition, current findings are beginning to provide a detailed understanding of the wealth of neurochemical and cellular mechanisms that appear to influence amygdala-dependent emotional learning in rats and, presumably, humans.

For example, Jim McGaugh, PhD, Center for the Neurobiology of Learning & Memory, University of California Irvine, presented an overview of his work in rats. It implicates specific neurotransmitter systems, namely those for norepinephrine and acetylcholine, as chemical mediators regulating amygdala activity related to emotional and stress-influenced memory formation.

Cellular & Molecular Insights into Amygdala Function

Similarly, Michael Davis, PhD, Emory University, presented recent findings indicating that amygdala glutamate receptors, specifically the N-methyl-D-aspartate isoform, are intimately involved with the ability of rats to extinguish fear-associated memories (also known as “extinction”). Importantly, manipulations of this particular membrane protein can enhance extinction, suggesting that this receptor may be an attractive target for therapies designed to resolve memories that elicit pathologic fear, as in posttraumatic stress disorder. Together, these findings emphasize the complexity and apparent wealth of neurochemical mechanisms that govern neuronal activity in the amygdala.

The most obvious advantage that animal models provide over studies in humans or in non-human primates is the relative ease with which basic biologic processes may be directly investigated. Denis Paré, PhD, Rutgers, The State University of New Jersey, described the unique properties of a particular amygdala subdivision, the intercalated cell bodies. He concluded that this subdivision might help establish the timing and context of in-flowing sensory information, potentially representing a physiological mechanism that would help distinguish a “fearful” event from an innocuous one.

Similarly, Hans-Christian Pape, PhD, Otto von Guericke University, Magdeburg, Germany, presented data indicating that amygdala neurons have intrinsic, rhythmic membrane oscillations that may aid in their communications with other brain regions.

Finally, Paul Chapman, PhD, Cardiff University, Cardiff, Wales, U.K., provided an overview of our knowledge regarding the long-term alterations in amygdala neurotransmission associated with fear learning. Chapman noted that the mechanisms underlying memory-related, long-term amygdala adaptations appear to be distinct from those involved in other brain regions.

These findings emphasize that we are just beginning to appreciate the fundamental physiology regulating the amygdala’s involvement with “emotional learning.”

Also read: Teaming up to Advance Brain Research


About the Author

Brian A. McCool, PhD, is an assistant professor in the Department of Medical Pharmacology & Toxicology at the Texas A&M University System Health Science Center in College Station, Texas.

The Scientific Mechanics of Cancer

A graphical representation of a damaged DNA helix.

New research illuminates the role of genetic mutations in the diagnosis of cancer. This research has resulted in some promising treatments.

Published June 1, 2002

By Fred Moreno, Dana Van Atta, Jill Stolarik, and Jennifer Tang

Image courtesy of sutadimages via stock.adobe.com.

Cancer researchers are getting ever closer to “understanding the molecular events that underwrite the transformation of a normal cell” into one capable of causing the deaths of millions of people around the world each year, Harold Varmus, MD, recently told a filled auditorium at Hunter College in New York City.

A Nobel laureate, former Director of the National Institutes of Health, and current President of Memorial Sloan Kettering Cancer Center, Dr. Varmus spoke at a mid-March gathering sponsored by The New York Academy of Sciences’ (the Academy’s) Microbiology Forum. “We are working toward understanding the molecular and genetic underpinnings of cancer,” he said.

Armed with this knowledge, physicians will be able to “assess the risk that an individual will develop cancer, prevent disease, diagnose it at the molecular level and, most importantly, treat it with new therapies that are much more precise than in the past.”

Varmus described a series of events in cancer research that have contributed to the understanding of oncogenesis, the changes that turn a normal cell into a malignant one. He spoke first about early events in the history of molecular oncology, then about how some of this knowledge has been applied to the development of a specific cancer therapy. He concluded with a description of recent work in his own lab, developing cancer models in mice.

Cancer and Genetic Mutations

Cancer has its roots in genetic mutations — either changes to the genetic code of non-germ cells (somatic mutations), which may occur spontaneously or in response to environmental agents, or mutations inherited through germ cells. The latter happens much less frequently, Varmus noted. A single mutation may be the first step in the process of oncogenesis, but many other cellular processes must subsequently occur for cancer to develop, he explained. Initiation is the moment normal gene expression is altered as a result of the mutation. If this altered cell fails to maintain normal cellular discipline, tumor maintenance begins. If the altered cell increases in oncogenicity, the process is called progression.

Cancer cells then undergo a loss of growth control — an exaggerated response to growth signals or a failure to respond to inhibitory signals — and they escape from the signals that induce apoptosis, or cell death. Cancer cell growth is dependent on specific interactions between these cells and the host, such as angiogenesis, the induction of blood vessels to the tumor. Genetic instability gives rise to additional mutations, and the cancer cell becomes more oncogenic and may finally develop the capacity to colonize — to break away and travel to distant sites in the body.

In considering potential targets of cancer therapies, Varmus said many researchers have directed their efforts at tumor maintenance, the cellular functions necessary for cancer cells to remain in an oncogenic state. He noted that Steve Martin, a researcher at UC Berkeley, published in 1970 the results of a series of experiments conducted with avian cells.

The Impact of Temperature on Tumor Cells

The cells were infected with a virus capable of converting normal cells into ones with a heightened potential for division or growth (the src mutant of the Rous sarcoma virus). Dr. Martin induced many mutations in the virus stock and found a particular mutant form that would transform cells to an altered state only when the ambient temperature was 35 degrees Celsius or lower. When he took tumor isolates and raised the temperature above 35 degrees, he found that the cells returned to normal.

“With this work, Martin demonstrated that tumor cells require something — in this case temperature — to initiate and maintain the tumor state,” Varmus said. “This experiment defined the maintenance function.” These temperature-sensitive mutations also allowed researchers to make the first genetic probe for a vertebrate gene.

Since 1970 researchers have made many fundamental discoveries about the role that genes play in cancer. They have identified specific genes — many of them encoding enzymes — that, when mutated, contribute to cellular transformation and tumor maintenance, as well as other genes that govern the integrity of the genetic code. Through this, they have discovered that the development of cancer depends on many kinds of mutations — inherited, somatic and multiple mutations. They also have discovered the biochemistry and physiologic properties of cancer gene products.

In addition, researchers have explored transgenes — foreign genes introduced into an organism in the laboratory — and have targeted mutations in mouse gene lines. And some of this genetic information is now used, in a limited way, in patient care, Varmus said. An understanding of genetic information was central to the development of one recently heralded new cancer therapy, Gleevec, a signal transduction inhibitor for patients with chronic myelogenous leukemia (CML). This is a common adult leukemia, with 6,000 new cases a year in the United States.

The Philadelphia Chromosome

Patients may remain in the early chronic phase, the phase in which the disease progresses slowly, for about five years. When the disease enters blast crisis, Varmus said, patients survive about six months, on average.

Virtually all patients with CML have a mutation called the Philadelphia chromosome, in which a piece of chromosome 9 is joined to chromosome 22. At the point where the two chromosomes make contact, the abl oncogene fuses onto the bcr gene. “This fusion gene, bcr-abl, encodes an enzyme (an activated tyrosine kinase) that drives normal myeloid cells into the leukemic state and keeps them there,” he explained.

Gleevec fits in the active site in the enzyme and has a powerful inhibitory effect on the action of not only the enzyme encoded by the bcr-abl fusion gene, but also on two other oncogenes: the kit oncogene and the platelet-derived growth factor (PDGF) receptor. Nearly all patients in the early phase of CML respond when treated with Gleevec. It has produced striking remissions in patients with both CML and another cancer, gastrointestinal stromal cancer.

A Promising Treatment

After 10 days of treatment with Gleevec, patients with CML who previously had evidence of disease throughout the bone marrow show marrow that has returned to normal, with no evidence of the Philadelphia chromosome. Patients can develop resistance to the drug, especially those with late-phase CML. It’s believed that this resistance is mediated by further mutations in the bcr-abl gene. “Patients’ responses to Gleevec demonstrate that bcr-abl activity is key to tumor maintenance, and that maintenance functions in general are potential therapeutic targets,” Varmus said.

“This success has emboldened those of us who work with mouse models to define tumor maintenance functions,” said Varmus. “In my lab we are working with a gene, ras, that is involved in a large number of non-small-cell lung cancers, which are a very common cause of cancer mortality.”

Members of Varmus’s lab are working with mutant mice that carry a transgene: a mutant k-ras gene expressed in a specific type of lung cell (type 2 alveolar epithelial cells). The mutated gene was fused with a genetic unit called a tet operon, which turns the mutated gene on in the presence of the antibiotic doxycycline.

Using these techniques, researchers in Varmus’s lab are able to incite a proliferation of type 2 pneumocytes — tumors — in mice when doxycycline is administered. “If doxycycline is stopped after a few days, the tumor disappears, and there is little evidence of previous cell proliferation,” he said. These experiments suggest that this type of tumor grows in response to mutations in the ras gene, he concluded.

Also read: Cancer Metabolism and Signaling in the Tumor Microenvironment

A Personal Tale of Post-Infectious Encephalitis

A black and white photo of a man analyzing a sample under a microscope, likely taken in the 1950s or 1960s.

Encephalitis, often called sleeping sickness, made an appearance in Buffalo, New York, in 1946. Among the victims who survived was six-year-old Trumbull Rogers, now Associate Editor of the Annals of the New York Academy of Sciences. Below are his recollections of the life-affecting experience.

Published April 1, 2002

By Trumbull Rogers

Paul M. Versage, Hospital Corpsman First Class, USN, examines a blood sample under a microscope. Photograph released September 24, 1963. This was part of an effort to study trachoma, Japanese encephalitis and other infectious diseases. Image courtesy of National Museum of the U.S. Navy/Wikimedia Commons via Public Domain.

My mother’s entry under “Illnesses” in my baby book was precise: I contracted measles on April 6; improved by April 11; but became extremely drowsy by the 12th. Dr. W. Pierce Taylor, our pediatrician, called in another family friend, Dr. Douglas P. Arnold. Together they concurred in a diagnosis –– encephalitis.

Encephalitis, an inflammation of the brain, comes in many varieties, some named for where they were first diagnosed –– Central European, Murray Valley [Australia], Japanese or St. Louis encephalitis. Currently, we are most aware of the variety known as West Nile Virus, an arthropod-borne (arbovirus) infection that made its first appearance in the United States in the summer of 1999. (See West Nile Virus: Detection, Surveillance, and Control, Dennis J. White and Dale L. Morse, Eds., Annals of the New York Academy of Sciences, Vol. 951, 2001, for more on this virus.)

But not all forms of encephalitis are caused by the bite of the “dread tsetse fly” or Culex mosquito. One example is post-infectious (in my case, postmeasles) encephalitis, which is an acute disseminated encephalitis characterized by perivascular lymphocyte and mononuclear cell infiltration and demyelination. It is thought to result from the weakening of the immune system caused by the original measles virus.

1 in 1,000 Cases

According to a 1997 article by Dr. Michael J. McKenna, of the Massachusetts Eye and Ear Infirmary in Boston, post-measles encephalitis “occurs approximately 1 in 1,000 cases. Usually it manifests three to four days following the acute illness and is clinically characterized by seizures, obtundation and coma. The mortality rate of central nervous system involvement is approximately 25%. Half of those who survive have permanent sequellae, including mental retardation, seizures, motor abnormalities and deafness.” (“Measles, Mumps, and Sensorineural Hearing Loss,” by Michael J. McKenna, Immunologic Diseases of the Ear, Joel M. Bernstein et al., Eds., Annals of the New York Academy of Sciences, Vol. 830, 1997, p. 292)

After Drs. Taylor and Arnold made their joint diagnosis, they arranged my transfer to Children’s Hospital in Buffalo. I have no memory of any of this prior to finding myself lying in a large room with high windows. However, I remember hearing the occasional echo of a door closing somewhere far away, receding footsteps and distant voices. I was alone in a place that shifted each time I drifted into consciousness. Events that no doubt spanned only an hour or two seemed like several days.

Little Hope for Survival

Early on, the doctors told my parents there was little hope for my survival aside from one slim chance: a new drug had shown some success in treating the condition, but it was experimental. They wanted to try it on me, if my parents were willing. I’m not certain, but this drug was probably a corticosteroid, even though it was not widely available for human use at the time. Although it has never been proved that steroids are effective against post-measles encephalitis, many physicians use them today in treating this disease. I’m told my parents’ decision to let the doctors use the drug –– whatever it was –– saved my life.

I regained consciousness in my hospital room several days or perhaps a week after my arrival there. But I was not completely cured. My left arm was paralyzed and I had lost the ability to speak. Although there may have been other symptoms, these are the ones I recall most vividly.

By paralyzed, I mean that my left arm, when left to its own devices, flexed so that my curled fingers rested against my left shoulder. To keep the arm straight, the doctors attached it to a splint, which made lying on my left side awkward and uncomfortable. It also meant that I needed assistance when I wanted or was required to turn over.

When my nurse changed the bandage, at least once daily, she positioned my elbow near the center of the splint. She then forced my arm down until it lay flat. Then she wrapped the clean bandage around both, securing the arm in place. I don’t recall feeling any pain during this process, though later I would.

Returning Home

In the matter of talking, I remember being restricted to two rudimentary forms of communication: “Uh-huh” (for yes) and “Uh-uh” (for no). This condition lasted for what, in retrospect, seems like a long time, an impression that is borne out by my baby book. It has me beginning to talk on April 27, the day after my seventh birthday and 16 days after I entered the hospital.

After recovering from post-measles encephalitis, the author (foreground, right) accompanied his family to Christmas Cove, Maine, to spend the summer of 1946 near where his godparents lived. Left to right: the author’s sister, Grace Wilcox, brother, Danforth William, and mother, Grace Danforth Rogers

On Friday, May 3, I was taken home and put to bed in one of the two second-floor bedrooms that looked out on the backyard. I was now freed of my hospital existence and could continue my recovery in the familiar surroundings, sounds and smells of home. But I still wore my splint and needed around-the-clock nursing.

I have no idea how long my convalescence lasted, but it probably continued until the end of May. Although I’m sure my brother and sister were curious about what had happened to me, I don’t remember seeing very much of them. I’m sure their visits were kept to a minimum.

Soon after my recovery, I was taken to a room that I recall being decorated with cartoon-like characters and was hooked up to an electroencephalograph. There must have been at least two sessions, because I can remember more than once picking patches of dried glue out of my hair, like scabs off my scalp.

“Awakened” with L-dopa

Although I was aware of having been close to dying, my child’s mind had no conception of what that really meant. So, even in adulthood, when I said the words it was like mouthing a memorized set piece that had no core connection to me. People sympathized, and I enjoyed that, but inwardly I felt like a fraud.

That feeling ended on December 7, 1998, however, when I watched “The True Story of Awakenings” on the Discovery Channel. The program included some of Dr. Oliver Sacks’ (Awakenings, HarperCollins, New York, 1973) “home movies” of his “frozen-in-time” patients after he “awakened” them with L-dopa.

But more arresting were the images of other faces, those forever contorted into idiot expressions (“Did I look like that?”) and “frozen” bodies; hearing a sister’s tale of her brother’s loss of artistic potential; seeing a young girl go through an ordeal similar to my experience.

Watching this program was like seeing myself as I had been, as well as how my life might have gone. This revelation –– of what I was exposed to and then escaped from without damage, of how fortunate I was in my doctors and nurses and in my parents’ courage in letting me be the guinea pig –– still resonates.

Also read: The Rising Threat of Mosquito & Tick-Borne Illnesses

The Ethics and Morality of Modern Biotechnology

A gloved hand adjusts a sample under a microscope inside a science research lab.

Scientists are pondering ways to balance the immense potential of biotechnology, while also being responsible morally and ethically.

Published April 1, 2002

By Fred Moreno, Dana Van Atta, Jill Stolarik, and Jennifer Tang

Image courtesy of Panupat via stock.adobe.com.

Embryonic stem cell research. Cloning. Prenatal genetic screening. Genetically modified foods. What used to be thought of as impossible is not only probable — it’s now being done.

That’s why it’s more important than ever to develop regulations to ensure that today’s tools of the life sciences –– and those surely to be developed in the future –– are used for the betterment of mankind, not for our demise. These issues were the focus of a talk by Francis Fukuyama, PhD, called The Political Control of Human Biotechnology: National and International Governance Issues, held on March 4 at The New York Academy of Sciences (the Academy).

“We’re on the cusp of a major period of advance in biology,” said Fukuyama, Bernard Schwartz Professor of International Political Economy at the Paul H. Nitze School of Advanced International Studies of Johns Hopkins University, who is well known for his 1992 book The End of History and the Last Man. “We really need to start thinking seriously about a very different kind of governance structure for human biotechnology so that we’ll benefit from the great good that it promises, but also avoid some of the ethical and moral aspects of that revolution.”

Fukuyama identified four areas of pronounced advances –– discussed below –– that raise broad issues and concerns.

The Cognitive Revolution

Francis Fukuyama, PhD

How much of human behavior can be explained by genes? By the middle of the 20th century, both the social and life sciences had agreed that culture influences human behavior more than does nature. But a revolution in the life sciences later ensued, generating the field of behavioral genetics. Studies were conducted comparing the behaviors of monozygotic twins who were raised in different environments to determine how much of an individual’s personality, intelligence and other traits could be attributed to genetic makeup.

These investigations triggered a great deal of controversy. “People don’t like to be told that genes determine any part of their behavior,” said Fukuyama. But he said modern biology has even more controversies in store.

“In the next generation, we won’t have to rely only on behavioral genetics to uncover connections between genes and behavior,” he noted. “We’ll start to uncover molecular pathways that exist between certain alleles and behavioral variations. I don’t know what the outcome will be,” continued Fukuyama, “but with the discovery of causal mechanisms linking genes and behavior, it would potentially open these (molecular) pathways to manipulation.”

Neuropharmacology

A struggle for recognition, driven by feelings of status and worth, is the basis for all political behavior, Fukuyama said. “A lot of this is related to the dignity and self-worth that human beings have been programmed by evolution to feel, and that’s the way we sort ourselves out in society.”

By providing a “medical shortcut” to alter these feelings, Fukuyama noted, psychotropic drugs may have important consequences for control of both individual and political behavior. Drugs such as Prozac, for example, work by inhibiting the reuptake of serotonin in the brain. And serotonin levels influence feelings such as dignity and worth.

Ritalin is of even greater ethical concern. It is prescribed for the control of attention deficit hyperactivity disorder (ADHD), “a squishy diagnosis, and a perfect example of a socially constructed disease that wasn’t even recognized two or three generations ago,” said Fukuyama. While the drug has indeed been beneficial for many children, there are others for whom ADHD is merely the tail of the normal distribution of behavior.

Fukuyama noted that drugs like Ritalin alter what we regard as the foundation of virtue and character. “If we believe that human character is formed out of the ability to overcome adversity through training and self-mastery of one’s impulses, what we’ve done is create a medical shortcut around this.”

And psychotropic drugs like Prozac and Ritalin are only the tip of the iceberg, he added. In the next generation, new drugs may be created that will improve memory and increase the threshold for pain.

Life Extension

It’s already happening today: The birth rate in Japan and many European countries is declining, while the ratio of older citizens to younger ones is increasing. Some European nations are witnessing a decrease of more than 1 percent of their populations each year. And the size of Japan’s work force peaked in 1998.

Medical advances in the next half-century may add years, if not decades, to the human life span. But even without these advances, such age shifts are destined to have a profound impact on national economies. For one thing, where will the money come from to pay all of these retirees their social security pensions?

Another area to feel an impact is foreign policy, explained Fukuyama. In the next 50 years, Europe and Japan will be full of older individuals, while most developing nations will have populations where the median age is in the low 20s. To keep their economies going, Europe and Japan will have to import workers from developing countries to supplement their work forces. “These workers will be culturally different,” noted Fukuyama. “Those countries that can successfully assimilate people from different backgrounds will do the best.”

Moreover, Fukuyama asserted that dramatic age shifts at the population level will have an enormous impact on the creation of new ideas. “Generational turnover is absolutely critical to innovation and social change,” he said.

Genetic Engineering

Technologies are available that enable doctors to screen embryos for genes linked to certain diseases, select one lacking the errant genes and implant it in a woman to ensure the development of a relatively healthy baby. The combination of these technologies with the eventual discovery of genes for such traits as height and intelligence may open a Pandora’s box of possibilities for “designer humans.”

But just because we’ll have the ability to accomplish this doesn’t mean we should. “Human rights depend on human nature,” said Fukuyama. “If you have a technology that is powerful enough to change the underlying essence of what human beings are, then we will inevitably change the nature of those rights. There’s too much casualness about redesigning human beings and improving them genetically.”

A New Public Policy

So how do we regulate such technology to ensure that it’s put to the best use? Fukuyama asserted that current regulatory bodies, such as Congress, the National Institutes of Health, and the Food and Drug Administration, “are completely inadequate to deal with the choices we’ll have to face in the future. Legislative bans on broad areas of science and technology are not an appropriate model. We need a better regulatory structure.”

International regulation is another possibility, but such governance must be created – and succeed – on a national level first. One promising effort is establishment of the 17-member President’s Council on Bioethics, which held its first meeting last January – with Fukuyama as a member – but this is a deliberative and advisory body with no regulatory function.

“In addition to debating moral and philosophical issues,” concluded Fukuyama, “we can now begin a very concrete discussion about how we can make use of what is obviously a tremendously valuable and promising set of technologies – but have them work in ways that help humans to flourish, rather than the reverse.”

Also read: Agricultural Biotechnology in Developing Countries

Opportunities and Challenges in Biomedical Research

A woman examines a sample under a microscope in a science research lab.

While there have been major advances in biomedical research in recent years, this has also presented scientists with new challenges.

Published April 1, 2002

By Rosemarie Foster

Image courtesy of DC Studio via stock.adobe.com.

In Boston’s historic Fenway neighborhood, just beyond Back Bay, each spring heralds an annual ritual of renewed life. The Victory Gardens come abuzz with activity and abloom with burgeoning buds. Canoeists charge to the nearby Charles River. And sluggers at Fenway Park swing from their heels, cast in the spell of a 37-foot-high wall called the “Green Monster” that rises beyond the tantalizingly shallow left field.

Much history has been recorded inside the boundaries of Boston’s legendary baseball venue. But the seeds of a different kind of history –– that of 21st century biomedical science –– are being planted in the Fenway district this spring. Two important new scientific research facilities being built –– an academic addition to the Harvard Medical School and a commercial laboratory planned by pharmaceutical giant Merck & Co., Inc. –– will no doubt help shape biomedical advances for decades to come.

Merck is constructing its 11th major research site –– Merck Research Laboratories-Boston –– in the heart of the district. The company hails the facility as a multidisciplinary research center devoted to drug discovery. The state-of-the-art structure will cover 300,000 square feet across 12 stories above ground and six below, and Merck hopes it will lure some 300 investigators to pursue studies within its walls. The building is scheduled for completion in 2004.

Harvard’s own new 400,000-square-foot research building is under construction just 50 feet from the Merck site. With a design that fosters interactions between scientists, Harvard’s new facility will build on the university’s commitment to high-throughput technologies. It’s expected to be operational in 2003.

The Interrelationship of Academic and Commercial Research

Although the two facilities are some way from completion, they’ve already exposed one of the major issues –– the interrelationship of academic and commercial research –– that continue to challenge biomedicine. Because of its proximity to the Harvard Medical School, some scientists fear the new Merck facility may create tension between nearby university investigators and industry researchers.

“The Merck laboratories, as a commercially driven research organization, may pay better salaries, have better equipment, have a better capacity for high-throughput screening and medicinal chemistry, and have other facilities that an academic medical center typically does not have available,” explained Charles Sanders, MD, former Chairman and CEO of Glaxo, Inc. and former Chairman of the Board of The New York Academy of Sciences (the Academy). “Whether this will create a source of problems for Harvard and its scientists remains to be seen. On the other hand, it could be a great resource if the academic-industrial relationship is managed well.”

Such tensions are likely to continue as emerging new trends in biomedical research offer investigators both greater opportunities and increasing challenges.  Academia and industry are partnering in ways they never have before. New high-throughput technologies are generating more data than previously thought possible. And scientists from a variety of fields must now cross interdisciplinary lines –– an approach some dub “systems biology” –– to make significant progress in conquering such diseases as cancer and AIDS.

New Approaches

A number of other biomedical research organizations have already set the stage for the new approaches to be incorporated into the Merck and Harvard facilities. In 1998, Stanford University launched an enterprise called “Bio-X” to facilitate interdisciplinary research and teaching in the areas of bioengineering, biomedicine and the biosciences. In January 2000, Leroy Hood, MD, PhD, created the Institute for Systems Biology in Seattle –– a research environment that seeks to integrate scientists from different fields; biological information; hypothesis testing and discovery science; academia and the private sector; and science and society.

Some say it’s the “golden age” of biomedical investigation. The evolution that has led to this new age was the subject, along with related issues, of a gathering of biomedical researchers at the Academy last April. Hosted by the American Foundation for AIDS Research (amfAR), the symposium was called The Biotechnology Revolution in the New Millennium: Science, Policy, and Business.

“This meeting did an excellent job of showing how the nature of biomedical research has changed in the last 25 years,” explained Rashid Shaikh, PhD, the Academy’s Director of Programs, “not just quantitatively, in the amount of information we can generate, but also qualitatively, in the way the work is done. And this is a rapidly evolving process.”

A Quickened Pace

Much of the recent change in biomedical research is the result of a pace of investigation that has accelerated during the last quarter century – thanks in large part to recombinant DNA technology created in the 1970s. This technology received a boost of support when the war on cancer was declared that same decade.

“Once recombinant DNA technology appeared, there was an enormous shift in molecular biology,” said David Baltimore, PhD, Nobel laureate and President of the California Institute of Technology, who chaired the amfAR symposium. “From a purely academic enterprise, it turned into one that had enormous implications for industry.”

Early on, the infant biotechnology enterprise focused on cloning to manufacture drugs, added Baltimore. The cloning was employed in the search for targets for a new generation of small molecule drugs. The need for chemical libraries soon developed, followed by a demand for high-throughput screening technologies. Add to that the wealth of information gleaned from the Human Genome Project.

Today investigators have more data than they ever did before. With the advent of high-throughput screening technologies, they also have speedier methods at their disposal to generate even more data. The nascent field of proteomics is expected to propel biomedicine even further. But with this heightened pace of research come new challenges.

For one thing, data are being generated faster than they can be analyzed and understood. Novel technologies have spawned a new field called bioinformatics: the analysis of all the data generated in the course of biomedical investigation. “We used to be able to look at the expression of one gene at a time,” said Shaikh. “But thanks to technologies (such as microarray systems), we can now analyze the expression of thousands of genes at once.”
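The shift Shaikh describes, from examining one gene at a time to summarizing thousands at once, can be illustrated with a minimal sketch. The gene names and expression values below are invented for illustration, and the fold-change calculation shown is just one common way microarray-style data are summarized, not a description of any specific system mentioned here.

```python
# Toy expression matrix: each gene maps to four samples
# (two control, then two treated). All values are invented.
expression = {
    "geneA": [2.0, 2.1, 8.0, 7.9],
    "geneB": [5.0, 5.2, 5.1, 4.9],
    "geneC": [1.0, 0.9, 4.1, 4.0],
}

def fold_change(values, n_control=2):
    """Ratio of mean treated expression to mean control expression."""
    control = sum(values[:n_control]) / n_control
    treated = sum(values[n_control:]) / (len(values) - n_control)
    return treated / control

# A single pass computes a fold change for every gene "at once" --
# the same loop works whether the dict holds three genes or thousands.
changes = {gene: round(fold_change(v), 2) for gene, v in expression.items()}
print(changes)
```

Here geneA and geneC show roughly four-fold increases under treatment while geneB is essentially unchanged; at genome scale, the analytical challenge Shaikh alludes to is interpreting thousands of such numbers at once.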

High Demand, Low Supply of Bioinformatics Professionals

Bioinformatics professionals –– those who perform the data analysis –– are in high demand but short supply, creating a problem for some research centers. Because they are so hard to come by, some institutions are sharing bioinformatics staff until a new generation of professionals can be educated and enter the workforce.

A second question that comes to mind is, “Who owns all these new data?” Are they the property of the individual researcher? The university he or she works for? The pharmaceutical company that sponsored the work? Or, if the studies were supported by public funds, the public?

Ownership issues apply to electronically published data as well. “Some of the data get published and made available to the scientific community, but some do not,” said Donald Kennedy, PhD, Editor-in-Chief of Science and President Emeritus of Stanford University. “Now that all data are stored electronically, there are major changes afoot in how data can be accessed in useful and efficient ways. But there are major unresolved questions regarding who owns the data: Do the publishers? Do the investigators?” These significant legal and policy issues will need to be resolved and, given the current rapid pace of study, resolved quickly.

A Blurred Line

In Europe, industrial support for universities has been an accepted and uncomplicated practice since the late 1800s, and this relationship continues to this day. But the relationship between academia and industry in the United States has had a quite different history, noted Charles Sanders.

As the American pharmaceutical industry began to develop in the last quarter of the 19th and early part of the 20th centuries, a relationship akin to the European model began to flower. By the early 1930s, however, the relationship between academia and industry in America began to sour. Disagreements arose over research discoveries and credit; there were disputes regarding the unauthorized use of pictures of some scientists in advertisements, implying endorsement of certain companies and products.

After World War II, the climate began to improve. With the advent of biotechnology in the 1970s, relations flourished even more, as witnessed by the founding of companies such as Genentech and Biogen by academic scientists. In addition, there are now countless examples of companies that support research programs at universities under a variety of arrangements.

On their face, these associations appear positive, because there is now a wealth of new sources for investigators to turn to for research funding. But these new opportunities also present certain challenges.

One of the most obvious concerns when industry supports a researcher is the investigator’s objectivity. Conflict of interest issues may arise. “Academic scientists who work with industry are generally very careful to retain their objectivity, yet appearances sometimes don’t allow that,” said Sanders. “The industry has to be very careful and make sure that its academic collaborators totally protect their objectivity and reputation.”

Intellectual Property Issues

Secondly, when academia partners with industry, intellectual property issues again surface. How does one determine who benefits financially from a research endeavor that goes on to produce a profitable product, such as a successful drug? How much does the scientist receive, and the university he or she works for, and how is that money used? “Academic institutions have become more sophisticated, and the scientists and organizations are demanding an ever larger part of the pie from their discoveries,” said Sanders.

Donald Kennedy noted that in industry-supported investigations a large proportion of research results that are of potential public value may be locked up in proprietary protections. Students at Yale University and the University of Minnesota recently demonstrated, for example, that their universities were collecting royalties on drugs that can benefit people suffering from HIV/AIDS in developing countries.

“Although the royalty slice of the drug price is minuscule in proportion to total revenues, it is very unattractive money to the students, and they make a passionate case,” said Kennedy. “Ironically, everybody involved in this process thought they were doing something good, and in a way everyone was. But this is the kind of problem that emerges when proprietary interests mix with the basic research function in a nonprofit institution.”

A Mixing of the Minds

Scientists are increasingly of the opinion that an integrated approach to biological investigation is essential for significant, meaningful progress to occur. This “systems approach” is bringing together biologists, chemists, physicists, engineers and computer scientists to coordinate research efforts and interpret the resulting data.

Such an approach is critical for understanding the inner workings of cells and how their functions go awry to create diseases such as cancer. The AIDS virus has proven to be an excellent model supporting the need for a multidisciplinary approach: When it was first discovered in the early 1980s, it was assumed that a vaccine was just around the corner. But that has obviously not been the case.

“It turned out that HIV was more difficult than anybody imagined, smarter and slipperier,” said David Baltimore. The cleverness of the virus has sent researchers back to their lab benches. Only by gathering together immunologists, structural biologists, biochemists and experts from other fields can we determine exactly what the virus does to the human immune system to deliver its lethal blow.

Is “Systems Biology” the Way to Go?

Not all investigators are convinced that “systems biology” –– as Hood describes it –– is the way to go. Many established researchers, for example, are used to working alone in conventional academic settings. “Traditional academic institutions have a difficult time fully engaging in systems biology, given their departmental organization and their narrow view of education and cross-disciplinary work,” explained Leroy Hood, President and Director of the Institute for Systems Biology. “The tenure system presents another serious challenge: Tenure forces people at the early stages of their careers to work by themselves on safe kinds of problems. However, the heart of systems biology is integration, and that’s a tough challenge for academia.”

“Specialization is often the enemy of cooperation,” added David Baltimore. “There are deep and important relationships between biology and other disciplines. To understand biology, we need chemists, physicists, mathematicians and computer scientists, as well as other people who can think in new ways.”

Future Challenges

Despite the presence of these as yet unresolved issues, biomedical research continues to hurtle forward, shedding light on the inner workings of organisms and yielding insights that will undoubtedly impact health and medicine. “The true applications (of biotechnology) to patient care have not really matured yet,” said Rashid Shaikh. “But there’s every reason to believe that we’re going to make very rapid progress in that direction.”

In addition to the challenges above, other issues include:

• Gathering political support. Although the budget of the National Institutes of Health has seen a significant increase in the last several years, other science-related agencies may not be as fortunate. “These agencies’ research budgets have not seen an increase, and we must pay attention to them,” said Baltimore.

• Educating the public. Hood touched on the distrust the public can have regarding science. “I am deeply concerned about society’s increasingly suspicious and often negative reaction to developments in science,” he said. “I sense an enormous uncertainty, discomfort and distrust. There is a feeling that we’re just making everything more expensive and more complicated. How do we advocate for opportunities in science? We have to be truthful about the challenges as well.”

• Educating today’s students. One of the best ways to garner support for a systems approach to biological investigation is to start educating students this way today. In Seattle, for example, the Institute for Systems Biology has pioneered innovative programs in an effort to transform the way science is taught in public schools.

“This is truly the golden age of biology,” said Sanders. There are unprecedented numbers of targets and compounds, for example. Research and development are very expensive, but funds will be available in abundance.

The Public’s Expectations

Still, he added, we need to handle the expectations of the public, which can be unrealistic when it comes to the speed with which basic science findings will result in new therapies. And academic institutions have to balance a commitment to both basic and translational research.

“Thousands of flowers will continue to bloom, driven by the lure of discovery and the opportunity to improve human health,” added Sanders. “Though not linear, the process is very creative, entrepreneurial, and clearly reflective of the American free enterprise system.”

Also read: Building the Knowledge Capitals of the Future


About the Author

Rosemarie Foster is an accomplished medical freelance writer and vice president of Foster Medical Communications in New York.

The Epidemiology of Depression: A Family Affair

A therapist comforts a patient by putting her hand on his knee in a supportive way.

Experts are beginning to better understand and mitigate the economic and social consequences of disabling psychiatric illnesses like depression.

Published March 1, 2002

By Henry Moss, PhD

Image courtesy of KMPZZZ via stock.adobe.com.

Health insurance reimbursement for mental disorders has still not achieved parity with that for traditional illnesses, and the topic continues to be hotly debated in the U.S. Congress. The statistics seem clear, however, as studies document the enormous economic and social consequences of disabling psychiatric illnesses. Broken marriages, lost jobs and productivity, and the impact on children make mental illness one of the major sources of disability in the United States and the world.

Columbia University psychiatric epidemiologist Dr. Myrna Weissman made a powerful case for parity when she presented the cumulative results of major studies led by her and colleagues to an Academy audience in January. The talk was part of an ongoing program by the Academy on “Mind, Brain and Society.” Dr. Weissman, who is also associated with the New York State Psychiatric Institute, dealt specifically with unipolar, major depression, perhaps the most widespread and significant of these disorders, and one that now appears to amplify its effect by impacting families – young mothers and children in particular.

Perhaps the most significant finding is that, contrary to popular belief, depression is not a middle-aged, menopausal phenomenon. Recent studies show a substantial rise in the onset of depression at puberty and a peak that occurs between age 25 and 35, for both men and women, though incidence is substantially higher in women. Onset actually declines beyond age 35, implying that, as Weissman put it, “if you can make it to 50 you can pretty much look past depression and ahead to your dementias.” They also show that depression is most damaging in the sensitive child-bearing years of young women.

Depression and Other Health Complications

Dr. Myrna Weissman

Science is only now coming to grips with the significance of these data. Given depression’s early onset, we now recognize that people live with the debilitating disorder far longer than with heart disease, for example, or most cases of diabetes. Indeed, the World Health Organization ranks unipolar depression number one in years of disability.

Weissman also noted that when women of child-bearing age are affected the impact is increased substantially. Children of depressed parents have a two- to threefold increased risk for the illness, according to studies conducted by Weissman’s group. They also are more likely to experience earlier onset, around age 15, and to account for a major share of the small but significant number of cases among pre-pubescent children. They may then suffer the effects for a lifetime.

We’ve known that depression amplifies a number of general health problems, Weissman said, but it’s now becoming clear that the illness has a more devastating social impact than was previously thought. We can only imagine how it affects developing countries ravaged by AIDS and/or war. And it gets worse. The studies show that the effect remains robust across multiple generations; a grandparent with major depression may be an even stronger predictor for familial depression than is a parent.

The good news, according to Weissman, is that we’ve learned a lot about treating depression and other psychiatric conditions, with drugs and psychotherapy, and that outreach can overcome reluctance to seek treatment. But we need resources to conduct effective outreach and deliver treatment, and health insurance parity would certainly be a good start.

Myrna Weissman is a member of the National Academy of Sciences’ Institute of Medicine, and a Fellow of The New York Academy of Sciences.

Also read: Psychedelics to Treat Depression and Psychiatric Disorders

A Medical Doctor’s Perspective on Anthrax

A shot of anthrax taken under a microscope.

With the recent cases of anthrax occurring in New York and Connecticut, an MD breaks down the dangers of this devastating infectious disease.

Published March 1, 2002

By Philip S. Brachman, MD

Under a very high magnification of 31,207X, this digitally colorized scanning electron microscopic (SEM) image depicts endospores from the Sterne strain of Bacillus anthracis bacteria. A key characteristic of the Sterne strain of B. anthracis is the wrinkled surface of the protein coat of these bacterial spores. These endospores can live for many years, enabling the bacteria to survive in a dormant state under environmentally stressful circumstances. Image courtesy of Laura Rose via CDC Public Health Image Library.

Recent bio-terrorist events have resulted in the first cases of inhalational anthrax reported in the United States since those summarized in the article in the Annals of the New York Academy of Sciences published in 1980. Five deaths have occurred among the 11 recent cases in the United States, making this the first successful bio-terrorist attack using B. anthracis. Investigations of the events have produced some new information concerning inhalational anthrax.

While the clinical aspects are generally similar to those reported in the initial article, the presence of a cough appears to be more prominent in the recent cases.

Although the initial symptoms of inhalational anthrax resemble those of the common cold or influenza, it is important to note that none of the current cases reported rhinitis as a symptom. This may be an important distinction when initially considering the diagnosis in a patient with potential exposure to B. anthracis.

New Diagnostic Techniques

On physical examination, widening of the mediastinum has again been noted. Pleural effusions and pulmonary infiltrates have been previously noted, though the former are more prominent among the current cases. As before, pneumonia is not present.

New diagnostic techniques have been developed, including the use of PCR and immunohistochemical staining of tissue, which allow more rapid diagnosis. Serological testing has advanced from the earlier days, which also has aided in the diagnosis. Molecular typing, when performed, has helped in associating cases with each other and with environmental sources of infection.

Successful treatment of six of the 11 recently diagnosed patients is significant: the mortality rate in such cases was previously reported to be 90%. The lower rate in the recent cases reflects earlier recognition of the potential diagnosis, immediate treatment with large intravenous doses of effective antibiotics, use of pulmonary respirators, and closer attention to the use of intravenous fluids and medications.
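The improvement can be made concrete with a quick calculation: five deaths among 11 cases is a case fatality rate of roughly 45%, half the historical 90% figure. A minimal sketch of the arithmetic (the variable names are our own):

```python
# Case fatality rate (CFR) among the recent inhalational anthrax cases,
# compared with the roughly 90% rate reported for earlier cases.
deaths, cases = 5, 11

recent_cfr = deaths / cases      # fraction of cases that were fatal
historical_cfr = 0.90

print(f"recent CFR:     {recent_cfr:.0%}")      # about 45%
print(f"historical CFR: {historical_cfr:.0%}")  # 90%
```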

Concerning antibiotic therapy, ciprofloxacin and doxycycline, when given intravenously and as soon as possible, have been important in the recovery of patients. Another important aspect of the recent events has been the use of prophylactic antibiotics for individuals exposed to aerosols of B. anthracis. It is recommended that prophylaxis be continued for 60 days, based on the potential persistence of spores in the mediastinal lymph nodes.

New Epidemiological Features

New epidemiological features have also been described. We have learned that spore-bearing particles may be extremely small, possibly one micron in size. This allows the spores to pass through a paper envelope and contaminate the environment. Aerosols containing refined, highly concentrated B. anthracis-bearing particles have spread throughout buildings, either by airflow or movement of people, or movement of contaminated mail or equipment. As a result, distant environmental areas –– not directly related to where the envelopes containing the B. anthracis spores were opened –– have been contaminated.

Philip S. Brachman, MD

The contamination of tertiary envelopes (envelopes contaminated by secondary contaminated envelopes that had contact with B. anthracis particles from the primary envelope) may be the source of infection for the two recent cases in New York and Connecticut. Careful investigations have not identified any other source of these two infections.

A factor that may have influenced differences between the previous cases of inhalational anthrax and the bio-terrorist cases was the B. anthracis-containing aerosol. In previous cases, the aerosol probably contained particles of a wide range of sizes. The bio-terrorist aerosol was pure B. anthracis particles, with a large percentage of particles in the range of one to two microns. We still do not have evidence concerning the dosage of B. anthracis organisms necessary to cause disease, but some suspect it might be a relatively small dose.

The current investigations have helped identify improved methods for environmental sampling to determine the limits of the spread of B. anthracis. Several rapid assay methods have been developed, but they have not been adequately tested to determine their sensitivity and specificity. The present events should also yield information about decontaminating large environmental areas.

Also read:  Confronting Bio-Terrorism: The Anthrax Threat


About the Author

Philip S. Brachman, MD is a Professor in the Rollins School of Public Health at Emory University.

The Scientific Clues to Reducing Flu Epidemics

A shot of patients in beds in a makeshift healthcare facility during the Spanish Flu outbreak in the 1910s.

Our understanding of vaccines has come a long way since the 1918 flu epidemic, and scientists continue to advance the research in this field.

Published March 1, 2002

By Lorrence H. Green, PhD

A hospital in Kansas during the Spanish flu epidemic in 1918. Image courtesy of Wikimedia Commons via Public Domain.

In 1918, a global influenza pandemic is estimated to have killed between 20 and 40 million people. Today, influenza, a respiratory disease caused by a negative-stranded RNA virus, is responsible for about 20,000 deaths a year in the United States. In severe epidemics, the death toll can be much higher.

Because the genetic structure of the influenza virus changes slightly each year through antigenic drift, new vaccines must be in constant development. Major antigenic shifts occur about every 20-40 years, producing large genome changes and influenza pandemics. At a recent Microbiology Forum held at The New York Academy of Sciences (the Academy), Dr. Adolfo Garcia-Sastre, of New York’s Mount Sinai School of Medicine, described molecular research being undertaken to design improved influenza virus vaccines.

Garcia-Sastre explained that the influenza genome contains eight genetic segments coated by nucleoproteins, surrounded first by a matrix and then by a lipid bilayer envelope. In its replication cycle, the virus first binds to receptors on the cell surface and is then internalized by the cell. Following this, the negative-strand RNA is copied, forming a double-stranded structure.

An Unusual RNA Virus

Influenza is unusual among RNA viruses in that it replicates in the nucleus of the infected cell rather than in the cytoplasm. Important influenza proteins include: hemagglutinin (HA), which binds to the cell receptor; neuraminidase (NA), which is responsible for budding off new viral particles; the matrix (M1) and membrane (M2) proteins; the nucleoprotein (NP), which is associated with the RNA genome; the transcriptase components (PB1, PB2 and PA), which copy the negative RNA strand; and the NS proteins (NS1 and NS2), whose functions Garcia-Sastre went on to discuss.

Garcia-Sastre noted that negative-stranded influenza RNA is not infective without its associated proteins. He described research in which influenza genetic material was inserted into a plasmid system. A second plasmid was then developed containing the genes for the influenza proteins required for replication. By using both plasmids to transfect a cell, one could get influenza replication. This system, he said, could be used to make specific genetic alterations to the influenza genes and to develop viral strains useful as vaccines.

A Critical Protein

Using this system, Garcia-Sastre said, NS1 was found to be a critical protein that could be exploited for vaccine development. Deletion experiments showed that the function of NS1 is to allow the influenza virus to evade the anti-viral protein interferon. The NS1 protein was purified and studied, and it was found to have three domains: an RNA-binding domain, an eIF-4GI binding domain, and an effector domain. Further deletion experiments showed that the anti-interferon activity was confined to a portion of the RNA-binding domain. As more of the RNA-binding domain was deleted, the influenza virus lost its pathogenicity.

Exploiting this finding to develop vaccine strains presented a problem, however: as larger portions of this domain were deleted, the resulting virus became less immunogenic. The experiments suggested the possibility of removing just enough of the coding sequence for the RNA-binding domain of NS1 to create a vaccine virus that is not pathogenic but can still induce long-term protection. These experiments are currently underway.

Also read:  Antibodies, Vaccines and Public Health