
Continuing the Legacy of a Cancer Research Pioneer

A man in a white lab coat and yellow necktie poses for the camera.

Advancing the cancer research started by Cesare Maltoni, the late Italian oncologist who advocated for industrial workplace safety.

Published August 1, 2002

By Fred Moreno, Dan Van Atta, Jill Stolarik, and Jennifer Tang

Cesare Maltoni. Image courtesy of Silvestro Ramunno, CC BY-SA 4.0, via Wikimedia Commons.

For decades, the “canary in the coal mine” approach has been used to test for potential carcinogens. Standing in for humans, mice and rats have ingested or been injected with various chemicals to help toxicologists determine if the substances would induce cancers. In the end, autopsy revealed whether the lab animals had developed tumors.

Today, new approaches are emerging. They stem from a variety of tools that are evolving from advances in molecular biology, microbiology, genomics, proteomics, novel animal models of carcinogenesis and computer technology.

These tools and approaches were the focus of an April conference commemorating the work of Italian researcher Cesare Maltoni, who died January 21. Renowned for his research on cancer-causing agents in the workplace, Maltoni was the first to demonstrate that vinyl chloride produces angiosarcomas of the liver and other tumors in experimental animals. Similar tumors were later found among industrial workers exposed to vinyl chloride.

Maltoni also was the first to demonstrate that benzene is a multipotential carcinogen that causes cancers of the Zymbal gland, oral and nasal cavities, skin, forestomach, mammary glands, liver, and the hemolymphoreticular system (i.e., leukemias).

Sponsored by the Collegium Ramazzini, the Ramazzini Foundation, and the National Toxicology Program of the National Institute of Environmental Health Sciences (NIEHS), the meeting was organized by The New York Academy of Sciences (the Academy).

Measuring More Than Pathological Changes

After reviewing the contributions of Maltoni and David Rall, an American giant in the same field, as well as providing an update on ongoing research in their respective groups, the speakers and attendees discussed the future of carcinogenesis testing. While new tools will not replace bioassays, most noted, they will make it possible to measure more than simply the pathological changes seen through the microscope.

J. Carl Barrett, head of the Laboratory of Biosystems and Cancer at the National Cancer Institute, cited four recent developments that are fundamentally changing the research to identify risk factors and biological mechanisms in carcinogenesis.

The four developments are: new animal models with targeted molecular features – such as mice bred with a mutated p53 tumor suppressor gene – that make them very sensitive to environmental toxicants and carcinogens; a better understanding of the cancer process; new molecular targets for cancer prevention and therapy; and new technologies in genomics and proteomics.

New technologies in cancer research, like gene expression analyses, are revealing that cancers that look alike under the microscope are often quite different at the genetic level. “Once we can categorize cancers using gene profiles,” Barrett said, “we can determine the most effective chemotherapeutic approaches for each – and we may be able to use this same approach to identify carcinogenic agents.”
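
To make the idea concrete, here is a minimal sketch, not a tool from Barrett's group, of how gene-expression profiles might be used to group tumor samples: tumors with similar profiles cluster together even if they would look identical under the microscope. All sample names and expression values are invented for illustration.

```python
# Minimal sketch: grouping tumor samples by gene-expression profile.
# Sample names and expression values are hypothetical, for illustration only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

samples = ["tumor_A", "tumor_B", "tumor_C", "tumor_D"]
# Rows = tumor samples, columns = expression levels of four hypothetical genes.
expression = np.array([
    [8.1, 0.4, 5.2, 0.1],   # tumor_A
    [7.9, 0.6, 5.0, 0.2],   # tumor_B, profile similar to A
    [0.3, 6.8, 0.2, 7.5],   # tumor_C
    [0.5, 7.1, 0.4, 7.2],   # tumor_D, profile similar to C
])

# Hierarchical clustering on correlation distance between expression profiles.
tree = linkage(expression, method="average", metric="correlation")
clusters = fcluster(tree, t=2, criterion="maxclust")

for name, cluster in zip(samples, clusters):
    print(f"{name}: cluster {cluster}")
```

In practice such studies track thousands of genes and use far more sophisticated statistics, but the grouping step follows the same basic logic.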

A Robust Toxicology Database

A related effort – to link gene expression and exposure to toxins – has recently been launched at the NIEHS. The newly created National Center for Toxicogenomics (NCT) focuses on a new way of looking at the role of the entire genome in an organism’s response to environmental toxicants and stressors. Dr. Raymond Tennant, director of the NCT, said the organization is partnering with academia and industry to develop a “very robust toxicology database” relating environmental stressors to biological responses.

“Toxicology is currently driven by individual studies, but in a rate-limited way,” Tennant said. “We can use larger volumes of toxicology information and look at large sets of data to understand complex events.” Among other benefits, this will allow toxicologists to identify the genes involved in toxicant-related diseases and to identify biomarkers of chemical and drug exposure and effects. “Genomic technology can be used to drive understanding in toxicology in a more profound way,” he said.

Using the four functional components of the Center (bioinformatics, transcript profiling, proteomics and pathology), Tennant believes that the NCT will be able “to integrate knowledge of genomic changes with adverse effects” of exposure to toxicants.

Current animal models of carcinogenesis are unable to capture the complexity of cancer causation and progression, noted Dr. Bernard Weinstein, professor of Genetics and Development, and director emeritus of the Columbia-Presbyterian Cancer Center.

Multiple factors are involved in the development of cancer, Weinstein said, making it difficult to extrapolate risk from animal models. Among the many factors that play a role in cancer causation and progression are “environmental toxins such as cigarettes, occupational chemicals, radiation, dietary factors, lifestyle factors, microbes, as well as endogenous factors including genetic susceptibility and age.”

Gene Mutation and Alteration

By the time a cancer emerges, Weinstein added, “perhaps four to six genes are mutated, and hundreds of genes are altered in their pattern of expression because of the network-like nature and complexity of the cell cycle. The circuitry of the cancer cell may well be unique and bizarre, and highly different from its tissue of origin.”

Research over the past decade has underscored the role that microbes play in a number of cancers: the hepatitis B and hepatitis C viruses in liver cancer along with cofactors alcohol and aflatoxin; human papilloma virus and tobacco smoke in cervical cancer; and Epstein-Barr virus and malaria in lymphoma, said Weinstein. Microbes are likely to be involved in the development of other kinds of cancer as well, he speculated. “Microbes alone cannot establish disease, they need cofactors. But this information is important from the point of view of prevention, and these microbes and their cofactors are seldom shown in rodent models.”

When thinking of ways to determine the carcinogenicity of various substances, he concluded, “we have to consider these multifactor interactions, and to do this we need more mechanistic models” of cancer initiation and progression.

Christopher Portier, a mathematical statistician in the Environmental Toxicology Program at the NIEHS, is working to make exactly this type of modeling more widespread. He stressed the importance and advantages of complex analyses of toxicology data using a mechanism-based model – or “biologically based data.”

This model includes many more factors than just length of exposure and time till death of the animal. It can incorporate “the volume of tumor, precursor lesions, dietary and weight changes, other physiological changes, tumor location and biological structure, biochemical changes, mutations,” Portier said, and give a more complete picture of the processes that occur when an organism is exposed to a toxicant.

New Analytical and Biological Tools

With biologically based models, researchers would link together a spectrum of experimental findings in ways that allow them to define dose-response relationships, make species comparisons, and assess inter-individual variability, Portier said. Such models would allow researchers to quantify the sequence of events that starts with chemical exposure and ends with overt toxicity. However, he said “each analysis must be tailored to a particular question. They are much more difficult computationally and mathematically than traditional analyses, and require a team-based approach.

“Toxicology has changed,” Portier continued. “We now have new analytical and biological tools – including transgenic and knockout animals, the information we’ve gained through molecular biology, and high through-put screens. We need to link all that data together to predict risk, then we need to look at what we don’t know and test that.”
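
As a greatly simplified illustration of the dose-response component of such a mechanism-based analysis, the sketch below fits a Hill-type curve to hypothetical bioassay data. The doses, tumor-incidence values, and parameter names are assumptions made for illustration; they do not come from any study discussed here.

```python
# Minimal sketch: fitting a Hill-type dose-response curve to hypothetical bioassay data.
# The doses and tumor-incidence values below are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, emax, ed50, n):
    """Fraction of animals responding at a given dose (simple Hill model)."""
    return emax * dose**n / (ed50**n + dose**n)

dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0, 100.0])         # mg/kg, hypothetical
incidence = np.array([0.02, 0.05, 0.12, 0.35, 0.60, 0.78])  # fraction with tumors, hypothetical

params, _ = curve_fit(hill, dose, incidence,
                      p0=[0.8, 10.0, 1.0],
                      bounds=(0, [1.0, 500.0, 10.0]))
emax, ed50, n = params
print(f"Emax={emax:.2f}, ED50={ed50:.1f} mg/kg, Hill slope={n:.2f}")

# A mechanism-based analysis would add many more observables (lesion counts,
# body-weight changes, biochemical markers) and fit them jointly; this sketch
# shows only the simplest dose-response component.
```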

While most speakers focused on the future benefits of up-and-coming technologies and concepts, Philip Landrigan, director of the Mount Sinai Environmental Health Sciences Center at the Mount Sinai School of Medicine, reminded the group of the work on the ground that still needs to be accomplished. “We’ve made breathtaking strides in our understanding of carcinogens and cancer cells,” he said. “I am struck, though, by the divide in the cancer world – the elegance of the lab studies, but our inefficiency in applying that knowledge to cancer prevention.”

Thorough Testing Needed

One of the problems confronting researchers is the vast number of substances that are yet to be tested. About 85,000 industrial chemicals are registered with the U.S. Environmental Protection Agency for use in the United States. Although some 3,000 of these are what the EPA calls high-production-volume chemicals, Landrigan said, “only 10 percent of these have been tested thoroughly to see the full scope of their carcinogenic potential, their neurotoxicity and immune system effects.”

Landrigan also discussed other troubling issues. For example: Children, the population most vulnerable to the effects of toxins, are only rarely accounted for in testing design and analysis, he said, and the United States continues to export “pesticides, known carcinogens, and outdated factories to the Third World.” Landrigan said he believes the world’s scientific community needs to address these issues.

At the conclusion of the conference, Drs. Kenneth Olden and Morando Soffritti signed an agreement formalizing an Institutional Scientific Collaboration between the Ramazzini Foundation and the NIEHS in fields of common interest. Priorities of the collaboration will include: carcinogenicity bioassays on agents jointly identified; research on the interactions between genetic susceptibility and exogenous carcinogens; biostatistical analysis of results and establishment of common research management tools; and molecular biology studies on the basic mechanisms of carcinogenesis.

Detailed information presented in several papers will be included in the proceedings of the conference, to be published in the Annals of the New York Academy of Sciences later this year.


The Complexities of Stem Cell Research

A shot of a cell taken from under a microscope.

Advocates on both sides of this often controversial debate make their case, combining the science, history, policy, and ethics of the research.

Published August 1, 2002

By Fred Moreno, Dan Van Atta, Jill Stolarik, and Jennifer Tang

Image courtesy of NIH via Wikimedia Commons.

Following the recent death of American baseball legend Ted Williams, it was learned that the former Boston Red Sox slugger’s body had been suspended in liquid nitrogen, encased in a titanium-steel cylinder along with other bodies being preserved at a commercial cryonics facility. Controversy swirled as the story circulated that at least one family member sought to preserve the icon’s DNA for possible future use in cloning.

Cryonics and cloning are the stuff of popular fiction and films from Frankenstein to Star Wars, with the scientist’s power to “create life” eliciting both fear and fascination. With cloning and embryonic stem cell research now poised for rapid expansion, however, the real-world debate on cloning, even for specifically defined therapeutic purposes, has heated up. Scientists, too, have begun to grapple with the issue of setting appropriate limits on their ability to engineer life.

Stuart Newman, professor of Cell Biology and Anatomy at New York Medical College, is among the more skeptical voices in the debate on human cloning. Speaking at a roundtable discussion held on the subject at The New York Academy of Sciences (the Academy) this spring, Newman called the creation of clonal embryos a slippery slope that no amount of regulation can level. He cited what he considers to be inexorable pressures on biomedical researchers to transgress acceptable limits by allowing cloned embryos to grow beyond the cellular stage.

The Thornier Aspects

During the meeting, which was co-hosted with Gene Media Forum, Newman engaged in an interchange with patient-activists – including the noted actor and director Christopher Reeve – and fellow scientists in an effort to sort out the thornier aspects of the cloning debate.

Craig Venter, PhD, president of the TIGR Center for the Advancement of Genomics and a major figure in microbiology and genomics, moderated the debate. Other panelists included Rudolf Jaenisch, MD, professor of Biology at MIT; James Kelly, an activist on behalf of spinal cord treatment; and Reeve.

For many, the cloning debate hinges on the distinction between reproductive and therapeutic cloning. Reproductive cloning aimed at creating a child has been censured by scientists and ethicists alike. Earlier this year, the National Academy of Sciences called for a total ban on human reproductive cloning, but strongly endorsed cloning to obtain stem cells that hold promise for curing a broad spectrum of human diseases. Jaenisch and Reeve expressed their support for this view, while Kelly and Newman cast doubt on the advisability of human cloning for any purpose.

Therapeutic cloning relies on nuclear transfer technology, a technique used to create a customized stem cell line for a patient in need. The nucleus of one of the patient’s own skin cells, for example, is extracted and transferred into a human egg whose nucleus has been removed. The new nucleus of this cell is then exposed to the egg’s signals, causing it to revert to its embryonic state.

In theory, embryonic stem cells can be chemically coaxed into producing lines of cells that will make whatever tissues are needed to heal and repair the body. Examples being considered include leukemia-free bone marrow cells, insulin-producing islet beta cells for diabetics, and dopamine-rich neurons for patients with Parkinson’s disease.

Commercial Interests and Patient Pressures

Still, the slippery slope looms large for critics of the new science. If a legal limit is eventually set allowing scientists to grow a clonal embryo for 14 days, Newman speculated, why not 15, 16, or 17 days and beyond? He said a combination of commercial interests and patient pressures would make it impossible to regulate the technology.

But Rudolf Jaenisch strongly disagreed with this all-or-nothing view. “It’s premature to ban a technique that is still in the process of evolving,” said Jaenisch, referring to a bill in the Senate that, if passed, would criminalize all forms of human cloning. “At no point in our nation’s history has Congress banned an area of scientific exploration or technology by federal legislation.” Nonetheless, despite the objections of many scientists, a total ban on cloning in the United States remains a distinct possibility.

European governments are generally recommending a more measured approach to regulating the new technology. The U.K. recently passed a law prohibiting reproductive cloning but allowing therapeutic cloning research to move forward under strict government oversight.

Australia, Canada, Israel, Japan, Portugal, Singapore and the Benelux countries also have approved therapeutic cloning. A special committee of the European Parliament has been holding meetings to develop a framework for cloning research that can help European governments evaluate its risks and benefits.

“The British solution is black and white,” said Jaenisch. “If you implant a cloned embryo into a uterus, it’s a criminal act. If you put it into a Petri dish with the intent of making an embryonic stem cell, it is allowed. There is no gray zone.” Again putting forth the slippery-slope argument, Newman pointed out that the development of an artificial uterus, for example, would nullify this distinction.

The Legality of Therapeutic Cloning

The United States is alone among the so-called developed nations in attempting to make therapeutic cloning illegal. If Congress succeeds in criminalizing all forms of cloning, the U.S. would effectively seal its borders against the importation of cloning-derived treatments for diseases that afflict millions of Americans. For those with Parkinson’s disease, diabetes, spinal cord injuries, Alzheimer’s disease, and a whole host of incurable conditions, this could be tantamount to “health exile.”

Despite their promise, however, cloning-derived stem cells and their successful development into cures are still just a distant possibility, according to James Kelly, who himself is confined to a wheelchair as a result of a spinal cord injury. They’re too uncertain, he believes, to warrant a large investment of research dollars at the expense of more tried-and-true avenues of investigation.

Christopher Reeve disputed Kelly’s assertion on two counts: First, in his view, it won’t be that long before therapeutic cloning techniques will be ready for use in humans; and second, biomedical research isn’t a zero-sum game. Pointing to the recent doubling of the NIH budget and to funds that have been earmarked by the Department of Health and Human Services for therapeutic cloning, he claimed there will be sufficient funding for many types of research.

The Promise of Therapeutic Cloning

Reeve, who was paralyzed in an equestrian accident in 1995, believes his best hope for recovery lies in therapeutic cloning. Because spinal cord injury usually leads to a compromised immune system, his doctors say his best option is treatment with embryonic stem cells derived from his own DNA, as cells from an anonymous donor would pose a high risk of rejection.

The charismatic activist and philanthropist further reminded his fellow discussants, and the audience, that scientific breakthroughs are often greeted with suspicion. “When vaccines became available early in the 20th century, there was a real fear and, in fact, strong opposition from the private sector and the government,” he said. “The idea for a vaccine against, say, measles meant the introduction of a small amount of measles into the patient, and people couldn’t comprehend that that would be actually the solution to contracting measles.”

Venter concluded the meeting by seconding Reeve’s warning against allowing fear to shape today’s attitudes toward scientific advances, stressing the inherent value of cloning research itself. “Just doing the basic science research is one of the greatest avenues we’re ever going to have to understand our own development and our own biology,” he said.


Analyzing the Self: When Mind Meets Matter

An x-ray of a brain scan.

Linking the self – our passions, our hatreds, our temperaments and such – to the physical wiring and physiological functioning of the brain.

Published August 1, 2002

By Rosemarie Foster

Image courtesy of samunella via stock.adobe.com.

Each living creature exists as a unit: a self. But what makes each of us the person we are? It’s a question that’s been pondered for hundreds of years.

Seventeenth-century philosopher and mathematician René Descartes’ most famous quotation – “I think, therefore I am” – postulated that the self is a nonphysical entity, rather than a being identical to one’s body. Two centuries later, in his celebrated essay Self-Reliance, transcendentalist Ralph Waldo Emerson wrote, “To believe your own thought, to believe that what is true for you in your private heart is true for all men – that is genius.” And modern-day screenwriter Woody Allen expressed self-doubt when he said, “My one regret in life is that I’m not someone else.”

Today the study of the self goes beyond the realm of philosophy, bridging this ancient discipline with contemporary neurobiology. At universities around the world, investigators are using modern analytical methods, laboratory tools and sophisticated imaging techniques in an attempt to link the self – our passions, our hatreds, our temperaments and such – to the physical wiring and physiological functioning of the brain. “If you really want to understand the nature of the mind, you have to understand the nature of the brain,” explains Patricia S. Churchland, professor of Philosophy at the University of California, San Diego.

Nature and Nurture

One thing is certain: Who we become and what personalities we develop is a combination of nature – the influence of genes – and nurture, the experiences we encounter throughout our lives. Both influence the development of the brain’s neural circuitry. “The relationship between genes and personality is not a simple one, but they do contribute,” says Joseph LeDoux, Henry and Lucy Moses Professor of Science at the Center for Neural Science at New York University. “But just because something is biological doesn’t mean it’s genetic. Experiences are also very important in shaping our neural wiring.”

Specifically, our experiences help us to learn, through an intricate system of memory processing employed by our brains. This learning results in the formation of actual neural networks.

Our peers may heavily influence such learning. According to social scientist Mahzarin Banaji, Richard Clarke Cabot Professor of Social Ethics in the Department of Psychology at Harvard University and Carol K. Pforzheimer Professor at Radcliffe, the self is the result of one’s collective social experiences.

“Our attitudes and beliefs come from the groups we associate with,” Banaji explains. “The thoughts and opinions we claim to be uniquely ours may in fact not be uniquely ours.” Indeed, a battery of “implicit association tests” Banaji has developed and implemented may reveal hidden biases in ourselves that we may not be aware of, and that many of us may not like.

From Soul to Brain

Churchland, LeDoux and Banaji will be among a cadre of distinguished scientists who will gather from around the world at the Mount Sinai School of Medicine from September 26-28 to speak at a unique conference called The Self: From Soul to Brain, sponsored by The New York Academy of Sciences (the Academy). “This will be the first time this group will assemble at one meeting to focus on this novel topic,” said LeDoux, who is organizing and chairing the event. “We want to address how the brain pieces itself together as we go through life.” The conference will foster a dialogue among researchers exploring the neuroscientific, philosophical, theological, and social aspects of the complex entity we call the self.

How do our brains make us who we are? LeDoux explains that it’s all in the brain’s wiring, and the exchange of neurotransmitters between billions of neurons across synapses. Such synaptic wiring regulates all brain functions, such as perception, emotion, motivation, thinking and memory. “But the trick is to understand how we as people can emerge out of all of this,” says LeDoux.

“That the self is synaptic can be a curse – it doesn’t take much to break it apart,” he writes in his book Synaptic Self: How Our Brains Become Who We Are, published this year. “But it is also a blessing, as there are always new connections waiting to be made. You are your synapses. They are who you are.”

Traumatic Memories and Physiological Responses

LeDoux’s group is focusing on the study of traumatic memories and the physiological responses they can incite. The brain does not process all of our memories the same way. For traumatic memories, two systems interact: one conscious, one unconscious. For example, if you were in a car accident and you returned to the accident scene, you might remember objective details of the event: “conscious memories.” But your blood pressure and heart rate might escalate, you may sweat, and your muscles might tense – all “unconscious memories” that surface as a result of the past experience.

Moreover, neuroanatomists have learned that these memory systems are mediated by two structures in the brain’s temporal lobe: the hippocampus, which regulates conscious memories, and the amygdala, an almond-shaped area of tissue controlling unconscious memories. LeDoux has focused two decades of inquiry on the latter structure, which he calls “the emotional processing system of fear.” His team has pioneered the study of emotions on the biological level, deep within the recesses of the brain: the amygdala of the rat brain, to be exact.

According to LeDoux, nature installed the amygdala as a survival mechanism. Early on, evolution wired the brain to produce responses to keep an organism alive in dangerous situations. This solution has not changed much over evolutionary time, and works essentially the same way in rats as in people.

The LeDoux lab conducts “fear conditioning” in rats to study the function of the amygdala, its connections with other parts of the brain – such as the cortex, which is responsible for thought – and what happens in the brain when the amygdala is damaged. At the heart of their studies is a tone-shock system: They condition a rat by sounding a tone and delivering a minor shock, and they measure the rat’s physiological responses.

The Sensory Thalamus and the Amygdala

Thereafter, whenever the rat hears the tone it may either freeze in its tracks or respond with an increase in blood pressure and heart rate, even when no shock is delivered. Just the anticipation of a shock is enough to trigger a physical reaction in the animal. LeDoux’s group now studies how fear-arousing experiences alter synapses in the rats’ brains – particularly those in the lateral nucleus of the amygdala, the gateway into the system – and thereby create long-lasting memories.

The amygdala doesn’t work alone. One key interaction exists between the amygdala, the sensory thalamus and the sensory cortex. When we see or hear something frightening, we may freeze, jump or turn to see what caused it. That reaction can be traced to the connection between the sensory thalamus and the amygdala, between which signals travel quite quickly but not so precisely.

The same signal is processed several milliseconds more slowly between the thalamus and the sensory cortex, but in a way that allows us to assess the situation more accurately. LeDoux’s findings may be relevant to the management of anxiety disorders, which account for about half of the mental health problems reported in the U.S. and which can result from malfunctions in the way we deal with fear.

In Synaptic Self, LeDoux points out that neuroscientists have done an excellent job of studying how individual systems work, but as persons with selves, we are more than a mere collection of systems. To understand the self, he contends, neuroscientists will have to figure out how the various individual systems work together. One of his reasons for organizing the September conference was to engage scientists from a variety of research areas to begin to think about the self in ways that might be compatible with the tools and findings of brain research.

Not a Solitary Entity

“When we talk about ‘the self,’ it’s misleading to think of it as a single entity,” says Patricia Churchland. “Rather, it’s a number of different capacities engaged in monitoring the body and the various aspects of brain function.” When we perceive objects and events in our external environment – distinguishing them from our inner experiences such as emotions – and when we plan in our minds how, for example, to portage a canoe or to build a shelter, we are exercising what Churchland calls “self-representational capacities.” For 30 years, she has explored the complex connections between neural systems that have developed over time to enable humans to cope with and adapt to external signals, allowing us to improve our behavioral strategies.

In a paper published in the April 12, 2002 issue of the journal Science, Churchland emphasizes the brain-based nature of self-representational capacities. Our internal organs, for example, are represented by chemical and neural pathways aimed mainly at the brainstem and hypothalamus, while autobiographical memories appear to be governed by structures in the medial temporal lobe. The prefrontal lobe and limbic structures are important for deferring gratification and controlling impulses, so much so that damage to these areas may result in personality changes. “Hitherto quiet and self-controlled, a person with lesions in the ventromedial region of the frontal cortex is apt to be more reckless in decision-making, impaired in impulse control, and socially insensitive,” writes Churchland.

Self-Representational Capacities

Indeed, studies of patients who’ve experienced brain damage as a result of stroke, tumors or other disease or injury have shed light on specific areas of the brain associated with such self-representational capacities. Researchers have compared these patients’ abilities and personalities before and after the damage, and coupled that data with the results of contemporary diagnostic tools such as functional magnetic resonance imaging. Churchland aims for a panoramic view of a vast range of data, scrutinizing the findings of investigators in various laboratories to “try to make the story come together,” she explains.

One dramatic example Churchland describes is a patient known as R.B., who has been studied for two decades by Antonio Damasio’s neurology lab at the University of Iowa. R.B. is a middle-aged man who suffers from bilateral damage to his temporal lobes, resulting from herpes simplex viral infection. In particular, his hippocampus was destroyed. As a result, R.B. has catastrophic amnesia: he is unable to learn anything new, and is bereft of essentially all autobiographical memory. He lives within a 40-second time window and has no memory of events that occurred just moments ago, let alone those that happened before his illness.

R.B. does, however, have some social aspects of self-representation, thus demonstrating the dissociation of self-representing capacities. “Although he suffers diminished self-understanding, he nevertheless retains many elements of normal self-capacities,” Churchland notes, “including self-control in social situations and the fluent and correct use of ‘I.’”

The Amygdala and the Frontal Cortex

He also knows where his body stands in space at any given time, can identify feelings such as happiness, and is able to show sympathy with the distress of others. “This shows that the structures of the brain necessary for memory storage and retrieval are probably not those responsible for social skills,” explains Churchland.

At the September meeting, Churchland plans to speak on the topic of self-control, specifically linking self-control to parameters such as connectivity between the amygdala and the frontal cortex, as well as levels of hormones, neurotransmitters such as serotonin, and appetite-regulating proteins such as leptin. “Defining a neurobiological basis for self-control by identifying the relevant neurobiological parameters may be difficult, but I suspect it is possible,” she says.

“As we come to understand the nature of decision-making and choice, and how we acquire habits of self-control in childhood, it is bound to have an impact on how we understand ethics and the criminal justice system. The precise impact of new discoveries concerning the neurobiology of self-control remains to be seen, especially as technologies for intervention become increasingly available.”

Do You Have Hidden Biases?

Men are better suited for math and science than women. Many of us trust whites more than blacks. And we favor youth over age.

These statements may seem like extraordinary generalizations. But they represent hidden biases that many individuals learn they may have after taking Mahzarin Banaji’s implicit association tests (IATs), co-developed with the University of Washington’s Anthony Greenwald and Yale University’s Brian Nosek. If we think we’re open-minded and able to make free choices, then why might we unconsciously harbor such potentially disturbing beliefs? The answer, says Banaji, may lie in the people with whom we associate.

“The self is our most unique aspect. It is what distinguishes us from everyone else,” she explains. “And yet this most unique component of personality is itself socially constructed, a part of a larger collective gathered from everything we live and breathe.” That means we are most likely to hold opinions similar to those of our peers, and the social groups with which we identify most strongly. Moreover, these attitudes are often expressed without conscious awareness.

“Implicit Patriotism”

Banaji and her student Kristin Lane investigated “implicit patriotism” among 74 New England beachgoers who took IATs during the summer of 2000. The IATs compared how strongly they identified with their nation (the United States) and their region (New England) on both Independence Day and on a nonholiday in August. The results showed a significantly stronger association between the concepts of “self” and “American” on July 4th than on the August test date.

Moreover, regional identity was weaker on July 4th than in August. “Implicit identity is susceptible even to very subtle, naturally occurring events that can strengthen or weaken aspects of identity,” the researchers conclude. They are also being mindful of the impact of events like September 11th on shaping one’s implicit self and identity.

In another example, Banaji notes that more men than women tend to go into math and science, while women gravitate toward language and the arts. But in elementary school, there are no differences between males and females with regard to math test performance. Gender differences favoring men begin to surface in high school, and become progressively greater as the level of education increases. We’d like to think that anything’s possible for anyone, but in reality, the groups we identify with (in this case, males or females) may exert an unconscious influence on our choices and decisions.

Self-Imposed Segregation

In a paper to be published in the Journal of Personality and Social Psychology, Banaji, Nosek and Greenwald report that this is the case. Their conclusions were based on the findings of several IATs taken by groups of college students. They analyzed whether individuals link subjects such as math and the arts with good words (such as “love, rainbow, heaven”) or bad words (“death, torture, hatred”), and determined if they associated themselves more with math (“algebra=me”) or the arts (“poetry=me”).

While both sexes demonstrated negativity toward math, especially compared to the arts, that negativity was twice as strong among women as among men. Moreover, the more strongly a woman identified with the female gender, the more negatively she felt toward math; conversely, the more strongly a man identified with being male, the greater his preference for math. “Knowledge of stereotypes, even implicit knowledge, may be sufficient to perpetuate stereotypes and even discourage women’s subsequent participation and performance in math domains,” concludes Banaji.

“The blunt reality is that not everything is equally possible for everyone,” she continues. “Societies that aspire to purer forms of democracy need be aware that wanting and choosing can be firmly shaped by membership in social groups. Until the internal, mental constraints that link group identity with preference are removed, the patterns of self-imposed segregation may not change.”
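
For readers curious how implicit associations like these are quantified, here is a toy calculation, a simplified stand-in for the published IAT scoring procedure rather than the exact algorithm Banaji and her colleagues use. It compares average response times when “self” shares a response key with math versus with the arts; all reaction times below are invented for illustration.

```python
# Simplified IAT-style scoring: a toy illustration, not the published algorithm.
# Reaction times (milliseconds) below are invented for illustration only.
from statistics import mean, stdev

# Trials where "self" and "math" share a response key.
self_math_rt = [720, 690, 750, 710, 680, 730]
# Trials where "self" and "arts" share a response key.
self_arts_rt = [610, 590, 640, 600, 620, 630]

# Faster responses in the self+arts pairing suggest a stronger implicit self-arts association.
all_rts = self_math_rt + self_arts_rt
effect = (mean(self_math_rt) - mean(self_arts_rt)) / stdev(all_rts)

print(f"mean self+math RT: {mean(self_math_rt):.0f} ms")
print(f"mean self+arts RT: {mean(self_arts_rt):.0f} ms")
print(f"IAT-style effect size (positive = stronger self-arts association): {effect:.2f}")
```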


Do Physicians Have a Duty to Warn a Patient’s Family?

Two medical professionals discuss a patient's medical record.

Exploring the ethical and legal issues around doctors sharing medical records and providing recommendations to the family members of the patients they treat.

Published June 1, 2002

By Fred Moreno, Dana Van Atta, Jill Stolarik, and Jennifer Tang

Image courtesy of Freedomz via stock.adobe.com

What guidelines should doctors follow regarding the disclosure of information to potentially affected family members and the genetic testing of children? In two widely discussed cases, Pate v. Threlkel and Safer v. Estate of Pack, judges grappled with whether physicians have a duty to warn family members of patients. Heidi Pate’s mother suffered from medullary thyroid carcinoma, an autosomal dominant disorder. Three years after Pate’s mother received treatment, Pate was diagnosed with the same disease. She sued, arguing that her mother’s doctors had a duty to warn her mother of the risk of genetic transmission and to recommend testing of any children.

The Florida Supreme Court ruled that if the standard of care were to warn a patient of the genetically transferable nature of a condition, as Pate alleged, then the intended beneficiaries of the standard would include the patient’s children. In other words, the patient’s children would be entitled to recover for a breach of the standard of care. However, in light of state laws protecting the confidentiality of medical information, the court found no requirement that a doctor warn a patient’s children directly. Rather, the court held that in any circumstances in which a doctor has a duty to warn of a risk of inherited disease, that duty is satisfied by warning the patient.

Donna Safer’s father was treated over an extended period of time for colon cancer associated with adenomatous polyposis coli, another autosomal dominant disorder. Almost two decades after his death, Safer was diagnosed with metastatic colon cancer associated with adenomatous polyposis coli. Safer also sued, arguing that her father’s doctor had a duty to warn her of the risk to her health.

Two Additional Significant Features

Two additional features of the case were significant. First, Safer’s mother testified that on at least one occasion she asked the doctor if what he referred to as an “infection” would affect her children and was told not to worry. Second, Safer contended that careful monitoring of her condition would have improved her medical outcome.

The New Jersey Superior Court found no essential difference between this case, with its “genetic threat,” and traditional duty-to-warn cases involving the menace of infection or threat of physical harm. The court concluded that a duty to warn in the genetics context would be manageable, commenting that those at risk are easily identified. The court failed to state how the duty to warn might be discharged, especially in cases involving small children.

This decision is of concern to many physicians who are worried about the scope and ramifications of a broad duty to warn. In 1996, following the Safer decision, the New Jersey legislature passed a law prohibiting disclosure of genetic information about an individual. The limited exceptions include disclosure to blood relatives for purposes of medical diagnosis, but only after the individual is dead.

It is important to remember that with predictive information, more is not necessarily better. In addition to possible psychological harms, family members may face discrimination if genetic information finds its way into their medical records or becomes part of their knowledge base and so must be disclosed on applications for insurance. An awareness of this problem may be behind a New York law that prohibits any person in possession of information derived from a genetic test from incorporating that information into the records of a non-consenting individual who may be genetically related to the tested individual.


A Case Against ‘Genetic Over-Simplification’

A graphical representation of a DNA helix and chromosomes.

Who are we? Why do we behave as we do? What explains why some die of illness at the age of 50 while others live past 100? How can we improve the human condition?

Published June 1, 2002

By Fred Moreno, Dana Van Atta, Jill Stolarik, and Jennifer Tang

Image courtesy of ustas via stock.adobe.com.

The answers to these questions are coded in our genes — or so the story goes in the popular media and in some corners of the scientific establishment.

“It’s a heroic story with a dark side,” said Garland E. Allen, Ph.D., Professor of Biology at Washington University and a specialist in the history and philosophy of biology, at a recent gathering at The New York Academy of Sciences (the Academy). Harking back to the eugenics movement of the early 20th century, modern genetic science is fraught with both promise and danger, Allen said, and “genomic enthusiasm” should be tempered with a good dose of historical awareness.

Eugenics in Context

Charles B. Davenport, the father of the eugenics movement in the United States, defined his fledgling field as “the science of human improvement by better breeding.” In attempting to apply Mendelian genetics to society’s ills, Davenport and his fellow eugenicists believed the problem — whether alcoholism, mental illness, or the tendency to simply “make trouble” — was in the person, not the system. The real culprit, therefore, was the individual’s defective biology, and biologists held the key to fixing the defect.

During the first four decades of the 20th century, eugenics gave credibility to American elites in their efforts to restrict the inflow of immigrants of “inferior biological stock” from southern and eastern Europe, culminating in the Immigration Restriction Act of 1924. The new science also provided a rationale for the compulsory sterilization of institutionalized individuals considered unfit for reproduction.

By 1935, 30 states had enacted sterilization laws that targeted habitual criminals, epileptics, the “feebleminded,” and “morally degenerate persons.” Their proponents saw them as preventive, not punitive. In their view, higher fertility rates among the less productive, genetically defective members of the population posed a threat to society, not least because of the high cost of maintaining them in prisons, in mental institutions, or on the dole.

“Social history in the United States between 1870 and 1930 was characterized by a search for order,” said Allen. “It was a period characterized by the maturation of the Industrial Revolution, rapid urbanization and growing social problems. There was a widespread sense of disorder, and many felt there was a need to do something about it.” This collective malaise made eugenics the “magic bullet” of its day.

As American as Apple Pie

Eugenics peaked during the 1930s, at the height of the Depression. Interestingly, the new science and its attendant policy program appealed to members of all social classes. Eugenics validated wealth and privilege as the birthright of the genetically superior. The rising union movement, arguably the greatest threat to the status quo, was rife with Italians and Jews, two of the groups deemed “socially inadequate.” At the same time, with competition over scarce jobs at an all-time high, eugenics fed into the anti-immigrant sentiments of the working class.

With their blatant racism, xenophobia, questionable ethics and tendency to blame the victim, eugenicists might impress us today as screwballs on the lunatic fringe of science. Actually, however, nothing could be further from the truth.

Theodore Roosevelt was just one of many highly regarded Americans who praised the science of eugenics. In his 1913 letter to Charles Davenport, Roosevelt wrote: “Any group of farmers who permitted their best stock not to breed, and let all the increase come from their worst stock, would be treated as fit inmates for an asylum.” Alexander Graham Bell himself served on the Board of Scientific Directors of the Eugenics Record Office, founded in 1910 as the country’s leading eugenics research and education center. In its day, the eugenics movement was mainstream and as American as apple pie.

Scientific Underpinnings

Taking its cue from advances in agriculture, eugenic science also emulated the efficiency movement in industry. “Eugenic reproductive scientists were the counterparts of the efficiency experts on the factory floor,” said Allen. In the early 20th century, farmers and industrialists alike turned to science for guidance in bringing about control and standardization.

If popular for the wrong reasons, eugenics nonetheless increased our understanding of human beings as genetic organisms. Davenport and other eugenically-minded human geneticists helped illuminate the genetic origins of a number of physical disabilities, for example, including color blindness, epilepsy and Huntington chorea. Instead of proceeding cautiously, however, Davenport and his colleagues applied the new genetic paradigm zealously and indiscriminately. All human intellectual and personality traits, they hypothesized, were ultimately reducible to heredity.

As it turns out, their methods were just as flawed as their theories. Commenting on a family study of epilepsy — rigorous for its time — Allen pointed to two methodological weaknesses: First, humans have small families compared to animals, which makes statistical modeling difficult at best. Second, research in the early 20th century was hampered by a lack of accurate information. Interviews, anecdotal accounts, and rumor were the stuff of scientific data at a time when medical record keeping was relatively haphazard.

Finally, the absolute privileging of heredity over environment trapped eugenicists in a form of circular thinking. If pellagra, a condition caused by niacin (vitamin B3) deficiency, was observed to run in a family, the disease must be genetically based, they thought, rather than rooted in poverty and shared nutritional deficits.

A Call for Balance

Allen warned that the genetic myopia of yesterday’s science is being recapitulated today. From shyness to homosexuality and from depression to infidelity, everything is in our genes, if we’re to trust the information in recent cover stories in Time, Business Week, and U.S. News & World Report, among other reputable publications. “These claims are as tenuously based now,” asserted Allen, “as they were in the 1920s.”

The most serious dangers of all, however, lie in the policy implications of the new genetic determinism. If a person is genetically predisposed to sensitivity to smog, why should the government commit itself to cleaning it up? Why should parents bother spending time and energy on raising a child who carries the criminality gene? And why should insurance companies pay for the care of those with genetic mutations that “cause” bipolar disorder, diabetes or cancer? We’ve seen this unhealthy marriage of scientific and political agendas before, Allen said.

Allen also argued for a more integrated approach to research. Social and biological scientists have been studying different groups, and never the twain shall meet. We’d gain a more complete picture of problems and their causation by funding integrated studies that join the perspectives of sociologists and biologists, he said. This approach would correct the current fixation on genes as bearers of the whole truth.

When it comes to the lessons of eugenics, Allen said the “that was then, this is now” attitude is worst of all. It can, indeed, happen today. He concluded by encouraging scientists who reject simplistic genetic ideas to step forward, articulate a balanced point of view and oppose the “geneticization” of the public discussion and its potentially dangerous consequences, sooner rather than later.


The Enigma Surrounding the Brain’s Amygdala

An old black and white diagram denoting parts of the human brain.

By studying the amygdala’s function in both human and animal brains, we can better understand drug treatment and addiction.

By Brian A. McCool, PhD

Image courtesy of Sergey Kohl via stock.adobe.com.

About 180 years ago, not long after the New York Academy of Sciences was founded as the Lyceum of Natural History in New York City, the amygdala, the pair of almond-shaped structures lying deep within the temporal lobes, was first described as a discrete anatomical entity. But the behaviors governed by the left and right amygdala have remained subject to interpretation ever since.

While it is generally accepted that the amygdala is somehow responsible for regulating emotions, diverse experimental systems and approaches have until now prevented a unified appreciation of its function. To contribute to the on-going, evolutionary process that is shaping our understanding of this important brain region, 205 basic and clinical scientists recently attended an important conference on the subject in Galveston, Texas.

Ultimately, it was agreed that the amygdala generally appears to be an arbitrary collection of some 20 different cell groups that can be divided into at least four behaviorally functional units. Together, these units determine how the brain integrates sensory and cognitive information to interpret the emotional significance of an event or thought.

Regulating Human Behaviors

Several scientific sessions focused on the behaviors regulated by the human amygdala. A number of the sessions highlighted the amygdala’s role in the emotionally motivated assessment of environment and memory.

The University of Iowa’s Ralph Adolphs, PhD, described studies of patients with amygdala damage indicating that this brain region is active when individuals make socially relevant subjective judgments, in this case related to the interpretation of facial expressions associated with negative emotions. Importantly, the interpretation or expression of declarative “facts” regarding negative emotion appears intact in these individuals.

The Amygdala’s Role in Cognitive Processing

Using PET scans, Raymond Dolan, MD, Institute of Neurology in London, U.K., found that this subjective interpretation of negative facial expressions by normal individuals did not require cognitive recognition of the face. Together, these findings suggest that the amygdala’s role in the cognitive process relating to these judgments could occur independent of attention or awareness.

A number of presentations focused on the potential role of the amygdala in human behavioral and neuropathologic disorders. For example, Scott Rauch, MD, PhD, Massachusetts General Hospital, Wayne Drevets, MD, National Institute of Mental Health, and Michael Trimble, MD, Institute for Neurology, London, U.K., reported that amygdala activity or anatomy is altered in a number of different psychological/neurological disorders. A presentation by Anna Rose Childress, PhD, VA Addiction Treatment Research Center, University of Pennsylvania, clearly illustrated this point. Childress presented data indicating that experimentally induced drug craving in recovering cocaine addicts was associated with increased activity in both right and left amygdala and in anterior cingulate cortex.

Importantly, preliminary studies indicate that both drug-based and behavioral interventions – treatments that attenuate self-reported desire for cocaine – appear to inhibit amygdala activation during these craving states. However, in contrast to pharmacologic treatment, behavioral modification therapy increased brain activity in the orbito-frontal cortex, suggesting that the relative levels of activity between the “emotional” amygdala and the “cognitive” cortex may be an important determinant in the process leading to both drug addiction and recovery.

Animal Models of Behavior

Extensive studies of the amygdala in several mammalian species have provided substantial insight into animal correlates of human amygdala function. This is especially true of the non-human primate studies presented by David Amaral, PhD, University of California, Davis.

In these studies, experimental bilateral lesions in the amygdala of adult nonhuman primates demonstrate that this brain region is intimately involved with the subjective evaluation of novel environmental or social stimuli. Specifically, animals with lesions were less reluctant than normal controls to approach and interact with novel objects, and were more “uninhibited” during social interactions with unknown monkeys.

While these results clearly complement findings in humans with amygdala damage, Amaral reported that, in contrast to adults, bilateral lesions in infant monkeys did not affect responsiveness to novel objects, but did cause more reluctance to participate in social interactions. These latter findings emphasize our lack of understanding regarding the long-term influence of social and physical development on amygdala function and underscore the need for additional investigations in non-human primates.

A number of reports focused on the behavioral role of the amygdala in rodents. Historically, studies using this animal system have provided the impetus for most of the human studies described above. In addition, current findings are beginning to provide a detailed understanding of the wealth of neurochemical and cellular mechanisms that appear to influence amygdala-dependent emotional learning in rats and, presumably, humans.

For example, Jim McGaugh, PhD, Center for the Neurobiology of Learning & Memory, University of California Irvine, presented an overview of his work in rats. It implicates specific neurotransmitter systems, namely those for norepinephrine and acetylcholine, as chemical mediators regulating amygdala activity related to emotional and stress-influenced memory formation.

Cellular & Molecular Insights into Amygdala Function

Similarly, Michael Davis, PhD, Emory University, presented recent findings indicating that amygdala glutamate receptors, specifically the N-methyl-D-aspartate isoform, are intimately involved with the ability of rats to extinguish fear-associated memories (also known as “extinction”). Importantly, manipulations of this particular membrane protein can enhance extinction, suggesting that this receptor may be an attractive target for therapies designed to resolve memories that elicit pathologic fear, as in posttraumatic stress disorder. Together, these findings emphasize the complexity and apparent wealth of neurochemical mechanisms that govern neuronal activity in the amygdala.

The most obvious advantage that animal models provide over studies in humans or in non-human primates is the relative ease with which basic biologic processes may be directly investigated. Denis Paré, PhD, Rutgers, The State University of New Jersey, described the unique properties of a particular amygdala subdivision, the intercalated cell bodies. He concluded that this subdivision might help establish the timing and context of in-flowing sensory information, potentially representing a physiological mechanism that would help distinguish a “fearful” event from an innocuous one.

Similarly, Hans-Christian Pape, Ph.D., Otto von Guericke University, Magdeburg, Germany, presented data indicating that amygdala neurons have intrinsic, rhythmic membrane oscillations that may aid in their communications with other brain regions.

Finally, Paul Chapman, PhD, Cardiff University, Cardiff, Wales, U.K., provided an overview of our knowledge regarding the long-term alterations in amygdala neurotransmission associated with fear learning. Chapman noted that the mechanisms underlying memory-related, long-term amygdala adaptations appear to be distinct from those involved in other brain regions.

These findings emphasize that we are just beginning to appreciate the fundamental physiology regulating the amygdala’s involvement with “emotional learning.”



About the Author

Brian A. McCool, PhD, is an assistant professor in the Department of Medical Pharmacology & Toxicology at the Texas A&M University System Health Science Center in College Station, Texas.

The Scientific Mechanics of Cancer

A graphical representation of a damaged DNA helix.

New research illuminates the role of genetic mutations in the diagnosis of cancer. This research has resulted in some promising treatments.

Published June 1, 2002

By Fred Moreno, Dana Van Atta, Jill Stolarik, and Jennifer Tang

Image courtesy of sutadimages via stock.adobe.com.

Cancer researchers are getting ever closer to “understanding the molecular events that underwrite the transformation of a normal cell” into one capable of causing the deaths of millions of people around the world each year, Harold Varmus, MD, recently told a filled auditorium at Hunter College in New York City.

A Nobel laureate, former Director of the National Institutes of Health, and current President of Memorial Sloan Kettering Cancer Center, Dr. Varmus spoke at a mid-March gathering sponsored by The New York Academy of Sciences’ (the Academy’s) Microbiology Forum. “We are working toward understanding the molecular and genetic underpinnings of cancer,” he said.

Armed with this knowledge, physicians will be able to “assess the risk that an individual will develop cancer, prevent disease, diagnose it at the molecular level and, most importantly, treat it with new therapies that are much more precise than in the past.”

Varmus described a series of events in cancer research that have contributed to the understanding of oncogenesis, the changes that turn a normal cell into a malignant one. He spoke first about early events in the history of molecular oncology, then about how some of this knowledge has been applied to the development of a specific cancer therapy, and he concluded with a description of recent work in his own lab developing mouse models of cancer.

Cancer and Genetic Mutations

Cancer has its roots in genetic mutations — either changes to the genetic code of non-germ cells (somatic mutations), which may occur spontaneously or in response to environmental agents, or mutations inherited through germ cells. The latter happens much less frequently, Varmus noted. A single mutation may be the first step in the process of oncogenesis, but many other cellular processes must subsequently occur for cancer to develop, he explained. Initiation is the moment normal gene expression is altered as a result of the mutation. If this altered cell fails to maintain normal cellular discipline, tumor maintenance begins. If the altered cell becomes more oncogenic, the process is called progression.

Cancer cells then undergo a loss of growth control — an exaggerated response to growth signals or a failure to respond to inhibitory signals. They also escape the signals that induce apoptosis, or cell death. Cancer cell growth is dependent on specific interactions between these cells and the host, such as angiogenesis, the induction of blood vessels to the tumor. Genetic instability gives rise to additional mutations; the cancer cell becomes more oncogenic and may finally develop the capacity to break away, travel to distant sites in the body and colonize them.

In considering potential targets of cancer therapies, Varmus said many researchers have directed their efforts at tumor maintenance, the cellular functions necessary for cancer cells to remain in an oncogenic state. He noted that Steve Martin, a researcher at UC Berkeley, published in 1970 the results of a series of experiments conducted with avian cells.

The Impact of Temperature on Tumor Cells

The cells were infected with a virus that was capable of converting normal cells to those with a heightened potential for division or growth (the src mutant of the Rous sarcoma virus). Dr. Martin induced many mutations in the virus stock and found a particular mutant form that would transform cells to an altered state only when the ambient temperature was 35 degrees Celsius or lower. When he took tumor isolates and raised the temperature above 35 degrees, he found that the cells returned to normal.

“With this work, Martin demonstrated that tumor cells require something — in this case temperature — to initiate and maintain the tumor state,” Varmus said. “This experiment defined the maintenance function.” Such functional mutants also allowed researchers to make the first genetic probe for a vertebrate gene.

Since 1970 researchers have made many fundamental discoveries about the role that genes play in cancer. They have identified specific genes — many of them encoding enzymes — that, when mutated, contribute to cellular transformation and tumor maintenance, as well as other genes that govern the integrity of the genetic code. Through this, they have discovered that the development of cancer depends on many kinds of mutations — inherited, somatic and multiple mutations. They also have discovered the biochemistry and physiologic properties of cancer gene products.

In addition, researchers have explored transgenes — foreign genes introduced into an organism in the laboratory — and have targeted mutations in mouse gene lines. And some of this genetic information is now used, in a limited way, in patient care, Varmus said. An understanding of genetic information was central to the development of one recently heralded new cancer therapy, Gleevec, a signal transduction inhibitor for patients with chronic myelogenous leukemia (CML). This is a common adult leukemia, with 6,000 new cases a year in the United States.

The Philadelphia Chromosome

Patients may remain in the early chronic phase, the phase in which the disease progresses slowly, for about five years. When the disease enters blast crisis, Varmus said patients survive about six months, on average.

Virtually all patients with CML have a mutation called the Philadelphia chromosome, in which a piece of chromosome 9 is joined to chromosome 22. At the point where the two chromosomes make contact, the abl oncogene fuses onto the bcr gene. “This fusion gene, bcr-abl, encodes an enzyme (an activated tyrosine kinase) that drives normal myeloid cells into the leukemic state and keeps them there,” he explained.
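To make the rearrangement concrete, here is a minimal toy sketch in Python of the general logic described above: the 5′ portion of one gene's exon list is joined to the 3′ portion of another's, producing a single fusion transcript the way bcr-abl is formed. The exon labels and breakpoint positions below are invented placeholders, not the actual CML breakpoints.

```python
# Toy illustration of a translocation producing a fusion transcript.
# Exon names and breakpoints are hypothetical placeholders, not real data.

bcr_exons_chr22 = ["BCR_e1", "BCR_e2", "BCR_e13"]   # 5' gene segment (kept)
abl_exons_chr9 = ["ABL_e1", "ABL_e2", "ABL_e11"]    # 3' gene segment (joined)

def fuse(five_prime: list[str], three_prime: list[str],
         breakpoint_5p: int, breakpoint_3p: int) -> list[str]:
    """Join exons upstream of one breakpoint to exons downstream of another."""
    return five_prime[:breakpoint_5p] + three_prime[breakpoint_3p:]

bcr_abl = fuse(bcr_exons_chr22, abl_exons_chr9, breakpoint_5p=3, breakpoint_3p=1)
print("Fusion transcript:", "-".join(bcr_abl))  # BCR exons followed by ABL exons
```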

Gleevec fits into the active site of the enzyme and has a powerful inhibitory effect not only on the enzyme encoded by the bcr-abl fusion gene, but also on the products of two other oncogenes: the kit oncogene and the platelet-derived growth factor (PDGF) receptor. Nearly all patients in the early phase of CML respond when treated with Gleevec. It has produced striking remissions in patients with CML as well as in those with another cancer, gastrointestinal stromal tumors.

A Promising Treatment

After 10 days of treatment with Gleevec, patients with CML whose bone marrow had shown evidence of disease throughout have had their marrow return to normal, with no evidence of the Philadelphia chromosome. Patients can develop resistance to the drug, especially those with late-phase CML. It’s believed that this resistance is mediated by further mutations in the bcr-abl gene. “Patients’ responses to Gleevec demonstrate that bcr-abl activity is key to tumor maintenance, and that maintenance functions in general are potential therapeutic targets,” Varmus said.

“This success has emboldened those of us who work with mouse models to define tumor maintenance functions,” said Varmus. “In my lab we are working with a gene, ras, that is involved in a large number of non-small-cell lung cancers, which are a very common cause of cancer mortality.”

Members of Varmus’s lab are working with mutant mice that carry a transgene, a mutant k-ras gene expressed in a specific type of lung cell (type 2 alveolar epithelial cells). The mutated gene was fused with a genetic unit called a tet operon, which turns the mutated gene on in the presence of the antibiotic doxycycline.

Using these techniques, researchers in Varmus’s lab are able to incite a proliferation of type 2 pneumocytes — tumors — in mice when doxycycline is administered. “If doxycycline is stopped after a few days, the tumor disappears, and there is little evidence of previous cell proliferation,” he said. These experiments suggest that this type of tumor grows in response to mutations in the ras gene, he concluded.
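As a rough illustration of the on/off logic just described, the sketch below models a tumor-cell population that expands only while doxycycline keeps a hypothetical transgene switched on and regresses once the drug is withdrawn. The growth and regression rates are made-up values chosen for illustration, not measurements from Varmus's lab.

```python
# Illustrative sketch of a doxycycline-inducible (tet-on) oncogene switch.
# All rates and durations are hypothetical, not experimental values.

def simulate_tet_on(days_on: int, days_off: int,
                    start_cells: float = 1.0,
                    growth_per_day: float = 0.5,
                    regression_per_day: float = 0.6) -> list[float]:
    """Return daily relative tumor-cell counts: expansion while doxycycline
    keeps the transgene on, decay after the drug is withdrawn."""
    counts = [start_cells]
    for _ in range(days_on):          # transgene ON: proliferation
        counts.append(counts[-1] * (1 + growth_per_day))
    for _ in range(days_off):         # transgene OFF: regression
        counts.append(counts[-1] * (1 - regression_per_day))
    return counts

if __name__ == "__main__":
    for day, n in enumerate(simulate_tet_on(days_on=7, days_off=7)):
        print(f"day {day:2d}: {n:10.2f} relative tumor cells")
```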

Also read: Cancer Metabolism and Signaling in the Tumor Microenvironment

A Personal Tale of Post-Infectious Encephalitis

A black and white photo of a man analyzing a sample under a microscope, likely taken in the 1950s or 1960s.

Encephalitis, often called sleeping sickness, made an appearance in Buffalo, New York, in 1946. Among the victims who survived was six-year-old Trumbull Rogers, now Associate Editor of the Annals of the New York Academy of Sciences. Below are his recollections of the life-affecting experience.

Published April 1, 2002

By Trumbull Rogers

Paul M. Versage, Hospital Corpsman First Class, USN, examines a blood sample under a microscope. Photograph released September 24, 1963. This was part of an effort to study trachoma, Japanese encephalitis and other infectious diseases. Image courtesy of National Museum of the U.S. Navy/Wikimedia Commons via Public Domain.

My mother’s entry under “Illnesses” in my baby book was precise: I contracted measles on April 6; improved by April 11; but became extremely drowsy by the 12th. Dr. W. Pierce Taylor, our pediatrician, called in another family friend, Dr. Douglas P. Arnold. Together they concurred in a diagnosis –– encephalitis.

Encephalitis, an inflammation of the brain, comes in many varieties, some named for where they were first diagnosed –– Central European, Murray Valley [Australia], Japanese or St. Louis encephalitis. Currently, we are most aware of the variety known as West Nile Virus, an arthropod-borne (arbovirus) infection that made its first appearance in the United States in the summer of 1999. (See West Nile Virus: Detection, Surveillance, and Control, Dennis J. White and Dale L. Morse, Eds., Annals of the New York Academy of Sciences, Vol. 951, 2001, for more on this virus.)

But not all forms of encephalitis are caused by the bite of the “dread tsetse fly” or Culex mosquito. One example is post-infectious (in my case, postmeasles) encephalitis, which is an acute disseminated encephalitis characterized by perivascular lymphocyte and mononuclear cell infiltration and demyelination. It is thought to result from the weakening of the immune system caused by the original measles virus.

1 in 1,000 Cases

According to a 1997 article by Dr. Michael J. McKenna, of the Massachusetts Eye and Ear Infirmary in Boston, post-measles encephalitis “occurs approximately 1 in 1,000 cases. Usually it manifests three to four days following the acute illness and is clinically characterized by seizures, obtundation and coma. The mortality rate of central nervous system involvement is approximately 25%. Half of those who survive have permanent sequelae, including mental retardation, seizures, motor abnormalities and deafness.” (“Measles, Mumps, and Sensorineural Hearing Loss,” by Michael J. McKenna, Immunologic Diseases of the Ear, Joel M. Bernstein et al., Eds., Annals of the New York Academy of Sciences, Vol. 830, 1997, p. 292)
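Taking McKenna's quoted rates at face value, a short worked example shows what those figures imply; the cohort size here is purely hypothetical and chosen only to make the arithmetic concrete.

```python
# Worked example using the rates quoted from McKenna (1997).
# The cohort size is a hypothetical choice for illustration.

measles_cases = 1_000_000
encephalitis_rate = 1 / 1_000       # ~1 post-measles encephalitis case per 1,000
mortality_rate = 0.25               # ~25% mortality with CNS involvement
sequelae_rate_in_survivors = 0.50   # half of survivors have permanent sequelae

encephalitis_cases = measles_cases * encephalitis_rate       # 1,000
deaths = encephalitis_cases * mortality_rate                  # 250
survivors = encephalitis_cases - deaths                       # 750
with_sequelae = survivors * sequelae_rate_in_survivors        # 375

print(f"Encephalitis cases:      {encephalitis_cases:,.0f}")
print(f"Deaths:                  {deaths:,.0f}")
print(f"Survivors with sequelae: {with_sequelae:,.0f}")
```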

After Drs. Taylor and Arnold made their joint diagnosis, they arranged my transfer to Children’s Hospital in Buffalo. I have no memory of any of this prior to finding myself lying in a large room with high windows. However, I remember hearing the occasional echo of a door closing somewhere far away, receding footsteps and distant voices. I was alone in a place that shifted each time I drifted into consciousness. Events that no doubt spanned only an hour or two seemed like several days.

Little Hope for Survival

Early on, the doctors told my parents there was little hope for my survival aside from one slim chance: a new drug had shown some success in treating the condition, but it was experimental. They wanted to try it on me, if my parents were willing. I’m not certain, but this drug was probably a corticosteroid, even though it was not widely available for human use at the time. Although it has never been proved that steroids are effective against post-measles encephalitis, many physicians use them today in treating this disease. I’m told my parents’ decision to let the doctors use the drug –– whatever it was –– saved my life.

I gained consciousness in my hospital room several days or perhaps a week after my arrival there. But I was not completely cured. My left arm was paralyzed and I had lost the ability to speak. Although there may have been other symptoms, these are the ones I recall most vividly.

By paralyzed, I mean that my left arm, when left to its own devices, flexed so that my curled fingers rested against my left shoulder. To keep the arm straight, the doctors attached it to a splint, which made lying on my left side awkward and uncomfortable. It also meant that I needed assistance when I wanted or was required to turn over.

When my nurse changed the bandage, at least once daily, she positioned my elbow near the center of the splint. She then forced my arm down until it lay flat. Then she wrapped the clean bandage around both, securing the arm in place. I don’t recall feeling any pain during this process, though later I would.

Returning Home

In the matter of talking, I remember being restricted to two rudimentary forms of communication: “Uh-huh” (for yes) and “Uh-uh” (for no). This condition lasted for what, in retrospect, seems like a long time, an impression that is borne out by my baby book. It has me beginning to talk on April 27, the day after my seventh birthday and 16 days after I entered the hospital.

After recovering from post-measles encephalitis, the author (foreground, right) accompanied his family to Christmas Cove, Maine, to spend the summer of 1946 near where his godparents lived. Left to right: the author’s sister, Grace Wilcox; brother, Danforth William; and mother, Grace Danforth Rogers.

On Friday, May 3, I was taken home and put to bed in one of the two second-floor bedrooms that looked out on the backyard. I was now freed of my hospital existence and could continue my recovery in the familiar surroundings, sounds and smells of home. But I still wore my splint and needed around-the-clock nursing.

I have no idea how long my convalescence lasted, but it probably continued until the end of May. Although I’m sure my brother and sister were curious about what had happened to me, I don’t remember seeing very much of them. I’m sure their visits were kept to a minimum.

Soon after my recovery, I was taken to a room that I recall being decorated with cartoon-like characters and was hooked up to an electroencephalograph. There must have been at least two sessions, because I can remember more than once picking patches of dried glue out of my hair, like scabs off my scalp.

“Awakened” with L-dopa

Although I was aware of having been close to dying, my child’s mind had no conception of what that really meant. So, even in adulthood, when I said the words it was like mouthing a memorized set piece that had no core connection to me. People sympathized, and I enjoyed that, but inwardly I felt like a fraud.

That feeling ended on December 7, 1998, however, when I watched “The True Story of Awakenings” on the Discovery Channel. The program included some of Dr. Oliver Sacks’ (Awakenings, HarperCollins, New York, 1973) “home movies” of his “frozen-in-time” patients after he “awakened” them with L-dopa.

But more arresting were the images of other faces, those forever contorted into idiot expressions (“Did I look like that?”) and “frozen” bodies; hearing a sister’s tale of her brother’s loss of artistic potential; seeing a young girl go through an ordeal similar to my experience.

Watching this program was like seeing myself as I had been, as well as how my life might have gone. This revelation –– of what I was exposed to and then escaped from without damage, of how fortunate I was in my doctors and nurses and in my parents’ courage in letting me be the guinea pig –– still resonates.

Also read: The Rising Threat of Mosquito & Tick-Borne Illnesses

The Ethics and Morality of Modern Biotechnology

A gloved hand adjusts a sample under a microscope inside a science research lab.

Scientists are pondering ways to balance the immense potential of biotechnology, while also being responsible morally and ethically.

Published April 1, 2002

By Fred Moreno, Dana Van Atta, Jill Stolarik, and Jennifer Tang

Image courtesy of Panupat via stock.adobe.com.

Embryonic stem cell research. Cloning. Prenatal genetic screening. Genetically modified foods. What used to be thought of as impossible is not only probable — it’s now being done.

That’s why it’s more important than ever to develop regulations to ensure that today’s tools of the life sciences –– and those surely to be developed in the future –– are used for the betterment of mankind, not for our demise. These issues were the focus of a talk by Francis Fukuyama, PhD, called The Political Control of Human Biotechnology: National and International Governance Issues, held on March 4 at The New York Academy of Sciences (the Academy).

“We’re on the cusp of a major period of advance in biology,” said Fukuyama, Bernard Schwartz Professor of International Political Economy at the Paul H. Nitze School of Advanced International Studies of Johns Hopkins University, who is well known for his 1992 book The End of History and the Last Man. “We really need to start thinking seriously about a very different kind of governance structure for human biotechnology so that we’ll benefit from the great good that it promises, but also avoid some of the ethical and moral aspects of that revolution.”

Fukuyama identified four areas of pronounced advances –– discussed below –– that raise broad issues and concerns.

The Cognitive Revolution

Francis Fukuyama, PhD

How much of human behavior can be explained by genes? By the middle of the 20th century, both the social and life sciences had agreed that culture influences human behavior more than does nature. But a revolution in the life sciences later ensued, generating the field of behavioral genetics. Studies were conducted comparing the behaviors of monozygotic twins who were raised in different environments to determine how much of an individual’s personality, intelligence and other traits could be attributed to genetic makeup.

These investigations triggered a great deal of controversy. “People don’t like to be told that genes determine any part of their behavior,” said Fukuyama. But he said modern biology has even more controversies in store.

“In the next generation, we won’t have to rely only on behavioral genetics to uncover connections between genes and behavior,” he noted. “We’ll start to uncover molecular pathways that exist between certain alleles and behavioral variations. I don’t know what the outcome will be,” continued Fukuyama, “but with the discovery of causal mechanisms linking genes and behavior, it would potentially open these (molecular) pathways to manipulation.”

Neuropharmacology

A struggle for recognition, driven by feelings of status and worth, is the basis for all political behavior, Fukuyama said. “A lot of this is related to the dignity and self-worth that human beings have been programmed by evolution to feel, and that’s the way we sort ourselves out in society.”

By providing a “medical shortcut” to alter these feelings, Fukuyama noted, psychotropic drugs may have important consequences for control of both individual and political behavior. Drugs such as Prozac, for example, work by inhibiting the reuptake of serotonin in the brain. And serotonin levels, he suggested, influence feelings such as dignity and worth.

Ritalin is of even greater ethical concern. It is prescribed for the control of attention deficit hyperactivity disorder (ADHD), “a squishy diagnosis, and a perfect example of a socially constructed disease that wasn’t even recognized two or three generations ago,” said Fukuyama. While the drug has indeed been beneficial for many children, there are others for whom ADHD is merely the tail of the normal distribution of behavior.

Fukuyama noted that drugs like Ritalin alter what we regard as the foundation of virtue and character. “If we believe that human character is formed out of the ability to overcome adversity through training and self-mastery of one’s impulses, what we’ve done is create a medical shortcut around this.”

And psychotropic drugs like Prozac and Ritalin are only the tip of the iceberg, he added. In the next generation, new drugs may be created that will improve memory and increase the threshold for pain.

Life Extension

It’s already happening today: The birth rate in Japan and many European countries is declining, while the ratio of older citizens to younger ones is increasing. Some European nations are witnessing a decrease of more than 1 percent of their populations each year. And the size of Japan’s work force peaked in 1998.

Medical advances in the next half-century may add years, if not decades, to the human life span. But even without these advances, such age shifts are destined to have a profound impact on national economies. For one thing, where will the money come from to pay all of these retirees their social security pensions?

Another area to feel an impact is foreign policy, explained Fukuyama. In the next 50 years, Europe and Japan will be full of older individuals, while most developing nations will have populations where the median age is in the low 20s. To keep their economies going, Europe and Japan will have to import workers from developing countries to supplement their work forces. “These workers will be culturally different,” noted Fukuyama. “Those countries that can successfully assimilate people from different backgrounds will do the best.”

Moreover, Fukuyama asserted that dramatic age shifts at the population level will have an enormous impact on the creation of new ideas. “Generational turnover is absolutely critical to innovation and social change,” he said.

Genetic Engineering

Technologies are available that enable doctors to screen embryos for genes linked to certain diseases, select one lacking the errant genes and implant it in a woman to ensure the development of a relatively healthy baby. The combination of these technologies with the eventual discovery of genes for such traits as height and intelligence may open a Pandora’s box of possibilities for “designer humans.”

But just because we’ll have the ability to accomplish this doesn’t mean we should. “Human rights depend on human nature,” said Fukuyama. “If you have a technology that is powerful enough to change the underlying essence of what human beings are, then we will inevitably change the nature of those rights. There’s too much casualness about redesigning human beings and improving them genetically.”

A New Public Policy

So how do we regulate such technology to ensure that it’s put to the best use? Fukuyama asserted that current regulatory bodies, such as Congress, the National Institutes of Health, and the Food and Drug Administration, “are completely inadequate to deal with the choices we’ll have to face in the future. Legislative bans on broad areas of science and technology are not an appropriate model. We need a better regulatory structure.”

International regulation is another possibility, but such governance must be created – and succeed – on a national level first. One promising effort is the establishment of the 17-member President’s Council on Bioethics, which held its first meeting last January – with Fukuyama as a member – but this is a deliberative and advisory body with no regulatory function.

“In addition to debating moral and philosophical issues,” concluded Fukuyama, “we can now begin a very concrete discussion about how we can make use of what is obviously a tremendously valuable and promising set of technologies – but have them work in ways that help humans to flourish, rather than the reverse.”

Also read: Agricultural Biotechnology in Developing Countries

Opportunities and Challenges in Biomedical Research

A woman examines a sample under a microscope in a science research lab.

While there have been major advances in biomedical research in recent years, this has also presented scientists with new challenges.

Published April 1, 2002

By Rosemarie Foster

Image courtesy of DC Studio via stock.adobe.com.

In Boston’s historic Fenway neighborhood, just beyond Back Bay, each spring heralds an annual ritual of renewed life. The Victory Gardens come abuzz with activity and abloom with burgeoning buds. Canoeists charge to the nearby Charles River. And sluggers at Fenway Park swing from their heels, cast in the spell of a 37-foot-high wall called the “Green Monster” that rises beyond the tantalizingly shallow left field.

Much history has been recorded inside the boundaries of Boston’s legendary baseball venue. But the seeds of a different kind of history –– that of 21st century biomedical science –– are being planted in the Fenway district this spring. Two important new scientific research facilities being built –– an academic addition to the Harvard Medical School and a commercial laboratory planned by pharmaceutical giant Merck & Co., Inc. –– will no doubt help shape biomedical advances for decades to come.

Merck is constructing its 11th major research site –– Merck Research Laboratories-Boston –– in the heart of the district. The company hails the facility as a multidisciplinary research center devoted to drug discovery. The state-of-the-art structure will cover 300,000 square feet across 12 stories above ground and six below, and Merck hopes it will lure some 300 investigators to pursue studies within its walls. The building is scheduled for completion in 2004.

Harvard’s own new 400,000-square-foot research building is under construction just 50 feet from the Merck site. With a design that fosters interaction among scientists, Harvard’s new facility will build on the university’s commitment to high-throughput technologies. It’s expected to be operational in 2003.

The Interrelationship of Academic and Commercial Research

Although the two facilities are some way from completion, they’ve already exposed one of the major issues –– the interrelationship of academic and commercial research –– that continue to challenge biomedicine. Because the new Merck facility will sit so close to Harvard Medical School, some scientists fear it may create tension between nearby university investigators and industry researchers.

“The Merck laboratories, as a commercially driven research organization, may pay better salaries, have better equipment, have a better capacity for high-throughput screening and medicinal chemistry, and have other facilities that an academic medical center typically does not have available,” explained Charles Sanders, MD, former Chairman and CEO of Glaxo, Inc. and former Chairman of the Board of The New York Academy of Sciences (the Academy). “Whether this will create a source of problems for Harvard and its scientists remains to be seen. On the other hand, it could be a great resource if the academic-industrial relationship is managed well.”

Such tensions are likely to continue as emerging trends in biomedical research offer investigators both greater opportunities and increasing challenges. Academia and industry are partnering in ways they never have before. New high-throughput technologies are generating more data than previously thought possible. And scientists from a variety of fields must now cross disciplinary lines –– an approach some dub “systems biology” –– to make significant progress in conquering such diseases as cancer and AIDS.

New Approaches

A number of other biomedical research organizations have already set the stage for the new approaches to be incorporated into the Merck and Harvard facilities. In 1998, Stanford University launched an enterprise called “Bio-X” to facilitate interdisciplinary research and teaching in the areas of bioengineering, biomedicine and the biosciences. In January 2000, Leroy Hood, MD, PhD, created the Institute for Systems Biology in Seattle –– a research environment that seeks to integrate scientists from different fields; biological information; hypothesis testing and discovery science; academia and the private sector; and science and society.

Some say it’s the “golden age” of biomedical investigation. The evolution that has led to this new age was the subject, along with related issues, of a gathering of biomedical researchers at the Academy last April. Hosted by the American Foundation for AIDS Research (amfAR), the symposium was called The Biotechnology Revolution in the New Millennium: Science, Policy, and Business.

“This meeting did an excellent job of showing how the nature of biomedical research has changed in the last 25 years,” explained Rashid Shaikh, PhD, the Academy’s Director of Programs, “not just quantitatively, in the amount of information we can generate, but also qualitatively, in the way the work is done. And this is a rapidly evolving process.”

A Quickened Pace

Much of the recent change in biomedical research is the result of a pace of investigation that has accelerated during the last quarter century – thanks in large part to recombinant DNA technology created in the 1970s. This technology received a boost of support when the war on cancer was declared that same decade.

“Once recombinant DNA technology appeared, there was an enormous shift in molecular biology,” said David Baltimore, PhD, Nobel laureate and President of the California Institute of Technology, who chaired the amfAR symposium. “From a purely academic enterprise, it turned into one that had enormous implications for industry.”

Early on, the infant biotechnology enterprise focused on cloning to manufacture drugs, added Baltimore. Cloning was then employed in the search for targets for a new generation of small-molecule drugs. The need for chemical libraries soon developed, followed by a demand for high-throughput screening technologies. Add to that the wealth of information gleaned from the Human Genome Project.

Today investigators have more data than they ever did before. With the advent of high-throughput screening technologies, they also have speedier methods at their disposal to generate even more data. The nascent field of proteomics is expected to propel biomedicine even further. But with this heightened pace of research come new challenges.

For one thing, data are being generated faster than they can be analyzed and understood. Novel technologies have spawned a new field called bioinformatics: the analysis of all the data generated in the course of biomedical investigation. “We used to be able to look at the expression of one gene at a time,” said Shaikh. “But thanks to technologies (such as microarray systems), we can now analyze the expression of thousands of genes at once.”
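As a sketch of what analyzing thousands of genes at once can look like in practice, the snippet below computes a log2 fold change for every gene in a simulated expression matrix. The data are randomly generated stand-ins rather than real microarray output, and the two-fold threshold is an arbitrary choice for illustration.

```python
# Minimal sketch of a genome-wide expression comparison.
# The matrix is random placeholder data, not an actual microarray experiment.
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_control, n_treated = 10_000, 5, 5

# Rows = genes, columns = samples (arbitrary expression units).
control = rng.lognormal(mean=5.0, sigma=1.0, size=(n_genes, n_control))
treated = rng.lognormal(mean=5.0, sigma=1.0, size=(n_genes, n_treated))

# One log2 fold change per gene: all 10,000 genes handled in a single pass.
log2_fc = np.log2(treated.mean(axis=1) / control.mean(axis=1))

# Flag genes whose average expression shifts more than two-fold either way.
changed = np.flatnonzero(np.abs(log2_fc) > 1.0)
print(f"{changed.size} of {n_genes} genes exceed a two-fold change")
```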

High Demand, Low Supply of Bioinformatics Professionals

Bioinformatics professionals –– those who perform the data analysis –– are in high demand but short supply, however, creating a problem for some research centers. Because they are so hard to come by, some institutions are sharing bioinformatics staff until a new generation of professionals can be educated and enter the workforce.

A second question that comes to mind is, “Who owns all these new data?” Are they the property of the individual researcher? The university he or she works for? The pharmaceutical company that sponsored the work? Or, if the studies were supported by public funds, the public?

Ownership issues apply to electronically published data as well. “Some of the data get published and made available to the scientific community, but some do not,” said Donald Kennedy, PhD, Editor-in-Chief of Science and President Emeritus of Stanford University. “Now that all data are stored electronically, there are major changes afoot in how data can be accessed in useful and efficient ways. But there are major unresolved questions regarding who owns the data: Do the publishers? Do the investigators?” These significant legal and policy issues will need to be resolved and, given the current rapid pace of study, resolved quickly.

A Blurred Line

In Europe, industrial support for universities has been an accepted and uncomplicated practice since the late 1800s, and this relationship continues to this day. But the relationship between academia and industry in the United States has had a quite different history, noted Charles Sanders.

As the American pharmaceutical industry began to develop in the last quarter of the 19th and early part of the 20th centuries, a relationship akin to the European model began to flower. By the early 1930s, however, the relationship between academia and industry in America began to sour. Disagreements arose over research discoveries and credit; there were disputes regarding the unauthorized use of pictures of some scientists in advertisements, implying endorsement of certain companies and products.

After World War II, the climate began to improve. With the advent of biotechnology in the 1970s, relations flourished even more, as witnessed by the founding of companies such as Genentech and Biogen by academic scientists. In addition, there are now countless examples of companies that support research programs at universities under a variety of arrangements.

On the face of it, these associations appear positive, because there is now a wealth of new sources for investigators to turn to for research funding. But these new opportunities also present certain challenges.

One of the most obvious concerns when industry supports a researcher is the investigator’s objectivity. Conflict of interest issues may arise. “Academic scientists who work with industry are generally very careful to retain their objectivity, yet appearances sometimes don’t allow that,” said Sanders. “The industry has to be very careful and make sure that its academic collaborators totally protect their objectivity and reputation.”

Intellectual Property Issues

Secondly, when academia partners with industry, intellectual property issues again surface. How does one determine who benefits financially from a research endeavor that goes on to produce a profitable product, such as a successful drug? How much do the scientist and the university he or she works for receive, and how is that money used? “Academic institutions have become more sophisticated, and the scientists and organizations are demanding an ever larger part of the pie from their discoveries,” said Sanders.

Donald Kennedy noted that in industry-supported investigations a large proportion of research results that are of potential public value may be locked up in proprietary protections. Students at Yale University and the University of Minnesota recently demonstrated, for example, that their universities were collecting royalties on drugs that can benefit people suffering from HIV/AIDS in developing countries.

“Although the royalty slice of the drug price is minuscule in proportion to total revenues, it is very unattractive money to the students, and they make a passionate case,” said Kennedy. “Ironically, everybody involved in this process thought they were doing something good, and in a way everyone was. But this is the kind of problem that emerges when proprietary interests mix with the basic research function in a nonprofit institution.”

A Mixing of the Minds

Scientists are increasingly of the opinion that an integrated approach to biological investigation is essential for significant, meaningful progress to occur. This “systems approach” is bringing together biologists, chemists, physicists, engineers and computer scientists to coordinate research efforts and interpret the resulting data.

Such an approach is critical for understanding the inner workings of cells and how their functions go awry to create diseases such as cancer. The AIDS virus has proven to be an excellent model supporting the need for a multidisciplinary approach: When it was first discovered in the early 1980s, it was assumed that a vaccine was just around the corner. But that has obviously not been the case.

“It turned out that HIV was more difficult than anybody imagined, smarter and slipperier,” said David Baltimore. The cleverness of the virus has sent researchers back to their lab benches. Only by gathering together immunologists, structural biologists, biochemists and experts from other fields can we determine exactly what the virus does to the human immune system to deliver its lethal blow.

Is “Systems Biology” the Way to Go?

Not all investigators are convinced that “systems biology” –– as Hood describes it –– is the way to go. Many established researchers, for example, are used to working alone in conventional academic settings. “Traditional academic institutions have a difficult time fully engaging in systems biology, given their departmental organization and their narrow view of education and cross-disciplinary work,” explained Leroy Hood, President and Director of the Institute for Systems Biology. “The tenure system presents another serious challenge: Tenure forces people at the early stages of their careers to work by themselves on safe kinds of problems. However, the heart of systems biology is integration, and that’s a tough challenge for academia.”

“Specialization is often the enemy of cooperation,” added David Baltimore. “There are deep and important relationships between biology and other disciplines. To understand biology, we need chemists, physicists, mathematicians and computer scientists, as well as other people who can think in new ways.”

Future Challenges

Despite the presence of these as yet unresolved issues, biomedical research continues to hurtle forward, shedding light on the inner workings of organisms and yielding insights that will undoubtedly impact health and medicine. “The true applications (of biotechnology) to patient care have not really matured yet,” said Rashid Shaikh. “But there’s every reason to believe that we’re going to make very rapid progress in that direction.”

In addition to the challenges above, other issues include:

• Gathering political support. Although the budget of the National Institutes of Health has seen a significant increase in the last several years, other science-related agencies may not be as fortunate. “These agencies’ research budgets have not seen an increase, and we must pay attention to them,” said Baltimore.

• Educating the public. Hood touched on the distrust the public can have regarding science. “I am deeply concerned about society’s increasingly suspicious and often negative reaction to developments in science,” he said. “I sense an enormous uncertainty, discomfort and distrust. There is a feeling that we’re just making everything more expensive and more complicated. How do we advocate for opportunities in science? We have to be truthful about the challenges as well.”

• Educating today’s students. One of the best ways to garner support for a systems approach to biological investigation is to start educating students this way today. In Seattle, for example, the Institute for Systems Biology has pioneered innovative programs in an effort to transform the way science is taught in public schools.

“This is truly the golden age of biology,” said Sanders. There are unprecedented numbers of targets and compounds, for example. Research and development are very expensive, but funds will be available in abundance.

The Public’s Expectations

Still, he added, we need to handle the expectations of the public, which can be unrealistic when it comes to the speed with which basic science findings will result in new therapies. And academic institutions have to balance a commitment to both basic and translational research.

“Thousands of flowers will continue to bloom, driven by the lure of discovery and the opportunity to improve human health,” added Sanders. “Though not linear, the process is very creative, entrepreneurial, and clearly reflective of the American free enterprise system.”

Also read: Building the Knowledge Capitals of the Future


About the Author

Rosemarie Foster is an accomplished medical freelance writer and vice president of Foster Medical Communications in New York.