
Predicting the Onset of Alzheimer’s Disease

X-rays of a brain scan.

Doctors and researchers studying the molecular and clinical aspects of Alzheimer’s disease are learning more about the mechanisms of this devastating condition.

Published March 1, 2003

By Vida Foubister

While the proteins involved in the genesis of Alzheimer’s disease (AD) continue to perplex researchers, progress is being made in the presymptomatic and early identification of patients.

But the use of genetic testing, brain imaging and other available technologies to identify people who are likely to develop Alzheimer’s will remain problematic until disease modifying therapies become available, according to Norman Relkin, MD, PhD, associate professor of Clinical Neurology and Neuroscience at New York-Presbyterian Hospital, Weill Medical College of Cornell University.

“This information is viewed as toxic information that is potentially very, very harmful,” Relkin told participants at the Fourth New York Alzheimer’s Research Symposium held last Nov. 20. “It’s one thing to tell someone they’re at risk, it’s quite another thing for there to be nothing that they can do about it.”

The lifetime risk of AD is estimated to be 10 to 15 percent among the general population, meaning that one in 10 women who live to 80 and one in seven men who live to 76 will develop the disease. This risk doubles to 20 to 30 percent for first-degree relatives – mother, father, sister, brother, daughter or son – of people with AD.

Convened by The New York Academy of Sciences (the Academy), the afternoon symposium brought together three experts on the evolving molecular and clinical aspects of Alzheimer’s disease. Its co-sponsors included The Institute for the Study of Aging, The New York City Chapter of the Alzheimer’s Association and The New York City Metro Area Chapter of the Society for Neuroscience.

The Calsenilin Story Evolves

One of the main pathological hallmarks of AD is the presence of amyloid plaques in patient brains. These plaques are composed of amyloid beta-peptide (Aβ), which is derived from gamma-secretase cleavage of the beta-amyloid precursor protein (APP).

Several years ago Joseph Buxbaum, PhD, associate professor of Psychiatry and head of the Laboratory of Molecular Neuropsychiatry at Mount Sinai School of Medicine, identified a novel protein that interacts with presenilin, a protein required for γ-secretase cleavage of APP that many scientists believe should be inhibited to prevent or treat Alzheimer’s. Named calsenilin, for calcium and presenilin binding protein, it was subsequently identified by other researchers as a transcription factor regulating dynorphin expression (DREAM) and a potassium channel interacting protein (KChIP3).

Buxbaum developed a calsenilin knockout mouse to determine the true physiological function of this protein. This work is important because it helps to identify the side effects of potential Alzheimer’s drugs, in this case drugs targeting presenilin that might affect calsenilin function as well. But calsenilin “has resisted, very effectively, easy analysis,” he said.

His initial results, which found Aβ formation and K+ currents decreased and long-term potentiation in the dentate gyrus of the hippocampus increased, suggested a role for calsenilin in regulating presenilin and voltage-gated potassium channel (Kv4) function. Further, the animals were found to be more sensitive to shock and, therefore, it seemed unlikely that calsenilin was involved in modulating pain sensitivity through the antagonism of dynorphin expression.

Calsenilin as a Dynorphin Suppressor

But data published by another lab showing that calsenilin knockouts were less sensitive to pain led Buxbaum to reevaluate the knockout using the tail-flick latency test. This test measures the time taken for a mouse to flick its tail away from a heat source, and has long been thought to be analogous to shock sensitivity.

The tail-flick results confirmed that the knockout mice were less sensitive to pain, and thus a role for calsenilin as a dynorphin suppressor could not be ruled out. These results also are likely to get much attention from those working in pain research. “The take-home message is that shock sensitivity is actually not reflective at all of tail-flick sensitivity,” he said.

Though the exact role of calsenilin in Alzheimer’s disease remains unclear, Buxbaum’s current hypothesis is that calsenilin affects Aβ formation by modulating calcium levels in the cell. There is a relationship between calcium abnormalities and Alzheimer’s, and increases in cellular calcium are known to boost the production of Aβ.

Selective Degeneration

A pathological feature of AD, in addition to amyloid plaque and neurofibrillary tangle formation, is selective neurodegeneration. “In patients with Alzheimer’s disease, not all neurons are dying at the same time,” said Tae-Wan Kim, PhD, assistant professor at Columbia University’s Taub Institute for Research on Alzheimer’s Disease and the Aging Brain. “There is remarkable specific and selective degeneration and this underlies a lot of cognitive deficits.”

Focusing on the basal forebrain cholinergic neurons, a prime site of neuronal death in AD patients that also correlates with their cognitive deficits, Kim used proteomics to identify novel substrates that are cleaved by γ-secretase. He did this by comparing the protein profile of normal cells to those lacking functional γ-secretase.

“We wanted to find substrates expressed predominantly in these neurons that are affected in AD,” Kim explained. Notch, a developmentally regulated protein important for cell fate determination and neurogenesis, and ErbB4, a receptor tyrosine kinase, have previously been identified as γ-secretase substrates. Like APP, these substrates are cleaved in the transmembrane region and their cleavage is dependent on functional presenilins, the products of early-onset familial AD genes.

Kim’s recent analysis found that the p75 neurotrophin receptor (p75-NTR), a protein that has been implicated in Alzheimer’s and other diseases due to its regulation of cell survival and death, is a γ-secretase substrate. He did this by demonstrating that p75-NTR undergoes ectodomain shedding, a step that is required for γ-secretase cleavage, and that the p75-NTR cleavage is blocked in cells treated with a γ-secretase inhibitor. Further, the p75-NTR cleavage site was found to be in the transmembrane region and similar to that of Aβ40, one of the APP peptide fragments.

In future work, Kim plans to investigate whether shedding and cleavage by γ-secretase can regulate neuronal cell death and survival. “If that is the case then we might have a molecular basis for selective neurodegeneration of basal forebrain cholinergic neurons in AD,” he said. He also would like to use microarray analysis to identify downstream target genes.

Risk and Risk Perception

Eighteen months before announcing that he had been diagnosed with Alzheimer’s disease, former President Ronald Reagan gave a roughly 90-minute speech without a single error.

“He spoke perfectly,” said Relkin, who assessed a videotape of the 1992 speech for signs or symptoms of incipient AD. “When one considers that we’re trying to come to a point in which we can diagnose AD in its presymptomatic stages, or at least predict with reasonable accuracy who is going to develop the disease, performances like that are daunting.”

Relkin also sees it as a lesson that predictive testing should involve more than clinical interviews and observational methods.

The field has moved from diagnosing Alzheimer’s by exclusion to direct diagnosis. In addition, clinicians have begun subcategorizing patients with mild cognitive impairment (MCI) who are believed to have an increased likelihood of developing AD or other forms of dementia in the near future.

Managing Risk Perception

One of these subgroups, AD-like MCI, includes patients whose symptoms are found to have an “AD-like flavor.” It’s estimated that from 5 to 40 percent of them go on to develop AD each year.

Translating this subcategorization into general practice without more specific diagnostic criteria, however, will be problematic. This is where technologies such as genetic testing, proteomic analysis and structural/functional neuroimaging can be used to improve differential diagnosis and presymptomatic detection of AD.

As more information becomes available, managing patients’ risk perception will become important. The general population is more influenced by their perceptions of risk than by numbers representing the probability they will develop a disease, said Relkin. “Perceptions are altered by life experiences, like caring for patients in the family with the disease, and have a greater impact on how one views one’s risk of AD.”

Results he presented from an ongoing study, called REVEAL (Risk Evaluation and Education for Alzheimer’s Disease), confirm this. People in the survey tended to remember their genotype, that is whether or not they carry the APOE 4 allele associated with an increased risk of AD, more than their numeric lifetime risk estimates.

Also read: Changing the Game: Fighting Alzheimer’s Disease

Technology Promises Faster Diagnostic Tests

A medical professional examines graphs and data on a tablet.

The Doctor-on-a-Chip technology has potential to revolutionize the field of medicine by providing quick and accurate test results.

Published March 1, 2003

By Bruce Tobin

Image courtesy of Toowongsa via stock.adobe.com.

Sending medical specimens off to labs can mean lengthy waits for results needed to make or confirm diagnoses. But help is on the way in the form of a developing nanotechnology called Doctor-on-a-Chip (DoaC).

In broad terms, DoaC technology will allow a sample of bodily fluid to be processed to test for a disease’s DNA marker. Research teams at universities in the United Kingdom and the United States are working on such devices. DoaCs will allow clinicians to perform many more medical diagnostic tests in their offices and in the field, and promise delivery of results in as little as 5 to 10 minutes.

At Brunel University in London, Professor Wamadeva Balachandran (Bala) heads a six-member research team working on a DoaC. In the United States, a team of 70 researchers led by Professor Chad A. Mirkin, of Northwestern University, is working on a similar program.

Bala believes the system of taking a patient sample and sending it to a lab – where it may take days for the results to be determined and communicated back to the doctor – can be dangerous. “In certain cases it could be a life-and-death situation,” he said. “The idea here (with DoaC devices) is that doctors can get the results while still talking with their patient.”

In the DoaC concept, the doctor places a drop of the patient’s blood on the front end of a polymer chip and waits 5 to 10 minutes for the chip to do its tests and display the results. The device will initially be the size of a credit card, Bala said, and eventually the size of a microprocessor chip.

Faster Diagnostic Tests

Prof. Wamadeva Balachandran (Bala)

Going into more detail, Mirkin explained, “A sample (blood, saliva or urine) is processed through microfluidics (micro- or nano-scale devices for manipulating fluids). Then the marker DNA (for the diseases of interest) is delivered to the reader portion of the chip. If marker DNA binds to this portion of the chip…nanoparticle probes are used to develop the chip (also through microfluidics).” The readout device will measure the conductivity of the particles between microelectrodes.

Bala said the idea behind his device involves the Electric Field Manipulation of DNA (characterizing DNA by using electrical fields to move it and then to look at its properties). His original thinking, three or four years ago, was that if you could identify various characteristics, you could confirm a particular virus in terms of its properties.

“But, of course, during this period the genome sequencing has moved on so fast,” Bala explained. “Various medical colleagues were all saying that if there were a system, which could be easily utilized to detect viruses by GPs (general practitioners) in their offices, that would speed up the process of diagnosis and save a lot of lives.” So he decided to work on it. Bala’s idea now is to use this technique to move DNA into a chamber to look for a particular type of DNA linked to a virus. Once confirmed as the suspect DNA type, “it comes out of that chamber and we again use electrical techniques to categorize the DNA: electrical impedance, for example.”

Results in 5-10 Minutes

Prof. Chad A. Mirkin

The technology involved in the tests is nothing new. “The challenge is to bring the technique down to the microscale, to put it on a single chip,” Bala said, “and we’re doing that now.”

Processing the sample involves attaching probes to the DNA. The type of virus that’s suspected determines the type of probe that is used. The sample then goes through a polymerase chain reaction (PCR) and then through the chamber with the medium for dielectrophoretic measurement. It then passes through various dielectrophoretic chambers. “In 5-10 minutes the doctor will be able to look at his computer screen and know whether you have hepatitis A or hepatitis B, for example, or whether you don’t have any virus,” he said.

Early models of Bala’s chip will check for various kinds of viral infections sequentially, one virus type after another being tried until a match is found. Eventually, he expects DoaCs to have the ability to run through a whole series of tests for various viruses.

“The (DoaC) potential,” concludes Mirkin, “is enormous.”

Also read: Building a Big Future from Small Things

85 Cents at a Time: Saving Lives and Fighting HIV

A scientist works with blood samples in a research lab.

After diagnosing the first pediatric case of HIV in Uganda, Dr. Ammann has devoted much of his professional life to combating this deadly virus.

Published November 1, 2002

By Fred Moreno, Dan Van Atta, and Jennifer Tang

Image courtesy of salomonus_ via stock.adobe.com.

More than 2,000 infants around the world are infected with HIV every day. In sub-Saharan Africa alone up to 46 percent of pregnant women carry the virus, and some 25 to 35 percent of their children will be born infected.

Arthur J. Ammann, MD, is succeeding in improving those statistics. As President of Global Strategies for HIV Prevention, Ammann oversees the Save a Life program, which provides HIV testing and medication to prevent HIV transmission from pregnant women to their infants in Africa, Asia and South America.

At the heart of the program is the antiretroviral drug nevirapine. Giving a single tablet of nevirapine to a woman during labor and delivery together with a single dose of nevirapine syrup to her newborn reduces HIV transmission by 50 percent. Moreover, in many countries the cost of treatment is as little as 85 cents for both mother and child. The program has helped some 50,000 women and infants in more than 72 hospitals in 18 nations. Save a Life also provides antibiotics to prevent opportunistic infections in HIV-infected women.

Obtaining and Administering Nevirapine

Global Strategies makes it easier for start-up programs in developing countries to obtain and administer nevirapine for this use. “They just tell us what they do and how much they need,” explains Ammann. “This is especially helpful for small programs that have the infrastructure to test women and give the drugs, but which may be waiting for additional funding from larger organizations.”

Ammann’s commitment to helping women and children with HIV began some two decades ago, when, as a professor of Pediatric Immunology at the University of California, San Francisco (where he is still on the faculty), he and his colleagues diagnosed the first child with HIV in this country. The epidemic grew, and in 1987 AZT was introduced as the first anti-HIV drug.

In 1994, a landmark study showed that giving AZT to pregnant women could prevent transmission of the virus to newborns. Thanks to AZT, the number of new pediatric AIDS cases in the United States and Europe plummeted from 2,000 per year to less than 200. “However, that remarkable success story was paralleled by a lack of success in developing countries,” notes Ammann, “where 1,800 children are born with HIV every day.”

HIV Treatment

So, in 1997 Ammann founded Global Strategies. Through a series of international conferences held every two years, and with the assistance of organizations such as the Elizabeth Glaser Pediatric AIDS Foundation, Global Strategies has called on nations to immediately implement countrywide programs to prevent HIV infection of infants, identify HIV-infected women, and provide treatment for children and mothers with HIV. One major step in that direction is the production and distribution of more than 30,000 copies of an educational CD-ROM.

While Save a Life is clearly rescuing the futures of thousands of infants, Ammann notes that challenges remain. Programs to continue drug treatment of HIV-infected women, as well as their sexual partners, require further development. A new drug that could be used when HIV eventually develops resistance to nevirapine remains to be found. And educational opportunities and support for children orphaned by AIDS need to be expanded.

In the meantime, counseling is becoming more available to women without HIV, so they remain uninfected. “We’re working at the end of the process, the point where HIV infection has already occurred,” says Ammann. “Where we want to go is the beginning, to keep the infection from happening in the first place. Then all those other problems would go away.”

Also read: Improving Women’s Health: HIV, Contraception, Cervical Cancer, and Schistosomiasis

Parkinson’s: A Perplexing Puzzle for Researchers

A black and white photo of two male scientists interacting in a research lab, likely in the 1950s or 1960s.

Nearly two centuries after James Parkinson first defined “shaking palsy” in 1817, millions of people throughout the world struggle daily with the disabling effects of Parkinson’s Disease. Neither a cause nor a cure has yet been found for this enigmatic and deadly disease.

Published November 1, 2002

By Vida Foubister

First Parkinson’s disease patient. Photograph taken March 24, 1965. John H. Lawrence Collection-5521. Photograph by Doug Bradley via National Archives Catalog.

Speaking as chair of the opening session at a recent three-day seminar on Parkinson’s Disease, sponsored by The New York Academy of Sciences (the Academy), Stanley Fahn, MD, director of the Center for Parkinson’s Disease and Other Movement Disorders at Columbia University, wasted no time in summing up the dilemma.

Many decades ago, he told the large gathering of scientists and clinicians – whose registry resembled a global “who’s who” in PD research – “two basic science findings in clinical, pathological and animal models led to the dopamine hypothesis of Parkinson’s disease. And we’ve been there ever since.”

Dopamine, produced by the dopaminergic neurons in the substantia nigra region of the brain, was found to reverse bradykinesia in animals. Bradykinesia, which includes difficulty initiating movement, slowness in movement and paucity or incompleteness of movement, is considered the most prominent and disabling symptom of PD. The second finding was that PD patient brains were markedly depleted in dopamine and the amount of depletion correlated with the disease’s severity.

Since the 1960s, the early features of the disease have been treated with levodopa (L-dopa) and similar drugs that function by replacing the lost dopamine. “Before my drug takes effect, I am unable to move well enough to really get out of bed,” explains Joan I. Samuelson, president of the Parkinson’s Action Network. “Then, an hour later, I can get up, get dressed and walk into the room and function. That’s miraculous.”

Treating Symptoms, Not PD

The fact that this drug exists, and that PD was the first neurological disease to be treated pharmacologically, is not to be trivialized. However, there is a growing sense that the field is ripe to move beyond this early discovery and its symptomatic treatment.

L-dopa and other related drugs often cause significant side effects, most notably dyskinesias – or involuntary movements – that limit their usefulness. And even though many patients initially respond well, the drugs lose their effectiveness as the disease progresses. Most importantly, these drugs treat only the symptoms of Parkinson’s and do nothing to slow the disease process.

Patients, clinicians and scientists spent three days in September, essentially sequestered at a wooded retreat near Princeton, N.J., discussing where the field should go. As the organizers hoped, the setting of this conference, Parkinson’s Disease: The Life Cycle of the Dopamine Neuron, brought people together across many disciplines and stimulated discussions that continued from early morning sessions into impromptu evening debates.

The high level of interaction at the conference stimulated both new ideas for research and treatment, as well as connections between the existing bodies of scientific and clinical knowledge. Yet it also highlighted the dichotomy that exists in the field. Despite all that is known about Parkinson’s, its cause remains essentially unknown and the disease itself untreatable.

That dichotomy was reflected by the participants. Some pushed for more basic research, including investigation into the non-motor aspects of the disease that are often overlooked due to the focus on the role of dopamine neurons. However, others urged the scientists to move their many promising new findings out of the lab and into the clinic.

Important Leads

Robert E. Burke, M.D., of Columbia University was a conference speaker and member of the organizing committee.

John Q. Trojanowski, MD, PhD, co-director of the Center for Neurodegenerative Disease Research at the University of Pennsylvania School of Medicine, is among those who believe there are a “number of phenomenally important leads that have potential implications for therapies.” He pointed to new findings about some of the proteins that have been implicated in PD, namely alpha-synuclein and parkin. “Knowing the culprits, the molecular criminals, is the first step towards taking them out of the action or doing something to improve what’s broken,” Trojanowski said.

Lewy bodies, a pathological marker of Parkinson’s disease in the substantia nigra, contain a fibrillar form of alpha-synuclein. Though it’s long been known that patients with the familial form of the disease can have a mutation in the gene coding for this protein, new data presented at the conference suggests that alpha-synuclein abnormalities in patients with sporadic Parkinson’s might be due to mitochondrial dysfunction. (Sporadic Parkinson’s is more common than the familial form of the disease.)

“We know from a genetic standpoint that alpha-synuclein does what it does because you have a mutation, but why in everybody else does synuclein go bad?” asked Ted M. Dawson, MD, PhD, director of the Morris K. Udall Parkinson’s Disease Research Center of Excellence at the Johns Hopkins University School of Medicine. “Well, it might be because oxidative stress is hammering it.”

Some New Clues

Peter T. Lansbury Jr., PhD, an associate professor of neurology at Harvard Medical School, presented some new clues about what in the alpha-synuclein fibrillization process causes disease. His research suggests that an intermediate, called a protofibril, is toxic to dopamine neurons and he has also found that the formation of this protofibril can be inhibited by beta-synuclein.

“Beta-synuclein has a nice therapeutic profile: It stops oligomerization all together,” Lansbury said. “We are proceeding with this idea as a therapeutic strategy. Specifically, we’re interested in developing small molecules that would induce increased expression of endogenous beta-synuclein.”

Mutations in parkin, another so-called molecular criminal, are the most common cause of familial PD. It’s been found to function as a ubiquitin E3 ligase that labels proteins for degradation and disposal. That means when parkin isn’t functioning, proteins such as synphilin-1 and Pael receptor build up in the cell. “The current theory is that the accumulation of these substrates causes Parkinson’s disease,” Dawson said. “So enhancing the function of parkin, identifying the substrates and then figuring out ways to get them degraded” are possible therapeutic approaches.

The Cellular Level

Moving from proteins to the cellular level, another session at the conference focused on mitochondria and the circumstances under which they produce the toxic oxygen free radicals that lead to apoptosis, or cell death.

“There’s more and more reason to believe that in Parkinson’s disease, either because of environmental toxins like pesticides or because of genetic defects, the mitochondria produce an abnormally high level of these reactive oxygen species,” said Gary Fiskum, PhD, professor and research director in the department of anesthesiology at the University of Maryland School of Medicine.

One way to elucidate this pathogenic mechanism is by using genomics and proteomics to identify genes that are expressed in response to environmental toxins and mitochondrial oxidative stress. “The idea is that you may come up with things you have no preconceived notion would be associated with the disease process and that, conceivably, might give you a new insight into the disease,” explained M. Flint Beal, MD, neurologist-in-chief at The New York Presbyterian Hospital.

Beal’s research has focused on two known antioxidants – coenzyme Q10 and creatine – that act either to inhibit the production of mitochondrial free radicals or to detoxify them once they’re produced. “We have good animal data that [coenzyme Q10 and creatine] prevent damage to dopaminergic neurons,” he said. “What we’re going to do now is see if we can administer those in combination with some anti-inflammatory drugs and get even better protection.”

New Approaches

Courtesy of Dr. Stanley Fahn.

Much excitement has been generated in the field by the promise of two approaches to replace the dopamine neurons that are lost in patients with Parkinson’s disease. One involves manipulating endogenous precursor cells in the adult brain to become dopamine neurons. The other approach focuses on transplanting embryonic stem cells, which have been coaxed to become dopamine neurons, into the adult brain.

“There’s evidence that even a mature and degenerating brain will accept new cells, including neurons, that will grow to reconnect damaged parts,” said Ole Isacson, Dr. Med. Sci., M.B., director of The Morris K. Udall Parkinson’s Disease Research Center of Excellence at Harvard University Medical School.

The implications of these therapies for patients, though not novel, can be dramatic, as a film clip presented by Isacson demonstrated. It showed a patient with advanced Parkinson’s disease walking down a hallway before and after receiving a transplant of fetal dopamine cells. The first walk down the hall seemed to take forever, as the patient struggled with every step. Then, after the transplant, the patient appeared to stride down the hall and back.

Though this demonstrates the potential of this strategy, Ronald McKay, PhD, senior investigator at the National Institute of Neurological Disorders and Stroke, doesn’t believe it represents a possible therapy for patients. “It’s just too difficult,” he said. Among the problems he cited is the challenge of obtaining a sufficient number of fetal cells for the procedure.

The Promise of Embryonic Stem Cells

Instead, McKay emphasized the promise of embryonic stem cells that have been engineered to become dopamine neurons – both for cell therapy and for further study of the disease. Those studies include determining what signals and factors are required to make an immature cell become a dopamine neuron. “The title of this meeting is the life cycle of the dopamine neuron. We’re essentially dissecting the life cycle of the neuron,” he said. “There’s many signals at different stages that influence the properties of the cells.”

Manipulating precursor cells in the brain to become dopamine neurons might have some advantages due to their existing regional differentiation. “Rather than one tube of all-purpose cells, one potentially can recruit precursors that would give rise to the right kind of replacement neurons,” commented Jeffrey D. Macklis, MD, D.HST, associate professor of Neurology and Neuroscience at Harvard Medical School.

Overall, the basic scientific understanding of Parkinson’s disease appeared to reach a new level at the conference, generating hope that the focus of the next such meeting will be on clinical therapies. “This kind of session tells you how complicated [the disease] is, but any day now there could be a revolutionary big idea,” Samuelson said. “If it’s in the treatment end of things, it could revolutionize the lives of a million people pretty quickly, and that’s a big deal.”

Also read: The Role of Glial Cells in Alzheimer’s, Parkinson’s

‘Free-Radical’ Scientist Recalls Research Journey

A young person holds the hand of an elderly person to provide comfort.

Almost 50 years ago, Denham Harman’s theory of aging as a biochemical process started a chain reaction in theoretical medicine.

Published October 1, 2002

By Fred Moreno, Dan Van Atta, and Jennifer Tang

Image courtesy of Khunatorn via stock.adobe.com.

Louis Pasteur once noted: “Chance favors the prepared mind.” Denham Harman’s mind was unusually prepared to develop a notion that took well over a decade to attract any serious attention, but is now a driving force in biomedical research: the free-radical theory of aging, a phrase Harman coined in 1960.

Now professor emeritus at the University of Nebraska Medical Center and still spry at 86, Harman recently edited Annals of the New York Academy of Sciences volume 959, Increasing Life Span: Conventional Measures and Slowing the Innate Aging Process. The volume also includes a recent paper by Harman, “Alzheimer’s Disease: Role of Aging in Pathogenesis.”

Free radicals are molecules or atoms that feature an unpaired electron. Because electrons prefer to travel in pairs, free radicals can set off chain reactions – their loner electrons cut in on the dance of another molecule’s two electrons in an attempt to grab one. This move satisfies the original unpaired electron, but merely creates a new free radical bent on pairing up.

Thus, like bulls in the china shop of living cells, free radicals, especially the hydroxyl radical, damage delicate cell membranes and muck up proteins whose functions depend on their structure. And the cellular damage wrought by free radicals is the mechanism, according to Harman, of the natural process we take for granted as aging.

A Circuitous Route

Harman took a circuitous, but in retrospect necessary, route to this conclusion. He was born in 1916 in San Francisco, but did live briefly as a boy in New York City, where his father worked for a jewelry company located just blocks from the site of The New York Academy of Sciences (the Academy) on 63rd Street near Fifth Avenue. The family returned to the Bay area in 1932, and Harman graduated from Berkeley High School two years later. Jobs were scarce, but Harman’s father happened to meet the director of the Shell Development Company, the chemical research division of the Shell Oil Company, at a local tennis club. Harman began working for Shell as a lab assistant.

The position sparked a true interest in chemistry. Harman went on to receive his undergraduate degree and, in 1943, his doctoral degree from the University of California, Berkeley, in chemistry. He continued with Shell the entire time, at first working with lubricating oils. But he was fortunately transferred – to the reaction kinetics department, where much of the work concerned free-radical reactions. During seven years there, Harman was instrumental in gaining 35 patents for Shell, including work on the active ingredient of something designed to shorten, not extend, life: the famous “Shell No-Pest Strip.”

Time to Think

In December 1945, Harman’s wife Helen put a bee in his bonnet. “She showed me a magazine article she thought might be of interest. It was a well-written piece by William Lawrence of the New York Times about aging research in Russia,” he recalls. Harman knew a lot of chemistry, but not much biochemistry or physiology. And the idea of aging as a biochemical process so fascinated him that in 1949 he decided to attend medical school. Berkeley turned him down because of his advanced age – he was 33 – but Stanford accepted him.

After his internship, Harman became a research associate at the Donner Laboratory back at Berkeley. “Donner was great,” he remembers, “because I didn’t really have to do anything, other than a hematology clinic on Wednesday mornings. I could just think.” And what he thought about was aging. “One thing you learn in biology,” he notes, “is that Mother Nature has a tendency to use the same processes over and over. My impression was that since everything ages, there was probably a single, basic cause.”

Pondering the issue at first left him frustrated. “I thought perhaps there wasn’t even enough knowledge available at the time to solve the problem,” he says. “And then in November of 1954 I was sitting at my desk when all of a sudden the thought came to me: free radicals. In a flash, I knew it could explain things.”

He quickly discussed the idea with medical colleagues – most thought it was interesting but too simple to explain such a complex phenomenon. “I got encouragement from only two people, both of whom were organic chemists, not medical doctors,” he recalls.

The Ubiquitous Enzyme Superoxide Dismutase

Helen and Denham Harman

Harman spent the next decade on a virtually lone research effort that produced circumstantial evidence for his idea. The limits of the instrumentation of that time made it difficult even to show that free radical species existed in living cells. Electron spin resonance studies found free radicals in yeast in 1954, but it was not until 1965 that free radicals were detected in human blood serum.

Then in 1967 biochemists discovered the ubiquitous enzyme superoxide dismutase, whose job it is to protect cells by sopping up free radicals formed during aerobic respiration in cells. The presence of a defense implies that free radicals are indeed a clear and very present danger to cells.

Ensuing research has implicated free radicals in cancer, heart disease, Alzheimer’s disease and other conditions. And observations of the animal kingdom are especially suggestive of the general aging theory. Harman points out that rats and pigeons, for example, have about the same body weights and metabolic rates. But pigeons produce far less hydrogen peroxide (formed from the superoxide radical) during cellular processes than do rats – and the birds live some 15 times longer than the rodents.

Judging by the sales of antioxidant supplements that scavenge free radicals, the American public has clearly subscribed to Harman’s ideas. Many physicians and scientists also have signed on to his view of aging, with the free-radical theory underlying much of current aging research.

“I think we’re now getting to a point where we may be able to actually intervene in the aging process,” Harman says. If his prediction proves true, our extra years will be owed to his many well-spent ones.

Also read: A New Approach to Studying Aging and Improving Health

Building a Big Future from Small Things

A finger holds a microprocessor to showcase the small size of this technology.

Nanotechnology has the potential to revolutionize our daily lives, and one aspect that makes this technology so promising and effective is its bottom-up approach.

Published October 1, 2002

By Charles M. Lieber

Nanotechnology has gained widespread recognition with the promise of revolutionizing our future through advances in areas ranging from computing, information storage and communications to biotechnology and medicine. How might one field of study produce such dramatic changes?

At the most obvious level, nanotechnology is focused on the science and technology of miniaturization, which is widely recognized as the driving force behind the advances made in the microelectronics industry over the past 30 years. However, I believe that miniaturization is just one small component of what makes, and will make, nanoscale science and technology a revolutionary field. The more fundamental change is the paradigm shift from top-down manufacturing, which has dominated most areas of technology, to a bottom-up approach.

The bottom-up paradigm can be defined simply as one in which functional devices and systems are assembled from well-defined nanoscale building blocks, much like the way nature uses proteins and other macromolecules to construct complex biological systems. The bottom-up approach has the potential to go far beyond the limits of top-down technology by defining key nanometer-scale metrics through synthesis and subsequent assembly – not by lithography.

Producing Structures with Enhanced and New Functions

Of equal importance, bottom-up assembly offers the potential to produce structures with enhanced and/or completely new function. Unlike conventional top-down fabrication, bottom-up assembly makes it possible to combine materials with distinct chemical composition, structure, size and morphology virtually at will. To implement and exploit the potential power of the bottom-up approach requires that three key areas, which are the focus of our ongoing program at Harvard University, be addressed.

First and foremost, the bottom-up approach requires nanoscale building blocks with precisely controlled and tunable chemical composition, structure, morphology and size, since these characteristics determine their corresponding physical (e.g. electronic) properties. From the standpoint of miniaturization, much emphasis has been placed on the use of molecules as building blocks. However, challenges in establishing reliable electrical contact to molecules have limited the development of realistic schemes for scalable interconnection and integration without key feature sizes being defined by the conventional lithography used to make interconnects.

My own group’s work has focused on nanoscale wires and, in particular, semiconductor nanowires as building blocks. This focus was initially motivated by the recognition that one-dimensional nanostructures represent the smallest structures capable of efficiently routing information, whether in the form of electrical or optical signals. Subsequently, we have shown that nanowires can also exhibit a variety of critical device functions, and thus can be exploited as both the wiring and the device elements in functional nano-systems.

Control Over Nanowire Properties

Currently, semiconductor nanowires can be rationally synthesized in single crystal form with all key parameters – including chemical composition, diameter and length, and doping/electronic properties – controlled. The control that we have over these nanowire properties has correspondingly enabled a wide range of devices and integration strategies to be pursued. For example, semiconductor nanowires have been assembled into nanoscale field-effect transistors, light-emitting diodes, bipolar junction transistors and complementary inverters – components that potentially can be used to assemble a wide range of powerful nano-systems.

Tightly coupled to the development of our nanowire building blocks have been studies of their fundamental properties. Such measurements are critical for defining their limits as existing or completely new types of device elements. We have developed a new strategy for nanoscale transistors, for example, in which one nanowire serves as the conducting channel and a second, crossed nanowire as the gate electrode. Significantly, the three critical device metrics are naturally defined at the nanometer scale in assembled crossed nanowire transistors:

(1) a nanoscale channel width determined by the diameter of the active nanowire;

(2) a nanoscale channel length defined by the crossed gate nanowire diameter; and

(3) a nanoscale gate dielectric thickness determined by the nanowire surface oxide.

These distinct nanoscale metrics lead to greatly improved device characteristics such as high gain, high speed and low power dissipation. Moreover, this new approach has enabled highly integrated nanocircuits to be defined by assembly.
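The key point of the three metrics above is that each is set by a property of the nanowires themselves rather than by a lithographic step, and that mapping can be sketched in a few lines. The numeric values below are hypothetical placeholders chosen only to be nanometer-scale; the article does not quote specific dimensions:

```python
# Illustrative sketch: in a crossed-nanowire transistor, every critical
# device dimension is inherited from the building blocks, not patterned.
# All numbers are hypothetical, nanometer-scale placeholders.

def crossed_nanowire_fet_metrics(active_nw_diameter_nm: float,
                                 gate_nw_diameter_nm: float,
                                 surface_oxide_nm: float) -> dict:
    """Map nanowire dimensions onto the three transistor metrics."""
    return {
        "channel_width_nm": active_nw_diameter_nm,   # (1) active-wire diameter
        "channel_length_nm": gate_nw_diameter_nm,    # (2) crossed gate-wire diameter
        "gate_dielectric_nm": surface_oxide_nm,      # (3) native surface oxide
    }

metrics = crossed_nanowire_fet_metrics(20, 20, 2)
print(metrics)  # every metric is nanoscale with no lithographic patterning
```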

Hierarchical Assembly Methods

Second, and central to the bottom-up concept, has been the development of hierarchical assembly methods that can organize building blocks into integrated structures. Obtaining highly integrated nanowire circuits requires techniques to align and assemble them into regular arrays with controlled orientation and spatial location. We have shown that fluidics, in which solutions of nanowires are directed through channels over a substrate surface, is a powerful and scalable approach for assembly on multiple length scales.

In this method, sequential “layers” of different nanowires can be deposited in parallel, crossed and more complex architectures to build up functional systems. In addition, the readily accessible crossed nanowire matrix represents an ideal configuration since the critical device dimension is defined by the nanoscale cross point, and the crossed configuration is a naturally scalable architecture that can enable massive system integration.

Third, combining the advances in nanowire building block synthesis, understanding of fundamental device properties and development of well-defined assembly strategies has allowed us to move well beyond the limit of single devices and begin to tackle the challenging and exciting world of integrated nano-systems. Significantly, high-yield assembly of crossed nanowire structures containing multiple active cross points has led to the bottom-up organization of OR, AND, and NOR logic gates, where the key integration did not depend on lithography. Moreover, we have shown that these nano-logic gates can be interconnected to form circuits and, thereby, carry out primitive computation.

Tremendous Excitement in the Field

Prof. Lieber

These and related advances have created tremendous excitement in the nanotechnology field. But I believe it is the truly unique characteristics of the bottom-up paradigm, such as enabling completely different function through rational substitution of nanowire building blocks in a common assembly scheme, that ultimately could have the biggest impact in the future. The use of modified nanowire surfaces in a crossed nanowire architecture, for example, has recently led to the creation of nanoscale nonvolatile random access memory, where each cross point functions as an independently addressable memory element with a potential for integration at the 10¹²/cm² level.
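That integration figure can be sanity-checked with back-of-envelope arithmetic. The crossbar pitch assumed below (10 nm of wire plus spacing) is hypothetical; the article quotes only the resulting density:

```python
# Back-of-envelope check of ~10^12 cross points per square centimeter.
# Assumes a hypothetical 10 nm crossbar pitch; the source states only
# the density, not the pitch.

pitch_nm = 10
nm_per_cm = 1e7                       # 1 cm = 10^7 nm
wires_per_cm = nm_per_cm / pitch_nm   # wires crossing each cm of edge
cross_points_per_cm2 = wires_per_cm ** 2

print(f"{cross_points_per_cm2:.0e} cross points per cm^2")  # → 1e+12
```

A 10 nm pitch in both directions thus reproduces the quoted density exactly, which is why crossbar memories are routinely discussed at this integration level.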

In a completely different area, we have shown that nanowires can serve as nearly universal electrically based detectors of chemical and biological species with the potential to impact research in biology, medical diagnostics and chem/bio-warfare detection. Lastly, and to further highlight this potential, we have shown that nanoscale light-emitting diode arrays with colors spanning the ultraviolet to near-infrared region of the electromagnetic spectrum can be directly assembled from emissive electron-doped binary and ternary semiconductor nanowires crossed with non-emissive hole-doped silicon nanowires. These nanoscale light-emitting diodes can excite emissive molecules for sensing or might be used as single photon sources in quantum communications.

The bottom line – focusing on the diverse science at the nanoscale will provide the basis for enabling truly unique technologies in the future.

Also read: Molecular Manufacturing for the Genomic Age


About the Author

Charles M. Lieber moved to Harvard University in 1991 as a professor of Chemistry and now holds a joint appointment in the Department of Chemistry and Chemical Biology, where he holds the Mark Hyman Chair of Chemistry, and the Division of Engineering and Applied Sciences. He is the principal inventor on more than 15 patents and recently founded a nanotechnology company, NanoSys, Inc.

Molecular Manufacturing for the Genomic Age

A computer chip and similar technology.

Researchers are making significant advances in nanotechnology, which someday may help to revolutionize medical science for everything from testing new drugs to cellular repair.

Published October 1, 2002

By Fred Moreno, Dan Van Atta, and Jennifer Tang

When it comes to understanding biology, Professor Carl A. Batt believes that size matters – especially at the Cornell University-based Nanobiotechnology Center that he codirects. Founded in January 2000 by virtue of its designation as a Science and Technology Center, and supported by the National Science Foundation, the center seeks to fuse advances in microchip technology with the study of living systems.

Batt, who is also professor of Food Science at Cornell, recently presented a gathering – entitled Nanotechnology: How Many Angels Can Dance on the Head of a Pin? – with a tiny glimpse into his expanding nanobiotech world. The event was organized by The New York Academy of Sciences (the Academy). “A human hair is 100,000 nm wide, the average circuit on a Pentium chip is 180 nm, and a DNA molecule is 2 nm, or two billionths of a meter,” Batt told the audience.
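The gulf between those length scales is easier to grasp as ratios, computed directly from the figures Batt cites:

```python
# Length scales quoted by Batt, in nanometers.
hair_nm = 100_000   # width of a human hair
circuit_nm = 180    # average circuit on a Pentium chip
dna_nm = 2          # diameter of a DNA molecule

print(hair_nm / dna_nm)     # a hair is 50,000 times wider than DNA
print(circuit_nm / dna_nm)  # a chip feature is 90 times wider than DNA
```

Even the finest chip features of the day were thus nearly two orders of magnitude coarser than the molecules nanobiotechnology aims to manipulate.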

“We’re not yet at the point where we can efficiently and intelligently manipulate single molecules,” he continued, “but that’s the goal. With advances in nanotechnology, we can build wires that are just a few atoms wide.

“Eventually, practical circuits will be made up of series of individual atoms strung together like beads and serving as switches and information storage devices.”

Speed and Resolution

There is a powerful rationale behind Batt’s claim that size is important to the understanding of biology. Nanoscale devices can acquire more information from a small sample with greater speed and at better resolution than their larger counterparts. Further, molecular interactions such as those that induce disease, sustain life and stimulate healing all occur on the nanometer scale, making them resistant to study via conventional biomedical techniques.

“Only devices built to interface on the nanometer scale can hope to probe the mysteries of biology at this level of detail,” Batt said. “Given the present state of the technology, there’s no limit to what we can build. The necessary fabrication skills are all there.”

Scientists like Batt and his colleagues at Cornell and the center’s other academic partners are proceeding into areas previously relegated to science fiction. While their work has a long way to go before there will be virus-sized devices capable of fighting disease and effecting repairs at the cellular level, progress is substantial. Tiny biodegradable sensors, already in development, will analyze pollution levels and measure environmental chemicals at multiple sample points over large distances. Soon, we’ll be able to peer directly into the world of nano-phenomena and understand as never before how proteins fold, how hormones interact with their receptors, and how differences between single nucleotides account for distinctions between individuals and species.

The trick – and the greatest challenge posed by an emerging field that is melding the physical and life sciences in unprecedented ways – is to adapt the “dry,” silicon-based technology of the integrated circuit to the “wet” environment of the living cell.

Bridging the Organic-Inorganic Divide

Nanobiotechnology’s first order of business is to go beyond inorganic materials and construct devices that are biocompatible. Batt names proteins, nucleic acids and other polymers as the appropriate building blocks of the new devices, which will rely on chemistries that bridge the organic and inorganic worlds.

In silicon-based fabrication, some materials that are common in biological systems – sodium, for example – are contaminants. That’s why nano-biotech fabrication must take place in unique facilities designed to accommodate a level of chemical complexity not encountered in the traditional integrated-circuit industry.

But for industry outsiders, the traditional technology is already complex enough. Anna Waldron, the Nanobiotechnology Center’s Director of Education, routinely conducts classes and workshops for schoolchildren, undergraduates and graduates to initiate them into the world of nanotechnology, encourage them to pursue careers in science, and foster science and technology literacy.

In a hands-on presentation originally designed for elementary-school children, Waldron gives the audience a taste – both literally and figuratively – of photolithography, a patterning technique that is the workhorse of the semiconductor industry. Instead of creating a network of wells and channels out of silicon, however, Waldron works her magic on a graham cracker, a chocolate bar and a marshmallow, manufacturing a mouthwatering “nanosmore” chip in a matter of minutes.

Graham crackers are substituted for silicon substrate, while chocolate provides the necessary primer for the surface. Marshmallows act as the photoresist, an organic polymer that, when exposed to light, radiation, or, in this case, a heat gun, can be patterned in the desired manner. Finally, a Teflon “mask” is placed on top of the marshmallow layer and a blast from the heat gun transfers the mask’s design to the marshmallow’s surface – a result that appeared to leave a lasting impression on the Academy audience as well.

What’s Next?

According to Batt, it won’t be too long before the impact of the nanobiotech revolution will be felt in the fields of diagnostics and biomedical research. “Progress in these areas will translate the vast information reservoir of genomics into vital insights that illuminate the relationship between structure and function,” he said.

Prof. Batt

Also down the road, ATP-fueled molecular motors may drive a whole series of ultrasmall, robotic medical devices. A “lab-on-a-chip” will test new drugs, and a “smart pharmacist” will roam the body to detect abnormal chemical signals, calculate drug dosage and dispense medication to molecular targets.

Thus far, however, there are no manmade devices that can correct genetic mutations by cutting and pasting DNA at the 2-nanometer scale. One of the greatest obstacles to their development, Batt said, doesn’t lie in building the devices, but in powering them. Once the right energy sources are identified and channeled, we’ll have a technology that speaks the language of genomics and proteomics, and decodes that language into narratives we can understand.

Also read: Building a Big Future from Small Things


About Prof. Batt

Microbiologist Carl A. Batt is professor of Food Science at Cornell University and co-director of the Nanobiotechnology Center, an NSF-supported Science and Technology Center. He also runs a laboratory that works in partnership with the Ludwig Institute for Cancer Research.

Continuing the Legacy of a Cancer Research Pioneer

A man in white lab coat and yellow necktie poses for the camera.

Advancing the cancer research started by Cesare Maltoni, the late Italian oncologist who advocated for industrial workplace safety.

Published August 1, 2002

By Fred Moreno, Dan Van Atta, Jill Stolarik, and Jennifer Tang

Cesare Maltoni. Image courtesy of Silvestro Ramunno, CC BY-SA 4.0, via Wikimedia Commons.

For decades, the “canary in the coal mine” approach has been used to test for potential carcinogens. Standing in for humans, mice and rats have ingested or been injected with various chemicals to help toxicologists determine if the substances would induce cancers. In the end, autopsy revealed whether the lab animals had developed tumors.

Today, new approaches are emerging. They stem from a variety of tools that are evolving from advances in molecular biology, microbiology, genomics, proteomics, novel animal models of carcinogenesis and computer technology.

These tools and approaches were the focus of an April conference commemorating the work of Italian researcher Cesare Maltoni, who died January 21. Renowned for his research on cancer-causing agents in the workplace, Maltoni was the first to demonstrate that vinyl chloride produces angiosarcomas of the liver and other tumors in experimental animals. Similar tumors were later found among industrial workers exposed to vinyl chloride.

Maltoni also was the first to demonstrate that benzene is a multipotential carcinogen that causes cancers of the Zymbal gland, the oral and nasal cavities, the skin, the forestomach, the mammary glands, the liver, and the hemolymphoreticular system, i.e., leukemias.

Sponsored by the Collegium Ramazzini, the Ramazzini Foundation, and the National Toxicology Program of the National Institute of Environmental Health Sciences (NIEHS), the meeting was organized by The New York Academy of Sciences (the Academy).

Measuring More Than Pathological Changes

After reviewing the contributions of Maltoni and David Rall, an American giant in the same field, as well as providing an update on ongoing research in their respective groups, the speakers and attendees discussed the future of carcinogenesis testing. While new tools will not replace bioassays, most noted, they will make it possible to measure more than simply the pathological changes seen through the microscope.

J. Carl Barrett, head of the Laboratory of Biosystems and Cancer at the National Cancer Institute, cited four recent developments that are fundamentally changing the research to identify risk factors and biological mechanisms in carcinogenesis.

The four developments are: new animal models with targeted molecular features – such as mice bred with a mutated p53 tumor-suppressor gene – that make them very sensitive to environmental toxicants and carcinogens; a better understanding of the cancer process; new molecular targets for cancer prevention and therapy; and new technologies in genomics and proteomics.

New technologies in cancer research, like gene expression analyses, are revealing that cancers that look alike under the microscope are often quite different at the genetic level. “Once we can categorize cancers using gene profiles,” Barrett said, “we can determine the most effective chemotherapeutic approaches for each – and we may be able to use this same approach to identify carcinogenic agents.”

A Robust Toxicology Database

A related effort – to link gene expression and exposure to toxins – has recently been launched at the NIEHS. The newly created National Center for Toxicogenomics (NCT) focuses on a new way of looking at the role of the entire genome in an organism’s response to environmental toxicants and stressors. Dr. Raymond Tennant, director of the NCT, said the organization is partnering with academia and industry to develop a “very robust toxicology database” relating environmental stressors to biological responses.

“Toxicology is currently driven by individual studies, but in a rate-limited way,” Tennant said. “We can use larger volumes of toxicology information and look at large sets of data to understand complex events.” Among other benefits, this will allow toxicologists to identify the genes involved in toxicant-related diseases and to identify biomarkers of chemical and drug exposure and effects. “Genomic technology can be used to drive understanding in toxicology in a more profound way,” he said.

Using the four functional components of the Center (bioinformatics, transcript profiling, proteomics and pathology), Tennant believes that the NCT will be able “to integrate knowledge of genomic changes with adverse effects” of exposure to toxicants.

Current animal models of carcinogenesis are unable to capture the complexity of cancer causation and progression, noted Dr. Bernard Weinstein, professor of Genetics and Development, and director emeritus of the Columbia-Presbyterian Cancer Center.

Multiple factors are involved in the development of cancer, Weinstein said, making it difficult to extrapolate risk from animal models. Among the many factors that play a role in cancer causation and progression are “environmental toxins such as cigarettes, occupational chemicals, radiation, dietary factors, lifestyle factors, microbes, as well as endogenous factors including genetic susceptibility and age.”

Gene Mutation and Alteration

By the time a cancer emerges, Weinstein added, “perhaps four to six genes are mutated, and hundreds of genes are altered in their pattern of expression because of the network-like nature and complexity of the cell cycle. The circuitry of the cancer cell may well be unique and bizarre, and highly different from its tissue of origin.”

Research over the past decade has underscored the role that microbes play in a number of cancers: the hepatitis B and hepatitis C viruses in liver cancer, along with the cofactors alcohol and aflatoxin; human papilloma virus and tobacco smoke in cervical cancer; and Epstein-Barr virus and malaria in lymphoma, said Weinstein. Microbes are likely to be involved in the development of other kinds of cancer as well, he speculated. “Microbes alone cannot establish disease; they need cofactors. But this information is important from the point of view of prevention, and these microbes and their cofactors are seldom shown in rodent models.”

When thinking of ways to determine the carcinogenicity of various substances, he concluded, “we have to consider these multifactor interactions, and to do this we need more mechanistic models” of cancer initiation and progression.

Christopher Portier, a mathematical statistician in the Environmental Toxicology Program at the NIEHS, is working to make exactly this type of modeling more widespread. He stressed the importance and advantages of complex analyses of toxicology data using a mechanism-based model – or “biologically based data.”

This model includes many more factors than just length of exposure and time till death of the animal. It can incorporate “the volume of tumor, precursor lesions, dietary and weight changes, other physiological changes, tumor location and biological structure, biochemical changes, mutations,” Portier said, and give a more complete picture of the processes that occur when an organism is exposed to a toxicant.

New Analytical and Biological Tools

With biologically based models, researchers would link together a spectrum of experimental findings in ways that allow them to define dose-response relationships, make species comparisons, and assess inter-individual variability, Portier said. Such models would allow researchers to quantify the sequence of events that starts with chemical exposure and ends with overt toxicity. However, he said “each analysis must be tailored to a particular question. They are much more difficult computationally and mathematically than traditional analyses, and require a team-based approach.

“Toxicology has changed,” Portier continued. “We now have new analytical and biological tools – including transgenic and knockout animals, the information we’ve gained through molecular biology, and high through-put screens. We need to link all that data together to predict risk, then we need to look at what we don’t know and test that.”

While most speakers focused on the future benefits of up and coming technologies and concepts, Philip Landrigan, director of the Mount Sinai Environmental Health Sciences Center at the Mount Sinai School of Medicine, reminded the group of the work on the ground that still needs to be accomplished. “We’ve made breathtaking strides in our understanding of carcinogens and cancer cells,” he said. “I am struck, though, by the divide in the cancer world – the elegance of the lab studies, but our inefficiency in applying that knowledge to cancer prevention.”

Thorough Testing Needed

One of the problems confronting researchers is the vast number of substances that are yet to be tested. About 85,000 industrial chemicals are registered with the U.S. Environmental Protection Agency for use in the United States. Although some 3,000 of these are what the EPA calls high-production-volume chemicals, Landrigan said, “only 10 percent of these have been tested thoroughly to see the full scope of their carcinogenic potential, their neurotoxicity and immune system effects.”

Landrigan also discussed other troubling issues. For example: Children, the population most vulnerable to the effects of toxins, are only rarely accounted for in testing design and analysis, he said, and the United States continues to export “pesticides, known carcinogens, and outdated factories to the Third World.” Landrigan said he believes the world’s scientific community needs to address these issues.

At the conclusion of the conference, Drs. Kenneth Olden and Morando Soffritti signed an agreement formalizing an Institutional Scientific Collaboration between the Ramazzini Foundation and the NIEHS in fields of common interest. Priorities of the collaboration will include: carcinogenicity bioassays on agents jointly identified; research on the interactions between genetic susceptibility and exogenous carcinogens; biostatistical analysis of results and establishment of common research management tools; and molecular biology studies on the basic mechanisms of carcinogenesis.

Detailed information presented in several papers will be included in the proceedings of the conference, to be published in the Annals of the New York Academy of Sciences later this year.

Also read: From Hypothesis to Advances in Cancer Research

The Complexities of Stem Cell Research

A shot of a cell taken from under a microscope.

Proponents on both sides of this at-times controversial debate each make their case, combining the science, history, policy, and ethics of the research.

Published August 1, 2002

By Fred Moreno, Dan Van Atta, Jill Stolarik, and Jennifer Tang

Image courtesy of NIH via Wikimedia Commons.

Following the recent death of American baseball legend Ted Williams, it was learned that the former Boston Red Sox slugger’s body had been suspended in liquid nitrogen, encased in a titanium-steel cylinder along with other bodies being preserved at a commercial cryonics facility. Controversy swirled as the story circulated that at least one family member sought to preserve the icon’s DNA for possible future use in cloning.

Cryonics and cloning are the stuff of popular fiction and films from Frankenstein to Star Wars, with the scientist’s power to “create life” eliciting both fear and fascination. With cloning and embryonic stem cell research now poised for rapid expansion, however, the real-world debate on cloning, even for specifically defined therapeutic purposes, has heated up. Scientists, too, have begun to grapple with the issue of setting appropriate limits on their ability to engineer life.

Stuart Newman, professor of Cell Biology and Anatomy at New York Medical College, is among the more skeptical voices in the debate on human cloning. Speaking at a roundtable discussion held on the subject at The New York Academy of Sciences (the Academy) this spring, Newman called the creation of clonal embryos a slippery slope that no amount of regulation can level. He cited what he considers to be inexorable pressures on biomedical researchers to transgress acceptable limits by allowing cloned embryos to grow beyond the cellular stage.

The Thornier Aspects

During the meeting, which was co-hosted with Gene Media Forum, Newman engaged in an interchange with patient-activists – including the noted actor and director Christopher Reeve – and fellow scientists in an effort to sort out the thornier aspects of the cloning debate.

Craig Venter, PhD, president of the TIGR Center for the Advancement of Genomics and a major figure in microbiology and genomics, moderated the debate. Other panelists included Rudolf Jaenisch, MD, professor of Biology at MIT; James Kelly, an activist on behalf of spinal cord treatment; and Reeve.

For many, the cloning debate hinges on the distinction between reproductive and therapeutic cloning. Reproductive cloning aimed at creating a child has been censured by scientists and ethicists alike. Earlier this year, the National Academy of Sciences called for a total ban on human reproductive cloning, but strongly endorsed cloning to obtain stem cells that hold promise for curing a broad spectrum of human diseases. Jaenisch and Reeve expressed their support for this view, while Kelly and Newman cast doubt on the advisability of human cloning for any purpose.

Therapeutic cloning relies on nuclear transfer technology, a technique that can be used to create a stem cell line customized for an individual patient. The nucleus of one of the patient’s own skin cells, for example, is extracted and transferred into a human egg whose nucleus has been removed. Signals in the egg then reprogram the transferred nucleus, causing the cell to revert to an embryonic state.

In theory, embryonic stem cells can be chemically coaxed into producing lines of cells that will make whatever tissues are needed to heal and repair the body. Examples being considered include leukemia-free bone marrow cells, insulin-producing islet beta cells for diabetics, and dopamine-rich neurons for patients with Parkinson’s disease.

Commercial Interests and Patient Pressures

Still, the slippery slope looms large for critics of the new science. If a legal limit is eventually set allowing scientists to grow a clonal embryo for 14 days, Newman speculated, why not 15, 16, or 17 days and beyond? He said a combination of commercial interests and patient pressures would make it impossible to regulate the technology.

But Rudolf Jaenisch strongly disagreed with this all-or-nothing view. “It’s premature to ban a technique that is still in the process of evolving,” said Jaenisch, referring to a bill in the Senate that, if passed, would criminalize all forms of human cloning. “At no point in our nation’s history has Congress banned an area of scientific exploration or technology by federal legislation.” Nonetheless, despite the objections of many scientists, a total ban on cloning in the United States remains a distinct possibility.

European governments are generally recommending a more measured approach to regulating the new technology. The U.K. recently passed a law prohibiting reproductive cloning but allowing therapeutic cloning research to move forward under strict government oversight.

Australia, Canada, Israel, Japan, Portugal, Singapore and the Benelux countries also have approved therapeutic cloning. A special committee of the European Parliament has been holding meetings to develop a framework for cloning research that can help European governments evaluate its risks and benefits.

“The British solution is black and white,” said Jaenisch. “If you implant a cloned embryo into a uterus, it’s a criminal act. If you put it into a Petri dish with the intent of making an embryonic stem cell, it is allowed. There is no gray zone.” Again putting forth the slippery-slope argument, Newman pointed out that the development of an artificial uterus, for example, would nullify this distinction.

The Legality of Therapeutic Cloning

The United States is alone among the so-called developed nations in attempting to make therapeutic cloning illegal. If Congress succeeds in criminalizing all forms of cloning, the U.S. would effectively seal its borders against the importation of cloning-derived treatments for diseases that afflict millions of Americans. For those with Parkinson’s disease, diabetes, spinal cord injuries, Alzheimer’s disease, and a whole host of incurable conditions, this could be tantamount to “health exile.”

Despite their promise, however, cloning-derived stem cells and their successful development into cures are still just a distant possibility, according to James Kelly, who himself is confined to a wheelchair as a result of a spinal cord injury. They’re too uncertain, he believes, to warrant a large investment of research dollars at the expense of more tried-and-true avenues of investigation.

Christopher Reeve disputed Kelly’s assertion on two counts: First, in his view, it won’t be that long before therapeutic cloning techniques will be ready for use in humans; and second, biomedical research isn’t a zero-sum game. Pointing to the recent doubling of the NIH budget and to funds that have been earmarked by the Department of Health and Human Services for therapeutic cloning, he claimed there will be sufficient funding for many types of research.

The Promise of Therapeutic Cloning

Reeve, who was paralyzed in an equestrian accident in 1995, believes his best hope for recovery lies in therapeutic cloning. Because spinal cord injury usually leads to a compromised immune system, his doctors say his best option is treatment with embryonic stem cells derived from his own DNA, as cells from an anonymous donor would pose a high risk of rejection.

The charismatic activist and philanthropist further reminded his fellow discussants, and the audience, that scientific breakthroughs are often greeted with suspicion. “When vaccines became available early in the 20th century, there was a real fear and, in fact, strong opposition from the private sector and the government,” he said. “The idea for a vaccine against, say, measles meant the introduction of a small amount of measles into the patient, and people couldn’t comprehend that that would be actually the solution to contracting measles.”

Venter concluded the meeting by seconding Reeve’s warning against allowing fear to shape today’s attitudes toward scientific advances, stressing the inherent value of cloning research itself. “Just doing the basic science research is one of the greatest avenues we’re ever going to have to understand our own development and our own biology,” he said.


Analyzing the Self: When Mind Meets Matter


Linking the self – our passions, our hatreds, our temperaments and such – to the physical wiring and physiological functioning of the brain.

Published August 1, 2002

By Rosemarie Foster


Each living creature exists as a unit: a self. But what makes each of us the person we are? It’s a question that’s been pondered for hundreds of years.

Seventeenth-century philosopher and mathematician René Descartes’ most famous quotation – “I think, therefore I am” – postulated that the self is a nonphysical entity, rather than a being identical to one’s body. Two centuries later, in his celebrated essay Self-Reliance, transcendentalist Ralph Waldo Emerson wrote, “To believe your own thought, to believe that what is true for you in your private heart is true for all men – that is genius.” And modern-day screenwriter Woody Allen expressed self-doubt when he said, “My one regret in life is that I’m not someone else.”

Today the study of the self goes beyond the realm of philosophy, bridging this ancient discipline with contemporary neurobiology. At universities around the world, investigators are using modern analytical methods, laboratory tools and sophisticated imaging techniques in an attempt to link the self – our passions, our hatreds, our temperaments and such – to the physical wiring and physiological functioning of the brain. “If you really want to understand the nature of the mind, you have to understand the nature of the brain,” explains Patricia S. Churchland, professor of Philosophy at the University of California, San Diego.

Nature and Nurture

One thing is certain: Who we become and what personalities we develop is a combination of nature – the influence of genes – and nurture, the experiences we encounter throughout our lives. Both influence the development of the brain’s neural circuitry. “The relationship between genes and personality is not a simple one, but they do contribute,” says Joseph LeDoux, Henry and Lucy Moses Professor of Science at the Center for Neural Science at New York University. “But just because something is biological doesn’t mean it’s genetic. Experiences are also very important in shaping our neural wiring.”

Specifically, our experiences help us to learn, through an intricate system of memory processing employed by our brains. This learning results in the formation of actual neural networks.

Our peers may heavily influence such learning. According to social scientist Mahzarin Banaji, Richard Clarke Cabot Professor of Social Ethics in the Department of Psychology at Harvard University and Carol K. Pforzheimer Professor at Radcliffe, the self is the result of one’s collective social experiences.

“Our attitudes and beliefs come from the groups we associate with,” Banaji explains. “The thoughts and opinions we claim to be uniquely ours may in fact not be uniquely ours.” Indeed, a battery of “implicit association tests” that Banaji has developed and administered can reveal hidden biases we are unaware of – and that many of us may not like.

From Soul to Brain

Churchland, LeDoux and Banaji will be among a cadre of distinguished scientists who will gather from around the world at the Mount Sinai School of Medicine from September 26-28 to speak at a unique conference called The Self: From Soul to Brain, sponsored by The New York Academy of Sciences (the Academy). “This will be the first time this group will assemble at one meeting to focus on this novel topic,” said LeDoux, who is organizing and chairing the event. “We want to address how the brain pieces itself together as we go through life.” The conference will foster a dialogue among researchers exploring the neuroscientific, philosophical, theological, and social aspects of the complex entity we call the self.

How do our brains make us who we are? LeDoux explains that it’s all in the brain’s wiring, and the exchange of neurotransmitters between billions of neurons across synapses. Such synaptic wiring regulates all brain functions, including perception, emotion, motivation, thinking and memory. “But the trick is to understand how we as people can emerge out of all of this,” says LeDoux.

“That the self is synaptic can be a curse – it doesn’t take much to break it apart,” he writes in his book Synaptic Self: How Our Brains Become Who We Are, published this year. “But it is also a blessing, as there are always new connections waiting to be made. You are your synapses. They are who you are.”

Traumatic Memories and Physiological Responses

LeDoux’s group is focusing on the study of traumatic memories and the physiological responses they can incite. The brain does not process all of our memories the same way. For traumatic memories, two systems interact: one conscious, one unconscious. For example, if you were in a car accident and you returned to the accident scene, you might remember objective details of the event: “conscious memories.” But your blood pressure and heart rate might escalate, you may sweat, and your muscles might tense – all “unconscious memories” that surface as a result of the past experience.

Moreover, neuroanatomists have learned that these memory systems are mediated by two structures in the brain’s temporal lobe: the hippocampus, which regulates conscious memories, and the amygdala, an almond-shaped area of tissue controlling unconscious memories. LeDoux has focused two decades of inquiry on the latter structure, which he calls “the emotional processing system of fear.” His team has pioneered the study of emotions on the biological level, deep within the recesses of the brain: the amygdala of the rat brain, to be exact.

According to LeDoux, nature installed the amygdala as a survival mechanism. Early on, evolution wired the brain to produce responses to keep an organism alive in dangerous situations. This solution has changed little over evolutionary time, and works essentially the same way in rats as in people.

The LeDoux lab conducts “fear conditioning” in rats to study the function of the amygdala, its connections with other parts of the brain – such as the cortex, which is responsible for thought – and what happens in the brain when the amygdala is damaged. At the heart of their studies is a tone-shock system: They condition a rat by sounding a tone and delivering a minor shock, and they measure the rat’s physiological responses.

The Sensory Thalamus and the Amygdala

Thereafter, whenever the rat hears the tone it may either freeze in its tracks or respond with an increase in blood pressure and heart rate, even when no shock is delivered. Just the anticipation of a shock is enough to trigger a physical reaction in the animal. LeDoux’s group now studies how fear-arousing experiences alter synapses in the rats’ brains – particularly those in the lateral nucleus of the amygdala, the gateway into the system – and thereby create long-lasting memories.

The amygdala doesn’t work alone. One key interaction exists between the amygdala, the sensory thalamus and the sensory cortex. When we see or hear something frightening, we may freeze, jump or turn to see what caused it. That reaction can be traced to the connection between the sensory thalamus and the amygdala, between which signals travel quite quickly but not so precisely.

The same signal is processed several milliseconds more slowly between the thalamus and the sensory cortex, but in a way that allows us to assess the situation more accurately. LeDoux’s findings may be relevant to the management of anxiety disorders, which account for about half of the mental health problems reported in the U.S. and which can result from malfunctions in the way we deal with fear.

In Synaptic Self, LeDoux points out that neuroscientists have done an excellent job of studying how individual systems work, but as persons with selves, we are more than a mere collection of systems. To understand the self, he contends, neuroscientists will have to figure out how the various individual systems work together. One of his reasons for organizing the September conference was to engage scientists from a variety of research areas to begin to think about the self in ways that might be compatible with the tools and findings of brain research.

Not a Solitary Entity

“When we talk about ‘the self,’ it’s misleading to think of it as a single entity,” says Patricia Churchland. “Rather, it’s a number of different capacities engaged in monitoring the body and the various aspects of brain function.” When we perceive objects and events in our external environment – distinguishing them from our inner experiences such as emotions – and when we plan in our minds how, for example, to portage a canoe or to build a shelter, we are exercising what Churchland calls “self-representational capacities.” For 30 years, she has explored the complex connections between neural systems that have developed over time to enable humans to cope with and adapt to external signals, allowing us to improve our behavioral strategies.

In a paper published in the April 12, 2002 issue of the journal Science, Churchland emphasizes the brain-based nature of self-representational capacities. Our internal organs, for example, are represented by chemical and neural pathways aimed mainly at the brainstem and hypothalamus, while autobiographical memories appear to be governed by structures in the medial temporal lobe. The prefrontal lobe and limbic structures are important for deferring gratification and controlling impulses, so much so that damage to these areas may result in personality changes. “Hitherto quiet and self-controlled, a person with lesions in the ventromedial region of the frontal cortex is apt to be more reckless in decision-making, impaired in impulse control, and socially insensitive,” writes Churchland.

Self-Representational Capacities

Indeed, studies of patients who’ve experienced brain damage as a result of stroke, tumors or other disease or injury have shed light on specific areas of the brain associated with such self-representational capacities. Researchers have compared these patients’ abilities and personalities before and after the damage, and coupled that data with the results of contemporary diagnostic tools such as functional magnetic resonance imaging. Churchland aims for a panoramic view of a vast range of data, scrutinizing the findings of investigators in various laboratories to “try to make the story come together,” she explains.

One dramatic example Churchland describes is a patient known as R.B., who has been studied for two decades by Antonio Damasio’s neurology lab at the University of Iowa. R.B. is a middle-aged man who suffers from bilateral damage to his temporal lobes, resulting from herpes simplex viral infection. In particular, his hippocampus was destroyed. As a result, R.B. has catastrophic amnesia: he is unable to learn anything new, and is bereft of essentially all autobiographical memory. He lives within a 40-second time window and has no memory of events that occurred just moments ago, let alone those that happened before his illness.

R.B. does, however, have some social aspects of self-representation, thus demonstrating the dissociation of self-representing capacities. “Although he suffers diminished self-understanding, he nevertheless retains many elements of normal self-capacities,” Churchland notes, “including self-control in social situations and the fluent and correct use of ‘I.’”

The Amygdala and the Frontal Cortex

He also knows where his body stands in space at any given time, can identify feelings such as happiness, and is able to show sympathy with the distress of others. “This shows that the structures of the brain necessary for memory storage and retrieval are probably not those responsible for social skills,” explains Churchland.

At the September meeting, Churchland plans to speak on the topic of self-control, specifically linking self-control to parameters such as connectivity between the amygdala and the frontal cortex, as well as levels of hormones, neurotransmitters such as serotonin, and appetite-regulating proteins such as leptin. “Defining a neurobiological basis for self-control by identifying the relevant neurobiological parameters may be difficult, but I suspect it is possible,” she says.

“As we come to understand the nature of decision-making and choice, and how we acquire habits of self-control in childhood, it is bound to have an impact on how we understand ethics and the criminal justice system. The precise impact of new discoveries concerning the neurobiology of self-control remains to be seen, especially as technologies for intervention become increasingly available.”

Do You Have Hidden Biases?

Men are better suited for math and science than women. Many of us trust whites more than blacks. And we favor youth over age.

These statements may seem like extraordinary generalizations. But they represent hidden biases that many individuals learn they may have after taking Mahzarin Banaji’s implicit association tests (IATs), co-developed with the University of Washington’s Anthony Greenwald and Yale University’s Brian Nosek. If we think we’re open-minded and able to make free choices, then why might we unconsciously harbor such potentially disturbing beliefs? The answer, says Banaji, may lie in the people with whom we associate.

“The self is our most unique aspect. It is what distinguishes us from everyone else,” she explains. “And yet this most unique component of personality is itself socially constructed, a part of a larger collective gathered from everything we live and breathe.” That means we are most likely to hold opinions similar to those of our peers, and the social groups with which we identify most strongly. Moreover, these attitudes are often expressed without conscious awareness.

“Implicit Patriotism”

Banaji and her student Kristin Lane investigated “implicit patriotism” among 74 New England beachgoers who took IATs during the summer of 2000. The IATs compared how strongly they identified with their nation (United States) and their region (New England) on both Independence Day and a nonholiday in August. The results showed a significantly stronger association between the concepts of “self” and “American” on July 4th than on the August test date.

Moreover, regional identity was weaker on July 4th than in August. “Implicit identity is susceptible even to very subtle, naturally occurring events that can strengthen or weaken aspects of identity,” the researchers conclude. They are also mindful of the impact of events like September 11th on the shaping of one’s implicit self and identity.

In another example, Banaji notes that more men than women tend to go into math and science, while women gravitate toward language and the arts. But in elementary school, there are no differences between males and females with regard to math test performance. Gender differences favoring men begin to surface in high school, and become progressively greater as the level of education increases. We’d like to think that anything’s possible for anyone, but in reality, the groups we identify with (in this case, males or females) may exert an unconscious influence on our choices and decisions.

Self-Imposed Segregation

In a paper to be published in the Journal of Personality and Social Psychology, Banaji, Nosek and Greenwald report that this is the case. Their conclusions were based on the findings of several IATs taken by groups of college students. They analyzed whether individuals link subjects such as math and the arts with good words (such as “love, rainbow, heaven”) or bad words (“death, torture, hatred”), and determined whether they associated themselves more with math (“algebra=me”) or the arts (“poetry=me”).
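The intuition behind scoring such a test can be sketched in a few lines of code. The function below is an illustrative simplification, not the actual scoring procedure Banaji and colleagues use (their published algorithm also handles error trials and per-block pooling); the reaction times are hypothetical.

```python
from statistics import mean, stdev

def iat_d_score(congruent_rts, incongruent_rts):
    """Simplified IAT effect: the difference in mean response time
    between incongruent and congruent pairings, scaled by the pooled
    standard deviation of all trials. A positive score indicates the
    congruent pairing was answered faster, i.e. a stronger implicit
    association between those concepts."""
    pooled_sd = stdev(congruent_rts + incongruent_rts)
    return (mean(incongruent_rts) - mean(congruent_rts)) / pooled_sd

# Hypothetical reaction times in milliseconds: this participant sorts
# words faster when "arts" shares a response key with pleasant words
# (congruent) than when "math" does (incongruent).
congruent = [610, 580, 650, 600, 590]
incongruent = [720, 760, 700, 740, 710]
print(round(iat_d_score(congruent, incongruent), 2))  # a large positive effect
```

Because the measure depends only on response speed, not on what participants say about themselves, it can surface associations the test-taker would not consciously endorse.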

While both sexes demonstrated negativity toward math, especially compared to the arts, that negativity was twice as strong among women as among men. Moreover, the more strongly a woman identified with the female gender, the more negatively she felt toward math; conversely, the more strongly a man identified with being male, the greater his preference for math. “Knowledge of stereotypes, even implicit knowledge, may be sufficient to perpetuate stereotypes and even discourage women’s subsequent participation and performance in math domains,” concludes Banaji.

“The blunt reality is that not everything is equally possible for everyone,” she continues. “Societies that aspire to purer forms of democracy need be aware that wanting and choosing can be firmly shaped by membership in social groups. Until the internal, mental constraints that link group identity with preference are removed, the patterns of self-imposed segregation may not change.”
