
The Environment’s Best Friend: GM or Organic?

A pig standing in hay.

The debate over the benefits of genetically modified organisms (GMOs) versus organic methods rages within the realm of agriculture and nutrition. But what does the science say?

Published May 1, 2006

By Lee M. Silver

Image courtesy of gabort via stock.adobe.com.

Pigs raised on farms are dirty, smelly animals. Shunned by Jews and Muslims for millennia, they get no respect from most other people either.

It’s not just our senses, though, that pig farms assault; it’s also the environment. Pig manure is loaded with phosphorus. A large sow can excrete 18 to 20 kg of it per year, which runs off with rainwater into streams and lakes, where it causes oxygen depletion, algal blooms, dead fish, and greenhouse gas emissions. [11] Ecological degradation has reached crisis levels in heavily populated areas of northern Europe, China, Korea, and Japan.

The Cost of Dietary Protein

The problem is that—unlike cows, goats, and sheep—farmed pigs cannot extract sufficient phosphorus from the corn and other grains they are fed. Grains actually contain plenty of phosphorus, but it is mostly locked away in a large chemical called phytate, which is inaccessible to digestion by animal enzymes. Ruminants release the phosphorus during a lengthy digestive process in four stomachs, with the help of symbiotic bacteria.

To survive, the pig’s wild ancestors depended on a varied diet, including worms, insects, lizards, roots, and eggs. But pig farming is most efficient with a simpler all-grain diet, supplemented with essential minerals. Although this feeding strategy works well for the animals, the inaccessible phosphorus in the grain passes entirely into the manure, which farmers use as crop fertilizer or otherwise distribute onto the land.

Today, in most rich and poor countries alike, pigs provide more dietary protein more cheaply and to more people than any other animal. Worldwide, pork accounts for 40% of total meat consumption. [11] While northern Europe still maintains the highest pig-to-human ratio in the world (2 to 1 in Denmark), the rapidly developing countries of east Asia are catching up. During the decade of the 1990s alone, pork production doubled in Vietnam and grew by 70% in China.

Along the densely populated coastlines of both countries, pig density exceeds 100 animals per square kilometer, and the resulting pollution is “threatening fragile coastal, marine habitats including mangroves, coral reefs, and sea grasses.” [7] As the spending power of people in developing Asian countries continues to rise, pig populations will almost certainly increase further.

Pig-caused Ecological Degradation

Pig-caused ecological degradation is a complex problem, and no single solution is in the offing. But any approach that allows even a partial reduction in pollution should be subject to serious consideration by policy makers and the public.

A prototypical example of what directed genetic modification (GM) technology can deliver is the transgenic Enviropig, developed by Canadian biologists Cecil Forsberg and John Phillips. Forsberg and Phillips used an understanding of mammalian gene regulation to construct a novel DNA molecule programmed for specific expression of the E. coli phosphorus-extraction gene (phytase) in pig saliva. They then inserted this DNA construct into the pig genome. [8]

The results obtained with the first generation of animals were dramatic: the newly christened Enviropigs no longer required any costly dietary supplements and the phosphorus content of their manure was reduced by up to 75%. Subtle genetic adjustments could yield even less-polluting pigs, and analogous genetic strategies can also be imagined for eliminating other animal-made pollutants, including the methane released in cow belches, which is responsible for 40% of total greenhouse gas emissions from New Zealand. [5]
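
To make the scale of the problem concrete, here is a minimal back-of-envelope sketch in Python that uses only the figures quoted above (18 to 20 kg of phosphorus per sow per year, and an up-to-75% reduction); the herd size is a hypothetical example, not a number from the article.

```python
# Back-of-envelope estimate of annual manure phosphorus for a herd of pigs,
# using only the figures quoted in the text. The herd size is hypothetical.

PHOSPHORUS_KG_PER_SOW_PER_YEAR = (18 + 20) / 2  # midpoint of the quoted 18-20 kg range
ENVIROPIG_REDUCTION = 0.75                      # "up to 75%" less phosphorus in manure

def annual_phosphorus_kg(n_pigs: int, reduction: float = 0.0) -> float:
    """Estimated phosphorus excreted per year (kg) by a herd."""
    return n_pigs * PHOSPHORUS_KG_PER_SOW_PER_YEAR * (1.0 - reduction)

herd_size = 1_000  # hypothetical herd
print(f"Conventional herd: {annual_phosphorus_kg(herd_size):,.0f} kg P/year")
print(f"Enviropig herd:    {annual_phosphorus_kg(herd_size, ENVIROPIG_REDUCTION):,.0f} kg P/year")
```

Under these assumptions, a hypothetical thousand-animal herd drops from roughly 19,000 kg to under 5,000 kg of excreted phosphorus per year.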

Enzymes in Natural Bacteria

An added advantage with the Enviropig is that the single extra enzyme in its saliva is also present naturally in billions of bacteria inhabiting the digestive tract of every normal human being. As bacteria continuously die and break apart, the naked enzyme and its gene both float free inside us without any apparent consequences, suggesting that the Enviropig will be as safe for human consumption as non-GM pigs. If the enzyme happened to escape into meat from modified pigs, it would be totally inactivated by cooking.

Of course, empirical analysis is required to show that the modification does not make the meat any more harmful to human health than it would be otherwise. With modern technologies for analyzing genomes, transcriptomes, proteomes, and metabolomes, any newly constructed transgenic animal can be analyzed in great molecular detail. “Isn’t it ironic,” Phillips and Forsberg commented [13], “that new varieties of animals with extreme but natural mutations undergo no safety testing at all?”

Environmentally Friendly GM

Not all GM projects are aimed specifically at reducing the harmful effects of traditional agriculture on the environment. Other GM products approved to date, developed almost entirely in the private sphere, have aimed to reduce production costs on large-scale farms. But as molecular biology becomes ever more sophisticated, the universe of potential environmentally friendly GM applications will expand.

Scientists have begun research toward the goal of obtaining pigs modified to digest grasses and hay, much as cows and sheep do, reducing the land- and energy-intensive use of corn and soy as pig feed. Elsewhere, trees grown on plantations for paper production could be made amenable to much more efficient processing. This would reduce both energy usage and the millions of tons of toxic bleaching chemicals released in effluents from paper mills. [3] [14]

The most significant GM applications will be ones that address an undeniable fact: every plot of land dedicated to agriculture is denied to wild ecosystems and species habitats. And that already amounts to 38% of the world’s landmass. Genetic modifications that make crop production more efficient would give us opportunities to abandon farmland and, in many cases, cede it back to forests and other forms of wilderness, as long as world population growth is moderated.

So why are many environmentally conscious people so opposed to all uses of GM technology? The answer comes from the philosophy of organic food promoters, whose fundamental principle is simply stated: natural is good; synthetic is bad. [16]

The Roots of Organic Farming

Before the 18th century, the material substance of living organisms was thought to be fundamentally different—in a vitalistic or spiritual sense—from that of non-living things. Organisms and their products were organic by definition, while non-living things were mineral or inorganic. But with the invention of chemistry, starting with Lavoisier’s work in 1780, it became clear that all material substances are constructed from the same set of chemical elements.

As all scientists know today, the special properties of living organic matter emerge from the interactions of a large variety of complex, carbon-based molecules. Chemists now use the word organic to describe all complex, carbon-based molecules—whether or not they are actually products of any organism.

Through the 19th and 20th centuries, increased scientific understanding, technological innovations, and social mobility changed the face of American agriculture. Large-scale farming became more industrialized and more efficient. In 1800, farmers made up 90% of the American labor force; by 1900, their proportion had decreased to 38%, and in 1990, it was only 2.6%.

A Return to Preindustrial Farming Methods

However, not everyone was happy with these societal changes, and there were calls in the United States and Europe for a return to the preindustrial farming methods of earlier times. This movement first acquired the moniker organic in 1942, when J. I. Rodale began publication in America of Organic Farming & Gardening, a magazine still in circulation today.

According to Rodale and his acolytes, products created by—and processes carried out by—living things are fundamentally different from lab-based processes and lab-created products. The resurrection of this prescientific, vitalistic notion of organic essentialism did not make sense to scientists who understood that every biological process is fundamentally a chemical process. In fact, all food, by definition, is composed of organic chemicals. As a result, the U.S. Department of Agriculture (USDA) refused to recognize organic food as distinguishable in any way from nonorganic food.

Legislating Meaning

In 1990, lobbyists for organic farmers and environmental groups convinced the U.S. Congress to pass the Organic Foods Production Act, which instructed the USDA to establish detailed regulations governing the certification of organic farms and foods. [2] After 12 years of work, the USDA gave organic farmers the certification standards they had wanted in order to prevent supposed imposters from using the word organic on their products. [15] Similar organic standards have been implemented by the European Commission and by the Codex Alimentarius Commission of the United Nations. [4] [6]

In all instances, organic food is defined not by any material substance in the food itself, but instead by the so-called natural methods employed by organic farmers. The USDA defines organic in essentially negative terms when it says, “the [organic] product must be produced and handled without the use of synthetic substances” and without the use of synthetic processes. The Codex Commission explains in a more positive light that “organic agriculture is a holistic production management system.”

The physical attributes of organic products—and any effects they might have on the environment or health—are explicitly excluded from the definition. Nonetheless, the definitions implicitly assume that organic agriculture is by its very nature better for the environment than conventional farming.

Precision Genetic Modification

The European Commission states as a fact that “organic farmers use a range of techniques that help sustain ecosystems and reduce pollution.” Yet, according to self-imposed organic rules, precision genetic modification of any kind for any purpose is strictly forbidden, because it is a synthetic process. If conventional farmers begin to raise Enviropigs—or more sophisticated GM animals that reduce nearly all manure-based pollution—organic pig farmers will then blindly continue to cause much more pollution per animal, unless they are prevented from doing so by future EPA regulations.

Many organic advocates view genetic engineering as an unwarranted attack not just on the holistic integrity of organic farms, but on nature as a whole. On the other hand, spontaneous mutations caused by deep-space cosmic rays are always deemed acceptable since they occur “naturally.” In reality, laboratory scientists can make subtle and precise changes to an organism’s DNA, while high-energy cosmic rays can break chromosomes into pieces that reattach randomly and sometimes create genes that didn’t previously exist.

Regardless, organic enthusiasts maintain their faith in the beneficence and superiority of nature over any form of modern biotechnology. Charles Margulis, a spokesman for Greenpeace USA, calls the Enviropig “a Frankenpig in disguise.” [12]

Chemical Pesticides and Organic Farming

Although the market share held by organic products has yet to rise above single digits in any country, it is growing steadily in Europe, the United States, and Japan. Nearly all consumers assume that organic crops are, by definition, grown without chemical pesticides. However, this assumption is false.

Pyrethrin (C₂₁H₂₈O₃), for example, is one of several common toxic chemicals sprayed onto fruit trees by organic farmers—even on the day of harvesting. Another allowed chemical, rotenone (C₂₃H₂₂O₆), is a potent neurotoxin, long used to kill fish and recently linked to Parkinson’s disease. [1] [10]

How can organic farmers justify the use of these and other chemical pesticides? The answer comes from the delusion that substances produced by living organisms are not really chemicals, but rather organic constituents of nature. Since pyrethrin is produced naturally by chrysanthemums and rotenone comes from a native Indian vine, they are deemed organic and acceptable for use on organic farms.

However, the most potent toxins known to humankind are all natural and organic. They include ricin, abrin, botulinum, and strychnine—highly evolved chemical weapons used by organisms for self-defense and territorial expansion. Indeed, every plant and microbe carries a variety of mostly uncharacterized, more or less toxic attack chemicals, and synthetic chemicals are no more likely to be toxic than natural ones.

Less-allergenic GM Food

All currently used pesticides—both natural and synthetic—dissipate quickly and pose a minuscule risk to consumers. Nevertheless, faith in nature’s beneficence can still be fatal to some children. About 5% of children have severe allergic reactions to certain types of natural food. Every year, unintentional ingestion causes hundreds of thousands of cases of anaphylactic shock and hundreds of deaths.

The triggering agents are actually a tiny number of well-defined proteins that are resistant to digestive fluids. These proteins are found in such foods as peanuts, soybeans, tree nuts, eggs, milk, and shellfish. They linger in the intestines long enough to provoke an allergic immune response in susceptible people.

No society has been willing to ban the use of any allergenic ingredients in processed foods, even though this approach could save lives and reduce much human suffering. GM technology could offer a more palatable alternative: scientists could silence the specific genes that code for allergenic proteins. The subtly modified organisms would then be tested, in a direct comparison with unmodified organisms, for allergenicity as well as agronomic and nutritional attributes.

USDA-supported scientists have already created a less-allergenic soybean. Soy is an important crop used in the production of a variety of common foods, including baby formula, flour, cereals, and tofu. Eliot Herman and his colleagues embedded a transgene into the soy genome that takes advantage of the natural RNA interference system to turn off the soy gene responsible for 65% of allergic reactions. [9]

Promising Results

RNA interference can be made to work in a highly specific manner, targeting the regulation of just a single gene product. Not only was the modified soy less allergenic in direct tests, but the plants grew normally and retained a complex biochemical profile that was unaltered except for the absence of the major allergen. Further rounds of genetic surgery could eliminate additional allergenic soy proteins. Other scientists have reported promising results in their efforts to turn off allergy-causing genes in peanuts and shrimp.

Some day perhaps, conventional soy and peanut farmers will all switch production to low-allergenicity GM crop varieties. If that day arrives, organic food produced with GM-free organic soy or peanuts will be certifiably more dangerous to human health than comparable nonorganic products.

Unfortunately, conventional farmers have no incentive to plant reduced-allergy seeds when sales of their current crops are unrestricted, especially when the public has been led to believe that all genetic modifications create health risks. In the current social and economic climate, much of the critical research required to turn promising results into viable products is simply not pursued. Anti-GM advocates for organic food may be indirectly and unknowingly responsible for avoidable deaths in the future.

Vegetarian Meat

Only three decades have passed since genetic modification technology was first deployed in a rather primitive form on simple bacteria. The power of the technology continues to explode with no end in sight, leading to speculation about how agriculture could really be transformed in the more distant future.

Chicken meat is actually cooked muscle, and muscle is a type of tissue with a particular protein composition and a particular structure. At some future date, as the power of biotechnology continues to expand, our understanding of plant and animal genes could be combined with the tools of genetic modification to create a novel plant that grows appendages indistinguishable in molecular composition and structure from chicken muscles.

Vegetative chickens—or perhaps muscular vegetables—could be grown just like other crops. Eventually, there could be fields of chicken, beef, and pork plants. At harvest time, low-fat boneless meat would be picked like fruit from a tree.

The Advantages

The advantages of genetically engineered vegetative meat are numerous and diverse. Without farm animals, there could be no suffering from inhumane husbandry conditions and no pollution from manure. Since the sun’s rays would be used directly by the plant to make meat, without an inefficient animal intermediate, far less energy, land, and other resources would be required to feed people.

Up to 20% of the earth’s landmass currently used for grazing or growing animal feed might be ceded back to nature for the regrowth of dense forests. As a result, biodiversity would expand, the extinctions of many species might be halted, and a large sink for extracting greenhouse gases from the atmosphere might be created.

Of course, this scenario is wild biotech speculation. But current-day organic advocates would reject any technology of this kind out of hand, even if it was proven to be beneficial to people, animals, and the biosphere as a whole. This categorical rejection of all GM technologies is based on a religious faith in the beneficence of nature and her processes under all circumstances, even when science and rationality indicate otherwise.


References

1. Betarbet, R., T. B. Sherer, G. MacKenzie et al. 2000. Chronic systemic pesticide exposure reproduces features of Parkinson’s disease. Nature Neuroscience 3: 1301-1306.

2. Burros, M. 2002. A definition at last, but what does it all mean? New York Times (Oct 16).

3. Chiang, V. L. 2002. From rags to riches. Nature Biotechnology 20: 557-558.

4. Codex Alimentarius Commission. 1999. Guidelines for the Production, Processing, Labeling and Marketing of Organically Produced Foods. U.N. Food and Agriculture Organization, Rome.

5. Dennis, C. 2004. Vaccine targets gut reaction to calm livestock wind. Nature 429: 119.

6. EUROPA. 2000. What Is Organic Farming? European Commission, Belgium.

7. FAO. 2006. Livestock Policy Brief 02: Pollution from Industrialized Livestock Production. U.N. Food and Agriculture Organization, Rome.

8. Golovan, S. P., R. G. Meidinger, A. Ajakaiye et al. 2001. Pigs expressing salivary phytase produce low-phosphorus manure. Nature Biotechnology 19: 741-745.

9. Herman, E. M., R. M. Helm, R. Jung & A. J. Kinney. 2003. Genetic modification removes an immunodominant allergen from soybean. Plant Physiology 132: 36-43.

10. Isman, M. B. 2006. Botanical insecticides, deterrents, and repellents in modern agriculture and an increasingly regulated world. Annual Review of Entomology 51: 45-66.

11. OECD. 2003. Agriculture, trade and the environment: the pig sector. OECD Observer (Sep).

12. Osgood, C. 2003. Enviropigs may be essential to the future of hog production. Osgood File (Aug 1). CBS News.

13. Personal communication.

14. Pilate, G., E. Guiney, K. Holt et al. 2002. Field and pulping performances of transgenic trees with altered lignification. Nature Biotechnology 20: 607-612.

15. USDA. 2000. National Organic Program. U.S. Department of Agriculture.

16. Verhoog, H., M. Matze, E. Lammerts van Bueren & T. Baars. 2003. The role of the concept of the natural (naturalness) in organic farming. Journal of Agricultural and Environmental Ethics 16: 29-49.


About the Author

Lee M. Silver is professor of molecular biology in the Woodrow Wilson School of Public and International Affairs at Princeton University, and the author of Challenging Nature: The Clash of Science and Spirituality at the New Frontiers of Life (Ecco).

A New Perspective on the Future of Human Evolution

A gorilla holds her baby in a jungle.

Frank Wilczek, the 2004 Nobel Prize winner and renowned theoretical physicist and mathematician, offers some provocative speculations on the future of human evolution.

Published September 1, 2006

By Frank Wilczek

Archaeopteryx could fly—but not very well. Human beings today can penetrate outside Earth’s airy envelope—but not very well. Our minds can penetrate into realms of thought far beyond the domain they were evolved to inhabit—but not very well. It seems clear that the present form of humanity is, like Archaeopteryx, a transitional stage. What will come next? I don’t know, of course, but it’s an entertaining, inspiring—and maybe important—question to think about.

Qualitative Evolution Based on Biology

In the past, evolution has been based on natural selection. Its results are impressive. Yet from an engineering perspective, natural selection is both haphazard and crude—haphazard because no meaningful goal is explicit; crude because it gathers feedback slowly and with much noise.

What we might call its “goal” is simply to keep going. Its “performance criterion” is production of fertile offspring: what Darwin called the struggle for existence. That “goal” is, of course, not a mindful goal, nor is the “performance criterion” a performance criterion in the conventional sense, where we judge how well some concrete task has been accomplished. Yet natural selection, by allowing information to flow from the environment to the replicating unit—the genes—results in effective adaptation and creative response to opportunities. Famously, it leads to what seem to be inspired designs to achieve what appear (to us) to be concrete goals.

Viewed analytically, evolution’s design methods look terribly inefficient. Feedback arrives once a generation, and its information content is just a few bits, to wit, the number and genetic types of surviving offspring. Furthermore, that information content is dominated by unrelated noise: all the complex accidents that impact survival. By way of comparison, we routinely gather gigabytes of useful information every hour by using our eyes and brain to look out at the world. Evolution by natural selection produces impressive feats of creative engineering only because it plays out over very long spans of time (many generations) on a very large stage (many individuals).
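
As a rough illustration of the disparity described above, the sketch below converts the two information channels into bits per second. The specific inputs (about ten bits of feedback per generation, a 25-year human generation, a gigabyte of visual information per hour) are illustrative assumptions, not measurements.

```python
# Rough comparison of information-flow rates: selective feedback once per
# generation versus visual input gathered by eyes and brain. All inputs are
# illustrative assumptions chosen to match the scales described in the text.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

bits_per_generation = 10        # "a few bits" of feedback, counted generously
generation_years = 25           # assumed human generation time
vision_bytes_per_hour = 1e9     # "gigabytes of useful information every hour"

selection_bits_per_s = bits_per_generation / (generation_years * SECONDS_PER_YEAR)
vision_bits_per_s = vision_bytes_per_hour * 8 / 3600

print(f"Natural selection: ~{selection_bits_per_s:.1e} bits/s")
print(f"Visual perception: ~{vision_bits_per_s:.1e} bits/s")
print(f"Ratio: roughly {vision_bits_per_s / selection_bits_per_s:.0e} to one")
```

The exact numbers matter far less than the conclusion: under any reasonable choice of inputs, the two channels differ by many orders of magnitude.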

The Failures of Classic Eugenics

In the past, eugenics—encouraging certain individuals to reproduce while discouraging others—has been proposed as a path to human improvement. Even leaving moral issues aside, classical eugenics was doomed to failure. Selecting human parents on the basis of a few superficial characteristics is inherently crude and inefficient, with the same drawbacks as natural selection.

Only recently, with increased understanding of genetics, development, and physiology at the molecular level, have truly powerful possibilities for directing evolution begun to arise.

Screening against catastrophic genetic diseases is widely practiced and accepted. But where are the boundaries between disease, substandard performance, and suboptimal performance? Is deafness a disease? Or is tone deafness? Is lack of perfect pitch? Any boundary is artificial, and arbitrary boundaries will be breached. What’s in store for the future? Some, if not all, parents will seek to produce the best children they can, according to their own view of “best.” Parental (or governmental?) selection will replace natural selection as an engine of human evolution. Selection by genetic screening will be much more efficient.

What goals will parents pursue? (Note: I say will, not should.)

The most obvious goal is to improve health, broadly defined to include both physical vitality and longevity. The popularity of performance-enhancing drugs for athletes, of diets and food supplements, and, of course, our vast investment in medical research, attest to our powerful drive toward that goal. In this area, the most fundamental issue is aging.

The Realm of Molecular Investigation

After a long exile at the fringes of biology, the question of why we age, and what can be done to combat that process, has now firmly entered the realm of molecular investigation. Decisive progress may or may not come within a few years, but in a few decades it is likely, and in a century almost certain. Future humans will be healthier and live much longer than we do. They may be effectively immortal—and they’ll all have perfect pitch.

A second goal is more powerful intelligence. It may not be obvious, especially if you pay attention to the American political scene, but the evidence of nature is that there is intense pressure toward the evolution of increased intelligence. In the six million years or so since protohumans separated from chimpanzees, even bumbling natural selection has systematically upgraded our brains and enlarged our skulls, despite the steep costs of difficult childbirth and prolonged infancy. I suspect an important part of the pressure for intelligence comes from sexual selection: finding a mate is a complicated business, and women in particular tend to be choosy.

The salient facts here are: first, that it was possible to come so far so fast (on an evolutionary timescale!), and second, that the limiting factor is plausibly the mechanics of childbirth. Together, these facts suggest that tuning up production of bigger and better brains may be simple, once we find the tuning mechanism. More generally, better understanding of the molecular mechanisms behind development and learning gives new hope for improving mental vitality, just as understanding molecular genetics and physiology does for physical vitality.

Qualitative Evolution Based on Technology

Biological evolution, whether based on natural or parental selection, is intrinsically limited. Early design decisions, which may not have been optimal, were locked in or forced by the physical nature of Earth-based life. Some of those decisions can be revisited through the addition of nonbiological enhancements (man-machine hybrids); others may require starting over from scratch.

The concept of a man-machine hybrid may sound exotic or even perverse, but the reality is commonplace. For example, humans do not have an accurate time sense, or absolute place sense, or the ability to communicate over long distances or extremely rapidly, or the ability to record sensory input accurately. To relieve these deficiencies, they have already become man-machine hybrids: by wearing a watch, using a GPS system, and carrying a cell phone, a Blackberry, and a digital camera.

Of these devices, only the watch was common ten years ago (and today’s watches are more accurate and much cheaper). Many more capabilities, and more seamless integration of man and machine, are on the horizon. For better or worse, much of the cutting-edge research in this area is military.

In other cases, incremental addition of capability may not be feasible. To do justice to what is possible, radical breaks will be necessary. I’ll mention three such cases.

Hostility to Human Physiology

The vast bulk of the universe is extremely hostile to human physiology. We need air to breathe, water to drink, a narrow range of temperatures to support our biochemistry; our genetic material is vulnerable to cosmic radiation; we do not thrive in a weightless environment. As a practical matter, our major ventures into space will be by proxy. Our proxies will be either humans so modified as to clearly constitute a different species; or, more likely, new species we design from scratch, that will contain a large nonbiological component.

The fundamental design of human brains, based on ionic conduction and chemical signaling, is hopelessly slower and less compact than modern semiconductor microelectronics. Its competitive advantages, based on three-dimensionality, self-assembly, and fault tolerance, will fade as we learn how to incorporate those ideas into engineering practice. Within a century, the most capable information processors will not be human brains, but something quite different.

Recently, a new concept has emerged that could outstrip even these developments. Physicists have realized that quantum mechanics offers qualitatively new possibilities for information processing, and even for logic itself. At the moment, quantum computers are purely a theoretical concept lacking a technological realization, but research in this area is intense, and the situation could change soon. Quantum minds would be very powerful, but profoundly alien. We—and this “we” includes even highly trained, Nobel-Prize-winning physicists—have a hard time understanding the subtleties of quantum mechanical entanglement; but exactly that phenomenon would be the foundation of the thought processes of quantum minds!

Where Does it Lead?

A famous paradox led Enrico Fermi to ask, with genuine puzzlement, “Where are they?”

Simple considerations strongly suggest that technological civilizations whose works are readily visible throughout our Galaxy (that is, given current or imminent observation technology) ought to be common. If they were, I’d base my speculations about future directions of evolution on case studies! But they aren’t. Like Sherlock Holmes’s dog that did not bark in the nighttime, the absence of such advanced technological civilizations speaks through silence.

Main-sequence stars like our Sun provide energy at a stable rate for several billions of years. There are billions of such stars in our Galaxy. Although our census of planets around other stars is still in its infancy, what we know already makes it highly probable that many millions of these stars host, within their so-called habitable zones, Earth-like planets. These bodies meet the minimal requirements for life in something close to the form we know it, notably including the possibility of liquid water.

On Earth, the first emergence of a species capable of technological civilization took place about one hundred thousand years ago. We can argue about defining the precise time when technological civilization itself emerged. Was it with the beginning of agriculture, of written language, or of modern science? But whatever definition we choose, the number will be significantly less than one hundred thousand years.

In any case, for Fermi’s question the most relevant time is not ten thousand years, but closer to one hundred. This marks the period of technological “breakout,” when our civilization began to release energies and radiations on a scale that may be visible throughout our Galaxy. Exactly what that visibility requires is an interesting and complicated question, whose answer depends on the means available to hypothetical observers.

Sophisticated Extraterrestrial Intelligence

We might already be visible to a sophisticated extraterrestrial intelligence through our radio broadcasts or our effects on the atmosphere. The precise answer hardly matters, however, if anything like the current trend of technological growth continues. Whether we’re barely visible to sophisticated though distant observers today, or not quite, after another hundred years of technological expansion we’ll be easily visible.

A hundred years is less than a part in ten million of the billion-year span over which complex life has been evolving on Earth. The exact placement of breakout within the multibillion-year timescale of evolution depends on historical accidents. With a different sequence of the impact events that led to mass extinctions, or an earlier occurrence of lucky symbioses and chromosome doublings, Earth’s breakout might have occurred one billion years ago instead of one hundred.

The same considerations apply to those other Earth-like planets. Indeed, many such planets, orbiting older stars, came out of the starting gate billions of years before we did. Among the millions of experiments in evolution in our Galaxy, we should expect that many achieved breakout much earlier, and thus became visible long ago. So, where are they?

Several answers to that paradoxical question have been proposed. Perhaps our simple estimate of the number of life-friendly planets is for some subtle reason wildly overoptimistic. Or perhaps, even if life of some kind is widespread, technologically capable species are extremely rare. Perhaps breakout technology quickly ends in catastrophic warfare or exhaustion of resources. There are uncertainties at every stage of the argument. Even so, like Fermi, I remain perplexed.
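
The argument of the last few paragraphs can be compressed into a Drake-style back-of-envelope product, sketched below in Python. Every factor is an illustrative assumption rather than a measurement, and each of the proposed resolutions above amounts to arguing that one of these factors is drastically smaller than assumed.

```python
# Drake-style compression of the Fermi argument above. Every factor is an
# illustrative assumption, not a measurement.

stars_in_galaxy = 2e11           # rough order of magnitude for the Milky Way
frac_earthlike_habitable = 0.05  # assumed fraction with an Earth-like planet in the habitable zone
frac_reaching_breakout = 1e-4    # assumed fraction that ever produce a breakout civilization
frac_breakout_before_us = 0.5    # assumed fraction whose breakout preceded ours

expected_visible = (stars_in_galaxy
                    * frac_earthlike_habitable
                    * frac_reaching_breakout
                    * frac_breakout_before_us)

print(f"Expected civilizations visible before ours: ~{expected_visible:,.0f}")
```

Under these particular assumptions the expected count runs to hundreds of thousands, which is precisely what makes the observed silence puzzling.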

The preceding discussion suggests another sort of possibility: they’re out there, but they’re hiding.

Quantum Quiet

If the ultimate information processing technology is deeply quantum-mechanical, it may not be energy-intensive. Excessive energy use brings heat in its wake, and heat is a deadly enemy of quantum coherence. More generally, quantum information processing is extremely delicate, and easily spoiled by outside disturbances. It is best done in the cold and the dark. Quantum minds might well be silent and isolated by necessity.

Silence and inner contemplation can also be a choice. The ultimate root of human drives remains what our selfish genes, in the struggle for existence, have imprinted. That root is apparent in many of the most obvious priorities of our behavior, which include fending off threats from a hostile environment, finding and attracting desirable mates, and caring for the young.

Those priorities involve active engagement with the external world. The products of deliberate biological or technological evolution, as opposed to natural selection, could have quite different motivations. They might, for example, seek to optimize their state according to some mathematical criterion (their utility function). Having found an optimum state, or several excellent ones, they could choose ever to relive selected moments of perfect bliss, perfectly reconstructed. This was the temptation of Faust:

If I say to the moment:
“Stay now! You are so beautiful!”
Then round my soul the fetters throw,
Then to perdition let me go!

Humans were not built to treasure a Magic Moment, nor could they reproduce such a moment reliably and in detail. For our evolutionary successors, that Faustian temptation will be much more realistic.



About the Author

Frank Wilczek is the Herman Feshbach Professor of Physics at MIT. He received the Nobel Prize in Physics in 2004 for the discovery of asymptotic freedom in the theory of the strong interaction. He is the author, with Betsy Devine, of Longing for the Harmonies: Themes and Variations from Modern Physics (W. W. Norton) and of the recently published Fantastic Realities (World Scientific).

Lee Smolin: A Crisis in Fundamental Physics

With an infinity of universes proposed, and more than 10^400 theories, is experimental proof of physical laws still feasible?

Published January 1, 2006

By Lee Smolin

Image courtesy of WP_7824 via stock.adobe.com.

For more than two hundred years, we physicists have been on a wild ride. Our search for the most fundamental laws of nature has been rewarded by a continual stream of discoveries. Each decade back to 1800 saw one or more major additions to our knowledge about motion, the nature of matter, light and heat, space and time. In the 20th century, the pace accelerated dramatically.

Then, about 30 years ago, something changed. The last time there was a definitive advance in our knowledge of fundamental physics was the construction of the theory we call the standard model of particle physics in 1973. The last time a fundamental theory was proposed that has since gotten any support from experiment was a theory about the very early universe called inflation, which was proposed in 1981.

Since then, many ambitious theories have been invented and studied. Some of them have been ruled out by experiment. The rest have, so far, simply made no contact with experiment. During the same period, almost every experiment agreed with the predictions of the standard model. Those few that didn’t produced results so surprising—so unwanted—that baffled theorists are still unable to explain them.

The Gap Between Theory and Experiment

The growing gap between theory and experiment is not due to a lack of big open problems. Much of our work since the 1970s has been driven by two big questions: 1) Can we combine quantum theory and general relativity to make a quantum theory of gravity? and 2) Can we unify all the particles and forces, and so understand them in terms of a simple and completely general law? Other mysteries have deepened, such as the question of the nature of the mysterious dark energy and dark matter.

Traditionally, physics progressed by a continual interplay of theory and experiment. Theorists hypothesized ideas and principles, which were explored by stating them in precise mathematical language. This allowed predictions to be made, which experimentalists then tested. Conversely, when a surprising new experimental finding appeared, theorists attempted to model it in order to test the adequacy of the current theories.

There appears to be no precedent for a gap between theory and experiment lasting decades. It is something we theorists talk about often. Some see it as a temporary lull and look forward to new experiments now in preparation. Others speak of a new era in science in which mathematical consistency has replaced experiment as the final arbiter of a theory’s correctness. A growing number of theoretical physicists, myself among them, see the present situation as a crisis that requires us to reexamine the assumptions behind our so-far unsuccessful theories.

I should emphasize that this crisis involves only fundamental physics—that part of physics concerned with discovering the laws of nature. Most physicists are concerned not with this but with applying the laws we know to understand and control myriad phenomena. Those are equally important endeavors, and progress in these domains is healthy.

Contending Theories

Since the 1970s, many theories of unification have been proposed and studied, going under fanciful names such as preon models, technicolor, supersymmetry, brane worlds, and, most popularly, string theory. Theories of quantum gravity include twistor theory, causal set models, dynamical triangulation models, and loop quantum gravity. One reason string theory is popular is that there is some evidence that it points to a quantum theory of gravity.

One source of the crisis is that many of these theories have many freely adjustable parameters. As a result, some theories make no predictions at all. But even in the cases where they make a prediction, it is not firm. If the predicted new particle or effect is not seen, theorists can keep the theory alive by changing the value of a parameter to make it harder to see in experiment.

The standard model of particle physics has about 20 freely adjustable parameters, whose values were set by experiment. Theorists have hoped that a deeper theory would provide explanations for the values the parameters are observed to take. There has been a naive, but almost universal, belief that the more different forces and particles are unified into a theory, the fewer freely adjustable parameters the theory will have.

Parameters

This is not the way things have turned out. There are theories that have fewer parameters than the standard model, such as technicolor and preon models. But it has not been easy to get them to agree with experiment. The most popular theories, such as supersymmetry, have many more free parameters—the simplest supersymmetric extension of the standard model has 105 additional free parameters. This means that the theory is unlikely to be definitively tested in upcoming experiments. Even if the theory is not true, many possible outcomes of the experiments could be made consistent with some choice of the parameters of the theory.
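
The role of free parameters can be illustrated with a loose numerical analogy (not drawn from the physics itself): a model with enough adjustable parameters can be tuned to reproduce almost any data set, which is exactly why adding parameters makes a theory harder to falsify.

```python
# Loose analogy for why free parameters undermine falsifiability: an
# 8-parameter polynomial matches eight arbitrary "measurements" exactly,
# while a 2-parameter line cannot. The data here are pure random noise.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 8)
y = rng.normal(size=8)  # noise standing in for whatever the experiment happens to see

for n_params in (2, 8):
    coeffs = np.polyfit(x, y, deg=n_params - 1)
    worst_mismatch = np.abs(np.polyval(coeffs, x) - y).max()
    print(f"{n_params} free parameters: worst-case mismatch = {worst_mismatch:.2e}")
```

The analogy is loose, but it captures the complaint: when a theory can absorb almost any outcome by adjusting its parameters, no experiment can decisively rule it out.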

String theory comes in a countably infinite number of versions, most of which have many free parameters. String theorists speak no longer of a single theory, but of a vast “landscape” [1] of possible theories. Moreover, some cosmologists argue for an infinity of universes, each of which is governed by a different theory.

A tiny fraction of these theories may be roughly compatible with present observation, but this is still a vast number, estimated to be greater than 10^400 theories. (Nevertheless, so far not a single version consistent with all experiments has been written down.) No matter what future experiments see, the results will be compatible with vast numbers of theories, making it unlikely that any experiment could either confirm or falsify string theory.

A New Definition of Science

This realization has brought the present crisis to a head. Steven Weinberg and Leonard Susskind have argued for a new definition of science in which a theory may be believed without being subject to a definitive experiment whose result could kill it. Some theorists even tell us we are faced with a choice of giving up string theory—which is widely believed by theorists—or giving up our insistence that scientific theories must be testable. As Steven Weinberg writes in a recent essay: [2]

Most advances in the history of science have been marked by discoveries about nature, but at certain turning points we have made discoveries about science itself…Now we may be at a new turning point, a radical change in what we accept as a legitimate foundation for a physical theory…The larger the number of possible values of physical parameters provided by the string landscape, the more string theory legitimates anthropic reasoning as a new basis for physical theories: Any scientists who study nature must live in a part of the landscape where physical parameters take values suitable for the appearance of life and its evolution into scientists.

An Infinity of Theories

Among an infinity of theories and an infinity of universes, the only predictions we can make stem from the obvious fact that we must live in a universe hospitable to life. If this is true, we will not be able to subject our theories to experiments that might either falsify or confirm them. But, say some proponents of this view, if this is the way the world is, it’s just too bad for outmoded ways of doing science. Such a radical proposal by such justly honored scientists requires a considered response.

I believe we should not modify the basic methodological principles of science to save a particular theory—even a theory that the majority of several generations of very talented theorists have devoted their careers to studying. Science works because it is based on methods that allow well-trained people of good faith, who initially disagree, to come to consensus about what can be rationally deduced from publicly available evidence. One of the most fundamental principles of science has been that we only consider as possibly true those theories that are vulnerable to being shown false by doable experiments.

Contending Styles of Research

I think the problem is not string theory, per se. It goes deeper, to a whole methodology and style of research. The great physicists of the beginning of the 20th century—Einstein, Bohr, Mach, Boltzmann, Poincaré, Schrödinger, Heisenberg—thought of theoretical physics as a philosophical endeavor. They were motivated by philosophical problems, and they often discussed their scientific problems in the light of a philosophical tradition in which they were at home. For them, calculations were secondary to a deepening of their conceptual understanding of nature.

After the success of quantum mechanics in the 1920s, this philosophical way of doing theoretical physics gradually lost out to a more pragmatic, hard-nosed style of research. This is not because all the philosophical problems were solved: to the contrary, quantum theory introduced new philosophical issues, and the resulting controversy has yet to be settled. But the fact that no amount of philosophical argument settled the debate about quantum theory went some way to discrediting the philosophical thinkers.

It was felt that while a philosophical approach may have been necessary to invent quantum theory and relativity, thereafter the need was for physicists who could work pragmatically, ignore the foundational problems, accept quantum mechanics as given, and go on to use it. Those who either had no misgivings about quantum theory or were able to put their misgivings to one side were able in the next decades to make many advances all over physics, chemistry, and astronomy.

The shift to a more pragmatic approach to physics was completed when the center of gravity of physics moved to the United States in the 1940s. Feynman, Dyson, Gell-Mann, and Oppenheimer were aware of the unsolved foundational problems, but they taught a style of research in which reflection on them had no place in research.

Physics in the 1970s

By the time I studied physics in the 1970s, the transition was complete. When we students raised questions about foundational issues, we were told that no one understood them, but it was not productive to think about that. “Shut up and calculate,” was the mantra. As a graduate student, I was told by my teachers that it was impossible to make a career working on problems in the foundations of physics. My mentors pointed out that there were no interesting new experiments in that area, whereas particle physics was driven by a continuous stream of new experimental discoveries. The one foundational issue that was barely tolerated, although discouraged, was quantum gravity.

This rejection of careful foundational thought extended to a disdain for mathematical rigor. Our uses of theories were based on rough-and-ready calculation tools and intuitive arguments. There was in fact good reason to believe that the standard model of particle physics is not mathematically consistent at a rigorous level. As a graduate student at Harvard, I was taught not to worry about this because the contact with experiment was more important. The fact that the predictions were confirmed meant that something was right, even if there might be holes in the mathematical and conceptual foundations, which someone would have to fix later.

The Disappearance of Contact with Experiment

In retrospect, it seems likely that this style of research, in which conceptual puzzles and issues of mathematical rigor were ignored, can only succeed if it is tightly coupled to experiment. When the contact with experiment disappeared in the 1980s, we were left with an unprecedented situation.

The string theories are understood, from a mathematical point of view, as badly as the older theories, and most of our reasoning about them is based on conjectures that remain unproven after many years, at any level of rigor. We do not even have a precise definition of the theory, either in terms of physical principles or mathematics. Nor do we have any reasonable hope to bring the theory into contact with experiment in the foreseeable future. We must ask how likely it is that this style of research can succeed at its goal of discovering new laws of nature.

It is difficult to find yourself in disagreement with the majority of your scientific community, let alone with several heroes and role models. But after a lot of thought I’ve come to the conclusion that the pragmatic style of research is failing. By 1980, we had probably gone as far as we could by following this pragmatic, antifoundational methodology.

If we have failed to solve the key problems of quantum gravity and unification in a way that connects to experiment, perhaps these problems cannot be solved using the style of research that we theoretical physicists have become accustomed to. Perhaps the problems of unification and quantum gravity are entangled with the foundational problems of quantum theory, as Roger Penrose and Gerard ’t Hooft think. If they are right, thousands of theorists who ignore the foundational problems have been wasting their time.

Unification and Quantum Gravity

There are approaches to unification and quantum gravity that are more foundational. Several of them are characterized by a property we call background independence. This means that the geometry of space is contingent and dynamical; it provides no fixed background against which the laws of nature can be defined. General relativity is background-independent, but standard formulations of quantum theory—especially as applied to elementary particle physics—cannot be defined without the specification of a fixed background. For this reason, elementary particle physics has difficulty incorporating general relativity.

String theory grew out of elementary particle physics and, at least so far, has only been successfully defined on fixed backgrounds. Thus, the infinity of string theories which are known are each associated with a single space-time background.

Those theorists who feel that theories should be background-independent tend to be more philosophical, more in the tradition of Einstein. Background-independent approaches to quantum gravity have been pursued by such philosophically sophisticated scientists as John Baez, Chris Isham, Fotini Markopoulou, Carlo Rovelli, and Raphael Sorkin, who are sometimes even invited to speak at philosophy conferences. This is not surprising, because the debate between those who think space has a fixed structure and those who think of it as a network of dynamical relationships goes back to the disputes between Newton and his contemporary, the philosopher Leibniz.

Meanwhile, many of those who continue to reject Einstein’s legacy and work with background-dependent theories are particle physicists who are carrying on the pragmatic, “shut-up-and-calculate” legacy in which they were trained. If they hesitate to embrace the lesson of general relativity that space and time are dynamical, it may be because this is a shift that requires some amount of critical reflection in a more philosophical mode.

A Return to the Old Style of Research

Thus, I suspect that the crisis is a result of having ignored foundational issues. If this is true, the problems of quantum gravity and unification can only be solved by returning to the older style of research.

How well could this be expected to turn out? For the last 20 years or so, there has been a small resurgence of the foundational style of research. It has taken place mainly outside the United States, but it is beginning to flourish in a few centers in Europe, Canada, and elsewhere. This style has led to very impressive advances, such as the invention of the idea of the quantum computer. While this was suggested earlier by Feynman, the key step that catalyzed the field was made by David Deutsch, a very independent, foundational thinker living in Oxford.

For the last few years, experimental work on the foundations of quantum theory has been moving faster than experimental particle physics. And some leading experimentalists in this area, such as Anton Zeilinger, in Vienna, talk and write about their experimental programs in the context of the philosophical problems that motivate them.

Currently, there is a lot of optimism and excitement among the quantum gravity community about approaches that embrace the principle of background independence. One reason is that we have realized that some current experiments do test aspects of quantum gravity; some theories are already ruled out and others are to be tested by results expected soon.

Collective Phenomena

A notable feature of the background independent approaches to quantum gravity is that they suggest that particle physics, and even space-time itself, emerge as collective phenomena. This implies a reversal of the hierarchical way of looking at science, in which particle physics is the most “fundamental” and mechanisms by which complex and collective behavior emerge are less fundamental.

So, while the new foundational approaches are still pursued by a minority of theorists, the promise is quite substantial. We have in front of us two competing styles of research. One, which 30 years ago was the way to succeed, now finds itself in a crisis because it makes no experimental predictions, while another is developing healthily, and is producing experimentally testable hypotheses. If history and common sense are any guide, we should expect that science will progress faster if we invest more in research that keeps contact with experiment than in a style of research that seeks to amend the methodology of science to excuse the fact that it cannot make testable predictions about nature.


References

1. Smolin, L. 1997. The Life of the Cosmos. Oxford University Press.

2. Weinberg, S. 2005. Living in the multiverse.

Further Reading

Smolin, L. 2006. The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. Houghton Mifflin, New York.

Woit, P. 2006. Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law. Basic Books, New York.


About the Author

Lee Smolin is a theoretical physicist who has made important contributions to the search for a quantum theory of gravity. He is a founding researcher at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario. He is the author of The Life of the Cosmos (Oxford, 1997), Three Roads to Quantum Gravity (Orion, 2001), and the forthcoming The Trouble with Physics (Houghton Mifflin, 2006).

Tsunami Relief Efforts: A Personal Account

Water splashes and people scramble during a tsunami.

Collaboration is key when dealing with disasters. A medical doctor offers guidance from her experience in the aftermath of the 2004 tsunami in the Indian Ocean.

Published June 1, 2005

By Sheri Fink, MD, PhD

A photograph of the 2004 tsunami in Ao Nang, Krabi Province, Thailand. Image courtesy of David Rydevik via Wikimedia Commons. Public Domain.

During two months working in Thailand and Indonesia after the tsunami, I was struck by the many ways that science and technology were employed during the disaster recovery process, although not without controversy and complications. Geospatial imaging information guided aid workers to highly populated disaster zones, but not all countries immediately released the sensitive information. Instant cell-phone messaging allowed disease surveillance specialists to track emerging infectious outbreaks across widespread areas, but not all health workers reported their cases.

One of the most interesting applications of science was in the field of forensics. In Thailand, the tsunami stole the lives of an estimated 3,442 Thai nationals and 1,953 foreigners, many of them European tourists. While tsunami victims’ bodies were buried or cremated in countries with fewer tourists, identification teams from more than two dozen countries showed up in Thailand to identify the victims, using techniques ranging from forensic anthropology to genetics. Most of the experts worked on the verdant grounds of a massive Buddhist temple known as Wat Yan Yao.

Quickly, however, a problem emerged: Each team had its own standards for evidence collection. Brendan Harris, a young volunteer from Vancouver, Canada, provided assistance to the teams, heaving waterlogged bodies onto mortuary tables in the first weeks after the tsunami. “There are a lot of arguments going on about how to deal with the bodies,” he said.

Collaboration is Imperative during Crises

Clad in hospital gowns, masked and gloved, the foreign teams at first focused their efforts on Caucasian-appearing bodies. That left Thai forensic scientists and dentists to photograph, examine and take fingerprints and DNA samples from Asian-appearing bodies, or bodies where decomposition had wiped away all traces of race. The result was two separate identification efforts, one foreign and one Thai, proceeding within earshot of each other. A month after the tsunami, the Thai and foreign teams had established completely different computer databases and were not sharing information crucial to identifying the missing. With only roughly 1,000 bodies identified, family members of the missing were distraught.

Ultimately the scientists realized that they had to work together. The foreign teams and the Thai interior ministry formed the Thai Tsunami Victim Identification Center, adopting protocols based on Interpol standards. The Center’s members committed to identifying all recovered bodies, regardless of nationality.

Scientists cautioned that the identification process could take many months, but expressed hope for what had become one of the largest international disaster identification efforts in history. “I have no doubt this will be a very highly successful system,” said DNA expert Ed Huffine, of Bode Technology Group in Springfield, Virginia. “This is developing a world response system to disaster. And it’s beginning a standardization process that uses all forms of forensic evidence, where DNA will play the leading role.”

The Need for a Crisis Response Network

A laboratory in Beijing, China, offered to test all victims’ DNA samples for free. Weeks later, scientists were surprised when the Chinese lab, and eventually several labs in other countries, had difficulty deriving usable DNA profiles from the degraded DNA in tooth samples. By the end of March, more than three months after the tsunami, the Victim Identification Center had put names to only an additional 1,112 bodies, the vast majority of them matched exclusively through dental records. Only three IDs came exclusively from DNA.

Continued disagreements and frequent personnel turnover have plagued the identification center, which insiders refer to as “a mess.” The disappointing experience has pointed out the need for better preparation and coordination among multi-national forensics experts responding to disasters.

Just as the World Health Organization plays a coordinating role for diverse groups of health professionals working in disaster and conflict zones, so, too, an international organization is needed to coordinate disaster victim identification teams. Such a group would be wise to standardize not only technical procedures, but also ethical principles – including the impartial treatment of bodies of all nationalities and races.

Perhaps most importantly, family members of the missing, who have the largest stake in the outcome of identification efforts, should be offered both full access to information and decision-making representation in any future crisis. It is crucial that their preferences and belief systems count.

Also read: The Science Behind a Tsunami’s Destructiveness


Climate Change: A Slow-Motion Tsunami

A road heavily damaged from flooding and tsunamis.

From reducing greenhouse gas emissions to developing reliable sources of renewable energy, scientists are planning for how to deal with the threats brought on by climate change.

Published March 1, 2005

By Christine Van Lenten

Image courtesy of Itxu via stock.adobe.com.

Images of the devastation wrought by the December 2004 tsunami in the Indian Ocean are still indelible in all our minds. Inevitably, that imagery suggests itself in the context of climate change, which poses the threat of natural forces altering our planet – albeit much more slowly – in ways that could be destructive to many forms of life, not least our own.

The tsunami was not, of course, caused by climate change. The imagery is facile, even lurid; we may hope it’s grossly overstated. But it would be a mistake to reject its core significance.

The reasons why, and steps we can take to mitigate and adapt to climate change, were presented by two of the world’s top climate scientists at an event sponsored by the Environmental Sciences Section in December. Dr. Rajendra K. Pachauri, who chairs the authoritative Intergovernmental Panel on Climate Change (IPCC), was the featured speaker.

Dr. James E. Hansen, who directs the NASA Goddard Institute for Space Studies (GISS), served as respondent. Kenneth A. Colburn of the Northeast States for Coordinated Air Use Management (NESCAUM) and Karl S. Michael of the New York Energy Research and Development Authority (NYSERDA) lent regional and state perspectives on how governments and the marketplace are – and aren’t – responding to the challenge of reducing the greenhouse gas (GHG) emissions that are driving global warming.

Give Carbon a Market Value

This quartet of insiders spoke against a backdrop of major events. Pachauri and Colburn were en route to Buenos Aires for the tenth Conference of the Parties to the UN Framework Convention on Climate Change, the treaty process that produced the Kyoto Protocol. On January 1, 2005, the EU market for trading carbon emissions would be launched. That, along with implementation of the Kyoto Protocol in February, would, Colburn predicted, give carbon a market value, and “that will change everything.”

But will change come soon enough to transform a destructively carbon-intensive world into one in which current generations can meet their basic needs without impairing future generations’ ability to meet theirs?

A Monster Problem

Created in 1988 by the UN Environment Programme and the World Meteorological Organization, the IPCC has enlisted scientists and other experts around the globe in shaping and advancing a new body of knowledge about a monster problem of unparalleled complexity. As climate change has evolved from an obscure topic to the mother of all fields, the IPCC has earned and maintained wide respect.

Central to its success are its methods: it proceeds by way of assessments of peer-reviewed, published literature, largely by consensus, and transparently. Central to its methods is the design of scenarios that project a range of future GHG emission levels and resulting physical effects. The scenarios rest on assumptions about energy use, population, and economic activity. Much turns on them: the higher the forecasts, the more urgent the problem, the greater the pressures to act.

Inevitably the scenarios are challenged. “One welcomes debate,” said Pachauri. “But there should be a healthy and objective debate.” The IPCC scenarios have been subjected to “a systematic attack” charging that forecasts are too high because methods are flawed. That attack is unfounded, Pachauri contended, and the IPCC has been responding to it. A natural resource economist, he presented the IPCC’s case. In short, the IPCC relies on capable economists who draw from mainstream sources and employ mainstream methods. Critics who say the IPCC exaggerates future emission levels are “not at all correct.”

Dangerous Trends

What has the IPCC learned? The evidence Pachauri cited that human-induced climate change is already under way is by now familiar and widely accepted: temperatures are rising, glaciers are retreating, snow cover is diminishing, droughts are more frequent and severe. The November 2004 Arctic Climate Impact Assessment reports that the Arctic is warming much faster than anticipated.

Of the GHGs driving climate change, CO2 is by far the worst offender. By 1999 its concentration in the atmosphere had increased 31% since the industrial revolution – to a level perhaps not exceeded during the past 420,000 years and likely not in the past 20 million. It’s expected that fossil fuel burning will continue to drive CO2 emissions during the 21st century. In IPCC model runs, the globally averaged surface temperature increases by 1.4°C to 5.8°C between 1990 and the end of the 21st century. Wintertime temperatures could rise, particularly in northern latitudes. Globally, average water vapor, evaporation, and precipitation could increase, though effects could vary regionally. Sea levels could rise by 0.09 to 0.88 meters.

Complicating the picture is the complexity of the interacting climate, ecological, and socioeconomic systems that are in play, with their feedback loops and indirect as well as direct effects. With systems of such complexity and temperature changes of such magnitude, effects won’t necessarily be linear: there will be discontinuities; there could be abrupt changes; impacts could be severe. Because of the inertia of the climate system, some impacts may only slowly become apparent. Some, Pachauri observed, could continue for decades, centuries, millennia. Some could be irreversible if certain thresholds, not yet understood, are crossed.

The Potential Damage

Large uncertainties are associated with IPCC projections – stemming, for example, from assumptions about economic growth, technology, and substitutions among different forms of energy. But if by the end of this century we end up at the upper end of the ranges forecast, Pachauri cautioned, “the world is in trouble.”

In sketching the likely impacts of climate change, Pachauri drew from the IPCC’s 2001 Third Assessment Report. (The fourth is due in 2007.) Damage could result from changes in precipitation patterns that impact fresh water and food supplies, he said. Growth in crop yields has already slowed due to factors not related to climate; yields could fall. Demand for irrigation water will rise, aggravating scarcities. Water quality will decline. Competition for water is already growing.

Developing countries and the poor in all countries will be hardest hit. A half-billion people in the Indian subcontinent depend for the bulk of their water supply on snow melt from the Himalayas, now threatened. Two-thirds of India’s population lives in rural areas where much farming depends entirely on rain-fed agriculture. Without adequate rainwater, soil conditions will deteriorate; poverty will worsen. Food prices will rise for everyone; huge demand for agricultural products will threaten food security for the entire world. Other likely effects are no less alarming:


– Sea levels will rise as polar ice caps continue to melt and as heat expands water; coastal regions and islands will be inundated.

– Heat waves will kill people.

– Rates of infectious and heat-related respiratory diseases will rise.

– Economies will be dislocated; sustainable development, thwarted.

– All forms of life (so exquisitely temperature-dependent) will be affected, with far-reaching ecological consequences.

Energy Efficiency is Key

Even after levels of CO2 are stabilized, temperatures will continue to rise before they eventually stabilize. Sea levels will continue to rise for much longer. The longer we delay acting, the worse the impacts, the longer they’ll last, the harder it will be to mitigate them.

Stabilizing climate will require a broad range of actions that address not only CO2 but also other climate forcing agents including methane, trace gases, and black carbon (soot) aerosols, according to Jim Hansen, whose research into climate change goes back decades. His congressional testimony in the 1980s helped raise broad awareness of climate change issues.

IPCC business-as-usual scenarios are unrealistically pessimistic, in his opinion; he says a realistic description of CO2 emissions growth is 1.4% per year. However, “I’m more pessimistic than IPCC analyses” with regard to how fast large ice sheets may disintegrate. In order to avoid climate problems, he argues, CO2 emissions need to level off during the next few decades and then decline with the help of new technologies.

But the U.S. Department of Energy projects continued growth in CO2 emissions and energy needs over the next several decades. Who are the biggest U.S. energy consumers? Industrial uses have been fairly constant; the greatest growth is in transportation. Indeed, CO2 emissions from transportation now exceed those from industry. “If you want to flatten out the use of energy, you need to address transportation,” Hansen said.

Within the transportation sector, the lion’s share of emissions comes from automobiles and “light trucks,” a category that includes SUVs, pick-ups, and vans. Growth in emissions comes primarily from light trucks. With gasoline at $2 per gallon, a conservative 4.8 mpg increase in efficiency for light trucks and a 2.8 mpg increase for automobiles would pay for themselves through decreased fuel usage.

Other Measures; Other Means

Implementing this by 2015, Hansen advised, would save, by 2030, more than 3 million barrels of oil each day; integrated over 35 years, four times the amount of oil in the Arctic National Wildlife Refuge. A moderate scenario – more modest than what’s possible with existing technology – would save, by 2030, seven times what’s in the Arctic National Wildlife Refuge. “If you go further it would be possible to get an absolute decrease [in emissions]…So the possibilities of avoiding large climate change are there technologically. We just have to get serious” – and act.
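
Hansen’s comparison can be checked with back-of-the-envelope arithmetic. The short Python sketch below does so; the figure of roughly 10 billion barrels of technically recoverable oil in the Arctic National Wildlife Refuge is an assumption drawn from commonly cited USGS estimates, not a number from the talk.

# Rough check of the oil-savings comparison quoted above.
# Assumption: ~10.4 billion barrels of technically recoverable oil in the
# Arctic National Wildlife Refuge (a commonly cited USGS mean estimate).
savings_bbl_per_day = 3e6      # barrels saved per day by 2030, per Hansen
years = 35                     # integration period cited in the talk
anwr_bbl = 10.4e9              # assumed ANWR reserves, in barrels

total_saved = savings_bbl_per_day * 365 * years
print(f"Total saved: {total_saved / 1e9:.0f} billion barrels")
print(f"Multiple of assumed ANWR reserves: {total_saved / anwr_bbl:.1f}x")
# Prints roughly 38 billion barrels, about 3.7 times the assumed ANWR figure,
# in line with the "four times" quoted above.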

As Pachauri observed, climate change poses equity issues. To date, developed nations have added the largest share of human-generated GHGs to the atmosphere; they hold the most GHG “debt.” Should developing nations have a chance to “catch up?” Who should bear the burden of reducing GHG emissions?

Along with acting to mitigate climate change, we can adapt to it, Pachauri explained, by such measures as developing drought- and salt-tolerant crops; by developing better water-conservation practices and technologies for conserving and desalinating water; by reducing the enormous inefficiencies in the biomass cycle, the developing world’s major form of energy.

But technology by itself is not a fix. “You have to create the policy framework, the social conditions by which technology will be developed and used. We need to redefine technology-related priorities within a global framework.” Toward this end, social scientists’ rigorous analyses of the impacts of climate change are essential. Political implications, too, which tend to be viewed only in terms of negotiating positions, should be examined within an objective academic framework, Pachauri urged.

What Will It Cost?

And economic issues must be squarely addressed: beliefs that slowing climate change will be costly are “fallacious,” Pachauri stated. The IPCC found that stabilizing CO2 concentrations at a reasonable 450 ppm by 2050 would reduce global GDP by only about 4%. Essentially, in a period of healthy economic growth, you merely postpone by a year or so the date by which you reach a certain level of prosperity. And historically, technologies have proved far more efficient and less costly than anticipated, he noted.
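
Pachauri’s “postpone by a year or so” point follows from simple growth arithmetic. The Python sketch below illustrates it; the 3% and 4% annual growth rates are assumptions chosen for illustration, not figures from the talk.

import math

gdp_loss = 0.04                      # ~4% reduction in global GDP, as cited above
for growth in (0.03, 0.04):          # assumed annual growth rates (illustrative)
    delay = math.log(1 / (1 - gdp_loss)) / math.log(1 + growth)
    print(f"At {growth:.0%} growth, a {gdp_loss:.0%} GDP loss delays a given "
          f"level of prosperity by about {delay:.1f} years")
# Prints roughly 1.0 to 1.4 years - "a year or so."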

Moreover, as Hansen remarked, the millions of barrels of oil you could easily save by 2030 are equivalent, at $40 a barrel, to $80 billion a year, “which would, independent of mitigating climate change, do a lot for our economic and national security.”
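
As a quick consistency check (the arithmetic here is mine, not Hansen’s), $80 billion a year at $40 a barrel corresponds to a savings rate in the same range as the efficiency scenarios above:

dollars_per_year = 80e9        # Hansen's figure
price_per_barrel = 40          # assumed oil price from the talk
barrels_per_year = dollars_per_year / price_per_barrel
print(f"{barrels_per_year / 1e9:.0f} billion barrels a year, "
      f"about {barrels_per_year / 365 / 1e6:.1f} million barrels a day")
# Prints 2 billion barrels a year, roughly 5.5 million barrels a day.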

The view that it’s going to be terribly costly to mitigate climate change is shortsighted and irresponsible, Pachauri asserted. “You can’t see mitigation of climate change in isolation from other priorities.” He cited several benchmarks. Worldwide military expenditures for 2004 are estimated at $950 billion. In 2003, aid from donor nations totaled only $68.5 billion, 0.25% of their income; it’s generally believed aid should total around 0.7%. If the World Trade Organization reduces subsidies so that farmers in developing countries can compete on a level playing field, “altogether you’re providing a few hundred billion dollars a year” those countries could use to address climate change.

That is, the resources are there, but “there is unprecedented need for global vision and commitment. Groups like this – the scientific community, the thinkers of the world” – must bring it about.

Leaders and Laggards

The U.S. federal government hasn’t yet acted to regulate GHG emissions, although the United States is the largest GHG emitter in the world. Nor has it ratified the Kyoto Protocol. The United States is not leading technologically or intellectually, charged Ken Colburn, whose organization, NESCAUM, is an association of eight Northeastern states’ air quality regulators who are working to reduce GHG emissions.

China just adopted motor vehicle standards that may ultimately disadvantage the United States technologically, he reported. Australia’s government hasn’t ratified Kyoto, but its states and territories are committed to capping GHG emissions, and they’ll meet the first Kyoto target for emission reductions.

Tony Blair has been progressive on climate. “He’s getting flak from the right for not going far enough.” Germany requires that solar energy be integrated into new buildings; New York City’s proposed building standards don’t. And while New York is the financial capital of the world, the intellectual capital for carbon markets is being built in London. The financial ramifications of the EU’s carbon-trading market will be significant, Colburn predicted.

While the federal government balks, U.S. states are pursuing initiatives. The California Air Resources Board announced fuel-efficiency standards that require a 30% reduction in emissions by 2016. The cost, about $1,000 per vehicle, will be recouped by lower fuel consumption. A coalition of auto manufacturers has mounted a legal challenge. “The worst problem in America relative to fuel efficiency standards? High standards would disadvantage domestic manufacturers…We evidently need to drag them kicking and screaming into technological survival.”

Coast to Coast

Like the Northeastern states, California, Oregon, and Washington – whose fossil-fuel GHG emissions account for 7% of the global total – are pursuing an initiative to develop a cap-and-trade program for CO2 emissions. Some “red states” are putting together climate action plans. With implementation of the Kyoto Protocol and the start of EU carbon trading, Colburn said, “pressure is especially building on the business community.”

Karl Michael, who coordinates New York State’s energy, environmental, and economic modeling and forecasting activities related to energy policy and planning, reported some bright spots on the state level. “Ten years ago…global warming wasn’t on the radar screen,” he recalled. But a state Climate Change Action Plan formulated five years ago rapidly evolved into a statewide GHG task force. “Suddenly, it was…the hottest issue in town.” Today, climate change is one of Governor George Pataki’s highest priorities.

New York adopted the goal of reducing its GHG emissions to 5% below 1990 levels by 2010, and to 10% below by 2020. It imposed a “system benefit charge” on electric bills to fund a variety of programs related to energy efficiency and renewable resources. It required that, by 2013, 25% of electricity used in the state come from renewable resources. “We expect that to mean a lot of windmills.” Roughly 16% now comes from renewable sources, largely hydro projects. “Moving to 25% over a decade is a big commitment,” Michael noted.

Just the Beginning

Another bold venture, initiated by Governor Pataki, is the Regional GHG Initiative (RGGI), a consortium of Northeastern states that’s developing a regional cap-and-trade program for carbon emissions from electricity generators. A model rule each state can use to fashion its own regulations is due out by April 2005.

The governors are saying, “We’re serious about this. Get this done. Come up with something that will work,” Michael said. And they want to help the economy by encouraging the development of new technologies. RGGI expects “to make some real changes” in how electricity is produced in the Northeast. The hope is that the federal government will follow suit, “because this is something the states are clearly way out ahead on.” After emissions from electricity generators are capped, RGGI will turn to other sectors. “We see this as just the beginning.”

Also read: The Dire Climate Change Wakeup Call


About the Author

Christine Van Lenten is a freelance writer who has written about environmental subjects for the Academy, government agencies, and private sector firms.

A Public Good: Accelerating AIDS Vaccine Development

A medical professional wearing rubber gloves and a facemask draws a liquid from a vial using a syringe.

Researchers are making strides in the research and drug development necessary to combat the deadly HIV/AIDS epidemic, but more needs to be done to achieve this goal.

Published January 1, 2005

By Marilynn Larkin

More than 20 years into the HIV/AIDS epidemic, there is still no end in sight to this dreaded disease. Worse, the number of new cases of HIV/AIDS continues to climb, particularly in the less developed world.

In a presentation this July sponsored jointly by The New York Academy of Sciences’ (the Academy’s) Science Alliance and Rockefeller University’s Postdoctoral Association, Seth Berkley, founder, president, and CEO of the International AIDS Vaccine Initiative (IAVI), painted a disturbing picture of the magnitude of the epidemic, underscoring the scientific and advocacy work that needs to be done to quell it.

IAVI is a public–private partnership dedicated to putting an end to the AIDS epidemic. The organization offers financial and technical support to preventive vaccine research and development, serving as an advocate for sound public policy and as a community educator about AIDS and the clinical studies necessary to halt the disease. Scientists working with IAVI are playing vital roles in these endeavors, from basic science to regulatory issues, product management, and communicating with the media and public health officials.

“You need skill sets,” Berkley said. “But, what we really want at IAVI are people who care about the preventive vaccine issue and who are willing to dedicate themselves to trying to drive it forward. Then we match those desires with the career opportunities that are out there.”

A Two-Pronged Approach

A two-pronged approach is needed to deal with the devastation, Berkley said. The first is to focus on the short-term emergency – preventing further spread of the virus, treating individuals who are infected, and mitigating the societal consequences. But he stressed that there also needs to be a long-term view, including creating the tools needed to end the epidemic entirely: female-controlled barrier methods and microbicides; diagnostics to improve treatment and control sexually transmitted diseases; and HIV vaccines. “A preventive vaccine is the only way we’re going to end the epidemic,” he said, “and we should settle for nothing less than ending the epidemic.”

AIDS vaccines are special in that their use would result in an international public good, Berkley emphasized. Simply put, that means the vaccine goes beyond individual protection. The message for policymakers, therefore, is that investing in HIV vaccines makes sense because it affects public health.

Several hurdles must be overcome before this potential global good becomes a global reality, however. For one thing, as drug candidates move from preclinical to phase-1 trials, success rates are low. Moreover, vaccines must be made available at low cost, which makes them less attractive as investments. The result: Today’s market for vaccines is only about 1-2% of the market for pharmaceuticals. For HIV, that market is mostly in the less developed world, where companies are least likely to realize profits.

Lack of Funding

Vaccine development also is hampered by a lack of research funding. Public sector organizations – such as the National Institutes of Health in the United States, the Medical Research Council in the UK, and the ANRS [Agence Nationale de Recherches sur le Sida] in France – usually are national in their outlook and are not necessarily able to take a global view, said Berkley. Hence, the mission of IAVI: Ensure the development of a safe, effective, accessible, preventive HIV vaccine for use throughout the world.

While more than 30 HIV products are moving into trials around the world, the pipeline is duplicative. “Candidates are focused primarily on cell-mediated immunity, with little emphasis on neutralizing antibodies or mucosal vaccines. Also, the time from preclinical studies to market is far, far too slow.”

“So, here’s the take-home message,” said Berkley. “Twenty-three years into the worst viral infectious disease epidemic since the 14th century, only one vaccine candidate has been fully tested to see if it works. That is unbelievable. And with 14,000 new infections daily, speed is of the essence. We have to compress every aspect of vaccine development and access, without compromising safety.”

The challenge is to take the standard timeline, which is 35-plus years, and squeeze it down with parallel track approvals and deployment so that a safe, effective vaccine is licensed in most countries within 10 years after the start of preclinical research, with widespread access in less developed countries in less than 20 years.

Other Challenges

When a product is ultimately confirmed as efficacious, however, other challenges arise. One is pricing. What you want is to have the wealthiest nations pay the most, and the very poor pay as close to manufacturing cost as possible. “The problem here is getting wealthy country policymakers such as the United States Congress and European Union, which are focused on lowering their own domestic health care budgets, to accept that type of differential pricing.” Production poses yet another challenge. “We’ll want massive doses of the candidate produced in a timeline that allows us to immunize people at high risk, especially adolescents.”

“Advocacy, policy change, and scientific progress have to go hand in hand,” Berkley said. “Good science alone won’t get you products. Product development alone won’t get you there. You need to have it all together.”

This statement led to a discussion of how the IAVI has affected the global effort to develop an HIV vaccine. “We started out in 1994 to do something that seemed audacious in its magnitude, and yet there’s no question that we have, in fact, affected the worldwide effort. There are a lot more resources and a lot more attention being paid to a preventive vaccine. However,” he conceded, “the effort is still grossly inadequate.

“If you have a fatal disease, you’ll do anything to get treatment. But when it comes to prevention technology, there isn’t the same cry that there is for treatment.”

The Role of Advocacy

By contrast, Berkley continued, “there’s no movement for AIDS vaccines because when a mother has a child, she says, ‘my child’s not going to be bisexual. My child is never going to experiment with IV drugs.’ Nobody wants to sit there and say, ‘gee my kid may do this,’ and so there’s no advocacy for the creation of a vaccine for the next generation.”

To accomplish this, IAVI forms partnerships with organizations around the world. These organizations help IAVI staff working in their countries to build relationships and to work with the media and the science community. “Thus, for example, when you’re in Germany, you have a German group that has links, speaks the language, understands the system.” The result has been successes in countries such as India, where leaders of two opposing parties both stood up during a conference and stressed the importance of AIDS vaccines.

Does this mean the advocacy effort is a success? Yes and no. On the one hand, global spending on vaccine product development has increased from $125 million in 1994 to about $650 million in 2002. But the five-fold increase in spending is still only a “very small sliver” of HIV/AIDS spending overall, and less than 1% of total health-related R&D spending.

A Global Laboratory Network

Nonetheless, IAVI continues to have a hand in many vaccines currently in development, with more than 25 principal R&D partners working on six major vaccine projects, the Neutralizing Antibody Consortium, and human and non-human core laboratories. IAVI has also created a global laboratory network to ensure standardization of results from lab to lab. Moreover, to further compress the development timeline, IAVI is conducting trials in parallel in different countries.

The result, said Berkley, “is that we were able to bring five vaccines into the clinic in five years. The only other group that was able to do that was the Merck Corporation, which is a huge pharmaceutical company. We’ve also done clinical trials in eight countries, and shown that developing countries can be full partners in this effort. We’ve built their capacity and their leaders. We have good laboratory practices across all of our sites. We’ve done it with relatively small amounts of money, and my hope is that we’re seeing the beginning of a political movement to try to move AIDS vaccines up on the agenda.”

Also read: Antibodies, Vaccines, and Public Health


About the Author

Marilynn Larkin is a contributing editor to The Lancet.

Are Lawyers the Problem with Flu Vaccines?

A medical professional gives a patient a shot/vaccine.

From early discoveries in vaccine development to the anti-vax movement, the industry has changed immensely, and attorneys are playing a more prominent role than ever.

Published January 1, 2005

By William Tucker

“British authorities certainly thought there was a problem with the Chiron Corporation manufacturing – although the company didn’t,” commented Paul A. Offit, MD, who until last year sat on the Centers for Disease Control and Prevention’s Advisory Committee on Immunization Practices. “And the FDA was certainly caught off guard by the British decision. That’s what brought us to the current crisis.

“But when you look at the whole 40-year history of the vaccine industry and how we got to where we are today, you realize that lawsuits and changes in civil law have been a big factor. Profit margins are very thin and if liability costs run too high it just doesn’t make sense to stay in the business.”

Dr. Offit isn’t just making this argument off the cuff. He’s researched a book, The Cutter Incident, which will be published by Yale University Press next year. The Cutter Incident tells the story of an error by a California company in producing the first round of Salk vaccinations in 1955, which led to the accidental exposure of thousands of children to live polio viruses. The parents of a child who contracted polio sued Cutter Laboratories in one of the nation’s earliest medical product liability cases.

“It was 1957 and the courts had just adopted the principle of liability without fault,” said Dr. Offit. “You no longer had to prove negligence. The jury found that Cutter was not negligent – the crisis atmosphere had created a rush and there were no clear standards. But they found Cutter liable anyway, on the principle of liability without fault, and awarded $150,000.” Melvin Belli, who represented the family, always said that this victory made Ralph Nader possible.

An On-Going Problem

Whether liability law would have proceeded without the Cutter incident is an open question. But Dr. Offit makes a strong case that lawsuits have reshaped the vaccine industry, leaving the U.S. in the vulnerable position of having only two manufacturers of flu vaccines, both with roots in other countries. “In 1980 there were 18 American companies making eight different vaccines for childhood diseases,” Dr. Offit said. “Today four companies – Aventis, GlaxoSmithKline, Merck, and Wyeth – make 12 vaccines. Of the 12, seven are made by only one company and only one is made by more than two. There’s no redundancy in the system. Whenever there’s a bump in the road – and it happens fairly frequently – there’s a new shortage.”

Dr. Offit sees the current flu vaccine shortage as only the latest in a long string of incidents. In 2003 there was another flu vaccine shortage and a mini-outbreak when the two vaccine makers and the Centers for Disease Control and Prevention made a wrong guess and decided to cultivate the A/Panama strain of the flu virus. Instead, a rogue A/Fujian strain emerged and vaccine makers were caught short.

Before that, there was a serious shortage of DTP (diphtheria, tetanus, and pertussis) vaccine in 2000, after the FDA decided to ban thimerosal, a mercury-based preservative used in vaccines, because of rumors it caused autism. The vaccine industry is now facing over 300 thimerosal lawsuits claiming damages equal to five times its net income from vaccines, even though there does not seem to be any scientific evidence to back up the claim.

“Fear of Litigation”

“There are some new vaccines that are not being made because of fear of litigation,” said Dr. Offit. “In 1998 the FDA approved a vaccine for Lyme Disease, which strikes 23,000 people a year. GlaxoSmithKline (GSK) manufactured it for three years, but withdrew it immediately after class actions were filed on rumors that it caused arthritis. Companies just don’t want to deal with the threat of lawsuits anymore.”

There is general consensus that the nation’s vaccine base has become too narrow, but diverse opinion about what has led to it. “Vaccines are really a very small business,” said Dr. Gregory A. Poland, director of vaccine research at the Mayo Clinic. “This global market is $6 billion, while the world drug business is $340 billion. Frankly, given the small profit margins, risks, and huge manufacturing costs, I marvel there are still companies in the business.”

One major problem is that flu vaccines are only good for a year. The flu virus circles the globe every 12 months, visiting Asia and the Southern Hemisphere before returning to the U.S. for the winter. On the way a few surface proteins change and the inoculations must be adjusted accordingly. Every February, the CDC sits down with the vaccine makers – now only Chiron and Aventis – and makes an educated guess at what strain will emerge the following year.

Then everybody makes the same vaccine. This lack of competitive diversity might seem like a weakness in itself, although Dr. Offit believes the system works. “By and large, the CDC has done a pretty good job of picking the right strain,” he said.

Painful Production

The difficulty comes if there’s a mild flu season or for some other reason people don’t want the vaccine. Then the manufacturers have to discard the vaccine and swallow their losses. “I’ve suggested that the government incentivize private industry by negotiating a fair price for a major portion of each year’s production and then promising to buy the last 10% as well,” said Poland. “That would reduce some of the risk.”

Antiquated cultivation techniques are also regarded as a handicap. Influenza vaccine manufacture has changed little since the mid-20th century. Companies such as Aventis produce it in Pennsylvania – at sites dating to the 19th century – in part because that state is one of the nation’s largest egg producers. Viruses are grown by injecting them one by one into fertilized chicken eggs, with each egg yielding three to five doses. From start to finish, manufacturing takes six months and any number of things can go wrong. A bacterial contamination prevented distribution of Chiron’s vaccine this year. There’s plenty of talk about modernizing the system – growing viruses in mammalian cell cultures, for example – but so far nothing has happened.
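
To get a feel for the scale those numbers imply, the short Python sketch below estimates the eggs required for a season’s supply; the 100-million-dose target is an illustrative assumption, not a figure from the article.

doses_target = 100e6                 # illustrative seasonal dose target (assumption)
doses_per_egg = (3, 5)               # yield range cited above
eggs_needed = [doses_target / d for d in doses_per_egg]
print(f"Eggs needed: roughly {min(eggs_needed) / 1e6:.0f} to "
      f"{max(eggs_needed) / 1e6:.0f} million, each inoculated individually "
      f"over a roughly six-month production run")
# Prints roughly 20 to 33 million eggs.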

One problem is the exacting standards set by the FDA. It’s not that safety shouldn’t be observed, but it takes a long time to figure out whether a new method will work. In 2002, Congress heard testimony from Wayne Pisano, executive vice president of Aventis Pasteur North America, the company that will be the nation’s sole supplier of flu vaccine this year.

At the time, Pisano was explaining the previous shortage of DTP vaccines. “The schedule for the removal of thimerosal from the vaccines was decided on without input from industry,” said Pisano. “If changes are required before we can make them and the FDA can approve, shortages will occur. Science, not manufacturing, is the limiting factor in developing new vaccines.”

“More Trouble Than It’s Worth”

Finally, as with any public good, there is a certain amount of free riding. If the flu season is mild, people tend to skip their shots. Just having a lot of other people vaccinated will slow the progress of a virus and provide protection to people who aren’t vaccinated. “People are prepared to spend thousands of dollars a year on a treatment once they contract a disease, but will balk at paying modest sums to prevent it from happening,” complained Pisano.

Yet at the same time, companies are reluctant to have Congress mandate flu vaccines for everyone, since that would probably lead to a low mandated price. “The Childhood Immunization Act of 1992 has led to government purchase of nearly 20% of every year’s output,” said Poland. “But the government’s price barely covers the costs. The only place vaccine companies make money is in the private market.”

All this led David Brown of the Washington Post to write a story claiming that drug companies have abandoned the flu vaccine because it’s “more trouble than it’s worth.”

Fears of Lawsuits

Historically, lawsuits have also played a key role in winnowing down the competitors. When “swine flu” broke out at Fort Dix, N.J., in 1976, Congress decided to inoculate the entire country. It was astonished to find that the insurance companies would not participate. They claimed that lawsuits from people who would inevitably experience bad reactions would wipe out any profit margin.

The Congressional Budget Office went to work and came up with a prediction that of 45 million Americans inoculated, 4,500 would file injury claims, resulting in 90 damage awards totaling $2 million. The insurance companies still refused to bite, so Congress provided the insurance instead.

As Peter Huber recounted in Liability: The Legal Revolution and Its Consequences, the first part of the CBO’s estimate proved to be uncannily accurate. A total of 4,169 damage claims were filed. However, 700 – not 90 – of these suits were successful and the total bill to Congress came to over $100 million, 50 times what the CBO had predicted. The insurance companies knew what they were doing.

Another episode Dr. Offit noted is the pertussis vaccine scare of the 1980s. In 1974, a British researcher published a paper claiming that the whooping cough vaccine had caused seizures in 36 children, leading to 22 cases of epilepsy or mental retardation. Subsequent studies proved the claim to be false, but in the meantime Japan canceled inoculations, resulting in 113 preventable whooping cough deaths. In the United States, 800 pertussis vaccine lawsuits asking a total of $21 million in damages were filed over the next decade. The cost of a vaccination rose from 21 cents to $11.

A Flawed Process

Every American drug company dropped pertussis vaccine except Lederle Laboratories. “The company was punished for its persistence,” Dr. Offit writes in his book. In 1980, Lederle lost a single liability suit for the paralysis of a three-month-old infant – even though there was little evidence implicating the vaccine. The damages were $1.1 million, more than half the company’s gross revenues for sale of the vaccine that year.

“These scares may have no scientific basis, but they tend to take on a life of their own in the courtroom,” said Dr. Offit. Peter Huber’s second book, Galileo’s Revenge, which coined the term “junk science,” had a tremendous impact in cleaning up evidentiary procedures. Plaintiffs no longer win damages on the “impact theory of cancer” or for “chemical AIDS.” But the general problem persists.

To protect the vaccine manufacturers, Congress set up the National Vaccine Injury Compensation Program (NVICP) in 1986. Like Workers’ Compensation, the bill created a national fund to compensate legitimate injuries. In exchange, the injured party gave up the right to sue the manufacturer. Unlike Workers’ Comp, however, the program was not mandatory. Injured parties and their lawyers retained the right to sue if they were not satisfied with the verdict of the National Vaccine Injury Board. “The result has been that the most obvious cases are compensated, while the most unlikely claims go back to court,” said Dr. Offit. “The manufacturers still have to defend themselves.”

The Anti-vax Movement

The thimerosal episode – currently the biggest sword hanging over the vaccine manufacturers – has completely bypassed the National Vaccine Injury process. Thimerosal is a preservative containing slight traces of mercury that has been added to vaccines since the 1930s. In the late 1990s, speculation arose that the mercury exposure might be causing brain damage in infants. This was soon related to what was described as an “epidemic of autism.”

Lawyers quickly circumvented the NVICP by arguing that thimerosal was an additive and not the vaccine itself. At present there are 300 pending lawsuits asking several billion dollars in damages – more than the net worth of the entire vaccine industry. There are now numerous “Vaccine Liberation” organizations and several national directories of law firms looking for clients.

“There have been four large epidemiological studies that have looked for a connection between vaccines and autism and found nothing,” Dr. Offit said. “There’s absolutely no scientific evidence.”

“Back Where We Started”

Try telling that to angry parents saddled with the costs of raising autistic children. In 2001, U.S. Representative Dan Burton, chairman of the House Government Reform Committee, held hearings that widely publicized the claims.

With such passions afoot, there is a serious question of whether childhood vaccination programs can continue to be successful. “People forget that 100 years ago, bacterial and viral infections were the number one cause of death,” said Dr. Offit. “The reason the average lifespan was 45 in 1900 and nearly 80 today is because we’ve been successful in conquering infectious diseases. If people start refusing to take shots – or if the manufacturers will no longer supply them – we’re going to be right back where we started.”

The views and opinions expressed in this article are those of the author and do not necessarily reflect the views or opinions of The New York Academy of Sciences.

Also read: Law Experts Give Advice for Scientific Research

About the Author

William Tucker is a writer for The American Enterprise.

Talking Teaching: A Case for Standardized Testing

A student uses a pencil to fill in a bubble on an exam.

While the United States’ education system is unique in many ways, embracing the proven, standardized testing practices of countries like South Korea can lead to better outcomes for American students.

Published June 1, 2004

By Rosemarie Foster

Image courtesy of Achira22 via stock.adobe.com.

In France it’s the Baccalauréat. In Germany it’s the Abitur. In those countries, these are the standardized exams that every student must pass to graduate high school and attend college. But in the United States there’s no such requirement – at least not on a national level. Only two states have standardized “exit exams” that students must pass before moving on to the next grade or graduating: the Regents Examinations in New York and the North Carolina Testing Program. Despite a public school system that is generally quite good, statistics show that U.S. students lag behind their European and Asian counterparts by as many as four grade levels in such fields as math and science.

“Students in those countries know a lot, lot more. So we’ve got a problem,” asserted John H. Bishop, PhD, associate professor of human resource studies at the School of Industrial and Labor Relations of Cornell University. He is also executive director of the Educational Excellence Alliance, a consortium of 325 high schools that is studying ways to improve school climate and student engagement.

At a meeting of the Education Section at The New York Academy of Sciences (the Academy) in April, Bishop argued that accountability strategies, such as external exit examinations aimed at raising student achievement levels in math and science, do indeed work.

Why Can’t Johnny Do Algebra?

Bishop proposed several reasons for the poor showing of U.S. students in math, science, and reading. The first: lower teaching salaries. “We pay our teachers terribly compared to other countries,” said Bishop, who noted that this is particularly true for high school teachers. A typical high school teacher in Korea makes more than twice as much per hour ($82) as his American colleague ($37). There may, therefore, be less incentive for qualified individuals to teach when they can get better paying jobs elsewhere. Bishop suggested that by raising standards and expectations for teachers and paying them more, we’ll get better teachers, and students will have a greater opportunity to excel.

Reason number two: In the U.S., credentials earned yield immediate rewards from employers, but “employers don’t reward learning as much as is the case abroad,” contended Bishop. Students who learn more than others with the same credentials do not get better jobs that reflect their greater capability and effort when they graduate. It takes a decade for the labor market to discover that they are more productive, and to reward them for their effort. As a result, students are encouraged to do the minimum necessary to get the credentials, and no more.

A third and widespread influence on student performance in the U.S. is pressure by peers against studying. Research has shown that students are more likely to be harassed by their classmates if they are gifted, participate in class, are often seen studying, and spend several hours a day doing homework.

“Getting in with the peer group requires a lot of time. If you’re doing five hours of homework a night, you’re not spending enough time hanging out,” explained Bishop.

Leveling the Playing Field

Why are the studious so unpopular in the U.S.? Athletes are valued more because their success is viewed as an asset to the school. But scholarly students, Bishop maintained, aren’t seen as contributing to the overall good of the school. Indeed, their success only forces others to keep up. Those who harass them, therefore, are trying to bring them down to a lower level, in hopes of dropping the standard.

In Europe and Asia, external exit exams force everyone to do well, explained Bishop. The stakes are higher: Without passing them, students can’t excel and attend university. In an environment where rank is based on achievement on such external exams, students are not competing with each other. Rather, as a group they are all motivated to achieve at a high standard. Data show that the exams work: Countries that require students to pass national external examinations to graduate have higher science and mathematical literacy than nations without these tests.

In the U.S., class rank and grade point average are given more weight. Since these rankings position a student relative to the rest of the class, it behooves the “bullies” to harass hard-working students as a means of advancing their own standing.

Bishop advocates a combination of the GPA and external exams. “The purpose of an external exam is to create good teaching and to engage the students,” he said. Having to give grades encourages the teachers to mentor and motivate their students to do well in class. Adding external exams helps everyone aspire to a common standard that can level the playing field.

Evidence that Testing Works

Data comparing scholastic achievement among U.S. states support Bishop’s contention that standardized testing results in better student performance. End-of-course examinations taken by eighth-grade students in New York and North Carolina are linked to better reading, math, and science literacy, compared to students who didn’t take these exams.

Studies also show that end-of-course exams can increase the likelihood of students going on to college and getting better paying jobs. These tests were especially motivating for C students, who were more likely to go to college if they graduated from a school in a state that required them to pass end-of-course exams. The test had less of an effect on the A students because they probably would have gone to college anyway.

Where Do We Go from Here?


The answer to the question of how to improve our educational system isn’t an easy one. While requiring a student to pass end-of-course exams can certainly help, Bishop contended that other elements of the educational environment need to change, too.

One problem is “out-of-field” teaching. Many of America’s teachers do not have college degrees in the very topics they are teaching. “We have teachers who lack a basic understanding of what they’re trying to teach, and they often screw it up,” asserted Bishop. “You have to know your subject so deeply that you can figure out how to make it interesting.” New York State, which fares well in national rankings of student competence, has one of the lowest rates of out-of-field teaching in the country.

Teachers also need better instruction in how to teach. And they need to be more receptive to what works: Many teachers don’t want to use established teaching techniques because they’re considered “scripted.”

Bishop also supports more basic research in the field of education. “We need to spend the kind of money on research in education that we spend on research seeking a cure for cancer,” he emphasized, acknowledging that the high cost of conducting such studies is often a deterrent.

The news is not all bad: Math and science literacy among American students has increased one to two grade levels in the last several years, but could be even better. “I’m actually amazed at how well our kids do considering the difficulties we start them out with,” said Bishop. “But the good news is that we’ve made marvelous gains.”

Also read: Embracing Globalization in Science Education

For the Public Good: Policy and Science

A night shot of the U.S. Capitol Building in Washington, D.C.

While many conjure images of beakers and Bunsen burners when thinking about science, it’s also important to consider the policy implications.

Published June 1, 2004

By Eric Staeva-Vieira

Image courtesy of Worawat via stock.adobe.com.

Hypotheses are derived; experiments planned; results recorded. But what do the Washington elite think? The New York Academy of Sciences (the Academy) recently spoke with a newly minted Ph.D., Ginny Cox (Weill-Cornell ‘04), about her aspirations to examine the crossroads of science and politics as an AAAS science policy fellow.

Can you tell us about your story?

I came to graduate school after attending Wake Forest University, where I majored in Biology. At Cornell Medical College I joined Dr. Mary Baylies’ lab, where I used Drosophila genetics to study the mechanisms that cells use to communicate with one another. While I enjoyed working in basic science research, I also noticed problems outside the lab: specifically, a growing intellectual divide between policymakers and scientists. I felt the need to become actively involved in the policymaking process and to work to educate policymakers and the public about basic science and its impact on society.

How did you become interested in politics?

Involvement in politics seemed like a natural extension of my interest in policy. One particular event I participated in was a Capitol Hill Day sponsored by the Joint Steering Committee for Science Policy (JSC). During this day, groups of scientists met with Congressional Members and their staffs to increase awareness of biomedical research and the continuing need to support funding for this research. It was an exciting experience to talk to lawmakers about my work while learning more about the lawmaking process that underlies federal biomedical research funding.

In your opinion, what are the major issues for U.S. science policy?

Since globalization has become a driving force in the world economy, U.S. science policy also must extend beyond its borders. Improving vaccines and treatments for diseases that impact developing countries should be a central concern because improving health among the global poor has direct consequences for political stability in those countries. Closer to home, we need better policies concerning human embryonic stem cell usage and non-reproductive cloning. Laws passed in New Jersey and California have opened the door to state-by-state funding for human embryonic stem cell research, but more states need to pass such legislation.

Also, now that we are in the genomic era, scientists and policymakers need to unite to improve public education with regard to genetic testing. Much of the fear associated with genetic testing could be removed by putting better protective measures in place to safeguard an individual’s genetic information and to inform people of both the benefits and limitations of genetic testing.

It was once remarked: “Scientists best serve public policy by living within the ethics of science, not those of politics. If the scientific community will not unfrock the charlatans, the public will not discern the difference — science and the nation will suffer.” What are your thoughts on this statement?

Dr. Ginny Cox

Science and the nation will suffer more if scientists abstain from the public policy debate. Accompanying an increase in the complexity of technology has been an increase in the complexity of arguments about how to best regulate it. Those individuals who best understand the technology — scientists — have a responsibility to educate the public and lawmakers as to the basic principles of this technology. By distilling these complicated scientific issues to a more understandable level, we can arm policymakers with the facts, allowing them to make the best decisions possible.

How can scientists best serve the public?

By staying informed about socially contentious issues in their fields, and by reaching out to everyday people to answer their questions about scientific issues. Last year I met a pair of businessmen while I was staying at the Chicago Sheraton during the annual Drosophila Research Conference. They wanted to know why thousands of people were meeting to discuss fruit flies.

I explained to them that many of the first insights about the genetic basis for embryonic patterning had come from flies, and that new discoveries in such diverse fields as stem cell biology and neuroscience continue to be made using flies. By taking the time to explain our research to people, we can make science more accessible on an individual basis and dispel those mad scientist myths.

Also read: What Makes Science of Interest to the Public?

An Interview with Scientist Dr. Cindy Jo Arrigo

A scientist examines a sample under a microscope.

Dr. Cindy Jo Arrigo discusses her decision to become a research scientist, why she got involved with the National Postdoctoral Association, challenges facing female scientists, solutions to those challenges, and more.

Published March 1, 2004

By Eric Staeva-Vieira

Image courtesy of angellodeco via stock.adobe.com.

What/who influenced your decision to become a research scientist?

I’ve been looking under rocks since I was a little kid. Discovering new things, following leads, and learning about how things work has always thrilled me, so science and then research were natural choices. What influenced me to actually go into science were my experiences as an undergraduate student. At New Jersey City University, where I studied, I didn’t learn how to do research. Instead, I learned about science, how to think and how to go after what I wanted. Science was what I wanted and the rigorous undergraduate training I got there and at graduate school allowed the rest to happen.

It was also during my time as an undergraduate that I had my first “celebrity scientist” experience: I met Dr. Richard Smalley of Rice University at an American Chemical Society meeting. Smalley, who later went on to win the Nobel Prize in Chemistry for his work, was the first to describe a new form of carbon. His “buckyballs” and stories stayed with me long after the memories of the banquet had faded. My most recent inspiring “celebrity scientist” moment took place in the halls of The New York Academy of Sciences, where I got a chance to talk with the Academy Chair and Nobel Laureate Dr. Torsten Wiesel.

Can you tell us about your story?

I was born in the Great Midwest to parents who would eventually carry their children all around the world following military directives, my father being a non-commissioned officer in the United States Air Force. Following my father’s lead I joined the Air Force, where I met my husband. We later settled in New Jersey to begin our family.

There was never a question about whether one of us would stay at home with our children – the only question was which one of us. I won and was a stay-at-home parent until our two children, now college-aged, were in school full-time. The idea was that there would be plenty of time to “catch up,” but very little time for the children to be young. Catch up I did, first with a BS, a significant milestone since I am the first person in my family to graduate from college, then with a PhD, the entrée into the academic research world.

Very quickly I was able to secure an individual postdoctoral National Research Service Award training fellowship from the National Institute of General Medical Sciences (NIGMS). Since that time I have worked with Dr. Michael B. Mathews, professor and chair of the Department of Biochemistry and Molecular Biology at New Jersey Medical School, on a project that combines viral research with proteomics technologies.

Emerging technologies are essential drivers of scientific progress, and the coupling of proteomics to virology has the very real potential to open up new avenues of understanding about not only viral progression but also the innate antiviral response. At the University of Medicine and Dentistry of New Jersey (UMDNJ) and in the Mathews lab, we share a rich scientific and mentoring environment. My own experiences in science have been all that I would have asked for and more.

How did you become involved in the National Postdoctoral Association?

UMDNJ, where I am a postdoctoral appointee, was progressive about postdoctoral issues very early on. Dr. Henry Brezenoff, dean of the Graduate School of Biomedical Sciences and director of the UMDNJ Office of Postdoctoral Affairs, was eager to facilitate grassroots postdoctoral association efforts on our campus. He had distributed promotional material on the 2003 National Postdoctoral Association inaugural meeting in Berkeley, California. He also sent along best wishes and the promise of matching funds for any successful NPA travel award recipient. I received the travel award, and UMDNJ finally got its postdoctoral association.

What moved me to become active in the NPA, though, was the climate for national change that was evident at the Berkeley meeting. I joined the NPA Policy Committee and helped author the NPA White Papers to the NIH, milestone documents containing the NPA’s recommendations for national postdoctoral policy. The NIH listened and, if the Advisory Committee to the Director meeting that I recently attended in Bethesda is any indication, it’s a great time both to be doing science and to be a postdoctoral scientist.

The NPA 2004 Annual Meeting in Washington, D.C., April 16–17, promises to be as exciting and momentous as the Berkeley meeting was. Naturally, all postdoctoral scientists, their allies, and even their foes are invited to participate in Envisioning and Creating the 21st Century Postdoctoral Experience. Who knows, a travel award might even launch the next NPA leader.

Torsten Wiesel (left) and Cindy Jo Arrigo.

What are the biggest challenges facing female scientists?

The biggest challenge facing many new scientists, regardless of gender, is how to honor growing family commitments at the very time when the most is also expected of them as scientists. Balancing career and family has never been easy, but the demands on research scientists in the early part of their careers make it especially challenging. For the biomedical postdoctoral scientist the issue is more nuanced, since our field has, in the recent past at least, been characterized by very long training periods often spent in positions with low pay and insufficient benefits. Women, especially, may well ask, “How long are we supposed to wait to start a family?”

Another important and long-standing challenge has been finding appropriate mentors at all levels for female scientists. We know that those who say they were mentored fare far better than those who say they were not. The bottom line is that mentoring is essential for all new scientists, and it may be especially important for women in academic research, where gender inequity is often strikingly evident.

Any solutions in mind?

Transitioning to independence is a huge feat, and one that must happen more quickly than it has in the recent past if the U.S. research enterprise is to keep attracting and retaining the best and brightest new talent. Real progress in this area has already begun: funding institutions are smartening up their existing transition awards and crafting new ones that will more effectively ease the transition to independence. Shortening graduate and postdoctoral training periods, providing respectable salaries and benefits (including decent health care and retirement), and offering part-time postdoctoral options are strategies that may also keep new scientists from having to choose between family and independent science.

Significant disparities exist in the quality and quantity of mentoring within the sciences in general; we already know this. Real solutions are to train faculty to be not only good researchers but also good mentors, and to create opportunities for institutions and renowned scientific societies to participate in the mentoring process. Case in point: the Academy’s Science Alliance. When mentoring is valued at the national and institutional levels, it shows. And as for more women mentors in science at all levels: we are getting there, one postdoctoral scientist, one faculty member, one department chair, and one university president at a time.

Where do you want to go from here?

There are still a lot more rocks to look under. I intend to continue to make good use of my postdoctoral experience and the investment that NIGMS and UMDNJ have made in me: to hone my research skills and to make my mark. Only time will tell whether I am selected to stay in the game of research science, but in my case at least, time has always been on my side.

Also read: Women Rising: The Science of Leadership