
Success, Tenacity, and the Aid of Global Colleagues

Nobel Prize winner and long-time Academy member Raymond Davis, Jr., PhD, shares his advice on finding success as a scientist.

Published January 1, 2003

By Dan Van Atta

Raymond Davis, Jr. receives the Medal of Science from President Bush, with Office of Science and Technology Policy Director John “Jack” Marburger looking on. Image courtesy of the National Science Foundation.

Curiosity, a keen focus, teamwork, and the tenacity to never stop searching for solutions: These are among the qualities that Raymond Davis, Jr., Ph.D., credits with contributing to his long and highly successful career as a physical chemist.

A long-time member of The New York Academy of Sciences (the Academy) and contributor to the Annals of the New York Academy of Sciences, Davis was awarded the Nobel Prize in Physics last month for detecting solar neutrinos – ghostlike particles produced in the nuclear reactions that power the sun. He shares the prize with Masatoshi Koshiba of Japan and Riccardo Giacconi of the United States.

“Neutrinos are fascinating particles, so tiny and fast that they can pass straight through everything, even the earth itself, without even slowing down,” said Davis.

“I’ve been interested in studying neutrinos since 1948, when I first read about them in a review article by physicist H.R. Crane. Back then, it was a brand-new field of study. It has captivated me for more than a half-century.”

After receiving his BS and MS from the University of Maryland, Davis earned a PhD in physical chemistry from Yale University in 1942. After his 1942-46 years of service in the U.S. Army Air Force and two years at Monsanto Chemical Company, he joined the Brookhaven National Laboratory’s Chemistry Department in 1948. He received tenure in 1956 and was named senior chemist in 1964.

The Neutrino Detector

Davis is recognized for devising a method to detect solar neutrinos based on the theory that the elusive particles produce radioactive argon when they interact with a chlorine nucleus. He constructed his first solar neutrino detector in 1961, 2,300 feet below ground in a limestone mine in Ohio. Later, he mounted a full-scale experiment 4,800 feet underground, at the Homestake Gold Mine in South Dakota.
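
In outline, the reaction Davis exploited (given here in standard textbook notation rather than as a quotation from his papers) is the capture of an electron neutrino by a chlorine-37 nucleus:

\[
\nu_e + {}^{37}\mathrm{Cl} \;\rightarrow\; {}^{37}\mathrm{Ar} + e^-
\]

The handful of radioactive argon-37 atoms produced this way in the detector fluid were periodically extracted and counted to infer the solar neutrino capture rate.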

In research that spanned from 1967 to 1985, Davis consistently found only one-third of the neutrinos that standard theories predicted. His results threw the field of astrophysics into an uproar and, for nearly three decades, physicists tried to resolve the so-called “solar neutrino puzzle.”

Experiments in the 1990s using different detectors around the world eventually confirmed the solar neutrino discrepancy. Davis’ lower-than-expected neutrino detection rate is now accepted by the international science community as evidence that neutrinos have the ability to change from one of the three known neutrino forms into another. This characteristic, called neutrino oscillation, implies that the neutrino has mass, a property that is not included in the current standard model of elementary particles. (In contrast, particles of light, called photons, have zero mass.) Davis’ detector was sensitive to only one form of the neutrino, so he observed less than the expected number of solar neutrinos.
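
As a rough illustration of why oscillation implies mass (a standard two-flavor vacuum formula, not part of Davis’ original analysis), the probability that an electron neutrino of energy E is still an electron neutrino after travelling a distance L is often written

\[
P(\nu_e \rightarrow \nu_e) \;=\; 1 - \sin^2(2\theta)\,
\sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),
\]

which differs from 1 only if the mass-squared difference Δm² is nonzero. Averaged over the long journey from the sun, and with large mixing among the three known flavors, the electron-neutrino survival probability can plausibly drop to roughly the one-third level that Davis measured.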

‘A Lot of Fun’

“I had a lot of fun doing the work,” Davis said, adding that he was “very surprised” when he learned it had earned him the Nobel Prize. “I could never have done it,” he hastened to add, “without the aid of colleagues all over the world.”

Davis said he is especially indebted to colleagues at Brookhaven, from which he retired in 1984 (he retains an appointment in its Chemistry Department as a research collaborator), and at the University of Pennsylvania. Davis moved to Penn in 1985 to continue experiments at the Homestake Gold Mine with Professor Kenneth Lande, and continues his association there as a research professor of physics.

A member of the National Academy of Sciences and the American Academy of Arts and Sciences, Davis has won numerous scientific awards. Among them, most recently, are the 2000 Wolf Prize in Physics, which he shared with Masatoshi Koshiba, of the University of Tokyo, and the 2002 National Medal of Science.

Asked to what singular factor he attributes his remarkable success, Davis responded: “People say I’m tenacious. But I’d also have to say that the atmosphere at Brookhaven gave me the freedom to focus on research that really intrigued me.”

What advice would the accomplished researcher have for today’s generation of young scientists? “I would tell aspiring students and young scientists to find a research topic that really interests them,” Davis said. “When I began my work I was intrigued by the idea of learning something new. The interesting thing about doing new experiments is that you never know what the answer is going to be.”

Also read: Adnan Waly: A Life and Career in Physics

Science and Citizenship: ‘A Matter of Trust’

Public trust in science is a perennial issue, but experts are proposing new methods and approaches aimed at changing this.

Published January 1, 2003

By Jennifer Tang

Image courtesy of RomanR via stock.adobe.com.

Scientists and policymakers now insist that the public must understand science if people are to function as workers, community members and informed citizens in a technological age.

But what does public understanding mean? And what can we do to prepare the public, and particularly the young, for lives of citizenship and social responsibility – as well as success in workplaces that are increasingly shaped by science and technology?

These issues were the focus of the Willard Jacobson Lecture recently given by Dr. Judith A. Ramaley, assistant director for Education & Human Resources at the National Science Foundation. Ramaley, the winner of this year’s Jacobson Award, was honored for her work on mathematics and science education projects.

Public Understanding?

How much do our citizens really “know” about science? According to Ramaley, approximately 20 percent of American adults think they are well informed about new scientific discoveries and technologies, while 25 percent say they understand enough about scientific inquiry to make informed judgments about scientific research reported in the media. About 14 percent admit they pay attention to science and technology policy issues only when a crisis compels their attention.

Ramaley defined what a public understanding of science would encompass: it means paying careful and thoughtful attention to science and technology issues while also recognizing the strengths and limitations of these fields. Scientific literacy involves understanding scientific and technical concepts and vocabulary as well as the use of various sources of such information.

But how well prepared is the public to distinguish valid sources of information from useless or even dangerous misrepresentations?

Developing Public Trust

Dr. Judith A. Ramaley

Surveys show that public trust in science and scientists is highest in times of peace, Ramaley noted. This confidence can waver, however, when a crisis emerges over such controversial subjects as nuclear power, genetic engineering or space exploration.

“People who think that science is a product rather than a messy process of inquiry can become profoundly uncomfortable when they are brought face-to-face with the uncertainties and arguments at the frontiers of science,” she observed. “When people are fearful, they want simple answers to emotionally laden questions, preferring the opinions of their friends or trusted advisors over the information provided by scientists.”

How, then, can we increase the public’s trust in the scientific community? The UK’s public outreach effort was cited as a model. The Citizen Foresight project, launched by the London Centre for Governance, Innovation and Science, offered citizens an opportunity to meet with scientists. British citizens, selected at random, met every week to explore not only the “facts,” but also the deeper ethical and emotional issues associated with questions about food supplies and agricultural technologies.

“The British have learned that public trust and confidence cannot be gained simply through providing information about science, but by direct dialogue and discussion about the issues,” she observed. “Scientific knowledge must be grounded in a moral and ethical foundation that is seen as legitimate by the public and is accepted as responsive to their needs and interests.”

Science for Everyone

How science is taught in the schools also is vital to promoting a public understanding of science. “Students can best learn how science is done by doing genuine scientific inquiry,” she said.

Science also can be made appealing to students if they view science as being connected to their own lives and interests. “When science is meaningfully connected to things that young people care about, it becomes an experience rather than a product to be memorized,” she added.

In addition, schools should integrate scientific exploration with other disciplines so that students can see how science contributes to understanding in any field, and how other fields contribute to science. “Science is for everybody,” Ramaley said. She recommends a curriculum in which disciplines that foster creative and critical thinking – such as language and literature, history, the arts and foreign languages – predominate.

Understanding science, however, poses a mental challenge. “New knowledge can only be absorbed and put in context if the participant can uncover older, ‘untrue,’ knowledge and discard it,” she said. “If during our education, we are never required to examine those deeper assumptions, acquired early and applied without thought to the challenges of daily life, we will not be responsive to the insights and knowledge generated by any discipline, including the sciences and mathematics.”

Also read: Building Trust Through Transparency in Biorisk Management

Challenging Female Stereotypes in STEM

A new book explores the stereotypes that women overcame, and the accomplishments they achieved, while contributing to the war effort in WWII.

Published January 1, 2003

By Jeffrey Penn

A colorized photo of a real-life “Rosie the Riveter.” Image courtesy of U.S. Library of Congress via Wikimedia Commons. Public Domain.

Advertising and other visual images during the past century have helped shape and challenge prevailing stereotypes about the role of women at home and in society, according to a social historian who recently addressed a gathering at The New York Academy of Sciences (the Academy) on the subject of “Woman and the Machine: Changing Images.”

“These contrasting images reveal signs of ambivalence in deeply felt social attitudes about women’s roles and technical abilities,” said Julie Wosk, professor of Art History, English, and Studio Painting at SUNY Maritime College and author of the recently published Women and the Machine: Representations From the Spinning Wheel to the Electronic Age (Johns Hopkins University Press).

Breaking Old Frameworks

It was recognized soon after new machines and technologies became widespread following the Industrial Revolution that the breaking of old frameworks could have a disorienting effect on people. “In early images that anxiety was often expressed visually in people being confused or torn apart by exploding steam-powered machines,” Wosk said.

Commenting on a series of slides, Wosk noted that many of the early images portrayed machines as the tools that could liberate women from the drudgery associated with the manual labor of domestic life. “Machines and technology have often been sold as liberating to women,” she said, “but there also has been an enslaving of women.” New electrical appliances, for example, were supposed to emancipate women from housework. “But there were often heightened expectations about increased cleanliness,” Wosk said, “and a belief that the new appliances would permit women to do even more work.”

Although some images challenged stereotyped assumptions about the relationship of women to machines, just as many used women as mere decorations or sentimental and romantic adornments to whatever was being marketed. “Women were early portrayed as childlike and naive, requiring simple machines in contrast to men, whose sphere was assumed to be machines and technology,” Wosk said. “Women were often portrayed as aghast at machines, technologically challenged, forlorn and baffled.”

The “Rosie the Riveter” poster. Image courtesy of U.S. Library of Congress. Public Domain.

Riding Old Assumptions

In early advertisements and motion pictures associated with electricity and electrical devices, women often appeared as “daffy and fearful,” Wosk noted, “or, occasionally, as electrically created facsimiles of females compliant to men.” There were, however, some positive female images in early advertisements related to electricity. But the ambivalence was still there, Wosk suggested, seen in the notion that gas engine automobiles were masculine and electric automobiles were especially suited for women because they were clean and easy to operate.

More than in any other advertising genre, visual images related to transportation – particularly bicycles, automobiles and airplanes – have both supported and challenged conventional assumptions about the role of women, Wosk said.

Early bicycle advertising included images of women, but the invention of the safety bicycle in the 1890s “contributed most to the idea that women could be fully independent and mobile,” Wosk said. “A bicycle-riding craze began because bicycles were lighter, more stable, and the closed gears permitted women to ride bikes without their skirts getting caught. The invention of coaster brakes and a drop-frame bicycle for women also encouraged them to take up bike riding.”

Even though many images portrayed women on bicycles, they often contained a subtle suggestion. In satiric stereoscopic photos, she said, “You often see men in the background looking nervous that women might just ride away from their responsibilities at home.”

The advent of automobiles, however, helped women refute stereotypes that they were inept, she said. Female images were increasingly used to market the vehicles, and magazine photos included portrayals of so-called “flappers” displaying their sense of independence in cars.

A Cultural Ambivalence

Peggy Bridgeman at the left demonstrates to Ruth Harris the correct technique while their instructor, Lee Fiscus, looks on attentively, in the Gary plant of the Tubular Alloy Steel Corporation, United States Steel Corporation subsidiary. Peggy is acclaimed by her superiors to be one of the most skilled welders they have had working with them. Image courtesy of U.S. National Archives and Records Administration via Wikimedia Commons. Public Domain.

Again, however, many early images of women with autos revealed a cultural ambivalence. “You often find that women in advertising images are presented as being more interested in the color and upholstery of the interior of cars than in the mechanics of the internal combustion engine,” Wosk said. And she pointed out that artists’ images sometimes supported the notion that women “were harebrained, maniacal drivers.”

The invention of the airplane, Wosk believes, combined with rapid social change during both world wars to transform the image of women in visual and advertising images. “With airplanes there was a sense that women could transcend the earth and the confining cultural notions about women’s lack of technical abilities.” As one early female aviator wrote, “Flying is the only real freedom we are privileged to possess.”

Service During WWI

Although the shift in expectations regarding women during World War II is well documented, Wosk noted that women were recruited to serve as machine tool operators, automobile repairers and workers in airline manufacturing as early as World War I.

“During World War II, women began to redefine their roles and sense of patriotic duty as they learned new jobs vacated by men who entered the military,” Wosk said. Many new images portrayed women in jobs formerly held only by men, including famous renderings of “Rosie the Riveter.” Yet even in those images, “Rosie often was portrayed with a makeup compact in her pocket.” In many of the new images, Wosk said, “women were portrayed as changing their clothes – a practical requirement related to the new jobs they were doing, but also a symbol of transformation.”

After World War II, advertising images attempted to persuade women to revert to their former clothing styles and occupations. “Women were encouraged to become enamored of their home appliances again,” Wosk concluded.

Also read: Celebrating Girls and Women in Science


About Prof. Wosk

Professor Julie Wosk received a B.A. from Washington University in St. Louis (graduating magna cum laude, Phi Beta Kappa), an M.A. from Harvard University and a Ph.D. from the University of Wisconsin. She has twice been a National Endowment for the Humanities Fellow in art history – at Princeton and Columbia University. She is also an artist whose oil paintings and large-format color photographs have been exhibited in New York and Connecticut galleries.

Code to Commodity: Genetics and Art

A new art exhibit at The New York Academy of Sciences explores everything from genetic iconography and gene patents to bioinformation and artificial chromosomes.

Published January 1, 2003

By Dorothy Nelkin and Suzanne Anker

In scientific terms, the gene is no more than a biological structure, a DNA segment that, by specifying the composition of a protein, carries information that promotes the formation of living cells and tissues. However, its cultural meaning – reflected in popular culture and visual art – is independent of its biological definition. The signs and symbols of genetics have become icons expressing numerous issues emerging from the genetic revolution.

Since the late 1980s many contemporary artists have incorporated genetic imagery into their work. Images of chromosomes, double helices and autoradiographs increasingly appear in paintings, sculpture, photography and film. Both scientists and artists use visualizations to explore the hidden meanings in the corporeal body, to probe the deeper world underlying surface manifestations and to comprehend the mysteries of life.

While science and art share a cultural context and draw referents from the same milieu, they are distinct ways of knowing the world. Scientific images reflect the fact that science, aspiring to objectivity, is evidence-based. In contrast, artists are absorbed by subjectivity, seeking a truth based on individual and private perceptions.

The images created by artists, however subjective, are important in bridging the connection between the world of scientific discovery and its cultural interpretation in society. These visualizations are a means to shape and analyze how culture assimilates the issues emerging from the burgeoning genetic revolution and a filter engaging our hopes and fears of a bio-engineered future.

Genetic Iconography

From Code to Commodity: Genetics and Visual Art, a show we have curated for The New York Academy of Sciences’ (the Academy’s) Gallery of Art and Science, addresses two themes that have inspired artists to adopt genetic iconography: DNA as a semiotic sign system and a bio-archive for the commercial patenting of gene sequences. Molecular biology has turned the body into a set of notations as scientists seek to understand the workings of the DNA molecule.

Many artists regard these graphic visions as an aspect of modernism’s abstract legacy, a part of the iconography of the 21st century. Attracted by the concept of the body as “code,” they use the symbols of chromosomes and helices to reflect upon the complex structures of life, the inner domain of the person, and the truth underlying appearances.

In Frank Gillette’s The Broken Code (for Luria) (2002), the artist converts a Gregorian chant into a meditation on mitosis. Olivia Parker’s Torso on Blue (1998) directly addresses the body as code through letter forms imposed on a torso. So does Kevin Clarke. His digital color portrait Eight Pages from the Book of Michael Berger, Page 5 (1999) uses the subject’s own nucleotide sequence, garnered through his blood sample.

The artist then overlays this genetic code on top of Mr. Berger’s collection of robots, bringing together two variants of the sitter’s identity. The emerging world of proteomics is another source of iconography, adopted by Steve Miller in Eat Protein (2002).

Bioinformation and Artificial Chromosomes

Michael Rees generates linguistic sculpture using a sculptural user-interface computer program: by typing a particular sentence into the program, he constructs a pictorial equivalent that can be turned into a prototyped sculpture. Marcia Lyons manipulates her “code” in the Munging Body (1999) series to show future ways in which bioinformation may be used to create living specimens in a variety of shapes.

And Suzanne Anker’s Cyber-Chrome Chromosome (1991) addresses the concept of artificial chromosomes, which geneticists are now beginning to create in their labs.

Other artists are starting to explore an increasingly important aspect of contemporary genetics – its role in the world of commerce. Bryan Crockett’s marble and resin sculptures employ the motif of genetically altered mice as instruments in science. In Frank Moore’s Index Study (2001), the commercial icon Mickey Mouse appears on a fingernail emerging from a double helix.

Ellen Levy addresses the issue of patenting life forms as an extension of the routine pattern of commodifying inventions. For the Storey sisters, high fashion meets high technology in a set of dresses conceived from images of fetal development and cellular script. Concerns about the way the body and its genetic materials have been mined and patented, bought and sold, banked and exchanged as commodities are expressed in Larry Miller’s conceptual copyright certificates. And for Natalie Jeremijenko, the cost/benefit analysis of IVF is rhetorically and visually addressed in her media installation.

Public Concern Over Gene Patents

The implications of gene patents – for privacy as well as the protection of patients and human subjects of research and the exchange of information – are emerging as public concerns in the molecular age. This also is reflected in contemporary art.

This Academy exhibition is intended to raise several questions: Is bio-information just another commodity? Should the body become a bio-archive? What are the implications for using the body as a source of coded information for personal privacy, identity and corporeal integrity?

An extended analysis, including numerous illustrations, can be found in our forthcoming book, The Molecular Gaze: Art in the Age of Genetics (New York: Cold Spring Harbor Laboratory Press, 2003).

Also read: The Art and Science of Human Facial Perception

85 Cents at a Time: Saving Lives and Fighting HIV

After diagnosing the first pediatric case of HIV in the United States, Dr. Ammann has devoted much of his professional life to combating this deadly virus.

Published November 1, 2002

By Fred Moreno, Dan Van Atta, and Jennifer Tang

Image courtesy of salomonus_ via stock.adobe.com.

More than 2,000 infants around the world are infected with HIV every day. In sub-Saharan Africa alone, up to 46 percent of pregnant women carry the virus, and some 25 to 35 percent of their children will be born infected.

Arthur J. Ammann, MD, is succeeding in improving those statistics. As President of Global Strategies for HIV Prevention, Ammann oversees the Save a Life program, which provides HIV testing and medication to prevent HIV transmission from pregnant women to their infants in Africa, Asia and South America.

At the heart of the program is the antiretroviral drug nevirapine. Giving a single tablet of nevirapine to a woman during labor and delivery together with a single dose of nevirapine syrup to her newborn reduces HIV transmission by 50 percent. Moreover, in many countries the cost of treatment is as little as 85 cents for both mother and child. The program has helped some 50,000 women and infants in more than 72 hospitals in 18 nations. Save a Life also provides antibiotics to prevent opportunistic infections in HIV-infected women.
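
A back-of-envelope calculation (based only on the figures quoted above, counting drug costs alone, and not a number from Global Strategies) conveys the scale: even if every one of the roughly 50,000 women and infants reached so far is counted as a separate 85-cent treatment,

\[
50{,}000 \times \$0.85 \;=\; \$42{,}500,
\]

an upper bound on the nevirapine cost of the entire effort to date.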

Obtaining and Administering Nevirapine

Global Strategies makes it easier for start-up programs in developing countries to obtain and administer nevirapine for this use. “They just tell us what they do and how much they need,” explains Ammann. “This is especially helpful for small programs that have the infrastructure to test women and give the drugs, but which may be waiting for additional funding from larger organizations.”

Ammann’s commitment to helping women and children with HIV began some two decades ago. As a professor of Pediatric Immunology at the University of California, San Francisco (where he is still on the faculty), Ammann and his colleagues diagnosed the first child with HIV in this country. The epidemic grew, and in 1987 AZT was introduced as the first anti-HIV drug.

In 1994, a landmark study showed that giving AZT to pregnant women could prevent transmission of the virus to newborns. Thanks to AZT, the number of new pediatric AIDS cases in the United States and Europe plummeted from 2,000 per year to fewer than 200. “However, that remarkable success story was paralleled by a lack of success in developing countries,” notes Ammann, “where 1,800 children are born with HIV every day.”

HIV Treatment

So, in 1997 Ammann founded Global Strategies. Through a series of international conferences held every two years, and with the assistance of organizations such as the Elizabeth Glaser Pediatric AIDS Foundation, Global Strategies has called on nations to immediately implement countrywide programs to prevent HIV infection of infants, identify HIV-infected women, and provide treatment for children and mothers with HIV. One major step in that direction is the production and distribution of more than 30,000 copies of an educational CD-ROM.

While Save a Life is clearly rescuing the futures of thousands of infants, Ammann notes that challenges remain. Programs to continue drug treatment of HIV-infected women, as well as their sexual partners, require further development. A new drug that could be used when HIV eventually develops resistance to nevirapine remains to be found. And educational opportunities and support for children orphaned by AIDS need to be expanded.

In the meantime, counseling is becoming more available to women without HIV, so they remain uninfected. “We’re working at the end of the process, the point where HIV infection has already occurred,” says Ammann. “Where we want to go is the beginning, to keep the infection from happening in the first place. Then all those other problems would go away.”

Also read: Improving Women’s Health: HIV, Contraception, Cervical Cancer, and Schistosomiasis

Environmental Catastrophe or New Global Ecology?

With the population of urban areas expected to grow substantially in coming decades, researchers are pondering ways to plan with climate change in mind.

Published November 1, 2002

By Margaret W. Crane

Image courtesy of .shock via stock.adobe.com.

In 2007, for the first time in history, the number of people living in cities will equal the number of rural dwellers, according to the most recent report of the United Nations Population Division of the Department of Economic and Social Affairs. Virtually all of the world’s anticipated population growth during the next 30 years will be concentrated in urban areas. And almost all of that growth will take place in less-developed regions.

The urbanization of early 19th-century Europe begins to look like a modest blip compared to the unplanned, unchecked growth of cities in the developing world today. Between 2000 and 2010, cities in Africa will have grown by another 100 million people, while those in Asia will have swelled by 340 million. Taken together, that’s the equivalent of adding another Hong Kong, Teheran, Chicago or Bangkok every two months.
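
A quick check of that comparison, using the figures in the preceding sentence:

\[
\frac{(100 + 340)\ \text{million people}}{10\ \text{years} \times 6\ \text{two-month periods per year}} \;\approx\; 7.3\ \text{million people per two months},
\]

which is indeed on the order of the population of each of those cities.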

Concerned about the coming dominance of urban areas over the world’s environment, a small but growing number of scientists have begun to focus on the city itself as simultaneous driver and subject of environmental change. In their view, the sheer quantity of people piling into cities calls for a shift of focus away from issues related to the physical environment alone and toward a more integrated approach to the broad question of urban ecology.

A New Vocabulary and Conceptual Framework

Roberta Balstad Miller, PhD, director of Columbia University’s Center for International Earth Science Information Network (CIESIN), believes scientists need a new vocabulary and a new conceptual framework to tackle the complex interplay among physical environmental change, mushrooming cities, poverty and rising human expectations across the globe.

“We already know a great deal about each discrete sector in the urban environmental mix,” said Miller. “Beyond atmospheres, oceans and the natural historical origins of environmental change, scientists also have investigated the interlocking issues of clean water, waste disposal, energy and land use. What we haven’t done is connect the dots that will allow us to respond to the big picture: How can cities become less vulnerable to environmental stressors? What can we learn from the environmental successes as well as the environmental problems of the great 20th-century metropolises?”

These questions form the backdrop of a new research project at the Earth Institute of Columbia University – provisionally called the Twenty-First Century Cities Project – that will examine environment and sustainable development issues in major cities worldwide. The project will focus initially on four cities: Fortaleza, Brazil; Accra, Ghana; Chennai, India; and New York, United States.

“We’re keeping New York in the mix,” said Dr. Balstad Miller, “because it affords an opportunity to study the impact of rapid urban growth over a long period of time, and also because there is so much research on the environment of New York under way at Columbia.”

Toward Sustainable Cities

The Brundtland report (Our Common Future, 1987) defined sustainable development, the theme of this summer’s Johannesburg Summit, as development that meets the needs of the present without compromising the ability to satisfy the requirements of future generations.

It’s a concept most governments agree on in principle. But with cities expanding at the rate of 10 percent per year, largely owing to massive migration fueled by poverty and conflicts in rural areas, sustainability can look like a remote ideal instead of a real-world possibility. In Johannesburg, 100 world leaders and nearly 50,000 delegates turned their energies to the challenge of bringing sustainable development back down to earth.

The Summit’s participants queried the model of urban development based on automobile-driven sprawl. They asked themselves whether it is possible for new cities laboring under a chronic shortage of resources to develop sewage and waste disposal systems in time to prevent serious outbreaks of communicable disease. They looked at the plight of unemployed urban youth and the need to find ways to cool down the social tinderbox of frustration and poverty. And they discussed the strengthening of governance – the management of society – to help smooth the expansion of cities and check chaos.

In a speech to the Megacities Foundation, British architect Lord Richard Rogers said that, above all, cities must be a vehicle for social inclusion. “This is no utopian vision,” he said. “Cities that are beautiful, safe and equitable are within our grasp.”

The Role of Sustainability

Utopian or not, the question of sustainability colors Balstad Miller’s research, and is the ultimate motivation behind the Twenty-First Century Cities Project. “Ecosystems are being bisected by highways,” she said. “Forests, wetlands and prime agricultural lands are being lost to urban development. Less land is available for indigenous animal and plant populations, whose genetic diversity is at risk. And yet we can’t halt urban growth. We need to develop sustainable approaches to a process that’s not about to go away.”

Balstad Miller, an urban historian, studies cities at three levels: The environment of the city itself, exemplified by the quality of its air, water and sanitation systems; the environment of the region, such as the city’s impact on regional weather patterns and its surrounding forested and agricultural areas; and global networks of cities as the nexus of decision-making, economic integration, and growth.

Oddly enough, she added, the real demographic story isn’t taking place in megacities like Tokyo, Mexico City, Mumbai and Sao Paulo. The number of cities with 1 million or more inhabitants grew from 80 in 1950 to more than 300 by 1990, and is projected to reach 500 by 2010. Most of the world’s urban population actually lives in the 40,000-50,000 urban centers with fewer than 1 million inhabitants, according to the United Nations Centre for Human Settlements. These urban agglomerations are a relatively new subject for those who study the complex relationship between environment and urban development. What these scientists learn may be crucial for our common future.

Also read: The Impact of Climate Change on Urban Environments


About Dr. Roberta Balstad Miller

Roberta Balstad Miller, PhD, is a senior research scientist at Columbia University and director of the University’s Center for International Earth Science Information Network (CIESIN). Dr. Miller has published extensively on science policy, information technology and scientific research, and the role of the social sciences in understanding global environmental change.

As chair of the National Research Council’s Steering Committee on Space Applications and Commercialization, she recently completed two book-length reports on public-private partnerships in remote sensing and on government use of this new technology. In addition to her many research interests, she is a published translator of the poetry of Jorge Luis Borges and N.P. van Wyk Louw. Dr. Miller was recently elected a Fellow of The New York Academy of Sciences.

The Impact of Climate Change on Urban Environments

New York City and the tri-state region provide a unique case study for examining the impact of climate change within the context of an urban environment.

Published November 1, 2002

By Margaret W. Crane

In Alaska the average temperature has risen by 5.4 degrees Fahrenheit over the past 30 years, and entire villages are being forced to move inland because of rising sea levels. El Niño – a disruption of the ocean-atmosphere system in the tropical Pacific – has been linked with multiple epidemics of dengue fever, malaria and cholera. Flowers in the northern hemisphere are blooming in January. Greenland’s glaciers are melting. The world’s ecosystems are in the throes of rapid transformation. And large, coastal cities are among the most vulnerable of all.

Global by definition, climate change has already begun to reshape the earth’s environment from pole to pole and from tundra to rainforest. But until recently few scientists had studied its impact on cities. Cynthia Rosenzweig, PhD, the lead author of a recent report titled Climate Change and a Global City, is among the first to look at cities – specifically New York and its environs – as distinct ecosystems that are being remodeled by global warming as relentlessly as are distant oceans, islands, forests and farmlands.

At the Forefront of Vulnerability to Climate Change

Rosenzweig and co-author William D. Solecki place global cities like New York, Sao Paulo, London and Tokyo at the forefront of vulnerability to climate change. As such, the world’s largest cities are charged with finding ways to adapt to changes that have already occurred and simultaneously reduce the greenhouse gases that are a major factor in heating up the globe in the first place.

“Global warming is on the cusp of becoming a mainstream issue,” said Rosenzweig, “an issue that’s being integrated into the day-to-day life of citizens.” This mainstreaming is emerging in tandem with a stronger-than-ever consensus among scientists that climate change has arrived and has two faces: an overall warming trend, and more frequent and severe droughts and floods. Moreover, instead of hypothesizing about global warming, researchers are now studying its effects and developing models to project the course and intensity of future changes.

To map the trajectory of projected climate change, scientists are using global climate models (GCMs), mathematical formulations of the processes – such as radiation, energy transfer by winds, cloud formation, evaporation and precipitation, and transport of heat by ocean currents – that comprise the climate system. These equations are then solved for the atmosphere, land surface, and oceans over the entire globe.
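
As a deliberately oversimplified sketch of the kind of physics such models encode (a zero-dimensional energy-balance toy, nothing like the coupled three-dimensional equations a GCM actually solves), the globally averaged surface temperature T can be evolved by balancing absorbed sunlight against outgoing radiation:

\[
C\,\frac{dT}{dt} \;=\; \frac{S_0}{4}\,(1-\alpha) \;-\; \epsilon\,\sigma\,T^4,
\]

where S_0 is the solar constant, α the planetary albedo, σ the Stefan-Boltzmann constant, C an effective heat capacity, and ε an effective emissivity that added greenhouse gases reduce, warming the equilibrium temperature. A GCM replaces this single equation with millions of grid-point equations for winds, clouds, ocean currents and land-surface processes.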

Because GCMs take into account increasing feedbacks from greenhouse gases, they project more dramatic temperature changes than do predictions based on current warming trends alone. New York’s GCM-projected temperature in the 2080s, therefore, will be from 4.4 to 10.2 degrees Fahrenheit higher. Rosenzweig and Solecki predict a more modest 2.5 F rise by the 2080s, based on current temperature trends alone minus any multiplier effect associated with greenhouse gases.

Interchange Between Scientists and Decision-Makers

The Climate Change report draws on a range of GCMs, but it also benefits from a rich interchange between scientists and decision-makers. In grappling with the complexity of the urban ecosystem, the two groups developed an innovative conceptual framework comprising three basic, intersecting elements: People, Place, and Pulse. These three P’s correspond to socio-economic conditions, physical and ecological systems, and decision-making and economic activities. “Pulse, a term we coined, is really about what makes a city a city,” said Rosenzweig. “In the past few years, I’ve gotten on familiar terms with New York’s pulse, defined roughly as the whole matrix of relationships that makes it run and hum.”

Rosenzweig’s focus on the New York region began when she was chosen to head up the Metropolitan East Coast (MEC) Regional Assessment, part of a national effort to assess the potential consequences of growing climatic instability and the engine behind the Climate Change report.

The New York metropolitan region is unique, Rosenzweig said, due to the extraordinary density and diversity of its population. Comprising five boroughs and 26 adjacent counties in New York, New Jersey and Connecticut, it is home to a complex web of environmental problems and pressures. The area’s high demand for energy and clean water, its poor air quality, toxic waste dumps and threatened wetlands are all interconnected, according to the Assessment, and call for a many-sided response.

Rising Sea Levels and Floods

For instance, New York will likely need to build higher seawalls and raise airport runways to protect against rising sea levels and increasingly severe and frequent floods. City and regional governments will be called upon to increase support for the poor and elderly, who suffer disproportionately from heat stress and respiratory ailments due to the effects of air pollution. Developers will be encouraged to disinvest from highly vulnerable coastal sites. Policymakers will need to think longer-term and learn to cooperate at the regional level. And they’ll have to get serious about reducing greenhouse gas emissions.

But it is New York’s greatest virtue – its diversity – that turns out to be its political stumbling block. The 20 million people who inhabit the area’s boroughs and neighboring counties often represent conflicting agendas. Rosenzweig believes it will take education, training and a good dose of political will to take on global warming.

In recent decades, the MEC region has experienced a marked increase in floods, droughts, heat waves, mild winters and early springs. Its annual average temperature has risen by nearly 2 degrees Fahrenheit, and precipitation levels have gone up slightly. The current rate of sea-level rise is about 0.1 inch per year, a number that is expected to increase with the further melting of glacial ice and the warming of the upper layers of the ocean. The study found that in many scenarios, the sea level is expected to rise faster than the accretion rate of wetlands, further accelerating their disappearance.

Growing Hydrologic Variability

Growing hydrologic variability is another expression of the climate change that has already begun to be felt in the region. This century, the New York area will be subject to more severe flooding during hurricanes and nor’easters. Some scientists have estimated that by the 2080s, as a worst-case scenario, a major coastal storm could occur every three to four years, compared with every 100 years in the past, while a 500-year flooding event could hit every 50 years.
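
Return periods translate directly into annual probabilities (a standard conversion, applied here to the report’s worst-case numbers):

\[
p_{\text{annual}} = \frac{1}{T}: \qquad \frac{1}{100\ \text{years}} = 1\% \quad\text{versus}\quad \frac{1}{3\text{ to }4\ \text{years}} \approx 25\text{ to }33\%,
\]

so a coastal storm that once carried about a 1 percent chance of occurring in any given year would, under that scenario, carry roughly a one-in-three to one-in-four chance each year.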

It has long been the region’s default policy to place transportation and other necessary but unappealing infrastructure across and along the edges of wetlands, bays and estuaries. For example, the Hackensack Meadowlands in northern New Jersey, a low-elevation, degraded wetland, is home to an airport, port facilities, pipelines and highways. The region will need to move infrastructure inland – a matter of double urgency, Rosenzweig contends, for the sake both of the infrastructure itself and the vulnerable lands that are the first casualty of violent storms.

Climate change, however, swings between extremes of flood and drought. During the summer of 1999, an intense drought may have contributed to the fatal outbreak of West Nile virus. More conspicuously, water conservation campaigns have become a regular feature of New York life. While the New York City water supply system – the largest in the region and one of the largest in the world – should accommodate expected hydrologic extremes, the report warns that smaller systems within the MEC region might buckle under stress. Increasingly, water distribution must be addressed intra- and inter-regionally, said Rosenzweig. Future protocols might include diverting Delaware River water from the west to reduce the impact of drought in the New York area, and vice versa.

Multiplicity of Environmental Problems

Demand for electricity also is expected to rise along with mounting temperature. No less than clean and abundant water, the area’s population requires a consistent supply of energy. But the distribution of energy continues to be far from equal. During the intense succession of heat waves over the past several summers, blackouts and brownouts plagued many of New York’s poorer neighborhoods, meaning a loss of air conditioning just when it was most critically needed.

With 27 days of temperatures above 90 degrees Fahrenheit in the summer of 1999 and 28 days over 90 degrees F (including two in September) in the summer of 2002, New Yorkers have had a recent foretaste of what’s in store. According to most climate change scenarios, the average number of days exceeding 90 degrees F (13 days in our present climate) will increase by two to three times by the 2050s.
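
Scaling the present baseline by those factors, and reading “two to three times” as two to three times the current total, gives

\[
13\ \text{days} \times (2\ \text{to}\ 3) \;\approx\; 26\ \text{to}\ 39\ \text{days above } 90^\circ\mathrm{F}\ \text{per summer by the 2050s},
\]

a range that the summers of 1999 and 2002, with 27 and 28 such days, have already brushed against.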

Despite their multiplicity of environmental problems, Rosenzweig believes cities have an important role to play in shaping the earth’s future. “New York has an opportunity to rethink itself as an urban ecosystem,” she said. “For example, we can start to design buildings that are more energy-efficient. We’ll need to find ways to help people stay cooler as they adapt to a warmer environment and to reduce greenhouse gas emissions at the same time.”

The Missing Link

Although scientists have not yet established with absolute certainty the causal link between human activity – especially the burning of fossil fuels – and climate change, they are largely in agreement that such change is under way. The Intergovernmental Panel on Climate Change has concluded that “there is a discernible anthropogenic signal in the climate,” and that this signal is growing. Many uncertainties remain, however, about the rate and ultimate magnitude of the change.

By assessing its nature and extent, monitoring its trajectory, and forecasting its future impact on cities, scientists like Rosenzweig are informing a new public discussion that is just getting off the ground. But there’s no time to waste, she said: “The political and social responses to the global climate issue in cities should begin at once.”

Also read: Tales in New Urban Sustainability


About Dr. Cynthia Rosenzweig

Dr. Cynthia Rosenzweig is a research scientist at the Goddard Institute for Space Studies, where she is the leader of the Climate Impacts Group. She is an adjunct senior research scientist at the Columbia University Earth Institute and an adjunct professor at Barnard College. A recipient of a 2001 Guggenheim Fellowship, Dr. Rosenzweig led the Metropolitan East Coast Region for the U.S. National Assessment of the Potential Consequences of Climate Variability and Change.

She is a lead author of the Intergovernmental Panel on Climate Change Working Group II Third Assessment Report, and has worked on international assessments of climate change impacts, adaptation and vulnerability. Her research focuses on the impacts of environmental change, including increasing carbon dioxide, global warming, and the El Niño-Southern Oscillation, on regional, national and global scales.

‘Free-Radical’ Scientist Recalls Research Journey

Almost 50 years ago, Denham Harman’s theory of aging as a biochemical process started a chain reaction in theoretical medicine.

Published October 1, 2002

By Fred Moreno, Dan Van Atta, and Jennifer Tang

Image courtesy of Khunatorn via stock.adobe.com.

Louis Pasteur once noted: “Chance favors the prepared mind.” Denham Harman’s mind was unusually prepared to develop a notion that took well over a decade to attract any serious attention, but is now a driving force in biomedical research: the free-radical theory of aging, a phrase Harman coined in 1960.

Now professor emeritus at the University of Nebraska Medical Center and still spry at 86, Harman recently edited Annals of the New York Academy of Sciences volume 959, Increasing Life Span: Conventional Measures and Slowing the Innate Aging Process. The volume also includes a recent paper by Harman on Alzheimer’s Disease: Role of Aging in Pathogenesis.

Free radicals are molecules or atoms that feature an unpaired electron. Because electrons prefer to travel in pairs, free radicals can set off chain reactions – their loner electrons cut in on the dance of another molecule’s two electrons in an attempt to grab one. This move satisfies the original unpaired electron, but merely creates a new free radical bent on pairing up.

Thus, like bulls in the china shop of living cells, free radicals, especially the hydroxyl radical, damage delicate cell membranes and muck up proteins whose functions depend on their structure. And the cellular damage wrought by free radicals is the mechanism, according to Harman, of the natural process we take for granted as aging.
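
A textbook example of such a chain, lipid peroxidation in a cell membrane (offered here as a generic illustration rather than a reaction Harman specifically cited), shows how one radical begets another:

\[
\begin{aligned}
\text{initiation:}\quad & \cdot\mathrm{OH} + \mathrm{LH} \;\rightarrow\; \mathrm{H_2O} + \mathrm{L}\cdot\\
\text{propagation:}\quad & \mathrm{L}\cdot + \mathrm{O_2} \;\rightarrow\; \mathrm{LOO}\cdot\\
& \mathrm{LOO}\cdot + \mathrm{L'H} \;\rightarrow\; \mathrm{LOOH} + \mathrm{L'}\cdot
\end{aligned}
\]

Here LH stands for a membrane lipid; each cycle damages another lipid and regenerates a radical, and the chain runs until two radicals happen to meet and terminate it.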

A Circuitous Route

Harman took a circuitous, but in retrospect necessary, route to this conclusion. He was born in 1916 in San Francisco but lived briefly as a boy in New York City, where his father worked for a jewelry company located just blocks from the site of The New York Academy of Sciences (the Academy) on 63rd Street near Fifth Avenue. The family returned to the Bay Area in 1932, and Harman graduated from Berkeley High School two years later. Jobs were scarce, but Harman’s father happened to meet the director of the Shell Development Company, the chemical research division of the Shell Oil Company, at a local tennis club. Harman began working for Shell as a lab assistant.

The position sparked a true interest in chemistry. Harman went on to receive his undergraduate degree and, in 1943, his doctorate in chemistry from the University of California, Berkeley. He continued with Shell the entire time, at first working with lubricating oils, but was then transferred, fortuitously, to the reaction kinetics department, where much of the work concerned free-radical reactions. During seven years there, Harman was instrumental in gaining 35 patents for Shell, including work on the active ingredient of something designed to shorten, not extend, life: the famous “Shell No-Pest Strip.”

Time to Think

In December 1945, Harman’s wife Helen put a bee in his bonnet. “She showed me a magazine article she thought might be of interest. It was a well-written piece by William Laurence of the New York Times about aging research in Russia,” he recalls. Harman knew a lot of chemistry, but not much biochemistry or physiology. And the idea of aging as a biochemical process so fascinated him that in 1949 he decided to attend medical school. Berkeley turned him down because of his advanced age – he was 33 – but Stanford accepted him.

After his internship, Harman became a research associate at the Donner Laboratory back at Berkeley. “Donner was great,” he remembers, “because I didn’t really have to do anything, other than a hematology clinic on Wednesday mornings. I could just think.” And what he thought about was aging. “One thing you learn in biology,” he notes, “is that Mother Nature has a tendency to use the same processes over and over. My impression was that since everything ages, there was probably a single, basic cause.”

Pondering the issue at first left him frustrated. “I thought perhaps there wasn’t even enough knowledge available at the time to solve the problem,” he says. “And then in November of 1954 I was sitting at my desk when all of a sudden the thought came to me: free radicals. In a flash, I knew it could explain things.”

He quickly discussed the idea with medical colleagues – most thought it was interesting but too simple to explain such a complex phenomenon. “I got encouragement from only two people, both of whom were organic chemists, not medical doctors,” he recalls.

The Ubiquitous Enzyme Superoxide Dismutase

Helen and Denham Harman

Harman spent the next decade on a virtually solo research effort that produced circumstantial evidence for his idea. The limits of the instrumentation of that time made it difficult even to show that free-radical species existed in living cells. Electron spin resonance studies found free radicals in yeast in 1954, but it was not until 1965 that free radicals were detected in human blood serum.

Then in 1967 biochemists discovered the ubiquitous enzyme superoxide dismutase, whose job it is to protect cells by sopping up free radicals formed during aerobic respiration in cells. The presence of a defense implies that free radicals are indeed a clear and very present danger to cells.
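
The reaction superoxide dismutase catalyzes (standard biochemistry, included here for context) converts two superoxide radicals into the hydrogen peroxide mentioned below:

\[
2\,\mathrm{O_2^{\cdot -}} + 2\,\mathrm{H^+} \;\xrightarrow{\ \mathrm{SOD}\ }\; \mathrm{H_2O_2} + \mathrm{O_2}
\]

Downstream enzymes such as catalase then break the hydrogen peroxide down to water and oxygen before it can go on to form the far more reactive hydroxyl radical.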

Ensuing research has implicated free radicals in cancer, heart disease, Alzheimer’s disease and other conditions. And observations of the animal kingdom are especially suggestive of the general aging theory. Harman points out that rats and pigeons, for example, have about the same body weights and metabolic rates. But pigeons produce far less hydrogen peroxide (formed from the superoxide radical) during cellular processes than do rats – and the birds live some 15 times longer than the rodents.

Judging by the sales of antioxidant supplements that scavenge free radicals, the American public has clearly subscribed to Harman’s ideas. Many physicians and scientists also have signed on to his view of aging, with the free-radical theory underlying much of current aging research.

“I think we’re now getting to a point where we may be able to actually intervene in the aging process,” Harman says. If his prediction proves true, our extra years will be owed to his many well-spent ones.

Also read: A New Approach to Studying Aging and Improving Health

Molecular Manufacturing for the Genomic Age

Researchers are making significant advances in nanotechnology that someday may help revolutionize medical science, from testing new drugs to repairing cells.

Published October 1, 2002

By Fred Moreno, Dan Van Atta, and Jennifer Tang

When it comes to understanding biology, Professor Carl A. Batt believes that size matters – especially at the Cornell University-based Nanobiotechnology Center that he codirects. Founded in January 2000 by virtue of its designation as a Science and Technology Center, and supported by the National Science Foundation, the center seeks to fuse advances in microchip technology with the study of living systems.

Batt, who is also professor of Food Science at Cornell, recently presented a gathering – entitled Nanotechnology: How Many Angels Can Dance on the Head of a Pin? – with a tiny glimpse into his expanding nanobiotech world. The event was organized by The New York Academy of Sciences (the Academy). “A human hair is 100,000 nm wide, the average circuit on a Pentium chip is 180 nm, and a DNA molecule is 2 nm, or two billionths of a meter,” Batt told the audience.
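
Putting those figures side by side (simple arithmetic on the numbers Batt quoted):

\[
\frac{100{,}000\ \mathrm{nm}}{2\ \mathrm{nm}} = 50{,}000
\qquad\text{and}\qquad
\frac{180\ \mathrm{nm}}{2\ \mathrm{nm}} = 90,
\]

so a human hair is about 50,000 times wider than the DNA double helix, and even a state-of-the-art circuit line of that era spanned roughly 90 DNA widths.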

“We’re not yet at the point where we can efficiently and intelligently manipulate single molecules,” he continued, “but that’s the goal. With advances in nanotechnology, we can build wires that are just a few atoms wide.

“Eventually, practical circuits will be made up of series of individual atoms strung together like beads and serving as switches and information storage devices.”

Speed and Resolution

There is a powerful rationale behind Batt’s claim that size is important to the understanding of biology. Nanoscale devices can acquire more information from a small sample with greater speed and at better resolution than their larger counterparts. Further, molecular interactions such as those that induce disease, sustain life and stimulate healing all occur on the nanometer scale, making them resistant to study via conventional biomedical techniques.

“Only devices built to interface on the nanometer scale can hope to probe the mysteries of biology at this level of detail,” Batt said. “Given the present state of the technology, there’s no limit to what we can build. The necessary fabrication skills are all there.”

Scientists like Batt and his colleagues at Cornell and the center’s other academic partners are proceeding into areas previously relegated to science fiction. While their work has a long way to go before there will be virus-sized devices capable of fighting disease and effecting repairs at the cellular level, progress is substantial. Tiny biodegradable sensors, already in development, will analyze pollution levels and measure environmental chemicals at multiple sample points over large distances. Soon, we’ll be able to peer directly into the world of nano-phenomena and understand as never before how proteins fold, how hormones interact with their receptors, and how differences between single nucleotides account for distinctions between individuals and species.

The trick – and the greatest challenge posed by an emerging field that is melding the physical and life sciences in unprecedented ways – is to adapt the “dry,” silicon-based technology of the integrated circuit to the “wet” environment of the living cell.

Bridging the Organic-Inorganic Divide

Nanobiotechnology’s first order of business is to go beyond inorganic materials and construct devices that are biocompatible. Batt names proteins, nucleic acids and other polymers as the appropriate building blocks of the new devices, which will rely on chemistries that bridge the organic and inorganic worlds.

In silicon-based fabrication, some materials that are common in biological systems – sodium, for example – are contaminants. That’s why nano-biotech fabrication must take place in unique facilities designed to accommodate a level of chemical complexity not encountered in the traditional integrated-circuit industry.

But for industry outsiders, the traditional technology is already complex enough. Anna Waldron, the Nanobiotechnology Center’s Director of Education, routinely conducts classes and workshops for schoolchildren, undergraduates and graduates to initiate them into the world of nanotechnology, encourage them to pursue careers in science, and foster science and technology literacy.

In a hands-on presentation originally designed for elementary-school children, Waldron gives the audience a taste – both literally and figuratively – of photolithography, a patterning technique that is the workhorse of the semiconductor industry. Instead of creating a network of wells and channels out of silicon, however, Waldron works her magic on a graham cracker, a chocolate bar and a marshmallow, manufacturing a mouthwatering “nanosmore” chip in a matter of minutes.

Graham crackers stand in for the silicon substrate, while chocolate provides the primer for the surface. Marshmallows act as the photoresist, an organic polymer that can be patterned in the desired manner when exposed to light, radiation or, in this case, a heat gun. Finally, a Teflon “mask” is placed on top of the marshmallow layer and a blast from the heat gun transfers the mask’s design to the marshmallow’s surface – a result that appeared to leave a lasting impression on the Academy audience as well.

What’s Next?

According to Batt, it won’t be too long before the impact of the nanobiotech revolution will be felt in the fields of diagnostics and biomedical research. “Progress in these areas will translate the vast information reservoir of genomics into vital insights that illuminate the relationship between structure and function,” he said.

Prof. Batt

Also down the road, ATP-fueled molecular motors may drive a whole series of ultrasmall, robotic medical devices. A “lab-on-a-chip” will test new drugs, and a “smart pharmacist” will roam the body to detect abnormal chemical signals, calculate drug dosage and dispense medication to molecular targets.

Thus far, however, there are no manmade devices that can correct genetic mutations by cutting and pasting DNA at the 2-nanometer scale. One of the greatest obstacles to their development, Batt said, doesn’t lie in building the devices, but in powering them. Once the right energy sources are identified and channeled, we’ll have a technology that speaks the language of genomics and proteomics, and decodes that language into narratives we can understand.

Also read: Building a Big Future from Small Things


About Prof. Batt

Microbiologist Carl A. Batt is professor of Food Science at Cornell University and co-director of the Nanobiotechnology Center, an NSF-supported Science and Technology Center. He also runs a laboratory that works in partnership with the Ludwig Institute for Cancer Research.

Continuing the Legacy of a Cancer Research Pioneer

Advancing the cancer research started by Cesare Maltoni, the late Italian oncologist who advocated for industrial workplace safety.

Published August 1, 2002

By Fred Moreno, Dan Van Atta, Jill Stolarik, and Jennifer Tang

Cesare Maltoni. Image courtesy of Silvestro Ramunno, CC BY-SA 4.0, via Wikimedia Commons.

For decades, the “canary in the coal mine” approach has been used to test for potential carcinogens. Standing in for humans, mice and rats have ingested or been injected with various chemicals to help toxicologists determine if the substances would induce cancers. In the end, autopsy revealed whether the lab animals had developed tumors.

Today, new approaches are emerging. They stem from a variety of tools emerging from advances in molecular biology, microbiology, genomics, proteomics, novel animal models of carcinogenesis, and computer technology.

These tools and approaches were the focus of an April conference commemorating the work of Italian researcher Cesare Maltoni, who died January 21. Renowned for his research on cancer-causing agents in the workplace, Maltoni was the first to demonstrate that vinyl chloride produces angiosarcomas of the liver and other tumors in experimental animals. Similar tumors were later found among industrial workers exposed to vinyl chloride.

Maltoni also was the first to demonstrate that benzene is a multipotential carcinogen, causing cancers of the Zymbal gland, the oral and nasal cavities, the skin, the forestomach, the mammary glands, the liver, and the hemolymphoreticular system (i.e., leukemias).

Sponsored by the Collegium Ramazzini, the Ramazzini Foundation, and the National Toxicology Program of the National Institute of Environmental Health Sciences (NIEHS), the meeting was organized by The New York Academy of Sciences (the Academy).

Measuring More Than Pathological Changes

After reviewing the contributions of Maltoni and of David Rall, an American giant in the same field, and updating the audience on research in their own groups, the speakers and attendees discussed the future of carcinogenesis testing. While new tools will not replace bioassays, most noted, they will make it possible to measure more than simply the pathological changes seen through the microscope.

J. Carl Barrett, head of the Laboratory of Biosystems and Cancer at the National Cancer Institute, cited four recent developments that are fundamentally changing the research to identify risk factors and biological mechanisms in carcinogenesis.

The four developments are: new animal models with targeted molecular features – such as mice bred with a mutated p53 tumor-suppressor gene – that make them very sensitive to environmental toxicants and carcinogens; a better understanding of the cancer process; new molecular targets for cancer prevention and therapy; and new technologies in genomics and proteomics.

New technologies in cancer research, like gene expression analyses, are revealing that cancers that look alike under the microscope are often quite different at the genetic level. “Once we can categorize cancers using gene profiles,” Barrett said, “we can determine the most effective chemotherapeutic approaches for each – and we may be able to use this same approach to identify carcinogenic agents.”
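
Barrett’s point can be illustrated with a small, purely hypothetical sketch in Python (not drawn from the conference): given a matrix of gene-expression measurements, tumor samples can be grouped by the similarity of their profiles, so that cancers indistinguishable under the microscope fall into separate clusters. Every number and sample below is an invented stand-in for real expression data.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical data: 6 tumor samples x 50 genes (log expression values).
# Samples 0-2 and 3-5 are generated from two different expression patterns,
# standing in for tumors that look alike histologically but differ genetically.
pattern_a = rng.normal(0.0, 1.0, 50)
pattern_b = rng.normal(2.0, 1.0, 50)
samples = np.vstack(
    [pattern_a + rng.normal(0, 0.3, 50) for _ in range(3)]
    + [pattern_b + rng.normal(0, 0.3, 50) for _ in range(3)]
)

# Hierarchical clustering on the correlation distance between profiles.
tree = linkage(samples, method="average", metric="correlation")
labels = fcluster(tree, t=2, criterion="maxclust")
print("Cluster assigned to each sample:", labels)  # two groups of three

In practice such grouping is only a first step; the clusters must then be tied back to outcomes, such as response to a particular chemotherapy or exposure to a particular agent.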

A Robust Toxicology Database

A related effort – to link gene expression and exposure to toxins – has recently been launched at the NIEHS. The newly created National Center for Toxicogenomics (NCT) focuses on a new way of looking at the role of the entire genome in an organism’s response to environmental toxicants and stressors. Dr. Raymond Tennant, director of the NCT, said the organization is partnering with academia and industry to develop a “very robust toxicology database” relating environmental stressors to biological responses.

“Toxicology is currently driven by individual studies, but in a rate-limited way,” Tennant said. “We can use larger volumes of toxicology information and look at large sets of data to understand complex events.” Among other benefits, this will allow toxicologists to identify the genes involved in toxicant-related diseases and to identify biomarkers of chemical and drug exposure and effects. “Genomic technology can be used to drive understanding in toxicology in a more profound way,” he said.

Using the four functional components of the Center (bioinformatics, transcript profiling, proteomics and pathology), Tennant believes that the NCT will be able “to integrate knowledge of genomic changes with adverse effects” of exposure to toxicants.
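
As a purely illustrative sketch of what “relating environmental stressors to biological responses” can mean at the data level, the toy layout below (Python with SQLite; not the NCT’s actual design, and every table name and value is invented) links one exposure record to the gene-expression changes and pathology findings observed afterward, so the two can be queried together.

import sqlite3

# Miniature, hypothetical schema: an exposure, the expression changes that
# followed it, and the pathology findings observed in the same animals.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE exposure (
    exposure_id INTEGER PRIMARY KEY,
    chemical    TEXT,
    dose_mg_kg  REAL,
    duration_d  INTEGER
);
CREATE TABLE expression_change (
    exposure_id INTEGER REFERENCES exposure(exposure_id),
    gene        TEXT,
    fold_change REAL
);
CREATE TABLE pathology_finding (
    exposure_id INTEGER REFERENCES exposure(exposure_id),
    tissue      TEXT,
    finding     TEXT
);
""")
con.execute("INSERT INTO exposure VALUES (1, 'benzene', 25.0, 90)")
con.execute("INSERT INTO expression_change VALUES (1, 'TP53', 2.4)")
con.execute("INSERT INTO pathology_finding VALUES (1, 'bone marrow', 'hypoplasia')")

# Ask which expression changes co-occur with which pathology findings.
rows = con.execute("""
    SELECT e.chemical, x.gene, x.fold_change, p.tissue, p.finding
    FROM exposure e
    JOIN expression_change x ON x.exposure_id = e.exposure_id
    JOIN pathology_finding p ON p.exposure_id = e.exposure_id
""").fetchall()
print(rows)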

Current animal models of carcinogenesis are unable to capture the complexity of cancer causation and progression, noted Dr. Bernard Weinstein, professor of Genetics and Development, and director emeritus of the Columbia-Presbyterian Cancer Center.

Multiple factors are involved in the development of cancer, Weinstein said, making it difficult to extrapolate risk from animal models. Among the many factors that play a role in cancer causation and progression are “environmental toxins such as cigarettes, occupational chemicals, radiation, dietary factors, lifestyle factors, microbes, as well as endogenous factors including genetic susceptibility and age.”

Gene Mutation and Alteration

By the time a cancer emerges, Weinstein added, “perhaps four to six genes are mutated, and hundreds of genes are altered in their pattern of expression because of the network-like nature and complexity of the cell cycle. The circuitry of the cancer cell may well be unique and bizarre, and highly different from its tissue of origin.”

Research over the past decade has underscored the role that microbes play in a number of cancers: the hepatitis B and hepatitis C viruses in liver cancer, along with the cofactors alcohol and aflatoxin; human papillomavirus and tobacco smoke in cervical cancer; and Epstein-Barr virus and malaria in lymphoma, said Weinstein. Microbes are likely to be involved in the development of other kinds of cancer as well, he speculated. “Microbes alone cannot establish disease; they need cofactors. But this information is important from the point of view of prevention, and these microbes and their cofactors are seldom shown in rodent models.”

When thinking of ways to determine the carcinogenicity of various substances, he concluded, “we have to consider these multifactor interactions, and to do this we need more mechanistic models” of cancer initiation and progression.

Christopher Portier, a mathematical statistician in the Environmental Toxicology Program at the NIEHS, is working to make exactly this type of modeling more widespread. He stressed the importance and advantages of complex analyses of toxicology data using mechanism-based – or “biologically based” – models.

Such a model includes many more factors than just the length of exposure and the animal’s time to death. It can incorporate “the volume of tumor, precursor lesions, dietary and weight changes, other physiological changes, tumor location and biological structure, biochemical changes, mutations,” Portier said, and give a more complete picture of the processes that occur when an organism is exposed to a toxicant.
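
The simplest building block of such an analysis is a dose-response curve fitted to bioassay data. The sketch below (Python; the doses, tumor rates and the one-hit model are textbook-style stand-ins, not Portier’s actual model, which folds in the additional factors listed above) shows that basic fitting step.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical bioassay results: administered dose (mg/kg/day) and the
# fraction of animals developing tumors at that dose.
dose = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])
tumor_fraction = np.array([0.02, 0.05, 0.10, 0.22, 0.40, 0.65])

def one_hit(d, q0, q1):
    # Classic one-hit dose-response form: tumor probability rises with dose,
    # with q0 capturing background risk and q1 the potency of the agent.
    return 1.0 - np.exp(-(q0 + q1 * d))

params, _ = curve_fit(one_hit, dose, tumor_fraction, p0=[0.02, 0.01])
q0, q1 = params
print(f"background term q0 = {q0:.3f}, dose coefficient q1 = {q1:.4f}")

# The fitted curve can then be used to estimate risk at doses never tested
# directly, e.g. a low environmental exposure of 1 mg/kg/day.
print("predicted tumor risk at 1 mg/kg/day:", one_hit(1.0, q0, q1))

A biologically based analysis replaces this single curve with linked sub-models for the intermediate measurements Portier lists, which is what makes it computationally heavier and a team effort.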

New Analytical and Biological Tools

With biologically based models, researchers would link together a spectrum of experimental findings in ways that allow them to define dose-response relationships, make species comparisons, and assess inter-individual variability, Portier said. Such models would allow researchers to quantify the sequence of events that starts with chemical exposure and ends with overt toxicity. However, he said “each analysis must be tailored to a particular question. They are much more difficult computationally and mathematically than traditional analyses, and require a team-based approach.

“Toxicology has changed,” Portier continued. “We now have new analytical and biological tools – including transgenic and knockout animals, the information we’ve gained through molecular biology, and high-throughput screens. We need to link all that data together to predict risk, then we need to look at what we don’t know and test that.”

While most speakers focused on the future benefits of up-and-coming technologies and concepts, Philip Landrigan, director of the Mount Sinai Environmental Health Sciences Center at the Mount Sinai School of Medicine, reminded the group of the work on the ground that still needs to be accomplished. “We’ve made breathtaking strides in our understanding of carcinogens and cancer cells,” he said. “I am struck, though, by the divide in the cancer world – the elegance of the lab studies, but our inefficiency in applying that knowledge to cancer prevention.”

Thorough Testing Needed

One of the problems confronting researchers is the vast number of substances that are yet to be tested. About 85,000 industrial chemicals are registered with the U.S. Environmental Protection Agency for use in the United States. Although some 3,000 of these are what the EPA calls high-production-volume chemicals, Landrigan said, “only 10 percent of these have been tested thoroughly to see the full scope of their carcinogenic potential, their neurotoxicity and immune system effects.”

Landrigan also discussed other troubling issues. For example: Children, the population most vulnerable to the effects of toxins, are only rarely accounted for in testing design and analysis, he said, and the United States continues to export “pesticides, known carcinogens, and outdated factories to the Third World.” Landrigan said he believes the world’s scientific community needs to address these issues.

At the conclusion of the conference, Drs. Kenneth Olden and Morando Soffritti signed an agreement formalizing an Institutional Scientific Collaboration between the Ramazzini Foundation and the NIEHS in fields of common interest. Priorities of the collaboration will include: carcinogenicity bioassays on agents jointly identified; research on the interactions between genetic susceptibility and exogenous carcinogens; biostatistical analysis of results and establishment of common research management tools; and molecular biology studies on the basic mechanisms of carcinogenesis.

Detailed information presented in several papers will be included in the proceedings of the conference, to be published in the Annals of the New York Academy of Sciences later this year.

Also read: From Hypothesis to Advances in Cancer Research