
Teaching the Elegance of the Universe

A young girl with pigtails writes math equations on a whiteboard.

A playwright and mathematician turned tutor came to realize that a relatively simple pedagogical approach was most effective when engaging his students.

Published March 1, 2005

By William Tucker

Image courtesy of Vitalii via stock.adobe.com.

It was billed as “two imaginative minds in conversation.” Brian Greene, author of The Elegant Universe and The Fabric of the Cosmos, is probably the world’s best explainer of string theory – the latest theory of the “physics of everything.” John Mighton is a talented Canadian playwright, mathematician, and researcher who built a second career teaching math to elementary students in Toronto.

Two Minds and a Quartet

Moderating the evening, at the City University of New York, was Robert Krulwich, the New York ABC-TV correspondent with a bent for scientific subjects. It was all part of the CUNY series Science & the Arts, designed as a bridge between two worlds.

What made the evening particularly promising is that Greene and Mighton are collaborating on a play that will attempt to take the concepts of string theory and turn them into a dramatic narrative – with musical accompaniment, no less. “We got together with the director and kicked around how the science might inform the narrative and intertwine with certain musical themes,” said Greene. “Then John goes back and writes up various snippets of scenes and we have actors read them to see how they feel and sound. Then John initiates another roundtable discussion and we go at it again. We’ll have the first full script by November.”

Greene also described another recent project, Strings and Strings, with the Emerson Quartet. “It’s sponsored by the Guggenheim,” he explained. “I talk about the physics in scientific terms, and then I shift into metaphorical language that can apply as well to music. The quartet then takes over and elaborates on that metaphor. People take in the concepts, not just through their heads, but as a full-body experience.”

Taking It Step by Step

All this held promise for some future evenings’ entertainment. But to the delight of some – and the disappointment of others – this night’s discussion revolved almost completely around Mighton’s experiences in tutoring elementary students in Toronto.

“I was completely broke as a playwright and looking for a part-time job,” Mighton recounted. “One day I saw a sign for math tutors. I had taken a calculus course in college and managed to convince the woman that this qualified me for the job. I didn’t tell her my grade.”

Mighton’s first student was a 15-year-old boy. “His teacher had told him he was the stupidest kid he ever saw. Having struggled with math myself, I decided to reserve judgment. I worked with him for five years and he turned out to be an ideal student. He’s now doing his doctoral work in math at the University of Toronto.”

Since beginning tutoring 10 years ago, Mighton has founded JUMP – Junior Undiscovered Math Prodigies – an educational charity that provides free math tutoring to elementary-level students in Toronto. He also has written a book, The Myth of Ability: Nurturing Mathematical Talent in Every Child, which outlines his philosophy.

Mighton has two basic strategies. First, he presents math in a simple, step-by-step approach that allows mastery of one stage before moving on to the next. Second, he gives the children plenty of encouragement in order to build their confidence.

JUMP-Starting Math

“I started JUMP in my apartment with a couple of my actor friends, many of whom didn’t know much math,” he said. “We asked the local school to send over some children who needed to learn fractions. Somehow they misunderstood and sent over a remedial class.” The experience was daunting. “My first student could barely count to 10. She had never heard of multiplication. She was absolutely terrified. When presented with the simplest concepts, she kept saying, ‘I don’t understand what you’re saying.’”

Mighton says he panicked. “I asked her to count to 10 on her fingers. She couldn’t do it at first but gradually relaxed. Then we began skip-counting by twos and threes. Pretty soon she got the hang of it. I told her she was brilliant. Her mother told me the next day that she had a nightmare that she wouldn’t be allowed to return to tutoring.”

After three years his student had moved back into mainstream classes. She is now working a year ahead of her grade on some subjects.

Mighton’s methods involve lots of guided exercises in the early stages of the program, which puts him at odds with most schools of education. “When I wrote this book, I didn’t realize I’d stepped into these math wars,” he said.

“I’m not advocating a swing back to rote learning. What’s happening today, however, is that they expect kids to discover whole concepts. In grade four they now expect kids to discover their own algorithm for division.

“In eight centuries Roman civilization never discovered an efficient division algorithm. It’s a bit unrealistic to expect children to discover it in one morning.”
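To appreciate what fourth-graders are being asked to reinvent, here is a minimal sketch of the standard long-division procedure in Python — an illustration added for this discussion, not part of Mighton’s JUMP materials:

```python
# A minimal sketch of the standard long-division algorithm for whole
# numbers: process the dividend's digits left to right, carrying the
# remainder forward at each step.
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):               # work digit by digit
        remainder = remainder * 10 + int(digit)  # "bring down" the next digit
        quotient_digits.append(remainder // divisor)
        remainder = remainder % divisor
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, remainder

print(long_division(7825, 6))  # (1304, 1), i.e. 7825 = 6 * 1304 + 1
```

Compact as it looks, the procedure interleaves place value, multiplication, and subtraction — exactly the kind of layered structure Mighton argues should be taught one mastered step at a time.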

Every Child a Prodigy

Greene weighed in on behalf of rote learning. “When people learn some advanced concept in mathematics or physics, they don’t usually swallow it whole,” he said.

“Oftentimes they pick it apart bit by bit. By rote, by calculating, by imbedding yourself into the details and doing it over and over, somehow you get it. The process of rote has gotten a bad reputation, but it is a very, very powerful tool in the service of education.”

“It’s like Ted Williams and these hitters who you assume just have great ability,” said Krulwich, the moderator. “But when they get into the batting cage, they hit and hit and hit and hit and hit.” Mighton added the words of one of the century’s greatest mathematicians, John von Neumann: “Math is a matter of getting used to things.”

Also read: The Chaos of Celestial Physics and Astrodynamics

The Chaos of Celestial Physics and Astrodynamics

A starry night sky with the outline of mountains in the foreground.

By embracing “chaos,” mathematician Edward Belbruno was better able to understand the three-body problem of celestial physics. His notion of chaos describes motion that defies precise long-term predictions.

Published January 1, 2005

By William Tucker

In 1990, Edward Belbruno was packing his belongings, getting ready to leave the Jet Propulsion Laboratory in Pasadena. His five-year effort to interest NASA in low-energy trajectories for spaceflight had failed.

A graduate of the Courant Institute of Mathematical Sciences in New York, Belbruno had long been playing with the idea of charting very precise flight paths through the sky or into space. He wanted to allow space probes to slip into orbit around a moon or planet without the use of powerful, fuel-consuming retrorockets. His task was made immensely complicated – if not impossible – by the three-body problem of celestial physics.

When first formulating the laws of gravity, Isaac Newton had calculated the interaction of two bodies. They could be a stone falling to Earth, a spacecraft in orbit, or the Earth itself on its trajectory about the Sun. In each case, the two bodies both revolve around the center of mass – a point somewhere between their two centers, like the balancing point of a see-saw.
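The see-saw analogy can be made exact. For two bodies of masses $m_1$ and $m_2$ at positions $x_1$ and $x_2$, the balancing point is

$$x_{\mathrm{cm}} = \frac{m_1 x_1 + m_2 x_2}{m_1 + m_2}, \qquad m_1\,(x_{\mathrm{cm}} - x_1) = m_2\,(x_2 - x_{\mathrm{cm}}),$$

so the heavier body sits closer to the pivot — for the Earth and Sun, that point lies deep inside the Sun itself.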

The interaction of three bodies, however, is immensely more difficult. In fact, in the early 1960s, V. Arnold, a Russian mathematician, and J. Moser, a German-born American, independently proved that the three-body problem cannot, in general, be solved exactly. The proofs grew out of the more general problem of chaos in quasi-periodic motion, as outlined by Arnold’s teacher, A. N. Kolmogorov, in 1954. The result is now known as the Kolmogorov-Arnold-Moser (KAM) theorem.

Order in Chaos

The obstacle to finding a solution is that the three-body problem leads, literally, to chaos. To a mathematician, that does not mean a dark abyss or a mad frenzy. Rather, chaos describes motion that defies precise long-term predictions.
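A toy numerical experiment makes this concrete (illustrative units and invented initial conditions; nothing here reproduces Belbruno’s actual calculations): two simulated three-body systems that start one part in a billion apart drift measurably away from each other.

```python
# Two copies of a planar three-body system, started one part in a billion
# apart, to illustrate chaotic sensitivity to initial conditions.
# Units, masses, and starting states are invented for illustration.
import numpy as np

G = 1.0      # gravitational constant in toy units
EPS = 1e-3   # small softening to tame close encounters (numerical, not physical)

def accelerations(pos, masses):
    """Pairwise Newtonian gravitational accelerations for n planar bodies."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / (r @ r + EPS**2) ** 1.5
    return acc

def step(pos, vel, masses, dt):
    """One velocity-Verlet (leapfrog) integration step."""
    a = accelerations(pos, masses)
    vel_half = vel + 0.5 * dt * a
    pos = pos + dt * vel_half
    vel = vel_half + 0.5 * dt * accelerations(pos, masses)
    return pos, vel

masses = np.array([1.0, 1.0, 1.0])
pos_a = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
vel_a = np.array([[0.0, -0.5], [0.0, 0.5], [0.5, 0.0]])
pos_b, vel_b = pos_a.copy(), vel_a.copy()
pos_b[2, 0] += 1e-9   # the one-part-in-a-billion nudge

dt = 0.001
for t in range(20001):
    if t % 5000 == 0:
        sep = np.linalg.norm(pos_a - pos_b)
        print(f"t = {t * dt:5.1f}   separation = {sep:.3e}")
    pos_a, vel_a = step(pos_a, vel_a, masses, dt)
    pos_b, vel_b = step(pos_b, vel_b, masses, dt)
```

The tiny initial difference grows by orders of magnitude: the motion is deterministic, yet long-term prediction is hopeless — which is precisely the sense of “chaos” at issue here.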

However, mathematics offers tools even for dealing with the unknowable. Using the mathematics of chaos, Belbruno felt that he could fudge the three-body problem enough to create a proper trajectory. The difficulty was that his slow dance to the Moon would take two years, whereas conventional rockets can make the trip in three days. NASA lost interest, and Belbruno was shown the door.

Then a miracle happened. The Japanese had launched a two-part Moon probe, Muses A, the size of a desk, and Muses B, the size of a grapefruit. The two had separated while in Earth orbit and the grapefruit headed for the Moon. Upon arrival, however, Muses B’s radio failed, and the probe was lost. Now Muses A was circling the Earth with very little fuel and nothing to do. A JPL engineer remembered Belbruno’s work. Suddenly Belbruno had an audience. Could he help? Belbruno said he could.

“In the same instant, I realized that I could add the Sun’s gravitational field to the equation,” Belbruno says. Ten months later, Muses A – now rechristened Hiten, after a Buddhist angel – fired half its remaining fuel and, guided by Belbruno’s equations, glided into a 2-million-mile itinerary beyond the Moon and back again. It was like flicking a paper airplane into space, hoping it would eventually settle into a trajectory where its momentum perfectly matches the Moon’s gravity.

The Angel of Chaos

Belbruno’s formulas worked, and the mission was saved. “They used it again for the Genesis probe of the Sun and the European Space Agency mission SMART-1,” says Belbruno. “NASA now takes my work a lot more seriously.”

So seriously that Belbruno was commissioned to convene a conference at the University of Maryland in 2003 to investigate astrodynamics and chaos. Also under study were formation flying, navigation and control of unmanned spacecraft, orbital dynamics, mission proposals, and possible propulsion methods for pushing probes deep into the solar system. The results have been collected as Astrodynamics, Space Missions, and Chaos, Volume 1017 in the Annals of the New York Academy of Sciences.

Although Belbruno and his fellow authors could not know it, space probes were about to be brought back front and center by President George W. Bush’s announcement of a mission to Mars, somewhere around 2020. “The cost for delivering cargo to the Moon is now $1 million per pound,” says Belbruno. “Every pound of fuel we can save is another pound of payload that can be delivered.

“I don’t agree with everything the president does, but I think he has shown great vision on this initiative,” he adds. “The idea of going step by step to the Moon, building a base, and then moving on to Mars and back is very practical. I think there’s a good possibility we’ll succeed.”

Also read: Exploring the Ethics of Human Settlement in Space

Merging Modern and Ancient Medicines

An Interview with Albert Y. Leung, a pharmacologist who uses modern medical science to study the mechanisms—or active components—of herbs.

Published September 30, 2004

By Dan Van Atta

Image courtesy of iMarzi via stock.adobe.com.

To Albert Y. Leung, the benefits of Western medicine and those of medicinal herbs and other “natural” remedies are by no means mutually exclusive. Born and raised in Hong Kong, Leung grew up experiencing the power of traditional approaches to medicine used for centuries in China.

“My great grandfather on my mother’s side was a local doctor in his little village,” recalls Leung, a member of The New York Academy of Sciences since 1976. “While I never knew him, my grandmother knew a lot about herbs. I grew up taking herbs.”


For three decades, Leung has used the tools and knowledge of modern medical science to study the mechanisms—or active components—of herbs. He is helping to understand what makes them effective in reducing certain aches and pains, as well as alleviating other symptoms of illness.

“I knew that for certain problems herbs were effective,” Leung said, “but then no one really understood why they worked. Now we know that many herbs contain active ingredients that are antioxidant or anti-inflammatory agents.”

Leung obtained a BS degree in pharmacy at the National Taiwan University before coming to the United States in 1962. He earned his MS and PhD in pharmacognosy at the University of Michigan, in Ann Arbor.

Part Scientist, Part Entrepreneur

Moving to Glen Rock, New Jersey, in the late 1970s, Leung created AYSL Corp., an information company. AYSL “probably holds the most extensive collection of Chinese journals in a single location outside of China” covering traditional Chinese medicine. He also edited the Encyclopedia of Common Natural Ingredients Used in Food, Drugs, and Cosmetics. Hailed as the most authoritative reference for natural ingredients in commercial use, it is now entering its third edition.

In the past 30 years dietary supplements and “health foods” based on “natural” ingredients have become a major industry. Leung said he is concerned about the safety and efficacy of many products sold as herbal extracts.

“The major problem is that everyone claims their product is the best,” he said, “but there is no real science behind it, no real controls. To say that a product is standardized doesn’t mean much when, for many of these products, the active ingredient is not known.”

In 1996, Leung founded a second company, Phyto-Technologies, Inc., to specialize in herb research. Phyto-Technologies manufactures and custom formulates Chinese herbal products for private-label distribution. With facilities in Glen Rock, New Jersey, and Woodbine, Iowa, the company now has 20 employees. Leung serves as president and chief executive officer.

“My approach is to provide the quality control needed to make the extracts the way they are supposed to be made,” Leung explained. “Certain herbs have to be extracted by traditional methods, such as boiling in water or soaking in alcohol. In the past four or five years we’ve developed some more technical aspects, but our approach is to combine appropriate science with the traditional methods necessary to retain the total benefits of traditional Chinese herbs.”

A Major Headache

Leung is currently engaged in the third year of a research study of the herb feverfew (Tanacetum parthenium Schultz Bip.) for use in migraine prevention. His company has been awarded a Small Business Innovation Research grant by the National Center for Complementary and Alternative Medicine to conduct the study, for which he is the principal investigator.

This second year of the phase II grant, “Reproducible Feverfew Preparations for Migraine Trials,” is fully funded, with $690,337. Dennis V. C. Awang, of MediPlant, Inc., an expert in the chemistry of feverfew, is the co-principal investigator. Funding for both phases of the three-year project comes to about $1.4 million.

Leung’s main objective is to characterize and to standardize feverfew preparations that have the greatest potential for use in human clinical trials for relief of migraine. During the past 20 years four clinical trials have yielded positive results in migraine prevention. Three of the trials used dried feverfew leaf powder, and one used a CO2 supercritical fluid extract (SFE). However, another trial—using a 90% ethanolic extract (by prolonged extraction), containing high levels of parthenolide (0.35%)—produced negative results.

“These results indicated that parthenolide is not the active principle of feverfew in migraine prevention, as previously assumed,” Leung said. The researchers then used chromatographic and spectrophotometric profiling and bioassay and gene expression assay techniques to define and isolate the potentially active components present in the dried leaf and the SFE, but absent in the prolonged extract.

Further studies are now in progress to characterize potential active components, Leung said. “Pilot batches of materials standardized to contents and physicochemical profiles of these components will be prepared and further subjected to activity verification by bioassay and gene expression assay,” he added.

The Researcher as Communicator

These materials “will then be subjected to clinical trials.” If all goes well, Leung said, the work would result in a safe, effective over-the-counter drug for migraine.

In the meantime, Leung continues to see his role as one of communicator as well as researcher. In 1995 he published another book, Better Health with (Mostly) Chinese Herbs & Foods. He also serves as an advisor to the Modernizing Chinese Medicine International Association, headquartered in Hong Kong. In addition to conducting research and writing books about herbal medicine, Leung produces a newsletter on the subject as well.

“There are a lot of aspects of modern medicine that are superior,” commented Leung, “but there are many common ailments that modern medicine still does not understand and is unable to treat. And there are herbs that work to reduce aches and pains—even though we may not know the active ingredients that make them work. I think the two forms of medicine should be used side by side.”

Also read: A New Look at an Ancient Pain Remedy

Scientists and War: An Ethical Dilemma

A black and white photo of an atomic bomb test, showing a massive mushroom cloud.

Major advances were made in the development of chemical weapons between World War I and the Cold War. This would present scientists with a moral dilemma.

Published August 1, 2004

By Mary Crowley

Atomic cloud during Baker Day blast at Bikini atoll. Image courtesy of National Archives Catalog. Public domain.

“Of arms I sing, and the man,” began the Aeneid, Virgil’s epic poem on war and heroism, written in the first century BCE. Battle and humankind’s relationship to it is a timeless theme.

But war and weaponry took on new meaning in the 20th century, when nuclear arms created the potential to eliminate entire cities and even civilization. From the chemists who manufactured gas in World War I to the physicists who designed the atom bomb in World War II, scientists were at the fulcrum of a world literally in the balance.

And they are still there now, in the post-9/11 era, this time with molecular biologists facing off against the shadowy enemy of bioterrorism. Hopefully, they have gleaned some insights from their forebears, particularly physicist J. Robert Oppenheimer, who has come to represent the ethical dilemma that scientists face when called on to use their skills to defend their nation.

“The association of scientist, arms and the state is fraught with troublesome questions, many centering on whether the scientist’s obligation to the state requires deploying his or her expertise to hazardous, potentially destructive purposes and/or defending against them,” said Daniel J. Kevles, Ph.D., Stanley Woodward Professor of History at Yale University. Oppenheimer continues to fascinate us, prompting books, plays and even a forthcoming opera because of the “vexing vitality of these issues,” he said at a recent meeting of The New York Academy of Sciences’ (the Academy’s) History and Philosophy of Science Section.

Chemists at War

The Hague Conventions of 1899 and 1907 condemned the development of chemical weapons (despite objections from the Americans and the British). The ban, instituted because of fears that chemical weapons like gas could be used against cities and civilians, demonstrated “the widely supported belief, even in military circles at the time, that at the opening of the 20th century civilian populations should not be fair game in warfare among the advanced civilized nations,” said Kevles.

But by the outbreak of World War I in August 1914, the Institute of Chemistry in Germany was trying to produce nitric acid for munitions. The Institute was headed by Fritz Haber, the “father of chemical warfare,” who won the 1918 Nobel Prize in Chemistry for devising a method to fix nitrogen from the air – a process industrialized with Carl Bosch. As Haber envisioned it, gas released from cylinders got around The Hague Convention’s prohibition against delivering it via projectiles. Indeed, Haber himself led the first gas attack at Ypres, in Belgium, in April 1915.

Public Opposition to Chemical Weapons

Daniel J. Kevles, Ph.D.

In response, the Allies quickly implemented their own programs. When the United States joined the battle in 1917, it established the Chemical Warfare Service, involving some 700 chemists and more than 20 academic institutions. Quite rapidly, both the letter and the spirit of The Hague Convention were ignored, as the French began using gas shells to better disperse the noxious agent. By war’s end, there were an estimated 560,000 gas casualties.

Artists and writers depicted the horrors of gas attacks. A poll of Americans showed such overwhelming opposition to chemical weapons that a government advisory committee noted, “The conscience of the American people has been profoundly shocked by the savage use of scientific discoveries for destruction rather than for construction.”

Nonetheless, as the Allies were poised for victory in 1918, “gas was hailed as a triumph of Allied industry,” said Kevles. Should the war have continued, the U.S. and Britain had plans to aerially assault cities with chemical bombs, despite vehement opposition from many military officers, including General John J. Pershing. Chemical weapons were seen as a necessary evil. At hearings on Capitol Hill, General Amos A. Fries argued that the more deadly the weapons, “the sooner…we will quit all fighting.”

In part through lobbying by the gas industry and in part through support of veterans who counted gas a “humane weapon” that ended the war sooner, the Chemical Warfare Service received generous research funding. And American gas chemists “displayed no moral anguish about their wartime role,” according to Kevles. They agreed with Haber, who said that gas was “a higher form of killing.”

Physicists at War

Physicists played the starring role in World War II science. Early on, it was clear that this war would be “an unprecedented technological conflict,” one that would require physicists to join the battle for more powerful weaponry, explained Kevles.

They were eager to do so. The Blitzkrieg in 1940 and other early assaults “established a new imperative for the social responsibility of science: Do whatever possible to meet the technological threat from fascist aggressors by forging an all-out technological response in the democracies,” said Kevles. With the memory of Germany’s World War I surprise gas attack still raw, the Allies had no plans to be caught unaware. “The willingness to develop an atomic bomb, a dramatically unconventional innovation that promised to wipe out entire cities, was to prevent being beaten to the punch by the Nazis,” according to Kevles.

But the bomb never went off against its intended target. By the time Fat Man and Little Boy were completed, the Germans had surrendered. The bombs were used instead against civilians in Hiroshima and Nagasaki, even as Japan was on the brink of surrender.

The Oppenheimer Paradox

Robert Oppenheimer

By the time the atom bomb was dropped, “moral sensibilities about bombing civilians had been almost completely shattered, among scientists as well as policy and opinion makers,” said Kevles. J. Robert Oppenheimer’s experiences during World War II and the postwar years poignantly capture the inherent ethical dilemmas of scientists at war.

World War II transformed Oppenheimer from “an otherworldly theoretical physicist into the internationally renowned creator and sage of American nuclear strength,” who was then humiliated and destroyed by “the vicious and bare-knuckled politics of national security,” said Kevles.

Oppenheimer entered the war years eager to apply his physicist’s craft against the Nazis. He was the research head of the Manhattan Project at Los Alamos, New Mexico, feverishly working to develop an atom bomb before the Germans did. In 1945 he wrote, “We recognize our obligation to our nation to use the weapons to help save American lives [and] we can see no acceptable alternative to military use.”

That the bomb was used against Japanese civilians horrified Oppenheimer. He publicly stated in 1947, “Physicists felt a particularly intimate responsibility for suggesting, for supporting, and in the end, in large measure, for achieving the realization of atomic weapons. Nor can we forget that these weapons, as they were in fact used, dramatized so mercilessly the inhumanity and evil of modern war. In some sort of crude sense which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin; and this is a knowledge which they cannot lose.”

A Dutiful Soldier of Science

Despite these reservations, he remained a “dutiful soldier of science” during the early Cold War years, when intense investment into the machines of war was considered essential for national security. Oppenheimer signed on to the plan for creating an H-bomb, and served on various government advisory boards on national defense, until he lost his security clearance in 1953. Most significantly, he was chair of the General Advisory Committee of the just-formed Atomic Energy Commission, which he claimed was supposed to “provide atomic weapons and good atomic weapons and many atomic weapons.”

“Oppenheimer is something of a paradox, embodying at one and the same time a sense of sin associated with the forging of nuclear weapons and a commitment to improving and multiplying those weapons for the sake of national security, a task that could lead to further sin,” contended Kevles. “Yet the power of nuclear weapons, the reach of new delivery systems, the utter vulnerability of cities, and the potential combustibility of the Cold War forced Oppenheimer and his fellow scientists to embrace their paradox, to accept both the anguish of their sin and the continuing responsibilities of national security.”

Biologists at War

The science warriors of our era – the biologists who are at the forefront of research that can be turned to new types of weaponry – face a similar paradox. “The horrendous events of September 11, 2001 placed bioterrorism high on the national security agenda,” noted Kevles. Biomedical researchers are confronted with a new dilemma: Much of their research can serve both the beneficent needs of health and the nefarious needs of terrorism.

Due to the contemporary global nature of biology, with thousands of journals easily accessible, the information is highly transparent – and the key agents of bioterrorism require relatively small-scale investments. Meantime, the funding stream for biology is rich. The National Institutes of Health earmarked $1.7 billion for bioterrorism research in fiscal 2003.

How biologists contend with this challenge is history waiting to be written. “The challenge posed by bioterrorism is unprecedented in the history of science, arms and the state,” concluded Kevles. “To deal with it, one would like from the country’s biomedical leadership the kind of courage, tenacity and vision that Robert Oppenheimer provided – an engagement with the problems of arms and the state that offers, to paraphrase the majority report on the hydrogen bomb, some limitation upon the totality of war, some cap to fear, some reassurance for mankind.”

Also read: National Security, Neuroscience and Bioethics

Paul Ehrlich: Can We Avert a Global ‘Nineveh’?

A shot of the planet Earth.

Due to human impacts on the planet, our species and the broader ecosystem may be “racing toward a miserable future.” Paul Ehrlich says we shouldn’t over-rely on technology to correct this troubling trend.

Published August 1, 2004

By Christine Van Lenten

Our “triumphant” species may be partying on toward the first collapse of a global civilization. By accelerating depletion of our natural capital, the interrelated trends of population growth, rampaging consumption, and worsening political and economic inequality have put us on a collision course with nature and eroded our ability to create a sustainable future.

The sources of these trends and how they can be altered is the subject of Paul and Anne Ehrlich’s new book, One with Nineveh, which Paul Ehrlich discussed at The New York Academy of Sciences (the Academy) this spring, at the invitation of the Environmental Sciences Section and the Science Alliance.

That title refers to the seat of the ancient Assyrian empire, which, you may have noticed, is no longer flourishing. Its demise was hastened by self-inflicted environmental damage – a cautionary tale.

Today, Ehrlich’s name is more widely recognized than Nineveh’s. Author of the 1968 bestseller The Population Bomb, he is Bing Professor of Population Studies at Stanford and has published extensively, won many awards, and been a forceful scientist-citizen spokesman on vital issues for decades.

Grave and Worsening

The issues he’s grappling with now are grave and worsening, and Ehrlich did not disguise his frustration with the problem that dismays him most. The human race has radically reshaped the planet; scientists understand all too well that we’re racing toward a miserable future; what must be done is all too clear; for years, scientists have been urgently trying to make this understood. But the mass media carry little science news, and too many citizens and policymakers remain blithely unconcerned. Magical beliefs that technology will solve all problems, quickly, contribute to this syndrome. Leadership is essential, but, Ehrlich believes, the Bush administration is making matters worse.

Scientists must do a better job of getting their story out, he insisted. One with Nineveh is a heroic, plain-English attempt to do this.

The Ehrlichs’ agenda for achieving needed change is proportional to the problems: that’s to say, it’s staggering in scope. One initiative would squarely tackle the challenge of modifying nothing less than human behavior itself. “Remember,” Ehrlich said, “we’re a small-group species, both genetically and culturally. For most of our 5-million-year history…we lived in groups that averaged below 200 people, and almost everybody within those groups was related. Now, evolutionarily in an eye blink of time, we’re trying to live in a global civilization of 6.3 billion people.” We must figure out how to do this better. And individuals’ rights become part of environmental problems, because we can’t tackle problems “if we’re at each other’s throats.”

A millennium assessment of human behavior, he suggested, would examine issues on the “population-environment-resource-ethics-power” spectrum, including the fundamental question of “what people are for.” Ethical issues – including our obligations to the world’s poorest people, to future generations, and to nature – would be central.

Potential for Change

This initiative may seem fanciful, but a partial precedent is enjoying impressive success: the Intergovernmental Panel on Climate Change involves scientists from many countries and disciplines in tackling an unprecedented global problem. Its work is regarded as authoritative. The UN is a cosponsor, and while Ehrlich believes the UN must be radically restructured to reflect 21st century realities, he views it as “the only game in town.”

Another promising precedent is the Millennium Ecosystem Assessment, an international scientific collaboration that will support local, national, and international decision making about ecosystem management.

But can human behavior change, and change quickly enough? Ethical standards have been evolving, Ehrlich reflected. For example, it’s no longer OK to beat your horse to death in the street; becoming a despot is no longer considered a good career move. And societies can change dramatically and rapidly: after President Truman desegregated the military, race relations in the United States changed quickly, though not enough; the Soviet Union collapsed suddenly.

Ehrlich sees the potential for similar change in how we treat each other and the environment, and it is in this that he places his hope. “When the time is ripe, people will begin to realize that the only realistic solutions today are ones we thought were idealistic yesterday. What I hope all of you will do is everything you possibly can to ripen the time.”

Also read: Sustainable Development for a Better Tomorrow

Flying High and Cutting through the Glass Ceiling

A large jet zips by with blue skies and white clouds in the background.

From sitting on the lap of Einstein as a child to making significant advances in aerospace and materials engineering as an adult, Pamela Kay Strong has done it all.

Published August 1, 2004

By Dan Van Atta

“Many, many times I’ve been the only woman in the room,” commented Pamela Kay Strong, a member of The New York Academy of Sciences (the Academy) from Huntington Beach, Calif. Her distinguished career in science and engineering was recently recognized when she was named a Fellow of the Society for the Advancement of Material and Process Engineering (SAMPE). “I think it’s made me a stronger person.”

A chemist and engineer whose career spans more than 30 years in the aerospace industry – including technical leadership positions at Hughes Aircraft Co., General Electric Co., Northrop Corp. and, since 1987, The Boeing Co. – Dr. Strong is just the third woman among the 93 individuals to be so honored by SAMPE.

Strong’s identification with science began as a young child. Her father, W. T. Strong, worked in the missile and space division of Goodyear at Holloman Air Force Base and often hosted visiting scientists, who were introduced to her as “uncle” or “aunt” in the family home. “I was an aerospace brat,” Strong said with a chuckle during a recent interview. She added that she can recall sitting on Albert Einstein’s lap and, at age 5, building a wooden rocket with the help of Wernher von Braun.

Shooting for the Stars

She then recounted an anecdote that was published earlier this year in S&T, the science and technology newsletter of her alma mater, Bryn Mawr College. When “Uncle Wernher” asked her how the launch of her wooden rocket had gone, she responded: “It didn’t go to the moon.” Strong said he then asked, “Well, did you get it off the ground?”

Her reply was, “Yes, it went as high as a tree.” To that response von Braun retorted: “Then it was a success! I can’t get mine off the ground.”

Strong’s interest in science had also taken off. In 1972 she earned a BS in organic chemistry from the Philadelphia College of Pharmacy and Science, and two years later her MS and PhD equivalent, also in organic chemistry, from Bryn Mawr. She soon followed in her father’s footsteps, entering the male-dominated aircraft industry.

“In the beginning it was ‘what’s this woman doing here?’” Strong recalled. “But after six months it became come out and join us – in the softball game or whatever it was they were doing. I’ve always tried to get along, and I quickly became one of the boys.”

At the same time, she was equally committed to “doing the best possible job that you can.” At GE in the mid-1980s she was an important member of the team that established the parameters needed to consistently manufacture commercial parts from polyimide (PMR-15) and other aircraft structural composites – an advance that led to significant improvements in aircraft performance.

Continue Fighting the Glass Ceiling

Pamela Kay Strong receives Fellows award from SAMPE International President Clark Johnson.

Strong’s current title is “Principal Engineer/Scientist 5/Technical Specialist” in the Materials and Process Engineering Department of Boeing’s Integrated Defense Systems business unit in Long Beach. She and her team provide technical and design support for nonmetallic manufacturing processes and material parameters used in aircraft, rockets and the B-1B bomber. In receiving the SAMPE recognition, she was cited for her contributions to the advancement of such diverse material technologies as composites, low observables and ablative materials.

“It’s unfortunate that women have to work 10 times as hard as men,” Strong said, then displayed her tongue-in-cheek sense of humor, “but it’s good that it’s so easy for us to do that.”

Her advice to young women seeking a career in science and engineering is much the same as for those already engaged in technical careers. “Find a mentor as fast as you can and hang on for dear life – don’t burn any bridges along your way.”

“And continue fighting the glass ceiling,” Strong concluded, “but don’t forget to bring your diamond glass cutting etcher with you.”


At Any Cost: Cheating, Integrity, and the Olympics

Runners take off from the starting line.

Researchers continue to advance the science behind doping in sports and are developing detection measures to catch the cheaters. But will it be enough to maintain the integrity of the Olympic Games?

Published August 1, 2004

By Diane Kightlinger

Crossing the finish line in Athens this August should mark the climax of the athletes’ quest to put native ability, training, perseverance, and courage to work in pursuit of their Olympic moment. And provided that’s all the athletes bring into play, they won’t mind the team waiting on the sidelines to signal the start of the next challenge – the contest between the dopers and the testers.

The result can topple victors, strip medals, and bar athletes from competing, possibly for life. For now, the competitors know only that sometime between the victory lap and awards ceremony and press conference, the doping control team will take aside the top four finishers and two other randomly selected athletes to find out if they played true.

Drug testing in the Olympic Games began in 1968, a response to illness and death caused by widespread amphetamine use in prior decades. Since then, the estimate of how many athletes use performance-enhancing drugs in sport has ranged from almost none to almost all. Look at test results and the dopers amount to less than 3% of athletes; ask coaches and trainers and the number can rise as high as 90%, according to “Winning at Any Cost: Doping in Olympic Sports,” a September 2000 report released by the National Center on Addiction and Substance Abuse (CASA) at Columbia University.

Banned Substances

Today the pharmacopoeia of substances banned at the Olympic Games includes not only stimulants, but narcotics, anabolic steroids, beta-2 agonists, peptide hormones such as EPO (erythropoietin) and hGH (human growth hormone), and a shelf-full of masking agents. Add designer drugs like the steroid THG (tetrahydrogestrinone), around which the BALCO scandal churns, plus the specter of gene doping, anticipated by the Beijing Olympics in 2008, and the testers face increasing odds of losing the detection game.

But don’t count them out just yet. The researchers and administrators focused on catching dopers have won important battles in recent years by developing tests for THG and EPO and by using them to catch abusers. Testers are increasingly taking a proactive stance, anticipating their opponents’ next moves and the techniques needed to identify illegal substances and methods. And the creation of the World Anti-Doping Agency (WADA) in November 1999 should soon result in near-universal standards for doping control across sports federations and countries.

Whether in- or out-of-competition, sample collection today is a painstaking ritual overseen by the athlete, his or her representative, doping control agents, and independent observers who act as the public’s eyes and ears. The athlete selects a sealed collection vessel and provides a 75-ml urine sample in view of a doping control officer (DCO) of the same gender. After dividing the urine into A and B bottles, the competitor seals them securely and makes sure the DCO records the correct code on the control form. Blood tests employ a phlebotomist and similar procedures to obtain two tubes of at least 2 ml each.
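As a rough illustration of the record-keeping this ritual implies, one might model a chain-of-custody entry like this (field names are hypothetical, invented for illustration, not WADA’s actual schema):

```python
# A hypothetical chain-of-custody record for one doping-control sample;
# every field name here is illustrative, not WADA's actual schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DopingControlSample:
    sample_code: str                  # code the athlete verifies on the form
    athlete_id: str
    collected_at: datetime
    dco_id: str                       # doping control officer, same gender
    matrix: str = "urine"             # "urine" (75 ml) or "blood" (2 x 2 ml)
    bottle_a_sealed: bool = False     # A bottle: opened for analysis
    bottle_b_sealed: bool = False     # B bottle: stored for confirmation
    declared_substances: list[str] = field(default_factory=list)
    signatures: list[str] = field(default_factory=list)

sample = DopingControlSample(
    sample_code="A123456",
    athlete_id="ATH-0042",
    collected_at=datetime(2004, 8, 20, 14, 30),
    dco_id="DCO-17",
    bottle_a_sealed=True,
    bottle_b_sealed=True,
    declared_substances=["multivitamin"],
)
```

The point of such a structure is the one the article returns to below: without an unbroken, signed record from collection vessel to laboratory, even a chemically perfect test may not survive legal challenge.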

Gaming the Tests

On site, the DCO checks the urine’s pH and specific gravity to ensure it will prove suitable for analysis, and may also screen the blood sample for reticulocytes, hemoglobin, and hematocrit. Athletes must document all prescription and nonprescription drugs, vitamins, minerals, and supplements they take; then all parties sign the doping control form and the samples are sent by courier for analysis at one of 31 laboratories accredited by WADA.

But testing during the Olympic Games accomplishes only so much: It won’t catch athletes who use steroids to bulk up during training but stop months before the Games, or those who use EPO much more than a few days before competition. “Ninety to 95% of the solution is effective, year-round, no-notice testing,” according to Casey Wade, WADA education director. “Give athletes more than 24 hours’ notice and they can provide a sample all right, but it’s going to be free from detection.”

The International Olympic Committee (IOC) requires most Olympic athletes to make themselves available for doping tests anytime and anywhere for one year prior to the opening of the Games. WADA plans some 2,400 tests this year, with a selection process based on the requirements of each sport, the substances an athlete might use, when the abuse might occur, and how long the body will take to clear the drug from the athlete’s system before the Athens Games start.

Once the Olympic village opens for the Games, the IOC will take charge of testing at sporting venues. WADA will continue to conduct out-of-competition tests inside and outside Greece, however, and at non-Olympic venues in Athens to determine which athletes will be allowed to take part in the Games.

The Key to Meaningful Doping Tests

The key to meaningful doping tests lies in the lab’s ability to detect substances and also to document the chain of custody meticulously enough to meet the burden of proof in court cases. Once the samples arrive in the lab, scientists store the B bottle for use in confirmation tests, and open the A bottle, withdraw multiple aliquots, and test for substances on the WADA Prohibited List. The U.S. Olympic Lab at the University of California at Los Angeles, a preeminent testing facility, employs an array of mass spectrometry techniques to work through the samples.

“Mass spectrometry breaks up the molecules and sorts the resulting fragments by mass,” said Don Catlin, the lab’s director. “We can identify steroids by chemical moieties with characteristic masses but, for example, THG was modified in such a way that it lacked those characteristic fragments, making it difficult to spot on conventional tests.”

THG posed only one of many challenges the lab has faced and overcome. Catlin said that the detection of EPO and hGH abuse is particularly vexing. EPO increases oxygen delivery to the muscles, and hGH enhances muscle growth. As potent substances, both appear only in minute quantities in body fluids.

“With methyltestosterone, you might have 500 nanograms per ml of urine; with EPO, you might have less than a nanogram,” explained Catlin. “You have to extract the EPO from the urine, and the less there is, the more difficult it is to extract with good recovery. Then you’re faced with the final jolt: EPO has a molecular weight of 30,000 to 35,000, whereas most of the drugs we’re working with have molecular weights of 300. EPO molecules are too large for our mass spectrometers, which means we have to use different approaches based on molecular biology. It’s really tough work.”

Blood and Gene Doping

A long-acting form of EPO, darbepoetin, became available shortly before the Winter Games in 2002. The existing test for EPO could detect darbepoetin, but Catlin chose not to announce that capability – and the element of surprise caught two gold medalists. Both were stripped of medals for events in which they tested positive in Salt Lake City and, later, of all medals they won at the Games.

For hGH, scientists, lab directors, physicians, and administrators have not yet agreed on a test, but that doesn’t mean athletes can freely abuse the substance. WADA has placed hGH on the Prohibited List, and DCOs will draw, freeze and store blood samples during the Athens Games for later analysis.

In addition to banning dozens of substances, the Prohibited List also bans methods such as blood and gene doping. The proliferation of gene therapy trials, which now number in the hundreds, and the promise of gene transfer methods to build skeletal muscle and increase red blood cell production, make genetic approaches to enhancing performance an encroaching reality.

“All the technology is in the medical literature,” said Theodore Friedmann, director of the Program in Human Gene Therapy at the University of California at San Diego. “The genes are all available or you can make them. The vectors, the viral tools, are all published and available. All it takes is three or four reasonably well-trained post-docs and a million or two dollars.”

On and Off: Inducible Genes

With that in mind, researchers are already focusing on several approaches for gene doping tests. Geoffrey Goldspink, a professor at University College Medical School in London, described some of the possibilities being pursued. If an adenovirus or lentivirus is used as the vector to transmit a gene such as hGH, Goldspink said, the virus might also move into cells in the blood or mucus. A scrape of the inside of the cheek, followed by real-time RT-PCR (reverse transcription polymerase chain reaction), could produce sufficient sample for scientists to distinguish the wild-type virus from the engineered version.

In addition, some gene transfer techniques may involve inducible genes, which can be switched on and off. Without a mechanism to stop production, EPO could swamp the body with red blood cells, for instance. But introducing a gene that can handle the switching function might give testers a detectable bit of DNA on the vector. Friedmann cautioned that although these approaches represent reasonable first steps, new technology will be required to characterize the system and enable researchers to predict when vectors or genes or gene products will appear and then detect them.

Whatever techniques ultimately prove viable, they are likely to drive one change already taking place: the shift from urine to blood tests for detection. “Some of the new tests that we are developing are based on the blood matrix,” said Olivier Rabin, WADA science director. “This is clearly going to be used to detect new substances, to better detect blood transfusions, and also in the future to detect gene doping.”

The Magnitude of the Doping Problem

For decades, the magnitude of the doping problem among Olympic sports and the rewards made possible by ignoring the issue tarnished every medal awarded, even if the athlete tested clean. Tom Murray, bioethicist and president of the Hastings Center in Garrison, New York, and a longtime member of the committee entrusted with drug control for the U.S. Olympic team, said, “I think for most of the time, drug control was just seen as a nuisance that they’d rather have go away. Their concerns were marketing and bringing home medals. Drug control was just a pain.”

Since the inception of quasi-independent organizations such as WADA and the national anti-doping agencies, which are funded only partly by their respective Olympic committees, many of the problems cited in the CASA report of 2000 have been alleviated. WADA employs a standard protocol for establishing the Prohibited List; accredits testing labs around the world; sends independent observers to oversee major events; and provides timely notice of banned substances and methods for athletes, coaches, and administrators. In addition, a detailed approach to reporting and managing results ensures legal recourse and standard sanctions for athletes who test positive.

Making Strides

On the other hand, the $3 million in research grants doled out by WADA each year, combined with $2 million from the U.S. Anti-Doping Agency, still runs far shy of the $50 million to $100 million collaborative effort over five years that the CASA report called for. But scientists are making strides by developing effective tests, streamlining existing procedures, and lowering costs.

And they seem almost eager to face sophisticated new substances and delivery systems, no matter how difficult detection may be. Catlin summed up his view by saying, “We’re still here, we’re still able to hold our heads up. When I toss in the towel, because there’s so much doping by so many means that we can’t detect it, then it’s an issue. But I don’t think we’re there yet.”

Also read: The Science Behind Doping in Sports

Carbon Sequestration on the Great Plains

A bison grazes on a grassy hill with white clouds and blue sky in the background.

While the concept of carbon sequestration might seem like a magic trick, researchers continue to advance its environmental and financial feasibility.

Published June 1, 2004

By Christine Van Lenten

Image courtesy of Tom via stock.adobe.com.

Carbon dioxide emissions dwarf all other greenhouse gases (GHGs) in quantity and exacerbate the impacts of climate change. But CO2 emissions are difficult to reduce. Chemically scrubbing them from smokestacks isn’t generally practicable, and many sources are mobile, small, and/or dispersed. Instead, achieving reductions requires adopting energy-efficiency measures, converting to renewable energy sources or other sources that contain less or no carbon, or attempting to sequester the CO2 after it’s been emitted – that is, removing it from the atmosphere and storing it.

Various carbon sequestration schemes are being pursued. Exploiting soil’s capacity to store carbon is the one advocated by Dr. Patrick Zimmerman, who directs the Institute of Atmospheric Sciences at the South Dakota School of Mines and Technology. And Zimmerman has a specific carbon storehouse in mind.

While the world struggles to devise policies, practices, and technologies that can slow global warming, the Great Plains region of the United States, says Zimmerman, continues to serve as a vast “carbon sink,” silently sucking CO2 out of the atmosphere. At a March 23, 2004, event co-sponsored by The New York Academy of Sciences’ (the Academy’s) Environmental Sciences Section and the Third Annual Green Trading Summit, Zimmerman, the featured speaker, contended that croplands and rangelands can store much more carbon. And, he stated, the science needed to quantify sequestered carbon and a system for bringing credits for it to market are available right now.

Markets are Emerging

Setting the stage for Zimmerman’s talk was Peter C. Fusaro, an organizer of the Trading Summit and chairman of Global Change Associates. Fusaro said the Kyoto Protocol, the international community’s attempt to reduce GHG emissions by specifying national targets and a timetable for meeting them – yet to be endorsed by the United States – is flawed and won’t work.

But markets for trading GHG credits are emerging, he reported. They’re modeled on existing pollution-trading markets, like the successful market for sulfur dioxide run by the U.S. Environmental Protection Agency’s Acid Rain Program. SO2 emissions are capped by federal regulation; parties reducing their emissions below regulatory limits can claim credits for reductions and sell them to parties that haven’t met their targets. The overall goal of reducing emissions is served.

In the United States, CO2 emissions aren’t capped by the federal government, but various state and regional initiatives are under way, and leadership for forming GHG markets is coming from state policy makers, Fusaro observed, through bipartisan efforts that are creating the conditions necessary for markets to succeed: “simplicity, replication, and common standards.”

Why Buy Credits?

Why buy credits for reducing CO2 emissions? Anticipation of regulatory schemes is one reason; good corporate PR is another; simple good practice, yet another. And demand generates supply.

California’s Climate Action Registry for voluntary reporting of GHG emissions is the nation’s first. At the invitation of New York’s Governor Pataki, nine northeastern and mid-Atlantic states are collaborating to create a registry and formulate a model rule that states can adapt for capping and trading carbon emissions; two other states and several Canadian provinces are participating as observers. The Chicago Climate Exchange is a new, voluntary pilot market focused on North America. Some major corporations are already trading carbon credits.

Large, existing exchanges will enter the market soon, Fusaro predicted. New York City is the “environmental finance center,” and New York State will be “the template for world trade in carbon. We will have cap-and-trade markets in New York next year.”

CERCs, Cycles, and Soils

In the world of carbon trading, the unit of exchange is the carbon emission reduction credit (CERC), equivalent to one metric ton of CO2. Trading CERCs for CO2 that’s been snatched from the air and stored in the soil may sound like a magic trick. And indeed, scientists are only beginning to understand the intricate and complex feedback loops among climate, atmospheric composition, and terrestrial ecosystems that govern this form of sequestration.
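One bookkeeping detail worth making explicit: soil studies usually report tons of carbon, while CERCs are denominated in tons of CO2. The conversion follows from atomic weights (C ≈ 12, O ≈ 16, so CO2 ≈ 44):

$$1\ \mathrm{t\,C} \times \frac{44\ \mathrm{t\,CO_2}}{12\ \mathrm{t\,C}} \approx 3.67\ \mathrm{t\,CO_2},$$

so each additional metric ton of carbon held in the soil corresponds, in principle, to roughly 3.67 CERCs.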

Zimmerman framed the science by explaining that Earth’s carbon inventory cycles among reservoirs – the atmosphere, lithosphere, hydrosphere, and biosphere. The reservoirs vary dramatically in size; so do carbon fluxes between them. Because fluxes between terrestrial ecosystems and the atmosphere are large, over time small changes in them can produce large changes in how much carbon accumulates in the atmosphere.

Zooming in on the molecular level helps illuminate the huge potential of the carbon sequestration scheme that Zimmerman is advocating. During the growing season, green plants absorb solar energy and remove CO2 from the atmosphere, producing carbohydrates. Because these compounds contain less oxygen for each carbon atom than CO2 does, “surplus” oxygen is released; carbon is stored.
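The bookkeeping behind that “surplus” is visible in the net photosynthesis reaction:

$$6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\ \text{light}\ } \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}.$$

CO2 carries two oxygen atoms per carbon, while the carbohydrate carries only one; the excess oxygen leaves as O2, and the carbon stays in the plant.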

Plants also respire, taking in oxygen to metabolize carbon compounds and release energy needed for cellular processes, and producing water and CO2. When the growing season ends, photosynthesis and plant respiration cease, and organic matter, rich in carbon, decomposes, primarily because organisms in the soil feast on the carbon, metabolize it, and return CO2 to the atmosphere. When soil containing organic matter is broken up – for example, by tillage – more organic matter is exposed to this oxidizing process, accelerating release of CO2 and depletion of soil’s carbon bank.

The “Missing Sink”

If you want to prolong sequestration of carbon in soil, the crucial question becomes this: What conditions hinder decomposition of organic matter in soil, slowing down the release of CO2 back into the atmosphere?

Vegetation growing at high latitudes, Zimmerman said, fits the bill. At high latitudes, plants grow quickly in the summer, and about half the growth is underground. In winter, freezing slows the metabolic processes that oxidize carbon, trapping it within the soil in the form of organic material that’s not completely decomposed. By contrast, Zimmerman noted, “it’s really tough to store organic matter in tropical soils.”

High-latitude efficiencies in storing carbon also offer one answer to the mystery of the “missing sink.” Scientists calculate that quantities of CO2 produced by burning fossil fuels and resulting from deforestation and other land-use practices are greater than quantities taken up by the atmosphere and oceans. Where’s the balance? Seasonal variations in rising concentrations of atmospheric CO2 point to a slight net carbon sink that’s land-based, in the northern hemisphere, in North America, said Zimmerman. And the Great Plains, a high-latitude region, appears to be that “missing sink.” Analysis of carbon and oxygen in atmospheric CO2 samples collected from air masses as they traverse North America appears to confirm this.
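In budget form the argument looks like this, with magnitudes that are illustrative only (roughly the figures reported for the 1990s, in gigatons of carbon per year):

$$\underbrace{F_{\text{fossil}}}_{\approx 6.3} + \underbrace{F_{\text{land use}}}_{\approx 1.6} \;=\; \underbrace{\Delta C_{\text{atm}}}_{\approx 3.2} + \underbrace{F_{\text{ocean}}}_{\approx 2.3} + \underbrace{F_{\text{sink}}}_{\approx 2.4},$$

and the residual term $F_{\text{sink}}$ is the land-based “missing sink” that, by this account, the Great Plains appears to supply.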

Adapting Land-Use Practices

The Great Plains, he added, can store still more carbon, through alteration of land-use practices – for example, converting high-tillage cropland to low-till or no-till, or to pasture or grassland. Zimmerman, who grew up on a wheat farm, pointed out that organic matter also increases ecosystem productivity and resilience to stress.

Sequestering carbon in the Great Plains can significantly offset emissions elsewhere, he contended. For the purpose of slowing global warming, what matters is that CO2 is sequestered, not where. Thus, cropland and rangeland in the Great Plains can serve as generators of tradable CERCs and a brake on global warming.

But how can you tell how many metric tons of CO2 an acre of land is sequestering? As described by Zimmerman, the science is fiercely complex. His team of meteorologists, ecologists, plant physiologists, GIS experts, analytical chemists, computer scientists, and remote-sensing specialists virtually swarms over the landscape – working from the molecular level to individual leaf, grass canopy, and atmosphere – gathering data on a host of processes and factors.

Equipment ranges from plastic bags placed over plants to towers, tethered balloons, an airplane equipped with a digital imaging system, and satellites. Measurements include variations in temperature, humidity, rainfall, snowfall, and solar radiation; quantities of water vapor, volatile organics, and CO2 emitted from vegetation, and the fluxes to vegetation; and leaf-area indices. Data on land-use history, vegetation and crop dynamics, and feedback among the carbon, phosphorus, nitrogen, and sulfur cycles are gathered, too.

Models Built on Data

Data are used to build numerical models of physical, chemical, and biological processes; these models are then linked to simulate ecosystem carbon cycling and atmospheric chemistry, and extrapolated to landscape and regional scales. Regional modeling is essential, Zimmerman emphasized, “because that’s where the impacts are felt – where you live.” He termed the Black Hills (in South Dakota and Wyoming) “a great outdoor laboratory” that lends itself to the measurements needed to constrain regional models. His team is now establishing a network of field monitoring stations.

Determining how to link models built from physics- and chemistry-based algorithms, across spatial and temporal scales that span orders of magnitude, is, Zimmerman observed, like trying to assemble an elephant from a box of molecules without knowing what an elephant looks like. The work is iterative and time-consuming. And modeling rangeland to quantify incremental carbon storage poses special difficulties.

But while this science is still far from precise, it’s plenty good enough to get CERC markets going and “to make a difference,” he contends. Farmers can adapt their land use to sequester CO2 now, while we develop better technologies – and the socio-political will – to cut emissions.

And how can what Zimmerman termed “enhanced ecological carbon storage” be capitalized? For a market to be viable, he’s concluded, six conditions must be met (a schematic sketch follows the list):

(1) The business-as-usual baseline must be established.

(2) The additional CO2 each landowner sequesters must be quantified.

(3) How long CO2 will remain sequestered must be forecast.

(4) No unintended, offsetting releases can be generated. Converting cropland to pasture and introducing cattle, for example, produces methane, every ton of which has roughly the effect on global warming of 20 tons of CO2.

(5) Ownership of CERCs must be documented.

(6) CO2 sequestration must be verified.
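The list reads naturally as a validation checklist, sketched below in Python. The structure, names, and default figures are illustrative inventions only – a sketch of the logic, not Zimmerman’s actual system:

```python
# Illustrative sketch of the six market conditions as a checklist.
# All names and numbers are invented for illustration; they are not
# drawn from the actual system.
from dataclasses import dataclass

METHANE_GWP = 20  # tons of CO2-equivalent per ton of methane (figure cited above)

@dataclass
class ParcelClaim:
    baseline_tons_co2: float        # (1) business-as-usual baseline
    measured_tons_co2: float        # (2) quantified sequestration
    expected_duration_years: float  # (3) forecast permanence
    methane_released_tons: float    # (4) unintended offsetting releases
    owner_documented: bool          # (5) documented CERC ownership
    independently_verified: bool    # (6) verified sequestration

    def net_cercs(self) -> float:
        """Additional tons of CO2 sequestered, net of methane offsets."""
        additional = self.measured_tons_co2 - self.baseline_tons_co2
        return additional - METHANE_GWP * self.methane_released_tons

    def is_marketable(self) -> bool:
        """All six conditions must hold for the credit to be tradable."""
        return (
            self.net_cercs() > 0
            and self.expected_duration_years > 0
            and self.owner_documented
            and self.independently_verified
        )
```

Note how condition (4) enters as a debit: in this rendering, each ton of methane released wipes out 20 tons of CO2 credit.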

Satisfying All Six Conditions

Zimmerman and his colleagues have designed a system, C-Lock (patent pending), that he said satisfies all six conditions. Internet- and GIS-based, it creates, certifies, standardizes, and verifies CERCs for specific land parcels, by integrating data on slope, climate, soil, historical land-use variables, and other factors. Farmers can access it directly online; no middlemen are required.

To create economies of scale, so benefits exceed transaction costs, C-Lock aggregates CERCs for many small landowners. Monte Carlo analysis is used to quantify uncertainty and normalize CERCs so they have universal currency. A reserve pool of CERCs with higher uncertainty values and correspondingly lower market values can be tapped to offset fluctuations in actual soil performance. The system’s transparency facilitates four levels of verification and “audits” that employ a variety of databases and scientific tools.
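Here is a minimal sketch of the Monte Carlo idea, assuming invented per-parcel estimates and normally distributed errors (the actual distributions and data behind C-Lock are not described in this account):

```python
# Monte Carlo sketch: propagate per-parcel uncertainty into an aggregate
# CERC estimate. All parameter values are illustrative inventions.
import random

def simulate_aggregate(parcels, n_trials=10_000):
    """Each parcel is (mean_tons_co2, std_dev); returns mean and a conservative low end."""
    totals = sorted(
        sum(random.gauss(mu, sigma) for mu, sigma in parcels)
        for _ in range(n_trials)
    )
    mean = sum(totals) / n_trials
    low_estimate = totals[int(0.05 * n_trials)]  # 5th percentile
    return mean, low_estimate

# Hypothetical example: three small parcels pooled to create scale
parcels = [(120.0, 30.0), (80.0, 25.0), (200.0, 60.0)]
mean, conservative = simulate_aggregate(parcels)
print(f"expected: {mean:.0f} t CO2; conservative (95%): {conservative:.0f} t CO2")
```

Quoting a conservative low-end figure is one way the reserve pool described above could absorb shortfalls when soils underperform expectations.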

And because the carbon-storage capacity of tilled soil isn’t saturated, C-Lock quantifies only changes in amounts of soil carbon. Trying to quantify absolute amounts would pose daunting soil-sampling problems. Data on land-use history are key here. Quantification needn’t be precise for each individual land parcel; just reproducible and transparent. But it must be reasonably accurate in aggregate, and the uncertainty (financial risk) should be quantified to achieve maximum value.

Launching the System

C-Lock is now equipped with GIS for South Dakota, and a trade is in the works; trades in Idaho, Montana, Wyoming, and North Dakota will follow, Zimmerman predicted, adding that C-Lock can accommodate other GHG emissions and forms of sequestration, anywhere in the world.

What’s needed for it to succeed? A pilot phase, cap-and-trade policies, and policies that define soil sequestration’s role in the GHG reduction strategy, he said. But his biggest concern is that huge environmental advantages will be lost if USDA incorporates carbon sequestration into conventional farm subsidy programs. We have an obligation to make a difference, he insisted, and we can: markets can work, benefiting farmers, ranchers, and the environment.

As measures to slow global warming develop, what role is the Academy playing? Its Environmental Sciences Section is stepping up to the plate: Chair Michael Bobker says it’s creating “a dialogue around greenhouse gases and emission trading issues, as well as carbon reduction and sequestration projects” – an initiative squarely in keeping with the Academy’s historical role as a forum for exploring and debating the scientific issues that matter most, and advancing science for the public good.

Also read: The Promise and Limitations of Carbon Capture

Tapping into Ancient Urges for Food and Love?

A young woman plays a ukulele.

“After silence, that which comes nearest to expressing the inexpressible is music.”
– Aldous Huxley, Music at Night

Published March 1, 2004

By Linda Hotchkiss Mehta

Can music be reduced to mere brain anatomy and electrochemical interactions within the neural templates through which we experience it? Or will what we learn from science simply reinforce a reality the poets have intuited all along?

A group of scientists came together in Venice in October 2002 to take a look at what is known about music through the neurosciences. This area of study is providing insights into higher cognitive function through the mechanisms of musical perception and processing in the human brain. These scientists, many of whom are musicians themselves, approach their work well aware of the incredibly complex process that results in artistic expression and perception.

One broad question that has been explored is a perennial one about intelligence and musical ability – is musical aptitude an integral part of a person’s general cognitive potential or does it exist on its own, a separable and different type of intelligence?

Obviously, general intelligence alone is insufficient – plenty of demonstrably intelligent people never develop into excellent musicians, even when provided with an early music education. But must one be intelligent to be an accomplished musician? Evidence suggests that high general mental aptitude is necessary if special aptitudes (dare we say talent?) are to be fully developed.

In other words, the answer is yes: General intelligence and musical aptitude probably are linked. Furthermore, children who participate in musical activities show a higher degree of “mental speed” (a measure of mental aptitude) than their peers. So these findings have wide implications: Questions about how musical training can enhance general mental aptitude and what neuroscience can tell us about the effectiveness of various pedagogical techniques for musical training are of vital interest.

A Developmental Approach

Only a developmental approach could illuminate these questions. The Neurosciences and Music, a volume in the Annals of the New York Academy of Sciences resulting from the Venice meeting, accordingly focuses on neural development in both musicians and non-musicians, seeking to clarify the development of higher cognitive function in general through the lens of musical abilities in particular.

Contributing scientists explore the mechanisms of human perception of the components of music (pitch, timbre, rhythm and harmony), the development of musical abilities, and the fate of musical abilities within the contexts of cognitive disorders in children and of dementia in the aged.

Scientists studying visual imagery have developed techniques for identifying and quantifying the perception of a visual experience, including mental image-making during the act of reading. What a reader actually looks at is black marks on a page, bearing no resemblance to the image the written words conjure up in the brain; the observing scientist cannot “see” the subject’s mental image, and the process can be traced only with brain-imaging techniques.

Using the same brain-imaging tools, scientists can watch what happens neurologically while a person processes music. In one experiment, subjects listened to music while electroencephalography was used to trace brain responses. Musical phrases with syntactically inappropriate endings elicit a characteristic brain response known as early right anterior negativity. Shakespeare understood this intuitively: “How sour sweet music is,/When time is broke, and no proportion kept!/So is it in the music of men’s lives.”

Musicians vs Nonmusicians

A group of skilled musicians showed no significant differences from nonmusicians when presented with tasks designed to assess perception of melody, structuring of harmony, and more complex musical presentations. The subjects were asked to judge the similarity of musical selections and the degree of completeness of a piece of music and to identify the musical emotion expressed. Non-musicians demonstrated an ability to use the same principles as musical experts as they listened to music, which suggests that the capacity to enjoy music is universal and not dependent on training.

Even young children with no musical training demonstrate innate musical knowledge: their electric brain potentials respond to “inappropriate” chord progressions – ones that do not resolve from dominant to tonic, the progression experienced as a normal, or authentic, cadence. The brain structure in which this response occurs is also involved in processing the syntax of language, which suggests that this aspect of musical ability is something that the human brain is already structured to do.

Cultural Differences

We are also led to wonder about cultural differences in music perception. Interestingly, when the rhythms of spoken French and English were compared with French and English classical musical themes, the music of each culture showed rhythmic patterns similar to those of its spoken language. When language perception is tested independently, listening to one’s native language elicits a different neurological response than does listening to an unfamiliar language.

But music perception is dramatically different. In spite of the apparent link between a culture’s language and its musical rhythms, studies that compared the responses of subjects to music of their native culture with their responses to unfamiliar music found that differences depended more on the subjects’ musical expertise than on their familiarity with the music. This is good news for Yo-Yo Ma’s Silk Road Project, because it suggests that appreciation of another culture’s music should not be out of reach for most people.

More Grey Matter

The neuroanatomical differences that do exist between musicians and non-musicians may instead reflect the complex motor and auditory skills required for performing on an instrument and learning musical repertoire, as well as the processing of feedback necessary to monitor a performance. Musicians have more grey-matter volume in several brain areas than non-musicians, and even than amateur musicians – differences that probably reflect intensity of practice.

Another means of elucidating the neural events underlying imagery and perception is to study cognitive function in persons with injuries to precise locations in the brain. It turns out that both perception (of music as it is played) and the capacity to form a mental image (in the absence of audible music) are impaired when the associated brain structure is damaged, which demonstrates that both processes depend on the same neural territory.

Wordsworth alludes to this human capacity in his poetry: “The music in my heart I bore,/Long after it was heard no more.” Without this capacity to imagine musical tone and timbre accurately and vividly enough to use them in new arrangements, after all, Beethoven would have lost the ability to compose when he lost his ability to hear.

As scientifically defined by Ian Cross of Cambridge, “music embodies, entrains, and transposably intentionalizes time in sound and action.” Most of us, however, think first of the emotional response music engenders. Poets have described music as the language of angels and the food of love, a medium with “charms to soothe a savage breast.” Many people experience “chills” or “shivers” when certain musical phrases are played and describe this experience as euphoric. These responses can be elicited fairly reliably even in a laboratory, where the associated psychophysiological responses can be measured.

The Pleasure of Music

It appears as though the pleasure we derive from music occurs because our neocortex can reach ancient neural systems involved with basic biological stimuli linked to survival. Perhaps the capacity to make and enjoy music is the happy accident of skills acquired and refined for more basic needs: nourishment and reproduction. The poets anticipated the scientists by centuries, in linking music with the ancient urges of love and food.

The poets also speak of music’s power to help us reduce stress: “Music alone with sudden charms can bind/The wand’ring senses, and calm the troubled mind,” wrote William Congreve. As scientists discover more about the links between the immune system and stress, the stress-reducing mechanisms of music might be a fruitful area for research.

The contemporary composer Karlheinz Stockhausen observed that “sonic vibrations do not only penetrate ears and skin. They penetrate the entire body, reaching the soul, the psychic center of perception.”

Stockhausen believed that the ratio between the unknown and the known has remained pretty much the same over time: The discoveries of science may explain much, but new questions are perpetually raised. Thus wonder will never die, and the poets may have the last word. What better words than these from Alfred, Lord Tennyson: “Let knowledge grow from more to more,/ But more of reverence in us dwell;/That mind and soul, according well,/May make one music as before.”

Also read: Music on the Mind: A Neurologist’s Take

Sprawling Cities Can Coexist with Thriving Ecosystems

A rooftop garden with tall city buildings in the background.

Many major urban areas are constrained in the amount of green space they can provide to residents. Encouragingly, building rooftops have emerged as a solution to fill this shortfall of urban green space.

Published January 1, 2004

By Peter Coles

Jacob K. Javits Center – New York City. Image courtesy of demerzel21 via stock.adobe.com.

The common image of cities as hot spots of crime and grime may need updating. They also can be havens of natural and cultural diversity – and could hold the keys to sustainable development in the 21st century.

While some 3.2 billion people – half the world’s population – are now estimated to live in towns and cities, with a growing number of poor, “urban” is by no means incompatible with “nature,” even in a major city like New York. Once rare, peregrine falcons now nest on Manhattan bridges, while a survey carried out by the Brooklyn Botanic Garden found over 3,000 species of plants within a 30-mile radius of the city – far more than in the vast cornfields of the Midwest.

The Need for Preservation

And, while the presence of man has driven some species of plant and animal close to extinction, cities may now be the only places they are still found. Paradoxically, these species will no longer survive without human intervention to preserve them.

These topsy-turvy ideas emerged during a meeting at The New York Academy of Sciences (the Academy) in October 2003, entitled “Urban Biosphere and Society: Partnership of Cities,” co-organized with CUBES (Columbia University-UNESCO Joint Program on Biosphere and Society) and UN Habitat.

For many people, the built-up environment is the antithesis of nature, as Rutherford Platt, of the Ecological Cities Project at the University of Massachusetts Amherst, pointed out. “Nature” is somewhere else, outside the city, in a national park or some remote wilderness. But, recalling Lewis Mumford, champion of the green belt, he emphasized that not only can nature be part of a city, but cities themselves can be as much a part of nature as an anthill or a beaver colony.

Creating New Types of Habitat

Ecologists are now appreciating that cities, as well as preserving rare patches of ancient flora and fauna in parks and settler cemeteries, also present challenging new habitats, with their own adapted plants. “We are creating types of habitat that have never been seen before,” said Charles Peters, Kate E. Tode Curator of Economic Botany at the New York Botanical Garden, “like a vacant lot with 35 minutes of sunlight a day. It’s an interesting niche.”

Peters, who has been studying a 40-acre swathe of ancient oak and hickory forest in the Botanical Garden for several years, also defended the invasive species that are settling there, filling niches left by native species that have failed to adapt to an urbanized habitat. “The most important thing is that these plants continue to function, whether they’re from China or Siberia. We can’t put the forest back the way it was 200 years ago. To do that, we’d have to put the Bronx back the way it was 200 years ago. Forests are continually changing. What’s important is that the new species are controlling erosion, providing nutrients for the soil, recycling the air.”

Others argue that intact, native ecosystems, like the remnants of oak woodlands and prairies in Chicago, have a far richer biodiversity than those colonized by invasive species, and are more sustainable. Since 1996, Chicago Wilderness, a loosely-structured coalition that today comprises over 165 associations, institutions and organizations, has been working to restore biodiversity in the Windy City, which is visited by some 6 million neo-tropical birds every year on their way to and from Canada.

Retaining Residents

Chicago’s city fathers, explained John Rogner, Chicago Field Supervisor of the Fish and Wildlife Service of the Department of the Interior, bought patches of oak woodland and prairie to prevent them from being developed. Their argument was that a beautiful environment, with access to nature, would stop residents – the city’s life force – from moving away.

After three years of research, including “bio blitzes” in which local residents and children help scientists count species, Chicago Wilderness established a “biodiversity recovery plan.” With a wide range of projects, such as ridding the oak woodlands of tenacious but non-native buckthorn, the consortium is also helping to restore brown-field sites, like Calumet, south of the city, which ironically harbors several endangered and critically imperiled species of bird, surviving amid the derelict steel plants and toxic waste dumps.

Mark Wigley, interim dean of the Graduate School of Architecture at Columbia University, suggested that for most people “old cities are the heroes, and new cities are the villains.” But this idealized image leaves out the crime, open sewers, disease, and overcrowding characteristic of city life in the Middle Ages.

For Robert Pirani, director of environmental programs for the New York Regional Plan Association, the “villain” today is not so much the post-industrial downtown as it is suburban sprawl. In the past 10 years, he said, developed land in the New York area has expanded by 100%, while population has increased by less than 10%.

Sprawling Cities

This means “thousands of homes surrounded by lawns, and shopping malls surrounded by parking lots,” he said. According to Rutherford Platt, this trend can be seen across the U.S., where suburban population has increased fourfold since 1950, compared to an 85% increase in overall population. And, he added, car ownership in the U.S. has risen by 100% since 1970, while population increased by 40%. In Atlanta, which has been dubbed “sprawl city,” drivers spend an average of 72 hours a year in gridlock, he said.

If sprawl is a middle-class phenomenon in developed countries, however, it is associated with poverty in much of the global South. While 82% of Brazil’s population live in cities, said Rodrigo Victor, of the São Paulo Biosphere Reserve, some 23% of the population of São Paulo live in shanty towns, mostly on the edge of the Green Belt Biosphere Reserve that surrounds the city, a part of the Atlantic Forest Biosphere Reserve.

With a global trend towards urban living – two-thirds of the world’s population will live in cities by 2030 – the challenge is to find sustainable solutions to urban growth. One approach, according to freelance journalist Helen Epstein, is through architecture itself.

A new generation of high-performance buildings attempts to behave more like natural systems, with on-site water management, passive solar energy production, and natural lighting and ventilation reducing their “footprint,” or impact on limited natural resources. An example is the Solaire housing development in Battery Park City, Manhattan. But, as architect Ernie Davis, mayor of Mount Vernon, New York, pointed out, these buildings are not usually for the poor, whereas project housing, designed to look as though it’s for the disadvantaged, lacks such advanced design features.

Green Rooftops in South Korea

Green rooftops also offer a solution, as Kwi-Gon Kim, professor of landscape architecture at Seoul National University, South Korea, demonstrated. With 42% of Seoul covered by buildings, landscaping rooftops could add an estimated 200 square kilometers of green space to the city, about 30% of the Seoul area. In an experimental green roof project on top of UNESCO’s downtown Seoul office, just five months after its construction the 75 species of plant introduced at the outset had been joined by an additional 39 species, presumably from surrounding green belt areas, while 37 species of insect had colonized the site.
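The figures reconcile neatly if one assumes Seoul’s administrative area of roughly 605 square kilometers – a number supplied here for context, not given in the talk:

```latex
0.42 \times 605\,\mathrm{km^2} \approx 254\,\mathrm{km^2}\ \text{of rooftop};\qquad
\frac{200\,\mathrm{km^2}}{605\,\mathrm{km^2}} \approx 33\%\ \text{of the city's area}
```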

Seoul was one of 11 cities invited by CUBES to prepare case studies to see whether, and how, the UNESCO “biosphere” model could be applied to urban areas. This model, designed 30 years ago, has since been applied in 440 UNESCO Man and the Biosphere (MAB) sites in 97 countries. These are areas of terrestrial and coastal ecosystems that reconcile the conservation of biodiversity with its sustainable use.

They are internationally recognized, nominated by governments, and remain under the sovereign jurisdiction of the states in which they are located. Usually, they consist of a “core” area that has minimum human impact, surrounded by a “buffer zone” and a “transition” area, with increasing levels of social and economic activity, respectively. But, while some of the sites adjoin cities (like São Paulo), to date there is no urban biosphere site as such.

A Future Urban Biosphere Site

Cape Town, South Africa, which is already surrounded by three natural biosphere reserves, gives some clues as to what a future urban biosphere site might look like, although it is just a theoretical case study at this stage. As Ruida Stanvliet, of the Western Cape Nature Conservation Board, illustrated, the region houses 3.5 million people, some of them affluent and white, living in suburbs, while much of the black population lives in extreme poverty, in temporary housing and with a high incidence of HIV/AIDS. Nonetheless, the area boasts a rich biodiversity, with some 9,000 plant species.

“Environment conservation is crucial for poverty alleviation,” said Stanvliet. “It connects people to their sustainable resource base.” And in Cape Flats, one area in the Cape Town urban biosphere case study, over 20% of the people live in sprawling, informal settlements. In some communities, 70% live on less than $1 a day, and only 36% of adults are in paid employment.

The windswept mosaic of dunes and wetlands of Cape Flats is where victims of apartheid were relocated. Now, in a pilot initiative, the City of Cape Town has joined with the Botanical Society of South Africa, the National Botanical Institute and the Table Mountain Fund, to form Cape Flats Nature. This project focuses on conservation and restoration of biodiversity in several sites, enlisting the participation of local people through educational programs.

The Cape Flats Nature project has a certain resonance with the Chicago Wilderness brown-field development in Calumet, halfway across the globe from Cape Town. This linking of cities, at least informally, was one of the ambitions of the Academy/CUBES meeting.

As Many Questions as Answers

The meeting raised as many questions as it answered, but that was another of its ambitions. In a city like New York, where would the “core” of a biosphere site be? For William Solecki, of the Department of Geography at Hunter College, City University of New York, it could be the harbor and estuary area, which is historically the focus of human activity in the city, while pockets of intact wetland survive in adjacent Jamaica Bay.

And the “buffer zone” might be the watershed in the Catskills that feeds the “core.” Indeed, as Christopher Ward, commissioner of the New York City Department of Environmental Protection, explained, New York was able to avoid spending millions of dollars on a new water treatment plant by investing in protection of the watershed.

This inclusion of more distant areas in the biosphere of a city like New York is a way to acknowledge that its “footprint,” unlike that of an equivalent natural area, can even extend thousands of miles. The coffee consumed in New York has a direct impact on plantations as far away as Bolivia, which, incidentally, is where some of the migrant warblers come from that feed in Central Park every May and October. Food for thought.

Also read: The Impact of Climate Change on Urban Environments


About the Author

Dr. Peter Coles is a freelance science writer and photographer living in Paris, France.