The Genius of Quantum Physicist Richard Feynman

Missives from Feynman in Perfectly Reasonable Deviations from the Beaten Track, a book of his letters edited by his daughter Michelle Feynman, reveal his genius and wit. What was his contribution to the canon of 20th-century quantum physics?

Published February 3, 2006

By Chris H. Greene

Richard Feynman in 1959. Image via Wikimedia Commons.

“Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers in the preceding generation … Learn from science that you must doubt the experts. As a matter of fact, I can also define science another way: Science is the belief in the ignorance of experts.”
— Richard Feynman, 1981

We all know the stories of Richard Feynman. He was at times a showman and a clown. He expressed irreverence toward prestigious, hoary organizations like the National Academy of Sciences and the Royal Swedish Academy of Sciences. The tragic death of his young wife during the time of the Manhattan Project became familiar to millions through the touching Matthew Broderick film, Infinity. But behind his public persona lay one of the truly independent and innovative minds of the 20th century. Richard Feynman felt an intense, personal need to see physical phenomena in his own terms, and from his own perspectives, using theories that he generated himself.

At the same time, Feynman’s theoretical constructs did not arrive on the planet like a bolt from nowhere. His most important contributions were ideas that were in some sense already “in the wind,” but his way of developing them into consistent theoretical descriptions of nature differed dramatically from methods popular at the time.

Paradoxical Infinities

It may seem surprising, but the theoretical program that resulted in Feynman’s 1965 Nobel Prize (also awarded that year to Julian Schwinger and Sin-Itiro Tomonaga) was not aimed so much at explaining the result of any particular experiment as it was an attempt to resolve some of the apparently self-contradictory aspects of both classical and quantum electrodynamics theory. If you shake an electron, it radiates light waves, whose electric fields must in turn act back on the electron to lower its energy. But attempts to calculate this “radiative reaction force” led to paradoxical infinities, in clear contradiction with experience.

In Feynman’s doctoral thesis work with John Wheeler at Princeton, the two entertained fantastic possibilities in a desperate attempt to solve these paradoxical infinities. One peculiar notion that emerged was that if, in a certain sense, the classical fields are allowed to propagate backward in time, the paradoxes and the infinities appeared to be magically removed.

A variant of this idea survived when Feynman wrote down his quantum mechanical formulation of this problem, which he credits to Wheeler for originally tossing out: that the positron, the antiparticle of an electron, can be regarded as an ordinary electron moving backward in time. Surely you’re joking, Mr. Feynman! As fantastic and unbelievable as this idea seems when stated in words, when formulated mathematically it was found that a consistent theoretical framework emerged, without the troubling infinities.

Moreover, Feynman created a simple way for these complicated calculations to be carried out, which is still used today: first, draw lines that represent electrons, positrons, and photons moving forward and backward in time in different ways that can contribute to the process of interest. Then apply Feynman’s rules for translating each such Feynman diagram into a precise mathematical formula.

Quantum Electrodynamics

One of the most famous applications of Feynman’s quantum electrodynamics was his calculation of a tiny frequency difference between two nearly identical energy levels (2S₁/₂ and 2P₁/₂) of the simplest atom, hydrogen. Willis Lamb and Robert C. Retherford had caused a stir in 1947 when they measured this frequency difference to be 1057 million cycles per second (MHz), because the then-accepted theory of Paul Dirac suggested that this difference should be identically zero. The methods for calculating this interaction between an atomic electron and the “vacuum-fluctuating electric fields of free space” gave infinity, a useless result entirely irrelevant to the experiment.

Using the Feynman calculus, however, a result very close to the experimental frequency splitting (the so-called “Lamb shift”) was obtained. In the intervening decades, both experiment and theory have improved, and we now know this Lamb shift experimentally to be 1057.8447 (plus or minus 0.0034) MHz, while theory based on Feynman’s work predicts 1057.839 (plus or minus 0.006) MHz.

Within experimental uncertainties, and within theoretical uncertainties associated with our imperfect understanding of the proton’s nuclear structure, these agree. Nature thus confirms the remarkable synthesis of theoretical ideas into working quantum electrodynamics, achieved by Feynman, as well as by Schwinger and by Tomonaga.
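The quoted agreement is easy to verify directly. A minimal sketch, using the Lamb-shift values given above and assuming the standard convention of combining the two quoted uncertainties in quadrature:

```python
# Measured vs. predicted Lamb shift, as quoted in the text (MHz).
exp_val, exp_unc = 1057.8447, 0.0034   # experiment
thy_val, thy_unc = 1057.839, 0.006     # QED theory

diff = abs(exp_val - thy_val)                 # difference of central values
combined = (exp_unc**2 + thy_unc**2) ** 0.5   # uncertainties added in quadrature

print(f"difference:           {diff:.4f} MHz")
print(f"combined uncertainty: {combined:.4f} MHz")
print("agree within uncertainties:", diff <= combined)
```

The difference of about 0.006 MHz sits comfortably inside the combined uncertainty of about 0.007 MHz, which is the sense in which theory and experiment agree.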

Advancing the World of Theoretical Physics

And what are we to take from these strange notions? Are positrons really just electrons moving backward in time? Feynman tended to dismiss such queries as having no more relevance to physics than debates about how many angels fit on the head of a pin. Here is one more example where the equations developed by theoretical physicists, after extensive testing, are the bottom line. Seemingly bizarre philosophical implications, when those equations are stated in words (such as “particles moving backward in time”), do not matter a whit. What matters from the physicist’s perspective is the explanatory and predictive power of the resulting theory.

In the end, Feynman’s work parallels eerily the way the “luminiferous aether” was abandoned as irrelevant, once physicists accepted around the beginning of the 20th century that Maxwell’s equations by themselves adequately describe all classical phenomena of electricity and magnetism. And it is similar to the way Einstein’s equations of relativity, and the peculiar quantum theory, were accepted despite their troubling, almost nonsensical implications for how we think about time, space, and reality. As Niels Bohr wrote and was quoted in Wheeler and Feynman’s 1945 Reviews of Modern Physics article:

We must, therefore, be prepared to find that further advance…will require a still more extensive renunciation of features which we are accustomed to demand of the space-time mode of description.

The world of theoretical physics is better today because Richard Feynman was brave enough to contemplate and develop ideas that required such a renunciation.

Lee Smolin: A Crisis in Fundamental Physics

With an infinity of universes proposed, and more than 10^400 theories, is experimental proof of physical laws still feasible?

Published January 1, 2006

By Lee Smolin

Image courtesy of WP_7824 via stock.adobe.com.

For more than two hundred years, we physicists have been on a wild ride. Our search for the most fundamental laws of nature has been rewarded by a continual stream of discoveries. Each decade back to 1800 saw one or more major additions to our knowledge about motion, the nature of matter, light and heat, space and time. In the 20th century, the pace accelerated dramatically.

Then, about 30 years ago, something changed. The last time there was a definitive advance in our knowledge of fundamental physics was the construction of the theory we call the standard model of particle physics in 1973. The last time a fundamental theory was proposed that has since gotten any support from experiment was a theory about the very early universe called inflation, which was proposed in 1981.

Since then, many ambitious theories have been invented and studied. Some of them have been ruled out by experiment. The rest have, so far, simply made no contact with experiment. During the same period, almost every experiment agreed with the predictions of the standard model. Those few that didn’t produced results so surprising—so unwanted—that baffled theorists are still unable to explain them.

The Gap Between Theory and Experiment

The growing gap between theory and experiment is not due to a lack of big open problems. Much of our work since the 1970s has been driven by two big questions: 1) Can we combine quantum theory and general relativity to make a quantum theory of gravity? and 2) Can we unify all the particles and forces, and so understand them in terms of a simple and completely general law? Other mysteries have deepened, such as the question of the nature of the mysterious dark energy and dark matter.

Traditionally, physics progressed by a continual interplay of theory and experiment. Theorists hypothesized ideas and principles, which were explored by stating them in precise mathematical language. This allowed predictions to be made, which experimentalists then tested. Conversely, when there was a surprising new experimental finding, theorists attempted to model it in order to test the adequacy of the current theories.

There appears to be no precedent for a gap between theory and experiment lasting decades. It is something we theorists talk about often. Some see it as a temporary lull and look forward to new experiments now in preparation. Others speak of a new era in science in which mathematical consistency has replaced experiment as the final arbiter of a theory’s correctness. A growing number of theoretical physicists, myself among them, see the present situation as a crisis that requires us to reexamine the assumptions behind our so-far unsuccessful theories.

I should emphasize that this crisis involves only fundamental physics—that part of physics concerned with discovering the laws of nature. Most physicists are concerned not with this but with applying the laws we know in order to understand and control myriads of phenomena. Those are equally important endeavors, and progress in these domains is healthy.

Contending Theories

Since the 1970s, many theories of unification have been proposed and studied, going under fanciful names such as preon models, technicolor, supersymmetry, brane worlds, and, most popularly, string theory. Theories of quantum gravity include twistor theory, causal set models, dynamical triangulation models, and loop quantum gravity. One reason string theory is popular is that there is some evidence that it points to a quantum theory of gravity.

One source of the crisis is that many of these theories have many freely adjustable parameters. As a result, some theories make no predictions at all. But even in the cases where they make a prediction, it is not firm. If the predicted new particle or effect is not seen, theorists can keep the theory alive by changing the value of a parameter to make it harder to see in experiment.

The standard model of particle physics has about 20 freely adjustable parameters, whose values were set by experiment. Theorists have hoped that a deeper theory would provide explanations for the values the parameters are observed to take. There has been a naive, but almost universal, belief that the more different forces and particles are unified into a theory, the fewer freely adjustable parameters the theory will have.

Parameters

This is not the way things have turned out. There are theories that have fewer parameters than the standard model, such as technicolor and preon models. But it has not been easy to get them to agree with experiment. The most popular theories, such as supersymmetry, have many more free parameters—the simplest supersymmetric extension of the standard model has 105 additional free parameters. This means that the theory is unlikely to be definitively tested in upcoming experiments. Even if the theory is not true, many possible outcomes of the experiments could be made consistent with some choice of the parameters of the theory.
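A toy illustration (hypothetical, not from the article) of why freely adjustable parameters erode testability: a model with as many adjustable parameters as data points can be tuned to match any outcome exactly, so no measurement can rule it out. Here a 10-parameter interpolating polynomial (built by Lagrange interpolation) reproduces ten arbitrary made-up “experimental” results:

```python
# Sketch: with enough free parameters, a model fits any data exactly,
# so no possible outcome falsifies it.

def lagrange_fit(points):
    """Return a function interpolating exactly through the given (x, y) points."""
    def model(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return model

# Ten arbitrary "experimental" outcomes -- a 10-parameter model matches them all.
data = [(float(x), (x * 37 % 11) - 5.0) for x in range(10)]
model = lagrange_fit(data)
print(all(abs(model(x) - y) < 1e-9 for x, y in data))  # True
```

The analogy is loose (physical theories are far more constrained than raw interpolation), but it captures the worry: each unwanted experimental result can be absorbed by re-tuning parameters.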

String theory comes in a countably infinite number of versions, most of which have many free parameters. String theorists speak no longer of a single theory, but of a vast “landscape” [1] of possible theories. Moreover, some cosmologists argue for an infinity of universes, each of which is governed by a different theory.

A tiny fraction of these theories may be roughly compatible with present observation, but this is still a vast number, estimated to be greater than 10^400 theories. (Nevertheless, so far not a single version consistent with all experiments has been written down.) No matter what future experiments see, the results will be compatible with vast numbers of theories, making it unlikely that any experiment could either confirm or falsify string theory.

A New Definition of Science

This realization has brought the present crisis to a head. Steven Weinberg and Leonard Susskind have argued for a new definition of science in which a theory may be believed without being subject to a definitive experiment whose result could kill it. Some theorists even tell us we are faced with a choice of giving up string theory—which is widely believed by theorists—or giving up our insistence that scientific theories must be testable. As Steven Weinberg writes in a recent essay [2]:

Most advances in the history of science have been marked by discoveries about nature, but at certain turning points we have made discoveries about science itself…Now we may be at a new turning point, a radical change in what we accept as a legitimate foundation for a physical theory…The larger the number of possible values of physical parameters provided by the string landscape, the more string theory legitimates anthropic reasoning as a new basis for physical theories: Any scientists who study nature must live in a part of the landscape where physical parameters take values suitable for the appearance of life and its evolution into scientists.

An Infinity of Theories

Among an infinity of theories and an infinity of universes, the only predictions we can make stem from the obvious fact that we must live in a universe hospitable to life. If this is true, we will not be able to subject our theories to experiments that might either falsify or count as confirmation of them. But, say some proponents of this view, if this is the way the world is, it’s just too bad for outmoded ways of doing science. Such a radical proposal by such justly honored scientists requires a considered response.

I believe we should not modify the basic methodological principles of science to save a particular theory—even a theory that the majority of several generations of very talented theorists have devoted their careers to studying. Science works because it is based on methods that allow well-trained people of good faith, who initially disagree, to come to consensus about what can be rationally deduced from publicly available evidence. One of the most fundamental principles of science has been that we only consider as possibly true those theories that are vulnerable to being shown false by doable experiments.

Contending Styles of Research

I think the problem is not string theory, per se. It goes deeper, to a whole methodology and style of research. The great physicists of the beginning of the 20th century—Einstein, Bohr, Mach, Boltzmann, Poincaré, Schrödinger, Heisenberg—thought of theoretical physics as a philosophical endeavor. They were motivated by philosophical problems, and they often discussed their scientific problems in the light of a philosophical tradition in which they were at home. For them, calculations were secondary to a deepening of their conceptual understanding of nature.

After the success of quantum mechanics in the 1920s, this philosophical way of doing theoretical physics gradually lost out to a more pragmatic, hard-nosed style of research. This is not because all the philosophical problems were solved: to the contrary, quantum theory introduced new philosophical issues, and the resulting controversy has yet to be settled. But the fact that no amount of philosophical argument settled the debate about quantum theory went some way to discrediting the philosophical thinkers.

It was felt that while a philosophical approach may have been necessary to invent quantum theory and relativity, thereafter the need was for physicists who could work pragmatically, ignore the foundational problems, accept quantum mechanics as given, and go on to use it. Those who either had no misgivings about quantum theory or were able to put their misgivings to one side were able in the next decades to make many advances all over physics, chemistry, and astronomy.

The shift to a more pragmatic approach to physics was completed when the center of gravity of physics moved to the United States in the 1940s. Feynman, Dyson, Gell-Mann, and Oppenheimer were aware of the unsolved foundational problems, but they taught a style of research in which reflection on them had no place in research.

Physics in the 1970s

By the time I studied physics in the 1970s, the transition was complete. When we students raised questions about foundational issues, we were told that no one understood them, but it was not productive to think about that. “Shut up and calculate,” was the mantra. As a graduate student, I was told by my teachers that it was impossible to make a career working on problems in the foundations of physics. My mentors pointed out that there were no interesting new experiments in that area, whereas particle physics was driven by a continuous stream of new experimental discoveries. The one foundational issue that was barely tolerated, although discouraged, was quantum gravity.

This rejection of careful foundational thought extended to a disdain for mathematical rigor. Our uses of theories were based on rough-and-ready calculation tools and intuitive arguments. There was in fact good reason to believe that the standard model of particle physics is not mathematically consistent at a rigorous level. As a graduate student at Harvard, I was taught not to worry about this because the contact with experiment was more important. The fact that the predictions were confirmed meant that something was right, even if there might be holes in the mathematical and conceptual foundations, which someone would have to fix later.

The Disappearance of Contact with Experiment

In retrospect, it seems likely that this style of research, in which conceptual puzzles and issues of mathematical rigor were ignored, can only succeed if it is tightly coupled to experiment. When the contact with experiment disappeared in the 1980s, we were left with an unprecedented situation.

The string theories are understood, from a mathematical point of view, as badly as the older theories, and most of our reasoning about them is based on conjectures that remain unproven after many years, at any level of rigor. We do not even have a precise definition of the theory, either in terms of physical principles or mathematics. Nor do we have any reasonable hope to bring the theory into contact with experiment in the foreseeable future. We must ask how likely it is that this style of research can succeed at its goal of discovering new laws of nature.

It is difficult to find yourself in disagreement with the majority of your scientific community, let alone with several heroes and role models. But after a lot of thought I’ve come to the conclusion that the pragmatic style of research is failing. By 1980, we had probably gone as far as we could by following this pragmatic, antifoundational methodology.

If we have failed to solve the key problems of quantum gravity and unification in a way that connects to experiment, perhaps these problems cannot be solved using the style of research that we theoretical physicists have become accustomed to. Perhaps the problems of unification and quantum gravity are entangled with the foundational problems of quantum theory, as Roger Penrose and Gerard ’t Hooft think. If they are right, thousands of theorists who ignore the foundational problems have been wasting their time.

Unification and Quantum Gravity

There are approaches to unification and quantum gravity that are more foundational. Several of them are characterized by a property we call background independence. This means that the geometry of space is contingent and dynamical; it provides no fixed background against which the laws of nature can be defined. General relativity is background-independent, but standard formulations of quantum theory—especially as applied to elementary particle physics—cannot be defined without the specification of a fixed background. For this reason, elementary particle physics has difficulty incorporating general relativity.

String theory grew out of elementary particle physics and, at least so far, has only been successfully defined on fixed backgrounds. Thus, the infinity of string theories which are known are each associated with a single space-time background.

Those theorists who feel that theories should be background-independent tend to be more philosophical, more in the tradition of Einstein. Background-independent approaches to quantum gravity have been pursued by such philosophically sophisticated scientists as John Baez, Chris Isham, Fotini Markopoulou, Carlo Rovelli, and Raphael Sorkin, who are sometimes even invited to speak at philosophy conferences. This is not surprising, because the debate between those who think space has a fixed structure and those who think of it as a network of dynamical relationships goes back to the disputes between Newton and his contemporary, the philosopher Leibniz.

Meanwhile, many of those who continue to reject Einstein’s legacy and work with background-dependent theories are particle physicists who are carrying on the pragmatic, “shut-up-and-calculate” legacy in which they were trained. If they hesitate to embrace the lesson of general relativity that space and time are dynamical, it may be because this is a shift that requires some amount of critical reflection in a more philosophical mode.

A Return to the Old Style of Research

Thus, I suspect that the crisis is a result of having ignored foundational issues. If this is true, the problems of quantum gravity and unification can only be solved by returning to the older style of research.

How well could this be expected to turn out? For the last 20 years or so, there has been a small resurgence of the foundational style of research. It has taken place mainly outside the United States, but it is beginning to flourish in a few centers in Europe, Canada, and elsewhere. This style has led to very impressive advances, such as the invention of the idea of the quantum computer. While this was suggested earlier by Feynman, the key step that catalyzed the field was made by David Deutsch, a very independent, foundational thinker living in Oxford.

For the last few years, experimental work on the foundations of quantum theory has been moving faster than experimental particle physics. And some leading experimentalists in this area, such as Anton Zeilinger, in Vienna, talk and write about their experimental programs in the context of the philosophical problems that motivate them.

Currently, there is a lot of optimism and excitement among the quantum gravity community about approaches that embrace the principle of background independence. One reason is that we have realized that some current experiments do test aspects of quantum gravity; some theories are already ruled out and others are to be tested by results expected soon.

Collective Phenomena

A notable feature of the background independent approaches to quantum gravity is that they suggest that particle physics, and even space-time itself, emerge as collective phenomena. This implies a reversal of the hierarchical way of looking at science, in which particle physics is the most “fundamental” and mechanisms by which complex and collective behavior emerge are less fundamental.

So, while the new foundational approaches are still pursued by a minority of theorists, the promise is quite substantial. We have in front of us two competing styles of research. One, which 30 years ago was the way to succeed, now finds itself in a crisis because it makes no experimental predictions, while another is developing healthily, and is producing experimentally testable hypotheses. If history and common sense are any guide, we should expect that science will progress faster if we invest more in research that keeps contact with experiment than in a style of research that seeks to amend the methodology of science to excuse the fact that it cannot make testable predictions about nature.

References

[1] Smolin, L. 1997. The Life of the Cosmos. Oxford University Press.

[2] Weinberg, S. 2005. Living in the multiverse.

Further Reading

Smolin, L. 2006. The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. Houghton Mifflin, New York.

Woit, P. 2006. Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law. Basic Books, New York.


About the Author

Lee Smolin is a theoretical physicist who has made important contributions to the search for quantum theory of gravity. He is a founding researcher at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario. He is the author of Life of the Cosmos (Oxford, 1997), Three Roads to Quantum Gravity (Orion, 2001), and the forthcoming, The Trouble with Physics (Houghton Mifflin, 2006).

Reef Madness and the Meaning of Coral

While the nineteenth century’s greatest scientific debate was that over Charles Darwin’s theory of evolution, the century’s other great scientific debate, almost forgotten now, posed problems even more vexing than the species question did.

Published November 11, 2005

By David Dobbs

Image courtesy of Chonlasub via stock.adobe.com.

The Other Debate of Darwin’s Day

Asked to name the 19th century’s major scientific squabble, most people will correctly name the row over Darwinism. Few recall the era’s other great debate—regarding the coral reef problem—even though it was nearly as fierce as that over the species problem. The reef debate saw many of the same philosophical issues contested by many of the same players. These included Charles Darwin, the naturalist Louis Agassiz, and Alexander Agassiz, an admirer of the former and the son of the latter. Their tangled struggle is one of the strangest tales in science.

The clash over Darwin’s species theory was partly one between empiricism, as represented by Darwin’s superbly documented Origin of Species, and the idealist or creationist natural science dominant before then. Louis Agassiz, the Swiss-born naturalist who became the leading light of American science after moving to the United States in 1846, offered a particularly seductive articulation of creationist theory. He held huge audiences spellbound as he explained how nature’s patterned complexity could only have sprung from a single, divine intelligence. A species, he said, was “a thought of God.” His elegant description made him a giant of American science, the director of Harvard’s new Museum of Comparative Zoology, and a man of almost unrivaled fame.

But the publication of Origin, in 1859, confronted Agassiz’s idealist creationism with an empirically robust naturalistic description of species origin. Though Agassiz opposed Darwin’s theory vigorously, his colleagues increasingly took Darwin’s view, and by 1870, Louis Agassiz was no longer taken seriously by his peers. He could hardly have fallen further.

A Son

Louis’s only son, Alexander, came of age watching this fall. Smart and careful as child and man—he began his scientific career as an assistant at the Museum of Comparative Zoology and would manage it after his father died—Alexander seemed determined to avoid his father’s excesses. Where Louis was profligate, Alexander was frugal. Where Louis was expansive and extroverted, Alex was reserved and liked to work in private. And where Louis favored a creationist theory based on speculation, Alex preferred the empirical approach established by Darwin.

By the age of 35, Alexander Agassiz had created a happy life. He loved his work at the museum, his wife and three children, and by investing in and for 18 months managing a copper mine in Michigan, he had made himself quite rich. Yet his luck changed in 1873. Louis, then 63, died of a stroke two weeks before Christmas. Ten days later, Alex’s wife, Anna Russell Agassiz, died of pneumonia.

Alexander Agassiz. Image via Wikimedia Commons.

Wanderings and Reefs

Devastated by this double blow, Alex spent three years mostly traveling, mortally depressed. He felt able to “get back in harness,” as he put it, only when, in 1876, he engaged the coral reef problem. How did these great structures, built from the skeletons of animals that could grow only in shallow water, come to occupy platforms rising from the ocean’s depths? Naturalists had discerned in the early 1800s how corals grew, but the genesis of their underlying platforms remained obscure.

The prevailing explanation, first offered in 1837, held that coral reefs formed on subsiding islands. The coral first grew along shore, forming fringing reefs. As the island sank and lagoons opened between shore and reef, fringing reef became barrier reef. When the island sank out of sight, barrier reef became atoll. Thus this subsidence theory, as it was known, explained all main reef forms.

Alex, drawn to this problem by his friend Sir John Murray, a prominent Scottish oceanographer, thought the subsidence theory was just a pretty story. The theory rested on little other than the reef forms, while considerable evidence, such as the geology of many islands and most reef observations made during the mid-1800s, argued against it. Now Murray, who had just returned from a five-year oceanographic expedition aboard the HMS Challenger, told Alex of an alternative possibility. Murray had discovered that enough plankton floated in tropical waters to create a rain of planktonic debris that, given geologic time, could raise many submarine mountains up to shallows where coral reefs could form.

Alex immediately liked this idea, for it rose from close observation rather than conceptual speculation and relied on known rather than conjectural forces. Inspired for the first time since his wife’s death three years before, he began designing an extensive field research program to prove it.

There was only one problem: the person who had authored the subsidence theory was Charles Darwin.

Thirty Years of Fieldwork

Darwin had posited the subsidence theory as soon as he returned from the Beagle voyage in 1837. Like his evolution theory, it was a brilliant synthesis that explained many forms as the result of incremental change. But it did not rest on the sort of careful, methodical accumulation of evidence that underlay his evolutionary theory. Darwin conceived it before he ever saw a coral reef and published it when he’d seen only a few.

Yet the theory explained so much that it had launched Darwin’s career. Since then, of course, Darwin had developed his evolution theory, destroyed Louis’s career, and become the most renowned and powerful man in science. Alex knew he was courting trouble when he decided to champion an alternate theory. But he couldn’t resist such an enticing problem. And he firmly believed that Darwin had muffed it.

Alex spent much of the next 30 years collecting evidence. He developed a complicated and nuanced theory holding that different forces, primarily a Murray-esque accrual, erosion, some uplift, and occasionally some subsidence, combined in different ways to create the world’s different reef formations. He found evidence in every major reef formation on the globe. And so as the century ended, an Agassiz again faced Darwin (or Darwin’s legacy, for Darwin had died in 1882). Only this time the Agassiz held the empirical evidence and Darwin the pretty story.

Yet Alex hesitated to publish, even after he completed his fieldwork in 1903. Every year, Murray would ask Alex about the reef book. Every year Alex would say the latest draft hadn’t worked, but that he had found a better approach and would soon finish.

The last time he told Murray this was in 1910, when they met in London before Alex sailed home to the U.S. after a winter in Paris. On the fifth night out of Southampton, he died in his sleep. Murray, hearing the news by cable a couple of days later, was much aggrieved—and stunned by what followed. A thorough search had found no sign of the coral reef book. It was, Alexander’s son George later wrote, “an excellent example of his habit of carrying his work in his head until the last minute.”

One Irony Among Many

The coral reef debate didn’t end until 1951, when U.S. government geologists surveying Eniwetok, a Marshall Islands atoll, prior to a hydrogen bomb test there, finally drilled deep enough to resolve the mystery. If Darwin was right about reefs accumulating atop their sinking foundations, the drill should pass through at least several hundred feet of coral before hitting the original underlying basalt. If Agassiz was right, the drill would go through a relatively thin veneer of coral before hitting basalt or marine limestone.

It speaks to the power of Alexander’s work that the reef expert directing the drilling, Harry Ladd, expected to prove Agassiz right. But the power of Darwin’s work was such that as the drill spun deep, it passed through not a few dozen or even a few hundred feet, but some 4,200 feet of coral before striking basalt. Darwin was right, Agassiz wrong.

How did Alex miss this? In retrospect, geologists can identify various observational mistakes Alexander made. But Alex’s bigger problem was his singular place in the profound changes science underwent in the 1800s. Natural science in particular was struggling to define an empirical theoretical method. Alex played by the rules that most scientists, including Darwin, swore to: a Baconian inductivism that built theory atop accrued stacks of observed facts.

In reality, most scientists come to their theories through deductive leaps, then try to prove them by amassing evidence. A theory’s value rests not on its genesis, but on its proof. Today this is accepted and indeed codified as the “hypothetico-deductive method,” and its resulting theories are considered empirical as long as their proof lies in replicable evidence. But in Alex’s day, when pretty stories built on leaps of imagination spoke of reactionary creationism rather than creative empiricism, such theorizing was called speculation, and it was a four-letter word.

Alexander Agassiz was keenly sensitive to the dangers of such work. Yet his singular position fated him to take up a question that not only lay beyond the tools of his time, but which trapped him in the era’s most confounding difficulties of method and philosophy. He sought a solution that belonged to another age.

About the Author

David Dobbs is author of Reef Madness: Charles Darwin, Alexander Agassiz, and the Meaning of Coral, from which this lecture is drawn. You can find more of his work at daviddobbs.net.

Promoting Science and Human Rights in the Middle East

A black fist and a white fist raised in solidarity.

Two human rights activists are named winners of the Academy’s Human Rights Award for 2005.

Published October 17, 2005

By Fred Moreno

Image courtesy of Manpeppe via stock.adobe.com.

Two activists who have long fought for the rights of scientists, especially in the Middle East, received the 2005 Heinz R. Pagels Human Rights of Scientists Award at the Academy’s 187th Business Meeting, held on September 29.

The 2005 winners are Zafra Lerman, Distinguished Professor of Science and Public Policy and head of the Institute for Science Education and Science Communication at Columbia College Chicago, and Herman Winick, assistant director and professor emeritus of the Stanford Synchrotron Radiation Laboratory at Stanford University.

Zafra Lerman

For more than a decade, in her role as chair of the Subcommittee on Scientific Freedom and Human Rights of the American Chemical Society’s Committee on International Activities, Zafra Lerman has stimulated human rights awareness in communities of chemists and is the American Chemical Society’s leading voice on behalf of the human rights of scientists throughout the world. She has traveled to the former Soviet Union, Russia, Cuba, China, and the Middle East, bringing encouragement to repressed scientists.

In 2003 she worked with the Israel Academy of Sciences, notably helping nine Palestinian scientists attend a conference in Malta, where scientists from ten Middle Eastern nations met to tackle problems of research and education in the politically and economically troubled region.

Herman Winick

Herman Winick has been an extraordinarily effective and tireless advocate for the human rights of scientists for more than 25 years. He was one of the original supporters and founders of the Sakharov-Orlov-Scharansky (SOS) group in the 1980s.

In the 1990s, he strongly supported the human rights activities of the American Physical Society (APS) on behalf of repressed scientists around the world, first as a member and then as chair of the APS Committee on International Freedom of Scientists. In the mid-1990s he conceived the idea of creating a new synchrotron research facility in the Middle East, known as the SESAME project, which is located in Jordan and actively solicits participants from other nations in the region, including Egypt, the Palestinian Authority, Israel, and Syria; it is now operating.

For the past three years he has worked on behalf of an Iranian dissident physicist, Professor Hadizadeh, who has been imprisoned for his pro-democracy activities. Due in large part to efforts by Winick, Professor Hadizadeh is now carrying out research in the United States.

Pagels Award

The Academy’s first human rights award was given in 1979 to Russian physicist Andrei Sakharov. Renamed in 1988 in honor of former Academy president Heinz R. Pagels, the award has been bestowed on such eminent scientists as Chinese dissident Fang Li-Zhi, Russian nuclear engineer Alexander Nikitin, and Cuban economist Martha Beatriz Roque Cabello. The 2004 award was presented to Dr. Nguyen Dan Que of Vietnam.

Also read: Promoting Human Rights through Science

Bringing a Scientific Perspective to Wall Street

The corner of Pearl Street and Wall Street in lower Manhattan.

Emanuel Derman was a pioneer in the now-established field of financial engineering, which was influenced by his background in theoretical physics.

Published October 6, 2005

By Adelle Caravanos

Image courtesy of helivideo via stock.adobe.com.

Emanuel Derman, director of the Columbia University financial engineering program and Head of Risk at Prisma Capital Partners, will speak at the Academy on October 19. The self-described “quant” will discuss his unusual career path, from theoretical physics to Wall Street, where he became known for co-developing the Black-Derman-Toy interest-rate model at Goldman Sachs. His book, My Life as a Quant: Reflections on Physics and Finance, was named one of Business Week’s Top Ten Books of 2004.

The Academy spoke with Derman in advance of his lecture.

*Some quotes were lightly edited for length and clarity.*

First, please tell our readers what a quant is!

Well, “quant” is short for quantitative strategist or quantitative analyst. It’s somebody who uses mathematics, physics, statistics, computer science, or any combination of these at a technical level to try to understand the behavior of stock prices, option prices, bonds, commodities, and various kinds of derivatives from a mathematical point of view, and to some extent a predictive one.

Is it safe to assume that most of the major banks employ quants?

Yes. When interest rates went up astronomically around [the time of], and even after, the oil crisis of ’73, [the hiring of quants] started in the fixed income business. Fixed income has always been a much more quantitative business historically than the rest of the securities business and people have always thought that bonds and fixed income investments were fairly non-volatile, stable, and safe. Once interest rates went up to around 15 percent and gold prices went up like crazy, investment banks and companies had a whole different range of problems to deal with than before.

They’d always known stocks were volatile, but not that bonds were. So, they started hiring people out of non-financial parts of universities, non-business schools — computer scientists, mathematicians, physicists, Bell Labs — to tackle these problems, partly because they involved more mathematics than people were used to and partly because they involved more computer science than people were used to.

If you had a whole portfolio of things, you couldn’t do them efficiently on paper anymore. You couldn’t take account of the changes or take account of what they were worth, so people started building computer programs to do these things. And so, there was an in-road there for a lot of quantitative people.

I think it was good to get in [to quantitative strategy] early because you could make a contribution with much less skill and talent. After 20 years everything gets so complicated mathematically that it’s much harder to do anything. It’s not impossible; people do it. But it was very exciting in the early ’80s because there were virtually no textbooks. You couldn’t get a degree in the field. Everybody was self-taught. It was exciting.

When you started at Goldman, were you one of the first of their quants?

They had maybe 10 or 20 people there. I was early, but I wasn’t the first.

You talk in your book about the difference between the way traders and quants approach problems.

I think the differences are less extreme now because quantitative methods have become much more ubiquitous all over Wall Street, particularly in hedge funds. But, yes, traders were impulsive, sharp, and gregarious. They liked meeting with people, and if you worked on the trading floor everybody was yelling and screaming. It’s exciting, but for people coming from an academic background, it is hard to concentrate! It’s chaotic. You have to multi-task a lot, which is very disturbing if you grew up wanting to do just one thing, like getting a PhD and working for six years solidly on it.

You also make the distinction between working on the mathematical models and the actual science or technology of working on the interfaces for the people. Which did you enjoy more?

I liked both. When I was in physics, we were always trying to do research and it was hard, lonely work. You shut yourself in an office and tried to make progress and when you couldn’t get anything done, or when things weren’t working, you had nothing else to do — it was really depressing.

What was nice about working at Goldman was that there were useful things you could do, like software, that didn’t take the same mental effort. They took talent and they took skill, but you didn’t have to discover something new to do them. So it was very nice to spend a quarter of your time doing research and half doing software and another quarter dealing with people. It was a much more balanced life.

Do you think that things are changing as far as academicians looking down at people going into the business world, and business people looking down at academicians?

I do think it goes both ways. I certainly looked down on people dropping out of PhDs and going into business. It felt like you were leaving the monastery before you’d become a monk. Academics brought you up to look down on anybody who copped out. And then business people always used “academic” as sort of a dirty word — “academic” in the sense of “not applied.”

About the Author

Professor Emanuel Derman is director of Columbia University’s program in financial engineering and Head of Risk at Prisma Capital Partners, a fund of funds. My Life as a Quant: Reflections on Physics and Finance was one of Business Week’s top ten books of 2004. Derman obtained a PhD in theoretical physics from Columbia University in 1973. Between 1973 and 1980 he did research in theoretical particle physics, and from 1980 to 1985 he worked at AT&T Bell Laboratories.

In 1985 Derman joined Goldman Sachs’ fixed income division, where he was one of the co-developers of the Black-Derman-Toy interest-rate model. From 1990 to 2000 he led the Quantitative Strategies group in the Equities division, which pioneered the study of local volatility models and the volatility smile. He was appointed a Managing Director of Goldman Sachs in 1997. In 2000 he became head of the firm’s Quantitative Risk Strategies group. He retired from Goldman Sachs in 2002.

Derman was named the IAFE/Sungard Financial Engineer of the Year 2000, and was elected to the Risk Hall of Fame in 2002.

Also read: What Happens When Innovative Scientists Embrace Entrepreneurship?

Exploring Nature and Nurture of Women in Science

Three young girls conduct a science experiment.

Are gender disparities in the STEM fields a matter of nature or nurture? A panel of experts explored the ways in which these two seemingly opposing viewpoints interact.

Published September 2, 2005

By David Berreby

In 1993, the makers of the Talking Barbie doll included, among its 270 recorded phrases, the sentence “Math class is tough!” Did they do this because girls don’t like math as much as boys? Or do girls not like math because of influences that include Barbie dolls?

That’s the issue for policymakers contemplating the absence of women from the rosters of some scientific fields. Do we perceive gender differences, and act upon them, because men and women are biologically different? (If so, then women may never be represented in some fields in the same numbers as men.) Or is it that we have trained ourselves to see those differences? (In which case, our beliefs are holding talented people back.)

Are gender disparities, in other words, a matter of nature or nurture?

To those dilemmas, most scientists agree, there is only one good response: those are the wrong questions. Gender differences are obviously a matter of both nature and nurture. This was one rare point of agreement among panelists at an April 14, 2005, discussion on women in science held at the Cooper Union, sponsored by the Women Investigators Network and the Ensemble Studio Theatre/Sloan Project’s First Light Festival, and moderated by former New York Times science editor Cornelia Dean. It’s absurd, the speakers agreed, to choose one side or the other. The real problem is to figure out how nature and nurture interact.

The State of Current Knowledge

In that quest, the essential controversy is over the state of current knowledge. Some argue that today’s science is good enough to speak of certain gender differences as facts, and to tell which are innate and which are not. Others believe we don’t yet know enough to tease nature and nurture apart and note that one era’s “facts” about men and women have a way of looking like prejudice to later generations.

The Cooper Union session had its roots in remarks made on January 14, 2005, by a former Harvard University president to a meeting of the National Bureau of Economic Research. Saying he wanted to “provoke” his listeners, he took a side on the basic underlying question: is today’s science good enough to speak with confidence about the biological differences between men and women?

He thinks it might be, arguing that it is reasonable to ask if one reason for the paucity of women in top-level academic science jobs might be innate differences in the ways male and female minds work. He also mentioned two other factors he thought merited consideration: the demands of family life and the hindrances posed by convention and prejudice. However, his remarks about innateness got the most attention, in part because a number of attendees were so offended they walked out.

One was Nancy Hopkins, who led MIT’s Study of Women Faculty in Science, an inquiry launched in 1995 to determine why women were underrepresented among MIT professors. At the Cooper Union Hopkins argued that if we don’t yet know how nature and nurture combine to shape people’s lives, the very act of claiming certainty can breed prejudices and stereotypes.

Increase in Representation at MIT

Pointing out that women went from comprising 2% of MIT’s student body to more than 40% in the 20th century, she noted that such an upsurge was unlikely to be due to a fundamental change in the biology of women’s brains. Overconfident belief that women weren’t suited to her field, biology, once kept talented people out, she said. Now women are well represented in biology, but the same beliefs obstruct their progress in mathematics.

Linda Gottfredson, professor in the School of Education and affiliated faculty in the University Honors Program at the University of Delaware, however, argued that innate gender differences are very clear—so clear, in fact, that a goal of gender parity in all professions seems unrealistic. Specifically, she said, male minds show a bias toward interest in things, while female minds are interested in people, creating what she called a genetic “tilt” that affects the types of careers they choose. In this light, supporting an idea of infinite human malleability “ignores both women’s own preferences and the huge challenges they face when committed to having both children and careers.”

Richard Haier, who studies the neurobiology of intelligence, consciousness, and personality at the University of California, Irvine, also argued for the innateness of intelligence. He explained that while bell curves of male and female scores of general intelligence “essentially completely overlap,” more men tend to be found at the extreme high end of the scale for a few specific cognitive abilities like mathematical reasoning. Using imaging technology, he found that different parts of men’s and women’s brains are related to general intelligence in one study and to mathematical reasoning in another.
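Haier’s point, that two distributions can overlap almost completely near their means yet diverge sharply at the extremes, is easy to illustrate numerically. The sketch below is purely hypothetical: the means, standard deviations, and cutoff are invented for illustration and are not drawn from Haier’s data; it simply shows how a modest difference in spread between two normal distributions produces a disproportionate imbalance far out in the tail.

```python
import math

def normal_tail(x, mu, sigma):
    """P(X > x) for a Normal(mu, sigma) variable, via the complementary error function."""
    z = (x - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical parameters: identical means, slightly different standard deviations.
cutoff = 145                              # an "extreme high end" threshold, 3 SD above the mean
p_wide = normal_tail(cutoff, 100, 15)     # distribution with slightly more spread
p_narrow = normal_tail(cutoff, 100, 14)   # distribution with slightly less spread

print(f"tail ratio beyond {cutoff}: {p_wide / p_narrow:.1f} to 1")
```

With these invented numbers the wider distribution is roughly twice as common beyond the cutoff, even though the two curves are nearly indistinguishable around their shared mean. The point is how sensitive extreme tails are to small variance differences, not any particular empirical value.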

Gender Differences in Cognitive Traits

Diane Halpern, past president of the American Psychological Association and professor of psychology at Claremont McKenna College, agrees that there is significant evidence that gender differences in cognitive traits exist. However, the observation that some differences may be due to innate traits does not mean those differences are immutable. This is because even innate aspects of a person interact with an environment, and environments can change.

Or as Halpern said, “The word innate does not mean forever.” In assessing male and female performance on tests and career tracks, she argued, it is important to remember that the academic world “has been devised as a very male, very heterosexual world,” and the fact that “the biological clock and tenure clocks run in the same time zone” has been bad for women in academia.

New York University’s Joshua Aronson acknowledged the importance and relevance of studies on biological gender differences, but also warned against too much stress on innateness as an explanation. He looks at how people, cultural animals that they are, respond to cultural notions. Cognition, he argued, is affected by a phenomenon called stereotype threat, “an apprehension arising from the awareness of a negative stereotype or personal reputation in a situation where you can confirm that stereotype with your behavior or the way you look.” Citing various studies that he and his colleagues have conducted, he said that changes in the environment in which high-achieving women navigate cognitive tests can affect their performance.

Are Sex Differences in Intelligence Innate?

None of this was abstract theory for the panelists or their audience at the Cooper Union. As scientists were both the investigators and the subjects of research about such issues, the talks and subsequent questions produced an unusual mix of passion, autobiography, and political debate. The panel was sharply split on the question of whether we have sufficient knowledge to say that sex differences in intelligence are innate. As is usually the case when scientific controversies mix with political disputes, some on each side accused the other of seeking to cut off debate and suppress inconvenient facts. And some speakers of both schools of thought stated that the other approach was actively harmful to young women.

Women should not be pushed to do things against their nature, said one of the innatists. Women should not be told that their ambitions aren’t natural, said one of the environmentalists. Though no speaker bolted (and the tone of the talks, questions, and reception stayed civil), there was no disguising the profound philosophical, political, and scientific disagreement at the heart of this question.

Also read: Strategies from Successful Women Scientists


About the Author

David Berreby has written for the New York Times Science Section, The New York Times Magazine, The Sciences, Discover, Smithsonian, Slate, The New Republic, and many other publications.

Wilson Bentley: The Man Who Studied Snowflakes

A shot of an intricate snowflake with a black background.

This Vermont-based farmer spent his career in the late 19th and early 20th centuries transforming the study of snowflakes into an art as well as a science.

Published June 1, 2005

By Fred Moreno

Image courtesy of vadim_fl via stock.adobe.com.

For centuries, humans have been fascinated by the endless variety of snowflakes and their six-fold symmetry. Scientists have sought to better understand how they are formed from single crystals of ice and why complex patterns arise spontaneously in such simple physical systems. According to Kenneth Libbrecht, a physicist at the California Institute of Technology, “snowflakes are the product of a rich synthesis of physics, mathematics, and chemistry.”

The oldest observation of snow crystals* on record appears in China around 135 BC, but the 17th century seemed to witness the dawn of their serious scientific consideration. A treatise by Johannes Kepler raised questions about the genesis of their hexagonal symmetry, while the French philosopher and mathematician René Descartes wrote detailed accounts of the geometrical perfection of snow-crystal structure.

Later in that century, English scientist Robert Hooke was the first to draw snowflakes through a microscope. Many others – including the 19th century Arctic explorer William Scoresby and the great Japanese scientist Ukichiro Nakaya in the mid-20th century – have made important contributions to understanding the science of snow and ice.

But if there is one person who transformed snowflake study into an art as well as a science, it was Wilson A. Bentley, a farmer from Jericho, Vermont, who spent a lifetime (1865-1931) studying snow crystals. He became interested in the structure of snow crystals as a teenager in the 1880s and tried sketching them through an old microscope his mother had given him. But he found this a frustrating task since he had to work very rapidly in order to capture a complex phenomenon.

Much Trial and Error

Eventually Bentley devised a means of attaching a bellows camera to a compound microscope, and after much trial and error, he finally succeeded in photographing his first snow crystal on January 15, 1885. Over the next 46 years, he took more than 5,000 snow-crystal images on glass photographic plates – as well as pictures of frost, pond ice, dew, and clouds. A little-known fact about Bentley is that he also studied rainfall and was the first American to make measurements of raindrop size. His work in this area is one reason he is considered a pioneer in the science known today as cloud physics.

Keeping fragile things like crystals frozen and unspoiled meant Bentley had to work in temperatures below freezing. He caught the crystals on a blackboard and would transfer them to a microscope slide, taking care not to breathe on them.

“The utmost haste must be used, for a snow crystal is often exceedingly tiny, and frequently not thicker than heavy paper,” Bentley wrote. “Furthermore…evaporation (not melting) soon wears them away, so that, even in zero weather, they last but a very few minutes.”

The Treasures of the Snow

In the late 1890s, the world outside Jericho began to notice Bentley’s work. Some of his photomicrographs were acquired by the Harvard Mineralogical Museum and he published an article with George Henry Perkins, a natural history professor at the University of Vermont. It was in this article that he first outlined the notion that no two snowflakes are alike. In the coming years, many other academic institutions throughout the world – as well as the American Museum of Natural History and the British Museum – acquired samples of Bentley’s work, and he published articles in such magazines as Scientific American, National Geographic, Nature, and Popular Science.

Finally, in 1931, he collaborated with William J. Humphreys, chief physicist for the U.S. Weather Bureau, on a book, Snow Crystals, that would be the culmination of Bentley’s life’s work. It was illustrated with 2,500 snowflake photographs. Just a few weeks later, on a cold December day, Bentley died of pneumonia at his Jericho farm at the age of 66.

In the Old Testament, God asks of Job, “Hast thou entered into the treasures of the snow?” Wilson “Snowflake” Bentley surely would have answered, “Yes!”

*To most people, there is no difference between snowflakes and snow crystals. But there is a meteorological difference. A snow crystal refers to a single crystal of ice while a snowflake can mean an individual crystal or a cluster of them formed together. In short, a snowflake is always a snow crystal, but a snow crystal is not always a snowflake.

Also read: The Culture Crosser: The Sciences and Humanities

The Science Behind a Tsunami’s Destructiveness

A blue and white sign warning: Tsunami Hazard Zone - in case of earthquake go to high ground or inland.

In the aftermath of the 2004 tsunami, and with tectonic plates continuing to shift beneath the Indian Ocean, scientists are seeking answers to help prepare for the next natural disaster.

Published June 1, 2005

By Sheri Fink, MD, PhD

Image courtesy of jdoms via stock.adobe.com.

Stunning images of devastation and soaring body counts dominated news coverage of last December’s tsunami, leaving one of the most important questions about the disaster barely addressed: Why did so many people die? With tectonic plates still shifting beneath the Indian Ocean, setting off new earthquakes almost daily, finding answers to this question is urgent.

Lareef Zubair is an associate research scientist at Columbia University’s Earth Institute and founder of the Sri Lanka Meteorology, Oceanography and Hydrology Network. He studies why disasters in some parts of the world tend to carry a much higher human, as opposed to financial, toll than disasters in other places – compare the thousands who die in a typical cyclone in Bangladesh with the 123 deaths caused by last year’s four hurricanes in Florida.

Zubair recently spoke at The New York Academy of Sciences (the Academy) on a panel organized by Science Writers in New York (SWINY), an affiliate of the National Association of Science Writers, to discuss untold stories of the tsunami.

Disasters: Unequal Opportunity Killers

Destructive acts of nature impact human populations to varying degrees. “People who study disasters sort of separate out three aspects of disasters,” said Zubair. “One is the hazards, which is something like a flood, or lightning strike, or a tsunami, which is the physical or biophysical event itself. And then there is the exposure, the degree to which people are exposed to the hazard. The third thing is how vulnerable you are to that event.”

In Zubair’s home country of Sri Lanka, the tsunami restricted its wrath to the first several hundred meters adjacent to the water’s edge. The destruction coincided with areas of high population density. Not only did those who eked out a living from the sea live in close proximity to it, but so did traders and farmers, despite regulations stipulating that construction within 300 meters of the shoreline be reviewed by the government. One reason people are drawn to the coast is that infrastructure such as roads, telephones, hospitals, and schools has been developed there.

“The seashore has to be protected,” said Zubair. “Cyclones and flooding and storm surges happen at the seashore…every 10, 20, 30 years, and everybody knows this. But somehow that did not translate into the desired action of having people live in safer areas.”

Not only were people living along the seashore exposed to natural disasters, Zubair said, but because of the area’s depressed economy and 20-year history of civil war, they also were highly vulnerable to them. “Vulnerability…is grossly related [to] the distribution of wealth,” Zubair said. “How good are your houses, how good is the infrastructure, how good are the hospitals that are around so that you can get treatment? How good is the road system?” The answers in the tsunami-hit areas of Sri Lanka were, in most cases, “poor.”

A Failure of Prevention

On December 26, 2004, Sri Lanka’s National Disaster Management Center did not jump into action to mitigate the tsunami’s destructive effects. The country’s “Sunday Times” newspaper summed up the problem in a headline: “Only three phones, staff of 10, and never on a Sunday.” The tsunami had the bad manners to hit on the Sunday after Christmas. “How on earth [can you] have a national disaster management center that does not work on public holidays?” Zubair asked.

An hour elapsed between the tsunami’s first deadly landfall on the island’s eastern coast and its last lashing in the country’s northwest. In that time, an estimated 20,000 additional people died. Zubair believes that had a warning been broadcast to the rest of the country soon after the tsunami began hitting the coast – roughly one and a half hours after the earthquake – lives would have been saved.

“That should have happened,” said Zubair. “Any middle school student could see [that] if you have an earthquake hazard in the middle of the ocean, there is going to be a tsunami risk. You don’t need sophisticated scientists to come and tell you this. Why did people fail? And, why did people fail in Sri Lanka? Why did people fail in India? Lastly, why did people fail here? I don’t think we should push these questions under the carpet, as scientists.”

An Early Warning System

Zubair said he made his way to the “plush” part of Colombo to visit the disaster management center several times in the year prior to the tsunami, seeking to discuss early warning systems. He was offered tea, but never an audience with anyone willing to talk about technical issues. An early warning system had indeed been proposed after Sri Lanka’s 1978 cyclone. Plans were made, reports were written, money was disbursed, but the ideas were never implemented by the center. “They exist, with a name-board and a plaque, for donors,” he said, concluding that a “perverse incentive system” exists for those involved in disaster management and related fields.

“Every time there’s a disaster, they get rewarded with larger and larger amounts of funds,” he said. “In countries such as Sri Lanka, fields like disaster management and energy conservation are seen as fields in which you can get foreign funds, opportunities for scholarships and maybe some sort of benefit. There’s no integration of the disaster management system into the internal networks of science, into the internal networks of education, into the internal networks of government itself.”

High Price of Neglecting Science

Sri Lanka’s Geological Survey and Mines Bureau possessed both a functioning seismograph and a 100-year scientific pedigree, but on December 26th it had no one working on site to analyze the seismic measurements. Data were sent instead to the Scripps Institution of Oceanography in San Diego. “The question is, why is it that you’re sitting on probably the most important piece of scientific data Sri Lanka ever recorded or needed and you just ship it off?”

The answer is controversial. Zubair traced it to the pressures of mounting foreign debt, which forced the bureau to shift its focus away from science to supporting commercial mining interests. “Because of the fact that the country is dependent…services that look after the safety of the population got converted into a service that helps repay debt,” he said. Hewing to World Bank and Sri Lanka’s central bank guidelines, the bureau did not have the authority to spend the roughly $2,000 needed to hire someone to monitor the seismograph.

In fact, $2,000 is the government’s entire yearly grant to Sri Lanka’s Academy of Sciences. “The investment of the Sri Lankan government in science is about 0.18% of GDP. It’s just minuscule. You should at least have 1% or 2%, because what you’re doing is investing in people, you’re investing in safety, in the future.”

Empowering Humanity

Zubair concluded that the death toll from the tsunami was in great part a function of unmitigated exposure and vulnerability of the population – factors he laid at the doorstep of a government that neglects science and technology, and international donor organizations that offer a shower of funds for emergency relief, but turn off the spigot for prevention efforts.

“The basic message here is we really should be talking about disaster preparedness and risk management,” he said. The goal is to integrate modern scientific and technological advancements with emergency preparedness and public education. “You can have policy, but there must be implementation and there must be good governance…governance that looks after the welfare of the people.”

Despite the failures, Zubair recalled that when he visited Sri Lanka a week after the disaster he came away with hope as well as frustration. At a time when the government and international agencies had not yet swung into action, he saw the local inhabitants themselves saving lives. “Church groups, community groups, temples, mosques, workplaces. It was like 9/11 here – extraordinary mobilization. It’s not a poor country in that sense.” The key, he said, is to support the “huge capacity of people.” Chief among them? The scientists.

Also read: Tsunami Relief Efforts: A Personal Account

Tsunami Relief Efforts: A Personal Account

Water splashes and people scramble during a tsunami.

Collaboration is key when dealing with disasters. A medical doctor offers guidance from her experience in the aftermath of the 2004 tsunami in the Indian Ocean.

Published June 1, 2005

By Sheri Fink, MD, PhD

A photograph of the 2004 tsunami in Ao Nang, Krabi Province, Thailand. Image courtesy of David Rydevik via Wikimedia Commons. Public Domain.

During two months working in Thailand and Indonesia after the tsunami, I was struck by the many ways that science and technology were employed during the disaster recovery process, although not without controversy and complications. Geospatial imaging information guided aid workers to highly populated disaster zones, but not all countries immediately released the sensitive information. Instant cell-phone messaging allowed disease surveillance specialists to track emerging infectious outbreaks across widespread areas, but not all health workers reported their cases.

One of the most interesting applications of science was in the field of forensics. In Thailand, the tsunami stole the lives of an estimated 3,442 Thai nationals and 1,953 foreigners, many of them European tourists. While tsunami victims’ bodies were buried or cremated in countries with fewer tourists, identification teams from more than two dozen countries showed up in Thailand to identify the victims, using techniques ranging from forensic anthropology to genetics. Most of the experts worked on the verdant grounds of a massive Buddhist temple known as Wat Yan Yao.

Quickly, however, a problem emerged: Each team had its own standards for evidence collection. Brendan Harris, a young volunteer from Vancouver, Canada, provided assistance to the teams, heaving waterlogged bodies onto mortuary tables in the first weeks after the tsunami. “There are a lot of arguments going on about how to deal with the bodies,” he said.

Collaboration is Imperative during Crises

Clad in hospital gowns, masked and gloved, the foreign teams at first focused their efforts on Caucasian-appearing bodies. That left Thai forensic scientists and dentists to photograph, examine and take fingerprints and DNA samples from Asian-appearing bodies, or bodies where decomposition had wiped away all traces of race. The result was two separate identification efforts, one foreign and one Thai, proceeding within earshot of each other. A month after the tsunami, the Thai and foreign teams had established completely different computer databases and were not sharing information crucial to identifying the missing. With only roughly 1,000 bodies identified, family members of the missing were distraught.

Ultimately the scientists realized that they had to work together. The foreign teams and the Thai interior ministry formed the Thai Tsunami Victim Identification Center, adopting protocols based on Interpol standards. The Center’s members committed to identifying all recovered bodies, regardless of nationality.

Scientists cautioned that the identification process could take many months, but expressed hope for what had become one of the largest international disaster identification efforts in history. “I have no doubt this will be a very highly successful system,” said DNA expert Ed Huffine, of Bode Technology Group in Springfield, Virginia. “This is developing a world response system to disaster. And it’s beginning a standardization process that uses all forms of forensic evidence, where DNA will play the leading role.”

The Need for a Crisis Response Network

A laboratory in Beijing, China, offered to test all victims’ DNA samples for free. Weeks later, scientists were surprised when the Chinese lab, and eventually several labs in other countries, had difficulty deriving usable DNA profiles from the degraded DNA in tooth samples. By the end of March, more than three months after the tsunami, the Victim Identification Center had put names to only an additional 1,112 bodies, the vast majority of them matched exclusively through dental records. Only three IDs came exclusively from DNA.

Continued disagreements and frequent personnel turnover have plagued the identification center, which insiders refer to as “a mess.” The disappointing experience has pointed out the need for better preparation and coordination among multi-national forensics experts responding to disasters.

Just as the World Health Organization plays a coordinating role for diverse groups of health professionals working in disaster and conflict zones, so, too, an international organization is needed to coordinate disaster victim identification teams. Such a group would be wise to standardize not only technical procedures, but also ethical principles – including the impartial treatment of bodies of all nationalities and races.

Perhaps most importantly, family members of the missing, who have the largest stake in the outcome of identification efforts, should be offered both full access to information and decision-making representation in any future crisis. It is crucial that their preferences and belief systems count.

Also read: The Science Behind a Tsunami’s Destructiveness


Artists Consider Manipulation of Human Form

A woman gets a needle injection, presumably Botox, into her lip by a gloved hand.

Analyzing how cosmetic surgery, science, and art interact in a new exhibit on display at the Academy.

Published March 31, 2005

By Fred Moreno

Image courtesy of Acronym via stock.adobe.com.

Although the Hindu surgeon Sushruta noted how to reconstruct a nose from a patient’s cheek as far back as 600 B.C., plastic surgery is said to have begun during the Renaissance with the Italian Gaspare Tagliacozzi. He originated a method of nasal reconstruction in which a flap from the upper arm is gradually transferred to the nose.

Plastic surgery (a term which covers both reconstructive and cosmetic surgery) has come a long way since then and it is now one of the largest medical specialties in the United States. It is a good example of how market demand can drive medical developments, as technology races to keep up with consumer desire. But the decision to alter one’s face or body, surgically or otherwise, continues to raise questions about the social impact of medicine and technology, the manipulation of the human form, as well as issues of identity, self-esteem, and health, both physical and psychological.

Face Value: Plastic Surgery and Transformation Art

An exhibition opening April 8 in the Gallery of Art & Science of The New York Academy of Sciences, Face Value: Plastic Surgery and Transformation Art, takes a look at these questions through the eyes of more than a dozen contemporary artists who are imagining new parameters for body identity in a wide range of media, from painting to photography, and even through personal body manipulation. Curated by artist Suzanne Anker, chair of the Art History Department at New York’s School of Visual Arts, the exhibition will include works by Erica Baum, Aaron Cobbett, Margi Geerlinks, Leigh Kane, Daniel Lee, Lilla LoCurto and Bill Outcault, Orlan, Julia Reodica, Aura Rosenberg, Chrysanne Stathacos, and Linn Underhill.

“In many ways, plastic surgery lies at the nexus of medicine and consumerism,” Anker says. “How visual artists interpret that interaction can say a lot about the nature of beauty and our society’s medical and cultural values.”

Also read: The Art and Science of Human Facial Perception