The Art of Sci-Fi: 80 Years of Movie Posters

An illustration of an astronaut shooting a ray gun.

A new exhibition combines art and science as it explores 80 years of science fiction movie posters. See the styles of different artists from Argentina and the United States to Germany and Japan.

Published May 1, 2006

By Fred Moreno

Ever since science gave birth to the cinema more than a century ago, the link between the two has often been intimate and exciting – and sometimes rather disturbing. Sort of like the relationship between Dr. Frankenstein and his creation. Countless movies have featured aspects of science and technology, both credible (or almost so) and fantastic (mostly). Just as fanciful is the varied collection of absurdly mad or strangely saintly scientist “heroes” that have populated the movies over the years.

Numerous studies have shown that movies are a major source for what the general public thinks about science and scientists. And just as the films themselves have influenced societal perceptions, so too have their movie posters. With its images of heroic sacrifice, spaceships, other worlds, and scientifically engendered creatures, the movie poster has produced some of the most iconic visual signposts of our time.

Coming Attractions! 80 Years of Cinematic Science: Movie Posters from Around the World, an exhibition in The New York Academy of Sciences’ (the Academy’s) Gallery of Art & Science through June 30, brings together posters for more than 25 movies, including examples from such countries as Argentina, Germany, Japan, Russia, Great Britain, Italy, Poland, and the U.S.

The exhibit includes a British poster for the rerelease of Fritz Lang’s Metropolis; one from France for the American eco-drama, Soylent Green; and an Argentinean poster for the Italian film Mission Stardust. Also represented are posters for such true-to-life dramas as Inherit the Wind, a thinly disguised rendition of the 1925 Scopes “monkey trial,” and Not as a Stranger, the glossy American tribute to the medical profession.

Visual Lures

All works in the exhibition come from Posteritati Movie Posters, a New York gallery specializing in international movie art. It has more than 12,000 posters in its collection. The works are used courtesy of Posteritati owner Sam Sarowitz.

“Some of the world’s most talented illustrators, painters, art directors, and graphic designers have produced movie posters,” said Tony Stinkmetal, a filmmaker and screenwriter who is serving as curator for the Academy exhibition. “They have used their fertile imaginations to give us a visual impression of both today’s world and tomorrow’s possibilities while, at the same time, luring us into the theater.”

Mr. Stinkmetal noted that the posters in the exhibition reflected a variety of styles and designs, but that similarities in approach were discernible in works from the same country.

“American and British posters tend to be more direct and traditional, such as the masked surgeon in the Not as a Stranger poster,” he said. “On the other hand, more abstract and conceptual treatments are typical of Eastern European illustrators, such as the cosmic bodywork in Polish artist Andrzej Pagowski’s poster for Innerspace or the stark metallic automaton in the Czech poster for The Terminator.”

Also read: From Imagination to Reality: Art and Science Fiction

The Road to Discovery in 20th Century Science

A black and white photo of a 20th century female scientist reviewing paperwork.

For author Alan Lightman, reading landmark scientific papers provides a window into the lives and intellectual adventures of the men and women behind the 20th century’s most influential ideas.

Published April 14, 2006

By Karen Hopkin

Otto Loewi. Image courtesy of Institute of Pharmacology, Graz, CC-BY-SA-3.0-DE, via Wikimedia Commons.

The key experiment came to him in a dream. It was 1921 and Otto Loewi, a German pharmacologist, was looking for a way to determine how nerve cells communicate. Was the signal conveyed from one neuron to the next—or from a neuron to a muscle or organ—electrical? Or was it chemical?

The scientist awoke, jotted down his musings on a slip of paper, and went back to sleep. “It occurred to me at six o’clock in the morning that during the night I had written down something most important,” he later recalled, “but I was unable to decipher the scrawl.”

From Dream to Nobel Prize

Fortunately, the idea returned the following night. That time, Loewi must have written more legibly, because he was able to carry out his Nobel Prize-winning experiment that day. He dissected the hearts from two frogs and placed them, still beating, into separate dishes of saline solution. Loewi then stimulated the vagus nerve he’d left attached to the first heart. As expected, the heart slowed its beating.

Now here’s the elegant part. Loewi took some of the solution bathing the first heart and poured it over the second heart, from which he’d stripped the vagus nerve. This heart, too, slowed, proving that the message transmitted by the vagus nerve was chemical in nature. The compound, which Loewi called “Vagusstoff,” turned out to be acetylcholine, a neurotransmitter found widely throughout the nervous system.

For Loewi, the experience suggested that “we should sometimes trust a sudden intuition without too much skepticism.” And for Alan Lightman, physicist and author of The Discoveries: Great Breakthroughs in 20th Century Science, the story illustrates how scientists think, and reminds us that science is a process of exploration carried out by human beings.

Hearing the Scientist’s Voice

Over the years, Lightman has come to realize that scientists rarely read original research papers, perhaps because they view science as being all about the bottom line. “If science is an explanation of the way that the world behaves, then you don’t need to know how you got to that understanding,” says Lightman. “You just need to know the facts, ma’am. And that’s all that matters.”

That view, although valid, is limited, Lightman told an audience at The New York Academy of Sciences (the Academy) on January 31, 2006. “You can read a textbook on the theory of relativity and you can understand relativity,” he says. “But you don’t understand the mind of Einstein. You don’t hear his voice.”

To remedy that loss, Lightman assembled The Discoveries, a handpicked collection of 22 of the greatest ideas and experiments in 20th century science. Lightman asked his scientist pals—physicists, chemists, astronomers, biologists—for recommendations and then winnowed the resulting list down to the 22 stories he presents in the book. For each discovery—from Werner Heisenberg’s enumeration of the uncertainty principle to Barbara McClintock’s revelation that genes can jump from one chromosome to another—Lightman provides a guided tour of the original paper along with an essay on the life and times of the scientists involved.

Measuring the Distance of Stars

Henrietta Leavitt. Image via Wikimedia Commons.

Among Lightman’s favorite tales is that of Henrietta Leavitt’s development of a method for measuring the distance to the stars. Leavitt was hired in the late 1800s by Edward Pickering, director of the Harvard College Observatory, to pore over photographic plates and calculate the positions and brightness of thousands of stars. As one of the cadre of women that formed Pickering’s low-paid battalion of human “computers,” Leavitt was expected to “work, not think,” says Lightman. “But some of the women disobeyed him, and Henrietta Leavitt was one of those.”

Through painstaking measurements, Leavitt uncovered a relationship between the periodicity and luminosity of the Cepheids, a group of stars that brighten and dim in predictable cycles that vary between three and 50 days. Leavitt found that the longer a star’s period, the greater its intrinsic luminosity; and because comparing a star’s intrinsic brightness with how bright it appears from Earth reveals its distance, the Cepheids, which are scattered throughout the night sky, could serve as cosmic beacons by which astronomers could gauge distances in space.
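
To make the logic concrete, here is a minimal Python sketch of the inverse-square reasoning (the numbers are illustrative assumptions, not Leavitt’s own calibration): once the period-luminosity relation supplies a Cepheid’s intrinsic luminosity, the flux measured at Earth fixes its distance.

import math

# Minimal sketch of the inverse-square reasoning behind Cepheid distances.
# The figures below are illustrative assumptions, not Leavitt's calibration.
# If the period-luminosity relation gives a star's intrinsic luminosity L,
# and we measure the flux F arriving at Earth, then F = L / (4*pi*d**2),
# so the distance d follows directly.

L_SUN = 3.8e26  # luminosity of the Sun, in watts

def distance_m(luminosity_w, flux_w_per_m2):
    # Solve F = L / (4 * pi * d**2) for d.
    return math.sqrt(luminosity_w / (4 * math.pi * flux_w_per_m2))

# A hypothetical Cepheid 10,000 times as luminous as the Sun, observed
# at a flux of 1e-12 watts per square meter:
d = distance_m(1e4 * L_SUN, 1e-12)
print(d / 9.46e15, "light-years")  # roughly 58,000 (9.46e15 meters per light-year)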

Leavitt’s work laid the foundation for many of the astronomical discoveries that would follow, including Hubble’s determination that the universe is expanding. Yet the scientist remained uncelebrated in her lifetime. “Even today there are very few people who’ve heard of her,” notes Lightman. In 1925, a representative of the Swedish Academy of Sciences wrote to Leavitt to propose nominating her for a Nobel Prize. Unfortunately, Leavitt had been dead for three years by then, rendering her ineligible for the honor.

Passion and Obsession

The most satisfying stories, Lightman says, are the ones in which the researchers’ personalities drive the discovery. Take, for example, Arno Penzias and Robert Wilson’s detection of the cosmic background radiation—the persistent hum left over from the Big Bang. “Both men were incredibly meticulous experimentalists,” says Lightman. “If they hadn’t been so anal compulsive about the details then they wouldn’t have been so certain that this residual hiss in their antenna was something worth investigating.”

But, he adds, “they were so fastidious, so picky, and so careful” that they methodically chased after the source of the noise. And after they eliminated every possible thing they could think of, Penzias and Wilson concluded “this was something worth writing about,” says Lightman. Indeed, their almost comically understated paper, entitled “A Measurement of Excess Antenna Temperature at 4080 Mc/s,” formed the basis of their 1978 Nobel Prize.

In the end, Lightman himself discovered a thing or two in putting together the book. Although he did not uncover any particular scientific temperament—scientists’ personalities run the regular human gamut—Lightman did find that, regardless of the field in which they worked or how they came to their discoveries, all the scientists he profiled “were really passionate about what they do. All loved to solve puzzles. They all loved to challenge authority. All were independent thinkers. And all were really obsessed with science.”

And though all didn’t necessarily dream about their work, they did labor tirelessly to solve their favorite puzzles, leaving behind them tales that are certainly worth telling.

About the Speaker

Alan Lightman, PhD, is adjunct professor of humanities at the Massachusetts Institute of Technology. As a novelist, essayist, physicist, and lecturer, Lightman is committed to making science accessible and understandable to a wide audience. His writings cover a range of topics dealing with science and the humanities, particularly the relationship between science, art, and literature. Lightman’s short fiction, essays, and reviews have appeared in numerous popular magazines and publications, including Discover, Harper’s, Nature, and The New Yorker.

He is the author of four novels, including the international bestseller Einstein’s Dreams, which was runner-up for the 1994 PEN New England/Boston Globe Winship Award, has been translated into 30 languages, and is the basis for more than two dozen independent theatrical and musical productions. In addition to his novels, Lightman is the author of several science books, drawing on his research in the areas of gravitational theory, accretion disks, stellar dynamics, radiative processes, and relativistic plasmas.

Lightman holds a PhD in theoretical physics from the California Institute of Technology, and an Honorary Doctorate of Letters from Bowdoin College. He served a postdoctoral fellowship at Cornell University before becoming assistant professor of astronomy at Harvard University and research scientist at the Harvard-Smithsonian Center for Astrophysics. In 1989 Lightman joined the faculty of MIT, and in 1995 was appointed John E. Burchard Professor of Humanities, a position he resigned in 2001 to allow more time for his writing.

For his contributions to physics, Lightman was elected fellow of the American Physical Society and the American Association for the Advancement of Science, both in 1989. In 1996 he was elected fellow of the American Academy of Arts and Sciences, and that same year, was recipient of the American Institute of Physics Andrew Gemant Award for linking science to the humanities.

Resolving Evolution’s Greatest Paradox

A black and white photo of an elder Charles Darwin.

Darwin’s theory of natural selection has never been very good at explaining novelty or complexity in living organisms. The new theory of “facilitated variation,” however, promises to fill in the gaps.

Published March 3, 2006

By Robin Marantz Henig

Sponsored by: The New York Academy of Sciences and Yale University Press.

Charles Darwin in 1868. Image courtesy of Wikimedia Commons.

“I came neither to praise Darwin nor to bury him,” Marc Kirschner, founder and chair of the department of systems biology at Harvard Medical School, told an overflow crowd on January 25, 2006, as part of the Readers and Writers lecture series at The New York Academy of Sciences (the Academy). Kirschner, coauthor with John Gerhart of The Plausibility of Life: Resolving Darwin’s Dilemma, said that his goal, in both the lecture and the book, was to achieve a middle ground, a way “to challenge Darwin in the name of buttressing the theory of evolution.”

Kirschner and Gerhart, a professor in the graduate school at the University of California at Berkeley, have long been plagued by a paradox in Darwin’s theory of natural selection, one that creationists and Intelligent Design proponents have used to cast doubt upon evolution as a whole: How is it that extraordinary complexity could have evolved from the accretion of tiny, supposedly random variations?

The answer, at least in part, is that the changes are not as random as they seem. “Even though science has shown that genetic variation is random,” Kirschner told his audience, “phenotypic variation cannot be random—because you can only change what already exists.” You never see a vertebrate with six limbs, he said; some mechanism limits the number of limbs to four, and the number of digits to five. “Yet these limits are hardly very constraining,” Kirschner noted, “generating everything from a whale’s flipper to Arthur Rubinstein’s hand.”

The Theory of Facilitated Variation

The constraints on phenotypic variation, “rather than being limiting, greatly enable evolutionary change,” Kirschner said. In his talk, he related how he and Gerhart developed a new theory to explain complexity, which they call the theory of facilitated variation.

As background, Kirschner began by describing the two different paths that biology was taking around the time of Darwin’s publication of The Origin of Species: the fascination with variation that led to the zoos and natural history museums of the late 19th and early 20th century; and the simultaneous realization, with the growth of cell biology and embryology, that much of life is characterized not by differences, but by similarities.

“So where does this leave us?” asked Kirschner. “Two paths in science, one extolling the variety of life, the other obsessed with its universal properties. Herein lies a paradox: how can this immense variation arise from this universality?”

This is where facilitated variation comes in. Kirschner used an analogy borrowed from the kindergarten classroom to explain how his and Gerhart’s theory differs from evolutionary theory up to this point. Traditionally, he said, biologists have compared life to a lump of modeling clay, “incredibly plastic, and able—due to the accrual of many small changes—to go in any direction.” But this is the wrong metaphor, he said. In truth, life is more like a bunch of Lego blocks. As with Legos, the basic building blocks of biology are rigid and quite similar to one another, but “there is a large variety of structures that can be assembled from similar parts.”

If You Give a Monkey a Typewriter

Another way of looking at it, Kirschner said, is to imagine trying to get a monkey to write the word “MONKEY.” You could give the monkey a pen and paper, but that would never work—all you’d get would be “random lines and scratches.” But if you gave him a typewriter, then you might be getting somewhere.

It would take a very long time (Kirschner calculated about ten years, typing at the rate of one keystroke per second round-the-clock), but the monkey would eventually produce all six letters in the right order, because the typewriter restricts the results of his physical actions—always letters instead of scribble-scrabble. “Letters have at least a chance to be useful,” Kirschner said. “Most pen scratches do not.”
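
Kirschner’s ten-year figure checks out as back-of-the-envelope arithmetic. Here is a quick Python sketch under stated assumptions (a 26-key, letters-only typewriter and one random keystroke per second; the expected wait for a specific six-letter word is then about 26^6 keystrokes):

# Back-of-the-envelope check of the "about ten years" figure, assuming a
# 26-key typewriter (letters only) and one random keystroke per second.
# The expected wait for a specific 6-letter word is about 26**6 keystrokes.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

keystrokes = 26 ** 6                  # 308,915,776
years = keystrokes / SECONDS_PER_YEAR
print(round(years, 1), "years")       # ~9.8 years, i.e., "about ten years"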

If, instead of a typewriter, the monkey were pounding on a computer keyboard programmed with an automatic spelling corrector, the time it would take for him to type out the word “MONKEY” would be reduced dramatically, from ten years to probably less than a single day. “More constraint equals more useful outcomes,” Kirschner said.

The point is that something similar seems to be at work in nature. Facilitated variation works like that computer spell-checker, leading to “a coordination of conserved processes that are highly adaptive and facile in situations that require change.”

Consider the evolution of limbs. Among vertebrates, Kirschner said, limbs can be “as varied as the wings of an albatross, the hooves of an antelope, and the claws of a tiger.” How could such a vast array have evolved from small and random variations? By having a certain logic to the variations, said Kirschner, something “quite ingenious, simple, and forgiving.”

Gene Feedback Inhibition and Tissue Morphogenesis

Complexity in multicellular organisms—changes and refinements in beak shape, pigmentation, jaw structure, limb formation—can be explained, he said, by forces involved in “changing the time and extent of a process rather than creating a new process.” The forces are those that have been uncovered recently in the field of molecular biology, such as gene feedback inhibition, and the field of developmental biology, such as tissue morphogenesis. They help account for the surprising fact that the human genome isn’t much bigger than the genome of a frog or a fruit fly. The vast differences among these organisms are accounted for not by number of genes, he said, but by how the genes are expressed.

“In multicellular organisms, the same few genes must be reused in many different contexts,” said Kirschner. “The organism has liberated itself from a requirement that each gene has to operate in the same way in each anatomical region.” What this means for evolutionary theory is that even though the variations found in genes can be tiny, they can lead to big differences in the phenotype—and big differences in the appearance and behavior of complex organisms.

Understanding Embryonic Development

Kirschner said that the modern understanding of embryonic development can help explain how facilitated variation works. “Embryonic development is replete with cell types that have multiple options and ranges of options, such as the neural crest, that can form cartilage, nerve, and pigment,” he said. “Thus, changes in beak shape, pigmentation, or jaw structure can easily occur by changing the time and extent of a process rather than creating a new process.” In other words, the gene itself doesn’t have to be different; what changes is the timing or location of the gene’s expression.

The theory of facilitated variation, as outlined in The Plausibility of Life, is a new way of synthesizing the first two pillars of Darwin’s theory of evolution, natural selection and genetics, Kirschner said. He quoted a colleague who once told him that in the future, the only way to teach evolution would be through the explanatory lens of facilitated variation. “Any other approach,” Kirschner’s colleague told him, “would seem like an arbitrary selection of ‘Just-So Stories.'”

About the Speaker

Marc Kirschner, PhD, is founding chair of the department of systems biology at Harvard Medical School. His laboratory investigates three broad, diverse areas: regulation of the cell cycle, the role of cytoskeleton in cell morphogenesis, and mechanisms of establishing the basic vertebrate body plan.

Kirschner was elected Foreign Member of the Royal Society of London and a Foreign Member of the Academia Europaea in 1999. He was the 2001 recipient of the William C. Rose Award, presented by the American Society for Biochemistry and Molecular Biology. He received a 2001 International Award from the Gairdner Foundation of Toronto. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and has served on the advisory committee to the director of the National Institutes of Health and as president of the American Society for Cell Biology.

Kirschner arrived at Harvard Medical School in 1993 from the University of California, San Francisco, where he had served on the faculty as professor for fifteen years. He graduated from Northwestern University and received his PhD from the University of California, Berkeley. Following postdoctoral research at Berkeley and at the University of Oxford, he was appointed an assistant professor at Princeton University.

He and John Gerhart are coauthors of Cells, Embryos, and Evolution and The Plausibility of Life: Resolving Darwin’s Dilemma.

Also read: From the Annals Archive: How Darwin Upended the World

The Genius of Quantum Physicist Richard Feynman

A black and white photo of a man in a suit and tie, with math formulas scribbled on the blackboard in the background.

Missives from Feynman in Perfectly Reasonable Deviations from the Beaten Track, a book of his letters edited by daughter Michelle Feynman, reveal his genius and wit. What was his contribution to the canon of 20th-century quantum physics?

Published February 3, 2006

By Chris H. Greene

Richard Feynman in 1959. Image via Wikimedia Commons.

“Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers in the preceding generation … Learn from science that you must doubt the experts. As a matter of fact, I can also define science another way: Science is the belief in the ignorance of experts.”
— Richard Feynman, 1981

We all know the stories of Richard Feynman. He was at times a showman and a clown. He expressed irreverence toward prestigious, hoary organizations like the National Academy of Sciences and the Royal Swedish Academy of Sciences. The tragic death of his young wife during the time of the Manhattan Project became familiar to millions through the touching Matthew Broderick film, Infinity. But behind his public persona lay one of the truly independent and innovative minds of the 20th century. Richard Feynman felt an intense, personal need to see physical phenomena in his own terms, and from his own perspectives, using theories that he generated himself.

At the same time, Feynman’s theoretical constructs did not arrive on the planet like a bolt from nowhere. His most important contributions were ideas that were in some sense already “in the wind,” but his way of developing them into consistent theoretical descriptions of nature differed dramatically from methods popular at the time.

Paradoxical Infinities

It may seem surprising, but the theoretical program that resulted in Feynman’s 1965 Nobel Prize (shared that year with Julian Schwinger and Sin-Itiro Tomonaga) was aimed not so much at explaining the result of any particular experiment as at resolving some of the apparently self-contradictory aspects of both classical and quantum electrodynamics. If you shake an electron, it radiates light waves, whose electric fields must in turn act back on the electron to lower its energy. But attempts to calculate this “radiative reaction force” led to infinities that were paradoxical and in clear contradiction with experience.

In Feynman’s doctoral thesis work with John Wheeler at Princeton, the two entertained fantastic possibilities in a desperate attempt to solve these paradoxical infinities. One peculiar notion that emerged was that if, in a certain sense, the classical fields are allowed to propagate backward in time, the paradoxes and the infinities appeared to be magically removed.

A variant of this idea survived when Feynman wrote down his quantum mechanical formulation of the problem (an idea he credited Wheeler with originally tossing out): that the positron, the antiparticle of an electron, can be regarded as an ordinary electron moving backward in time. Surely you’re joking, Mr. Feynman! As fantastic and unbelievable as this idea seems when stated in words, when formulated mathematically it yielded a consistent theoretical framework, free of the troubling infinities.

Moreover, Feynman created a simple way for these complicated calculations to be carried out, which is still used today: first, draw lines that represent electrons, positrons, and photons moving forward and backward in time in different ways that can contribute to the process of interest. Then apply Feynman’s rules for translating each such Feynman diagram into a precise mathematical formula.

Quantum Electrodynamics

One of the most famous applications of Feynman’s quantum electrodynamics was his calculation of a tiny frequency difference between two nearly identical energy levels (2S1/2 and 2P1/2) of the simplest atom, hydrogen. Willis Lamb and Robert C. Retherford had caused a stir in 1947 when they measured this frequency difference to be 1057 million cycles per second (MHz), because the then-accepted theory of Paul Dirac suggested that this difference should be identically zero. The methods for calculating this interaction between an atomic electron and the “vacuum-fluctuating electric fields of free space” gave infinity, a useless result entirely irrelevant to the experiment.

Using the Feynman calculus, however, a result very close to the experimental frequency splitting (the so-called “Lamb shift”) was obtained. In the intervening decades, both experiment and theory have improved, and we now know this Lamb shift experimentally to be 1057.8447 (plus or minus 0.0034) MHz, while theory based on Feynman’s work predicts 1057.839 (plus or minus 0.006) MHz.
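
As a quick sanity check, a short Python sketch using only the numbers quoted above shows that the gap between the two central values is smaller than the combined (quadrature) uncertainty, which is what agreement means here:

import math

# Sanity check on the quoted Lamb shift values: do experiment and theory
# agree within their combined uncertainties? (Values as quoted above.)

exp_val, exp_err = 1057.8447, 0.0034  # MHz, experiment
th_val, th_err = 1057.839, 0.006      # MHz, theory based on Feynman's work

difference = abs(exp_val - th_val)                # 0.0057 MHz
combined = math.sqrt(exp_err**2 + th_err**2)      # about 0.0069 MHz
print(difference < combined)                      # True: consistent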

Within experimental uncertainties, and within theoretical uncertainties associated with our imperfect understanding of the proton’s nuclear structure, these agree. Nature thus confirms the remarkable synthesis of theoretical ideas into working quantum electrodynamics, achieved by Feynman, as well as by Schwinger and by Tomonaga.

Advancing the World of Theoretical Physics

And what are we to take from these strange notions? Are positrons really just electrons moving backward in time? Feynman tended to dismiss such queries as having no more relevance to physics than debates about how many angels fit on the head of a pin. Here is one more example where the equations developed by theoretical physicists, after extensive testing, are the bottom line. Seemingly bizarre philosophical implications, when those equations are stated in words (such as “particles moving backward in time”), do not matter a whit. What matters from the physicist’s perspective is the explanatory and predictive power of the resulting theory.

In the end, Feynman’s work parallels eerily the way the “luminiferous aether” was abandoned as irrelevant, once physicists accepted around the beginning of the 20th century that Maxwell’s equations by themselves adequately describe all classical phenomena of electricity and magnetism. And it is similar to the way Einstein’s equations of relativity, and the peculiar quantum theory, were accepted despite their troubling, almost nonsensical implications for how we think about time, space, and reality. As Niels Bohr wrote and was quoted in Wheeler and Feynman’s 1945 Reviews of Modern Physics article:

We must, therefore, be prepared to find that further advance…will require a still more extensive renunciation of features which we are accustomed to demand of the space-time mode of description.

The world of theoretical physics is better today because Richard Feynman was brave enough to contemplate and develop ideas that required such a renunciation.

Also read: The Challenge of Quantum Error Correction

Lee Smolin: A Crisis in Fundamental Physics

Various math equations written on a blackboard.

With an infinity of universes proposed, and more than 10^400 theories, is experimental proof of physical laws still feasible?

Published January 1, 2006

By Lee Smolin

Image courtesy of WP_7824 via stock.adobe.com.

For more than two hundred years, we physicists have been on a wild ride. Our search for the most fundamental laws of nature has been rewarded by a continual stream of discoveries. Each decade back to 1800 saw one or more major additions to our knowledge about motion, the nature of matter, light and heat, space and time. In the 20th century, the pace accelerated dramatically.

Then, about 30 years ago, something changed. The last time there was a definitive advance in our knowledge of fundamental physics was the construction of the theory we call the standard model of particle physics in 1973. The last time a fundamental theory was proposed that has since gotten any support from experiment was a theory about the very early universe called inflation, which was proposed in 1981.

Since then, many ambitious theories have been invented and studied. Some of them have been ruled out by experiment. The rest have, so far, simply made no contact with experiment. During the same period, almost every experiment agreed with the predictions of the standard model. Those few that didn’t produced results so surprising—so unwanted—that baffled theorists are still unable to explain them.

The Gap Between Theory and Experiment

The growing gap between theory and experiment is not due to a lack of big open problems. Much of our work since the 1970s has been driven by two big questions: 1) Can we combine quantum theory and general relativity to make a quantum theory of gravity? and 2) Can we unify all the particles and forces, and so understand them in terms of a simple and completely general law? Other mysteries have deepened, such as the question of the nature of the mysterious dark energy and dark matter.

Traditionally, physics has progressed by a continual interplay of theory and experiment. Theorists hypothesize ideas and principles, which are explored by stating them in precise mathematical language. This allows predictions to be made, which experimentalists then test. Conversely, when there is a surprising new experimental finding, theorists attempt to model it in order to test the adequacy of the current theories.

There appears to be no precedent for a gap between theory and experiment lasting decades. It is something we theorists talk about often. Some see it as a temporary lull and look forward to new experiments now in preparation. Others speak of a new era in science in which mathematical consistency has replaced experiment as the final arbiter of a theory’s correctness. A growing number of theoretical physicists, myself among them, see the present situation as a crisis that requires us to reexamine the assumptions behind our so-far unsuccessful theories.

I should emphasize that this crisis involves only fundamental physics—that part of physics concerned with discovering the laws of nature. Most physicists are concerned not with this but with applying the laws we know to understand and control myriads of phenomena. Those are equally important endeavors, and progress in these domains is healthy.

Contending Theories

Since the 1970s, many theories of unification have been proposed and studied, going under fanciful names such as preon models, technicolor, supersymmetry, brane worlds, and, most popularly, string theory. Theories of quantum gravity include twistor theory, causal set models, dynamical triangulation models, and loop quantum gravity. One reason string theory is popular is that there is some evidence that it points to a quantum theory of gravity.

One source of the crisis is that many of these theories have many freely adjustable parameters. As a result, some theories make no predictions at all. But even in the cases where they make a prediction, it is not firm. If the predicted new particle or effect is not seen, theorists can keep the theory alive by changing the value of a parameter to make it harder to see in experiment.

The standard model of particle physics has about 20 freely adjustable parameters, whose values were set by experiment. Theorists have hoped that a deeper theory would provide explanations for the values the parameters are observed to take. There has been a naive, but almost universal, belief that the more different forces and particles are unified into a theory, the fewer freely adjustable parameters the theory will have.

Parameters

This is not the way things have turned out. There are theories that have fewer parameters than the standard model, such as technicolor and preon models. But it has not been easy to get them to agree with experiment. The most popular theories, such as supersymmetry, have many more free parameters—the simplest supersymmetric extension of the standard model has 105 additional free parameters. This means that the theory is unlikely to be definitively tested in upcoming experiments. Even if the theory is not true, many possible outcomes of the experiments could be made consistent with some choice of the parameters of the theory.

String theory comes in a countably infinite number of versions, most of which have many free parameters. String theorists speak no longer of a single theory, but of a vast “landscape” [1] of possible theories. Moreover, some cosmologists argue for an infinity of universes, each of which is governed by a different theory.

A tiny fraction of these theories may be roughly compatible with present observation, but this is still a vast number, estimated to be greater than 10^400 theories. (Nevertheless, so far not a single version consistent with all experiments has been written down.) No matter what future experiments see, the results will be compatible with vast numbers of theories, making it unlikely that any experiment could either confirm or falsify string theory.

A New Definition of Science

This realization has brought the present crisis to a head. Steven Weinberg and Leonard Susskind have argued for a new definition of science in which a theory may be believed without being subject to a definitive experiment whose result could kill it. Some theorists even tell us we are faced with a choice of giving up string theory—which is widely believed by theorists—or giving up our insistence that scientific theories must be testable. As Steven Weinberg writes in a recent essay [2]:

Most advances in the history of science have been marked by discoveries about nature, but at certain turning points we have made discoveries about science itself…Now we may be at a new turning point, a radical change in what we accept as a legitimate foundation for a physical theory…The larger the number of possible values of physical parameters provided by the string landscape, the more string theory legitimates anthropic reasoning as a new basis for physical theories: Any scientists who study nature must live in a part of the landscape where physical parameters take values suitable for the appearance of life and its evolution into scientists.

An Infinity of Theories

Among an infinity of theories and an infinity of universes, the only predictions we can make stem from the obvious fact that we must live in a universe hospitable to life. If this is true, we will not be able to subject our theories to experiments that might either falsify or confirm them. But, say some proponents of this view, if this is the way the world is, it’s just too bad for outmoded ways of doing science. Such a radical proposal by such justly honored scientists requires a considered response.

I believe we should not modify the basic methodological principles of science to save a particular theory—even a theory that the majority of several generations of very talented theorists have devoted their careers to studying. Science works because it is based on methods that allow well-trained people of good faith, who initially disagree, to come to consensus about what can be rationally deduced from publicly available evidence. One of the most fundamental principles of science has been that we only consider as possibly true those theories that are vulnerable to being shown false by doable experiments.

Contending Styles of Research

I think the problem is not string theory, per se. It goes deeper, to a whole methodology and style of research. The great physicists of the beginning of the 20th century—Einstein, Bohr, Mach, Boltzmann, Poincaré, Schrödinger, Heisenberg—thought of theoretical physics as a philosophical endeavor. They were motivated by philosophical problems, and they often discussed their scientific problems in the light of a philosophical tradition in which they were at home. For them, calculations were secondary to a deepening of their conceptual understanding of nature.

After the success of quantum mechanics in the 1920s, this philosophical way of doing theoretical physics gradually lost out to a more pragmatic, hard-nosed style of research. This is not because all the philosophical problems were solved: to the contrary, quantum theory introduced new philosophical issues, and the resulting controversy has yet to be settled. But the fact that no amount of philosophical argument settled the debate about quantum theory went some way to discrediting the philosophical thinkers.

It was felt that while a philosophical approach may have been necessary to invent quantum theory and relativity, thereafter the need was for physicists who could work pragmatically, ignore the foundational problems, accept quantum mechanics as given, and go on to use it. Those who either had no misgivings about quantum theory or were able to put their misgivings to one side were able in the next decades to make many advances all over physics, chemistry, and astronomy.

The shift to a more pragmatic approach to physics was completed when the center of gravity of physics moved to the United States in the 1940s. Feynman, Dyson, Gell-Mann, and Oppenheimer were aware of the unsolved foundational problems, but they taught a style of research in which reflection on them had no place in research.

Physics in the 1970s

By the time I studied physics in the 1970s, the transition was complete. When we students raised questions about foundational issues, we were told that no one understood them, but it was not productive to think about that. “Shut up and calculate,” was the mantra. As a graduate student, I was told by my teachers that it was impossible to make a career working on problems in the foundations of physics. My mentors pointed out that there were no interesting new experiments in that area, whereas particle physics was driven by a continuous stream of new experimental discoveries. The one foundational issue that was barely tolerated, although discouraged, was quantum gravity.

This rejection of careful foundational thought extended to a disdain for mathematical rigor. Our uses of theories were based on rough-and-ready calculation tools and intuitive arguments. There was in fact good reason to believe that the standard model of particle physics is not mathematically consistent at a rigorous level. As a graduate student at Harvard, I was taught not to worry about this because the contact with experiment was more important. The fact that the predictions were confirmed meant that something was right, even if there might be holes in the mathematical and conceptual foundations, which someone would have to fix later.

The Disappearance of Contact with Experiment

In retrospect, it seems likely that this style of research, in which conceptual puzzles and issues of mathematical rigor were ignored, can only succeed if it is tightly coupled to experiment. When the contact with experiment disappeared in the 1980s, we were left with an unprecedented situation.

The string theories are understood, from a mathematical point of view, as badly as the older theories, and most of our reasoning about them is based on conjectures that remain unproven after many years, at any level of rigor. We do not even have a precise definition of the theory, either in terms of physical principles or mathematics. Nor do we have any reasonable hope to bring the theory into contact with experiment in the foreseeable future. We must ask how likely it is that this style of research can succeed at its goal of discovering new laws of nature.

It is difficult to find yourself in disagreement with the majority of your scientific community, let alone with several heroes and role models. But after a lot of thought I’ve come to the conclusion that the pragmatic style of research is failing. By 1980, we had probably gone as far as we could by following this pragmatic, antifoundational methodology.

If we have failed to solve the key problems of quantum gravity and unification in a way that connects to experiment, perhaps these problems cannot be solved using the style of research that we theoretical physicists have become accustomed to. Perhaps the problems of unification and quantum gravity are entangled with the foundational problems of quantum theory, as Roger Penrose and Gerard ’t Hooft think. If they are right, thousands of theorists who ignore the foundational problems have been wasting their time.

Unification and Quantum Gravity

There are approaches to unification and quantum gravity that are more foundational. Several of them are characterized by a property we call background independence. This means that the geometry of space is contingent and dynamical; it provides no fixed background against which the laws of nature can be defined. General relativity is background-independent, but standard formulations of quantum theory—especially as applied to elementary particle physics—cannot be defined without the specification of a fixed background. For this reason, elementary particle physics has difficulty incorporating general relativity.

String theory grew out of elementary particle physics and, at least so far, has only been successfully defined on fixed backgrounds. Thus, each of the infinitely many known string theories is associated with a single space-time background.

Those theorists who feel that theories should be background-independent tend to be more philosophical, more in the tradition of Einstein. Background-independent approaches to quantum gravity have been pursued by such philosophically sophisticated scientists as John Baez, Chris Isham, Fotini Markopoulou, Carlo Rovelli, and Raphael Sorkin, who are sometimes even invited to speak at philosophy conferences. This is not surprising, because the debate between those who think space has a fixed structure and those who think of it as a network of dynamical relationships goes back to the disputes between Newton and his contemporary, the philosopher Leibniz.

Meanwhile, many of those who continue to reject Einstein’s legacy and work with background-dependent theories are particle physicists who are carrying on the pragmatic, “shut-up-and-calculate” legacy in which they were trained. If they hesitate to embrace the lesson of general relativity that space and time are dynamical, it may be because this is a shift that requires some amount of critical reflection in a more philosophical mode.

A Return to the Old Style of Research

Thus, I suspect that the crisis is a result of having ignored foundational issues. If this is true, the problems of quantum gravity and unification can only be solved by returning to the older style of research.

How well could this be expected to turn out? For the last 20 years or so, there has been a small resurgence of the foundational style of research. It has taken place mainly outside the United States, but it is beginning to flourish in a few centers in Europe, Canada, and elsewhere. This style has led to very impressive advances, such as the invention of the idea of the quantum computer. While this was suggested earlier by Feynman, the key step that catalyzed the field was made by David Deutsch, a very independent, foundational thinker living in Oxford.

For the last few years, experimental work on the foundations of quantum theory has been moving faster than experimental particle physics. And some leading experimentalists in this area, such as Anton Zeilinger, in Vienna, talk and write about their experimental programs in the context of the philosophical problems that motivate them.

Currently, there is a lot of optimism and excitement among the quantum gravity community about approaches that embrace the principle of background independence. One reason is that we have realized that some current experiments do test aspects of quantum gravity; some theories are already ruled out and others are to be tested by results expected soon.

Collective Phenomena

A notable feature of the background independent approaches to quantum gravity is that they suggest that particle physics, and even space-time itself, emerge as collective phenomena. This implies a reversal of the hierarchical way of looking at science, in which particle physics is the most “fundamental” and mechanisms by which complex and collective behavior emerge are less fundamental.

So, while the new foundational approaches are still pursued by a minority of theorists, the promise is quite substantial. We have in front of us two competing styles of research. One, which 30 years ago was the way to succeed, now finds itself in a crisis because it makes no experimental predictions, while another is developing healthily, and is producing experimentally testable hypotheses. If history and common sense are any guide, we should expect that science will progress faster if we invest more in research that keeps contact with experiment than in a style of research that seeks to amend the methodology of science to excuse the fact that it cannot make testable predictions about nature.

Also read: What Physics Tells Us About the World

References

[1] Smolin, L. 1997. The Life of the Cosmos. Oxford University Press.

[2] Weinberg, S. 2005. Living in the multiverse.

Further Reading

Smolin, L. 2006. The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. Houghton Mifflin, New York.

Woit, P. 2006. Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law. Basic Books, New York.


About the Author

Lee Smolin is a theoretical physicist who has made important contributions to the search for a quantum theory of gravity. He is a founding researcher at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario. He is the author of The Life of the Cosmos (Oxford, 1997), Three Roads to Quantum Gravity (Orion, 2001), and the forthcoming The Trouble with Physics (Houghton Mifflin, 2006).

Reef Madness and the Meaning of Coral

Colorful fish swim among a coral reef in the Ocean.

While the nineteenth century’s greatest scientific debate was that over Charles Darwin’s theory of evolution, the century’s other great scientific debate, almost forgotten now, posed problems even more vexing than the species question did.

Published November 11, 2005

By David Dobbs

Image courtesy of Chonlasub via stock.adobe.com.

The Other Debate of Darwin’s Day

Asked to name the 19th century’s major scientific squabble, most people will correctly name the row over Darwinism. Few recall the era’s other great debate—regarding the coral reef problem—even though it was nearly as fierce as that over the species problem. The reef debate saw many of the same philosophical issues contested by many of the same players. These included Charles Darwin, the naturalist Louis Agassiz, and Alexander Agassiz, an admirer of the former and the son of the latter. Their tangled struggle is one of the strangest tales in science.

The clash over Darwin’s species theory was partly one between empiricism, as represented by Darwin’s superbly documented Origin of Species, and the idealist or creationist natural science dominant before then. Louis Agassiz, the Swiss-born naturalist who became the leading light of American science after moving to the United States in 1846, offered a particularly seductive articulation of creationist theory. He held huge audiences spellbound as he explained how nature’s patterned complexity could only have sprung from a single, divine intelligence. A species, he said, was “a thought of God.” His elegant description made him a giant of American science, the director of Harvard’s new Museum of Comparative Zoology, and a man of almost unrivaled fame.

But the publication of Origin, in 1859, confronted Agassiz’s idealist creationism with an empirically robust naturalistic description of species origin. Though Agassiz opposed Darwin’s theory vigorously, his colleagues increasingly took Darwin’s view, and by 1870, Louis Agassiz was no longer taken seriously by his peers. He could hardly have fallen further.

A Son

Louis’s only son, Alexander, came of age watching this fall. Smart and careful as child and man—he began his scientific career as an assistant at the Museum of Comparative Zoology and would manage it after his father died—Alexander seemed determined to avoid his father’s excesses. Where Louis was profligate, Alexander was frugal. Where Louis was expansive and extroverted, Alex was reserved and liked to work in private. And where Louis favored a creationist theory based on speculation, Alex preferred the empirical approach established by Darwin.

By the age of 35, Alexander Agassiz had created a happy life. He loved his work at the museum, his wife and three children, and by investing in and for 18 months managing a copper mine in Michigan, he had made himself quite rich. Yet his luck changed in 1873. Louis, then 66, died of a stroke two weeks before Christmas. Ten days later, Alex’s wife, Anna Russell Agassiz, died of pneumonia.

Alexander Agassiz. Image via Wikimedia Commons

Wanderings and Reefs

Devastated by this double blow, Alex spent three years mostly traveling, mortally depressed. He felt able to “get back in harness,” as he put it, only when, in 1876, he engaged the coral reef problem. How did these great structures, built from the skeletons of animals that could grow only in shallow water, come to occupy platforms rising from the ocean’s depths? Naturalists had discerned in the early 1800s how corals grew, but the genesis of their underlying platforms remained obscure.

The prevailing explanation, first offered in 1837, held that coral reefs formed on subsiding islands. The coral first grew along shore, forming fringing reefs. As the island sank and lagoons opened between shore and reef, fringing reef became barrier reef. When the island sank out of sight, barrier reef became atoll. Thus this subsidence theory, as it was known, explained all main reef forms.

Alex, drawn to this problem by his friend Sir John Murray, a prominent Scottish oceanographer, thought the subsidence theory was just a pretty story. The theory rested on little other than the reef forms, while considerable evidence, such as the geology of many islands and most reef observations made during the mid-1800s, argued against it. Now Murray, who had just returned from a five-year oceanographic expedition aboard the HMS Challenger, told Alex of an alternative possibility. Murray had discovered that enough plankton floated in tropical waters to create a rain of planktonic debris that, given geologic time, could raise many submarine mountains up to shallows where coral reefs could form.

Alex immediately liked this idea, for it rose from close observation rather than conceptual speculation and relied on known rather than conjectural forces. Inspired for the first time since his wife’s death three years before, he began designing an extensive field research program to prove it.

There was only one problem: the person who had authored the subsidence theory was Charles Darwin.

Thirty Years of Fieldwork

Darwin had posited the subsidence theory as soon as he returned from the Beagle voyage in 1837. Like his evolution theory, it was a brilliant synthesis that explained many forms as the result of incremental change. But it did not rest on the sort of careful, methodical accumulation of evidence that underlay his evolutionary theory. Darwin conceived it before he ever saw a coral reef and published it when he’d seen only a few.

Yet the theory explained so much that it had launched Darwin’s career. Since then, of course, Darwin had developed his evolution theory, destroyed Louis’s career, and become the most renowned and powerful man in science. Alex knew he was courting trouble when he decided to champion an alternate theory. But he couldn’t resist such an enticing problem. And he firmly believed that Darwin had muffed it.

Alex spent much of the next 30 years collecting evidence. He developed a complicated and nuanced theory holding that different forces, primarily a Murray-esque accrual, erosion, some uplift, and occasionally some subsidence, combined in different ways to create the world’s different reef formations. He found evidence in every major reef formation on the globe. And so as the century ended, an Agassiz again faced Darwin (or Darwin’s legacy, for Darwin had died in 1882). Only this time the Agassiz held the empirical evidence and Darwin the pretty story.

Yet Alex hesitated to publish, even after he completed his fieldwork in 1903. Every year, Murray would ask Alex about the reef book. Every year Alex would say the latest draft hadn’t worked, but that he had found a better approach and would soon finish.

The last time he told Murray this was in 1910, when they met in London before Alex sailed home to the U.S. after a winter in Paris. On the fifth night out of Southampton, he died in his sleep. Murray, hearing the news by cable a couple days later, was much aggrieved—and stunned to hear what followed. A thorough search had found no sign of the coral reef book. It was, Alexander’s son George later wrote, “an excellent example of his habit of carrying his work in his head until the last minute.”

One Irony Among Many

The coral reef debate didn’t end until 1951, when U.S. government geologists surveying Eniwetok, a Marshall Islands atoll, prior to a hydrogen bomb test there, finally drilled deep enough to resolve the mystery. If Darwin was right about reefs accumulating atop their sinking foundations, the drill should pass through at least several hundred feet of coral before hitting the original underlying basalt. If Agassiz was right, the drill would go through a relatively thin veneer of coral before hitting basalt or marine limestone.

It speaks of the power of Alexander’s work that the reef expert directing the drilling, Harry Ladd, expected to prove Agassiz right. But the power of Darwin’s work was such that as the drill spun deep, it passed through not a few dozen or even a few hundred feet, but through some 4,200 feet of coral before striking basalt. Darwin was right, Agassiz wrong.

How did Alex miss this? In retrospect, geologists can identify various observational mistakes Alexander made. But Alex’s bigger problem was his singular place in the profound changes science underwent in the 1800s. Natural science in particular was struggling to define an empirical theoretical method. Alex played by the rules that most scientists, including Darwin, swore to: a Baconian inductivism that built theory atop accrued stacks of observed facts.

In reality, most scientists come to their theories through deductive leaps, then try to prove them by amassing evidence. A theory’s value rests not on its genesis, but on its proof. Today this is accepted and indeed codified as the “hypothetico-deductive method,” and its resulting theories are considered empirical as long as their proof lies in replicable evidence. But in Alex’s day, when pretty stories built on leaps of imagination spoke of reactionary creationism rather than creative empiricism, such theorizing was called speculation, and it was a four-letter word.

Alexander Agassiz was keenly sensitive to the dangers of such work. Yet his singular position fated him to take up a question that not only lay beyond the tools of his time, but which trapped him in the era’s most confounding difficulties of method and philosophy. He sought a solution that belonged to another age.

About the Author

David Dobbs is author of Reef Madness: Charles Darwin, Alexander Agassiz, and the Meaning of Coral, from which this lecture is drawn. You can find more of his work at daviddobbs.net.

Promoting Science, Human Rights in the Middle East

A black fist and a white fist raised in solidarity.

Two human rights activists are named winners of the Academy’s Human Rights Award for 2005.

Published October 17, 2005

By Fred Moreno

Image courtesy of Manpeppe via stock.adobe.com.

Two activists who have long fought for the rights of scientists, especially in the Middle East, received the 2005 Heinz R. Pagels Human Rights of Scientists Award at the Academy’s 187th Business Meeting, held on September 29.

The 2005 winners are Zafra Lerman, Distinguished Professor of Science and Public Policy and head of the Institute for Science Education and Science Communication at Columbia College Chicago, and Herman Winick, assistant director and professor emeritus of the Stanford Synchrotron Radiation Laboratory at Stanford University.

Zafra Lerman

For more than a decade, in her role as chair of the Subcommittee on Scientific Freedom and Human Rights of the American Chemical Society’s Committee on International Activities, Zafra Lerman has raised human rights awareness among chemists and has been the American Chemical Society’s leading voice on behalf of the human rights of scientists throughout the world. She has traveled to the former Soviet Union, Russia, Cuba, China, and the Middle East, bringing encouragement to repressed scientists.

In 2003 she worked with the Israel Academy of Science to secure permission for nine Palestinian scientists to attend a conference in Malta, where scientists from ten Middle Eastern nations met to tackle problems of research and education in the politically and economically troubled region.

Herman Winick

Herman Winick has been an extraordinarily effective and tireless advocate for the human rights of scientists for more than 25 years. He was one of the original supporters and founders of the Sakharov-Orlov-Scharansky (SOS) group in the 1980s.

In the 1990s, he strongly supported the human rights activities of the American Physical Society (APS) on behalf of repressed scientists around the world, first as a member and then as chair of the APS Committee on International Freedom of Scientists. In the mid-1990s he conceived the brilliant idea of creating a new synchrotron research facility in the Middle East, known as the SESAME project, to be located in Jordan and to solicit participants from regional nations such as Egypt, the Palestinian Authority, Israel, and Syria; the facility is now operating.

For the past three years he has worked on behalf of an Iranian dissident physicist, Professor Hadizadeh, who has been imprisoned for his pro-democracy activities. Due in large part to efforts by Winick, Professor Hadizadeh is now carrying out research in the United States.

Pagels Award

The Academy’s first human rights award was given in 1979 to Russian physicist Andrei Sakharov. Renamed in 1988 in honor of former Academy president Heinz R. Pagels, the award has been bestowed on such eminent scientists as Chinese dissident Fang Li-Zhi, Russian nuclear engineer Alexander Nikitin, and Cuban economist Martha Beatriz Roque Cabello. The 2004 award was presented to Dr. Nguyen Dan Que of Vietnam.

Also read: Promoting Human Rights through Science

Bringing a Scientific Perspective to Wall Street

The corner of Pearl Street and Wall Street in lower Manhattan.

Emanuel Derman was a pioneer in the now-established field of financial engineering, which was influenced by his background in theoretical physics.

Published October 6, 2005

By Adelle Caravanos

Image courtesy of helivideo via stock.adobe.com.

Emanuel Derman, director of the Columbia University financial engineering program and Head of Risk at Prisma Capital Partners, will speak at the Academy on October 19. The self-described “quant” will discuss his unusual career path, from theoretical physics to Wall Street, where he became known for co-developing the Black-Derman-Toy interest-rate model at Goldman Sachs. His book, My Life as a Quant: Reflections on Physics and Finance, became one of Business Week’s Top Ten Books of 2004.

The Academy spoke with Derman in advance of his lecture.

*Some quotes were lightly edited for length and clarity.*

First, please tell our readers what a quant is!

Well, “quant” is short for quantitative strategist or quantitative analyst. It’s somebody who uses mathematics, physics, statistics, computer science, or any combination of these things at a technical level to try to understand the behavior of stock prices, auction prices, bonds, commodities, and various kinds of derivatives from a mathematical point of view — from a predictive view to some extent.

Is it safe to assume that most of the major banks employ quants?

Yes. When interest rates went up astronomically around [the time of], and even after, the oil crisis of ’73, [the hiring of quants] started in the fixed income business. Fixed income has always been a much more quantitative business historically than the rest of the securities business and people have always thought that bonds and fixed income investments were fairly non-volatile, stable, and safe. Once interest rates went up to around 15 percent and gold prices went up like crazy, investment banks and companies had a whole different range of problems to deal with than before.

They’d always known stocks were volatile, but not that bonds were. So, they started hiring people out of non-financial parts of universities, non-business schools — computer scientists, mathematicians, physicists, Bell Labs — to tackle these problems, partly because they involved more mathematics than people were used to and partly because they involved more computer science than people were used to.

If you had a whole portfolio of things, you couldn’t do them efficiently on paper anymore. You couldn’t take account of the changes or take account of what they were worth, so people started building computer programs to do these things. And so, there was an in-road there for a lot of quantitative people.
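Derman’s point about repricing portfolios is easy to make concrete. The sketch below is a hypothetical illustration in Python, not anything drawn from an actual trading system: it revalues a toy portfolio of fixed-coupon bonds as yields climb toward the levels he describes, the sort of bookkeeping that quickly outgrows pencil and paper.

```python
# Hypothetical sketch: repricing a small portfolio of annual-coupon bonds
# as yields move. All names and numbers here are illustrative only.

def bond_price(face, coupon_rate, years, yield_rate):
    """Present value of a bond paying one coupon per year."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + yield_rate) ** years
    return pv_coupons + pv_face

# A toy portfolio: (face value, coupon rate, years to maturity)
portfolio = [(1000, 0.05, 10), (1000, 0.07, 5), (1000, 0.04, 30)]

for y in (0.05, 0.10, 0.15):  # yields rising toward the levels of the late 1970s
    total = sum(bond_price(f, c, n, y) for f, c, n in portfolio)
    print(f"Portfolio value at {y:.0%} yield: ${total:,.2f}")
```

Even this trivial loop makes the point: every move in rates forces a full revaluation of every position, which is why the banks began hiring people who could program.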

I think it was good to get in [to quantitative strategy] early because you could make a contribution with much less skill and talent. After 20 years everything gets so complicated mathematically that it’s much harder to do anything. It’s not impossible; people do it. But it was very exciting in the early ’80s because there were virtually no textbooks. You couldn’t get a degree in the field. Everybody was self-taught. It was exciting.

When you started at Goldman, were you one of the first of their quants?

They had maybe 10 or 20 people there. I was early, but I wasn’t the first.

You talk in your book about the difference between the way traders and quants approach problems.

I think the differences are less extreme now because quantitative methods have become much more ubiquitous all over Wall Street, particularly in hedge funds. But, yes, traders were impulsive, sharp, and gregarious. They liked meeting with people, and if you worked on the trading floor everybody was yelling and screaming. It’s exciting, but for people coming from an academic background, it is hard to concentrate! It’s chaotic. You have to multi-task a lot, which is very disturbing if you grew up wanting to do just one thing, like getting a PhD and working for six years solidly on it.

You also make the distinction between working on the mathematical models and the actual science or technology of working on the interfaces for the people. Which did you enjoy more?

I liked both. When I was in physics, we were always trying to do research and it was hard, lonely work. You shut yourself in an office and tried to make progress and when you couldn’t get anything done, or when things weren’t working, you had nothing else to do — it was really depressing.

What was nice about working at Goldman was that there were useful things you could do, like software, that didn’t take the same mental effort. They took talent and they took skill, but you didn’t have to discover something new to do them. So it was very nice to spend a quarter of your time doing research and half doing software and another quarter dealing with people. It was a much more balanced life.

Do you think that things are changing as far as academicians looking down at people going into the business world, and business people looking down at academicians?

I do think it goes both ways. I certainly looked down on people dropping out of PhDs and going into business. It felt like you were leaving the monastery before you’d become a monk. Academics brought you up to look down on anybody who copped out. And then business people always used “academic” as sort of a dirty word — “academic” in the sense of “not applied.”

About the Author

Professor Emanuel Derman is director of Columbia University’s program in financial engineering and Head of Risk at Prisma Capital Partners, a fund of funds. My Life as a Quant: Reflections on Physics and Finance was one of Business Week’s top ten books of the year for 2004. Derman obtained a PhD in theoretical physics from Columbia University in 1973. Between 1973 and 1980 he did research in theoretical particle physics, and from 1980 to 1985 he worked at AT&T Bell Laboratories.

In 1985 Derman joined Goldman Sachs’ fixed income division, where he was one of the co-developers of the Black-Derman-Toy interest-rate model. From 1990 to 2000 he led the Quantitative Strategies group in the Equities division, which pioneered the study of local volatility models and the volatility smile. He was appointed a Managing Director of Goldman Sachs in 1997. In 2000 he became head of the firm’s Quantitative Risk Strategies group. He retired from Goldman Sachs in 2002.

Derman was named the IAFE/Sungard Financial Engineer of the Year 2000, and was elected to the Risk Hall of Fame in 2002.

Also read: What Happens When Innovative Scientists Embrace Entrepreneurship?

Exploring Nature and Nurture of Women in Science

Three young girls conduct a science experiment.

Are gender disparities in the STEM fields a matter of nature or nurture? A panel of experts explored the ways in which these two seemingly opposing viewpoints interact.

Published September 2, 2005

By David Berreby

In 1993, the makers of the Talking Barbie doll included, among the doll’s 270 recorded phrases, the sentence “Math class is tough!” Did they do this because girls don’t like math as much as boys? Or do girls not like math because of influences that include Barbie dolls?

That’s the issue for policymakers contemplating the absence of women from the rosters of some scientific fields. Do we perceive gender differences, and act upon them, because men and women are biologically different? (If so, then women may never be represented in some fields in the same numbers as men.) Or is it that we have trained ourselves to see those differences? (In which case, our beliefs are holding talented people back.)

Are gender disparities, in other words, a matter of nature or nurture?

To those dilemmas, most scientists agree, there is only one good response: those are the wrong questions. Gender differences are obviously a matter of both nature and nurture. This was one rare point of agreement among panelists at an April 14, 2005, discussion on women in science held at the Cooper Union, sponsored by the Women Investigators Network and the Ensemble Studio Theatre/Sloan Project’s First Light Festival, and moderated by former New York Times science editor Cornelia Dean. It’s absurd, the speakers agreed, to choose one side or the other. The real problem is to figure out how nature and nurture interact.

The State of Current Knowledge

In that quest, the essential controversy is over the state of current knowledge. Some argue that today’s science is good enough to speak of certain gender differences as facts, and to tell which are innate and which are not. Others believe we don’t yet know enough to tease nature and nurture apart and note that one era’s “facts” about men and women have a way of looking like prejudice to later generations.

The Cooper Union session had its roots in remarks made on January 14, 2005, by a former Harvard University president to a meeting of the National Bureau of Economic Research. Saying he wanted to “provoke” his listeners, he took a side on the basic underlying question: is today’s science good enough to speak with confidence about the biological differences between men and women?

He thinks it might be, arguing that it is reasonable to ask if one reason for the paucity of women in top-level academic science jobs might be innate differences in the ways male and female minds work. He also mentioned two other factors he thought merited consideration: the demands of family life and the hindrances posed by convention and prejudice. However, his remarks about innateness got the most attention, in part because a number of attendees were so offended they walked out.

One was Nancy Hopkins, who led MIT’s Study of Women Faculty in Science, an inquiry launched in 1995 to determine why women were underrepresented among MIT professors. At the Cooper Union Hopkins argued that if we don’t yet know how nature and nurture combine to shape people’s lives, the very act of claiming certainty can breed prejudices and stereotypes.

Increase in Representation at MIT

Pointing out that women went from comprising 2% of MIT’s student body to more than 40% in the 20th century, she noted that such an upsurge was unlikely to be due to a fundamental change in the biology of women’s brains. Overconfident belief that women weren’t suited to her field, biology, once kept talented people out, she said. Now women are well represented in biology, but the same beliefs obstruct their progress in mathematics.

Linda Gottfredson, professor in the School of Education and affiliated faculty in the University Honors Program at the University of Delaware, however, argued that innate gender differences are very clear—so clear, in fact, that a goal of gender parity in all professions seems unrealistic. Specifically, she said, male minds show a bias toward interest in things, while female minds are interested in people, creating what she called a genetic “tilt” that affects the types of careers they choose. In this light, supporting an idea of infinite human malleability “ignores both women’s own preferences and the huge challenges they face when committed to having both children and careers.”

Richard Haier, who studies the neurobiology of intelligence, consciousness, and personality at the University of California, Irvine, also argued for the innateness of intelligence. He explained that while bell curves of male and female scores of general intelligence “essentially completely overlap,” more men tend to be found at the extreme high end of the scale for a few specific cognitive abilities like mathematical reasoning. Using imaging technology, he found that different parts of men’s and women’s brains are related to general intelligence in one study and to mathematical reasoning in another.
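Haier’s point about the extreme high end is a general statistical one, and a small numerical sketch helps make it concrete. The parameters below are purely hypothetical, chosen only to show how two bell curves with identical means and nearly identical spreads can “essentially completely overlap” and still differ markedly in the far tail; they are not estimates from Haier’s studies or any other.

```python
# Purely illustrative parameters; not taken from any study in this article.
from statistics import NormalDist

group_a = NormalDist(mu=100, sigma=15)  # reference bell curve
group_b = NormalDist(mu=100, sigma=16)  # same mean, slightly wider spread

cutoff = 145  # an arbitrary "extreme high end" threshold, 3 sd above the mean

tail_a = 1 - group_a.cdf(cutoff)
tail_b = 1 - group_b.cdf(cutoff)

print(f"Fraction above {cutoff}: {tail_a:.4%} vs {tail_b:.4%}")
print(f"Tail ratio: {tail_b / tail_a:.2f}x")  # roughly 1.8x despite near-total overlap
```

The narrow lesson is that small differences in spread, invisible in the middle of a distribution, can produce lopsided ratios at its extremes, which is how overlap claims and tail claims can both be true at once.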

Gender Differences in Cognitive Traits

Diane Halpern, past-president of the American Psychological Association and professor of psychology at Claremont McKenna College, agrees that there is significant evidence to suggest that gender differences in cognitive traits exist. However, the observation that some differences may be due to innate traits does not mean those differences are immutable. This is because even innate aspects of a person interact with an environment, and environments can change.

Or as Halpern said, “The word innate does not mean forever.” In assessing male and female performance on tests and career tracks, she argued, it is important to remember that the academic world “has been devised as a very male, very heterosexual world,” and the fact that “the biological clock and tenure clocks run in the same time zone” has been bad for women in academia.

New York University’s Joshua Aronson acknowledged the importance and relevance of studies on biological gender differences, but also warned against too much stress on innateness as an explanation. He looks at how people, cultural animals that they are, respond to cultural notions. Cognition, he argued, is affected by a phenomenon called stereotype threat, “an apprehension arising from the awareness of a negative stereotype or personal reputation in a situation where you can confirm that stereotype with your behavior or the way you look.” Citing various studies that he and his colleagues have conducted, he said that changes in the environment in which high-achieving women navigate cognitive tests can affect their performance.

Are Sex Differences in Intelligence Innate?

None of this was abstract theory for the panelists or their audience at the Cooper Union. As scientists were both the investigators and the subjects of research about such issues, the talks and subsequent questions produced an unusual mix of passion, autobiography, and political debate. The panel was sharply split on the question of whether we have sufficient knowledge to say that sex differences in intelligence are innate. As is usually the case when scientific controversies mix with political disputes, some on each side accused the other of seeking to cut off debate and suppress inconvenient facts. And some speakers of both schools of thought stated that the other approach was actively harmful to young women.

Women should not be pushed to do things against their nature, said one of the innatists. Women should not be told that their ambitions aren’t natural, said one of the environmentalists. Though no speaker bolted (and the tone of the talks, questions, and reception stayed civil), there was no disguising the profound philosophical, political, and scientific disagreement at the heart of this question.

Also read: Strategies from Successful Women Scientists


About the Author

David Berreby has written for the New York Times Science Section, The New York Times Magazine, The Sciences, Discover, Smithsonian, Slate, The New Republic, and many other publications.

Wilson Bentley: The Man Who Studied Snowflakes

A shot of an intricate snowflake with a black background.

This Vermont-based farmer spent his career in the late 19th and early 20th centuries transforming the study of snowflakes into an art as well as a science.

Published June 1, 2005

By Fred Moreno

Image courtesy of vadim_fl via stock.adobe.com.

For centuries, humans have been fascinated by the endless variety of snowflakes and their six-fold symmetry. Scientists have sought to better understand how they are formed from single crystals of ice and why complex patterns arise spontaneously in such simple physical systems. According to Kenneth Libbrecht, a physicist at the California Institute of Technology, “snowflakes are the product of a rich synthesis of physics, mathematics, and chemistry.”

The oldest observation of snow crystals* on record appears in China around 135 BC, but the 17th century marked the dawn of their serious scientific study. A treatise by Johannes Kepler raised questions about the genesis of their hexagonal symmetry, while the French philosopher and mathematician René Descartes wrote detailed accounts of the geometrical perfection of snow-crystal structure.

Later in that century, English scientist Robert Hooke was the first to draw snowflakes through a microscope. Many others – including the 19th century Arctic explorer William Scoresby and the great Japanese scientist Ukichiro Nakaya in the mid-20th century – have made important contributions to understanding the science of snow and ice.

But if there is one person who transformed snowflake study into an art as well as a science, it was Wilson A. Bentley, a farmer from Jericho, Vermont, who spent a lifetime (1865-1931) studying snow crystals. He became interested in the structure of snow crystals as a teenager in the 1880s and tried sketching them through an old microscope his mother had given him. But he found this a frustrating task, since he had to work very rapidly to capture so complex and fleeting a subject.

Much Trial and Error

Eventually Bentley devised a means of attaching a bellows camera to a compound microscope, and after much trial and error, he finally succeeded in photographing his first snow crystal on January 15, 1885. Over the next 46 years, he took more than 5,000 snow-crystal images on glass photographic plates – as well as pictures of frost, pond ice, dew, and clouds. A little-known fact about Bentley is that he also studied rainfall and was the first American to measure raindrop sizes. His work in this area is one reason he is considered a pioneer in the science known today as cloud physics.

Keeping such fragile crystals frozen and unspoiled meant Bentley had to work in below-freezing temperatures. He caught the crystals on a blackboard and transferred them to a microscope slide, taking care not to breathe on them.

“The utmost haste must be used, for a snow crystal is often exceedingly tiny, and frequently not thicker than heavy paper,” Bentley wrote. “Furthermore…evaporation (not melting) soon wears them away, so that, even in zero weather, they last but a very few minutes.”

The Treasures of the Snow

In the late 1890s, the world outside Jericho began to notice Bentley’s work. Some of his photomicrographs were acquired by the Harvard Mineralogical Museum and he published an article with George Henry Perkins, a natural history professor at the University of Vermont. It was in this article that he first outlined the notion that no two snowflakes are alike. In the coming years, many other academic institutions throughout the world – as well as the American Museum of Natural History and the British Museum – acquired samples of Bentley’s work, and he published articles in such magazines as Scientific American, National Geographic, Nature, and Popular Science.

Finally, in 1931, he collaborated with William J. Humphreys, chief physicist for the U.S. Weather Bureau, on a book, Snow Crystals, that would be the culmination of Bentley’s life’s work. It was illustrated with 2,500 snowflake photographs. Just a few weeks later, on a cold December day, Bentley died of pneumonia at his Jericho farm at the age of 66.

In the Old Testament, God asks of Job, “Hast thou entered into the treasures of the snow?” Wilson “Snowflake” Bentley surely would have answered, “Yes!”

*To most people, there is no difference between snowflakes and snow crystals. But there is a meteorological difference. A snow crystal refers to a single crystal of ice while a snowflake can mean an individual crystal or a cluster of them formed together. In short, a snowflake is always a snow crystal, but a snow crystal is not always a snowflake.

Also read: The Culture Crosser: The Sciences and Humanities