
Reef Madness and the Meaning of Coral

While the nineteenth century’s greatest scientific debate was that over Charles Darwin’s theory of evolution, the century’s other great scientific debate, almost forgotten now, posed problems even more vexing than the species question did.

Published November 11, 2005

By David Dobbs


The Other Debate of Darwin’s Day

Asked to name the 19th century’s major scientific squabble, most people will correctly name the row over Darwinism. Few recall the era’s other great debate—regarding the coral reef problem—even though it was nearly as fierce as that over the species problem. The reef debate saw many of the same philosophical issues contested by many of the same players. These included Charles Darwin, the naturalist Louis Agassiz, and Alexander Agassiz, an admirer of the former and the son of the latter. Their tangled struggle is one of the strangest tales in science.

The clash over Darwin’s species theory was partly one between empiricism, as represented by Darwin’s superbly documented Origin of Species, and the idealist or creationist natural science dominant before then. Louis Agassiz, the Swiss-born naturalist who became the leading light of American science after moving to the United States in 1846, offered a particularly seductive articulation of creationist theory. He held huge audiences spellbound as he explained how nature’s patterned complexity could only have sprung from a single, divine intelligence. A species, he said, was “a thought of God.” His elegant description made him a giant of American science, the director of Harvard’s new Museum of Comparative Zoology, and a man of almost unrivaled fame.

But the publication of Origin, in 1859, confronted Agassiz’s idealist creationism with an empirically robust naturalistic description of species origin. Though Agassiz opposed Darwin’s theory vigorously, his colleagues increasingly took Darwin’s view, and by 1870, Louis Agassiz was no longer taken seriously by his peers. He could hardly have fallen further.

A Son

Louis’s only son, Alexander, came of age watching this fall. Smart and careful as child and man—he began his scientific career as an assistant at the Museum of Comparative Zoology and would manage it after his father died—Alexander seemed determined to avoid his father’s excesses. Where Louis was profligate, Alexander was frugal. Where Louis was expansive and extroverted, Alex was reserved and liked to work in private. And where Louis favored a creationist theory based on speculation, Alex preferred the empirical approach established by Darwin.

By the age of 35, Alexander Agassiz had created a happy life. He loved his work at the museum, his wife and three children, and by investing in and for 18 months managing a copper mine in Michigan, he had made himself quite rich. Yet his luck changed in 1873. Louis, then 63, died of a stroke two weeks before Christmas. Ten days later, Alex’s wife, Anna Russell Agassiz, died of pneumonia.

Alexander Agassiz. Image via Wikimedia Commons

Wanderings and Reefs

Devastated by this double blow, Alex spent three years mostly traveling, mortally depressed. He felt able to “get back in harness,” as he put it, only when, in 1876, he engaged the coral reef problem. How did these great structures, built from the skeletons of animals that could grow only in shallow water, come to occupy platforms rising from the ocean’s depths? Naturalists had discerned in the early 1800s how corals grew, but the genesis of their underlying platforms remained obscure.

The prevailing explanation, first offered in 1837, held that coral reefs formed on subsiding islands. The coral first grew along shore, forming fringing reefs. As the island sank and lagoons opened between shore and reef, fringing reef became barrier reef. When the island sank out of sight, barrier reef became atoll. Thus this subsidence theory, as it was known, explained all main reef forms.

Alex, drawn to this problem by his friend Sir John Murray, a prominent Scottish oceanographer, thought the subsidence theory was just a pretty story. The theory rested on little other than the reef forms, while considerable evidence, such as the geology of many islands and most reef observations made during the mid-1800s, argued against it. Now Murray, who had just returned from a five-year oceanographic expedition aboard the HMS Challenger, told Alex of an alternative possibility. Murray had discovered that enough plankton floated in tropical waters to create a rain of planktonic debris that, given geologic time, could raise many submarine mountains up to shallows where coral reefs could form.

Alex immediately liked this idea, for it rose from close observation rather than conceptual speculation and relied on known rather than conjectural forces. Inspired for the first time since his wife’s death three years before, he began designing an extensive field research program to prove it.

There was only one problem: the person who had authored the subsidence theory was Charles Darwin.

Thirty Years of Fieldwork

Darwin had posited the subsidence theory as soon as he returned from the Beagle voyage in 1837. Like his evolution theory, it was a brilliant synthesis that explained many forms as the result of incremental change. But it did not rest on the sort of careful, methodical accumulation of evidence that underlay his evolutionary theory. Darwin conceived it before he ever saw a coral reef and published it when he’d seen only a few.

Yet the theory explained so much that it had launched Darwin’s career. Since then, of course, Darwin had developed his evolution theory, destroyed Louis’s career, and become the most renowned and powerful man in science. Alex knew he was courting trouble when he decided to champion an alternate theory. But he couldn’t resist such an enticing problem. And he firmly believed that Darwin had muffed it.

Alex spent much of the next 30 years collecting evidence. He developed a complicated and nuanced theory holding that different forces, primarily a Murray-esque accrual, erosion, some uplift, and occasionally some subsidence, combined in different ways to create the world’s different reef formations. He found evidence in every major reef formation on the globe. And so as the century ended, an Agassiz again faced Darwin (or Darwin’s legacy, for Darwin had died in 1882). Only this time the Agassiz held the empirical evidence and Darwin the pretty story.

Yet Alex hesitated to publish, even after he completed his fieldwork in 1903. Every year, Murray would ask Alex about the reef book. Every year Alex would say the latest draft hadn’t worked, but that he had found a better approach and would soon finish.

The last time he told Murray this was in 1910, when they met in London before Alex sailed home to the U.S. after a winter in Paris. On the fifth night out of Southampton, he died in his sleep. Murray, hearing the news by cable a couple of days later, was much aggrieved—and stunned to hear what followed. A thorough search had found no sign of the coral reef book. It was, Alexander’s son George later wrote, “an excellent example of his habit of carrying his work in his head until the last minute.”

One Irony Among Many

The coral reef debate didn’t end until 1951, when U.S. government geologists surveying Eniwetok, a Marshall Islands atoll, prior to a hydrogen bomb test there, finally drilled deep enough to resolve the mystery. If Darwin was right about reefs accumulating atop their sinking foundations, the drill should pass through at least several hundred feet of coral before hitting the original underlying basalt. If Agassiz was right, the drill would go through a relatively thin veneer of coral before hitting basalt or marine limestone.

It speaks to the power of Alexander’s work that the reef expert directing the drilling, Harry Ladd, expected to prove Agassiz right. But the power of Darwin’s work was such that as the drill spun deep, it passed through not a few dozen or even a few hundred feet, but through some 4,200 feet of coral before striking basalt. Darwin was right, Agassiz wrong.

How did Alex miss this? In retrospect, geologists can identify various observational mistakes Alexander made. But Alex’s bigger problem was his singular place in the profound changes science underwent in the 1800s. Natural science in particular was struggling to define an empirical theoretical method. Alex played by the rules that most scientists, including Darwin, swore to: a Baconian inductivism that built theory atop accrued stacks of observed facts.

In reality, most scientists come to their theories through deductive leaps, then try to prove them by amassing evidence. A theory’s value rests not on its genesis, but on its proof. Today this is accepted and indeed codified as the “hypothetico-deductive method,” and its resulting theories are considered empirical as long as their proof lies in replicable evidence. But in Alex’s day, when pretty stories built on leaps of imagination spoke of reactionary creationism rather than creative empiricism, such theorizing was called speculation, and it was a four-letter word.

Alexander Agassiz was keenly sensitive to the dangers of such work. Yet his singular position fated him to take up a question that not only lay beyond the tools of his time, but which trapped him in the era’s most confounding difficulties of method and philosophy. He sought a solution that belonged to another age.

About the Author

David Dobbs is author of Reef Madness: Charles Darwin, Alexander Agassiz, and the Meaning of Coral, from which this lecture is drawn. You can find more of his work at daviddobbs.net.

Landfill Diversion: Created from Consumerism

Brian Jungen reconstructs everyday materials into cultural and natural wonders in a modern art show that doubles as anthropology. Or paleontology.

Published October 21, 2005

By Adelle Caravanos


Walk into the New Museum of Contemporary Art, and you might think you’ve mistakenly stumbled into a natural history museum. After all, the first things you’ll see are three huge whale skeletons, suspended from the ceiling. Then there’s the collection of Aboriginal masks. Upon closer inspection, however …

You’ll see that the masks are made from Nike sneakers.

And those aren’t whale bones. They’re lawn chairs.

In fact, almost everything at the Brian Jungen exhibit is made from new, mass-produced items that the Canadian artist has reconfigured into something that looks, well, old and unique. The comprehensive exhibit features 35 sculptures, drawings and installations created by Jungen, who is best known for his Northwest Coast native masks made from sliced-up Air Jordans. The complete collection of his masks, Prototypes for a New Understanding, is on display at this show for the first time.

With Prototypes, Jungen takes an everyday item from modern Western life – athletic sneakers – and reassembles it into a traditional indigenous item. The work is a comment on the commercialization of cultural heritage, as well as a comparison of the aesthetics of the two worlds. For instance, the trademark Air Jordans come in the same red, black and white color combination frequently used in Aboriginal masks.

Jungen’s reassembly of the sneakers — arguably the most sought-after consumer products of the ’90s — literally gives them a human face, and their man-made material is refashioned into life-like ancient warriors: he renders the synthetic, organic.

The Natural Cycle of Materials

Jungen obtains a similar effect with the whale skeletons, composed of chopped-up patio chairs: the stackable white plastic variety loved by suburbanites. Jungen worked with an assistant, bolting together the plastic pieces to form vertebrae, ribs, skulls and fins until each work became indistinguishable from the skeletal remains of a whale.

How many chairs make a whale? Jungen recalls midnight runs to the local home goods store to acquire some 300 for the three installations: Shapeshifter, named for a mythical creature with the ability to morph its form; Cetology, whose title refers to the zoological study of whales; and Vienna, titled in honor of the city where it was created. Although they loom large overhead in the gallery, ranging from 21 to 42 feet in length, Jungen says they’re in scale with baby whales.

By using plastic, which is derived from petroleum, which in turn comes from the remains of ancient marine organisms, Jungen draws attention to the natural cycle of materials on our planet – his fake whale skeletons are built using a by-product of the same kind of organic matter that real whales leave behind.

Also in the collection: A series of “lava rocks” made from deconstructed soccer balls; wooden baseball bats carved with loaded words and phrases; and a set of neatly stacked cafeteria trays, inspired by a similar configuration of trays used by a Canadian prisoner to escape confinement.

Also read: Green is the New Black in Sustainable Fashion

Promoting Science, Human Rights in the Middle East

Two human rights activists are named winners of the Academy’s Human Rights Award for 2005.

Published October 17, 2005

By Fred Moreno


Two activists who have long fought for the rights of scientists, especially in the Middle East, received the 2005 Heinz R. Pagels Human Rights of Scientists Award at the Academy’s 187th Business Meeting, held on September 29.

The 2005 winners are Zafra Lerman, distinguished professor of Science and Public Policy and head of the Institute for Science Education and Science Communication at Columbia College Chicago, and Herman Winick, assistant director and professor emeritus of the Stanford Synchrotron Radiation Laboratory at Stanford University.

Zafra Lerman

For more than a decade, in her role as chair of the Subcommittee on Scientific Freedom and Human Rights of the American Chemical Society’s Committee on International Activities, Zafra Lerman has stimulated human rights awareness in communities of chemists and is the American Chemical Society’s leading voice on behalf of the human rights of scientists throughout the world. She has traveled to the former Soviet Union, Russia, Cuba, China, and the Middle East, bringing encouragement to repressed scientists.

In 2003 she worked with the Israel Academy of Science, notably helping nine Palestinian scientists attend a conference in Malta where scientists from ten nations in the Middle East met to tackle problems of research and education in the politically and economically troubled region.

Herman Winick

Herman Winick has been an extraordinarily effective and tireless scientist working on behalf of the human rights of scientists for more than 25 years. He was one of the original supporters and founders of the Sakharov-Orlov-Scharansky (SOS) group in the 1980s.

In the 1990s, he strongly supported the human rights activities of the American Physical Society (APS) on behalf of repressed scientists around the world, first as a member and then as chair of the APS Committee on International Freedom of Scientists. In the mid-1990s he conceived the brilliant idea of creating a new synchrotron research facility in the Middle East, known as the SESAME project, which would be located in Jordan and actively solicit participants from other regional nations such as Egypt, the Palestinian Authority, Israel, and Syria; it is now operating.

For the past three years he has worked on behalf of an Iranian dissident physicist, Professor Hadizadeh, who has been imprisoned for his pro-democracy activities. Due in large part to efforts by Winick, Professor Hadizadeh is now carrying out research in the United States.

Pagels Award

The Academy’s first human rights award was given in 1979 to Russian physicist Andrei Sakharov. Renamed in 1988 in honor of former Academy president Heinz R. Pagels, the award has been bestowed on such eminent scientists as Chinese dissident Fang Li-Zhi, Russian nuclear engineer Alexander Nikitin, and Cuban economist Martha Beatriz Roque Cabello. The 2004 award was presented to Dr. Nguyen Dan Que of Vietnam.

Also read: Promoting Human Rights through Science

Bringing a Scientific Perspective to Wall Street

Emanuel Derman was a pioneer in the now-established field of financial engineering, which was influenced by his background in theoretical physics.

Published October 6, 2005

By Adelle Caravanos


Emanuel Derman, director of the Columbia University financial engineering program, and Head of Risk at Prisma Capital Partners, will speak at the Academy on October 19. The self-described “quant” will discuss his unusual career path, from theoretical physics to Wall Street, where he became known for co-developing the Black-Derman-Toy interest-rate model at Goldman Sachs. His book, My Life As a Quant: Reflections on Physics and Finance, became one of Business Week’s Top Ten Books of 2004.

The Academy spoke with Derman in advance of his lecture.

*Some quotes were lightly edited for length and clarity.*

First, please tell our readers what a quant is!

Well, “quant” is short for quantitative strategist or quantitative analyst. It’s somebody who uses mathematics, physics, statistics, computer science, or any combination of these things at a technical level to try to understand the behavior of stock prices, auction prices, bonds, commodities, and various kinds of derivatives from a mathematical point of view — from a predictive view to some extent.

Is it safe to assume that most of the major banks employ quants?

Yes. When interest rates went up astronomically around [the time of], and even after, the oil crisis of ’73, [the hiring of quants] started in the fixed income business. Fixed income has always been a much more quantitative business historically than the rest of the securities business and people have always thought that bonds and fixed income investments were fairly non-volatile, stable, and safe. Once interest rates went up to around 15 percent and gold prices went up like crazy, investment banks and companies had a whole different range of problems to deal with than before.

They’d always known stocks were volatile, but not that bonds were. So, they started hiring people out of non-financial parts of universities, non-business schools — computer scientists, mathematicians, physicists, Bell Labs — to tackle these problems, partly because they involved more mathematics than people were used to and partly because they involved more computer science than people were used to.

If you had a whole portfolio of things, you couldn’t do them efficiently on paper anymore. You couldn’t take account of the changes or take account of what they were worth, so people started building computer programs to do these things. And so, there was an in-road there for a lot of quantitative people.
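As a toy illustration of the chore Derman describes, here is a minimal sketch, in Python, of repricing a small bond portfolio as yields move. The bonds and yield levels are invented for illustration, not taken from the interview:

```python
# Toy illustration of why fixed income desks needed computers:
# revaluing a book of bonds under shifting yields.
# All bonds and rates below are invented for illustration.

def bond_price(face: float, coupon_rate: float, years: int, rate: float) -> float:
    """Present value of annual coupons plus principal, discounted at a
    single flat yield (the simplest possible pricing convention)."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + rate) ** years
    return pv_coupons + pv_face

portfolio = [  # (face value, coupon rate, years to maturity)
    (1_000_000, 0.08, 5),
    (2_000_000, 0.10, 10),
    (500_000, 0.12, 30),
]

# Revalue the whole book at several yield levels, as rates of the
# late 1970s and early 1980s lurched toward 15 percent.
for y in (0.08, 0.12, 0.15):
    total = sum(bond_price(face, c, n, y) for face, c, n in portfolio)
    print(f"yield {y:.0%}: portfolio worth {total:,.0f}")
```

Even this crude flat-yield revaluation suggests why, once a book held hundreds of positions, paper methods gave way to programs.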

I think it was good to get in [to quantitative strategy] early because you could make a contribution with much less skill and talent. After 20 years everything gets so complicated mathematically that it’s much harder to do anything. It’s not impossible; people do it. But it was very exciting in the early ’80s because there were virtually no textbooks. You couldn’t get a degree in the field. Everybody was self-taught. It was exciting.

When you started at Goldman, were you one of the first of their quants?

They had maybe 10 or 20 people there. I was early, but I wasn’t the first.

You talk in your book about the difference between the way traders and quants approach problems.

I think the differences are less extreme now because quantitative methods have become much more ubiquitous all over Wall Street, particularly in hedge funds. But, yes, traders were impulsive, sharp, and gregarious. They liked meeting with people, and if you worked on the trading floor everybody was yelling and screaming. It’s exciting, but for people coming from an academic background, it is hard to concentrate! It’s chaotic. You have to multi-task a lot, which is very disturbing if you grew up wanting to do just one thing, like getting a PhD and working for six years solidly on it.

You also make the distinction between working on the mathematical models and the actual science or technology of working on the interfaces for the people. Which did you enjoy more?

I liked both. When I was in physics, we were always trying to do research and it was hard, lonely work. You shut yourself in an office and tried to make progress and when you couldn’t get anything done, or when things weren’t working, you had nothing else to do — it was really depressing.

What was nice about working at Goldman was that there were useful things you could do, like software, that didn’t take the same mental effort. They took talent and they took skill, but you didn’t have to discover something new to do them. So it was very nice to spend a quarter of your time doing research and half doing software and another quarter dealing with people. It was a much more balanced life.

Do you think that things are changing as far as academicians looking down at people going into the business world, and business people looking down at academicians?

I do think it goes both ways. I certainly looked down on people dropping out of PhDs and going into business. It felt like you were leaving the monastery before you’d become a monk. Academics brought you up to look down on anybody who copped out. And then business people always used “academic” as sort of a dirty word — “academic” in the sense of “not applied.”

About the Author

Professor Emanuel Derman is director of Columbia University’s program in financial engineering and Head of Risk at Prisma Capital Partners, a fund of funds. My Life as a Quant: Reflections on Physics and Finance was one of Business Week’s top ten books of the year for 2004. Derman obtained a PhD in theoretical physics from Columbia University in 1973. Between 1973 and 1980 he did research in theoretical particle physics, and from 1980 to 1985 he worked at AT&T Bell Laboratories.

In 1985 Derman joined Goldman Sachs’ fixed income division where he was one of the co-developers of the Black-Derman-Toy interest-rate model. From 1990 to 2000 he led the Quantitative Strategies group in the Equities division, where they pioneered the study of local volatility models and the volatility smile. He was appointed a Managing Director of Goldman Sachs in 1997. In 2000 he became head of the firm’s Quantitative Risk Strategies group. He retired from Goldman Sachs in 2002.

Derman was named the IAFE/Sungard Financial Engineer of the Year 2000, and was elected to the Risk Hall of Fame in 2002.

Also read: What Happens When Innovative Scientists Embrace Entrepreneurship?

The Story of a 25-Year Collaboration


Scientific collaborators Torsten Wiesel and David Hubel made significant advances in our understanding of the brain and perception. Their achievements were a work in progress for roughly a quarter century.

Published August 23, 2005

By Dorian Devins

An air of camaraderie pervaded The New York Academy of Sciences (the Academy) on March 31, 2005 as scientific collaborators Torsten Wiesel and David Hubel were joined by fellow Nobelist Eric Kandel in celebration of Wiesel and Hubel’s recently published book, Brain and Visual Perception: The Story of a 25-Year Collaboration. The full-to-capacity house included several scientific luminaries and at least one other Nobel Prize winner in the audience.

Kandel kicked off the evening with a vivid description of the pair’s groundbreaking work, characterizing it as “the most important advance in understanding the brain since Ramón y Cajal” at the turn of the 20th century. Santiago Ramón y Cajal won the Nobel Prize in Physiology or Medicine in 1906 in recognition of his work on the structure of the nervous system. While Cajal’s work centered on the morphological aspects of interconnections between different parts of the brain, Wiesel and Hubel’s work used modern cellular physiological techniques to show how these connections filter and transform sensory information both within and on the way to the primary visual cortex.

According to Kandel, “using imagination in addition to methodology is the key to the Hubel and Wiesel success.”

Hubel and Wiesel made several major contributions to our understanding of the brain and perception, including new insights into how the cerebral cortex functions in transforming sensory information. They also did work on binocularity, cellular organization in orientation and ocular dominance, and visual sensory deprivation.

Processing Visual Information

Our dominant sensory experiences are visual, and Wiesel and Hubel’s work showed how visual information is processed in the first few stages after it reaches the brain. They found that the part of the cortex devoted to the early stages of visual processing is arranged in columns, within which the nerve cells have common response properties. An analysis of the image is compiled from this information, and results in what we see.

In other experiments the team also investigated how visual deprivation affects development, which they tested by unilateral lid closure. Hubel and Wiesel found that when one of a newborn kitten’s or monkey’s eyelids is sutured shut for several weeks or months, the animal proves blind in that eye once it is reopened. Closing an eye of an adult cat produces no such effect. In both cats and monkeys there is thus a “critical period” of plasticity, after which sensitivity to deprivation declines and finally disappears.

Their work yielded profound findings, especially in the area of neural circuitry. Kandel described the cerebral cortex’s capability of carrying out novel kinds of transformation of a visual image. Hubel and Wiesel realized that the image is decomposed and then reconstructed later, and their findings influenced not just neuroscience but also areas like cognitive psychology, where they allowed practitioners to develop the idea that the brain creates an internal representation of the outside world. For this work, Hubel and Wiesel were awarded the Nobel Prize in Physiology or Medicine in 1981.

The People Behind the Science

As Kandel pointed out, however, Hubel and Wiesel’s science itself was just one aspect of the evening’s program. It was also an occasion to celebrate their book, a collection of their major papers along with biographical and historical information. But perhaps most importantly, Kandel and the audience were assembled to honor the long and productive collaboration and friendship of these two very different people.

Kandel characterized David Hubel as “whimsical and anti-authoritarian,” someone who “probably couldn’t run a grocery store,” a creative and musical person perhaps not most at home as an administrator. Torsten Wiesel, on the other hand, was a “quiet, humble person who has emerged as really one of the great scientific leaders in the academic community,” in Kandel’s words. “You name it, he runs it!”

As the evening progressed, Hubel and Wiesel reminisced about their partnership. Hubel spoke of the difficulty of working while they were at the Salk Institute in La Jolla. The lull of the surf and the general air of relaxation there were not great motivators to get to the lab. Of their time together overall, he said it was like a “half-century long trip on a roller coaster.”

A Brotherly Relationship

Working and travelling together in their younger days created a brotherly relationship between the two. “We didn’t want to tell people that this kind of work isn’t horribly tedious because we thought that would invite competition,” said Hubel. Much of their success was due to the luck of finding each other at just the right point in the science’s history and in their own careers, and in their hitting it off as they did.

Typically modest, Wiesel gave credit to Hubel for most of the work necessary to create their recent book. In terms of their work in science overall, he said, “You may have the technical skill and the imagination and so on, but you also need luck in life to really have success.”

Wiesel also attributed much of the success of their careers to Steve Kuffler, one of the leaders in the emerging field of neuroscience in the 1950s and ‘60s. Kuffler had been chairman of the department of neurobiology at Harvard while Wiesel and Hubel were there, and his respect was reserved for those who showed up in the lab, did the experiments, and wrote them up. Wiesel said that despite his many administrative accomplishments, “I feel Steve Kuffler would look at me [now] with some disdain and state, ‘Torsten, why did you leave the lab? You’re supposed to do experiments!’”

What Makes Scientists Tick

Wiesel reiterated that he and Hubel had worked very reclusively. From early morning until late at night, they performed every component of their own experiments, from preparation of the animals at the outset to washing the glassware afterwards. By maintaining this atmosphere of privacy, they were able to keep the “primacy of the thoughts and the ideas” from being diluted. Over the years they did not work with many graduate students and postdocs, but were fortunate in the quality of those they did have.

This strategy obviously paid off. Wiesel attributes their motivation and that of scientists in general to “random reinforcement,” like that of B.F. Skinner’s famous pigeons. Wiesel and Hubel’s early discoveries about the visual cortex a few months into their work made it seem to them that one thing naturally led to the next. They began without a hypothesis; rather, they had set out to use the new technology of the microelectrode to record the cells in different parts of the brain and try to understand how the cells cooperate. According to Wiesel, “We were explorers of unknown territory.”

Wiesel cited some other important factors aside from luck that lead to success in the sciences, including choosing the right problem to tackle, being observant, and having the right attitude and mentor. As a young scientist who came from Sweden to the U.S. to learn more about the brain, he was frustrated by the limited knowledge available in the area. If he and Hubel hadn’t met and formed such a productive relationship, he would have returned to Sweden. One thing Wiesel worries about is that the current academic system does nothing to redirect those who might be better suited to other careers.

The Evolution of Neuroscience

In a question-and-answer session, Wiesel explained that the initial phase of their work was explorative, followed by a period of asking questions. Hubel stated that there was a misconception that “to do proper science it should be done in the image of physics, or the way most people think of physics.” He and Wiesel got ideas and tested them, but “we never would’ve expressed it in such exalted terms of having a hypothesis. One shouldn’t make up rules as to how Science with the capital ‘S’ is done or should be done.”

What surprises might be ahead for neuroscience in the way that Hubel and Wiesel’s discovery of orientation-specific cells once was? According to Wiesel, insights come in sudden steps and are quite unpredictable. The study of olfaction has yielded some profound insights, but, for instance, our understanding of hearing is comparatively primitive. It is now an area of great interest to neuroscientists. The next frontier is unknown.

The field of neuroscience has grown exponentially over the years. David Hubel and Torsten Wiesel’s groundbreaking work has been in no small way responsible for this. As it was best put by Eric Kandel, when we celebrate Wiesel and Hubel, “we’re not only celebrating science at its best, we’re not only celebrating two extraordinary people and a wonderful collaboration,” but we’re also celebrating “the reason science is exciting. We’re really celebrating the whole scientific enterprise in celebrating the two of them.”

Also read: Discovering Cancer Therapies through Neuroscience


About the Author

Dorian Devins is a New York-based radio producer whose programs have aired for over 10 years on WFMU, 91.1 FM in the greater metropolitan New York area. For three years she produced and hosted The Green Room, a weekly science radio program which was carried both on the radio and the Web. She currently hosts The Speakeasy, a weekly arts and cultural interview program. She has also conducted an ongoing series of interviews for the National Academy of Sciences’ Web site, does freelance writing, and works as an acquisitions editor of technical physics books.

Devins’ background has been mostly in the arts and publishing. She was founder and executive director of Science Matters, Inc., a nonprofit organization dedicated to the public understanding of science.

An Interview with NYU’s Peter D. Lax


The Abel Prize-winning mathematician talks about his life and career, from emigrating to the United States from Hungary to what he calls the “paradox of education.”

Published June 1, 2005

By Dorian Devins


Peter D. Lax is professor in the Mathematics Department at the Courant Institute of Mathematical Sciences, New York University. At age 15 he traveled to the United States from Hungary with his family. His career at Courant began in 1950, and has been interspersed with work at Los Alamos National Laboratory. Dr. Lax’s efforts have concentrated in the area of partial differential equations, and he is recognized for significant contributions to hyperbolic systems of nonlinear equations and for the Lax Equivalence Theorem, among other contributions. He is a member of the National Academy of Sciences and the recipient of many honors and awards, most recently the 2005 Abel Prize, often referred to as the “Nobel Prize of Mathematics.”
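For readers who have not met the result just named, the Lax Equivalence Theorem is usually stated along these lines: for a consistent finite-difference approximation to a well-posed linear initial-value problem,

$$
\text{stability} \iff \text{convergence}.
$$

In other words, once a scheme is consistent, the often-mechanical check of stability settles the otherwise delicate question of whether the computed solution approaches the true one as the mesh is refined.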

Was coming to the U.S. a difficult transition?

I didn’t know much English at first. My parents chose NYU because of Courant, who had the reputation of being very good with young people. At 18 I was drafted into the Army and, thanks to Courant, sent to Los Alamos. I spent a fantastic year there. After finishing my Ph.D. in ‘49, I went back to Los Alamos for a year and thereafter almost every summer into the sixties. That’s where I got involved with computing.

One advisor was John von Neumann. He realized that you couldn’t design nuclear weapons by trial and error – you had to calculate to make sure the design worked. He understood that traditional tools of applied mathematics wouldn’t work; there had to be massive computation. Being von Neumann, he realized this would work for other big engineering designs and for scientific understanding.

You must’ve met a lot of characters there.

I knew Richard Feynman during the war. He was maybe 25, but already legendary. I met Teller and Hans Bethe, who was a wonderful man and a spokesman for science. Feynman could have become that, but he had this terrible illness and died. Others who did very important work were Niels Bohr and Leo Szilard. Szilard liked to operate behind the scenes, but was extremely intelligent and could foresee the future.

How did you end up choosing the path of partial differential equations?

My teachers had done studies in that field. It’s very broad. The word partial just means that it deals with functions of many variables. Most physical theories are expressed as differential equations, like the propagation of sound, flow of fluids, and the way elastic material bends.
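A concrete instance of the equations Lax has in mind: the propagation of sound is governed by the wave equation, the prototype of the hyperbolic equations he studies,

$$
\frac{\partial^2 u}{\partial t^2} = c^2\,\nabla^2 u,
$$

where $u(x, y, z, t)$ is the pressure disturbance and $c$ the speed of sound; the “partial” derivatives act on a function of several variables at once.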

Did you approach the problems through mathematics or think about the applications first?

When I was at Los Alamos I thought about the applications, but back here I follow the mathematics.

What is the work you’ve done that you’re most proud of and has been your most important?

I’ve worked on five or six different things. I couldn’t say which one is my favorite. The work on dispersive equations I like very much. The work on shock waves and in scattering worked out very well. I’ve done something very interesting in what can be called harmonic analysis. I did lots of things in functional analysis.

You work in applied and pure mathematics. Is there usually a pretty clear-cut line between the two?

No, everybody mingles. You have to have a balance. Mathematics is taught to children in a way that is very numbers oriented.

Shouldn’t there be a better way to get kids engaged and show the relevance and beauty of math?


Many people think that mathematical theorems are something you memorize. One of the first things to impress on them is that mathematics is thinking. You don’t have to know anything; you can figure it out. Later you have to know a lot, but to get into it you can just figure it out in your head. I think once they get that, they lose their fear. There’s something I like to call the paradox of education: Science and mathematics evolve by leaps and bounds. But does that mean that what we teach in college and high school falls behind by leaps and bounds? The answer is not necessarily. New advances often simplify things tremendously, and whole branches of mathematics can be replaced by something much simpler.

What do you feel will be the most interesting or important areas of mathematics in the near future?

It’s hard to predict. Dispersive systems didn’t look so interesting until there was an astonishing discovery that nobody could have foreseen. Biologists are begging mathematicians to come in. The problems they have are somewhat different from the kinds that mathematicians have been working on before.

Is mathematics following other fields, in that the biological areas are booming?

Yes. I wish mathematics and computer science would move closer. It would be good for both.

On the connection between physics and mathematics: Was it Wigner who wrote the famous paper?

“The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” It was a lecture held here, part of a series of lectures in honor of Courant. One could make a biological point: Why is our brain capable of doing mathematics? Being able to recognize saber-toothed tigers is an evolutionary advantage. But formulating and solving differential equations? These are big questions that evolution isn’t yet ready to answer.

Has winning the Abel Prize changed your life in any way?

It brings interviews, and I get more email about it than about cheap pharmaceuticals. I’ll be happy to go back to my life. Life is mathematics; it’s wonderful.

Also read: An Interview with Scientist Dr. Cindy Jo Arrigo

The Science Behind a Tsunami’s Destructiveness


In the aftermath of the 2004 tsunamis, and with tectonic plates continuing to shift beneath the Indian Ocean, scientists are seeking answers to handle the next natural disaster.

Published June 1, 2005

By Sheri Fink, MD, PhD


Stunning images of devastation and soaring body counts dominated news coverage of last December’s tsunami, leaving one of the most important questions about the disaster barely addressed: Why did so many people die? With tectonic plates still shifting beneath the Indian Ocean, setting off new earthquakes almost daily, finding answers to this question is urgent.

Lareef Zubair is an associate research scientist at Columbia University’s Earth Institute and founder of the Sri Lanka Meteorology, Oceanography and Hydrology Network. He studies why disasters in some parts of the world tend to carry a much higher human, as opposed to financial, toll than disasters in other places – compare the thousands who die in a typical cyclone in Bangladesh with the 123 deaths caused by last year’s four hurricanes in Florida.

Zubair recently spoke at The New York Academy of Sciences (the Academy) on a panel organized by Science Writers in New York (SWINY), an affiliate of the National Association of Science Writers, to discuss untold stories of the tsunami.

Disasters: Unequal Opportunity Killers

Destructive acts of nature impact human populations to varying degrees. “People who study disasters sort of separate out three aspects of disasters,” said Zubair. “One is the hazards, which is something like a flood, or lightning strike, or a tsunami, which is the physical or biophysical event itself. And then there is the exposure, the degree to which people are exposed to the hazard. The third thing is how vulnerable you are to that event.”
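Zubair’s three-part decomposition is often summarized as risk being the product of hazard, exposure, and vulnerability. A minimal sketch of that bookkeeping, with invented numbers (none are from the talk):

```python
# Toy illustration of the risk decomposition Zubair describes:
#   risk ~ hazard x exposure x vulnerability
# All numbers below are invented for illustration only.

def expected_loss(hazard_prob: float, exposed_pop: int, vulnerability: float) -> float:
    """Expected deaths per year: chance the hazard strikes, times the
    people in its path, times the fraction of those likely to die."""
    return hazard_prob * exposed_pop * vulnerability

# Same hazard, same exposure; only vulnerability (housing, hospitals,
# roads, warning systems) differs between two hypothetical coasts.
for label, vuln in [("low-vulnerability coast", 0.001),
                    ("high-vulnerability coast", 0.05)]:
    deaths = expected_loss(hazard_prob=0.05, exposed_pop=200_000, vulnerability=vuln)
    print(f"{label}: {deaths:.0f} expected deaths per year")
```

The toy calculation makes Zubair’s point: with the hazard and the exposure fixed, the death toll scales directly with vulnerability.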

In Zubair’s home country of Sri Lanka, the tsunami restricted its wrath to the first several hundred meters adjacent to the water’s edge. The destruction coincided with areas of high population density. It was not only those who eked out a living from the sea who lived close to the water, but also traders and farmers, despite regulations stipulating that construction within 300 meters of the shoreline be reviewed by the government. One reason people are drawn to the coast is that infrastructure such as roads, telephones, hospitals and schools has been developed there.

“The seashore has to be protected,” said Zubair. “Cyclones and flooding and storm surges happen at the seashore…every 10, 20, 30 years, and everybody knows this. But somehow that did not translate into the desired action of having people live in safer areas.”

Not only were people living along the seashore exposed to natural disasters, Zubair said, but because of the area’s depressed economy and 20-year history of civil war, they also were highly vulnerable to them. “Vulnerability…is grossly related [to] the distribution of wealth,” Zubair said. “How good are your houses, how good is the infrastructure, how good are the hospitals that are around so that you can get treatment? How good is the road system?” The answers in the tsunami-hit areas of Sri Lanka were, in most cases, “poor.”

A Failure of Prevention

On December 26, 2004, Sri Lanka’s National Disaster Management Center did not jump into action to mitigate the tsunami’s destructive effects. The country’s “Sunday Times” newspaper summed up the problem in a headline: “Only three phones, staff of 10, and never on a Sunday.” The tsunami had the bad manners to hit on the Sunday after Christmas. “How on earth [can you] have a national disaster management center that does not work on public holidays?” Zubair asked.

An hour elapsed between the tsunami’s first deadly landfall on the island’s eastern coast and its last lashing in the country’s northwest. In that time, an estimated 20,000 additional people died. Zubair believes that had a warning been broadcast to the rest of the country soon after the tsunami began hitting the coast – roughly one and a half hours after the earthquake – lives would have been saved.

“That should have happened,” said Zubair. “Any middle school student could see [that] if you have an earthquake hazard in the middle of the ocean, there is going to be a tsunami risk. You don’t need sophisticated scientists to come and tell you this. Why did people fail? And, why did people fail in Sri Lanka? Why did people fail in India? Lastly, why did people fail here? I don’t think we should push these questions under the carpet, as scientists.”

An Early Warning System

Zubair said he made his way to the “plush” part of Colombo to visit the disaster management center several times in the year prior to the tsunami, seeking to discuss early warning systems. He was offered tea, but never an audience with anyone willing to talk about technical issues. An early warning system had indeed been proposed after Sri Lanka’s 1978 cyclone. Plans were made, reports were written, money was disbursed, but the ideas were never implemented by the center. “They exist, with a name-board and a plaque, for donors,” he said, concluding that a “perverse incentive system” exists for those involved in disaster management and related fields.

“Every time there’s a disaster, they get rewarded with larger and larger amounts of funds,” he said. “In countries such as Sri Lanka, fields like disaster management and energy conservation are seen as fields in which you can get foreign funds, opportunities for scholarships and maybe some sort of benefit. There’s no integration of the disaster management system itself into the internal networks of science, into the internal networks of education, into the internal networks of government itself.”

High Price of Neglecting Science

Sri Lanka’s Geological Survey and Mines Bureau possessed both a functioning seismograph and a 100-year scientific pedigree, but on December 26th it had no one working on site to analyze the seismic measurements. Data were sent instead to the Scripps Institution of Oceanography in San Diego. “The question is, why is it that you’re sitting on probably the most important piece of scientific data Sri Lanka ever recorded or needed and you just ship it off?”

The answer is controversial. Zubair traced it to the pressures of mounting foreign debt, which forced the bureau to shift its focus away from science to supporting commercial mining interests. “Because of the fact that the country is dependent…services that look after the safety of the population got converted into a service that helps repay debt,” he said. Hewing to World Bank and Sri Lanka’s central bank guidelines, the bureau did not have the authority to spend the roughly $2,000 needed to hire someone to monitor the seismograph.

In fact, $2,000 is the government’s entire yearly grant to Sri Lanka’s Academy of Sciences. “The investment of the Sri Lankan government in science is about 0.18% of GDP. It’s just minuscule. You should at least have 1% or 2%, because what you’re doing is investing in people, you’re investing in safety, in the future.”

Empowering Humanity

Zubair concluded that the death toll from the tsunami was in great part a function of unmitigated exposure and vulnerability of the population – factors he laid at the doorstep of a government that neglects science and technology, and international donor organizations that offer a shower of funds for emergency relief, but turn off the spigot for prevention efforts.

“The basic message here is we really should be talking about disaster preparedness and risk management,” he said. The goal is to integrate modern scientific and technological advancements with emergency preparedness and public education. “You can have policy, but there must be implementation and there must be good governance…governance that looks after the welfare of the people.”

Despite the failures, Zubair recalled that when he visited Sri Lanka a week after the disaster he came away with hope as well as frustration. At a time when the government and international agencies had not yet swung into action, he saw the local inhabitants themselves saving lives. “Church groups, community groups, temples, mosques, workplaces. It was like 9/11 here – extraordinary mobilization. It’s not a poor country in that sense.” The key, he says, is to support the “huge capacity of people.” Chief among them? The scientists.

Also read: Tsunami Relief Efforts: A Personal Account

A New Look at an Ancient Pain Remedy

Despite legal restrictions in some states, cannabis has reemerged in recent years as a medical treatment, though its history as one dates back centuries.

Published April 1, 2005

By Alan Dove, PhD


While some researchers are pursuing genomic strategies to understand the causes of chronic pain, others are reversing the problem, starting with an ancient painkiller and trying to understand how it works.

Cannabis sativa and its close cousin Cannabis indica, better known as marijuana, have been used as medicinal herbs for centuries, and many patients suffering from chronic pain still use this herbal remedy today, despite its obvious drawbacks. To provide the painkilling benefits of marijuana without the side effects and legal troubles, pharmaceutical companies are now searching for more selective drugs that will use the same molecular targets.

On Oct. 26, 2004, Roger Pertwee, professor of neuropharmacology at the University of Aberdeen and an expert on pharmaceutically useful cannabinoids, gave the Academy’s Biochemical Pharmacology Discussion Group a briefing on the state of the science in this field. Marijuana contains more than 60 different cannabinoid compounds, and most are still poorly understood. These cannabinoids tap into a natural signaling system involving the body’s own endocannabinoids, which appear to control a wide range of physiological and pathological processes.

Early studies focused on a single cannabinoid, delta-9 tetrahydrocannabinol (THC), the main psychotropic ingredient of marijuana. Simple THC preparations are now prescribed to suppress nausea and stimulate appetite in cancer and HIV patients, but they are only moderately effective. A major breakthrough came in the early 1990s, with the discovery of CB1 and CB2, the receptors that bind cannabinoids in humans.

Popular in New Drug Development

CB1 and CB2 proteins are woven into the cell membrane, leaving loops of receptor protein hanging into the cell and the extracellular space. The structure is typical of receptors that act through multipurpose signaling molecules, called G proteins. G protein-coupled receptors, including CB1 and CB2, are involved in a huge range of cellular responses. They also are among the most popular targets for new drug development.

CB1 is found on neurons, and stimulating it inhibits the release of neurotransmitters that communicate nerve impulses. In contrast, CB2 is seen primarily on cells of the immune system, and appears to modulate the release of cytokines that direct the immune response. Chemists have developed selective agonists that can stimulate either or both receptors.

Besides the agonists that stimulate CB1 and CB2, researchers have developed compounds that have the opposite effect. The most famous of these is Rimonabant, also known as Acomplia, a CB1-targeting drug currently being developed by Sanofi-Aventis for a variety of indications.

While the opposite of an agonist is usually called an antagonist, the story is more complicated in the cannabinoid system. A receptor antagonist blocks activation of the receptor. Rimonabant and related compounds go a step further.

“Their pharmacology is somewhat complicated,” says Pertwee. “They don’t just block. They produce effects themselves, and those effects are opposite to what you get with an agonist.”

For example, while CB1 receptor agonists inhibit neurotransmitter release, inverse agonists specifically stimulate neurotransmitter release from neurons. In animals, cannabinoid agonists act as painkillers, while Rimonabant actually amplifies pain responses. Rimonabant also exacerbates tremors and spasticity in a mouse model of multiple sclerosis (MS), whereas cannabinoid agonists reduce those symptoms.

Numerous Applications

Targeting the cannabinoid system could have numerous applications, as the investors buzzing about Rimonabant have already realized. Pertwee focuses on cannabinoid analogs’ potential uses as painkillers and as treatments for MS.

In animal models, CB1 agonists reduce acute and inflammatory pain, as well as the difficult-to-treat neuropathic pain that is untouched by traditional opioids. This aligns nicely with the patterns of CB1 expression in the nervous system, where it appears in areas of the brain and peripheral nerves involved in pain perception.

CB1 also is in the brain regions responsible for controlling movement. Satisfyingly, CB1 agonists reduce tremors and spasticity, and may even reverse the demyelination process in animal models of MS. CB2 agonists also reduce pain, including neuropathic pain. This is surprising, because CB2 is not known to be expressed on neurons.

Drug developers are now pursuing many strategies to improve the benefit-to-risk ratio for cannabinoid receptor activation in the clinic. These include targeting CB1 receptors outside the central nervous system, selectively activating CB2 receptors, and elevating endocannabinoid levels by delaying their removal from their sites of action.

Still another approach is to enhance the response of CB1 receptors to endogenously released endocannabinoids, by activating an allosteric site that Pertwee and his colleagues recently discovered on the CB1 receptor.

Meanwhile, patients suffering from chronic pain or MS continue to use marijuana and THC-containing extracts. Though this is less than ideal, Pertwee points out that when subjective reports from patients are taken into account, “My own view is that the benefits outweigh the risks.”

Also read: New Age Therapeutics: Cannabis and CBD

Hollywood Hysteria or Scientific Reality?


Much hype is made about the impact of climate change from both sides of the ideological spectrum. But what does the actual science say? These NASA researchers break it down.

Published March 1, 2005

By Sheri Fink


From the cover stories of popular science magazines to the content of popular Hollywood movies, the possibility of abrupt, catastrophic climate change has stirred the public imagination. But how real is the threat? At NASA’s Goddard Institute for Space Studies, Gavin A. Schmidt and Ronald L. Miller are attempting to answer that question by creating climate models, testing them against evidence from historical climate records, and then using the models in an effort to predict the climate of the future.

The Greenland ice core offers clues to the history of climate change. Calcium content and methane levels correlate with the sharp temperature swings that mark abrupt climate changes. “You can count the layers in these ice cores. It’s like tree rings; you can see one year after another,” says Schmidt.

The idea that abrupt climate change is even a possibility in our relatively climatically stable Holocene era comes by way of a single example. Recorded in the Greenland ice core, it dates to the very end of the last ice age, roughly 12,000 years ago.

“This is the poster child for abrupt climate change,” says Schmidt, “extremely cold going to extremely warm very, very quickly. When this was first discussed, people had no idea that the climate could change so rapidly.”

The period was named the Younger Dryas because of its reflection in European Dryas flower pollen records. Various other climate records also show the event – from caves in East China to sediments in the Santa Barbara and Cariaco Basins to ice cores in the Andes to cave deposits around the Mediterranean.

Flow and Flux in the North Atlantic

“You can see a clear signature of this event almost everywhere in the globe,” says Schmidt. However, the effect is largest near the North Atlantic. “That kind of points you to something that’s happening in the North Atlantic as a possible cause or trigger for what’s going on,” he says.

The circulation of the ocean is driven not only by wind, but also by the water’s salt content and density. The two factors interact in a complex way.

Schmidt sums up the ocean’s overturning circulation – also known as thermohaline circulation – as “warm water that rises along the surface and cold salty water that remains underneath. That transport makes it much warmer in the North Atlantic than it is for instance in the North Pacific.”

The process is self-sustaining. “It’s warm in the North Atlantic because those currents also bring up salt. That salt is heavy, which causes water to sink, and this motion causes the water to release heat.”

He points out that the system also has the potential for different states, however. If for some reason the currents ceased, then the water would not be as salty. It would not sink, and the surroundings would stay cold.
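These “different states” can be illustrated with the classic two-box model of the thermohaline circulation introduced by Stommel in 1961, a textbook toy rather than anything from the GISS simulations. With the temperature contrast pinned at 1 in nondimensional units, the salinity contrast S evolves as dS/dt = F − |1 − S|·S under freshwater forcing F, and the overturning strength is q = 1 − S. A minimal sketch:

```python
# Minimal sketch of Stommel's (1961) two-box thermohaline model, a
# textbook toy (not the GISS model). Temperature contrast fixed at 1;
# s is the nondimensional salinity contrast, forcing is the freshwater
# input, and the overturning strength is q = 1 - s.

def equilibrate(s0: float, forcing: float, dt: float = 0.01, steps: int = 20_000) -> float:
    """Integrate ds/dt = forcing - |q| * s to a steady state."""
    s = s0
    for _ in range(steps):
        q = 1.0 - s                       # overturning strength
        s += dt * (forcing - abs(q) * s)  # salt balance
    return s

F = 0.2  # identical freshwater forcing for both runs
for s0 in (0.1, 1.5):
    s = equilibrate(s0, F)
    print(f"start S={s0}: equilibrium S={s:.3f}, overturning q={1 - s:+.3f}")
```

Both runs feel the same forcing, yet one settles into a strong, thermally driven overturning (q > 0) and the other into a reversed, salinity-dominated state (q < 0): two stable circulations under identical conditions, which is the bistability Schmidt alludes to.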

Researchers recently developed a paleoclimate measure that correlates with the residence time of water in the North Atlantic. “In the Younger Dryas,” says Schmidt, “there was a big dip in how much water was being exported – or the residence time of water in the North Atlantic.” This indicates that the North Atlantic overturning circulation was much reduced at the time of that rapid climate change.

An Explanation for Abrupt Climate Change?

The last ice age was characterized by many examples of rapid climate change. Changes in ocean circulation provide a possible explanation.

“We have reasons to believe that ice sheets aren’t particularly stable,” says Schmidt. “Every so often, if they get too big, they start melting at the base.”

Icebergs calving into the ocean and melting would produce a large freshwater pulse. “As you make the ocean fresher and fresher and fresher, then you get less and less formation of that deep water. As that reduces, then there’s less salt being brought up from the lower latitudes,” he explains.

“At one point it’s just too fresh, and then nothing’s being brought up anymore.” In that case, the only stable state, Schmidt says, is a stalled thermohaline circulation.

Could a reduced overturning actually cause abrupt climate change? The answer isn’t clear yet, but there is a correlation. “When we have a weak circulation, it seems like the climate in a lot of cases is very cold,” says Ron Miller.

Why Worry Now?

On top of this instability, humans have dramatically changed the atmosphere’s composition over the past 150 years, and that is cause for some concern. Part of the extra energy trapped by greenhouse gases goes into evaporating more water, which should lead to an increase in rainfall.

“It’s predicted by every climate model,” says Miller. That rainfall could be a source of just enough fresh water to tip the scales, stilling the ocean and, perhaps, cooling the atmosphere. Indeed, a study of ocean salinity shows that the ocean has gained extra fresh water over the past decades. “The question is, how much cooling do we get?” asks Miller. “Where is this cooling happening? Is it global, and how important is it compared with the warming caused by greenhouse gases?”

Miller and Schmidt are using a general circulation model to predict the answers to these questions. First, the model was tested to see how well it could reproduce the climate of the past century. A grid is superimposed on the planet, and within each grid cell the model tracks the changes in water vapor, liquid water, momentum, energy, and other quantities.

Next, positive and negative forcings – the atmospheric conditions expected to warm or cool the planet, such as solar irradiance and tropospheric and stratospheric aerosols – were calculated or estimated and added to the model. “It’s tracking the observed global average temperature surprisingly well, and we’re really quite proud of this,” says Miller.
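To get a feel for how a set of forcings becomes a temperature trajectory, here is a zero-dimensional energy-balance sketch – effectively a single global grid cell, with assumed heat-capacity and feedback values, standing in for the vastly richer GISS model:

```python
import numpy as np

# Zero-dimensional energy-balance sketch: one global "grid cell" instead
# of a full 3-D model. C and LAMBDA are assumed illustrative values.
C = 8.4e8       # effective heat capacity, J m^-2 K^-1 (~200 m of ocean)
LAMBDA = 1.25   # climate feedback parameter, W m^-2 K^-1
YEAR = 3.15e7   # seconds per year

def temperature_response(forcing):
    """Step dT/dt = (F - LAMBDA*T) / C forward one year per forcing value (W m^-2)."""
    T, history = 0.0, []
    for F in forcing:
        T += YEAR * (F - LAMBDA * T) / C
        history.append(T)
    return np.array(history)

# Hypothetical forcing ramp, roughly 20th-century-like: 0 to 1.6 W m^-2
forcing = np.linspace(0.0, 1.6, 100)
print(f"warming after 100 years: {temperature_response(forcing)[-1]:.2f} K")
```

Even this toy captures an essential feature of the real system: the ocean’s heat capacity delays the response, so the temperature at any moment reflects the forcing of decades past.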

Miller admits to a few kinks in regional predictions. Still, he says, “we have a lot of confidence that the model is good at reproducing 20th century climate trends. That gives us some confidence that we can actually make predictions in the future.”

No Cause for Alarm, Yet

After estimating the atmospheric conditions of the next century – no easy task in and of itself – the researchers took the model out for a spin to see what could be expected over the next 100 years. The results indeed predict a slowing of the thermohaline circulation corresponding to a cooling in some areas. “But it’s swamped globally by the warming expected from the greenhouse gases,” says Miller. “So clearly there’s no evidence for any sort of ice age.”

Other researchers have created their own models. These predict varying degrees of slowing in the overturning circulation as the ocean freshens, but all agree with Miller and Schmidt’s conclusions. “The models give no indication we’re going to see any climate surprises or ice ages in the next 100 years or so,” says Miller.

The terrible tsunami of Dec. 26, 2004 left no one in doubt of the power of oceanic change, in that case due to an undersea earthquake. Still, those kept awake at night by the imagined catastrophic aftermath of a thermohaline circulation slowdown, as depicted in the film The Day After Tomorrow, may rest easier. Meanwhile, the scientists are continuing to refine their models and to study other factors that may have led to rapid climatic change in the past.

Also read: Climate Change: A Slow-Motion Tsunami


About the Author

Sheri Fink, M.D., Ph.D., is a freelance journalist. Her award-winning book War Hospital: A True Story of Surgery and Survival (PublicAffairs, 2003) was published in paperback in December 2004.

The Solution to Address Education Equity

A child uses his fingers to do the math equation 4 minus 1.

Adequate financial support for students early in their learning journey, particularly the preschool level, can help us create a more equitable education system.

Published March 1, 2005

By Mary Crowley

This is the era in which no child is supposed to be left behind. As Jeanne Brooks-Gunn illustrated in her Nov. 15, 2004 talk at The New York Academy of Sciences (the academy), however, the trail of kids bringing up the rear is long, poor and unfairly weighted with students of color. Her talk drew on the themes of “School Readiness: Closing Racial and Ethnic Gaps,” the upcoming spring issue of The Future of Children (volume 15, no. 1), which she edited with Cecilia Elena Rouse, professor of economics and public affairs at Princeton University, and Sara McLanahan, professor of sociology and public affairs at Princeton.

Recent education policy has focused on test score differences, and significant political capital is being spent to ensure that all kids stay at grade level. Yet the test score gap between white and nonwhite students, though narrower than it once was, remains large. According to the 2002 National Assessment of Educational Progress, 42% of white 12th graders read at grade level, compared with only 16% of black students and 22% of Hispanic students; similar gaps appear in other subjects, despite the high-profile No Child Left Behind Act.

The Differences that Matter

The problem is that policymakers are barking up the wrong tree, according to Brooks-Gunn, the Virginia and Leonard Marx Professor of Child Development at Teachers College and the College of Physicians and Surgeons at Columbia University, and director of the National Center for Children and Families and the Institute for Child and Family Policy at Columbia. Her research suggests that policymakers should be thinking in terms of racial and ethnic gaps in school readiness, not in school achievement.

While most education research and public policy dollars are devoted to academic skills, a national sample of 3,500 kindergarten teachers queried in the late 1990s reported that 46% of kids arrive at school missing the basic skills required to learn, such as impulse control, the ability to follow directions, and the capacity to work in a group. Brooks-Gunn maintained that putting more resources toward very young children will pay bigger dividends in the long run than simply funding school programs.

Brooks-Gunn’s research shows that racial test-score gaps begin by age three to four, as soon as children can take vocabulary tests – and the gaps are large. On vocabulary tests, the difference between black and white 3-, 4- and 5-year-olds is a full standard deviation (with black kids falling 15 points below the mean of 100), while the differences in early reading and counting are 60% of a standard deviation, or 8 to 9 points.
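For readers who want the arithmetic, those effect sizes convert directly into points, assuming the IQ-style scale (mean 100, standard deviation 15) these tests use:

```python
# Converting reported effect sizes into test points, assuming an
# IQ-style scale with a standard deviation of 15 points.
SD = 15
gaps = {"vocabulary": 1.0, "early reading and counting": 0.6}
for skill, effect_size in gaps.items():
    print(f"{skill}: {effect_size:.1f} SD = {effect_size * SD:.0f} points")
# vocabulary: 1.0 SD = 15 points
# early reading and counting: 0.6 SD = 9 points
```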

“These differences matter,” said Brooks-Gunn. Researchers estimate that 50% of the test score gap seen at 12th grade already exists by age five. Not only are kids who score poorly as preschoolers less likely to graduate, they also are more likely to become teen mothers or engage in juvenile delinquency. “It’s a hard trajectory to change once you’re on it,” she insisted.

Poverty: A Black-and-White Issue

The unifying principle behind these discrepancies is poverty. Almost 18% of American kids – 12.9 million – are poor according to the 2003 federal poverty threshold, an annual income of $18,810 for a family of four. Because that threshold represents what Brooks-Gunn called an “impossibly low living standard,” the share of kids in real hardship is actually much higher.

And, because blacks and Hispanics are two to three times more likely than whites to be poor, Brooks-Gunn said her work is about racial inequality as well as poverty. “The argument against looking at racial gaps is that we need to help all kids,” she said. “This is certainly true, but our group wants to highlight the fact that current policies are leaving a group behind. We do live in a divided society that does not meet America’s purported value of equity, and the stark differences between white and black children growing up in America must be addressed.”

The litany of travails faced by children in these economic circumstances is long and hard. Compared to children who aren’t poor, they are more likely to have a depressed mother, a teenage mother, a mother with no job or a job with low socioeconomic status (SES), or a mother who dropped out of high school. These children also are more likely to be born with low birth weight, be punished by spanking, and have three or more siblings. Thirty percent of poor or near-poor children have no books in their homes.

Links Between Socioeconomic Status and Achievement

Brooks-Gunn’s work with economist Greg Duncan, Edwina S. Tarry Professor of Education at Northwestern University, examined the links between SES and achievement. Persistent and deep poverty has a bigger effect than any other factor, even when controlling for maternal cognition, number of siblings and other family differences. They also found that early childhood poverty is more impairing than poverty in mid- or late-childhood. “Living in poverty dampens achievement by many routes, including less access to high quality child care, parenting differences and parental mental health differences,” said Brooks-Gunn.

What happens to test score gaps in young children when you control for parental income and education? The achievement gap shrinks significantly. The gap in picture vocabulary and IQ is cut in half, from about one standard deviation to half of one. The gap in school readiness skills (pre-reading and math skills at the beginning of kindergarten) drops from about three-fifths of a standard deviation to one-fifth or less. “The huge difference that controlling for SES makes in terms of reducing the achievement gap suggests that interventions can make a difference,” argued Brooks-Gunn.

She has made several suggestions, starting with income supplements for the poor. Welfare reform studies show that programs that include supplemental income for mothers improved achievement test scores in children, while reforms that simply meant “mom goes back to work” had no effect. An annual gain of $1,000 translates into an achievement increase of almost one point. The problem with such a strategy is that the income gap between the average white and black family is about $30,000 – at roughly a point per $1,000, too big a differential for society to easily make up with supplements alone. Alternatively, the earned-income tax credit is a “stealth program for helping poor kids,” according to Brooks-Gunn.

The Economics Support Early Education

This tax break gives up to $4,200 to low-income working families, and 19 million families claim it. In 1997, the earned-income tax credit raised single mothers’ incomes by an average of 9%, helping lift two million kids out of poverty.

“Parenting programs also make a difference,” said Brooks-Gunn. Research shows that parenting behavior can be changed to boost literacy in the home – more reading, more language stimulation – and that doing so can reduce achievement gaps as well. Home intervention alone does not help with school readiness, however. What works is center-based intervention that includes a parenting component, such as literacy programs that feature reading with both parents and teachers.

Five studies of early childhood education found that weekly home visits coupled with early childhood intervention at daycare centers boosted IQ by 5 points at age 3 – a difference that was sustained through age 18. Early Head Start, which runs from pregnancy to age 3, features both home- and center-based intervention.

The bottom line, concluded Brooks-Gunn, is that the school readiness gap in pre-reading and math skills between black and white children could be narrowed significantly with high-quality early childhood education for all poor children. The kinds of programs she envisions don’t come cheap, of course. But she argues that the pay-off is enormous – and that economists back her up.

Nobel laureate Jim Heckman, the Henry Schultz Distinguished Service Professor in Economics at the University of Chicago, maintains that the nation should invest the bulk of its education funds in preschoolers, because investment at that age pays a far greater return for both individuals and society than money spent on elementary or high school. As Brooks-Gunn noted, “It’s a huge step to have economists arguing for early education dollars.”

Also read: A New Report on the “Global STEM Paradox”