A Laboratory for Science Education in NYC

A high school student inside a science lab holds up a test tube with an orange liquid.

With an alumni association that reads like a dream science team for Fantasy University, Stuyvesant High School proves itself one of the best in the nation.

Published July 1, 2006

By David Cohn

Image courtesy of Emi Suzuki

The principal’s office at Stuyvesant High School is lined with trophies of many shapes, but only one size: big. A few of the prizes are for sports, such as swimming, but most are for cerebral pursuits such as science, math, and chess. In one corner of the room looms a giant check from the Intel Science Talent Search, which awards $1,000 to a school when its student is chosen as one of 300 semifinalists in the annual nationwide contest. Stuyvesant’s check for this year is made out for $8,000, but that’s nothing unusual.

With a strong focus on math and science, Stuyvesant, located on the Hudson River at Chambers Street in Battery Park City, is recognized as one of the best public high schools in the country. The school has produced four Nobel laureates, and the membership of its 30,000-strong alumni association reads like a dream science team for a game of Fantasy University.

Members of The New York Academy of Sciences (the Academy) who are Stuyvesant grads are too numerous to list here, but they include Brian Greene of Columbia University, a leading authority on superstring theory; Eric Lander of MIT, the genomics pioneer; and physicist Nicholas Samios, director of the Brookhaven National Laboratory. Joshua Lederberg, who won the Nobel Prize for Medicine in 1958 for discovering the mechanisms of genetic recombination in bacteria, is a Stuyvesant grad, class of 1941. He recalls bright young students bouncing ideas off each other and “arguing the merits of going into science,” an atmosphere not too different from today’s.

The Top Achievers

Image courtesy of Emi Suzuki

Stuyvesant’s 800 incoming students represent the top achievers from the 25,000 children who take the Specialized High School Admissions Test, the SAT-like exam that determines who can attend one of New York’s special science and technology public high schools. “If I walked into the 9th grade assembly and said ‘Will everyone who was valedictorian and salutatorian last year in their junior high please stand up,’ about two-thirds would stand,” says principal Stanley Teitel.

Once accepted, students can choose from a varied curriculum that includes ten languages, rigorous basic science classes, and advanced science courses in fields such as oceanography, molecular biology, and psychology. Students leave Stuyvesant “prepared for the next level,” says Teitel, which is often a top-tier college or Ivy League university. In fact, Stuyvesant caps at seven the number of colleges to which each student can apply, to reduce overlap.

From All-Male to All-Star

The formerly all-male school became coed in 1969, and moved in 1992 from East 15th St. to its new campus in Lower Manhattan, a stone’s throw away from Rockefeller and other Battery Park City parks where students go to relax, eat, and take in majestic Hudson River views. The school’s remarkable labs, which specialize in everything from earth sciences to robotics engineering, “really capture the energy and enthusiasm of the school,” says Robert Sherwood, president of the Alumni Association, which donates most of the money to fund the facilities.

Image courtesy of Emi Suzuki

The location, only a few blocks from most major subway lines, makes it convenient for students who come from all five boroughs. The location also opens young minds. “Coming from Queens, I didn’t have much interaction with Manhattan,” says Emi Suzuki, president of ARISTA, a national honors society and Stuyvesant’s largest club. “So when I started at Stuyvesant, commuting really exposed me to all kinds of different people.”

Suzuki, like many of her classmates, has already had time in a professional lab. With the help of an internship advisor, she was able to spend last summer at the Memorial Sloan-Kettering Cancer Center under the mentorship of Dr. Harold Varmus, 1989 recipient of the Nobel Prize. Suzuki cultured cells, and produced and purified immunoadhesion-marker proteins. Others in her class interned at prestigious laboratories at Columbia, NYU, or Cornell.

“Stuyvesant absolutely does not give us internships on a silver platter,” Suzuki says, “but I do think that our school’s reputation helps.”

Learn more about educational programming at the Academy.

PATH Forward: Connecting New Jersey and New York

A shot of the Oculus in downtown NYC.

Santiago Calatrava’s new transport terminal will encourage Downtown residents, commuters and tourists to look up and marvel.

Published July 1, 2006

By Fred A. Bernstein

Image courtesy of sean via stock.adobe.com.

In his first completed project in New York, the Spanish-born architect Santiago Calatrava designed a time capsule meant to be opened in the year 3000. Calatrava’s bulbous, polished metal box, which stands outside the American Museum of Natural History, was clearly inspired by nature. But it would take experts from several departments of the museum to pin down all the referents. Some observers see a seashell; others, a flower or a seedpod; still others, an elaborate crystal. Animal, vegetable, or mineral?

In a world where most buildings are simply containers, their forms influenced only by other buildings, Calatrava’s blatantly biomorphic structures have made him, at 54, the most accessible of the current generation of superstar architects.

Most of Calatrava’s structures—bridges, airports, train stations, and museums—are in Europe, but as many as three could arrive on the Lower Manhattan skyline by the end of the decade. The largest (and the one most certain to be built) is the $2.2 billion PATH terminal at Ground Zero, scheduled to open in 2009.

Hands in Prayer? Or Birds in Flight?

The terminal, just east of where the Twin Towers stood, will be topped by a pair of curved canopies of glass and steel that reach high into the sky as decoration. A hydraulic system will allow the canopies to rise, creating an opening about 35 feet wide at its center, bathing the huge concourse in sunlight.

Some visitors will see the canopies as hands interlocked in prayer; others will see birds in flight (to heighten the allusion, Calatrava released a dove into the air when he unveiled his design). Or perhaps it isn’t the bird but the birdcage, opening to the sky in a symbol of freedom. The building has been particularly welcome news at Ground Zero, where architectural squabbles—some growing out of forced collaborations—continue to make headlines.

Calatrava collaborates with no one, and it’s just as well, since he has too many ideas already. Born in Valencia, he speaks nearly a dozen languages and sometimes uses all of them—citing the works of philosophers, composers, poets, and painters—in a single sentence. He has no compunction about mixing metaphors in his buildings; how else can he hope to get a fraction of his ideas built in just one short lifetime?

Thirty years ago, after receiving an undergraduate degree in architecture, Calatrava moved to Switzerland to study engineering. He quickly developed a style all his own. His student work resembled the streamlined forms of one of his idols, Robert Maillart, an early 20th-century designer of bridges in the Swiss cantons. Maillart’s goal was to remove excess material, which resulted in concrete bridges so thin, they appeared to be stretched almost to breaking.

Bridging Twist and Turn for Decorative Effect

But unlike Maillart’s strictly economical structures, Calatrava’s bridges twist and turn for decorative effect. Not surprisingly, another of Calatrava’s idols is the great Catalan architect Antoni Gaudí, who rarely used right angles and whose buildings ornament Barcelona.

Since the advent of modernism, architects have almost universally tried to explain form as the direct result of function, as if anything less rational were suspect. But Calatrava has joyfully shaken off that stricture. His design for a music school in Switzerland uses five exposed steel cables. Calatrava has said “I chose five, even knowing that I could have used only two, because music is read over five lines.”

More recently, he designed an opera house for Tenerife, in the Canary Islands, with a vast curved wing that resembles a crescent moon, a wave, an orchid, or about half a dozen other forms from nature. Asked about the origins of the wing, which significantly increased the cost of the building, Calatrava didn’t pretend that it served any practical purpose—except the purpose to inspire.

Lately, the architect has been creating buildings that don’t just look ready to move; they do move. Shortlisted to redesign the Reichstag in Berlin, Calatrava proposed a glass dome that would open when the Bundestag was in session, symbolizing openness in government. That design was never built. But in 2001 his ideas took flight in an entry pavilion at the Milwaukee Art Museum. There, a roof that resembles a bird’s wings opens to the sky in good weather. Getting the wings built was tricky—after long delays and huge cost overruns, Calatrava had the pieces assembled in Spain and flown across the Atlantic in a giant Soviet transport plane. Even then, there were minor problems with the mechanism.

A Secular Version of Gothic Cathedrals

In the end, Milwaukee garnered an important civic symbol—and even skeptics find the building’s now-reliable daily displays irresistible. The expense is of little concern to Calatrava’s fans, who see his buildings as the modern, secular version of Gothic cathedrals: uplifting symbols of humankind’s highest aspirations.

Private developers in the U.S. are just beginning to see whether Calatrava’s panache can produce profits. If completed in Chicago, his Fordham Spire, a mixed-use tower that twists a few degrees with every floor, would be the tallest building in the United States. For South Street in Lower Manhattan, Calatrava has designed a tower of 45-foot cubes hanging from cables—a plan the architect worked out with blocks of wood and marble.

Each cube would contain a single “apartment” priced at $30 million or more. He has also designed a gondola that could bring visitors from Manhattan to Governors Island—a pro bono project that he accepted at the request of city and state officials hoping to spark interest in the island’s redevelopment.

There’s only one problem with a Calatrava gondola: There had better be a very special building at the other end, or the trip will be an anticlimax.

Also read: An Architectural Historian’s Perspective of NYC


About the Author

Fred A. Bernstein studied architecture at Princeton and law at NYU, and writes about both subjects.

The Anthropic View of the Universe

According to Leonard Susskind, the universe we know might be just one crude but carefully balanced case among a host of different universes, each with its own physical laws.

Published June 9, 2006

By Sheri Fink, MD, PhD

Sponsored by: The New York Academy of Sciences and Little, Brown & Co.

Image courtesy of Maximusdn via stock.adobe.com.

Stanford University professor Leonard Susskind has had an illustrious career in theoretical physics. He is known as a “father of string theory”—the idea that everything, at its most minute scale, is made of combinations of vibrating strings. String theory began as a search for a unified theory capable of reconciling quantum field theory with general relativity, but has expanded in recent years and has caused a major shift in theoretical and experimental physics.

In his recent popular science book, The Cosmic Landscape: String Theory and the Illusion of Intelligent Design, Susskind addresses some startling recent developments in string theory, and on April 10, 2006 he took the podium as a part of the Academy’s Readers & Writers series to discuss why these ideas are making such waves in the physics community.

Susskind’s book deals with the meeting of two controversial ideas. One is the anthropic principle, which suggests that our corner of the universe is perfectly tailored to our existence—otherwise we would not be here to observe it. The other is string theory’s prediction of the “multiverse,” a giant, diverse universe with a rich landscape of “pocket universes,” each governed by its own laws of physics. The expansive possibilities of the multiverse provide a plausible explanation for the unlikely perfection of our own, relatively small, universe.

The Not-So-Elegant Universe

The array of elementary particles that determine the properties of atoms has grown in recent years. Electrons, photons, quarks, gluons, Z bosons, and neutrinos are just a few of the many elementary particles thought to exist. “It’s a rather large list,” said Susskind. “It’s hardly the kind of list that a minimalist would have invented.”

There is no particular reason known for the existence of these particles. Some of them, however, are requisites for life. For example, atoms need to contain electrons, which are bound to the nucleus by the force of photons jumping back and forth between the electron and the nucleus. The nucleus, in turn, is held together by gluons jumping back and forth between quarks.

“To me the whole thing does not look like the product of an elegant mathematical theory,” said Susskind. “It doesn’t look like beautiful numbers like e or pi or √2; instead, it looks like a Rube Goldberg machine! It looks like something that was designed by a rather poor engineer for some purpose. While it works, it’s hardly elegant.”

Aside from particles, the existence of certain forces has allowed life to evolve. Some seem finely tuned such that if the values were slightly bigger, life could not exist. Take gravity, for example—a force 42 orders of magnitude weaker than the electrical force. If it were even one order of magnitude stronger, “the universe would expand and recontract in a much shorter time than it would take for evolution,” said Susskind. “Instead of being filled with galaxies, the universe would be filled with black holes. Even if an earth did form, it would not last very long. It would just have been sucked right into a black hole.”

The Puzzle of the Cosmological Constant

The weakness of gravity, the existence of just the right motley set of particles to form the building blocks of life—are these facts enough to cause physicists to abandon their quest for mathematical elegance and shift to embrace the anthropic principle? No, said Susskind, there is still the possibility that they arose by chance. “But there is one fine-tuning of nature, one accident, one conspiracy we might call it, which is so extraordinary that nobody thinks it’s an accident.”

Even the greatest of scientists have been prone to second-guessing. Einstein was not immune. He posited the existence of the “cosmological constant”—the energy density of empty space, which, if positive, gives rise to a repulsive pressure that counteracts gravity. While he later abandoned the concept, it did not disappear completely. “This is a case of Pandora’s Box,” said Susskind—once the lid had been raised on the idea, scientists could never explain it away.

The cosmological constant is also known as vacuum energy. In quantum theory, the continuous agitation of a vacuum creates energy, leading to the outward pressure that the cosmological constant describes. However, when physicists combine the theory of elementary particles with the theory of gravity and use quantum field theory to calculate the cosmological constant, they derive a gigantic value; if it existed, such a large amount of energy would conflict with astronomical observations and would be disastrous. “It would be enough not only to shatter the earth, it would be enough to shatter every atom and molecule,” said Susskind. “Every nucleus, every quark would go flying apart.”
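The mismatch Susskind describes is often called the cosmological constant problem. A schematic statement of it runs as follows (the Planck-scale cutoff and the factor of 10^120 are the conventional textbook figures for this discrepancy, not numbers taken from the talk itself):

```latex
% Naive QFT estimate: zero-point energies summed up to the Planck scale
\rho_{\mathrm{vac}}^{\mathrm{QFT}} \;\sim\; M_{\mathrm{Pl}}^{4}
\qquad \text{(natural units, } \hbar = c = 1\text{)}

% Observed value inferred from astronomical measurements
\rho_{\mathrm{vac}}^{\mathrm{obs}} \;\sim\; 10^{-120}\, M_{\mathrm{Pl}}^{4}

% The discrepancy: roughly 120 orders of magnitude
\frac{\rho_{\mathrm{vac}}^{\mathrm{QFT}}}{\rho_{\mathrm{vac}}^{\mathrm{obs}}}
  \;\sim\; 10^{120}
```

This 120-order-of-magnitude gap between calculation and observation is the “conspiracy” Susskind refers to: every contribution to the vacuum energy must cancel to fantastic precision.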

More Mystery Around the Cosmological Constant

Nothing in known physics explains why the cosmological constant is not the size that quantum field theory predicts it to be. Physicists at first surmised that other particles and constants contributing to the calculation of vacuum energy must cancel out the large value, leading to a cosmological constant that is exactly zero.

In 1987, physicist Steven Weinberg proposed another idea. Physicists believe that gravity forced the bland early universe to differentiate into planets and galaxies by squeezing and contracting slightly denser regions of matter and sucking mass out of less dense regions. Weinberg showed that the cosmological constant must be extremely small—on the order of 10⁻¹²⁰ units (joules/cm³)—to prevent a repulsive force from counteracting this process.

“A cosmological constant even ten times bigger than this would have been destructive and deadly to life,” says Susskind. “It would have prevented the creation of the home of life—stars, galaxies, and especially planets.” Using the anthropic principle, Weinberg made a prediction. While life depends on the cosmological constant being smaller than 10⁻¹²⁰ units, the value does not need to be very much smaller than that. So, he predicted, if the value of the cosmological constant is determined by the existence of life, then its 121st digit will be a number other than zero.

Several years ago, the 121st decimal place of the cosmological constant was measured through cosmological observation; its value appears to be 2 instead of 0. To Weinberg and to Susskind, this confirmation of the earlier prediction is the best support for the anthropic contention that “some features of our own existence determine certain things about the laws of nature.”

Explaining the Appearance of Design

What else, besides an intelligent designer, could have tailored the universe to fit the needs of planets and people, including unlikely features that defy current mathematical prediction? Susskind’s answer lies in string theory—a mathematical model of nature to which many, if not most, physicists now subscribe.

String theory makes sense in 10 dimensions of space, not our usual three. The extra six-dimensional spaces are known as Calabi–Yau, or CY, spaces. “These spaces control all the properties of the world in a large scale,” said Susskind.

“The (elementary) particles have to be able to fit into these spaces. If they fit, then they’re allowable particles. If they don’t, they’re not allowable. All the laws of nature and string theory are controlled by these features of these CY spaces.” There are about a million different CY spaces, or “manifolds.” Each one can be decorated with “little lines of flux that can wind around them in many, many ways,” said Susskind. “When you start counting up all the possible ways the CY manifolds can be decorated with these fluxes, the numbers are humongous.”

Thus, string theory allows for a landscape of possible universes “so rich that it appears there may be as many as 10⁵⁰⁰ different environments that can be described.” The number of possibilities is so large that it can compensate for the incredible unlikelihood of the cosmological constant being so exceptionally small.

Do these alternate universes actually exist outside of the realm of possibility, or is the universe everywhere the same as it is here, in all the places we can measure it? Nobody knows the answer yet. What is known is that the universe is far wider than the 10 billion light years across that it was once assumed to be.

Inflationary Cosmology

The school of inflationary cosmology holds that the universe is expanding at an increasing rate. An exponential and perpetual expansion would be possible if, as the universe expanded, new bits of space formed to fill interstitial spaces. The theory of eternal inflation suggests that as the universe grows, bubbles of alternate types of space appear.

“If a bubble is too small, it will melt back into the environment,” said Susskind. “If it happens to grow a little bit, it will then start to really expand.” Within that expanding bubble, more bubbles will form. “It creates this enormous diversity of different properties and in some tiny, tiny fraction of it, perhaps a comfortable little green neighborhood appears where life can exist. That’s where we are.”

Because physics has long posited a world controlled by elegant mathematics, the anthropic principle and the multiverse represent a fundamental shift in the way that many physicists and cosmologists view their fields. In fact, Susskind’s theories have drawn the ire of some prominent scientists. Stanford professor Burton Richter, winner of the 1976 Nobel Prize in Physics, has accused Susskind of having “given up” on the effort to find a theory that explains all the properties of fundamental particles and forces, bringing to an end the “reductionist voyage that has taken physics so far.”

Creationism

Religious figures, on the other hand, abhor Susskind’s views because they contradict the idea that God created the universe. The Roman Catholic cardinal archbishop of Vienna, Christoph Schönborn, wrote in The New York Times that the multiverse hypothesis was “invented to avoid the overwhelming evidence for purpose and design found in modern science.”

Susskind, for his part, seems to relish the controversy. “Paradigm shifts, serious ones, raise people’s anger, raise people’s passion. They are threatening,” he said. “The anger, the passion, the fighting spirit that goes with these questions is extremely intense.” The fact that Susskind’s ideas have aroused such emotion reflects the great attention that is being paid to this new way of looking at the universe.

About the Speaker

Leonard Susskind, PhD, grew up in the South Bronx, where he worked as a plumber and steam fitter during his early adult years. As an engineering student at the City College of New York, he discovered that physics was more to his liking than either plumbing or engineering. He later earned a PhD in theoretical physics at Cornell University.

Susskind has been a professor of physics at the Belfer Graduate School in New York City and at Tel Aviv University in Israel. He has also been the Felix Bloch Professor in theoretical physics at Stanford University since 1978. During the past forty years he has made contributions to every area of theoretical physics, including quantum optics, elementary-particle physics, condensed-matter physics, cosmology, and gravitation.

In 1969 Susskind and Yoichiro Nambu independently discovered string theory. Later on, Susskind developed the theory of quark confinement (why quarks are stuck inside the nucleus and can never escape), the theory of baryogenesis (why the universe is full of matter but no antimatter), the Principle of Black Hole Complementarity, the Holographic Principle, and numerous other concepts of modern physics. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences.

Also read: Cosmic Chemistry and the Origin of Life


About the Author

Sheri Fink is the author of War Hospital: A True Story of Surgery and Survival (PublicAffairs, 2003). Fink obtained her MD and PhD in neurosciences at Stanford University and now, based in New York, writes about medicine, public health, and science for a range of publications.

Strategies from Successful Women Scientists

Author and former scientist Ellen Daniell discussed how participating in a small problem-solving group can lead to success in academic and other careers.

Published May 25, 2006

By Leslie Knowlton

Sponsored by: The New York Academy of Sciences and Yale University Press.

Image courtesy of sutlafk via stock.adobe.com.

Almost 30 years ago, Ellen Daniell, then an assistant professor of molecular biology at the University of California, Berkeley and the first woman in her department, joined a small bimonthly group of faculty, staff, and postdocs formed to reduce isolation and foster solutions to professional and other problems, including gender equity issues.

Today she credits the seven-member “Group” of high-achieving women, several of whom are well-known scientists, for seeing her through several difficult transitions, including being denied tenure at Berkeley, establishing herself in another career in business, and retiring from that to be a writer and enjoy her own interests.

In her book, Every Other Thursday: Stories and Strategies from Successful Women Scientists, Daniell tells the story of her experience with Group in an effort to help others form similar alliances. In her March 14, 2006, talk at The New York Academy of Sciences (the Academy), she explained the effect of Group on her life, saying, “I strongly believe I have made more satisfactory decisions and choices because I’ve talked out the possibilities, as well as the frequently apparent impossibilities, with Group.”

She also recommends this kind of organization to others not only in academia but also in a variety of professions, activities, and stages of life.

Common Concerns

Reading from her book’s preface, Daniell gave representative perceptions expressed by Group members, ingrained ideas and feelings that inhibit many women in many professions from achieving their full potential. They include

  • Maybe having a fulfilling personal life is incompatible with a successful career.
  • I feel like I’m an emotional cafeteria responding to what others want.
  • I feel responsible for everything but have no power to change anything.

Women also have trouble with recognizing personal achievements and taking credit for them. “It starts with forgiving mistakes … and moves from self-acceptance to self-appreciation and then to celebrating accomplishments.” This process requires developing a sense of entitlement. Group jokes that sometimes you have to say, “Maybe I AM the Queen of Sheba.”

After they learn to give themselves credit, it is important for women to take credit publicly when credit is due to them. This is important because in most pursuits, advancement and job satisfaction are affected by the image one presents to others. “We’ve worked long and hard on this while in the phase of careers when struggling to succeed and be recognized, and then found another puzzle—that of how to act as successful as we really are, without being dismissive of others.”

Another problem that comes up frequently in Group is making choices with a belief in the right to make them. “Change is stressful, no matter how desirable it is, and many support groups function primarily to help members through times of change and turmoil,” Daniell said. Some efforts are of the “egging-on” variety, giving encouragement to get on with a choice that’s already made. But most of the focus is on helping each other recognize when there are choices that can be made and figuring out how to make them.

How Group Works

Meetings are held evenings at homes of Group members, with the host of each session acting as facilitator. Group keeps a fixed bimonthly schedule, regardless of who can attend a particular session, and follows a set framework to ensure that everyone has an opportunity to speak, work, and listen.

First, the facilitator asks who wants to work on particular issues and how much time each person needs. The facilitator keeps track of the time requested and when that time is up, she asks if the person working wants more time. “Thinking about what you want to discuss and how long you think it might take both to describe the issue and to get feedback that you want is pretty good practice for assessing and asking for what you want outside of Group,” said Daniell.

While members, after becoming very good friends, now discuss personal issues, such as retirement, health, grandchildren, and aging parents, professional concerns still predominate. Members listen very closely, saying nothing until the speaker requests feedback, at which time other members give an honest appraisal of both the issue presented and solutions to it. “We try very hard not to make nice and not to say what it is that we think the person working wants to hear,” Daniell explained.

Eliminating Negative Self Perceptions

To help identify problems, Group raises “pig alerts” in response to certain kinds of statements. A pig is a “negative self-perception, an external judgment that you lay upon yourself and then use to defeat practically anything that you’re trying to accomplish.” They are frequently identified by the words always or never or by personal characteristics, such as being lazy or disorganized. Members attempt to replace pigs with a positive view.

For example, instead of saying, “I have so many papers lined up to be written because I’m lazy or disorganized,” one might change one’s perception by saying, “There are papers lined up because I’ve gotten so many interesting research results from my hard work.” This allows the person with the pig to overcome the negative characterization and address the problem.

After identifying a problem, Group creates a strategy to solve it. Members often make a contract, which includes a concise formulation of objectives, either immediate or long-range, to solve a problem or reach a goal. The contract should be “doable,” recognizing that it is often necessary to break large problems into the many small ones of which they are composed. A benefit of contracts is that often an apparently new issue may relate back to a previous contract. “By using this mode of thinking about something in terms of a contract,” Daniell advised, “you may find connections among various issues that at first didn’t seem connected.”

After work is done, members have refreshments and give each other strokes, positive statements about someone else. Stroke etiquette requires that in receiving a stroke one try to absorb and believe it, or just say you believe it. “It’s easier to give strokes than to get them at first, but once you get into it, they are really quite delicious.”

The Membership

Daniell noted that her book was written with the review and approval of all members, including Christine Guthrie, Carol Gross, Judith Klinman, Mimi Koehl, Suzanne McKee, and Helen Wittmer, each of whom let her struggles and fears be presented to motivate and help others. Women frequently cite isolation and marginalization as reasons that they avoid or get out of science and engineering at major research institutions, she said. They are also underrepresented relative to men in top faculty positions. Daniell sees her book as a way to help those women realize their potential.

Concluding her talk, Daniell said Group helps “alleviate the sense that you’re swimming with sharks and does so in an atmosphere of complete confidentiality—a place where everybody is truly on your side.” Along with practical support comes compassion and humor. In her experience with Group, pig images have become humorous symbols of struggles. All members have collections of ceramic, wood, and glass pigs displayed in their homes, along with pig bookends, plush stuffed pigs, pig earrings, and pig socks. “In contrast to the mental pigs that threaten our well-being, these little tangible pigs are a benign species that remind us to treat ourselves with compassion.”

About the Speaker

Ellen Daniell is a writer and consultant. She graduated from Swarthmore College in 1969 with high honors in chemistry and received her PhD, also in chemistry, from the University of California, San Diego. She was assistant professor of molecular biology at the University of California, Berkeley, and has held management positions in human resources and patent licensing in the biotechnology industry.

Also read: Supporting the NeXXt Generation of STEM

A Neuroscientist’s Search for Memory

In celebration of his new memoir, the Nobel Prize-winning neuroscientist recounted many formative episodes from his life in science.

Published May 5, 2006

By Carl Zimmer

Sponsored by: The New York Academy of Sciences and W.W. Norton

Memory allows us to do more than just store telephone numbers and directions to the post office. It is a repository for lost worlds, which we can recreate years later. The Nobel Prize-winning neuroscientist Eric Kandel did just that during his lecture at The New York Academy of Sciences on March 2, 2006, as part of the Readers & Writers lecture series. Kandel, now 76, drew his audience back to his youth in Vienna in the 1930s. To convey the trauma of being a Jewish boy during the Nazi occupation of Austria, he recalled his ninth birthday.

“I’d gotten a number of toys, the most magical was a little shiny car I could control remotely,” Kandel said. But that joy turned to terror. It was 1938, the year the Nazis had invaded Austria. “Two days later, Nazi police officers came and told us we had to leave the house,” Kandel recalled. “They sent us to live with another family. When we came back, the apartment had been essentially emptied out. Everything was gone.”

Part Memoir, Part Intellectual History

Today Kandel understands a great deal about how he can manage to hold memories such as these. He escaped from Austria to the United States, where he trained as a neuroscientist. He went on to have a spectacular career probing the biological basis of the mind, winning the Nobel Prize in Physiology or Medicine in 2000. Kandel has woven together recollections of his life, his research, and the evolution of modern neuroscience into a memoir and intellectual history, In Search of Memory: The Emergence of a New Science of Mind.

Kandel attended Harvard, where he discovered psychoanalysis. He became convinced that it would allow him to understand both the rational and irrational sides of mankind. “This was the royal road to understanding the mind,” he said.

He entered medical school to be a psychoanalyst, but he had an unconventional idea about what his training should include. “I thought to be a psychoanalyst, it would be useful to know something about the brain,” he said. In the 1950s, most psychiatrists paid little heed to the actual structure and function of the brain. And neuroscience itself hardly existed as a unified discipline.

Eventually, Kandel ended up at the laboratory of Harry Grundfest at Columbia University. At first Kandel had the wild ambitions of someone who has not yet actually tried to study the brain. “I said, ‘I’d like to see if I can help localize where the ego, the superego, and the id are localized in the brain,’” Kandel recalled, laughing. “Grundfest looked at me like I was out of my mind. But rather than kicking me out, he said, ‘This is beyond the grasp of neuroscience today.’”

Cellular Psychoanalysis

Grundfest directed Kandel to more manageable experiments. In Grundfest’s lab he began recording the activity of neurons in crayfish. “In those days the output of the amplifier was hooked up to a loudspeaker so you could hear each action potential. Boom, boom, boom!” Kandel said. “It was fantastic. Here I was listening to the signals coming from the brain of the crayfish. This was true psychoanalysis at the cellular level.”

In place of his dream of finding Freud in the brain, Kandel decided to chase a dream that was only a little less grand. He would find the biological basis of memory. In the mid-1950s, neuroscientists recognized two different kinds of memory: short-term and long-term. Damage to the brain could harm short-term memory without affecting long-term memory. One small region of the brain, known as the hippocampus, appeared to be one of the key regions for allowing us to remember.

Kandel set out to investigate the hippocampus, hoping to find something distinctive about its cells. But nothing earth-shattering emerged. It was possible that what was important for memory was not individual neurons, but how they were arranged in a network and communicated with one another. The millions of neurons in the hippocampus would be too complex to analyze. So he needed to find a simpler system. “I thought, the way you solve a problem in biology is you solve its simplest representation.”

Aplysia’s 20,000 Neurons

The ideal system turned out to be the marine snail Aplysia. It had only 20,000 neurons, as opposed to the 100 billion neurons in the human brain. And its neurons were big—the biggest of any animal, in fact. Kandel was able to study memories in Aplysia by training it. He would nudge the snail before applying a jolt, and it would learn to associate the two sensations. Kandel could then compare the biochemistry of the neurons before and after it had recorded memories of this uncomfortable experience.

Working with Aplysia, Kandel and his colleagues demonstrated that short-term memory formed through the strengthening of connections between neurons. They even identified some of the molecules that made that strengthening possible. For long-term memories, it was necessary to switch on genes in neurons in order to make new proteins, and to make new connections. Once Kandel began to feel confident that he had figured out Aplysia, he moved back to the hippocampus in mice, discovering that many of the same genes and proteins also played an important role in their memories—and, by extension, human memories.

Kandel was recognized for this pioneering work with a Nobel Prize in 2000, but the award hasn’t slowed him down. He is writing a flurry of books, including the newest edition of his doorstop-sized textbook on neuroscience. While studying mice in the 1990s, Kandel began to investigate the molecular changes that occur as the animals get old. Insights from these experiments led to the founding of Memory Pharmaceuticals. The company is now conducting clinical trials of drugs that may boost the cognitive skills of people suffering from Alzheimer’s disease and age-related memory loss.

Rogue Proteins

Meanwhile, Kandel and his postdoctoral fellow, Kausik Si, have opened up an entirely new front in the search for memory: the possibility that it shares something in common with mad cow disease. Researchers have shown that mad cow disease is caused by rogue proteins that fold into abnormal shapes, known as prions. Once prions form, they acquire the ability to force other proteins to assume the same shape. Under some circumstances, this runaway shape-change can cause devastating diseases.

But prions may play a helpful role in organisms. Collaborating with Susan Lindquist at the Whitehead Institute for Biomedical Research, Kandel and Si have found that some of the molecules involved in forming long-term memories show signs of behaving like prions in yeast cells. Kandel and Si propose that memories may be stabilized by self-perpetuating proteins. Individual proteins in neurons may be short-lived, but prions might be able to pass on their functional state to other molecules for years.

It’s a hypothesis that demands more experiments, a prospect that delights Kandel. “This gives me unending pleasure,” said Kandel, “because I can’t think of anything else I’d rather do.”

About the Speaker

Eric Kandel, MD, is University Professor at Columbia University, Kavli Professor and director of the Kavli Institute for Brain Sciences, and senior investigator at the Howard Hughes Medical Institute. He received the Nobel Prize in Physiology or Medicine in 2000 and is a member of the President’s Council of the Academy. He is also the author of In Search of Memory: The Emergence of a New Science of Mind (W. W. Norton).

Exploring the Science and History of Thermodynamics

An old mechanical device, made of steel/iron and wood.

From the boilers that heat water in our homes to the engines in our vehicles that allow us to travel with ease, thermodynamics is an often-invisible part of our everyday lives.

Published May 1, 2006

By John H. Lienhard

Boulton and Watt Rotative Beam Engine – the ‘Lap’ engine. This is the oldest essentially unaltered rotative engine in the world. Built by James Watt in 1788, it incorporates all of his most important steam-engine improvements. The engine was used at Matthew Boulton’s Soho Manufactory in Birmingham, where it drove 43 metal polishing (or ‘lapping’) machines for 70 years. Image courtesy of the Science Museum Group © The Board of Trustees of the Science Museum, London. This image is released under a CC BY-NC-SA 4.0 Licence. No changes made.

The president of France, Sadi Carnot, was stabbed by an anarchist on June 24, 1894. The vein to his liver was severed, and he bled to death in the hospital. This touches our story in two ways:

First, the darkness of venous blood was one of the “tells” that led people to accept the idea of energy conservation, the first law of thermodynamics. Questions about how blood manages human body temperatures had helped people to see that our bodies achieve both work and heating from the chemical energy of food.

Second, President Carnot’s uncle, also Sadi Carnot, and his grandfather, Lazare Carnot, were key players in the struggle to understand the rules that govern heat and work. Their efforts led to what we call the second law of thermodynamics, the idea that no engine can ever be 100 percent efficient, and that all natural processes degrade energy. Yet neither senior Carnot accepted the first law of thermodynamics – the idea of energy conservation.

Black and Phlogiston

Many towns in France have a square, avenue, or street named Carnot, but it is hard to tell which Carnot it honors: Lazare, best known as the “organizer of victory” during the revolutionary wars of the 1790s; his son, Sadi, who died at 36 having published just one work, yet whose name is inextricably linked to the origins of thermodynamics; or Sadi’s nephew, who presided over the French Republic from 1887 until his assassination.

The story of the thermodynamical Carnots best begins about the time of Lazare Carnot’s birth, in 1753. Heat was then regarded as the “subtle fluid” phlogiston – the “substance” released during combustion. The young Scottish chemist Joseph Black was still thinking of heat as wedded to chemical change, but was asking just how much phlogiston it took to increase a material’s temperature one degree.

The Kindred Concept of Latent Heat

Black recognized that the amount must vary from material to material. By this time, both Fahrenheit and Celsius had provided excellent means for measuring the intensity of heat – its temperature. But should one not also have means for measuring its extent – its quantity? Black realized that he could heat a mass of water by transferring energy to it from another material. Since the heat leaving one mass is the same as that entering another, he could determine the heat capacity of any material by heating or cooling a known amount of water.

He also took an interest in the kindred concept of latent heat. At the transition points where a liquid boils or condenses (or a solid melts or freezes) it does so with no change in temperature. To measure the latent heat transferred in, say, melting, Black surrounded a known mass of ice with a known mass of hot water; then he measured how much the water temperature fell as the ice melted away.

These experiments led naturally to the British thermal unit or Btu (the energy needed to raise the temperature of a pound of cold water one degree Fahrenheit).
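In modern notation (the symbols below are a present-day convenience, not Black’s own), the method of mixtures he used amounts to a simple heat balance, with water as the reference substance:

```latex
% Heat lost by the hot water equals heat gained by the sample:
% m = mass, c = specific heat capacity, T_w and T_s = the initial
% water and sample temperatures, T_f = the final shared temperature.
\[
  m_w c_w \,(T_w - T_f) \;=\; m_s c_s \,(T_f - T_s)
  \qquad\Longrightarrow\qquad
  c_s \;=\; \frac{m_w c_w \,(T_w - T_f)}{m_s \,(T_f - T_s)}
\]
% With c_w set to 1 Btu per pound per degree Fahrenheit by definition,
% two masses and three temperature readings fix the sample's heat capacity.
```

Black had no such algebra at hand, but the equality of heat leaving one mass and entering another is exactly the principle he exploited.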

The Rise of Caloric

Black at first thought he was manipulating chemical changes in matter, but he began to see that heat was not some component of matter, as phlogiston was imagined to be. Rather, it flowed in and out of matter. Phlogiston was about to be displaced by the new term caloric. Caloric gained its full definition in 1779 when Black’s student, William Cleghorn, set down rules for its behavior. Cleghorn’s rules helped to make a useful tool of caloric, but they also helped expose its eventual failings.

Cleghorn determined that caloric had to be a subtle invisible fluid. He explained thermal expansion by imagining caloric to be elastic, with particles that repelled each other. Cool bodies attracted caloric to different extents. That explained heat conduction and specific heats. Caloric had to take a latent form as water boiled at 212° F. It was “sensible” when it raised a material’s temperature. Caloric had to have weight because metals gained weight when they were heated.

Today we know that bodies expand as they are heated because their molecules repel one another. We recognize the gain in weight in metals as a chemical change, oxidation.

Not the Whole Story

Black knew Cleghorn’s rules were not the whole story, but he allowed that they correctly explained the experiments of Benjamin Franklin and others. He cautiously called the caloric theory “the most probable of any that I know.” Antoine Lavoisier, the French chemist, also liked the idea and coined the term calorique.

So the caloric theory remained for about seventy years. Not until atoms were far better understood would we realize that heat merely reflected atomic motion. However, in everyday life, we still speak of heat flow, or of bodies holding their heat, as if heat were behaving like a caloric fluid.

In our bones (or more accurately, in our muscles) we have always known that we can create heat by doing work. But how could frictional heating be reconciled with heat as a fluid? Caloric theorists tried to resolve that with increasingly tenuous arguments about how friction or deformation “released” caloric. They looked at frictional heating and saw, not a contradiction, but a phenomenon to be explained in terms of caloric. All the while, it was perfectly clear to everyone that the amount of caloric they could create was limited only by their own stamina.

A New Science of Thermodynamics

So the stage was set for the last act in the drama of writing a new science of thermodynamics. What had to be digested was the fact that thermal energy and mechanical work can be traded back and forth (the essence of the first law of thermodynamics).

Which takes the story back to venous blood. Natural philosophers were beginning to suspect that chemical reactions turned blood from red to dark. But estimates of the extent of chemical heating were too low to account fully for the heat.

Eighteenth-century physiologists had attributed blood heat to friction despite the caloric theory, and they continued to think that friction accounted for blood heat well into the 19th century. Not until 1843 did French chemist Pierre Dulong have accurate enough data to show that chemical heating accounted for virtually all of blood heat. In an ironic twist, Dulong effectively bolstered the lingering caloric theory when he removed frictional heating from physiology.

Everyone who has ever studied the history of heat has struggled with the obviousness of mechanical friction. Yet even the idea that blood is heated by friction had failed to animate an anti-caloric movement. The convertibility of heat and work emerged as a serious rival to caloric only after the cannon-boring experiments made in Bavaria by the American expatriate Benjamin Thompson, Count Rumford. Thompson had become Count Rumford in Bavaria after a rapid and convoluted series of moves that began when he fled colonials who had learned he was spying for the British.

Count Rumford’s Cannon

As a result of tests in which he generated unlimited caloric by boring cannon with blunt bits under water, Rumford was able to state quite plainly: “Anything which an insulated body, or system of bodies, can continue to furnish without limitation cannot possibly be a material substance; and it appears to me to be extremely difficult, if not quite impossible, to form any distinct idea of any thing, capable of being excited and communicated in the manner the Heat was excited and communicated in these experiments, except it be MOTION.”

Rumford continued his advocacy of a mechanical theory of heat after he left Bavaria and returned to England and France. At that point he took up a four-year relationship with Lavoisier’s widow, Marie, which ended in a short and disastrous marriage. It’s quite possible that the scientifically savvy Marie Lavoisier egged him on in his attack on caloric. In any case, before the marriage Rumford crowed: “I think I shall live to drive caloric off the stage as the late M. Lavoisier drove away Phlogiston. What a singular destiny for the wife of two Philosophers!!”

With that kind of rhetoric, we can hardly be surprised that the marriage failed. Rumford did indeed help drive caloric “off the stage” by laying a foundation for the first law of thermodynamics. But that would not happen in his lifetime.

Even after Rumford, no anti-caloric faction arose. And this is where Lazare and Sadi Carnot enter the story.

Lazare Carnot, Revolutionary Leader

From left: Lazare Carnot (1753-1823), Sadi N. L. Carnot (1796-1832), and M. F. Sadi Carnot (1837-1894).

Lazare Carnot was a remarkable figure. He was born in 1753 – the same year as Benjamin Thompson – and was educated in mathematics and military engineering. During his military service, he competed for mathematics prizes, and also had political dealings with the infamous Robespierre. While he was on garrison duty in the 1780s, Lazare Carnot began an intense affair with an aristocrat’s daughter.

Unbeknownst to Carnot, her father arranged her marriage to another aristocrat. Carnot, furious, went to the fiancé and revealed the affair. That broke up the marriage plans, but the father had Carnot thrown in jail for conduct unbecoming an officer and gentleman. This was 1789. The first events of the French Revolution were just taking place, and they led to Carnot being retrieved from prison after only two months.

His life had been pretty static up to that point. Now it began moving very rapidly. He was soon married (to someone else) and was elected to the Assembly. His skills in administering military missions led to his selection in 1793 as one of the 12 men on the Committee of Public Safety and, in 1796, as a member of France’s five-man ruling group, The Directory. They reorganized the government and ran it until Napoleon took power. Carnot served longer than any revolutionary leader except Napoleon.

A Mathematician and Technocrat

Carnot also started the Little Corporal on his rapid ascent to power by appointing him head of the Army of Italy, and Carnot would rally to Napoleon as his Minister of the Interior when he returned from Elba. After Napoleon’s fall, however, the returning monarchy remembered Carnot’s vote to behead Louis XVI, and he spent the rest of his life in exile in Germany.

Lazare Carnot was first a mathematician, yet strongly interested in technology. He advocated active defense in fortification design, including what became known as Carnot walls – the high, heavy, detached walls built in front of forts, with loopholes for the exchange of fire. He befriended the Montgolfier brothers and Robert Fulton, who showed up in France trying to sell submarine designs. Carnot was an excellent violinist, but he thought like a technocrat. He once remarked: “If real mathematicians were to take up economics and apply experimental methods, a new science would be created – a science which would only need to be animated by the love of humanity in order to transform government.”

From Waterwheel to Steam Engine

Lazare Carnot’s attention naturally turned to power production. Imagine a perfect waterwheel, he said, in which no energy is wasted or dissipated. Water is stationary before it enters and stationary at the exit. Then he reached a very important insight: all motions would be completely reversible. Run the perfect waterwheel backward, and it would become the perfect pump.

Here Lazare’s son, Sadi, claimed his inheritance. In 1824, one year after his father died, 28-year-old Sadi Carnot wrote his sole monograph, Reflections on the Motive Power of Heat. In it, he asks us to conceive a perfectly reversible steam engine. If we could build such a machine, we could run it in reverse and pump heat from a low-temperature condenser to a high-temperature boiler. When the first refrigerators appeared 36 years later, they were exactly the reversed heat engines that Sadi Carnot had described.

Sadi “operated” his perfect engine in a thought experiment. In his mental engine, he used an ideal gas instead of steam. When he assumed the not-yet-fully-accepted fact that no engine can possibly act as a perpetual motion machine, he was able to show that the work of one kilogram of air in such an engine depends only upon the temperatures at which the air is heated and cooled.

The Basis for Carnot’s Theorem

That was the basis for Carnot’s Theorem: The motive force of a perfectly reversible engine depends solely upon the high and the low operating temperatures. (Those would be the boiler and condenser temperatures in a steam engine.) This sole dependence on temperature was the first step toward the second law of thermodynamics.

Carnot’s theorem would be true whether the engine used steam, air, or any other fluid. His ideal engine mirrored his father’s perfect waterwheel – a waterwheel that depends solely upon how far water falls through it. Yet neither father nor son accepted the conversion of work into heat or vice versa. (I can find no evidence that Lazare Carnot and his contemporary, Count Rumford, ever communicated.)

Sadi Carnot assumed that caloric was conserved as it passed through an engine, just as water passing through a waterwheel is conserved. Today we know that only part of the heat flowing into a boiler turns into useful work. A good fraction of the heat passes into the condenser. But since Carnot had couched his work in terms of indestructible caloric, the validity of what he said about steam engine performance seemed to bolster the caloric theory.

Clausius and Entropy

This strange turn of affairs meant that the demise of caloric had to await a new generation. Rudolf Clausius, born in 1822, finally synthesized our science of thermodynamics from these seemingly contradictory parts. Clausius showed how Carnot’s theorem and the conservation of energy complemented one another. Energy conservation said that less heat left a steam engine than entered it – the difference being converted into useful work. While that contradicted Carnot, it left Carnot’s theorem intact.

Clausius saw that something was being conserved in Carnot’s perfectly reversible engine – but something other than heat. He called it entropy, and defined it as the heat flow from a body divided by its absolute temperature. Entropy changes in a perfectly reversible engine balance out. As heat flows from the boiler to the steam, the boiler’s entropy is reduced. As it flows into the condenser coolant, the coolant’s entropy increases by the same amount.

No heat flows as steam expands in the cylinder or as condensed water is compressed back to the boiler pressure. Therefore, the entropy of the water or steam changes only when heat flows to and from the condenser and the boiler. The net entropy change is zero in that perfectly reversible engine and its surroundings. With this definition of entropy, Clausius was able to show that everything Sadi Carnot had claimed was true – except the part about heat or caloric being conserved.
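In modern symbols (again a present-day convenience, not Clausius’s original notation), that bookkeeping reads:

```latex
% Entropy change for a heat flow Q at absolute temperature T:
\[
  \Delta S = \frac{Q}{T}
\]
% Over one cycle of a perfectly reversible engine, the boiler (at T_H)
% loses entropy Q_H / T_H while the condenser coolant (at T_C) gains
% Q_C / T_C.  Reversibility means the two exactly balance:
\[
  \Delta S_{\text{net}} \;=\; -\frac{Q_H}{T_H} + \frac{Q_C}{T_C} \;=\; 0
\]
```

Since some of the heat becomes work, Q_C is less than Q_H; the books balance only because the condenser sits at a lower temperature than the boiler.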

Carnot’s Single Error

Once he corrected Carnot’s single error, Clausius could conclude that the efficiency of a perfectly reversible heat engine did indeed depend upon nothing other than the temperatures of the boiler and the condenser, just as Carnot had said it must. Carnot’s belief in caloric denied him the specific use of the word efficiency, but his central deduction remained intact.
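Stated in modern notation, the corrected conclusion is the familiar Carnot efficiency, which follows from energy conservation (the work is the difference between the heat taken in and the heat rejected) together with the entropy balance of the reversible cycle:

```latex
\[
  \eta \;=\; \frac{W}{Q_H}
       \;=\; \frac{Q_H - Q_C}{Q_H}
       \;=\; 1 - \frac{T_C}{T_H}
\]
% T_H and T_C are the absolute boiler and condenser temperatures.
% No engine operating between these two temperatures can do better
% than this reversible limit.
```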

Sadi Carnot died of cholera in 1832, and the image of his fevered blood brings to mind the dark venous blood of his nephew, Lazare’s grandson, its life-giving energy spent. What bizarre convergences these three generations offer – contradiction and resolution, terrorist politics and idealism, maddening complexity and elegant simplicity – and a crucial path along the road to understanding how things work.


References

1. Brown, S. C. 1981. Benjamin Thompson, Count Rumford, MIT Press, Cambridge, MA.

2. Carnot, S. 1897. Réflexions sur la Puissance Motrice du Feu (Reflections on the Motive Power of Heat), R. H. Thurston, Ed. John Wiley, New York.

3. Gillespie, C. C. 1970-1979. The Dictionary of Scientific Biography, Charles Scribner’s Sons, New York.

4. Lienhard, J. H. June 2006. How Invention Begins: Echoes of Old Voices in the Rise of New Machines, Oxford University Press, Oxford, New York. Much of the material in this article, and all the resources used in its making, are in this book.

5. Lienhard, J. H. Engines of Our Ingenuity radio program Web site. www.uh.edu/engines. Short essays on many of the themes of this article can be found and heard here.


About the Author

John H. Lienhard is M. D. Anderson Professor Emeritus of Mechanical Engineering and of History at the University of Houston, and the author and voice of The Engines of Our Ingenuity, a radio program heard nationally on public radio. His latest book is the forthcoming How Invention Begins: Echoes of Old Voices in the Rise of New Machines (Oxford University Press).

The Road to Discovery in 20th Century Science

For author Alan Lightman, reading landmark scientific papers provides a window into the lives and intellectual adventures of the men and women behind the 20th century’s most influential ideas.

Published April 14, 2006

By Karen Hopkin

Otto Loewi. Image courtesy of Institute of Pharmacology, Graz, CC-BY-SA-3.0-DE, via Wikimedia Commons.

The key experiment came to him in a dream. It was 1921 and Otto Loewi, a German pharmacologist, was looking for a way to determine how nerve cells communicate. Was the signal conveyed from one neuron to the next—or from a neuron to a muscle or organ—electrical? Or was it chemical?

The scientist awoke, jotted down his musings on a slip of paper, and went back to sleep. “It occurred to me at six o’clock in the morning that during the night I had written down something most important,” he later recalled, “but I was unable to decipher the scrawl.”

From Dream to Nobel Prize

Fortunately, the idea returned the following night. This time, Loewi must have written more legibly, because he was able to carry out his Nobel Prize-winning experiment that day. He dissected the hearts from two frogs and placed them, still beating, into separate dishes of saline solution. Loewi then stimulated the vagus nerve he’d left attached to the first heart. As expected, the heart slowed its beating.

Now here’s the elegant part. Loewi took some of the solution bathing the first heart and poured it over the second heart, from which he’d stripped the vagus nerve. This heart, too, slowed, proving that the message transmitted by the vagus nerve was chemical in nature. The compound, which Loewi called “Vagusstoff,” turned out to be acetylcholine, a neurotransmitter found widely throughout the nervous system.

For Loewi, the experience suggested that “we should sometimes trust a sudden intuition without too much skepticism.” And for Alan Lightman, physicist and author of The Discoveries: Great Breakthroughs in 20th Century Science, the story illustrates how scientists think, and reminds us that science is a process of exploration carried out by human beings.

Hearing the Scientist’s Voice

Over the years, Lightman has come to realize that scientists rarely read original research papers, perhaps because they view science as being all about the bottom line. “If science is an explanation of the way that the world behaves, then you don’t need to know how you got to that understanding,” says Lightman. “You just need to know the facts, ma’am. And that’s all that matters.”

That view, although valid, is limited, Lightman told an audience at The New York Academy of Sciences (the Academy) on January 31, 2006. “You can read a textbook on the theory of relativity and you can understand relativity,” he says. “But you don’t understand the mind of Einstein. You don’t hear his voice.”

To remedy that loss, Lightman assembled The Discoveries, a handpicked collection of 22 of the greatest ideas and experiments in 20th century science. Lightman asked his scientist pals—physicists, chemists, astronomers, biologists—for recommendations and then winnowed the resulting list down to the 22 stories he presents in the book. For each discovery—from Werner Heisenberg’s enumeration of the uncertainty principle to Barbara McClintock’s revelation that genes can jump from one chromosome to another—Lightman provides a guided tour of the original paper along with an essay on the life and times of the scientists involved.

Measuring the Distance of Stars

Henrietta Leavitt. Image via Wikimedia Commons.

Among Lightman’s favorite tales is that of Henrietta Leavitt’s development of a method for measuring the distance to the stars. Leavitt was hired in the late 1800s by Edward Pickering, director of the Harvard College Observatory, to pore over photographic plates and calculate the positions and brightness of thousands of stars. As one of the cadre of women that formed Pickering’s low-paid battalion of human “computers,” Leavitt was expected to “work, not think,” says Lightman. “But some of the women disobeyed him, and Henrietta Leavitt was one of those.”

Through painstaking measurements, Leavitt uncovered a relationship between the periodicity and luminosity of the Cepheids, a group of stars that brighten and dim in predictable cycles that vary between three and 50 days. Leavitt found that the longer a star’s period, the greater its intrinsic luminosity; comparing that intrinsic brightness with how bright the star appears from Earth then allows one to calculate how far away it lies. Thus the Cepheids, which are scattered throughout the night sky, could serve as cosmic beacons by which astronomers could gauge distances in space.
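The arithmetic behind such a beacon can be sketched in a few lines. This is a minimal illustration in modern astronomical terms, not anything from Leavitt’s own paper: the absolute calibration of the period-luminosity relation came after her work, and the function name and example values below are hypothetical.

```python
def cepheid_distance_parsecs(apparent_mag, absolute_mag):
    """Distance from the astronomer's distance modulus,
    m - M = 5 * log10(d / 10 pc), solved for d.

    Leavitt's period-luminosity relation supplies the intrinsic
    brightness (absolute magnitude M) from a Cepheid's period;
    comparing M with the observed apparent magnitude m gives the
    distance d in parsecs.
    """
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A Cepheid that appears 5 magnitudes fainter than its intrinsic
# brightness lies at 100 parsecs.
print(cepheid_distance_parsecs(5.0, 0.0))  # 100.0
```

The larger the gap between how bright a star really is and how bright it looks, the farther away it must be; that inverse-square logic is what made the Cepheids usable as distance markers.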

Leavitt’s work laid the foundation for many of the astronomical discoveries that would follow, including Hubble’s determination that the universe is expanding. Yet the scientist remained uncelebrated in her lifetime. “Even today there are very few people who’ve heard of her,” notes Lightman. In 1925, a representative of the Swedish Academy of Sciences wrote to Leavitt to propose nominating her for a Nobel Prize. Unfortunately, Leavitt had been dead for three years by then, rendering her ineligible for the honor.

Passion and Obsession

The most satisfying stories, Lightman says, are the ones in which the researchers’ personalities drive the discovery. Take, for example, Arno Penzias and Robert Wilson’s detection of the cosmic background radiation—the persistent hum left over from the Big Bang. “Both men were incredibly meticulous experimentalists,” says Lightman. “If they hadn’t been so anal compulsive about the details then they wouldn’t have been so certain that this residual hiss in their antenna was something worth investigating.”

But, he adds, “they were so fastidious, so picky, and so careful” that they methodically chased after the source of the noise. And after they eliminated every possible thing they could think of, Penzias and Wilson concluded “this was something worth writing about,” says Lightman. Indeed, their almost comically understated paper, entitled “A measurement of excess antenna temperature at 4080 Mc/s,” formed the basis of their 1978 Nobel Prize.

In the end, Lightman himself discovered a thing or two in putting together the book. Although he did not uncover any particular scientific temperament—scientists’ personalities run the regular human gamut—Lightman did find that, regardless of the field in which they worked or how they came to their discoveries, all the scientists he profiled “were really passionate about what they do. All loved to solve puzzles. They all loved to challenge authority. All were independent thinkers. And all were really obsessed with science.”

And though all didn’t necessarily dream about their work, they did labor tirelessly to solve their favorite puzzles, leaving behind them tales that are certainly worth telling.

About the Speaker

Alan Lightman, PhD, is adjunct professor of humanities at the Massachusetts Institute of Technology. As a novelist, essayist, physicist, and lecturer, Lightman is committed to making science accessible and understandable to a wide audience. His writings cover a range of topics dealing with science and the humanities, particularly the relationship between science, art, and literature. Lightman’s short fiction, essays, and reviews have appeared in numerous popular magazines and publications, including Discover, Harper’s, Nature, and The New Yorker.

He is the author of four novels, including the international bestseller Einstein’s Dreams, which was runner-up for the 1994 PEN New England/Boston Globe Winship Award, has been translated into 30 languages, and is the basis for more than two dozen independent theatrical and musical productions. In addition to his novels, Lightman is the author of several science books, drawing on his research in the areas of gravitational theory, accretion disks, stellar dynamics, radiative processes, and relativistic plasmas.

Lightman holds a PhD in theoretical physics from the California Institute of Technology, and an Honorary Doctorate of Letters from Bowdoin College. He served a postdoctoral fellowship at Cornell University before becoming assistant professor of astronomy at Harvard University and research scientist at the Harvard-Smithsonian Center for Astrophysics. In 1989 Lightman joined the faculty of MIT, and in 1995 was appointed John E. Burchard Professor of Humanities, a position he resigned in 2001 to allow more time for his writing.

For his contributions to physics, Lightman was elected fellow of the American Physical Society and the American Association for the Advancement of Science, both in 1989. In 1996 he was elected fellow of the American Academy of Arts and Sciences, and that same year, was recipient of the American Institute of Physics Andrew Gemant Award for linking science to the humanities.

The Science and Cinema of the Brain

The Sloan Foundation gets cerebral at the Sundance Film Festival, delving into the science and psychology of motion pictures.

Published February 5, 2006

By Adrienne J. Burke

Image courtesy of Svitlana via stock.adobe.com.

How is your mind like a movie? Will new technologies enhance the way films convey cognitive experience? How will the ancient human capacity for processing emotions keep pace with rapidly accelerating cognitive experiences?

These and other questions were tackled by a panel of four scientists and three filmmakers recently at the Sundance Film Festival in Park City, Utah. An audience of 250 filmmakers, journalists, and film enthusiasts attended the event called “What’s on Your Mind? The Science and Cinema of the Brain,” hosted by New York’s Alfred P. Sloan Foundation on January 27, to engage in a discussion about how movies can be tools for exploring the mind, for fulfilling the human need to vicariously experience emotion, or for mimicking the editing process in which our brains engage.

Meet the Panel

Moderating the panel was John Underkoffler, an MIT-trained engineer who has consulted as a science and technology advisor on films such as Steven Spielberg’s “Minority Report” and “The Hulk,” in which Nick Nolte plays a mad scientist.

Panelists, in order of appearance, were:

  • Lynn Hershman Leeson, artist and director of the films “Conceiving Ada,” about the contributions of the Countess of Lovelace to early computer science, and “Teknolust,” which won the Sloan Award at the 2002 Hamptons Film Festival;
  • Hal Haberman and Jeremy Passmore, the directing and writing team behind “Special,” screened this year at Sundance, about a man who enters a clinical trial, suffers a breakdown, and comes to believe he is a superhero;
  • Antonio Damasio, a neurologist and neuroscientist who directs the University of Southern California Institute for the Study of the Brain and Creativity;
  • Martha Farah, director of the University of Pennsylvania’s Center for Cognitive Neuroscience; and
  • Kay Jamison, professor of psychiatry at Johns Hopkins University School of Medicine and author of several books on manic depression and bipolar disorder, including her autobiography, An Unquiet Mind.

Storytelling and Technology

Underkoffler kicked off the discussion by pointing out that new technologies such as functional MRI are enabling neuroscientists to see where in the working mind different activities take place, and to address for the first time questions that were previously the domain of philosophers, answerable only through intuitive thought, not scientific analysis. Considering that film is a unique vehicle for conveying states of mind, Underkoffler asked, “Is film privileged as a tool for exploring these ideas of mind and brain?”

*Here is an abridged version of the conversation that followed.*

Leeson: The technology always has some kind of way of altering the way we think. Some people have said that iPods are restructuring the way we create narratives. The advent of multidimensional possibilities with DVDs or other aspects of Internet use has created varying levels of how we communicate and what stories we tell and how we develop ideas of fractured intelligence, identity, and even artificial intelligence as characters and character subplots.

Haberman: For me, technology influences how we make movies, but in terms of changing the actual stories we’re telling and the structure of the stories we’re telling, I don’t think those are much different from the way I would have told the story in a movie if I had been alive to make one 30 years ago.

Passmore: I’d agree with that. The film doesn’t happen on the screen or in the speakers; the film happens when it’s synthesized by your brain when you’re sitting in the audience. Film is inherently the medium by which you experience alternate realities. As the technology evolves, whatever is after cinema is going to become even more so.

Frames in the Mind

Damasio: Film, and before it theatre and literature in general, have historically been means of inquiry into the human mind. Greek theatre was doing things similar to what filmmakers are doing today: using narrative to look into the human mind and human behavior.

There’s something privileged about cinema that is different from the other modalities, [because] it’s probably so far the closest we can have to the kind of subjective experience we have of our own mind. It has to do with the fact that there is a frame in our minds when we’re looking at the world, whether we’re looking at the actual world, or into our minds with our eyes closed. The visual and the auditory are very powerful and are the bread and butter of film making. They bring us much closer to the experience of our own mind.

It’s as if film has [copied] some of the characteristics of the human mind. Editing is something we do all the time when we apportion attention differently to one image or another. We are constantly running an editing machine in our own mind by bringing a character into focus more strongly, by reframing it, or by the duration for which we allow the image of that character to linger.

It’s quite interesting that there are very close connections between the mind process and what our eyes are doing. John Huston might have been the first to point out that you cut on the blink in filmmaking. It’s something that shows film to be very privileged in its connection to brain and mind science, far more so than literature or theatre of any kind I can think of.

Simulating Experiences

Farah: I think the film “Being John Malkovich” illustrates your point well — that through film we can simulate the subjective experience of another person. “Special” does the same thing with this ambiguity between Les’s perception of what is going on and the reality. It’s a seemingly unbridgeable gulf that cognitive neuroscientists are continually trying to bridge, between subjective mental experience and objective observable things.

Haberman: “Being John Malkovich” is interesting also because it shows how you can illustrate things cinematically for a broader audience than scientists. A lot of people probably don’t know what a feedback loop is, but when they walk down the tunnel and there are John Malkoviches everywhere, I think intuitively [the audience] understands what’s happening. It illustrates a scientific principle without feeling like it’s telling or explaining to you.

Redefining Film

Leeson: I think the whole definition of film is radically changing right now, in a way that we haven’t seen in the last hundred years. We’re developing different options for how we look at moving images and therefore the whole definition of what film is and dealing with possibilities for entering virtual realities … We’ve never been able to have these possibilities before.

Jamison: If you’re trying to convey mood or desolation or despair or psychosis, or madness or ecstasy or expansive mood, it’s so much in the acting and directing and writing. The technology is not my bailiwick, but it seems to me that tremendous portrayal has been done so well since the beginning of film. If you’re trying to convey a mood such as desolation or despair, what is it in the technology recently that has made any difference in how well that would come across now to an audience as opposed to 30 years ago?

Underkoffler: Technologically, it seems like nothing. The digital resolution, sound, would have no bearing.

Leeson: Some artists are using PDAs to create environments that do alter moods when one goes there. They create installations and environments that are addressing these very particular issues.

A Wider Domain

Haberman: I think the most obvious example is video games that are so popular right now. That experience couldn’t have happened 10 years ago. They’re playing a narrative. It’s a whole way of watching a story.

Passmore: It’s kind of like antidepressants. It’s our version of “we don’t really know what the long term effects of it will be.”

Leeson: We’ve never had the connectedness that we have now. We’re able to interpret and hear so many points of view that it seems like we’re congealing things beyond a particular culture to a wider domain.

Haberman: But that’s something people have been thinking has been going on for years and years. Even if you look at things people were writing in the 1960s, it was all about connectedness and different cultures coming together. And all the poststructuralist film theory from the 1980s is the same thing: People always want to feel they’re more and more connected with each other and that technology does that, but I’m not convinced it does.

Transhumanists Thinking Like Bats

Underkoffler: I’m also interested in technologically expanded options for what cinema might become. It’s interesting to wonder what else is possible. Peter Greenaway famously and cantankerously said sometime in the early 1990s that film had done nothing but produce illustrated 19th century novels, in the sense that they follow a comprehensible narrative. What else could film do to map our cognitive or mental states onto other, possibly even nonhuman or transhuman, artifacts or situations? Might we elicit some kind of state that is impossible to elicit in any other way?

Farah: Well, it’s like the famous article “What Is It Like to Be a Bat?” by the philosopher Thomas Nagel, who ends up concluding that you can’t know what it’s like to be a bat because you don’t have a bat brain, you don’t have a bat experience.

Underkoffler: And you don’t have a bat body.

Passmore: What we need is a bat filmmaker.

The Essence of the Subjective Experience

Farah: How close could you get to a bat experience by watching a film? I’m going to say not very. If you can’t get the essence of the subjective experience of being a bat by walking around in the world having light impinge on your retina because it’s reflecting off surfaces around us, I don’t see how having light impinge on your retina because it’s coming from a movie screen is going to make a difference.

But one thing that might make a difference is a sort of wacky idea that Ray Kurzweil describes in his new book The Singularity Is Near: When Humans Transcend Biology, which is all about how computer technology and nanotechnology are going to be increasingly incorporated into our bodies, including our central nervous systems. Eventually we’ll gradually transform ourselves into cyborg creatures that won’t much resemble humanity version 1.0, which is what we are, sitting around here today.

One interesting scenario he describes is the use of nanotechnology to penetrate our nervous systems. We would first use nanotechnology to get a highly detailed, three-dimensional image of the state of somebody else’s brain. A nanobot would go into John’s ear and infiltrate his brain and get the picture and then I could inhale them into my brain and they could simulate the same state and thereby let me know what it’s like to be John Underkoffler. And maybe they could do the same thing with a bat.

The Cyborgian Age

Leeson: I think we already are posthuman and we’ve already entered the cyborgian age. More and more symbiosis with technology is altering the way we’re thinking. And as far as projections into the future, I think one that’s very close is how we distribute narratives, not just only on screens in dark rooms, but on computers and through software programs that incorporate moving images and build memory.

Damasio: I think with the Kurzweil scenario, there’s no need for immediate worry. It’s far into the distant future. If the Kurzweil scenario comes to pass it will lead to different relationships within ourselves and with technology, and I don’t know if it will illuminate our experience with nonhuman species, but I don’t think it will affect film as it is in itself. Film could portray all of this, but it doesn’t follow that it will alter it necessarily and change that fundamental technique.

How Movies Nourish Emotions

Passmore: My opinion is that this technology is great, it will help bring new ways of telling stories to people, but I think there’s a reason the narrative structure hasn’t changed over 1,000 years. It’s because we want to experience someone else’s life, someone else’s reality. We want to see a character and view the world through that character’s eyes and I think that’s the basis of narrative and I don’t see that changing anytime soon.

At the end of the day, you have an audience that wants someone they can identify with. There are always going to be people trying to beat their heads against the wall trying new things, but eventually the strength of the narrative in its current form is going to carry on forever.

Damasio: That has a lot to do with our own need to experience emotional states vicariously. There are a lot of things going on in movies, and traditionally in classical novels and theatre, that are a way to experience emotions we would like to have, and sometimes emotions that we would not like to have.

I don’t think anybody would choose to be in situations that cause extreme horror and terror, but the fact is that people flock to movies that have suspense, show fear, and sometimes lead you to experience enormous horror. I think there’s one reason that continues, and that is that we rehearse. In some way we get rid of the need to worry about these things, because we go through the experience knowing that once the lights come up we’re not going to get killed and nothing terrible is going to happen to us.

Our Own Mortality

Passmore: It tricks us into thinking that we’ve dealt with our own mortality.

Damasio: Exactly. We need to have nourishment for our own emotions. And here I would point out biology. There is a big disconnect between the way our brain and our organism processes emotions, and the way our organism processes what people call straight cognition. Cognition is like lightning. Cognition is very rapid, and has the potential to become more rapid.

It’s quite likely that people in the world who are growing up with new technologies are going to have even more rapid cognition. But that doesn’t mean that they’re going to have faster emotional processes, because the emotional processes are very old, in terms of evolution, and they’re probably much more rigid and difficult to change at least over a course of a relatively limited period of time.

Leeson: Do you think there’s a difference in generational cognition and that it’s changing?

Jamison: I would address the emotional side, which is the more ancient side, and that probably is not changing nearly so rapidly. The thinking process probably is, but the moods and the fears and so forth are not changing so rapidly, so it’s a fascinating time in human evolution.

Also read: Music on the Mind: A Neurologist’s Take

The Genius of Quantum Physicist Richard Feynman

Missives from Feynman in Perfectly Reasonable Deviations from the Beaten Track, a book of his letters edited by daughter Michelle Feynman, reveal his genius and wit. What was his contribution to the canon of 20th-century quantum physics?

Published February 3, 2006

By Chris H. Greene

Richard Feynman in 1959. Image via Wikimedia Commons.

“Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers in the preceding generation … Learn from science that you must doubt the experts. As a matter of fact, I can also define science another way: Science is the belief in the ignorance of experts.”
— Richard Feynman, 1981

We all know the stories of Richard Feynman. He was at times a showman and a clown. He expressed irreverence toward prestigious, hoary organizations like the National Academy of Sciences and the Royal Swedish Academy of Sciences. The tragic death of his young wife during the time of the Manhattan Project became familiar to millions through the touching Matthew Broderick film, Infinity. But behind his public persona lay one of the truly independent and innovative minds of the 20th century. Richard Feynman felt an intense, personal need to see physical phenomena in his own terms, and from his own perspectives, using theories that he generated himself.

At the same time, Feynman’s theoretical constructs did not arrive on the planet like a bolt from nowhere. His most important contributions were ideas that were in some sense already “in the wind,” but his way of developing them into consistent theoretical descriptions of nature differed dramatically from methods popular at the time.

Paradoxical Infinities

It may seem surprising, but the theoretical program that resulted in Feynman’s 1965 Nobel Prize (also awarded that year to Julian Schwinger and Sin-Itiro Tomonaga) was not aimed so much at explaining the result of any particular experiment, as it was an attempt to resolve some of the apparently self-contradictory aspects of both classical and quantum electrodynamics theory. If you shake an electron, it radiates light waves, whose electric fields must in turn act back on the electron to lower its energy. But attempts to calculate this “radiative reaction force” led to infinities which were paradoxical and in clear contradiction with experience.

In Feynman’s doctoral thesis work with John Wheeler at Princeton, the two entertained fantastic possibilities in a desperate attempt to solve these paradoxical infinities. One peculiar notion that emerged was that if, in a certain sense, the classical fields are allowed to propagate backward in time, the paradoxes and the infinities appeared to be magically removed.

A variant of this idea survived when Feynman wrote down his quantum mechanical formulation of the problem, a notion he credited to Wheeler for originally tossing out: that the positron, the antiparticle of the electron, can be regarded as an ordinary electron moving backward in time. Surely you’re joking, Mr. Feynman! As fantastic and unbelievable as this idea seems when stated in words, when formulated mathematically it yielded a consistent theoretical framework, free of the troubling infinities.

Moreover, Feynman created a simple way for these complicated calculations to be carried out, which is still used today: first, draw lines that represent electrons, positrons, and photons moving forward and backward in time in different ways that can contribute to the process of interest. Then apply Feynman’s rules for translating each such Feynman diagram into a precise mathematical formula.

Quantum Electrodynamics

One of the most famous applications of Feynman’s quantum electrodynamics was his calculation of a tiny frequency difference between two nearly identical energy levels (2S1/2 and 2P1/2) of the simplest atom, hydrogen. Willis Lamb and Robert C. Retherford had caused a stir in 1947 when they measured this frequency difference to be 1057 million cycles per second (MHz), because the then-accepted theory of Paul Dirac suggested that this difference should be identically zero. The methods for calculating this interaction between an atomic electron and the “vacuum-fluctuating electric fields of free space” gave infinity, a useless result entirely irrelevant to the experiment.

Using the Feynman calculus, however, a result very close to the experimental frequency splitting (the so-called “Lamb shift”) was obtained. In the intervening decades, both experiment and theory have improved, and we now know this Lamb shift experimentally to be 1057.8447 (plus or minus 0.0034) MHz, while theory based on Feynman’s work predicts 1057.839 (plus or minus 0.006) MHz.

Within experimental uncertainties, and within theoretical uncertainties associated with our imperfect understanding of the proton’s nuclear structure, these agree. Nature thus confirms the remarkable synthesis of theoretical ideas into working quantum electrodynamics, achieved by Feynman, as well as by Schwinger and by Tomonaga.
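The quoted numbers invite a quick consistency check. Treating the experimental and theoretical uncertainties as independent and combining them in quadrature (a standard convention, though not stated in the article), the two values agree to within one combined standard deviation. A minimal sketch:

```python
import math

# Lamb shift values quoted in the article, in MHz
experiment, sigma_exp = 1057.8447, 0.0034
theory, sigma_th = 1057.839, 0.006

# Discrepancy between experiment and QED theory
diff = abs(experiment - theory)

# Combine the independent uncertainties in quadrature
sigma_combined = math.sqrt(sigma_exp**2 + sigma_th**2)

print(f"difference          = {diff:.4f} MHz")            # ~0.0057 MHz
print(f"combined uncertainty = {sigma_combined:.4f} MHz")  # ~0.0069 MHz
print("consistent within 1 sigma:", diff < sigma_combined)
```

Since the 0.0057 MHz discrepancy is smaller than the roughly 0.0069 MHz combined uncertainty, experiment and theory are statistically indistinguishable, which is what "within experimental uncertainties ... these agree" means quantitatively.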

Advancing the World of Theoretical Physics

And what are we to take from these strange notions? Are positrons really just electrons moving backward in time? Feynman tended to dismiss such queries as having no more relevance to physics than debates about how many angels fit on the head of a pin. Here is one more example where the equations developed by theoretical physicists, after extensive testing, are the bottom line. Seemingly bizarre philosophical implications, when those equations are stated in words (such as “particles moving backward in time”), do not matter a whit. What matters from the physicist’s perspective is the explanatory and predictive power of the resulting theory.

In the end, Feynman’s work parallels eerily the way the “luminiferous aether” was abandoned as irrelevant, once physicists accepted around the beginning of the 20th century that Maxwell’s equations by themselves adequately describe all classical phenomena of electricity and magnetism. And it is similar to the way Einstein’s equations of relativity, and the peculiar quantum theory, were accepted despite their troubling, almost nonsensical implications for how we think about time, space, and reality. As Niels Bohr wrote and was quoted in Wheeler and Feynman’s 1945 Reviews of Modern Physics article:

We must, therefore, be prepared to find that further advance … will require a still more extensive renunciation of features which we are accustomed to demand of the space-time mode of description.

The world of theoretical physics is better today because Richard Feynman was brave enough to contemplate and develop ideas that required such a renunciation.

Also read: The Challenge of Quantum Error Correction

How the Maillard Reaction is Linked to Disease

Scientists who study the chemistry of how food is cooked are exploring promising therapies to treat an array of diseases, from diabetes to Alzheimer’s.

Published January 20, 2006

By Jill Pope

Image courtesy of bernardbodo via stock.adobe.com.

It’s a chemical reaction central to daily life: the Maillard reaction browns our toast and makes roasted coffee smell wonderful. Oh yes, and it’s going on in our bodies all the time.

What happens when sugars and proteins are heated was first described in 1912, and it has intrigued food scientists for 50 years. Over the last 20 years, biomedical scientists have become fascinated as well. We now know that Maillard chemistry plays a role not just in normal aging, but also in a staggering array of age-related chronic conditions, among them atherosclerosis, diabetes, cardiovascular disease, and neurodegenerative diseases such as Alzheimer’s.

How are cooking and body processes related? Susan Thorpe, a prominent biochemist in the Maillard field who is based at the University of South Carolina, explains, “Much as we don’t like to think of our bodies this way, we are protein, sugar, and fat, and we are cooking at a low temperature.”

A Visionary’s Paper is Ignored

Louis Camille Maillard was a French physician and chemist who in 1912 wrote a paper, impressive in hindsight, describing a nonenzymatic browning reaction (that is, one not jump-started by enzymes) that occurred when he heated amino acids with sugars. His work suggested that the reaction might take place in the human body, and he even imagined the critical role we now know it plays in diabetes. At the time, the paper caused no stir.

It wasn’t until the late 1940s that food scientists became interested. For the next 25 years, they learned how the reaction improves the aroma, flavor, and texture of cooked foods. They also put some effort into finding ways to prevent this chemistry from causing undesirable changes in colors and flavors in foods that had to be stored a long time, such as powdered eggs and instant potatoes.

Then, in 1969, the reaction was recognized in the human body. Samuel Rahbar, now at the City of Hope National Medical Center and Beckman Research Institute, found while searching for a genetic marker for diabetes that his diabetic patients had glucose attached to their hemoglobin (the protein that carries oxygen). It had previously been assumed that the Maillard reaction required higher temperatures than those found in vivo.

Rahbar’s discovery of glycated hemoglobin had a major impact on diabetes management, giving doctors a better screening tool and patients a more reliable way to monitor blood sugar. It also opened dozens of research avenues. Once it was shown, in the late 1970s, that the reaction happened in all plasma proteins, biological research in this area took off.

Case in Point: A Lens Protein

The Maillard reaction is really a series of reactions. As an example, consider what happens when an eye protein encounters sugar. A long-lived protein, such as a lens protein, condenses with a sugar in a process called glycation. In subsequent reactions, the damaged lens protein is further abused by sugar as well as by oxidants (free radicals). When the chemistry is done, our lens protein has permanent glucose structures attached to it and appears brown under UV light. And it has a new name: advanced glycation endproduct (AGE).

AGEs accumulate with age and in age-related diseases. Many scientists believe they cause inflammation, loss of flexibility in tissues and organs, and ultimately, impaired function. In the case of our lens protein, the result could be cataracts.

Even the healthiest among us are accumulating AGEs in our tissues as we get older. But because of their elevated blood sugar, diabetic people accumulate AGEs much earlier in life than nondiabetic people. This buildup is seen in kidney disease, eye damage, and nerve damage—suggesting that AGEs are major contributors to diabetes complications. Tissues that depend on flexibility, such as the heart and blood vessels, are also affected.

Not everyone agrees with the theory that damaged protein accumulation causes aging and disease. It may turn out that AGEs simply correlate highly with life-threatening diseases in some other way. But debating that question is less important to many than stopping the damaging cycle.

Stop the Chemistry, I Want to Get Off

In light of the havoc Maillard chemistry can wreak in the body, there is considerable interest in finding ways to stop it, or at least slow it down.

Several Maillard inhibitors have been developed. One is Biostratum’s Pyridorin (pyridoxamine), a member of the vitamin B6 family that blocks AGE formation. Pyridorin is being tested in clinical trials for the treatment of diabetic kidney disease. Three Phase II clinical trials have been completed, and Phase III trials are planned. In the studies, scientists measured patients’ levels of serum creatinine, a widely accepted indicator of impaired kidney function. Treatment with Pyridorin significantly decreased the rate at which creatinine levels rose.

Another inhibitor now in preclinical (animal) trials at Biostratum, BST-4997, works by intervening at a different point, but appears to be even more effective. These drugs offer the potential to slow the progress of kidney disease, giving people more dialysis-free years.

Crosslinks: Reversing the Irreversible?

AGEs are notorious for forming protein crosslinks—becoming closely networked and resistant to being broken. Pimagedine (aminoguanidine, developed by Alteon) is a third kind of Maillard inhibitor for diabetic kidney disease; it works by blocking the formation of protein crosslinks. The drug has been shown effective in clinical trials thus far, significantly reducing the amount of protein patients excreted in their urine.

Another substance moving through clinical trials may cause scientists to rethink AGEs entirely. Alagebrium (also by Alteon), the first AGE breaker, appears to work by cutting these protein crosslinks, and is being tested in patients with heart disease. Studies presented at the American Heart Association Scientific Session in November 2005 reported that it caused significant reduction in the mass of the left heart ventricle, a decrease in stiffening of the arteries, and improved function of the lining of the blood vessels. Alagebrium, and other crosslink breakers that may follow it, hold out a previously unimagined possibility—restoring function and flexibility to tissues and organs that have already sustained damage.

Treating Alzheimer’s by Blocking a Receptor

Alzheimer’s sufferers have been found to have three times the amount of AGEs in their brains as do healthy counterparts of the same age. But there is hope: a number of animal studies are looking at ways to treat Alzheimer’s by blocking the receptor for AGE (RAGE). Research suggests that the receptors that bind AGEs may also bind the proteins that accumulate in Alzheimer’s.

If the AGE receptor can be blocked, the accumulation of “senile plaques” in animal brains can also be limited. In one clever ploy, Yasuhiko Yamamoto of Kanazawa University, Japan and coworkers created a decoy receptor, called sRAGE, which they found trapped AGEs and competed with destructive RAGE-AGE communication.

A Role for Diet

What impact do browned foods have on our health? Maillard reaction products are mainly absorbed in the small intestine, and about 10% of dietary AGEs make it into the bloodstream. According to Jennifer Ames, professor of human nutrition and health at the School of Biological and Food Sciences, Queen’s University, Belfast, Northern Ireland, most of the work on dietary AGEs has looked at how they affect atherosclerosis. Results suggest that a low-AGE diet is better for health—”especially for people who have, or who are at risk of developing, diseases related to inflammatory processes,” she says.

In light of these and other findings, Helen Vlassara of the Mount Sinai School of Medicine suggests that people reconsider the AGE content of common foods. Foods higher in fat and protein, such as meat and cheese, will give higher AGE levels. And in general, cooking at a higher temperature creates higher levels of AGEs. Sautéing, steaming, and poaching create fewer Maillard products than frying, grilling, and broiling.

Because oxidants contribute to Maillard chemistry, a diet rich in antioxidants may protect against disease. Toshihiko Osawa of Nagoya University and Yoji Kato of the University of Hyogo have found that antioxidative foods, such as turmeric, can prevent diabetic complications in rats. They also examined the role of glutathione (GSH), an antioxidant found in broccoli and pork, and found that it prevented diabetic kidney and nerve disease.

Eat Less, Live More

Like aging, Maillard chemistry seems inevitable. Drugs may soon help counter the damage. And, to the extent that we can fight it, eating more antioxidant-rich foods, and fewer char-broiled steaks, may help. But, at least in animal studies, only one thing has been shown to extend life—eating less. Most of us in America are eating too much, and an epidemic of type II diabetes is part of the price we pay. The best advice may sound familiar: eat a balanced diet, with lots of fresh fruits and vegetables, and don’t overeat.

Learn more about the Academy’s Nutrition Science program.


About the Author

Jill Pope writes about science and policy issues. She served as Senior Editor for The Cutting Edge: An Encyclopedia of Advanced Technologies (Oxford University Press, 2000).