The Hubble Space Telescope gives us intricate, colorful views of objects in outer space that are otherwise invisible to the naked eye. However, the future of this space mission is uncertain.
During its 14 years in orbit, the Hubble Space Telescope has unveiled some memorable images of the heavens. But one of its latest pictures, released earlier this year to international fanfare, may become the telescope’s enduring legacy.
Called the Hubble Ultra Deep Field (UDF), the photograph is the most sensitive view of the distant universe ever taken. It reveals thousands of galaxies spread throughout a tiny patch of the sky, arrayed like gems against the black velvet of space. The photo, which took nearly 300 hours to produce, is sharp even by Hubble’s standards. “It’s a magnificent, beautiful, stunning image,” says astrophysicist Michael Shara of the American Museum of Natural History (AMNH) in New York.
Indeed, the UDF is so scientifically rich that Shara and other astronomers in the area decided to share their research efforts with thousands of people. For six days in March, researchers and students from AMNH, Columbia University, and Stony Brook University pored over the image in front of fascinated onlookers beneath the white sphere of the Hayden Planetarium. The teams worked at banks of computers, fielded questions, and even produced a daily video that aired on a giant screen in Times Square.
“This was a unique opportunity to share the excitement of scientific research with the general public,” says organizer Kenneth Lanzetta, an astronomer at Stony Brook.
How Galaxies Change and Grow
Hubble’s leaders conceived of the UDF as a way to improve upon the original Hubble Deep Field, a 10-day-long photographic exposure of thousands of remote galaxies. That 1995 image and a 1998 follow-up opened startling windows into the depths of the universe, where galaxies were a fraction of their current age. The two Deep Fields launched a new era of research into how galaxies change and grow over time. But the images also tantalized astronomers with hints of the true original building blocks of modern galaxies, which lay beyond Hubble’s grasp in the 1990s.
Now, the telescope can detect those objects thanks to a powerful new tool: the Advanced Camera for Surveys. Astronauts installed the camera in 2002 during the space shuttle’s last service call to Hubble. It gives the telescope a crisper focus for photography and a wider field of view. The patch of sky captured in the UDF is still small by the standards of the human eye – just 1/67th the size of the full moon – but it’s big enough to display about 10,000 galaxies of all shapes and sizes, with unsurpassed clarity. “The quality of the data is better than anything we’ve ever done with Hubble,” says Steven Beckwith, director of the Space Telescope Science Institute in Baltimore, Maryland.
The new camera is sensitive to near-infrared light, just past the reddest wavelengths that our eyes perceive. As the universe expands, light from distant galaxies is stretched to longer, redder wavelengths. For extremely remote objects, most of the visible light shifts into infrared radiation, which we know as “heat.” Hubble’s new camera, along with a revitalized instrument that detects infrared light exclusively, endowed the telescope with the vision it needed to see galaxies near the fringes of the observable universe.
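This wavelength stretch is usually quoted as a redshift, z; the relation below is standard background included for context, not drawn from the Hubble team’s statements:

\[
1 + z \;=\; \frac{\lambda_{\text{observed}}}{\lambda_{\text{emitted}}}
\]

At a redshift of z = 7, for instance, a galaxy’s ultraviolet Lyman-alpha emission near 122 nanometers arrives stretched by a factor of eight, to roughly 970 nanometers – squarely in the near-infrared range the new instruments were built to capture.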
The Universe: 13.7 Billion Years Old
Hundreds of galaxies in the UDF shine most brightly in the infrared, including some faint objects that may have existed just 800 million years after the Big Bang. Astronomers believe the universe is about 13.7 billion years old today. Seeing such distant galaxies is like looking at pictures from the childhood album of a 50-year-old adult – all the way back to age 3.
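The analogy holds up arithmetically (an illustrative check using the article’s own figures):

\[
\frac{0.8\ \text{billion years}}{13.7\ \text{billion years}} \;\approx\; 0.06 \;\approx\; \frac{3\ \text{years}}{50\ \text{years}}
\]

Light that left a galaxy 800 million years after the Big Bang shows the universe at about 6 percent of its present age, just as a snapshot taken at age 3 captures about 6 percent of a 50-year life.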
Astronomers who are trying to devise a coherent picture of how galaxies assembled will focus most intently on the faint red objects in the UDF. Even a glance at the image reveals that these galaxies look nothing like the gorgeous spirals and other large metropolises of stars we see today. “The objects really are quite irregular,” says Beckwith. “We’re clearly seeing back to a time when the universe was chaotic. We see a variety of unusual shapes that we can’t identify right now.”
These blotches were scrutinized by the Stony Brook team at the AMNH public event. Prior to the release of the UDF, Lanzetta expected the image might reveal galaxies shining a mere 500 million years after the Big Bang. However, after six days of working nearly round the clock to analyze the light from a whopping 8,172 galaxies, the team determined that none of them was quite so far away. Still, knowing the distances to that many objects – and studying their shapes – will help astronomers figure out how galaxy collisions and waves of star-birth transformed ragged shreds of stars into the grand galaxies of today.
Distant Object Detected
The Columbia researchers, led by astronomer Arlin Crotts, scoured the UDF for changing flares of light. The Hubble team assembled the photograph from a series of shorter exposures over a four-month period. If any star in a distant galaxy exploded as a supernova during that time, it might appear as a brighter pinprick of light in some of the exposures. Crotts anticipated that his group might see a half-dozen such flares, but they found none – an outcome that came as a mild surprise.
Meanwhile, Shara and his AMNH colleagues examined the images for evidence of moving objects. Specifically, the team searched for nearby stars that move quickly through space – so quickly that their motion would show up during the four months of UDF exposures. Such stars would have to be close to our sun – perhaps within 10 or 20 light-years – but so faint that previous surveys had not detected them. After six days of intense hunting, says Shara, “We have exactly one candidate. Talk about a needle in a haystack!” The team hopes to confirm the object – and learn its nature – with further research, including another view by Hubble within a year.
The combined results of the three teams didn’t make any headlines – and that was just fine, the participants agreed. The raw process of research was on full view at AMNH, and the scientists could not have been more pleased.
“The single most important reason was to demystify, insofar as we could, astronomical research,” Shara says. “Most of the public still has this view of astronomers as old pipe-smoking men sitting at a telescope on a dark night and peering through the eyepiece. We wanted to show that astronomy is done by living, breathing people, many of them quite young, almost half of them female, and we don’t know the answers.”
Working to Save Hubble
One particular issue rang out during the public discussions, Crotts notes: “Saving Hubble was one of the major issues on people’s minds.” Most visitors were aware that in January, NASA announced it would no longer fly the space shuttle to maintain and upgrade the telescope. Without another such mission, Hubble probably will expire by 2006 or 2007 – several years earlier than astronomers had planned. Although NASA administrator Sean O’Keefe insists that the decision is based on the safety of the astronauts, scientists and science lovers have reacted strongly and negatively.
Regardless of Hubble’s fate, the UDF image will persist as one of the telescope’s profound contributions to science. And in New York, thousands of people watched as astronomers labored to comprehend our cosmic ancestry – encoded within swirls of light on their glowing computer screens.
Theoretical physicist and Columbia University professor Brian Greene delves into the intense rivalry between loop quantum gravity and string theory, and how it ties to Einstein.
As philosopher Paul Feyerabend once noted, science moves more rapidly when there are several competing approaches to a problem. Much of the excitement in theoretical physics today surrounds the intense rivalry between loop quantum gravity (LQG) and string theory.
Both theories aspire to achieve the Holy Grail of modern physics: the unification of general relativity and quantum mechanics. They both have their champions and detractors. Both have had difficulty finding experimental verification of their predictions, yet both claim to be on the verge of discovering results that will do just that.
Loop quantum gravity’s best known proponent is Lee Smolin, author of Three Roads to Quantum Gravity, and a research physicist at Perimeter Institute for Theoretical Physics in Waterloo, Canada. String theory’s most high-profile current spokesperson is Brian Greene.
Professor of Physics and Mathematics at Columbia University, Greene came to The New York Academy of Sciences (the Academy) on Oct. 16, 2003, for an informal conversation as part of his whirlwind tour to promote the NOVA series based on The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory, his bestselling 1999 book. Four years in the making and with a budget of $3.5 million, “The Elegant Universe” premiered on PBS on Tuesday, Oct. 28, 2003, with a two-hour opening comprising “Einstein’s Dream” and “String’s the Thing,” and concluded with a one-hour program, “Welcome to the 11th Dimension,” on November 4.
String Theory’s Core
Much as he did in his first NOVA segment, Greene began his talk by briefly describing the history of the conflict: how Einstein revolutionized our worldview by conceiving of space and time as a continuum, spacetime; how scientists in the 1920s and ‘30s invented quantum mechanics to describe the microscopic properties of the universe; and how these two radical worldviews clashed.
String theory promises to reconcile these two views of spacetime – Einstein’s vast fabric and the jittery landscape of quantum mechanics. The NOVA animations made clear just how visually captivating this story is. One showed an “elevator of the imagination” traveling to floors smaller by 10 orders of magnitude to illustrate the transition from the placid Einsteinian realm of large things down to the turbulent, frenetic world of atoms, electrons, protons and quarks.
It is at this lowest level of matter that we find the core contribution of string theory. At the smallest of scales, inside a quark, lies not a point but a fundamentally extended object that looks like a string. A vibrating loop of string. At the microscopic level the world is made up of music, notes, resonant vibrating frequencies. This is the heart of string theory.
The Mechanism for Reconciling Relativity
What enables these strings to become the mechanism for reconciling relativity and the laws of the microworld is that these strings have size. In particle physics, point particles have no size at all. In principle you could measure and probe at any scale. But if particles have length, then it makes no sense to believe you can probe into areas that are smaller than the length of the particle itself.
String theory posits that at the smallest of scales, the smallest elements do have a defined length, what is called the Planck length, “a millionth of a billionth of a billionth of a billionth of a centimeter” (10⁻³³ centimeter). For analogous reasons loop quantum gravity also posits a smallest unit of space. Its minimum volume is the cube of the Planck length.
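For reference, the Planck length is built from the fundamental constants governing gravity, quantum mechanics and relativity – a standard expression, included here for context rather than taken from the talk:

\[
\ell_{P} \;=\; \sqrt{\frac{\hbar G}{c^{3}}} \;\approx\; 1.6 \times 10^{-33}\ \text{cm},
\]

where \(\hbar\) is the reduced Planck constant, \(G\) is Newton’s gravitational constant, and \(c\) is the speed of light.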
In one of the most memorable animations from the show, we travel again to the lowest, most turbulent level of the microscopic world, the world of point particles. But much as a landscape smooths out when a map is zoomed out to a larger scale, once we define the lowest level as one in which the smallest elements have a definite size, the spatial grid rises above the turbulence and the jitters calm down.
Worlds of Dimensions
One of the most provocative components of string theory is its insistence that the world has more than three spatial dimensions. String theory calls for six or seven extra dimensions. In the television series, Greene focuses more on how there could be these extra dimensions, rather than on why they need to exist.
Greene’s book acknowledges that the need for the extra dimensions is primarily driven by the mathematics behind string theory. In order for the negative probabilities of the quantum mechanical calculations to cancel out, the strings need to vibrate in nine independent spatial directions. Of course, these are not dimensions as we know them. Greene instructs us to “imagine that these extra dimensions come not uniformly large, like the ones we can see with our eyes, but small, tightly curled up. So small we just can’t see them.”
If any aspect of string theory is ripe for visual exploitation via animation, this is it. Many readers of the book will enjoy the series if only to get the chance to see what animated Calabi-Yau manifolds look like. In 1984 a number of string physicists identified the Calabi-Yau class of six-dimensional shapes as meeting the conditions the equations for the extra dimensions require.
The manifolds consist of overlapping and entwined doughnut shapes, each of which represents a separate dimension. If we zoom again into the microscopic world we can envision encountering curled up dimensions that look much like these Calabi-Yau manifolds – “simple rotating structures” in Greene’s description.
“That’s the basic idea of string theory. In a nutshell it requires the world to have more dimensions than we are familiar with.”
In an extensive question-and-answer exchange after his talk, Brian Greene amplified his ideas.
Experimental Verification
Elegant as it is, string theory has roused the ire of some physicists because it has thus far resisted being proven true or false by experiment. Familiar with this complaint, Greene described what he considered several promising developments.
For a long time string theorists had thought that the extra dimensions must be as small as the Planck length and, therefore, beyond detectability. In the last few years work has been done suggesting that some dimensions might be as big as 10⁻² cm. “That’s a size you can almost see with your eyes.” We haven’t seen them because the only force that could penetrate into these extra dimensions is gravity.
Unfortunately, gravity is so feeble that its effects at these scales fall many powers of 10 below what can currently be probed in physics laboratories. However, experiments planned at CERN in 2007 may let us see evidence of the extra dimensions by observing how gravity behaves within them.
Greene’s current research involves looking for signatures of string theory in astronomical data. Proponents of loop quantum gravity are also looking to the stars for confirmation of their calculations. The Gamma-ray Large Area Space Telescope (GLAST), due to be launched in 2006, should be sensitive enough to detect, in the light from gamma-ray bursts, the verification LQG researchers seek.
M-Theory
In recent years string theory has undergone a transformation. Much of this dates from a milestone event, the “Strings 1995” conference at the University of Southern California that marked the beginning of the Second Superstring Revolution.
It was there that Edward Witten delivered his startling finding that string theory requires 11 dimensions and that what had until then been viewed as five competing superstring theories are really all part of one superstring framework, which he called “M-Theory.”
What the “M” refers to is not clear. “Mysterious” is one proposed meaning, since how the framework relates the theories to each other has not been defined. M-theory also “inflates” strings into two-dimensional “branes” (from “membranes”) that could contain entire alternate universes.
And most important to its critics from LQG, M-theory is expected, as it develops, to enable string theory to be “background independent,” like LQG, so that it does not need to rely on a fixed, pre-existing background of spacetime.
The Next Book
For all of their competitiveness, researchers in both string theory and LQG frequently speculate that they could be working on different paths toward what may be one unified theory. This may explain why Greene’s next book, The Fabric of the Cosmos: Space, Time, and the Texture of Reality, due from Knopf in February 2004, seems designed to encompass a range of theoretical possibilities.
He noted that his coverage of space and time in The Elegant Universe addressed only what was needed as background for his explanation of string theory. Many other aspects he left uncovered.
In his new book they get center stage as he probes how our fundamental ideas of space and time have changed in their nature and importance over the past century. If Greene’s knack for engaging broad audiences holds true, it will undoubtedly expand the ranks and enjoyment of those eager to follow the lively scramble to the ultimate Theory of Everything.
From local sourcing of materials to utilizing renewable energy, the sustainable building design revolution has transformed the way that architects and engineers approach construction.
As environmental awareness spreads around the globe, the so-called “greening” of architecture has ignited a revolution in the design and construction of buildings, according to one of the nation’s leading experts in the field.
“The concept of sustainable building design has led to a new architectural vocabulary – known as ‘green buildings’ – that is transforming the way we act and think about the environment and the buildings we construct,” said Hillary Brown, in a talk titled “Visioning Green: Advances in High-Performance Sustainable Building Design.” Brown spoke at an August 26, 2003, meeting cosponsored by The New York Academy of Sciences (the Academy) and the Bard Center for Environmental Policy.
Former director of Sustainable Design for the New York City Department of Design and Construction, Brown now heads her own firm, New Civic Works, which specializes in helping local government, universities and the nonprofit sector incorporate sustainable design practices into their policies, programs, and operations.
“These new practices are beginning to catalyze not only the construction industry, but also the wider society” as people learn about the issues at stake, Brown said. “All sectors are mobilizing around sustainable building design.”
Paying Attention to Nature
“The increased recognition that buildings can contribute directly toward a healthy environment in which to live and work,” Brown said, provides the context for the architectural revolution.
Brown presented a blueprint for “green principles” in new buildings, including climate-responsive designs and an understanding of the relationship between the building and its location. “In this view, water, vegetation and climate are taken into account in the design of the building, with special attention paid to how the building’s infrastructure affects its surroundings,” she said.
“Nature and natural processes should be made visible in green buildings,” Brown added, noting that the form and shape of the building should take into account the interactions between the occupants and the building itself.
“Technology often displaces our connection to the natural world,” Brown contended. Green buildings, she pointed out, “help to improve a sense of health and well being as occupants are put in touch with their natural surroundings.”
According to Brown, studies show that “people are more comfortable in green buildings than conventional buildings.” She asserted that four factors have a substantive impact on performance and mood inside buildings: air quality, thermal comfort, amount of natural light, and appropriate acoustics.
Minimizing Waste of Resources
In addition to aesthetics and comfort, green buildings respond to ecological concerns by “minimizing the impact of human activity in lowering the levels of pollution during both the construction and maintenance of the building,” Brown said.
“Conventional methods of building design and construction lead to depletion of natural resources,” she added, “especially because carbon-based fuels are used extensively during construction and in the operation of the buildings’ infrastructure after completion. Green buildings attempt to minimize the waste of water, energy, and building materials,” Brown said. Within the construction industry, architects and builders have set goals to substantially reduce emission of carbon dioxide during construction and operation of buildings.
Brown noted that green buildings employ the use of daylight in combination with high-efficiency lighting. Use of horizontal “light shelves” and other well-designed building apertures, for example, can reflect daylight deeper into buildings, displacing the need for artificial lighting. Other passive comfort-control techniques include the use of natural ventilation and an improved building envelope to reduce dependence on mechanical systems. Still other green buildings are cooled/heated by utilizing the constant ground temperatures of the earth as a heat source or heat sink.
Designers of green buildings also seek to reduce or eliminate construction materials that contain unstable chemical compounds that, as they cure over time, are released into the environment – such as adhesives, sealants and artificial surfaces. “We need to think about eliminating these noxious chemicals from the building palette,” Brown said.
In addition, Brown said that architects are paying more attention to recycled and local materials in construction. “The selection of local and regional materials means a lower consumption of transportation energy during construction,” she noted. Brown also encouraged the increased use of renewable materials, woods – such as bamboo – or other wood products that are “certified” grown in renewable forests.
Improving Public Spaces
Although architects and builders have been slow to integrate “green principles” into most residential blueprints, Brown cited their incorporation into public buildings such as courthouses, libraries, performance spaces, and schools.
She cited a study from California that revealed elementary students in classrooms with the most daylight showed a 21% improvement in learning rates when compared to students with the least amount of daylight in their classrooms.
For businesses, Brown said improved air quality would likely reduce absenteeism from asthma and other respiratory diseases, lower other health-related costs, and generally improve productivity in the workplace. Although she acknowledged that the average well-designed green building might have a slightly higher initial construction cost – up to 3% – she stressed that the long-term savings in operating expenditures can be 33% or more.
Brown also said urban streetscapes should employ sustainable design practices, including efforts to reduce the “heat-island effect” with increased planting of trees and use of light- or heat-reflective materials in sidewalks, streets, and roofing membranes. In addition, she cited opportunities for improved water resource management by recycling once-used tap water from sinks for irrigation and cleaning, and by installing green roofs or other systems that harvest usable storm water from the roofs of buildings.
‘Civic Environmentalism’
Brown noted that although there are still some barriers to incorporating green principles in construction – such as increased costs, the difficulties of apportioning savings between tenant and developer, and various regulatory disincentives – the federal government, several states, and many municipalities are beginning to demand or incentivize green buildings. She predicted that building and zoning codes would eventually more adequately reflect the interest in green buildings as society embraces what she called “civic environmentalism.”
Picture a world economy built around the profitable production of non-polluting and endlessly renewable energy supplies – a global society freed from the shackles of dependence on oil, coal and other carbon-based fossil fuels.
Such a scenario has long been the vision – or, to skeptics, the dream – of Dr. Amory B. Lovins, co-founder and CEO of the Rocky Mountain Institute (RMI), whose widely published views on environmental and energy-related topics have gained him global recognition for more than three decades. Lovins described his “Roadmap to the Hydrogen Economy” to a crowded meeting room of both skeptics and believers at the Environmental Science Forum held September 4, 2003, at The New York Academy of Sciences (the Academy).
Hypercar® vehicles – ultralight, ultra-low-drag, and originally based on hybrid gasoline-electric designs – were invented at RMI in 1991 and are the most attention-getting route to energy efficiency on Lovins’s roadmap. At that time, hybrid-electric propulsion, invented by Dr. Ferdinand Porsche in 1900, was still thought to be decades away, but Honda introduced the hybrid Insight in the United States in 1999, and Toyota debuted its hybrid Prius in the U.S. in 2000. DaimlerChrysler, Ford Motor Company, and General Motors have all announced hybrid vehicles for release in the next year or two.
Eliminating the Need for Internal Combustion Engines
Dr. Amory B. Lovins
Today, Lovins told the gathering, hydrogen could be used in combination with advanced fuel-cell technology to eliminate the internal combustion engine altogether, powering a new generation of ultra-high-efficiency hypercar-class vehicles. And, he added, hydrogen-powered fuel cells that can provide economical on-site electricity to business and residential buildings can set the hydrogen economy in motion – greatly accelerating the hydrogen transition that has led Honda and Toyota already to market early (and correspondingly expensive) hydrogen-fuel-cell cars, with three more automakers set to follow suit by 2005 and another six by 2010.
“U.S. energy needs can be met from North American energy sources, including local ones,” he said, “providing greater security.” Hydrogen production just from available windy lands in the Dakotas, he said, could fuel all U.S. highway vehicles at hypercar-like levels of efficiency.
Along with a more secure domestic energy supply, moreover, Lovins said the transition from a fossil fuel-based to a hydrogen-based economy would offer a “cleaner, safer and cheaper fuel choice” that could be very profitable for both the oil and auto industries. “Hydrogen-ready vehicles can revitalize Detroit,” Lovins said.
Molecular hydrogen (H2) – a transparent, colorless, odorless and nontoxic gas – is the lightest-weight element and molecule. One kilogram of H2 packs the same energy content as a gallon of gasoline weighing almost three times as much. It’s far bulkier, too, but that may be acceptable in uses where weight matters more than bulk, such as efficient cars.
And hydrogen is in abundant supply as it may be readily derived from water, as well as from natural gas or other forms of energy.
An Energy Carrier, Not an Energy Source
Unlike crude oil or coal, however, hydrogen is not an energy source. Rather, it is an energy carrier, like electricity and gasoline, which is derived from an energy source – and then can be transported.
“Hydrogen is the most versatile energy carrier,” Lovins said. “It can be made from practically anything and used for any service. And it can be readily stored in large amounts.”
Hydrogen is almost never found in isolation, however, but must be liberated – from water by electrolysis, which requires electricity; from hydrocarbons or carbohydrates using thermocatalytic reformers (which typically extract part of the hydrogen from water); or by other currently experimental methods.
About 8% of the natural gas produced in the U.S. is now used to make 95% of America’s industrial H2, Lovins said. Only 1% is made by electrolysis, because that’s uneconomic unless the electricity is extremely cheap. And less than 1% of hydrogen is delivered in super-cold liquid form, mainly for space rockets, because liquefaction too is very costly. But, Lovins noted, there’s already a major global H2 industry, making one-fourth as much annual volume of H2 gas as the natural-gas industry produces, and already demonstrating safe, economical production, distribution and use.
Proper Handling of a “Hazardous Material”
A highly concentrated energy carrier, hydrogen is by definition a hazardous material. But because H2 burns in “a turbulent diffusion flame – it won’t explode in free air,” Lovins said the gas consumes itself rapidly when it ignites, rising up away from people on the ground because it’s extremely buoyant and diffusive. Its clear flame, unlike hydrocarbon flames, can’t sear victims at a distance by radiated infrared.
As a result, he said, nobody aboard the Hindenburg (a hydrogen dirigible whose 1937 flammable-canopy and diesel-oil fire killed 35% of those aboard) was killed by the hydrogen fire. The modern view, he reported, is that hydrogen is either comparable to or less hazardous than common existing fuels, such as gasoline, bottled gas and natural gas.
News media interest in the potential of hydrogen-fueled electric vehicles run by emission-free fuel cells was piqued after President George W. Bush mentioned the technology in his State of the Union address this year. But Lovins noted that evaluating the technology requires an understanding of unfamiliar terms and concepts that cut across disciplines, often confusing both supporters and critics.
To explain the fuel cell, Lovins referred to the common electrolysis experiment that many students remember from their high school chemistry class. An electric current is passed through water in a test tube, splitting the water into bubbles of hydrogen and oxygen.
The proton-exchange membrane (PEM) fuel cell does the same thing in reverse: It uses a platinum-dusted plastic membrane to combine hydrogen with oxygen (typically supplied as air), producing electricity. The only by-product is pure hot water. The reaction is electrochemical, takes place at about 80 degrees Celsius, and involves no combustion.
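The underlying chemistry is the textbook reverse of that electrolysis demonstration (shown here for context; catalysts and operating details vary by design):

\[
\begin{aligned}
\text{Anode:}\quad & \mathrm{H_2} \;\rightarrow\; 2\,\mathrm{H^{+}} + 2\,e^{-} \\
\text{Cathode:}\quad & \tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H^{+}} + 2\,e^{-} \;\rightarrow\; \mathrm{H_2O} \\
\text{Overall:}\quad & \mathrm{H_2} + \tfrac{1}{2}\,\mathrm{O_2} \;\rightarrow\; \mathrm{H_2O} + \text{electricity} + \text{heat}
\end{aligned}
\]

The protons pass through the membrane while the electrons are forced around an external circuit, and that flow of electrons is the current the cell delivers.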
No Carbon Dioxide Emissions
Conventional electric generating plants make power by burning carbon-based fossil fuels (coal, oil or natural gas), or by means of costly nuclear fission, to heat water and turn large steam-turbine generators. (Hydroelectric plants use water to turn the turbines.) While fuel cells do not release carbon dioxide and other emissions, they are not yet economically competitive with fossil fuels for large, centralized electricity generation. However, Lovins said, at the point of actual use, such as the light or heat delivered in a building or the traction delivered to the wheels of an electrically propelled vehicle, mass-produced fuel cells can offer a highly competitive alternative to conventional technology.
“A fuel cell is two to three times as efficient as a gasoline engine in converting fuel energy into motion in a well-designed car,” Lovins said. “Therefore, even if hydrogen costs twice as much per unit of energy, it will still cost the same or less per mile – which is typically what you care about.”
“If you buy gasoline for $1 a gallon, pre-tax, and use it in a 20-mile-a-gallon vehicle, that’s a nickel a mile,” Lovins continued. “If you reform natural gas at a rather high cost of $6 per million BTU in a miniature natural gas reformer, you get hydrogen at $2.50 per kilogram, which is energy-equivalent to gasoline at $2.50 a gallon.”
That sounds expensive. But used in an ultralight and hence quintupled-efficiency hydrogen-fuel-cell powered hypercar vehicle, he added, that translates to a cost of 2.5 cents a mile. Or more conventionally, Lovins reported, in Toyota’s target for a fuel-cell car – 3.5 times more fuel efficient than a standard gasoline car – the same hydrogen would yield an operating cost of 3.3 cents per mile, still well under today’s gasoline cost.
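Lovins’s cost-per-mile arithmetic is easy to reproduce. The short sketch below simply restates the figures quoted above under the efficiency multipliers he cites; it is an illustration of the arithmetic, not a model of real vehicle economics, and the variable names are our own.

```python
# Reproduce the cents-per-mile comparison quoted above.
# All inputs are the article's own illustrative figures.

GASOLINE_PRICE = 1.00   # dollars per gallon, pre-tax
BASELINE_MPG = 20.0     # conventional 20-mile-per-gallon car

H2_PRICE = 2.50         # dollars per kilogram of hydrogen
# One kilogram of H2 is treated as energy-equivalent to one gallon of gasoline,
# so dollars per kilogram play the role of dollars per gallon-equivalent.

def cents_per_mile(price_per_gallon_equivalent, miles_per_gallon_equivalent):
    """Fuel cost, in cents, for each mile driven."""
    return 100.0 * price_per_gallon_equivalent / miles_per_gallon_equivalent

# Conventional car on $1/gallon gasoline: 5.0 cents per mile ("a nickel a mile").
print(cents_per_mile(GASOLINE_PRICE, BASELINE_MPG))

# Hypercar-class fuel-cell vehicle at five times the baseline efficiency:
# 2.5 cents per mile on $2.50/kg hydrogen.
print(cents_per_mile(H2_PRICE, 5.0 * BASELINE_MPG))

# Toyota's fuel-cell target at 3.5 times the baseline efficiency comes out
# near 3.6 cents per mile; the 3.3 cents quoted above presumably reflects
# slightly different baseline assumptions.
print(cents_per_mile(H2_PRICE, 3.5 * BASELINE_MPG))
```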
Peak Aerodynamic Efficiency
Designed for peak aerodynamic efficiency, cutting air drag by 40% to 60% from that of today’s vehicles, hypercar vehicles would be constructed using molded carbon-fiber composites that can be stronger than steel, but more than halve the car’s weight – the key to its efficiency. Such vehicles could use any fuel and propulsion system, but would need only one-third the normal amount of drive-power, making them especially well-suited for direct-hydrogen fuel cells.
That’s because the three-times-smaller fuel cell can tolerate three-times-higher initial prices (so fuel cells can be adopted many years sooner), and the three-times-smaller compressed-hydrogen fuel tanks can fit conveniently, leaving lots of room for people and cargo. Replacing internal combustion engines – and related transmissions, drive-shafts, exhaust systems, etc. – with a much lighter, simpler, and more efficient fuel cell amplifies the savings in weight and cost.
Carbon-fiber composite crush structures can absorb up to five times as much crash energy per pound as steel, Lovins said, as has been validated by industry-standard simulations and crash tests. The carbon-fiber composite bodies also make possible a much stiffer (hence sportier) vehicle, Lovins said, adding: “It doesn’t fatigue, doesn’t rust, and doesn’t dent in 6-mph collisions. So I guess we’ll have to rename fender-benders ‘fender-bouncers.’”
The main obstacle to making ordinary cars out of carbon-fiber composites – now confined to racecars and million-dollar handmade street-licensed versions – has so far been their high cost. But Lovins said Hypercar, Inc.’s Fiberforge™ process is expected to offset the costlier materials with cheap manufacturing “that eliminates the body shop and optionally the paint shop – the two biggest costs in automaking. This could make possible cost-competitive mid-volume production of carbon-composite auto-bodies, unlocking the potential of hypercar designs.”
Making the Transition
Some 156 fuel-cell concept cars have been announced. In mass production, Lovins added, investment requirements, assembly effort and space, and parts counts would be “perhaps an order-of-magnitude less” than conventional manufacturing. With aggressive investment and licensing, initial production of the first hypercar vehicles could “start ramping up as soon as 2007 or 2008.”
Lovins acknowledged that transitioning to a hydrogen economy creates something of a “chicken and egg” conundrum. How can you ramp up mass production of hydrogen-fueled cars in the absence of ubiquitous fuel supplies? And who will invest in building that refueling system before the market for it exists?
Fuel cells used to provide electricity for offices and residential buildings, Lovins said, can hold the answer. “You start with either gas or electricity, whichever is cheaper (usually gas), and use it to make hydrogen initially for fuel cells in buildings, where you can reuse the ‘waste’ heat for heating and cooling and where digital loads need the ultra-reliable power. Buildings use two-thirds of the electricity in the country,” he added, “so you don’t need to capture very much of this market to sell a lot of fuel cells.” Tellingly, the fuel-cell-powered police station in Central Park kept going right through the recent New York blackout, he noted.
Leasing hydrogen-fueled cars to people who already work or live in buildings that house fuel cells would create a perfect fit, Lovins suggested. For a modest extra investment, the excess hydrogen not needed for the building’s fuel-cell generators could be channeled to parking areas and used to re-fuel the fuel-cell cars. This would permit a novel value proposition for car owners, whose second-biggest household asset sits 96% idle: Lovins said the hydrogen-powered fuel-cell cars could constitute a fleet of “power plants on wheels.”
A Need for More Durable Fuel Cells
During working hours, when demand for electricity peaks, he said the fuel cells in parked cars could be plugged in, “selling power back to the electric grid at the time and place that they’re most valuable, thus earning back most or all of the cost of owning the car: the garage owner could even pay you to park there.”
While today’s PEM fuel cells can be “better than 60 percent efficient,” Lovins acknowledged that more durable fuel cells are needed, and that mass-production is needed to bring down their cost. Eventually, he added, efficient decentralized reformers could be placed conveniently around cities and towns, mainly at filling stations.
No technological breakthroughs are needed, Lovins said, to reach the hydrogen economy at the end of his roadmap. “The hydrogen economy is within reach” – if we do the right things in the right order, so the transition becomes profitable at each step, starting now.
“[Sir Winston] Churchill once said you can always rely on the Americans to do the right thing,” Lovins concluded, “once they’ve exhausted all the alternatives.” We’re certainly, he wryly added, “working our way well down the list. But, as Churchill also said, ‘Sometimes one must do what is necessary.’”
Dr. Klaus S. Lackner
Adding fuel to the discussion, Dr. Klaus S. Lackner, the Ewing Worzel Professor of Physics in the Department of Earth and Environmental Engineering, The Earth Institute at Columbia University, briefly responded with some thoughts on Lovins’s proposals.
Other Points of View
After agreeing that “things will have to change, business as usual will not work,” due mainly to the need to curb carbon dioxide emissions, Lackner raised a number of issues he believes proponents of the hydrogen economy should consider.
For example, Lackner said off-peak power costs should not be used to calculate the cost of producing hydrogen fuel from electricity, as the hydrogen-generation industry will “destroy” the structure of off-peak pricing. “There may be a benefit to the electricity market in that power generation profiles become flatter, but this will be a benefit to people running air conditioners at 4 p.m., not to the hydrogen economy.”
“Hydrogen will be made from fossil fuels,” Lackner stated, “because it is much cheaper than by any other route.” He also noted that fuel cells and hydrogen are not synonymous. “Hydrogen can work without fuel cells, and fuel cells can work without hydrogen.” Although Lovins’s vision emphasizes PEM fuel cells, Lackner added that “some fuel cells run on methane. You can use any hydrocarbon you like; we can debate which is the best fuel.”
Many Competing Options
It’s also important to remember that hydrogen is an energy carrier, like electricity, not an energy source. “One needs to compare the advantages of hydrogen as an energy carrier with those of other energy carriers,” Lackner said.
Regarding Lovins’s designs for ultralight hypercar vehicles, Lackner said there are many competing options for replacing the internal combustion engine. “It’s not fair to compare old-fashioned conventional cars, on the one side, with the new, fancy cars on the other side. We need to compare each of the potential energy carriers side by side, and not assume that the competition stands still.”
Lovins largely agreed with these comments, but felt that they didn’t affect the validity of his recommendations.
From 3D models to multimodal “conversation systems” to recreating the “visual complexities” of physical appearance, these researchers are taking computing to the next level.
Three decades ago engineers in California demonstrated a prototype personal computer, called the Alto, that would usher in the PC era and forever alter the course of human communications. In today’s online “e” era, the quest to conquer new challenges in computer science continues at an accelerating pace.
Imagining how today’s research might shape human activity three decades from now is precisely the kind of creative stimulus of which innovative discovery is made. Three examples of such creative work underway in today’s computer science laboratories were discussed on April 1 at The New York Academy of Sciences’ (the Academy’s) semi-annual Computer Science Mini-Symposium.
At the event, titled Frontiers in Visualization, research scientists from Columbia and Princeton Universities and IBM’s T. J. Watson Research Center each described efforts to create computer-based graphical imaging capabilities that overcome current limitations and open the door to a world of new possibilities.
A Search Engine for 3D Models
At Princeton University, Thomas Funkhouser, PhD, is working to advance the day when true three-dimensional (3D) models can be readily created electronically and transmitted via the Internet. “Scanners are getting cheaper and fast graphics cards are readily available on PCs,” Funkhouser told the gathering. “Someday 3D models will be as common as images are on the Web today.”
While 3D models already exist on Web sites, Funkhouser said they are often deeply embedded in data and not easy to locate. To remedy this, he and his Princeton colleagues have built a search engine specifically for locating 3D models on the Web.
To locate a 3D model using the search engine, the query can begin with a simple text word, such as “chair.” Or the query can be based on a 2D sketch – a simple drawing of a chair, for example. But the search engine also allows users to scan in an actual model or sketch and instruct the computer to “find similar shape,” thereby producing a plethora of similarly shaped chairs. The new “query interfaces” the team is building will also allow searches based on inputting 3D sketches and models.
“My goal is to create a metric for similarity,” the computer scientist said, “so that we can quickly search the database and find a similar shape. This requires that we create an index of the database.”
As an example, Funkhouser described the challenge of asking the search engine to provide the best matches of shapes similar to a 3D image of a Volkswagen Beetle. To accomplish this, he said the team needed to create a “shape descriptor” that would be concise enough to be stored in the database, fast to compute, and both efficient and discriminating in its selections.
The Challenges
One challenge is to match 3D models effectively even when they appear in arbitrary alignments. To address this, Funkhouser’s team is building a “harmonic shape descriptor” that is invariant to rotations and yet as discriminating as possible. For this the team decomposes the 3D shapes into “an irreducible set of rotation-independent components,” then stores “how much” of the model resides in each component.
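The principle behind such a descriptor – discard the part of a harmonic decomposition that changes under rotation and keep only “how much” of the shape lives in each component – can be illustrated with a deliberately simplified one-dimensional analog. The sketch below is not the Princeton harmonic shape descriptor itself; it only shows that the magnitudes of a Fourier decomposition of a boundary sampled around a circle are unchanged when the shape is rotated, even though the raw samples are not, and that two such descriptors can then be compared with an ordinary distance to rank matches.

```python
import numpy as np

# A toy 2D "shape": the distance from the centroid to the boundary,
# sampled at 64 evenly spaced angles around a circle.
angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
radius = 1.0 + 0.3 * np.cos(3 * angles) + 0.1 * np.sin(5 * angles)

# Rotating the shape corresponds to circularly shifting the samples.
rotated = np.roll(radius, 17)

def harmonic_descriptor(samples):
    """Keep only how much of the boundary lives in each frequency component."""
    return np.abs(np.fft.rfft(samples))

d_original = harmonic_descriptor(radius)
d_rotated = harmonic_descriptor(rotated)

print(np.allclose(radius, rotated))        # False: raw samples change with rotation
print(np.allclose(d_original, d_rotated))  # True: the descriptor does not

# Descriptors can then be compared with a simple distance to rank database models.
print(np.linalg.norm(d_original - d_rotated))  # essentially zero
```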
In tests conducted by students at Princeton, Funkhouser said the recently developed search engine proved most effective – 90 percent – when the user query was based on matching to an existing 3D shape. While the engine is still a work in progress, he noted that more than 35,000 3D models have been indexed thus far and more than 260,000 queries were processed this past year.
“This field is so young that there are no real benchmark tests,” Funkhouser added. “We want to develop such a test so that people can test different methods and measure their effectiveness.” Additional work is planned to improve 2D matching methods, develop new query interfaces and new matching and indexing algorithms for better methods of shape matching and shape analysis.
Automating Info Graphics
At IBM’s T. J. Watson Research Center, Michelle Zhou leads a group that is developing next-generation multimodal “conversation systems” to aid users in searching for information. Their system can automatically produce information graphics – such as graphs, charts and diagrams – during the course of “computer-human conversations.”
Zhou, whose PhD dissertation at Columbia was on building automated visualization systems that in turn create a coherent series of animated displays for visualizing a wide variety of information, aims to make these “conversations” both “multimodal,” meaning users can employ both speech and gesture inputs to express their information requests, and “multimedia,” meaning that computers may employ speech, graphics and video to present desired information to users.
When computer users search for information today – real estate market trends for a particular area, for example – the desired information graphics must be carefully handcrafted using tools such as Microsoft PowerPoint or Adobe Illustrator. Without previous training in graphic design, however, the process of handcrafting such information graphics is difficult and time-consuming. Within a dynamic human-computer conversation especially, it is extremely difficult to handcraft every possible information graphic in advance.
To simplify matters, researchers have built systems that can help people design information graphics automatically. After receiving a user request – such as “display sales data for the first quarter” – these systems can produce information graphics – such as a bar chart – automatically.
A New Graphics Generation System
Now Zhou and her team are building a new graphics generation system, called IMPROVISE+, that will allow users to provide more specific preferences, then adjust the graphic using a “feedback” input. The result: a new, customized information graphic.
“By allowing users to critique a sketch first,” Zhou said, “IMPROVISE+ can save the cost of fine-tuning the undesirable design.” After the computer processes the initial input to customize the image, however, the human user is once again asked for input. “The system is not foolproof,” Zhou acknowledged, “so at the last stage we take the user’s input to validate the generation process.”
The team’s approach, Zhou said, selects from a database of existing graphic examples, or cases, using a “similarity metric” to retrieve the case most similar to the request. The retrieved case is then either directly reused or adapted.
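Case-based retrieval of this kind can be sketched in a few lines. The feature names, stored cases and similarity function below are hypothetical stand-ins rather than the actual IMPROVISE+ metric; the point is only that the request and each stored case are reduced to comparable features, the closest case is retrieved, and that case is then reused or adapted.

```python
# Hypothetical sketch of case-based retrieval for information graphics.
# Each stored "case" is a past graphic described by a few categorical features.

CASES = [
    {"name": "quarterly-sales-bars", "data_type": "time_series", "task": "compare", "chart": "bar"},
    {"name": "region-housing-map",   "data_type": "geographic",  "task": "locate",  "chart": "map"},
    {"name": "price-trend-line",     "data_type": "time_series", "task": "trend",   "chart": "line"},
]

def similarity(request, case):
    """Toy similarity metric: the fraction of requested features the case matches."""
    return sum(case.get(k) == v for k, v in request.items()) / len(request)

def retrieve(request):
    """Return the stored case most similar to the request."""
    return max(CASES, key=lambda case: similarity(request, case))

# A user asks (by speech, text or gesture) for a trend view of time-series data,
# e.g. real estate prices over time for one area.
request = {"data_type": "time_series", "task": "trend"}
best = retrieve(request)
print(best["name"], "->", best["chart"])   # price-trend-line -> line
```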
“This approach allows us to extend our work to cover a wider variety of applications,” Zhou said, “since existing graphic examples are abundant and the learning process itself doesn’t have to be changed.”
Modeling Visual Appearances
From top: Peter N. Belhumeur, Thomas Funkhouser, and Michelle Zhou.
Computer scientist Peter N. Belhumeur has had a remarkable career since receiving his PhD from Harvard University a decade ago. Recipient of both the Presidential Early Career Award for Scientists and Engineers and the National Science Foundation Career Award, Belhumeur was appointed a full professor of Electrical Engineering at Yale University in 2001.
Belhumeur recently moved to Columbia University, where he is creating computer models that attempt to recreate the “visual complexities” of physical appearance. To do this requires understanding and attempting to replicate the complex variations related to shape, reflection, viewpoint and illumination.
In looking at even very common and seemingly simple images, Belhumeur noted that the differences between the images of the surfaces of the various material elements are quite stunning. “Because of the variation in the composition of the materials,” he said, “there’s really a great disparity in the appearance.”
To accurately model the visual appearance of an object researchers must account for its shape, reflectance, viewpoint and illumination. Of the four, Belhumeur said reflectance – a four-dimensional function involving both the incoming and outgoing directions of light – is the most complex and least understood, despite its critical importance. “As a result, you have to make assumptions about the nature of reflectance,” he said, “and this has been sort of the Achilles’ heel of nearly all image-based shape reconstruction.”
Many Applications
Referring to side-by-side photos of a peach and a nectarine, Belhumeur said: “Here you have two objects that are essentially the same shape and coloration. Yet, because of the differences in reflection, they appear different enough.” That difference, he pointed out, is due to the way the surface of each object reflects light.
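Belhumeur’s “four-dimensional” point, and the peach-versus-nectarine contrast, can be made concrete with a deliberately simple reflectance model: two angles describe the incoming light, two describe the outgoing view, and changing how strongly a surface mirrors light changes its appearance even when shape and color are fixed. The Lambertian-plus-Phong sketch below is a generic textbook model chosen for illustration, not the reflectance algorithm developed at Columbia.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def toy_reflectance(normal, light_dir, view_dir, diffuse, specular, shininess=40):
    """Toy reflectance: a matte (Lambertian) term plus a glossy (Phong) highlight.

    The value depends on both the incoming (light) and outgoing (view)
    directions: four angular degrees of freedom in all.
    """
    n, l, v = (normalize(x) for x in (normal, light_dir, view_dir))
    matte = max(np.dot(n, l), 0.0)
    mirror = 2.0 * np.dot(n, l) * n - l          # light direction reflected about the normal
    gloss = max(np.dot(mirror, v), 0.0) ** shininess
    return diffuse * matte + specular * gloss

# Identical geometry and lighting; only the material's reflectance differs.
n = np.array([0.0, 0.0, 1.0])        # surface normal
l = np.array([0.3, 0.0, 1.0])        # direction toward the light
v = np.array([-0.3, 0.0, 1.0])       # direction toward the viewer (near the mirror direction)

print(toy_reflectance(n, l, v, diffuse=0.9, specular=0.05))  # fuzzy, matte surface
print(toy_reflectance(n, l, v, diffuse=0.5, specular=0.5))   # smooth, glossy surface
```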
Despite the challenges, he said researchers at Columbia have developed a new method for reconstructing models of objects from images of the objects themselves, as well as a new algorithm for determining reflectance. The models will allow scientists to recover the shape of the object from a single image, he said, then produce reasonably accurate “synthetic images” showing how the object would look under different viewpoints or lighting conditions.
In addition, the researchers have invented a device called the Lighting Sensitive Display that uses photo-detectors, cameras and optical fibers to sense the illumination in the environment and modify the content of the image. Potential applications of this work, Belhumeur said, include face and object recognition, image-based rendering, computer graphics, content-based image and video compression and human-computer interfaces.
Peter N. Belhumeur graduated in 1985 from Brown University with Highest Honors, receiving a Sc.B. degree in Computer and Information Engineering. He received an S.M. in 1991 and a Ph.D. in 1993 from Harvard University, where he studied under David Mumford and was supported by a Harvard Fellowship. In 1993 he was a Postdoctoral Fellow at the Isaac Newton Institute for Mathematical Sciences at the University of Cambridge. He was appointed professor of Electrical Engineering at Yale University in 2001. He recently joined the faculty at Columbia University.
Thomas Funkhouser is an assistant professor in the Department of Computer Science at Princeton University. Previously, he was a member of the technical staff at Bell Laboratories. His current research interests include interactive computer graphics, computational geometry, distributed systems, and shape analysis. He received a B.S. in Biological Sciences from Stanford University in 1983, an M.S. in computer science from UCLA in 1989, and a Ph.D. in computer science from UC Berkeley in 1993.
Michelle Zhou is a research staff member at IBM T.J. Watson Research Center, where she manages a group working on intelligent multimedia interaction. Before joining IBM, she completed her doctoral thesis at Columbia University on automated visualization systems that can create a coherent series of animated displays for visualizing a wide variety of information, and she received her Ph.D. in computer science from Columbia.
Nanotechnology has the potential to revolutionize our daily lives, and one aspect that makes this technology so promising and effective is its bottom-up approach.
Nanotechnology has gained widespread recognition with the promise of revolutionizing our future through advances in areas ranging from computing, information storage and communications to biotechnology and medicine. How might one field of study produce such dramatic changes?
At the most obvious level nanotechnology is focused on the science and technology of miniaturization, which is widely recognized as the driving force for the advances made in the microelectronics industry over the past 30 years. However, I believe that miniaturization is just one small component of what makes and will make nanoscale science and technology a revolutionary field. Rather, it is the paradigm shift from top-down manufacturing, which has dominated most areas of technology, to a bottom-up approach.
The bottom-up paradigm can be defined simply as one in which functional devices and systems are assembled from well-defined nanoscale building blocks, much like the way nature uses proteins and other macromolecules to construct complex biological systems. The bottom-up approach has the potential to go far beyond the limits of top-down technology by defining key nanometer-scale metrics through synthesis and subsequent assembly – not by lithography.
Producing Structures with Enhanced and New Functions
Of equal importance, bottom-up assembly offers the potential to produce structures with enhanced and/or completely new function. Unlike conventional top-down fabrication, bottom-up assembly makes it possible to combine materials with distinct chemical composition, structure, size and morphology virtually at will. To implement and exploit the potential power of the bottom-up approach requires that three key areas, which are the focus of our ongoing program at Harvard University, be addressed.
First and foremost, the bottom-up approach requires nanoscale building blocks with precisely controlled and tunable chemical composition, structure, morphology and size, since these characteristics determine their corresponding physical (e.g. electronic) properties. From the standpoint of miniaturization, much emphasis has been placed on the use of molecules as building blocks. However, challenges in establishing reliable electrical contact to molecules have limited the development of realistic schemes for scalable interconnection and integration without key feature sizes being defined by the conventional lithography used to make interconnects.
My own group’s work has focused on nanoscale wires and, in particular, semiconductor nanowires as building blocks. This focus was initially motivated by the recognition that one-dimensional nanostructures represent the smallest structures capable of efficiently routing information – either in the form of electrical or optical signals. Subsequently, we have shown that nanowires can also exhibit a variety of critical device functions, and thus can be exploited as both the wiring and the device elements in functional nano-systems.
Control Over Nanowire Properties
Currently, semiconductor nanowires can be rationally synthesized in single crystal form with all key parameters – including chemical composition, diameter and length, and doping/electronic properties – controlled. The control that we have over these nanowire properties has correspondingly enabled a wide range of devices and integration strategies to be pursued. For example, semiconductor nanowires have been assembled into nanoscale field-effect transistors, light-emitting diodes, bipolar junction transistors and complementary inverters – components that potentially can be used to assemble a wide range of powerful nano-systems.
Tightly coupled to the development of our nanowire building blocks have been studies of their fundamental properties. Such measurements are critical for defining their limits as existing or completely new types of device elements. We have developed a new strategy for nanoscale transistors, for example, in which one nanowire serves as the conducting channel and a second, crossed nanowire serves as the gate electrode. Significantly, the three critical device metrics are naturally defined at the nanometer scale in assembled crossed nanowire transistors:
(1) a nanoscale channel width determined by the diameter of the active nanowire;
(2) a nanoscale channel length defined by the crossed gate nanowire diameter; and
(3) a nanoscale gate dielectric thickness determined by the nanowire surface oxide.
These distinct nanoscale metrics lead to greatly improved device characteristics such as high gain, high speed and low power dissipation. Moreover, this new approach has enabled highly integrated nanocircuits to be defined by assembly.
Hierarchical Assembly Methods
Second and central to the bottom-up concept has been the development of hierarchical assembly methods that can organize building blocks into integrated structures. Obtaining highly integrated nanowire circuits requires techniques to align and assemble them into regular arrays with controlled orientation and spatial location. We have shown that fluidics, in which solutions of nanowires are directed through channels over a substrate surface, is a powerful and scalable approach for assembly on multiple length scales.
In this method, sequential “layers” of different nanowires can be deposited in parallel, crossed and more complex architectures to build up functional systems. In addition, the readily accessible crossed nanowire matrix represents an ideal configuration since the critical device dimension is defined by the nanoscale cross point, and the crossed configuration is a naturally scalable architecture that can enable massive system integration.
Third, combining the advances in nanowire building block synthesis, understanding of fundamental device properties and development of well-defined assembly strategies has allowed us to move well beyond the limit of single devices and begin to tackle the challenging and exciting world of integrated nano-systems. Significantly, high-yield assembly of crossed nanowire structures containing multiple active cross points has led to the bottom-up organization of OR, AND, and NOR logic gates, where the key integration did not depend on lithography. Moreover, we have shown that these nano-logic gates can be interconnected to form circuits and, thereby, carry out primitive computation.
Tremendous Excitement in the Field
Prof. Lieber
These and related advances have created tremendous excitement in the nanotechnology field. But I believe it is the truly unique characteristics of the bottom-up paradigm, such as enabling completely different function through rational substitution of nanowire building blocks in a common assembly scheme, which ultimately could have the biggest impact in the future. The use of modified nanowire surfaces in a crossed nanowire architecture, for example, has recently led to the creation of nanoscale nonvolatile random access memory, where each cross point functions as an independently addressable memory element with a potential for integration at the 10¹²/cm² level.
In a completely different area, we have shown that nanowires can serve as nearly universal electrically based detectors of chemical and biological species with the potential to impact research in biology, medical diagnostics and chem/bio-warfare detection. Lastly, and to further highlight this potential, we have shown that nanoscale light-emitting diode arrays with colors spanning the ultraviolet to near-infrared region of the electromagnetic spectrum can be directly assembled from emissive electron-doped binary and ternary semiconductor nanowires crossed with non-emissive hole-doped silicon nanowires. These nanoscale light-emitting diodes can excite emissive molecules for sensing or might be used as single photon sources in quantum communications.
The bottom line – focusing on the diverse science at the nanoscale will provide the basis for enabling truly unique technologies in the future.
Charles M. Lieber moved to Harvard University in 1991 as a professor of Chemistry and now holds a joint appointment in the Department of Chemistry and Chemical Biology, where he holds the Mark Hyman Chair of Chemistry, and the Division of Engineering and Applied Sciences. He is the principal inventor on more than 15 patents and recently founded a nanotechnology company, NanoSys, Inc.
Researchers are making significant advances in nanotechnology that someday may help revolutionize medical science, from the testing of new drugs to cellular repair.
When it comes to understanding biology, Professor Carl A. Batt believes that size matters – especially at the Cornell University-based Nanobiotechnology Center that he codirects. Founded in January 2000 by virtue of its designation as a Science and Technology Center, and supported by the National Science Foundation, the center seeks to fuse advances in microchip technology with the study of living systems.
Batt, who is also professor of Food Science at Cornell, recently presented a gathering – entitled Nanotechnology: How Many Angels Can Dance on the Head of a Pin? – with a tiny glimpse into his expanding nanobiotech world. The event was organized by The New York Academy of Sciences (the Academy). “A human hair is 100,000 nm wide, the average circuit on a Pentium chip is 180 nm, and a DNA molecule is 2 nm, or two billionths of a meter,” Batt told the audience.
“We’re not yet at the point where we can efficiently and intelligently manipulate single molecules,” he continued, “but that’s the goal. With advances in nanotechnology, we can build wires that are just a few atoms wide.
“Eventually, practical circuits will be made up of series of individual atoms strung together like beads and serving as switches and information storage devices.”
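The length scales Batt quoted are easier to grasp as ratios. The snippet below simply divides the figures from his talk; nothing in it goes beyond those numbers.

```python
# Ratios between the length scales quoted above (all in nanometers).
hair_nm = 100_000          # width of a human hair
pentium_circuit_nm = 180   # average circuit feature on a Pentium chip
dna_nm = 2                 # width of a DNA molecule

print(f"Hair vs. Pentium feature: {hair_nm / pentium_circuit_nm:,.0f}x")   # ~556x
print(f"Pentium feature vs. DNA:  {pentium_circuit_nm / dna_nm:,.0f}x")    # 90x
print(f"Hair vs. DNA:             {hair_nm / dna_nm:,.0f}x")               # 50,000x
```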
Speed and Resolution
There is a powerful rationale behind Batt’s claim that size is important to the understanding of biology. Nanoscale devices can acquire more information from a small sample with greater speed and at better resolution than their larger counterparts. Further, molecular interactions such as those that induce disease, sustain life and stimulate healing all occur on the nanometer scale, making them resistant to study via conventional biomedical techniques.
“Only devices built to interface on the nanometer scale can hope to probe the mysteries of biology at this level of detail,” Batt said. “Given the present state of the technology, there’s no limit to what we can build. The necessary fabrication skills are all there.”
Scientists like Batt and his colleagues at Cornell and the center’s other academic partners are proceeding into areas previously relegated to science fiction. While their work has a long way to go before there will be virus-sized devices capable of fighting disease and effecting repairs at the cellular level, progress is substantial. Tiny biodegradable sensors, already in development, will analyze pollution levels and measure environmental chemicals at multiple sample points over large distances. Soon, we’ll be able to peer directly into the world of nano-phenomena and understand as never before how proteins fold, how hormones interact with their receptors, and how differences between single nucleotides account for distinctions between individuals and species.
The trick – and the greatest challenge posed by an emerging field that is melding the physical and life sciences in unprecedented ways – is to adapt the “dry,” silicon-based technology of the integrated circuit to the “wet” environment of the living cell.
Bridging the Organic-Inorganic Divide
Nanobiotechnology’s first order of business is to go beyond inorganic materials and construct devices that are biocompatible. Batt names proteins, nucleic acids and other polymers as the appropriate building blocks of the new devices, which will rely on chemistries that bridge the organic and inorganic worlds.
In silicon-based fabrication, some materials that are common in biological systems – sodium, for example – are contaminants. That’s why nano-biotech fabrication must take place in unique facilities designed to accommodate a level of chemical complexity not encountered in the traditional integrated-circuit industry.
But for industry outsiders, the traditional technology is already complex enough. Anna Waldron, the Nanobiotechnology Center’s Director of Education, routinely conducts classes and workshops for schoolchildren, undergraduates and graduates to initiate them into the world of nanotechnology, encourage them to pursue careers in science, and foster science and technology literacy.
In a hands-on presentation originally designed for elementary-school children, Waldron gives the audience a taste – both literally and figuratively – of photolithography, a patterning technique that is the workhorse of the semiconductor industry. Instead of creating a network of wells and channels out of silicon, however, Waldron works her magic on a graham cracker, a chocolate bar and a marshmallow, manufacturing a mouthwatering “nanosmore” chip in a matter of minutes.
Graham crackers stand in for the silicon substrate, while chocolate provides the necessary primer for the surface. Marshmallows act as the photoresist, an organic polymer that, when exposed to light, radiation, or, in this case, a heat gun, can be patterned in the desired manner. Finally, a Teflon “mask” is placed on top of the marshmallow layer and a blast from the heat gun transfers the mask’s design to the marshmallow’s surface – a result that appeared to leave a lasting impression on the Academy audience as well.
What’s Next?
According to Batt, it won’t be too long before the impact of the nanobiotech revolution will be felt in the fields of diagnostics and biomedical research. “Progress in these areas will translate the vast information reservoir of genomics into vital insights that illuminate the relationship between structure and function,” he said.
Also down the road, ATP-fueled molecular motors may drive a whole series of ultrasmall, robotic medical devices. A “lab-on-a-chip” will test new drugs, and a “smart pharmacist” will roam the body to detect abnormal chemical signals, calculate drug dosage and dispense medication to molecular targets.
Thus far, however, there are no manmade devices that can correct genetic mutations by cutting and pasting DNA at the 2-nanometer scale. One of the greatest obstacles to their development, Batt said, doesn’t lie in building the devices, but in powering them. Once the right energy sources are identified and channeled, we’ll have a technology that speaks the language of genomics and proteomics, and decodes that language into narratives we can understand.
Microbiologist Carl A. Batt is professor of Food Science at Cornell University and co-director of the Nanobiotechnology Center, an NSF-supported Science and Technology Center. He also runs a laboratory that works in partnership with the Ludwig Institute for Cancer Research.
On ordinary days, the control room for a deep-space mission is rather sedate: data stream in, routine commands stream out, no one need raise his voice. But February 12, 2001, was no ordinary day for the technicians controlling NASA’s Near Earth Asteroid Rendezvous (NEAR)-Shoemaker spacecraft. Some punched calculators madly, while others ran from computer monitor to computer monitor, shouting numbers, trying to find out what was happening. Nearby, television crews aimed cameras at the scrambling engineers, capturing their every motion. Pandemonium had replaced the serene orderliness.
The NEAR team had brought this chaos on themselves. In a bold flourish to end their successful mission, the spacecraft’s science and engineering teams at the Johns Hopkins University Applied Physics Lab in Laurel, Maryland, sent NEAR-Shoemaker toward a landing on the surface of Eros, the asteroid it had circled for a year. Never mind that the probe had been built as an orbiter and had no landing mechanism of any kind. Even if NEAR wound up shattering into a thousand pieces, the images it would send in its final moments would make the stunt worthwhile.
Two members of NEAR’s imaging team – Joseph Veverka, professor of astronomy at Cornell University, and Mark Robinson of Northwestern University – huddled in front of a computer to marvel at the high-resolution images coming from space. Veverka was amazed by the absence of craters in the close-up pictures of the asteroid’s surface, and Robinson was impressed by the numerous boulders of all shapes and sizes.
Hungrily Consuming Information
The spacecraft descended at a leisurely four miles per hour, and the images grew in detail and complexity. The investigators hungrily consumed each bit of information, fully expecting the data stream to end abruptly at the moment of impact. Several technicians watched as their computer programs counted the altitude down to zero. Then one of the flight engineers yelled, incredulously, “Totally nominal––we’ve got a signal!” Robert Farquhar, the mission director, shouted, “Hold that signal!”
NEAR-Shoemaker had not only touched the surface of Eros, it had come through the impact seemingly whole and in operation. It was as if the controllers had rolled an egg across a gravel field without even cracking the shell. Although no more images could be transmitted, NASA allowed the mission an extension of several weeks to enable the craft to gather and radio back additional data about the chemical make-up of the spacecraft’s landing site.
After accomplishing the first rendezvous with an asteroid, the first orbit of an asteroid and the first landing on an asteroid, the investigators in charge of the NEAR-Shoemaker mission now have compiled a wealth of information about a heretofore shadowy subject –– the bits of planetary debris that inhabit the middle reaches of the solar system.
The data and images from the mission have already helped answer innumerable questions about asteroids and how they figure in the birth and formation of the solar system. But more interesting, perhaps, was what NEAR-Shoemaker did not tell scientists. As extraordinary as the landing was, the last-second images paralleled many of NEAR-Shoemaker’s other discoveries. For every question that was settled, another conundrum was unexpectedly uncovered.
“These [images] leave us with mysteries that will have us scratching our heads for years to come,” Veverka said.
A Place in Space
Even before the NEAR-Shoemaker mission, Eros had been one of the most studied asteroids. Its orbit ranges from 105 million to 165 million miles from the sun, which means that on occasion it comes within about 10 million miles of the earth. Astronomers have long used those close approaches as a valuable measuring stick: the earth’s distance from the sun and the mass of the earth-moon system were both measured using positions triangulated with the help of Eros. What’s more, the regular visits enable astronomers to study the asteroid from the earth with relative precision.
The first asteroid was discovered in 1801 by Giuseppe Piazzi, a professor of mathematics and astronomy at the University of Palermo in Sicily. Piazzi had been surveying a part of the solar system between Mars and Jupiter in hopes of spotting a planet thought to lie there. Those hopes were based upon the Titius-Bode Law, a simple mathematical rule that reproduced the orbital distances of the planets known at the time with surprising accuracy; that law predicted a planet at 2.8 times the earth-sun distance, or roughly 260 million miles.
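The Titius-Bode rule can be written as a = 0.4 + 0.3 × 2ⁿ astronomical units, with Mercury treated as a special case. The sketch below evaluates it; the 93-million-mile conversion and the comparison with Ceres’s actual orbit are modern values added here, not part of the 1801 formulation.

```python
# Titius-Bode rule: a = 0.4 + 0.3 * 2**n astronomical units.
# Mercury is conventionally treated as the n = -infinity case (a = 0.4 AU).

MILES_PER_AU = 93e6   # rough modern conversion, miles per astronomical unit

bodies = [
    ("Mercury", None),                   # special case: 0.4 AU
    ("Venus", 0),
    ("Earth", 1),
    ("Mars", 2),
    ("(gap where Ceres was found)", 3),
    ("Jupiter", 4),
    ("Saturn", 5),
    ("Uranus", 6),
]

for name, n in bodies:
    a_au = 0.4 if n is None else 0.4 + 0.3 * 2**n
    print(f"{name:30s} {a_au:5.1f} AU  (~{a_au * MILES_PER_AU / 1e6:.0f} million miles)")

# The n = 3 slot, ~2.8 AU, is where Piazzi found Ceres (actual orbit ~2.77 AU).
```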
After tracking a bright object across the background stars for more than a month, Piazzi calculated its position and found that its orbit closely corresponded with the location of the “missing planet.” On February 12, 1801 –– 200 years to the day before NEAR’s landing on Eros –– Piazzi announced his discovery. A new planet had been found, one he called Ceres, after the Roman goddess of the harvest.
A Point in the Sky
Piazzi’s fame was short-lived. Once other astronomers began observing Ceres they discovered that, unlike other planets, this one presented no discernable disk. It was, like a star, a point in the sky. The name “asteroid” (meaning “starlike”) stuck. The next year the German astronomer Heinrich Olbers found another asteroid in much the same orbit as Ceres. Hundreds of asteroids had been spotted by the time the German Gustav Witt and the Frenchman Auguste Charlois independently discovered Eros on the same night in 1898.
Eros, however, marked a first: Astronomers had never before found an asteroid that had left the main “belt” between Mars and Jupiter and approached earth’s orbit. And it is large, measuring some 21 miles long and eight miles wide. Although the total number of known asteroids exceeds 10,000, astronomers have identified only 250 or so near-earth asteroids, as those with orbits like Eros’s are called.
No asteroid is known to be on a collision course with earth, but impacts have occurred throughout geological history –– asteroid impacts are implicated in mass extinctions and in creating the craters that formed lakes in Canada and elsewhere. Very small bits of asteroids hit the earth all the time; they’re called meteorites once they land.
Geologists have collected thousands of meteorites. Some are composed of carbon-rich minerals and look like soot; others are almost pure iron. But the majority –– some 80 percent –– are what geologists call ordinary chondrites. Such rocks are stony in appearance and largely made up of silicate minerals, such as olivine and pyroxene.
A Model Mission
Rather than get fleeting images of many asteroids, the NEAR mission, launched in 1996, was designed to gain an extraordinary amount of information about just one. (The name of the mission was changed to NEAR-Shoemaker to honor the planetary geologist Eugene Shoemaker, who died in 1997.) The mission also was to be a model of efficiency: rather than roar to the target asteroid in one quick arc, the spacecraft would swing past the earth to get a gravitational boost. Along the way, NEAR-Shoemaker zipped through the asteroid belt and past Mathilde, a C-type asteroid.
Mathilde proved to be a bit of a surprise: the jagged, irregularly shaped, 33-mile-wide body, darker than charcoal, turned out to be only slightly denser than ice. Since the carbon-rich material that the asteroid is thought to be made of is far denser than this, planetary geologists believe Mathilde is nothing more than a gravel pile of primordial material loosely stuck together. But the fly-by of Mathilde was too fast to obtain detailed spectra.
NEAR-Shoemaker approached Eros in December 1998, and controllers sent the command that would slow it enough to be captured in an orbit. With so little gravitational pull (an astronaut on the surface could throw a rock fast enough to reach escape velocity), Eros was more of a point to maneuver about than a world to orbit. But at the very moment the spacecraft was supposed to settle into orbit around Eros, an engine failed to burn and the probe shot past.
That could well have been the end of the mission. But engineers found a way to correct the engine problem and re-aim the spacecraft. NEAR made an extra orbit of the sun so its path could be brought back to Eros 14 months later, on February 14, 2000.
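The earlier parenthetical claim – that an astronaut could throw a rock off Eros – is easy to check with the standard escape-velocity formula, v = √(2GM/r). The mass and mean radius below are approximate published values for Eros; treat the result as an order-of-magnitude check rather than mission data.

```python
import math

# Escape velocity from Eros, v_esc = sqrt(2 * G * M / r).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_EROS = 6.69e15   # approximate mass of Eros, kg
R_EROS = 8.4e3     # approximate mean radius of Eros, m (it is far from spherical)

v_esc = math.sqrt(2 * G * M_EROS / R_EROS)   # meters per second
print(f"Escape velocity: {v_esc:.1f} m/s (~{v_esc * 2.237:.0f} mph)")
```

An escape velocity of roughly 10 meters per second – about 23 miles per hour – is well within the speed of an ordinary throw, which is exactly why Eros was “more of a point to maneuver about than a world to orbit.”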
Unprecedented Challenge
After settling into an orbit around the 21-mile-long, peanut-shape asteroid, NEAR-Shoemaker kept a careful distance. It was a matter of wise discretion, since orbiting such a strangely shaped object with such a tiny gravitational field was in itself an unprecedented challenge. And because of Eros’s bent and elongated shape and its rotation through a five hour and 15 minute “day,” the relative speed between spacecraft and asteroid ranged between two and 15 miles per hour, and was never the same from orbit to orbit. If ground controllers were not careful, the spacecraft could get whacked as the nose of the asteroid swung by.
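That two-to-15-mile-per-hour range is roughly what simple geometry predicts. Using only the dimensions quoted in this article – a 21-mile-long body turning once every five and a quarter hours – the tip of the asteroid sweeps along at about 12 to 13 miles per hour from rotation alone, consistent with the upper end of that range; the spacecraft’s own slow orbital motion accounts for the rest.

```python
import math

# Sanity check: surface speed at the long end of Eros due to rotation alone.
length_miles = 21.0          # long axis of Eros (from the article)
rotation_period_hr = 5.25    # "day" of 5 hours 15 minutes

tip_radius = length_miles / 2.0   # distance of the tip from the spin axis
tip_speed = 2 * math.pi * tip_radius / rotation_period_hr
print(f"Tip speed from rotation: {tip_speed:.1f} mph")   # ~12.6 mph
```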
For about two months, then, the spacecraft circled more than 100 miles above the surface, employing its camera, laser altimeter, magnetometer, and infrared and x-ray/gamma-ray spectrometers to obtain a comprehensive view of the end pointed toward the sun. In mid-April the spacecraft moved inward, spending the next five months in orbits as low as 22 miles. Then, in August, ground controllers lifted NEAR-Shoemaker to a higher orbit so that scientists could get global views of the other end, now in daylight.
Of the many intriguing and distinct geological features spotted during this orbital reconnaissance, the most notable was a giant saddle-like depression. Data from the laser altimeter suggest that the feature is actually a crater, though a strangely shaped one. A more normal-looking large crater –– some three miles in diameter and a half-mile deep –– dominates the asteroid’s other side.
Few Small Craters
Indeed, the size of the large craters gouged into Eros’s surface was perhaps less surprising than the absence of small ones. Unlike the moon and other solar system bodies –– where the proportion of small craters to large ones holds steady no matter how closely you look –– Eros lacks many craters less than 100 yards in diameter. “I am amazed at how devoid the surface is of small craters,” Veverka said.
Instead of small craters, investigators saw just the opposite: boulders everywhere, in all sizes and shapes. Some are rounded. Others have sharp angular facets. In fact, the entire surface of the asteroid seems covered with a layer of pulverized dust and debris of unknown depth. In some areas, such as in the large saddle, the layer appears thick enough to completely blanket and fill older craters. The photographs also revealed grooves, troughs, pits, ridges and fractures, similar to what the Galileo spacecraft saw on the asteroid Ida.
“These are generally very old features, and suggest the existence of fractures in the deep interior,” says Veverka. The ridges, one of which wraps one-third of the way around the asteroid, average about 30 feet high and 300 feet wide. Their existence suggests that Eros has an internal structure and is therefore a consolidated body, not a rubble pile like Mathilde. In other words, if you gave Eros a push, it would move away from you as a unit rather than dissolve into a cloud of gravel.
The strangest features spotted by the close-up photos were what appeared to be extremely smooth ponds of material at the base of some craters, as if the dust and dirt on the crater slopes had flowed downward and pooled at the bottom. “Some process we don’t understand seems to sort out the really fine particles and move them into the lowest spots,” notes Veverka.
New Conclusions
Not only is Eros a solid hunk; close-up views also reveal that its composition is remarkably uniform. In fact, Eros appears incredibly bland, with little color variation anywhere on its surface. “The very small color differences lend support to Eros being all the same composition,” says the planetary scientist Clark Chapman of the Southwest Research Institute in Boulder, Colorado, a member of the NEAR-Shoemaker science team.
That means the ground-based spectroscopy suggesting that Eros was a differentiated body –– with hemispheres composed of minerals that had separated due to melting –– was wrong. In fact, the data NEAR-Shoemaker has collected calls into question many of the conclusions that have been made about the composition of asteroids. Astronomers believed that Eros and all other S-type asteroids were geologically distinct from ordinary chondrite meteorites; on close inspection, NEAR has shown Eros to be nothing more than one large ordinary chondrite.
Many investigators now believe that such S asteroids –– which make up the majority of asteroids in the inner part of the solar system –– might well be the source of most meteorites. In fact, the difference in spectra between S asteroids and ordinary chondrites might be more a function of rotation than substance: as asteroids rotate, their irregular surface distorts their spectrum.
A Daring Finish
Rather than simply shut NEAR-Shoemaker off, mission director Robert Farquhar suggested a more daring finish: Why not try to land the orbiter on the surface of Eros? Not only would such a landing enable investigators to get high-resolution images that would have been impossible to obtain otherwise, the feat would also teach ground controllers the best techniques for landing spacecraft on such low-gravity objects, a skill that future space navigators will surely need.
On its way down, NEAR-Shoemaker snapped 69 high-resolution images of Eros’ surface, resolving details less than an inch across. Just before impact, the last two pictures caught the edge of one of the sand ponds. Though the pond appeared smooth –– as in more distant photographs –– small stones were seen peeking up through the fine dust. Those final photographs raised more questions than they answered.
NEAR-Shoemaker recorded 160,000 photographs –– imaging surface features as small as a foot across –– and it will take years for the investigators working on the mission to digest them all. Just as the Viking missions informed the study of Mars for a generation, the NEAR data may guide asteroid science for decades; it could be that long before planetary scientists obtain a richer or more detailed set of asteroid data. Even so, NEAR-Shoemaker’s wealth of data concerns just one asteroid. Whatever conclusions astronomers draw from NEAR must be tempered with the knowledge that asteroids come in many sizes, shapes and compositions; any definitive conclusions can be made only about Eros.
The First Close and Detailed Look at an Asteroid
Nonetheless, this first close and detailed look at an asteroid gave humanity its first tantalizing glimpse at the very earliest birth pangs of a planet. The flow of material down the slopes of craters, the crumbling of boulders, and the pooling of material into sand ponds are merely the processes by which an irregularly shaped object slowly rounds itself off into a spherical planet.
Ancient and worn by its billion-year journey through the black emptiness of space, Eros has slowly been chiseled by impact after impact, then shaped by the slow, inexorable pull of its tiny gravity. In this dim, dark and silent environment, nature has –– like the seed in an oyster from which pearls will grow –– relentlessly built Eros up from nothing. From a similar seed grew our earth.
As things stand now, however, the best summary of what we really know about Eros and asteroids comes from Veverka, who spoke freely at a press conference immediately after the landing. Again and again, Veverka told reporters, “We really don’t understand what’s going on.”
Robert Zimmerman is author of Genesis, the Story of Apollo 8, published by Four Walls Eight Windows, and The Chronological Encyclopedia of Discoveries in Space, published by Oryx Press.
In 2000, on a national basis, the number of students per instructional computer was 5, down from 9 in 1995. State averages for New York, New Jersey, and Connecticut ranged between 4.6 and 5.4.
UPSHOT: Not Down Enough for Some
Disparities remain between the state averages and the averages for high-poverty schools in Connecticut and New York. Technology investment in Connecticut schools was derailed by budget cuts, and in New York it fell well below recommended levels. New Jersey made such significant investments in poor school districts that its high-poverty schools average 4 students per computer, while high-poverty schools in New York average 8.
Computer Use by Teachers
TREND: Computers Are Used More in Classrooms
Having computers in classrooms is a basic imperative. However, unless IT is integrated into teaching, the benefits go unrealized. In 2000, 76 percent of all public schools reported that more than half of their teachers were using computers in class. In the Tri-State area, Connecticut led with 72 percent, followed by New Jersey and New York at 69 percent and 68 percent, respectively.
UPSHOT: Except in Certain Schools
In computer use, Connecticut showed a huge gap between its schools with high percentages of historically underrepresented students (25%) and its state average (72%). Half of the teachers in Connecticut’s high-minority schools are “beginners” in technology use, while only one-third of teachers in low-minority schools are neophytes.
Internet Access
TREND: Most Schools Are on the Internet
The Federal government’s efforts to bring America’s schools and libraries into the information age through initiatives such as the Technology Literacy Challenge Fund and the E-Rate – a program for obtaining discounted telecommunication services – have paid great dividends. By 2000, 94% of schools nationwide had access to the Internet. Connecticut, at 96% connectivity, again led New Jersey and New York, at 92% and 90%, respectively.
UPSHOT: Some Are Left Behind
Despite an infusion of more than $600 million in E-Rate funding through March 2001, many New York schools remain without Internet access: 17% of high-poverty schools in the state are still not connected. Furthermore, among those that are connected, 65% lack a high-speed connection, compared with only 47% of low-poverty schools.
Investments for the Future: As Sales Plummet, Who Will Invest in R&D?
Information and electronics manufacturing and services accounted for more than two-thirds of total corporate R&D growth in 2000. In 1999, the IT industry spent nearly 10% of its sales on R&D. Declining sales mean fewer R&D dollars, which could significantly slow the rate of innovation in products and services. Although the public sector is likely to provide a stable source of revenue growth for the industry, very little R&D funding will come from state and local governments. Most Federal R&D will focus on basic and applied science, not product development.
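The arithmetic behind that worry is simple proportionality. Assuming R&D continues to track sales at roughly the 10% rate cited above, the hypothetical example below shows how a drop in sales flows straight through to the R&D budget; the $20 billion decline is an invented figure for illustration only.

```python
# If R&D spending tracks sales at roughly 10% (the 1999 figure cited above),
# a decline in sales pulls R&D down proportionally. Hypothetical numbers only.
rd_intensity = 0.10     # R&D as a share of sales
sales_decline = 20e9    # assumed $20 billion drop in industry sales

rd_lost = rd_intensity * sales_decline
print(f"Implied R&D reduction: ${rd_lost / 1e9:.0f} billion")   # $2 billion
```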
Still, with increased concerns over security and law enforcement following the Sept. 11 attacks, IT applications in the areas of defense and healthcare, the Tri-State region’s top two sources of Federal R&D (see January 2001 issue), could see significant increases in the coming years.
An economic and usage analysis of utility consumption in the Tri-State region, as well as the potential impact of regulation on prices for residential consumers.
Published July 1, 2001
By Allison L. C. de Cerreño, Ph.D., Mahmud Farooque, and Veronica Hendrickson
Energy supply is not all that has become scarce in the utilities industry. There are fewer establishments and even fewer employees than there were a decade ago. In spite of this shrinking workforce, sectoral output in the United States has risen steadily, increasing by close to 15% between 1993 and 1999. The Tri-State region has followed the national pattern of a smaller workforce and rising output. However, the region’s sectoral product grew more slowly than the nation’s, while its employment base declined more sharply.
The greater loss in jobs is partly due to the fact that establishments in the region are larger than those in other parts of the country. Although the NY/NJ/CT region represented only 5% of the nation’s utility establishments in 1999, it accounted for 11% of the total employment and 13% of the annual payroll. Utility establishments in the Tri-State region employ on average nearly twice as many people as their counterparts in California and nearly four times as many as those in Texas.
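The size claim follows directly from the shares quoted above: 11% of national utility employment concentrated in just 5% of the establishments means the average Tri-State establishment is a bit more than twice the national average size. The calculation below is just that division; absolute headcounts are not given in the source.

```python
# Relative size of the average Tri-State utility establishment, 1999.
share_of_establishments = 0.05   # region's share of U.S. utility establishments
share_of_employment = 0.11       # region's share of U.S. utility employment

relative_size = share_of_employment / share_of_establishments
print(f"Average Tri-State establishment is {relative_size:.1f}x the national average size")
# -> 2.2x
```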
Unbundling of the electric utilities in the 1990s has made the sector more efficient and productive. However, unlike in the airline and telecommunications industries, deregulation has yet to bring a dramatic reduction in the price of electricity, especially for residential customers.
Water Utilities: Projections are Up, but Who Will Pay?
New regulation and improved technology have had the opposite effect on the region’s employment in the water supply and sanitary services industries. An increase in the number of contaminants that must be monitored and treated has prompted the Bureau of Labor Statistics to project a 34% increase in employment between 1998 and 2008. However, the price tag for required maintenance and regulatory compliance in the region is hefty: about $25 billion for wastewater and $18 billion for drinking water, accounting for 18% and 13% of the United States’ totals, respectively.
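Those shares imply national totals of well over $130 billion on each side; the quick check below simply backs the implied U.S. figures out of the regional numbers quoted above.

```python
# Back out the implied national price tags from the regional figures and shares above.
region_wastewater = 25e9   # dollars
region_drinking = 18e9     # dollars
share_wastewater = 0.18    # region's share of the U.S. wastewater total
share_drinking = 0.13      # region's share of the U.S. drinking-water total

print(f"Implied U.S. wastewater total:     ${region_wastewater / share_wastewater / 1e9:.0f} billion")
print(f"Implied U.S. drinking-water total: ${region_drinking / share_drinking / 1e9:.0f} billion")
# -> about $139 billion and $138 billion, respectively
```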
Energy Consumption in the Tri-State Region
Efficient
In 1999, the NY/NJ/CT region was home to 14% of the nation’s population but accounted for only 8% of total U.S. energy consumption – a significant difference. Without the region, per capita energy consumption in the country would increase by more than 3%. Had the Tri-State region’s population consumed energy at the same rate as the rest of the U.S., total energy usage would have increased by 3010 trillion Btu – more than the total energy consumed in Georgia that year.
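Another way to read those two shares: the region uses energy at a little more than half the national per capita rate. The calculation below derives only that ratio from the 14% and 8% figures above.

```python
# Per capita energy use in the Tri-State region relative to the U.S. average,
# derived only from the shares quoted above (14% of population, 8% of consumption).
pop_share = 0.14
energy_share = 0.08

relative_per_capita = energy_share / pop_share
print(f"Region's per capita consumption: {relative_per_capita:.0%} of the national average")
# -> 57%
```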
Balanced…
Nationally, industrial demand led all other sectors in energy consumption, accounting for 37% of the total energy consumed in 1999. Commercial demand, at 16% of the total, was the lowest of the four sectors. The Tri-State region showed a similar pattern – but some thirty years ago. The industrial sector’s share of the region’s total energy pie has been shrinking steadily. Today, each of the four sectors (industrial, commercial, residential and transportation) consumes about a quarter of the region’s total energy.
However, these shifts have not been uniformly distributed across the region. For example, between 1993 and 1999, industrial demand in NY and CT rose by 23% and 5%, respectively, but in NJ it dropped by 2%. That loss in industrial demand was more than compensated for by a 2.6% increase in the residential sector over a period when New Jersey’s resident population grew by 3.4%.
And Environmentally Friendly…
Coal provided 20% of the region’s total energy in 1960, a share that dropped to just 3% by 1999. Nationally, coal’s share has continued to hover around the 20% mark, remaining essentially unchanged over the last four decades. The share of natural gas in the NY/NJ/CT region doubled over the same period, while nationally it declined by 4% from its 1960 level. Nearly 83% of the total energy consumed in the U.S. came from just three sources: petroleum, natural gas and coal. By contrast, these three accounted for 73% of the total energy consumed in the region, with the remainder coming from hydroelectricity, nuclear power, and other sources.