
Creativity After Google: Art and Science Interfaces


Biomedical engineering professor David Edwards feels that creativity is catalyzed by the act of crossing the conceptual boundary between art and science.

Published September 1, 2008

By Adelle C. Pelekanos

David Edwards. Photo by Eliza Grinnell.

Pop quiz: What is the 43rd element on the periodic table? Who served as Secretary of State under President William Taft? What was the name of the fourth Star Trek motion picture?

Unless you’re preparing for a turn on Jeopardy, you’re probably not storing that knowledge in your head. No matter. In less than a second, an Internet search engine can produce it for you: Technetium; Philander Knox; and Star Trek IV: The Voyage Home.

In an age when information is increasingly at our fingertips, the ability to retain facts ranks second to the power of problem solving. And leaders of innovation are asking how an educational system that emphasizes rote memorization can generate creative thinkers.

Harvard biomedical engineering professor, entrepreneur, and author David Edwards tackles that question in a new book, ARTSCIENCE: Creativity in the Post-Google Generation. To teach people to be innovative, he suggests, requires an understanding of the circumstances that promote innovation. He proposes that creativity is catalyzed by the act of crossing the conceptual boundary between art and science. And to show what happens when creators traverse the invisible borders between disciplines and institutions, Edwards offers anecdotal evidence from his career and those of contemporary artists and scientists.

Artscience: One Word, One Process

“Artscience” is the term Edwards coined to describe the phenomenon by which creators float among the disciplines of art and science. An artscientist may be a cell biologist such as Don Ingber, who as a young Yale science student was inspired to take a design class. In it, he studied the structural principle of tensegrity in design, originally described by architect Buckminster Fuller. Ingber recognized this type of integrity in the cells he was studying in his lab, and went on to pioneer a new view of cellular structure.

Or, an artscientist may be a doctor such as Sean Palfrey, whose passion for medicine runs parallel to a passion for photography. Unlike Ingber, who carried a specific idea over the artscience boundary, Palfrey experienced another kind of catalyst for innovation. While working in the toughest neighborhood hospital in Boston, Palfrey spent his free time developing a photography technique involving multiple exposures. The attention to detail and analytical skills that he honed as a doctor complemented his photographic work, and the enjoyment he experienced taking pictures served to balance the stress of his difficult work at the hospital. His application of artscience in his leisure activities both sparked his creativity and revitalized him for his medical work.

From Musician to Mathematician

Diana Dabby is a third kind of artscientist—an artist by training who pursues scientific understanding to advance her creative work. One afternoon, Dabby, a professional concert pianist, became fascinated by articles she found in a music journal at the Lincoln Center Library. The articles, written primarily by engineers, piqued her curiosity about how science might help her better understand her music. Unable to let the idea go, Dabby researched engineering programs and eventually enrolled in City College, followed by graduate school at MIT.

Toward the end of her studies, Dabby finally had the mathematical tools to explore her music. Her eureka moment came when she developed a way to use chaos theory to generate musical variations. It had taken the intuition to follow her idea, the courage to risk her career as a professional musician, and the dedication of more than six years to produce her artscience innovation.
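Chaos theory can drive musical variation in many ways. As a flavor of the idea—this is a toy sketch, not the published technique the article describes—a chaotic logistic map can reorder the pitches of a theme, producing a variation that is fully deterministic yet extremely sensitive to its starting condition:

```python
# Toy sketch of chaos-driven musical variation (illustrative only, not
# the method described in the article): iterate the chaotic logistic map
# and use each value to pick a pitch from the theme.

def logistic_orbit(x0: float, r: float, n: int) -> list[float]:
    """Iterate the logistic map x -> r*x*(1-x) for n steps."""
    orbit, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

def chaotic_variation(theme: list[str], x0: float = 0.3, r: float = 3.99) -> list[str]:
    """Reorder the theme's pitches by mapping each chaotic value to an index."""
    return [theme[int(x * len(theme))] for x in logistic_orbit(x0, r, len(theme))]

theme = ["C", "D", "E", "G", "A", "G", "E", "C"]
print(chaotic_variation(theme))           # one variation of the theme
print(chaotic_variation(theme, x0=0.31))  # a nearby start soon diverges
```

Because the map is chaotic, a tiny change in the starting value `x0` yields a different variation within a few notes, which is what makes such systems attractive as generators of endless, related-but-distinct musical material.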

Obstacles to Artscience Innovation

Edwards sees Dabby, Ingber, Palfrey, and other artscientists as “idea translators” who have taken an idea through three stages: conception, translation, and realization.

None of them set out with the intention to pursue artscience. Dabby’s idea of transforming chaos theory into music began merely as an intuition that mathematics and science could enhance her experience of music. She struggled with the decision to return to school, leaving behind her successful career as a pianist. But it is the move from one intellectual environment to another that Edwards says allows artscientists to create an aesthetic result from a scientific method, or the reverse.

Unfortunately, in a world that increasingly values innovation over information, our cultural and academic institutions are as yet ill equipped to foster creative innovation of the kind Edwards describes. “We value creators in business, culture, education, and society, but somehow we struggle to create institutional environments to welcome them,” he says.

Indeed, the divide between art and science that Edwards recognizes as a catalyst for creativity can contribute to the “administrative inertia” that weighs on academic institutions. “That we institutionally encourage these modern prejudices through our dizzying array of disciplines and internal departments stems from the specialization of human knowledge, expression, and experience,” Edwards writes. He calls this the educational institutional problem.

The Artscience Lab

To overcome such institutional barriers and catalyze innovation, Edwards proposes an intellectual artscience environment he calls the “laboratory.” Edwards promotes development of workspaces “for the societies, industries, cultural institutions, and research and education institutions in which artists and scientists might create a place that allows…creativity…to spread as pervasively as good ideas today should.”

Edwards walks his talk. In October 2007, he opened Le Laboratoire, the world’s first artscience center, in Paris. The 14,000-square-meter open area includes a large exhibition gallery, a design atelier, offices, and the commercial spaces for Le LaboClub, a members-only recreation area, and Le LaboShop, which sells merchandise.

Edwards says he founded the not-for-profit Le Laboratoire as an “art and design experimental center” where students could learn “to more effectively realize ideas about which they can feel passionate and which are simultaneously relevant to society, industry, and culture.” He underwrote a significant chunk of its operating expenses through the sale of his own biotech startup. A range of cultural, nonprofit, commercial, and educational institutions including Epson, the Wellcome Trust, and Harvard University also provide funding to support the break-even operation.

Edwards describes the center’s first few months as a “completely wild experience,” with artists and scientists collaborating on various experiments that might produce “theater in the street, visual art in the office, or opera in the bathroom.” He says, “What takes place in the laboratory continues to change as the world’s issues and needs change.”

Inspired By NASA

One of the first projects of Le Laboratoire has been Edwards’ own collaboration with French designer Mathieu Lehanneur on an air filtration system called Bel-Air, inspired by observations of NASA scientists who detected unusually high levels of toxins in the blood of astronauts returning from space missions. The NASA team experimented with certain plants that acted like natural filters, absorbing and metabolizing the noxious chemicals emitted into the air by the artificial materials of the space suits and shuttle environment.

Edwards and Lehanneur recognized similarities between the space shuttle environment and a modern home—both of which play host to high levels of fine particles emitted by plastics, insulation, and other modern building materials. The two envisioned a kind of living filter that would absorb and metabolize airborne particles, the way plants did in the space shuttle. Bel-Air aims to maximize the natural absorptive properties of plants by optimizing the filtration capacity of leaves, roots, soil, and plant water.

Bel-Air qualifies as artscience because Edwards and Lehanneur drew on the science of plant metabolism and airborne chemistry to create a functional appliance that is also aesthetically pleasing.

Innovation By Example

In addition to artscience experiments such as Bel-Air, Le Laboratoire carries out public programming and education in keeping with its goal of serving as a center for learning that complements traditional school environments, in which, Edwards says, specialization is inevitable.

When specialization stifles creativity, the artscience lab is an invaluable environment for students, he says. Edwards sees the artscience lab as a supplement to the specialized learning that pervades education today. Through Le Laboratoire partnerships, such as one with Harvard, university students can work alongside established artists and scientists at the lab. Edwards hopes that this model for collaboration will be repeated between other educational and cultural institutions, catalyzing innovation and complementing similar missions.

Younger students can benefit from learning about the innovative work happening at the artscience lab in less formal ways. On any given day, students of any age can be found seated cross-legged on the ground at Le Laboratoire, chatting about exhibits and scribbling in notebooks, Edwards says. By presenting examples of creators freely exploring ideas through various methods, he says the artscience lab effectively teaches innovation by example.

“Giving kids the opportunity to play on that artscience interface is a gift,” Edwards says. “We learn best by being passionate for an idea, and by having enough fearlessness and experience to pursue those ideas wherever they’ll take us.”

Also read: The Art and Science of Human Facial Perception


About the Author

Adelle C. Pelekanos is a freelance science writer living in Queens, New York.

How Are Skyscrapers Able to Withstand High Winds?


While building codes do not require wind tunnel testing for new skyscrapers, engineers and architects conduct the testing anyway to ensure precision and efficiency during construction.

Published July 1, 2006

By Deborah Snoonian

Image courtesy of demerzel21 via stock.adobe.com.

Before glass, steel, and concrete, there were plastic, plywood, and pressure sensors. And even in this age of computer-aided design and analysis, engineers still build scale models of buildings to see if the full-sized real ones can withstand strong winds.

That explains why in 2002, researchers at the Alan G. Davenport Wind Engineering Group at the University of Western Ontario (UWO) built a 1-to-500 scale replica of 7 World Trade Center and the surrounding neighborhood, measuring about a foot and a half tall. They placed the model carefully inside a boundary-layer wind tunnel, a 128-foot-long, 11-foot-wide, and 8-foot-high apparatus equipped with a wind machine that can simulate everything from gentle breezes to gusts of hurricane intensity. Then, as the wind blew, sensors attached to and around the model logged thousands of readings of pressures, speeds, and deflections. Later, researchers analyzed the data to spot potential wind-related problems and compared them to computer-model predictions.
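The pressures those sensors log grow with the square of wind speed, which is why hurricane-force gusts matter so much more than everyday breezes. A minimal sketch of the underlying relation—the air density and speeds below are illustrative assumptions, not measurements from the UWO study:

```python
# Dynamic (velocity) pressure of moving air: q = 0.5 * rho * v**2.
# Values are illustrative only, not data from the 7 WTC wind-tunnel study.

RHO_AIR = 1.225  # kg/m^3, standard sea-level air density

def dynamic_pressure(speed_m_s: float) -> float:
    """Wind dynamic pressure in pascals for a speed in meters per second."""
    return 0.5 * RHO_AIR * speed_m_s ** 2

# Quadrupling of load with a doubling of speed is why gust extremes,
# not average winds, drive cladding and sway design.
print(dynamic_pressure(10.0))  # a fresh breeze, ~61 Pa
print(dynamic_pressure(40.0))  # a hurricane-force gust, ~980 Pa
```

The square law means a 40 m/s gust loads a facade sixteen times harder than a 10 m/s breeze, which is the kind of extreme the tunnel's wind machine is built to reproduce.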

Such a study is common practice in the design of a tall building, ensuring its safety and the comfort of occupants and pedestrians. The studies verify that skyscrapers are flexible enough to withstand high winds without toppling over (all tall buildings are designed to sway slightly), and that strong gusts won’t rip off or break the cladding (I.M. Pei’s John Hancock Tower in Boston notoriously suffered falling and broken windows during its construction in the 1970s). As for comfort, engineers aim to prevent occupants from detecting the building’s motion by making sure it moves slowly and gently. Wind speeds at the base of the building are monitored so that pedestrians won’t have to endure strong gusts.

Wind Tunnel Testing Not Required

Although building codes don’t require wind tunnel testing, they usually permit architects and engineers to base their designs on test conclusions. This typically results in buildings that are engineered more precisely and efficiently—and therefore less expensively—than what is mandated by conservative building codes.

The architects and engineers for 7 WTC, Skidmore, Owings & Merrill (SOM) and WSP Cantor Seinuk, respectively, had access to data on many similar existing tall buildings. But the timing presented a challenge, because no master plan was yet in place for Ground Zero. Researchers tested three models: one of 7 WTC with no structures at Ground Zero (which is what exists today), and two that included surrounding buildings at various heights and orientations, which affect the wind speed and direction around 7 WTC. “We had to make some assumptions about what might get built there, so we made them conservatively,” says Silvian Marcus, chief executive officer of WSP Cantor Seinuk.

In the last decade or so, emerging analytical methods such as computational fluid dynamics (CFD) have allowed designers to study the complex behavior of air movement around buildings without the use of scale models or wind tunnels. But by all accounts, it will be years before computer-only wind studies become the norm.

Immensely Complicated and Computationally Intensive

One reason is that wind tunnel facilities—there are just a few in North America—have given designers the ability to look not only at the effects of wind, but also at other weather-related effects like snow and at the performance of other systems such as air intakes and exhaust fans. These are “all things that are critical to building performance,” says SOM partner Carl Galioto.

More fundamentally, calculating airflow around buildings is both immensely complicated and computationally intensive. At this stage, CFD software for buildings requires a high level of expertise, produces results that are highly dependent on assumptions, and tends to be used only by wind-tunnel facilities themselves.

Change will come when the software and processing power improve. “I’d like to be able to use CFD analysis to spot check parts of buildings that tend to be problem areas for wind pressure, like corners and parapets, and then confirm the CFD predictions with a physical test prior to construction,” says Nicholas Holt, SOM’s senior technical architect for the project. “Eventually, with enough data corroborated by physical models, codes will likely begin to accept CFD analysis in lieu of wind tunnel testing.”

In the meantime, though, engineers will keep the plastic, plywood, and pressure sensors handy.

Also read: Green Buildings and Water Infrastructure

What’s Old Is New: A Revitalized Downtown NYC


A convergence of real estate development, infrastructure improvements, and diverse cultural offerings is redefining Lower Manhattan, harkening back to the city’s colonial days.

Published July 1, 2006

By Pamela Sherrid

The block of Front Street just north of the South Street Seaport in Lower Manhattan was a sad sight for most of the last 30 years. Vintage commercial buildings built by prosperous merchants at the end of the 18th century stood derelict and nearly empty.

But today, life is stirring on Front Street. Real estate developers, helped by low-cost public financing, recently renovated 11 old buildings and built three new ones, creating 96 chic apartments that were all quickly snapped up by renters. On a recent sunny spring afternoon, entrepreneur Sandra Tedesco was unpacking bottles at her new wine bar, Bin No. 220, the first retail business to open on the block. A coffee bar, a dry cleaner, a sushi place, and a gourmet grocer—those basic upscale urban amenities—are also on the way.

Sandra and her business partner, Calli Lerner, both pioneering residents in the Financial District, are engines of the change that is sweeping Lower Manhattan. “We had nowhere we could walk to have a nice glass of wine and relax,” says Tedesco. So, both experienced in the restaurant trade, the partners are remedying the situation by opening a cozy neighborhood place.

A Neighborhood on the Move

If all you know about downtown is the seemingly endless squabbling about what will be built at Ground Zero, you are missing the big picture. Lower Manhattan is not only being rebuilt, it is morphing into a much more diverse and lively neighborhood. No longer is finance the only employer, nor do the streets echo emptily at 7 p.m. “This is definitely not the Downtown we once knew,” says Mary Ann Tighe, CEO of the New York Tri-State Region at real estate firm CB Richard Ellis. Baby strollers roll right by bankers’ limousines and green parks are sprouting amidst the concrete canyons.

Two powerful forces—the free market and the government—are working in tandem to improve life downtown. Rentals and condos are less expensive below Chambers Street than in many spots elsewhere in Manhattan, luring singles and families. That relative value is even greater for office space, attracting many nonprofit organizations and firms in everything from biometrics to publishing.

As for the public sector, it is spending billions to make Downtown an architectural and cultural showplace as a moral victory over terrorism. “Despite wishing terribly that 9/11 never happened, it does present us with a chance to look at Lower Manhattan from top to bottom, to evaluate its assets and see how it can be improved,” says Stefan Pryor, president of the Lower Manhattan Development Corp (LMDC).

Transportation Projects

The really big-ticket items are transportation projects that will make Downtown easier and more pleasant to travel to and move around. A new Fulton Street Transit Center, with an expected completion in 2008, will untangle the maze of ramps and passageways that connect a dozen subway lines. Its dramatic glass-and-steel pavilion entry at the corner of Fulton Street and Broadway, designed by prominent British architect Nicholas Grimshaw, will let natural light filter down to below street level.

The Port Authority hired an even better-known international “starchitect,” Santiago Calatrava, to design a new PATH Terminal at the World Trade Center, also currently under construction. A pedestrian underground concourse will be built to connect the Fulton Street Transit Center to the PATH terminal and to the World Financial Center further west. A proposed rail link to JFK airport, requiring a new tunnel under the East River, would make travel much faster between Downtown and anywhere on Long Island. It is not a done deal, but already funding is in place for more than half its $6 billion cost.

Arts and Leisure

Public spending is also revving up the cultural life Downtown. This spring 63 Lower Manhattan arts organizations and projects received a total of $27 million in grants that are expected to spur private donations of many times that sum. The Flea Theater, an award-winning Off-Off-Broadway theater known for nurturing innovative playwrights, is hoping to upgrade its building and create more rehearsal space.

The Poets House, which offers lectures and readings, and houses the nation’s largest collection of poetry books and media open to the public, will be moving next year to a beautiful river-view home in Battery Park City, just a short walk from The New York Academy of Sciences (the Academy).

The River to River Festival presents over 500 performances downtown from June through September, including a diverse range of music that includes pioneering rappers, The Sugar Hill Gang, and the lush-sounding indie rockers, Belle & Sebastian. And music is just part of the happenings: On a Sunday afternoon, for instance, a family can see a tap dance demonstration and then take part in a marathon reading of Walt Whitman’s “Song of Myself” aboard a tall ship.

Downtown nature lovers can celebrate, too. Government money is improving and creating more than a dozen parks and open spaces. At the foot of Broadway, Bowling Green, the nation’s oldest park, has been relandscaped, creating an oasis of green. Kiosks serving sandwiches and salads will open this summer in Battery Park; patrons can sit at café tables set amidst 57,000 square feet of newly planted perennial gardens and enjoy the views of New York harbor.

Governors Island

Governors Island, that 172-acre gem located just 800 yards off the southern tip of Manhattan, is a magnificent wildcard in the future of Lower Manhattan. In 2003, the federal government transferred control of most of the island to the State and City of New York. The public entity created to decide the island’s future has sketched out varied possibilities for redevelopment, ranging from entertainment park to innovation center. This spring more than two dozen proposals for development flooded in to meet a May deadline.

Live, Work, Visit, Enjoy

Meanwhile, the boom in residential population in Lower Manhattan—more than doubling in the past 15 years to 36,000—is also a boon for workers and visitors. As is the case with Bin No. 220 on Front Street, many of the businesses that are opening to serve residents also make it a nicer place to visit.

Lower Manhattan is now the fastest growing residential neighborhood in New York City, and not only in the traditional residential area of Battery Park City. Wall Street has been synonymous with finance for hundreds of years, but many of the older office buildings there can’t accommodate the high-tech wiring needed for modern trading.

So every building on the south side of that famous row from Broad Street to Water Street has been or is being converted to condos or rentals. “At 6 p.m. I now see people coming out of the subway on Wall Street on their way home,” says real estate broker Vanessa Low Mendelson, who not only sells luxury condos downtown, but also lives there with her husband and 18-month-old baby.

The Sound of Hope and Renewal

Of course, all these changes can’t happen without disruption. There’s a huge amount of construction going on downtown, bringing with it noise, blocked streets and sidewalks, and weekend subway station closures. “What’s going on in Lower Manhattan is like having open heart surgery while running a marathon,” says Eric Deutsch, president of the neighborhood business group Downtown Alliance.

But many people find in the commotion the sound of hope and renewal. In a 2002 speech, Mayor Bloomberg outlined his vision of Lower Manhattan as a bustling global hub of culture and commerce, and a live-work-and-visit community for the world. “If you study New York history,” he said, “you realize that it is often at the moments when New York has faced its greatest challenges that we’ve had our biggest achievements.”

Also read: 7WTC: A New Home, A Return to Downtown


About the Author

Guest Editor Pamela Sherrid is a veteran of U.S. News & World Report, Forbes, and Fortune magazines.

7WTC: A New Home, A Return to Downtown


The Academy’s new home features elegant architecture, intriguing conceptual art, and advanced environmental and safety engineering.

Published July 1, 2006

By Glenn Collins

Image courtesy of quietbits via stock.adobe.com.

7 World Trade Center was the last tower to fall on September 11, 2001, and the first to be reborn at Ground Zero. This shimmering, sharp-edged, 52-story parallelogram redefines the cityscape, and the arrival of new occupants fulfills a dream for those who dealt firsthand with the rubble that preceded it.

Fortunately for incoming tenants—and those from The New York Academy of Sciences (the Academy) will be among the first, following the developer of the building himself, Larry A. Silverstein—there is much more than the burden of memory to be acknowledged. There is also the promise of award-winning new architecture, state-of-the-art design, intricate technological solutions to daunting challenges and constraints, and a tower that is not only more environmentally responsible than any other in the city but, not incidentally, safer than any other as well.

A Poignant Transparency

There’s one aspect of the building—known simply as “Seven” by its designers and builders—that fascinates those who know it well. They call it the “stealth building” because its glass skin scatters light, and at times lets the building, from many different angles, inhabit the boundary between transparent and reflective.

The intensity of this magical effect is greatest in the early morning and late afternoon. Sometimes the shimmering surface takes on a seemingly supernatural glow, especially when viewed from the Hudson River. Its shining aspect changes dramatically with shifting light and varying weather conditions, and at times, when conditions are just right, “the elements of the building seem to merge with the sky,” said James Carpenter. He is the glass artist and MacArthur Fellow who helped design Seven, and envisions it as one huge prism.

Despite its eerie transparency, though, it is an office building. And Seven embodies the antithesis of insubstantiality in its vital statistics: it is 741 feet tall, cost $700 million to build, and has 1.7 million square feet of office space on 42 tenant floors. The tower is sheathed in 538,420 square feet of glass, more than 12 acres. Bounded by Barclay Street on the north, Vesey Street on the south, Washington Street on the west, and Greenwich Street on the east, it is within five minutes of 13 subway lines, the PATH system, and New York Waterway ferries. It takes just one sharp right turn from its doors to reach the West Side Highway.

Zip Code: 10007

But Seven has always been more than just a building. Many of those involved in the new Seven have searing memories of the day when the old one fell, including Mr. Silverstein of Silverstein Properties, the developer of both the original tower and the new one, on land leased from the Port Authority of New York and New Jersey. Mr. Silverstein had a dermatology appointment on September 11, and therefore missed a breakfast meeting in Windows on the World, where no one survived.

Rebuilding Seven was an especially emotional experience for the workers who built it twice, like Elio Cettena and Mike Pinelli, who were onsite supervisors for Tishman Construction Corporation, the construction manager of both the new tower and the old in 1985.

And it is no exaggeration to say that every milestone of the building’s creation was followed avidly, from its November 2003 groundbreaking to its topping-out ceremony on October 21, 2004. Then, after 15 tons of steel had been installed, the final beam was positioned on the 52nd floor as 500 construction workers, Governor George Pataki, Mayor Michael Bloomberg, and Silverstein looked on. With smiles, salutes, and not a few tears, the steel beam—which was adorned with the same American flag used in the topping-out ceremony for the original 7 World Trade Center—was hoisted 750 feet in the air to its place at the summit of the building.

Gateway to a New Downtown

Back in the clean-up phase after 9/11, open forums had made it clear that the public wanted streets to run through the redeveloped Trade Center site, unlike the former 1960s design with a “super-block” pedestal that brought street traffic to a dead end. Both community activists and Seven’s lead architect, David Childs of Skidmore, Owings & Merrill, advocated that Silverstein reduce Seven’s footprint to reestablish Greenwich Street, one of the city’s oldest north-south thoroughfares.

Silverstein acceded. Instead of proceeding directly south on Greenwich Street, travelers will take a jog around a new, triangular, 15,000-square-foot park and pedestrian plaza. Planted with 60 sweetgum trees and boxwood shrubs, the plaza will serve as an amenity to occupants and also, in the words of Childs, “as the gateway to the Trade Center site.” To come is a complex of office towers including the 1776-foot-tall Freedom Tower, a retail center on Church Street, the World Trade Center Memorial, and a commuter station designed by architect Santiago Calatrava. In front of Seven, a short stretch of Greenwich Street will serve as a private drive for taxis and limos.

As for the building itself, Mr. Childs was aiming for “restrained beauty and perfect pitch,” he said, that would derive its effect not only from formal restraint but also from attention to detail.

Lighting the Way

But to do so, the architects had to respond to “a unique set of design challenges,” Silverstein said. Those challenges are so unusual that the building is actually a feat of architectural legerdemain: Childs had to place a delicate skyscraper atop one of the ugliest pedestals in any Manhattan building, a monumental $100 million Con Edison substation.

Sheathed in concrete like the old substation, which was destroyed on September 11, the new substation has three transformers putting out 80 megawatts of power not only for Seven, but also for Battery Park City and, eventually, the buildings of the rebuilt Trade Center site. It can accommodate seven more transformers up to a total of 10, one more than the inventory in the original substation.

The facility, one of 24 substations in Manhattan, reduces 138,000-volt power from generating stations to more manageable 13,000-volt current distributed to residential and commercial customers. If the $1.1 million, 20-foot-tall, 168-ton transformers are unsightly, the shifting of the base of the new Seven to the west of the previous Seven (to create the park and reestablish Greenwich Street) made the Con Edison vaults even uglier, because the transformers had to be stacked vertically to save space.

Free Flow of Air

Since the transformers generate heat, they posed another constraint: The wall around them had to permit air to flow freely. Worse, atop the seven-story concrete substation, three floors of the building had to be devoted to mechanical equipment. Rentable offices, therefore, could not be situated until the 11th floor.

While it was obvious that the lower floors had to be clad in some sort of curtain wall (an independently supported outside screen), Childs was adamant that the solution to the quandary of the skyscraper’s base be “something integral, that was designed from the start,” he said, adding that it could not be some fig leaf-like “external piece of art.”

Hoping, as in the haiku and the sonnet, that limitation would be the catalyst for art, Childs sought out Carpenter, a sculptor and architect whose designs have summoned effects from the characteristics of light.

The substation problem came down to one question for Carpenter: “How do you turn an absorptive concrete block,” he asked, “into a reflective, emotive surface?”

A Solution

Carpenter’s solution was to design a sculptural installation for the base of the building, “a stainless-steel scrim that is animated with light,” he said, visually shifting naturally by day with the changing light conditions, and artificially at night with programmed illumination sequences using light-emitting diodes (LEDs). At the same time, the wall could also double as a porous ventilator for the hidden vaults of the three-story transformers, dissipating their heat.

And so, the wall is built of elegantly polished, precisely machined panels—each 15 feet tall, 5 feet wide, and weighing 1,500 pounds—of triangular steel prism bars set in inner and outer rows.

During the day, these 130,000 prisms reflect ambient light and make the wall an active surface, capturing the sky in different directions, since the prism sections are set off by 15 degrees from each other. “The wall creates a moiré effect that moves by you, as if you are walking past stretched silk,” Childs said.

At night, on the north and south walls, 220,000 blue and white LEDs illuminate the wall of prisms from within, subtly reflecting off the steel and into the street. The diodes are easy to maintain, and give off little heat. At night, 12 motion-sensing cameras are programmed to follow passers-by, marking their passage in columns of multistory blue light on a white ground.

Art and Innovation

The building lobby, which has the postal address of 250 Greenwich Street, posed another architectural challenge: It had to be sandwiched between two banks of transformer vaults framed by unsightly structural columns five feet in diameter. Childs opened it all up with a street-side, 46-foot-tall curtain wall of glass that welcomes daylight into the lobby; he clad the columns in reflective stainless steel.

Then Childs commissioned an artistic centerpiece: A dominating, floor-to-ceiling, 14-by-65-foot wall of acid-etched translucent glass illuminated by whitish light-emitting diodes, created by Carpenter and the conceptual artist Jenny Holzer. Like Chinese calligraphy, Holzer’s work uses words as art, at the same time as it plays with the power of commercial billboards.

At Seven, she has programmed the wall to display thousands of ghostly white, streaming words of text. This never-ending ribbon of poetry and prose by dozens of authors—from Elizabeth Bishop and Allen Ginsberg to Langston Hughes and Walt Whitman—evokes the history of New York. Though the artwork resides in the lobby, it is visible to pedestrians outside, as well as to those congregating in the Greenwich Street park; it even can be seen blocks away, on Church Street.

The sum of these architectural efforts helped 7 World Trade Center to win a Municipal Art Society annual MASterwork award for urban design this year.

Building Safe

But Seven is far from being just a provocative pretty face. It is also the first office tower in New York City to win gold certification for its green—or environmentally sensitive—architecture, which seeks to reduce energy and natural-resource consumption and lower the building’s impact on the environment. The design incorporates the recycling of rainwater and the use of natural light and air in an interior where toxic materials have largely been eliminated.

Given the building’s location, security was also a crucial concern. “It was a challenge to put an office building on top of a power substation, but it was equally a challenge, if not more so, to create a building that would be safe—safer than anything that had previously been built,” said Silvian Marcus, chief executive officer of WSP Cantor Seinuk, the building’s structural engineers. For its work on 7 World Trade Center, his company recently won the 2006 award for engineering excellence from the New York Association of Consulting Engineers.

Marcus said that Seven’s steel framing “has sufficient redundancy to prevent a progressive collapse,” which was the failing of the Trade Center towers. Nevertheless, “the framing could not interfere with the exterior look of the building, so we tried to make the framing invisible,” he said.

The Bulky Base

Among the building’s security consultants was an Israeli firm experienced in blast effects. Ironically, the bulky base of 7 World Trade Center dramatically enhances its safety, security experts say, since its tenant space starts at the 11th floor above grade, well above immediate street-level blast effects from vehicle-borne explosives. Indeed, the New York Police Department has insisted that the Freedom Tower at Ground Zero—also being designed by Childs—incorporate a similar structurally massive base that is distinctly different from its upper office floors. The designers are likely to use stainless-steel cladding to hide it, adopting design and artistic strategies similar to those pioneered at Seven.

At the base of 7 World Trade Center, the architects have used glass, including the 46-foot-high lobby facade, that is laminated with layers of polyvinyl butyral, which stabilizes blast-shattered panes and keeps shards in place. Holzer’s language wall is also a security blanket. Not only does it screen the private precincts of the building, but its lamination also acts as a blast shield, and its high-strength steel members will yield if they encounter an explosion.

Massive Concrete Central Core

Beyond this, the architects’ principal answer to disaster at Seven is its massive concrete central core, which extends from the base to the top, placing a shield around the stairways, the elevators, sprinkler pipes, and electrical conduits.

The base-to-top core, for the most part two feet thick, is reinforced with double mats of steel reinforcing bars. Its two stairwells will be located at opposite sides of the core, about 110 feet apart, cutting down the possibility that they could be damaged at the same time.

The stairways are oversized, five and a half feet wide—20 percent wider than required by the city code—to permit rapid evacuation. They are fitted with independent emergency lighting and glow-in-the-dark paint and are pressurized to prevent the intrusion of smoke in case of a fire.

The stair landings are extra deep—8 feet by 11 feet—to enable employees in wheelchairs to wait for rescue while the more mobile are able to step past them. The stair treads are wide enough to permit people to walk down, Silverstein said, while emergency workers are walking up. The four fire stairs exit directly to the building’s exterior, preventing bottlenecks or the possible confusion that might result from exits that lead through the main lobby.

A Renaissance, With a View

The beauty of the execution is that the Academy’s visitors will remain blissfully unaware of all this contingency planning. They can simply enjoy the building’s location, elegant architecture, and the Academy’s striking new offices on the 40th floor. Seven will be a welcoming landmark on the route from trendy Tribeca, with its mix of shopping and restaurants, to the cultural institutions of Lower Manhattan. Looking out the Academy’s office windows, or pausing in Seven’s park, its new tenants will have a privileged close-up view of New York’s oldest neighborhood being made new again.

Also read: Academy’s Past: Where It All Began


About the Author

Glenn Collins reports on Ground Zero for The New York Times.

Advancing Science 40 Stories Atop New York City

An artist’s rendering of the Academy’s new conference room.

The Academy’s new home on the 40th floor of 7 World Trade Center will convey our distinguished heritage while also establishing an efficient environment for new ideas.

Published July 1, 2006

By Hugh Hardy

Reception area at 7 WTC. Image courtesy of H3 Hardy Collaboration Architecture.

In 1950, a mansion on East 63rd Street was the answer to The New York Academy of Sciences’ (the Academy’s) dreams. With its sixteenth-century Italian mantel in the entry hall and a library of carved English oak, the building exuded an air of old-world scholarship and elegance that suited members and impressed visitors.

Today, however, the Academy needs more office and meeting space than the mansion can provide. What’s more, the building’s traditional interiors and furnishings give no hint of the Academy’s progressive nature and mission. Rather than shrink from change, as its current rooms dictate, this institution embraces it. This outlook will become abundantly clear when members make their first visit to the Academy’s new home, forty stories in the air, at 7 World Trade Center. With spectacular urban and water views from all points of the compass, this aerie will dramatize the institution’s central role in New York’s scientific life and signal its vitality to visitors who come from around the world to participate in its activities.

Of course, the Academy is not abandoning its traditions. Science is built upon the work of previous generations and on many legacies of investigation and thought, even as it crosses frontiers into the unknown. This project’s design challenge lies in conveying the Academy’s distinguished heritage while also establishing a contemporary and efficient environment for its forward-looking activities.

A Magnificent Blank Slate

The Academy looked for space in many older office buildings, where it would have had to make decisions about what lobby space, offices, and conference rooms to keep and what to change. Instead, by renting (on advantageous terms) the entire 40th floor of a spanking new building, the organization was presented with an expanse of raw space, a magnificent blank slate. Seven World Trade Center is the only structure in the city whose floor plate is a parallelogram from bottom to top, and it offers 28,000 usable square feet per floor, without a single column between its central core and its perimeter walls of glass.

Our floor plan for the Academy bisects the building’s parallelogram on a north-south axis to accommodate two basic functions, one private, one public. The eastern portion is devoted to public areas, containing a lobby, reception space, three meeting rooms, “breakout” areas, and the president’s office. The western half of the floor contains offices for the staff and support areas.

The Academy’s links to the past are made clear in the entrance lobby, where a monumental bronze bust of Charles Darwin, which long graced the Academy’s garden, is prominently displayed to the left of the entry. Behind the reception desk is a sculptural metal “art wall.” Its openwork filigree echoes nineteenth-century street patterns and illustrates the Academy’s three original downtown locations. This patterned surface forms a sloping wall, dividing the entrance lobby from a generous socializing space by the windows. From here, views of Lower Manhattan will astonish visitors. At this vantage point, flatscreen monitors will direct participants to their meeting areas, announce current activities, and present the latest multimedia web offerings from www.nyas.org.

A Focus on Flexibility and Sustainability

A meeting room at 7 WTC. Image courtesy of H3 Hardy Collaboration Architecture.

Conferences and meeting presentations require concentration, without the distraction of fascinating views. Therefore, three meeting rooms are fashioned so that each can shut out the panoramas. One of the conference rooms, shaped like a pod, is totally enclosed, while the others have shades that can hide the view. Groups from 30 to 300 people can be accommodated.

To the northeast, in one of the wide corners of the parallelogram, movable walls provide further flexibility, permitting corridors to be joined with the largest presentation room. A pantry permits catered food service for special events. Throughout the project, we worked with the goal of flexibility, knowing that activities will change within rooms from hour to hour, day to day.

Green concerns informed our planning. Lighting zones are monitored by motion sensors, and lights turn off after an allotted time if no one is present. Photometric sensors tied to the westernmost lights automatically turn them off during bright afternoon sunlight. In addition, almost all of the lighting is energy-efficient fluorescent. Carpet tile is being used to reduce waste.

If areas of the carpet wear out over time or are stained, only those tiles need to be replaced instead of an entire run of carpet. The desk chairs are 44 percent recycled and 99 percent recyclable, and offices and workstations use high proportions of recycled materials, including steel paneling and mineral board, and glues and finishes that do not contain volatile organic compounds. Fabric for all of the upholstered walls and cubicles is 100 percent recycled polyester.

Combining Utility and Aesthetics

This institution has long held art in high esteem, using many forms of expression to suggest the shared interests of artists and scientists. An 80-foot-long gallery runs the length of the building’s interior core and will contain artworks relating to the Academy’s programs. Photographic panels, designed by the graphics firm 2×4, will decorate the conference rooms.

Those large images—some in black-and-white, some in color—depict details of the natural environment as seen through an electron microscope, as well as flowers distorted by anamorphic projection. The Academy’s new interior design utilizes materials that juxtapose tradition with innovation. We custom-designed a red carpet woven with a decorative gray-and-blue version of the DNA double helix. The carpet will offset paneling of light-colored wood.

After the Academy’s move this fall, visitors will enjoy a distinctive new facility that will encourage communication, discovery, and the generation of research and ideas. The Academy’s physical transformation represents its confidence in the future and its prominent role in the scientific and intellectual leadership of New York.

Learn more about the Academy’s history.


About the Author

Hugh Hardy and his firm, H3 Hardy Collaboration Architecture, are designing the space. Among Hardy’s well-known projects in New York are the redesign of Bryant Park, the visitor center at the New York Botanical Garden, and the restoration of the BAM Harvey Theater.

Exploring the Science and History of Thermodynamics

An old mechanical device, made of steel/iron and wood.

From the boilers that heat water in our homes to the engines in our vehicles that allow us to travel with ease, thermodynamics is an often-invisible part of our everyday lives.

Published May 1, 2006

By John H. Lienhard

Boulton and Watt Rotative Beam Engine – the ‘Lap’ engine. This is the oldest essentially unaltered rotative engine in the world. Built by James Watt in 1788, it incorporates all of his most important steam-engine improvements. The engine was used at Matthew Boulton’s Soho Manufactory in Birmingham, where it drove 43 metal polishing (or ‘lapping’) machines for 70 years. Image courtesy of the Science Museum Group © The Board of Trustees of the Science Museum, London. This image is released under a CC BY-NC-SA 4.0 Licence. No changes made.

The president of France, Sadi Carnot, was stabbed by an anarchist on June 24, 1894. The vein to his liver was severed, and he bled to death in the hospital. This touches our story in two ways:

First, the darkness of venous blood was one of the “tells” that led people to accept the idea of energy conservation, the first law of thermodynamics. Questions about how blood maintains human body temperature had helped people to see that our bodies derive both work and heat from the chemical energy of food.

Second, President Carnot’s uncle, also Sadi Carnot, and his grandfather, Lazare Carnot, were key players in the struggle to understand the rules that govern heat and work. Their efforts led to what we call the second law of thermodynamics, the idea that no engine can ever be 100 percent efficient, and that all natural processes degrade energy. Yet neither senior Carnot accepted the first law of thermodynamics – the idea of energy conservation.

Black and Phlogiston

Many towns in France have a square, avenue, or street named Carnot, but it is hard to tell which Carnot it honors: Lazare, best known as the “organizer of victory” during the revolutionary wars of the 1790s; his son, Sadi, who died at 36 having published just one work, yet whose name is inextricably linked to the origins of thermodynamics; or Sadi’s nephew, who presided over the French Republic from 1887 until his assassination.

The story of the thermodynamical Carnots best begins about the time of Lazare Carnot’s birth, in 1753. Heat was then regarded as the “subtle fluid” phlogiston – the “substance” released during combustion. The young Scottish chemist Joseph Black was still thinking of heat as wedded to chemical change, but was asking just how much phlogiston it took to increase a material’s temperature one degree.

The Kindred Concept of Latent Heat

Black recognized that the amount must vary from material to material. By this time, both Fahrenheit and Celsius had provided excellent means for measuring the intensity of heat – its temperature. But should one not also have means for measuring its extent – its quantity? Black realized that he could heat a mass of water by transferring energy to it from another material. Since the heat leaving one mass is the same as that entering another, he could determine the heat capacity of any material by heating or cooling a known amount of water.

He also took an interest in the kindred concept of latent heat. At the transition points where a liquid boils or condenses (or a solid melts or freezes) it does so with no change in temperature. To measure the latent heat transferred in, say, melting, Black surrounded a known mass of ice with a known mass of hot water; then he measured how much the water temperature fell as the ice melted away.

These experiments led naturally to the British thermal unit or Btu (the energy needed to raise the temperature of a pound of cold water one degree Fahrenheit).
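Black’s method of mixtures still underlies introductory calorimetry. As a rough illustration, in modern SI units and with hypothetical masses and temperatures (none of these figures come from the article), the latent heat of melting ice can be backed out of an energy balance like the one Black used:

```python
# Illustrative sketch of Black's method of mixtures (hypothetical numbers):
# heat lost by the hot water = heat that melts the ice
#                            + heat that warms the meltwater to the final temperature.

C_WATER = 4.186  # specific heat of liquid water, J/(g*K)

def latent_heat_of_fusion(m_water, t_hot, t_final, m_ice):
    """Infer the latent heat (J/g) from a mixing experiment, with ice initially at 0 C."""
    heat_lost_by_water = m_water * C_WATER * (t_hot - t_final)
    heat_warming_meltwater = m_ice * C_WATER * (t_final - 0.0)
    return (heat_lost_by_water - heat_warming_meltwater) / m_ice

# Example: 500 g of water at 70 C melts 100 g of ice; the mixture settles at 45 C.
print(round(latent_heat_of_fusion(500.0, 70.0, 45.0, 100.0), 1))  # near the accepted ~334 J/g
```

The same balance, written in pounds, degrees Fahrenheit, and Btu, is exactly the accounting that gave the Btu its definition.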

The Rise of Caloric

Black at first thought he was manipulating chemical changes in matter, but he began to see that heat was not some component of matter, as phlogiston was imagined to be. Rather, it flowed in and out of matter. Phlogiston was about to be displaced by the new term caloric. Caloric gained its full definition in 1779 when Black’s student, William Cleghorn, set down rules for its behavior. Cleghorn’s rules helped to make a useful tool of caloric, but they also helped expose its eventual failings.

Cleghorn determined that caloric had to be a subtle invisible fluid. He explained thermal expansion by imagining caloric to be elastic, with particles that repelled each other. Cool bodies attracted caloric to different extents. That explained heat conduction and specific heats. Caloric had to take a latent form as water boiled at 212° F. It was “sensible” when it raised a material’s temperature. Caloric had to have weight because metals gained weight when they were heated.

Today we know that bodies expand as they are heated because their molecules repel one another. We recognize the gain in weight in metals as a chemical change, oxidation.

Not the Whole Story

Black knew Cleghorn’s rules were not the whole story, but he allowed that they correctly explained the experiments of Benjamin Franklin and others. He cautiously called the caloric theory “the most probable of any that I know.” Antoine Lavoisier, the French chemist, also liked the idea and coined the term calorique.

So the caloric theory remained for about seventy years. Not until atoms were far better understood would we realize that heat merely reflected atomic motion. However, in everyday life, we still speak of heat flow, or of bodies holding their heat, as if heat were behaving like a caloric fluid.

In our bones (or more accurately, in our muscles) we have always known that we can create heat by doing work. But how could frictional heating be reconciled with heat as a fluid? Caloric theorists tried to resolve that with increasingly tenuous arguments about how friction or deformation “released” caloric. They looked at frictional heating and saw, not a contradiction, but a phenomenon to be explained in terms of caloric. All the while, it was perfectly clear to everyone that the amount of caloric they could create was limited only by their own stamina.

A New Science of Thermodynamics

So the stage was set for the last act in the drama of writing a new science of thermodynamics. What had to be digested was the fact that thermal energy and mechanical work can be traded back and forth (the essence of the first law of thermodynamics).

Which takes the story back to venous blood. Natural philosophers were beginning to suspect that chemical reactions turned blood from red to dark. But estimates of the extent of chemical heating were too low to account fully for the heat.

Eighteenth-century physiologists had attributed blood heat to friction despite the caloric theory, and they continued to think so well into the 19th century. Not until 1843 did the French chemist Pierre Dulong have accurate enough data to show that chemical heating accounted for virtually all of blood heat. In an ironic twist, by removing frictional heating from physiology, Dulong effectively bolstered the lingering caloric theory.

Everyone who has ever studied the history of heat has struggled with the obviousness of mechanical friction. Yet even the idea that blood is heated by friction had failed to animate an anti-caloric movement. Friction as an instance of the convertibility of heat and work emerged as a rival to caloric only in the 19th century, after cannon-boring experiments made in Bavaria by the American expatriate Benjamin Thompson, Count Rumford. Thompson had become Count Rumford in Bavaria after a rapid and convoluted series of moves that began when he had to flee colonials who learned he was spying for the British.

Count Rumford’s Cannon

As a result of tests in which he generated unlimited caloric by boring cannon with blunt bits under water, Rumford was able to state quite plainly: “Anything which an insulated body, or system of bodies, can continue to furnish without limitation cannot possibly be a material substance; and it appears to me to be extremely difficult, if not quite impossible, to form any distinct idea of any thing, capable of being excited and communicated in the manner the Heat was excited and communicated in these experiments, except it be MOTION.”

Rumford continued his advocacy of a mechanical theory of heat after he left Bavaria and returned to England and France. At that point he took up a four-year relationship with Lavoisier’s widow, Marie, which ended in a short and disastrous marriage. It’s quite possible that the scientifically savvy Marie Lavoisier egged him on in his attack on caloric. In any case, before the marriage Rumford crowed: “I think I shall live to drive caloric off the stage as the late M. Lavoisier drove away Phlogiston. What a singular destiny for the wife of two Philosophers!!”

With that kind of rhetoric, we can hardly be surprised that the marriage failed. Rumford did indeed help drive caloric “off the stage” by setting a foundation for the first law of thermodynamics. But that would not happen just yet.

Even after Rumford, an anti-caloric faction failed to arise, and this is where Lazare and Sadi Carnot enter the story.

Lazare Carnot, Revolutionary Leader

From left: Lazare Carnot (1753-1823), Sadi N. L. Carnot (1796-1832), and M. F. Sadi Carnot (1837-1894).

Lazare Carnot was a remarkable figure. He was born in 1753 – the same year as Benjamin Thompson – and was educated in mathematics and military engineering. During his military service, he competed for mathematics prizes, and also had political dealings with the infamous Robespierre. While he was on garrison duty in the 1780s, Lazare Carnot began an intense affair with an aristocrat’s daughter.

Unbeknownst to Carnot, her father arranged her marriage to another aristocrat. Carnot, furious, went to the fiancé and revealed the affair. That broke up the marriage plans, but the father had Carnot thrown in jail for conduct unbecoming an officer and gentleman. This was 1789. The first events of the French Revolution were just taking place, and they led to Carnot being retrieved from prison after only two months.

His life had been pretty static up to that point. Now it began moving very rapidly. He was soon married (to someone else) and was elected to the Assembly. His skills in administering military missions led to his selection in 1793 as one of the 12 men on the Committee of Public Safety and, in 1796, as a member of France’s five-man ruling group, The Directory. They reorganized the government and ran it until Napoleon took power. Carnot served longer than any revolutionary leader except Napoleon.

A Mathematician and Technocrat

Carnot also started the Little Corporal on his rapid ascent to power by appointing him head of the Army of Italy, and Carnot would rally to Napoleon as his Minister of the Interior when he returned from Elba. However, after Napoleon’s fall, the returning monarchy remembered Carnot’s vote to behead Louis XVI, and he spent the rest of his life in exile in Germany.

Lazare Carnot was first a mathematician, yet strongly interested in technology. He also advocated active defense in fortification design, including what became known as Carnot walls – the high, heavy, detached walls built in front of forts, with loopholes for the exchange of fire. He befriended the Montgolfier brothers and Robert Fulton, who showed up in France trying to sell submarine designs. Carnot was an excellent violinist, but he thought like a technocrat. He once remarked: “If real mathematicians were to take up economics and apply experimental methods, a new science would be created – a science which would only need to be animated by the love of humanity in order to transform government.”

From Waterwheel to Steam Engine

Lazare Carnot’s attention naturally turned to power production. Imagine a perfect waterwheel, he said, in which no energy is wasted or dissipated. Water is stationary before it enters and stationary at the exit. Then he reached a very important insight: all motions would be completely reversible. Run the perfect waterwheel backward, and it would become the perfect pump.

Here Lazare’s son, Sadi, claimed his inheritance. In 1824, one year after his father died, 28-year-old Sadi Carnot wrote his sole monograph, Reflections on the Motive Power of Heat. In it, he asks us to conceive a perfectly reversible steam engine. If we could build such a machine, we could run it in reverse and pump heat from a low-temperature condenser to a high-temperature boiler. When the first refrigerators appeared 36 years later, they were exactly the reversed heat engines that Sadi Carnot had described.

Sadi “operated” his perfect engine in a thought experiment. In his mental engine, he used an ideal gas instead of steam. When he assumed the not-yet-fully-accepted fact that no engine can possibly act as a perpetual motion machine, he was able to show that the work of one kilogram of air in such an engine depends only upon the temperatures at which the air is heated and cooled.

The Basis for Carnot’s Theorem

That was the basis for Carnot’s Theorem: The motive force of a perfectly reversible engine depends solely upon the high and the low operating temperatures. (Those would be the boiler and condenser temperatures in a steam engine.) This sole dependence on temperature was the first step toward the second law of thermodynamics.

Carnot’s theorem would be true whether the engine used steam, air, or any other fluid. His ideal engine mirrored his father’s perfect waterwheel – a waterwheel that depends solely upon how far water falls through it. Yet neither father nor son accepted the conversion of work into heat or vice versa. (I can find no evidence that Lazare Carnot and his contemporary, Count Rumford, ever communicated.)
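In modern notation – a restatement, not Carnot’s own caloric-based argument – the theorem fixes an efficiency ceiling that depends only on the two absolute temperatures:

```python
# Modern restatement of Carnot's theorem: no engine operating between a hot
# reservoir at t_hot_k and a cold reservoir at t_cold_k (both in kelvin) can
# convert a larger fraction of its heat intake into work than this.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum fraction of heat intake convertible to work."""
    return 1.0 - t_cold_k / t_hot_k

# Hypothetical steam plant: a 450 K boiler and a 300 K condenser.
print(round(carnot_efficiency(450.0, 300.0), 3))  # 0.333 -- one third, whatever the working fluid
```

Raising the boiler temperature or lowering the condenser temperature is the only way to lift this ceiling, which is precisely the “sole dependence on temperature” Carnot identified.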

Sadi Carnot assumed that caloric was conserved as it passed through an engine, just as water passing through a waterwheel is conserved. Today we know that only part of the heat flowing into a boiler turns into useful work. A good fraction of the heat passes into the condenser. But since Carnot had couched his work in terms of indestructible caloric, the validity of what he said about steam engine performance seemed to bolster the caloric theory.

Clausius and Entropy

This strange turn of affairs meant that the demise of caloric had to await a new generation. Rudolf Clausius, born in 1822, finally synthesized our science of thermodynamics from these seemingly contradictory parts. Clausius showed how Carnot’s theorem and the conservation of energy complemented one another. Energy conservation said that less heat left a steam engine than entered it – the difference being converted into useful work. While that contradicted Carnot, it left Carnot’s theorem intact.

Clausius saw that something was being conserved in Carnot’s perfectly reversible engine – but something other than heat. He called it entropy, and defined it as the heat flow from a body divided by its absolute temperature. Entropy changes in a perfectly reversible engine balance out. As heat flows from the boiler to the steam, the boiler’s entropy is reduced. As it flows into the condenser coolant, the coolant’s entropy increases by the same amount.

No heat flows as steam expands in the cylinder or as condensed water is compressed back to the boiler pressure. Therefore, the entropy of the water or steam changes only when heat flows to and from the condenser and the boiler. The net entropy change is zero in that perfectly reversible engine and its surroundings. With this definition of entropy, Clausius was able to show that everything Sadi Carnot had claimed was true – except the part about heat, or caloric, being conserved.
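Clausius’s bookkeeping can be sketched with a toy calculation (the heat and temperature values below are hypothetical, chosen only for illustration):

```python
# Sketch of Clausius's entropy balance for a perfectly reversible engine:
# the entropy the boiler gives up, q_hot / t_hot, equals the entropy the
# condenser coolant receives, q_cold / t_cold; the leftover heat becomes work.

def reversible_engine(q_hot, t_hot_k, t_cold_k):
    """Return (work delivered, heat rejected) for a perfectly reversible engine."""
    entropy_transferred = q_hot / t_hot_k    # J/K leaving the boiler
    q_cold = entropy_transferred * t_cold_k  # heat rejected to the condenser
    return q_hot - q_cold, q_cold

work, q_cold = reversible_engine(900.0, 450.0, 300.0)
print(work, q_cold)  # 300.0 600.0 -- only part of the heat becomes work
assert abs(900.0 / 450.0 - q_cold / 300.0) < 1e-12  # net entropy change is zero
```

The assertion at the end is the modern statement of what Carnot thought was conservation of caloric: what is actually conserved in the reversible limit is entropy, not heat.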

Carnot’s Single Error

Once he corrected Carnot’s single error, Clausius could conclude that the efficiency of a perfectly reversible heat engine did indeed depend upon nothing other than the temperatures of the boiler and the condenser, just as Carnot had said it must. Carnot’s belief in caloric denied him the specific use of the word efficiency, but his central deduction remained intact.

Sadi Carnot died of cholera in 1832 and the image of his fevered blood brings to mind the dark venous blood of his nephew, Lazare’s grandson, its life-giving energy spent. What bizarre convergences these three generations offer – contradiction and resolution, terrorist politics and idealism, maddening complexity and elegant simplicity – and a crucial path along the road to understanding how things work.

Also read: Lockheed Martin Challenge Inspires Innovative Ideas

References

1. Brown, S. C. 1981. Benjamin Thompson, Count Rumford, MIT Press, Cambridge, MA.

2. Carnot, S. 1897. Réflexions sur la Puissance Motrice du Feu (Reflections on the Motive Power of Heat), R. H. Thurston, Ed. John Wiley, New York.

3. Gillespie, C. C. 1970-1979. The Dictionary of Scientific Biography, Charles Scribner’s Sons, New York.

4. Lienhard, J. H. June 2006. How Invention Begins: Echoes of Old Voices in the Rise of New Machines, Oxford University Press, Oxford, New York. Much of the material in this article, and all the resources used in its making, are in this book.

5. Lienhard, J. H. Engines of Our Ingenuity radio program Web site. www.uh.edu/engines. Short essays on many of the themes of this article can be found and heard here.


About the Author

John H. Lienhard is M. D. Anderson Professor Emeritus of Mechanical Engineering and of History at the University of Houston, and the author and voice of The Engines of Our Ingenuity, a radio program heard nationally on Public Radio. His latest book is the forthcoming How Invention Begins: Echoes of Old Voices in the Rise of New Machines (Oxford University Press).

The Science and Cinema of the Brain

Sloan Foundation gets cerebral at the Sundance Film Festival, going into the science and psychology of motion pictures.

Published February 5, 2006

By Adrienne J. Burke

Image courtesy of Svitlana via stock.adobe.com.

How is your mind like a movie? Will new technologies enhance the way films convey cognitive experience? How will the ancient human capacity for processing emotions keep pace with rapidly accelerating cognitive experiences?

These and other questions were tackled by a panel of four scientists and three filmmakers at the Sundance Film Festival in Park City, Utah. On January 27, an audience of 250 filmmakers, journalists, and film enthusiasts attended “What’s on Your Mind? The Science and Cinema of the Brain,” an event hosted by New York’s Alfred P. Sloan Foundation, to discuss how movies can be tools for exploring the mind, for fulfilling the human need to vicariously experience emotion, or for mimicking the editing process in which our brains engage.

Meet the Panel

Moderating the panel was John Underkoffler, an MIT-trained engineer who has consulted as a science and technology advisor on films such as Steven Spielberg’s “Minority Report” and “The Hulk,” in which Nick Nolte plays a mad scientist.

Panelists, in order of appearance, were:

  • Lynn Hershman Leeson, artist and director of the films “Conceiving Ada,” about the contributions of the Countess of Lovelace to early computer science, and “Teknolust,” which won the Sloan Award at the 2002 Hamptons Film Festival;
  • Hal Haberman and Jeremy Passmore, the directing and writing team that created “Special,” a film screened this year at Sundance about a man who enters a clinical trial, suffers a breakdown, and comes to believe he is a superhero;
  • Antonio Damasio, a neurologist and neuroscientist who directs the University of Southern California Institute for the Study of the Brain and Creativity;
  • Martha Farah, director of the University of Pennsylvania’s Center for Cognitive Neuroscience; and
  • Kay Jamison, professor of psychiatry at Johns Hopkins University School of Medicine and author of several books on manic depression and bipolar disorder, including her autobiography, An Unquiet Mind.

Storytelling and Technology

Underkoffler kicked off the discussion by pointing out that new technologies such as functional MRI are enabling neuroscientists to see where in the operating mind different activities are taking place, and to address for the first time questions that were previously the domain of philosophers, only answerable through intuitive thought, not scientific analysis. Considering that film is a unique vehicle for conveying states of mind, Underkoffler asked, “Is film privileged as a tool for exploring these ideas of mind and brain?”

*Here is an abridged version of the conversation that followed.*

Leeson: The technology always has some kind of way of altering the way we think. Some people have said that iPods are restructuring the way we create narratives. The advent of multidimensional possibilities with DVDs or other aspects of Internet use has created varying levels of how we communicate and what stories we tell and how we develop ideas of fractured intelligence, identity, and even artificial intelligence as characters and character subplots.

Haberman: For me, technology influences how we make movies, but in terms of changing the actual stories we’re telling and the structure of the stories we’re telling, I don’t think those are much different from the way I would have told the story in a movie if I had been alive to make one 30 years ago.

Passmore: I’d agree with that. The film doesn’t happen on the screen or in the speakers; the film happens when it’s synthesized by your brain when you’re sitting in the audience. Film is inherently the medium by which you experience alternate realities. As the technology evolves, whatever is after cinema is going to become even more so.

Frames in the Mind

Damasio: Film, and before it theatre and literature in general, have historically been means of inquiry into the human mind. Greek theatre was doing things similar to what filmmakers are doing today: using narrative to look into the human mind and human behavior.

There’s something privileged about cinema that is different from the other modalities, [because] it’s probably so far the closest we can have to the kind of subjective experience we have of our own mind. It has to do with the fact that there is a frame in our minds when we’re looking at the world, whether we’re looking at the actual world, or into our minds with our eyes closed. The visual and the auditory are very powerful and are the bread and butter of film making. They bring us much closer to the experience of our own mind.

It’s as if film has [copied] some of the characteristics of the human mind. Editing is something we do all the time when we apportion attention differently to one image or another. We are constantly running an editing machine in our own mind by bringing a character into focus more strongly, by reframing it, or by the duration for which we allow the image of that character to linger.

It’s quite interesting that there are very close connections between the mind process and what our eyes are doing. John Huston might have been the first to point out that you cut on the blink in filmmaking. It’s something that shows film to be very privileged in its connection to brain and mind science, far more so than literature or theatre of any kind I can think of.

Simulating Experiences

Farah: I think the film “Being John Malkovich” illustrates your point well — that through film we can simulate the subjective experience of another person. “Special” does the same thing with this ambiguity between Les’s perception of what is going on and the reality. It’s a seemingly unbridgeable gulf that cognitive neuroscientists are continually trying to bridge, between subjective mental experience and objective observable things.

Haberman: “Being John Malkovich” is interesting also because it shows how you can illustrate things cinematically for a broader audience than scientists. A lot of people probably don’t know what a feedback loop is, but when they walk down the tunnel and there are John Malkoviches everywhere, I think intuitively [the audience] understands what’s happening. It illustrates a scientific principle without feeling like it’s telling or explaining to you.

Redefining Film

Leeson: I think the whole definition of film is radically changing right now, in a way that we haven’t seen in the last hundred years. We’re developing different options for how we look at moving images and therefore the whole definition of what film is and dealing with possibilities for entering virtual realities … We’ve never been able to have these possibilities before.

Jamison: If you’re trying to convey mood or desolation or despair or psychosis, or madness or ecstasy or expansive mood, it’s so much in the acting and directing and writing. The technology is not my bailiwick, but it seems to me that tremendous portrayal has been done so well since the beginning of film. If you’re trying to convey a mood such as desolation or despair, what is it in the technology recently that has made any difference in how well that would come across now to an audience as opposed to 30 years ago?

Underkoffler: Technologically, it seems like nothing. The digital resolution, sound, would have no bearing.

Leeson: Some artists are using PDAs to create environments that do alter moods when one goes there. They create installations and environments that are addressing these very particular issues.

A Wider Domain

Haberman: I think the most obvious example is video games that are so popular right now. That experience couldn’t have happened 10 years ago. They’re playing a narrative. It’s a whole new way of watching a story.

Passmore: It’s kind of like antidepressants. It’s our version of “we don’t really know what the long term effects of it will be.”

Leeson: We’ve never had the connectedness that we have now. We’re able to interpret and hear so many points of view that it seems like we’re congealing things beyond a particular culture to a wider domain.

Haberman: But that’s something people have been thinking has been going on for years and years. Even if you look at things people were writing in the 1960s, it was all about connectedness and different cultures coming together. And all the poststructuralist film theory from the 1980s is the same thing: People always want to feel they’re more and more connected with each other and that technology does that, but I’m not convinced it does.

Transhumanists Thinking Like Bats

Underkoffler: I’m also interested in technologically expanded options for what cinema might become. It’s interesting to wonder what else is possible. Peter Greenaway famously and cantankerously said sometime in the early 1990s that film had done nothing but produce illustrated 19th century novels in the sense that they follow a comprehensible narrative. What else could film do to map our cognitive or mental states onto other possibly even nonhuman or transhuman artifacts or situations? Might we elicit some kind of state that is impossible to elicit in any other way?

Farah: Well, it’s like the famous article “What Is It Like to Be a Bat?” by the philosopher Thomas Nagel, who ends up concluding that you can’t know what it’s like to be a bat because you don’t have a bat brain, you don’t have a bat experience.

Underkoffler: And you don’t have a bat body.

Passmore: What we need is a bat filmmaker.

The Essence of the Subjective Experience

Farah: How close could you get to a bat experience by watching a film? I’m going to say not very. If you can’t get the essence of the subjective experience of being a bat by walking around in the world having light impinge on your retina because it’s reflecting off surfaces around us, I don’t see how having light impinge on your retina because it’s coming from a movie screen is going to make a difference.

But one thing that might make a difference is a sort of wacky idea that Ray Kurzweil describes in his new book The Singularity Is Near: When Humans Transcend Biology, all about how changes in computer and nanotechnology are going to increasingly be incorporated into our bodies, including our central nervous systems. Eventually we’ll gradually transform ourselves into these cyborg creatures that won’t much resemble humanity version 1.0, which is what we are sitting around here today.

One interesting scenario he describes is the use of nanotechnology to penetrate our nervous systems. We would first use nanotechnology to get a highly detailed, three-dimensional image of the state of somebody else’s brain. A nanobot would go into John’s ear and infiltrate his brain and get the picture and then I could inhale them into my brain and they could simulate the same state and thereby let me know what it’s like to be John Underkoffler. And maybe they could do the same thing with a bat.

The Cyborgian Age

Leeson: I think we already are posthuman and we’ve already entered the cyborgian age. More and more symbiosis with technology is altering the way we’re thinking. And as far as projections into the future, I think one that’s very close is how we distribute narratives, not only on screens in dark rooms, but on computers and through software programs that incorporate moving images and build memory.

Damasio: I think with the Kurzweil scenario, there’s no need for immediate worry. It’s far into the distant future. If the Kurzweil scenario comes to pass it will lead to different relationships within ourselves and with technology, and I don’t know if it will illuminate our experience with nonhuman species, but I don’t think it will affect film as it is in itself. Film could portray all of this, but it doesn’t follow that it will alter it necessarily and change that fundamental technique.

How Movies Nourish Emotions

Passmore: My opinion is that this technology is great, it will help bring new ways of telling stories to people, but I think there’s a reason the narrative structure hasn’t changed over 1,000 years. It’s because we want to experience someone else’s life, someone else’s reality. We want to see a character and view the world through that character’s eyes and I think that’s the basis of narrative and I don’t see that changing anytime soon.

At the end of the day, you have an audience that wants someone they can identify with. There are always going to be people trying to beat their heads against the wall trying new things, but eventually the strength of the narrative in its current form is going to carry on forever.

Damasio: That has a lot to do with our own need to experience emotional states vicariously. Much of what goes on in movies, and traditionally in classical novels and theatre, is a way to experience emotions we would like to have, and sometimes emotions that we would not like to have.

I don’t think anybody would choose to be in situations that cause extreme horror and terror and so on, but the fact is that people flock to movies that have suspense and show fear and that lead you to experience enormous horror sometimes. I think there’s one reason that continues, and that is that we rehearse. In some way we get rid of the need to worry about them, because we are going through that experience in a way that we know once the lights come up we’re not going to get killed and nothing terrible is going to happen to us.

Our Own Mortality

Passmore: It tricks us into thinking that we’ve dealt with our own mortality.

Damasio: Exactly. We need to have nourishment for our own emotions. And here I would point out biology. There is a big disconnect between the way our brain and our organism processes emotions, and the way our organism processes what people call straight cognition. Cognition is like lightning. Cognition is very rapid, and has the potential to become more rapid.

It’s quite likely that people in the world who are growing up with new technologies are going to have even more rapid cognition. But that doesn’t mean that they’re going to have faster emotional processes, because the emotional processes are very old, in terms of evolution, and they’re probably much more rigid and difficult to change at least over a course of a relatively limited period of time.

Leeson: Do you think there’s a difference in generational cognition and that it’s changing?

Jamison: I would address the emotional side, which is the more ancient side, and that probably is not changing nearly so rapidly. The thinking process probably is, but the moods and the fears and so forth are not changing so rapidly, so it’s a fascinating time in human evolution.

Also read: Music on the Mind: A Neurologist’s Take

The Chaos of Celestial Physics and Astrodynamics

A starry night sky with the outline of mountains in the foreground.

For mathematician Edward Belbruno, by embracing “chaos” he was better able to understand the three-body problem of celestial physics. His notion of chaos describes motion that defies precise long-term predictions.

Published January 1, 2005

By William Tucker

In 1990, Edward Belbruno was packing his belongings, getting ready to leave the Jet Propulsion Laboratory in Pasadena. His five-year effort to interest NASA in low-energy trajectories for spaceflight had failed.

A graduate of the Courant Institute of Mathematical Sciences in New York, Belbruno had long been playing with the idea of charting very precise flight paths through the sky or into space. He wanted to allow space probes to slip into orbit around a moon or planet without the use of powerful, fuel-consuming retrorockets. His task was made immensely complicated – if not impossible – by the three-body problem of celestial physics.

When first formulating the laws of gravity, Isaac Newton had calculated the interaction of two bodies. They could be a stone falling to Earth, a spacecraft in orbit, or the Earth itself on its trajectory about the Sun. In each case, the two bodies both revolve around the center of mass – a point somewhere between their two centers, like the balancing point of a see-saw.
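The see-saw analogy can be made concrete with a few lines of code. Here is a minimal sketch computing the Earth-Moon balancing point, using approximate textbook values for the masses and separation (these numbers are assumptions for illustration, not from the article):

```python
# Two-body center of mass: the point where m1*r1 = m2*r2,
# like the balancing point of a see-saw.

M_EARTH = 5.972e24       # kg (approximate)
M_MOON = 7.342e22        # kg (approximate)
SEPARATION_KM = 384_400  # mean Earth-Moon distance

def barycenter_distance(m1: float, m2: float, separation: float) -> float:
    """Distance from body 1's center to the common center of mass."""
    return separation * m2 / (m1 + m2)

d = barycenter_distance(M_EARTH, M_MOON, SEPARATION_KM)
print(f"Barycenter lies about {d:.0f} km from Earth's center")
```

The result, roughly 4,700 km, falls inside the Earth itself (radius about 6,400 km), which is why the Moon appears to orbit the Earth rather than a point in empty space.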

The interaction of three bodies, however, is immensely more difficult. In fact, in the early 1960s, V. I. Arnold, a Russian mathematician, and J. Moser, a German-born mathematician, independently proved that the three-body problem could not be solved at all. The proof came from solving the more general problem of chaos in nearly periodic motion, as outlined by Arnold’s teacher, A. N. Kolmogorov, in 1954. It is now known as the Kolmogorov-Arnold-Moser (KAM) theorem.

Order in Chaos

The obstacle to finding a solution is that the three-body problem leads, literally, to chaos. To a mathematician, that does not mean a dark abyss or a mad frenzy. Rather, chaos describes motion that defies precise long-term predictions.

However, mathematics offers tools even for dealing with the unknowable. Using the mathematics of chaos, Belbruno felt that he could fudge the three-body problem enough to create a proper trajectory. The difficulty was that his slow dance to the Moon would take two years, whereas conventional rockets can make the trip in three days. NASA lost interest, and Belbruno was shown the door.
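Chaos in this technical sense — motion that defies precise long-term prediction — can be demonstrated with a toy system. The sketch below uses the doubling map, a standard classroom example (not Belbruno's model): two starting points differing by one part in a million diverge until any long-range forecast is useless.

```python
def doubling_map(x: float) -> float:
    """One step of the Bernoulli doubling map, a textbook chaotic system.

    Any tiny difference between two starting points doubles on every step.
    """
    return (2.0 * x) % 1.0

a, b = 0.3, 0.3 + 1e-6   # initial conditions differing by one millionth
for _ in range(18):
    a, b = doubling_map(a), doubling_map(b)

gap = abs(a - b)
gap = min(gap, 1.0 - gap)  # distance measured around the unit circle
print(gap)                 # the millionth has grown to roughly 0.26
```

After only 18 steps the separation has grown by a factor of 2^18, so a measurement error invisible at launch dominates the prediction — exactly the difficulty the three-body problem presents over long time scales.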

Then a miracle happened. The Japanese had launched a two-part Moon probe, Muses A, the size of a desk, and Muses B, the size of a grapefruit. The two had separated while in Earth orbit and the grapefruit headed for the Moon. Upon arrival, however, Muses B’s radio failed, and the probe was lost. Now Muses A was circling the Earth with very little fuel and nothing to do. A JPL engineer remembered Belbruno’s work. Suddenly Belbruno had an audience. Could he help? Belbruno said he could.

“In the same instant, I realized that I could add the Sun’s gravitational field to the equation,” Belbruno says. Ten months later, Muses A – now rechristened Hiten, after a Buddhist angel – fired half its remaining fuel and, guided by Belbruno’s equations, glided into a 2-million-mile itinerary beyond the Moon and back again. It was like flicking a paper airplane into space, hoping it would eventually settle into a trajectory where its momentum perfectly matches the Moon’s gravity.

The Angel of Chaos

Belbruno’s formulas worked, and the mission was saved. “They used it again for the Genesis probe of the Sun and the European Space Agency’s SMART-1 mission,” says Belbruno. “NASA now takes my work a lot more seriously.”

So seriously that Belbruno was commissioned to convene a conference at the University of Maryland in 2003 to investigate astrodynamics and chaos. Also under study were formation flying, navigation and control of unmanned spacecraft, orbital dynamics, mission proposals, and possible propulsion methods for pushing probes deep into the solar system. The results have been collected as Astrodynamics, Space Mission, and Chaos, Volume 1017 in Annals of the New York Academy of Sciences.

Although Belbruno and his fellow authors could not know it, space probes were about to be brought back front and center by President George W. Bush’s announcement of a mission to Mars, somewhere around 2020. “The cost for delivering cargo to the Moon is now $1 million per pound,” says Belbruno. “Every pound of fuel we can save is another pound of payload that can be delivered.

“I don’t agree with everything the president does, but I think he has shown great vision on this initiative,” he adds. “The idea of going step by step to the Moon, building a base, and then moving on to Mars and back is very practical. I think there’s a good possibility we’ll succeed.”

Also read: Exploring the Ethics of Human Settlement in Space

Ballot Security and the Threat of Bad Actors

A man fills out a ballot for a county election.

Ensuring the integrity of the popular plebiscite, the most basic of democratic processes, in the 21st century cyber age may in the end come down to an age-old principle – trust, but verify.

Published November 1, 2004

By Myrna E. Watanabe

In August, Venezuelans voted on whether to keep Hugo Chávez as president. This nationwide tally of more than 14 million registered voters was taken on direct-recording electronic (DRE) voting systems.

Chávez was not recalled, and the ink was barely dry on the voting machine printouts when accusations of fraud were made. The vote was a recall on Mr. Chávez, based on a petition signed by more than 3 million voters. Surely the vote, which was 57.8% in favor of retaining Chávez, must have been manipulated, thought some. And what better way to manipulate the vote than by subverting DRE machines?

A Simple Concept

There are several basic designs for DRE machines, with various permutations and combinations of features. Bernard Liu, legal staff attorney for the Elections Division of the Secretary of State’s office in Connecticut, described these machines as “the most simple application” of computer technology. A common automated teller machine (ATM), he explained, is more sophisticated. DREs simply record and compute votes.

The machines, many of which have touch screens like ATMs, must be activated for each voter. In some cases the poll worker activates the machine, either directly or through a local workstation. In others the poll worker may give the voter an electronic key: a card with a magnetic strip or a smartcard that contains a computer chip.

The poll worker will program the card to allow only one person to vote. If there are primaries being run on a given day, the card can be programmed for the primary ballot of a specific party. The ideal system will have no information programmed into the card that will identify the voter. Once the voter or poll worker activates a machine, a ballot will come up on the screen.

Frank Wiebe, president of AccuPoll, Inc., a small vendor of voting machines in Tustin, California, explained how his company’s machines work; other companies’ machines may work slightly differently. The AccuPoll machine is activated for a voter with a memory-only smartcard. The machine, said Wiebe, verifies that the card is for the correct polling place and is enabled for voting. It also has encoded within it the type of ballot that the voter needs to see.
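The activation flow described above can be sketched in a few lines. This is an illustrative model only — not AccuPoll's or any vendor's actual implementation — and all class and field names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class VoterCard:
    precinct_id: str    # which polling place the card is valid for
    ballot_style: str   # e.g. which party's primary ballot to display
    used: bool = False  # flipped once a ballot is cast

class VotingTerminal:
    def __init__(self, precinct_id: str):
        self.precinct_id = precinct_id

    def activate(self, card: VoterCard) -> str:
        """Return the ballot style to display, or reject an invalid card."""
        if card.precinct_id != self.precinct_id:
            raise ValueError("card issued for a different polling place")
        if card.used:
            raise ValueError("card has already been used to vote")
        return card.ballot_style

    def cast_ballot(self, card: VoterCard) -> None:
        """Disable the card so it cannot authorize a second vote."""
        card.used = True

terminal = VotingTerminal("precinct-12")
card = VoterCard("precinct-12", "general")
style = terminal.activate(card)  # succeeds exactly once
terminal.cast_ballot(card)
# a second terminal.activate(card) would now raise: one card, one vote
```

The security questions raised later in the article come down to whether the checks in `activate` can be trusted when the card itself can be forged.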

Tangible Representation on Paper

The initial screen contains a set of instructions. After reading the instructions, the voter is asked to hit the “next” button to see each individual contest. At the end of the ballot, the voter sees a ballot review screen. Wiebe explained that the machine will not allow an over-vote – i.e., if you are supposed to vote for two of five names, you cannot vote for more than two – and will issue a warning if you have under-voted by skipping races or voting for fewer candidates for given offices.
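The over-vote and under-vote rules Wiebe describes amount to a simple validation applied to each contest. A minimal sketch of that logic (the candidate names and return strings are illustrative):

```python
def check_contest(selected: list[str], max_choices: int) -> str:
    """Validate one contest's selections, as a DRE review screen might."""
    if len(selected) > max_choices:
        return "over-vote: rejected"     # the machine refuses to accept it
    if len(selected) < max_choices:
        return "under-vote: warn voter"  # allowed, but the voter is alerted
    return "ok"

print(check_contest(["Smith", "Jones", "Lee"], 2))  # over-vote: rejected
print(check_contest(["Smith"], 2))                  # under-vote: warn voter
print(check_contest(["Smith", "Jones"], 2))         # ok
```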

“If they [voters] decide to change their minds, they would just touch the button for that contest,” Wiebe said. The machine would return to that specific contest to allow the voter to make the change. After reviewing the ballot, the voter would push the “cast your ballot” button.

After that button is pushed, Wiebe continued, “the representation of the ballot is written into memory [and] the go-vote key is disabled.” The vote is stored in the hard drive and in flash memory. Some machines do not print out a paper representation of the vote, but the AccuPoll machines do. “They have a tangible representation on paper, which the voter can confirm, that the vote could be recorded as desired,” noted Wiebe. The voter then puts the paper ballot into a ballot box so that the vote can be verified.

In some cases, as with the AccuPoll machines, the individual voting machines are networked to a central workstation at the polling place. In others, the machines are linked via the Internet to a computer at a centralized election office. And in yet others, the machines are not linked at all.

Variations on a Theme

There also are various permutations of a completed paper ballot. Some machines – those that have been most criticized by computer experts – provide no paper record at all. Others provide a “receipt” that cannot be checked against the record within the computer, as it cannot be matched with a specific ballot. Other machines generate a random number for the electronic ballot that also is on the paper ballot. That allows the paper ballot to be compared with the computer’s record.

Eugene Spafford, executive director of Purdue University’s Center for Education and Research in Information Assurance and Security, noted that there are a number of areas in which the electronic voting system can be compromised – beginning with people. “You have a very broad range of individuals who are working as the election clerk and monitors,” Spafford said, and there are no standardized tests for elections officials. In a medium-to-large-sized county, Wiebe noted, “it’s a 6- to 12-month project to transition from an old system to a new system.” And the transition includes educating both voters and poll workers.

A person intent on subverting the system could fabricate smartcards or smart keys to allow multiple votes. In their often-cited paper, “Analysis of an electronic voting system” (T. Kohno et al., IEEE Symposium on Security and Privacy 2004, IEEE Computer Society Press, May 2004), Johns Hopkins’ Avi Rubin and colleagues review vulnerabilities of one DRE system. They note that there is no cryptography in the smartcards; thus, “there is no secure authentication of the smartcard to the voting terminal.” They further note that poll workers may have access to cards that can administer or end an election. These, too, can be duplicated.
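The missing safeguard Rubin and colleagues point to — authenticating the card to the terminal — is the kind of problem a message authentication code addresses. A hedged sketch of the general idea using Python's standard library (this is not a description of any vendor's actual fix, and the key and payload format are made up):

```python
import hashlib
import hmac

# A secret shared between election officials and the terminal firmware.
TERMINAL_KEY = b"demo-key-for-illustration-only"

def sign_card(card_payload: bytes) -> bytes:
    """Compute the HMAC tag officials would write onto a genuine card."""
    return hmac.new(TERMINAL_KEY, card_payload, hashlib.sha256).digest()

def terminal_accepts(card_payload: bytes, tag: bytes) -> bool:
    """Terminal recomputes the tag; a forged or altered card fails the check."""
    return hmac.compare_digest(sign_card(card_payload), tag)

payload = b"precinct=12;ballot=general;serial=0001"
tag = sign_card(payload)
print(terminal_accepts(payload, tag))                                   # genuine card: True
print(terminal_accepts(b"precinct=12;ballot=general;serial=0002", tag)) # forgery: False
```

Without the secret key, an attacker cannot produce a valid tag for a fabricated card, which is precisely what the uncryptographed smartcards in the paper lacked.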

Software Vulnerability

Another vulnerable area is the software, Spafford said, noting that the people who build voting machines are not necessarily security experts. He said companies “don’t build in all the safeguards because that would be too expensive.” While the vendors may claim that their machines are safe, Spafford said, “Most of these vendors certainly don’t have the level of software testing that a Microsoft or Oracle has.” A Trojan horse – capable of wreaking havoc on software – can’t be found by the usual testing done on DREs, Spafford said.

Concern about the integrity of the software used in electronic voting systems is shared by Jennifer McCoy, of Georgia State University, in Atlanta. Commented McCoy, who led the Carter Center’s observation of the Venezuelan plebiscite: “I think it’s theoretically possible to manipulate the software.”

Local area networks (LANs) are also subject to potential tampering, and wireless LANs are particularly insecure. “There are definitely a lot of security exposures for a wireless LAN and we would never advocate for such,” says Wiebe. Liu says Connecticut is not considering networked machines or machines that are connected to the Internet. The advantage of stand-alone machines, he noted, is that people set on malfeasance would “have to hack into every single machine you have to try to change the vote.”

The Human Touch

Vote tabulation is also vulnerable to tampering. Transmission from the individual terminals to the polling place workstation can be compromised, as can transmission from the polling place workstation to the central tabulation location. With no verifiable paper trail, “you cannot do a recount; all you can do is a re-read,” Spafford explained. And if, when the machines are opened, the counts are all zeros, he added, then “all the votes are gone.” Posting results at the polling place and again at the central tabulation location, he added, shows that there has been no tampering between the time the tabulations left the polling place and when the numbers were entered into a central computer.

The Venezuelan plebiscite illustrates why a verifiable paper trail is so important. So far, the Carter Center has gone through two audits of the results. The first was what McCoy called “a quick count,” where election observers called in the machines’ data to headquarters on the polling day. The second was in response to a report that criticized the first audit as not relying on a random sample.

“The paper ballot had the number of the machine on it; it had the result of the vote; then it had a 32-character string, numbers and letters combined,” noted McCoy. These numbers could be matched up to numbers printed on a tally sheet for each ballot that was cast.
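Matching the identifier strings on the paper ballots against the tally sheet is, at bottom, a set-reconciliation exercise. A minimal sketch (the identifiers below are made up, and real audit procedures involve far more than this comparison):

```python
def reconcile(paper_ids: set[str], electronic_ids: set[str]) -> dict[str, set[str]]:
    """Compare ballot identifiers from the paper trail with the machine tally."""
    return {
        "matched": paper_ids & electronic_ids,
        "paper_only": paper_ids - electronic_ids,       # paper ballot, no electronic record
        "electronic_only": electronic_ids - paper_ids,  # electronic vote, no paper ballot
    }

paper = {"7f3a9c01", "b2e84d55", "19c0aa72"}
electronic = {"7f3a9c01", "b2e84d55"}
report = reconcile(paper, electronic)
print(report["paper_only"])  # any non-empty mismatch set is a red flag for auditors
```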

Absentee Ballots

The center’s second audit report also compared voting machine results and numbers of signatures on the recall petition, but was based on a random sample.

Arnold Urken, a demographics and electronic voting expert from Stevens Institute of Technology in Hoboken, New Jersey, participated in a recent panel discussion on electronic voting held at The New York Academy of Sciences (the Academy) and sponsored by the Science Writers in New York. Other members of the panel included former Undersecretary of the Navy Jerry MacArthur Hultin, now of Stevens Institute; journalist Steve Ross; and former ABC White House correspondent Steve Taylor.

Urken indicated his sense of insecurity with the DREs by advising: “If you want your vote to be counted as carefully as your money, consider requesting an absentee paper ballot so that you do not run the risk of having your vote changed, corrupted, or eliminated by a computer malfunction.”

Also read: Deep Fakes and Democracy in the Digital Age


About the Author

Myrna E. Watanabe, PhD, is a freelance writer based in Patterson, NY. Her articles appear in many publications, including Nature, Nature Medicine, The Scientist, and The Hartford Courant.

Robin Kerrod and the Romance of Astronomy

A shot taken in outer space.

Author Robin Kerrod is inspired by science, so much so that his new book explores “the extraordinary beauty and aesthetic qualities of the images” produced by the Hubble telescope.

Published August 1, 2004

By William Tucker

Image courtesy of J. Hester and A. Loll (Arizona State University)/ NASA, ESA via Flickr. Public Domain.

To celebrate the Hubble telescope’s achievements, Robin Kerrod has written a coffee-table book, Hubble: The Mirror on the Universe, to bring down to Earth the romance that Hubble has been carrying with the heavens for the past decade and a half. “I don’t think the public at large truly appreciated the extraordinary beauty and aesthetic qualities of the images Hubble has sent back to us,” he says.

Kerrod is particularly pleased that his book has been presented to prominent politicians in Washington and the White House as part of an ever-growing “Save the Hubble” lobby. The campaign is trying to persuade NASA and the government to change their mind about abandoning the Hubble Space Telescope (HST).

But then writing books has never been a chore for Kerrod, who has been a full-time author for more than 35 years. He has penned more than 200 titles – for children and adults – on all aspects of science and technology, from Robots (1984) and Whales and Dolphins (1998) to The Way the Universe Works (2002). “I’ve always loved writing,” he says. “There’s a seemingly endless source of inspiration in the sciences. There’s always something exciting going on somewhere.”

Hubble No Longer Sees Double

At its outset, Hubble came close to being one of the biggest scientific flops in history. Originally proposed in 1946 by American astronomer Lyman Spitzer, the space telescope was funded in 1979, and scheduled for launch in the mid-1980s. Then came the 1986 Challenger disaster. Lift-off was moved back and didn’t occur until April 24, 1990.

The 12-ton, 43-foot-long satellite houses an 8-foot-diameter hyperbolic mirror made of silica-titanium oxide glass that took two years to polish to the proper looking-glass quality. Four main instruments (all since replaced) produced images and analyzed the light:

– The Wide-Field and Planetary Camera was designed to look at large swathes of sky, bringing images into sharp focus.

– The Faint Object Camera was so sensitive that it needed filters to look at anything brighter than magnitude 21. (Stars of magnitude greater than 6 are already too faint for the naked eye, and the best Earth-bound telescopes can see out to magnitude 24.)

– The High-Speed Photometer measured fluctuations of light sources from high-energy objects, from supernova remnants to ordinary stars.

– The Goddard High Resolution Spectrograph spread out light waves in order to detect the telltale dark bands that indicate the elements in the stars.
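The magnitude figures quoted above sit on a logarithmic scale: by convention, every step of 5 magnitudes is a factor of 100 in brightness. A quick sketch of that arithmetic (the convention is standard astronomy, not something from the article):

```python
def brightness_ratio(m_faint: float, m_bright: float) -> float:
    """How many times brighter the brighter object is, given two magnitudes.

    Uses the standard definition: 5 magnitudes = a factor of 100.
    """
    return 10 ** (0.4 * (m_faint - m_bright))

# A magnitude-21 object is fainter than the naked-eye limit (about 6) by:
print(brightness_ratio(21, 6))  # roughly a factor of one million
```

So the Faint Object Camera's filters were protecting it from objects a million times brighter than the dimmest stars visible to the unaided eye.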

Within hours, however, things began to go wrong. Discovery Space Shuttle crew member Steven Hawley sent the instructions to unfurl the telescope’s two panels that gather solar energy. One of them stuck. A space walk by astronauts Kathy Sullivan and Bruce McCandless freed the frozen panel, but it would later vibrate each time the fast-moving satellite passed between light and dark (16 times a day), blurring many of Hubble’s images.

The Corrective Optics Space Telescope Axial Replacement

Worse was yet to come. A month later, when Hubble’s first images were relayed back to Earth, they were unaccountably blurred. Something was obviously wrong. Not until a year later was it determined that the mirror had been ground with great precision to the wrong shape: the null corrector used to test it during manufacturing had been assembled with a lens roughly 1.3 millimeters out of position.

The aberration – only two microns, 1/50th the width of a human hair – was still enough to make Hubble lose focus. “Pix nixed as Hubble sees double!” said one headline.

In 1993, NASA engineered a rescue mission. COSTAR (Corrective Optics Space Telescope Axial Replacement), an ingenious device fitted with ten small mirrors, corrected Hubble’s vision just like a pair of spectacles. In order to make room, however, the High-Speed Photometer had to be removed. Both solar panels were replaced, along with six failed gyroscopes and two nonworking memory banks.

Finally, after three years and 35 hours of space walking, Hubble was sending back breathtaking images of the wonders of the Universe. The pictures are in the public domain and Kerrod has assembled them in an exquisite collection – probably the best summation of Hubble’s work ever made.

Also read: Going Deep with the Hubble Space Telescope