
Reducing Mercury Pollution in NY Harbor

The Academy and a handful of local and federal entities have teamed up on a multi-year effort to lessen pollution in this vital natural asset.

Published August 1, 2002

By Fred Moreno, Dan Van Atta, Jill Stolarik, and Jennifer Tang

Image courtesy of Tierney via stock.adobe.com.

New York Harbor is a vital natural asset whose ecological health has been threatened by contamination from a host of sources since the dawn of the urban/industrial era. Despite substantial water quality improvements following environmental laws enacted over the past four decades and decreased industrial activity, much remains to be done.

The New York Academy of Sciences’ (the Academy) ongoing Harbor Project was commissioned by the U.S. Environmental Protection Agency (USEPA) in 1998. Its goal is to identify practical solutions to the difficult issues that continue to plague this precious watershed. Called “Industrial Ecology, Pollution Prevention and the NY/NJ Harbor,” the project is focused on finding environmentally sound, economically feasible and realistically achievable strategies for combating specific contaminants that are polluting the harbor.

Mercury was identified as the first pollutant for study. Results of this study are now available in a recently published monograph entitled “Pollution Prevention and Management Strategies for Mercury in the NY/NJ Harbor.”

The study was conducted by scientists with particular knowledge of mercury in this watershed; its results were analyzed and synthesized by Academy staff and then reviewed and endorsed by the NY/NJ Harbor Consortium, a coalition of interested business, regulatory, labor, academic and environmental organizations that was launched in January 2000. Professor Charles W. Powers, chair of the Harbor Consortium, characterized the mercury study as “audacious in scope, rigorous in its scientific and analytic conclusions, and bold in its recommendations affecting a wide variety of institutional interests and practices.”

Health Sector Identified

The report’s recommendations, made public at a June 25 press conference in New York, contain a number of specific measures intended to prevent further pollution of the harbor. Primary emphasis was put on actions needed in the health care sector – especially dental facilities, hospitals and laboratories – to prevent mercury releases to the watershed.

Although contamination from atmospheric deposition and from solid waste sources, such as landfills, was also analyzed and assessed, the Harbor Consortium chose wastewater strategies as its first priority. That’s because wastewater is both the largest direct source of mercury and the most significant source of methyl mercury in the NY/NJ Harbor. Landfills and wastewater treatment facilities provide ideal conditions for methylation – the process that chemically transforms inorganic mercury into methyl mercury.

Specifically, the Academy report recommended that:

– Dentists and dental facilities implement a two-tiered approach that, first, institutes filtration, collection and recycling in the short term and, second, moves toward replacement of amalgams with safe, durable and cost-effective alternatives.

– Hospitals substitute non-mercury alternatives for mercury-containing products, like thermometers and blood pressure gauges, and that they prevent breakage of existing mercury-containing products through proper maintenance.

– Laboratories substitute non-mercury alternatives for mercury-containing products, and take steps to prevent mercury discharges to sewers.

In addition, the report provided specific strategies for implementing its recommendations. One strategy suggested for the dental sector, for example, called for development of programs to promote economically feasible filtration technologies and encourage the collection and recycling of mercury in amalgam.

Laboratories are advised to implement a non-mercury purchasing policy and to install filter systems to reduce mercury discharges or capture all discharge solutions for recycling or treatment. The report also recommended phasing out the sale of all mercury thermometers, expanding educational campaigns to inform the public about the health risks associated with spills from broken thermometers and other devices, and implementing collection and take-back programs.

The Best of Science

Dr. Rashid Shaikh, director of Programs for the Academy, noted that one of the novel aspects of this project has been the involvement and commitment of participants from a wide variety of institutions and a wide range of backgrounds and experiences.

“The Academy has a long history of helping to illuminate environmental issues by bringing together the best of science to bear on analyses of solutions,” Shaikh said. “We believe this report adds significantly to the work being done to prevent further pollution of our Harbor through measures that are appropriate environmentally, economically and technologically.”

Evaluation and Risk Management

The project enables a group of experts – including scientists studying problems associated with mercury, regulatory agencies responsible for protecting the Harbor, and representatives from the industries and businesses whose livelihoods depend on the use of mercury – to discuss the issues and possible solutions in an open forum.

Referring to this process, Dr. Powers, who also heads a multi-university environmental research effort and is professor of Environmental and Community Medicine at the University of Medicine and Dentistry of New Jersey – Robert Wood Johnson Medical Center, called the report “a rare synthesis of evaluation and risk management.”

Funding for the mercury study was provided by the USEPA, the Port Authority of New York and New Jersey, the Abby R. Mauzé Trust, AT&T Foundation, The Commonwealth Fund, and J.P. Morgan. The 113-page document was written by Marta Panero and Susan Boehme, of the Academy staff, and Allison de Cerreño, previous director of the Harbor Project and currently co-director of the Rudin Center for Transportation at New York University.

Also read: The Environmental Impact of ‘Silent Spring’

A Case Against ‘Genetic Over-Simplification’

Who are we? Why do we behave as we do? What explains why some die of illness at the age of 50 while others live past 100? How can we improve the human condition?

Published June 1, 2002

By Fred Moreno, Dan Van Atta, Jill Stolarik, and Jennifer Tang

Image courtesy of ustas via stock.adobe.com.

The answers to these questions are coded in our genes — or so the story goes in the popular media and in some corners of the scientific establishment.

“It’s a heroic story with a dark side,” said Garland E. Allen, Ph.D., Professor of Biology at Washington University and a specialist in the history and philosophy of biology, at a recent gathering at The New York Academy of Sciences (the Academy). Harking back to the eugenics movement of the early 20th century, modern genetic science is fraught with both promise and danger, Allen said, and “genomic enthusiasm” should be tempered with a good dose of historical awareness.

Eugenics in Context

Charles B. Davenport, the father of the eugenics movement in the United States, defined his fledgling field as “the science of human improvement by better breeding.” In attempting to apply Mendelian genetics to society’s ills, Davenport and his fellow eugenicists believed the problem — whether alcoholism, mental illness, or the tendency to simply “make trouble” — was in the person, not the system. The real culprit, therefore, was the individual’s defective biology, and biologists held the key to fixing the defect.

During the first four decades of the 20th century, eugenics gave credibility to American elites in their efforts to restrict the inflow of immigrants of “inferior biological stock” from southern and eastern Europe, culminating in the Immigration Restriction Act of 1924. The new science also provided a rationale for the compulsory sterilization of institutionalized individuals considered unfit for reproduction.

By 1935, 30 states had enacted sterilization laws that targeted habitual criminals, epileptics, the “feebleminded,” and “morally degenerate persons.” Their proponents saw them as preventive, not punitive. In their view, higher fertility rates among the less productive, genetically defective members of the population posed a threat to society, not least because of the high cost of maintaining them in prisons, in mental institutions, or on the dole.

“Social history in the United States between 1870 and 1930 was characterized by a search for order,” said Allen. “It was a period characterized by the maturation of the Industrial Revolution, rapid urbanization and growing social problems. There was a widespread sense of disorder, and many felt there was a need to do something about it.” This collective malaise made eugenics the “magic bullet” of its day.

As American as Apple Pie

Eugenics peaked during the 1930s, at the height of the Depression. Interestingly, the new science and its attendant policy program appealed to members of all social classes. Eugenics validated wealth and privilege as the birthright of the genetically superior. The rising union movement, arguably the greatest threat to the status quo, was rife with Italians and Jews, two of the groups deemed “socially inadequate.” At the same time, with competition over scarce jobs at an all-time high, eugenics fed into the anti-immigrant sentiments of the working class.

With their blatant racism, xenophobia, questionable ethics and tendency to blame the victim, eugenicists might impress us today as screwballs on the lunatic fringe of science. Actually, however, nothing could be further from the truth.

Theodore Roosevelt was just one of many highly regarded Americans who praised the science of eugenics. In his 1913 letter to Charles Davenport, Roosevelt wrote: “Any group of farmers who permitted their best stock not to breed, and let all the increase come from their worst stock, would be treated as fit inmates for an asylum.” Alexander Graham Bell himself served on the Board of Scientific Directors of the Eugenics Record Office, founded in 1910 as the country’s leading eugenics research and education center. In its day, the eugenics movement was mainstream and as American as apple pie.

Scientific Underpinnings

Taking its cue from advances in agriculture, eugenic science also emulated the efficiency movement in industry. “Eugenic reproductive scientists were the counterparts of the efficiency experts on the factory floor,” said Allen. In the early 20th century, farmers and industrialists alike turned to science for guidance in bringing about control and standardization.

If popular for the wrong reasons, eugenics nonetheless increased our understanding of human beings as genetic organisms. Davenport and other eugenically-minded human geneticists helped illuminate the genetic origins of a number of physical disabilities, for example, including color blindness, epilepsy and Huntington chorea. Instead of proceeding cautiously, however, Davenport and his colleagues applied the new genetic paradigm zealously and indiscriminately. All human intellectual and personality traits, they hypothesized, were ultimately reducible to heredity.

As it turns out, their methods were just as flawed as their theories. Commenting on a family study of epilepsy — rigorous for its time — Allen pointed to two methodological weaknesses: First, humans have small families compared to animals, which makes statistical modeling difficult at best. Second, research in the early 20th century was hampered by a lack of accurate information. Interviews, anecdotal accounts, and rumor were the stuff of scientific data at a time when medical record keeping was relatively haphazard.

Finally, the absolute privileging of heredity over environment trapped eugenicists in a form of circular thinking. If pellagra, a condition caused by niacin (vitamin B3) deficiency, was observed to run in a family, the disease must be genetically based, they thought, rather than rooted in poverty and shared nutritional deficits.

A Call for Balance

Allen warned that the genetic myopia of yesterday’s science is being recapitulated today. From shyness to homosexuality and from depression to infidelity, everything is in our genes, if we’re to trust the information in recent cover stories in Time, Business Week, and U.S. News & World Report, among other reputable publications. “These claims are as tenuously based now,” asserted Allen, “as they were in the 1920s.”

The most serious dangers of all, however, lie in the policy implications of the new genetic determinism. If a person is genetically predisposed to sensitivity to smog, why should the government commit itself to cleaning it up? Why should parents bother spending time and energy on raising a child who carries the criminality gene? And why should insurance companies pay for the care of those with genetic mutations that “cause” bipolar disorder, diabetes or cancer? We’ve seen this unhealthy marriage of scientific and political agendas before, Allen said.

Allen also argued for a more integrated approach to research. Social and biological scientists have been studying different groups, and never the twain shall meet. We’d gain a more complete picture of problems and their causation by funding integrated studies that join the perspectives of sociologists and biologists, he said. This approach would correct the current fixation on genes as bearers of the whole truth.

When it comes to the lessons of eugenics, Allen said the “that was then, this is now” attitude is the worst of all. It can, indeed, happen today. He concluded by encouraging scientists who reject simplistic genetic ideas to step forward, articulate a balanced point of view and oppose the “geneticization” of the public discussion and its potentially dangerous consequences, sooner rather than later.

Also read: The Primordial Lab for the Origin of Life

The Primordial Lab for the Origin of Life

Exploring the role of RNA, DNA, nucleic acids, proteins and other elements that inform our understanding of the origins of life.

Published April 1, 2002

By Henry Moss, PhD

Image courtesy of issaronow via stock.adobe.com.

When Thomas Cech and Sidney Altman showed that ribozymes, a form of RNA, could act in the same manner as protein catalysts (enzymes), origin-of-life theorists believed the central piece of the puzzle of life had been found.

Enzyme creation normally requires RNA- or DNA-type templates, but these nucleic acids themselves need enzymes to function. If RNA could be cut and spliced without the aid of proteins, however, there was a basis for self-replication: RNA molecules assisting each other, and eventually evolving into life as we know it.

The concept of a primordial replicator is at the center of most origin theories. So it seemed only a matter of time before researchers would show how the components of RNA became available under prebiotic conditions, and how they connected up.

But it has proven far from easy, and most researchers now agree that RNA itself is too complex and fragile to have formed entirely from abiotic processes. They are now looking for a simpler replicator, a pre-RNA, with RNA coming on the scene later.

Nonetheless, some scientists, including nucleic acid chemist Robert Shapiro of New York University, are convinced that this whole approach is misguided. Making his case before an audience at The New York Academy of Sciences (the Academy) in February, Shapiro pointed to a growing number of skeptics who wonder if life started with a replicator at all.

At Least 3.5 Billion Years Old

It’s too difficult to conceive, Shapiro said, of all these sensitive organic ingredients coming together, hanging together and creating a replicator complex enough to build proteins –– and eventually cells –– under the earth’s early conditions. And, given the evidence that cellular life on earth is at least 3.5 billion years old, less time was available than once was imagined.

If one were to put pre-RNA ingredients together in a laboratory, without the helping hand of a chemist, and cook them with the other chemicals that were likely present on the early earth, Shapiro said, the outcome would be “a tarry mess.” It would be a near-miracle for these components to come together spontaneously to form a functioning replicator.

Shapiro prefers the work of a growing number of researchers looking at the possibility that small organic and inorganic molecules could organize themselves into self-catalyzing metabolic webs. These webs could recruit components into an increasingly complex organic matrix of reactions, and the simple compartments that held them could reproduce by the simple act of splitting. If a suitable energy source were available to drive the process, such systems could have multiplied and evolved. Accurate residue-by-residue replication would be an advance that was introduced later in evolution.

Primordial Laboratories

Günter Wächtershäuser has formulated scenarios involving molecular adhesion on the surface of iron pyrite, drawing chemicals such as iron, nickel and sulfur, and energy from deep sea vents. David Deamer, Doron Lancet and others have proposed that the chemistry of lipid vesicles –– growing and splitting and carrying around water and small molecules –– could have been the environment. These “little bags of dirty water” might have been primordial laboratories for the emergence of early life.

Shapiro urged support for these new ideas, many testable in the laboratory. He also urged support for space missions that might find environments that harbor, or once harbored, primordial life. We might glimpse this process at work, he suggested, or find evidence of primitive life forms. Most important, says Shapiro, we might prove that the emergence of life from non-living conditions is natural and common, that self-organizing principles exist in prebiotic chemistry.

Dr. Shapiro has written acclaimed books on this topic for the general reader, including, most recently, Planetary Dreams: The Quest to Discover Life Beyond Earth.

Also read: Cosmic Chemistry and the Origin of Life

Opportunities and Challenges in Biomedical Research

While there have been major advances in biomedical research in recent years, this has also presented scientists with new challenges.

Published April 1, 2002

By Rosemarie Foster

Image courtesy of DC Studio via stock.adobe.com.

In Boston’s historic Fenway neighborhood, just beyond Back Bay, each spring heralds an annual ritual of renewed life. The Victory Gardens come abuzz with activity and abloom with burgeoning buds. Canoeists charge to the nearby Charles River. And sluggers at Fenway Park swing from their heels, cast in the spell of a 37-foot-high wall called the “Green Monster” that rises beyond the tantalizingly shallow left field.

Much history has been recorded inside the boundaries of Boston’s legendary baseball venue. But the seeds of a different kind of history –– that of 21st century biomedical science –– are being planted in the Fenway district this spring. Two important new scientific research facilities being built –– an academic addition to the Harvard Medical School and a commercial laboratory planned by pharmaceutical giant Merck & Co., Inc. –– will no doubt help shape biomedical advances for decades to come.

Merck is constructing its 11th major research site –– Merck Research Laboratories-Boston –– in the heart of the district. The company hails the facility as a multidisciplinary research center devoted to drug discovery. Covering 300,000 square feet across 12 stories above ground and six below, the state-of-the-art structure, Merck hopes, will lure some 300 investigators to pursue studies within its walls. The building is scheduled for completion in 2004.

Harvard’s own new 400,000-square-foot research building is under construction just 50 feet from the Merck site. With a design that fosters interactions between scientists, Harvard’s new facility will build on the university’s commitment to high-throughput technologies. It’s expected to be operational in 2003.

The Interrelationship of Academic and Commercial Research

Although the two facilities are some way from completion, they’ve already exposed one of the major issues –– the interrelationship of academic and commercial research –– that continue to challenge biomedicine. Because the new Merck facility is in such close proximity to the Harvard Medical School, some scientists fear it may create tension between nearby university investigators and industry researchers.

“The Merck laboratories, as a commercially driven research organization, may pay better salaries, have better equipment, have a better capacity for high-throughput screening and medicinal chemistry, and have other facilities that an academic medical center typically does not have available,” explained Charles Sanders, MD, former Chairman and CEO of Glaxo, Inc. and former Chairman of the Board of The New York Academy of Sciences (the Academy). “Whether this will create a source of problems for Harvard and its scientists remains to be seen. On the other hand, it could be a great resource if the academic-industrial relationship is managed well.”

Such tensions are likely to continue as emerging new trends in biomedical research offer investigators both greater opportunities and increasing challenges.  Academia and industry are partnering in ways they never have before. New high-throughput technologies are generating more data than previously thought possible. And scientists from a variety of fields must now cross interdisciplinary lines –– an approach some dub “systems biology” –– to make significant progress in conquering such diseases as cancer and AIDS.

New Approaches

A number of other biomedical research organizations have already set the stage for the new approaches to be incorporated into the Merck and Harvard facilities. In 1998, Stanford University launched an enterprise called “Bio-X” to facilitate interdisciplinary research and teaching in the areas of bioengineering, biomedicine and the biosciences. In January 2000, Leroy Hood, MD, PhD, created the Institute for Systems Biology in Seattle –– a research environment that seeks to integrate scientists from different fields; biological information; hypothesis testing and discovery science; academia and the private sector; and science and society.

Some say it’s the “golden age” of biomedical investigation. The evolution that has led to this new age was the subject, along with related issues, of a gathering of biomedical researchers at the Academy last April. Hosted by the American Foundation for AIDS Research (amfAR), the symposium was called The Biotechnology Revolution in the New Millennium: Science, Policy, and Business.

“This meeting did an excellent job of showing how the nature of biomedical research has changed in the last 25 years,” explained Rashid Shaikh, PhD, the Academy’s Director of Programs, “not just quantitatively, in the amount of information we can generate, but also qualitatively, in the way the work is done. And this is a rapidly evolving process.”

A Quickened Pace

Much of the recent change in biomedical research is the result of a pace of investigation that has accelerated during the last quarter century – thanks in large part to recombinant DNA technology created in the 1970s. This technology received a boost of support when the war on cancer was declared that same decade.

“Once recombinant DNA technology appeared, there was an enormous shift in molecular biology,” said David Baltimore, PhD, Nobel laureate and President of the California Institute of Technology, who chaired the amfAR symposium. “From a purely academic enterprise, it turned into one that had enormous implications for industry.”

Early on, the infant biotechnology enterprise focused on cloning to manufacture drugs, added Baltimore. Cloning was then employed in the search for targets for a new generation of small-molecule drugs. The need for chemical libraries soon developed, followed by a demand for high-throughput screening technologies. Add to that the wealth of information gleaned from the Human Genome Project.

Today investigators have more data than they ever did before. With the advent of high-throughput screening technologies, they also have speedier methods at their disposal to generate even more data. The nascent field of proteomics is expected to propel biomedicine even further. But with this heightened pace of research come new challenges.

For one thing, data are being generated faster than they can be analyzed and understood. Novel technologies have spawned a new field called bioinformatics: the analysis of all the data generated in the course of biomedical investigation. “We used to be able to look at the expression of one gene at a time,” said Shaikh. “But thanks to technologies (such as microarray systems), we can now analyze the expression of thousands of genes at once.”

High Demand, Low Supply of Bioinformatics Professionals

Bioinformatics professionals –– those who perform the data analysis –– are in high demand but short supply, however, creating a problem for some research centers. Because they are so hard to come by, some institutions are sharing bioinformatics staff until a new generation of professionals can be educated and enter the workforce.

A second question that comes to mind is, “Who owns all these new data?” Are they the property of the individual researcher? The university he or she works for? The pharmaceutical company that sponsored the work? Or, if the studies were supported by public funds, the public?

Ownership issues apply to electronically published data as well. “Some of the data get published and made available to the scientific community, but some do not,” said Donald Kennedy, PhD, Editor-in-Chief of Science and President Emeritus of Stanford University. “Now that all data are stored electronically, there are major changes afoot in how data can be accessed in useful and efficient ways. But there are major unresolved questions regarding who owns the data: Do the publishers? Do the investigators?” These significant legal and policy issues will need to be resolved and, given the current rapid pace of study, resolved quickly.

A Blurred Line

In Europe, industrial support for universities has been an accepted and uncomplicated practice since the late 1800s, and this relationship continues to this day. But the relationship between academia and industry in the United States has had a quite different history, noted Charles Sanders.

As the American pharmaceutical industry began to develop in the last quarter of the 19th and early part of the 20th centuries, a relationship akin to the European model began to flower. By the early 1930s, however, the relationship between academia and industry in America began to sour. Disagreements arose over research discoveries and credit; there were disputes regarding the unauthorized use of pictures of some scientists in advertisements, implying endorsement of certain companies and products.

After World War II, the climate began to improve. With the advent of biotechnology in the 1970s, relations flourished even more, as witnessed by the founding of companies such as Genentech and Biogen by academic scientists. In addition, there are now countless examples of companies that support research programs at universities under a variety of arrangements.

On the face of it, these associations appear positive, because there is now a wealth of new sources for investigators to turn to for research funding. But these new opportunities also present certain challenges.

One of the most obvious concerns when industry supports a researcher is the investigator’s objectivity. Conflict of interest issues may arise. “Academic scientists who work with industry are generally very careful to retain their objectivity, yet appearances sometimes don’t allow that,” said Sanders. “The industry has to be very careful and make sure that its academic collaborators totally protect their objectivity and reputation.”

Intellectual Property Issues

Secondly, when academia partners with industry, intellectual property issues again surface. How does one determine who benefits financially from a research endeavor that goes on to produce a profitable product, such as a successful drug? How much do the scientist and the university he or she works for each receive, and how is that money used? “Academic institutions have become more sophisticated, and the scientists and organizations are demanding an ever larger part of the pie from their discoveries,” said Sanders.

Donald Kennedy noted that in industry-supported investigations a large proportion of research results that are of potential public value may be locked up in proprietary protections. Students at Yale University and the University of Minnesota recently demonstrated, for example, that their universities were collecting royalties on drugs that can benefit people suffering from HIV/AIDS in developing countries.

“Although the royalty slice of the drug price is minuscule in proportion to total revenues, it is very unattractive money to the students, and they make a passionate case,” said Kennedy. “Ironically, everybody involved in this process thought they were doing something good, and in a way everyone was. But this is the kind of problem that emerges when proprietary interests mix with the basic research function in a nonprofit institution.”

A Mixing of the Minds

Scientists are increasingly of the opinion that an integrated approach to biological investigation is essential for significant, meaningful progress to occur. This “systems approach” is bringing together biologists, chemists, physicists, engineers and computer scientists to coordinate research efforts and interpret the resulting data.

Such an approach is critical for understanding the inner workings of cells and how their functions go awry to create diseases such as cancer. The AIDS virus has proven to be an excellent model supporting the need for a multidisciplinary approach: When it was first discovered in the early 1980s, it was assumed that a vaccine was just around the corner. But that has obviously not been the case.

“It turned out that HIV was more difficult than anybody imagined, smarter and slipperier,” said David Baltimore. The cleverness of the virus has sent researchers back to their lab benches. Only by gathering together immunologists, structural biologists, biochemists and experts from other fields can we determine exactly what the virus does to the human immune system to deliver its lethal blow.

Is “Systems Biology” the Way to Go?

Not all investigators are convinced that “systems biology” –– as Hood describes it –– is the way to go. Many established researchers, for example, are used to working alone in conventional academic settings. “Traditional academic institutions have a difficult time fully engaging in systems biology, given their departmental organization and their narrow view of education and cross-disciplinary work,” explained Leroy Hood, President and Director of the Institute for Systems Biology. “The tenure system presents another serious challenge: Tenure forces people at the early stages of their careers to work by themselves on safe kinds of problems. However, the heart of systems biology is integration, and that’s a tough challenge for academia.”

“Specialization is often the enemy of cooperation,” added David Baltimore. “There are deep and important relationships between biology and other disciplines. To understand biology, we need chemists, physicists, mathematicians and computer scientists, as well as other people who can think in new ways.”

Future Challenges

Despite the presence of these as yet unresolved issues, biomedical research continues to hurtle forward, shedding light on the inner workings of organisms and yielding insights that will undoubtedly impact health and medicine. “The true applications (of biotechnology) to patient care have not really matured yet,” said Rashid Shaikh. “But there’s every reason to believe that we’re going to make very rapid progress in that direction.”

In addition to the challenges above, other issues include:

• Gathering political support. Although the budget of the National Institutes of Health has seen a significant increase in the last several years, other science-related agencies may not be as fortunate. “These agencies’ research budgets have not seen an increase, and we must pay attention to them,” said Baltimore.

• Educating the public. Hood touched on the distrust the public can have regarding science. “I am deeply concerned about society’s increasingly suspicious and often negative reaction to developments in science,” he said. “I sense an enormous uncertainty, discomfort and distrust. There is a feeling that we’re just making everything more expensive and more complicated. How do we advocate for opportunities in science? We have to be truthful about the challenges as well.”

• Educating today’s students. One of the best ways to garner support for a systems approach to biological investigation is to start educating students this way today. In Seattle, for example, the Institute for Systems Biology has pioneered innovative programs in an effort to transform the way science is taught in public schools.

“This is truly the golden age of biology,” said Sanders. There are unprecedented numbers of targets and compounds, for example. Research and development are very expensive, but funds will be available in abundance.

The Public’s Expectations

Still, he added, we need to handle the expectations of the public, which can be unrealistic when it comes to the speed with which basic science findings will result in new therapies. And academic institutions have to balance a commitment to both basic and translational research.

“Thousands of flowers will continue to bloom, driven by the lure of discovery and the opportunity to improve human health,” added Sanders. “Though not linear, the process is very creative, entrepreneurial, and clearly reflective of the American free enterprise system.”

Also read: Building the Knowledge Capitals of the Future


About the Author

Rosemarie Foster is an accomplished medical freelance writer and vice president of Foster Medical Communications in New York.

The Epidemiology of Depression: A Family Affair

Experts are beginning to better understand and mitigate the economic and social consequences of disabling psychiatric illnesses like depression.

Published March 1, 2002

By Henry Moss, PhD

Image courtesy of KMPZZZ via stock.adobe.com.

Health insurance reimbursement for mental disorders has still not achieved parity with that for traditional illnesses, and the topic continues to be hotly debated in the U.S. Congress. The statistics seem clear, however, as studies document the enormous economic and social consequences of disabling psychiatric illnesses. Broken marriages, lost jobs and productivity, and the impact on children make mental illness one of the major sources of disability in the United States and the world.

Columbia University psychiatric epidemiologist Dr. Myrna Weissman made a powerful case for parity when she presented the cumulative results of major studies led by her and colleagues to an Academy audience in January. The talk was part of an ongoing program by the Academy on “Mind, Brain and Society.” Dr. Weissman, who is also associated with the New York State Psychiatric Institute, dealt specifically with unipolar, major depression, perhaps the most widespread and significant of these disorders, and one that now appears to amplify its effect by impacting families – young mothers and children in particular.

Perhaps the most significant finding is that, contrary to popular belief, depression is not a middle-aged, menopausal phenomenon. Recent studies show a substantial rise in the onset of depression at puberty and a peak that occurs between ages 25 and 35 for both men and women, though incidence is substantially higher in women. Onset actually declines beyond age 35, implying that, as Weissman put it, “if you can make it to 50 you can pretty much look past depression and ahead to your dementias.” The studies also show that depression is most damaging in the sensitive child-bearing years of young women.

Depression and Other Health Complications

Dr. Myrna Weissman

Science is only now coming to grips with the significance of these data. Given depression’s early onset, we now recognize that people live with the debilitating disorder far longer than with heart disease, for example, or most cases of diabetes. Indeed, the World Health Organization ranks unipolar depression number one in years lived with disability.

Weissman also noted that when women of child-bearing age are affected the impact is increased substantially. Children of depressed parents have a two to threefold increased risk for the illness, according to studies conducted by Weissman’s group. They also are more likely to experience earlier onset, around age 15, and to account for a major share of the small but significant number of cases among pre-pubescent children. They may then suffer the effects for a lifetime.

We’ve known that depression amplifies a number of general health problems, Weissman said, but it’s now becoming clear that the illness has a more devastating social impact than was previously thought. We can only imagine how it affects developing countries ravaged by AIDS and/or war. And it gets worse. The studies show that the effect remains robust across multiple generations; a grandparent with major depression may be an even stronger predictor for familial depression than is a parent.

The good news, according to Weissman, is that we’ve learned a lot about treating depression and other psychiatric conditions, with drugs and psychotherapy, and that outreach can overcome reluctance to seek treatment. But we need resources to conduct effective outreach and deliver treatment, and health insurance parity would certainly be a good start.

Myrna Weissman is a member of the National Academy of Sciences’ Institute of Medicine, and a Fellow of The New York Academy of Sciences.

Also read: Psychedelics to Treat Depression and Psychiatric Disorders

Landing on Eros Unearthed Even More Mysteries

Astronomers had never before found an asteroid that had left the main “belt” between Mars and Jupiter and approached earth’s orbit…until now.

Published March 1, 2002

By Robert Zimmerman

On ordinary days, the control room for a deep-space mission is rather sedate: data stream in, routine commands stream out, no one need raise his voice. But February 12, 2001, was no ordinary day for the technicians controlling NASA’s Near Earth Asteroid Rendezvous (NEAR)-Shoemaker spacecraft. Some punched calculators madly, while others ran from computer monitor to computer monitor, shouting numbers, trying to find out what was happening. Nearby, television crews aimed cameras at the scrambling engineers, capturing their every motion. Pandemonium had replaced the serene orderliness.

The NEAR team had brought this chaos on themselves. In a bold flourish to end their successful mission, the spacecraft’s science and engineering teams at the Johns Hopkins University Applied Physics Lab in Laurel, Maryland, sent NEAR-Shoemaker toward a landing on the surface of Eros, the asteroid it had circled for a year. Never mind that the probe had been built as an orbiter and had no landing mechanism of any kind. Even if NEAR wound up shattering into a thousand pieces, the images it would send in its final moments would make the stunt worthwhile.

Two members of NEAR’s imaging team, Joseph Veverka, professor of astronomy at Cornell University, and Mark Robinson of Northwestern University, huddled in front of a computer to marvel at the high-resolution images coming from space. Veverka was amazed by the absence of craters in the close-up pictures of the asteroid’s surface, and Robinson was impressed by the numerous boulders of all shapes and sizes.

Hungrily Consuming Information

The spacecraft descended at a leisurely four miles per hour, and the images grew in detail and complexity. The investigators hungrily consumed each bit of information, fully expecting the data stream to end abruptly at the moment of impact. Several technicians watched as their computer programs counted the altitude down to zero. Then one of the flight engineers yelled, incredulously, “Totally nominal––we’ve got a signal!” Robert Farquhar, the mission director, shouted, “Hold that signal!”

NEAR-Shoemaker had not only touched the surface of Eros, it had come through the impact seemingly whole and in operation. It was as if the controllers had rolled an egg across a gravel field without even cracking the shell. Although no more images could be transmitted, NASA allowed the mission an extension of several weeks to enable the craft to gather and radio back additional data about the chemical make-up of the spacecraft’s landing site.

After accomplishing the first rendezvous with an asteroid, the first orbit of an asteroid and the first landing on an asteroid, the investigators in charge of the NEAR-Shoemaker mission now have compiled a wealth of information about a heretofore shadowy subject –– the bits of planetary debris that inhabit the middle reaches of the solar system.

The data and images from the mission have already helped answer innumerable questions about asteroids and how they figure in the birth and formation of the solar system. But more interesting, perhaps, was what NEAR-Shoemaker did not tell scientists. As extraordinary as the landing was, the last-second images paralleled many of NEAR-Shoemaker’s other discoveries. For every question that was settled, another conundrum was unexpectedly uncovered.

“These [images] leave us with mysteries that will have us scratching our heads for years to come,” Veverka said.

A Place in Space

Even before the NEAR-Shoemaker mission, Eros had been one of the most studied asteroids. Its orbit ranges from 105 million miles to 165 million miles from the sun; that means on occasion it comes within 10 million miles of the earth. Astronomers have long used those close approaches as a valuable measuring stick. The earth’s distance from the sun and the mass of the earth-moon system were measured using positions triangulated with the help of Eros. What’s more, the regular visits enable astronomers to study the asteroid from the earth with relative precision.

The first asteroid was discovered in 1801 by Giuseppe Piazzi, a professor of mathematics and astronomy at the University of Palermo in Sicily. Piazzi had been surveying a part of the solar system between Mars and Jupiter in hopes of spotting a planet thought to lie there. Those hopes were based upon the Titius-Bode Law, a simple mathematical rule that could reproduce the orbital distances of the planets then known with surprising accuracy; that law predicted a planet at a distance of about 260 million miles from the sun.
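For readers who want to see the arithmetic, here is the usual modern statement of the rule and the slot it leaves between Mars and Jupiter; the unit conversion (1 astronomical unit, the earth-sun distance, is about 93 million miles) is supplied here for illustration and is not spelled out in the article.

$$ a_n = 0.4 + 0.3 \times 2^{\,n} \ \text{AU}, \qquad n = -\infty, 0, 1, 2, \ldots $$

The planets from Mercury through Uranus fall near the distances this sequence gives. The gap between Mars (n = 2, about 1.6 AU) and Jupiter (n = 4, about 5.2 AU) corresponds to n = 3:

$$ a_3 = 0.4 + 0.3 \times 2^{3} = 2.8 \ \text{AU} \approx 2.8 \times 93 \approx 260 \ \text{million miles}, $$

which is the “missing planet” distance where Piazzi’s new object turned up.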

After tracking a bright object across the background stars for more than a month, Piazzi calculated its position and found that its orbit closely corresponded with the location of the “missing planet.” On February 12, 1801 –– 200 years to the day before NEAR’s landing on Eros –– Piazzi announced his discovery. A new planet had been found, one he called Ceres, after the Roman goddess of the harvest.

A Point in the Sky

Piazzi’s fame was short-lived. Once other astronomers began observing Ceres they discovered that, unlike other planets, this one presented no discernable disk. It was, like a star, a point in the sky. The name “asteroid” (meaning “starlike”) stuck. The next year the German astronomer Heinrich Olbers found another asteroid in much the same orbit as Ceres. Hundreds of asteroids had been spotted by the time the German Gustav Witt and the Frenchman Auguste Charlois independently discovered Eros on the same night in 1898.

Eros, however, marked a first: Astronomers had never before found an asteroid that had left the main “belt” between Mars and Jupiter and approached earth’s orbit. And it is large, measuring some 21 miles long and eight miles wide. Although the total number of known asteroids exceeds 10,000, astronomers have identified only 250 or so near-earth asteroids, as those with orbits like that of Eros are called.

No asteroid is known to be on a collision course with earth, but impacts have occurred throughout geological history –– asteroid impacts are implicated in mass extinctions and in creating the craters that formed lakes in Canada and elsewhere. Very small bits of asteroids hit the earth all the time. They’re called meteorites once they land.

Geologists have collected thousands of meteorites. Some meteorites are composed of carbon-rich minerals and look like soot; others are almost pure iron. But the majority –– some 80 percent –– are what geologists call ordinary chondrites. Such rocks are stony in appearance and largely made up of silicate minerals, such as olivine and pyroxene.

A Model Mission

Rather than get fleeting images of many asteroids, the NEAR mission, launched in 1996, was designed to gain an extraordinary amount of information about just one. (The name of the mission was changed to NEAR-Shoemaker to honor the planetary geologist Eugene Shoemaker, who died in 1997.) The mission also was to be a model of efficiency: rather than roar to the target asteroid in one quick arc, the spacecraft would swing past the earth to get a gravitational boost. Along the way, NEAR-Shoemaker zipped through the asteroid belt and past Mathilde, a C-type asteroid.

Mathilde proved to be a bit of a surprise: a jagged, irregularly shaped, 33-mile-wide body darker than charcoal, it turned out to be only slightly denser than ice. Since the carbon-rich material that the asteroid is thought to be made of is far denser than this, planetary geologists believe Mathilde is nothing more than a gravel pile of primordial material loosely stuck together. But the fly-by of Mathilde was too fast to obtain detailed spectra.

NEAR-Shoemaker approached Eros in December 1998, and controllers sent the command that would slow it enough to be captured in an orbit. With so little gravitational pull (an astronaut on the surface could throw a rock fast enough to reach escape velocity), Eros was more of a point to maneuver about than a world to orbit. But at the very moment the spacecraft was supposed to settle into orbit around Eros, an engine failed to burn and the probe shot past.

That could well have been the end of the mission. But engineers found a way to correct the engine problem and re-aim the spacecraft. NEAR made an extra orbit of the sun so its path could be brought back to Eros 14 months later, on February 14, 2000.

Unprecedented Challenge

After settling into an orbit around the 21-mile-long, peanut-shape asteroid, NEAR-Shoemaker kept a careful distance. It was a matter of wise discretion, since orbiting such a strangely shaped object with such a tiny gravitational field was in itself an unprecedented challenge. And because of Eros’s bent and elongated shape and its rotation through a five hour and 15 minute “day,” the relative speed between spacecraft and asteroid ranged between two and 15 miles per hour, and was never the same from orbit to orbit. If ground controllers were not careful, the spacecraft could get whacked as the nose of the asteroid swung by.
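To give a sense of scale for that tiny gravitational field, here is a rough back-of-the-envelope estimate of Eros’s escape velocity; the mass and mean-radius figures are approximate values assumed for illustration, not numbers given in the article.

$$ v_{\mathrm{esc}} = \sqrt{\frac{2GM}{r}} \approx \sqrt{\frac{2 \times \left(6.67\times10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}}\right) \times \left(6.7\times10^{15}\ \mathrm{kg}\right)}{8.4\times10^{3}\ \mathrm{m}}} \approx 10\ \mathrm{m/s} \approx 22\ \mathrm{mph} $$

That is roughly the speed of a briskly thrown ball, which is why an astronaut really could loft a rock off Eros, and why the spacecraft’s relative speeds around the asteroid were measured in single-digit miles per hour.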

For about two months, then, the spacecraft circled more than 100 miles above the surface, employing its camera, laser altimeter, magnetometer, infrared spectrometer, and x-ray/gamma-ray detectors to obtain a comprehensive view of the end pointed toward the sun. In mid-April the spacecraft moved inward, spending the next five months in orbits as low as 22 miles. Then, in August, ground controllers lifted NEAR-Shoemaker upward to a higher orbit so that scientists could get global views of the other end, now in daylight.

Of the many intriguing and distinct geological features spotted during this orbital reconnaissance, the most notable was the giant saddle-like feature. Data from the laser altimeter suggests that the feature is actually a crater, though strangely shaped. A more normal-looking large crater –– some three miles in diameter and a half-mile deep — dominates the asteroid’s other side.

Few Small Craters

Indeed, the size of the large craters gouged into Eros’s surface was perhaps less surprising than the absence of small ones. Unlike the moon and other solar system bodies –– where the relative proportions of large and small craters stay roughly the same no matter how closely you look –– Eros lacks many craters less than 100 yards in diameter. “I am amazed at how devoid the surface is of small craters,” Veverka said.

Instead of small craters, investigators saw just the opposite: boulders everywhere, in all sizes and shapes. Some are rounded. Others have sharp angular facets. In fact, the entire surface of the asteroid seems covered with a layer of pulverized dust and debris of unknown depth. In some areas, such as in the large saddle, the layer appears thick enough to completely blanket and fill older craters. The photographs also revealed grooves, troughs, pits, ridges and fractures, similar to what was seen on the asteroid Ida, imaged by the Galileo spacecraft in 1993.

“These are generally very old features, and suggest the existence of fractures in the deep interior,” says Veverka. The ridges, one of which wraps one-third of the way around the asteroid, average about 30 feet high and 300 feet wide. Their existence suggests that Eros has an internal structure and is therefore a consolidated body and not a rubble pile like Mathilde. In other words, if you gave Eros a push, it would move away from you as a unit, rather than dissolve into a cloud of gravel.

The strangest features spotted by the close-up photos were what appeared to be extremely smooth ponds of material at the base of some craters, as if the dust and dirt on the crater slopes had flowed downward and pooled at the bottom. “Some process we don’t understand seems to sort out the really fine particles and move them into the lowest spots,” notes Veverka.

New Conclusions

Not only is Eros a solid hunk, close-up views reveal that its composition is remarkably uniform and evenly distributed. In fact, Eros appears incredibly bland, with little color variation anywhere on its surface. “The very small color differences lend support to Eros being all the same composition,” says the planetary scientist Clark Chapman of the Southwest Research Institute in Boulder, Colorado, and a member of the NEAR-Shoemaker science team.

That means the ground-based spectroscopy suggesting that Eros was a differentiated body –– with hemispheres composed of minerals that had separated due to melting –– was wrong. In fact, the data NEAR-Shoemaker has collected calls into question many of the conclusions that have been made about the composition of asteroids. Astronomers believed that Eros and all other S-type asteroids were geologically distinct from ordinary chondrite meteorites; on close inspection, NEAR has shown Eros to be nothing more than one large ordinary chondrite.

Many investigators now believe that such S asteroids –– which make up the majority of asteroids in the inner part of the solar system –– might well be the source of most meteorites. In fact, the difference in spectra between S asteroids and ordinary chondrites might be more a function of rotation than substance: as asteroids rotate, their irregular surface distorts their spectrum.

A Daring Finish

Rather than simply shut NEAR-Shoemaker off, mission director Robert Farquhar suggested a more daring finish: Why not try to land the orbiter on the surface of Eros? Not only would such a landing enable investigators to get some high-resolution images that would have been impossible to obtain otherwise, the feat would teach ground controllers the best techniques for landing spacecraft on such low-gravity objects, a skill that future space navigators will surely need.

On its way down, NEAR-Shoemaker snapped 69 high-resolution images of Eros’ surface, resolving details less than an inch across. Just before impact, the last two pictures caught the edge of one of the sand ponds. Though the pond appeared smooth –– as in more distant photographs –– small stones were seen peeking up through the fine dust. Those final photographs raised more questions than they answered.

NEAR-Shoemaker recorded 160,000 photographs –– imaging surface features as small as a foot across. It will take years for the investigators working on the mission to digest it all. Just as the Viking missions to Mars informed the study of that planet for a generation, it may take decades before planetary scientists get a set of asteroid data that is richer or more detailed. Even so, NEAR-Shoemaker gave astronomers a wealth of data on just one asteroid. Whatever conclusions astronomers may draw from NEAR must be tempered with the knowledge that asteroids come in many sizes, shapes and compositions. Any definitive conclusions apply only to Eros.

The First Close and Detailed Look at an Asteroid

Nonetheless, this first close and detailed look at an asteroid gave humanity its first tantalizing glimpse at the very earliest birth pangs of a planet. The flow of material down the slopes of craters, the crumbling of boulders, and the pooling of material into sand ponds are merely the processes by which an irregularly shaped object slowly rounds itself off into a spherical planet.

Ancient and worn by its billion-year journey through the black emptiness of space, Eros has slowly been chiseled by impact after impact, then shaped by the slow, inexorable pull of its tiny gravity. In this dim, dark and silent environment, nature has –– like the seed in an oyster from which pearls will grow –– relentlessly built Eros up from nothing. From a similar seed grew our earth.

As things stand now, however, the best summary of what we really know about Eros and asteroids comes from Veverka, who spoke freely at a press conference immediately after the landing. Again and again, Veverka told reporters, “We really don’t understand what’s going on.”

Also read: To Infinity: The New Age of Space Exploration

About the Author

Robert Zimmerman is author of Genesis, the Story of Apollo 8, published by Four Walls Eight Windows, and The Chronological Encyclopedia of Discoveries in Space, published by Oryx Press.

Supporting Dissident Scientists in Cuba

As part of the Academy’s continued efforts to advance human rights, a representative recently visited Cuba to advocate for imprisoned dissident scientists.

Published March 1, 2002

By Fred Moreno, Dan Van Atta, Jill Stolarik, and Jennifer Tang

Image courtesy of andy via stock.adobe.com.

A representative of The New York Academy of Sciences’ (the Academy’s) Committee on Human Rights of Scientists traveled to Cuba in late November to visit the physics faculty at the University of Havana. He also met with political dissidents and provided moral support to the wife of Dr. Oscar Elias Biscet, a physician who has been imprisoned for publishing a medical report deemed to be “antigovernment.”

In an attempt to assess the present status of human rights issues among scientists in Cuba, Dr. Eugene M. Chudnovsky, Distinguished Professor of Physics at Herbert Lehman College, the City University of New York, met with two dissidents –– an economist and an electrical engineer –– who were previously imprisoned for their political views. They are not permitted to hold official jobs, and both have illnesses for which they need medical supplies.

Chudnovsky also met with Elsa Morejon, the wife of Biscet, who is serving a three-year prison term for his medical report entitled “Rivanol –– A Method to Destroy Life.” The report documented a 10-month study at the Municipal Hospital of Havana, where the drug had been given to thousands of women.

In the report, Biscet found that 60% of the fetuses survived the procedure, which is supposed to kill the fetus after the first trimester. He wrote that surviving babies were left to die by the attending physicians. Dr. Biscet charged that Rivanol was being promoted as a way to keep Cuba’s birth rate low.

Barred from Professional Jobs

Dr. Eugene M. Chudnovsky

Morejon was Chief Nurse at the Havana Hospital of Endocrinology prior to 1998, but neither dissidents nor their immediate families are allowed to hold professional jobs in Cuba. Dr. Biscet is being held in a high-security prison in the province of Holguin, 800 km from Havana. It is a three-day journey for his wife, who is allowed to visit only once a month for a two-hour guarded conversation. Morejon told Chudnovsky her husband has lost some teeth and is in serious need of medical attention. Chudnovsky said he is attempting to assist Dr. Biscet through a number of diplomatic channels.

During his visit, Chudnovsky delivered a talk on Macroscopic Quantum Tunneling at the University of Havana and met with 20 of the school’s 70 physics professors. He also visited the Institute of Materials Science, which is associated with the Physics Department, and toured the Institute of Molecular Biology.

He reported widely varying conditions at the Cuban universities. Most modern and best equipped was the Institute of Molecular Biology. Cuban Premier Fidel Castro believes biotechnology is Cuba’s path to prosperity, according to the hosts, and the institute does both research and production for hospitals in Europe as well as Cuba. Some scientists there are nuclear physicists who switched fields when Russian support for Cuban nuclear research ended.

Good Research Despite Extreme Poverty

Elsa Morejon

The average professor’s salary is about $25 a month, he said, and almost $4 of it goes to buy ration cards that enable Cubans to obtain 5 kg of rice and 10 kg of beans. Since all apartments belong to the government and rent is 10 percent of salary, he said “most professors and university administration live with parents.”

Despite the extreme poverty, he noted that some Cuban professors appear to be doing good research. “Experimentalists are trying to switch to cheap, soft condensed matter physics of sand piles, turbulence, etc.,” Chudnovsky said. “Their primitive electromechanical devices, interfaced with 15-year-old computers, surprise by their ingenuity.”

Chudnovsky said he believes the American Physical Society (APS) and allied scientific organizations should support their Cuban colleagues by providing scientific journals, which are now occasionally sent via e-mail from friends in Europe. He said he also will encourage the APS leadership to visit physics departments in Cuba and explore possible routes of cooperation.

“We are doing everything we can to support our members in Cuba,” commented Svetlana Stone Wachtell, director of the Academy’s Human Rights of Scientists program, “and to encourage our members throughout the world to engage in a professional exchange with their colleagues in Cuba.”

Also read: Supporting Scientists and Human Rights in Cuba

The Structural Design Of The Twin Towers

One of the structural engineers of the Twin Towers reflects on the destruction of the 9/11 terrorist attacks.

Published January 1, 2002

By Linda Hotchkiss Mehta

The Twin Towers circa March 2001. Image courtesy Jeffmock, GNU Free Documentation License, via Wikimedia Commons. No changes were made to the original work.

Although he lost many friends on September 11, Academy Member Leslie Robertson is thankful to be among the fortunate New Yorkers who did not lose family members or coworkers, as did thousands of others. Still, the shock and grief he felt during and after the attacks were perhaps akin to the horror of suddenly losing two dear children.

For Robertson, now Director of Design at Leslie E. Robertson Associates, Consulting Structural Engineers, the World Trade Center has been a central part of his professional life –– the defining project that launched a distinguished career –– since the early 1960s. Together with then-partner John Skilling and architect Minoru Yamasaki, Robertson and his team conceived and helped develop the structural designs for five of the seven buildings in the WTC complex, including the 110-story Twin Towers.

An active member of the Academy’s Human Rights of Scientists Committee, Robertson was in Hong Kong on September 11 discussing a new skyscraper when he first received word that a plane had hit the WTC’s north tower. Everyone believed that it had been a helicopter or other small aircraft. He then was able to reach his wife, Saw-Teen See, an Academy Member and engineer in her own right, who reported the seriousness of the event and that the second tower had been struck. He rushed to his room to prepare for a return to New York.

The Structural Strength of the Towers

After turning on the TV and registering the shock of the dreaded images of death and destruction, Robertson said his memory of the following hours is somewhat blurred. “You wanted to reach out and stop it,” he recalled, “but there was nothing you could do.”

Although he’s still plagued with thoughts about “what we might have done differently,” Robertson acknowledged in an interview that –– as many Members and other colleagues have told him –– the structural strength of the towers allowed them to stand long enough for perhaps 25,000 occupants to escape after each of the Boeing 767 aircraft crashed into them. The north tower was struck between the 94th and 99th floors at 8:45 a.m. and did not collapse until 10:28 a.m.; the south tower, which was impacted at a lower level, between the 78th and 84th floors, was the first to collapse, at 9:59 a.m., 53 minutes after the second aircraft struck.

“When I started work on this project, the tallest building I’d worked on had only 22 floors,” Robertson said. “The WTC was the first of a new kind of high-rise building.” Aware of the military aircraft that hit the Empire State Building in a dense fog in 1945, Robertson said, “I thought we should consider the structural integrity that would be needed to sustain the impact of a (Boeing) 707 –– the largest aircraft at that time.”

Achieving Structural Strength

Leslie Robertson

Robertson added, “We didn’t have suicidal terrorists in mind.” Rather, he was considering an accident: a 707 flying at low speed, most likely lost in a dense fog. To achieve the structural strength, Robertson and his team designed the Twin Towers as steel boxes around hollow steel cores. An unusually large number of rigid, load-bearing columns of hollow-tube steel –– each only 14 inches wide and set just 40 inches on center –– supported the Towers’ walls.

Because the 767s were traveling at high speeds, were somewhat larger than 707s and each carried about 80 tons of jet fuel, Robertson said, “the energy that was absorbed by the impact was not less than three times, and probably as much as six times, greater than the impact we had considered.

“The idea that someone might plant a plastic explosive or the like somewhere in the structure was considered in the design. The structure was redundant –– two-thirds of the columns on one face of each of the two towers were removed (by the aircraft) and yet the buildings were able to stand. But it was the combination of the impact from the speeding aircraft and the burning jet fuel –– both the kinetic and petrochemical energy released –– that ultimately brought them down.”
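
Robertson’s three-to-six-fold figure follows from the physics of impact: kinetic energy grows in proportion to an aircraft’s mass but with the square of its speed. As a rough, illustrative check only –– the masses and speeds below are assumptions for the sake of arithmetic, not values from Robertson’s analysis –– the relevant relationship is

E = \tfrac{1}{2} m v^{2}, \qquad \frac{E_{767}}{E_{707}} = \frac{m_{767}}{m_{707}}\left(\frac{v_{767}}{v_{707}}\right)^{2}

An aircraft roughly 20 percent heavier striking at about 70 percent greater speed would thus deliver about 1.2 × 1.7² ≈ 3.5 times the kinetic energy, the lower end of the range Robertson cites; a larger speed difference pushes the ratio toward the upper end, before the burning fuel is even considered.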

Impact on Future Design

Robertson said he doubts that the attacks will have a major impact on the structural design of new tall structures. “If you design buildings as fortresses that can withstand anything, then the terrorists will just avoid the fortresses,” he said. “There are plenty of other, smaller buildings that could be targets, and the threat of chemical or biological weapons is an even greater concern.

“Structural engineering is applied science. If a ceiling sags or a lobby is too drafty, life goes on. But structural reliability has been high; building collapses are rare. When they do occur, they’re usually caused by natural events –– wind or water or the ground shaking. I don’t believe we should engineer against the kind of event that happened on September 11, much less the impact and fire that could be created by the much larger Boeing 747 or the new Airbus A380.”

Robertson concluded that the solution lies in confronting the root causes of hatred among mankind: “There’s no end to the number of ways that man can do harm to man.”

Also read: Saving Lives in the Aftermath of Sept 11 Attack

The Ethics of Surveillance Technology

In the wake of the Sept. 11 attacks there’s been more emphasis on protecting public places and tracking terror threats. But what are the ethics of this?

Published January 1, 2002

By Fred Moreno, Dan Van Atta, Jill Stolarik, and Jennifer Tang

Image courtesy of Kate via stock.adobe.com.

Picture yourself living each day under the watchful eye of a network of surveillance cameras that track your movements from place to place. Every time you enter a large building or public space, your facial features are compared with those in a database of known criminals and terrorists. Do you feel safer knowing that someone, somewhere is watching?

This may sound farfetched, or something out of George Orwell’s dystopian novel 1984, but closed circuit TVs (CCTVs) –– like those being widely used in the United Kingdom –– and facial recognition systems are just two of the many well developed technologies the government and private companies are considering to bolster security. The Pentagon issued a request for new security proposals in the wake of the September 11 terrorist attacks and, already, new anti-terrorism laws have expanded the government’s surveillance powers.

Complex technological security measures are “coming on faster than lawmakers and the public can process and evaluate them,” said Susan Hassler, editor-in-chief of the IEEE Spectrum and moderator of a recent media briefing on surveillance technology at The New York Academy of Sciences (the Academy). Sponsored by the Academy and the IEEE Spectrum, the briefing mirrored the debate now being waged in the Congress, the Pentagon, the media –– and on the streets.

A New Manhattan Project

To sift through the myriad security ideas, Michael Vatis, director of the Institute for Security Technology Studies at Dartmouth College, issued “a clarion call for a new Manhattan Project.” Vatis proposed that security experts from industry, academia and government be asked to assess and recommend available surveillance technologies.

“I urge that we develop a mechanism to bring together expertise from across different fields to develop a research and development agenda to counter the threats now facing us,” Vatis said. Such an effort is even more urgent in light of the Pentagon’s recently published security technology “wish list,” he added.

Biometrics, a technology used for analysis and quantification of the physical features of an individual, is already “on the radar” of law enforcement and airport security companies. Facial recognition is one aspect of biometrics that could be deployed in counter-terrorism efforts. “The cornerstone of our defense against crime and terror is our ability to identify and deter those who pose a threat to public safety,” said Joseph Atick, chairman and CEO of the Visionics Corp., a leader in the biometrics field.

Atick said facial recognition systems could be used in airports. As passengers pass through security gates, the systems could capture an image of each face, analyze its features and produce a unique, 84-byte computer code to describe it.
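
To make the matching step concrete, the sketch below shows, in schematic form, how a compact face template might be screened against a watchlist using a simple distance threshold. It is a hypothetical illustration only: the feature extractor, template format and threshold here are assumptions, not the proprietary 84-byte encoding Atick describes.

```python
import hashlib

import numpy as np

TEMPLATE_LENGTH = 84  # length of the compact face code; the real encoding is proprietary


def encode_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a real feature extractor.

    A deployed system would locate facial landmarks and derive a compact
    template from them; here the pixels are deterministically hashed into a
    fixed-length vector purely so the matching logic can be demonstrated.
    """
    seed = int.from_bytes(hashlib.sha256(image.tobytes()).digest()[:4], "little")
    rng = np.random.default_rng(seed)
    return rng.random(TEMPLATE_LENGTH)


def is_match(probe: np.ndarray, reference: np.ndarray, threshold: float = 0.15) -> bool:
    """Declare a match when the normalized distance between two templates is small."""
    distance = np.linalg.norm(probe - reference) / np.sqrt(len(probe))
    return distance < threshold


def screen_passenger(image: np.ndarray, watchlist: list[np.ndarray]) -> bool:
    """Compare one captured face against every template on the watchlist."""
    probe = encode_face(image)
    return any(is_match(probe, ref) for ref in watchlist)
```

In practice the threshold would be tuned to balance false alarms against missed detections, a trade-off that bears directly on the civil liberties concerns raised below.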

Vatis said this technology is an adjunct to security measures already in place such as X-rays, bag checks and metal detectors. Unlike a person scanning a crowd, he said, this technology “delivers security in a non-discriminatory fashion — free of prejudices.”

Increasingly Pervasive and Invasive Surveillance

Barry Steinhardt, associate director of the American Civil Liberties Union, said he was troubled not only by the specter of increasingly pervasive and invasive surveillance technologies, but also by the danger that government and industry leaders could, under pressure to act, invest in technologies that don’t work and instead provide a false sense of security. “As we look at any technology that may be introduced into society, we have to ask: Does it improve security? How much does it threaten our liberties? And do the benefits outweigh the risks?”

While facial recognition systems may or may not ever be implemented widely, we can look across the Atlantic to study the effects of a surveillance technology that’s been adopted with enthusiasm. Over the past decade, Britons have welcomed the installation of CCTVs in public places, work spaces and homes. Estimates are that some 2 million CCTVs are now scattered throughout the country, said Stephen Maybank, of the department of computer science at the University of Reading in the U.K.

The British fervor for CCTV comes from the belief that the cameras deter criminal activity, a contention that some studies support. The London Underground alone is laced with 4,000 cameras, and the sheer numbers of CCTVs pose problems: how does one store all the data and how can one find a particular image amongst all the data that’s stored?
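
The scale of that storage problem is easy to underestimate. The back-of-the-envelope estimate below uses the 4,000-camera figure for the London Underground; the per-camera bitrate and the retention period are illustrative assumptions chosen only to show the order of magnitude.

```python
# Rough estimate of the storage burden of a large CCTV network.
# The camera count comes from the article; the bitrate and retention period are assumptions.
CAMERAS = 4_000
BITRATE_MBPS = 1.0       # assumed average compressed video stream per camera
RETENTION_DAYS = 31      # assumed retention policy

bytes_per_day = CAMERAS * BITRATE_MBPS * 1e6 / 8 * 86_400
terabytes_per_day = bytes_per_day / 1e12
terabytes_retained = terabytes_per_day * RETENTION_DAYS

print(f"~{terabytes_per_day:.0f} TB recorded per day, ~{terabytes_retained:.0f} TB held at any time")
```

Even under these modest assumptions the archive grows by tens of terabytes a day, which is why finding one particular image among everything stored is as hard a problem as the storage itself.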

Better and Cheaper Cameras

Improvements are coming in CCTV technology that will further encourage their use, said Maybank. “Cameras are becoming better and cheaper; they will soon work on low power and will be easy to install –– some are reduced to the size of a thumb. Software for people-tracking and behavior recognition also is improving. And large, coordinated camera networks are coming that will enable the analysis and description of people as they move over large areas.”

Closer to home and on a much smaller scale, anecdotal reports about CCTVs point to drawbacks in their use as crime stoppers. Robert Freeman, executive director of the New York State Committee on Open Government, reported that some residents and shopkeepers on the perimeter of New York City’s Washington Square Park believe the installation of CCTVs in the park simply pushed crime to the fringes of the area.

New ideas will continue to emerge on how best to protect ourselves from future threats. Government’s challenge will be to select the best of the alternatives, technologies that pose the least threat to our civil liberties, and to knit them together to form an invisible shield –– without creating a technological version of the Emperor’s new clothes.

Also read: The Ethics of Developing Voice Biometrics

Tuberculosis: A Potential 21st Century Plague

Due to developing resistance to certain drugs, tuberculosis has reemerged as problematic for public health professionals.

Published January 1, 2002

By Linda Hotchkiss Mehta

Recent deaths from inhalation of the anthrax bacterium, coming in the wake of September’s terrorist attacks in the United States, have focused widespread public attention on the potential for biochemical terrorism. Potential agents of mass destruction being discussed range from the smallpox virus to the bubonic/pneumonic plague bacterium to a variety of toxic chemical agents.

While concern about the lethal potential of biochemical terrorism is warranted, many public health experts believe insufficient attention is being paid to an equally deadly germ whose spread is already a global pandemic: Mycobacterium tuberculosis. Unlike anthrax, which is not contagious and can be readily treated with antibiotics, tuberculosis (TB) is highly infectious and there is no effective treatment for some multi-drug-resistant (MDR-TB) strains.

The TB threat is not new: In the early 19th century TB was so prevalent in England that it accounted for nearly one quarter of all deaths in the country. But 20th-century advances in treatment and public health have engendered a complacency that can tempt us to think we have put such scourges behind us.

Consider this: fully one-third of the world’s population is currently infected with tuberculosis. This amounts to 2 billion persons harboring Mycobacterium tuberculosis. And some MDR-TB strains are no more curable than the bacillus was in 1820.

A Patient Killer

TB remains, after HIV, the leading cause of young-adult death from infectious disease. Nevertheless, we have several advantages over the Victorians, one of which is that the survivors among them passed on some natural immunity to us, their descendants. We also are better housed and fed, and effective treatments now exist against most strains. We can quantify the risk better, and are much better at preventing hospital-based infection. Genetic research has identified TB-susceptibility genes in humans, and it is now possible to identify the mutations in the bacterium that make it drug resistant.

The 21st century has its own disadvantages, however, including much greater mobility of a large portion of the world’s population. Pharmaceutical companies fund most of the new drug research, and marketplace pressures might discourage them from pursuing TB drugs, as most of the need is found in poorer countries. In any case, no new drugs or vaccines to either treat patients or contain the spread of TB are on the horizon in the next five years. The HIV/AIDS epidemic means that a much greater percentage of those persons exposed to TB will develop active cases, which in turn complicates treatment of the HIV infection and leads to a higher rate of premature death.

Drug resistance has been a problem since drugs were first used 50 years ago. An estimated 35 percent of people don’t take their medications correctly, whether for TB or any other ailment. This is true across socioeconomic and educational demographic lines, making it hard to predict which patients will be noncompliant.

Combination Therapy

The patient who is being treated with several antibiotics and decides to only take one at a time to cut down on side effects is providing the bacillus with ideal conditions for becoming progressively resistant to a series of medications. Use of combination therapy (two or three antibiotics in one capsule) has been demonstrated in Europe to be an effective way to circumvent this behavior, but this approach is only slowly catching on in the United States.

In March, the Royal Society of Medicine hosted a conference, Tuberculosis Drug Resistance: From Molecules to Macro-Economics, the proceedings of which will be published in Volume 953 of the Annals of the New York Academy of Sciences along with papers from another conference, New Vistas in Therapeutics: From Drug Design to Gene Therapy.

“Multi-drug-resistant tuberculosis has a 50 percent mortality rate and costs at least $10,000 per patient to treat,” according to Peter Davies, one of the principal organizers of the meeting. “It is of more concern than other infectious diseases, except perhaps malaria, because TB itself is so common around the world. Also, TB can ‘incubate’ in the human body for decades, so infection caught now may erupt into active disease any time from six weeks to 90 years.”

The World Health Organization (WHO) has identified the world’s TB “hot spots”— 80 percent of the incident cases are found in just 22 countries. Mario Raviglione, coordinator of TB Strategy and Operations in the Stop TB Department of WHO in Geneva, reported that MDR-TB has been a major problem in the countries of the former Soviet Union. Newly identified areas with a high prevalence of MDR-TB are found in China, Iran and Russia.

The Situation in India

Zarir Udwadia, a consultant chest physician at three of Bombay’s private hospitals, described the situation in India, where half of the world’s TB is found and less than 1 percent of the gross domestic product is spent on health. In India, social factors, poverty, poor prescribing practices and uncontrolled sales of anti-TB drugs have contributed to a crisis in MDR-TB incidence. DOTS (directly observed treatment, short course) programs were introduced in 1992 and show heartening improvements in detection and cure rates, but are not likely to have an impact on existing MDR cases.

“Tuberculosis is a major cause of morbidity and mortality in sub-Saharan Africa, with an incidence rate of 259 per 100,000 population in the region,” said Alwyn Mwinga of the University of Zambia School of Medicine. “An increase in the number of TB cases has occurred in the last 15 years, much of them attributable to co-infection with HIV.”

In spite of these increases, the rates of MDR-TB in Africa are much lower than those in Russia. Overcrowding in Russian prisons, where the number of inmates has risen to seven times the Western European norm, means that beds in dormitories are used in three shifts.

Baroness Vivien Stern, of the International Center for Prison Studies at the Law School of King’s College London, reports estimates of MDR-TB in Russia that range between 20 percent and 40 percent. WHO guidelines recommend a 75 percent cure rate for MDR-TB to control the disease within a community. Current data indicate that cure rates in Russia are as low as 5 percent.

An Insidious Bacterium

American satirical cartoon against trailing skirts as vectors of disease. First published in Puck, August 8, 1900. Image courtesy of Wikimedia Commons.

M. tuberculosis, discovered by Robert Koch in 1882, has a heavy lipid coat, which probably enables the bacterium to resist the onslaught of the body’s immune defenses, and a slow rate of reproduction (dividing every 18 to 20 hours, compared with less than one hour for many other bacteria). The first effective treatment wasn’t discovered until 1943, when Selman Waksman identified streptomycin. Within 25 years, 11 more medications were available to combat TB, of which only a few were truly “first line,” including isoniazid and rifampin.

Unfortunately, the bacterium had also made great progress by then, and drug-resistant strains had developed. The tiny size (4 microns) of the bacillus means that it can gain access to the deepest recesses of the lungs. Once the bacillus is lodged there, macrophages engulf it and begin to digest it, exposing some of its inner components on the macrophage’s surface in the process, and then transport it to the lymph nodes, where T lymphocytes are stimulated to produce lymphokines, which in turn encourage the macrophages to become more aggressive.

As a result of these actions, the mycobacteria will stop multiplying 95 percent of the time, but they remain in the body indefinitely, awaiting a weakening of the host’s immune system. HIV and AIDS have provided an opportunity for many infections to move from dormancy to an active state, as have cancer and the immunosuppressive drugs used in organ transplantation.

Cell Wall Biosynthesis

Most antibiotics work by affecting cell wall biosynthesis: We now understand these processes at the genetic level through structure–function analysis using recombinant DNA techniques. A drug that acts through inhibition of cell wall biosynthesis must be present long enough to be assured of an opportunity to act when the process it affects is under way. The slow reproductive rate of M. tuberculosis can, therefore, limit the effectiveness of these drugs.

Adrian Hill, professor of Human Genetics at the Wellcome Trust Centre for Human Genetics at the University of Oxford, reported on a two-stage, genome-wide linkage study in families from Gambia and South Africa to search for regions of the genome containing tuberculosis-susceptibility genes. Markers on chromosomes 15q and Xq showed evidence of linkage to tuberculosis, and an X chromosome susceptibility gene may contribute to the large number of males with tuberculosis in many populations.

Paul Farmer, of Harvard University Medical School, stressed the transnational quality of the spread of MDR-TB by describing the discovery of a patient from Massachusetts with pan-resistant TB. Because this patient had been working in Peru, Farmer and his team went to Lima to pursue the source of this strain of the bacillus. They detected the identical strain of TB and ultimately treated 74 patients who had been written off as “incurable.” The team achieved an 85 percent cure rate.

Education Is Essential

Farmer advocates implementation of local solutions that can respond appropriately to specific community circumstances.

Another contributor, Len Doyal, professor of Medical Ethics at the Royal London School of Medicine and Dentistry, explored the morality of one such solution: coercion and detention. He felt these policies could be part of an acceptable strategy, but that counterbalancing programs, including educational efforts to de-stigmatize TB and efforts to undermine the causes of world poverty, must be in place.

In light of evidence that MDR-TB is already a global pandemic, conference participants expressed concern that global organizations and individual nations lack the will to provide the necessary resources to combat the tuberculosis epidemic in time to forestall a major crisis. Let this meeting be our warning.

Also read: The New Age Threat of Tuberculosis