
Talking Physics: String Theory’s Dangling Claims

Theoretical physicist and Columbia University professor Brian Greene delves into the intense rivalry between loop quantum gravity and string theory, and how it ties back to Einstein.

Published January 1, 2004

By Rich Kelley


As philosopher Paul Feyerabend once noted, science moves more rapidly when there are several competing approaches to a problem. Much of the excitement in theoretical physics today surrounds the intense rivalry between loop quantum gravity (LQG) and string theory.

Both theories aspire to achieve the Holy Grail of modern physics: the unification of general relativity and quantum mechanics. They both have their champions and detractors. Both have had difficulty finding experimental verification of their predictions, yet both claim to be on the verge of discovering results that will do just that.

Loop quantum gravity’s best-known proponent is Lee Smolin, author of Three Roads to Quantum Gravity and a research physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada. String theory’s most high-profile current spokesperson is Brian Greene.

Professor of Physics and Mathematics at Columbia University, Greene came to The New York Academy of Sciences (the Academy) on Oct. 16, 2003, for an informal conversation as part of his whirlwind tour to promote the NOVA series based on The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory, his bestselling 1999 book. Four years in the making and produced on a budget of $3.5 million, “The Elegant Universe” premiered on PBS on Tuesday, Oct. 28, 2003, with a two-hour block comprising “Einstein’s Dream” and “String’s the Thing,” and concluded with a one-hour program, “Welcome to the 11th Dimension,” on November 4.

String Theory’s Core

Much as he did in his first NOVA segment, Greene began his talk by briefly describing the history of the conflict: how Einstein revolutionized our worldview by conceiving of space and time as a continuum, spacetime; how scientists in the 1920s and ‘30s invented quantum mechanics to describe the microscopic properties of the universe; and how these two radical worldviews clashed.

String theory promises to reconcile these two views of spacetime – Einstein’s vast fabric and the jittery landscape of quantum mechanics. The NOVA animations made clear just how visually captivating this story is. One showed an “elevator of the imagination” traveling to floors smaller by 10 orders of magnitude to illustrate the transition from the placid Einsteinian realm of large things down to the turbulent, frenetic world of atoms, electrons, protons and quarks.

It is at this lowest level of matter that we find the core contribution of string theory. At the smallest of scales, inside a quark, lies not a point but a fundamentally extended object that looks like a string. A vibrating loop of string. At the microscopic level the world is made up of music, notes, resonant vibrating frequencies. This is the heart of string theory.

The Mechanism for Reconciling Relativity

What enables these strings to become the mechanism for reconciling relativity and the laws of the microworld is that these strings have size. In particle physics, point particles have no size at all. In principle you could measure and probe at any scale. But if particles have length, then it makes no sense to believe you can probe into areas that are smaller than the length of the particle itself.

String theory posits that at the smallest of scales, the smallest elements do have a defined length, what is called the Planck length, “a millionth of a billionth of a billionth of a billionth of a centimeter” (10⁻³³ centimeter). For analogous reasons loop quantum gravity also posits a smallest unit of space. Its minimum volume is the cube of the Planck length.
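For reference (a standard result quoted here for context, not something from Greene's talk), the Planck length is built from the fundamental constants of quantum mechanics, gravity, and relativity, and the LQG minimum volume mentioned above is its cube:

```latex
\ell_P \;=\; \sqrt{\frac{\hbar G}{c^{3}}} \;\approx\; 1.6 \times 10^{-33}\ \text{cm},
\qquad
V_{\min} \;\sim\; \ell_P^{\,3} \;\approx\; 4 \times 10^{-99}\ \text{cm}^{3}.
```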

In one of the most memorable animations from the show, we travel again to the lowest, most turbulent level of the microscopic world, the world of point particles. But much as the landscape of a map smooths out when you zoom to a larger scale, when we define the lowest level as one in which the smallest elements have a finite size, the spatial grid rises above the turbulence and the jitters calm down.

Worlds of Dimensions

One of the most provocative components of string theory is its insistence that the world has more than three spatial dimensions. String theory calls for six or seven extra dimensions. In the television series, Greene focuses more on how there could be these extra dimensions, rather than on why they need to exist.

Greene’s book acknowledges that the need for the extra dimensions is primarily driven by the mathematics behind string theory. In order for the negative probabilities in the quantum mechanical calculations to cancel out, the strings need to vibrate in nine independent spatial directions. Of course, these are not dimensions as we know them. Greene instructs us to “imagine that these extra dimensions come not uniformly large, so that we could see them with our eyes, but small, tightly curled up. So small we just can’t see them.”

If any aspect of string theory is ripe for visual exploitation via animation, this is it. Many readers of the book will enjoy the series if only to get the chance to see what animated Calabi-Yau manifolds look like. In 1984 a number of string physicists identified the Calabi-Yau class of six-dimensional shapes as meeting the conditions the equations for the extra dimensions require.

The manifolds consist of overlapping and entwined doughnut shapes, each of which represents a separate dimension. If we zoom again into the microscopic world we can envision encountering curled up dimensions that look much like these Calabi-Yau manifolds – “simple rotating structures” in Greene’s description.

“That’s the basic idea of string theory. In a nutshell it requires the world to have more dimensions than we are familiar with.”

In an extensive question-and-answer exchange after his talk, Brian Greene amplified his ideas.

Experimental Verification

Elegant as it is, string theory has roused the ire of some physicists because it has thus far defied confirmation or refutation by experiment. Familiar with this complaint, Greene described what he considered several promising developments.

For a long time string theorists had thought that the extra dimensions must be as small as the Planck length and, therefore, beyond detectability. In the last few years, work has been done suggesting that some dimensions might be as big as 10⁻² cm. “That’s a size you can almost see with your eyes.” We haven’t seen them because the only force that could penetrate into these extra dimensions is gravity.
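The scenario Greene is describing here (large extra dimensions) has a simple schematic signature: with n extra dimensions of size R, Newton's inverse-square law would steepen at separations below R, where gravity's field lines can spread into the extra dimensions:

```latex
F(r) \;\propto\; \frac{1}{r^{\,2+n}} \quad (r \ll R),
\qquad\qquad
F(r) \;\propto\; \frac{1}{r^{2}} \quad (r \gg R).
```

This is why tabletop tests of gravity at sub-millimeter separations, and not just accelerators, bear on the question.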

Unfortunately, gravity is many powers of 10 weaker than the other forces, so its effects at these tiny scales fall below what physics laboratories can currently probe. However, experiments planned at CERN in 2007 may actually reveal the extra dimensions through their effect on gravity.

Greene’s current research involves looking for signatures of string theory in astronomical data. Proponents of loop quantum gravity are also looking to the stars for confirmation of their calculations. The Gamma-ray Large Area Space Telescope (GLAST), due to be launched in 2006, should be sensitive enough to detect, in the light from gamma-ray bursts, the verification LQG researchers seek.
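In one commonly quoted first-order form (a model-dependent estimate, not a settled prediction), the quantum-gravity effect LQG researchers hope to see is a tiny energy dependence in the arrival times of photons that have traveled a distance D, which is why gamma-ray bursts (bright, brief, and billions of light-years away) make the natural test:

```latex
\Delta t \;\sim\; \xi\,\frac{E}{E_{\text{Planck}}}\,\frac{D}{c},
\qquad
E_{\text{Planck}} = \sqrt{\frac{\hbar c^{5}}{G}} \;\approx\; 1.2 \times 10^{19}\ \text{GeV},
```

where ξ is a model-dependent coefficient of order one. The enormous distance D compensates for the minuscule ratio E/E_Planck.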

M-Theory

In recent years string theory has undergone a transformation. Much of this dates from a milestone event, the “Strings 1995” conference at the University of Southern California that marked the beginning of the Second Superstring Revolution.

It was there that Edward Witten delivered his startling finding that string theory requires 11 dimensions and that what had until then been viewed as five competing superstring theories are really all part of one superstring framework, which he called “M-Theory.”

What the “M” refers to is not clear. “Mysterious” is one proposed meaning, since how the framework relates the theories to each other has not been defined. M-theory also “inflates” strings into two-dimensional “branes” (from “membranes”) that could contain entire alternate universes.

And most important to its critics from LQG, M-theory is expected, as it develops, to enable string theory to be “background independent,” like LQG, so that it does not need to rely on a fixed background spacetime.

The Next Book

For all of their competitiveness, researchers in both string theory and LQG frequently speculate that they could be working on different paths toward what may be one unified theory. This may explain why Greene’s next book, The Fabric of the Cosmos: Space, Time, and the Texture of Reality, due from Knopf in February 2004, seems designed to encompass a range of theoretical possibilities.

He noted that his coverage of space and time in The Elegant Universe addressed only what was needed as background for his explanation of string theory. Many other aspects he left uncovered.

In his new book they get center stage as he probes how our fundamental ideas of space and time have changed in their nature and importance over the past century. If Greene’s knack for engaging broad audiences holds true, it will undoubtedly expand the ranks and enjoyment of those eager to follow the lively scramble to the ultimate Theory of Everything.


Healthy Approaches to Dealing with Stress

Neuroscientists say that a “healthy lifestyle” is perhaps the most effective prescription for dealing with chronic stress.

Published June 1, 2003

By Jeffrey Penn

Feeling stressed out? Anxious? Frustrated and angry? Looking for a way out?

Some significant advances in the neurosciences are revealing that stress is actually a complex relationship of internal and external factors, and that some relatively simple lifestyle changes can contribute to a sense of well-being and improve health.

“A healthy lifestyle is the best way to reduce stress,” according to Bruce S. McEwen, head of the Harold and Margaret Milliken Hatch Laboratory of Neuroendocrinology at the Rockefeller University in New York and co-author of the recently published The End of Stress As We Know It (Joseph Henry Press).

The notion that stress is the result of external pressures is incomplete, said McEwen, who summarized his book during a March 18 lecture at The New York Academy of Sciences (the Academy). Research now reveals how the body’s defense mechanisms are involved in keeping stress at bay, as well as how the body’s defense system breaks down from time to time.

When the body is working properly, a process known as “allostasis” helps individuals adapt to and survive the real or imagined threats that confront them in the course of everyday life. McEwen explained that the allostasis process is maintained by a complex network – including hormones, the autonomic nervous system, neurotransmitters in the brain, and chemicals in the immune system – in the body.

“When this network is working efficiently, its activity helps to mobilize energy reserves, promote efficient cardiovascular function, enhance memory of important events and enhance the immune defense towards pathogens,” McEwen said. Normally, the body is able to self-regulate the proper responses to external pressures, but occasionally it reaches a limit known as “allostatic overload.”


External Stress Factors

Many external pressures can contribute to allostatic overload, according to McEwen, such as conflicts at work or home, fears about war and terrorism, overworking, lack of sleep, economic difficulties, lack of exercise, excessive drinking and bad eating habits. Genetic risk factors, such as a predisposition for cardiovascular disease or diabetes, can also contribute to allostatic overload.

“If the imbalances in the body’s regulatory network persist over long periods of time, the result can lead to disease,” McEwen said. “Hardening of the arteries, arthritis, diabetes, obesity, depressive illness and certain types of memory loss are among the disorders that are accelerated by allostatic overload,” he added. He cited research indicating that long-term stress also affects the amygdala and the hippocampus, the regions of the brain that regulate fear, emotions and memory.

According to McEwen, “genes, early development, and life experiences all contribute to determining how the brain responds to environmental stresses.” Research has revealed that external factors in society also can influence health and disease commonly related to stress.

“In industrialized societies, allostatic overload occurs with increasing frequency at lower levels of education and income,” McEwen noted. He pointed out that mortality rates and levels of diseases associated with allostatic load are much higher among people of lower socioeconomic status. “A combination of lifestyle, perceptions of inequality and stressful life experiences appear to play a role,” he said.

Best Antidote: Healthy Lifestyle

What can be done to reduce allostatic load and the stress associated with it? Changes in lifestyle are the best remedy, according to McEwen. “Maintaining social ties with friends and family is one of the most important factors in reducing stress,” he said. “In addition, restorative sleep, and regular, moderate exercise are all important,” he added. “Regular, moderate exercise not only increases muscle utilization of energy, but also enhances formation of new nerve cells in areas of the brain that support memory.”

McEwen said that, in addition to individual responses to counteract allostatic overload and reduce stress, the private sector and policy makers also can contribute to well-being. “Government policies that recognize the impact of inequality, promote comprehensive health care and reduce smoking, and provide housing and community services are also very important,” he said.

Stress reduction is not only critical for individuals, he added, but for the health and welfare of the wider society as well. “By 2020, depression will be the second-leading cause of disease in this country,” he concluded.


Building a Big Future from Small Things

Nanotechnology has the potential to revolutionize our daily lives, and one aspect that makes this technology so promising and effective is its bottom-up approach.

Published October 1, 2002

By Charles M. Lieber

Nanotechnology has gained widespread recognition with the promise of revolutionizing our future through advances in areas ranging from computing, information storage and communications to biotechnology and medicine. How might one field of study produce such dramatic changes?

At the most obvious level nanotechnology is focused on the science and technology of miniaturization, which is widely recognized as the driving force for the advances made in the microelectronics industry over the past 30 years. However, I believe that miniaturization is just one small component of what makes and will make nanoscale science and technology a revolutionary field. Rather, the heart of the revolution is the paradigm shift from top-down manufacturing, which has dominated most areas of technology, to a bottom-up approach.

The bottom-up paradigm can be defined simply as one in which functional devices and systems are assembled from well-defined nanoscale building blocks, much like the way nature uses proteins and other macromolecules to construct complex biological systems. The bottom-up approach has the potential to go far beyond the limits of top-down technology by defining key nanometer-scale metrics through synthesis and subsequent assembly – not by lithography.

Producing Structures with Enhanced and New Functions

Of equal importance, bottom-up assembly offers the potential to produce structures with enhanced and/or completely new function. Unlike conventional top-down fabrication, bottom-up assembly makes it possible to combine materials with distinct chemical composition, structure, size and morphology virtually at will. To implement and exploit the potential power of the bottom-up approach requires that three key areas, which are the focus of our ongoing program at Harvard University, be addressed.

First and foremost, the bottom-up approach requires nanoscale building blocks with precisely controlled and tunable chemical composition, structure, morphology and size, since these characteristics determine their corresponding physical (e.g. electronic) properties. From the standpoint of miniaturization, much emphasis has been placed on the use of molecules as building blocks. However, challenges in establishing reliable electrical contact to molecules have limited the development of realistic schemes for scalable interconnection and integration without key feature sizes being defined by the conventional lithography used to make interconnects.

My own group’s work has focused on nanoscale wires and, in particular, semiconductor nanowires as building blocks. This focus was initially motivated by the recognition that one-dimensional nanostructures represent the smallest-dimension structures capable of efficiently routing information, whether in the form of electrical or optical signals. Subsequently, we have shown that nanowires can also exhibit a variety of critical device functions, and thus can be exploited as both the wiring and the device elements in functional nano-systems.

Control Over Nanowire Properties

Currently, semiconductor nanowires can be rationally synthesized in single crystal form with all key parameters – including chemical composition, diameter and length, and doping/electronic properties – controlled. The control that we have over these nanowire properties has correspondingly enabled a wide range of devices and integration strategies to be pursued. For example, semiconductor nanowires have been assembled into nanoscale field-effect transistors, light-emitting diodes, bipolar junction transistors and complementary inverters – components that potentially can be used to assemble a wide range of powerful nano-systems.

Tightly coupled to the development of our nanowire building blocks have been studies of their fundamental properties. Such measurements are critical for defining their limits as existing or completely new types of device elements. We have developed a new strategy for nanoscale transistors, for example, in which one nanowire serves as the conducting channel and a second, crossed nanowire serves as the gate electrode. Significantly, the three critical device metrics are naturally defined at the nanometer scale in assembled crossed nanowire transistors:

(1) a nanoscale channel width determined by the diameter of the active nanowire;

(2) a nanoscale channel length defined by the crossed gate nanowire diameter; and

(3) a nanoscale gate dielectric thickness determined by the nanowire surface oxide.

These distinct nanoscale metrics lead to greatly improved device characteristics such as high gain, high speed and low power dissipation. Moreover, this new approach has enabled highly integrated nanocircuits to be defined by assembly.
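As a minimal sketch (with hypothetical names and illustrative numbers, not Lieber's actual model), the point of the three metrics above is that every critical dimension of the transistor is inherited from the geometry of the two nanowires rather than from lithography:

```python
# Toy model of a crossed-nanowire FET: each of the three device metrics is
# fixed by the nanowire geometry itself, not by a lithographic patterning step.
from dataclasses import dataclass

@dataclass
class CrossedNanowireFET:
    active_diameter_nm: float  # conducting-channel nanowire diameter
    gate_diameter_nm: float    # crossed gate-nanowire diameter
    oxide_thickness_nm: float  # native surface oxide on the nanowire

    @property
    def channel_width_nm(self) -> float:   # metric (1)
        return self.active_diameter_nm

    @property
    def channel_length_nm(self) -> float:  # metric (2)
        return self.gate_diameter_nm

    @property
    def dielectric_thickness_nm(self) -> float:  # metric (3)
        return self.oxide_thickness_nm

# Illustrative values only:
fet = CrossedNanowireFET(active_diameter_nm=20.0, gate_diameter_nm=30.0,
                         oxide_thickness_nm=2.0)
print(fet.channel_width_nm, fet.channel_length_nm, fet.dielectric_thickness_nm)
```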

Hierarchical Assembly Methods

Second and central to the bottom-up concept has been the development of hierarchical assembly methods that can organize building blocks into integrated structures. Obtaining highly integrated nanowire circuits requires techniques to align and assemble them into regular arrays with controlled orientation and spatial location. We have shown that fluidics, in which solutions of nanowires are directed through channels over a substrate surface, is a powerful and scalable approach for assembly on multiple length scales.

In this method, sequential “layers” of different nanowires can be deposited in parallel, crossed and more complex architectures to build up functional systems. In addition, the readily accessible crossed nanowire matrix represents an ideal configuration since the critical device dimension is defined by the nanoscale cross point, and the crossed configuration is a naturally scalable architecture that can enable massive system integration.

Third, combining the advances in nanowire building block synthesis, understanding of fundamental device properties and development of well-defined assembly strategies has allowed us to move well beyond the limit of single devices and begin to tackle the challenging and exciting world of integrated nano-systems. Significantly, high-yield assembly of crossed nanowire structures containing multiple active cross points has led to the bottom-up organization of OR, AND, and NOR logic gates, where the key integration did not depend on lithography. Moreover, we have shown that these nano-logic gates can be interconnected to form circuits and, thereby, carry out primitive computation.
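To illustrate what "primitive computation" from assembled gates means (pure Boolean logic in Python, with none of the device physics modeled), the OR, AND, and NOR gates named above suffice to compose, for example, a half adder:

```python
# Truth-table demonstration: composing a half adder from OR, AND, and NOR,
# the gate set assembled from crossed-nanowire junctions.
from itertools import product

def OR(a: bool, b: bool) -> bool:  return a or b
def AND(a: bool, b: bool) -> bool: return a and b
def NOR(a: bool, b: bool) -> bool: return not (a or b)

def half_adder(a: bool, b: bool):
    carry = AND(a, b)
    # XOR built from the three gates: (a OR b) AND NOT(a AND b)
    total = AND(OR(a, b), NOR(carry, carry))
    return total, carry

for a, b in product([False, True], repeat=2):
    s, c = half_adder(a, b)
    print(f"a={int(a)} b={int(b)} -> sum={int(s)} carry={int(c)}")
```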

Tremendous Excitement in the Field


These and related advances have created tremendous excitement in the nanotechnology field. But I believe it is the truly unique characteristics of the bottom-up paradigm, such as enabling completely different function through rational substitution of nanowire building blocks in a common assembly scheme, that ultimately could have the biggest impact in the future. The use of modified nanowire surfaces in a crossed nanowire architecture, for example, has recently led to the creation of nanoscale nonvolatile random access memory, where each cross point functions as an independently addressable memory element with a potential for integration at the 10¹²/cm² level.
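The addressing scheme is easy to picture with a toy model (a hypothetical sketch, not the published device): a crossbar memory is a two-dimensional array in which each cross point stores one independently addressable bit, and the quoted density makes the capacity arithmetic striking:

```python
# Toy crossbar memory: each (row, col) cross point is one addressable bit.
class CrossbarMemory:
    def __init__(self, rows: int, cols: int):
        self.bits = [[0] * cols for _ in range(rows)]

    def write(self, row: int, col: int, value: int) -> None:
        self.bits[row][col] = 1 if value else 0  # flip one cross point

    def read(self, row: int, col: int) -> int:
        return self.bits[row][col]

mem = CrossbarMemory(rows=4, cols=4)
mem.write(2, 3, 1)
assert mem.read(2, 3) == 1

# At 10**12 cross points per cm^2, a 1 cm^2 array would hold 10**12 bits:
print(10**12 / 8 / 10**9, "gigabytes per square centimeter")  # 125.0
```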

In a completely different area, we have shown that nanowires can serve as nearly universal electrically based detectors of chemical and biological species with the potential to impact research in biology, medical diagnostics and chem/bio-warfare detection. Lastly, and to further highlight this potential, we have shown that nanoscale light-emitting diode arrays with colors spanning the ultraviolet to near-infrared region of the electromagnetic spectrum can be directly assembled from emissive electron-doped binary and ternary semiconductor nanowires crossed with non-emissive hole-doped silicon nanowires. These nanoscale light-emitting diodes can excite emissive molecules for sensing or might be used as single photon sources in quantum communications.

The bottom line – focusing on the diverse science at the nanoscale will provide the basis for enabling truly unique technologies in the future.



About the Author

Charles M. Lieber moved to Harvard University in 1991 as a professor of Chemistry and now holds a joint appointment in the Department of Chemistry and Chemical Biology, where he holds the Mark Hyman Chair of Chemistry, and the Division of Engineering and Applied Sciences. He is the principal inventor on more than 15 patents and recently founded a nanotechnology company, NanoSys, Inc.

Genetic Privacy: A War Fought on Many Fronts

While genetic testing offers benefits from disease detection to casualty identification, it also creates a slew of legal and ethical questions.

Published June 1, 2002

By Mary R. Anderlik and Mark A. Rothstein

In 1995, U.S. Marine Lance Corporal John C. Mayfield III and Corporal Joseph Vlacovsky — along with many other U.S. service men and women — were told that DNA samples would be collected as part of a medical examination. Such testing had become routine since December 16, 1991, when the deputy secretary of the U.S. Department of Defense issued a memorandum launching its ambitious program to collect DNA samples from all members of the armed forces, active and reserve.

Unlike their comrades, however, Mayfield and Vlacovsky refused to provide the samples. Commented Vlacovsky: “I expected to give up some privacy when I joined the military, but not something I held so close.” Mayfield worried about the potential for abuse, given a historical record that included exposure of troops to radiation, LSD, and Agent Orange. A legal battle cry in the nascent war over genetic privacy was sounded.

DNA is not difficult to obtain. Initially, “collection” consisted of a finger prick to produce a pair of half-dollar-sized blots of blood on paper cards and a swab of the inside of a cheek to scrape off epithelial cells. Cheek swabs were eventually discontinued due to storage problems. Samples are transported to the military’s DNA repository, a large warehouse in Gaithersburg, Maryland. A small cadre of workers at the warehouse processes and catalogs the samples, which are stored on trays in gigantic walk-in freezers. Over its history, the repository has been accessed over 700 times in support of human identification; the current inventory is 3.6 million specimens.

More Accurate Accounting of Casualties

The military contends that the DNA collection and identification program serves a laudable goal. Operation Desert Storm served as a catalyst for creation of the repository. The fragmentary remains of some soldiers who perished in that war proved difficult to identify by traditional means, such as dental records and fingerprints. DNA typing allows a more accurate accounting of casualties and brings closure for families caught in limbo between grief and hope. This was recently demonstrated, for example, when specimens in the DNA repository were used to identify some victims of the September 11, 2001, terrorist attack on the Pentagon.

Mayfield and Vlacovsky were not persuaded by these arguments and resisted sharing their genetic material even when that resistance led to court-martial. The two marines asserted that the collection, storage and use of their DNA violated their constitutional rights to due process, privacy, freedom of expression and freedom from unreasonable searches and seizures.

In legal terms, the search and seizure charge had the best prospects for success. Mayfield and Vlacovsky conceded that the military’s stated purpose for the registry was benign. But they claimed the risk remained that in the future the DNA samples would be used for other purposes, such as diagnosis of hereditary diseases or disorders, and that information would be disseminated to potential employers, insurers and others with an interest in the information.

A federal district judge in Hawaii refused to consider such “hypothetical” future uses and misuses. The judge concluded that the military had a compelling interest in obtaining DNA and that the “minimal intrusion” of taking blood samples and oral swabs, while a seizure, was not unreasonable.

What Constitutes ‘Privacy’?

While the case was on appeal, the marines were honorably discharged, without providing blood or tissue samples, and the judgment of the district court was vacated as moot. Hence, the legality of mandatory DNA collection by the military has yet to be decided.

The military case is unique in some respects, but the case of the two marines raises issues of general significance. In the context of genetics, the concept of “privacy” can encompass at least four categories of concern: 1) access to bodies and personal spaces; 2) access to information by third parties and any subsequent disclosure of this information by third parties; 3) third-party interference with personal choices and denial of opportunities; and 4) ownership of biological materials and personal information.

Advocates of restrictions on the collection and use of genetic material and information generally focus on what is different about DNA — and about the technologies that allow human beings to use DNA for purposes that might include commercial exploitation and discrimination. One feature of genetic information often cited as distinctive is its predictive nature. Many genetic tests detect a disorder that has not yet manifested in symptoms, or a mutation that puts a person at above average risk of a disease. Most genetic tests, however, are not sufficiently precise to allow prediction of the time of onset of disease, or the severity of a disease if and when it develops.

The Limits of Genetic Testing

Genetic testing for mutations associated with disease is of questionable value to the individual when neither cure nor prevention is possible. For example, many people choose not to be tested for the mutation that causes Huntington disease. When preventive care is available, such as with more frequent mammograms or prophylactic mastectomy as in the case of the BRCA1 and BRCA2 mutations associated with heightened risk of breast cancer, genetic testing may have considerable value.

Many people will be reluctant to undergo testing or participate in genetic research without assurances of confidentiality and protections against discrimination. Insurers and employers may be interested in information that is even crudely predictive of future disease and disability; the potential for unfairness to particular individuals may count for little given the potential cost savings from identification and exclusion of numerous high-risk individuals.

Flexible Concerns and Other Anxieties

The significance of genetic information for whole families, and not merely individuals, is also offered as evidence of the distinctiveness of genetic information. For inherited disorders, the revelation that one person is affected has implications for others who are biologically related; testing may also reveal a lack of biological relatedness (misattributed paternity), a trigger for another sort of problem.

Genetic testing also creates difficult dilemmas for those who are contemplating parenthood. Information related to any serious genetic disorder affects reproductive decision making in ways that are profound. The potential for disclosure of sensitive information and discrimination in such circumstances may add to a sense of confusion or distress in weighing the risks and benefits of information-seeking. It certainly increases the burden on those who are the bearers of knowledge and must consider the costs of its communication to siblings and descendants.

Genetic material also may reveal information beyond what was originally contemplated and serve purposes other than those for which it was originally obtained. With each advance in technology, DNA offers up more and more of its secrets. While many researchers and law enforcement professionals view this feature of DNA as a reason for preserving samples indefinitely, many privacy advocates view the same feature as a reason for prompt destruction following completion of the immediate analysis.

Stigmatization is another concern. Although genetic conditions do not excite the fears associated with infectious disease, the individual who is found to have a “genetic defect” may readily be viewed as a “genetic defective,” a person of lesser worth.

Nothing New?

While advocates of genetic privacy stress these distinctions, opponents of restrictions minimize the differences between genetic information and other kinds of personal information. Like the judge in the case of Mayfield and Vlacovsky, they may focus on the simplicity of the DNA collection process rather than the nature or potential uses of the DNA itself. They may argue that using DNA for identification purposes is not much different from using fingerprints for identification purposes — if we are comfortable with the latter practice, how can we object to the former?

Even the distinctiveness of genetic information as predictive is open to challenge. Cholesterol tests, frequently required by insurers in the medical underwriting process, are considered useful because of their predictive value. Again the question arises, if we permit insurers to review the results of cholesterol tests in medical records, is it illogical to object to similar practices in relation to predictive genetic tests?

The New York State Task Force on Life and the Law, in its report Genetic Testing and Screening in the Age of Genomic Medicine, concludes that while “genetic testing shares characteristics with other forms of medical testing,” DNA-based testing is distinctive in its “long-range predictive power” and its capacity to reveal sharing of genetic variants “at precise and calculable rates,” among other things.

Genetic Privacy in the Information Age

Genetic advances must be considered along with other developments, such as the advent of electronic record keeping, managed care, and the ongoing consolidation in the insurance, banking and health care sectors. Never before has information exchange been so easy or profitable. If documented cases of genetic discrimination are rare, this may be due to the infancy of the technology, and the influence of genetic privacy laws already in place.

Thus far, lawmakers have been most ready to address the consequences of new genetic technologies for health care, health insurance and employment. In health care, confidentiality has long been understood as a crucial precondition to the therapeutic relationship. In the Hippocratic Oath, the physician swears that “Whatever, in connection with my professional service, or not in connection with it, I see or hear, in the life of men, which ought not to be spoken of abroad, I will not divulge, as reckoning that all such should be kept secret.” Laws providing for the confidentiality of physician-patient communications limit disclosure by providers of health care, but they typically permit use of blanket releases by insurance companies and other third parties.

A majority of states now have laws that specifically relate to genetic privacy. The most comprehensive include general provisions covering genetic testing and the handling of genetic information. About half prohibit genetic testing without prior informed consent, subject to exceptions such as law enforcement, paternity determination, court order and anonymous research. These laws often contain a statement that genetic information is confidential, or “confidential and privileged,” meaning that it is protected from subpoena in a civil proceeding, although production can still be compelled by a court order. Disclosure of genetic information to a third party without written authorization is generally prohibited.

Genetic Privacy Laws

Genetic privacy laws often prohibit insurers and employers from requiring genetic testing as a condition of insurance or employment and from discriminatory use of any genetic information obtained. Privacy advocates have long argued that these protections are fairly meaningless if insurers and employers can persuade or pressure unsuspecting individuals into submitting to genetic testing or sharing genetic information.

Once a third party has possession of information, it is difficult to police its use. To address these problems, some states prohibit covered insurers and employers from even requesting genetic testing or genetic information. In the area of insurance, a major issue is breadth of application of these laws. Many states limit special privacy protections for genetic testing and information to health insurance, leaving individuals with few or no safeguards in their dealings with life, disability income and long-term care insurers, among others. States vary in the sanctions imposed for violations of privacy protections. In most states, a violation is a misdemeanor punishable by fine or jail time or both; a willful violation may be a felony.

Genetic privacy laws are typically silent on the issue of retention of biological specimens obtained or retained for the purposes of genetic testing. A few states require destruction of samples upon specific request, or after the purpose for which the sample was obtained has been accomplished. The New York law requires that the sample be destroyed at the end of the testing process or not more than 60 days after the sample is taken, unless a longer period of retention is expressly authorized. Laws that require destruction of samples typically include exceptions for research and law enforcement.

Children Evoke Thorny Issues

Genetic testing of children also has provoked heated discussion. Disagreement is sharpest where the testing is for an adult onset condition that cannot be prevented, ameliorated or cured by any action taken during childhood.

In such cases, it is hard to argue that testing confers any benefit on the child or the parents. The general rule is that parents control medical decision making for their children.

Similarly, thorny issues may arise in the context of adoption. Prospective adoptive parents may insist that a child undergo genetic testing for inherited disorders before they proceed with adoption, especially if a genetic link is or appears to be found for a serious mental illness.

Genetic information is increasingly being sought in other contexts. Defendants in personal injury lawsuits may be eager to prove that injuries resulted from the plaintiffs’ genetic defects rather than their own negligent conduct.

As noted above, state laws may declare that genetic information is privileged and hence protected from routine discovery in the investigational phase of a civil proceeding. However, a judge may order testing or disclosure of information if persuaded of its relevance.

For example, a defendant in a lawsuit arising out of an automobile accident sought to compel genetic testing of the plaintiff for Huntington disease, as a possible causal factor, and the court ordered the testing over the plaintiff’s objections.

Looking Forward

Genetic information is often very powerful in its ability to identify individuals or predict future health. But with its power comes the potential for harm — both through the mere disclosure of genetic information and through the use of the information to deny opportunities.

With regard to genetic privacy, if public policy has lagged behind the science, it is largely because the public understanding (and that of decision makers) has lagged behind as well.

Without broader public education about the promise and peril of genetic information, it will be impossible to develop sensible policies on genetic privacy. As H. G. Wells wrote in 1920: “Human history becomes more and more a race between education and catastrophe.” This observation is still true in the genetic age.



About the Authors

Mary R. Anderlik, Ph.D., received a J.D. from Yale Law School and is an Associate Professor at the Institute for Bioethics, Health Policy and Law, and in the Department of Medicine at the University of Louisville School of Medicine. Professor Mark A. Rothstein holds the Herbert F. Boehl Chair of Law and Medicine and is Director of the Institute of Bioethics, Health Policy and Law at the University of Louisville.

Infectious: The Return of Days Gone By?

While medical science has made tremendous strides in recent years, some diseases and viruses are re-emerging and creating new challenges for public health professionals.

Published January 1, 2001

By Allison L. C. de Cerreño


With winter’s arrival in New York, much of the concern over the West Nile Virus has disappeared – at least among the general population. However, new infectious diseases have emerged in recent years, and there is concern that we may be entering a period in which such diseases, thought to be a bane of the past, may come back to haunt us again.

Annually, infectious diseases kill 13 million people, and together are the leading cause of global fatalities. During the past 20 years, 30 new infectious diseases have been identified. Among these are Hantavirus pulmonary syndrome, which was identified in the United States in 1993 and carries an associated fatality rate of 50%; and Nipah virus, which in 1999 led to fatalities from severe encephalitis in 40% of those infected in Malaysia.

If one were to extend the timeline back two more decades, the deadly Marburg virus could be added to the list, having made its first appearance in Germany in 1967. Nine years later, the Ebola virus, an even more virulent cousin, appeared in what is now the Democratic Republic of the Congo. Shortly thereafter, in 1977, Legionnaires’ disease was identified in the United States.

Is this New?

So, is this really something new? Some of the emerging diseases of recent years are not really “new,” but better described, more prevalent, or occurring in previously unaffected locales. AIDS, for instance, was practically unheard of worldwide until the early 1980s, but it had existed in Africa for many years prior. Similarly, West Nile virus was isolated in Uganda in 1937 but made its appearance in the United States only last year. Others, like Escherichia coli O157:H7, which made its deadly debut in 1982, or the virulent strain of cholera (Vibrio cholerae) that struck India in 1992, are new forms of previously well-known disease agents.

However, something is happening that is leading to emerging diseases and to reemerging diseases, either in different forms or in new locations, and human behavior patterns are an important factor. An increase in global travel, for example, has created new and more rapid pathways of exposure. In the case of Lyme disease (1982), economic development has led to loss of natural habitat, placing humans near an ever-increasing population of deer and the ticks that carry the disease. And, increased use and misuse of antibiotics has led to drug resistant strains of tuberculosis, gonorrhea, and meningitis, to name a few.

What can be done? Public health efforts can be stepped up around the world, and more attention paid to these human factors. Certainly, more research is needed to understand these diseases and to search for vaccines and cures. However, we should keep in mind that infectious diseases have become less prevalent only over the past half century, and only in some countries. We should be thankful for this respite even as we brace for a possible return of days gone by.
