
eBriefing

DIMACS Workshop on the Dialogue on Reverse Engineering Assessment and Methods (DREAM)

Reported by
Don Monroe

Posted January 05, 2007

Overview

On September 7–8, 2006, researchers gathered at Wave Hill in New York to discuss the reverse engineering of biological networks. The Dialogue on Reverse Engineering Assessment and Methods, or DREAM, was organized by Gustavo Stolovitzky of IBM, Andrea Califano of Columbia University, and Jim Collins of Boston University.

This eBriefing summarizes the main issues facing this project, as well as some research that suggests the methods that will bring DREAM to fruition. After describing the biological context that gives these networks their importance, the eBriefing explains the need for experimental gold standards and how these standards might be identified, and explores what types of data and metrics are needed to test these standards. It then describes the types of network data that are being used for reverse engineering, such as in silico networks and synthetic networks. Other sections describe details of several reverse-engineering algorithms, and some of the types of missing data that are not yet considered in reverse engineering.

Web Sites

DREAM resources

Computational Biology & Medical Informatics
Gustavo Stolovitzky's research area at IBM.

DIMACS Workshop on Strategies for Reverse Engineering Biological Circuits
Homepage of the inaugural meeting for the DREAM project.

Multiscale Analysis of Genetic and Cellular Networks (MAGNet)
Based at Columbia University, one of the founding supporters of the DREAM project.

Tools for biological networks

A Collection of Artificial Gene Networks
A project by Pedro Mendes's group for generating artificial gene expression data.

Complex Pathway Simulator (Copasi)
A software application developed by the Mendes group at the Virginia Bioinformatics Institute and the Kummer group at EML Research for simulation and analysis of biochemical networks.

Cytoscape
A platform for visualizing molecular networks and integrating them with other information.

Geneways
A system for automatically extracting, analyzing, visualizing, and integrating molecular pathway data from the research literature.

geWorkbench
A software platform for integrated genomics and systems biology.

NetBuilder
A graphical tool for building logical representations of genetic regulatory networks.

RegulonDB
A database for the transcriptional network in E. coli.

tYNA
A tool developed in Mark Gerstein's lab at Yale for comparing the topological parameters of networks and their sub-networks.

Other critical assessment projects and related sites

Biomolecular Object Network Databank (BOND)
Unleashed Informatics offers commercial and open access versions of their databases.

Competitive Evaluation of Prediction Algorithms
A modeling competition organized to advance the algorithms and software for modeling chemical, biological, and medical data, with special emphasis on the prediction of physico-chemical properties and biological activities from molecular descriptors derived from the chemical structure. CoEPrA will also provide a reference database of modeling datasets that can be used to validate and compare new classification and regression algorithms.

Critical Assessment of Microarray Data Analysis
Aims to establish the state-of-the-art in microarray data mining.

Critical Assessment of Prediction of Interactions
A community-wide experiment on the comparative evaluation of protein-protein docking for structure prediction.

Database of Interacting Proteins (DIP)
The DIP database, based at UCLA, catalogs experimentally determined interactions between proteins.

The ENCODE Project: ENCyclopedia Of DNA Elements
An ongoing program sponsored by the National Human Genome Research Institute for testing and comparing existing methods to rigorously analyze a defined portion of the human genome sequence.

First International q-bio Conference on Information Processing in Cellular Signaling and Gene Regulation
An upcoming conference, to be held in Santa Fe, New Mexico, in August 2007.

GeneClass
Christina Leslie and collaborators at Columbia University have developed a predictive framework for studying gene transcriptional regulation in simpler organisms using this novel supervised learning algorithm.

Genetic Analysis Workshop
A collaborative effort among genetic epidemiologists to evaluate and compare statistical genetic methods.

Genome Annotation Assessment Project
A program at the Berkeley Drosophila Genome Project that uses known sample genome regions in Drosophila melanogaster to obtain an in-depth and objective assessment of the current state of the art in gene and functional site predictions in genomic DNA.

Human Protein Reference Database (HPRD)
A centralized platform to visually depict and integrate information pertaining to domain architecture, post-translational modifications, interaction networks and disease association for each protein in the human proteome.

MEDUSA (Motif Element Discrimination Using Sequence Agglomeration)
An integrative method for learning motif models of transcription factor binding sites developed in Christina Leslie's laboratory at Columbia University.

MIPS Mammalian Protein-Protein Interaction Database (MPPI)
A collection of manually curated protein-protein interaction data assembled from the scientific literature.

Molecular Interactions Database (MINT)
MINT focuses on experimentally verified protein interactions mined from the scientific literature. The curated data can be analyzed in the context of the high throughput data and viewed graphically.

ProCheck
Software for checking the stereochemical quality of a proposed protein structure.

Protein Structure Prediction Center
Hosts results of the CASP competitions.

SIGMOID
A database of cell-signaling pathways being assembled by a consortium of researchers at the Institute for Bioinformatics and Genomics at the University of California, Irvine; the Beckman Institute of the California Institute of Technology; and the bioengineering department at Johns Hopkins University.


Journal Articles

Biological Context

Basso, K., A. A. Margolin, G. Stolovitzky, et al. 2005. Reverse engineering of regulatory networks in human B cells. Nat. Genet. 37: 382-390.

Bond, G. L., W. Hu & A. Levine. 2005. A single nucleotide polymorphism in the MDM2 gene: from a molecular and cellular explanation to clinical effect. Cancer Res. 65: 5481-5484. Full Text

Hong, Y., X. Miao, X. Zhang, et al. 2005. The role of P53 and MDM2 polymorphisms in the risk of esophageal squamous cell carcinoma. Cancer Res. 65: 9582-9587.

Lind, H., S. Zienolddiny, P. O. Ekstrom, et al. 2006. Association of a functional polymorphism in the promoter of the MDM2 gene with risk of nonsmall cell lung cancer.

Margolin, A. A., I. Nemenman, K. Basso, et al. 2006. ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinformatics 7 Suppl. 1: S7. Full Text

Experimental Gold Standards

Ghazalpour, A., S. Doss, B. Zhang, et al. 2006. Integrating genetic and network analysis to characterize genes related to mouse weight. PLoS Genet. [Epub ahead of print] Full Text

GuhaThakurta, D., T. Xie, M. Anand, et al. 2006. Cis-regulatory variations: a study of SNPs around genes showing cis-linkage in segregating mouse populations. BMC Genomics 7: 235. Full Text

Perkins, T. J., J. Jaeger, J. Reinitz & L. Glass. 2006. Reverse engineering the gap gene network of Drosophila melanogaster. PLoS Comput. Biol. 2: e51. Full Text

Schadt, E. E. 2006. Novel integrative genomics strategies to identify genes for complex traits. Anim. Genet. 37 Suppl. 1: 18-23.

Schadt, E. E., J. Lamb, X. Yang, et al. 2005. An integrative genomics approach to infer causal associations between gene expression and disease. Nat. Genet. 37: 710-717.

Schadt, E. E. & P. Y. Lum. 2006. Reverse engineering gene networks to identify key drivers of complex disease phenotypes. J. Lipid Res. [Epub ahead of print]

Schadt, E. E., S. A. Monks, T. A. Drake, et al. 2003. Genetics of gene expression surveyed in maize, mouse and man. Nature 422: 297-302.

Zhu, J., P. Y. Lum, J. Lamb, et al. 2004. An integrative genomics approach to the reconstruction of gene networks in segregating populations. Cytogenet. Genome Res. 105: 363-374.

What Kinds of Data and How Much?

Brem, R. B. & L. Kruglyak. 2005. The landscape of genetic complexity across 5,700 gene expression traits in yeast. Proc. Natl. Acad. Sci. USA 102: 1572-1577. Full Text

Friedman, N., M. Linial, I. Nachman & D. Pe'er. 2000. Using Bayesian networks to analyze expression data. J. Comput. Biol. 7: 601-620.

Just, W. 2006. Reverse engineering discrete dynamical systems from data sets with random input vectors. J. Comput. Biol. 13: 1435-1456.

Lee, S. I., D. Pe'er, A. M. Dudley, et al. 2006. Identifying regulatory mechanisms using individual variation reveals key role for chromatin modification. Proc. Natl. Acad. Sci. USA 103: 14062-14067. Full Text

Mehrabian, M., H. Allayee, J. Stockton, et al. 2005. Integrating genotypic and expression data in a segregating mouse population to identify 5-lipoxygenase as a susceptibility gene for obesity and bone traits. Nat. Genet. 37: 1224-1233.

Pe'er, D., A. Regev, G. Elidan & N. Friedman. 2001. Inferring subnetworks from perturbed expression profiles. Bioinformatics 17 Suppl 1: S215-S224. (PDF, 192 KB) Full Text

Sachs, K., O. Perez, D. Pe'er, et al. 2005. Causal protein-signaling networks derived from multiparameter single-cell data. Science 308: 523-529.

Schilling, M., T. Maiwald, S. Bohl, et al. 2005. Computational processing and error reduction strategies for standardized quantitative data in biological networks. FEBS J. 272: 6400-6411.

Schilling, M., T. Maiwald, S. Bohl, et al. 2005. Quantitative data generation for systems biology: the impact of randomisation, calibrators and normalisers. Syst. Biol. (Stevenage) 152: 193-200.

Segal, E., M. Shapira, A. Regev, et al. 2003. Module networks: identifying regulatory modules and their condition-specific regulators from gene expression data. Nat. Genet. 34: 166-176.

Sheth, U. & R. Parker. 2003. Decapping and decay of messenger RNA occur in cytoplasmic processing bodies. Science 300: 805-808.

Metrics

Bernard, A. & A. J. Hartemink. 2005. Informative structure priors: joint learning of dynamic regulatory networks from multiple types of data. Pac. Symp. Biocomput. 2005: 459-470.

Jansen, R., H. Yu, D. Greenbaum, et al. 2003. A Bayesian networks approach for predicting protein-protein interactions from genomic data. Science 302: 449-453.

Kundaje, A., M. Middendorf, F. Gao, et al. 2005. Combining sequence and time series expression data to learn transcriptional modules. IEEE/ACM Trans. Comput. Biol. Bioinform. 2: 194-202.

Middendorf, M., E. Ziv, C. Adams, et al. 2004. Discriminative topological features reveal biological network mechanisms. BMC Bioinformatics 5: 181. Full Text

Smith, V. A., E. D. Jarvis & A. J. Hartemink. 2003. Influence of network topology and data collection on network inference. Pac. Symp. Biocomput. 2003: 164-175. (PDF, 551 KB) Full Text

Yu, J., V. A. Smith, P. P. Wang, et al. 2004. Advances to Bayesian network inference for generating causal networks from observational biological data. Bioinformatics 20: 3594-3603. (PDF, 463 KB) Full Text

In Silico Networks

Brazhnik, P., A. de la Fuente & P. Mendes. 2002. Gene networks: how to put the function in genomics. Trends Biotechnol. 20: 467-472.

Jamshidi, N., J. S. Edwards, T. Fahland, et al. 2001. Dynamic simulation of the human red blood cell metabolic network. Bioinformatics 17: 286-287. (PDF, 83 KB) Full Text

Mendes, P., W. Sha & K. Ye. 2003. Artificial gene networks for objective comparison of analysis algorithms. Bioinformatics 19 Suppl. 2: II122-II129. (PDF, 538 KB) Full Text

Ni, T. C. & M. A. Savageau. 1996. Application of biochemical systems theory to metabolism in human red blood cells. Signal propagation and accuracy of representation. J. Biol. Chem. 271: 7927-7941. Full Text

Ni, T. C. & M. A. Savageau. 1996. Model assessment and refinement using strategies from biochemical systems theory: application to metabolism in human red blood cells. J. Theor. Biol. 179: 329-368.

Synthetic Networks

Di Bernardo, D., M. J. Thompson, T. S. Gardner, et al. 2005. Chemogenomic profiling on a genome-wide scale using reverse-engineered gene networks. Nat. Biotechnol. 23: 377-383.

Gardner, T. S., D. di Bernardo, D. Lorenz & J. J. Collins. 2003. Inferring genetic networks and identifying compound mode of action via expression profiling. Science 301: 102-105.

Gardner, T. S., C. R. Cantor & J. J. Collins. 2000. Construction of a genetic toggle switch in Escherichia coli. Nature 403: 339-342.

Hasty, J., D. McMillen & J. J. Collins. 2002. Engineered gene circuits. Nature 420: 224-230.

Isaacs, F. J., D. J. Dwyer, C. Ding, et al. 2004. Engineered riboregulators enable post-transcriptional control of gene expression. Nat. Biotechnol. 22: 841-847.

Tegner, J., M. K. Yeung, J. Hasty & J. J. Collins. 2003. Reverse engineering gene networks: integrating genetic perturbations with dynamical modeling. Proc. Natl. Acad. Sci. USA 100: 5944-5949. Full Text

Yeung, M. K., J. Tegner & J. J. Collins. 2002. Reverse engineering gene networks using singular value decomposition and robust regression. Proc. Natl. Acad. Sci. USA 99: 6163-6168. Full Text

Algorithms for Reverse Engineering

Albert, R. & H. G. Othmer. 2003. The topology of the regulatory interactions predicts the expression pattern of the segment polarity genes in Drosophila melanogaster. J. Theor. Biol. 223: 1-18.

Kundaje, A., M. Middendorf, M. Shah, et al. 2006. A classification-based framework for predicting and analyzing gene regulatory response. BMC Bioinformatics 7 Suppl. 1: S5. Full Text

Middendorf, M., A. Kundaje, C. Wiggins, et al. 2004. Predicting genetic regulatory response using classification. Bioinformatics 20 Suppl. 1: i232-i240. (PDF, 256 KB) Full Text

Middendorf, M., E. Ziv & C. H. Wiggins. 2005. Inferring network mechanisms: the Drosophila melanogaster protein interaction network. Proc. Natl. Acad. Sci. USA 102: 3192-3197. Full Text

Laubenbacher, R. & B. Stigler. 2004. A computational algebra approach to the reverse engineering of gene regulatory networks. J. Theor. Biol. 229: 523-537.

Missing Ingredients

Devos, D., S. Dokudovskaya, F. Alber, et al. 2004. Components of coated vesicles and nuclear pore complexes share a common molecular architecture. PLoS Biol. 2: e380. Full Text

Devos, D., S. Dokudovskaya, R. Williams, et al. 2006. Simple fold composition and modular architecture of the nuclear pore complex. Proc. Natl. Acad. Sci. USA 103: 2172-2177. Full Text

Dokudovskaya, S., R. Williams, D. Devos, et al. 2006. Protease accessibility laddering: a proteomic tool for probing protein structure. Structure 14: 653-660.

Hernjak, N., B. M. Slepchenko, K. Fernald, et al. 2005. Modeling and analysis of calcium signaling events leading to long-term depression in cerebellar Purkinje cells. Biophys. J. 89: 3790-3806.

Leslie, D. M., B. Timney, M. P. Rout & J. D. Aitchison. 2006. Studying nuclear protein import in yeast. Methods 39: 291-308.

Loew, L. M. 2002. The Virtual Cell project. Novartis Found Symp. 247: 151-160.

Slepchenko, B. M., J. C. Schaff, I. Macara & L. M. Loew. 2003. Quantitative cell biology with the Virtual Cell. Trends Cell Biol. 13: 570-576.

Organizers

Andrea Califano, PhD

Columbia University
email | web site | publications

Andrea Califano is professor of biomedical informatics at Columbia University, where he leads several cross-campus activities in computational and systems biology. Califano is also codirector of the Center for Computational Biochemistry and Biosystems, chief of the bioinformatics division, and director of the Genome Center for Bioinformatics.

Califano completed his doctoral thesis in physics at the University of Florence, where he studied the behavior of high-dimensional dynamical systems. From 1986 to 1990, he was on the research staff in the Exploratory Computer Vision Group at the IBM Thomas J. Watson Research Center, where he worked on several algorithms for machine learning, including the interpretation of two- and three-dimensional visual scenes. In 1997 he became the program director of the IBM Computational Biology Center, and in 2000 he cofounded First Genetic Trust, Inc., to pursue translational genomics research and infrastructure-related activities in the context of large-scale patient studies with a genetic component.

James J. Collins, PhD

Boston University
email | web site | publications

Jim Collins is professor of biomedical engineering at Boston University. He is director of the Applied Biodynamics Laboratory and the Center for Biodynamics, and co-director of the Center for Advanced Biotechnology. His research focuses on developing nonlinear dynamical techniques and devices to characterize, improve, and mimic biological function. His specific interests include (1) modeling, designing and constructing synthetic gene networks; (2) reverse engineering naturally occurring gene regulatory networks; and (3) developing noise-based sensory prosthetics.

Collins completed his PhD in medical engineering at the University of Oxford. In 2003 he received a MacArthur "genius" award.

Gustavo Stolovitzky, PhD

IBM Computational Biology Center
email | web site | publications

Gustavo Stolovitzky is manager of the Functional Genomics and Systems Biology Group at the IBM Computational Biology Center in IBM Research. The Functional Genomics and Systems Biology group is involved in several projects, including DNA chip analysis and gene expression data mining, the reverse engineering of metabolic and gene regulatory networks, modeling cardiac muscle, describing emergent properties of the myofilament, modeling P53 signaling pathways, and performing massively parallel signature sequencing analysis.

Stolovitzky received his PhD in mechanical engineering from Yale University and worked at The Rockefeller University and at the NEC Research Institute before coming to IBM. He has served as Joliot Invited Professor at Laboratoire de Mecanique de Fluides in Paris and as visiting scholar at the physics department of The Chinese University of Hong Kong. Stolovitzky is a member of the steering committee at the Systems Biology Discussion Group of the New York Academy of Sciences.


Speakers

Joel Bader, PhD

Johns Hopkins University
email | web site | publications

Allister Bernard

Duke University
email | group web site

Diego di Bernardo, PhD

email | web site

Riccardo Dalla-Favera, MD

Columbia University
email | web site | publications

Mark Gerstein, PhD

Yale University
email | web site | publications

Boris Hayete

Boston University
email | web site | publications

Winfried Just, PhD

Ohio University
email | web site | publications

Arnold Levine, PhD

Institute for Advanced Study
email | web site | publications

Christina Leslie, PhD

Columbia University
email | web site | publications

Andre Levchenko, DEngSci

Johns Hopkins University
email | web site | publications

Leslie Loew, PhD

University of Connecticut Health Center
email | web site | publications

Thomas Maiwald

University of Freiburg
email | web site | publications

Pedro Mendes, PhD

Virginia Bioinformatics Institute at Virginia Tech
email | web site | publications

Ilya Nemenman, PhD

Los Alamos National Laboratory
email | web site | publications

Dana Pe'er, PhD

Columbia University
email | web site | publications

Theodore Perkins, PhD

McGill University
email | web site | publications

Michael Rout, PhD

The Rockefeller University
email | web site | publications

Michael Samoilov, PhD

Lawrence Berkeley National Laboratory
email | publications

Eric Schadt, PhD

Merck
email

Brandilyn Stigler, PhD

Ohio State University
email | web site | publications

Chris Wiggins, PhD

Columbia University
email | web site | publications


Don Monroe

Don Monroe is a science writer based in Murray Hill, New Jersey. After getting a PhD in physics from MIT, he spent more than fifteen years doing research in physics and electronics technology at Bell Labs. He writes on biology, physics, and technology.

Highlights

  • Researchers have explored biological pathways through painstaking research, using genetics, biochemistry, expression profiles, molecular biology, and disease states as clues.
  • The p53 tumor-suppressor pathway responds to many types of stress, and causes a variety of responses related to cancer and cell death.
  • Different stresses lead to different post-translational modifications of p53, which trigger different responses.
  • Reverse engineering identified a region of the network with potential targets for intervention in B-cell lymphomas.
  • Network information should be tied to a particular biological environment, rather than averaged over many types of cells.

Introduction

Throughout the molecular-biology era, biologists have been steadily uncovering the molecular events and interactions that govern diverse phenomena. Traditionally, they depict these interactions as chains or "pathways," although such a linear picture often oversimplifies the true networks.

These techniques have uncovered a wealth of information, including the detailed biological mechanisms by which molecules influence each other. As reverse engineers work to build more complex networks to describe the interactions, they rely on such techniques to test and validate their conclusions. In addition, the networks they infer are of little value unless they influence the mainstream understanding of both normal and diseased states, and suggest possible interventions.

How to dissect a pathway

In his keynote talk at the DREAM conference, Arnold Levine of the Institute for Advanced Study in Princeton, New Jersey, surveyed the traditional techniques for elucidating pathways. In particular, he focused on signal-transduction pathways, especially the one involving the p53 cancer-suppressor gene. Biologists have now clarified dozens of such pathways.

The genetic approach, Levine said, is the "simplest and most used" method. For example, researchers look for epistasis, in which a lethal mutation is rendered less lethal by a second mutation, either natural or artificial. "Almost always the first mutation is compensated by a second mutation because you have a protein–protein interaction," Levine said. These "second-site suppressors create a whole pathway of protein–protein interactions. This is like dual hybrid, but at the genetic level." A related technique looks for highly prevalent pairs of single-nucleotide polymorphisms, whose co-occurrence suggests an interaction between the proteins they encode.

For some interactions, such as post-translational modifications of proteins, a direct biochemical approach is needed. For example, chemists can alter a kinase so that it is not driven by ATP but by another, externally supplied species to clarify which molecules it targets. "This is a particular type of activity that you'll never get in an Affymetrix chip. They will miss all of the modifications. This is a particularly important thing to work out," Levine said. Metabolic pathways, he added, "have almost all been worked out by in vitro enzymology." Levine touched only briefly on the correlated expression of mRNAs, as probed for example by microarrays, since other speakers discussed these high-throughput experiments in detail.

"We are engineered to compensate" for knockouts, making it challenging to interpret response to extreme perturbations.

Levine cautioned that although knockouts and siRNA (small interfering RNA) manipulations are powerful ways to reduce expression, "it's quite clear that mice and other organisms have compensatory mechanisms" that partially make up for the eliminated molecules. "Especially in development, we are engineered to compensate," he said, making it challenging to interpret the response to these extreme perturbations.

Overexpression carries its own problems, Levine warned: "At high concentrations a pathway may exist that never exists at lower concentration."

Finally, Levine described the important role of disease states, which can be regarded as a kind of stress, in revealing pathways. "Cancer, in particular, has been very valuable," he said, and the associated mutations come in three types. First, dominant mutations of oncogenes cause protein kinases, for example, to be constitutively active and unresponsive to signals. Second, recessive tumor suppressor genes cause problems when both copies of the gene are mutated. Finally, Levine said, "any cancer may have mutations in three or four pathways."

Looking at a cancer suppressor

The p53 pathway is a signal-transduction pathway that orchestrates the response to various stresses, including hypoxia and damage to DNA or to spindles. A version of the pathway acts in lower organisms such as worms and flies to induce programmed cell death in germ-line cells, which are the only cells that divide in the mature animal.

In contrast, the somatic cells in vertebrates continue to divide throughout life, to varying degrees. This division allows these organisms to develop more complex structures than invertebrates do, but also permits the rogue cell multiplication that is cancer. In vertebrates, p53 regulates the stress response in both germline and somatic cells. Downstream, p53 affects not just apoptosis, but cell-cycle arrest, DNA repair, inhibition of angiogenesis and metastasis, and other responses that are still being clarified.

"Different stresses result in different modifications to p53" and induce different responses.

Upstream, p53 is affected by a wide variety of stresses. These stress signals are communicated by an extremely diverse set of protein modifications, including methylation, acetylation, ubiquitination, and sumoylation. "Different stresses result in different modifications to p53," Levine said, and the different modifications induce different responses. Capturing these modifications in high-throughput data is an important challenge, he commented.

Tracing this pathway upstream led researchers to the MDM2 gene, whose product regulates the degradation of p53. In turn, Mdm2 is sensitive to estrogen, which helps to explain the gender-sensitive aspects of certain cancers.

Levine described a few of the many features that researchers have identified in the p53 pathway, which can be used to assess and hopefully modify the progress of cancer. Nonetheless, "these pathways are imperfect" and context-dependent, he observed. "Signal-transduction pathways don't usually act the same in different tissues in the body. They don't act the same in different animals. They don't act the same in the presence of different modifiers."

What do biologists want?

In another talk, Riccardo Dalla-Favera, of Columbia University, endeavored to give "a biologist's perspective to what we expect from bioinformatics and reverse engineering." As background to his research, Dalla-Favera described the unique biology of B cells. At least ten phenotypically and clinically distinct lymphoma subtypes arise from B cells in different stages of their growth. Ordinarily, when naïve B cells from the bone marrow are exposed to antigen in the periphery, they undergo explosive proliferation. This proliferative growth occurs in a structure called the germinal center, and is the fastest growth known for any eukaryotic cell, with the cells doubling in number every six to eight hours.

During this growth, the B cell does something unique: it hypermutates the DNA associated with the variable region of antibodies. Among the many resulting cells with different mutated DNA, most are less sensitive to the antigen, but a few may have an even higher affinity and be selected to become memory cells or plasma cells.

Gene-expression studies of carefully isolated populations showed a major change as cells enter the germinal center, and again as they become memory cells. Interestingly, as they enter the mature stage, the cells return to an expression profile similar to that of the naïve cells.

"The dream is to have maps in which we can say which area of the network is changed."

Gene-expression profiling also distinguishes malignant cells from normal cells, and allows some subtyping of the cancer. Still, Dalla-Favera said, profiling did not get researchers very far, "because it did not tell us which part of the network was involved. The dream for a biologist is to have maps in which we can say which area of the networks is changed" in cancer, he said. "This could guide further research."

Dalla-Favera said that his team had been looking at two important genes: BCL-6, which is critical to germinal-center formation, and the c-myc proto-oncogene, which Bcl-6 suppresses. The complex formed by their interaction is a potential clinical target.

The power of reverse engineering

Recently, Dalla-Favera worked with other Columbia researchers to reverse engineer this system. They looked at over 10 naturally occurring variations encompassing the tumor types, at different stages, and also at various chemical interventions. All told, they had over 400 different transcriptional profiles.

"It's very important to look at cellular networks in a specific biological context."

The resulting network was scale-free over two orders of magnitude, and indicated that c-myc was a major hub, as were most of its nearest neighbors. "This is very important biologically and even clinically," Dalla-Favera noted. Moreover, the network identified several new targets for Bcl-6; subsequent chromatin immunoprecipitation studies showed that Bcl-6 indeed acted as a transcription factor for these genes. Rather than laboring to find a single target as in the past, Dalla-Favera said, using reverse engineering "we can get a number of targets and we can start studying the overall biological program of a given transcription factor."

In a subset of the tumors, however, in which Bcl-6 binds with c-myc, the networks showed only minimal similarity, overlapping by only 7%. Thus, Dalla-Favera cautioned, "The expression of a single gene dramatically influences the network surrounding another gene." For this reason, he said, "it's very important to look at cellular networks in a specific biological context, so that we create information that's biologically relevant, as opposed to information that averages many, many biological conditions." The full power of the bioinformatics methods, he suggested, is realized only in conjunction with biochemical methods that place the information in context.

Highlights

  • Reverse engineers would like to test their methods against a "gold standard" biological network whose detailed interactions are perfectly known, with high confidence about both linkages and their absence.
  • To serve as the most objective test, a gold standard would be perfectly unknown to those trying to reverse engineer it, although this "blinding" might deprive researchers of relevant biological context.
  • There are many well understood biological systems, including gap genes in Drosophila, genetic expression in E. coli, and well studied signaling pathways, but they are also widely publicized.
  • Protein–protein and protein–DNA interactions represent a clear physical interaction that could be objectively tested.
  • Sequestering a subset of data from large-scale screens could provide validated experimental data for assessing methods.

The DREAM

The DREAM program aspires to assess reverse-engineering methods by comparing how well they extract known information about networks from their behavior. A recurrent issue at the conference was the challenge of identifying experimental networks to serve as "gold standards," whose structures are accurately captured in models. A reverse-engineering method would fail to the extent that it disagrees with a gold-standard model in predicting which "nodes" (molecular species) within the network are connected by "edges" (interactions) and which are not. More quantitative methods would also need to capture the mathematical details of those interactions.

How best to provide a blinded assessment of reverse engineering methods?

Real biological systems are imperfectly understood. The pathways and networks that emerge from publication and replication in the literature represent the best current knowledge, which continually improves over time. Indeed, conference participants described many sources of extensively validated network information.

By and large, however, these validated sources are so well known that it is hard to prevent reverse engineers from exploiting the biological context in developing their model networks. The DREAM community is still struggling to balance the quality of experimental data with the hope of providing a blinded assessment.

A developmental network

Theodore Perkins of McGill University described the gap gene system of Drosophila melanogaster, an extraordinarily well understood developmental network. The gap genes are active in the earliest stages of Drosophila development, and control the establishment of anterior-posterior structure in response to gradients in maternal proteins. The developing spatial patterns of expression emerge as the gene products act as transcription factors for one another. (For more information on this topic see the report on John Reinitz's work in the eBriefing Hairballs and Other Sloppy Models: Analyzing Complex Phenotypes).

Researchers have modeled early development using systems of coupled differential equations, which capture two distinct phenomena. The first is the expression rate at each position, which is a complex but universal function of the local concentration of each protein. This aspect of the model is similar to the directed graphs of regulatory networks, together with mathematical rules describing each interaction's influence. The degradation rate of the proteins is assumed to be constant.

The second phenomenon that the models capture is the diffusion of the proteins along the anterior-posterior axis. This one-dimensional spatial aspect is a simple version of the full spatial modeling used in the virtual cell model described by Leslie Loew. At the same time, the spatial freedom adds new experimental insight. By developing methods to measure evolving expression profiles for each gene during development, experimentalists create a huge number of conditions (concentrations varying in both space and time) to constrain the network model and equations.
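A minimal sketch may help make this class of model concrete. The form below follows the widely used gene-circuit equations associated with Reinitz and colleagues, with a sigmoidal synthesis term, one-dimensional diffusion along the anterior-posterior axis, and constant degradation; the parameter values are illustrative placeholders, not fitted gap-gene values.

```python
import numpy as np

def gene_circuit_step(v, T, h, R, D, lam, dt):
    """One explicit Euler step of a 1D gene-circuit model.

    v   : (num_nuclei, num_genes) protein concentrations
    T   : (num_genes, num_genes) regulatory interaction matrix
    h   : (num_genes,) activation thresholds
    R   : (num_genes,) maximum synthesis rates
    D   : (num_genes,) diffusion coefficients along the A-P axis
    lam : (num_genes,) first-order degradation rates (assumed constant)
    """
    # Universal saturating response to the local regulatory input.
    u = v @ T.T + h
    synthesis = R * 0.5 * (u / np.sqrt(1.0 + u**2) + 1.0)

    # Discrete 1D diffusion with zero-flux boundaries.
    diffusion = np.zeros_like(v)
    diffusion[1:-1] = v[:-2] - 2 * v[1:-1] + v[2:]
    diffusion[0] = v[1] - v[0]
    diffusion[-1] = v[-2] - v[-1]

    return v + dt * (synthesis + D * diffusion - lam * v)

# Toy run: 4 genes over 30 nuclei with random, purely illustrative parameters.
rng = np.random.default_rng(0)
v = rng.random((30, 4))
T = rng.normal(scale=0.5, size=(4, 4))
v = gene_circuit_step(v, T, h=np.zeros(4), R=np.ones(4),
                      D=0.1 * np.ones(4), lam=0.2 * np.ones(4), dt=0.01)
```

Reverse engineering this system means recovering the interaction matrix T (and the other parameters) from observed space-time expression profiles.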

Perkins briefly described two attempts to reverse engineer this system of equations to establish the connections and the parameters. His own recent work largely agreed with previous results, but took only 36 hours of computer time, instead of the two years those previous results required. The results were also consistent with the literature. "Because the data are so well understood, any test that you'd like to do has already been done," he observed, so the literature provides strong validation.

The very complete understanding of the Drosophila gap-gene system, both topologically and numerically, makes it an excellent candidate for a "gold standard" developmental network, Perkins said. Of course, the results are widely known, and so could not be used as blinded tests of reverse-engineering methods.

Physical interactions of proteins

Joel Bader of Johns Hopkins University asserted that a gold standard needs to have a measurable physical reality. Influence networks such as Bayesian networks, he noted, are prone to uncertainties, depending on which nodes are included in the modeling. Even if you believe there's an interaction between two molecular species, you don't know what intermediates the interaction goes through. "What you really want is something that actually has a physical reality," he claimed.

"What you really want is something that actually has a physical reality."

Bader proposed assessing methods based on the "physical reality of biochemical interactions between proteins and proteins and [between] proteins and DNA." Protein–protein interaction data are available from high-throughput two-hybrid screens, while information about protein–DNA interactions comes from chromatin immunoprecipitation (ChIP).

One of the social challenges for blinded assessments is that experimentalists may not want to offer their data before publication. They will see little reward for providing data that is held in secret for an extended period while computational scientists make their predictions.

Bader offered a mechanism for avoiding this problem, at least for the protein–protein interactions: Researchers would select a small subset, perhaps 10–100, of their highest-confidence interactions. They would then erase all the edges and scramble the order of the proteins, and let computational biologists predict which pairs interact. Such data still suffer from one weakness, however: it is unclear whether the missing edges can be trusted. Because of this lack of reliable negatives, Bader calls this a "12-karat" gold standard, carrying only half of the desired information. A sketch of how such a sequestered test set might be assembled appears below.
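The sketch below implements the spirit of Bader's proposal; the interaction list, confidence scores, and protein names are hypothetical stand-ins for an experimentalist's unpublished screen.

```python
import random

def make_blinded_test(scored_interactions, n_pairs=50, seed=42):
    """Keep the highest-confidence pairs, erase the edges, and hand out
    only a scrambled list of the proteins involved.

    scored_interactions: list of (protein_a, protein_b, confidence)
    Returns (proteins_given_to_predictors, hidden_answer_key).
    """
    rng = random.Random(seed)
    top = sorted(scored_interactions, key=lambda x: -x[2])[:n_pairs]
    answer_key = {frozenset((a, b)) for a, b, _ in top}

    proteins = sorted({p for a, b, _ in top for p in (a, b)})
    rng.shuffle(proteins)  # scramble the order before release
    return proteins, answer_key

def score_predictions(predicted_pairs, answer_key):
    """Count recovered true positives only; as noted above, absent edges
    cannot be trusted as negatives, so this is a '12-karat' assessment."""
    return sum(frozenset(p) in answer_key for p in predicted_pairs)

# Hypothetical screen: (protein A, protein B, confidence score).
screen = [("YFG1", "YFG2", 0.99), ("YFG3", "YFG4", 0.95), ("YFG1", "YFG5", 0.40)]
challenge, key = make_blinded_test(screen, n_pairs=2)
print(challenge)                                   # scrambled protein list only
print(score_predictions([("YFG1", "YFG2")], key))  # 1 true positive recovered
```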

Human disease phenotypes

In the course of their exploration of the expression profiles of various disease states, Eric Schadt and his colleagues at Merck have developed what he described as a "massive" amount of data. For mice, for example, they have examined 15–20 populations of 500–1000 animals, sampling five tissues in each, genotyping thousands of markers. In humans the genetics are necessarily more complicated, but Schadt said, "We've done about five or six cohorts now, we're in the process of doing ten, we're doing liver, three different parts of brain, adipose, muscle, blood, and really trying to build up a definitive [data set] .... The networks definitely converge," he said. "You can come up with experimental gold standards from these huge data sets that you can show just through cross-validation to be highly predictive."

Arnold Levine described the signal-transduction pathway associated with the p53 cancer suppressor gene. The interactions have been painstakingly validated, and researchers have clarified the detailed biological mechanisms along much of the pathway. A similarly detailed pathway was summarized by Riccardo Dalla-Favera. These well established and robust pathways bring both the confidence and the interest of a larger biological community.

Data, data everywhere ...

There appears to be no shortage of well validated biological network and pathway data. In another example, Boris Hayete of Boston University used the RegulonDB network of genetic expression in E. coli for comparing different reverse-engineering algorithms. This database has modest coverage, however, so finding an edge that it doesn't contain is not necessarily a false positive.

The best algorithms will probably use various types of information simultaneously.

The issue of blinding the data remains contentious, since experimentalists have little reward for withholding their hard-won results for the benefit of reverse engineers. Withholding a fraction of high-throughput data is more promising. Some researchers, however, questioned whether removing data from their biological context even makes sense.

One challenge is that no single type of biological network can represent all others. Genetic expression networks, signaling networks, and metabolic networks all differ strongly, and a reverse engineering technique that does well on one might fail on others. If researchers are to compare methods, they will need to deal with different types of networks and different types of input data. Ultimately, the best algorithms will probably use various types of information simultaneously.

Highlights

  • Unless a set of data comes from a specific biological context, it may distort or disguise the real behavior.
  • Using single-cell data avoids the incorrect deductions made by averaging over cells.
  • Small-molecule perturbations of a genetic expression network can help clarify the direction of causality for known statistical associations.
  • By tracking changes to DNA, researchers can discriminate between changes in expression that are causal and those that occur in reaction to disease.
  • Sometimes data are not adequate to let researchers discriminate between models.
  • The "no free lunch" theorem stipulates that large amounts of accurate data are needed to specify complex network models.

Data requirements

A major stimulus for the reverse-engineering community has been the availability of high-throughput data, for gene expression as well as other types of data. In many cases, however, the simple quantity of data may be less important than the specific methods and conditions used to gather it.

Several speakers, including Riccardo Dalla-Favera, emphasized the need to make measurements in specific biological contexts. Addressing these needs may require new types of data collection, but could enable researchers to extract more robust networks with less data. Since data will always be limited, however, it is important to quantify how much is needed to specify the topology and parameters of a network model precisely.

Looking at single cells

Dana Pe'er, who recently moved to Columbia University from Harvard Medical School, has successfully used Bayesian network algorithms to deduce network structure from microarray expression data. She has also demonstrated that treating networks as assemblages of modules gives a statistical boost to data that might otherwise be inadequate for deducing network structure. [See Metrics.] Even with data from hundreds of microarrays, however, statistical power remains a fundamental limitation. "We don't have enough statistical power to reconstruct the pathway including hundreds and thousands of genes from 300 samples," she said. "It's just a statistical impossibility."

It's a statistical impossibility to reconstruct a complex pathway from 300 samples.

In the specific context of signaling, Pe'er said, "when you look at an average you miss the real signal: you have to look at each cell specifically. [The cells are] doing different things." No fancy algorithm will solve this problem, she said. "The real answer to solve this problem is not more sophisticated computational cartwheels, but better data."

To this end, Pe'er analyzed data on T-cell signaling, generated with collaborators including Karen Sachs of MIT and Garry Nolan of Stanford. The researchers used blood taken from human subjects, whose T cells were fixed and stained with fluorescent antibodies that bind to specific phosphorylated forms of proteins. They then used flow cytometry to measure the levels of eleven different phosphoproteins in individual cells, easily generating thousands of data points. "For each single cell you have a profile of eleven quantities representing the activity states of different phosphoproteins under different conditions," Pe'er said. "This is very unique, and very different from the microarray data."

The researchers also used a host of small-molecule interventions that target specific parts of the network. In this way, they were able to distinguish correlation from causation, since knocking down a specific molecular species should affect all other molecules it influences, but not molecules it is influenced by.
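A toy simulation illustrates why interventions reveal direction where correlation alone cannot. The two-species linear system, effect sizes, and noise levels below are invented for illustration; they are not taken from the T-cell data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000  # number of single cells

def simulate(inhibit=None):
    """X activates Y (X -> Y). An 'inhibitor' clamps one species to a
    low level, mimicking a small-molecule intervention."""
    x = rng.normal(1.0, 0.3, n)
    if inhibit == "X":
        x = np.full(n, 0.1)
    y = 0.8 * x + rng.normal(0.0, 0.2, n)
    if inhibit == "Y":
        y = np.full(n, 0.1)
    return x, y

x0, y0 = simulate()              # observation: X and Y correlate
x1, y1 = simulate(inhibit="X")   # knocking down X shifts Y ...
x2, y2 = simulate(inhibit="Y")   # ... but knocking down Y leaves X alone

print(np.corrcoef(x0, y0)[0, 1])  # strong correlation, direction unknown
print(y0.mean(), y1.mean())       # Y drops when X is inhibited
print(x0.mean(), x2.mean())       # X is unchanged when Y is inhibited
```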

Using the huge number of measurements, and the intrinsic variability of individual cells and the responses to the small-molecule interventions, the team deduced a network using Bayesian analysis. Among the unexpected results was an influence of the extracellular signal-regulated kinases 1 and 2 (Erk1/2) on Akt. Subsequent siRNA knockdown experiments validated this influence. "Even in this well studied pathway, we found something new," Pe'er commented. This validation is important, she said, because even "some of the best researchers have fooled themselves with statistics. Sometimes you have to put your money where your mouth is and check something."

The power of the approach relies on three features: First, the small-molecule perturbations are central to establishing directionality. Second, the amount of data is important; when the researchers used only a subset of the data, the inferred network was much less accurate. "The fact that we got 6000 points gave us a lot of power that we miss once we have only 400 points, which is what most people with microarrays deal with." Finally, focusing on single cells dramatically improves the quality of the reconstructed network. When they simulated a Western blot by averaging the observations on different cells, Pe'er said, it gave "much worse reconstruction."

Tissues from real disease states

Eric Schadt and his colleagues at Merck focus on finding the best genes to target in order to treat disease. The molecular underpinnings of complex diseases like atherosclerosis and obesity, he said, are not likely to reveal themselves in single-cell, in vitro studies. "The disease is taking place in a very complex system, with very complex signaling going on between the tissues that give rise to the properties that cause disease." For this reason, he and his coworkers have been characterizing molecular expression in tissue samples taken from living animals and humans. The networks they inferred from this data, he said, show "dramatic differences that you're never going to see in a single cell."

In the process, the researchers have uncovered genes that strongly contribute, for example, to obesity. Previous genetic screens have been rather unsuccessful, he said, because they did not adequately account for the context within which genes act. "Geneticists have tried to ignore all of biology and then wondered why they didn't find the genes for disease," he observed.

"Geneticists have tried to ignore all of biology and then wondered why they didn't find the genes for disease."

In contrast, for Schadt, genetic-sequence data provides an important window into biological networks. In fact, the central dogma in biology allows the researchers to untangle cause and effect in the correlations of expression captured in microarrays. Because genetic variation influences, but is not influenced by, variations in expression, this data provides the same sort of directional information as Pe'er's small-molecule interventions. Schadt stresses that this data arises in a natural setting of disease, however: "The advantage is that this variation is naturally occurring in a population where diseases manifest themselves. It's not artificially perturbed through some stimulation," like knock-downs or transgenic experiments. "It's not clear how relevant those things will be to actual diseases."

The researchers used the DNA data to identify loci in the genome where DNA variation correlates with disease-related expression. Then they identified regions of the DNA where variations correlated with phenotypes related to disease. Using this data, they were able to identify a specific gene that predicted almost all of the expression changes. In other words, they were able to use the central dogma to distinguish between the "causal" expression changes and those that were merely reacting to the disease state.
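A schematic version of this reasoning, under an assumed linear model (the genotypes, effect sizes, and noise levels are invented): if expression mediates the genotype's effect, conditioning on expression should remove the genotype-phenotype association, while in a reactive model it does not.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
g = rng.integers(0, 3, n).astype(float)  # genotype at a locus (0/1/2 alleles)

# Causal model: genotype -> expression -> phenotype
e_causal = 0.7 * g + rng.normal(0, 0.5, n)
p_causal = 0.9 * e_causal + rng.normal(0, 0.5, n)

# Reactive model: genotype -> phenotype -> expression
p_react = 0.7 * g + rng.normal(0, 0.5, n)
e_react = 0.9 * p_react + rng.normal(0, 0.5, n)

def partial_corr(a, b, c):
    """Correlation of a and b after regressing out c from both."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

# Genotype-phenotype correlation given expression vanishes only in the
# causal model, so the direction of the arrow can be inferred.
print(partial_corr(g, p_causal, e_causal))  # ~0: expression mediates
print(partial_corr(g, p_react, e_react))    # clearly nonzero
```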

Based on these deductions from the disease state, the researchers validated the predicted interactions in the labs. Interestingly, the DNA changes that predict disease almost all involve non-protein-coding regions. Schadt cautioned that care must be taken not to perturb the system too much, as can happen in some knockout experiments. "When you look at the knockouts, you really hit the system. It's not necessarily a validation for how this reconstructs in a natural setting," he said. "You have to use things that are closer to the natural state."

How much data is enough?

Even the best data may not differentiate between different network models. Thomas Maiwald of the University of Freiburg approached this question pragmatically. He built a software framework to test numerically how accumulating experiments discriminate between proposed models. He found that some kinds of data are never adequate to let researchers distinguish between some models. The tool should help researchers to design experiments that most efficiently test their models, and to recognize when more data is not the answer.
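A tiny example of the underlying issue, using two first-order kinetic models as stand-ins for competing network hypotheses (the rate constants are invented, and this is not Maiwald's actual framework): both models share the same steady state, so steady-state measurements alone can never distinguish them, whereas early time points can.

```python
import numpy as np

# Two production/decay models with identical steady state k1/k2 = 2.0
# but different kinetics.
models = {"slow": (1.0, 0.5), "fast": (4.0, 2.0)}

def time_course(k1, k2, t):
    """X(t) for dX/dt = k1 - k2*X with X(0) = 0."""
    return (k1 / k2) * (1.0 - np.exp(-k2 * t))

t = np.linspace(0, 5, 6)
for name, (k1, k2) in models.items():
    print(name, time_course(k1, k2, t).round(2))

# At large t both curves sit at 2.0: steady-state data cannot discriminate,
# no matter how much of it accumulates; the early time points do.
```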

"Model selection is, at least implicitly, based on prior assumptions about what is biologically plausible."

Winfried Just of Ohio University took a mathematical approach to finding how much data is needed. Specifically, he developed theorems for a system in which concentrations assume discrete values and evolve in discrete time steps, but he said that many of his conclusions should apply to other types of dynamical systems as well. Given a set of observations of the concentrations, the task is to find a rule, relating the values to those at the preceding time step, that reproduces the observations.

The problem of generalizing to a rule from a finite number of observations is well known. For this type of problem, the "no free lunch" theorem states that a huge number of rules will match a data set equally well, unless there are prior constraints on the structure of the rule or the nature of the data. At first blush, this theorem appears to doom the entire reverse-engineering endeavor to failure. In practice, however, biologically reasonable constraints and experimental protocols provide important "priors" that may limit the number of choices. "Possible models that agree [with observations] are far too numerous to be constrained by realistic data," Just claimed. "Model selection is, at least implicitly, based on prior distributions of biologically plausible functions."
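A deliberately tiny Boolean example, not Just's actual framework, shows the flavor of the counting problem:

```python
from itertools import product

# A node with k = 3 Boolean inputs has 2**(2**3) = 256 possible update rules.
k = 3
inputs = list(product([0, 1], repeat=k))

# Suppose the node's next state has been observed for only 3 input vectors.
observations = {(0, 0, 0): 0, (1, 0, 1): 1, (1, 1, 1): 1}

consistent = 0
for truth_table in product([0, 1], repeat=2**k):
    rule = dict(zip(inputs, truth_table))
    if all(rule[x] == y for x, y in observations.items()):
        consistent += 1

# 2**(2**k - len(observations)) = 32 rules fit the data equally well;
# only priors on biologically plausible functions can narrow the choice.
print(consistent)  # 32
```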

Just's conclusions echo the sentiments of Pe'er and Schadt, who each cautioned that the networks they produced could not be taken completely seriously. The question boils down to asking which parts of the models can be trusted, or at least used to suggest useful interventions, and what type and amount of data will speed this process.

Highlights

  • Ideally, researchers should compare reverse-engineering methods using the same data set for training and a different set for testing.
  • The ROC (receiver operating characteristic) curve quantifies the tradeoffs in classifying edges as present (positive) or absent (negative) in a network.
  • The area under the ROC curve is equal to the average of the probabilities of getting the right answer for a known positive and a known negative.
  • In biological networks, only a small fraction of the possible pairs of entities is connected, so negatives are much more prevalent than positives.
  • Researchers disagree on how to measure success in extracting such sparse networks.

Comparing methods

To fulfill its goal of comparing reverse-engineering methods, the DREAM project needs a clear way to measure their effectiveness. Ideally, different methods could be trained using the same data, and this metric would be used to quantify their subsequent ability to extract a network from a different set of test data.

The precise form of the metric depends on the type of network model the methods generate. In a few cases, such as the gap-gene system described by Theodore Perkins, this model includes differential equations characterizing the interactions. Many of the reverse-engineering methods, however, only predict a graph that summarizes which nodes are connected to which other nodes by an edge. For these models, the metric quantifies how well the method classifies edges as present or absent.

Quantifying the tradeoffs

Chris Wiggins of Columbia University discussed the reliability of dynamic Bayesian networks, which analyze discrete time-series data. Strictly speaking, traditional Bayesian analysis is restricted to networks without cycles, such as feedback loops. Dynamic Bayesian networks avoid this constraint by "unrolling" the network in time. Since the state of a node at one time can only affect other nodes at a later time, no node can affect its own inputs, so loops are automatically impossible.
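A minimal sketch of the unrolling, with a made-up two-gene feedback loop:

```python
def unroll(edges, n_steps):
    """Unroll a (possibly cyclic) influence graph into a DAG over time
    slices: an edge u -> v becomes (u, t) -> (v, t + 1) for every t."""
    return [((u, t), (v, t + 1))
            for t in range(n_steps)
            for u, v in edges]

# A feedback loop A -> B -> A is forbidden in a static Bayesian network...
cyclic = [("A", "B"), ("B", "A")]

# ...but its unrolled version is acyclic: no edge points to an earlier slice.
for src, dst in unroll(cyclic, n_steps=2):
    print(src, "->", dst)
# ('A', 0) -> ('B', 1), ('B', 0) -> ('A', 1),
# ('A', 1) -> ('B', 2), ('B', 1) -> ('A', 2)
```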

To evaluate this method, Wiggins generated a large number of simple in silico networks. If only a few networks are used for testing, he said, it's "extremely tempting" to choose the data set or tune the parameters of the algorithm to make the method look good.

The area under the ROC curve "has meaning as a probability" of classifying positive and negative edges correctly.

The traditional ROC curve shows numerically how a classifier becomes less selective (lower specificity), reporting more false edges, as it is made progressively more sensitive so that it misses fewer true edges (higher recall). Neither the sensitivity nor the selectivity alone provides useful information. An under-appreciated fact, Wiggins said, is that "the area under the curve actually has meaning as a probability." When applied to a gold-standard network, this area is rigorously equal to the average of the probabilities of correctly classifying an edge that is truly present and another that is truly absent; equivalently, it is the probability that a randomly chosen true edge outscores a randomly chosen absent one. Wiggins advocates using this metric routinely, even when the distribution of positives and negatives is highly asymmetric.
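This probabilistic reading of the area is easy to check numerically; the edge scores below are synthetic draws, not output from any real inference method.

```python
import numpy as np

rng = np.random.default_rng(3)
pos = rng.normal(1.0, 1.0, 500)    # scores a method assigns to true edges
neg = rng.normal(0.0, 1.0, 2000)   # scores assigned to absent edges

# Probability that a random true edge outscores a random absent edge.
pairwise = (pos[:, None] > neg[None, :]).mean()

# Area under the ROC curve by direct trapezoidal integration.
scores = np.concatenate([pos, neg])
labels = np.concatenate([np.ones(pos.size), np.zeros(neg.size)])
order = np.argsort(-scores)
tpr = np.concatenate([[0.0], np.cumsum(labels[order]) / pos.size])
fpr = np.concatenate([[0.0], np.cumsum(1 - labels[order]) / neg.size])
area = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)

print(pairwise, area)  # the two numbers agree (~0.76 here),
                       # despite the 1:4 class imbalance
```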

Wiggins acknowledged that his talk was "biology free," but concluded that dynamic Bayesian networks perform rather poorly for time-series data that resemble expression data. He described some local subnetwork topologies that were particularly "shy," in that they did not easily reveal their structure.

Allister Bernard of Duke University also used dynamic Bayesian networks. He applied these techniques both to the electrophysiological and expression network underlying behavior in songbirds, and to joint data on expression and protein–DNA interaction in yeast. Combining multiple data types was clearly more effective than relying on one alone, he said, as was having appropriate prior information.

Nonetheless, Bernard said, comparing the performance of different algorithms is difficult, and one important ingredient is common data sets: "Standard training and test sets are needed for adequate comparison of algorithms," he said.

The significance of negatives

Mark Gerstein of Yale University advocates using physically meaningful molecular interactions to test the effectiveness of reverse engineering. In evaluating potential metrics, he said, it is significant that most pairs of proteins, for example, will not interact. The huge number of negatives "changes the way people validate things. It changes the way people think about things," Gerstein observed. He illustrated the difference by noting that terrorists are a tiny fraction of airline passengers, so a terrorist-detection scheme that flagged no one at all would be correct almost all of the time and might therefore seem reasonably effective. Even if the area under the ROC curve correctly averages the chances of correctly classifying a terrorist and a non-terrorist, the consequences of an incorrect classification differ drastically.

"Experimental validation usually has an anecdotal quality to it."

For biological networks, Gerstein said, "if we want to assess our predictions in an appropriate way, we're really going to have to assess many, many more negatives, relative to positives." In contrast, he said, researchers usually pick a few predicted positives to validate, and ignore many, many other possibilities. "Experimental validation usually has an anecdotal or selective quality to it, as opposed to a comprehensive quality."

In his talk, Pedro Mendes also explored possible metrics, including the area under the ROC curve. In addition, he proposed the confusion matrix as a candidate metric, as described in a poster at the conference. Like Gerstein, Mendes noted the challenge posed by sparse networks. "If you predict nothing is connected," he joked, "that's a very good prediction."
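The sparsity problem behind Mendes's joke is easy to make concrete; the node and edge counts below are invented for illustration.

```python
# A hypothetical sparse network: 1000 genes, 2000 true edges.
nodes = 1000
possible = nodes * (nodes - 1) // 2    # 499,500 candidate undirected edges
true_edges = 2000

# The "nothing is connected" predictor gets every negative right ...
tn = possible - true_edges
accuracy = tn / possible               # ~0.996
recall = 0 / true_edges                # ... and recovers no true edge at all
print(f"accuracy = {accuracy:.3f}, recall = {recall:.1f}")

# High accuracy, zero recall: with so many negatives, a confusion matrix
# (or precision-recall analysis) is far more informative than accuracy.
```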

Highlights

  • Artificially designed synthetic networks, embedded into real biological systems, play an intermediate role between the control promised by in silico systems and the complexities of real biology.
  • Researchers integrated a five-gene regulatory system into yeast chromosomal DNA.
  • The artificial network responds to the presence of galactose, and also to the insertion of plasmids that the researchers created to perturb each gene individually.
  • Other researchers are invited to explore this synthetic strain in their laboratories.
  • Genetic modification lets researchers modulate expression in living organisms, and learn about their regulatory networks.

The best of both worlds?

"If you build a network yourself, you know exactly what you need to find out."

In their search for validated networks to use for assessing their methods, reverse engineers are faced with a dilemma: Networks created on a computer are completely specified, but may not be relevant to real systems that biologists care about. On the other hand, the networks underlying real biological systems are never completely known, so it is difficult to know for sure which of several reverse-engineering methods is better capturing reality.

Diego di Bernardo and his colleagues from the Telethon Institute of Genetics and Medicine in Naples are exploring a third alternative that combines some advantages of both types of network. They have integrated a synthetic regulatory network into the genome of the yeast Saccharomyces cerevisiae. This network is small, comprising only five added genes, but it provides a uniquely well understood in vivo network for assessing reverse-engineering techniques, di Bernardo said. "The idea was that if you build a network yourself, you know exactly what you need to find out." In addition, although the network serves no biological function, di Bernardo believes that it will be of more interest to biologists than purely computational networks.

Inserting extra DNA

The researchers selected five interacting genes: swi5, ash1, cbf1, gal4, and gal80. Di Bernardo said that these genes are drawn from different cellular pathways, but they are not redundant and are not essential to the function of the cell. To create the network, the team spliced these genes downstream from known promoters. For example, the swi5 gene was rendered sensitive to the Gal4 protein by coupling it with the promoter that ordinarily controls the gal10 gene. Because some of the added genes interact with existing genes, di Bernardo said, the network really comprises seven genes.

Rather than relying on external plasmids, the researchers introduced these promoter/gene combinations directly into the chromosomal DNA. The high efficiency of homologous recombination in yeast let them swap the synthesized DNA at a known location. In addition to inserting the synthetic DNA, this process simultaneously knocks out the target gene. The team used this to advantage by knocking out genes that normally connect the inserted genes to other parts of the cellular network.

Researchers have inserted a five-gene regulatory system into the yeast chromosome.

To check that the network was working, the researchers included a master switch that is sensitive to the presence of D-galactose. Expression profiling showed that this sugar activated the added genes, as predicted, and triggered fluorescence monitors that the team had incorporated along with the desired genes.

In addition, because swi5 expression is modulated by the cell cycle, the entire network should oscillate in synchrony with it. The researchers verified this behavior using α-factor arrest and fluorescence-activated cell sorting. They also developed a strain with constitutively activated Swi5, which therefore did not show this synchrony with the cell cycle.

The researchers also developed a tool for locally perturbing the network: plasmids containing each of the five genes under tetracycline-responsive promoters. By introducing these plasmids and exposing cells to tetracycline, the response to each gene product can be tested individually. Di Bernardo said his team will share their yeast strain and the plasmids with other researchers to help them explore reverse-engineering methods.

Forward engineering

On a more limited scale, Jim Collins described a different kind of synthetic biology. Rather than building an entire new network, he and his coworkers created a new way to perturb or augment the existing network. They designed a simple genetic toggle switch, using mutual inhibition between a pair of genes. They used a plasmid to carry this switch into E. coli, and observed the switchable output.
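
A toy version of such a switch (the generic mutual-inhibition form with illustrative parameters, not Collins's published construct) shows the bistability that makes it switchable:

```python
def toggle_step(u, v, dt=0.01, a1=3.0, a2=3.0, beta=2.0, gamma=2.0):
    """One Euler step of a minimal mutual-inhibition toggle: each repressor's
    synthesis rate is a decreasing Hill function of the other's level."""
    du = a1 / (1.0 + v**beta) - u
    dv = a2 / (1.0 + u**gamma) - v
    return u + dt * du, v + dt * dv

# From an asymmetric start, the system latches into one of two stable states.
u, v = 1.5, 0.5
for _ in range(5000):
    u, v = toggle_step(u, v)
print(round(u, 2), round(v, 2))  # one gene settles high, the other repressed
```

A transient stimulus strong enough to drive the low gene past the high one flips the latch to the other stable state, which is what makes the circuit a controllable input to the surrounding network.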

The team also used interventions to perturb the bacterium's SOS response selectively by modulating the expression of several different genes. They then compared the response to these perturbations and other stimuli to reverse engineer the existing network.

More recently, Collins and his coworkers have developed a mammalian version of the switch, which uses RNA interference to knock down expression of a chosen gene, in conjunction with modulation of the activity of the lactose repressor protein LacI by isopropyl-β-D-thiogalactopyranoside (IPTG). In mouse cells, it was possible to use the switch to repress the expression of a target gene by more than 99%. The team is also exploring other RNA-based techniques to perturb and modify the regulatory networks of complex organisms.

The tools of synthetic biology allow researchers great flexibility in modifying regulatory networks in living organisms. These techniques bridge the gap between artificial networks and those hidden deep within living systems.

The endless complexities of biological systems are orchestrated by intricate networks comprising thousands of interacting molecular species, including DNA, RNA, proteins, and smaller molecules. The goal of systems biology is to map these networks in ways that provide both fundamental understanding and new possibilities for therapy. However, although modern tools can provide rich data sets by simultaneously monitoring thousands of different types of molecules, discerning the nature of the underlying network from these observations—reverse engineering—remains a daunting challenge.

On September 7–8, 2006, more than a hundred researchers gathered to discuss the reverse engineering of biological networks at the beautiful Wave Hill, overlooking the Hudson River in the Bronx, New York. The Dialogue on Reverse Engineering Assessment and Methods, or DREAM, was organized by Gustavo Stolovitzky of IBM, Andrea Califano of Columbia University, and Jim Collins of Boston University. They are supported by an organizing committee comprising 23 researchers. Many of these researchers had already gathered in March 2006 at the New York Academy of Sciences where they discussed the challenges facing this assessment process. A summary of that meeting, including slides and audio of the speakers' presentations, is available as a part of the eBriefing The DREAM Project: Assessing the Accuracy of Reverse Engineering Methods. A volume of the Annals of the New York Academy of Sciences based on the first open meeting is also forthcoming.

How can systems biologists assess how well they are describing networks of interacting molecules?

The fundamental question for DREAM is simple: How can researchers assess how well they are describing the networks of interacting molecules that underlie biological systems? The answer is not so simple. Researchers have used a variety of algorithms to deduce the structure of very different biological and artificial networks, and evaluated their success using various metrics. What is still needed, and what DREAM aims to achieve, is a fair comparison of the strengths and weaknesses of the methods and a clear sense of the reliability of the network models they produce.

The attendees at the DREAM conference included computer scientists, physicists, mathematicians, and biologists. Over the course of two days, they often emphasized very different aspects of the reverse-engineering challenge. However, some highlights emerged:

  • The networks that result from reverse engineering are not "real"; they are only approximations.
  • It is critical to explore biological networks within a specific biological context. Diseases, for example, develop as part of a complex signaling within a tissue. Indeed, the apparent network topology can be quite different in different settings.
  • Many biological networks are quite well understood, and can serve as a platform or gold standard for the assessment of reconstruction algorithms. Researchers cannot easily be made blind to other information about these networks, however.
  • Artificial networks, which are easily generated and blinded, will continue to be important for understanding the strengths and weaknesses of algorithms.
  • Biological networks are sparse, so a correct prediction that a link is absent is less significant than a correct prediction that one is present.
  • Although many questions remain, it is time to initiate a formal assessment process, in which researchers pit their methods against common challenges to see how they fare.

The dialogue that is DREAM is only beginning, and many of the questions have hardly been formulated, much less answered. This eBriefing summarizes what conference speakers expressed as the main issues facing this project, as well as some recent research that suggests the methods and lines of inquiry that will bring DREAM to fruition. After describing the biological context that gives these networks their importance, the eBriefing explains the need for experimental gold standards and how these standards might be identified, and explores what types of data and metrics are needed to test these standards. It then describes the types of network data that are being used for reverse engineering, such as in silico networks and synthetic networks. Other sections describe details of several reverse-engineering algorithms, and some of the types of missing data that are not yet considered in reverse engineering. The eBriefing concludes with highlights from the open-ended discussion that closed the conference and pointed toward the DREAM project's future.

Highlights

  • High-throughput data collection is blind to important aspects of biology, including cellular geometry and compartmentalization as well as large protein complexes.
  • The detailed spatial distribution of biological macromolecules, as well as their segregation into separate cellular compartments, dramatically affects their interactions.
  • The nuclear pore complex, which transports labeled macromolecules in and out of the nucleus, is a precise arrangement of dozens of proteins.
  • Quickly freezing disrupted cells and analyzing which proteins survive as bound pairs lets researchers build a rough structural picture of the nuclear pore complex.
  • Currently available high-throughput data does not adequately capture important aspects of biological networks, such as post-translational modifications and RNA interference.

Looking under the lamppost

Much of the promise—and the challenge—of reverse engineering biological networks arises from technological advances that have dramatically increased the volume of data about molecular activity in a group of cells. Like drinking water from a fire hydrant, these techniques satisfy researchers' thirst for information, but the flood of data can be overwhelming. In addition, network aficionados need to remember that biological systems are more than just networks. The organizers of the DREAM conference invited some speakers to remind the participants of the larger biological picture.

The role of geometry

Leslie Loew of the University of Connecticut coordinates the Virtual Cell project. This software framework aims to simulate realistic spatial features of biology along with the reaction kinetics. (For more on the Virtual Cell project, see the eBriefing By the Numbers: Quantitative Modeling of Biological Systems.) "The resulting in silico models are probably going to be more realistic than many of you might be comfortable with," Loew challenged his audience.

In silico models are probably "more realistic than you might be comfortable with."

In one example, Loew described the growth of actin filaments, which are central both to the cellular skeleton and to various transport processes. Although there are lots of data characterizing the growth and degradation of these filaments, Loew said, the mechanisms include much more than mass-action kinetics typical of bulk chemistry. "There's a lot of physics, so if you want to build a realistic model you have to include that," he warned. For example, the model incorporates literature descriptions of elongation, capping, and aging of filaments, as well as recycling of actin monomers back to the free barbed ends. In addition, the joining of separate polymer segments depends on the unique "reptation" process by which polymers diffuse.

Loew has also described the long-term depression of nerve signals in Purkinje cells, which occurs when two types of fiber fire simultaneously. An accurate model of this process requires specification of the geometry of the dendritic spines, which markedly affect the diffusion of some of the important compounds in the process. Without the geometrical information, Loew said, the observed concentrations of these compounds would be very hard to understand.

In spite of this complexity, Loew suggested, "We need to start with full spatial models to understand the behavior, but then we could extract from them the ingredients that are necessary for modeling using ordinary differential equations .... The goal should be to reverse engineer the system in such a way that it reproduces all the gory details of cell biology."

Dissecting the nuclear pore complex

To render complex biological networks more comprehensible, researchers sometimes refer to "modules," or "machines," composed of multiple molecular species that perform closely linked functions. A keynote talk by Michael Rout of the Rockefeller University was a potent reminder that, for some important biological functions, the machine metaphor is much more literal. Proteins that form complexes such as the ribosome do not simply influence each other's reactions, but are arranged in precise spatial relationships.

Rout has spent the last eight years dissecting the components of the nuclear pore complex, which transports macromolecules between the nucleus and the cytoplasm. At least thirty different proteins contribute to the complete, eight-fold symmetrical structure. Macromolecules only transit the pore if they are bound to specialized transport proteins.

A critical technique for understanding how the proteins fit together involves breaking cells apart and rapidly freezing them. The speed of the freezing is critical, since the associated proteins dissociate or degrade quickly. "The moment you break the cells open," Rout said, you take the complexes away from the context that constantly replenishes them. After that, he said, "it's a matter of time. Things decay."

"The moment you break the cells open, things begin to decay."

By speeding up their quick-freeze methods, the team identified interacting proteins as they existed in the cell, in a way that other tests of protein–protein interactions might well miss. From these data, the researchers developed an extensive table of the proteins that were found together. They then used a complicated process to surmise the full structure of the complex, which they continue to refine.

Their structural analysis suggests a mechanism for transport through the pore, Rout said. Specifically, both the nuclear and the cytoplasmic space near the complex are dominated by the elongated protein filaments attached to the complex. The high entropy of these long chains tends to drive away any foreign molecules in the vicinity. A molecule that binds to the filaments, however, is quickly attracted to the pore, and is likely to emerge from the other side.

The folding structure of the proteins that make up the complex is largely consistent with clathrins, proteins that are better known for their role in endocytosis. Based on this observation, Rout suggested that the pore complex was drafted from the existing cellular toolbox when evolution first created the nucleus.

Other types of data

The importance of cellular geometry and partitioning and the structure of protein complexes serve as a reminder of the limited view of biology that high-throughput data provide. In addition, even for traditional network information, some important data are hard to get. The post-translational modifications of proteins that are vital to signaling networks are often revealed only through painstaking analysis, for example, although Dana Pe'er's talk revealed some of the promise of extending these tools. In addition, RNA interference, alternative splicing, and related phenomena are increasingly recognized as central to genetic regulation. As understanding of these phenomena matures, these new types of information, and others, must be continually merged with existing data and incorporated into reverse-engineering methods and network models.

Highlights

  • Many participants were interested in a formal assessment process, but the details of how to do this remain unclear.
  • A blinded competition would be most objective, but one based on artificial data could become an abstract academic exercise with no biological relevance.
  • DREAM could provide a repository for two types of data: model predictions for validation by experimenters, and experimental data for evaluation by modelers. Establishing and enforcing standards of statistical rigor could improve the quality of papers.

Wrapping it up

At the end of the two-day DREAM conference, attendees participated in a wide-ranging discussion of the future of the project. There was widespread enthusiasm for continued meetings, but some differences of opinion about how to proceed.

Many attendees felt that DREAM could provide a clearinghouse for ongoing reverse-engineering activities. Bahrad Sokhansanj of Drexel University, for example, suggested maintaining a list of predictions made by various modeling papers, which could be a resource for experimentalists. There could also be a complementary list of biological networks under active investigation. Gustavo Stolovitzky commented that a formal database was probably impractical and perhaps premature, but that DREAM could serve as a repository for biological network data.

The idea of creating a one-year period for evaluating predictions resonated with many participants. For example, Andrea Califano suggested that computational biologists could deposit existing experimental data with DREAM, predicting targets, regulators, and so forth. After a year, he said, the predictions could be compared with the latest results. "Based on the pace at which the community is moving right now, there's likely to be a substantial number of targets that have been validated."

How to assess?

Many participants were enthusiastic about a formal competition, or at least a self-assessment. It seems clear that it is too early for this to be DREAM's exclusive focus, but that more rigorous evaluation would be valuable.

Mark Gerstein asserted that the time has come to formally assess methods. "Even though it's not perfect, I think that doing that would dramatically raise the profile of practically all the people in this room, and give added credibility to this whole field," he said. "If we just spend years talking about different ideas" without actually assessing things, he added, "people will laugh at us."

Diego di Bernardo agreed that "a bit of competition" could help the field by providing concrete goals. For example, he said, it could help to "concentrate the effort as the meeting deadline comes," and also to "increase the productivity of graduate students."

Pedro Mendes recalled that for protein structure, the CASP process described at the March 2006 DREAM organizational meeting had shown that some of the most highly regarded methods were much worse than some more obscure ones.

Ilya Nemenman, however, worried that the formal structure of a competition, especially if the context is blinded, could become an empty exercise. "Data is not just numbers. You're not using all the information you have. You should be able to use that information that is already available." He added, "we're not just developing algorithms for the sake of algorithms," but to enhance biological understanding.

Christina Leslie agreed that although such competitions are common in the machine-learning community, the fact that "you don't use any domain knowledge" encourages superficial methods. She added, "I'd hate to kind of veer our efforts toward sort of superficial tasks that do not really deal with biology."

Andrea Califano stressed that the reverse-engineering community must also convince biologists of the value and validity of their efforts, which may require a more formal validation of the process. Still, there are many questions about how best to structure that assessment.

Raising the bar

Dana Pe'er was skeptical that the field was ready for formal assessment. "My personal belief is in statistics, cross-validation, and doing that right," she commented. She expressed the hope that DREAM could define rigorous statistical standards for network validation, and that papers in high-profile journals would be required to meet them.

Andrea Califano warned that neither such statistical rigor nor the reiteration of accepted facts would ensure that papers are published. The papers accepted in these journals, he said, are the ones that make a novel prediction and then validate it in the lab. Still, he joked, "Many of us are looking forward to the point that experimentalists would be asked to do computational validation before their papers are published." That day is clearly far in the future, but the efforts of the DREAM community to rigorously assess their tools may bring it closer.

How can biological data be blinded without missing essential biological information?

How can the results of a reverse engineering exercise be validated in an unbiased way?

What sort of uniformity in data format does assessment require?

Can complex biological networks ever be completely determined?

How do the assessments of reverse engineering methods contribute to biological or clinical advances?

How can we better design experiments to facilitate reverse engineering?

How can the reverse-engineering community better engage biologists?

When and how can RNA interference phenomena be included in network analysis?

If experiments do not uniquely determine a network, how can we determine which parts can be trusted?

Highlights

  • In silico networks offer a perfectly accurate description of the "real" network, which is critical for assessing reverse-engineering methods.
  • Although researchers control all details of these networks, the resulting behavior can still surprise them.
  • Networks of gene expression, signal transduction, and metabolism have different characteristics.
  • Reverse engineering methods developed for expression may work well for other types of networks.

God of the network

In silico networks, which are created by people and exist only in computers, have an important role to play in the assessment of reverse-engineering methods, said Pedro Mendes of Virginia Tech, who has been simulating these networks for years. Most importantly, they serve as one very clear "gold standard" for reverse-engineering methods, since researchers already know all of the network details precisely. "You are God in this universe," Mendes said. Moreover, new, secret networks can be created quickly and cheaply, providing an easy source of data for a "blinded" reverse-engineering assessment. By contrast, confidence in the structure of biological networks generally arises only after years of published research.

On the other hand, these artificial networks have no intrinsic significance for biologists. Moreover, researchers may include simplifications or assumptions in a model that make the reverse-engineering task appear easier than it really should be.

For this reason, Mendes and his coworkers have been striving to make the artificial networks as realistic as possible. For example, Mendes believes that saturation of response is an important feature of biological networks, so it should be part of model networks as well.

"Once you make things complicated, they become as hard to understand as the real things."

Constructing such complex, realistic networks led to an important realization: "Once you make things complicated, they become as hard to understand as the real things," Mendes said. In one example, he commented, "I doubt if any existing algorithms could reverse engineer this network."

Mendes emphasized that a complete understanding of biological networks cannot be limited to genetic expression. Instead, researchers must ultimately include "the rest of biochemistry," including both metabolic and signaling networks, and their interactions with genes. RNA interference mechanisms will eventually be needed as well.

"Metabolism is a very rigid network: it's very hard to change concentrations of metabolites," Mendes said. The reactants are constrained by a handful of cofactors participating in thousands of reactions. The rigidity also reflects the highly-branched nature of metabolic networks, Mendes said, which contrasts markedly with traditional linear pictures of metabolic networks like the citric-acid cycle.

Signal-transduction networks, like the one described by Arnold Levine, have other special properties, Mendes observed. "They are in many ways fairly flexible, they have a lot of crosstalk, and they integrate signals."

Mendes developed the artificial Claytor Network, which includes genes, proteins, and metabolites.

Combining these networks can yield surprises. Mendes described his "Claytor Network," which includes 20 genes, 23 proteins, and 16 metabolites, interacting by transcription, translation, metabolism, and signal transduction. Surprisingly, the gene-expression response to a perturbation of this small network seemed to indicate a direct connection between two unconnected genes. "In a gene network," Mendes clarified, "we call a connection 'direct' if one gene activates another without passing through another mRNA," although the influence may pass through other types of intermediate.

In this case, however, no such direct link existed. The apparent direct interaction probably arose because Mendes, inspired by biological systems, included a protein in a metabolic chain. The unexpected result, Mendes said, was that even in this simple case, "I don't know how to draw the graph," of the network as it appears in the space of gene interactions. "I made this network up," he added. "I thought I was God, but I really didn't know what I'm doing." The humility that arises from this exercise may be one of the most important results of in silico modeling. Mendes suggested that "when we think we know how to interpret real networks that we have in our hand, maybe we really don't know."

A model metabolic network

Ilya Nemenman of Los Alamos National Laboratory is also exploring the networks that go beyond expression data. He described plans for a robotically sampled chemostat under development at Los Alamos for monitoring metabolic networks. This new tool will monitor hundreds of metabolites, in small batches of a few dozen cells, over hundreds of growth conditions. The researchers intend to focus initially on steady-state measurements, which should be more robust than time-series data. The resulting data are very similar to those found for gene expression, Nemenman said.

The underlying network is quite different, however, involving transformation of one entity into another instead of regulation. Metabolic networks also have a high density of loops, which pose a challenge to some reverse-engineering algorithms. In addition, metabolite concentrations can vary by seven orders of magnitude, and have different noise and nonlinearity than regulatory networks.

The ARACNe algorithm, developed for gene expression, reverse-engineered a metabolic network "pretty well."

For these reasons, the researchers decided to explore whether existing reverse-engineering methods could be effective. In particular, they used the mutual-information method, ARACNe, which Nemenman had helped to develop at Columbia. As described by Riccardo Dalla-Favera, ARACNe was very successful in exploring the network governing B-cell growth and lymphomas.

Nemenman tested the algorithm on data synthesized from previously published network models of red blood cell metabolism. To simulate experimental data, he included both additive and multiplicative noise. The algorithm's reproduction of the perfectly known original network was gratifyingly accurate.
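
A minimal sketch of one common way to combine the two noise types (the functional form and parameters here are illustrative, not taken from Nemenman's talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_readout(levels, sigma_mult=0.1, sigma_add=0.05):
    """Corrupt simulated metabolite levels with multiplicative noise
    (lognormal, proportional to the signal) plus additive noise
    (a floor independent of the signal, as from the instrument)."""
    mult = rng.lognormal(mean=0.0, sigma=sigma_mult, size=levels.shape)
    add = rng.normal(loc=0.0, scale=sigma_add, size=levels.shape)
    return levels * mult + add

true_levels = np.array([1.0, 0.2, 5.0, 0.01])  # concentrations spanning decades
print(noisy_readout(true_levels))
```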

"Reverse-engineering models actually work pretty well" on this kind of data, he said. This conclusion, based on the in silico models, suggests that the forthcoming metabolic data could provide a robust new window on biological networks.

Artificial networks are an important tool for reverse engineers to assess their methods. This assessment can provide important intuition about the relationship between the real and inferred networks and the weaknesses of the methods. Although they should not be used to the exclusion of experimental data, in silico networks also provide a uniquely precise "gold standard" for evaluating the accuracy of reverse-engineering methods.

Highlights

  • Researchers use a variety of algorithms to reverse engineer various biological networks.
  • Bayesian network analysis is widely used but does not capture loops in a network.
  • Newer algorithms improve their power by computing the mutual information between nodes, excluding that conveyed by other connections.
  • Rather than fitting many networks and looking for consensus, the "boosting" scheme iteratively reweights the training set according to the errors of a deliberately weak learning rule.
  • Researchers are not yet evaluating their methods against common data sets and metrics.

Introduction

The DREAM process aims to assess different methods or algorithms for reverse engineering biological networks. Much of this first conference focused on plans for doing this assessment, such as the types of networks to analyze, and how to compare the performance, rather than on the methods themselves. Nonetheless, there were useful discussions, some rather technical, of both established and new algorithms, and some attempts to compare them.

Bayesian analysis

Many of the speakers employed Bayesian network analysis to infer network structure. Since this technique is well established, none of them focused on the detailed implementation of this algorithm, although they highlighted its advantages as they applied it to specific network problems.

Dana Pe'er, for example, explained that the Bayesian technique goes beyond simple pairwise correlations. By examining the relationship between two quantities in the context of other variables, it can pick up associations that confounding variables would otherwise hide. "We're not just looking at correlations, but at multidimensional interactions," she commented.

"We're not just looking at correlations, but at multidimensional interactions."

In addition, as a statistical technique, Bayesian analysis identifies indirect linkages between two variables, connected through a third, unmeasured variable. Even if such an edge is not part of the real network graph, Pe'er said, "I consider it correct" because it describes a real influence. When the missing third variable is included, she said, the apparent edge disappears, as it should.

In practice, researchers apply the technique hundreds of times with different starting points, and assign a score to each resulting network. The edges reported in the consensus network are those that appear in a large fraction of the high-scoring networks. Since there are rarely enough data to determine the network confidently, Pe'er noted that looking for clusters or modules of co-regulated genes can boost the technique's statistical power.

One weakness of the Bayesian approach is that it reports correlation, not causation. Both Pe'er and Eric Schadt showed that some types of data allowed them to deduce the direction of influence. (For more information on this subject, see the section on data requirements.)

Another shortcoming of Bayesian network analysis is that the inferred networks have no loops, in which the influence of one node on others eventually acts back on the original node. Thus, the technique will miss the feedback loops that are central to the behavior of some real networks. One way to deal with this weakness is to "unfold" the data in time. In this "dynamic" Bayesian network analysis, as described by Chris Wiggins, each node affects others only at a later time step, so loops are impossible. This requires dynamic data, however, which are often hard to get for complex, living organisms.
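
A toy illustration of the unfolding (not any particular implementation): once each node is indexed by time, even a cyclic influence graph becomes acyclic.

```python
def unroll(influences, steps):
    """Unfold a (possibly cyclic) influence graph over time: node i at time t
    may affect node j only at time t + 1, so the unrolled graph has no cycles."""
    return [((i, t), (j, t + 1))
            for t in range(steps - 1)
            for i, targets in influences.items()
            for j in targets]

# A two-gene feedback loop (A -> B and B -> A) becomes loop-free when unrolled.
loop = {"A": ["B"], "B": ["A"]}
for edge in unroll(loop, steps=3):
    print(edge)  # e.g. (('A', 0), ('B', 1)) ... (('B', 1), ('A', 2))
```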

Regression models

Jim Collins of Boston University used regression models to infer networks and, with them, to identify the targets of antibiotics. Collins described this as a "real-world application, where network analysis and pathway analysis can have a real impact." His team first used their analysis to create a network model, based on the expression changes induced by a variety of perturbations. They then measured the numerous expression changes induced by drugs. Finally, the researchers used their inferred network in reverse, as a filter to determine which changes were direct responses to the drug and which were responses to the responses. (For more on this subject, see the eBriefing Finding Your Inner Network: Reverse Engineering Makes Sense of Genetic Expression.)
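
A sketch of perturbation-based regression inference in this spirit (a generic linear steady-state formulation, not a reimplementation of Collins's published method):

```python
import numpy as np

def infer_by_regression(X, P):
    """Assume a steady-state balance A @ X = -P, with X the measured
    expression changes (genes x experiments) and P the applied perturbations,
    and recover each gene's row of the influence matrix A by least squares."""
    A = np.zeros((X.shape[0], X.shape[0]))
    for i in range(X.shape[0]):
        # Solve X.T @ a = -P[i] for gene i's incoming influences.
        A[i], *_ = np.linalg.lstsq(X.T, -P[i], rcond=None)
    return A

# Toy check: 3 genes, 6 noiseless perturbation experiments.
rng = np.random.default_rng(1)
A_true = np.array([[-1.0, 0.5, 0.0],
                   [0.0, -1.0, 0.8],
                   [0.3, 0.0, -1.0]])
P = rng.normal(size=(3, 6))
X = np.linalg.solve(A_true, -P)                # simulated steady-state responses
print(np.round(infer_by_regression(X, P), 2))  # recovers A_true
```

With real, noisy data, expression changes that the fitted network cannot explain point to a drug's direct targets, which is one way to realize the filtering idea described above.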

Mutual information

Several of the speakers referred to the results from the ARACNe method, pioneered by a team including DREAM organizers Andrea Califano and Gustavo Stolovitzky. Riccardo Dalla-Favera, for example, described ARACNe's effectiveness in elucidating the network underlying the unique evolution of B cells. Ilya Nemenman showed that the algorithm worked well in extracting an artificial metabolic network as well. In contrast to the Bayesian reliance on statistical correlations between nodes, this method uses a measure of the mutual information between nodes. In this way it quantifies how much of the correlation could not have been predicted by known correlations between other nodes.
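
A schematic of the pruning step at the heart of this approach (the published ARACNe adds significance thresholds and a tolerance parameter):

```python
from itertools import combinations

def dpi_prune(mi):
    """Data-processing-inequality pruning: in every fully connected triplet
    of genes, the edge with the lowest mutual information is presumed to be
    an indirect interaction and is removed."""
    edges = set(mi)
    nodes = {gene for edge in mi for gene in edge}
    for trio in combinations(sorted(nodes), 3):
        triangle = [tuple(sorted(pair)) for pair in combinations(trio, 2)]
        if all(t in edges for t in triangle):
            edges.discard(min(triangle, key=lambda t: mi[t]))
    return edges

# Toy MI values: A-C looks informative only because the A-B-C chain carries signal.
mi = {("A", "B"): 0.8, ("B", "C"): 0.7, ("A", "C"): 0.2}
print(sorted(dpi_prune(mi)))  # [('A', 'B'), ('B', 'C')]: A-C pruned as indirect
```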

Boris Hayete of Boston University compared the performance of different algorithms, including ARACNe, relevance networks, and Bayesian networks. He also described a new scheme called CLR, for Context Likelihood of Relatedness. This new algorithm assigns a significance to the mutual information between two nodes that is based on the local network context.

As a "gold standard," he used the regulatory interactions in E. coli, contained in the RegulonDB database, although he acknowledged that its limited coverage gives low sensitivity to known interactions. The new method outperformed the current ones using a ROC curve metric (as described in the section on metrics). The method identified hundreds of known interactions, and hundreds more putative interactions. Using DNA sequence data, the team flagged known and putative binding motifs that could explain the interactions. They then used ChIP-rtPCR to validate these motifs. When they demanded a precision of at least 60%, more than half of the predictions were validated.

Other approaches

A recurring problem in machine-learning algorithms for biological data, said Christina Leslie of Columbia University, is overfitting of the frequently inadequate data sets. Proponents of Bayesian networks address this issue by running the algorithm many times and looking for consensus. She described a different approach, embodied in a tool for identifying motifs of transcription factors called MEDUSA (Motif Element Discrimination Using Sequence Agglomeration).

Leslie's team exploits a technique known as boosting. Instead of quickly locking in mediocre networks, this scheme employs an intentionally weak prediction rule, which is used to modify the weights assigned to the training data in an iterative process. "The idea is to weight unusual examples that you haven't fit well," Leslie said. The procedure achieves a large classification margin, which helps it avoid overfitting in the high-dimensional parameter space.

"The idea is to weight unusual examples that you haven't fit well."

When trained on one-fifth of the data from a large yeast dataset, MEDUSA did substantially better than TRANSFAC (31% versus 13%) in identifying motifs and nearest neighbors among the remainder. This statistical validation shows that MEDUSA is not overfitting the data, Leslie said, and generates useful predictions for experiments.

Brandilyn Stigler of Ohio State University presented a formal analysis of time-series data in which the variables, such as expression levels, are drawn from a discrete set. Such discrete values may accurately capture what is known about a level in the presence of noise. Under many circumstances, such a system can be described as a polynomial dynamical system, which allows a systematic accounting of the possible "wiring diagrams" that are consistent with its observed evolution. Stigler exploited this observation to devise a new algorithm that does well at extracting both a Drosophila developmental network and one of Pedro Mendes's in silico networks.

Michael Samoilov, of Lawrence Berkeley Laboratory, described a new algorithm called ENRICHed (for Enhance Network Reconstruction via Inference of Connectivity/Control from Heterogeneous data). The driving idea is to use "data fusion" to combine disparate types of data for simultaneous analysis. The algorithm also imposes biochemical constraints on the underlying network from the beginning. Samoilov applied the algorithm to cell-cycle data from yeast, and predicted a large number of new interactions.

Andre Levchenko of Johns Hopkins University explored a specific developmental network—although he cautioned, "I'm not sure there even are developmental networks." The rate at which mouse red blood cells progress between different stages is governed in part by signals from populations of cells in other stages. He analyzed time-series observations of developmental markers for the stages, looking for interactions that explain both the evolution and its robustness. "The idea is not to do the best model, but to choose the best interactions," he said. In other words, which stages influence the progression of cells between which other stages? The results indicated a specific feedforward mechanism, probably involving Fas and its ligand and mediated by cell–cell contact.

Researchers are exploring a wide variety of algorithms for reverse engineering. In addition, they are applying their methods to different networks, and evaluating their effectiveness in different ways. The continued DREAM process should help to assess the strengths and weaknesses of the various methods.