eBriefing

Cracking the Neural Code
Reported by
Kat McGowan

Posted November 06, 2012

Presented By

New York Academy of Sciences and the Aspen Brain Forum Foundation

Overview

Despite the remarkable discoveries of neurobiology, the biggest questions about the brain remain. How do groups of neurons turn raw sensory stimuli into information? How do other neural populations decipher these codes? How do regions of the brain work together to create holistic percepts, generate thoughts, and organize behaviors? What is the wiring diagram of the brain? What is consciousness, and how does it arise?

Recent advances in imaging and informatics, a massive increase in our ability to collect and manipulate data, new computational techniques, and innovative biological tools together make it possible to address these mysteries. This science was highlighted at the Third Annual Aspen Brain Forum on August 23–25, 2012. The symposium, Cracking the Neural Code, introduced new brain-wide maps of neuroanatomy and gene expression, methods to read the signaling of neural circuits, and projects that integrate multiple dimensions of data—anatomical, functional, electrical, genetic, and behavioral.

One theme of the conference delved into the neural "code" itself—how information is represented in patterns of electrical activity generated by ensembles of neurons. The brain has not one code but many: cells in the retina encode visual information using one set of activity patterns, while cells in the motor cortex use a different coding system to generate electrical signals that control muscle movement. Another conference theme explored neural connectivity: structural connectivity, how regions of the brain are anatomically linked; and functional connectivity, relationships between activity in remote regions. Ambitious efforts to explain connectivity on multiple scales are underway. On the largest scale, full-brain maps are describing the topography and organization of the entire brain; on the regional or "meso" scale, optogenetics offers a new way to explore local circuits; and on the smallest scale, efforts have begun to map the complete synaptic connections of one neuron.

Cracking the neural code means finding patterns and meaning in the noisy activity of cell ensembles. The potential rewards: an understanding of how brains generate and manage information and a new era of neuroprosthetics that could enable paralyzed patients to control robotic limbs and could restore sight to the blind.


Presentations available from:
Tim Behrens, DPhil (Oxford University)
Ed Boyden, PhD (Massachusetts Institute of Technology)
Gyorgy Buzsaki, MD, PhD (The Neuroscience Institute, New York University Langone Medical Center)
George Church, PhD (Harvard Medical School)
Fred H. Gage, PhD (The Salk Institute for Biological Studies)
Sean Hill, PhD (École Polytechnique Fédérale de Lausanne)
Leigh R. Hochberg, MD, PhD (Brown University; Providence VAMC; Massachusetts General Hospital/Harvard Medical School)
Allan Jones, PhD (Allen Institute for Brain Science)
Christof Koch, PhD (Allen Institute for Brain Science)
Wei Ji Ma, PhD (Baylor College of Medicine)
Sheila Nirenberg, PhD (Weill Medical College of Cornell University)
Stephanie E. Palmer, PhD (University of Chicago)
Jonathan W. Pillow, PhD (University of Texas at Austin)
Elad Schneidman, PhD (Weizmann Institute of Science)
Rava Azeredo da Silveira, PhD (École Normale Superieure, Paris)
David Van Essen, PhD (Washington University in St. Louis)



Imaging Regional Connections In Vivo


Tim Behrens (Oxford University)
  • 00:01
    1. Introduction
  • 02:31
    2. Non-invasive techniques; In vivo measurement and examples
  • 16:12
    3. Conclusion

Tools for Understanding and Engineering Brain Computations


Ed Boyden (Massachusetts Institute of Technology)
  • 00:01
    1. Introduction; Robot development and the patch algorithm
  • 04:55
    2. Integrative analysis; Robotic triple patching
  • 08:39
    3. Improving and iterating optogenetic molecules; Noninvasive silencing
  • 13:57
    4. Opto-fMRI; Multi-waveguide arrays
  • 17:35
    5. Looking forward; Acknowledgements and conclusion

Neural Syntax: Coordination of Cell Assemblies by Rhythms


Gyorgy Buzsaki (The Neuroscience Institute, New York University Langone Medical Center)
  • 00:01
    1. Introduction; Oscillation classes
  • 04:13
    2. Predicting the brain's choices
  • 12:13
    3. Time offsets within the theta cycle
  • 16:06
    4. Summary and conclusion

Reading and Writing Human Basepairs and Brains


George Church (Harvard Medical School)
  • 00:01
    1. Introduction; Human genomes, environments, and traits
  • 03:25
    2. Tissue engineering and FISSEQ
  • 08:45
    3. The brain activity map; New sequencing methods
  • 19:02
    4. Molecular recording devices; Conclusion

The Human Connectome Project


David Van Essen (Washington University in St. Louis)
  • 00:01
    1. Introduction and overview
  • 11:53
    2. About the Human Connectome Project; Structural and functional connectivity
  • 19:54
    3. Task-fMRI, behavior, and MEG/EEG; Parcellations and networks
  • 23:06
    4. Summary and conclusion

Brain Plasticity and Neural Diversity


Fred H. Gage (The Salk Institute for Biological Studies)
  • 00:01
    1. Introduction
  • 03:36
    2. Purification and propagation of adult stem cells in vitro; Mobile elements
  • 08:41
    3. L1 retrotransposition; Neuronal gene L1 insertion
  • 15:00
    4. Endogenous L1s; Estimating increase in L1 copy number; CpG islands
  • 21:20
    5. Endogenous L1 insertions by qPCR; L1 retrotransposition model
  • 27:12
    6. Summary, acknowledgements, and conclusion

Developing an International Neuroinformatics Infrastructure


Sean Hill (École Polytechnique Fédérale de Lausanne)
  • 00:01
    1. Introduction; The birth of INCF and its mission
  • 06:29
    2. INCF programs; Neuroscience data integration
  • 13:14
    3. Neuroinformatics infrastructure; Conclusion

Neuronal Ensembles: Harnessing their Power in BrainGate and Epilepsy Research


Leigh R. Hochberg (Brown University; Providence VAMC; Massachusetts General Hospital/Harvard Medical School)
  • 00:01
    1. Introduction
  • 05:25
    2. The BrainGate2 trial; Predicting epilepsy
  • 17:50
    3. Acknowledgements and conclusion

View from the Top: What Probabilistic Models of Perception Can Teach Us about Neural Computation


Wei Ji Ma (Baylor College of Medicine)
  • 00:01
    1. Introduction
  • 02:02
    2. What are Bayesian models?
  • 07:11
    3. The power of Bayesian modeling
  • 11:04
    4. The generality of Bayesian modeling; Categorization
  • 14:55
    5. Summary and conclusion

The Allen Human Brain Atlas


Allan Jones (Allen Institute for Brain Science)
  • 00:01
    1. Introduction; The institute's goal and impact
  • 08:40
    2. The Allen Human Brain Atlas
  • 16:10
    3. A global view; Other projects
  • 20:58
    4. The future of the institute; Conclusion and acknowledgements

Project MindScope: Building Brain Observatories


Christof Koch (Allen Institute for Brain Science)
  • 00:01
    1. Introduction
  • 07:42
    2. The MindScope mission and organization
  • 11:05
    3. The neocortex and mouse study; Current projects and near-term goals
  • 21:35
    4. Challenges; Conclusion

Neural Coding and Retinal Prosthetics


Sheila Nirenberg (Weill Medical College of Cornell University)
  • 00:01
    1. Introduction; Finding neural codes
  • 05:35
    2. Meeting three critical conditions; Using the retina
  • 15:40
    3. Retinal prosthetics; Conclusion and acknowledgements

Prediction in the Retina


Stephanie E. Palmer (University of Chicago)
  • 00:01
    1. Introduction
  • 04:58
    2. Recording from the retina and data interpretation
  • 08:10
    3. Schematic of calculations; Bar movie position statistics
  • 14:18
    4. Retinal optimization for prediction; Summary, acknowledgements, and conclusion

A Statistical Approach to Understanding Decision-related Signals in Parietal Cortex


Jonathan W. Pillow (University of Texas at Austin)
  • 00:01
    1. Introduction; The neural coding problem
  • 03:38
    2. Decision making; The random dots test; The drift-diffusion model
  • 06:32
    3. The normative and descriptive/statistical approaches; The generalized linear model
  • 13:10
    4. Model-based decoding; Network implementation
  • 16:28
    5. Correlations between neurons; Conclusions and acknowledgements

The Orchestral Brain: Coding with Correlated and Heterogeneous Neurons


Rava Azeredo da Silveira, PhD (École Normale Superieure, Paris)
  • 00:01
    1. Introduction; Perception task vs. neural correlate
  • 03:45
    2. Averaging out noise
  • 09:10
    3. Discrimination reliability; Heterogeneity
  • 13:40
    4. Summary and conclusion

Websites

Watch a video of Aspen Brain Forum speakers George Church and Fred H. Gage as they compare and contrast the Human Genome Project and current efforts in neuroscience to map the human brain and "crack the neural code." Watch the full Panel Discussion and Q&A for further insights on large-scale brain mapping projects.

The Brain Activity Map Project is a collaborative research initiative announced by the U.S. government in 2013, with the goal of mapping the activity of every neuron in the human brain in ten years.
Markoff J. Obama Seeking to Boost Study of Human Brain. New York Times. February 17, 2013.

The Allen Institute for Brain Science hosts human, mouse, and primate brain maps of anatomy paired with gene expression data, publicly accessible through the web.

Neuroimaging, behavioral, and genetic data generated by the Human Connectome Project will be freely available; the first major data release is scheduled for the spring of 2013.

The International Neuroinformatics Coordinating Facility hosts a neuroinformatics resource to foster neurobiological data sharing.

Ed Boyden's Synthetic Neurobiology group website includes talks and protocols about innovations in optogenetics and in vivo robotics.

Eyewire is a citizen-science project to map the retinal connectome; registered users correct the computerized analysis of cell boundaries and check one another's work.

Details of the BrainGate2 clinical trial of a brain-machine interface for use in paralysis, and more about the system.

Further information about Rahul Sarpeshkar's ultra-low-power prosthetics and glucose fuel cells is available at MIT's Analog Circuits and Biological Systems Group website.


Journal Articles

Big Science and the Brain

Alivisatos AP, Chun M, Church GM, et al. The brain activity map project and the challenge of functional connectomics. Neuron 2012;74(6):970-4.

Glasser MF, Van Essen DC. Mapping human cortical areas in vivo based on myelin content as revealed by T1- and T2-weighted MRI. J. Neurosci. 2011;31(32):11597-616.

Hawrylycz MJ, Lein ES, Guillozet-Bongaarts AL, et al. An anatomically comprehensive atlas of the adult human brain transcriptome. Nature 2012;489:391-399.

Van Essen DC, Ugurbil K, Auerbach E, et al. The Human Connectome Project: a data acquisition perspective. Neuroimage 2012;62(4):2222-31.

Zeng H, Shen EH, Hohmann JG, et al. Large-scale cellular-resolution gene profiling in human neocortex reveals species-specific molecular signatures. Cell 2012;149(2):483-496.

New Tools and Techniques

Ellisman MH, Deerinck TJ, Shu X, Sosinsky GE. Picking faces out of a crowd: genetic labels for identification of proteins in correlated light and electron microscopy imaging. Methods Cell Biol. 2012;111:139-55.

Hama K, Arii T, Katayama E, et al. Tri-dimensional morphometric analysis of astrocytic processes with high voltage electron microscopy of thick Golgi preparations. J. Neurocytol. 2004;33(3):277-85.

Kodandaramaiah S, Talei Franzesi G, Chow B, et al. Automated whole-cell patch clamp electrophysiology of neurons in vivo. Nature Methods 2012;9:585-587.

Mittmann W, Wallace DJ, Czubayko U, et al. Two-photon calcium imaging of evoked activity from L5 somatosensory neurons in vivo. Nat. Neurosci. 2011;14(8):1089-93.

Computational Models

Azeredo da Silveira R, Roska B. Cell types, circuits, computation. Curr. Opin. Neurobiol. 2011;21(5):664-71.

Rust NC, Stocker AA. Ambiguity and invariance: two fundamental challenges for visual processing. Curr. Opin. Neurobiol. 2010;20(3):382-8.

Serre T, Kreiman G, Kouh M, et al. A quantitative theory of immediate visual recognition. Prog. Brain Res. 2007;165:33-56.

How Cells Code: The Micro Scale

Ecker AS, Berens P, Tolias AS, Bethge M. The effect of noise correlations in populations of diversely tuned neurons. J. Neurosci. 2011;31(40):14272-83.

Hochberg LR. Turning thought into action. N. Engl. J. Med. 2008;359(11):1175-7.

Sreenivasan S, Fiete I. Grid cells generate an analog error-correcting code for singularly precise neural computation. Nat. Neurosci. 2011;14(10):1330-7.

Reading Regional Circuits: The Meso Scale

Ganmor E, Segev R, Schneidman E. Sparse low-order interaction network underlies a highly correlated and learnable neural population code. Proc. Natl. Acad. Sci. USA 2011;108(23):9679-84.

Lin D, Boyle MP, Dollar P, et al. Functional identification of an aggression locus in the mouse hypothalamus. Nature 2011;470(7333):221-6.

Paninski L, Pillow J, Lewi J. Statistical models for neural encoding, decoding, and optimal stimulus design. Prog. Brain Res. 2007;165:493-507.

Schneidman E, Berry MJ 2nd, Segev R, Bialek W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 2006; 440(7087):1007-12.

The Whole Brain: The Macro Scale

Behrens TE, Sporns O. Human connectomics. Curr. Opin. Neurobiol. 2012;22(1):144-53.

Diba K, Buzsáki G. Forward and reverse hippocampal place-cell sequences during ripples. Nat. Neurosci. 2007;10(10):1241-2.

Coufal NG, Garcia-Perez JL, Peng GE, et al. L1 retrotransposition in human neural progenitor cells. Nature 2009;460(7259):1127-31.

Hauschild M, Mulliken GH, Fineman I, et al. Cognitive signals for brain-machine interfaces in posterior parietal cortex include continuous 3D trajectory commands. Proc. Natl. Acad. Sci. USA 2012.

Johansen-Berg H, Behrens TE, Robson MD, et al. Changes in connectivity profiles define functionally distinct regions in human medial frontal cortex. Proc. Natl. Acad. Sci. USA 2004;101(36):13335-40.

Pastalkova E, Itskov V, Amarasingham A, Buzsáki G. Internally generated cell assembly sequences in the rat hippocampus. Science 2008;321(5894):1322-7.

Royer S, Zemelman BV, Losonczy A, et al. Control of timing, rate and bursts of hippocampal place cells by dendritic and somatic inhibition. Nat. Neurosci. 2012;15(5):769-75.

Singer T, McConnell MJ, Marchetto MC, et al. LINE-1 retrotransposons: mediators of somatic variation in neuronal genomes? Trends Neurosci. 2010;33(8):345-54.

Applications of Neural Codes

Hochberg LR, Bacher D, Jarosiewicz B, et al. Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 2012;485(7398):372-5.

Jain V, Seung HS, Turaga SC. Machines that learn to segment images: a crucial technology for connectomics. Curr. Opin. Neurobiol. 2010;20(5):653-66.

Nirenberg S, Pandarinath C. Retinal prosthetic strategy with the capacity to restore normal vision. Proc. Natl. Acad. Sci. USA 2012;109(37):15012-7.

Ragan T, Kadiri LR, Venkataraju KU, et al. Serial two-photon tomography for automated ex vivo mouse brain imaging. Nat. Methods 2012;9(3):255-8.

Sit JJ, Simonson AM, Oxenham AJ, et al. A low-power asynchronous interleaved sampling algorithm for cochlear implants that encodes envelope and phase information. IEEE Trans. Biomed. Eng. 2007;54(1):138-49.


Books

Buzsaki G. Rhythms of the Brain. New York, NY. Oxford University Press, USA; 2011.

Koch C. Consciousness: Confessions of a Romantic Reductionist. Cambridge, MA. MIT Press; 2012.

Sarpeshkar, R. Ultra Low Power Bioelectronics: Fundamentals, Biomedical Applications, and Bio-Inspired Systems. New York, NY. Cambridge University Press; 2010.

Seung, S. Connectome: How the Brain's Wiring Makes Us Who We Are. New York, NY. Houghton Mifflin Harcourt Trade; 2012.

Wei XX, Stocker AA. Efficient coding connects prior and likelihood function in perceptual Bayesian inference. In: Advances in Neural Information Processing Systems 25 (NIPS 2012). Lake Tahoe, CA. MIT Press; 2012.

Keynote Speakers

George Church, PhD

Harvard Medical School
e-mail | website | publications

George Church is a professor of genetics at Harvard Medical School and the director of PersonalGenomes.org, the world's only open-access information source for human genomic, environmental, and trait (GET) data. His PhD research at Harvard included the first methods for direct genome sequencing, molecular multiplexing, barcoding, and automation, which led to the first commercial genome sequence (the pathogen Helicobacter pylori). Church's research focuses on new technologies for genomic and proteomic measurement and on the synthesis and modeling of biomedical and ecological systems—in particular, personal genomics and biofuels. His lab has developed next-generation sequencing methods to analyze the output of combinatorial selections, as well as comprehensive gene–environment–trait data for affordable personalized medicine.

Sean Hill, PhD

École Polytechnique Fédérale de Lausanne
e-mail | website | publications

Sean Hill received his PhD in computational neuroscience from the University of Lausanne, Switzerland, where he investigated the computational role of the auditory thalamocortical circuitry in the rat. He subsequently held postdoctoral positions at The Neurosciences Institute and the University of Wisconsin, Madison. Hill has developed numerous large-scale models of neural systems and is the designer and developer of the general-purpose neural simulator Synthesis. As part of his research, he developed the first large-scale model of the cat visual thalamocortical system that replicates neural activity during wakefulness and sleep. He began his work with the Blue Brain project in 2006. His research interests include the use of biologically-realistic models to study the role of emergent phenomena in information processing, network connectivity, and synaptic plasticity in the central nervous system.

Allan Jones, PhD

Allen Institute for Brain Science
e-mail | website | publications

Allan Jones holds a PhD in genetics and developmental biology from Washington University School of Medicine. He is the chief executive officer of the Allen Institute. Working closely with the founders, scientific advisors, and business advisors, he has expanded the Institute's portfolio of large-scale, high-impact initiatives from the mouse brain atlas to work on the human brain. The Institute creates free, public resources for brain atlas research.

Christof Koch, PhD

Allen Institute for Brain Science
website | publications

Christof Koch received his PhD from the Max-Planck-Institut für Biologische Kybernetik, Tübingen. He is the chief scientific officer at the Allen Institute and previously served on the faculty at the California Institute of Technology (Caltech). He spent four years as a postdoctoral fellow in the Artificial Intelligence Laboratory and the Brain and Cognitive Sciences Department at MIT. Koch has published extensively, and his writings and interests integrate theoretical, computational, and experimental neuroscience. Stemming in part from a long-standing collaboration with the late Nobel Laureate Francis Crick, Koch authored the book The Quest for Consciousness: A Neurobiological Approach. He has also authored the technical books Biophysics of Computation: Information Processing in Single Neurons and Methods in Neuronal Modeling: From Ions to Networks, and served as editor for several books on neural modeling and information processing. Koch's research addresses scientific questions using a widely multidisciplinary approach.

David Van Essen, PhD

Washington University in St. Louis
e-mail | website | publications

David Van Essen received his PhD in neurobiology from Harvard University. He was a postdoctoral fellow at Harvard and a faculty member in the division of biology at Caltech before being named Edison Professor of Neurobiology and head of the Department of Anatomy and Neurobiology at Washington University School of Medicine. He served from 1994 to 1998 as editor-in-chief of the Journal of Neuroscience, widely considered the premier journal in its field. Van Essen is internationally known for his research on how the brain organizes and processes visual information. He has made extensive contributions to the understanding of how the brain perceives shape, motion, and color and how attention affects neural activity. His work has helped to demonstrate that the brain contains dozens of different areas involved in vision and that these areas are interconnected by hundreds of distinct neural pathways. He and his colleagues have developed powerful new techniques in computerized brain mapping to analyze these visual areas in humans as well as nonhuman primates. This work includes the continued development of an integrated suite of software tools for surface-based analyses of cerebral cortex. These methods are applied to the analysis of cortical structure and function in monkeys and humans. A broad objective is to develop probabilistic surface-based atlases that accurately convey commonalities as well as differences between individuals.


Speakers

Richard Andersen, PhD

California Institute of Technology
e-mail | website | publications

David J. Anderson, PhD

California Institute of Technology
website | publications

Tim Behrens, DPhil

Oxford University
e-mail | website | publications

Matthias Bethge, PhD

University of Tübingen
e-mail | website | publications

Ed Boyden, PhD

Massachusetts Institute of Technology
e-mail | website | publications

Gyorgy Buzsaki, MD, PhD

The Neuroscience Institute, New York University Langone Medical Center
e-mail | website | publications

Yadin Dudai, PhD

Weizmann Institute of Science
e-mail | website | publications

Mark H. Ellisman, PhD

The National Center for Microscopy and Imaging Research (NCMIR), University of California, San Diego
e-mail | website | publications

Ila R. Fiete, PhD

University of Texas at Austin
e-mail | website | publications

Fred H. Gage, PhD

The Salk Institute for Biological Studies
e-mail | website | publications

Leigh R. Hochberg, MD, PhD

Brown University; Providence VAMC; Massachusetts General Hospital/Harvard Medical School
e-mail | website | publications

Jason N. D. Kerr, PhD

Max Planck Institute for Biological Cybernetics
e-mail | website | publications

Wei Ji Ma, PhD

Baylor College of Medicine
e-mail | website | publications

Sheila Nirenberg, PhD

Weill Medical College of Cornell University
e-mail | website | publications

Stephanie E. Palmer, PhD

University of Chicago
e-mail | website | publications

Jonathan W. Pillow, PhD

University of Texas at Austin
e-mail | website | publications

Tomaso Poggio, PhD

Massachusetts Institute of Technology
e-mail | website | publications

Rahul Sarpeshkar, PhD

Massachusetts Institute of Technology
e-mail | website | publications

Elad Schneidman, PhD

Weizmann Institute of Science
e-mail | website | publications

Andrew Schwartz, PhD

University of Pittsburgh
e-mail | website | publications

Sebastian Seung, PhD

Massachusetts Institute of Technology
e-mail | website | publications

Rava Azeredo da Silveira, PhD

École Normale Superieure, Paris
e-mail | website | publications

Alan A. Stocker, PhD

University of Pennsylvania
e-mail | website | publications

Anthony Zador, PhD

Cold Spring Harbor Laboratory
e-mail | website | publications


Kat McGowan

Kat McGowan is a freelance magazine writer specializing in science and medicine.

Sponsors

Presented by

  • Aspen Brain Forum
  • New York Academy of Sciences

Silver Sponsor

  • Aetna Foundation

Academy Friend

Ripple

Grant Support

This conference was supported in part by a grant from Medtronic.

The human brain is "the most complex piece of organized matter in the known universe."

The human brain includes some 100 billion neurons, each of which makes an average of 10,000 connections with other cells. The brain has as many as 500 distinct anatomical regions, most of which have never been fully described or distinguished from neighboring regions. In short, the human brain is "the most complex piece of organized matter in the known universe," said Christof Koch of the Allen Institute for Brain Science in his introductory keynote speech. In the past, this complexity made it difficult to pose scientific questions about the whole brain. New tools and computational approaches are changing that, as are innovative collaborative connectome projects aiming to create standardized, multidimensional comprehensive atlases of the brain. These new capabilities offer unprecedented opportunities for research into highly complex brain systems, as Mark Ellisman of the University of California, San Diego explained: "We live in a time of convergent revolutions, with high-throughput methods to acquire massive amounts of information and the information technology that allows us to deal with it."

Brain atlases: A question in three sizes

In 2005, Olaf Sporns and colleagues proposed finding the "connectome"—a map of all the connections in the human brain. The term now has many meanings. It can refer to anatomical maps on a local or brain-wide scale. It can be applied to atlases that describe not only physical connections but also information about cells and the functional relationships between brain regions—how activity in one part of the brain relates to activity in another.

The word is no accident; in ambition and scope, the connectome projects now underway evoke the Human Genome Project. These are technologically-driven, resource-intensive collective efforts designed to generate research tools rather than to address a specific hypothesis. However, the analogy has its limits: DNA has a single code, which was known before the genome project began; the brain has many, and its codes are largely unknown. Humans have roughly 21,000 genes—and hundreds of trillions of brain connections. Thus far, the only wiring diagram to be fully mapped out is that of the nematode C. elegans. This creature has only 302 neurons—yet the project required 50 person-years to complete.

A connectome can refer to the whole brain (left), a functional subsegment of the brain (middle), or just one neuron (right). (Image courtesy of David Van Essen)

Major collaborative projects such as the Human Connectome Project (HCP) and the Allen Brain atlases are creating new public resources to describe the topography and organization of the brain. The HCP is made possible by innovations in neuroimaging technologies, such as diffusion spectrum imaging, which traces white matter tracts in vivo to generate a map of connectivity. The Allen Institute deploys automated serial block-face scanning electron microscopy, in situ hybridization, and microarrays to reconstruct tissue sections into 3-D atlases that map gene expression against neuroanatomy.

Major collaborative projects are creating new public resources to describe the topography and organization of the brain.

Intermediate-scale projects probe regional circuits and connectivity at greater resolution. Optogenetics, for example, can reveal how the activity of one population of nerve cells influences a network. In another project sponsored by the Allen Institute, the mouse visual system is being comprehensively catalogued to create an atlas that includes neuroanatomical connectivity, cell type analysis, physiological data, and gene expression. Taking a complementary approach, the Blue Brain Project at the École Polytechnique Fédérale de Lausanne is using activity data from the rat cortex to simulate a cortical column in silico. Anatomical and activity data collected from cortical neurons are reassembled by a supercomputer to create a working model of a cortical column, the main functional subunit of the mammalian cortex. The cortex of a rat or a human is composed of tens of thousands to millions of such columns, each forming a local microcircuit that also makes regional and long-distance connections. The first aim of this project is to explain how physical connections generate and constrain electrical activity and neural function in one of these local networks.

On the neuronal level are efforts to map the most local connections, such as those between two cells or one microcircuit. Using serial block-face scanning electron microscopy, Sebastian Seung from MIT is attempting to recreate the 3-D connectome of a 350 by 300 micron section of mouse retina that includes every synapse. To tackle the enormous amount of data, this "Eyewire" project combines computer analysis with the participation of online volunteers.

Neural code: Speaking the language of neurons

Mapping neural connections is just one way to explore whole brain function. It is also necessary to know how neurons communicate with one another—to crack the neural code. Neurons are noisy and unpredictable—in any one cell, the exact same stimulus does not reliably generate the same response—but the sensory world we perceive is not. Somehow, the intermittent "spikes" or action potentials of one cell combine with the activity of others to give rise to accurate, efficient communication. In some cases, the action potentials of groups of cells form a simple population code that can be read out in a straightforward fashion. A cell in the visual system, for example, might spike slowly in response to a stimulus at a 45-degree angle and rapidly in response to a vertical line. In this case, the activity of many visual cortex neurons might be simply summed up to arrive at the spike code that represents the stimulus. However, such codes tend to be inefficient, and recordings from other parts of the brain suggest that most functions are represented in some other fashion. Computational approaches and information theory are being used to describe the principles that underlie neural coding in an attempt to build a thesaurus of neural languages.
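
The information-theoretic analyses mentioned here rest on a standard quantity, the mutual information between stimulus and response, which measures (in bits) how much observing the responses R reduces uncertainty about the stimulus S. The definition below is the textbook formula rather than anything specific to a single talk:

    I(S; R) = \sum_{s,r} p(s, r)\,\log_2 \frac{p(s, r)}{p(s)\, p(r)}

Comparing candidate codes by how much of this information they recover is one way such analyses judge whether a proposed code captures what the spike trains actually convey.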

Somehow, the noisy "spikes" of one cell combine with the activity of others to give rise to accurate, efficient communication.

One overarching observation, made by Matthias Bethge of the University of Tübingen among others, is that the brain is inherently probabilistic—the same stimulus may elicit different responses in the same neuron—and yet somehow groups of neurons efficiently represent information. Models that predict the behavior of real neuronal circuits provide some insight into how the brain may maximize efficient information transfer.

The neural code problem has direct practical applications. Understanding "encoding," how sensory data is translated into electrical activity, can improve sensory prosthetics such as the artificial retina. Investigating "decoding," how signals from the brain are read out by other regions and by the peripheral nervous system, is driving advances in new motor prosthetics using brain-machine interfaces. These applications also test our understanding of neural processes: the successes and failures of neural prosthetics drive further insights into the nature of the neural code.

Keynote speakers:
Christof Koch, Allen Institute for Brain Science
Allan Jones, Allen Institute for Brain Science
David Van Essen, Washington University in St. Louis
Sean Hill, École Polytechnique Fédérale de Lausanne
George Church, Harvard Medical School

Highlights

  • Whole brain atlases combine neuroanatomical and genetic data to create public resources for basic science.
  • A combination of electrophysiology, fine-scale neuroanatomy, genetics, and modeling is being used to comprehensively describe the entire mouse visual system.
  • The Human Connectome Project is mapping the brain's major connections in vivo and exploring variability.

A new era for neuroscience

Neurobiology has traditionally been an artisanal enterprise. Ever since Santiago Ramon y Cajal hunched over his microscope, most projects have been conceived and executed by independent investigators working in isolation. Today, as the keynote addresses made clear, the era of big brain research has arrived. High-throughput, big-budget projects are collecting and organizing massive amounts of multidimensional data into "hypothesis-independent comprehensive resources," in the words of Allan Jones, chief executive officer of the Allen Institute for Brain Science. These projects are "experiments in the sociology of neuroscience," said Christof Koch, the Allen Institute's chief scientific officer; scientists are being asked to subordinate personal curiosity to the goals of a collective effort. The end result is potentially unprecedented: structured, standardized comprehensive data sets—on a massive scale—that will be able to be analyzed and manipulated by many researchers.

Two keynote addresses delivered by Koch and Jones outlined the projects underway at the Allen Institute. One explores the encoding and transformation of information in mouse corticothalamic vision networks to establish a standard reference of how the neocortex processes sensory data. Neuroanatomical, physiological, and genetic data are collected, and cell types are described by combining quantitative analysis of physical structure and spatial characteristics with full readouts of gene transcripts. Major patterns of cell-to-cell connectivity will be catalogued using electron microscopy-based anatomical reconstruction and electrophysiological techniques, including 64-channel electrode arrays. Optogenetics will probe how alterations to one cell type influence network activity. Data and modeling will be paired to create a "virtuous loop," Koch said.

Data and modeling will be paired to create a "virtuous loop."

The Allen Institute is also creating atlases of human, mouse, and primate brains that include standardized neuroanatomical and gene-expression data from the entire CNS. Serial electron microscopy block-face imaging is linked with microarray and in situ hybridization data to populate a 3-D interactive atlas that can be accessed through the Web. Since initiation in 2003, the atlas projects have generated 3.2 million tissue sections, nearly a petabyte of image data, and three of the top ten transgenic mouse lines sold through Jackson Labs. The human brain atlas can be accessed online to interrogate the expression of any gene in the genome and map its expression in tissue. It now includes data from two complete brains that can be explored separately or jointly. The 10-year horizon includes mapping all cell types in both the human and the mouse brain using postmortem tissue and systematically exploring molecular networks using normal and disease-associated induced pluripotent stem (iPS) cells. The idea is that this comprehensive structural and functional characterization will enable new types of experiments to probe how a healthy brain is organized, how human genetic diversity influences the brain, and what goes wrong in disorder and disease.

Gene expression data—coded by the multicolored bar at the bottom—is mapped against 3-D anatomical coordinates in the Allen human brain atlas. (Image courtesy of Allan Jones)


 

It all links up

The Human Connectome Project is another major resource currently under construction. This project will use diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI) to scan 1,200 people, including 300 twin pairs. Data will be combined with EEG, gene sequencing, and behavioral tests. The completed atlas will permit exploration of circuitry and function and will cast light on individual variation. "We presume that the connectivity of everyone in this room is not only substantially different but accounts for many aspects of brain and behavioral relationships," said David Van Essen of Washington University in St. Louis in his keynote. The resulting "macroconnectome" will also be used to perform a more accurate parcellation of the estimated 150–200 human cortical areas in each hemisphere. This new ability to bridge macro- and micro-scales offers new possibilities for understanding disease and development, although defining a truly complete connectome probably will not be feasible "in my lifetime," Van Essen said.

Advances in synthetic biology and precise manipulation of specific neuronal ensembles will accelerate our understanding of the neural code.

The Blue Brain project aims to model all of the anatomical and physiological relationships within one cortical column—the fundamental building block of the mammalian brain. This "in silico" brain, as Sean Hill of École Polytechnique Fédérale de Lausanne described it, exists entirely in digital form, the result of data generated by years of electrophysiological recordings, mostly from slices of rat cortex. Specific characteristics of each cell, such as channel type, size, and electrical behavior, are plugged into the model so that it closely reflects the biological reality of a cortical column. The biologically-inspired model will be used to probe how cellular activity in the cortex gives rise to function.

In parallel to mapping efforts, advances in synthetic biology will accelerate our understanding of the neural code, suggested George Church of Harvard Medical School. Just as the precipitous decline in the price of gene sequencing led to a transformation in human genetic research, synthetic biology and advances in robotics may have profound effects on brain science. He provided a quick history of recent advances in sequencing and predicted that high-throughput technologies would soon provide equally powerful tools for investigating the brain. Currently, it is possible to record from dozens of neurons at once in a living brain; Church suggested that a synthetic biology approach might one day enable researchers to record all the neurons in a rodent brain simultaneously or to precisely control the electrical activity of specific networks. As one example, Church envisioned the creation of highly-efficient miniaturized "molecular recording devices"—synthetic T cells labeled with bar codes and photosensors that could be introduced into the brain to influence and report on neural activity.

In the discussion that followed, this question was posed: Has neuroscience progressed far enough that we can begin to probe the neural code, or should these efforts have been postponed until knowledge about the brain has deepened? "The 'code' was cracked genomically before the Human Genome Project, and in this case it seems we haven't cracked the code," said Fred Gage. The history of science includes many examples in which progress was fostered by moving ahead despite incomplete knowledge, argued Church. The huge datasets that brain projects will generate will undoubtedly spur developments in fields such as neuroinformatics, Van Essen pointed out. Koch agreed that "incomplete knowledge is vastly better than no knowledge at all." Yet, merely collecting "big data" does not solve questions on its own; to understand a complex system, it is necessary to describe the interactions between different levels and scales with precision. "I'm not sure knowing all the details of connections and genes will tell us how the brain works and how it does its magic," said one audience member. Hill and others countered that these projects at least provide a starting point to organize and integrate data.

Open Questions

What is the core columnar operation performed by cortex that makes natural intelligence so robust and flexible?

How many distinct anatomical regions should the human brain be parcellated into?

Does the lack of comprehensive theories about neural coding mean that collecting brainwide data on structure and function is premature?

Speakers:
Ed Boyden, Massachusetts Institute of Technology
Anthony Zador, Cold Spring Harbor Laboratory
Jason N. D. Kerr, Max Planck Institute for Biological Cybernetics
Mark H. Ellisman, The National Center for Microscopy and Imaging Research, University of California, San Diego
Sean Hill, École Polytechnique Fédérale de Lausanne

Highlights

  • Further innovations in optogenetics will enable simultaneous activation and silencing of different cell populations within one network.
  • Robotics could transform patch-clamp recording into a high-throughput technique.
  • DNA sequencing technologies could be adapted to create a high-throughput method to map neural connections.
  • Astrocytes should not be overlooked in the brain connectome.

Technology and tools: What's up next?

In the last decade, remarkable research tools have emerged that enable multiple dimensions of data to be collected and compared on a large scale, permitting new questions to be asked and answered.

Novel classes of light-sensitive proteins could simultaneously activate and silence different groups of cells within the same network.

Optogenetics introduces genes that code for light-sensitive proteins, such as channelrhodopsin, into specific types of neurons. Depending on the type of protein used, LEDs or optical fibers inserted inside the skull or brain can activate or inhibit one cell type in a network at speeds within the timescale of natural neuronal activity. Optogenetics allows precise interventions that make it possible to observe how the electrical activity of a particular cell type contributes to the function of a circuit, and ultimately to behavior. MIT's Ed Boyden discussed the next phase of optogenetics: the search for novel classes of light-sensitive proteins that could simultaneously activate and silence different groups of cells within the same network. He also described a transcranial noninvasive mechanism to activate neurons that allows experimenters to wirelessly control free-moving animals with LEDs. Another project: automating whole-cell patch clamp recording. Patch clamp recording is a highly informative but labor-intensive physiological technique that records and labels individual living neurons; Boyden's group is developing robots to perform the technique, in theory making it possible to record from between 25 and 100 cells at once.

Patch-clamp recording from three cells at once in the living mouse. Boyden hopes robotics will soon make it possible to record from 25 to 100 cells simultaneously. (Image courtesy of Ed Boyden)

Adapting the high-throughput technology developed for gene sequencing to the problem of neural connectomics could take advantage of sequencing's spectacular gains in efficiency. Anthony Zador of Cold Spring Harbor Laboratory proposed using DNA plasmids to label each neuron with a unique "barcode." Pseudorabies virus, which moves genetic material trans-synaptically, could be used to associate the DNA barcodes of synaptically connected neurons. Linked barcodes would reflect neurons that are physically joined, and each neuron would bear both its individual DNA barcode and those of its thousands of synaptic partners. Extracting and sequencing the DNA would then yield a high-throughput readout of neuronal links. Modifications of the technique could also incorporate information about cell type and morphology. "Converting connectivity into a sequencing problem raises the possibility we will be able to determine complete connectivity in [only] a few days," said Zador.
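
To picture the readout logic Zador described, the toy sketch below turns a list of sequenced barcode pairs into an adjacency table of putative connections. Everything here is hypothetical and illustrative: the barcodes, the read counts, and the assumption that sequencing directly yields (host, partner) pairs.

    # Toy illustration of reading connectivity out of barcode pairs.
    # All barcodes and counts are hypothetical; the sequencing chemistry,
    # error correction, and directionality of transfer are abstracted away.
    from collections import defaultdict

    # Each "read" pairs a neuron's own barcode with one partner barcode
    # presumed to have crossed a synapse.
    sequenced_pairs = [
        ("ACGT", "GGTA"),
        ("ACGT", "TTAC"),
        ("GGTA", "TTAC"),
        ("ACGT", "GGTA"),  # duplicate reads add support for an edge
    ]

    def pairs_to_connectome(pairs):
        """Collapse barcode pairs into an adjacency map with read counts."""
        adjacency = defaultdict(lambda: defaultdict(int))
        for host, partner in pairs:
            adjacency[host][partner] += 1
        return adjacency

    for host, partners in pairs_to_connectome(sequenced_pairs).items():
        for partner, reads in partners.items():
            print(f"{host} -> {partner}: {reads} read(s)")

The appeal of the approach is that the hard part, counting enormous numbers of barcode pairs, is exactly what sequencing pipelines already do well.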

To understand how neural circuitry underlies behavior, behavior must be as naturally motivated as possible, a requirement at odds with the restrictions necessary to collect physiological data. Jason Kerr of the Max Planck Institute for Biological Cybernetics introduced a set of technological solutions to this problem. With a mobile two-photon microscopy system, which at 5.5 grams can be mounted on the head of a rat, he can simultaneously track head movement, eye movement, and the activity of cortical neurons in free-moving rodents. The animal can freely explore its environment as well as experimental visual stimuli while its brain activity is being imaged.

Bringing it all together

Bridging scales will be a challenge in this new era of neuroscience, said Mark Ellisman of the National Center for Microscopy and Imaging Research at the University of California, San Diego. Advances in light microscopy (such as the use of fluorescent proteins) and in electron microscopy (EM) generate data about cell structures at previously unimaginable resolutions. Bridging the gap between EM and light microscopy requires innovative methods such as Ellisman's label "miniSOG," a small protein that can be genetically introduced into neurons to tag a structure; when illuminated, it catalyzes formation of a precipitate that can be resolved with EM, allowing proteins to be localized within the cell by both light and electron microscopy.

Ellisman made the case for the importance of astrocytes, the non-neuronal cells that are at least as prevalent in the brain as neurons. Far from the mere housekeeping cells they were once thought to be, astrocytes are now known to have both receptors and neurotransmitters, to communicate via calcium waves, and to enfold neurons in such a way that each astrocyte could potentially influence as many as 120,000 neuronal synapses. "We think these are highly dynamic and interesting," said Ellisman. Whether astrocytes modify information is not known, but the anatomy of the cells suggests that the territorial relationships between neighboring astrocytes may be yet another organizational dimension of the brain. [Note: Ellisman did not explicitly connect the idea of bridging scales with his findings on astrocytes.]

Astrocytes were underestimated in part because the usual staining technique did not represent cell volume (A, left); a more accurate view (B, right) reveals the cells have extensive processes. (Image courtesy of Mark Ellisman)

Heterogeneous data generated by the many neurobiological projects should be compiled into a comprehensive knowledge base, argued Sean Hill of the École Polytechnique Fédérale de Lausanne, because "we don't know how much we know." He described the effort by the International Neuroinformatics Coordinating Facility, launched in 2005, to establish an open informatics infrastructure to integrate various data sources into a system that could facilitate analysis and modeling.

Open Questions

What are the best ways to encourage and organize data sharing in neuroscience?

What role do astrocytes play in neural coding?

Speakers:
Stephanie E. Palmer, University of Chicago
Alan A. Stocker, University of Pennsylvania
Rava Azeredo da Silveira, École Normale Superieure, Paris
Wei Ji Ma, Baylor College of Medicine
Tomaso Poggio, Massachusetts Institute of Technology

Highlights

  • Retinal cells code predictive information about the likely future state of a stimulus.
  • Positive correlations between neurons and the physiological diversity of neurons can powerfully suppress error.
  • Most brain function—from processing simple sensory inputs to complex perception—is probabilistic.
  • Bayesian approaches explain primate attention, perception, and memory and can constrain neural models.

It's all predictable

The brain predicts the world, even at simple levels of processing. The activity of sensory system neurons suggests that circuits make use of past information about inputs to process and interpret new data more quickly and efficiently. "Prediction is useful," said Stephanie Palmer of the University of Chicago. "Our hypothesis is that it's so important that it starts at the earliest sensory inputs." Recordings from dissected larval salamander retinas show that information about the most likely future state of a complex but predictable visual stimulus is encoded in the population firing of ganglion cells. Analysis of this coding suggests that the retinal code is near optimal: the cells use predictive rules to extract important stimulus features. As the stimulus changes, the coding strategy can be updated, but in general the retina is designed to be predictive.
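
One way to make the "near optimal" claim precise, in the spirit of the analysis Palmer described (the notation here is an assumption), is to measure how much information the retinal response at time t carries about the stimulus a short interval Δt into the future, and compare it with the most any code could carry given the stimulus statistics:

    I_{\text{pred}}(\Delta t) = I(R_t;\, S_{t+\Delta t}) \;\le\; I(S_{\le t};\, S_{t+\Delta t})

The bound follows from the data-processing inequality, since the response can reflect only the stimulus history the retina has actually seen; a code is near optimal when the predictive information on the left approaches the bound on the right.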

Representing information in the brain requires both encoding and decoding: neural encoding should represent stimuli efficiently, while neural decoding should accurately read out the signals arriving from other brain regions. Decoding can be improved by combining immediate stimulus information with prior knowledge to form a probability estimate of what the stimulus is likely to be—the basic idea behind Bayesian inference. On the "Bayesian brain" hypothesis, the brain does not approach a sensory stimulus from scratch but considers it in the light of past experience, as part of a likely set of stimuli, and these probabilities are continually revised and updated online. Bayesian models explain human perceptual behavior well and can account for the accuracy of our perceptions despite the noisiness of sensory signals. But while Bayesian probabilistic strategies do a good job of accounting for sensory perception at the cognitive level, the mechanics of how neurons physically enact and refine these rules have yet to be determined, said Alan Stocker of the University of Pennsylvania. He proposed a conceptual framework linking encoding and decoding: optimally efficient population codes (encoding) create the conditions that make Bayesian inference (decoding) possible. A population vector code, in which each neuron's preferred stimulus value is weighted by its firing rate and the results are summed, is not optimal in terms of information coding but is biophysically plausible.
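
The "Bayesian brain" idea sketched above comes down to Bayes' rule applied to perception: the belief about a stimulus s after observing a noisy sensory measurement r combines the likelihood of that measurement with prior experience. In standard notation,

    p(s \mid r) \;=\; \frac{p(r \mid s)\, p(s)}{p(r)} \;\propto\; p(r \mid s)\, p(s)

where the prior p(s) is what past experience contributes, and updating it after each observation is the online revision of probabilities described above.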

The Bayesian approach explains human perceptual behavior well and can account for the clarity of our perceptions despite the noisiness of sensory signals.

Correlations between the activity of distinct types of neurons are common in the brain, but it's unknown whether correlations improve the efficiency and quality of neural coding or impair coding fidelity as some studies have suggested. Rava Azeredo da Silveira from the École Normale Superieure presented one scenario in which correlations radically improve the accuracy of heterogeneous groups of neurons and increase their information-carrying capacity. In this "orchestral" model, noise is averaged out by the collaboration of different types of neurons; as few as 5 to 10 correlated neurons can reduce noise, behaving together like one "deterministic meganeuron," he said.

Neurons with correlated activity (right) are not only more accurate but also have greater information-coding capacity than neurons operating independently (left). (Image courtesy of Rava Azeredo da Silveira)
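
The textbook argument for why correlations were expected to hurt, which is the view da Silveira's scenario challenges (the calculation below is the standard one, not his model), comes from averaging the responses of N neurons whose noise has equal variance σ² and uniform pairwise correlation ρ:

    \operatorname{Var}\!\left(\frac{1}{N}\sum_{i=1}^{N} x_i\right) \;=\; \frac{\sigma^2}{N}\bigl[1 + (N-1)\rho\bigr] \;\to\; \rho\,\sigma^2 \quad (N \to \infty)

For positive ρ the noise never averages away completely, which is why correlations were long viewed as a cost; the "orchestral" result shows that with heterogeneous tuning the effect can reverse.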


 

The Bayesian brain

Probabilistic models can also describe higher-order cognition, said Wei Ji Ma from Baylor College of Medicine. Complex perceptual tasks, such as categorizing stimuli or noticing changes, result in behaviors that closely fit Bayesian models. When confronted with ambiguous stimuli that are difficult to categorize, both humans and monkeys continually recompute the boundaries between categories after every trial, suggesting that they are adopting a Bayesian approach.

Turning to the specific perceptual task of object recognition, and the neurons in the ventral stream of the visual system that carry out this function, Tomaso Poggio from MIT proposed a theory to explain invariant recognition—the surprising ability of the primate visual system to rapidly recognize a familiar object from any angle, even if this perspective is entirely novel. His theory postulates that as we develop, visual information about each object encountered is stored as a book-like set of templates that represent the possible transformations of the object. This information is encoded in the strength of connections between visual cortex neurons, and this structure underlies the familiar "tuning curve" of neural responsivity in the visual cortex.

Open Questions

How do neurons use probabilistic strategies to improve coding efficiency?

How do noisy, unpredictable neuronal spikes generate reliable perceptual information?

Speakers:
Matthias Bethge, University of Tübingen
Andrew Schwartz, University of Pittsburgh
Ila R. Fiete, University of Texas at Austin

Highlights

  • A nonparametric model of neuronal spiking that captures response dependencies accurately predicts neuronal activity.
  • A tuning curve reflecting how the structure of inputs to a cell results in activation can be employed to control a neural prosthetic.
  • Exponentially strong population codes used by entorhinal grid cells are 100,000 times more efficient than simple population codes.

One brain, many codes

The gist of the neural code problem is simple: Somehow, the external world is represented in the brain, said Matthias Bethge of the University of Tübingen. But how this translation occurs is not obvious. Models are used to test hypotheses about how neurons transmit information. Traditional generalized linear models are based on the reverse correlation of data collected from neurons responding to sensory white noise; for instance, the spike-triggered average sums the stimuli that precede a spike and divides by the number of spikes. Such models require the feature space of the cell to be known in advance—that is, its preferred stimuli. In contrast, Bethge introduced a nonparametric model that can learn the feature space and predict the spike trains of individual neurons in response to any arbitrary stimulus. This spike-triggered mixture model, which captures the dependencies between responses and incorporates historical information, was about 100 bits/second better than the generalized linear model and more accurately predicted spiking behavior in a test with primary afferents of the rat whisker. Using such a generative model makes it possible to explore any neuron without having to first establish the feature space of that cell.

The nonparametric spike-triggered mixture model (center row), by including spiking history, predicts the spiking behavior of a rat sensory neuron (top row). (Image courtesy of Matthias Bethge)
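
To make the reverse-correlation idea concrete, here is a minimal sketch of a spike-triggered average computed on simulated data. The white-noise stimulus, the exponential filter, and the spiking threshold are illustrative assumptions, not details from the talk.

    # Minimal sketch of a spike-triggered average (STA) on simulated data.
    import numpy as np

    rng = np.random.default_rng(0)

    n_bins, window = 10_000, 20                  # time bins; STA window length
    stimulus = rng.standard_normal(n_bins)       # white-noise stimulus

    # Simulate a cell driven by a causal exponential filter of the stimulus.
    true_filter = np.exp(-np.arange(window) / 5.0)
    drive = np.convolve(stimulus, true_filter, mode="full")[:n_bins]
    spikes = (drive + rng.standard_normal(n_bins)) > 2.0   # boolean spike train

    # STA: sum the stimulus segments preceding each spike and divide by the
    # number of spikes, i.e., average the spike-preceding windows.
    spike_bins = np.flatnonzero(spikes)
    spike_bins = spike_bins[spike_bins >= window]          # keep full windows
    sta = np.mean([stimulus[t - window + 1 : t + 1] for t in spike_bins], axis=0)

    print("spike count:", spike_bins.size)
    print("STA (oldest bin first):", np.round(sta, 2))

With enough spikes the STA recovers the shape of the filter that drove the simulated cell; the spike-triggered mixture model described above goes further by learning the cell's feature space and response dependencies instead of assuming them.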

Scaling up from the behavior of individual neurons to neuron populations, Andrew Schwartz of the University of Pittsburgh discussed how to determine which subset of the roughly 10,000 electrical inputs to a cell combines to cause the action potential, a nonlinear event in which the summed excitatory postsynaptic potentials cause the polarity of the cell to reverse. "We have a poor understanding of this, and it is critical," he said. Information captured in the motor cortex can be used to direct a neural prosthetic. In one study, researchers decoded activity from the primate motor cortex as an animal moved its arm through space, using an implanted electrode array to record neuronal activity and obtain a readout of the trajectory of its movements. The firing rates of individual neurons did not directly reflect the trajectory of movement, but the population code—a vector combining the preferred directions of responding neurons—could capture the neural encoding of trajectory. This vector was used to direct the movements of a robotic arm so that an animal with a chronically implanted electrode array that detects impulses from motor cortex could manipulate the prosthetic with dexterity.
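
The population vector described here can be sketched in a few lines. The cosine tuning, the neuron count, and the noise level below are generic modeling assumptions used for illustration; this is not the decoder used in the primate experiments.

    # Illustrative population-vector readout: each neuron has a preferred
    # movement direction, and rates above baseline weight a vector sum.
    import numpy as np

    rng = np.random.default_rng(1)

    n_neurons = 64
    preferred = rng.uniform(0, 2 * np.pi, n_neurons)  # preferred directions (rad)
    true_direction = np.deg2rad(30.0)                 # intended movement direction

    # Cosine tuning: firing peaks when movement matches the preferred direction.
    baseline, modulation = 10.0, 8.0
    rates = baseline + modulation * np.cos(true_direction - preferred)
    rates += rng.normal(0.0, 2.0, n_neurons)          # trial-to-trial noise

    # Population vector: unit vectors along preferred directions, weighted by
    # each neuron's rate relative to baseline, then summed.
    weights = rates - baseline
    pop_x = np.sum(weights * np.cos(preferred))
    pop_y = np.sum(weights * np.sin(preferred))
    decoded = np.degrees(np.arctan2(pop_y, pop_x)) % 360

    print(f"true direction:    {np.degrees(true_direction):.1f} deg")
    print(f"decoded direction: {decoded:.1f} deg")

The decoded angle tracks the intended direction even though no single neuron's firing rate does, which is the property the prosthetic work exploits.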

"Which subset of the roughly 10,000 electrical inputs to a cell causes the action potential?"

One noteworthy code is employed by grid cells in the rat entorhinal cortex, which is involved in determining location. The cells encode location as a periodic function of space, explained Ila Fiete of the University of Texas at Austin, a system that so far has not been seen elsewhere in the brain. This particular function is more efficient than classical neural population codes, generating a robust representation of location that is exponentially less sensitive to noise than the "majority-rule" codes often found in sensory and motor cortexes. One simple neural network that employs this periodic structure—an anatomical loop between the coding entorhinal cortex and the decoding hippocampus—is close to optimally efficient by an information-theory definition. It represents a 100,000-fold improvement over a classic population code, so it seems likely that the brain employs such "exponentially strong" codes [EPC] elsewhere, Fiete said: "The hunt for EPCs is on."
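
A rough way to see how a modular, periodic code can outrun a majority-rule population code (this is an analogy in the spirit of the description above, not Fiete's actual model) is that phases read out against several coprime periods jointly identify far more locations than any single module can:

    # Toy illustration: with coprime periods, the combination of phases across
    # modules uniquely identifies every location up to the product of the
    # periods, even though each module alone distinguishes only a few positions.
    periods = (3, 4, 5)                      # hypothetical module periods
    n_locations = 3 * 4 * 5                  # 60 locations in total

    codes = {x: tuple(x % p for p in periods) for x in range(n_locations)}

    assert len(set(codes.values())) == n_locations   # every code word is distinct
    print(f"{len(periods)} modules with periods {periods} "
          f"uniquely encode {n_locations} locations")

Adding modules multiplies the representable range while the number of units grows only additively, which is the sense in which such codes can scale exponentially.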

Open Questions

If the grid cell code is exponentially more efficient than a population code, why aren't these codes ubiquitous?

How can we identify which subset of the electrical inputs to a cell combine to cause the action potential?

How does the brain efficiently encode information when population codes are so information-poor?

Speakers:
David J. Anderson, California Institute of Technology
Elad Schneidman, Weizmann Institute of Science
Jonathan W. Pillow, University of Texas at Austin

Highlights

  • A principle of neural codes may be that they are easily learnable by other brain modules.
  • The sparse activity of groups of cells creates reliable patterns of activity.
  • Naturalistic and descriptive approaches to modeling can reveal the brain's codes.
  • Competing instinctual behaviors can be controlled by overlapping neural circuits.

Beyond the cortex

Studies of coding generally focus on the cortex, but sub-cortical structures should not be overlooked, said David J. Anderson of California Institute of Technology in his talk on the neural basis of aggression. To understand the circuitry that underlies evolutionarily important behaviors, such as aggression and fear, his group combines molecular genetics with electrophysiology and functional imaging. One longstanding theory posits that behavior is hierarchically organized, the result of a series of decisions between competing instincts. Anderson is seeking circuit-based evidence for this idea with experiments on aggression and mating behavior in mice, using optogenetics to explore how overlapping circuitry can underlie two starkly different behaviors.

The computational challenge

Even modest numbers of cells—a dozen to a few hundred—have the potential to generate a staggering diversity of patterns, interactions, and codependencies. Looking for overarching patterns can organize the problem. One of the principles of neural codes may be that they are learnable, suggested Elad Schneidman of the Weizmann Institute of Science, so that other brain modules can interpret their meaning.

"A core principle of neural codes may be that they are learnable."

Experiments with the salamander retina suggest that these cells generate a learnable code. For 10 neurons, a model that takes all pairwise correlations into account can capture the code. For 100 neurons, pairwise models are superior to independent-activity models but not good enough to account for responses to naturalistic scenes. Computing third- or fourth-order relationships among all neurons is a very difficult computational task, even for modestly sized groups of cells. However, neuron activity creates reliable patterns. A novel "pseudo-likelihood" model, which focuses only on the reliable patterns in activity, is able to identify higher-order interactions. The model was able to learn the functional dependencies and correlations that arise from the activity of 100 retinal neurons. Because the activity of any individual neuron is sparse, and only a few neurons are active at any time, patterns arise and recur frequently enough that interactions between cells can be detected and learned, either by models or by other brain modules. It should be possible to construct a neural codebook of such patterns and, because the brain is organized hierarchically, to treat the brain as a set of very small overlapping modules, each with its own patterns—one way to tackle the potentially overwhelming complexity of relationships between even small groups of cells.

By attending to patterns created by sparse neuronal activity, this model (blue) more accurately predicts actual activity than a pairwise model (red) or a model that uses only independent activity (grey). (Image courtesy of Elad Schneidman)
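
The flavor of such population models can be conveyed with a generic pseudo-likelihood fit of a pairwise (Ising-like) model to binarized spike "words." This is a textbook-style sketch under assumed data shapes, not the reliable-pattern model presented in the talk.

    import numpy as np

    def fit_pairwise_pseudolikelihood(words, n_iter=500, lr=0.1):
        """Fit biases h and symmetric couplings J of a pairwise model to binary
        population 'words' (rows = time bins, columns = cells, entries 0/1) by
        gradient ascent on the pseudo-likelihood: each cell's state is predicted
        logistically from the states of all the others, which avoids the
        intractable partition function of the full model."""
        T, N = words.shape
        h = np.zeros(N)
        J = np.zeros((N, N))
        for _ in range(n_iter):
            field = h + words @ J                   # conditional input to each cell
            p = 1.0 / (1.0 + np.exp(-field))        # P(cell fires | rest of population)
            err = words - p
            h += lr * err.mean(axis=0)
            grad = (words.T @ err) / T
            np.fill_diagonal(grad, 0.0)             # no self-coupling
            J += lr * (grad + grad.T) / 2.0         # keep couplings symmetric
        return h, J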

Neural codes can also be studied in such phenomena as decision-making, said Jonathan Pillow of the University of Texas at Austin. He used a descriptive approach to model activity in the lateral intraparietal [LIP] area of the parietal cortex, which is involved in making visual discrimination decisions about movement. In one experiment, monkeys were trained to saccade left or right in response to a visual image of dots appearing to drift in different directions. Recordings were made from LIP as the monkeys performed the task. Thus far, decision codes in this region have been explored only with single-unit recording. The standard model of behavior for these neurons, drift diffusion, predicts that firing rates ramp up as evidence accumulates for one direction, with higher spike rates reflecting stronger motion. Using a statistical approach, Pillow created a type of generalized linear model that fit the sample data. Applying the model to multineuron recordings, researchers found that correlations between neurons can predict decision-making: the activity of two neurons that both favor the same motion direction becomes strongly correlated just before the saccade.
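
The statistical machinery can be sketched as a Poisson generalized linear model in which each cell's firing rate depends on the motion stimulus and on its own recent spiking. The design matrix and bin sizes below are illustrative assumptions; the model described above was applied to multineuron recordings and is richer than this single-cell version.

    import numpy as np
    from scipy.optimize import minimize

    def build_design_matrix(motion, spikes, n_hist=5):
        """Columns: the motion signal in each bin plus the cell's own spiking in the
        previous n_hist bins (history terms that capture non-Poisson dynamics)."""
        T = len(spikes)
        X = np.zeros((T, 1 + n_hist))
        X[:, 0] = motion
        for k in range(1, n_hist + 1):
            X[k:, k] = spikes[:-k]
        return X

    def glm_negloglik(params, X, spikes, dt=0.01):
        """Negative Poisson log-likelihood (up to constants) with rate = exp(X @ w + b)."""
        w, b = params[:-1], params[-1]
        log_rate = X @ w + b
        return np.sum(np.exp(log_rate) * dt) - np.sum(spikes * log_rate)

    # Hypothetical usage, given binned `motion` and `spikes` arrays:
    # X = build_design_matrix(motion, spikes)
    # fit = minimize(glm_negloglik, np.zeros(X.shape[1] + 1), args=(X, spikes))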

Open Questions

How do neurons represent sensory, cognitive, and motor variables?

How can models account for the complex higher-order interactions between groups of cells?

Speakers:
Richard Andersen, California Institute of Technology
Fred Gage, The Salk Institute for Biological Studies
Tim Behrens, Oxford University
Gyorgy Buzsáki, The Neuroscience Institute, New York University Langone Medical Center
Yadin Dudai, Weizmann Institute of Science
Sebastian Seung, Massachusetts Institute of Technology

Highlights

  • The neural code is long distance as well as local.
  • Connectomics research in the living human brain provides unique information.
  • Neural information is organized on multiple time scales.
  • Large-scale electrical oscillations coordinate the activity of distant cells.
  • Cells that arise during adult neurogenesis are genetically diverse due to retroelements that reintegrate into the genome during differentiation.
  • Cortical neurons involved in goal-directed movement can learn arbitrary coding strategies.

Circuits and plasticity

Studies of brain injury show that cells that normally encode one type of stimulus may be capable of taking on new coding tasks. This plasticity might be exploitable in neural prosthetics: if a group of cortical neurons could be trained to use an arbitrary code to control a brain-machine interface, prosthetic researchers would not be required to first interpret the complex codes naturally used by the brain. Richard Andersen and his group at California Institute of Technology used an arbitrary code to train the parietal reach region, which is specialized for goal-related movements in primates. Findings suggest that this region does seem capable of learning new codes, and that the animal learned these patterns by imagining particular movements associated with the pattern. However, the fact that the activity of non-trained neurons in the region was correlated with the activity of trained neurons suggests that, ideally, codes should not be entirely arbitrary; a better strategy would be to harness natural activity patterns.

How do new cells the brain produces in adulthood become functionally integrated into circuits and contribute to neural plasticity? Fred Gage of the Salk Institute for Biological Studies tracked gene expression in differentiating neural stem cells to explore the role these cells play in the way brain circuits adapt to experience. During a defined window in neuronal differentiation, retroelement sequences increase the genetic diversity of newly formed neurons by amplifying themselves and reinserting their genetic sequences into the genome. "Much to our surprise, neural cells in culture were getting new insertions in DNA of their cells and being mutated, and diversity was being generated," said Gage. The insertions are common—as many as 80–300 per newly born neuron—and can have functional consequences, increasing genetic and perhaps behavioral diversity in the brain and also potentially explaining conditions such as Rett syndrome.

Communication across the brain

Human brain anatomy is different enough from that of non-human primates that noninvasive in vivo techniques are a must.

The neural code is long distance as well as local, said Tim Behrens of Oxford University. On the largest scale, the neural code can be conceptualized as an anatomical problem, because the substrate for brain-wide patterns is the long tracts of white matter that link distant regions. Methods to trace this connectivity in vivo, such as tractography, are low resolution in comparison with invasive techniques but can provide a reasonable approximation of the precision that can be achieved with chemical tracers. Human brain anatomy is different enough from that of non-human primates that noninvasive in vivo techniques like diffusion MRI are a must; such techniques also allow many connections to be measured simultaneously, revealing homogenous zones and boundary regions where connectivity changes sharply. Connections can be related to regional function and linked to other sources of individual variability such as genetics and behavior.

Noninvasive diffusion MRI (dark blue) can depict anatomy at nearly the resolution of invasive chemical tracers (light blue). (Image courtesy of Tim Behrens)
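
The identification of homogeneous zones and sharp boundaries is, at heart, a clustering problem: each voxel gets a connectivity "fingerprint" (its profile of connection strengths to a set of targets), and voxels with similar fingerprints are grouped. The k-means sketch below illustrates the idea under assumed array shapes; it is not the tractography pipeline used in the work described.

    import numpy as np

    def parcellate(fingerprints, k=5, n_iter=50, seed=0):
        """Plain k-means on connectivity fingerprints (rows = voxels, columns =
        connection strengths to target regions). Voxels sharing a label form a
        homogeneous zone; label borders are where connectivity changes sharply."""
        rng = np.random.default_rng(seed)
        centers = fingerprints[rng.choice(len(fingerprints), k, replace=False)].astype(float)
        for _ in range(n_iter):
            dists = np.linalg.norm(fingerprints[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            for c in range(k):
                if np.any(labels == c):              # skip empty clusters
                    centers[c] = fingerprints[labels == c].mean(axis=0)
        return labels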

How do distant populations of neurons coordinate their activity? Large-scale electrical oscillations may create a "syntax of the brain," suggested Gyorgy Buzsáki of NYU Langone Medical Center. Oscillations on different time scales are organized hierarchically, in such a way that slow oscillations modulate the power of faster rhythms. Self-organizing cell assemblies are segregated by phase in the theta cycle, the slow neuronal rhythm characteristic of activity in the hippocampus. This separation is enforced through the activity of inhibitory neurons. Phase precession, the systematic relationship between spike phase and the animal's position, may explain how neurons encode location. In one experiment, optogenetic silencing of an inhibitory interneuron type that targets the perisomatic region of mouse hippocampal place cells had no effect on spike frequency but changed the phase of spike activity. A drug that changes the fine-scale relative timing (synchrony) of the activity of pyramidal cells and interneurons—but does not change their firing rates or behavior over long time scales—interferes with hippocampal theta rhythms and is associated with a decrement in memory performance in the rat. The implication is that when phase segregation is impaired, information deteriorates.
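
The claim that slow oscillations modulate the power of faster rhythms can be quantified with a phase-amplitude coupling measure. The sketch below computes a Tort-style modulation index from a field-potential trace; the band edges and bin count are illustrative choices, not parameters from the studies described.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def phase_amplitude_coupling(lfp, fs, slow=(4, 10), fast=(30, 80), n_bins=18):
        """Modulation index: how strongly fast-band amplitude depends on slow-band phase
        (0 = no coupling, 1 = amplitude confined to one phase bin). Assumes a recording
        long enough that every phase bin is populated."""
        def bandpass(x, lo, hi):
            b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        phase = np.angle(hilbert(bandpass(lfp, *slow)))    # e.g., theta phase
        amp = np.abs(hilbert(bandpass(lfp, *fast)))        # e.g., gamma envelope
        edges = np.linspace(-np.pi, np.pi, n_bins + 1)
        bins = np.digitize(phase, edges) - 1
        mean_amp = np.array([amp[bins == k].mean() for k in range(n_bins)])
        p = mean_amp / mean_amp.sum()
        return (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)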

Memory requires information to be encoded, transformed into long-term storage or "consolidated," and made available for later recall, processes described by Yadin Dudai of the Weizmann Institute of Science. At the local level, memory consolidation happens as the weights of individual synapses change, a local subroutine of the larger-scale consolidation that happens at the systems level. According to the standard theory, when event memories (episodic memories) are consolidated, information is initially processed by both the hippocampus and the neocortex but is ultimately maintained only in the cortex. Trace transformation theory, in contrast, posits that episodic memories remain in the hippocampus while semantic elements—verbal descriptions—are encoded in the cortex. In tests, people were shown documentaries and then asked weeks or months later either to describe (semantic) or to retrieve (episodic) what they saw. With time, the richness and detail of the memories declined. Even after months, the hippocampus was still activated by retrieval tasks but was uninvolved in descriptions. The gradual transformation of episodic memory into semantic memory suggests that semantic memory may require fewer resources or less online processing.

The retinal connectome

Exploring cell-type diversity is another way to understand the brain's large-scale organization, but the definition of a cell type is still unclear and existing catalogs are incomplete, said Sebastian Seung of MIT; anatomy, genetics, physiology, and development must all be considered. The retina, traditionally considered to have five cell types, now looks to have more than fifty, each of which may have a distinct computational character. To explore the wiring diagram of the retina, Seung's group began with Cajal's idea that stratification is an index of connectivity—imaging retinal cells with confocal microscopy and using algorithm-guided computerized tracing techniques to reconstruct ganglion cells.

The team looked for cells that co-stratify with JAM-B-expressing retinal ganglion cells to identify candidate presynaptic partners. Serial electron microscopy was used to reconstruct cells in the retina, with individual branches traced by artificial intelligence and then corrected through a crowd-sourced science project called Eyewire, which employs online volunteers to identify cell boundaries. This is a huge project, which along with others like it will demand "exponential" innovation, said Seung: "It will require the combination of machines and crowds. Crowds can help the machine learning be better."

Open Questions

Are natural or artificial codes more effective for controlling brain-machine interfaces?

Why do episodic memories become semantically encoded over time?

How many cell types are there in the brain?

Speakers:
Sheila Nirenberg, Weill Medical College of Cornell University
Leigh R. Hochberg, Brown University; Providence VAMC; Massachusetts General Hospital/Harvard Medical School
Rahul Sarpeshkar, Massachusetts Institute of Technology

Highlights

  • A temporal correlation code derived from the activity of ganglion cells greatly improved the performance of a retinal prosthetic.
  • An investigational motor prosthetic allowed one paralyzed patient to precisely manipulate a robotic arm.

The code in action

Cracking neural codes has direct practical applications, such as the development of more effective neural prosthetics. Improving sensory prosthetics like artificial eyes and ears is often considered a bandwidth or resolution problem, but accurately capturing the right code provides a bigger performance boost, as Sheila Nirenberg of Weill Medical College of Cornell University pointed out.

The retina is a good location to explore codes: cells function without feedback from the brain and most or all cells can be sampled simultaneously. Activity recordings were made from living animals and from retinas in vitro to distinguish between theoretical coding strategies. The alternatives included a coarse spike count code representing information simply by the number of action potentials, a spike timing code dependent on the number of spikes per unit time, and a temporal correlation code carrying the information in a time-dependent spike pattern. The temporal correlation code gave the best account of observed performance. Using such a code in a novel retinal input/output algorithm (based on linear/nonlinear cascade models) and incorporating the algorithm into a retinal prosthetic dramatically improved the device. Nirenberg further showed that by combining the correct code with a high-quality transducer, such as channelrhodopsin, completely blind retinas can be made to produce normal output, opening the door to the possibility of prosthetic devices that can produce normal or near-normal vision.

With a prosthetic that uses an information-dense neural code involving the temporal correlations of spikes (second row), ganglion cells in completely blind retinas (even retinas with no photoreceptors at all) respond essentially normally to natural scenes. (Image courtesy of Sheila Nirenberg)
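
The input/output algorithm belongs to the linear/nonlinear cascade family mentioned above. A minimal linear-nonlinear-Poisson sketch of that family is shown below; the filter, nonlinearity, and gain are illustrative stand-ins rather than the fitted parameters of the actual prosthetic.

    import numpy as np

    rng = np.random.default_rng(0)

    def ln_encoder(stimulus, linear_filter, dt=0.001, gain=5.0):
        """Linear-nonlinear-Poisson sketch of a ganglion cell's input/output transform:
        filter the stimulus, pass the result through a static nonlinearity to get a
        firing rate, then draw Poisson spike counts."""
        drive = np.convolve(stimulus, linear_filter, mode="full")[:len(stimulus)]
        rate = gain * np.logaddexp(0.0, drive)     # softplus keeps the rate nonnegative
        return rng.poisson(rate * dt)              # spike counts per time bin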

Motor prosthetics that allow paralyzed people to manipulate objects or control a computer are potentially the most spectacular uses of the neural code. Leigh Hochberg of Brown University, Providence VAMC, and Massachusetts General Hospital/Harvard Medical School described trials of BrainGate, a system that pairs a chronically implanted 100-microelectrode array in the motor cortex with a neurally controlled robotic arm. Seven patients have received the device. Most recently, a paralyzed patient who had survived a stroke 14 years earlier was able to use the system to lift a cup of coffee and drink from it. A related project uses an electrode array to detect impending epileptic seizures, which are thought to result from hypersynchrony of cortical neurons.

One major hurdle in neural prosthetics is the need for power, a demand that will grow as devices incorporate coding, said Rahul Sarpeshkar of MIT: "The more complex the algorithm, the more energy required to communicate." The natural auditory system solves the power problem through selective sampling, whereby high-energy channels are sampled finely and other channels are sampled more coarsely. This principle has been used to develop a cochlear implant that can encode more information at lower power, including the phase information that is essential for music perception. The processor uses an algorithm called asynchronous interleaved sampling, which permits lower average stimulation rates and therefore requires less energy to operate. In the future, devices such as this might be powered with glucose from the cerebrospinal fluid, circumventing the need for batteries.
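
The selective-sampling principle, though not the asynchronous interleaved sampling algorithm itself, can be illustrated with a toy channel-selection rule: in each frame the few highest-energy frequency bands are stimulated frequently while the rest are updated coarsely, so the average pulse rate, and hence the power budget, stays low.

    import numpy as np

    def selective_sampling_rates(band_energies, base_rate=250.0, fine_factor=4.0, n_fine=4):
        """Toy selective-sampling rule: the n_fine highest-energy bands are stimulated
        fine_factor times more often than the rest, so most channels run at a low rate
        and the average stimulation rate (and power) stays small."""
        rates = np.full(len(band_energies), base_rate, dtype=float)
        top = np.argsort(band_energies)[-n_fine:]      # highest-energy channels
        rates[top] *= fine_factor
        return rates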

This session demonstrated the practical reasons to study the neural code; in the panel discussion that followed, the day's speakers explored its intellectual promise. Already, pointed out Buzsáki, we can precisely read out patterns of activity and distinguish between normal and pathophysiological patterns. We have begun to understand causality in the brain as the result of combinations of events and probability distributions—a more sophisticated approach than looking for a predictable, one-to-one correspondence between a stimulus feature and response of a neuron. These new abilities, which make use of convergent revolutions in data capture, genomics, informatics, and electronics, may begin to provide us with the tools to understand the brain in its own language.