Systems Neuroscience and Brain Circuitry
Wednesday, January 28, 2009
Presented by the Systems Biology Discussion Group
What are the Goals of Systems Neuroscience?
Perhaps the foremost challenge for the emerging discipline of Systems Biology, and even more so for Systems Neuroscience, is to identify its object of study. Traditionally, the natural sciences have been defined around a certain number of levels of abstraction: particle physics, condensed matter physics, cosmology, molecular biology, cell physiology, psychology, ecology. Even for these time-honored distinctions, the boundaries between fields are bridged by inextricable links, such as those between high-energy physics and cosmology, or between bacteriology and ecology. But with the advent of high-throughput experimental technology and high-performance computation, biologists in general and neuroscientists in particular are confronted with a new reality: the availability of massive data on neural systems whose behavior cannot simply be decomposed into independent entities, i.e., "hydrogen atoms". The challenge that researchers face is, therefore, how to describe, conceptualize, and eventually manipulate these systems while taking into account their collective nature. We will open a conversation with leading neuroscientists about their own research and how it pertains to the plausible goals of Systems Neuroscience.
Chair and Organizer: Guillermo Cecchi, IBM T.J. Watson Research Center
Speakers: Dmitri Chklovskii, Howard Hughes Medical Institute (Janelia Farm); Roger D. Traub, IBM T.J. Watson Research Center; Jonathan D. Victor, Weill Cornell Medical College
What Determines Neuronal Shape?
Dmitri Chklovskii, Howard Hughes Medical Institute (Janelia Farm)
The human brain is a network containing a hundred billion neurons, each communicating with several thousand others. Because the wiring for neuronal communication draws on limited space and energy resources, evolution had to optimize their use. This principle of minimizing wiring cost explains many features of brain architecture, including the placement and shape of many neurons. However, the shapes of some neurons and their synaptic properties remained unexplained. This led us to a second principle: maximization of the brain's ability to store information, which can be expressed as maximization of entropy. The combination of the two principles, analogous to the minimization of free energy in statistical physics, provides a systematic view of brain architecture, necessary to explain brain function.
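The combination of cost minimization and entropy maximization can be written schematically as a single free-energy-like objective. The notation below is an assumption of this summary, chosen to illustrate the analogy with statistical physics, and is not taken from the talk itself:

```latex
% Hypothetical schematic, not the speaker's exact formulation.
% C(w): wiring cost of a connectivity/layout w
% S(w): entropy term, measuring information-storage capacity
% \lambda: trade-off parameter, playing the role of temperature T
\min_{w} \; F(w) \;=\; C(w) \;-\; \lambda\, S(w)
% Compare the free energy of statistical physics: F = E - T S.
```

Minimizing wiring cost alone corresponds to the first principle; the entropy term penalizes architectures that save wire at the expense of storage capacity, with λ setting the balance between the two.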
Fast Network Oscillations in the Brain
Roger D. Traub, IBM T.J. Watson Research Center
Network oscillations at 100 Hz and above occur in the cerebral cortex and are, at least sometimes, a prelude to an epileptic seizure. These very fast oscillations belong to a family of network behaviors in the brain that are generated solely, or primarily, by the coupling together of principal (excitatory) neurons by gap junctions. I will present experimental and simulation data that support these novel ideas, and discuss some of the implications for understanding epilepsy.
Understanding the Computations in Primary Visual Cortex: Does Tweaking the Standard Model Suffice?
Jonathan D. Victor, Weill Cornell Medical College
A central problem in systems neuroscience is to understand the nature of cortical computations and how they are implemented. Primary visual cortex is an excellent model system for addressing these questions, since its inputs are readily controlled and its anatomy is well understood. Most studies have suggested that neurons of primary visual cortex can be modeled as a bank of feedforward filters followed by simple nonlinearities. However, here we present evidence of widespread and dramatic differences between the computations performed by real cortical neurons and those of models based on a feedforward cascade. These differences suggest that a strongly recurrent network is an appropriate basic framework for understanding cortical computations.
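The "standard model" referred to above, a bank of feedforward filters followed by simple static nonlinearities, is often called a linear-nonlinear (LN) cascade. The sketch below is a minimal toy illustration of that idea; the filter shapes, the half-wave-rectifying nonlinearity, and all parameter values are assumptions for demonstration, not taken from the abstract:

```python
import numpy as np

def ln_response(stimulus, filters, threshold=0.0):
    """Linear-nonlinear (LN) cascade: project the stimulus onto each
    filter (linear stage), then apply half-wave rectification
    (static nonlinearity). Hypothetical illustration only."""
    drive = filters @ stimulus                 # one scalar drive per filter
    return np.maximum(drive - threshold, 0.0)  # rectified firing rates

# Toy example: 3 random filters over a 16-sample stimulus.
rng = np.random.default_rng(0)
filters = rng.standard_normal((3, 16))
stimulus = rng.standard_normal(16)
rates = ln_response(stimulus, filters)
```

In such a model the response to any stimulus is fully determined by the feedforward projections; the abstract's point is that real cortical neurons deviate from this picture in ways that suggest strong recurrence.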