What’s in a model?

We’re less than halfway through 2016, but well over 10,000 studies about neuroscience have already been published in scientific journals. Each year, millions of dollars are poured into neuroscience research, and highly publicized brain initiatives from both the US government and the EU have brought even more money and prestige to this rapidly growing field. So how much has all of this attention changed our understanding of the brain? According to a new study by Eric Jonas and Konrad Kording, not very much.

Their paper was uploaded to the preprint server bioRxiv late last week. I’ve included a brief summary of preprint servers below this article, but for now, suffice it to say that the paper is a more or less finished scientific product that hasn’t yet passed through peer review, the main vetting tool for gauging the validity of new scientific research. Taking as their example Yuri Lazebnik’s now-classic paper about a biologist trying to understand a radio, Jonas and Kording ask whether a neuroscientist, using all of the modern tools of her trade, would be able to glean any insights into the workings of a microprocessor, the compact chip that forms the basis of modern computers. A microprocessor essentially consists of a series of individual transistors wired together into a circuit; Jonas and Kording argue that this architecture is largely analogous to the way neurons wire together to form circuits in the brain, making a microprocessor a fitting subject for the research methods of neuroscience.

The premise of their study is well founded: if all of our best tricks are insufficient to solve a system as simple as a microprocessor—and a system that’s already well understood by engineers, to boot—then how can we ever hope to comprehend the far more complex workings of the human brain? However, their paper contains both factual and analytical errors that ultimately undermine their main point. Without going into every facet of their arguments, I’ll highlight a few of the main ideas covered in their paper.

 

Can breaking the brain teach us about its function?

Jonas and Kording begin their neuroscientific examination of a microprocessor with a lesion study: in essence, they take out individual components of the microprocessor and see what happens. The basic idea behind these kinds of experiments (a long-time staple of research in neuroscience and in biology more generally) is that we can mess up one part of a biological system, then look to see what that system can no longer do. In theory, this approach should give us some idea of what that part normally does. Consider what would happen if you took the blade out of your blender and tried to make a smoothie: your smoothie ingredients would remain intact, and you could logically conclude that the blade is a necessary component of the blender’s smoothie-making function. Lesion studies can also teach us important things about the functions a given component isn’t important for: taking the blade out of your blender won’t stop the motor from whirring away as usual, which suggests something about the specific contribution the blade makes to the process (namely, it chops things up).
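To make that logic concrete, here’s a minimal sketch of a lesion experiment in code, using a toy network invented purely for illustration (it stands in for the brain, the blender, or the chip, as you like). The idea is simply to silence one unit at a time and ask which outputs degrade; none of this is taken from Jonas and Kording’s actual analysis.

```python
# Toy lesion experiment: knock out each hidden unit of a made-up network
# and see which "behaviors" change. Everything here is invented for
# illustration; it is not anyone's real data or model.
import numpy as np

rng = np.random.default_rng(0)

W_in = rng.normal(size=(8, 4))     # input -> hidden weights
W_out = rng.normal(size=(2, 8))    # hidden -> two output "behaviors"
stimuli = rng.normal(size=(100, 4))

def run_network(lesioned_unit=None):
    """Run the toy network, optionally silencing one hidden unit."""
    hidden = np.maximum(W_in @ stimuli.T, 0)   # ReLU "firing rates"
    if lesioned_unit is not None:
        hidden[lesioned_unit, :] = 0.0         # the "lesion"
    return W_out @ hidden                      # shape: (2, n_stimuli)

baseline = run_network()
for unit in range(8):
    lesioned = run_network(lesioned_unit=unit)
    # How much does each behavior change when this unit is removed?
    deficit = np.abs(lesioned - baseline).mean(axis=1)
    print(f"unit {unit}: behavior deficits = {np.round(deficit, 2)}")
```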

After their microprocessor lesioning studies, Jonas and Kording conclude that “Even if we can lesion each individual transistor, we do not get much closer to an understanding of how the processor really works.” In neuroscience, however, lesion studies have been remarkably successful at improving our understanding of many aspects of brain function, including topics as complex as language and narcolepsy.

One of the most famous cases in which lesion studies improved our knowledge of neuroscience comes from the patient HM, who radically changed our understanding of the neurobiology of memory. As a young man, HM had both of his hippocampi (the plural of hippocampus – you have one on each side of your brain) removed as a treatment for epilepsy. As a result, he developed severe anterograde amnesia, totally losing the ability to form new memories.

At the time, scientists knew very little about how memory worked. Are any specific regions of the brain important for either memory formation or retrieval? Are all forms of memory stored in the brain in the same way, or are memories of things like conversations with friends different from memories of how to ride a bike? Because HM totally lost the ability to form new memories of his activities after losing his hippocampi, scientists learned that these structures are necessary for memory formation. However, in spite of his total inability to recall any new events in his life, HM was still able to improve at physical tasks, which gave scientists the first evidence that so-called “episodic” memories (that is, memories of our first-person experiences) form independently of other kinds of memories, especially physical skills like riding a bike or playing the piano.

Of course, lesion studies are not perfect, and as Jonas and Kording rightly point out, the fact that disrupting one part of a biological system changes that system’s function doesn’t necessarily indicate that the part you broke is key to the function that was disrupted. Going back to our blender analogy, cutting the blender’s power cord will also cause it to produce sub-par smoothies, but that hardly indicates that the power cord has a specific smoothie-making function. In part, this problem can be circumvented by looking at the functions that aren’t disrupted after the lesion, as we discussed above: when you cut the power cord, the whole thing stops working; when you remove the blade, the only thing that goes awry is the blender’s ability to chop up fruit. Neuroscientists also combat this dilemma by relying on a combination of lesion and activation studies: if disrupting a part of the brain causes a particular behavior to go away, stimulating that same area should be able to induce that behavior on command. In fact, neuroscientists have used this technique to identify parts of the brain that are important for many functions, including waking from sleep, memory retrieval, and aggression.

Another potential issue with lesion studies is their coarseness: our ability to understand how single neurons—or even small groups of neurons—function would be severely limited if we lacked the ability to target finely defined parts of the brain. Today, neuroscientists use a number of techniques to both lesion and stimulate only specific cell types in specific parts of the brain. Although Jonas and Kording claim that tools for targeting specific neurons are “only now becoming possible with neurons in simple systems,” the papers they cite in support of this claim in fact rely on techniques that are over two decades old, and lesioning single neurons has been routine for even longer in other systems.

 

Tuning into the world: discovering how the brain represents sensory stimuli

To navigate the outside world successfully, the brain somehow needs to transform the environment it inhabits into meaningful neural representations. Engineers face a similar problem when building digital cameras to capture a visual scene, or designing microphones to capture sound. The brain, however, needs to function as both a camera and a microphone simultaneously, absorbing many different kinds of sensory information. It then uses that information to identify objects, recognize familiar faces, and guide difficult decisions. Unsurprisingly, the complexity of this enterprise also makes it very difficult for neuroscientists to study how the brain performs these tasks.

One key tool neuroscientists use to study sensory processing is called a tuning curve. In essence, tuning curves are a map of the sensory features each neuron “likes”: this brain cell fires in response to this thing but not that, while its neighbor fires in response to that thing but not this. Using this logic, neuroscientists have identified some cells that respond to very specific visual features, like bars that are slanted in a certain way, or moving in a certain direction. The idea is that these kinds of basic features can be combined and built up to form pictures of objects themselves.
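For the curious, here’s a minimal sketch of how a tuning curve might be estimated in practice: bin the stimuli by some feature (orientation, in this made-up example) and average the neuron’s responses within each bin. The “neuron” below is simulated with an arbitrary preferred orientation; in a real experiment, the spike counts would come from recordings.

```python
# Estimating a tuning curve: average a neuron's responses within bins of
# a stimulus feature (here, the orientation of a bar). The data are
# simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Simulate an orientation-tuned neuron that "likes" 45-degree bars
orientations = rng.uniform(0, 180, size=2000)   # stimulus on each trial
preferred, width = 45.0, 20.0
mean_rate = 5 + 30 * np.exp(-0.5 * ((orientations - preferred) / width) ** 2)
spike_counts = rng.poisson(mean_rate)           # noisy spike counts

# The tuning curve: mean response in each orientation bin
bins = np.arange(0, 181, 15)
bin_ids = np.digitize(orientations, bins) - 1
tuning_curve = [spike_counts[bin_ids == i].mean() for i in range(len(bins) - 1)]

for lo, hi, rate in zip(bins[:-1], bins[1:], tuning_curve):
    print(f"{lo:3.0f}-{hi:3.0f} deg: {rate:5.1f} spikes")
```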

Beyond these simple features, neuroscientists have also identified brain cells that respond to more abstract kinds of visual cues, like a cell that responds only to pictures of Jennifer Aniston’s face, but not to the faces of other celebrities. At an intermediate level, other neurons respond only to specific aspects of faces, like the height of a face or the distance between the eyes. Neuroscientists have therefore made great strides in identifying how different brain areas and brain cells are primed to pay attention to specific features of the visual world. Given more time, I’m optimistic that scientists will continue to connect the dots to determine how these brain areas interact to create our nuanced, context-rich experiences.

Scientists have also used tuning curves to gain insight into more complicated aspects of brain function. For many years, it was thought that we respond to sounds less while we sleep largely because a part of the brain called the thalamus shuts out sensory input during sleep. The thalamus is like a way station for many kinds of sensory information. Neural signals about sounds, for example, pass from your ears through the thalamus and up to the auditory cortex, where the brain can identify specific noises and decide whether those sounds merit any kind of response. Thus, if the thalamus keeps neural signals about sounds from passing from your ears to the auditory cortex, you won’t respond to any noises.

We now know that this description can’t be the whole reason we don’t respond to noises while we sleep, in part thanks to tuning curves: when researchers compared how cells in the auditory cortex responded to sounds in awake and sleeping animals, they found that cells that responded to sounds during wakefulness didn’t necessarily stop responding during sleep, indicating that much of that sound information still makes it through the thalamus.

Arguably, tuning curves work best when applied to dedicated sensory areas of the brain. This initial identification of sensory brain regions can happen in many different ways. Modern scientists will often use something like functional magnetic resonance imaging (fMRI) to identify a part of the brain that seems to respond to something like faces or other objects. They can then go on to do more detailed recordings of individual cells in that area to identify their specific responses to various kinds of stimuli.

Unfortunately, Jonas and Kording largely ignore this tradition of experimentation when designing a tuning curve experiment to use on their microprocessor. Instead of focusing on an area with some known sensory function, they simply plot how the activity of each component of the microprocessor varies with the luminance of their screen. Unsurprisingly, this test fails to yield any useful information. Jonas and Kording do concede that the tuning curve approach may be “more justified for the nervous system,” but nonetheless caution that tuning curves’ overall utility may be dampened by the “dazzling heterogeneity of responses” within a given brain region. This observation, however, totally misses the point: the heterogeneity of responses within a given brain region is precisely the reason neuroscientists map out the tuning curves of individual neurons, rather than relying on coarser tools like fMRI to localize brain functions to broader areas of the brain.

 

Can a microprocessor teach us about the brain?

By employing techniques widely used in neuroscience, a researcher interested in understanding how a microprocessor works should—at a minimum—be able to (a) map the complete circuit of the microprocessor, identifying every transistor and determining which transistor talks to which; (b) determine how each transistor will respond to various combinations of input signals; and (c) examine how various states of the microprocessor correlate with its output. We’ll assume that we also have full access to collaborators in statistics, engineering, computer science, and physics to help out with the computational bits, as is frequently the case in real-world neuroscience studies. If, once all of this information is put together, the functional circuit diagram is built, and our most computationally minded colleagues have been consulted, we still can’t figure out how the darned thing works, we’re definitely in trouble.

The microprocessor analogy works best as a test case for new computational tools one might want to try out on neural data. If you record the state of every component of a microprocessor and align those states with its output, you would hope there would be some way to take that information and turn it into useful knowledge about how the microprocessor functions. This is essentially the tack taken by many forward-looking neuroscientists: record the activity of every neuron (especially in simpler organisms, like worms or small fish; for larger creatures, one usually has to settle for recording only a subset of brain cells) and try to correlate that activity with something the brain is doing at the time.
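As a rough illustration of what “record everything and correlate it with behavior” can look like, here’s a short sketch using entirely simulated data: activity from many units, a behavioral signal, a check of which units correlate with that signal, and a simple linear decoder fit to the whole population. It’s a cartoon of the approach, not anyone’s actual analysis pipeline.

```python
# Record-everything-and-correlate, in miniature. All data are simulated;
# in a real study the activity matrix would come from neural recordings.
import numpy as np

rng = np.random.default_rng(2)

n_units, n_timepoints = 50, 5000
activity = rng.normal(size=(n_timepoints, n_units))   # "recordings"

# Pretend the behavior is driven by a handful of the recorded units
true_weights = np.zeros(n_units)
true_weights[:5] = rng.normal(size=5)
behavior = activity @ true_weights + rng.normal(scale=0.5, size=n_timepoints)

# 1. Which individual units correlate with the behavior?
correlations = np.array([np.corrcoef(activity[:, i], behavior)[0, 1]
                         for i in range(n_units)])
print("most correlated units:", np.argsort(-np.abs(correlations))[:5])

# 2. How well does the whole population predict the behavior? (least squares)
weights, *_ = np.linalg.lstsq(activity, behavior, rcond=None)
predicted = activity @ weights
r_squared = 1 - np.var(behavior - predicted) / np.var(behavior)
print(f"population decoder R^2: {r_squared:.2f}")
```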

More generally, Jonas and Kording suggest that a microprocessor might be used as a benchmark to test the efficacy of neuroscience techniques: “We also want to suggest that it may be an important intermediate step for neuroscience to develop methods that allow understanding a processor… Unless our methods can deal with a simple processor, how could we expect it to work on our own brain?”

In this sense, they invite us to think of a microprocessor as a new kind of model system in neuroscience. Most often, scientists employ model systems—usually animals like mice, worms, or flies—as simpler, more ethical platforms in which to investigate brain function (as opposed to diving in and using our most invasive methods on humans or other primates). Jonas and Kording, then, suggest that a microprocessor might work as a model system not for testing our theories about the brain, but for testing techniques in neuroscience itself: if these techniques can’t recapitulate knowledge we already have about a system as simple as a microprocessor, can we expect those techniques to reveal anything novel or true about even simple nervous systems, like those in insects?

This line of questioning leads us to a more general inquiry: how can we tell whether a field is moving in the right direction, making progress as quickly as we’d like? We could fall back on the old standby of American polling and ask scientists whether they think their field is “moving in the right direction” or “on the wrong track.” I’m skeptical that such a poll would reveal anything useful, though, and we’re all but guaranteed to find more than one scientist who claims that “everyone in my field is on the wrong track but me” – not a particularly enlightening response.

Instead, the progression of science is usually measured by its ability to build on itself and develop functioning technologies. We know that relativity is true, in part because GPS satellites wouldn’t work without it. We know that calcium levels are important to neural signaling, in part because we can watch calcium levels go up and down in specific brain cells as those cells become involved in some particular task. Moreover, by assuming that calcium is important for neural coding, we’ve been able to build tools for studying and manipulating the nervous system that have themselves revealed new information about how the brain works.

On some level, we are all fundamentally limited in our ability to know the world, and scientists are no exception: every analysis and experiment is predicated on certain assumptions and experimental limitations that can affect the overall interpretation of experimental results. As such, the marker of scientific progress isn’t single experiments or single papers, but the ability of those experiments to be replicated and built upon over time. And as a field, neuroscience has been successfully building on itself for well over a century.

Of course, it’s still useful to check in every now and then to see if we can find any major gaps in available techniques or lines of inquiry. After all, hubris is no less dangerous for scientists than it was for heroes of Greek tragedies. By Jonas and Kording’s account, however, many techniques that have proved fruitful in neuroscience seem to yield little to no meaningful insight into the workings of a microprocessor; perhaps a microprocessor and a brain are too different to form a useful point of comparison, after all.

Instead, maybe we should be looking at a simple biological system, something like a worm. The common lab animal C. elegans is a species of roundworm with only 302 neurons, and we’ve known how each of these neurons connects to every other since the mid-1980s. Many well-funded laboratories full of brilliant people are working long hours to figure out what makes these small animals tick, but we’re still far from having these little creatures figured out. Perhaps a neuroscientist can solve a microprocessor after all, but maybe we should be asking a different question: can a neuroscientist solve a worm?

 

A brief note on preprint servers:

Preprint servers (most notably arXiv for papers in math and physics, and its biosciences equivalent, bioRxiv) are places where scientists can upload finished papers before submitting them to the arduous process of peer review. Preprint servers allow scientific findings to be disseminated much more rapidly than they would be on traditional publishing platforms (papers go online on preprint servers immediately, while the traditional publishing process can sometimes drag on for years), and because preprint servers are open access, the information in new studies can be read widely, even without expensive subscriptions to scientific journals. On the other hand, papers on preprint servers—like this one—can also be picked up by science writers and bloggers, who tout the arrival of a “new scientific paper” while failing to inform their readers that the paper has yet to be thoroughly vetted by scientific editors and impartial scientific experts. Of course, all scientists do their best to ensure that the content of their papers is accurate before posting it to a preprint server, but everyone misses things from time to time, and editors and scientific peers can be invaluable for catching these errors.

Ernst Haeckel's Elegant Universe