Imitate Your Way to the Top

Using examples and hints, a new algorithm provides a simple and efficient method for constructing models of neural circuits.


When stumped by a difficult question, struggling students might resort to two strategies. First, they might ask for a hint to make the problem simpler. If they still don’t know the answer, they’ll copy it from a neighbor. In the February issue of PLoS ONE, my colleagues and I co-opt these two strategies to develop a simple method for constructing artificial neural networks.

Some may wonder: Why make these models at all? The answer is that they provide a map for navigating the complex jungle of biology. Directly observing a complete neural circuit whose activity is responsible for a particular behavior is challenging. Instead, theoretical neuroscientists construct models of neural circuits that perform analogous operations. These models aid experimental efforts by codifying our knowledge about how neural systems work and testing assumptions about how they might work. In some cases, they can even rule out entire classes of solutions before an experimentalist goes hunting for them. Models serve as a repository of our knowledge and assumptions, guiding future explorers on their journey.

Conventionally, constructing a model of a neural circuit involves selecting its various components, such as model neurons and their connections, and applying a rule to modify the connections so that the output of the system matches a desired output. Building highly interconnected (‘recurrent’) neural networks can be especially challenging because simultaneously modifying the connections of every neuron in the network can sometimes lead to unintended consequences.

The highly popular backpropagation approach trains neural networks by running an optimization routine that cleverly searches for the individual neuron activity patterns that can collectively generate the desired output. Although impressive, this method is perhaps too clever to be implemented by real neurons. Rather than searching, our new method, called full-FORCE learning, explores whether a more rudimentary strategy — making an educated guess — can be harnessed for learning. (The approach grew out of a previous method called ‘FORCE learning’, developed by David Sussillo and Larry Abbott, both investigators with the Simons Collaboration on the Global Brain.)

The method’s success rests on a simple idea: Build a system that can perform an operation by copying another system that is already doing it. At first glance, this seems strange — if you already have a system that works, why build another? The key is that the source system does not really produce the desired output itself. Rather, it ‘cheats’ by receiving the desired output as an input. When the units of this ‘cheater network’ are recurrently connected with a random set of synapses, the network’s activity provides a ‘guess solution’ that tends to work for many tasks of interest.

The guess solution tends to work because of two assumptions. First, if you give a network a signal for free — the ‘cheat input’ — chances are that it can generate that signal as an output. This is fairly intuitive: If I give you a dollar, you should be able to provide one to someone who asks. Second, randomly wiring up neurons in the network should compensate for any mild distortions of the cheat input that might make it harder for the network to produce the output. We don’t know for sure that these assumptions generate a good guess, but they tend to.
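
For readers who want to see the idea concretely, here is a minimal NumPy sketch of such a ‘cheater’ network, assuming a standard rate model in which the desired output is injected back as an input. The network size, time constants and toy sine-wave output are illustrative choices, not values from the paper.

```python
import numpy as np

# A minimal sketch of the 'cheater' network: a random recurrent rate network
# that receives the desired output f(t) as an input (the 'cheat').
# Model: tau * dx/dt = -x + J @ tanh(x) + u_in * f(t).
rng = np.random.default_rng(0)
N, T, dt, tau = 300, 2000, 0.001, 0.01            # illustrative sizes and time constants
g = 1.5                                            # strength of the random recurrent coupling

J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # fixed random recurrent connections
u_in = rng.uniform(-1, 1, N)                        # random input weights for the cheat signal

t = np.arange(T) * dt
f = np.sin(2 * np.pi * 2.0 * t)                     # toy desired output: a 2 Hz sine wave

x = 0.1 * rng.standard_normal(N)
cheater_rates = np.zeros((T, N))
for i in range(T):
    r = np.tanh(x)
    cheater_rates[i] = r
    x += dt / tau * (-x + J @ r + u_in * f[i])      # the desired output is injected as input

# Because the network literally receives f(t), a linear readout of its rates
# can reproduce f(t) almost trivially.
w_out = np.linalg.lstsq(cheater_rates, f, rcond=None)[0]
```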

The cheater network (top) receives the desired output as an input. Combined with strong, random recurrent connections (black), this input allows the network to produce the desired output trivially through a set of learned output connections (red). The imitator network (bottom) does not receive an external input. Instead it must copy the activity of every neuron in the cheater network (one example is shown) by modifying its recurrent connections (red) so that it can produce the desired output. Credit: Brian DePasquale

The full-FORCE method examines the activity patterns of the cheater network and uses them to instruct a second, essentially identical network to ‘imitate’ these patterns internally, without the help of the input signal. The imitator network absorbs the cheat input into its internal dynamics by learning to generate the activity pattern of every neuron in the cheater network. This allows the imitator network to generate the desired output and perform the task on its own.
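
Continuing the toy example above, the imitation step might look roughly like this. The paper trains the imitator with an online learning rule; the sketch below stands in for that with a single batch least-squares fit, purely for illustration: the imitator’s recurrent weights are chosen so that the drive each of its neurons generates internally matches the total drive (recurrent plus cheat input) that the corresponding cheater neuron received.

```python
# The drive each cheater neuron received: recurrent input plus the cheat input.
target_drive = cheater_rates @ J.T + np.outer(f, u_in)        # shape (T, N)

# Fit the imitator's recurrent weights J_im so that, if its rates match the
# cheater's, the drive it generates internally matches target_drive.
# (This one-shot batch fit is a stand-in for the online learning used in practice.)
J_im = np.linalg.lstsq(cheater_rates, target_drive, rcond=None)[0].T

# Test: run the imitator on its own, with no cheat input, and read it out with
# the same output weights learned from the cheater's rates.
x = 0.1 * rng.standard_normal(N)
output = np.zeros(T)
for i in range(T):
    r = np.tanh(x)
    output[i] = w_out @ r
    x += dt / tau * (-x + J_im @ r)                 # no external input here
```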

The cheater and imitator networks are indistinguishable with respect to the activity of any neuron, and their outputs are identical; they differ only in their recurrent connections. The difference between them is like the difference between a human-driven car and an autonomous one: To learn how to drive, the self-driving car needs to watch the human-driven car. Though both cars appear to be doing the same thing, the causes of their actions are different.

The cheating model plays a critical role in our method by establishing a set of activity patterns that form a representation of the network output. A natural question arises when we consider this representation: Is it a good one? That is, is it easy for the imitator network to copy it?

As described thus far, the cheater network receives only the desired output as its cheat input. But perhaps we can apply extra signals, or hints, to the cheater network that make it easier for both the cheater and imitator networks to produce the desired output. These hints work because they induce internal activity patterns in the cheater network that are easier for the imitator network to learn. We explored this possibility and found that, quite to our surprise, applying fairly intuitive hint signals could transform a foundering network into a star student.
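
In the same toy setup, a hint is simply an extra signal fed to the cheater network through its own random input weights. The hint used below (a phase-shifted copy of the sine-wave output) is a made-up example for illustration, not one of the hints from the paper.

```python
# A hint h(t) is an extra signal injected into the cheater network through its
# own random input weights; the imitator then also absorbs the hint-driven
# activity into its recurrent connections.
u_hint = rng.uniform(-1, 1, N)                      # random input weights for the hint
h = np.cos(2 * np.pi * 2.0 * t)                     # made-up hint: a phase-shifted copy of f

x = 0.1 * rng.standard_normal(N)
cheater_rates_h = np.zeros((T, N))
for i in range(T):
    r = np.tanh(x)
    cheater_rates_h[i] = r
    x += dt / tau * (-x + J @ r + u_in * f[i] + u_hint * h[i])

# The imitator's target drive now includes the hint term as well.
target_drive_h = cheater_rates_h @ J.T + np.outer(f, u_in) + np.outer(h, u_hint)
```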

Although we introduced hint signals purely on intuitive grounds, can the concept of hints be used as a guiding principle for understanding experimental data? Indeed, recent experiments by SCGB investigator Mark Churchland have identified signals in motor cortical activity that perform a function very similar to that performed by our hint signals. While examining the responses of neurons in the primary motor cortex of primates performing a novel ‘cycling’ task (like riding a bicycle, except with one’s arm), Churchland found that the most dominant signals in the data did not relate to the system’s output — moving the animal’s arm — per se. Rather, these signals served to ‘disentangle’ more weakly represented output signals, making it easier for them to be accessed by other neural circuits that need them. It’s possible that these disentangling signals are generated from other brain regions and sent as instructive signals to the motor cortex, much like the hint signals we applied in our networks.

The positive impact of hint signals in our networks and the identification of similar signals in real neural activity raise a number of important questions. Do other neural representations employ a similar strategy, or are these types of representations well-suited for the motor system specifically? In real brains, where might these hint signals come from? Our cheater network received these hints for free; do real neural representations evolve during learning so as to generate these helpful signals? When building models, can we develop algorithms to search for these helpful representations within the space of all representations of an operation? The good news is that the answers to these questions are readily found — just ask someone else who already knows!

Brian DePasquale is a postdoctoral research associate at the Princeton Neuroscience Institute. He completed his Ph.D. at the Center for Theoretical Neuroscience at Columbia University in 2016.
