Deciphering the Brain’s Algorithms
The space between our ears is full of mysteries. How do our brains organize and recall information? How do different neurons work together to produce coherent thoughts and solve complex problems?
Illuminating these mysteries requires reverse engineering the rules and patterns that control the brain. Accomplishing such a feat will not only help scientists understand the inner workings of the mind but also bring some of the smarts of biological brains to artificial intelligence. Building that bridge between nature and technology is one of the goals of the Neuroscience group at the Flatiron Institute’s Center for Computational Biology.
“I believe that the physical processes in neurons — like the opening and closing of ion channels, the charging of membranes, the release of chemical neurotransmitters across synapses — are all physical implementations of mathematical algorithms,” says Dmitri Chklovskii, leader of the Neuroscience group. “There is some algorithm that neurons carry out, and it’s a big puzzle, because we don’t know what this algorithm is. Why is it that nature was able to evolve such an amazingly powerful algorithm that we still haven’t come up with?”
Reconstructing the brain’s hidden algorithms begins with the torrents of data produced by neuroscience experiments. The Neuroscience group’s CaImAn software tool automates what is otherwise a tedious manual task: tracking the firing of individual neurons in imaging data. CaImAn (an abbreviation of calcium imaging data analysis) has been freely available for a few years and has already proved invaluable to the calcium imaging community, with more than 100 labs using the software.
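To make the task concrete, here is a minimal sketch of the kind of bookkeeping CaImAn automates, done by hand in plain NumPy. This is not CaImAn's actual API; the function name, the mask-based regions of interest and the percentile baseline are illustrative assumptions. Given a recorded movie and a binary mask for each hand-identified neuron, it extracts that neuron's fluorescence trace and converts it to dF/F, a standard proxy for activity.

```python
import numpy as np

def extract_dff(movie: np.ndarray, masks: np.ndarray, baseline_pct: float = 20.0) -> np.ndarray:
    """Hand-rolled trace extraction (illustrative only, not CaImAn's API).

    movie: (T, H, W) calcium-imaging frames.
    masks: (N, H, W) booleans, one region of interest per neuron.
    Returns an (N, T) array of dF/F traces.
    """
    traces = np.empty((masks.shape[0], movie.shape[0]))
    for n, mask in enumerate(masks):
        raw = movie[:, mask].mean(axis=1)        # mean fluorescence inside the ROI, per frame
        f0 = np.percentile(raw, baseline_pct)    # crude baseline estimate from dim frames
        traces[n] = (raw - f0) / f0              # dF/F: fractional change from baseline
    return traces
```

CaImAn's value is that it also handles the steps this sketch takes for granted, such as finding the regions of interest in the first place and demixing overlapping neurons, at the scale of hundreds of neurons and hours of video.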
Chklovskii’s team has already started applying some of the lessons learned from the brain to artificial intelligence. The typical neural network uses labeled training data to tweak parameters until it churns out correct results, but a new framework by Chklovskii and colleagues does not require labels. It starts by expressing, in mathematical form, three biological truths about how a network ought to function: Neuronal activity should never be negative (real neurons can’t do anything less than not fire); similar inputs should produce similar outputs; and different inputs should yield different outputs.
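One simplified way to fold those three requirements into a single cost, consistent with the description above though not necessarily the exact form the team uses, is a nonnegative similarity-matching objective. With the inputs stacked as columns of a matrix $X$ and the network's outputs as columns of $Y$,

$$\min_{Y \ge 0} \; \bigl\lVert X^{\top}X - Y^{\top}Y \bigr\rVert_F^{2}.$$

The constraint $Y \ge 0$ encodes that activity cannot be negative, and matching the two similarity (Gram) matrices pushes similar inputs toward similar outputs and dissimilar inputs toward dissimilar outputs.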
When the team optimized this mathematical expression, known as the objective function, the resulting network repeatedly developed a hallmark of real brain architecture: It divided the input space into overlapping sections and assigned one neuron to handle each section. “We had a different expectation of what this algorithm would do,” says Anirvan Sengupta, a visiting scholar at the Flatiron Institute and a systems neuroscientist at Rutgers University in New Jersey. “It emerged despite us.”
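As a rough, self-contained illustration (a toy sketch under assumed settings, not the team's code or published result), minimizing a similarity-matching cost of the kind written above with a simple projected-gradient loop already shows this tiling tendency: for inputs arranged around a ring, each output unit typically ends up active only on a small, overlapping arc of the input space.

```python
import numpy as np

# Toy nonnegative similarity matching (illustrative assumptions throughout:
# ring-shaped data, 10 output neurons, plain projected gradient descent).
rng = np.random.default_rng(0)

T, k = 200, 10                                   # stimuli and output neurons
theta = np.linspace(0, 2 * np.pi, T, endpoint=False)
X = np.vstack([np.cos(theta), np.sin(theta)])    # 2 x T inputs on a circle

G = X.T @ X                                      # input similarity (Gram) matrix
Y = 0.1 * np.abs(rng.standard_normal((k, T)))    # small nonnegative start

lr = 1e-3
for _ in range(2000):
    grad = Y @ (Y.T @ Y - G)                     # gradient of ||X^T X - Y^T Y||_F^2 (up to a constant)
    Y = np.maximum(Y - lr * grad, 0.0)           # projection keeps activity nonnegative

# Each output neuron tends to respond only to a contiguous arc of stimuli:
# overlapping, localized "receptive fields" that tile the ring.
for i in range(k):
    active = np.flatnonzero(Y[i] > 0.1 * max(Y[i].max(), 1e-12))
    print(f"neuron {i}: active on {active.size} of {T} stimuli")
```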
The work is the latest in a series from the group deriving optimal networks for various learning tasks. The results hint that the way the brain simplifies its inputs is not only efficient but may border on inevitable.