Neuroscientists at the Massachusetts Institute of Technology have reported new evidence about how the brain holds information in working memory.
In a study published in PLOS Computational Biology, researchers from The Picower Institute for Learning and Memory compared several computational models, reflecting two competing hypotheses about the mechanism underlying working memory, against recordings of brain cell activity from an animal performing a working memory task.
The findings ran counter to conventional wisdom, which holds that working memory is maintained by neurons that stay constantly active, and instead lent significant support to the more recent theory that networks of neurons hold information by making temporary changes in the strength of their interconnections, or synapses.
While both kinds of model could retain information, only those that allowed synapses to transiently change their connection strengths produced neural activity patterns resembling those recorded from a real brain at work.
Senior author Earl K. Miller noted that the idea that brain cells maintain memories by staying constantly “on” may be simpler, but it does not reflect what nature is doing and cannot produce the sophisticated flexibility of thought that arises from intermittent neural activity backed by short-term synaptic plasticity.
Synapses could be the key
The long-standing view holds that persistent neuronal activity sustains persistent thoughts, which is why working memory has traditionally been attributed to neurons that fire continuously. Recently, however, this view has been questioned because it does not fit the evidence.
Using artificial neural networks equipped with short-term synaptic plasticity, the study shows that synapses can serve as a substrate for working memory. The key insight from the article is that these “plastic” network models are not only more brain-like in a quantitative sense but also offer extra functional advantages, such as greater resilience.
The researchers wanted to do more than theorize about how information might be stored in working memory; they wanted to shed light on how nature actually accomplishes it. The first step was to measure the electrical spiking activity of hundreds of neurons in an animal’s prefrontal cortex while the animal played a working memory game. On each trial, the animal was shown an image, which then disappeared.
A moment later, it was shown two images, one of which was the original, and it had to pick out the original to earn a small reward. The interval before the test is called the “delay period,” so named because the image must be held in memory until then.
The neurons fired intensely in response to the initial image, fell largely quiet during the delay, and then fired strongly again when the image had to be recalled at test. In other words, spiking is prominent when information is first stored and when it is recalled, but sparse during the maintenance period in between; the spiking did not persist throughout the delay.
In addition, the team trained “decoders” to read out the working memory information from the spiking measurements. The decoders were highly accurate when spiking was high, but their accuracy dropped sharply during the delay period, suggesting that spiking does not reliably represent the information during the delay.
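To give a concrete flavor of such a decoding analysis, here is a minimal Python sketch. It is illustrative only, not the study’s actual pipeline: the simulated spike counts, the three-epoch structure, and the choice of a logistic-regression classifier are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50
labels = rng.integers(0, 2, n_trials)        # which of two images was shown

# Simulated firing rates: strong stimulus signal during the sample and test
# epochs, only a weak one during the delay (when real spiking was sparse).
tuning = rng.normal(0, 1, (2, n_neurons))    # per-stimulus tuning (invented)
rates = np.full((n_trials, 3, n_neurons), 2.0)
rates[:, 0, :] += 3.0 * tuning[labels]       # sample epoch
rates[:, 1, :] += 0.2 * tuning[labels]       # delay epoch: weak signal
rates[:, 2, :] += 3.0 * tuning[labels]       # test epoch
counts = rng.poisson(np.clip(rates, 0.1, None))  # Poisson spike counts

# Decode stimulus identity separately in each epoch, as a decoder trained
# on spiking would: accuracy is high at sample and test, low during delay.
for epoch, name in enumerate(["sample", "delay", "test"]):
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          counts[:, epoch, :], labels, cv=5).mean()
    print(f"{name} epoch decoding accuracy: {acc:.2f}")
```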
This raised a fundamental question: if spiking does not retain the information during the delay, what does? Mark Stokes of Oxford University and colleagues had postulated that the information could be recorded in shifts in synaptic “weights,” that is, in the relative strengths of connections. The MIT researchers tested this hypothesis by creating computational models of brain networks embodying the two competing theories. The machine-learning systems were trained to carry out the same working memory task as the real animal and to generate neural activity that a decoder could interpret.
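To make the distinction concrete, here is a toy sketch of the ingredient that separates the two model families: a recurrent network whose effective connectivity includes a fast, decaying “fast-weight” component updated by a Hebbian rule, so a stimulus leaves a trace in the weights even as activity fades. This is an assumed stand-in for illustration; the study’s trained models may use different plasticity dynamics, and all parameter values here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
W = rng.normal(0, 1.0 / np.sqrt(n), (n, n))  # slow, fixed recurrent weights
F = np.zeros((n, n))                          # fast weights (plastic part)
eta, decay = 0.05, 0.95                       # plasticity rate and decay

def step(x, F, inp):
    """One time step: activity evolves under effective weights W + F,
    then F receives a decaying Hebbian update from the current activity."""
    x_new = np.tanh((W + F) @ x + inp)
    F_new = decay * F + eta * np.outer(x_new, x)
    return x_new, F_new

x = np.zeros(n)
stimulus = rng.normal(0, 1, n)
for _ in range(5):                  # sample: the stimulus drives activity
    x, F = step(x, F, stimulus)     # and imprints a Hebbian trace in F
for _ in range(20):                 # delay: no input; activity may fade,
    x, F = step(x, F, np.zeros(n))  # but F still carries a stimulus trace
print("fast-weight trace norm after delay:", np.linalg.norm(F))
```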
The results showed that the networks encoding information through short-term synaptic plasticity spiked when the real brain spiked and went quiet when it did not. The networks designed to store information through continual spiking, by contrast, kept spiking even when the real brain’s activity was low. The decoder analyses told the same story: accuracy dropped during the delay period in the synaptic plasticity models, just as in the real brain, whereas the persistent-spiking models maintained an unrealistically high level of accuracy throughout.
The group also performed a deeper level of analysis, developing a decoder to extract the information from the synaptic weights themselves. The researchers discovered that during the delay period, the synapses carried the working memory information that the spiking failed to convey.
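Here is a rough sketch of what decoding from weights rather than spikes could look like — something that is possible in a simulation, where the weight matrix can be read out directly, though not in a real recording. The fast-weight structure and noise levels below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n = 200, 30
labels = rng.integers(0, 2, n_trials)
imprints = rng.normal(0, 1, (2, n))      # per-stimulus activity (invented)

# Each trial's delay-period fast weights: a decayed Hebbian imprint of the
# stimulus plus trial-to-trial noise, flattened into one feature vector.
F = np.stack([0.3 * np.outer(imprints[y], imprints[y])
              + rng.normal(0, 0.5, (n, n)) for y in labels])
X = F.reshape(n_trials, -1)

# A decoder reading the weights recovers the stimulus even though, in the
# simulated delay period, spiking alone carried little usable signal.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
print(f"decoding accuracy from delay-period weights: {acc:.2f}")
```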
Of the two model variants that included short-term synaptic plasticity, the most realistic was the one that used a negative feedback loop to keep the neural network’s activity stable and robust.
The advantages of synaptic memory
The synaptic plasticity simulations not only matched nature better; they also had additional advantages that are likely important for actual brains. One was robustness: even when as many as half of the simulated neurons were “ablated,” the plasticity models still retained the information in the form of synaptic weights.
The sustained-activity models, by contrast, failed after shedding just 10-20% of their synapses. Intermittent spiking also consumes far less energy than constant spiking. And there is yet another advantage: brief bursts of spiking, rather than continuous firing, make it possible to hold several items in memory at once. Studies have shown that human working memory can store up to four items at a time.
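As a rough illustration of why weight-based storage tolerates ablation, consider a pattern stored in an outer-product (Hopfield-style) weight matrix: the memory is distributed across many synapses, so recall from the surviving units degrades gracefully. This simplified mechanism is assumed for the example and is not the paper’s actual ablation protocol.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
patterns = rng.choice([-1.0, 1.0], (5, n))   # five stored items (invented)
Wmem = patterns.T @ patterns / n             # outer-product weight storage
np.fill_diagonal(Wmem, 0.0)

target = patterns[0]
for frac in [0.0, 0.25, 0.5]:
    keep = rng.random(n) >= frac             # "ablate" a fraction of units
    cue = target * keep                      # only surviving units fire
    recall = np.sign(Wmem @ cue)             # one recall step via weights
    overlap = (recall[keep] == target[keep]).mean()
    print(f"ablated {frac:.0%}: recall match on surviving units = {overlap:.2f}")
```

Because the weight matrix spreads each item across all pairwise connections, the recall overlap stays high even with half the units removed, in the spirit of the robustness the plasticity models showed.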
New studies are now being planned to test whether models that combine sporadic spiking with synaptic weight-based storage also reflect actual brain data when animals must hold numerous items in mind rather than just one picture.