“In computers, memory is relatively simple,” says information theorist and Brown engineering professor Chris Rose. But in its more complex forms, memory remains fundamentally mysterious. “In humans,” Rose says, “we barely know how it’s structured or how it works.” Researchers in every corner of the Brown campus are right now devoting considerable energy to unraveling those mysteries. We spoke to three professors about their investigations.
How do we learn?
Psychologists have mostly settled on a structure for thinking about human memory, says Michael Frank, a professor of cognitive, linguistic, and psychological sciences. It falls into three categories: episodic (“things that happen, like where I parked my car”), procedural (“how to tie your shoes; people call it muscle memory because it’s often related to a motor skill”), and working memory (“how you hold multiple pieces of information in mind, like which items to get from the grocery store”).
The taxonomy is a useful guide, Frank says, but it doesn’t begin to describe the intensely complex interactions between different forms of human memory. “How exactly do I remember where I parked my car? And how did I make the decision to park it there today?” Trying to answer the second question, he says, can tell us a lot about the first one.
To better understand the connections between memory, learning, and decision making, Frank builds neural network models that seek to mimic functions in the human brain. “A lot of current AI research tries to make algorithms a little bit smarter and faster based on rough principles of how neurons interact,” he says, “but these strategies depend on huge data sets and don’t respond well to new situations.”
Frank’s latest project combines new findings in neuroscience with a specialized form of machine learning called reinforcement learning. His model uses distinct, parallel pathways for measuring risks and rewards in decision making, he says—just like the brain does. When circling for a parking spot, for example, we balance the value of what we already know (information about time, location, and parking meters) against the costs of an ever-expanding search radius. Both pathways play a major role in learning outcomes, offering opportunities both to re-evaluate what we “know” for the next time and to acquire new information.
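As a rough illustration of that balance (a toy sketch, not Frank’s actual model), the snippet below shows a reinforcement-learning agent that keeps two parallel sets of weights for each option, one accumulating evidence for benefits and one for costs, and weighs them against each other when it chooses. The class name, learning rule, and payoff numbers are all hypothetical.

```python
import random

class TwoPathwayAgent:
    """Toy agent with separate 'benefit' and 'cost' pathways per option."""

    def __init__(self, n_options, learning_rate=0.1):
        self.benefit = [0.0] * n_options   # accumulated evidence for upside
        self.cost = [0.0] * n_options      # accumulated evidence for downside
        self.value = [0.0] * n_options     # running estimate of expected reward
        self.lr = learning_rate

    def choose(self, explore=0.1):
        # Occasionally keep exploring; otherwise pick the option whose
        # benefits most outweigh its costs.
        if random.random() < explore:
            return random.randrange(len(self.value))
        scores = [b - c for b, c in zip(self.benefit, self.cost)]
        return scores.index(max(scores))

    def learn(self, option, reward):
        # Prediction error: how much better or worse the outcome was than expected.
        error = reward - self.value[option]
        self.value[option] += self.lr * error
        # Positive surprises strengthen the benefit pathway,
        # negative surprises strengthen the cost pathway.
        if error > 0:
            self.benefit[option] += self.lr * error
        else:
            self.cost[option] += self.lr * (-error)

# Example: three hypothetical parking spots with made-up average payoffs.
agent = TwoPathwayAgent(n_options=3)
payoffs = [0.2, 0.8, 0.5]
for _ in range(500):
    spot = agent.choose()
    reward = payoffs[spot] + random.gauss(0, 0.1)
    agent.learn(spot, reward)
print(agent.value)   # the estimates should roughly track the true payoffs
```

After enough trials, the agent settles on the best spot while the two pathways preserve a separate record of what made the alternatives attractive or costly.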
“It’s interesting to try to replicate some of the complexity of biophysical systems,” he adds. “I want to learn from the architecture of the human brain, how it learns and generalizes so effectively.”
How can we forget?
The current challenge for Diane Lipscombe, a professor of neuroscience and director of the Robert J. and Nancy D. Carney Institute for Brain Science, is understanding how we might “un-learn” certain things. Lipscombe studies how calcium functions in neurons as they receive and transmit signals, with a particular focus on chronic pain, which she describes as “a pathological form of memory.”
“If you hold your hand over a stove, within seconds you will feel pain,” Lipscombe says. “Your body is almost immediately sensitized differently to the signals being passed from your hand. But after the healing process takes place, that same signal—say someone accidentally brushes your hand—is processed innocuously in most people.” People with chronic pain don’t get this relief, she notes. The wound heals, but the pathways to your brain still transmit the pain of a burn.
Severe shoulder injuries have a higher likelihood of resulting in chronic pain, Lipscombe says. It also occurs more often in HIV patients, diabetics, and cancer patients undergoing treatment. “If we can find a way to stop the calcium proteins that carry pain signals, we can reduce the signal. But of course you have to find precisely the right target at the right moment and only target the pathways that are pathological. Pain is actually an important protection mechanism.”
Chemical memory
Computer memory may be simpler than its human analog right now, but not for long. A large group of Brown scientists is working on a completely new approach. “We had this idea,” says Rose: “How do we take chemicals and make memory out of them?” Software engineers conjure words, images, and actions out of 1’s and 0’s. “Our way,” Rose says, “is to mix chemicals together and run them under a mass spectrometer. For each possible chemical, we ask: ‘Is it there or not?’ If it is, that’s a 1. If not, that’s a 0.” He pauses. “I just made that sound a lot simpler than it really is….”
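The presence-or-absence scheme Rose describes can be sketched in a few lines: assign each bit position to a distinct compound, mix in only the compounds whose bit is 1, and read the data back by checking which compounds are detected. The compound names below are placeholders, and the detection step stands in for what a real mass spectrometer would report.

```python
# Hypothetical library of compounds, one per bit position.
LIBRARY = ["compound_A", "compound_B", "compound_C", "compound_D"]

def encode(bits):
    """Return the set of compounds to mix for a given bit string."""
    return {c for c, bit in zip(LIBRARY, bits) if bit == "1"}

def decode(detected):
    """Recover the bit string from the compounds detected in the sample."""
    return "".join("1" if c in detected else "0" for c in LIBRARY)

mixture = encode("1010")   # mix compounds A and C, leave out B and D
print(mixture)             # {'compound_A', 'compound_C'}
print(decode(mixture))     # '1010'
```

The hard part, as Rose hints, is everything this sketch hides: synthesizing and mixing thousands of distinguishable compounds, and reliably telling their peaks apart in a noisy spectrum.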
Powered by a 2018 grant of $4.1 million from the Defense Advanced Research Projects Agency, Brown’s first foray into “molecular memory” combines the efforts and expertise of Rose, six other professors, three postdocs, grad students in both engineering and chemistry, and several undergrads. Led by Jacob Rosenstein of engineering and Brenda Rubenstein of chemistry, the project has run a range of successful experiments—converting, for example, the image of a cat into a chemical sample, then reading out that sample in a mass spectrometer and converting it back into the image of the cat.
“This project is a little weird, but that’s what makes it fun,” says Rosenstein. “We have people who do electronics and hardware design, information theory, computer architecture, quantum computing, and all kinds of chemistry. It’s the most unique team I have ever worked on,” he adds. “Usually I do projects where it’s electronics and one other thing—sensing, or biology, or something medical. Transistors are a lot easier than chemistry.”
The molecular memory team also looks to the human brain as a model for inspiration. “Biology obviously uses molecules to process and store information—we can see and hear, and communicate with each other, and remember things. If you dwell on it, it all starts to feel very sci-fi,” Rosenstein says. “But in the end I just want to remember where I parked my car.”