
Sleep Derived: How Artificial Neural Networks Can Avoid Catastrophic Forgetting

Inspired by how the human brain uses sleep to learn and remember, researchers are inducing a sleep-like state in artificial neural networks to reduce their characteristic “catastrophic forgetting.”

Humans and animals possess the remarkable ability to learn continuously, incorporating new information with existing knowledge. Part of that ability derives from sleep, during which the brain consolidates new and old memories.

Artificial neural networks are based on the architecture of the human brain and, in some ways, exceed its abilities, notably in computational speed. But when they learn something new, it often comes at the cost of overwriting previously learned information, a phenomenon called catastrophic forgetting.
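To make the phenomenon concrete, here is a minimal, hypothetical sketch (not taken from the paper): a toy classifier is trained on one synthetic task and then on a conflicting one, and loses nearly all of its accuracy on the first. The task definitions, learning rate, and model are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(center_a, center_b, n=200):
    """Toy binary task: two Gaussian clusters with the given centers."""
    x = np.vstack([rng.normal(center_a, 0.5, size=(n, 2)),
                   rng.normal(center_b, 0.5, size=(n, 2))])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

def train(w, b, x, y, epochs=200, lr=0.1):
    """Plain logistic-regression gradient descent; no protection against forgetting."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        w -= lr * (x.T @ (p - y) / len(y))
        b -= lr * (p - y).mean()
    return w, b

def accuracy(w, b, x, y):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return ((p > 0.5) == y).mean()

# Task A and task B require opposite decision boundaries.
xa, ya = make_task([-2, 0], [2, 0])
xb, yb = make_task([2, 0], [-2, 0])

w, b = np.zeros(2), 0.0
w, b = train(w, b, xa, ya)
print("Task A accuracy after learning A:", accuracy(w, b, xa, ya))

w, b = train(w, b, xb, yb)   # sequential training on task B only
print("Task A accuracy after learning B:", accuracy(w, b, xa, ya))
print("Task B accuracy after learning B:", accuracy(w, b, xb, yb))
```

Because nothing constrains the weights to preserve the first task, training on the second task simply overwrites them.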

Building upon previous work, Maxim Bazhenov, PhD, professor of medicine and a sleep researcher at UC San Diego School of Medicine, and colleagues describe experiments in which a sleep-like phase of unsupervised replay reduced catastrophic forgetting in artificial neural networks.

The findings are published in Nature Communications.

In this case, sleep for artificial neural networks consisted of off-line training with local unsupervised Hebbian plasticity rules and noisy input.
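As a rough illustration of that idea, the sketch below drives a network with noisy input and applies purely local Hebbian weight updates, with no labels and no backpropagation. It is a minimal sketch of the general approach, not the authors' exact algorithm; the function name, learning rate, decay term, and network size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sleep_phase(W, steps=1000, lr=1e-3):
    """
    Hypothetical sleep-like offline phase: the network is driven by noisy
    input instead of training data, and each weight is updated with a local
    Hebbian rule that depends only on the pre- and post-synaptic activity
    it connects.
    """
    input_dim = W.shape[1]
    for _ in range(steps):
        x = rng.normal(0.0, 1.0, size=input_dim)   # noisy input, no labels
        h = np.maximum(W @ x, 0.0)                  # post-synaptic activity
        # Local Hebbian update: strengthen weights between co-active units,
        # with a simple decay term to keep the weights bounded.
        W += lr * (np.outer(h, x) - 0.1 * h[:, None] * W)
    return W

# Hypothetical usage: W stands in for the weights of a network that has
# already been trained (e.g., with backpropagation) on one or more tasks.
W = rng.normal(0.0, 0.1, size=(16, 8))
W = sleep_phase(W)
```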

The result, write the authors: “In an incremental learning framework, sleep was able to recover old tasks that were otherwise forgotten. Previously learned memories were replayed spontaneously during sleep, forming unique representations for each class of inputs. Representational sparseness and neuronal activity corresponding to the old tasks increased while new task related activity decreased.”

In other words, the artificial neural networks performed more like people after a good night’s sleep.

— Scott LaFee