UCL Scientists Reconstruct 10-Second Movies Directly from Mouse Neural Activity Using AI and Single-Cell Imaging

UCL researchers successfully use single-cell recordings and AI to reconstruct 10-second videos from mouse brain activity, revealing how minds interpret reality.

By: AXL Media

Published: Mar 10, 2026, 4:22 AM EDT


Decoding the Language of the Visual Cortex

While science has long sought to understand how the brain translates signals from the eye into a coherent image of the world, previous attempts have often relied on the relatively low resolution of fMRI scans in humans. A new study led by the Sainsbury Wellcome Centre at UCL and published in eLife has taken a more granular approach. By recording the activity of single cells within the visual cortex of mice, researchers have moved closer to a pixel-level understanding of biological vision. This technique allows for a far more precise measurement of brain representations than was previously possible, enabling the team to build a digital bridge between neural firing and actual visual experience.

A New Model for Neural Encoding

To translate raw brain activity into moving images, Dr. Joel Bauer and his colleagues used a dynamic neural encoding model. This AI-driven system predicts the activity of individual neurons from the movie being watched, while simultaneously accounting for the mouse's physical movements and pupil diameter. The team then computed the "prediction error": the difference between what the model expected the neurons to do and what they actually did, as measured with microscopic calcium imaging, in which firing cells show detectable rises in calcium. This allowed an algorithm to gradually update the pixels of an initially blank screen until the model's predicted activity closely matched the mouse's recorded neural state.
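The optimization loop described above can be sketched in miniature. This is a hedged illustration, not the study's actual model: it assumes a toy linear encoding model (each neuron responds to a weighted sum of pixels), whereas the real model is a far richer AI system that also accounts for behavior. The core idea is the same: start from a blank frame and repeatedly nudge pixels to shrink the prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear encoding model: each neuron's predicted activity
# is a weighted sum of stimulus pixels (its "receptive field").
n_neurons, n_pixels = 50, 16
W = rng.normal(size=(n_neurons, n_pixels))

def encode(pixels):
    """Predict the activity of all neurons for a given frame."""
    return W @ pixels

# Stand-in for recorded activity while the mouse viewed a target frame.
target_frame = rng.normal(size=n_pixels)
recorded = encode(target_frame)

# Begin with a blank screen and iteratively update the pixels so that
# the model's prediction converges toward the recorded activity.
frame = np.zeros(n_pixels)
lr = 0.005
for _ in range(3000):
    error = encode(frame) - recorded   # prediction error
    frame -= lr * (W.T @ error)        # gradient step on the pixels

reconstruction_error = np.linalg.norm(frame - target_frame)
```

In this overdetermined toy setting (more neurons than pixels) the loop recovers the target frame almost exactly; in the real experiment, reconstruction quality instead improves gradually as more neurons are added.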

Reconstructing Reality from Thought

The true test of the model came when the researchers presented the "trained" AI with brain data from a mouse watching a 10-second video clip that the model had never seen before. The resulting reconstruction was strikingly accurate. According to Dr. Bauer, the quality of the video improved significantly as data from more individual neurons were included, highlighting the need for comprehensive neural datasets to achieve high fidelity. To verify the results, the team used pixel correlation to compare the original footage with the brain-derived version, finding that the timing was nearly identical between the two.
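The verification metric the article mentions, pixel correlation, can be shown with a small sketch. The frames and noise levels here are illustrative assumptions; the function simply computes the Pearson correlation between corresponding pixels of the original and reconstructed frames, which is high for a faithful reconstruction and near zero for an unrelated image.

```python
import numpy as np

def pixel_correlation(original, reconstruction):
    """Pearson correlation between flattened frames (1.0 = identical
    up to brightness scaling and offset)."""
    a = original.ravel().astype(float)
    b = reconstruction.ravel().astype(float)
    return np.corrcoef(a, b)[0, 1]

# Toy example: a reconstruction that is a noisy copy of the original
# scores near 1, while an unrelated random frame scores near 0.
rng = np.random.default_rng(1)
original = rng.random((8, 8))
noisy_copy = original + 0.05 * rng.normal(size=original.shape)

score_faithful = pixel_correlation(original, noisy_copy)
score_unrelated = pixel_correlation(original, rng.random((8, 8)))
```

Computed frame by frame, this kind of score also captures timing: if the reconstructed video tracks the original, the per-frame correlations stay high throughout the clip.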
