Researchers have developed a pipeline that turns fMRI recordings from sleep into video-like sequences of dream imagery. The method first reconstructs visual content from wakeful fMRI, then decodes sleep-state brain activity, and finally assembles the frames into coherent narratives using language models.
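The three stages described above can be sketched as a minimal pipeline skeleton. Everything here is an assumption for illustration: the function names, the stand-in "decoder" (a simple average), and the caption-style narrative step are placeholders, not the researchers' actual models.

```python
def decode_frame(fmri_volume):
    """Map one fMRI volume to a rough per-frame feature (stub).

    A real decoder would be a model trained on wakeful fMRI and then
    applied to sleep-state recordings; here we just average the signal
    as a stand-in feature.
    """
    return sum(fmri_volume) / len(fmri_volume)


def assemble_narrative(frame_features):
    """Turn per-frame features into an ordered 'story' (stub).

    A real system would prompt a language model with per-frame
    captions or embeddings to produce a coherent narrative.
    """
    return [f"scene {i}: feature={f:.2f}" for i, f in enumerate(frame_features)]


# Toy sleep-state recording: 3 volumes of 4 voxels each (fabricated numbers).
recording = [
    [0.1, 0.2, 0.3, 0.4],
    [0.5, 0.5, 0.5, 0.5],
    [0.9, 0.8, 0.7, 0.6],
]
features = [decode_frame(v) for v in recording]
story = assemble_narrative(features)
print(story)
```

The point of the structure is the hand-off: per-volume decoding produces isolated frame features, and only the final assembly step imposes temporal, story-like order on them.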
By combining generative image models with sequence modeling, the team creates richer, story-like outputs rather than isolated snapshots.
They also built a dataset of dream reports paired with fMRI data and an evaluation pipeline to measure both visual accuracy and narrative coherence.
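An evaluation along those two axes might look like the sketch below. The concrete metrics are illustrative stand-ins chosen for this example, not the team's actual pipeline: cosine similarity against ground-truth frame features for "visual accuracy", and adjacent-frame agreement as a crude proxy for "narrative coherence".

```python
import math


def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def evaluate(pred_frames, true_frames):
    """Score a decoded sequence on two axes (illustrative metrics only)."""
    # Visual accuracy: mean similarity of each predicted frame to its target.
    visual = sum(cosine(p, t) for p, t in zip(pred_frames, true_frames)) / len(true_frames)
    # Coherence proxy: how smoothly consecutive predicted frames vary
    # (1.0 means identical neighbours).
    coherence = sum(cosine(a, b) for a, b in zip(pred_frames, pred_frames[1:])) / (
        len(pred_frames) - 1
    )
    return {"visual": visual, "coherence": coherence}


# Fabricated feature vectors for demonstration.
pred = [[1.0, 0.0], [0.9, 0.1], [0.8, 0.2]]
true = [[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]]
print(evaluate(pred, true))
```

Separating the two scores matters because they can disagree: a decoder can produce individually accurate frames that jump incoherently from scene to scene, or a smooth sequence that matches nothing the sleeper reported.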