AI image reconstruction from brain activity hits record 75% accuracy
An image reconstructed by generative AI using recordings of a test subject’s neural activity
Artificial intelligence (AI) has for the first time reconstructed images from people's brain activity with over 75% accuracy, researchers in Japan announced on Nov. 30.
Until now, recreating images from brain activity was possible only while the subject was actually viewing them, or when the type of image, such as faces, letters or simple figures, was specified in advance. The team of researchers from the National Institutes for Quantum Science and Technology (QST) and other organizations has reportedly demonstrated that all sorts of images, including landscapes and complex figures, can be reconstructed to some extent from thought alone.
First, the team recorded the brain activity of subjects who viewed 1,200 varied images while lying in a functional magnetic resonance imaging (fMRI) machine. The team also had an AI analyze the images to create "score charts" quantifying some 6.13 million factors such as color, shape and texture. A neural signal translator program was then built to match brain activity to these score charts, so that it generates a new score chart whenever new brain activity is input.
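The idea of a "neural signal translator" can be illustrated with a toy sketch. The code below fits a simple ridge regression that maps brain activity to feature scores; the dimensions, data and linear model here are synthetic placeholders for illustration only, not the study's actual method or its 6.13-million-factor score charts.

```python
import numpy as np

# Hypothetical sketch of a "neural signal translator": a linear (ridge)
# regression mapping fMRI voxel activity to image feature scores.
# All sizes and data are synthetic stand-ins, not the study's.
rng = np.random.default_rng(0)
n_images, n_voxels, n_features = 1200, 500, 64  # toy sizes

X = rng.standard_normal((n_images, n_voxels))    # brain activity per image
Y = rng.standard_normal((n_images, n_features))  # AI-derived feature scores

lam = 1.0  # ridge penalty
# Closed-form ridge solution: W = (X^T X + lam*I)^{-1} X^T Y
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

def translate(activity):
    """Predict a score chart from new brain activity."""
    return activity @ W

new_scores = translate(rng.standard_normal((1, n_voxels)))
print(new_scores.shape)  # (1, 64)
```

Once trained on the 1,200 viewed images, such a mapping can in principle turn any new brain recording into a predicted score chart.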
One of the images used for an AI reconstruction experiment is seen. The subject mentally pictured this image between 30 minutes and one hour after viewing it.
Next, the subjects were shown an image not among the original 1,200. Thirty minutes to an hour later, their brain activity was measured in the fMRI machine while they were asked to mentally picture the image they had seen. The neural signal translator converted these recordings into score charts, which were then fed into a separate generative AI program that reconstructed the image through a 500-step revision process.
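The stepwise revision can be sketched in miniature. The snippet below is a toy stand-in only: it iteratively adjusts an image vector so a fake linear "feature extractor" approaches a target score chart via plain gradient descent, whereas the study used a generative AI model whose internals are not described here.

```python
import numpy as np

# Toy stand-in for the 500-step revision loop: nudge an image vector so
# that a fake, linear feature extractor's output approaches the target
# score chart. This gradient-descent sketch only illustrates the idea
# of stepwise refinement, not the study's actual generative model.
rng = np.random.default_rng(2)
d_img, d_feat = 128, 32
F = rng.standard_normal((d_feat, d_img)) / np.sqrt(d_img)  # stand-in extractor
target = rng.standard_normal(d_feat)                       # decoded score chart

img = np.zeros(d_img)
lr = 0.1
for step in range(500):          # 500 refinement steps
    err = F @ img - target       # feature-space mismatch
    img -= lr * (F.T @ err)      # adjust image to reduce mismatch

print(round(float(np.linalg.norm(F @ img - target)), 4))
```

After 500 steps the toy image's features closely match the target chart, mirroring how repeated revision steers a generated image toward the decoded scores.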
With this method, the original images could be identified from the reconstructions at a 75.6% accuracy rate, a major advance over prior methods, which achieved only 50.4%.
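An accuracy figure like this is typically computed as pairwise identification: a trial counts as correct when a reconstruction resembles its own source image more than a distractor, so chance is 50%. The sketch below assumes this standard scheme with correlation as the similarity measure; the paper's exact evaluation details may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 64
true = rng.standard_normal((n, d))                # source image features
recon = true + 1.5 * rng.standard_normal((n, d))  # noisy reconstructions

def corr(a, b):
    """Pearson correlation between two feature vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pairwise identification: correct when the reconstruction correlates
# more with its own source image than with a distractor image.
correct = total = 0
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        total += 1
        if corr(recon[i], true[i]) > corr(recon[i], true[j]):
            correct += 1

accuracy = correct / total
print(round(accuracy, 3))
```

Under this scheme, 50.4% is barely above chance, which is why 75.6% represents a substantial improvement.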
The research could lead to new forms of communication that do not rely on words. QST researcher Kei Majima said, "This is a monumental achievement in which humans peered into another person's head for the first time. I hope this research prompts further understanding of the human mind."
The results were published as “Mental image reconstruction from human brain activity: Neural decoding of mental imagery via deep neural network-based Bayesian estimation” in the online edition of the international science journal Neural Networks.