As microelectrode arrays and machine learning algorithms improve, experts say brain-computer interfaces could move beyond medical applications, transforming how humans interact with computers and with each other, while raising profound ethical questions about privacy, consent and the commercialisation of thought itself.

The crackle of electricity inside your brain has long been too complex to decode. Artificial intelligence is changing that.
In laboratories from Stanford University to the University of California, Davis, and research centres in Japan, scientists say they are closer than ever to translating brain activity directly into words, images and even music.
In one recent study at Stanford, a 52-year-old woman paralysed by a stroke nearly two decades ago watched as sentences she could not speak appeared on a screen in front of her. Identified only as participant T16, she had a tiny array of electrodes surgically implanted in her brain. An artificial intelligence system decoded the neural signals produced as she imagined speaking, converting them into text in real time.
It was the closest scientists had come yet to a form of “mind reading”.
The findings, unveiled in August 2025, are part of a wave of advances in brain-computer interfaces, or BCIs, devices that connect the human brain directly to a computer. Months later, researchers in Japan reported a “mind captioning” system that used non-invasive brain scans and AI models to generate detailed descriptions of what a person was seeing or picturing.
Both efforts rely heavily on machine learning algorithms, a branch of artificial intelligence adept at identifying patterns in large datasets. Instead of interpreting sound waves, as voice assistants do, the systems interpret electrical signals produced by neurons.
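To give a flavour of the approach, the sketch below trains an off-the-shelf classifier to label short windows of neural activity with phonemes. It is purely illustrative: the data are synthetic, the model is far simpler than the networks and language models used in the published systems, and none of the names or numbers come from the studies themselves.

```python
# Illustrative sketch: decoding phoneme labels from windowed neural features
# with a generic classifier. All data here are synthetic; real systems use
# far richer models and recordings from implanted microelectrode arrays.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_channels = 256   # electrodes on a hypothetical array
n_windows = 5000   # short analysis windows of neural activity
n_phonemes = 40    # rough size of the English phoneme inventory

# Synthetic "firing rate" features: one vector per time window.
X = rng.normal(size=(n_windows, n_channels))
y = rng.integers(0, n_phonemes, size=n_windows)

# Inject a weak class-dependent signal so the classifier has something to learn.
class_means = rng.normal(scale=0.5, size=(n_phonemes, n_channels))
X += class_means[y]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print(f"held-out phoneme accuracy: {clf.score(X_test, y_test):.2f}")
```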
“In the next few years, we will begin to see these technologies being commercialised and deployed at scale,” said Maitreyee Wairagkar, a neuroengineer at the University of California, Davis. Several companies, including Neuralink, founded by Elon Musk, are developing implantable brain chips aimed at bringing such tools beyond research settings. “It’s very exciting,” she said.
Scientists have experimented with BCIs for decades. In 1969, neuroscientist Eberhard Fetz showed monkeys could move a meter needle using the activity of a single neuron. Since then, BCIs have enabled some patients to control prosthetic limbs or computer cursors.
Decoding speech, however, has proved more complex. In 2021, Stanford researchers demonstrated that a quadriplegic man could generate English sentences by imagining himself writing letters in the air, producing about 18 words per minute.
“Can we directly decode the words that the person is trying to speak from neural activity alone?” Wairagkar asked.
By 2024, her lab translated attempted speech from a man with amyotrophic lateral sclerosis, or ALS, into text at roughly 32 words per minute with 97.5% accuracy. More recently, researchers have begun exploring whether systems can capture “inner speech”, words formed silently in the mind, rather than requiring patients to attempt speaking.
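Accuracy figures like these are typically reported in terms of word error rate: the fraction of words a decoder gets wrong relative to the intended sentence, so 97.5% accuracy corresponds to roughly one error in every forty words. The snippet below is a generic, textbook computation of that metric, not the evaluation code from the study.

```python
# Generic word error rate (WER) computation: Levenshtein edit distance over
# words, divided by the length of the reference sentence.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("i would like a glass of water please",
                      "i would like a glass of water"))  # 0.125
```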
“We asked them to count the number of shapes of a certain colour on the screen, because we figured that in this type of task, you would probably accomplish it by literally counting numbers in your head,” said Frank Willett, co-director of Stanford’s Neural Prosthetics Translational Laboratory. “And that’s what we saw. We saw traces of these number words passing through the motor cortex that we could pick up on.”
Accuracy remains limited, particularly for open-ended thoughts. “With the current technology, we’re not able to get somebody’s fully unfiltered inner speech perfectly accurately,” Willett said. “But we were able to pick up traces of inner speech pretty clearly in these different tasks.”
Researchers are also working to decode tone, pitch and rhythm, allowing patients not just to generate words but to express emphasis or ask questions. In one 2025 demonstration, an ALS patient modulated his pitch and even sang simple melodies through a prototype system.
“Human speech is much more than text on the screen,” Wairagkar said. “Most of our communication comes through how we speak, how we express ourselves; what we say has different meanings in different contexts.”
Elsewhere, scientists are pairing functional magnetic resonance imaging scans, or fMRI, with generative AI systems such as Stable Diffusion to reconstruct images viewed by participants. In separate experiments, researchers have attempted to recreate snippets of music from brain activity.
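One recipe reported in this line of work is to fit a regularised linear regression from fMRI voxel activity to the embedding space of a pretrained image model, then let a generative model render a picture from the predicted embedding. The sketch below shows only the regression step on synthetic stand-in data; the trial counts, voxel counts and embedding size are assumptions for illustration, and the generative step is omitted.

```python
# Minimal sketch: map fMRI voxel responses to an image-embedding space with
# ridge regression. Everything below is synthetic stand-in data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_trials = 1200   # images shown to a hypothetical participant
n_voxels = 4000   # visual-cortex voxels kept after preprocessing
embed_dim = 512   # size of the target image-embedding space

voxels = rng.normal(size=(n_trials, n_voxels))  # simulated fMRI responses
true_map = rng.normal(size=(n_voxels, embed_dim)) * 0.01
embeddings = voxels @ true_map + rng.normal(scale=0.1, size=(n_trials, embed_dim))

# One regression target per embedding dimension, regularised because there
# are many more voxels than trials.
model = Ridge(alpha=100.0)
model.fit(voxels[:1000], embeddings[:1000])
predicted = model.predict(voxels[1000:])
print("predicted embedding shape:", predicted.shape)  # (200, 512)
```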
While still experimental, the work is expanding understanding of how the brain processes language, vision and sound, and raising ethical questions about privacy and consent if such systems become more accurate.
Wairagkar believes improvements in hardware, including implants that can sample more neurons, will accelerate progress.
“Newer devices and better technology will be able to sample more neurons, get richer information, and achieve real time intelligible speech,” she said.
For now, the technology is largely confined to research labs and clinical trials. But as AI systems grow more powerful and companies push to commercialise brain implants, the prospect of machines that can interpret or even one day transmit human thoughts is moving from science fiction toward reality.