How artificial intelligence can comprehend our jumbled inner thoughts

Your brain's electrical activity has long been considered too complicated to understand. That is changing with artificial intelligence.

Apart from the rise and fall of her breathing, the woman remained still, her hand balled into a fist, her eyes locked intently ahead. On the screen in front of her, words were slowly coming together to form complete phrases. Phrases she was unable to utter aloud.

A small array of electrodes had been surgically implanted into a lobe at the front of the brain of the woman, who was identified only as participant T16. As she pictured herself pronouncing words, a computer driven by a type of artificial intelligence was deciphering the signals her neurons were producing and turning them into letters on a screen. Along with three patients suffering from the neurological illness amyotrophic lateral sclerosis (ALS), she was participating in a study at Stanford University in California, USA, to test a method that could convert thoughts into text in real time.

It was the closest thing to "mind reading" that science had yet achieved.


In August 2025, the researchers announced their achievement. A few months later, Japanese researchers unveiled a "mind captioning" method that can produce precise, in-depth descriptions of what a person is seeing or visualizing in their head. It translated an individual's brain activity using non-invasive brain scans and three distinct AI technologies.

These investigations are the most recent in a series of discoveries that are opening up new avenues for neuroscientists to study the inner workings of the human brain, and opening doors for people who cannot communicate in other ways. In the long run, the technology might drastically alter how each of us engages with the outside world, and even with one another.
"We will start to see these technologies being commercialized and used at scale in the next few years," says Maitreyee Wairagkar, neuroengineer who has been working on brain-computer interfaces at the University of California, Davis' neuroprosthetics lab in the United States. number of businesses, like Elon Musk's Neuralink, are already working to develop commercial brain chips that will enable this technology to leave the lab and enter the real world. "It is really thrilling," Wairagkar says.

For a very long time, scientists have been developing brain-computer interfaces (BCIs): devices that can communicate directly with the human brain. In 1969, American neuroscientist Eberhard Fetz showed that monkeys could learn to move a meter's needle with the activity of a single neuron in their brains if they were given a food pellet in exchange. In a more peculiar experiment from the same era, Spanish scientist Jose Delgado managed to remotely stimulate the brain of a charging bull, prompting it to stop in the middle of its charge.

For decades, BCIs have been able to decipher the brain impulses that accompany movement, enabling users to control a cursor on a screen or a prosthetic limb. The development of BCIs that convert speech or other complex ideas from brain signals has been slower, however. "A lot of early work was done on non-human primates… and obviously, you cannot study speech with monkeys," Wairagkar says.

In recent years, though, the field has made remarkable strides in its attempts to decode the speech of people with limited communication abilities, such as those with locked-in syndrome or paralysis caused by ALS. In 2021, for instance, researchers at Stanford University reported a successful proof of concept that enabled a paralyzed man to generate English sentences by visualizing himself using his hand to write letters in the air. He was able to write eighteen words per minute using this technique.

The question Wairagkar's team asked: is it possible to decipher the words someone is attempting to say just by looking at their brain activity? Eighteen words per minute is far slower than natural human speech, which runs at roughly 150 words per minute, so the next step was to decode words from the brain activity linked to speech itself. In 2024, Wairagkar's lab tested a method that converted the attempted speech of a 45-year-old man with ALS straight into text on a computer screen, achieving about 32 words per minute with 97.5% accuracy. According to Wairagkar, this was the first example of how speech BCIs could support daily communication.

These techniques rely on tiny "arrays" of microelectrodes that are surgically inserted into the surface of the brain. The arrays capture neural activity patterns from the brain region in which they are positioned, and a computer algorithm interprets the signals. Machine learning, a subset of artificial intelligence, has revolutionized this field: these algorithms are skilled at finding patterns in enormous volumes of messy data. When it comes to decoding speech, machine learning models are trained to identify the neural activity patterns linked to various phonemes, the smallest units of language. Researchers have compared this to the processing that occurs in smart assistants such as Amazon's Alexa, except that the AI interprets neural signals rather than sounds.
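
As a rough sketch of that idea, and not any lab's actual pipeline, the Python snippet below trains a simple classifier to map synthetic "neural activity" windows to phoneme labels. All data here is fabricated; real systems use recurrent networks, far richer features, and a language model on top.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_channels = 256                       # electrodes in the implanted array
    phonemes = ["AH", "B", "K", "S", "T"]  # tiny subset of English phonemes

    # Synthetic training data: each phoneme gets its own noisy firing pattern.
    signatures = rng.normal(size=(len(phonemes), n_channels))
    X = np.vstack([sig + rng.normal(scale=0.5, size=(200, n_channels))
                   for sig in signatures])
    y = np.repeat(np.arange(len(phonemes)), 200)

    # Train a classifier to map activity windows to phoneme labels.
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # Decode a new, unseen window of activity into its most likely phoneme.
    window = signatures[2] + rng.normal(scale=0.5, size=n_channels)
    print("decoded phoneme:", phonemes[clf.predict(window[None, :])[0]])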

Opening the inner voice

Impressive as these new speech-decoding efforts are, some problems remain. For the BCI technology to translate their words, patients typically have to attempt to pronounce them, even if they are physically incapable of doing so. This is because the electrodes are usually positioned in the motor cortex, the brain region that controls muscle movements. But attempting speech takes effort, which makes communication slow and tiring. In their most recent endeavor, the Stanford researchers sought to determine whether there was an easier approach: could they create a technique that detects both "inner speech" and "attempted speech" in real time?

Frank Willett, co-director of Stanford University's Neural Prosthetics Translational Laboratory and one of the authors of the study involving the woman at the beginning of this article, says: "We asked them to count the number of shapes of a certain color on the screen, because we figured that in this type of task, you would probably accomplish it by literally counting numbers in your head. And we observed that. We were able to detect traces of these number phrases moving through the motor cortex."


"Yes" was the timid response when asked if the tech could recognize inner speech. The researchers were able to attain an accuracy rate of up to 74% in real time for a challenge that required them to imagine a text. Although accuracy was lower, it was still higher than chance for the activities intended to elicit spontaneous inner speech. However, the decoded language was largely gibberish in a more open-ended condition when participants were asked to "think about your favorite phrase from a movie."We can not precisely capture someone is completely unfiltered inner speech with the technology we have now," Willett stated. "But we were able to pick up signs of inner speech quite clearly in these varied activities."



The study also shed light on how inner speech may work in our brains. Although the signals were weaker, the neural patterns of inner speech in the motor cortex turned out to be strongly correlated with those of attempted speech. This is consistent with earlier neuroimaging and electrophysiological research, which found that inner speech activates a brain network similar to that of physically produced speech.
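
As a toy illustration of what "correlated neural patterns" means here, the snippet below compares two fabricated population-activity vectors: one standing in for attempted speech and one for a weaker, noisier inner-speech version of the same word. All numbers are made up.

    import numpy as np

    rng = np.random.default_rng(1)
    n_channels = 256  # one firing-rate value per electrode

    # Hypothetical trial-averaged activity for the same word in two conditions.
    attempted = rng.normal(size=n_channels)
    inner = 0.4 * attempted + rng.normal(scale=0.4, size=n_channels)  # weaker

    # Pearson correlation between the two population patterns.
    r = np.corrcoef(attempted, inner)[0, 1]
    print(f"correlation between attempted and inner speech patterns: r = {r:.2f}")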



Beyond the realm of language

In 2025, Wairagkar's lab at the University of California, Davis demonstrated that it could decode not only words but also the non-verbal components of speech, such as rhythm, intonation, pitch, and speed. Essentially, this allowed patients to convey emphasis and emotion in addition to the words themselves.

"Human communication is far more than text on the screen," Wairagkar asserts. "The majority of our communication occurs through our speech and self-expression; the meaning of our words varies depending on the situation."

Wairagkar and her colleagues showed that their prototype could produce speech out loud when it was tried by an ALS patient with a severe motor speech impairment.




Most importantly, the man was able to convey meaning by modulating his speech. "Our participant was able to adjust his pitch while speaking and ask a question with an inflection at the end of the phrase," Wairagkar says. "We showed that by having him sing melodies as part of a straightforward exercise."



The output was not flawless: listeners deemed about 60% of the words intelligible, still far behind the best brain-to-text systems. But it showed what might be achievable in the near future.

Willett and Wairagkar both think further advances are on the horizon. One route to improvement is increasing the number of microelectrodes placed on the brain. "We have trillions of connections and billions of neurons in our brains," Wairagkar says. "We were sampling just 256 of those" in her most recent study. "Better technology and newer devices will be able to sample more neurons, obtain richer data, and produce comprehensible speech in real time," she adds.

With plans to look into the potential involvement of brain regions other than the motor cortex, Willett is particularly interested in learning more about inner speech. "The superior temporal gyrus is one area that we are interested in," he says, referring to a part of the brain that processes auditory information and may possibly be involved in inner speech, such as "the auditory representations of what you are picturing hearing inside your head."

Looking beyond the motor cortex may also be necessary to help people with damage to this area, such as stroke patients who have lost motor cortex function but can still understand speech. Understanding the additional brain regions involved in inner speech may eventually help these individuals communicate as well, Willett says.


To see is to believe
While researchers studying brain-computer interfaces concentrate on practical applications of the technology that can help patients, other fields are making strides in decoding brain scans and improving our understanding of how the brain works.


One line of work focuses on using AI to analyze brain scans and recreate images a person has seen. It works as follows: participants view pictures while functional magnetic resonance imaging (fMRI), a method that gauges brain activity by detecting changes in blood flow to different brain areas, records their brain activity. The brain data is then decoded by an algorithm and fed into an AI image generator, which attempts to reconstruct the images the subject saw.
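
As a loose illustration of that decode-then-generate pipeline, and not any published system, the sketch below fits a linear model mapping fabricated fMRI voxel activity to an image-embedding vector. The final generation step is left as a hypothetical placeholder ("generate_image"); real studies plug the decoded features into their own generative models.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(2)
    n_voxels, embed_dim, n_images = 5000, 512, 800

    # Fabricated training data: fMRI responses paired with embeddings
    # of the pictures each participant was viewing at the time.
    true_map = rng.normal(size=(n_voxels, embed_dim))
    embeddings = rng.normal(size=(n_images, embed_dim))
    voxels = embeddings @ true_map.T + rng.normal(size=(n_images, n_voxels))

    # Learn a linear decoder from voxel activity to image embeddings.
    decoder = Ridge(alpha=10.0).fit(voxels, embeddings)

    # At test time, decode the embedding behind a new scan...
    decoded = decoder.predict(voxels[:1])
    # ...then hand it to a generative model to render a picture, e.g.:
    # image = generate_image(decoded)   # hypothetical diffusion-model call
    print("decoded embedding shape:", decoded.shape)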


Although researchers have been working on this problem for decades, the discipline has benefited greatly from the recent explosion in generative AI. The quality of the reconstructed images has improved significantly thanks to the latest AI image generators, such as Stable Diffusion. In 2023, Yu Takagi, an associate professor at the Nagoya Institute of Technology in Japan, published a paper using a Stable Diffusion algorithm. To train it, his team used an online dataset produced by the University of Minnesota that included brain scans of four people, each taken while they looked at 10,000 images. The AI was frequently able to produce a decent representation of the original image, even though a salad bowl baffled it entirely.

Now, the field is moving swiftly forward. Israeli researchers reproduced even more precise images in a study published last year.

According to Takagi, these studies have shed light on how the brain interprets visual data. As it turns out, two distinct brain regions are essential. The "low level" visual components of an image, such as layout, perspective, and color, are encoded by the occipital lobe, at the rear of the brain. Meanwhile, the "high level" conceptual components needed to categorize what an object actually is are encoded by the temporal lobe, behind the temples.
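
A toy sketch of that two-pathway idea follows, under the assumption (not stated in the article) that a separate linear read-out is trained for each region and both outputs are handed to a generator. The weights, the data, and the "diffusion_generate" call are all hypothetical.

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic voxel activity from the two regions discussed above.
    occipital = rng.normal(size=2000)   # encodes layout, perspective, color
    temporal = rng.normal(size=2000)    # encodes object category / semantics

    # Hypothetical linear read-out weights, one decoder per region.
    w_low = rng.normal(size=(2000, 64))    # voxels -> low-level image features
    w_high = rng.normal(size=(2000, 64))   # voxels -> high-level semantics

    low_level_features = occipital @ w_low
    high_level_features = temporal @ w_high

    # A generator would combine both to render the final picture, e.g.:
    # image = diffusion_generate(low_level_features, high_level_features)
    print(low_level_features.shape, high_level_features.shape)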



The sound of music

Reconstructing audio experiences is also a current endeavor. In 2025, Takagi presented a study that used a proprietary Google algorithm to try to reconstruct sounds from fMRI scans collected while participants listened to music.



Since music is constantly changing and the fMRI scanner can only complete a scan about once per second, Takagi says this can be more difficult than reconstructing visual inputs. "The quality of the reconstruction is inferior to that of the image reconstruction," he says. "But we were still able to reconstruct the main category and the character of the song."



Thanks to this field, we now know more about the neural underpinnings of music perception.
