Scientists Develop AI-Powered Non-Invasive Tool That Can Read Your Mind

Scientists said the new tool is very accurate in turning thoughts into text.

The AI-powered tool studied people's thoughts inside an fMRI machine. (Representational Pic)

In a major breakthrough, scientists have developed a decoder, with the help of artificial intelligence (AI), that can translate brain activity into a continuous stream of text. It is the first non-invasive technique for reading a person's thoughts, The Guardian reported. Citing the researchers, the outlet said the AI-powered decoder can accurately reconstruct the thoughts of people who are listening to a story, or even silently imagining one. The tool was developed by neuroscientists at the University of Texas at Austin.

"We were kind of shocked that it works as well as it does. I've been working on this for 15 years ... so it was shocking and exciting when it finally did work," Dr Alexander Huth, a neuroscientist who was part of the team, told The Guardian.

The breakthrough overcomes a fundamental limitation of fMRI: time lag. While the technique can produce high-resolution images of brain activity, the blood-flow response it measures is slow, making it impossible to track activity in real time.

Dr Huth said that his team's language decoder "works at a very different level".

"Our system really works at the level of ideas, of semantics, of meaning," he told reporters.

How does the new system work?

For the study, three people spent a total of 16 hours inside an fMRI machine listening to spoken narrative stories, mostly podcasts.

This allowed the researchers to map out how words, phrases and meanings prompted responses in the regions of the brain known to process language.

They fed this data into a neural network language model that uses GPT-1, the predecessor of the AI technology later deployed in the hugely popular ChatGPT.

The model was trained to predict how each person's brain would respond to speech they heard; to decode, it then narrowed down candidate word sequences until it found the one whose predicted brain response most closely matched the activity actually recorded.
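This predict-and-compare step lends itself to a small illustration. Below is a minimal sketch, not the team's actual code, of the idea in Python: a stand-in feature extractor plays the role of GPT-1, random weights stand in for the per-person encoding model learned from the training scans, and candidate sentences are scored by how well their predicted brain response correlates with the observed one. All names, dimensions and weights here are assumptions chosen purely for illustration.

```python
# Minimal sketch of encoding-model-based decoding (illustrative only, not the
# authors' system): predict the fMRI response each candidate sentence would
# evoke, then keep the candidate whose prediction best matches the observed scan.
import zlib
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 64   # dimensionality of language-model features (assumed)
N_VOXELS = 200    # number of brain voxels modelled (assumed)

def text_features(text: str) -> np.ndarray:
    """Stand-in for GPT-style features of a text (the real system uses GPT-1)."""
    seed = zlib.crc32(text.encode("utf-8"))
    return np.random.default_rng(seed).normal(size=N_FEATURES)

# Per-person encoding weights; in the real study these are fit to the
# participant's training scans, here they are random for illustration.
encoding_weights = rng.normal(size=(N_FEATURES, N_VOXELS))

def predict_brain_response(text: str) -> np.ndarray:
    """Predict the fMRI response this person's brain would show for `text`."""
    return text_features(text) @ encoding_weights

def decode(observed_response: np.ndarray, candidates: list[str]) -> str:
    """Pick the candidate whose predicted response best matches the scan."""
    def score(cand: str) -> float:
        pred = predict_brain_response(cand)
        # Correlation between predicted and observed activity patterns.
        return np.corrcoef(pred, observed_response)[0, 1]
    return max(candidates, key=score)

# Toy usage: the "observed" scan is simulated from one candidate plus noise.
candidates = [
    "I don't have my driver's license yet",
    "she has not even started to learn to drive yet",
    "the weather was cold that morning",
]
observed = predict_brain_response(candidates[1]) + rng.normal(scale=0.1, size=N_VOXELS)
print(decode(observed, candidates))
```

In the real system the candidate word sequences are not a fixed list; the language model proposes continuations word by word and the poorly matching ones are pruned as the scan unfolds.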

To test the model's accuracy, each participant then listened to a new story in the fMRI machine.

The study's first author Jerry Tang said the decoder could "recover the gist of what the user was hearing".

For example, when the participant heard the phrase "I don't have my driver's license yet", the model came back with "she has not even started to learn to drive yet".

What are the existing tools?

Before this success, scientists had to rely on other language decoding systems that require surgical implants. One of these was released in 2019 to help people who have lost their voice through paralysis or conditions such as throat cancer, amyotrophic lateral sclerosis (ALS) and Parkinson's disease.

The technology used implanted electrodes to identify relevant neural signals from brain activity. These signals were then decoded into estimated movements of lips, tongue, larynx and jaw and finally transformed into synthetic speech.
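As a rough illustration of that two-stage pipeline, the sketch below, which is not the 2019 system itself, maps simulated electrode signals first to articulator movements and then to acoustic features for speech synthesis; the dimensions and linear mappings are stand-ins assumed purely for illustration.

```python
# Illustrative two-stage decoder: electrode signals -> articulator movements
# -> acoustic features. Both stages are stand-in linear maps, not a real model.
import numpy as np

rng = np.random.default_rng(1)

N_ELECTRODES = 128      # implanted electrode channels (assumed)
N_ARTICULATORS = 4      # lips, tongue, larynx, jaw
N_AUDIO_FEATURES = 32   # e.g. spectrogram bins for a synthesizer (assumed)

# Stage 1: neural signals -> estimated articulator movements.
neural_to_movement = rng.normal(size=(N_ELECTRODES, N_ARTICULATORS))

# Stage 2: articulator movements -> acoustic features for synthetic speech.
movement_to_audio = rng.normal(size=(N_ARTICULATORS, N_AUDIO_FEATURES))

def synthesize(neural_frame: np.ndarray) -> np.ndarray:
    """Decode one time step of electrode data into acoustic features."""
    movements = neural_frame @ neural_to_movement
    return movements @ movement_to_audio

audio = synthesize(rng.normal(size=N_ELECTRODES))
print(audio.shape)  # (32,) acoustic features for this frame
```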

Warning from scientists

David Rodriguez-Arias Vailhen, a bioethics professor at Spain's Granada University who was not involved in the research, said it went beyond what had been achieved by previous brain-computer interfaces.

"This brings us closer to a future in which machines are able to read minds and transcribe thought," he was quoted as saying by news agency AFP. The scientists also warned this could possibly take place against people's will, such as when they are sleeping.

But researchers said they had anticipated such concerns.

The team said it ran tests showing that the decoder did not work on a person if it had not already been trained on their own particular brain activity.

The three participants were also able to easily foil the decoder.

While listening to one of the podcasts, the participants were told to count by sevens, name and imagine animals, or tell a different story in their mind. All these tactics "sabotaged" the decoder, the researchers said.
