
How the technology behind ChatGPT could make mind-reading a reality



On a recent Sunday morning, I found myself in a pair of ill-fitting scrubs, lying flat on my back inside the claustrophobic fMRI machine of a research facility in Austin, Texas. I thought about the things I do for TV.

Anyone who has had an MRI, or magnetic resonance imaging, scan will tell you how noisy they are: swirling electrical currents create a powerful magnetic field that produces detailed scans of your brain. On this occasion, however, I could barely hear the loud noise of the mechanical magnet. I had been given a pair of specialized earphones, which began playing passages from The Wizard of Oz audiobook.


Neuroscientists at the University of Texas at Austin have discovered a way to translate scans of brain activity into words using the same AI technology that powers the pioneering chatbot ChatGPT.

The breakthrough could revolutionize how people with aphasia communicate. It is just one pioneering application of AI developed in recent months as the technology continues to advance and seems poised to touch every part of our lives and society.

“So, we don’t like using the term mind reading,” Alexander Huth, assistant professor of neuroscience and computer science at the University of Texas at Austin, told me. “We think it evokes things that we’re not actually capable of.”

Huth volunteered to be a research subject for this study, spending more than 20 hours in the confines of an fMRI machine listening to audio clips while the machine took detailed pictures of his brain.

An AI model analyzed his brain activity and the audio he was listening to, and over time it was eventually able to predict the words he was hearing just by monitoring his brain.

The researchers used San Francisco-based OpenAI’s first language model, GPT-1, which was developed using a huge database of books and websites. By analyzing all of this data, the model learned how to construct sentences – basically how humans talk and think.

The researchers trained artificial intelligence to analyze the activity in the brains of Huth and other volunteers while they listened to specific words. Eventually, the AI learned enough that it could predict what Huth and the others were listening to or watching just by monitoring their brain activity.

I spent less than half an hour in the machine and, as expected, the AI was unable to decipher that I was listening to a portion of The Wizard of Oz audiobook describing Dorothy making her way along the yellow brick road.

Before entering the fMRI machine, CNN reporter Donie O'Sullivan was given specialized earphones to listen to an audiobook during a brain scan.

Huth listened to the same audio, but because the AI model had been trained on his brain, it was able to accurately predict which parts of the audio he was hearing.

While the technology shows great promise, it is still in its infancy, and its limitations may come as a relief to some. Artificial intelligence cannot easily read our minds. At least, not yet.

“The real potential application of this is in helping people who are unable to communicate,” Huth explained.

He and other researchers at UT Austin believe the innovative technology could be used in the future by people with “locked-in” syndrome, stroke victims and others whose brains function but who are unable to speak.

“Our demonstration is the first evidence that we can get this level of accuracy without brain surgery. So we think this is one step along that path to help people who are unable to speak without having neurosurgery,” he said.

While a breakthrough medical advance is undoubtedly good news and may change the lives of patients with debilitating illnesses, it also raises questions about how the technology can be applied in controversial settings.

Can it be used to extract a confession from a prisoner? Or to reveal our deepest darkest secrets?

The short answer, Huth and his colleagues say, is no — not yet.

For starters, brain scans need to be performed in an fMRI machine, AI technology needs to be trained on an individual’s brain for many hours, and according to the Texas researchers, people need to give their consent. If someone resists listening to the audio or thinks of something else, the brain scans won’t work.

“We believe that everyone’s brain data should be kept secret,” said Jerry Tang, lead author of the paper, published earlier this month, detailing his team’s findings. “Our brains are kind of the final frontier of our privacy.”

Tang explained, “There are clearly concerns that brain decoding technology could be used in dangerous ways.” Brain decoding is the term researchers prefer to use rather than mind reading.

“I feel like mind reading conjures up the idea of getting at the little thoughts that you don’t want to let slip, like reactions to things. And I don’t think there’s any suggestion that we can really do that with this kind of approach,” Huth explained. “What we can get at is the big ideas that you’re thinking about. The story someone is telling you; if you’re trying to tell a story inside your head, we can get at that, too.”

Last week, makers of generative AI systems, including OpenAI CEO Sam Altman, took to Capitol Hill to testify before a Senate committee about lawmakers’ concerns about the risks posed by powerful technology. Altman warned that developing artificial intelligence without guardrails could “cause great harm to the world” and urged lawmakers to implement regulations to address the concerns.

Echoing Altman’s warning, Tang told CNN that lawmakers need to take “mental privacy” seriously to protect “brain data” (our thoughts), two of the most dystopian terms I’ve heard in the age of AI.

While the technology currently works only in very limited circumstances, that may not always be the case.

“It’s important not to have a false sense of security and to think that things will stay this way forever,” warned Tang. “Technology can improve and that can change how well we can decode and change whether decoders require a person’s cooperation.”


