Scientists can now “decipher” people’s thoughts without even touching their heads, a new study reports.
Past mind-reading techniques relied on implanting electrodes deep in people’s brains. The new method, described in a report posted September 29 to the bioRxiv preprint database, instead relies on a non-invasive brain-scanning technique called functional magnetic resonance imaging (fMRI).
fMRI tracks the flow of oxygen-rich blood through the brain, and because active brain cells need more energy and oxygen, this information provides an indirect measure of brain activity.
By its nature, this method of scanning cannot capture brain activity in real time, because the electrical signals emitted by brain cells travel faster than the flow of blood through the brain.
But remarkably, the study authors found that they could still use this imperfect proxy measure to decipher the semantic meaning of people’s thoughts, even if they could not produce word-for-word translations.
“If you had asked any cognitive neuroscientist in the world 20 years ago if this was possible, they would have laughed you out of the room,” Alexander Huth, a neuroscientist at the University of Texas at Austin, told Live Science.
Related: A ‘universal language network’ was identified in the brain
For the new study, which has yet to be peer-reviewed, the team scanned the brains of one woman and two men in their 20s and 30s. Each participant listened to a variety of podcasts and radio shows for a total of 16 hours over several sessions in the scanner.
The team then fed these scans to a computer algorithm they called a “decoder,” which compared patterns in the audio with patterns in the recorded brain activity.
The algorithm could then take an fMRI recording and generate a story based on its content, which matched the original plot of the podcast or radio show “very well,” Huth told Live Science.
In other words, the decoder could infer which story each participant heard based on their brain activity.
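The study used far more sophisticated machine-learning models, but the core idea of inferring which story a recording corresponds to can be illustrated with a toy sketch: compare a recorded activity pattern against a predicted pattern for each candidate story and pick the best match. Everything here (the story names, the random “patterns,” the correlation metric) is an illustrative assumption, not the authors’ actual method.

```python
# Toy illustration (NOT the study's actual model): a "decoder" that
# matches a recorded brain-activity pattern to the candidate story
# whose predicted pattern correlates with it most strongly.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predicted activity patterns for three candidate stories.
story_patterns = {
    "podcast_a": rng.normal(size=100),
    "podcast_b": rng.normal(size=100),
    "radio_show": rng.normal(size=100),
}

def decode(recorded, candidates):
    """Return the candidate whose pattern best correlates with the recording."""
    return max(
        candidates,
        key=lambda name: np.corrcoef(recorded, candidates[name])[0, 1],
    )

# A noisy "recording" of someone listening to podcast_b.
recording = story_patterns["podcast_b"] + rng.normal(scale=0.5, size=100)
print(decode(recording, story_patterns))  # -> podcast_b
```

Even with substantial noise, the correct story wins because its pattern remains far more correlated with the recording than the unrelated ones, which is the same intuition behind decoding from a noisy, indirect signal like fMRI.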
However, the algorithm made some mistakes, such as swapping characters’ pronouns and mixing up the first and third person. It “knows what’s happening pretty accurately, but not who’s doing things,” Huth said.
In additional tests, the algorithm was able to accurately describe the plot of a silent film that the participants watched in the scanner. It could even recount a story that the participants imagined telling in their heads.
In the long term, the research team aims to develop this technology for use in brain-computer interfaces designed for people who cannot speak or write.
Read more about the new decoder algorithm at Live Science.
This article was originally published by Live Science. Read the original article here.