
AI Moves Closer To Decoding Human Thoughts

GreenWatch Desk: Technology 2026-03-02, 10:18am

Artificial intelligence is transforming scientists’ ability to interpret the brain’s complex electrical signals, bringing researchers closer to decoding human thoughts and inner speech.

In a recent breakthrough, a 52-year-old woman who lost her ability to speak clearly after a stroke nearly two decades ago was able to see her unspoken thoughts appear as text on a screen. Identified as participant T16, she had a tiny array of electrodes surgically implanted in the front part of her brain. As she imagined speaking words, an AI-powered computer system translated her neural activity into readable sentences in real time.

The experiment was conducted by researchers at Stanford University as part of a broader study involving patients with amyotrophic lateral sclerosis (ALS), a progressive neurodegenerative disease. Scientists described the development as one of the closest steps yet toward practical “mind reading.”

The findings, unveiled in August 2025, were followed by another major advance in Japan. Researchers demonstrated a “mind captioning” technique that generated detailed descriptions of images people were viewing or imagining, using non-invasive brain scans combined with multiple AI systems.

Experts say these breakthroughs are opening an unprecedented window into the inner workings of the brain while offering new communication pathways for people who cannot speak or move.

“In the next few years, we will begin to see these technologies commercialised and deployed at scale,” said neuroengineer Maitreyee Wairagkar of the University of California, Davis, who works on brain-computer interfaces (BCIs). Several companies, including Neuralink, are pursuing commercial brain implants aimed at moving the technology from laboratories into everyday use.

BCIs are not new. Scientists have experimented with direct brain communication since the late 1960s. For decades, such systems have allowed users to control prosthetic limbs or computer cursors by decoding movement-related brain signals. However, translating speech and complex thoughts has proved far more challenging.

Progress has accelerated in recent years, especially for patients with severe communication impairments. In 2021, Stanford researchers showed that a paralysed man could form English sentences by imagining himself writing letters in the air. More recently, Wairagkar’s team demonstrated a system that converted attempted speech from an ALS patient into text at about 32 words per minute with nearly 98 percent accuracy.

These systems rely on microelectrode arrays implanted over brain regions involved in speech and movement. Machine-learning algorithms analyse vast amounts of neural data, identifying patterns linked to specific sounds or phonemes. Researchers often compare the process to voice assistants, except that instead of interpreting sound waves, the AI decodes neural activity directly.
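The decoding step described above — mapping windows of neural activity to the most likely phoneme — can be illustrated with a toy sketch. The example below is purely hypothetical: it simulates per-channel "firing rate" feature vectors for a few invented phoneme labels and decodes them with a simple nearest-centroid classifier, standing in for the far more sophisticated machine-learning models the researchers actually use.

```python
import random
from statistics import mean

# Hypothetical illustration only: simulated neural features and a
# nearest-centroid decoder. Real BCIs use large neural networks trained
# on microelectrode recordings; all names and data here are invented.

random.seed(0)

PHONEMES = ["AA", "IY", "S"]
N_CHANNELS = 16  # pretend electrode channels

# Each phoneme gets a fixed "template" pattern of channel activity.
templates = {p: [random.uniform(0.0, 5.0) for _ in range(N_CHANNELS)]
             for p in PHONEMES}

def simulate_trial(phoneme, noise=0.4):
    """One window of noisy per-channel firing rates for a phoneme."""
    return [x + random.gauss(0.0, noise) for x in templates[phoneme]]

def fit_centroids(trials):
    """Average the feature vectors for each label into a centroid."""
    centroids = {}
    for p in PHONEMES:
        vecs = [features for features, label in trials if label == p]
        centroids[p] = [mean(column) for column in zip(*vecs)]
    return centroids

def decode(features, centroids):
    """Return the phoneme whose centroid is closest to the features."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda p: sq_dist(features, centroids[p]))

# "Train" on 20 simulated trials per phoneme, then decode a new trial.
train = [(simulate_trial(p), p) for p in PHONEMES for _ in range(20)]
centroids = fit_centroids(train)
print(decode(simulate_trial("S"), centroids))
```

In practice the classifier outputs a probability distribution over phonemes for every time step, and a language model stitches those predictions into words and sentences, much as a voice assistant's decoder does for audio.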

A major challenge is that patients typically must attempt to speak for accurate decoding, which can be tiring and slow. To address this, Stanford scientists explored whether “inner speech” — words silently spoken in the mind — could also be detected.

Results were promising but limited. When participants imagined specific sentences, the system achieved accuracy rates of up to 74 percent in real time. Performance declined with spontaneous thoughts, and open-ended prompts sometimes produced incoherent output. Researchers believe inner speech activates neural pathways similar to spoken speech, though the signals are weaker.

Scientists are also working to capture the full richness of speech. In 2025, Wairagkar’s lab demonstrated decoding of tone, pitch and rhythm, enabling an ALS patient to convey emotion and emphasis. While only about 60 percent of the generated speech was judged clearly understandable, researchers say it marks progress toward more natural brain-driven communication.

Further advances are expected as electrode coverage expands. Current systems monitor only a few hundred neurons — a tiny fraction of the brain’s billions — leaving significant room for improvement in speed and accuracy.

Meanwhile, other teams are using AI to reconstruct images and sounds from brain activity. By combining functional MRI data with image-generation tools such as Stable Diffusion, scientists have recreated rough versions of images viewed by participants. Similar research aims to reconstruct music and better understand how the brain processes sound and vision.

Although fully decoding spontaneous, unfiltered thoughts remains distant, experts say the rapid pace of progress signals a profound shift. As AI continues to unlock the brain’s hidden signals, technologies once confined to science fiction are steadily moving closer to reality.