Researchers at MIT’s Media Lab are giving a “voice” to that voice inside your head. The team has come up with a way to communicate with a computer system by having users “speak silently” to themselves, or subvocalize. The device, called AlterEgo, uses electrodes to read the neuromuscular signals from the user’s internal speech organs when they “say” words in their head. Once captured, the signals are fed to a machine-learning system that has been trained to associate certain signals with certain words. So far, the system can recognize about 100 words, but the team hopes to expand its vocabulary.

“Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?” says Arnav Kapur, who led the system’s development.

The neuromuscular signals are undetectable to the human eye, so to the outside world it would look like someone is simply walking down the street, when in fact they could be communicating with the computer system. However, Kapur stresses that this doesn’t mean the device is reading your mind, since “silently speaking” is a voluntary action on the user’s part.

AlterEgo has so far been used to solve arithmetic problems, play chess, ask basic questions and control a Roku streaming interface. In the arithmetic example, the device had an average transcription accuracy of 92% over the course of a 10-person trial.

Kapur and his team began the project as a way to make communicating with AI agents less disruptive to everyday interactions, as well as more private. In the future, AlterEgo could be used for real-time translation, as an aid for people with speech impairments, and even in military operations where stealth is paramount.
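The article doesn’t detail AlterEgo’s model, but the pipeline it describes (electrode signals in, a trained classifier out, choosing a word from a roughly 100-word vocabulary) can be sketched in rough outline. The window size, channel count, features and nearest-centroid classifier below are illustrative assumptions for demonstration, not MIT’s actual implementation:

```python
# Illustrative sketch only: maps windows of simulated neuromuscular signals to
# words from a small vocabulary, standing in for the signal-to-word classifier
# the article describes. All parameters here are assumptions, not AlterEgo's.
import numpy as np

VOCAB = ["one", "two", "plus", "minus", "equals"]  # tiny stand-in vocabulary
WINDOW = 250    # assumed samples per silent-speech window
CHANNELS = 7    # assumed number of electrode channels

rng = np.random.default_rng(0)

def extract_features(signal):
    """Reduce a (CHANNELS, WINDOW) signal window to a small feature vector:
    per-channel mean absolute amplitude and variance."""
    return np.concatenate([np.abs(signal).mean(axis=1), signal.var(axis=1)])

# Synthetic training data: each word gets a characteristic (random) signal pattern.
templates = {w: rng.normal(size=(CHANNELS, WINDOW)) for w in VOCAB}

def simulate_utterance(word, noise=0.3):
    """Fake an electrode recording of the user silently 'saying' a word."""
    return templates[word] + noise * rng.normal(size=(CHANNELS, WINDOW))

# "Training": compute an average feature vector (centroid) per word.
centroids = {
    w: np.mean([extract_features(simulate_utterance(w)) for _ in range(20)], axis=0)
    for w in VOCAB
}

def recognize(signal):
    """Classify a signal window as the vocabulary word with the nearest centroid."""
    feats = extract_features(signal)
    return min(VOCAB, key=lambda w: np.linalg.norm(feats - centroids[w]))

# Quick check on held-out synthetic utterances.
tests = [(w, simulate_utterance(w)) for w in VOCAB for _ in range(10)]
accuracy = np.mean([recognize(sig) == w for w, sig in tests])
print(f"Recognition accuracy on synthetic data: {accuracy:.0%}")
```

The real system would of course train on recordings from actual users rather than synthetic templates, and the reported 92% transcription accuracy refers to that trained system in the 10-person arithmetic trial, not to a toy classifier like this one.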