Las Vegas (AFP) – AI companies are on the hunt to design the ideal device to deliver AI’s superpowers, and some new enterprises are convinced that headphones or earbuds are the way.
For years, startups have tried to push headphones beyond their basic functions of playing music and making phone calls.
Nearly a decade ago, tech startups Waverly Labs and Mymanu added real-time translation to that list, and Google quickly followed suit, building a voice-activated AI assistant into its earbuds in 2020.
Riding the AI wave, tech industry giants Samsung and Apple have also entered the fray, and noise cancellation is now almost a product standard.
Startups, many of which are attending this week’s CES consumer electronics extravaganza in Las Vegas, are now trying to refine this technology and apply it to specific uses.
Such is the case with OSO, which wants to take the concept of a professional assistant further.
Its earbuds will record meetings and retrieve conversation elements on demand using everyday language.
Viaim, a competitor, offers similar services and intends to focus on interoperability in a world controlled by major smartphone manufacturers that impose their own platforms.
“If you use a different brand of cell phone, it doesn’t have any AI functions at all. That’s the opportunity for our earbuds,” explained Shawn Ma, CEO of Viaim, whose devices are compatible with all brands, including iPhones in China.
Timekettle, meanwhile, is enjoying success in a completely different context, with “90 percent of its sales coming from schools,” according to Brian Shircliffe, head of US sales for the Chinese company.
Many schools equip their non-English-speaking students with the devices so they can follow lessons without the need for a translator.
Reading minds
Whether earbuds can replace smart glasses, connected speakers, or even smartphones as the dominant physical extension of generative AI remains an open question.
For now, any AI functionality “is really dependent on the phone that it’s connected to,” said Ben Wood, chief analyst at CCS Insight.
“Earbuds are certainly a more accessible entry for AI than smart glasses,” said Avi Greengart, president of Techsponential, a consultancy.
“They’re a lot less expensive, they’re a product most smartphone users are buying anyway, and they don’t require a prescription.”
However, “people generally don’t wear them all the time,” unlike glasses, “and they can only interact with voice, so you’ll need to be in an environment where talking is acceptable,” the analyst cautioned, adding that the lack of a camera limits the device’s potential.

Some companies won’t be constrained by that shortcoming, notably Naqi Logix, whose Neural Earbuds are equipped with ultra-sensitive sensors that detect tiny movements.
Thanks to these sensors, a quadriplegic user can control their wheelchair or surf the internet simply by looking at their computer screen.
Operations manager Sandeep Arya sees great potential for these innovations, “because people would like to be able to interact with their environment in a more discreet, subtle way,” without having to call out to Siri on their smartphone, Alexa on their speaker, or Meta on their glasses.
Arya envisions the technology going further, thanks to improved sensors capable of deciphering facial movements that a chatbot can use to find the right tone and words according to mood.
Neurable, another startup whose MW75 Neuro LT headset measures brain activity, dreams of using its equipment to enable communication through thought, without gestures or words.
“It’s remarkable,” says Ben Wood of these breakthroughs, “but it’s still a niche market for now.”
For the time being, “the hundreds of millions of headphones that have been sold will remain focused on listening.”