Oticon says its new hearing aid technology uses deep neural networks to separate what wearers want to hear from background noise

Deep neural networks are one of the technology sector’s latest buzzwords. And, if anything’s immediately obvious about them, it’s how complicated they sound.

But, behind the jargon, there’s the simplest example imaginable: It’s what your brain does every day. You predict solutions and make inferences using layers and layers of previous experiences. That’s what this biologically inspired technology does, too.
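Oticon hasn’t published the design of its network, but the layered idea itself fits in a few lines of Python. The toy network below is purely hypothetical: the layer sizes, the random weights and the “speech vs. noise” score are invented for illustration and stand in for what a real network would learn from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random placeholder weights. A real deep neural network would learn
# these from millions of labeled examples instead of hard-coding rules.
W1 = rng.standard_normal((4, 8))   # layer 1: raw sound features -> hidden
W2 = rng.standard_normal((8, 8))   # layer 2: hidden -> refined hidden
W3 = rng.standard_normal((8, 1))   # output: refined hidden -> one score

def relu(x):
    # Simple nonlinearity; stacking it between layers is what lets the
    # network model complex patterns rather than straight-line rules.
    return np.maximum(0.0, x)

def tiny_network(features):
    h1 = relu(features @ W1)   # first layer of "experience"
    h2 = relu(h1 @ W2)         # second layer builds on the first
    return h2 @ W3             # final inference, e.g. a speech-vs-noise score

print(tiny_network(rng.standard_normal((1, 4))))
```

Each layer transforms what the previous one produced, which is the loose parallel to a brain drawing on stacked layers of past experience.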

Gary Rosenblum. (Photo: Oticon)

And this approach to programming is now being used in a field of health care that might sound simpler than it is: hearing health. Nothing about sound is simple, said Gary Rosenblum, president of Somerset-based Oticon.

You can’t, for instance, just boost the volume of sound in a crowded room and expect someone with mild-to-severe hearing loss to easily hold a conversation. The puzzle of disparate noises can make piecing together particular sounds difficult.

“The worst hearing aid in the world — that can hardly be called a hearing aid — is one that just makes everything louder,” Rosenblum said. “So, our focus from an innovation standpoint has been around improving hearing in difficult listening situations.”

Finding a hearing device capable of sorting the important sounds from the unimportant ones in a loud situation is what Rosenblum calls the “Holy Grail” of hearing aids. The local hearing aid company believes it’s taking a big step forward on that quest by using deep neural network technology.

Oticon, a division of Denmark-based hearing health corporation Demant, last month unveiled a first for hearing aids: a device built with an on-board deep neural network. It’s marketed as technology that can use experiential learning, rather than predefined rules, to process speech sounds like a brain can.

The product, which was recently named one of the 2021 Consumer Electronics Show’s innovation honorees, uses an artificial intelligence-inspired technology trained on 12 million sound scenes drawn from noisy environments.
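Oticon hasn’t disclosed how that training works internally. As a rough, hypothetical sketch of learning from examples rather than predefined rules, here is a single-layer toy model adjusting its weights to match labeled sound examples; every number in it (feature sizes, labels, learning rate) is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy training step: instead of hand-coding rules, nudge the weights so
# the model's speech-vs-noise guesses better match labeled examples.
w = rng.standard_normal(4)                   # weights for a one-layer toy model
examples = rng.standard_normal((1000, 4))    # stand-in sound features
labels = (examples[:, 0] > 0).astype(float)  # stand-in speech/noise labels

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(100):
    predictions = sigmoid(examples @ w)
    gradient = examples.T @ (predictions - labels) / len(labels)
    w -= 0.5 * gradient  # learn from experience, one small correction at a time

accuracy = ((sigmoid(examples @ w) > 0.5) == labels).mean()
print(f"toy model accuracy after training: {accuracy:.0%}")
```

A production network would be far deeper and trained on real recordings, but the principle is the same: the behavior comes from the examples, not from a programmer’s rules.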

“A deep neural network falls under the umbrella of AI, but it’s much more specific than the AI term — which can be anything, from machine learning to an algorithm applied to a certain situation,” Rosenblum said. “In terms of what it does for a hearing aid, it means that the device can learn to identify complex patterns of sound.”

Put another way, the device uses that training to separate what you actually want to hear from unimportant background noise, the goal hearing aid developers are striving for.
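The company hasn’t said exactly how its device applies the network’s output. One common approach in the speech-enhancement literature, shown below as a hypothetical sketch, is to estimate a per-frequency “mask” that keeps speech-dominated bands and suppresses the rest; the percentile heuristic here merely stands in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a trained network: in practice, a deep neural
# network learns to predict, for each frequency band, how much of the
# energy belongs to speech.
def estimate_speech_mask(spectrum_frame):
    # Placeholder heuristic instead of real learned weights: treat the
    # strongest bands as speech-dominated. A real system replaces this
    # with a network trained on many noisy/clean example pairs.
    threshold = np.percentile(spectrum_frame, 70)
    return (spectrum_frame > threshold).astype(float)

noisy_frame = np.abs(rng.standard_normal(16))  # magnitudes in 16 frequency bands
mask = estimate_speech_mask(noisy_frame)
enhanced_frame = noisy_frame * mask  # keep speech-dominated bands, drop the rest

print("kept", int(mask.sum()), "of", mask.size, "bands")
```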

A company report cited in Oticon’s marketing materials states that wearers see a 15% improvement in speech understanding. Rosenblum said he’s been encouraged by the patient feedback so far.

He’s also thrilled about the potential for deep neural network-based technologies to advance medical technology in general, even if it’s too early to say whether the rest of the hearing aid industry will move in the same direction.

“It has only been less than a month since we launched this,” he said. “But it’s a compelling concept. … And I will tell you this, there’s a lot of envy out there because of what we’ve been able to do so far in terms of good outcomes for patients.”