

Combining Mixed Reality Tech With Brain Signals Could Improve Rehabilitative Medicine


A new headset and software platform lets researchers and developers of mixed reality programs incorporate signals from the brain and body into their technology, a capability with wide potential in rehabilitative medicine.

Virtual and augmented reality have already been used to treat patients with a range of psychiatric disorders, including ADHD, PTSD and anxiety, and the new technology could make those treatments more effective.

“You can use virtual reality to put people into those environments and throttle how intense the experience is, but if you could know how intense the reaction is that someone is having, you can decide whether to dial it up a little bit, or dial it down a little bit just to make sure that they're not overwhelmed by the immersion,” says Conor Russomanno, CEO of OpenBCI, a Brooklyn-based startup that developed the Galea headset.
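
As a rough illustration of the closed loop Russomanno describes, the Python sketch below nudges the intensity of a virtual environment up or down based on how strongly the user is reacting. The functions read_arousal and set_scene_intensity are hypothetical placeholders for whatever biosensing and game-engine hooks a developer actually uses; nothing here is OpenBCI's API.

```python
# Hypothetical closed-loop "intensity throttle" of the kind described above.
# read_arousal() and set_scene_intensity() are placeholders, not a real API.
import time

TARGET_AROUSAL = 0.6   # desired reaction level, normalized 0..1
STEP = 0.05            # how far to dial the scene up or down per update

def run_session(read_arousal, set_scene_intensity, duration_s=300):
    intensity = 0.5                       # start the environment at a moderate level
    start = time.time()
    while time.time() - start < duration_s:
        arousal = read_arousal()          # e.g. derived from heart rate and skin sweat
        if arousal > TARGET_AROUSAL + 0.1:
            intensity = max(0.0, intensity - STEP)   # overwhelmed: dial it down
        elif arousal < TARGET_AROUSAL - 0.1:
            intensity = min(1.0, intensity + STEP)   # under-stimulated: dial it up
        set_scene_intensity(intensity)
        time.sleep(1.0)                   # update once per second
```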

Another application of the technology is to give people with spinal injuries or other forms of paralysis more freedom, and to help those who have lost limbs better control their prosthetics.

There are different types of brain-computer interfaces, and many involve implanting electrodes directly into the brain or spinal cord to give people the ability to control a computer or an artificial limb. The OpenBCI technology, by contrast, relies on non-invasive electroencephalogram (EEG) readings, taken from sensors on a cap worn on the head, to monitor brain activity.
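
OpenBCI boards are commonly read through BrainFlow, an open source library widely used with the company's hardware. The sketch below streams a few seconds of EEG; it uses BrainFlow's built-in synthetic board so it runs without hardware, an illustrative choice rather than anything described in the article.

```python
# Sketch: stream non-invasive EEG through BrainFlow, an open source library
# widely used with OpenBCI boards. The synthetic board is used so this runs
# without hardware; a real headset would pass its own board ID and serial port.
import time
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

params = BrainFlowInputParams()
board = BoardShim(BoardIds.SYNTHETIC_BOARD.value, params)

board.prepare_session()
board.start_stream()
time.sleep(5)                       # collect roughly five seconds of data
data = board.get_board_data()       # 2-D array: rows are channels, columns are samples
board.stop_stream()
board.release_session()

eeg_rows = BoardShim.get_eeg_channels(BoardIds.SYNTHETIC_BOARD.value)
print(f"{len(eeg_rows)} EEG channels, {data.shape[1]} samples captured")
```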

“The amount of control and the fidelity you have obviously increases with the signal quality, and the best signal quality you're going to get is by putting electrodes down into the brain,” says Russomanno.

“But, a lot of the magic happens in the classification and the machine learning on a case-by-case, person-by-person basis. And so, this is where I think that BCI technology is going to be used for personalizing control.”

The non-invasive nature of this technology is, of course, a big attraction. If the artificial intelligence and machine learning side of the software can make up for the reduced signal quality, it has great potential to help the people who need it.
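
To make the person-by-person idea concrete, here is a minimal, hypothetical calibration sketch: a lightweight classifier (linear discriminant analysis, a common choice in BCI research) is trained on labelled EEG feature windows from a single user. The random arrays below merely stand in for that user's real recordings.

```python
# Illustrative per-user calibration (not OpenBCI code): train a classifier on one
# person's labelled EEG feature windows, e.g. band-power features recorded while
# they imagine "left hand" vs. "right hand" movement.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: 200 windows x 16 band-power features for this one user.
X_user = rng.normal(size=(200, 16))
y_user = rng.integers(0, 2, size=200)          # 0 = "left" imagery, 1 = "right" imagery

clf = LinearDiscriminantAnalysis()             # lightweight, widely used BCI classifier
scores = cross_val_score(clf, X_user, y_user, cv=5)
print(f"per-user decoding accuracy: {scores.mean():.2f}")

clf.fit(X_user, y_user)                        # final model personalized to this user
```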

Russomanno set up OpenBCI with one of his professors after leaving grad school six years ago, and the company has been developing inexpensive, non-invasive, open source brain-computer interface technology ever since.

That openness makes it unusual: most tech companies keep the technology behind their products under high levels of secrecy. “I think it's super important that that innovation takes place in the public domain, in a way where people with a variety of backgrounds and disciplines can contribute,” emphasizes Russomanno.

“It doesn't take place behind closed doors where the incentives of what's being put into the world, and what's being used by the users, can be misaligned with the best interest of the users themselves.”

Since the company started, many tech companies and researchers have used the OpenBCI tech to develop their own products and innovations. A key example is IBM Research, where Stefan Harrer, a Senior Technical Staff Member who leads epilepsy research at the company, tested whether the OpenBCI system could allow a user to move a robotic arm with their brain.

“This holds significant potential for one day enabling individuals with the option to naturally control artificial limbs,” said Harrer.

“For people who have suffered from conditions such as stroke or spinal cord injuries, such an interface could one day allow them to use their brains to translate and execute actions for robots to perform.”
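
The article does not describe how the IBM Research experiment was wired up, so the following is only a generic sketch of the last step in such a pipeline: decoded mental commands are mapped to discrete robotic-arm actions, with a confidence threshold so ambiguous predictions do nothing. The mapping and the send_to_arm callback are placeholders, not a real robot interface.

```python
# Generic sketch: map decoded intents to discrete robotic-arm actions.
# INTENT_TO_ACTION and send_to_arm are placeholders, not a real robot API.
INTENT_TO_ACTION = {
    "left":  {"joint": "shoulder", "delta_deg": -5},
    "right": {"joint": "shoulder", "delta_deg": +5},
    "grip":  {"joint": "gripper",  "delta_deg": 0},
}

def dispatch(intent: str, confidence: float, send_to_arm, threshold: float = 0.8):
    """Forward a decoded intent to the arm only when the classifier is confident."""
    if confidence < threshold or intent not in INTENT_TO_ACTION:
        return None                  # ignore uncertain or unknown predictions
    command = INTENT_TO_ACTION[intent]
    send_to_arm(command)             # e.g. publish to the arm's control interface
    return command
```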

The Galea headset is the next step for OpenBCI, as it incorporates sensors that take a variety of biological readings and feed them back into the accompanying software platform. In addition to recording ‘brain waves’ using electroencephalogram technology, it can also monitor eye movements, muscle movements in the face, moisture on the skin from sweating, and heart rate.
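
As an illustration only (this is not the Galea SDK), a single time-stamped, multi-modal sample bundling the signal types listed above might be structured like this:

```python
# Hypothetical container for one multi-modal sample, mirroring the signals the
# article lists for the headset. Field names and units are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class BiosignalSample:
    timestamp: float        # seconds since the session started
    eeg_uv: List[float]     # EEG channel voltages ("brain waves"), microvolts
    eog_uv: List[float]     # electrooculogram channels, tracking eye movements
    emg_uv: List[float]     # facial muscle activity
    eda_us: float           # electrodermal activity (sweating), microsiemens
    heart_rate_bpm: float   # heart rate, beats per minute

sample = BiosignalSample(0.0, [12.1] * 8, [3.4, -2.1], [5.0] * 4, 0.7, 72.0)
print(sample.heart_rate_bpm)
```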

The technology is still at an early stage, so the first users will largely be mixed reality game and program developers, as well as researchers working on neurorehabilitative technology. However, Russomanno thinks it is just the start of a revolution in how computer interfaces are developed.

“This whole idea of tapping directly into human consciousness and building tools and devices that can listen to the internal state of a player, a user, or a patient is an extremely powerful technology that could be used for both good and bad. I think that it's super important that people understand the implications of what we're building, and how it's going to impact humanity.”
