How brain-computer interfaces are turning thoughts into text


Facebook (as it was then known, before it entered the Metaverse) made headlines when it started funding brain-computer interface (BCI) technology, searching for a way to allow users to create text just by thinking it. Facebook was interested in creating a new way of interacting with technology – a system where a user could simply imagine speaking, and a device would turn those electrical impulses into text.

Facebook had hoped to create a head-worn device that could capture language, translating electrical signals from the brain into digital information. Despite the fascinating premise of a social media company developing potentially the first consumer BCI for language use, the company decided to step back from the project last year, open-sourcing its existing language research and concentrating instead on BCIs that capture nerve signals related to movement, rather than language.

But while Facebook stepped back, a number of labs are stepping forward, making breakthroughs in turning brain signals into text or speech. These projects are gathering data straight from the source, using electrodes directly in contact with the brain's surface. That's because, unlike systems that rely on wearable devices, BCIs that use implanted electrodes give a better signal-to-noise ratio and allow for far more detailed and specific recordings of brain activity.

SEE: What is a brain-computer interface? Everything you need to know

Facebook's research partner UCSF announced last year that its Chang lab, named after the neurosurgeon Edward Chang, who heads the facility, had created a working thought-to-text BCI as part of a research trial.

The system uses sensors contained in a polymer sheet which, when laid onto the surface of the brain, can pick up the user's neural signals. That information is then decoded by machine-learning systems to create the words the user wants to say.

The first user of the system was a man who had suffered a stroke in his brain stem, which left him with extremely limited head, neck and limb movement and unable to speak. Since the stroke, he had communicated by moving his head, using a pointer attached to a baseball cap to touch letters on a screen.

Typically, signals are carried from the brain to the speech muscles through nerves – think of nerves as the electrical wires of the brain. In the case of the trial participant, that wiring had effectively been cut between the brain and the vocal muscles. When he tried to speak, the signals were formed, but couldn't reach their destination. The BCI picks up those signals directly from the speech cortex of the brain, analyses them to work out which of the muscles related to speech the participant is trying to move, and uses that to infer the words he wants to say, converting those would-be muscle movements into electronic speech. That way, the participant could communicate more quickly and naturally than he had in the 15 years since his stroke.

The trial participant can say any of 50 words that the system is able to recognise. The words were chosen by UCSF researchers because they were either common, related to caregiving, or simply words the participant wanted to be able to say – such as 'family', 'good', or 'water'.

To create a working BCI, the system had to be trained to recognise which signals correlated with which words. To achieve that, the participant had to practise saying each word nearly 200 times, creating a dataset large enough for the BCI software to learn from. Signals were sampled from the 128-channel electrode array on his brain and interpreted by an artificial neural network – a non-linear model that can learn complex patterns in brain activity and relate them to the intended speech.
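As a rough illustration of that training step, the sketch below fits a small neural network to classify multichannel recordings into one of 50 words. Everything here beyond the channel count, vocabulary size and trial count reported above – the synthetic data, the window length, the network size – is an assumption for demonstration; the actual UCSF pipeline is far more elaborate.

```python
# Minimal sketch: train a classifier to map multichannel neural windows to
# words. Synthetic random data stands in for real cortical recordings, so
# the accuracy printed here is meaningless; the point is the shape of the task.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

N_CHANNELS = 128       # electrodes in the array (from the article)
N_TIMESTEPS = 20       # samples per attempted-word window (assumed)
N_WORDS = 50           # vocabulary size (from the article)
TRIALS_PER_WORD = 200  # ~200 repetitions per word (from the article)

rng = np.random.default_rng(0)
# Each trial: a (channels x time) window of activity, flattened to a vector.
X = rng.normal(size=(N_WORDS * TRIALS_PER_WORD,
                     N_CHANNELS * N_TIMESTEPS)).astype("float32")
y = np.repeat(np.arange(N_WORDS), TRIALS_PER_WORD)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# A small non-linear neural network standing in for the word decoder.
decoder = MLPClassifier(hidden_layer_sizes=(128,), max_iter=20)
decoder.fit(X_train, y_train)
print("held-out accuracy:", decoder.score(X_test, y_test))
```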

When the user tried to say a sentence word by word, a language model would predict how likely it was that he was trying to say each of the 50 words, and how those words were likely to be combined in a sentence – for example, understanding that 'how are you?' is a more likely sentence than 'how are good?', even though both use fairly similar speech muscles – to give the final output of real-time speech. The system was able to decode the participant's intended speech at a rate of up to 18 words per minute and with up to 93% accuracy.
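The sketch below shows one common way to combine a decoder's per-word probabilities with a language-model prior: a small beam search. The toy vocabulary, bigram scores and probabilities are invented for illustration and are not the UCSF model, but they reproduce the 'how are you' versus 'how are good' example above.

```python
# Illustrative sketch: a language-model prior rescoring a neural decoder's
# per-word probabilities via beam search. All numbers here are toy values.
import math

# Toy bigram language model: P(next word | previous word).
BIGRAM = {
    ("<s>", "how"): 0.5, ("how", "are"): 0.6,
    ("are", "you"): 0.7, ("are", "good"): 0.05,
}

def lm_logprob(prev, word):
    # Unseen bigrams get a small floor probability.
    return math.log(BIGRAM.get((prev, word), 1e-4))

def decode(step_probs, beam_width=3):
    """step_probs: list of dicts mapping word -> P(word | neural signal).
    Returns the best word sequence under decoder score + LM prior."""
    beams = [(["<s>"], 0.0)]  # (sequence, cumulative log score)
    for probs in step_probs:
        candidates = []
        for seq, score in beams:
            for word, p in probs.items():
                s = score + math.log(p) + lm_logprob(seq[-1], word)
                candidates.append((seq + [word], s))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0][1:]  # drop the <s> start marker

# The decoder slightly prefers "good" over "you" at the last step (similar
# muscle movements), but the language model settles it in favour of "you".
steps = [
    {"how": 0.9, "are": 0.1},
    {"are": 0.8, "you": 0.2},
    {"you": 0.45, "good": 0.55},
]
print(decode(steps))  # ['how', 'are', 'you']
```

This decoder-plus-prior structure is standard in speech recognition: the language model contributes sentence-level context that the neural signals alone can't provide.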

The UCSF team are now hoping to expand the use of the trial system to new participants. There are a lot of people who inquire about taking part in UCSF's thought-to-speech BCI research, according to David Moses, a post-doctoral engineer in the Chang lab and one of the lead authors on the research project. "It does take the right person. There are a lot of inclusion criteria about, not just what kind of disability the person has, but also their general health and other factors. It's also really important that they understand that it's a research study, and it's not a guarantee that the technology will directly benefit them, at least in the near future. It takes a special kind of person," he told ZDNet.

Most of the arrays used in human trials of invasive BCIs – where electrodes are laid directly onto the surface of the brain – are made by one company, Blackrock Neurotech.

Blackrock Neurotech is also working on language applications for BCIs. Instead of reading the signals sent to speech muscles, as in the UCSF trial, the company has created a system based on imagined handwriting: the user mentally pictures writing an 'A', and the system converts it to written text using an algorithm developed at Stanford University. It currently works at around 90 characters per minute, and the company hopes eventually to take that up to 200 – around the speed at which the average person writes by hand.
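To make the imagined-handwriting idea concrete, here is a deliberately simple sketch: each attempted letter produces a window of neural activity, which is matched against a per-letter template, and the winners are joined into text. The synthetic features and nearest-template classifier are illustrative assumptions only; the actual Stanford decoder described in the published research is a far more capable recurrent neural network.

```python
# Toy sketch of imagined-handwriting decoding: classify each attempted-letter
# window of "neural" activity against per-letter templates, then join the
# results into text. The 64-dimensional features are invented for this demo.
import string
import numpy as np

rng = np.random.default_rng(1)
LETTERS = list(string.ascii_lowercase)

# Pretend each letter evokes a characteristic 64-dimensional activity pattern.
templates = {c: rng.normal(size=64) for c in LETTERS}

def decode_letter(window: np.ndarray) -> str:
    """Return the letter whose template is closest to this activity window."""
    return min(LETTERS, key=lambda c: np.linalg.norm(window - templates[c]))

# Simulate a user "writing" the word 'cat', with noise on each attempt.
windows = [templates[c] + 0.3 * rng.normal(size=64) for c in "cat"]
print("".join(decode_letter(w) for w in windows))  # 'cat'
```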

The system is perhaps one of the closest to commercialisation, and is likely to be used by people with conditions such as amyotrophic lateral sclerosis (ALS), a terminal disease also known as Lou Gehrig's disease or motor neurone disease. In its later stages, ALS can cause locked-in syndrome, where a person can't use any of their muscles to move, speak, swallow or even blink, while their mind remains as active as it always was. BCIs such as Blackrock Neurotech's are intended to offer a way for people with ALS or locked-in syndrome, which can also be caused by certain strokes, to continue communicating.

"We've had cases where the neural interface has spelled a word that the auto-speller kept correcting, and the participants indicated that it was a word that they had created when they started dating [their partner]. The neural interface was able to find a word that only two people in the world knew – so Siri, eat your heart out. But we will benefit from the connection to auto correction tools and machine learning helps these kind of devices get a lot better," Marcus Gerhardt, CEO and co-founder of Blackrock Neurotech told ZDNet. The system currently runs at 94% accuracy, rising to 99% accuracy once auto-correction is applied.

Though they're at a relatively early stage of development, the potential for language BCIs to improve the quality of life of patients with conditions that render them currently unable to speak is clear. While the technologies that underpin BCIs have made great strides in recent years and BCIs themselves have become faster at translating intended speech into words on a screen, much more work is needed before systems can be deployed to patient populations at large.

Needless to say, due to the novelty of BCI systems, there are concerns over data privacy and ownership that need to be addressed before widespread commercialisation. (For those worried about people's minds being read against their will: the technology can't do that. Users have to actively engage with the system by intending to speak – deliberately thinking about writing or moving muscles, for example – rather than by simply imagining speech in their head.)

Because BCIs are so new, more knowledge is also needed about their use over the long term. There's the practical consideration about how long electrodes will happily remain functional in the not-very-conducive-to-electrodes environment of the brain: Blackrock Neurotech's arrays have been in situ in humans for seven years, and the company believes 10 years is feasible.

There's also the question of long-term support, according to Mariska Vansteensel, assistant professor at UMC Utrecht's Brain Center. Regular adjustments in parameter settings will be needed to optimise systems in response to disease changes or other situations that could affect brain activity, as well as user preferences. Hardware might need replacing or updating, too. As yet, there's no agreed framework on who should manage BCI support over the long term.

SEE: This mind-controlled concept car lets you switch radio stations just by thinking about it

Perhaps the most pressing challenge for technologies like Blackrock Neurotech's and UCSF's is that they're aimed at relatively small populations of patients. At the same time, the systems themselves are specialised and expensive, and need equally specialised and expensive neurosurgery to install. If language BCIs do make it to commercialisation, their costs could mean they don't reach those who need them most.

"The topic of accessibility requires significant attention," Vansteensel told ZDNet. "If and when language BCIs are going to be made commercially available, we (and with that I mean society as a whole) need to make sure that they are accessible to all potential end-users, and not only to the more privileged. This requirement obviously also applies to the continued support that is associated with the use of the BCIs," she added.

According to Blackrock Neurotech's Gerhardt, an uptick in government funding for conditions like ALS is likely to make the development of BCIs more feasible, despite the economic challenges.

"Right now, we're quite hopeful that we'll be able to provide this technology to patient populations, such as those with locked-in syndrome due to ALS, who have a life expectancy in the range of two to five years. It's very, very tough to make an economic case for that, but we feel a certain moral obligation to provide this kind of technology," Gerhardt said.

"We've got to take this research-level software and make it a robust business-level and medical-level software. We're working on that as we speak. It's not going to take decades and it may not even take years."

Original article: https://www.zdnet.com/article/how-brain-computer-interfaces-are-turning-thoughts-into-text/
