Researchers at the USC Viterbi School of Engineering are using generative adversarial networks (GANs) -- technology best known for creating deepfake videos and photorealistic human faces -- to improve brain-computer interfaces for people with disabilities.
In a paper published in Nature Biomedical Engineering, the team successfully taught an AI to generate synthetic brain activity data. The data, specifically neural signals called spike trains, can be fed into machine-learning algorithms to improve the usability of brain-computer interfaces (BCI).
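As a rough illustration of the idea (not the authors' published model), the sketch below trains a toy GAN whose generator produces synthetic binned spike-count trials while a discriminator tries to tell them apart from recorded ones. The PyTorch architecture, all dimensions, and the Poisson stand-in for real recordings are assumptions made purely for this example.

```python
# Minimal GAN sketch for synthetic spike-count data (illustrative assumptions only).
import torch
import torch.nn as nn

N_NEURONS, N_BINS, LATENT_DIM = 32, 50, 16  # assumed sizes, not from the paper

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, N_NEURONS * N_BINS), nn.Softplus(),  # non-negative "firing rates"
        )
    def forward(self, z):
        return self.net(z).view(-1, N_NEURONS, N_BINS)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(N_NEURONS * N_BINS, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )
    def forward(self, x):
        return self.net(x)

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):  # toy training loop
    # Stand-in for recorded spike trains: Poisson counts at a fixed rate.
    real = torch.poisson(torch.full((64, N_NEURONS, N_BINS), 3.0))
    fake = gen(torch.randn(64, LATENT_DIM))

    # Discriminator update: label real trials 1, generated trials 0.
    d_loss = bce(disc(real), torch.ones(64, 1)) + bce(disc(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_loss = bce(disc(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Synthetic trials drawn from such a generator could then be mixed with a small set of recorded trials before training a BCI decoder, which is the data-augmentation role the study describes.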
BCI systems work by analyzing a person's brain signals and translating that neural activity into commands, allowing the user to control digital devices like computer cursors using only their thoughts. These devices can improve quality of life for people with motor dysfunction or paralysis, even those struggling with locked-in syndrome -- when a person is fully conscious but unable to move or communicate.
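To make the "signals into commands" step concrete, here is a minimal, hypothetical decoding sketch: a linear map fitted from binned spike counts to a two-dimensional cursor velocity. Every name, size, and the synthetic data are illustrative assumptions; real BCI decoders are considerably more sophisticated (Kalman filters, neural networks, and the like).

```python
# Toy linear decoder: binned spike counts -> 2-D cursor velocity (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons = 500, 32                                   # assumed sizes
spikes = rng.poisson(3.0, size=(n_trials, n_neurons)).astype(float)   # binned spike counts
true_w = rng.normal(size=(n_neurons, 2))
velocity = spikes @ true_w + rng.normal(scale=0.5, size=(n_trials, 2))  # toy "intended movement"

# Fit a linear map from neural activity to cursor velocity (with an intercept term).
X = np.hstack([spikes, np.ones((n_trials, 1))])
W, *_ = np.linalg.lstsq(X, velocity, rcond=None)

# At run time, each new bin of spike counts is translated into a cursor command.
new_bin = rng.poisson(3.0, size=n_neurons).astype(float)
vx, vy = np.append(new_bin, 1.0) @ W
print(f"decoded cursor velocity: ({vx:.2f}, {vy:.2f})")
```

The point of the example is only the pipeline shape: record activity, fit a decoder, then map each new window of activity to a control command for a device such as a cursor.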
Various forms of BCI are already available, from caps that measure brain signals to devices implanted in brain tissues. New use cases are being identified all the time, from neurorehabilitation to treating depression. But despite all of this promise, it has proved challenging to make these systems fast and robust enough for the real world.
Specifically, to make sense of their inputs, BCIs need huge amounts of neural data and long periods of training, calibration and learning.
"Getting enough data for the algorithms that power BCIs can be difficult, expensive, or even impossible if paralyzed individuals are not able to produce sufficiently robust brain signals," said Laurent Itti, a computer science professor and study co-author.
Original article: https://www.sciencedaily.com/releases/2021/11/211118203621.htm