Fund: Supported by the National Key Special Project on "Key Scientific Issues of Transformative Technologies" (2017YFA0701302).
Received: 2023-02-22
Revised: 2023-12-06

Remote Virtual Companion via Tactile Codes and Voices for The People With Visual Impairment
GE Song, HUANG Xuan-Tuo, LIN Yan-Ni, LI Yan-Cheng, DONG Wen-Tian, DANG Wei-Min, XU Jing-Jing, YI Ming and XU Sheng-Yong. Remote Virtual Companion via Tactile Codes and Voices for The People With Visual Impairment[J]. Progress In Biochemistry and Biophysics, 2024, 51(1): 158-176
Authors:GE Song  HUANG Xuan-Tuo  LIN Yan-Ni  LI Yan-Cheng  DONG Wen-Tian  DANG Wei-Min  XU Jing-Jing  YI Ming  XU Sheng-Yong
Affiliation:1) Key Laboratory for the Physics & Chemistry of Nanodevices, School of Electronics, Peking University, Beijing 100871, China; 2) School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China; 3) Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China; 4) School of Microelectronics, Shandong University, Jinan 250100, China; 5) Key Laboratory for Neuroscience, Neuroscience Research Institute, Department of Neurobiology, School of Basic Medical Sciences, and School of Public Health, Peking University, Beijing 100191, China
Abstract:Objective Existing artificial vision devices fall into two types, implanted devices and extracorporeal assistive devices, and both have drawbacks. The former require surgical implantation, which can cause irreversible trauma; the latter offer relatively simple instructions, cover limited application scenarios, and rely too heavily on the judgment of artificial intelligence (AI) to guarantee sufficient safety. Here we propose a system that converts information about the surrounding environment into tactile commands delivered to the head and neck, supplemented by voice interaction. Compared with existing extracorporeal devices, it provides better effectiveness, safety, and information capacity, together with lower cost, lower risk, and suitability for a wide variety of life and work scenarios. Methods Using the latest remote wireless communication and chip technologies, the miniature electronic devices, cameras, and sensors worn by the front-end user, and the large databases and computing power of the cloud, backend staff can obtain a full, real-time view of the remote scene (for example, across a city), its environmental parameters, and the user's status. By comparing cloud and in-memory databases and combining AI-assisted recognition with human analysis, they can quickly determine the most reasonable course of action and promptly send instructions to the user, realizing navigation for the blind. In addition, the backend staff can provide humanistic care and emotional support through voice dialog. Results This study proposes the concept of a "remote virtual companion" for the first time and demonstrates the corresponding hardware and software, together with tests in a variety of everyday scenarios. Beyond basic guidance, such as helping a visually impaired person shop in a supermarket, find a seat in a café, or walk along a street, the system supports more complex tasks such as assembling jigsaw puzzles and playing cards, and it can keep up with relatively fast movement such as cycling. Conclusion The experimental results show that this "remote virtual companion" suits a wide range of scenarios and needs. It can help visually impaired people travel, shop, and entertain themselves, and can also accompany the elderly on trips or assist wilderness exploration and travel, giving it broad prospects for development and application.
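To make the idea of "tactile commands on the head and neck" concrete, the sketch below shows one possible tactile code table in Python: a small vocabulary of navigation instructions rendered as short vibration-pulse sequences. The abstract does not publish the authors' actual code set, so the actuator positions, pulse timings, and the drive_motor() helper here are illustrative assumptions only.

```python
# A minimal, hypothetical sketch of a tactile code table, assuming a set of
# vibration actuators worn on the head/neck. Pulse patterns and positions
# are assumptions for illustration, not the paper's published codes.

import time
from dataclasses import dataclass

@dataclass
class Pulse:
    actuator: str    # which motor fires: "front", "back", "left", "right"
    duration: float  # vibration length in seconds

# Hypothetical tactile vocabulary: each remote instruction becomes a short
# pulse sequence the wearer learns to recognize without sound or sight.
TACTILE_CODES = {
    "go_forward": [Pulse("front", 0.2)],
    "turn_left":  [Pulse("left", 0.2), Pulse("left", 0.2)],
    "turn_right": [Pulse("right", 0.2), Pulse("right", 0.2)],
    "stop":       [Pulse("front", 0.6)],               # one long pulse = stop now
    "obstacle":   [Pulse("front", 0.1), Pulse("back", 0.1)],
}

def play(command: str, drive_motor) -> None:
    """Render one command as vibrations; drive_motor(actuator, seconds)
    stands in for the real motor-driver call on the wearable."""
    for pulse in TACTILE_CODES[command]:
        drive_motor(pulse.actuator, pulse.duration)
        time.sleep(0.1)  # short gap keeps consecutive pulses distinguishable

if __name__ == "__main__":
    # Example: print pulses instead of driving hardware.
    play("turn_left", lambda a, s: print(f"buzz {a} for {s:.1f} s"))
```

A design note implied by the abstract: because the codes are felt rather than heard, they leave the user's hearing free for the voice-dialog channel that provides companionship and clarification.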
Keywords:artificial visual aid  remote virtual companion  tactile code  visually impaired users  navigation
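The Methods paragraph describes a round trip: the wearable streams scene data to the cloud, a backend combining human judgment with AI recognition chooses an action, and a command returns to be rendered as a tactile code. The sketch below illustrates one plausible message shape for that loop; the field names, transport, and fail-safe behavior are assumptions, not the authors' protocol.

```python
# A purely illustrative sketch of the remote companion loop: uplink packages
# a camera frame plus sensor readings, downlink carries one command chosen by
# the backend. All message fields here are hypothetical.

import json

def make_uplink(frame_id: int, jpeg_bytes: bytes, gps: tuple[float, float]) -> bytes:
    """Package metadata for one camera frame; the image payload itself would
    travel on a separate media channel (e.g., a video stream)."""
    return json.dumps({
        "frame": frame_id,
        "image_size": len(jpeg_bytes),
        "gps": gps,
    }).encode()

def handle_downlink(message: bytes) -> str:
    """Backend reply -> one tactile command name (see TACTILE_CODES above)."""
    cmd = json.loads(message)
    return cmd.get("command", "stop")  # fail safe: a missing command means stop

# Example round trip with a faked backend reply.
print(make_uplink(1, b"\xff\xd8...", (39.99, 116.30)))
reply = json.dumps({"command": "turn_right", "reason": "curb ahead"}).encode()
print(handle_downlink(reply))  # -> turn_right
```

Defaulting an unreadable reply to "stop" reflects the abstract's emphasis on safety: when the remote link or the backend is in doubt, the user is halted rather than guided blindly.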