Memory reconsolidation for natural language processing
Authors:Kun Tu  David G Cooper  Hava T Siegelmann
Institution:(1) BINDS laboratory, Department of Computer Science, University of Massachusetts Amherst, 140 Governor’s Drive, Amherst, MA 01003, USA;(2) College of Computer Science and Engineering, South China University of Technology, 510640 Guangzhou, People’s Republic of China;(3) Program of Evolutionary Dynamics, Harvard University, One Brattle Square, Suite 6, Cambridge, MA 02138-3758, USA;
Abstract:We propose a model of memory reconsolidation that can output new sentences with additional meaning after refining information from input sentences and integrating it with related prior experience. Our model uses available technology to first disambiguate the meanings of words and then extract information from the sentences into a structure that extends semantic networks. Within our long-term memory we introduce an action relationship database, reminiscent of the way symbols are associated in the brain, and propose an adaptive mechanism for linking these actions with different scenarios. The model then fills in the implicit context of the input and, based on the statistical action relationship database, predicts relevant activities that could occur in that context. The new data, both the more complete scenario and the statistical relationships among the activities, are reconsolidated into memory. Experiments show that our model improves upon ConceptNet, the existing reasoning tool developed by the MIT Media Lab.
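The abstract's core prediction step, ranking which activities are likely in a given scenario from a statistical action relationship database, can be illustrated with a minimal sketch. All names here (`ActionRelationshipDB`, `observe`, `predict`) are hypothetical illustrations, not the authors' actual implementation; the sketch assumes the database stores scenario–action co-occurrence counts and ranks actions by the conditional probability P(action | scenario):

```python
from collections import Counter, defaultdict

class ActionRelationshipDB:
    """Toy statistical action-relationship store (illustrative only):
    counts how often actions co-occur with scenarios, then ranks
    candidate actions for a scenario by P(action | scenario)."""

    def __init__(self):
        # scenario -> Counter of observed actions
        self.counts = defaultdict(Counter)

    def observe(self, scenario, action):
        """Record one observed (scenario, action) co-occurrence."""
        self.counts[scenario][action] += 1

    def predict(self, scenario, top_n=1):
        """Return up to top_n (action, probability) pairs, most likely first."""
        c = self.counts[scenario]
        total = sum(c.values())
        if total == 0:
            return []
        return [(action, n / total) for action, n in c.most_common(top_n)]

db = ActionRelationshipDB()
for action in ["order food", "order food", "pay bill"]:
    db.observe("restaurant", action)
print(db.predict("restaurant"))
```

In the paper's setting, such conditional statistics would be learned from many reconsolidation episodes rather than hand-fed observations, and the predicted activities would then be merged back into the stored scenario.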
Keywords:Memory reconsolidation  Natural language processing  Semantic network  Bayesian inference
This article is indexed in SpringerLink and other databases.