A neural signature of hierarchical reinforcement learning
Authors: José J. F. Ribas-Fernandes, Alec Solway, Carlos Diuk, Joseph T. McGuire, Andrew G. Barto, Yael Niv, Matthew M. Botvinick
Institutions:
1 Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA
2 Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
3 Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
4 Department of Computer Science, University of Massachusetts Amherst, Amherst, MA 01002, USA
5 Department of Psychology, Princeton University, Princeton, NJ 08540, USA
Abstract:Human behavior displays hierarchical structure: simple actions cohere into subtask sequences, which work together to accomplish overall task goals. Although the neural substrates of such hierarchy have been the target of increasing research, they remain poorly understood. We propose that the computations supporting hierarchical behavior may relate to those in hierarchical reinforcement learning (HRL), a machine-learning framework that extends reinforcement-learning mechanisms into hierarchical domains. To test this, we leveraged a distinctive prediction arising from HRL. In ordinary reinforcement learning, reward prediction errors are computed when there is an unanticipated change in the prospects for accomplishing overall task goals. HRL entails that prediction errors should also occur in relation to task subgoals. In three neuroimaging studies we observed neural responses consistent with such subgoal-related reward prediction errors, within structures previously implicated in reinforcement learning. The results reported support the relevance of HRL to the neural processes underlying hierarchical behavior.
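The prediction the abstract leverages can be made concrete. In the options framework commonly used for HRL, each subtask (option) carries its own value function aimed at its subgoal, so a temporal-difference error can arise at the subgoal level even when prospects for the overall goal are unchanged. The following is a minimal illustrative sketch (hypothetical values; not the authors' code or task), assuming undiscounted one-step TD errors:

```python
def td_error(value, state, next_state, reward, gamma=1.0):
    """One-step temporal-difference error: delta = r + gamma * V(s') - V(s).

    gamma defaults to 1.0 (undiscounted) purely to keep the toy numbers simple.
    """
    return reward + gamma * value[next_state] - value[state]

# Toy value tables for a two-level task (all numbers illustrative):
# the top-level table tracks prospects for the overall goal, the
# option-level table tracks prospects for the current subgoal.
v_top = {"s": 0.5, "s_next": 0.5}
v_option = {"s": 0.4, "s_next": 0.8}

# An event that improves subgoal prospects without changing overall-goal
# prospects (e.g., the subgoal moves closer while total path length to the
# final goal stays fixed) produces a prediction error only at the subgoal
# level -- the HRL-specific signature the imaging studies tested for.
delta_top = td_error(v_top, "s", "s_next", reward=0.0)        # 0.0
delta_option = td_error(v_option, "s", "s_next", reward=0.0)  # positive
```

Ordinary (flat) reinforcement learning computes only the analogue of `delta_top`, so it predicts no error signal for such events; observing one is evidence for subgoal-level evaluation.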
Indexed in ScienceDirect, PubMed, and other databases.