Top-down learning of low-level vision tasks
Authors: Michael J. Jones, Pawan Sinha, Thomas Vetter, Tomaso Poggio
Institution: Center for Biological and Computational Learning, Massachusetts Institute of Technology, E25-201, 45 Carleton Street, Cambridge, Massachusetts 02142, USA. E-mail: tp@ai.mit.edu
Abstract: Perceptual tasks such as edge detection, image segmentation, lightness computation and the estimation of three-dimensional structure are considered low-level or mid-level vision problems and are traditionally approached in a bottom-up, generic and hard-wired way. An alternative is to take a top-down, object-class-specific and example-based approach. In this paper, we present a simple computational model implementing the latter approach. The results generated by our model when tested on edge-detection and view-prediction tasks for three-dimensional objects are consistent with human perceptual expectations. The model's performance is highly tolerant of sensor noise and incomplete input-image information, whereas results obtained with conventional bottom-up strategies show much less immunity to these problems. We interpret the encouraging performance of our computational model as evidence that the human visual system may learn to perform supposedly low-level perceptual tasks in a top-down fashion.
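The example-based, object-class-specific scheme the abstract describes can be made concrete with a short sketch. The Python code below is a minimal illustration, not the authors' implementation: it assumes a set of pixelwise-aligned example images of one object class, each paired with a labeled edge map, and it approximates a novel image of that class as a least-squares linear combination of the examples, then applies the same coefficients to the examples' edge maps to predict an edge map top-down. All function names and the toy data are hypothetical.

    import numpy as np

    def fit_coefficients(example_images: np.ndarray, novel_image: np.ndarray) -> np.ndarray:
        """Least-squares coefficients c such that example_images @ c ~= novel_image.

        example_images: (n_pixels, n_examples) matrix, one flattened, aligned image per column.
        novel_image:    (n_pixels,) flattened novel image of the same object class.
        """
        coeffs, _, _, _ = np.linalg.lstsq(example_images, novel_image, rcond=None)
        return coeffs

    def predict_edge_map(example_edges: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
        """Synthesize the novel image's edge map by applying the coefficients
        fitted on the intensity images to the examples' labeled edge maps."""
        return example_edges @ coeffs

    # Toy usage with random stand-ins for aligned images and edge maps.
    rng = np.random.default_rng(0)
    n_pixels, n_examples = 64 * 64, 20
    X = rng.random((n_pixels, n_examples))                        # example images (columns)
    E = (rng.random((n_pixels, n_examples)) > 0.9).astype(float)  # example edge maps
    novel = X @ rng.random(n_examples)   # a novel image lying in the span of the examples
    c = fit_coefficients(X, novel)
    edge_prediction = predict_edge_map(E, c)  # predicted edge map for the novel image

Because the prediction is constrained to the span of class-specific examples rather than computed from local image gradients, this kind of model degrades gracefully under sensor noise and missing pixels, which is the tolerance the abstract contrasts with bottom-up edge detectors.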