Predicting the eye fixation locations in the gray scale images in the visual scenes with different semantic contents
Authors:Hassan Zanganeh Momtaz  Mohammad Reza Daliri
Institution:1. Neuroscience and Neuroengineering Research Lab., Biomedical Engineering Department, Faculty of Electrical Engineering, Iran University of Science and Technology (IUST), Tehran, Iran; 2. School of Cognitive Sciences (SCS), Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
Abstract: In recent years, there has been considerable interest in visual attention models (saliency maps of visual attention). These models can be used to predict eye fixation locations and therefore have many applications in fields where they can improve the performance of machine vision systems. Most of these models need improvement because they rely on bottom-up computation that ignores top-down image semantic content and often fails to match actual eye fixation locations. In this study, we recorded the eye movements (i.e., fixations) of fourteen individuals who viewed images consisting of natural (e.g., landscape, animal) and man-made (e.g., building, vehicle) scenes. We extracted the fixation locations of the eye movements in the two image categories. After extracting the fixation areas (a patch around each fixation location), we compared the characteristics of these areas with those of non-fixation areas. The features extracted from each patch were orientation and spatial frequency. After the feature-extraction phase, different statistical classifiers were trained on these features to predict eye fixation locations. This study connects eye-tracking results to automatic prediction of the salient regions of images. The results show that eye fixation locations can be predicted using the image patches around subjects' fixation points.
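The pipeline the abstract describes (cropping patches around fixation points, computing orientation and spatial-frequency features, and training a classifier to separate fixation from non-fixation patches) could look roughly like the sketch below. This is not the authors' code: the 32-pixel patch size, the particular Gabor filter bank, the random sampling of control patches, and the SVM classifier are all illustrative assumptions.

# A minimal sketch (not the authors' code) of the abstract's pipeline:
# crop patches around fixation / non-fixation points, extract orientation
# and spatial-frequency features with a small Gabor bank, train a classifier.
import numpy as np
from scipy.ndimage import convolve
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

PATCH = 32  # assumed patch size (pixels) around each fixation point

def gabor_kernel(theta, freq, size=15, sigma=4.0):
    """Real Gabor filter tuned to orientation `theta` (rad) and spatial frequency `freq` (cycles/pixel)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def patch_features(patch):
    """Mean Gabor energy at 4 orientations x 2 frequency bands (assumed bank)."""
    feats = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        for freq in (0.1, 0.25):  # assumed low/high frequency bands
            resp = convolve(patch.astype(float), gabor_kernel(theta, freq))
            feats.append(np.mean(resp**2))  # energy in this orientation/frequency channel
    return np.array(feats)

def crop(img, y, x):
    """Crop a PATCH x PATCH window centered on (y, x); None if it would cross the border."""
    h = PATCH // 2
    if y - h < 0 or x - h < 0 or y + h > img.shape[0] or x + h > img.shape[1]:
        return None
    return img[y - h:y + h, x - h:x + h]

def build_dataset(images, fixations, rng=np.random.default_rng(0)):
    """Label fixation patches 1 and randomly sampled control patches 0."""
    X, y = [], []
    for img, fix_points in zip(images, fixations):
        for (fy, fx) in fix_points:
            p = crop(img, fy, fx)
            if p is not None:
                X.append(patch_features(p)); y.append(1)
            # one random non-fixation control patch per fixation (assumption)
            ry, rx = rng.integers(0, img.shape[0]), rng.integers(0, img.shape[1])
            q = crop(img, ry, rx)
            if q is not None:
                X.append(patch_features(q)); y.append(0)
    return np.array(X), np.array(y)

# Hypothetical usage, given grayscale images and per-image fixation lists:
# X, y = build_dataset(gray_images, fixation_lists)
# print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())

Cross-validated accuracy of such a classifier is one way to quantify how predictable fixation locations are from low-level patch statistics alone.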
Keywords: Visual attention model; Saliency map; Eye fixation; Bottom-up and top-down attention; Semantic content; Eye tracking
This article is indexed in SpringerLink and other databases.