Paper
Augmented saliency model using automatic 3D head pose detection and learned gaze following in natural scenes
Published Nov 1, 2015 · Daniel F. Parks, A. Borji, L. Itti
Vision Research
SJR score: Q2
Citations: 51
Influential citations: 0
Abstract
Abstract removed at Elsevier's request; this does not indicate any issue with the research. The abstract can be read at the original source via the full-text link.
Study Snapshot
Key takeaway: The Dynamic Weighting of Cues (DWOC) model effectively predicts the eye movements of passive observers viewing natural scenes by combining bottom-up saliency with an actor's head pose and gaze direction, improving our understanding of visual attention mechanisms (see the sketch below).
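To make the cue-combination idea concrete, here is a minimal Python sketch of blending a bottom-up saliency map with a head-pose/gaze-following map into a single fixation-priority map. The function name, the fixed example weights, and the random toy maps are illustrative assumptions; the actual DWOC model weights its cues dynamically rather than with fixed constants, and this is not the authors' implementation.

# Minimal sketch of weighted cue-map combination, assuming numpy only.
# combine_cue_maps, the fixed weights, and the random toy maps are
# illustrative assumptions, not the authors' DWOC code.
import numpy as np

def combine_cue_maps(saliency_map, gaze_map, w_saliency=0.5, w_gaze=0.5):
    """Blend a bottom-up saliency map with a gaze-following map into one
    fixation-priority map, after normalizing each cue to [0, 1]."""
    def normalize(m):
        m = m.astype(float) - m.min()
        peak = m.max()
        return m / peak if peak > 0 else m
    combined = w_saliency * normalize(saliency_map) + w_gaze * normalize(gaze_map)
    return normalize(combined)

# Toy usage: random maps stand in for real saliency and gaze-following maps.
rng = np.random.default_rng(0)
priority = combine_cue_maps(rng.random((60, 80)), rng.random((60, 80)))
print(priority.shape, float(priority.min()), float(priority.max()))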
References
Complementary effects of gaze direction and early saliency in guiding fixations during free viewing.
Gaze direction is a strong attentional cue that guides eye movements during free viewing, complementing low-level saliency cues; it is derived from both actors' faces and eyes in natural scenes.
2014 · 54 citations · A. Borji et al. · Journal of Vision
ImageNet Large Scale Visual Recognition Challenge
The ImageNet Large Scale Visual Recognition Challenge has led to significant advances in object recognition, highlighting key breakthroughs and comparing current computer vision accuracy to human accuracy.
2014 · 37,102 citations · Olga Russakovsky et al. · International Journal of Computer Vision
Modeling Task Control of Eye Movements
Advances in eye tracking and probabilistic modeling techniques can help model the control of eye movements in natural behavior, with fixations selecting task-relevant information based on expected reward and environmental uncertainty.
2014 · 103 citations · M. Hayhoe et al. · Current Biology
What/Where to Look Next? Modeling Top-Down Visual Attention in Complex Interactive Environments
Our dynamic Bayesian network model effectively models top-down visual attention in complex interactive environments, outperforming simpler models and state-of-the-art algorithms.
2014 · 108 citations · A. Borji et al. · IEEE Transactions on Systems, Man, and Cybernetics: Systems
Visual Focus of Attention in Non-calibrated Environments using Gaze Estimation
Our system can estimate visual focus of attention from head rotation and eye gaze estimates in non-calibrated environments, using simple hardware such as a webcam.
2014 · 56 citations · S. Asteriadis et al. · International Journal of Computer Vision