Paper
The prediction of head and eye movement for 360 degree images
Published Nov 1, 2018 · Yucheng Zhu, Guangtao Zhai, Xiongkuo Min
Signal Process. Image Commun.
SJR quartile: Q1
Citations: 115
Influential citations: 6
Abstract
The abstract has been removed at Elsevier's request; this does not indicate any issue with the research. The full abstract is available from the original source.
Study Snapshot
Key takeaway: Our model effectively predicts head and eye movements in 360-degree images, aiding fine-grained processing and reducing resource waste in computer vision and multimedia processing.
References
A saliency prediction model on 360 degree images using color dictionary based sparse representation
The CDSR model, using color dictionary based sparse representation, effectively predicts saliency in 360° images, outperforming other models on both natural and 360° images.
2018 · 63 citations · Jing Ling et al. · Signal Process. Image Commun.
GBVS360, BMS360, ProSal: Extending existing saliency prediction models from 2D to omnidirectional images
The newly designed BMS360 and GBVS360 saliency prediction models significantly improve head motion prediction and head/eye saliency-map prediction in omnidirectional images.
2018 · 77 citations · Pierre R. Lebreton et al. · Signal Process. Image Commun.
A feature-based approach for saliency estimation of omni-directional images
The proposed saliency model for omni-directional images, combining low-level and semantic features, effectively estimates visual attention in 360° images, improving immersion and coding.
2018 · 40 citations · F. Battisti et al. · Signal Process. Image Commun.
A novel superpixel-based saliency detection model for 360-degree images
The proposed superpixel-level saliency detection model for 360-degree images shows promising performance in detecting salient regions and improving visual attention in VR/AR data.
2018 · 35 citations · Yuming Fang et al. · Signal Process. Image Commun.
Toolbox and dataset for the development of saliency and scanpath models for omnidirectional/360° still images
This paper presents a dataset and toolbox for developing visual attention models for omnidirectional/360° still images, supporting research on visual attention modeling for immersive experiences.
2018 · 72 citations · Jesús Gutiérrez et al. · Signal Process. Image Commun.
Citations
GazeFed: Privacy-Aware Personalized Gaze Prediction for Virtual Reality
GazeFed, a privacy-aware personalized gaze prediction framework for virtual reality, effectively captures user behavioral patterns and enhances user experiences while maintaining user privacy.
2024 · 0 citations · Jiang Wu et al. · 2024 IEEE/ACM 32nd International Symposium on Quality of Service (IWQoS)
Multi-Scale Transformer Network for Saliency Prediction on 360-Degree Images
The Multi-scale Transformer framework (MTSal360) improves saliency prediction on 360-degree images by capturing long-range information and addressing the problem of insufficient training data.
2023 · 0 citations · Xuan Lin et al. · 2023 IEEE International Conference on Image Processing (ICIP)
D-SAV360: A Dataset of Gaze Scanpaths on 360° Ambisonic Videos
D-SAV360 is a dataset of head and eye scanpaths for 360° videos with directional ambisonic sound, enabling a more comprehensive study of how multimodal stimuli shape visual behavior in virtual reality environments.
2023 · 1 citation · Edurne Bernal-Berdun et al. · IEEE Transactions on Visualization and Computer Graphics
Multi-Scale Estimation for Omni-Directional Saliency Maps Using Learnable Equator Bias
The proposed multi-scale estimation method improves the accuracy of omni-directional saliency maps by using a learnable equator bias layer and multiple angles of view (see the sketch after this entry).
2023 · 1 citation · Takao Yamanaka et al. · ArXiv
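The "learnable equator bias" can be pictured as a per-latitude offset added to an equirectangular saliency map, reflecting the tendency of viewers of 360° content to fixate near the equator. Below is a minimal sketch of that idea in PyTorch; the module name EquatorBias, the tensor shapes, and all other details are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a learnable per-latitude bias added to an
# equirectangular saliency map (assumed shape: batch x 1 x height x width).
import torch
import torch.nn as nn

class EquatorBias(nn.Module):
    """Adds one learnable bias value per latitude row, broadcast over longitude."""
    def __init__(self, height: int):
        super().__init__()
        # One bias per latitude row; broadcast across the width (longitude) axis.
        self.bias = nn.Parameter(torch.zeros(1, 1, height, 1))

    def forward(self, saliency: torch.Tensor) -> torch.Tensor:
        # saliency: raw saliency logits on the equirectangular grid
        return saliency + self.bias

# Usage on a 256x512 equirectangular map (hypothetical sizes).
layer = EquatorBias(height=256)
saliency_logits = torch.randn(1, 1, 256, 512)
biased = layer(saliency_logits)
```

In practice such a bias layer would be trained jointly with the saliency network, letting the model learn how strongly to favor equatorial rows.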
Perceptual Quality Assessment of 360° Images Based on Generative Scanpath Representation
The generative scanpath representation (GSR) effectively assesses 360° image quality by considering diverse viewing behaviors and viewing conditions, providing a global overview of gaze-focused content.
2023 · 3 citations · Xiangjie Sui et al. · ArXiv
A Hypernetwork-Based Method for Omnidirectional Image Quality Assessment
The OIQA-Hyper method, based on a hypernetwork, effectively assesses omnidirectional image quality, improving user experience in virtual reality.
2023 · 0 citations · Jie Liu et al. · 2023 IEEE 6th International Conference on Computer and Communication Engineering Technology (CCET)
ScanDMM: A Deep Markov Model of Scanpath Prediction for 360° Images
ScanDMM, a novel Deep Markov Model, effectively predicts scanpaths in 360° images, achieving state-of-the-art performance and generalizing to other visual tasks.
2023 · 17 citations · Xiangjie Sui et al. · 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)