A Computational Framework for Understanding Eye–Hand Coordination
Published Nov 25, 2017 · S. Jana, Atul Gopal, A. Murthy
Journal of the Indian Institute of Science
11 Citations · 0 Influential Citations
Abstract
Although many studies have documented the robustness of eye–hand coordination, the computational mechanisms underlying such coordinated movements remain elusive. Here, we review the literature, highlighting the differences among largely phenomenological studies, while emphasizing the need for a computational architecture that can explain eye–hand coordination across different tasks. We outline a recent computational approach that uses the accumulator model framework to elucidate the mechanisms involved in coordinating the two effectors. We suggest that, depending on the behavioral context, one of two independent mechanisms can be flexibly used to generate eye and hand movements. When the context requires tight coupling between the effectors, a common command is instantiated to drive both effectors (common mode). Conversely, when the behavioral context demands flexibility, separate commands are sent to the eye and hand effectors, allowing each movement to be initiated independently (separate mode). We hypothesize that a higher-order executive controller assesses the behavioral context and enables switching between the two modes. Such a computational architecture provides a conceptual framework that can explain the observed heterogeneity in eye–hand coordination.
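The two-mode accumulator account described in the abstract can be illustrated with a toy simulation. The sketch below is not taken from the paper: the linear rise-to-threshold rule, all parameter values, and the function names (common_mode, separate_mode, rt_correlation) are assumptions introduced purely for illustration. In common mode, a single accumulator's threshold crossing launches both effectors, which then differ only by effector-specific efferent delays; in separate mode, each effector is driven by its own accumulator, so reaction times can decouple across trials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters only (not fitted to any data from the paper)
THRESHOLD = 1.0        # accumulation bound
EYE_EFFERENT = 0.030   # efferent delay for the eye, s
HAND_EFFERENT = 0.090  # efferent delay for the hand, s


def accumulate(rate_mean=5.0, rate_sd=1.0):
    """Linear rise-to-threshold accumulator with trial-to-trial rate noise.

    Returns the time (s) at which the accumulated signal crosses THRESHOLD.
    """
    rate = max(rng.normal(rate_mean, rate_sd), 1e-6)
    return THRESHOLD / rate


def common_mode():
    """Common command: one accumulator triggers both effectors, so eye and
    hand reaction times differ only by efferent delays and are tightly
    correlated across trials."""
    t_go = accumulate()
    return t_go + EYE_EFFERENT, t_go + HAND_EFFERENT


def separate_mode():
    """Separate commands: independent accumulators initiate each effector,
    so reaction times can decouple across trials."""
    eye_rt = accumulate() + EYE_EFFERENT
    hand_rt = accumulate() + HAND_EFFERENT
    return eye_rt, hand_rt


def rt_correlation(mode, n_trials=2000):
    """Correlation between eye and hand reaction times across simulated trials."""
    eye, hand = zip(*(mode() for _ in range(n_trials)))
    return np.corrcoef(eye, hand)[0, 1]


if __name__ == "__main__":
    print("common mode RT correlation:   %.2f" % rt_correlation(common_mode))
    print("separate mode RT correlation: %.2f" % rt_correlation(separate_mode))
```

Under these assumptions, the script yields near-perfect eye–hand reaction-time correlation in common mode and near-zero correlation in separate mode, mirroring the tight versus flexible coupling contrasted in the abstract.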