Models of gaze control for manipulation tasks
Article
Overview
abstract
-
Human studies have shown that gaze shifts are mostly driven by the current task demands. In manipulation tasks, gaze leads action to the next manipulation target. One explanation is that fixations gather information about task-relevant properties, where task relevance is signalled by reward. This work presents new computational models of gaze shifting, in which the agent imagines ahead in time the informational effects of possible gaze fixations. Building on our previous work, the contributions of this article are: (i) the presentation of two new gaze control models, (ii) comparison of their performance to our previous model, (iii) results showing the fit of all these models to previously published human data, and (iv) integration of a visual search process. The first new model selects the gaze that most reduces positional uncertainty of landmarks (Unc), and the second maximises expected rewards by reducing positional uncertainty (RU). Our previous approach maximises the expected gain in cumulative reward by reducing positional uncertainty (RUG). In experiment ii, the models are tested on a simulated humanoid robot performing a manipulation task, and each model's performance is characterised by varying three environmental variables. This experiment provides evidence that the RUG model has the best overall performance. In experiment iii, we compare the hand-eye coordination timings of the models in a robot simulation to those obtained from human data. This provides evidence that only the models that incorporate both uncertainty and reward (RU and RUG) match human data. © 2013 ACM.
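The three selection criteria named in the abstract (Unc, RU, RUG) can be illustrated with a minimal sketch. Everything below is a toy under stated assumptions, not the authors' implementation: the belief is a per-landmark positional variance, `predicted_variance` stands in for the paper's forward model of the informational effect of an imagined fixation, and the reward weightings in `select_gaze_ru` and `select_gaze_rug` are illustrative proxies for the expected-reward and cumulative-reward-gain computations.

```python
# Hypothetical sketch of the three gaze-selection criteria described in the
# abstract. Belief representation, forward model, and reward weighting are
# illustrative assumptions, not the models' actual implementation.

def predicted_variance(belief_var, fixated):
    """Imagined belief update: fixating a landmark is assumed to shrink its
    positional variance while unobserved landmarks' variances grow slightly."""
    return {lm: (var * 0.2 if lm == fixated else var * 1.1)
            for lm, var in belief_var.items()}

def select_gaze_unc(belief_var, candidates):
    """Unc: choose the fixation that most reduces total positional uncertainty."""
    return min(candidates,
               key=lambda lm: sum(predicted_variance(belief_var, lm).values()))

def select_gaze_ru(belief_var, candidates, expected_reward):
    """RU: choose the fixation that maximises expected reward, where reward is
    discounted by the remaining positional uncertainty (illustrative weighting)."""
    def value(lm):
        var_after = predicted_variance(belief_var, lm)[lm]
        return expected_reward[lm] / (1.0 + var_after)
    return max(candidates, key=value)

def select_gaze_rug(belief_var, candidates, expected_reward):
    """RUG: choose the fixation that maximises the expected *gain* in reward
    relative to acting under the current, unchanged belief -- a one-step
    stand-in for the cumulative-reward gain described in the abstract."""
    def gain(lm):
        var_after = predicted_variance(belief_var, lm)[lm]
        value_after = expected_reward[lm] / (1.0 + var_after)
        value_before = expected_reward[lm] / (1.0 + belief_var[lm])
        return value_after - value_before
    return max(candidates, key=gain)

if __name__ == "__main__":
    belief = {"cup": 4.0, "jar": 1.0, "lid": 9.0}   # positional variances (toy values)
    reward = {"cup": 1.0, "jar": 0.5, "lid": 2.0}   # expected manipulation rewards (toy values)
    landmarks = list(belief)
    print(select_gaze_unc(belief, landmarks),
          select_gaze_ru(belief, landmarks, reward),
          select_gaze_rug(belief, landmarks, reward))
```

With these toy numbers all three criteria pick the most uncertain, highest-reward landmark, but they diverge in general: Unc ignores reward entirely, RU weighs reward against residual uncertainty, and RUG scores a fixation only by how much it improves on not shifting gaze at all.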
publication date
funding provided via
published in
Research
keywords
-
Decision making; Gaze control; Reinforcement learning; Computational model; Environmental variables; Gaze control; Hand-eye coordination; Humanoid robot; Manipulation task; Robot simulations; Visual search; Anthropomorphic robots; Decision making; Experiments; Reinforcement learning; Computer simulation
Identity
Digital Object Identifier (DOI)
Additional Document Info
start page
end page
volume
issue