INTEGRATING HEAD POSE INTO A 3D MULTI-TEXTURE APPROACH FOR GAZE DETECTION
Abstract
Lately, the integration of gaze detection systems into human-computer interaction (HCI) applications has been increasing. For gaze tracking to be available for everyday use and for everybody, the embedded system should work with low-resolution images from ordinary webcams and permit a wide range of head poses. We propose the 3D Multi-Texture Active Appearance Model (MT-AAM): an iris model is merged with a local eye-skin model in which holes replace the sclera-iris region. The iris model rotates under the eye hole, allowing the synthesis of new gaze directions. Depending on the head pose, the left and right eyes are unevenly represented in the webcam image. We therefore additionally propose to use the head pose information to improve gaze detection through a multi-objective optimization: we apply the 3D MT-AAM simultaneously to both eyes and sum the resulting errors, multiplying each by a weighting factor that depends on the head pose. Tests show that our method outperforms a classical AAM of the eye region trained on people gazing in different directions. Moreover, we compare our approach to the state-of-the-art method of Heyman et al. [12], which requires manual initialization: without any manual initialization, we obtain the same level of gaze detection accuracy.
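As a rough illustration of the multi-objective criterion summarized above, the combined fitting cost could take a form such as the following; the symbols $E_{\mathrm{left}}$, $E_{\mathrm{right}}$ and the complementary weight $w(\theta)$ are illustrative notation introduced here, not taken from the paper:

$$E_{\mathrm{total}} = w(\theta)\, E_{\mathrm{left}} + \bigl(1 - w(\theta)\bigr)\, E_{\mathrm{right}},$$

where $E_{\mathrm{left}}$ and $E_{\mathrm{right}}$ denote the 3D MT-AAM fitting errors on the left and right eyes, and $w(\theta) \in [0, 1]$ is a head-pose-dependent weight that, under this sketch, increases as the head rotation $\theta$ turns the left eye more fully toward the camera.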