2012
Epameinondas Antonakos, Vassilis Pitsikalis, Isidoros Rodomagoulakis, Petros Maragos: Unsupervised classification of extreme facial events using active appearance models tracking for sign language videos. In: Proceedings - International Conference on Image Processing, ICIP, pp. 1409-1412, 2012, ISSN: 1522-4880.

Links: [PDF] http://robotics.ntua.gr/wp-content/uploads/publications/APRM_UnsupervisClassifExtremeFacialEventsAAM-SignLangVideos_ICIP2012.pdf | DOI: 10.1109/ICIP.2012.6467133

BibTeX:

@conference{178,
  title     = {Unsupervised classification of extreme facial events using active appearance models tracking for sign language videos},
  author    = {Epameinondas Antonakos and Vassilis Pitsikalis and Isidoros Rodomagoulakis and Petros Maragos},
  url       = {http://robotics.ntua.gr/wp-content/uploads/publications/APRM_UnsupervisClassifExtremeFacialEventsAAM-SignLangVideos_ICIP2012.pdf},
  doi       = {10.1109/ICIP.2012.6467133},
  issn      = {1522-4880},
  year      = {2012},
  date      = {2012-01-01},
  booktitle = {Proceedings - International Conference on Image Processing, ICIP},
  pages     = {1409--1412},
  pubstate  = {published},
  tppubtype = {conference}
}

Abstract: We propose an unsupervised method for Extreme States Classification (UnESC) on feature spaces of facial cues of interest. The method is built upon Active Appearance Models (AAM) face tracking and on feature extraction from global and local AAMs. UnESC is applied primarily to facial pose, but is shown to be extendable to local models of the eyes and mouth. Given the importance of facial events in sign languages, we apply UnESC to videos from two sign language corpora, American (ASL) and Greek (GSL), yielding promising qualitative and quantitative results. Beyond the detection of extreme facial states, the proposed UnESC also benefits SL corpora lacking any facial annotations.
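The abstract describes labeling frames of a facial-pose feature trajectory as "extreme" without supervision. As a rough illustration only, the sketch below flags pose values far from the sequence mean as extreme states; the function name, the threshold rule, and the simulated yaw values are all hypothetical and much simpler than the AAM-based UnESC method of the paper.

```python
import statistics

def classify_extreme_states(pose_values, k=1.5):
    """Label each frame's pose value as 'low', 'neutral', or 'high'.

    Hypothetical stand-in for unsupervised extreme-state detection:
    values more than k population standard deviations from the mean
    are treated as extreme. Not the paper's actual algorithm.
    """
    mean = statistics.fmean(pose_values)
    std = statistics.pstdev(pose_values)
    labels = []
    for v in pose_values:
        if v > mean + k * std:
            labels.append("high")
        elif v < mean - k * std:
            labels.append("low")
        else:
            labels.append("neutral")
    return labels

# Simulated head-yaw trajectory with one excursion in each direction.
yaw = [0.0, 0.1, -0.1, 0.0, 2.5, 2.6, 0.1, -2.4, -2.5, 0.0]
print(classify_extreme_states(yaw))
```

Such per-frame labels could then be smoothed over time or used to propose candidate annotations for corpora that lack facial event labels, which the abstract mentions as a side benefit.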
Copyright Notice:
Some material presented is available for download to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Works already published by the IEEE remain under IEEE copyright. Personal use of such material is permitted. However, permission to reprint or republish the material for advertising or promotional purposes, to create new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of the work in other works must be obtained from the IEEE.