This work presents a comprehensive review of stimuli presentation, an important stage of any emotion elicitation experiment in affect analysis. Due to the lack of standard guidelines, researchers employ self-devised methods that are not always sufficiently informative, making this area inconsistent and ambiguous. Moreover, an ample study of this stage, including how to select, design, and present stimuli, has not previously been reported in adequate detail. For this purpose, an inclusive study was conducted to summarize various aspects of stimuli presentation, including the type of stimuli, available databases, presentation tools, subjective measures, and ethical issues. Among the several modalities of emotion recognition (e.g., facial expression, speech, gesture, and physiological signals), EEG-based emotion recognition was considered here due to the availability of a sufficient body of work, its reliability, and its well-established technology. In total, 137 peer-reviewed articles were studied, and the results show that about 83% of emotion elicitations were performed using visual stimuli (mostly pictures and video). Accordingly, the presentation of visual stimuli is examined with particular emphasis, covering laboratory setup, presentation timing, subjective issues, and ethical issues. © 2013 ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering.

Smartphones are powerful mobile computing devices that enable a wide variety of new applications and opportunities for human interaction, sensing, and communications. Because smartphones come with front-facing cameras, users can now interact with and drive applications through their facial responses, enabling participatory and opportunistic face-aware applications. This paper presents the design, implementation, and evaluation of a robust, real-time face interpretation engine for smartphones, called Visage, that enables a new class of face-aware applications. Visage fuses data streams from the phone's front-facing camera and built-in motion sensors to infer, in an energy-efficient manner, the user's 3D head pose (i.e., the pitch, roll, and yaw of the user's head with respect to the phone) and facial expressions (e.g., happy, sad, angry). Visage supports a set of novel sensing, tracking, and machine learning algorithms on the phone, specifically designed to deal with the challenges presented by user mobility, varying phone contexts, and resource limitations. Furthermore, we developed two distinct proof-of-concept applications, Streetview+ and Mood Profiler, driven by Visage. Results demonstrate that Visage is effective in different real-world scenarios.
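The abstract does not specify how Visage fuses the camera and motion-sensor streams. As a minimal illustrative sketch only (not the authors' actual algorithm), one common way to combine a fast-but-drifting gyroscope with slower, absolute camera-based pose estimates is a complementary filter; the function name, blending factor `alpha`, and angle convention below are all assumptions for illustration:

```python
# Hypothetical sketch of camera/IMU fusion for head pose (pitch, roll, yaw).
# This is NOT Visage's published algorithm; it only illustrates the general
# idea of blending gyro integration with absolute camera pose estimates.

def complementary_filter(prev_angles, gyro_rates, dt, camera_angles=None, alpha=0.95):
    """Blend gyro integration (fast, drifts) with camera pose (slow, absolute).

    prev_angles   -- last fused [pitch, roll, yaw] in degrees
    gyro_rates    -- angular rates in degrees/second
    dt            -- time step in seconds
    camera_angles -- optional absolute pose from a camera frame
    alpha         -- weight on the gyro prediction (assumed value)
    """
    # Predict the new pose by integrating the angular rates.
    predicted = [a + r * dt for a, r in zip(prev_angles, gyro_rates)]
    if camera_angles is None:
        return predicted  # no camera frame available at this step
    # Correct gyro drift by pulling toward the absolute camera estimate.
    return [alpha * p + (1 - alpha) * c for p, c in zip(predicted, camera_angles)]

# Example: pitch/roll/yaw in degrees at 10 Hz.
pose = [0.0, 0.0, 0.0]
pose = complementary_filter(pose, [10.0, 0.0, 0.0], dt=0.1)  # gyro-only step
pose = complementary_filter(pose, [0.0, 0.0, 0.0], dt=0.1,
                            camera_angles=[2.0, 0.0, 0.0])   # step with camera fix
```

Running the camera-based estimator only on occasional frames, as sketched above, is also one plausible way such a system could stay energy-efficient, since vision processing dominates the power budget relative to reading the motion sensors.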