Affective computing is one of the most active research topics today and is attracting growing attention. This strong interest is driven by a wide range of promising applications in areas such as virtual reality, smart surveillance, and perceptual interfaces. Affective computing draws on multiple disciplines, including psychology, cognitive science, physiology, and computer science. This white paper highlights several issues implicit in the full interactive feedback loop and reviews the state of the art by discussing different methods for each issue.
Affective computing seeks to give computers the human-like ability to observe, interpret, and generate emotional cues. It is an important enabler of harmonious human-computer interaction, improving the quality of human-computer communication and enhancing computer intelligence. The study of affect and emotion can be traced back to the 19th century. Traditionally, affect was rarely associated with lifeless machines and was studied chiefly by psychologists; the capture and processing of affect by computers has emerged only in recent years. Affective computing builds emotion models from information captured by various sensors and constructs personalized computing systems able to perceive, interpret, and respond to human emotions intelligently, sensitively, and in a friendly manner. To convey the cutting edge of affective computing research, this paper surveys emotional speech processing, facial expression, gesture and movement, multimodal systems, emotion understanding and generation, and related topics, with a brief discussion of each. It also introduces related projects from around the world that give a clear picture of current and past research work and applications. Based on this summary and analysis, the paper discusses several recent research topics that pose significant challenges for improving current work.
Affective computing is a young interdisciplinary field that brings together researchers and practitioners from areas as diverse as artificial intelligence, natural language processing, and the cognitive and social sciences. With the proliferation of videos posted online (e.g., on YouTube, Facebook, and Twitter) for product reviews, movie reviews, political views, and so on, affective computing research has evolved from traditional unimodal analysis to more complex forms of multimodal analysis. This is the primary motivation for the first comprehensive literature review across the various subfields of affective computing. Moreover, existing surveys lack the detailed discussion of state-of-the-art multimodal affect analysis frameworks that this review aims to provide. Multimodality is defined as the presence of two or more modalities or channels, such as vision, audio, text, gesture, and eye gaze. This article focuses primarily on the use of audio, visual, and textual information for multimodal affect analysis, as approximately 90% of the relevant literature appears to cover these three modalities. Following an overview of techniques for unimodal affect analysis, we briefly describe existing methods for fusing information from different modalities. As part of this review, we extensively survey the main categories of state-of-the-art fusion techniques and then critically analyze the potential performance gains of multimodal analysis over unimodal analysis. A comprehensive overview of these two complementary areas aims to give the reader the building blocks needed to understand this challenging and exciting field of research.
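To make the distinction between unimodal analysis and the two most common fusion strategies concrete, the following is a minimal sketch, not a method from any specific paper surveyed here. All feature names, dimensions, and probability values are invented purely for illustration: feature-level (early) fusion concatenates per-modality features before a single classifier, while decision-level (late) fusion combines the outputs of separate per-modality classifiers.

```python
import numpy as np

# Hypothetical per-modality feature vectors for one video clip.
# Names and dimensions are illustrative, not from any real dataset.
audio_feat = np.array([0.2, 0.7, 0.1])      # e.g. prosodic features
visual_feat = np.array([0.9, 0.3])          # e.g. facial expression features
text_feat = np.array([0.5, 0.5, 0.4, 0.1])  # e.g. sentiment features

# Feature-level (early) fusion: concatenate modality features into one
# vector, which would then be fed to a single classifier.
early_fused = np.concatenate([audio_feat, visual_feat, text_feat])

# Decision-level (late) fusion: each modality has its own classifier;
# here we simply average their (made-up) class-probability outputs.
audio_probs = np.array([0.6, 0.4])   # [P(positive), P(negative)]
visual_probs = np.array([0.8, 0.2])
text_probs = np.array([0.5, 0.5])
late_fused = np.mean([audio_probs, visual_probs, text_probs], axis=0)

print(early_fused.shape)  # combined feature dimension: (9,)
print(late_fused)         # averaged posterior over the two classes
```

Real systems replace the averaging with learned combination rules (weighted voting, a meta-classifier, or model-level fusion), but the trade-off stays the same: early fusion can exploit cross-modal correlations, while late fusion is robust when one modality is missing or noisy.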