
AI-assisted detection of biosignals and human emotions

Fundamental methodology research on autonomous learning can benefit most, if not all, computer vision tasks. With the aid of autonomous learning, we can automatically design a context-aware neural network for the perceived data of a given computer vision task.

“Although wireless communication and machine vision focus on different forms of signals, we can use machine learning to advance specific tasks, e.g., to automate or optimize processes, and to facilitate human interactions,” Professor Guoying Zhao and Professor Xiaobai Li note.

The experts study emotion AI, including the recognition and analysis of facial (micro-)expressions, emotional (micro-)gestures, and non-typical emotions. These are directly related to emotion understanding and can facilitate education, psychotherapy, remote services, and autonomous driving, to name a few.

“Facial micro-expression analysis is important for understanding a person’s hidden or suppressed emotions, and the facial action unit (AU) is the smallest element of facial movement,” the experts note. Moreover, body gesture and micro-gesture analysis and recognition are also crucial for recognizing emotional states. The group has investigated multi-channel information fusion for a complete emotion understanding system.
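As a rough illustration of the AU idea, the sketch below maps detected action units to prototypical emotions using well-known EMFACS-style combinations (e.g., AU6 + AU12 for happiness). It assumes a detector has already produced the set of active AUs from video; the function names are illustrative only, not the group's actual method.

```python
# Minimal sketch: matching detected facial action units (AUs) against
# prototypical emotion patterns. EMFACS-style mappings are used here;
# a real system would predict the active AUs from video frames.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
}

def match_emotion(active_aus):
    """Return the prototype whose AU set best overlaps the detected AUs."""
    def overlap(emotion):
        prototype = EMOTION_PROTOTYPES[emotion]
        return len(prototype & active_aus) / len(prototype)
    best = max(EMOTION_PROTOTYPES, key=overlap)
    return best, overlap(best)

print(match_emotion({6, 12, 25}))  # ('happiness', 1.0)
```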

The team aims to improve the understanding of emotions that do not belong to the most general or typical classes, such as happiness, sadness, and anger, but that are commonly encountered in daily life and are practically essential in application scenarios. Such non-typical emotions include boredom, confusion, interest, shame, nervousness, and confidence, Li notes.

Solutions for self-monitoring and early screening of diseases

Remote physiological signal measurements carry immense potential for numerous applications in remote healthcare, e.g., self-monitoring of heart function and early screening of heart disease. Hence, they are a crucial tool for remote measurements of heart rate, heart rate variability, and respiration, and they can lead to massive cost savings in healthcare. One example application is remote atrial fibrillation detection from face video analysis alone, developed in collaboration with cardiologists at the university hospital.
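As background on how camera-based heart-rate measurement works in principle, the sketch below follows the classic remote photoplethysmography (rPPG) recipe: average the green channel over a face region per frame, band-pass filter to the plausible pulse band, and read off the dominant frequency. This is a minimal illustration under simplified assumptions, not the group's actual method; the function and variable names are invented for the example.

```python
# Minimal rPPG sketch: heart rate from per-frame mean green-channel
# intensities of a face region.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(green_means, fps):
    """Estimate heart rate in BPM from a green-channel intensity trace."""
    signal = green_means - np.mean(green_means)
    # Band-pass to the plausible pulse band, 0.7-4.0 Hz (42-240 BPM).
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    filtered = filtfilt(b, a, signal)
    # The dominant spectral peak in that band gives the pulse frequency.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic check: a 72 BPM (1.2 Hz) pulse sampled at 30 fps for 10 s.
fps = 30
t = np.arange(0, 10, 1 / fps)
trace = 0.05 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.01, t.size)
print(round(estimate_heart_rate(trace, fps)))  # ~72
```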

With advanced computer vision and machine learning technology, the experts look for solutions to interconnected challenges such as: how to counter the influence of human motion and lighting variations when measuring heart rate from facial videos; how to deal with the low intensity of micro-expressions and improve recognition accuracy; and how to relieve the problem of insufficient and imbalanced data in micro-gesture analysis.
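For the data imbalance challenge, one common and generic remedy (not necessarily the group's technique) is inverse-frequency class weighting in the training loss, sketched below with hypothetical gesture-class counts.

```python
# Generic sketch: weight the loss so that rare gesture classes count more.
import numpy as np
import torch
import torch.nn as nn

labels = np.array([0] * 500 + [1] * 50 + [2] * 10)  # hypothetical skewed data
counts = np.bincount(labels)
# Inverse-frequency weights: a class seen 10x less often weighs ~10x more.
weights = torch.tensor(counts.sum() / (len(counts) * counts), dtype=torch.float)
criterion = nn.CrossEntropyLoss(weight=weights)

# Usage with dummy logits and targets for a 3-class problem.
loss = criterion(torch.randn(4, 3), torch.tensor([0, 1, 2, 1]))
print(loss.item())
```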

Multimodal learning and fusion have been explored in depth in the group. In practice, multimodal fusion can take place at various levels: at the sensor level, i.e., data recorded with multiple sensors (RGB, NIR, depth, or 4D cameras, and bio-sensors); at the feature level, i.e., different feature cues such as depth and texture, multi-view faces, etc.; and at the decision level, i.e., collaborative classification and voting of parallel modules.
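As a concrete illustration of the decision-level case, the sketch below combines per-modality class probabilities by weighted voting; the modality names, weights, and emotion classes are invented for the example.

```python
# Minimal decision-level fusion sketch: each modality votes with its own
# class-probability vector, and the votes are combined by weighted averaging.
import numpy as np

def fuse_decisions(probs_per_modality, weights=None):
    """Combine per-modality class-probability vectors into one prediction."""
    probs = np.array(list(probs_per_modality.values()))
    w = np.ones(len(probs)) if weights is None else np.asarray(weights)
    fused = np.average(probs, axis=0, weights=w)
    return fused / fused.sum()  # renormalize to a probability vector

# Example: three modalities scoring the classes [neutral, happy, confused].
predictions = {
    "rgb_face":   np.array([0.2, 0.7, 0.1]),
    "depth":      np.array([0.3, 0.5, 0.2]),
    "bio_sensor": np.array([0.1, 0.6, 0.3]),
}
print(fuse_decisions(predictions, weights=[0.5, 0.3, 0.2]))
```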

“With 6G technology and the concept of IoT, we can explore ideas to combine cameras with other kinds of sensors for distributed learning and fusion, for tasks such as driving safety or home healthcare monitoring,” Zhao says. “When combined, multimodal fusion and autonomous learning can lead to more robust and efficient machine learning solutions in various fields, in the form of software, services, or smart products with emotional intelligence and self-learning towards 6G.”

Vision-based assistive medical diagnosis

In current and future video-capable, connected environments, assistive diagnosis applications based on computer vision will play an increasingly important role, as they will be integrated into all types of telehealth strategies.

Professor Miguel Bordallo Lopez adopts a multidisciplinary approach at the intersection of AI-assisted primary healthcare and real-time computer vision and signal analysis. “Our research is likely to enable new applications and methods in several related fields which are not traditionally studied jointly, such as digital and public health, computer vision, computing and communication architectures.”

His research aims to lay the foundations for novel solutions for vision-based assistive diagnosis in primary healthcare, bringing the technology into practical use and potentially changing the way medical care and healthcare are delivered.

“Camera-based assistive medical diagnosis is an emerging topic of interest, as it provides a remote alternative to traditional primary healthcare: it does not necessarily require personal visits to health centers, and it allows continuous monitoring,” Bordallo Lopez says. “Computer vision and AI can leverage remote and mobile video data, and they can assist by providing unobtrusive and objective information on a patient’s condition.”

To give an example, up to 30 medically relevant symptoms or conditions can be detected, or at least assessed objectively, using computer vision methods and facial images. At the same time, analyzing complementary modalities, such as the radio signals used in 5G/6G communication, provides a privacy-preserving alternative source of information.

“Although many advanced computer vision based healthcare and medical diagnosis methods have been demonstrated, their actual implementations as embedded or remote solutions, where they exist, are still far from being useful,” Bordallo Lopez notes. “The problem derives from implementation challenges arising from explainability, real-time computation, communication capabilities, and cost issues.”

Challenges of distributed and embedded 5G/6G devices

The real challenge is finding out how to enable the use of computer vision for medical diagnosis with camera-based devices such as mobile phones, or over remote video connections, e.g., video-conference services, both of which include communication capabilities.

“A particular problem that we are trying to tackle now is the use of radio signals produced by 5G/6G devices jointly for communication and sensing, so that we could obtain data about the location, activity, and biosignals, such as vital signs, of a patient in an unobtrusive and privacy-preserving way,” Bordallo Lopez notes. “This would enable a wide range of applications with cooperative systems that are integrated with other devices as well.”
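To illustrate the sensing idea in the quote, the sketch below reads a respiration rate from the low-frequency amplitude modulation that periodic chest motion imposes on a received radio signal. It is a simplified, hypothetical illustration rather than the group's actual joint communication-and-sensing pipeline, and all names in it are assumptions.

```python
# Hypothetical sketch: chest motion modulates received radio amplitude,
# so the respiration rate appears as a dominant low-frequency component.
import numpy as np

def respiration_rate(amplitudes, sample_rate):
    """Estimate breaths per minute from a received-signal amplitude trace."""
    x = amplitudes - np.mean(amplitudes)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
    band = (freqs >= 0.1) & (freqs <= 0.5)  # 6-30 breaths per minute
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic check: 15 breaths/min (0.25 Hz) sampled at 10 Hz for 60 s.
t = np.arange(0, 60, 0.1)
trace = 1.0 + 0.1 * np.sin(2 * np.pi * 0.25 * t) + np.random.normal(0, 0.02, t.size)
print(round(respiration_rate(trace, 10)))  # ~15
```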

The practical implementation of these methods fundamentally involves multiple distributed and embedded devices that need to communicate and process large amounts of data at low latencies in a very energy-efficient way, a challenge that lies at the core of 6G research. In addition, effective applications need to deal with multiple sources of heterogeneous data that are retrieved from different locations and must be combined in real time.

It is important to embrace the challenges and particularities that derive from real-world scenarios, conditions, and devices, so that the solutions become truly applicable. He and his colleagues use video to extract and classify biosignals (such as respiration or pulse) and respiratory or circulatory danger signs (chest indrawing, asymmetries) from regular videos obtained with hand-held devices or remote video-conference services, a cost-effective solution that can have wide global impact.

“We are trying to create self-assessment video-based mobile apps for the pre-diagnosis of stroke, even before visiting the hospital,” Bordallo Lopez explains. “We are also bringing real-time video analysis on mobile devices to remote areas in low- and middle-income countries, enabling point-of-care assistive diagnosis of, for instance, childhood pneumonia.”

More than 200 researchers work on different research topics related to artificial intelligence (AI) at the University of Oulu. The first AI research group, the Machine Vision Group, the predecessor of the Center for Machine Vision and Signal Analysis (CMVS), was established as early as 1981. The research truly has long roots.

Artificial Intelligence research and development at the University of Oulu covers a wide range of different areas: computer vision, emotion AI, machine learning, robotics, edge computing, and medical, industrial and atmospheric applications of AI methods.

“Throughout the years, many spin-off companies related to AI have been born,” says Olli Silvén, the head of the CMVS research unit. “Our group’s long-term expertise has a lot to offer for 6G development as well.”
