Incorporating Multimodal Directional Interpersonal Synchrony into Empathetic Response Generation

Jingyu Quan, Yoshihiro Miyake, Takayuki Nozawa*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This study investigates how interpersonal (speaker–partner) synchrony contributes to empathetic response generation in communication scenarios. To this end, we propose a model that incorporates multimodal directional (positive and negative) interpersonal synchrony, operationalized with the cosine similarity measure, into empathetic response generation. We evaluate how incorporating each type of synchrony affects the generated responses at the language and empathy levels. Comparison experiments show that models with multimodal synchrony generate responses that are closer to the ground truth and more diverse than those generated by models without synchrony, demonstrating that the synchrony features are successfully integrated into the models. Additionally, we find that positive synchrony is linked to enhanced emotional reactions, reduced exploration, and improved interpretation, whereas negative synchrony is associated with reduced exploration and increased interpretation. These findings shed light on the connections between multimodal directional interpersonal synchrony and the emotional and cognitive aspects of empathy in artificial intelligence applications.
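To make the synchrony operationalization concrete, the following Python sketch computes per-time-step cosine similarity between a speaker's and a partner's multimodal feature vectors and splits it into positive and negative components. This is a minimal illustration of one plausible implementation, not the paper's code; the function name directional_synchrony, the (T, D) feature layout, and the clipping-based directional split are all assumptions.

    # Minimal sketch (illustrative, not the authors' implementation):
    # directional interpersonal synchrony via cosine similarity.
    import numpy as np

    def directional_synchrony(speaker_feats: np.ndarray, partner_feats: np.ndarray):
        """Row-wise cosine similarity, split into positive/negative parts.

        speaker_feats, partner_feats: arrays of shape (T, D) holding
        per-step multimodal features (e.g., audio/visual embeddings).
        Returns (positive, negative) arrays of shape (T,).
        """
        # Cosine similarity per step, in [-1, 1]; epsilon avoids divide-by-zero.
        dot = np.sum(speaker_feats * partner_feats, axis=1)
        norms = (np.linalg.norm(speaker_feats, axis=1)
                 * np.linalg.norm(partner_feats, axis=1) + 1e-8)
        cos_sim = dot / norms
        # Positive synchrony keeps aligned steps; negative synchrony keeps
        # anti-aligned steps, recorded as magnitudes.
        positive = np.clip(cos_sim, 0.0, None)
        negative = np.clip(-cos_sim, 0.0, None)
        return positive, negative

    # Toy usage with random features for a 10-step exchange.
    rng = np.random.default_rng(0)
    spk, prt = rng.normal(size=(10, 16)), rng.normal(size=(10, 16))
    pos, neg = directional_synchrony(spk, prt)
    print(pos.round(3), neg.round(3), sep="\n")

Under this assumed scheme, the two resulting sequences could be fed to the response generator as separate positive- and negative-synchrony features, matching the directional treatment described in the abstract.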

Original language: English
Article number: 434
Journal: Sensors
Volume: 25
Issue number: 2
State: Published - 2025/01

Keywords

  • affective computing
  • empathetic response generation
  • multimodal learning

ASJC Scopus subject areas

  • Analytical Chemistry
  • Information Systems
  • Atomic and Molecular Physics, and Optics
  • Biochemistry
  • Instrumentation
  • Electrical and Electronic Engineering
