# Empathy and Affective Computing Datasets Summary

This repository is a curated summary of existing datasets for empathy and affective computing research. It distinguishes between empathy-focused datasets (directly measuring empathic processes) and general affective computing datasets (emotion recognition, valence/arousal, and related tasks). This is not a new dataset but a reference guide: please access the original datasets via the provided links and cite their sources.

Each entry lists: Year, Modality, Emotions/Empathy dimensions measured, Usage, Access type, and References/Links.
## Empathy-Focused Datasets

| Dataset | Year | Modality | Empathy / Emotions Measured | Usage (ML/Clinical) | Access | References / Links | 
|---|---|---|---|---|---|---|
| EmpatheticDialogues (ED) | 2019 | Text (dialogues) | 32 situation-level emotions, listener responses | Training/evaluating empathetic chatbots | Public | ACL Anthology | 
| ESConv (Emotional Support Conversations) | 2021 | Text (dyadic chat) | 8 support strategies (e.g. questioning, sympathizing) | ML for emotional support dialogue; mental health | Public | ACL Anthology | 
| EDOS (Empathetic Dialogue at Scale) | 2021 | Text (movie subtitles + auto annotation) | 32 emotions + 8 empathy intents | Large-scale empathetic response generation | Public | Paper | 
| OMG-Empathy | 2019 | Video + Audio (storytelling dyads) | Listener valence shifts (affective empathy) | Affective impact prediction | Restricted (ACII challenge) | arXiv / Uni Hamburg | 
| MEDIC (Multimodal Empathy in Counseling) | 2023 | Video, Audio, Text (therapy sessions) | 3 empathy mechanisms: EE (client expr.), ER (emotional), CR (cognitive) | Clinical + ML multimodal empathy prediction | Request from authors | ACM Multimedia | 
| E-THER (Empathic THERapy Conversations) | 2025 | Video + Text (counseling dialogues) | Verbal–visual incongruence, patient engagement | Detecting breakdowns in empathic communication | TBD (new dataset) | arXiv (placeholder) | 
| LeadEmpathy | 2024 | Text (leadership emails) | Cognitive & Affective empathy (10-point scale) | Empathy detection in workplace communication | Public (CC BY-NC 4.0) | GitHub | 
| EmpathicStories++ | 2024 | Video, Audio, Text + surveys | Self-reported empathy (1–5), storytelling interactions | Longitudinal real-world empathy dynamics | Public (CC BY 4.0) | arXiv | 
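
Several of the public text-based datasets above (e.g., EmpatheticDialogues) can be loaded programmatically. Below is a minimal, illustrative sketch using the Hugging Face `datasets` library; the Hub ID `facebook/empathetic_dialogues` and the column names (`context`, `prompt`, `utterance`) follow the original release but should be verified against the current Hub listing.

```python
from datasets import load_dataset

# Illustrative sketch: load EmpatheticDialogues (Rashkin et al., ACL 2019)
# from the Hugging Face Hub. The Hub ID and column names below are
# assumptions based on the original release; verify before relying on them.
ed = load_dataset("facebook/empathetic_dialogues", split="train")

example = ed[0]
print(example["context"])    # situation-level emotion label (one of 32)
print(example["prompt"])     # speaker-written description of the situation
print(example["utterance"])  # a single dialogue turn
```
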
## General Affective Computing Datasets

| Dataset | Year | Modality | Emotions / Dimensions Measured | Usage (ML/Clinical) | Access | References / Links | 
|---|---|---|---|---|---|---|
| IEMOCAP | 2008 | Audio, Video, Motion capture, Text | 9 discrete emotions + valence/arousal/dominance | Multimodal emotion recognition, fusion methods | Semi-public (USC license request) | USC SAIL | 
| MELD | 2019 | Video, Audio, Text (TV dialogues) | 7 emotions (joy, sadness, anger, fear, disgust, surprise, neutral) + sentiment | Emotion in conversation, multimodal dialogue | Public | Paper | 
| CMU-MOSEI | 2018 | Video, Audio, Text (YouTube monologues) | 6 emotions (anger, happiness, disgust, sadness, fear, surprise) + sentiment | Largest multimodal benchmark for emotion/sentiment | Public | CMU SDK | 
| DEAP | 2012 | EEG, Physio (ECG, GSR), Face video | Valence, arousal, dominance, liking, familiarity | Emotion recognition from physiology, ML on EEG | Public | Paper | 
| AffectNet | 2017 | Images (faces in-the-wild) | 8 discrete emotions + valence/arousal | Deep CNNs for facial expression recognition | Public (license req.) | Wikipedia / Project | 
| IAPS (International Affective Picture System) | 1997+ | Images (color photos) | Valence, arousal, dominance | Psychological/clinical emotion elicitation | Restricted (psychology research license) | Wikipedia | 
| K-EmoCon | 2020 | Audiovisual + physiology (debates) | Continuous valence/arousal; categorical emotions | Multimodal affect in natural conversations | Public (PMC) | PMC | 
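
For the conversational benchmarks above, a quick way to inspect label distributions is to read the released annotation files directly. The sketch below assumes the `train_sent_emo.csv` file from the official MELD release and its `Emotion`/`Sentiment` columns; adjust the path and column names to match the copy you download.

```python
import pandas as pd

# Illustrative sketch: inspect MELD's 7-way emotion and 3-way sentiment labels.
# The file name "train_sent_emo.csv" and the column names are assumptions
# based on the official MELD release; adjust to the files you actually have.
df = pd.read_csv("train_sent_emo.csv")

print(df["Emotion"].value_counts())    # joy, sadness, anger, fear, disgust, surprise, neutral
print(df["Sentiment"].value_counts())  # positive, negative, neutral
```
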
## Notes

- Empathy datasets explicitly measure perspective-taking, emotional resonance, or therapist/patient empathy.
- General affective computing datasets focus on emotion recognition, which supports empathy modeling indirectly.
- Access types are indicated as Public, Semi-public (license required), or Restricted (special request).
- Please cite the original dataset papers when using these resources.
 