Can AI Decode How We Soothe Others?

Imagine your friend is stressed about an exam. Do you crack a joke to lighten the mood, reassure them it will be fine, or distract them with a walk?

We make these choices constantly, often without thinking.

Psychologists call this interpersonal emotion regulation – the ways we try to change how others feel.

Studying these subtle acts is tricky, especially when relying on surveys that may miss the richness of real interactions.

What happens when artificial intelligence tries to read the ways we comfort, distract, or cheer up the people around us?

Key Points

  • Researchers tested whether AI models (ChatGPT, Claude) can identify how people try to comfort or calm others from short personal stories.
  • With carefully designed instructions, AI matched human coders in spotting emotion regulation strategies about 80–90% of the time.
  • Some strategies, like humor or distraction, were harder for AI to detect consistently.
  • Findings highlight both the promise and limits of using AI in psychology and digital mental health tools.

Turning Narratives Into Data

Instead of checkboxes, researchers asked students to write how they would help someone manage sadness, anger, or fear.

These open-ended stories better capture real behavior but are labor-intensive to analyze.

Normally, psychologists spend hours combing through such material, labeling strategies like the following (a minimal code sketch of this scheme appears after the list):

  • Affective engagement (listening, showing empathy)
  • Cognitive engagement (helping reframe the problem)
  • Attention (spending time together)
  • Distraction (pulling focus away)
  • Humor (making them laugh)

This is slow, subjective work. Enter large language models.

Could AI read these short stories and classify the strategies as well as human experts?


Study 1: Teaching ChatGPT the Rules

In the first study, researchers gave ChatGPT thousands of these narratives.

At first, its accuracy was shaky.

It sometimes mistook casual talk for deep empathy, or confused “cheering someone up” with “changing the subject.”

But when scientists refined the instructions – offering clearer definitions, sharper contrasts between categories, and bite-sized batches of stories – ChatGPT improved dramatically.
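
What might such refined instructions look like in practice? Below is a hedged sketch using the OpenAI Python SDK. The model name, the wording of the category definitions, and the code_batch helper are assumptions for illustration, not the study's actual prompt.

```python
# A sketch of a refined classification prompt: explicit definitions,
# contrasts between easily confused categories, and small batches.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You are coding short narratives for interpersonal
emotion regulation strategies. Use exactly these categories:
- affective engagement: listening, validating, expressing empathy
- cognitive engagement: reappraising or reframing the problem
- attention: offering to spend time together
- distraction: redirecting focus AWAY from the problem (contrast:
  attention keeps the person company without redirecting)
- humor: jokes or playfulness intended to amuse
Return one JSON object per narrative: {"id": ..., "labels": [...]}."""

def code_batch(narratives: list[str]) -> str:
    """Send one small batch of narratives for classification."""
    numbered = "\n".join(f"{i}. {n}" for i, n in enumerate(narratives, 1))
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,  # keep the coding as deterministic as possible
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": numbered},
        ],
    )
    return response.choices[0].message.content
```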

Agreement with human coders rose above 90% in some cases.
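
What does "agreement" mean here? Two common ways to quantify it are raw percent agreement and Cohen's kappa, which corrects for chance. A minimal sketch with invented labels:

```python
# Comparing AI labels against a human coder on the same narratives.
# These label sequences are invented; the study used full coded datasets.
from sklearn.metrics import cohen_kappa_score

human = ["humor", "distraction", "affective", "cognitive", "affective"]
model = ["humor", "attention",   "affective", "cognitive", "affective"]

percent_agreement = sum(h == m for h, m in zip(human, model)) / len(human)
kappa = cohen_kappa_score(human, model)  # corrects agreement for chance

print(f"Percent agreement: {percent_agreement:.0%}")  # 80% in this toy case
print(f"Cohen's kappa: {kappa:.2f}")
```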


Study 2: Putting AI to the Test Again

To see if this success generalized, the team used the refined prompts on a fresh set of 2,090 stories, comparing ChatGPT and Claude.
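
A comparison like that can be sketched by sending the same refined instructions to Claude through Anthropic's SDK. The model name and the reused SYSTEM_PROMPT placeholder are illustrative assumptions, not the study's setup.

```python
# A sketch of reusing the same refined instructions with Claude, so both
# models code identical batches and their labels can be compared.
import anthropic

SYSTEM_PROMPT = "..."  # the same category definitions used with ChatGPT above

claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def code_batch_claude(narratives: list[str]) -> str:
    """Classify one small batch of narratives with Claude."""
    numbered = "\n".join(f"{i}. {n}" for i, n in enumerate(narratives, 1))
    response = claude.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": numbered}],
    )
    return response.content[0].text
```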

Both models performed impressively, with accuracy levels similar to Study 1.

They were especially strong at spotting strategies in anger and sadness scenarios, but less reliable in fear-related situations and at the subtle distinction between humor and supportive talk.


The Cracks in the System

The machines still stumbled. Fear scenarios were trickier than sadness or anger.

Humor, being subtle and context-heavy, often slipped through the cracks.

And when narratives grew longer, AI sometimes “over-read” them – seeing strategies that weren’t really there.

Most importantly, the process required a human in the loop.

Without careful oversight and prompt engineering, AI’s classifications drifted into error.


Why It Matters

This research shows that with the right guidance, AI can help psychologists sift through thousands of rich, narrative-based responses – work that would otherwise take human coders weeks.

That could accelerate studies of how people support each other, and even inform digital mental health tools that offer real-time feedback on communication strategies.

Still, the models aren’t perfect.

They sometimes over-interpret friendly conversation as “deep emotional support,” or miss nuances in fearful contexts.

For now, AI is best seen as a partner to human judgment, not a replacement.

Reference

López-Pérez, B., Chen, Y., Li, X., Cheng, S., & Razavi, P. (2025). Exploring the potential of large language models to understand interpersonal emotion regulation strategies from narratives. Emotion, 25(7), 1653–1667. https://doi.org/10.1037/emo0001528

