The emotional component of Infant-Directed Speech: A cross-cultural study using machine learning


Erika Parlato-Oliveira, Mohamed Chetouani, Jean Maximilien Cadic, Sylvie Viaux, Zeineb Ghattassi, Jean Xavier, Lisa Ouss, Ruth Feldman, Filippo Muratori, David Cohen, Catherine Saint-Georges

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

Background: Infant-directed speech (IDS) is part of an interactive loop that plays an important role in infants' cognitive and social development. IDS is used universally and comprises linguistic and emotional components. However, whether the emotional component shares similar acoustic characteristics across languages has not been studied with automatic methods.

Methods: We performed a cross-cultural study using automatic social signal processing (SSP) techniques to compare IDS across languages. Our speech corpus consisted of audio-recorded parental vocalizations during interactions with their infants aged 4 to 18 months. It comprised six databases covering five languages: English, French, Hebrew (two databases: mothers/fathers), Italian, and Brazilian Portuguese. We used an automatic classifier that exploits the acoustic characteristics of speech, together with machine learning methods (Support Vector Machines, SVM), to distinguish emotional IDS from non-emotional IDS.

Results: Automated classification of emotional IDS was possible for all languages and speakers (fathers and mothers). The uni-language condition (classifier trained and tested on the same language) produced moderate to excellent classification results, all significantly different from chance (P < 1 × 10⁻¹⁰). More interestingly, the cross-over condition (IDS classifier trained on one language and tested on another) also produced classification results that were all significantly different from chance (P < 1 × 10⁻¹⁰).

Conclusion: Automated classification of the emotional and non-emotional components of IDS is possible from acoustic characteristics alone, regardless of language. The results of the cross-over condition support the hypothesis that the emotional component shares similar acoustic characteristics across languages.
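The abstract does not include code, but the evaluation design it describes (an SVM trained on acoustic features, tested both within a language and across languages) can be sketched roughly as below. This is a minimal illustration using scikit-learn, not the authors' pipeline: the loader `load_acoustic_features`, the feature dimensionality, the RBF kernel, and the use of random placeholder data are all assumptions made for the sake of a runnable example.

```python
# Illustrative sketch (not the authors' code): uni-language vs. cross-over
# evaluation of an SVM separating emotional from non-emotional IDS
# based on per-clip acoustic feature vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def load_acoustic_features(language, n_clips=200, n_features=24):
    """Hypothetical loader: returns one acoustic feature vector per clip
    (e.g. pitch, energy, and spectral statistics) with emotional /
    non-emotional labels. Random data stands in for real features here."""
    X = rng.normal(size=(n_clips, n_features))
    y = rng.integers(0, 2, size=n_clips)  # 1 = emotional IDS, 0 = non-emotional
    return X, y

languages = ["english", "french", "hebrew", "italian", "portuguese"]
data = {lang: load_acoustic_features(lang) for lang in languages}

# Uni-language condition: train and test within the same language (cross-validated).
for lang, (X, y) in data.items():
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"uni-language {lang}: accuracy = {scores.mean():.2f}")

# Cross-over condition: train on one language, test on each of the others.
for train_lang in languages:
    X_tr, y_tr = data[train_lang]
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)
    for test_lang in languages:
        if test_lang == train_lang:
            continue
        X_te, y_te = data[test_lang]
        print(f"cross-over {train_lang} -> {test_lang}: "
              f"accuracy = {clf.score(X_te, y_te):.2f}")
```

With real acoustic features in place of the random placeholders, above-chance accuracy in the cross-over loop is what would indicate that the emotional component of IDS carries language-independent acoustic cues, which is the comparison the study reports.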

Translated title of the contribution: The emotional component of Infant-Directed Speech: A cross-cultural study using machine learning
Original language: English
Pages (from-to): 106-113
Number of pages: 8
Journal: Neuropsychiatrie de l'Enfance et de l'Adolescence
Volume: 68
Issue number: 2
DOIs
State: Published - Mar 2020
Externally published: Yes

Keywords

  • Cross-cultural
  • Machine learning
  • Mother-child interaction
  • Motherese
  • Social signal processing
