Psychological Science ›› 2013, Vol. 36 ›› Issue (1): 33-37.


Emotional Voice Modulates the Recognition of Facial Expression: Evidence from an ERP Study

ZHENG Zhiwei1, HUANG Xianjun2

  • Received: 2011-06-13  Revised: 2011-12-03  Online: 2013-01-20  Published: 2013-02-26

Emotional Voice Modulates the Recognition of Facial Expression: Evidence from an ERP Study

ZHENG Zhiwei1, HUANG Xianjun2

  1. Institute of Psychology, Chinese Academy of Sciences
    2. Capital Normal University
  • Corresponding author: HUANG Xianjun
  • Funding:
    Youth Fund Project of the Humanities and Social Sciences of the Ministry of Education; Specialized Research Fund for the Doctoral Program of Higher Education

Abstract: Continuously integrating information from multiple sensory inputs is essential in everyday life. However, the mechanisms underlying cross-modal interactions in stimulus processing have received insufficient attention, especially for stimuli carrying emotional significance. This study investigated the neural mechanism of the interaction between emotional voice and facial expression. The event-related potential (ERP) technique and a cross-modal priming paradigm were used to explore the influence of emotional voice on the recognition of facial expression. The materials consisted of 240 prime-target pairs, with voices as primes and facial expressions as targets. Neutral semantic words were spoken with happy or angry prosody and followed by congruent or incongruent facial expressions. Participants judged whether the valence of the emotional voice and the facial expression matched, while ERPs were recorded. Each trial began with a central fixation cross presented for 500 ms, after which the priming stimulus (emotional voice) was presented through headphones. The fixation cross remained on the screen until the target (facial expression) appeared; the inter-stimulus interval (ISI) was 1000 ms. The facial expression was presented for 500 ms, followed by a black screen for 2000-2200 ms. After the presentation of the facial expression, participants indicated whether the valence of the emotional voice and the facial expression was consistent by pressing a mouse button as quickly and accurately as possible. The data were analyzed with repeated-measures ANOVAs. The response time (RT) results showed that participants responded faster to congruent trials than to incongruent trials, indicating a priming effect of emotional voice on the recognition of emotional facial expressions. The ERP waveforms indicated that emotional voice modulates the time course of facial expression processing. In the 70-130 ms and 220-450 ms time windows, facial expressions evoked more negative waveforms in incongruent trials than in congruent trials; in the 450-750 ms time window, facial expressions evoked a larger late positive component (LPC) in incongruent trials than in congruent trials. The ERP results suggest that emotional voice influences the processing of emotional facial expressions at the early perceptual stage, the emotional significance evaluation stage, and the subsequent decision-making stage. This study demonstrates that emotional voice can influence the processing of facial expression in a cross-modal manner and provides converging evidence for the interaction of multisensory inputs.
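The abstract does not report the stimulus-presentation or analysis software. Purely as an illustration of the procedure and the time-window comparison described above, the following minimal Python/NumPy sketch restates the trial timeline and computes incongruent-minus-congruent differences in mean amplitude; the data are simulated and all names (e.g., TRIAL_TIMELINE, mean_amplitude) are hypothetical, not taken from the authors' materials.

    # Sketch of the cross-modal priming trial timeline and the
    # time-window analysis described in the abstract (hypothetical code).
    import numpy as np

    # Trial timeline (durations in ms, taken from the abstract)
    TRIAL_TIMELINE = [
        ("fixation_cross", 500),           # central fixation for 500 ms
        ("voice_prime", None),             # happy or angry prosody over headphones
        ("isi_fixation", 1000),            # fixation stays on screen during the 1000 ms ISI
        ("face_target", 500),              # congruent or incongruent facial expression, 500 ms
        ("black_response", (2000, 2200)),  # black screen, 2000-2200 ms, mouse response
    ]

    # ERP time windows (ms) reported in the abstract
    WINDOWS = {"early": (70, 130), "mid": (220, 450), "lpc": (450, 750)}

    def mean_amplitude(erp, times, window):
        """Mean amplitude of an averaged waveform within a time window.

        erp    : 1-D array, condition-average waveform (microvolts)
        times  : 1-D array of sample times in ms, same length as erp
        window : (start_ms, end_ms) tuple
        """
        start, end = window
        mask = (times >= start) & (times <= end)
        return erp[mask].mean()

    # Illustration with simulated data: a 1000 ms epoch sampled at 500 points
    times = np.linspace(0, 1000, 500)
    congruent = np.random.randn(500)    # stand-in for the congruent condition average
    incongruent = np.random.randn(500)  # stand-in for the incongruent condition average

    for name, win in WINDOWS.items():
        diff = mean_amplitude(incongruent, times, win) - mean_amplitude(congruent, times, win)
        print(f"{name} {win}: incongruent - congruent = {diff:.2f} microvolts")

In the actual study, the congruent and incongruent waveforms would be condition averages of the recorded ERPs, and the resulting mean amplitudes would enter the repeated-measures ANOVAs alongside the RT data.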

Key words: emotional voice, facial expression, cross-modal, event-related potential (ERP)

Abstract: This study used the event-related potential (ERP) technique to examine the time course of the influence of emotional voice on the recognition of facial expression. Valence-congruent and valence-incongruent voice-face pairs were constructed, and participants judged whether the valence of the emotional voice matched that of the facial expression. Behaviorally, participants responded faster to congruent voice-face pairs. The ERP results showed that at 70-130 ms and 220-450 ms, facial expressions in the incongruent condition evoked more negative waveforms than in the congruent condition; at 450-750 ms, facial expressions in the incongruent condition evoked a more positive late positive component. These results indicate that emotional voice exerts a cross-modal influence on multiple stages of facial expression recognition.

Key words: emotional voice, facial expression, cross-modal, event-related potential (ERP)
