Posted by: 黃雅意 / Brain Research Center [Lecture Announcement] 2/15 (Wed.) 15:00-16:30 Auditory Temporal Processing at Behavioral and Neural Levels @ Lan-Cheng Hall, MIRC Building
 
Event Type: Lecture
Audience:   Faculty, staff, and students
Start Date: 2023/02/02
End Date:   2023/02/17
Location:   Room 203, MIRC Building (電子資訊研究大樓203)
 

Contact: 黃雅意   Phone: 54484


Speaker: Tzu-Han Zoe Cheng, Ph.D. Candidate

         Cognitive Science and Swartz Center for

         Computational Neuroscience, UC San Diego

Time: 2023/2/15 (Wed.) 15:00-16:30

Venue: Lan-Cheng Hall, MIRC Building, with simultaneous online session via Webex

       R203, MIRC Building, NYCU

Host: Chair Professor Li-Chun Wang

      Dept. of Electrical and Computer Engineering, NYCU

Abstract

Human listeners accurately recognize a vast number of complex sounds, but two classes, speech and music, are core to our identity as humans. Both speech and music depend critically on detecting temporal variations in sound signals and, more importantly, are inherently rhythmic. In my thesis research, I investigated how human brains and artificial neural networks process complex sounds, seeking a shared hierarchical principle between the two. I also investigated the timing models and neural mechanisms underlying human temporal processing, focusing on entrainment-based timing. My methods combined computational modeling with behavioral and neural data, including MEG, high-density EEG, EMG, and motion capture. Our MEG results suggest higher cortical selectivity for speech and music than for other complex sounds in the secondary auditory cortex. Responses in these cortical regions were better explained by the later, more complex layers of deep neural networks. These results are compatible with specialized coding for speech and music in the brain, and this line of work could ultimately inform the architectural design of state-of-the-art neural networks for processing complex sounds. In the main line of my thesis, we used a novel method combining high-density EEG with independent component analysis (ICA) to separate motor from auditory activity. The results highlight the importance of entrainment, especially in the motor system, for rhythm perception, imagination, and production. Our findings support active sensing by the motor system in auditory perception, which speaks more broadly to the neural mechanisms of temporal processing in speech, music, and other cognitive functions.
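To make the ICA step concrete, the following is a minimal, hypothetical sketch of how independent component analysis can unmix simulated "auditory" and "motor" rhythms from mixed channel recordings, using scikit-learn's FastICA. This is not the speaker's actual pipeline; the source frequencies, noise level, and mixing weights are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import FastICA

    # Illustrative only: two simulated rhythmic sources standing in for
    # auditory and motor activity; real EEG pipelines use many channels.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 5000)
    auditory = np.sin(2 * np.pi * 2.0 * t)        # 2 Hz sinusoidal "auditory" rhythm
    motor = np.sign(np.sin(2 * np.pi * 0.5 * t))  # 0.5 Hz square-wave "motor" rhythm
    sources = np.column_stack([auditory, motor])
    sources += 0.05 * rng.standard_normal(sources.shape)

    # Hypothetical linear mixing of the two sources into three "sensor" channels.
    mixing = np.array([[1.0, 0.5],
                       [0.4, 1.2],
                       [0.8, 0.9]])
    channels = sources @ mixing.T                 # shape: (n_samples, 3)

    # ICA recovers statistically independent component time courses,
    # analogous to separating motor from auditory activity in EEG.
    ica = FastICA(n_components=2, random_state=0)
    recovered = ica.fit_transform(channels)       # shape: (n_samples, 2)
    print(recovered.shape)

In practice, analyses like the one described in the abstract run ICA on dozens of EEG channels and then label components as motor, auditory, or artifactual from their scalp maps and spectra; the toy example above only illustrates the unmixing step.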


About the Speaker

Tzu-Han Zoe Cheng is a sixth-year Ph.D. student in Cognitive Science and the Swartz Center for Computational Neuroscience at UC San Diego. She has extensive experience in experimental design, neuroimaging methods (MEG, EEG), and computational modeling of cognitive topics. Zoe has published journal articles and conference papers on time, music, and rhythm perception. Her work leverages EEG and more traditional behavioral methodologies to understand how human brains process the temporal features of sounds, focusing on speech and music perception, neural oscillations, and brain connectivity.


Attachment: 演講海報.pdf (lecture poster)