June 2008 @ Kobe, Japan
Kai-min Kevin Chang
Research Associate (Special Faculty),
Language Technologies Institute,
School of Computer Science,
Carnegie Mellon University.
Profile: CV, Resume, Research Statement
My research interests include using machine learning and brain imaging technologies to investigate and model various human cognitive processes. My work includes theoretical contributions that study how language is encoded and decoded in the brain using functional Magnetic Resonance Imaging (fMRI), and practical applications of how children learn to read using consumer-grade EEG headbands.
Recent advances in functional Magnetic Resonance Imaging (fMRI) provide a significant new approach to studying semantic representations in humans by making it possible to directly observe brain activity while people comprehend words, phrases, and even sentences. fMRI measures the hemodynamic response (changes in blood flow and blood oxygenation) related to neural activity in the human brain. Images can be acquired at good spatial resolution and reasonable temporal resolution: the activity level of 15,000 to 20,000 brain volume elements (voxels) of about 50 mm³ each can be measured every second. In previous research, we produced and published the field's first predictive theory of neural representations of noun meanings (Mitchell et al., 2008; Chang et al., 2009), and demonstrated its success in predicting the neural representations of arbitrary concrete nouns across different human subjects. Building on this work, we are now investigating the influence of context (e.g., "doctor treated the patient" vs. "doctor drove the car") on the neural representation of a concept (e.g., "doctor"). Our goal is a general compositional modeling approach applicable to predicting the neural representations of simple sentences, by determining, for example, how the neurosemantic properties of verbs and nouns combine.
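The flavor of predictive model described above can be sketched as a per-voxel linear regression: each voxel's activation for a noun is modeled as a weighted sum of intermediate semantic features, and the learned weights let us predict the neural image of a noun never seen in training. This is only an illustrative sketch; the dimensions and random data below are placeholders, not the actual feature sets or fMRI data from the published work.

```python
import numpy as np

# Illustrative sketch of a Mitchell et al. (2008)-style predictive model.
# Dimensions and data are made-up placeholders for demonstration only.
rng = np.random.default_rng(0)
n_train_nouns, n_features, n_voxels = 58, 25, 500

F = rng.normal(size=(n_train_nouns, n_features))  # semantic features per noun
Y = rng.normal(size=(n_train_nouns, n_voxels))    # observed voxel activity per noun

# Learn one linear map per voxel via least squares: Y ≈ F @ W
W, *_ = np.linalg.lstsq(F, Y, rcond=None)

# Predict the neural image of an unseen noun from its semantic feature vector
f_new = rng.normal(size=(1, n_features))
y_pred = f_new @ W
print(y_pred.shape)  # prints (1, 500)
```

Because the map is learned over features rather than over specific words, the same weights generalize to any noun that can be described in the feature space, which is what makes prediction for arbitrary unseen concrete nouns possible.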
The ultimate automated tutor could peer directly into students' minds to identify their mental states (knowledge, thoughts, feelings, and so forth) and decide accordingly what and how to teach at each moment. We proposed to investigate a novel source of input from as close to the brain as is non-surgically practicable: EEG. The major goal of this project was to apply EEG technology to meaningful learning tasks (reading), using a unique testbed (Project LISTEN's Reading Tutor) to pursue useful targets (e.g., user intention, comprehension, whether the student is having difficulty, etc.). As part of this project, we collected ~3 years of tutor usage data in vivo at a primary school. The tutor was Project LISTEN's Reading Tutor, and EEG was recorded with NeuroSky BrainBands. The Reading Tutor helps students learn how to read by listening (using Automatic Speech Recognition) to them read stories aloud. We annotated the time course of each reading session with the sentence that the student was reading. The dataset consists of roughly 169 hours of EEG recordings and 200,000 sentences. To assist researchers who are new to this topic, we also implemented a machine learning toolkit to help process the EEG data. We made both the (anonymized) dataset and the toolkit publicly available. Our notable results included using EEG to detect cheating (CSCW 2015), improve Knowledge Tracing (ITS 2014), detect comprehension (LAK 2014), detect engagement (AIED 2013), improve a spoken dialog interface (ICMI 2012), improve automatic speech recognition (ACL 2012), and improve an intelligent tutoring system (IJAIED 2013).
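A typical first step in the kind of EEG processing a toolkit like this supports is turning a raw signal into band-power features (theta, alpha, beta) that a classifier can consume. The sketch below assumes a single-channel signal at 512 Hz (NeuroSky's raw sampling rate) and uses a synthetic signal; the band boundaries follow common EEG conventions, not any specific toolkit API.

```python
import numpy as np

# Hypothetical sketch: extract band-power features from one EEG channel.
fs = 512                       # NeuroSky raw sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)  # 2 seconds of samples
# Synthetic signal: a 10 Hz (alpha-band) oscillation plus noise
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(1).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(x)) ** 2          # power spectrum
freqs = np.fft.rfftfreq(x.size, d=1 / fs)       # frequency of each FFT bin

def band_power(lo, hi):
    """Mean power of FFT bins whose frequency falls in [lo, hi) Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].mean()

features = {
    "theta": band_power(4, 8),
    "alpha": band_power(8, 13),
    "beta": band_power(13, 30),
}
print(features)
```

With the synthetic 10 Hz signal, the alpha band dominates, as expected; on real recordings, vectors like this (one per annotated sentence) would feed the downstream classifiers for comprehension, engagement, and the other targets listed above.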
|1981-1995||Taipei, Taiwan||I spent the first 14 years of my life in Taiwan. I was pretty ordinary.|
|1995||Canada||At the age of 14, my family decided to immigrate to Canada - a move that fundamentally shaped my life and my character.|
|1995-1998||Vancouver, BC, Canada||I studied at Eric Hamber Secondary School.|
|Summer 1998||Hamilton, ON, Canada||I was a MacShad98 of Shad Valley.|
|1999-2003||Waterloo, ON, Canada||I graduated with a Bachelor of Mathematics in Computer Science and Psychology from the University of Waterloo.|
|2003-2004||Taipei, Taiwan||I worked on the Automatic Speech Analysis System engine of MyET, a promising English-teaching software developed by LLabs.|
|2003-present||Pittsburgh, PA, USA||I am a graduate student in the Language Technologies Institute at Carnegie Mellon University.|
|March 8, 2010||Tokyo, Japan||I got engaged!|
|Dec 29, 2010||Vancouver, BC, Canada||I am married to my lovely Yi-Chia Wang.|
|June 6, 2011||Pittsburgh, PA, USA||Dr. Chang!|
Some people write their diaries with words, some record them with pictures. I mark mine with food! Yes, I love to eat! My plan is to taste all the savoury dishes in the world and mark them on my Savoury Google Maps! Still a long way to go, but I am getting there! :p
I like to read Slashdot and the tw.bbs.talk.joke newsgroup, and watch Comedy Central on TV. Three comic strips that I frequently visit are Piled Higher and Deeper, Dilbert, and River's 543. For leisure, I enjoy playing poker, chess, and pool. I am also very into mobile devices: I frequent xda-developers and stay up to date on many smartphones. My current phone is the AT&T Tilt2. Finally, I treasure freedom of speech, thought, and code, and am an advocate of Open Source software.
PS, I was named a student of Watermelon according to this news article, originally published by University of Waterloo school officials on Apr 1, 2003. ;) Quite frankly, I joined Carnegie Mellon University later and that indeed made me a Watermelon. FYI, Kevyn Collins-Thompson is also a Watermelon.