I have graduated and left CMU. More up-to-date information about me can be found here.
I am a researcher at the Palo Alto Research Center, working in the User Interface Research Group. I defended my Ph.D. thesis in Computer Science at Carnegie Mellon University in 2001, and then worked as a postdoc in the Department of Psychology at CMU with John Anderson in the ACT-R group. More information about me can be found in my vita. My home country is Romania.
My research is in the area of cognitive science, at the intersection of computer science and cognitive psychology. The questions that I am interested in are (1) how language works and (2) how we can build programs that understand language. I believe computational modeling of human language comprehension has the potential to offer unique answers to both of these questions. Computational models are programs that run in real time and behave like humans on a given task; they can also serve as starting points for building application programs that perform the same task.
Language processing in humans is flexible, fast, and reliable. I believe that we can gain insight into how it is possible to build a program with those features by studying "extreme" language behavior (such as the comprehension of non-literal or semantically distorted language).
Dissertation. My dissertation ("The role of background knowledge in sentence processing") describes an ACT-R sentence-processing model that offers a unique explanation of a number of apparently unrelated behavioral phenomena, such as metaphor understanding, memory for text, and falling for Moses-illusion questions ("How many animals of each kind did Moses take on the ark?" --- if you fell for that and answered "two", you have just experienced the Moses illusion: it was Noah who took the animals on the ark). My thesis argues that the very mechanism that helps people understand complex linguistic constructs such as metaphors hinders them when it comes to noticing errors in text. In other words, the apparent "imperfection" of our language behavior is exactly what makes language flexible.
INP. More recently, I have created INP, a real-time model of language processing that builds on my dissertation and incorporates both syntactic and semantic processing. The core of INP is an on-line randomized algorithm for language processing with O(1) complexity per word. INP unifies several psycholinguistic domains: comprehension of literal and metaphorical language, processing of semantic illusions, text priming, and semantic memory. Preliminary tests show that INP achieves about 50% accuracy on multiple-choice questions from the reading comprehension section of the Accuplacer college placement test (chance level being 25%).
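To give a feel for what "on-line with O(1) complexity per word" means (this sketch is not INP's actual algorithm --- the function names and the toy similarity measure are invented for illustration), the key property is that each incoming word is integrated using only a bounded amount of context, so the work per word stays constant no matter how long the text gets:

```python
from collections import deque

def letter_overlap(a, b):
    # Toy stand-in for semantic similarity: number of shared letters.
    return len(set(a) & set(b))

def process_text(words, window=4):
    """Illustrative on-line processor: each word is attached to its best
    match within a fixed-size context window, so the cost per word is
    O(window) = O(1), independent of how much text came before."""
    context = deque(maxlen=window)  # bounded "working memory"
    attachments = []
    for word in words:
        # Constant-time step: compare the new word against at most
        # `window` previous items.
        best = max(context, key=lambda c: letter_overlap(c, word), default=None)
        attachments.append((word, best))
        context.append(word)
    return attachments
```

For example, `process_text(["the", "cat", "sat"])` attaches each word to its closest predecessor within the window; because the window never grows, total processing time scales linearly with text length.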