I am a third-year PhD student (started fall of 2011) in the Language Technologies Institute at Carnegie Mellon University, advised by Tom Mitchell. I work on NELL, the Never-Ending Language Learner, with particular interest in what we call "micro reading": getting NELL to understand text at the level of individual documents, instead of from aggregated statistics.
I received bachelor's (2010) and master's (2011) degrees in computer science from Brigham Young University. My advisor at BYU was Kevin Seppi, and my focus there was mostly on parallelizing particle swarm optimization, with some forays into topic modeling.
My main research interest is in getting machines to learn from text the way humans do. A human can pick up a textbook and gain a wealth of knowledge from it; machines have a much harder time even figuring out how to represent the knowledge they might gain from the text. After reading an introduction to physics, a human might be ready to read and understand a text on more complicated kinematics. Machines are not really able to build on their current knowledge to learn more complex concepts.
To narrow down my interests a little from simply "machine reading," I am currently looking at two main problems. The first is joint entity linking and coreference resolution: given a document on the web, map everything you see to some concept or entity in a knowledge base, like NELL or Wikipedia. The second is random walk inference over graphs, particularly when the graph is a combination of a knowledge base and corpus statistics. As you might imagine, the two are (potentially) related.
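To give a feel for the random walk idea, here is a toy sketch (not NELL's actual data or algorithm): over a small hand-built knowledge graph with made-up entities and relations, short random walks from a starting node visit nearby nodes more often, and the resulting visit distribution is one simple signal of graph relatedness.

```python
import random
from collections import defaultdict

# Toy knowledge graph with hypothetical entities and typed edges,
# purely for illustration.
edges = [
    ("CMU", "locatedIn", "Pittsburgh"),
    ("Pittsburgh", "cityInState", "Pennsylvania"),
    ("Philadelphia", "cityInState", "Pennsylvania"),
    ("Pennsylvania", "stateInCountry", "USA"),
]

# Adjacency list; for this sketch the walk ignores edge labels and
# treats edges as undirected.
graph = defaultdict(list)
for head, _relation, tail in edges:
    graph[head].append(tail)
    graph[tail].append(head)

def random_walk_distribution(start, steps=3, num_walks=10000, seed=0):
    """Estimate how often each node is the endpoint of a `steps`-step
    random walk from `start`; higher probability suggests stronger
    relatedness in the graph."""
    rng = random.Random(seed)
    counts = defaultdict(int)
    for _ in range(num_walks):
        node = start
        for _ in range(steps):
            node = rng.choice(graph[node])
        counts[node] += 1
    return {node: c / num_walks for node, c in counts.items()}

dist = random_walk_distribution("CMU")
```

In practice the interesting part is conditioning on the *sequence of edge labels* a walk follows rather than just the endpoints, but even this stripped-down version shows why combining knowledge-base edges with corpus-derived edges in one graph is appealing: the walks move freely across both.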
CMU Language Technologies Institute
5000 Forbes Avenue
Gates Hillman Complex 6227 (my office number)
Pittsburgh, PA 15213