I will present the evolution of my work on Situated Language
Understanding since my previous talk
(http://people.cs.ubc.ca/~kevinlb/LCI/00/gorniak-010525.html).
Along the way, I will briefly cover some work on spatial language
understanding in simulation and for robots, but spend most of the time
on the central tenet of my later work: that recognizing and reasoning
over speakers' intentions allows humans and machines to flexibly and
robustly understand highly situated and context-specific language. In
support, I will describe how to capture intentions in a video game
setting via probabilistic hierarchical plan recognition, and how to use
the recognized plan fragments as a representational substrate for
understanding highly ambiguous utterances. I call these plan fragments
affordances and the resulting semantic structures Affordance-Based
Concepts. Finally, I will expand on the uses of planning and plan
recognition in video games, and discuss some of the game design,
efficiency, and logistical challenges in bringing these techniques to the
game industry, where I have been working for the last few years.
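
To make the plan-recognition idea concrete, the following is a minimal
Python sketch of probabilistic recognition over a hierarchical plan
library: top-level plans expand into alternative sequences of subplans
or primitive actions, and Bayes' rule scores which plan best explains
an observed prefix of game actions. The plan names, priors, and
probabilities below are illustrative inventions for this sketch, not
the actual model from the talk.

    # Minimal sketch of probabilistic hierarchical plan recognition.
    # All plan names, priors, and probabilities are hypothetical.
    from itertools import product

    # Each plan expands into alternative sequences of subplans or
    # primitive actions, each alternative carrying a probability.
    PLAN_LIBRARY = {
        "attack_base": [(0.7, ["gather_army", "march", "strike"]),
                        (0.3, ["march", "strike"])],
        "defend_base": [(1.0, ["gather_army", "fortify"])],
        "gather_army": [(0.6, ["recruit", "recruit"]),
                        (0.4, ["recruit"])],
    }
    PRIORS = {"attack_base": 0.5, "defend_base": 0.5}

    def expansions(symbol):
        """Yield (prob, primitive action sequence) for a plan symbol."""
        if symbol not in PLAN_LIBRARY:  # primitive game action
            yield 1.0, [symbol]
            return
        for p, body in PLAN_LIBRARY[symbol]:
            # Expand every subplan and combine the alternatives.
            for parts in product(*(list(expansions(s)) for s in body)):
                prob, seq = p, []
                for q, sub in parts:
                    prob *= q
                    seq.extend(sub)
                yield prob, seq

    def posterior(observed):
        """P(top-level plan | observed prefix of primitive actions)."""
        scores = {}
        for plan, prior in PRIORS.items():
            like = sum(p for p, seq in expansions(plan)
                       if seq[:len(observed)] == observed)
            scores[plan] = prior * like
        z = sum(scores.values()) or 1.0
        return {plan: s / z for plan, s in scores.items()}

    print(posterior(["recruit", "recruit"]))
    # Weights attack_base vs. defend_base by how well each
    # hierarchical expansion explains the observed actions.

The partially completed expansions that survive such scoring are the
kind of plan fragments that the talk treats as affordances grounding
ambiguous utterances.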
This work was carried out partly in collaboration with Deb Roy and the
Cognitive Machines group at MIT, and partly with Ian Davis at Mad Doc
Software and Rockstar Games.