Animation using Motion Resequencing and Blending
I found this paper a little confusing, mostly because I read the papers in the wrong order (Motion Graphs second)! The authors' use of active-learning terminology was also confusing (it doesn't seem like they're using value iteration, but rather something based on stochastic approximation -- at least in the section titled "Computing Control Policies"). I like that this paper was honest about its limitations -- especially where the authors point out that for it to extend, there will need to be either intensive human labelling of mocap data (of physical interactions) or many algorithms for detecting very specific features of motion. -- DanielEaton - 08 Mar 2006
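As an aside on the value-iteration vs. stochastic-approximation distinction raised above, here is a minimal sketch on a made-up two-state MDP: value iteration sweeps over all states using an explicit model, while a stochastic-approximation (Q-learning-style) update adjusts estimates incrementally from sampled transitions with a decaying step size. The toy model `P` and all variable names are invented for illustration only; they are not taken from the paper.

```python
import random

states, actions, gamma = [0, 1], [0, 1], 0.9
# Toy deterministic model: P[s][a] = (next_state, reward). Invented for illustration.
P = {0: {0: (0, 0.0), 1: (1, 1.0)},
     1: {0: (0, 0.5), 1: (1, 0.0)}}

# Value iteration: repeated full sweeps over all states, using the known model P.
V = {s: 0.0 for s in states}
for _ in range(100):
    V = {s: max(P[s][a][1] + gamma * V[P[s][a][0]] for a in actions) for s in states}

# Stochastic approximation (Robbins-Monro / Q-learning style): incremental updates
# from individually sampled transitions, with a step size that decays per visit.
Q = {(s, a): 0.0 for s in states for a in actions}
visits = {(s, a): 0 for s in states for a in actions}
s = 0
for _ in range(20000):
    a = random.choice(actions)            # purely random exploration (toy setting)
    s2, r = P[s][a]
    visits[(s, a)] += 1
    alpha = 1.0 / visits[(s, a)]           # decaying step size
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
    s = s2

print(V)                                                      # model-based values
print({s: max(Q[(s, a)] for a in actions) for s in states})   # sample-based values
```

On this tiny example both estimates approach the same optimal values; the difference is only in whether the model is swept directly or sampled.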
Reinforcement learning in games and computer animation is a hot topic. I've seen a lot of research that leverages neural networks and genetic algorithms in this area, while not many statistical methods or dynamic programming approaches have been presented for specific problems in computer animation. Microsoft Research Cambridge once did a project to enhance NPCs' skills in a fighting game using Q-learning, and the results seem plausible. Anyhow, there might be more interesting problems in this area that can be solved by well-specified machine learning techniques, and identifying them is hard but interesting work. Using precomputation is a good approach for current video games. The demo of this paper is more persuasive than the other paper's. In addition, this paper provides some interesting references that are helpful for my AI course project. -- Zhangbo(Zephyr) Liu
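To make the Q-learning idea mentioned above concrete, here is a hedged sketch of tabular Q-learning for a game NPC. The environment (a single "distance" state, three actions, and the reward table) is entirely made up for illustration; it is not the Microsoft Research Cambridge project's setup, only an example of the general technique, including the precompute-then-look-up usage that fits current games.

```python
import random

actions = ["punch", "kick", "block"]
distances = ["near", "far"]            # toy state: distance to the opponent (invented)
Q = {(d, a): 0.0 for d in distances for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(distance, action):
    """Toy dynamics: punching works near, kicking works far, blocking is neutral."""
    reward = {"near": {"punch": 1.0, "kick": -0.5, "block": 0.0},
              "far":  {"punch": -0.5, "kick": 1.0, "block": 0.0}}[distance][action]
    next_distance = random.choice(distances)   # opponent repositions unpredictably
    return next_distance, reward

d = "near"
for _ in range(10000):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: Q[(d, x)])
    d2, r = step(d, a)
    # standard tabular Q-learning update
    Q[(d, a)] += alpha * (r + gamma * max(Q[(d2, x)] for x in actions) - Q[(d, a)])
    d = d2

# The learned table can be precomputed offline; at run time the NPC just looks up
# the greedy action for its current state.
policy = {d: max(actions, key=lambda a: Q[(d, a)]) for d in distances}
print(policy)
```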