Animation using Motion Resequencing and Blending

Motion Graphs

Comments & Questions:

The paper presents a new way to re-assemble captured data into realistic, controllable motion. But two problems stand out: 1. It is not completely automatic; special motions must be recognized and labeled beforehand. 2. With a large database, the search step will be time-consuming even with parallelization. Judging from the paper's results, this is a real downside for time-critical applications like games. --jianfeng tong

As for creating transitions, a smooth transition between frames might not give a pleasing motion; after all, it is interpolated between global positions and joint angles, so I wonder how general this method can be. In addition, when extracting motion from a large graph, both efficiency and the quality of the final motion would be concerns, even if the user provides enough guidance. -- Steven Chang
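A minimal sketch of the kind of frame blending under discussion: root positions interpolated linearly and joint rotations slerped, with a smooth ease-in/ease-out weight. The smooth-step schedule and array layout here are illustrative assumptions, not necessarily the paper's exact choices.

```python
import numpy as np

def slerp(q0, q1, u):
    """Spherical linear interpolation between unit quaternions."""
    d = float(np.dot(q0, q1))
    if d < 0.0:                    # flip to take the shorter arc
        q1, d = -q1, -d
    theta = np.arccos(min(d, 1.0))
    if theta < 1e-6:               # rotations nearly identical
        return q0
    return (np.sin((1.0 - u) * theta) * q0 + np.sin(u * theta) * q1) / np.sin(theta)

def blend_transition(root_a, quat_a, root_b, quat_b, k):
    """Blend the last k frames of clip A into the first k frames of clip B.
    root_*: (k, 3) root positions; quat_*: (k, n_joints, 4) joint rotations.
    The weight 2u^3 - 3u^2 + 1 is a common smooth-step choice (assumed here)."""
    roots, quats = [], []
    for p in range(k):
        u = p / (k - 1)
        w = 2 * u**3 - 3 * u**2 + 1          # weight on clip A: falls 1 -> 0
        roots.append(w * root_a[p] + (1 - w) * root_b[p])
        quats.append(np.array([slerp(qa, qb, 1 - w)
                               for qa, qb in zip(quat_a[p], quat_b[p])]))
    return np.stack(roots), np.stack(quats)
```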

An interesting paper. The model the authors describe is well suited to designing games, so some AI concepts, such as reward functions, appear in their algorithm. In a game, real-time performance is the most important consideration, and to achieve it the authors adopted a precompiled motion graph. Because reward functions control the actor's responses, the actor's motion cannot be forecast in advance, which makes the game more playable. The system is also easy to extend by adding more reward functions. In the paper's example, a boxing game, all the motions are fairly similar, so the transition from one clip to another can be handled easily. --Bo Liang

Although the authors listed several potential applications, I doubt motion graphs can be easily incorporated into video games, whether for non-player characters or for players. Realistic motion is definitely desirable, but in real-time games: first, players and NPCs are acting every second, which could cost a lot of computation if this kind of technique were applied to every character; second, many imaginary characters populate those virtual worlds, which makes the original motion capture data hard to reuse, let alone the modified data. In any case, this paper also reminds me of a question from our last assignment. --Zhangbo(Zephyr) Liu

(Motion Graphs) An interesting idea. It would be interesting to compare their method for detecting candidate transitions with the Sederberg algorithm (time warping) that I presented on Monday. In the "Path Synthesis" section, it was not clear to me how they handle rotating the motion (if at all), i.e. when two identical motions need to be concatenated to follow a specific path, which requires rotating the original motion. If I understood correctly, they do not handle that and will only use sub-portions of existing data in order to follow the path. In that case, wouldn't it be relevant to support orientation changes of an existing motion? -- Hagit Schechter

This paper presents a reasonably cool idea and explains some interesting issues. I like their explanation of why not to use a simple vector norm as a difference metric between frames. However, they deal with the problem of affine invariance by considering only rotation about one axis (the y axis), which greatly restricts the allowable transformations (though it makes the system actually solvable, since it permits a closed-form solution). The graph pruning description is also interesting. Future work could look at more interesting ways of creating transitions (e.g., throw an IK solver into it so that a run could transition to a backflip and the character would know to bend his knees, similar to motion doodles). Section 4.2 was a bit humourous; 10 paragraphs to explain a fancy way of doing exponential exploration. -- KenRose
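For reference, a sketch of the closed-form alignment being discussed: a weighted rigid registration of two corresponding point clouds, restricted to a rotation about the vertical axis plus a translation in the floor plane. The derivation is standard weighted Procrustes in 2D; see the paper for its exact formulation.

```python
import numpy as np

def align_clouds_ground_plane(A, B, w):
    """Align point cloud B to cloud A with a rotation about the vertical
    (y) axis and a translation in the ground (xz) plane, minimizing the
    weighted squared distance. A, B: (n, 3) corresponding points; w: (n,)
    weights. Returns (theta, t, squared_residual)."""
    w = np.asarray(w, dtype=float)
    ca, cb = w @ A / w.sum(), w @ B / w.sum()   # weighted centroids
    a, b = A - ca, B - cb
    # Closed-form optimal angle from the centered xz coordinates.
    S = np.sum(w * (a[:, 2] * b[:, 0] - a[:, 0] * b[:, 2]))
    C = np.sum(w * (a[:, 0] * b[:, 0] + a[:, 2] * b[:, 2]))
    theta = np.arctan2(S, C)
    R = np.array([[np.cos(theta), 0.0, -np.sin(theta)],
                  [0.0,           1.0,  0.0],
                  [np.sin(theta), 0.0,  np.cos(theta)]])
    t = ca - R @ cb
    t[1] = 0.0                  # translation confined to the floor plane
    Bt = B @ R.T + t
    return theta, t, float(np.sum(w * np.sum((A - Bt) ** 2, axis=1)))
```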

The output of this system is cool. I wonder if it could be integrated with motion doodles for 3D to simplify the notation (though the running time of their algorithm would probably be prohibitive). Some criticisms: in section 3.1, it seems to me that choosing k (the window size) would be nontrivial, as it sets the resolution of transitions. Perhaps it would be better to choose a range of k's? (Or maybe I've misunderstood.) I think a great limitation of this system is that the mocap data needs to be labelled, so it's not truly automatic (that said, my project is in the "statistical models" section, which almost always demands labelled data, so I guess I can't criticize!). In computing g (section 5.1), wouldn't you also pay attention to facing direction in addition to distance from the line? -- DanielEaton - 08 Mar 2006

The decision to use point cloud samples for measuring similarity was an interesting one. Can we blend between motion captured on different skeletons, or is that a whole different problem? We'd somehow have to register the correspondence between the points. I wonder if you could combine this approach of stringing different clips together with some sort of motion synthesis approach, so that if a decent path can't be found within the supplied input data, the system can fall back to an alternate method of producing a reasonable motion (e.g. statistical models, physics, etc.). This could minimize the cost of storing excessive quantities of motion and perhaps allow the system to scale a bit better. Overall, the method seems like a fairly obvious (in retrospect?) idea, executed really well. -- Christopher Batty


Precomputing Avatar Behaviour From Human Motion Data

Comments & Questions:

The actual definition of an action is unclear to me. Is it a whole motion, or just some part of a motion? Also, I could use some clarification of the details of the training/reward system. Scalability in terms of memory sounds like it might be a problem if you wanted the avatars to handle a wider variety of abilities. -Christopher Batty

Compared with the other paper, I am more interested in the precomputed control. It offers a significant performance improvement over search. Precomputed control combined with runtime synthesis is a great way to satisfy timing requirements while still allowing a certain range of control. --jianfeng tong

Compared with Lee's paper, this model is better suited to making a movie. You can define an error function, and the system will render a short movie clip as you expected, so performance is not as important as in Lee's model. Maybe this is why the author didn't use a precompiled motion database. The motions used to construct the motion graph are more varied, so transitions between different clips are difficult to handle. --Bo Liang

Using dynamic programming to choose a pose at each iteration makes a lot of sense and should give more desirable results than the method used in "Motion Graphs" to pick a target motion. Since I lack background in reinforcement learning and autonomous agents, further elaboration on those topics would help. -- Steven Chang

Using reinforcement learning to let animated characters come up with motions is a very neat idea. The authors mention that they added some small actions and allowed for randomness in the model by letting these actions be chosen even when they aren't optimal. Would it not be better to use an HMM instead of a fully observable Markov model? That would give the output states a probability distribution that could be randomly sampled, and it would also keep the model closer to reality. -Disha Al Baqui
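One simple way to inject the kind of controlled randomness described above is to sample actions from a softmax over action values instead of always taking the argmax. This is only an illustration of the idea, not the paper's actual randomization mechanism:

```python
import numpy as np

def sample_action(q_values, temperature=1.0, rng=None):
    """Sample an action index from a softmax over action values; higher
    temperature makes sub-optimal actions more likely to be chosen."""
    if rng is None:
        rng = np.random.default_rng()
    z = np.asarray(q_values, dtype=float) / temperature
    z -= z.max()                          # for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(p), p=p)
```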

(Pre-Computing Avatar Behavior) I find the merger of machine learning and computer graphics quite interesting. I see the potential of actually using the paper's suggested technique for video games, but in my view the paper lacks a thorough discussion of the usability issue. Another question that comes to mind is whether the reinforcement learning technique used in the paper also works for scenarios where two concepts need to be learned at the same time, for example a two-person dance. -- Hagit Schechter

This paper is organized much like any other contending for "real time" performance: we precomputed everything we could and stuck it in a LUT. :) I'm a little confused by their update rule for value iteration. It somewhat resembles the Bellman update, but I don't understand why there is a gamma^t term (the Bellman update uses a single gamma term; the exponentiation only emerges through repeated iteration). The automatic data annotation of motion is cool: it is a way of programmatically describing certain types of motion (kind of like a "motion language"). Are there issues with false positives or negatives? The O(MN) requirement limits the scalability of this approach for supporting multiple behaviours (humans have many more behaviours). Still, only two behaviours can produce interesting results (the 30 boxers animation looks hilarious). An application for this type of system could be MMORPGs, where you may need a lot of computerized characters doing various things. It looks similar to the virtual train station demo that Terzopoulos showed last semester as part of his artificial life talk. -- KenRose
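For comparison, a minimal sketch of standard tabular value iteration, in which each Bellman backup applies a single gamma; a gamma^t factor only appears when a t-step return is unrolled explicitly. The small tabular MDP interface here is a hypothetical assumption for illustration:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Tabular value iteration. P[a] is an (S, S) transition matrix for
    action a; R is an (S, A) reward table; gamma is the discount factor.
    Returns the converged state values and the greedy policy."""
    S, A = R.shape
    V = np.zeros(S)
    while True:
        # Bellman backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
        Q = R + gamma * np.stack([P[a] @ V for a in range(A)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```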

I found this paper a little confusing, mostly because I read the papers in the wrong order (Motion Graphs second)! The authors' usage of active learning terminology was also confusing (it doesn't seem like they're using value iteration, but rather something based on stochastic approximation, at least in the section titled "computing control policies"). I like that this paper was honest about its limitations, especially where the authors point out that extending it will require either intense human labelling of mocap data (of physical interactions) or many algorithms for detecting very specific features of motion. -- DanielEaton - 08 Mar 2006
