-- MichielVanDePanne - 27 Feb 2006

Motion Capture Editing

Paper One

This is an example response. To add your own response, click on 'Edit' above. Paragraphs are separated with just a blank line. This paper is interesting because... It is flawed because ... I didn't understand the following bits... Open problems are ... -- Michiel van de Panne

Motion Signal Processing

It is a good idea to apply techniques from image and signal processing to animation; it provides another way to edit captured data. But when it comes to constraints on real-life characters, it seems hard to express those constraints in signal- or image-processing terms (I suspect, but am not sure). The resulting motion may not be realistic. -----jianfeng tong
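A minimal sketch of the constraint issue raised above, using a hypothetical two-link planar leg: the foot position is a nonlinear function of the joint-angle signals, so filtering each DOF independently (the signal-processing view) can quietly shift an end-effector that was supposed to stay put. All names and numbers here are illustrative, not from the paper.

```python
import numpy as np

def foot_position(hip, knee, l1=0.45, l2=0.45):
    """Planar forward kinematics for a two-link leg (angles in radians)."""
    kx = l1 * np.sin(hip)
    ky = -l1 * np.cos(hip)
    fx = kx + l2 * np.sin(hip + knee)
    fy = ky - l2 * np.cos(hip + knee)
    return fx, fy

t = np.linspace(0.0, 1.0, 120)
hip  = 0.4 * np.sin(2 * np.pi * t)           # hypothetical joint-angle signals
knee = 0.6 * np.abs(np.sin(2 * np.pi * t))

# Smooth each DOF independently (a simple moving-average low-pass per channel).
kernel = np.ones(9) / 9.0
hip_f  = np.convolve(hip,  kernel, mode="same")
knee_f = np.convolve(knee, kernel, mode="same")

# The foot position depends nonlinearly on the joint angles, so per-DOF
# filtering shifts the foot trajectory; this is how filtered motions end up
# violating contact constraints that were satisfied in the original data.
fx, fy   = foot_position(hip,   knee)
fxf, fyf = foot_position(hip_f, knee_f)
print("max foot drift after filtering:", np.max(np.hypot(fxf - fx, fyf - fy)))
```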

Some criticisms:
- Affine deformations (translation aside) don't seem to be very useful for describing changes to human/animal characters (or any object with limbs/projections). It seems like, for them, the key-shape deformations would do all the work (see the sketch of the affine-plus-key-shape model after this list).
- Sub-part decomposition is a sensible thing to do, but greatly complicates the process (since a human must mark which contours belong to which part).
- I don't know much about animation, but it seems that knowing/having the contours of the animated characters is unlikely... (unless they used a computer tool).
- Using colour to extract the foreground (character) is a cheap trick that won't work in many situations.
- In the 3D case, there seems to still be a heavy burden on the animator. Once they've created their output key-shapes, how hard would it be to do ordinary keyframing?
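For reference, a minimal sketch of the affine-plus-key-shape reconstruction the criticisms above refer to: each frame's contour is an affine transform applied to a weighted blend of key shapes. The parameter-recovery step from video, which is the hard part of the paper, is omitted, and the shapes and numbers below are hypothetical.

```python
import numpy as np

def reconstruct_contour(key_shapes, weights, A, b):
    """
    key_shapes: (K, N, 2) array of key contour shapes (N 2-D points each)
    weights:    (K,) blending weights for this frame
    A, b:       2x2 affine matrix and 2-vector translation for this frame
    Returns the (N, 2) contour implied by the affine + key-shape model.
    """
    blended = np.tensordot(weights, key_shapes, axes=1)   # weighted key-shape blend
    return blended @ A.T + b                              # affine deformation on top

# Hypothetical example: two key shapes of a 4-point contour, blended 70/30,
# then slightly scaled and translated.
keys = np.array([
    [[0, 0], [1, 0], [1, 1], [0, 1]],          # "neutral" key shape
    [[0, 0], [1.2, 0], [1.2, 0.6], [0, 0.6]],  # "squashed" key shape
], dtype=float)
w = np.array([0.7, 0.3])
A = np.array([[1.1, 0.0], [0.0, 0.9]])         # mild scale
b = np.array([2.0, 0.0])                       # translation

print(reconstruct_contour(keys, w, A, b))
```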

-- Overall: it's a cool idea, but only very crudely extendable to 3D.

-- daniel eaton

Motion Capture Cartoon paper: While the idea of "capturing" the animation off an existing 2D animation sequence sounds great, as the paper presents it there seems to be a lot of user input and prior assumptions necessary to make it work (contours need to be provided, or assumptions about region colour are made). Also, extending the motion to 3D sounds costly and inaccurate. Overall, it's great to be able to reuse animations and apply them to create new ones, but it seems like there is almost as much work involved as doing it yourself to begin with. -- Roey Flor

(Motion signal processing) I usually find it quite interesting to see techniques discovered and developed in one field applied in another, but in this case I found the results less than satisfying. Watching these animations in action may have been more convincing, but the descriptions and figures did not seem like very impressive results, and on top of this, their methods introduce constraint violations which almost seem to outweigh the benefits. What they believe "to be the most useful" of their techniques, motion displacement mapping, looks like an awkward and unintuitive way of modifying a motion. I would think that directly modifying a pose of the character, and seeing where that new pose lies on several motion signals, might make this method more approachable. -- H. David Young
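A minimal sketch of motion displacement mapping on a single DOF, with hypothetical data: the offsets at a few edited keyframes are spread smoothly over the whole curve and added back, so the original high-frequency detail survives. The paper fits a spline through the offsets; plain piecewise-linear interpolation stands in for it here.

```python
import numpy as np

def displacement_map(signal, key_frames, key_values):
    """
    Displacement mapping for one DOF: measure how far each edited keyframe
    pose moved, interpolate those offsets into a smooth displacement curve,
    and add it to the original signal.
    """
    frames = np.arange(len(signal))
    offsets = key_values - signal[key_frames]               # per-keyframe offsets
    displacement = np.interp(frames, key_frames, offsets)   # spread over all frames
    return signal + displacement

# Hypothetical example: raise the elbow angle at frames 30 and 60 of a noisy swing.
t = np.linspace(0, 2 * np.pi, 100)
elbow = np.sin(t) + 0.05 * np.random.randn(100)
edited = displacement_map(elbow, key_frames=np.array([30, 60]),
                          key_values=np.array([1.2, 0.4]))
print(edited[30], edited[60])   # the edited keyframe poses are hit exactly
```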

(Motion Signal Processing) An economical way to use motion capture data. However, the main purpose of motion capture is to fully track human motion so that we get a sense of realism. If we apply signal processing to the motion capture data, then, compared with real motion capture data applied to the same model, which is better? The multi-target interpolation is especially interesting; I think it could be combined with the character-personality topic we discussed last week to generate some impressive results. The authors provided examples of the filtering algorithm in action, but they didn't explain why those results arise. --Zhangbo Liu
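A minimal sketch of the multi-target interpolation idea mentioned above, assuming the motions have already been timewarped into frame correspondence (which the paper requires before blending); the walk data below is hypothetical.

```python
import numpy as np

def multitarget_blend(motions, weights):
    """
    Blend several time-aligned motions (each of shape (frames, dofs)) with
    per-motion weights.  Making the weights per-frame or per-DOF would let a
    'personality' fade in and out over the clip.
    """
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                       # normalize so poses stay in range
    return np.tensordot(weights, np.stack(motions), axes=1)

# Hypothetical example: 70% neutral walk, 30% tired walk, 3 DOFs, 120 frames.
frames, dofs = 120, 3
neutral = np.zeros((frames, dofs))
tired   = np.ones((frames, dofs)) * 0.2
print(multitarget_blend([neutral, tired], [0.7, 0.3])[0])
```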

The idea is cool, and the results the authors present are also impressive. But it looks like capturing the motion from a 2D cartoon is time-consuming and hard to handle for an animator without much CG knowledge. The authors made lots of assumptions and applied their algorithm only to some simple, short motions, such as the character moving one way, from left to right, and so on. If the motion is complex, it is very difficult or impossible to capture it from a cartoon. Applying the captured data to a new model is also based on assumptions and is limited. In Figure 10 in the paper, if we want to replace the old hat with a new one, it leads to a masking problem. In a colourful scene, that is not easy to handle. --Bo

(Cartoon capture) It seems like there are several areas where more automation would help with this (although how feasible that is, I'm not sure), e.g. automating the process of choosing keyshapes from the input shapes. Also, automatically detecting the contours of the animated character would be helpful, rather than requiring them to be given as input (either from software, or from somewhat laborious rotoscoping). In general, it seems to only cleanly handle fairly simple motions, i.e. Jiminy's hat rather than his complex running and arm-flailing motions. This seems like the area where the abilities of "the masters" are most important, but at the same time too difficult to recover. The post-processing techniques they use to fix translation/scaling issues are just glossed over, and don't seem likely to be general enough. -Christopher Batty

The idea of using the enormous amount of data already created by experienced artists, reusing motion from cartoons for new animations, is interesting. The results look good too; however, it seems quite difficult for a user to create such an animation from previous works, as many assumptions have to be made and variables have to be defined a priori. Have the authors of the paper tried to implement an intuitive user interface so as to use the power of their tool more effectively? -- Disha Al Baqui

Paper Two

Another paper. Please add your comments below.

It provides a new way to describe captured motion, by recording an affine deformation plus key-shape deformations. That information can then be retargeted and reused, so traditional animation resources can be used like motion capture data. The paper does not give much information on the extension to 3D models. With only the above two types of captured information, there does not seem to be enough data to do the extension. Is that right? ----jianfeng tong

- Section 2.1.2: they use the same band gains for all DOFs. This doesn't make sense to me -- I doubt you would want to amplify all of them equally (and some not at all). They use the example of a nervous walk, but with this approach it's more than the walk that's nervous/shaky, it's the entire body. And if they were to allow individual per-DOF gains, they're almost back to square one, since the animator has to adjust each DOF separately. (A sketch of per-band gains on a single DOF follows this list.)
- Why don't they just use an FFT?
- Waveshaping doesn't seem very useful... the examples they give aren't compelling (joint limiting is trivial, and I don't think it's very intuitive to produce these curves to add 'undulations').
- I've seen DTW referenced many times in animation papers; is this the first paper to introduce it to the animation community?
- Motion displacement mapping: how does this compare to what IK/constrained optimization would output? It seems that, in fixing some constraints, this technique might cause other constraints to be violated.
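As referenced in the first item, a minimal sketch of multiresolution filtering with per-band gains on a single DOF, so individual joints could in principle each get their own gain vector. The pyramid below uses fixed-width smoothing rather than the paper's subsampled pyramid, and the signal and gains are made up.

```python
import numpy as np

def multires_gains(signal, gains, kernel=np.array([1, 4, 6, 4, 1]) / 16.0):
    """
    Laplacian-pyramid-style filtering of one DOF: split the signal into
    frequency bands, scale each band by a user gain, and sum the bands back.
    Gains of 1.0 reproduce the input; gains > 1 exaggerate that band's detail.
    (Sketch only: fixed-width smoothing instead of true octave subsampling.)
    """
    bands, low = [], signal.astype(float)
    for _ in range(len(gains) - 1):
        smoother = np.convolve(low, kernel, mode="same")
        bands.append(low - smoother)      # band-pass detail at this level
        low = smoother
    bands.append(low)                     # remaining low-frequency base
    return sum(g * b for g, b in zip(gains, bands))

# Hypothetical "nervous" knee: boost only the highest-frequency bands of this
# one DOF, instead of applying one global gain vector to every joint.
t = np.linspace(0, 4 * np.pi, 200)
knee = np.sin(t) + 0.05 * np.sin(12 * t)
nervous_knee = multires_gains(knee, gains=[1.73, 1.3, 1.0, 1.0])
```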

-- daniel eaton

(Turning to the Masters: Motion Capturing Cartoons) So much of the technique shown in this paper requires human involvement that I am curious as to when the motion should be recovered and when it would be more efficient to recreate it. I also found it kind of funny that they mention having data from a cartoon made with a computer tool; if such data were available from the original tool, I would think there would be much more effective ways to re-target the motion. -- H. David Young

Using highly developed DSP technology for computer animation is very interesting. I like this idea and the results presented in the paper. That said, I know that each parameter of the motion can be mapped to bands, but what is the physical meaning of the values? For example, what does 1.73 mean as a band value? --Bo
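On the question above: to the extent the gains are plain multipliers on the frequency bands (as in standard multiresolution filtering), a value like 1.73 is unitless; it scales that band's amplitude by 73%, about +4.8 dB. A tiny illustration with a made-up band:

```python
import numpy as np

# A band gain has no physical units: it just scales that band's contribution.
# 1.0 leaves the band unchanged, 0.0 removes it, and 1.73 amplifies it by 73%.
gain = 1.73
band = 0.05 * np.sin(np.linspace(0, 12 * np.pi, 200))   # hypothetical detail band
print("peak before:", band.max(), "after:", (gain * band).max())
print("gain in dB:", 20 * np.log10(gain))
```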

Motion signal processing paper: An interesting idea of using signal processing to modify animations. The idea, though, seems very non-intuitive, and it probably takes a lot of trial and error and tweaking to see results from it. The parameter-to-band mapping seemed too simple; it wasn't local enough. -- Roey Flor

(Motion Capturing Cartoons) The biggest constraint of this approach might be that the target and the source object must be very similar -- I'm curious what would happen if they tried to retarget the key-shapes of the frog onto a little boy :P The approach aims to make generating (or cloning) animation easier, but it actually still needs much work. In addition, it may be difficult to extend to 3D cases. Anyhow, I like this paper :) -- Zhangbo Liu

(Motion signal processing) These signal-based modifications seem to lack a conceptual understanding of what the motion is, i.e. the modifications don't really have any particular meaning. I guess that's more of a philosophical disagreement with the approach. As a result, the interactive nature of the method is important for ensuring the modified motions are good, because I think it would be really difficult to generate useful modifications without the benefit of rapid feedback. I'm a bit skeptical of the benefits of waveshaping, and without seeing videos it's hard to believe the multiresolution filtering is as useful as they claim. However, the timewarping and interpolation seem like they'd be fairly effective. -Christopher Batty
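Since timewarping comes up above, and DTW is asked about earlier on this page, here is a minimal dynamic-timewarping sketch on two hypothetical 1-D motion curves, showing how two motions at different speeds can be brought into correspondence before blending. The paper's timewarping works along these general lines, with additional constraints not shown here.

```python
import numpy as np

def dtw_cost(a, b):
    """
    Minimal dynamic timewarping between two 1-D motion curves: fills the
    cumulative-cost matrix from which the optimal frame correspondence
    (the warp path) can be traced back.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[1:, 1:]

# Hypothetical example: the same gait cycle sampled at two different speeds.
walk_a = np.sin(np.linspace(0, 2 * np.pi, 60))
walk_b = np.sin(np.linspace(0, 2 * np.pi, 90))
print("alignment cost:", dtw_cost(walk_a, walk_b)[-1, -1])
```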

Motion control of human figures is a very difficult problem, and the authors seem to have tried to address it by taking existing motion capture data and tweaking it to produce new motions along certain dimensions. The authors present several algorithms (motion displacement mapping, timewarping, multiresolution filtering, waveshaping, multi-target interpolation), all of which manipulate the data points to produce a certain kind of motion. The ideas seem to work to a certain degree, but the results seem crude. Has any motion capture studio embraced this idea? Would it be possible for these techniques to be adopted by the authors of the previous paper? -- Disha Al Baqui
