-- MichielVanDePanne - 26 Feb 2006
Group Topic 2.7: Animation Interfaces
Presented by: Zhangbo Liu and Dieter Buys
"Motion Doodles"
I think in certain domains (e.g. pre-visualization for movies), this system could be really useful, at least for creating a first rough pass at an animation. It reminds me of the discussion we had last class about getting a sufficient range of expression out of a character's motion: happy/sad/angry styles of motion don't cover everything an animation might need, just as simple doodle animations can't describe everything we might want a character to do. But it might get us most of the way there, letting artists focus their time on the important aspects rather than getting bogged down in the details of every movement.
I think the main problem with it is that it requires a lot of assumptions and built-in parameterizations that limit its generality beyond specific closed domains. -- Christopher Batty
A good read. Some questions/comments:
- How is the character model warped for motion of the two internal torso joints?
- What is Catmull-Rom interpolation? (brief elaboration would be nice)
- Is there any kind of motion-sketch garbage detection (as there is for character sketching)? Or is it truly garbage-in, garbage-out?
- Which of the proposed extensions have been made? (Having a 3D input interface would be cool!) A meta language that could connect novel motions with the identified gestures would be neat (making it easy for someone else to adapt this system to their own N-link characters).
- Has there been any commercial interest? -- Daniel Eaton
It is a very interesting system, and it does provide an easy tool for creating animation. The problem arises when the conditions are very complex: is it always possible to identify the gesture from a sequence of tokens? Still, it is useful when applied in a suitably constrained environment, like an interactive game or an animated film. -- jianfeng tong
The ability to recognize hand-drawn motion trajectories and even human figures is quite appealing, and a gesture vocabulary for 2D motion can be built without much ambiguity. A Catmull-Rom interpolant is used between successive keyframes; Catmull-Rom interpolation produces cubic curves with C1 continuity, but the "Sketch segmentation" section shows that trajectories can be spike- or hat-like curves, so with only the start, apex, and end locations it would not be able to accurately reproduce the user-drawn trajectories. Maybe it is not meant to be accurate in trajectory, only in the pattern of motion. Question: what does "the global position of the character is controlled separately from the keyframes" mean? -- Steve Chang
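For those asking about Catmull-Rom interpolation: it fits a cubic through each pair of adjacent keys, using the neighbouring keys to set the tangents, so the curve passes through every keyframe with C1 continuity across segments. A minimal per-channel sketch (the function name and scalar setup are illustrative, not from the paper):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate the uniform Catmull-Rom cubic between p1 and p2.

    p0..p3 are four consecutive keyframe values for one channel
    (e.g. a joint angle); t in [0, 1] is the local parameter.
    The segment passes through p1 (t=0) and p2 (t=1), with tangents
    (p2 - p0)/2 and (p3 - p1)/2 -- sharing tangents with the
    neighbouring segments is what gives C1 continuity.
    """
    t2, t3 = t * t, t * t * t
    return 0.5 * (2.0 * p1
                  + (p2 - p0) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t3)

# On equally spaced keys the spline reduces to a line, so halfway
# between keys 1.0 and 2.0 it evaluates to 1.5.
print(catmull_rom(0.0, 1.0, 2.0, 3.0, 0.5))  # 1.5
```

Note that the curve interpolates the keys but not anything in between, which is why a spike drawn by the user can get smoothed away if only its start, apex, and end are kept as keys.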
"Spatial Keyframing for Performance Driven Animation"
This is a nifty idea, but the fact that all the available motions have to be squished into a 2D plane makes producing anything more than a fairly small simple set of animations pretty difficult, I suspect. (However, the author's demonstration of the juggling teddy bear at last year's SCA was hilarious.) -- Christopher Batty
Separating the timing from the posing may make it a little easier for novice keyframers to do their job, but this still seems to be a time-consuming process (a simple periodic motion that involves translation, like walking or hopping, would still take a long time to create). It might also make it harder to produce consistent or precise motions, since the second stage ("performance") is controlled only through a mouse. -- Daniel
Is there any evidence to show that this idea is much faster than the traditional temporal keyframing method? -- jianfeng tong
The animation and articulated figures described in the paper feel more like a "morning cartoon show", since neither the figures nor the animations are that complicated. The animation is obtained by intensive interpolation from even fewer keyframes than temporal keyframing would use, so I would not be surprised if there were some nasty hacks in the implementation to make it work; furthermore, allowing the user to control timing and pose simultaneously really looks like a hack to me. And yes, it is performance-driven (it doesn't look good? don't worry, try again...), though it does bring some flexibility to novice animators. -- Steve Chang
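To make the interpolation discussion concrete: spatial keyframing blends key poses according to where the cursor sits relative to each key's 2D marker (the paper uses a radial-basis-function scheme, if I recall correctly; the inverse-distance weighting below is a simplified stand-in, and all names are illustrative):

```python
def blend_pose(cursor, keys):
    """Blend key poses by inverse-squared-distance weighting in 2D.

    cursor: (x, y) mouse position in the control plane.
    keys: list of ((x, y), pose) pairs, where each pose is a list
    of joint angles. Returns the weighted average pose, snapping
    to a key pose exactly when the cursor lands on its marker.
    """
    weights, poses = [], []
    for (kx, ky), pose in keys:
        d2 = (cursor[0] - kx) ** 2 + (cursor[1] - ky) ** 2
        if d2 == 0.0:            # exactly on a marker: use that pose
            return list(pose)
        weights.append(1.0 / d2)
        poses.append(pose)
    total = sum(weights)
    n = len(poses[0])
    return [sum(w * p[i] for w, p in zip(weights, poses)) / total
            for i in range(n)]

# Halfway between two markers, both weights are equal, so the
# result is the average of the two key poses.
print(blend_pose((0.5, 0.0),
                 [((0.0, 0.0), [0.0, 10.0]),
                  ((1.0, 0.0), [90.0, 30.0])]))  # [45.0, 20.0]
```

Timing then comes for free from the cursor's motion: sampling the cursor position each frame and blending yields the performance-driven animation, which is also why a shaky mouse trajectory translates directly into shaky motion.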