-- MichielVanDePanne - 26 Feb 2006

Group Topic 2.7: Animation Interfaces

Presented by: Zhangbo Liu and Dieter Buys

"Motion Doodles"

This is an example response. To add your own response, click on 'Edit' above. Paragraphs are separated with just a blank line. This paper is interesting because... It is flawed because ... I didn't understand the following bits... Open problems are ... -- Michiel van de Panne

I think in certain domains (eg. pre-visualization for movies), this system could be really useful, at least to create a first rough pass at an animation. It reminds me of the discussion we had last class regarding being able to get sufficient range of expression out of character's motion. ie. happy/sad/angry styles of motion don't cover everything an animation might need, just like simple doodle animations can't describe everything we might want a character to do. But it might get us most of the way there, allowing artists to focus their time on important aspects rather than getting bogged down in the details of every movement. I think the main problem with it is that it requires a lot of assumptions and built-in parameterizations that limit its generality beyond specific closed domains. --Christopher Batty

A good read. Some questions/comments: - How is the character model warped for motion of the two internal torso joints? - What is Catmull-Rom interpolation? (brief elaboration would be nice) - Is there any kind of motion-sketch garbage detection? (like there is for character sketching) Or is it truly/totally Garbage-in-Garbage-Out. - Which of the proposed extensions have been made? (Having a 3D input interface would be cool!) A meta language that could connect novel motions with the identified gestures would be neat (making it easy for someone else to adapt this system to their own N-link characters). - Has there been any commercial interest? -- Daniel Eaton

Catmull-Rom interpolation uses a Catmull-Rom spline, a cubic interpolating spline. The main characteristic is the (somewhat intuitive) choice of tangent at each point: parallel to the line from the previous point to the next point. This leaves the extreme points ambiguous, but just a line from the start point to the next for the first tangent, and to the final point from the previous for the last tangent, are often used. --- HDFY
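To make the tangent rule above concrete, here is a minimal sketch of a single Catmull-Rom segment (a hypothetical helper, not code from the paper): the cubic between p1 and p2 uses p0 and p3 only to set the tangents, which is exactly the "parallel to the line from the previous point to the next point" choice described above.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate the Catmull-Rom cubic between p1 and p2 at t in [0, 1].

    The tangent at p1 is proportional to (p2 - p0) and the tangent at
    p2 is proportional to (p3 - p1), giving C1 continuity across segments.
    """
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)
```

Note that the curve passes through p1 at t = 0 and p2 at t = 1, so a chain of segments interpolates every keyframe rather than merely approximating them (unlike, say, a uniform B-spline).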

This is a very interesting system, and it does provide an easy tool for doing animation. The problem arises when the situation is very complex: is it always possible to identify the gesture from a sequence of tokens? But in any case, it is useful when applied in a constrained environment, like an interactive game or an animated film. ---jianfeng tong

The ability to recognize hand-drawn trajectories of motion, and even human figures, is quite appealing, and a gesture vocabulary for 2D motion can be built without much ambiguity. A Catmull-Rom interpolant is used between successive keyframes, and while Catmull-Rom splines interpolate with cubic curves and C1 continuity, the "Sketch segmentation" section shows that the trajectories can be spike- or hat-like curves; with only start, apex and end locations, the system would not be able to accurately interpolate the user-drawn trajectories. Maybe it is not meant to reproduce the trajectory accurately, but rather to capture a pattern of motion. Question: what does "the global position of the character is controlled separately from the keyframes" mean? -- Steve Chang

I found the idea of sketching out motion to be quite interesting; it can be intuitive and fast, but also imprecise and limited. I think it holds great potential, since as the market for animation --- in movies, games, etc. --- is continually growing, so is the need to speed up production or to make it more accessible. The first problem that I noticed is how the types of skeletons that can be used are constrained; however, with more thought, I realized that while animating, I typically only use a few different types (such as bipeds, quadrupeds, cars, etc.), and often have to recreate similar tasks, like walking or driving. As well, how the character is sketched limits the amount of detail, but this could probably be easily traded off for speed, by explicitly telling the system which link you are working on. Next, I see that there are 18 gestures, but only 31 when including backwards-travelling variants; so I guess five of them are meant for one direction. I understand this for some --- a backwards moonwalk doesn't make much sense, and neither does a forward back-flip or backward front-flip --- but I am not sure what the other two are. Also, why is the handspring a 3D motion? It only seems to need to rotate in 2D. As the paper says, this is not meant to be a replacement for professional tools, but I can definitely see it being incorporated into some of these tools, especially for previewing (as Christopher said). For future work, it mentions adding gestures, but I would hesitate to add too many, since this system aims at being intuitive and already incorporates most of the intuitive motions; for more precise work, a different tool will probably be needed anyway. Because of this, one of my favourite uses is for novice, fun animation, especially for children to learn with. --- H. David Young

A very handy tool for the animator, especially useful for artists doing draft animation. Comments and questions: In some sense, an 18-gesture vocabulary is not enough for artists' imaginations, but if more gestures are added, defining, recognizing and handling them becomes a problem. How does the system handle the trajectory in a 3D environment? A normal input device, such as a mouse, is only a 2D device; does the system use the point where the sketch hits the ground in a fixed camera view to figure out the z-axis location? If I am not satisfied with part of the sketch, can I modify it in the system, or do I have to re-draw the whole sketch? -- Bo Liang

This system takes a very natural way artists render movement, motion lines, and turns it into a gesture vocabulary that lets artists animate any human-like character. I think it's an interesting way for animators to begin visualizing a movement quickly and to concentrate more effectively on the more complicated and detailed aspects of an animation. I was wondering whether a character rendered with more detail in a professional tool could be imported into this program. Maybe an artist would not intuitively know that a certain movement looked unnatural; this program would allow them to recognize their mistakes in either the rendering of the character or the motion itself. -- Disha Al Baqui

Cool idea. Indeed, creating animations in this way allows for extremely quick prototyping of character motion. The sketch lines offer a simple "language" for describing motion. Given the simplicity of the input system, this could be used by a director to specify to an animator the rough path that they'd like a character to take. The animator could then use this base animation and create a final result, with imprecisions cleaned up, using a professional tool. One restriction is that this only works for humanoid (i.e., seven-segmented) figures. It would be nice to experiment with things like snakes, Pixar's lamp, or maybe some four-legged creature. -- KenRose

Definitely a fast, easy way to create relatively complex motions for several simple types of characters. It does make a lot of underlying assumptions, but doing so allows the user to use minimal effort to animate a character. Flexibility is traded off for ease of use, which makes it ideal for learning purposes or basic prototyping of a motion to get a feel of what it may look like. -- Roey Flor

Link to the paper

"Spatial Keyframing for Performance Driven Animation"

This is a nifty idea, but the fact that all the available motions have to be squished into a 2D plane makes producing anything more than a fairly small simple set of animations pretty difficult, I suspect. (However, the author's demonstration of the juggling teddy bear at last year's SCA was hilarious.) -- Christopher Batty

Separating the timing from the posing may make it a little easier for novice keyframers to do their job, but this still seems to be a time-consuming process (a simple periodic motion that involves translation, like walking or hopping, would still take a long time to create). It also might make it harder to produce consistent or precise motions, since the second stage ("performance") is controlled only through a mouse. -- Daniel

Is there any evidence to show that this idea is much faster than the traditional temporal keyframing method? ---jianfeng tong

The animations and articulated figures described in the paper seem more suited to a "morning cartoon show", since neither the figures nor the animations are that complicated. The animation is obtained from intensive interpolation with even fewer keyframes than temporal-keyframing-based animation, so I would not be surprised if there are nasty hacks in the implementation to make it happen; furthermore, allowing the user to control timing and pose simultaneously really looks like a hack to me. And yes, it is performance driven (as in: it doesn't look good? don't worry, try it again...). It does bring some flexibility to novice animators, though. -- Steve Chang

My initial impression, contrary to the author's point, was that this is more difficult to grasp at first than traditional keyframing, but that is quite likely because I have experience with traditional techniques and not with this new method. Of course, the more I read, the more I understood how to use their system. It seemed rather simple, though, until I read the Algorithm section, where all the details were discussed. I found this section interesting because many of the complicating factors were not apparent at first, such as why they do not use angular parameterization, and how they must give special consideration to inverse kinematics and locomotion. I was wondering why they use 3D points for the control, when a mouse has two degrees of freedom, which is easy to navigate in real time, and it appears that most of their points lie on a plane anyway. Overall I thought that the system was an interesting idea, but that currently it looks a bit complicated for novices, and even for professionals, as one of their testers had problems. As well, most professionals are already accustomed to the current keyframing technique. However, it seems like a viable alternative for real-time needs. --- H. David Young
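The core mechanism several comments touch on, blending key poses according to the control cursor's spatial position, can be sketched very simply. The paper interpolates poses with radial basis functions; the sketch below substitutes plain inverse-distance weighting as a simplification (the function and data layout are hypothetical, not the paper's API), but it shows the same idea: each key pose is pinned to a marker position, and moving the cursor continuously blends the nearby poses.

```python
def blend_pose(cursor, keyframes, eps=1e-12):
    """Blend key poses by the cursor's distance to each spatial marker.

    cursor:    (x, y) control position
    keyframes: list of ((x, y), [joint angles...]) pairs
    Simplification: inverse-squared-distance weights stand in for the
    radial basis function interpolation used in the actual paper.
    """
    weights, poses = [], []
    for (kx, ky), pose in keyframes:
        d2 = (cursor[0] - kx) ** 2 + (cursor[1] - ky) ** 2
        if d2 < eps:                # cursor exactly on a marker:
            return list(pose)       # reproduce that key pose exactly
        weights.append(1.0 / d2)
        poses.append(pose)
    total = sum(weights)
    return [sum(w * p[i] for w, p in zip(weights, poses)) / total
            for i in range(len(poses[0]))]
```

Because the blend is a pure function of cursor position, dragging the mouse "performs" the animation in real time, and the timing is simply whatever timing the performance had; this is also why small re-recordings are hard to reproduce exactly, as noted below.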

An interesting idea; however, it seems too unsophisticated for the more experienced animator. The number of controls for manipulating the keyframes is too limited and is likely to frustrate some users. A user's goal may be to play large segments of animation and see how one motion smoothly transitions into another, giving it a realistic look and feel, which is a feature this system is lacking. Overall, it is a potentially interesting idea if extended further. -- Disha Al Baqui

Using pre-defined natural keyframes as a starting point for the system's IK solver would be a better solution; unrealistic poses could then be avoided. In this paper, as well as the previous one, the authors show us two very interesting and innovative ideas on how to design, create and perform animations. The control interfaces and results are attractive, but as two prototype systems there are still gaps before a commercial implementation. As mentioned in this paper, they are good for demonstrations in front of an audience. -- Bo Liang

Performance driven animation has the flaw that it is hard to reproduce. That is, if you want to change the animation so that one small section is a bit more extreme, you have to re-record the entire animation again and hope that your hand is steady enough to recreate the previous motions. Nevertheless, for short animations, perhaps this isn't a problem. Using spatial techniques for keyframing seems like a great way to quickly prototype motions for simple characters. The implementation for locomotion seems hacky, leaving this technique reserved for animations involving a character at a fixed base position. Still though, this class of animations does produce interesting results (I love the juggling teddy bear). -- KenRose

Link to the paper


Topic revision: r14 - 2006-03-01 - RoeyFlor