-- Main.MichielVanDePanne - 26 Feb 2006

---++ Group Topic 2.7: Animation Interfaces

---++++ Presented by: Zhangbo Liu and Dieter Buys

---++++ "Motion Doodles"

This is an example response. To add your own response, click on 'Edit' above. Paragraphs are separated with just a blank line.

This paper is interesting because... It is flawed because... I didn't understand the following bits... Open problems are...

-- Michiel van de Panne

I think in certain domains (e.g. pre-visualization for movies), this system could be really useful, at least for creating a first rough pass at an animation. It reminds me of the discussion we had last class about getting a sufficient range of expression out of a character's motion: happy/sad/angry styles of motion don't cover everything an animation might need, just as simple doodle animations can't describe everything we might want a character to do. But it might get us most of the way there, allowing artists to focus their time on the important aspects rather than getting bogged down in the details of every movement. I think its main problem is that it requires many assumptions and built-in parameterizations that limit its generality beyond specific closed domains.

-- Christopher Batty

A good read. Some questions/comments:

   - How is the character model warped for motion of the two internal torso joints?
   - What is Catmull-Rom interpolation? (A brief elaboration would be nice.)
   - Is there any kind of motion-sketch garbage detection (like there is for character sketching), or is it truly garbage-in-garbage-out?
   - Which of the proposed extensions have been made? (Having a 3D input interface would be cool!) A meta-language that could connect novel motions with the identified gestures would be neat, making it easy for someone else to adapt this system to their own N-link characters.
   - Has there been any commercial interest?
-- Daniel Eaton

Catmull-Rom interpolation uses a Catmull-Rom spline, a cubic interpolating spline. Its main characteristic is the (somewhat intuitive) choice of tangent at each point: parallel to the line from the previous point to the next point. This leaves the endpoints ambiguous, but a common choice is the line from the first point to the second for the first tangent, and the line from the second-to-last point to the last for the final tangent.

--- HDFY

It is a very interesting system, and it does provide an easy tool for creating animation. The problem arises when the conditions are very complex: is it always possible to identify the gesture from a sequence of tokens? Still, it is useful when applied in a constrained environment, such as an interactive game or an animated film.

-- jianfeng tong

The ability to recognize hand-drawn motion trajectories and even human figures is quite appealing, and a gesture vocabulary for 2D motion can be built without much ambiguity. A Catmull-Rom interpolant is used between successive keyframes, and Catmull-Rom interpolation produces cubic curves with C1 continuity; but according to the Sketch Segmentation section, the trajectories can be spike- or hat-like curves, so with only the start, apex, and end locations it would not be able to accurately interpolate the user-drawn trajectories. Then again, it is probably not meant to reproduce the trajectory exactly, but rather a pattern of motion. Question: what does it mean that the global position of the character is controlled separately from the keyframes?

-- Steve Chang

I found the idea of sketching out motion to be quite interesting; it can be intuitive and fast, but also imprecise and limited. I think it holds great potential: as the market for animation --- in movies, games, etc. --- continues to grow, so does the need to speed up production or to make it more accessible.
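To make the Catmull-Rom description above concrete, here is a minimal sketch of evaluating one segment of a 2D Catmull-Rom spline, using the interior-tangent rule described above and the common chord-to-neighbour convention at the endpoints (the function name and structure are my own, not from the paper):

```python
def catmull_rom(points, i, t):
    """Evaluate the Catmull-Rom segment between points[i] and points[i+1] at t in [0, 1]."""
    def tangent(k):
        if 0 < k < len(points) - 1:
            # interior point: tangent is parallel to the chord from the previous point to the next
            return [(points[k + 1][d] - points[k - 1][d]) * 0.5 for d in range(2)]
        if k == 0:  # first point: chord toward the second point
            return [points[1][d] - points[0][d] for d in range(2)]
        return [points[-1][d] - points[-2][d] for d in range(2)]  # last point

    p0, p1 = points[i], points[i + 1]
    m0, m1 = tangent(i), tangent(i + 1)
    # cubic Hermite basis functions
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return [h00 * p0[d] + h10 * m0[d] + h01 * p1[d] + h11 * m1[d] for d in range(2)]
```

Because each segment interpolates its endpoints and adjacent segments share tangents, the resulting curve passes through every control point with C1 continuity, which is exactly the property discussed above.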
The first problem I noticed is that the types of skeletons that can be used are constrained; however, on further thought, I realized that while animating I typically use only a few different types (such as bipeds, quadrupeds, cars, etc.), and often have to recreate similar tasks, like walking or driving. As well, the way the character is sketched limits the amount of detail, but this could probably be traded off for speed easily enough by explicitly telling the system which link you are working on. Next, I see that there are 18 gestures, but 31 when including backwards-travelling variants; so I guess five of them are meant for only one direction. I understand this for some --- a backwards moonwalk doesn't make much sense, and neither does a forward back-flip or a backward front-flip --- but I am not sure what the other two are. Also, why is the handspring a 3D motion? It only seems to need to rotate in 2D.

As the paper says, this is not meant to be a replacement for professional tools, but I can definitely see it being incorporated into some of those tools, especially for previewing (as Christopher said). For future work, the paper mentions adding gestures, but I would hesitate to add too many, since the system aims to be intuitive and already incorporates most of the intuitive motions; for more precise work, a different tool will probably be needed anyway. Because of this, one of my favourite uses is novice, just-for-fun animation, especially as a way for children to learn.

--- H. David Young

[[http://www.cs.ubc.ca/~van/papers/doodle.pdf][Link to the paper]]

---++++ "Spatial Keyframing for Performance Driven Animation"

This is a nifty idea, but the fact that all the available motions have to be squished into a 2D plane makes producing anything more than a fairly small, simple set of animations pretty difficult, I suspect. (However, the author's demonstration of the juggling teddy bear at last year's SCA was hilarious.)
-- Christopher Batty

Separating the timing from the posing may make it a little easier for novice keyframers to do their job, but this still seems to be a time-consuming process (a simple periodic motion that involves translation, like walking or hopping, would still take a long time to create). It may also make it harder to produce consistent or precise motions, since the second stage ("performance") is controlled with just a mouse.

-- Daniel

Is there any evidence to show that this idea is much faster than the traditional temporal keyframing method?

-- jianfeng tong

The animation and articulated figure described in the paper seem better suited to a morning cartoon show, since neither is very complicated. The animation is obtained from intensive interpolation with even fewer keyframes than temporal-keyframing-based animation, so I would not be surprised if there are some nasty hacks in the implementation to make it happen; furthermore, allowing the user to control timing and pose simultaneously really looks like a hack to me. And yes, it is performance driven (doesn't look good? Don't worry, try it again), though it does bring some flexibility to novice animators.

-- Steve Chang

My initial impression, contrary to the author's point, was that this is more difficult to grasp at first than traditional keyframing, but that is quite likely because I have experience with traditional techniques and not with this new method. Of course, the more I read, the more I understood how to use their system. It seemed rather simple, though, until I read the Algorithm section, where all the details were discussed. I found this section interesting because many of the complicating factors were not apparent at first, such as why they do not use angular parameterization, and how they must give special consideration to inverse kinematics and locomotion.
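As an aside, the core interpolation being discussed can be sketched very simply: each key pose is attached to a marker in the control space, and the current pose is a blend of the key poses weighted by the cursor's proximity to those markers. The paper solves this with radial basis functions; the sketch below substitutes plain inverse-distance weighting for the RBF solve, so treat it as an illustration of the idea rather than the paper's actual method (all names are hypothetical):

```python
def blend_pose(cursor, markers, poses, eps=1e-12):
    """Blend key poses by the cursor's proximity to their spatial markers.

    cursor  -- current control position, e.g. (x, y)
    markers -- one control-space position per key pose
    poses   -- one pose (list of joint values) per marker
    Inverse-distance weighting stands in for the paper's RBF interpolation.
    """
    weights = []
    for idx, m in enumerate(markers):
        d2 = sum((c - mc) ** 2 for c, mc in zip(cursor, m))
        if d2 < eps:  # cursor sits exactly on a marker: reproduce that key pose
            return list(poses[idx])
        weights.append(1.0 / d2)
    total = sum(weights)
    n_joints = len(poses[0])
    return [sum(w * p[j] for w, p in zip(weights, poses)) / total
            for j in range(n_joints)]
```

This makes the trade-off above visible: moving the cursor gives smooth, immediate pose changes (hence "performance driven"), but every reachable pose is confined to blends of the keys placed in that 2D space.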
I was wondering why they use 3D points for the control, when a mouse has two degrees of freedom that are easy to navigate in real time, and most of their points appear to lie on a plane anyway. Overall I thought the system was an interesting idea, but it currently looks a bit complicated for novices, and even for professionals, as one of their testers had problems. As well, most professionals are already accustomed to the current keyframing technique. However, it seems like a viable alternative for real-time needs.

--- H. David Young

[[http://www-ui.is.s.u-tokyo.ac.jp/~takeo/papers/squirrel.pdf][Link to the paper]]
Topic revision: r9 - 2006-03-01 - hdfy