Double Fine Action Forums
DF Oliver

Programming Update #4: Animating the Jack

Recommended Posts

I'm not 100% certain if it was mentioned in this thread or a previous one, but there was a discussion on whether or not the team would animate the character in such a way that the layers of animation are given 3D depth, as opposed to lying on a completely flat plane (Paper Mario style). I just wanted to verify whether the two operations would be computationally equivalent. In other words, is drawing a triangle in OpenGL constant time regardless of its orientation in 3D space, or are there conditions where it might be faster? For example, SDL does not offer 3D drawing. Does it accomplish its 2D screens the same way OpenGL does (2 triangles), and if so, would it also be computationally equivalent?

Cheers,

C

The rendering is computationally equivalent when drawing 2D planes with depth compared to 2D planes without depth (er, referring to things being far away vs. close up, not depth testing), yes, as far as the GPU goes. You may have a bit more code to run when deciding whether the cursor is above something or not, though.
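
A rough way to see why: on the GPU, every vertex goes through the same model-view-projection multiply whether the quad is screen-aligned or leaning back in z. This is a hypothetical sketch (the matrix layout and numbers are illustrative, not Moai/OpenGL code):

```python
import math

def mat_vec(m, v):
    # 4x4 row-major matrix times a 4-component vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def perspective(fov_y, aspect, near, far):
    # standard OpenGL-style perspective projection matrix
    f = 1.0 / math.tan(fov_y / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

proj = perspective(math.radians(60), 16 / 9, 0.1, 100.0)

flat_vertex   = [1.0, 1.0, -5.0, 1.0]  # vertex of a screen-aligned quad
tilted_vertex = [1.0, 1.0, -7.3, 1.0]  # same vertex, quad leaning back in z

# One multiply per vertex either way: the orientation changes the numbers,
# not the amount of per-vertex work.
a = mat_vec(proj, flat_vertex)
b = mat_vec(proj, tilted_vertex)
```

Depth testing, overdraw, and cursor-picking code can differ, but the per-vertex transform cost is identical.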


This is great!

As a Moai developer myself, I can't help but ask whether this work will find its way into the Moai SDK? I am slowly writing my own high-level Moai framework (including SVG importing, 3D models from 3ds Max, a video player, etc.; the stuff I plan to publish as open source) and I was looking into ways to accomplish bone animation myself. If the work you do can become a Moai feature, that would be great, as I could focus my time on the many other areas I still need to cover.

In the meantime, for us developer-dork-fans, can we get at least an "aerial overview" of how you accomplished this technically?

Thanks a bunch!

Nenad

Yep, that should already be in the development branch. I'm not sure how far along it is in the release branch, but I'm sure it'll appear soon.

It was pretty simple, actually. I added a new class called MOAISkinnedMesh that inherits from MOAIMesh. It is responsible for setting up the bind-pose matrices and for calculating the current set of skin transforms. Then I send these to the graphics device, which will bind them to the shader (if it uses skinning matrices). In order for that to work I also had to extend MOAIGfxDevice and MOAIShader, and I wrote a version of the mesh shader that uses the skinning matrices.

So all you need to do from the Lua side is set up the scene hierarchy and tell the skinned mesh deck which joints it needs to consider, and there you go...
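
For anyone who wants the idea behind this without reading the source: the technique described is standard linear blend skinning, where each vertex is transformed by a weighted sum of (current joint transform) × (inverse bind-pose) matrices. Here is a minimal 2D sketch; the setup and names are mine, not Double Fine's code:

```python
import math

def mul(a, b):
    # 3x3 homogeneous matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, p):
    # transform a 2D point by a homogeneous 3x3 matrix
    x, y = p
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def invert_rigid(m):
    # inverse of a rotation+translation matrix: transpose R, rotate-negate t
    r = [[m[0][0], m[1][0]], [m[0][1], m[1][1]]]
    tx = -(r[0][0]*m[0][2] + r[0][1]*m[1][2])
    ty = -(r[1][0]*m[0][2] + r[1][1]*m[1][2])
    return [[r[0][0], r[0][1], tx], [r[1][0], r[1][1], ty], [0, 0, 1]]

# Bind pose: one joint at the origin, another at x=2 (an "elbow").
bind = [translate(0, 0), translate(2, 0)]
inv_bind = [invert_rigid(m) for m in bind]

# Current pose: the second joint rotated 90 degrees about its pivot.
current = [translate(0, 0), mul(translate(2, 0), rotate(math.pi / 2))]

# The skin matrices that get handed to the shader.
skin = [mul(c, ib) for c, ib in zip(current, inv_bind)]

def skin_vertex(p, weights):
    # weighted blend of each influencing joint's skin matrix
    x = sum(w * apply(skin[j], p)[0] for j, w in weights)
    y = sum(w * apply(skin[j], p)[1] for j, w in weights)
    return (x, y)

# A vertex one unit past the elbow, fully weighted to joint 1, swings up:
tip = skin_vertex((3.0, 0.0), [(1, 1.0)])
```

In the real thing the blend runs per vertex in the vertex shader, with the skin matrices uploaded as uniforms, but the math is the same.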


@Oliver,

Wow, I wouldn't have expected it to be in the dev branch already! Thanks a bunch, guys, and now I'm off to GitHub :)

No sleeping for me this week :)

Cheers

N


I haven't played the game 'The ACT' ( http://theactgame.com/ ), but is there an engine they use that you might be able to apply? Or is it maybe simply a click-to-play-a-QuickTime-file type of game?

In any case it is exciting to see how the process goes. Being an animator myself, I don't really know anything about the technical stuff like this.

If you haven't seen the actual notes about creating the game: they hired 25 professional traditional animators from top-notch studios to create the animation on paper. I guess there is no budget like that here (considering animators like that get paid around $500 per day of work or more), which is why they are experimenting with blending a skeletal setup and sprites.

Depending on the Maya setup and the number of sprites/meshes they use, the animation quality may be very good and combine the best of both worlds. I guess the real challenge will be to keep the hand-animated texture swaps consistent with the much more fluid joint animation. Since the engine will be interpolating the joint animation curves, it may be a visual hiccup to see the sprites not being smooth enough. There are some games that handle it well, but they use very snappy animation timing to get away with it, and hide the time in between the poses with some sort of transition image suggesting the motion.
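
The hiccup in question comes from mixing two sampling modes: joint curves are interpolated between keyframes, while sprite swaps hold (step) until the next drawing. A hypothetical sketch, with made-up keyframe data:

```python
def lerp_sample(keys, t):
    # keys: sorted list of (time, value); linear interpolation between keys
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return v0 + f * (v1 - v0)
    return keys[-1][1]

def step_sample(keys, t):
    # hold the last key at or before t (how a flipbook sprite swap behaves)
    value = keys[0][1]
    for t0, v0 in keys:
        if t0 <= t:
            value = v0
    return value

arm_angle = [(0.0, 0.0), (0.5, 90.0)]   # joint curve, degrees
mouth_frame = [(0.0, 0), (0.5, 1)]      # hand-drawn sprite index

# Halfway in, the joint is mid-swing but the drawing hasn't changed yet:
angle = lerp_sample(arm_angle, 0.25)
frame = step_sample(mouth_frame, 0.25)
```

That phase difference between the smooth curve and the held drawing is exactly the visual mismatch described above.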

@DF

How are you planning to handle different head shapes and body shapes? For example, while seeing the character from the profile or from the back you need to swap the entire thing (and also do that for the actual animation of turning around). So will each head-turn image have its own separate joint setup (like back of the head, profile, 3/4)? Adding additional joints to the facial shapes, for example, may enable the animators to get better control over them (i.e. deforming the lips in ways different from what is already drawn on the texture), and you may just use Driven Keys within Maya to swap the visibility of the meshes. But I guess exporting this data to the engine would be a bit painful when it comes to managing the animation clips later on.


I'll let Ray talk about the animation style questions, but from a technical point of view we definitely support swapping out geometry. In fact, the visibility of each individual piece of geometry in the Maya scene can be animated. This way the artists can draw different versions of, let's say, the head and simply swap them in and out as needed. So we'll most likely use flipbook animation for different mouth shapes and stuff like that.
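
A minimal sketch of the animated-visibility idea, assuming per-piece boolean tracks sampled with hold (step) interpolation; the piece names and track data here are invented:

```python
# Each drawable piece gets its own animated visibility track: a list of
# (time, visible) keys, held until the next key.
visibility = {
    "head_front":   [(0.0, True),  (1.0, False)],
    "head_profile": [(0.0, False), (1.0, True)],
}

def visible_pieces(t):
    shown = []
    for name, track in visibility.items():
        state = track[0][1]
        for time, value in track:   # hold the last key at or before t
            if time <= t:
                state = value
        if state:
            shown.append(name)
    return sorted(shown)

# Before the head turn, only the front head draws; after it, the profile.
```

Driving these tracks from Maya (e.g. via Driven Keys on visibility, as suggested above) then maps naturally onto a per-piece show/hide at export time.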

Rayman Origins' animation looks cheap as well. The backgrounds are lovely, but they cheaped out on the animation and it looks like a Flash cartoon.

And getting back to Rayman's style, I personally find it ugly to look at in motion, but it's just a personal opinion. It makes gorgeous stills sometimes, but it all looks so "fake", not entirely sure how to describe it to you.

Agreed. You may place me firmly in the camp of those that find Rayman's animation style to look terribly cheap in motion.

Even more to the point, Rayman is a good example of the most prevalent problems in recent animation work (particularly on iOS games and other Flash-like 2D products). All the effort goes into achieving high resolution output that moves at a high framerate, meaning a thoroughgoing reliance on vector art and various tweening / squishing / transforming techniques. Sadly, high resolution and framerate alone will manage to wow many audiences, but these factors aren't what actually matters to the animation quality.

In fact, raising the number of frames and pixels artificially (relying heavily on vector-y art and tweens) can produce a much worse looking product. You're not adding any new details or information in that manner; you're just smoothing out all the edges and textures (even movement has a "texture" to the jumps and omissions) and giving the entire production an artificial feeling. It's a bit like the embarrassingly awful modes some TV sets started adding that will inject additional frames into a movie in order to raise it to a high refresh rate; it looks absolutely terrible, and ruins the movement of the original photography.

I'd always gladly take 5 well-drawn frames over 60 frames that are just rotations and transformations of a handful of character assets carved up and skinned onto a skeletal frame. That's not to say that I don't have faith in DF's product, because I do, but this is certainly the most disheartening thing I've read since the project began. I consider the recent trends in 2D gaming (particularly on iOS and with various indie titles) to be rather terrible setbacks for the medium, and Rayman is quite the opposite of progress to my eyes.


The problem with that is the lack of actual volume. While in traditional animation or 3D the volume of the character is solid and constant, skeletal animation on planes with different textures swapping in looks very limited and flat.

Hmm, the walking animation for the extreme lumberjack still looks a bit weird. I suppose it's that part with the knee popping forward on the leg that has just landed. He looks almost like he's going to sit there.

Anyway, 30 fps isn't smooth :PPPP. You have to add 10-20 more, ideally 60, to make it smooth.

It is pretty smooth in the game, but we had to drop a couple of frames when creating the GIF. So some artifacts come from that. :-)

It doesn't seem to me that the knee popping is from frame dropping, because the background still moves smoothly while the knee popping happens. Knee popping is a common issue with skeletal walking animation, and it seems to be a rigging issue. It can either be fixed by just not letting the knee get straight enough to pop, or by doing whatever magic this guy did:
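
The simpler fix (not letting the knee reach full extension) can be sketched with a two-bone leg and the law of cosines. This is a hypothetical illustration; the bone lengths and the 170-degree limit are made-up values:

```python
import math

def knee_angle(thigh, shin, hip_to_foot):
    # interior knee angle from the law of cosines; pi means fully straight
    d = min(hip_to_foot, thigh + shin)
    cos_knee = (thigh**2 + shin**2 - d**2) / (2 * thigh * shin)
    return math.acos(max(-1.0, min(1.0, cos_knee)))

def clamped_reach(thigh, shin, hip_to_foot, max_knee=math.radians(170)):
    # longest hip-to-foot distance whose knee angle stays below max_knee,
    # so the leg never quite straightens and the pop never happens
    limit = math.sqrt(thigh**2 + shin**2
                      - 2 * thigh * shin * math.cos(max_knee))
    return min(hip_to_foot, limit)

# Asking for a fully straight leg gets clamped just short of it:
reach = clamped_reach(1.0, 1.0, 2.0)
angle = knee_angle(1.0, 1.0, reach)
```

The visible pop comes from the knee snapping through the fully-extended singularity; keeping a couple of degrees of bend at all times avoids it.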


Reading through the post on the differences between facial and skeletal animation, a cool idea came to me. It's not something I have noticed in adventure games, but it would be cool if there was facial interaction with the world while moving.

As an example: while walking through the forest from one screen to another with nothing going on, the character's head could turn to look at the leaves on the trees, perhaps spotting a bird's nest.

You can have a bird fly through the forest, and initially the character's eyes track it, and as the bird flies by the head is turned to follow it for a bit until returning to look forward to focus on walking.

It would also be so appropriate for a lumberjack to be whistling while walking through the woods. I know the real character won't be a lumberjack, but hopefully it gives some cool ideas.

A point & click adventure game has traditionally been about interacting with a static environment, but there's no reason it can't evolve into interacting with a more active environment that feels alive.

Aside from background events that are reacted to, a more active form would be having the head and eyes follow the mouse cursor, to demonstrate that you are controlling what the character is paying attention to. It would be fun if, when the player leaves the mouse untouched and unmoving for a few seconds, the character looked like they were getting bored and started looking around the screen. It could even create a subtle hint system where the character's head occasionally glances in the direction of something that can be interacted with, perhaps with an inquisitive expression.
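
The idle-gaze idea could be sketched as a tiny state choice, with an assumed 3-second idle threshold and hypothetical hotspot coordinates:

```python
IDLE_THRESHOLD = 3.0   # seconds without mouse movement (assumed value)

def gaze_target(cursor, idle_seconds, hotspots):
    # follow the cursor while the player is actively moving the mouse
    if idle_seconds < IDLE_THRESHOLD:
        return cursor
    # bored: cycle a glance through nearby interactable hotspots
    if hotspots:
        return hotspots[int(idle_seconds) % len(hotspots)]
    return cursor

nest, axe = (120, 40), (300, 200)       # made-up hotspot positions
active = gaze_target((10, 10), 1.0, [nest, axe])
bored = gaze_target((10, 10), 4.5, [nest, axe])
```

The same target point could drive both the eye look-at and a head-turn blend, which is what makes the hint system feel incidental rather than scripted.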

Great idea, Sabarok! Grim Fandango used this kind of interaction with objects (Manny's head turned towards the nearest object).

Thanks to Oliver and Ray for the really interesting insights into development :)

This discussion about animation techniques is becoming really interesting..

Like most of us backers, I'm also a big fan of the old sprite animations used in Lucas games in the '90s. But since we're gonna play DFA in HD resolutions, and all the other dynamic visual elements like parallax backgrounds, lights, and particle effects are gonna be animated in such a smooth way (30/60 fps), I believe the hand-drawn sprite animations would stand out in the wrong way.

I've noticed the same "fault" (in my opinion, at least) in The Whispered World and in the MI special editions.

The biggest difference is when the character is walking. The sprite container moves smoothly at 60 FPS, but the animation cycle changes at, let's say, 12 FPS.

This is tricky.. In the end I believe the skeletal animation approach the DF devs chose is the better one, especially when they do the "impossible poses" of the sprite technique. It's gonna be kind of a mix, but in a good way, I believe. The guys know how to do it within the limits of good taste.
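
The walking mismatch described above (smooth per-frame movement, stepped low-rate drawings) can be sketched like this; the speed and frame counts are assumed values:

```python
WALK_SPEED = 120.0    # pixels per second (assumed)
CYCLE_FPS = 12        # hand-drawn walk-cycle drawings per second
CYCLE_FRAMES = 8      # drawings in the cycle (assumed)

def sample(t):
    x = WALK_SPEED * t                          # smooth: updates every render
    frame = int(t * CYCLE_FPS) % CYCLE_FRAMES   # stepped: 12 changes per second
    return x, frame

# Two consecutive 60 fps render frames: the position moves,
# but the drawing can stay the same.
x0, f0 = sample(10.5 / 60)
x1, f1 = sample(11.5 / 60)
```

Since the position advances on frames where the drawing does not, the character appears to glide between drawings, which is exactly the "fault" noticed in those games.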

Keep up the good work Oliver and Ray! :)


I forgot to ask the question that was the reason I read all 11 pages of this thread (which took me over 2 hours, btw)... :)

Would it be possible to write some kind of algorithm to take care of lip sync animations in talk scenes?

Would it be possible to draw facial expressions (or only the mouth) for the most common syllables? (You know, a word consists of syllables... I don't know if that is the right word; I'm from Finland, you know. English is really not my native language...)

And then the algorithm would handle the text dialog and split the sentences and words into syllables, each of which would have its own sprite for animation.

Of course there could be problems with the timing of the voice-over.. I don't know.. :)

This is just my thought on how the lip sync could be done better, if someone could implement it..?

How is lip sync animation usually done in games (or in animated cartoons)? I have no idea, but I'd love to know!

Do you just cycle randomly through a couple of different facial expressions, or something more sophisticated?

I've been wondering this ever since 1993, when I first played Sierra's Gabriel Knight (Sins of the Fathers), which had really good lip sync animations in the talk scenes (at least compared to other adventure games of that era)..

It has been done in other games, so it is possible from a technical standpoint. Whether or not it is financially feasible compared to the alternatives is a different question, and probably the one you mean. Star Trek Voyager: Elite Force is a game that comes to mind with automatic lip sync animation.

Since the characters are stylized rather than done in a high-resolution realistic style, there is a lot less involved in lip syncing. Usually it's little more than opening or closing the mouth, with small touches on the eyes. A game that uses high-definition realistic faces would have to deal with the subtle details of the lips, cheeks, and jaw based on the sound being made.

I would be interested to know how Double Fine has done lip syncing in their earlier games.

It's an interesting question, and maybe worthy of an update on its own.

The quick version is that we have a Maya tool that allows us to define a set of visemes for each character (with optional support for unique visemes per emotion). A viseme is just a bunch of settings for the various face controls, and we have a file that maps visemes to a standard set of phonemes (it can be a one-to-many mapping). We then have a Maya plugin that wraps Annosoft's phoneme tech and integrates with our localization tech. You feed that plugin a wav and the text for that wav, and it'll generate the timing of the phonemes for that line. We then turn that data into keyframes using the aforementioned phoneme<->viseme map and bam, that's an auto-lipsync file. The data is stored as animation curves in Maya, so animators can clean up/tweak/modify by hand if they wish.
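
For the curious, the last step of that pipeline (timed phonemes mapped through a phoneme-to-viseme table into keyframes) could be sketched roughly like this. The phoneme/viseme names and mapping here are invented for illustration, not Double Fine's actual data:

```python
# Many phonemes can share one viseme (a one-to-many mapping).
PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth_on_lip", "V": "teeth_on_lip",
}

def build_keyframes(timed_phonemes):
    # timed_phonemes: list of (start_seconds, phoneme), as a phoneme
    # extraction tool would emit for one line of dialog
    keys = []
    for start, phoneme in timed_phonemes:
        viseme = PHONEME_TO_VISEME.get(phoneme, "rest")
        if not keys or keys[-1][1] != viseme:   # drop redundant repeats
            keys.append((start, viseme))
    return keys

# "mama" becomes a closed/open alternation, ready for an animator to tweak:
keys = build_keyframes([(0.00, "M"), (0.08, "AA"),
                        (0.20, "M"), (0.28, "AA")])
```

In the real tool each viseme key would expand into curve keys on the individual face controls, which is why the result stays hand-editable in Maya.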

Very cool! Thanks for the update.

A couple questions because I don't know/understand enough and want to learn more!

Is there a reason there are 3 joints in the stomach of the skeleton?

Also, the head seems to be connected through a center point at the chin and then down the rest of the body. Is there a reason it doesn't have a central point in the middle of the head? Does that stuff matter in a virtual skeleton? Or is that just how it all landed when putting this update together?

I'm not an animator, so I can't really answer this question from a creative perspective, but technically it doesn't matter where the joints are. For example, the head geometry is attached to the neck joint, so when it rotates the geometry will follow. Generally the animators use the minimal set of bones they need for an expressive character, and that may or may not follow human anatomy... :-)

I'm super late to this, but I didn't see anyone else answer in the bunch of pages I looked at, so here goes :) The 3 "stomach" joints are actually spine joints. The joints are there to make bends between his neck and pelvis if need be (in the case of the lumberjack, I'm not sure they are needed, because he's not cut up to use the extra joints; I'm also not sure if the joint export they use will transfer mesh deformation to make bends, as I assume they are just exporting translation and rotation). The position of each joint does matter, as that is where the geometry (in this case flat planes with artwork mapped on) rotates from. So for the lumberjack's head, you wouldn't want to place a joint in the center, because his head rotates from the neck. If he were a character that had no neck (like Cartman) you could possibly center the neck joint, but even Cartman rotates pretty much from the bottom center.

I'm not sure if you've ever put a paper doll together, but this is the same idea. Everywhere you place a brad is an area you rotate from.
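
The paper-doll point can be shown with a quick rotation-about-a-pivot sketch (the coordinates are made up): the same rotation produces a very different motion depending on where the "brad" sits.

```python
import math

def rotate_about(point, pivot, angle):
    # rotate a 2D point about an arbitrary pivot
    px, py = pivot
    x, y = point[0] - px, point[1] - py
    c, s = math.cos(angle), math.sin(angle)
    return (px + x * c - y * s, py + x * s + y * c)

top_of_head = (0.0, 2.0)
neck_pivot = (0.0, 0.0)      # paper-doll brad at the neck
center_pivot = (0.0, 1.0)    # pivot in the middle of the head

a = rotate_about(top_of_head, neck_pivot, math.radians(90))
b = rotate_about(top_of_head, center_pivot, math.radians(90))
# pivoting at the neck sweeps the head through an arc;
# pivoting at the center just spins it in place
```

This is why joint placement matters even on flat planes: the joint position is the rotation center of everything skinned to it.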


Oliver! Again, thank you so much. I love that you guys are putting in the time to describe every process, and since I'm getting caught up on every lecture you guys are posting, I'm happy to find new things over and over again. I've just started a project of some small videos for kids where there are gonna be characters involved; I'm gonna be doing the project with my partner in Flash, and I think this breakdown that you did is tremendously valuable for the process of using bones.

So happy to learn from this

V.

You are very welcome. I'm glad our updates are useful. :-)

Hello

I've been poking around on the getmoai.com website, their forums, and the public Moai GitHub repository, and I can't find any mention of MOAISkinnedMesh. Anyone have any insight into when or where it might be available?

Thanks!

I haven't tested it, but for skeletal animation I think it's here: https://github.com/moai/moai-dev/tree/skeletal

That's an earlier iteration from the MOAI guys. Whenever we send them code, it goes into a private "from doublefine" Git branch. Then the Moai guys integrate it into the formal branch if/when they decide that they like it. I think in this case they are still figuring out which implementation they want to go with.


Hey Oliver, thanks for all the information, it's really helping a lot.

Out of curiosity, how do you guys handle a character moving in pseudo-perspective? Any information or resource that you can share?

Thanks,

Gianmichele
