Hash, Inc. - Animation:Master


Posted

Here's the link to the newest Ernie cartoon:

 

 

The good news is that this was created in record time. I wrote it last night, captured this morning, exported and rendered this afternoon, and tonight it is posted. This is partly due to the direct A:M action export Luuk has added to Zign Track, and partly because I'm getting the hang of it.

 

The bad news is his antennae still do not bounce. At Matt's suggestion I did not pre-solve the spring system but let it solve as it went; unfortunately, I forgot to change the threshold to zero. The result: wooden antenna syndrome. Next time, I promise.


Posted

Next-time-shmextime!

 

DON'T do the SSS! It's not always needed. (referring to simulate-spring-systems...not sub-surface-scattering) Try this: Remove the dynamic you now have... and then apply another. That's it...leave the settings alone, unless maybe to lower the constraint percentage to get more 'wiggle'... and then RENDER...NO SSS!

 

I NEED IT!

 

The time frame you mention is very eye-opening... my mind boggles at the possibilities. I hope that years from now we look back at 2008 and say "That's when it all happened... remember Ernie?" This opens the door for virtual news reporters... or 'timely animations' delivered very quickly... or, as in Ernie's case, animated opinion-givers spewing 'bovine scatology'...

 

I'm itching to get going on this front... but am currently battling an A:M dilemma that is very important to me: hair collision detection, specifically for LONG hair... I think I have it figured out...

 

LOOKING FORWARD TO THE NEXT INSTALLMENT!

Posted

After simulating the dynamics, you need to turn off the dynamic constraints (enforcement 0%). Also, turning the constraint off in the chor or action creates keyframes on the dynamic bones; you must delete these, because they override the dynamic simulation data. You have to do this in v14; in v13 you do not have to do any of it. If you have the dynamics set up in a pose, you can turn it on/off from the user properties of the model, under the Objects folder. This will not create the keyframes.

Posted
That looks nice, Ben. Did you use the export as pose drivers directly from Zign Track?

No I haven't tried the pose export yet, as it will require re-rigging the face poses to be driven by other poses, and I'm lazy.

Posted

One of the best capture tests to date. Still room for improvement but it could just be the mouth poses and not the capture software.

Posted
One of the best capture tests to date. Still room for improvement but it could just be the mouth poses and not the capture software.

Can you tell me where it didn't work for you, Ken? I'm pretty close to the material, so I don't always see what others do. I'm mostly seeing the need for more brow expression.

Posted
One of the best capture tests to date. Still room for improvement but it could just be the mouth poses and not the capture software.

Can you tell me where it didn't work for you, Ken? I'm pretty close to the material, so I don't always see what others do. I'm mostly seeing the need for more brow expression.

 

What I notice is that when the mouth opens, the upper lip moves as far up as the lower lip moves down. Normally the lower lip would go down farther than the upper lip. That can easily be corrected by adjusting the mouth-open pose.
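A rough Python sketch of that correction, just to illustrate the idea; the single mouth-open value and the 30/70 split are made-up numbers for illustration, not anything taken from Zign Track or the A:M rig:

```python
def split_mouth_open(open_amount, upper_share=0.3, lower_share=0.7):
    """Split one mouth-open value (0..100) between the two lips.

    The 30/70 shares are illustrative; the point is only that the lower
    lip should travel farther than the upper lip when the mouth opens.
    """
    upper_lip_pose = open_amount * upper_share   # upper lip moves up a little
    lower_lip_pose = open_amount * lower_share   # lower lip drops most of the way
    return upper_lip_pose, lower_lip_pose

print(split_mouth_open(100))  # -> (30.0, 70.0)
```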

Can you tell me how your lip poses work? For the A:M pose export I have written for Zign Track, the poses are based on as many properties as possible to detect the shape of the lips. I wonder how you have it set up for your poses, since you create them in A:M.

Maybe you should give the Zign Track pose control a try? You can add a few poses that drive the existing poses. I was done adjusting Squetchy Sam in 10 minutes.

Posted
Can you tell me where it didn't work for you, Ken?

 

For example, at 17 sec, on the word "drinking", there's no Oooo mouth shape for the R sound. Looks odd. And generally, at times it looks like he's either mumbling or shouting the lines... but, from the sound of the dialog, you're not.

Posted
One of the best capture tests to date. Still room for improvement but it could just be the mouth poses and not the capture software.

Can you tell me where it didn't work for you, Ken? I'm pretty close to the material, so I don't always see what others do. I'm mostly seeing the need for more brow expression.

 

What I notice is that when the mouth opens, the upper lip moves as far up as the lower lip moves down. Normally the lower lip would go down farther than the upper lip. That can easily be corrected by adjusting the mouth-open pose.

Can you tell me how your lip poses work? For the A:M pose export I have written for Zign Track, the poses are based on as many properties as possible to detect the shape of the lips. I wonder how you have it set up for your poses, since you create them in A:M.

Maybe you should give the Zign Track pose control a try? You can add a few poses that drive the existing poses. I was done adjusting Squetchy Sam in 10 minutes.

Luuk,

 

Here is a test using poses out of Zign. I had to do a bit of remapping.

 

tiger2_.mov

 

How are you determining the upper and lower limits for the poses? Some of them used their full range, while others used only half of it. Does Zign compute upper and lower limits on each capture, or do you use a fixed scale?

 

Also, it seems like the brow pose is clamping at 0, even though the manual says it goes to -100. It looks like the keys below zero are being clamped off, as opposed to having 50 be the midpoint. But it could just be my capture; I'm not sure yet, just something you might check.
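To illustrate the difference I mean, here is a small Python sketch. It assumes a signed capture value in the -100..100 range and an A:M percentage pose in 0..100; neither mapping is Zign Track's actual code, they only contrast the two behaviors:

```python
def remap_midpoint(value, lo=-100.0, hi=100.0):
    """Map a signed value (-100..100) onto a 0..100 pose with 50 as neutral."""
    return (value - lo) / (hi - lo) * 100.0

def clamp_at_zero(value):
    """What the export currently looks like it does: keys below zero are cut off."""
    return max(0.0, value)

for v in (-100, -50, 0, 50, 100):
    print(v, remap_midpoint(v), clamp_at_zero(v))
# remap_midpoint: -100 -> 0, 0 -> 50, 100 -> 100
# clamp_at_zero:  -100 -> 0, 0 -> 0,  100 -> 100
```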

 

My system used secondary bones that measured the width and height of the mouth by scaling on the z axis.
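Roughly, the measurement those secondary bones do could be sketched like this in Python; the point names and the neutral-frame normalization are illustrative assumptions, not the actual rig:

```python
import math

def distance(a, b):
    """Euclidean distance between two 2D points (x, y)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mouth_scale(left_corner, right_corner, upper_lip, lower_lip,
                neutral_width, neutral_height):
    """Return mouth width/height as scale factors relative to a neutral frame.

    In a rig like the one described above, a secondary bone stretched between
    the mouth corners (and another between the lips) would be scaled on its
    z axis by these factors; here we only compute the numbers.
    """
    width_scale = distance(left_corner, right_corner) / neutral_width
    height_scale = distance(upper_lip, lower_lip) / neutral_height
    return width_scale, height_scale

# Example: mouth slightly wider and much more open than the neutral frame.
print(mouth_scale((0, 0), (6.6, 0), (3.3, 1.0), (3.3, -2.0),
                  neutral_width=6.0, neutral_height=1.5))  # -> (1.1, 2.0)
```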

 

And I got the antennae bouncing!

Posted
Can you tell me where it didn't work for you, Ken?

 

For example, at 17 sec, on the word "drinking", there's no Oooo mouth shape for the R sound. Looks odd. And generally, at times it looks like he's either mumbling or shouting the lines... but, from the sound of the dialog, you're not.

Ken,

 

Actually, there is no oo shape at all. What you see are calibration issues. In my haste to get the piece out I did not take the time to re-adjust calibration from the last performance.

I often find it necessary to adjust the calibration slightly for a performance. The differences between my face and Ernie's face mean that different facial animation requires slightly different calibration. In the pose-based version I just posted, I had to remap the wide/purse pose to emphasize the movement.

Posted
How are you determining the upper and lower limits for the poses? Some of them used their full range, while others used only half of it. Does Zign compute upper and lower limits on each capture, or do you use a fixed scale?

The poses are computed using a fixed scale. Because of this you might want to adjust the poses the first time you try them, but you can be sure they will be the same the next time. If it weren't fixed, a character could appear to be shouting while speaking normally, or vice versa. It actually works the same as the BVH or bone action export; those are also computed on a fixed scale. If you like, you can add exaggeration.
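A small Python sketch of the difference between a fixed scale and per-capture scaling; the ranges and function names are illustrative, not Zign Track's internals:

```python
def fixed_scale(values, lo=-30.0, hi=30.0):
    """Map raw tracker values onto 0..100 using one fixed range for every capture.

    A quiet take stays quiet and a shouted take stays loud, because the scale
    does not depend on what happens in this particular recording.
    """
    return [max(0.0, min(100.0, (v - lo) / (hi - lo) * 100.0)) for v in values]

def per_capture_scale(values):
    """Normalize against this capture's own min/max.

    Every take ends up using the full 0..100 range, so a mumbled take and a
    shouted take come out looking equally extreme.
    """
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * 100.0 for v in values]

quiet_take = [-5.0, 0.0, 4.0, 2.0]
print(fixed_scale(quiet_take))        # stays in a narrow band around 50
print(per_capture_scale(quiet_take))  # stretched to fill 0..100
```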

Also, it seems like the brow pose is clamping at 0, even though the manual says it goes to -100. It looks like the keys below zero are being clamped off, as opposed to having 50 be the midpoint. But it could just be my capture; I'm not sure yet, just something you might check.

Take a look at the spline for the eyebrow in the action itself. In my tests they go all the way up and down. Or maybe it's the neutral frame you have set in Zign Track?

Posted
Take a look at the spline for the eyebrow in the action itself. In my tests they go all the way up and down. Or maybe it's the neutral frame you have set in Zign Track?

Okay I must have been pretty tired when I wrote that. It is actually the smile/frown pose that is exhibiting that behavior.

Posted
Okay I must have been pretty tired when I wrote that. It is actually the smile/frown pose that is exhibiting that behavior.

 

Hey Ben, you are right. When I noticed the frown wasn't exported, I thought it was just because there was no frown detected and I needed to make it more sensitive, but I was wrong... I forgot to set one value in the code, and because of that the frown value was always zero. So it was easily fixed. I tested it again, and it appeared I even had to reduce the amount of frown.
