Hash, Inc. Forums
Luuk Steitner

Facial MoCap experiments

Recommended Posts

The frame rate specified in the BVH file is the same as the frame rate of the video you loaded in Zign Track.

So the bvh is at 29.97 fps. Isn't this a problem because A:M can operate only with whole number fps, i.e. 30 fps? Is this why the keys come in on fractional frames? And can it be adjusted?

 

When I was doing this with Syntheyes I had the 29.97 vs 30 fps problem, but the solution was easy: export from Syntheyes as if the fps were 30, render in A:M, then import the rendered frames into my video program and specify them as 29.97. This way, each tracked video frame corresponds directly with a BVH keyframe.

 

Has anyone done a piece long enough (at least 60 sec) to really notice a .03 frame difference yet?
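The scale of the mismatch is easy to compute; a quick sketch of the arithmetic behind the question:

```python
# Rough arithmetic: treating 29.97 fps footage as 30 fps drifts
# by 0.03 frames for every second of playback.
seconds = 60
drift_frames = (30 - 29.97) * seconds
print(round(drift_frames, 1))  # 1.8 -- nearly two frames after a minute
```

So over a 60-second piece the audio and the motion slip apart by almost two full frames, which is on the edge of being noticeable in lip sync.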


Because Zign Track doesn't handle DV yet, I always capture my video in Premiere at the camera rate of 25 fps (PAL), and then export it as an AVI at 24 fps. This lets Premiere handle the conversion, along with any cropping and image editing that is needed. After that everything is synchronised at 24 fps, so no real problems arise. Don't forget to set up A:M's motion capture settings in Options (Ctrl-P).

 

I have had subframes show up once or twice, and I put it down to capturing the BVH data into a BVH action object that had previously contained BVH data, thus overwriting it. I haven't checked this out, though.

 

It's good to see your bug back in action, Ben. :) Did you use your own tracking for the eyelids?


Here is the exact same video as in the earlier post, but I've remapped how the BVH information affects the mouth open/close target.

ernie1L_2.mov

 


 

Paul,

 

I did forget about A:M's mocap settings; that might be the trick, or at least part of it.

 

The eyelids are driven by the Zign Track BVH. The lower lids are driven by the sneer, and the upper lids by the eyebrows.


 

I wonder if the A:M MoCap settings are involved for BVH files. I'll have to test this.

That's a great idea, controlling the eyelids with the sneer and eyebrow features. I will add more features in a future upgrade so you'll have control of all facial features. Was it easy to stick markers to your eyelids, or did you use special markers for that?


 

Luuk,

 

For the eyelid tracking I actually stuck a tiny loop of tape on my eyelashes, which kept the marker from being obscured when the eye was open. Probably only practical if you have long eyelashes.

 

Maybe you could put a marker on the eyelid and then track whether it's visible or not. This would at least give you an on/off blink marker.

This looks like a great app. Since it exports BVH, it can work with pretty much anything. My question, though, is: how would you rig something like this up? I'm in Lightwave 9.

 

Thanks guys!

rez

 

This forum is for Animation: Master discussion only, so a Lightwave thread would be a bit misplaced. I don't know how it's done in Lightwave; I think you should start by looking for a tutorial that explains how to handle a BVH in your app. Or buy Animation: Master :rolleyes:

 

 

Maybe I should clarify. How would you set this up in A:M... and "use the premade setup" isn't really a helpful response. Is there a thread on how to rig a face in A:M?


rezman,

 

If you go back to page 10, there is some information there about setting up a rig, or rigging a face, for Zign Track. One method uses Orient Like bones and another uses relationships for muscle poses.

 

I've been creating bones for the face and using Orient Like constraints to correspond to the BVH file that Zign Track creates. I'm not good enough at the relationship creation process to know if one method is better or easier than the other.

 

I've been busy converting my characters for use with Zign Track, as well as CP weighting many joints. I was never very good with smartskinning, so CP weighting has been a wonderful enlightenment. But, boy, there are lots of joints in fingers! All this is in anticipation of full-scale production once I purchase Zign Track sometime next week.

So I'm mainly looking at things from a production-pipeline aspect to see how the flow works. I've been looking into BVH baking (as Bendy has suggested), but in working with some files I've noticed that walking-type BVH files are a bit difficult with the skeleton rig I'm using, while standing-motion BVH files work quite well.

 

Anyone have other examples of Zign Track videos and their characters? I enjoy seeing others' work. I like the bug one too!! And I'm not biased just because he has my name :P

Maybe I should clarify. How would you set this up in A:M... and "use the premade setup" isn't really a helpful response. Is there a thread on how to rig a face in A:M?

 

You could take a look at the Squetchy rig wiki; there are several tutorials there. You don't have to use the Squetchy rig if you don't want to, but it will give you an idea of how you can rig a face.

Also, you don't have to use a face rig that controls all facial features; you can use the BVH rig to drive muscle poses as well. The tutorial page should have plenty of info about this.

I wonder if the A:M MoCap settings are involved for BVH files. I'll have to test this.

Mocap settings do not seem to make any difference. My problem turned out to be twofold:

 

First, ZignTrack exported a bvh with a frame interval of .034483, which seems off. My original footage was NTSC 29.97 which should yield a frame interval of .033333, and if I hack the bvh and change the interval to .033333 it imports correctly into my 30 fps A:M project.

 

Second, I was using a project based on one of your examples for my import. The original project was created at 25 fps and, despite my changing the frame rate to 30 fps, the project saves the original 25 fps and uses that for BVH import. The only way to fix this was to open the .prj file and change the value in a text editor.

 

Importing a BVH with a .033333 frame value does result in keys falling slightly off frame once you get above 600 frames: instead of frame 650, a key will fall on frame 649.99. This accurately reflects what should happen when 29.97 fps data is imported into a 30 fps project. It shouldn't present any major problem, and could be worked around with a simple hack if needed.
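The 649.99 figure above checks out; a quick sketch of the arithmetic:

```python
# Sketch: a key every 0.033333 s placed on a 30 fps timeline lands
# slightly earlier than its nominal frame number as the count grows.
interval = 0.033333           # Frame Time written in the BVH
fps = 30                      # A:M project frame rate

key = 650
frame = key * interval * fps  # where key 650 actually lands
print(round(frame, 2))        # 649.99
```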

First, ZignTrack exported a bvh with a frame interval of .034483, which seems off. My original footage was NTSC 29.97 which should yield a frame interval of .033333, and if I hack the bvh and change the interval to .033333 it imports correctly into my 30 fps A:M project.

 

Thanks for the info. I'll try to find out what causes this and fix it.

 

EDIT: Wait, at 29.97 fps the interval should be 0.0333667 instead of 0.034483. So it's wrong, but not in the way you say: an interval of 0.033333 would mean 30 fps, not 29.97.

First, ZignTrack exported a bvh with a frame interval of .034483, which seems off. My original footage was NTSC 29.97 which should yield a frame interval of .033333, and if I hack the bvh and change the interval to .033333 it imports correctly into my 30 fps A:M project.

 

I found the problem. The FPS is calculated as the rate divided by the scale (the normal way to derive it from the AVI parameters), but those values are integers, and in the code the result was therefore also an integer. So when the result should have been 29.97 it came out as 29. I have modified the calculation method and the FPS is calculated correctly now.

I'm planning to upload the next upgrade this weekend. Can you wait this long?
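The bug Luuk describes can be reproduced in a few lines. This is a sketch, not Zign Track's actual code; NTSC video stores the rate as the integer pair 30000/1001, and it also explains the odd .034483 interval reported earlier (it is exactly 1/29):

```python
# NTSC AVI headers store the frame rate as two integers: rate/scale.
rate, scale = 30000, 1001

fps_buggy = rate // scale         # integer division truncates 29.97 to 29
fps_fixed = rate / scale          # floating-point division gives 29.97...

print(round(1.0 / fps_buggy, 6))  # 0.034483 -- the odd interval in the BVH
print(round(1.0 / fps_fixed, 6))  # 0.033367 -- the correct NTSC frame time
```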


No worries.

For anyone else with this problem: just open the BVH in a text editor and search for "Frame Time". Change the number to .033333, which imports correctly into A:M whether the project is 30 or 29.97 fps.
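That manual edit can also be scripted; a minimal sketch (the function name and sample text are illustrative, not from the thread):

```python
import re

def fix_frame_time(bvh_text, new_interval=0.033333):
    # The MOTION section of a BVH contains a line like "Frame Time: 0.034483";
    # replace whatever number follows it with the desired interval.
    return re.sub(r"(Frame Time:\s*)[\d.]+",
                  lambda m: m.group(1) + ("%.6f" % new_interval),
                  bvh_text)

sample = "MOTION\nFrames: 120\nFrame Time: 0.034483\n"
print(fix_frame_time(sample))  # ... Frame Time: 0.033333
```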


Has anyone had any success baking actions in A:M14c? I just keep getting exception 001 followed by a crash.

Any suggestions?


Paul,

 

Are you baking from the Action or from the Choreography? I am on 13t (I will upgrade after this project) and found that when I tried to bake in an Action, I got an exception error as well. However, when I baked from the Choreography, everything went accordingly.

Has anyone had any success baking actions in A:M14c? I just keep getting exception 001 followed by a crash.

Any suggestions?

Same problem here (also v14). Time for a report, I guess.

It does work from the chor for me.


Holy Schlong.

 

I would sooo seriously be interested in buying one of those programs if you got it 100% done.... Not even joking.

 

 

Bluesbro15@hotmail.com.

 

I'd try my damnedest to get the money :P.


Thanks for the feedback, ernesttx. I'll try the choreography method.

 

Luuk, I tried baking actions from the action object and it failed in every version of A:M from 12 up to 15. Maybe the "Bake all actions" option just needs to be removed from the action object's menu.


Paul,

 

I would say always bake in the choreography and then export it as an action. I don't remember it ever working from an action.


Thanks, Luuk and Ben. :)

 

I am attempting to bake the BVH action in a choreography, but it just takes forever and I haven't completed a baking session yet. I'm wondering if it is because this character has dynamic hair, even though it is hidden. Does anyone know if hair data is baked into the action along with all the bones, even when it is hidden?

 

Also, does baking create keys for just the bones that have active channels, or does it create keys for all the hidden bones in the rig too? Maybe that is what is going on here.


How long is your action? Maybe you can give it a try with a short action (a few seconds) and see what is baked.


Yes. The action was almost three minutes long, so I am trying a shorter one.

I have just managed to bake a 16-second action from a choreography and bring it into a new project as an action, without the BVH rig, and it works well. The captured data is now associated directly with the facial bones in my rig. This allows the whole rig to work correctly, so that it can be animated without distorting the model. All good!

 

But is there some way to filter out all of the keys on the other bones in my rig, other than deleting them in the new action? I realize that I could just create a head-only rig for capturing, but I wondered if there is a way to filter out the unwanted data at the source when capturing with a fully rigged character.

 

Thanks again, guys, for the info on baking. I hope that I can get around the restriction on the length of the action.


Paul,

 

A separate head rig is the simplest approach I've come up with. It would probably help your slowdown problem a lot. Let us know if you come up with another approach.


Thanks, Ben. I have been creating a few baked actions from the BVH data in a choreography, and this is what the timeline looks like for the new action:

 

 

 

I notice that there is some kind of filtering going on, as some of the frames are no longer key frames. That is cool! :) Is that part of Zign Track's operation, or does it happen during the baking? (I guess I should check that myself.)

 

Can someone explain what RotateW is?

I'm also quite puzzled by the feedback I get from A:M when I click on any of those transform channels for the "jaw" bone. If I click on any one from Transform Scale X to Transform Translate Z, the jaw bone is highlighted in yellow. But if I click on Transform Rotate X to Transform Rotate W, a finger bone is highlighted.


 

The rotate W is part of the quaternion rotation; it compensates the rotation components so that the magnitude sqrt(w² + x² + y² + z²) is always 1.

 

I don't know what causes the strange highlights.
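In other words, for a unit quaternion the W channel is fully determined by the other three. A small sketch of that relationship:

```python
import math

def rotate_w(x, y, z):
    # For a unit quaternion, w^2 + x^2 + y^2 + z^2 = 1, so |w| can be
    # recovered from the vector part; max() guards against tiny
    # floating-point overshoot past 1.0.
    return math.sqrt(max(0.0, 1.0 - (x * x + y * y + z * z)))

print(rotate_w(0.0, 0.0, 0.0))  # 1.0 -- the identity rotation
```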


Ah, yes! Thanks, Luuk. :)

 

Any clues as to what is happening here? :

 

 

This was a test where a BVH file was baked to an action from the choreography. The action was then applied to this model (which uses a copy of the rig that I constrained the BVH data to before baking).

 

This jitter was not apparent before baking, as can be seen here:

 


 

Maybe the jitter is caused by the error reduction while baking the action. You could check the spline shape of the rotation axes of those bones; if the splines poke out between the key frames, you know where the error is. I'm just guessing; I'm not sure if this can happen, and if it does, I think it is not supposed to happen...


Careful where you plug that guitar in!

 

 

 

This example shows a couple of problems that I have been experiencing since I started baking the action. The face weighting isn't complete, but the lips are weighted, and that is one of the areas I am having problems with.

 

1) The jitter seems to be coming from the neck bone, because if I remove its influence from the weighting of just a few CPs the jitter stops. Bizarre! Notice that it suddenly disappears, of its own accord, about halfway through this test movie.

 

2) The data for the Lip Upper L bone seems to get corrupted when I export the BVH data from Zign Track. For some strange reason it is always that one bone. I have checked the rotation handles and the weighting on the lips, and it is all balanced. I have checked the tracking in Zign Track, and there are no signs of this issue before exporting.

 

Any ideas, folks?


Can you show a picture of the rotation curve for the head bone in the time line for the baked action?

And can you email me the BVH file so I can take a look at it? If there's a problem with exporting the lip motion, I need to find out what's causing it.


Cool model, Paul... it's a cross between Harold Ramis (Ghostbusters, Stripes) and Weird Al Yankovic! Hope you figure out the cause of that... did you set the error tolerance reduction factor to zero (instead of the default .01) before baking?

Can you show a picture of the rotation curve for the head bone in the time line for the baked action?

 

Here you go, Luuk:

 

 

 

Note that the jitter is occurring on every second frame.

 

did you set the error tolerance reduction factor to zero (instead of the default .01) before baking?

 

Hmmm. I think I used the default value. I know I did try a value of 0 at one point, but I didn't get the results I was looking for. I'd better check that again.

Thanks! :)


If you take a close look at the timeline, the jitter is very obvious, especially on the blue spline. I don't know why this happens, but it definitely looks like a baking error. Time for a report, I think.

 

Do you still think baking is a good solution for the actions (if it worked OK)? Maybe we should forget about this and just save each action and BVH file with explicit names, so it's obvious which action/BVH contains a certain clip.


Well, the nice thing about baking the action is that once it is done you can forget about the BVH. You can then treat it like any other sequence that you animate in an action. Much tidier and easier to manage.

 

I will gather some more results from baking sessions and if I don't find a way to prevent the jitter I will make a report.


I think that I have discovered what was causing the jittering.

 

Capturing BVH data into an action will not necessarily overwrite every frame. The only frames that are overwritten are the frames that come in with the new BVH file. So if every second frame is filtered out, the overwriting will only occur on every second frame, leaving the old data in place on every other frame. This creates the jitter.

 

I have been using a template action so that I don't have to keep constraining my head rig to the BVH rig every time I want to capture data. The setup template has several frames of animation so that I can instantly see that all the constraints are working. Recently the BVH data I have been exporting from Zign Track has been coming out on twos (a key frame on every second frame). Because the animation in the setup action is on ones (a key on every frame), the newly captured BVH data does not obliterate all the old data.

 

Solution: use a template action that only uses frame 0 for transforming bones and setting constraints. This will ensure that all frames are clean and ready to capture any filtered BVH data. Well, it sounds logical to me. :)
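The overwrite behaviour described above can be illustrated with a toy model of the keyframe channels (purely illustrative; A:M's internals are not actually Python dictionaries):

```python
# Old action: keys on ones (every frame). New BVH: filtered to twos.
old_keys = {frame: "old" for frame in range(6)}
bvh_keys = {frame: "new" for frame in range(0, 6, 2)}

# Importing only overwrites the frames present in the BVH...
old_keys.update(bvh_keys)
print(old_keys)  # alternating new/old values -- the source of the jitter

# ...so starting from a clean action (keys only on frame 0, or none at
# all) leaves nothing stale behind.
clean_keys = dict(bvh_keys)
print(clean_keys)
```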


I'm happy you have figured it out, Paul :D

It would be nice if Hash made the action baking overwrite all keyframes. Would that be a feature request or an error report? :unsure:


Luuk,

 

Baking the action does overwrite all keyframes. Paul's problem was that importing a BVH does not overwrite all keyframes, only the ones that are specified in the BVH you are importing. This can be a real hassle if there are keys between frames, and it can result in the kind of jitter Paul was seeing. I have made it a habit to select and delete all keys from the action object before loading a new BVH into it.


I have just launched version 1.1.

Not many changes yet, but two important fixes that shouldn't wait too long. I'll try to get the 2D exporter done a.s.a.p., and if things work out like I expect, I have a nice surprise for A:M users in the next upgrade... (I'm not telling till I'm sure it will work ;) )


Luuk has produced quite a substantial update to Zign Track, which some of us are testing at the moment.

 

Export in Zign Track 1.2 has been extended to include the following:

 

Export as BVH. Same as previous versions.

 

Export as A:M Action. This is awesome, as it removes the need to use BVH files completely and, so long as you export using the same bone names as your rig, the action can just be dropped onto your character. It avoids all those BVH containers that clutter up the PWS, and it also allows you to edit the motion like any other action. This is a great improvement, in my opinion, and will make the whole process much more straightforward for everyone. No baking needed!

 

Export poses. I haven't played with this much yet, but I can see that it could be quite useful and fun: acting out the extreme poses of each facial feature and capturing them as poses.

 

So much to play with! :)

 

 

Here is my first test with Zign Track 1.2:

 

 

I think the rotation of the bones in the BVH rig has changed, and that confused me at first. Once I read Luuk's instructions and rotated all my rig's face bones to the correct positions, everything came together nicely. There is still something going wrong with the top lip, but I am sure I will get it ironed out eventually. :)

 

Great work, Luuk! Thanks! :)


Hey Paul, you gave away the surprise! :D

 

I will add reverse-rotation options for the bones and change some more things. You really should try the poses: when you use them, the expressions will never exceed the limits you gave them. The test I did was very satisfying. If you find things I need to fix or alter, let me know by email.

 

Thanks!

 

EDIT: I'm not sure you get the meaning of the pose functions. In short: create poses for your model, name them the same as you have named them in Zign Track (or use the default names), and set the range the same in A:M and Zign Track. Zign Track will then control these poses. For example, if the eyebrows go down, the action gives a negative value to the related eyebrow pose; if the eyebrows go up, the action gives a positive value to the related pose.
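The mapping Luuk describes might look something like this. An illustrative sketch only; the function name and the ranges are assumptions, not Zign Track's actual API:

```python
def pose_value(measured, neutral, extreme, pose_range=100.0):
    # Normalized displacement of a tracked feature, scaled into the pose
    # range: negative below neutral (eyebrows down), positive above it.
    return pose_range * (measured - neutral) / (extreme - neutral)

print(pose_value(1.5, 1.0, 2.0))  # 50.0  -- eyebrow halfway to its extreme
print(pose_value(0.5, 1.0, 2.0))  # -50.0 -- eyebrow below neutral
```

If the pose in A:M is built with the same 0-to-100 (or -100-to-100) range, the exported action drives it directly, and the model's own pose limits keep the expression inside the bounds you modelled.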

Hey Paul, you gave away the surprise!

 

Oops! The cat's out of the bag now!

 

Droevig ("sad"), Luuk!


 

Geeft niets Paul, no problem :)


MAN!

 

I can't WAIT to git my fingers on this thing! Those are some pretty useful features/improvements!

 

Thanks Luuk!

Nice sample Paul! I love the dynamic hair!


Just another test for those interested in seeing output from Zign Track:

 

(Video removed. Please see updated example instead).

 

It is time that I put some animation together with the mocap stuff. Maybe next week. :)


It looks good. Did you hold your head perfectly still, or did you eliminate the head movement? :rolleyes:

And, is the face controlled by bones or poses?

BTW, I forgot to include the enforcement values in the test version... Maybe you noticed.

Just another test for those interested in seeing output from Zign Track:

 

Magic mouth

 

It is time that I put some animation together with the mocap stuff. Maybe next week. :)

 

Paul,

I have not had a chance to play with the software yet, so all of this is me just thinking... This looks OK, and not to take anything away from Luuk's hard work on his software, but the lip sync just doesn't fly with me. It appears to be mostly just up and down movements of the jaw; no sound-forming lip shapes as far as I can tell. Now why is this? Is it a limitation of the software, or are the results dependent on how the user sets up the facial bones/smartskin?

Mike Fitz

www.3dartz.com

It appears to be mostly just up and down movements of the jaw; no sound-forming lip shapes as far as I can tell. Now why is this? Is it a limitation of the software, or are the results dependent on how the user sets up the facial bones/smartskin?

 

This is mostly down to the setup of the face, and it can be related to the quality of the video. Bendytoons's videos show better results concerning mouth shape. Zign Track does track the shape of the mouth, not only open-and-close movements.

The new A:M Action feature that I will release soon can be set up to drive mouth-shape poses directly. When set up correctly this can give even better results because, for example, you can make the lips curl on the mouth-purse pose, and other things like that. This can also be done by adding smartskin if you prefer to drive the bones directly with an action or BVH file. Once a character is completely set up and "fine tuned", it is only a matter of loading a new action, and it should look perfect.

Did you hold your head perfectly still or did you eliminate the head movement? And, is the face controlled by bones or poses?

I exported this as an action but I had used the wrong name for the head bone so there is no head motion with this one.

 

It appears to be mostly just up and down movements of the jaw; no sound-forming lip shapes as far as I can tell. Now why is this?

Well, I'm going to let Luuk answer all the technical questions, because I don't have a clue. I'm just a layman testing this out to make it as foolproof as possible. ;)

With animation we are used to seeing facial expressions really exaggerated, and this recent example has no exaggeration at all. On top of that, Zign Track, being a single-camera tracker, does not track forward and backward motion, so the shape of the mouth is not correct for sounds that require pursing of the lips. This, as Luuk has explained, can be corrected by using the new pose-driving method or by adding the motion by hand after exporting. I haven't done any of that yet; this is just rough, raw output. But I would like to try the pose method to see how well it works.

I exported this as an action but I had used the wrong name for the head bone so there is no head motion with this one.

 

This is why you should always save your tracked Zign Track projects. Just open the project file, go to the export settings, change the name, and export the action again; in 20 seconds you can have the corrected action applied to your model.

You could also just change the name of the bone in A:M.

It appears to be mostly just up and down movements of the jaw; no sound-forming lip shapes as far as I can tell. Now why is this? Is it a limitation of the software, or are the results dependent on how the user sets up the facial bones/smartskin?

 

This is mostly down to the setup of the face, and it can be related to the quality of the video. Bendytoons's videos show better results concerning mouth shape.

Umm, well, yes and no. My stuff only uses four of the seven mouth points (sort of five, but I use the two lower-lip points to drive a single value).

Zign Track does export seven mouth points and a jaw. I only use four because I'm driving poses.

I think a lot of what Mike is seeing is just the difficulty of using bone mocap directly. When the face being driven is proportioned very differently from the one captured, motions often become exaggerated or damped, and it can be hard to get good calibration. In the video it looks as if the mouth-closed calibration between the actor and the model is off; the mouth doesn't quite close even on the hard consonants. Luuk has built exaggeration into the export, but even then you need good calibration. One reason I use poses is that it's easy to recalibrate them, and they have hard limits (it is meaningless to drive them past 100 or 0).


I think people should realize that Zign Track is a facial motion capture solution, not an overall workaround for a good face rig and lip-sync poses. The more you rig your face and set up mouth poses, the better Zign Track's results will be for you... am I right, Luuk?

