Hash, Inc. Forums

Facial MoCap experiments


Luuk Steitner


Wow, I have not been to this forum in a while, and on my first visit back I see such amazing software.

 

There are a few other packages out there that may be better than your product, but not by much. Face Robot costs $90,000, with animator licenses at $14,000 apiece. Two more companies do it but don't sell the software; that service runs $5,000-10,000 a day. Another similar package goes for $20,000, but they let you rent it for around $3,000 a month.

 

You my friend have created something very unique, powerful and wonderful.

 

I noticed this exports BVH motion. Thank you! That means it's compatible with other software. That is totally awesome. I wonder if we can get this to work with MotionBuilder, C4D or others.

 

If that is the case, I'm about to pimp out your software and get a couple of units sold for you :)


Not all MoCap software is that expensive. It could be done with SynthEyes too, but that will still cost you a lot more, you'll need more cameras, and it's much more difficult to use.

With some practice you can achieve pretty good results with Zign Track. Some people out there get better results than I do :D

The results you get with that very expensive software might be a little better, but I'm working on some improvements to really compete with their results. It will take some time and feedback from other users to make it better, so my question to all Zign Track users: show us your results and tell me how you think it could be improved. Together we can make it better than good enough ;) (oh, and: please buy it... LOL)


You can use A:M to do this too...though you will have to do the math.

 

So, if you shot 10 seconds of action at 30 fps and import it into A:M in a 24 fps project...

 

Actually A:M will do the math for you.

 

Import the action into a project set for 30 fps (or whatever it was shot at) and then change the A:M project frame rate to 24. A:M will auto-magically realign the keyframes.

 

No stretching/dragging of keyframes involved either.

 

You might want to select "snap to frames" on all the keyframes - but that might not be necessary if you don't want to do any monkeying with the keyframes.
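To see the numbers behind that realignment, here is a minimal sketch of the arithmetic (a Python illustration, not an A:M feature; the function name is mine):

def retime(frame, src_fps=30.0, dst_fps=24.0):
    # A key keeps its time in seconds, so frame f of a 30 fps capture
    # lands on f * 24/30 in a 24 fps project.
    return frame * dst_fps / src_fps

print(retime(300))  # 10 s at 30 fps is 300 frames -> frame 240.0 at 24 fps

Keys that land on fractional frames are exactly why a "snap to frames" pass can be worth doing afterwards.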


Sorry to get back so late.

 

I was having too much fun :)

 

OK, I did a quick animation test with one of my models. Luuk, your face setup was just right for what I'm capable of understanding and implementing. I uploaded a clip to YouTube:

 

 

I am pleased with the results. However, I do think I need to tweak CPs and CP weighting to get things even better. I'll work on things this weekend. I'm also rigging other characters and then will work on my tracking in Zign Track. I think I need better 'dots' and better placement on my face; I just did a quick track to try Zign Track and see how things work.

 

Luuk, I'll let you know my thoughts on improvements. I have SynthEyes and have worked with it, so I'll focus on what would be nice to have in Zign Track. BTW, I'll be purchasing around 12/6.

 

Also, on the FPS front, I created a new project at 30 FPS in A:M and imported the BVH file; all worked well. I also tried the suggestions about expanding the action; those worked just as well. It just caught me off guard the first time I imported it.

 

Hope you guys enjoy the clip, and I'll work on some others. Oh, and disregard the voice part; I wasn't too concerned with that, heheh :P

 

Thanks again Luuk for your brilliant program and all your efforts.


I've been watching Zign Track unfold. I never thought it would happen so fast.

 

I have it downloaded...[payment to come, Luuk]. But I am as clueless as they come about BVH files. Can someone point me to a tutorial on using them with A:M?

 

Thanks.


William Sutton has the 'granddaddy of all BVH toots' at zandoria.com, but I'll sum them up.

 

A BVH motion capture file is a skeletal motion file that imports nicely into A:M. Make a new action and import a character, select New/Biovision BVH/, then find the file you want to load and load it. The BVH will load frame by frame and may take a while for a longer file. You will then see a series of BVH skeleton bones in a tree separate from your character's rig. You will need to select THE parent BVH bone and rotate/scale/position it to your rig, whether just the face or the whole body, as closely as possible. Then, using 'Orient Like', 'Constrain To', 'Aim At' and other constraints, you can 'nail' your rig to the BVH's bones, one by one. Then adjust, adjust, adjust until happy. The beauty of it is that once you have one installed, you can go back and load another BVH action over the first (a good time to use Save As...) and avoid all the constraining the 2nd, 3rd, 4th time 'round.
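For reference, a BVH file is plain text: a HIERARCHY section describing the skeleton, and a MOTION section with one line of channel values per frame. A minimal, generic example (illustrative joint names only, not Zign Track's actual rig):

HIERARCHY
ROOT Neck
{
    OFFSET 0.0 0.0 0.0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT Jaw
    {
        OFFSET 0.0 -3.0 1.0
        CHANNELS 3 Zrotation Xrotation Yrotation
        End Site
        {
            OFFSET 0.0 -2.0 0.0
        }
    }
}
MOTION
Frames: 2
Frame Time: 0.033333
0.0 0.0 0.0 0.0 0.0 0.0 0.0 -5.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 -8.0 0.0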

 

Happy Animating!



With the Zign Track BVH files it's not necessary to rotate/scale/position the rig. You only need the 'orient like' constraints (with compensate); only the rotation of the BVH bones is used.

I should make a special tutorial for this, but I don't have time for it at the moment. I found using BVH files very easy the first time I did it. I hope others will find it easy too. (Is that correct English? :unsure: )


Kelley, using BVH files really is pretty simple.

 

To start with, you should create a new action and import the BVH rig so that you can study its simple hierarchy and layout. Then build a similar rig in your model. Once you have saved this model, work through the following steps:

 

1) Create a new Action for your character. (As the BVH rig starts in a neutral pose, it is probably best to start this way yourself, so model the face with the mouth closed but relaxed.)

 

2) Right click in the action window and select New\Motion Capture Device\Biovision BVH File.

This creates an empty BVH object and you should see that you now have an Action Object called "BioVision BVH File1 Action Object".

 

3) Right click on this action object and select Capture Sequence. You will need to navigate to the BVH file that you wish to import. When you click "Open" the BVH data will be imported and you should see the frames being populated along the timeline.

(Tip: Make sure that you close the PWS Timeline and just use the other timeline to speed this process up. Not really a problem, but if you are importing several minutes of data you will appreciate the faster response.)

 

4) The Zign Track BVH rig is just from the neck up. You will find that it has been deposited at your character's feet and that it is probably not the same scale as your character's rig. No problem. You can move and scale the BVH rig if you like, but it is not essential, as your character's facial bones only need to imitate the rotations of the BVH bones, not the translations.

 

5) Constraints - Be sure that you are on frame 0 and work through your rig, starting at the neck, and assign "orient like" constraints to each of the matching bones in the BVH rig. Remember to use the "compensate" button.

 

6) Scrub through the timeline to check that everything is working. If you discover any frames where a bone makes a sudden, unexpected jump, there is probably a tracking error in the data, and you will need to go back into Zign Track, correct the position of the marker and generate the BVH data again (a sketch of one way to flag such jumps follows these steps). Generating and exporting the BVH data is instantaneous, so this isn't really a problem. Also, while scrubbing, consider the weighting of the CPs; the weighting makes a big difference to the success of the effect.

 

7) If all is well, save this action and do a shaded render so that you can inspect it better.
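On step 6, a tracking error usually shows up as a rotation channel changing far more between two consecutive frames than real facial motion could. A minimal sketch of that check (my own Python illustration, not an A:M or Zign Track feature; the 20-degree threshold is a guess to tune):

def find_jumps(channel, limit=20.0):
    # channel: one rotation value per frame; returns the frames where the
    # change since the previous frame exceeds the limit.
    return [i for i in range(1, len(channel))
            if abs(channel[i] - channel[i - 1]) > limit]

print(find_jumps([0.0, 1.5, 2.0, 45.0, 2.5]))  # [3, 4]: the pop and the recovery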



What do you do if you have a beard and the dots don't stick?

Barry



When I first started testing the software for Luuk, I had a beard/goatee (not sure what the proper term for it is). Anyway, I ended up using a child's glue stick to get the dots to stick to the facial hair: I just rubbed the glue stick where I wanted the dots to stick and then pressed the dots into place.

 

Hope this helps...

Al


Apparently it's not possible to add an 'orient like' constraint to the BVH object. I was taking a look at the Squetch rig, because Zign Track works OK with that rig, although you have to rotate the head by hand. I noticed David used expressions to constrain the Squetch face. I'm not sure if that is what makes the difference.

 

David, if you read this; how did you do it?

 

A simple workaround would be to add a model with just the face rig and constrain that rig to the BVH rig. Add that model to the chor, constrain the root bone of that rig to orient like the chest of your model, and orient the face to the face rig.

 

I'm sure there must be better ways to do this (like David's) so any suggestions would be welcome.



The BVH face rig doesn't move with the Squetch Rig at all in what I set up...it doesn't need to. I don't think it would be necessary in any rig.

 

What I did was convert the rotation of the BVH bones into translate movement using Expressions and then used that to translate the FACE controls that are built into the Squetch Rig and standalone FACE installations. Since the face is completely rigged for traditional animation, all that is needed is for the manual controls to be controlled by the BVH data.

 

I think it's necessary for the BVH data that Zign Track generates to drive Poses instead of bones that are directly affecting the face. The reason is that the data is based on 2D movement...so you don't get something like the lips pulling back when smiling or puckering when the lips move inward unless you are driving Poses (you could probably also use Smartskin, but I think Poses would do the job a lot better).

 

First, I added extra nulls that the actual manual controls would be constrained to...that way, you can still manually tweak the movement after loading the BVH data.

 

Then, I added a bone for each control that would control the percentage that the data would be converted to linear movement. I set up a percentage Pose for each one where the bone (let's call it the "limiter") would be scaled to 0% at "0" of the Pose, 100% at "1000" of the Pose (to make the increments .1% for each whole number) and 500% at "5000". This will allow the movement from the data to be decreased and increased if necessary from 0-500%.

 

Next, I made a setup Action where I converted the BVH data using Expressions. As an example, the jaw is opened using this Expression on the "SyncNull_BVH":

 

Transform.Translate.Y = -..|..|..|..|Action Objects|Shortcut to BioVision BVH File1|Bones|Jaw.Transform.Rotate.X*..|..|..|..|Jaw_BVH.Transform.Scale.Z

 

What the Expression says is:

 

The 'Y' translation of the "SyncNull_BVH" equals the negative value of the 'X' rotation of the BVH jaw multiplied by the 'Z' scale of the "limiter" (in this case, the "limiter" is actually named "Jaw_BVH").

 

The negative 'Y' value is needed because the "SyncNull" opens the mouth as it is translated toward its negative 'Y'. The "SyncNull" (which is the manual control for opening the jaw in the FACE controls) then has a "translate to SyncNull_BVH" constraint applied to it.
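For anyone who doesn't read A:M's expression syntax, here is the same relationship as a plain Python sketch (the 0-5000 Pose convention and the bone names come from the description above; the functions themselves are mine):

def limiter_factor(pose_value):
    # Pose slider 0..5000 maps to a 0%..500% scale on the "limiter" bone,
    # i.e. 0.1% per whole number, so 1000 -> 100% -> a factor of 1.0.
    return pose_value / 1000.0

def syncnull_translate_y(jaw_rotate_x_deg, pose_value):
    # Negative because the SyncNull opens the jaw as it moves toward -Y.
    return -jaw_rotate_x_deg * limiter_factor(pose_value)

print(syncnull_translate_y(-10.0, 1000.0))  # 10.0 -> the jaw opens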

 

Finally, the BVH data can be imported into the setup Action using "Capture Sequence".

 

Does that help? I may have misunderstood the question...did I?


It would be seriously nice if you made a small video tutorial on how to do this. I've never used these math expressions :( and I don't understand how to make them :(

It would be a nice feature if A:M supported constraining directly to BVH bones; that would be easier for everyone. It's only a suggestion ;)


Cronos,

 

I have found a very simple solution. The BVH root can be rotated by expressions in the chor.

 

To make it orient like the chest:

 

1. Select the model under the chor in the PWS.

2. Switch "show more than drivers" on.

3. Switch to skeletal mode.

4. Click on the BVH rig in the chor window.

5. Now that one of the BVH bones is selected, select the "Shortcut to BioVision BVH File1" in the PWS.

6. In the Properties window, open Transform->Rotate.

7. Select Rotate X, right click, and choose Edit Expression. In the expression for Transform.Rotate.X add: "..|..|..|..|..|Chest.Transform.Rotate.X"

8. Select the Rotate Y property in the same way. In the expression for Transform.Rotate.Y add: "..|..|..|..|..|Chest.Transform.Rotate.Z"

9. Select the Rotate Z property in the same way. In the expression for Transform.Rotate.Z add: "-..|..|..|..|..|Chest.Transform.Rotate.Y"

 

Note: The 'Y' is controlled by the 'Z' of the chest, and the 'Z' by the inverted 'Y'. This is because the orientation of the bones is different.

 

Now the head should move when you move the chest.

 

FrenchMan_Expression.zip

expressions.jpg


I still don't understand why this would be necessary...maybe I'm missing something.

 

 

------------------

EDIT

------------------

 

I should have been clearer. Why would the BVH bones need to rotate with the chest? All that is needed is the data from the BVH bones to drive the face...which is parented to the rig's chest.



You're right, David, but constraining the face bones directly to the BVH rig is easier for those who aren't that experienced with setting up poses, expressions, etc.

Rotating the BVH rig is a simple solution, but I think your method of driving the face is the best way to do it.



Thanks for the explanation, Luuk.


hey all,

 

Not sure if anyone else has run across this situation. I'll give a little background on how I am currently working with Zign Track.

 

1. Create video reference with dots on face

2. Run through Zign Track and export BVH file

3. Open A:M, import model and create action

4. Assign bones and constraints

5. Save as Action

 

Then, say I have created 3 actions from Zign bvh files (I've named the Actions differently).

 

1. Open new project and import model

2. Import Actions

3. New Chor

4. Place model and then drag actions onto model

 

Here is where things become a bit disorganized. The BVH Action Objects (if not named differently) get in the way, as they all have similar names (i.e., BioVision BVH File1, BioVision BVH File2, etc.). When creating the actions for individual BVH files, each action looks for a specifically named BVH file. But if multiple BVH files are named the same, you might not know which BVH file an action is looking at (as you will notice, your Objects folder will contain multiple BioVision BVH Files with similar names).

 

I've had to go back and recreate the Actions, and also rename each BioVision BVH File object to match its Action, in order to relate the Action to a specific BVH file. Now, when importing the Actions into a new project, the corresponding BVH shows up in the Objects folder. It also helps with assigning constraints with the picker, as the pop-up list will show a 'Shortcut to' and the bones related to that BVH file.

 

Not sure if anyone else has run across this, but I thought I'd pass it along. Let me know if anyone has a different way, or insight into whether it's just me. :)

 

Also, Luuk, here is something you might want to add as an improvement in the future. What I am doing is using Bauhaus Mirage to import my video reference and pre-process the video before importing it into Zign Track. The pre-processing eliminates all colors except tracker-dot green from the video, something similar to the little-girl-in-the-red-coat effect in Schindler's List. I have found that it allows much better tracking of the color.
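Bauhaus Mirage does this interactively, but for anyone without it, the same pre-process can be sketched with OpenCV (my own illustration, not part of any of these products; the HSV bounds are placeholders to tune to your markers and lighting):

import cv2
import numpy as np

frame = cv2.imread("frame.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# Keep only pixels near tracker-dot green; black out everything else.
mask = cv2.inRange(hsv, np.array([45, 80, 80]), np.array([75, 255, 255]))
isolated = cv2.bitwise_and(frame, frame, mask=mask)
cv2.imwrite("frame_green_only.png", isolated)

Run it per frame over the image sequence before feeding the video to Zign Track.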

 

What are some other ways people are using Zign Track, and what are your workflows? Thanks :)


When you capture a BVH sequence, the animation is baked into the action. After this the BVH file is not needed anymore, if I'm right. I give my actions any name I want; that shouldn't matter, as long as it's obvious to you which action it is.

 

I was planning to add contrast/color adjustment in a future upgrade. But experimenting with the configuration settings should always do the trick if the lighting of your video is good. I understand it's not always easy to get good lighting, so adding such a control might add value for some people. I was able to track all my videos without having to edit them first, and some of them were pretty bad.



The Action continues to reference the BVH file; I just tested it. Delete the BVH and the action is empty.

 

You have to bake the action onto the bones that are constrained to the bvh (or are driven by a relationship that gets data from the bvh) to take it out of the loop, I think.

 

I have a standard file with a model of just face bones that are constrained to the BVH. I load this file, import a new BVH, and then bake out the constraints. Then I save the action as simple bone movement that can be loaded onto any character that has the same set of bones.



I didn't expect that. It appears in A:M as if it's baked, so I assumed it was. I wonder why capturing a sequence takes a minute but reloading the action does not. Maybe it is baked, but deleted when the BVH file is missing.
This could be tested by swapping the names of two BVH files to see if the other action is loaded. I'll check it out later.


I'm confused by the multiple copies of "BioVision BVH File1" that appear in the PWS.

 

When you capture a BVH sequence, the animation is baked into the action. After this the BVH file is not needed anymore, if I'm right.

 

That's interesting, Luuk. Are you saying that we should use "Bake All Actions"? Sounds good in theory. And then I could get rid of all those BioVision BVH File1 duplicates.

Hmmm. I've never baked an action. Hope this doesn't burn. ;)

------------------

Edit:

Wait a minute! If I bake the action, doesn't that remove all the constraints and basically turn everything into muscle motion? Huge file sizes and no flexibility to tweak everything?

 

Hmmm. I'll put up with the multiple BVH objects rather than lose the ability to tweak with bones.


Anyone thinking of writing that BVH to Action converter yet? ;)

Paul,

 

Maybe I'm misunderstanding, but you shouldn't need a BVH-to-Action converter. Just bake out the chor action of the model that's constrained to the BVH. After baking, it's just bone actions on the model. They have the same keyframe info as the BVH, but now it's just regular transform keys on the bones, without constraints or an action object.


OMG Bendy, that's what I've been looking for: something that will allow me to have Actions based on a BVH file, but without the BVH file attached in some way.

 

I was going to take the long way and animate my model's bones to the BVH bones, manually moving each bone in an Action window to match the corresponding BVH bone, but with fewer keyframes. It's actually quite easy, but time consuming. This way, I'll just need to attach the model to the BVH rig and then bake out the action in the chor. It's still going to be time consuming to go through several BVH files, but far less so.

 

However, in baking out the action, what would be a good Unit of Error Tolerance for Channel Reduction? I did a test with "1" and it seems that various keyframes for various bone rotations were strewn throughout the action. No biggie, but tweaking the Action after that will take some adjusting.


Ben, thanks so much for that insight!

Like I said, I have never used the "bake all actions" command so I am ignorant about what the end result would be. I will definitely be testing it later today though.

Thanks again.

----------------

This is very useful, but it still leaves the animator with motion that is pretty much locked in stone. As all the constraints are removed, it prevents refining and tweaking the motion by manipulating the facial rig.

 

I would like to be able to capture a face actor's performance, filter the BVH data through a plugin that would use user-defined limits for each bone and remove any keys where the motion falls below those limits. The plugin would then copy the remaining keyframes into the action applied directly to the facial rig. The BVH file could then be discarded, and the animator could refine the animation by removing and tweaking keyframes in each of the channels as one normally does in an action.

 

That would smooth the workflow and keep everything tidy and A:M native. :)
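No such plugin exists as far as I know, but the filtering rule itself is simple. A sketch of the idea on one channel (Python; the names and threshold are mine, purely illustrative):

def filter_keys(keys, limit):
    # keys: (frame, value) pairs in frame order for one bone channel.
    # Keep a key only if it moved at least `limit` since the last kept key.
    kept = [keys[0]]
    for frame, value in keys[1:]:
        if abs(value - kept[-1][1]) >= limit:
            kept.append((frame, value))
    return kept

keys = [(0, 0.0), (1, 0.2), (2, 0.3), (3, 4.0), (4, 4.1)]
print(filter_keys(keys, limit=1.0))  # [(0, 0.0), (3, 4.0)]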


I downloaded Zign Track. It has a nice interface and is easy to understand. I didn't use regular round markers for my first test, just some cut-up little mailing labels; I'll have to go to the store to get some regular ones. But I found my webcam only records 10 fps :( and I didn't get very good results. I tried making it 30 fps in QuickTime Pro; I guess it just adds more frames. Does that help? I still got bad results. Is that because of the video, or because my markers were bad? I'll have to try with better equipment.


This is very useful, but it still leaves the animator with motion that is pretty much locked in stone. ...

 

If you use the setup for the FACE controllers, you can tweak everything by hand at any time. I'm not sure how you are setting it up, Paul, so I don't know what to suggest as a solution. For driving the FACE controls, I made copies of the controls with "BVH" in their names and used the BVH data to drive those. Then I had the actual controls tied to the "BVH" controls with constraints...no extra plugin necessary.

 

Hope that helps.


I found my webcam only records 10 fps :( I didn't get very good results. I tried making it 30 fps in QuickTime Pro; I guess it just adds more frames...

 

10 fps is probably too slow. You could cheat and make sure you speak and move really, really slowly. At 10 fps you might actually be moving a marker on the face so fast from one frame to the next that there is nothing to "track".

 

-vern


But I found my webcam only records 10 fps :( and I didn't get very good results. ...

 

With a slow camera like that, the chance of feature swapping gets really big. If the markers are close together and you move, a marker could end up at the position of its neighbor on the next frame. 10 FPS is probably too slow for good results (it's certainly too slow for lip syncing). But if you'd like to play with it anyway, you'll have to adjust the first 3 dots manually: place the features by hand on the frames that are hard to track, and track again until the neck, forehead and chin features are tracked correctly. Once those are tracked as they should be, the motion guide will use them as a reference to predict the positions of the other features. Check the other features pair by pair and try to track them. Lock the features that are tracked correctly to save some time.
If this doesn't work out the way you want, you really need a better camera.


I would like to be able to capture a face actor's performance, filter the BVH data through a plugin that would use user-defined limits for each bone and remove any keys where the motion falls below those limits. ...

Here is my approach; it's pretty simple and all done in A:M. The BVH and the rigged character never exist in the same model.

 

Create a capture model which consists of ONLY the bones you are driving directly with the Zign Track BVH.

 

Load a BVH and constrain (or relate) the model's bones to the BVH. (Save this project as a capture template.)

 

Bake the model's action down to transforms on the bones.

 

Export the action.

 

This action can now be loaded onto the rigged character. It will affect the bones in the character that have identical names to the bones in the action. It is a regular A:M action: you can key-reduce it, you can blend it with other actions, and you can layer actions on top, including actions driven by constraints.

 

I use this approach to drive muscle poses in the face rather than bones. The intermediary relationship, which lets the bone drive the pose, has the added benefit of allowing you to remap the motion to a curve by adding keys in the relationship (see the sketch below). There's no reason you couldn't do this with bones.
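That remap-to-a-curve idea is just a piecewise-linear map from the bone's input to the pose's output, with the relationship keys as the curve points. A sketch (mine, in Python; the numbers are made up):

import numpy as np

# Ease the response: small rotations barely move the pose, large ones do.
curve_in = [0.0, 5.0, 15.0, 30.0]    # bone rotation, degrees
curve_out = [0.0, 2.0, 20.0, 100.0]  # pose percentage
print(np.interp(10.0, curve_in, curve_out))  # 11.0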

 

I think that does everything you were asking about.

 

Ben


Luuk, what I have seen is outstanding, thanks. I downloaded the trial and I am getting an EAccessViolation message while starting the program. I am running XP Pro on an Athlon 3700 with 2 GB of memory. Any suggestions?

 

Oscar

 

Please send bug reports to support@zigncreations.com. Tell me whether the program itself is showing or not.


This looks like a great app. Since it exports BVH, it can work with pretty much anything. My question, though, is: how would you rig something like this up? I'm in LightWave 9.

 

Thanks guys!

rez

 

This forum is for Animation: Master discussion only, so a Lightwave thread would be a bit misplaced. I don't know how it's done in Lightwave. I think you should start by looking for a tutorial that explains how to handle a BVH file in your app. Or buy Animation: Master :rolleyes:


Woohoo, I finally had time to get Zign Track up and running. I am attaching a QT of my first successful test.

 

ernie1L_.mov

 

However, I had an annoying problem: the BVH exported from Zign Track was not the same length as the soundtrack from the same video; the BVH seems to get longer. For this render I had to rescale it to match the sound. I can't figure out what the discrepancy is. I thought it might have to do with A:M operating at 30 fps and video at 29.97, as that was an issue with SynthEyes, but so far that doesn't seem to be it. Is anyone else having these issues?

 

Also, and probably related, the keyframes in the BVH import with a spacing of less than a whole frame. This seems wrong, as the video was tracked on whole frames. Does the BVH file specify a frame rate, or is this an A:M importing issue?

 

Any thoughts will be appreciated.

 

Ben



The frame rate specified in the BVH file is the same as the frame rate of the video you loaded into Zign Track. I don't know what's causing the keyframes in A:M to be spaced at less than one frame. I've had such an issue before (not with BVH) and solved it, but that was a long time ago and I don't remember how I solved it back then. If you email me your project file and the BVH file, I can take a look at it.
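One magnitude worth checking, in case the 29.97 theory comes back into play (an assumption on my part, not a confirmed diagnosis): if the BVH carries Frame Time = 1/29.97 s and the A:M timeline runs at 30 fps, every sample lands at a fractional frame position, and the length drifts by about 0.1%.

spacing = 30.0 / 29.97               # ~1.001 A:M frames between BVH samples
print(spacing)
print((spacing - 1) * 29.97 * 60)    # ~1.8 frames of drift per minute of capture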

 

Your test looks pretty good, though in one part the mouth hardly moves while you're talking. Was that like your video, or did you smooth the mouth too much or something?



Probably a combo of both. I'm using the BVH to drive muscle poses, so I might just need to adjust the sensitivity of the relationship.

 

Ben

 

Edit: I tweaked the curve in the mouth-open relationship and that fixed it right up.

