Hash, Inc. - Animation:Master

Facial MoCap experiments


Luuk Steitner


Now, to create my setup action from my current action (in order to avoid having to reset all my constraints), can I just delete the captured data for the rig and save as "BVH_Setup.act", or will I have to start again with a new action and set up all the constraints again?

---------

You can delete it if you like, but you don't have to. When you choose "capture sequence" it will replace the previous action. You don't have to set up constraints or anything; capture sequence is the only thing you have to do to use your newly captured data. This is what makes motion capture so much fun. Once you have set up your character and action you're only seconds away from a completely new animation ;)


Hi Luuk... it's a great tool for capturing facial movements... do you know if any A:M version supports this? Congrats... go ahead with it...

 

I have used the BVH data from Zign Track in v11.1, v12, v13 and v14 while testing the FACE controls. I haven't tested any versions older than that.


Hi Luuk... it's a great tool for capturing facial movements... do you know if any A:M version supports this? Congrats... go ahead with it...

 

I have used the BVH data from Zign Track in v11.1, v12, v13 and v14 while testing the FACE controls. I haven't tested any versions older than that.

 

 

Thanks for the answer, itsjustme; good for me... I have version 11.1 for now, so I can use the BVH as well.


Here is another test for Zign Track:

 

 

 

Sorry about the size! I need to test whether using an MP3 audio track reduces the file size by much. This was just a WAV file converted to 8-bit mono. Once again, only the eye movements have been added. Everything else is direct from Zign Track's BVH data. I have increased the enforcement for the eyebrows and the bottom lip to add some exaggeration, though.

 

I found that I had problems with the head jolting about for one frame every second, and then I discovered that there were lots of subframes in the channels of the BVH Action Object. It seems that if you recapture BVH data into a BVH Action Object, the old data remains on subframes. How the subframes got there in the first place is a mystery, though. Deleting all the data, except frame 0, from the Action Object before recapturing sorted the problem. (Just one to watch for if you notice unexplained jerks in your BVH movies.) I will watch for this, and if it keeps showing up I will report it to A:M Reports.

 

Beta 6 has been released and Zign Track continues to improve. :)


Thanks, Luuk! No, I mean, THANKS, LUUK! :D

Without your software and A:M working well together I wouldn't be having all this fun.

 

I'm going to post Zign Track tests to YouTube to save Hash Inc disc space and to hopefully promote this great tool.

 

Zign Track Testing #1

(That is just the same video as the one above. I will just post future tests to YouTube).


Thanks, Luuk! No, I mean, THANKS, LUUK! :D

Without your software and A:M working well together I wouldn't be having all this fun.

 

I'm going to post Zine Track tests to YouTube to save Hash Inc disc space and to hopefully promote this great tool.

 

Zine Track Testing #1

(That is just the same video as the one above. I will just post future tests to YouTube).

 

Thanks Paul, but one comment: it's Zign Track and zigncreations.com ;)


I just made a new demonstration video. See this link.

I'm planning to use this one on my website, but I'm not totally pleased with it. I should have spent a bit more time rigging/constraining the face. I'll make a new video later. First I have to get my updated website online. I don't have enough time at the moment to finish my completely new website because I'm also working on a movie, so I'll release it on a temporary page first and launch my new site some weeks later.


Luuk, I think, just guessing though, that maybeee the model, I mean the real model, the real girl, just might distract from the software, (what's it called again?, just kidding) even with all those white heads on her face, she is just too cute.........


If you're really serious about this, I would suggest you take that down until you get time to do a better one. The mouth hardly opens and generally the expressions aren't as accurate as previous videos. Being new, you want to create as good an impression as possible. Or at least say in the video that this is an early demo. Also, you might wait till the website is ready.

Anyway....use that girl next time too. :)


Luuk, I think, just guessing though, that maybeee the model, I mean the real model, the real girl, just might distract from the software, (what's it called again?, just kidding) even with all those white heads on her face, she is just too cute.........

 

I'll tell her ;) she's my sister.


If you're really serious about this, I would suggest you take that down until you get time to do a better one. The mouth hardly opens and generally the expressions aren't as accurate as previous videos. Being new, you want to create as good an impression as possible. Or at least say in the video that this is an early demo. Also, you might wait till the website is ready.

Anyway....use that girl next time too. :)

 

I guess you're right... I'll take it down and make some time to make a better one.


I did learn a lesson with this video. The actual problem was that my sister had her head tilted backwards at the first frame, while Zign Track needs the model to be in a neutral position at the first frame. This was causing bad offsets, which I reduced by hand, and because it didn't all work out like I wanted I reduced some enforcements too.

I made a new BVH file, this time starting at a neutral position. I had to leave the first few seconds in, but now I was able to use this BVH without having to adjust the constraints. It looks a lot better now. To make it perfect I would have to add a few poses for the mouth by hand, but it might be better to show a demo without manually added poses so everyone can see exactly what was done by Zign Track and what the animator might add to the mocap.

 

Here's the new video


Hi Luuk:

 

Your video is looking pretty good. I have uploaded my third test with Zign Track to YouTube. As with Paul's, the only tweaking that I did was adding blinking and eye movement. All other movement is being controlled by the BVH file from your software. The software is working great and is very easy to use.

 

 

Zign Track Facial Motion Capture Test 3

 

 

I think what I may do next is to take a non-human character and add the same BVH file to it.

 

Thanks...

Al


Thanks Luuk.

 

As I mentioned in my previous post, I took the same audio and BVH file and applied it to the Penguin model from the Extras DVD. I had to re-work the beak so it would open (it had been modelled as a single piece). All motion in the video is strictly from the Zign Track BVH file. I can see so many possibilities for this type of software. Very cool stuff.

 

Al

 

Another Zign Track Test Using A Penguin


Here is another test of Zign Track: Silent Song 2

 

I had to strip out the sound for copyright reasons. Anyone got a song that they want to donate for testing purposes? :) All your music will be fully credited on any videos distributed. These will only be shown on YouTube, Google Video or my website, but it may give you a bit of exposure that you wouldn't otherwise get, and people can be directed to your own website.

 

I guess I'll go back to dialogue.

-----------------------------------------

 

Edit: Is it just me or is the internet slowing down? (It must be my ISP). :(

I tried to put this up on YouTube but my connection kept cutting out during the upload.

-----------------------------------------

Edit 2: Hooray! I managed to fix my internet connection! :)

I have replaced the video that the link above points to because there was a spelling mistake, as pointed out by Luuk in the post below. Now there is no more guessing what the song actually is.


That looks great, Paul! I have no clue what song it is, but I don't know many songs from the '50s.

 

If you're having problems with your internet speed, that can also be caused by a busy YouTube server. Just try again later.

Btw, you've called it Zine Tack again... :rolleyes:


Btw, you've called it Zine Tack again...

Yeeaaaah! :blink: Okay, I'm getting that tattooed to the inside of my eyelids! :lol:

I will fix it.

---------------

Edit:

 

thejobe, thank you for the offer of your song. I have been trying to download it, but my connection keeps cutting out. Thank goodness the contract with my ISP is about to expire! I am definitely jumping ship.

 

I will try again later. :)


Here is another test of Zign Track: Silent Song

 

I had to strip out the sound for copyright reasons. Anyone got a song that they want to donate for testing purposes? :) All your music will be fully credited on any videos distributed. These will only be shown on YouTube, Google Video or my website, but it may give you a bit of exposure that you wouldn't otherwise get, and people can be directed to your own website.

 

I guess I'll go back to dialogue.

-----------------------------------------

 

Edit: Is it just me or is the internet slowing down? (It must be my ISP). :(

I tried to put this up on YouTube but my connection kept cutting out during the upload.

 

Looks great, Paul. Great work, Luuk.


These are looking great. I just saw Beowulf, and its motion capture went back and forth between believable and stiff and lifeless, so I didn't know what to make of it. But seeing these tests, I am very encouraged about this product. Paul, I know there was no music, but it looked like you did a great job singing along with that song :)


Thanks, Barshow and Dennis.

Dennis, my timing and mouth shapes are out of synch in many places. I should have rehearsed more.

 

TheJobe, I have only just managed to sort out my internet connection and have just downloaded your song. Very nice and quite appropriate as a test for Zign Track. It will be a few days before I will be able to do the video for the tracking but I would be grateful if you could email me, or PM me, with any details that you would like to be placed in the credits.

 

Thanks again.

Paul


It looks like we're ready to roll!

 

I was waiting for the transfer of my website to my new web host, and just this morning it was activated. Zign Track appears to be running without errors, so version 1.0 is released now. I will finish the 2D tracking part (which was intended to be experimental) later and keep working on improvements as people tell me what they would like to see. In the meantime I will give a 10% discount, and 1.xx upgrades will be free of charge, so there's no need to wait.

 

To get Zign Track go to the Zign Track webpage

I will launch a new website in February with an automated payment system.

 

If you experience any problems email me at support@zigncreations.com

 

I hope you like Zign Track as much as I do ;)


Luuk,

 

Could you describe the current payment system? It looks like we give you our email, then a day later you email back an invoice? How is payment made with this invoice?

 

thanks!

 

-Jim

 

After I have received your order you will receive a PayPal invoice. You can pay with your PayPal account, credit card, or bank transfer.

When the payment is received you will get your license key.

 

 

Looks like it converts to $172.55 USD...right?

 

The current rate is 1.486, so that's 144.99 USD at the moment. I can't charge VAT to customers outside Europe.
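For anyone puzzled by the two different dollar figures, the gap is most likely VAT. Here is a rough back-of-the-envelope check in Python; the 19% Dutch VAT rate is my assumption, not something stated in the thread:

```python
# Rough check of the two USD figures quoted above (the 19% VAT rate is an assumption).
rate = 1.486                                      # EUR -> USD rate quoted by Luuk
vat = 0.19                                        # Dutch VAT at the time (assumption)

price_ex_vat_eur = 144.99 / rate                  # ~97.57 EUR excluding VAT
price_inc_vat_eur = price_ex_vat_eur * (1 + vat)  # ~116.11 EUR including VAT

print(round(price_ex_vat_eur * rate, 2))          # 144.99 USD - what non-EU customers pay
print(round(price_inc_vat_eur * rate, 2))         # ~172.5 USD - the VAT-inclusive figure, within rounding
```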


good news :lol: , please support "Import Folder as Frames" (.tga, .jpg, .png) ;) , it's more flexible than AVI files :P

 

example: frame0000.tga, frame0001.tga, frame0002.tga ... :P

 

I was thinking about that. I'll put it on my to do list, but I'm not sure how soon I'll get to do that.


Luuk, I'm having great fun working with your tracking program. I am currently using the Squetchy Sam model to experiment on. However, to truly get a feel for your program, I would like to experiment on my models for the film that I am working on. That way, I can test out the percentage variables when exporting a BVH file to see how they will work on my models when I get ready to implement your software into my workflow.

 

However, I'm guessing I need to use the FACE setup and bring that into my own models (versus coming up with my own system of bones and such). I have several actions already created using my existing model rig and would like to keep that rig since I am familiar with it.

 

What would be the best way to implement the FACE setup into existing models?

 

I see a lot of discussion on the forums and am trying to follow the video tutorials for FACE, the Squetchy Rig, et al. It's a lot to cover.

 

What I would like to do is just import the FACE setup bones and relationships into my current model and rig, and CP weight accordingly.

 

Is there just a bone model for the FACE setup, or would I have to import the Squetchy Rig into my model and delete all bones and relationships except anything FACE-related?

 

Any help would be greatly appreciated by anyone with any knowledge or experience with the FACE setup. It's wonderfully implemented.


Hi Ernest,

 

I don't think copying parts from the Squetch Rig would be the best idea. As far as I know, you'll then have to add all the relationships yourself.

How is the face rig set up in your models? If it's a bit like this example it is easy to constrain: FrenchMan.zip

If your face is driven by poses, you could also let the BVH rig drive those poses. You'll have to add some bones or nulls and constrain them to the BVH rig (orient like only; don't forget compensate mode). Those bones or nulls can then be used to drive your poses.

 

In my example the face rig looks just like the BVH rig, except for the sneer and cheeks; those are driven with a smartskin. So that's the third possible method.

 

I hope this helps.


Thanks Luuk, I'll take a look at your example tonight. I've been trying to go through the v13_posable Squetchy Rig and trying to walk through the tutorials to rig one of my models... and... I think my brain exploded. I'll try to walk through yours and see if I can use constraints. I'm not a big rigger person, and a lot of this is new to me. I'm grateful to all who have contributed this wonderful stuff; truly taking advantage of what is out there will, I feel, be a very patience-driven journey for me.


Ernest, if you look at the BVH rig you should notice that all the mouth parts have their pivot point at the same spot, which is roughly the centre of the head and centred vertically on the mouth. There is nothing to stop you having them elsewhere, but as the BVH data is simply rotation data for the bones (filtered through Luuk's smoothing and enforcement routines), your rig will be affected by the placement of its pivot points.

 

Also, if you study the hierarchy of the BVH bones in the Project Workspace you will see that the head bone is a child of the neck bone and all the rest of the bones are children of the head bone. You can very quickly build your own simple face rig to constrain to the BVH rig.
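Since a BVH file is plain text, you can also check that hierarchy outside A:M. Below is a rough Python sketch (my own illustration, not part of Zign Track; the file name is hypothetical) that prints the bone tree so you can confirm the neck/head/face-bone parenting before building a matching face rig:

```python
# Print the bone tree from a BVH file's HIERARCHY section (illustrative sketch).
def print_bvh_hierarchy(path):
    depth = 0
    with open(path) as f:
        for line in f:
            tok = line.split()
            if not tok:
                continue
            if tok[0] in ("ROOT", "JOINT"):
                print("  " * depth + tok[1])      # bone name, indented by its depth
            elif tok[0] == "End":                 # "End Site" marker
                print("  " * depth + "(end site)")
            elif tok[0] == "{":
                depth += 1
            elif tok[0] == "}":
                depth -= 1
            elif tok[0] == "MOTION":              # hierarchy section is finished
                break

print_bvh_hierarchy("zign_track_export.bvh")      # hypothetical file name
```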


What would be the best way to implement the FACE setup into existing models?

 

There are standalone versions of the FACE controls at the bottom of the Wiki page... in the ZIP there are three versions: one with just the FACE controls, one with the FACE controls and tongue setup, and one with the FACE controls and the bones setup for the face. These will work with any rig; the only assumption is that the rig has a bone named "head".

 

Hope that helps, Ernest.


I just found out I forgot something :huh:

During beta testing I made a lock in Zign Track which disables the beta on 01/01/2008. When I released Zign Track I forgot to remove that lock, so if you downloaded and installed it before you saw this message, you should download it again from my site if you plan to use it in 2008 :lol: . I have uploaded the new setup file.

 

I'm sorry about that...


Thank you Luuk, Paul and David (itsjustme) for your help and suggestions.

 

I managed to apply a setup similar to Luuk's Frenchman model, since I will have to set this up for multiple characters and needed something I can manage, set up, and tweak (read: something 'I' can understand) accordingly. I'm definitely going to have to wade into CP weighting a bit more (does computing CP weights before assigning CPs corrupt anyone else's models? hehe); it seems to be the answer to some of my other issues.

 

I should be able to post something later today. I was setting things up last night (4 AM :P ), checking things, fixing, tweaking CPs, etc. in order to get the rig in the model and make sure nothing was too out of place.

 

I do have one question, Luuk. The AVI that I have imported into Zign Track shows frames 1 to 225. However, when I export the BVH file and create the action in A:M, it imports with keyframes 1 to 195. But it doesn't seem to be truncating the BVH file; all the 'action' in the original AVI seems to be there. (That is, all the mouth movements for the dialogue are covered and keyframes are set for the words mouthed.) I'm guessing it might have something to do with Zign Track using 30 FPS while my A:M project is set up for 24 FPS? I'll try a project set up for 30 FPS and see if that is it.

 

From what I have managed to render out for test videos so far, I could not be happier or more overwhelmed with possibilities. Luuk, YOU are DA MAN!!


I do have one question, Luuk. The AVI that I have imported into Zign Track shows frames 1 to 225. However, when I export the BVH file and create the action in A:M, it imports with keyframes 1 to 195. But it doesn't seem to be truncating the BVH file; all the 'action' in the original AVI seems to be there. (That is, all the mouth movements for the dialogue are covered and keyframes are set for the words mouthed.) I'm guessing it might have something to do with Zign Track using 30 FPS while my A:M project is set up for 24 FPS? I'll try a project set up for 30 FPS and see if that is it.

 

From what I have managed to render out for test videos so far, I could not be happier or more overwhelmed with possibilities. Luuk, YOU are DA MAN!!

 

Zign Track uses the same frame rate as your video file. It could be that if your A:M project has a different FPS, the length will change; I have not tested this.

If you set your project FPS the same as your video, it should work fine. If you want to alter the FPS of your BVH file, you could use Dave Dub's BVH hacker, which has a resample-to-30-FPS function.
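If you would rather not round-trip through another tool, a BVH file is simple enough that a short script can resample it. The sketch below is only an illustration under stated assumptions (it is not Zign Track or BVH hacker code): it expects a standard layout with one "Frame Time:" line and one line of channel values per frame, and it blends channel values linearly, which is acceptable for the small per-frame rotations of facial capture but is not a proper rotational interpolation.

```python
# Illustrative resampler for a standard BVH file (e.g. 30 fps capture -> 24 fps project).
def resample_bvh(src_path, dst_path, target_fps):
    with open(src_path) as f:
        lines = f.readlines()

    motion_at = next(i for i, l in enumerate(lines) if l.strip() == "MOTION")
    frames_line = motion_at + 1                      # "Frames: N"
    time_line = motion_at + 2                        # "Frame Time: 0.0333333"

    src_dt = float(lines[time_line].split(":")[1])
    frames = [list(map(float, l.split())) for l in lines[time_line + 1:] if l.strip()]

    dst_dt = 1.0 / target_fps
    duration = (len(frames) - 1) * src_dt
    n_out = int(duration / dst_dt) + 1

    out_frames = []
    for i in range(n_out):
        t = (i * dst_dt) / src_dt                    # position measured in source frames
        a = min(int(t), len(frames) - 1)
        b = min(a + 1, len(frames) - 1)
        w = t - a
        out_frames.append([x + (y - x) * w for x, y in zip(frames[a], frames[b])])

    with open(dst_path, "w") as f:
        f.writelines(lines[:frames_line])            # HIERARCHY section plus the MOTION line
        f.write("Frames: %d\n" % n_out)
        f.write("Frame Time: %.7f\n" % dst_dt)
        for fr in out_frames:
            f.write(" ".join("%.6f" % v for v in fr) + "\n")

resample_bvh("face_capture_30fps.bvh", "face_capture_24fps.bvh", 24.0)  # hypothetical file names
```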


If you want to alter the FPS of your BVH file, you could use Dave Dub's BVH hacker, which has a resample-to-30-FPS function.

 

You can use A:M to do this too... though you will have to do the math. All the keyframes of the BVH 'should' be adjustable... meaning that if you drag a lasso box around the keyframes (there are a LOT of them, and my computer slows down) you can then use the surrounding hash box to scale them in time... longer or shorter. SO - if you shot 10 seconds of action at 30 fps and import it into A:M in a 24 fps project, you would then scale all the keyframes to the 240th frame... or something like that. You could also use this method to generate slow-motion.
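The arithmetic behind that "240th frame" is just keeping the real-time length of the clip constant when the frame rate changes. A tiny illustrative helper (the function name is mine, not anything in A:M):

```python
def rescaled_end_frame(source_frames, source_fps, target_fps):
    """Frame the last keyframe should land on so the clip keeps its real-time length."""
    seconds = source_frames / source_fps
    return round(seconds * target_fps)

print(rescaled_end_frame(300, 30, 24))   # 10 s shot at 30 fps -> scale keyframes out to frame 240
print(rescaled_end_frame(225, 30, 24))   # the 225-frame capture above -> frame 180 in a 24 fps project
```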

 

I haven't had a chance to play with this, Luuk - but I can't WAIT! Yes... the mind boggles with possibilities!

