Hash, Inc. - Animation:Master

Recommended Posts

Posted

Hi Luuk,

 

That is sooo impressive. I can't wait to see your future work.

 

I only have one question: how sub-pixel accurate is your tracker (1/2 pixel, 1/4, etc.)?

 

As I'm sure you already know, your sub-pixel accuracy will help a lot with lower resolution video captures.

 

Also, you might want to consider using small square stickers instead of dots. Or if you want to use dots, draw a black 90 degree corner in the middle of your dots or use smaller dots. This will also help the accuracy of your tracks.

 

I wish you all the best with your project,

Greg Rostami


Posted

I am interested in this motion tracking software package. I hope it can work OK with my webcam. But can we edit the facial animation after we apply it? If it is a keyframe per frame, I assume you can't really edit it much. And what are the requirements for the markers? Where do you even get something like that?

Posted
Hi Luuk,

 

That is sooo impressive. I can't wait to see your future work.

 

I only have one question: how sub-pixel accurate is your tracker (1/2 pixel, 1/4, etc.)?

 

As I'm sure you already know, your sub-pixel accuracy will help a lot with lower resolution video captures.

 

Also, you might want to consider using small square stickers instead of dots. Or if you want to use dots, draw a black 90 degree corner in the middle of your dots or use smaller dots. This will also help the accuracy of your tracks.

 

I wish you all the best with your project,

Greg Rostami

It isn't really sub-pixel accurate. There's no need for that because all values are converted to floating-point variables and then smoothed.

You're right about using smaller stickers, but those are hard to find and I didn't feel like making them myself. But that's no problem at all: in the configuration you can specify the size of the stickers used and the algorithm will track to their center.
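Luuk's description (locate a sticker of known size, keep marker positions as floating-point values, then smooth them) can be sketched roughly as below. This is my own illustration, not Zign Track's actual algorithm: the color-threshold centroid and the moving-average filter are assumptions.

```python
import numpy as np

def marker_center(frame, target_rgb, tolerance=40):
    """Find the centroid of a colored marker sticker in an RGB frame.

    frame: (H, W, 3) uint8 array; target_rgb: the approximate sticker color.
    Returns (x, y) as floats: even though pixels are discrete, averaging
    all matching pixels yields a fractional center position.
    """
    diff = np.abs(frame.astype(int) - np.array(target_rgb)).sum(axis=2)
    ys, xs = np.nonzero(diff < tolerance)   # pixels close to the sticker color
    if xs.size == 0:
        return None                          # marker not visible in this frame
    return xs.mean(), ys.mean()              # floating-point centroid

def smooth(track, window=5):
    """Moving-average smoothing of one coordinate's per-frame track
    to remove tracking jitter before exporting motion data."""
    kernel = np.ones(window) / window
    return np.convolve(track, kernel, mode="same")
```

A per-frame jitter of a fraction of a pixel largely averages out over the smoothing window, which is why explicit sub-pixel tracking matters less here.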

 

 

I am interested in this motion tracking software package. I hope it can work OK with my webcam. But can we edit the facial animation after we apply it? If it is a keyframe per frame, I assume you can't really edit it much. And what are the requirements for the markers? Where do you even get something like that?

If you want to edit the motions manually you can do that in the choreography, or add a second action and blend them.

The marker stickers I have used can be any normal sticker with a basic color. The brands I used are Herma and Avery. They should be available in any office supply store.

Posted

Just to show off again, here's a rendered animation with the smoothing I mentioned earlier. See this video

 

This animation was tracked from the same video as the previous tests, but this time the jitter is gone :lol:

Btw, did I mention I also added exaggerate options for each bone? :rolleyes:
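A per-bone "exaggerate" option typically amounts to scaling the captured motion away from a rest pose. A minimal sketch of that idea (the function and the flat list of per-frame angles are hypothetical, not Zign Track's actual API):

```python
def exaggerate(rotations, rest, factor):
    """Scale one bone's captured rotations away from its rest pose.

    rotations: per-frame Euler angles (degrees) for the bone;
    rest: the bone's rest-pose angle;
    factor > 1 exaggerates the motion, factor < 1 damps it.
    """
    return [rest + (r - rest) * factor for r in rotations]
```

With `factor=2.0`, a jaw that captured as opening 10 degrees would animate opening 20, which is handy for cartoony characters driven by subdued live performances.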

Posted

Wow, that is looking fantastic. Although $150 USD is probably a fair price to ask for the software, for a hobbyist like myself it is still out of my price range. I'm not criticizing or complaining about your price, but as a husband and father trying to make ends meet, it would be hard to justify spending that amount of money to my family. If it were around $50 USD, I could probably swing it.

 

Keep up the great work on this, it is really amazing.

 

Al

Posted

Pricing for something like this is a tough one.

 

One thing you have to look at is not how much A:M costs by comparison but how much work this tool by itself will save you. Think about the hours and hours... and hours... of work that could be saved by using this tool. Its value adds to the value of A:M, and the total perceived value of both might even be more than the actual price. I think the 3D painter tool for A:M is around the same price, isn't it?

 

If you needed to do truckloads of lip syncing in a short time... this could save you bundles. It depends, of course, on what you put value on: time or money. If your time isn't worth much, then the price is a bigger factor. If however you are a hobbyist animator (with a wife and kids) with limited time to spend on your creations... then the cost might not seem like so much.

 

As I get older... I find that my time is more valuable. ;) This is really exciting. I don't think $150 is over the top... I would prefer $99 but that's splitting hairs. ;)

 

This tool opens up a world of possibilities for a lot of people who either don't have the time or skills or resources to do a lot of lip syncing. Another thing this can do is provide the ability to SHARE bvh lip sync files or just simple head and face expressions. (you could even stick those dots on other movie clips of actors).

 

-vern

Posted

Luuk,

 

Thanks for the info. I'll start saving up. I'm guessing you'll be going through a PayPal-type service, or some other way one can use a credit card? Keep up the good work!

Posted
Luuk,

 

Thanks for the info. I'll start saving up. I'm guessing you'll be going through a PayPal-type service, or some other way one can use a credit card? Keep up the good work!

 

I think that $99 is a very good price. It worked quite well for 3DPainter; it has a nice flow (less than a hundred, even if only barely), and I think it would be a good price tag compared to A:M's pricing, which is a factor, even if not the biggest one, because you want to sell it for other packages too.

 

"Time spent" is very difficult to price correctly, because you don't know how many copies you will sell.

Selling 1,000 copies is different from selling 100, and so on, even if it would be a very reasonable pricing strategy.

*Fuchur*

Posted
If however you are a hobbyist animator (with a wife and kids) with limited time to spend on your creations... then the cost might not seem like so much.

 

What I mean by justifying the cost is that in the past couple of months, expenses around the house have kind of gone through the roof. It started with my buying a new laptop. It had been a while since I bought anything for myself, and things were pretty good for us financially, so that purchase was easy to justify. Shortly after that, I got some bad news from my dentist. With years of grinding my teeth, my mouth was over-closing, causing pain in my jaw and giving me headaches. After doing a bunch of tests, it was determined that in order to correct the problem, my bite needs to be opened 7.5mm. This is done by crowning all my teeth at a cost of around $35,000 to $40,000. My dental plan will cover around $12,000, leaving me on the hook for $23,000 to $28,000. To top it off, my wife's dental bridge needs to be replaced, adding another $1,500. Then last week, we got a letter from our house insurance company. Since our house has aluminum wiring, we need to have it inspected and fixed, otherwise our insurance will be cancelled. Now we are looking at another $6,500 for that.

 

So, spending $150 on a hobby at this time is hard to justify (in my particular case).

 

Al

Posted
Just to show off again, here's a rendered animation with the smoothing I mentioned earlier. See this video

 

This animation was tracked from the same video as the previous tests, but this time the jitter is gone :lol:

Btw, did I mention I also added exaggerate options for each bone? :rolleyes:

 

 

Looks great man!

I'll be over here standing on line:)

Posted

Damn Tralfaz. I thought my expenses were bad. I feel for ya man.

 

I would gladly pay $150 for this tool though. I would need to understand syncing the movements to audio, though. It's tough to speak at exactly the same time and pace as your recorded WAV file for the voice. Maybe there's a trick to it, or it's just an art that needs practice.

Posted

Thanks for all your thoughts about this. I haven't decided yet but we'll see.

 

Luuk,

 

Thanks for the info. I'll start saving up. I'm guessing you'll be going through a PayPal-type service, or some other way one can use a credit card? Keep up the good work!

 

Yes, I'll use the PayPal services for a start. That seems the best solution for now because it's the first time I'll be selling software online.

Posted
Damn Tralfaz. I thought my expenses were bad. I feel for ya man.

 

Not to worry folks. We have had some rough times and good times. This is just one of those times where things have happened all at once, and it will get better again. There are always people who are worse off than you are. Yes, money is a bit tight right now, but we are all pretty healthy and happy. I have a great wife and son, so I feel I am really fortunate.

 

Al

Posted

I think a video tutorial should be included or available for download with the purchase of the app, explaining how to set up the rig and how to get it to work with bvh data. Then I'm in...

Posted
I think a video tutorial should be included or available for download with the purchase of the app, explaining how to set up the rig and how to get it to work with bvh data. Then I'm in...

 

Yes, I will definitely make a tutorial on how to work with it. I'll write one when everything is working like it should, and I will probably make a video tutorial too.

Posted
Damn Tralfaz. I thought my expenses were bad. I feel for ya man.

 

I would gladly pay $150 for this tool though. I would need to understand syncing the movements to audio, though. It's tough to speak at exactly the same time and pace as your recorded WAV file for the voice. Maybe there's a trick to it, or it's just an art that needs practice.

 

I think the concept with this tool is not matching movements to an audio file. That is harder than doing lip sync animation by hand. If you ever listen to DVD commentaries "looping" dialog in a movie is a pain in the arse and the actors really hate it.

 

You should record the actors voice while video taping and use that audio along with the tracking information to do the lip sync.

 

-vern

Posted

Really nice tool!!! Some images of the app interface, please :P

 

Is it possible to animate creature or monster faces, not just human faces???

Posted
Btw, did I mention I also added exaggerate options for each bone? :rolleyes:

 

 

EXAGGERATE! Sweeeet!

 

Does your system track the eyes aiming? I don't think it's too big a deal if it doesn't. It would be SOMETHING if you actually brought this to market for us all. The 'animator' in me says "It's too much of a cheat!" but the filmmaker in me says "Long form conversation-driven animation is possible!" The results are quite compelling!

Posted
Really nice tool!!! Some images of the app interface, please :P

 

Is it possible to animate creature or monster faces, not just human faces???

 

Yes, you can use it for your monsters ;)

The interface looks like this for now: (pictures are a bit blurred because of the low quality compression)

Interface1.jpg

Interface2.jpg

 

Btw, did I mention I also added exaggerate options for each bone? :rolleyes:

 

 

EXAGGERATE! Sweeeet!

 

Does your system track the eyes aiming? I don't think it's too big a deal if it doesn't. It would be SOMETHING if you actually brought this to market for us all. The 'animator' in me says "It's too much of a cheat!" but the filmmaker in me says "Long form conversation-driven animation is possible!" The results are quite compelling!

It doesn't have eye aiming yet. I'll experiment with that later.

Posted

Thanks for the images, I like the interface :lol:

 

Is it possible to add more or fewer motion capture control points???

Posted
Thanks for the images, I like the interface :lol:

 

Is it possible to add more or fewer motion capture control points???

 

I will add more marker positions in a later update. First things first...

Fewer is possible; you don't need to set them all. Only the neck, forehead, nose and chin are always required, because they are the base for the calculations.

 

Looks like every dot on the face would be associated with a bone in the model's face?

 

Not every dot. The dots on the neck, forehead and eye corners are only used as reference to calculate the motion.

Posted

Me Want now!! :D

 

Ok....so, would someone need to setup the bones in the face before applying the BVH generated file? Or, does this require a special rigged face or skeleton?

 

Is it just a facial tracker for now? Or will it grow into a full body tracking program?

 

and as a side note.... me want now!! :D

 

Any ideas on a release date?

Posted
Me Want now!! :D

 

Ok....so, would someone need to setup the bones in the face before applying the BVH generated file? Or, does this require a special rigged face or skeleton?

 

Is it just a facial tracker for now? Or will it grow into a full body tracking program?

 

and as a side note.... me want now!! :D

 

Any ideas on a release date?

Yes, the face needs to be rigged with a rig that matches the BVH file. For now it's just a face tracker; we'll see what the future will bring, but don't expect you'll be doing full body tracking with just one camera. There is, though, a 2D part in the program that allows you to track anything you like with any number of features. It is an experimental part that isn't completely finished yet, and I'm not sure I'll have it finished for the first release. If not, I'll finish it in a later (free) upgrade. What you can do with that feature depends entirely on your imagination...

 

I understand you want it now but it isn't ready to be released yet. I'm working very hard to fix the last things that don't work smoothly yet and I decided to improve the tracking algorithm. Just be patient for a few weeks ;)

Posted
There is a 2D part in the program that allows you to track anything you like with any number of features. It is an experimental part that isn't completely finished yet. I'm not sure I'll have it finished for the first release; if not, I'll finish it in a later (free) upgrade. What you can do with that feature depends entirely on your imagination...

 

Then... now is when I start to be really, really interested!!! :D You can't imagine how happy it makes me to know you are working on this kind of useful solution, and now... well, here you have another impatient and interested guy :), so good luck with the project, and thanks in advance for all the effort and hard work. BYE!

Posted

Hi again,

 

I'm still working on some improvements, but I'm almost there (I think)

I have some good news for those who like to work with the Squetch Rig: David Simmons has added a BVH function to the Squetch Rig, so animations recorded with Zign Track are now very easy to use. Here's an example video with David's Squetchy Sam.

I forgot to store the audio when saving my captured video file, but the movements are pretty accurate.

Posted

Hey Luuk,

 

I was wondering whether you think your software will run under Parallels or VMware Fusion (virtual machine software)? I would like to have this working on my Mac... even through a VM...

Posted
Hey Luuk,

 

I was wondering whether you think your software will run under Parallels or VMware Fusion (virtual machine software)? I would like to have this working on my Mac... even through a VM...

 

I'm not sure but I think it should work. You should give it a try when I release the software (trial) and let me know.

Posted

I have been testing the Beta software for Luuk, and this is my second attempt at doing facial motion capture. I added the bone structure to the "Wild Bill" character, then added the .bvh file. I think my rigging and smart skinning could use some more work, but Luuk's software worked great!

 

Al

 

fmc_test_2b.mov

Posted

I would love to see some examples of exaggeration, if anyone has time to post those experiments... I just picked up a little MiniDV camera, and this program was one of the main reasons for getting one.

 

Looking forward to its release.

Posted

Hi Luuk and everybody,

 

First of all, WOW. I'm really impressed by your application, and count me among those who will buy the software.

I saw Face Robot from Softimage, which uses a similar system of markers, and I have to say that, for a pro package, it didn't impress me much.

It would be nice to have real-time recording and deformation applied to the model inside A:M, but anyway, your software works very well.

It will be perfect with eyelid and eye tracking :)

About the price, I think it's affordable, considering that Face Robot costs something like $50,000.

I'll stay tuned for the trial release.

 

... Vince

Posted
I would love to see some examples of exaggeration, if anyone has time to post those experiments... I just picked up a little MiniDV camera, and this program was one of the main reasons for getting one.

 

Looking forward to its release.

 

You can already give it a try with the new squetch rig. David has uploaded it today. He also included exaggeration poses in his rig and an example bvh file.

But it's better to track your own video so you know what you're going for.

 

The release is getting very close. I have beta 5 ready but this one only works with an installer, so I'm making an installer now. I hope this will be the last beta. When this is done we'll do some more testing and I'll finish the manual and update my website. I expect this will take 1 or 2 weeks.

Posted

Here is a rough example of some tracking that I achieved with Beta 4 of Zign Track.

(Sorry, there's no sound with this test.)

I had a few problems, mostly because the video sample I used would have been a bit of a challenge for any motion tracker, so I had to do quite a bit of manual tracking where the motion was too fast or where markers became obscured or hidden. Also, I rigged this head quickly and it could use another spline in the forehead. There are two or three places where the lips move to very strange poses. This is due to the fact that forward/backward motion cannot be tracked. Any puckering of the lips or puffing of the cheeks will need assistance from a pose slider, or something similar, to get the forward/back motion. Which brings me to my question:

 

What is the best approach to cleaning up animation like this? I suppose I bake the action and then just tweak the bones on the problematic frames?

 

My next test will include sound and I will start with a video that is a little friendlier for Zign Track to handle.

 

And guess what...? Luuk has just released version 1 to the testers, and my quick test has shown it to be much faster and much more accurate right out of the box! I fed it some of the same horrible video that I gave up on previously, and it just grabbed onto those markers and tracked about 600 frames in a few seconds. Brilliant! :D

 

Luuk, you're magic! :)

Posted

Wow, great job on your test Paul. I like the model of the head you used. It was very expressive, much more than the one that I tried.

 

Could you post a small sample of the video you used?

 

Al

Posted
What is the best approach to cleaning up animation like this? I suppose I bake the action and then just tweak the bones on the problematic frames?

 

Well done, Paul. At the point where the upper lip goes a bit too high, you might want to reduce the enforcement of the constraints. Where the lower lip makes that sudden movement, there are probably one or two frames where the tracking lost the marker. Did you save the project in Zign Track? If you reopen it, you can take a look at those frames to see what happened during tracking.

It's great to see such a nice video!

 

Cheers.

Posted

I'll keep this short because my internet connection keeps cutting out and I have just lost one lengthy message that I posted here.

 

First, thanks for the compliments! :)

 

Al, your example is terrific! :)

I won't post any shots of my old mug on these forums if I can help it. ;)

 

All motion in that movie is direct from Zign Track with the exception of the eyeballs and eyelids. Absolutely no adjustments made.

 

The error in the bottom lip was caused by markers jumping to the wrong feature when they overlapped. Because all of Zign Track's markers are the same colour it wasn't obvious by scrubbing through the video. Maybe each feature could be given a distinct marker?

 

The upper lip problem occurs where I performed a very contorted pucker pose with my mouth which pushed the lips forward and up. Because Zign Track is naturally unable to handle this forward/back motion from a single camera the result looks rather strange but I am fairly confident that by adding fore/back pose sliders for top and bottom lips that this can be made to work without too much extra animating.

 

I would like to know how to swap a BVH file in an Action. Does anyone know how to do this? I fixed the errant markers for the bottom lip and exported a new BVH file which imported into A:M14 but when I attempted to swap shortcuts in the Action A:M seemed to freeze and I had to quit via the task manager. Any ideas? :)

 

Sometime over the next few days I will stick those dots back on my face and give Zign Track something better to work with. I will try to keep track of times taken for each part of the process so that you can get a clear picture of what to expect to put in for what you get out.

Posted

Oh I can't wait for this !!! :)

 

Luuk, do you think you can release it by Thursday Nov 15? It would make for a very nice birthday present for me hehe just kidding

 

your efforts will be greatly appreciated.

 

thanks for your hard work.

Posted

Now here's an idea for a useful plugin, or a post process for Zign Track:

 

BVH2AMAct (BVH to A:M Action)

 

The idea would be to run a BVH file through this plugin, which would analyse the data for each bone and create an Action file with keyframes, filtering out any movements that fell within user-defined limits for each marker/bone. This would generate a smaller file and would make post editing much easier and native to A:M.

 

In the meantime BVH is good! :)

Posted
I would like to know how to swap a BVH file in an Action. Does anyone know how to do this? I fixed the errant markers for the bottom lip and exported a new BVH file which imported into A:M14 but when I attempted to swap shortcuts in the Action A:M seemed to freeze and I had to quit via the task manager. Any ideas? :)

 

This is what worked for me, Paul.

 

Hope that helps.

Posted
I would like to know how to swap a BVH file in an Action. Does anyone know how to do this? I fixed the errant markers for the bottom lip and exported a new BVH file which imported into A:M14 but when I attempted to swap shortcuts in the Action A:M seemed to freeze and I had to quit via the task manager. Any ideas? :)

 

Just use the "import sequence" function of the biovision object. Once you have set up the constraints you can use the same action for any BVH file that is set up the same way. A good idea would be to save your action to a separate action file so if you want to use multiple BVH actions just import the same action a few times, rename them and capture the sequence for each action.

 

Now here's an idea for a useful plugin, or a post process for Zign Track:

 

BVH2AMAct (BVH to A:M Action)

 

The idea would be to run a BVH file through this plugin, which would analyse the data for each bone and create an Action file with keyframes, filtering out any movements that fell within user-defined limits for each marker/bone. This would generate a smaller file and would make post editing much easier and native to A:M.

 

In the meantime BVH is good! :)

 

You mean a file with fewer keyframes? That's not a bad idea, and it could be useful for any BVH file. Maybe I'll make that later. Or some plugin writer with some spare time might do it :)
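The keyframe-filtering idea could be sketched as a simple tolerance filter over one channel's per-frame values. The function below is my own illustration of the proposed BVH2AMAct plugin, not an existing tool; the flat value list and `threshold` parameter are assumptions.

```python
def reduce_keys(values, threshold=0.5):
    """Keep only keys that move more than `threshold` since the last kept key.

    values: one channel's value per frame (e.g. a bone's rotation angle
    from a BVH MOTION block). Returns (frame_index, value) pairs; the
    first and last frames are always kept so the reduced curve still
    spans the full animation.
    """
    if not values:
        return []
    keys = [(0, values[0])]
    for i, v in enumerate(values[1:], start=1):
        if abs(v - keys[-1][1]) >= threshold:
            keys.append((i, v))
    if keys[-1][0] != len(values) - 1:      # always keep the final frame
        keys.append((len(values) - 1, values[-1]))
    return keys
```

Frames where the bone barely moves collapse into the surrounding keys, so dense per-frame capture data becomes a sparse, hand-editable set of keyframes.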

Posted

Big thanks, David! :)

 

Okay. A common base action with the rig constrained to a BVH rig ready for capturing the BVH data. This forms a template for all future BVH Actions. All new BVH data is imported into this action and saved with a new name. That works for me too. :)

 

Now, to create my setup action from my current action, (in order to avoid having to reset all my constraints), can I just delete the captured data for the rig and save as "BVH_Setup.act" or will I have to start again with a new action and setup all the constraints again?

---------

Edit: Sorry, I missed your post, Luuk.

Just use the "import sequence" function of the biovision object.

What is the "import sequence" function?
