Posts posted by Luuk Steitner
-
This image is reduced in size. Are the black dots on the original render perfect squares or are they smooth? If they are squares it's a render error. I have had such problems in v14 some time ago and I was hoping it was solved by now.
It might be something else though.
-
Ah, yes! Thanks, Luuk.
Any clues as to what is happening here?
This was a test where a BVH file was baked to an action from the choreography. The action was then applied to this model (which uses a copy of the rig that I constrained the BVH data to before baking).
This jitter was not apparent before baking.
Maybe the jitter is caused by the error reduction while baking the action. You could check the spline shape of the rotation axes of those bones: if the splines are poking out between the key frames you know where the error is. I'm just guessing; I'm not sure if this can happen, and if it does, I don't think it's supposed to...
-
Can someone explain what RotateW is?
I'm also quite puzzled by the feedback that I get from A:M when I click on any of those transform channels for the "jaw" bone. If I click on any one from Transform Scale X to Transform Translate Z, the jaw bone is highlighted in yellow. But if I click on Transform Rotate X to Transform Rotate W, a finger bone is highlighted.
The Rotate W is part of the quaternion rotation and compensates the Rotate X so that the magnitude sqrt(w^2 + x^2 + y^2 + z^2) is always 1.
I don't know what causes the strange highlights.
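As a rough illustration only (plain Python, not A:M or Zign Track code): given the x, y and z components of a unit quaternion, w is whatever value keeps the magnitude at 1.

    import math

    def quaternion_w(x, y, z):
        # w compensates the other components so that
        # sqrt(w^2 + x^2 + y^2 + z^2) stays exactly 1
        return math.sqrt(max(0.0, 1.0 - (x*x + y*y + z*z)))

    print(quaternion_w(0.0, 0.0, 0.0))  # 1.0: no rotation at all
    print(quaternion_w(0.3, 0.1, 0.2))  # ~0.927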
-
How long is your action? Maybe you can give it a try with a short action (a few seconds) and see what is baked.
-
Has anyone had any success baking actions in A:M14c? I just keep getting exception 001 followed by a crash.
Any suggestions?
Same problem here (also v14). Time for a report, I guess.
It does work from the chor for me.
-
First, ZignTrack exported a bvh with a frame interval of .034483, which seems off. My original footage was NTSC 29.97 which should yield a frame interval of .033333, and if I hack the bvh and change the interval to .033333 it imports correctly into my 30 fps A:M project.
I found the problem. The FPS is calculated as the rate divided by the scale (that's the normal way to do it from the AVI parameters), but those values are integers, so in the code the result was also an integer. When the result should have been 29.97 it came out as 29. I have modified the calculation method and the FPS is now calculated correctly.
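A minimal sketch of the bug in plain Python (not the actual Zign Track code), assuming AVI-style rate/scale values such as 30000/1001 for NTSC:

    rate, scale = 30000, 1001   # typical NTSC 29.97 AVI parameters
    fps_bug = rate // scale     # integer division gives 29
    fps_fixed = rate / scale    # floating point gives 29.97002997...
    print(fps_bug, round(fps_fixed, 2))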
I'm planning to upload the next upgrade this weekend. Can you wait this long?
-
First, ZignTrack exported a bvh with a frame interval of .034483, which seems off. My original footage was NTSC 29.97 which should yield a frame interval of .033333, and if I hack the bvh and change the interval to .033333 it imports correctly into my 30 fps A:M project.
Thanks for the info. I'll try to find out what causes this and fix it.
EDIT: Wait, at 29.97 the interval should be 0.0333667 instead of 0.034483. So it's wrong, but not in the way you say. An interval of 0.033333 would mean 30 FPS, not 29.97.
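For reference, the frame interval written to the BVH is just 1/FPS, so a quick check in plain Python shows where 0.034483 comes from:

    for fps in (30.0, 29.97, 29.0):
        print(fps, "->", round(1.0 / fps, 6))
    # 30.0  -> 0.033333
    # 29.97 -> 0.033367
    # 29.0  -> 0.034483  (an integer 29 FPS would explain the reported interval)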
-
Maybe I should clarify. How would you set this up in AM...and "use the premade setup" isn't really a helpful response. Is there a thread on how to rig a face in AM?
You could take a look at the Squetchy rig wiki. There are several tutorials there. You don't have to use the Squetchy rig if you don't want to but it will give you an idea how you can rig a face.
Also, you don't have to use a face rig that controls all facial features. You can use the BVH rig to drive muscle poses as well. On the tutorial page there should be plenty of info about this.
-
I did forget about A:Ms mocap setting, that might be the trick, or at least part of it.
The eyelids are driven by the Zign track bvh. The lower lids are driven by the sneer, and the upper lids by the eyebrow.
I wonder if the A:M MoCap settings are involved for BVH files. I'll have to test this.
That's a great idea, controlling the eyelids with the sneer and eyebrow features. I will add more features in a future upgrade so you'll have control of all facial features. Was it easy to stick markers to your eyelids, or did you use special markers for that?
-
Woohoo, I finally had time to get Zign Track up and running. I am attaching a qt of my first successful test.
However, I had an annoying problem. The BVH exported from Zign Track was not the same length as the sound track from the same video; the BVH seems to get longer. For this render I had to rescale it to match the sound. I can't figure out what the discrepancy is. I thought it might have to do with A:M operating at 30 fps and video operating at 29.97, as that was an issue with SynthEyes, but so far that doesn't seem to be it. Is anyone else having these issues?
Also, and probably related, the keyframes in the BVH import with a spacing of less than a whole frame. This seems wrong, as the video was tracked on whole frames. Does the BVH file specify a frame rate or is this an A:M importing issue?
Any thoughts will be appreciated.
Ben
The frame rate specified in the BVH file is the same as the frame rate of the video you loaded in Zign Track. I don't know what's causing the key frames in A:M to be spaced less than one frame apart. I have had such an issue before (not with BVH) and solved it, but that was a long time ago and I don't remember how I did it back then. If you email me your project file and the BVH file I can take a look at it.
Your test looks pretty good, though in one part the mouth hardly moves while you're talking. Was it like that in your video, or did you smooth the mouth too much?
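If you want to double-check what frame rate actually ended up in the exported file, the MOTION section of a BVH contains a 'Frame Time:' line; a small sketch in plain Python (the file name is just an example):

    with open("face_capture.bvh") as bvh:   # example file name
        for line in bvh:
            if line.strip().startswith("Frame Time:"):
                interval = float(line.split(":", 1)[1])
                print("interval:", interval, "-> FPS:", round(1.0 / interval, 2))
                break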
-
This looks like a great app. Since it's bvh it can work with pretty much anything. My question though is, how would you rig something like this up? I'm in Lightwave 9.
Thanks guys!
rez
This forum is for Animation: Master discussion only, so a Lightwave thread would be a bit misplaced. I don't know how it's done in Lightwave. I think you should start by looking for a tutorial that explains how to handle a BVH in your app. Or buy Animation: Master.
-
I've heard from a couple of folks who can't download or view this file. If you've tried and it won't play, let me know as I'd like to figure this out.
One guy gets an error message that he's missing some component (probably a codec) in his QT software, and another friend gets the blue Q logo but no download. Both *claim* to have the latest QT software, which I can't check. But if you've tried and failed, then given up, let me know what behavior you're seeing.
Thanks!
If one gets a message about a missing component he does not have the latest version of QuickTime. If one only sees the Q, it's just not downloading, or it's taking a long time to download.
-
Luuk what I have seen is outstanding thanks. I downloaded the trial and I am getting an EAccessViolation message while starting the program. I am running XP Pro on a Athlon 3700 with 2 Gigs of memory. Any suggestions?
Oscar
Please send bug reports to support@zigncreations.com. Tell me whether the program itself shows up or not.
-
I downloaded Zign Track. It has a nice interface and is easy to understand. I didn't use regular round markers for my first test, just some cut-up little mailing labels; I'll have to go to the store to get some regular ones. But I found my webcam only records 10 fps.
I didn't get very good results. I tried making it 30 fps in QuickTime Pro, but I guess it just adds more frames; does that help? I still got bad results. Is that because of the video or because my markers were bad? I'll have to try with better equipment.
With a slow camera like that the chance of feature swapping gets really high. If the markers are close together and you move, it could be that a marker has moved to the position of its neighbor on the next frame. 10 FPS is probably too slow for good results (it is certainly too slow for lip syncing). But if you'd like to play with it anyway, you'll have to manually adjust the first 3 dots first. You can place the features by hand on the frames where they are hard to track, and track again until the neck, forehead and chin features are tracked correctly. Once those are tracked as they should be, the Motion guide will use these features as a reference to predict the position of the other features. Check the other features per pair and try to track them. Lock the features that are tracked correctly to save some time.
If this doesn't work out the way you want, you really need a better camera.
-
The Action continues to reference the bvh file, I just tested it. Delete the bvh and the action is empty.
I didn't expect that. It appears in A:M like it's baked so I assumed it was. I wonder why capturing a sequence takes a minute but reloading the action does not. Maybe it is baked but deleted when the BVH file is missing.
This could be tested by swapping names of BVH files to see if the other action is loaded. I'll check it out later.
-
When you capture a BVH sequence the animation is baked into the action. After this the BVH file is not needed anymore, if I'm right. I give my actions any name I want; that shouldn't matter as long as it's obvious to you which action it is.
I was planning to add contrast/color adjustment in a future upgrade. But experimenting with the configuration settings should always do the trick if the lighting of your video is good. I understand it's not always easy to get good lighting so adding such a control might add value for some people. I was able to track all my videos without having to edit them first. Some of them were pretty bad.
-
Did you try closing your action/chor windows and reopening them? Sometimes that solves things like this.
-
I still don't understand why this would be necessary...maybe I'm missing something.
------------------
EDIT
------------------
I should have been clearer. Why would the BVH bones need to rotate with the chest? All that is needed is the data from the BVH bones to drive the face...which is parented to the rig's chest.
You're right David, but constraining the face bones directly to the BVH rig is easier to do for those who aren't that experienced with setting up poses, expressions etc.
Rotating the BVH rig is a simple solution, but I think your method of driving the face is the best way to do it.
-
Cronos,
I have found a very simple solution. The BVH root can be rotated by expressions in the chor.
To make it orient like the chest:
1. Select the model under the chor in the PWS.
2. Switch "show more than drivers" on.
3. Switch to skeletal mode.
4. Click on the BVH rig in the chor window.
5. Now that one of the BVH bones is selected, select the "shortcut to BioVision BVH File1" in the PWS.
6. In the Properties window open Transform->Rotate.
7. Select Rotate X, right-click and choose Edit Expression. In the expression for Transform.Rotate.X add: "..|..|..|..|..|Chest.Transform.Rotate.X"
8. Now select the Rotate Y property in the same way. In the expression for Transform.Rotate.Y add: "..|..|..|..|..|Chest.Transform.Rotate.Z"
9. Now select the Rotate Z property in the same way. In the expression for Transform.Rotate.Z add: "-..|..|..|..|..|Chest.Transform.Rotate.Y"
Note: The 'Y' is controlled by the 'Z' of the chest and the 'Z' by the inverted 'Y'. This is because the orientation of the bones is different (see the sketch below).
Now the head should move when you move the chest.
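Written out as a plain Python sketch (not A:M expression syntax), steps 7-9 amount to this remapping of the chest rotation onto the BVH root:

    def bvh_root_rotation(chest_rx, chest_ry, chest_rz):
        # X follows the chest's X, Y follows the chest's Z,
        # and Z follows the inverted chest Y (the bones are oriented differently)
        return (chest_rx, chest_rz, -chest_ry)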
-
I'm getting some more questions from Mac users asking if they can run my app on a Mac with a VM or Parallels.
Is there anyone out there with Windows on a Mac who is willing to test whether it works?
-
Thanks David, it's clear to me. I hope this won't be too hard for some people.
-
Apparently it's not possible to add an orient like constraint to the BVH object. I was taking a look at the Squetch rig because it works OK with that rig, although you have to rotate the head by hand. I noticed David used expressions to constrain the Squetch face. I'm not sure if that is what makes the difference.
David, if you read this; how did you do it?
A simple workaround would be to add a model with just the face rig and constrain that rig to the BVH rig. Add that model to the chor, constrain the root bone of that rig to orient like the chest of your model, and orient the face to the face rig.
I'm sure there must be better ways to do this (like David's) so any suggestions would be welcome.
-
William Sutton has the 'grandaddy of all BVH toots' at zandoria.com but I'll sum them up.
A BVH motion capture file is a skeletal motion file that imports nicely into A:M. (Make a new action and import a character, select New/Biovision BVH/, then find the file you want to load and load it.) The BVH will load frame by frame and may take a while for a longer BVH. You will then see a series of BVH skeleton bones in a tree separate from your character's rig. You will need to select THE parent BVH bone and rotate/scale/position it to your rig, whether just the face or the whole body, as close as possible. Then, using 'Orient Like', 'Constrain To', 'Aim at' and other constraints, you can 'nail' your rig to the BVH's bones... one by one. Then adjust, adjust, adjust until happy. The beauty of it is that once you have one installed you can go back and load another BVH action over the first (good time to use Save As...) and avoid all the constraining the 2nd, 3rd, 4th time 'round.
Happy Animating!
In the case of the Zign Track BVH files it is not necessary to rotate/scale/position the rig. You only need the 'orient like' constraints (with compensate). Only the rotation of the BVH bones is used.
I should make a special tutorial for this but I don't have time for it at the moment. I found the use of BVH files very easy when I did it the first time. I hope others will find it easy too. (Is that correct English?
)
-
If you get a message about the project file being incorrect I would think the file is corrupted. If you can post the file we can take a look at it.
Are you able to save new project files and open them?
-
Displacement and Toon Render
If those areas are considered to be edges, maybe a higher-resolution displacement map would help, or maybe a lower one... It might be worth a try.