Hash, Inc. - Animation:Master

Bendytoons

Forum Members
  • Posts: 517
  • Joined
  • Last visited

Everything posted by Bendytoons

  1. Thanks for putting that succinctly. That's pretty much what I meant. Ben
  2. This would be mostly set up of the face and can be related to the quality of the video. Bendytoons's videos show better results concerning mouth shape. Umm, well, yes and no. My stuff only uses four of the seven mouth points (sort of five, but I use the two lower lip points to drive a single value). Zign Track does export seven mouth points and a jaw. I only use four because I'm driving poses. I think a lot of what Mike is seeing is just the difficulty in using bone mo-cap directly. When the face being driven is proportioned very differently from the one captured, motions often become exaggerated or damped, and it can be hard to get good calibration. In the video it looks as if the mouth-closed calibration between the actor and the model is off; the mouth doesn't quite close even on the hard consonants. Luuk has built exaggeration into the export, but even then you need good calibration. One reason I use poses is that it's easy to recalibrate them, and they have hard limits (it is meaningless to drive them past 100 or 0). A rough sketch of that remapping is below.
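     Purely as illustration, here is a small Python sketch of what driving a pose from captured data looks like: a raw tracker value is remapped to a 0-100 pose percentage with hard limits, so recalibrating is just a matter of changing two numbers. The calibration values are invented for the example; this is not Zign Track or A:M code.

         # Hypothetical remap: raw captured value -> pose percentage with hard limits.
         def capture_to_pose(raw, closed_value, open_value):
             """Map a raw tracker value to a pose percentage, clamped to [0, 100]."""
             span = open_value - closed_value
             if span == 0:
                 return 0.0
             pct = (raw - closed_value) / span * 100.0
             return max(0.0, min(100.0, pct))  # driving a pose past 100 or 0 is meaningless

         # Example calibration: the tracker reads 0.12 with the mouth closed and 0.55
         # fully open (numbers made up for the example).
         print(capture_to_pose(0.10, 0.12, 0.55))  # 0.0  (clamped; mouth stays closed)
         print(capture_to_pose(0.34, 0.12, 0.55))  # ~51  (about half open)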
  3. Thanks for the tip, oh master of the bouncy.
  4. Darn it. I added dynamic constraints to the antennae, but I seem to be having trouble. Simulating the spring system seems to greatly dampen (or remove) the bounce. I'll keep at it, they'll be bouncy yet. Ben
  5. Here's the latest Ernie cartoon. Made with Zign Track, straight up no tweaking.
  6. Homeslice has it right. You can also attach your sprite emitter to a spiral path and have it emit sparks while it moves in a spiral.
  7. Thanks, Rodney. I don't think Ernie is quite ready for daily material. So far figuring out the production path is keeping me busy. And rendering is a bear; this piece took 11 hours to render even after I'd stripped it down to 13 seconds a frame. I am looking to incorporate more body and hand movement, but getting the facial thing working is priority one.
  8. Thanks, Luuk. I was trying to get a more cartoony, animated feel than the first incarnation, which had veered too far in the bugman direction. You can take a look at the earlier incarnation here:
  9. Here's the latest Ernie piece, a reaction to the SF zoo tragedy. Again done with Zign Track.
  10. I just posted another bug piece done with ZT in the WIP section: Ernie's Xmas
  11. Here's the latest Ernie piece. All done in Zign Track except the eye direction and blinks. It's a link to YouTube because I couldn't get the higher quality version to upload to the board. Lower quality, but you can send it as a holiday card:
  12. Luuk, Baking the action does overwrite all keyframes. Paul's problem was that importing a BVH does not overwrite all keyframes, only the ones that are specified in the BVH you are importing. This can be a real hassle if there are keys between frames, and can result in the kind of jitter Paul was seeing. I have made it a habit to select and delete all keys from the action object before loading a new BVH into it.
  13. Paul, A separate head rig is the simplest approach I've come up with. It would probably help your slowdown problem a lot. Let us know if you come up with another approach.
  14. Paul, I would say always bake in the choreography then export it as an action. I don't remember it ever working from an action.
  15. No worries. For anyone else with this problem: open the BVH in a text editor, search for "Frame Time", and change the number to .033333, which imports correctly into A:M whether the footage is 30 or 29.97 fps. A scripted version of the same edit is sketched below.
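     For anyone who would rather not make the edit by hand, here is a rough Python sketch of the same change: it rewrites the "Frame Time:" line in a BVH file to 0.033333. This is only an illustration, not part of Zign Track or A:M.

         import re
         import sys

         def fix_bvh_frame_time(path, new_interval="0.033333"):
             """Rewrite the sample interval on the BVH "Frame Time:" line."""
             with open(path, "r") as f:
                 text = f.read()
             text = re.sub(r"(Frame Time:\s*)[\d.]+", r"\g<1>" + new_interval, text)
             with open(path, "w") as f:
                 f.write(text)

         if __name__ == "__main__":
             fix_bvh_frame_time(sys.argv[1])  # usage: python fix_frame_time.py capture.bvh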
  16. Okay, the shorts really sold me on this. Looks good.
  17. Mocap settings do not seem to make any difference. My problem turned out to be twofold: First, Zign Track exported a bvh with a frame interval of .034483, which seems off. My original footage was NTSC 29.97, which should yield a frame interval of .033333, and if I hack the bvh and change the interval to .033333 it imports correctly into my 30 fps A:M project. Second, I was using a project based on one of your examples for my import. The original project was created at 25 fps, and despite changing the frame rate to 30 fps, the project saves the original 25 fps and uses that for bvh ingest. The only way to fix this was to hack the .prj file and change the value in a text editor. Importing a bvh with a .033333 frame value does result in keys falling slightly off frame as you get above 600 frames. Instead of frame 650 a key will fall on frame 649.99; this accurately reflects what should happen with 29.97 fps data importing into a 30 fps project. This shouldn't represent any major problem, and could be worked around with a simple hack if needed.
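     The off-frame landing is easy to verify with a little arithmetic; here is a throwaway Python check (nothing A:M itself does):

         interval = 0.033333     # frame time written into the BVH
         project_fps = 30.0      # A:M project frame rate
         sample = 650            # BVH sample index
         print(sample * interval * project_fps)  # 649.9935 -> lands just shy of frame 650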
  18. I wonder if the A:M MoCap settings are involved for BVH files. I'll have to test this. That's a great idea, controlling the eyelids with the sneer and eyebrow features. I will add more features in a future upgrade so you'll have control of all facial features. Was it easy to stick markers to your eyelids or did you use special markers for that? Luuk, for the eyelid tracking I actually stuck a little tiny loop of tape on my eyelash, which kept it from being obscured when the eye was open. Probably only practical if you have long eyelashes. Maybe you could put a marker on the eyelid, and then track whether it's visible or not; this would at least give you an on/off blink marker (a rough sketch of the idea is below).
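     Just to illustrate the on/off idea, here is a hypothetical Python sketch that turns a per-frame "marker visible" flag into keys for an eyelid-close pose (0 or 100). The visibility data is invented for the example; nothing here is actual Zign Track output.

         def blink_keys(visible_per_frame):
             """Turn per-frame marker visibility into (frame, pose%) keys, keying only the changes."""
             keys = []
             prev = None
             for frame, visible in enumerate(visible_per_frame):
                 value = 0.0 if visible else 100.0  # marker hidden -> eye closed
                 if value != prev:
                     keys.append((frame, value))
                     prev = value
             return keys

         print(blink_keys([True, True, False, False, True]))
         # [(0, 0.0), (2, 100.0), (4, 0.0)]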
  19. Here is the same exact video as the earlier post, but I've remapped how the bvh information affects the mouth open/close target. ernie1L_2.mov Paul, I did forget about A:M's mocap setting; that might be the trick, or at least part of it. The eyelids are driven by the Zign Track bvh. The lower lids are driven by the sneer, and the upper lids by the eyebrow.
  20. So the bvh is at 29.97 fps. Isn't this a problem because A:M can operate only with whole number fps, i.e. 30 fps? Is this why the keys come in on fractional frames? And can it be adjusted? When I was doing this with Syntheyes I had the 29.97 vs 30 fps problem, but the solution was easy. Export the video from Syntheyes as if the fps was 30. Render in A:M, import the rendered frames into my video program, where I specify them at 29.97. This way, each tracked video frame corresponds directly with a bvh keyframe. Has anyone done a piece long enough (at least 60 sec) to really notice a .03 frame difference yet?
  21. Probably a combo of both. I'm using the bvh to drive muscle poses, so I might just need to adjust the sensitivity of the relationship. Ben edit: I tweaked the curve in the mouth open relationship and it fixed that right up.
  22. Woohoo, I finally had time to get Zign Track up and running. I am attaching a qt of my first successful test. ernie1L_.mov However, I had an annoying problem. The BVH exported from Zign Track was not the same length as the sound track from the same video. The BVH seems to get longer. For this render I had to rescale it to match the sound. I can't figure out what the discrepancy is; I thought it might have to do with A:M operating at 30 fps and video operating at 29.97, as that was an issue with Syntheyes, but so far that doesn't seem to be it. Is anyone else having these issues? Also, and probably related, the keyframes in the BVH import with a spacing of less than a whole frame. This seems wrong, as the video was tracked on whole frames. Does the BVH file specify a frame rate or is this an A:M importing issue? Any thoughts will be appreciated. Ben
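     If you want to check where the mismatch comes from, here is a small diagnostic sketch in Python: it reads the frame count and frame time out of the BVH and compares that duration against the audio length, which you supply by hand. The file name and audio length in the example are placeholders.

         import re

         def bvh_duration(path):
             """Duration in seconds implied by the BVH's Frames and Frame Time lines."""
             with open(path) as f:
                 text = f.read()
             frames = int(re.search(r"Frames:\s*(\d+)", text).group(1))
             interval = float(re.search(r"Frame Time:\s*([\d.]+)", text).group(1))
             return frames * interval

         def report(path, audio_seconds):
             dur = bvh_duration(path)
             print("BVH duration:   %.3f s" % dur)
             print("Audio duration: %.3f s" % audio_seconds)
             print("Rescale factor: %.5f" % (audio_seconds / dur))

         # report("ernie1L.bvh", 42.5)  # placeholder file name and audio length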
  23. Here is my approach, pretty simple and all done in A:M. The BVH and the rigged character never exist in the same model. Create a capture model which consists of ONLY the bones you are driving directly with the Zign Track bvh. Load a BVH and constrain (or relate) the model's bones to the BVH. (Save this Project as a capture template.) Bake the model's action down to transforms on the bones. Export the action. This action can now be loaded onto the rigged character. It will affect the bones in the character that have identical naming to the bones in the action. It is a regular A:M action; you can key reduce it; you can blend it with other actions; you can layer actions on top, including actions being driven by constraints. I use this approach to drive muscle poses in the face rather than bones. The intermediary relationship, which lets the bone drive the pose, has an added benefit of allowing you to remap the motion to a curve, by adding keys in the relationship (a rough sketch of that remap idea is below). There's no reason you couldn't do this with bones. I think that does everything you were asking about. Ben
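     Outside of A:M, the "remap to a curve" idea amounts to a small piecewise-linear lookup: the keys you add in the relationship become the control points. Here is a rough Python sketch with invented key values, just to show the shape of it.

         def remap(value, curve):
             """curve is a sorted list of (input, output) keys; interpolate linearly between them."""
             if value <= curve[0][0]:
                 return curve[0][1]
             if value >= curve[-1][0]:
                 return curve[-1][1]
             for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
                 if x0 <= value <= x1:
                     t = (value - x0) / (x1 - x0)
                     return y0 + t * (y1 - y0)

         # Exaggerate the middle of the range and hard-limit the ends (values made up):
         mouth_curve = [(-5.0, 0.0), (0.0, 10.0), (10.0, 80.0), (15.0, 100.0)]
         print(remap(5.0, mouth_curve))  # 45.0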
  24. Paul, Maybe I'm misunderstanding, but you shouldn't need a bvh-to-action converter. Just bake out the chor action of the model that's constrained to the bvh. After baking it's just bone actions on the model. They have the same keyframe info as the bvh, but now as regular transform keys on the bones, without constraints or an action object.
  25. The Action continues to reference the bvh file; I just tested it. Delete the bvh and the action is empty. You have to bake the action onto the bones that are constrained to the bvh (or are driven by a relationship that gets data from the bvh) to take it out of the loop, I think. I have a standard file with a model of just face bones that are constrained to bvh. I load this file, import a new bvh, and then bake out the constraints. Now I save the action as simple bone movement that can be loaded onto any character that has the same set of bones.