Hash, Inc. - Animation:Master

Everything posted by Luuk Steitner

  1. I have one. I haven't used it much yet but as far as I can tell it works fine. I don't think I'll use it to model though. Maybe I should try it...
  2. That's pretty cool. I once saw a very good short movie that was created using only fonts. I did a quick search for it but I can't find it right now.
  3. That looks very good. I assume you are going for a photorealistic look? I think if the balls were a little less bright it would look almost real to me. Great job!
  4. Well, the one on the right is very smooth. With 10 passes you'll lose some motion; personally I would never use values above 5 for the head (if a normal-quality video is used). With 10 passes bigger movements will still be visible, but in this video there are no big movements. I hope you'll enjoy the ease of direct pose control.
  5. Nice Paul. What smoothing settings have you used? I think the head bone can use a bit more smoothing.
  6. Well, if you want us to guess: the most common way to do this is using booleans. Another way would be to just model the hole in. And a third way would be to use a cookie-cut decal. Am I close?
  7. That looks pretty good; you might want to use the smooth function to get rid of the jitter. Why don't you try using a pose for the eyelids? If you use the eyebrow B features for the eyelids you could exaggerate them to ~150% and drive blink poses that close at -100% (there's a small sketch of this mapping after this list). I think that's easier than trying to find a correct size and place for the eyelid bones, plus you will have better control over the motion curve. Keep going, nice character btw.
  8. That's a good start. Maybe you could alter the wing flap action a bit. When the bird moves his wings down, his body rises too much. A wing is not 100% efficient, so if you move it downwards there is always some loss. I'm looking forward to seeing more.
  9. Hey Ben, you are right. When I noticed the frown wasn't exported I thought it was just because there was no frown detected and I needed to make it more sensitive, but I was wrong... I forgot to set one value in the code and because of that the frown value was always zero, so it was easily fixed. I tested it again and it appeared I even had to reduce the amount of frown.
  10. The poses are computed using a fixed scale. Because of this you might want to adjust the poses the first time you try them, but you can be sure they will be the same the next time (see the fixed-scale sketch after this list). If it wasn't fixed, a character could end up shouting while he speaks normally, or vice versa. It actually works the same as the BVH or bone action; those are also computed on a fixed scale. If you like you can add exaggeration. Take a look at the spline for the eyebrow in the action itself; in my tests they go all the way up and down. Or maybe it's the neutral frame you have set in Zign Track?
  11. Not bad. I wonder what causes the jitter on the hands and eyes. I can't imagine that it's caused by the action; that would be strange. I wouldn't worry too much about using the smooth filter: if you use 1 or 2 passes for the head only, it removes the jitter but leaves the total motion almost unaffected. The smoothing routine is designed to reduce small peaks (there's a sketch of the idea after this list).
  12. Can you tell me where it didn't work for you, Ken? I'm pretty close to the material, so I don't always see what others do. I'm mostly seeing the need for more brow expression. What I notice is that when the mouth opens, the upper lip moves as far up as the lower lip moves down. Normally the lower lip would go down further than the upper lip. That can easily be corrected by adjusting the mouth-open pose. Can you tell how your lip poses work? For the A:M pose export that I have written for Zign Track, the poses are based on as many properties as possible to detect the shape of the lips. I wonder how you have it set up for your poses, because you create them in A:M. Maybe you should give the Zign Track pose control a try? You can add a few poses that drive the existing poses. I was done adjusting Squetchy Sam in 10 minutes.
  13. As I was programming the new A:M Action export function for Zign Track, I needed to do a good test myself. I chose a song from one of the best Dutch bands and lip-synced to it. OK, it turned out I'm not very good at this because I forgot my lines a few times, but for the rest it was pretty good. I exported the Action with only poses for the facial expressions. I have used Squetchy Sam for this example. I added some poses to Sam so they matched the poses in Zign Track and applied the action. I needed one little tweak on the lower lip so the mouth closed better; that is because of the neutral frame I selected in the video. Besides that I haven't done any tweaking at all, and it came out pretty nice. I added some body movements and eye blinks. I still need to animate the hands and dress him up, but I don't have the time for that now (that's why I'm posting a WIP). Here is the video (I also need to change the offset of the soundtrack a few frames, I think...): RenLennyShadedSmall.mov For those who want to see how I have done it, here's the project file: V14_Sam_PosesTest.zip BTW, how does the song sound to someone who does not understand the language? I know if you understand it you'll find it a beautiful song.
  14. You only need the 'translate to' constraint for the hips, because the hips will translate the whole body. For each other bone you use an 'orient like' constraint. The compensate button is the one I pointed at with the red arrow. Click it every time before you apply the 'orient like' constraint.
  15. That looks nice Ben, did you export as pose drivers directly from Zign Track?
  16. Can you tell at what point you have a problem? Are you able to import the BVH in an action? Do you know how to create constraints? If you understand those two things it can't be hard to get it done.
  17. Are you sure all mouth bones are actually moving? I think you might have given them the wrong names, so they don't move. I only see the jaw and lower lip moving. Maybe I'm wrong, but that's what it looks like.
  18. Can you show a picture of your face rig? And can you check what happens at the end of the timeline for the mouth bones in the action? I noticed in one of my tests that one bone had a strange ending. I'm not sure what caused it, but I'll try to find out what happens.
  19. There was a good tutorial about this on zandoria.com, but the site is offline at the moment. Here is what was still in Google's cache:
  20. Mike, you were right! This video was made with the A:M action bone control. I was just doing a lot of tests to improve the output accuracy (for all formats) and I used A:M actions with bone control because that's the fastest way to test it. For some reason the mouth shape wouldn't do what it's supposed to do. It took me a while before it occurred to me what mistake I had made: I used the same method as I did for the BVH for controlling the bones. I forgot that the BVH axis orientations are different and I didn't adjust it for the mouth bones. So, instead of rotating the Y axis I was rotating the Z axis... doh... (see the axis-remapping sketch after this list). Sorry Paul, I'll finish some more things and then I'll send you the new test version. You'll find the output results are much better this time.
  21. That is actually a good idea. The reason I made the smile reduce the wide pose is that it is harder to 'model' the smile in the pose when the mouth isn't wide. But that could be solved by applying the wide pose while creating the smile, and removing the link to the 'wide' pose when the smile is done. Is that how you do it?
  22. Maybe it's a good idea to show what I'm working on for version 1.2. The new A:M action feature works quite well, as far as I have tested. Here is a picture so you can see what the possibilities for the poses are. The eyebrow poses involve up and down movement; the others speak for themselves, I guess. I did some tests with these poses, and it looked pretty good. One thing I'm not sure about yet is whether the mouth poses work 100% like they should. I need to do a few more tests to see if the result is good on all movements, and maybe I have to adjust the scaling of those movements. I hope Paul and Ben have some time to test the mouth poses; if not, I'll do an extensive test later this week.

      I'll explain what the mouth poses do: the mouth wide/purse pose should contain exactly what its name says. This is also true for the mouth smile/frown pose. As you know, the mouth also gets wider on a smile, so when Zign Track detects a wide smile it reduces the mouth wide pose, so they don't add (see the wide/smile sketch after this list). The mouth open/close and shift left/right should be obvious. Using these poses is a little bit less accurate than the original tracked footage, but the advantage is that the shapes it produces always suit the face (if the modeler made nice poses, of course). Tell me what you guys think about this. I want to have it as good as possible before I release it. Maybe I'll change some things if that can give better results.

      Other than this, I'm working on some improvements to the output for all formats. On the left you can see that I have added an extra "neutral frame" spin box. This can be used to specify on which frame the face is most neutral. I did this because Zign Track was giving some strange offsets if the face in the first frame was not neutral. I'm also changing some calculations to improve the accuracy while the face is moving. This should solve the lip issues that sometimes occur. When this is done there should be very little manual tweaking required for the animation.
  23. I think that's a really good image. There is a pattern visible, but if you don't tell anyone, no one will notice. I haven't used hair much, so I don't know all the tricks to speed it up, but do you really need that transparency? Transparency can slow down the render significantly. Maybe you'll get about the same result if you just make the hair thinner at the tips.
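
A small sketch of the eyelid idea from post 7 above: drive the blink pose from a tracked eyebrow channel, exaggerated to ~150%, so the pose reaches its fully-closed end well before the raw value does. This is only an illustration in Python; the function name, value ranges and clamping are my assumptions, not Zign Track's or A:M's actual behaviour.

    # Hypothetical sketch: an eyebrow channel (-100..100) driving an eyelid pose.
    def eyelid_pose(eyebrow_value, exaggeration=1.5):
        """Return a pose percentage in -100..100, where -100 means the
        blink pose is fully applied (eyelid closed)."""
        value = eyebrow_value * exaggeration
        # Clamp so the lid never over-closes or over-opens; thanks to the
        # exaggeration a partial drop of the raw channel already closes it.
        return max(-100.0, min(100.0, value))

    print(eyelid_pose(-70.0))  # -100.0 -> eyelid fully closed
    print(eyelid_pose(20.0))   #   30.0 -> eyelid open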
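On the fixed-scale point from post 10: a minimal sketch (my own illustration, not Zign Track's code) of what computing a pose on a fixed scale means. The pose value is the deviation from the neutral frame divided by a fixed reference distance, instead of being rescaled to each take's own minimum and maximum, so a quiet take stays quiet and a shouted take stays loud across sessions. All names here are hypothetical.

    # Hypothetical sketch of a fixed-scale pose value.
    def pose_value(measured, neutral, reference_scale):
        # measured:        tracked feature distance in the current frame
        # neutral:         the same distance in the chosen neutral frame
        # reference_scale: fixed distance that counts as 100%, identical for
        #                  every take, so results are repeatable
        return (measured - neutral) / reference_scale * 100.0

    print(pose_value(measured=64.0, neutral=52.0, reference_scale=40.0))  # 30.0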
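A sketch of the small-peak smoothing idea mentioned in posts 4 and 11. The real routine in Zign Track isn't published, so this only illustrates the behaviour described: each pass pulls a frame halfway toward the average of its neighbours, which halves a one-frame spike per pass but leaves a steady ramp (a real movement) essentially unchanged. That is why 1 or 2 passes remove jitter without eating the motion.

    # Assumed illustration of multi-pass "small peak" smoothing.
    def smooth(samples, passes=2):
        data = list(samples)
        for _ in range(passes):
            out = data[:]
            for i in range(1, len(data) - 1):
                neighbour_avg = (data[i - 1] + data[i + 1]) / 2.0
                # A single-frame spike loses half its height per pass; a linear
                # ramp is untouched because it already equals the neighbour average.
                out[i] = (data[i] + neighbour_avg) / 2.0
            data = out
        return data

    jittery = [0, 0, 4, 0, 0, 10, 20, 30, 40, 40]
    print(smooth(jittery, passes=2))  # the spike shrinks, the ramp survives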
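On the axis mix-up from post 20: BVH and A:M bones don't share the same axis orientation, so an exporter has to remap which axis a rotation is written to for each target. The sketch below is only my illustration of that idea; the actual per-bone mapping in Zign Track is not shown here and the example maps are made up.

    # Assumed illustration of remapping rotation axes between conventions.
    def remap_rotation(rotation, axis_map):
        # rotation: {'x': deg, 'y': deg, 'z': deg} in the source convention
        # axis_map: source axis -> target axis for the destination format
        remapped = {'x': 0.0, 'y': 0.0, 'z': 0.0}
        for src_axis, angle in rotation.items():
            remapped[axis_map[src_axis]] = angle
        return remapped

    # With the BVH-style map the jaw rotation lands on Z instead of Y;
    # the corrected map (hypothetical values) puts it back on Y.
    bvh_style_map = {'x': 'x', 'y': 'z', 'z': 'y'}   # reused by mistake
    corrected_map = {'x': 'x', 'y': 'y', 'z': 'z'}   # what the mouth bones needed
    print(remap_rotation({'y': 12.5}, bvh_style_map))  # rotation ends up on 'z'
    print(remap_rotation({'y': 12.5}, corrected_map))  # rotation ends up on 'y'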
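Finally, the wide/smile interaction from post 22: since a smile already widens the mouth, the exported mouth-wide value is reduced by the detected smile so the two poses don't add up when both are applied. The sketch below is only a guess at the shape of that correction; the actual factor Zign Track uses isn't stated, so the 0.6 here is made up.

    # Assumed illustration of reducing the "wide" pose by the detected smile.
    def mouth_pose_values(raw_wide, raw_smile, smile_widens=0.6):
        # raw_wide, raw_smile: detected values in percent (0..100)
        # smile_widens: assumed fraction of widening the smile pose already
        #               produces on its own
        smile = raw_smile
        wide = max(0.0, raw_wide - smile_widens * smile)
        return wide, smile

    print(mouth_pose_values(raw_wide=80.0, raw_smile=50.0))  # (50.0, 50.0)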