Hash, Inc. - Animation:Master

Everything posted by williamgaylord

  1. Looks familiar. You can also use this technique to animate plants that branch. Set up a set of control path splines that provide the branch structure of the plant as it would be fully grown. Each branch will be set up essentially like Robert described above. Basically a tube with each spline ring assigned to a separate bone and constrained to the particular control path spline in a percentage pose. In addition, scale the spline ring bones very small at the zero percent end of the pose slider. At the other end scale them to 100%. That way the branch grows in thickness as it grows in length. Once you have all the branch poses set up, animate the whole set in a new overall percentage pose. Start with the "trunk", which will define the duration of the growth animation. As the trunk passes the base of a branch, start the branch growing and complete its growth at the same time the "trunk" completes. Of course you can add layers of branching, timing each layer accordingly. Leaf clusters can be associated with their own bone. These bones do not need to be constrained to a path, but they do need to be placed where each leaf cluster will sprout. Use pose sliders to scale the leaf cluster bones. Animate the leaf clusters in the overall percentage pose as well, once each leaf cluster percentage pose is set up. The two movies attached demonstrate the idea. If you are a real masochist, you can build a whole tree like this: Tree Growth Animation BranchDemo15e.zip BranchDemo1.mov BranchDemo2.mov
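The timing rule above can be sketched in a few lines of plain Python (outside A:M entirely; the function name and formula are just an illustration, not anything A:M provides): a branch whose base sits at some fraction of the way up the trunk starts growing when the overall growth slider passes that fraction, and reaches 100% at the same moment the trunk does.

```python
def branch_growth(overall, base_pos):
    """Growth fraction (0..1) for a branch whose base sits at
    base_pos (0..1) along the trunk: zero until the trunk's growth
    passes the base, then ramping so it finishes when the trunk does."""
    if overall <= base_pos or base_pos >= 1.0:
        return 0.0
    return min(1.0, (overall - base_pos) / (1.0 - base_pos))

# Example: a branch based 40% of the way up the trunk
for t in (0.2, 0.4, 0.7, 1.0):
    print(round(branch_growth(t, 0.4), 2))   # 0.0, 0.0, 0.5, 1.0
```

The same curve works for each deeper layer of branching, with each sub-branch's slider keyed off its parent's growth instead of the trunk's.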
  2. Brilliant! Is this based on personal experience with cats, or have you been watching a lot of re-runs of The Planet's Funniest Animals? I currently have two cats who I know would react quite differently in this situation: One would scrabble frantically while looking ever so embarrassed. The other would simply go over the edge without any struggle and with a very round-eyed look of surprise on her face.
  3. I have DivX 6.8 installed and QuickTime 7.5.5, and neither recognizes the existence of the other, so I can't get the trailer to play. QuickTime will play the audio without displaying video. Windows Media Player will display the video, but will not play the audio. DivX 6.8 complains that QuickTime is missing and needs to be installed. Anybody out there have a clue what's wrong with the QuickTime/DivX link? Correction: DivX 6.8.2 plays only the audio, just like QuickTime. So in Windows Media Player I could see the trailer and in QuickTime I could hear it. Beautiful work!
  4. I updated to V15.0d and tried the render again. Identical rendering artifacts. So close!
  5. The diagrams I posted earlier are a start. If I can squeeze in some time I'll expand on these and Yves can then critique them and suggest any corrections or improvements. I work for AT&T at their Atlanta, Georgia labs, and video technology is my main specialty. I think diagrams like these can help clarify what happens where without having to get too technical. I won't make any promises for time frames, though, since AT&T's IPTV has me spread pretty thin these days. This will be a good exercise for me to learn more of the CG world, since most of my experience is in television technology. What Yves has done a marvelous job of already is showing how all this translates into the art of creating the image you are after. The last diagram I did I would certainly change to emphasize that the display industry has currently taken the approach of standardizing display response to the 2.2 gamma curve, whether or not the underlying display technology's native characteristic follows such a response curve. Look-up tables (LUTs) essentially make the display's natural response appear to fit the 2.2 gamma curve so that any gamma corrected image data matched to 2.2 will display correctly. In the future, with digital interfaces and built-in display "intelligence", it would be more appropriate to move the gamma compensation into the display itself, opening up a completely linear work flow right up to the display input. Today, however, we conform the image signal to match the display instead of the other way 'round.
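A look-up table of the kind described above can be sketched in a few lines of Python. The gammas here are illustrative assumptions (a display with a native 2.4 power-law response being conformed to the standard 2.2 curve), not any particular product's numbers: for each input code, the LUT picks the drive level whose native output equals what the 2.2 curve calls for.

```python
def build_lut(native_gamma=2.4, target_gamma=2.2, size=256):
    """1D LUT mapping input codes to drive levels so a display with
    the given native power-law response behaves like a standard
    target-gamma display.  Gamma values are illustrative only."""
    lut = []
    for code in range(size):
        x = code / (size - 1)                     # normalized input
        # native response: out = drive ** native_gamma
        # want: out = x ** target_gamma  =>  drive = x ** (target/native)
        drive = x ** (target_gamma / native_gamma)
        lut.append(round(drive * (size - 1)))
    return lut

lut = build_lut()
print(lut[0], lut[255])   # endpoints map to themselves: 0 255
```

In hardware the same table just sits between the input code and the panel drive electronics, which is why the correction is invisible to everything upstream.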
  6. BTW, I am really pleased with how fast this rendered! V15 Rocks!
  7. Next I want to tweak the lighting. May try the Skycast rig again, which worked great in earlier experiments. Might be a good time for me to actually learn lighting in A:M and get familiar with OpenEXR outputs and post processing.
  8. I'm posting the project file here so Hash can access it. Feel free to check it out, anybody, but just bear in mind that I'm reserving copyrights for a client project. Brownstones03.zip
  9. Did a lot better, but still some rendering glitches when the leaves start to grow. I'll send a copy of the project to Hash and see if they can figure out why. Much better than previous tries! So close! TreeAndBrownstones01.mov
  10. The animation is in the works! Tripped over a couple of quirks in the import to V15. Don't know if I had the choreography length set wrong in the original, or it imported with a default length, but the imported project had a choreography length of 5000+ frames, rather than the intended 300. On top of that I did not notice that when I played back the animation, it only showed frames at large steps. The last frame in the playback was the last even step and the last frame of the sequence did not show--so the pose sliders that would show 100% only showed ~94%, which corresponded to the last playback step. That threw me off and made me think the choreography relationships were messed up. Nope! Just a quirk in the playback. Once I figured that out, things went smoothly. I still have the small branches to set up, but once that is done I'll do a test render of the whole growth from seed to fully grown tree. So far so good!!
  11. I'm thinking of assembling them in an action, with the trunk as a proxy and the branches as action objects. But if multiple models can be used in a pose, that might be even better. I want to use the whole set as though it were only one model so I can use it in multiple instances (9 if I can manage it), rotating the tree a bit so it will effectively look like a different tree each time. Another option might be to render the tree in a separate chor and fake the multiple trees as flat video images (that will cast shadows) since they will be in a long, extra wide shot of a city block with nine brownstones in view. The end product is the important thing, so any effective short-cuts will be appropriate. I used similar tricks putting the "Plateau of Leng" image together for the "Lost World" contest last year, but this is more complicated being an animated sequence. Basically a brother and sister will be planting seeds along a city block and the trees will magically grow up as they work their way down the block.
  12. You did so well for the contest I was wondering, "Is that little robot just composited into a photo?". Amazing work there, Stian!
  13. I suppose since video cameras apply gamma correction, video file input gets a similar treatment? (Knowing how sharp you guys are it's a pretty safe bet the answer is yes, without even having to ask.)
  14. The tree is actually a set of models: one "trunk" model and a set of "branch" models of various sizes. The branch models are constrained to the spline that guides the trunk growth when the tree is assembled in the choreography. I did it this way because the overall workflow was more efficient. Tree parts are simple, but there are so many of them! Coordinating the animation of all these parts was the real time killer. Easier to set up a "branch" model with its own action and then replicate it. I change the shape of each branch by altering the splines that define its shape. If I could do this iteratively to build a model I could do this in a single model. Otherwise a single model would require me to animate every single component--many hundreds of them. The model's animation is guided by a set of splines that define the final form of the tree. The basic building block is a spline path and a simple tube with a bone assigned to each spline ring of the tube. The bones of the tube are constrained to follow the path. I set up a set of sliders to control the position and scale of each bone/ring. An overall slider controls the whole set. I build a branch by constraining the bone of each individual spline ring to a point on the spline path that supports it in the tree. As the point in the "growth" of the supporting branch reaches the point where a new branch is located, the new branch's visibility is turned on and it starts growing. This all mimics the fact that tree growth happens at the end of the branches--once the branch form behind the growth at the tip is established, it keeps its shape and just thickens over time as growth rings are added. Simple scaling of parts does not work, since that is not how a tree actually grows. (That being said, I actually cheat at the last step by scaling the leaf clusters--but in that case it looks natural enough to work.
Another compromise is that the tube stretches along the spline path, so the bark texture flows with it--in a real tree the bark stays put and just thickens.) So, if I could build a "composite" model from a set of smaller models, with all the constraints and controls remaining intact, I could build it as a single model instead of having to assemble separate branch models to make a tree in a choreography. It would be great if I could export what I animate in the choreography as sort of a "model group" with an overall animation control, so that, even though it is not a model, you can import it back into a different choreography as though it were a model. That might be more practical than an "iterative" model feature, since it would group a set of actions with the set of models. Yes, I need to work on that a bit to make it more natural looking. In reality, like branches, roots grow from the tips and thicken. In this case I should probably animate the shape of the spline rings at the base, not just scale them.
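For what it's worth, the path-constraint idea above (a bone pinned to a percentage along a supporting spline) can be sketched outside A:M by arc-length interpolation along a polyline approximation of the spline. This is a rough stand-in for illustration, not A:M's actual constraint math:

```python
import math

def point_on_path(points, t):
    """Point at fraction t (0..1) of the total arc length of a
    polyline approximating the control spline -- a stand-in for a
    path constraint's percentage setting."""
    # cumulative arc length at each vertex
    d = [0.0]
    for a, b in zip(points, points[1:]):
        d.append(d[-1] + math.dist(a, b))
    target = t * d[-1]
    for i in range(1, len(d)):
        if target <= d[i]:
            seg = (target - d[i - 1]) / (d[i] - d[i - 1])
            a, b = points[i - 1], points[i]
            return tuple(a[k] + seg * (b[k] - a[k]) for k in range(3))
    return points[-1]

path = [(0, 0, 0), (0, 1, 0), (0, 2, 1)]   # a toy branch path
print(point_on_path(path, 0.5))            # halfway along the arc
```

Animating the bone then amounts to sweeping t from the branch's base percentage up to 100% as the growth slider advances.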
  15. Now that I know V15 handles this very complex "model" just fine, I'll set up the keyframes and do a render of the thing growing today and give it a whirl. Then, if that goes OK, I'll put the brownstones back in and put in a better lighting setup. Stay tuned...
  16. Not sure where the old thread is in the archives, if it's there at all. This is a render in V15 of an older project where I was trying to animate this tree growing from a sprout to a full grown tree. This is the full grown tree. This tree is actually a large set of smaller "branch" models assembled in the choreography. If I could assemble them into a single composite model that would be preferable, but I haven't figured out how to do that yet. I wish I could iteratively build the whole tree from a small number of tweakable branch models. Any ideas? Here is an early test animation with a scaled down version: Tree growth animation test. And a closeup: Tree growth closeup. Image attached is the project render in V15. Wahoo! It works without rendering artifacts!
  17. I for one have a boatload of learning yet to do on this stuff. (Think "Titanic" when reading the word "boatload".) I think it would be great to have a new special topic on lighting/gamma/tone correction/color correction/HDR etc. (What category might tie these "image fidelity" related topics together? Rendering?) A good start might be pointers (in one easy to find place like a special topic) to some of the best tutorials we already have, including Yves' excellent collection of tutorials on these topics. Although my background is digital signal processing and video/audio compression, I'm a babe in the woods still when it comes to tone correction, color correction, CG rendering, lighting, image file formats, etc., that are the bread and butter of creating CG material. It might be handy to have an overview of what processes/data should be kept linear (e.g., scaled instead of compressed) and what processes/data benefit from non-linear processing (e.g., to compensate for non-linearities in display device characteristics) and the proper processing order for the best results. A wall chart might be a good format for such an overview. A trivial project--NOT! The Hash Forums in general have been a fabulous resource for learning a lot of this stuff and the generosity of many of the more experienced and knowledgeable members is something to be proud of, folks! Thanks a million!
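One tiny example of that linear-vs-non-linear point, assuming a simple 2.2 gamma encoding (illustrative, not any particular format's exact transfer function): averaging two pixels has to happen in linear light, or the result comes out too dark.

```python
# Averaging a black pixel and a white pixel: correctly in linear
# light vs. naively on gamma-encoded (2.2) code values.
GAMMA = 2.2

def encode(linear):
    """Linear light (0..1) -> gamma-encoded code value."""
    return linear ** (1.0 / GAMMA)

black, white = 0.0, 1.0            # linear-light values

# Correct: average in linear light, then encode for the display
correct = encode((black + white) / 2)

# Naive: average the already-encoded code values directly
naive = (encode(black) + encode(white)) / 2

print(round(correct, 3))   # ~0.73 -- the right mid-gray code value
print(round(naive, 3))     # 0.5   -- displays noticeably too dark
```

The same trap applies to blurs, antialiasing, and compositing, which is a big part of why a linear workflow matters.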
  18. Gamma correction is actually not that difficult to understand. It is becoming less important as more and more displays have the gamma correction built right in--which is really the best way to do it. Almost all types of displays have a non-linear response. They are less responsive at low values and more responsive at high values. Most have a response curve that follows a "power-law" curve that looks pretty much like a loose string connected at a point on the floor and a point on the wall. (The "gamma" is the exponent applied to the original value--the "power" it is raised to. A gamma of 2.0 is the "square" of the value.) The curve is roundest near the floor (where the weight of the string has less effect) and straighter near the top (where the weight of the string pulls it tighter). The overall result is that--relative to mid-range values--dark values become darker and lighter values become brighter and more saturated compared to the values captured by the camera. Gamma correction essentially "pulls the string straight" by applying the inverse of the gamma. The problem with gamma correction is that it depends on the particular display. In the television world, CRT displays were standardized, so the gamma was predictable. The television signal was altered to correct for gamma. In the world of PCs, and the new world of digital television, there is a plethora of displays with different gammas, so there is no one gamma that fits all. You basically have to match the gamma to each display. However, newer displays with DVI or HDMI digital inputs are building the gamma correction into the display itself. This eliminates the need for gamma correction to be applied to the video signal or still image data and is the smart way to handle gamma. Even with these displays, as the display ages, corrections have to be made, but all this can be built into the display itself, so the user can tweak the display using a built-in test pattern.
If you have a newer display card that can apply gamma correction to match your display (assuming the display doesn't already apply its own gamma correction) you can program the card to apply the correction. This case also essentially removes the need to apply gamma correction to the video or picture data. In digital video and images, both the non-linear characteristic of human vision and the need for gamma correction affect the visibility of quantization steps, making artifacts like "banding" more visible at some values than at others. As it turns out, human vision and gamma correction have roughly an inverse relationship, so video or images that are encoded in a gamma corrected non-linear code are less susceptible to banding. Higher bit depths are rapidly making most of these issues a thing of the past. The need for gamma correction outside of the display, to correct for the display's effects, has complicated the work-flow of digital image production--something Yves' tutorial explains quite well. As displays (or the display adapters) take over the job of gamma correction, the entire work-flow from the output of the camera, to the input of the display, becomes linear and the work-flow becomes far more consistent and straightforward. We are on the cusp of this era of digital production. So, that's another long answer. The short answer is that a gamma of 2.2 is usually a good rough guess for displays + adapters which do not provide the gamma correction. So your video/image gamma correction would have a value of about .45 (the inverse of 2.2). Macs typically have a gamma of 1.8 (still pretty close to 2) so your video/image gamma correction would have a value of about .55 (the inverse of 1.8). If you correct it for a Mac and send it to someone with a PC, the darks will look slightly darker and the lights will look slightly brighter and more saturated, but should still look reasonably good.
If you send it to someone with a gamma corrected display, it will look pretty washed out. So Nancy was right...it all depends!
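Those rules of thumb are easy to show in plain Python (the 2.2 and 1.8 values are the same rough guesses as above, not calibrated numbers):

```python
def gamma_correct(value, display_gamma):
    """Encode a linear value (0..1) so it lands correctly on a
    power-law display with the given gamma: exponent = 1 / gamma."""
    return value ** (1.0 / display_gamma)

linear = 0.5                        # mid-gray straight from the renderer
pc  = gamma_correct(linear, 2.2)    # ~0.73  (exponent ~0.45)
mac = gamma_correct(linear, 1.8)    # ~0.68  (exponent ~0.55)

# The display's own gamma undoes the correction on the way out:
print(round(pc ** 2.2, 3))   # 0.5 -- the round trip restores the value
```

Feeding the PC-corrected value to a Mac-gamma display (or vice versa) is exactly the small dark/light mismatch described above.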
  19. When you get to the point of making ears you may find this helpful: Ear Mesh You might find the whole thread interesting. If you want ready made ears, even just as examples you can use these: Free Ears
  20. I'd recommend you make a sphere the size of the eyeball as a guide. Then conform the eyelid to the curvature of the eyeball. Notice that the outer corners of the eyelids pull into the eye socket a bit more, especially on the upper side where the flesh from the crease up to the brow overhangs. The tear duct lets the inner corner pull out farther from the eyeball. You can do that by pulling the splines in closer to the curvature of the eyeball at the outer corner. That might fix most of the problem. BTW, this is some nice splinage!
  21. I'm working on making the feet look more natural...something more like this even though there aren't supposed to be any bones. I'm wondering if claws would help. I think I'll build more structure into the torso, not so smoothly rounded. Textures will add a lot to the "menace" factor. Maybe unruly blond hair and red overalls and a black and blue striped T-shirt?
  22. The menace factor is slowly increasing. New "arms".
  23. Made a better "wing"--beefier and a bit more organic in form. I started with Lovecraft's description, including the relative proportions. Over time I'll take some artistic liberties and morph it into something more organic and menacing. For instance, I think the "arms" should be longer and a bit thicker--long enough to reach the "mouths" and touch the "toes"....long enough to grab hapless victims and rip them apart, and eat them, etc. The wings look kind of useless, but imagine your surprise as one opens its wings, which start glowing with an electric corona discharge, with intense arcing to the arms which bend down. It then takes off flying like a rocket, propelled by five plasma jets created by the arms and wings. I have a feeling this thing is going to be a real pain to rig...I mean "interesting challenge" to rig. It will be interesting to animate. At first it might not look all that creepy, but once I get all the parts moving, and figure out how this thing will walk, I have a feeling it will be plenty creepy to watch. Imagine trying to sneak up on this guy.
  24. Glad you found it interesting. Didn't mean to clutter up your "thread" with large files. Today I discovered a square-rigged ship type I'd never heard of before. Ever heard of a "jack-ass barque"? It's sort of a ship that can't make up its mind whether to be a barque or a barquentine.