Hash, Inc. - Animation:Master

Everything posted by ypoissant

  1. STEP 1

    Right now, I positioned it like your screen-cap example, but if I were using a different model, what would be a good landmark for positioning the base bone here? Should it be positioned at the center of gravity? Centered on the hips? The waist? The ankle? The neck? The head? Is there a rule?
  2. Oversampling is an old technique that was used in A:M v8.5 and earlier. Now you get the equivalent, with better control, from multipass.
  3. I plan to follow along too, Mark.
  4. I agree. But boy, are the camera manufacturers slow at embracing that technology. HDR digital photography can be done with existing CCDs, but a different way to handle the CCD must be coded. There is basically no need to develop new CCDs (although that would certainly help HDR photography evolve); we just need to change the way the CCDs are controlled and the software that controls them. HDR photographers shoot HDR photos using auto-bracketing, but they have to combine the single exposures into one HDR image in an outside application, which could easily be done inside the camera. I know that the OpenEXR team is actively lobbying the camera industry to adopt OpenEXR and embed the library inside cameras, but even JPEG 2000 can handle HDR data. Anyway, I've been waiting about two years now for a good HDR camera. Still waiting. Instead of going that way, they just add more and more pixels. What's the use of a 12-megapixel LDR image?
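    To make the "combine the exposures in-camera" idea concrete, here is a minimal Python sketch of a weighted exposure merge (the hat-shaped weighting and the function name are illustrative choices, not any camera's actual firmware):

    ```python
    import numpy as np

    def merge_hdr(ldr_images, exposure_times):
        """Merge bracketed LDR exposures (float arrays in [0, 1]) into one
        HDR radiance map using a simple hat-shaped pixel weighting."""
        acc = np.zeros_like(ldr_images[0], dtype=np.float64)
        weights = np.zeros_like(acc)
        for img, t in zip(ldr_images, exposure_times):
            w = 1.0 - np.abs(2.0 * img - 1.0)  # trust mid-tones, distrust clipped pixels
            acc += w * (img / t)               # divide out the exposure time
            weights += w
        return acc / np.maximum(weights, 1e-6)

    # e.g. hdr = merge_hdr([dark, mid, bright], [1/250, 1/60, 1/15])
    ```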
  5. When you render with radiosity, always add a gamma correction of 2.2 to your final render. That applies the same color correction you get on digital photos.
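    As a minimal sketch of that correction (plain Python, not A:M's internal code), assuming a linear-light render stored as floats in [0, 1]:

    ```python
    import numpy as np

    def apply_gamma(linear_rgb, gamma=2.2):
        """Encode a linear-light render for display: the same correction
        a digital camera bakes into its photos."""
        return np.clip(linear_rgb, 0.0, 1.0) ** (1.0 / gamma)

    # e.g. display = apply_gamma(render)  # render: float RGB array in [0, 1]
    ```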
  6. I don't think increasing ambiance intensity beyond 100% would give the results you expect. You could get the same result with a normal low-dynamic-range decal onto which you shine a very bright light, but that would not give the expected results either, because increasing ambiance intensity is like increasing exposure or increasing uniform light: every shade in the decal gets increased proportionately. You still have the same set of shades, just spaced further apart. That would be useful only if the decal color were a flat uniform shade, which is a very limited application. High dynamic range works for reflections because it contains all the low-dynamic-range data plus high-dynamic-range data where it counts. So you get normal colored environment reflections in the paint, but also bright white reflections where it counts, that is, where there are bright lights in the environment. That is also the reason why photography works: most of the light in our living environment is of a low-dynamic-range nature. There are only spots of higher-dynamic-range light, most notably the sky, which in a photograph gets clipped to white.
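    A small numeric illustration of that proportional-scaling point (the values are made up):

    ```python
    import numpy as np

    # Three shades sampled from an 8-bit (LDR) decal, normalized to [0, 1].
    ldr_shades = np.array([0.2, 0.5, 0.8])

    # Boosting "ambiance intensity" to 200% scales every shade proportionately,
    # then the output clips at 1.0: shades get spaced apart and then lost.
    boosted = np.clip(ldr_shades * 2.0, 0.0, 1.0)   # -> [0.4, 1.0, 1.0]

    # An HDR environment map keeps the same low-range shades PLUS genuinely
    # bright values exactly where the environment has bright lights.
    hdr_shades = np.array([0.2, 0.5, 0.8, 6.0])     # 6.0: an actual light source
    ```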
  7. Just open 4 different windows on the same model (or choreography, or action), position them in a 4-quadrant arrangement and set each of them to the view you want: front, side, top, bird's eye. You don't really need a button or a menu option to do that.
  8. This is not stupid. This is just iconic memory kicking in. Anybody without some kind of anatomical training will draw or model in a kind of iconic way. For instance, a tube is an iconic representation of the arms and legs. One of the tools to help defeat iconic memory is to develop a very good sense of observation, and to draw and model from what you really see instead of from your iconic representation of the human form. You will naturally develop your sense of observation as you get feedback on your model, self-critique what you did, figure out what could be wrong, look for ways to improve, modify or start over, etc. But this takes time. Lots of time. Studying anatomy is just a way to help accelerate this process. It hones your sense of observation by pointing at particular aspects of the human form that need special study.
  9. And I would add: but only after you know which form you need to reproduce. Take this for what you think it is worth, but IMO, you first need to learn the anatomy. I don't mean learning the bone names and muscle names, etc. I mean the human form: how it is built, why it looks the way it looks, and why it moves the way it moves. If you don't have a basic understanding of human anatomy, then figuring out how to lay out splines is going to be very difficult.
  10. In outdoor scenes, the sun light is slightly yellow and the sky is blue. This is due to light scattering in the atmosphere: blue wavelengths are scattered more than yellow and red ones. Because the blue is scattered away, sunlight loses some of its blue and becomes yellow. Painters have understood this for a long time and paint their shadows blue. But because they paint, they can cheat as they wish and actually make the shadows bluer than natural. They do that to compensate for the lost dynamic range of light reflected from paint compared to the true outdoor scene. By doing that, they can still have details in their shadows and get nice blue-yellow contrasts.

    In CG, the negative blue sun is a way to cheat and get nice blue shadows and nice shade definition in the shadows. We cast a blue skylight (either a skylight rig or a blue Ambiance Occlusion) for the shadowed parts of the image, which gives nice shades in the shadows. But when we add a yellow sun light, it becomes difficult to keep a good dynamic range with details in the shadows as well as details in the lights. Adding a sun light to a scene which is already lit by the sky makes it difficult to find a good balance: either the shadows are too dark or the lights are too bright. In addition, because we add a yellow light to a scene which already has blue lighting, the lit parts become green. So the solution is to add a sun-light clone that casts negative blue light, in effect subtracting the sky's blue from the sun-lit parts of the image, so the lit parts are lit only by the yellow sun. This way, it is easy to balance the shadow darkness against the sun light brightness, keep details in both the shadows and the lights, and keep those nice blue-yellow contrasts. At least this is what I think Rodger is doing. If not, then he will surely correct me.
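    A rough sketch of the color arithmetic behind the negative blue sun (the RGB values are illustrative only, not Rodger's actual settings):

    ```python
    import numpy as np

    sky = np.array([0.10, 0.30, 0.70])   # bluish skylight, reaches everything
    sun = np.array([0.70, 0.60, 0.05])   # yellowish sun, lit side only

    # Lit areas naively receive sky + sun: the green channel ends up highest,
    # so the blue + yellow mix drifts toward green.
    lit_naive = sky + sun                # -> [0.80, 0.90, 0.75]

    # A sun-aligned clone casting the sky's color negatively subtracts the
    # skylight from the lit side only: lit areas see pure yellow sun, while
    # shadowed areas keep their blue skylight.
    lit_corrected = sky + sun - sky      # -> [0.70, 0.60, 0.05]
    ```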
  11. What use would you have for these files, since without the original CD you can't run the software?
  12. To me, at least, doing a brick decal looks like much less time and effort than doing this alpha composition. If you already like that brick texture, then you just need to render that wall from the modeling window to a high-resolution Targa file and then apply that back as a decal. You will need to do the same thing for the sides of the columns too. Doing the bump maps should be easy once you have the color decals ready.
  13. And... Rodney, this model is from the CD and is full of materials. I ran two quick render tests. I dropped the warehouse model into the standard choreography, repositioned the camera and rendered at 640x480. The render took 5m4s. Then I replaced all materials with simple attributes, and the render took 0m20s. Phil, IMO, your learning time would be better spent if you learned how to make decals and apply them. You don't need to model the building. Just replace all the materials with decals on the existing model. This solution seems to be so much simpler than what you are currently doing.
  14. I find the trouble with deciding when "good enough" is actually good enough comes when I'm working on my own projects. For some of my own projects, the answer seems to be a lifetime, indeed. For commissioned projects, it is much easier, as it is generally related to the time and money budget, or to ownership. It is good enough when the deadline is reached, when there is no more money, when there are no more days or hours, when the customer had a very "personal" need and won't see the difference anyway, when the customer requests changes that I don't agree with, or when I don't feel ownership of the work anymore. There, the work is usually "good enough". One good in-between situation is to participate in community challenges and voluntary-based projects where there are still submission deadlines but no customers to boss you around. I'm thinking of the image contests and TWO (or any similarly structured projects): clear deadlines, better ownership and, as a bonus, feedback as opportunities for improvement.
  15. Looking very good. I like your attention to details like the wiggly reflections in the window glass.
  16. Oh yes. Materials. They not only take more time to render but are harder to antialias right. Use decals instead if you can; it could save you huge render times. Phil, I did a quick calculation, and from the numbers you posted it appears that each frame takes about 15 minutes to render. For such a simple scene, this seems exorbitant. Something is definitely going on there, so check your shadows, reflections and materials.
  17. It is still on the first page of this thread.
  18. Rodney, no, I don't use compositing. I don't work that way and I don't think that way. I tend to see my scenes in a holistic way. I don't use light lists, for the same reason. To me, a scene is an indissociable whole and everything must be well integrated. If a light is cast on a character, then that light must also cast shadows on the ground or the walls around that character. I would never think of rendering a scene without any kind of shadows either. But at the same time, I like to populate my scenes only with the objects that are seen in the scene. Anything that doesn't show (either directly, or through reflections or shadows), I don't put in. In this regard, I like to use foreground objects to help mask parts of the image so that I don't have to include unnecessary objects in the background.

    To do a good job with layers and compositing takes a lot of time, backed by vast experience. I'm always amazed when I see the number of layers that were composited in some movie sequences. I'm amazed and, at the same time, I find it so archaic. I remember a conference at Adapt 2006 where Jeremy Birn showed how he did the layers and compositing for a movie sequence he worked on. I was really amazed at how archaic it was. I kept repeating to myself, thank goodness we don't need to do that in A:M. The steps he had to take to end up with what I consider a very normal lighting effect were convoluted, to say the least. It turned out all this was necessary because the rendering software they were using on that movie was not capable of properly rendering some very simple lighting effects. It is amazing that A:M can do all that in one single render pass.

    I left that presentation with the feeling that those "professionals" don't know what they are missing. Their "layer" way of thinking has been shaped by years of working with difficult-to-use tools, customized by pipeline technicians, inside a work pipeline that forces them to think in terms of minute steps that are all combined together in the end. This is so far from A:M's working approach: one tool for one artist to make one movie.
  19. Hmmm... You must understand that what takes render time is shadows and reflections. I seriously doubt that in scenes without shadows and reflections you will have those same render-time issues. So you would most probably end up putting a lot more work into the layers while gaining no significant overall turnaround time. The corollary of the above render-time rule is that to cut down on render time, your best first bet is to visit your shadow settings. Robert Holmen asked you a question about that and you did not reply. I think you are underestimating that aspect, and you should seriously consider it first, before any other approach. I'm sure Rodney can guide you on this just as well as on alpha channels. I know you are a new user and I'm just trying to set you on the right track.
  20. Yes. I saw the topic title, but I was not sure if it came from a misguided or misinformed decision. I had the impression that the underlying goal was to get that animation rendered. But if the goal is actually to really learn about alpha channels, then I say go and explore away. Unfortunately, I don't have a project that is designed to explore alpha channels. In fact, I only use alpha channels for decals. I don't use them in an attempt to cut down render times by splitting a render into multiple layers and compositing them later. If I had to proceed that way, I would just render the background, midground and foreground separately with alpha channels turned ON so I could composite the layers later using A:M composites.

    But even this is something I don't like to do. I'm a "render it all in one stream" kind of guy. I like to see the computer do the work for me as much as possible. If I render in layers, I have to figure out a way of doing it appropriately, then I have to set up the scenes for those separate layers and launch the separate renders, then I have to collect those layers into the compositor, then I have to figure out the compositing order, operators, etc., and then I have to render the composited pictures. That represents a lot of additional work for me, with a lot more steps that can go wrong and that I might have to redo. I know other people like to render in layers and composite, and I respect that, but it is just not my bag, so this is why I advise just rendering the animation to TGA during night and off-work time. I advise that only because this is the way I do it.
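    For reference, the "over" operation a compositor performs on such layers, as a minimal Python sketch (assuming straight, non-premultiplied RGBA float arrays; this is generic Porter-Duff math, not A:M's composite feature):

    ```python
    import numpy as np

    def over(fg, bg):
        """Porter-Duff 'over': composite a foreground RGBA layer onto a
        background RGBA layer. Both are float arrays in [0, 1], shape (..., 4)."""
        fa = fg[..., 3:4]
        ba = bg[..., 3:4]
        out_a = fa + ba * (1.0 - fa)
        out_rgb = fg[..., :3] * fa + bg[..., :3] * ba * (1.0 - fa)
        # Avoid division by zero where the result is fully transparent.
        out_rgb = np.where(out_a > 0, out_rgb / np.maximum(out_a, 1e-6), 0.0)
        return np.concatenate([out_rgb, out_a], axis=-1)

    # e.g. result = over(foreground, over(midground, background))
    ```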
  21. Why don't you just render the animation straight to TGA? You've been discussing different approaches to splitting the renders into different types of layers so you can cut render time, but given that you want the cast shadows and the reflections, any splitting approach will only make the project more and more complicated. Plus, now you will have to spend a large amount of time finding the proper techniques to do that, and lots of time post-processing it all. By the time you've done that, your little animation would have been completely rendered long ago. I'm all for learning new techniques and approaches, but these seem overkill for this project. You started this topic Jan 1st. If you had rendered, let's say, 8 hours a night, your 75-hour render would be more than half completed by now. If you had rendered 15 hours a day (assuming you render during the night and while you are away at your day job), your animation would be done by now (75 hours / 15 hours a day = 5 days).
  22. In that case, it's probably worth an A:M report. The same ambiguity exists for translation. I just used rotation as an example because clockwise versus counterclockwise rotation is easy to relate to. I could just as well have used translation if I could find an easy-to-relate-to real-world translation example. I could have used the typewriter carriage's rightward translation, but how many people still remember those devices? That was an attempt to explain the ambiguity that can be solved by adding an additional dimension. It was not an attempt to explain the gimbal lock issue. I could not come up with a simple explanation of the gimbal lock issue, and I didn't think it useful to give yet another opaque mathematical explanation here.

    Exactly. But when you manually rotate an object on the screen, you are actually doing that from a 2D projection. There are some rotations that you just cannot get in one single manipulation. You need to rotate, click somewhere else on the manipulator (which, in effect, resets the whole system), rotate again, etc. It is impossible, from a 2D projection, to get into a 3D gimbal lock. The gimbal lock issue is really only an issue when the computer is doing rotation interpolations on two dependent 3D systems. One reason why gimbal lock is confusing is that it comes from using two dependent 3D systems, where one 3D system is rotating inside the other. Even using a real gimbal mechanism as an example, the issue is still quite abstract unless we actually have the mechanism in our hands or a 3D animation of it is made (but don't count on me for that; I've seen animations of the gimbal lock issue, but they were incomplete).

    Anyway, here is an illustration of a true 3D gimbal mechanism. To help follow the gimbal lock explanation, I will use the labels OG, MG and IG to reference the mechanism's own axes, and I will use X, Y and Z to reference the fixed world-space axes (note that the X, Y, Z labels in the illustration are not the ones we are used to, and I will use the ones depicted in the illustration). Rotate the mechanism 90° around its MG axis until the IG axis is aligned with the OG axis, and then rotate the mechanism 90° around its OG axis. In this configuration, there is no way we can rotate the mechanism around the global Z axis unless we break the whole device. In this particular configuration, we are left with only two effective axes of rotation. We lost one degree of freedom. This is exactly the "gimbal lock" configuration. In order to get out of that situation, a human will figure out that we need to first rotate the mechanism around its OG axis, but a simple interpolation program cannot figure that out. And guess what they do to avoid this situation in a real gimbal device? They add a 4th axis, called the redundant axis. Here is the NASA article from which those illustrations come.
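    That lost degree of freedom can be checked numerically. A minimal Python sketch (my own illustration using a Z-Y-X Euler composition, not A:M code): with the middle rotation at 90°, varying the inner angle and varying the outer angle produce the exact same rotation.

    ```python
    import numpy as np

    def rx(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def ry(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rz(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    # Euler composition R = rz(outer) @ ry(middle) @ rx(inner). With the
    # middle axis at 90 degrees, the inner and outer axes line up:
    b = np.pi / 2
    R1 = rz(0.0) @ ry(b) @ rx(0.3)    # rotate 0.3 rad on the inner axis...
    R2 = rz(-0.3) @ ry(b) @ rx(0.0)   # ...or -0.3 rad on the outer axis
    print(np.allclose(R1, R2))        # True: one degree of freedom is lost
    ```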
  23. This is not a bug. What happens is that the rotation manipulators, as well as the rotation edit boxes, all let the user do the rotation in Euler, because this is the most intuitive for direct manipulation, and since it is directly manipulated and not animated, it cannot gimbal lock. But the resulting rotation is stored as a Quaternion so that we don't get gimbal lock during animation. So even though you manipulate the X, Y, Z rotation axes, the actual stored data is X, Y, Z, W, and those are the channels you get in the timeline. Now, even though both rotation spaces have X, Y and Z channels, those channels only roughly correspond to their alter egos. That is why, if you select one of the X, Y or Z edit boxes, you don't get the corresponding X, Y or Z channel to edit. A Quaternion is just the name given to a vector that lives in 4 dimensions (X, Y, Z, W).

    Why the Quaternion system? Imagine a clock needle in a 2D animation that turns in the clockwise direction. That is what we are used to seeing. Now imagine the same clock needle turning in the counter-clockwise direction. Is that a problem with the clock? What if the clock was simply turned away, so we now see it from the back? If the clock needle is a 2D animation, then we cannot know (unless we observe that the clock numbers are reversed), but if the clock is an actual object in 3D, it is easy to see that it was turned away. This ambiguity in 2D becomes evident in 3D. The same idea exists in 3D: some rotation directions are ambiguous in 3D but become evident in 4D.

    Here is another curiosity of the 3D rotation system. Stand up with your right arm down and the palm facing left (facing your thigh). Now rotate your arm around the X axis 90°, keeping the arm and hand as rigid as possible, so that the arm extends in front of you with the palm still facing left. Now rotate the arm around the Y axis 90° so that it extends to your right with the palm facing front. And now rotate the arm around the Z axis 90° so the arm rests again at the side of the body, but note that the palm is still facing front. Now repeat the same thing, but rotating the arm around the Z axis first, then around Y and finally around X, and notice that now the palm is facing back. This shows that the order of rotation is important in 3D. Depending on the chosen axis order, even if the rotations are the same angles on each axis, the end result will be different.
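    The arm exercise can be checked numerically too. A minimal Python sketch (again my own illustration, not A:M code) applying the same three 90° rotations in two different orders:

    ```python
    import numpy as np

    def rot(axis, deg):
        """Right-handed rotation matrix about the world X, Y or Z axis."""
        a = np.radians(deg)
        c, s = np.cos(a), np.sin(a)
        return np.array({'x': [[1, 0, 0], [0, c, -s], [0, s, c]],
                         'y': [[c, 0, s], [0, 1, 0], [-s, 0, c]],
                         'z': [[c, -s, 0], [s, c, 0], [0, 0, 1]]}[axis])

    # The same three 90-degree rotations, applied in two different orders
    # (matrices apply right-to-left):
    x_then_y_then_z = rot('z', 90) @ rot('y', 90) @ rot('x', 90)
    z_then_y_then_x = rot('x', 90) @ rot('y', 90) @ rot('z', 90)

    palm = np.array([1.0, 0.0, 0.0])   # direction the palm faces
    print(x_then_y_then_z @ palm)      # ~[0, 0, -1]
    print(z_then_y_then_x @ palm)      # ~[0, 0, 1]: a different end result
    ```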
  24. Very good finds there, Rodney. I find it funny that a topic about BPM to FPS ended up with that post's in-depth coverage of music, beat, timing and animation, thanks to a little bit of noise. Noise is a very good thing for getting fresh views on something. This reminds me of work I did on recurrent neural networks several years ago. Those are neural networks that can learn grammars, that is, temporal phenomena that follow a set of temporally dependent rules. Those networks, while learning a string, tended to get caught in loops and never discover the "truth" about the string unless we injected some noise into them. Without the injection of noise, they tend to keep turning around attractors that are not fully developed, satisfying solutions. Like when we ourselves get caught in a loop in our attempts to find solutions to a problem: going outside to take a walk can freshen up ideas and help discover fresh new approaches to a solution. So noise can, indeed, lead to music. But it requires human intervention.