Hash, Inc. - Animation:Master

ypoissant

Hash Fellow
  • Posts: 2,579
  • Joined
  • Last visited
  • Days Won: 1

Everything posted by ypoissant

  1. That is a nice piece you have there. I like the character design, and the girl's face animation is very well done. Bravo.
  2. These are the last Photon Room renders I will post here. For those interested, there is a new forum about Radiosity and other lighting techniques. It is a place for discussion about those techniques, their problems and solutions, etc. I will also host a few tutorials about those subjects; one is already started. The following renders are repeats of previous renders, this time with 35 photon bounces. Render times are around 5h30m per render.
  3. Robert, reducing the Photon Mapping render time is still possible but will require significant programming. If more users have a real interest in radiosity, it might become worth it. That said, it is already possible to reduce render times through optimal Photon Mapping property settings. Those are issues I will cover in my Photon Mapping tutorials. I think it is currently more interesting to explore those avenues than to add more complex code. As for baking the illumination, this again is not impossible to do but would require more code. The map produced by Photon Mapping is not an image; it is a 3D data structure, and baking that data onto a complex 3D scene is not trivial.
  4. You have a nice moody scene there. Very well done. It is interesting what well-thought-out lighting can do for a scene. One technical bit: if you do a 25-pass render with a 2-ray cast light, you get a 50-ray soft shadow. That should be more than sufficient for this scene, since the soft shadows are far away from the viewer. This is definitely a scene that could work well with Photon Mapping, BTW. But I think you already know that.
  5. Ah! The joy of doing commissioned work. I think we can all look past the particularities of what was imposed on you and still appreciate the mastery you've put into the model. Good work.
  6. Here is the Cornell Box project I used when I developed the Photon Mapping render engine. I cleaned out the unnecessary objects and maps and already set some good working Radiosity property values. This is a render of the project; see the attached file for the project itself. Have fun. Cornell_Demo.prj
  7. Mr Innovation is looking really, really good. I agree with Mike about the gluteus, though. They immediately popped out as too small when I saw the rotation; he has to be turned almost 45° before we see them. And I noticed the vertical edge behind the thigh, knee and calves too. I liked the one with the separate top better, although I would make the top just a tiny bit different from the rest, just so the top shows but you keep the surface quality. The surface looks like dry latex to me, not so much like spandex. I think double specular angles would give the spandex effect better, kind of like the Incredible suit. At least it wouldn't look so much like latex. But those are nit-picks. I really like the model.
  8. Very good work on this chain. I downloaded the project and I'm curious to see how you constrained and actioned the model. The idea of putting the chain in front of this curved graph paper makes a very slick image; simple but effective design. I observed a tiny hesitation where the chain has looped one full turn. To avoid this, skip the last frame of your loop in the chor. Also, the chrome effect is nice, but the background is a little too uniformly blue, which produces flat reflections. I would suggest you cover your scene with a 100% ambiance hemisphere (there is also something you need to do to diffuse falloff, but I don't remember what right now) onto which you set a gradient material that simulates a darker blue at the zenith shading to a paler blue (almost white) at the horizon. This should produce nicer reflections. A darker plane for the ground would also help.
  9. I think the shafts were made from my beveled primitives, so I think I can answer. In order to have a valid patch, it is not possible to use one single spline to close the cap. I could have used two 3CP splines and had a circular cap, but I used a 4CP spline plus a 2CP spline. And because the 2CP spline couldn't be rounded, I decided to peak the other spline too.
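      To illustrate why that map can't simply be saved as an image: in the standard (Jensen-style) formulation, each stored photon is a small record sitting at a 3D position, and the whole map is a spatial structure built over those records. A minimal Python sketch of what such a record holds (an illustration of the general technique, not A:M's internal structure):

         from dataclasses import dataclass

         @dataclass
         class Photon:
             # One stored photon hit: where it landed, where it came from,
             # and how much RGB energy it carries.
             position: tuple   # (x, y, z) hit point on a surface
             direction: tuple  # incoming direction at the hit
             power: tuple      # (r, g, b) energy carried by the photon

         # The "map" is a collection of these records, usually organized in a
         # kd-tree so the nearest photons around any surface point can be
         # gathered quickly. Baking it would mean re-projecting these 3D
         # records onto every surface, which is why it is not a simple export.
         photon_map = []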
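      To spell out that arithmetic (nothing more than passes times rays per light):

         passes = 25          # multipass render setting
         rays_per_light = 2   # "ray cast" setting on the light
         print(passes * rays_per_light)  # 50 effective soft-shadow samples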
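      If you prefer to paint that gradient as an image map rather than a material, here is a small plain-Python sketch that writes a vertical zenith-to-horizon ramp as a PPM file (the color values are only a guess at "darker blue to almost white"; adjust to taste):

         # Write a narrow vertical gradient (dark blue at top, near-white at
         # bottom) that can be tiled/mapped onto a sky hemisphere.
         width, height = 4, 256
         zenith = (40, 80, 180)     # darker blue (illustrative values)
         horizon = (235, 240, 255)  # almost white (illustrative values)

         with open("sky_gradient.ppm", "w") as f:
             f.write(f"P3\n{width} {height}\n255\n")
             for y in range(height):
                 t = y / (height - 1)  # 0 at the zenith row, 1 at the horizon row
                 r, g, b = (round(z + t * (h - z)) for z, h in zip(zenith, horizon))
                 f.write((f"{r} {g} {b} " * width).rstrip() + "\n")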
  10. MROUND will round to the nearest 50. So instead of getting, let's say, 763 with ROUND, you would get 750 with MROUND. You may change MROUND to ROUND and further round it yourself, or enter the given number as is.
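      The same round-to-a-multiple behaviour is easy to reproduce outside the spreadsheet, for example in Python:

         def mround(value, multiple):
             # What Excel's MROUND does: round to the nearest multiple.
             return multiple * round(value / multiple)

         print(mround(763, 50))  # 750 -- the MROUND result
         print(round(763))       # 763 -- plain ROUND keeps the exact figure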
  11. Check your link address. It doesn't work for me. I get a 404 error.
  12. I'll try to find the Cornell Box project I set up when I developed the Photon Mapping engine and post it here. In the meantime, the most probable reason you would get grainy results is that your sampling area is too small. You can use the Excel file I attached here to help determine a good starting point for the sampling area size. PhotonsSetup.zip
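      I won't reproduce the spreadsheet here, but the general idea (a rule-of-thumb assumption on my part, not the exact formula in the file) is that the sampling area must be large enough to catch a reasonable number of photons given how thinly they are spread over the scene:

         def suggested_sample_area(scene_surface_area, photons_cast, photon_samples):
             # Rough heuristic: the average photon density is about
             # photons_cast / scene_surface_area, so gathering about
             # photon_samples photons needs an area of at least
             # photon_samples / density.
             density = photons_cast / scene_surface_area
             return photon_samples / density

         # Example: ~5,000,000 square units of surface, 1,000,000 photons cast
         # and 200 photon samples suggests a sampling area around 1,000.
         print(suggested_sample_area(5_000_000, 1_000_000, 200))  # 1000.0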
  13. Here are some Photon Mapping night lighting renders. Render time is around 5h15m each. One klieg light for the moon, no softness, with the angle tightly fitting the window frame. Then another klieg light, same position and same angle as the moon light but pointing in the exact opposite direction at a 100% reflective patch. The light pointing at a mirror and then back into the room is there to allow photons to be stored at the first hit. Normally, the first hit is not stored because the first-hit contribution is computed with ray-tracing, but when the light is outside the room, the scene is better illuminated if the first hit is stored. The same light setup is used twice, one for each window. The only light in this scene is the night light behind the side desk.
  14. Very brilliant use of ... ... ingenuity.
  15. You amaze me. You have a very nice balance of blue sky and yellow sun there. It really gives a nice outdoor impression.
  16. Paint programs assume non-premultiplied data because that is the way they fundamentally work. When you erase on a layer, you are actually painting in the alpha channel of the layer; that is, the transparency channel is an alpha channel. The same thing happens when you add a layer mask: you then make an explicit alpha channel. And yet again the same thing when you use a soft brush: the brush is actually solid, but its alpha channel is a radial gradient. For the layer-merge operations to produce the final image, all the image data needs to be non-premultiplied. Paint applications need to fundamentally work this way because they allow you to do the painting; they are not designed only for compositing. It is true, though, that premultiplying an image reduces the color resolution. However, the places where the resolution is reduced the most are precisely where they contribute the least to the final composition. Ultimately, a pixel that contributes nothing is black; that is, its color resolution is zero bits. This is why it is possible to un-premultiply the image data, even though it has lost color resolution, and still have very valid image data for compositing.
  17. For an explanation of what premultiply means, I invite you to check my tutorial on alpha channels. What is demonstrated in Vern's clips is indeed related to premultiplying, but for a premultiplied image the background would need to be black, not 50% grey. In A:M you would get black fringes anyway; there is no way out of the fringes. The problem is not with A:M but with Photoshop, which does not allow you to specify whether the image you save is premultiplied or not. The premultiply flag in the targa file saved by Photoshop is always set to non-premultiplied. So, because of that, you need to build your Photoshop image in a non-premultiplied way, which is what Vern's clips show. About the ghosting: ah, the joy of making graphics for monitor viewing. On my PC, the background actually appears turquoise, but I tend to call any mix of blue and yellow green. Still, opposing magenta and blue is difficult for the eyes, especially for color-blind people. But even for people with normal vision, red and blue do produce ghosting and even flashes. And then there is the very technical problem of trying to make the RGB channels behave in situations where the two colors mainly use very different channels. The ghosting I'm talking about is this: on the nofringies, I get a darker edge on the right side of the border where the magenta meets the background, and lighter edges on the left side. I invite you to take a look at this earlier thread where this opposing-channels issue was discussed at length.
  18. I think it drives the point home very well. I would just use other colors; magenta and green produce ghosting and kind of defeat the demonstration.
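      To make the premultiply idea concrete, here is a tiny sketch of both directions (plain Python with colors in the 0.0-1.0 range; an illustration of the math, not any application's actual code):

         def premultiply(r, g, b, a):
             # Scale the color by its coverage: fully transparent pixels go to
             # black, which is where the color resolution loss happens.
             return (r * a, g * a, b * a, a)

         def unpremultiply(r, g, b, a):
             # Recover the straight (non-premultiplied) color. Where a is 0 the
             # pixel contributed nothing anyway, so black is as valid as any.
             if a == 0.0:
                 return (0.0, 0.0, 0.0, 0.0)
             return (r / a, g / a, b / a, a)

         print(premultiply(1.0, 0.5, 0.25, 0.5))      # (0.5, 0.25, 0.125, 0.5)
         print(unpremultiply(0.5, 0.25, 0.125, 0.5))  # back to (1.0, 0.5, 0.25, 0.5)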
  19. Hey! That's a very good 3D model of Gaston. Nice work.
  20. The four things that have a real impact on render time are: 1) final render size, 2) number of lights in the scene, 3) number of rays cast per light, and 4) number of passes. The number of objects or maps in a scene has very little influence on render time compared to those four parameters. Materials may have a perceivable influence.
  21. The funny thing about the arms is that before I read your comment about them, I thought there was something strange about them. I don't have time to get my anatomy book out and do an analysis, but going by feel, I think they are a tad long and a tad low. They feel like the humerus is not connected to the collarbone at the shoulder joint. Apart from that, the model is superb.
  22. WOOOOOOW! I like those models. Very good caricatures; I had no problem at all recognizing them. Good likeness. I, too, can't wait to see your animation magic on them.
  23. Dearmad, as accurately as a photon can be. Or, in other words, not attenuated at all. The detailed explanation is this: photons don't lose energy; they are either absorbed or bounced. In ray-tracing, we apply the inverse-square law of attenuation because we are not dealing with photons but rather with photon density. As light travels away from an emission point, the photons are more and more dispersed and thus less dense; that is the inverse-square law. But with the photon mapper, we don't compute the density, we estimate it. We shoot a million photons into a scene and then estimate their density everywhere in the scene. The end result is as if you had positioned a million light sources in the scene.
      In the real world, photons have different levels of energy, which correspond roughly to colors along the spectrum. In CG, each photon is actually three photons: R, G and B photons represented as one single CG photon, and their absorption vs. bounce is computed on the photon by modifying the RGB energies. There is a part which is done statistically and a part which is done computationally. When a photon hits a surface, the decision to bounce it or absorb it is made statistically. Once this decision is made, the photon's RGB energies are adjusted according to the surface's diffuse color. The parameter that has the most influence is the surface diffuse color: that is what adjusts the photon's RGB energies and what determines the statistics for deciding whether a photon is bounced or absorbed. Other properties such as reflectivity, transparency, etc. have some effect, mainly on the way, or the pattern in which, photons are bounced.
      The radiance should actually be set to 95% on all surfaces if we wanted to be more realistic, because in reality even a very white surface has a reflectivity of about 95%. But 100% is very good for CG purposes, and you wouldn't notice the difference anyway. Think of the radiance as a constant factor that may be used to attenuate the RGB characteristics of the surface; you would normally not use it and just leave it at 100%. The actual reflectivity of a surface is computed from its RGB diffuse color. Indeed, if you adjust radiance differently on a per-object or per-group basis, you will likely get all sorts of unreal effects. This could be an artistic exploration and, if mastered, could probably lead to some interesting artistic styles, but personally, given the time required to test a photon mapping scene, I would not attempt it.
      The greater the number of bounces, the more interreflection between objects in the scene. At 0 bounces, the scene is illuminated equivalently to a standard ray-trace with one light. At 1 bounce, there is some interreflection, but the shadows are still quite harsh. At 30 bounces, you start to get illumination that looks like reality. In reality, photons can bounce several million times.
      The cabinet is actually purplish like the two other cabinets; the grayish look comes from the specular reflectivity of the spot of light.
      Dingo, 25 passes is determined purely by the antialiasing that is necessary for the blades of the blinds. I would use 25 passes even without photon mapping in this scene because of the blinds, and I determined it by trial and error. The number of passes does not affect the photon mapping quality, since the final gathering samples are distributed among the passes. In other words, if you set 500 final gathering samples and 25 passes, then each pass will average 20 samples.
  24. Almost one year ago, I promised in this thread to post more renders of this room with different light settings. Here they are. All done with v11.0t:
      Photons Cast: 1,000,000
      Sample Area: 1,000
      Photon Samples: 200
      Max Bounces: 5 (I need to try more)
      Final Gathering Samples: 500
      Jitter: 100%
      Average render time at 25 passes on a 2.5 GHz WinXP machine: 6h30m
      Only one table light
      The two side table lights
      Only the tiny night lamp behind the desk
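      As a crude mental model of why those four dominate (an assumption for intuition only, not how A:M's renderer is actually implemented): every pass shades every pixel, and every shaded sample fires the shadow rays of every light, so the parameters multiply together:

         def rough_render_cost(width, height, passes, rays_per_light):
             # rays_per_light: one entry per light in the scene.
             pixels = width * height
             shadow_rays_per_sample = sum(rays_per_light)
             return pixels * passes * (1 + shadow_rays_per_sample)

         # Doubling passes or total rays roughly doubles this figure;
         # doubling the render size in both axes quadruples it.
         print(rough_render_cost(640, 480, 25, [2, 2]))  # relative units only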
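      The statistical bounce-or-absorb decision described above is the classic "Russian roulette" step from Jensen-style photon mapping. A minimal sketch of that single step (the general technique, not A:M's code):

         import random

         def bounce_or_absorb(photon_power, diffuse_color):
             # photon_power and diffuse_color are (r, g, b) tuples in 0.0-1.0.
             # The survival probability is driven by the surface's diffuse
             # reflectivity; the decision itself is statistical.
             p_survive = max(diffuse_color)
             if random.random() >= p_survive:
                 return None  # photon absorbed at this surface

             # The photon survives: tint its RGB energy by the surface color and
             # divide by the survival probability so the estimate stays unbiased.
             return tuple(e * c / p_survive
                          for e, c in zip(photon_power, diffuse_color))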