Hash, Inc. - Animation:Master

Everything posted by ypoissant

  1. From the shadows, mainly from the observation of how it falls on the right armpit. The light placement may have a much larger impact on overall appearance than expected. For me, if the lighting is not set up exactly the same, it is impossible to draw definitive conclusions. Any form of GI will dramatically soften the surface curves. Apart from that, the gamma correction, as pointed out by Robert, is also a major factor.
  2. Having good textures plays just a small role in creating good materials. For example, if you take the default material and slap a good wood texture on it, you will get plastic with a wood texture. Textures only describe the spatial distribution of the reflectance of the material. The other material properties, such as diffuse, specularity, etc., describe the directional distribution of the reflectance, as the sketch below illustrates. If you want to avoid the plastic look, you need to experiment with those other properties. Chicory and Coffee did not use radiosity.
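To illustrate the point, here is a minimal C++ sketch of a Blinn-Phong-style shading model (the struct and function names are my own, not A:M's): the texture map only supplies diffuseAlbedo, the spatial part of the reflectance, while the plastic-or-wood character of the surface is governed entirely by the parameters that shape the directional part.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  mul(const Vec3& a, const Vec3& b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
static Vec3  add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(const Vec3& a, float s)     { return {a.x * s, a.y * s, a.z * s}; }
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(const Vec3& a) { float l = std::sqrt(dot(a, a)); return {a.x / l, a.y / l, a.z / l}; }

struct Material {
    Vec3  diffuseAlbedo;      // spatial reflectance: this is all a texture map supplies
    float specularIntensity;  // directional reflectance: strength of the highlight
    float shininess;          // directional reflectance: width of the highlight
};

// Blinn-Phong style evaluation: a wood texture only changes diffuseAlbedo; the
// plastic-or-not character of the surface lives entirely in the specular term.
Vec3 shade(const Material& m, Vec3 n, Vec3 l, Vec3 v, Vec3 lightColor) {
    n = normalize(n); l = normalize(l); v = normalize(v);
    float nDotL = std::max(0.0f, dot(n, l));
    Vec3  h     = normalize(add(l, v));
    float spec  = m.specularIntensity * std::pow(std::max(0.0f, dot(n, h)), m.shininess);
    Vec3 diffuse  = scale(mul(m.diffuseAlbedo, lightColor), nDotL);
    Vec3 specular = scale(lightColor, spec * nDotL);
    return add(diffuse, specular);
}
```

With a wood map plugged into diffuseAlbedo but the default, tight and strong specular settings left untouched, the result still reads as plastic.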
  3. From the results I saw on the ompf forum, no. The bottleneck is not so much the power of the GPU as the bandwidth between the CPU and the GPU. People who have attempted a "mixed" implementation all came to the same conclusion: you don't gain any speed unless everything is run on the GPU. Note that I have no experience implementing a path tracer on the GPU. All I know comes from the ompf forum discussions, blog entries and technical articles.
  4. It isn't intrinsically faster than the current Photon Mapping. However, it is a much simpler rendering algorithm, so it is often implemented on the GPU. But in order to be efficiently hardware accelerated, a lot more than just the renderer must be ported to the GPU. This basically amounts to an almost complete rewrite of the application, which would be a huge undertaking. I also think that most users wouldn't use such a renderer because of the longer render times.
  5. Reflectance is used in two contexts. Used alone, it just means the color or texture. Used in "Bidirectional Reflectance Distribution Function" (BRDF), it means the color and the scattering pattern. In A:M, "Reflectivity" refers to the color of the reflection and "Specularity" refers to the width of the specular highlight, which is determined by the scattering pattern. So this is not too far from the actual thing. Technically, though, a "specular" surface is a perfectly smooth surface, and "reflectivity" usually refers to the reflectance of a perfectly smooth surface.
  6. Yes. You are right. The hardest part is getting used to a different material representation. Lighting techniques need to be adapted too, because the indirect lighting takes care of a lot of the additional lights that are typically added to traditional CG scenes. Modeling techniques are the same. Indeed, physically-based material definition is usually easier to set up. Not only are there fewer parameters to tweak, but the parameters are more meaningful and intuitive, and the render result is very predictable. There are basically two material setups: one for single-layer raw materials and one for double-layer coated materials (see the sketch below). Once this is set up, all that is left to do is change the color/reflectance maps and the layers' roughness, and add bump/normal maps where appropriate. It becomes very intuitive very quickly. Once a material is set up, it can be reused on any model and in any lighting situation. Because it is physically based, it will always look the same no matter the environment and lighting conditions it is in. The material just reacts to light as it would in real conditions. Yes. Absolutely. BTW, reflectance is a big word, but it is just a regular color map with the gamma correction removed.
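A rough sketch of what such a two-setup material description could look like, purely as an illustration (the Layer/Material structures, map names and values below are hypothetical, not A:M's or any particular renderer's API):

```cpp
#include <optional>
#include <string>

// Two basic physically-based material setups: a single-layer raw material and a
// double-layer material with a transparent coating over the base layer.
struct Layer {
    std::string reflectanceMap;  // color/reflectance texture, gamma correction removed
    float       ior;             // index of refraction
    float       roughness;       // micro-scale roughness of the layer
};

struct Material {
    Layer                base;     // raw wood, metal, mineral, ...
    std::optional<Layer> coating;  // varnish, clear coat, ...; absent = single layer
    std::string          bumpMap;  // optional bump/normal map
};

int main() {
    // Single-layer raw material: a metal.
    Material metal{ Layer{"steel_reflectance.png", 2.9f, 0.35f}, std::nullopt, "" };

    // Double-layer coated material: varnished wood. Same structure; only the
    // maps and the roughness values change.
    Material wood{ Layer{"oak_reflectance.png", 1.5f, 0.6f},
                   Layer{"", 1.5f, 0.05f},       // smooth clear varnish on top
                   "oak_bump.png" };
    (void)metal; (void)wood;
    return 0;
}
```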
  7. Models are fine. No need to convert to polygons. Lights are usable too, as their units can be converted easily. It is really about the materials. If you really want to know the details, hold on. I like to distinguish material definitions used in the CG world by two categories: "Physically-Based" and "Effect-Based".

Physically-Based: describes materials by their physical structure and composition, essentially using material properties that can (or could) be measured from real-world materials. A material is described by its layers, each layer being described by its reflectance, index of refraction (IOR), absorption/extinction, density, scattering/roughness and emission properties. Typically, providing the reflectance/IOR and roughness for each layer is enough for most materials. Natural materials are single layers composed of raw wood, metal, minerals, etc. Synthetic materials are usually double layers: a base wood material with a varnish coating, for example. Paint and plastics are double layers too, because the substrate is a transparent layer in which colored particles or pigments are suspended. Knowing the composition and the physical properties, we can model how light is reflected off a material. When a "photon" hits a physically-based material, we can compute the probability of it being reflected or absorbed and, if reflected, the probability of the direction it will be reflected to (see the sketch after this post).

Effect-Based: describes a material by an accumulation of visual effects. Diffuse, specularity/shininess/glossiness, reflectivity and ambience are all visual effects. Each of those visual effects has a color and an intensity. Given an assemblage of those visual effects with their properties, it is impossible to infer the physical properties that would produce the resulting material appearance. At best, one of those effects may be probabilistically selected when a photon is scattered, which constrains the material into being at least physically plausible. But this does not make the material physically probable, meaning that even though the physically plausible material respects all physical laws, the resulting material is very unlikely to exist in nature. And this produces more noise in the final render, thus requiring much longer render times. It is possible to describe a physically plausible and probable material with effect-based properties. But this requires expertise in actual material composition, expertise in the actual implementation of each effect in the material representation of the 3D application of choice, and a good dose of math. BTW, this is another difficulty one faces when using Radiosity/Photon-Mapping in A:M: for best results, materials need to be set up in a physically plausible way, and doing this is not trivial.

Effect-based material descriptions are intrinsically full of contradictions. The ambience property assumes the environment is reflected by a perfectly rough material, while the reflectivity property assumes the environment is reflected by a perfectly smooth material. The diffuse property assumes light is reflected by a perfectly rough material, while specularity/shininess/glossiness assumes light is reflected by a more or less rough material. In a physically-based renderer, there is no distinction between lights and the environment: everything is a source of illumination, either directly or indirectly. But ambience and reflectivity specify how the environment is reflected, while diffuse and specularity/shininess/glossiness specify how light sources are reflected.
All the effect properties can make much more sense (be less contradictory) if materials are assumed to be double layer (a base material with a transparent coating), but there are no properties that indicate how to split them between the base layer and the coating layer. Such a "separation" property would be impossible to implement anyway. Those are the main differences.
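Here is a hedged sketch of the photon-scattering step described above, assuming the simplest possible case of a single-layer, perfectly rough (Lambertian) material; the names and the cosine-weighted sampling choice are illustrative assumptions, not a description of A:M's photon mapper:

```cpp
#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

struct SurfaceHit {
    Vec3  normal;       // shading normal at the hit point
    float reflectance;  // average reflectance of the layer, in [0, 1]
};

std::mt19937 rng{42};
std::uniform_real_distribution<float> uni(0.0f, 1.0f);

// Returns false if the photon is absorbed; otherwise writes a cosine-weighted
// outgoing direction around the normal (i.e. a perfectly rough, diffuse layer).
bool scatterPhoton(const SurfaceHit& hit, Vec3& outDir) {
    // Russian roulette on the reflectance: absorb with probability 1 - R.
    if (uni(rng) > hit.reflectance)
        return false;

    // Cosine-weighted hemisphere sample in a local frame (t, b, n).
    float u1 = uni(rng), u2 = uni(rng);
    float r  = std::sqrt(u1), phi = 2.0f * 3.14159265f * u2;
    float lx = r * std::cos(phi), ly = r * std::sin(phi);
    float lz = std::sqrt(std::max(0.0f, 1.0f - u1));

    Vec3 n = hit.normal;
    Vec3 t = std::fabs(n.x) > 0.1f ? Vec3{n.y, -n.x, 0.0f} : Vec3{0.0f, n.z, -n.y};
    float tl = std::sqrt(t.x*t.x + t.y*t.y + t.z*t.z);
    t = {t.x / tl, t.y / tl, t.z / tl};
    Vec3 b = { n.y*t.z - n.z*t.y, n.z*t.x - n.x*t.z, n.x*t.y - n.y*t.x };

    outDir = { lx*t.x + ly*b.x + lz*n.x,
               lx*t.y + ly*b.y + lz*n.y,
               lx*t.z + ly*b.z + lz*n.z };
    return true;
}
```

An effect-based material offers no such probabilities to draw from, which is exactly the conversion problem described above.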
  8. I know a lot of people implementing MIS BPT renderers as a hobby, myself included. Personally, my pet project has been in development on and off (but mostly off) for 5 years. While it is a BPT, it is mostly an experimental project where I can test ideas concerning parallelism, concurrency and vectorization in a path tracer context. At work, I maintain and improve the renderer (see attached example). This takes all my programming time, and when I get back home, I'm usually not in the mood to keep focusing on programming. I have played with the idea of porting my pet path tracer to A:M a few times already. But there are a few difficulties. First, this could not be ported as a plugin; it would need to be integrated into the code base. Then there is the material and light issue. Material definitions and light definitions need to be physically based in order for a path tracer to work. This basically requires the implementation of a separate physically-based material system in A:M. I know from experience that this would not be appreciated. I'm not talking about the A:M community in particular but in general. There is a surprisingly strong resistance to adopting physically-based material definitions in traditional CG circles. People have invested a lot of time learning how to get the materials they like in their legacy 3D application of choice and don't like to have to change that. And materials in A:M can become quite complex, especially when procedurals are used. It is possible to constrain A:M material definitions so they are physically plausible. But this gives unexpected results in renders, because although the materials are physically plausible, they are still generally physically improbable, so they don't look like real-world materials. And it usually does not match the regular render results either. Bottom line: this would be a large project. Anyone wanting to "hire" an implementer should post an offer on the ompf forum. There you can find computer graphics students (and also professionals) interested in path tracing of all sorts.
  9. Radiosity is not an alternative to A:M's lights. It uses them to compute a global illumination solution. So yes, it is known as "Global Illumination" in other apps. The word "Radiosity" is misleading, as it refers to an old, no-longer-used and very limited global illumination technique. The name was kept from the old days when A:M had an implementation of the "Radiosity" technique. The global illumination technique implemented in A:M is called "Photon Mapping". It is a good technique but is a little bit difficult to use due to the numerous parameters that need to be set just right. "Multiple Importance Sampled Bidirectional Path Tracing" (MIS BPT) is the currently preferred global illumination technique, and it has essentially no parameters: you just let it render until the result is subjectively noise-free enough (a sketch of the MIS weighting follows this post). Indeed, "Photon Mapping" is not too suitable for animation because it produces moving noise in the animation. This can be solved by cranking all parameters to their highest values, increasing render time by doing so. But then MIS BPT also requires much longer render times when frames are rendered for animation.
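As a side note on what the "multiple importance sampling" part adds: when the same light path can be produced by two sampling strategies (for example, sampling the light versus sampling the material), each sample is weighted by its relative probability density, which is what makes the estimator robust without user-tuned parameters. A minimal sketch of the usual balance heuristic, in textbook form rather than any renderer's actual code:

```cpp
// Balance heuristic (Veach): weight for a sample drawn with strategy A when a
// competing strategy B could have produced the same path. nA and nB are the
// number of samples taken with each strategy, pdfA and pdfB their densities.
double misWeight(int nA, double pdfA, int nB, double pdfB) {
    double a = nA * pdfA;
    double b = nB * pdfB;
    return a / (a + b);
}
```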
  10. Since radiosity seems to be essential for your project, before you invest a lot of time learning either animation or radiosity one after the other, I recommend you set up a very simple animation, say a moving cube inside a room, set up radiosity in this room and render a few seconds of animation, just to get a feeling for the time required to render a frame of radiosity and for the radiosity effect in an animated sequence. In general, radiosity for animation requires quite expensive settings, which translate into long render times because of the required indirect illumination consistency between frames. Radiosity is nice, but there is a price to pay. Maybe you will then opt for a non-radiosity render but with well-thought-out materials and lighting. This "plastic" look you mention is not intrinsic to the A:M renderer. It is the result of poorly set up materials. Radiosity will add indirect illumination to your renders, but it will not improve a poorly designed material. If your materials are designed like plastic, they will look like plastic with or without radiosity; with radiosity, they will look like plastic with additional indirect illumination. Designing good materials is an art in itself, and designing materials for radiosity rendering is even more of an art. You might want to add this to your list of skills to learn.
  11. Fantastic result. Your attention to detail, in geometry, materials and lighting, is admirable.
  12. Man, this new forum system is so Internet Explorer unfriendly. So I'll write that a second time: consider JavaScript. It runs on all platforms. It is a powerful language (much more powerful than it looks at first) that can be used to build large applications, and developing and debugging applications in it is a fun and interactive experience. You have access to a large choice of libraries and frameworks. I've really been impressed by the open-source Cinder. See the variety of high-class stuff that was made with Cinder. And at GoingNative 2013, Herb Sutter challenged programmers at the conference (go to 1h:01min) to come up with a game or something creative that they could program in a few hours to an afternoon. See the results here and here (starting at 8min). Almost all the programmers had never heard of Cinder before. So there you have a good feeling for the variety of things that can be done with Cinder. It is not too difficult to imagine what could be done by a small team of programmers in a few months of work. It is just a pity that it does not run on portable devices. But openFrameworks, also open source, looks like Cinder and runs on all platforms, including iOS and Android.
  13. A:M code style is very standard object oriented C++. I would not qualify it as "unique". What is unique about A:M code, apart from the obvious use of patches, is the thought that went into the user interface and the unification of operations.
  14. One trend I've been observing in the 3D industries is the switch to physically-based rendering. The movie industry was the first to push in that direction; I'd say the last five years correlate very well. During the last 2-3 years, it is the game industry that has been pushing toward physically-based rendering pipelines. The most touted advantages of physically-based rendering are reduced costs, artist friendliness and reduced reliance on post-production. SIGGRAPH has dedicated full-day tutorials to this topic for the last 2 years, and those tutorials, including presentations from the big players in those industries, are available on the Web.
  15. I'm not sure where all this hair splitting is leading us in terms of differences between splines/patches and Sub-D.
  16. Rodney, your "Line and surface" post seemed to imply that lines and surfaces came first and that the subdivision then came from those lines and surfaces, that is, that you need the lines and surfaces so you can subdivide them. My point is that there are no lines or surfaces, only a render of a mathematical interpretation of a set of 3D points in space. The subdivision process does not need those lines and surfaces to do its thing. It is just another way to mathematically interpret the set of 3D points.
  17. Maybe it's a browser issue... it works for me in Comodo Dragon and Firefox, but didn't work in Internet Explorer. OK. Thanks for the pointer. It works in Chrome.

What lines? What surfaces? Whatever the technology, you set control points in a 3D space. Then some algorithm computes line representations and surface representations from those points. Those lines and surfaces are only a consequence of what the control points are interpreted to mean, that is, of the basis functions used to interpolate the line and surface representations. Different subdivision technologies use different basis functions and thus produce different line and surface representations from the same set of control points.

Representing surfaces as triangles is only done for efficiency reasons. It is the lowest common representation for any surface topology, and it is way more efficient to have only one primitive than many on whatever current computing architecture. There is no degradation in splitting a quad into triangles, because the shading calculations are based on the normals at each vertex, and the vertex normals are the same whether the surface is represented with quads or with triangles (see the sketch after this post). Jos Stam proved that the 3D position and normal of any point on a Sub-D surface can be derived directly from the control points. In other words, subdivision is not required to display Sub-D surfaces. It is a nice theoretical result, but nobody does that because it is too expensive. It is still way less expensive to subdivide into micro-triangles and render those triangles. Of course, the solution is not so pure and elegant, but who cares.

The multiple images are combined into a single image plane where each pixel can represent basically infinitely many luminance variations and amplitudes. Depth or Z-space has nothing to do with it. Display technology is really just a memory plane where the color values of the final rendered image are stored, plus a large array of computing processors. This may be different in a far future, but there is no sign that this model is going to change in any relevant time frame for this discussion.

So you end up with a bunch of control points in 3D space and an algorithm to interpret those control points into some surface. The display technology uses its computing processors to produce a 3D surface from those control points. Here again, there are no fundamental differences. All surface representation technologies need some algorithm, and thus some computing power, in order to transform a bunch of control points into a surface.
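To make the "shading is based on the vertex normals" point concrete, here is a tiny sketch of the usual smooth-shading normal interpolation over one of the triangles a quad is split into; the triangle only carries the interpolation, and the vertex normals come from the control mesh, so they are identical no matter how the quad is cut (function names are mine):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 normalize(Vec3 v) {
    float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x / l, v.y / l, v.z / l };
}

// Smooth shading over a triangle: the normal at any surface point is an
// interpolation of the three vertex normals (b0, b1, b2 are the barycentric
// coordinates of the shaded point). Splitting a quad into two triangles keeps
// the same vertex normals, so the shading does not change.
Vec3 interpolatedNormal(Vec3 n0, Vec3 n1, Vec3 n2, float b0, float b1, float b2) {
    Vec3 n = { b0*n0.x + b1*n1.x + b2*n2.x,
               b0*n0.y + b1*n1.y + b2*n2.y,
               b0*n0.z + b1*n1.z + b2*n2.z };
    return normalize(n);
}
```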
  18. I'd like to participate in this discussion but even if I click on the "Quote" button, I don't get any quotes. And I can't even copy and paste texts from previous posts. So I'll pass.
  19. My point of view, from a technical perspective, is that spline and patch modeling and rendering are a subset of the subdiv category. This has been demonstrated. Hash patches need to be subdivided into triangle meshes to be rendered (a generic sketch of that step follows this post). This is true for the ray tracer and for the real-time renderer, HA:MR or not. Hash patches share many characteristics with Sub-D: they use the same mathematical framework for subdivision, except that they use different basis functions for the subdivision and different constraints when subdividing n-ary vertices.

From an artistic point of view, I don't see how quad modeling could be considered superior to tri modeling in the absolute. There are so many situations where the quad constraint is detrimental to good modeling, any sufficiently detailed anatomical model for instance. But at the same time there are so many situations where quad modeling simplifies the modeling task enormously, as for architectural and many furniture models.

Where is Sub-Div going next is a rhetorical question. Wrong focus. Sub-Div doesn't need that many more innovations. The industry is mature, and research focus is not on those issues anymore. This is why Disney open-sourced their Sub-Div: no competitive advantage anymore. No matter the technology used, in the end it is the artist behind the screen that makes the difference.

Concerning porting HA:MR to today's browser environments: this would be a huge undertaking. HA:MR was designed in the days of OpenGL 1.1-1.5, the so-called "fixed function pipeline". Converting that code to OpenGL ES 2.0 with the programmable shader model would be a major task. In addition, the browser industry is shifting toward a plugin-free environment. Exit plugins, enter apps. Google has already ditched Netscape plugins in favor of PNaCl applications, Firefox is looking at an alternative like asm.js, and Microsoft IE 11 was already supposed to be plugin-free under Win 8, but MS reverted this decision... for now.
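For readers wondering what "subdivided into triangle meshes to be rendered" amounts to, here is a generic, hypothetical sketch that uniformly samples a parametric patch in (u, v) and emits two triangles per grid cell; the evaluate callback stands in for whichever basis functions the surface type uses, which is exactly where Hash patches and Sub-D differ:

```cpp
#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

// Generic tessellation of a parametric patch: sample it on an n x n grid in
// (u, v) and emit two triangles per grid cell. The evaluate() callback hides
// the basis functions, which is where Hash patches and Sub-D surfaces differ.
std::vector<Triangle> tessellate(const std::function<Vec3(float, float)>& evaluate, int n) {
    std::vector<Triangle> tris;
    for (int j = 0; j < n; ++j) {
        for (int i = 0; i < n; ++i) {
            float u0 = float(i) / n,     v0 = float(j) / n;
            float u1 = float(i + 1) / n, v1 = float(j + 1) / n;
            Vec3 p00 = evaluate(u0, v0), p10 = evaluate(u1, v0);
            Vec3 p01 = evaluate(u0, v1), p11 = evaluate(u1, v1);
            tris.push_back({p00, p10, p11});  // the grid cell is split into
            tris.push_back({p00, p11, p01});  // two triangles
        }
    }
    return tris;
}
```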
  20. Thanks all. I'll have a beer to you all.
  21. That is what I think too, and why I was interested in knowing what people expect from using an external renderer. Physically based renderers do come with a rather large base of materials. Some of them are measured materials. But they are real-world materials and, even though there are many, they may not suit a particular project. In that case, the artist needs to define their own material. The material editor strictly enforces physical plausibility, though. But still, one needs to know a lot about materials, their composition and how they react to light in order to compose realistic materials. Of course, experimentation is always possible. So I share the impression that dealing with physically defined materials is probably more technical than most 3D artists are up to. It is a whole different culture. The experience gained in defining visual-effect-based materials will not be transferable to physically based materials. This said, Disney has designed a physically based material definition interface for artists that uses the same idea of adding and adjusting visual effects, except that those visual effects have a meaningful physical relationship. Some of those visual effects have similarities with the ones used in A:M, but most are new. Maybe in a few years this will be commonplace and physically based renderers will be more accessible.
  22. Exactly. Especially regarding materials. Material properties in A:M are visual properties: a palette of adjustable visual effects. By adding and adjusting those visual effects, the artist gets a given look. A physically based renderer needs a physical description of how light interacts with materials. It is impossible to convert a combination of visual effects into a physical description of a material. At worst, the combined visual effects would result in physical impossibilities or contradictory physical properties. At best, the conversion would result in a material that remotely looks like the combined visual effects. Same for lights, but it is easier to infer the physical properties of lights from their combined set of visual effects. Note that I make a distinction between textures and materials. A material includes a set of textures but could also be defined without textures. This said, textures used to drive some visual effects would be particularly troublesome in some cases, because their values could not be used to drive physical properties.
  23. This is an important observation and is the case for all physically based renderers. I can tell you that this workflow is not going to give you the photoreal renders that you would expect. The problematic step here is "A:M converts the scene to be usable for the other renderer". While converting the geometry is easy, converting lights to physically defined lights is difficult, and converting materials to physically defined materials is impossible. And this is why every physically based renderer out there uses its own materials. This is a more plausible workflow, but it is nearly impossible to integrate in A:M unless you select one specific renderer that you want to integrate. The problematic step here is the "imported materials and lights". It would be a huge amount of work to allow A:M to import the material and light definitions from all the external renderers around. And for several of those renderers, the material definition is closed and not available for import. No. Area lights and light-emitting materials don't result in longer render times, in the sense that they are not more expensive to sample than the current lights (see the sketch after this post). You currently have area lights in A:M and can see the results when you use ray-traced multipass soft shadows.
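On the sampling-cost point, here is a rough sketch (hypothetical types and names) of picking one sample point on a rectangular area light: two random numbers and a few multiply-adds, followed by the same shadow ray a point light would need, which is why area lights are not intrinsically more expensive per sample:

```cpp
#include <random>

struct Vec3 { float x, y, z; };

// A rectangular area light: one corner plus the two edge vectors spanning it.
struct AreaLight {
    Vec3 corner, edgeU, edgeV;
};

std::mt19937 gen{7};
std::uniform_real_distribution<float> uni(0.0f, 1.0f);

// Picking one point on the emitter costs two random numbers and a few
// multiply-adds; the shadow ray that follows is the same work as for a point
// light, which is why area lights are not intrinsically more expensive to sample.
Vec3 samplePointOnLight(const AreaLight& light) {
    float u = uni(gen), v = uni(gen);
    return { light.corner.x + u * light.edgeU.x + v * light.edgeV.x,
             light.corner.y + u * light.edgeU.y + v * light.edgeV.y,
             light.corner.z + u * light.edgeU.z + v * light.edgeV.z };
}
```

The soft shadows come from averaging many such samples over multiple passes, which is the multipass cost you already pay in A:M today.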
  24. For those interested in getting photoreal renders, I would suggest trying your hand at Lagoa: http://home.lagoa.com/ I know these guys, and they have built a superb online system that uses the cloud for rendering. You can register for free, and there are no differences in functionality between the free and paid registrations. The only differences are in the amount of cloud resources allocated for rendering, so don't expect the sort of blazingly fast rendering you can see in their videos if you use the free registration, but you can still get photoreal renders of your scenes if that is what you are looking for.
  25. I'm not saying that. Not at all. I'm genuinely curious, though, about the expectations of those, like you, who dream of using an external physically based renderer with A:M. How do you imagine your workflow? Say you build a scene with A:M, and then what? Can you explain how you imagine the link between the scene you did in A:M and the physically based renderer? What are the steps? What do you need to do to your scene so it can render in a physically based renderer? Have you ever rendered a scene with a physically based renderer before? How do you define the materials and the lights, for example?