Everything posted by ypoissant
-
Is that the main interest? Speed?

You know, IMO, a large factor in this "warm" vs "sharp" look is solvable not by using a GI renderer but by following the linear workflow principles. I know of no GI renderer that does not follow those rules, in some cases even enforcing them by automatically converting the textures, surface properties and the final render under the hood. The reason is simple: while one can get away without the linear workflow in a non-GI renderer, it is difficult to avoid with a GI renderer because you simply cannot get a photorealistic look without it, even with the best renderer.

But whatever the renderer, the linear workflow produces renders that look sharper because the light falloff on the surfaces is sharper. Even direct lighting will look more indirect when the linear workflow principles are followed. So if this "warm" vs "sharp" look is the main reason, it can easily be solved with the current A:M renderer.

Anybody else want to share why he/she is looking for an external renderer?
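To make the principle concrete, here is a minimal sketch in Python, assuming a plain 2.2 gamma rather than the exact sRGB curve; the numbers are only illustrative:

```python
# Minimal sketch of the linear workflow idea (assumes a plain 2.2 gamma,
# not the exact sRGB curve): decode inputs, light in linear space, re-encode.
GAMMA = 2.2

def decode(texture_value):            # 0..1 gamma-encoded texture value -> linear
    return texture_value ** GAMMA

def encode(linear_value):             # linear radiance -> 0..1 display value
    return linear_value ** (1.0 / GAMMA)

# A mid-gray texture lit at 50% intensity:
albedo = decode(0.5)                  # ~0.218 in linear space
lit    = albedo * 0.5                 # the lighting math happens in linear space
print(encode(lit))                    # ~0.364, vs 0.25 if decode/encode is skipped
```

The point is that the lighting math happens on decoded (linear) values and only the final result is re-encoded for display; skip those steps and the image falls off toward shadow too quickly, which is a big part of that dull, "warm" look.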
-
Robert,

Last time I worked on A:M, radiosity wasn't available anymore. I doubt it is back, but I don't own the latest A:M version. Anyway... The old A:M radiosity worked the following way: during a pre-render pass, light rays were shot from the light sources and bounced off the scene surfaces. As the rays bounced, their energy was accumulated in color maps that were silently and automatically decaled onto every surface in the scene. Then, during the rendering pass, those decals were added to the direct lights to produce a radiosity render.

The current Global Illumination algorithm used in A:M is a Photon Mapper, which is yet another technique for rendering global illumination. Photon Mapping works by shooting rays from the light sources into the scene during a pre-render pass. As those rays bounce around the scene, their radiances are accumulated in a photon map. During the rendering pass, the photon map is combined with direct lighting to get a global illumination render. However, a photon map is not a color map decaled onto surfaces. Rather, it is an alternate 3D data structure that holds information about the photon hits.

By far the most used and versatile Global Illumination algorithm is Path Tracing, or more specifically different variants of Path Tracing, and combinations of path tracing with photon mapping.

I think that what you are referring to, Robert, is Image Based Lighting. In itself, IBL is not a GI algorithm, but IBL can be used inside a GI renderer to supply environmental lighting. IBL uses High Dynamic Range environment maps to supply the environmental distribution of irradiances. But GI algorithms don't need IBL to work. A GI algorithm will compute the indirect lighting from the actual lights in the scene, bouncing that light off the surfaces of the scene to get indirect illumination.
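Not A:M's actual code, but a toy sketch of the photon-shooting pass may help picture what ends up in the photon map; the scene and light objects here are hypothetical stand-ins:

```python
import random

# Toy sketch of the photon-shooting pass described above. This is NOT A:M's
# implementation; 'light.sample_ray', 'scene.intersect' and 'hit.bounce' are
# hypothetical stand-ins for whatever the renderer really uses.
class Photon:
    def __init__(self, position, direction, power):
        self.position = position      # where the photon hit a surface
        self.direction = direction    # incoming direction at the hit
        self.power = power            # flux carried by the photon

def emit_photons(lights, scene, n_photons, max_bounces=4):
    photon_map = []                   # in practice, a kd-tree for fast lookup
    for _ in range(n_photons):
        light = random.choice(lights)
        ray = light.sample_ray()
        power = light.power / n_photons
        for _ in range(max_bounces):
            hit = scene.intersect(ray)
            if hit is None:
                break
            photon_map.append(Photon(hit.position, ray.direction, power))
            ray, power = hit.bounce(ray, power)   # scatter and attenuate
    return photon_map
```

During the rendering pass, the renderer queries this structure around each shading point and combines the result with direct lighting.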
-
Unity has licensed the Geomerics Enlighten engine for its GI. Because of that, I doubt the Geomerics GI will be available in the free Unity version. There have been several other people trying to implement GI in Unity over recent years too. For a review of how Geomerics Enlighten works, see the following slides: http://advances.realtimerendering.com/s201...g%20Course).pdf

For the record, there are many techniques for computing Global Illumination. Radiosity is only one of those techniques. It is an old and very limited technique and is not used anymore. Global Illumination simply means that all lighting effects (global lighting effects) are taken into account when rendering, in contrast to local illumination, which only takes direct lighting effects into account. Considering that radiosity cannot render several lighting effects, it is only because of its history that it is considered a GI algorithm. Photon mapping is another technique for computing Global Illumination, and it uses a photon map to provide the indirect illumination.

Enlighten uses a combination of several techniques to get GI in real time. Some of those techniques require precomputations and pre-run-time scene setup, which are then combined with real-time use of the precomputed results. Using Enlighten in an animation project is likely to require a good amount of time setting up the project for GI lighting. It is likely much less simple than placing some lights or a sky and pressing render, as is the case with true GI.

But really, aside from that, I'd like to ask more general questions: Why look for an external renderer anyway? What is expected? How would it be integrated into the A:M pipeline?
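For reference, the usual formal statement of "all lighting effects" is the rendering equation; a local-illumination renderer only evaluates the incoming radiance term for the light sources themselves, while a GI renderer also accounts for light arriving from other surfaces:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
                 + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, d\omega_i
```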
-
Yeah, Pavlidis. I had this book too. I think I still have it somewhere. Another old one I had was "Procedural Elements for Computer Graphics" by David F. Rogers, from 1985.
-
1977-1984, those were pioneering days for computer graphics. The only available reference was Newman and Sproull's "Principles of Interactive Computer Graphics", a book I was attempting to understand. Just about everything had to be invented. Today, I consider myself lucky. There are many references: many books on all aspects of computer graphics and, with the Internet, access to tons of articles, theses, course notes, videos of university courses, discussion forums with experts in the domain, and excellent open source applications that can be examined. I'm not demeaning what I do and the fun I have doing it. But I consider myself lucky that I can stand on the shoulders of giants that were there before me. If not for those giants, I would not be doing what I do and having the fun I have today. And you, Nancy, are part, somewhere, of this edifice of giants standing on shoulders of giants. Gratitude.
-
PIXAR's Universal Scene Description (USD) Open Source
ypoissant replied to Rodney's topic in Open Forum
What I understand is that there are two APIs: a C++ API and a Python scripting API. Of course, this means that USD is not an end-user product but requires a tech person to use it or, more appropriately, to integrate it into a production pipeline of some sort. But that is the case for every open source 3D-related standard around. By themselves, they cannot do much. On the other hand, this is what makes them abstracted from one particular use case or from a small set of use cases. Rather, they can be adapted to about any use case imaginable.

Reminds me of OpenEXR, the High Dynamic Range file format. Even though it is maintained by ILM, the API has evolved from numerous discussions on their user forum. The current state of OpenEXR is way more evolved than the first version and is the result of contributions from a multitude of people all over the industry.
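For anyone curious what the Python side looks like, here is a minimal example, assuming the pxr module from a standard USD build is importable; the file name and prim paths are just illustrative:

```python
# Minimal example of the USD Python API (assumes the 'pxr' module from a
# standard USD build is importable; file name and prim paths are illustrative).
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("hello.usda")          # create a new stage/layer
UsdGeom.Xform.Define(stage, "/World")              # a transform prim at /World
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(2.0)                    # author an attribute value
stage.GetRootLayer().Save()                        # write hello.usda to disk
```

From there, pipeline tools can layer, reference and override that data, which is where most of USD's value lies.
-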
Nancy, you are a hero to me. Always have been and ever will be.
-
Early Sunday Morning Building
ypoissant replied to R Reynolds's topic in Work In Progress / Sweatbox
Very nice models and lighting, Rodger. Concerning the sign shadow, my take is that the sign is not perpendicular to the facade. In the painting, we don't get to see much of the sign faces. If you were to tilt the sign toward the center of the building, you could make the sign longer and project the given shadow.
-
If I can share my experience: from benchmarks at work, compared to a single-core, non-hyperthreaded render, our renderer was a little more than 3 times faster when using the 4 cores of a non-hyperthreading quad core and almost 5 times faster on a quad core with hyperthreading.

Hyperthreading is a way to try to keep the CPU busy while waiting for data to arrive from memory. Main memory access is much slower than the CPU operations. That is why there are several levels of memory cache on the CPU. Hyperthreading keeps two execution queues per core. Whenever one of the queues is waiting for data from memory because the required data is not in the cache, the other queue tries to execute its instructions using data that might already be in the cache. The efficiency of hyperthreading depends heavily on the memory access pattern of the application. Most existing applications are quite bad at memory access patterns that optimize cache utilization, so hyperthreading tends to be worthwhile. But this needs to be tested with the application.

Another CPU feature that needs to be tested with the application is the size of the L1 cache. Usually, the larger the L1 cache, the faster the computations for a renderer. But again, this is heavily dependent on the memory access pattern of the application.
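If you want to get a feel for this kind of scaling on your own machine, a rough sketch like the following will do; pure-Python busy work is not memory-bound the way a renderer is, so treat the numbers as illustrative only:

```python
# Rough way to measure core/hyperthread scaling on your own CPU.
import time
from multiprocessing import Pool

def busy(n):
    # CPU-bound busy work standing in for "render one bucket"
    s = 0
    for i in range(n):
        s += (i * i) % 7
    return s

if __name__ == "__main__":
    jobs = [2_000_000] * 32
    for workers in (1, 4, 8):          # e.g. 1 core, 4 cores, 4 cores + HT
        t0 = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(busy, jobs)
        print(f"{workers} workers: {time.perf_counter() - t0:.2f} s")
```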
-
Nope. I had nothing to do with that. BTW, what this really means is that Pixar had a patent for subdivision surfaces. Seeing that all 3D applications have been supporting subdivision surfaces for quite some time, it seems that they did not enforce their ownership of this technology, though. But now it is open source. In principle, this means that anybody can use subdivision surfaces without worrying about being sued by Pixar. The other aspect of what it means is that now, anybody who wants to use subdivision surfaces has a very efficient open source library to start with.
-
Hi Yves! How cool you're still around!! I'm not really sure (not in the office right now to check, maybe tomorrow..), but I pretty much cranked up every value.. I will post the values I used when I'm back in the office. And - yeah - what about that baking thing? All the best, Elm.

EDIT: IIRC, my values are:
Photons Cast: 1.000.000
Sample Area: 4.000
Photon Samples: 2.000
Intensity: 90% (?)
Max Bounces: 15
Caustics: Off
Final Gathering: On
Samples: 50
Jittering: 50%
Precompute Irradiance: On

I see. In that case, try the other way around: reduce the sampling area and photon samples. My hypothesis is that you will then trade low frequency noise for high frequency noise, which could be easier to iron out with noise filtering. The problem with all forms of global illumination techniques for animation is that those techniques are stochastic, and any object that moves in the scene can give very different results from frame to frame unless a very long time is spent computing a nearly perfect solution per frame. For photon mapping, that means increasing the final gathering samples and the number of photons, as you found out. Leave the intensity at 100%. Also, gamma correction could attenuate the noise by reducing the contrast gradients in the render.
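I don't know exactly how A:M maps "Sample Area" and "Photon Samples" internally, but in a typical photon mapper they drive a density estimate roughly like this sketch; a larger area and more samples average more photons over a wider disc, trading high frequency noise for blur:

```python
import math

# Hypothetical sketch of the usual photon-map density estimate. 'photons' is
# a list of (position, power) pairs with positions as (x, y, z) tuples.
# Not A:M's actual code.
def gather(photons, point, k, max_radius):
    near = sorted(photons, key=lambda p: math.dist(p[0], point))[:k]
    near = [p for p in near if math.dist(p[0], point) <= max_radius]
    if not near:
        return 0.0
    r = max(max(math.dist(p[0], point) for p in near), 1e-6)
    return sum(power for _, power in near) / (math.pi * r * r)

# Example: a few photons around the origin
photons = [((0.1, 0.0, 0.0), 0.02), ((0.0, 0.2, 0.0), 0.02), ((0.3, 0.1, 0.0), 0.02)]
print(gather(photons, (0.0, 0.0, 0.0), k=2, max_radius=0.5))
```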
-
I'm not sure how the light baking is done, but at least it means that the infrastructure for producing the maps and storing values in them is there. Thinking out loud here: photon mapping is a screen space algorithm. I mean, there is the first pass, which is world space, but the second pass is the one that computes radiosity on surfaces, and this is screen space, meaning that only the surfaces visible to the camera have their radiosity computed, and the computation is done on a per-camera-pixel basis. In other words, the screen pixels give the structure for computing the radiosity on surfaces. Baking radiosity would mean computing radiosity on all surfaces, visible or not. Assuming the baking infrastructure is based on the old radiosity baking technique, the calculation of radiosity on surfaces could be driven by the texels of the baked radiosity texture maps. So yes, this should be feasible. The baking of radiosity would probably take quite long, though. But then it would be done only once or very few times.

Two very different techniques. Although "Photon Map" contains the word "map", this map is actually a hierarchical data structure that stores the photons and their positions and accelerates their query during rendering. The old "radiosity", on the other hand, subdivides the patches to store the light interactions between surfaces. In A:M's case, this subdivision was done through textures on surfaces: surfaces that required finer subdivisions had higher resolution textures. Those radiosity texture maps cannot be used for photon mapping. And the "radiosity" technique has been abandoned by every 3D app around because 1) it isn't flexible enough, 2) it cannot render all reflection types (glossy, for instance), 3) it cannot produce hard or quasi-hard shadows, 4) it is hard to parameterize and get renders that look good, and 5) it flickers like mad during animation.

Well, the answer is not simple. In theory, every rendering algorithm can be programmed on a GPU. But the performance is oftentimes disappointing due to the limited computing model of GPUs. A technique like Photon Mapping would probably not perform too well on GPUs, and programming it would require a lot of development effort. Some people have done that, though. On the other hand, path tracing, a very popular global illumination technique on GPUs, is relatively easy to program but gives poor performance (like several hours on a Fermi). To get really good performance on a GPU requires some extraordinary crafting. And so far, whatever rendering technique is developed, a well crafted one on a GPU is usually only a few times faster than a well crafted one on multi-core CPUs. So my conclusion is that, right now, it is conceivable but not worth the effort. Development effort would be better spent utilizing all the resources of multi-core CPUs, IMO.
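Thinking along the same lines, a texel-driven bake loop could look roughly like this; everything called on 'scene' and 'mesh' here is hypothetical, the point is only that the loop is driven by texels instead of camera pixels:

```python
# Rough sketch of a texel-driven irradiance bake, along the lines discussed
# above. 'mesh.surface_point_at_uv' and 'scene.indirect_irradiance' are
# hypothetical stand-ins, not any real A:M API.
def bake_irradiance_map(scene, mesh, width, height):
    texture = [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
    for ty in range(height):
        for tx in range(width):
            uv = ((tx + 0.5) / width, (ty + 0.5) / height)
            surf = mesh.surface_point_at_uv(uv)       # hypothetical lookup
            if surf is None:                          # texel not covered by UVs
                continue
            # same photon-map / final-gather query the camera pass would do,
            # but evaluated at the texel's surface point instead of a pixel
            texture[ty][tx] = scene.indirect_irradiance(surf.position, surf.normal)
    return texture
```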
-
You should be able to get rid of some flicker by increasing "Sample Area" and "Photon Samples" instead of increasing "Photons Cast". You will get more diffuse indirect lighting, but with a scene composed of mostly diffuse surfaces like this one, that should not be a problem. Increasing "Photons Cast" can significantly increase the render time, but increasing the area and samples only increases the irradiance precalculation time.
-
I found this video presentation about gamma correction and linear workflow: Matt's video presentation. And if you want to know how to implement a linear workflow in A:M, there is my tutorial: My gamma correction tutorial
-
Thanks, guys, for your appreciation.

The way I solved the problem of keeping the objects' color with blue sky lights in the past was this: I had a sky light rig with bluish lights and a sun light with a yellowish light. Then, to help remove the bluish shade from the sky in the parts of the scene that should be lit by the sun, I added a twin negative bluish sun. Today, with AO, you can replace the sky light rig with AO using an appropriate bluish color for the ambient lighting. The yellowish shade from the sun is also important when trying to match a photo.

But there is more than the colors of the sun and sky to match a photo. There is also the dreaded black art of gamma correction (sorry for those who despise this topic). Your photo IS gamma corrected, so your render should be too. Without gamma correction, the colors look much more saturated than in the photos. Also, getting the right balance between the yellowish sun and the bluish sky is much harder to achieve without gamma correction, because lights add up linearly in a render but not in a gamma-corrected photo; with gamma correction you almost don't need the negative bluish sun trick. Also, without gamma correction, the shadow terminators on the sides of your objects come much earlier than in the photo. A good example is the big ball in your second example: the terminator would be much shorter with gamma correction than the one you have.

But then, if you start experimenting with gamma correction, you will have to understand the ideas behind what is called the "linear workflow" to get your colors right even after you gamma correct your renders. If you do a search on Google Images for "Linear Workflow", you will get numerous examples of what I mean, with before/after comparisons and a lot of explanations.
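A tiny illustration of the sun/sky balance point, assuming a plain 2.2 gamma; the sun and sky values are made up:

```python
# Lights add up in linear space; already gamma-encoded values do not.
GAMMA = 2.2
sun_linear, sky_linear = 0.8, 0.2      # made-up sun and sky contributions

correct = (sun_linear + sky_linear) ** (1.0 / GAMMA)                    # add, then encode
naive   = sun_linear ** (1.0 / GAMMA) + sky_linear ** (1.0 / GAMMA)     # add encoded values
print(correct, naive)                  # ~1.0 vs ~1.39 (blows out past white)
```

Lights add up in linear space; once values are gamma-encoded they no longer add the same way, which is why matching a gamma-corrected photo without correcting the render is such an uphill battle.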
-
The bluish shadows come from the fact that sun shadows are illuminated by the blue sky. So it is not only the cast shadow that is bluish but also the non-lit side of the objects. If you want to match the photo, you need to add some bluish lighting that simulates the sky.
-
Someone called? You are right, Rodger, that a flat plane should be 128 128 255, no matter what else is in the field of view. Why it isn't, I don't know, and this is a case where I would need to trace the code execution through a debugger to figure it out. It is either a bug or some kind of tone mapping post-process applied to the normal buffer. Try another file format, maybe? As a remedy, you can always rescale the values using the "Curves" or "Levels" tools in Photoshop. Doing that, you would get better rescaling precision if you could save the normal map in a format that supports 16 bits per channel. Although, in this case, since the span of the values is larger than the one you need, I don't think you would lose a lot of normal information by going 8 bits per channel.
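A sketch of the Levels-style rescale, in case it helps; the input and output ranges here are made-up numbers for illustration:

```python
# "Levels"-style rescale: linearly remap the buffer's observed range back
# onto the range you actually need. The numbers below are illustrative only;
# a 16-bit source survives this remap with much less precision loss than 8-bit.
def levels(value, in_black, in_white, out_black=0, out_white=255):
    t = (value - in_black) / (in_white - in_black)
    t = max(0.0, min(1.0, t))
    return round(out_black + t * (out_white - out_black))

print(levels(140, 20, 235))   # -> 142 (made-up sample value and range)
```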
-
Note that when the tutorial was written, a mag of 236 gave the perfect round curve. But around v12, when work had been done to fix some crease issues, that mag value didn't work anymore, as pointed out by Rodger. Better to use the value he mentions for perfect round bevels.
-
Guys, I appreciate your nice words. But let's not hijack Rusty's thread. I replied to Jakerupert privately. All goes well. I just don't have much time available these days. Work takes it all.
-
The host is ready to receive the web site, but I can't seem to find time to upload and set it up. Part of my disinterest is that I haven't updated the site for years; it really doesn't reflect my current interests and work anymore, and bringing it up to date would require much more time than I have available right now.

The lady is sleeping somewhere on some hard disk. It was left in a limbo/"work in progress" state when I started to modify the anatomy and then some other priorities came in. As it is right now, I'm not satisfied with the anatomy and proportions at all. I don't see when I will start working on it again. I tried to match some backup models with the dates when I posted the renders here on this forum, but I can't find that version. Don't know what happened there. And sorting this mess out would again require more time than I have available.
-
Sorry, I will get a bit technical, but this is a technical issue. If your on-screen details (geometry or texture) are smaller than your render sampling resolution, then you will get that sort of shimmering. Sampling resolution: say you multipass with 9 passes, then you have 3 x 3 samples per pixel. If your details are smaller than 1/3 of a pixel, you will get shimmer.

For geometry size issues, there is no other solution to this shimmering than to sample at a higher resolution. This means increasing the number of passes. For image textures, another solution is to use an image of lower resolution or do a blur pass on your image. You would need different versions of the same image with different resolutions or different levels of blur to match how close-up your shots are. For procedural textures, you have to be an expert in that sort of thing. Basically, you need to remove the tiny details in the texture generators. Looking at the octaves is a good hint, but there is much more to it. Unfortunately, you can't blur procedural textures. But usually, you can replace a procedural with an image texture. Procedurals are very prone to shimmering. I'd recommend replacing all the procedurals with images. One way of doing that is to render the procedural from your geometry at the highest resolution and then remap the resulting image.
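As a back-of-envelope version of that rule (the numbers assume the N x N sampling grid described above):

```python
import math

# N x N samples per pixel (passes = N*N) can only resolve details
# down to about 1/N of a pixel before they start to shimmer.
def smallest_clean_feature(passes):
    n = math.isqrt(passes)            # samples along one axis of the grid
    return 1.0 / n

print(smallest_clean_feature(9))      # ~0.33 px: 9 passes handle 1/3-pixel detail
print(smallest_clean_feature(25))     # 0.2 px: 25 passes for 1/5-pixel detail
```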
-
FX-Guide interview with Zap, linear workflow evangelist. Worth listening to. There is more information on this subject there than in any discussion we can have here.
-
You need to leave the math and electronics aside. If you concentrate only on the values and how those values add up, then you are not focusing on the right issues. When a monitor displays a value of 128, it does not come out as half bright. So those numbers are fine for computations but not fine for display.

That could be a habit thing. You are used to seeing your colors a certain way and the other settings look off. I had this same feeling when I started to play with the gamma 2.2 issue years ago. But now I'm comfortable with those settings and they look natural. Prior to that, I was working in a multimedia business and had my monitor set to gamma 1.8 to match my colleagues, who were all using Macs. And we were receiving complaints from several customers that our products looked too dark. Gamma 1.8 has several advantages relative to actual linearity of displayed values, but it does not match the vast majority of computer monitors out there. Those were days prior to the sRGB standard, BTW, so we had to figure this stuff out by ourselves. Unfortunately, there was no way to fix the Mac monitors' gamma, which was factory set that way with no controls. So the artists learned to compensate by making all their graphics more washed out than natural. That is the main reason why the gamma 2.2 that comes with the linear workflow is preferable. This should be the focus of observation, instead of the fact that 128 corresponds to 0.5.

I'm lost there. It is probably because, to me, the relationship between 128 and 50% gray is not true when displayed on a monitor. Indeed, on a normal monitor, 128 is quite darker than 50% gray. And that is the whole point of all this gamma compensation issue and the linear workflow.

Exact.
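A quick way to check the 128-is-not-half-bright point, assuming a plain 2.2 display gamma:

```python
# A pixel value of 128 is NOT displayed at half the luminance of 255
# (assuming a plain 2.2 display gamma).
GAMMA = 2.2
print((128 / 255) ** GAMMA)             # ~0.22 of full luminance, not 0.5
print(round(255 * 0.5 ** (1 / GAMMA)))  # ~186 is the value that displays as 50%
```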
-
It is set to gamma 2.2. Making a monitor linear (gamma 1.0) would push the electronics quite far. A monitor's natural gamma is already about 2.5. Gamma 2.2 is a good compromise for all monitors without pushing their electronics too hard. So because the monitor electronics are set up for a gamma of 2.2, normally a factory-set monitor would not need any further gamma adjustment from the video card or such. The electronic gamma adjustment is not to be confused with the gamma control panels, though, which don't modify the electronics of the monitor but adjust the signals sent to the monitor.

... to make reasonably sure your monitor is set to gamma 2.2. But this chart is just approximate. It should work for all monitors set to 2.2 gamma with correct brightness and contrast settings. It is a quick check. But if the monitor controls are off by a good bit, this chart will not be very helpful.
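If you want to generate your own quick check chart, something like this works (it writes a plain PPM file, no libraries needed); view it at 100% zoom, since any scaling will blur the line pattern:

```python
# Quick 2.2 check chart: left half is alternating black/white scanlines
# (physically 50% of full light), right half is solid gray 186
# (= 255 * 0.5 ** (1/2.2)). On a monitor close to gamma 2.2, the two halves
# should look about equally bright from a distance.
W, H = 256, 128
rows = []
for y in range(H):
    left = 0 if y % 2 else 255
    rows.append(bytes([left] * 3) * (W // 2) + bytes([186] * 3) * (W // 2))
with open("gamma_check.ppm", "wb") as f:
    f.write(b"P6\n%d %d\n255\n" % (W, H))
    f.write(b"".join(rows))
```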
-
I'm pretty sure that there are no color profiles written in those files. Since LDR files are assumed to be in the sRGB color space anyway, especially JPEG, there is not much point in duplicating this information in a color profile. In other words, not putting an explicit color profile in those files is like saying that the color profile is sRGB. I coded the sRGB gamma function and I remember coding the exact sRGB curve, which is not quite gamma 2.2 but nearly so. sRGB has a linear segment in the dark colors designed to prevent some artifact, but I don't recall which artifact this is.

Yes. Absolutely.

Yes. It is messy, but you are right. Compositing applications like AE already have what it takes to degamma those images when they come in an LDR format. However, like you observe, you lose data by going this route. For this reason, it is better to either encode the LDR image in a 16-bit file format or, even better, in a floating point format (the so-called "32-bit images"). The best strategy is to keep the image in linear space in an HDR format, do the post production with linear images, and then gamma correct when post production is done, before outputting to the final format.

The only way to know for sure is to use a monitor calibration device. This will do the best job. By this I mean that whatever the capabilities of your monitor, it will set it in the most optimal way. So the charts may still look off, but you know this is still the best you can get with this monitor. The fact that the three gamma bars on Norman Koren's web page indicate different gammas is strange and would tell me that something is set up wrong somewhere. What this is, though, I don't know.

Yes. That is the principle. I'm sorry this is turning out so difficult. I'd like to help, but there is little I can do remotely.
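For reference, these are the standard published sRGB transfer functions, with the short linear segment near black mentioned above:

```python
# Standard sRGB transfer functions (published constants), including the
# short linear segment near black.
def srgb_encode(linear):
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055

def srgb_decode(encoded):
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

print(srgb_encode(0.5))   # ~0.735, close to but not exactly 0.5 ** (1/2.2)
```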