
Depth Of Field and SBAO


detbear


I seem to have run into some banding when mixing the Screen Based Ambient Occlusion and Depth Of Field. Is this a byproduct of the two being used together, or is there a setting in the SBAO that will decrease the dark, halo-like banding on things?

 

Thanks for any suggestions,

 

Kevin


  • Hash Fellow

I seem to have run into some banding when mixing the Screen Based Ambient Occlusion and Depth Of Field.

 

 

Can you show an example?

 

 

 

The banding also sometimes occurs when combining SBAO with lens flare.

 

 

For that case I would suggest doing the lens flare as a separate pass, with everything else in the chor black and all other lights off, and then compositing that with a flareless render.
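In case a concrete example helps, here is a minimal sketch of that kind of composite in Python, assuming the two passes have been rendered out as image files (the file names below are placeholders, not anything A:M produces). Since a lens flare is essentially emissive, a simple additive blend over the flareless render usually does the job:

```python
# Hedged sketch: composite a flare-only pass (rendered with the chor black and
# all other lights off) over the flareless beauty render. File names are
# hypothetical placeholders.
import numpy as np
import imageio.v2 as imageio

beauty = imageio.imread("render_no_flare.png").astype(np.float32) / 255.0
flare = imageio.imread("flare_only_pass.png").astype(np.float32) / 255.0

# Flares are emissive light, so an additive (or "screen") blend works well.
composite = np.clip(beauty + flare, 0.0, 1.0)

imageio.imwrite("composite.png", (composite * 255).astype(np.uint8))
```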


There may have been some speed improvements to the REAL ambient occlusion feature... may be worth a try. It may be included in the DOF operation whereas anything 'fake' or 'post' would not be.

 

I think to use that one you would need to open the Choreography's properties and turn up the AO setting there, as well as turn it on in the render settings... I am confused as to why there are 2 controls... maybe Gerald knows.


The controls in the choreography are quite simple to understand:

- Ambiance Color: What color the "light" / brightness should have... most people use white, but for different light situations you can use something different too.

- Ambiance Intensity: How much lighter you want the whole scene to be. (Consider the light from any lights in your scene together with this value!)

- Ambiance Occlusion: How much darker the areas affected by AO should be.

 

The controls in the render settings are for something different:

- Ambiance Occlusion: Do you want to render with it at all or not?

- Occlusion Sampling: This determines how much noise you will see in the darkened areas where AO comes into it. It highly depends on the scene, but for me 60%, for instance, does the job most of the time. The higher this value, the longer AO will take to render.

- Transparent AO (Slow!) is exactly what it sounds like... if you have a transparent object above your object (let's say a roof window), AO will by default consider the transparent object as solid. If you do not want that, turn this option on. But be aware... "Slow" stands there for a reason... (although it is not as bad as it sounds, if you ask me...)

 

Or were you talking about something different?

 

In general, concerning these problems: all post-effects and post-filters will have a problem with SSAO if you ask me... SSAO is a post-filter too. It uses the depth map of the rendered image to add simulated shading on top, but that is already based on the pixel data created by the rendering, not on the 3D data in the scene. AO, on the other hand, is based on the 3D data (which makes it much harder to calculate), and because of that it should be able to handle DOF and other post-effects better.
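To illustrate the point (this is only a toy example, not A:M's actual SSAO code): a screen-space pass has nothing but the rendered depth buffer to work with, so all it can do is darken a pixel when nearby depth samples sit closer to the camera than it does. A rough sketch:

```python
# Toy screen-space AO: darken pixels whose neighbours in the depth buffer are
# closer to the camera. Purely illustrative; not A:M's implementation.
import numpy as np

def toy_ssao(depth, radius=4, strength=1.0):
    occlusion = np.zeros_like(depth)
    offsets = [(dy, dx) for dy in (-radius, 0, radius)
                        for dx in (-radius, 0, radius) if (dy, dx) != (0, 0)]
    for dy, dx in offsets:
        neighbour = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
        # Neighbours nearer to the camera than this pixel add occlusion.
        occlusion += np.clip(depth - neighbour, 0.0, 1.0)
    occlusion = np.clip(strength * occlusion / len(offsets), 0.0, 1.0)
    return 1.0 - occlusion  # multiply the beauty pass by this to darken creases

depth = np.random.rand(480, 640).astype(np.float32)  # stand-in for a real Z buffer
ao_factor = toy_ssao(depth)
```

Because the depth buffer only exists after the render, anything that happens afterwards (DOF blur, flares) never sees this darkening, which is where the mismatch comes from.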

 

I'd say it could be fixed if SSAO could be put into the calculation before other post-effects are applied to the image... but I am not sure about that.

 

See you
*Fuchur*


I presume this is about multi-pass DOF and not the fast DOF?

Well, more comparing the 'fast GPU' varieties (there are 2 I know of: Jenpy's FakeAO CPU, which is a post effect, and A:M's own 'Screen Space Ambient Occlusion') to what I 'think' is A:M's real AO generator, available as a list item just above the SSAO option in the render Options dialogue...

 

When it comes to DOF, since the 2 fast AO methods (Jenpy's FakeAO and A:M's SSAO) are added post-render, they are not affected by the DOF blur operation, so you can have a non-blurred AO effect over blurred imagery, which can look bad and cause banding and other issues. My question is: does the other 'regular' AO get calculated into the DOF? All 3 options are available in the standard A:M render OR multi-pass render (I know that the two renderers go about DOF in different ways).

 

This is something that I would need to test to get to the bottom of... and since I don't ever see any noticeable result with the standard AO (plus, it has very few controls... ON/OFF and graininess), testing would be difficult.

 

I think AO (and DOF, btw) is a very important feature in 3D rendering, and an area where A:M could improve. Jenpy's Fake AO GPU is FANTASTIC, however useless due to apparent size restrictions (and only 32-bit version support)... I would LOVE to see it fixed and incorporated into A:M, especially after playing with both DOF and AO in Element 3D (GPU) and seeing how both features can be PUSHED and controlled deftly, keyframed, and GPU rendered with no wait.

 

A:M's way of controlling DOF is very intuitive, with the 'focal area' starts and ends clearly defined and keyframable. BUT what if you wanted to push the amount of un-focus to a more obtuse degree (more blur)? Some people will say 'render a depth map and do it in post', which, yeah, is a fine solution; you CAN push the blur as far as you want in AE... however, there are quality issues with the image results due to the depth map's 256-level depth and edge issues, and if your camera is moving through the scene or you don't have a set background object in view, other problems arise, making this approach mostly useless as well. A true solution is to generate and apply both features within the 3D render. To A:M's defense, DOF (animation) is very difficult in 3D packages, and Maya/Max/C4D/modo users are frequently complaining about their renders as well. Element3D has a great handle on both features, I've found, and seeing your results in real time without setting up and awaiting a render is priceless.
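For what it's worth, here is a loose sketch of that 'depth map in post' approach (generic Python, not how AE or A:M actually does it; the arrays are stand-ins for a real beauty render and depth pass). The blur radius per pixel is driven by distance from the focal plane, and with only 256 depth levels the radius can only change in coarse steps, which is one source of the banding and edge problems described above.

```python
# Hedged sketch of post DOF driven by a depth pass. The stepped blending is
# deliberately crude; with an 8-bit depth map the steps get even coarser.
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_blur(image, depth, focus=0.5, max_sigma=8.0, steps=8):
    """Blend between a few pre-blurred copies based on distance from focus."""
    coc = np.clip(np.abs(depth - focus) * 2.0, 0.0, 1.0)  # crude circle of confusion
    out = np.zeros_like(image)
    for i, sigma in enumerate(np.linspace(0.0, max_sigma, steps)):
        blurred = image if sigma == 0 else gaussian_filter(image, sigma=(sigma, sigma, 0))
        lo, hi = i / steps, (i + 1) / steps
        band = (coc >= lo) & (coc < hi) if i < steps - 1 else (coc >= lo)
        out += blurred * band.astype(np.float32)[..., None]
    return out

image = np.random.rand(270, 480, 3).astype(np.float32)  # stand-in beauty render
depth = np.random.rand(270, 480).astype(np.float32)     # stand-in depth pass (0..1)
result = depth_blur(image, depth)
```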

 

Has Jenpy ever fixed his AO GPU? Good question; I am off to find out. If I remember right, you could 'push' the effect with greater control and faster renders than with his CPU version. It just had a size limitation: a 720 x 486 render would work great, but a 1920 x 1080 would only have the AO rendered in part of the screen.


  • Hash Fellow

I've always wondered if A:M's fast Depth of Field, the DOF you get if you have Multi-pass OFF, could be modified to allow greater blurring.

 

It does what it does well, but only up to a small amount of blur.


  • Hash Fellow
you CAN push the blur as far as you want in AE... however there are quality issues with the image results due to the depth map's 256-level depth,

 

 

Does AE not allow you to use an OpenEXR depth map? A:M can make those.

 

edge issues, and if your camera is moving through the scene or you don't have a set background object in view, other problems arise, making this approach mostly useless as well.

 

 

 

If it's a matter of an object entering from out of the frame, you could widen the camera view by a number of pixels equal to the width of the blur and then crop the render back down to the desired size.
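If it helps, the arithmetic for that pad-and-crop trick is simple (the numbers below are just an example, assuming a 16-pixel maximum blur on a 1080p target):

```python
# Render wider by the blur radius on each side, blur in post, then crop back.
blur_radius = 16                       # widest blur you expect, in pixels (assumed)
target_w, target_h = 1920, 1080

render_w = target_w + 2 * blur_radius  # 1952: render this size in A:M
render_h = target_h + 2 * blur_radius  # 1112

# After blurring the padded frame in post, crop the border back off, e.g.:
# cropped = padded[blur_radius:blur_radius + target_h,
#                  blur_radius:blur_radius + target_w]
```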


I've always wondered if A:M's fast Depth of Field, the DOF you get if you have Multi-pass OFF, could be modified to allow greater blurring.

 

It does what it does well, but only up to a small amount of blur.

Yeah, it's like the programmer figured 'no one should ever need more blur than this...' and in truth... a camera shooting in real life limits the DOF as well. Both the multi-pass and standard A:M renders limit the amount of blur... in multi-pass you can add more passes, but that just smooths the blur with more offset iterations; it does not enlarge the blur amount.
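A rough way to picture why more passes only smooth rather than enlarge the blur (an assumption-laden sketch, not A:M's renderer; render_from_offset() is a hypothetical stand-in for one pass): each pass is the same scene rendered with the camera nudged inside a fixed aperture radius, and the passes are averaged, so the aperture radius, not the pass count, sets the maximum blur size.

```python
# Illustrative multi-pass DOF accumulation: more passes fill the same aperture
# circle more densely (smoother blur); only a bigger radius makes a bigger blur.
import numpy as np

def render_from_offset(dx, dy):
    # Hypothetical placeholder for one pass rendered with the camera shifted
    # by (dx, dy) within the aperture.
    return np.random.rand(270, 480, 3).astype(np.float32)

def multipass_dof(passes=16, aperture_radius=2.0):
    acc = np.zeros((270, 480, 3), dtype=np.float32)
    for i in range(passes):
        angle = 2.0 * np.pi * i / passes
        dx = aperture_radius * np.cos(angle)  # fixed radius, regardless of pass count
        dy = aperture_radius * np.sin(angle)
        acc += render_from_offset(dx, dy)
    return acc / passes

blurred = multipass_dof(passes=16)
```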

 

 

'Does AE not allow you to use an OpenEXR depth map? A:M can make those' - I will test that!

If it's a matter of an object entering from out of the frame, you could widen the camera view by a number of pixels equal to the width of the blur and then crop the render back down to the desired size. -What-who-where? Workaround to the workaround power! I need fewer workarounds and more functioning features.

 

I keep playing with this, TRYING to find the holy grail of settings or a method that, 'bing', I understand. Yesterday I had such a 'bing' moment with SSS, which I feel is now rendering swiftly and can be controlled quite nicely and is a GREAT FUNCTIONING A:M feature! Just like A:M NetRender... FANTASTIC FEATURE!


As far as I can tell, there are currently multiple flavors of FAKE AO for A:M:

 

Jenpy's FASTAO is only available in 32-bit and works in v16 & v17 only; there was no GPU version of FASTAO (it's pretty fast anyway). I did not try it in 32-bit v18.

 

Steffen's SSAO (screen space AO) is available in v18 only? (64-bit only?) It works with OpenGL (which is slower than FASTAO and is the CPU version) and OpenGL3 (which is the GPU version and is plenty fast, but to me doesn't look as good; still, it's good enough).

 

And yes, since ALL fake AOs in A:M are post effects, there are anomalies when they are applied within A:M (I have also noticed funnies).


  • Hash Fellow

 

'Does AE not allow you to use an OpenEXR depth map? A:M can make those' - I will test that!

If it's a matter of an object entering from out of the frame, you could widen the camera view by a number of pixels equal to the width of the blur and then crop the render back down to the desired size. -What-who-where? Workaround to the workaround power! I need fewer workarounds and more functioning features.

 

 

OpenEXR is the right way to make a depth map; there's no way an 8-bit map will be good enough.

 

AE can be configured with a text file to read in the OpenEXR buffers automatically, so that should be the standard way to work rather than a work-around.

 

Yesterday I had such a 'bing' moment with SSS, which I feel is now rendering swiftly and can be controlled quite nicely and is a GREAT FUNCTIONING A:M feature! Just like A:M NetRender... FANTASTIC FEATURE!

 

 

 

Tell us more about this!


Wow... That's all great info. Thanks, everyone.

 

Matt... the normal AO still lags way behind the Jenpy and SBAO methods in render time. I like the normal AO best, but the time per frame was wayyyyy too long on my current project: 20 seconds per frame versus like 7 minutes per frame.

 

And the difference was not extreme. In fact, the SBAO had the same visual result; it just wasn't working well with the DOF effect. What I decided was to render in layers and composite in AE. That way I could just blur the layers as needed to give things a depth-of-field look.

 

Turned out OK although the steps are a pain in the neck.
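A loose sketch of that layer workflow, assuming each layer was rendered with an alpha channel (the random arrays and blur amounts below are placeholders): blur each layer by an amount tied to its distance from the focal plane, then composite back to front.

```python
# Hedged sketch of layer-based DOF: blur per layer, then "over" composite.
import numpy as np
from scipy.ndimage import gaussian_filter

def over(fg_rgb, fg_alpha, bg_rgb):
    """Composite one layer over the accumulated background."""
    a = fg_alpha[..., None]
    return fg_rgb * a + bg_rgb * (1.0 - a)

# (rgb, alpha, blur sigma) per layer, ordered back to front; all placeholders.
layers = [
    (np.random.rand(270, 480, 3), np.random.rand(270, 480), 6.0),  # far background
    (np.random.rand(270, 480, 3), np.random.rand(270, 480), 0.0),  # in-focus subject
    (np.random.rand(270, 480, 3), np.random.rand(270, 480), 3.0),  # near foreground
]

result = np.zeros((270, 480, 3))
for rgb, alpha, sigma in layers:
    if sigma > 0:
        rgb = gaussian_filter(rgb, sigma=(sigma, sigma, 0))
        alpha = gaussian_filter(alpha, sigma=sigma)
    result = over(rgb, alpha, result)
```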


  • Hash Fellow

I see that A:M now supports Photoshop PSD... does that work like EXR in that each buffer would be placed on a separate layer? (I'm testing!) What is the advantage of EXR...? Floating-point depth thingamajig?

 

A unique combination of features makes OpenEXR a good fit for high-quality image processing and storage applications:

 

 

High dynamic range: Pixel data are stored as 16-bit or 32-bit floating-point numbers. With 16 bits, the dynamic range that can be represented is significantly higher than the range of most image capture devices: 10^9, or 30 f-stops, without loss of precision, and an additional 10 f-stops at the low end with some loss of precision. Most 8-bit file formats have around 7 to 10 f-stops.

 

 

A 32-bit number can distinguish between more than 4 billion values instead of 256. In computers that is typically the range ±2,147,483,647.

 

Floating point means it can use that precision at different scales. It can draw millions of levels of depth between 1 cm and 2 cm, or it can draw millions of levels between 1 billion cm and 2 billion cm, and do it all in the same image if it needs to.

 

A 32-bit floating-point number is big enough to represent a quintillion light years in centimeters and small enough to represent the width of the smallest atomic particle at the same time, too.

 

The upshot of all that precision is that you never have to scale a 256-step depth map to represent a certain distance from near to far. 0 is zero, but what is 255 in real-world units? 10 feet? 50 feet? A mile?

 

 

Good color resolution: With 16-bit floating-point numbers, color resolution is 1024 steps per f-stop, as opposed to somewhere around 20 to 70 steps per f-stop for most 8-bit file formats. Even after significant processing, for example extensive color correction, images tend to show no noticeable color banding.
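A tiny demo of that precision point (the numbers are arbitrary; this is just to make the quantization visible): an 8-bit depth map can only hold 256 distinct distances across whatever near/far range you pick, while a float buffer simply stores the distance itself.

```python
# 8-bit depth quantization vs. float depth, with made-up distances in cm.
import numpy as np

near, far = 10.0, 5000.0  # assumed scene depth range
true_depth = np.array([10.0, 10.5, 11.0, 2500.0, 4999.0], dtype=np.float32)

# 8-bit: normalize to the near/far range and round to one of 256 steps.
quantized = np.round((true_depth - near) / (far - near) * 255.0)
recovered = quantized / 255.0 * (far - near) + near

print(recovered)   # 10.0, 10.5 and 11.0 all collapse to the same value
print(true_depth)  # the float buffer keeps them distinct
```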

 

 


The test did not go so well... PSD and EXR failed to render, whereas PNG renders fine... gotta figure that out... BUT a rudimentary QUESTION REGARDING SSAO...

Can you just turn ON SSAO and it is good to go, or do you also need to turn on the depth buffer, the 'apply camera's post-effects to renderings' option, or the plugin shaders option...?


Just turn on post effects; you do not need the depth buffer. This is from 18n (SSAO only, using OpenGL3, i.e. GPU); it took 1 sec of processing time for the SSAO.

 

My settings:

16 samples

64 radius

100 distance

100 dense

100 soft

1 gamma

0 illuminance

blur effect off (should have been on)

 

EDIT - 2nd image is with blur effect ON, blur radius = 2, applied to image

 

EDIT2 - and just for giggles, I hand-colored the SSAO version (I accidentally started with the version without hair). I think I will be doing this more frequently!

testssao0.png

testssao2bluronapplied0.png

Edited by NancyGormezano

AWESOME, Nancy! That is the look I am after, and I don't think you can ask for much more than that from AO. SO, you render out the AO pass separately and then combine in post?

Kevin/William/detbear is correct - I only rendered the SSAO separately to show you what it looks like with my settings - I usually do NOT render separately. If you use the default settings for SSAO, you will see that it's not very strong, sometimes hardly any difference.

 

And yes, the gold dress/skin, etc. is MATCAP, and there is also a flare ON.

