Hash, Inc. - Animation:Master

robcat2075

Hash Fellow

Posts posted by robcat2075

  1. Here is another modern render of one of my old Animation Showdown animations.

    I added a school gymnasium set and rendered with radiosity.

     

    A bird's-eye view of the chor looks like this. The set is a completely enclosed box with two klieg lights in the ceiling...

    ChorBirdseye.png

     

    A conventional render with those two lights gets this...

    Conv39b_151.png

     

    That is very severe. If I were going to use conventional lighting I would need to add a number of fill lights in strategic places.

    Here is a radiosity render. The shadow areas are no longer pitch black and there is visible detail even where the lights do not directly shine.

    910 RadiRaw.png

     

    Overall, however, it is too dark for my taste. Increasing the Intensity of the lights so that the characters were well illuminated caused the brightest spots on the floor to become overbright and clip.

    Instead I applied a gamma correction to the radiosity render. I'm liking this much better...

    910 Gammai.png
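    For anyone who wants to try the same adjustment outside of A:M, a gamma correction is just a power curve applied to the normalized pixel values. Here is a minimal sketch in Python with Pillow and NumPy; the 2.2 exponent and the output filename are my own assumptions for illustration, not necessarily what was used above.

    ```python
    # Minimal gamma-correction sketch (Pillow + NumPy).
    # A gamma greater than 1.0 lifts the midtones and shadows while leaving
    # pure black and pure white untouched, which is why it brightens the dark
    # areas without clipping the bright spots on the floor.
    import numpy as np
    from PIL import Image

    def apply_gamma(path_in, path_out, gamma=2.2):        # 2.2 is illustrative
        img = np.asarray(Image.open(path_in)).astype(np.float32) / 255.0
        corrected = np.power(img, 1.0 / gamma)            # power curve on 0..1 values
        Image.fromarray((corrected * 255.0 + 0.5).astype(np.uint8)).save(path_out)

    apply_gamma("910 RadiRaw.png", "910 Gamma.png")       # output name is made up
    ```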

     

     

    Unfortunately, the shadowing that was indistinct in the raw render is now even weaker. To give that some more bite I rendered a pass with Screen Space Ambient Occlusion (SSAO)...

    910 SSAO only.png

     

    ... and composited that by "multiplying" it with the Radiosity. I did that in After Effects but an A:M "composite Project" can do the same operation.
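    For reference, the multiply itself is a one-line operation if you would rather do it in a script. A rough sketch with Pillow and NumPy, assuming both renders are the same size; the input filenames are the attachments above, the output name is made up.

    ```python
    # Multiply-composite sketch: darken the radiosity render wherever the SSAO
    # pass is dark, leave it alone wherever the SSAO pass is white.
    import numpy as np
    from PIL import Image

    radiosity = np.asarray(Image.open("910 Gammai.png").convert("RGB")).astype(np.float32) / 255.0
    ssao      = np.asarray(Image.open("910 SSAO only.png").convert("RGB")).astype(np.float32) / 255.0

    combined = radiosity * ssao    # per-pixel multiply, i.e. the "Multiply" blend mode
    Image.fromarray((combined * 255.0 + 0.5).astype(np.uint8)).save("radiosity_x_ssao.png")
    ```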

    This PNG alternates "before" and "after"...

    ExerciseSSAO_animated.png

     

    SSAO has no anti-aliasing so I had to render those at 3x3 times the normal resolution to make smooth versions suitable for compositing.
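    The 3x3 oversampling amounts to rendering large and then averaging down. Something like this, assuming the oversized render is exactly three times the final dimensions (filenames made up):

    ```python
    # Downscale a 3x-oversized SSAO render so its hard-edged pixels get
    # averaged into smooth, anti-aliased values suitable for compositing.
    from PIL import Image

    big = Image.open("ssao_3x.png")                 # e.g. 1920x1440 for a 640x480 final
    small = big.resize((big.width // 3, big.height // 3), Image.LANCZOS)
    small.save("ssao_final.png")
    ```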

    When A:M introduced Radiosity our computers weren't ready for it. Each render took so long that animation was unthinkable. But now with a modern CPU and NetRender it is within reach.

    My 640x480 test renders for this scene took only about 3 minutes per frame. After I had my settings decided and cranked up the quality, the full-frame final renders took only about 20 minutes each.

     

    Get started with Radiosity with Yves Poissant's Cornell Box Tutorial

    Learn more at Yves Poissant's Radiosity/Photon Mapping Pages

     

  2. Thanks, David!

    I'm finding out now that this must be a 19.5 problem. If I go render in v19.0 the frames come out properly.

    I should have thought of this before since I already had a 19.5 problem with the regular color render of this scene. :facepalm:

  3. Here's a bird's-eye view of the chor. The alien in the camera view is highlighted; the offscreen aliens are to the right of the camera.

    image.png

     

    But now I've gone through deleting each alien one by one, re-rendering that frame, and the gash is still there...

    rad39nSSAOXXX_096.png

     

     

  4. Starting with the full chor plus a few additions, the gash is still there...

    rad39SSAOXXX_096.png

     

    But if I delete all the aliens (there are about 9 offscreen to the right), the gash is gone, with a new, smaller scratch on the front-most cushion...

    rad39cSSAOXXX_096.png

  5. 2 hours ago, itsjustme said:

    Can you upload a sample project where this happens, Robert?

    I just tried making a stripped-down version with only the brick wall... but that didn't produce the gash when I rendered.

    So there must be some point between all-of-the-other-objects and none-of-the-other-objects that creates the problem.

    I'll have to investigate further and report back.

    Thanks, nonetheless, David!

  6. I put a bump map on this model to simulate some wrinkles on the top, but although both top patches have their normals facing out, one patch shows the inverse bump result of the other...

    image.png

     

    Just to try it, I did RMB>Renumber CPs...

    image.png

     

    That fixed it! Don't know why, don't know how, but that did fix it.

    image.png

     

     

  7. I'm surprised it got green-lit at all.

    The concept might be a great one but is there an audience for a big-budget movie about a 2D animated character?

    Looney Tunes haven't been on Saturday morning TV (in the US) for more than 20 years, so that audience familiarity is gone. I wonder if anyone well below middle age remembers the Coyote and Acme.

    And, as you note, the last few 2D + live action outings have not done well.


    But I love that picture of the Coyote. It looks like they get it.

  8. Apparently an early Japanese effort at animation. It doesn't have a date attached to it but an online comment suggests 1932.

    I have no idea what is going on but I'll presume it's based on some traditional folk tale...
    @Rodney ?


    EDIT: Google Translate says

    Quote

     

    [Pre-war anime] Ameya Tanuki (1930) Talkie version [Japanese Old Animation]

    Animation of Victor Records' new album "Ameya Tanuki" (50715), composed by Benika Sassa, sung by Teiichi Nimura, and spoken by Eiko Hirai.

     



     

     

  9. I've always wondered how much other people see when they "visualize".

    When I imagine something, it's not like a dream where I really felt I was seeing something.

    I suspect there is a wide spectrum of experiences rather than just sees/doesn't see.

  10. Look at this fabulously flexible result of rigging a face with the Transfer_AW plugin.

    Steve @Shelton modeled this great character (right) and then we pulled out a lo-res version (left) with just enough splines to cover the essential landmarks of the face.

    We rigged and CP-weighted that with some minimal bones for the jaw and lip corners and then used Transfer_AW to interpolate that lo-res rig to the hi-res mesh.

    It would need some further fine-tuning but this is a huge time saver for rigging a face.
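    To give a sense of what that interpolation step is doing conceptually, here is a toy sketch of the simplest possible version: copying each hi-res CP's bone weights from its nearest lo-res CP. This is only an illustration of the idea, not what Transfer_AW actually does internally.

    ```python
    # Toy weight-transfer sketch: each high-res control point borrows the bone
    # weights of its nearest low-res control point.  Illustration only; NOT the
    # Transfer_AW plugin's actual algorithm.
    import numpy as np

    def transfer_weights(lo_points, lo_weights, hi_points):
        """lo_points: (N, 3) positions, lo_weights: list of N {bone: weight}
        dicts, hi_points: (M, 3) positions.  Returns one dict per hi-res CP."""
        hi_weights = []
        for p in hi_points:
            nearest = int(np.argmin(np.linalg.norm(lo_points - p, axis=1)))
            hi_weights.append(dict(lo_weights[nearest]))   # copy nearest CP's weights
        return hi_weights

    # Example: two lo-res CPs, three hi-res CPs (names are made up)
    lo = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
    w  = [{"jaw": 1.0}, {"lip_corner_L": 1.0}]
    hi = np.array([[1.0, 0.0, 0.0], [6.0, 0.0, 0.0], [9.0, 0.0, 0.0]])
    print(transfer_weights(lo, w, hi))
    ```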

       

     

     

  11. As I dimly understand it... Z-buffered lighting works by rendering a depth map from the viewpoint of the light. Every pixel of that map is really a number that is the distance from the light to the surface point that pixel lands on.

    Then, when the real camera image is being rendered, every pixel is checked to see whether the surface point it lands on is farther from the light than the depth recorded for that spot in the depth map.

    If it is farther, then some other surface must be in front of it from the light's point of view, so it is in shadow and should not get the light's intensity added to its surface color.
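    In toy form, that test reads something like this. It is a deliberately simplified sketch, not A:M's actual code; the light is reduced to an orthographic light looking straight down, so projecting a point into the light's view is just taking its (x, y).

    ```python
    # Toy sketch of the depth-map shadow test described above; not A:M's code.
    # The light is an orthographic light looking straight down the -Z axis.
    import numpy as np

    def build_shadow_map(surface_heights, light_z):
        # One depth per shadow-map pixel: distance from the light down to the
        # nearest surface seen through that pixel.
        return light_z - surface_heights

    def in_shadow(point, light_z, shadow_map, bias=1e-3):
        x, y, z = point
        dist_to_light = light_z - z
        # If the map recorded something closer to the light through this pixel,
        # another surface blocks the light and the point is shadowed.
        return dist_to_light > shadow_map[int(y), int(x)] + bias

    # Flat floor at z=0 with a block of height 1 in the middle, light at z=10.
    heights = np.zeros((4, 4))
    heights[1:3, 1:3] = 1.0
    zmap = build_shadow_map(heights, light_z=10.0)
    print(in_shadow((1, 1, 0.0), 10.0, zmap))   # True: floor point shaded by the block
    print(in_shadow((0, 0, 0.0), 10.0, zmap))   # False: open floor, directly lit
    ```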

    Apparently this is usually faster than tracing an actual ray from the surface point to the light.

    So why can't this process accommodate the Boolean cutter? I don't know, but it doesn't.

     
