Hash, Inc. - Animation:Master

A vs B (On Inputs and Outputs, Resamplings and Exposures...)


Rodney


  • Admin

The current forum banner (over the Latest Info forum) is a single frame from a 500-frame sequence rendered straight out of A:M, with one exception: it was resized (to a lower resolution) in an image utility called IrfanView.
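
As an aside, that resize step could just as easily be scripted. Here's a minimal sketch using Python's Pillow library; the filename and target width are placeholders, not the banner's actual dimensions:

```
# Rough equivalent of the IrfanView resize step, using Pillow.
# The filename and target width are placeholders, not the banner's actual specs.
from PIL import Image

frame = Image.open("frame_0001.png")      # frame rendered straight out of A:M
target_width = 900                        # assumed banner width
scale = target_width / frame.width
banner = frame.resize((target_width, round(frame.height * scale)), Image.LANCZOS)
banner.save("banner.png")
```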

 

Where I have the option I prefer to use images rendered directly out of A:M; this is something of a 'purist' view.

I would relate this to what most would assume is PIXAR's approach, which is to forego any compositing, post-production, etc.; what comes directly out of the renderer is what will be used.

This is the A game of animation. The ideal environment in which to produce.

 

The B game is what most of us tend to use, if for no other reason than practicality. I'll call it the new-Disney approach.

It might better be called the Hollywood approach or Summer Blockbuster... everyone is doing it.

And we enjoy this power of tweaking and adjusting, cropping and recoloring, recombining and generally re-purposing our resources.

This is what I tend to use other programs for; initial setup (if needed) and finalizing for presentation.

 

The attached is a rendering out of the newly open-sourced program OpenToonz. Technically, it could be called a re-rendering.

All of the imagery is still that which was rendered directly out of A:M, but only five frames are used from the 500-frame sequence.

The closed egg is exposed for the entire duration of the animation (almost as if it were a background), with the other four images superimposed.

 

There are a few settings that didn't get adjusted correctly; the camera's view is slightly too wide. I re-rendered in OpenToonz with a slightly smaller camera view to get all of the banner into view.

The timing is not optimized; frames were exposed where I thought they might work and for as long as I thought necessary to be viewed.

No optimizations or post processing to smooth or ease in/out. No multiplane camera moves. No recoloring. No film grain. No bells. No whistles.

Just the retiming of five images rendered out of Animation:Master, done in OpenToonz via its exposure sheet.
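
If it helps to picture what the exposure sheet is doing, here's a rough sketch of the same idea in Python with Pillow. The filenames and hold lengths are invented for illustration; they aren't the actual timing used for the banner.

```
# Rough sketch of the exposure-sheet idea: one background image held for the
# whole duration, with each of the other frames superimposed and held for a
# chosen number of output frames. All names and timings here are invented.
from PIL import Image

background = Image.open("egg_closed.png").convert("RGBA")  # exposed the entire duration

exposures = [            # (cel image, output frames to hold it) -- hypothetical values
    ("crack_01.png", 24),
    ("crack_02.png", 24),
    ("duck_peek.png", 36),
    ("duck_out.png", 48),
]

frame_no = 0
for cel_name, hold in exposures:
    cel = Image.open(cel_name).convert("RGBA")
    composed = Image.alpha_composite(background, cel)   # cel superimposed over background
    for _ in range(hold):
        composed.save(f"out{frame_no:04d}.png")
        frame_no += 1
```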

 

Certainly, this could have played out in another fashion; the various elements of font, egg, duck, and ground could have been rendered out separately and composited together. This can be a very useful approach if you know particular elements of a scene will change or need to be re-purposed. Perhaps the show might play again but this time with some other character/animal emerging from the egg. Perhaps the logo might change or fade in/out... and on and on and on.
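
A minimal sketch of that layered alternative, again in Python with Pillow (the layer names are hypothetical; the banner wasn't actually produced this way):

```
# Minimal sketch of the layered alternative: each element rendered separately
# with an alpha channel, then stacked back to front. Layer names are
# hypothetical; the banner was not actually produced this way.
from functools import reduce
from PIL import Image

layer_files = ["ground.png", "egg.png", "duck.png", "logo.png"]   # back to front
layers = [Image.open(name).convert("RGBA") for name in layer_files]
frame = reduce(Image.alpha_composite, layers)
frame.save("composited_frame.png")
```

Swapping in a different character or logo then only means replacing one layer file and re-running the composite, rather than re-rendering the whole shot.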

 

I am in a constant state of conflict between A and B: rendering everything directly out of A:M versus tweaking everything in post.

It's not one of those conflicts like warfare though. It's an effort to render better imagery and perhaps even learn.

It's like animation itself; working from Pose A to Pose B seeking the perfect breakdown position that will yield the absolute best performance in support of our stories.

out000 shrink 4.mov


  • Hash Fellow

My own feeling is that I prefer to do everything in one program environment (Game A) if I can, but most projects require capabilities that can't all be found in one program and I have to combine elements (Game B).

 

It's easy to make a modification and have it bubble through to the final product if you are in one environment (Game A).

 

In Game B, I find myself avoiding small tweaks because it means retracing the workflow from the beginning to get the small tweak to be in the final product.
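
One partial remedy is to make that retrace automatic, so a small tweak only re-runs the steps that depend on it. A rough sketch of the idea in Python; the step names, commands, and file paths are all invented:

```
# Rough sketch of automating the Game B retrace: re-run a pipeline step only
# when one of its inputs is newer than its outputs. Steps, commands, and file
# paths are invented for illustration.
import os
import subprocess

STEPS = [
    # (inputs,                 outputs,                command to run)
    (["scene.prj"],            ["render/out0000.tga"], ["./render_scene.sh"]),
    (["render/out0000.tga"],   ["comp/banner.png"],    ["./composite_banner.sh"]),
]

def is_stale(inputs, outputs):
    if not all(os.path.exists(path) for path in outputs):
        return True
    newest_input = max(os.path.getmtime(path) for path in inputs)
    oldest_output = min(os.path.getmtime(path) for path in outputs)
    return newest_input > oldest_output

for inputs, outputs, command in STEPS:
    if is_stale(inputs, outputs):
        subprocess.run(command, check=True)
```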

 

 

Just for the record, we should note that Pixar does a lot of post adjustment, compositing, and editing after the footage comes out of the renderer, so they are more Game B than Game A.


  • Admin
Pixar does a lot of post adjustment and compositing and editing after the footage comes out of the renderer

 

This is something I'd like to delve into at some point but I currently don't have enough data to present.

The realities of filmmaking are increasingly making it difficult for everything to pour out of a renderer fully formed.

This relates to some of the recent changes at Disney, where the Hyperion renderer was built from scratch in short order and then used on 'Big Hero 6', 'Zootopia', etc., rather than RenderMan.

 

What this makes me wonder about even more is how the dynamics work between PIXAR and DISNEY... not to mention other production houses (DreamWorks, Weta, etc.) that are in competition with (or riding the coattails of) the Disney juggernaut. Specifically with Disney/PIXAR, though, the question is about Research and Development and how elements of one inform those of the other.

And with Disney's purchase of Lucasfilm, a ton of technology and experience has rolled in-house that wasn't accessible before.

And... we haven't even seen the next wave of films that propose to advance filmmaking further, such as the next four Avatar movies... and how those have taken root with Disney too (in a way taking the place of Lucasfilm's Star Tours attraction, now that all of that has moved under the Disney umbrella).

 

they are more Game B than Game A.

 

 

I don't know enough at this point to know what I don't know.

PIXAR is certainly trying to project that they are fully invested in Game A.

I've seen relatively little behind-the-scenes evidence released to suggest the contrary.

I do think they shifted some significant focus near the release of 'WALL-E', and this was accentuated by their (rather odd, IMO) decision to include live-action characters in that movie.

Their need to connect the audience with reality appears to have compelled them into Game B territory in order to make the common-humanity-gone-to-excess throughline of the story work.

 

In Game B, I find myself avoiding small tweaks because it means retracing the workflow from the beginning to get the small tweak to be in the final product.

 

 

This sounds to me like you are attempting to be in a Game B environment but with a Game A approach.

Game B, by its very nature, needs to be modular, scalable... open.

Game A is, by comparison, modular, scalable, and (mostly) closed.

What's the difference? I'd say it's mostly one of inputs and outputs.

 

The example of this in A:M would be that of textures. Textures certainly can be created in A:M; in fact, they could all be created in A:M, but that might be costly, so A:M allows importing external images for that purpose. Here, then, is a logical entry point in a Game A scenario. Textures of all kinds can be photographed, drawn, painted, or created in almost any form for use as images in A:M's native environment. As is often the case... garbage in/garbage out... it's important to consider what foreign matter is being introduced into the ecosystem.
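
To put a concrete face on the garbage-in/garbage-out point, a pre-import check on an external texture image can be as simple as this Python/Pillow sketch; the mode list and size limit are arbitrary examples, not A:M requirements:

```
# Sketch of a "garbage in" check on an external texture image before importing
# it. The mode list and size limit are arbitrary examples, not A:M requirements.
from PIL import Image

def check_texture(path, max_side=4096):
    img = Image.open(path)
    problems = []
    if img.mode not in ("RGB", "RGBA", "L"):
        problems.append(f"unusual color mode: {img.mode}")
    if max(img.size) > max_side:
        problems.append(f"very large image: {img.size[0]}x{img.size[1]}")
    return problems

for problem in check_texture("bark_photo.png"):   # hypothetical texture file
    print("warning:", problem)
```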

The same can be said doubly (triply?) for meshes made in other programs as A:M is not optimized for use as a Game B platform.

 

A slight caveat should be inserted at this point, because A:M does work well with other programs, but only as well as the differences between those programs' inputs and outputs are understood.

This is the same with every program.

 

I like both Game A and Game B approaches.

The one I tend to prefer is that which is currently (or optimally) working. :)

