Hash, Inc. - Animation:Master

Approaches to Blazing Fast Rendering


Rodney

Recommended Posts

  • Admin

Dan is struggling to meet a deadline over in another topic so I'm starting this here to collect general advice.

For those who feel they can assist him in getting his job rendered, please comment over there:

 

Dan's topic

 

Meanwhile... we'll collect some thoughts, theories, and practices for faster rendering here in this topic so that we can bravely face the next deadline.



  • Hash Fellow

- Z-buffered Klieg light shadows may render faster than ray-traced Klieg light shadows.

- Sun light shadows are ray-traced only, but a distant Klieg light will look similar.

- Turning "Cast shadows" OFF on hair will render faster.

- Complex materials can be baked into a bitmap for faster rendering.

- Fewer lights render faster than more lights (mostly true of ray-traced lights).

- A regular render has anti-aliasing equivalent to a 16-pass render, but is much faster.


  • Admin

Thanks Robert!

 

---------------------------------------------------------------------------------------------------------------------------------

DISCLAIMER: The following is some theoretical stuff about rendering from a view of the renderer constantly working for the user.

Don't take it too seriously. It's not coming your way soon. It's just theoretical.

---------------------------------------------------------------------------------------------------------------------------------

 

One thing I've long wanted to explore is something I might call a hungry renderer.

I'm sure there are better names for such a critter, but the idea is that the renderer is constantly on the prowl, very proactive, waiting to grab a new batch of instructions to render. If the renderer encounters an image that has already been rendered before, it places the instruction to render that image again at the bottom of the queue and nominates another instruction to render. *If* there is ever any idle time, the renderer smartly deals with the set-aside instructions in turn, plusses them up if allowed to, or improves the quality of the image (or as dictated by custom settings).

 

The goal would be to have the renderer always rendering.

If the renderer stops rendering, it is out of optimal parameters (i.e. something is wrong with the settings, either default or customized).

 

The first thing to target would be a stack hierarchy, where a stack of instructions is executed in linear order *unless* there is another instruction of higher priority, in which case the higher-priority instruction would pop up to the top of the stack (or the bottom, if using that hierarchical methodology in a fashion similar to many of A:M's approaches to priority).

 

The second would be the queuing process itself, which would have to be optimized so as not to re-read the listing/index too often (wasting processing cycles for no good reason when they could be used for rendering). My initial instinct would be to give every instruction a numerical value that assumes 1) a new instruction is a higher-priority instruction, and 2) the new instruction may allow itself to enter the queue at a lower priority.

 

It might work like this:

- Publish new instruction

- Write/Append old instruction to (end of) new instruction

- If the new instruction's priority is lower than the second line's priority, then sort the queue listing

(This would place the new instruction into the correct order hierarchically)

 

So how would you modify the priority of that instruction?

 

Let's say that instruction was:

7 Do this and that

 

The current queue listing includes:

 

10 Render this

8 Render this too

4 Please render this fast

2 Whenever

0 When all else fails do this

 

So the modified queue listing becomes:

 

7 Do this and that

10 Render this

8 Render this too

4 Please render this fast

2 Whenever

0 When all else fails do this

 

Since 7 is less than 10 the list is sorted, resulting in:

10 Render this

8 Render this too

7 Do this and that

4 Please render this fast

2 Whenever

0 When all else fails do this

 

But what if you want to interrupt and modify the priority of that instruction because you see that it isn't high enough in the queue?

You simply add a new priority:

 

9 7 Do this and that

10 Render this

8 Render this too

7 Do this and that

4 Please render this fast

2 Whenever

0 When all else fails do this

 

Since 9 is still less than 10, the queue knows that this new modification is not of the highest priority, so it sorts the queue listing again, resulting in:

 

10 Render this

9 7 Do this and that

8 Render this too

7 Do this and that

4 Please render this fast

2 Whenever

0 When all else fails do this
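The priority scheme walked through above can be sketched in a few lines. This is only an illustrative sketch, not anything from A:M; the class and method names (`RenderQueue`, `submit`, `reprioritize`) are invented for the example. New instructions enter at the top of the listing, and a sort restores hierarchical order whenever the new priority turns out not to be the highest — and re-prioritizing simply submits a new entry while the old one stays behind on its lower tier, exactly as in the listings above.

```python
# A minimal sketch of the "hungry renderer" queue described above.
# Priorities are plain integers; higher runs first. All names here
# (RenderQueue, submit, reprioritize) are illustrative assumptions.

class RenderQueue:
    def __init__(self):
        self.items = []          # list of (priority, instruction)

    def submit(self, priority, instruction):
        # New instructions enter at the top of the listing...
        self.items.insert(0, (priority, instruction))
        # ...then a sort places them in hierarchical order if the new
        # priority is not actually the highest.
        if len(self.items) > 1 and priority < self.items[1][0]:
            self.items.sort(key=lambda it: it[0], reverse=True)

    def reprioritize(self, new_priority, instruction):
        # Promote by re-submitting under the new priority; the old
        # entry stays behind as a lower-tier (quality) job.
        self.submit(new_priority, instruction)

q = RenderQueue()
for pri, job in [(0, "When all else fails do this"), (2, "Whenever"),
                 (4, "Please render this fast"), (8, "Render this too"),
                 (10, "Render this")]:
    q.submit(pri, job)
q.submit(7, "Do this and that")
q.reprioritize(9, "Do this and that")
print([pri for pri, _ in q.items])  # [10, 9, 8, 7, 4, 2, 0]
```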

 

Since we haven't removed the old render line item, it remains in the queue but is now on a new tier, a quality-rendering queue.

When it is queued up again it will execute the instructions of the secondary queue.

 

Let's say that the secondary queue's instructions are to create a JPG thumbnail image of any existing images in the secondary queue.

*Here's a rendering timesaver:* images that already exist are not re-rendered; they bypass the renderer and are simply copied to the targeted format.

Once complete, the secondary queue looks for something else to do.

If all instructions are completed, it moves on to the third queue (or as customization dictates).

 

Let's say the instruction in the third queue is to compile any like-named image sequence into a .MOV file.

The converter might kick in and begin converting the smallest files first (yielding the greatest throughput), or as dictated by customization.

As the converter need not be a 'renderer', much time is saved in rendering.

There is no need to render an image sequence to .MOV because, as long as it is in the queue, it will be automatically converted.

The third-tier queue would need to be smart enough to know when a sequence is ready for conversion, or it would need to be able to append multiple .MOVs.

In the abstract this would be trivial because the interface would use thumbnails as proxies.

Manual concatenation of images or sequences might be dictated by On/Off (push down on all the images to be converted) or by selecting and grouping images into separate queues. An equivalent in A:M would be selecting multiple objects in a Model or Chor and naming that Group. While the display name of that Group might be whatever the user finds useful, that is just a label; the true 'name' of that Group would be the priority and instructions that set it into the queue(s).
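The third-tier queue's "smart enough to know when a sequence is ready" step could be sketched like this. The frame-naming pattern and the smallest-first ordering are assumptions for illustration, not A:M's actual conventions: frames sharing a base name are grouped, a group counts as ready when its frame numbers are contiguous, and ready groups are handed off smallest-first.

```python
# Sketch: detect complete, like-named image sequences for conversion.
# The naming pattern (base_0001.png) is an assumption, not A:M's API.
import re
from collections import defaultdict

FRAME_RE = re.compile(r"^(?P<base>.+?)_(?P<num>\d+)\.(png|tga|exr)$")

def group_sequences(filenames):
    groups = defaultdict(list)
    for name in filenames:
        m = FRAME_RE.match(name)
        if m:
            groups[m.group("base")].append(int(m.group("num")))
    # A sequence is "ready" when its frame numbers are contiguous.
    ready = {}
    for base, nums in groups.items():
        nums.sort()
        if nums == list(range(nums[0], nums[-1] + 1)):
            ready[base] = nums
    # Convert the smallest sequences first for the greatest throughput.
    return sorted(ready, key=lambda b: len(ready[b]))

files = ["a_0001.png", "a_0002.png", "b_0001.png",
         "b_0003.png", "c_0001.png", "c_0002.png", "c_0003.png"]
print(group_sequences(files))  # ['a', 'c'] -- 'b' has a gap at 0002
```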

 

Each time a user issues new instructions, those instructions go into the queue.

After a while there might be hundreds if not thousands of things for the renderer to do, assuming the renderer ever runs out of things to do.

In time the best rendering instructions (for each individual user) get run first.

In this way the renderer continues to learn.

 

Perhaps the best part of this approach is that it is scalable.

It can work with one user or everyone in the entire world.

It can work on a single computer or over a network.

If an image already exists, it need not be re-rendered, or it can be modified.

The project file plus the instructions are enough to establish what the image will look like once rendered.

In many cases most files might never need to be rendered at the highest quality because the best solution will already have been rendered for you.
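The point that the project file plus the instructions fully determine the rendered image suggests one concrete mechanism: hash both together into a cache key and skip the renderer whenever that key has been seen before. This is only a sketch of the idea; `render_or_reuse`, the `cache` dict, and the fake renderer are all invented for illustration.

```python
# Sketch: content-addressed render cache. If project + instructions
# determine the image, their hash identifies a finished render.
import hashlib

def cache_key(project_bytes, instructions):
    h = hashlib.sha256()
    h.update(project_bytes)
    h.update(repr(sorted(instructions.items())).encode())
    return h.hexdigest()

cache = {}

def render_or_reuse(project_bytes, instructions, render_frame):
    key = cache_key(project_bytes, instructions)
    if key not in cache:
        cache[key] = render_frame(project_bytes, instructions)
    return cache[key]

calls = []
def fake_render(project, instr):
    calls.append(1)             # count how often we actually "render"
    return b"pixels"

prj = b"project file contents"
opts = {"frame": 12, "passes": 16}
render_or_reuse(prj, opts, fake_render)
render_or_reuse(prj, opts, fake_render)   # second call hits the cache
print(len(calls))  # 1
```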

 

Seriously though... renderer... always rendering.

Not necessarily at the peak of using all available resources, but understanding and executing its instructions perfectly, constantly, continuously. Of course, you could always override the default and put the renderer manually into sleep mode. When needed again, simply wake it up. It already knows what it needs to do.


  • Hash Fellow

Here's the essential problem...

 

If the renderer encounters an image that has already been rendered before, it places the instruction to render the image again at the bottom of the queue and nominates another instruction to render.

 

...you can't know that new instructions are asking you to render something that has already been rendered until you have actually carried them out and compared the new to the old. That's the way I interpret it.


  • Admin
It sounds like hyperthreading, as explained by Yves, is a way to keep the CPU always working on something.

 

I confess that I do not know enough to even hazard a guess regarding hyperthreading.

From what little I do know I'd say that seems to be the clear road ahead.

 

The problem we face with the 'always on' approach is that it is seen as wasteful.

But this assumes a couple of things:

 

1) That what is produced is often wasteful

2) That "waste" cannot also be useful

 

One thing that can help is for the end user to define what they consider waste.

Sometimes (more often than not) it can be assumed that they don't know.

So how do you deal with waste that isn't defined?

We can project and predict.

 

With projection we make educated guesses on what we will encounter (or need) in the future.

With prediction we define and refine the next projection.

 

There is a method of texturing that saved a lot of time (over earlier attempts) in that, rather than paint each pixel separately, the programmers used a method more like throwing a can of paint onto a wall. Instead of painting each part of the wall increment by increment, they covered the whole thing in an instant. (Masking would keep parts of the paint from sticking to the wall.)

 

Similarly, in rendering we have a problem in that we (as users) don't often think about how similar one frame is to the next, nor do we consider that we often re-render the exact same frame (or nearly so) multiple times because we have animated no change there.

 

I'm not sure how we can get a renderer to determine what has changed from one frame to the next but it seems to me that working from the view that nothing has changed is a good start. If we can eliminate what will not change from the process then we will have optimized our processing.
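Working from the view that nothing has changed can be sketched directly: serialize each frame's scene state and render only the frames whose state differs from the previous frame, reusing the last image otherwise. The `scene_states` dictionaries here are invented test data standing in for whatever a real renderer would compare.

```python
# Sketch: assume nothing changed; render a frame only when its scene
# state differs from the previous frame's state.
def frames_to_render(scene_states):
    to_render, last = [], None
    for i, state in enumerate(scene_states):
        if state != last:
            to_render.append(i)
            last = state
    return to_render

# Frames 1-3 are identical to frame 0; only frames 0 and 4 need work.
states = [{"cam": (0, 0), "pose": "A"}] * 4 + [{"cam": (0, 1), "pose": "A"}]
print(frames_to_render(states))  # [0, 4]
```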

 

To me this means that we should endeavor to apply the same (or similar) methods found in animation to rendering. Specifically:

 

Key

-Inbetween

--Breakdown

-Inbetween

Key

 

What is the difference between the first key and the last?

What is the difference between the result from that and the Breakdown?

The inbetween is an algorithmic process (i.e. on a sliding scale it's relatively easy to figure that one out)

 

So we need to go from 3 to 2 to 1 in the process of creating the optimized instruction.

Then we reverse the process to engineer the rendering algorithm: 1 to 2 to 3.
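The claim that the inbetween is an algorithmic process can be made concrete with the simplest possible case, a linear blend between two keys. Real interpolation in an animation package uses splines rather than straight lines, so treat this only as a sketch of why the keys carry all the new information.

```python
# Sketch: an inbetween as a weighted blend of two key values.
def inbetween(key_a, key_b, t):
    # t = 0.0 gives key_a, t = 1.0 gives key_b, t = 0.5 the breakdown.
    return key_a + (key_b - key_a) * t

print(inbetween(10.0, 20.0, 0.5))  # 15.0
```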

 

Of course it's considerably more difficult than that but it's a place to start.

 

What is it that they say about actors/acting? Perform (do) this until something causes you to perform (do) something else?

A renderer can certainly do that.

 

Here's a key component:

- Observe

- Orient

- Decide

- Act

 

The OODA loop is as good for renderers as it is for pilots and animators.

 

What targets do we observe between our key parameters? (Know your Environment)

How do we best orient to engage? (Point of Contact)

What is the best Course of Action (CoA)?

Are we decisively Acting? (An incorrect action at this stage may be better than no action)

 

So what does this have to do with a renderer?

 

Most of us have a general idea of what we want but we don't know how to get that idea rendered out.

We mostly go by trial and error (we smile and frown a lot).

We usually forget what we've learned in the past, and this affects our future outlook when we find ourselves on similar ground.

We are more likely to make the same mistakes often (twice is usually enough to avoid repeating that course of action unless risk allows for it)

We have too many options so we often hesitate to act. We second guess ourselves.

We become even more cautious because the results weren't what we wanted or were lacking.

We avoid commitment.

 

Want to win the rendering war?

Formulate a smaller OODA loop than the one you've currently got.


One trick for faster rendering is to only render what needs to be rendered.

 

I ran into a time crunch rendering the Vulcanine scenes on Stalled Trek and realized that for shots where the camera didn't move (and they weren't interacting with the background) I could turn off the characters, render one frame of the background, bring it in as a layer to match up to the camera shot and then turn off all of the background elements. Those shots rendered much faster.

 

You have to be mindful of lighting and such, but the time saved on those shots gave me the time I needed to render the shots with camera movement.


One trick for faster rendering is to only render what needs to be rendered.

 

But also be mindful that if you change something in the choreography (say, move the camera later on in the timeline) in between rendering the foreground and the background, those changes don't have an unforeseen impact on the earlier section. I lost a whole bunch of time earlier in the project when I rendered the characters and the background/foreground independently. I did some more work on a later segment thinking the earlier part was done with, discovered an issue in the earlier segment, and then the re-render didn't match up properly.

 

Sorry if that is a bit hard to follow.


  • Admin
I lost a whole bunch of time earlier in the project when I rendered the characters and the background/foreground independently.

 

Dan,

Just for clarification purposes, do you mean you 'saved' a whole bunch of time, or you 'wasted' a bunch of time by rendering the characters and the background/foreground independently? That sentence can be interpreted either way. The word 'lost' usually has a negative connotation, so I want to be sure I understand. You seem to be indicating that you had to discard the previously rendered frames.

 

Thanks!


Definitely wasted, Rodney. After I'm done with the Christmas ep I'm gonna post NetRender's interpretation of the segment of animation I'm talking about. It's a seemingly simple shot of Tech walking over to Game, but even though his head bone was not animated or set to be a dynamic constraint, NetRender decided to move the bone at random anyway, and it looks like Tech has Parkinson's disease.

 

But in my experience, the easiest way to get fast render times is to avoid particle hair wherever possible, unless you have your own render farm that can do a whole whack of frames all at once. My i5 processor only has 4 cores, and at 4 frames every 15 to 20 minutes, a render longer than a few seconds takes forrrrrrrever to finish.


  • Hash Fellow
I'm gonna post Netrender's interpretation of the segment of animation I'm talking about.

 

We should investigate why that happened. It is EXTREMELY unusual for NetRender to produce a different result than A:M.


We should investigate why that happened. It is EXTREMELY unusual for Netrender to produce a different result than A:M

 

It's the first time it's ever happened to me, and it only happened at that one particular spot. But it's the same choreography and the same instance of Tech for the entire scene. I double-checked the timeline for the head bone, and it's keyed at one position at the start of the cycle and turned slightly at the end of the cycle (because he's looking at Game as he walks over). When I rendered that segment with A:M's built-in render system it renders fine. But like I said, loading that exact same file into NetRender caused his head to practically shake off his shoulders.

 

In fact, I couldn't wait to show you guys. If you're drinking anything, I suggest putting it down lest you snort it through your nose.

 

shake_what_your_designer_gave_ya.mov

 

On the right we have the correct render, and on the left we have Tech looking like he's not had his cocaine in a while.


Certainly does, Robert. To fix the background/foreground syncing problems I rendered the whole thing all at once, and his head went all jiggly again. But I saved over those frames with the finished ones. It's not a huge deal; I just thought you guys would get a giggle out of it.

