Hash, Inc. Forums

SpaceToast

*A:M User*
  • Content Count

    70
  • Joined

  • Last visited

Community Reputation

0 Neutral

About SpaceToast

  • Rank
    Apprentice

Contact Methods

  • Website URL
    http://www.spacetoast.net
  • ICQ
    0

Profile Information

  • Name
    Matthew Rasmussen
  • Location
    Boston, MA, USA

Previous Fields

  • A:M version
    v18
  • Hardware Platform
    Macintosh
  • System Description
    3.06GHz Intel Core 2 Duo iMac, Mac OS X 10.5, 2GB RAM, 512MB NVIDIA GeForce 8800 GS
  • Self Assessment: Animation Skill
    Knowledgeable
  • Self Assessment: Modeling Skill
    Knowledgeable
  • Self Assessment: Rigging Skill
    Knowledgeable
  1. Sorry for the novel here. My team is nearing the end of post on our current film, and A:M's renderer has been on my mind a lot recently. I'd like to frame these notes as constructively as possible. Since it's gotten so long, I'll throw in some Big Headings for readability--and in hopes of making myself feel really important.

The Bottleneck

All of my films and many of my illustrations have featured A:M in one form or another. It's my utility knife app. I continue to find it an extremely powerful, user-friendly full-stack modeling/rendering/animation package at an admirable price. It's beginning to show its age, though. A 3D package is only as good as its output options. Moore's Law is dead; we're not going back to ever-faster single-core CPUs. Absent third-party renderer support, everything we do in A:M has to go through the bottleneck of a raytracer that can only utilize one processor core.

When Netrender was included with the base package (in v16, six years ago), I took it as a stopgap solution and an admission that the single-threaded renderer was becoming a problem. Looking at the v19 roadmap, I still see nothing about multicore rendering. According to Activity Monitor, raytracing in A:M launches one additional thread, with a negligible increase in memory usage. My current production machine, a laptop, has 8 cores.

Netrender, with respect, was never intended to be an artist's tool--which A:M always has been. It's a TD tool for idle overnight labs and rackservers: overly complex, crash-prone, difficult to use, and less convenient than the multiple applications of the Playmation era. I personally have never even gotten it to work.

Third-party renderer support is attractive in some ways, but aside from introducing many of the same pain points as Netrender, it would mean writing translation code for all of A:M's output options--not only Hash patches, but procedural textures, hair, volumetrics, soft reflections, IBL, ambient occlusion, etc. etc. etc. As much as I'd love to see COLLADA import/export eventually, we're talking about a very large undertaking, better implemented piecemeal over a longer period of time.

Keeping it Realistic

It's been my observation that an A:M window at render time is basically a "state" machine--anything that changes between frames or render passes is applied to the world state and then read back into the renderer. (You'll see soft lights literally shift position between antialiasing passes, for instance.) As such, simultaneous rendering of different frames or passes becomes a slow, crash-prone, and inefficient process of repeatedly loading multiple, near-duplicate instances of the same scene.

More realistic would be to render a single pass at a time, as now, but split it into tiles (say 128x128). Each 128x128 square becomes a render task fed into a queuing/load-balancing library like OpenMP, while the world state is maintained by the main thread. The queue completes (with an acceptable bottleneck at the last tile still rendering), the tiles are assembled in a new thread while the main thread is allowed to advance, and a new queue is spun up for the next render pass.

I notice that some great work has already been done in v18 at handing off render-pass reassembly and other post effects to the GPU. (Today even a netbook's integrated graphics will composite faster than a single CPU thread in most cases.) Pass compositing, post effects, and file I/O should be able to manage without reference to the main thread, so they need not hold up work on the next pass or frame.

Digging through the forums, I see that a multithreaded renderer was attempted in v14--splitting the image into horizontal strips in that case--but was abandoned because it produced render artifacts in complex scenes. I can only go by my own war stories to guess what might have caused them; many of you will remember the memory-limited old days, when illustration-quality renders required "scanning" across the image with a zoomed-in camera or obstructing different parts of the scene sequentially, rendering multiple frames, and reassembling the pieces in Photoshop. Post effects like glow and lens flares were obvious snags with these methods--but again, post effects are better handled on the fully reassembled image, preferably in a separate thread. Single-pass antialiasing, especially at the frame edges, could occasionally be a troublemaker, but overlapping the render tile edges by a few pixels fixed this. Single-pass motion blur and depth of field rarely produced good results; the time savings of saturating more than one core (even on a low-end system) would, to my mind, argue for their potential retirement if need be. Phong soft shadows were always a little finicky--they saved me on "Marboxian" with a 500MHz PowerPC, but I'm not convinced retiring them would cause much pain today. I have very little experience with the toon renderer, so I can't even attempt to comment on how well it might parallelize.

2001 Miles to Futureburg

With a multithreaded renderer, A:M's biggest bottleneck is ameliorated. The power available to users increases immediately, and begins to scale again with each hardware generation. Tile size and priority optimization can be tweaked in subsequent releases. More tasks that are suitable for handing off to the GPU, like Perlin noise, can be experimented with in their own time. With all the work that's been done to date on control point weighting, COLLADA import/export of rigged, animatable polygon models becomes more and more realistic (bare-bones and incomplete at first, but with plenty of time to improve). With effortless boolean animation, riggable resolution-independent curved surfaces, hassle-free texture mapping, particle/hair/cloth/flock/physics sim, volumetrics, image-based lighting, AO, radiosity, 32-bit OpenEXR rendering and much more, A:M becomes an app indie filmmakers and small shops can't afford not to plug into their pipelines.
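To make the tiling idea a bit more concrete, here's a minimal sketch of what I have in mind--not A:M code (I don't have access to the source), just one pass split into 128x128 tiles under OpenMP, with the scene treated as read-only for the duration of the pass. Scene, Color, FrameBuffer and trace_pixel are placeholder names made up for the example; compile with -fopenmp or equivalent.

// Minimal sketch: one render pass split into 128x128 tiles under OpenMP.
// The important part is that the world state is read-only for the whole pass,
// so the tiles can render in any order on any core.
#include <algorithm>
#include <vector>

struct Color { float r, g, b, a; };

struct Scene { /* world state, frozen for the duration of the pass */ };

struct FrameBuffer {
    int width, height;
    std::vector<Color> pixels;        // width * height, row-major
};

// Stand-in for the raytracer proper.
Color trace_pixel(const Scene&, int /*x*/, int /*y*/) { return {0.0f, 0.0f, 0.0f, 1.0f}; }

void render_pass(const Scene& scene, FrameBuffer& fb, int tile = 128)
{
    const int tiles_x = (fb.width  + tile - 1) / tile;
    const int tiles_y = (fb.height + tile - 1) / tile;
    const int n_tiles = tiles_x * tiles_y;

    // Each iteration is one tile; schedule(dynamic) lets OpenMP load-balance
    // the tiles across however many cores the machine has.
    #pragma omp parallel for schedule(dynamic)
    for (int t = 0; t < n_tiles; ++t) {
        const int x0 = (t % tiles_x) * tile;
        const int x1 = std::min(x0 + tile, fb.width);
        const int y0 = (t / tiles_x) * tile;
        const int y1 = std::min(y0 + tile, fb.height);
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x)
                fb.pixels[y * fb.width + x] = trace_pixel(scene, x, y);
    }

    // Pass compositing, post effects and file I/O could be handed to another
    // thread (or the GPU) here, freeing the main thread to set up the next
    // pass or frame.
}

Overlapping the tile edges by a few pixels, as I mentioned above for antialiasing, would just mean widening x0/x1/y0/y1 before the inner loops and blending the seams during reassembly.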
  2. All realtime rendering of image files is broken in 18.0p Mac on machines with Nvidia graphics cards. DrPhibes reported a portion of this issue back in August and markw helpfully pointed out that he'd isolated it to Macs with Nvidia cards, but it's more severe than that report suggests. I've tested the following:

- Targa (32-bit, with & without alpha channel)
- PSD (multilayer with transparency, single layer with transparency, single layer with alpha channel, single layer without alpha or transparency)
- PNG (with & without transparency)
- TIFF (with & without transparency)
- Color Decal
- Cookie Cut Decal
- Rotoscope
- OpenGL3 (Anisotropic Filter on and off, "Scale always to power of two" on and off)
- OpenGL

In all cases, the image raytraces correctly (including transparency/alpha), but realtime rendering either causes the patch to disappear entirely or display video memory garbage as per the attached screenshot. The issue didn't occur in previous (numbered) versions.
  3. This little experiment may yet crash and sink against the iceberg of my limited mathematical skills. If you look inside the file, guide hair CPs have a length and an orientation (no translation). The orientation is not calculated relative to world space, but relative to the normal of the CP the guide hair springs from. Normals don't seem to be accessible with Expressions, are not saved in the file... and I can't figure out how to calculate them myself. (Someone please correct me if I'm wrong about either of the first two parts.) The basic method above will work, but I'll have to orient the bones by hand. My dreams of doing all the grunt work in AppleScript are slipping away. Still, I've got a working script to generate and translate the bones, and adding a Relationship with the guide-hair-to-bone Expressions should be straightforward enough. Keep in mind, a 6x6 mesh (36 guide hairs) with 7 control points per guide hair would need to be driven by 252 individual Expressions.
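For what it's worth, here's the polygon-style approximation I'd try if I ever get back to scripting this: average the cross products of successive edge vectors around the CP. A:M presumably derives its real normals from the spline tangents, so this is an assumption on my part and may not match what the renderer uses; Vec3, ring, and approximate_normal are just names for the example.

// Rough approximation of a CP normal from the positions of its neighbouring
// CPs: average the cross products of successive edge vectors around the point.
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 cross(const Vec3& o) const {
        return {y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x};
    }
};

// Scale a vector to unit length (leave zero vectors alone).
Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0 ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

// cp: the control point whose normal we want.
// ring: its neighbouring CPs, ordered around it with consistent winding,
//       treated as a closed loop.
Vec3 approximate_normal(const Vec3& cp, const std::vector<Vec3>& ring) {
    Vec3 n{0.0, 0.0, 0.0};
    for (std::size_t i = 0; i < ring.size(); ++i) {
        Vec3 a = ring[i] - cp;
        Vec3 b = ring[(i + 1) % ring.size()] - cp;
        n = n + a.cross(b);
    }
    return normalize(n);
}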
  4. I've got this solved, at least for four-point patches. The roll handles of each of the bones in the chain simply need to aim at the CP opposite the one generating the hair guide. This seems to work on all valid 4-point patches. When a CP is owned by multiple patches, nothing changes; the first patch that references the CP in the .mdl file determines the opposite control point to orient toward. The advantages of manipulating true bone chains become apparent when trying to work with long hair (3 or more control points per guide hair). With a real bone chain, you can A) lock bones in the chain, B) apply constraints, and C) move bones at the base of the chain without affecting those higher up (except when necessary). This is an attempt on my part to unite the posability of "helmet" type hair with the realism of generated hair. [Edited to remove automatic smilie at "B)". How I loathe smilies...]
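In script terms, the rule boils down to something like the sketch below--assuming (and I haven't verified that A:M guarantees this) that the four CPs of a patch are listed in order around its perimeter, so the "opposite" CP is simply two steps away. The names are made up for the example.

#include <array>

struct Vec3 { double x, y, z; };

// cps: the patch's four control point positions, in the order they appear in
//      the .mdl file (assumed here to run around the patch perimeter).
// guide_index: which of the four CPs the hair guide springs from (0-3).
// Returns the direction the roll handle should aim: toward the diagonal CP.
Vec3 roll_handle_aim(const std::array<Vec3, 4>& cps, int guide_index) {
    const Vec3& from = cps[guide_index];
    const Vec3& to   = cps[(guide_index + 2) % 4];   // diagonally opposite CP
    return {to.x - from.x, to.y - from.y, to.z - from.z};
}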
  5. I've had some luck rigging hair guides to bones with Expressions, but I've hit a snag. If anyone has ideas, please post them here. Here's how to get started:

- In Modeling mode, create a new bone chain. Match the bones to the default positions of the guide hairs' control points.
- In Skeletal mode in an Action, Pose or Choreography, jiggle the position of the bones to create Rotate channels for them.
- In Grooming mode, jiggle the positions of the guide hair's control points to create Rotate channels for them.
- Select the first hair guide CP in the PWS, and bring up the properties window.
- Leave the Rotate properties unexpanded, so that they're all on the same line.
- Right (Control) click on the Rotate properties, and select Edit Expression.
- A blank Rotate= expression is created. Expand the first bone of the chain in the PWS and click on its Transform.Rotate.X channel.
- Delete the ".X" from the Expression. You'll get something like "..|..|..|..|Bones|Bone1.Transform.Rotate"
- Repeat for each of the remaining guide hair CPs and bones in the chain.

Move the bone chain into some weird angle. Hit the space bar to update the window, and the guide hairs should move. The trouble is, they probably won't match the positions of the bones: the Z-rotation (roll handles) of the bone doesn't match the Z-rotation assigned to the guide hair. It's dancing like it was told to do, it just doesn't know which way is up. If you go back into the model and start playing with the roll handles, you should be able to get the bone chain and the hair guide to match up. How? I'm not entirely certain. In a simple 4-spline/4-point patch, aiming the roll handles of ALL bones in the chain at the opposite CP will make it match up. On a more complex surface, I'm still lost. Here is a (working) example: Hair_Test.prj
  6. I've always dealt with similar problems by using AppleScript to manipulate the project file. I find it gives me more control over instancing than I can get with the flocking plugins. (And no overlaps that aren't my own fault.) The format is pretty easy to understand for the most part. For the new BTP album cover, I wrote a script to instance several thousand "beans" in a column above the logo. Each was assigned one of three colors, Newton Physics, and a semi-randomized position and orientation. Then I reopened the .prj file in A:M, ran the Newton Physics plugin, and picked the frame I liked best. I did basically the same thing earlier on the Population Map.
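If it helps anyone, the placement logic for the beans amounts to no more than the sketch below (shown in C++ rather than my AppleScript, with made-up column dimensions and names; the actual lines written into the .prj are specific to A:M's file format, so I've left that part as a comment rather than guess at the syntax).

// Sketch of the "several thousand beans in a column" placement: each instance
// gets one of three colours and a semi-randomized position and orientation.
#include <cstdio>
#include <random>

struct Instance {
    double x, y, z;        // position (Y up: the column rises above the logo)
    double rx, ry, rz;     // Euler orientation in degrees
    int    colour;         // index into the three bean colours
};

int main() {
    std::mt19937 rng(42);                                          // fixed seed: reproducible scatter
    std::uniform_real_distribution<double> radius(-10.0, 10.0);    // column cross-section (made up)
    std::uniform_real_distribution<double> height(50.0, 400.0);    // height above the logo (made up)
    std::uniform_real_distribution<double> angle(0.0, 360.0);
    std::uniform_int_distribution<int>     colour(0, 2);

    for (int i = 0; i < 3000; ++i) {
        Instance b{radius(rng), height(rng), radius(rng),
                   angle(rng), angle(rng), angle(rng), colour(rng)};
        // A real script would emit a model instance block into the .prj here;
        // the syntax is omitted because it is specific to A:M's file format.
        std::printf("bean %d: pos(%.1f %.1f %.1f) rot(%.0f %.0f %.0f) colour %d\n",
                    i, b.x, b.y, b.z, b.rx, b.ry, b.rz, b.colour);
    }
}

After that, it's exactly as described above: reopen the .prj in A:M, run the Newton Physics plugin, and pick the frame you like.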