Hash, Inc. - Animation:Master

SpaceToast
*A:M User*, 70 posts
Everything posted by SpaceToast

  1. Sorry for the novel here. My team is nearing the end of post on our current film, and A:M's renderer has been on my mind a lot recently. I'd like to frame these notes as constructively as possible. Since it's gotten so long, I'll throw in some Big Headings for readability--and in hopes of making myself feel really important.

     The Bottleneck

     All of my films and many of my illustrations have featured A:M in one form or another. It's my utility knife app. I continue to find it an extremely powerful, user-friendly, full-stack modeling/rendering/animation package at an admirable price. It's beginning to show its age, though. A 3D package is only as good as its output options. Moore's Law is dead. We're not going back to ever-faster single-core CPUs. Absent third-party renderer support, everything we do in A:M has to go through the bottleneck of a raytracer that can only utilize one processor core. When Netrender was included with the base package (in v16, six years ago), I took it as a stopgap solution and an admission that the single-threaded renderer was becoming a problem. Looking at the v19 roadmap, I still see nothing about multicore rendering. According to Activity Monitor, raytracing in A:M launches one additional thread, with a negligible increase in memory usage. My current production machine, a laptop, has 8 cores.

     Netrender, with respect, was never intended to be an artist's tool--which A:M always has been. It's a TD tool for idle overnight labs and rackservers: overly complex, crash-prone, difficult to use, and less convenient than the multiple applications of the Playmation era. I personally have never even gotten it to work. Third-party renderer support is attractive in some ways, but aside from introducing many of the same pain points as Netrender, it would mean writing translation code for all of A:M's output options--not only Hash patches, but procedural textures, hair, volumetrics, soft reflections, IBL, ambient occlusion, etc. etc. etc. As much as I'd love to see COLLADA import/export eventually, we're talking about a very large undertaking, better implemented piecemeal over a longer period of time.

     Keeping it Realistic

     It's been my observation that an A:M window at render time is basically a "state" machine--anything that changes between frames or render passes is applied to the world state and then read back into the renderer. (You'll see soft lights literally shift position between antialiasing passes, for instance.) As such, simultaneous rendering of different frames/passes becomes a slow, crash-prone and inefficient process of repeatedly loading multiple, near-duplicate instances of the same scene. More realistic would be to render a single pass at a time as now, but split it into tiles (say 128x128). A separate rendering thread is spun up for each 128x128 square, with a queuing/load-balancing library like OpenMP managing the pool; the world state is maintained by the main thread. The queue completes (with an acceptable bottleneck at the last tile still rendering), the tiles are assembled in a new thread while the main thread is allowed to advance, and a new queue is spun up for the next render pass (a rough sketch follows at the end of this post). I notice that some great work has already been done in v18 at handing off render-pass reassembly and other post effects to the GPU. (Today even a netbook's integrated graphics will composite faster than a single CPU thread in most cases.) Pass compositing, post effects, and file I/O should be able to manage without reference to the main thread, so they need not hold up work on the next pass/frame.
     Digging through the forums, I see that a multithreaded renderer was attempted in v14--splitting the image into horizontal strips in that case--but was abandoned because it produced render artifacts in complex scenes. I can only go by my own war stories to guess what might have caused them; many of you will remember the memory-limited old days, when illustration-quality renders required "scanning" across the image with a zoomed-in camera, or obstructing different parts of the scene sequentially, rendering multiple frames, and reassembling the pieces in Photoshop. Post effects like glow and lens flares were obvious snags with these methods--but again, post effects are better handled on the fully reassembled image, preferably in a separate thread. Single-pass antialiasing, especially at the frame edges, could occasionally be a troublemaker, but overlapping the render tile edges by a few pixels fixed this. Single-pass motion blur and depth of field rarely produced good results; the time savings of saturating more than one core (even on a low-end system) would, to my mind, argue for their potential retirement if need be. Phong soft shadows were always a little finicky--they saved me on "Marboxian" with a 500MHz PowerPC, but I'm not convinced retiring them would cause much pain today. I have very little experience with the toon renderer, so I can't even attempt to comment on how well it might parallelize.

     2001 Miles to Futureburg

     With a multithreaded renderer, A:M's biggest bottleneck is ameliorated. The power available to users increases immediately, and begins to scale again with each hardware generation. Tile size and priority optimization can be tweaked in subsequent releases. More tasks that are suitable to be handed off to the GPU, like Perlin noise, can be experimented with in their own time. With all the work that's been done to date on control point weighting, COLLADA import/export of rigged, animatable polygon models becomes more and more realistic (bare-bones and incomplete at first, but with plenty of time to improve). With effortless boolean animation, riggable resolution-independent curved surfaces, hassle-free texture mapping, particle/hair/cloth/flock/physics sim, volumetrics, image-based lighting, AO, radiosity, 32-bit OpenEXR rendering and much more, A:M becomes an app indie filmmakers and small shops can't afford not to plug into their pipelines.
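     To make the tile-queue idea concrete, here is a rough sketch using nothing but Python's standard library--obviously not what A:M would use internally, and render_tile() is only a placeholder for the real per-tile raytrace--but it shows the structure: the main thread owns the world state, workers each take a 128x128 tile (padded by a few pixels of overlap to hide antialiasing seams), and reassembly can happen while the main thread moves on to the next pass.

# Sketch of the tile queue (Python stand-in, not A:M code). render_tile() is a
# placeholder for the real raytrace; the main thread keeps ownership of the
# world state, which is read-only for the duration of the pass.
from concurrent.futures import ThreadPoolExecutor

TILE = 128     # tile edge in pixels
OVERLAP = 2    # overlap tiles a few pixels to hide per-tile antialiasing seams

def render_tile(world_state, x0, y0, x1, y1):
    # Placeholder: raytrace the region [x0, x1) x [y0, y1) against world_state.
    return [[world_state["background"] for _ in range(x0, x1)]
            for _ in range(y0, y1)]

def render_pass(world_state, width, height, workers=8):
    jobs = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for y in range(0, height, TILE):
            for x in range(0, width, TILE):
                x0, y0 = max(0, x - OVERLAP), max(0, y - OVERLAP)
                x1 = min(width, x + TILE + OVERLAP)
                y1 = min(height, y + TILE + OVERLAP)
                jobs.append(((x, y), pool.submit(render_tile, world_state,
                                                 x0, y0, x1, y1)))
    # Reassemble the finished tiles, cropping away the overlap. In a real
    # renderer this step (plus post effects and file I/O) could run in its own
    # thread while the main thread advances to the next pass or frame.
    frame = [[None] * width for _ in range(height)]
    for (x, y), job in jobs:
        tile = job.result()
        ox, oy = x - max(0, x - OVERLAP), y - max(0, y - OVERLAP)
        for row in range(min(TILE, height - y)):
            for col in range(min(TILE, width - x)):
                frame[y + row][x + col] = tile[oy + row][ox + col]
    return frame

# e.g. frame = render_pass({"background": 0}, width=640, height=480)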
  2. Two hours are up! Did I make it? Nope, not even close. Not bad for two hours, but the head and hands will take at least another hour of modelling. The fiddly bits are a mess of bad normals, and A:M absolutely refuses, no matter what I promise, to believe that there's a valid 5-point patch behind the right shoulder (though it's fine with the duplicate on the left). This little Maxwell will have to wait to be finished another day. Still, a good challenge for a chronic perfectionist like me.
  3. I've set myself a challenge tonight. I'd like to see if I can go from sketch to usable character in two hours. The character is a Maxwell -- a sort of demon, or imp, based roughly on child proportions. At the end of two hours, I'd like to have: Produced front and side sketches Scanned them as rotoscopes Modelled the character Added the 2001 Rig Textured vinyl-like wrinkles where the skin bends in certain places This won't be a full-screen character, and a lot of fine detail would be eaten up by the noise I'll have to add to the final layout. (I'll ultimately be building a scene around a noisy, low-light photoshoot.) I'll check in tonight at 10:00, and then -- hopefully -- at midnight.
  4. Nope! Sorry, you're right. 16-25 passes. So 4x4 or 5x5 oversampling.
  5. *has to check...* Nope, no rigs, just standard A:M lights and 16-25x oversampling. Here is the setup for the cover art: And here is the setup for the product shot. Note, with all the reflective surfaces in the image, the use of ambient white "cards" behind the shadow-casting lights: Cheers, -Matt
  6. I've mostly been busy building the frontend on FoodFiltr.com, but this was a project I did last year. Beantown Project is an awesome local group from Boston. I designed their album art for the "Moving at the Speed of Life" album. The rest of the pics are here.
  7. This little experiment may here crash and sink against the iceberg of my limited mathematical skills. If you look inside the file, guide hair CPs have a length and an orientation (no translation). The orientation is not calculated relative to world space, but relative to the normal of the CP the guide hair springs from. Normals don't seem to be accessible with Expressions, are not saved in the file... and I can't figure out how to calculate them myself. (Someone please correct me if I'm wrong about either of the first two parts.) The basic method above will work, but I'll have to orient the bones by hand. My dreams of doing all the grunt work in AppleScript are slipping away. Still, I've got a working script to generate and translate the bones, and adding a Relationship with the guide-hair-to-bone Expressions should be straightforward enough. Keep in mind, a 6x6 mesh with 7 control points per guide hair would need to be driven by 252 individual Expressions.
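     For anyone who wants to take a crack at the normals themselves, here is roughly how I would approximate one from the patch data--this is guesswork about the math, not anything documented in the .mdl format, and it assumes the patches are wound in a consistent order:

# Guesswork, not documented A:M behavior: approximate a CP's normal by
# averaging the cross products of the edge directions leaving it in each
# patch that owns it. Assumes patches are listed with a consistent winding.
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    length = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0]/length, v[1]/length, v[2]/length) if length else (0.0, 0.0, 1.0)

def cp_normal(cp, patches, positions):
    """cp: CP index; patches: lists of 4 CP indices in winding order;
    positions: dict mapping CP index -> (x, y, z)."""
    total = (0.0, 0.0, 0.0)
    for patch in patches:
        if cp not in patch:
            continue
        i = patch.index(cp)
        prev_cp, next_cp = patch[i - 1], patch[(i + 1) % len(patch)]
        n = cross(sub(positions[next_cp], positions[cp]),
                  sub(positions[prev_cp], positions[cp]))
        total = tuple(t + c for t, c in zip(total, n))
    return normalize(total)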
  8. I've got this solved, at least for four-point patches. The roll handles of each of the bones in the chain simply need to aim at the CP opposite the one generating the hair guide. This seems to work on all valid 4-point patches. When a CP is owned by multiple patches, nothing changes; the first patch that references the CP in the .mdl file determines the opposite control point to orient toward. The advantages of manipulating true bone chains become apparent when trying to work with long hair (3 or more control points per guide hair). With a real bone chain, you can A) lock bones in the chain, apply constraints, and C) move bones at the base of the chain without affecting those higher up (except when necessary). This is an attempt on my part to unite the posability of "helmet" type hair with the realism of generated hair. [Edited to remove automatic smilie at "B)". How I loathe smilies...]
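     In script form the lookup is trivial: walk the patches in the order they appear in the .mdl, take the first one containing the CP, and grab the CP two steps around the ring. (This mirrors the behavior described above--it's my reading of it, not anything documented.)

# Given patches as 4-item lists of CP indices, in the same order they appear in
# the .mdl file, return the CP diagonally opposite `cp` in the first patch that
# references it.
def opposite_cp(cp, patches):
    for patch in patches:
        if cp in patch:
            return patch[(patch.index(cp) + 2) % 4]
    return None  # the CP is not part of any 4-point patch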
  9. I've had some luck rigging hair guides to bones with Expressions, but I've hit a snag. If anyone has ideas, please post them here. Here's how to get started:

     - In Modeling mode, create a new bone chain. Match the bones to the default positions of the guide hairs' control points.
     - In Skeletal mode in an Action, Pose or Choreography, jiggle the position of the bones to create Rotate channels for them.
     - In Grooming mode, jiggle the positions of the guide hair's control points to create Rotate channels for them.
     - Select the first hair guide CP in the PWS, and bring up the Properties window.
     - Leave the Rotate properties unexpanded, so that they're all on the same line.
     - Right (Control) click on the Rotate properties, and select Edit Expression.
     - A blank Rotate= expression is created. Expand the first bone of the chain in the PWS and click on its Transform.Rotate.X channel.
     - Delete the ".X" from the Expression. You'll get something like "..|..|..|..|Bones|Bone1.Transform.Rotate"
     - Repeat for each of the remaining guide hair CPs and bones in the chain.

     Move the bone chain into some weird angle. Hit the space bar to update the window, and the guide hairs should move. The trouble is, they probably won't match the positions of the bones: the Z-rotation (roll handle) of each bone doesn't match the Z-rotation assigned to the guide hair. It's dancing like it was told to; it just doesn't know which way is up. If you go back into the model and start playing with the roll handles, you should be able to get the bone chain and the hair guide to match up. How? I'm not entirely certain. In a simple 4-spline/4-point patch, aiming the roll handles of ALL bones in the chain at the opposite CP will make it match up. On a more complex surface, I'm still lost. Here is a (working) example: Hair_Test.prj
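     If you would rather not type a few hundred of these by hand, the Expression strings themselves are easy to generate with a script. The relative path prefix below is copied from the example above; it may differ depending on where in the project the Expression ends up living.

# Generate one guide-hair Expression string per bone in the chain, for checking
# or pasting by hand (or for a script that knows where they go in the .prj).
# The "..|..|..|..|" prefix is copied from the example above and may vary.
def guide_hair_expressions(bone_names, prefix="..|..|..|..|"):
    return ["{}Bones|{}.Transform.Rotate".format(prefix, name)
            for name in bone_names]

# e.g. a 7-CP guide hair driven by a 7-bone chain:
for expr in guide_hair_expressions(["Bone%d" % i for i in range(1, 8)]):
    print(expr)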
  10. I've always dealt with similar problems by using AppleScript to manipulate the project file. I find it gives me more control over instancing than I can get with the flocking plugins. (And no overlaps that aren't my own fault.) The format is pretty easy to understand for the most part. For the new BTP album cover, I wrote a script to instance several thousand "beans" in a column above the logo. Each was assigned one of three colors, Newton Physics, and a semi-randomized position and orientation. Then I reopened the .prj file in A:M, ran the Newton Physics plugin, and picked the frame I liked best. I did basically the same thing earlier on the Population Map.
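      For the curious, the logic of the bean script (minus the AppleScript itself and the actual .prj text, whose tags are best copied from a file A:M has saved) boils down to something like this--the dimensions and color names here are placeholders:

# Rough restatement of the bean script's logic: scatter N instances in a column
# above a point, each with one of three colors and a randomized orientation.
# The color names and dimensions are placeholders, and writing the result into
# a .prj is deliberately left out.
import random

def bean_column(count, base=(0.0, 0.0, 0.0), radius=20.0, height=400.0,
                colors=("color_a", "color_b", "color_c"), min_gap=2.0):
    beans, attempts = [], 0
    while len(beans) < count and attempts < count * 50:
        attempts += 1
        x = base[0] + random.uniform(-radius, radius)
        y = base[1] + random.uniform(0.0, height)
        z = base[2] + random.uniform(-radius, radius)
        # crude spacing check so no two beans start inside each other
        if any((x - b["position"][0])**2 + (y - b["position"][1])**2 +
               (z - b["position"][2])**2 < min_gap**2 for b in beans):
            continue
        beans.append({
            "position": (x, y, z),
            "rotation": tuple(random.uniform(0.0, 360.0) for _ in range(3)),
            "color": random.choice(colors),
        })
    return beans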
  11. Greetings all. I've been putting some of my gainlessly unemployed time into a new project, building a county-by-county population map of the United States -- America's other topography. I've been blogging about this project on my site here and thought members of the A:M community might find it interesting. Best, -Matt
  12. This is a new image I've been working on as part of a redesign of my site. (January is turning out to be a slow work month.) "Trails" took a bit over 24 hours to render on a single 1.8GHz G5, with 16-pass oversampling. It's lit with five dub lights and the practical below the peak of the roof. The clouds are a separate layer, to simplify things, and because I wasn't sure how much work I'd need to do on them in Photoshop. (I ended up applying a half-pixel Gaussian blur, but that was it.) Each cloud is a tube with something like 50 cross-sections. I ran my Punk&Bloat AppleScript on them twice, and crossfaded between the two action files between frame 0 and frame 1, with motion blur set to 100%. Please welcome back the Marboxian. [attachmentid=13392]
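      For anyone wondering about Punk&Bloat: the name is borrowed from the old Illustrator filter, and roughly speaking the idea is to push each cross-section's control points toward or away from the ring's center by a random amount. A simplified sketch of that kind of displacement (not the script itself, which works on the .mdl text):

# Simplified punk/bloat pass: scale each cross-section ring of a tube toward
# (<1) or away from (>1) its own center by a random factor. This is only the
# displacement math, not the AppleScript that edits the .mdl.
import random

def punk_and_bloat(cross_sections, amount=0.3):
    """cross_sections: list of rings, each a list of (x, y, z) control points."""
    result = []
    for ring in cross_sections:
        cx = sum(p[0] for p in ring) / len(ring)
        cy = sum(p[1] for p in ring) / len(ring)
        cz = sum(p[2] for p in ring) / len(ring)
        scale = 1.0 + random.uniform(-amount, amount)
        result.append([(cx + (x - cx) * scale,
                        cy + (y - cy) * scale,
                        cz + (z - cz) * scale) for x, y, z in ring])
    return result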
  13. This is my latest short, AEsop's Council of Mice. Bill Plympton's "Guard Dog" beat me at the Woods Hole Film Festival this summer, but I guess I can live with that. I had won Woods Hole last year, with "Marboxian," and was obviously hoping for a repeat performance. Now that I'm finally into my new place (though still living out of boxes), I need to get the DVD together and get this one out to more festivals. Go watch it! I don't think it's bad.

      ART STUFF: The severely crunched color palette was ripped shamelessly from "Avalon," a live-action/CGI film from the director of "Ghost in the Shell." Crunching your color palette, as that film and its later-born cousin "The Matrix" both realized, is also a good way to cover your special effects -- and I'm working legal, alone, and on the cheap here.

      A bit of advice here: Get an editor. He doesn't know what shot took the longest. He doesn't know what you expected to look cool. He doesn't know what you were thinking would happen with x footage, and he's not going to pretend it did when it didn't. If you spend the requisite man-weeks animating a multishot film, for god's sake, get a fresh pair of eyes to help you put it together at the end. Rama Rodriguez (who is a heck of an animator in his own right, in Flash, and once re-edited "Fist of the North Star" into "Christ: The Return!" -- you'd have to see it) did an amazing job getting the timing and pace to work, as best my footage allowed. I'm serious: Ask for help.

      TECH STUFF: I shot/photographed the backgrounds in DV, then recolored, cleaned and painted alpha channels where needed in Photoshop. Editing and compositing were done in Final Cut Express, on a G5. The hair is 10.x decal-controlled hair, for two reasons. For one, I spent an hour grooming one of the mice with v11 hair, and the results were a terrible mess; I couldn't find any way to groom symmetrically, nor to shorten/lengthen more than one guide hair at a time. The second reason was that the A:M for OS X (10.5) release rendered significantly faster than either 10.5 or 11 for Classic, and time was not a luxury I had while finishing this for Woods Hole.

      I was never really happy with the mouse rig, but I'm never really happy with any of my rigs, and I don't believe in one-size-fits-all rigs. The legs were overly complicated, and could break in certain positions. I did all the dope sheets by hand, altering and deleting phonemes as I went, then went into the pose channels and dropped the intensity way down on all but the most noticeable phonemes. (I figured the audience is mostly watching how the mouth opens and closes, and that was all done by hand.) The spine was curled using a horrible smartskin system where forward/backward/side-to-side were all controlled by rotating one bone floating over his back -- I say horrible because it turned out to be impossible to crossfade actions without the spine freaking out. The ears are a great demo of the dynamic constraint, though it's not noticeable in many shots -- I seriously love this feature. Also, the geometry bones in the tail are all rigged to a set of control bones using a lagged constraint, so that the first segment of the tail is one frame behind, the second two frames, the third three, and so on; I really liked how this feature worked too.

      At any rate, hope you like it.
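      The lag setup, spelled out: each geometry bone in the tail copies its control's rotation from one more frame back than the bone before it, so motion ripples down the chain. A toy illustration of the offsets (the real thing is a constraint inside A:M, not a script):

# Toy illustration of the lagged-constraint offsets: tail segment i copies the
# control rotation from i frames earlier, so motion ripples down the chain.
def lagged_rotations(control_rotations, segment_count):
    """control_rotations: one rotation value per frame (list indexed by frame).
    Returns, per frame, the rotation each tail segment copies."""
    frames = []
    for frame in range(len(control_rotations)):
        frames.append([control_rotations[max(0, frame - i)]
                       for i in range(1, segment_count + 1)])
    return frames

# e.g. a 4-segment tail following a control that swings from 0 to 30 degrees:
swing = [0, 10, 20, 30, 30, 30]
for frame, rotations in enumerate(lagged_rotations(swing, 4)):
    print(frame, rotations)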
  14. No no, it's not a picture frame, but I imagine it's a bit esoteric. What we have is a newly constructed raised bed, just filled with soil and watered perhaps a bit too liberally. There's a nice explanation of raised beds here -- building them is covered in the Square Foot Gardening book pictured. (This is right out of my childhood.) Still trying to find a pic... There we are. The other book is The Not So Big House. The graphic theme for my site/company (The Space Toast Shop) is based around a shop built in/out of the ruins of a crashed rocket. More pics here. As of Cold Night, I got around to shingling the building. In this pic, I just wanted to reflect the shop. The picture is kind of an homage to my dad, who was a fan of both book series. He built the house I grew up in, and always had a whole mess of projects going: garden, workshop, wharf, sailboats, you name it. He was also a school teacher, so the summer was when things would really kick into high gear. We all love to build things, you know?
  15. Summer days, summer days... This is an image I did a couple weeks ago to lend a summer theme to my site. Been meaning to post it. Cold Night was feeling more and more out of place... Props as always to Robb Allen for his free Grunge Pack collection. If I ever find myself with some time on my hands and a digital camera, I swear I'll pay him back karmically by creating a Grunge Pack 2. One of these days. Summer days...
  16. I've added a few more high-res stills to my site here -- the first four from the top. I'm realizing that I can use color saturation for depth cuing... but I'm afraid that'll have to wait for tomorrow. *yawn* 'night, mouseketeers- -M@
  17. Crashing down to the task of revising and finishing "Aesop's Council of Mice," my next short, in a few too-short weeks. This is a high-res pic of Pawtuckett. (The actual short will be produced at 720x480 DV.) Pawtuckett has a kind of yellowish color to most of her fur, and I'm checking to see if that explains the difference in glow color between the acorns and her. I'm just posting this pic because she's the first of the character models I've revised. I need to decide what to do about her thumbs, and a few other trouble spots, before I move on to the other mice. The hair is 10.x decal-controlled hair- -M@
  18. Thanks for the kudos everyone. Actually, the best accolade so far was being told that it once kept a roomful of very sleepy children attentive for 12 minutes. The melting statues were just done with a three-frame muscle animation, if I remember correctly. I keep meaning to put together a quick tute on 2d warping and morphing in A:M -- this really was done with just A:M 8.5, Photoshop and Premiere, on a G3 iMac. (Hence the blown-out backgrounds, as an eagle-eyed observer commented -- but that wasn't the only reason for the white empty space.) The whole thing took about a year of my spare time, going to school and working a part-time job for a few months. The official Marboxian home page is here. Right now, I'm hurrying to finish "Council of Mice" for the Woods Hole Film Festival at the end of July, which is where I premiered "Marboxian" last year. I'd like to keep some local buzz going. Then we'll probably see about producing a series of short live action films with Boston writers/directors/talent, but that's... *whistles* way over the horizon right now- -M@
  19. My first festival short, "Marboxian," is now up on A:M Films. [Link] Parts of it are downright painful for me to watch now, but it's had a good run this year. Now I just need to finish up "Council of Mice" in a month. Yeek. Which reminds me, I have a call to make... Hope y'all like it- -M@
  20. The lighting gave me some trouble, although under standard Mac gamma, and the NTSC gammas I've played it under, father O'Gratin appears fine. No, for some reason that I didn't have time enough on this project to fix, the shadows were drawing incorrectly, if at all. I'm guessing this is a bug with boolean cutters (Jimmy's mouth) in 10.5o/11A8. I ended up just rendering without shadows. I actually tried dropping negative lights under the characters, too, just for a little rough darkening, but still no luck. Two things I should look into a little further, when I get a chance. There was also an issue with the boolean cutters not subtracting at all in some frames, but in retrospect I should have inspected the normals on them. When you finish an animation, it's hard to do anything but complain. Still, I think the comedy worked, so I'll call it good. And since a guy who actually WORKED on VeggieTales doesn't want to hurt me, I'll be all the happier. Oh, and if anyone's wondering what's on the stained glass window behind them, three words: Best. Picture. Ever. Cheers- -M@
  21. All right, sounds like a "proceed with caution" situation. We'll go with it that way: ATTENTION: THE FOLLOWING FILM DEALS WITH SEXUAL ABUSE COMMITTED BY CLERGY MEMBERS. WHILE NOT VISUALLY EXPLICIT, IT MAY BE CONSIDERED OFFENSIVE TO SOME VIEWERS. VIEWER DISCRETION IS THUS ADVISED. This took about two or three days, plus a few evenings designing the eye rig beforehand. It is intended as a parody of BigIdea's VeggieTales DTV series. I think you'll catch that, though. http://www.spacetoast.net/Gallery/Scrapbook/VeggieTrials.mov Watch your backs- -M@
  22. I've recently banged out a parody of BigIdea's popular Christian-themed VeggieTales videos, for a local sketch comedy show. I'm a little hesitant to post the link, however, for fear of offending other users. Psychologically, it's very wrong, though there's certainly nothing graphic about it. It deals with the recent church sex scandals here in Boston. Thoughts? Regards- -M@
  23. I like. I like a lot. It seems you've hit an interesting mix between a textured, CGI look, and a more simple flat-color illustrative one. The transition between the clean, basic clouds and sky at top, and the textured water/rock at bottom is a great technique. Personally, I'd love to see more- -M@
  24. I've zipped the QuickTime file down to 92MB. Not a big change, but perhaps a bit more manageable. Zack, nice thought; know anyone? At any rate, thanks for trying, everyone- -M@