Hash, Inc. - Animation:Master

Everything posted by Rodney

  1. I can't imagine those characters with any other voices. I don't have any good guesses as to why the Hippogyraf song has more views. Okay, perhaps one guess: the people involved with this sequence might have pointed people to it, and the sequence has quite a few people involved. For instance, I understand the voice of Hippogyraf, Greg Schumsky, has quite a following in the (way off Broadway) theatrical world. The modeler of Hippogyraf, Will Sutton, is well respected as a modeler of splines and patches. Riggers... I can't quite recall who rigged Hippogyraf; Ken Heslip? The supporting cast with Teresa (Woot), Robert (StrawBear)... drat, I knew I shouldn't start naming people because I'd forget someone. And to add to the mystery, Hippogyraf is one of my favorite characters in TWO, although I'm not exactly sure why. I'd say it's a combination of things, from design to voice to entertainment value.
  2. Martin, it seems more than fitting that the creator of A:M be here as the A:M Forum approaches its 15th anniversary. I have questions... none very well thought out... but all relating to splines, patches and the production of animation. I'll see if I can dust those off. I'm also more than a little curious to know whether much adjustment will be required on your part to reorient to the world of A:M after diving so deeply into the realms of politics, philosophy, etc. I'll guess not. And will the return meet and exceed your expectations? I certainly hope so. I'm looking forward to lively discussion on animation thought, theory and practice with the mind that created Animation:Master. Even after all these years A:M is still the best thing going.
  3. Welcome back Martin! This is going to be a great year.
  4. Did the Anzovin video tutorials ever get distributed to a location where they can be regularly and reliably obtained?
  5. I'll add this because it relates to the topic of adding color to an image only to take it away later (via chroma keying or whatnot). In another forum I was curious about the difference between RGBA and RGBM, the latter of which is what is generally referred to in the Japanese animation industry. Of note is that the entire industry basically goes through that extra process of adding color (pure white in their case) only to remove it later (with a few exceptions, as noted in the text below). Shun Iwasawa is a technical director who was with Studio Ghibli for many years and now heads up development of the OpenToonz software (primarily through grants from the Japanese government, and through agreement with the originators of the Toonz software, which Studio Ghibli extended into what became OpenToonz). At any rate, here is a little of what he had to say about the use of the M (that is to say 'Matte') channel in RGBM/RGBA. (Note that the initial quoted text is from me; the follow-up/answer is from Iwasawa-san.)

"Exactly. In Japanese animation production, they never use '255-white' (= R255 G255 B255 M255) for any part of a character, since it is reserved for the transparent area. Instead they use light gray for 'white' parts such as the white of the eye, the head of Gundam, etc. Actually, avoiding the 255-white color in characters is more for visual effect than for the software restriction written above. Any light effect applied to 255-white pixels becomes useless, since all channels are already saturated. So they use light gray, in order to leave dynamic range for representing 'brighter than white' areas."

So, similarly, if/when we add color to an image that will later be taken back out, we must take some care to make sure it is not a color that will be inadvertently removed during the compositing stage. It is interesting to note that this 'extra step' they perform survives largely through tradition, in much the same way as dealing with transparency in Photoshop: that's the way it has always been. Of course, the desire to get at higher dynamic range is an important consideration, and Shun emphasizes that as current industry practice. Of note, this is unlike adding green, blue or another color to an image with the goal of removing it later. There is little to no point in doing that unless... the program under consideration can't be made to work with alpha channels. In the case of Japanese animation, many studios have a fairly good reason for maintaining the workflow: hand-drawn images on paper are still scanned into computers, and drawings on paper do not have transparency, so it has to be dealt with at some stage. However, this is not the case with drawings made in digital programs! (Footstomp, in case there is a test later.) *IF* we can have transparency from the outset, there is rarely a need to get rid of that transparency, replace it with a temporary color, and then remove it again later. To do so makes very little sense.

One of the problems with use of the Alpha Channel/transparency is that not all programs display that transparency in a way users can interact with. This is why Photoshop creates those 'crawling ants', so that masks can be readily seen. But a mask/matte and transparency are not necessarily the same thing. Even A:M has some issues with this, in that transparency may appear as black in some cases (such as preview images in the Project Workspace). This can lead users to mistakenly think their background is black when in fact it is completely transparent. Many programs use a checkerboard pattern to aid in the identification of transparency. All of this is further complicated by modern image formats (such as EXR) that store additional data in the Alpha Channel, and perhaps especially by EXR 2.0, which allows depth data and multiple images to be stored within the same channel in an arbitrary manner. The film industry has been trying to standardize this expanded use of the Alpha Channel and has made great strides, but to date no standard has been set.
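To make that concrete, here is a minimal NumPy sketch of the 'reserved color becomes transparency' conversion. This is my own illustration, not OpenToonz code; the function name and the tolerance parameter are invented for the example. It also shows why character colors stay off 255-white: only the exact reserved color is keyed out, so light gray survives.

    import numpy as np

    def key_out_color(rgb, key=(255, 255, 255), tol=0):
        # Pixels within `tol` of the reserved key color become fully
        # transparent; everything else stays opaque.
        rgb = np.asarray(rgb, dtype=np.uint8)
        diff = np.abs(rgb.astype(np.int16) - np.array(key, dtype=np.int16))
        is_key = np.all(diff <= tol, axis=-1)
        alpha = np.where(is_key, 0, 255).astype(np.uint8)
        return np.dstack([rgb, alpha])

    # A 2x2 test image: top-left is the reserved 255-white, top-right is
    # the light gray the studios use for 'white' parts.
    img = np.array([[[255, 255, 255], [250, 250, 250]],
                    [[128,  64,   0], [ 10,  20,  30]]], dtype=np.uint8)
    rgba = key_out_color(img)
    print(rgba[0, 0])  # [255 255 255   0]  ... keyed to transparent
    print(rgba[0, 1])  # [250 250 250 255]  ... light gray survives intact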
  6. Well said. File output from A:M has been very consistent (one of the benefits of code not changing over the years). If you had said, "New versions of A:M do not often change how these exports work" I'd be in full agreement. Any bugs that are identified get quickly addressed. Downloaded and played around with it a bit. Thanks for the heads up. There are some drag/drop effects in there that I haven't seen readily available in other/similar programs.
  7. Random scene created in 'Make Dragon' button testing. This was more of a 'make.horse > make.lizard > make.dragon' test because that's the way it was developing (I started by trying to automate the creation of a horse-like shape). For some unknown reason, at the point where I added the rock for the lizard to perch on top of (left of screenshot), A:M crashed. Posting this because I don't think I saved much of the test and want to revisit the general idea some day. Added: Found/posted a screenshot of the splines used to create the creatures (horse and lizard). The lizard was a modification of the horse. The dragon splines (not seen) were a modification of the lizard. Also added: an alternative approach using cylinders with dangling splines (splines not seen), the idea being that the dangling splines would then be connected to cylinders in close proximity.
  8. Yes, it seems that the sections of the explode rebuild model are reacting too quickly. I haven't been able to find an ideal setting to change to bring it down to something reasonable. BUT... at least it's breaking. I'm wondering if there might be a bug in the Bullet Joint setting. Although its value is set to 1, the main setting is OFF and cannot be changed. That seems odd to me. I need to review the Bullet Physics documentation Steffen has posted. If you haven't done so already I recommend playing with the project files he posted. That angular motor is very cool... and I've played a little with that. An automated motor in A:M... that's sure to be useful!
  9. John, you forgot to embed the models. A:M states that 2 coins and 2 spheres are missing, which I suspect might be 2 instances of the same coin and sphere. The sphere is easy enough to replace, but the coin... I don't think I can properly assess the project if one of the models (the coin) was created with the Explode Rebuild plugin. I'll go into a holding pattern pending your response. Edit: From what I can tell it appears you have not made the coin (the explode rebuild model) a Bullet Body. To do that: right-click on the model's listing in the PWS, then Add Constraint, then Bullet Body.
  10. Here's a try with four (almost stacked) plates. (I like using the new Duplicate option via right-click in the Chor. Duplicate, then move slightly to a new position... Duplicate. Ditto. Duplicate. Ditto. Etc.) Note that my creation of the plates by lathing with 5 cross sections has limited the breaking points of the plate. Lathing with more cross sections, or building the plate some other way, would result in better breaking. Same drill: one Chor is pre-simulation, the other is the results. Ball and 4 Plates.prj
  11. Here's a project file with two Chors. The first is the pre-simmed setup and the second is the results of running the simulation. Assuming I'm not way off track... there may be some settings that help get things set up on your end. Ball and Plate.prj
  12. Oooo.... you can try having one of the objects have a weight of zero. The object with the weight will fall into/through the other object. Have the other/falling object have a weight of 500. I'm sure I'm not doing this right, but in trying to recreate what I think you have set up I found that having one part of the explode object set to zero weight helped to keep the object in place until it was hit. Thereafter everything broke apart. The downside of this is that once hit, all the pieces *except* the one with a weight of zero fall to the ground. A workaround for that would be to add a part to the object that can be made transparent. That transparent part then stays in place while the rest of the object falls away. Like I said, I'm probably doing this wrong on my end. I've posted some examples of two objects crashing together and then breaking in the Alpha/Beta forum, so I know it's possible to keep the object in place. Perhaps it just requires a keyframe.
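As an aside, the 'weight of zero keeps it in place' behavior lines up with stock Bullet, where a rigid body of mass 0 is treated as static: it collides but never moves. A small sketch using pybullet (the Python wrapper around the same Bullet library, not A:M's integration) under those assumptions:

    import pybullet as p

    p.connect(p.DIRECT)                # headless physics server
    p.setGravity(0, 0, -9.8)

    # baseMass=0 makes a *static* body in Bullet: it never moves, which
    # is the same effect as the zero-weight group described above.
    floor_shape = p.createCollisionShape(p.GEOM_BOX, halfExtents=[5, 5, 0.1])
    p.createMultiBody(baseMass=0, baseCollisionShapeIndex=floor_shape)

    # The falling object, given the heavy weight (500) used above.
    ball_shape = p.createCollisionShape(p.GEOM_SPHERE, radius=0.5)
    ball = p.createMultiBody(baseMass=500,
                             baseCollisionShapeIndex=ball_shape,
                             basePosition=[0, 0, 5])

    for _ in range(480):               # ~2 simulated seconds at 1/240 s steps
        p.stepSimulation()

    pos, _ = p.getBasePositionAndOrientation(ball)
    print(pos)                         # ball rests on the box; box never moved
    p.disconnect()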
  13. You are able to get Bullet to simulate, right? It's just not breaking? Two things come to mind, but much will depend on what is in your project. For instance, if objects are passing through each other... consider the density of the mesh. Settings for weight can be important. For instance, in a dense mesh built with the Explode Rebuild plugin I weighted some groups very heavy but left some at the default (low) setting. What this does is create an imbalance so that when impact occurs the heavier parts keep moving while the others stop... this then leads to the breaking. *If* the simulation is just not running at all, remember that at least one object in the project needs to have a Bullet Body constraint applied to it. I would apply a Bullet Body constraint to each and then assign a lighter weight to the plate. At this point I think we need more info or a project file to examine.
  14. That's definitely a classic.
  15. Outstanding work Rodger. Wow! As good as it looks in a still image it looks even better in motion. I felt the sudden urge to say, "Staff Pick!" but... we don't have a staff so I'll just say, "Bravo!" and "More please!"
  16. Hmmm... methinks we need to have several planets crashing together... This'll have to do in the short term.
  17. A really cool thing about simulations is that changes can be made after the simulation with relative ease. One such change is that of texturing: a quick assignment of new groups to create striped patterns, for instance. Or materials. Or material effects. Or hair. Or cloth. Etc. Etc. No need to re-simulate. (Assuming no need to simulate further.)
  18. I confess that I haven't used the Explode Rebuild plugin much before... why, I have absolutely no idea. Bullet Physics is giving me an opportunity to use it! Attached are two projects. The first is the project prior to running the Bullet simulation. The second is the result after the simulation is run. Fun stuff. (I looked at Steffen's 'Fracture' example first before trying this and had to remind myself how he created the explode model.) ExplodeRebuildBulletP.prj
  19. Here's a still from my first 'real' test of Bullet Physics. I was wondering how useful Bullet will be in creating rubble.
  20. And a hint of the Big Guy thrown in for good measure...
  21. Random mass of planets... (The stars are a bit hard to see in the thumbnail. Developing an approach to creating easy stars was a goal in working on this piece, and while it doesn't quite hit the mark, I found an approach I want to explore further.) Attached is a second screen capture that adds a little color to the starfield.
  22. That's me to a 'T'... minus the really awesome dog collar.
  23. With that setup I'm thinking you should be able to get Keekat rendered in less than one second! In other news: I'm zeroing in on a possible render benchmark that derives from/includes sound/audio. The audio is the cartoon mix (combining all the audio effects from the old A:M CD into one WAV file) posted to the forum elsewhere. This equates to just over 2 minutes' worth of screen time (approx. 4000 frames). The minor benchmark (0.0 in this set) might be to render that length out to a 1x1 image (PNG sequence without alpha channel) with all the various settings turned off in the render options. This would give us the absolute lowest boundary (read: unrealistic) expectation for rendering that 2 minute sequence on our system. *If* we ever beat that benchmark in production we know we have succeeded beyond our expectations... and very likely need to create a new base benchmark to better inform future production planning. From that foundation we then build additional benchmarks that measure projects with increased (read: viewable) resolution, fill space with interesting objects and target the outputting of pretty images.
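To put rough numbers on that benchmark idea, here is a quick sketch. The only figures taken from the post above are the ~4000 frames and the 'just over 2 minutes' of audio (which together imply roughly 30 fps); the per-frame times are assumed placeholders, not measurements:

    FPS = 30                       # implied by ~4000 frames over ~133 seconds
    FRAMES = 4000                  # length of the cartoon mixdown sequence

    def sequence_time(seconds_per_frame, frames=FRAMES):
        # Scale a single-frame benchmark up to the whole sequence.
        total = seconds_per_frame * frames
        return "%.2f hours (%d s)" % (total / 3600.0, total)

    # Benchmark 0.0: a 1x1 PNG with everything off. The 0.05 s/frame is an
    # assumed placeholder... measure the real number on your own system.
    print(sequence_time(0.05))     # 0.06 hours (200 s): the unrealistic floor
    print(sequence_time(10.0))     # 11.11 hours: a plausible production frame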
  24. In revisiting this topic... I note that in the referenced image above concerning the four shapes of shape note singing there are these words: "Jump in and sing 'la' if you aren't sure. You'll be right 25% of the time." This underscores the reason a lot of lips can fall into sync even if ideal matches aren't always present. This also relates to why those muppet mouths get it right often as well; namely, an open mouth will often an uttered sound convey. In the four shapes of shape note singing I further surmise that more than a hint of direction is conveyed:

Fa - is conveyed with the jaw jutting downward (and possibly considerably to the side, breaking up symmetry when striving for the character in caricature)

So - the lips move outward, more horizontally than vertically (capturing and containing the echoing sound of the 'o' inside the mouth)

La - is the extension up and down (perhaps even extending to the raising of the head via the neck), especially to accommodate the movement of the tongue

Mi - is the pursing of the lips and extending them outward (mostly in vertical orientation) to capture/direct the higher note at the end

Fun stuff, that lip sync. Mostly unrelated: for a little inspiration in audio syncing challenges check out the Cartoon Mixdown!
  25. After a clean install... Keekat rendered to 100% in 16 seconds. It then took another 20 seconds for the image to appear and the UI to release control back to me. Total time: 36 seconds. That's still too high so I'll be investigating some more. This has me wondering if writing to disk is gobbling up the majority of the time, as the numbers would seem to be finished crunching at that 16-second mark where A:M displays rendering at 100%. I further assume the rendered image gets displayed in A:M immediately after the image is successfully saved to disk and never before, so the delay from hitting 100% to finish is surely that of confirming the image was written to disk.

Added: v18 testing. Total time (including write to disk): 10 seconds.

Update: Reducing variables*, v19 now appears to be rendering at the same rate: 10 seconds.

*The primary variable is always going to be the user's preferences, and that would appear to be the case here as well. Making sure A:M was set to render using the Camera (and not the Render Panel dialogue) in all tests eliminated the majority of variables; the dialogue appears to have been the culprit in extending the Keekat render from 10 seconds to 1:57. That's pretty minor for a single frame but can prove to be a significant difference when rendering large sequences. I still think I should be able to get Keekat to render in 1 to 3 seconds.
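For anyone who wants to test the 'disk write is gobbling the time' theory outside of A:M, the general pattern is simply to time each phase separately. A throwaway sketch: the render step is faked with a sleep and the file is a raw dump, so only the pattern matters, not the numbers:

    import time

    def timed(label, fn, *args):
        # Time one phase of the pipeline and report it.
        start = time.perf_counter()
        result = fn(*args)
        print("%s: %.2f s" % (label, time.perf_counter() - start))
        return result

    def fake_render():
        time.sleep(0.5)                  # stand-in for the number crunching
        return bytes(4 * 1920 * 1080)    # dummy RGBA frame

    def write_frame(data, path="frame0001.raw"):
        with open(path, "wb") as f:      # stand-in for the PNG save
            f.write(data)

    frame = timed("render", fake_render)
    timed("write to disk", write_frame, frame)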