Everything posted by Rodney
-
Did the Anzovin video tutorials ever get distributed to a location where they can be regularly and reliably obtained?
-
I'll add this because it relates to the topic of adding color into an image only to take it away later (via chromakeying or whatnot). In another forum I was curious about the difference between RGBA and RGBM, the latter of which is what is generally referred to in the Japanese animation industry. Of note is that the entire industry basically goes through that extra process of adding color (pure white in their case) only to remove it later (with a few exceptions as noted in the text below). Shun Iwasawa is a technical director who was with Studio Ghibli for many years and now heads up development of the OpenToonz software (primarily through grants from the Japanese Government, and through agreement with the originators of the Toonz software, which Studio Ghibli extended into OpenToonz). At any rate, here is a little of what he had to say relating to the use of the M (that is to say 'Matte') channel in RGBM/RGBA. (Note that the initial quoted text is from me; the follow-up answer is from Iwasawa-san.)

"Exactly. In Japanese animation production, they never use '255-white' color (= R255 G255 B255 M255) for any part of characters, since it is reserved for the transparent area. Instead they use light gray for 'white' parts such as the white of the eye, the head of Gundam, etc. Actually, avoiding the use of 255-white color in characters is more for visual effect than for the software restriction written above. Any light effect applied to 255-white pixels will become useless since all channels are already saturated. So they use light gray, in order to leave dynamic range for representing 'brighter than white' areas."

So, similarly, if/when we add color to an image that will later be taken back out, we must take some care to make sure it is not a color that will be inadvertently removed during the compositing stage. It is interesting to note also that this 'extra step' they are performing is largely through tradition, in much the same way as dealing with transparency in Photoshop; that's the way it has always been. Of course, the desire to get at higher dynamic range is an important aspect to consider, and Iwasawa emphasizes that as current industry practice.

Of note, this is unlike adding green, blue or other color to an image with the goal of removing it later. There is little to no point in doing that unless... the program under consideration can't be made to work with alpha channels. In the case of Japanese animation many studios have a fairly good reason for maintaining the workflow, because hand-drawn images on paper are still scanned into computers, and drawings on paper do not have transparency. As such, that has to be dealt with at some stage. However, this is not the case with drawings made in digital programs! (Footstomp in case there is a test later.) *IF* we can have transparency from the outset there is rarely a need to get rid of that transparency... replacing it with a temporary color... and then removing it again later. To do this makes very little sense.

One of the problems with use of the Alpha Channel/transparency is that not all programs display that transparency in a way that users can interact with. This is why Photoshop created those 'crawling ants', so that masks could be readily seen. But a mask/matte and transparency are not necessarily the same thing. Even A:M has some issues with this, in that transparency may appear as black in some cases (such as preview images in the Project Workspace). This can lead users to mistakenly think their background is black when in fact it is completely transparent. Many programs use a checkerboard pattern to aid in the identification of transparency.

All of this is further complicated by modern image formats (such as EXR) that store additional data alongside the Alpha Channel, perhaps especially EXR 2.0, which allows depth data and multiple images to be stored within the same file in an arbitrary manner. The film industry has been trying to standardize the expanded use of the Alpha Channel and has made great strides, but to date no standard has been set.
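To make that RGBM convention concrete, here is a minimal sketch of the conversion (Python with Pillow and NumPy, which are my own assumptions since the thread doesn't name any tooling): pure 255-white is treated as the reserved matte color and mapped to zero alpha, while everything else stays opaque. The file names are hypothetical.

```python
# Minimal sketch: convert an "RGBM-style" image, where pure 255-white is
# reserved to mean "transparent", into an ordinary RGBA image.
# Pillow/NumPy and the file names are assumptions for illustration.
import numpy as np
from PIL import Image

def rgbm_white_to_rgba(path_in: str, path_out: str) -> None:
    rgb = np.array(Image.open(path_in).convert("RGB"))

    # Pixels that are exactly R255 G255 B255 act as the matte (fully
    # transparent); everything else remains fully opaque.
    is_reserved_white = np.all(rgb == 255, axis=-1)
    alpha = np.where(is_reserved_white, 0, 255).astype(np.uint8)

    rgba = np.dstack([rgb, alpha])
    Image.fromarray(rgba, mode="RGBA").save(path_out)

# Hypothetical usage:
# rgbm_white_to_rgba("scanned_cel.png", "scanned_cel_rgba.png")
```

This is also why the light-gray convention matters: any character pixel that drifted up to exact 255-white would be keyed out by a rule like this.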
-
Well said. File output from A:M has been very consistent (one of the benefits of code not changing over the years). If you had said, "New versions of A:M do not often change how these exports work" I'd be in full agreement. Any bugs that are identified get quickly addressed. Downloaded and played around with it a bit. Thanks for the heads up. There are some drag/drop effects in there that I haven't seen readily available in other/similar programs.
-
Random scene created in 'Make Dragon' button testing. This was more of a 'make.horse > make.lizard > make.dragon' test because that's the way it was developing (I started by trying to automate the creation of a horse-like shape). For some unknown reason, at the point where I added the rock for the lizard to perch on top of (left of screenshot), A:M crashed. Posting this because I don't think I saved much of the test and want to revisit the general idea some day. Added: Found/posted a screenshot of the splines used to create the creatures (horse and lizard). The lizard was a modification of the horse. The dragon splines (not seen) were a modification of the lizard. Also added: an alternative approach using cylinders with dangling splines (splines not seen), the idea being that the dangling splines would then be connected to cylinders in close proximity.
-
Yes, it seems that the sections of the Explode Rebuild model are reacting too quickly. I haven't been able to find an ideal setting to change to bring it down to something reasonable. BUT... at least it's breaking. I'm wondering if there might be a bug in the Bullet Joint setting. Although its value is set to 1, the main setting is OFF and cannot be changed. That seems odd to me. I need to review the Bullet Physics documentation Steffen has posted. If you haven't done so already I recommend playing with the project files he posted. That angular motor is very cool... and I've played a little with that. An automated motor in A:M... that's sure to be useful!
-
John, you forgot to embed the models. A:M states that 2 coins and 2 spheres are missing, which I suspect might be 2 instances of the same coin and sphere. The sphere is easy enough to replace but the coin... I don't think I can properly assess the project if one of the models (the coin) was created with the Explode Rebuild plugin. I'll go into a holding pattern pending your response. Edit: From what I can tell, it appears you have not made the coin (the Explode Rebuild model) a Bullet Body. To do that: right-click on the model's listing in the PWS, then Add Constraint > Bullet Body.
-
Here's a try with four (almost stacked) plates. (I like using the new Duplicate option via right-click in the Chor: Duplicate, then move slightly to a new position... Duplicate. Ditto. Duplicate. Ditto. Etc.) Note that creating the plates by lathing with 5 cross sections has limited the breaking points of the plate. Lathing with more cross sections, or building the plate some other way, would result in better breaking. Same drill: one Chor is pre-simulation, the other is the results. Ball and 4 Plates.prj
-
Here's a project file with two Chors. The first is the pre-simmed setup and the second is the results of running the simulation. Assuming I'm not way off track... there may be some settings that help get things set up on your end. Ball and Plate.prj
-
Oooo.... you can try having one of the objects have a weight of zero. The object with the weight will fall into/through the other object. Have the other/falling object have a weight of 500. I'm sure I'm not doing this right, but in trying to recreate what I think you have set up I found that having one part of the exploded object set to zero weight helped keep the object in place until it was hit. Thereafter everything broke apart. The downside of this is that once hit, all the pieces *except* the one with a weight of zero fall to the ground. A workaround for that would be to add a part to the object that can be made transparent; that transparent part then stays in place while the rest of the object falls away. Like I said, I'm probably doing this wrong on my end. I've posted some examples of two objects crashing together and then breaking in the Alpha/Beta forum, so I know it's possible to keep the object in place. Perhaps it just requires a keyframe.
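For anyone curious why a weight of zero anchors a piece, here is a minimal sketch using the standalone pybullet module rather than A:M's own Bullet settings (so how A:M maps its weight value onto Bullet is an assumption on my part): in Bullet, a body with mass 0 is treated as static and never moves, while any positive mass makes it a dynamic body that falls under gravity.

```python
# Minimal sketch with the standalone pybullet module (not A:M's Bullet UI):
# mass 0 makes a body static/anchored, a positive mass makes it dynamic.
import pybullet as p

p.connect(p.DIRECT)              # headless physics world
p.setGravity(0, 0, -9.8)

box = p.createCollisionShape(p.GEOM_BOX, halfExtents=[0.5, 0.5, 0.5])

anchored = p.createMultiBody(baseMass=0,    # mass 0 => static, stays in place
                             baseCollisionShapeIndex=box,
                             basePosition=[0, 0, 1])
falling = p.createMultiBody(baseMass=500,   # heavy dynamic body, falls freely
                            baseCollisionShapeIndex=box,
                            basePosition=[2, 0, 1])

for _ in range(240):             # roughly one second at the default 240 Hz step
    p.stepSimulation()

print("anchored:", p.getBasePositionAndOrientation(anchored)[0])
print("falling: ", p.getBasePositionAndOrientation(falling)[0])
p.disconnect()
```

After a second of simulated time the anchored body still reports its original position while the heavy body has dropped, which lines up with the 'zero-weight piece stays put' behavior described above.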
-
You are able to get Bullet to simulate, right? It's just not breaking? Two things come to mind, but much will depend on what is in your project. For instance, if objects are passing through each other... consider the density of the mesh. Settings for weight can be important. For instance, in a dense mesh built with the Explode Rebuild plugin I weighted some groups very heavy but left some at the default (low) setting. What this does is create an imbalance so that when impact occurs the heavier parts keep moving while the others stop... and this then leads to the breaking. *If* the simulation is just not running at all, remember that at least one object in the project needs to have a Bullet Body constraint applied to it. I would apply a Bullet Body constraint to each and then assign a lighter weight to the plate. At this point I think we need more info, or a project file to examine.
-
That's definitely a classic.
-
Outstanding work Rodger. Wow! As good as it looks in a still image it looks even better in motion. I felt the sudden urge to say, "Staff Pick!" but... we don't have a staff so I'll just say, "Bravo!" and "More please!"
-
Hmmm... methinks we need to have several planets crashing together... This'll have to do in the short term.
-
A really cool thing about simulations is that changes can be made after the simulation with relative ease. One such change is texturing: a quick assignment of new groups to create striped patterns, for instance. Or materials. Or material effects. Or Hair. Or Cloth. Etc. Etc. No need to re-simulate. (Assuming no need to simulate further.)
-
I confess that I haven't used the Explode Rebuild plugin much before... why, I have absolutely no idea. Bullet Physics is giving me an opportunity to use it! Attached are two projects. The first is the project prior to running the Bullet simulation. The second is the result after the simulation is run. Fun stuff. (I looked at Steffen's 'Fracture' example first before trying this and had to remind myself how he created the exploded model.) ExplodeRebuildBulletP.prj
-
Here's a still from my first 'real' test of Bullet Physics. I was wondering how useful Bullet will be in creating rubble.
-
Random mass of planets... (The stars are a bit hard to see in the thumbnail. Developing an approach to creating easy stars was a goal in working on this piece, and while it doesn't quite hit the mark I found an approach I want to explore further.) Attached is a second screen capture that adds a little color to the starfield.
-
That's me to a 'T'... minus the really awesome dog collar.
-
With that setup I'm thinking you should be able to get Keekat rendered in less than one second! In other news: I'm zeroing in on a possible render benchmark that derives from/includes sound/audio. The audio is the cartoon mix (combining all the audio effects from the old A:M CD into one wav file) posted to the forum elsewhere. This equates to just over 2 minutes' worth of screen time (approx. 4000 frames). The minor benchmark (0.0 in this set) might be to render that length out to a 1x1 image (PNG sequence without alpha channel) with all the various settings turned off in the render options. This would give us the absolute lowest boundary (read: unrealistic) expectation for rendering that 2 minute sequence on our system. *If* we ever beat that benchmark in production we know we have succeeded beyond our expectations... and very likely need to create a new base benchmark to better inform future production planning. From that foundation we then build additional benchmarks that measure projects with increased (read: viewable) resolution, fill space with interesting objects, and target the output of pretty images.
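The arithmetic for that floor benchmark is easy to keep in a scratch script; the sketch below (plain Python, with a 30 fps frame rate and a per-frame time that are purely illustrative assumptions) shows where the rough numbers come from and how the baseline would scale.

```python
# Back-of-the-envelope math for the "floor" benchmark described above.
# The 30 fps rate and the per-frame time are illustrative assumptions.
def frame_count(minutes: int, seconds: int, fps: int = 30) -> int:
    return (minutes * 60 + seconds) * fps

def total_render_time(frames: int, seconds_per_frame: float) -> float:
    """Estimated wall-clock seconds for the whole sequence."""
    return frames * seconds_per_frame

frames = frame_count(2, 13)               # "just over 2 minutes" ~= 4000 frames
print(frames)                             # 3990
print(total_render_time(frames, 0.01))    # ~40 s if a 1x1 frame takes 10 ms
```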
-
In revisiting this topic... I note that in the referenced image above concerning the four shapes of shape note singing there are these words: "Jump in and sing 'la' if you aren't sure. You'll be right 25% of the time." This underscores the reason a lot of lips can fall into sync even if ideal matches aren't always present. It also relates to why those muppet mouths get it right so often; namely, an open mouth will often convey an uttered sound. In the four shapes of shape note singing I further surmise that more than a hint of direction is conveyed:

Fa - conveyed with the jaw jutting downward (and possibly considerably to the side, breaking up symmetry when striving for character in caricature)
So - the lips move outward, more horizontally than vertically (capturing and containing the echoing sound of the 'o' inside the mouth)
La - the extension up and down (perhaps even extending to the raising of the head via the neck), especially to accommodate the movement of the tongue
Mi - the pursing of the lips and extending them outward (mostly in vertical orientation) to capture/direct the higher note at the end

Fun stuff, that lip sync. Mostly unrelated: for a little inspiration in audio syncing challenges, check out the Cartoon Mixdown!
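As a toy illustration of that 'when unsure, default to the open mouth' idea, here is a tiny sketch (plain Python; the pose names and the four-shape mapping are just my shorthand for the description above, not anything built into A:M) where any sound without a confident match falls back to the open 'la' shape.

```python
# Toy illustration of "when unsure, default to an open mouth".
# Pose names and the mapping are shorthand for the description above,
# not anything built into A:M.
SHAPE_FOR_SOUND = {
    "fa": "jaw_down",         # jaw juts downward (maybe off to one side)
    "so": "lips_wide",        # lips move outward, more horizontal than vertical
    "la": "mouth_open_tall",  # up/down extension, room for the tongue
    "mi": "lips_pursed",      # pursed lips pushed outward, mostly vertical
}

def mouth_pose(sound: str) -> str:
    # Unknown or ambiguous sounds fall back to the open 'la' shape --
    # "you'll be right 25% of the time."
    return SHAPE_FOR_SOUND.get(sound.lower(), SHAPE_FOR_SOUND["la"])

print(mouth_pose("So"))   # lips_wide
print(mouth_pose("huh"))  # mouth_open_tall (the fallback)
```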
-
After a clean install... Keekat rendered to 100% in 16 seconds. It then took another 20 seconds for the image to appear and the UI to release control back to me. Total time: 36 seconds. That's still too high, so I'll be investigating some more. This has me wondering if writing to disk is gobbling up the majority of the time, as the numbers would seem to be finished crunching at that 16 second mark where A:M displays rendering at 100%. I further assume the rendered image only gets displayed in A:M after it has been successfully saved to disk, and never before, so the delay from hitting 100% to finishing is surely the time spent writing and confirming the image on disk.

Added: v18 testing. Total time (including write to disk): 10 seconds.

Update: Reducing variables*, v19 now appears to be rendering at the same rate: 10 seconds.

*The primary variable is always going to be the user's preferences, and that would appear to be the case here as well. Making sure A:M is set to render using the Camera (and not the Render Panel dialogue) in all tests eliminated the majority of variables; that setting appears to have been the culprit in extending the Keekat render from 10 seconds to 1:57. That's pretty minor for a single frame but can prove to be a significant difference when rendering large sequences. I still think I should be able to get Keekat to render in 1 to 3 seconds.
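One way to sanity-check the 'is the disk write eating the time?' hunch outside of A:M is to time an image encode-and-save step on its own. The sketch below is a rough stand-in (Python with Pillow and an arbitrary 1920x1080 test image, both assumptions); it only measures this machine's PNG write speed, not anything A:M does internally.

```python
# Rough check of how long a PNG encode + write takes on this machine,
# independent of A:M. Pillow and the 1920x1080 test size are assumptions.
import time
import numpy as np
from PIL import Image

pixels = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
image = Image.fromarray(pixels, mode="RGB")

start = time.perf_counter()
image.save("write_test.png")
elapsed = time.perf_counter() - start
print(f"PNG write took {elapsed:.2f} s")
```

If that number is tiny, the 20-second gap is more likely UI or housekeeping overhead than raw disk speed.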
-
I'm revisiting the subject of benchmarks in light of reviewing the exercises from TaoA:M. As such, if anyone else is interested in benchmarks I'm interested in their feedback. As Robert mentions above, most benchmarks are technical in nature and follow the path of hardware testing. That isn't my focus here... and although hardware testing is certainly seen as the most important part of benchmarking, it only applies where the hardware is changed. The benchmarking I am most interested in removes (or at least isolates) variables introduced by hardware. The benchmark then becomes a tool to explore the software, with a goal toward user-controlled optimization. Bottom line: for those that don't want to change their hardware, benchmarking can still be useful.

An example of the type of benchmarking I am interested in might be demonstrated by a recent render of Exercise 1 from the TaoA:M manual. The render got to 100% rather quickly but then spent the better part of an additional minute finishing. Odd. A frame that normally would take mere seconds took 1:57 to render. How long should that frame from Exercise 1 take A:M to render? I suspect that with all of the various installations and programs running, as well as recording my screen while A:M was rendering, very little memory was available for the frame to be rendered. Potential suspects include:

- A heavily customized installation of A:M (best to get back to a clean installation)
- A system that hasn't been shut down in longer than I can remember (lots of stray stuff still lodged in memory)

Taking that 1:57 render as a loose benchmark, it should be easy to improve upon and refine as a personal benchmark for my current system. I anticipate that I should be able to get that frame to render in 1 to 3 seconds. Watch this space and we shall see.
-
Thanks Matt! I'm going to tackle some more Fantastic Four stuff in the future because it's got me feeling creative again. That, and the general skeleton of the 'Jack and the Beanstalk' storyline has conjured up specific moments that can be translated into FF imagery. Of late I have been distracted by finding my Extra DVD. It had been missing in action for quite a while and I unearthed it while cleaning. Some of the resources contained thereon really need to be shared with the community, so I'm trying to get them posted and into more general circulation. Doing that has made me want to work on areas related to A:M Exchange and press toward the future of what can be realized there. In contrast to diving in and just modeling stuff from scratch, exploring the works of others is always inspirational. So the trip is rather cyclic... and feeds the beast of creativity. And... hopefully, at the crossroads where creativity and inspiration converge, motivation and patience can meet. Fill in your own words for those four things.
-
Steve, most excellent. The cross-pollination of ideas is exactly why we are here in the forum hanging out together. While particle fire has long been of interest to me, I certainly have no great insights into how to get to the results we (collectively) want in fire effects. There are a ton of different effects that all fall within the broader category of 'fire effects', not to mention the other effects, such as smoke, that accompany fire and sell it to an audience without taking them out of the moment at either extreme... not looking at all like fire, or looking too much like real fire when the style of the story doesn't call for it. My experiments with fire and smoke suggest that fire effects can largely be achieved both with and without particles... not to mention through the use of actual fire footage/video. As is usually the case, much depends on what the end goal for that effect will be... dragon blasts of fire toward a given target being, I presume, somewhere in the mix. Yes, most definitely, let's get Robert thinking about fire effects so we can put a focus on that during a Live Answer Time session!