Everything posted by Rodney
-
In revisiting this topic... I note that in the referenced image above concerning the four shapes of shape note singing there are these words: "Jump in and sing 'la' if you aren't sure. You'll be right 25% of the time." This underscores how a lot of lips can fall into sync even when ideal matches aren't always present. It also relates to why those muppet mouths get it right so often; namely, an open mouth will often convey an uttered sound. In the four shapes of shape note singing I further surmise that more than a hint of direction is conveyed:

Fa - the jaw juts downward (and possibly considerably to the side, breaking up symmetry when striving for character in caricature)
So - the lips move outward, more horizontally than vertically (capturing and containing the echoing sound of the 'o' inside the mouth)
La - the extension up and down (perhaps even extending to raising the head via the neck) to especially accommodate the movement of the tongue
Mi - the pursing of the lips, extending them outward (mostly in vertical orientation) to capture/direct the higher note at the end

Fun stuff, that lip sync.

Mostly unrelated: for a little inspiration in audio syncing challenges, check out the Cartoon Mixdown!
-
After a clean install... Keekat rendered to 100% in 16 seconds. It then took another 20 seconds for the image to appear and for the UI to release control back to me. Total time: 36 seconds.

That's still too high, so I'll be investigating some more. This has me wondering if writing to disk is gobbling up the majority of the time, as the numbers would seem to be finished crunching at the 16 second mark where A:M displays rendering at 100%. I further assume the rendered image gets displayed in A:M immediately after it is successfully saved to disk and never before, so the delay from hitting 100% to finish is surely that of confirming the image was written to disk.

Added: v18 testing. Total time (including write to disk): 10 seconds.

Update: Reducing variables*, v19 now appears to be rendering at the same rate: 10 seconds.

*The primary variable is always going to be the user's preferences, and that appears to be the case here as well. Making sure A:M was set to render using the Camera (and not the Render Panel dialogue) in all tests eradicated the majority of the variables and appears to have been the culprit in extending the Keekat render from 10 seconds to 1:57. That's pretty minor for a single frame but can prove to be a significant difference when rendering large sequences.

I still think I should be able to get Keekat to render in 1 to 3 seconds.
-
I'm revisiting the subject of benchmarks in light of reviewing the exercises from TaoA:M. As such, if anyone else is interested in benchmarks, I'm interested in their feedback.

As Robert mentions above, most benchmarks are technical in nature and follow the path of hardware testing. That isn't my focus here... although hardware testing is certainly seen as the most important part of benchmarking. But that only applies where the hardware is changed. The benchmarking I am most interested in removes (or at least isolates) variables introduced by hardware. The benchmark then becomes a tool to explore 'software' with a goal toward user-controlled optimization. Bottom line: for those that don't want to change their hardware, benchmarking can still be useful.

An example of the type of benchmarking I am interested in might be demonstrated by a recent render of Exercise 1 from the TaoA:M manual. The render got to 100% rather quickly but then spent the better part of an additional minute finishing. Odd. A frame that normally would take mere seconds took 1:57 to render. How long should that frame from Exercise 1 take A:M to render? I suspect that with all of the various installations and programs running, as well as recording my screen while A:M was rendering, very little memory was available for the frame to be rendered. Potential suspects include:

- A heavily customized installation of A:M (best to get back to a clean installation)
- A system that hasn't been shut down in longer than I can remember (lots of stray stuff still lodged in memory)

Taking that 1:57 render as a loose benchmark, it should be easy to improve upon and refine as a personal benchmark for my current system. I anticipate that I should be able to get that frame to render in 1 to 3 seconds. Watch this space and we shall see.
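The repeat-and-compare idea above can be sketched in a few lines. A:M itself isn't driven from Python this way, so the workload below is a hypothetical stand-in, but the min-vs-mean comparison is the same trick for separating best-case render speed from system noise (caching, background processes, disk writes):

```python
import time
import statistics

def benchmark(render_fn, runs=5):
    """Time a workload over several runs.

    The minimum approximates the system's best case; the gap between
    min and mean hints at how much noise other variables are adding.
    """
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        render_fn()
        times.append(time.perf_counter() - start)
    return {
        "min": min(times),
        "mean": statistics.mean(times),
        "max": max(times),
    }

# Stand-in workload; a real test would trigger the actual render.
stats = benchmark(lambda: sum(i * i for i in range(100_000)), runs=3)
print(f"min {stats['min']:.3f}s  mean {stats['mean']:.3f}s  max {stats['max']:.3f}s")
```

A big spread between min and mean on the same scene would point at the environment (memory pressure, disk, screen recording) rather than the renderer itself.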
-
Thanks Matt! I'm going to tackle some more Fantastic Four stuff in the future because it's got me feeling creative again. That, and the general skeleton of the 'Jack and the Beanstalk' storyline has conjured up specific moments that can be translated into FF imagery.

Of late I have been distracted by finding my Extra DVD. It had been missing in action for quite a while and I unearthed it while cleaning. Some of the resources contained thereon really need to be shared with the community, so I'm trying to get them posted and into more general circulation. Doing that has made me want to work on areas related to A:M Exchange and press toward the future of what can be realized there. In contrast to diving in and just modeling stuff from scratch, exploring the works of others is always inspirational. So the trip is rather cyclic... and feeds the beast of creativity. And... hopefully, at the crossroads where creativity and inspiration converge, motivation and patience can meet. Fill in your own words for those four things.
-
Steve,

Most excellent. The cross-pollination of ideas is exactly why we are here in the forum hanging out together. While particle fire has long been of interest to me, I certainly have no great insights into how to get to the results we (collectively) want in fire effects. There are a ton of different effects that all fall within the broader category of 'fire effects', not to mention the other effects, such as smoke, that accompany fire and sell it to an audience without taking them out of the moment at either extreme: not looking at all like fire, or looking too much like real fire when the style of the story doesn't call for it.

My experiments with fire and smoke suggest that fire effects can largely be achieved both with and without particles... not to mention through the use of actual fire footage/video. As is usually the case, much depends on what the end goal for that effect will be... dragon blasts of fire toward a given target being, I must presume, somewhere in the mix.

Yes, most definitely, let's get Robert thinking about fire effects so we can put a focus on that during a Live Answer Time session!
-
Those would be his thumbs. I ran across a project on the Extra DVD (one by Andy Gibbons) that has some flame work. I'll try to post that here in the forum sometime this week.
-
Very nice. You have an eye for capturing detail while keeping everything simple.
-
Looking good Jali! Keep testing!
-
'Tis the season to once again lament the loss of HAMR. This morning I've been playing with the standalone HAMR viewer and it still works amazingly well... even with basic projects created in v19 Beta2. What a wonderful effort HAMR was... so far ahead of its time.

Aside: As for you Mac users that use Boot Camp to run PC programs... I'd love to know if you can run the HAMRviewer application. You should be able to.

I still have high hopes that some of the code might someday be released so a standalone viewer for A:M files might be more readily distributed. In the meantime I will still explore what there is to explore via the viewer.
-
Darktrees haven't worked on the Mac platform for many years but they still work (on the PC). Here's a quick verification rendered in v19Beta2.
-
Here's a run with the ground plane having a big hole pushed out of it (and using a hull). Lest you think I simulated with the characters present... I cheated. I simulated with cubes like the purple one in the shot and then swapped out their shortcuts for the characters. There were a few pass-throughs of arms and such, so I just rotated the models slightly to alleviate that. I added a tile image to the ground/grid to show the density I used... I don't think we need that dense a mesh but... that's what I used.
-
Yes, Matt's suggestion to create a denser ground plane should work. I took your project, added a 100x100 grid to the ground model (rotating it so that I could see it better), removed simulation data and then launched Bullet. Everything appears to work as required. Attached image is of final resting of puzzle pieces using your Project just with an additional grid added to the Ground model.
-
Recommend you ping jason@hash.com and let him know you put in a request to orders@hash.com
-
Here's a quick attempt at the Human Torch. I confess that I didn't think it'd turn out this well. I was expecting an unrecognizable mess. From start to finish, about 30 minutes of work, with a couple of 'errors' that turned out to be strengths. The particle fire is created using a single tiny orange rectangle.
-
Last night I found myself contemplating that storyline a little more and even did a few quick pen doodles on paper to capture a few character moments (cues that might jog my memory if I ever decide to explore some more). Tonight I decided to see if I could put together a quick version of Aunt Petunia's favorite nephew, Benjamin Grimm. The rocks/plates were a hastily tossed-in Cell Turb material. (You don't want to see this guy from the side, but it looks a little like him from the front.)
-
I just realized that contraption looks (somewhat) like a duck. Although why Mr. Fantastic is running forward to save a mecha-duck is unknown to me. If Mr. F's eyes were looking backward and we saw Galactus in a bath towel giving chase... that might explain something. (or not) Perhaps it's a Jack and the Beanstalk type story with the role of Jack being played by the fabulous four. And the duck... in the role of the golden goose. Hmmm.... something weird going on with the random ducks that sneak their way into my 3d doodles.
-
Thanks David! I definitely need to look into that Standalone Face rig.
-
Random Mr. Fantastic

I started modeling random legs (with no real target to hit) and the color I applied happened to be blue... That made me start to move in the direction of Fantastic Four characters. I was working my way from bottom to top and I got tired of the model about the time I got to the chest... but I'm glad I didn't give up at that point, because those splines then started to turn into Mr. Fantastic and the little details kept me interested. Needs to look at least a little like Reed Richards... needs the chest logo... needs the signature hair... hmmm... need to make sure he can stretch... pose him in a Chor... what the heck is he doing... need some kind of Kirby contraption... how about a corner logo showing the unposed character... needs some kind of title/text... The Ultimate Nullifier thingy can be made with the SVG importer, with pieces assembled and modified in A:M. I didn't bother to model the hands. All in all, a fun foray into trying to capture the sense of a character from memory.

Edit: Added the Invisible Girl but she's a bit hard to see... 'cause she's invisible.
-
Nice. When I first looked at your image I thought it was real equipment!
-
Thanks! I don't use volumetrics very often but I like them! I need to spend some time with them so I better understand how to use them.

Up next... I revisited an old idea that I couldn't get to work a long time ago related to render passes and... it worked quite well. The basic idea:

- Create controllers (for this test I created Background, Middleground and Foreground controllers).
- Import models into the Choreography and assign them to a controller. Assignment is based on overlap: background objects will almost always have something that lies in front of them, middleground objects will often have something in front of them, and foreground objects rarely will. Objects within each 'zone' can freely overlap.
- Turn controllers on/off in order to render out passes: nothing (only backdrop/camera color/alpha channel... as applicable), background zone only, background and middleground zones only, all objects, middleground objects only, and foreground objects only.

The controllers use expressions to drive the Active property of models in a Chor. A handy trick for placing objects in the scene according to their 'zones' is to move to a frame where the models are visible, turn off Animate Mode, and adjust the models to get proper overlap over or behind objects in other zones. Then turn Animate Mode back on. In the animated screen shot (see attached) the colored squares in the lower left corner are the controllers.

This relates a little to tests automating crowd control. Another related idea is to have objects/characters move out of the way automatically as another object/character approaches (i.e. objects/characters avoid a controller). I think that is what originally got me thinking about this approach to render passes, when I saw Hash Inc's Tech Talk about dynamic objects. Steffen does have a Render Passes plugin that can assist in this manner, but I don't think it directly manipulates transparency, and it doesn't use the Active property of models.
In prior versions I think I set up the expression wrong or A:M wouldn't save the expression once created using the Active property (I can't quite recall). But it works great now!
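The zone/controller scheme can be sketched abstractly. The names below (ZONES, models, passes) are illustrative stand-ins rather than A:M's actual API, but they show the core idea: each pass is just a set of zone controllers switched on, and each model's Active state follows its zone's controller:

```python
# Three overlap zones, mirroring the Background/Middleground/Foreground
# controllers described above.
ZONES = ["background", "middleground", "foreground"]

# Each model is assigned to exactly one zone (hypothetical scene).
models = {
    "sky": "background",
    "house": "middleground",
    "hero": "foreground",
}

# Each pass names the zones whose controllers are on. In A:M an
# expression on each model's Active property would read its controller.
passes = {
    "background_only": {"background"},
    "back_and_middle": {"background", "middleground"},
    "all_zones": set(ZONES),
    "middleground_only": {"middleground"},
    "foreground_only": {"foreground"},
}

def active_models(pass_name):
    """Return the models whose zone controller is on for this pass."""
    on = passes[pass_name]
    return sorted(name for name, zone in models.items() if zone in on)

for name in passes:
    print(name, active_models(name))
```

Rendering each pass then reduces to flipping one set of controllers rather than hand-toggling every model, which is what makes the expression-driven Active property approach scale to larger scenes.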
-
Ah, understood! I ran across an old dongle for A:M the other day and it made me curious which version it was for, as it's been so long that I've forgotten. It *could* be for v13 but I'm not sure. I need to investigate. I remember I purchased the dongle to get access to NetRender... which every A:M user gets these days. It certainly could be circa v13.

Best of luck with your attempt to get up and running with Boot Camp. From what I've heard, that should work well for you.
-
Yet another reason to leave v13 behind. I'm glad we don't need a CD/DVD drive any more.
-
I'm hardly in a position to question that, and yet... if you got that recommendation from Dylan he may be slightly influenced by someone who suggested v13 was the last release Martin wrote code on. Theoretically that alone would make it better... automatically! BUT there have been so many useful and production-worthy improvements since v13 that I wouldn't want to use it instead of current releases. Not by any stretch of the imagination.

Now, that does not invalidate the theory that v13 might be the most stable for production, but there is surely a long list of criteria that would need to be checked off in order to make it true. I could certainly think of many reasons why v13 would be on or near the top of the list, and I can think of a few that have little to do with how stable the release is. I guess what I'm trying to say is that there are a lot of variables to consider, and the statement alone suggests that a few definitions might need to be set and additional details examined before I'd even begin to consider v13 a more reliable release than those that came later and built upon v13's very substantial foundation. But you've definitely got me curious.

One question that immediately springs to mind: is there even a 64-bit version of v13? If not, then... 'tis mothballs for v13... and if yes, then the jury is still out, with no estimate of when a verdict might arrive. My memory seems to set the 64-bit release circa v15. Hopefully someone in the know can confirm (or I can stop being lazy and look it up).

Update: It looks to me like v16 was the first to have 64-bit, so this makes me think v17g+ should be a very good candidate to start production testing (i.e. use as a yardstick to measure stability and production readiness from). I will assume everything is plus or minus (more stable/less stable... more feature rich/less feature rich... faster/slower... etc.) from that point of reference.

Disclaimer: All my theorizing concerns only the PC, as I don't have any reference point from which to draw an informed opinion for the Mac. One reference point: the only addition in v17g+ was to get v17g working on Mavericks when it was first released. That's the reason the 'plus' was added to the release.

Of course, the most important thing will be how well A:M works for you on your current system(s). I just hate to see people miss out on six or more years of excellent productivity enhancements. That sounds like a great idea.
-
Very interesting. It's nice to see your approach... and it looks like it's working great.