Everything posted by Rodney
-
Thanks! You just made me realize that this will help in many more ways than originally anticipated.

For those interested in the process of benchmarking itself, it's important to note that there is a Point of View (POV)... or perspective if you prefer... in every test. This is the bias that we need to account for in every single benchmarking test. It's also one that is extremely hard to pin down. But... this is where it gets really cool! There are two primary POVs on any given benchmark (three really, but bear with me here):

Your POV (Local)
Others' POVs (Global)

Ultimately, the only benchmark you are personally interested in is your benchmark, the one that radiates outward from your POV. When we continually compare our benchmark with other people's benchmarks we get a skewed picture that does not (cannot) fully apply to our own effort. It's good to keep both in mind, but we need to focus on the variables we can control at our production level. To step out of the local arena is to assume a different level of responsibility, and where frustration really starts to form and conflicts arise is where the level of access is not appropriately matched to the level of responsibility. In short, if you do not have access to the global resource, how can it possibly help you locally? Therefore, proper benchmarks at the local level must focus on variables at the local level. All others may be informative but they may or may not relate; where they relate is at the crossroads where the global and local levels meet.

That was a long way to get to this: anyone can post a benchmark from the local level and someone else can test it at the global level. The only thing remaining then is to loop in feedback so that the differences can be compared and contrasted. But that's a bit broad for the present. The goal is to reduce variation within a specified control. Where possible that control should be user specified, with optimal and suboptimal results recorded and shared globally. But how does one determine what is optimal? For that we need some kind of control. The point of global benchmarking, then, is to demonstrably show others what (optimally) works. The feedback loop can then confirm the optimization and the production cycle can begin anew.

Note: A project file need not move from the local level to the global unless/until it is seen to be out of control or highly optimal. The concerns in between can (usually) be simply monitored as they satisfactorily move forward on schedule. Trivial matters then enter the realm of Research and Development, where people purposefully break things in order to explore and innovate.

But what is accepted as being in or out of control? I submit to you that in cases where things fall out of control, the first comparison to use would be to run tests at the 0.0 Default level of benchmarking. This compares the current results to the norm, endeavoring to return toward the default to the maximum extent possible. This can then also be compared and confirmed with the 2.0 Technical level of benchmarking to ensure the production is operating at peak capacity, while ever seeking higher levels of optimization by thinking through the process.

At level 0.0 there is no thinking: just do the default and require everyone else to do it too. This is how 'global reality' works. How closely does your 'local reality' align with it?

At level 1.0 something new is produced. Variables are added to the default that can be compared and contrasted with the default. The product isn't optimized at this stage; it is produced.

At level 2.0 further optimization is secured with an eye toward the future. Things get seriously broken. Learning is afoot.
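To make the "in or out of control" idea concrete, here is a minimal sketch (not an A:M feature; the function names and all numbers are invented for illustration) of one way a local run could be checked against repeated runs of the 0.0 Default benchmark using simple control limits (mean plus or minus three standard deviations):

```python
# Hypothetical sketch: treat repeated runs of the 0.0 Default benchmark as the
# "global" norm and flag a "local" run that drifts outside simple control limits.
from statistics import mean, stdev

def control_limits(default_run_times):
    """Control limits from render times (seconds) of repeated 0.0 Default runs."""
    m = mean(default_run_times)
    s = stdev(default_run_times)
    return m - 3 * s, m + 3 * s

def is_in_control(local_run_time, default_run_times):
    low, high = control_limits(default_run_times)
    return low <= local_run_time <= high

# Example: five recorded default runs vs. one new local run (all times invented).
defaults = [97.0, 99.5, 95.8, 98.2, 96.4]
print(is_in_control(140.0, defaults))  # False -> return toward the default, then confirm at 2.0 Technical
```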
-
Every installation of A:M has a default chor. Just render a two minute movie with it. For a first variable I would suggest removing the default Ground Plane. There are two (primary) ways to remove the default ground plane:

1) Select the Ground Plane from under the Choreography and delete it (Delete key on keyboard).
2) Inactivate the Ground Plane by opening the Ground Plane and setting its Active attribute to 'Off'.

After any changes are made the Project should be saved (to an appropriate benchmark name) to ensure the settings are locked down. The resulting file can be shared with others who wish to test the same benchmark on their system.

Added information that few will be interested in. The products this benchmark will produce are of the following nature:

1) Backgrounds
2) Placemarks (Proxies)

Backgrounds: Rendered backgrounds can be reutilized and should be saved for reuse. Re-rendering the same background over and over again with no changes is an example of poor optimization.

Placemarks: Rendered backgrounds can be reused and replaced in other scenes. They can also serve as proxy frames pending replacement (Ref: Animatics). The placemarks can be rendered at 1x2 with the alpha channel on so they are completely transparent (a faster method would simply duplicate the original frame to the desired time length; see the sketch below). When rendering to PNG format (or another format with transparency) the placemark can remain in place or be replaced as the subsequent/improved renderings arrive to fill in that space. PNG is currently optimal for use with HTML overlays. TGA is optimal for legacy compositing and interoperability. EXR is optimal for single image data (and with EXR 2.0 depth information is stored in the file as well).
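A minimal sketch of the "duplicate the original frame" shortcut mentioned above, assuming the placemark has already been rendered once to a transparent PNG; the helper name and paths are hypothetical, not anything built into A:M:

```python
# Hypothetical helper: duplicate one rendered placemark frame into a zero-padded
# PNG sequence so it holds the shot's place on the timeline until improved
# renderings arrive to replace it.
import os
import shutil

def fill_placemark(source_png, out_dir, start_frame, end_frame, pad=4):
    os.makedirs(out_dir, exist_ok=True)
    for frame in range(start_frame, end_frame + 1):
        shutil.copyfile(source_png, os.path.join(out_dir, f"placemark_{frame:0{pad}d}.png"))

# Example: hold frames 0-600 with a single transparent placemark image.
fill_placemark("placemark.png", "renders/shot01", 0, 600)
```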
-
Trust me, low impact is the goal here. The whole point is to make it the lowest impact possible to promote optimization. All that I am suggesting is that a smart benchmark can produce something that is useful beyond the benchmark itself (a reuse methodology). If the benchmark is successful it could then be incorporated into the workflow and a new benchmark created to replace it.

I understand the old way of benchmarks enough to know that they target the technical. As such my proposal has three core series:

0.0 Default Benchmarks
1.0 Production Benchmarks
2.0 Technical Benchmarks

Of course it could have more, but three areas would not only be optimal in and of itself but would mesh well with the whole idea of Preproduction, Production and Post Production. Specifically, in Preproduction we establish (or reestablish) the defaults: a strong foundation to build upon. In Production we execute the plan; where there is no production plan we will fail or falter in reaching our production goal(s). In Post Production we refine our products, improve our presentations and thoroughly establish new benchmarks based on successful standards that have proven to work. Then the cycle can start anew.

In the Default Benchmarks anyone with A:M can test things out just by opening A:M, executing a task and recording the results. This is primarily useful at the programmer's level (Steffen certainly has his own benchmarks).

The Technical Benchmarks (which I believe to be the purpose you ascribe to benchmarking) tend to focus on hardware optimization. These are the hard, cold facts that only technical minds enjoy. Their focus is also on those who have the best and most optimized equipment, and as these benchmarks identify optimized hardware, the vast majority of A:M Users cannot fully take advantage of them without considerable expense.

What I propose attempts to bring the user, the production and corporate knowledge of A:M fully to the fore in order to get into the production process itself. This is an area of analytics that is often misunderstood and undocumented. It directly affects the user, who otherwise may unknowingly be working against themselves, and it forms a framework for establishing realistic expectations, production schedules and time management. It also provides a method of optimizing through recycling and waste management.

Somewhere along the way computers wrested benchmarking away from other areas of interest. The goal even in hardware-centric technical benchmarks is to measure productivity. It does little good to optimize the hardware only to see that optimization neglected (or bypassed entirely) due to random user-defined production criteria that can only be optimized through training. I base this on a belief that A:M Users want to produce animated products more than they want to optimize their hardware. Because there are no benchmarks that measure productivity I have little evidence to support or dispel that belief.
-
My first foray into benchmarking produced:

Title: Background Plate Benchmark (Default Cho) Mini
Frames: 601 (equivalent to a 20+ second 'mini' movie)
Render Time: 1 minute 37 seconds

Lots of things not yet ideal nor optimized. What I believe this benchmark shows: the results suggest that (unless further optimized) it will take no less than 1 minute and 37 seconds to render a 2 minute 'mini' movie on my computer.

Added: Results from the second run of this particular benchmark, scaled to VGA, are in. Interestingly, A:M produced the same number of frames at the larger VGA size in less time (10 seconds less).

Title: Background Plate Benchmark (Default Cho) VGA
Frames: 601
Time: 1 minute 27 seconds

Conjecture: the decrease in rendering time is possibly due to more memory being available to A:M at that time.
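For anyone who wants a single number to compare runs with, the two results above work out to roughly the following throughput (simple arithmetic on the reported figures, nothing more):

```python
# Back-of-the-envelope throughput from the two runs reported above.
mini_seconds = 1 * 60 + 37           # 1m37s -> 97 seconds
vga_seconds  = 1 * 60 + 27           # 1m27s -> 87 seconds
print(round(601 / mini_seconds, 1))  # ~6.2 frames rendered per second (Mini run)
print(round(601 / vga_seconds, 1))   # ~6.9 frames rendered per second (VGA run)
```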
-
By way of establishing a useful and effective benchmark that helps A:M Users produce a short film or movie, I submit the following for consideration:

Any benchmark undertaken should help advance an A:M User toward the creation of their image, short film, series or feature. Any benchmark in the 1.0 series should be A:M-centric (comparing A:M to other programs is not the goal, but comparing A:M to itself on various systems will reveal useful information that can be adjusted and optimized). The fewer variables in a benchmark the better.

With these considerations in mind we can move toward a proposal for the first 'official' A:M benchmark. Note that this would be the second series of benchmarks, because the first would use only default settings found in A:M, as determined by selecting "Reset Settings" in the A:M Help menu and then performing and measuring a stated task. The default benchmarks (right out of the box) would likely be termed Benchmark 0.0.

Through effective benchmarking that progresses the A:M User toward their goal, all variables can be known, their effects measured and altered to suit the needs of the user. Any subsequent benchmarks beyond the default should be meticulously documented with all variables identified to the maximum extent possible (an unknown variation may still be within control limits but effectively out of the user's control). Where possible a default file should be shared with those who will conduct the benchmark, and for testing purposes it would replace the default in A:M (example: replacing the default.cho file). Assets required for a particular benchmark can be narrowed in scope to a Project file, Choreography, Model, Action, Material, etc. in order to benchmark each of those areas. It is hoped the best benchmarks would then be incorporated into A:M as optimized procedures which, when run automatically (at installation or at another time specified by the user), establish a realistic foundation from which to project the user's production schedule.

Note: Wherever possible the introduction of variables within the benchmark should be used as an opportunity to learn. (Example: telling A:M what file format to render to, where on the user's harddrive or network to render to, and then padding the filename with zeros in order to optimize the sequential order of images produced.) These represent three user specified variables that will have an effect on the user's production schedule (see the padding illustration below).

A proposal for the first 'useful benchmark' beyond the default settings tests would be to render a two minute movie (from a project file provided). Assets rendered from this benchmark would be used to assist in the creation of the user's own two minute movie. (This setup can then be easily altered to target a still image (single frame), 30 minute short, 90 minute movie, etc.)
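As a small illustration of the filename-padding variable named in the note above (the file names are hypothetical, not an A:M setting), zero-padding is what keeps a rendered image sequence sorting in the correct order:

```python
# Why zero-padding matters: plain numbers sort lexically, padded numbers sort sequentially.
unpadded = [f"frame_{n}.png" for n in (1, 2, 10, 100)]
padded   = [f"frame_{n:04d}.png" for n in (1, 2, 10, 100)]
print(sorted(unpadded))  # ['frame_1.png', 'frame_10.png', 'frame_100.png', 'frame_2.png'] -- out of order
print(sorted(padded))    # ['frame_0001.png', 'frame_0002.png', 'frame_0010.png', 'frame_0100.png']
```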
-
Thanks, I got it. I've set it aside unopened for the moment and will dive deep into it as soon as I can. A question: I'm sure you've told me this before, but what software are you using to create your script? Word? Celtx? Or something else? The reason I ask is that some software allows direct analysis of each character's individual story arc as it runs through the story. After reading through three times, I'd like to go back and follow each through line. I've found the best way to do this, outside of reformatting the document, is to print the script out and read through looking for the appearance of each character (a bit tedious). So it'd be nice if the software can sort or specify which character to follow and in some way highlight that.
-
More SIGGRAPH 2012 related postings... This by itself is very cool and interesting, but I'm already wondering how the tool can be extended. Rather than take multiple shots of a chrome ball to capture lighting, diffuse, mirroring/reflection, specularity etc., some folks decided to pursue the idea of having one single ball capture all the sampling needed. http://gl.ict.usc.edu/Research/SSLP/A_Sing...IGGRAPH2012.pdf This has bigger, more colorful imagery: http://gl.ict.usc.edu/Research/SSLP/ Now what I'm thinking is how we can use this idea to capture the environment that exists inside a manufactured 3D scene. There are obvious problems with this (for instance most scenes do not have an actual environment... it's all faked) but the important thing is that it could give us a tool for feedback (a feedback loop) that informs us as we create more realistic environments in our scenes. So in the end the probe in the real world matches (more or less) exactly with the probe in the 3D scene. At any rate the Single Shot Light Probe is a pretty interesting thing.
-
Were those splines attached in the modeling window manually or were they attached programmatically? Something is obviously wrong there, and more than the obvious, something deeper seems wrong. In my experience, that is not the way most three point patches react. I see the point on the tri that you are interested in, but barely. Edit: Is that a flipped normal on that tri-patch? Perhaps you can share the steps you used to set up and render that?
-
I don't think we'd established this fact here in this thread either: the illustrious Brian Prince (famous lighter of 1000 hour renders) said: In his spare time Brian liked to meticulously set up scenes (many from the local area where he lived) and render them. All the pics here were made by Brian with A:M: http://brianprince.squarespace.com/3d/personal-3d-pieces/ This was long before the advent of Global Illumination and all the fancy lighting bells and whistles.
-
Disregard. Allie wasn't hard to find once I used the right words in my search. Look for it shortly in the A:M Exchange Model section. Yes, all A:M! I'm sure there was some compositing/editing of film in Premiere or some similar software. Somewhere there is an early SIGGRAPH video where one of the animators takes you through the use of Layers for compositing, as most of the backgrounds were not 3D but were painted.
-
Here's a bump up to the oldest topic currently in the New Users forum. (Note that the links in this topic above are long outdated. Don't waste your time clicking up there.) I just ran across something that led me to a guy who claims to have designed Keekat: Steven A. Vitale (goes by the handle LOOSETOON). He didn't design the computer model or have anything to do with the ill-fated feature, but he has stated that he designed the character. I think he also drew the comic strip, which you can see at the Paw Island site here, but the signature says E.S. Vitale. One more mystery... Here's an example of the comic strip (pretty crudely drawn but hey... it's Keekat!): http://www.pawisland.com/toons/comics/12.html In other news, at one point I had Keekat's sister Allie (unrigged as I recall) and I thought I had posted that to the forum. At a glance I don't see it. At one time it was hiding on Robert (Bob) Taylor's site but I couldn't find it there either. (Just two things I may follow up on some day.) For those that haven't seen the trailer, it was pretty awesome looking, especially for its 2000-2001 timeframe. It would have been a fun film. Here's the link to the trailer on the Paw Island site: http://www.pawisland.com/trailer/large.html
-
Robert, If you need anything (forum-wise) let me know. Happy animating!
-
Okay. That works for me. Thanks for the information Fuchur!
-
I really hope you can do that.
-
Well, now you've got me curious! Are you really going to leave us hanging without expanding upon that? Oh to be a fly on David Simmons' wall (with a nice zoom lens video camera pointed directly over his shoulder at the monitor, of course).
-
By your response I see that your head is in the right place. I wouldn't worry too much about being taken as a negative troll, but consider that there is only one kind of troll. Don't be one of those. Consider how your words may be taken. That is all. My apologies if I came off as a troll buster. *I wrote other words but thought they might be misconstrued. I don't mean to be short in my response to you. But short often = good.
-
Count on it! (P.S. You still owe me a script. ) BTW: Have you seen this on phonemes/lipsync: http://www.garycmartin.com/phoneme_examples.html It's by Gary Martin who animated with A:M long before he got all famous.
-
Not at all. The value of A:M itself has always been intrinsic to the product. The users then add even more value to the product when they follow his vision. Nope. I believe you've got the cart before the horse here, but really, Martin's private life and what he does with it is none of my business. If I told you Martin's absence (especially from the forum) is a calculated business decision, would you be able to perceive that? Hasn't Martin made his desires well enough known that we can plot our own way? I know the David and Goliath story well, and it still holds true today. I speak in terms of the numbers game here, as it seemed to me that by the term 'more flexible' you meant changing A:M into another program, abandoning what is core to the whole idea of A:M. Yes, it is too easy to say staying the course is impossible. Too easy. But why would we ever want to speak that sort of self-fulfilling prophecy anyway? Negativism and doubtful murmurings will always be the easy way. Have ye no faith? '(Now) faith is the substance of things hoped for. The evidence of things not seen. Hebrews 11:1'
-
It's probably not worth noting... it'll just create false hope... but circa v13 Hash Inc added code (polygon modifiers or somesuch) designed to better interface with polygons. If I had to guess I'd say it was largely a byproduct of the early effort to create Simcloth... that was abandoned for a more spline-centric approach to cloth animation. The polygon modifiers were turned off in the interface because there wasn't really anything to connect them to... still isn't... no one from the polygon world has been interested in interfacing. But Steffen knows it's there. It may even be documented in the current SDK. That's definitely the best way to get things done in A:M!
-
They do indeed! The voice in my head tells me to say... it would be nice to gain a little more feeling of depth than what we are seeing here. Compositionally this would have the background fading to a lighter hue while the foreground would be darker. Edit: The above is really a nitpicky thing. I like your composite as is. Is this one frame of a sequence? If it is, it'd be nice to see a test even if the characters aren't walking... perhaps only sliding... through the scene. That movement is really what will give this scene a sense of depth. Added: A trick I have used to get different elements not from the same source to match better is to add noise over the top of everything. This can be done in several ways, one of which is just to place a mostly transparent image over the top and have it move slightly so that no element of it stays in place. Very, very subtle... almost unseen is what you are going for there. Another way (which I have yet to quite get going effectively) is to use A:M's Post Effect called Film Grain. For my purposes it always seems to be too much or too little, so I go with the more manual overlay method. John (Tinkering Gnome) Lemke recently posted a smoky fog/haze technique that takes a similar approach, but in your case you aren't wanting the effect to be as visible, just to break up elements and tie other elements together. (Here's John's topic where he creates environmental space via materials on 2D planes)
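For anyone who wants to try the manual overlay, here is a rough sketch of generating such a grain layer, assuming Pillow is installed; this is not A:M's Film Grain post effect, just one way to produce the mostly transparent noise image described above (the names and numbers are my own):

```python
# Hypothetical sketch: a grayscale noise layer with very low alpha, to be laid
# over the composite and offset slightly each frame so the grain never sits still.
import random
from PIL import Image

def make_grain(width, height, max_alpha=12, seed=0):
    """'Almost unseen' is the goal, so alpha stays very low."""
    random.seed(seed)
    img = Image.new("RGBA", (width, height))
    px = img.load()
    for y in range(height):
        for x in range(width):
            v = random.randint(0, 255)
            px[x, y] = (v, v, v, random.randint(0, max_alpha))
    return img

grain = make_grain(640, 480)
grain.save("grain_overlay.png")  # jitter this layer's position per frame in the composite
```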
-
Fuchur, When you say you've uploaded these videos to the server... where exactly is that? If you mean to say A:M Films I didn't see them there. I'm interested in viewing Raf's Non Linear Animation video because mine was/is of very poor quality. Hopefully this one is better. Thanks!
-
That is much, much better! There are some composition problems that distracted me the first time through, so I missed a lot. Specifically, we can't see Latimer's hands on the table. They look as if they are missing or inside the top of the table until he raises his hands. Yes, I know you are blocking, but any place your character makes contact with something in the scene is of maximum importance. Try to find and lock down those contact points as early as you can. Those contact points are what you are really blocking (placing/blocking out things). The rest is refinement of the character's performance. As a rule lipsync is accomplished toward the end of animation (although I can think of a few exceptions). Body language... now that's the thing. And you've got it showing up in this take. A note regarding the gesture on the (second) word 'here'. It's refinement, but I would push that back even more until it no longer overlaps the election poster. That would likely be an overshoot, and where you have the arm/hand stop now is the point at the minor extreme. It could then go past that point and slowly settle back to where it overlaps the poster again slightly on its return to (or near) that minor extreme. I'm tempted to say... toss that one out and start anew, because if you keep improving at this pace every time you do... you, Sir, are going places! I'm quoting these words from Robert because I really appreciate what he is telling you here. In the early stages of (learning) animation it is really hard to animate too big. Really exaggerate. Be fearless. That's why I suggest throwing out the second take and starting fresh with a new take. This will get your mind into the habit of quickly ramping back to the place you want to be and pressing forward to the next level. And each time you do, you get better and quicker and your keyframes will be more organized and clean. *Of course when I say throw out I really mean Save! While it's unlikely you'll ever go back to that take again... you may need to some day. Save often and incrementally (ex: filename001a.prj)
-
Watching now. Nice demonstration of the problem. You are using v17.0a? I've seen some similar bounding box issues in the past but I'm not seeing it now. Also, there seem to be some icons missing up top (perhaps they are to the right/hidden/off the screen). Make sure you aren't in Global Mode (the one that looks like the planet Earth).
-
Take heart. I was only going on first impression. I SHOULD HAVE COMMENTED ON YOUR FACIAL ANIMATION but my brain refused to see it. It was too caught up in the main (body) activity. I do really like what you said about sleeping on it and then reviewing again in the morning. That is sure to put a new perspective on everything.