Hash, Inc. - Animation:Master

Everything posted by Rodney

  1. Looking sweet! I'm liking that shoulder movement.
  2. This is a given in the benchmark itself, as the device or format used is simply the one you want to benchmark. In this way we compare apples to apples and not apples to bowling balls... or, more likely if we aren't paying attention to the variables, Jonathan apples picked at the prime of their season versus Golden Delicious apples picked far too early in their season. So the answer is 'yes', you want to identify what device and format is being used in the benchmark. Ideally this would be a set standard and would only need to be expressly identified if/when that standard was deviated from. For instance, in my previous benchmark it would have been good to specify that I was rendering to PNG but with no Alpha Channel.
Rendering is something I'd love to spend some time researching, because I think there are massive savings to be had, or... in a perfect world... the concept of 'rendering' itself would become somewhat obsolete. That is to say, rendering would be obsolete in the sense that the user would no longer know that 'rendering' was occurring... or, if they knew, wouldn't particularly care. That is the revolutionary promise of technology and why we use computers: to eradicate, or at least radically reinvent and revolutionize, our understanding of time and space. I'd like to hear more of what you think about this. Who knows, perhaps we can discover ways in which it isn't strictly theoretical.
As I see Benchmark 1.0, it strives to work with what is already given, yet does this while accounting for considerable, even unwieldy, variation. This benchmark is primarily of use to the one conducting it, and as such is not (ideally) global in nature. Each person would have to conduct the benchmark themselves and compare against their own benchmarks consistently... and not always look over at the other fellow's achievements. Benchmark envy is to be avoided if you are to maximize use of what you already have.
This is not to say that a good benchmark cannot inform the global interpretation of information, but by itself it cannot account for the vast complexities of unknown variation. What sharing personal benchmarks does, then, is allow us to mark (and therefore benchmark against) the relative changes in other systems and consider those in light of our own system. In this way we can recognize the approaches that best benefit at the global scale and effect a similar change at the local level.
It should be noted that the benchmark is not software/hardware agnostic but is highly dependent upon knowing the original configuration, or, if the original is not well known, striving to maintain that original configuration. This is at odds with what most benchmarks attempt to do. They actually encourage variation when the goal is to reduce variation! The more that is known about the original, the more accurately the benchmark will reflect measures of change. However, that does not suggest the original cannot be in a constant state of change itself. It is assumed to be changing, but as long as the configuration doesn't change in any significant way, limited variation is maintained. This is like adding a component to a computer and then testing it, then adding yet another component and testing again. With each new test a new level of qualitative and quantitative information is retained. But even here, the poor data is not seen as waste... but as very useful information!
To better account for variation, then, the person conducting the benchmark simply tracks 'things' as they are introduced, removed or changed. For instance, if more memory is added to a computer, that new variable can be expected to shorten product cycles in the benchmark. Failure to see time shortened after increasing memory might therefore clue us in to the fact that the system being tested was already operating at maximum memory, and increasing memory, without some other change, will not be advantageous.
We then know to move our observation and testing to some other critical node in the system. Production is like this. We might think that one element of a production is failing, and with all good intentions we set out to correct the deficiency. But without adequate information we are at least as likely (better than a 50/50 chance) to do more harm than good in the exchange. We just might not know the difference until it is too late. Next up I'd like to discuss Baselines. In general a baseline's objectives are:
- Determine current status (is the system operating optimally?)
- Compare the current status to standard performance guidelines (what is operating sub-optimally or exceeding expectations?)
- Set thresholds/breaking points for when the status exceeds guidelines (if for no other reason than to notify the system that the guidelines need to be expanded or modified)
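The record-runs-and-track-changes idea above can be sketched in a few lines of Python. This is purely hypothetical code (none of these names come from A:M or any real benchmarking tool): log each run with whatever was introduced, removed or changed, and flag any measurement that drifts past a chosen threshold relative to the first recorded baseline.

```python
# Hypothetical sketch of the baseline idea described above: record runs
# together with the configuration they were measured under, then flag
# measurements that drift past a threshold relative to the baseline run.

class Baseline:
    def __init__(self, config, threshold=0.20):
        self.config = dict(config)   # known configuration at baseline time
        self.threshold = threshold   # fractional drift that triggers a flag
        self.runs = []               # list of (changes, seconds) tuples

    def record(self, seconds, changes=None):
        """Log a run, noting anything introduced, removed or changed."""
        self.runs.append((changes or {}, seconds))

    def check(self, seconds):
        """Compare a new measurement against the first recorded run."""
        if not self.runs:
            return "no baseline yet"
        base = self.runs[0][1]
        drift = (seconds - base) / base
        if abs(drift) > self.threshold:
            return f"outside guidelines ({drift:+.0%}) - revisit thresholds"
        return f"within guidelines ({drift:+.0%})"

b = Baseline({"renderer": "default", "format": "PNG, no alpha"})
b.record(120.0)                            # initial render took 120 seconds
b.record(95.0, {"memory": "8GB -> 16GB"})  # re-ran after adding memory
print(b.check(95.0))                       # did the change exceed the band?
```

The point of logging the `changes` dict with every run is exactly the memory example from the post: if a run after a memory upgrade does not land outside the band, that itself is useful information about where the bottleneck is not.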
  3. The last major effort to have python working programmatically within A:M was Petr Sorfa's python plugin (v9 timeframe). I wasn't very familiar with python, so my use of the plugin was mostly just running his scripts and seeing what showed up. Petr Sorfa Python Plugin (Old plugin and information)
  4. I know we can export Lights to a python script but... Can we reimport that script back into A:M, or is this for external usage only? (If not, I know the workaround... perhaps the more direct route... is to simply save the Chor setup and import that back into A:M. I do note, however, that the script it creates for lights is easily readable and editable as python.) The same appears to be true for exporting Choreographies to python scripts.
  5. BUMP (Because this is a cool topic and many have never seen it)
  6. Hi Simon, It looks like you are heading in the right direction. Here are a couple of things that I would focus on: While I assume there is an intended disorienting feeling going on in this scene, it's not immediately apparent what we are seeing. The windshield wipers clue us in that we are looking out the front of a car window. This might be a great place to use A:M's ability to overlay the foreground over the background. In other words, you are dealing with three main levels/layers already within this scene, so you might as well take advantage of them.
Level 1: Inside the car.
Level 2: The threshold (window). Because of where the title text is, this is where the initial focus will be.
Level 3: The (unknown) road beyond.
Separating these into three elements and working on each with the goal of drawing attention to the title will make for utter clarity despite the chaos in the scene. Level three is where I think you can bring the most to this, because even a few foggy elements moving in the scene will further suggest that this is a car moving through a dark and stormy night. Perhaps it is at Level 3 that the title text should be? As it is right now, the title is a bit too hard to read. It's a bit hard to judge this shot out of context, but that is my first impression. You've got a great setup here; now concentrate on the clarity of how it reads. Added: You don't have to actually composite layers in order to have these three levels in your scene, but it might help in making it easier to work with.
  7. Mark, If you've got a good case study that you can push toward A:M Reports, that may be just what is needed to push that into 32bit. I haven't investigated enough to know if some 32bit audio is PCM compliant. You never know, it may be that it's not that tough a change in the overall scheme of things. Adding it to A:M Reports helps to get it documented and prioritized.
  8. Ernest, Thanks for that write-up. That is very useful information to those of us contemplating productions of our own. You know I'm a fan of 'Subject 99'. It doesn't surprise me a bit that you'd consider taking 'Subject 99' into graphic novel and live action. If nothing else you've got a great animatic that you can show around! Obviously, I hope you can find whatever incentive it'll take to tell us more of the story... I wanna know what is up with this guy. That's easy: I think you should tell the rest of the story! I think it's also important to be able to say you're finished with this one... even if you begin to move on to other things (it's a given you'll do that). Yes, there most definitely is. I'd say it is when you've brought your story to a satisfactory conclusion (on your terms). If you don't want to tell the 'big story', perhaps you can still have the characters reach a satisfactory short-term conclusion. Perhaps you would prefer to get others involved in your storytelling. It's always time for a new series. Yes, there is, but this forum is (at least at this point) populated more by creative types than by consumers. We aren't going to have the same perspective as a movie-going audience would. Most of us don't want to just watch; we want to create products like 'Subject 99'! I'm not sure what your goals are, but if you aren't making money at 'Subject 99' then it's important that you enjoy it or (as Robert suggests) place it on hiatus for a while. But I don't think the series has to end just because new episodes aren't coming out... no... you still might have time to post little teasers and re-edits and whatever you like to keep interest going for 'Subject 99'. It's just like you to leave us on such a cliffhanger!
  9. I think... I forgot to post the report to A:M Reports.
  10. Here's my take... I'd say there are both technical and practical reasons for focusing on 16bit support of sound files. A case could be made for other additions and enhancements, but ultimately A:M is not a dedicated sound editor; there are thousands of those available that fully focus on sound. File Size: Sound files (even 16bit) can grow very large very quickly and often require considerable compression. In animation this can be especially processor intensive, as the files must be encoded and decoded on the fly. Realtime Playback: In order to focus on animation with as much realtime playback as possible, the straightforward approach might be to have two versions of a sound file, using the 16bit one as your proxy in A:M and then replacing it with the original at a later time. This is similar to methods for using non-standard graphics (gif animation files), where the files used are temporary (proxies) and then exchanged at the appropriate time. Here's a write-up from the Tech Ref/Help File for general information. I believe all of this still applies:
  11. I read this similarly to the old saying, "There is no such thing as a straight line in nature." And if that is true, then linear square patches are (purely) theoretical. In my messing around here I note that the thing about linear/square patches seems to be that the points are aligned (or in line) with other points in order to form 90 degree angles (hehe... no doubt hence the name 'linear'). This does leave the occasional point out of alignment, and I think a fix (not in the graphic) might be to scale or move the extruded points out until aligned with the original. An alternative method might be to scale the original and extruded point together (I assume an average here), but that would move the original away from its place with respect to its own origin. Therefore I think scaling the extrusion back in line with the original would yield the best linear spline square interpolation. (Yes, I have exactly no idea what I just said) Added: While messing around I came up with another construct and I'm not sure how, or even if, it would fit in. It'll require more experimentation.
  12. Congratulations Mark! I agree with your fan base. Make more 'Stalled Trek' films! Having folks enjoy something you've created at that level is a huge reward in and of itself.
  13. This is somewhat related... I've been staring at an issue but don't fully comprehend how to approach understanding what to do with the information. Specifically, most programs and plugins (and approaches, for that matter) tend to advance from a perspective that works against our goal of having smoother surfaces. In the image below this can be seen in (what I perceive as) the current approach versus an optimal approach. What we tend to see is: Block - prevalent in Bump and Displacement; Linear (Triangle) - prevalent in the Grid Plugin (except for optimal Control Point placement by assignment of lower Step Width and Height). We don't see Linear and Bilinear very often, and with good reason, but they would still be preferable in some cases over Block and Linear (Triangle). What we only tend to see in optimal spline generation is biquadratic, and that seems to be where the crossroads of optimization is in applying scale, control point placement and magnitude to mesh creation. Note that all of the approaches are (under the hood) comprised of the same spline continuity and are quadratic in nature; they just seem to get 'pinched' at critical points along those splines due to the given interpolation. This image is taken from 'Efficient Animation Techniques Balancing Both User Control and Physical Realism' by Zicheng Liu. It's well worth reviewing, as it has a chapter on Optimizing Keyframes (Chapter 4) as well as some exercises for animators.
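The Block vs. Linear vs. biquadratic distinction above can be felt in one dimension with a generic numerical sketch (this is plain Python, not how A:M builds its splines): reconstruct a smooth curve from a handful of samples using nearest-sample ("block"), straight-segment ("linear"), and three-point quadratic interpolation, and compare the worst-case error of each.

```python
import math

# Generic 1-D analogue of the interpolation modes discussed above:
# "block" = nearest sample, "linear" = straight segments, and a
# three-point quadratic as a stand-in for the smoother biquadratic case.

samples = [(x, math.sin(x)) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

def block(x):
    # value of the nearest sample -- produces the "pinched" stair steps
    return min(samples, key=lambda s: abs(s[0] - x))[1]

def linear(x):
    # straight segment between the two bracketing samples
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

def quadratic(x):
    # three-point Lagrange interpolation centered on the nearest
    # interior sample
    i = min(range(1, len(samples) - 1),
            key=lambda j: abs(samples[j][0] - x))
    (x0, y0), (x1, y1), (x2, y2) = samples[i - 1:i + 2]
    return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2)) +
            y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2)) +
            y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

# worst-case error of each scheme against the true curve
xs = [i / 100 for i in range(201)]
for name, f in [("block", block), ("linear", linear), ("quadratic", quadratic)]:
    err = max(abs(f(x) - math.sin(x)) for x in xs)
    print(f"{name:9s} max error {err:.4f}")
```

With the same five control points, the error drops sharply at each step up in interpolation order, which is the sense in which quadratic-style interpolation gets the most out of a given number of control points.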
  14. I never pictured you as an excel guru. I learn something new about the Tinkering Gnome every day. Many years ago (about 1994... when Microsoft purchased Foxpro) I was at a demo where a lady showed live video being played from a spreadsheet. (it was of a guy windsurfing). Everyone in the crowd ooo'd and aww'd. Then she showed the easter egg 3D game that someone had programmed into Excel. More smiles from the audience. I've been a fan of excel ever since.
  15. That wasn't so much a benchmark as a way to get the ball rolling. The 'perfect' benchmark would likely render a series of sequences to account for default settings and then adjusted settings (thereby forming a way to measure what those settings are 'computationally worth'). Initially I set about rendering 600 frames at 24fps, but it looks like I forgot to account for frame zero. Then I typo'd minutes for seconds... and yikes... doesn't that make a difference. But if others are learning from my mistakes, that can be a good thing. The neat thing about 600 frames is that it is nicely divided by 24, 25 and 30fps. That 601st frame really got things rolling. 600 frames / 30fps = 20 seconds; 600 frames / 24fps = 25 seconds. Without specifically remembering, I believe easy math was my initial goal. So perhaps I instinctively rendered 20 seconds of animation at 30fps and only lucked out that it was so relative to 24? In other words, I just got lucky. Attached is a pdf file I generated from an excel spreadsheet I just threw together that attempts to get a grasp on frame optimization. Assuming no typos, it seems to me that when switching over to 30fps things really start to break with regard to keyframe optimization (30fps has advantages, but one of them is not syncing with animation whose extremes were originally drawn on 4s, 8s, 12s, 16s, etc. 30fps simply doesn't align well). I haven't studied the 3:2 pulldown enough to understand its full effects on optimal conversion of keyframes, but a cursory view seems to indicate an attempt to target keyframes that will maintain a semblance of the original performance. Disclaimer: I do not suggest that the attachment will in any way help understand what I'm going on about here, but it's a raw data dump of numbers that I felt like typing into excel to make sure I understood it. frames_per_second_tables___draft___not_reviewed_for_accuracy.pdf
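The frame math above, including the frame-zero off-by-one, can be checked in a couple of lines:

```python
# The frame arithmetic from the post above. Note the frame-zero
# off-by-one: rendering frames 0 through 600 inclusive produces 601
# images, even though 600 frames is what divides evenly into seconds.

def seconds(frames, fps):
    return frames / fps

print(seconds(600, 30))  # 20.0 seconds
print(seconds(600, 24))  # 25.0 seconds
print(seconds(600, 25))  # 24.0 seconds

frame_count = 600 - 0 + 1  # frames 0..600 inclusive
print(frame_count)         # 601 -- the extra frame that snuck in
```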
  16. Yes, indeed. 16 plus 1/2 of 16 (8) is 24. You have more than good reason to be assured.
  17. Where SMPTE gets dicey... and people tend to get lost... is where the numbers have to deal with the 'set' frames per second (usually 24 or 30fps). Thusly... what comes next in this sequence? ... 00:18:20 00:18:21 00:18:22 00:18:23 ?
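The rollover being asked about can be sketched in a few lines. This assumes the post's short seconds:frames reading of the timecode (last field = frames, counting 00 up to fps-1 before rolling into the seconds field); the function name is mine, not anything from A:M.

```python
# Sketch of timecode rollover: at 24fps the frame field counts 00..23,
# then rolls over into the seconds field (and seconds into minutes).

def next_timecode(tc, fps=24):
    m, s, f = (int(part) for part in tc.split(":"))
    f += 1
    if f == fps:      # frames roll over at the frame rate...
        f = 0
        s += 1
    if s == 60:       # ...seconds roll over at 60 as usual
        s = 0
        m += 1
    return f"{m:02d}:{s:02d}:{f:02d}"

print(next_timecode("00:18:23"))  # after frame 23 the count rolls over
```

So at 24fps the answer to the sequence is 00:19:00, not 00:18:24; at 30fps the same position would simply continue to 00:18:24.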
  18. I'm straying a bit off topic, but this is fun stuff. Something that I didn't understand before was how xsheets and those timing charts on the extremes were read. That is real gold when trying to analyse classic animation sequences. As a for instance, let's consider our standard timing in SMPTE format (00:00:00). While this equates to hours, minutes and seconds, it also equates directly to frames. Thusly: 00:03:04 is the equivalent of 3 seconds (feet) and 4 frames. 00:20:01 equates to 20 seconds (feet) and 1 frame. What then does 601 frames equal (in feet)? 00:20:01 (in SMPTE), which is equivalent to 20 seconds and 1 frame. Is everyone still with me here?
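The frames-to-timecode conversion above is a single divmod, assuming 30fps (the rate at which 601 frames lands on 20 seconds and 1 frame); the function name is mine for illustration.

```python
# Convert a raw frame count into the post's seconds:frames reading,
# assuming 30 frames per second.

def frames_to_timecode(total_frames, fps=30):
    secs, frames = divmod(total_frames, fps)
    return f"00:{secs:02d}:{frames:02d}"

print(frames_to_timecode(601))  # 20 whole seconds with 1 frame left over
print(frames_to_timecode(94))   # 3 seconds and 4 frames, as in the post
```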
  19. Here's a breakdown for reference: (Image courtesy of: DonBluth Animation) Note: Don's Seminars have been on hiatus for the better part of a year. An announcement recently indicated they will be starting up again with new tech to resolve problems experienced in previous sessions.
  20. Is your fps set to 30fps in the Project Properties? 601 / 22 = 27.3181 on my calculator. Aside: I find it interesting that that number is right smack dab in the middle of 24 and 30. That is telling us something. At a glance it looks like the difference of one frame extra per second. A general breakdown:
25 seconds at 601 frames = 24.04 fps
24 seconds at 601 frames = 25.0416 fps
23 seconds at 601 frames = 26.1304 fps
22 seconds at 601 frames = 27.3181 fps
21 seconds at 601 frames = 28.6190 fps
20 seconds at 601 frames = 30.05 fps
Note: I am trying to make a distinction here between Feet Per Second (FPS) and Frames Per Second (fps) by using the lower case 'fps', as the number that goes by the aperture of a camera can vary. One foot in traditional animation terms is pretty much set in stone at 16 frames. Here's an interesting website that covers some of this information: http://frames-per-second.appspot.com/ Marcos understands the higher (and lower) frame rates that drive innovations such as MUFOOF. Here's a response from Don Bluth after minds got confused on the issue: Of course the standard for video is closer to 30fps than 24fps, and that is largely due to the need to bring down labor cost/time. An animation on 2s, for instance, simply doubles all the frames, so all you need to do is create/draw/render the odd-numbered frames and repeat them for the evens. A slower action could be on 4s, which means one original and three copies. An even slower action might be on 8s, which really slows things down. Reviewing on 8s is a good standard because otherwise the action is too quick to review. David Nethery (whose website is worth checking out) has this to say: Again, he's talking about traditional animation. Some conversion often takes place in order to move to video at 30fps. But note that everyone in the traditional art will 'automatically assume' you are animating at 24fps. This changes with computer animation, in that 30fps is often easier to use in math.
Everyone should take a moment to recognize the similarity between bits and bytes and traditional animation. It is more than just coincidence: 1 2 4 8 16 32 64 128 256. Fun stuff this is.
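The breakdown in the post above is just 601 divided by each candidate duration, and the "on 2s" doubling is a simple repeat of each drawn frame; both can be reproduced directly:

```python
# Reproduce the 601-frame breakdown: the effective frame rate if 601
# frames are meant to fill a given number of whole seconds.

TOTAL_FRAMES = 601
for secs in range(25, 19, -1):
    fps = TOTAL_FRAMES / secs
    print(f"{secs} seconds at {TOTAL_FRAMES} frames = {fps:.4f} fps")

# Animating "on 2s": only the odd frames are drawn, and each drawing
# is repeated for the even frame that follows it.
drawings = [1, 3, 5]                       # frames actually drawn
on_twos = [d for d in drawings for _ in (0, 1)]
print(on_twos)                             # -> [1, 1, 3, 3, 5, 5]
```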
  21. Oooo. Really nice render. I like that!
  22. Welcome back to the forum Larry! :)

  23. I was going to start listing the various 'features' you've put into this project but I think it's easier to just say... everything!