Hash, Inc. - Animation:Master

Rodney
Admin
  • Posts: 21,575
  • Joined
  • Last visited
  • Days Won: 110

Everything posted by Rodney

  1. Go Jason! We'll steer folks away from using the ID alone.
  2. In the post Robert linked to, the BBcode is: [ youtube]MK0ITXBWpHE[ /youtube] (without leading spaces). The new board requires the full URL, whereas the old board supplied the URL itself. This would be easy to fix, but the new youtube tag isn't part of the standard BBcode (i.e. it isn't even listed as an option in the BBcode listing, so it must be deeper in the board's code). I've added a new BBcode tag for those that want to paste only the ID of the youtube video. It is: [ youtubeID]MK0ITXBWpHE[ /youtubeID] I see two options for updating the old links: 1. Add the URL after the [ youtube] tag. 2. Change the tag from [ youtube] to [ youtubeID]. Of course we'd only want to sample/select those links from the old forum, or it'll break the links folks have posted in the new forum. Here's the code at Robert's link changed to [ youtubeID]: [youtubeID]MK0ITXBWpHE[/youtubeID] Edit: Not working at the moment. Will double-check the BBcode. The current youtube tag falls into IPB's new implementation of the media tag, so there may be an easier way to get this done. That media tag is implemented in the PHP.
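A batch pass over the archived posts could be sketched like this. This is a hypothetical migration script, not the board's actual PHP media handler, and it assumes the old-forum tags wrap bare 11-character video IDs:

```python
import re

# Old-forum posts contain ID-only tags like [youtube]MK0ITXBWpHE[/youtube].
# Option 2 above: rename the tag to [youtubeID] so a handler that expects
# a bare video ID can resolve it.
ID_ONLY = re.compile(r'\[youtube\]([A-Za-z0-9_-]{11})\[/youtube\]')

def migrate_youtube_tags(post: str) -> str:
    """Rewrite ID-only [youtube] tags as [youtubeID] tags. Tags already
    holding a full URL contain ':' and '/' and are left untouched."""
    return ID_ONLY.sub(r'[youtubeID]\1[/youtubeID]', post)
```

Because full-URL tags never match the ID pattern, running this over the whole archive would still leave new-forum posts alone, though sampling only old-forum posts (as suggested above) is safer.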
  3. Make sure you are using the Full Editor and not the quick response (default) editor. Actually, that may not even be required, but it will work. I usually just copy/paste from the previous topic, then select the text in the new post I want to quote and hit the quote icon in the menu. In other words... exactly how I quoted text in the old forum. Sweet!
  4. Somebody get this man some drums! I can hardly wait to see (and hear) this Will.
  5. Ha! I knew it. Let me take this opportunity to thank you for the work that went into HAMR. I was in shock when I plopped the .HXT file into place, launched it and it played my current projects. Now that's longevity. And to a large extent A:M is still that scriptable game engine with an API today. We just haven't leveraged that legacy. Wow. That makes me wonder where I was back then, as that doesn't sound familiar to me. Not that I can speak to the code itself, but the good news is that since the timeframe you left off with HAMR the core of A:M hasn't changed significantly. I'd guess the code won't compile directly without a few changes, but it's not as if entire rewrites had altered the code base. I distinctly recall Martin stating back then that he would not do a rewrite of the code base. To put this into perspective... the v13 SDK was de facto until only recently, with the release of the v18 SDK, and the majority of changes were incorporated by one guy: Steffen Gross. A:M has had a long stretch of stability exactly because the program hasn't undergone any foundational structural change. Perhaps the biggest update was the move to 64-bit, but the 32-bit side is maintained as well. As v13 was released circa 2004 and the v18 SDK mostly updates A:M to OpenGL 3, that suggests most of the core remains unchanged. Not that I've tested deeply, but this is what I've seen in the process of opening current files in the HAMR viewer. I was in shock when I discovered that settings tweaked in A:M carried over to the HAMR Viewer (i.e. Moveable, Rotatable, Scaleable and the big one... Poseable!). I didn't expect that last one to work, but there it was, allowing me to pose characters in the HAMR viewer just like a decade ago... pretty neat. HAMR was ahead of its time way back then and as far as I'm concerned it still is today.
  6. Just as Robert said... nice model. And nice economy of splines!
  7. Rodney

    A Victim Of Duty

    What a very odd tale. And expertly executed I must say. Outstanding work Tore!
  8. The ultimate instancing mirror tool would mirror in any direction you desire. Luckily, A:M has that capability and more through the crude method you describe. The prime example: Create an empty Model and drag/drop it into a new Chor *twice* (or more times if you prefer). It's usually best here if you delete/remove everything else from that Chor (ground plane etc.). In the PWS enter the instance of one Model and scale it -100% in the X axis (or Y axis if mirroring up/down). Now you can model either in the original Model or in the Chor (in Modeling Mode) and see your splines draw and shapes form in real time on both sides of the mirror. When done, simply export the Chor as a Model. This is a really fast way to rig too, although Mirror Bones is specifically designed for that. Another (more complicated... but fun) way would be to use a plugin such as Sweeper, but instead of sweeping shapes you set it to duplicate objects. For a simple mirroring you want to create two splines, each pointing outward (because the orientation and direction of the spline is important), and *the trick* is to set the In scale to 100% but the Middle and End to 0%. Then Sweeper will mirror whatever shape/object you select to duplicate. This has the advantage of being able to create many variations on a theme on the fly just by drawing and selecting new targets. (This probably needs a video tutorial, so I'll try to do that.) The downside is that you don't get to see the mirroring appear in real time as you create or edit the original shape.
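The crude mirror method boils down to negating one coordinate of every control point, which is exactly what a negative scale performs. A minimal sketch of that arithmetic, with plain tuples standing in for A:M control points (this code does not read actual A:M files):

```python
def mirror_x(points):
    """Mirror points across the YZ plane (the effect of a -100% X scale)."""
    return [(-x, y, z) for (x, y, z) in points]

# Model one half, then combine it with its mirror image.
half = [(1.0, 0.0, 0.0), (2.0, 1.0, 0.5)]
whole = half + mirror_x(half)
```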
  9. That's turning out quite well! Nice!
  10. Douglas! It's great to have you back.
  11. See other topic same title in the Game Development forum... LINK. (It's harder to follow topics that are double posted)
  12. It does sound as if by 'extrude' he means trace... as in 'raster to vector' conversion. I would look for other programs/utilities to do that, but there are some roundabout ways to get it done. Most of those would start with a template/mesh of some type, however. Example: creating terrain from a bitmap via the Terrain Wizard. That wouldn't directly handle batch processing of serialized sequences of images, though. More info/examples needed.
  13. Welcome to the A:M forum. I suppose it depends on what you refer to as 'extrude'. That might alter the target we want to hit dramatically. The short answer is 'yes, but...' Without knowing more, I'd point you to the fact that A:M has long been able to create/export meshes based on images, and because these can be exported in sequence they can be driven by an animated sequence. A few additional thoughts: - PNG is likely not the ideal format, so the PNG sequence might need to be converted to another format. (Note that I don't know why PNG tends to break down where other formats don't, so I won't speculate on that here.) Note also that I'm not saying PNGs cannot be used. They can, but in my experience, outside of targeting images for web/html, PNG may not be the best format. - Grayscale images may work better than color images because displacement specifically uses the gray scale. It will use color as well, but you may get some unintended results. - An interim format such as Hash's .PLYH format might need to be used to get the initial displaced geometry from the images. - A:M has had the ability to displace geometry upon export for a long time, but some releases of A:M handle it better than others. Also, not all formats A:M exports to will process displacement. - Current export to formats such as .OBJ supports displacement, but I haven't tested those. As it is very likely that this isn't what you are referring to, I'd love to hear more. If by 'batch extrude' you mean to place each image at a different level of space, then that too can be done. The primary way I would approach that would be to scale the imported imagery, but other processes could be used as well (such as distortion cages). Something that also lends itself to variations on any theme is A:M's file format. Because A:M files are text files, you could batch the process outside of A:M and see it in all its glory once the file is opened. Tell us more about your goal!
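The "batch the process outside of A:M" idea can be sketched as a small script that stamps each image of a sequence into a copy of a text template. The IMAGE_PLACEHOLDER token and the file names are illustrative only, not actual A:M model syntax:

```python
from pathlib import Path

def batch_generate(template_path: str, out_dir: str, frames: int) -> list:
    """Write one model file per frame, substituting the frame's image
    name for a placeholder token in the text template."""
    template = Path(template_path).read_text()
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for i in range(frames):
        body = template.replace("IMAGE_PLACEHOLDER", f"frame_{i:04d}.png")
        target = out / f"displace_{i:04d}.mdl"
        target.write_text(body)
        written.append(str(target))
    return written
```

Because the files are plain text, the same approach extends to any per-frame variation (scale, position, image path) you can express as a text substitution.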
  14. There's another good reason for deleting the default lights that are in a Chor... When I first started tweaking, I didn't realize it, but I was changing the settings of the default lights in the Chor. Deleting those helped remind me that I needed to look in the Model for the 'Bones' which are the Lights in the Model. Also: if you are rendering from the Model window and the model consists only of the lights, you might temporarily create a plane or other object in that model to better see the lights. If there is nothing for the light to shine on (and you don't set the light as a lens flare, volumetric, etc.) you likely won't see their effects. I also temporarily change the color of the lights so that I can easily distinguish which is which (for three lights I usually pick red, green and blue). Once happy with placement, intensity etc. I change the colors back and go on to additional adjustments.
  15. Hey Dale! There was something that restricted the number of lights that were active in real time a few versions ago... I can't find the reference for that at the moment. I don't think that is the problem but... that's what immediately came to mind. So, I guess there are several questions that would narrow down the variables a little: What version of A:M? I know you said the lights are in the model, but... are you trying to render from the Model window? Action window? Chor? Since it's a Klieg/spotlight, is it possible they are pointed in another direction? Have you entered Bones mode at any time and repositioned the lights? That last one is going to be my guess, but it's a wild shot! Edit: I just ran a test in v18 and the three lights I created in a model weren't showing up in the Chor until I raised them up a little (Y axis) and pointed them downward (X axis). I tend to delete the default lights as well any time I create custom lights, and I didn't specifically check for that. Long ago there used to be an issue with more than three lights rendering in real time (in a Chor). Note that for this test I dropped the Model into a Chor. If you are rendering it somewhere else, that'd be good to know.
  16. Let's let the experts speak for themselves. Pay attention to what is said about the industry, where the entire industry was (still, as of August of last year) and why PIXAR targeted Maya to bring the industry up to speed. To be fair, PIXAR also suggests that perhaps they are missing something important as well, but I read that the primary thing they were missing was industry adoption and standardization of the areas important to them. Surely since that time everything has been fully assimilated and incorporated into the industry... It's interesting to note the various titles given to Bill Polson in this video (he gets several), but I find the most appropriate to be 'Director of Industry Strategy'. I wonder how many folks understand what that title entails, specifically related to what isn't made public by PIXAR and with regard to trade secrets. But everything is already known and implemented, so we shouldn't bother to speculate... I almost got shivers up my spine when he said that Maya had recommended, "let's just get rid of polygons." Here's another video that gives a brief overview of the process of real-time tessellation with subdivision:
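The refinement those videos demonstrate has a simple one-dimensional analogue: cubic B-spline subdivision of a closed polygon, the curve version of a Catmull-Clark step. Each round smooths every old vertex toward its neighbours and inserts a midpoint on every edge; a minimal sketch:

```python
def subdivide_closed(points):
    """One round of cubic B-spline subdivision on a closed 2D polygon:
    each old vertex is repositioned to (prev + 6*cur + next)/8 and a
    midpoint (cur + next)/2 is inserted on each edge."""
    n = len(points)
    result = []
    for i in range(n):
        prev, cur, nxt = points[i - 1], points[i], points[(i + 1) % n]
        # repositioned vertex, then the inserted edge midpoint
        result.append(tuple((a + 6 * b + c) / 8 for a, b, c in zip(prev, cur, nxt)))
        result.append(tuple((b + c) / 2 for b, c in zip(cur, nxt)))
    return result

# Repeated rounds pull a square toward a smooth closed curve.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
rounder = subdivide_closed(square)
```

Catmull-Clark applies the same averaging idea in two dimensions, with extra rules at extraordinary vertices.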
  17. I thought I had already commented here... but I must have forgotten to submit the post. I really like what you've got going here. It looks to me like your test was successful and the rig is working well. I know we aren't trying to develop a perfect picture, but perhaps this will further test your rig: I note that his left hand (screen right) is in an awkward position. I'm curious how articulate that wrist/hand is. Other thoughts: I did a Google search for leaning against walls and note that many images have arms crossed (back against the wall for those) or hands in pockets (various poses), but perhaps more importantly, almost all the images seemed to suggest the person was posing for a/the camera. I can think of a few cases where leaning against a wall would not be motivated in that way, but most would still imply intentional posing (even dramatic and, as Robert suggests, exaggerated) on the part of the person. This being... and I suppose my point here... an opportunity to express the character's personality. Perhaps to truly test the rig what is needed is an animation test where he moves and shifts position?
  18. It's not Catmull-Clark that needs to be extended but the tools that have used it, particularly those that implemented it before OpenSubdiv was released. There is also the matter of subdivision itself, which isn't fully understood nor universally implemented, as witnessed by this discussion, so that also leaves room for improvement. But that isn't the fault of Catmull and Clark. They knew what they were doing. Catmull-Clark has been improved upon for over 20 years, and extensions of that model have only recently been embraced and exploited. Prior to the release of PIXAR's OpenSubdiv the world was considerably different. Since then, folks have had to take a fresh look at old code and see how their implementations fit into this new model going forward. Most found they had room for improvement, so they set to work. So, from that perspective a review (and proper incorporation and/or reimplementation of Catmull-Clark) in light of older code is assumed. Aside: I just saw 'How to Train Your Dragon 2' and noted the difference between what was on screen and what was there before (from all quarters but also from the first movie), and this movie was technologically superior in so many ways. While there are many reasons for this, the primary reason is that the software Dreamworks used was programmed from scratch over the past five years. While there are many reasons to abandon older software, the primary reason they gave for going that route was to supply artist-friendly tools. The implication being that the older tools were not. So there is yet another reason to better understand and incorporate Catmull-Clark... to improve or replace old, outmoded or obsolete tools. In A:M's case there is considerable advantage in delving deeply and exploiting what other topological models and limited technology can and cannot exploit. Some of this is comparatively trivial... at a guess I'd say this would be represented by the automatic closing of patches found at extraordinary vertices.
The process being one of identification, tagging and processing. But how does Catmull-Clark directly do that with hooks? Answer: directly it doesn't. Indirectly it does, as that area is the domain of special cases dealing with extraordinary vertices. Other than the issue of truly smooth surfaces, that of a single continuously smooth line versus collections of discontinuous, unsmooth lines (multiple approximations of a smooth line), the point is largely moot. But because the further exploitation of Catmull-Clark continues to bridge that gap and drive new tools, it continues to be good news.
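The identification step is mechanical: in a quad mesh, a vertex is extraordinary when its valence (the number of incident edges) differs from 4. A sketch over an indexed quad list (the vertex numbering is illustrative):

```python
from collections import Counter

def extraordinary_vertices(quads):
    """Return the vertices of a quad mesh whose valence is not 4 --
    the sites where Catmull-Clark applies its special-case rules."""
    edges = set()
    for q in quads:
        # walk each face boundary, collecting undirected edges once
        for a, b in zip(q, q[1:] + q[:1]):
            edges.add(frozenset((a, b)))
    valence = Counter()
    for e in edges:
        for v in e:
            valence[v] += 1
    return {v for v, k in valence.items() if k != 4}

# All eight corners of a cube have valence 3, so all are extraordinary.
cube = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
        (2, 3, 7, 6), (1, 2, 6, 5), (0, 3, 7, 4)]
```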
  19. Another downside of the setting aside of HAMR technology was the loss of read-only content (in particular the binary form of A:M files). Why is/was that important? Because you don't always want to share every aspect of a particular project. Perhaps you just want others to view and appreciate what is there, or would prefer it not be modified. Perhaps your project has elements that aren't yours to freely share with others, although other elements are. It is a trivial thing to access data stored in a Model, Action or Project format, or even the compressed (zip) version of the files, but it was considerably more difficult to extract useful information from the binary data. Not impossible (and I recall issues with the implementation), but any other usage could be considered intentional and outside of the purpose of the format if anyone tried. (Example: these files were only released in binary format... how is it that YOUR project came to include those files?) At one point I even thought A:M might gain the ability to read/play its own .PRBJ binaries, which would further the usage of that read-only format. Bottom line: there was a lot more going on in the implementation of HAMR than real-time display technology. But... life goes on. And because they are zipped files and directories, the HAMR viewer can directly open consolidated HAMR files. Note that these aren't read-only; rather the viewer (quickly!) uncompresses the consolidated package and then reads the files. I like!
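The consolidated-package behaviour described above, a zip the viewer unpacks and reads, can be imitated with any zip reader. The member name "scene.prj" here is illustrative, not a documented HAMR layout:

```python
import io
import zipfile

def list_package(data: bytes):
    """Return the member names inside a zipped package held in memory."""
    with zipfile.ZipFile(io.BytesIO(data)) as z:
        return z.namelist()

# Build a tiny package in memory, then list its contents back.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("scene.prj", "<project text>")
names = list_package(buf.getvalue())
```

This also illustrates why a plain zip is not read-only: anything that can list the package can extract and edit its text members, which is exactly the distinction drawn above against the binary format.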
  20. At a guess... truly a guess... I'd say it's leading us back to a more proper review, understanding and perhaps even further extension of Catmull-Clark.
  21. You could (and theoretically still can) do this with HAMR. In fact, the options are there in A:M to toggle those properties (triggers) on/off in a Pose/Action (in v18) if the hamr.hxt file is in its proper location upon startup. I say theoretically because you'd probably have to use a simulator to run a compatible release of Internet Explorer that can access those features through HAMR. I don't think the HAMR viewer exposes those triggers, although given that I was wrong about the capability to pose bones in the viewer, I might also be wrong about the viewer's ability to handle interactivity such as mouseovers without a browser. I'd need to test to find out. If you pull up the old HAMR website (via archive.org) there is an interactive bedroom scene (project file) where specific locations/hotspots in the room can be activated. Mousing over or clicking on one of those hotspots then drives an action. The other examples (results from the HAMR contest) were interactive as well. Aside: some of this reminds me of what they are currently doing with Adobe Edge Animate. Boy, would it be sweet if those two approaches (HAMR and Adobe Edge) were compatible. Since both use javascript for scripting, I'd say they are at least loosely simpatico. Attached ref: screen cap of v18 dropdown menu
  22. I understand what you are saying. I just don't buy it. You yourself said that the lines and surfaces (that are rendered) are calculated from the points in space. Those lines and surfaces are derived mathematically from the vectors/normals even though they aren't yet created. You stated that various schemes are applied to interpret how to connect those points to form lines and surfaces. The lines and surfaces exist in the calculation that exists. To test this, one need only apply the same scheme to that set of points again and again and witness the same outcome. Random schemes applied to those points will result in the creation of different topologies (some more optimal than others). This is also validated by how you state the importance of the artist in the equation, but the lines and surfaces (can) exist without input from artists; they are already there. The points in space are a set of various contrivances used to represent what is there. I understand you to be saying that virtual space isn't real and virtual objects aren't real things, but that isn't a particularly useful construct. We DO need the defining data of those lines and surfaces BEFORE they can be generated. The normal or vector stores this (as you've suggested). From that stored data the subdivided lines and surfaces can be generated. (This is where you state that subdiv surfaces can be displayed even without displaying the lines and surfaces.) Luckily for mathematicians and programmers, they are already conceptually there. Without that additional data subdivision cannot be done. They would just be (static) points in space. A point in space isn't an object but rather a specific location. That same point moved in space isn't two objects but simply a different location. The difference (distance) between those two locations can be measured. Assuming no extraordinary force alters timespace, this change of location can be defined (measured) within linear (2D) space.
The mere presence of the extraordinary allows for movement and measurement in 3D space. As of yet we still do not have anything physical to measure. Nothing yet exists. And yet we still have the presence of paths (line) through measurable (real) space. This movement and measurement in real space provides data that can be tracked and traced, the tracks and traces of which facilitate the shaping and reshaping of space. And as of yet we still do not have anything physical to measure. And yet we are in the presence of outlines (contour) and shapes (face and surface). And we still haven't measured anything real or divided anything yet.
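For what it's worth, the one measurable quantity conceded in this exchange, the distance between two locations, is plain Euclidean arithmetic, no physical object required:

```python
import math

def distance(p, q):
    """Euclidean distance between two locations in space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

d = distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0))
```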
  23. Hey now! I resemble that remark! (I recall reading the article as well. Was that the Wired interview?)
  24. Yves, I went through your post response by response, but I'm not sure how my thoughts would add to the discussion. As such I've set aside what I wrote, but I don't want you to think I didn't carefully consider your post in its entirety. I don't accept that because something is far off in the future it isn't pertinent to the discussion, but I do recognize that the burden of proof to demonstrate the relevance of adding such things to a discussion is no small matter. I did get a chuckle out of your "What lines?" "What surfaces?" intro, as you then proceeded to derive and define those very lines and surfaces. To which I responded, "asked and answered". Now if we can just get everyone to answer their own questions in similar fashion every time.
  25. Yves, I don't know why that would be. Quoting works here (although I never use the forum's quote button... I copy/paste, then add the quotes via the editor's quote button so that I get only the part of the quote I want). As for copy/paste itself... that one is a serious head scratcher, as that function has nothing to do with the forum.