
Hash Animation Master Realtime


Rodney


  • Admin
It's not about display technology.

 

I'm not smart enough to disagree with things I haven't studied, but I believe it is about display technology in that programmers have to deal with the inherent limitations of that technology. If that technology were sufficient, there wouldn't be obstacles to overcome.

 

Consider also that when most people hear the word 'display' they think only of the screen of a monitor (there's your sampling): a receptacle that waits (actively or passively) for data to display.

 

I would guess that the more passive the monitor, the more direct the path from data to display; the more active the monitor, the more the data is preprocessed prior to display.

I can think of several technologies used to display data that don't appear to be used much for either category of displays.

One example would be the platts used to texture 3D objects in virtual space.

Those platts are two-dimensional and yet are projected into 3D space (and/or onto 3D objects). Edit: I almost hate to say 'projected' here because they don't have to be projected if they are already the same points, just in different dimensions of space. Another example: a 3D model flattened onto a 2D plane.

 

It is in that area of further exploiting 3D space that the industry is heading (one example: EXR images that capture multiple images/levels in depth/z-space).

There are several shortcomings of platts, and one of them is that by themselves they contain no depth; but when coupled with other layers/channels, data can be communicated both ways... and losslessly. It's almost as if bitmaps and vectors could be treated as the same thing.

And the age old arguments exist there too... vectors are better than bitmaps except where they ain't.

 

You can't "directly" digitise something that's produced by a continuous function of your inputs. There'll always be some kind of sampling.

 

 

Direct or indirect sampling would be more akin to what I was referring to above. I don't see the point of stating that you can't digitize something without digitizing it, which is basically what you are saying. And besides all that, we are talking about data that is already digital.

 

But let's not ignore what isn't digital.

Due to worldwide buy-in, interest and investment, sensor tech is growing at more than an exponential rate, and massive amounts of real-world data are increasingly available; more will be, as demonstrated by the game-changing leaps in point cloud technology (there's your sampling).

 

But those represent the collection of data that isn't readily available.

With 3D models (I speak here of both the objects and the processes) already digital, the task now is not to sample (although the data can be resampled) but rather to move or transform the required data to where it is needed.



Loath as I am to state a platitude, future display tech will provide better resolution and faster processing than current display tech. But you seem to imply there are going to be some qualitative changes.


  • Admin

I fear that the words I type will appear to be terse... not my intention... but we can safely move beyond stating the obvious (i.e. that future display tech will provide better resolution and faster processing than current display tech). These are givens unless someone were to believe they will not or cannot improve (i.e. should they bump up against a technological obstacle or scientific certainty that will delay or prevent that improvement). But we don't know anyone who believes that such improvement won't happen, do we?

 

I certainly anticipate there will be many qualitative improvements (to computers, computer displays, computer graphics, etc.) in the future (but this is also stating the obvious).

 

For this discussion to be of any relevance, it might help to be more specific about what does or does not qualify as a qualitative improvement to you.

But more importantly... if the subject matter of this topic (HAMR etc.) isn't of relevance perhaps we can start a new topic with a focus toward areas of more direct interest to you?


I'd like to participate in this discussion but even if I click on the "Quote" button, I don't get any quotes. And I can't even copy and paste texts from previous posts. So I'll pass.


  • Admin
if I click on the "Quote" button, I don't get any quotes. And I can't even copy and paste texts from previous posts.

Yves, I don't know why that would be. Quoting works here (although I never use the forum's quote button... I copy/paste, then add the quotes via the editor's quote button so that I get only the part of the quote I want). As for copy/paste itself... that one is a serious head-scratcher, as that function has nothing to do with the forum.


I'd like to participate in this discussion but even if I click on the "Quote" button, I don't get any quotes. And I can't even copy and paste texts from previous posts. So I'll pass.

 

Maybe it's a browser issue...it works for me in Comodo Dragon and Firefox, but didn't work in Internet Explorer.


 

I'd like to participate in this discussion but even if I click on the "Quote" button, I don't get any quotes. And I can't even copy and paste texts from previous posts. So I'll pass.

 

Maybe it's a browser issue...it works for me in Comodo Dragon and Firefox, but didn't work in Internet Explorer.

 

OK. Thanks for the pointer. It works in Chrome.

 

I see this in reverse because the lines and surfaces must exist before they can be divided, much less subdivided.

 

What lines? What surfaces? Whatever the technology, you set control points in a 3D space. Then some algorithm computes line representations and surface representations from those points. Those lines and surfaces are only a consequence of how the control points are interpreted, that is, of the basis functions used to interpolate the line and surface representations. Different subdivision technologies use different basis functions and thus produce different line and surface representations from the same set of control points.
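
To make that concrete, here is a small sketch (hypothetical code, not from any of the products discussed) that runs the same four control points through two different cubic bases. The B-spline basis approximates the points while the Catmull-Rom basis interpolates the middle two, so identical control points yield two different curves:

#include <cstdio>

struct Vec2 { double x, y; };

// Evaluate sum_i B_i(t) * P[i] for a cubic basis given as a 4x4 matrix
// acting on the row vector (1, t, t^2, t^3).
Vec2 evalCubic(const double M[4][4], const Vec2 P[4], double t) {
    double T[4] = {1, t, t * t, t * t * t};
    Vec2 out{0, 0};
    for (int i = 0; i < 4; ++i) {
        double w = 0;
        for (int j = 0; j < 4; ++j) w += T[j] * M[j][i];  // weight of P[i]
        out.x += w * P[i].x;
        out.y += w * P[i].y;
    }
    return out;
}

int main() {
    Vec2 P[4] = {{0, 0}, {1, 2}, {2, 2}, {3, 0}};
    // Uniform cubic B-spline basis (approximating).
    double bspline[4][4] = {{ 1/6., 4/6., 1/6., 0    },
                            {-3/6., 0,    3/6., 0    },
                            { 3/6.,-6/6., 3/6., 0    },
                            {-1/6., 3/6.,-3/6., 1/6. }};
    // Catmull-Rom basis (interpolates the middle two points).
    double catrom[4][4]  = {{ 0,    1,    0,    0    },
                            {-0.5,  0,    0.5,  0    },
                            { 1,   -2.5,  2,   -0.5  },
                            {-0.5,  1.5, -1.5,  0.5  }};
    for (double t = 0; t <= 1.0; t += 0.5) {
        Vec2 a = evalCubic(bspline, P, t);
        Vec2 b = evalCubic(catrom, P, t);
        printf("t=%.1f  B-spline (%.2f, %.2f)  Catmull-Rom (%.2f, %.2f)\n",
               t, a.x, a.y, b.x, b.y);
    }
}

At t=0 the B-spline curve sits at the weighted average (P0 + 4*P1 + P2)/6 while the Catmull-Rom curve sits exactly on P1: same points, different interpretations.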

 

It may be worth noting that after one round of Catmull-Clark subdivision all surfaces are quads. It is only just prior to rendering (for graphics cards that require this) that all the quads are tessellated (I tend to say degraded) into tris.

 

Representing surfaces as triangles is only done for efficiency reasons. It is the lowest common representation for any surface topology, and it is way more efficient to have only one primitive than many on whatever current computing architecture. There is no degradation in splitting a quad into triangles, because the shading calculations are based on the normals at each vertex. And the vertex normals are the same whether the surface is represented with quads or with triangles.
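
At the index level, the split being described is nothing more than reusing the same four vertices (a hypothetical sketch; the vertex layout is invented for illustration):

// Splitting a quad (v0 v1 v2 v3) into two triangles for the GPU.
// Positions and normals are untouched; only the index list changes,
// so per-vertex shading inputs are identical for both representations.
struct Vertex {
    float px, py, pz;   // position
    float nx, ny, nz;   // normal, shared by every face that uses this vertex
};

unsigned quad[4] = {0, 1, 2, 3};           // one quad...
unsigned tris[6] = {0, 1, 2,   0, 2, 3};   // ...becomes two triangles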

 

Rendering directly from splines and patches.

 

Jos Stam proved that the 3D position and normal of any position on a Sub-D surface can be directly derived from the control points. In other words, subdivision is not required to display Sub-D surfaces. It is a nice theoretical result, but nobody does that because it is too expensive. It is still way less expensive to subdivide into micro-triangles and render those triangles. Of course, the solution is not so pure and elegant, but who cares.
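
Some back-of-envelope arithmetic on the micro-triangle side of that trade-off: each Catmull-Clark round splits every quad into four, so a base mesh of F quads yields F * 4^d quads (twice that many triangles) at subdivision depth d. A tiny sketch, assuming a hypothetical quad-only base mesh of 1,000 faces:

#include <cstdio>

int main() {
    long faces = 1000;  // hypothetical base-mesh quad count
    for (int d = 0; d <= 4; ++d)
        printf("depth %d: %ld quads, %ld triangles\n",
               d, faces << (2 * d), faces << (2 * d + 1));
}

Each new vertex is just a handful of weighted averages, which is cheap, uniform work; exact evaluation costs considerably more math per sample, which is the "too expensive" point above.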

 

(one example: EXR images that capture multiple images/levels in depth/z-space)

 

The multiple images are combined into a single image plane where each pixel can represent basically infinitely many variations of luminance and amplitude. Depth or Z-space has nothing to do with it.

 

I believe it is about display technology in that programmers have to deal with the inherent limitations of that technology.

 

Display technology is really just a memory plane in which to store color values for the final rendered image, plus a large array of computing processors. This may be different in the far future, but there is no sign that this model is going to change in any time frame relevant to this discussion.

So you end up with a bunch of control points in 3D space and an algorithm to interpret those control points into some surface. The display technology uses its computing processors to produce a 3D surface from those control points. Here again, there are no fundamental differences. All surface representation technologies need some algorithm, and thus some computing power, in order to transform a bunch of control points into a surface.


I think it would come down to pixels, then the algorithms would be interpreting the normal direction of the interpolated curvature between two finite lines.

So, a straight line between known curved lines first at intervals; then, at intervals along that line, perpendicular lines stick out to make a curve. Then interpolate between adjacent curves with additional information such as color, then find out which pixel the camera would view it from and at what distance, and simply color that pixel accordingly.

 

|
/ \_|__|__|__/
/ \_______/
/
\ |_|_|
\
\ /__|__\

 

That would be an hourglass shape with a heavy top viewed by the camera, as you can see. It is curved at the top, slender in the middle, with a flat bottom to rest easily on a table.

The camera to the left would interpolate which pixels are where in relation to the vertical points, then find out which pixels would view which vertical points and color them accordingly, specific to their location from the pixel's viewpoint. Of course the distance from the camera point would require a "shading" factor, or additional coloring due to specularity and transparency.

 

Drawing between the two lines in an efficient pattern would land one in a rehabilitation center followed by Animators Anonymous and Programmers Anonymous meetings, while rediscovering the TV set, in my opinion. But a curve (or arc if you will) from a line should be visually simple enough to understand.

 

|
|
| | |
__|___|____|____|___|___

 

I really have no idea what I'm talking about as far as actually doing it, but the basic math seems simple enough.

 

That image would have 13 points, which would be all the camera cares about aside from color and depth, etc.


I think the program would be a loop through the pixels, each one going through an efficient routine:

 

Pixel++
{
    What dots are in view?;

    What objects are in view, and what part is in view?;  // Whoa! big one
    Return dot info;

    Average color/depth/lighting...;

    Final Color;
}
Next Pixel...
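
For what it's worth, here is a minimal, runnable version of that loop (everything here, names and the point "scene" included, is invented for illustration): an orthographic camera looks straight at a cloud of colored dots, and each pixel keeps the nearest dot that lands on it, so the depth test does the "what is in view?" work:

#include <cstdio>
#include <vector>
#include <limits>

struct Dot { float x, y, z; char color; };  // x, y in [0,1), z = depth

int main() {
    const int W = 24, H = 8;
    std::vector<Dot> scene = {
        {0.20f, 0.50f, 1.0f, '#'},
        {0.50f, 0.50f, 2.0f, '+'},   // behind the '@' dot below
        {0.50f, 0.50f, 0.5f, '@'},   // nearest, so it wins the pixel
        {0.80f, 0.30f, 1.5f, '*'},
    };
    for (int py = 0; py < H; ++py) {
        for (int px = 0; px < W; ++px) {
            char finalColor = '.';   // background
            float nearest = std::numeric_limits<float>::max();
            for (const Dot& d : scene) {             // what dots are in view?
                int ix = int(d.x * W), iy = int(d.y * H);
                if (ix == px && iy == py && d.z < nearest) {
                    nearest = d.z;                   // depth test
                    finalColor = d.color;            // final color
                }
            }
            putchar(finalColor);
        }
        putchar('\n');
    }
}

Lighting, averaging and the rest of the "dot info" would slot in where the depth test is, exactly as the pseudocode suggests.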

 

Someone told me it's all 2D at one point and I get what he was saying, but the math is very much 3D, right?

 

What was it called? Radial coordinates? I'm sure it comes in handy shooting TIE fighters from the Millennium Falcon... but maybe today we're shooting X-wings?

3D angle and distance. Degradians and Gradians ... no clue!

 

Figuring out what lights are where is another thing within the dot info, I suppose. Not sure where to stick it in the loop because it is too simplified.


Oh! Don't forget collision detection, then you'll be feelin' me>>>in color! Ha! Oh, wait...how do we do this??? Sbarky says Boto says "... "

Vera's already talking tongues! Stinky little mind of hers.


  • Admin

Yves,

I went through your post response by response but I'm not sure how my thoughts will add to the discussion.

As such I've set aside what I wrote but I don't want you to think I didn't carefully consider your post in its entirety.

I don't accept that something far off in the future is therefore not pertinent to the discussion, but I do recognize that the burden of demonstrating the relevance of adding such material to a discussion is no small matter.

 

I did get a chuckle out of your "What lines?" "What surfaces?" intro as you then proceeded to derive and define those very lines and surfaces. To which I responded, "asked and answered". Now if we can just get everyone to answer their own questions in similar fashion every time. :)


I did get a chuckle out of your "What lines?" "What surfaces?" intro as you then proceeded to derive and define those very lines and surfaces. To which I responded, "asked and answered". Now if we can just get everyone to answer their own questions in similar fashion every time. :)

Rodney,

 

Your "Line and surface" post seemed to imply that lines and surfaces came first and then the subdivision came from those lines and surfaces, that is you need the lines and surfaces so you can subdivide them. My point is that there are no lines nor surfaces. Only render of a mathematical interpretation of a set of 3D points in space. The subdivision process does not need those lines and surfaces to do its thing. It is just another way to mathematically interpret the set of 3D points.


  • Admin

I understand what you are saying. I just don't buy it. ;)

You yourself said that the lines and surfaces (that are rendered) are calculated from the points in space. Those lines and surfaces are derived mathematically from the vectors/normals even though they aren't yet created. You stated that various schemes are applied to interpret how to connect those points to form lines and surfaces. The lines and surfaces exist in the calculation itself. To test this, one need only apply the same scheme to that set of points again and again and witness the same outcome. Random schemes applied to those points will result in the creation of different topologies (some more optimal than others). This is also validated by how you state the importance of the artist in the equation, but the lines and surfaces (can) exist without input from artists; they are already there. The points in space are a set of various contrivances used to represent what is there.

 

I understand you to be saying virtual space isn't real and virtual objects aren't real things, but that isn't a particularly useful construct.

We DO need the defining data of those lines and surfaces BEFORE they can be generated. The normal or vector stores this (as you've suggested). From that stored data the subdivided lines and surfaces can be generated. (This is where you state that subdiv surfaces can be displayed even without displaying the lines and surfaces.) Luckily for mathematicians and programmers, they are already conceptually there. Without that additional data, subdivision cannot be performed. They would just be (static) points in space.

 

A point in space isn't an object but rather a specific location.

That same point moved in space isn't two objects but simply a different location.

The difference (distance) between those two locations can be measured.

Assuming no extraordinary force alters timespace, this change of location can be defined (measured) within linear (2D) space.

The mere presence of the extraordinary allows for movement and measurement in 3D space.

As of yet we still do not have anything physical to measure. Nothing yet exists.

And yet we still have the presence of paths (line) through measurable (real) space.

This movement and measurement in real space provides data that can be tracked and traced.

The tracks and traces of which facilitate the shaping and reshaping of space.

And as of yet we still do not have anything physical to measure.

And yet we are in the presence of outlines (contour) and shapes (face and surface).

And we still haven't measured anything real or divided any thing yet.


I've been following this thread, just wanted to chime in some useless input.

 

The precursor to HA:MR was a utility called 'Arctic Pigs' (by a user/programmer Nils...something)... the goal of HA:MR was sort of to take up where AP left off and build upon the technology. In my opinion, HA:MR never got close to the AP level... One thing I really, really liked about AP (other than the facts that it worked and was easy) was that you could specify any object in your scene to be a 'button' that would trigger an action. For instance, for a Hash/AP contest I had a 3D cartoony car on a turntable, and if you clicked on the hood, an action was triggered to open the hood; same for the doors. Very cool!

 

I forget what it was that doomed the AP tech... I think it was the new SDK for V13 or something. If anyone wants to 'dust off' some old cool program and update/modernize it with today's tools... I would look at Arctic Pigs before HA:MR...


If anyone wants to 'dust off' some old cool program and update/modernize it with today's tools...

Does it have to be an old program? One can use the "game engine" in the contemporary Blender to make interactive scenes.


  • Admin
One thing I really, really liked about AP (other than the facts that it worked and was easy) was that you could specify any object in your scene to be a 'button' that would trigger an action. For instance, for a Hash/AP contest I had a 3D cartoony car on a turntable, and if you clicked on the hood, an action was triggered to open the hood; same for the doors. Very cool!

 

You could (and theoretically still can) do this with HAMR. In fact, the options are there in A:M to toggle those properties (triggers) on/off in a Pose/Action (in v18) if the hamr.hxt file is in its proper location upon startup. I say theoretically because you'd probably have to use a simulator to run a compatible release of Internet Explorer that can access those features through HAMR. I don't think the HAMR viewer exposes those triggers, although given that I was wrong about the capability to pose bones in the viewer, I might also be wrong about the viewer's ability to handle interactivity such as mouseovers without a browser. I'd need to test to find out.

 

If you pull up the old HAMR website (via archive.org) there is an interactive bedroom scene (project file) where specific locations/hotspots in the room can be activated.

Mousing over or clicking on one of those hotspots then drives an action. The other examples (results from the HAMR contest) were interactive as well.

 

Aside: Some of this reminds me of what they are currently doing with Adobe Edge Animate.

Boy, would it be sweet if those two approaches (HAMR and Adobe Edge) were compatible.

Since both use JavaScript for scripting, I'd say they are loosely simpatico.

 

Attached ref: screen cap of v18 dropdown menu

HAMRdrivers.png


I understand what you are saying. I just don't buy it. ;)

....

I'm not sure where all this hair splitting is leading us in terms of differences between splines/patches and Sub-D.


  • Admin
I'm not sure where all this hair splitting is leading us in terms of differences between splines/patches and Sub-D.

 

 

At a guess... truly a guess... I'd say it's leading us back to a more proper review, understanding and perhaps even further extension of Catmull-Clark.


Oh you mentioned hair-splitting. That's not a "happy-fun" topic around here!

a line rendered as line with glow yo!

 

As you can see, still a work in progress, but Vera wanted to F U up!

 

Looks like she could at least spell Japanese!

dancer12.png

props58.png

props46.png

dancer68.png


  • Admin

Another downside of setting aside the HAMR technology was the loss of read-only content (in particular, the binary form of A:M files).

 

Why is/was that important?

 

Because you don't always want to share every aspect of a particular project.

Perhaps you just want others to view and appreciate what is there or would prefer it not be modified.

Perhaps your project has elements that aren't yours to freely share with others, although other elements are.

 

It is a trivial thing to access data stored in the Model, Action or Project formats, or even the compressed (zip) version of those files, but it was considerably more difficult to extract useful information from the binary data. Not impossible (and I recall issues with the implementation), but any other usage could be considered intentional and outside the purpose of the format if anyone tried. (Example: these files were only released in binary format... how is it that YOUR project came to include them?)

 

At one point I even thought A:M might gain the ability to read/play its own .PRJB binaries, which would further the usage of that read-only format.

Bottom line: There was a lot more going on in the implementation of HAMR than real time display technology.

 

But.. life goes on.

And because they are zipped files and directories, the HAMR viewer can directly open consolidated HAMR files.

Note that these aren't read-only; rather, the viewer (quickly!) uncompresses the consolidated package and then reads the files.

I like! :)
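
The viewer's code isn't public, but the mechanics of opening such a consolidated package are generic, since it is an ordinary zip archive. A sketch using libzip as a stand-in (library choice and code are mine, not HAMR's):

#include <zip.h>
#include <cstdio>

int main(int argc, char** argv) {
    if (argc < 2) return 1;
    int err = 0;
    zip_t* archive = zip_open(argv[1], ZIP_RDONLY, &err);  // the consolidated package
    if (!archive) { fprintf(stderr, "open failed (error %d)\n", err); return 1; }
    zip_int64_t n = zip_get_num_entries(archive, 0);
    for (zip_int64_t i = 0; i < n; ++i)
        printf("%s\n", zip_get_name(archive, i, 0));       // project, models, maps...
    zip_close(archive);
}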


  • Admin
What kind of extension? Have you been making subdivision models and found Catmull-Clark wanting in the process?

 

It's not Catmull-Clark that needs to be extended but the tools that have used it, particularly those that implemented it before OpenSubdiv was released.

There is also the matter of subdivision itself, which isn't fully understood nor universally implemented, as witnessed by this discussion, so that also leaves room for improvement. But that isn't the fault of Catmull-Clark. They knew what they were doing.

 

Catmull-Clark has been improved upon for over 20 years, and extensions of that model have only recently been embraced and exploited.

Prior to the release of PIXAR's OpenSubdiv the world was considerably different. Since then, folks have had to look again at old code and see how their implementations fit into this new model going forward. Most found they had room for improvement, so they set to work. So, from that perspective, a review (and proper incorporation and/or reimplementation of Catmull-Clark) in light of older code is assumed.

 

Aside: I just saw 'How to Train Your Dragon 2' and noted the difference between what was there on screen and what was there before (from all quarters, but also from the first movie); this movie was technologically superior in so many ways. While there are many reasons for this, the primary reason is that the software DreamWorks used was programmed from scratch over the past five years. While there are many reasons to abandon older software, the primary reason they gave for going that route was to supply artist-friendly tools. The implication being that the older tools were not. So there is yet another reason to better understand and incorporate Catmull-Clark: to improve or replace old, outmoded or obsolete tools.

 

In A:M's case there is considerable advantage to delving deeply and exploiting what other topological models and limited technology can and cannot exploit.

Some of this is comparatively trivial... at a guess I'd say this would be represented by the automatic closing of patches found at extraordinary vertices.

The process being one of identification, tagging and processing.

But how does Catmull-Clark directly do that with hooks?

Answer: Directly, it doesn't. Indirectly it does, as that area is the domain of special cases dealing with extraordinary vertices.
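
For reference, the standard Catmull-Clark refinement rules are compact enough to state outright, and the last one is precisely where extraordinary vertices get their special handling:

    face point      F  = average of the face's vertices
    edge point      E  = (v0 + v1 + F1 + F2) / 4     (F1, F2 = the two adjacent face points)
    updated vertex  V' = (Q + 2R + (n - 3)V) / n

where Q is the average of the n surrounding face points, R is the average of the midpoints of the n incident edges, and n is the valence of V. For a regular vertex n = 4; any other n is the "extraordinary" case, handled by the same formula but yielding reduced smoothness at the limit point.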

 

Other than the issue of truly smooth surfaces, that of a continuously smooth line (singular) versus collections of noncontinuous, unsmooth lines (multiple approximations of a smooth line), the point is largely moot. But because the further exploitation of Catmull-Clark continues to bridge that gap and drive new tools, it continues to be good news.


  • Admin

Let's let the experts speak for themselves.

 

Pay attention to what is said about the industry, where the entire industry was (still as of August of last year) and why PIXAR targeted Maya to bring the industry up to speed.

To be fair, PIXAR also suggests that perhaps they are missing something important as well, but I read that the primary thing they were missing was industry adoption and standardization in areas of importance to them. Surely since that time everything has been fully assimilated and incorporated into the industry...

 

It's interesting to note the various titles given to Bill Polson in this video (he gets several) but I find the most appropriate to be 'Director of Industry Strategy'.

I wonder how many folks understand what that title entails specifically related to what isn't made public by PIXAR and with regard to trade secrets.

But everything is already known and implemented so we shouldn't bother to speculate...

 

 

I almost got shivers up my spine when he said that Maya had recommended, "let's just get rid of polygons."

 

 

Here's another video that just gives a brief overview of the process of real-time tessellation with subdivision:

 


I'm not dead yet!

 

Hey guys, I ran across this forum thread while doing a Google vanity search today. :) It was fun to read all of your comments and thoughts about HAMR, and I was amazed that my work is still being talked about in the A:M community. I thought I might share my thoughts on this topic as otherwise there may not be anyone who fully understands what I did with HAMR.

 

I worked on the HAMR code from December 2004 until April of 2008. I went to work at Texas Tech University as the director of the 3D Animation Lab in February of 2008, so my last HAMR release was done after I had left Hash Inc. In addition to HAMR and WebHAMR, I also produced MetaHAMR, which was a Second Life type of metaverse based on HAMR. This was all the pinnacle of my 10-year stretch of doing game engine development. What I did was essentially turn A:M into a scriptable game engine with an API.

 

HAMR cannot be separated from A:M. Basically, what I did as a first step was to encapsulate all of the A:M realtime code into a DLL and re-expose the A:M plugin API through the HAMR API. I would actually compile the bulk of HAMR directly from the A:M source code base. I did add a lot of game engine type capabilities, as well as embedded and extended Python so that HAMR could be scripted. But most of my work with the HAMRViewer, WebHAMR and MetaHAMR was done in C++ using the HAMR API. WebHAMR typically made use of JavaScript scripts, but it could have been any web-based OO scripting language. No significant amount of code was done in JavaScript. The MetaHAMR system infrastructure was based on PHP and MySQL on a web server, running UDP messaging between metaverse clients. I also added capabilities to the HAMR API that were separate from A:M for doing game world and character management. And I did a lot with dynamic adaptive patch subdivision.
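
The shape of that approach, as a sketch (all names here are invented, not the actual HAMR API): internal engine functionality compiled into a DLL behind a flat, exported C interface that host applications and scripting layers can call:

// Hypothetical illustration of wrapping engine code in a Windows DLL.
#define API extern "C" __declspec(dllexport)

API void* engineLoadProject(const char* path);            // parse files with engine code
API void  engineRenderFrame(void* project, double time);  // realtime display path
API void  engineFreeProject(void* project);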

 

So, could HAMR be resurrected? Anything is possible, but I think this is unlikely. It cannot be resurrected without being compiled against the current A:M code base. Some parts of the API were very tedious to produce and involved COM and ATL automation and horrible parsers and encapsulating reformatters for the plugin API port. Having not been involved in A:M development since early 2008, I have no idea if HAMR would currently build against the A:M code base.

 

I still bemoan the fact that there is no viable tool like HAMR out there. I have tried many things since 2008 but have never managed to come close to the ability I had with HAMR. My current work is all focused on the Unity game engine. Within the last year I did a HAMR-like character interaction event engine in Unity via C# scripting. I was able to approximate the quality of A:M's smooth surfaces by using a DX11 tessellation subdivision shader. But I am at the mercy of the character modeller as to what I can tie into for events. Typically my character models come from Maya or 3ds Max. I am not a modeller, so my capabilities in realtime interactive systems suffer. I would love to be able to read in an A:M model file and do my thing in Unity, but that is not very feasible. It takes something like HAMR to be able to parse A:M files in a compatibility-guaranteed manner. And it takes A:M code to do the patch splitting. When I first began working with A:M, I was doing game engine development and wanted to use A:M as my de facto modelling standard. I read A:M files directly and used the plugin API to do some things, but I had to do almost everything myself, including cubic spline patch splitting. I did it well enough that Martin Hash hired me to do WebHAMR, but I could not come close to HAMR capabilities without directly invoking the A:M code.
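
A:M's actual patch math is proprietary, but the flavor of "patch splitting" can be shown with a generic bicubic Bezier patch: 16 control points in, one smooth surface point out, which can then be sampled as densely as desired (a generic sketch, not A:M's formulation):

struct V3 { double x, y, z; };

static V3 lerp(V3 a, V3 b, double t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// de Casteljau evaluation of one cubic span of four control points.
static V3 cubic(const V3 p[4], double t) {
    V3 a = lerp(p[0], p[1], t), b = lerp(p[1], p[2], t), c = lerp(p[2], p[3], t);
    return lerp(lerp(a, b, t), lerp(b, c, t), t);
}

// Evaluate a 4x4 control-point patch at (u, v): reduce each row, then the column.
V3 evalPatch(const V3 P[4][4], double u, double v) {
    V3 col[4];
    for (int i = 0; i < 4; ++i) col[i] = cubic(P[i], u);
    return cubic(col, v);
}

Tessellating the patch is then just evaluating a grid of (u, v) samples and emitting triangles between neighbors.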

 

These days my focus in Unity is on CUDA and DirectCompute GPU parallel computing. Most of my work involves simulating the human hippocampus and displaying parametric results in the Unity 3D engine. I am also an Oculus Rift developer, and virtual and augmented reality are my true passions. It is easy to output Unity games to almost any platform. Unity has a nice browser plugin that is very similar to WebHAMR, but it will not do DX11!

 

Anyway, it was fun to see that the work I did 6-9 years ago is still relevant today. I'm glad you all liked what I did. I was proud of HAMR!

 

Good to see all of your familiar names!

 

Regards,

Ken Chaffin- Director, Texas Tech University Media Lab


  • Hash Fellow

Hi, Ken!

I thought perhaps if I said your name three times you might appear, and here you are!

Just to clarify a few things you said...

HAMR cannot be separated from A:M.


What you mean is... even though anyone who doesn't own A:M can download the HA:MR viewer plugin and view HA:MR content, the HA:MR plugin itself incorporates proprietary Hash, Inc. A:M code and needs that code to do what it does, right?


I would love to be able to read in an A:M model file and do my thing in Unity, but that is not very feasible. It takes something like HAMR to be able to parse A:M files in a compatibility-guaranteed manner. And it takes A:M code to do the patch splitting.

 

 

IF... that code were available, would it be feasible to use it to bridge the difference between what A:M assets are and what a game engine typically uses as assets?

 

With that code, would it be possible to make some adaptation of the game engine so that it could use A:M created assets without needing to convert them to some other format (as is done now) and then the game author would create his game in the normal fashion of that game engine?

 

 

This is the part that confuses me. People are already using .X as an intermediate format to get A:M models and animation to a game engine (with some limitations), so I'm wondering why a game engine couldn't be modified to eliminate that .X step and be able to use A:M assets directly.

 

What am I missing?


[Ken] Please excuse my quote formatting as I haven't used this forum in a while.

Hi, Ken!

I thought perhaps if i said your name three times you might appear, and here you are!

[Ken] Something worked!

Just to clarify a few things you said...

HAMR cannot be separated from A:M.


What you mean is... even though anyone who doesn't own A:M can download the HA:MR viewer plugin and view HA:MR content, the HA:MR plugin itself incorporates proprietary Hash, Inc. A:M code and needs that code to do what it does, right?

 

[Ken] What I meant is that HAMR cannot exist without A:M. At its heart, HAMR encapsulates A:M. HAMR cannot be rebuilt without access to the proprietary A:M source code. Wherever HAMR goes, much of A:M goes with it, even though that is invisible to the user. I'm not sure to what extent the 2008 build of the HAMR API and tools can read current A:M data files. It depends on what has changed in the data file formats. If the files are still XML, then it may be that new files can still be read and newer XML-tagged objects will just be ignored. When I first started parsing A:M files in 2004, this was prior to the XML format, and parsing was extremely easily broken.


I would love to be able to read in an A:M model file and do my thing in Unity, but that is not very feasible. It takes something like HAMR to be able to parse A:M files in a compatibility-guaranteed manner. And it takes A:M code to do the patch splitting.

 

 

IF... that code were available, would it be feasible to use it to bridge the difference between what A:M assets are and what a game engine typically uses as assets?

[Ken] Absolutely anything is possible. HAMR uses A:M code to read and parse A:M data files as well as render the objects. In a game engine you need to read the data files, but the game engine will probably render the objects.

 

With that code, would it be possible to make some adaptation of the game engine so that it could use A:M created assets without needing to convert them to some other format (as is done now) and then the game author would create his game in the normal fashion of that game engine?

[Ken] Well, yes, it is possible to read and parse the A:M data files and convert to the game engine's native data format on the fly. A conversion still has to take place in memory, though not necessarily with an intermediate converted file.

 

 

This is the part that confuses me. People are already using .X as an intermediate format to get A:M models and animation to a game engine (with some limitations), so I'm wondering why a game engine couldn't be modified to eliminate that .X step and be able to use A:M assets directly.

[Ken] When I was doing game engine development work 9 years ago, my game engine could read .X files as well as A:M files. So yes, a game engine importer could be written to read A:M data files and convert on the fly to the game engine's native format. There are problems with this, based on my own work of 9 years ago. It is very difficult to fully and reliably parse the A:M data files without the A:M code. You end up having to recreate major portions of the A:M code, which already knows how to parse the files. There is also the problem that A:M is cubic spline patch based, and game engines are almost universally polygon mesh based. The game engine does not natively have the data object representations needed to do cubic spline patches. Assuming you write an A:M model file parser, you would then have to write a patch splitter and renderer within the game engine. This is all feasible but very difficult to get working correctly. I've always created procedural models, both in A:M/HAMR and in Unity. Procedural meshes are pretty easy to construct on the fly.

 

What am I missing?

[Ken] Nothing. It is a feasible task but extremely difficult. I would love to have cubic spline patch models inside of the Unity game engine. The low patch count containing high-resolution information, supporting extremely high-density poly tessellation, would be the big appeal to me. All of the Maya models I have are relatively high poly counts to begin with. Only then can things like DX11 tessellation shaders be used to take the poly density even higher. The A:M patch-based models were great for WebHAMR. The models were very small to send over the web, even though I went ahead and compressed them anyway, mostly so that I could package up a lot of text files in a single binary zip file.


  • Admin
I'm not dead yet!

 

 

Ha! I knew it. :)

 

Let me take this opportunity to thank you for the work that went into HAMR.

I was in shock when I plopped the .HXT file into place, launched it and it played my current projects.

Now that's longevity.

 

In addition to HAMR and WebHAMR, I also produced MetaHAMR which was a Second Life type of metaverse based on HAMR. This was all the pinnacle of my 10 year stretch of doing game engine development. What I did was essentially turn A:M into a scriptable game engine with an API.

 

And to a large extent A:M is still that scriptable game engine with an API today.

We just haven't leveraged that legacy.

 

I also produced MetaHAMR

 

 

Wow. That makes me wonder where I was back then as that doesn't sound familiar to me.

 

Having not been involved in A:M development since early 2008, I have no idea if HAMR would currently build against the A:M code base.

 

 

Not that I can speak to the code itself, but the good news is that since the timeframe when you left off with HAMR, the core of A:M hasn't changed significantly.

I'd guess the code won't compile directly without a few changes, but it's not as if entire rewrites have altered the code base.

I distinctly recall Martin stating back then he would not do a rewrite of the code base.

 

To put this into perspective... the v13 SDK was the de facto standard until only recently, with the release of the v18 SDK, and the majority of changes were incorporated by one guy: Steffen Gross.

A:M has had a long stretch of stability exactly because the program hasn't undergone any foundational structural change.

Perhaps the biggest update was the one allowing the move to 64-bit, but the 32-bit side is maintained as well.

As v13 was released circa 2004 and the v18 SDK mostly updates A:M to OpenGL 3, that suggests most of the core remains unchanged.

 

If the files are still XML, then it may be that new files can still be read and newer XML-tagged objects will just be ignored. When I first started parsing A:M files in 2004, this was prior to the XML format, and parsing was extremely easily broken.

 

 

Not that I've tested deeply but this is what I've seen in the process of opening current files in the HAMR viewer.

I was in shock when I discovered that settings tweaked in A:M carried over to the HAMR Viewer (i.e. Moveable, Rotatable, Scaleable and the big one... Poseable!). I didn't expect that last one to work, but there it was, allowing me to pose characters in the HAMR viewer just like a decade ago... pretty neat.

 

HAMR was ahead of its time way back then, and as far as I'm concerned it still is today.

Link to comment
Share on other sites

I developed MetaHAMR between January 2007 and April 2008. It was a Hash skunkworks development funded out of Martin's pocket. It was never made public. It worked great, though, as we tested it with several developers in worldwide locations.

 

Well, amazingly, it sounds like the 2008 HAMR API will still read A:M files! That is good. I do not think it would be feasible to recompile HAMR against the current A:M source code base. There would probably be a lot of missing 3rd-party libraries. I am guessing that the use of most 3rd-party libraries was eliminated when the 64-bit A:M version was developed. That is probably why the PRJB format was discontinued, as that was reliant on a 32-bit zlib compression library. But since HAMR will still read A:M files, there is no need to rebuild it. It should work with any 32-bit Windows application calling the DLL functions.

 

I'm going to explore a bit calling the HAMR DLL from a Unity C# script. It should work since Unity is still primarily a 32-bit application. I currently call my CUDA DLL from Unity with no problems. It may be possible to parse and load the A:M files from within Unity. I will have to see if I exposed enough functionality in HAMR to get to the patch or poly models and get them into Unity and create a Unity poly mesh model.

 

If I was still doing 3D web plugin development, I would work with WebGL, as you all were discussing above. It might be possible to use WebGL programs to talk to the HAMR DLL, but that would be quite a challenge to develop and would only work on Windows.

 

By the way, at Texas Tech University I built a Remote Access 3D Lab where all manner of 3D animation programs run on 20 rack-mounted Windows workstations in our data center, and then, via Citrix XenDesktop HDX, the 3D applications can be used remotely on any device. So, A:M can run on an iPhone in this environment! I do not currently have A:M loaded on these systems, though I have in the past.

 

Ken Chaffin


" It may be possible to parse and load the A:M files from within Unity. I will have to see if I exposed enough functionality in HAMR to get to the patch or poly models and get them into Unity and create a Unity poly mesh model."

 

Hi Ken! It will be interesting to learn what you find on this! Glad you are well... sounds like a great gig there in Texas!

 

How do I work the quote function in this new forum?


  • Admin
How do I work the quote function in this new forum?

 

 

Make sure you are using the Full Editor and not the quick response (default) editor.

Actually, that may not even be required but that will work.

I usually just copy/paste from the previous topic, then select the text in the new post I want to quote and hit the quote icon in the menu.

In other words... exactly how I quoted text in the old forum.

 

A:M can run on an iPhone in this environment!

 

Sweet!


While there are many reasons for this, the primary reason is that the software DreamWorks used was programmed from scratch over the past five years. While there are many reasons to abandon older software, the primary reason they gave for going that route was to supply artist-friendly tools. The implication being that the older tools were not.

One aspect I've been observing in the 3D industries is the switch to physically-based rendering. The movie industry has been the first to push in that direction; I'd say the last five years correlate very well. During the last 2-3 years, it is the game industry that has been pushing toward physically-based rendering pipelines.

 

The most pitched advantages of physically-based rendering are reduced costs, artist friendliness and reduced reliance on post-production. SIGGRAPH has dedicated full-day tutorials to this topic for the last 2 years, and those tutorials, including presentations from the big players in those industries, are available on the Web.
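
The core idea is small enough to show in a few lines: make the shading math obey energy conservation and feed it physical quantities, so one intuitive parameter replaces a pile of per-shot fudge factors. A minimal sketch for the simplest case, a Lambertian diffuse term (illustrative code, not any particular renderer's):

#include <cmath>
#include <algorithm>

struct V3 { double x, y, z; };
static double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Outgoing radiance from one light off a Lambertian surface.
// The albedo/pi normalization guarantees the surface never reflects
// more energy than it receives, which is the "physically-based" constraint.
double shade(V3 n, V3 l, double albedo, double lightRadiance) {
    const double kPi = 3.14159265358979323846;
    double cosTheta = std::max(0.0, dot(n, l));   // n and l are unit vectors
    return (albedo / kPi) * lightRadiance * cosTheta;
}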


  • Admin

Even sounds still play in the HAMR viewer (as included in the A:M Project).

Tres cool.

 

I did have my first crash with the viewer due to trying to get the sound to play via a mouseover.

That might relate to setting the sound to play outside the range of the animation.

Not sure.

 

Added: The sound will play with a project but not as a consolidated .zip file.

I assume that something is broken there, so the sound doesn't get properly referenced after consolidation.


I've been looking into whether it is feasible to call the HAMR DLL from within Unity, and I have to say that it is not very feasible. My work in Unity is all via C# under the MonoDevelop .NET framework. C# can link to a DLL and call exported functions, but to produce something useful, the C# script would have to include a lot of the HAMR header files, which are designed for C++ and MS MFC. It might be more feasible to write a C++ extension to Unity, but that would be a major undertaking. What I was exploring was whether it might be simple to call the HAMR DLL via C# in Unity, and although that would probably be possible, it would take an extremely large effort to fully implement an A:M file parser and converter to Unity objects.

 

So, I suggest that you use HAMRViewer as-is if it suits your needs, but don't count on new applications being developed using HAMR. The full HAMR SDK was never published, so there is not enough info, nor the required files, out there to enable a HAMR user to build such an application. The end game for HAMR was the viewers, rather than a public SDK. The viewers themselves contained Hash Inc. proprietary information.

 

Sorry I don't have better news.

 

Ken Chaffin


  • Admin

Thanks Ken,

I don't consider that bad news. I appreciate the dose of reality.

It is too bad that the Unity/HAMR connection isn't feasible.

 

Could you help my understanding of what you mean by proprietary?

When you say 'proprietary' do you mean 'exclusive to Hash Inc' or something more akin to 'code that cannot be shared'?

I suppose to me the first implies the viewer code can be shared/licensed by Hash Inc while the second suggests it likely cannot or will never be.

To narrow the field of my specific interest let's say this concerned only the code to the HAMR viewer.

If the folks at Hash Inc wrote that then I guess I'd have to ask them.


The HAMR viewers' source code contains some of A:M's source code and requires a number of A:M "headers" which expose the data structures of A:M as of v14/15. Whether that contains any Hash proprietary trade secrets would be up to the Hash Inc. management. For someone to fully make use of HAMR requires a high level of programming expertise, including COM and ATL automation. I exposed the A:M SDK functionality via automation. The viewers themselves are fairly complex Windows graphical applications. I was and still am under a non-disclosure agreement regarding A:M source code and algorithms.

 

Ken


  • 2 weeks later...
  • Admin

Here's a HAMR view of the water ripple test I recently put together.

The two primary things demo'd here are the wireframe mode that shows how simply the ripples were animated, and the automatic subdivision generated on the fly to tessellate the quads into tris for the purpose of displaying the animation on a monitor, as processed by modern-day graphics cards.

 

Of note: the file itself is only a few kilobytes, whereas the resulting representation of the session (via MP4 movie) is over 5MB.

 

HAMR testing.mp4


  • 2 years later...
  • Admin

During today's 'Live Answer Time' session I went through a basic demo of the HAMR viewer (link in first post... or right-click and Save As here). Note that this is an executable file (.exe) residing on the Hash Inc FTP, so if you are wary of downloading and launching such files, you should download and scan the file before launching the program, to allay your fears.

 

 

I'm bumping up this topic because the content covers a lot more info than I could go into during the LAT session.

It also outlines the status of HAMR according to the guru behind it (Ken Chaffin) and suggests we shouldn't expect to see any development of HAMR in the near future.

Further development may be unlikely, but I say good ideas always return, and HAMR is there with the best of them, so that encourages me.

I have several specific goals in mind that have me circling the HAMR viewer, but regardless of all that, it is nice to know the HAMR Viewer still works well with Projects, Models and Actions created in v18.

 

As anyone with an interest in HAMR very likely has Animation:Master already up and running, there isn't much of a reason to run the HAMR Viewer outside of the novelty of it all. This is especially true as A:M can load and display what the viewer can, with more bells and whistles. But I remain amazed with what the viewer can display. So close... so very close to success. Arctic Pigs and HAMR were ahead of their time way back in the day.

 

It may be that someone (on a PC) without A:M might find the viewer useful for accessing Projects, Models and Actions so they can be pointed to it where there is a need.

If only the HAMR viewer was a mobile app... ;)


  • 6 months later...
  • Admin

'Tis the season to once again lament the loss of HAMR.

 

This morning I've been playing with the standalone HAMR viewer and it still works amazingly well... even with basic projects created in v19 Beta 2.

What a wonderful effort HAMR was... so far ahead of its time.

 

Aside: As for you Mac users who use Boot Camp to run PC programs... I'd love to know if you can run the HAMRviewer application.

You should be able to.

 

I still have high hopes that some of the code might someday be released so a standalone viewer for A:M files might be more readily distributed.

In the meantime I will still explore what there is to explore via the viewer. :)

