Hash, Inc. - Animation:Master

glTF (Khronos Group's Open Transmission Standard for 3D)



  • Admin
Posted

For those of you who keep track of file formats and other technical aspects of the trade, you may want to watch the glTF (GL Transmission Format) standard, as it seems to be gaining a lot of traction, especially in light of current interest in augmented (and virtual) reality. Perhaps more specifically, glTF optimizes assets for rendering rather than for modeling frameworks, primarily via JSON (JavaScript Object Notation).

 

Of interest, glTF is not a file format so much as it is a scheme for moving 3D data out of file formats into a standard encoding that can be compressed/decompressed for transmission (primarily over the internet). The analogy used is to other standards such as MP3 (for music), JPG (for images), and MPEG (for video); glTF is hoped to become the equivalent standard for 3D (covering mesh, animation, shaders, and textures).
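To give a rough sense of what that JSON encoding looks like, here is a heavily abbreviated sketch written as a TypeScript object literal. The names such as characterMesh, positionAccessor, and character.bin are invented for illustration, and a real glTF 1.0 asset requires additional fields (accessors, bufferViews, materials, and so on):

// A heavily abbreviated, illustrative sketch of a glTF 1.0 style asset.
// Names such as "characterMesh" and "character.bin" are invented here;
// a real asset also needs accessors, bufferViews, materials, etc.
const gltfSketch = {
  asset: { version: "1.0" },
  scene: "defaultScene",
  scenes: { defaultScene: { nodes: ["rootNode"] } },
  nodes: { rootNode: { meshes: ["characterMesh"] } },
  meshes: {
    characterMesh: {
      primitives: [{ attributes: { POSITION: "positionAccessor" } }],
    },
  },
  // The JSON describes the scene; the heavy vertex/animation data lives
  // in a compact binary buffer referenced by URI.
  buffers: { geometry: { uri: "character.bin", byteLength: 102400 } },
};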

 

The 1.0 spec for glTF has recently been published.

 

Here's a video that goes into some of the technicalities:

 

https://www.youtube.com/watch?v=YXPeh2hy6Tc

 

Introductory presentation (much of which is covered in the above video):

https://www.khronos.org/assets/uploads/developers/library/overview/glTF-1.0-Introduction-Oct15.pdf

 

 

 

From the website:

glTF™ 1.0 (GL Transmission Format) is a royalty-free specification for the efficient transmission and loading of 3D scenes and models by applications. glTF minimizes both the size of 3D assets, and the runtime processing needed by applications using WebGL™ and OpenGL®-family APIs to unpack and use those assets. glTF defines a common publishing format for 3D content tools and services that streamlines authoring workflows and enables interoperable use of content across the industry.

 

 

 

For those likely to just skim the material, it can look like the COLLADA format is an essential part of the process, but the folks involved go out of their way to point out that glTF doesn't need COLLADA. COLLADA is simply one of the early extensions put together to demonstrate use of the standard.

 

Here's a link to the Khronos Group's GitHub repository for glTF: https://github.com/KhronosGroup/glTF

 

 

Where I believe the benefit to A:M users lies is in going back to Martin's view of this process from some 15 or more years ago and looking at what glTF presents from that vantage point. The underlying reason is that A:M's file format is already oriented in the right direction (as demonstrated earlier via Arctic Pigs and HAMR), and that orientation could transition into glTF with the added benefit of leveraging A:M's considerable strengths.


  • Hash Fellow
Posted

Ideally there would be a programmer from Hash involved in the working group that is devising that standard.

Posted

 

 

A:M's file format is already oriented in the right direction

What do you mean? It's actually the opposite of a viable transmission format. For one thing, it's all plain text. It's also full of specifics that only A:M can interpret.

  • Admin
Posted

Thaz whi youse haffa reed Martin's pehpas.

 

As far as plain text is concerned....

Is HTML a transmittable format?

What about code in that plain text or binary formats?
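For what it's worth, plain text also compresses well before it goes over the wire. Here is a minimal Node.js sketch (assuming Node's built-in zlib module; the repeated string is just a stand-in for a real project or model file):

import { gzipSync } from "zlib";

// Stand-in for a plain-text asset; a real test would read an actual
// A:M project or model file from disk.
const plainTextModel = "Spline control point data\n".repeat(1000);
const compressed = gzipSync(Buffer.from(plainTextModel, "utf8"));
console.log(`raw: ${plainTextModel.length} bytes, gzipped: ${compressed.length} bytes`);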

Posted

 

 

Martin's pehpas

Where can I find them?

 

 

 

Is HTML a transmittable format?

Any format is transmittable in the sense that you can transmit it over a network. Plain text works well for hypertext, because hypertext is a kind of text. In 1993 it probably seemed like the answer to everything.

 

 

 

What about code in that plain text or binary formats?

What about it?

  • Admin
Posted

Martin has posted most of his papers here: http://www.martinhash.com/forums/viewforum.php?f=25

 

I think this document can be found via Martin's forum too, but I don't see it under this specific title:

 

Articulation concerns for specifying generic class character animation

 

Edit: That one is actually his Master's thesis from 1989. It's collected at the first link.

  • Hash Fellow
Posted

It's also full of specifics that only A:M can interpret.

 

 

That's why it would be good to have an A:M representative in that group, so that a path to accommodating it would exist in the specification.

 

I don't know much about what is involved in that or how likely this standard is to become important. Lots of standards get proposed and most go nowhere.

Posted

Rodney

Obviously you mean the paper Distributing 3D Character Animation on the Internet. It's really just an overview of what an A:M project is and an ad for Arctic Pigs.

Indeed, it mentions compression (zip) and proposes a way to save traffic. Namely, it says, "Get rid of detail, use splines". However, it fails to provide a solution for cases where you do need the detail. There's also no mention of streaming capabilities, which are crucial to the glTF guys.

  • Admin
Posted
Obviously you mean the paper Distributing 3D Character Animation on the Internet.

 

That is certainly one of the documents that addresses core aspects of what this 'new' standard proposes to do.

But then again... VRML was going to win over everyone back in the day too.

(I only mention VRML because some of the folks involved in glTF apparently were supporters of that standard, and in a way glTF seems to be a modern-day equivalent... just with more commercial need/support behind it than VRML had way back in those days.)

There's also no mention of streaming capabilities, which are crucial to the glTF guys.

 

I believe one of the other papers addresses streaming although when most of these papers were written the internet wasn't streaming much of anything due to low bandwidth.

That's rather the whole point of Martin's earlier forays into 'lightweight' models however. Splines/patches automatically circumvented many of the issues found with denser models that were going to be problematic to transmit in any way... streaming or otherwise.
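As a back-of-the-envelope illustration of that bandwidth argument (the counts below are assumptions for the sake of the sketch, not measurements of any real asset):

// Rough, assumed numbers: a patch model's control cage versus a dense
// polygon mesh describing a comparable surface.
const bytesPerPoint = 3 * 4;            // x, y, z stored as 32-bit floats
const splineControlPoints = 1_500;      // assumed control cage size
const denseMeshVertices = 150_000;      // assumed tessellated mesh size

console.log("spline cage:", splineControlPoints * bytesPerPoint, "bytes"); // ~18 KB
console.log("dense mesh: ", denseMeshVertices * bytesPerPoint, "bytes");   // ~1.8 MB
// The receiver reconstructs the smooth surface from the cage, trading a
// little client-side computation for far less data on the wire.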

 

Hence the term 'orientation': I used it to suggest that Martin's approach was ideal for a time when bandwidth was at a premium, but the notion is just as relevant today when one considers the scale that 'big data' represents (and hopes to be able to use!).

But we are straying a bit from the original topic into an area we aren't ready to approach because we haven't come to an understanding of the basics. Here's a hint, though: how does one transfer an extremely large amount of data from one place to another with minimal expense/effort? The folks at Facebook and elsewhere think they've got the answer. Look for them to employ it in their efforts to 'teleport' matter/data. Of course, the easiest solution is not to transport the data at all but rather to switch on a facsimile with preconfigured matter/data; then you just have to trigger it to happen.

 

 

Namely, it says, "Get rid of detail, use splines".

 

If by 'Get rid of detail' you mean 'dispense with unnecessary data' then sure.

This is certainly a premise of glTF... perhaps especially demonstrated by what Cesium is trying to do by having data appear 'just in time' as it is cued up for viewing.

But this too is covered by any technology that is sufficiently scalable.

As we see the data that interests us and zoom in to inspect it more closely, more detail appears, which leads us to other just-in-time deliverables.
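As a small sketch of that 'just in time' idea (the tile URL layout and the distance thresholds here are hypothetical, not Cesium's actual API):

// Pick a level of detail from camera distance, then fetch only that tile.
function levelOfDetail(cameraDistance: number): number {
  if (cameraDistance > 1000) return 0; // coarse, far away
  if (cameraDistance > 100) return 1;  // medium
  return 2;                            // fine, close up
}

async function loadVisibleTile(tileId: string, cameraDistance: number): Promise<ArrayBuffer> {
  const lod = levelOfDetail(cameraDistance);
  // Hypothetical server layout: one file per tile per level of detail.
  const response = await fetch(`https://example.com/tiles/${tileId}/lod${lod}.glb`);
  return response.arrayBuffer();
}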

In the realm of character animation it's not unlike the telling of a story; aspects revealed in the last act are foreshadowed in the first.

 

it fails to provide a solution for cases where you do need the detail.

 

As that is covered in the core of spline patch technology, I must assume it is referred to, or assumed from the outset, by those who follow the underlying technology. Without rereading the document, I'll guess it may even be referenced or alluded to in the first paragraph.

  • Admin
Posted

It can be useful to set aside some assumptions that are currently (popularly) held.

But one must start with some sort of assumption, so where to begin?

 

Perhaps it might help to take a look again at the early days of computer animation?

Here I'm thinking specifically of Sutherland and early influences on men like Edwin Catmull (who currently garner a lot of attention).

The thesis submitted for his doctorate in philosophy has some very familiar themes.

 

Perhaps more closely related to the current topic however would be a finer look at the process (art?) of communication.

The attached image is taken from "A Mathematical Theory of Communication", C.E. Shannon, 1948.

 

A problem arises when the noise and the source become one and the same thing (or lie in closer proximity where they are harder to differentiate).

Although it can be said that (at least in the interim) in the absence of the true source a reflection, impression or near copy of that source may suffice.
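For reference, that same paper quantifies exactly how much noise limits what a channel can carry; in modern notation the noisy-channel capacity is

C = W \log_2\left(1 + \frac{S}{N}\right)

where C is the capacity in bits per second, W the channel bandwidth, and S/N the signal-to-noise ratio.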

 

There is also this simple principle (of animation): heavier objects move slowly while light objects move faster.

This can also be misinterpreted because a very heavy object can move very fast once it finally gets to (full) speed.

What must be taken into consideration is the cost/effort it takes to get to that speed.

 

In the end the solution isn't likely to be one or the other but rather an approach that accounts for our short term, mid term and long term needs.

It's in this same way that we will often place a proxy object/image in the space that the final character will occupy until such a time as the ideal object can take its place.

.

Attachment: Mathematical Theory of Communication (General System).png

  • Admin
Posted

If I've failed to mention it, I always appreciate questions. In the process of trying to better communicate things I believe should be self-evident, pieces fall into place with regard to side projects I'd love to see move forward but can't while I'm still babbling and speaking gibberish. While it may never produce the results I expect, it's always encouraging to see these seemingly random pieces fall into their places.

 

.

Posted

If by 'Get rid of detail' you mean 'dispense with unnecessary data'

Yes, probably. At least that's the kind of data that the paper means—the kind that should stay implicit and can be generated, for example, by OpenSubdiv on the receiving end. But large amounts of data may also represent quality artist-crafted detail. If glTF knows how to move it fast, then more power to it.
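As a rough sketch of why that implicit detail can stay implicit (the starting face count is an assumption, and this only models the growth rate, not OpenSubdiv itself):

// Under Catmull-Clark style subdivision each level multiplies the face
// count by roughly four, so only the coarse control cage has to cross
// the wire; the detail is regenerated on the receiving end.
function facesAfterSubdivision(coarseFaces: number, levels: number): number {
  return coarseFaces * Math.pow(4, levels);
}

const coarseCage = 2_000; // assumed control cage size
console.log(facesAfterSubdivision(coarseCage, 3)); // 128000 faces rebuilt locally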

 

it may never produce the results I expect

 

What results do you expect?

  • Admin
Posted
What results do you expect?

 

Easily accessible/transferable resources for the community (probably using SVN, although initial tests were on GitHub).

'Assets in the cloud' is a likely extrapolation of that, although I have a long way to go to warm up to the idea.

With an added emphasis on access to optimal resources (for A:M).
