
Plug-in wish list


PopaR


  • Admin

If you had asked this a year or so ago I would have had a long list.

As it is right now I'm drawing a blank.

 

But... I'll remember some soon and post those. :)

 

Practically speaking, and mainly for the learning experience, I'd like to see a public project that expands the Font Wizard.

It might move forward in phases like this:

 

Phase 1: Add the ability to read the text from an external text file.

Phase 2: Expand the capability of the text file input to include special characters (carriage returns, font styles, etc.)

Phase 3: Expand the capability of the text file syntax to allow for placement of the text in 3D space.

 

Phase 4: Mirror the -new- Font Wizard capability in a new Model Import Wizard that allows A:M to import and place Models in 3D space at specific coordinates designated in a referenced text file.

Note that the format might mirror that of A:M Library shortcuts, with XYZ coordinates tacked on to the end (a rough sketch of such a file and a parser for it follows the phase list).

 

Phase 5: Explore the ability to save out 'scripts' for reimport at a later time, with options to change elements of the script in the Wizard.

At this point we've almost come full circle to simply importing a Chor or opening a Project file with the resources specified in detail.

 

R&D phase: with the knowledge and experience gained by programming those enhancements to the Font Wizard, identify the obstacles to overcome in order to create a full scripting environment for A:M.
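As a rough illustration of the Phase 4 idea, here is a minimal C++ sketch of reading a placement file whose lines hold a model shortcut plus XYZ coordinates. The comma-separated layout, field order and function names are my own assumptions for the sketch; they are not an existing A:M or Library format.

```cpp
// Hypothetical sketch: parse a placement list of the form
//   <path to .mdl>,<x>,<y>,<z>
// (comma-separated; this format is an assumption, not an A:M standard)
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct Placement {
    std::string modelPath;  // shortcut/path to the model, as in a Library entry
    double x, y, z;         // world-space position appended to the line
};

std::vector<Placement> ReadPlacementFile(const std::string& fileName)
{
    std::vector<Placement> placements;
    std::ifstream in(fileName);
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty() || line[0] == '#') continue;  // skip blanks and comments
        std::stringstream ss(line);
        Placement p;
        std::string field;
        if (!std::getline(ss, p.modelPath, ',')) continue;
        if (!std::getline(ss, field, ',')) continue; p.x = std::stod(field);
        if (!std::getline(ss, field, ',')) continue; p.y = std::stod(field);
        if (!std::getline(ss, field, ',')) continue; p.z = std::stod(field);
        placements.push_back(p);
    }
    return placements;
}

int main()
{
    // e.g. a file containing:  C:\Models\Tree.mdl,120.0,0.0,-45.5
    for (const Placement& p : ReadPlacementFile("placements.txt"))
        std::cout << p.modelPath << " at (" << p.x << ", " << p.y << ", " << p.z << ")\n";
}
```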

 

Note that similar plugins could be created by swapping out the target format (e.g. SVG instead of AI).

I'm not sure if the code is freely available, but plugins such as the Terrain Wizard might demonstrate excellent areas of plugin capability that are not often explored (the Terrain Wizard is one of the few areas where users can paint directly in A:M).


  • Admin
It would be nice to have a module similar to Cinema 4D's MoGraph: a module for procedural animation of multiple objects.

 

Can you provide a little more detail on what that entails?

What the end result might look like in A:M as a plugin?

Links to research, development or code examples?


  • Hash Fellow

Rodney's mention of the Font Wizard brings to mind something I've wanted.

 

The AI wizard is like the Font wizard... it imports a vector outline and can extrude it and put faces on each end. However, it can only import the proprietary .ai format and there are no good low-cost programs that can create and export an .ai file.

 

How about adding an open-source vector format like SVG to the file formats that the AI wizard can accept?
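Not the AI wizard's actual code, of course, but as a sketch of the first step such an SVG option would need (pulling the outline data out of the file before tessellating it into splines), something like this crude string scan could serve as a starting point; the function name and the approach are assumptions for illustration only:

```cpp
// Rough sketch only: pull the outline data ("d" attributes) out of an SVG file
// with plain string scanning. A real importer would use an XML parser and then
// tessellate the path commands (M, L, C, Z, ...) into spline data.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

std::vector<std::string> ExtractSvgPathData(const std::string& fileName)
{
    std::ifstream in(fileName);
    std::stringstream buffer;
    buffer << in.rdbuf();
    const std::string svg = buffer.str();

    std::vector<std::string> paths;
    std::size_t pos = 0;
    while ((pos = svg.find("<path", pos)) != std::string::npos) {
        std::size_t d = svg.find(" d=\"", pos);   // the path's outline attribute
        if (d == std::string::npos) break;
        std::size_t start = d + 4;
        std::size_t end = svg.find('"', start);
        if (end == std::string::npos) break;
        paths.push_back(svg.substr(start, end - start));  // raw path commands
        pos = end;
    }
    return paths;
}

int main()
{
    for (const std::string& d : ExtractSvgPathData("logo.svg"))
        std::cout << "path: " << d << "\n";
}
```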


There is a MEL script for Maya called "Monkeyshuffle" (I think it has also been ported to Blender as "BlenderJam!") that converts timing from MonkeyJam into something Maya and Blender can use in Actions. Something similar for A:M, for either MonkeyJam or OpenToonz, might be useful.
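I don't know MonkeyJam's exact export format off-hand, so the following is only a sketch that assumes a hypothetical two-column "frame,drawing" text export; it collapses the exposure sheet into hold lengths, which is roughly the data a timing converter for Actions would need:

```cpp
// Sketch only: collapse a hypothetical "frame,drawing" exposure-sheet export
// into (drawing, startFrame, holdLength) records.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct Hold { std::string drawing; int startFrame; int length; };

std::vector<Hold> CollapseExposureSheet(const std::string& fileName)
{
    std::vector<Hold> holds;
    std::ifstream in(fileName);
    std::string line;
    while (std::getline(in, line)) {
        std::stringstream ss(line);
        std::string frameField, drawing;
        if (!std::getline(ss, frameField, ',') || !std::getline(ss, drawing, ','))
            continue;                              // skip malformed lines
        int frame = std::stoi(frameField);
        if (!holds.empty() && holds.back().drawing == drawing)
            holds.back().length++;                 // same drawing held another frame
        else
            holds.push_back({drawing, frame, 1});  // a new drawing starts here
    }
    return holds;
}

int main()
{
    for (const Hold& h : CollapseExposureSheet("xsheet.csv"))
        std::cout << h.drawing << " from frame " << h.startFrame
                  << " held " << h.length << " frames\n";
}
```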

Link to comment
Share on other sites

My most wanted:

- an FBX exporter/importer.

- a plugin which will set normals by placing a light inside a model, RMB-clicking on the light and opening the plugin. After a settings dialog (bounce count, distances?), the rays of the light would hit the patches from the "inside". Wherever a ray hits a patch, that patch's normal flips to the inverted direction of the light ray it was hit by. The light ray should then be killed, or even better, bounce off and do the same again for a settable number of bounces. This should only be done to "visible" patches (so if you hide something, it should not be affected).
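A minimal sketch of the core test being described here (no occlusion checks and no bounces, and using stand-in types rather than anything from the A:M SDK): a visible patch whose normal points back toward the interior emitter gets flipped.

```cpp
// Sketch only: flip every visible patch whose normal faces the interior emitter.
#include <vector>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};

inline double Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Patch {
    Vec3 center;   // e.g. the average of the patch's CPs
    Vec3 normal;   // current facing
    bool hidden;   // hidden patches are left alone, per the proposal
};

void OrientNormalsFromEmitter(std::vector<Patch>& patches, const Vec3& emitter)
{
    for (Patch& p : patches) {
        if (p.hidden) continue;
        Vec3 outward = p.center - emitter;          // direction a ray from the emitter travels
        if (Dot(p.normal, outward) < 0.0) {         // normal faces the emitter: it points "inward"
            p.normal = {-p.normal.x, -p.normal.y, -p.normal.z};  // flip it
        }
    }
}
```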

 

Just to mention it: It does not have to be a light of course... if a null or a point of a spline is easier to do, that is as good as a light too :).

 

See you

*Fuchur*


  • Admin
- a plugin which will set normals by placing a light inside a model, RMB-clicking on the light and opening the plugin. After a settings dialog (bounce count, distances?), the rays of the light would hit the patches from the "inside". Wherever a ray hits a patch, that patch's normal flips to the inverted direction of the light ray it was hit by. The light ray should then be killed, or even better, bounce off and do the same again for a settable number of bounces. This should only be done to "visible" patches (so if you hide something, it should not be affected).

 

Gerald,

I'm curious about the reason for running such a plugin. I must assume that it isn't just to make sure every normal points outward (on the true surface of a model).

Would it be to create position based normal maps or something like that?

 

At a guess I'm imagining that the bounces would have to be a magnitude of 2, because you wouldn't want to flip a normal once and then have it flipped back to its original state upon being hit the second time.

And I'm not sure how biased normals might factor into the equation. I don't know enough to say for certain, but I'm guessing there are 4 (though I sense a theoretical 9) possible positions based on A:M's current ability to rotate images/normals. If we were to delve into the idea of two-sided patches that number might be what... 54 possible angles?

 

I guess what I'm trying to envision is the underlying purpose/usage for the process of flipping based on a specific origin.

I can imagine some uses but am not sure of the specific purpose of your proposed plugin.


 

I guess what I'm trying to envision is the underlying purpose/usage for the process of flipping based on a specific origin.

I can imagine some uses but am not sure of the specific purpose of your proposed plugin.

 

In some cases it doesn't matter which way the normals are facing. However, for some of A:M's features to work properly (e.g. hair, some materials, patch images, rendering of 5-point patches) it is necessary for normals to be facing in the right direction, i.e. not facing inward.

 

For some reason normals get messed up during modeling, and correcting them is currently a PITA, i.e. laborious. Gerald is looking for a plug-in that easily determines what is "inside" versus "outside" and automatically makes the normals point outward.


Bingo, Nancy... it is really just about the normals pointing the right way. Steffen tried several ways to implement an algorithm to get that job done (the "Correct Normals" plugin, Refind Normals, etc.) but none of them work perfectly for me in more complex situations, and I think this plugin should do the job. And yes, some features need correct normals: hair, 5-pointers and all the exporters; especially for 3D-printing work it is quite important.

 

See you

*Fuchur*


  • Admin

In some cases it doesn't matter which way the normals are facing. However, for some of A:M's features to work properly (e.g. hair, some materials, patch images, rendering of 5-point patches) it is necessary for normals to be facing in the right direction, i.e. not facing inward.

For some reason normals get messed up during modeling, and correcting them is currently a PITA, i.e. laborious. Gerald is looking for a plug-in that easily determines what is "inside" versus "outside" and automatically makes the normals point outward.

 

 

Bingo, Nancy... it is really just about the normals pointing the right way. Steffen tried several ways to implement an algorithm to get that job done (the "Correct Normals" plugin, Refind Normals, etc.) but none of them work perfectly for me in more complex situations, and I think this plugin should do the job. And yes, some features need correct normals: hair, 5-pointers and all the exporters; especially for 3D-printing work it is quite important.

 

 

Yes, I understand the issues with normals facing the wrong direction. The problem I'm having is seeing how the proposed solution resolves the issue.

I -think- what is under consideration here is why Pixar and others have gone to bidirectional raytracing: the rays can more easily determine the orientation of surfaces when they keep track of orientation, reading surfaces in all directions.

 

If placing an object inside the object under consideration for normal correction is a viable starting point, I can't help but wonder why something like determining the Center of Gravity (COG) or Center of Mass (COM) wouldn't just as easily accomplish the same thing. And if the response is that an external object can be placed anywhere... the same thing could be accomplished through a COG/COM with an offset. Another similar option might be to automatically use the Model's pivot point and adjust that as desired.

 

Now originally, when Gerald mentioned the same thing in another topic, I thought the idea of a light inside the model was an interesting one because, with a light inside a closed/airtight object, no light would escape except where the object was in fact not airtight. Surfaces that faced inward might then be considered 'open' and let the light out.

 

But all of this seems to me to be extra steps.

 

I'm not sure how the current 'Refind Normals' plugin works (i.e. what specific process it goes through), but I'd imagine it might compare each normal with its neighbors and, if it is outside a specific tolerance, tag it to be inverted. This seems logical to me.

 

The underlying issue is that during modeling an external surface can very quickly become an internal surface and A:M isn't always able to resolve what is what.

So the first and best resolution is good modeling practice.

Of course this is complicated when external models are imported into A:M as they often may have no optimal normal data attached.

 

At any rate, my purpose here isn't to sound like I'm against the proposed plugin but rather to better understand the underlying process the plugin proposes to address.

Especially if there is a simpler methodology that can be implemented to achieve the same (or at least satisfactory) results.


COGs will not do the trick, Rodney... make a tube construction with several tubes intersecting each other and try to find the center of gravity of the whole construction: it may not even be inside one of the tubes. It will not work for more complex structures, only for very simple ones. I have thought about it for quite some time now, and there is not really a better solution I can think of.

 

Yes, it can be overcome by good modeling practice, and I think I am quite able to model in a good way and understand more or less what I do. But even with all those years of modeling experience I sometimes run into trouble with normals if I try to model fast without thinking too much about it, just having a little bit of fun until it turns into something nice I want to carry on with...


  • Admin
COGs will not do the trick, Rodney... make a tube construction with several tubes intersecting each other and try to find the center of gravity of the whole construction: it may not even be inside one of the tubes. It will not work for more complex structures, only for very simple ones. I have thought about it for quite some time now, and there is not really a better solution I can think of.

 

I think I"d best give up on this one as I'm not seeing how a COG differs from a point of light, Null, etc.I'm obviously missing an important part of the equation.

The same thing applies to all cases... at least from the approach of the proposed plugin... all must be inside the mesh/geometry. With lights I assume manual placement, as would be the case with a manually offset (unbalanced) COG. The COG is just a way to automate an initial placement of the point from which measurement/raycasting originates. That point would then be offset to the desired location from which to run the new normals plugin.
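For what it's worth, the "automated initial placement" being described could be as simple as the plain centroid of the control points, which the user then offsets as desired; a small sketch, with Vec3 again being a stand-in type:

```cpp
// Sketch: a plain centroid of the control points as a default starting point,
// which the user could then drag/offset before running the normal pass.
#include <vector>

struct Vec3 { double x, y, z; };

Vec3 ControlPointCentroid(const std::vector<Vec3>& cps)
{
    Vec3 c{0.0, 0.0, 0.0};
    if (cps.empty()) return c;
    for (const Vec3& p : cps) { c.x += p.x; c.y += p.y; c.z += p.z; }
    const double n = static_cast<double>(cps.size());
    c.x /= n; c.y /= n; c.z /= n;
    return c;
}
```

Note that a vertex centroid like this is not a true mass-weighted center of gravity, which is essentially the naming point Fuchur raises later in the thread.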

 

I'm mostly interested in this for my own understanding of the problem and potential approaches to solutions, though, so I'm confident that whoever is going to program this plugin will eventually figure it out.

Still, for any given problem there are approaches that can be tagged for exploration while others are (at least temporarily) ruled out.

 

Of course this is the underlying problem with wishes: they are easily formed, but realizing them is inevitably harder. :)

 

This does raise a few questions, however, such as:

 

How does such a process as 'Show back face polygons' work?

Could those features that use normals in their equations be enhanced to ignore normals that don't point in the same direction as a given percentage of the other normals, or as their average direction, thereby bypassing the problem altogether?

Could an option be given to allow particles to emit from both sides of a surface regardless of direction of the normals?

 

The entire concept of normals suggests closed systems that have outer and inner surfaces and yet we more commonly interact with open systems where technically speaking all surfaces can be simultaneously internal and external.

How can we make better sense of these and other related underlying paradigms?

 

One solution would appear to be suggested in the name itself; normals.

How often do adjacent patches flip their normals 180 degrees from each other? Surely not often.

So any patch with neighbors that are abnormal (approaching 180 degrees away from what is deemed normative) can safely be flipped.

This process can then iterate until all patches are normed.

Then the entire model is either right-side out or inside out, ready for a final flip if necessary.

(Note that I assume this is basically how the present 'Refind Normals' plugin likely works)
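A sketch of the iteration described above; this is only a guess at the approach, not the actual 'Refind Normals' source. It walks the patch adjacency from a seed patch and flips any neighbour whose normal disagrees by more than 90 degrees with the patch it was reached from; the whole model can then be flipped at the end if needed.

```cpp
// Sketch of the neighbour-propagation idea (a guess, not the plugin's source):
// breadth-first walk of the adjacency graph, flipping "abnormal" neighbours.
#include <queue>
#include <vector>

struct OrientPatch {
    double nx, ny, nz;            // patch normal
    std::vector<int> neighbours;  // indices of adjacent patches
};

void PropagateOrientation(std::vector<OrientPatch>& patches, int seed)
{
    std::vector<bool> visited(patches.size(), false);
    std::queue<int> frontier;
    frontier.push(seed);
    visited[seed] = true;

    while (!frontier.empty()) {
        int current = frontier.front();
        frontier.pop();
        const OrientPatch& c = patches[current];
        for (int n : c.neighbours) {
            if (visited[n]) continue;
            OrientPatch& p = patches[n];
            // "abnormal" neighbour: facing more than 90 degrees away from current
            if (c.nx * p.nx + c.ny * p.ny + c.nz * p.nz < 0.0) {
                p.nx = -p.nx; p.ny = -p.ny; p.nz = -p.nz;
            }
            visited[n] = true;
            frontier.push(n);
        }
    }
}
```

Fuchur's objection below is aimed at exactly the fixed 90-degree test in the middle: heavily bent neighbours can legitimately differ by large angles, so a single threshold is hard to pick on complex models.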


The difference is that you can put the light wherever you want: not only at the center of gravity, but anywhere you like.

A very simple example of why it cannot work with the COG in many situations can be seen in the attached image.

 

The only way to get the approach to work is to define the starting point yourself. This could be done with a null object, a single point on a spline (for instance one that is part of a 2-CP spline but is not connected to the patches you want to run the correct-normals algorithm on), the pivot of a group, or for instance a light. I use the light reference because it is easier to understand how it should work if you think of a photon-mapping approach here, but it really does not matter much.

 

Show back-facing polygons is easy... it takes any "polygon" created, copies it and inverts it using the normal as a mirror point, in realtime (or at final rendering, if necessary).
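A sketch of that mechanism as I read it (stand-in triangle data, not A:M's internal representation): back faces can be produced at render time simply by duplicating each face with reversed winding, which reverses its computed normal.

```cpp
// Sketch: render-time back faces as duplicates with reversed winding order
// (and therefore a negated face normal).
#include <array>
#include <vector>

struct Tri { std::array<int, 3> indices; };  // indices into a vertex list

std::vector<Tri> WithBackFaces(const std::vector<Tri>& tris)
{
    std::vector<Tri> out;
    out.reserve(tris.size() * 2);
    for (const Tri& t : tris) {
        out.push_back(t);                                             // front face as authored
        out.push_back({{t.indices[2], t.indices[1], t.indices[0]}});  // reversed winding = back face
    }
    return out;
}
```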

 

Concept of normals: it really is not very important in a spline world, but it matters in situations where the calculations are based on the direction of the normal:

exports (especially if you want to create real objects from them: 3D printers, etc.), hair and hair-dynamics simulations, smoothing algorithms for displaying smooth surfaces, etc.

 

"So any patch with neighbors that are abnormal (approaching 180 degrees away from what is deemed normative) can safely be flipped."

I think this is what Correct Normals is trying to do (it does a little more, but that is the basic algorithm). The problem is that the neighbouring patches are not necessarily facing the same way as the first one, because they can be heavily bent, for instance. That is fine for them to do, but the calculation has a problem there. And of course 180 degrees is a real extreme... but look at the patches shown on the donut shape in the attached image. Even if the patches were flipped, they would not really have a 180° difference, since they are already "rotated" themselves in comparison to their neighbours. It would only be 150° or something like that... so you have to specify a starting angle for the flipping to get it right, for instance 90° or 120°. The problem is, the more complex the models are, the harder or even impossible it gets to find that number.

See you

*Fuchur*

cog_not_working.jpg


  • Admin

Perhaps I should have underlined the word 'offset'.

(In the illustration you provided the user would simply move the COG to its desired location and run the plugin)

Think of your torus as if the majority of its mass were concentrated near 'the point that would work'. This would be the object's 'true' COG.

The COG you indicated is the center of the object (not necessarily the COG).

 

Not that I have a thing for COGs but...

Centers of gravity are in themselves useful constructs... manually offsetting a COG merely creates a state of visual imbalance*, although importantly, with regard to mass/weight, the object is actually balanced.

The COG can be either inside or outside of an object BUT could be constrained to the inside or the surface or wherever required.

Like a light, null, COG or any other such construct, it's still just a point in space to measure from.

 

Yet another thing to consider when plotting out new plugins is how they interact with and complement other features/plugins, as well as paradigms of user interaction. Robert mentions retooling the AI plugin for use with SVG, and this is an example of a logical progression based on code that exists now. I like the general idea of lighting the inside of a mostly airtight object because I get a good sense of what the result might be (light escaping from holes in said object), but the remainder of the process I'm still trying to figure out.

BUT an important part of any new feature/plugin is addressing the questions that arise about how to get something to work. I can see the scenario now; a new user asks, "How do I get my normals to all point in the right direction?" Answer: "First you create a light..." This is where I begin to get lost. Of course the same might be said for COGs, but keep in mind that COGs are already an intrinsic part of any object. The full effect of COGs might not be completely implemented, but that doesn't negate the fact that every 'real' object has one.

 

*This thought of imbalance leads me to consider an additional approach that could be added to an algorithm designed to 'repair' normals: consider whether an object is structurally symmetrical/balanced or asymmetrical/unbalanced. If the former, then half of the calculations can be discarded at the outset and the results simply mirrored at the end of the calculation. This might be yet another reason to consider some level of COG in the operation.

 

Another question this discussion makes me curious about is how an Object's pivot point is determined. It should be easy enough to check but I'll guess that the object's pivot is placed in the direct center of an object.

I sense this could be considered the initial COG without respect to the object's true mass. Likewise, when using mass in A:M, do the calculations use the pivot point as a start?

 

Added: It's interesting to note that each Named Group can have its own Pivot. So a Named Group could be the source from which a plugin initiates a normal-refactoring process.

And somewhat off topic: this makes me wonder if swarms of Named Groups couldn't attach themselves to an object in interesting ways and be the point(s) of interest for other plugins to use.


In the end, if you move the COG it really is no longer important whether it is a COG or not. Then it is just the pivot I mentioned above, in this example the pivot of the selected group. I often work with that when, for instance, rotating things around other things in the modelling window.

 

But back to this: it is all about defining a starting point for the rays to shoot from... how it is done does not matter. Just give the plugin something to shoot the starting rays from.

A Null object, a light, a pivot (of a group, or even of the modelling window, but I recommend a named group), a "moveable COG" (I do not think something like that exists, since it has to be at the center of gravity to be a COG, and gravity only makes sense if mass is taken into account), a CP, a numeric input (I do not recommend that), etc.

 

See you

*Fuchur*


In my opinion, the export process should be something that isn't that complex for the artist. People in the biz don't care whether it's a COG or a light point.

We are currently spoiled by that process being figured out by someone else in other packages. So even though it's a very difficult development issue, it is greatly desired that the capability be there.

 

NOW whether it can actually be done in A:M by the "push of the button" lies in the hands of the programmers. BUT professional artists currently have that in other packages.

THUS FBX is important in the industry. I should probably say "easy to use FBX".... and with good results on the other side after it's been FBX'd out to the beyond.

 

IT'S FUNNY... Lumion 3D currently suffers from the opposite problem: they can't import animated meshes and refuse to try. "It's a Mad, Mad, Mad, Mad World" for 3D artists. :) hee hee


The normals issue can be a vexing one. I've set my default to not show back-facing patches, but I still have to manually go through the model to check it, and sometimes they seem to flip for no reason. In fact, one of my mouse buttons is set to F (flip normals) / Shift+P (select patch).

 

There is a Refind Normals function when you right-click in the modeling window, but it goes away when you select a patch. It seems to me that you could use a selected patch to find the correct front face.

 

Back to the emitter idea.

some issues to address:

Placing the emitter - if it is a temporary object created by the plugin, it has to be persistent until activated to allow for correct placement (switching through at least two views, and being moved repeatedly), and possibly multiple uses.

 

Keying the emitter(s) - the emitter may need to be keyed to an object, otherwise nested or complex objects may be affected. Take a head as an example: often the eyes and the interior of the mouth are separate objects with some normals that face into the head. These patches may be exposed to the emitter, flipping some of the normals incorrectly.

 

Holes - holes in the model could expose exterior parts of the same model to the emitter, flipping them incorrectly (you hid the mouth and eyes, but now the emitter hits the eyebrows and nose)

 

I'm not saying it is not something to think about, just that there are potential problem areas that need to be worked out


I'd say this could be overcome by just letting the user select which parts of the model s/he wants to correct the normals for.

In general a 3D designer will not attach the eyes (for instance) to the head, but will just stick them in the eye holes.

 

That way, the user could easily just make an attach-select and hide or lock everything in the model that should not be affected.

The good thing about that approach is that you can, for instance, have normals which point in a special (unexpected) way if you want that.

 

See you

*Fuchur*


In my opinion, the export process should be something that isn't that complex for the artist. People in the biz don't care whether it's a COG or a light point.

We are currently spoiled by that process being figured out by someone else in other packages. So even though it's a very difficult development issue, it is greatly desired that the capability be there.

 

NOW whether it can actually be done in A:M by the "push of the button" lies in the hands of the programmers. BUT professional artists currently have that in other packages.

THUS FBX is important in the industry. I should probably say "easy to use FBX".... and with good results on the other side after it's been FBX'd out to the beyond.

 

IT'S FUNNY... Lumion 3D currently suffers from the opposite problem: they can't import animated meshes and refuse to try. "It's a Mad, Mad, Mad, Mad World" for 3D artists. :) hee hee

 

Those are two different feature requests, more or less. In the end there is no way for the software to know which way the normals should point in a model. It could be that you want to export "reversed" normals: for instance, if you export a sky sphere from A:M for a game engine, it needs to have flipped normals, so that you can see into it but not out of it in the game. But a basketball (which has roughly the same geometry) needs to have the normals pointing out. And this can easily get much more complex if we are talking about more complex geometry or even a whole scene.

 

I am not aware of an exporter in any other software that can automatically determine what you want in every situation... so it is not wise to make that an automatic process. We are the artists, and we need to have that kind of control over our models... any other approach only helps in very specific situations and will give us a headache in others.

 

But again: this should not be attached to the FBX exporter. The FBX exporter should work like most other exporters in A:M. You define the surface / material / animation / bone structure in A:M, and the exporter really does nothing other than translate that to another file format.

 

The correct-normals algorithm I am talking about is meant to make the model look like it should before exporting.

This is its own plugin request: a simple but effective way to fix normal problems created while modeling, while preserving full control over what you do.

 

All that talk about COG, Lights, etc. is just too much talk about a very simple thing:

You need to tell the correct-normals algorithm where to start from. You do not really need to know what a COG is or how a light works. All you need to do is define a starting point in 3D space.

 

See you

*Fuchur*

 

PS: This is off topic and only for explanation: I myself find "center of gravity" (COG) a very stupid name for what we are talking about... a COG is an artificial concept for animators while animating, or a real thing for a physicist or structural engineer, but it is not suitable in an artificial geometric modelling universe itself. It can be helpful to create a certain impression for a human, but from a mathematical point of view it is useless as long as no "real" masses are involved. I like the name pivot much better, because a pivot can be moved wherever it needs to be and has no "defined/fixed" position.


Shouldn't normal consistency be based on a "master" patch instead of a point in space? That works every time unless you have a Möbius strip topology.

 

This can be helpful for many situations, but it would only work if everything is really attached to each other, right?

The good thing about shooting rays from a point is that that does not have to be the case.

 

See you

*Fuchur*


Not really... put two grids of the same size next to each other (facing the same way) and put the emitter in front of them. The emitter would now shoot rays at both surfaces, and in this situation the direction would be changed for both. With a "master patch" you would need to select one patch on each grid to do the same. In other situations an emitter could be bad too, for instance if you put a continuous surface (a box) in front of it and shoot at it from outside. I am not saying that this is a much better approach, it is just a different one. If the master patch is better in your opinion, or much easier to implement, I am fine with that one too.

 

See you

*Fuchur*


If the master patch is better in your opinion, or much easier to implement, I am fine with that one too.

Neither is hard. Both are very basic vector math as far as finding the direction goes. I'm not so sure about the subsequent (common) steps that involve surface evaluation.
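To make the "basic vector math" concrete, here is a sketch of the direction test for both variants discussed above; Vec3 and Dot are the same stand-ins used in the earlier sketch, and the function names are placeholders:

```cpp
// Sketch: the orientation test for the two proposed variants.
struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};
inline double Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Emitter variant: flip if the normal faces back toward the chosen interior point.
inline bool FlipAgainstEmitter(const Vec3& n, const Vec3& patchCenter, const Vec3& emitter)
{
    return Dot(n, patchCenter - emitter) < 0.0;
}

// Master-patch variant: flip if the normal disagrees with a user-picked reference patch.
inline bool FlipAgainstMasterPatch(const Vec3& n, const Vec3& masterNormal)
{
    return Dot(n, masterNormal) < 0.0;
}
```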


Guess I may as well put my two down.
A nif import/export

Which will require at least one helper plug in

and recently,
A new bake texture. I get why the generated texture is flipped, and I think I know why most of the patch maps are rotated, but I will have to dig through the code to find a real solution.


  • 2 weeks later...

Hello,

Dream of wizards ... great! :rolleyes:

Here are some ideas for wizards; these ideas came from repeating certain work steps. (My English is not good, so I made drawings.)

The first is a tool that would close a spline into a circle.

circularise-displegadenn.jpg

The second wizard would insert a fan bone.

Fandisplegadenn.jpg

The third is more complex. It would insert bones and splines to a selection for modeling in action mode.

extrudebone.jpg


Smooth will "smooth" stuff... it is meant to be used to create "smoother" surfaces without changing them too much. Like that it is not exaclty what you want there... making a circle of the splines would change the spline very much...

I already thought that it was not exactly what you wanted it to be.

 

See you

*Fuchur*


  • Admin

I'm moving this topic up to the main Animation:Master forum because it's buried down deep in the SDK forum.

We'll have to look at what other truly A:M-related forums are buried too deep and move them appropriately also.

 

And to add another plugin idea... one that might relate more to being a feature...

 

It could be useful to use GPS data to position objects/images in 3D space.

I'm not sure how this would work relative to virtual space as well as imaginary spaces such as the city of Zarz on the planet Xeomopline.

Perhaps there might be a variable preceding the lat/long values that provides an origin, a scale, or an indication of whether the object is within visible range of other UPS (Universal Positioning System) coordinates.

How exactly does one use GPS in space or on Jupiter?
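Purely as a sketch of one possible convention (and the origin, scale and axis choices are exactly the open questions above), GPS coordinates could be mapped to local XYZ with a flat-earth approximation around a chosen origin:

```cpp
// Sketch only: map GPS (latitude, longitude, altitude) to local XYZ around a
// chosen origin using an equirectangular (flat-earth) approximation.
#include <cmath>
#include <cstdio>

struct LocalXYZ { double x, y, z; };

LocalXYZ GpsToLocal(double latDeg, double lonDeg, double altMeters,
                    double originLatDeg, double originLonDeg)
{
    const double kEarthRadius = 6371000.0;  // metres, mean radius
    const double kPi = 3.14159265358979323846;
    const double kDegToRad = kPi / 180.0;
    const double lat = latDeg * kDegToRad, lon = lonDeg * kDegToRad;
    const double lat0 = originLatDeg * kDegToRad, lon0 = originLonDeg * kDegToRad;

    LocalXYZ p;
    p.x = kEarthRadius * (lon - lon0) * std::cos(lat0);  // east
    p.z = kEarthRadius * (lat - lat0);                   // north
    p.y = altMeters;                                     // up
    return p;
}

int main()
{
    // Sample coordinates a couple of kilometres from the chosen origin.
    LocalXYZ p = GpsToLocal(48.8584, 2.2945, 300.0, 48.8566, 2.3522);
    std::printf("east %.1f m, up %.1f m, north %.1f m\n", p.x, p.y, p.z);
}
```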


  • Admin
The first is a tool that would close a spline into a circle.

 

I'm wondering what difference there might be between an averaging wizard and a circularize wizard.
I would guess the circularize wizard would use the center of all the CPs as the origin, and even out the spacing of the CPs around the spline at the average distance away from that point.
Whereas a simple averaging wizard would attempt to maintain the same spline but space the CPs out along that spline at equal distances (like the Resample Spline plugin does).
This makes me wonder if the Resample Spline plugin (or at least some of its code) could be used in furtherance of a Circularize plugin.
Similarly, if the Resample Spline plugin could be enhanced so that it could leave a spline in place but move the CPs to new locations (an average distance apart along the spline), that might work also.
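A sketch of the circularize math as guessed at above, for the simple case where the closed spline's CPs lie roughly in one plane (assumed here to be XY); evenly re-spacing the angles as well would be the additional "averaging" step:

```cpp
// Sketch: snap every CP of a roughly planar closed spline onto a circle of
// average radius around the CPs' centroid, keeping each CP's angle.
#include <cmath>
#include <vector>

struct CP2 { double x, y; };

void Circularize(std::vector<CP2>& cps)
{
    if (cps.size() < 3) return;

    CP2 center{0.0, 0.0};
    for (const CP2& p : cps) { center.x += p.x; center.y += p.y; }
    center.x /= cps.size(); center.y /= cps.size();

    double avgRadius = 0.0;
    for (const CP2& p : cps)
        avgRadius += std::hypot(p.x - center.x, p.y - center.y);
    avgRadius /= cps.size();

    for (CP2& p : cps) {
        const double angle = std::atan2(p.y - center.y, p.x - center.x);  // keep each CP's angle
        p.x = center.x + avgRadius * std::cos(angle);                     // snap to the circle
        p.y = center.y + avgRadius * std::sin(angle);
    }
}
```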
Regarding FBX import/export... I don't think we'll find anyone who doesn't want an FBX plugin, so I'll third that one. ;)
Question is... how do we get it done? Who do we hire/kidnap/invoke?
What does success look like? (I think I've asked this before.)
If moving in stages from success to success... as an initial FBX plugin, would import of a mesh (without textures) be sufficient?
I ask this because, if someone who has to learn from scratch tries to tackle this, that would be a logical milestone.
The next milestone would be to reverse the process and export to FBX (also without textures and animation, most likely).
Milestone 3 would then be to get the import of textures to work.
Milestone 4: textures exported.
Milestone 5: basic motion transfer (translate, scale and rotate).
Milestone 6: complex motion from a known origin.*
etc.
All of this assumes that an indirect path to FBX (i.e. OBJ and MDD to FBX and vice versa) might not be easier to code.
Currently I'm not sure A:M can even reimport the MDD files it exports... so that might be a logical first step to conquer.
For reference, here is some basic best-practice FBX info from Unreal Engine to consider: https://docs.unrealengine.com/latest/INT/Engine/Content/FBX/BestPractices/index.html
And here's a link to the 2017 FBX SDK: this contains sample import/export programs for Visual Studio.
For someone well versed in Python it may be that the FBX Python bindings will prove useful.
*The issue here is that FBX has only recently begun to solidify rigging standards, so those are still evolving.
We might therefore assume a basic motion-capture format such as BVH as the norm, although I see some standardization on HumanIK 2016.5.
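For anyone sizing up milestone 1, here is a rough sketch of what reading meshes (no textures, no animation) looks like against the Autodesk FBX SDK linked above; wiring the control points into A:M's own import interfaces is the real work and is not shown:

```cpp
// Sketch of milestone 1 only: open an FBX file and walk its meshes.
#include <fbxsdk.h>
#include <cstdio>

static void VisitNode(FbxNode* node)
{
    if (FbxMesh* mesh = node->GetMesh()) {
        std::printf("mesh '%s': %d control points\n",
                    node->GetName(), mesh->GetControlPointsCount());
        for (int i = 0; i < mesh->GetControlPointsCount(); ++i) {
            FbxVector4 p = mesh->GetControlPointAt(i);  // x, y, z (w unused here)
            std::printf("  %f %f %f\n", p[0], p[1], p[2]);
        }
    }
    for (int i = 0; i < node->GetChildCount(); ++i)
        VisitNode(node->GetChild(i));
}

int main()
{
    FbxManager* manager = FbxManager::Create();
    manager->SetIOSettings(FbxIOSettings::Create(manager, IOSROOT));

    FbxImporter* importer = FbxImporter::Create(manager, "");
    if (!importer->Initialize("test.fbx", -1, manager->GetIOSettings())) {
        std::printf("Initialize failed: %s\n", importer->GetStatus().GetErrorString());
        manager->Destroy();
        return 1;
    }

    FbxScene* scene = FbxScene::Create(manager, "scene");
    importer->Import(scene);
    importer->Destroy();

    VisitNode(scene->GetRootNode());
    manager->Destroy();
    return 0;
}
```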

Rodney. I think you are correct that the MDD has some issues.

I would love to see the MDD working really well for both export and import.

 

I agree and 4th the FBX hopes. But that is obviously a very heavy hurdle.

Maybe someday it can be accomplished.


  • Admin
that is obviously a very heavy hurdle.

Maybe someday it can be accomplished.

 

The first thing we might do is compare A:M's current import/export plugins with the FBX import/export available for other programs.

It's a lot like animation itself... in point... out point... break it all down in the middle. Then refine the performance.

I'm afraid I don't see a lot of folks lining up to take this on however.


I would guess the circularize wizard would use the center of all the CPs as the origin, and even out the spacing of the CPs around the spline at the average distance away from that point.

Yes, precisely. Circularize is both very useful and easy to implement.


  • 1 month later...

Another idea... Currently it is possible to snap a spline to another spline, but it is not possible to connect them without breaking the UVs or 5-point patches.
A plugin that could do that would be great. Imagine you have libraries of limbs (with rigging and UVs); with this plugin it would be easy to drag limbs onto your model and connect them. This would give A:M the opportunity to be a sort of "creature creator".

connect.png


Is there a plugin that cuts through a row of patches with a spline and keeps the decals intact? I'm thinking of writing one, but maybe it already exists.

 

I don't think so... the only one I am aware of that is close to this would be CutPlane, but I doubt it will keep the UVs (I have to admit, I have never tried that...).

 

See you

*Fuchur*


 

An awesome plugin for A:M would be LSCM (least squares conformal mapping) for UV mapping. I've been asking for this for ages.

okie-dokie I'll bite:

 

what is this? Can you show a sorta maybe example-ish image

 

okay I found an explanation by googling (other mapping algorithms also included). Taken from link:

 

"The Least Squares Conformal Maps (LSCM) parameterization method has been introduced by Lévy et al. [7]. It corresponds to a conformal method with a free border (at least two vertices have to be constrained to obtain a unique solution), which allows further lowering of the angle distortion. A one-to-one mapping is not guaranteed by this method. It solves a (2 × #triangles) × #vertices sparse linear system in the least squares sense, which implies solving a symmetric matrix"

 

http://doc.cgal.org/latest/Surface_mesh_parameterization/index.html

 

images and CODE at link above.

 

This type of mapping sorta, almost, not quite looks similar to the mapping done by the BitMapPlus kci:dnd Plug-in. Perhaps a place to start?
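For reference, my understanding is that the quantity LSCM minimizes (per the quoted description) is the area-weighted squared violation of the Cauchy-Riemann equations over the triangles, roughly:

E_{\mathrm{LSCM}} = \sum_{T} A_T \left[ \left( \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} \right)^{2} + \left( \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} \right)^{2} \right]

where (u, v) are the texture coordinates being solved for, (x, y) is a local 2D frame in triangle T, and A_T is the triangle's area; stacking these residuals per triangle gives the sparse least-squares system the quote mentions.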

LSCM.png

teapotbitmapplugincheckerimage.jpg


  • Admin

For those with an aptitude for writing import/export plugins I recently saw this source code related to creating FBX plugins via glTF:

https://github.com/cyrillef/FBX-glTF

A pertinent part:

Converter
glTF is an open-source command-line pipeline tool that converts FBX file (and any file format that FBX can read such as obj, collada, ...) to glTF.

FBX importer/exporter plug-in
IO-glTF is an open-source FBX importer/exporter plug-in that converts FBX file (and any file format that FBX can read such as obj, collada, ...) to glTF.
This plug-in can be used by any FBX based application to import/export glTF files.

This is from Cyrille Fauvel who should know a thing or two about the FBX format because he's worked for Autodesk since 1993.
He maintains the blog 'Around the Corner' and a blog post from just over a year ago goes into some detail about how he arrived at the source while researching a WebGL project. LINK

 

The GL Transmission Format (glTF) is a runtime asset delivery format for GL APIs: WebGL, OpenGL ES, and OpenGL. glTF bridges the gap between 3D content creation tools and modern GL applications by providing an efficient, extensible, interoperable format for the transmission and loading of 3D content.

 

For more about glTF see info on their github site: LINK


An import plugin should be able to convert all of a model's assets, or references to assets, into something that A:M understands. Likewise, an exporter should convert the A:M references or assets into something that is understood by another program. The ideal would be the ability to convert a file to a .mdl, then take that .mdl and convert it back, and have the original program not see a difference.
However, taking the development of a plugin in steps allows the chance to test the parts and lets others test the parts. It also allows others to build upon a started-but-dropped project, or improve a part, without starting from scratch.
Some exporters may need helper plugins to ensure that the assets of the .mdl are correctly placed, or meet the standards of the target format. At some point it may become possible to integrate the separate helpers into the export process, at least as final checks, to make sure that it will perform as intended.

 

Sorry, I didn't intend that tone; I didn't notice it until I reread it a few days later. And I hadn't noticed that Rodney had given some sample milestones.
