Hash, Inc. - Animation:Master

Rodney · Admin · Posts: 21,575 · Days Won: 110
Everything posted by Rodney

  1. Here's a 45-minute documentary on the making of Zootopia. Well worth the watch! xhttps://www.youtube.com/watch?v=D3pF9owYlRI Some takeaways:
- Research is important.
- Ideas (stories, plots, characters, etc.) continuously change.
- It's good to continuously reinvent and accept feedback before the actual production process begins. Not stated but apparent is that production is the execution of those firmed-up ideas.
There's lots more than that but... watch the documentary and see for yourself. I think it should be readily apparent that the major changes made to the movie resulted in a much better movie.
  2. That may be the case and even if so can be quite useful. Someone looking at a screen (or text file) might logically relate CPs #45 and #47 together but might not consider CPs #45 and #65536 as related. Again, we'd have to look at what the code of the Renumber plugin actually produces. Easy enough to test. Interestingly, we appear to be speaking of a class of users that actually care about CP assignments, which would appear to be programmers. It follows that Steffen might have produced such a thing primarily as a programming aid. The majority of users very likely don't care. A:M has a long history of shielding users from the arcane code that passes behind the scenes. For someone that just wants to dive in, model, and animate, that's a good thing.
  3. Thanks for the exploration of the Planet material. I was using it the other day and trying to make some sense of it myself. (Not very successfully!) Theoretically... This might be a little unrelated to your posting, but the main thing I can think of is to make sure you give yourself enough scale on your model to take advantage of the Planet material. While you can scale the material to allow for smaller models, at some point you run afoul of the resolution of your screen. Thanks again for the write-up on specifics. Much appreciated. Added: If you don't have video screen capture software like OBS (Open Broadcaster Software), I highly recommend it (especially the Studio version, as it appears to be more fully featured). Not just because you can share what you have here, but because there is surely coming a day when you want to remember what you learned with Planet material settings and find you've forgotten and must explore it all again.
  4. I'll have to disagree with that, but I don't know your reasons for forming that opinion. Let's assume we are in agreement but just looking at the same thing from different vantage points. The principle of primacy could be used to argue 'first saved, best reinstated'. But that would run up against the principle of recency, where 'the most recent is more easily recalled' because it's what we have immediately at hand. In this scenario the master gets lost and is replaced by the new, which may or may not be 'better' in the present light or situation. At any rate, I can't subscribe to the underlying idea that any CP order is as good as another, with perhaps one exception. That case might be a perfect state where all CPs are equal. In other words, it's the change (potential or realized) that makes the reference useful or necessary. To counter the idea from a different angle, consider that the idea of 'one is just as good as another' would suggest no need for optimization. All orders being equal suggests nothing can be improved upon either. We all know this isn't the case. But optimization itself is variable (some orders being more appropriate than others) because what is optimal (well ordered) in one case will not be so in another case. The master model by itself doesn't provide anything particularly useful outside the closed system. It's when matched up against other models whose makeup is different that we have opportunity for useful contrast and comparison. If the CP order of a (master) file is in an optimized state then it might be ideal, but if not it may not have full title to mastery as of yet. In this we might have to consider from where we are taking our measurements that define mastery. I will assume that in A:M each CP is named and numbered in chronology. If CP number 17 is deleted, the next CP created is not numbered 17 but given the next available number after the current count of CPs. (I've never checked so may be wrong here!)
I must therefore assume that a heavily edited model has CP numbers all over the chart with respect to order and placement. A reordering of CP numbers then optimizes them based on some specific criteria (which I can only guess at). I must assume it's not from top to bottom of mesh. Nor is it from right to left. Although... importantly... once ordered, these numbers could be recalculated that way if necessary. This ordering can then be used to determine and project other efficiencies. Without yet looking, I *think* the order (of renumbering CPs) is according to spline assignment and continuity. That would make sense to me. And this would be my exhibit A against the idea that 'any CP order is as good as another', in that I would think that CPs are best ordered when they align with spline assignment and continuity.
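The idea sketched above — renumbering CPs so that consecutive numbers follow spline assignment and continuity — can be illustrated with a small example. This is a guess at the approach, not the Renumber plugin's actual code; the data layout (splines as lists of CP ids) is an assumption for illustration.

```python
def renumber_cps(splines):
    """Renumber control points by walking each spline in order, so that
    consecutive numbers follow spline continuity.  `splines` is a list of
    splines, each a list of CP ids; a CP shared by two splines (e.g. at a
    hook or intersection) keeps the number from its first appearance.
    Returns a mapping of old id -> new id."""
    mapping = {}
    next_id = 1
    for spline in splines:
        for cp in spline:
            if cp not in mapping:  # shared CPs are numbered only once
                mapping[cp] = next_id
                next_id += 1
    return mapping

# Two splines sharing CP 65536: after renumbering, the ids are contiguous
# and ordered along the splines rather than by creation history.
splines = [[45, 47, 65536], [65536, 12, 3]]
print(renumber_cps(splines))  # {45: 1, 47: 2, 65536: 3, 12: 4, 3: 5}
```

This makes the "heavily edited model" case concrete: CP 65536 (created late, after many deletions) ends up adjacent in number to its spline neighbors.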
  5. One purpose suggests itself to me, but it's not for direct use — it's for reference. What I would imagine it could be used for is a comparison of one model to another. For instance, if a model is saved and then altered and saved again with a new file name, those two models can then be compared and the change isolated via a difference check. Further, several comparisons could be run to extract more useful information. For example: the comparison could be made both before and after the renumbering of CPs to firstly isolate the area that needs to be inspected (all other areas can then be set aside as trivial unless needed again). The renumbering process is executed and another comparison run, which identifies the CP order of change. Now we have the area of interest and the identity of CPs that remain unchanged and those that have changed. We can flag those that are unchanged as clean and those changed as dirty (or vice versa if the output direction of dataflow needs to change). Clean can be manipulated further without refinement, while those dirty must be processed further with respect to the ideal topology (which may or may not be the original model that was saved). Then it's mostly a matter of assigning CPs to their most likely equivalent in another space (i.e. yet another topology or change). More practically speaking, renumbering of CPs seems to be a step one might take just prior to saving a (master) model that one wants to use as the exemplar; a model that is considered final and will never change. This is the equivalent of a read-only file, one that should be referenced but never altered. This exemplar is the model that is production ready, and any change (in topology) from it will prove costly because that change will break something further down the food chain. As a best case scenario the change will just go along for the ride (and might not impact anything), but it sits in a hotspot waiting clandestinely for the opportunity to break things.
This whole workflow would be interesting to use in combination with versioning, as iterative states of the same file could walk up and down the historical record showing the states of change (and even inbetweening them as necessary to create a form of resolution-independent view of prior states). This would be the ultimate undo, because all changes could be dialed back to a former state. And more so... the data can be used to project and identify potential/future states. A near equivalent of 'living data' that can aid the animator by maintaining/repairing and 'animating' itself.
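The clean/dirty flagging described above can be sketched as a simple difference check between two saved versions of a model. The data layout (CP id mapped to position) and the function name are assumptions for illustration, not anything A:M actually exposes.

```python
def diff_models(before, after):
    """Compare two CP position tables (id -> (x, y, z)) and flag each CP
    as 'clean' (unchanged between versions) or 'dirty' (moved, added, or
    removed).  Dirty CPs mark the area of interest for further work."""
    flags = {}
    for cp in before.keys() | after.keys():
        flags[cp] = "clean" if before.get(cp) == after.get(cp) else "dirty"
    return flags

v1 = {1: (0, 0, 0), 2: (1, 0, 0), 3: (2, 0, 0)}
v2 = {1: (0, 0, 0), 2: (1, 5, 0), 4: (3, 0, 0)}  # CP 2 moved, 3 deleted, 4 added
print(diff_models(v1, v2))  # CP 1 is clean; CPs 2, 3 and 4 are dirty
```

Running the same check before and after a renumbering pass (with ids translated through the renumber mapping) would separate changes caused by renumbering from real edits to the mesh.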
  6. This is the only topic I could find that appeared related to the subject (guys named Detbear and Robcat... and other animals) contemplating the renumbering of CPs: https://www.hash.com/forums/index.php?showtopic=44008&hl=renumber&do=findComment&comment=378740
  7. I have a vague memory that the reordering had something to do with export to OBJ format. Something related to trying to resolve the issues leading up to five point patches and hooks but... slightly before that. I'd suggest looking into topics frequented by Malo and Nemyax related to that subject.
  8. I've tried to use Nuke (non-commercial) a few times and can't wrap my head around some of its workflow. Fusion has a very simple organization to it. So simple in many ways that I've caught myself being frustrated a few times by the desire for something... anything... new. With Fusion that interest could very easily be covered by learning to use the Lua scripting language, but my head currently doesn't want to go that route. Nuke is the up-and-comer because money is being poured into development and (IMO) The Foundry has a suite of other tools that studios can purchase, Katana being one of the buzzword programs. But I really, really like Fusion's simplicity and yet its ability to be as deep as necessary (through complex combinations of nodes). The interface itself is very straightforward. That is one of its major strengths. William, with your knowledge of production you really should have Fusion in your arsenal. On the surface Nuke and Fusion are very similar, but Nuke feels heavy (much like Adobe products often do... my primary critique of Adobe) while Fusion feels light as a feather in comparison but doesn't fall short in capabilities or capacity for throughput. And one of the more impressive aspects of that is how it's the same program that you get for free (minus the expensive plugins and an army of programmers at your beck and call à la Hollywood).
  9. One of the most difficult aspects for new users of Fusion, I believe, is the simple opening and saving/exporting of files. Initially this seems the opposite of intuitive. The user must find and open an Input node in order to open an image/resource. This is strange territory to folks in the habit of using a simple File/Open approach. A simple conversion of images in Fusion, however, might be very straightforward — an Input node and an Output node:
- A sequence of images in PNG is fed into the Input node.
- The Input node connects to the Output node.
- The Output node specifies the format and filename to convert to (i.e. MOV).
Want to add an effect, color correction, text, etc.? Insert that new node between the input and the output. The file/save structure is then reserved for collections of these nodal instructions. That way those instructions can easily be reused.
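The input-to-output flow above can be sketched as a toy node graph. This is purely an illustration of the concept — the class and the lambda "nodes" are invented here and bear no relation to Fusion's actual scripting API.

```python
# Toy node graph: an input node feeds an optional chain of effect nodes,
# ending in an output node that determines the target format.  Inserting
# an effect means splicing one more node between input and output.
class Node:
    def __init__(self, fn, upstream=None):
        self.fn = fn            # transformation this node applies
        self.upstream = upstream  # the node feeding into this one

    def render(self, frame):
        data = self.upstream.render(frame) if self.upstream else frame
        return self.fn(data)

loader = Node(lambda f: f"png:{f}")                      # input node (PNG sequence)
color  = Node(lambda d: d + "+colorcorrect", loader)     # effect spliced in between
saver  = Node(lambda d: d.replace("png", "mov"), color)  # output node (MOV)
print(saver.render("frame0001"))  # mov:frame0001+colorcorrect
```

The point of the sketch is the topology: nothing is "opened" or "saved" directly; the saved document is just the description of this chain, which is why it can be reused on any image sequence.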
  10. It won't be once you dive into it. The breakthrough for me was to realize that A:M's Compositor basically was the same thing, just without the visual representation of nodal 'cubes'. I must admit that initially Fusion (and nodes) turned me off, but that was because I had tried some node-based software that wasn't very intuitive before and it soured me on the whole idea of nodes. I gritted my teeth and suffered through about one week's worth of video tutorials and emerged with an appreciation for nodal workflow. Not that I see that as better than other approaches, but useful nevertheless. I know for a fact that you will be dangerous once you get a feel for the tools, and I look forward to that and ESPECIALLY what I think you'll be able to push toward A:M given the workflow. And that is really the prime word for nodes... flow... those silly strings that connect nodes didn't appeal to me at all, but when they connect in a logical flow that I can trace from start to finish it helps in troubleshooting and in knowing exactly where to go to focus on what needs to be done. This can be much harder in an approach that doesn't have a good visual representation of how and where data is flowing. And lest someone think I'd recommend A:M gain 'nodes'... no, I don't really think so. The PWS is an excellent equivalent. I have seen screenshots of Rasikrodi's bone tool software and that was nodal. If he ever brings that online it might gain some additional interest in nodes. I really don't have a thing for nodes though... just useful workflows. The nodes are a minor aspect of that, and the more invisible the better. Object-Oriented Programming is basically nodal, so there is that too, and that is an important aspect of the programming of A:M and why it has such excellent workflow.
  11. For users of Photoshop (and for use with A:M's current capability to render out to Photoshop's PSD format), you'll be pleased to note the PSD import works again in Fusion. This wasn't the case in the previous releases. Fusion handles the standard image formats that A:M uses (BMP, TIF, TGA, EXR, PNG and JPG) as well as movie formats (AVI and MOV). As Fusion is resolution independent, it can also be used to convert to other formats and resolutions not supported by A:M. Fusion has no problem with importing OBJ files, including model sequences, exported out of A:M, although a knowledge of the strengths and weaknesses of various model formats will help. Importing and exporting is accomplished through the FBX import/export nodes; just make sure you change the file type to the appropriate one for import into A:M (OBJ, 3DS, etc.). Fusion does well at exporting to OBJ format for import into A:M as well. I haven't explored other formats in depth, as OBJ appeared to work best. Some of the modeling tools can make quick work of props and sets for use in A:M, although the standard issues apply when using polygon models. I haven't explored a fully quad-based workflow in Fusion and have reservations about that approach, because Fusion is technically not a 3D modeler so is not optimized for that. Still, Fusion can create/export models of all kinds for use with A:M. Fusion can very likely access the additional channels that A:M renders with the OpenEXR format, to include depth channels (which A:M doesn't support yet). This can be useful when rendering out to EXR and extending those files further for use elsewhere. Particles can easily be contained/constrained within the confines of an Area, Model or Text, and one of the more useful effects/nodes is Fast Noise, which can be used to create a sense of environmental depth in 2D or 3D space.
  12. Here's a pretty nice demo of some water effects that should be familiar to A:M users who have done work with water. As I say, most of these effects have one-to-one correspondence with similar approaches in A:M. The demo is from the movie 'Anonymous'. xhttps://www.youtube.com/watch?v=mOpN6C3ZrjY For those not familiar with node-based workflow the quickstart tutorials are recommended: xhttps://www.blackmagicdesign.com/products/fusion/training
  13. For those of you that like to composite imagery rendered out of Animation:Master... BlackMagic Fusion 8.0 is out of Beta and available for Mac and PC. (The Linux release is still in development.) https://www.blackmagicdesign.com/products/fusion The program is free for commercial and non-commercial use. The full Studio version ($995) adds stereoscopic rendering, project management, etc. and has also been released. The manuals are well suited to get you up and running quickly, as are the quickstart and introductory tutorials. And of note, many of the features in Fusion are available in Animation:Master but are optimized from the standpoint of an image and effects compositor. As such, a workflow generated in Fusion can often be replicated in A:M for those that prefer to keep their projects inside A:M from start to finish. The list of features that the standard (free) release of Fusion brings to the table is a long one, but some highlights include:
- 2D and 3D environments, texturing and lighting
- 2D and 3D text
- Vector Paint
- Particles (to include linking 3D Models as particles)
- Color Management/Color Correction
- Tracking and Rotoscoping
- Volumetric Effects
- Node based workflow
- Filters and Effects
- Keyframe Spline Editor
- UV Mapping
- Macros, Expressions and Scripting
- GPU Accelerated Compositing
- Render Manager
- Chroma Key/Mattes/Masking
These capabilities may be useful for preparing resources for use with A:M or for extending the use of A:M.
  14. For a second there I thought Jason was making an announcement about a new A:M Films! I got all excited even. I think the inner workings of the A:M Films upload were gutted when some servers got exchanged. Jason may need to manually load the films (assuming other folks who have previously registered can't still do that on your behalf). Jason at hash dot com is his email.
  15. You make an excellent point, Darrin. What exactly is 'real time' rendering? I don't think any of us believe that the short film at the link is actually playing in realtime... only that in some way it was pre-rendered in realtime. Still, I'm going to guess there is some optimized code that makes the most out of the latest and greatest in graphics technology (real time or otherwise). I haven't seen a behind-the-scenes yet, although I expect just such a thing should show up on FXGuide shortly. At a guess I'd say the performances were motion captured and stored, tweaked and readied for playback and rendering. Materials and lighting were then tweaked to optimize every possible calculation, and then the real-time renderer was turned on. How did the imagery get from display to hard drive? Not sure. I assume they didn't just screencapture the whole thing, so what we are seeing probably can't be called realtime rendering. I must assume that an equivalent is playing back on computers somewhere for demo purposes.
  16. Their 'realtime' render hardware is surely more advanced than mine but... Here's the latest and greatest from Unity 3D who is demo'ing their latest beta via a short video titled 'Adam': http://unity3d.com/pages/adam
  17. Perhaps I should have underlined the word 'offset'. (In the illustration you provided, the user would simply move the COG to its desired location and run the plugin.) Think of your torus as if the majority of its mass is concentrated near 'the point that would work'. This would be the object's 'true' COG. The COG you indicated is the center of the object (not necessarily the COG). Not that I have a thing for COGs, but... centers of gravity are in themselves useful constructs... manually offsetting a COG merely creates a state of visual imbalance*. Although importantly, with regard to mass/weight the object is actually balanced. The COG can be either inside or outside of an object BUT could be constrained to the inside or the surface or wherever required. Like a light, null, COG or any other such construct, it's still just a point in space to measure from. Yet another thing to consider when plotting out new plugins is how they interact with and complement other features/plugins as well as paradigms of user interaction. Robert mentions retooling the AI plugin for use with SVG, and this is an example of a logical progression based on code that exists now. I like the general idea of lighting the inside of a mostly airtight object because I get a good sense of what the result might be (light escaping from holes in said object), but the remainder of the process I'm still trying to figure out. BUT an important part of any new feature/plugin is addressing the questions that arise in how to get something to work. I can see the scenario now. New user asks: "How do I get my normals to all point in the right direction?" Answer: "First you create a light..." This is where I begin to get lost. Of course the same might be said for COGs, but keep in mind that COGs are already an intrinsic part of any object. The full effect of COGs might not be completely implemented, but that doesn't negate the fact that every 'real' object has one.
*This thought of imbalance leads me to consider an additional approach that could be added to an algorithm designed to 'repair' normals, and that is to consider whether an object is structurally symmetrical/balanced or asymmetrical/unbalanced. If the former, then half of the calculations can be discarded at the outset and the results simply mirrored at the end of the calculation. This might be yet another reason to consider some level of COG in the operation. Another question this discussion makes me curious about is how an object's pivot point is determined. It should be easy enough to check, but I'll guess that the object's pivot is placed in the direct center of an object. I sense this could be considered the initial COG without respect to the object's true mass. Likewise, when using Mass in A:M, do the calculations use the pivot point as a start? Added: It's interesting to note that each Named Group can have its own pivot. So a Named Group could be the source for a plugin to initiate normal-refactoring processes from. And somewhat off topic: this makes me wonder if swarms of Named Groups couldn't attach themselves to an object in interesting ways and be the point(s) of interest for other plugins to use.
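The distinction drawn above — a pivot placed at the geometric center of an object versus a 'true' COG that depends on where the mass is concentrated — can be made concrete with a small sketch. The function names and data layout are assumptions for illustration; this is not how A:M computes either quantity.

```python
def pivot_center(points):
    """Bounding-box center: a guess at how an object's pivot might be
    initialized, ignoring mass distribution entirely."""
    xs, ys, zs = zip(*points)
    return tuple((min(axis) + max(axis)) / 2 for axis in (xs, ys, zs))

def center_of_mass(points, masses):
    """Mass-weighted average of point positions: the 'true' COG, which
    drifts toward wherever the mass is concentrated and so can differ
    from the pivot (and can even lie outside the surface, as in a torus)."""
    total = sum(masses)
    return tuple(sum(p[i] * m for p, m in zip(points, masses)) / total
                 for i in range(3))

# Two points on the x-axis; 90% of the mass sits at the right-hand one.
pts = [(0, 0, 0), (10, 0, 0)]
print(pivot_center(pts))                # (5.0, 0.0, 0.0) -- geometric center
print(center_of_mass(pts, [1.0, 9.0]))  # (9.0, 0.0, 0.0) -- pulled toward the mass
```

Either point would serve as an origin from which a normal-repair plugin could cast its measurements; the offset between them is exactly the manual adjustment discussed in the post.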
  18. Most keyboard shortcuts haven't changed, although there are surely a few that have, but most of those that changed were undocumented in the first place so shouldn't cause too much confusion. You can generate a list of your current keyboard shortcuts by going to Tools/Customize and opening the Keyboard tab. Locate the button that says Export, and upon pushing it an HTML file will be generated with all of the current shortcuts, including any you might have set up yourself. Most browsers will allow you to right click and save the document. I'm attaching a copy of the HTML file (converted to PDF) generated from me pushing the Export button on my end, but keep in mind that it could potentially include shortcuts that I've added (although I doubt this, as I haven't recently added any). Animation_Master Keyboard Shortcuts.pdf
  19. I think I'd best give up on this one, as I'm not seeing how a COG differs from a point of light, Null, etc. I'm obviously missing an important part of the equation. The same thing applies to all cases... at least from the approach of the proposed plugin... all must be inside the mesh/geometry. With lights I assume manual placement, as would be the case with a manually offset (unbalanced) COG. The COG is just a way to automate an initial placement of that point from which measurement/raycasting originates. That point would then be offset to the desired location from which to run the New Normal plugin. I'm mostly interested in my own understanding of the problem and potential approaches to solutions, though, so am confident that whoever is going to program this plugin will eventually figure it out. Still, for any given problem there are approaches that can be tagged for exploration while others are (at least temporarily) ruled out. Of course this is the underlying problem with wishes. Wishes are easily formed, but the realization of those wishes is inevitably harder. This does beg a few questions, however, such as: How does a process such as 'Show back face polygons' work? Could those features that use normals in their equations be enhanced to ignore those normals that don't point in the same direction as a given percentage or the average direction of the other normals, thereby bypassing the problem altogether? Could an option be given to allow particles to emit from both sides of a surface regardless of the direction of the normals? The entire concept of normals suggests closed systems that have outer and inner surfaces, and yet we more commonly interact with open systems where, technically speaking, all surfaces can be simultaneously internal and external. How can we make better sense of these and other related underlying paradigms? One solution would appear to be suggested in the name itself: normals.
How often do adjacent patches flip their normals 180 degrees from each other? Surely not often. So any patch with neighbors that are abnormal (approaching 180 degrees away from what is deemed normative) can safely be flipped. This process can then iterate until all patches are normed. Then the entire model is then either right sided or inside out ready for a final flip if necessary. (Note that I assume this is basically how the present 'Refind Normals' plugin likely works)
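The neighbor-consensus idea just described — flip any patch whose orientation disagrees with most of its neighbors, then iterate until stable — can be sketched as follows. This is a guess at the approach; the post itself notes it is only an assumption about how the 'Refind Normals' plugin might work, and the data layout here is invented for illustration.

```python
def normalize_orientation(facing, neighbors, max_passes=10):
    """Iteratively flip any patch whose facing disagrees with the majority
    of its neighbors, until the mesh is consistently sided.  `facing` maps
    patch id -> +1 or -1; `neighbors` maps patch id -> list of adjacent ids.
    The result is uniformly sided but may still be inside out as a whole,
    ready for one final global flip if necessary."""
    for _ in range(max_passes):
        flipped = False
        for patch, nbrs in neighbors.items():
            agree = sum(1 for n in nbrs if facing[n] == facing[patch])
            if agree < len(nbrs) - agree:  # minority orientation: flip it
                facing[patch] = -facing[patch]
                flipped = True
        if not flipped:  # converged: every patch agrees with its majority
            break
    return facing

# A strip of four patches where patch 3 is inside out.
facing = {1: 1, 2: 1, 3: -1, 4: 1}
neighbors = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(normalize_orientation(facing, neighbors))  # {1: 1, 2: 1, 3: 1, 4: 1}
```

Note the limitation this shares with the described algorithm: if more than half of a region is flipped, the consensus can settle on the wrong side for that region, which is why the final whole-model flip (or a reference point such as a COG) may still be needed.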
  20. Yes, I understand the issues with normals facing the wrong direction. The problem I'm having is seeing how the proposed solution resolves the issue. I -think- what is under consideration is why Pixar and others have gone to bi-directional raytracing, as the rays can more easily spot orientations of surfaces when they keep track of orientations, reading surfaces in all directions. If placing an object inside the object under consideration for normal correction is a viable starting point, I can't help but wonder why something like determining the Center of Gravity (COG) or Center of Mass (COM) wouldn't just as easily accomplish the same thing. And if the response is that an external object can be placed anywhere... the same thing could be accomplished through COG/COM with an offset. Another similar option might be to automatically use the Model's pivot point and adjust that as desired. Now originally, when Gerald mentioned the same thing in another topic, I thought the idea of a light inside the model was an interesting one, because with a light inside a closed/airtight object no light would escape except where the object was in fact not airtight. Surfaces that faced inward might then be considered 'open' and let the light out. But all of this seems to me to be extra steps. I'm not sure how the current 'Refind Normals' plugin works (i.e. what specific process it goes through), but I'd imagine it might compare each normal with its neighbors, and if outside of a specific tolerance will tag it to be inverted. This seems logical to me. The underlying issue is that during modeling an external surface can very quickly become an internal surface, and A:M isn't always able to resolve what is what. So the first and best resolution is good modeling practice. Of course this is complicated when external models are imported into A:M, as they often may have no optimal normal data attached.
At any rate, my purpose here isn't to sound like I'm against the proposed plugin but rather to better understand the underlying process the plugin proposes to address. Especially if there is a simpler methodology that can be implemented to achieve the same (or at least satisfactory) results.
  21. Gerald, I'm curious about the reason for running such a plugin. I must assume that it isn't just to make sure every normal points outward (on the true surface of a model). Would it be to create position-based normal maps or something like that? At a guess, I'm imagining that the bounces would have to be a multiple of 2, because you wouldn't want to flip a normal once and then have it flipped back to its original state upon being hit the second time. And I'm not sure how biased normals might factor into the equation. I don't know enough to consider, but am guessing there are 4 (although I sense a theoretical 9) possible positions based on A:M's current ability to rotate images/normals. If we were to delve into the idea of two-sided patches, that number might be what... 54 possible angles? I guess what I'm trying to envision is the underlying purpose/usage for the process of flipping based on a specific origin. I can imagine some uses but am not sure of the specific purpose of your proposed plugin.
  22. Can you provide a little more detail on what that entails? What the end result might look like in A:M as a plugin? Links to research, development or code examples?
  23. If you had asked this a year or so ago I would have had a long list. As it is right now I'm drawing a blank. But... I'll remember some soon and post those. Practically speaking, and mainly for the learning experience, I'd like to see a public project that expands the Font Wizard. It might move forward in phases like this:
Phase 1: Add the ability to input the text from an external text file.
Phase 2: Expand the capability of the text file input to include special characters (carriage returns, font styles, etc.).
Phase 3: Expand the capability of the text file syntax to allow for placement of the text in 3D space.
Phase 4: Mirror the -new- Font Wizard capability in a new Model Import Wizard that allows A:M to import and place models in 3D space at specific coordinates designated in a referenced text file. Note that the format might mirror that of A:M Libraries shortcuts with the addition of XYZ coordinates tacked on to the end.
Phase 5: Explore the ability to save out 'scripts' for reimport at a later time, with options to change elements of the script in the Wizard. At this point we've almost come full circle to simply importing a Chor or opening a Project file with the resources specified in detail.
R&D phase: With the knowledge/experience gained by programming those enhancements to the Font Wizard, identify obstacles to overcome in order to create a full scripting environment for A:M.
Note that similar plugins could be created by swapping out the target format (i.e. SVG instead of AI, etc.). I'm not sure if the code is freely available, but plugins such as the Terrain Wizard might demonstrate excellent areas of plugin capability that are not often explored (the Terrain Wizard is one of the few areas where users can paint directly in A:M).
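The Phase 4 idea — a text file listing models and XYZ coordinates for placement — could be prototyped with a small parser. The file format here is entirely invented for illustration (the post only suggests "A:M Libraries shortcuts with XYZ tacked on"), so treat the layout as a hypothetical.

```python
def parse_placements(text):
    """Parse a hypothetical placement script: one model per line, in the
    form 'path/to/model.mdl x y z' (whitespace separated).  Returns a
    list of (model_path, (x, y, z)) tuples a wizard could iterate over
    to import and position each model."""
    placements = []
    for line in text.strip().splitlines():
        path, x, y, z = line.split()
        placements.append((path, (float(x), float(y), float(z))))
    return placements

script = """models/tree.mdl 0 0 0
models/rock.mdl 10 0 -5"""
print(parse_placements(script))
# [('models/tree.mdl', (0.0, 0.0, 0.0)), ('models/rock.mdl', (10.0, 0.0, -5.0))]
```

As the post notes, once a format like this can also be written back out (Phase 5), it converges on what a Chor or Project file already does, which is the interesting full-circle observation.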
  24. Glad to see others are exploring too. I haven't been able to get the scanner process (a program called GTS) to work yet. (I probably need to read the instructions.) Here's a shot at what a test sequence looks like going the longer route: getting the images in after scanning outside of OpenToonz and drag/dropping the images into the Xsheet: