Hash, Inc. - Animation:Master

Rodney (Admin)
  • Posts: 21,522
  • Joined
  • Last visited
  • Days Won: 103

Everything posted by Rodney

  1. Creating joints like that is a most excellent use for 3D printers.
  2. This appears to be a Visual C++ runtime Dynamic Link Library that is missing from some Win 10 installations. Here's some information related to that, with some suggestions for installing it: https://www.drivereasy.com/knowledge/api-ms-win-crt-runtime-l1-1-0-dll-missing-error-solved/
  3. No... I got THAT part. I'm just saying with a name like 'Stubtoe'...
  4. Nice likenesses. Okay, that officially has me curious. I can't help but note that your render times could (at least theoretically) be reduced significantly by working in black and white (grayscale).
  5. It's important to note that mere usage would likely not constitute a violation of trademark. It's the identification that is important. So if your company's sound bite ran, "Animation:Master TOOT TOOT"... that would likely cross the line. UNLESS, as in the case of Mark's use of the Lone Ranger, the whole modus operandi is to present the material not as his own but as a send-up... a satire... a comical reference to a cultural icon. In my estimation he would in fact be underlining and supporting the trademark by pointing the one to the other. Now... that's for the trademark... but copyright is something else altogether. BUT... Mark is well covered there also under fair use with satire etc. Disclaimer!: My words should be construed solely as opinion. No legal advice is suggested nor shall any be implied. Consult certified legal counsel for matters pertaining to law. (or hire Martin Hash as IP adviser)
  6. Here's an example of Sound Marks that generally should be avoided in a creative endeavor: https://www.uspto.gov/trademark/soundmarks/trademark-sound-mark-examples Most of these fall into the common sense category, but a general rule of thumb is to steer well clear of sounds made popular in jingles and advertisements for commercial products. When I consider some of these and how simple they are (such as NBC's three note sound mark) I am reminded of just how precarious the road forward can be in establishing recognizable themes that identify specific products or services. This isn't to say that the three notes from NBC's title cannot be placed anywhere else. It does suggest that if those notes are used to identify another specific product or service we might find ourselves up against an established Trademark. For example, it might not be wise to open the presentation of all your films with this little ditty: https://www.uspto.gov/sites/default/files/74629287.mp3
  7. Okay. This is going to go even farther afield so I apologize in advance. Every once in a while I run across a (scholarly) paper that outlines a specific approach... usually to graphics... and something catches my eye. Less frequently I recognize something in the description; terminology or concepts that I actually understand to some degree. I am still shocked and amazed (and amused) when the implication is that I have guessed something right. It doesn't take much to amuse me, so this can take the form of a specific word that seems to perfectly fit a process. Examples of this might be 'spatial and temporal coherence' or statements of assertion like 'it is always better to do x than y' where I happened to guess at some point in time that x was generally preferable to y. This doesn't mean a lot. Both the paper and my silly guesses could be wrong. But for a brief moment... the lights flicker as if they are about to go on.

And this is where things get weird. Seam carving is one of those concepts out on the periphery that hasn't quite made a match, but I am confident it can fit well into the grander scheme of probing through frames and optimizing (or extracting) duplicates. This isn't to say this isn't already being done... I'm just acknowledging that my understanding of the process involved might be catching up. Seam carving is that technological approach (mostly used in scaling) where important objects are identified and then the less important space around those objects is extracted... removed in the case of downsizing and increased in the case of adding more data into that gap. It's how digital photographers and editors can remove that interloper you don't like from a family photograph, with the end result looking just like it was taken that way in real life. Where things get particularly interesting is when we take what is normally processed on the surface of a single flat plane and use the process in a volume of space and time. Here's a somewhat random blog entry from someone who thought seam carving would be too hard to program into any of his projects: http://eric-yuan.me/seam-carving/ (Note: This article is from 2013 so additional uses and understanding of seam carving have been made since then)

The actual code and the understanding of how it works intrigues me, but before I start to settle in I already find myself distracted by the fact that a seam carving map already exists in A:M via the Channel Editor (i.e. Timeline). I have to pause for a moment when I consider just how timely this discovery is to (potentially) understanding more about the temporal frameworks under consideration. In the Timeline or in the Channel Editor we are actually seeing our seam carving data cutting across all frames in any given animation. This is why, when we scale a selected group of keyframes down or scale it up, it doesn't break the keyframes; it just condenses or expands the space in between that is deemed to be less necessary.

Returning for a moment to the topic at hand... Where we can apply the same approach to rendering we gain some serious control over optimization. Personally I think A:M already does this. We just can't see that process unfold and likely wouldn't know what to make of it if we did. But I do see opportunities for us as users to better understand what we have at our disposal so that we can best take advantage of it.

#Optical Flow: I'll add this here as a note to self although it doesn't particularly apply to the above.
The statement is made in another article on Eric Yuan's blog (see above). Where this does apply is in considering that we don't want or need to sample every color (as mentioned before, with millions of colors to choose from) because we can immediately scale that problem down to RGBA channels, where 255 gradient steps can be used to illuminate the way forward (via brightness). A similar thing can be seen with motion, as large motions or the finer motions of distant objects might initially be discarded because they won't be seen from the POV where our calculations start. Something else to consider is that visual language (phenomenon?) which suggests slowly moving objects are in the background while faster objects are more often than not perceived to be in the foreground. We might use that understanding (and movement in general) to identify and categorize objects.
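For anyone curious about the mechanics behind that blog post, here's a minimal sketch of the core of seam carving (Python, assuming NumPy and a grayscale image; the function names are my own illustration, not Eric Yuan's code and not anything in A:M): build an energy map, then trace the lowest-energy vertical seam with dynamic programming.

```python
import numpy as np

def energy_map(gray):
    """Simple gradient-magnitude energy: high where the image changes a lot."""
    dy, dx = np.gradient(gray.astype(float))
    return np.abs(dx) + np.abs(dy)

def find_vertical_seam(energy):
    """Return one column index per row tracing the lowest-energy top-to-bottom path."""
    h, w = energy.shape
    cost = energy.copy()
    # Accumulate the cheapest way to reach each pixel from the row above.
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 2, w)
            cost[y, x] += cost[y - 1, lo:hi].min()
    # Walk back up from the cheapest pixel in the bottom row.
    seam = [int(np.argmin(cost[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam.append(lo + int(np.argmin(cost[y, lo:hi])))
    return seam[::-1]
```

Removing that seam (one pixel per row) shrinks the image by one column while leaving the 'important' high-energy areas untouched. The temporal idea above would run the same kind of probe down a stack of frames instead of down a single image.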
  8. Not unless I'm misunderstanding something here. I would expect it to be either 24 FPS (traditionally) or 30 FPS (as the default in A:M). The FPS is independent of where the keyframes are located. But you raise a good point here in that frames rendered at 6 FPS, each held for 4 frames, would run as if they were 24 frames at 24 FPS. Similarly, frames rendered at 6 FPS and held for 5 frames each would run as if they were 30 frames at 30 FPS. The downside of that straight conversion would be that all the frames would be stretched evenly unless some method were presented to allow for applying ease (slow in/slow out). A compositor (replicator?) would need to have a means to accelerate or decelerate the references to file in a variety of meaningful ways.
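A quick back-of-the-envelope check of that timing (plain Python, purely illustrative; the numbers are just the 6/24/30 FPS case discussed above):

```python
# Each second of animation: 6 unique frames rendered at 6 FPS,
# each duplicated ("held") to fill a higher playback rate.
unique_fps = 6
for playback_fps in (24, 30):
    holds = playback_fps // unique_fps      # copies needed of each unique frame
    total_frames = unique_fps * holds       # frames written per second of playback
    print(f"{playback_fps} FPS playback: hold each frame {holds}x -> {total_frames} frames/sec")
    # 24 FPS -> hold each frame 4 times -> 24 frames per second of playback
    # 30 FPS -> hold each frame 5 times -> 30 frames per second of playback
```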
  9. Well YOU don't. I do. Constantly. Yes, it's definitely a bad habit. I may be way off track but I think if we polled A:M users most would admit to running final renders continuously. They just change the settings to limit excessive render times. That'd be cool if A:M could keep track of those automatically but it can't. Although that is somewhat what I was suggesting could happen, so I'll add that as an Exhibit A. For discussion's sake... I'm not suggesting any new features... I'm exploring and investigating... What if that 'custom list of frames' was driven by the keyframes of the timeline? I know that you tend to animate on 4s, so that might equate to 1, 5, 9, 13, etc.... just like traditional animators! A post process might even fill in the gaps with frames 2-4 (as copies of 1), 6-8 (as copies of 5), 10-12 (as copies of 9). Of course these things will generally be accomplished in a compositor so all we have to do is composite. That compositor can even be an HTML page where a simple reference does the job of 'copying' for us. So a fully playable sequence of 13 frames could be viewed based on only those 4 rendered frames. All of the in-between frames would simply reference (re-reference) the keyframes. Aside: This may be where we cycle back to consider file formats again because with an HTML compositor only a few image formats will be appropriate; PNG being the most likely candidate. I shouldn't go too far afield here because I will eventually get back to concepts that tend to cause pain and apathy; concepts such as having the renderer always render to the same exact location. Folks can't seem to get beyond that to be able to move on to the next stage, which takes away all the pain. This is my failure, not theirs.
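Here's a rough sketch of what that gap-filling step could look like outside of A:M (Python; the folder name and file naming pattern are hypothetical and would need to match whatever the renderer actually writes out):

```python
# Fill in held frames by copying rendered keyframes:
# keys on 1, 5, 9, 13 become frames 1-4, 5-8, 9-12, 13.
import shutil
from pathlib import Path

render_dir = Path("renders")        # hypothetical output folder
keyframes = [1, 5, 9, 13]           # frames actually rendered
last_frame = 13

for i, key in enumerate(keyframes):
    end = keyframes[i + 1] - 1 if i + 1 < len(keyframes) else last_frame
    src = render_dir / f"shot_{key:04d}.png"
    for frame in range(key + 1, end + 1):
        # Each in-between simply re-references (copies) its keyframe.
        shutil.copyfile(src, render_dir / f"shot_{frame:04d}.png")
```

An HTML page compositor, as mentioned above, could do the same job without copying anything at all: thirteen img tags pointing at only four PNG files.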
  10. This is a branch off of the other Rendering discussion that more directly relates to the camera and its settings rather than various rendering approaches and such. For instance, I don't ever recall using the 'Use for new cameras' setting on the Render tab. Perhaps I'm just very forgetful. The feature is highly useful for getting cameras to stay in sync with settings assigned in the Render Panel, or at least start from that base. If you've got any tips or suggestions or favorite approaches regarding use of the Camera in A:M I'd love to hear them. I'll add a few more myself along the way. Added Tip: Just below the 'Use for new cameras' option is 'Save upon Render'. In case it's not clear, that is 'Save Project upon Render' and not a means to save the camera settings upon render. Of course the camera and its settings are also saved along with a project, so that's covered as well. This is where we would turn that option off if we don't want a Project to be saved as we launch a render. Before we render we usually want to seat that data onto a hard drive and not have it only maintained in volatile memory. Without saving, if a render ever fails we may have lost all of our changes all the way back to the last point of saving.
  11. This is something I brought up at the very beginning: that with 3D rendering, and especially with 3D animation, the norm is going to be for things to change. But this does not mean that things (ultimately pixels) always change; in fact, very often on a frame by frame basis they do not change. This is more often than not a stylistic choice but also falls into the realm of just-in-time optimization. Example: If an animator is animating in a blocking methodology and is using a stepped mode approach where interpolations are held rather than eased in/out etc., we might know for a fact that certain frames do not change. And yet a renderer will approach each of those frames that are exactly the same as if they are entirely different; the opposite of optimization. We've got to admit that can represent considerable waste. Added: It is here that this investigation runs into differences between realtime and final rendering approaches... and the large swath of territory in between.
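One low-tech way to see that waste for yourself after the fact (Python; it assumes the frames already exist on disk as a numbered PNG sequence with a made-up naming pattern, and that renders are deterministic so duplicates really are byte-for-byte identical):

```python
# Hash each rendered frame; identical hashes mean the renderer spent time
# producing an exact duplicate of an earlier frame.
import hashlib
from pathlib import Path

seen = {}
for frame in sorted(Path("renders").glob("shot_*.png")):   # hypothetical names
    digest = hashlib.sha256(frame.read_bytes()).hexdigest()
    if digest in seen:
        print(f"{frame.name} duplicates {seen[digest]}")
    else:
        seen[digest] = frame.name
```

Note that multipass sampling noise or film grain would break exact matches, which is part of why 'two frames that are exactly the same' is rarer than it feels.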
  12. That certainly makes sense. There is an aspect of that which concerns itself more with accuracy than with prediction that I'm still not having sink in to my brain. Of course, I'm not sure how to put it into the language I need with terms and concepts that are agreed upon. As soon as I see the words, "The only test..." that rings bells in my brain, but that's because the equation is framed by the latter part of the statement, namely ..."ascertain if a pixel in frame 2 is exactly the same as in frame 1...". There are several constraints that may lock that statement down but they don't necessarily apply to the broader scope of the investigation. For instance, in that statement we have necessarily limited the equation to 'pixels' in order to better define that specific test. But even there I find a little wiggle room in that a test and a prediction aren't the same thing. The test only informs us about the accuracy of the prediction. So we might postulate that all pixels at a certain location are within a specific tolerance (without rendering) and then render it to assign a value to that prediction (presumably a value of 0.0 would be given to no correlation and a value of 1.0 would be assigned to a result that matched exactly).

With this in mind a ray cast down the stack of (potential) frames becomes just like launching rays toward the virtual objects in a scene. Predictive outcomes are postulated, but that prediction is updated as each new ray returns with new information. So in essence we may not need to know that some random pixel at a given x,y,z location in time and space is the same as another. It may be sufficient to know that one pixel in the upper left corner is the same or different. If different... that's important information that can be used later. If the same, that also is informative.

It may help to think of a ray shot into any given linear space as two ends of a spline (oh oh... here we go!) with two control points. Let's also predict even beforehand that we will eventually be using that spline to draw (or render) a plane that describes the journey along that spline (coupled with an array of other splines) through temporal space. (But more on that later.) There are specific details we can assign to this spline based upon where it is located and what is encountered. Let's say that each of these two control points is constrained to remain in the same relative location: they are exactly linear with no deviation. Note that before raycasting they occupy the same space on a single point, but as one of these control points is shot out it does not return (or terminate) until it hits something or has completed its mission. Because we might know beforehand there are 30 frames of time the ray will penetrate through, we can tell it to travel to any of those frames and then terminate. This can be useful if we want to target a specific frame (and not all frames or other frames). We might also set up a receiver (at frame 31) that catches the control point and announces the arrival. So a ray shot from frame 1 to frame 31 might encounter no resistance (i.e. no influential change) and the event would be tagged with a value of 1.0 to show that whole range is linearly the same.

Now... this gets crazy pretty fast so let's scale this back in a little. We may (theoretically) want to test for every possible value when we test and find that something has changed. (Think of testing millions of colors vs. just testing black vs. white... the latter is faster... the former mostly a matter of scale.) Think of this again as our spline traveling from frame 1 to frame 30 along a linear path, but then it encounters a change at frame 3. That's like hitting the Y key on a spline to split it in half and then moving that newly created control point to a position at similar scale. And interesting things begin to happen when we perceive that those changes indicate movement through time and space.

At this point it may be important to realize that without rendering anything we already have a large body of data available to predict how those frames will change. When we look at the channels of our Timeline they are all on display. So perhaps now we can better see how we can predict how pixels from any number of frames will be the same or different from other frames. And all of this is predictive before we render anything to disk, but also from the perspective of testing through the large number of pixels that have already been rendered to screen... often many times over and over again. That data from those real time renders is just thrown away... wasted (by the user and presumably to a very large extent also by the program beneath). And therefore our travels thus far will not be of much assistance in optimizing any 'final' rendering.

Note: A:M does allow users to save a Preview Animation directly onscreen via Right Click > Save Animation As. I'd like to be wrong here but I doubt many users take advantage of this. I know I haven't. Since starting this topic I have assigned a shortcut key combo of Alt + R to Preview Render and hope to use that more in the future. It's especially nice in filling an area where I thought A:M was lacking... cropping of imagery. I now see that I was mistaken and Hash Inc yet again anticipated my needs.
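To make the 'ray down the stack' idea a bit more concrete, here's a toy sketch (Python with NumPy; the frames are assumed to already be loaded as brightness arrays, and the tolerance value is something I made up, not anything A:M exposes). It fires one probe per pixel location through the stack and records the first frame where that pixel deviates from frame 0:

```python
import numpy as np

def first_change_map(frames, tolerance=1.0):
    """frames: array of shape (num_frames, height, width) of brightness values.
    Returns, per pixel, the index of the first frame that differs from frame 0
    by more than `tolerance`, or -1 if the pixel never changes (a fully
    'linear' ray that sails through to the receiver)."""
    stack = frames.astype(float)
    changed = np.abs(stack - stack[0]) > tolerance   # (frames, h, w) booleans
    any_change = changed.any(axis=0)                 # did this pixel ever move?
    first = changed.argmax(axis=0)                   # index of first True in time
    first[~any_change] = -1                          # never changed: mark as -1
    return first
```

A pixel that comes back as -1 is the equivalent of the control point that travels all the way to the 'receiver' at frame 31 without hitting anything; a pixel that comes back as 3 is the spline that picked up a new control point at frame 3.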
  13. Just in the process of opening this topic I feel I've gained some speed and insight into using A:M more effectively. Now, if I can just remember to keep using what we are learning.
  14. Another decision point in this is that of formats themselves... both the interim and final file formats. As an example, consider that frame 1 of a given sequence is 1 KB in size on disk. If all other frames in a 30 frame sequence were exactly the same as this frame, each frame would contribute an additional 1 KB to the whole sequence. This is of course where compression comes into play, and algorithms are quite good at recognizing patterns and encoding/reducing the footprint of duplication. I will assume that in most cases any codec and the optimizations associated with it are applied after rendering is complete, in order to take full advantage of pattern matching in the streams of bits and bytes. I'd like to think that these could occur simultaneously (or perhaps in parallel*) so that patterns matched in rendering can also apply to compression of data and vice versa.

This might go against the idea that it is always better to render twice as large than to render the same thing a second time, but it might also suggest a dual rendering system where a low rez preview and a full rez final render are launched simultaneously. The user would see the preview, which would render very quickly, while the full render ran the full course in the background. The low rez render could be updated periodically with data from the larger render as specified by the user in the preview settings (i.e. never, low quality, high quality, etc.). At any time the user could terminate the render and still walk away with the image created at that moment's resolution, a la a 'good enough to serve my current purpose' decision. The important thing here is to return the UI to the user as soon as possible.

All of this is not unlike A:M's Quick Render and Progressive Rendering directly in the interface. I wouldn't be surprised if it were a near exact equivalent. What's the difference, you say? Well, for one thing, all of the various compositing and rendering options cannot be found in one place. A case in point might be the difference between the basic Render Settings of Tools/Options and the Render Settings of Cameras. It is important that we have both... don't misunderstand me here. But it might also be nice to be able to change a setting in one place and know that change will be reflected in the other. That might be accomplished through an option to "Link all cameras to default Render Settings." This setting might be considered dangerous if it actually overwrote camera settings, so it would best be implemented by non destructive means. It might require the creation of a new default camera that is always coupled with Render Settings. Other cameras would then be either external or embedded and independent. It'd be interesting to diagram this out and see what success might look like.

There are a lot of moving pieces and parts involved in rendering. Optimal file formats and camera settings are two to consider prior to the outset of rendering. I must assume A:M's ultimate 'format' is one we never see rendered to image file or screen: the data stored to define the near infinite resolution of splines and patches. As for the camera... I'll have to ponder a little more about that.
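As a thought experiment, the 'preview now, full render in the background' idea could look something like this (Python sketch; render() here is a stand-in placeholder I invented, not an A:M API, and the 25% preview scale is arbitrary):

```python
# Kick off a fast low-res pass and a slow full-res pass at the same time;
# hand the preview back to the user as soon as it is ready.
from concurrent.futures import ProcessPoolExecutor

def render(width, height):
    # Stand-in for a real renderer call; here it just reports what it would produce.
    return f"{width}x{height} image"

def render_with_preview(width, height, preview_scale=0.25):
    with ProcessPoolExecutor(max_workers=2) as pool:
        preview_job = pool.submit(render, int(width * preview_scale),
                                  int(height * preview_scale))
        final_job = pool.submit(render, width, height)
        preview = preview_job.result()   # ready quickly; show this in the UI
        # The user could cancel here and keep the preview ('good enough').
        final = final_job.result()       # finishes later in the background
    return preview, final

if __name__ == "__main__":               # guard needed for multiprocessing on Windows
    preview, final = render_with_preview(1920, 1080)
    print(preview, final)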
  15. Yes, and that is also the point of my query here. I'm trying to improve upon that call to better understand how to plan and maximize compositing and rendering, especially given the tools available in A:M. My thoughts some time back led me to suggest a few render related UI enhancements which Steffen graciously implemented and we can use today. The first (or one of the first) in that series of optimizations was to have A:M save the project file prior to rendering... because, if they are like me, other users fail to do that. The underlying thought was that, prior to rendering, a great many settings were only in memory, and should some error occur anywhere during a potentially very long rendering session... all that data could vanish without a trace. So it makes good sense to save before we render. Step one accomplished (with minor adjustments later implemented to refine the exact moment where a project is saved). Another implementation allows users to see all options available (or nearly all) at the same time in the render panel. Steffen plussed this up with some additions of his own and it's a useful improvement.

There are other aspects of the saving of images that often elude A:M users to their own detriment, and you mention compositing. That certainly is an area where users can save time and perhaps even avoid rendering altogether (where images already exist). But that 'Save Animation As' option isn't featured very prominently, so it is likely many users don't take advantage of its capabilities. Many users resort to external programs because they don't know Animation:Master can perform a specific operation. It'd be nice if that were not the case. I personally have a tendency to lean toward external compositors, but that is largely due to dealing with images that already exist, such as scanned in drawings or digitally drawn doodles and animation. As A:M is not specifically optimized for compositing it makes good sense to composite those elsewhere. At the point where anything involved touches something created with A:M the consideration changes and the question is at least asked, "Can this operation be performed exclusively in A:M?". Sometimes it can.

Of the two options, compositing and rendering, compositing tends to take considerably less time. Therefore compositing should rank high on our list of considerations. A project solely created in A:M with no external references is a very good candidate for rendering (i.e. there's nothing to composite yet). With every rendering I am dealing with two entities that are largely clueless; me as the user and the renderer. The renderer gets a clue when we give it one (via user settings and project files or the underlying programming instructions). I can get a clue by learning more about how to make better and more timely decisions.
  16. Rendering presents a wide category of interests, but I'd like to discuss a few relative to what we have in A:M and what we can do as users to get the most out of that. While the discussion can certainly go far afield to include external rendering solutions, the focus here is on what we can do with the internal renderer. Netrender is also an important factor to consider, although for the purpose of this discussion I'd classify that as an external renderer also.

At the heart of my present thinking is the idea of duplicate frames in a sequence, and that may be where this topic starts and ends, because for all intents and purposes, in a purely 3D lit and rendered scene, there may technically speaking not be any such thing as two frames that are exactly the same. The norm is that some pixels will change in every image within any given sequence. A part of my thinking also rests in that I think I may roughly know how A:M renders when in fact I really do not. For instance, I am reasonably sure that pixels are sampled, targeted or measured from the POV of the camera and that data is then pushed into a file; an image. What is not clear to me is whether A:M does a similar test that traverses down the sequence of frames in order to determine what (if any) pixels have changed over the course of a stack of (potential) images. I must assume that A:M does this or can do this, and perhaps it most likely does this when Multipass is turned on. Once the data is read in there is much that can be done with it at little cost to render time because, in a way, almost all potentials are there already in memory... they just haven't been written (rendered) to disk.

Yet another part of this thinking is based on how we as users can optimize our own efforts so we aren't working against the optimizations of the renderer. A case to examine might be a 30 frame render of the default Chor where nothing changes. Frame 1 is exactly the same as frame 2, and frame 3, and so on. If A:M's renderer knew that they were all the same it might simply render the first frame and then duplicate all of the other 29 frames, theoretically saving a lot of render time. But there is not a lot of data that would inform A:M's renderer this was the case, other than the fact that there are no channels with animated movement to be found in the entire project. That would be useful information to pass on if it isn't already. Tests could certainly be run in order to educate ourselves, and there are other options we can pursue. For example, as users we might know that nothing has changed in a sequence so, using the case above, we might only render frame one and duplicate it ourselves via an external program (or in A:M via 'Save Animation As', which combines images together quite speedily). We might also decide that due to the style of our animation we want a more choppy or direct movement from frame to frame and so use the Step option to render out only every 5th frame in our sequence. We might then use an external program to pad duplicate frames into those gaps to save rendering time... or create a batch file or utility that simply copies frames and renames them to fill in the gaps for us. A specific case we might investigate would be a sequence where all keyframes of an animated sequence are keyed on fours (every fourth frame). If the interpolation between those keys is stepped (i.e. Hold) then every four frames is the same, so re-rendering the 2nd, 3rd and 4th frames might be deemed wasted rendering cycles.

But I don't think we have necessarily told A:M what to look for with regard to linear rendering of this sort. Most renderers are more akin to, say... Pixar's approach, where every frame is basically a new start from scratch with the data being processed all over again with little or no regard to adjacent frames; they very likely have never met or even seen their neighbors. My thought in this regard is that if we know we have a stack of images that are all 24x24 pixels, then we should be able to run a few probes through from frame 1 to frame 30 to sort (or extract) same or similar frames and optimize rendering. If the probes determine every pixel is different, then it might be assumed that another, fully nonlinear approach to optimization can and likely should be used. If, however, the probes find similarities, then the frames are sorted according to how similar they are and... I would imagine... the two frames that are most different... on opposing sides of the spectrum... are rendered first. The easiest to render might be used to create the initial preview image while the most difficult would get the lion's share of the computer's resources reserved for it, because all other frames might then use it as a reference point. I know my thoughts are entirely too naive and also that A:M already does a wonderful job of crunching data and spitting out images. I am just trying to better understand how I can best interface with it all. Any thoughts?
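Purely to illustrate the 'probe and sort' idea (Python with NumPy; everything here, from the random sample points to the drift score, is invented for the example and is not how A:M's renderer actually works):

```python
import numpy as np

def probe_similarity(frames, num_probes=16, seed=0):
    """frames: (num_frames, height, width) array of brightness values.
    Sample a handful of fixed pixel locations in every frame and score each
    frame by how far those samples drift from frame 0. Returns the per-frame
    drift and a suggested render order (most different frame first)."""
    rng = np.random.default_rng(seed)
    n, h, w = frames.shape
    ys = rng.integers(0, h, num_probes)
    xs = rng.integers(0, w, num_probes)
    samples = frames[:, ys, xs].astype(float)        # (num_frames, num_probes)
    drift = np.abs(samples - samples[0]).mean(axis=1)
    # Render the most different frame first; the rest, sorted by similarity,
    # could reuse it (or frame 0) as a reference point.
    order = np.argsort(drift)[::-1]
    return drift, order
```

If every probe comes back different, fall back to rendering everything from scratch; if they all come back identical, frame 1 plus 29 copies may be all the work there is to do.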
  17. I thought this one would look like fall leaves when rendered... It didn't. (The orange thing is the one that appeared in the Chor window)
  18. Because the swatch is a grid... it doesn't have to be flat. Not exactly sure what this thing is.
  19. Same basic thing... just more leafy... I see some patterning in there, so it's best to try to remove that.
  20. Sometimes what you get in a quick render within the Chor window is very different than the final render. Perhaps that is because it might use the render properties in Tools/Options? This is the same model as the one on the right above and what it looks like before rendering: Kinda interesting.
  21. These were interesting... I was seeing if I could get the grass to look like trees without adding a tree image to the hairs.
  22. Today's Live Answer Session had me itching to play with particle hair again, and I wondered how close I could come to some little swatches of fake terrain that yet another kid I've been working with had created. Among other things she enjoys creating dioramas and I have plans to use some of those as backdrops in projects. In the course of creating her dioramas she made me a few tiny cardboard test swatches that I thought would be perfect for scanning into the computer. So, I launched out on a journey to create some particle grass swatches. Attached is an image of the real world swatch (on the left) and my pitiful attempt to recreate it (on the right). Still, I thought it worked out pretty nicely. I've attached a second image that has a number of other patch swatches I put together to remind me of the various parameters used with particle hair. One thing I learned is that it's very easy to overcomplicate particles; if you're not careful, pretty soon layers upon layers of settings accomplish nothing. And... it's always nice if we can remember the settings we used so we can recreate or recognize them later. As such it's always best to keep things as simple as possible. I was pretty heavily into it before I realized I should be rendering with alpha channel transparency and with an orthogonal camera if I ever decided I wanted to patch together all the various digital swatches.
  23. Back in those days updates were for a *calendar* year as opposed to the current method of one year from the time of purchase. Because the CDs were the primary source of digital security they had to cut off somewhere, and that was officially 31 Dec of each year. Occasional (and often many) updates would continue to be posted online that referenced the CD until such a time as a new CD was sent to press, and then the process would start all over again. The current cycle is better... although the criticism would be that it isn't as permanent. Updating a subscription ($299 initial purchase) used to cost $99, so we do save $20 on each upgrade, and more when we realize we don't have to pay the initial or even the trade show discounted price ($199). I'm moving this topic over to the Animation:Master forum so... look for it there... er... here.
  24. I hesitate to post because all I want to say is... "I need that train model!" That is a great looking machine. In my 'Tuckertown' topic and on my blog I have a few posts related to a project where I'm working with a young boy to realize his story about a train robbery. It's fun and every time I see him I try to have some kind of update for him. I won't ask for the model BUT can I have your kind permission to show him the picture of your train? That will show him the possibilities and definitely make him happy. Perhaps we could even work a deal to have his smaller train... roll through a station directly in front of yours. We'd gladly pay you 25 cents for the image.** (He has a very small budget.) His name is Jeremiah. At any rate, you know I'm a huge fan of your work. You still know how to lay down some good spline. Keep it up! **It'd be good for him to learn he has to pay for good quality production assets or build them himself. And thinking along those lines... if you want to sell a copy please name your price and I'll pass that on.