Hash, Inc. - Animation:Master

Rodney
Admin
  • Posts: 21,597
  • Joined
  • Last visited
  • Days Won: 110

Everything posted by Rodney

  1. Random (Baby) Groot... I make no claim this looks much like Groot as I was doodling from memory and a general feeling. One of these days I'm going to start actually refining my doodles with reference but that can take a little of the fun out of it and the random doodles are mostly just to keep the muscle memory of modeling and to explore how to quickly and more optimally put splines and patches together.
  2. I'd say more like 6s. In the ball bounce there are four poses with one repeated (the stretch) and the animation is 30 seconds long so... 300/5=... yeah, I'd say 6s. I made no attempt to inbetween the poses; I just stretched the keyframes out to 3 seconds. Each pose is a different instance of the model. In the case of Eddie Jumping (which definitely needs to be inbetweened) I see seven poses. 3 seconds (roughly) @30FPS/7 = just shy of 5, so that'd be equivalent to animating on 5s or 6s. Like I said though, the idea is just to get the main poses into place quickly. Those could either just be used as reference for 'pure' animation or animated straight ahead (and backward as necessary) from each pose.
     One area of R&D I need to explore is that of taking two models and finding an optimal (even automated) way to get at an inbetween model. The inbetweens would ideally be of the 1/2, 1/3, 1/4 and 1/5 order to allow for slow in/slow out acceleration into and out of the key poses**. While straight inbetweens can certainly be automated, the breakdown pose isn't something that generally can (or should) be automated, because that route... the decision of where/how to pose the breakdown... is what drives the entire performance. An automated breakdown will often equate to a boring performance with little or no character/personality.
     **The concept of slow in/slow out (ease in/ease out) is something many animation packages have incorporated to automatically give a sense of life/motivation to otherwise uninteresting (unmotivated) action. This is not 'animation' per se but it does add a more realistic sense of movement... of influence... to the affected objects. Adding acceleration/deceleration into and out of a pose almost always improves a shot because it adds change to an otherwise static sequence. (A small sketch of the eased-versus-linear idea follows below this post.) Of course it goes much deeper than just getting that sense of change, but the changing of shapes is what animation is all about. If nothing changes, there is no story being told. What 'animators' do beyond automation and mechanical inbetweening of shapes, however, is give a sense of life to a per-form-ance. That illusion of life emanating from inanimate objects has much to do with motive/motivation... cause and effect... forces... what is compelling the object or character to move?
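     A minimal sketch of the slow in/slow out idea referenced above, written in plain Python rather than as anything A:M does internally; the pose values and the smoothstep curve are assumptions chosen only for the demonstration:

        # Compare straight linear inbetweens with eased (slow in/slow out) inbetweens
        # for a single pose channel. Values are arbitrary demo numbers.

        def linear(t):
            # constant speed: evenly spaced inbetweens
            return t

        def ease_in_out(t):
            # smoothstep: inbetweens bunch up near the keys, giving slow in/slow out
            return t * t * (3.0 - 2.0 * t)

        def inbetween(pose_a, pose_b, t, curve=ease_in_out):
            # blend one channel value between two key poses at parameter t in [0, 1]
            return pose_a + (pose_b - pose_a) * curve(t)

        for i in range(6):
            t = i / 5.0
            print("t=%.1f  linear=%5.2f  eased=%5.2f"
                  % (t, inbetween(0.0, 10.0, t, linear), inbetween(0.0, 10.0, t)))

     The eased spacing is the part many packages apply automatically between keys; the breakdown pose itself, as noted above, still has to be chosen by the animator.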
  3. I think we'd have to know more about the target effect. It'd be pretty straightforward to set up a system where frames are rendered out and then those frames get duplicated so that the animation is exposed on 2s, 3s, 4s or whatever to achieve a desired look (a rough sketch of the frame-duplication idea follows below this post). Doing this in other software such as OpenToonz would make quick work of that, but A:M can also be set up to achieve something similar with a bit of work. Once the setup is saved then it's mostly a matter of keeping that project handy so that we don't have to set everything up from scratch again. And there's the thing. Almost any 'look' can be achieved. It's mostly a matter of research and development and then someone making the determination of when to say 'good enough'. Something I like about this particular approach (each frame or series of frames derived from a different instance of the same model) is that once a 'golden' pose is achieved it is pretty well locked in. Working normally I always have to be careful not to accidentally undo something I did before that I wanted to keep.
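     For what it's worth, here is a hypothetical sketch (outside A:M, in plain Python) of that frame-duplication step: every rendered frame is copied so it is held for 2, 3 or more output frames. The folder names and file pattern are made up for the illustration.

        # Duplicate rendered frames so the sequence reads as being 'on 2s', 'on 3s', etc.
        import shutil
        from pathlib import Path

        def expose_on(src_dir, dst_dir, hold=2, pattern="frame_{:04d}.png"):
            # copy each source frame 'hold' times into the destination sequence
            frames = sorted(Path(src_dir).glob("*.png"))
            out = Path(dst_dir)
            out.mkdir(parents=True, exist_ok=True)
            n = 0
            for frame in frames:
                for _ in range(hold):
                    shutil.copy(frame, out / pattern.format(n))
                    n += 1

        # expose_on("render/on1s", "render/on2s", hold=2)   # exposed on 2s
        # expose_on("render/on1s", "render/on3s", hold=3)   # exposed on 3s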
  4. That's a good call. And... Thanks for the wireframe! Nice. Nice. Very nice.
  5. Same premise as demo'd by Eddie:
  6. Here's a quick test of a stop motion approach to animating that uses... no animation... just different (instances of) the same models. (Those instances can then of course be adjusted/animated) Click to play gif animation. StopMoStyle.prj
  7. This handy addition to A:M adds some new capability well above and beyond older methodologies of copy/paste and other means of duplication.
     Basic Usage: Select an Object in the Chor, Right Click and select 'Duplicate'. This results in a duplicated copy of the selected Object. One way to see how useful this feature can be is to duplicate a Light. Duplicating a Light in the Chor is useful in several ways. For instance, we might want one light to be responsible for Shadows Only. Another Light might be designed to capture a particular look and feel (and therefore only be activated when that look is needed). One Light (or Object) might be almost perfect in every way but the user may want to explore other possibilities without losing what they have already. Starting from a known location and with another Object's settings can be a good way to proceed. In this way, Objects in a Chor become self-documenting and non-destructible. There is no reason to lose progress already gained.
     Objects that can be Duplicated: Models, Cameras, Lights. Duplicating a camera in its exact location with the same settings can be advantageous when tweaking only a few minor changes. Then switch between cameras to use the desired settings.
     Objects not included in the feature: Forces, Nulls, Motion Capture, Material Effects, Flocking, Spring Systems.
     Other/Unknown/Feedback: Layers. *I note the Duplicate option appears for Layers but if invoked the Layer isn't duplicated.
     Workaround: An alternative method of duplicating resources (including those that do not have the 'Duplicate' feature) is to drag and drop the resource back onto the Choreography container in the PWS. In most cases the Control key must be held down while dragging and dropping. This also results in a duplicated copy. Note: Various means of duplicating these already exist, including saving the resource and then importing it back into A:M.
     More complex usage: Steffen has gone beyond the basics of the initial feature to add a Shift/Duplicate option that doesn't copy keyframed settings. This is very useful and although I haven't explored this aspect of the feature in depth I can readily see it being especially useful for reverting back to the previous/original states of altered/animated properties. Here's a quick way to explore the possibilities:
     - Import Rabbit into a Chor and crack open his User Properties
     - Slide the 'Dynamic Setting' to the far right and note how Rabbit's pose changes
     - Now (with Rabbit selected) Right Click and Duplicate
     - Note that a new instance of Rabbit is exactly duplicated in the Chor (unless/until altered it will remain exactly the same)
     - Now Duplicate Rabbit again but this time with the Shift key held down
     - Note that the new instance of Rabbit is created in his default "T" pose
     This should demonstrate how the use of Right Click/Duplicate (and Shift/Duplicate) will be useful in animation workflow. Need to get that cartoony look of arms flailing wildly? Right Click/Duplicate and adjust an instance of the Model as necessary. Want to use a 'stop motion' approach to your animation? No problem. Duplicate your model and move it into its next position. Duplicate. Adjust. Duplicate. Adjust. Then animate the Active setting of the Model instances to turn each 'pose' on/off. This animation was made using the Right Click/Duplicate methodology:
     Thoughts going forward: This feature is even more useful than originally anticipated. Thanks Steffen for plussing up the idea!
  8. Thanks Steffen, I need to learn to read the change log more carefully!
  9. Okay... An uninstall and a reinstall and I've got it. It may have been there before (under the Import dropdown) as opposed to where the AI plugin used to be. I suspect user error BUT the important thing is that it is now working. Woohoo! Thanks!
  10. I got the AI plugin to appear again (although I don't know where that plugin actually is, because there is no .htx file named AI.htx in any of the install folders). I think I shall do a full uninstall and then reinstall.
  11. First, THANKS for the release of Beta 2. I've been anxiously looking forward to checking it out. With the Beta 2 installer I'm having problems with the new SVG plugin showing up. The AI plugin is missing in action too, but I'll copy that over from the previous installation and hope that it works. I have a copy of the SVG plugin in my HXT folder and Tools/Options/Folders appears to be pointing to the correct location... but the plugin still doesn't appear in the dropdown menu. I installed the 32bit release as well and same thing there. Disclaimer: I hastily downloaded and installed last night right before heading out the door to work, so user error is certainly possible. Update: The AI (Adobe Illustrator) and SVG (Scalable Vector Graphics) plugins are now accessed via the Import menu.
  12. You are going down that... 'now why didn't I think of that' road. Outstanding characters. Characters with character even. There is only one thing more that I can say at present... Wireframes please!!!
  13. Looks good from here Simon. Just in case you are wondering... I don't often comment on things that look good enough to go to the next stage. Some folks prefer deep feedback and others look for encouragement but I struggle a bit with this... it sounds silly but... I guess I want what I write to be read. Yes, I read too much into this but some people don't care for 'Looks good.' commentary and some desire in depth feedback, analysis, etc. etc. even to the point of discussing the tiniest detail. But we can also over analyse a shot to the point where nothing ever gets finished. We definitely don't want to do that. I'll have to look at the forum's 'Like' system and see if it could be tweaked for use with a method to quickly suggest a thumbs up/green light. Then it might become mostly a matter of looking for those cues from people whose feedback you consider sufficiently in line with your expectations to commit to moving forward. Bottom line: You are making impressive progress with this project and I enjoy seeing your updates. That's why we have this WIP section. Keep it up!
  14. That is definitely promising.
  15. That shouldn't be the case. We (as a community) will have to stare at that and see if there is something we can do to assist. Heck, being from Portland Oregon... as close as you are you should be able to drive over to Jason's house and ring his doorbell!
  16. Awesome! It's great to have you back.
  17. Yes, definitely ping jason: jason@hash.com He was out SuperBowling at the time you put in your customer support query. (He hasn't been seen in the wild since so... hope he is recovering.)
  18. I'm posting this mainly as a reminder to follow up on this. Attached is a comparison of two sound files: on the bottom, the original "I've got a Secret" audio and (above) a quickly edited version that accomplishes a few things:
     1) Removal of unwanted noise
     2) Adjusted timing
     3) Adjusted volume*
     4) Separation/removal of sounds/sound bites on a frame by frame basis
     5) Keyframe and spline-based refinement (Free!)
     *Appeared to work well but needs more testing.
     Not yet explored in any depth but available for use:
     - Automatic Time Stretching (All or by Selection)
     - Integration with Papagayo and Magpie Pro
     Also attached are the original and updated wav files: asecret_old.wav asecret_new.wav
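     As a rough illustration only (not how the edit above was actually made), two of the listed adjustments, volume and timing, could be sketched in Python on a 16-bit wav like the attached files; the gain, silence threshold and output file name below are arbitrary assumptions.

        # Apply a simple gain and trim leading/trailing near-silence from a wav file.
        import numpy as np
        from scipy.io import wavfile

        rate, data = wavfile.read("asecret_old.wav")      # assumes 16-bit PCM input
        samples = data.astype(np.float32) * 1.5           # 3) adjusted volume (gain)

        # 2) adjusted timing: drop near-silent samples at the head and tail
        level = np.abs(samples) if samples.ndim == 1 else np.abs(samples).max(axis=1)
        loud = np.flatnonzero(level > 500)
        if loud.size:
            samples = samples[loud[0]:loud[-1] + 1]

        wavfile.write("asecret_edit.wav", rate,
                      np.clip(samples, -32768, 32767).astype(np.int16))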
  19. Impressive results. That looks a lot like hand drawn outlining and shading.
  20. Simon, There are a lot of resources but here's one that covers normal maps while comparing them to other mapping techniques (bump, displacement), although primarily for gaming: https://www.youtube.com/watch?v=SQrHkKnSBcA
     Normals indicate the direction of a surface. Normal maps capture this directional information and adjust the surface characteristics of an object at render time. This data can then generate surface and lighting effects. In this way a flat 2D plane can appear to have shape/depth in 3D space because of how normal maps adjust color and shading (a small sketch of that idea follows below this post). Normal Maps can be rendered directly in A:M by using the Normal Buffer. While not required, using an image format like .EXR can be beneficial because it can store data that other image formats cannot.
     Technical aside: Normal maps can be used with A:M particle hair to orient entire areas and even individual strands in desired directions.
     In Rodger's case he mostly uses Normal Maps to gain additional definition and detail that would take a considerable number of splines to display. A metal panel with thousands of rivets might be a very dense mesh if every rivet were modeled, but if each of the rivets is created via a Normal Map then all of those rivets can be displayed on a single patch.
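     A minimal Python sketch of what a normal map encodes and why it changes shading. This is not A:M's renderer; it is just a plain Lambert diffuse calculation on two made-up texel colors (a flat patch versus a tilted 'rivet' edge).

        # Decode tangent-space normal-map colors and shade them with simple diffuse lighting.
        import numpy as np

        def decode_normal(rgb):
            # map an 8-bit normal-map color [0..255] back to a unit direction [-1..1]
            n = np.asarray(rgb, dtype=np.float32) / 255.0 * 2.0 - 1.0
            return n / np.linalg.norm(n)

        def lambert(normal, light_dir):
            # diffuse term: how strongly the (perturbed) surface faces the light
            l = np.asarray(light_dir, dtype=np.float32)
            l = l / np.linalg.norm(l)
            return max(0.0, float(np.dot(normal, l)))

        flat_texel  = (128, 128, 255)   # 'straight up' normal: an unperturbed flat patch
        rivet_texel = (180, 128, 230)   # tilted normal: the edge of a painted-in rivet
        light = (0.3, 0.4, 1.0)

        print("flat patch shade:", lambert(decode_normal(flat_texel), light))
        print("rivet edge shade:", lambert(decode_normal(rivet_texel), light))

     The two texels sit on the same flat geometry; only the stored direction differs, which is why a single patch can read as thousands of rivets.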
  21. Ah! Thanks for that! I will try to digest that and put it to good use. S'funny, I was just going through your list of video tutorials, posts and experimentations and didn't see that one.
  22. Because this may need a little research and I don't want to waste live Live Answer Time resources I'm posting this here. I may ask this (or a related question) during a future session unless I stumble upon a proper solution before then.
     The setup: Randomizing an initial setting. Given a model placed in a choreography that moves from left to right (in the simplest manner possible for demo purposes): how do we set up the model so it will appear at a random height each time the cycle repeats?
     - The model should begin at the random (seeded) height then move at that same height/trajectory horizontally.
     - When the cycle begins again the model should appear at a new random height and move horizontally at that height.
     Note: I attempt to use an expression for the random height as that is the only way I know to introduce randomness. (A rough sketch of the seed-per-cycle idea follows below this post.)
     For illustrative purposes, success might be demonstrated by a Gopher (single model) who randomly pops its head up through a number of holes in the ground, appearing in a random hole each time the animation cycles. This would be easy enough to animate manually but for our purpose it helps narrow the scope of experimentation. I am guessing that rather than see this randomness reset 'live' with each cycle it will be best to extend the length of the sequence to account for the number of 'random' appearances and then key the random element... but that isn't the imagined 'best case' scenario.
     Need a different (perhaps slightly simpler) scenario to consider? How about a bullet casing (single model) that randomly appears at the ejection port of a gun and spins outward from there, making long cycles of the animation appear to eject dozens/hundreds of bullet casings. I realize here we may be approaching a solution where particle images (displaying a sequence of images... the bullet spinning) might work well. In the case (pun intended) of the bullets, the origin/location of the initial appearance of the casing is only slightly offset/random. This randomness could easily be achieved by animating the location of the bullet/model at the start of each cycle. This forms an acceptable solution, i.e. hey, works for me! At any rate, perhaps something to consider.
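     Not an A:M expression, but here is a small Python sketch of the behaviour being asked for: a height that is random yet constant within a cycle, obtained by seeding a generator with the cycle index. The cycle length and height range are arbitrary assumptions.

        # A value that is random per cycle but stable within the cycle.
        import random

        CYCLE_FRAMES = 60          # length of one left-to-right pass (assumed)
        MIN_H, MAX_H = 0.0, 10.0   # allowed range for the random height (assumed)

        def height_for_frame(frame):
            # same height for every frame inside a cycle; a new one each time it repeats
            cycle_index = frame // CYCLE_FRAMES
            rng = random.Random(cycle_index)   # deterministic seed per cycle
            return rng.uniform(MIN_H, MAX_H)

        for f in (0, 30, 59, 60, 90, 120):
            print("frame %3d -> cycle %d, height %.2f"
                  % (f, f // CYCLE_FRAMES, height_for_frame(f)))

     The same seed-by-cycle trick would cover the bullet-casing offset: the value stays fixed for the life of one casing but varies from cycle to cycle.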
  23. Nice!!! I think that may be the version I saw demo'd at Las Vegas (NAB computer show) by the Hash Inc crew that convinced me that I wanted A:M. Grabbed a demo video (VHS) instead and watched and watched that thing until it almost fell apart. It'd be another four/five years before I actually bought the program, which sounds about right ('98).
  24. Done. Link to this topic posted in the CQ Models section. I did a little stop-mo-type animation with the face I built from the template. Wasn't sure it was worth posting but... it was a bit of fun creating so... here it is.
  25. hehe. That definitely reads as Monty Python.