Hash, Inc. - Animation:Master

rickh (A:M User, 456 posts)
Posts posted by rickh

  1. Richard, do you think that your compensate plug-in can be incorporated into the install rig plug-in? Or have the install plug-in execute the compensate plug-in when it's finished its task?

     

    I was considering that. I definitely could do it.

     

    Initially it will be a separate plugin and we will see how well the concept works first.

     

    The thing is that it will be a useful tool while you are building a new rig for a model, and that might mean either leaving it as a separate plugin, or combining it with the Install Rig plugin and adding an interface window to it. I definitely like the idea of minimizing the number of plugins - too many is just plain confusing.

     

     

    Richard Harrowell.

  2. David and Mark,

     

    This is the logic I was considering for my plugin:

     

    If an offset has been set on a constraint - even if it is currently a zero offset (i.e. the offset numbers show up in bold type) - and the offset is a constant value, I will recalculate the offset.

     

    If there is no current offset, I will assume that I should not calculate this offset. I am assuming here that there will be some constraints that are never meant to have offsets. For example, if you want the eyes to aim at a target null, you probably really do want the eyes aiming exactly at the target null.

     

    Finally, if a constraint seems to have an animated offset, I will not touch its offset values.

     

    So in practical terms, this would mean that in the Squetch rig, someone would need to identify which constraints need offsets, and make sure a value has been entered into the offset properties. In many cases, this will mean typing in offset values of zero.

     

    Does that sound reasonably compatible with the Squetch rig?

     

    Constraints do not have custom names, so I cannot use naming conventions to control which offsets get calculated. If the method I have proposed is no good, I do have an alternative method which would be a text file listing constraints that need to be calculated.
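
    In rough pseudocode - the types here are illustrative stand-ins, not the actual A:M SDK - the proposed rule looks like this:

    ```cpp
    // Illustrative stand-in for what the plugin would read from each constraint.
    struct ConstraintInfo {
        bool hasOffsetProperty;  // an offset has been set (shows in bold), even if zero
        bool offsetIsAnimated;   // the offset varies over time
    };

    enum class OffsetAction { Recalculate, Skip };

    // The selection rule described above.
    OffsetAction classify(const ConstraintInfo& c) {
        if (!c.hasOffsetProperty)
            return OffsetAction::Skip;     // e.g. eyes aimed exactly at a target null
        if (c.offsetIsAnimated)
            return OffsetAction::Skip;     // never touch animated offsets
        return OffsetAction::Recalculate;  // constant, user-entered offset
    }
    ```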

     

     

    Richard Harrowell.

  3. Well, it's a little early to release, but I had a request to post it. The fixes for the issues I had with this method will be in the next release of v13 (Monday). V14 Alpha 6 has the fix for the roll issue on export and for importing models with materials, but a new issue has cropped up: images for decals do not import (the decal and stamp are intact). This can be worked around by opening the model first to bring the images into the project, then importing the model into the installation rig.

     

    They are fixing the "Export to Model" from an Action!

     

    That is huge! Thanks Mark and thanks Hash.

     

    I will be interested to see what you have done, because I now have to do something similar for the rig we are developing.

     

    My target for a test version of a plugin to automatically set all the constraint offsets is Monday. This would eliminate one of the most difficult "next steps" in installing this rig. I can see the day very soon when a rig skeleton can be installed comfortably in 10 minutes. I will have to try and find someone who can compile the Mac version of the plugin.

     

    Weighting, smartskin, face poses - no easy way out there yet. Done really well, that can be two days' work or more.

     

    By the way, Mark, let me know if there are any features you need added to the Install Rig plugin. Noel was kind enough to send me a copy of the source, so I can add features if they are needed.

     

    Richard Harrowell.

  4. btw, Rick, OpenEXR is directly supported on the latest nVidia graphics cards. They helped ILM to develop it.

     

    I am not sure it does anything that will help us, unless you are going to get into very serious programming. To access the nVidia features, you have to use libraries that are currently only in the nVidia SDK (several hundred megabytes to download). I am not sure nVidia licensing lets you distribute the libraries, so I think everyone has to download the ridiculous SDK.

     

    As a result, most programmers do not use the nVidia enhancements.

     

    Richard Harrowell.

  5. Hi Richard.

    I have just installed XNView and I cannot find an option to import EXR files (only HDRI). Are you sure it works or did I miss something?

     

    Same here. Perhaps a plugin needs to be installed?

     

     

    From memory, there was something that had to be done. Do you have a file IlmImf.dll in the plugins folder? If you do, then you have OpenEXR support in XNView.

     

    Have you gone to Tools->Options->Associations and ticked "OpenEXR" (if you use "view as name" mode) or "exr" (if you use "view as extension" mode)?

     

    Richard Harrowell.

  6. Like any image format, OpenEXR can be supplanted when something better comes along.

    TIF images, for instance, can do much of the same thing as EXR, and in some ways more.

     

    OpenEXR is much better for the purposes of film/animation than TIFF. Even setting aside the huge compatibility problems of TIFF, TIFF cannot handle multiple layers and it cannot handle 16 bit floats (which is all you need for animation or movies).

     

    TIFFs have things like 32 bit floating point channels and 64 bit integers, but so what? Nothing around can seriously make use of that resolution. You just end up storing two or four times as much data, but no extra content.

     

    All of the OpenEXR compression formats are lossless, or close enough to lossless that it doesn't matter - again, this is perfect for renders or post-production intermediates.

     

    I think it is very lucky there is at least one good image type. OpenEXR is a format crafted to precisely fit the needs of movie and animation work, so why would you want to look to other formats, particularly closed-source proprietary formats (like all the alternatives are)?

     

    Richard Harrowell.

  7. Fusion 5.1 directly supports A:M OpenEXR files.

    Add a loader and then click on the format tab.

    Under the RGBA tabs, select the channel that you wish to use, e.g. AmbienceR, AmbienceG, AmbienceB and AmbienceA.

    For the next layer of the OpenFX Tab add another loader and then select the Keylight layers in the same manner.

    Works like a charm.

     

    Fusion can actually read multilayer OpenEXR? That is very useful to know.

     

    Definitely - very few programs even attempt to read multilayer OpenEXRs at the moment.

     

    Richard Harrowell.

  8. I assume/hope the alpha channel is retained in the conversion from OpenEXR to Targa?

    (Easy enough to test)

     

    If you convert with XNView, the alpha channel is preserved.

     

    I agree that the benefits of OpenEXR are many.

    Time to leverage that I think.

     

    This then begs the question... why not use EXR format for TWO?

     

    Probably too many old 8 bit graphics apps. The trouble is that all the graphics plugins were written for 8 bit, and it will take time for everything to be replaced with floating point versions. So with TGAs, you can put renders straight into VirtualDub and you have temporal filters, blur, crop - you name it. Same with lots of the favourite Photoshop plugins that people have collected over the years.

     

    Even if I was going to use 8 bit programs, I would still render in OpenEXR and then convert the renders in XNView.

     

    Sorry to Mac users if I keep mentioning all this Windows software, but you are lucky enough to have Cinepaint available - OpenEXR support, layers, a full set of floating point image tools, frame buffer, scripting, cloning between different frames, etc. It's going to take ages for the Windows version of Cinepaint to become usable.

     

    Richard Harrowell.

  9. Increasing numbers of programs support OpenEXR nowadays, and at the high end, most can support it.

     

    The problem is that when multiple layers are used in OpenEXR, there is no standard for how the multiple channels are defined and so most programs that support OpenEXR will read the top layer and ignore everything else.

     

    A:M outputs 16 bit channels, but it is important to make it clear that these are 16 bit floating point numbers, which means 10 bits of actual colour resolution per channel, one sign bit, and five bits for the exponent. That probably really is sufficient for most purposes - especially if the target is the DVD format with its 8 bit resolution per channel.
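
    For reference, here is how those 16 bits unpack - a minimal sketch of the standard half-float layout (1 sign bit, 5 exponent bits with a bias of 15, 10 mantissa bits), not anything A:M-specific:

    ```cpp
    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    // Decode an IEEE 754 half-precision bit pattern:
    // 1 sign bit, 5 exponent bits (bias 15), 10 mantissa bits.
    double halfToDouble(uint16_t h) {
        int sign = (h >> 15) & 0x1;
        int exp  = (h >> 10) & 0x1F;
        int man  = h & 0x3FF;
        double s = sign ? -1.0 : 1.0;
        if (exp == 0)   // subnormal: no implicit leading 1
            return s * std::ldexp(man / 1024.0, -14);
        if (exp == 31)  // infinity or NaN
            return man ? NAN : s * INFINITY;
        return s * std::ldexp(1.0 + man / 1024.0, exp - 15);
    }

    int main() {
        std::printf("%g\n", halfToDouble(0x3C00));  // 1.0
        std::printf("%g\n", halfToDouble(0x3555));  // roughly 1/3, to 10-bit precision
    }
    ```

    The five exponent bits are what buy the huge dynamic range; the ten mantissa bits are the "actual colour resolution" mentioned above.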

     

    I would always render out to OpenEXR now instead of TGAs, because with OpenEXR it takes the same time but you definitely get more out of the rendering time. If I need TGAs, then I can always get A:M to convert them, or use the free XNView package to batch convert them to any other 8 bit format in an instant.

     

    XNView is like IrfanView, which many people may know, but it has the extra benefit of supporting OpenEXR, which is something IrfanView cannot do. There is of course only one file manager in the world - Total Commander - and since it supports XNView integration, coping with OpenEXR images is a breeze.

     

    I do think a simple batch utility that can extract layers from an A:M OpenEXR light buffer render to separate OpenEXR images would be very useful, and when I finish some plug-in programming, I will give it a go.
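
    The layer bookkeeping itself is straightforward, because the OpenEXR C++ library (the same IlmImf code everyone links against) exposes it directly. A minimal sketch that only lists each layer and its channels - a real utility would go on to copy each layer's channels into a new single-layer file:

    ```cpp
    #include <ImfChannelList.h>
    #include <ImfInputFile.h>
    #include <iostream>
    #include <set>
    #include <string>

    int main(int argc, char** argv) {
        if (argc < 2) { std::cerr << "usage: exrlayers file.exr\n"; return 1; }

        Imf::InputFile file(argv[1]);
        const Imf::ChannelList& channels = file.header().channels();

        std::set<std::string> layers;
        channels.layers(layers);  // collects the layer name prefixes

        for (const std::string& layer : layers) {
            std::cout << "layer: " << layer << "\n";
            Imf::ChannelList::ConstIterator first, last;
            channels.channelsInLayer(layer, first, last);
            for (Imf::ChannelList::ConstIterator i = first; i != last; ++i)
                std::cout << "  channel: " << i.name() << "\n";
        }
    }
    ```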

     

    One of the beauties of OpenEXR is that everyone using the format is using the same OpenEXR source code or libraries, so the compatibility of the format between different applications that support it is absolutely excellent. Everyone usually supports every one of the compression formats. Typically, OpenEXR support by older 8-bit-only graphics applications never works too well, but the newer packages that do support floating point color channels, or at least 16 bit integer channels, usually support single layer OpenEXR very well. Most of the "compatibility" issues I have seen with OpenEXR revolve around 8 bit graphics apps that choose to convert from the 16 bit floats to 8 bit integers in very odd ways - perhaps they are trying to compress the full dynamic range of the OpenEXR images down into the 8 bit integer range, which results in washed-out looking images.

     

    Edit in floating point and the problems are solved.

     

    Every one of the OpenEXR compression formats is completely lossless for A:M renders - even the lossy option in OpenEXR is lossless for the A:M resolutions. This means you cannot really go wrong with the format.

     

    Also, OpenEXRs make fabulous decals for A:M to use, especially for displacement and bump maps. If you are getting A:M to generate decals for A:M to use, then OpenEXR is the only way to go. It is not just the extra 2 bits of real resolution, it is the fact that A:M actually does properly support the floating point displacements, so you can get displacements much bigger than 1.0 (the equivalent of 255 in a TGA channel).

     

    It really is a superb format for movies and animation. It doesn't have the compatibility problems of, say, the TIFF format. The TIFF format does have almost every capability you can imagine (except for multiple layers), but how many programs can actually understand all of those capabilities? The negatives of using OpenEXR really revolve around the older generation of 8 bit graphics programs and low end 8 bit video programs that do not support it, and that is only a short term problem. It is an open format, and most of the alternatives are actually not open formats. The proprietary formats often have licensing issues and limited flexibility, and since everyone is writing their own different libraries, there are real compatibility problems.

     

    I think OpenEXR is an inspired choice of format by Hash, and within a couple of years I think the choice will prove to be absolutely correct.

     

    Richard Harrowell.

  10. If Martin/Hash no longer wish to support Netrender, is it practical to open source it - possibly built/re-built from scratch?

     

    Cheers

     

    A more practical alternative to Netrender may be to add something to the standard A:M program to allow it to be remotely controlled when rendering - something like a very simple HTTP interface running on a high port in A:M that allows a project to be loaded, render setup parameters to be sent, renders to be started and stopped, and render status info to be returned.
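
    As a very rough illustration of the idea only - the port, the commands and the replies below are all invented for the sketch, and it is plain POSIX sockets, not anything A:M actually contains:

    ```cpp
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <string>

    // Map a request line to a canned reply. The command names are imaginary.
    static std::string handle(const std::string& request) {
        if (request.rfind("GET /status", 0) == 0) return "rendering frame 42\n";
        if (request.rfind("GET /start", 0) == 0)  return "render started\n";
        if (request.rfind("GET /stop", 0) == 0)   return "render stopped\n";
        return "unknown command\n";
    }

    int main() {
        int server = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(42000);  // the "high port"
        bind(server, (sockaddr*)&addr, sizeof addr);
        listen(server, 4);
        for (;;) {
            int client = accept(server, nullptr, nullptr);
            char buf[1024] = {};
            read(client, buf, sizeof buf - 1);
            std::string body = handle(buf);
            std::string reply = "HTTP/1.0 200 OK\r\nContent-Length: " +
                                std::to_string(body.size()) + "\r\n\r\n" + body;
            write(client, reply.data(), reply.size());
            close(client);
        }
    }
    ```

    A render-farm database or script would then only need to issue HTTP requests to queue work, which is exactly the database scenario described below.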

     

    With quad core PCs becoming available, it is less important for network rendering to have to share frames for a scene - it will probably be more common to set different render PCs to different scenes. This fact will make the use of a modified A:M more viable as a production rendering platform.

     

    Render server packages or scripts could be developed by users or 3rd party providers - Hash wouldn't need to concern themselves with anything other than the A:M program.

     

    This solution would mean, for example, that you could have a database program containing all the details including the render parameters, and the render status of every scene in a production. You could then just look through the database to see what scene you want to render or re-render. Then just press the render button in the database and the task is queued for the next available rendering PC.

     

    Or alternatively, a full Netrender Server could be run from a web server.

     

    I still think Netrender is a great program, so I am only suggesting this as a solution if Netrender is dropped permanently.

     

    Richard Harrowell.

  11. Hmmmm...Is the "root" bone the body squetch base?

     

    Will it mess anything up if I rotate that in an action?

     

    Looks like that bone does the trick. I hope I hope.

     

    I was wondering why you didn't just rotate the model bone, but I just realised there is no model bone in an action window. I never quite registered that fact.

     

    Richard Harrowell.

  12. Perhaps I'm a little slow this a.m. -- "A:M interface runs on a single core" -- ok, I sort of understand that -- in other words things like rotating something in a choreography or modeling are single core. But "dual cores are currently only used for rendering" - I thought I was rendering a 2 second animation in that clip I showed in my original message. Do you mean to say that rendering from the _interface_ doesn't use dual core, while rendering externally somehow does?

     

    I didn't see your previous post when I sent my last reply, so I didn't know that you had tried V14.

     

    A render to disk from the A:M interface in V14 can use dual core rendering.

     

    Your photo puzzles me a bit - I would expect both cores to be camped on 100% almost continuously. The snapshot shows you are in the middle of rendering patches, so A:M would be asking for maximum power, and yet neither core is near 100%! If this is Vista running on an AMD dual core PC, then obviously some application, service or the operating system is robbing A:M of CPU cycles.

     

    If this is the Hyperthreaded Pentium 4 we are seeing, then the display is showing the Pentium 4 running at full speed: Core 1 + Core 2 = 105%, which is about the absolute maximum a Hyperthreaded Pentium 4 could do.

     

    By the way, if you have XP on one of your systems, why would you want to change it to Vista? XP will be more stable for years at least. There is nothing in Vista that can help A:M run any faster than it can in XP.

     

    I would definitely try this test out on the AMD dual core with XP on it. To see the CPU usage graphs, just run Task Manager. It would not surprise me if the AMD 4200 rendered at about 4 times the speed of the Pentium 4.

     

    Also remember that 14 Alpha means Alpha. V14 probably still needs a lot of refinement before it is ready for the production release. That probably includes work on the dual core support.

     

    Richard Harrowell.

  13. Ok, it's installed. It's still not using both pseudo-processors as fully as I hoped it would (I have an AMD Athlon 64 X2). I might be getting a one second improvement in performance (from 11 seconds to 10 seconds with "LOW" resolution rendering on the can-can animation).

     

    The dual cores are currently only used for rendering - the A:M interface itself still runs on a single core. Hopefully, as work on V14 continues, we might see more multi-threaded code appearing in A:M.

     

    There is also a setting on the Global tab of the options window in A:M that sets the number of threads that you want to use. If "Threads" is set to 1 and Auto is off, then A:M V14 will only use one core at all times.

     

    By the way, there is nothing "pseudo" about the current multi-core processors - they physically have two CPU cores and if both are running, the speed does double. The old Pentium 4 processors with Hyperthreading - now that was a genuine pseudo-dual core processor.

     

    Richard Harrowell.

  14. David,

     

    I thought you might be interested that we are using a modified version of this arm rig. We have built our own rig using ideas from everywhere, and we are just starting some test animations now.

     

    I basically used your arm rig, except that I have a single null that is the hand/wrist controller, and it controls the hand for both the FK and IK modes. We wanted to be able to switch modes without ever having to switch the hand controllers, so that we can get unbroken animation curves right through an FK/IK switch point.

     

    I also added a third mode where the arm is attached to a rail from the shoulder. It is excellent when you need to animate swinging arms.

     

    I will post our version of the rig once we have done our testing.

     

    Richard Harrowell.

  15. OK, so to actually create the displacement map, I have to render out this video--

    http://s3.photobucket.com/albums/y83/budgi...=WaterTest1.flv

     

    Then I import that video into the project as a video decal, and apply it to the grid? Where do I set it as a displacement decal?

     

    -The Bird Man

     

    That displacement map looked a bit odd. The high points stay high and the low points stay low. Are you animating the right parameter when you make the displacement map? You should start off with a Sine turbulence and then animate only the Z Translate parameter of the turbulence. Also make sure your amplitude for the turbulence is not over 100% - just under is best. Over 100% and you will start to get some flat spots on the peaks and troughs.

     

    It might be a result of the FLV compression, but the movie does seem to have flat regions where the colour is 0,0,0, suggesting you are using over 100% amplitude.
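
    A quick sketch of why over-100% amplitude produces those flat spots: the 8 bit displacement channel saturates at 0 and 255. The sine below merely stands in for the turbulence pattern:

    ```cpp
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Encode a displacement sample into an 8 bit channel centred on 128.
    unsigned char encode(double amplitude, double phase) {
        double v = 128.0 + amplitude * 127.0 * std::sin(phase);
        return (unsigned char)std::clamp(v, 0.0, 255.0);  // saturates when amplitude > 1
    }

    int main() {
        const double kHalfPi = 1.5707963;  // the sine's peak and trough
        for (double a : {0.95, 1.30})
            std::printf("amplitude %.2f: peak=%3d trough=%3d\n",
                        a, encode(a, kHalfPi), encode(a, -kHalfPi));
        // 0.95 keeps headroom; 1.30 pins at 255 and 0 - the flat 0,0,0 regions
    }
    ```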

     

    Richard Harrowell.

  16. Apply the WaterMap animation to a dense mesh as a decal and set the type of decal to Displacement.

     

    You used to need a dense mesh for displacement maps, but the new pixel displacement maps in the current V13 and V14 no longer need a dense mesh for displacement.

     

    Richard Harrowell.

  17. this technique is to just simulate a body of water. It is not to simulate a wake created by a boat or other floating objects.

    Correct, the first tut is only for the waves, but the third tut (if you could call it that) is about combining the waves with a wake created by an object

     

    http://www.babbagepatch.com/wakes.htm

     

    Charlie

     

    I never thought of using particles to generate displacement maps - I will have to experiment with that.

     

    I did post a method to make a wake using a "moving" displacement map decal ages ago. All the links should still work.

     

    http://www.hash.com/forums/index.php?showtopic=6154
    http://www.hash.com/forums/index.php?showtopic=4035
    http://www.hash.com/forums/index.php?showtopic=7012

     

    It is limited in that I only know how to make decals "move" in straight lines.

     

    By the way, if you are generating displacement maps from A:M, the OpenEXR format gives you more detail than TGAs, MOVs, etc.

     

    Richard Harrowell.

  18. I am currently working with Animation Master version 11.1

    Does anyone know what file formats work for exporting models into After Effects 7.0

    Are there special plug-ins that I need to download for this?

     

    Thank you.

     

    If you have After Effects 7.0, then the best format to use by a big margin is OpenEXR.

     

    This is especially true if you intend to layer your renders by doing some renders with an Alpha Channel.

     

    Essentially, if you use OpenEXR, you can manipulate the image in AE with effectively no loss in quality for the final video output. If you render in TGA, AVI, etc., any image modification in AE will lose a bit of quality in the final render. Using TGAs is still OK, but OpenEXR is just much better.

     

    AE7 can only read single layer OpenEXR so it is no use rendering with the Light Buffers set to "ON".

     

    Richard Harrowell.

  19.  

    Will this work for any rig or just the squetch rig?

     

     

    It basically works for any situation where:

     

    1. No bone moves when a model first opens in an Action (i.e. you want all offsets set so that all bones are already in exactly the position the constraints intend them to be in), and

     

    2. You are not animating the offsets.

     

    Once I have the plugin working, there are probably refinements I can do, but for the moment I am designing it for models that fit these requirements.

     

    And yes, it would be kind of handy for a rig with cogs. :)

     

     

    Richard Harrowell.

  20. Status of the AutoOffset plugin

     

    In spite of the silence, I am making real progress.

     

    I have managed to get a working dialog box, and I have worked out how to navigate the SDK to locate the constraints. Work has been frustratingly slow since this is my first A:M plugin, but now that I am finally talking with A:M via the SDK, things are starting to work. This image of the plugin dialog gives an idea of what I am trying to achieve:

     

    ao1.png

     

    An alpha version of the plugin for PCs is not far away. I do not have the ability to compile Mac versions.

     

     

    Richard Harrowell.

  21. The problem is that rotational values do not export with the model. X and Y rotation of a bone in an action get exported as the bone end position; Z rotation does not affect the position of a bone, so there is no Z rotation on export, and bones default back to their original rotation on the Z axis. I only have a couple of poses that use Z rotation (which work fine in the action) and I didn't notice the problem until I did a test install and exported it.

     

    That is very frustrating.

     

    Is it simply a matter of getting the Z rotation in the exported model the same as in the action, or does this problem cause errors all the way down the bone tree?

     

    Why I ask is that in a week or two, I can probably look at a simple plugin that will capture the bone rotations in the action to an external text file, and then use that to correct the bone rotations in the exported model.

     

    Would that work?

     

    The thing is that even if this bug is fixed, it may only be fixed in V14, and will not be fixed in V12. A plugin can be made to work for all versions. I have the source code for the InstallRig plugin, so perhaps I can add this functionality to it somehow.

     

    I think this work is too important to have it stall.

     

    It would even be possible to write a plugin that would take all the bones and their positions in an action and write them directly into a destination model (the one with all the final constraints, etc.), but it sounds a bit beyond my plugin writing ability at the moment.

     

    Richard Harrowell.

  22. Given the nature of the light coming from the new LCD screens and video projectors, there is every reason to think that the cheap colorimeters will be quite stable, so they will ensure that your color today is the same as it was a month ago and two months ago - and that is very valuable.

     

    The exact frequency of the R, G and B colors is set by the phosphors or gases used in the light source, and this frequency is unlikely to change at all. This means that for one monitor, the colorimeter does not have to try to adapt to changing RGB frequencies - it only has to worry about the level of each, and I would expect even a cheap device to stay within about 1% accuracy. You don't have to worry about how well a cheap colorimeter conforms to the standard CIE 1931 colorimeter response curves. (It is a very weird standard - the blue sensor on a colorimeter is required to respond to both blue and red light.)

     

    Colin, you are right about the problems of printing. The thing is that once you have a calibrated scanner, you can use it to measure the colors from the printer. A major problem is that lots of scanners and printers have overblown drivers that try and automatically optimise the color all the time, and if you are doing calibration, you want extremely simple drivers that do not adjust for any reason.

     

    I gather the standard for commercial printing companies is to work with all their monitors set at a low color temperature (I think it is 5000 deg Kelvin). They use the same lighting to evaluate the prints and they get an excellent match this way.

     

    Animation companies tend to use 5000 degree room lighting with the walls painted a neutral grey, and the monitors set to 6500 deg Kelvin. This means that a calibrated print will never match the colors on a calibrated monitor.

     

    Richard Harrowell.

  23. I have been looking into color calibration and the more I look, the more my head spins.

     

    There are a number of good cheap calibrators - the Pantone is just one of them.

     

    The huge problem is that all colorimeters are just relative instruments - they cannot accurately set any monitor without some kind of reference calibration.

     

    If you have many monitors of the same kind, a colorimeter can calibrate them all so they all behave the same.

     

    It can also help in accurately calibrating Gamma, since that is a purely relative measurement.

     

    Here is the problem. You have three different kinds of monitors, all set to produce 100% red on the screens. Let's say they have been calibrated somehow so each one is emitting exactly the same red light energy.

     

    If you measure the amount of red using a colorimeter, it might give a reading of, say, 1.0 for monitor 1, 0.9 for monitor 2 and 0.8 for monitor 3. Is the colorimeter faulty? No - this is how it should work. Each monitor uses a slightly different wavelength of red, and a colorimeter is meant to give a different reading for each different red frequency.

     

    So how are colorimeters used professionally? Usually this way. First you buy a whole stack of exactly the same kind of monitors for your company. Then, every six months or so, you send a monitor and your colorimeter to a color lab that has a whole lot of massively expensive gear like accurate spectrophotometers and so on. They will produce a calibration matrix that corrects your colorimeter for that one type of monitor. The matrix cannot just calibrate the R, G and B channels by themselves - it also has to determine what color tone the monitor produces when, say, you mix 50% red, 50% green and 0% blue. You cannot know the color until it is measured.

     

    Once you have this calibration matrix, the colorimeter can calibrate that one type of monitor with great accuracy.
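
    Applying such a matrix is simple; the point is that it mixes the channels rather than just scaling them, because each monitor type's primaries sit at different wavelengths. A sketch with made-up numbers (a real matrix would come from the lab for one specific colorimeter/monitor pairing):

    ```cpp
    #include <array>
    #include <cstdio>

    using Vec3 = std::array<double, 3>;
    using Mat3 = std::array<Vec3, 3>;

    // Correct a raw colorimeter reading with a 3x3 calibration matrix.
    Vec3 correct(const Mat3& m, const Vec3& raw) {
        Vec3 out{};
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c)
                out[r] += m[r][c] * raw[c];  // off-diagonal terms mix the channels
        return out;
    }

    int main() {
        Mat3 cal = {{{1.08, -0.05, 0.01},   // invented values for illustration
                     {-0.02, 1.11, -0.03},
                     {0.00, -0.04, 1.19}}};
        Vec3 reading = {0.80, 0.45, 0.30};  // raw R, G, B from the device
        Vec3 fixed = correct(cal, reading);
        std::printf("corrected RGB: %.3f %.3f %.3f\n", fixed[0], fixed[1], fixed[2]);
    }
    ```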

     

    This is a common way many big animation studios work (some probably even have their own color labs with hundreds of thousands of dollars worth of light measuring gear).

     

    Using a colorimeter without specific calibration for the models of monitors you are using could produce good results (you could be lucky), or it could produce worse results than the factory calibration would have produced. If you use the cheap colorimeters to calibrate six different types of monitors, you will almost certainly end up with six monitors that do not quite match in color.

     

    Why I am saying this is that if you rely on a cheap calibrator, it might make you think your screen colors are correct when they are not. You might end up being "that studio that produces all that stuff with the green tint".

     

    As I said, Gamma is a different issue, and cheap colorimeters are capable of ensuring your work has correct tonal variation.

     

    So what is the solution?

     

    This is the best solution I can think of for the moment. If you know a place with a calibrated monitor, take your laptop there and, after giving the laptop half an hour to warm up, put it beside the calibrated monitor and adjust your notebook's color purely by eye (not by any measuring devices) till your notebook's white exactly matches the calibrated monitor's white. Your eye is not that bad at seeing the tone differences in white in a side-by-side comparison.

     

    Now, back at home/work, line up each different model of monitor that needs calibration and adjust them until the white color on each of them exactly matches the white color of your notebook.

     

    When this is done, measure the colorimeter reading of each monitor, and those reading numbers will become the calibrated "white" color when you use the colorimeter to calibrate each monitor. Each different type of monitor will need to be calibrated with different numbers for "white". Intermediate color tones will not match exactly between the different monitors, but hopefully they will be close enough to live with.

     

    There are lots of flaws with this process but it is probably the best you can do cheaply.

     

    Also, bear in mind that any monitor that will be connected to a PC by an analog cable must be calibrated on the PC it will be used with - calibrating an analog monitor without its PC is worse than useless, because you are actually calibrating the monitor/graphics card combination.

     

    Digital monitors connected to a PC with a digital cable can be calibrated away from their normal PC, since the PC has no influence on the colors the monitor produces.

     

    Technically, the cheap colorimeters are often extremely good devices. The cheap ones tend to come with highly automatic software that makes you think that calibration is a simple process.

     

    The same companies sell more expensive calibrators, and there will not be much difference in the hardware (it is often identical), but they will have better software that allows you to feed in calibration matrices, along with many other extra software features. The quality of the software sets the price for colorimeters.

     

    Now, if anyone has a better suggestion, I would love to hear it. The idea of spending long hours getting the color absolutely perfect in a production, only to discover that your monitor calibration was way out, is a major nightmare.

     

    How about calibrating scanners and printers? Now that is a totally different story. Just get the open-source program LProf from

     

    http://lprof.sourceforge.net/

     

    and buy a $10 "R1" calibrated color target from Wolf Faust at

     

    http://www.targets.coloraid.de/

     

    and you can now produce very accurate and comprehensive color profiles automatically.

     

    Richard Harrowell.

  24. The Auto Offset Concept for Constraints

     

    In most rigs for any purpose, you do not want any bones to move when the rig is first dropped in a Choreography or Action.

     

    This means that there is no need for manual setting of any Constraint Offsets. All that is needed is for something to set an automatic offset in an action window that compensates for the bones' positions in the modelling window. In other words, it automatically sets a fixed offset for each constraint that ensures no bone will be moved from its modelled position by a constraint unless you start animating bone positions.

     

    Since this is exactly how you want a constraint to work 99.9% of the time, and since it is so easy for A:M to calculate these values exactly for a massive rig in a fraction of a second, it is an obvious feature to have.

     

    Noel's refinement of the idea is that the Compensate mode button, along with the Offset properties, would remain. The Offset properties would act as an additional effect on top of the auto-compensate - they would allow you to deliberately nudge a bone a bit. The Auto-Offset function would completely ignore the numbers in the Offset property when it calculates its offset. So you end up with two offsets that get added together.
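
    As a rough illustration of the two offsets being combined - translation only, with stand-in types rather than the real A:M SDK:

    ```cpp
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

    // Auto-offset: whatever moves the bone back to its modelled position.
    Vec3 autoOffset(Vec3 modelledPos, Vec3 constrainedPos) {
        return modelledPos - constrainedPos;
    }

    int main() {
        Vec3 modelled    = {10.0, 5.0, 0.0};  // bone position in the modelling window
        Vec3 constrained = {12.0, 5.0, 1.0};  // where the raw constraint would put it
        Vec3 manualNudge = { 0.0, 0.5, 0.0};  // the user's Offset property, on top

        Vec3 total = autoOffset(modelled, constrained) + manualNudge;
        std::printf("total offset: %.1f %.1f %.1f\n", total.x, total.y, total.z);
        // With a zero nudge, the bone opens exactly where it was modelled.
    }
    ```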

     

    The Compensate Mode button would still generate offsets based on the bone position in an action rather than the modelled positions, so it is still there when you need to set the offset for a constraint to a target bone that has been moved from its modelled position.

     

    In summary, if constraints had this Auto-Offset tick box available, it would mean that in most cases, A:M could pre-calculate all the offsets live and it would cease to be an issue you even have to think about much.

     

    In my idea for a test plugin, I intend to use the simple logic that if a constraint currently has an Offset property, then I will assume it will need the Offset recalculated. Any constraint that does not have any Offset property currently set will be ignored. I might have a second option that will calculate offsets for every constraint.

     

    Richard Harrowell.
