Hash, Inc. - Animation:Master

Rodney
Admin · Posts: 21,575 · Days Won: 110

Everything posted by Rodney

  1. I can't really address your question, although I could guess at it... but I won't. It'd be worth presenting to Stephen, or to Arthur Waselek, who as far as I know was the original author of the OBJ plugin (there were several export plugins; I just assume his was the one deferred to for most uses of the OBJ format). He had stated that he would try to address any issues raised... of course, that was a very long time ago.
  2. I think that perhaps I don't know what 'new hair' is. I was under the impression this was all working with the newest hair available. Edit: I do see where his site references old hair. Wow, I had forgotten that was with old hair. As an aside, keep in mind we can still use the old hair locks by holding down the Shift key when creating the new hair. And technically speaking, we may still have all those old options... which I didn't think we did. Matt, you sly fox. You've revealed a long-lost feature (at least, lost to me!)
  3. Here's an example of what that painting produces after blurring the image. (I applied the same image as color... but I'm not sure why the color and the direction aren't the same in each instance; they seem to be inverted vertically.)
  4. For direction you'll want to check out Colin Freeman's application, as well as the basic insights into how color directs hair orientation: http://www.colins-loft.net/hairbrush.html The application is Flash-based and so will not run in many modern browsers, but there is a downloadable program on that page. The painted results in the window can then be exported for further testing. During one of the Live Answer Sessions we explored this, and Robert even demo'd basic use via Photoshop.
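A minimal sketch of the idea that color can encode hair direction. The specific mapping here is an assumption for illustration (A:M's actual encoding isn't documented in this thread): the red and green channels are remapped from [0, 255] to [-1, 1] as X/Y direction components. It also shows one plausible cause of the vertical inversion noted above: image rows are usually stored top-down while texture V coordinates run bottom-up.

```python
# Hedged sketch: how a painted color map could drive hair direction.
# The R/G-to-X/Y mapping below is an assumed convention, not A:M's
# documented behavior.

def pixel_to_direction(r, g):
    """Map an 8-bit (R, G) pair to a 2D direction vector in [-1, 1]."""
    x = r / 255.0 * 2.0 - 1.0
    y = g / 255.0 * 2.0 - 1.0
    return (x, y)

# Mid-gray (128, 128) lands near (0, 0): a roughly neutral direction.
print(pixel_to_direction(128, 128))

def flip_row(row, height):
    """Convert a top-down image row index to a bottom-up texture row.

    Image rows typically count from the top, texture V from the bottom;
    flipping the row index would account for a vertical inversion between
    the color and direction channels.
    """
    return height - 1 - row

print(flip_row(0, 512))  # → 511
```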
  5. Welcome and make yourself at home!
  6. You should be able to use an image to control the length as well and that might be highly advantageous. Looking good thus far!
  7. That could easily pass for a photo of a doll in a playset. That particle hair is quite impressive. (I assume that it is particle hair) Looking good!
  8. As for the annotation tool... I like how it allows a drawn line (or text) to be placed on a single frame or on all frames. And the erasing of same... nice. That's very useful.
  9. This would appear to be that elusive FFMPEG interface I've been looking for. Perhaps not THE interface but... so far so good.
  10. I haven't played with it much, but from what I see it is indeed quite powerful. It can be that bridge A:M users often need between .MOV and other formats, and it also handles image sequences in a variety of still-image and movie formats. To get a .MOV compatible with my installed codecs I had to change from the default .MOV codec to a more common one. As I said, the user interface isn't the most intuitive, but it's not hard to figure out. Right-click to Open/Save etc.
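For reference, the same image-sequence-to-movie bridging can be done with the ffmpeg command line directly. The file names and frame rate below are placeholders; adjust them to your own render output.

```shell
# Assumed example paths -- substitute your own files.
# Turn a rendered image sequence into a widely compatible H.264 movie:
ffmpeg -framerate 24 -i render_%04d.png -c:v libx264 -pix_fmt yuv420p out.mp4

# Re-encode a .MOV into the same common codec (rather than its default one):
ffmpeg -i input.mov -c:v libx264 -pix_fmt yuv420p output.mp4
```

The `-pix_fmt yuv420p` flag matters for compatibility: many players refuse H.264 files that use other pixel formats.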
  11. The user interface for this program isn't exactly intuitive, but I thought it might be useful for those who don't have a dedicated viewer to open and adjust EXR renderings outside of A:M. Being able to relight images easily is very useful. https://sourceforge.net/projects/mrviewer/?source=typ_redirect The program can do a lot more than just open, display and convert EXR images, so it may be worth a look. It's described as a video player, interactive image viewer, and flipbook for use in VFX, 3D computer graphics and professional illustration. Listed features:
      • Flipbook player
      • HDRI viewer
      • Multichannel support in OpenEXR, PSD, MIFF and TIFF formats
      • MultiView OpenEXR support
      • 4K video and audio player
      • Network syncing support
      • Non-destructive EDL
      • Grease Pencil support
      • Multi-part EXR images
      • Deep OpenEXR images (Deep Scanline and Deep Tile)
      • Animated GIF support
      • Color Transformation Language (CTL) support
      • ACES 1.0.3 support
      • OpenColorIO (OCIO) support
      • Linux 64-bit and Windows 32- and 64-bit
      • Video and audio transcoder
      • Scrubbing with audio
      • VR support for environment maps and VR movies and sequences
      • Passive 3D stereo support (anaglyphs, top/bottom, side by side, interlaced, checkerboard)
      • OpenImageIO (OIIO) support
  12. On discord I've been posting some quick texture tests/renders based on found items on the internet. Most of the latest tests have been based on materials designed for use with the Thea renderer. Nothing fancy but it's fun to explore.
  13. The joys of antivirus software.
  14. With regard to rendering multiple frames at a time... here's some research that, in a loosely related way, suggests an approach to projection/prediction of frames in temporal space: http://www.cs.cornell.edu/~asaxena/learningdepth/NIPS_LearningDepth.pdf Here's a somewhat related application that follows the general idea: https://github.com/mrharicot/monodepth The label given to the process of determining depth from a single still image is "Monocular Depth Estimation". Of course a renderer isn't going to do that... that wouldn't be practical or optimal... but the basic process mirrors that of raytracing/pathtracing as an underlying framework.

      Each ray shot into temporal space reveals more about the volume within the framework of frames/slices along the path each tixel takes. That same tixel can record its journey over its allotted lifetime, as with any particle shot out into space. I would imagine the standard two sensors would be RGB and Alpha, where the former represents a hit in temporal space (registering the albedo of the surface, then returning to the origin after collecting the required data, such as angle of deflection, which will not always be followed but is stored for future reference). The ray then returns to the origin by the most direct path (which, unless the receiver is placed elsewhere, should be along the same path previously cast).

      An anticipated and well-known measure would be that of rays that travel directly from origin to Alpha. This is a known linear distance for a given frame range. If all rays in a 24-frame range reach Alpha, then that space is empty for all 24 frames; there is nothing (no object) occupying it. Now again, because of channels and keyframes we don't even need to cast any rays: the channel keys and splines already identify objects, orientations and movement in that space, so we can already predict and project where objects will be rendered volumetrically in temporal space.
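The prediction idea above can be sketched in a few lines. This is purely illustrative (all names and numbers are assumptions, not any renderer's actual API): interpolate an object's position from its channel keyframes, and if a screen cell is never occupied on any frame of a range, that cell can be rendered once and reused for the whole range instead of being re-rendered 24 times.

```python
# Hedged sketch: predict from keyframe channels which screen cells stay
# empty across a frame range, so they need one render instead of many.

def lerp_position(keys, frame):
    """Linearly interpolate an object's x position from (frame, x) keyframes."""
    keys = sorted(keys)
    for (f0, x0), (f1, x1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return x0 + t * (x1 - x0)
    return keys[-1][1]  # hold the last key after the range ends

def cell_empty_for_range(keys, cell_min, cell_max, frames):
    """True if the interpolated position misses the cell on every frame."""
    return all(not (cell_min <= lerp_position(keys, f) <= cell_max)
               for f in frames)

# An object slides from x=0 at frame 0 to x=10 at frame 24.
keys = [(0, 0.0), (24, 10.0)]
frames = range(25)

# A cell spanning x in [20, 30] is never touched: render once, reuse 25 times.
print(cell_empty_for_range(keys, 20.0, 30.0, frames))  # True
# A cell spanning x in [4, 6] is crossed mid-range: must render per frame.
print(cell_empty_for_range(keys, 4.0, 6.0, frames))    # False
```

The same test generalizes to 3D bounding volumes; the channel splines already give the motion, so no rays need to be cast to make the prediction.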
  15. Ooooh. Nice catch Robert. That change in the Compensate would definitely account for the results being observed here.
  16. I like the idea of the moving displacement map. The terrain wizard is nice but I think for your needs it's not ideal.
  17. Fuchur is on the right track. I seem to recall that adding Aim At constraints toward the end of a process works best.
  18. Whooops. I spoke too soon. The CD I have evidently can run v14 and so the installed program launches (with CD). Of course launching is one thing... getting it to work and work optimally is another. (As much as I like the images on the CDs... I am reminded of what a pain it was to have to insert the CD to launch A:M)
  19. While doing some spring cleaning I found my v13 CD in a box in the closet. I gave running it on Windows 10 a try, with the various recommended files and compatibility settings in place, but no dice. I have a few older computers I might need to keep handy for those moments I get nostalgic. But then again, I could say the same thing about a few old programs on floppy disks... and I don't have any of those drives lying 'round.
  20. The error message says "Playback on other websites has been disabled by the video owner." which suggests to me that some share setting needs to be enabled in order to see it here in the forum. The distort cage approach is a great animation technique. I'm surprised we don't see more of that. Great stuff!
  21. If I understand the question correctly, I'd say the only way to achieve that for deformed/animated meshes is via baking. But this is a general restriction of materials, not exclusive to BitmapPlus. If I'm reading too much into this and you just want the bitmap texture to stick to shapes animated via bones, then I'd say make sure you have 'Global axis' turned off. But I can't imagine this latter one is your case.
  22. It's worth repeating... so it's definitely worth repeating the requirement here: in order for the gif to animate in the post, the gif has to be a specific size. A few years ago, when gif animation capability was added to the forum, I tested a few sizes. Huge (or even large) animated gifs are generally better linked to (via uploading to the forum), because people usually open a topic to read it and don't like to wait for gif animations to ramp up. If a topic has a lot of gif animation files posted, it can get unwieldy pretty quickly, and back then people complained about delays in images displaying, so I tried to avoid that. With faster computers these days, and a resurgence of interest in gif animation, we may need to revisit the size limit and perhaps allow slightly larger animated gifs to play in a post. P.S. I knew there would be a great story behind the green hair!
  23. There has got to be a story behind the green hair. Come on Dan.... let's hear it! For those that haven't clicked on the image... it's animated so... click to see.
  24. They launched in 2010, so they aren't short on submissions. Aside: if they catalogued those submissions (which I assume they did), they have a pretty good pulse on what kinds of screenplays are in play. That data alone is valuable. Given that number of submissions, I foresee that more than a few lawsuits claiming "they stole my idea" will be forthcoming, but I'm sure the submission criteria legalities covered that eventuality. Ref: https://en.wikipedia.org/wiki/Amazon_Studios