Everything posted by Bendytoons
-
You are dead on about the animating. I keyframed his tongue for the "thilenly thneak". Extremes will come, it's in my nature.
-
This approach fundamentally remaps the motion, but all of that is done in the puppet itself; the motion data is used unaltered. And once the puppet is set up for a puppeteer there are only tiny tweaks to make for each new performance. Switching performers will require a little more work, I guess; so far it's just me. This method creates a key on every frame, but you can use A:M's tools to reduce them if you want. As my wife said this morning, the only thing to buy at this point would be me. This is all still a cobbled-together system that mostly shows off A:M's flexibility. I've developed nothing patentable so far, just a bunch of clever hacks gathered over a lot of trial and error. Shaazam. Here is a zip with a DivX avi in it: spystory.zip
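The "key on every frame, reduce later" idea can be sketched in code. This is not A:M's actual reduction tool, just a generic illustration of the technique (Ramer-Douglas-Peucker curve simplification applied to one dense channel of per-frame keys); all names here are invented for the example.

```python
# Hypothetical illustration of keyframe reduction (not A:M's actual tool):
# Ramer-Douglas-Peucker simplification on a dense channel of per-frame keys.

def reduce_keys(keys, tolerance):
    """keys: list of (frame, value). Keep only keys whose removal would
    move the curve away from the original by more than `tolerance`."""
    if len(keys) < 3:
        return keys[:]
    (f0, v0), (f1, v1) = keys[0], keys[-1]
    # Find the key farthest from the straight line between the endpoints.
    max_err, max_i = 0.0, 0
    for i in range(1, len(keys) - 1):
        f, v = keys[i]
        t = (f - f0) / (f1 - f0)
        interp = v0 + t * (v1 - v0)
        err = abs(v - interp)
        if err > max_err:
            max_err, max_i = err, i
    if max_err <= tolerance:
        return [keys[0], keys[-1]]  # line is close enough: drop the middle keys
    # Otherwise split at the worst key and recurse on both halves.
    left = reduce_keys(keys[:max_i + 1], tolerance)
    right = reduce_keys(keys[max_i:], tolerance)
    return left[:-1] + right  # don't duplicate the shared middle key

# A dense "mocap" channel: flat for 10 frames, then a linear ramp.
dense = [(f, 0.0) for f in range(10)] + [(10 + f, f * 1.0) for f in range(10)]
sparse = reduce_keys(dense, 0.01)
print(len(dense), "->", len(sparse))  # → 20 -> 3
```

The dense channel collapses to three keys (the two endpoints plus the corner where the ramp starts), which is the same payoff as running a keyframe-reduction pass over captured data.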
-
Hmm. Well, I was gonna post a full-size DivX version because it was better quality and much smaller than the QT. Unfortunately the forum will not let me. If anyone knows how to do it, please let me know.
-
You have been spoiled. I blame Martin.
-
Thanks, Dhar. This year's goal is productivity, so hopefully there will be plenty more.
-
Here is the next piece in my development of the facial puppeting system. The goal was to begin exploring the performance possibilities. I gotta say, I think it went quite well. Let me know what you think. (Sorry about the big file; best I could do.) spystory.mov Edit: here's a zip with a DivX avi in it; this looks and sounds better: spystory.zip
-
Yeah, so as mentioned in a previous post, I used to work for a company that tried to do a live face-capture product for the masses. The company was called Eyematic, and if you were at Siggraph 2001 you might have seen us performing our celebrity theater. The practicality of making such a realtime system work well consistently was a bear and a half. The technology certainly exists now to make a practical realtime system, and if you've got 20 or 30k to drop you can probably get one. Hey, they made Polar Distress. But I don't see a personal studio option coming too soon. All the people who were working on such things are now working on how to identify your face at the Super Bowl and other "government" work. And the market is still too small for a low-cost option to make money, I suspect. But I'm with you all the way, Matt. I've always wanted to just plug in my avatar and go. As far as process goes, I used Syntheyes for the initial motion capture, and A:M for puppet building. Everything else is highly classified voodoo. And no, you don't HAVE to look like a ninja, but it makes it easier to track the top of your head.
-
Place the sword in the hand; then, using compensate mode, apply a "translate to" constraint to lock the sword to the hand bone, and an "orient like" constraint to get it to swing around with the hand bone.
-
I think I've figured out how to do blink tracking and eyelid tracking in the next rev. As far as teeth go, the Bug and I are having ongoing negotiations. I might try some body capture, especially upper body, but that is a lot more complicated and really requires more cameras than I have at the moment. And I did use Sorenson 3. Dots were made with a MAC liquid eyeliner (which makes them surprisingly expensive); coulda used a Sharpie, but I had a lunch date. The cameras involved are a Canon GL2 and a little Sony Handycam, both of which are DV video.

"Wow. Very nice. A bit 'jittery'; I assume that is due to inconsistencies in tracking those oh-so-stylish dots on your face. I watched the split screen several times in succession. Very promising, very weird to see the Bug mimic your face. Can't wait for more details on how it was done."

Yeah, the jitter is video interlacing, basically. If I had me a couple o' them HD cams things 'uld be different, oh yes they would. I am at the moment unsure of where this will go outside of my productions. I don't see it being an easy-to-use tool any time soon. The technologies are all off the shelf, but the process takes a lot of know-how in a bunch of areas. I used to work for a company that tried to build "easy to use" facial tracking software. It was a disaster, because you just can't make things simple enough for the casual user. However, a dedicated amateur could do this basically the same way I did, with a total cost of not more than $1500, not including the computer. Keyframe density is thirty per second, basically one per video frame (A:M didn't like the data at 29.97). But the beauty of using A:M is that you just drop extra layers on top of the data to adjust or correct; you don't have to worry about the keyframe density. I hope it speeds up facial animation for me, right now. Then we'll see about this mythical average animator.
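Interlaced DV alternates half-height fields, so tracked dots wobble up and down frame to frame. One generic way to tame that kind of jitter (not necessarily what was done here) is a small temporal filter over the track data; this sketch uses a centered moving average:

```python
def smooth_track(points, window=3):
    """Centered moving average over a 2D point track (illustrative only).
    points: list of (x, y), one per frame."""
    half = window // 2
    out = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        xs = [p[0] for p in points[lo:hi]]
        ys = [p[1] for p in points[lo:hi]]
        out.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return out

# Jittery track: y alternates +/-1 around 100, the way interlaced fields might.
track = [(float(i), 100.0 + (1.0 if i % 2 else -1.0)) for i in range(6)]
print(smooth_track(track))
```

The trade-off is the usual one: a wider window kills more jitter but also softens fast, intentional motion, which is why layering corrections on top of the raw data (as described above) is often preferable to filtering it away.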
-
It is fundamentally motion capture, and I'm driving his face with mine, but it works more like a puppet because the mocap data is driving a pose based animation set up rather than directly controlling muscle animation. Here is a split screen with my face.splitscreen.mov
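The "puppet, not muscles" idea can be made concrete: raw tracker measurements get normalized against a calibration range and drive pose sliders from 0 to 100%, rather than moving mesh points directly. A toy sketch of that remapping, with all names and numbers invented for illustration:

```python
def drive_pose(raw, neutral, extreme):
    """Map a raw tracker measurement onto a 0-100% pose slider.
    `neutral` and `extreme` come from a calibration pass on the performer
    (relaxed face vs. the most extreme expression)."""
    span = extreme - neutral
    pct = 100.0 * (raw - neutral) / span
    return max(0.0, min(100.0, pct))  # clamp to the slider's legal range

# Hypothetical calibration: a jaw marker sits at y=12 relaxed,
# y=30 with the mouth wide open.
print(drive_pose(21.0, neutral=12.0, extreme=30.0))  # → 50.0 (halfway open)
```

Because the slider only *selects along* a hand-animated pose, the character's own shapes stay intact no matter how messy the incoming data is, which is what makes it feel like a puppet rather than raw mocap.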
-
Here is the first camera test of my new puppeting system. Sorry for the lame delivery but this was just a proof of concept. I know there are a heap of problems with the performance, but please let me know what you think. talk6_.mov
-
my wombat's batteries have imploded
-
The Bug has left the counter and moved to the floor. This was an automatic track out of Syntheyes. At first I couldn't get it to work, but once I corrected for lens distortion it worked great. Tracking took maybe half an hour. floor_.mov
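Lens distortion correction is the step that rescued the auto-track. Conceptually, correction applies a radial scale that grows with distance from the optical center; this is a generic one-term radial model for illustration, not Syntheyes' internals:

```python
def undistort(x, y, k1, cx=0.0, cy=0.0):
    """One-term radial distortion correction (illustrative only).
    (x, y): point in normalized image coords, (cx, cy): optical center,
    k1: radial coefficient (negative values here model barrel distortion)."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2  # first-order radial model
    return cx + dx * scale, cy + dy * scale

# A point near the frame edge moves much more than one near the center,
# which is exactly the curvature that throws off a straight-line solver.
print(undistort(0.1, 0.0, k1=-0.2))
print(undistort(0.8, 0.0, k1=-0.2))
```

A camera solver assumes straight lines project to straight lines; uncorrected barrel distortion bends them, so the solved camera path inherits the error, which matches the "wouldn't work until I corrected distortion" experience above.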
-
It recreates the camera movement by analyzing the parallax movement of multiple tracked points. You can do it either automatically or with user-picked track points.
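The parallax principle in one toy example (a plain pinhole projection, just to illustrate the geometry a solver exploits): when the camera translates, near points shift more on screen than far points, and it is exactly that disparity that lets the solver recover the camera path.

```python
def project(point, cam_x, focal=1.0):
    """Pinhole projection of a 3D point (x, y, z) for a camera at
    (cam_x, 0, 0) looking down +z. Returns the screen x coordinate."""
    x, y, z = point
    return focal * (x - cam_x) / z

near, far = (0.0, 0.0, 2.0), (0.0, 0.0, 20.0)
for cam_x in (0.0, 0.5):
    # Same camera move, very different on-screen shifts:
    print(cam_x, project(near, cam_x), project(far, cam_x))
```

Sliding the camera 0.5 units moves the near point's screen position ten times as far as the far point's; a tracker that measures those shifts for many points can invert the relationship and solve for the camera motion.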
-
That is exactly the method I used, and I ran into the same issue. I had a repeat of it tonight, so I'll file a report.
-
This is a further progression of my work at integrating Syntheyes camera tracking into A:M. This iteration has a better render (9-pass), better shadows and reflections, and a better (though still not right) walk cycle. walk11a_hi.mov walk11a_lo.mov
-
#1: You could make the light rectangular and narrow and simply use it that way; you could add a rotoscope to the light with a bar pattern; or you could put a cookie between the light and the character to cast the shadow. #2: Use volumetrics. B
-
This is possible, and has been done many times. Do a search of the forum for "BVH" and you will find several threads. I found one of my own old threads with a sample project and description. You might start there: http://www.hash.com/forums/index.php?showtopic=12152 Lots of people have played with mocap in A:M, not many have done more than that, so mostly you'll have to figure it out for yourself. Good luck, have fun, and let us see what you come up with. Ben
-
With the MIDI controller in this thread you could do what the Animusic people did and have the same MIDI sequence drive both the music and the animation.
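The Animusic-style idea is that one note-event list feeds both the synth and the rig. A hypothetical sketch of the animation side, with plain tuples standing in for parsed MIDI messages (no real MIDI parsing here):

```python
# Hypothetical sketch: one note-event list drives both audio and animation.
# Each event: (time_in_seconds, MIDI note number, velocity 0-127).
events = [(0.0, 60, 100), (0.5, 64, 80), (1.0, 67, 127)]

FPS = 30  # animation frame rate (assumed)

def events_to_keys(events, fps=FPS):
    """Turn note events into (frame, channel, value) keyframes:
    the note number picks which hammer/drum channel to hit,
    the velocity sets the strength of the hit (0.0-1.0)."""
    keys = []
    for t, note, vel in events:
        frame = round(t * fps)
        keys.append((frame, f"hammer_{note}", vel / 127.0))
    return keys

for key in events_to_keys(events):
    print(key)
```

Because both the audio renderer and the keyframe generator read the same event list, the strike animation can never drift out of sync with the note it triggers, which is the whole appeal of the approach.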
-
Y2CMV ("Your Two Cents Might Vary") yeah, these days I doubt it's even worth 1.5c
-
Gotta disagree with that ... Avoid camera movement unless it's motivated.

"Interesting points of view here. I find that keeping the camera static only works when the action remains perfectly framed throughout a shot. Should the action require movement, I find the transition from static to moving extremely distracting. So, *my* rule of thumb is that if there's to be *any* camera movement within a shot, never let it start from or come to a complete stop. Keep it moving, but ever so slightly."

Motivated camera moves, like a pan, work because the viewer is learning about the character as the camera moves, and hopefully the attention remains on the character. However, moving cameras subtract power from the movement of the character. Think of a character running through a scene: if the camera is static, the character creates strong dynamic movement, but if the camera is following, that movement is nullified and the character is effectively static. So if you are following, the move must add power to the shot some other way. We viewers often want the camera to follow because we are intent on more intimate character detail than on movement through the scene. That is to say, if I am watching the face of a character as he runs, I might want the camera to follow so I can be intent on those details, but if I am watching the action of the run, I want the camera still so that the run has power.

Indeed, if you have an action that contains lots of cross-screen movement, like a run across the screen, the framing needs to change to follow the movement. However, editing is an effective way to follow the action without resorting to camera movement. In a sense it gives you the best of both worlds: you can keep the viewer's attention focused on the character and yet retain the power of the character's movement through the scene. Also, by cutting instead of following you can radically change the composition of your image without jarring camera movement.

As far as the always-moving camera goes, it is a concept that has become very popular in the wake of MTV and other content-free media, but it rarely flows naturally in a narrative. In a classic "invisible camera" narrative style you want to move the camera only when the viewer would move it themselves. Imagine that the camera is a pair of binoculars and the character a bird you are watching. If the bird flies from tree to tree, of course you want the camera to follow, so you can keep watching the bird. But if the bird is hopping around on a single branch, you hold the binoculars still and watch the movement of the bird. If you tried to move them up and down with his hopping it would make you ill. Of course there are many times when you choose to break these rules, but it should be a conscious choice to have the viewer aware of the camera. Okay, that's my 2c.
-
Gotta disagree with that (not the great work part). Avoid camera movement unless it's motivated. The pan in the piece is a good example of motivated camera movement, but it needs a little tweak. As it stands, the camera moves before and after the character moves. Let the character drive the camera instead: start moving only when the move is needed to follow the character, and stop moving just before the character does. This will help to keep the camera "invisible". And when cutting from a static camera to a moving one, it helps if the camera begins moving within the shot; it is less jarring. I think the static feeling Jon refers to is not because of the lack of camera movement but the vanilla composition. A centered character composition doesn't have much attitude. Also, it's best to change camera angles on a cut, so you might choose a less centered, not "straight on" composition for the close-up at the beginning to help it cut with the shot of the tunnel. 2c. Ben
-
Looking good, but the dust needs to start on the same frame as the muzzle flash. Bullets move really fast; film going through a camera does not. It would certainly take less than a 24th or 30th of a second for the bullet to travel the few feet between them. 2c.
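The timing claim checks out arithmetically. Assuming a typical handgun muzzle velocity of roughly 350 m/s and a couple of meters of travel (both figures are assumptions, not from the post):

```python
muzzle_velocity = 350.0   # m/s, typical handgun round (assumed figure)
distance = 2.0            # meters, "a few feet"

travel_time = distance / muzzle_velocity   # ~0.0057 s
frame_duration = 1 / 24                    # ~0.0417 s at 24 fps

# The bullet arrives roughly 7x faster than a single film frame elapses,
# so flash and impact dust belong on the same frame.
print(travel_time, frame_duration, travel_time < frame_duration)
```

Even at a sluggish 250 m/s the travel time stays well under one frame, so there is no frame budget for a visible delay between flash and dust.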
-
All 3D programs are basically the same. They have different approaches to doing things, but all do the same things, so all you've learned with Hash will serve you well. Having said that, you should go learn another package. You will know a lot more about 3D in general once you have mastered two different packages. I recommend _____ (you figure it out), as it is the most popular flavor of 3D in the pro world right now. It is often said the software you use doesn't matter; it's your talent. This isn't true in the game world. Ramp-up times for projects tend to be so short that you need to know the software package they are using at least well enough to get by. So use your A:M work to show you have the chops, but know the package they are using.
-
Are you importing the sprite image along with the material?