Everything posted by Rodney
-
Yes, A:M is a highly capable compositor. The waste is in re-rendering frames/pixels that never change. For instance, a 30 frame sequence may have a static background throughout, yet that background gets rendered again and again for each of the remaining 29 frames. Of course, camera moves (which are frequently used in cinema) often limit a computer's ability to read ahead (anticipate) what will and will not change. With a camera move it can reasonably be expected that -everything- (that is to say every pixel) will change. So an algorithm that predicts what needs to be rendered might start with the question: does the camera move in any way? If not, then the next stage might be to examine what is moving in the scene. I must assume that some of this predictive processing goes on in any renderer; after all, the final pixels of a frame and those of the shot/sequence have to be determined in some way.

To speculate further, I'd say that after considering the camera the next area to examine would be the Shot (as opposed to the Sequence), since a shot (strictly speaking) is more likely to contain similarities. Consider a three-shot sequence where Character A says something, Character B reacts, and then a third shot continues the narrative. The frames of Shot 1 (Character A) could be expected to be relatively similar, especially where nothing moves in the scene: no camera pan/zoom/etc., no moving background, minor movement of the character (especially in dialogue). This leads me to yet another algorithmic consideration: audio. After considering the camera, the second test might be to parse the audio, where changes are readily evident.

I'll leave it at that for now, but I find these considerations interesting. I very recently posted an older video interview with animator Tissa David on my blog where she insists on always animating to audio (dialogue or music).** I find this compelling enough to almost suggest our 'rendering algorithm' should begin with the audio, BUT these days I don't see any particular reason why that processing couldn't run in parallel. In fact, it might be optimal for each 'node' of the algorithm to examine and then track its own element within the frame/shot/sequence, keeping data related to that element at the ready for consultation. Regardless, these initial tests are boolean (true/false) in nature and therefore return their status quickly: the camera either moves or it does not, object1 either moves or it does not, the audio is either silent or it is not, etc. After that initial test the process knows what it needs to focus on to extract finer detail. And once state has been determined, splines and patches are especially suited for linear or nonlinear 'reading' of the extent of change.

**This requirement to always animate to audio suggests a new look at Richard Williams's suggestion to unplug (i.e. never listen to music while animating). The idea being that an animator cannot be listening to music if they are already listening to the dialogue or music specifically related to the sequence being animated. Think about this for a moment and let it sink in. While it is relatively easy to animate to a given beat once an audio track is broken down (hence another of Tissa David's requirements: always use an exposure sheet!), there are sure to be elements of audio not captured in the breakdown that convey just what is needed to deliver the personality and performance of the characters in the scene.
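To make the pre-pass idea above a little more concrete, here is a minimal sketch in Python. It is pure speculation about how such a planner might be organized, not anything A:M (or any particular renderer) actually does; the Shot and ObjectTrack structures and the -60 dB silence threshold are invented for illustration.

# A minimal sketch of the "boolean pre-pass" speculated above. The Shot/ObjectTrack
# structures are invented for illustration; a real renderer would read this data
# from the scene file (camera channels, object transforms, the audio track).
from dataclasses import dataclass, field

@dataclass
class ObjectTrack:
    name: str
    has_animation: bool          # does its transform or shape change in this shot?

@dataclass
class Shot:
    camera_moves: bool           # any pan/zoom/rotate keyframes?
    objects: list[ObjectTrack] = field(default_factory=list)
    audio_peak_db: float = -120.0  # loudest sample in the shot's audio

def plan_render(shot: Shot, silence_db: float = -60.0) -> dict:
    # Test 1: a camera move means every pixel changes -> full re-render.
    if shot.camera_moves:
        return {"mode": "full", "rerender": [o.name for o in shot.objects]}
    movers = [o.name for o in shot.objects if o.has_animation]
    # Test 2: nothing moves and the audio is silent -> render one frame and hold it.
    if not movers and shot.audio_peak_db < silence_db:
        return {"mode": "hold_frame", "rerender": []}
    # Otherwise render the static plate once and re-render only the movers per frame.
    return {"mode": "partial", "rerender": movers}

# Example: a dialogue shot with a locked-off camera and one talking character.
shot = Shot(camera_moves=False,
            objects=[ObjectTrack("background", False), ObjectTrack("character_a", True)],
            audio_peak_db=-12.0)
print(plan_render(shot))   # {'mode': 'partial', 'rerender': ['character_a']}

The point of the sketch is that the expensive per-pixel work is never touched until the cheap true/false questions have narrowed down what can possibly change.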
-
Here's more technology that makes Screen2Gif an essential addition to the toolbox: Cinemagraph.

Cinemagraphing is what is generally seen when a sequence of images has large portions of the image area that never change and only a small area (or areas) that do. Screen2Gif can identify those areas via paint/pen or by shape (i.e. drawing a rectangle) so that only those areas change across the entire sequence.

The idea of cinemagraphing relates to rendering in an interesting way, and yet one that is not leveraged by any standard renderer I am aware of: renderers are not generally smart enough to know out of the gate which pixels will remain the same throughout a sequence and which will change. Renderers do figure this out, of course, over time and after considerable calculation, but think for a moment about the potential here: render only one frame to cover the majority of the 'render space' and you save considerable time by not re-rendering those same pixels on every other frame. This is closely akin to compositing, where only the elements needed are rendered and stacks of images are overlaid. So a potential time-saving workflow begins to emerge: render only the moving parts of the image, render the static elements once and place that as frame 0, then create a mask (a la cinemagraph) that projects frame 0's pixels through the remainder of the sequence.

At any rate, the Cinemagraph feature might be quite useful to anyone who wants to make a simple animated image where only parts of the image change. It's features like this that have me adding Screen2Gif to my top 10 list of programs and utilities for every artist and animator.
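Here is a rough sketch (in Python with NumPy, assuming the frames are already available as arrays) of the masking idea described above: find the pixels that never change, keep one copy of them as a frame 0 plate, and store only the changing pixels for the rest of the sequence. The function names and the tiny test sequence are made up for illustration; this is not how Screen2Gif or any particular renderer implements it.

# Find the pixels that never change across a sequence, keep one copy of them
# as a "plate", and store only the changing pixels per frame.
import numpy as np

def split_static_and_moving(frames: np.ndarray, tolerance: int = 0):
    """frames: array of shape (num_frames, height, width, channels), dtype uint8."""
    # A pixel is static if it never deviates from frame 0 by more than `tolerance`.
    diffs = np.abs(frames.astype(np.int16) - frames[0].astype(np.int16))
    static_mask = (diffs.max(axis=0).max(axis=-1) <= tolerance)   # (H, W) booleans
    plate = frames[0].copy()                                      # rendered once
    return plate, static_mask

def rebuild_frame(plate, static_mask, moving_pixels):
    """Composite one frame: plate where static, per-frame pixels where moving."""
    frame = plate.copy()
    frame[~static_mask] = moving_pixels
    return frame

# Usage with a tiny synthetic 3-frame sequence (2x2 pixels, RGB):
frames = np.zeros((3, 2, 2, 3), dtype=np.uint8)
frames[:, 0, 0] = [255, 0, 0]          # top-left pixel stays red in every frame
frames[1, 1, 1] = [0, 255, 0]          # bottom-right pixel changes on frame 1
plate, mask = split_static_and_moving(frames)
print(mask)   # True where the pixel never changes
print(np.array_equal(rebuild_frame(plate, mask, frames[1][~mask]), frames[1]))  # True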
-
Screen2Gif can capture at anywhere between 1 and 60 frames per second. Now I say that, but I haven't tested whether any frames are dropped in the capturing process. It APPEARS that it does in fact capture at 60 frames per second. Capturing at 1 FPS might be useful for stop motion style capturing, and 1 FPS might also be a useful rate for line testing recordings (and stop motion) when capturing imagery via webcam.

Aside: Screen capture is one of the enablers making slow rendering of sequences to file a thing of the past. If what we see on the screen is what we want to present to an audience, it may be enough to simply capture and redistribute the on-screen playback. That is one of a group of enablers within grasp that change the rendering paradigm (from the user perspective). Another is the fact that what we are seeing on screen is (for all intents and purposes) wasted energy, especially where what we've already seen on that screen is what we ultimately want to 'capture'. In a perfect world that imagery was automatically captured and readily available for review, sharing or extraction. 'Final rendering' is only required when targeting specific external applications. The holy grail of 'rendering', in my estimation, is forgoing 'rasterization' to the greatest extent possible in order to work directly with splines and patches. The irony that I requested... and Steffen graciously implemented... several additions to v19 that could be seen as encouraging users to stick with the current paradigm of reliance on 'rendering' is not lost on me.
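As a follow-up to the dropped-frames question above, here is one quick way to check the effective capture rate after the fact, assuming the capture was saved as an animated GIF: read each frame's stored duration with Pillow and compare the implied rate with the rate that was requested. The file name is hypothetical.

# Estimate the effective frame rate of an animated GIF from its per-frame durations.
from PIL import Image, ImageSequence

def effective_fps(gif_path: str) -> float:
    with Image.open(gif_path) as gif:
        # Each GIF frame stores its display duration in milliseconds.
        durations = [frame.info.get("duration", 0) for frame in ImageSequence.Iterator(gif)]
    total_ms = sum(durations)
    return len(durations) / (total_ms / 1000.0) if total_ms else 0.0

print(effective_fps("capture.gif"))   # hypothetical file path

If a capture requested at 60 FPS reports substantially less here, frames were dropped (or the recorder quietly fell back to a slower rate under load).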
-
I find the fact that folks are now vacationing in Cuba very interesting. Sounds like you had a great time. That image is a nice variation on the dual heads facing each other motif/illusion. It'd be neat to import that as separate parts via the new SVG importer in v19.
-
Nicely done intro, Gerald. That's very polished looking. And congrats on your 50th episode of Continuum Gaming!
-
Hmmmm. No, can't say I've considered the frame rate of recording. I'll have to include that in my considerations. I was under the impression that most recorders captured at least 24 frames per second, but your question makes me realize I need to dig deeper. It appears that OBS has the following video frame rate settings:

- 10
- 20
- 24 NTSC
- 29.97
- 30
- 48
- 59.94
- 60

Those appear to be hard coded as options, but I'll guess there is some way to customize further. OBS is my first recommendation (after Camtasia) for screen recording; that is to say, the first pick in the free software category. Camtasia at $299 per seat is well worth that price, but it is understandable that folks might balk at that payment. Active Presentation is a very close second (in the free category) as it is nearly the equivalent of Camtasia but doesn't quite hit the mark in all categories.

At a quick look back (from memory) the following strike a chord as worthwhile applications (in order of preference):

- Camtasia
- Open Broadcast Studio
- Active Presentation
- Screen2Gif
- Screencast-o-matic (there are a host of similar online screen capture tools but S-o-m appears to lead the pack)

Hypercam and CamStudio appear somewhere near here. There are many, many other screen capturing options and more appear every day. I note that Adobe has a program called Captivate... it doesn't appear to be part of the Creative Cloud suite and I've yet to try it. From Techsmith, the creators of Camtasia, come Jing and Snagit (Jing has both free and paid services, while Snagit, which was once only for still image capture, has been gaining video capture capability). On the Mac side, video recording via QuickTime (?) appears to come standard with current releases. I can't speak to Mac-only applications, but more power to them. I feel confident it is only a matter of time before screen recording is standard on all operating systems. This is both good and bad, but mostly inevitable.

Aside: In the still image capture category I don't care for anything currently available. Nothing really beats the Print Screen key that has long been available in any copy of Windows for ease of still frame screen capture. Noteworthy exception: Animation:Master might be noted for its unique ability to capture with an alpha channel intact. I don't know of any screen capturing program other than A:M with that capability. The key here might be to turn off the grid in the modeling window to avoid capturing the lines of the grid.
-
Slowly... ever so slowly.... The Animation Collaborative is sharing a few resources. There isn't a lot to see here, but here are a few short videos from approx. two years ago... xhttps://vimeo.com/animationcollaborative
-
There are a ton of video capture programs out there and here's yet another one (PC only... sorry Mac guys/gals).

http://www.screentogif.com/

Source code is available as well. This is definitely a program to check out because it has most of the tools that standard screen capture software doesn't have. A few quick pluses:

- Capture to GIF animation, image sequences, AVI (MP4 and other formats via the FFmpeg tab)
- A basic frame editor
- Basic drawing on screen (or board recording)
- Webcam recording
- Lots of other stuff I'm forgetting (such as text, cropping, basic transitions, etc.)
- It's free (donations to the creator welcome!)
- Actively developed (last update was 10 days ago)

The downside I saw was that the program seemed to operate pretty slowly. I haven't had much time to look into that. The program does require .NET Framework 4.6.1 to be installed (which I believe the program's installer will install for you). Either it does, or I already had it installed on my system, because it ran fine after install.
-
It could be used in addition to... but not exclusively for webcam/streaming. I haven't looked into the full paid version and any additional features that might be reserved for that. I mainly look for tools that everyone can use.

I don't think he's parted ways with them so much as embraced the in-person instructional sessions they (the crew at Animation Collaborative) hold on site rather than online. He doesn't tour as much as some of his teaching buddies. For what it's worth, I perceive that the folks at Animation Collaborative are trying to move toward online training but haven't yet created the necessary infrastructure nor found the ideal platform. Specifically, I'm talking about the sharing of lectures, demonstrations, etc. that are taped by them but only available for viewing in their facility. If they could find a way to release those resources without them instantly being shared with everyone in the world, I think they'd do it.

It is interesting to note that most (many?) of the animation schools were headed/staffed by instructors who taught a class or two at Animation Mentor but wanted to explore a different approach. Several specifically stated the high cost of training as a reason for offering their own courses; Mike Gallagher (Animschool) and Jason Ryan (iAnimate) in particular have tried to open the doors a little wider and in turn have encouraged Animation Mentor to consider additional training options. But most of this is very old news. Relating to the topic of collaboration tools, I note that at least one of the creators of RGBnotes (Eriks Vitolins) is a current instructor at Animschool.
-
Yes: Animschool, The Animation Collaborative (where Victor Navone has been teaching), Academy of Art, Ringling School of Arts, as well as many others. Of course, that isn't to say they don't use other tools as well, but I'm sure they standardize on one product as much as possible. It does make sense that each of these schools would gravitate toward their own form of collaboration and review software. Artella, for instance, is attached to Animation Mentor. Added: I see that RGBNotes also claims AnimSchool as a client. I suppose they could be using both... letting instructors use the tools they prefer. Or... perhaps Animschool has recently shifted emphasis from one to the other.
-
Yet another online collaboration and review tool has appeared. The drawing tools are basic but adequate to communicate the reviewer's intent. The free tier should satisfy basic requirements for a quick review but likely won't meet the needs of projects in full production. It's interesting to see where these online collaborative tools are heading. Worth checking out.

https://syncsketch.com/#features

I've lost track of similar products in circulation, but a few of those include:

- RGBNotes
- Frame.io
- Artella

While these services all have much to offer, I think they may all be surpassed and eclipsed by what I'm seeing referred to as Web 3.0 (with a little of what is called edge computing thrown in for good measure). This looks to bring peer-to-peer sharing of resources to everyone who owns a personal computing device and brings real-time collaboration to everyone's desktop/browser by largely cutting out the middle man in the process. The edge computing part would be that of services targeting the user at the time and place where they require the service or product. The peer-to-peer aspect might best be demonstrated by a project (currently in beta) called Keybase. With a setup such as Keybase, review of projects (say, with A:M) would be made easier because collaborators could be working in real time on iterations of the same project.
-
That's exciting news. Like a phoenix from the ashes!
-
Definitely looks interesting. I need to keep PixPlant in mind for realistically rendered projects. The call to realism is ever present, but for the most part I continue to resist it. I have never liked long render times, and targeting realism easily multiplies the time it takes to complete a task by five (a conservative estimate). Refining my workflow would certainly help, but as of yet I'm not that organized.
-
Thanks for the feedback guys. I'm excited by the stats because they validate that our forum is meeting the needs of its members. Of course there is always room for improvement.

Hardly! The list continues downward; Papa Bear made the first page! So you and anyone who sees themselves represented should consider themselves as being near the top. To say it another way, consider that if the list had every forum and subforum listed it would have well over one hundred entries, and that is a conservative estimate. If we include the forums that are direct links to other areas (such as the Quick Links at the bottom of the main forum) there are technically over 400 forums.

I find the SDK's placement very telling. People genuinely are interested in contributing to the A:M ecosystem, building bridges, enhancing features, etc. Of course, if one considers views in isolation we might get a false impression. A certain number of views are generated simply by someone posting into a topic. Subsequent views are largely other forum members doing their due diligence, checking out the new post to see if (or how) it might apply to their current or anticipated interests. One must take care not to create a direct correlation where none might exist. I realized this a few years ago when I noticed that some topic or forum I was posting in was getting a lot of views, but as I considered it I realized that the views were mostly being generated by the fact that I was posting new content for others to view. Taking this into consideration, we might have to weight these stats a little to account for new releases.

In the case of the SDK forum I have no doubt that some of the popularity directly relates to the fact that some guy named Nemyax was diligently posting updates about work on a plugin that enabled Blender to A:M (and vice versa). Even if a post is of no direct interest to a forum member, that member might check it out just to keep up with what is going on. I recall a lot of interest in Malo's postings over the past few years also. The subject matter was genuinely interesting... relevant... even to those who will never open the SDK or write a single line of code. We should also account for the fact that I personally read every post, so the view count (per topic) should probably be discounted by one view for every post it contains. (A rough sketch of that weighting follows below.)

I'm not trying to bring lofty thoughts of relevance down, but am trying to keep our collective expectations grounded. We aren't a huge group of people, but every forum member is important. I must say, though, that I'm pretty pumped that if we were to compare this forum to one like Blender's, going back to their opening (which is largely the same timeframe), the number of posts and views here in the A:M Forum is considerably higher. This shouldn't be the case considering the vast number of users that Blender is reported to have in tow. This also isn't an entirely full picture, of course, as there are several Blender forums and I'm only considering the primary one.

All this to say, keep up the great work. You may think that no one cares or that no one is reading your post but... people are watching!
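For what it's worth, here is the back-of-the-envelope weighting I have in mind, written out in Python. The one-view-per-post discount comes from the reasoning above (I read every post); the extra regular_readers term is my own assumption about due-diligence views, not measured data.

# Discount a topic's raw view count by views that happen almost regardless of
# genuine interest: my own read of every post, plus an assumed number of
# "due diligence" views per post from regulars. The numbers are assumptions.
def adjusted_views(raw_views: int, num_posts: int, regular_readers: int = 1) -> int:
    baseline = num_posts * (1 + regular_readers)
    return max(raw_views - baseline, 0)

print(adjusted_views(raw_views=1200, num_posts=40))   # 1120 "interest" views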
-
Welcome back Jason! You've been missed too!
-
Very cute. The irony that I'm over in another topic posting about bullet holes and flesh wounds while you are upping the ante on niceness is not lost on me. We need balance, so keep it up!
-
Not at all. I thought of making a few variations on this theme that demonstrate different approaches, and (as always) the situation will dictate which approaches work best. Some of those variations might be:

- Make the tears in the pants more like grazing cuts than holes. The thought behind the current holes was that the skin isn't penetrated by the bullet because, as everyone knows... werewolves are only vulnerable to silver bullets. So the scenario would be: the werewolf gets shot multiple times, where holes in pants and shirt demonstrate his invulnerability to bullets. Werewolf snarls in defiance. Camera shifts to girl with gun, who places a silver bullet into the gun, then smiles. Camera back to the werewolf, who receives a shot to the chest which has an obvious effect.

- Set up a scene where holes are shot out of a brick or concrete wall (I've had many scenes like this show up in my artwork over the years and have tested this out a little in 3D). In many cases with walls, the ideal approach to creating bullet holes would be to use Boolean Cutters, where the models used to remove the wall are textured spheres or similar shapes that texture the wall even as they cut out parts of it. Additional debris would then be introduced to suggest parts of the wall fragmenting.

A lot will depend on the style we would be aiming for. But back to the question. Perhaps the easiest way to add a sense of torn flesh or holes in the legs would be to apply a decal to those 'squib' models; a short series of animated images might work best and allow for dialing in different effects. Similar approaches would be used to create blood splatter, etc., although the blood splatter would of course be coming out of the back of the leg. Small entry point. Large exit wound. Etc. (Ah, the joys of ballistics.)
-
I left off plugging Aaron's Art Tips for a couple of reasons, the primary one being that I didn't want to be seen as trying to sell products. BUT the fact remains that Aaron's online presence is an essential stop for animators. Aaron has added a lot of videos and courses since I last posted a link to him. Of possible interest, he offers an all-you-can-eat annual membership that grants access to everything he has released. (Disclaimer: I haven't yet opted for that.) Of late, Aaron has started twice-weekly Live Sessions (one via Facebook and one via YouTube) where he covers a variety of art and animation related techniques. Here's a relatively old video on lipsync (which I don't think I posted before). It's quick, interesting and entertaining. xhttps://www.youtube.com/watch?v=b8OAlOy6QNU
-
Yes, Kat (Katherine) was definitely working in that style. Here's a link to her Special Projects area: LINK
-
Hmmm... I may not be much help. When I render frames 2 and 3 the toon lines turn out the same. I'm uploading a modified project file that removes a few models/objects that don't appear to be related to the problem. The project should only render frames 2 and 3. Try this project and see if you get the same (that is to say wrong) results. Modified Scene 06C.prj
-
I confess that much of my work tends to avoid the solid computer generated look (read: realistically rendered). This may be because of my focus on a 'comic book' abstracted style with an emphasis on linework over surface and shading. Having said that, when I see the results of the solid rendering style I can't help but like it! The specific area that I like seems to be where the objects take on the look of plastic, but I imagine the same thing could be said about other surfaces. We just happen to be looking at a plastic-looking surface at the moment. It's great that Animation:Master can accommodate so many styles.
-
Awesome results Mark! Truly fantastic. For negative lights I turn the shadows off, or else they produce negative shadows. Not sure if you did that or not. ...and hehe... 'speculation'. hehe... That must be the secret sauce.
-
'Sci Fi High School' by Courtney Bell (known as Paradymx here in the forum) was another (mostly) flat shaded project that had fun characters and great potential. It also had an anime style going for it. http://www.scifihighanimation.webs.com/ The basic premise reminds me of another flat shaded project that you may recall from way, way back in Hash time: 'Nosferatu', which was (I believe) by Tony Lower-Basch (also of Dojo Project fame**). He recompiled the individual weekly episodes into two larger episodes. Great stuff! Download the episodes here: http://www.museoffire.com/Nosferatu/index.html Neither of these are/were as polished as 'Stolen Smells' but they were headed in that direction. **Precursor to the 2001 rig, etc. etc.
-
Davinci Resolve (Non Linear Video Editor/Color Correction)
Here's an overview of the video editing process: xhttps://www.youtube.com/watch?v=XTuvpin_z_M