Admin Rodney Posted May 7, 2017

There are a ton of video capture programs out there and here's yet another one (PC only... sorry Mac guys/gals): http://www.screentogif.com/ Source code is available as well. This is definitely a program to check out because it has most of the tools that standard screen capture software doesn't have. A few quick pluses:
- Capture to GIF animation, image sequences, or AVI (MP4 and other formats via the FFmpeg tab)
- A basic frame editor
- Basic drawing on screen (or board recording)
- Webcam recording
- Lots of other stuff I'm forgetting (such as text, cropping, basic transitions, etc.)
- It's free (donations to the creator welcome!)
- Actively developed (last update was 10 days ago)

The downside I saw was that the program seemed to operate pretty slowly; I haven't had much time to look into that. The program does require .NET Framework 4.6.1 to be installed (which I believe the program's installer will do for you... either it does, or I already had it on my system, because the program ran fine after install).
Hash Fellow robcat2075 Posted May 7, 2017

Any idea which screen capture program has the highest frame rate? Hypercam only goes up to 10 fps.
itsjustme Posted May 7, 2017

> Any idea which screen capture program has the highest frame rate? Hypercam only goes up to 10 fps.

OBS has worked great for me: https://obsproject.com/ It is free, open source, and cross-platform.
Admin Rodney Posted May 8, 2017 (Author)

> Any idea which screen capture program has the highest frame rate? Hypercam only goes up to 10 fps.

Hmmmm. No, can't say I've considered the frame rate of recording. I'll have to include that in my considerations. I was under the impression that most recorders captured at least 24 frames per second, but your question makes me realize I need to dig deeper. It appears that OBS has the following frame rate settings: 10, 20, 24 NTSC, 29.97, 30, 48, 59.94 and 60. Those appear to be hard-coded options, but I'll guess there is some way to customize further. OBS is my first recommendation (after Camtasia) for screen recording; that is to say, the first pick in the free software category. Camtasia at $299 per seat is well worth that price, but it is understandable that folks might balk at that payment. ActivePresenter is a very close second (in the free category), as it is nearly the equivalent of Camtasia but doesn't quite hit the mark in all categories. At a quick look back (from memory) the following strike a chord as worthwhile applications (in order of preference):
- Camtasia
- OBS (Open Broadcaster Software)
- ActivePresenter
- ScreenToGif
- Screencast-o-matic (there are a host of similar online screen capture tools, but S-o-m appears to lead the pack)
- Hypercam and CamStudio appear somewhere near here

There are many, many other screen capturing options, and more appear every day. I note that Adobe has a program called Captivate... it doesn't appear to be part of the Creative Cloud suite and I've yet to try it. From TechSmith, the creators of Camtasia, come Jing and Snagit (Jing has both free and paid services, while Snagit, which was once only for still image capture, has been gaining video capture capability). On the Mac side, video recording via QuickTime (?) appears to come standard with current releases. I can't speak to Mac-only applications, but more power to them.
I feel confident it is only a matter of time before screen recording is standard on all operating systems. This is both good and bad, but mostly inevitable. Aside: in the still-image capture category I don't care for anything currently available. Nothing really beats the Print Screen key, long available in any copy of Windows, for ease of still-frame screen capture. Noteworthy exception: Animation:Master might be noted for its unique ability to capture with an alpha channel intact. I don't know of any screen capturing program other than A:M with that capability. The key here might be to turn off the grid in the modeling window to avoid capturing the grid lines.
Admin Rodney Posted May 8, 2017 (Author)

ScreenToGif can capture anywhere between 1 and 60 frames per second. Now, I say that, but I haven't tested to see whether any frames are dropped in the capturing process. It APPEARS that it does in fact capture at 60 frames per second. Capturing at 1 FPS might be useful for stop motion style capturing; 1 FPS might also be a useful rate for line testing recordings (and stop motion) when capturing imagery via webcam. Aside: screen capture is one of the enablers making slow rendering of sequences to file a thing of the past. If what we see on the screen is what we want to present to an audience, it may be enough to simply capture and redistribute the on-screen playback. That is one outlier of a group of enablers within grasp that change the rendering paradigm (from the user's perspective). Another is the fact that what we are seeing on screen is (for all intents and purposes) wasted energy, especially where what we've already seen on that screen is what we ultimately want to 'capture'. In a perfect world that imagery was automatically captured and readily available for review, sharing or extraction; 'final rendering' would only be required when targeting specific external applications. The holy grail of 'rendering', in my estimation, is forgoing 'rasterization' to the greatest extent possible in order to work directly with splines and patches. The irony that I requested... and Steffen graciously implemented... several additions to v19 that could be seen as encouraging users to stick with the current paradigm of reliance on 'rendering' is not lost on me.
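As a side note, the "are frames being dropped?" question above can be tested empirically. This is a minimal sketch (not anything built into ScreenToGif; the function name and 1.5-period threshold are my own invention): record the timestamp of each captured frame, then count gaps longer than one frame period.

```python
# Hypothetical sketch: given a list of capture timestamps (in seconds) and a
# target FPS, estimate how many frames a recorder actually dropped.
# The function name and logic are assumptions for illustration, not
# ScreenToGif internals.

def dropped_frames(timestamps, target_fps):
    """Count missed frames by measuring gaps between consecutive captures."""
    period = 1.0 / target_fps
    dropped = 0
    for earlier, later in zip(timestamps, timestamps[1:]):
        gap = later - earlier
        # Each extra whole frame period hiding in the gap is one missed frame.
        dropped += max(0, round(gap / period) - 1)
    return dropped

# A nominal 60 fps capture where one gap spans three periods (2 frames missed):
stamps = [0.0, 1 / 60, 2 / 60, 5 / 60, 6 / 60]
print(dropped_frames(stamps, 60))  # → 2
```

If a 60 FPS recording of a one-second clip comes back with far fewer than 60 frames, the recorder is silently dropping frames regardless of what its settings dialog claims.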
Admin Rodney Posted May 8, 2017 (Author)

Here's more technology that makes ScreenToGif an essential addition to the toolbox: Cinemagraph. A cinemagraph is what is generally seen when a sequence of images has large portions of the image area that never change and only a small area (or areas) that do change. ScreenToGif has the ability to identify those areas via paint/pen or by shape (i.e. drawing a rectangle) so that only those areas change across the entire sequence. The idea of cinemagraphing relates to rendering in an interesting way, and yet one that is not leveraged by any standard renderer that I am aware of, in that renderers are not generally smart enough to know out of the gate which pixels will remain the same throughout a sequence and which pixels will change. Renderers do figure this out, of course, over time and after considerable calculation, but think for a moment about the potential here: render only once the pixels that occupy the majority of 'render space', and save considerable time by not re-rendering those same pixels on every other frame. This is closely akin to compositing, where only the elements needed are rendered and then stacks of images are overlaid. So a potential workflow that saves render time begins to emerge: only the moving parts of an image might be rendered first, then a render of the static elements is made and placed as frame 0. A mask (a la cinemagraph) is then created using frame 0 to project all of those pixels through the remainder of the sequence. At any rate, the Cinemagraph feature might be quite useful to anyone who wants to make a simple animated image where only parts of the image change. It's features like this that have me adding ScreenToGif to my top 10 list of programs and utilities for every artist and animator.
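The core of the cinemagraph idea above can be sketched in a few lines. This is a toy illustration (tiny grayscale grids standing in for real frames; the function name is invented), but it shows how a change mask is computed: any pixel that ever differs from frame 0 is "live", and everything else could be rendered once and reused.

```python
# Hypothetical illustration of the cinemagraph mask: given a sequence of
# frames (here small grayscale grids as nested lists), find which pixels
# ever change. Static pixels could then be rendered once and held.

def change_mask(frames):
    """Return a 2-D boolean grid: True where any frame differs from frame 0."""
    first = frames[0]
    rows, cols = len(first), len(first[0])
    return [
        [any(f[r][c] != first[r][c] for f in frames[1:]) for c in range(cols)]
        for r in range(rows)
    ]

# Three 2x3 frames where only the pixel at row 0, column 2 ever changes:
seq = [
    [[0, 0, 1], [5, 5, 5]],
    [[0, 0, 2], [5, 5, 5]],
    [[0, 0, 3], [5, 5, 5]],
]
print(change_mask(seq))  # → [[False, False, True], [False, False, False]]
```

In this toy sequence five of the six pixels never change, so a renderer armed with the mask would only re-render one pixel per frame, which is exactly the savings described above.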
Wildsided Posted May 9, 2017

Not entirely the same thing, but I've (and no doubt others have) gotten a similar result by turning off all the objects that move in a scene and then rendering out everything that doesn't move, then flipping the active and inactive objects so the moving things get rendered separately. Then I just layer them over each other to make a complete scene. It saves the renderer having to render everything for every frame.
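The layering step in this workflow is standard "over" compositing: the background is rendered once, and each frame's foreground pixels (carrying alpha) are blended on top. A minimal sketch of the per-pixel math (values invented for illustration):

```python
# A minimal sketch of the layering described above: a static background
# rendered once, with per-frame foreground pixels composited over it using
# the standard 'over' formula. Pixel values are invented for illustration.

def over(fg, bg):
    """Composite one (r, g, b, a) foreground pixel over a background pixel.

    Alpha is in 0..1; the background is assumed opaque.
    """
    fr, fg_green, fb, fa = fg
    br, bg_green, bb, _ = bg
    blend = lambda f, b: round(f * fa + b * (1 - fa))
    return (blend(fr, br), blend(fg_green, bg_green), blend(fb, bb), 1.0)

background = (100, 100, 100, 1.0)    # rendered once for the whole shot
foreground = (200, 0, 0, 0.5)        # the moving element on this frame

print(over(foreground, background))  # → (150, 50, 50, 1.0)
```

In practice a compositor (or A:M itself) applies this formula to every pixel of every layer, which is cheap next to re-rendering the static geometry on each frame.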
Admin Rodney Posted May 9, 2017 (Author)

> Not entirely the same thing, but I've (and no doubt others) got a similar result by turning off all the objects that move in a scene and then rendering out everything that doesn't move. Then flipping the active and inactive objects so the moving things get rendered separately. Then I just layer them over each other to make a complete scene. Saves the renderer having to render everything for every frame.

Yes, A:M is a highly capable compositor. The waste is in re-rendering frames/pixels that never change. For instance, a 30 frame sequence may have a static background throughout, and yet that background may be rendered again and again for each of the remaining 29 frames. Of course, camera moves (which are frequently used in cinema) often limit a computer's ability to read ahead (anticipate) what will and will not change; with a camera move it can reasonably be expected that -everything- (that is to say, every pixel) will change. So an algorithm that predicts what needs to be rendered might start with the question: does the camera move in any way? If not, then the next stage might be to examine what is moving in the scene. I must assume that some of this predictive processing is going on in any renderer; after all, the final pixels of a frame, and those of the shot/sequence, have to be determined in some way. To speculate further, I'd say that after considering the camera, the next area to examine would be the shot (as opposed to the sequence), in that a shot (strictly speaking) is more likely to contain similarities. Consider for instance a three-shot sequence where Character A says something, Character B reacts, and then a third shot continues the narrative. The frames of Shot 1 (Character A) could be expected to be relatively similar, especially where nothing moves in the scene (no camera pan/zoom/etc., no moving background, minor movement of the character, especially in dialogue)... so...
this leads me to yet another algorithmic consideration: that of audio. After considering the camera, the second test might be to parse the audio, where it will be readily evident where changes occur. I'll leave it at that for now, but I find these considerations interesting. I very recently posted on my blog an older video interview with animator Tissa David, in which she insists on always animating to audio (dialogue or music).** I find this compelling enough to almost suggest our 'rendering algorithm' should begin with the audio, BUT these days I don't see any particular reason why that processing couldn't be run in parallel. In fact, it might be optimal for each 'node' of the algorithm to examine and then track its own element within the frame/shot/sequence, with data related to that element kept at the ready for consultation. Regardless, these initial tests are boolean (true/false) in nature and therefore return their status quickly: the camera either moves or it does not, object1 either moves or it does not, the audio is either silent or it is not, etc. After that initial test, the process knows what it needs to focus on to extract finer detail. And after the initial test to determine state, splines and patches are especially suited for linear or nonlinear 'reading' of the extent of change.

**This requirement to always animate to audio suggests a new look at Richard Williams's suggestion to unplug (i.e. never listen to music while animating): an animator cannot be listening to music if they are already listening to the dialogue or music specifically related to the sequence being animated. Think about this for a moment and let it sink in. While it is relatively easy to animate to a given beat once an audio track is broken down (hence another of Tissa David's requirements: always use an exposure sheet!), there are sure to be elements of audio not captured in that breakdown that convey just what is needed to deliver the personality and performance of the characters in the scene.
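The boolean cascade sketched in the last two posts could be written down as a simple decision function. This is pure speculation made concrete (the function name, the ordering beyond what the posts state, and the return values are all invented for illustration):

```python
# A sketch of the speculative "what needs re-rendering?" cascade described
# above: check the camera first, then moving objects, then the audio track.
# Everything here is illustrative; no real renderer API is implied.

def rerender_plan(camera_moves, moving_objects, audio_changed):
    """Return a coarse plan for which parts of the next frame to re-render."""
    if camera_moves:
        # A camera move invalidates every pixel, so bail out early.
        return "full frame"
    if moving_objects:
        # Only the regions covered by moving objects need fresh pixels.
        return "regions of: " + ", ".join(moving_objects)
    if audio_changed:
        # Dialogue likely means mouth/face movement somewhere in the shot.
        return "re-examine characters near the audio cue"
    return "hold previous frame"

print(rerender_plan(False, ["Character A"], True))  # → regions of: Character A
print(rerender_plan(False, [], False))              # → hold previous frame
```

Each test is cheap and returns immediately, which matches the point above: the expensive fine-grained analysis only runs on whatever the quick boolean checks flag as changed.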