Hash, Inc. - Animation:Master

Rodney
Admin
Posts: 21,575 · Days Won: 109
Everything posted by Rodney

  1. The bones for this 'not a giraffe' are simply placed and not at all designed for precision. I added new child bones on the fly as new parts of the model were added. It's a functional rig, and it makes me want to investigate more of that gap between such simple functioning rigs and what might replace them later as a more fully articulated rig if the simple model were upgraded to a highly detailed one. The jaw bone in particular was an attempt to get just a little bit of movement into those areas to hint that there was an actual mouth there. There is something I really like about this doodling approach to designing a character in that while working through it I start to see those things that would need to be updated and replaced. If diving straight into the more detailed and final model, all that time spent encountering errors would make for a frustrating experience. This approach, by contrast, feels more like drawing and sketching ideas. Not as fast as drawing but with the benefit of actually having a working model as opposed to ideas on paper that still need to be implemented.
  2. So much that could be changed! Regrets? I do wish I had used more cross sections when lathing the neck as it would allow for better placement of his differently colored belly. His mane/back scales need more detail/definition. Perhaps a sense of bones/webbing. Mouth. Need mouth. Nostrils. If the entire eyes were patch images or decals I probably could get away with more there and have better control of the look/feel. Why do I always resist just modeling the stuff in the first place? I wanted to have his horns be more turned but perhaps for this guy these are the best? More ornate horns might be reserved for other dragons this guy would encounter. Some additional deformation/detail on the cheeks and chin would suggest scales/hair. Ears... not very subject to gravity in this iteration. Those might be some of the obvious areas to work on. What else?
  3. I was lathing a shape and thought I could easily turn it into a giraffe's head... A few hours later this dragon thingy appeared: The motion blur... a bit thick no? I thought I'd be lazy and make the eyelids be patch images that turn on/off. That was probably more work than it should have been if I would have just made actual eyelids. It kinda worked though. Had fun with a few things that won't be particularly apparent such as having the color of the horns change so that the lightest/grayest horn is always in the back (presumably aiding in giving a sense of depth). This guy technically has no mouth although he does have a jaw bone. He really needs a mouth. I was going for flat shaded and almost got what I was after. I need to explore that more and get that approach into muscle memory. He needs a body no? I thought about faking the reason for not having one by adding ripples of blue to indicate water. Added: Definitely needs eyebrows! It was a fun 3D exploration that reminds me that when doodling especially... there is always something more to tweak!
  4. The consensus seems to be that everyone wants to see a little more of this. It's as if we all instinctively know there is more of this story that hasn't been told. You've set up the scene and run through the performance. (I do like the idea of some resistance or exaggeration such as @fae_alba and @Roger suggest. A happy dance wouldn't be out of the question either eh @Pizza Time?) Now... how about that payoff! @Roger's suggestion of the big sniff is a good one as mentioned. So many things could be done, so this is not so much a suggestion as an exploration. It'd be nice, for instance, if there at the end the camera zoomed in really close to show his face and the flower. What is he looking at there? Is there something on the flower? What is his intention in plucking the flower in the first place? How does that relate to our currently hidden payoff? Ah... the possibilities. All this to say, keep up the great work!
  5. VERY smooth animation Steve! When I see animation as good as this I get an almost uncontrollable urge to want to see the animation curves to peer into its secrets. The animation however speaks for itself. Love the sound effects too!
  6. Not particularly related... A generated depth map from video: DogTurn_vis.mp4 My general sense is that of leveraging these depth maps and such for smoothing, blurring etc. whether applied to 2D or 3D.
  7. Next up (I think I captured a character, but not the character in the rotoscope). Thought this guy's jaw would be easy to rig so I added that.
  8. Figured I'd post some of the modeling I'm doing with @Pizza Time (who doesn't like pizza???) I'll be glad to post the models here but I'm giving them to PT first. In the meantime I'll share some turnarounds and perhaps add some detail about what I did... why I did it and... maybe even why you shouldn't do it that way! (Disclaimer: I don't know any of these characters' names so the names I supply will very likely be random) First up:
  9. Also, the AI and SVG Wizards work in much the same way (SVG being the more accessible format these days). Any complex fonts and symbols not available can be created in other programs and brought in that way.
  10. If you are talking about the Font Wizard then... Right Click in the Modeling Window > Plugins > Wizards > Font Wizard You should be able to use all TrueType fonts installed on your system (not just Arial).
  11. This is not a great example but I think just creating some models (even if only flat shapes) and overlaying those on top of your shot could be the quickest way to success. Then you just place the control points for the laser beams where you want them and animate where you need them to be. All this to suggest the shot will dictate the approach. Doing some simple placement such as in a model overlay, however, will allow precision without the setup becoming overly complicated. Then once placed and animated you can get as fancy with the look and feel of it as you need.
  12. There are so many ways to do lasers. How to do it without making it overly complicated... now there's the question. We could use: Multiple models Material Effects Boolean Cutters (to make laser models appear) Patch images Layers Action Objects Odd approaches such as using Hair Objects on paths Particles! Lights! Let me stop there for a moment because Lights might be the way to go. Why? Because they can easily be placed, animated, turned on/off, brightened, and they can have Lens Flare and effects applied to them. Hmmm...
  13. As for the duplicate lasers/massive firepower... There are always multiple ways to do things. The most direct might be to just use the Duplicate option to quickly duplicate the lasers. Then it might just be a matter of staggering the animation in the Choreography to get them timed and spaced.
  14. The only thing you seem to be missing is Ambiance Color and Ambiance Intensity on the lasers. You could also add glow but for the most part those two will get the job done. They can be animated too, to change color and intensity. The transparency can also make them more (or less) intense. You might want to keep the lasers independent of the spaceships, as then they can move independently and in straight lines, as lasers usually do, and you really only need two keyframes to animate the lasers: starting point and end. You've got this!
  15. Vern! Have been thinking about you and your coding prowess of late while wandering through the Extras CD. Good times. We can merge this new username/account with your old one and set a temp password. I will look into the status of your old account and see what is there to see.
  16. Here's an interesting model I want to remember as there are some ideas attached to it I want to follow up on:

<MODELFILE>
ProductVersion=19.5
Release=19.5 PC
<POSTEFFECTS>
</POSTEFFECTS>
<IMAGES>
InstanceCropStyle=InsteadOfCache
</IMAGES>
<SOUNDS>
</SOUNDS>
<MATERIALS>
</MATERIALS>
<OBJECTS>
<MODEL>
<MESH>
<SPLINE>
262144 0 1 -1.86365604 2.01896167 0 . .
262144 0 2 -2.22603416 -4.08969021 0 . .
262144 0 3 1.5530473 -2.74371624 0 . .
262144 0 4 2.95078969 1.13890147 0 . .
262148 0 5 2.58841228 5.48743343 0 . .
</SPLINE>
<PATCHES>
1 1 5 4 3 2
</PATCHES>
</MESH>
<FileInfo>
Organization=Open Animation Library
Url=newartofanimation.com
Email=rodney.baker@gmail.com
LastModifiedBy=Rodney
</FileInfo>
</MODEL>
</OBJECTS>
<ACTIONS>
</ACTIONS>
<CHOREOGRAPHIES>
</CHOREOGRAPHIES>
FileInfoPos=482
</MODELFILE>

I should not have named the attachment 'Simple5pointer' as it isn't your standard 5 pointer but rather that very interesting exception where one spline can be a 5 point patch. Consider that we could... although perhaps would not want to... place a 5 point patch or, better yet, a valid set of 4 point patches directly over a valid area that could be closed as a 5 point patch. These might not be 'attached' in the classic sense of spline attachment but might be 'glued' to CPs so that the floating patch exactly covers the area. A rendering algorithm would likely smooth the transition between meshes. The thought here is that every extraordinary vertex might receive this type of floating patch, which, because it isn't actually attached with continuity of splines to the main mesh, would not adversely distort the area in any significant way. As these floating patches would be 10 or more quad patches (5*2 being the lowest common denominator) they would not be subject to artifacting on the surface and only the edges would need to be given additional attention.
If the area to be covered by the 5 pointer is larger, the floating patch might consist of 20 four point patches or whatever number is required to meet the needs of the model. The user's focus would be to choose the density of that floating patch if something other than the default is desired. In the case of the attached file, mainly consider the variation we can get in patch coverage with a flat 5 point patch on one single spline, whether peaked, smoothed or both. Compare that then with a similar set of patches covering that same area but not flat. Added: Note the values of the spline and the patch and how simple they are. What this suggests to me is that (in theory) all 5 point patches could have the same spline/patch data. It would just be their orientation and placement that is variable. Simple5pointer.mdl
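As a rough illustration of how simple that spline/patch data is, a small script can pull the control point coordinates out of a <SPLINE> block like the one pasted above. The field layout (flags, unknown field, CP index, then x y z) is assumed from the Simple5pointer model itself, not from any official MDL specification:

```python
# Minimal sketch: extract control point coordinates from an MDL-style
# <SPLINE> block. Field order (flags, ?, cp-index, x, y, z) is assumed
# from the pasted Simple5pointer model, not from a published spec.
def parse_spline_cps(text):
    cps = []
    in_spline = False
    for line in text.splitlines():
        line = line.strip()
        if line == "<SPLINE>":
            in_spline = True
        elif line == "</SPLINE>":
            in_spline = False
        elif in_spline and line:
            parts = line.split()
            idx = int(parts[2])               # CP index
            x, y, z = map(float, parts[3:6])  # coordinates
            cps.append((idx, x, y, z))
    return cps

sample = """<SPLINE>
262144 0 1 -1.86365604 2.01896167 0 . .
262148 0 5 2.58841228 5.48743343 0 . .
</SPLINE>"""
print(parse_spline_cps(sample))
```

With every 5 point patch (in theory) sharing the same local data, a tool like this would only need to recover orientation and placement per instance.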
  17. I like this potential approach, Robert. And I can easily agree that we need to drop all potential candidates if the CPs are found on the same spline. Note that I have no idea how to do this programmatically, of course. With a proper nudge in the right direction though... that's why we are here considering AI usage. Added: I do think we should be able to quickly rule out any spline 'segment' that already has a patch applied to it. We can't apply a 5 pointer to a set of splines that already has a patch applied to it (or at least we don't want to). Or, perhaps better stated, any spline that already has two patches adjacent to it. And isn't there already a plugin that finds 5 point patches?
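The rule-out idea above can be sketched in a few lines. This is purely illustrative (A:M's real internal data structures are unknown to me): treat a patch as a tuple of CP indices, treat each consecutive CP pair as a spline segment, and rule out any segment already bordered by two patches:

```python
from collections import Counter

# Hypothetical representation: a patch is a tuple of CP indices; each
# consecutive pair of CPs (wrapping around) is one spline segment/edge.
def segment_counts(patches):
    counts = Counter()
    for patch in patches:
        for a, b in zip(patch, patch[1:] + patch[:1]):
            counts[frozenset((a, b))] += 1
    return counts

def eligible_segments(patches):
    # A segment already shared by two patches is an interior edge and
    # can't border a new 5 point patch, so drop it from the candidates.
    return {seg for seg, n in segment_counts(patches).items() if n < 2}

# Two quads sharing the edge (2,3): that edge should be ruled out.
patches = [(1, 2, 3, 4), (2, 5, 6, 3)]
assert frozenset((2, 3)) not in eligible_segments(patches)
```

The open boundary edges that survive this filter would then be the only places worth searching for closed 5-CP loops.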
  18. My mind goes back to the idea of odd numbers of control points versus even. Ideally, we want to work with an even number of control points. 1*, 3, 5, 7, 9, etc. are all problematic... primarily because they can't be divided cleanly in two. As discussed elsewhere we can't have a single control point... well we can but it's a bug/oddity. The closest we can get to that 1 CP is when another is exactly in the same place... or... from specific POVs we only see 1 control point because another (or more) are hiding behind that CP due to the angle of view. If even, then we can subdivide... and subdivide and subdivide... to near infinity. We can also duplicate... over and over again... seemingly forever. So a theory off the cuff for me would be that from counting splines and control points we can narrow the scope of not just how many 5 point patches there may be but also where they may be. Isn't one way to identify any open area (5 point patch eligible or otherwise) to examine that area's normal? If there is no surface then there is no normal. Something we have to account for, of course, is that an odd numbered spline times an even number will give us an even result. So we might have to account for pairs of odd-CP-numbered splines, reducing them down to that inevitable one that is truly 'odd'. We can then trace along that odd spline and locate eligible areas for 3, 5, 7, 9 and other odd numbered patches. Of course we are (at least theoretically) only interested in finding 5 point eligibility. In other words, if a model has even one 5 point patch eligible area we could get rid of it by increasing the number of splines by 2 or 4 (or some other even number: 2x2=4, 2x4=8, 2x6=12, etc.). A problem with that is overly dense models, which Animation:Master deals with (when displaying models) by dynamically subdividing meshes so they have just enough detail/smoothness where it counts with respect to the viewer/camera.
Pixar's SubDivs get around part of the problem by doing an immediate step of multiplying the mesh by 2. This ensures that all meshes can at least be divided in half/divided by 2 and therefore are always 'subdivisible'. They still have to deal with extraordinary vertices (3 and 5 point areas etc.) but... lo and behold... those are all related to meshes having that odd man out.
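The even/odd intuition above can be made concrete with a toy sketch (nothing A:M- or Pixar-specific here): an even count halves cleanly, possibly many times over, while an odd count can't be halved at all, and doubling first guarantees at least one clean halving:

```python
def halvings(n):
    """Count how many times n can be divided cleanly in half."""
    count = 0
    while n > 1 and n % 2 == 0:
        n //= 2
        count += 1
    return count

assert halvings(8) == 3      # 8 -> 4 -> 2 -> 1: subdivisible three times
assert halvings(5) == 0      # odd: no clean halving at all
assert halvings(5 * 2) == 1  # doubling first guarantees one halving
```

This is the sense in which the "multiply by 2 first" step makes every mesh subdivisible while leaving the extraordinary (odd) vertices to be handled separately.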
  19. I downloaded a few models, @Darrin S. All that I've downloaded have worked. Thanks for sharing! Edit: Actually... I spoke too soon. I had only downloaded models, not projects. In order to include files such as models in a project you'll first need to embed the models. That can be done via the menu just prior to saving: Project > Embed All. Without that embedding the Project files will be missing external resources. Edit 2: I see that your individual files/models are likely what the projects are looking for. What you might want to do in that case is zip up all the files into a single Zip file and then post that. Then all of the projects and models can be maintained in one single zip file.
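Any archiver will do for the zip-it-all suggestion; as one hedged example (the file patterns and folder layout are assumptions, not anything required by A:M), Python's standard zipfile module can bundle a folder of models and projects into one archive:

```python
import zipfile
from pathlib import Path

def bundle(folder, zip_name, patterns=("*.mdl", "*.prj")):
    """Zip all model/project files in `folder` into one archive so the
    projects and the external models they reference travel together."""
    folder = Path(folder)
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as zf:
        for pattern in patterns:
            for f in sorted(folder.glob(pattern)):
                zf.write(f, arcname=f.name)
    return zip_name
```

Keeping the models at the archive root means the relative references the projects expect are preserved when someone extracts everything into one folder.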
  20. Oooo.... Accidentally found a version of her! Please see attached. Pre-rigged Mum.mdl
  21. Unfortunately, I can't find the model and... the TWO SVN projects site appears to be down. Link: http://project.hash.com/movie/svn/active/ Here's what she looks like if we can find her:
  22. I can think of several likely candidates. The specific one I'm thinking of at the moment though is Woot's mom from 'The Tin Woodman of Oz'. That might be a little more cartoony than you have in mind though.
  23. Here is what we might expect to see in the console window if our project file is set to render frames 0 through 24 (25 images):

This is a batch file running with the following variables:
"F:\runme.bat" "Pool1.rpl" "TheJob" "06/24/2024 12:56 AM" 25 " 0:01:26" "F:\am\renderfolder"

program F:\runme.bat (batch file)
pool Pool1.rpl
job TheJob
time 06/24/2024 12:56 AM
frames 25
elapsedtime 0:01:26
outputfolder F:\am\renderfolder

ffmpeg version N-112991-g081d69b78d-20231215 Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 13.2.0 (crosstool-NG 1.25.0.232_c175b21)
configuration: --prefix=/ffbuild/prefix --pkg-config-flags=--static --pkg-config=pkg-config --cross-prefix=x86_64-w64-mingw32- --arch=x86_64 --target-os=mingw32 --enable-gpl --enable-version3 --disable-debug --enable-shared --disable-static --disable-w32threads --enable-pthreads --enable-iconv --enable-libxml2 --enable-zlib --enable-libfreetype --enable-libfribidi --enable-gmp --enable-lzma --enable-fontconfig --enable-libharfbuzz --enable-libvorbis --enable-opencl --disable-libpulse --enable-libvmaf --disable-libxcb --disable-xlib --enable-amf --enable-libaom --enable-libaribb24 --enable-avisynth --enable-chromaprint --enable-libdav1d --enable-libdavs2 --disable-libfdk-aac --enable-ffnvcodec --enable-cuda-llvm --enable-frei0r --enable-libgme --enable-libkvazaar --enable-libaribcaption --enable-libass --enable-libbluray --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librist --enable-libssh --enable-libtheora --enable-libvpx --enable-libwebp --enable-lv2 --enable-libvpl --enable-openal --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-librav1e --enable-librubberband --enable-schannel --enable-sdl2 --enable-libsoxr --enable-libsrt --enable-libsvtav1 --enable-libtwolame --enable-libuavs3d --disable-libdrm --enable-vaapi --enable-libvidstab --enable-vulkan --enable-libshaderc --enable-libplacebo --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libzimg --enable-libzvbi --extra-cflags=-DLIBTWOLAME_STATIC --extra-cxxflags= --extra-ldflags=-pthread --extra-ldexeflags= --extra-libs=-lgomp --extra-version=20231215
libavutil 58. 33.100 / 58. 33.100
libavcodec 60. 35.100 / 60. 35.100
libavformat 60. 18.100 / 60. 18.100
libavdevice 60. 4.100 / 60. 4.100
libavfilter 9. 14.100 / 9. 14.100
libswscale 7. 6.100 / 7. 6.100
libswresample 4. 13.100 / 4. 13.100
libpostproc 57. 4.100 / 57. 4.100
Input #0, image2, from 'F:\am\renderfolder\image.%04d.png':
  Duration: 00:00:00.83, start: 0.000000, bitrate: N/A
  Stream #0:0: Video: png, rgb48be(pc, gbr/unknown/unknown), 200x200, 30 fps, 30 tbr, 30 tbn
File 'F:\am\renderfolder\output.mp4' already exists. Overwrite? [y/N] y
Stream mapping:
  Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0000023141102c80] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0000023141102c80] profile High, level 1.2, 4:2:0, 8-bit
[libx264 @ 0000023141102c80] 264 - core 164 - H.264/MPEG-4 AVC codec - Copyleft 2003-2023 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'F:\am\renderfolder\output.mp4':
  Metadata:
    encoder : Lavf60.18.100
  Stream #0:0: Video: h264 (avc1 / 0x31637661), yuv420p(tv, progressive), 200x200, q=2-31, 30 fps, 15360 tbn
    Metadata:
      encoder : Lavc60.35.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[out#0/mp4 @ 000002313eec1500] video:28kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 3.984505%
frame= 25 fps=0.0 q=-1.0 Lsize= 29kB time=00:00:00.76 bitrate= 313.7kbits/s speed=16.6x
[libx264 @ 0000023141102c80] frame I:1 Avg QP:23.13 size: 2367
[libx264 @ 0000023141102c80] frame P:13 Avg QP:27.04 size: 1478
[libx264 @ 0000023141102c80] frame B:11 Avg QP:27.13 size: 606
[libx264 @ 0000023141102c80] consecutive B-frames: 28.0% 40.0% 0.0% 32.0%
[libx264 @ 0000023141102c80] mb I I16..4: 34.3% 42.0% 23.7%
[libx264 @ 0000023141102c80] mb P I16..4: 0.6% 7.0% 1.4% P16..4: 22.9% 26.1% 15.0% 0.0% 0.0% skip:26.9%
[libx264 @ 0000023141102c80] mb B I16..4: 0.2% 1.8% 0.2% B16..8: 37.1% 17.4% 5.5% direct: 1.9% skip:35.8% L0:40.8% L1:43.2% BI:16.0%
[libx264 @ 0000023141102c80] 8x8 transform intra:63.2% inter:61.5%
[libx264 @ 0000023141102c80] coded y,uvDC,uvAC intra: 62.8% 63.9% 34.4% inter: 24.8% 18.5% 3.2%
[libx264 @ 0000023141102c80] i16 v,h,dc,p: 72% 1% 18% 8%
[libx264 @ 0000023141102c80] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 9% 21% 10% 4% 7% 3% 6% 9%
[libx264 @ 0000023141102c80] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 24% 16% 23% 6% 7% 10% 5% 3% 7%
[libx264 @ 0000023141102c80] i8c dc,h,v,p: 59% 13% 20% 7%
[libx264 @ 0000023141102c80] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0000023141102c80] ref P L0: 80.0% 15.9% 2.9% 1.1%
[libx264 @ 0000023141102c80] ref B L0: 95.0% 5.0%
[libx264 @ 0000023141102c80] ref B L1: 99.5% 0.5%
[libx264 @ 0000023141102c80] kb/s:271.08
MP4 file created successfully.
Creating zip archive of PNG images
a image.0000.png
a image.0001.png
a image.0002.png
a image.0003.png
a image.0004.png
a image.0005.png
a image.0006.png
a image.0007.png
a image.0008.png
a image.0009.png
a image.0010.png
a image.0011.png
a image.0012.png
a image.0013.png
a image.0014.png
a image.0015.png
a image.0016.png
a image.0017.png
a image.0018.png
a image.0019.png
a image.0020.png
a image.0021.png
a image.0022.png
a image.0023.png
a image.0024.png
Zip archive created successfully: F:\am\renderfolder\imagesTheJob.zip
Deleting original PNG files
PNG images archived and original files deleted
  24. For anyone who wants to lean forward a little, here is a batch script (to be named runme.bat) that targets all PNGs in the target directory to create an MP4 video using FFMPEG. If it succeeds in creating the MP4 file it then zips up the PNG files (using the Windows tar.exe utility). If the zip file is created successfully it then deletes the PNG files.

Disclaimer: This batch file is more complex than it really needs to be, but I started to dive into error checking, which allows for stripping of special characters from the variables passed by Netrender and checks to ensure files are properly created before continuing and, especially, before removing/deleting anything. The 'setlocal enabledelayedexpansion' is new to me as well and presumably allows for the most current value of variables at time of command execution. I need to research more to determine how to strip all special characters out of a variable but it might be better to simply... simplify.

TODO: I would like to master timestamping of files to better facilitate archiving and backups. Gotta deal with those special characters...

@echo off
setlocal enabledelayedexpansion
cls
echo This is a batch file running with the following variables:
echo %0 %1 %2 %3 %4 %5 %6
echo.
set program=%0
set pool=%1
set job=%2
set time=%3
set frames=%4
set elapsedtime=%5
set outputfolder=%6

rem Strip quotes from variables
for %%I in (program pool job time frames elapsedtime outputfolder) do (
    call :stripQuotes %%I !%%I!
)

echo program %program% (batch file)
echo pool %pool%
echo job %job%
echo time %time%
echo frames %frames%
echo elapsedtime %elapsedtime%
echo outputfolder %outputfolder%

REM Run FFMPEG with the dot 4 digit wildcard pattern (image.0000.png)
ffmpeg -framerate 30 -i "%outputfolder%\image.%%04d.png" -c:v libx264 -pix_fmt yuv420p "%outputfolder%\output.mp4"

REM Simplified error checking
if exist "%outputfolder%\output.mp4" (
    echo MP4 file created successfully.

    REM Create a zip archive of all PNG images
    echo Creating zip archive of PNG images
    tar -a -cvf "%outputfolder%\images%job%.zip" -C "%outputfolder%" *.png

    REM Check if the zip file was created successfully
    if exist "%outputfolder%\images%job%.zip" (
        echo Zip archive created successfully: %outputfolder%\images%job%.zip

        REM Optionally delete the original PNG files
        echo Deleting original PNG files
        del "%outputfolder%\*.png"
        echo PNG images archived and original files deleted
    ) else (
        echo Failed to create zip archive: %outputfolder%\images%job%.zip
    )
) else (
    echo Failed to create MP4 file.
)

pause
exit /b

:stripQuotes
set "%1=%~2"
exit /b

This script does expect to find FFMPEG, so if it's not in the Windows environment path it should probably be placed in the target directory or another location where it can be found by the script. We could also hard code the exact path to FFMPEG.
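For anyone who would rather avoid batch quoting rules entirely, here is a hedged Python sketch of the same post-render flow. The paths and the ffmpeg invocation simply mirror the batch file above (ffmpeg is assumed to be on the PATH); this is not a tested Netrender setup. The key idea is the same: only archive and delete the PNGs after the output verifiably exists.

```python
import subprocess
import zipfile
from pathlib import Path

def encode_pngs(output_folder, fps=30):
    """Run ffmpeg on the image.%04d.png sequence (assumes ffmpeg on PATH)."""
    out = Path(output_folder)
    subprocess.run(
        ["ffmpeg", "-framerate", str(fps), "-i", str(out / "image.%04d.png"),
         "-c:v", "libx264", "-pix_fmt", "yuv420p", str(out / "output.mp4")],
        check=True)  # raises if ffmpeg fails, so we never archive prematurely
    return out / "output.mp4"

def archive_and_clean(output_folder, job_name):
    """Zip the rendered PNGs, then delete them only once the archive exists."""
    out = Path(output_folder)
    archive = out / f"images{job_name}.zip"
    pngs = sorted(out.glob("*.png"))
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for png in pngs:
            zf.write(png, arcname=png.name)
    if archive.exists():  # mirror the batch file's "if exist" guard
        for png in pngs:
            png.unlink()
    return archive
```

Because the encode step raises on failure, the archiving step never runs against a job whose MP4 was not produced, which is the same guard the batch file implements with nested `if exist` blocks.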
  25. I want to add some information related to Netrender's Event Commands, which allow us to run batch files and programs after job completion (or frame completion). This process allows use of other programs to post-process output from Animation:Master's Netrender and helps us understand how we can do the same basic process without Netrender as well. There are three basic components of this process:

1. Set up the Event Command in Netrender
2. (Optionally) structure a (textual) batch file to take arguments from Netrender and pass them on to other programs
3. Run the desired programs and processes

This does assume a basic understanding of running Hash Inc's Netrender and how to use a text editor (to create batch files).

Setting up the Event Command in Netrender

Here we've selected Job Completed rendering as our trigger event. Commands added here will be executed upon successful completion of a rendering job. Note the arguments/variables that Netrender can pass to other programs: pool name, job name, time of completion, number of frames rendered, elapsed time of rendering and the output folder files were rendered into. In this example our command will pass all of these arguments to a Windows batch file where we can then use them as needed. Note the specific formatting of the path where the program, in this case a batch file, is identified in quotes with extra backslashes to account for one slash being an escape character. If our location were deeper in the directory structure we would repeat that pattern thusly: "F:\\deep\\deeper\\stilldeeper\\runme.bat" The arguments we will use as variables later are then added after that command: %p %j etc.

The (Optional) Batch File

A Windows batch file is simply a text file with the .bat extension that can be used to run useful commands at the command line. Note that a batch file can be run independently of Netrender, so in many cases we might simply run the batch file ourselves rather than wait for Netrender to run it.
Here we do want to pass information from Netrender to other programs, so we want to take advantage of that capability in batch files. So we open our favorite text editor and create a text file named "runme.bat" (as that is what is referenced by Netrender via the Event Command). In our runme.bat batch file we might create something like the following: I won't explain everything here but the important part is that we are allowing Netrender to run a script that in turn can now run other programs. Here we take the passed arguments from Netrender (the %1, %2, %3 and other arguments) and use them to set up variables we can use elsewhere in our batch file. In this way we can refer to something recognizable like "UseThisVerySpecificVariableName" rather than %8, which we may forget what it references. Note that %0 references the current program being run, which in this case is our batch file. So next we use 'set' to store the arguments from Netrender as recognizable variables. Here I've used the echo command just to display information to the screen; in a real workflow this might be the program we run using those variables. Each time the batch script encounters a variable it uses that variable's value in its place. I've added the pause command at the end to make sure the user has a chance to see everything and acknowledge the information before the program closes. Pause can be given a specific message after the command but by default simply asks for a key to be pressed in order to close the program.

Running the Program (Program Output)

Here we see what our automated batch script has produced. Rather than just output text or information we might prefer to convert an image sequence from PNG images into a GIF animation, or an MP4 video... scale images up/down... run backups... feed the cats... or any crazy little thing we can dream up.
Running some of these useful options, such as using FFMPEG to modify, convert and merge images, video and audio is what we will try to explore next.
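As a minimal sketch of step 2 in Python rather than batch (the argument order simply mirrors the %1 through %6 positions described above; nothing here is Netrender-official), sys.argv gives the same positional arguments, and stripping stray quotes is one line:

```python
import sys

# Positional order assumed to match the batch file's %1..%6 arguments.
FIELDS = ("pool", "job", "time", "frames", "elapsedtime", "outputfolder")

def parse_netrender_args(argv):
    """Map positional arguments to named variables, stripping any
    surrounding double quotes (the batch file's :stripQuotes step)."""
    return {name: value.strip('"') for name, value in zip(FIELDS, argv[1:])}

# argv[0] is the script itself, just as %0 is the batch file.
args = parse_netrender_args(
    ["runme.py", '"Pool1.rpl"', '"TheJob"', '"06/24/2024 12:56 AM"',
     "25", '" 0:01:26"', '"F:\\am\\renderfolder"'])
print(args["job"])  # TheJob
```

Named lookups like args["outputfolder"] are harder to misremember than %6, which is the same readability point made above about batch variables.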