Search the Community
Showing results for 'project'.
-
As I am nearing the end of another animation project I wanted to say thank you to Robert Holmen. During the last several months we have gone through the animation frame by frame. He has used this technique as a teaching tool and I have learned so much. I cannot tell you how many times I have said wow through the process. Anyway, thank you Robert!!
-
Can we get the project file for this?
-
Can I send you, in confidence, the Android project wrapped up in a single .zip file, so you can find what you need to help from there?
-
I downloaded a few models @Darrin S All that I've downloaded have worked. Thanks for sharing! Edit: Actually... I spoke too soon. I had only downloaded models, not projects. In order to include files such as models in a project, you'll first need to embed them. That can be done just prior to saving, via the menu: Project > Embed All. Without that embedding, the project files will be missing external resources. Edit 2: I see that your individual files/models are likely what the projects are looking for. What you might want to do in that case is zip up all the files into a single Zip file and post that. Then all of the projects and models can be maintained in one single zip file.
-
The past year has been right out of the pages of "Mr. Toad's Wild Ride". Got laid off August of '23. Trying to find a like position in the tech industry has been impossible. By this March it really looked like I was going to lose the house and everything else. But I took the painful step of cashing out my 401k to cover expenses and then managed to parlay a prior contact into an hourly consulting project that put myself and another gent to work full time. Now I am a co-founder of a software company building a product and on the investor hunt. Which brings me to here. Since we spun up our corporation my LinkedIn feed has been flooded with cold queries from salespeople. Normally I ignore them. But this morning one came through that got my interest. For some context, one of my challenges with my business is putting together demo, training, and pitch videos for our software. I have been experimenting with a whiteboard video, but in the back of my mind I always think of how A:M would be a better way. The sales message I got this morning was for this company (inovitagency.com). So my question is this: what are your thoughts on A:M fitting into this market? This company isn't the only one out there doing this; they are the 3rd that have pitched to me in the last 6 months. There is an obvious need (I for one need it!). So can A:M be used efficiently to fulfill and build a pipeline of short contracts?
-
The final step in the process is to export this animation data to the character. I first bake the animation in the choreography, one keyframe per frame to maintain accuracy at this step, then export the choreography action file. Next, I needed a way to convert this data to a CSV file containing just the position-over-time data for the one axis of each bone I want to pass on to the animatronics. Fortunately, at the time we did this project I had someone on staff who knew a little C++, and I had him code me an app. This app imports A:M .act files and allows me to select the bones I want to translate, define the axis and ranges, and even remap the values, finally saving a CSV that I can bring into the animatronic controller animation software.

On this show we were using a program called Conductor. It is the same software we used in the early days, and as I mentioned, it does not visualize any animation; it just stores position data on a timeline. I can import the CSV here, and this tool does a great job of cleaning up the extra frames, reducing them to a manageable number of points without altering the shape of the curve or affecting the acceleration. This application becomes my second verification that I have stayed under my speed and acceleration limits, because it has built in a meter and warning function similar to the one I created in A:M.

In the end, the file from this app is what is saved to the character on an SD card. A master show control system for the venue simply sends a trigger signal to the character to run the animation.
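The conversion step described above could be sketched roughly like this. This is purely illustrative: the .act parsing is omitted (the format belongs to A:M), and the function names, column names, and value ranges are all hypothetical stand-ins for what the custom C++ app does.

```python
import csv

def remap(value, src_min, src_max, dst_min, dst_max):
    """Linearly remap a value from one range to another, e.g. degrees
    of bone rotation to the controller's position units."""
    t = (value - src_min) / (src_max - src_min)
    return dst_min + t * (dst_max - dst_min)

def export_bone_axis_csv(samples, path, src_range=(-45.0, 45.0),
                         dst_range=(0.0, 255.0)):
    """samples: list of (frame, value) pairs for one bone axis,
    e.g. extracted from a baked choreography action (one key per frame).
    Writes a two-column CSV a controller package could import."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "position"])
        for frame, value in samples:
            writer.writerow([frame, round(remap(value, *src_range, *dst_range), 3)])

# Example: a 30-frame swing from -45 to +45 degrees on one axis
samples = [(i, -45.0 + 90.0 * i / 29) for i in range(30)]
export_bone_axis_csv(samples, "jaw_rotate_x.csv")
```

The remapping is the interesting part: it lets one slider range in the animation drive whatever numeric range the animatronic hardware expects.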
-
The frog has the most features of all the characters. It is also the only character that used any IK-type rigging; all the other characters use FK through pose sliders. For the eyes, I wanted the environment to do some of the animating for me. I built a representation of the entire location in the project at full scale, and in my rig for the eyes I created a null that is the focal point of the eyes. Using poses, I could place this null at the various locations around the lake where the audience is present. Then, fading between pose sliders, I can direct his gaze around the lake regardless of where the head is positioned. This adds a complexity to the animation that nobody is even consciously aware of. With the head moving, rotating, and bobbing to the music, the eyes can stay fixed on your position while you are sitting in the Steakhouse at the edge of the lake. Good restaurant, by the way.
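The fade-between-poses trick amounts to a weighted blend of look-at targets. A minimal sketch of the underlying math (the coordinates and names are made up; A:M's null constraints do this for you):

```python
import math

def blend_targets(a, b, t):
    """Linearly blend two gaze-target positions; t=0 is fully at a,
    t=1 fully at b -- analogous to cross-fading two pose sliders."""
    return tuple(av + t * (bv - av) for av, bv in zip(a, b))

def gaze_direction(eye_pos, target):
    """Unit vector from the eye toward the blended target, independent
    of whatever the head itself is doing."""
    d = tuple(tv - ev for ev, tv in zip(eye_pos, target))
    length = math.sqrt(sum(c * c for c in d))
    return tuple(c / length for c in d)

# Hypothetical audience spots around the lake, in scene units
steakhouse = (120.0, 5.0, -40.0)
boardwalk = (-80.0, 5.0, 60.0)

# Halfway through a fade, the eyes aim at the midpoint of the two spots
target = blend_targets(steakhouse, boardwalk, 0.5)
print(gaze_direction((0.0, 10.0, 0.0), target))
```

Because the blend happens on the target null rather than on the eye bones, the gaze stays locked on an audience position no matter how the head animates underneath it.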
-
The first character we looked at was a giant toucan. We knew we wanted this to be an extremely dynamic figure, so to determine what kind of speed and acceleration limits we would need to design within, I made a quick animation using a model of mostly 2D cut-outs. Even though it was not necessary for the info we were gathering at this step, I still modeled at full scale. All the axes of motion were placed at the approximate real locations we had determined in our napkin sketches during the initial conversations. This same project is then what everything else was built on top of. All of it was created in a few hours. The great thing about the workflow in A:M is how easy it is to build a model that is made of almost nothing but the bones, animate it, come back to refine the sculpted model, and even move the axes around later while still utilizing the same animation data. I can almost work backward if needed. There is very little time spent on the front end before I can work on movement. Lady Birds_test-01-sml.mp4 Tucan-Demo04_sml.mp4
-
Unfortunately, I can't find the model and... the TWO SVN projects site appears to be down. Link: http://project.hash.com/movie/svn/active/ Here's what she looks like if we can find her:
-
That got me thinking about how music fits in but too much for now... The navigation through the Hyperloop Intersection kind of got me thinking about music and it really almost unlocked the first 8 letters of the alphabet as you understand it. You are navigating from bottom to top at first... then right out to exit...noticing future exits to be built...then re-enter like Pac-Man from the left, but go left...back north and into the same loop as before. 11 sequences of 12 seconds total. Save #12 for the letter L for loop or move on. It gets quite funny... but loop (route) 'B' is just like 'G'.... I gets skipped cause I is the intro ... oops you just clicked on an InstantApp idea... Feel free to exit and we're gone off of ... like it never happened. And it loops until you click then ... well right now you see what I am drawing in A:M (that's another one)... like shown previously, the first image sequence (should be 12s) shows Route (loop) C(2). 2 because 1 is go straight , 2 is R, 3 is L. The second sequence is Route B. That's where for now we just demonstrate that useful buttons are to come. The artsy sketch with the sign shows a concept of seeing an ad first but obviously you should be going from say Las Vegas area to the East at that point if you see the sign on the left, but thankfully that's going way too far, just sayin.. Anyway, that takes us through the intro (1), and Hyperloop Intersection (10), and we could logically keep looping... and we do....because I think there's one more sequence we need to get A.I.'s input and maybe help with the Alphabet Song...again, going too far... but AI can't so no...the perfect audience for this.. wait! There's actually 8 new potential routes... but the master sequence should always show logically where you are at... so you know which files to swap in (previously called sprite sheets, but all reference to "game" is taken out of the program/project ...) such as 12 second music clips! 
but more like replacing 12x12-frame image sheets... sprite sheets, which is 12 seconds... it could just be a background or the whole thing prepackaged... I fill a lot in with plain black images that don't render... can't do it all... but I need to see that loop... in my head... then... you can see the frames can show anything you want... I think of it like 12-second music video previews that can go 24 seconds... 36 seconds... oh! they clicked off!!! get the next one...! This is a 120-second loop, if my math is correct. musac3.mp3
-
-
I was working on this and was about to just put it on my website, but thought I'd test things out here first and maybe slow down a bit and build a better site. It's an intersection concept for the so-called Hyperloop, which usually looks like just a straight shot through a tube. There's the question of propulsion, and I figure pneumatic is something to consider... like a pellet gun, or blowing a toothpick through a straw... but sustaining the pressure flow becomes complex. A fan doesn't work because the "train" runs into it... so more thought on that... And just as I was putting it on the website, a headline popped up that a company won a lawsuit against the Play Store monopoly, and I remembered that I forgot some things. Like simplifying the app, which works very smoothly now... but I can feel people running away if I say "programming" or anything related. The best way I see to move it forward is as an open-source Android project... it's just a folder of files that is best opened in Android Studio (I know, but it's not as hard as before), and then you can figure out where to put your assembled 12-second "sprite sheet". Well, let's just wait on that... I'll upload more GIFs because that's just more fun right now!
-
For character rigs, you could try using a pre-made rig like one of these: the 2008 Rig, TSM2 (also easy to set up for quadrupeds, etc.), or the Squetch Rig (Squetchy Sam)... more links in my signature. There is also the 2001 Rig (you'll have to search for it) and the Saucy Rig (I think the link is currently down). There are a lot of tutorials covering most of what you requested in the forums... if something isn't covered or you can't find the answer, ask a question. I'm looking forward to seeing your project!
-
Hello there, I have recently started diving into 3D animation using Animation:Master, and I am looking to optimize my workflow to make the most out of the software. As a newcomer to the 3D animation world, I am excited to learn from those with more experience and refine my approach to ensure efficiency and creativity. Currently, I am working on a short animation project that involves character modeling, rigging, and scene composition. I find rigging quite complex, and sometimes my rigs don't behave as expected during animation. What are some best practices for creating stable and flexible rigs? Are there specific tutorials or resources that you recommend for mastering rigging in Animation:Master?🤔 I am trying to establish a solid workflow, from initial concept to final rendering. How do you usually structure your projects? For instance, how much time do you spend on modeling, rigging, animating, and rendering? Any tips on managing these phases effectively?🤔 I have noticed that rendering can take a significant amount of time. What techniques or settings do you use to balance quality and render time? Are there particular settings in Animation:Master that are essential for achieving high-quality renders without excessively long render times? Also, I have gone through this post, https://forums.hash.com/forum/408-sap-learners/, which definitely helped me out a lot. Also, if there are any community challenges or collaborative projects, I would love to participate and learn from others in real time. Thank you in advance for your help and assistance.😇
-
After doing the 2001 slit-scan project (https://forums.hash.com/topic/53303-2001-slit-scan-effects-simulated-in-am-with-mufoof/) using resurrected 2001 artwork (https://youtu.be/dujQGB-2EXw), I wanted to try some other graphics. The Mandelbrot Set is a wonderful source of interesting imagery. My college roommate, Mike Segor, who also went to that 1968 showing of 2001, had sent me some Mandelbrot work. Grist for the mill. He also contributed music for the final movie. The artwork: John Knoll has a fascinating video about slit-scan techniques (and shows a graphic of the Trumbull 2001 set-up at 5:07): https://vimeo.com/391366006 He discusses various approaches, including circular slits and other similar, symmetrical geometry like the Dr. Who Tardis/police call box/tunnel sequence. For this experiment, I decided to start with the 2001 slit-scan technique and then try circular slits. I did not use the soft-edged slit approach, as most of the rendering on this project was complete before that idea was tested. The movie breaks down into these parts: Vertical slit scan created on left half of screen, flipped and inverted onto right half of screen; Composite=Add.
Has the expected acceleration effect created by moving the slit-screen model element across the artwork during MUFOOF frames and moving the artwork between frames:
Circular slit scan (full-screen effect), artwork movement between frames; has sweep, is pretty, but no dynamic acceleration:
Another circular slit-scan clip overlaid on the first circular slit-scan clip with Composite=Difference; more dynamic because of counter-movement, but no real acceleration:
Circular slit scan with slit-screen movement Right to Left during each MUFOOF frame, artwork movement between frames; has dynamic acceleration but is asymmetrical:
Circular slit scan, slit-screen movement diagonally Lower Left to Upper Right during each MUFOOF frame, artwork movement between frames; has sweep and dynamic acceleration but is also asymmetrical:
I made other attempts at a symmetrical set-up, but they did not give a dynamic acceleration effect:
1. Moving artwork away from camera faster than slit-screen (z-axis), exposing more artwork, in each MUFOOF frame; moving artwork between frames (x-axis).
2. Changing size of circular slit, exposing more artwork, in each MUFOOF frame; moving artwork between frames.
3. Moving circular slit from below centerline to above centerline in each MUFOOF frame; moving artwork between frames. Has the virtue of being symmetrical horizontally, though not vertically:
4. No slit, just an open circle in screen and no movement of screen in MUFOOF frames; moving artwork between frames. Has sweep but no acceleration:
I wanted to try one more effect before giving up on finding a symmetrical set-up that would give dynamic acceleration: rotation of artwork 360 deg around the z-axis during MUFOOF frames, moving artwork left to right between frames, and using the open circle shown above. This idea went down a whole new rabbit hole where I spent several days, but it led to what appears to be a generalized solution for simulating slit-scans that create dynamic motion.
Initial results looked very strange. Then I realized that as the animation progresses, the artwork moves left to right as intended, but the rotation point travels with the artwork and rotates it out of the camera FOV. After thinking about that problem, I decided the solution might be an additional bone to rotate the artwork (which I named Pivot), while the artwork itself is assigned to a child bone of the Pivot bone. That worked, and the result has dynamic acceleration. A still from the final part of the movie:
Here is the movie: https://youtu.be/aD8xUaaGIF4
Here are files for the open-circle scan with the Pivot rotating the artwork 360 deg during each MUFOOF frame, artwork movement Left to Right between frames:
circle-no-slit-scan-MUOOF-backward-mandelbrot1-rotateArt360-sys3.cho
circle-no-slit-art-sys-MUOOF-backward-60sec-rotateArt360-sys3.act
circle-slit-screen-backward-artwork-moveL2R-sys3.act
circle-no-slit-screen-art-mandelbrot1-system03.mdl
circle-no-slit-mandelbrot1-rotateArt360-sys3.pre
Mandelbrot artwork appears above. But wait, there's more. After successfully moving the artwork in relative space using the parent Pivot bone and then moving it in absolute space using the child bone in the second action, I realized this approach could be used for any slit-scan animation, including the original 2001 project. It proved to be a bit more complex than that simple statement. I revised the 2001 slit-screen/artwork model with a new parent bone (Pivot) for the Artwork child bone. Now the slit would be stationary, the artwork would move 6” during MUFOOF frames, and the artwork would be moved an absolute amount between frames. Test renders did not look right. In the old method, the slit-screen element moved and the artwork was stationary. In the new method, the slit-screen is stationary. Where should that slit be located? For simplicity I decided to try changing the camera location on the x-axis.
Eventually I found that moving the camera +3” on the x-axis gave test renders that were substantially similar to the previous 2001 slit-scan results. However, the image did not reach all the way to the center of the screen. Time to modify the model and the MUFOOF action to allow for further experimentation. I added a new bone (System2) to the slit-screen/artwork model to be the parent of all model elements so I could move the whole model during the MUFOOF action. At action start, the model was located at -3” on the x-axis (and the camera was returned to zero in the cho). I experimented with end-of-action values for bone System2; eventually +3” gave the desired result. These are the same values as in the original move of the slit-screen element, except now we are moving the whole model during MUFOOF frames. This model move is analogous to panning the camera slightly during its travel down the track, which Trumbull mentioned. Here are revised 2001 slit-scan files using the new method:
soft-slit-scan-backward-2001artwork-60sec-PivotSystem2.cho
soft-slit-screen-2001artwork-system2.mdl
soft-slit-screen-backward-2001-artwork.act
soft-slit-screen-MUFOOF-backward-60sec-movePivotR2Lsys2L2R.act
soft-slit-artwork01.tga
soft-slit-scan-backward-2001artwork-60sec-PivotSystem2.pre
Here is the movie created using the new method, which is substantially similar to the demo movie of the soft-edged slit-screen in the 2001 post: soft-slit-scan-backward-2001artwork-60sec-PivotSystem2.mov
As the I Ching says, persistence in a righteous course brings reward.
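The core slit-scan idea, accumulating many sub-exposures while the slit sweeps within one output frame and shifting the artwork between frames, can be sketched outside A:M in a few lines. This is purely illustrative of the optical principle, not the MUFOOF set-up itself; the artwork, slit width, and sub-step count are invented for the demo.

```python
import numpy as np

def slit_scan_frame(artwork, art_offset, slit_width=2, substeps=64):
    """Accumulate one output frame: during the 'exposure', a narrow slit
    sweeps across the (grayscale) artwork while the artwork sits shifted
    by art_offset -- loosely mimicking slit-screen motion during MUFOOF
    sub-frames."""
    h, w = artwork.shape
    frame = np.zeros((h, w))
    shifted = np.roll(artwork, art_offset, axis=1)  # artwork moved between frames
    for step in range(substeps):
        slit_x = int(step / substeps * w)  # slit position for this sub-exposure
        frame[:, slit_x:slit_x + slit_width] += shifted[:, slit_x:slit_x + slit_width]
    return frame

# Hypothetical artwork: a horizontal intensity ramp, 64x128 pixels
art = np.tile(np.linspace(0.0, 1.0, 128), (64, 1))

# Shift the artwork between output frames, as in the set-ups described above
frames = [slit_scan_frame(art, offset) for offset in range(0, 32, 8)]
print(frames[0].shape)  # (64, 128)
```

Varying how far the slit or the artwork moves per sub-step is what produces the dynamic acceleration streaks the posts above are chasing.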
-
- mandelbrot
- slit-scan
-
(and 2 more)
Tagged with:
-
Rob, I imported your head model into my project just to see. I had to re-set settings, color, etc., but it won't let me increase density above 100%. Leaving for a gig; I'll check when I get back.
-
I watched the tech talk video. Also found an old Matt Campbell PDF tute in my files. Could you possibly zip the above project for me to download for study? That's the hair I'm after. (Or show screen shots of the Material and emitter settings)
-
Here is what we might expect to see in the console window if our project file is set to render 24 frames. This is a batch file running with the following variables:

"F:\runme.bat" "Pool1.rpl" "TheJob" "06/24/2024 12:56 AM" 25 " 0:01:26" "F:\am\renderfolder"

program F:\runme.bat (batch file)
pool Pool1.rpl
job TheJob
time 06/24/2024 12:56 AM
frames 25
elapsedtime 0:01:26
outputfolder F:\am\renderfolder

ffmpeg version N-112991-g081d69b78d-20231215 Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 13.2.0 (crosstool-NG 1.25.0.232_c175b21)
configuration: --prefix=/ffbuild/prefix --pkg-config-flags=--static --pkg-config=pkg-config --cross-prefix=x86_64-w64-mingw32- --arch=x86_64 --target-os=mingw32 --enable-gpl --enable-version3 --disable-debug --enable-shared --disable-static --disable-w32threads --enable-pthreads --enable-iconv --enable-libxml2 --enable-zlib --enable-libfreetype --enable-libfribidi --enable-gmp --enable-lzma --enable-fontconfig --enable-libharfbuzz --enable-libvorbis --enable-opencl --disable-libpulse --enable-libvmaf --disable-libxcb --disable-xlib --enable-amf --enable-libaom --enable-libaribb24 --enable-avisynth --enable-chromaprint --enable-libdav1d --enable-libdavs2 --disable-libfdk-aac --enable-ffnvcodec --enable-cuda-llvm --enable-frei0r --enable-libgme --enable-libkvazaar --enable-libaribcaption --enable-libass --enable-libbluray --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librist --enable-libssh --enable-libtheora --enable-libvpx --enable-libwebp --enable-lv2 --enable-libvpl --enable-openal --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-librav1e --enable-librubberband --enable-schannel --enable-sdl2 --enable-libsoxr --enable-libsrt --enable-libsvtav1 --enable-libtwolame --enable-libuavs3d --disable-libdrm --enable-vaapi --enable-libvidstab --enable-vulkan --enable-libshaderc --enable-libplacebo --enable-libx264 --enable-libx265
--enable-libxavs2 --enable-libxvid --enable-libzimg --enable-libzvbi --extra-cflags=-DLIBTWOLAME_STATIC --extra-cxxflags= --extra-ldflags=-pthread --extra-ldexeflags= --extra-libs=-lgomp --extra-version=20231215
libavutil 58. 33.100 / 58. 33.100
libavcodec 60. 35.100 / 60. 35.100
libavformat 60. 18.100 / 60. 18.100
libavdevice 60. 4.100 / 60. 4.100
libavfilter 9. 14.100 / 9. 14.100
libswscale 7. 6.100 / 7. 6.100
libswresample 4. 13.100 / 4. 13.100
libpostproc 57. 4.100 / 57. 4.100
Input #0, image2, from 'F:\am\renderfolder\image.%04d.png':
Duration: 00:00:00.83, start: 0.000000, bitrate: N/A
Stream #0:0: Video: png, rgb48be(pc, gbr/unknown/unknown), 200x200, 30 fps, 30 tbr, 30 tbn
File 'F:\am\renderfolder\output.mp4' already exists. Overwrite? [y/N] y
Stream mapping:
Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0000023141102c80] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0000023141102c80] profile High, level 1.2, 4:2:0, 8-bit
[libx264 @ 0000023141102c80] 264 - core 164 - H.264/MPEG-4 AVC codec - Copyleft 2003-2023 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'F:\am\renderfolder\output.mp4':
Metadata:
encoder : Lavf60.18.100
Stream #0:0: Video: h264 (avc1 / 0x31637661), yuv420p(tv, progressive), 200x200, q=2-31, 30 fps, 15360 tbn
Metadata:
encoder : Lavc60.35.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0
vbv_delay: N/A
[out#0/mp4 @ 000002313eec1500] video:28kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 3.984505%
frame= 25 fps=0.0 q=-1.0 Lsize= 29kB time=00:00:00.76 bitrate= 313.7kbits/s speed=16.6x
[libx264 @ 0000023141102c80] frame I:1 Avg QP:23.13 size: 2367
[libx264 @ 0000023141102c80] frame P:13 Avg QP:27.04 size: 1478
[libx264 @ 0000023141102c80] frame B:11 Avg QP:27.13 size: 606
[libx264 @ 0000023141102c80] consecutive B-frames: 28.0% 40.0% 0.0% 32.0%
[libx264 @ 0000023141102c80] mb I I16..4: 34.3% 42.0% 23.7%
[libx264 @ 0000023141102c80] mb P I16..4: 0.6% 7.0% 1.4% P16..4: 22.9% 26.1% 15.0% 0.0% 0.0% skip:26.9%
[libx264 @ 0000023141102c80] mb B I16..4: 0.2% 1.8% 0.2% B16..8: 37.1% 17.4% 5.5% direct: 1.9% skip:35.8% L0:40.8% L1:43.2% BI:16.0%
[libx264 @ 0000023141102c80] 8x8 transform intra:63.2% inter:61.5%
[libx264 @ 0000023141102c80] coded y,uvDC,uvAC intra: 62.8% 63.9% 34.4% inter: 24.8% 18.5% 3.2%
[libx264 @ 0000023141102c80] i16 v,h,dc,p: 72% 1% 18% 8%
[libx264 @ 0000023141102c80] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 9% 21% 10% 4% 7% 3% 6% 9%
[libx264 @ 0000023141102c80] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 24% 16% 23% 6% 7% 10% 5% 3% 7%
[libx264 @ 0000023141102c80] i8c dc,h,v,p: 59% 13% 20% 7%
[libx264 @ 0000023141102c80] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0000023141102c80] ref P L0: 80.0% 15.9% 2.9% 1.1%
[libx264 @ 0000023141102c80] ref B L0: 95.0% 5.0%
[libx264 @ 0000023141102c80] ref B L1: 99.5% 0.5%
[libx264 @ 0000023141102c80] kb/s:271.08
MP4 file created successfully.
Creating zip archive of PNG images
a image.0000.png
a image.0001.png
a image.0002.png
a image.0003.png
a image.0004.png
a image.0005.png
a image.0006.png
a image.0007.png
a image.0008.png
a image.0009.png
a image.0010.png
a image.0011.png
a image.0012.png
a image.0013.png
a image.0014.png
a image.0015.png
a image.0016.png
a image.0017.png
a image.0018.png
a image.0019.png
a image.0020.png
a image.0021.png
a image.0022.png
a image.0023.png
a image.0024.png
Zip archive created successfully: F:\am\renderfolder\imagesTheJob.zip
Deleting original PNG files
PNG images archived and original files deleted
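The steps the log shows (encode the numbered PNG sequence, zip the frames, delete the originals) could also be scripted like this. This is a sketch, not the actual batch file: it assumes ffmpeg is on the PATH, and the function name and fps default are invented, though the file names mirror the log above.

```python
import glob
import os
import subprocess
import zipfile

def finish_render(folder, job_name, fps=30):
    """Post-render pass: encode image.%04d.png to H.264, archive the
    PNGs, then remove them -- mirroring what the batch file's log shows."""
    pngs = sorted(glob.glob(os.path.join(folder, "image.*.png")))
    if not pngs:
        raise FileNotFoundError("no rendered frames found")

    # 1. Encode the numbered sequence to MP4 (yuv420p for broad playback)
    subprocess.run(
        ["ffmpeg", "-y", "-framerate", str(fps),
         "-i", os.path.join(folder, "image.%04d.png"),
         "-c:v", "libx264", "-pix_fmt", "yuv420p",
         os.path.join(folder, "output.mp4")],
        check=True)

    # 2. Zip the source frames
    zip_path = os.path.join(folder, "images%s.zip" % job_name)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as z:
        for png in pngs:
            z.write(png, os.path.basename(png))

    # 3. Delete the originals now that they are archived
    for png in pngs:
        os.remove(png)
    return zip_path
```

Keeping the zip step after a successful encode means the frames are never deleted unless both the MP4 and the archive exist.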
-
That is some great history information. Thank you. I’m surprised that the variety of the .prj scene files wasn’t as big as I expected, but still some of these look interesting. Could you please show me some Shaded/Wireframe screenshots and final renders of “Jumper”, “Moonscape” and “roof top mayhem”? I guess this is what I was mainly thinking about. Well, I know that the majority of the character models from that timeframe were characters from the “Why does the wind blow?” short film, but I guess the rest of the content in that category varied during that time (a lot of the models, and especially the “realistic”, “anime” and “vehicles” categories), right? That’s strange. I even thought the scene with the two dragons (which was seen on the cover of the second edition of the Animation:Master handbook by Jeff Paries) would also be included. This probably means that at the time they were released as separate models without a .prj file? I know I saw the tree model from that scene being used as a bush for an animated project from 1998 before (which might have been created using version 6). Also, what was the list of the motion capture files that were included? And in case you don't feel I'm asking for too much, what did the models that came from the tutorials folder look like, as well as the more refined of the “realistic”, “anime” and “vehicles” models (that weren’t included in the Extras CD) in AM2000? I would like to see screen captures of some in Shaded/Wireframe mode.
-
2001 Slit-Scan Effects Simulated in A:M? With MUFOOF?
fraizer replied to fraizer's topic in Work In Progress / Sweatbox
Thank you, Robert. Thank you, Fuchur. Banding -- a soft slit, that's an excellent idea. I will play with that. Not sure what a render "preset" is, other than the settings contained in the Cho file I posted. (I don't use Projects in my projects, that's why I posted individual files...). Re: EXR and exposure: A few years ago, I was working on a project with Rob Blalack and we discussed using EXR for something we were trying to do, but that's as far as it went; and that is the limit of my understanding of EXR -- very limited. I have found that a broad range of Keylight intensities works, with varying degrees of success; same is true for Pass and Blur values; to a large degree it is a matter of taste. I am very, very curious to know if there is another way to do the slit-scan technique in A:M without using MUFOOF... -
Hello Hashers- At the beginning of the sequence the volumetric beam reaches all the way to the ground plane, but as the project continues (frames 30-60) the beam breaks up and doesn't reach the ground. Can anyone figure out the problem? VolumetricTest.prj
-
I have often wondered if particle emission rates are per patch or per group. Your test project is a good chance to examine this. First, however, I'll note that the two squares in the Chor are not quite equal. DenseMesh is about 100 cm across while SimpleMesh is about 150cm across then scaled down to 62%. Scaling an object will scale the particles it emits so that may explain why Simple has sharper corners than Dense... If I edit the squares so that both are 100 cm across and both 100% scaled in the chor, they are starting to look much more similar... The emission rate in the Sprite Emitter is set to "1000"... It's possible that is so high that an overload of sprites is masking any difference between the two results. I'm going to scale that value down by adjusting the Emission rate in the Sprite System. This value is always a percentage, not a count. Why are there two controls for... the same thing? It is possible to have more than one "Sprite Emitter" as children of the "Sprite System". For example a fire material might have a flame sprite, a smoke sprite and a spark sprite, each with its peculiar settings for many of the parameters we see in "Sprite Emitter". Having these percentage settings in the "Sprite System" lets us uniformly scale the whole effect without needing to edit each emitter. With the emission rates scaled down to 1% we can observe the sprites being born... Frame 0: Frame 5 Frame 10: Frame 15: Frame 20: Frame 25: Even though DenseMesh has 25 times more patches than SimpleMesh, both seem to be putting out an equal number of particles. This is the opposite of what I expected. I thought the number of particles would increase with the number of patches. I thought the lumpy result your original PRJ had for SimpleMesh was because it had fewer sprites to blend together, but it was really because they were scaled smaller and perhaps had less overlap among the sprites. Thanks for inquiring, Tom!
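The two-level control described above, per-emitter rates scaled by a system-wide percentage, amounts to something like this. The names and numbers are illustrative, not A:M's internals:

```python
def effective_rate(system_pct, emitter_rate):
    """The Sprite System percentage uniformly scales every child emitter,
    so one value can dial a whole effect up or down."""
    return emitter_rate * system_pct / 100.0

# A hypothetical fire material with three child emitters
emitters = {"flame": 1000, "smoke": 400, "spark": 150}

# Dial the whole effect down to 1% without touching each emitter
scaled = {name: effective_rate(1.0, rate) for name, rate in emitters.items()}
print(scaled)  # {'flame': 10.0, 'smoke': 4.0, 'spark': 1.5}
```

The design benefit is exactly what the post describes: the relative balance of flame, smoke, and spark is preserved while a single parent setting scales the overall output.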
-
Running a test with the Sprites material, and the question is: does the patch count affect the "look" of the material emissions? Here is a picture of two meshes (project attached); the denser mesh seems to have a smoother-looking cloud... It would seem that the more patches, the "busier" the picture would be, but this is not the case. Can anyone explain this phenomenon? (BTW, the picture is from frame 30, rendered.) CloudSpriteNewTests.prj CloudletDisc.tga CloudletDiscDarker2.tga
-
Yes, that was a cool particle project. I will have to play with the zipped sprite cloud file I just downloaded. Thank you