Hash, Inc. - Animation:Master

Rodney
Admin · 21,597 posts · 110 days won
Everything posted by Rodney

  1. I have read that, of the difference between a 1000-byte KB and a 1024-byte KiB, the 'extra' 24 bytes are allocated to the file system. I'm not sure if this is actually the case but that would make sense and perhaps additionally explain why the Windows OS has long used KiB under the hood. In systems such as Linux, where KB is used exclusively, I would assume the same thing is being done, just in a different way.
  2. I had heard the term before but that didn't bring anything of use to my understanding of it. It would seem that we missed a lot of new terms being rolled out in a similar vein: https://pc.net/helpcenter/answers/kibibytes_mebibytes_and_gibibytes That article was from 2005... and the measurements were put into effect back in 1998, so we are definitely behind the power curve. From the (brief) article: This whole problem apparently arose because we like to round things into nice even numbers with lots of zeroes, but that creates a lot of problems because in that particular context 1000 gets treated as if it were 1024. We could (at least theoretically) lose those additional 24 bytes if the difference isn't taken into consideration, because 2^10 = 1024, not 1000. At first blush it would appear that referencing exact byte-wise numerations using the 'bi' prefixes (kibi, mebi, gibi, etc. instead of kilo, mega, giga) to declare that level of accuracy would be more technically correct. There were suits filed based on folks being convinced they were being robbed of extra bytes due to this ambiguity: https://www.cnet.com/news/gigabytes-vs-gibibytes-class-action-suit-nears-end/ Thanks to Robert for making the mention... I learned some very interesting things today.
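    For anyone who wants to see the arithmetic behind the discrepancy, here is a small Python sketch. The constants are just the standard SI vs. IEC definitions; the 500 GB drive is a made-up example of the kind of gap the class action suit was about.

    ```python
    # Decimal (SI) units vs. binary (IEC) units.
    KB, MB, GB = 10**3, 10**6, 10**9        # kilo/mega/giga: powers of 1000
    KiB, MiB, GiB = 2**10, 2**20, 2**30     # kibi/mebi/gibi: powers of 1024

    print(KiB - KB)                  # 24 -- the "extra" 24 bytes per kilobyte
    print(f"{GB / GiB:.4f}")         # ~0.9313 -- 1 GB is only about 93% of a GiB

    # Why a drive sold as "500 GB" shows up as roughly 465 "GB" in an OS
    # that actually measures in binary units:
    advertised_bytes = 500 * GB
    print(f"{advertised_bytes / GiB:.2f} GiB")   # ~465.66
    ```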
  3. Nokia's javascript HEIF library appears to still be maintained on github: https://github.com/nokiatech/heif/tree/gh-pages/js One thing that isn't clear to me at this point is how encumbered HEIF might be with patents. HEVC was known to be heavily tied down with patents. This license appears to be very current: https://github.com/nokiatech/heif/blob/master/LICENSE.TXT Formatting updated as of 9 days ago.
  4. It appears some on Mac platforms may have already been using the HEIF format. Here's an article from September 2017: https://www.cultofmac.com/487808/heif-vs-jpeg-image-files/ At the end of that article is a link to yet another article entitled "HEIF: A first nail in JPEG's coffin?": https://iso.500px.com/heif-first-nail-jpegs-coffin/
  5. Here's the news that is making the rounds: https://venturebeat.com/2018/03/16/microsoft-releases-two-new-windows-10-previews-with-high-efficiency-image-file-format/ One of the reasons why 'image wars' might be an appropriate classification is that many of the new approaches will leave users of older approaches out in the cold. While it's certain that some will take advantage of the newer approaches and build bridges going backward, most will go the easier route and build bridges going forward. What this means is that, to take advantage of the modern architecture, PC users will want to stay current with Windows 10. This is the ongoing effect of Windows as a service.
  6. This tech watch focuses on one specific foray into bringing image formats up to date with modern hardware and software, but there is much more to come. I hesitate to say there is a war of sorts on the horizon, but negotiations are ongoing to determine which approaches get adopted and what settles in for the long term. Microsoft has initiated its rollout of the HEIF format. There currently aren't any editors for the format and the focus is on playback. This would appear to be a byproduct of Microsoft's purchase of Nokia several years ago. For more information see: http://nokiatech.github.io/heif/ Of interest, this format is one of many approaches that form a container around data that is then processed in a standardized/optimized way... as opposed to processing the data differently for each data type. Containers aren't anything new, but this round would suggest that after studying various options in the wild, where useful feedback could be collected, the approach could be further standardized and optimized. Disclaimer: These tech watches usually don't affect us in the short term as they point to technologies on the horizon. Those that are aware, however, can take advantage.
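    As a rough illustration of the container idea, here is a hedged Python sketch that walks the top-level boxes of an ISO Base Media File Format file, the container family HEIF belongs to. It is illustrative only, not Microsoft's or Nokia's implementation, and the 'example.heic' file name is hypothetical.

    ```python
    # Minimal walk over top-level ISOBMFF boxes: each box starts with a
    # 4-byte big-endian size and a 4-byte type code. Real HEIF files also
    # use nested boxes, uuid types, and other details ignored here.
    import struct

    def list_top_level_boxes(path):
        boxes = []
        with open(path, "rb") as f:
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                size, box_type = struct.unpack(">I4s", header)
                if size == 1:                    # 64-bit "largesize" follows
                    size = struct.unpack(">Q", f.read(8))[0]
                    f.seek(size - 16, 1)
                elif size == 0:                  # box runs to the end of file
                    break
                else:
                    f.seek(size - 8, 1)          # skip this box's payload
                boxes.append((box_type.decode("ascii", "replace"), size))
        return boxes

    # A .heic image typically starts with 'ftyp', then 'meta', then 'mdat'.
    # print(list_top_level_boxes("example.heic"))   # hypothetical file name
    ```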
  7. Impressive work all around Dan. I like the line-less approach.
  8. Rodney

    GPU Slave?

    Yes, to the best of my recollection no shaders (as we've come to know them) were in the HAMR pipeline. Basically, if you could see it in a realtime preview inside A:M, you could see it in HAMR-enabled viewers (web browser or standalone application).
  9. That's great stuff Mark! I'd think that might have garnered some interest outside of normal channels as well.
  10. Rodney

    GPU Slave?

    Why must ideas have such characteristics? Ideas are not inherently smart... or stupid... As for the quote "An idea is a seed not a solution"... I wouldn't have written it if I didn't believe it. Ideas are like seeds in that they must be nurtured so they can grow, blossom and bloom. But they also must weather the storms and be firmly planted in order to do so. There is a common thread that is hard to pin down in A:M circles where people seem to take offense very easily. (It's not unique to A:M circles BTW!) The only rationale I can find for it is that as artists we are conditioned toward this response. But we want feedback... we want ideas... we want speculation... the more the better. And it is precisely because we have a reasonable expectation that our desires may not be met that we can march forward boldly into the future. We'll get there.
  11. Rodney

    GPU Slave?

    Tore, I'm always suspicious of comparisons because they are taken in isolation. While it is a goal... and they are well on their way to it... EEVEE rendering in Blender currently does not support animation other than what can be created by rendering still frames via camera and object translation, rotation and such. That's why the scene from the video you showed, while impressive, only shows a static scene where nothing moves in any way that would be called 'animation'. This isn't to downplay Blender's progress. They are definitely moving in the right direction, but from a holistic view taking all things into consideration... especially fully articulated characters... it's not a particularly useful comparison. There are tricks that can be used to limit rendering time in A:M that can get similar response and we should definitely pursue those. Ed Catmull once said, "Computer Science is the science of dirty tricks." and this is still largely true. We just need to find innovative ways to leverage those tricks. For other shortfalls and hints of 'dirty tricks' that can be smoothed out and optimized see the EEVEE FAQ: https://code.blender.org/2018/03/eevee-f-a-q/ Added: It's interesting to note that the FAQ initially states that EEVEE supports animation but then later specifically states otherwise.
  12. I think that is the same, or at least a similar, image map as the one shared (by Robert) in this topic: https://www.hash.com/forums/index.php?showtopic=47505 If you want to work through some of the various approaches or just troubleshoot the approach you are already taking, chat, voice chat and video chat can all be used via the Animation:Master Discord channel. I tend to have that on whenever I'm online and folks can join in and assist as they have the opportunity. The A:M Discord link: https://discord.gg/7G9MBc Here's an example of Robert's displacement added to a cylinder, with the cylinder model then duplicated in the Chor. By only changing a Group's surface properties, a wide variety of colors and styles can be created.
  13. Two other approaches that immediately come to mind:

    Patch images
    This would work great if you have a tiled rope image. The benefit here is that you could draw all of your ropes in place with single splines... then use Sweeper to sweep a cylinder over those splines. Right-clicking on the resulting SweptObject group and selecting Add Image would then allow you to apply the rope image to each patch. And all of this without resorting to any twisting or revolving of the patches (the images themselves accomplish that). For far shots you could then use the single splines colored black and of the desired thickness and turn the color setting of the patch image to 0% (fully transparent). The downside of this is that each patch might require tweaking if the rope image needed rotating or the patches' normals flipped. An added benefit to this method would be that you might not need to apply this to any shape in 3D. Just extrude a series of patches and apply the patch image of the rope... then modify to taste with respect to the camera.

    Materials
    This is one I haven't attempted with ropes but I would think it very straightforward. The trick would be to apply it to an unshaped rope and then shape the rope in an action. In this way all of the rope would always twist in the correct direction. (I'll have to test this one out.) Bitmap Plus materials might work well with this approach.
  14. As per usual there are several approaches. Here is one approach to get very minimal patches in a bending rope: to keep the patch count low I might model a master rope and then use that model as a decal for a lower patch density rope. Use the modeled rope for hero shots where the rope is seen close up, and the decaled version to suggest the detail of the rope when it is seen farther in the distance. In the attached image the top two ropes only have four patches each. They were decaled using the modeled rope on the bottom. As for the knots... I'd be tempted to use a single patch set to one of several knot images and then place those closer to camera. The important thing being... what is the camera/audience seeing in the shot?
  15. For a second there... I thought I understood the rationale behind A:M's 'is flat?' test... Didn't quite get there this time around though. I like the illustration of dividing until reaching the point of being flat (and/or small enough) to consider the area a plane and therefore render.
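    To make the idea concrete, here is a minimal Python sketch of the general 'subdivide until flat enough, then render as a plane' approach. It is not A:M's actual 'is flat?' test; the flatness measure, tolerance and quad-splitting scheme are assumptions made purely for illustration.

    ```python
    # Subdivide a quad until each piece is flat (or small) enough to
    # treat as a plane. Illustrative only -- not A:M's actual test.
    import numpy as np

    def is_flat(corners, tol=1e-3):
        """Call a quad 'flat' if its fourth corner lies near the plane of the first three."""
        p0, p1, p2, p3 = corners
        normal = np.cross(p1 - p0, p2 - p0)
        length = np.linalg.norm(normal)
        if length == 0.0:
            return True
        return abs(np.dot(p3 - p0, normal / length)) < tol

    def split_into_four(corners):
        """Split a quad into four sub-quads through its edge midpoints and center."""
        p0, p1, p2, p3 = corners
        e01, e12, e23, e30 = (p0 + p1) / 2, (p1 + p2) / 2, (p2 + p3) / 2, (p3 + p0) / 2
        c = (p0 + p1 + p2 + p3) / 4
        return [(p0, e01, c, e30), (e01, p1, e12, c), (c, e12, p2, e23), (e30, c, e23, p3)]

    def subdivide_until_flat(corners, out, depth=0, max_depth=8):
        """Recurse until the piece can be handed off as a simple planar quad."""
        if is_flat(corners) or depth >= max_depth:
            out.append(corners)               # this piece is now "flat enough" to render
            return
        for sub in split_into_four(corners):
            subdivide_until_flat(sub, out, depth + 1, max_depth)

    # Usage: a gently curved quad gets broken into planar pieces.
    quad = tuple(np.array(p, float) for p in [(0, 0, 0), (1, 0, 0.2), (1, 1, 0), (0, 1, 0.1)])
    pieces = []
    subdivide_until_flat(quad, pieces)
    print(len(pieces), "planar pieces")
    ```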
  16. Way back in the day, Computer Graphics pioneers Jim Blinn, Ed Catmull and Alvy Ray Smith conducted lectures on their work to young and eager minds interested in pushing the technology to its full potential. It would have been wonderful if someone had thought to record some of those lectures. And they did! These lectures haven't been viewed a lot on youtube but should be, for the historical perspective alone. The lectures dive more deeply into subject matter we are well acquainted with, to include splines and patches: https://www.youtube.com/channel/UCNXre0qpHjdhC29xH8WkKnw/feed From my perspective, this timeframe was while I was in high school, and when visiting the local college to determine what I wanted to be when I grew up I got my first view of computer graphics beyond tiny squares on a green computer screen or that relating to gaming on the Atari. I wasn't sure where computer graphics might be heading in the future but I knew I needed to be involved! But that was quite a stretch for this small town boy, and short of access to a Commodore 64 I didn't have much exposure to computers until years later, after I joined the Air Force.
  17. I haven't delved far into the history of it, but the term 'predictive rendering algorithm' came to mind so I typed it into Google. This paper from back in 1996 was high on the search list and gives a place to measure from to see how the idea has evolved. Without knowing more I will postulate it largely followed the path of real time rendering... https://www.cs.ubc.ca/labs/imager/th/1996/Fearing1996/Fearing1996.pdf There are some concepts that intrigue me... though they're more labels to me at this point. These include concepts such as 'frameless rendering' or 'changing motion thresholds'.
  18. Here is a slice in time of where the art of rendering was measured to be in spring of 2017. It delves more deeply into rendering itself by its other nom de plume, 'image synthesis'. The various lecture slides can be accessed via the index at the bottom: http://graphics.stanford.edu/courses/cs348b/ (Link to Stanford lectures) It should also be noted that blockchaining of distributed rendering is already a thing, as demonstrated by the folks at Otoy (Link). They are pursuing one model from a larger set of approaches... I'm not sure they are specifically attacking the same things I'm after in predictive rendering, but they are very likely gathering a large amount of data that can inform that approach. Added: It has been said that even PIXAR's Renderman approach discards all information and starts anew with each frame. They obviously know what they are doing, but this is very much not where I'm heading in my various wanderings.
  19. Going deeper down the rabbit hole... leaving a few notes so I can find my way back...

    One path I don't expect to follow, but it's interesting to see where it might go: many years ago a test of blockchain technology was attempted where individual pixels on a digital canvas were sold for $1 per pixel. Because it's hard to see individual pixels, 10x10 groups of pixels were sold for $100. In this experiment those who participated 'truly' owned a piece of that digital canvas and could alter it or sell it to someone else. Other similar experiments were conducted and, while interesting, that specific idea didn't take off... although I'm sure lessons were learned. One similar project posted its source code on github so the inner workings of such can be explored. But that path is a considerable diversion, particularly for its pay-to-play requirement, although the concept of 'ownership' is indeed useful.

    My thoughts turn to the underlying constructs of blockchains where 'ledgers' are concerned, and further, to the evolution of exposure sheets and how they arose from the ledger sheets of old. But before going on it may be important to state that the current trend in blockchain is away from 'proof of work' for a number of reasons, the primary one being power consumption (which has been detailed in Robert Holmen's Bitcoin topic). I won't press into that any further here except to say that in many/most cases proof of work is unnecessary. This isn't to say it isn't useful, but the need must justify the related cost. Additionally, the speed gained by favoring verification (checking a result) over solving (computing it in full) can be a useful construct.

    At this point, one might be wondering (as should be expected) what this has to do with rendering. There are several useful concepts that can be extrapolated into the realm of rendering and playback of stored data. Some of this fits more squarely into the area of compression algorithms, and the differences between blockchain approaches and compression should be explored. In the case of the experiment highlighted above, a single canvas of pixels was produced and then the owners would adjust their pixels as they saw fit. These adjustments change the view of the current canvas, but the history of every pixel state is preserved. This history is immutable because it is factored into the current state of the pixel (like a never ending compression loop looking for patterns to reduce, leaving a key that marks a path should a backtrace be necessary).

    At any rate, where the players in this game are known (likely by aliases/labels), they provide a means to identify frames of reference and what is seen from their vantage point. This gives us more incentive to consider exposure to a given ledger where points of view can be overlaid to produce a composite result. An owner might claim they own a set group of pixels within one frame of reference but also claim a different set in another. We can therefore compare the differences to verify where changes have occurred. We may not initially know who owns those changes, so we refer to our ledger, which never forgets a transaction, and then determine the owner. In rendering this can all occur very quickly, with Red claiming a share of the temporal canvas... "I own all of these pixels through all frames of reference!" Green might want to buy in on a few of those also while claiming some elsewhere. Blue does likewise and productivity results.

    An issue with current rendering approaches might therefore be that every pixel is mutable and stores no history of its prior state. With each rendering the process starts anew, even though a pixel might not ever change value or ownership. The concept of Multipass surely rises above this deficit for a moment, but at some point 'extraneous' data is discarded and potential gains are lost. Needless to say, this makes me very curious about A:M's internal RAW format and the actual point at which that data is released. If none of it is passed on for use in subsequent frames yet to be rendered, then how best to measure that cost?

    Added: It has been demonstrated that blockchains are not 'immutable' but rather 'tamper resistant'. But within systems where mutability can be seen as advantageous there is little need for the expense related to proof of work. End states (or slices in time) are important, but only for the briefest of moments.
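    As a thought experiment only, here is a toy Python sketch of the 'pixel ledger' idea: each pixel keeps an append-only history of its states, a new frame records only the pixels that changed, and any past frame can be replayed from the ledger. This is not A:M's RAW format, Renderman, or any real blockchain scheme; every name in it is made up for illustration.

    ```python
    # Toy per-pixel ledger: unchanged pixels are never re-stored, and any
    # frame can be reconstructed by replaying each pixel's history.
    from collections import defaultdict

    class PixelLedger:
        def __init__(self):
            # pixel (x, y) -> list of (frame, value) entries, oldest first
            self.history = defaultdict(list)

        def commit(self, frame, rendered):
            """Record only the pixels whose value differs from their last entry."""
            for xy, value in rendered.items():
                entries = self.history[xy]
                if not entries or entries[-1][1] != value:
                    entries.append((frame, value))

        def value_at(self, xy, frame):
            """Reconstruct a pixel for any frame by replaying its ledger."""
            value = None
            for f, v in self.history[xy]:
                if f > frame:
                    break
                value = v
            return value

    # Usage: frame 1 changes only one pixel, so only one new entry is written.
    ledger = PixelLedger()
    ledger.commit(0, {(0, 0): (255, 0, 0), (1, 0): (0, 255, 0)})
    ledger.commit(1, {(0, 0): (255, 0, 0), (1, 0): (0, 0, 255)})
    print(ledger.value_at((0, 0), 1))   # (255, 0, 0) -- reused, never re-stored
    print(ledger.value_at((1, 0), 1))   # (0, 0, 255) -- the one changed pixel
    ```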
  20. Enjoying your breakdown of how you put your entry together. Excellent breakdown of AO and how the size of the ground plane affects the scene. I can see I need to experiment with reflected AO like that.
  21. That's an excellent example Rodger. Thanks! I've been leaning toward that too.
  22. A case that produces a spline with odd curvature might be to take a beveled cube, such as one out of the Library, and decal it with an image using the spherical application method. The initial observation doesn't have much to do with the application of the image itself so much as with the layout of decal splines that appear via Decal>Edit. There should be at least one decal spline that has excessive curvature. Beyond that, I'll see if I can put together a decent project file related to this.
  23. The world would surely be a very different place if Bill Hanna had taken the bite and embraced that young buck Martin Hash.
  24. I had a random thought so thought I should investigate.

    The underlying observation/conjecture: application (and editing) of decal assignments in A:M can sometimes run afoul of the inherent curvature of splines. This is one of the reasons why it is useful to flatten a model before applying decals.

    The question: would it generally be better to peak all the splines in a model prior to applying a decal?

    A follow-up question: after the application of the decal, presumably there would be an adjustment of the assignment of specific locations of the image decaled onto the model. What impact (if any) should we expect to see as a result?

    I'm in the early stages of finding out the answer for myself but some of you may already know it. My initial results seem to indicate (at least with spherical application) no significant difference.
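    For context, here is a generic Python sketch of a spherical projection. It is not A:M's decal code; it only illustrates that, under a simple latitude/longitude projection, a control point's UV depends on its position alone, so peaking splines (which leaves CP positions where they are) would mostly show up in how the decal interpolates between CPs rather than at the CPs themselves.

    ```python
    # Generic spherical (lat/long) projection: each point's UV is derived
    # only from its direction relative to the projection center.
    import math

    def spherical_uv(point, center=(0.0, 0.0, 0.0)):
        """Map a 3D point to (u, v) via a simple latitude/longitude projection."""
        x, y, z = (p - c for p, c in zip(point, center))
        r = math.sqrt(x * x + y * y + z * z) or 1.0
        u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)   # longitude
        v = 0.5 - math.asin(y / r) / math.pi           # latitude
        return u, v

    # Usage: the same CP position gives the same UV whether its splines
    # are peaked or smooth; only the in-between interpolation differs.
    print(spherical_uv((1.0, 0.5, 0.25)))
    ```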
  25. Okay... the target keeps moving. If we want to work in compositing/nodes we can bring a whole new level of tools into the equation. A:M Composite provides a number of useful ways to bring glows into our scene from inside of A:M for starters, as do the standard Post Effects (some of which are fairly recent additions to A:M). Post processing is a cheat... which I highly approve of... but we need to compare apples to apples rather than apples to disco balls. Also, keep in mind that once we invoke post effects/processing this opens up a whole new world of external programs that can work with A:M to achieve the desired results. In the referenced video almost the first step is... go into the compositor. At this point we are in a completely different environment than one of decal-driven glows rendered in situ.