Everything posted by Rodney
-
Here is what I call the Long Line of Icons. If someone wants to submit some changed icon sets I'll be more than happy to test them. That's them v v v Down There v v v They are rather tiny.
-
We could start with something easy. Here, for instance, are the Navigation icons. I've attached actual size and the same thing blown up by 500%. Note that the underlying issue these days with screen resolution suggests that SVG images (being vector based, they scale to any resolution) would be ideal, but the current UI in A:M relies on bitmaps. So we have two ways forward at this juncture: 1) update the bitmaps, or 2) seek updates that support SVG icons in the UI. I'm sure you can tell which one will be easier to do.

Added: Another reason why SVG is ideal (where it can be used) is transparency, which allows colors to change with activation state, or color and tinting to be applied without changing the actual icon. My take would be to do both: create the updated bitmaps and then offer the SVG for future implementation. It should be noted that the SVG icons should likely be the 'master' set even if not implemented, because they can easily be edited, scaled, etc. So consider that a recommended format to use while perfecting the icon set.

Aside: This is no small undertaking even if only the bitmaps are updated. A:M has a lot of icons!

Added: This is the 'small' icon set. There is also a large one. I'll try to add that here as well. Large set added, named NavigationLarge.png. Note that the only real change between what A:M is currently using and these images is the image format (changed to PNG so they can appear in this topic).
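Since tinting came up, here's a minimal sketch of the idea in Python with Pillow (the filename is hypothetical): an icon whose shape lives entirely in its alpha channel can have a state color swapped in without redrawing the artwork, much as an SVG fill could be changed.

```python
# Minimal sketch, assuming Pillow is installed and the icon is a
# white-on-transparent PNG; "navigation_icon.png" is a hypothetical file.
from PIL import Image

def tint_icon(path, rgb):
    icon = Image.open(path).convert("RGBA")
    alpha = icon.getchannel("A")                 # the shape lives in the alpha
    tinted = Image.new("RGBA", icon.size, rgb + (0,))
    tinted.putalpha(alpha)                       # recolor without touching the shape
    return tinted

active = tint_icon("navigation_icon.png", (0, 120, 215))   # "active" state blue
active.save("navigation_icon_active.png")
```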
-
It's a good idea that should be pursued. There are some hurdles to overcome in this. The first is just the standard disclaimer that there will be many and varied opinions on what the icons should be and what suggested icons appear to represent. I frequent another forum where their software has undergone some updating of UI, icons and whatnot, and the most contentious changes tend to be... you guessed it... the icons. It seems everyone has personal preferences that often don't match other people's preferences. They seem to have solved that by giving the job of deciding what will be used primarily to one person who happens to be quite good at designing icons. This doesn't mean that those decisions aren't vetted and approved 'officially' or considered for other changes.

The second potential issue is that of using generic icons such as Google's (however nice they may be). Hash Inc very likely does not want to use any icon that cannot rightly be stated as belonging to them. In this arena it helps that A:M has some very unique iconic processes to go with the imagery/symbolism. Those standard icons (and the processes they purport to represent) can certainly be a great starting place.

I say go for it. A:M already has some alternative icons that are never used buried in its code. If even one icon gets improved and all are in favor... mission accomplished. So much the better if they all get a fresh coat of paint.

Added: There are other pitfalls that likely need to be considered in updating icons/UIs, but those are enough for a token barrier to entry.
-
This is yet another one of those general topics where I'm trying to formulate the question but don't have enough to get that done. I note that in many of my renderings the initial rendering passes that A:M reports tend to render pretty quickly (note that this is with and without multipass on), even on the order of seconds. The initial antialiasing pass can take about that same time. During the second antialiasing pass things slow down considerably (a render last night took seconds for each of the initial passes but over an hour for the second antialiasing pass). Now granted... there may be a lot going on in that pass, and some of it might not specifically be related to actual antialiasing... although that's what it states is happening. Also... roughness of surfaces and shiny reflections surely added considerably to that particularly lengthy pass. I have no issues with that. But my curiosity is officially aroused and I want to learn more about how best to wrangle this second antialiasing pass and how to manipulate it. Any insights, pointers or references will be appreciated.

Also, as an aside: over the last two updates I distinctly detect better renderings in A:M, although I am not aware of any specific changes. It could be, and very likely is, just me and my aging eyes, but I like what I'm seeing these days. And regardless, A:M still amazes me.
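As a mental model only (this is a generic illustration, not a claim about A:M's internals), here's a toy Python sketch of jittered multipass accumulation; the `render_subpixel` function is a hypothetical stand-in. It shows why render time grows roughly linearly with the number of passes: every pass re-shades every pixel.

```python
import random

def render_multipass(render_subpixel, width, height, passes=16):
    # Average several slightly jittered renders; more passes smooth edges
    # (antialiasing) but cost grows roughly linearly with the pass count.
    image = [[0.0] * width for _ in range(height)]
    for _ in range(passes):
        dx = random.random() - 0.5               # sub-pixel jitter in x
        dy = random.random() - 0.5               # sub-pixel jitter in y
        for y in range(height):
            for x in range(width):
                image[y][x] += render_subpixel(x + dx, y + dy) / passes
    return image
```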
-
Truly bizarre and well told! Every time I thought I knew how things might end I adjusted... and adjusted... and... never imagined how the story was going to end. (I hadn't realized 'House Cleaning' was so close to release. Congratulations!)
-
Back to HEIF... Here's an article that digs more deeply into what the format is... and does... https://www.1and1.com/digitalguide/websites/web-design/what-can-heif-high-efficiency-image-file-format-do/
-
I have read that, of the difference between 1000 bytes (1 kB) and 1024 bytes (1 KiB), the 'extra' 24 bytes is allocated to the file system. I'm not sure if this is actually the case, but that would make sense and perhaps additionally explain why the Windows OS has long used KiB under the hood. In systems such as Linux where kB is used exclusively, I would assume the same thing is being done, just in a different way.
-
I had heard the term before but that didn't bring anything of use to my understanding of it. It would seem that we missed a lot of new terms being rolled out in a similar vein: https://pc.net/helpcenter/answers/kibibytes_mebibytes_and_gibibytes That article was from 2005... and the measurements were put into effect back in 1998, so we are definitely behind the power curve.

From the (brief) article: this whole problem apparently arose because we like to round things into nice even numbers with lots of zeroes, but that creates a lot of problems because in that particular context 1000 gets treated as if it were 1024. We could (at least theoretically) lose those additional 24 bytes if the difference isn't taken into consideration, because 2^10 = 1024, not 1000. At first blush it would appear that, where we reference exact byte-wise numerations, using 'bi' (instead of 'lo', 'ga', 'ra', etc.) to declare that level of accuracy would be more technically correct.

There were suits filed based on folks being convinced they were being robbed of extra bytes due to this ambiguity: https://www.cnet.com/news/gigabytes-vs-gibibytes-class-action-suit-nears-end/

Thanks to Robert for making the mention... I learned some very interesting things today.
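The arithmetic is easy to check; a quick Python snippet shows how the decimal/binary gap widens with each prefix, which helps explain why the lawsuits focused on gigabyte-scale drives:

```python
# Compare decimal (SI) prefixes against binary (IEC) prefixes.
for power, dec, bi in [(1, "kB", "KiB"), (2, "MB", "MiB"), (3, "GB", "GiB")]:
    decimal = 1000 ** power
    binary = 1024 ** power
    gap = (binary - decimal) / binary * 100
    print(f"1 {bi} = {binary:,} bytes vs 1 {dec} = {decimal:,} bytes "
          f"({gap:.1f}% difference)")
# Prints roughly 2.3% at kilo, 4.6% at mega, and 6.9% at giga.
```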
-
Nokia's javascript HEIF library appears to still be maintained on github: https://github.com/nokiatech/heif/tree/gh-pages/js One thing that isn't clear to me at this point is how encumbered HEIF might be with patents. HEVC was known to be heavily tied down with patents. This license appears to be very current: https://github.com/nokiatech/heif/blob/master/LICENSE.TXT Formatting updated as of 9 days ago.
-
It appears some on Mac platforms may already have been using the HEIF format. Here's an article from September 2017: https://www.cultofmac.com/487808/heif-vs-jpeg-image-files/ At the end of that article is a link to yet another article entitled "HEIF: A first nail in JPEG's coffin?": https://iso.500px.com/heif-first-nail-jpegs-coffin/
-
Here's the news that is making the rounds: https://venturebeat.com/2018/03/16/microsoft-releases-two-new-windows-10-previews-with-high-efficiency-image-file-format/ One of the reasons why 'image wars' might be an appropriate classification is that many of the new approaches will leave users of older approaches out in the cold. While it's certain that some will take advantage of the newer approaches and build bridges going backward, most will go the easier route and build bridges going forward. What this means is that to take advantage of the modern architecture, PC users will want to stay current with Windows 10. This is the ongoing effect of Windows as a service.
-
This tech watch focuses on one specific foray into bringing image formats up to date with modern hardware and software, but there is much more to come. I hesitate to say there is a war of sorts on the horizon, but some negotiations are ongoing to determine which approaches get adopted and what settles in for the long term. Microsoft has initiated their rollout of the HEIF format. There currently aren't any editors for the format and the focus is on playback. This would appear to be a byproduct of Microsoft's purchase of Nokia several years ago. For more information see: http://nokiatech.github.io/heif/

Of interest, this format is one of many approaches that form a container around data that is then processed in a standardized/optimized way... as opposed to processing the data differently for each data type. Containers aren't anything new, but this round would suggest that after studying various options in the wild, where useful feedback could be collected, the approach could be further standardized and optimized.

Disclaimer: These tech watches usually don't affect us in the short term as they point to technologies on the horizon. Those that are aware, however, can take advantage.
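To make the container idea concrete: HEIF is built on the ISO Base Media File Format, where a file is a sequence of 'boxes' that each begin with a 4-byte big-endian size and a 4-byte type tag. Here's a minimal Python sketch (hypothetical filename; it skips the spec's 64-bit large-size and to-end-of-file cases for brevity) that walks the top-level boxes without understanding any codec-specific payload:

```python
import struct

def list_top_level_boxes(path):
    # Each ISO BMFF box header: 4-byte big-endian size, then 4-byte type tag.
    # Ignores the size == 0 (to end of file) and size == 1 (64-bit) cases.
    with open(path, "rb") as f:
        while (header := f.read(8)) and len(header) == 8:
            size, box_type = struct.unpack(">I4s", header)
            print(box_type.decode("ascii", "replace"), size)
            f.seek(size - 8, 1)                  # jump over payload to next box

list_top_level_boxes("example.heic")             # hypothetical file: ftyp, meta, mdat...
```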
-
Impressive work all around Dan. I like the line-less approach.
-
Yes, to the best of my recollection no shaders (as we've come to know them) were in the HAMR pipeline. Basically, if you could see it in a realtime preview inside A:M you could see it in HAMR enabled viewers (web browser or standalone application).
-
That's great stuff Mark! I'd think that might garner some interest outside of normal channels as well.
-
Why must ideas have such characteristics? Ideas are not inherently smart... stupid... As for the quote "An idea is a seed not a solution"... I wouldn't have written it if I didn't believe it. Ideas are like seeds in that they must be nurtured so they can grow, blossom and bloom. But they also must weather the storms and be firmly planted in order to do so. There is a common thread that is hard to pin down in A:M circles where people seem to take offense very easily. (It's not unique to A:M circles BTW!) The only rationale I can find for it is that as artists we are conditioned toward this response. But we want feedback... we want ideas... we want speculation... the more the better. And it is precisely because we have a reasonable expectation that our desires may not be met that we can march forward boldly into the future. We'll get there.
-
Tore, I'm always suspicious of comparisons because they are taken in isolation. While it is a goal... and they are well on their way to it... EEVEE rendering in Blender currently does not support animation other than that created by rendering still frames via camera and object translation, rotation and such. That's why the scene from the video you showed, while impressive, only shows a static scene where nothing moves in any way that would be called 'animation'. This isn't to downplay Blender's progress. They are definitely moving in the right direction, but from a holistic view taking all things into consideration... especially fully articulated characters... it's not a particularly useful comparison. There are tricks that can be used to limit rendering time in A:M that can get similar responsiveness, and we should definitely pursue those. Ed Catmull once said, "Computer Science is the science of dirty tricks," and this is still largely true. We just need to find innovative ways to leverage those tricks. For other shortfalls and hints of 'dirty tricks' that can be smoothed out and optimized, see the EEVEE FAQ: https://code.blender.org/2018/03/eevee-f-a-q/

Added: It's interesting to note that the FAQ initially states that EEVEE supports animation but later specifically states otherwise.
-
I think that is the same, or at least a similar, image map as the one shared (by Robert) in this topic: https://www.hash.com/forums/index.php?showtopic=47505 If you want to work through some of the various approaches, or just troubleshoot the approach you are already taking, chat, voice chat and video chat can all be used via the Animation:Master Discord channel. I tend to have that on whenever I'm online, and folks can join in and assist as they have the opportunity. The A:M Discord link: https://discord.gg/7G9MBc

Here's an example of Robert's displacement added to a cylinder, with the cylinder model then duplicated in the Chor. By only changing a Group's surface properties, a wide variety of colors and styles can be created.
-
Two other approaches that immediately come to mind:

Patch images
This would work great if you have a tiled rope image. The benefit here is that you could draw all of your ropes in place with single splines... then use Sweeper to sweep a cylinder over those splines. Right clicking on the resulting SweptObject group and selecting Add Image would then allow you to apply the rope image to each patch. And all of this without resorting to any twisting or revolving of the patches (the images themselves accomplish that). For far shots you could then use the single splines colored black and of the desired thickness, and turn the color setting of the patch image to 0% (fully transparent). The downside is that each patch might require tweaking if the rope image needed rotating or the patch's normals flipped. An added benefit to this method is that you might not need to apply it to any shape in 3D. Just extrude a series of patches and apply the patch image of the rope... then modify to taste with respect to the camera.

Materials
This is one I haven't attempted with ropes, but I would think it very straightforward. The trick would be to apply it to an unshaped rope and then shape the rope in an action. In this way all of the rope would always twist in the correct direction. (I'll have to test this one out.) Bitmap Plus materials might work well with this approach.
-
As per usual there are several approaches. Here is one to get very minimal patch counts in a bending rope: to keep the patch count low I might model a master rope and then use that model as a decal for a lower patch density rope. Use the modeled rope for hero shots where the rope is seen close up, and the decaled rope to suggest that same detail when the rope is seen farther in the distance. In the attached image, the top two ropes have only four patches each. They were decaled using the modeled rope on the bottom. As for the knots... I'd be tempted to use a single patch set to one of several knot images and then place those closer to camera. The important thing being... what is the camera/audience seeing in the shot?
-
For a second there... I thought I understood the rationale behind A:M's 'is flat?' test... Didn't quite get there this time around though. I like the illustration of dividing until reaching the point of being flat (and/or small enough) to consider the area a plane and therefore render it.
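As a toy version of that divide-until-flat idea, here's a Python sketch that recursively splits a quadratic curve until each piece deviates from its chord by less than a tolerance, at which point it is treated as 'flat enough' to emit as a straight line; patch renderers apply the same kind of test in 2D (this is a generic illustration, not A:M's actual test):

```python
def subdivide_until_flat(p0, p1, p2, flat_tol=0.5, emit=print):
    # Flatness estimate: how far the control point strays from the chord midpoint.
    chord_mid = ((p0[0] + p2[0]) / 2, (p0[1] + p2[1]) / 2)
    deviation = abs(p1[0] - chord_mid[0]) + abs(p1[1] - chord_mid[1])
    if deviation <= flat_tol:
        emit((p0, p2))                           # flat enough: draw as a line
        return
    # de Casteljau split of the quadratic Bezier into two halves
    a = ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)
    b = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
    m = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    subdivide_until_flat(p0, a, m, flat_tol, emit)
    subdivide_until_flat(m, b, p2, flat_tol, emit)

subdivide_until_flat((0, 0), (50, 100), (100, 0))   # emits a handful of short segments
```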
-
Way back in the day, Computer Graphics pioneers Jim Blinn, Ed Catmull and Alvy Ray Smith conducted lectures on their work for young and eager minds interested in pushing the technology to its full potential. It would have been wonderful if someone had thought to record some of those lectures. And they did! These lectures haven't been viewed a lot on YouTube but should be, for the historical perspective alone. The lectures dive deeply into subject matter we are well acquainted with, including splines and patches: https://www.youtube.com/channel/UCNXre0qpHjdhC29xH8WkKnw/feed

From my perspective: this timeframe was while I was in high school, and when visiting the local college to determine what I wanted to be when I grew up I got my first view of computer graphics beyond tiny squares on a green computer screen or that relating to gaming on the Atari. I wasn't sure where computer graphics might be heading in the future, but I knew I needed to be involved! But that was quite a stretch for this small town boy, and short of access to a Commodore 64 I didn't have much exposure to computers until years later, after I joined the Air Force.
-
I haven't delved far into the history of it, but the term 'predictive rendering algorithm' came to mind so I typed it into Google. This paper from back in 1996 was high on the search list and gives a place to measure from, to see where the idea has evolved. Without knowing more, I will postulate it largely followed the path of real time rendering... https://www.cs.ubc.ca/labs/imager/th/1996/Fearing1996/Fearing1996.pdf There are some concepts that intrigue me... more labels to me at this point. These include concepts such as 'frameless rendering' and 'changing motion thresholds'.
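For 'frameless rendering' specifically, my reading of the label suggests something like the toy Python sketch below, where individual pixels are re-sampled continuously instead of the whole frame being redrawn in lockstep (the `shade` function is a hypothetical stand-in for whatever computes a pixel's color):

```python
import random

def frameless_update(image, shade, samples_per_tick=1000):
    # Re-render a random subset of pixels each tick; the image converges
    # toward the current scene state without any global frame boundary.
    height, width = len(image), len(image[0])
    for _ in range(samples_per_tick):
        x, y = random.randrange(width), random.randrange(height)
        image[y][x] = shade(x, y)                # only this pixel gets the newest state
    return image
```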
-
Here is a slice in time of where the art of rendering was measured to be in spring of 2017. It delves more deeply into rendering itself by its other nom de plume, 'image synthesis'. The various lecture slides can be accessed via the index at the bottom: http://graphics.stanford.edu/courses/cs348b/ (Link to Stanford lectures)

It should also be noted that blockchaining of distributed rendering is already a thing, as demonstrated by the folks at Otoy (Link). They are pursuing one model from a larger set of approaches... I'm not sure they are specifically attacking the same things I'm after in predictive rendering, but they are very likely gathering a large amount of data that can inform that approach.

Added: It has been said that even Pixar's RenderMan approach discards all information and starts anew with each frame. They obviously know what they are doing, but this is very much not where I'm heading in my various wanderings.
-
Going deeper down the rabbit hole... leaving a few notes so I can find my way back...

One path I don't expect to follow, but it's interesting to see where it might go: many years ago a test of blockchain technology was attempted where individual pixels on a digital canvas were sold for $1 per pixel. Because it's hard to see individual pixels, groups of 10x10 pixels were sold for $100. In this experiment those who participated 'truly' owned a piece of that digital canvas and could alter it or sell it to someone else. Other similar experiments were conducted, and while interesting, that specific idea didn't take off... although I'm sure lessons were learned. One similar project posted its source code on github, so the inner workings of such can be explored. But that path is a considerable diversion, particularly for its pay-to-play requirement, although the concept of 'ownership' is indeed useful.

My thoughts turn to the underlying constructs of blockchains where 'ledgers' are concerned. Further, the evolution of exposure sheets and how they arose from the ledger sheets of old. But before going on, it may be important to state that the current trend in blockchain is away from 'proof of work' for a number of reasons, the primary one being power consumption (which has been detailed in Robert Holmen's Bitcoin topic). I won't press into that any further here, except to say that in many/most cases proof of work is unnecessary. This isn't to say it isn't useful, but the need must justify the related cost. Additionally, the speed gained by favoring verification (decryption) over solving (full decryption) can be a useful construct.

At this point one might be wondering (as should be expected) what this has to do with rendering. There are several useful concepts here that can be extrapolated into the realm of rendering and playback of stored data. Some of this fits more squarely into the area of compression algorithms, and the differences between blockchain approaches and compression should be explored. In the case of the experiment highlighted above, a single canvas of pixels was produced and then the owners would adjust their pixels as they saw fit. These adjustments change the view of the current canvas, but the history of every pixel state is preserved. This history is immutable, as it is factored into the current state of the pixel (like a never ending compression loop looking for patterns to reduce, leaving a key that marks a path should a backtrace be necessary).

At any rate, where the players in this game are known (likely by aliases/labels), they provide a means to identify frames of reference and what is seen from each vantage point. This gives us more incentive to consider exposure to a given ledger where points of view can be overlaid to produce a composite result. An owner might state they own a set group of pixels within one frame of reference but also claim a different set in another. We can therefore compare the differences to verify where changes have occurred. We may not initially know who owns those changes, so we refer to our ledger, which never forgets a transaction, and then determine the owner. In rendering this all can occur very quickly, with Red claiming a share of the temporal canvas: "I own all of these pixels through all frames of reference!" Green might want to buy in on a few of those while also claiming some elsewhere. Blue does likewise and productivity results.
An issue with current rendering approaches might therefore be that every pixel is mutable and stores no history of its prior state. With each rendering the process starts anew, even though a pixel might never change value or ownership. The concept of multipass surely rises above this deficit for a moment, but at some point 'extraneous' data is discarded and potential gains are lost. Needless to say, this makes me very curious about A:M's internal RAW format and the actual point at which that data is released. If none of it is passed on for use in subsequent frames yet to be rendered, then how best to measure that cost?

Added: It has been demonstrated that blockchains are not 'immutable' but rather 'tamper resistant'. But within systems where mutability can be seen as advantageous, there is little need for the expense related to proof of work. End states (or slices in time) are important, but only for the briefest of moments.
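To make the ledger idea a bit more concrete, here's a toy Python sketch (all names are illustrative; this has nothing to do with A:M's actual RAW format): each pixel keeps its full history, and every entry is hash-chained to the previous one, so past states are tamper-evident rather than discarded.

```python
import hashlib

class PixelLedger:
    """Toy per-pixel ledger: history is appended, never overwritten."""

    def __init__(self):
        self.history = {}   # (x, y) -> list of (owner, color, chain_hash)

    def set_pixel(self, x, y, owner, color):
        chain = self.history.setdefault((x, y), [])
        prev_hash = chain[-1][2] if chain else "genesis"
        entry = f"{prev_hash}|{owner}|{color}"   # chain each state to the last
        chain.append((owner, color, hashlib.sha256(entry.encode()).hexdigest()))

    def current(self, x, y):
        # Only the latest entry is "rendered"; prior states remain verifiable.
        return self.history[(x, y)][-1][1]

ledger = PixelLedger()
ledger.set_pixel(0, 0, "Red", (255, 0, 0))
ledger.set_pixel(0, 0, "Green", (0, 255, 0))
print(ledger.current(0, 0))                      # (0, 255, 0), Red's claim preserved
```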