Everything posted by Rodney
-
I fear that the words I type will appear terse... not my intention... but we can safely move beyond stating the obvious (i.e. that future display tech will provide better resolution and faster processing than current display tech). These are givens unless someone believes they will not or cannot improve (i.e. should they bump up against a technological obstacle or scientific certainty that will delay or prevent that improvement). But we don't know anyone who believes such improvement won't happen, do we? I certainly anticipate there will be many qualitative improvements (to computers, computer displays, computer graphics, etc.) in the future, but this is also stating the obvious. For this discussion to be of any relevance it might help to be more specific about what does or does not classify as a qualitative improvement to you. But more importantly... if the subject matter of this topic (HAMR etc.) isn't of relevance, perhaps we can start a new topic with a focus toward areas of more direct interest to you?
-
Friendly looking guy! More please!
-
I'm not smart enough to disagree with things I haven't studied, but I believe it is about display technology in that programmers have to deal with the inherent limitations of that technology. If that technology were sufficient then there wouldn't be obstacles to overcome. Consider also that when most people hear the word 'display' they think only of the screen of a monitor (there's your sampling), a receptacle that waits (actively or passively) for data to display. I would guess that the more passive the monitor the more direct the display, while the more active the monitor the more data is preprocessed prior to display.

I can think of several technologies used to display data that don't appear to be used much for either category of displays. One example would be the platts used to texture 3D objects in virtual space. Those platts are two dimensional and yet are projected into 3D space (and/or onto 3D objects). Edit: I almost hate to say 'projected' here because they don't have to be projected if they are already the same points, just in different dimensions of space. Another example: a 3D model flattened onto a 2D plane. It is in that area of further exploiting 3D space that the industry is heading (one example: EXR images that capture multiple images/levels in depth/z-space). There are several shortcomings of platts, and one of them is that by themselves they contain no depth; but when coupled with other layers/channels, data can be transmitted both ways... and in lossless ways. It's almost as if bitmaps and vectors could be treated as the same thing. And the age old arguments exist there too... vectors are better than bitmaps except where they ain't.

Direct or indirect sampling would be more akin to what I was referring to above. I don't see the point of stating you can't digitize something without digitizing it, which is basically what you are saying. And besides all that, we are talking about data that is already digital. But let's not ignore what isn't digital. Due to worldwide buy-in, interest and investment, sensor tech is growing at an extraordinary rate, and massive amounts of real world data are increasingly available; more will be, as demonstrated by the game-changing leaps in point cloud technology (there's your sampling). But those represent the collection of data that isn't readily available. With 3D models (I speak here of both the objects and the processes) already digital, the task now is not to sample (although the data can be resampled) but rather to move or transform the required data to where it is needed.
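Tangentially, since EXR z-space came up: here is a minimal sketch of pulling a depth channel out of a multi-layer EXR with the OpenEXR Python bindings. The file name is made up and the 'Z' channel name is just a common convention; actual layer names vary by renderer.

[code]
import array
import OpenEXR
import Imath

# Hypothetical file name; 'Z' is a common depth-channel name
# but the actual layer names vary by renderer.
exr = OpenEXR.InputFile("render.exr")
header = exr.header()
print(list(header['channels'].keys()))  # list all stored channels/layers

# Read the depth channel as 32-bit floats.
dw = header['dataWindow']
width = dw.max.x - dw.min.x + 1
height = dw.max.y - dw.min.y + 1
raw = exr.channel('Z', Imath.PixelType(Imath.PixelType.FLOAT))
depth = array.array('f', raw)  # width * height depth samples
[/code]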
-
I don't know enough about current display technology to even offer a suggestion (which in theory could be beneficial, because I don't know what can't be done). Somewhat related: I found this to be a nice introduction to the actual process of subdivision: http://www.rorydriscoll.com/2008/08/01/catmull-clark-subdivision-the-basics/
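For anyone who wants to experiment before reading the article, here is a minimal sketch of the first two point rules that introduction covers (face points and edge points) for a quad-only mesh. The mesh representation is my own assumption; interior edges only, with boundaries and creases deliberately ignored.

[code]
# One partial Catmull-Clark pass over a quad mesh (interior only).
# verts: list of (x, y, z) tuples; faces: lists of 4 vertex indices.

def face_point(verts, face):
    # Face point: average of the face's corner vertices.
    corners = [verts[i] for i in face]
    n = len(corners)
    return tuple(sum(c[k] for c in corners) / n for k in range(3))

def catmull_clark_points(verts, faces):
    face_pts = [face_point(verts, f) for f in faces]

    # Collect the faces adjacent to each (undirected) edge.
    edge_faces = {}
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces.setdefault(frozenset((a, b)), []).append(fi)

    # Edge point: average of the edge's two endpoints plus the
    # adjacent face points (two of them, for interior edges).
    edge_pts = {}
    for edge, adj in edge_faces.items():
        a, b = tuple(edge)
        pts = [verts[a], verts[b]] + [face_pts[fi] for fi in adj]
        edge_pts[edge] = tuple(sum(p[k] for p in pts) / len(pts)
                               for k in range(3))
    return face_pts, edge_pts
[/code]

The rule for repositioning the original vertices, (F + 2R + (n - 3)P) / n (F = average of adjacent face points, R = average of adjacent edge midpoints, P = original position, n = valence), and the reconnection step are omitted for brevity. The key takeaway, echoed elsewhere in this thread, is that after reconnection every new face is a quad.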
-
Rendering directly from splines and patches (and the word 'perspective' here is meant to have a dual meaning, as that approach would allow for rendering in more than two dimensions; i.e. holographics, immersive displays not built with modern day hardware, projection, 3D printing without print heads constrained to a single plane, etc.). In other words, not what we are constrained to now but where the future of computer graphics will be. Also of note, a thought I should have added above: the process of tessellating can be very (computationally) expensive. There are considerable benefits to removing the extraordinary from an (initial) equation. Or better yet, leveraging those extraordinary artifacts to better understand how to make the final product even better.
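To make 'rendering directly from patches' concrete, here is a minimal sketch that evaluates a bicubic Bézier patch at an arbitrary (u, v) with no tessellation step, so the surface can be sampled at whatever density the output device wants. The 4x4 control grid is a stand-in, and A:M's Hash patches use their own basis rather than plain Bézier; this illustrates the principle, not A:M's math.

[code]
# Evaluate a bicubic Bezier patch directly at (u, v) -- no
# triangles produced, so screen, printer, or hologram can each
# choose its own sampling density.

def de_casteljau(p0, p1, p2, p3, t):
    # Repeated linear interpolation collapses 4 points to 1.
    lerp = lambda a, b: tuple(a[k] + (b[k] - a[k]) * t for k in range(3))
    q0, q1, q2 = lerp(p0, p1), lerp(p1, p2), lerp(p2, p3)
    r0, r1 = lerp(q0, q1), lerp(q1, q2)
    return lerp(r0, r1)

def eval_patch(ctrl, u, v):
    # ctrl is a 4x4 grid of 3D control points.
    rows = [de_casteljau(*row, u) for row in ctrl]
    return de_casteljau(*rows, v)
[/code]

Sampling eval_patch on an adaptive grid (or intersecting rays with it numerically) is what 'direct' rendering amounts to; tessellation is just baking one fixed sampling density in ahead of time.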
-
Well put. From a non-technical view (not constrained to operating within the limitations of current hardware reality) I see this in reverse, because the lines and surfaces must exist before they can be divided, much less subdivided. While the subdivision of patches is required for display via current hardware, if one were to look beyond the 2D displays currently offered, that might offer some fresh perspective. But the fact remains that current display technology must be targeted. Somewhat unrelated: I'd be curious if this display constraint has anything to do with the old methodology of using half of a pixel... not sure how/if that even applies.

This also begs the question... when Hash Inc was working directly with graphics card makers to advance spline/patch technology, was that similar to today's approach or was it something that would still be considered novel technology? Translating/updating the custom MFC classes alone would make such a port undesirable. It will take considerable time and effort for technology to advance to where it can more fully exploit the elegance of splines and patches in virtureal timespace.

Added: It does help to note what type of subdivision surfaces are under consideration; otherwise Catmull-Clark may be inferred. It may be worth noting that after one round of Catmull-Clark subdivision all surfaces are quads. It is only just prior to rendering (to graphics cards that require this) that all the quads are tessellated (I tend to say degraded) into tris. In Catmull-Clark subdivision (at least at Disney/Pixar) the first round is considered a pre-process, the whole purpose of which is to convert/translate tris to quads.
-
I see a vimeo tag in the BB code dropdown but not a youtube tag. I'll see about scrubbing through the various BB tags and adding youtube... although it should be there already. In the interim, I know manually typing "[ youtube ] URL [ /youtube ]"* works. *without the extra spaces and with the actual URL, of course.
-
Here's where I post my lame notice that, due to other priorities, I didn't get my entry submitted. It's just as well, as too many elements hadn't come together yet. After the contest is over I'll share what I was playing at. Good luck to all entrants!
-
I'll check with my family, deconflict on my end and pick a date/time. From there we'll see if that date/time works for you.
-
In the old forum, inserting the video's id would embed the video in the post. Here in the new forum you want to paste the entire URL in between the tags.

old forum: [youtube]ABCDEFG[/youtube]
new forum: [youtube]https://www.youtube.com/watch?v=ABCDEFG[/youtube]

I'll check into having both methods but we'll probably need an assist from Jason to convert the old links.
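For whoever ends up doing that conversion, a sketch of the idea (the pattern and script are my own illustration, not an existing forum tool, and assume the old-style tags wrap a bare video id):

[code]
import re

# Rewrite old-style [youtube]VIDEOID[/youtube] tags into the
# full-URL form the new forum expects.
OLD_TAG = re.compile(r'\[youtube\]([A-Za-z0-9_-]+)\[/youtube\]')

def upgrade_youtube_tags(post_text):
    return OLD_TAG.sub(
        r'[youtube]https://www.youtube.com/watch?v=\1[/youtube]',
        post_text)

# Example:
print(upgrade_youtube_tags("[youtube]ABCDEFG[/youtube]"))
[/code]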
-
I hadn't planned to be in Springfield but... if you are going to be there I might just make the trip. Will you be there and free any time on Wednesday (earlier preferred) or, more optimally, Thursday (anytime)? Tuesday (anytime) might also work well. The important thing would be to get together, even if only for a short visit. You might then say, "Good grief, I never want to see THAT guy again!"
-
I'm not sure how I saw Robert's post. I think I logged out of the forum and voilà... it was there.
-
I must assume you meant to say 'everyone' that matters. I was curious if you wanted to share where you thought the technology was heading in future iterations. SDS isn't moving toward SDS... it is SDS... and it certainly isn't end of life yet. SDS isn't in competition with splines/patches either. It simply brings polygonal surfaces more in line with continuous splines and smooth patches (nonlinear toward linear) and the industry has greatly benefited by this.

There are a few things to consider with regard to SDS. You know a lot about this already but I'm posting the following here for general discussion's sake.

Tri-mesh/quad-mesh
Different subdivision schemes are used for triangular meshes and quad meshes (no surprise there!). The schemes/algorithms are specifically optimized for each because they can ignore/bypass processes that aren't needed, resulting in leaner code, quicker processing, and less waste. Quad meshes can be pre-processed into tri-meshes, which in turn can be processed with tri-mesh schemes. Note that this is one of many reasons why quads are considered superior to tris: a quad can be processed with either type of scheme while the reverse (adapting tris to quads) cannot (well, they can, but in my view it's a lot like comparing bitmaps to vectors). Pre-processing of tris (from quads) can lead to artifacts because the splitting of quad faces cannot be generalized (i.e. without user input the computer must use a 'best guess'; see the sketch at the end of this post). So tri-meshes tend to be inferior both coming and going.

Interpolating/approximating
The various types of meshes are usually defined by sets of control points. Subdivision schemes can smooth the shape of the mesh by inserting new vertices into that mesh. If the original control points are moved, the SDS smoothing scheme generally approximates the shape. If the original control points are maintained, the SDS scheme generally interpolates the shape. With polygonal meshes, approximating schemes tend to be more flexible and produce smoother surfaces, but they are more difficult to modify into a specific shape because the final surface no longer passes through the original control points (in fact those original CPs may no longer exist).

Face/Vertex splitting
Face splitting schemes (primal) split polygonal faces into many (but optimally four) new faces. Vertex splitting schemes (dual) insert a new vertex for each face. Note that dual schemes tend toward slower processing than primal schemes. The Catmull-Clark focus is toward quads.

So what does this mean for the future? The biggest benefit to the industry in adopting SDS was a new emphasis on technology that pushed users toward quad meshes. So the question is still out there for everyone, even those who at present don't particularly matter, to guess: where are OpenSubdiv and similar technologies moving the industry next?
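Here is the 'best guess' problem referenced above in miniature (a toy example of my own, not taken from any particular scheme): splitting a quad into tris forces a diagonal choice, and for a non-planar quad the two choices produce different surfaces.

[code]
# Splitting one quad (a, b, c, d) into two triangles requires
# choosing a diagonal, and nothing in the quad itself says which
# one is right -- hence the 'best guess' artifacts.

def quad_to_tris(quad, diagonal=0):
    a, b, c, d = quad
    if diagonal == 0:
        return [(a, b, c), (a, c, d)]   # cut along a-c
    return [(a, b, d), (b, c, d)]       # cut along b-d

# For a non-planar quad the two cuts give visibly different
# shapes; the exporter/renderer just has to pick one.
quad = (0, 1, 2, 3)
print(quad_to_tris(quad, 0))  # [(0, 1, 2), (0, 2, 3)]
print(quad_to_tris(quad, 1))  # [(0, 1, 3), (1, 2, 3)]
[/code]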
-
Yes, a survey of available tech still falls very short of what HAMR could do many years ago. I tried to look into three.js and similar approaches but they are not (yet) very intuitive and are, quite frankly, painful. In order to even move in the right direction you have to hang up your artistic license and focus on being a technician. Not an ideal situation (we should only need to wear the technician hat when it is ideal to do so), and that certainly is not A:M's approach, which wherever possible leaves the technical stuff in the peripheral view. Still, there is movement in the right direction and more bridges are being built. That's a very good thing.
-
Jason, I know you are still tweaking, but there are forums that don't appear for anyone in the Admin permission set. You might not see this because you are in that set? At any rate, it's not just hard to see those areas (for Admin) but impossible to respond to those who can see them and do post. An example of this would be Robert's recent post to the Forum Assistance forum where he suggests the youtube tag for old posts is broken (I'd link to it but cannot see it when logged in to the forum).

As far as I can tell the new youtube tag is not so much broken as changed: it now accepts full URLs whereas the old forum only used the video id portion of the URL. I believe the new way is preferred for several reasons (easier copy/paste of the URL) but it does unfortunately break the youtube embedding in older posts. In every instance I could, I added a direct link to the video immediately below the embedded video. This was to cover anyone who for whatever reason could not view the video in the actual post.

So two problems here... the most pressing for me being that Admin permissions aren't set correctly, so I can't fix stuff. One solution to the video issue (which I assume may also exist for vimeo, etc.) might be to set up two different tags, for example [ youtube ] and [ youtubeID ]. The first would accept the full URL while the latter would only work with the video's id.
-
It's good to have you with us Alvee!
-
And what exactly do you think SubDiv and such are pushing toward? Funny thing about techniques that don't stand a chance... like all good ideas... they keep coming back. Keep in mind that the whole HAMR endeavor was an effort to degrade splines to where they could be viewed by graphics cards optimized for rendering polygons. A full solution would bypass that unnecessary process and directly interpret the splines so they can be directly viewed on all platforms. If that is the proprietary interpolation technique you refer to then we are in full agreement.
-
That would be great. Nice Tar banner!
-
Need to see more! Sell us multiple copies of A:M with those images! I have a few questions but most aren't relevant at this point. I guess the ultimate question is: who will update/maintain the banner?

My concern (because it was something of a bottleneck issue way back when): if the images and links are only editable by *an Administrator of the server*, that puts a lot of responsibility on that person. I don't recommend this, but it's not my decision to make. Currently the images/links in the banner can be changed by *any Administrator of the forum* with knowledge of how to copy/paste a URL. While it would be best to document the process used to update any banner so that others can get 'er done, I intentionally wrote myself out of the loop when implementing the current banner. The previous rotating images hooked in from A:M Stills and A:M Films were broken... for years... because that could only be fixed by a server admin. One of the reasons I update the banner as often as I do is that I remember those very painful years.
-
I'm afraid I don't understand the question. The term 'rendering' is also a bit too much of a catch-all category. For instance, does your definition of 'rendering' include the process of scanning real world objects? Technically that's not rendering, so you could add that to your list.

Why view anything? I ask this question in all sincerity because it ties in with your previous question. This is the driving force that causes us to want to develop. If we assume by 'rendering' you mean capturing data on a 2D plane, or ostensibly in 3D virtual space, then the term could cover almost anything. But rendering, in older terminology, is more akin to output than input in that it is simply adding form to a shape via light and shade. I've already mentioned 3D printing, and that can be considered an alternative means of rendering: extending the view of virtual objects into real three and even four dimensions of time and space.

What did people do before the advent of modern day gaming? What did they do before watching television? What is the point of innovation? I'll offer a scratch at one possible answer: to go beyond being actively or passively entertained and be creative.
-
Initially I wasn't leaning that way and I'm not sure now either. There are too many things I haven't been able to test properly, but my gut feel initially was that the MTL was somehow interfering with seeing the textures correctly. I'm still leaning toward that. (In A:M it'd be like placing a decal over a surface color and wondering why you can't see the surface color.) It's interesting to note that two different .OBJ converters created two different MTL files: one's material rendered as a spherical white thing while the other was transparent with visible leaves... all based on the same image/texture. This would suggest that the problem is with the .OBJ or MTL file, but the problem on my end was that, regardless, both rendered incorrectly in A:M. (Drat... and here I thought I'd found the solution!)
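For reference, the relevant part of an MTL file looks roughly like this (material and file names invented for illustration). The map_Kd line layers a texture over the base Kd diffuse color, which is exactly the decal-over-surface-color situation described above, and the d (dissolve) line is one place where converters disagree about transparency.

[code]
# Hypothetical leaf material; names made up for illustration.
newmtl leaf_material
Kd 0.20 0.55 0.20      # base diffuse color (the 'surface color')
d 1.0                  # dissolve: 1.0 opaque, 0.0 fully transparent
map_Kd leaves.png      # diffuse texture layered over Kd
[/code]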
-
I'm in a similar situation... allowed myself to get too distracted. I plan to submit my entry in whatever condition it is in tomorrow because I know I won't be able to touch it Sunday. Submitting an entry is actually the most important thing... more important than winning (it's easier to say that when you aren't likely to win!). It gets us in the habit of splining new projects!
-
That almost sounds like a rhetorical question. My answer would be: it depends on what the plugin could do with 3D content. This is a huge oversimplification, but most 3D plugins seem to be focused on three areas these days: realtime rendering, resource/asset management, and 3D printing. Content browsing primarily resides in the latter two but fits well with the first, in that realtime display of the content is often desired to improve the user experience of programs addressing those other two areas.

The downside of plugins is that by their very nature they have historically been proprietary, each plugin having to bridge very wide chasms between otherwise incompatible technology. That would almost be a fourth reason for a content browsing plugin, but I think it's already covered by one or more of the first three. HAMR is certainly no exception in the area of being proprietary, but one must also remember that at the time no one was making spline-compatible 3D browsers. So that would be the fourth reason to invest in some form of plugin for viewing 3D content: to bridge significant technological gaps.
-
The short of it... the 'Tin Woodman of Oz' film was driving a round of software enhancements. Oooo... oooo... Rodney raises his hand. Patches rendering black is usually a sign of inverted normals. That kind of thing was vigorously squashed in the v15 timeframe but even now may occur (although to a much, much lesser extent) if the model's normals are not facing out.
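For anyone curious what 'inverted normals' means in code terms, a toy sketch (my own illustration, not A:M internals): a face normal comes from the cross product of two edge vectors, so reversing the vertex winding order flips the surface inside out.

[code]
# The normal of a face comes from the cross product of two edge
# vectors, so vertex winding order decides which way it points.
# Reversed winding => inverted normal => 'inside out' (black) patch.

def face_normal(p0, p1, p2):
    u = tuple(p1[k] - p0[k] for k in range(3))
    v = tuple(p2[k] - p0[k] for k in range(3))
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

a, b, c = (0, 0, 0), (1, 0, 0), (0, 1, 0)
print(face_normal(a, b, c))  # (0, 0, 1)  -- faces +Z
print(face_normal(a, c, b))  # (0, 0, -1) -- same face, flipped
[/code]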
-
Not only do I have an idea what those do... I have an idea what else we can add, too. It's all (or at least there is a lot of info) in the Wikipedia write-up. (At least in this case) the name of the MTL file is on the first line of the OBJ file.
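That linkage looks like this in practice. A minimal sketch that pulls the MTL name out of an OBJ file (the file name is made up, and strictly speaking the mtllib statement can appear anywhere in the file, not just on the first line):

[code]
# Find the material library an OBJ file points at. The 'mtllib'
# keyword is part of the OBJ format; 'model.obj' is a made-up name.
def find_mtl_name(obj_path):
    with open(obj_path) as f:
        for line in f:
            if line.startswith("mtllib"):
                return line.split(maxsplit=1)[1].strip()
    return None

print(find_mtl_name("model.obj"))  # e.g. 'model.mtl'
[/code]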