Hash, Inc. Forums

Benchmark 1.0 (A Proposal - Revisited)

Recommended Posts

By way of establishing a useful and effective benchmark that helps A:M Users produce a short film or movie I submit the following for consideration:


Any benchmark undertaken should help to advance an A:M User toward the creation of their image, short film, series or feature.


Any benchmark in the 1.0 series should be A:M-centric. (Comparing A:M to other programs is not the goal but comparing A:M to itself on various systems will reveal useful information that can be adjusted and optimized).


The fewer variables in a benchmark the better. With these considerations in mind we can move toward a proposal for the first 'official' A:M benchmark. Note that this would be the second series of benchmarks, because the first would use only the default settings found in A:M, as determined by selecting "Reset Settings" in the A:M Help menu and then performing and measuring a stated task.


The default benchmarks (right out of the box) would likely be termed Benchmark 0.0. Through effective benchmarking that progresses the A:M User toward their goal, all variables can be known, their effects measured, and everything altered to suit the needs of the user. Any subsequent benchmarks beyond the default should be meticulously documented with all variables identified to the maximum extent possible (an unknown variation may still be within control limits but is effectively out of the user's control).


Where possible a default file should be shared with those who will conduct the benchmark, and for testing purposes it would replace the default in A:M (example: replacing the default.cho file).


Assets required for a particular benchmark can be narrowed in scope to a Project file, Choreography, Model, Action, Material, etc. to benchmark each of those areas. It is hoped the best benchmarks would then be incorporated into A:M as optimized procedures which, when run automatically (at installation or another time specified by the user), establish a realistic foundation from which to project the user's production schedule.


Note: Wherever possible the introduction of variables within the Benchmark should be used as an opportunity to learn. (Example: telling A:M what file format to render to, where on the user's hard drive or network to render to, and then padding the filename with zeros in order to optimize the sequential order of images produced.) These represent three user-specified variables that will have an effect on the user's production schedule.


A proposal for the first 'useful benchmark' beyond default settings tests would be to render a two minute movie (from a project file provided). Assets rendered from this benchmark would be used to assist in the creation of the user's own two minute movie. (This setup can then be easily altered to target still image (single frame), 30 minute short, 90 minute movie, etc.)


hmmm... I think a benchmark should be a fairly low-impact task for anyone to run. Associating it with some other goal seems a departure from its purpose.



On Benchmarks... I was trying to come up with a way to judge video card performance. I haven't figured out how to do it without testing all the cards on the same machine.


My first foray into benchmarking produced:


Title: Background Plate Benchmark (Default Cho) Mini

Frames: 601 (equivalent to a 20+ second 'mini' movie)

Render Time: 1 minute 37 seconds


Lots of things not yet ideal nor optimized.


What I believe this benchmark to show:

The results suggest that (unless further optimized) it can take no less than 1 minute and 37 seconds to render this 20+ second 'mini' movie on my computer.




Results from the second run of this particular benchmark but scaled to VGA are in.

Interestingly, A:M produced the same number of frames at the larger VGA size in less time (10 seconds less).


Title: Background Plate Benchmark (Default Cho) VGA

Frames: 601

Time: 1 minute 27 seconds


Conjecture: Decrease in rendering time possibly due to more memory being available to A:M at that time.

hmmm... I think a benchmark should be a fairly low-impact task for anyone to run. Associating it with some other goal seems a departure from its purpose.


Trust me, low impact is the goal here. The whole point is to make it as low-impact as possible to promote optimization.

All that I am suggesting is that a smart benchmark can produce something that is useful beyond the benchmark itself (a reuse methodology).

If the benchmark is successful it could then be incorporated into the workflow, with a new benchmark replacing it.


I understand the old way of Benchmarks enough to know that they target the technical.

As such my proposal has three core series:


0.0 Default Benchmarks

1.0 Production Benchmarks

2.0 Technical Benchmarks


Of course it could have more, but three areas would not only be optimal in and of themselves but would mesh well with the whole idea of Preproduction, Production and Post Production.


Specifically, in Preproduction we establish (or reestablish) the defaults. A strong foundation to build upon.

In Production we execute the plan. Where there is no production plan we will fail or falter in reaching our production goal(s).

In Post Production we refine our products, improve our presentations and thoroughly establish new benchmarking based on successful standards that have proven to work. Then the cycle can start anew.


In the Default Benchmarks anyone with A:M can test things out just by opening A:M and executing a task and recording the results.

This is primarily useful at the programmer's level (Steffen certainly has his own benchmarks).


The Technical Benchmarks (which I believe are the purpose you ascribe to benchmarking) tend to focus on hardware optimization.

These are the cold, hard facts that only technical minds enjoy. Their focus is also on those who have the best and most optimized equipment. As these benchmarks identify optimized hardware, the vast majority of A:M Users cannot fully take advantage of them without considerable expense.


What I propose attempts to bring the user, the production and corporate knowledge of A:M fully to the fore in order to get into the production process itself. This is an area of analytics that is often misunderstood and undocumented. It directly affects the user, who otherwise may unknowingly be working against themselves, and it forms a framework for establishing realistic expectations, production schedules and time management. It also provides a method of optimizing through recycling and waste management.


Somewhere along the way computers wrested benchmarking away from other areas of interest. The goal in even hardware-centric technical benchmarks is to measure productivity. It does little good to optimize the hardware only to see that optimization neglected (or bypassed entirely) due to random user-defined production criteria that can only be optimized through training. I base this on a belief that A:M Users want to produce animated products more than they want to optimize their hardware. Because there are no benchmarks that measure productivity I have little evidence to support or to suspend that belief.

Interesting. Do we have a default chor to render out? If you have one available I'd be glad to do some renders for benchmarking.


Every installation of A:M has a default chor.

Just render a two minute movie with it.


For a first variable I would suggest:


Removing the default Ground Plane

There are two (primary) ways to remove the default ground plane

1 ) Select the Ground Plane from under the Choreography and delete it (Delete key on keyboard)

2 ) Inactivate the Ground Plane by opening the Ground Plane and setting the Active attribute to 'Off'.


After any changes are made the Project should be saved (to an appropriate benchmark name) to ensure the settings are locked down.

The resulting file can be shared with others who wish to test the same benchmark on their system.



Added information that few will be interested in:


The products this benchmark will produce are of the following nature:


1) Backgrounds

2) Placemarks (Proxies)



Rendered backgrounds can be reutilized and should be saved for reuse. Re-rendering the same background over and over again with no changes is an example of poor optimization.



Rendered backgrounds can be reused and replaced in other scenes.

They can also serve as proxy frames pending replacement (Ref: Animatics)

The placemarks can be rendered at 1x2 with the alpha channel on to be completely transparent (a faster method would simply duplicate the original frame to the desired time length).

When rendering to PNG format (or other format with transparency) the placemark can remain in place or be replaced as the subsequent/improved renderings arrive to fill in that space. PNG is currently optimal for use with HTML overlays. TGA is optimal for legacy compositing and interoperability. EXR is optimal for single image data (and with EXR 2.0 depth information is stored in the file as well).

Alrighty. I'll render out a test film with different settings and post my findings here along with my specs from my machines.



You just made me realize that this will help in many ways more than originally anticipated.

For those interested in the process of benchmarking itself it's important to note that there is a Point of View (POV)... or perspective if you prefer... in every test.

This is the bias that we need to account for in every single benchmarking test.

It's also one that is extremely hard to get.


But... this is where it gets really cool!


There are two primary POVs on any given benchmark (three really but bear with me here).

Your POV (Local)

Others' POVs (Global)


Ultimately, the only benchmark you are personally interested in is your benchmark, the one that radiates outward from your POV.

When we continually compare our benchmark with other people's benchmarks we are getting a skewed picture that does not (cannot) fully apply to our own effort.


It's good to keep both in mind but we need to focus on those variables that we can control at our production level.

To step out of the local arena is to assume a different level of responsibility.

Where frustration really starts to form and conflicts arise is where the level of access is not appropriately matched to the level of responsibility.

In short, if you do not have access to the global resource how can that possibly help you locally?

Therefore, proper benchmarks at the local level must focus on variables at the local level. All others may be informative but they may or may not relate. Where they relate is at the crossroads of where the global and local levels meet.


That was a long way to get to this:


Given this, anyone can post a benchmark from the local level and someone else can test it at the global level.

The only thing remaining then is to loop in feedback so that the differences can be compared and contrasted.


But that's a bit broad for the present.

The goal is to reduce variation within a specified control.

Where possible that control should be user specified with optimal and suboptimal results recorded and shared globally.

But how does one determine what is optimal? For that we need some kind of control.

The point of global benchmarking then is to demonstrably show others what (optimally) works.

The feedback loop can then confirm the optimization and the production cycle can begin anew.


Note: A project file need not move from the local level to the global unless/until it is seen to be out of control or is highly optimal. The concerns in between can (usually) simply be monitored as they move forward satisfactorily on schedule. Trivial matters then enter the realm of Research and Development, where people purposefully break things in order to explore and innovate.


But what is accepted as being in or out of control?

I submit to you that in cases where things fall out of control the first comparison to use would be that of running tests at the 0.0 Default level of Benchmarking. This compares the current results to the Norm while endeavoring to return toward the default to the maximum extent possible. This can then also be compared and confirmed with the 2.0 Technical level of benchmarking to ensure the production is operating at peak capacity while ever seeking higher levels of optimization by thinking through the process.


At level 0.0 there is no thinking, just do the default and require everyone else to do it too.

This is how 'global reality' works. How closely does your 'local reality' align with it?


At level 1.0 something new is produced.

Variables are added to the default that can be compared and contrasted with the default.

The product isn't optimized at this stage; it is produced.


At level 2.0 further optimization is secured with an eye toward the future.

Things get seriously broken. Learning is afoot.


601 frames is only 20 seconds.


At any rate, with your settings as stated it took 3 minutes to render out on my i7 with 16 gigs of RAM.


Shadows On

Multipass Off

Reflections Played no Part so I won't mention them

601 frames is only 20 seconds.


Good catch. In my haste I messed that one up.

I'll correct it in my post above.


The numbers are important in benchmarks.




Wait a second!!!


You know what? hehe. Twenty seconds isn't right either!

Although it would be right at 30 frames per second.


What isn't given here is that the frames rendered can result in various running times. It just depends on how those frames are spaced.

But that isn't what I want to talk about here; I just want to get into the math of it.


601 frames divided by 24 frames per second gives 25 seconds (and change, because of that extra frame).

Had I been thinking in frames versus SMPTE (or perhaps cels) I would have realized that I was rendering an extra frame.

What I should have rendered was 600 frames which would have yielded exactly 25 seconds at 24 frames per second.


For 600 frames to be displayed in two seconds... that'd be 300 frames per second.

Now that would be some serious frames zooming by...
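For anyone who wants to double-check the arithmetic in this exchange, here is a quick Python sketch (the function name is mine, nothing from A:M):

```python
def frames_to_seconds(frames, fps):
    """Running time in seconds for a given frame count and frame rate."""
    return frames / fps

# 601 frames at 24 fps is 25 seconds "and change" (that extra frame).
assert abs(frames_to_seconds(601, 24) - 25.0417) < 0.001
# 600 frames at 24 fps is exactly 25 seconds.
assert frames_to_seconds(600, 24) == 25.0
# 600 frames shown in two seconds would indeed mean a 300 fps rate.
assert frames_to_seconds(600, 300) == 2.0
```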


Is your fps set to 30fps in the Project Properties?


601 / 22 = 27.3181 on my calculator.


Aside: I find it interesting that number is right smack dab in the middle of 24 and 30.

That is telling us something. At a glance it looks like the difference of one frame extra per second.


A general breakdown:

25 seconds at 601 frames = 24.04fps

24 seconds at 601 frames = 25.0416 fps

23 seconds at 601 frames = 26.1304347826086956521739 fps

22 seconds at 601 frames = 27.3181 fps

21 seconds at 601 frames = 28.619047 fps

20 seconds at 601 frames = 30.05 fps
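The breakdown above can be regenerated in a couple of lines of Python (values rounded to four decimal places rather than truncated, so the last digit may differ slightly):

```python
# Reproduce the breakdown: 601 frames spread over 20 to 25 seconds.
for seconds in range(25, 19, -1):
    print(f"{seconds} seconds at 601 frames = {601 / seconds:.4f} fps")
```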


Note: I am trying to make a distinction here between Feet Per Second (FPS) and Frames Per Second (fps) by using the lower case 'fps', as the number of frames that goes by the aperture of a camera can vary. One foot in traditional animation terms is pretty much set in stone at 16 frames.


Here's an interesting website that covers some of this information:




Marcos understands the higher (and lower) frame rates that drive innovations such as MUFOOF. :)


Here's a response from Don Bluth after minds got confused on the issue:

Wow! When did technology get so complicated? All the 35mm movie projectors in the theaters run at 24 FPS. There are a few showcase theaters that do their own thing. That's it. 24 FPS is the general standard for movies. If you shoot and edit a movie in video (30 FPS), it will still have to be converted to 24 FPS to accommodate the theaters' 35mm projectors.


Some time ago one fellow decided to shoot his movie at 96 FPS to increase image clarity and get rid of motion blur. It worked all right, but he still had to skip-print the footage to get it down to 24 FPS.


When we animated "Anastasia," we worked in PAL 25 FPS. In the conversion process, I understand that they simply threw one frame away. All the animators worked to the standard 24 FPS but their dialogue and music tracks had been sped up by one frame. In the film version of Anastasia, it is slightly faster by 1 FPS. Good discussion guys. Think 24 FPS.


Of course the standard for video is closer to 30fps than 24fps and that is largely due to the need to bring down labor cost/time.

An animation on 2s, for instance, simply doubles all the frames, so all you need to do is create/draw/render the odd-numbered frames and repeat them for the evens.

A slower action could be on 4s which means one original and three copies.

An even slower action might be on 8s which really slows things down.

Reviewing on 8s is a good standard because otherwise the action is too quick to review.
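The "on 2s / on 4s / on 8s" idea amounts to simple frame duplication, which can be sketched like so (a toy example of mine, not any A:M feature):

```python
# Shooting "on 2s": each drawing is held for two frames of the final footage.
def shoot_on(drawings, hold):
    """Repeat each drawing `hold` times (on 2s -> 2, on 4s -> 4, on 8s -> 8)."""
    return [d for d in drawings for _ in range(hold)]

second_on_2s = shoot_on(list("ABCDEFGHIJKL"), 2)  # 12 drawings
assert len(second_on_2s) == 24                    # fills one second at 24 fps
assert second_on_2s[:4] == ["A", "A", "B", "B"]
```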


David Nethery (whose website is worth checking out) has this to say:

FACT: The industry standard is 24fps. All the classic films you'll study and the books that you will study (Illusion of Life, Preston Blair, Don Bluth, Eric Goldberg, Richard Williams, etc.) assume timing based on 24fps.


Again, he's talking about traditional animation. Some conversion often takes place in order to move to video at 30fps.

But note that everyone in the traditional art will 'automatically assume' you are animating at 24fps.

This changes with computer animation in that 30fps is often easier to use in math. ;)


Everyone should take a moment to recognize the similarity between bits and bytes and traditional animation.

It is more than just coincidence.


Fun stuff this is. :)


I'm straying a bit off topic but this is fun stuff.


Something that I didn't understand before was how the xsheets and those timing charts on the extremes were read.

That is real gold when trying to analyse classic animation sequences.


As a for instance, let's consider our standard timing in SMPTE format (00:00:00).

While this equates to hours, minutes and seconds it also equates directly to frames.



00:03:04 is the equivalent of 3 seconds (feet) and 4 frames.

00:20:01 equates to 20 seconds (feet) and 1 frame.


What then does 601 frames equal (in feet)?


00:20:01 (in SMPTE) which is equivalent to 20 seconds and 1 frame.
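The seconds:frames reading above is just integer division with a remainder; a minimal sketch (assuming 30fps, which is what makes 601 frames come out to 20 seconds and 1 frame):

```python
# Convert a frame count to the seconds:frames reading used above.
def to_sec_frames(frames, fps=30):
    return divmod(frames, fps)  # (seconds, leftover frames)

assert to_sec_frames(601) == (20, 1)   # i.e. 00:20:01
assert to_sec_frames(94) == (3, 4)     # i.e. 00:03:04
```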


Is everyone still with me here?


Where SMPTE gets dicey... and people tend to get lost... is where the numbers have to deal with the 'set' frames per second (usually 24 or 30fps). Thusly...


What comes next in the sequence?:








Select the following text with your mouse to see the answer:



The answer is 00:19:00 if at 24fps.
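The roll-over behaviour that makes this dicey can be sketched in a few lines (my own toy function; the frame field counts 00 up to fps-1 before the seconds change):

```python
# Step a seconds:frames timecode back by one frame at a given frame rate.
def prev_frame(seconds, frames, fps=24):
    if frames > 0:
        return seconds, frames - 1
    return seconds - 1, fps - 1  # roll the frame field over

assert prev_frame(19, 1) == (19, 0)    # 00:19:01 -> 00:19:00
assert prev_frame(19, 0) == (18, 23)   # at 24 fps the frames run 00..23
```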


00:03:04 is the equivalent of 3 seconds (feet) and 4 frames.

00:20:01 equates to 20 seconds (feet) and 1 frame.


What then does 601 frames equal (in feet)?


00:20:01 (in SMPTE) which is equivalent to 20 seconds and 1 frame.


Is everyone still with me here?




I'm pretty sure it took a foot and a half to make one second at 24 frames per second. I think there were 16 frames per foot in 35mm film.

I'm pretty sure it took a foot and a half to make one second at 24 frames per second. I think there were 16 frames per foot in 35mm film.


Yes, indeed. 16 plus 1/2 of 16 (8) is 24.
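The feet-and-frames bookkeeping follows directly from those two numbers; a quick sketch:

```python
# 35mm film: 16 frames per foot, so 24 fps is a foot and a half per second.
FRAMES_PER_FOOT = 16

def frames_to_feet(frames):
    return divmod(frames, FRAMES_PER_FOOT)  # (feet, leftover frames)

assert 24 / FRAMES_PER_FOOT == 1.5          # feet of film per second at 24 fps
assert frames_to_feet(601) == (37, 9)       # 601 frames = 37 feet 9 frames
```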


You have more than good reason to be assured. :)

I'm kind of lost on why there are 600 frames in a benchmark.


That wasn't so much a benchmark as it was just to get the ball rolling.

The 'perfect' benchmark would likely render a series of sequences to account for default settings and then adjusted settings (thereby forming a way to measure what those settings are 'computationally worth').


Initially I set about rendering 600 frames at 24fps but it looks like I forgot to account for frame zero.

Then I typo'd minutes for seconds... and yikes... doesn't that make a difference.

But if others are learning from my mistakes that can be a good thing.


The neat thing about 600 frames is that it is nicely divisible by 24, 25 and 30fps.

That 601st frame really got things rolling.
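That divisibility is easy to verify:

```python
# Why 600 is a handy frame count: it divides evenly by the common rates.
for fps in (24, 25, 30):
    assert 600 % fps == 0
print({fps: 600 // fps for fps in (24, 25, 30)})  # seconds at each rate
```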




Without specifically remembering, I believe easy math was my initial goal.


So perhaps I instinctively rendered 20 seconds of animation at 30fps and only lucked out that it was so close relative to 24?

In other words, I just got lucky. ;)



Attached is a PDF file I generated from an Excel spreadsheet I just threw together that attempts to get a grasp on frame optimization.

Assuming no typos, it seems to me that when switching over to 30fps things really start to break with regard to keyframe optimization. (30fps has advantages, but one of them is not syncing with animation whose extremes were originally drawn on 4s, 8s, 12s, 16s, etc.; 30fps simply doesn't align well.) I haven't studied the 3:2 pulldown enough to understand its full effect on optimal conversion of keyframes, but a cursory view seems to indicate an attempt to target keyframes that will maintain a semblance of the original performance.
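For reference, the basic 3:2 pulldown idea can be sketched as follows. This is a simplified model of the field pattern (ignoring interlace details), not anything A:M does:

```python
# Rough sketch of 3:2 pulldown: four film frames become ten video fields
# (five interlaced frames), stretching 24 fps film to ~30 fps video.
def pulldown_32(film_frames):
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))  # hold 3, then 2
    return fields

fields = pulldown_32(["A", "B", "C", "D"])
assert len(fields) == 10  # 4 film frames -> 10 fields -> 5 video frames
assert fields == ["A", "A", "A", "B", "B", "C", "C", "C", "D", "D"]
```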


Disclaimer: I do not suggest that the attachment will in any way help you understand what I'm going on about here, but it's a raw data dump of numbers that I felt like typing into Excel to make sure I understood it.


Wouldn't you want to render fps according to the playback device and format?


This is a given in the benchmark itself as the device or format used is simply the one you want to benchmark.

In this way we compare apples to apples and not apples to bowling balls... or, more likely if we aren't paying attention to the variables, Jonathan apples picked at the prime of their season versus Golden Delicious apples picked far too early in their season.


So the answer is 'yes', you want to identify what device and format is being used in the benchmark.

Ideally this would be a set standard and only need to be expressly identified if/when that standard was deviated from.


As a for instance, in my previous benchmark it would have been good to specify I was rendering to PNG but with no Alpha Channel.


Rendering is something I'd love to spend some time researching, because I think there are massive savings to be had or... in a perfect world... the concept of 'rendering' itself would become somewhat obsolete. That is to say, rendering would be obsolete in the sense that the user would no longer know that 'rendering' was occurring... or if they knew, wouldn't particularly care. That is the revolutionary promise of technology and why we use computers: to eradicate, or at least radically reinvent and revolutionize, our understanding of time and space.



Impressive. Nice demonstration on the theory of "0" inclusive.

I'd like to hear more of what you think about this.

Who knows, perhaps we can discover ways in which it isn't strictly theoretical.


As I see it, Benchmark 1.0 strives to work with what is already given, yet does this while accounting for considerable, even unwieldy, variation.

This benchmark is primarily of use to the one conducting the benchmark and as such is not (ideally) global in nature. Each person would have to conduct the benchmark themselves and compare against their own benchmarking consistently... and not always look over at the other fellow's achievements. Benchmark envy is to be avoided if you are to maximize use of what you already have.


This is not to say that a good benchmark cannot inform the global interpretation of information, but by itself it cannot account for the vast complexities of unknown variation. What sharing personal benchmarks does, then, is allow us to mark, and therefore benchmark against, the relative changes in other systems and consider those in light of our own system. In this way we can recognize the approaches that best benefit on the global scale and effect a similar change at the local level.


It should be noted that the benchmark is not software/hardware agnostic but is highly dependent upon knowing the original configuration or, if the original is not well known, striving to maintain that original configuration. This is at odds with what most benchmarks attempt to do. They actually encourage variation when the goal is to reduce variation! The more that is known about the original, the more accurately the benchmark will reflect measures of change. However, that does not suggest the original cannot be in a constant state of change itself. It is assumed to be changing, but as long as the configuration doesn't change in any significant way, limited variation is maintained. This is like adding a component to a computer and then testing it, then adding yet another component and testing again. With each new test a new level of qualitative and quantitative information is retained. But even here, the poor data is not seen as waste... but as very useful information!


To better account for variation, then, the person conducting the benchmark simply tracks 'things' as they are introduced, removed or changed.

For instance, if more memory is added to a computer, that new variable can be expected to shorten product cycles in the benchmark.

Failure to see time shortened after increasing memory might therefore clue us in to the fact that the system being tested was already operating on maximum memory and increasing memory, without some other change, will not be advantageous. We then know to move our observation and testing to some other critical node in the system.
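This kind of tracking could be as simple as a run log that stores the configuration next to each result. A hypothetical sketch (all names here are mine, purely illustrative):

```python
# Hypothetical benchmark run log: record the configuration alongside each
# result so a change in render time can be traced to a known variable.
runs = []

def record_run(label, seconds, **config):
    runs.append({"label": label, "seconds": seconds, **config})

record_run("baseline", 97, ram_gb=8)
record_run("after RAM upgrade", 97, ram_gb=16)

# No speed-up after adding memory hints that memory was not the bottleneck,
# so testing should move to some other critical node in the system.
baseline, upgraded = runs
assert upgraded["seconds"] >= baseline["seconds"]
```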


Production is like this. We might think that one element of a production is failing and with all good intentions we set out to correct the deficiency. But without adequate information we are at least as likely (better than a 50/50 chance) to do more bad than good in the exchange. We just might not know the difference until it is too late.


Next up I'd like to discuss Baselines:


In general a baseline's objectives are:

- Determine current status (is the system operating optimally)

- Compare the current status to standard performance guidelines (what is operating sub-optimally or exceeding expectations)

- Set thresholds/breaking points for when the status exceeds guidelines (if for no other reason than to notify the system that the guidelines need to be expanded or modified).
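Those three objectives could be condensed into a single check; a minimal sketch, with the threshold value chosen arbitrarily for illustration:

```python
# Sketch of the baseline objectives: classify a measured render time against
# a standard guideline, flagging when it crosses a chosen threshold.
def check_baseline(measured, guideline, tolerance=0.25):
    if measured <= guideline:
        return "optimal"
    if measured <= guideline * (1 + tolerance):
        return "sub-optimal"
    return "out of control"  # guidelines may need expanding or modifying

assert check_baseline(10, 10) == "optimal"
assert check_baseline(12, 10) == "sub-optimal"
assert check_baseline(20, 10) == "out of control"
```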

  • 4 years later...

I'm revisiting the subject of Benchmarks in light of reviewing the exercises from TaoA:M.

As such if anyone else is interested in benchmarks I'm interested in their feedback.


As Robert mentions above most benchmarks are technical in nature and follow the path of hardware testing.

That isn't my focus here... although hardware testing is certainly seen as the most important part of benchmarking.

But that is only where the hardware is changed.

The benchmarking I am most interested in removes (or at least isolates) variables introduced by hardware.

The benchmark then becomes a tool to explore 'software' with a goal toward user controlled optimization.

Bottom line: For those that don't want to change their hardware benchmarking can still be useful.


An example of the type of benchmarking I am interested in might be demonstrated by a recent render of Exercise 1 from the TaoA:M manual.

The render got to 100% rather quickly but then spent the better part of an additional minute finishing the render.

Odd. A frame that normally would take mere seconds took 1:57 to render.

How long should that frame from Exercise 1 take A:M to render?


I suspect with all of the various installations and programs running as well as recording my screen while A:M was rendering that very little memory was available for the frame to be rendered.


Potential suspects include:

- A heavily customized installation of A:M (best to get back to a clean installation)

- A system that hasn't been shut down in longer than I can remember. (lots of stray stuff still lodged in memory).


Taking that 1:57 render as a loose benchmark, it should be easy to improve upon and refine as a personal benchmark for my current system.

I anticipate that I should be able to get that frame to render in 1 to 3 seconds.

Watch this space and we shall see. :)


After clean install...


Keekat rendered to 100% in 16 seconds.

It then took another 20 seconds for the image to appear and UI to release control back to me.

Total time: 36 seconds


That's still too high so I'll be investigating some more.


This has me wondering if writing to disk is gobbling up the majority of the time, as the numbers would seem to be finished crunching at that 16-second mark where A:M displays rendering at 100%. I further assume the rendered image gets displayed in A:M only after the image is successfully saved to disk and never before, so the delay from hitting 100% to finish is surely the time spent confirming the image was written to disk.
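One hypothetical way to test a compute-versus-write split outside of A:M is to time the two stages separately (this is a generic sketch, not a measurement of A:M itself; the 500x500 buffer is an arbitrary stand-in):

```python
# Hypothetical split of a render benchmark into compute time and
# write-to-disk time, to probe whether saving the file accounts for
# the delay after a renderer reports 100%.
import os
import tempfile
import time

start = time.perf_counter()
pixels = bytes(500 * 500 * 3)        # stand-in for a rendered frame's data
compute_done = time.perf_counter()

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(pixels)
    path = f.name
write_done = time.perf_counter()
os.remove(path)

print(f"compute: {compute_done - start:.4f}s  write: {write_done - compute_done:.4f}s")
```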


Added: v18 Testing

Total time (including write to disk): 10 Seconds



Reducing variables* v19 now appears to be rendering at the same rate: 10 Seconds



*The primary variable is always going to be the user's preferences, and this would appear to be the case here as well. Making sure A:M is set to render using the Camera (and not the Render Panel Dialogue) in all tests eradicated the majority of variables and appears to have been the culprit in extending the Keekat render from 10 seconds to 1:57. That's pretty minor for a single frame but can prove to be a significant difference when rendering large sequences.


I still think I should be able to get Keekat to render in 1 to 3 seconds. ;)


I'll get my hands on a Ryzen 7 CPU (1700 or 1700x I guess, maybe overclocked) in the next few weeks to a month and will post the results here too :)...

See you

I'll get my hands on a Ryzen 7 CPU (1700 or 1700x I guess, maybe overclocked) in the next few weeks to a month and will post the results here too :)...



With that setup I'm thinking you should be able to get Keekat rendered in less than one second! :)


In other news: I'm zeroing in on a possible render benchmark that derives from/includes sound/audio.

The audio is the cartoon mix (combining all the audio effects from the old A:M CD into one wav file) posted to the forum elsewhere.

This equates to just over 2 minutes' worth of screen time (approx. 4000 frames).


The minor benchmark (0.0 in this set) might be to render that length out to a 1x1 image (PNG sequence without alpha channel) with all the various settings turned off in render options.

This would give us the absolute lowest boundary (read: unrealistic) expectation for rendering that 2 minute sequence on our system.

*If* we ever beat that benchmark in production we know we have succeeded beyond our expectations.... and very likely need to create a new base benchmark to better inform future production planning.

From that foundation we then build additional benchmarks that measure projects with increased (read: viewable) resolution, fill space with interesting objects and target the outputting of pretty images.

  • 4 weeks later...

Speaking of benchmarks, milestones and such...


Wouldn't it be useful if we could benchmark the interests of A:M users in the forum?

Rhetorical question. Of course it would.


Here's some super secret stats of topic views over the past month.

Shhhh... this kind of benchmarking stuff is super secret and no one should know of it.


So, what do these statistics tell us?

Well, first and foremost it means I need to get back to work on getting the Tinkering Gnome's Workshop subforums operational (most have probably not ventured into 'The Unhidden Library' and 'The Unlocked Door'). Who knows what the Tinkering Gnome has been up to in those subforums.


In other words, a topic high on the viewing list should at least be studied as an area of popular interest with an eye on further promoting and exploiting... yes, exploiting... that interest.

And what exactly is it that A:M Users are drawn to repeatedly in those storied topics and posts?

And of course the Tinkering Gnome should be congratulated for his continuing dedication to our community. :)


This doesn't mean that topics without maximum views aren't important. Of course they're important; someone took the time to post them.

And we should remember that while stats should never drive our decisions, they can certainly inform them.






If we take a little longer view (say over the past four years) we might get a slightly different view:

(yes, data intentionally cropped out... the number of views being not quite as important as what was viewed. For the curious, the top number is 375787 and everything thereafter is lower)

If your topic is on the listing PM me and I'll give you the exact number of views (although most of that can be derived directly through the public forum).

Stats Longer View.jpg


Wow...papa near made the grade, though at the very bottom! Guess I need to get back to work and post more. Trying to split time between work, this forum, my animation aspirations and my personal blog makes for some serious time crunches though.


Thanks for the feedback guys.

I'm excited by the stats because they validate that our forum is meeting the needs of its members.

Of course there is always room for improvement.



Wow...papa near made the grade, though at the very bottom!


Hardly! The list continues downward; Papa Bear made the first page! So you and anyone who sees themselves represented should consider themselves near the top.

To say it another way, consider that if the list had every forum and subforum listed it would have well over one hundred entries.

And that is a conservative estimate. If we include the forums that are direct links to other areas (such as the Quick Links at the bottom of the main forum) there are technically over 400 forums.


Funny, the SDK section is pretty high on the first list.


I find the SDK's placement very telling. People genuinely are interested in contributing to the A:M ecosystem, building bridges, enhancing features, etc.


Of course, if one considers views in isolation we might get a false impression.

There is a certain number of views generated simply by someone posting into a topic. Subsequent views are largely those of other forum members doing their due diligence: checking out the new post to see if (or how) it might apply to their current or anticipated interests. One must take care not to infer a direct correlation where none might exist.

I realized this a few years ago when I noticed that a topic or forum I was posting in was getting a lot of views, but as I considered it I realized the views were mostly being generated by the fact that I was posting new content for others to view. Taking this into consideration, we might have to weight these stats a little to account for new releases.

In the case of the SDK forum I have no doubt that some of the popularity relates directly to the fact that some guy named Nemyax was diligently posting updates about work on a plugin that enabled Blender-to-A:M transfer (and vice versa). Even a member with no direct interest in a forum might check out a post there to keep up with what is going on. I recall a lot of interest in Malo's postings over the past few years as well. The subject matter was genuinely interesting and relevant, even to those who will never open the SDK or write a single line of code.

We should also account for the fact that I personally read every post, so the number of views per topic should probably start its count one point lower, multiplied by the number of posts.
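The weighting idea above can be sketched as a tiny adjustment function. The discount rule and the post count in the example are illustrative assumptions, not forum-derived data; only the 375787 top view count comes from the list earlier in the thread.

```python
def adjusted_views(raw_views, post_count, guaranteed_readers=1):
    """Discount views a topic generates simply by having posts.

    Assumes each new post draws roughly one 'due diligence' view per
    guaranteed reader (e.g. a moderator who reads every post), so the
    adjusted count subtracts post_count * guaranteed_readers. The
    weighting factor is an illustration, not measured forum behavior.
    """
    return max(0, raw_views - post_count * guaranteed_readers)

# Hypothetical example: the top topic (375787 views) with 200 posts
# and one reader guaranteed to view every post.
print(adjusted_views(375787, 200))  # 375587
```

A fancier weighting might also discount per-release bumps, but even this crude subtraction helps separate genuine interest from activity-driven views.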


I'm not trying to bring lofty thoughts of relevance down but am trying to keep our collective expectations grounded. We aren't a huge group of people, but every forum member is important.

I must say, though, that I'm pretty pumped that if we were to compare this forum to one like Blender's going back to its opening (which is largely the same timeframe), the number of posts and views here in the A:M Forum is considerably higher. This shouldn't be the case considering the vast number of users Blender is reported to have in tow. Of course this isn't an entirely complete picture either, as there are several Blender forums and I'm only considering the primary one.


All this to say, keep up the great work. You may think that no one cares or that no one is reading your post but... people are watching!

