Hash, Inc. - Animation:Master

real Cores vs. Hyperthreading?


robcat2075

Recommended Posts


Interesting. It would seem to me (I could be wrong on this one) that virtual cores would be slower than actual cores, since hyperthreading is just the illusion of having physical hardware. So in essence you would have one processor distributing work to 4 virtual cores. That to me would seem slower.

 

But like I said...I could be wrong....

 

I run an AMD 8 core processor that screams through renders. Haven't tested it with Netrender though.


  • Hash Fellow

You have 8 cores and you're not using Netrender? :o

 

 

So in essence you would have one processor working on distributing work to 4 virtual cores. That to me would seem slower.

 

Yes, I would presume that 4 cores plus 4 virtual threads would not be as powerful as 8 real cores. But maybe it is more powerful than 4 cores alone?

 

Every definition of Hyperthreading I've read looks like technobabble to me, so I'm not sure what Hyperthreading does that regular multitasking does not.

 

Well, this clears things up...

 


 

(Wikipedia diagram of "Hyperthreading")


This is not easy to measure... processors that support Hyperthreading are often optimized around it...

If you deactivate the feature, they will not run at their highest possible level (even if you only want to know how fast one of the cores is).

 

In general: real cores are faster than simulated ones... but if you compare a 4-core system with a 4-core hyperthreading system, it is very likely that the HT system will win, provided the application takes advantage of the feature.

If you compare an equally fast 4-core + 4-HT system with a real 8-core system, the 8-core system will outperform the other one. But this is hard to prove, since there is no CPU available that differs only in that respect (AMD vs. Intel, for instance... AMD does not use HT, but builds CPUs with more "real" cores; actually, since Bulldozer this is not entirely true, but it is still much closer to real cores than HT would be).

 

Intel, however, is quite a bit faster in single-core performance. In multithreaded situations, AMD outperforms Intel systems of the same price in most benchmarks, but AMD will very, very likely lose against almost anything that is not multicore-optimized / multithreaded.

 

In addition, AMD uses a different design when building CPUs, motherboards, etc., so the two are hard to compare in that respect.

And Intel does not offer otherwise-identical chips with and without HT.

 

So how do you give a really good answer to that? The only answer I can give is:

Anything that is emulated cannot be faster than something that does not need to be emulated because it is physically available.

 

See you

*Fuchur*


If I can share my experience: in benchmarks at work, compared to a single core without hyperthreading, our renderer was a little more than 3 times faster using the 4 cores of a non-hyperthreading quad core, and almost 5 times faster on a quad core with hyperthreading.

 

Hyperthreading is a way to try to keep the CPU busy while waiting for data to arrive from memory. Main memory access is much slower than CPU operations; that is why there are several levels of memory cache on the CPU. Hyperthreading keeps two execution queues per core. Whenever one of the queues is waiting on data that is not in the cache, the other queue tries to execute its instructions using data that may already be in the cache.
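Yves's description can be illustrated with a toy simulation (purely illustrative, not how real silicon works): one core with one or two instruction queues, where a cache miss stalls a queue for several cycles while any other queue keeps the execution unit busy. All the numbers here are made up.

```python
import random

def utilization(num_queues, instructions=5_000, miss_rate=0.3,
                miss_penalty=10, seed=1):
    """Toy model of one core: each cycle, execute one instruction from a
    queue that is not stalled; a cache miss stalls that queue for
    miss_penalty cycles, during which any other queue can keep the core busy."""
    random.seed(seed)
    remaining = [instructions] * num_queues  # instructions left per queue
    stalled = [0] * num_queues               # stall cycles left per queue
    cycles = busy = 0
    while any(remaining):
        cycles += 1
        for q in range(num_queues):          # count down pending stalls
            if stalled[q]:
                stalled[q] -= 1
        ready = [q for q in range(num_queues)
                 if remaining[q] and not stalled[q]]
        if ready:                            # core does useful work this cycle
            q = ready[0]
            remaining[q] -= 1
            busy += 1
            if random.random() < miss_rate:  # this access missed the cache
                stalled[q] = miss_penalty
    return busy / cycles                     # fraction of non-idle cycles

one_queue = utilization(1)   # plain core: idles through every stall
two_queues = utilization(2)  # "hyperthreaded": second queue hides some stalls
print(f"utilization with 1 queue: {one_queue:.2f}, with 2 queues: {two_queues:.2f}")
```

The second queue never makes things worse in this model: any cycle where at least one queue is ready is a busy cycle, which is exactly the "hide the memory stall" effect Yves describes.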

 

The efficiency of hyperthreading depends heavily on the application's memory access pattern. Most existing applications are quite bad at memory access patterns that optimize cache utilization, so hyperthreading tends to be worthwhile. But this needs to be tested with the application.

 

Another CPU feature that needs to be tested with the application is the size of the L1 cache. Usually, the larger the L1 cache, the faster the computations for a renderer. But again, this is heavily dependent on the memory access pattern of the application.


  • Hash Fellow

 

That's the first cogent explanation of Hyperthreading I've read. Thanks, Yves!


  • 5 months later...

Sorry to bring this topic back up, but I've been reading about custom-building PCs because I'll need a new one soon. On one website (Lifehacker) they were describing the components of a mid-to-high-end gaming computer. When they got to the processor, they recommended the Intel i5 over the i7 because they didn't see the need for hyperthreading in gaming. They did say, however, that if the computer was being built for 3D design, video editing, etc., the i7 would probably be the better processor due to hyperthreading. I'm wondering if we've determined that this is true, and if hyperthreading is of any advantage to A:M. Also, I use 3ds Max and Maya as well, and perhaps hyperthreading is more geared toward those programs? My point being: if I had to pick between the i5 and the i7, which would be best?



 

A:M can use hyperthreaded threads / faked cores with NetRender. I have read (I don't remember where right now) that hyperthreaded cores are at least 33% slower than real cores. (If you take 4 real cores, interpolate their performance up as if they were an 8-core machine, and run that against 4 cores with hyperthreading, the interpolated cores are together about 33% faster. This of course assumes that the extra cores scale linearly, which often is not the case.)
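The interpolation above can be written out as simple arithmetic. Every number below is an illustrative assumption, not a measurement: it just shows how "8 real cores would be about 33% faster" falls out if each HT thread is assumed to deliver about half a real core.

```python
# Illustrative arithmetic only; ht_efficiency is an assumed figure.
real_cores = 4
ht_threads = 4
ht_efficiency = 0.5   # assume each HT thread delivers ~half a real core

# 4 real + 4 half-speed virtual cores = 6 "core-equivalents"
effective = real_cores + ht_threads * ht_efficiency

# naive linear scaling of the 4 real cores up to 8
linear_8 = 8.0

advantage = linear_8 / effective - 1
print(f"hypothetical 8 real cores vs 4 cores + HT: about {advantage:.0%} faster")
```

Note the caveat in the post still applies: real systems rarely scale linearly past a few cores, so the true gap is usually smaller than this back-of-the-envelope figure.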

 

If this is true for A:M (I think it is), the i7 is the better CPU. But in general it is more expensive too. So: your choice.

 

See you

*Fuchur*


  • Hash Fellow

Since A:M itself makes only sparing use of multithreading, the four extra virtual threads won't get you much that the four real threads don't already. (I'm talking about a quad-core CPU.)

 

However, in NetRender, based on other users' benchmark results, 4 real cores + 4 virtual cores will get you about 25% better throughput than just 4 real cores.

 

Of course, you'd have to buy licenses for four extra NetRender nodes to have eight running simultaneously, and you'd need sufficient RAM for those extra nodes to operate.

 

All things considered, hyperthreading is probably only a modest advantage for A:M, and only when using NetRender.
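Before paying for extra NetRender node licenses, you can check what your own machine actually reports. A small Python sketch: the standard library only counts logical CPUs, while the third-party psutil package (an assumption here; it must be installed separately) can distinguish physical cores.

```python
import os

logical = os.cpu_count()  # logical CPUs = real cores + hyperthreaded threads
try:
    import psutil                              # third-party: pip install psutil
    physical = psutil.cpu_count(logical=False)  # physical cores only
except ImportError:
    physical = None                            # stdlib can't tell them apart

print(f"logical CPUs: {logical}, physical cores: {physical}")
if physical and logical and logical > physical:
    print(f"hyperthreading looks enabled: {logical - physical} virtual threads")
```

On a quad-core i7 with HT this would typically report 8 logical CPUs and 4 physical cores, matching the "4 real + 4 virtual" setups discussed above.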


As Rob and Gerald noted above, virtual cores are not going to be as fast as real ones, and A:M itself doesn't really make use of them.

However Netrender can make use of virtual cores, treating them just like real ones.

As rendering can be such a large part of the time spent making an animation, even a 25% increase in throughput is desirable.

Here's a link to Rob's Three Tea Pots test, where I tried rendering 4 frames in A:M, then 4 frames with NetRender, and then again, at Rob's suggestion, with hyper-threading turned off and using NetRender.

http://www.hash.com/forums/index.php?s=&am...st&p=383079

Although the hyper-threading slowed things down a bit, it was still quicker overall when using NetRender.

 

So I would say, if you can afford it, get an i7; then you'd be covered for pretty much anything you wanted to use your system for.


Nah I'll definitely be doing Intel. Every experience with AMD I've ever had has been a bad one.

 

Okay, whatever you want... exactly the opposite here ;).

 

I'm currently on an AMD processor with an AMD graphics card, and more or less all the machines I maintain at work (about 20 computers) are equipped with AMD CPUs (APUs and CPUs)...

I have a few older Intel models around here too... I'm not too happy with them, especially the prebuilt ones... too expensive even when they were new, plus a few hardware failures that would not have been necessary...

 

See you

*Fuchur*


  • Hash Fellow

One argument against the AMD architecture I'm finding is that Windows isn't optimized to distribute threads for it.

 

If there are four threads to put on the CPU, Windows will assign them to an Intel CPU's real cores before it resorts to virtual cores, for best performance.

 

However, on an AMD FX CPU, which shares one FPU between every two cores, Windows may distribute those four threads to the first four logical cores (only two FPUs for four threads) instead of to every other logical core (four FPUs for four threads).
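The workaround implied here (putting threads on every other logical core so each gets an FPU to itself) can be sketched by pinning process affinity by hand. This is a hypothetical illustration using the third-party psutil package; the mapping that cores 0/1, 2/3, ... pair up into FPU-sharing modules is an assumption and varies by system.

```python
def every_other_core(n_logical):
    """Pick every other logical core index: one per two-core module,
    assuming cores 0/1, 2/3, ... pair up (system-dependent)."""
    return list(range(n_logical))[::2]

try:
    import psutil                       # third-party: pip install psutil
    proc = psutil.Process()             # the current process
    if hasattr(proc, "cpu_affinity"):   # affinity isn't settable on every OS
        proc.cpu_affinity(every_other_core(psutil.cpu_count()))
        print("pinned to cores:", proc.cpu_affinity())
except (ImportError, OSError, ValueError):
    pass  # psutil missing, or affinity restricted on this machine
```

On an 8-thread FX chip this would pin the process to cores [0, 2, 4, 6], which is the "four FPUs for four threads" layout described above.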



 

There are two hotfixes from Microsoft that have fixed that scheduling issue since around January 2012:

http://support.microsoft.com/kb/2645594

http://support.microsoft.com/kb/2646060

 

I am not sure this is a problem at all with the newer processors, since it happened only with the 8150 (which is not a good processor and, if you ask me, should not be bought... get the 8350, which is a much better one). And it was never a problem caused by AMD, but by Microsoft. Rumors even say it may have had to do with Intel too. That is one of the problems I have with Intel...

 

For instance: the biggest electronics stores in Germany (Media Markt and Saturn) had a non-public deal with Intel that they would not sell AMD's products when something from Intel was available.

Intel gave them better prices for that... it came out about 1.5 years ago, and since then Media Markt has been offering AMD again, while Intel, Media Markt, and Saturn had to pay a penalty... the loss for AMD was much worse than what these companies had to pay...

I am not very happy about that kind of economic "strategy"... in fact it is illegal, and it does not earn much sympathy...

 

Anyway, I am not saying Intel produces bad CPUs... they are just a little pricey for my taste.

 

See you

*Fuchur*


I have always used AMD chipsets and never had a single problem. In all the tests I've done, not one has ever burned out on me. And trust me, the room I'm in gets up to 110 °F (43 °C), so the machines I run are not in the friendliest environments (no AC in this office).

