imaheadcase - Wednesday, September 14, 2011 - link
Also, does the new Sandy Bridge coming out in a couple of months have a GPU or not?
DanNeely - Wednesday, September 14, 2011 - link
Sandy Bridge-E (LGA 2011) doesn't have an IGP; this probably means that Ivy Bridge-E won't either when it launches next year.
phatboye - Wednesday, September 14, 2011 - link
What sucks the most is that even though Ivy Bridge will have 50% more transistors, which as Anand points out will more than likely be dedicated to the GPU, people like me won't notice a difference since I use a dedicated graphics card connected via PCIe. Although I doubt it would ever happen for the mainstream consumer line of CPUs, I really wish Intel would go back to making CPUs without the on-chip GPU. I don't want to spend the money on an expensive LGA 2011 platform, yet the LGA 1155 platform wastes die space on a GPU that I will probably never utilize. That extra die space could be used for more CPU cores or more cache, or just removed entirely to cut cost and reap the other benefits of a smaller die.
ImSpartacus - Wednesday, September 14, 2011 - link
Yeah, it's kinda silly that the high-end CPUs get the best graphics yet rarely utilize them, while the low-end processors get the neutered GPUs even though those are the systems that will probably go without dedicated graphics.
silverblue - Wednesday, September 14, 2011 - link
The only benefit of this sort of GPU to such a powerful CPU is QuickSync. Why couldn't they just throw in a tiny section of the die to handle that?

AMD are doing it a different way. BD won't even have a GPU, and Trinity is only a dual-module design, leaving the FX models to be paired with a discrete card. One thing I found interesting is that AMD believes Trinity will be 50% faster at floating-point calculations than Llano, which points squarely at the GPU and not the CPU, though you'd expect Enhanced Bulldozer to at least equal the CPU part of Llano, with the added bonus of turbo to lift lightly threaded workloads. Then again, AMD has been known for building up its products far too much in the past - Barcelona 40% faster than Clovertown? Hell no. Bulldozer up to 50% faster than Thuban and the i7 950? Doubtful - benchmarks or GTFO.
Back to the original point, I'd really like to see what all this extra real estate on die is going to do for graphics, even if it's a strange idea to put what could end up being a 6600-class GPU with a 4C/8T monster.
Slaimus - Wednesday, September 14, 2011 - link
I've always thought the main point of AVX was to double the encoding speed of SSE4. Why is AVX not being used for encoding, instead of adding even more dedicated logic?
gamerk2 - Wednesday, September 14, 2011 - link
Time to develop software, lack of automatic compiler optimizations, etc.

Really, how many programmers do you think actually use SSE on a day-to-day basis? 90% of the time, it's the code the compiler spits out that contains SSE, not because the developer put it in.
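To make that concrete (a rough sketch, not anyone's production code - the file and function names are just made up for illustration): the loop below is plain scalar C with no intrinsics anywhere, yet a modern compiler at -O3 will typically vectorize it on its own.

```c
/* saxpy.c - plain scalar C, no intrinsics anywhere.
   Try: gcc -O3 -mavx -S saxpy.c and look for vmulps/vaddps
   in the generated assembly - the compiler put them there. */
#include <stddef.h>

void saxpy(float *restrict y, const float *restrict x,
           float a, size_t n)
{
    /* The compiler, not the programmer, decides to use
       SSE/AVX registers for this loop. */
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```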
DanNeely - Wednesday, September 14, 2011 - link
In general, the more specialized a piece of hardware is, the better it will perform at a given task. QuickSync is a dedicated decoder for several common formats and is blazingly fast with them, but won't do anything for more obscure formats or those developed in the future. AVX instructions are general-purpose vector instructions; they can be used for anything that needs vector processing, not just video en/decoding.
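For example (just a sketch assuming GCC with -mavx - the function is made up, but the intrinsics are the standard ones from immintrin.h): the same AVX units that fixed-function video hardware never touches can sum an array eight floats at a time.

```c
/* avx_sum.c - hand-written AVX for a task that has nothing
   to do with video.  Build: gcc -O2 -mavx avx_sum.c */
#include <immintrin.h>
#include <stddef.h>

float sum_avx(const float *x, size_t n)
{
    __m256 acc = _mm256_setzero_ps();   /* 8 running partial sums */
    size_t i = 0;

    /* One 256-bit add processes 8 floats per iteration. */
    for (; i + 8 <= n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(x + i));

    /* Spill the 8 lanes and reduce them to one scalar. */
    float lane[8];
    _mm256_storeu_ps(lane, acc);
    float total = lane[0] + lane[1] + lane[2] + lane[3]
                + lane[4] + lane[5] + lane[6] + lane[7];

    /* Scalar tail for the 0-7 leftover elements. */
    for (; i < n; i++)
        total += x[i];
    return total;
}
```

Swap the add for any other vector op and the same pattern covers physics, audio, compression - whatever needs it.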
"One thing I found of interest is that AMD believes Trinity to be 50% faster at floating point calculations than Llano, which points squarely at the GPU and not the CPU, though you'd expect Enhanced Bulldozer to be equal to the CPU part of Llano at the very least"I don't think that necessarily points at the GPU. I would hope that BD is at least close to 50% in fp than Llano since the cpu is really just a Phenom II. Maybe it does but I didn't think the truly merged cpu would come out for a couple of years.
Taft12 - Wednesday, September 14, 2011 - link
Looks like they're doing it the right way to me - incentivizing you to buy a CPU more powerful (and expensive) than what you need, or conversely, making you pay for the GPU upgrade that you won't even use.

Cha-ching!!!
fic2 - Wednesday, September 14, 2011 - link
Except that Intel doesn't get to hit the Cha-ching button on the discrete graphics.
sinigami - Thursday, September 15, 2011 - link
"Looks like they're doing it the right wayto me - incentivizing you to buy a CPU
more powerful (and expensive) than
what you need, or conversely, making
you pay for the GPU upgrade that you
won't even use.
Cha-ching!!! "
That really applies to AMD's A8 chips, as the top Llano A8 is really the only one with a GPU powerful enough to play most games above the 30fps mark. All the reviews I've seen of the A6 say that it falls just below what you need to game comfortably at low res. So, personally, the only time I've ever really been compelled to move up to a more expensive version within a CPU family is with Llano and the A8.
Currently, Intel's two versions of IGP both suck at current games, so there's not enough [GPU performance] "incentivizing" to make the jump compelling for those who may want to do the occasional gaming. Now, that MAY change with Ivy Bridge...
Of course, anyone and everyone who is a hardcore, serious gamer will be going Sandy Bridge with a fat card. AMD just can't play in this realm.
The only reason to go A8 is so you can game without a card, as long as you don't need major CPU power. Of course, that is a pretty nice reason :-)
gramboh - Wednesday, September 14, 2011 - link
The thing is, the PC gamer/add-in GPU market is small; the market for people who will use this chip and forgo discrete graphics, on both the desktop and in mobile, is HUGE in comparison.

I agree it would be nice if the top-end parts traded die space for more CPU features, for me personally, but it is probably not profitable to design those parts for the consumer market given the way things are trending.
DanNeely - Wednesday, September 14, 2011 - link
Unfortunately, people like us aren't a large enough market to justify a variant die with all the extra engineering it entails. Not using the IGP gives us more thermal headroom for Turbo Boost and overclocking, but that's it.
douglaswilliams - Wednesday, September 14, 2011 - link
Anand,

Someday in the next few years the number of transistors on a single die will be greater than the total number of people in the world.
I think the Mayans predicted the world will come to an end at that point or something.
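Back-of-the-envelope (rough numbers, just for scale): a quad-core Sandy Bridge is on the order of a billion transistors and there are about seven billion people, so three more doublings - call it three process generations - would put a single die past the world's population.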
ClagMaster - Wednesday, September 14, 2011 - link
Do the extra transistors mean I can browse the internet quicker, or do they just make room for a larger, more bloated version of Windows?

I have to admit I am a little blown away by this. These processors are far more powerful than some of the Cray supercomputers I used in the 1980s.
Simpler computers (like CDC 7600s) did not keep me from designing some really powerful hardware.
What I see with the more powerful computers today is inflated models that predict essentially the same behavior, and arrogant young engineers who think the computers allow them to ignore design margins for things they do not know about.
I think of it as a sort of digital self-abuse. And it makes people stupid too.