1.12.2010

ARM vs Atom: Intel's Newest Challenge

The ARM architecture offers several advantages when compared to Atom.

ARM cores have smaller die sizes than their Atom counterparts, which gives ARM a cost advantage. Having been designed for use in space-sensitive environments, the ARM core is smaller than the Atom equivalent. This is the case even though Atom is manufactured on a more advanced process than most current ARM designs.

In addition, ARM is more highly integrated than Atom. Almost all ARM products for the mobile space are single-chip SoC solutions. This offers a substantial size advantage over the current Atom solution, which requires three chips, and over the upcoming solution (Moorestown), which is still a two-chip solution. Atom won't offer a single-chip solution prior to the advent of Medfield sometime in 2011. So Atom won't be able to match ARM for solution size or integration until somewhere between one and two years from now.

But the biggest advantage ARM holds right now is in power efficiency. Qualcomm's Snapdragon processor is the poster child for ARM's high-performance processors, so I'll use that as a reference point. The Snapdragon processor is reported to use 250-500mW under load and 10mW at idle. Atom's Moorestown, due out later this year, should use ~750-1000mW under load and ~35mW at idle. So ARM offers about a 2-3X power efficiency advantage over the Atom platform.
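For anyone who wants to check that arithmetic, here's the ratio worked out from the figures above. A quick sketch in Python, using only the reported estimates (which are themselves unconfirmed):

    # Reported/estimated power draw in mW (the figures cited above).
    snapdragon_load_mw = (250, 500)    # Snapdragon under load
    snapdragon_idle_mw = 10
    moorestown_load_mw = (750, 1000)   # projected Moorestown under load
    moorestown_idle_mw = 35

    # ARM's efficiency advantage = Atom power / ARM power at comparable states.
    load_high = moorestown_load_mw[1] / snapdragon_load_mw[1]   # 1000/500 = 2.0x
    load_low  = moorestown_load_mw[0] / snapdragon_load_mw[0]   # 750/250  = 3.0x
    idle      = moorestown_idle_mw / snapdragon_idle_mw         # 35/10    = 3.5x

    print(f"Load advantage: {load_high:.1f}x to {load_low:.1f}x; idle: {idle:.1f}x")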

With all these disadvantages, one wonders what Atom can bring to the table.

First and foremost is sheer processing power. If you look at Intel's marketing around the LG GW990 from CES, you will see an emphasis on multi-tasking. ARM is closing the gap on single-app responsiveness, but the x86 architecture that Atom is based on still seems to have more horsepower and allows you to do more things at once.

Another big advantage that Atom currently enjoys is the ability to run Flash applications. However, Adobe is reportedly working with ARM to enable their processor designs to run Flash, so this advantage is going to be short-lived. It has helped Atom become the dominant netbook processor, but it will not continue to drive future growth.

The last advantage that the Atom brings to the table is the ability to run Windows. By being able to run Windows, Atom brings a large software infrastructure to the table for any device it is installed on. But this advantage isn’t quite as big as it might seem at first glance.

The Atom processor was designed to be a “good enough” processor for basic PC tasks like browsing the internet, viewing video, etc. But it lacks the power to run large applications well. So while Atom may be capable of running x86 applications, the experience with many of them is poor. If the software doesn’t run well it is not much better than not running at all.

The use of Atom in small form factors further offsets the advantage of using existing software. Many of the current applications don’t fit these small form factors very well. This can be fixed, but requires that the code be modified to correct the problem. Having to modify the code for this purpose nullifies much of the advantage of being able to use the existing software.

Intel's software marketing seems to have matured beyond the idea of basic compatibility of late. They are placing a greater emphasis on cross-platform portability. I believe this is a more realistic assessment of the x86 advantage, because it targets one of the few real weaknesses of the ARM architecture.

ARM doesn't manufacture chips; it sells licenses to its architecture. Each licensee is free to modify the basic design to suit its needs. This results in an ecosystem where implementations from different vendors may not be compatible with each other even though they are based on the same core architecture.

Systems built around the x86 architecture bring the guarantee of cross system compatibility. Not in the sense that you can move the software directly, but rather in the ability to link the systems together and transfer data between them. So by choosing Atom, you know you are choosing a device that will work and play well with your other devices.

In summary, ARM and Atom are rapidly converging to similar levels of computing power and energy efficiency. Within a few years I believe there will only be one key differentiator between the two architectures. The differentiator will be the ease with which you can move data between your various computing applications.

I believe the homogeneous nature of the hardware infrastructure Intel is building gives them a substantial advantage. However, there is still a need for urgency on Intel's part. If ARM becomes the entrenched incumbent architecture in this new space, it will take far longer for Intel to move Atom down into the smaller devices. I believe the x86 architecture, warts and all, will become the dominant architecture in personal computing devices. But if Intel doesn't move quickly enough, they will miss the initial growth curve and the profits that come from riding that curve.

Edit: Cleaned up the spacing

244 comments:

Ho Ho said...

The ARM Cortex-A8 in my N900 cellphone seems to be pretty nice. I can easily run a bunch of background tasks (SSH server, moving files over SSH/wifi, updating software) while watching movies or playing 3D accelerated games without a hitch. Pretty much the only things limiting me are the 256M of RAM and the lack of screen space (800x480). With more of each I could have more apps running :)

ARM isn't as bad as you make it sound. It might not be quite as capable on PCs, but I'd say it is good enough for the majority of things.

InTheKnow said...

Ho Ho, I didn't think I'd painted ARM in a negative light at all. I certainly tried to present the facts on both ARM and Atom fairly. I stated that ARM was closing the performance gap, but that Atom was still currently a better multi-tasking chip by virtue of being more powerful. That doesn't mean ARM is bad, just not as good as Atom right now in this respect.

Here is an example of the performance of ARM's A9 product compared to an unidentified Atom netbook. The ARM processor lags in a single task (though not by much), and if you increase the task load, it will continue to fall further behind. Imagine trying to stream your movie while browsing the web. I don't know that the Atom solution wouldn't give choppy rendering, but I'm pretty sure the ARM solution would be worse.

As I stated in the blog, I expect ARM to improve their performance and Atom to continue to improve power efficiency. In the not too distant future, I expect the power/performance difference to be negligible. This is why I believe that in the long run Intel's ability to offer complete compatibility across "the continuum" (to use Intel's phrase) will be the deciding factor in this fight.

The question I have is whether or not Intel is moving fast enough to get to the target market before ARM does. I'm honestly not sure they are.

Anonymous said...

WOW, did I say this a few posts and months ago.

You did miss one thing. In a few years, one architecture will almost surely continue to lead on process, maybe by even more! There were very, very powerful and motivated companies in the past that held leads and still saw their demise, even though they were superior and well bankrolled: DEC, IBM, Compaq, HP, Sun, etc. etc. etc.

AMD as a direct competitor is gone, finished. Yeah, they make some decent products and will continue to have between 15-20% market share. The only reason they survived is x86! Arab money will keep GF afloat for another generation or so, and AMD will design some follow-on products, but their market-share size will hurt their ability to execute on multiple fronts.

Desktop/Laptop/Servers is a business that ain't going away. But it ain't going to grow by 20%. More like high single digits, unless the third-world countries somehow figure out how they can suddenly afford a $499 laptop, which is more than most people there make in a month. But don't discount or lose this fact: this is close to 300 million units and generates 6-10 billion in profit for Intel. AMD and GF, for reference, don't generate a profit. My belief is INTEL's profit here exceeds the combined profit of the other 9 top semiconductor manufacturers. That, my friends, is a lot of dough to plow into R&D and SOCs!

Growth is in mobile computing; smartphone or whatever you want to call it is it, be it in your pocket or in your vehicle. Intel's 45nm is looking pretty good. If it executes at 32nm, INTEL will have at least a generation lead in process node. That, my folks, is ½ the die or double the transistors in the same space. With that they get another drop in power. At 32nm, that ½ die can more than cover the overhead in power/area of the x86 legacy, with additional silicon left over for more graphics, more power gating, turbo, etc. ARM has a huge lead on architecture, but it's being halved with each generation. ARM depends on the TIs, TSMCs, and Samsungs for technology. Today they are anywhere from 12 months to two years behind Intel. Also, they don't generate the profits INTEL does from silicon. If Intel's public statements are to be believed, their SOC technology, which used to lag the logic process by a lot, will be released at almost the same time by 22nm. My prediction is that at 22nm INTEL will have Apple and most of the smartphone business in its pocket. 32nm is just around the corner. I think code and application compatibility is just too big a convenience. All other things being close enough, it will win over a slight ARM power advantage. Remember the rule of TEN? ARM no longer has that 10x advantage, and the x86 story is just so easy to sell.
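For what it's worth, the ½-die figure falls straight out of the node names. A rough sketch (ideal scaling only; real shrinks are never this clean):

    # Ideal process scaling: linear dimensions track the node name, so die
    # area scales with the square of the ratio. Real shrinks are less perfect.
    for old_nm, new_nm in [(45, 32), (32, 22)]:
        area = (new_nm / old_nm) ** 2
        print(f"{old_nm}nm -> {new_nm}nm: {area:.2f}x area, ~{1/area:.1f}x transistors")
    # 45nm -> 32nm: 0.51x area, ~2.0x transistors
    # 32nm -> 22nm: 0.47x area, ~2.1x transistors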

It's too convenient to have one platform. An iTouch with 160GB that has everything (pictures, video, Excel, PowerPoint, pretty much everything you need). Pop it into a dock and sync with your power laptop/desktop at home or work. On the road you've got full access and capability on your little screen: surf, watch movies, do a quick PowerPoint edit on the beach.

If I am anyone but AMD, I'm really, really scared. AMD knows its place and its competitor. The others think they may know, but the reality is their whole world is changing.

Tick Tock Tick Tock!

Ho Ho said...

"Ho Ho, I didn't think I'd painted ARM in a negative light at all"

I never thought you did. I'm just extremely impressed by how much computing power it is possible to compress into such a small form factor while using a negligible amount of power. I've got no idea how good ARM-based desktop-oriented CPUs are, but I'm very sure Intel still has a very long path to go to get anywhere close to what ARM can offer in handhelds. The only "problem" with desktops is that the CPU is already an insignificant power drain compared to the rest of the system.

One day I should try using the phone as a "real" PC with VNC or plain-old X11 over SSH and see how it would run some bigger apps :)

SPARKS said...

Sure there's a huge battle brewing in the small form factor market. As you predicted nearly 2 years ago, just about everything you said has come to pass. I'm certain this post is no exception. However, as an amateur process "wannabe", I'd like to know who's going to build these things.

Love him or hate him, ole' LEX seems to be pretty much on the money as far as process tech goes, anyway. 32nm is going to separate the rock stars from the groupies. TSMC, as reported by Digitimes, cannot get yields over 70% at the 40nm half node.

http://www.digitimes.com/news/a20100113PD201.html

Atom will get smaller and more efficient, no doubt. However, with yields as they are across the rest of the industry, and the way INTC keeps executing, wouldn't this be a factor in leveling the playing field as we go forward? In fact, they are moving quickly.

“Make haste, slowly.”

SPARKS

InTheKnow said...

Sparks, your question is a good one, and I wanted to answer it in more depth than I could in just a reply. To see my response, click on this link to go to my blog.

Anonymous said...

ARM's ability to further reduce power from an architecture point of view will be minimal, as they started out with power as the priority. Adding performance and cores makes them less power efficient.

INTEL, on the other hand, has always had performance as king, with performance/power efficiency coming later. They have a long way to go, but they have a lot of options to further reduce power from the architecture/design side.

On the other hand from the process side ARM is behind and will fall further and further behind.

To try and compare ARM to ATOM on different nodes with different design approaches is very hard.

In the end this will play out like servers did with RISC. The rule of ten is going to be gone very soon; matter of fact, it's already gone IMHO. This is right about like the late 90s with the RISC/CISC debate. It looked pretty compelling for them big-pocketed and clearly superior RISC guys then. Same song, different tune, we'll be hearing that INTEL jingle.

Tick Tock Tick Tock

SPARKS said...

“with the RISC / CISC debate”

Not so fast, buddy. I'd go easy with that one. IBM's Roadrunner is doing quite well in that specific arena. Further, ARM uses RISC too, albeit a considerably more complex instruction set than the original RISC-based solutions.

(I’m quite certain our very own “Orthogonal” (and his team members) have had their hands full encoding all those combinations of registers and addressing modes over the years optimizing x86 more efficiently, and there’s plenty of work ahead)

However, the point is moot, at worst ambiguous, as ATOM brings x86 to the small form factor RISC/ARM-based solutions. Ah, 'take two Windows CEs and call me in the morning.'

However, due to the popularity of x86 apps, I'll concede the point. That's where we're headed, no doubt; popularity as opposed to efficiency, perhaps? But not necessarily better; ARM still has a couple of bullets in the chamber.

This is ITK’s basic thrust, and I agree.

Check out ITK’s link. While the illustrated chart is linear in its progression/projection, it still gives a good representation of how far we have to go. (Frankly, I’d rather be lusting after and talking about a 6-Core 130W job, but INTC’s lead has made that end of the spectrum superfluous, almost boring these days.) (Sigh)

Most importantly, handhelds are all the rage today, ARM has some serious teeth, and RISC still works extremely well, from Roadrunner to smartphones.

(ITK, those numbers were in MILLIWATTS!?!)

SPARKS

InTheKnow said...

I see that selling all of those Atoms is killing Intel's margins as well. A measly 62% for the quarter. Intel is clearly on the brink of bankruptcy.

SPARKS said...

"Intel is clearly on the brink of bankruptcy."

I know, it's scary, only 15B in the bank. My buddy on Wall Street says they have TOO MUCH cash!


Today was 21 hump day. I was off by a buck. I said 22 a few months ago. Heh, so a little 1.25B payout here, an FTC suit there, who knew?

Andy Boy here in the NYS of bankruptcy must be licking his f**king chops.

Look for 25 by 2Q, in spite of the liberal spendthrift bastard.

SPARKS

InTheKnow said...

You can find one of the best editorials exposing the fundamental flaws in the FTC's position vis-à-vis Intel here. I think it is well worth the time to read it.

Anonymous said...

F**king liberals don't know a good thing when they see it.

Not that I liked much of anything about that idiot Bush, who was too stupid to know if a mistake kicked him in the balls.

INTEL and Google today, like Boeing a few years ago, IBM a decade ago, and Bell Labs a few more years back, are what provide the money and technology leadership. American investment banking and the greed of Americans in general have come close to ruining this country. INTEL is the only company that develops and manufactures in the USA, and what do Obama and the stupid ass liberals want to do?

Anonymous said...

Obama, I have to say I voted for that sorry ass. How did I get fooled by his rhetoric? After a year we see he is nothing but another dumbass politician. What is worse, he is a dumb liberal, and because of the color of his skin we can't criticize him for fear of being called racist.

It's almost like being a Jew in Germany.

Unknown said...

No matter who you/I/we elect, they're going to be just a "dumbass politician". They'll get shot or have some "unfortunate accident" if they don't play by the rules.

A Nonny Moose said...

ITK, do you or anyone else have a link to support the Lars Liebeler statement that "it (AMD) claimed publicly that it was capacity-constrained and sold all processors it manufactured" during the 2004 timeframe? There's a bunch of AMD fanbois over on THG who are vigorously arguing otherwise :).

9-Inch said...

"...INTEL is the only company that develops and manufactures in the USA"

Are you 100% sure about this one? :D

Anonymous said...

How about straight from AMD?

http://www.amd.com/us-en/assets/content_type/DownloadableAssets/AMD-Intel_Full_Complaint.pdf

"36. Intel’s misconduct is global. It has targeted both U.S. and offshore customers at
all levels to prevent AMD from building market share anywhere, with the goal of keeping
AMD small and keeping Intel’s customers dependent on Intel for very substantial amounts of
product. In this way, OEMs remain vulnerable to continual threats of Intel retaliation,AMD
remains capacity-constrained
, the OEMs remain Intel-dependent, and Intel thereby perpetuates
its economic hold over them, allowing it to continue to demand that customers curtail their
dealings with AMD. And the cycle repeats itself: by unlawfully exploiting its existing market
share, Intel is impeding competitive growth of AMD, thereby laying foundation for the next
round of foreclosing actions with the effect that AMD’s ability to benefit from its current
technological advances is curtailed to the harm of potential customers and consumers"

(bolding mine). Implicaion is that AMD *was* capacity constrained in 2005.

SPARKS said...

Moose, that reference was made in Intel's 2005 court rebuttal in Delaware. A general search for the 2005 rebuttal will suffice.

The portion of AMD's original claim to which Intel responded was redacted (blacked out, for the folks who don't know). It is, however, common industry knowledge that AMD sold everything Opteron during the 2004-2006 time period. The AMD case was filed in 2004.

In fact, they were really capacity constrained in 4Q 2006 and 1Q 2007 (after the Dell Deal). This admission comes from none other than Share-a-Kook himself.

As you know they short supplied the white box market and many, if not all, of their channel partners to supply DELL. A wise man once said if you sit down to dinner with the devil you’d better have a very long spoon. (You may lose your arm)

The timeline is moot. When they had a good product (2004~2006), when their share price was in orbit (40+), they failed to capitalize on their lead, despite INTC's "unfair tactics". They squandered their advantage on Fusion and the 5.4B ATI purchase instead of their "Core" business.

All roads lead to Core 2 successes and Barcelona’s failures. This is gospel, and I think that was Lars Liebeler’s major thrust.

“The only reason is that AMD is capacity constrained. Once AMD can supply 100% of the chips, the world will break free.”

They never did, and it never happened.

Under the heading "Intel is anti-progress", Tuesday, Jan. 30, 2007:

http://sharikou.blogspot.com/2007_01_01_archive.html

SPARKS

SPARKS said...

Sorry about that Moose. Sharp eyed Anon above picked up an unredacted portion. I rest my case.

SPARKS

InTheKnow said...

And then there is this line from none other than Hector.

That said, AMD may very well face some supply challenges in the consumer space this year, as demand for high-performance products increases. "It is very likely that this year, if the market does behave as we hope it does, that we will be challenged in a capacity situation." The numbers of servers that AMD must support, even with its growing market, is small enough, Ruiz said, not to give the company any challenges there. But to be a player in the commercial space, where customer quantity may be low but supply quantity is much higher, the company may need to make some tradeoffs to ensure "we will always meet the needs of those people that are signing up on the commercial space.
"If we find a place where we might have a challenge in meeting some of the demands," Ruiz concluded, "[it] might be in some segments of the consumer space. For example, a lot of the products that we use to serve the very high end of the desktop market might be products that might be better used and redirected to serve segments of commercial or server. In that sense, we might be tight in those regards. But it will be a year in which the balance between demand and capacity will be carefully managed quarter by quarter."


While Hector didn't say they were actually capacity constrained, he said he expected they would be. Given the way AMD shorted the channel later that year, I would have to conclude that AMD ended up with less capacity than they would have liked.

Of course I expect the more fanatical members of the zone to blame all of AMD's capacity problems on Intel anyway. AMD walks on water as far as the more rabid types are concerned.

http://www.tomshardware.com/news/amd-ceo-capacity-traded,2379.html

Anonymous said...

Poor AMD management is to blame for AMD's misfortune. I said it long ago and I say it again.

Hector could have invested the billions he pissed away on ATI in capacity, and then he could have flooded the market with superior products. The only reason AMD is where it is today is that that fucking idiot was too wrapped up in crying back then.

Tonus said...

Is Nonny tilting at windmills again? :)

SPARKS said...

“The only reason AMD is where it is today………….”

Come on buddy, I know you know better than that. There were several reasons. I’m quite certain you know all of them, too.

I think the biggest TWO were AMD's lingering problems with SOI and a large native quad on 65nm. Even INTC admitted it would have been difficult for them to do at the time. Had Barcelona achieved the performance levels of today's Pheromone chips, the whole dynamic would have changed considerably. You know this.

If there were a third, I would have to say the X2900 series (and above) graphics products that were absolutely trounced by NVDA products. Had AMD/ATI executed a landmark product like the HD5870 (Cypress series) that currently sits in my machine, Wreaktor's dream may have been realized.

In fact, HD5870 is an absolute GEM of a graphics card. I’ve never been happier with ATI or seen such an increase in performance since the Radeon 9700. Hell, even the ATI Catalyst Software opens five times faster and runs two times better.

Pheromone 965 and HD5870 would have been world beaters………………

…………2 1/2 years ago.

SPARKS

A Nonny Moose said...

OK, thanks everybody for the links - this should help settle some AMD fanbois hash over on THG. Or maybe not! :)

Tonus said...
Is Nonny tilting at windmills again? :)


LOL - just for recreational purposes, I assure you! :). Not that I'll change any pre-packaged fanbois opinions of course..

Axel said...

Well it looks like AMD is about to go from capacity constrained to lacking capacity altogether. So goes by the wayside yet another AMDZone delusion, that AMD would maintain control of GlobalFoundries' strategy and direction through their 50% voting rights.

Anonymous said...

^ Does that really surprise anyone?

Again, the 50% voting scam was to ensure:
A) AMD did not lose their x86 license at the time of the deal
B) This thing got pushed through US regulatory review quickly - would a deal transferring the technology and fabs to Dubai have gotten through the FTC so quickly? Would NY have written a 1.2Bil check to Dubai? At the time it was just a "partnership"/"subsidiary". I would like to at least see the FTC do their due diligence on this and take another look before allowing AMD to bail.

This was all pre-arranged with ATIC at the time of the deal; anyone who thinks the 50% ownership was a real plan, rather than a way to circumvent contracts and regulatory approval, was kidding themselves. It's just sad that NY and the FTC did not have the foresight to see it and give the deal a closer look at the time it was made (not saying they should have blocked it, just that they should have vetted it with a bit more due diligence).

It's like vetting a presidential candidate BEFORE they become president... ummmm.... nevermind...

SPARKS said...

“would a deal transferring the technology and fabs to Dubai gotten through the FTC so quickly?”

And, in the past, you said something like "I'd like to thank the FTC (or some other paper-pushing bureaucratic branch of the government) for one of the largest transfers of technology in history," or something to that effect.

I agree. However, is the tech transfer sustainable? Can they build on the tech, and can they innovate new and improved solutions for the extremely complex and increasingly smaller processes of the future?

Basically, sure, they've got the keys to an F1 Ferrari, but can they maintain and drive the thing to the edge? Can they improve the model?

SPARKS

A Nonny Moose said...

Well it looks like AMD actually made a profit last year, thanks to the Intel settlement of $1.25B. And apparently AMD won't be paying any license fees to Intel for x86 in the future, so perhaps they will be staying afloat a while longer :P.

SPARKS said...

Ok, I guess that was a bad question.

Elsewhere, despite AMD's "profit" and great earnings report, they took a major hit today. They shed nearly 13% on the day while losing nearly 25% in two weeks.

I think the speculators (those outside the financial institutions) took the money and ran. Not a bad short term investment since early November when the stock was at 4 1/2. I’m sure these guys didn’t have an inside track on the INTC settlement, yeah, right, OK.

Anyone wanna bet AMD will never see 10 again?

SPARKS

Anonymous said...

Anyone wanna bet AMD will never see 10 again?

Not sure I would take the bet either way.

I think it will most likely have to do with whether the overall market grows or whether we have hit the true commoditization of the PC market. If the market grows substantially, there will be a niche for AMD and they will grow, not to mention the graphics impact if the market continues to grow. There's also the Obama/FTC/foreign gov't cash-grab wildcard.

If the market stagnates or devolves into a substantial chunk of the market being "Atom-like", it'll be hard for AMD to make substantial money. Also, while some point to AMD's graphics advantage, if the Atom/SOC ("good enough") market grows, the growth potential in discrete cards is limited and may actually decline over time.

While there will always be some need/market for graphics cards, to put things in perspective, Intel still leads in overall graphics thanks to IGPs. Once graphics gets to no-brainer 1080p support across all market segments, DX15 or DX891.3 support will be attractive to some gamers, but meaningless to most average Joe buyers.

Anonymous said...

Let's be serious: "AMD made a profit" is about as accurate an indication of a healthy company as GM and Chrysler making money a few short years ago. AMD, like them American car companies, has idiots at the helm, bad technology, a broken culture, and a BK business model.

Arabs may continue to throw billions at GF, but in the end AMD simply cannot and will not make money. The reason is simple: you need some minimum market share to get economies of scale, and they don't got it. Their fab will soon have difficult choices to make: either dedicate valuable capacity to a money-losing operation or have that capacity chase other customers. I still find it funny that the world supposedly needs another leading-edge foundry. Tell me, how many CPU manufacturers does one need?

GF and IBM technology is MIA. I'm still waiting for their first wonderful and simple gate-first process.

No amount of Arab money can fix a broken culture, bad technology, and a bad business plan.

Anonymous said...

Moore's law conquers all.

Remember the days of separate math coprocessors

Remember the days of separate cache chips

Remember the days of separate memory controllers

Remember the days of separate graphics

When you double transistor counts, it doesn't take too many doublings before you can only throw so much at cores. Now all them transistors are going to be helpful to throw at graphics.

Bye bye Nvidia, bye bye ATI.

Bye bye AMD.

SPARKS said...

That’s two.

One article from MaximumPC is raving about the i5 661. So much so, they said that if this core is any indication of Gulftown's future performance, it will "be a monster."

I like monster.

In the second article, some crafty Taiwanese guy got his hands on a juicy Gulftown ES. He overclocked the baddy to 4 GIG (my, my!)

The bad boy generated some serious numbers.
Let's see how the anti-INTC minions play this one down.

(Hmmm, Federal return, State return……….I’m in!)

SPARKS

http://www.bit-tech.net/news/hardware/2010/01/25/mystery-intel-6-core-cpu-overclocked-to-4gh/1

InTheKnow said...


The bad boy generated some serious numbers.
Let's see how the anti-INTC minions play this one down.


Sparks, don't you know that only GPU-limited games matter anymore? A decent CPU choked by the GPU will perform as well as a high-end Intel CPU. You're just a sucker for spending more money on the CPU than you need to. :)

I never did see any response to my Spec analysis, either. Obviously server performance without the GPU as the great equalizer is just another set of bogus benchmarks too.

SPARKS said...

ITK, absolutely I'm a sucker for spending more money (and proud of it), and six cores are absolutely useless with today's software!

But what the hell, I'll try to give the Gulftown something to do, like adding another HD 5870. Two 284W graphics cards, a couple of Velociraptors; I'm sure the overclocked 6-core will find something to do. Cheese, ya think it'll get bored? :D

“I want it all I want it all, and I want it now!”


Anon,
Yes, I do remember math coprocessors. In fact, I had to refer to my Curio Collection of Intel CPUs, which helped me recall some fond memories. Like ex-wives and old girlfriends, one glimpse and the magic returns.

The 80387 came in two versions. The 80387DX worked with 80386DX processors, and the 80387SX worked with 80386SX CPUs.

I had the A80386DX-28; it allowed for both 16 and 32-bit addresses. Interestingly, my coprocessor was made by ULSI, fittingly called the Advanced Math Coprocessor DX/DLC.

I don't recall how I got it, probably from the consumer bible called Computer Shopper. It was a huge magazine! I used to love to drool over page after page of those IMMENSE monthly tabloids.

The ULSI device was a speed demon! It ran at 33 MHz, although I think it underclocked to match the speed of the processor.

Ah, those were the days.

SPARKS

Tonus said...

1.360v on that overclock, I wonder what the default voltages are.

And yeah... I could probably keep six cores busy. :)

COMPUTER SHOPPER is a shell of its former self, now it's just another 40-50 page normal-sized computer magazine. I remember when it was 4 pounds of tabloid-sized newsprint that was 700+ pages and about 95% of those were ads.

SPARKS said...

TONUS, hey that Computer Shopper was something else, wasn’t it?

Industry Standard Architecture had taken serious root. No longer were we at the mercy of IBM and its ridiculously high-priced computers and parts. $9,800 486-DX33 machines!

I built mine for $700 in early '93.
I remember Mylex and AMI mobos priced at $1200 to $1600!

Then came the “Clones” as DELL, Gateway, and Compaq started to emerge as the big players in the “compatible” market.

Man, that was a real transition time for the enthusiast. After you built that first machine, you never went back. I really don't think most folks realize what the ISA standard did for the PC market. The parts themselves became a market within the market, creating new markets.

Truly, today's PCs are the sum of their parts. And who led the way? Intel and Microsoft did, of course. If anyone doesn't think so, then they don't remember three months of Computer Shopper as big as Webster's Unabridged; so much joy, so little money.

Today, we just browse the web.

SPARKS

InTheKnow said...

Here is an interesting take on why Intel has already lost the fight with ARM.

It is a persuasive argument, but I believe that if Intel can get the power requirements down in the near future Atom's muscle (relatively speaking) will allow a faster, more satisfying response than the devices talked about here.

But as I've said before, Intel needs to move quickly or miss the advantages of getting to the market early on.

Pzyche said...

The problem is that ARM Cortex A8 is clock-for-clock as performant as Atom, and ARM Cortex A9 is even higher performance.

The next problem is that A9 SoCs are coming with two or four cores. Tegra 2 is a case in point for dual core, and Marvell have a quad-core chip coming out later this year.

The third problem is that the mobile Atoms are running at 800MHz to 1GHz to get that 1W load TDP. Compare that to 1GHz A8 systems on the market today, and future 1.5GHz Snapdragons and up to 2GHz dual A9s, and you can see that it is the ARM system that will have the greater performance and multitasking capability.

Then you need to factor in the SoC factor - let's take the 50mm^2 40nm Tegra 2 as an example - it has everything you need inside the chip. No support chips required. The Qualcomm chips have the cellular functions inside. Other SoC makers offer other facilities and combinations. The ecosystem is varied and healthy, and competition is high so prices are low.

But the biggest problem is Intel limiting Atom artificially to prevent cannibalisation. For example the digital video output is limited to a paltry and laughable 1366x768 resolution - as if the chip was made for 2006, not 2010!

Never mind Android, Chrome OS, Linux, Apple, etc all running perfectly happy on ARM, moving the software online, and replacing vast swathes of Microsoft functionality. Atom is nice for Windows compatibility, but that's no use on a mobile device where the Windows UI just doesn't fit. In any way.

SPARKS said...

"But the biggest problem is Intel limiting Atom artificially to prevent cannibalization. For example the digital video output is limited to a paltry and laughable 1366x768 resolution - as if the chip was made for 2006, not 2010!"

Excellent point. Today's narrow band between the laptop market and the "smartphone" market will widen with the inevitable optimization of power requirements, speed, and graphics functionality.

Be advised, I make Paul Otellini look like an AMD fanboy, and my 25-pound PCs sit on a desk sporting a dedicated line-conditioned 20A circuit. Further, handheld devices are such cute little gadgets, designed to impress clients, coworkers, and friends. In effect, they are a public display of one's tech savvy.

However, there is no denying that ATOM will "chip" away at lower to midrange Laptop sales as they compete with ARM on the HD front. There's no stopping it, especially with those little dual core beasties. This is Gospel.

The euphemisms, "double edge sword" and "rock and a hard place" come to mind.

Look for this little beauty to be in INTC’s future. It’s specifically designed to give NVDA’s ION a run for its money.

http://www.broadcom.com/products/Consumer-Electronics/Netbook-and-Nettop-Solutions/BCM70015

SPARKS

InTheKnow said...

The problem is that ARM Cortex A8 is clock-for-clock as performant as Atom, and ARM Cortex A9 is even higher performance.

Can you provide a link? You may be correct, but data speaks louder than an unsupported statement.

The next problem is that A9 SoCs are coming with two or four cores.

Atom already has a two-core product out there (the Atom 330). It should be simple enough to clock it down for use in mobile devices if it looks like Intel needs it to compete.

The third problem is that the mobile Atoms are running at 800MHz to 1GHz to get that 1W load TDP. Compare that to 1GHz A8 systems on the market today, and future 1.5GHz Snapdragons and up to 2GHz dual A9s....

There are a couple of issues with this statement.

First, I suspect you are making the error of assuming that power is going to increase linearly with clock speed. It doesn't. I've seen speculation that the 2GHz Snapdragon you are referring to runs at 1.9W. That isn't going to kill Atom.

Second, I did make the comparison in my initial post. You did read it, right?

Currently ARM has a 2-3x advantage in power efficiency. The problem with looking at this metric is that the real power draw on the current platforms is the screen and the radios, not the processor. I've seen estimates that a 20% reduction in processor power consumption will give less than a 10% increase in battery life. I don't see this trend changing.
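Here's the arithmetic behind that estimate (a sketch; the 40% CPU share of platform power is an assumed, illustrative figure, not a measurement):

    # Battery life scales inversely with total platform power. If the CPU is
    # only part of the power budget, cutting CPU power helps less than you'd think.
    cpu_share = 0.40       # assumed CPU fraction of platform power (illustrative)
    cpu_cut   = 0.20       # 20% reduction in CPU power

    new_platform_power = (1 - cpu_share) + cpu_share * (1 - cpu_cut)  # 0.92x
    battery_gain = 1 / new_platform_power - 1                         # ~8.7%

    print(f"Platform power: {new_platform_power:.2f}x, battery life: +{battery_gain:.1%}")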

Then you need to factor in the SoC factor....

Again, I did. It's called integration, and I pointed it out as an advantage for ARM in my original post.

I'm beginning to suspect you're just trolling, but I'm giving you the benefit of the doubt initially.


But the biggest problem is Intel limiting Atom artificially to prevent cannibalisation [sic].


That depends on where you think Intel is going with Atom. If they continue to drive the power down and push it into smaller form factors it becomes less of a threat to their main CPU business. I believe graphics performance is poor because Intel doesn't currently have a power optimized version of their leading edge graphics (no I'm not claiming Intel's graphics are the best, they aren't). Not because they are afraid of cannibalization. That is just my opinion, of course, you are welcome to disagree.

Atom is nice for Windows compatibility, but that's no use on a mobile device where the Windows UI just doesn't fit. In any way.

*sigh*

I'll simply quote from the original post again.

"In summary, ARM and Atom are rapidly converging to similar levels of computing power and energy efficiency. Within a few years I believe there will only be one key differentiator between the two architectures. The differentiator will be the ease with which you can move data between your various computing applications."

I don't believe that ARM will be able to move up the hardware stack significantly. If you accept that premise, then Intel has an advantage in being able to integrate across all levels of the hardware stack from server to hand held.

moving the software online

I probably wasn't clear on this earlier. I don't believe the 'cloud' experience you refer to will ever compare to keeping things local. There is simply no way that pulling data out of the cloud will ever be as fast as local access, so cloud computing will never provide as good an experience as operating locally.

Anonymous said...

The iFlop has arrived with ARM inside, and it validates my prediction that ARM is going to lose the war after one Tick Tock.

It's a $499 device. Yeah, it looks cool, but for $499 you can get an x86 that surfs the web, connects to 3G, has a keyboard, and gets 3 hours of battery life. You get a couple hundred gigabytes more storage, plus access to the iTunes store for movies, books, etc. etc.

You can, BTW, be doing Excel, PowerPoint, reading email, Facebooking, and listening to music all at the SAME TIME.

For ARM to catch up they will need dual cores, an out-of-order architecture, etc. etc. Guess what: that costs power. In the end, the guy with the best technology is going to win.

Apple simply hasn't learned their lesson well, have they? Mr. Jobs only needs to look to the past: 68000, PowerPC, and then what did they have to go to?

Tick Tock Tick Tock, and ARM will be where PowerPC and the 68000 and AMD were.

Pzyche said...

Ah, iFlop. Pretty much what people said about the iPod, and the iPhone, and anything that Apple has done that has gone on to be a massive success.

That 1.6GHz Atom netbook is overall less powerful than a dual-core 1GHz ARM Cortex A9, especially when the latter also includes acceleration for common tasks - video, security, games.

And the competition for those netbooks is smartbooks, coming this year at a significantly cheaper price.

You can't compare a cheap-ass netbook with a premium Apple product. Especially when Apple differentiate themselves via software (usability, accessibility, simplicity) and make what they do support work very well. They're using ARM because it gives them more battery life for a cheaper price and accelerates the UI and functions they need, and Intel has nothing here to compete with.

Tonus said...

I thought Apple was using its own proprietary chip for the iPad?

Pzyche said...

Apple have made their own custom System-on-Chip for the iPad (and future iPhones, etc.) called the A4. It incorporates fairly common components, however - an ARM CPU being one element, either an ARM Cortex A8 or, more likely, an ARM Cortex A9 (one or two cores, unknown). That will live alongside a GPU, video unit, security/encryption unit, audio, and more.

As soon as someone gets an iPad, they'll write something to dump the hardware and we'll get far more information about what's inside. I hope.

InTheKnow said...

That 1.6GHz Atom netbook is overall less powerful than a dual-core 1GHz ARM Cortex A9, especially when the latter also includes acceleration for common tasks - video, security, games.

In addition to continuing to repeat a claim with no supporting documentation (the ARM more powerful than Atom claim) you are now confusing the processor with the platform.

ARM has a significant advantage in integration right now, but that advantage will be gone in 2-3 years. At that point in time Atom will be on 22nm as a fully integrated SOC product. Until that time, you won't know which architecture is fundamentally superior because you are comparing apples to oranges.

ARM does offer the more compelling platform right now, but it isn't without its own issues. It may not run legacy software and can't currently run Flash, each device from a different manufacturer potentially requires a software port because of architectural differences, and there is no guarantee of cross-device compatibility from different vendors (or even the same one). These limitations are not trivial in their own right.

However, from this point on I'm going to put you on the same special list I have for Mr. Ticktock and ignore your posts until you choose to address some of the issues I've raised. I'll leave it to you to demonstrate you are interested in having a real conversation and not just trolling.

InTheKnow said...

I thought Apple was using its own proprietary chip for the iPad?

Indeed they are. I'm fairly sure that they based their design on an ARM design (they do have an ARM license). I believe that they chose to go this way after producing the MacBook Air.

Intel developed a custom chip with custom packaging to enable the form factor Apple wanted. After doing that for Apple, though, Intel did the logical thing and offered the new chip design to their other customers. Apple's volume for the MacBook Air doesn't exactly justify the cost of R&D and the investment of manufacturing resources on its own. It is my belief that Apple saw that this would be the future model for working with Intel.

Apple would be able to produce leading edge products first by working with Intel to enable new designs. But they wouldn't have any real IP advantages over the competition. The purchase of PA Semi gave Apple an in-house design capability that would allow them to develop their own IP. I'm sure Apple felt this would give them a longer lasting market lead on new products.

I've seen no evidence that Apple chose to go this route because of Intel's inability to meet their needs with their future roadmap. But Apple thrives on offering unique products. Going with Intel's Atom infrastructure doesn't really fit Apple's market position. How many flavors of essentially the same netbook can you go out and buy, after all?

Pzyche said...

How immature do you have to be to put everybody that might not have the exact same opinion as you on your block list because you can't cope with what they write?

In three years the battle will be over. Never mind Intel never being brave enough to actually make Atom be what it could, because they think it will cannibalise the sales of their more upmarket processors.

In the meantime we'll have smartbooks and tablets doing what consumers want from such a computing device, with the winning platforms being Android, Apple's mobile OS, Nokia's Maemo, maybe WebOS if Palm enter the market (alas for Foleo, you had the right idea but ran away scared).

I swear that people think that ARM have packed up their designs now they've done the A9. In reality they'll be releasing even more powerful processors and architectures (scaling up, whilst fortifying where they rule already), whilst Intel will have to restrict Atom to slow speeds to limit power consumption, even as they shrink processes.

It's not worth posting proof, because you won't read it anyway, instead deciding that you know best anyway.

Pzyche said...

"It may not run legacy software and can't currently run flash"

In the areas ARM is winning, and will continue to win, it is x86 that can't run the legacy software. Android is probably the most portable because of its Dalvik VM. It is the job of the operating system to handle and abstract the different capabilities of the hardware, so that isn't a problem either.

And Flash 10.1 is available for ARMv7 architecture processors, including ARM Cortex A5, A8, A9 and future designs, Qualcomm Snapdragon (Scorpion), and Marvell's new SoCs.

InTheKnow said...

Finally, a response to something real rather than throwing out generalities. Thank you.

In the areas ARM is winning, and will continue to win, it is x86 that can't run the legacy software. Android is probably the most portable because of its Dalvik VM.

Last I checked, Android ran on Atom. And in your following statement (which I have issues with), you say it is the job of the OS to resolve these hardware differences. So why can't the OS work with Atom and run the "legacy software"?

It is the job of the operating system to handle and abstract the different capabilities of the hardware, so that isn't a problem either.

You are right up to a point. But only up to a point. Hardware abstraction costs resources. Those resources aren't available for computing. The more homogeneous your hardware stack, the fewer resources you are using to run hardware abstraction layers. That is an inherent long term advantage to Atom if all other things are equal. And I believe that we are heading toward equality.

And Flash 10.1 is available for ARMv7 architecture processors, including ARM Cortex A5, A8, A9 and future designs, Qualcomm Snapdragon (Scorpion), and Marvell's new SoCs.

Sorry, Flash 10.1 is in beta. I don't consider that ready-for-prime-time software, and apparently Adobe doesn't either. That is why I said that Flash doesn't currently support ARM.

Despite your claims to the contrary, I actually welcome an opposing viewpoint. But to quote Abraham Lincoln, "I shall adopt new views so fast as they shall appear to be true views."

Make a strong point and I will change my view.

SPARKS said...

“and anything that Apple has done that has gone on to be a massive success.”

A truer word was never spoken.

Last quarter Apple simply ran away with the market. I won’t even bore you with the links.

Personally, I wouldn’t piss on these things, but I’ve purchased TWO iPods, one for my wife, and one for my daughter. Apple nailed me twice by default!

Hell, they get the prime Intel high end gear first and I’ll bet dollars to donuts they have a 12 core solution waiting in the wings. On the other end, shall I say the low end; they have consistently created new trend setters. The iPad is no exception.

Oh, I can see it now, come this summer, with all those lovely ladies sitting in Starbucks on Fifth Ave., sipping their Caramel Macchiatos, enjoying the morning café with iPads in hand.

Like it or not, this thing’s a winner, big time. A4, B4, after, whatever, it don’t matter! It’s trendy, convenient, user friendly, BIG, light, loaded with apps, and very cool. All the naysayers can piss and moan all they want.

I’m dreading the prospect of my wife getting her hands on one.

That's three to Apple, for ole SPARKS. (Don't look now, but it is kind of trendy for me to admit that my house is loaded with iSomethings or others!)

I WANT MY SIX CORE XE!

SPARKS

SPARKS said...

"It is the expected completion of its metamorphosis from 'AMD Proper' into the world's second largest fabless company that deserves the most attention. With the consolidation overhang gone, we believe AMD gradually exits the penalty box and transitions into a more typical valuation framework."

Fab-you-less, valuation framework, AMD Proper.

Someone once told me if it looks like crap, sounds like crap, and smells like crap, it probably is crap.

PU

http://www.marketwatch.com/story/amd-shares-fall-sharply-despite-upbeat-earnings-2010-01-22?reflink=MW_news_stmp

SPARKS

SPARKS said...

Hey GURU!

Are you out there? I just got my State Income Tax return! Well over 6 G's!

The New York Times,
The Daily News

When it comes to reality,
It's fine with me
I'll just let it slide.

Don't care if it's Chinatown
or on Riverside.


I don't need any reason,
I left them all behind.

I'm in a.........

SPARKS

Anonymous said...

Hmm, there was SPARC, there was PA-RISC, there was the 68000, there was PowerPC. All were going to conquer x86; all are history, insignificant and incapable of funding the investment required to develop the next generation.

You guys don't understand: without x86 you'd still be using 1lb clunker phones and paying twice as much for your computers and internet.

But have no fear: with x86 volumes and profits, they are one Tick Tock away from 22nm, where they will have an SOC that will crush ARM just like all those other "superior" architectures of the past.

Do you dumbFs think IBM, TSMC, or Samsung have the $, the volumes, or the technology and business to invest the 20 billion for 22nm and the next generation after that? NOPE.

x86 will be the only thing standing; even the dumb Arabs know that.

Pzyche said...

So there may be a few Atom based SoCs in 2012 on Intel's 22nm process.

Right now there are dozens of ARM SoCs available, with different feature sets, performance, prices, etc. Competition here is keeping the price down.

In 2011 they will be on GlobalFoundries 28nm process. That's a half-node from 22nm, and given ARM's inherent design advantages it won't be enough for Intel to compete, not at a price that they would be comfortable competing at.

"without x86 you would still be using 1lb clunker phones" - err, what? Mobile phones created a market for SoCs, which are now mostly ARM based, but in the past there was Hitachi SH3/4, MIPS, and myriad others. ARM enabled the Psion 5 over a decade ago. ARM enabled the Apple Newton even further ago than that. It was a major step up from the Z80s and 8088 clones that were used in early portable 'books (e.g., Amstrad NC100/200) and PDAs.

And the 68000 (via ColdFire - a not-quite-compatible offshoot of it) powered the Palm PDAs until they switched to ARM.

I'd love to see a 22nm Atom SoC with Intel Graphics "crushing" a 28nm Tegra 3.

Anonymous said...

GF, ROFL... where are they going to get their low-power, high-yielding high-k metal gate? From IBM's wonderful gate-first process, announced 2 years ago but still missing in action?

From TSMC where they can't get yield on 40nm due to chamber matching.

A design is only as good as the process it runs on, and a process is only as good as the design running on it. Right now, if Intel delivers on their SOC programs, ARM is history.

Anonymous said...

The Arabs got money, lots of it. It can buy a building and lots of fancy, expensive immersion steppers, but without a crew that knows how to develop a high-yielding process, it is going to be an expensive flop brewing in upstate NY.

SPARKS said...

"AMD: “We expect the company to continue to generate sizable losses in 2010, driven by the combination of losses at GlobalFoundries, meaningful exposure to the low-end of the desktop business, and potential share loss in high-end servers,” Goldman analyst Jim Covello writes. “Importantly, we continue to highlight hat AMD’s peak EPS is likely to deteriorate cycle-to-cycle given its high interest expense and higher share count vs. last cycle.” The stock is up 264% over the last 12 months, he notes, and already reflects a PC recovery. Goldman’s price target: $5.50."

Still don't wanna bet either way?

SPARKS

SPARKS said...

ITK,

ARM CEO Warren East is very bullish about his company’s future. In fact, he claims it’s possible to take 90% of the PC market!

I suppose ARM and Linux together will render the last 2 decades of hardware and software infrastructure obsolete and useless over the “next several years”.

Sounds like another ‘death of the desktop’ prediction to me. Who knew that handheld mobile devices would spell the death of the power users and gamers? They’ll just throw that pesky crowd in with the server market.

You see, once again, you don’t need all that power.

I can't wait to download 50 photos from my Canon EOS Rebel XSi, along with the few thousand or so saved, to my mobile device for some photo editing and storage. Now that should be a user experience.

I don't know how I missed it. How did these devices get so powerful, so quickly? I must have been too busy redesigning my new bath and kitchen home renovations in VISIO, or my last 12,000A service equipment installation in AutoCAD.

Power Users and gamers are SO boring. Cheese, no one in my house will ever fight for “daddy’s old machine” again.

Yeah, right.

SPARKS

http://www.pcpro.co.uk/news/355246/arm-our-netbooks-will-fly-with-or-without-windows

InTheKnow said...

ARM CEO Warren East is very bullish about his company’s future. In fact, he claims it’s possible to take 90% of the PC market!

Isn't marketingspeak great?

It will be at least as hard for ARM to move into the PC space as it is for Intel to move into the mobile space. What people don't seem to get is the fact that the ARM architecture isn't some sort of magic. It is bound by the same laws of physics that Intel is bound by.

If you increase clock speed, power consumption increases. And the increase isn't linear. Since they can't turn up the clock speed much more than Intel, then they will have to change their architecture to handle PC workloads more efficiently. In short if they want to play on Intel's turf (the PC space), they are going to have to build a similar architecture.
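To put rough numbers on "isn't linear": dynamic power goes as P ≈ C·V²·f, and since supply voltage generally has to rise along with clock speed, power grows roughly with the cube of frequency. A toy model (the linear voltage-frequency assumption is a simplification, not real silicon data):

    # Classic dynamic power model: P = C * V^2 * f.
    # If V must scale roughly linearly with f, then P ~ f^3.
    def relative_power(f_ratio, v_scales_with_f=True):
        v_ratio = f_ratio if v_scales_with_f else 1.0
        return (v_ratio ** 2) * f_ratio

    for f in (1.0, 1.5, 2.0):
        print(f"{f:.1f}x clock -> {relative_power(f):.2f}x power")
    # 1.0x clock -> 1.00x power
    # 1.5x clock -> 3.38x power
    # 2.0x clock -> 8.00x power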

And as you pointed out, they have to do this without the benefit of the existing hardware and software infrastructure.

If you notice, this is exactly what Intel is doing with Atom. They are making significant changes to their architecture to be more like ARM. The difference is that ARM just sells IP. There is no monolithic hardware and software infrastructure in place. Their market space is already Balkanized and Intel is just going to be seen as one more state carving out a chunk of the landscape.

I differ with some that have posted here in believing that Intel's chunk will end up being a big one. According to the most recent ARM advocate to float through here we should know if I'm right or wrong within 3 years.

SPARKS said...

“…is bound by the same laws of physics that Intel is bound by.”

“If you increase clock speed, power consumption increases. And the increase isn't linear.”

I love engineering and the physical laws they are bound by when put so succinctly, logic and law.

They can shovel a lot of crap on the media, but you can’t pee on Quantum mechanics.

In my case it’s simple, actually. I’m always bragging about the powerful machines I build. However, what most folks fail to realize (and probably never will) is that high end machines are the “Swiss Knives” of the end user experience. They can do ANYTHING very quickly and run any program, multiple programs, or ANY game seamlessly with nary a hiccup. The only thing they can’t do is fit in your briefcase or handbag, to say the least.

From where I sit, most of the 'one ARM advocates' have never experienced the power of a state-of-the-art, multi-core, multi-hard-drive, multi-graphics-card powerhouse. In short, they are satisfied with the good-enough solution, where even the laptop is too powerful and an ultraportable will suffice.

Therefore, I believe the big winner in the Tablet PC market will be the ones that work well and coexist together with PC’s and their legacy software.

Hence, Steve Jobs’ incessant desire to isolate his products from the mainstream PC market is inherently flawed. Conversely, when a Tablet PC, or any other portable device for that matter, can play well and integrate with one of my x86 monsters and its software, that device will be a winner, big time. (Jobs didn’t put an Ethernet port on the iPad! Hmmm, I wonder why?)

More importantly, this is where the Tablet battle will be fought on both the personal and corporate level: coexistence as opposed to “taking over”. If anyone thinks MS and INTC aren’t headed in this direction, then they’d better go back to playing ‘Space Commander’ on an Atari 800 running BASIC.

Regarding power consumption, I think most users would be happy with 4 to 6 hours of running time, as opposed to the Holy Grail of 10 to 12 hours relentlessly sought by the Apple/ARM advocates. This will improve on the x86 front as INTC scales down in size, in conjunction with architectural optimizations and innovations. This is gospel.

Hey, they said Rice Burners would take over the muscle car market. Today we’ve got 635 HP factory Corvettes running in the 11’s, all at 20 MPG highway, and people are cheerfully dropping $106K for the privilege!! That never happened in the late sixties, even with L-88’s and ZL-1’s.

Now that’s what I call speed and power consumption.

SPARKS

SPARKS said...

Well, that didn't take long.

http://www.hexus.net/content/item.php?item=22277

SPARKS

SPARKS said...

Neither did this.

http://www.semiaccurate.com/2010/02/08/intel-and-nokia-make-chip/

SPARKS

InTheKnow said...

Sparks, that article did a good job of laying out the roadmap for Atom. It also highlights the reason I wonder if Intel is moving quickly enough. Two years might not be that long in the CPU world, but it is forever in consumer electronics. A lot can, and will, change before Intel has a truly competitive product.

InTheKnow said...

I ran across this article on TSMC's decision to go gate-last and thought it was an interesting read. Ironically, gate-last gets harder to do as you get smaller. It wouldn't surprise me to see Intel move in some other direction (tri-gate maybe?) by the time everyone else jumps on the gate-last bandwagon.

Anonymous said...

I think trigate (which may also coincide with SOI?) is likely when you will see a switch to gate-first. In addition to the smaller feature sizes, the gate fill will likely be even harder when dealing with a non-"U"-shaped trench when using trigate.

Also, trigate/SOI will probably allow an increased ability to Vt-adjust the active channel, which should (maybe?) help with the Vt issues.

The other option, which is probably still a lot longer off, is to move to gate-first if/when a switch is made to alternate channel materials (III-V, nanotubes, etc.), but I guess the wall on the ability to do the fill will be hit before any of those techs are ready.

It will be interesting to see what IBM does. I imagine they may roll out processes for both gate-first and gate-last after 28nm (or at 28nm) in an effort to save face for all the hyping they did about gate-first. Given the complexity of either flow they probably just need to pick a horse, but I'd not be surprised if they try to do both going forward...

SPARKS said...

Hi-K metal gates “mark the biggest change in transistor technology” since Intel's pioneering use of polysilicon in 1969. – Gordon Moore

One of my top ten players, sports fans, said that a few years back.

It seems ole LEX was right. AMD/GF “ain’t gonna” have it anytime soon. Obviously, INTC’s proprietary process engineering has the entire industry stumped. As if we didn’t know.

And here, we’ve got GURU swinging the Tri-Gate/III-V metal bats.


KUDOS

http://www.xbitlabs.com/news/cpu/display/20100211053554_Globalfoundries_No_AMD_45nm_Microprocessors_with_HKMG_Incoming.html



SPARKS

Anonymous said...

Yo, Sparks, you the man, beat me to the punch, but I still got to say it!

LOL, did I say this many moons ago! The world must go gate-last. Of course IBM will quietly plow away, and maybe by the time they figure out how to do it the world will have moved on, and no one will remember that once, a long time ago, they claimed to beat Intel by proclaiming themselves first to high-k/metal gate. First by a day, announced the weekend before Intel, with the "far superior" gate-first. It's been years since that day, but IBM and the consortium still haven't shipped a single die. Intel has shipped hundreds of millions, and a second generation as well.

At least the big guy at TSMC has testicles enough to admit that gate-last is the way to go. I guess either he has honest engineers, or IBM's engineers are really stupid, or the managers at IBM are that stupid. The IBM CEO doesn't care much, as he has bigger things to do. Does this at all sound like SiLK to anyone?

The iFlop with ARM is a fine example of why ARM won't rule the world. EVERYONE obeys the laws of Moore. Faster requires more transistors, more cores, deeper pipelines. Then you factor in the fact that ARM has overhead to run all the stuff that's important to you. You got this fancy touch screen and want to edit a picture... oops, not enough memory, too slow. You want to watch a video on the internet... oops, won't run Flash. Netbook for $399, anyone? Full fledged ULV laptop for $699, anyone? By the way, does anyone see the problem with the thin client? Look only at the iPhone to understand that you can't rely on the cloud. You need power at your fingertips.

Those with Moore in their corner, with the fastest and smallest transistors, win. Those with enough $ to keep the Moore trend alive keep the lead, and it's a huge lead to have double the transistors, 30% more performance, and ½ the leakage. Tick tock, tick tock: AMD is history, GlobalFoundries is history, ARM is next.

InTheKnow said...

Sparks, if AMD actually hits this schedule, they will be "sampling" HK/MG by June. Llano is fabbed on the 32nm process node, and according to an AMD spokesman...

"At 32-nanometers AMD has introduced a high-k metal gate process for the first time to help manage power consumption...." Silcott said.

The actual product is supposed to ship in early 2011 from what I've seen. I'm going to be really interested in seeing how this ramps.

There is a world of difference in moving from a small number of wafers to full HVM. There are always unpleasant surprises when you increase from a pilot line to full scale production. And I'm here to tell you that no amount of secret APM sauce is going to prevent the 'gotchas' along the way. HK/MG presents a whole host of integration challenges that AMD is going to have to work through.

And don't let anyone kid you. Intel runs into the same problem. They have to work through the issues just like everyone else. Intel has just gotten really good at ramping up new nodes and working through the problems quickly.

AMD (or GF if you prefer) is going to have to prove they can do the same.

Anonymous said...

AMD has a long history of pushing the process technology envelope with success. I'm fully confident that they will ramp this complex, highly complicated process smoothly. Issues with things like chamber matching and design-to-process interaction? They have all the experience and the track record.

Sorry, no: they have no record, and a long history of failing here.

Sadly, none of us will get to watch this slow-motion train wreck up close. All we'll see are low volumes and delays, but those in the know will know what is going on.

Tonus said...

"AMD (or GF if you prefer)"

Question: how much of an additional impediment does the disengaging of manufacturing create for AMD when they are transitioning to a new process? Since their break with GF is recent, do they still get to work very closely with them, or will there be roadblocks from not having your foundry people in-house?

SPARKS said...

Tonus, I’ll take a shot.
As you’re all well aware, my engineering experience is extremely lacking, almost nonexistent. However, as I have learned here:

You can’t order 32nm tools at the Home Depot, especially ones designed for an SOI process. Can GF afford AMD’s very specific needs, especially now, when they are slowly distancing themselves from AMD? All this hi-tech money spent for a “good enough” solution with a limited profit margin? Yeah, OK.

SOI is a more expensive process with many more steps involved. (At least this is what I got from my tenure here.)

Regarding the close association between the process boys and the design boys, TSMC’s big shot, in the link ITK provided, spells it out very clearly:

"everybody — the process people as well as the layout people — need to adjust the way they do things in order to make the products competitive."

Sounds like “everybody” involved will have their nuts in a twist due to “constraints”. I see it as the difference between a forced compromise based on manufacturing limitations and a shared, workable solution by ALL to achieve specific goals: subtle, but profoundly different.

INTC’s boys hammer out solutions to problematic issues simply by beating each other up in daily in-house meetings. I’m guessing, but if both camps need a modification or two to a tool or tools to make a unified process/architectural solution work, they’ll get it done. I don’t think the “rent-a-foundries” will be nearly as flexible or spend that kind of money. INTC does.

GURU, ITK, Orthogonal, and even LEX, have all made this close association abundantly clear in the past.

Then again, I’m just an electrician making a marginally educated guess. But the evidence, given INTC’s monumental lead, suggests I’ve been well schooled by my engineering buddies.

SPARKS

SPARKS said...

There are times when Charlie D. really shines.

In his latest report, he gives a very good inside look at what goes on inside a FAB, well, at least at TSMC. I don’t know how accurate the report is because I’ve only ever seen pictures of a FAB. Perhaps someone here with a bit more inside knowledge could tell us, the plain folks, if his report has any merit.

That said, if it is accurate, I see big problems ahead for the rest of the industry, sans my beloved INTC of course.

http://www.semiaccurate.com/2010/02/17/nvidias-fermigtx480-broken-and-unfixable/


SPARKS

Anonymous said...

There was enough detail in there to probably be fairly accurate, although I'd take the quoting of yields with a grain of salt.

It does show the foundry dilemma... you either have to:
- put in overly restrictive rules, which lessen performance or area scaling, in order to take the "lowest common denominator" approach (make the process design rules capable of accepting any design)
- work with many customers on custom tweaks, or determine which ones are worthy of doing this
- deal with the multiple steppings when the design/process interaction inevitably fails.

Not sure how the costs are dealt with on re-spins/new steppings - while I'm sure things like mask costs are picked up by the customer, there are still hidden costs to the foundry (more hot lots, more engineering time, increased support cost/effort like metrology, etc.).

It also shows why foundries typically run about "n-1" or even "n-2" technologies (65nm is still high volume) - for many customers it's not worth the risk/effort to scale the die size or eke out 20-30% performance, given some of the fixed costs (masksets), engineering costs on the design side, and potential risks like product slippage. The other thing to keep in mind is that you never really get the fundamental scaling (the actual shrink is more typically about 40%) and wafer costs also go up (typically ~10%), so moving a product from one node to the next might not even be cost effective unless you are in high volumes.
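To make that concrete, here's a back-of-the-envelope sketch using the same rough numbers (a ~40% actual area shrink and ~10% higher wafer cost; all figures illustrative, not real foundry data):

```python
# Relative cost per die when moving a design from node n to node n+1.
wafer_cost_old = 1.00    # normalized wafer cost at node n
wafer_cost_new = 1.10    # ~10% more expensive at node n+1
area_scale     = 0.60    # die shrinks to 60% of old area (40% shrink)

dies_old = 1000                     # hypothetical gross dies per wafer
dies_new = dies_old / area_scale    # ~1667 dies per wafer

ratio = (wafer_cost_new / dies_new) / (wafer_cost_old / dies_old)
print(f"relative cost per die at n+1: {ratio:.2f}")  # ~0.66
# A ~34% die-cost saving, before masks, redesign effort, and yield risk,
# which is why low-volume products often stay on n-1 or n-2.
```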

I think this will also be a dilemma for GF in terms of long-term business direction - with the exception of a few customers, the competition will be on previous nodes more than the leading edge, and it's a question of how much time and resources you want to sink into the leading edge to get what might be a relatively small piece of the pie.

I'm not sure how sustainable (from a business perspective) it will be for GF to attempt to keep up with Intel on tech nodes (or to keep the gap the same, I should say). Keep in mind a foundry is now dealing with 3 process techs per generation (SOI, bare Si, and generally some low power version) and also dealing with 1/2 nodes. Intel churns out 1 tech per node (and then does a low power version of that node a bit behind it) and doesn't deal with 1/2 nodes.

Anonymous said...

The strength of the IDM model is going to be key going forward.

TSMC and others got fooled at 180, 130 and 90nm into thinking you could buy a tool, slap a process together, and "Foundries-R-Us" is open for business.

At 65nm that model failed. As noted, the tight integration of design rules, density and other subtle things makes discrete transistors versus large dies with hundreds of millions of trannies very, very different. Just go ask the CEO of Nvidia if he wishes he were a real man with a fab, versus a big dick with no testicles.

Now you got GF with deep pockets, balls from the Arabs, but no dick.

You got IBM, the skeleton of an IDM, but with no stomach to invest, as they don't have enough volume to justify development. They are just a prostitute, farming out papers and ideas like gate-last that are totally not manufacturable for big chips.

Who is left standing with a process at 32nm, and soon 22nm, and with it the ability to deliver a 500 million trannie, 5mm², low cost SOC, or a 1 billion trannie, 1cm², 4-core CPU?

Anonymous said...

Oops, IBM don't got gate-last; they were first to high-k/metal going gate-first, LOL. My bad, it's late!

Tonus said...

So you're saying that there are a lot of trannies at Intel? :o

SPARKS said...

“although I'd take the quoting of yields with a grain of salt.”

Hmmm, this is interesting. Rarely, if ever, do I take issue with anything you say. However, all the evidence points the other way, at least from my limited perspective reading the tabloids, websites, etc. Given AMD’s initial shortages with the 5xxx series graphics processors, and now NVDA’s issues with Fermi, something is rotten in Denmark. Further, TSMC’s problems go back at least six months. Is it because you don’t believe the yields could be that bad?

And why would they remask the whole enchilada a number of times? Didn’t they follow the design rules in the first place? Or are they winging it as they go along? That megalomaniac running NVDA had the balls to ridicule INTC’s Larrabee when in fact he is running into the same issues, if not worse. At least INTC doesn’t have “chamber matching issues.” It seems the entire mess has to do with both process and architecture.

Sure INTC took a PR hit, but it seems NVDA is stuck in a quagmire.

I don’t get it. What’s your take on Charlie’s report concerning the yields?

SPARKS

Anonymous said...

Larrabee is another Itanic. Intel has underestimated what it takes in software and architecture here.

The good news is that it matters less and less, as the standalone graphics business becomes smaller and smaller, and ATI and Nvidia are going to be severely hampered by using TSMC's and GF's inferior, old processes with crap yields.

I was thinking: you invest 30-50 million in immersion steppers, and you can't get the yield right because of chamber matching in the metallization layers? Something smells very, very wrong in foundry land.

The world is going integrated, and with so many trannies at 22nm, INTEL will have no problem providing graphics that is just good enough, with GOOD yields.

Tick tock, tick tock. AMD is gone, and ARM is going to get steamrolled. Is that a 32nm SOC chip I hear coming?

SPARKS said...

“Tick tock, tick tock. AMD is gone, and ARM is going to get steamrolled. Is that a 32nm SOC chip I hear coming?”

Yes you did.

Mr. Pankaj Kedia over at the Ironworks seems very confident. Who am I to argue?

http://channel.hexus.net/content/item.php?item=22489

BTW: I absolutely loathe the expression “Itanic”, damned four-letter word. Besides, INTC put the kibosh on the whole thing before it got real ugly. They didn’t have an HP pushing the thing down their throats for what seems like 30 years.

SPARKS

InTheKnow said...

Larrabee is not dead. Intel can't afford to let it die.

Not for graphics, but because the GPU-type architecture is too good at certain types of workloads for the CPU architecture to compete with it.

Intel will either have a successor to Larrabee or an announcement on a new roadmap for Larrabee II before the year is out.

Mark your calendars. If 1Jan11 rolls around with no new roadmap I will publicly admit my error.

InTheKnow said...

And at the risk of spraining my arm while patting myself on the back, I'll direct your attention here.

This sounds a lot like my claim that Intel is racing the clock for early adoption against ARM in my (admittedly biased) opinion.

Now back to the pile of school work that has consumed my every waking hour.

Anonymous said...

Stupid fanboy wrote: ...ATI and Nvidia are going to be severely hampered by using TSMC's and GF's inferior, old processes with crap yields.

LOL :D
Don't be surprised if you see GF overtake intel on process tech.

Remember: their owners swim in money. ;)

Tonus said...

If it was that simple, wouldn't IBM have been ahead on process tech years ago?

Anonymous said...

1) Remember a few years ago this little thing called a netbook, with Linux, selling for $299? It wasn't until it was packaged with familiar Windows that it took off. ARM is a wet dream.

2) Remember 15 years ago this little company that was way ahead on process and stood for everything in technology, from mainframes to software? They even invented the PC. They so wanted to own the uP and spent money on it from all angles. Lesson: no dumb fuck Arabs, even with billions, can win this one. You think there will be fabs in Arabia? Who will be the hard working engineers? Import Asians to the Middle East, right... The Arabs will piss away 20-40 billion before they get close, and by then Moore's law will be over, dumb fucks.

Itanic was billions wasted as a contingency against PowerPC and other high end CPUs; in the end they all failed against the mighty x86, and so will ARM.

Tick Tock Tick Tock!

SPARKS said...

“Don't be surprised if you see GF overtake intel on process tech.”

Huh??? Someone get that keyboard away from that kid before he hurts himself.

Actually, after a good laugh, I was wondering about GF’s real competition, TSMC. (That’s GF’s real target, Sunny Boy, not INTC.) Let’s do some homework, shall we?

Try to follow.

The combined number of FABs (with the merger of Chartered, in case you didn’t know) that GF will have WILL be:

Five 200mm facilities and four 300mm: two in production, one in Dresden that will be coming on line, and one in New York (to which I am personally contributing my tax dollars) scheduled to come on line in 2012. That totals nine in all by 2012.

In contrast, TSMC has nine, and they are all on line.

One 150mm, six 200mm, and two mammoth 300mm FABs called GigaFabs; these two alone can crank out 150,000 wafers a month.

The combined revenue of second player UMC and upstart GF/Chartered together is one third the revenue of TSMC. (I don’t know how to factor in the oil injections from the UAE.)

UMC and TSMC are the only ones turning a profit.

Third player SMIC decided to expand at the expense of profitability.

TSMC accounts for 58 to 62% of the entire industry’s sales, and 80 to 90% of the industry’s profits. They could conceivably, because of their sheer size, meet any challenge posed by any competitor at the expense of profits. Plus, they have money, and as it stands they don’t see serious competition.

GF hasn’t made a dime. To survive, they MUST go toe to toe with TSMC as a general purpose ‘rent-a-foundry’, not with INTC. ‘Economies of Scale’ says they can’t; forget “overtaking Intel on process tech”.

TSMC can’t even do that, especially with Hi-K metal gates. No one can. It takes more than money, pal. It takes a well orchestrated, well coordinated engineering effort that makes the London Philharmonic look like a one man band with a busted harmonica and a broke dick monkey.

Better head back to the AMDzone and wax the pocket rocket over there.

INTC has 32nm on the street, and it will be in my machine come April with six of the fastest cores on the planet, made with nuclear control rod moderator metals. (I’ve had Hafnium in my machine for nearly two years.) 22nm is in development, with proprietary innovations that AMD/GF/IBM/TSMC can only dream of.

GET IT?

SPARKS

Anonymous said...

Sparks, you are an educated fanboi... I wonder what all them AMDZone and Sharikou and Scientia types do for fantasy masturbation these days!

Anonymous said...

And in a stunning turn of events, EUV is looking like it will slip yet again. Intel is now saying they may do 193nm immersion all the way down to 15nm. I must say I was way off on this one - I knew the industry's thinking that EUV would arrive at 32nm was a pipe dream (I think that was the original plan), and thought maybe on 22nm, most likely on 15nm... now it looks unlikely (or at least iffy) on 15nm.

Don't I remember IBM talking up an EV dual stack not too long ago? I hear it looks great in a lab. Maybe it had airgap too ;)

Link on the EUV:
http://www.eetimes.com/news/semi/showArticle.jhtml;jsessionid=4WJGOEGZQPNAJQE1GHPSKHWATMY32JVN?articleID=223100024
(there are also some additional links on EUV on the main page)

Anonymous said...

That should say "EUV dual stack", not EV dual stack... I forget what the feature size was, but I think it may have been 22nm.

Anonymous said...

The 10,000ft view of SPIE: looks like the big news is Intel going 193 immersion for a few more generations than planned. Given the slow progress on EUV, even with Intel backing it, it ain't happening any time soon. And given the bank accounts of all the others, I doubt any other company will push it through.

Everyone will be stuck with immersion and creative tricks to push Moore's law for the next 6 years and 3 nodes.

This is very, very bad news for the foundries. The very nature of their business leaves them a bit disconnected from their customers. INTEL, with the tight IDM coupling and the design and process guys in the same company, can begin working now and probably already has a plan on design rules, restrictions and a path to make their choice viable. TSMC and the Arabs don't have any of that tight coupling. Lots of things are working against the foundry model; tight layout-to-process interaction for everything from material choices to design rules to performance/power tradeoffs means that, besides being one to three years behind, going forward they will only fall further and further behind, as not many people can afford to buy $100 million tools and run them hard for a few years to shake things out like INTEL can.

Tick Tock Tick Tock.

Anonymous said...

Some thoughts about ARM.

With every doubling, x86 gets more transistors that use less power for every switch. Take that for a few generations and you've got a lot of transistors that are almost free and don't burn any power.

ARM's model is outdated, from a time and place long ago. It was a time when transistors were expensive and sucked power, so they designed an architecture that was power and cost sensitive. The result, for the past decade, was the perfect computational device for cost and power sensitive appliances.

But 10 years on, the x86 sights are now set on the smartphone, or universal compute device. The needs of this device include not only cost and battery life, but also compute power and the flexibility to run the many applications we've come to expect.

What good is a mobile device that can't run Flash, that can't open and edit the office documents attached to your email? What use is it if it requires a whole new infrastructure?

Today we are just one more node away from x86 exceeding ARM performance by a ton, yet getting within 50% of the power. My prediction is that the compelling performance and compute synergy will more than overcome a slight battery lifetime disadvantage.

Lastly, it is rumored that it took Apple a BILLION dollars to fund their ARM chip. Let me ask you: who has the pockets and volumes to pay for a BILLION dollar design? Let's not forget the 10 BILLION to fund the development and factory to build the BILLION dollar design. Not the Arabs, not the foundries, and not the ARM design houses that sell their chips for a few bucks.

They will be late to every node and will have surprises as things get harder and harder with high-k/metal gate and litho.

There is only the “one”

Tick Tock Tick Tock.

SPARKS said...

“I must say I was way off on this one”

------and

“-----maybe on 22nm, most likely on 15nm”


Nah, the first time you mentioned EUV, you instinctively and adamantly said NFW. For the latter, I think you were (cautiously) hedging your bet.

I went with your instincts, naturally.

EUV eating lenses and mirrors sold me as well. And now, the link (on your link) is estimating each tool will be in the $90 million range!!! It seems EUV tools will give a new meaning to the term “The Law of Diminishing Returns” for the corporate bean counters.

In any event, well done, that crystal ball serves you well, as usual.

SPARKS

Anonymous said...


EUV eating lenses and mirrors sold me as well. And now, the link (on your link) is estimating each tool will be in the $90 million range!!! It seems EUV tools will give a new meaning to the term “The Law of Diminishing Returns” for the corporate bean counters.


Don't be so sure. $90 million over the lifetime of a technology is not as much as you might think.

Let's do some math. Conservative numbers: 1500 wafer starts/week, 52 weeks/year, four years for a technology.

Let's also say (extremely conservatively) we get 50 good die per wafer. (Waaay underestimating.)

Cost of a tool per good die = 90,000,000/(1500*52*4*50) ≈ $5.77 per chip per tool.

And that's not counting the number of immersion scanners you get to get rid of. Or fab throughput.

No, the main hurdle is capability right now. Let the CBA occur after the pilot lines.
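For anyone who wants to check the arithmetic, here is the same calculation spelled out (identical assumed numbers; only the rounding differs):

```python
# Amortizing one EUV tool over a technology's lifetime of good dies.
tool_cost   = 90_000_000   # dollars per tool, estimate from the article
starts_week = 1500         # wafer starts per week
weeks_year  = 52
years       = 4            # assumed lifetime of a technology node
good_die    = 50           # good dies per wafer (deliberately very low)

wafers = starts_week * weeks_year * years          # 312,000 wafers
print(f"${tool_cost / (wafers * good_die):.2f} per good die per tool")
# -> $5.77
```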

Anonymous said...

There is only the one, Intel will own all.

"Everyone will be happy with Intel Inside everything." - Maddoctor

"I hope that there is no semiconductor company other than Intel." - Maddoctor

"The World will be a better place without AMD and nVIDIA." - Intel Lover

Anonymous said...

I hope that everyone who has commented on this blog will get a chip implanted in their head. I like it, because Intel will control what people think.

Anonymous said...

People love to hate the big guy, in this case INTEL.

Just think back on the technology, the management, the track record of IBM, Moto, National, etc., etc...

Where do you think we'd be if INTEL had never switched from DRAMs to CPUs, with IBM using PowerPC?

We hate MS, but imagine if instead of that we had OS/2 Warp... LOL

You guys don't know how lucky we are.

Anonymous said...

Yeah, many parties are whining about the most successful company, Intel in this case. They don't like their competitor's achievements. I believe Intel is innocent and did nothing wrong. I hate people who compare it to mafia bosses like Al Capone. Paul Otellini is a good CEO who has directed his company to be ever more successful.

Anonymous said...

Anonymous said...
There is only the one, Intel will own all.

"Everyone will be happy with Intel Inside everything." - Maddoctor

"I hope that there is no semiconductor company other than Intel." - Maddoctor

"The World will be a better place without AMD and nVIDIA." - Intel Lover



Sharikou, or an AMDZoner in reverse? :)

Anonymous said...

BTW, in the early PC era most people did not have a clue what a PC was. Various manufacturers of desktop computers used their own solutions, like the Motorola 68K, Zilog, and 6809. So it might not have been a successful product if IBM had used another processor manufacturer or its own processor product.

Intel gained market share because of software piracy before the Windows XP era.

Anonymous said...

No, it's not. It was an Intel lover, like other Intel shareholders.

SPARKS said...

“Let the CBA occur after the pilot lines.”

Typically, maestro, you catch me off guard with terms like CBA (cost-benefit analysis) and pilot (sample) runs. After a little study, a few things occurred to me during my enlightenment:

Tool maintenance
Upgrade
Refurbishment
Repair
Downtime
Logistical support
Depreciation/resale value
A plethora of others I’ll never know.

Without the necessary Ph.D. in statistical analysis required for a complete understanding, are there certain industry standards that the bean counters plug in when they factor in these variables? Or do they re-factor the whole thing for each successive generation? (Another variable.)

(One more)

Reselling tools: now I know what Lockheed did with the tooling for the SR-71. They destroyed it so no one else could ever build the badboy if they got their hands on it. How does INTC resell its tools to, say, TSMC when they (partially) hold the key to manufacturing Hi-K, for example?

Hell, as a shareholder I’d be out in the parking lot myself wielding an axe, waiting for the first truck to come out the gate! I’m sure it’s infinitely more complicated than just one tool. (Yeah, yeah, I know: it’s not the size of the tool, it’s knowing how to use it.) However, what do they do with the highly proprietary tools? Are they a complete write-off?

Thanks.

SPARKS

Anonymous said...

You are wrong, Sparks. TSMC and Intel have a cross-licensing agreement in process development. Most tools are available to any party who wants to buy them. Although there are some customized design tools Intel must provide to TSMC for its gate-last HKMG process, that will be enough to produce, and Intel will get SOC process expertise from TSMC. I think it is a win-win solution.

Anonymous said...

TSMC and Intel have a cross-licensing agreement in process development.

What!? I beg to differ. If there is any IP coming from TSMC, I have yet to see it.

Intel has licensed the ATOM *design* building blocks to TSMC, but not the process. Like any other design, it would need to be ported to the different process.


Most tools are available to any party who wants to buy them. Although there are some customized design tools Intel must provide to TSMC for its gate-last HKMG process, that will be enough to produce, and Intel will get SOC process expertise from TSMC. I think it is a win-win solution.


No way. Intel's HK/MG is as secret as it gets. They don't tell the engineers who are developing the stuff how it all works, such that nobody has a complete picture. I can guarantee you that TSMC does not have it, either.

As for the toolset, it is true that "anybody" could buy it. Where the trade secrets are is *what* they buy and *how* they set the parameters on the tools. I doubt FSEs even get to see a full set of parameters any more.

Those parameters are quickly erased before a tool is demo'd.

SPARKS said...

“You are wrong Sparks,”

About what? May I suggest a brush-up on basic English, say, grade school grammar? I posed hypothetical interrogatives, and one minor example, in my post.

Obviously, the well informed Anon waxed your ass in the subsequent post, shot you down in flames, and exposed you as flamebait who doesn’t know what the hell they’re talking about. Don’t go away mad, just go away.

“ No way. Intels HK/MG is as secret as it gets. They don't tell engineers that are developing the stuff how it works, such that nobody has a complete picture. I can guarantee you that TSMC does not have it, either.”

That’s just the way I like it. See, I may be dumb, but I ain’t stupid! Neither are the boys at INTC. And if anybody opens their mouths, we can give them a Coney Island Whitefish Party and a nice sendoff.


“Those parameters are quickly erased before a tool is demo'd.”


Got it, it’s all in the recipe. Tool depreciation is a main liability in CBA, thanks.


SPARKS

Orthogonal said...

"I can guarantee you that TSMC does not have it, either.

As for the toolset, it is true that "anybody" could buy it. Where the trade secrets are is *what* they buy and *how* they set the parameters on the tools. I doubt FSEs even get to see a full set of parameters any more."


Yep, although I haven't heard of any 300mm tools being sold to 3rd parties (perhaps only obsolete toolsets for Intel processes); it's mainly the 200mm facilities that are shutting down right now and being auctioned off. They are, of course, wiped of all IP and custom configurations. Also, as pointed out, no tools or process info is or will be shared with TSMC; TSMC is just being used as a foundry. It's no more an IP risk than Nvidia and AMD using them to fab GPUs.

It's funny you mention the aspect of FSEs working on the tools and being essentially in the dark. As each node comes and goes, the push for IP security gets even stronger, but the FSEs are still asked to do more and more with less and less information to go on. It's a delicate balance.

InTheKnow said...

Also, as pointed out, no tools or process info is or will be shared with TSMC; TSMC is just being used as a foundry.

It is funny that this should come up now as Intel has admitted that this plan has gone nowhere.

I did find some interesting speculation on why subcontracting out Atom hasn't been successful here. I think this whole conversation about intellectual property and Intel's mindset is very germane to this issue.

I've said it before and I'll say it again. Intel is going to have to make some big cultural changes if they are going to make any serious progress in the SOC market. It requires more openness, not less. They will have to empower their employees and provide them with the information they need to essentially run multiple process technologies on the same chip.

I agree with the author's conclusion. Intel is quite capable of being successful here; it is just a question of whether or not they are willing to do what needs to be done.

Anonymous said...

SOC equals openness? NOT.

SOC is about offering integrated chips with a core and the features that people want. INTEL doesn't need to share the core. What Intel needs to do is migrate the Atom core to 32nm, where they will get even more of a density/power advantage. Figure at 32nm they will get another 50% power reduction over their best 45nm. Die size should shrink about 30% as well (it won't be 50%, as they need to add more stuff over the current chip).

Throw in an IMC, baseband MAC, graphics, and all the interface drivers, and you have something pretty compelling.

Openness is not needed.

I think ARM execs are looking at their process roadmap, comparing it to INTEL's, and pooping in their pants as the clock ticks.

Tick Tock Tick Tock

Anonymous said...

High-k/metal gate-last is a bit like Coke. Everyone knows the ingredients, knows what Coke buys to make it, but somehow nobody can make something that tastes as good.

Sparks, the secret is in the special recipes. You can reverse engineer till you are blue, but unless you steal the recipes for every step and CE, I highly doubt anyone can match INTEL without spending a few billion and a few years of hard work from 200-400 engineers. Guess what: nobody has that money, or the engineers, or the equipment.

And for the Intel guy: why don't you tell everyone what an FSE is...

SPARKS said...

ITK, interesting article, but I just can’t buy it. I agree that it would be nice for everyone else if INTC shared its IP, but what’s in it for INTC in the long term?

Those jokers in the Far East would be standing on line to clone anything they possibly could. INTC’s IP wouldn’t be worth a nickel in a few years.

He points out, “But I would not be surprised if Intel has a real fear about losing control of its intellectual property.”

Jesus H., ya think? He ponders this ecumenical revelation as if he’s Moses descending from Mount Sinai. Hello, what company in INTC’s position wouldn’t be??? And he wouldn’t be surprised?!?!?

He says, “3) Intel is not providing adequate visibility into its core”

I say tough shit.

The author criticizes INTC on six points, but I think I’d rather let INTC figure out what kind of balancing act they need to do to stay a powerhouse in the market in the long term, while keeping the family jewels safe.

Hey, when I go out to dinner with friends, they may get to dance with my wife (maybe), but I’ll be damned if they’re going to take her home. She is definitely IP, and they will never know her like I do.

I think, as the numbers go, INTC has done pretty well so far. Why change things, and for whose greater benefit overall? INTC has far more to lose, and that is definitely gospel.

SPARKS

Anonymous said...

The real reason why there's no ATOM at TSMC:

1) Suppose someone finds a way to usefully integrate a slower core (remember, it's TSMC, LOL)

2) And it turns out to be a hot seller

What do you think Intel will do? Soon they'll offer something with the same SOC wrapper around a faster core, and in 1 year it will be on a faster, smaller, cheaper process.

No company with a neat idea for an SOC will use the Atom. If it is a hit they lose; if it isn't they lose.

They lose no matter what, so the creative engineers will always pick ARM.

It doesn't change anything; the clock has almost ticked down to boom on ARM. It's just two generations before they are where AMD is.

Tick Tock Tick Tock

InTheKnow said...

ITK, interesting article, but I just can’t buy it. I agree that it would be nice for everyone else if INTC shared its IP, but what’s in it for INTC in the long term?

Sparks, I didn't read that article to imply that Intel needed to sell its process IP. Rather, they need to open up at least parts of the Atom design. Frankly, that is something that can be reverse engineered by anyone who has enough cash and wants it badly enough. Figuring out how to match the process tech is much harder.

This goes back to what I've said earlier about Intel having to change their mindset. SOC is not about in-house design. Almost every customer that would buy an SOC design has their own IP blocks they want to attach to it. Lock out their IP blocks and you lock out their business. It is as simple as that.

Note that until just a few years ago, Texas Instruments did all their own process development. But they found a way to compete in SOCs without giving away their process IP to all their competitors. If they were able to protect their IP while keeping their designs open enough to win their customers' business, I fail to see why Intel can't do the same.

If Intel is really going to be a competitor in the SOC market, they are going to have to allow customers access to certain elements of the design. If they aren't willing/able to do this, they should get out of the SOC business right now.

As to what Intel gets out of it? How about a truckload of new customers in a rapidly growing market? I've heard rumors that Intel has a plan to sell Atom profitably for 5 bucks a pop. At that price they are very competitive with high end ARM offerings. But if they don't offer the design flexibility that customers in this market demand, they won't sell anything regardless of pricing.

I contend that Intel must find the right balance between openness and protecting their IP or they won't succeed in this effort. And I don't believe they have found that balance yet. When you see the SOC design wins start rolling in, you will know that Intel has found the right balance.

SPARKS said...

ITK, if one mantra has been pounded into my head, it’s been the very close association between process and architecture. Couldn’t architectural IP give way to conclusions like “well, if they did this, they must have done that, therefore we can try this material or that material, or a combination(s) thereof”? Who would know better than INTC what to share and what not to?

Here’s a case in point why INTC shouldn’t give anyone squat. Charlie once again gives us another brilliant report, everything from chamber matching issues to contamination problems. He’s also reporting “fracturing and transistor variability”.

Now I’m no GURU, not even in the same galaxy, but I have learned something from you geniuses, and it sounds like someone has been overstressing those gates. Further, he’s also reporting “bonding/failure problems”; sounds like the sheets of lasagna ain’t sticking too well. Can it get uglier than this?

My point is twofold. These outfits and their marketing pimps throw around process numbers, 40nm and 28nm, like a blackjack croupier throws cards in Vegas. Yet they can’t seem to get around very serious issues at 40.

1. What will they do when the nodes get smaller? Uh, get some help from a company that knows how to do it? We all know who that company is, right? Sure, give ‘em all the IP they want, all for a nice hug and a wet kiss.

2. As far as SoC is concerned, we all KNOW INTC will successfully scale smaller, thereby lowering power requirements and increasing functionality moving forward. INTC is at the top of their game, with more to come. Why worry?

Is it any wonder they’re all bitching that INTC should be more open? Nah, I say make haste slowly, hold those cards close to the chest, and see how the SoC “design win” thing plays out over time.

SPARKS

http://www.semiaccurate.com/2010/02/27/why-tsmc-still-having-40nm-woes/

SPARKS said...

Think I'm exaggerating about those smaller nodes? Oh, they've got 28nm AND 22nm all lined up, just like that, with EUV, no less!

SPARKS

http://www.theinquirer.net/inquirer/news/1594021/tsmc-40nm-yield-issues-explained

InTheKnow said...

As far as SoC is concerned, we all KNOW INTC will successfully scale smaller, thereby lowering power requirements and increasing functionality moving forward. INTC is at the top of their game, with more to come. Why worry?

One word. Netburst.

That is the attitude that got Intel in trouble once before. Those who don't learn from history are doomed to repeat it.

InTheKnow said...

Couldn’t architectural IP give way to conclusions like “well, if they did this, they must have done that, therefore we can try this material or that material, or a combination(s) thereof”?

May I suggest you check out www.chipworks.com? Determining materials, and to some extent process flow, has already been done. Anyone with the cash and the interest already knows what Intel is doing. What they can't do is figure out exactly HOW they are doing it. It is far more about process conditions and process flows than material choices. And Intel wouldn't have to give that out.

Remember that what Intel was going to do with TSMC was port the Atom design to the TSMC process. The only IP at risk was the Atom design. No HKMG, no stress/strain, no process IP at all.

To quote your own NY Times:

Intel confirmed this week that a lack of customer demand had put the partnership on hiatus for the short term. Which is to say, there will be no jointly developed Atoms arriving anytime soon, although Intel continues to hope for the best down the road.

“I think we had a lot of key learnings from the partnership so far,” Robert Crooke, Intel’s Atom chief, said in an interview. “We haven’t given up. These things never happen superfast.”


So whatever Intel was offering just didn't interest many customers.

SPARKS said...

"One word. Netburst."

Excellent! LMAO

SPARKS

SPARKS said...

The time has come. E-tailers are already listing the big, bad, and beautiful i7-980X. That’s the good news. The bad news, much to my chagrin, is the price. It ain’t pretty at 1500 escaroles, ouch! (How come every time I go to upgrade, it costs me 500 bucks more than the going XE rate of $1K?)

My last and most pleasant upgrade to date was the venerable QX9770. It was also $1500. It served me brilliantly clocking along @ 3.84 Gig for two years. It will definitely be VERY difficult to abandon this brilliant piece of hardware.

However, life is short, love affairs are fleeting, the best ain't cheap, and there are no luggage racks on a hearse.

ETA 3/28/10

https://www.techbuy.com.au/p/136946/CPU_INTEL_SOCKET_1366_(CORE_I7)/Intel/BX80613I7980X.asp

SPARKS

Unknown said...

Sparks those prices are in Australian dollars.

According to xe.com today:

1,495.00 AUD = 1,351.71 USD

Wonder if that price is even competitive with the 8 and 12 core G34 Opterons being released shortly...

SPARKS said...

In an earlier post, someone suggested that I take Charlie D.’s report on TSMC’s low yields with a grain of salt. However, the evidence is supporting Charlie’s claim that the yields are tragically low.

I give you this, his latest report on the product release of the GTX 480 (Fermi), which says,

“Tweakers.net claims that there will be an “initial run of 5,000 cards worldwide”

This is for a high end, first release, flagship product! I’ll tell you what: if INTC said there would be only 5,000 pieces of ANYTHING worldwide, I swear I’d sell every share I owned. They had 8 months to get this pig in the air. 5,000!!! How many “pilot runs” (or steppings) does it take to put 5,000 high end, top bin picks out for worldwide distribution? I’m sure there’ll be plenty of low end products in the 4XX series in short order; all the garbage that couldn’t make the top grade.

Sure, TSMC really was successful with immersion.

No sir, I’ll pass on the salt.

SPARKS

http://www.semiaccurate.com/2010/03/02/fermi-strips-secret-location/

InTheKnow said...

Sparks those prices are in Australian dollars.

Which means there is probably also a pretty stiff VAT that you won't have to pay when you buy in the US.

Unknown said...

Lem
Sparks those prices are in Australian dollars.

InTheKnow
Which means there is probably also a pretty stiff VAT that you won't have to pay when you buy in the US.


There's a 10% goods and services tax (GST) in Australia, not too stiff...

SPARKS said...

“Wonder if that price is even competitive with the 8 and 12 core G34 Opterons being released shortly...”

LEM, that’s an interesting question, and I’m sure it’s under considerable discussion in both camps.

The Nehalem architecture has the advantage of executing more instructions per cycle (one more, I think) at considerably higher clock speeds, and the memory bandwidth is in orbit.

The Opterons, on the other hand, scale extremely well, plus they have more cores to do more work, albeit at lower clock speeds. (AMD throttled the chip back due to thermal issues.)

My money is on the Opterons when used with programs that can take advantage of the additional cores. Plus, they are killers with encryption. The server boys will love ‘em, as OS licenses are charged by the socket. Opterons in 4 x 8 and 4 x 12 configurations (48 cores in a 4P server!) should make a very potent piece of hardware. Be advised, these Opterons aren’t cheap, make no mistake.

There’s another player that’s seldom mentioned: the Nehalem EX. This will be a completely different animal on the 4P front, and another wild card in the mix.

The 6 core i7’s will be a no brainer in the workstation market, especially with the dual socket MOBOs I’ve been seeing. Apple is ready to let their 2 socket badboy loose, too. This will sell.

As far as the single socket desktop rig is concerned, there’s simply no contest. With Turbo Boost enabled, the 980X will auto-throttle to 3.6 Gig. (Factory overclocks by INTC, who would have thought!?!) The lunatic fringe, me included, will definitely go higher.

This is all speculative, of course, but it is nice to know that our enthusiasts' daytime drama is alive and well.

SPARKS

Unknown said...

Well, Socket G34 has four memory channels and supports DDR3-1333, so I'd say theoretical memory bandwidth is higher than LGA1366.

The way I understand it: the initial G34 Opterons (Magny-Cours) have two dual-channel memory controllers (the 12 core units are MCMs of two 6 core dice, each with its own memory controller), whereas Nehalem on LGA1366 has a single triple-channel controller. Guess it will be single socket NUMA on G34? Not sure if Bulldozer (Interlagos) will be an MCM with two memory controllers, but the socket looks like it could support a single quad-channel controller.

Socket C32 on the other hand is dual channel.

Should find out more this week..
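In the meantime, here's a back-of-the-envelope peak-bandwidth comparison. I'm assuming DDR3-1333 on all four G34 channels and DDR3-1066 on LGA1366's three (Nehalem's officially supported speed; enthusiast boards run the memory faster, so treat it as a floor):

```python
# Theoretical peak bandwidth = channels * transfers/sec * 8 bytes/transfer.
def peak_gb_s(channels, mt_s, bus_bytes=8):
    return channels * mt_s * 1e6 * bus_bytes / 1e9

print(f"G34 (4 x DDR3-1333):     {peak_gb_s(4, 1333):.1f} GB/s")  # ~42.7
print(f"LGA1366 (3 x DDR3-1066): {peak_gb_s(3, 1066):.1f} GB/s")  # ~25.6
```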

Anonymous said...

While there will always be a need for 4P+ configurations, it seems that with the increase in cores this space will at best be stagnant, if not shrink, over time. I think Intel's strategy of gaining traction in the 1P and 2P space on a new core, and then working on 4P+ with a lag (even if it is 1 year plus), is the smart business approach. I forget the market splits, but 1P and 2P is well over 50% of the market (maybe something like 75-80%).

There will always be demand for supercomputers and high end niche applications, but the vast majority of server configurations are 1P and 2P, and with the increase in core count (server seems to be the one area best able to capitalize on multicore), that trend is going to continue; unless the vision of cloud computing takes hold.

Similarly, while heterogeneous chips might be the future, this is going to be a SW effort like the multicore effort - if you don't have SW capable of using it, how much value will it be? While integrated graphics is probably doable (I consider this a bit different from heterogeneous chips), will different types of "CPU" cores take hold in mass and benefit the average consumer? I can see a tweaked version of turbo boost, where you have one core operating at really high frequencies, others at mid speed, and maybe one or two at low speed for background/non-demanding applications, being of more value in the short and intermediate term.

Anonymous said...

How big is the foundry business at the leading edge?

We know everyone wants to play there, as that is where the glory and money are. At 90nm we had TSMC, Chartered, UMC, SMIC, IBM and many others. Flash forward, and who is playing? You got Chartered gone, SMIC going, going, gone, IBM publishing papers but not much more, and TSMC with chamber matching and who knows what else going on.

Oh yes, you got GF trying to get in. Now you also got Samsung making noise about how they will be there too. Honestly, how many people can afford millions for masks, variability of yield, and the other things at the bleeding edge, plus billions of revenue to justify the need to be on the leading edge? Do you really need 3 companies all jumping for that money: GF, Samsung and TSMC?

You look at Samsung: they have a track record in TVs, phones, DRAMs, flash. They got deep pockets and hard working engineers.

In GF you got no track record, nuff said.

In TSMC you got a train that just ran off the track.

Where oh where will ARM go? Looks like their only hope to beat the clock is to go to Samsung. But they have never done any high volume SOC and don't have any of the infrastructure; then again, if their rise in other businesses is any indication, chipzilla should be pretty scared. But then again, when they say they're going gate-first... should anyone be?

Conclusion: TSMC is falling behind and going gate-last very late, way after INTEL. Figure they are two years away from having high-k. IBM and GF are even further behind, being stubborn and going gate-first. Nothing like NIH and ego to put them ever further behind. Samsung is the unknown factor; they got the money, they got the engineers, and those engineers are known for working 7 days a week, 18 hours a day. INTEL is ahead, and the only company on earth capable of catching them is Samsung.

Tick Tock Tick Tock.

SPARKS said...

“Similarly while heterogeneous chips might be the future, this is going to be a SW effort like the multicore effort - if you don't have SW capable of it, how much value will it be?”


Oh, how true.

Since the end of the Netburst era, Core 2 and i7/i5 have been dominating the CPU performance market worldwide. As it stands today, in almost every segment of the market AMD has competed on a price/performance ratio, save two: high end servers and HPC (high performance computing).

AMD has been forced into the position of niche player. On one end of the spectrum, a value player with a “good enough” solution based on cost (read: cheap); on the other, the ultra high end multi core server/scientific market. I could be wrong, but the gap between the two is widening in Intel’s favor, considering LGA 1156/i5 and Nehalem EX are poised as serious threats on both ends.

Fortunately for the AMD camp, they can argue that a good enough solution will work well and be as good as any INTC machine where graphics cards set the performance bar. They can also claim AMD has the performance crown when national laboratories calculate computational fluid dynamics or simulate nuclear weapon yields on the aging stockpiles in the US inventory. The problem, as AMD is fully aware, is that there’s no money in these niche segments to pay the bills.

So, where does AMD stand on pure performance in each segment, forgetting price?


Top CPU benchmarks in multiple CPU systems:
(8-way) 6-Core Opteron 8435
(Note: AMD fans, be advised, don’t look at other processor configurations, it ain’t pretty)
http://www.cpubenchmark.net/multi_cpu.html


Worlds Fastest Computer:
Cray Jaguar
http://www.top500.org/

That's all, folks.

SPARKS

Anonymous said...

Fortunately for the AMD camp, they can argue that a good enough solution will work well and be as good as any INTC machine where graphics cards set the performance bar.

Agreed, and this is what tends to get a bunch of press. But there are 2 significant problems with this (the press, not your comments):

- The commercial space (both desktop and notebook) is, I think, now bigger than retail, and for 95% of business applications, 1080p and frame rates at high resolutions in the latest 3D game are not really a purchasing consideration (well, maybe it's their 253rd priority).

- Retail is still a lot about branding and remains CPU-centric. Also, those who really want graphics go discrete. Finally, retail is quickly becoming notebook centric, where graphics is less emphasized, and those wanting true price/performance will eventually move to netbooks (when those get a little more capable).

Until Intel gets a bit better on the integrated graphics side (which I think they are getting close on), AMD will get the value (cheap) and price/performance fanboy market. However, once Intel gets solid 1080p capability (are they there on the 32nm Fusion solution?), this point becomes moot. Those wanting FPS and DX11 for games that cost $50 will probably spend $100-200 (or more) on a discrete graphics solution.

Anonymous said...

AMD good enough? Sorry, did AMD make money last quarter?

Did GF make money? Will GF be able to diversify against TSMC and Samsung? How many leading edge foundries are required if, in the end, graphics is a niche, low volume business?

SPARKS said...

Rarely, if ever, do I criticize my beloved INTC. Now, I know they’re never wrong about anything, but sometimes they come up a little short on being right.

I can recall the time when the term “overclocking” was absolutely verboten in the hallowed halls of INTC corporate. Fearsome threats of voiding one’s warranty made it seem almost punishable by LAW!

Times have changed. INTC removed the starched shirts and ties and has EMBRACED the enthusiast community with open arms. It’s like Big Daddy gave the nod of approval, handed over the keys to the Corvette, and said, “Take her for a spin, son.”

Fuck’n a bubba!

Well, not to be outdone by his own generosity, dear old dad has just given the keys to the kids, God bless him. Yes folks, dad will be making lesser CPUs, for the enthusiasts of course, with (get this) UNLOCKED MULTIPLIERS!!!!

Now I’m no marketing guy, not by any stretch, but I know a swing at AMD’s Black Editions when I see one. This is an enthusiast’s dream come true and a poke in AMD’s eye from a kinder, gentler Intel, all in one fell swoop. BRAVO!

http://www.xbitlabs.com/news/cpu/display/20100303151423_Intel_Plans_to_Offer_Inexpensive_Microprocessors_with_Unlocked_Multiplier.html

You see, I always knew it was the enthusiasts who were the most vocal about processor performance. Don’t ever forget: one geek can help 20 “Joe Six-packs” with their computers and purchases, not to mention everyone else in the family and on the block.

SPARKS

SPARKS said...

“Those who don't learn from history are doomed to repeat it.”

ITK, very true, but I think INTC learned that lesson, and they haven’t fallen asleep at the wheel yet.

This link’s for you.

http://www.semiaccurate.com/2010/03/05/intels-ulv-calpella-platform-almost-here/

SPARKS

9-Inch said...

...INTC has 32nm on the street, and it will be in my machine come April with six of the fastest cores on the planet, made with nuclear control rod moderator metals. (I’ve had Hafnium in my machine for nearly two years.) 22nm is in development, with proprietary innovations that AMD/GF/IBM/TSMC can only dream of.

GET IT?

SPARKS


That's the most Intel @ss-kissing I've ever seen on a blog.

Once again, and I'll make it bold for you: Don't be surprised if one of these days you'll see GF overtake intel in process tech AND volume!

;)

InTheKnow said...

Don't be surprised if one of these days you'll see GF overtake intel in process tech AND volume!

Maybe so. Care to back this up with some data? WHY will it happen? Just because you WANT it to won't cut it, but I'll listen to any rational argument you want to put forth.

Anonymous said...

9 inches, did someone tell you you are measuring cm LOL...

Anonymous said...

9 inches, did someone tell you you are measuring cm LOL...

In your wet dreams maybe

9-Inch said...

Maybe so. Care to back this up with some data? WHY will it happen? Just because you WANT it to won't cut it, but I'll listen to any rational argument you want to put forth.

For starters, this was announced today:

Abu Dhabi's ATIC eyes $2-3 bln tech spend

quote: ABU DHABI, March 8 (Reuters) - Abu Dhabi government-owned Advanced Technology Investment Company (ATIC) plans to spend $2 to $3 billion this year in capacity expansion as it eyes a larger share of the global contract chip industry.

Chief Executive Ibrahim Ajami told Reuters on Monday ATIC's stake in Global Foundries, a U.S.-headquartered semiconductor manufacturing company and joint venture with Advanced Micro Devices Inc (AMD) (AMD.N), would increase to 70 percent soon as part of its plans to take over the entire AMD stake in Global Foundries.


It's not a matter of how, but when... ;)

Anonymous said...

^ Really?

How is 2-3Bil in tech spending helping ATIC/GF to catch up when Intel regularly spends in the neighborhood of 5Bil/year? If you really think they will close the capacity gap, they'll need to spend as much as (or more than) Intel... spending 1/2 is only going to widen the gap.

GF has 1.X 300mm factories (the 2nd Dresden fab is not even at capacity) right now and will have another one on line (at capacity) in, what, 2012? This 2-3Bil is simply the buildout of the 2nd 300mm fab as well as the shift to a new technology node. If Intel did not spend a single penny for the next 3-4 years, they'd still have roughly double (triple?) the capacity of GF's *PLAN*.

So when you say "one of these days" I assume you are talking at minimum 5-10 years before the gap even starts closing; if not, you really do not have a clue as to how hard it is to bring capacity (especially new sites) online.

Again 2-3Bil in capacity in today's terms is effectively 1/2 of a fab and is about 1/2 of what Intel invests in capital annually.

9-Inch said...

Again 2-3Bil in capacity in today's terms is effectively 1/2 of a fab and is about 1/2 of what Intel invests in capital annually...

Funny how people in this blog suffer from chronic amnesia.

Let me give you a hint: how much has ATIC invested in Glo-Fo so far (and this in less than a year)?
;)

InTheKnow said...

Funny how people in this blog suffer from chronic amnesia.

Funny how you missed this part of the comment


So when you say "one of these days" I assume you are talking at minimum 5-10 years before the gap even starts closing; if not, you really do not have a clue as to how hard it is to bring capacity (especially new sites) online.


Money can't buy time. There is very little that Global Foundries can do to accelerate the pace of bringing capacity on line.

Intel has 2 300mm fabs in Oregon, 2 300mm fabs in Arizona, 1 300mm fab in New Mexico, 1 300mm fab in Ireland and 1 300mm fab in Israel, for a total of 7 300mm fabs. I'm not counting the 300mm fab in China that isn't certified yet, which would make 8 fabs.

Even if Global Foundries started building the additional 6 fabs they need to bring on line to match Intel's capacity today, they would still be 5-10 years out to get them up and running.

And Global Foundries has yet to prove that they can match fabs to one another. I'm not discounting the APM technology that AMD and now Global Foundries make such a big deal about. I'm just not buying into an unproven concept. You are welcome to buy into vaporware if you want, but until they match two fabs with the technology, I consider it little more than an unproven theory.

So feel free to prognosticate on the industry 5-10 years from now. My crystal ball doesn't see that far.

Anonymous said...

9 inch better get a new life.. LOL

Do you know how much 32nm immersion steppers cost?

Do you know how many layers are immersion on 32nm?

Do you know how many wafers a stepper can output?

2-3 billion gets you very little capacity, if any at all, at the next node. A few hundred wafer starts isn't enough to shake out a process. A few hundred wafers is about enough to run two chambers and discover they aren't matched. Ask TSMC and nvidia how that went.

The Arabs are blowing money on a business they don't understand. In 4 years and 10 billion dollars, they will wonder what a fucking white elephant they got in NY. IBM technology hasn't even been scaled to much more than low-volume boutique processing. Trying to scale it to complex SOCs or the foundry business will be a disaster of epic, billion-dollar losses that will make Madoff look brilliant compared to how this money was pissed away.

Anonymous said...

You are welcome to buy into vaporware if you want, but until they match two fabs with the technology, I consider it little more than an unproven theory.

And when you think even further about it, the theory will not be tested until ~2012, as the 2 fabs in Dresden will be doing different processes. The new fab coming on line (the 200mm refurb) will be primarily bulk Si "foundry" type work, and the existing fab will be SOI (I assume almost 100% AMD CPUs, as despite all the talk about how great SOI is, the foundry biz for SOI is pretty lean at this point in time).

So you really won't have any significant fab-level process tech matching until the NY fab comes up, and even at that point GF may be staggering the node done there and the nodes being done in Dresden... especially if they have any sort of biz sense and realize economically it doesn't always make the greatest sense to turn a new node over in every fab, as depreciation for the capital equipment is done on a minimum of a 4 year cycle. (This is why Intel typically skips a node in each of their fabs and then retools.)

On a side note, you should consider the Chartered capacity, as GF/ATIC has acquired them - it's all 200mm and 150mm but it is not completely insignificant. (Still doesn't change the facts though.)

Until ATIC/GF can get budgets up in the 5Bil/year range they will continue to have less capacity and not even come close to closing the gap. It's not like the price of tooling differs much from one customer to another (if anything Intel may get a slightly better price on capital as they buy more equipment)... so you can almost look at the relative capital expenditures and translate that into relative capacity added.

You do have to look at whether the capital is going for new capacity or replacing an old node, but still, 2.5Bil is not that much when you compare it to what the big boys spend (Samsung, TSMC, Intel).

9-inch strikes me as an AMD fan (which is fine), a guy with strong opinions (also fine), but with a failure of perspective about what facts and news articles mean in a relative sense (the 2.5Bil link being a good example) and a lack of background to understand what they really mean. Hate to paint with a broad brush, but he strikes me as one of the CANzone folks who think they can read some articles from various 'tech' sites like the INQ and Fuddie and make broad analytical determinations based on that. (Reminds me a bit of Dementia and Shari-kook, or when Abin-stupid counted fabs and came to the conclusion that Intel had 1/3 the yield of AMD.)
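To put rough numbers on that capex-to-capacity translation, here's a back-of-the-envelope sketch in Python. The per-year figures are just the thread's round numbers, and the ~$5B-per-fab cost is an assumption (implied by "2-3Bil is effectively 1/2 of a fab"), so treat the output as an illustration, not a forecast.

# Crude proxy: cumulative capex ~ cumulative capacity added.
# All inputs are the thread's round numbers or outright assumptions.
INTEL_CAPEX = 5.0   # $B/year, rough figure cited above
GF_CAPEX = 2.5      # $B/year, midpoint of the $2-3B ATIC figure
FAB_COST = 5.0      # $B per leading-edge 300mm fab (assumed)

gap = 0.0
for year in range(1, 11):
    gap += INTEL_CAPEX - GF_CAPEX
    print(f"year {year}: cumulative capex gap ~${gap:.1f}B (~{gap / FAB_COST:.1f} fabs)")

At these rates the gap grows by about half a fab's worth of capital every year, which is exactly the "spending 1/2 only widens the gap" point.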

Tonus said...

A couple of weeks ago someone posted a similar thought, that GF would pass Intel on process tech because "their owners swim in money."

I pointed out that if it was simply a matter of capital, IBM could've passed Intel on process tech long ago, as they have much greater revenues and therefore more money.

Ask the governments of NY and CA how well the "throw money at it" solution works for fixing problems.

Anonymous said...

GF / AMD / IBM is like watching a train wreck in slow motion. It's been a good 10 years of slow-motion disaster, from strain to SiLK to SOI and the latest, airgaps.

Life is so boring now that Intel has its clock going.. tick tock tick tock

InTheKnow said...

From the land of the blind we get this

Also, Intel considers anything that's not made by Intel "inferior." To date, Intel would say any 64-bit processor is inferior to Itanium; any IGP is inferior to GMA; and yes, any interconnect is inferior to FSB-1600.

And from reality

“To be fair, in the past few years, other than this year, AMD with ATI had a better integrated graphics solution than Intel,” said Mr. Perlmutter.

I'm only left to presume that a public admission, by a very high-ranking Intel representative, that AMD/ATI integrated graphics have been superior to Intel's GMA solution is deemed to be a claim that "IGP is inferior to GMA" in some odd alternate universe.

InTheKnow said...

On a side note, you should consider the Chartered capacity, as GF/ATIC has acquired them - it's all 200mm and 150mm but it is not completely insignificant.

I didn't count Intel's 200mm capacity either. Though in fairness, between the recent Chartered acquisition and Intel phasing out a lot of 200mm capacity, that balance clearly lies in GF's favor.

InTheKnow said...

Another thing 9-inch has missed is the lead time on major process innovations. Intel was working on HK/MG for something like 10 years before they were actually ready to produce it.

It will be a good 8-9 years before any major innovation that GF starts working on now will be ready to bring to market. Until then they are relying on the IBM consortium to lead them to the promised land.

And we see how well that worked out for them on HK/MG. GF is just starting to sample it around 3 years after the consortium announced it.

Again, these aren't things that GF is going to change with boatloads of money. It all takes time.

Anonymous said...

IBM High-K Metal Gate.. is this the one that they announced BEFORE intel a good 3 years ago? I thought they were FIRST; they sure made a lot of noise about being first. I'd think if it was so simple and so good and so easily integrated, they'd have products shipping way before intel, who is clearly using an inferior, more difficult and stupid gate-last process.

That stupid and more complex gate-last process, which was announced second, has only shipped what, a couple hundred million CPUs already, to AMD/GF and IBM's ZERO.

Can't wait to see what more innovative surprises IBM/AMD and GF have to share. I can't wait to see their air gap backend.

More vapor announcements from the three amigos.

Anonymous said...

The clock is ticking on IBM and the consortium...

Another semiconductor manufacturer announced they are going gate-last. That much more expensive, complicated and unscalable gate-last process sure seems to have a lot of new fans; very odd. You'd think if gate-first was so good and robust, cheap and scalable, it would be the choice for more than one generation. A lot of work optimizing gates, metals and etches, only to throw it all away after one node. Something doesn't sound right: was that announcement 3 years ago by IBM real, or a vapor announcement like SiLK and airgaps?

The clock keeps on ticking

SPARKS said...

Hey, Nine Inch,
Anand has a nice full benchmark review of the 32nm i7-980X. He threw in a Pheromone X4 965 and my QX9770 for good measure. My 2 ½ year old chip is still making the AMD dog look lethargic; forget the 980X, there's simply no contest.

When did you say AMD was going to surpass Intel on process and volume?

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3763&p=9

SPARKS

SPARKS said...

You can buy 'em, but you'd better know how to use 'em.

http://www.nikonprecision.com/products/nsr_s620.html

Sparks

Tonus said...

Grab your lobster bib, Sparks. Bit-tech OC'ed the 980X to 4.4GHz on air and 4.7GHz with water.

SPARKS said...

I'm drooling.

SPARKS

Anonymous said...

Latest word is that GF is developing technology in Dresden!

What gives? They're spending billions of Arab and NY citizens' money in upstate NY and spending billions co-developing logic technology in East Fishkill with the dream team consortium, and now they want to do something on the side. You'd think GF was making money hand over fist and just wanted a few more side projects to waste people, capacity and valuable resources on..

No, the real reason they are doing it is that there is something not very right with the decisions being made in NY. Samsung, TSMC and others already figured this one out: it isn't about going first, it's about going last..

Clock is a-ticking!

Tonus said...

If they are spending billions of dollars in New York and getting nothing back, then they sound a lot like the local government. And they're even using tax dollars to do it!

Anonymous said...

Hey Sparks, have you guys sent out your 33% off payment BEFORE the money is spent by Abu Dhabi, or are there actual provisions to tie the money to ACTUAL spending and jobs created in NY? Considering it's about 1mil/job created at full buildout, it would seem the money would be tied to milestones.

Of course, I assume the people who "run" NY do not confuse running NY with running for their next office - which seems to be the main reason for holding office in NY these days (i.e., use your current position, create some ridiculous subsidy or entitlement, or try to take on "evil" corporate America, and then use that as a launching pad for higher office).

I do find it amusing, as the financial crisis and the scaling back of both jobs and income/bonuses in that area is going to hit NY particularly hard on the tax revenue side and make it harder to pay for pork. I'm not saying it's a bad thing, but while the politicians bang on the fat cats in the financial industry, I wonder if they realize they have now become the fat cats and are the ones spending/creating risk way beyond their means (and hopefully what happened in the financial industry will also happen in the political world).

SPARKS said...

“Considering it's about 1mil/job created at full buildout, it would seem the money would be tied to milestones.”

I don't have a clue about what's going on up there. I've had some personal issues to attend to (Mom passed) and I am in the middle of consolidating personal assets and (thankfully) liabilities.

I haven't had the time for as much research as I'd like. I abhor being completely ignorant about any subject. That's especially true with this place.

I suspect that with Luther Forest's location (Upstate) being so far removed from where the real money is (Downstate), no one cares. They haven't pissed off any environmentalist groups, people are working, the Unions are quiet, and given the scale of such a project, or future projects, everyone will just STFU, i.e., big projects = jobs and prosperity.

The same thing happened here on Long Island (until reality set in) with the Shoreham Nuclear Facility (3B buildup, 3B teardown); try to factor the cost per job on that one (I dare you, in 70's money).

Comically, the State's answer to our fiscal dilemma is (get this) a SODA TAX! Yes, our illustrious Governor, in a brilliant move of altruistic protectionism, is going to save the fat kids from themselves. He wants to tax all carbonated soft drinks statewide.

And he’s never seen a fat kid! Can you imagine what they’re telling him?

Seriously, revenue is revenue, and being in the business myself, I know that when most folks see big construction projects (and people working) it's almost always perceived as a positive thing. Little do they know that the place, as big as it is, will be dominated by robots that cost 20 to 30 million apiece.

Well, some local yokel has got to sweep the floors, clean the toilets, and take out the garbage, right?

SPARKS

SPARKS said...

Since I've had a little time to read some reviews of what's being hailed as a six core "monolithic monster", a few things occurred to me.

Looking at this marvel of engineering, I noticed that the die has two distinct 3-core sections, literally mirroring each other.

http://www.hexus.net/content/item.php?item=22801&page=4

It seems, given INTC's current penchant for architectural modularity, they could have easily split this processor in two, thereby competing with AMD's three core chips!

Hmmm, whatever happened to that huge competitive market gap/niche the AMD X3 was going to fill? Well, that theory went the way of the Dodo bird and the mastodon. The X3 (failed quads) couldn't compete with the 2-core E8500 rockets, nor could it compete with the Q6600 crazy clockers, and now the i7-870 and i5-750 are simply blistering at warp factor 8. What happened?

CHEAP, failed, bottom-bin garbage, that's what happened. I told you AMD was marketing garbage last year.

Another thought. Since the new 6-core goliaths (server chips in the guise of enthusiast chips, sans one memory channel of the EX, which reportedly has 4) are being produced en masse, why are there no 3, 4, or 5 core "failures" being marketed? Could it be INTC yields are THAT GOOD, and the losers are simply tossed? Bravo, throw out the bastards, I say.

A third thought. Since TSMC's dismal failure with the FERMI architecture and process at 40nm, and considering INTC's obvious success at 32nm, merrily pumping out six core gorillas, one has to ask WHAT THE HELL IS GOING ON IN THE REST OF THE PROCESS WORLD?

Hey, I love to watch CSI Vegas like most folks, and just like those characters, I'm following the evidence. The case doesn't look good for the rest of the industry. It's not a case of "who done it", it's a case of who ain't doin' it-----

-----and INTC is coming up ROSES! No decomp here.


How’s that for “@ss licking”.

SPARKS

Anonymous said...

Sparks - sorry to hear about your mom.

I doubt you will see 3 or 5 core solutions unless yields are truly abysmal, as it makes no business sense (similar to how I continued to hammer on the business sense of AMD doing tri-cores).

I don't see 5-core chips at all, because if you are going to down-bin a failed hex core, you might as well bin it down to a quad, where you already have a market segment; this way you're not introducing another pricing segment that would put pressure on both the quad and hex markets. I also find it unlikely there'd be much of a technical benefit - those feeling the need to go for a 5-core over a quad might as well go hex-core or live with a quad.

I don't see Intel going tri-core out of hex-core chips either... that's probably a tiny amount, and again you don't want to crowd your other market segments (dual, quad and integrated graphics would all compete with the tri-core space).

As in the case of AMD, it's not always simply about whether you can make any money off of it - just because you can sell them doesn't mean it's a good thing, as it may have unintended consequences on either demand or pricing of the other products you sell. Understanding this is the difference between a business and an engineering company - you not only have to ask whether you can do it, but also whether you should do it.

See: monolithic quads on 65nm, tri-core, and to some extent HyperTransport and x64 capability (which both seemed, in my view, too early for most of the market). And what you saw in most cases was AMD boasting about the technical accomplishment, while Intel waited until some of these things made business/economic sense.
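The segment-crowding argument above can be made concrete with toy numbers (every price and volume below is made up for illustration): a middle SKU can move the same total units and still shrink revenue, because it mostly pulls buyers down from the top bin and up from the bottom one rather than creating new demand.

# Toy cannibalization math; all prices and volumes are hypothetical.
without_middle = 1_000_000 * 200 + 100_000 * 600   # quads at $200, hexes at $600

with_middle = (950_000 * 200    # some quad buyers step up to the new SKU...
               + 90_000 * 350   # ...a $350 middle bin absorbs them...
               + 60_000 * 600)  # ...and cannibalizes hex sales on the way

print(f"units sold: {1_100_000:,} in both lineups")
print(f"revenue without middle SKU: ${without_middle:,}")
print(f"revenue with middle SKU:    ${with_middle:,}")

Same 1.1 million units, roughly $2.5 million less revenue, which is why "can we sell it?" and "should we sell it?" are different questions.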

SPARKS said...

“(similar to how I continued to hammer on the business sense of AMD doing tri-cores).”

Oh, I remember it well. Everything you said, including the wide array of products within a narrow price band, came to pass. It was, as the numbers go, based on AMD’s 2007 and 2008 financials, catastrophic.

As I recall, I said they'd sell anything as long as they were getting revenue, especially with that ridiculous marketing nonsense about unlocking the fourth (busted) core.

Obviously, INTC has no such problems. Dare I say they never did? If one follows the logic of Occam's razor, one would have to conclude that INTC's yields are very good. Considering INTC's 32nm successes, in contrast to the other industry players' press releases and PowerPoint roadmaps, those players' successes still remain to be seen. So far, all evidence is to the contrary.

Walk softly and carry a big 32nm 3.33 GHz six core chip.

(Thank you, condolences greatly appreciated)

SPARKS

Orthogonal said...

Sparks, my condolences.

The only chip that has been publicly announced to have potential harvesting for defect chopping is the Westmere-EX chip, the follow-on to the 45nm Nehalem-EX (aka Beckton). Being an MP chip, it is obviously very niche and high-end, and will not likely disrupt the product lineup/pricing.

Any other rumors are pure speculation.

SPARKS said...

(Thanks Orthogonal.)

Here's a lovely made-for-the-media press release for our pleasure. They're talking tough about this bad boy, and I believe them.

I figured I’d throw in a quote for good measure.

“Intel's current generation of Xeon processors already represents some of the fastest silicon you can buy, and yet the company's forthcoming Nehalem-Ex-based Xeons are being touted as the single greatest generational jump in its history.”

Nice job, all. Let the HPC wars begin.

Hoo Ya

SPARKS

http://www.engadget.com/2010/03/08/intel-readies-8-core-nehalem-ex-processors-for-a-march-launch/

SPARKS said...

Good news for ole’ SPARKS, folks! Every so often in life, you pull out of a miserable winter, crawl away from some rotten circumstances, and the stars and planets line up in such a way that a small thing happens to bring a little joy.

INTEL's decision to sell the i7-980X at 1K is a prime example.

ZIP ZOOM FLY has just listed the BX80613I7980X for $1069. (For all you folks that work for INTC and get your chips at the company store (or for free), the BX designation signifies "Boxed Retail", i.e., for us civilian slobs.)

Apparently, INTC is not putting the squeeze on the enthusiast monkeys. (Like they did with the QX9770 @ 1500 bucks. Oh, how we paid dearly for the 1600 native FSB.) Frankly, truth be told, it was worth it.

They could have done the same thing this time around, with their only competition being the 975EE at an even 1K. They could have easily turned the thumbscrews and charged an extra 500 bucks for the lower thermals, minor tweaks, two extra cores, fat cache, and 32nm goodness. They didn't.

Oy Vey, such a bargain!

Now where did I put that New Egg Preferred Card?

SPARKS

http://www.zipzoomfly.com/jsp/ProductDetail.jsp?ProductCode=10012312

Tonus said...

Sorry to hear about your mom's passing, Sparks.

I saw one of the boutique PC builders (Cyber Power?) offering the 980X with their systems. Sometime next year I'll have dropped my debt to zero, and will consider my options at that point (not that I need to hurry, my current system chugs along just fine). Maybe by then the prices on 4GB DIMMs will have dropped a bit...

SPARKS said...

Thanks TONUS.

I must say, I can’t blame you for staying with your current 920 system. That’s the upgrade I reluctantly passed on when you upgraded. I had to overclock the QX9770 to 3.84 GHz to even get close, and it was still no contest.

Besides, you're still way ahead of the curve; you've got the 1366 platform. A BIOS change, a CPU swap, and presto, you're in.

As you know, I have no such option. It ain’t pretty, and it’s gonna be tough keeping this “upgrade” under the wifey radar. (She’ll push for a trip or a cruise in September as an excuse to even the score.)

Life’s a bitch, and then you marry---------------------


------------ Hardware freak.

SPARKS

SPARKS said...

Ladies and Gentlemen, we have a winna!

So, it seems that ole' Sparks is not alone in his craziness for the i7-980X. It seems like the rest of the world can't get enough of these $1K-plus monsters.

See, it ain't about money. It's about performance. Hot women, hot cars, hot boats, hot bikes, hot sound systems, hot chips, it don't matter, as long as it's the best.


SPARKS



http://www.fudzilla.com/content/view/18155/1/

InTheKnow said...

I found this article to be rather amusing. There is so little real detail that it renders the entire article suspect. We have no idea what the author was even looking at to determine that he saw

multiple "zero-defect wafers", i.e. wafers that had 100% yield. We saw multiple 100% yielding wafers with commercial products as well as wafers with less than 10 defects.

The implication that the author was allowed to wander around and look at whatever he wanted is also amusing. No fab tour is unescorted or allows total access.

But the real icing on the cake was the post on another board that led me to this article. The poster attributed these great yields to APM. I'd really like to see the author of that post explain how APM can adjust tool parameters to eliminate defects (a necessary component of a perfect wafer).

Anonymous said...

GF = FUD

Got no HighK/Metal G
Going first but in last

That article was laughable. To give a tour and let the press publish such trash only shows how stupid or desperate the management is at GF. Discarded dies.. WTF

Thanks ITK, that was a good laugh.. AMD has sunk so low and failed so hard that they aren't even entertaining to make fun of anymore.

The clock has run out for them; it ain't ticking for them or anyone in the Fishkill consortium. Someone needs to wind the clock for a little more tick tock.

Tonus said...

Someone posted that link at Ace's a few days ago, and this was Paul DeMone's reply:

"In this context a defect is not a crystal lattice imperfection but rather a macroscopic physical flaw that renders a device non-functional.

It is not uncommon with a well engineered device in a mature process produced in high volume to
occasionally come across a wafer with every single die passing probe test. It is like winning the
lottery or getting a hole in one in golf. Reportedly Andy Grove had a defect free 486 wafer he kept
on display in his office. Test guys hate them because the corner cases (100% yield and 0% yield)
usually indicates a problem with the test software or test hardware setup and requires immediate
investigation."


I suspect that if GF could consistently produce defect-free wafers, or if they were producing an unexpectedly high number of them, we'd be hearing a lot more about it, as it would be a very unusual phenomenon.

SPARKS said...

Theo Valich, who used to write for the INQ, has gone on to spew trash in his own little corner of the web.

He is the same jackass who was writing about how wonderfully AMD was doing back in 2007 and 2008.

The name of his website is BSN (Bright Side of News).
Given the context of the article, and who it's coming from, I suspect another name for the BS Network may be a bit more appropriate.

Also, he worked for Jon Peddie Research for, get this, 1 (one) month (2008). I don't know about you guys, but I wouldn't be bragging about that, let alone put it on my resume.

However, one question, has a 200mm or 300mm wafer with ZERO defects EVER been produced by anyone? Given the complexities of wafer production, is this even possible? It would seem statistically all but impossible.

SPARKS

InTheKnow said...

I suspect that if GF could consistently produce defect-free wafers, or if they were producing an unexpectedly high number of them, we'd be hearing a lot more about it, as it would be a very unusual phenomenon.

If you are a foundry and are producing numerous perfectly yielding wafers, you aren't waiting for some website to spread the word for you. You are showing them to any and every customer you can find.

If I can yield better than my competition, I can charge less for the same product and guarantee you get your parts on time. If I can charge less and deliver on time, I can fill my fab to capacity. And a full fab is a profitable fab.

So I have to believe that if GF were really cranking out multiple defect free wafers, we wouldn't have to wait to hear about it.

InTheKnow said...

However, one question, has a 200mm or 300mm wafer with ZERO defects EVER been produced by anyone?

I've been told that Intel has produced at least 2 perfect 200mm wafers. Given the number of 200mm wafers Intel has produced, that should tell you something about the likelihood.

To the best of my knowledge, if GF has indeed produced a perfect 300mm wafer, it would be the first.

pointer said...

To Sparks,

sorry to hear about your really bad news.


InTheKnow said...
However, one question, has a 200mm or 300mm wafer with ZERO defects EVER been produced by anyone?

I've been told that Intel has produced at least 2 perfect 200mm wafers. Given the number of 200mm wafers Intel has produced, that should tell you something about the likelihood.

To the best of my knowledge, if GF has indeed produced a perfect 300mm wafer, it would be the first.


There were 0-defect wafers in Intel's 300mm fabs... as I read in some intranet information a few years ago, but I am not sure they are so frequent that one just happens to turn up whenever someone visits the fab :). I don't work in the fab, so I don't know how common it is at Intel...

Anyway, having a 0-defect wafer is something to boast about if that 0-defect wafer correlates to higher yield (without considering assembly defects) after assembly and test.

If it does not, then it is better to catch those bad dies up front to reduce unnecessary assembly cost.

Anonymous said...

Who cares about perfect wafers?
What matters is who has the most Die Per Wafer.
That goes to those on the most advanced nodes, as you get more transistors/cm^2 and make them cheaper.
That results in cheaper dies and more profit.

Last I checked, INTEL is ramping 32nm and GF is still dreaming about it.

Guess who is making money and who is making FUD
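The die-per-wafer claim is easy to sketch with the standard first-order approximation (a rough model, not any fab's actual shot map, and the die sizes below are made up):

import math

def gross_dies(wafer_diameter_mm, die_area_mm2):
    # Usable wafer area over die area, minus a correction for edge loss.
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

# Halving die area (roughly what a full node shrink buys you) about
# doubles the candidates on a 300mm wafer:
for area in (200, 100, 50):  # mm^2, hypothetical die sizes
    print(f"{area} mm^2 -> {gross_dies(300, area)} gross dies")

This only counts candidate dies; yield then determines how many of them are sellable.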

InTheKnow said...

There were 0-defect wafers in Intel's 300mm fabs... as I read in some intranet information a few years ago, but I am not sure they are so frequent that one just happens to turn up whenever someone visits the fab :). I don't work in the fab, so I don't know how common it is at Intel...

I've searched high and low for evidence of a perfect 300mm wafer and haven't been able to find any.

I do know this. The production of a perfect wafer is rare enough that when a fab produces one, the lucky company typically shells out a few bucks per employee to provide a memento.

I have a friend who used to work in Fab 20 in Oregon. According to him, in the 10+ years that Fab 20 was making 200mm wafers, they never produced a perfect wafer.

This is why I have problems with the BSN story. It has nothing to do with whether it is TSMC, GF or anyone else.

I wouldn't believe it if the report said Intel was producing multiple perfect wafers either. With hundreds of process steps and hundreds of dies on the wafer it isn't hard to figure out the odds of producing a perfect wafer are long odds indeed.
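Those long odds are simple to illustrate: if each of N dies on a wafer yields independently with probability Y, the whole wafer is perfect with probability Y^N. This is a toy model with made-up numbers (real defects cluster, which actually improves the odds a bit), but it shows why a perfect wafer implies a very high baseline yield:

# Toy perfect-wafer odds: independent per-die yield Y, N candidate dies.
for y in (0.90, 0.95, 0.99):
    for n in (200, 400):
        p = y ** n
        print(f"Y={y:.0%}, N={n}: P(perfect wafer) = {p:.2e} (~1 in {1 / p:,.0f})")

Even at 99% per-die yield, a 400-die wafer comes out perfect only about once in 56 wafers; at 95% it's roughly a one-in-a-billion event.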

SPARKS said...

(Thanks much, Pointer)

“isn't hard to figure out the odds of producing a perfect wafer are long odds indeed.”

And how! The closest I get to anything that resembles a wafer is a salami sandwich, and even I figured that one out.

How far will the shrinks go before you boys will need to factor in Alpha and Beta particles (Cosmic Rays) statistically affecting yield?

Perhaps this is a bad joke, but given the densities of today's devices, the number of devices per wafer, plus the number of steps involved, perhaps the perfect wafer is the holy grail of engineering by any standard.

Hell, if I were a corporate big shot, I’d want that sucker hanging on my wall, in my office, untouched, and in one piece, center stage.

“Oh yes, that? It’s just a little gift from my boys in engineering, a perfect wafer. Can I get you a cup of coffee or anything?”

SPARKS

Anonymous said...

Alpha and Beta particles matter for error rates in the larger caches and do factor into DPM for large servers where fault tolerance matters.

For yield they have no impact.

As to perfect wafers, the big boys worry about mean yield, sigma, and maximizing good die per wafer.

People without profits, technology or other good news look to divert attention from their busted business plan and talk about irrelevant things like perfect wafers.

The clock has run out on AMD and GF

Tick tock Tick tock

Anonymous said...

Sparks - here's a crazy thought.... do you really want perfect yield?

Ideally the answer is "why wouldn't you?"... but then you have to ask at what cost?

Yield is clearly one of the bigger lever bars from a cost perspective, but at some point you have to ask what you'd be sacrificing to get there... Are you running machines with much slower processes that require you to purchase more machines to get the same # of wafers out, or are you adding extra cleans or other processing steps which require more capital tooling (and does that cost outweigh the cost benefit of the yield improvement)?

Or maybe you are sacrificing performance to get that yield... make the control limits wider so everything falls within the limits, but that may mean lower overall performance in terms of speed or power consumption.

Or perhaps you are sacrificing die size to get the perfect yield (maybe you are loosening the pitch at some of the critical litho steps or building in redundancies), and then the question becomes whether the extra few good dies offset the fewer dies per wafer due to the larger die size.

Like everything else there are a myriad of tradeoffs and while yield certainly is one of the largest cost levers, at some point there are diminishing returns where the next cost of improving the yield may actually outweigh the benefit.
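That diminishing-returns point can be put in numbers with a toy model (every figure below is hypothetical): an added clean or inspection step costs money on every wafer, while each successive step tends to buy less yield than the last.

# Does buying more yield pay for itself? All numbers are hypothetical.
WAFER_COST = 5000.0   # $ per processed wafer (assumed)
GROSS_DIES = 400      # candidate dies per wafer (assumed)
STEP_COST = 100.0     # $ per wafer for one added clean/inspection step (assumed)

def cost_per_good_die(yield_frac, wafer_cost):
    return wafer_cost / (GROSS_DIES * yield_frac)

base = cost_per_good_die(0.90, WAFER_COST)
for gain in (0.05, 0.02, 0.005):  # successively weaker improvements, one at a time
    new = cost_per_good_die(0.90 + gain, WAFER_COST + STEP_COST)
    verdict = "worth it" if new < base else "not worth it"
    print(f"+{gain:.1%} yield for ${STEP_COST:.0f}/wafer: "
          f"${base:.2f} -> ${new:.2f} per good die ({verdict})")

The same $100 step that pays for itself at +5% yield is a money loser at +0.5%, which is the whole diminishing-returns argument in one loop.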

SPARKS said...

“at some point there are diminishing returns where the next cost of improving the yield may actually outweigh the benefit.”

And---------------


“Ideally the answer is "why wouldn't you?"... but then you have to ask at what cost?”

Perfect, absolutely magnificent; this is the point where engineering meets art and practicality yields to desire.

It’s like a trophy wife. She’s great at parties, dinners, and functions with colleagues, friends, and coworkers. She drops jaws at vacation spots. Hell, she’s got enough looks for the both of you, even on your worst day.

However, the upkeep and temperament make one question the value of it all. But just like that perfect wafer, if it just happens and I get the chance to nail one on my wall, I'll be damned if I don't show it off.

The way I see it, we're talking the ultimate engineering ego pump here. To hell with practicality; a perfect wafer by happenstance has got to be the ultimate definition of big-time engineering chops.

“Psst, hey, look over there, that’s the shift that got the perfect wafer!”





“Sparks - here's a crazy thought.... do you really want perfect yield?”


The flip side, as a shareholder, corporate bean counter, or production manager, you ask? Hell no, I'd say -----

“That’s nice, enough fun and games now let’s get back to work.”





“Alpha and Beta particles matter for error rates in the larger caches and do factor into DPM for large servers where fault tolerance matters.”

Brilliant, I had no idea. Amazing: microelectronics may have gotten to the point where "God doesn't play dice with the universe", but you boys are playing marbles with God.


SPARKS

SPARKS said...

"Or maybe you are sacrificing performance to get that yield.."

What a terrible thought!

SPARKS

InTheKnow said...

Like everything else there are a myriad of tradeoffs and while yield certainly is one of the largest cost levers, at some point there are diminishing returns where the next cost of improving the yield may actually outweigh the benefit.

Excellent point. At some point the game does cease to be worth the candle.

However, if yields are uniformly high, then the probability of getting that perfect wafer is greater. So in that sense I view a perfect wafer as an indicator of a high baseline yield level. The thing you obviously can't tell is just how much of an outlier the wafer is from the normal distribution.

SPARKS said...

Oh, how I remember all the hoopla and fanfare surrounding Barcelona a few years back. 40% faster (according to simulations, as I recall), monolithic engineering brilliance, ad nauseam; Intel merely pasted 2 dual core chips together, "FOUL" they cried!

My, have times changed. AMD is now using pasties in its Mangy Coors, while INTC quietly introduces the Nehalem-EX monolithic eight core Xeon 7500 series processors with nary a peep from all the web droids and naysayers.

Eight cores on a single die scalable to a maximum of 256 chips per server. This magnificent piece of engineering is simply brilliant, and absolutely stunning by any measure.

KUDOS to all involved, well done, and brilliantly executed.

SPARKS

Anonymous said...

AMD and them vaporware measurbators are so yesterday. Like I've said years ago, Hector Ruinz pissed it all away, wasting all of AMD's time, money and energy suing and buying ATI versus focusing on design and process.

Look where they are now; the clock has run out. They castrated themselves and sold their manhood to a bunch of Arabs. AMD is nothing but a castrated, irrelevant company.

The only thing of interest these days is when ARM will fall, or whether INTEL will trip over itself because it can't get past chasing performance and focus on power-efficient CPUs for the low end.

The clock is ticking, but it has run out on AMD and GF

SPARKS said...

Oh, by the way, about that Xeon 7500? Here’s where the rubber meets the road, like shattering world records.

http://blogs.technet.com/enterprise_admin/archive/2010/03/31/new-xeon-shows-muscle-of-windows-server-2008-r2.aspx

(LEX, I hate to say this, but you may be right; I think Itanium's time is at hand.)

SPARKS

SPARKS said...

I thought perhaps, if anyone is interested, I'd note that AMD has a counter to the 980X. It's built on a 45nm process. The news hasn't been too clear on the size of the die or the process details; I'll assume it's on SOI.

The good news for AMD fans is that it's backwards compatible with older motherboards, provided they fall within the 125W power envelope.

The not-so-good news is that only three cores will turbo boost, and there are going to be several incarnations of AMD's six pack. The ones that didn't make the high-end cut, perhaps? I'm guessing, but it will be a considerably larger die.

The great news is they will probably be much cheaper than INTC’s six pack, and given AMD’s scalability successes, they will probably run well in multithreaded apps.

Be advised this is still a paper launch, probably intended to deter INTC sales of its very expensive XE and motherboard/memory combo.

In short, not bad, not bad at all; they are obviously burning the midnight oil and are still in the game.

As for the numbers, we will just have to wait and see.

http://www.maximumpc.com/article/news/amd_talks_about_phenom_ii_x6%E2%80%99s_turbo_core

SPARKS

InTheKnow said...

The not-so-good news is that only three cores will turbo boost....

Yeah, I loved this one. AMD's supporters have been quick to call turbo boost a gimmick. Now that AMD has gotten on board with the gimmick I wonder how long it will be before it becomes the greatest feature since sliced bread.

First it was MCM, now it is turbo. It is clear to me that absolutely no innovation is coming from Intel.

Unknown said...

ITK said...
Yeah, I loved this one. AMD's supporters have been quick to call turbo boost a gimmick. Now that AMD has gotten on board with the gimmick I wonder how long it will be before it becomes the greatest feature since sliced bread.

The consensus at AMDZone forums still seems to be that turbo is a gimmick, no matter who implements it. Some of us were hoping AMD's implementation would be better, but it seems it's similar to Intel's at this point (in that software can't readily discern whether turbo is engaged or not). We probably won't really know until a "T" model Phenom comes out.

Personally I see value in turbo for desktop usage (I consider laptops almost like desktops in this case; power just matters a bit more). There are loads of single-threaded workloads still around, but also increasingly more parallel workloads. Turbo on a multicore processor lets one get the best of both worlds (a high-clocked single/dual core and a lower-clocked multicore).

For servers turbo makes no sense. If you need turbo in a server CPU, didn't you buy the wrong CPU for your workload? IMO anyway. (Disclaimer: I've never worked with servers, in any data centres etc.)

Having said all that, I agree with Kaa from a while back where he said that turbo modes will become more advanced and integral to the function of CPUs. Personally I'm expecting sub-core (module) level turbo to appear with Bulldozer. For example, one or both of the int cores, or the FPU core, could clock up depending on chip power usage etc. (wild speculation with no evidence to back it up)
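Whatever the vendor differences, the core idea behind all of these turbo schemes fits in a few lines: opportunistically step the clock up while measured power and temperature leave headroom, and step back down when they don't. The sketch below is a cartoon of that idea only; the 100MHz bins, TDP, and thermal limit are invented numbers, not any vendor's actual algorithm.

# Cartoon of an opportunistic turbo loop; all limits and step sizes invented.
def turbo_step(mhz, base, ceiling, power_w, tdp_w, temp_c, tmax_c):
    if power_w < tdp_w and temp_c < tmax_c:
        return min(mhz + 100, ceiling)  # headroom left: step up one bin
    return max(mhz - 100, base)         # over a limit: back off toward base

mhz = 3200
for power, temp in [(80, 60), (95, 65), (110, 72), (140, 85), (120, 78)]:
    mhz = turbo_step(mhz, base=3200, ceiling=3600, power_w=power,
                     tdp_w=130, temp_c=temp, tmax_c=95)
    print(f"power={power}W, temp={temp}C -> {mhz}MHz")

Note the single-threaded case this helps: with most cores idle, the power reading stays low and the active core gets to hold the higher bins.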

Anonymous said...

I get the need for AMD to do turbo, but are they seriously trying the price/performance game in the high-end desktop market (again)?

People buying 6 cores are probably a little less price sensitive. If they price this too low... it does what tri-core did - it puts yet another pricing segment in an already crowded lineup and gives AMD near-zero pricing flexibility (quads will have to be lower, tri-cores even lower and dual cores lower still).

I think the folks who would buy 6-core chips will buy them at higher prices, and I'm interested to see if AMD has any business sense or has learned anything from tri-cores.

If they are appealing to the upgrade crowd, they can get away with charging a little more than a simple price/performance comparison to Intel would suggest, as the people upgrading from an old AMD chip get value from not having to buy other system components. If they are looking for someone buying a new system, these are early adopters who are usually willing to pay a premium to be first.

People don't realize, desktop quad is still not a huge market overall... to set the bar low on what will be a niche 6-core desktop market is, I think, foolhardy, as it will just pressure your higher volume segments... It's better to lose out on an occasional 6-core sale here and there than to depress the pricing of your higher volume segments.

InTheKnow said...

The consensus at AMDZone forums still seems to be that turbo is a gimmick, no matter who implements it.

Lem, do you remember the outcry against MCM from the AMD supporters when Intel bolted 2 single and then 2 dual core dies together? That was also considered a kludge and a gimmick. It wasn't a "real" dual/quad core.

Now AMD has bolted two 6 core chips together to make a 12 core chip and there is no complaint. Somehow it isn't a "fake" chip.

In view of the sudden shift of attitude on that front, perhaps you can understand my skeptical belief that I'll see a similar shift taking place regarding turbo.

I'm just looking to the past to predict the future.

Unknown said...

ITK, yes, I do remember the outcry over MCM quads such as Kentsfield, vividly. At least with Magny-Cours, both dies are connected on-package by high speed, low latency HyperTransport, and each die has its own memory controller (creating single-socket NUMA; maybe we'll see a true quad channel controller with Bulldozer). The performance scaling of this arrangement is far superior to cramming pairs of cores onto a front side bus and external memory controller (with the worst example being Dunnington, I guess).

Luckily for Intel, the kludge generally worked well enough. Meanwhile AMD dropped the ball with the original K10, which made the kludge even more acceptable, relatively speaking. There was a lot of grasping at straws with Barcelona/Agena vs Kentsfield et al, which I believe is a significant reason why the monolithic chip argument was played as hard as it was (by AMD fans mostly, I guess; AMD itself probably could show multithreaded workloads where the monolithic design would really outperform the MCM/FSB arrangement).

About Turbo: it seems the reason a few of the Zone members don't like Intel's turbo but like AMD's is that they think (based on slides) that AMD's turbo won't let the chip exceed rated TDP and is guaranteed to operate even with the stock HSF. I think that's it in a nutshell anyway, but we'll see when the chips are actually reviewed. Until then none of us can be sure how it behaves in reality.

SPARKS said...

Every once in a while some of the worst site monkeys come up with a report that makes you think twice about giving them a complete right off.

I'm referring to Theo Valich. He has done quite a nice job reporting on TSMC's and NVDA's woes with the new 40nm Fermi. He is almost marching in lock step with Charlie D. (Where they get these inside numbers one can only guess.)

However, if any of this is to be believed, TSMC's and NVDA's situation is even uglier than previously reported. There is a multitude of details such as yields, cost per wafer, etc. If any of you process whiz kids feel so inclined, please check the numbers to see if they hold water.

If they do (at the risk of being redundant)------------------------

WHAT will they do at 32???

570mm^2 die????

$208 per chip???

LESS than 25 winners per 300mm wafer???

8000 cards worldwide??




http://www.brightsideofnews.com/news/2010/1/21/nvidia-gf100-fermi-silicon-cost-analysis.aspx

SPARKS

SPARKS said...

Duh, that "write" off with a "w".

SPARKS

Tonus said...

"There was a lot of grasping at straws with Barcelona/Agena vs Kentsfield et al, which I believe is a significant reason why the monolithic chip argument was played as hard as it was"

I think that it was played as hard as it was because Intel did one thing and AMD did the other, and fans split along those lines. From your description and what I recall of discussion at AMDZone, the line of reasoning seems to be that before, MCM was a bad idea and monolithic cores were the way to go. Now, MCM is okay because AMD did it right.

As I've said in the past, I prefer to study this stuff in hindsight, in large part because I do not have the technical depth of people like Guru. But it appears to me that a lack of technical know-how doesn't stop a lot of would-be experts from making analyses and predictions that seem to be based more on biases and desires than on actual technical merit.

This is not such a big deal on forums and most blogs, as it amounts to nothing more than trash talk. But it also seems to permeate any number of supposedly legitimate industry reporters (George Ou comes to mind, and see Sparks' comment above regarding Charlie D and Theo Valich). When the quality of reporting is barely above what you get in the comments section on a blog or review site, that's just embarrassing.

InTheKnow said...

Every once in a while some of the worst site monkeys come up with a report that makes you think twice about giving them a complete right off.

Yeah, other than the fact that there is no accounting for packaging cost in those figures. Even though your wafer costs $5k to manufacture, you still have to pay to test each chip, dice the wafer, and assemble the good chips. Those costs may not be negligible. There is a lot more to the equation than just the cost of manufacturing the wafer.

In short, I'd say it was a pretty flaky analysis.
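For what it's worth, the article's headline numbers are at least internally consistent on the silicon side, which is the only side it counts. The sketch below just divides the claims through and bolts on an invented test/assembly adder; none of these inputs are confirmed figures.

# Sanity check on the reported Fermi numbers; inputs are the article's
# claims taken at face value, not confirmed data.
wafer_cost = 5000.0   # $ per 300mm wafer, as reported
good_dies = 24        # "less than 25 winners" per wafer, as reported
silicon = wafer_cost / good_dies
print(f"silicon-only cost per good die: ~${silicon:.0f}")  # ~$208, matching the claim

test_assembly = 30.0  # $ per part, purely illustrative
print(f"with an assumed ${test_assembly:.0f} test/assembly adder: "
      f"~${silicon + test_assembly:.0f} per packaged part")

So even granting the article its numbers, the $208 figure understates the cost of a sellable part, which is the point about packaging costs above.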

Anonymous said...

First it was MCM, now it is turbo. It is clear to me that absolutely no innovation is coming from Intel.

Well, I believe Intel will copy AMD's methods in its x86 processors, like separate FPU addressing. Even some of Intel's new technologies were adapted from VIA processors, like on-chip encryption acceleration and Intel's Turbo Boost (VIA did adaptive overclocking in its VIA Nano, raising the frequency if the CPU temperature was cool enough to allow it).

InTheKnow said...

Well, I believe Intel will copy AMD's methods in its x86 processors, like separate FPU addressing.

They might well do that.

Even some of Intel's new technologies were adapted from VIA processors, like on-chip encryption acceleration and Intel's Turbo Boost

Yeah, and AMD didn't invent the concept of HyperTransport-style interconnects. Intel's failed Timna came before AMD's implementation, and I believe it was DEC that had an implementation before that, so what's your point?

To be clear, I think it is foolish to say that any of the players aren't innovating. The length of the design cycle makes it very hard to tell who did what first. There are a lot of very bright people out there all working to solve the same set of problems. No one should be surprised by overlapping design approaches. All we really know for sure is who got any specific implementation to the market first.

And the bottom line is as long as all the players keep implementing the best ideas we all win.

Anonymous said...

@InTheKnow, thanks for your balanced comment.

SPARKS said...

Naturally, because of ITK's mention of Timna, I am absolutely compelled to offer a brief historical perspective on this ill-fated chip that was leaps and bounds ahead of its time, perhaps to simply bore you all half to death.

Actually, it was nearly TEN years ahead of its time. By CPU/product life cycle timelines, that's an eternity.

Graphics, memory controller (allow me to repeat that, MEMORY CONTROLLER), et al, on a single die.

Timna’s first and largest problem, in my opinion, was RAMBUS. Going forward with that memory standard was like pissing up a rope. The major memory players, the same ones who rule the roost today, wanted no part of it.

Consequently, RAMBUS has been making a living suing them for patent infringement ever since. They won every battle. (Yes, I know XDR memory is another way they make money.)

Incredibly, vendors weren't interested in the Timna solution either. They didn't want to be stuck with an all-in-one chip where they couldn't mix and match components to deliver different performance solutions aimed at specific markets. Competitive edge, perhaps?

Finally, INTC tried to redesign the memory controller, ran into some issues (read: serious bugs), and cancelled the whole enchilada in 2000.

Subsequently, almost 7 years later, none other than Wreaktor Ruinz came up with a brilliant idea called Fusion! This idea will revolutionize the industry, he cried! What genius, what brilliance,--------- what horseshit.

Well, in any event, now that this 10-year-old idea has been rehashed into something special, we have everyone clamoring for credit on the new products marketed today.

http://www.cpu-world.com/CPUs/Timna/index.html

SPARKS

Anonymous said...

Like a well-oiled clock, intel keeps on ticking!

Did you folks see the numbers intel put up for Q1? Damn, Q1 is usually a slow one, and look at them numbers!

This is for sure: intel may copy AMD on everything, but there is ONE thing AMD can't copy intel at, and that is MAKING MONEY, MAKING A PROFIT, PUTTING MONEY IN THE BANK.

There, INTEL doesn't copy AMD!

«Oldest ‹Older   1 – 200 of 244   Newer› Newest»