1.23.2008

A Review of the IDC numbers

I can see that someone has done a ‘Sharikou’ on Intel’s recent financial performance. A ‘Sharikou’ is an analysis method popularised by a similarly named blogger, where a desired result is only realised by varying the point of reference. Ignoring the usual method of 'like-for-like' or sequential comparisons, the blogger instructs his readers to compare Intel’s 2007 numbers with 2005 to get a “better understanding”. He doesn’t justify why his method gives a better understanding, but at least you can see that Intel’s revenue and margins are trending down – ‘Sharikou’ method successfully utilised.

IDC's numbers (using only conventional methods of comparison) were released and show some interesting trends. Total 2007 processor volume went up 12.5% while revenue improved by only 1.7%. IDC attributes this to a degree of price erosion earlier in the year. True enough, but when someone suggests that Intel isn't lean because it can't reproduce the margins of two years ago, that is simply poor analysis, and frankly it isn't surprising whenever the "Sharikou" is applied. In a duopoly where market or revenue share is (almost) a zero-sum equation, comparing how AMD and Intel performed using the same frame of reference makes the most sense.

Intel and AMD did see an overall decrease in ASP, mainly because of the larger presence of AMD in the notebook and server space (margins naturally decline as market share in a duopoly approaches 50%). But it is interesting to see who was hit the hardest:
Global 2007
Volume increase: 12.5%
Revenue: $30.55B
Revenue increase: 1.7%

Intel 2007
Revenue (Digital Enterprise / Mobility CPUs): $25.89B
Revenue increase: 8.7%

AMD 2007
Revenue (Computing): $4.70B
Revenue increase: -12.4% (down from $5.64B)
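The gap between volume growth and revenue growth pins down the implied price erosion. A quick back-of-envelope check (a sketch using only the figures above):

```python
# Implied average selling price (ASP) change from the IDC 2007 figures:
# volume up 12.5%, revenue up only 1.7%. Since revenue = volume * ASP,
# the ASP ratio is the revenue ratio divided by the volume ratio.
volume_growth = 0.125
revenue_growth = 0.017

asp_change = (1 + revenue_growth) / (1 + volume_growth) - 1
print(f"Implied blended ASP change: {asp_change:.1%}")  # -9.6%
```

So the blended ASP across the market fell by roughly 9.6%, which is the erosion both companies had to absorb in some proportion.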

It is quite evident that AMD has taken the brunt of the ASP erosion, considering that market share only shifted by a couple of percentage points while both companies reported record volumes shipped. In terms of laptop and server prices it is clear that the business environment has changed, and the good news is that both companies are taking steps to adapt. AMD has delayed Fab 38, for obvious reasons: at the moment it has the capacity to supply 80 million CPUs against 2008 demand of 70-75 million for AMD-based systems. AMD now seems content to play in the box it was forced into. Intel, on the other hand, plans to improve by divesting non-performing businesses, creating new high-margin products and, as usual, by outpacing the competition with better products on a better process.

It is premature and erroneous to dismiss the benefits of Intel's 45nm process before the ramp is complete. Intel's guidance of flat margins throughout 2008 likely reflects its tendency to err on the cautious side and isn't sufficient grounds for quick assumptions about the company's cost structure or pricing strategy. Meanwhile AMD can only expect more difficulty as K8 grows even more outdated and K10 is ramped with very poor yields. In 2007 AMD managed to ship 400K quad-core units. It isn't such a big number considering this is just 0.6% of their total volume. Barcelona, or K10, is considered by many a disaster in terms of performance. Financially, we have yet to witness how much damage it can do to AMD… assuming of course AMD ramps K10 on 65nm.
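As a sanity check on that 0.6% figure, the implied total unit volume is a one-line calculation (a back-of-envelope sketch, not a number AMD reported):

```python
# If 400K quad-core units are 0.6% of AMD's 2007 shipments,
# the implied total unit volume for the year is:
qc_units = 400_000
qc_share = 0.006

total_units = qc_units / qc_share
print(f"Implied 2007 AMD volume: {total_units / 1e6:.1f}M units")  # ~66.7M
```

Roughly 66.7 million units, which sits plausibly just below the 70-75 million unit demand forecast for 2008 quoted above.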

29 comments:

pointer said...

I can see that someone has done a ‘Sharikou’ on Intel’s recent financial performance. A ‘Sharikou’ is an analysis method popularised by a similarly named blogger, where a desired result is only realised by varying the point of reference ...

You are actually insulting Sharikou ...

I feel far more comfortable posting comments on Sharikou's page than on that blogger's.

Sharikou's style is that Intel is evil and AMD is good, plain and simple.

That blogger's style is more complicated to describe clearly. Basically it's being a fanboi without admitting it: a "99% I'm right and you're wrong" style, using the other 1% to prove he isn't biased, that he's being reasonable, etc.

I'm not able to describe him fully. But comparing him to Sharikou is, in some sense, an insult to Sharikou.

Anonymous said...

I always liked the saying:

They are 49% sweetheart and 51% SOB

Anonymous said...

“You are actually insulting Sharikou ...”


Nah, he does a pretty good job of it all by himself. Do you really think he needs the help?



“It is premature and erroneous to dismiss the benefits of Intel’s 45m before the ramp is complete.”

You ain’t kidding, Doc. I know exactly where they could get 1400 bucks, literally, in a New York heartbeat!!!



A little Jewel, the cat’s meow, the crème de la crème, Da Bomb!

QX9770, wherefore art thou?


SPARKS

Anonymous said...

Guidance on flat gross margin is pretty simple... Intel expects relatively slow growth in the overall CPU market in 2008 and therefore continued pricing pressure, especially as AMD now appears to have to compete on price since the performance of their new 'high margin' product isn't there.

Scientia's analysis of 45nm is a complete joke - let's face it, does anyone here really think he even has a clue about process technology - all he can do is read and creatively clip bits and pieces to support his conclusions.

Will people stop perpetuating the myth that 45nm should provide Intel significantly better margins? First off, there is an additional metal layer, the high-k/metal gate costs, and an increased number of tighter litho tools - this adds anywhere from 10-15% to the wafer cost (perhaps more early on in the ramp).

Well sure, but that's offset by 2X the number of die per wafer, right? NO, NO, NO... first off, scaling is more like 30-40%, which translates to ~1.6X the number of die... and of course that's assuming yields are similar (which in the long run they probably will be).

Well still that offsets the additional wafer cost, right? AGAIN, NO! Cache size increases too so you don't simply get geometric scaling! Also I'm not sure if the core is slightly bigger due to the additional instructions. So you are not simply just scaling the die anymore....

What 45nm does is enable you to add more cache, more logic, and keep costs relatively flat. If Intel just did dumb shrinks and didn't add cache or logic, then yes, it would provide significant cost advantages. This is also why there is a push to larger wafers - that is really where the major cost benefits are achieved.
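The cost argument in the last few paragraphs can be sketched numerically. All figures below are illustrative (the 10-15% and ~1.6X come from this comment; the 1.3X die growth is a made-up number for demonstration, not an Intel figure):

```python
# 45nm cost-per-die sketch: wafer cost up 10-15%, offset by ~1.6x
# more die candidates per wafer from the shrink.
wafer_cost_factor = 1.125   # midpoint of the 10-15% increase cited above
die_per_wafer_gain = 1.6    # ~1.6x die per wafer from scaling

cost_per_die = wafer_cost_factor / die_per_wafer_gain
print(f"Naive shrink, cost per die vs 65nm: {cost_per_die:.0%}")  # ~70%

# But if added cache/logic grows the die (say 1.3x, an assumed figure),
# the per-wafer gain shrinks and much of the advantage evaporates:
die_growth = 1.3
print(f"With a bigger die: {cost_per_die * die_growth:.0%}")  # ~91%
```

Which is exactly the point above: the shrink budget gets spent on more cache and logic at roughly flat cost, not on fatter margins.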

So what do we end up with? A classic case of Scientia trying to sell Intel's advantage as an issue and an indicator of problems. If they are ramping 45nm, surely their margin forecast should go up! Well, it will if you ignore the HUNDREDS of other variables that go into gross margin... As always he presents a superficial analysis because he simply doesn't have the background and expertise to intelligently comment on or analyze the situation.

Chuckula said...

Sparks, if you really want to blow a lot of money on a CPU, the 9650 is out there right now and with extra cooling it will go north of 4Ghz just fine.

As for me, for about $1,200 I'm building my first new desktop machine in almost 5 years. It's replacing a Northwood P4 2.4C, the ones with the dual channel DDR-400 which were the last semi-decent P4's ever made. Back when I built the P4 I also got a cheap Athlon XP box for ~$500 which I was using at school (grad school we had offices). That machine ran great and the 2400+ chip was usually a little bit faster than the P4 (except for SSE2 stuff). The only problem was after it did its duty through 2 years of hard work in grad school the MB got flaky and I didn't need 2 desktops anymore. The point of the long-winded story is that I have used and enjoyed AMD products in the past when they were competitive in both price and performance and didn't have corporate psychoses.

So the new machine is using a nice new and fully available e8400 which I plan to take to a modest 3.6Ghz with a Scythe Ninja B cooler. This is going onto the DDR2 version of the Asus Maximus X38 board, with 2GB of OCS 1150 RAM, and an XFX 8800GT video card. All of that gets tossed in an Antec "Nine Hundred" case, with a (massively overspecced) 650 watt Cooler Master PSU and we're off to the races. Hard drives are being transplanted from the old box, and I got a SATA DVD burner so it will be a 100% SATA box.

Why am I upgrading now? It's not because I'm an Intel fanboy and want to boost their first quarter revenue ;) Actually, the old P4 still does a good job under Kubuntu, but things like 1080p video are too much for it (go to Apple's website and download the 1080p movie trailers, you don't need an overpriced blu-ray player to test them out). Also, after 5 years the PC is still pretty stable, but I get random glitches every so often and I'm not sure where they are coming from, so might as well get out while it's still working.

As for quad core, if Intel had come out with the 9450 this week I might have bitten the bullet for the extra $150, but to be honest... I CAN use dual core quite a bit, but actually taxing a quad not as much. I'm a Linux user 99% of the time, and there was a time when I did massive compile jobs that could have screamed on a quad, but any compiling I do now is small enough that the e8400 will chew it up. And besides, when Nehalem comes out the Yorkfields will get cheap and it'll be a nice upgrade.

Despite the fact that I'm very critical of the Motorola-reject crew that is driving AMD into the ground right now, I do hope they improve. Unfortunately, any real improvement in performance may not happen until Bulldozer in 2009 (if we are lucky).

So, the CPU is the last outstanding part and Fedex brings it tomorrow... I'd post photos but the CCD on my camera died and I need to RMA it :( I WILL put up a post from my new machine once I beat it into submission. One thing an e8400 won't do is make me type faster though!

Anonymous said...

One clarification to the 'flat' gross margin guidance - I believe the yearly guidance is flat with respect to Q4 (which was Intel's best margin quarter of 2007). So while a shallow view of the numbers may make it seem like flat guidance, when you consider that Q1 and Q2 trend lower, a flat yearly guidance with respect to Q4 2007 is actually pretty good.

If Intel is expecting 56% in Q1 that means they will need to exceed the yearly guidance of 57% at some other point in the year in order to get to a 57% average.
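The averaging works out as follows (a sketch assuming, for simplicity, equal revenue weighting across the four quarters):

```python
# If Q1 comes in at 56% and the full-year guidance is 57%, the
# remaining three quarters must average above 57% to compensate.
q1_margin = 0.56
full_year = 0.57

q2_q4_avg = (4 * full_year - q1_margin) / 3
print(f"Required Q2-Q4 average margin: {q2_q4_avg:.2%}")  # 57.33%
```

In practice quarters are not equally weighted (Q4 revenue is typically the largest), so the exact requirement shifts, but the direction of the argument holds.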

Anonymous said...

“Despite the fact that I'm very critical of the Motorola-reject crew that is driving AMD into the ground right now, I do hope they improve.”

Well done! Well said! Let's not forget there are a few Fairchild and Digital pathogens thrown into the cauldron/mix.

“Sparks, if you really want to blow a lot of money on a CPU, the 9650 is out there right now and with extra cooling it will go north of 4Ghz just fine.”


Like Professor Dumbledore said, “Yes and No, this mirror shows us our deepest, most desperate desires”.

To own the best, even for a moment, is surely fleeting, as everything is surpassed in time. I am proud to add this new addition to my cadre of former bests.

I don’t mind the price; actually, there is something that sets this one apart from all the rest. This is the one, that all the kind ( very smart and albeit tolerant) people on this site gave me a deeper understanding of the processes, design, manufacturing, sorting, testing, etc, through obviously dedicated, and somewhat passionate analysis. (I’m sure GURU knows, without doubt, nothing short of nuclear holocaust will prevent me from buying this one) Call it the “Annealed K factor”. HEAVY METAL, DUDE!

Additionally, this one will historically go down as the pinnacle, the turnaround process/architecture that brought INTC back from a Wall Street laughing stock to the world-class superpower it has always been destined to be.

This design is the embodiment of great American resource and innovation, there is simply nothing better on the planet. This chip sent the bullshit slinging minions; “the Motorola-reject crew” back to the dark ages, scratching their asses wondering what hell they are going to do now.

“Blow”, you say? On the contrary, I see it as a no compromise, goddamned bargain.

QX9770.

Even the numbers themselves look almost sinister and formidable.

“Intel Inside” nah, “Intel In Your Face”

SPARKS

Anonymous said...

Over on AMDZone, scientia - oops I mean Sharikou-clone - is predicting the following:

2008 Predictions
by scientia on Tue Jan 22, 2008 7:28 pm

I'm going to predict that AMD will fall short of its 30% share goal for 2008 and probably reach 26%.

I'm going to predict that AMD will regain its 2006 share in servers.

I think AMD with Puma will first regain its lost mobile share and then top it.

I would expect some increase in desktop share.

I think AMD 45nm will appear in small volume (not enough to effect revenue) in Q3. I think AMD will have a reasonable volume in Q4. I would expect AMD to be fully converted to 45nm in Q1 2009.

I noticed that Intel hasn't been bragging about its 45nm volume in Q4; I doubt it will be bragging about its Nehalem volume in Q4 this year either.

I don't know if AMD will break even in Q2 but it should be close. My guess is that Q3 and Q4 will be profitable.

I expect AMD to have gains in ASP while I think Intel is going to have some reductions.
scientia
K8 Athlon 64 (Orleans) - Expert Boarder


Posts: 2131
Joined: Wed Mar 24, 2004 10:42 pm
Location: Indiana USA


Considering his past history, he obviously hasn't learned enough to keep his mouth shut when his betters speak.

Anonymous said...

"I noticed that Intel hasn't been bragging about its 45nm volume in Q4"

The irony is rather thick here after hearing AMD 'brag' about shipping 400,000 quads. That's almost a rounding error at Intel.

Did Intel ever state they would have high volume of 45nm products in Q4'07? I only heard them say launch - I think Scientia continues to attribute some over-exuberant Intel fan's fantasy to what Intel themselves said... Next thing you know he'll be saying Intel failed to launch 45nm mobile parts in Q4'07... what, you mean he already said that?!?

"Fully converted to 45nm in Q1'09" that's a freakin laugh... I guess it depends on a few things:

A) Is this actual Q1'09, or AMD's calendar-like interpretation?
B) What does fully converted mean - wafer starts, wafer outs, or actual product out? There is quite a bit of difference (I'm going to assume he means wafer starts)
C) I guess we'll exclude foundry capacity again?

Also he has apparently resorted to AMD's OPERATING margin claims - please note this DOES NOT MEAN PROFITABLE! He's gotta learn how to parse his words better; even AMD was specific on this one, as they knew better. Positive operating income != "profitable"... but heck Scientia really know his finances so who am I to argue?

InTheKnow said...

Some more info on Intel's 45nm process for those that are technically inclined can be found here.

I was surprised with some of the actual numbers that were released, such as the %Germanium in their strained Si.

InTheKnow said...

but heck Scientia really know his finances so who am I to argue?

Good to see you are finally realizing where your place in the universe lies. :P

Khorgano said...

InTheKnow said:

Some more info on Intel's 45nm process for those that are technically inclined can be found here.


FTA: Semiconductor International, 12/14/2007 11:31:00 AM

“By January 2006, this process was yielding fully functional test chips. Now, products at Intel’s Fab 32 in Arizona are matching the yields with the same defect densities as our fab in Oregon,” he said.

Better not tell Scientia, he thinks F32 won't be matching D1D in yields or defect densities for quite some time.

Tonus said...

I saw this linked from the comments section at Sci's blog. Via has a new CPU core. It looks promising, especially compared to their previous line of CPUs. Small, low-power, yet reasonably powerful.

Via isn't competing at the high end, but does this look like a nice step up for them? They demo'ed a 1.2GHz version running with just a heatsink, so they could have some success in the notebook market.

Tonus said...

So the new machine is using a nice new and fully available e8400 which I plan to take to a modest 3.6Ghz with a Scythe Ninja B cooler.

I have an e8400 on the way as well, and am intrigued by the OC'ing possibilities. I almost never OC anymore, since CPU/GPU power is typically more than I need for the cost (very different from years ago, when money was an issue and OCing was a nice way to get a Pentium 200MHz for the cost of a 133!). These days stability is first and foremost for me.

But the ease with which I should be able to squeeze another few hundred MHz out of the CPU makes me want to tinker with the BIOS just a bit...

$209 for a 45nm dual-core CPU running at 3.0GHz on a 1333MHz bus... sheesh! (And I mean "sheesh" in a good way!)

Scott said...

Rumor that's made news headlines: IBM to buy AMD?

Chuckula said...
This comment has been removed by the author.
Chuckula said...

OK the new Wolfdale is here! For people with more knowledge in decoding serial numbers here are a few:
On the CPU itself: Q745A576
On the Box: Prod. Code BX80570E8400
Version #: E27439-001
MM #: 895733

The packaging date is January 4, 2008.

I'm going to put it together later today or tomorrow... assuming everything is happy and the MB's bios is well behaved I'll be posting again from the new machine ;)

Tonus said...

Some people continue to think that just because Sharikou is unwilling to police his comments section, that Rob will also. I think the spammers will be very disappointed.

Roborat, Ph.D said...

On the CPU itself: Q745A576

Lot number: 745 = workweek 45 of 2007, which should be sometime mid-November.
576 = the 576th lot.
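That decoding can be captured in a small helper. The field layout here is an assumption based on this comment's reading of the code (year digit, two-digit workweek, trailing lot sequence), not a documented Intel format:

```python
# Decode a lot code like 'Q745A576': character 1 is the year digit
# ('7' -> 2007), characters 2-3 are the workweek ('45'), and the
# trailing three digits are the lot sequence number ('576').
def decode_lot(code: str) -> dict:
    return {
        "year": 2000 + int(code[1]),
        "workweek": int(code[2:4]),
        "lot": int(code[-3:]),
    }

print(decode_lot("Q745A576"))  # {'year': 2007, 'workweek': 45, 'lot': 576}
```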

Pretty new if you ask me. I'm waiting for the Quads tho..

Anonymous said...

It's obvious that Intel's margin statements are just to lull AMD into a false sense of security. Afterall, if AMD drops any further and gets bought out then they might get someone in charge who will make AMD into a real threat. I think Intel is tanking its own forecast to keep AMD alive.

Anonymous said...

Well, I agree. If IBM bought AMD then they would have a lot more resources and experience and IBM/AMD would be a much larger competitor for Intel. IBM has a lot more experience so they couldn't do any worse than that idiot ruiz.

Anonymous said...

Yeah, but maybe even google or somebody would buy AMD. Hell, even Apple could buy AMD. Maybe we are looking at a future where AMD merges with VIA.

Anonymous said...

There is no doubt that AMD is desperate. They lost $2 freakin billion just in the last year. The last quarter is the biggest so they'll probably just start losing again in the next quarter. I think Intel needs to keep them around so they can pretend to have competition and keep the EU happy. It seems like everyone is on a witch hunt against Intel and for what?? Just making products that people want and earning a profit?

Anonymous said...

I don't know I still think AMD will be lucky if they don't go BK before 2008 is up. I'll bet the board is already voting to have ruiz pretend to retire so they can get his ass out of there.

Anonymous said...

400K K10's at AMD is such a joke! Intel makes that many chips every hour! LOL. reuters

AMD currently has a chip fabrication plant, or fab, in Germany and has plans to build another one outside Albany, New York.

Intel, by comparison, has some 12 chipmaking plants across the globe, and its manufacturing prowess has long been a competitive advantage against smaller rival AMD.


AMD has that one silly little fab compared to Intel's 12. With 12X AMD's capacity no wonder AMD is going out of business. With the two new fabs plus the 45nm conversion Intel will have 25X AMD's capacity by the end of the year and AMD will be down to just 4% LOL.

Anonymous said...

Robo - can you track the IP addresses of some of these clowns? My feeling is that some of these guys are posting stuff just to flame - for example CPU Truth is obviously an AMD fan trying the ol' reverse psychology bit and seeing if he can fan the flames....

Anonymous said...

Damn roborat, the fishing is great with these guys.

Anonymous said...

That is easy to explain. I can post what I want on this blog. No one will a) delete it, or b) cut it up and repost it in part with "responses" to the parts of my post that the blog owner deems fit to address.

Right, wrong, good or bad, I took the time to compose my thoughts and type them up. As long as there isn't an issue with the language or defamation of character, I don't see a need to remove posts.

He goes on to wonder
The really ironic part is that if I wanted to I could certainly make my blog very pro-AMD but I have not done that. It amazes me that the people who post on roborat's blog and Roborat himself are not smart enough to see that.

So now he insults my intelligence because I don't share his viewpoint. This is just a polite way of telling me I'm stupid for posting here. Presumably, if I were smarter, I would post on his blog on his terms.

As to being a pro-AMD blog, perhaps he should look at the people who post there now. AMD supporters one and all (okay, enumae would be an exception). That makes his blog no better than Roborat's. It is just a hangout for AMD fans. There is no longer a dissenting voice. He has killed it.

In my final post on his site, I told him that I had found that his blog was not a place for open and honest discussion. My post was quickly deleted, but removing my post does not change my perception.

I don't think anyone would accuse me of being an AMDroid, but I am interested in the opposing viewpoint. In that regard I find this blog a bit lacking, since the few souls (like HYC) who have ventured to post here get eaten alive. But even with its shortcomings, it's the best industry blog I've found.

29 April 2008 06:17


Anonymous said...
"In that regard I find this blog a bit lacking since the few souls (like HYC) who have ventured to post here get eaten alive."

Quite frankly, with some of the things he has said - he deserved to be "eaten alive"... if his statements were 'my opinion'... or 'my theory'... but when he concludes and compares things incorrectly, well it should be challenged. That said I do respect his posting here and not simply posting in a 'friendly' environment.

The difference between this blog and Scientia's is that Robo won't selectively filter and selectively edit the discourse... selective editing is in my view worse than refusing to post a comment at all, as there is no way of knowing what he is taking out of context.

As for content, in my opinion, there is an absolutely huge chasm in expertise in the process and manufacturing areas on this blog - you have people who have both academic and practical knowledge in this area and are not simply trying to interpret things seen on the web. In my view this is clear when you see predictions on things like clockspeed or TDP or launch dates based on the underlying fundamentals, versus what I see as largely empirical extrapolations on Scientia's blog. Things like solely looking at release schedules to assess technology differences between companies, or creatively interpreting Sematech presentations to fit a desired blog entry, illustrate a lack of understanding of what lies the next level down (in terms of info) from that data, which is needed to truly understand it and draw conclusions from it.

That said there are some very good SW and architecture people on that blog (in my view, MUCH more so than here), but there seems to be a need for some of those folks, who are clearly out of their element in other areas, to try to convince people about AMD's prowess in the process and manufacturing area. You don't see the process or manufacturing folks here cashing in on their reputation to make claims in areas they don't understand.

What I like is when folks are open about what they don't know and don't try to pawn themselves off as experts in areas they are not. That will always happen to some extent, but when you try to refute some of this on Scientia's blog it gets filtered - in my view this is due to fear of being viewed as less knowledgeable in certain areas, or just a desire to make AMD seem better or less far behind (though admittedly, I'm no psychologist and this is solely one of the anonymous robo-trolls' views).

I do find the 'I was very accurate in 2003-2005, but since Core 2 I have been less so' (I'm paraphrasing) evaluation somewhat amusing. One (or should I say 'we' to make it sound better?) could naively view this as Scientia's predictions are accurate when AMD is doing well...perhaps because he largely predicts good things about AMD and bad things about Intel. I'd suspect if Intel continues to do well and AMD struggles, Scientia's predictions will continue to be poor, but if AMD starts to do well the accuracy will pick up.

The difference with the blog here is that there is much less false pretense - many people who are fans don't pretend to be unbiased, objective posters, and as many have pointed out, comments are posted regardless of ideology and don't get deleted if Robo doesn't personally agree with them.

29 April 2008 07:30


Anonymous said...
Robo - did the press releases state Cray was dumping AMD or simply that they would start using Intel in the future? It may not necessarily mean that AMD is being dumped, but rather Cray is hedging its bets and may go forward with both suppliers.

Realistically, for a company Cray's size and the area they operate in, it probably doesn't lend itself to this approach but from what I read on the web it was not clear this was a 'dumping' of AMD.

I also think the supercomputer list will evolve slowly even if Intel takes all of the Cray business. (It is also in my view not a good indicator anyway as it seems to be a lagging indicator).

To me, putting away the HPC applications - what will be interesting is that with the growth in computing power and # of cores will 1P and 2P continue to eat into the need for 4P+ servers? If you start talking about a 2P server with 8 cores in each socket, 4P may really diminish except in niche applications. (If I'm not mistaken, 4P+ is still relatively small compared to the 1P and 2P market).

29 April 2008 07:45


Anonymous said...
What the heck?!?!
http://www.digitimes.com/mobos/a20080428PD219.html
(AMD desktop lineup revealed)

Some highlights:
- 'while the low-power 8450e (Tollman) will see production begin in the second quarter' You mean they are INTENTIONALLY starting these or this is when wafers will start that they expect to have yield problems on?
- 'The Phenom X4 9150e, which was originally planned to be launched in the second quarter, will not be available for orders until the third quarter, along with the 9350e. In the fourth quarter, AMD will launch another low-power CPU'

So 9150, 9350, 9450, 9550, 9650, 9750, 9850 and potentially a 9950... now also throw in some 0MB variants... Huh? 8+ products (probably at least 10) to cover the quad desktop space? Are you kidding me? Is it just me or is this insanity? You gotta think the top price is in the $250 range... with 10 products what are the price increments going to be?

"if the process goes smoothly, 45nm Phenom X4 CPUs should appear in the market by the end of November, added the sources."

Leaving AMD squarely a year behind Intel (or more if you consider actual process node performance) and this is with AMD running at breakneck speed to new tech nodes - I just don't see the closing of any gaps that others have foretold.

And it looks like 2.8GHz is the top potential speed through Q4'08 (ranges of 2.5-2.8 were given for the top 45nm SKU in Q4'08), with a 95W TDP. The 95W TDP is a bit of good news as it is improved over the current 125W top-bin parts - though AMD is expecting to reduce this on 65nm as well, so it's hard to say whether this is a 45nm improvement or not.

29 April 2008 09:52


hyc said...

"In that regard I find this blog a bit lacking since the few souls (like HYC) who have ventured to post here get eaten alive."

Quite frankly, with some of the things he has said - he deserved to be "eaten alive"... if his statements were 'my opinion'... or 'my theory'... but when he concludes and compares things incorrectly, well it should be challenged. That said I do respect his posting here and not simply posting in a 'friendly' environment.

Obviously I don't know the facts behind AMD's decisions, so anything I said previously about their honesty/whatever could only be taken as "my opinion" or "my theory."

While, like any other person, I have obvious biases, I am no fanboy. As you folks have noted, if scientia or anyone else makes a statement that I suspect is wrong, I will call it out. I have no investment in Intel or AMD one way or the other; there are no sacred cows here for me.

When I make a wrong statement, I expect that to be called out too, because I'd rather learn the truth than stay ignorant. I might prefer a few less slings and arrows, but what the hell, I throw plenty of my own in other venues.

Ultimately what matters to me is software efficiency and performance. The largest deployments of my software run on SGI Altix - Intel Itaniums. For a few years there nothing else on the market could even approach them in terms of single-system-image scaling. Other folks can have religious wars about whether Itanic is a good thing or not, but what matters to me is that it solves an otherwise unsolvable problem for my customers.

There's an old joke that "there's nothing more dangerous than a computer programmer with a screwdriver." My degree was in computer engineering; I studied both hardware and software design in college but my last VLSI design course was more than 20 years ago and since then I've only kept up my software skills. I expect to be wrong more often than right in conversations in this crowd. (Thanks for delivering on my expectations...)

29 April 2008 10:53


Tonus said...
"The difference between this blog and Scientia's is that Robo won't selectively filter and selectively edit the discourse..."

That's really the only thing I don't like about the comments section on his blog. I agree with his deleting of comments that are mostly flames or trolling, but there are times when he deletes a post and then responds to the deleted post, and you don't know if he left any relevant parts out. Or if you *did* see the post before he removed it, you may wonder why he didn't respond to certain points.

I think it's a good idea to remove posts when people are being abusive or trolling, and then leave it at that. I think that people will either start making posts that just address issues and leave out the crap, or they will stop posting (and who will miss them?). But removing a post and then responding comes off as a suspicious act.

***

As for myself, I'm more interested in looking back and reading about why things have happened than in looking ahead. So much of the technical information is over my head, and lots of details are kept secret by the companies involved, which makes predictions difficult and questionable most of the time.

But I can usually follow the discussion to some degree and enjoy seeing the technical points being made, even if I don't know enough to dispute or support any of them. And since I'm mostly observing, I don't really have anything at stake. Nothing at stake, and I get to read interesting commentary. Win-win situation.

29 April 2008 15:09


Axel said...
Tonus

But I can usually follow the discussion to some degree and enjoy seeing the technical points being made...

The problem is there's practically no technical dialogue of significance anymore in the discussion section of Scientia's blog. You may have been following over the last year or so, but I think it's pretty clear that that section of his blog died months ago due to the excessive censoring / moderating. As has already been noted here, the bulk of the comments on that blog are now posted by ignorant anti-Intel zealots grasping at Scientia's flawed predictions as the last rays of light left amid AMD's darkening fortunes.

For me, Scientia's posts themselves have consistently been somewhat interesting to read (though laughably misguided and lacking common sense). It's the discussion section that has gone to total crap. A year ago the discussions were far more engaging and Scientia's moderation more lenient. Then as the accuracy of his predictions continued to sour in the second half of 2007 (e.g. K10 performance, significance of DTX, etc.), he became increasingly defensive and intolerant of dissent, leading to the current useless state of the discussion section. It's now nearly on the same level as Sharikou's.

29 April 2008 18:39


Roborat, Ph.D said...
Anonymous said...
Robo - did the press releases state Cray was dumping AMD or simply that they would start using Intel in the future? It may not necessarily mean that AMD is being dumped, but rather Cray is hedging its bets and may go forward with both suppliers.

The $250M DARPA contract, which should be prototype-complete by 2010, will be coming out with Intel CPUs instead of AMD. Cray's direction for HPC systems has switched sides. I consider that fundamentally dumping one technology for another. Cray didn't say they will be Intel-exclusive, but you must agree it's a catchy title.

29 April 2008 20:27


SPARKS said...
In The Know

You know me Bro, I call 'em like I see 'em.

BTW: UPS did not arrive today :( :(

SPARKS

29 April 2008 20:52


SPARKS said...
“The $250M DARPA contract that should be prototype-complete by 2010”

Doc-

Since CRAY will be using Xeon chips in the interim, and Nehalem has been seen here and there up and running, can we assume they were at least impressed enough by its performance to make the swing to INTC? Further, would they use a four- or eight-core part for their specific needs? Will other manufacturers follow Cray's move eventually?

What’s your take on Itanium with regards to Nehalem?

SPARKS

29 April 2008 21:31


Anonymous said...
Read Scientia's parting sentences here and judge for yourself: has he become Sharikou junior?

"The basic strategy involves replacing batch tooling with single wafer tooling and reducing batch size. AMD wants to drop below the current batch size of 25 wafers. AMD figures that this will dramatically reduce Queue Time between process steps as well as reduce the actual raw process time. Overall AMD figures a 76% reduction in cycle time is possible so a 50% reduction should be reasonable. Today, running off a batch of 25 wafers is perhaps 6,000 dies. Reducing batch size would allow AMD to catch problems sooner and allow much easier manufacturing of smaller volume chips like server chips. Faster cycle time means more chips with the same tooling. It also means a smaller inventory because orders can be filled faster and smaller batches mean that AMD can make its supply chain leaner. All of these things reduce cost and this is exactly how AMD plans to get its financial house in order"

This is a comical line of thought, showing how desperately Scientia is stretching to spin something out of NOTHING!

AMD really doesn't have the option to replace batching. They are small fry in the chip business with little leverage over tool manufacturers. Last I checked, all major processes continue to be "batch." The largest buyers of tools also do huge volumes and thus will choose the right processing for the most cost-effective manufacturing. AMD can talk till they are blue, but it is just noise from a mouse. It's AMD jumping and waving, trying to distract everyone from the real issues. Everyone is working on cycle time and batching. Everyone is doing SPC, APC, APM, blah blah blah. But everyone else is guarded; no one wants to give away their competitive advantage. It's funny that Doug Grose let it slip in one presentation what AMD considers good yield. What they judge acceptable would be judged dreadful by many others, similar to AMD's financial performance: dreadful!

Let's revisit Scientia's silly thoughts on batching.

1) Wafer transport is done in FOUPs that hold 25 wafers. Using them for fewer than 25 wafers, say even 5, would increase the FOUP count by 5x. That would fill the fab with so many FOUPs it would overwhelm the automation system. Sorry, unless AMD gets the whole fab automation tool set changed, they won't get much speed-up in tool-to-tool moves without busting the fab stockers and the automation bottleneck. You'd have one huge fab moving a bunch of mostly-empty FOUPs.
Scientia, do you have any clue how a modern fab works and what the constraints and considerations are?

2) All major tools are still batch. They come in two groups: ones that process in batch, and ones that process wafers singly but load/unload in batch, making true single-wafer station-to-station movement totally BS. They include pretty much the whole damn tool set: furnaces, rapid thermal anneals, deposition, etch tools, steppers, etc. Everything, so Scientia doesn't know WTF he is talking about. Again I ask: Scientia, have you ever even seen a semiconductor tool in action?

"Faster cycle time means more chips with the same tool." LOL, here Scientia totally shows his stupidity again. You should just shut up and stay away from technology, as you show again and again you have no clue. The capability of the tool hasn't changed whether you run it batch or singular. Take a rapid thermal anneal tool, or a sputter tool with 4 chambers.

NOTHING has changed for the wafers, batched or not. They still need the same fixed time for anneal and/or deposition.

Today these types of tools permit queuing two FOUPs, so when one is completed the next can start with NO wait. The tools are so expensive that most factories already have them running flat out 24x7. Single-wafer or batch will NOT increase the number of wafers that can be processed by most tools in the fab. The capacity of a factory will NOT increase by a material amount with faster cycle times. WTF is this idiot talking about? More spin control, like Hector. Smoke and mirrors versus delivering the result. Might as well be walking through an argument about why INTEL will go BK, like Sharikou did.


"All of these things reduce cost and this is exactly how AMD plans to get its financial house in order." AMD's problem has little to do with fab cost. It has even less to do with whether the billion-dollar-plus factory is running efficiently or not. AMD is trying to turn attention away from the most fundamental problem that they have.



AMD's real problem and one they refuse to admit they need to fix to compete with INTEL

It takes billions of dollars a year of R&D every year for many years to field a leading edge process merged with a leading edge design, ramping this to produce hundreds of millions of CPUs just in time to capture the billions revenue and the required high margins to do that cycle again.

Right now AMD hasn't invested in the process, so they are stuck with billions of dollars of depreciating equipment that produces hundreds of millions of processors that they have to sell at prices so low they can barely break even.

They try to cover up this fundamental chicken-and-egg problem with fancy words. Bottom line: today they don't have a high-end leadership product to set their ASPs across the product lines. Thus they take expensive new designs and fab them in expensive, depreciating factories at commodity prices. This is totally bankrupt! Reducing costs won't fix this. This is like a commodity memory producer thinking he can produce more and more chips at ever cheaper prices to make up for the loss he incurs on every chip.

AMD can only fix its problem by getting a high-margin product and a medium-margin, high-volume product. Today they have no product in that space. They make noise about 45nm coming by the end of the year. What is most funny is that their 45nm product at that time will be competing with the top-end 65nm from INTEL at the bottom, while Nehalem and Penryn products command the premium to mid range and rake in the profits as AMD sucks more red ink.

Losing Cray is a death blow, everyone will now start the moving to Nehalem and thus AMD will lose their last high profit segment.

Tick Tock, Tick Tock, your time has run out, AMD.

If you look back, see all that tried and failed, and they all had bigger bank accounts: Digital with Alpha, IBM with PowerPC, TI with SPARC and DSPs, the Japanese consortium with TRON, HP with PA-RISC. Yawn, it's so obvious; why are people so silly as to believe the AMD story will be different? Oh yes, because it's x86. But let's not forget they are in the game because INTEL treats them with kid gloves, and the only reason anyone even believed they had hope had more to do with an INTEL screwup than with AMD execution or strategic brilliance. Now it's all over for AMD... Tick Tock, Tick Tock.

29 April 2008 23:02


InTheKnow said...
Sparks said...

Since CRAY will be using Xeon chips in the interim, and Nehalem has been seen here and there up and running, can we assume they were at least impressed by its performance enough to make the swing to INTC?

Cray isn't like Dell. They don't design a system in a couple of months and start shipping. It takes them 2-3 years to develop a new product. I'm pretty sure they won't be using the existing Xeon chips, but will only be using Nehalem.

I also saw something that indicated they would at least continue selling their existing designs based on the Opteron processor in the interim. I can't remember where the link is to that one offhand. I'll post it if I stumble across it again.

30 April 2008 00:15


InTheKnow said...
anonymous said...
It's funny that Doug Grose let it slip in one presentation what AMD considers good yield. What they judge acceptable would be judged dreadful by many others...

Link please! I'd like to see that! Or if all the evidence has been scooped up and swept back under the rug, I'd still like to see a number here.

30 April 2008 00:19


Roborat, Ph.D said...
Sparks said: Since CRAY will be using Xeon chips in the interim, and Nehalem has been seen here and there up and running, can we assume they were at least impressed by its performance enough to make the swing to INTC? Further, would they use a four or eight core for their specific needs? Will other manufactures follow CRAY move eventually?

the $250M contract is for concept development only, therefore the choice of multi-core CPU depends on what is available at the time of build. the original requirement in 2002 was for at least 8-core CPUs.

i would say that there are other considerations for Cray's CPU selection besides performance, one being the ability to scale and work with their existing interconnect technology. It has more to do with AMD's unstable execution and poor roadmap that made Cray look elsewhere. Of course, what Nehalem brings to the table, like using the multi-chip variant with the IGP as a possible accelerator, is definitely a bonus. The capabilities and guaranteed availability of Nehalem and Sandy Bridge in 2010 are just too good to pass up.

30 April 2008 01:37


SPARKS said...
Doc-

Thanks, I suspect we all knew this was coming after AMD's failure last year.

Minimum 8 cores, native. Impressive.

SPARKS

30 April 2008 02:31


InTheKnow said...
Anonymous, I'm going to play devil's advocate here.

1) Wafer transportation are done in FOUPS that are 25 wafers in capacity. Using them for less then 25 wafers, say even 5 would increase the number by 5x. That will fill the fab with so many FOUPS, and also overwhelm the automation system.

First, you've chosen an extreme example. Say that you want a batch size of 12. Now you've approximately doubled the number of FOUPs in the system. Still an impact but hardly 5x.

Also remember, the goal is to reduce cycle time. With a reduction in cycle time, FOUPs are spending less time in the stockers and more time in the tools. Since FOUPs aren't spending as much time in the stockers doing nothing, you are able to reduce the FOUP count in the factory at any given time.

So by choosing a smaller, but more reasonable, FOUP size based on the graphs in the Intel slide I posted earlier, I'd estimate this would only lead to about a 20% increase in the number of FOUPs in the factory.

Depending on loadings, this could be a bit tight, but still manageable.
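The back-of-the-envelope version of that argument, as I read it (every number below is mine, purely illustrative, not from any Intel or AMD data):

```python
# Little's law: wafers in process (WIP) = start rate x cycle time.
# The FOUPs needed scale with WIP / batch size, so a faster cycle time
# partially offsets the extra FOUPs a smaller batch size requires.
# All figures below are made-up illustrative assumptions.

def foups_in_fab(starts_per_day, cycle_time_days, batch_size):
    wip = starts_per_day * cycle_time_days  # wafers sitting in the line
    return wip / batch_size                 # carriers needed to hold them

baseline = foups_in_fab(5000, 45, 25)  # 25-wafer batches, 45-day cycle time
smaller = foups_in_fab(5000, 35, 12)   # 12-wafer batches, assumed faster cycle time

print(round(smaller / baseline, 2))    # ratio of FOUP counts
```

The exact ratio obviously depends on how much cycle time actually improves; the point is just that the FOUP blow-up is less than the naive 25/12 ≈ 2x.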

All major tools are still batch. They come in two groups, ones that process in batch and those that process singular but load/unload in batch making true single wafer station to station totally BS.

It is true that the whole FOUP enters and leaves the tools together, but to pretend there is no difference between the two is at best disingenuous.

True batched tools do have a very real negative impact on cycle time. You have to hold lots on station until sufficient wafers accumulate to build a batch.

Then you have to move the wafers to the tool. This entails additional delays while the tool waits for the automation system to bring all the FOUPs to the tool. They don't start loading once the first lot arrives at the tool.

Finally, there is the scrap risk. Modern semiconductor tools have the capability to run self-diagnostics as they process the wafers. This allows single-wafer tools to stop processing with only a wafer or two impacted. By the time a batched tool reports an error, you have multiple LOTS at risk. If those wafers are scrapped, you now need to start whole new lots of wafers, not just absorb a couple of onesie-twosie losses.

Smaller lot size = less risk/cost.

"Faster cycle time means more chips with the same tool." LOL here Scientia totally shows his stupidity again.

This is true in one sense but false in another. Faster cycle times can result in increased output by reducing the time that lots sit in front of a batched tool before processing.

No tool is assumed to run 100% of the time. Some amount of downtime is always built into the model. So improving tool utilization and/or availability is an excellent way to improve tool output. You basically redefine the model by reducing the time tools wait for batch quantities to be reached.

"All of these things reduce cost and this is exactly how AMD plans to get its financial house in order" AMD's problem has little to do with Fab cost. It has less to do with the billion dollar plus factory not running efficiently or not.

Inventory carries a very real cost. Intel was able to reduce their inventory significantly by reducing their cycle time. If AMD were able to improve their cycle time, they too would realize the cost savings this brings.
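To put a rough, entirely hypothetical dollar figure on that (the wafer cost and start rate are invented for illustration):

```python
# WIP inventory is roughly starts/day x cycle time (Little's law), so
# halving cycle time halves the money tied up in the line.
# Cost and volume figures are invented for illustration only.

STARTS_PER_DAY = 5000
COST_PER_WAFER = 3000.0  # $ tied up per wafer in process (assumed)

def wip_value(cycle_time_days):
    return STARTS_PER_DAY * cycle_time_days * COST_PER_WAFER

freed = wip_value(90) - wip_value(45)  # e.g. a 90-day cycle time cut in half
print(f"${freed / 1e6:.0f}M freed from work-in-process")
```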

30 April 2008 02:34


SPARKS said...
Well gents, UPS delivered the baby, 7:30 PM EST

ALL systems go and we're on the clock, 8:35 PM EST

GURU- This ‘top bin’ baby is cranking along at a mere 4 Gig, 3rd boot right out of the box. So much for your buddies' Q6600/Phenom comparison.

SuperPi = 11 sec, TWICE as fast as the Pheromone's 22 sec @ an unstable 3.5

Mem bandwidth = 8403 MB/s
Cache and Mem. = 54.4 GB/s
Multimedia = 51696 iT/s, it BLOWS away the Xeon X5482 by 20%
Mem Latency = 64 ns, speed factor 85

The memory is (stock) 1600 running synchronous with the FSB. It's rated for 1800.

Vcore 1.4125, automatically set by the motherboard
10X multiplier.
Air cooling, of course.

These are preliminary numbers. Nothing hardcore as yet; I am waiting for a drink of water.

Obviously these are 100% stable, with much more to spare. I'll tool it around for a week, just to get a feel. Time and H2O will tell.

Nice job, fellas. Thanks.

Giant- Stop F**king around, buy one.
Tonus- getting that itch in your back pocket yet?

SPARKS

30 April 2008 02:36


Anonymous said...
"Smaller lot size = less risk/cost."

Perhaps risk, but actual factory scrap does not correlate to batch size. When a scrap does occur it may impact a larger # of wafers, but there are also far fewer scrap events on a batch tool.
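A toy expected-value sketch of that point (both event rates below are invented purely to illustrate the trade-off, not real fab data):

```python
# Expected annual scrap = (scrap events per year) x (wafers lost per event).
# Batch tools: rare events with big exposure; single-wafer: frequent, small.
# Both rates below are made up purely to illustrate the trade-off.

def expected_scrap(events_per_year, wafers_per_event):
    return events_per_year * wafers_per_event

batch = expected_scrap(4, 100)    # batch tool: few events, many wafers each
single = expected_scrap(200, 2)   # single-wafer tool: many events, few wafers

print(batch, single)  # same expected loss, very different risk profile
```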

As for the whole single wafer processing, small lot sizes - there are many proponents of this - AMD is not breaking new ground here... they see the threat of 450mm on the horizon which only the large volume manufacturers will be able to afford so they are looking for alternatives to compete from a cost perspective.

The problem is the best time to implement things like smaller lot sizes or switching from batch to single wafer processing is at the start of a new wafer size transition (and in fact you will see many of these changes come about if 450mm goes forward).

The problem with doing it after the start of a new wafer size transition is that you start to impact the reuse model of a fab (~70% of the equipment is reused from generation to generation). You will have significant impacts on the actual fab which are difficult to handle on the fly - there would be some automation changes and most likely substantial facility changes - things like waste line sizing, exhaust laterals, and chemical supply are all impacted if you are talking about a batch tool vs a single-wafer tool. This then has a cascading impact on other tools in the fab that share exhaust laterals (exhaust needs to be rather carefully balanced for the multiple tools you may have on one large lateral), or are on the same water loops (you may now have different pressure drops)... etc...

Then consider the impact to the equipment suppliers. Not everyone is going to implement these changes so you now force suppliers to support 2 different toolsets on the same wafer size, while also developing equipment for a new wafer size (450mm) and to some extent still support legacy 200mm equipment.

The natural breakpoint is on a wafer size transition as you have to buy all new equipment anyway and you are generally starting with new fab designs so you can plan the automation (lot size, etc) and facility impacts accordingly. You also have fewer design constraints so it makes it easier for equipment suppliers to come up with an optimal solution.

Finally who's going to pay for new 300mm equipment development? Many suppliers are still trying to recoup development costs on the initial 300mm equipment development. Many folks with multiple fabs and a lot of experience on a lot of existing batch equipment will not probably make the switch, so how big a market is there for this new equipment?

The AMD presentation is fine and it is consistent with many other presentations I have seen on cycle time improvements. The problem is there really is no coverage of the negative impacts of the approach - the benefits are touted, but there is no modeling of fab impacts, cost impact, financial impact to the equipment suppliers, impact on tool reuse and technician support, fab layout, sub-fab impacts, etc...

This is a nice academic study, but quite frankly that's the problem with it - it is largely academic. To make these types of changes you need full industry support and need to have an honest discussion of the negative impacts (and who pays for them). It'd be a different story if AMD and the little consortia listed were putting up some seed money, but clearly that 'ain't gonna happen'

30 April 2008 03:32


Anonymous said...
http://www.custompc.co.uk/news/602511/amd-next-cpu-architecture-will-be-completely-different.html

"AMD’s technical director of sales and marketing for EMEA, Giuseppe Amato, told Custom PC that ‘if I look at the next generation architecture of our CPU, then it will definitely not be, how can I say, comparable with the Phenom. It will look completely different.’"

Man... K10 is barely out of the womb, and the message is already starting to shift to "you should see our next generation"... distancing the next gen from the K10 design.

While I'm sure some will spin this as AMD's relentless pursuit of new and innovative approaches, others may see it as a lack of ability of the K10 architecture to carry forward.

30 April 2008 03:51


SPARKS said...
Electromigration, hmmm.

GURU- I’ve discovered that sleep apnea/insomnia and its derivations can be brought on by the following equation.

http://en.wikipedia.org/wiki/Black's_equation


Black's equation:

MTTF = (A / j^n) · e^(Q / (k·T))

Where:

A is a constant (it folds in the wire's cross-section, hence the width w below)
j is the current density
n is a model parameter
Q is the activation energy in eV (electron volts)
k is the Boltzmann constant
T is the absolute temperature in K
w is the width of the metal wire


WHERE MTTF IS FUGLY!

“The model's value is that it maps experimental data taken at elevated temperature and stress levels in short periods of time to expected component failure rates under actual operating conditions”

AH----the key words are----ahhh---STRESS and FAILURE!

“the Black's equation, is commonly used to predict the life span of interconnects in integrated circuits tested under "stress", that is external heating and increased current density, and the model's results can be extrapolated to the device's expected life span under real conditions.”


“under "stress", that is external heating and increased current density”


Nice, I’ll think about this every time I step up the Vcore .01 volts on a $1500 chip.

This guy J. R. Black, was he in any way related to a guy named MURPHY???

Let me get this straight. You guys have a little channel in the substrate, you seed it, grow (sputter?) some lovely copper, then grind it down flush. You watch your corners and bends because ‘crowds’ gather there. Then, because of the Black thing, you have to watch your widths, made uglier by capacitances if you go too wide. (It's no wonder AMD dropped the ball, all on SOI, no less. Time to start from scratch.)

Alright, spill it. How far during development do they take these things to failure?

Why isn’t there a data sheet that says, “Attention, MORON, we’ve tested this thing to ‘X’ voltage (and temperature), and if you keep f**king around at or past this point, you’re really gonna screw the pooch”?

SPARKS

30 April 2008 15:45


Giant said...
Well gents the UPS delivered the baby, 7:30 PM EST...

GURU- This ‘top bin’ baby is cranking along at a mere 4 Gig, 3rd boot right out of the box...

Giant- Stop F**king around, buy one.

SPARKS

Oh my! My finger is seriously close to the trigger! But $1489 at the Egg, how would I explain that one to my gf?

I've already bought Grand Theft Auto IV (truly excellent game, btw) for PS3 and a new speaker system this week, I'm pushing the envelope here! :-(

Congratulations on a fine purchase there sparks, certainly a MONSTER cpu, and one of the best motherboards one could hope to pair it with!

You've hit 4GHz very easily. Are you increasing the CPU multiplier, or the FSB to OC at this stage?

I have an eventual challenge for you as well Sparks, I've pushed my E8400 to a 515MHz FSB (2060MHz!) on the excellent EVGA 790i board. That gave me a clockspeed of 4.635GHz, on air no less. I wouldn't run the CPU at that speed for very long, but it was good for a few runs of SuperPi and 3DMark. (24/7 speed for me is 1780MHz FSB with 1780MHz DDR3, 4GHZ CPU clockspeed)

I'm sure all this talk of these crazy clockspeeds achieved on air must be driving the AMD fanboys mad, who continually link to a single person hitting 3.5 with WATER on a Phenom!

BTW, have you picked up an equally impressive video card to go with this monster CPU? I'd be very interested in seeing some 3DMark results for such a setup!

-GIANT

30 April 2008 15:54


Tonus said...
sparks: "Tonus- getting that itch in your back pocket yet?"

4GHz for starters, oh man...

I will have to start paying more attention to this stuff again. Memory timings, motherboard features, overclocking... buying a 3.x GHz chip and not OCing it now would just feel criminal.

Good thing I just bought a new TV, and don't have the inclination to spend any more money right this moment!

30 April 2008 17:03


SPARKS said...
Giant-
Tonus-

“You've hit 4GHz very easily. Are you increasing the CPU multiplier, or the FSB to OC at this stage?”

The 4 Gig run was done strictly with a 10X multiplier, with memory set at the board's natively assigned DDR3-1333 BIOS parameter. Incidentally, also listed in those options are DDR3-1600, *DDR3-1600 O.C.*, and *DDR3-1800 O.C.* I had to manually assign these parameters, but the board SAW the 1600 natively.

Subsequently, I keyed in the DDR3-1600 native setting and checked the latency; it went down to 57ns. That’s well within reach of an IMC.

There is an interesting option I have, frankly, never seen before: the frequency multiplier can be increased or decreased by .5. I always felt that a full multiple was too much of a jump; ASUS has addressed this issue quite nicely.

“BTW, have you picked up an equally impressive video card do go with this monster CPU?”

Unfortunately, no I haven’t. I am still using the 1900XTX Crossfire setup, which really isn’t bad. The score I got with that setup along with the Q6600 was 11,490. With this chip the score increased to 12,857, not too bad for a 2-year-old setup. They’ve got some new things on the horizon in the interim. I really would like a substantial increase.

That said, the ATI purchase really turned the graphics industry sideways.

My next purchase will be that “electric cooler” we spoke of. GURU’s electromigration and carrier mobility abstracts have me pissing my pants. The next thing you know I’ll be wearing a dress and high heels.

With that in mind, that E8400 is absolutely beautiful, spectacular, in fact. I thought that Q6600 was something irreplaceable and unique. Man, was I all wet, it was only the beginning.

I’ll keep you posted as I develop a relationship with the new chip. Next stop, 1800 FSB, then back to 4 Gig and beyond.

SPARKS

30 April 2008 17:59


InTheKnow said...
Perhaps risk, but actual factory scrap does not correlate to batch size. When a scrap does occur it may impact a larger # of wafers, but there are also far fewer scrap events on a batch tool.

This is true, but if you were to break out scrap over a year, I'd bet the batch tools are way out front, even if you normalize the wet etch tools for the number of passes.

Since no-one is going to give that level of detail in the public domain, we will probably never know for sure. But my bet is that the batched tools are the largest sources of scrap in the factory.

01 May 2008 02:14


InTheKnow said...
I know there has been some question about how long it takes to get a wafer through the factory. It is a lot less than many people seem to think. Here is what Paul Otellini had to say.

It was legendary that our factory throughput times were close to 90 days for many, many years. We've cut that in half.

That puts fab time at just over 6 weeks.

01 May 2008 02:19


Anonymous said...
"But my bet is that the batched tools are the largest sources of scrap in the factory."

You'd lose money... Back in the 200mm days (0.5um, 0.35um), CMP was far and away the biggest source of scrap... nowadays it's different, but it's still not the batch tools. Also, many people tend to think of mechanical failures (wafer handling inside the tool, etc.) when they think of scrap, but that tends to be a rather low share of the overall scrap.

Of course the excursions are painful - you have the potential to lose a lot of wafers at once but if you look at scrap per 1K wafers processed, you'd be surprised.

01 May 2008 03:49


Anonymous said...
Largest source of wafer scrap?

It has varied widely over the many years I've worked in fabs. Sure, when a batch tool goes bad it can be a couple hundred wafers. On the other side of it, you generally discover the problem pretty quickly.

Single-wafer tools, even with the best monitoring, can produce surprises that go undetected and result in much more costly scraps.

How fast a wafer moves depends on lots of things. If you balance a factory well you can get great cycle time. You could also choose to load up the factory, have wafers queued up at every operation, and accept worse cycle times. Also, don't let it be measured in days; it really is about days per mask layer. INTEL could do 4 weeks for all I know, but if they have fewer metal layers than AMD, which they do, then it's an apples-to-oranges comparison.
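The days-per-mask-layer normalization in one line (the layer counts below are invented for illustration, not actual Intel or AMD numbers):

```python
# Normalize cycle time by mask layers so fabs running different
# process complexities can be compared apples-to-apples.
# The layer counts below are invented for illustration.

def days_per_layer(cycle_time_days, mask_layers):
    return cycle_time_days / mask_layers

fab_a = days_per_layer(45, 30)  # shorter raw cycle time, fewer layers
fab_b = days_per_layer(50, 40)  # longer raw cycle time, more layers

print(fab_a, fab_b)  # fab_b is actually faster per layer
```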

01 May 2008 05:46


SPARKS said...
“24/7 speed for me is 1780MHz FSB with 1780MHz DDR3, 4GHZ CPU clockspeed”

This is interesting.

Although I haven’t had the QX very long, nor have I explored its absolute limits, I have found the same VERY comfortable point at 4.06 GHz.

I have, however, found the limit for air cooling:

From CPUz:

9.5 x 450= 4.275 GHz @1.408V, 1800 FSB, DDR3@1800 7-7-7-21 2T 2.0V

Sandra:

Processor Arithmetic= ALU 66835 MPS, SSE3 = 61753
Processor Multimedia= 549144 it/s, FP=267068
Memory bandwidth= 9576 m/sec!
(Now it’s clear why I waited for X48)
Cache + Memory Combined=65.47 G/s
32K blocks= 407 G/sec!
Latency=56ns
SuperPi 1M= 10 seconds!!!!!!

Obviously, both chips run cool (yours and mine) and there’s A LOT of headroom (a full GIG!), basically on the first production run. Binning these chips (?), man, with the way these things run, it’s a shame to deliberately lock in anything below 2.6. It looks like INTC doesn’t have very much to throw away.


I suspected months ago that INTC sandbagged these chips when Barcelona fell on its ass. They were ready for Barcelona even if the son-of-a-bitch had comfortably hit the 3 GHz+ speeds they were howling about for a year. It simply had no chance, ever, against Penryn, right out of the gate. Look at that Pheromone at 3.5 Gig, a cherry-picked slab. The QX9770 pees all over it at well below stock speeds!

I don’t give a flying hoot what anybody says. INTC woke up and hit the floor running. If they don’t believe it, you and I have the evidence in hand to prove it.

E8400 @ 4 Gig
QX9770 @ 4 Gig

WITH NO MEANINGFUL DIFFERENCE IN THERMALS AT THESE SPEEDS!

That’s saying something, especially when I’m packing another set of jewels. Call it a pocket full of hafnium.

BTW: With all these runs, I haven’t had a lockup, boot failure, BSOD, or a failed Windows load, yet!

I’m going to back this gem down to 4 Gig and cruise around nice and comfortable 24/7, all on air.

HOO YAA!


SPARKS

01 May 2008 14:51


Anonymous said...
To InTheKnow: I looked through the two big updates from Grose and can no longer find the reference. It was widely discussed when the foil appeared in one of his big presentations to analysts, where he alluded to "acceptable" yields and referenced a number. We all took this as AMD management's acceptance of a minimum lower limit. It was a number that I think many companies would not consider acceptable. It's interesting that the two presentations I can find at the AMD site look slightly different than what I recalled and now show distinct yield-% or defect-density plots with no scale. I recall looking at these in the past and not seeing those two plots. I suspect the sensitive page and reference has since been removed, or the presentation was completely pulled and they have now put in the standard thing I also see from INTEL on this subject.

In the end, can we agree that AMD's success, or in this case total failure, in fielding a competitive CPU, and its complete failure to meet any success metric of a publicly traded company, has little to do with cycle time, efficiencies, or the performance of AMD's factories? That is what I find so funny: AMD spends so much time talking about things that have no material bearing on the mess they are in, and improving them even by 20-50% won't change matters at all.

To Intel’s credit, they talk about efficiencies too, and between similar productivity improvements and aggressive headcount reduction they have materially improved the bottom line, or so they say. That is relevant, as they have a huge cost structure and reducing it will add directly to higher margins and more profits. INTEL already has credibility around its Tick Tock design strategy, and their process technology and manufacturing leadership is without question among the best. Put all that together: investment, manufacturing leadership, technology leadership, and leading edge products lead to a credible, positive business plan and a bright future.

Let's contrast that with AMD: everything there needs significant improvement, in EVERY frigging area, for them to have any chance at all to turn a profit! They talk a lot of nonsense about these manufacturing efficiencies, but to be perfectly honest, if AMD had a 10% advantage in cycle time, in cost per wafer, in utilization, damn, in every manufacturing metric, they still would have bled red ink in each of the most recent quarters by a huge amount. Why don't they talk about the real fundamental problem facing them? The reason they don't is obvious: if they were to really talk about it, it would be clear how broken their business model is, and the stock would fall another 50%. That is why!

The Scientia and Sharikou blogs are nothing but personal soapboxes not worth spending time even trying to post on; both have descended into incoherent excuse mining to keep alive their wet dream that AMD will somehow rise again to some glory.

02 May 2008 04:23


Anonymous said...
"That is what I find so funny: AMD spends so much time talking about things that have no material bearing on the mess they are in, and improving them even by 20-50% won't change matters at all."

Look - AMD is going to have to continue to cut manufacturing costs to compete with Intel. The best case scenario is they are 1/2 node behind Intel (schedule-wise), so they will be at a disadvantage die-size-wise, except through design innovation and efficiency (the 0MB L3 part, if it doesn't take a huge performance hit, is a good example of something they need and can do). Even when Intel launches a new node they still remain that distance behind, as you have to consider the aggregate mix of the two nodes. When AMD starts shipping 45nm, Intel will be ~50% converted; by the time AMD is 50% converted, Intel will be largely transitioned.

Intel's plan to cut cost is 450mm - while this will require incredible upfront investments, it will yield SUBSTANTIAL cost reductions (more so than any node transition). AMD will not be able to follow this roadmap unless bags of billions of cash start falling from the skies, so they need an alternative - thus the efficiency / asset smart / cycle time reduction plan.

There are two fundamental problems with this approach:
1) AMD does not have the industry clout (i.e. they do not buy enough equipment)
2) More importantly, any gains made on 300mm should carry over to 450mm, so even if they get suppliers to work on this, AMD will gain no competitive advantage.

That said... AMD has to try something - short of outsourcing (which has other issues), what else can they do? Simply trying to stay on the same pace or out-execute Intel in this area is probably not a viable 'plan'.

The cycle time will probably give AMD more of an advantage in terms of flexibility and development times. Intel can always throw money at development to speed it up - you can run many new steppings in parallel - which is risky, but if you can afford the Si and the capacity to do this, it is a good brute force method - the more information turns, the faster the development. Shorter cycle times will increase the information turns during the development phase and potentially reduce the amount of capacity needed - this will have a larger relative benefit to AMD than to Intel.

02 May 2008 06:52


SPARKS said...
DOC-
In The Know-

I did a little research (please forgive me if you already knew this) on CRAY. I have a link below of the world top ten supercomputers.

http://www.top500.org/lists/2007/11

I was surprised to see the INTC Xeon 53XX powered units were in the 3rd, 4th, and 5th ranks. I’m not sure when the 53XX’s were released, last year I think. I think it was Clovertown (65nm). From what you were both saying there are years of development time, and yet these units have surpassed CRAY’s Opteron based units, which are currently in 6th, 7th and 9th position. That was pretty quick in terms of development time, and time to surpass CRAY’s lead with the 2.4 GHz Opterons installed.

Why so quick? Was the architecture already in place? Did they upgrade the way I did by a mere CPU swap, and move up the HPC ranks, on the cheap, if you will? Can you do this on these monsters? Additionally, CRAY put all their CPU eggs in the AMD basket; obviously HP didn’t (Ranks 4 and 5). Couldn’t CRAY have done the same?

With this in mind I am certain HP, INTC’s long time partner, is ahead on the development lead with Nehalem based systems, perhaps others, too.

I've got some SPEC numbers here. Nehalem makes my QX chip look like an i486 in comparison.

http://blogs.zdnet.com/Ou/?p=1025


SPARKS

02 May 2008 07:01


Anonymous said...
"It was widely discussed when the foil appeared in one of his big presentations to analysts, where he alluded to "acceptable" yields and referenced a number."

With AMD we'll never know. Yield data is too sensitive to provide raw data, so the best you can get (in my view) is how Intel presents it - they show normalized data, but they compare one node to another so at least there is some reference.

AMD simply refers to expected yields or mature yields - maturity just means it has stabilized at a given level... the level could still be garbage! (I'm not saying this is the case, but you simply can't tell).

In the past, AMD has shown one node vs another, however they did a very subtle and important thing... they showed yield vs production volume (on the x-axis), instead of an actual calendar date or time.

What's the difference? Well if your yield is low, your production volume is also low so you can still show a fast improvement rate (vs volume) especially if your yields are low for some time. Or if your yield is low you may slow down the ramp which will lower the production volume and again show a potentially different yield/volume slope.

It would be remarkably simple to plot the data vs calendar date - by presenting it vs production volume they are also compounding the data with the various technology ramp rates (unless they all ramped at the same rate).

This could be a very subtle, and not easily picked up, manner of tweaking the graphs. I'm not saying AMD is doing this intentionally - but by using volume instead of time, it limits the usefulness of the data.
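
To see how plotting yield against cumulative volume instead of calendar time can flatter a slow ramp, here is a toy sketch with entirely made-up numbers (nothing here is actual AMD or Intel data):

```python
# Toy illustration of why yield plotted vs. cumulative volume (instead of
# calendar time) can flatter a slow ramp. All numbers are made up.

weeks = range(10)
yield_by_week = [0.30 + 0.07 * w for w in weeks]  # same ramp for both "nodes"

starts_a = [1000] * 10             # node A: full wafer starts from day one
starts_b = [100] * 5 + [1000] * 5  # node B: holds starts back while yield is low

def yield_vs_cum_volume(starts):
    """Return (cumulative wafers, yield) points, one per week."""
    cum, points = 0, []
    for s, y in zip(starts, yield_by_week):
        cum += s
        points.append((cum, y))
    return points

# At the same cumulative volume (say 3000 wafers), node B shows a much
# higher yield than node A, even though both improved identically in
# calendar time -- the volume axis hides the slow ramp.
```

Both curves come from the identical yield-vs-time ramp; only the wafer-start schedule differs, yet the yield-vs-volume plot makes node B look far better.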

Also AMD had a line called "mature yield" - another trick you could play is to have different mature yield targets for different nodes... (again I'm not saying AMD is doing this, but I don't know that they're not either). When Intel presents the data it is simply defect density so there is no possibility of 'tweaking' the data from node to node and is a far better 'apples to apples' comparison.

02 May 2008 07:10


Anonymous said...
Man - I just read Scientia's latest comment about batch processing and MFG and almost fell out of my chair I laughed so hard.

When it starts with:
"I'm not an expert on wafer manufacturing so if someone has more specific information feel free to provide corrections."

I guess what do you expect... instead of asking for more specific info, perhaps he should have just said - "if anyone has any actual info..." More specific?

And the stuff on batching from other folks is just comical - apparently notebook and server chips can't be batched together... well except primarily for litho, THEY CAN BE AND ARE BATCHED TOGETHER!

It's one thing to hypothesize and speculate, but for folks to just throw out random info not grounded on any sort of facts is just too funny.

I think the problem is some folks consider a batch to be a lot; others don't seem to understand that, with the exception of litho, most product types go through very similar process flows and can be 'batched' together (or run back to back on the fly, or what folks call 'cascaded').

Automation and controls have become so sophisticated that many areas can retarget and change on the fly, in real time, between lots... suppose for example you were polishing 1000A of Cu on one lot; you can take thickness measurements in real time and adjust the polish time for another lot that might need 2000A. You can also factor in different polish, etch, and dep rates and adjust recipes on the fly between lots of different product types to account for differences like pattern density. A lot of this stuff is done in house by many IC manufacturers, and I think the level of automation would surprise folks who have this cookie cutter view of how the fab works --> process a lot, stop, measure it, see if it is OK, then adjust the tool for the next lot, then process...
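
The on-the-fly recipe adjustment described above is essentially run-to-run control. Here is a hypothetical sketch of the idea; the class name, rates, and EWMA smoothing weight are all illustrative assumptions, not any fab's actual system:

```python
# Hypothetical run-to-run (R2R) control sketch: adjust polish time per lot
# from measured removal rates, as in the Cu CMP example above. All names
# and numbers are illustrative, not any manufacturer's actual system.

class PolishController:
    def __init__(self, rate_a_per_s=50.0, smoothing=0.3):
        self.rate = rate_a_per_s   # estimated Cu removal rate (angstroms/sec)
        self.alpha = smoothing     # EWMA weight for folding in new measurements

    def polish_time(self, target_removal_a):
        """Recipe time (seconds) for the next lot, given its removal target."""
        return target_removal_a / self.rate

    def update(self, removed_a, time_s):
        """Fold a measured (removal, time) pair into the rate estimate."""
        observed = removed_a / time_s
        self.rate = (1 - self.alpha) * self.rate + self.alpha * observed

ctrl = PolishController()
t1 = ctrl.polish_time(1000)             # lot targeting 1000 A of Cu -> 20.0 s
ctrl.update(removed_a=1100, time_s=t1)  # tool ran fast; rate estimate rises
t2 = ctrl.polish_time(2000)             # next lot needs 2000 A -> under 40 s
```

The point is just that each lot's recipe time is recomputed from the latest rate estimate, so lots with different targets can be cascaded through the same tool without stopping to requalify it.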

"Single wafer tools were created when particularly difficult processes needed to work on one wafer at a time but this was not ideal."

You know, I'm not sure I could make this stuff up! Actually in many cases single wafer tools are ideal (even from a cost perspective!) Of course those litho batch tools were the bomb until the process got too difficult! Thank goodness for immersion - though part of the reason I think it took so long was that immersing the whole lot was tricky, so thank goodness single wafer immersion litho was created (I'm kidding folks).

He then provides a link which talks about NGF (next generation factory) and somehow attributes it to "here's what AMD has to say"..

the guy is so deluded into thinking this is an AMD concept that he didn't even bother to notice these are ISMI proposals! (International SEMATECH) AMD is one of many companies (including Intel, BTW) in this consortium... but apparently this is an AMD idea because he saw a different AMD presentation with NGF in it, and now anything NGF related is an AMD thing.

"ISMI managers published a 19-point Next Generation Factory plan, with many of the changes starting in 300 mm fabs but expected to carry over to the 450 mm generation, whenever it arrives."

So apparently ISMI is now AMD, or perhaps he is confused and thinks this is the IBM fab club (it is not). If he bothered to click on the link in the article he posted he would see the company list, but apparently ignorance is bliss and he would rather just convince himself that "here's what AMD has to say".

Of course had he read the article he might have seen the part "The NGF program requires consensus-building and prioritization, both among the 16 devicemakers within ISMI and between the chip manufacturers and tool vendors."

So, how long before Dementia realizes this is not an AMD 'innovation' but rather a consortium effort (Intel included) of many IC manufacturers? I suppose when he finds this out (and realizes just about EVERYONE is working on this), he'll just dismiss it and move on to the next topic of disinformation.

If any of you patient folks care to explain this to him feel free to cut/clip/paste any of this.

03 May 2008 07:39


InTheKnow said...
Anonymous said...
Man - I just read Scientia's latest comment about batch processing and MFG and almost fell out of my chair I laughed so hard.

I considered posting a correction here myself, but I wasn't quite sure what he was trying to say until the follow-on posts. At that point it became clear that, among other things, there was confusion about what batching is. So I'll try to add some clarity.

The basic processing unit is, of course, the wafer. Wafers are started together in a FOUP (a fancy name for a plastic box with a door on the front of it). The contents of the FOUP are called a lot.

Most tools in the fab process a single wafer at a time. The exceptions to this rule are what we have been referring to here as "batched" tools. A batch is a group of lots that are all processed together in the process chamber at the same time.

With the definitions out of the way, let's move on to processing and efficiency. I'm going to try and explain this in a very general way, so it will be easy to find exceptions to what I'm about to say, but it should apply to the majority of cases.

The most efficient type of process is called a continuous process. In a continuous process raw materials are fed into the process in a continuous stream and finished products move out in continuous stream. So the timing on your feed and your output are in sync. As an aside, if you want to see continuous processes in action, I'd recommend you watch "How it's Made" on the Discovery Channel.

When you first start a continuous process up there is a lag while the process fills up with raw materials, so you need to keep the processor fed constantly and minimize downtime to get the most efficient process possible.

Obviously, continuous processing lends itself well to liquid processing as there is not a discrete "unit" to feed in. Single wafer tools can come pretty close to this, but they need a buffer system to achieve this kind of efficiency. One buffer will hold and queue lots, so that as one lot finishes the other is getting prepared to start. Another buffer will store the completed product and load it into FOUPs once processing is finished.

You'll notice that the lots have to be staged in a buffer area both before and after processing. This introduces inefficiencies in product flow through the line that wouldn't be seen in true continuous processing. But the flow through an individual tool can be seen as continuous.

Since single wafer processing is the closest thing to maximum efficiency, you might ask "why batch?" The simple answer is long process time. Many deposition and/or film growth processes can take upwards of 20 minutes to complete. If you are processing in single wafer mode, you will get 3 wafers per hour this way. So your lot of 25 wafers will take >8 hours to process. This long process time leaves you with 2 choices.

First, you can buy a lot of tools. Let's say you buy 24 tools for an 8 hour process time. This would allow you to complete processing on a lot an average of every 20 minutes. But 24 tools would cost a lot of money and the cleanroom space is expensive as well.

The second option is batching. Batching entails a lot of inefficiencies, so the process times themselves are long. For our example, let's say that it takes the same amount of time to run our batched process as a single wafer tool would to process a lot, or 8 hours. But now you put 4 lots in the tool at once. Your output is now 4 lots every 8 hours or an average of 1 lot every 2 hours. It's pretty easy to see that with 6 batched tools you can get the same output as 24 single wafer tools.
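
The arithmetic above is easy to sanity-check. A minimal sketch, using the illustrative numbers from the example (8-hour process, 25-wafer lots, 4-lot batches), not real fab data:

```python
# Throughput check for the example above: 24 single-wafer tools vs.
# 6 batch tools, with an 8-hour process either way and 4 lots per batch.
# These are the comment's illustrative numbers, not real fab data.

PROCESS_HOURS = 8  # time to process one load (a lot, or a batch of lots)

def lots_per_hour(tools, lots_per_run):
    """Average lots completed per hour across the whole tool set."""
    return tools * lots_per_run / PROCESS_HOURS

single = lots_per_hour(tools=24, lots_per_run=1)  # single-wafer fleet
batch = lots_per_hour(tools=6, lots_per_run=4)    # batch fleet

print(single)  # 3.0 lots/hour (one lot every 20 minutes)
print(batch)   # 3.0 lots/hour -- same output from a quarter of the tools
```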

So you can choose a lot of tools (a huge capital expenditure) and a large ongoing cost in maintaining more cleanroom space, or you can accept inefficiencies in processing time and use batch processing.

The work that AMD (and Intel as noted previously) is doing is centered around trying to find ways to reduce these inefficiencies.

03 May 2008 15:31


InTheKnow said...
Sparks, I'm just a simple process guy. Designing HPC systems is way out of my area of expertise. However, I believe that there are Nehalem test chips out there. We've seen systems running on them.

I'd assume the development process would include giving Cray access to these chips to help establish operating parameters for their machine. From this, they can extrapolate X% improvement for the Sandy Bridge processor. They will also be working with Intel's engineers to ensure that required features are included in the design. As test chips for Sandy Bridge become available, Cray would be given access to those to validate the design.

Not a great answer for you, I know, but the best I can give.

03 May 2008 15:38


InTheKnow said...
Anonymous said...
In the end, can we agree that AMD's success, or in this case total failure, in fielding a competitive CPU, and its complete failure to meet any success metric of a publicly traded company, has little to do with cycle time, efficiencies, or the performance of AMD's factories? That is what I find so funny: AMD spends so much time talking about things that have no material bearing on the mess they are in, and improving them even by 20-50% won't change matters at all.

I fully agree that AMD's problems go well beyond running their factories efficiently.

However, if they could reduce factory costs by say 20% they probably could have turned a profit in Q4 last year and maybe even in this past Q1.

Their fundamental problems remain, but running in the black would certainly allow them to pull in capital (from say their friends in Abu Dhabi) to try and address some of the other issues.

03 May 2008 15:44


InTheKnow said...
anonymous said...
You'd lose money... Back in the 200mm days (0.5um, 0.35um) CMP was far and away the biggest source of scrap... nowadays it's different but not batch tools. Also many people tend to think mechanical failures (wafer handling inside the tool, etc) when they think of scrap, but that tends to be a rather low amount of the overall scrap.

Of course the excursions are painful - you have the potential to lose a lot of wafers at once but if you look at scrap per 1K wafers processed, you'd be surprised.

Yeah, as a guy in the trenches my focus tends to be on the excursions.

So I just ran some simple line yield numbers. If we assume 30K Wafer starts per month and a 95% line yield, that works out to 1500 wafers scrapped each month. I've seen some big excursions, but never a single event of that size. I also don't think I've seen that much scrap attributed to a batched toolset in a year, let alone a month.

Even if you assume a 98% line yield, the number of wafers scrapped each month in the factory is still 600.
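
A quick sanity check of those line-yield numbers (30K wafer starts per month is the assumption from the comment, not any fab's actual figure):

```python
# Scrap arithmetic from the post above: 30K wafer starts per month
# (the comment's assumption) at a given line yield.

WAFER_STARTS_PER_MONTH = 30_000

def scrapped_per_month(line_yield_pct):
    """Wafers scrapped each month at the given line yield (in percent)."""
    return WAFER_STARTS_PER_MONTH * (100 - line_yield_pct) // 100

print(scrapped_per_month(95))  # 1500
print(scrapped_per_month(98))  # 600
```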

In short, you've made your point. The risk of large losses is high in batched tools, but the low incident rate of those scrap events offsets the large impact.

03 May 2008 15:52


Anonymous said...
AMD's long term problem is that fielding a competitive CPU line, from high margin server parts down to much lower margin consumer parts, takes big bucks.
AMD could make money if they didn't compete with a big spender like INTEL but they have a competitor with big bucks.

The reason ATI and Nvidia competed well is they had the same foundry resources and competed on design. Now that INTEL is going to get into graphics, Nvidia and their CEO like to talk big, but in the end they know their future is limited by INTEL. If Intel executes, the graphics business will go to INTEL; it won't be a question of if, but when, and after how much money. This is no Itanium story; it's about whether INTEL has the commitment to stay in it and build the graphics drivers to go with their silicon hardware. If they do, ATI and Nvidia are finished in graphics.

In all these performance arenas, the competitor who has the highest performance leading edge semiconductor technology and manufacturing capacity will have the "insurmountable" advantage. Designs are a dime a dozen; the silicon is a huge advantage. To compete, people need both to even make anything competitive.

People who think they can go asset light are talking out of their ass. Jerry had it right in some sense. Real CPU competitors need fabs to develop and manufacture at the latest technology node. Without this, AMD can have the best designs but will be handicapped by higher cost, slower performance and higher power. To be behind 1 year on cost and 3 years on performance isn't a very good business proposition.

The reason they can't go to TSMC or other foundries is they require capacity starts on the leading edge of tens of thousands of wafers a week. If you look at the combined volume of TSMC, Chartered and others, they don't invest enough in the leading edge to support the ramp AMD needs. To go asset light means AMD won't have leading edge capacity and WILL guarantee their consumer products will be slow and not cost effective. It will limit them to only being able to do a few tens of thousands of high end CPUs. Just look at DEC, Sun-TI and IBM to see what that gets you in funding the silicon... you can't afford it.

AMD is BK in their strategy...

03 May 2008 16:36


SPARKS said...
“If any of you patient folks care to explain this to him feel free to cut/clip/paste any of this.”

I don’t think it’s possible. In the limited exchange I had with him, I found that he takes most disagreements personally. In one of his recent replies to me, he freely admitted to never having worked in a FAB.

With that in mind, during AMD’s past successes he became a self-proclaimed expert in the field; he could do no wrong, correct by default perhaps? By his own admission, he dropped the ball more often than not thereafter.

You guys, however, do this stuff to put food on the table, thereby challenging that authority with actual practical working knowledge and experience. He said he has been “less than correct”. From where I live, NO ONE can argue with actual practical working knowledge and experience; I don’t care if you pump cesspools.

The guy is angry, and resents you guys for undermining his authority on what he calls a "public" forum. You’ll never get through to him. Hey, with 800 lb. process gorillas ready to pounce (you guys), what else can he do to save face?

“In all honesty, the difference between roborat's blog and mine is that he encourages flames and I don't. He let's posters hide behind an anonymous post and act like children; I don't. I'm sorry but that is no improvement for roborat's blog.”

He doesn’t care to offer objective analysis from a practical perspective, or the freedom to allow contributors to express it the way they deem fit. That guy will never concede a point, and his “less than correct” statements are the evidence. His deletes and past denials are the proof.

I’m done.

BTW: You guys keep talking about FOUPs and batches. I tried to get a handle on this, but you never said how many wafers these things hold, or how many tools it takes to crank a completed wafer out. (Industry average for 300mm)

SPARKS

03 May 2008 23:20


Anonymous said...
1998 - SPC
2000 - APC
2003 - APM
2005 - LEAN
2008 - SMART
2009 - BK

04 May 2008 00:40


SPARKS said...
Touche.....LOL, LOL.

SPARKS

04 May 2008 00:48


InTheKnow said...
Sorry Sparks, I'm never sure where to assume the basic level of understanding should start.

A standard FOUP holds 25 wafers. The initiative we have been discussing is driving for smaller FOUPs. Batch size is variable and can be anywhere from 1 to 6 lots depending on the tools and process step.

Note that running a single lot is not very efficient, but sometimes the tools are run that way if there is a "hole" in the flow of wafers that would leave a lot stranded for a long time before more arrive. Some processes are sensitive to batch size and you have to hold lots to make a minimum size, but other processes are not.

As to the number of tools that are required to complete processing on a wafer, that rates right up there on the proprietary list with process flow and yield data.

To get a feel for what it takes though, see this image.

Each layer requires a tool to put down that layer. You can also figure there is a litho tool to image the wafer, an asher to remove the resist after you are finished with that layer, and a wet bench to remove any residue from the asher.

The image is old as it is still using Al interconnects and there are a lot of subtleties that I've left out with the flow above, but it gives you a rough idea of what is involved.

04 May 2008 05:47


Roborat, Ph.D said...
Scientia said:
You may see that as being a forum for free discussion; I see it as laziness on the part of the blog owner.

Funny, I can see how aligned Scientia is with Mugabe and the Chinese government when it comes to silencing dissent. I bet beating up someone is good because it's hard work and can be tiring while Democracy is for the lazy government who can't be bothered to shut people up! Honestly, where does he get his logic?


In all honesty, the difference between roborat's blog and mine is that he encourages flames and I don't. He let's posters hide behind an anonymous post and act like children;

Encourages flames... What can make people more inflamed than deleting their posts? It would be good for him to realise that the angry posts I get here are his own doing.

BTW, I wouldn't swap some of the anonymous posters here for registered posters on his blog.

04 May 2008 06:31


hyc said...
My point still stands - even made up pseudonyms are still better than flat "anonymous". I don't need to know your name in real life, I just want to know that you're different from anon2 or anon3 or everyone else posting anonymously, so that I can keep track of a thread. And that to me is just a minimal token of respect for the other people you're conversing with. Otherwise we're all just shouting into a crowd.

04 May 2008 11:04


jumpingjack said...
" 1998 - SPC
2000 - APC
2003 - APM
2005 - LEAN
2008 - SMART
2009 - BK "

You know what is funny about this, other than the BK acronym... it's the use of the acronyms in a context that implies AMD invented these manufacturing processes, or that no one but AMD uses them...

SPC is statistical process control, taught in any undergraduate statistics class, and is used by almost all manufacturers of anything from diapers to potato chips.
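
As a toy illustration of what SPC means in practice (generic textbook stuff with made-up data, nothing specific to any company), the heart of it is just a control chart with 3-sigma limits:

```python
# Generic SPC illustration: a control chart with 3-sigma limits.
# The sample data is made up; nothing here is specific to any company.
import statistics

samples = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]  # in-control history

center = statistics.mean(samples)
sigma = statistics.pstdev(samples)
ucl = center + 3 * sigma  # upper control limit
lcl = center - 3 * sigma  # lower control limit

def out_of_control(x):
    """Flag a new measurement that falls outside the 3-sigma limits."""
    return x > ucl or x < lcl

# A reading of 11.0 trips the chart; 10.1 does not.
```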

APC is advanced process control, a generic term which refers to a means of statistically controlling any process output by examining the output and adjusting the input, or vice versa, adjusting the input based on some prior output.

APM is AMD's acronym that collectively refers to their process automation systems. However, there is nothing in that collection of systems that is not part of industry standard practice.

LEAN -- what the heck is this?

SMART -- again, what the heck is this... analysts have been trying to figure this one out for the past year.

Frankly, this is the only thing really frustrating about Scientia's blog... he speaks with such conviction that people often believe he is all knowing, when in reality much of what he says is way off target, as is easily discovered by anyone who can type 'www.google.com'.

04 May 2008 12:20


SPARKS said...
Jack I'm surprised at you, does a lowly electrician need to fill you in on this?

I've determined with my expertise in processing dynamics and engineering that:

LEAN-

Less Explaining Around Newsgroups

SMART-

Shifting Market Analysis Response Training

SPARKS

04 May 2008 14:25


SPARKS said...
In The Know-

Ok, you have these pods running around the FAB loaded with twenty five VERY expensive 300mm wafers. Let’s say at various stages of the process one wafer in particular becomes unusable. Do the tools, or the operators, test that wafer and subsequently reject it at that point? How far down the line can a bad one go?


Further, does the whole line get bottlenecked at one area if a tool in its respective group blows a relay, motor, pump, circuit board, etc.?

What do they do when some poor bastard is trying to troubleshoot/fix this thing while the rest of the line is pumping along behind him, or worse, nothing is feeding out in front of him?

Do these guys sleep at night?

SPARKS

04 May 2008 15:18


enumae said...
If anyone is curious about AMD's FAB in Malta New York I found the...

Supplemental Draft Environmental Impact Statement

It discusses Water, Gas and Power requirements for Fab 4x, also construction timetables (from when they start, not now), Building sizes and Cleanroom Square footages.

All in all, it is pretty interesting to see what it takes to build and operate Fab 4X.


-----------------------------------


Also, if you would like to see siteplans for Fab 4X, aerial overlays etc...

Town of Malta (Luther Forest Technology Campus)

04 May 2008 18:09


Anonymous said...
"SMART"

AMD's clever cheats are able to get money from the Arabs and sucker people into continuing to believe in their business plan, when they've got none. That is really "SMART": lose billions, have no credible likelihood of ever really being able to compete with your big rival, yet get people to buy your story hook, line and sinker.

But I'm smarter than that.

AMD BK in 2009

05 May 2008 01:16


Anonymous said...
Sparks, I'll take a stab at your questions:

"Do the tools, or the operators, test that wafer and subsequently reject it at that point? How far down the line can a bad one go?"

This varies considerably both by process step and by IC manufacturer - it is a question of your chosen monitoring scheme. Ultimately the goal would be a rock stable process that would require no metrology whatsoever, but that is not the real world (but perhaps the Asset Smart world?).

Sometimes an issue will go all the way through the line and not get caught until sort/test (basically testing and binning the chips). However there are numerous 'inline' monitors throughout the process flow where either a test wafer run before or after the lot is checked, or the production lot itself is checked. Many times an IC manufacturer will put test structures in the scribe lines to test problematic issues. The scribe line is used as this is the area where the slicing and dicing takes place, so it is not an active part of the chip which you could potentially damage. There are also 'non-destructive' metrology techniques where you can measure the active areas of the chip inline without doing any damage/contamination.

"Further, does the whole line get bottlenecked at one area if a tool’s in it’s respective group blows a relay, motor, pump, circuit board, etc.?"

This is classic constraint theory and is mitigated in several ways - first off I do not know of any fab that runs without redundancy - meaning at least 2 tools to run any given step. This way if a tool goes down, the other tool can be used - this may limit the overall capacity, but at least you have some. The other thing that is often done is so called 'swing tools' (this cannot be done in all areas of the fab). Sometimes if a tool is down hard (meaning for a significant time), a similar tool used in a different step can be quickly converted to cover capacity on a temporary basis.

Finally, in Intel's case (or any other manufacturer with more than 1 fab), wafers can be packed, shipped and processed in a different fab until the hard down is addressed (this is a rather rare practice though). Herein lies the beauty of Intel's copy exactly approach - when you ship the lot to another fab, you know the tools there are set up identically to the fab you are shipping from, and the lot will get identical processing.

"What do they do when some poor bastard is trying to troubleshoot/fix this thing while the rest of the line is pumping along behind him, or worse, nothing is feeding out in front of him?

Do these guys sleep at night?"

Well, the managers pester the engineers or ops people, who then pester the technicians who are working on the tool. Generally speaking there is 7x24 coverage which can address probably 90-95% of the issues. In the case of a new or uncommon failure, there are strict escalation protocols with the equipment supplier if the tool is down for more than 6 hours, 13 hours, 24 hours (the interval varies by company). It is generally not very long before the equipment supplier's expert is onsite if the problem cannot be addressed by the team that is onsite/oncall 7x24.

Generally speaking these are not pleasant situations, especially if it is in a constraint area in the fab where you need every tool up to meet the fab output goals.

There are other areas in the fab where you may have 7-10 tools and some excess capacity, where it is a bit better (but still not pleasant). Suppose, for example, you need 7.3 tools to meet output, so you buy 8 tools. If one of those tools goes down hard, realistically you are just dropping from 8 tools to 7 against a 7.3-tool requirement.

Now suppose you are in an area that needs 2.9 tools (and therefore you have 3). If you lose one of those tools for a while you are now in a world of hurt.
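The arithmetic above can be put into a quick back-of-the-envelope sketch (a toy model; the tool counts are the ones from the two examples, everything else is illustrative):

```python
def capacity_after_downtime(tools_needed: float, tools_installed: int, tools_down: int) -> float:
    """Return remaining capacity as a fraction of the required output.

    tools_needed is the fractional tool count required to hit the fab's
    output goal; tools_installed is what was actually purchased.
    """
    remaining = tools_installed - tools_down
    return remaining / tools_needed

# The 7.3-tool area: losing one of 8 tools still leaves ~96% of required capacity.
print(round(capacity_after_downtime(7.3, 8, 1), 2))  # 0.96

# The 2.9-tool area: losing one of 3 tools drops you to ~69% -- a world of hurt.
print(round(capacity_after_downtime(2.9, 3, 1), 2))  # 0.69
```

The point the numbers make: the smaller the tool group, the bigger the fractional hit from a single hard down.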

And then to address your other question: the wafers basically start piling up in the queue behind that process step. What is significant about this is that when you finally do get the tool back up, you process a bunch of those lots at once and effectively have a 'bubble' moving through the fab, which impacts areas downstream as well until you finally get that bubble out of the line.
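How long that bubble lingers follows from simple queueing arithmetic (a toy sketch; the arrival and run rates below are made up for illustration):

```python
def hours_to_clear_backlog(arrival_per_hr: float, process_per_hr: float, down_hrs: float) -> float:
    """Hours after repair until the queue 'bubble' behind a tool is gone.

    Lots keep arriving at a steady rate while the tool is down; once it is
    back up, the backlog drains at the net rate (capacity minus arrivals).
    """
    assert process_per_hr > arrival_per_hr, "queue never drains otherwise"
    backlog = arrival_per_hr * down_hrs           # lots piled up during the down
    drain_rate = process_per_hr - arrival_per_hr  # net lots cleared per hour
    return backlog / drain_rate

# 6 lots/hr arriving, 8 lots/hr capacity, 12-hour hard down:
# the bubble takes 36 more hours to work out of that step.
print(hours_to_clear_backlog(6, 8, 12))  # 36.0
```

Note the asymmetry: a 12-hour down here costs three times that long to fully recover, which is why constraint areas hurt so much.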

The site experts are generally oncall 7x24 (some may work normal 5x8 shifts or the 3-day/4-day 12-hour shifts). In 'healthy' areas the oncall responsibility is rotated around. Again, this is the second line of defense, generally speaking, behind the FSEs (field service engineers) who are working in the fab (for most areas 7x24).

05 May 2008 04:16


InTheKnow said...
JumpingJack said...
LEAN -- what the heck is this?

LEAN is the latest corporate buzzword for a methodology to improve process flow. It is the basis for the book "Lean Thinking : Banish Waste and Create Wealth in Your Corporation".

Like most other systems of this sort I've seen, it seems to go too far. I can easily see this becoming part of a bureaucratic mindset that requires slavish adherence to a system whether it is applicable or not. It originated out of the Toyota Production and Management System. You can read more about it here.

05 May 2008 04:16


Ho Ho said...
I want this:
"The most amazing thing is that this machine costs only as much as a better standard PC, but has 24 cores that each run at 2.4 GHz, a total of 48GB of RAM, and needs just 400W of power!! This means that it hardly gets warm, and makes less noise than my desktop PC."

05 May 2008 06:33


Anonymous said...
"I tell you what - if I were Ballmer right now... I'd threaten to walk away and say 'wow, if he can get such great performance, perhaps we shouldn't take the company over'... and then when the stock crashes to the pre-takeover level, and crashes again when Yang misses his ridiculous Q2 numbers, Ballmer should step back in, lowball an offer and say 'how do you like me now?!?'" (comment Apr 23)

Fast forward to today - Microsoft walks away from Yahoo deal after the Yang-er thinks his company should have fetched $37/share.

So now instead of getting $31/share (actually MSFT increased it to $33 during negotiations), Yang will have to explain to his shareholders why the stock price is about to plummet to the low 20's. He had conveniently not set a stockholders meeting (so as not to have to answer to his stockholders?)... but I think he is required to do so or face serious repercussions (I think you can eventually get de-listed). Expect calls for Yang's resignation, calls for election of a new board of directors and a potential avalanche of investor lawsuits.

Expect Ballmer to come back in another quarter or two with an offer in the high 20's (though I cannot predict whether he will say 'how do you like them apples?').

Looks like Jerry Yang just pulled a Hector*

* Hector (from Webster's online)

HECTOR
Function: noun
Date: circa 2006

1: one who screws up
2: botch, blunder
3: one who screws stockholders due to poor decision making and an overly active ego.

Also can be used as a verb, as in he really Hector'd that deal...

I have a new respect for Ballmer on this decision (though I'm not sure where MSFT's SW/OS division is headed).

05 May 2008 08:19


SPARKS said...
“Sparks, I'll take a stab at your questions:”

Thank you, excellent, that puts a lot of the pieces together, especially the single-flow (?) vs. batch operations discussed above.

“Herein lies the beauty of Intel's copy exactly approach - when you ship the lot to another fab, you know that tool is set up identically to the fab you are shipping from and will get identical processing.”

Whoa, great observation, one that didn’t occur to me, anyway. This is seldom, if ever, mentioned in the ‘pros and cons’ of the ‘copy exactly’ debate, probably because a lot of people wouldn’t get it anyway. Nonsense; with this kind of flexibility, personally, I think it would be stupid to take any other approach. Standardization of components has been the cornerstone of HV industrial production for over a century.

I saw the test structures that are sacrificed when the wafer is cut here. I’m sure there are proprietary methods to ensure quality control at the very early stages of production. If there aren’t, there ought to be. Additionally, I’ll bet there’s a fixed dollar amount, determined by the corporate bean counters, cost-wise, to get a single wafer through the snake. Going down the entire line ain’t cheap, and a wafer is a terrible thing to waste.

http://www.tf.uni-kiel.de/matwis/amat/elmat_en/index.html

(Great site, by the way.)

This led me to a few more links where I found pictures of the vertical furnaces that heat the wafers in vertical batches. They looked huge, complicated, and expensive. I figured on the redundancy aspect to keep things moving while repairs are made on the tools that go down. Some of the HV units queued up a number of FOUPs as part of their specifications, as opposed to the lower-volume R+D units. Given Dementia’s single-flow argument and AMD's current execution, it may be to AMD’s advantage to stay small.

“The site experts are generally oncall 7x24 (some may work normal 5x8 shifts or the 3-day/4-day 12-hour shifts). In 'healthy' areas the oncall responsibility is rotated around. Again, this is the second line of defense, generally speaking, behind the FSEs (field service engineers) who are working in the fab (for most areas 7x24)”

I can see (and I know) that this is a nice position to have, especially if you’re a really good troubleshooter who has an intimate working knowledge of the equipment’s guts. I’d bet my shares in INTC these guys are “crackerjacks”, and the outstanding guys are really in demand. There’s lots of glory to be had when things are up and running quickly. Pressure, adulation, heroics, instant reward; for me this is an enviable position. I love glory; that’s me, guts and glory.


“Well the managers pester the engineers or ops people who then pester the technicians who are working on the tool.”

I was right; they are poor bastards. Obviously, Silicon rolls down hill, too.

Thanks again, (sigh) maybe in another life.

Very enlightening.

SPARKS

05 May 2008 13:29


SPARKS said...
"Fast forward to today - Microsoft walks away from"

You said it. That was the first thing I thought of when I read the announcement. Time to DUMP!!!! Yahoo.

SPARKS

05 May 2008 13:44


Tonus said...
ho ho, that helmer site is awesome.

05 May 2008 13:46


Comment deleted
This post has been removed by the author.

05 May 2008 16:58


Giant said...
This is interesting.

Although I haven’t had the QX very long, nor have I explored its absolute limits, I have found the same VERY comfortable point at 4.06 GHz.

Yes, around 4GHz is perfect for the 45nm CPUs, both dual and quad (aside from the lower end quads that wouldn't hit 4GHz due to a FSB limit). Obviously the QX9650 and QX9770 are premium parts and are binned accordingly, so the power use is low and not all that much higher than my E8400 at ~4GHz. With a TRUE 120 equipped with a Scythe S-Flex fan the temperature under a full load has yet to exceed 50C.



I have, however, found the limit for air cooling:

From CPUz:

9.5 x 450 = 4.275 GHz @ 1.408V, 1800 FSB, DDR3 @ 1800, 7-7-7-21 2T, 2.0V

Sandra:

Processor Arithmetic= ALU 66835 MPS, SSE3 = 61753
Processor Multimedia= 549144 it/s, FP=267068
Memory bandwidth= 9576 m/sec!
(Now it’s clear why I waited for X48)
Cache + Memory Combined=65.47 G/s
32K blocks= 407 G/sec!
Latency=56ns
SuperPi 1M= 10 seconds!!!!!!

Obviously, both chips run cool (yours and mine) and there’s A LOT of headroom (a full GIG!), basically, on the first production run. Binning these chips (?), man, with the way these things run, it’s a shame to deliberately lock in anything below 2.6. It looks like INTC doesn’t have very much to throw away.


I suspected months ago that INTC sandbagged these chips when Barcelona fell on its ass. They were ready for Barcelona even if the son-of-a-bitch had comfortably hit the 3 GHz+ speeds they were howling about for a year. It simply had no chance, ever, against Penryn, right out of the gate. Look at that Pheromone at 3.5 Gig, a cherry-picked slab. The QX9770 pees all over it at well below stock speeds!

The Phenom was cherry-picked, and wasn't even stable at that speed. Eventually he settled for 3.4GHz at 1.58V! That would not be achievable with air cooling. Contrast that with you and me both running these hafnium-infused monsters at 4GHz+ on air! In terms of what Intel could release now, assuming a 1600FSB, I predict that 3.4 and 3.6GHz for quads would be possible, and up to 3.8GHz for dual core.

I don’t give a flying hoot what anybody says. INTC woke up and hit the floor running. If they don’t believe it, you and I have the evidence in hand to prove it.

E8400 @ 4 Gig
QX9770 @ 4 Gig

WITH NO MEANINGFUL DIFFERENCE IN THERMALS AT THESE SPEEDS!

That's right! The power consumption on this puppy is incredibly low at stock. Even overclocked to 4GHz the power consumption of the CPU is only around 100W; that's easily cooled with high-end air. Obviously, at 4.5GHz and beyond we're pushing the CPU to its limits, so the power consumption is too high for 24/7 use without water IMO.

BTW: With all these runs, I haven’t had a lockup, boot failure, BSOD, or a failed Windows load, yet!


I've had one lockup; that was when I tried to reach 4.5GHz on the P5B Deluxe. The northbridge was just running too hot for a 2GHz FSB. As I described in an earlier posting here, I attached a 40mm SilenX fan to it and that reduced the temperature considerably; I had no problems after that. The 790i has been a SUPERB board to me, I've had no issues; none at all.


I’m going to back this gem down to 4 Gig and cruise around nice and comfortable 24/7, all on air.

What sort of bus speed are you running there, and what speed are you running the DDR3 at? As I've mentioned before, 1780MHz works perfectly for me. A beautiful 4GHz clock speed, 1780MHz FSB and dual channels of DDR3 at 1780MHz apiece!

-GIANT

05 May 2008 17:03


SPARKS said...
GIANT-

If there's any doubt about the consistent quality of these chips, the entire lineup, their speeds, and the way they overclock, this should dispel it without any reservation.

E8300
E8400
And the currently on sale mega bullet,
E8500 @ 3.16 (I was tempted to buy one of these sweeties for shits and giggles)

They all clock, and clock well! Really, think about it: INTC’s standard on binning these things must be pretty high before they lock in those multipliers. It makes you wonder if the relative price structures are based on feature sets, as opposed to speed bins. INTC is only competing with itself here, especially with a dual core solution.

When INTC revealed 45nm hafnium transistor technology as the biggest improvement in twenty years, generally, the press reception varied from a yawn to a beer fart. What the knuckleheads fail to realize is that this process will be the foundation for the next generation architecture with an IMC pumped in. Imagine these chips on steroids? Man, the thrill is back, big time, and the hits just keep on coming.

As far as my setup is concerned, overclocking this GEM was painless and a no brainer.

From CPUz :

9.0 X 450= 4050 MHz

Vcore 1.3975

On this board you set the memory parameter to “DDR3-1800 O.C.”
You set the option to allow the “memory strap to FSB” and you’re done!

450 quad-pumped gives you an 1800 FSB, which the memory matches at DDR3-1800; again, this is all factored in by the MOBO automatically. Everything is running synchronous, just like I like it.
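The arithmetic behind this synchronous setup can be sketched quickly (a toy check using the numbers from this post, nothing official):

```python
# Core clock = multiplier x host bus clock; the FSB is quad-pumped
# (4 transfers per clock) while DDR3 is double data rate (2 per memory clock).
bus_mhz = 450          # host clock set in the BIOS
multiplier = 9.0       # locked multiplier on this chip

core_mhz = multiplier * bus_mhz   # CPU core clock
fsb_mts = 4 * bus_mhz             # quad-pumped FSB transfer rate
ddr3_mts = 2 * 900                # DDR3-1800: 900 MHz memory clock, doubled

print(core_mhz)   # 4050.0 -- the 4.05 GHz overclock shown in CPU-Z
print(fsb_mts)    # 1800   -- matches DDR3-1800, hence 'synchronous'
assert fsb_mts == ddr3_mts
```

The takeaway is that picking a 450 MHz host clock makes the FSB transfer rate and the memory data rate line up exactly, which is what "running synchronous" means here.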

So much for the idiots who complain about the high prices for premium MOBOs. F**k ‘em, ya get what you pay for, I say, in spades.

Besides, I used to spend a lot more money on things that could have gotten you thrown in jail, and that includes booze! That said, what’s an extra 150-200 bucks? That used to be one night out in a club, easy, back when 200 bucks meant something!

The SuperTalent ‘Project X’ DDR3-1800 memory gamble I took for $379 paid off huge. At these speeds it’s cold, not warm, not cool, just drop-dead cold. (After the CPU cold-water solution, I may purchase another set. However, stability concerns surface when running 4 discrete DIMMs at high speeds, as opposed to two 2 GIG modules.)

I set the timings manually at the manufacturer’s recommended 7-7-7-21.
The voltage was manually set at the recommended 2.0V.

Any higher speeds will necessitate looser timings, 8-8-8-24, give or take on any individual parameter, stability dependent, when locking in the “DDR3-2000 O.C.” option in the BIOS.

I’ll trade a few nanoseconds in latency for the looser timings and higher speeds. I haven’t gone there ---- yet.

All said, this 4 GIG synchronous solution was basically a no-brainer. And to think, last year I was plodding along at 1066 FSB. Now, that’s what I call leaping ahead.

SPARKS

05 May 2008 19:31


Anonymous said...
"Additionally, I’ll bet there’s a fixed dollar amount, determined by the corporate bean counters, cost wise to get a single wafer through the snake."

I've been involved with some cost modeling, and while there are generally specific cost targets (per wafer processed), I've come to the conclusion that it is impossible to measure accurately. There are simply too many fixed costs (building, equipment) and costs shared by the entire fab (fabwide facility costs, metrology, automation, service, headcount...) that are as significant, if not more so, than the true variable costs (the actual silicon substrate, chemicals and gases, waste, etc...). So the best you can do is have a modeled/average number.
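The shape of such a model can be sketched in a few lines; every dollar figure below is made up purely for illustration, and the point is structural, not the specific numbers:

```python
def modeled_cost_per_wafer(fixed_annual: float, shared_annual: float,
                           variable_per_wafer: float, wafers_per_year: int) -> float:
    """Average cost per wafer: allocated fixed/shared overhead plus true variable cost."""
    allocated = (fixed_annual + shared_annual) / wafers_per_year
    return allocated + variable_per_wafer

cost = modeled_cost_per_wafer(
    fixed_annual=2.0e9,        # building + equipment depreciation (illustrative)
    shared_annual=5.0e8,       # facilities, metrology, automation, headcount (illustrative)
    variable_per_wafer=500.0,  # substrate, chemicals, gases, waste (illustrative)
    wafers_per_year=1_000_000,
)
print(round(cost, 2))  # 3000.0 -- dominated by allocated overhead, not variable cost
```

With these toy inputs, five-sixths of the "per wafer cost" is allocated overhead, which is exactly why the result is only ever a modeled average rather than a measurement.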

As for the VDF (vertical diffusion furnaces), surprisingly they are no more expensive than an average piece of fab process equipment.

Finally, copy exactly has its downsides too. Once you enter volume manufacturing it pretty much discourages changes, since you now have to proliferate any change across a huge fleet of tools. Though some would argue that is exactly what you want when you enter the manufacturing stage - minimal risk, and only insert a change if there is a huge ROI. For engineers (and suppliers who want to implement their latest and greatest changes) it is disheartening, but one minor screwup in implementing a change quickly kills the benefit the change had in the first place.

05 May 2008 20:42


Anonymous said...
One add:

"As for the VDF (vertical diffusion furnaces), surprisingly they are no more expensive than an average piece of fab process equipment."

And this is the fundamental problem with the whole single-wafer processing move. Sure, single-wafer processing has some cycle time advantages, but when you consider that the furnaces (or, likewise, the wet etch benches) cost as much as single wafer equipment but may have as much as 2-5X the output capability per capital dollar spent, what would you do?
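The capital-efficiency comparison boils down to one ratio; the tool prices and run rates below are invented for illustration, only the 2-5X conclusion comes from the comment above:

```python
def wafers_per_hour_per_dollar(wafers_per_hour: float, tool_cost: float) -> float:
    """Throughput per capital dollar: the metric that favors batch tools."""
    return wafers_per_hour / tool_cost

# Same capital cost assumed for both tools (illustrative $3M each).
batch = wafers_per_hour_per_dollar(wafers_per_hour=100, tool_cost=3.0e6)   # big batches, long runs
single = wafers_per_hour_per_dollar(wafers_per_hour=30, tool_cost=3.0e6)   # one wafer at a time

print(round(batch / single, 1))  # 3.3 -- inside the 2-5X range quoted above
```

The cycle-time advantage of single-wafer tools has to be worth that multiple in capital before the switch pays off, which is the trade-off being posed.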

05 May 2008 20:49


SPARKS said...
"I've come to the conclusion that it is impossible to measure accurately"

Coming from you (?!?!), that’s saying something! Kudos for even giving it a shot! I’ll bet it took months.

"Though some would argue that is exactly what you want when you enter the manufacturing stage - minimal risk."

"cost as much as single wafer equipment but may have as much as 2-5X the output capability per capital dollar spent, what would you do?"

Factoring in these two comments, I’ll answer your question; I'll tell you exactly what I did and what I am going to do.

1) Buy a $1500 behemoth-----done!
2) Buy some more INTC------ this week!


I love INTC’s conservative approach, “minimal risk”. I’ve seen too many loose cannons screw up too many times by reinventing the wheel midstream.

SPARKS

05 May 2008 21:33


Anonymous said...
This "LEAN" is old news. I work at an Intel fab and we've been using this for several years already. We call it something different, but it's basically the same thing that Toyota started with "Kaizen" a while back. I personally think that on paper the whole concept looks great, but in real-world practice it is not that practical. It just makes management think they have better control of the floor.
