9.21.2007

Reading Graphs 101

It should be obvious by now that a lot of folks in the semiconductor industry visit this blog, and wild speculative comments about chip production never get a free pass. I get called out quite a bit myself, but I'm always grateful for the learning experience. For some people, though, it's like banging your head against the wall.

Take, for instance, making claims about things beyond one's expertise. There's a big difference between making a claim and stating an assumption. The 1st mistake is making claims about a company's future process. Nobody on this planet except an Intel or AMD approved spokesperson will disclose or make a statement about future yields. You get fired for breaking that rule, and if someone does break it, you normally won't get proof anyway. So if you hear an outsider making claims about future yields, save the grain of salt and dismiss the claim outright. The 2nd mistake is presenting the claim as fact and using it for one's argument. The 3rd mistake is to keep on insisting even when presented with counter-evidence.

Take for example when Scientia said:
"Intel's yields, in contrast, on its brand new 45nm process will take a couple of quarters to reach maturity... Intel will improve its 45nm process and this should pay off by Q2. The process will be mature when Nehalem launches in Q4 08. ".

InTheKnow countered with this evidence (from Mark Bohr, Intel Senior Fellow). But if you thought the discussion should have ended there, you're wrong.

Scientia's comments about this graph:
"No. You are reading the graph wrong. What it actually shows is that 90nm had worse initial defect density than 130nm but about the same improvement rate. The chart further shows that there was no improvement in initial defect density with 65nm but the rate of improvement got worse. It further shows that 45nm is close to 65nm and worse than 90nm. Again, this matches with what I said."

I'm not sure whether Scientia is deliberately misreading this graph, but this is the worst graph interpretation I've come across. The way I see it, sticking to the main point, each node reaches maturity quicker than the last, so 45nm matures before 2008 - right smack in the middle of the 45nm early ramp.

Here's InTheKnow's more detailed and accurate interpretation.
1) Intel required ~24 months to reach the same level on 130 nm that they eventually reached on 90 nm.
2) Intel reached the flat portion of the plot in ~22 months at 90 nm. This despite several flat spots on the graph that showed significant yield hurdles had been encountered. I would expect this since the 90nm transition also overlapped with the 12 inch transition somewhat.
3) On 65 nm Intel matched 90nm yields in ~19 months. Yields then continued to improve beyond the 90 nm levels.
4) 90nm launched around the end of December '04. Intel had reached the flat part of the graph ~2 months prior to this.
5) 65nm launched in Jan '06. Intel matched the 90nm yield levels ~3-4 months prior to this.
6) 45nm is now at about the 18 month point on the plot. If they are matching 65nm yields then they should be very close to the 90nm yield level now. The launch is believed to be 2 months away and they should be well into the mature portion of the yield graph by then.

46 comments:

Anonymous said...

No, No, No, you guys don't know what you're talking about. That's not a graph of Intel's yields and ramps!

Being the chip expert that I am, I can tell you conclusively that those are graphs of AMD SHARE PRICE holdings of various financial institutions!

See DOC, GURU, I understand everything now!

SPARKS

Anonymous said...

what an idiot.

booyah AMD sucks Intel rules

Anonymous said...

Good article - one nitpicking point.

Intel did the 200mm to 300mm transition on the "860 / 1260" process (0.13um); 90nm was purely 300mm and the reason for the steps in the yield graph is not 300mm specific (or related). If you look at what Intel introduced on 90nm, you might be able to guess some of the problems they had (especially if you look at when their competitors adopted some of those changes).

That said, good summary. Despite Dementia's claims to the contrary, 45nm is healthy, and the fact that there will be product out <2 years after the first 65nm product (Jan 06) is a decent indicator of the overall health of the process.

Funny Dementia makes a big deal of AMD's PAPER CLAIM that they will switch from 65nm to 45nm in under 2 years, but no mention that Intel has DONE IT (of course assuming they aren't lying about the Nov 12 launch). Oh and by the way Intel's 45nm will include highK/metal gate on top of that! (which given the disruptiveness of this technology should have made it harder to hold to a 2 year cycle).

Bottom line - Scientia has no background, formal or informal, in Si process technology or manufacturing - so one would be a fool to believe a single analysis he does unless it is well supported with specific facts. It does make for good entertainment though.

Intel Fanboi said...

At this point the only function Scientia's blog serves is to give AMD fanbois something to hold on to. He is fueled by the desperate belief that by writing something down, it will become true. There is really no point in arguing with him since he filters all posts. I don't even bother to read it anymore.

InTheKnow said...

Intel did the 200mm to 300mm on "860 / 1260" process (0.13um); 90nm was purely 300mm and the reason for the steps in the yield graph is not 300mm specific (or related).

It was my understanding that 1260 was never really a commercial process and the development team did not really have much flexibility to really "develop" the 12" toolsets. I may well be mistaken, but that is the impression I had.

I can certainly see the point that strained Si (introduced at 90nm), among other things, could account for the delays in development, though.

Anonymous said...

"It was my understanding that 1260 was never really a commercial process and the development team did not really have much flexibility to really "develop" the 12" toolsets."

Yes and no... it was small volumes by Intel standards and the equipment was pretty much slammed in. Keep in mind Intel was the first into 300mm manufacturing and thus was responsible for working with the equip suppliers to make a lot of the equipment manufacturing worthy - but most of this was sorted out prior to 1262 (90nm).

Strain was definitely an issue, I believe silicide was as well. I'm fairly sure that the 300mm transition was not much of a factor from a yield perspective (it may have been a ramp/conversion issue, but that is not factored into Intel's yield/defect density plots).

Not the best reference, but one picture of Ni silicide issue (page 16)
http://www.asminternational.org/images2/istfa2005p.pdf

While I have no inside information as to AMD's 65nm issues - their 65nm process uses embedded strained Silicon and Ni silicide for the first time. Based on my background I can easily see the embedded SiGe process causing yield and clockspeed issues and NiSi causing yield issues.

Embedded strained Si is also fairly pattern sensitive, and depending on pattern densities, the amount of dummification and the general pattern, the process may need to be tuned a bit from product to product. This could potentially mean good/respectable process performance on one product, but different performance/issues on another. So if, say, K8 is performing fine, the process might not behave the same on K10... or it may take longer to get similar performance (just a hypothetical).

The stuff above is another example of the great fallacy of Scientia's sole use of launch schedule to compare Intel and AMD process technology (and to conclude AMD is only a year behind) - Intel implemented both of these on 90nm, AMD/IBM on 65nm. You see the same thing with HighK/metal gate (Intel -45nm, AMD late 45nm or more likely 32nm). When you factor in performance, ACTUAL scaling (as measured by SRAM density), schedule, and yield, Intel is still at least 2 years ahead.

Throw in manufacturing cost - things like Intel being able to run their process with 1 or 2 FEWER metal layers than AMD for a given tech node, and Intel's use of bare Si as opposed to a more expensive SOI substrate - and you start to see how naive it is to simply look at launch schedules to compare who's ahead/behind in process technology.

Finally - you'll notice you NEVER see AMD yield plots with actual time on the x-axis; you see normalized things like time after intro to manufacturing. This is a VERY NEBULOUS time indicator and thus AMD can play with those curves. I believe they also do some normalization to production volume, which allows you to stretch or condense those curves in the x-direction (another totally bogus/non-scientific way of presenting data). It would be rather simple if AMD just used REAL TIME as the bottom axis - I suspect the graphs that people have seen from AMD would look a WHOLE LOT DIFFERENT if plotted in a similar fashion to Intel's.

Anonymous said...



Being the chip expert that I am, I can tell you conclusively that those are graphs of AMD SHARE PRICE holdings of various financial institutions!


LOL! I've got half a mouthful of orange juice all over my desk, keyboard and monitor because I was laughing so hard from reading that!

I think Hector Ruiz, just like the guys who came up with the idea of a native quad core at 65nm, is a bit naive!

In a recent Interview he said this:

http://www.businessweek.com/globalbiz/content/sep2007/gb20070921_920191.htm

We have demonstrated over the last five years that every major innovation has come out of AMD

You mean the major stuff like Quad core, shared cache, 45nm high-k process etc. all came from AMD first?

He must be referring to stuff like the awesome AMD QUAD FX platform. Eliminating the need for heating in homes!

Or the awesome 'innovation' in producing a native quad core at 65nm and realising that your yields are abysmal at best. Or the greatest idea yet: Disabling a core on those bad die(s) and announcing these brilliantly 'innovative' tri-core CPUs that everyone wants!

Of course, the market demand for these brilliantly innovative tri-core CPUs will dry up once AMD has their yields sorted out!

Unknown said...

The above post was from me. I just forgot to login. Oops.

Anonymous said...

FUCK DAMMIT

No one cares about Intel achieving mature yields in Q2/Q307 as opposed to Scientia's dumb analysis of Q408?

Anonymous said...

At this point the only function Scientia's blog serves is to give AMD fanbois something to hold on to. He is fueled by the desperate belief that by writing something down, that it will become true.

That's the impression I get, that his is the "wishful thinking" blog. It wouldn't be so bad if not for his constant heavy-handed moderation of the comments area.

enumae said...

Here is a better image for your defect density graph from IDF Webcast (Paul S. Otellini - slide 34).

InTheKnow said...

Thanks for providing the link to a better graph. Looking at this plot I can see a couple of interesting things.

First, 90nm saw some significant yield improvements from 2H05 to the end of 1H06, about 18 months after introducing the process. Second, the 65nm yields improvements match the 90nm yields where they overlap.

I don't see the first item as proof that Intel introduced a process to production at immature yields and ran there for 18 months before fixing it. Instead, I think the most probable explanation is that Intel needed to run the 90nm process under stable conditions for a period of time and understand the process. With that understanding in place Intel was able to apply their learning to make substantial yield improvements.

That brings me to the second observation, the match between 90nm and 65nm yield improvements. The match would indicate to me that Intel has been able to apply process learning from one technology (90nm) and pass it forward to the next technology (65nm). Sharing the 90nm learnings allowed 65nm to be introduced at record yield levels. Those yields have continued to improve since introduction and the process continues to be better understood.

Why am I rehashing all this? Because it supports the claim I'm about to make; that the yield learning from 65nm is being passed forward to 45nm resulting in a faster yield improvement than was seen on 65nm. Inspection of the graph shows the 45nm slope to be greater than the 65nm slope and I believe the reason for the higher slope to be due to that yield learning being passed forward. 45nm has already surpassed the yield levels that were achieved when 90nm was introduced to the market. I expect it to reach the same or better yield levels than 65nm when 45nm products are introduced to the market.

That Intel could do this while chucking 50 years of semi-conductor technology overboard (the SiO2 gate) is nothing short of mind-boggling.

Of course another interpretation of the data would be that Intel achieved some key learnings in 65nm development and ramp that were passed back to 90nm production. That interpretation would not explain the increased rate of yield improvement seen on 45nm, however.

Anonymous said...

"The match would indicate to me that Intel has been able to apply process learning from one technology (90nm)and pass it forward to the next technology(65nm)."

This is a fair theory - most people forget ~70% of the process equipment is re-used from generation to generation, and while recipes may not be matched between generations a lot of the equipment learnings can be passed forward or backward (as well as integrated defect issues). I would speculate that this was the case of additional yield learnings on 65nm being passed backward onto 90nm, as generally when you are that far into manufacturing on 90nm you will not be doing a lot of yield development / improvements. Once you get 18-24 months into production you are talking about a pure output, cost and risk-averse environment. While yield improvements obviously cut costs they can be risky - my guess is it was proven out on 65nm and ported backward.

The high K implementation is nothing short of astonishing: keep in mind SiO2 is GROWN (meaning the Si surface is oxidized) whereas the High K film is DEPOSITED. Depositions are inherently dirtier - you deposit film on the wafer but also in a lot of other places in the equipment, which can then build up and eventually flake. The SiO2 growth process does not have this issue as it needs Si (thus films do not grow except on the wafer). Obviously when you're talking about films on the order of 12A-20A (3-6 atoms thick!), any particles can become a HUGE YIELD issue. It'll be interesting to see how IBM/AMD fare with this in manufacturing.

It is rather amusing to listen to Scientia's take on this. What is funny is I think he honestly believes he is right - he is taking random observations and trying to make a model out of a very scientifically complex concept. What is rather remarkable, though, is that when he is presented with some background and data he just ignores and dismisses it because it doesn't fit his preconceived notions on the topic. While I can forgive ignorance and will not mock him for that, the lack of ability to assimilate new information is hilarious and DESERVES TO BE MOCKED!

"That interpretation would not explain the increased rate of yield improvement seen on 45nm, however."

If you look at the graphs overall the rate is pretty much the same between 65nm and 45nm - if you look at the top there is ~2 year separation and at the bottom there is also ~2 year separation (which is also about the same as 90nm rate too)

Anonymous said...

GURU, I know that's you.

Suggestion: Why not take the name 'GURU', so that we may have an easier time ID'ing you, aside from your expertise, of course.

Now down to brass tacks.

During my association with this site, all of you, In The Know inclusive, mention chip tooling. I haven't a clue what these high-tech ice cream machine / easy-bake ovens do. Are they single process? Do they spray chemicals? Do they heat? Do they vary pressure/vacuum? Are they upgradeable, and to what point are they useful until they need replacing? How big are they? How many functions do they have - do they mask one layer, deposit one layer, etch one layer, then move on to the next tool? Are they linked together in some networked chain and share info?

In short, what is this voodoo science? Got any links? Who makes them? Is the address on this planet or is it from 'them' ? What the hell goes into a 4.5 billion dollar ceramic tile factory???

BTW: GIANT, Sorry about the O.J./keyboard thing, but the laugh was worth it.

SPARKS

Roborat, Ph.D said...

Sparks,
Some basic links to get you started.

HTMAC

Learning the fun way

Anonymous said...

Thanks Doc, a bit more complicated than my homebrew tube amps.

Thanks Again

SPARKS

InTheKnow said...

Sparks, answering all your questions in one shot would take a book and strain my meager store of knowledge beyond the breaking point.

For the sake of simplicity, let's start with a general overview of processing tools.

Most equipment sets are single wafer processing tools. That is, they process one wafer from the carrier at a time and then move on to the next. Many tools have several chambers, so more than one wafer is loaded on the tool at any given time to increase throughput, but each chamber is only occupied by one wafer at any time.

Notable exceptions to the above are wet benches (basically large tanks of various acids) and diffusion furnaces. These tools run batches of several lots at a time due to fairly long process steps. Increasing the number of wafers that run in the tool at one time compensates for the long process time giving an average process time per wafer comparable to the single wafer tools.

Ideally, all tools would be single wafer process tools because this reduces the risk of scrapping large numbers of wafers in one event (read: the loss of big bucks). Single wafer processing has long been the holy grail of the equipment developers, but I don't know of anyone who can say they are really there yet for the whole fab.
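To make the batch-vs-single-wafer trade-off concrete, here's a minimal sketch of the throughput math InTheKnow describes. All of the process times, chamber counts and batch sizes below are made-up illustrative numbers, not real tool specs.

```python
# Rough wafers-per-hour comparison between a multi-chamber single-wafer tool
# and a batch tool (e.g. a diffusion furnace). The long batch process time is
# offset by running many wafers at once, so the average time per wafer can end
# up comparable. All numbers are hypothetical.

def single_wafer_wph(process_min, handling_min, chambers):
    """Wafers per hour for a multi-chamber single-wafer tool."""
    per_wafer = process_min + handling_min      # time a chamber is tied up per wafer
    return 60.0 / per_wafer * chambers

def batch_wph(process_min, load_unload_min, batch_size):
    """Wafers per hour for a batch tool running 'batch_size' wafers per run."""
    per_batch = process_min + load_unload_min   # one long run covers the whole batch
    return 60.0 / per_batch * batch_size

if __name__ == "__main__":
    # Hypothetical: a 3 min/wafer step on a 4-chamber tool vs. a 2-hour furnace
    # run on a 150-wafer (six 25-wafer lots) batch with 30 min of load/unload.
    print("single-wafer tool: %.0f wafers/hour" % single_wafer_wph(3, 1, 4))
    print("batch furnace:     %.0f wafers/hour" % batch_wph(120, 30, 150))
```

With those made-up numbers both work out to roughly 60 wafers/hour, which is the point: the batch size compensates for the long process time.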

InTheKnow said...

I would speculate that this was the case of additional yield learnings on 65nm being passed backward onto 90nm, as generally when you are that far into manufacturing on 90nm you will not be doing a lot of yield development / improvements. Once you get 18-24 months into production you are talking about a pure output, cost and risk-averse environment. While yield improvements obviously cut costs they can be risky - my guess is it was proven out on 65nm and ported backward.

A very good point.

I know that Intel has been making a big to-do of their efficiency improvements of late. To see the kind of cost savings Intel has been claiming you have to do a lot more than just cut heads. I wonder if they haven't chosen to really roll the dice and try to drive improvements across the board.

Anonymous said...

Regarding the Yield Curve analysis.... Scientia is flat out wrong. Period. End of Story :) ...

The yield curve is on a 2-year cycle. The top of the yield curve is the start of development; as development progresses, yield learning is achieved and the curve declines rapidly. At the point it turns flat it is 'mature'. The fact that the defect densities match up node for node shows that Intel matches or exceeds the yields of the prior node, and lining up release time with each curve shows Intel works down to the best defect densities before launching product.

This is not a hard thing to see, and there are in fact updated yield graphs like this on the net showing 45nm rapidly approaching the 'mature' limit just as all the other nodes have.

Yield learning and management is an iterative process...
For more yield learning info, it does not take much effort to find good resources
http://smithsonianchips.si.edu/ice/cd/CEICM/SECTION3.pdf

Intel Fanboi said...

I wrote:
I don't even bother to read it anymore.


I got bored and decided to see what Scientia is up to. I was shocked to see the state of his blog. He doesn't even let people post, he just responds to the post himself!?!?! There can't be any more solid proof that your arguments have no merit than that. I suggest to you all to not post there anymore. Let him live in his fantasy world.

And his #1 ally is Abinstein. That should make him reconsider his views.

Anonymous said...

The yield discussion, to put it rather bluntly, is beyond 99% of the bloggers' comprehension on these boards. Even Intel's plots, which look very straightforward, are rather complex:
1. What exactly is defect density? (What is classified as a defect? If you have 2 defects on the same die, is that 2 or 1? How are clustered ("area") vs non-clustered defects handled?)
2. How does Intel handle the generation-to-generation differences - what is a defect or yield issue on 45nm might not be one on 90nm?
3. What die or test vehicles are used - is it normalized or constant between generations? Some defect types may be specific or more sensitive to a design or pattern...
4. How are areas of a chip that have some built-in redundancy handled?

Now look at AMD's, quite frankly, CRAZY yield plot, which doesn't even plot time on the x-axis but production volume - and DOESN'T EVEN INCLUDE DEVELOPMENT! (Is that chip volume? Wafer volume? Does it correct for 200mm vs 300mm chip counts? 90nm vs 65nm die sizes? Is it all product - desktop/mobile/server - and if so, what if the yield is sensitive to a specific product and the volume mix varies over time? etc.)

The one thing about Intel's plots is that at least they seem somewhat consistent from line to line, and absolute time is a better metric than volume when ramp rate isn't given. AMD is essentially plotting the part of the line on the Intel graph which is at or near the bottom of the graph - it says nothing of how long it took to get there.

For a make-believe example, suppose 45nm production was delayed a year to fix some yield issues - you WOULD NEVER SEE THIS on AMD's graph as they are only plotting it after production starts. Well, that example is not realistic as we would know about delays of that order, right? Sure, but how about some of the incremental CTI steps that AMD does? You would never see if a variant of 65nm had a yield issue or was pushed out, because AMD would simply keep production on an older rev of the process (and maybe get slower K10's? or some tri-cores? Just spitballing here...)

Yield improvement is generally driven by information turns, or more basically testing a new lot (or a smaller portion of the process flow) with whatever process or design change, doing this across tens or hundreds of process steps, and then integrating all of the changes together. If you look closely at Intel's graphs there are basically 4-6 break points on each line. Intel basically bundles all of the changes once per quarter (or so) and then validates them (within each quarter there are THOUSANDS of individual experiments testing these changes to identify what should be included in the major revisions). This is why the time component is more important than the volume component (especially at the early stages of development). If it takes a lot 10-12 weeks (or a control loop 3-4 weeks in some cases) to move through the fab, it really doesn't matter what volume you are running as long as you have a statistically significant sample size.

This is also where Intel's copy exactly philosophy really comes into play - if you validate that change on one set of tools, you can quickly populate it across all tools with minimal additional testing. If tools are tweaked or set up differently you now have to validate the change on all tools that are different (or rely on APM or some other algorithm to predict the "offsets" from tool to tool with respect to the new change).

Scientia (or was it Sharikou?) claims that copy exactly makes it slower to implement a change over APM - it is exactly the opposite - if you know every tool is the same, once a change is validated at one fab it can be quickly implemented at all sites with minimal risk.

Suppose for example I was testing a change that lowered an ILD deposition step by 10 degrees to reduce particles and improve yield. Now suppose I'm in a fab which has one tool at 375C, but another tool is run through APM at 370C to account for an inherently faster dep rate due to some differences on the tool, and a 3rd tool is at 375C but has a slightly higher pressure. Do I now need to test the change on all configurations? Rely on APM to predict what each change should be on each tool? At Intel it would be a matter of uploading a SINGLE recipe and replacing the old one, regardless of how many tools were in the fab, as they are all the same. So tell me again how it is SLOWER to implement a change when you know all tools are copy exact?

-"GURU"

Anonymous said...

INTHEKNOW: I refuse to post on Scientia's blog but here are some of the RIDICULOUS FLAWS in Abinstein's analysis of Nehalem die size:

1) He forgets Nehalem has L3 cache (no density info given on L3 that I know of) - it is not clear to me how much L2/core there will be. I guess you could blindly assume packing density will be similar on all cache levels and you can just count total cache - I don't know if this is true.

2) His "eyeballing" of cache size relative to die size is wrong. Actual L2 cache size on Penryn is <50% of die size.

3) Nehalem was designed for 45nm, Penryn is a shrink - as such the transistor packing on the logic portion of the Nehalem core is likely better/more optimized than a shrink of a product designed on 65nm. While not as bad a shrink as K8, remember the crappy scaling AMD got from K8 90nm to K8 65nm? Hard to say how significant this will be but it could be substantial - when working from a "clean slate" you can pack the logic a lot better than when you are forced (more or less) to geometrically scale an existing design.

4) His cache transistor counts are all over the place from post to post. Also when he does his 8MB cache calculation he scales the transistor count with all of the overhead added in as opposed to just increasing actual 2MB transistor count (probably a small issue)

Your estimate ~175mm2 is probably on the low end (but not that far off) because transistor packing on cache is better than logic and Nehalem will have a lower percentage of its transistor budget allocated to cache. (However as I noted above the Nehalem logic will likely have better transistor densities than the Penryn shrink)

However, Abinstein's estimate of 279mm2 is RIDICULOUS if you look no further than the picture on the Fudzilla site comparing a 45nm Nehalem to a dual die dual core chip! Does he not believe the picture? WTF!?!?

Keep in mind Abinstein is the same person who once concluded on his blog that AMD's yields were 2X better than Intel's based on his elementary analysis of capacity. Of course he couldn't comprehend that if this were true there would be no possible way Intel could have the gross margins it has while AMD has the margins it has. When flaws in his capacity calculation were pointed out he dismissed them as trivial and second order. (As an example of his ignorance, he preferred to count the # of fabs as opposed to considering the actual SIZE of the fabs.)

Anyway, feel free to lift any of the points above - I'll just sit back and enjoy the show... just keep in mind once Abinstein has formed a conclusion he will manipulate the #'s to get to that outcome. (i.e. like arguing with Scientia about yield curves, you are likely just beating your head into a wall)

Anonymous said...

INTHEKNOW: Ignore my previous post and just post this link:

http://www.fudzilla.com/index.php?option=com_content&task=view&id=3214&Itemid=1

Basically Abinstein is saying a quad core Nehalem (the single die in the back) will be either ~220 or 280mm2 depending on cache size and therefore is either the same size or BIGGER than the quad core Penryn (the one in the front) which we know is ~206mm2 total?

Look at the picture - is the monolithic die as big as the 2 die? Or, as in one of Abinstein's 8MB cache "calculations" ~40-50% greater Si area?

They must have had the photography tricks they used on Lord of the Rings that made the hobbit actors look much smaller even though they appeared to be right next to humans. Perhaps Intel just paid off Fudzilla to doctor the photo?

Good luck!

Anonymous said...

The yield discussion, to put it rather bluntly, is beyond 99% of the bloggers' comprehensions on these boards.

And it's important to keep that in mind, particularly when people are trying to use technical specifications and technological innovation to predict sales and pricing.

For one thing, even with the links (thanks much, Roborat!) and the explanations, a lot of this stuff is going to sail over the heads of many of us. I like to think of myself as technologically inclined, but I find myself out of my depth when reading much of this stuff. It's natural, because it's so complex. But it's easy for the non-expert to simply accept claims from Scientia or Abinstein or GURU because, well... they sound a lot more knowledgeable, and it's natural for people to accept what more knowledgeable people say, because we can barely understand it, much less refute it.

Second, in spite of all of the discussion on process technologies and product innovation, there are lots of other factors at work in the marketplace, and those are much easier to understand. Performance, production capability, marketing budgets, mindshare... all of these have a direct and more easily seen impact on sales. Hey, your CPU design is more efficient and faster at a specific clock speed, that's great! But your fastest CPU can't run at a high enough clock speed to beat the other guy, so he can sell his CPU for more money. More importantly, he is selling them right now, and you're not.

I do appreciate the efforts made to help us neophytes understand the underlying technologies and give us an idea as to what the companies are doing, and why and how they do it. But the attempts by people like Scientia and Abinstein to predict sales performance based on complex technical details means that they'll always be making excuses for why they were so wrong.

Knowing how a car engine works is great, but unless you get some driving lessons, you're not going to get anywhere...

Anonymous said...

More crap from Abinsteins is an idiot.com (also known as Scientia's blog):

"I'm not saying this from any percentage scaling PoV, but from this wafer picture. The 300mm diameter is squeezed about 22 Nehalem dies horizontally and about 15 vertically. This makes Nehalem about 13.6mm by 20mm, or roughly 270mm^2."

Let's say he's counting correctly, though anything over 20 is questionable as you start to run out of fingers and toes...

So let's look at his "analysis": 300mm / 22 die = 13.6mm in one direction and 300mm / 15 die = 20mm in the other direction... Sounds simple, how can I possibly refute that?!?! Well, let's try:

ISSUE #1 - He is from the Sharikou school of Si processing where die are printed to the edge and there is no edge exclusion! In real life on 300mm there is at least a 3mm edge exclusion (some processes are actually more like 3.5 or even 4mm, but 3mm is the public consensus). Thus usable area in one direction is not 300mm but 294mm - of course there is the curvature of the wafer so it is only 294 on the diameter and as the die is not a line, the usable "width" of the wafer is not 294mm but actually less.

ISSUE #2 - Partial die; the die doesn't go perfectly to the edge (thus even using 294mm is incorrect).

ISSUE #3 - there needs to be some width allocated to the scribeline for when you are dicing the wafer (unless of course you think the cutting is a lossless process!?!?). Let's say 0.15mm for this - sounds rather small and insignificant, no? Now multiply this by the 22 die across and you lose another 3mm or so....

So conservatively, in a perfect world (where the die happen to go right up to the edge of the exclusion zone, which is not really the case):

291mm / 22 die = 13.22mm (accounting for 3mm edge exclusion and some scribeline width to cut the die)
292mm / 15 die = 19.46mm
(Again accounting for 3mm edge exclusion and some scribeline width to cut the die)

So this PUTS THE ABSOLUTE MAX SIZE at 257mm2... now factor in partial die at the edge and the curvature of the wafer and it is even smaller than that. To put this in perspective, let's say a quarter of a die is lost at each edge - that would be 0.5 die total in each direction, which is another ~5.5% (~15mm2) area reduction (0.5/22 = 2.3% width reduction and another 0.5/15 = 3.3% length reduction).
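For anyone who wants to sanity-check that arithmetic, here's a minimal sketch of the same back-of-the-envelope estimate. The 3mm edge exclusion and 0.15mm scribe width are the assumptions stated above, not published Intel numbers, and partial die and wafer curvature are still ignored, so this is an upper bound only.

```python
# Back-of-the-envelope upper bound on die size from counting die across a wafer,
# following the arithmetic above. Edge exclusion (3mm/side) and scribe width
# (0.15mm/die) are assumptions, not published numbers; partial edge die and
# wafer curvature are ignored, so this overestimates the real die size.

WAFER_DIAMETER_MM = 300.0
EDGE_EXCLUSION_MM = 3.0   # per side
SCRIBE_MM = 0.15          # width lost to the scribeline per die

def max_die_dim_mm(die_count):
    """Upper bound on the die dimension in one direction."""
    usable = WAFER_DIAMETER_MM - 2 * EDGE_EXCLUSION_MM - SCRIBE_MM * die_count
    return usable / die_count

if __name__ == "__main__":
    w = max_die_dim_mm(22)   # 22 die counted across the wafer
    h = max_die_dim_mm(15)   # 15 die counted down the wafer
    print("max die size: %.2f mm x %.2f mm = %.0f mm^2" % (w, h, w * h))
    # prints roughly 13.2 mm x 19.5 mm ~ 257 mm^2, matching the estimate above
```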

If someone wants to post this at Scientia's, fine; otherwise I'm happy letting them live in their naive little world - hopefully the people reading this site will realize that what they are spewing is COMPLETE CRAP!

If I had to GUESS - I would put the die size at around 220-240mm2.

Anonymous said...

Update - looks like Abinstein at least tried to account for partial die - though the 15 he counted in one direction is a bit on the low side, as you can clearly see the partial die are 1/2 width (you can clearly see 1/2 the L2 cache).

Now if only Intel can learn how to print die to the very edge of the wafer like AMD and cut the die with zero loss!?!?

Anonymous said...

"Last year, AMD did 65nm testing in Q2, then 65nm production in Q3, and this was then released in Q4. If Intel is equal to AMD then one would expect if they were doing 45nm testing in Q1 that production chips would be available in Q3. Q4 lags by one quarter"

More DEMENTIA! Hmmm... AMD's 65nm chips "released" in Q4... how about available on the market in Q1'07! (He is again comparing AMD's SHIPPING to Intel's product availability.)

If shipping is the milestone, then for a November 12 release the chips likely started shipping around end Q3 (end Sept). I'm also not sure where he came up with the testing dates...but that's another story.

Oh and K8 was a dumb shrink, Penryn had some added instructions...but again that's another story...

Oh and AMD's "65nm" (I use this term loosely) was really 90nm transistors with 65nm dimensions - it was an initial 65nm process, not a mature process meeting final node targets... but that's another story...

Just when I think Scientia's arguments can't get any shallower, he manages to drain a little more out of the puddle...

Anonymous said...

Abinstein and Scientia are cohorts.

It's evident that Scientia enjoys deleting posts questioning or challenging Abinstein.

Abinstein does not call himself an AMD fanboi, yet his posts clearly show his affiliation. To wit: Abinstein on AMDZone

LOL such tools

Anonymous said...

IN THE KNOW
DOC
GURU

I must say, after going to the Applied Materials website, I constructed my transistor. I have gotten so good at it, I can predict the following:

1. Simulated performance runs show 40% over Penryn!
2. Yields show great promise, simulations show 99.5%!
3. New process technologies with exclusive tooling!

http://www.youtube.com/watch?v=taFYwBa0q5M

4. TDP 8 watts!
5. I’ve developed new throttling technologies 'CTC'

http://www.youtube.com/watch?v=HX9cDtdzoeY

6. New SSE instruction set!
7. Dementia, Albeinstein, Share a Coo Coo, were available for testing and comments:


http://www.youtube.com/watch?v=O7txeOlujTc

Power Point Demo’s to follow after financial discussions with Investment Bankers.

SPARKS

Anonymous said...

So AMD's stunning announcement for September 25, 2007 was... (drum roll)...

...an Athlon 64 X2 5000.

Anonymous said...

Quite frankly, I find Scientia's nom-de-plume ironic. He would like to suggest that he possesses some scientific insight, but really he needs to learn what the scientific method is - observe, hypothesize, experiment, refine hypothesis. He only manages the first two.

He reminds me of the medieval "scholars" who didn't like to get their hands dirty by actually trying to ascertain the truth for themselves via experimentation - they thought merely observing phenomena and then explaining same with more-and-more contrived theories was the best way - e.g., the apparent motion of stars and planets as seen from Earth was due to their being embedded on celestial spheres within spheres.

In much the same way, Scientia merely looks at what other people have done and reported, then proceeds to ignore and twist facts to suit his own preconceived notions. He has no facts of his own to report.

Case in point: Fudzilla's article "Intel's yields at 45 nanometre up to 90 percent" at http://www.fudzilla.com/index.php?option=com_content&task=view&id=3201&Itemid=54:

"Intel thinks that AMD is in trouble, well no real surprise there. The company sources have told Fudzilla that Intel usually has yields up to 90 percent which sounds really great while Intel doesn't believe that AMD can come even close to that number with Barcelona.

K10, Barcelona is a big chip and that is why some of the cores are failing, hence you get triple core chips. AMD now doesn't have a choice and it will be fine once the revision B2F is out but Intel will defintiely keep the lead for the rest of the year and through most of 2008."

Anonymous said...

Come on folks, you need to disregard Intel's defect density graphs, AMD's triple core, Intel's gross and profit margins and AMD's lack thereof... all lies I tell you!

It is clear to anyone with half a brain that AMD's yields ARE DOUBLE those of Intel. Abinstein himself, who is clearly objective and knowledgeable on process technology, has proven it on his blog!

InTheKnow said...

just keep in mind once Abinstein has formed a conclusion he will manipulate the #'s to get to that outcome

He does seem rather hard to persuade, doesn't he? In any event the debate is entertaining.

And remember...

"There are three kinds of lies: lies, damned lies and statistics." - Benjamin Disraeli

Anonymous said...

I especially love this Abinstein comment:

"My guess is the integrated northbridge in Nehalem is probably the part with lowest yield, and chips that have the integrated northbridge not working (well) will be sold a low-end non-IMC ones."

His guess on yield must be based upon his years (decades?) of Si technology background? Much like his ability to count die on a wafer and deduce the die size of Nehalem?

And Intel will just fuse off a non-functional CSI and use what to communicate with the outside world?

My guess is Abinstein has never set foot in a factory or done any sort of engineering with regards to Si technology, yet believes that through reading web articles and using backward assumptions to calculate things like yield he can pass himself off as an expert to the willing AMD'roids.

pointer said...

"My guess is the integrated northbridge in Nehalem is probably the part with lowest yield, and chips that have the integrated northbridge not working (well) will be sold a low-end non-IMC ones."

Hahaha, I really want to laugh... While what Abinstein said might have some truth - any part with a defective IMC could possibly be sold as a non-IMC Nehalem - the whole statement sounds more like an AMD fanboy revenge statement, since AMD was forced to consider tri-core due to yield issues and Intel's supporters laughed at that :)

Anyway, I do believe we can intelligently guess the die size based on the wafer picture. Enumae actually did a nicer job by drawing a perfect circle to deduce the die's perimeter.

Anonymous said...

Ok, first off, after looking into the chip fab tooling thing for a few days, a couple of issues occurred to me. What I have been saying about Wrecktor Ruinz's grand scheme of buying ATI for 5.4 B was wrong. Now that I have a (general) idea of what it takes to make these things, let me revise my position. He was totally out of his mind. When AMD had 4B in cash and a lead in the market, HIS SINGLE BIGGEST GOAL WAS TO PRODUCE A FASTER AND BETTER CHIP, especially on the server end; everything else would have trickled down.

This stuff makes rocket science look like child's play. The complexity is incredible: the equipment itself, the various types of tools that do specialized work, the number of machines it takes, the number of variables at each process stage, the number of times each step needs to be done, and finally, the cost of integrating these things into a productive system. And that is not to mention the cost of the machines themselves. The point, you may ask? Why, on this planet, did he focus away from his primary task?

I suspected it has been game over, but in view of this? It is absolutely impossible, in their current financial position, to INVEST in the new technologies this industry demands! I don't care what anyone says about fabless production, that's all bullshit. Man, if I were going to make these things I would want absolute control over the entire process. I can't see any way of maintaining STRICT control while compromising even one stage in the process. To take a grinding wheel, albeit a high tech one, to a very expensive wafer without knowing who is at the controls? Hell no. Christ, no one has even factored basic vibration, temperature control, chemical purity, and human interaction into the process variables.

Forget ‘yield graphs’ and those 3 ignorant monkeys. The execution of product, on the whole, is the big enchilada. This, by the way, from all evidence, Intel is doing flawlessly.

Additionally, because of the complexities involved, if any hopeful naysayer thinks that if AMD goes 'belly up' Uncle Sam will break up, destroy, hurt - whatever - Intel, think again. The U.S. is not, by any stretch, going to destroy a leader in the world market with this kind of technological lead. Not with this complexity and tax revenue, and certainly not to give the market away to a foreign power. Not on your life. Microsoft was given a free pass. So will Intel.

We have a global market with global competition. This is not the 30's, and these are not the 80's with local telephone carriers. If you even pee in Intel's parking lot, you just may screw up one of the last truly dominant American industries and institutions. Further, you silly bastards who wish for the breakup of Intel because of half-assed AMD (The Imitator) BLOWING their market position, better think again. Think of American industry, think of American technology, think about American product, think about American workers, and think about a world leader of information technologies. Think about you, me, and AMERICA. Get it through your skulls: Intel is a strategic American institution, not the enemy. If AMD had played their cards right, they could have had a piece of it with Intel, as they've had for 20 years. Cramer was right July 5th, 'The Hall of Shame'. Stupid Wrecktor, he should have focused on his CORE technologies with these extremely difficult and expensive processes. I fear AMD's substantive loss in production, research, and technology, at this juncture, is irrevocable.

By the way, then there is this. Did Wrecktor factor in this extremely ugly scenario before he blew 5.4 B???

http://www.xbitlabs.com/news/chipsets/display/20070925093840.html

What a fool.

Gentlemen, allow me to offer sincerest thanks for the education, and hats off for your expert analysis.

Please, keep it coming.

SPARKS

Anonymous said...

I agree, Sparks. Tech specs and fancy ways of doing things are only part of the equation when you are in a production environment. AMD is limited by size and money, more so after buying ATI. Meanwhile, Intel can do this...

Intel CPU Pricing Q4/Q1

Looks like Intel is going to try to put the pricing squeeze on AMD before Phenom debuts. 3GHz dual-core for $183 and 2.6GHz quad-core for $316 in January 08. The 3.16GHz dual-core with 65W TDP for $266 looks like the way to go for gamers and buyers who don't really need quad (or tri) core.

Anonymous said...

http://www.tcmagazine.com/comments.php?id=16193&catid=2



GREAT LINK!!

Well there it is. The three wise men can shove their totally speculative nonsense concerning yields and production up their ridiculous asses. Not only is Intel getting great yields, they've got the things boxed and ready to rock.

As a side note, Wall Street thinks so, too. Intel has been hovering at its high for weeks, kissing and hugging 26; one good word on historical margins and it will go over the top. I told a certain writer at the INQ, 3Q '07 at $27, back in Nov 06. For a non-professional, that's pretty damned close. Man, they are looking good. Further, Christmas is coming.

Otellini and COMPANY (as not to ruffle any feathers) have done brilliantly. The Tick Tock cadence, and its subsequent execution, is a model of exemplary engineering and production. Kudos to all concerned!

Now, it’s on to a traditional $34. And, a fat juicy QX9650 and X38!!!! YOO YA!

Shall I be so bold as to say 34, 2Q '08?


SPARKS

Anonymous said...

Is Intel stupid or what?

E8200/E8300/E8400
All cost the same?

And the CPU model numbers are already all used up in just one year?

Intel is shooting themselves in the foot.

AMD, when it was in the lead, had a much better price distribution with the Athlon 64 and Athlon X2.

Alert people: you must really be completely stupid if you buy one of the current 6300, 6550, … processors today or tomorrow…

Intel doesn't know how to be in the lead after 4 years. I think they were already accustomed to being in the low-end market.

InTheKnow said...

Dr. Yield,

Might I inquire as to why you would recommend the Murphy yield model rather than a Poisson model?

I'd have posted this on Abinstein's blog where you made the recommendation, but since all posts must receive his stamp of approval, I figured I'd just ask here.

Anonymous said...

"with an edge exclusion between 2mm to 3mm you get # dies 225 to 229."

Herein lies (one of) Abinstein's fundamental problems - you give him a tool but he doesn't even have the background to understand the inputs going into it... 2-3mm EE? The "official" 300mm edge exclusion is ~3mm, but in reality there are processes that run nominally at 3.5mm or even 4mm EE. Abinstein must have gotten the 2mm from 200mm, but if he thinks that is achievable on 300mm he is on crack... does he even know how far the bevel extends in from the edge, or the spec on edge grip handling? Heck, there are tools that have guard rings or shadow rings to hold the wafer!

So why does he throw in 2mm EE? Is it a realistic #? Abinstein has NO FREAKIN CLUE, but if he plugs it into the model it helps him get closer to a number he wants. By throwing it in a range "2-3mm" he tries to make it seem possible where the range he should be using is 3-4mm (if he actually had a background other than "advanced googling")

This is just another example of how he tries to stretch the inputs to get to a # he wants to get to. He'll probably say 2mm EE is "best case" or something like that when 3mm EE is the absolute best on 300mm.

As for the scribeline, he is cooking the books again - the 0.7mm is between die! So if you are simply going to count "N" die across the wafer, the # of scribelines is N+1. If you are going to cut it in half as it is shared with the die next to it, you have to double it again anyway to account for the other side!

So to get size by counting die across the wafer:
300mm - 6mm (3mm edge exclusion on both sides) - 0.7mm * n - partial die... however these are all variables which we don't have exact #'s for (a rough sketch follows the list below):

1) 3mm is the "nominal EE"; some processes don't achieve this - and if a specific process tool can't achieve it, you certainly won't get a working die there.
2) Scribeline width - I haven't seen a definitive source for this (though I believe it to be ~0.6-0.7mm). It also obviously depends on the dicing method - there have been new methods introduced in this area as SiOC (low K) ILD films have issues with lateral cracking during dicing. I say this because if you are using a source more than a few years old it may no longer be valid.
3) Partial die size on the edge - impossible to determine accurately from the picture...
4) Finally, you need to take into account that the die has some width and the length is only 300mm on the diameter! It is less as soon as you move off the diameter (I'm too lazy to do the math - but it introduces yet another error). So if you are able to visualize it, you have a strip across the wafer the width of a die, which is only 300mm long at the center of the strip and actually less at the edges of that strip.
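Since the whole fight is over what numbers go into that die-per-wafer tool, here's a minimal sketch using a commonly quoted gross-die-per-wafer approximation, just to show how sensitive the count is to the edge exclusion you assume. The die dimensions and 0.7mm scribe width below are illustrative assumptions pulled from the discussion above, not known Nehalem numbers, and the formula ignores partial-die packing details.

```python
import math

# Commonly quoted gross-die-per-wafer approximation:
#   DPW ~= pi * (d/2)^2 / A  -  pi * d / sqrt(2 * A)
# where d is the usable diameter (wafer diameter minus edge exclusion on both
# sides) and A is the die area including its share of the scribeline.
# Die size and scribe width here are illustrative assumptions, not real data.

def gross_die_per_wafer(die_w, die_h, edge_excl, wafer_dia=300.0, scribe=0.7):
    area = (die_w + scribe) * (die_h + scribe)   # die plus scribeline, mm^2
    d = wafer_dia - 2 * edge_excl                # usable diameter, mm
    return math.pi * (d / 2) ** 2 / area - math.pi * d / math.sqrt(2 * area)

if __name__ == "__main__":
    # The disputed edge-exclusion values, with a hypothetical ~13.2 x 19.5 mm die.
    for ee in (2.0, 3.0, 4.0):
        print("edge exclusion %.0f mm -> ~%d gross die per wafer"
              % (ee, gross_die_per_wafer(13.2, 19.5, ee)))
```

Even with everything else held constant, each extra millimeter of edge exclusion costs a few gross die in this example, which is exactly why quietly assuming 2mm instead of 3-4mm nudges the answer in the direction you want.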

I admire your patience on Dementia's blog - I see Abinstein wrote his own blog post on this, which is just as comical... maybe it's better to leave them in their ignorance? I mean, you provide the model and the inputs, and they just disregard them and play with the model to get what they want...

Unknown said...

When trying to prove a point I am always conservative with numbers and estimates.

If you have to squeeze the numbers then you are cheating to obtain the result you want and not the facts.

That seems to be what Abinstein is trying to do.

Anonymous said...

Is Intel stupid or what?

E8200/E8300/E8400
All cost the same?


If yields are good, it looks like that pricing scheme is designed to squeeze AMD in the low/mid-range 2-core segment, while Intel enjoys the better margins on the 3.16GHz part. AMD's dual-core CPUs will be competing against older Intel CPUs that will be priced in the $90-130 range, most likely.

This could backfire on Intel if AMD had any marketing smarts at all, but I'm not sure that they do. So they will probably be hurt really badly for a couple of quarters while they ramp up production of their native quad-core CPUs.

Unknown said...

E8200/E8300/E8400
All cost the same?


They all cost the same to produce also. This is Intel's entry/mid level 45nm product. They will sell bucket loads of these.

Penryn's die size is 107 mm², so they can hammer AMD's margins with ease. I wonder if they are going to do 45nm Allendales. The die size will be very small if they do. That's a lot of processors from 1 wafer.

anonymous said...

I chose Murphy because it is the most conservative at the high and low end of the curves, and is fairly middle of the road elsewhere. No strong science behind the decision.
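For anyone following along at home, here's a minimal sketch of the two models being discussed. The die area and defect densities are made-up numbers, purely for illustration.

```python
import math

# Two classic die-yield models as a function of die area A (cm^2) and defect
# density D0 (defects/cm^2). Poisson assumes defects land independently;
# Murphy averages the Poisson result over a triangular distribution of defect
# densities, which softens the prediction at higher A*D0.

def poisson_yield(area_cm2, d0):
    return math.exp(-area_cm2 * d0)

def murphy_yield(area_cm2, d0):
    x = area_cm2 * d0
    return ((1.0 - math.exp(-x)) / x) ** 2

if __name__ == "__main__":
    # Hypothetical 1 cm^2 die at a few made-up defect densities.
    for d0 in (0.2, 0.5, 1.0, 2.0):
        print("D0=%.1f /cm^2  Poisson=%.2f  Murphy=%.2f"
              % (d0, poisson_yield(1.0, d0), murphy_yield(1.0, d0)))
```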

I like how pointing out the holes in Abe's hyperbole with simple facts earned me the vaunted title "fanboy" amongst a bunch of other things. And I'm a faker too! Who cares that I have 11 years in the mask layout and yield businesses - and a real PhD and MBA to boot. Since my simple statements don't match his reality, I merit ad hominem attacks. What an @$$.

Anonymous said...

I'm an Intel employee.

You have read and understood that graph perfectly.