10.23.2007

Ah Yes! That DTX Again

Once hailed by some blogger as a potential source of revenue, AMD's DTX surfaces yet again to remind everyone that it's still alive. In fact, this time around it has a complete set of specifications. I've already given my verdict on the outcome of this AMD endeavour, and it appears that The Inquirer completely agrees with my opinion.

From Wily Ferret: "The DTX spec appears to bring nothing new to the table, bar some compatibility with ATX which, to us, seems contradictory in purpose if not actual physical reality. Chalk this one up for another veto by the Taiwanese board-making mafia, then."

Like I said in the past, DTX is facing an uphill battle. The fact that the dominant player in the industry isn't supporting the standard already creates enough doubt for mobo makers not to waste time on such a risky platform. The second and biggest problem is that SFFs already exist in proprietary formats. Large OEMs already build them in their own unique ways, and it is that differentiation that generates healthy margins in an already saturated desktop market. An open standard means open competition, and clearly DTX only means low margins for OEMs. To illustrate, this is just like AMD coming in and creating an open standard for Mac PCs. Clearly, if everyone could now make Macs, guess what that would do to Apple's business?

As to the future of DTX, The Tech Report spells it out even better:
"Nearly every motherboard and chassis maker we contacted for this story had a similar position on DTX. Publicly, they have all announced support for the standard, usually in the form of a single product or project, in conjunction with AMD's efforts. Privately, they are hedging their bets, waiting to see whether DTX gains any momentum in the market before committing to producing anything in volume".

Other than to remind someone that we told him so, I must admit to not really having any substantial reason to write about this topic -- and continue beating this already dead horse. Apologies.

53 comments:

InTheKnow said...

It looks like Abinstein is dredging up the die size discussion again.

I will let the image speak for itself and refer you to some nice work by enumae. Here is the die image.

By any estimation, 270mm^2 is too big.

Anonymous said...

You missed the most important point Dementia had on DTX - it will allow lower costs in the budget area....

Of course one could look at how the only 2 DTX boards currently cost NORTH of $250, but why let the facts get in the way of a good blog?!?!

Who wants those real "expensive" Intel based boards when they can get a "cheap" (?) DTX board with just one PCIE slot, one PCI slot and just 2 slots for memory?

It's not like for $250 I could get an Intel (or even an AMD) ATX or micro-ATX board with that much functionality! Of course AMD marketing will soon be saying customers are demanding higher cost boards with far less functionality! That's $250 / platformance / board size / watt / "native"** PCI = VALUE

** Native PCI means a SINGLE UNIQUE PCI slot, not like all those boards that "glue" on all those additional PCI slots... DTX is a native design, baby! You want more PCI slots? You can simply network more motherboards together with the advanced AMD interconnect protocols - who has a need for >1 PCI slot for a single computer? That's not an elegant design!

Anonymous said...

you talking about someone that rhymes with dementia?

Anonymous said...

scheming scientia
idle fella
closet fanboy
baked a half ploy
blame it on dementia

Anonymous said...

^^the meter needs some more work

Anonymous said...

This is part of an interview with ASUSTEK President Jerry Shen. It is worth the read in its entirety. If there was ever an answer to the DTX question, here it is in a nutshell. However, it also speaks volumes about AMD's current state of affairs and the way in which they do business. If this DTX thing doesn't fly with ASUS, it will never get off the ground - certainly not with this kind of criticism and industry clout.

Further, he mentions AMD's "one generation process technology lag behind Intel." If anyone on this site had any doubts about that, here it is from one of the biggest, if not the biggest, industry players.


“6 - What do you make of the current Intel vs AMD vs Nvidia vendor standing? How does it affect Asus? And how important is for the industry to have the competition alive


Talking about Intel, we are good friends with most of their top management, and in the middle of top 10 Intel partners for processors and chipsets. Overall, Asus is comfortable with Intel’s support over the years. Intel has consistent and stable corporate culture, which we like.

AMD likes to rank partners more by volume alone. Acer uses a lot of AMD processors and we feel AMD favors them. We hope to have a level playing field from AMD. Also, on the chipset and GPU, ex-ATI, side: AMD marketing, product delivery and strategic alliance relationship development needs improvement. Their CPU 2002-2006 product performance was better. Since mid 2006 we feel Intel regained the lead. Departure of top executives is a big challenge for AMD, in my opinion. Our concern is also the combination of no product leadership with reduced volumes. There seems to be no clear near-term strategy how to win back the lead.

Last October they had processor shortage for the whole quarter. Vendors, including Asus, were forced to switch to Intel even further. The same shortage repeated this last quarter with ATI GPUs. Finally, they suddenly changed their focus to top tier vendors from us, the OEMs. I feel the relationship has not yet reached the required stability for consistent business.

Also, we are concerned about their continuous one generation process technology lag behind Intel, while the on board memory controller and interconnects will be anyway matched by Intel during next year. However, I feel that the 2009 Fusion is a great chance for AMD to succeed with integrated processor and GPU approach.

As for Nvidia, we have an OK relationship, except for their product schedule uncertainties. Being squeezed between Intel with CPUs and upcoming GPUs, and AMD with both CPUs and GPUs as well, I feel Nvidia needs to define a new position in this changed environment, in my opinion.”


http://www.theinquirer.net/gb/inquirer/news/2007/10/23/interview-asus-president-jerry


Case Closed.

SPARKS

InTheKnow said...

Another rare jewel from Abinstein.

Nehalem follows K8 and Barcelona in almost every step. In terms of architecture K8/K10 do matter a lot to Intel because they are what Intel is copying from.

Let's see: Barcelona was "released" to wide acclaim and huge consumer demand on September 12th.

Nehalem was demoed at IDF on September 18th.

Now I know Abinstein tends to be a little on the optimistic side when describing the accomplishments of his beloved Intel, but I think even he would have to concede that going from first concept to product demo in 6 days is a bit hard to swallow.

As to copying K8, you might make a case that the QuickPath interconnect was developed in response to HyperTransport. But it seems to have slipped his mind that Intel had Timna with an IMC long before K8. They just bet on the wrong memory protocol and the project was scrubbed.

In typical Intel fashion they refused to take another risk after having been burned once. As it turned out, they stuck with the FSB too long on servers. While AMD fans like to decry the FSB on the desktop, it seems to work well enough on single socket platforms.

InTheKnow said...

Another point that no one ever seems to mention about Nehalem: Intel claims that it is a modular design.


It says here that Otellini said Nehalem's modular architecture will make it possible to use the architecture as a template for a variety of products with different cache sizes, core counts, input/output options and power demands.

That falls a little short of what I would call real modularity. I would define modularity as the ability to pull out a circuit module and replace it with something else like a GPU, accelerator, etc. But that may prove to be too difficult to ever achieve, I don't know.

The point is, for all AMD's hype over the native quad-core Barcelona, I've never seen similar claims regarding modularity. But it should still be clear to anyone with half a brain that Nehalem is almost totally dependent on Barcelona for inspiration.

NOT!

Tonus said...

"They're copying from us" is one of those mantras that makes a person feel better about a product, even though it is meaningless in the long run.

When AMD had the performance advantage, if Intel supporters had chided them for "using Intel x86 technology", what difference would it have made? Turn that around now, and claim that Intel is basing its future CPUs on "copied" AMD technology. What difference will it make? Zero.

It's like Apple fanatics who constantly talk about Windows being a "rip off" of MacOS, then Windows fanatics tell them that MacOS is a "rip off" of the Xerox PARC research. Big deal, where do Microsoft, Apple, and Xerox rank in sales of operating system software?

Ho Ho said...

Interesting. Scientia says that Penryn is only a small improvement. Well, I wouldn't call a quad-core 3GHz ES Penryn under full load with an entire-system power usage of 118W "a small improvement". The CPU is supposed to have a 130W TDP but I'd say it isn't using more than 60-70W, probably even less.

pic
Power usage is shown on the white multimeter in the top right corner of the image.

Anonymous said...

Roborat said:
To add to that, the 8% increase in operational cost is indeed alarming. I'm beginning to sense that AMD is losing a lot of wafers due to bad 65nm yields. Barcelona has yet to make an impact negatively.


I think it may actually be worse than that. It looks like the boys in Sunnyvale are also doing some shady accounting to prop up their margins. Buried within their SEC filing
http://www.sec.gov/Archives/edgar/data/2488/000119312507172819/d10q.htm#tx37268_6
we read this little gem:

"We record grants and allowances that we receive from the State of Saxony and the Federal Republic of Germany for Fab 30 or Fab 36 as long-term liabilities on our financial statements. We amortize these amounts as they are earned as a reduction to operating expenses. We record the amortization of the production related grants and allowances as a credit to cost of sales. The credit to cost of sales totaled $34 million in the second quarter of 2007, $33 million in the first quarter of 2007 and $28 million in the second quarter of 2006. The credit to cost of sales totaled $67 million in the first six months of 2007 and $54 million in the first six months of 2006. The fluctuations in the recognition of these credits have not significantly impacted our gross margins."

AMD is using their gov't subsidies to credit (read: erase) operating costs and artificially prop up their margins. If I were a stockholder I'd want some answers; this is clearly misleading. They claim it doesn't significantly impact margins. While a few percentage points may not seem like a lot, it speaks volumes about the state of current core operations.
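To put a rough number on how much a credit like that can move the headline figure, here's a toy margin calculation. The $34 million credit is the one from the 10-Q above; the revenue and cost-of-sales figures are invented placeholders, not AMD's actuals.

```python
# Toy gross-margin math: the $34M credit is from the 10-Q quoted above;
# revenue and pre-credit cost of sales are invented placeholders.

def gross_margin_pct(revenue, cost_of_sales):
    """Gross margin as a percent of revenue."""
    return 100.0 * (revenue - cost_of_sales) / revenue

revenue = 1370.0              # hypothetical quarterly revenue, $M
cost_before_credit = 920.0    # hypothetical cost of sales, $M
grant_credit = 34.0           # per the 10-Q excerpt (Q2 2007), $M

print(f"without credit: {gross_margin_pct(revenue, cost_before_credit):.1f}%")
print(f"with credit:    {gross_margin_pct(revenue, cost_before_credit - grant_credit):.1f}%")
# ~32.8% vs ~35.3% -- a ~2.5 point swing from a single line item.
```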

Anonymous said...

Well here's a nice happy birthday to all the AMD/ATI shareholders. This day, Oct 25, is one year to the day it all happened officially.

The gift, you may ask? The aggregate market cap for both companies at the time was about 21B. Today it is about 7. That's 14B gone in 12 months.

Brilliant!


SPARKS

Anonymous said...

"Well here's a nice happy birthday to all the AMD/ATI shareholders. This day, Oct 25, is one year to the day it all happened officially"

Come on, you have to commend AMD for this brilliant strategy! Remember there was some debate over whether they should have tried to acquire Nvidia... think of the additional billions in market cap that would have evaporated had they done that!

Tonus said...

I guess you can describe it as a tip of the cap.





(sorry!)

Anonymous said...

scientia and the rest of the AMD fanboi crew are in for a surprise

Intel's Penryn architect speaks
http://www.theinquirer.net/gb/inquirer/news/2007/10/26/intel-penryn-architect-speaks

Unknown said...

See this hilarious letter:

http://www.theinquirer.net/gb/inquirer/news/2007/10/26/egg-gone-rotten

Hi Mike

As a reader of The Inquirer for many years, I have always enjoyed the professionalism of the site and the articles. So for good reason The Inquirer have been my news site of choice.

But today I had a very bad experience when I read this article by Wily Ferret.

This is the kind of article I would expect from Toms Hardware, which is coloured by who that pays the most. So I hope it is not the case here, i.e. Intel paid for the article.

I am really disappointed to this kind of garbage on The inquirer.

The whole article is terrible, but these three sentences clearly shows no resource done by Wily Ferret what so ever:

"Nobody quite knows what DTX is supposed to be for."

"But is anyone really crying out for these things?"

"The DTX spec appears to bring nothing new to the table, ..."

If he had done just a minimum of resource by searching the web, he would have found very good articles that addresses why DTX was created, why there is a need for it, and why is it revolutionary.

He is completely missing the whole point of DTX, which is lowering the PCB manure factoring costs and avoiding a vendor lock in by e.g. Shuttle.

I hope to hear a comment from you about this.

Paul Hales is CC'ed to this email, in case this email should have been addressed to him.

Best regards,

Martin Jeppesen


DTX is the greatest breakthrough since the microprocessor itself! :-P

pointer said...

See this hilarious letter:

http://www.theinquirer.net/gb/inquirer/news/2007/10/26/egg-gone-rotten ...


actually I have a feeling that this complaint came from AMD itself ...

Anonymous said...

I loved that DTX letter, especially:

"He is completely missing the whole point of DTX, which is lowering the PCB manure factoring costs..."

As the ONLY currently available boards are >$200, are the costs really lower? If so, I would like to go back to the cheaper SFF Shuttle "monopoly" - apparently "competition" has managed to increase costs to the consumer for less functionality!

And DTX REVOLUTIONARY? What specific functionality does it enable that makes it revolutionary over currently available solutions? It has no new functionality, is apparently about as small as micro boards that have been selling FOR YEARS, and the "cost benefit" is currently an actual cost PENALTY for consumers. Much like Intel's BTX solution, I see very limited success for this. I know AMD is new to the whole platformance thing - but they should stick to what they do best (or I should say second best).

Are we supposed to know who this Jeppesen guy is, or is he just another AMD poser?

Anonymous said...

All -- please stop poking fun at the letter writers of the Inquirer ... we all know why they read the 'Rag'... and we all understand why they get upset if there is even a mention of something not praising AMD for world-class, industry-leading, flawlessly-executed innovation... they can get quite upset at the slightest hint of anything less than pure and brilliant about their beloved AMD.

Anonymous said...

Back to the main subject: DTX.

Perhaps it is an answer in search of a question.

When I was looking to do a scratch-built case, I looked at pieces and parts all over the spectrum.

To facilitate discussion:

Pico-ATX I seem to recall - is there such an animal?

ITX?

What's wrong with mini and micro ATX? And a lot of computers (Gateway 1 or whatever it's called) are going to a lappy chip on a proprietary board to get the size down.

I say again, answer in search of a question.

InTheKnow said...

Martin Jeppesen said ...
"He is completely missing the whole point of DTX, which is lowering the PCB manure factoring costs..."

Humorous misspelling aside, this statement shows a remarkable lack of understanding. I've made PCB's so I feel qualified to comment on this.

The cost a PCB shop can command is based primarily on two factors: the number of layers and the number of square inches. The shop I worked in made boards anywhere from 2 to 32 layers. We didn't make money on any board that wasn't at least 8 layers thick. The only reason we took a job on a board that was less than 8 layers was because the company that ordered it made an order for a higher layer board contingent on taking the order for the lower layer count board. DTX is a 4 layer board, so based on layer count, this baby is dirt cheap and not cost effective for most board shops to build in anything but high volume.

Now for the area. I tried to explain to Scientia the concept of panelization and how boards are processed. He didn't get it. I throw it out here because I think the people that frequent this board will at least make the effort to understand. It really isn't that tough.

Board shops put multiple boards on a panel whenever possible because it reduces the number of lamination runs (one of the longest steps in the manufacturing process) they have to make to push out a given number of boards, and putting more boards on a panel reduces waste around the edges. Much like a wafer, you can't use the area all the way out to the edge. Many shops limit their panel sizes to ease manufacturing and contain costs (you hold less laminate in inventory).

So being able to put more boards on a panel will ease manufacturing costs. But there is a flaw in the whole DTX presentation by AMD. They refer to a "standard manufacturing panel", but any serious player has at least 4 panel sizes, so this standard panel is semi-fictitious. Putting more boards on a single panel will save you money through reduced lamination processing and reduced waste. But since you aren't locked into just one panel, you pick the panel size that gives you the most boards per panel and uses as much of the panel as possible. It is not really a one-size-fits-all choice, as AMD implies.
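To make the panelization point concrete, here's a toy boards-per-panel calculation. The panel sizes and edge margin are invented for illustration; the 8 x 9.6 inch footprint is roughly the published full-DTX board size.

```python
# Toy panelization: pick the stock panel that fits the most boards.
# Panel sizes and edge margin are invented; 8 x 9.6 inches is roughly
# the published full-DTX footprint.

PANELS = [(18, 24), (21, 24), (24, 30)]  # stock laminate sizes, inches
MARGIN = 0.5                             # unusable border per edge, inches

def boards_per_panel(panel, board):
    pw, ph = panel[0] - 2 * MARGIN, panel[1] - 2 * MARGIN
    bw, bh = board
    # try the board in both orientations and keep the better fit
    return int(max((pw // bw) * (ph // bh), (pw // bh) * (ph // bw)))

dtx = (8.0, 9.6)
for panel in PANELS:
    print(f"{panel[0]}x{panel[1]} panel -> {boards_per_panel(panel, dtx)} boards")
# 18x24 -> 4, 21x24 -> 4, 24x30 -> 6: the shop picks the panel, not AMD.
```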

So in summary, the DTX board is cheaper than dirt to manufacture. It only has 4 layers and is small, so you get more boards on a panel and pump out more boards per lamination cycle. Sounds good, right?

But remember what I said at first: most PCB shops are going to lose money on 4 layer boards anyway, even if you can get more boards per panel. The boards just don't command a high enough price per square inch to justify replacing a higher layer count board in the line.

And finally, the real killer: the cheapest part of a finished board is the board itself. A stuffed board (one with all the components mounted) can easily go for 10 to 100 times the cost of the PCB itself.

So if I have a standard board that I sell for $10, by the time the components are mounted on the board, the OEM is paying $100 for a cheap board.

Now I get a DTX board and can make it for $5. But if it still has all the components of a standard board, it will cost $95. You are only getting a 5% savings in this case unless you decrease functionality.

My point here is to provide a little insight into board manufacturing and show that the cost savings AMD is touting for DTX is trivial unless the board has reduced functionality.
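The dilution effect in the example above, spelled out with the same made-up dollar figures:

```python
# Same made-up figures as above: halving the bare-board cost barely
# moves the stuffed-board price, because the components dominate.

component_cost = 90.0              # same parts either way, $
for bare_board in (10.0, 5.0):     # standard vs DTX bare board, $
    stuffed = bare_board + component_cost
    print(f"${bare_board:.0f} bare board -> ${stuffed:.0f} stuffed")
# $100 vs $95: a 50% cheaper PCB is only a 5% cheaper finished board.
```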

InTheKnow said...

BTW, there are a number of simplifications in the statement above. If anyone really wants to split hairs and get into the really gory details we can, but I doubt most people reading the blog even want as much detail as I gave. :)

Anonymous said...

In The Know

Gory details? No. But, I would like to know a couple of things, since I sometimes roll my own, circuit boards, that is.

Why, in the early 90's, were Mylex and AMI boards SO expensive? And, why the HUGE price drop? This is not factoring adjusted 90's dollars.

Secondly, how is the actual "layering" process done?

Are they exposed, etched, then a new layer added; exposed, etched, new layer; and so on?

Are the tiny holes (unused) between all the layers, the interconnect points common to all layers?

Do they roll sheet after sheet to get the layers? Are they glued? Are the common interconnecting points soldered?

Are relative traces, between each layer, critical to the performance of each manufacturer’s board?

Is this entire process automated?

Got a link?

By the way, personally, I wouldn't pee on a board that costs less than 150 bucks!

SPARKS

Axel said...

Copied for posterity:

Scientia

It is probably easier for AMD to convert FAB 30 without it operating and AMD can probably save some organizational complexity by dropping 200mm production.

No, AMD's original plan was to upgrade Fab 30 piecemeal, as per their Q2 2006 earnings CC:

BOB RIVET
Think of it this way, Joe -- we will probably never go below 50% utilization in the facility. As we continue to flip out tools, what we are actually doing is building a separate building to actually augment the capacity to be able to flip the tools through the system. We will never actually go below about 50% utilization of the facility in the worst quarter of time.


Clearly, due to the current financial situation along with lower than anticipated demand for K8 due to Core 2, AMD could not justify sticking to their original plan of upgrading piecemeal. They will instead completely shutter Fab 30 by year end in order to save on the high costs of operating the cleanroom, among other costs. They have provided vague guidance on the pace of the upgrade, including some smoke and mirrors about a race car idling in the pit. The reality is, even if demand for K10 in 2008 turns out to be higher than Fab 36 alone can supply, AMD are unlikely to bring Fab 38 on-line in 2008 because K10 is too weak a product to raise ASPs high enough to operate two fabs. I believe Fab 30 will remain shuttered throughout 2008.

"- There isn't enough market demand for K8 to warrant the high fixed costs of operating Fab 30."

You are obviously overlooking the large volume of Brisbane chips that AMD will continue to produce. Remember, FAB 30 cannot produce 65nm chips.


No, my statement focused on the cost of operating Fab 30 and made no mention of Fab 36. The fact is, AMD had hoped by now to have the capacity from both fabs to supply the demand for K8, but due to Core 2 this demand is now less than initially anticipated. So AMD are shuttering Fab 30 to save cost. Re-opening date is TBD and will be based primarily on what K10 revenue and ASP look like in 2008.

Again, not true. AMD is moving forward with both 65nm K10 and 45nm K10. Bulldozer is nearly two years away.

Yes, moving forward with a mediocre product that cannot support the company. In fact I doubt that AMD will turn a profit in 2008 even with Fab 30 shuttered. K10 is simply too slow, both in IPC and clock.

Completely false. FAB 38 will come online in early 2008.

Sorry but I believe that this statement is destined to go the way of most of your predictions over the last eighteen months.

InTheKnow said...

Sparks, that is quite a list. :) The easiest thing to do is just walk you through the manufacturing flow.

You start off with laminate, which is a fiberglass-like material impregnated with resin (the core) and clad with copper. There are a variety of copper and core thicknesses and those are chosen based on design and electrical characteristics.

This is where the panel size issue comes in. Most shops will only buy a few different sizes of laminate to reduce what they have to keep in inventory. The company I worked for actually bought uncut sheets of laminate and did our own cutting, but that meant additional equipment costs and headcount to run the equipment. It also added time to the manufacturing process since you had to wait for the laminate to be cut to size.

The laminate is then sent to innerlayer where it is imaged on each side and etched. Following etching, the layers are sent to an inspection step where the board is compared to a reference image. It is critical that there not be any shorts or opens at this point since that will result in scrapping the entire board after investing all the money into building it.

Once all of the innerlayers are imaged, they are sent to layup. The innerlayers are stacked in the appropriate order and aligned. Prepreg is also placed between each layer at this time. Prepreg is just another core material, but without the copper cladding on the outside.

The boards are then stacked and pressed in a lamination furnace with a controlled temperature ramp and pressure profile. These parameters combined with the type of resin in the prepreg will determine the final board thickness and affect the impedance characteristics of the finished board.

After lamination the board is sent to route. In route the edges of the panel are trimmed up and reference holes for drill and outerlayer imaging are placed.

After route comes drill. The boards are laid out on a drill table (typically with 3-4 drill heads per table). The boards will be stacked up to the maximum stack height permitted by the drill bits to reduce the drilling time. When you are drilling up to 30K holes, it takes hours, even with modern drill equipment. The holes without components mounted in them will provide connection points between the various layers of the board.

After drill the panel goes to a brush line to remove drilling burrs. Once the burrs are removed the holes in the panel will be etched to improve adhesion of the copper that is electrolessly plated on the board in the next step. Once the holes have a thin electroless copper plate on them to provide electrical conductivity through the holes they go to flash plate. In flash plate a thin layer of electroplated copper is deposited on the boards to protect the electroless copper plating from being removed in the next step, outerlayer imaging.

After the outerlayer has been imaged on the board, it goes back to plating where the outerlayer line thickness is built up to the desired thickness. A masking metal, usually tin, is plated on top of the copper at this time so you can go to SES.

SES (strip, etch, strip) first removes the dryfilm put on in outerlayer imaging that was used to define the lines. Then the copper between the lines, which was under the dryfilm, is etched away. Finally, the tin that protected the lines from the etch used to remove the base copper on the panel is stripped off.

Following SES the board goes to final route where the individual boards are cut out of the panel.

You now have a functional board and it is ready for finishing. Finishing includes application of solder mask (the green stuff on the board, if you prefer) and application of a metal finish (typical finishes are solder, gold or silver).

If the board passes electrical testing, you pack it up and sell it.

The process is best described as semi-automated. The boards are moved from each tool to the next manually and are manually loaded onto the tool. Once on the tool, the process is automated. Layup is the notable exception. It is a totally manual process.

The trace layout, type of pre-preg used and thickness between layers all interact and affect the operating characteristics of the board. But the board shop has very little input to this. Shops receive board designs from their customers and manufacture them according to the specs provided.

By the way, personally, I wouldn't pee on a board that costs less than 150 bucks!

Which is probably a $15-$30 board. It is really the components on the board you are paying for.

Why, in the early 90's, were Mylex and AMI boards SO expensive? And, why the HUGE price drop? This is not factoring adjusted 90's dollars.

Two reasons, really. First, during the 90's demand exceeded capacity. This allowed board shops to dictate prices for their products. It was a seller's market. Second, bare boards (without components) are pretty much a commodity item these days. The guy who can make it cheapest will get the order. I'm sure that the components on the board you are talking about have largely been commoditized now as well.

Anonymous said...

In The Know

Ah, so the prepreg layer is the insulating layer between the double-sided trace-pattern conductive layers. Therefore, a 7 layer board would have 4 conductive layers and 3 prepreg layers. Further, my high end ASUS premium boards are nice, fat and heavy because they have high insulating properties and/or many layers. Plus they come with juicy components.

Hmmm--, with that in mind, can I go to Mouser.com and replace some of the power supply caps/filters with larger, higher value, high quality caps? I do this with my tube amps, especially in the signal path and PS stages to reduce ESR, signal distortion and ripple, respectively. e.g. Black Gate, Wondercap, Cardas, Jensen, etc.

SPARKS

Anonymous said...

"It is probably easier for AMD to convert FAB 30 without it operating and AMD can probably save some organizational complexity by dropping 200mm production."

Scientia is right on this one, but as Axel indicated, AMD EXPLICITLY stated they were not going to do this and BOASTED about how they would be able to 'convert on the fly' and not lose that much capacity. In fact Dementia, Abinidiot, et al made a big deal of this saying how great it was and that AMD would not be going back to effectively one fab during the conversion. Now he is saying the exact opposite decision is the right one, because it is easier operationally, blah, blah, blah, never set foot in a fab so I'll spout out more words to make it seem like I'm an expert on this...organizational complexity....blah blah blah...

The truth is whenever AMD changes a decision it is OBVIOUSLY the right thing to do and Scientia obviously has the right argument behind it!?!

So, a little background on an on-the-fly conversion from a 200mm to a 300mm fab.

The hardware automation is completely different and incompatible between 200mm/300mm. 300mm uses an overhead vehicle system (OHV); 200mm uses carts pushed by hand, or lots hand-carried by people or in SMIF pods. The major reason for the delta is ergonomics - the weight of a full 300mm lot is beyond the OSHA safety requirement for a repetitive lift.

The subfab is also very different - this is the pipes, electrical, etc. that supply power, chemicals, water, exhaust, and so on... these types of things are hard to "convert" as they are often different sizes, capacities, possibly different FLA electrical requirements, etc... Also, if you have say one main slurry system for CMP tools that is sized for a specific volume on 200mm, it's not like you can just tap into the system for 300mm tools. You have pressure drops which are dependent on things like line length and the number of elbows in the line, and you probably have different capacity requirements which may require a stronger pump, larger ID piping, etc... Now take this one example and apply it to tens and hundreds of fab tools with unique chemical delivery requirements.
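For anyone who wants to see why "just tap into the line" doesn't work, here's a back-of-the-envelope pressure-drop sketch using the standard Darcy-Weisbach relation with minor losses for elbows. Every number below is an invented illustration, not a real fab line.

```python
# Back-of-the-envelope Darcy-Weisbach pressure drop: friction scales
# with line length, and every 90-degree elbow adds a minor-loss term.
# All values are invented illustrations, not real fab plumbing numbers.

RHO = 1000.0    # fluid density, kg/m^3 (assume a water-like slurry)
F = 0.02        # Darcy friction factor (assumed turbulent, smooth pipe)
K_ELBOW = 0.9   # typical handbook loss coefficient per 90-degree elbow

def pressure_drop_kpa(length_m, diameter_m, velocity_ms, elbows):
    dynamic = 0.5 * RHO * velocity_ms ** 2           # dynamic pressure, Pa
    friction = F * (length_m / diameter_m) * dynamic
    minor = elbows * K_ELBOW * dynamic
    return (friction + minor) / 1000.0

# short run to an existing tool vs a long run to a new 300mm bay
print(pressure_drop_kpa(20, 0.025, 1.5, elbows=4))    # ~22 kPa
print(pressure_drop_kpa(80, 0.025, 1.5, elbows=12))   # ~84 kPa
```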

The tools themselves have to be moved in and out - assuming the fab is packed efficiently (and densely), the act of physically moving a tool into the right location is not a trivial thing if the fab is already fairly populated. The industrial engineers will typically factor in move-in and move-out paths - but these were done when the factory was started up years ago, and were based on 200mm tool sizes (not 300mm). Surrounding tools will often need to be powered down during the move, or worst case they may even need to be temporarily moved themselves.

Software - despite the big to-do about APM (which, BTW, is more or less a joke as EVERYONE uses some form of APM), the actual control SW is different between 200mm and 300mm, and the tools have different automation interface standards, etc. You now have to manage 2 SW systems, including the new system you will have to run for 300mm.

So a conversion could theoretically be done in real time, but this would likely incur additional cost and slow down the ramp rate of 300mm volume. Gutting the fab is the simplest and most efficient way to go. Had this been a transition at the same wafer size (like 65nm to 45nm) the process would have been MUCH SIMPLER, but AMD was kidding themselves, as were the ignorant fanboys, that they could just simply ramp up 300mm while taking down 200mm without much manufacturing impact.

To anyone with a background in manufacturing or planning, it was clear that AMD's plan was a joke and just something they told analysts to make them feel better. At best they could have taken down a large portion of the fab (like 1/2) and started laying in the 300mm infrastructure - but a lot of that depends on the layout - a lot of times the fab layout has similar tools clustered together. Taking down 1/2 a fab spatially is one thing - but if you are just plucking out tools here and there across a fab, that won't help, as you need clean space to put the vastly different 300mm infrastructure in. I doubt AMD laid out large areas of the fab as self-contained entities that could effectively be unplugged and pulled out as units during a ramp down. That would be foolish, as this is the MOST INEFFICIENT way to lay out a fab: the support equipment for various areas would then have to be spread out (and in some cases duplicated) across the fab - it is far simpler and more efficient to put all tools of one type near each other.

This is a hard topic to explain so let me know if there are questions.

Anonymous said...

An interesting 300mm vs 450mm debate is just starting to brew at Sematech... some of the larger manufacturers (Intel, TSMC, Samsung...) are looking at this as the next logical progression - AMD appears to be fighting it and will likely try to delay this process as much as possible.

450mm will be affordable by just a few of the large manufacturing houses - if this transition occurs, AMD know they will be completely dead in the water, as there will be no way they can afford the cost of a 450mm fab (and the development to go along with it).

It'll be interesting to see what happens. Oh, and as someone who has done some work in this area, this "300mm prime" is a BUNCH OF CRAP - let's make tools more efficient so we don't need 450mm so soon, we'll just make it 30% cheaper on 300mm...

Great and maybe they can invent some of those machines that just start pushing out cash in $100 bills. How would they get to 30% cost reduction? Well, that needs to be studied... (read - no friggin clue!)... we'll just make things "more efficient"... how? (again no friggin clue)

What is even funnier is that the equipment vendors are also obviously pushing back (sell more tools on 300mm or fewer tools on 450mm?). A lot of them are also pushing 300mm' in order to delay 450mm. The problem with this logic: if 300mm' is successful and they gain all these efficiencies, it means the equipment vendors will end up selling fewer tools ANYWAY! And their job is to make things more efficient in the first place! If they can get 30% efficiency gains - doesn't that imply that their current designs are inefficient?!

Look for a small "fab club" to start working on 450mm (Intel, Samsung, TSMC, maybe Toshiba and a few others) and to post a check for the first equipment makers able to produce 450mm equipment. As these are the major guys buying 300mm equipment today, it'll be interesting to see whether the equipment vendors, facing a potential huge loss of 300mm equipment volume and a potential loss of access to the early 450mm market, will continue to push 300mm' for the 3 tools that AMD will buy... or if they may have a change of heart and start working on 450mm.

Axel said...

For posterity:

Scientia

The cost savings of electricity is tiny compared to the cost of raw materials. Raw materials are of course not needed until new tooling is purchased. The truth is that new tooling purchase is the primary constraint.

Not quite correct. I was referring to OPEX and new tooling purchase is CAPEX. From Figure 5 in that link you can see that raw materials (supplies) comprise only some 20% of 300-mm fabrication operating cost. Electricity and cleanroom upkeep would presumably be in the 10% 'Direct-Labor' category, indeed not a major factor but certainly not tiny. I cannot speak for the veracity of that link but it is consistent with what I've read before, that equipment depreciation and yield losses are the main killers. Once the equipment has been purchased, it needs to be put to work immediately and fully. This is precisely why AMD are stalling on the Fab 30 upgrade. They do not believe the demand will be there to justify these fixed costs. According to AMD themselves, little if any actual production activity is now expected from Fab 38 in 2008. When AMD say 'modest activity', it means little or nothing.

However, it would be ridiculous to think that AMD would buy a $30 million scanner and then leave it unused. AMD may have plans to move a scanner from FAB 36 as they buy new 45nm immersion scanners or they may obtain a new one for FAB 38 but either way, they have to have one.

Any of the tooling is subject to rapid depreciation and it would indeed be foolish of AMD to leave such expended CAPEX unutilized. Who knows how AMD are managing this? The bottom line is that according to Hector himself, Fab 38 will see little, if any, action in 2008.

The truth is that AMD's volume share is only slightly below its all time high and is slightly above the 2006 average, this is in spite of the fact that market volume was up in Q3.

You are forgetting what's happened to their fab capacity in the meantime. The truth is that AMD have much more fab capacity today than they did a year ago (considering both the Fab 36 ramp and the 65-nm shrink), but they are serving less of the market than they were a year ago and they do not anticipate significant continuing gains in share. Hence the reason for shuttering Fab 30.

If you really think that Barcelona is so poor then you obviously have not looked at either the 2-way or 4-way SPECfp_rate scores.

SPECfp_rate is irrelevant for this fiscal-centered discussion. K10 will be poor from a revenue standpoint, because in the markets that matter for revenue (hint: not server) K10 significantly underperforms Yorkfield per clock, and does so with a significantly larger die and much lower yield. There are already quite a few benchmarks out there to support this position. The sooner you come down to reality from the clouds of marketing materials like the K10 Software Optimization Guide, the better for your logical thinking.

Feel free to post any that I overlooked.

I don't have time at the moment for this endeavour but several folks have taken good stabs at this over at Roborat's blog over the last few months.

BTW, did you even bother to read your own link? It doesn't support your claim that FAB 30 will be indefinitely shuttered.

I never claimed that Fab 30 would be shuttered indefinitely, but expressed my personal belief that it will not produce CPUs through at least 2008.

InTheKnow said...

Therefore, a 7 layer board would have 4 conductive layers and 3 prepreg layers.

Not exactly. Layers refer to the number of metal layers in the board. So you don't see odd layered boards. You could make one if you wanted, but nobody does.

Hmmm--, with that in mind, can I go to Mouser.Com and replace some of the power supply caps/filters with larger, higher value, high quality caps?

I'd hesitate to do that. The boards are over-engineered, of course, but there are other concerns. Most modern boards have paired impedance traces (lines) on them that are used to control critical timing circuits. If you jack up the power on these traces beyond the design specifications you may mess up the carefully balanced timing circuits.

InTheKnow said...

What is even funnier is that the equipment vendors are also obviously pushing back (sell more tools on 300mm or fewer tools on 450mm?).

Everything I've heard along these lines is that the equipment manufacturers are still looking to increase their ROI on the 300mm transition. Some of the equipment saw radical modifications when going from 200mm to 300mm, so I'm sure the investment in development wasn't insignificant.

Look at the number of 200mm fabs that were built and compare that to the number of 300mm fabs in existence or under construction, and you see there are far fewer 300mm fabs. Factor in the reduced number of tools in the 300mm fabs and it doesn't seem that they would have made as much from the 300mm transition as they did on the 200mm transition.

Almost everybody went 200mm. A lot fewer went 300mm and even fewer will go 450mm.

If the equipment manufacturers haven't seen the return on their investment dollars on the 300mm transition they want to see, I think it is pretty easy to see why they aren't in a rush to go to 450mm. There is even less incentive since they will be selling to an even smaller market than 300mm.

I'm not sure I articulated the idea real clearly here, so if what I'm saying isn't obvious, let me know and I'll try to clear it up.

Anonymous said...

"you may mess up the carefully balanced timing circuits."

Ah --- trace length tuned to the cap value. No doubt coupled with the inductors at the CPU, basically a tuned resonance circuit, with trace length (and other variables) factored in.

Hmmm --- makes you wonder why, at a miserable 2 or 3 hundred bucks, anyone would trust productivity and precious data to a half-assed MB as a long-term cost/value proposition! Or, perhaps, buy a real cheese cake like DTX.

I can hear 'Margaret' at station 132 now: "Something's wrong with my computer; it's running slow and locks up all the time. Can ya come down here and check it out?"

I suppose it will go well with their half assed crippled triples!

AMD’s new mantra, Think CHEAP!

Anyway, thanks for the insight and the ART in making really good motherboards.

SPARKS

Anonymous said...

"I'm not sure I articulated the idea real clearly here, so if what I'm saying isn't obvious, let me know and I'll try to clear it up."

What you are saying is clear, but quite frankly simplistic. 300mm saw a 30-40% equipment price increase (theoretically to deal with the increased complexity of 300mm).

However, there were and are a few benefits on 300mm - the front ends of tools were standardized to the point where tool vendors could buy off-the-shelf 3rd party solutions if they didn't want to develop it themselves. Those who did the development were able to use a single solution on virtually every different type of equipment they made (as opposed to 200mm, where many equipment vendors had different solutions for different tools). Due to increased wafer cost, test wafer reclaim became ubiquitous, helping to control the Si cost for prototyping and development.

The robotics internal to the tool were a rather simple scale-up from 200mm - little development was needed here (and certainly not a 30% cost increase just for this). Early 450mm projections are that the additional weight of the wafer will not require any significant re-work of the dry robots inside tools. And while all the process tools required development, the vast majority were straightforward scale-ups.

The problem the equipment suppliers don't see is that those who are not pushing 450mm are pushing 300mm' and are expecting to see a 30% cost reduction over existing 300mm. This is WORSE than 450mm, because on 450mm IC manufacturers can realize a 30% cost decrease even with the price of tools going up 30-40%, thanks to the added wafer area.

Any bets on how a 30% cost reduction is going to be achieved with 300mm prime?
A) tools are sped up (unlikely as most of the runrate improvements were squeezed out in the early 300mm development). And even if equip vendors were to magically speed up the tool this would result in fewer tool sales too...

B) Tool price REDUCTION... this would seem to hurt, not help, the old ROI argument for equipment vendors. This is personally where I think folks like AMD will focus - they'll push the equipment vendors of the world to look at second-sourcing some of the expensive (and often critical) parts to cheaper alternatives. They'll try to squeeze even more out of spares and service (which won't amount to more than 3-5%) and then they'll push on general price reductions.

C) speed up things operationally in the fab - any bets if this happens that the benefits will be shared with the equip vendors?

People fail to realize it was 5-6 technology nodes from 200mm to 300mm. 300mm is entering its 4th node (depending on what you count as the start), so in 1-2 generations it will be about time to move on. If you look at cost/cm2 of Si over time, each node adds 5-10% to wafer cost due to additional metal layer(s), increased litho costs, and insertion of new technologies to meet performance requirements. The only proven way to reduce the cost/cm2 of Si is through increased wafer size - the sooner the equipment vendors realize that, the better off they'll be.

Without a wafer size increase to offset the increasing cost of each technology node, folks will stop (or slow down) the technology node scaling and this means fewer tools purchased as well.

Growth occurs through new markets - you can only squeeze so much blood out of the optimization/efficiency stone before it is completely dry. Generally speaking after 4-5 generations of tool development on a given wafer size, there is not that much you can do without fundamental changes to make it faster/better/cheaper.

"Some of the equipment saw radical modifications when going from 200mm to 300mm"

Very little equipment saw "radical" modifications - most of the equipment was a geometric scale-up, which is a matter of engineering (not research and development). I would be curious to hear what tools you consider a radical change from 200mm to 300mm.

InTheKnow said...

I would be curious to hear what tools you consider a radical change from 200mm to 300mm.

Well, you have the advantage on me here since my experience with 200mm tools is limited to what I've read and been told. I've only entered the industry since the advent of 300mm.

But I believe that Nanometrics(?) UDI (a particle detection tool for those that don't know) is one system that saw significant changes. Even after being in use for several years, many engineers seem to be surprised to learn that it spins the wafer. The old 200mm system didn't.

Another obvious candidate is the Applied Materials HIM tool. I'm pretty sure the idea of strapping 13 wafers to a 6' wheel and spinning them at several hundred rpm was a new idea as well. Whether or not that is a sane thing to do is another question altogether. :)

A final example is ASM's line of dual reactor diffusion furnaces. To the best of my knowledge these tools do not have a 200mm analog. The lack of a mature design on the early tools would seem to support that assumption.

Anonymous said...

“What you are saying is clear, but quite frankly simplistic.”

Whoa, easy GURU, nothing about this stuff is simplistic!


“Require any significant re-work of the dry robots inside tools. “

Is that robot with an (s)? What do they have in these tools, little robots running around inside the units, moderating the etching and annealing process?!?!? What the HELL is a "dry robot", a Stepford wife, perhaps?


“Most of the equipment was a geometric scale-up which is a matter of engineering”

Ok, so AMD had the right tools for the job. Was it that they couldn't make the process work because they didn't know how to use the tools? Or was it the SOI process that screwed the pooch? Yet AMD TALKS about 45nm in mid 2008 (yeah right, OK), but they can't get good yields at 65nm. So what's so simple if they couldn't pull it off?


“I would curious to hear what tools you consider a radical change from 200mm to 300mm.”


Let's say I know what I'm talking about, which I don't. However, all you chip wiz's mention that as the wafer size increases, the "edge patterning" increases; wouldn't that mean unusable chips at the periphery would be proportional to the square of the wafer? (That sounded too good for me to say.) Whatever the number, it's a lot, no? Is the 30 to 40 percent increase worth it as opposed to going to 32nm at 300mm? Wouldn't you get the same yields, relatively?


AMD CANNOT afford 450mm now, unless CHARTERED SEMI makes the transition, can they?


SPARKS

Roborat, Ph.D said...

InTheKnow said: Almost everybody went 200mm. A lot fewer went 300mm and even fewer will go 450mm.

You also need to consider that the companies that moved to 300mm purchase more than 90% of the equipment produced today. The market dynamics when everyone shifted to 200mm were drastically different from 7 years ago when 300mm was introduced. A lot of semi startups today go 100% fabless at the onset.

For some, like the analogue IC industry, 200mm is the sweet spot, where scaling isn't as critical and revenue/volume growth isn't as healthy as in the digital industry.

As long as Intel, Samsung and the big foundries are interested in having a leg up in cost using 450mm, the shift will happen.

Anonymous said...

"Whatever the number, it’s a lot, no? Is the 30 to 40 percent increase worth it as opposed to going to 32nM at 300mm. Wouldn't you get the same yields, relatively?"

No... most people completely IGNORE the added costs of moving to a new tech node. Yields are not the only story. Sure, you (theoretically) get 2X density with each node shrink - but the dirty little secret is that typically you are lucky to get 40% more dice (logic doesn't shrink like SRAM), and you are adding ~10% to the cost for the additional metal layer and things like high-k, better litho, more sophisticated implants, anneals, strain technologies, etc...

With a wafer size increase you are getting a 2-2.25X area increase. The capital is more expensive, however you are still achieving a 30-40% cost/cm2 reduction. So even in the terrible case where you only get yield out of the inner 300mm of a 450mm wafer (which obviously would not be the case), you are getting the same output at 30-40% lower cost. Now compare that to shrinking, which may give you 40% better density, but at 10% ADDED cost.

Now consider the likelihood of only the inner 300mm of a 450mm wafer yielding, and I think the answer to your question of whether the relative yields should be similar is rather obvious (they're not). So you need to look at yield normalized to cost - I could probably get near perfect yield these days on a 0.18um process, but how much good would that do me?

Simply put, Moore's law becomes a cost bust without periodic wafer size increases.
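The cost argument in the last few posts, reduced to arithmetic. The ratios are the ones quoted above; the baseline wafer cost and die count are arbitrary units, and the 35% wafer-cost bump for the upsize is an assumed midpoint of the 30-40% range.

```python
# Node shrink vs wafer upsize, using the ratios quoted above.
# Baseline wafer cost and die count are arbitrary units.

base_cost, base_dice = 100.0, 100

# node shrink: ~40% more dice (logic doesn't shrink like SRAM),
# ~10% added wafer cost for extra metal layers, litho, etc.
shrink = (base_cost * 1.10) / (base_dice * 1.40)

# 300mm -> 450mm: ~2.25x area, assume ~35% higher wafer processing cost
upsize = (base_cost * 1.35) / (base_dice * 2.25)

print(f"baseline cost/die: {base_cost / base_dice:.2f}")  # 1.00
print(f"after shrink:      {shrink:.2f}")                 # ~0.79
print(f"after upsize:      {upsize:.2f}")                 # ~0.60
# The wafer-size move cuts cost/die ~40%; the shrink only ~21%.
```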

Anonymous said...

"So what’s so simple if they couldn’t pull it off?"

#1 - they (AMD) did not have the capital for 300mm early on - while the ROI is good, there is a huge up-front cost, and there were huge early development costs. Intel, with their size, had a quicker ROI than a smaller volume company and therefore could afford to foot the bill of the new fabs, not to mention the significant cost of debugging ALL of that 300mm equipment and working with the vendors to make the tools manufacturing-worthy.

I think SOI may also have been a part - at the time of 300mm, there was difficulty getting enough bare Si 300mm capacity online, let alone 300mm SOI... Forget costs (which would have been astronomical), I don't think the capacity would have been there even if AMD had the money.

Finally, the tools were generally geometric scale-ups, but the process integration was not trivial. There were very subtle issues like the change from cassettes (which meant wafers were constantly exposed to fab air when not being processed) to FOUPs, which, while not hermetically sealed, meant much less exposure to fab air - this had huge impacts at some process steps and required a great deal of engineering - not from the equipment vendors, but from the IC manufacturers.

It is clear that AMD cannot afford 450mm - which is why they are fighting it. The problem I have is they shouldn't be making bogus arguments like: we can get a magical 30% cost reduction on the existing wafer size (which has NEVER been done in the history of Si processing), and/or 450mm will bankrupt the equipment suppliers (they will be selling fewer tools but at higher margin, and there will still be significant 300mm equipment sales). AMD would like to do what they did on 300mm - let others (Intel, IBM, Samsung) do all of the heavy lifting and draft in when things are considerably cheaper. The problem is, if things on 450mm start soon, they will not get all of the leverage they want out of 300mm, and they don't have the working capital to spring for a new 450mm fab. The longer it is put off, the more AMD holds their disadvantage to the ~2 year technological lag they currently have (I say 2 years because, as an educated person in this area, I'm not naively looking solely at technology node starts as the sole determining factor). If AMD falls behind on both nodes and wafer sizes, Intel will price AMD into the ground permanently.

Could you imagine what the pricing pressure would be like if Intel were getting 3-4X the chips per wafer (2-2.2X for wafer size x ~1.6X for node)? How much would those ~$266 quad cores be going for?

Anonymous said...

"What the HELL is a “dry robot”,"

Yes, I meant robot(s) - some tools have several robots inside a single tool. In fact virtually every tool has a front end robot which takes a wafer from the FOUP (pod that holds the wafers) and puts it inside say a vacuum chamber (more precisely a load lock). Then there is another robot (or even 2) inside which would take the wafer from the vacuum chamber to the specific process chamber.

These "robots" generally have 3 axis of motion, an "r" (extend), "theta" (rotate) and "z" (up/down). The robots on the front end in addition to these motions will also move back and forth a track (in order to handle multiple FOUP's being on the tool at a single time).


Then there are tools with far more complex robotics...

Surprisingly enough, the designs that can handle 300mm wafers can also handle 450mm. Some only need a simple change to the "end effector" (the part that physically holds the wafer). Others may need some minor mods (such as longer linkages due to the increased wafer size).
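For the curious, the geometry is simple enough to sketch: a controller mostly just maps (r, theta, z) poses to Cartesian station positions. The station poses below are made up for illustration.

```python
# Minimal sketch of (r, theta, z) -> (x, y, z) for a wafer robot.
# Station poses are made up for illustration.

import math

def to_cartesian(r_mm, theta_deg, z_mm):
    """Map extend/rotate/lift coordinates to x/y/z."""
    theta = math.radians(theta_deg)
    return (r_mm * math.cos(theta), r_mm * math.sin(theta), z_mm)

stations = {"loadlock": (450, 0, 120), "chamber_a": (500, 135, 95)}
for name, pose in stations.items():
    x, y, z = to_cartesian(*pose)
    print(f"{name}: x={x:.0f}mm y={y:.0f}mm z={z:.0f}mm")
```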

Here's a video of a typical robot (it's from a small startup company) if you are really interested:

http://www.blueshifttech.com/downloads/BlueShiftRobots.wmv

Anonymous said...

"Simply put Moore's law becomes a cost bust without periodic wafer size increases."

Got it. Even with a larger wafer, you still got to trim some fat off the edges to get to the filet, especially when you shrink. Then there will be less fat over time as you learn how to cook the edges.

I suppose that’s why INTC refines a process before they move on to a larger wafer. Then, they shrink when the older process is proven on the larger wafer.


"How much would those ~266 quad cores be going for?"

Yo, that's easy, even for me!

Cheaper than Crippled Triples!


SPARKS

Anonymous said...

GURU! GURU! Blue Shift Video!

Those things are beautiful! The most I ever get to install are huge switch gear, industrial motor controllers and integrated building management systems.

But these things, wow! The precision machining and tolerances must be incredible! Not to mention the metals that are non-reactive to the process. The whole tool looks like surgical stainless. Further, the video says they LEARN as they go! More than I can say for some of the knuckleheads I work for or with!

I saw the software interface PDF, nice! Come on Guru, spill it. Do you get to play with stuff? Is this what you do everyday to keep your significant others well fed, in the pink, and happy?

SPARKS

Anonymous said...

"Is this what you do everyday to keep your significant others well fed, in the pink, and happy?"

The Blueshift stuff specifically no, but with semiconductor equipment, yes.

I thought this was some pretty cool stuff especially what they did on the SW side - in addition to on the fly adjustments they can tell on some occasions if a robot is going bad or needs a PM (preventative maintenance) based on the amount and type of drift.

The robotics are not quite as cool as what you would see on an automotive manufacturing line (now those robots are RIDICULOUS!), but the robotics in the semi space have their own unique challenges - namely, those robots have to do all those motions while adding fewer than one 0.09um particle to a wafer.
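A toy version of that drift-based PM flag, just to show the idea - window size, limit, and readings are all invented:

```python
# Toy drift monitor: flag a PM when the rolling average of placement
# error walks past a limit. Window, limit, and readings are invented.

from collections import deque

WINDOW, LIMIT_MM = 10, 0.05

def drift_monitor(errors_mm):
    window = deque(maxlen=WINDOW)
    for i, err in enumerate(errors_mm):
        window.append(err)
        if len(window) == WINDOW and sum(window) / WINDOW > LIMIT_MM:
            return f"PM flagged at handoff {i}"
    return "no PM needed"

readings = [0.01 + 0.004 * i for i in range(25)]  # slowly worsening error
print(drift_monitor(readings))  # -> "PM flagged at handoff 15"
```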

Anyway, I digress - how about that DTX stuff? Really starting to take off now, especially after AMD (supposedly, according to Scientia) delayed their past analyst meeting in part to hype and show off DTX. All I've got to say is that Dementia's analysis is dead on, and that push-out from June to July of that analyst meeting made all of the difference for DTX! It is remarkable how dead on his predictions are (or should I maybe just say 'dead')!

Oh well I'm sure he will have another 30 paragraph blog to talk about why he is right and how DTX is just being kept down by the man, or maybe it's the compiler's fault or maybe APM will save it or.... it's a good start...

Anonymous said...

Yeah, I know what you mean. I've seen a considerable amount of "drift", especially after lunch when some of the 'equipment' had a little TOO MUCH preventive maintenance. I’ve personally seen tolerance drift go off the scale.

Nothing is uglier than (6) 4 inch rigid 90's, with a high 22 1/2 kick, and 3 of 'em doglegged into a $75,000 ATS. Trust me. If you're on the west coast, you've probably heard me screaming. (It ain't pretty.) I work on the east coast.

Well, perhaps, maybe a DTX with extremely limited functionality, with a 2.6 GHz Crippled Triple sitting on my desktop with 1 PCI E slot, and 1 PCI slot!

HORROR OF HORRORS!

SPARKS

Epsilon said...

Hey roborat, OT but worthy of a new topic IMO. :)

http://news.expreview.com/2007-10-29/1193590532d6599.html

Crysis CPU benchmark between E6850, QX6850, QX9650 and Phenom X4!

Doesn't look too good for AMD I'm afraid. 5 - 10% slower than C2D/C2Q clock for clock!

Unknown said...

Hey sparks, you'll like the looks of this! http://enthusiast.hardocp.com/article.html?art=MTQxMSwxLCxoZW50aHVzaWFzdA==

Yorkfield overclocked to in excess of 4.3GHz!

I guess it's just too bad I can't justify spending $1000 on a CPU. If I could, I'd buy one of these come November 12th!

Anonymous said...

"Doesn't look too good for AMD I'm afraid. 5 - 10% slower than C2D/C2Q clock for clock!"

I guess the 40% better is only on obscure benchmarks for server chips. Talk about lowering expectations. I'm sure that Dementia will include this in an article and say the important thing is that it is "competitive" and might be offered for a lower price. Couple of things wrong with this:

1) That Phenom 3.0GHz doesn't exist. Who knows when it will.
2) All of the performance/watt talk is now thrown out the window, as Phenom will not even be close... Scientia will claim this is an enthusiast chip so it won't matter.
3) DESPITE claims that it is an enthusiast chip so power isn't important, Dementia will then claim the overclocking potential (which obviously will be far superior on Penryn) is not important as few people do this.
4) This is the starting point for the Penryn enthusiast chip, compared to what AMD HOPES to achieve.

Is it not almost funny that the idle power of this chip is just about where the current K10's are? Or maybe funny that this thing consumes about as much power as AMD's top of the line DUAL CORE chips at peak and SEMPRON SINGLE CORE chips at idle?

Tonus said...

The biggest thing for Penryn, from a PR standpoint at least, will be the overclocking capacity. If those chips can do close to or more than 4GHz reliably, they'll become the flavor of the month for enthusiasts. And so far I've seen one site OC it to 4.3GHz and another to 4.7GHz. Both on air if I am not mistaken.

Could we see 5GHz clocks on water-cooled systems before long?

Granted, the enthusiast community probably doesn't drive sales to the extent that we might think it does, but highly-overclockable Penryns could bury Phenom in terms of how much attention they get amongst the community. Some hardware geeks are die-hard Intel or AMD fans, but most of them just want to OC the hell out of something.

Anonymous said...

Who gives a cooter about DTX? By the time it launches, never mind takes off, mobile will continue to usurp desktops. PC gaming is dead so who gives a cooter about mobile GPU. Add a wireless mouse and a Penryn, you're all set to destroy DTX.

Anonymous said...

Yo GIANT!!! Thanks! I missed that review over the weekend.

I'm gonna need a water cooling setup on my credit card 'cause it's burning a hole in my back pocket!

SPARKS

Anonymous said...

Giant! Chew on this a while. Even the 'RAG' is clocking the 'Penny'!

Interestingly enough, and quite coincidentally, they've clocked it against my beloved 955EE (R.I.P.) at the same 4.26 GHz I used to run.

So, what does this show Dementia and Company? Ya don't need a super long, phallic pipeline to get super clocks! All ya need is the best people and chip company IN THE WORLD!

No more skywriting in California, and no more gaudy posters on Park Ave.

Target 34, 2Q ‘08

http://www.theinquirer.net/gb/inquirer/news/2007/10/29/first-inqpressions-intel-qx9650

SPARKS

anonymous said...

One more comment on "everyone" doing APM-

Intel published this paper in 2005:
"Improvements in polysilicon etch bias and transistor gate control with module level APC methodologies ", Williams, et.al. IEEE Transactions on Semiconductor Manufacturing. Vol. 18, no. 4, pp. 522-527. Nov. 2005.

Note that this was reporting results on P1260 (.13um), and they report a 12% improvement in gate CD control - a big deal.

InTheKnow said...

Guru, am I to take your lack of a rebuttal to mean that, at least in the specific instances I cited, there had been significant R&D and not just tool scaling? I'm not trolling here; it's a legitimate question since I lack direct 200mm experience to base my assumptions on.
