As promised in a previous blog entry, it was only a matter of time before Cray made a deal with Intel and left AMD out in the cold. Cray's 2007 earnings were drastically affected solely by AMD's product delays and poor performance: "... The decrease in product revenue (16%!!!) was principally due to delays in completing new products in time to recognize revenue in 2007 because of delays in product development and component availability" - Cray 2007 annual report.
WSJ: "Intel Corp. and supercomputer maker Cray Inc. announced an alliance that could add to pressures on Advanced Micro Devices Inc., and aid Cray's battles with much larger hardware vendors".
While the immediate implications are obvious, such as the decline in 4P market share for AMD and the associated drop in healthy margins, there are other far-reaching but less obvious things to consider. For one, AMD's foothold in the server space would completely evaporate once Nehalem ships with the likes of Cray on board. Design wins in the HPC space are always critical due to the significantly longer product life cycle (sales and after-sales support). It is never easy to get back into this segment unless there are very compelling reasons to do so, because price, which seems to be AMD's only forte, plays a lesser role.
The news about the collaboration in R&D could only mean more bad news for AMD. Intel will be willing to pay for a significant portion (read: almost all) of the R&D as long as it involves their CPUs, with a bit of pressure to forget about competing platforms. A clever and legal way of locking in your supplier for future products. Cray may be only one HPC vendor making such an announcement, but the shift among the big players has been going on for quite some time now.
Expect Opteron based systems to disappear one by one from the supercomputer list in 2010.
4.28.2008
Cray Dumps AMD
73 comments:
I'd hate to be the guys at Cray who had to negotiate terms with Intel after this happened.
Almost as much as I'd hate to be the guy at AMD who ever has to contact Cray about a strategic partnership in the future.
Sparks,
I have found your discussion with Scientia to be interesting to say the least.
He says:
I'm sorry but it is just very difficult for me to understand why someone would prefer the locker room/clique mentality of roborat's blog.
That is easy to explain. I can post what I want on this blog. No one will a) delete it, or b) cut it up and repost it in part with "responses" to the parts of my post that the blog owner deems fit to address.
Right or wrong, good or bad, I took the time to compose my thoughts and type them up. As long as there isn't an issue with the language or defamation of character, I don't see a need to remove posts.
He goes on to wonder
The really ironic part is that if I wanted to I could certainly make my blog very pro-AMD but I have not done that. It amazes me that the people who post on roborat's blog and Roborat himself are not smart enough to see that.
So now we insult my intelligence since I don't share his viewpoint. This is just a polite way of telling me I'm stupid for posting here. Presumably, if I were intelligent, I would post on his blog on his terms.
As to being a pro-AMD blog, perhaps he should look at the people who post there now. AMD supporters one and all (okay, enumae would be an exception). That makes his blog no better than Roborat's. It is just a hang out for AMD fans. There is no longer a dissenting voice. He has killed it.
In my final post on his site, I told him that I had found that his blog was not a place for open and honest discussion. My post was quickly deleted, but removing my post does not change my perception.
I don't think anyone would accuse me of being an AMDroid, but I am interested in the opposing viewpoint. In that regard I find this blog a bit lacking, since the few souls (like HYC) who have ventured to post here get eaten alive. But even with its shortcomings, it's the best industry blog I've come across.
"In that regard I find this blog a bit lacking since the few souls (like HYC) who have ventured to post here get eaten alive."
Quite frankly, with some of the things he has said, he deserved to be "eaten alive"... it would be one thing if his statements were framed as 'my opinion' or 'my theory', but when he concludes and compares things incorrectly, well, it should be challenged. That said, I do respect his posting here and not simply posting in a 'friendly' environment.
The difference between this blog and Scientia's is that Robo won't selectively filter and selectively edit the discourse... this in my view is worse than refusing to post a comment as there is no way of knowing what he is taking out of context.
As for content, in my opinion, there is an absolutely huge chasm in expertise in the process and manufacturing areas on this blog - you have people who have both academic and practical knowledge in this area and are not simply trying to interpret things seen on the web. In my view this is clear when you see predictions on things like clock speed or TDP or launch dates based on some of the underlying fundamentals, vs what I see as largely empirical extrapolations on Scientia's blog. Things like solely looking at release schedules to assess technology differences between companies, or creatively interpreting Sematech presentations to fit a desired blog entry, illustrate the lack of understanding of what lies the next level down (in terms of info) from that data needed to truly understand it and draw conclusions from it.
That said, there are some very good SW and architecture people on that blog (in my view, MUCH more so than here), but there seems to be a need for some of those folks, who are clearly out of their element in other areas, to try to convince people about AMD's prowess in the process and manufacturing area. You don't see the process or manufacturing folks here cashing in on their reputation to make claims in areas they don't understand.
What I like is when folks are open about what they don't know and don't try to pass themselves off as experts in areas they are not. That will always happen to some extent, but when you try to refute some of this on Scientia's blog it gets filtered - in my view this is due to fear of being viewed as less knowledgeable in certain areas, or just a desire to make AMD seem better or less far behind (though admittedly, I'm no psychologist and this is solely one of the anonymous robo-trolls' views).
I do find the 'I was very accurate in 2003-2005, but since Core 2 I have been less so' (I'm paraphrasing) evaluation somewhat amusing. One (or should I say 'we' to make it sound better?) could naively view this as Scientia's predictions are accurate when AMD is doing well...perhaps because he largely predicts good things about AMD and bad things about Intel. I'd suspect if Intel continues to do well and AMD struggles, Scientia's predictions will continue to be poor, but if AMD starts to do well the accuracy will pick up.
The difference with the blog here, is there is much less false pretense - many people who are fans, don't pretend to be unbiased objective posters and as many have pointed out the comments are posted regardless of ideology and don't get deleted if Robo doesn't personally agree with them.
Robo - did the press releases state Cray was dumping AMD or simply that they would start using Intel in the future? It may not necessarily mean that AMD is being dumped, but rather Cray is hedging its bets and may go forward with both suppliers.
Realistically, for a company Cray's size and the area they operate in, it probably doesn't lend itself to this approach but from what I read on the web it was not clear this was a 'dumping' of AMD.
I also think the supercomputer list will evolve slowly even if Intel takes all of the Cray business. (It is also in my view not a good indicator anyway as it seems to be a lagging indicator).
To me, putting aside the HPC applications - what will be interesting is whether, with the growth in computing power and # of cores, 1P and 2P will continue to eat into the need for 4P+ servers. If you start talking about a 2P server with 8 cores in each socket, 4P may really diminish except in niche applications. (If I'm not mistaken, 4P+ is still relatively small compared to the 1P and 2P market).
What the heck?!?!
http://www.digitimes.com/mobos/a20080428PD219.html
(AMD desktop lineup revealed)
Some highlights:
- 'while the low-power 8450e (Tollman) will see production begin in the second quarter' You mean they are INTENTIONALLY starting these or this is when wafers will start that they expect to have yield problems on?
- 'The Phenom X4 9150e, which was originally planned to be launched in the second quarter, will not be available for orders until the third quarter, along with the 9350e. In the fourth quarter, AMD will launch another low-power CPU'
So 9150, 9350, 9450, 9550, 9650, 9750, 9850 and potentially a 9950... now also throw in some 0MB variants... Huh? 8+ products (probably at least 10) to cover the quad desktop space? Are you kidding me? Is it just me or is this insanity? You gotta think the top price is in the $250 range... with 10 products what are the price increments going to be?
"if the process goes smoothly, 45nm Phenom X4 CPUs should appear in the market by the end of November, added the sources."
Leaving AMD squarely a year behind Intel (or more if you consider actual process node performance) and this is with AMD running at breakneck speed to new tech nodes - I just don't see the closing of any gaps that others have foretold.
And it looks like 2.8GHz is the top potential speed through Q4'08 (ranges were given of 2.5-2.8 for the top 45nm SKU in Q4'08) and with a 95 Watt TDP. The 95 Watt TDP is a bit of good news, as it is improved over the current 125 Watt top bin parts - though AMD is expecting to reduce this on 65nm as well, so it's hard to say if this is a 45nm improvement or not.
"In that regard I find this blog a bit lacking since the few souls (like HYC) who have ventured to post here get eaten alive."
Quite frankly, with some of the things he has said, he deserved to be "eaten alive"... it would be one thing if his statements were framed as 'my opinion' or 'my theory', but when he concludes and compares things incorrectly, well, it should be challenged. That said, I do respect his posting here and not simply posting in a 'friendly' environment.
Obviously I don't know the facts behind AMD's decisions, so anything I said previously about their honesty/whatever could only be taken as "my opinion" or "my theory."
While, like any other person, I have obvious biases, I am no fanboy. As you folks have noted, if scientia or anyone else makes a statement that I suspect is wrong, I will call it out. I have no investment in Intel or AMD one way or the other; there are no sacred cows here for me.
When I make a wrong statement, I expect that to be called out too, because I'd rather learn the truth than stay ignorant. I might prefer a few less slings and arrows, but what the hell, I throw plenty of my own in other venues.
Ultimately what matters to me is software efficiency and performance. The largest deployments of my software run on SGI Altix - Intel Itaniums. For a few years there nothing else on the market could even approach them in terms of single-system-image scaling. Other folks can have religious wars about whether Itanic is a good thing or not, but what matters to me is that it solves an otherwise unsolvable problem for my customers.
There's an old joke that "there's nothing more dangerous than a computer programmer with a screwdriver." My degree was in computer engineering; I studied both hardware and software design in college but my last VLSI design course was more than 20 years ago and since then I've only kept up my software skills. I expect to be wrong more often than right in conversations in this crowd. (Thanks for delivering on my expectations...)
"The difference between this blog and Scientia's is that Robo won't selectively filter and selectively edit the discourse..."
That's really the only thing I don't like about the comments section on his blog. I agree with his deleting of comments that are mostly flames or trolling, but there are times when he deletes a post and then responds to the deleted post, and you don't know if he left any relevant parts out. Or if you *did* see the post before he removed it, you may wonder why he didn't respond to certain points.
I think it's a good idea to remove posts when people are being abusive or trolling, and then leave it at that. I think that people will either start making posts that just address issues and leave out the crap, or they will stop posting (and who will miss them?). But removing a post and then responding comes off as a suspicious act.
***
As for myself, I'm more interested in looking back and reading about why things have happened than in looking ahead. So much of the technical information is over my head, and lots of details are kept secret by the companies involved, which makes predictions difficult and questionable most of the time.
But I can usually follow the discussion to some degree and enjoy seeing the technical points being made, even if I don't know enough to dispute or support any of them. And since I'm mostly observing, I don't really have anything at stake. Nothing at stake, and I get to read interesting commentary. Win-win situation.
Tonus
But I can usually follow the discussion to some degree and enjoy seeing the technical points being made...
The problem is there's practically no technical dialogue of significance anymore in the discussion section of Scientia's blog. You may have been following over the last year or so, but I think it's pretty clear that that section of his blog died months ago due to the excessive censoring / moderating. As has already been noted here, the bulk of the comments on that blog are now posted by ignorant anti-Intel zealots grasping at Scientia's flawed predictions as the last rays of light left amid AMD's darkening fortunes.
For me, Scientia's posts themselves have consistently been somewhat interesting to read (though laughably misguided and lacking common sense). It's the discussion section that has gone to total crap. A year ago the discussions were far more engaging and Scientia's moderation more lenient. Then as the accuracy of his predictions continued to sour in the second half of 2007 (e.g. K10 performance, significance of DTX, etc.), he became increasingly defensive and intolerant of dissent, leading to the current useless state of the discussion section. It's now nearly on the same level as Sharikou's.
Anonymous said...
Robo - did the press releases state Cray was dumping AMD or simply that they would start using Intel in the future? It may not necessarily mean that AMD is being dumped, but rather Cray is hedging its bets and may go forward with both suppliers.
The $250M DARPA contract that should be prototype complete by 2010 will be coming out with Intel CPUs instead of AMD. Cray's direction for HPC systems has switched sides. I consider that fundamentally dumping one technology for another. Cray didn't say they will be Intel exclusive, but you must agree it's a catchy title.
In The Know
You know me Bro, I call 'em like I see 'em.
BTW: UPS did not arrive today :( :(
SPARKS
“The $250M DARPA contract that should be prototype complete by 2010”
Doc-
Since CRAY will be using Xeon chips in the interim, and Nehalem has been seen here and there up and running, can we assume they were at least impressed by its performance enough to make the swing to INTC? Further, would they use a four or eight core for their specific needs? Will other manufacturers follow CRAY's move eventually?
What’s your take on Itanium with regards to Nehalem?
SPARKS
Read Scientia's parting sentences here and judge for yourself: has he become Sharikou junior?
"The basic strategy involves replacing batch tooling with single wafer tooling and reducing batch size. AMD wants to drop below the current batch size of 25 wafers. AMD figures that this will dramatically reduce Queue Time between process steps as well as reduce the actual raw process time. Overall AMD figures a 76% reduction in cycle time is possible so a 50% reduction should be reasonable. Today, running off a batch of 25 wafers is perhaps 6,000 dies. Reducing batch size would allow AMD to catch problems sooner and allow much easier manufacturing of smaller volume chips like server chips. Faster cycle time means more chips with the same tooling. It also means a smaller inventory because orders can be filled faster and smaller batches mean that AMD can make its supply chain leaner. All of these things reduce cost and this is exactly how AMD plans to get its financial house in order"
This is a most funny line of thought, showing how desperately Scientia is stretching to spin something out of NOTHING!
AMD really doesn't have options to replace batching. They are a small fry in the chip business with little leverage over tool manufacturers. Last I checked, all major processes continue to be "batch." The largest buyers of tools also do huge volumes and thus will choose the right processing for the most cost-effective manufacturing. AMD can talk till they are blue, but it is just noise from a mouse. It's AMD jumping and waving, trying to distract everyone from the real issues. Everyone is working on cycle time and batching. Everyone is doing SPC, APC, APM, blah blah blah. Where everyone else is guarded, no one wants to give away their competitive advantage. It's funny that Doug Gross let it slip in one presentation what AMD considers good yield. What they judge acceptable would be judged dreadful by many others, similar to AMD's financial performance: dreadful!
Let's revisit Scientia's silly thoughts on batching.
1) Wafer transportation is done in FOUPs that are 25 wafers in capacity. Using them for fewer than 25 wafers, say even 5, would increase the number of FOUPs by 5x. That would fill the fab with so many FOUPs it would also overwhelm the automation system. Sorry, unless AMD gets the whole fab automation tool set to change, they won't get much speed-up in tool-to-tool moves without busting the fab stockers and automation bottleneck. You'd have one huge fab moving a bunch of nearly empty FOUPs.
Scientia, do you have any clue how a modern fab works and what the constraints and considerations are?
2) All major tools are still batch. They come in two groups: ones that process in batch and those that process singly but load/unload in batch, making true single-wafer station-to-station totally BS. They include pretty much the whole damn tool set: furnaces, rapid thermal anneals, deposition, etch tools, steppers, etc. Everything, so Scientia doesn't know WTF he is talking about. Again I ask: Scientia, have you ever even seen a semiconductor tool in action?
"Faster cycle time means more chips with the same tool." LOL here Scientia totally shows his stupidity again. You should just shut up and stay away from technology as you show again and again you have no clue. THe capability of the tool hasn't changed whether you do it batch or singular. Take a rapid thermal anneal tool, or a sputter with 4 chambers.
NOTHING has changed for the wafers batching or not. It still needs the same fixed time for anneal and or deposition.
Today these type of tools permit queuing two FOUPS so when one is completed the next can start with NO wait. The tool is so expensive that most factories already have them running full out 7x24. Single wafer or batch will NOT increase the number of wafers that can be processed by most tools in the fab. The capacity of a factory will NOT increase by a materially amount with faster cycle times. WTF is this idiot talking about? More spin control like Hector. Smoke and mirrors versus deliver the result. Might as well be walking thru an argument about why INTEL will go BK like Sharikou did.
"Allof these things reduce cost and this is exactly how AMD plans to get its financial house in order" AMD's problem has
little to do with Fab cost. It has less to do with the billion dollar plus factory not running efficiently or not. AMD is
trying to turn attention away from the most fundamental problem that they have.
AMD's real problem, and the one they refuse to admit they need to fix to compete with INTEL:
It takes billions of dollars of R&D every year, for many years, to field a leading edge process merged with a leading edge design, ramping this to produce hundreds of millions of CPUs just in time to capture the billions in revenue and the required high margins to do that cycle again.
Right now AMD hasn't invested in the process, so they are stuck with billions of dollars of depreciating equipment that produces hundreds of millions of processors that they have to sell at prices so low they can barely break even.
They try to cover up this fundamental chicken/egg problem with fancy words. Bottom line: today they don't have a high-end leadership product to set their ASP across the product lines. Thus they take expensive new designs and fab them in expensive, depreciating factories at commodity prices. This is totally bankrupt! Reducing costs won't fix this. This is like a commodity memory producer thinking he can produce more and more chips at ever cheaper prices to make up for the loss he incurs on every chip.
AMD can only fix its problem by getting a high-margin product and a medium-margin, high-volume product. Today they have no product in that space. They make noise about 45nm coming by the end of the year. What is most funny is that their 45nm product at that time will be competing with the top-end 65nm from INTEL at the bottom, while Nehalem and Penryn products will command the premium to mid range and rake in the profits as AMD bleeds more red ink.
Losing Cray is a death blow; everyone will now start moving to Nehalem, and thus AMD will lose their last high-profit segment.
Tick tock, tick tock, your time has run out, AMD.
If you look back and see all that tried and failed, they all had bigger bank accounts: Digital with Alpha, IBM with PowerPC, TI with SPARC and DSPs, the Japanese consortium with TRON, HP with PA-RISC. Yawn, it's so obvious; why are people so silly as to believe the AMD story will be different? Oh yes, because it's x86. But let's not forget they are in the game because INTEL treats them with kid gloves, and the only reason anyone even believed they had hope had more to do with an INTEL screwup than any AMD execution or strategic brilliance. Now it's all over for AMD... Tick tock, tick tock.
Sparks said...
Since CRAY will be using Xeon chips in the interim, and Nehalem has been seen here and there up and running, can we assume they were at least impressed by its performance enough to make the swing to INTC?
Cray isn't like Dell. They don't design a system in a couple of months and start shipping. It takes them 2-3 years to develop a new product. I'm pretty sure they won't be using the existing Xeon chips, but will only be using Nehalem.
I also saw something that indicated they would at least continue selling their existing designs based on the Opteron processor in the interim. I can't remember where the link is to that one offhand. I'll post it if I stumble across it again.
anonymous said...
It's funny that Doug Gross let it slip in one presentation what AMD considers good yield. What they judge acceptable would be judged dreadful by many others...
Link please! I'd like to see that! Or if all the evidence has been scooped up and swept back under the rug, I'd still like to see a number here.
Sparks said: Since CRAY will be using Xeon chips in the interim, and Nehalem has been seen here and there up and running, can we assume they were at least impressed by its performance enough to make the swing to INTC? Further, would they use a four or eight core for their specific needs? Will other manufactures follow CRAY move eventually?
The $250M contract is for concept development only; therefore the choice of multi-core CPU depends on what is available at the time of build. The original requirement in 2002 was at least 8-core CPUs.
I would say that there are other considerations for Cray's CPU selection besides performance, one being the ability to scale and work with their existing interconnect technology. It is more to do with AMD's unstable execution and poor roadmap that has made Cray look elsewhere. Of course, what Nehalem brings to the table, like using the multi-chip variant with the IGP as a possible accelerator, is definitely a bonus. The capabilities and guaranteed availability of Nehalem and Sandy Bridge in 2010 are just too good to pass up.
Doc-
Thanks, I suspect we all knew this was coming after AMD's failure last year.
Minimum 8 cores, native. Impressive.
SPARKS
Anonymous, I'm going to play devil's advocate here.
1) Wafer transportation are done in FOUPS that are 25 wafers in capacity. Using them for less then 25 wafers, say even 5 would increase the number by 5x. That will fill the fab with so many FOUPS, and also overwhelm the automation system.
First, you've chosen an extreme example. Say that you want a batch size of 12. Now you've approximately doubled the number of FOUPs in the system. Still an impact but hardly 5x.
Also remember, the goal is to reduce cycle time. With a reduction in cycle time, FOUPs are spending less time in the stockers and more time in the tools. Since FOUPs aren't spending as much time in the stockers doing nothing, you are able to reduce the FOUP count in the factory at any given time.
So by choosing a smaller, but more reasonable, FOUP size based on the graphs in the Intel slide I posted earlier, I'd estimate this would only lead to about a 20% increase in the number of FOUPs in the factory.
Depending on loadings, this could be a bit tight, but still manageable.
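A quick back-of-envelope way to sanity-check that estimate (a sketch only; the 45% cycle-time reduction and lot sizes here are my assumed numbers, not figures from the Intel slide):

```python
# Rough model of how FOUP count scales with lot size and cycle time.
# Little's law says WIP (wafers in the fab) = start rate * cycle time,
# and the FOUPs in circulation scale roughly as WIP / lot size.

def foup_count_factor(old_lot, new_lot, cycle_time_reduction):
    """Relative change in FOUP count when shrinking lot size,
    assuming WIP shrinks proportionally with cycle time."""
    wip_factor = 1.0 - cycle_time_reduction  # shorter cycle time -> less WIP
    return (old_lot / new_lot) * wip_factor

# Hypothetical numbers: 25 -> 12 wafer lots, 45% cycle-time reduction.
factor = foup_count_factor(25, 12, 0.45)
print(f"FOUP count changes by {factor:.2f}x")  # ~1.15x, i.e. ~15% more FOUPs
```

With those assumed inputs the model lands in the same ballpark as the ~20% increase estimated above, which is why the extreme "5x" example overstates the problem.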
All major tools are still batch. They come in two groups, ones that process in batch and those that process singular but load/unload in batch making true single wafer station to station totally BS.
It is true that the whole FOUP enters and leaves the tool together, but to pretend there is no difference between the two is at best disingenuous.
True batched tools do have a very real negative impact on cycle time. You have to hold lots on station until sufficient wafers accumulate to build a batch.
Then you have to move the wafers to the tool. This entails additional delays while the tool waits for the automation system to bring all the FOUPs to the tool. They don't start loading once the first lot arrives at the tool.
Finally, there is the scrap risk. Modern semiconductor tools have the capability to run self-diagnostics as they process the wafers. This allows single wafer tools to stop processing with only a wafer or two impacted. By the time a batched tool reports an error you have multiple LOTS at risk. If those wafers are scrapped, you now need to start new lots of wafers not a couple of onesie, twosie losses.
Smaller lot size = less risk/cost.
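To put a toy number on that trade-off, here's a minimal expected-loss sketch (the event rates and wafers-at-risk figures are invented purely for illustration; real excursion statistics are fab-specific):

```python
# Toy model: expected wafers scrapped per 1000 wafers processed.
# A single-wafer tool's self-diagnostics catch an excursion after a
# wafer or two; a batch tool may not flag it until several lots are
# through. All numbers below are hypothetical.

def expected_scrap(events_per_1000_wafers, wafers_at_risk_per_event):
    return events_per_1000_wafers * wafers_at_risk_per_event

single = expected_scrap(1.0, 2)   # more frequent flags, ~2 wafers lost each
batch = expected_scrap(0.3, 75)   # rarer events, but 3 lots (75 wafers) at risk
print(single, batch)              # ~2.0 vs ~22.5 expected wafers lost per 1000
```

Even granting batch tools fewer scrap events, the per-event exposure can dominate in a model like this, which is the devil's-advocate point being made.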
"Faster cycle time means more chips with the same tool." LOL here Scientia totally shows his stupidity again.
This is true in one sense but false in another. Faster cycle times can result in increased output by reducing the time that lots sit in front of a batched tool before processing.
No tool is assumed to run 100% of the time. Some amount of downtime is always built into the model. So improving tool utilization and/or availability is an excellent way to improve tool output. You basically redefine the model by reducing the time tools wait for batch quantities to be reached.
"All of these things reduce cost and this is exactly how AMD plans to get its financial house in order" AMD's problem has little to do with Fab cost. It has less to do with the billion dollar plus factory not running efficiently or not.
Inventory carries a very real cost. Intel was able to reduce their inventory significantly by reducing their cycle time. If AMD were able to improve their cycle time, they too would realize the cost savings this brings.
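The inventory point follows directly from Little's law (WIP = throughput × cycle time); a sketch with made-up numbers to show the scale involved:

```python
# Little's law: inventory (WIP) = throughput * cycle time.
# Shows how a cycle-time cut translates into carried inventory.
# The starts/day and cycle times here are hypothetical.

def wip(starts_per_day, cycle_time_days):
    return starts_per_day * cycle_time_days

before = wip(2000, 60)  # 2000 wafer starts/day, 60-day cycle time
after = wip(2000, 45)   # same starts, 25% faster cycle time
print(before, after)    # 120000 vs 90000 wafers of WIP
print(f"inventory freed: {before - after} wafers")
```

The freed WIP is capital no longer tied up in the line, which is the cost savings being referred to.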
Well gents the UPS delivered the baby, 7:30 PM EST
ALL systems go all we’re on the clock, 8:35 PM EST
GURU- This ‘top bin’ baby is cranking along at a mere 4 Gig, 3rd boot right out of the box. So much for your buddies, Q6600, Phenom comparison.
SuperPi=11 sec, TWICE as fast as Pheromones 22 sec @ an unstable 3.5
Mem bandwidth = 8403MB/s
Cache and Mem.=54.4 GB/s
Multimedia=51696 iT/s; it BLOWS away the Xeon X5482 by 20%
Mem Latency=64ns speed factor 85
The memory is (stock) 1600, running synchronous with the FSB. It's rated for 1800.
Vcore 1.4125, automatic set by the motherboard
10X multiplier.
Air cooling, of course.
These are preliminary numbers. Nothing hardcore as yet, I am waiting for a drink of water.
Obviously these are 100% stable, with much more to spare. I'll tool it around for a week, just to get a feel. Time and H2O will tell.
Nice job fella’s, Thanks.
Giant- Stop F**king around, buy one.
Tonus- getting that itch in your back pocket yet?
SPARKS
"Smaller lot size = less risk/cost."
Perhaps risk, but actual factory scrap does not correlate to batch size. When a scrap does occur it may impact a larger # of wafers, but there are also far fewer scrap events on a batch tool.
As for the whole single wafer processing, small lot sizes - there are many proponents of this - AMD is not breaking new ground here... they see the threat of 450mm on the horizon which only the large volume manufacturers will be able to afford so they are looking for alternatives to compete from a cost perspective.
The problem is the best time to implement things like smaller lot sizes or switching from batch to single wafer processing is at the start of a new wafer size transition (and in fact you will see many of these changes come about if 450mm goes forward).
The problem with doing it after the start of a new wafer size transition is that you start to impact the reuse model of a fab (~70% of the equipment is reused from generation to generation). You will have significant impacts to the actual fab which are difficult to do on the fly - there would be some automation changes and most likely substantial facility changes - things like waste line sizing, exhaust laterals, and chemical supply are all impacted if you are talking about a batch tool vs a single wafer tool. This then has a cascading impact on other tools in the fab that share exhaust laterals (exhaust needs to be rather carefully balanced for the multiple tools you may have on one large lateral), or are on the same water loops (you may now have different pressure drops)... etc...
Then consider the impact to the equipment suppliers. Not everyone is going to implement these changes so you now force suppliers to support 2 different toolsets on the same wafer size, while also developing equipment for a new wafer size (450mm) and to some extent still support legacy 200mm equipment.
The natural breakpoint is on a wafer size transition as you have to buy all new equipment anyway and you are generally starting with new fab designs so you can plan the automation (lot size, etc) and facility impacts accordingly. You also have fewer design constraints so it makes it easier for equipment suppliers to come up with an optimal solution.
Finally, who's going to pay for new 300mm equipment development? Many suppliers are still trying to recoup development costs on the initial 300mm equipment development. Many folks with multiple fabs and a lot of experience on existing batch equipment will probably not make the switch, so how big a market is there for this new equipment?
The AMD presentation is fine and it is consistent with many other presentations I have seen on cycle time improvements. The problem is there really is no coverage of the negative impacts of the approach - the benefits are touted, but there is no modeling of fab impacts, cost impact, financial impact to the equipment suppliers, impact on tool reuse and technician support, fab layout, sub-fab impacts, etc...
This is a nice academic study, but quite frankly that's the problem with it - it is largely academic. To make these types of changes you need full industry support and need to have an honest discussion of the negative impacts (and who pays for them). It'd be a different story if AMD and the little consortia listed were putting up some seed money, but clearly that 'ain't gonna happen'
http://www.custompc.co.uk/news/602511/amd-next-cpu-architecture-will-be-completely-different.html
"AMD’s technical director of sales and marketing for EMEA, Giuseppe Amato, told Custom PC that ‘if I look at the next generation architecture of our CPU, then it will definitely not be, how can I say, comparable with the Phenom. It will look completely different.’"
Man... K10 is barely out of the womb, and already the message is shifting to "you should see our next generation"... distancing the next gen from the K10 design.
While I'm sure some will spin this as AMD's relentless pursuit of new and innovative approaches, others may see it as a lack of ability of the K10 architecture to carry forward.
Electromigration, hmmm.
GURU- I’ve discovered sleep apnea/insomnia and its derivations can be brought on by the following equation.
http://en.wikipedia.org/wiki/Black's_equation
MTTF = A · j^-n · exp(Q / (k·T))

Where:
A is a constant
j is the current density
n is a model parameter
Q is the activation energy in eV (electron volts)
k is Boltzmann constant
T is the absolute temperature in K
w is the width of the metal wire
WHERE MTTF IS FUGLY!
“The model's value is that it maps experimental data taken at elevated temperature and stress levels in short periods of time to expected component failure rates under actual operating conditions”
AH----the key words are----ahhh---STRESS and FAILURE!
“the Black's equation, is commonly used to predict the life span of interconnects in integrated circuits tested under "stress", that is external heating and increased current density, and the model's results can be extrapolated to the device's expected life span under real conditions.”
“under "stress", that is external heating and increased current density”
Nice, I’ll think about this every time I step up the Vcore .01 volts on a $1500 chip
This guy J. R. Black was he in any way related to a guy named MURPHY???.
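For anyone who wants to plug numbers into the Wikipedia formula SPARKS linked, here's a rough sketch in Python. The constants A, n and Q below are made-up illustrative values, not anything from a real process or datasheet, so only the relative lifetimes mean anything:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def black_mttf(j, temp_k, a=1.0, n=2.0, q_ev=0.7):
    """Black's equation: MTTF = A * j**-n * exp(Q / (k*T)).

    j: current density (arbitrary units, since A is arbitrary here)
    temp_k: absolute temperature in kelvin
    a, n, q_ev: illustrative constants -- real values come from stress testing
    """
    return a * j ** (-n) * math.exp(q_ev / (K_BOLTZMANN_EV * temp_k))

# What overclocking does: more current density and more heat.
stock = black_mttf(j=1.0, temp_k=330)  # roughly 57 C
oc = black_mttf(j=1.3, temp_k=350)     # roughly 77 C, 30% more current
print(f"OC interconnect lifetime is {oc / stock:.0%} of stock")
```

Since both knobs sit in an exponent or a power law, a modest bump in Vcore and temperature takes a surprisingly big bite out of the extrapolated lifetime, which is exactly the "stress" point in the quotes above.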
Let me get this straight. You guys have a little channel in the substrate, you seed it, grow (sputter?) some lovely copper, then grind it down flush. You watch your corners and bends because ‘crowds’ gather here. Then, because of the ‘bleck’ thing, you have to watch your widths, made uglier by capacitances if you go too wide. (It's no wonder AMD dropped the ball, all on SOI, no less. Time to start from scratch.)
Alright, spill it. How far during development do they take these things to failure?
Why isn’t there a data sheet that says, “Attention, MORON, we’ve tested this thing to ‘X’ voltage (and temperature), and if you keep f**king around at or past this point, you’re really gonna screw the pooch”?
SPARKS
Well gents the UPS delivered the baby, 7:30 PM EST
ALL systems go, we’re on the clock, 8:35 PM EST
GURU- This ‘top bin’ baby is cranking along at a mere 4 Gig, 3rd boot right out of the box. So much for your buddies, Q6600, Phenom comparison.
SuperPi=11 sec, TWICE as fast as Pheromones 22 sec @ an unstable 3.5
Mem bandwidth = 8403MB/s
Cache and Mem.=54.4 GB/s
Multimedia=51696 iT/s, it BLOWS away the Xeon X5482 by 20%
Mem Latency=64ns speed factor 85
The memory is a (stock) 1600 running synchronous with the FSB. It's rated for 1800.
Vcore 1.4125, automatic set by the motherboard
10X multiplier.
Air cooling, of course.
These are preliminary numbers. Nothing hardcore as yet, I am waiting for a drink of water.
Obviously these are 100% stable, with much more to spare. I’ll tool it around for week, just to get a feel. Time and H2O will tell.
Nice job fella’s, Thanks.
Giant- Stop F**king around, buy one.
Tonus- getting that itch in your back pocket yet?
SPARKS
Oh my! My finger is seriously close to the trigger! But $1489 at the Egg, how would I explain that one to my gf?
I've already bought Grand Theft Auto IV (truly excellent game, btw) for PS3 and a new speaker system this week, I'm pushing the envelope here! :-(
Congratulations on a fine purchase there sparks, certainly a MONSTER cpu, and one of the best motherboards one could hope to pair it with!
You've hit 4GHz very easily. Are you increasing the CPU multiplier, or the FSB to OC at this stage?
I have an eventual challenge for you as well Sparks, I've pushed my E8400 to a 515MHz FSB (2060MHz!) on the excellent EVGA 790i board. That gave me a clockspeed of 4.635GHz, on air no less. I wouldn't run the CPU at that speed for very long, but it was good for a few runs of SuperPi and 3DMark. (24/7 speed for me is 1780MHz FSB with 1780MHz DDR3, 4GHZ CPU clockspeed)
I'm sure all this talk of these crazy clockspeeds achieved on air must be driving the AMD fanboys mad, who continually link to a single person hitting 3.5 with WATER on a Phenom!
BTW, have you picked up an equally impressive video card to go with this monster CPU? I'd be very interested in seeing some 3DMark results for such a setup!
-GIANT
sparks: "Tonus- getting that itch in your back pocket yet?"
4GHz for starters, oh man...
I will have to start paying more attention to this stuff again. Memory timings, motherboard features, overclocking... buying a 3.x GHz chip and not OCing it now would just feel criminal.
Good thing I just bought a new TV, and don't have the inclination to spend any more money right this moment!
Giant-
Tonus-
“You've hit 4GHz very easily. Are you increasing the CPU multiplier, or the FSB to OC at this stage?”
The 4 Gig run was done strictly by a 10X multiplier, with memory set at the board's natively assigned DDR3-1333 BIOS parameter. Incidentally, also listed in those options are DDR3-1600, *DDR3-1600 O.C.*, and *DDR3-1800 O.C.* I had to manually assign these parameters, but the board SAW the 1600 native.
Subsequently, I keyed in the DDR3-1600 native and checked the latency; it went down to 57ns. That’s well within reach of IMC numbers.
There is an interesting option I have, frankly, never seen before. The frequency multiple can be increased or decreased by .5. I always felt that a full multiple was too much of a jump; ASUS has addressed this issue quite nicely.
“BTW, have you picked up an equally impressive video card do go with this monster CPU?”
Unfortunately, no I haven’t. I am still using the 1900XTX Crossfire setup, which really isn’t bad. The score I got with that setup along with the Q6600 was 11,490. With this chip the score increased to 12,857, not too bad for a 2-year-old setup. They’ve got some new things on the horizon in the interim. I really would like a substantial increase.
That said, the ATI purchase really turned the graphics industry sideways.
My next purchase will be that “electric cooler” we spoke of. GURU’s electromigration and carrier mobility abstracts have me pissing my pants. The next thing you know I’ll be wearing a dress and high heels.
With that in mind, that E8400 is absolutely beautiful, spectacular, in fact. I thought that Q6600 was something irreplaceable and unique. Man, was I all wet, it was only the beginning.
I’ll keep you posted as I develop a relationship with the new chip. Next stop, 1800 FSB, then back to 4Gig and beyond.
SPARKS
Perhaps risk, but actual factory scrap does not correlate to batch size. When a scrap does occur it may impact a larger # of wafers, but there are also far fewer scrap events on a batch tool.
This is true, but if you were to break out scrap over a year, I'd bet the batch tools are way out front, even if you normalize the wet etch tools for the number of passes.
Since no-one is going to give that level of detail in the public domain, we will probably never know for sure. But my bet is that the batched tools are the largest sources of scrap in the factory.
I know there has been some question about how long it takes to get a wafer through the factory. It is a lot less than many people seem to think. Here is what Paul Otellini had to say.
"It was legendary that our factory throughput times were close to 90 days for many, many years. We've cut that in half."
That puts fab time at just over 6 weeks.
"But my bet is that the batched tools are the largest sources of scrap in the factory."
You'd lose money... Back in the 200mm days (0.5um, 0.35um) CMP was far and away the biggest source of scrap... nowadays it's different but not batch tools. Also many people tend to think mechanical failures (wafer handling inside the tool, etc) when they think of scrap, but that tends to be a rather low amount of the overall scrap.
Of course the excursions are painful - you have the potential to lose a lot of wafers at once but if you look at scrap per 1K wafers processed, you'd be surprised.
Largest source of wafer scrap?
It has varied widely over the many years I've worked in the fab. Sure, when a batch tool goes bad it can be a couple hundred wafers. The other side of it is that you generally discover it pretty quickly.
Single wafer tools even with the best of monitoring can result in many surprises that go undetected and result in much more costly scraps.
How fast a wafer moves is dependent on lots of things. If you balance a factory well you can get great cycle time. You could also choose to load up the factory, have wafers queued up at every operation, and have increased cycle times. Also don't let it be measured in days; it really is about days per mask layer. INTEL could do 4 weeks for all I know, but if they have fewer metal layers than AMD, which they do, then it's an apples-to-oranges comparison.
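To put the days-per-mask-layer point in concrete terms, here's a trivial sketch with completely invented numbers (neither fab's throughput time nor layer count is real):

```python
def days_per_mask_layer(fab_days, mask_layers):
    """Normalize raw fab throughput time by process complexity."""
    return fab_days / mask_layers

# Hypothetical fab A: 42 days through the line, 28 mask layers.
# Hypothetical fab B: 45 days through the line, 36 mask layers.
a = days_per_mask_layer(42, 28)  # 1.5 days/layer
b = days_per_mask_layer(45, 36)  # 1.25 days/layer
print(a, b)
```

Fab B looks worse on raw days but is actually moving wafers faster per layer, which is why comparing raw throughput times across different processes is apples to oranges.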
“24/7 speed for me is 1780MHz FSB with 1780MHz DDR3, 4GHZ CPU clockspeed”
This is interesting.
Although, I haven’t had the QX very long, nor have I explored it's absolute limits, I have found the same VERY comfortable point at 4.06 GHz.
I have, however, found the limit for air cooling:
From CPUz:
9.5 x 450= 4.275 GHz @1.408V, 1800 FSB, DDR3@1800 7-7-7-21 2T 2.0V
Sandra:
Processor Arithmetic= ALU 66835 MPS, SSE3 = 61753
Processor Multimedia= 549144 it/s, FP=267068
Memory bandwidth= 9576 m/sec!
(Now it’s clear why I waited for X48)
Cache + Memory Combined=65.47 G/s
32K blocks= 407 G/sec!
Latency=56ns
SuperPi 1M= 10 seconds!!!!!!
Obviously, both chips run cool (yours and mine) and there’s A LOT of headroom (a full GIG!), basically, on the first production run. Binning these chips (?), man, with the way these things run, it’s a shame to deliberately lock in anything below 2.6. It looks like INTC doesn’t have very much to throw away.
I suspected months ago that INTC sandbagged these chips when Barcelona fell on its ass. They were ready for Barcelona even if the son-of-a-bitch had comfortably hit the 3 GHz+ speeds they were howling about for a year. It simply had no chance, ever, against Penryn, right out of the gate. Look at that Pheromone at 3.5 Gig, a cherry picked slab. The QX9770 pees all over it at well below stock speeds!
I don’t give a flying hoot what anybody says. INTC woke up and hit the floor running. If they don’t believe it, you and I have the evidence in hand to prove it.
E8400 @ 4 Gig
QX9770 @ 4 Gig
WITH NO MEANINGFUL DIFFERENCE IN THERMALS AT THESE SPEEDS!
That’s saying something, especially when I’m packing another set of jewels. Call it a pocket full of hafnium.
BTW: With all these runs, I haven’t had a lockup, boot failure, BSOD, or a failed Windows load, yet!
I’m going to back this gem down to 4 Gig and cruise around nice and comfortable 24/7, all on air.
HOO YAA!
SPARKS
To IntheKnow, I looked through the two big updates from Gross and can no longer find the reference. It was widely discussed when the foil appeared in one of his big presentations to analysts, where he alluded to “acceptable” yields and referenced a number. We all took this as acceptance of the minimum lower acceptable limit by AMD management. It was a number that I think many companies would not consider acceptable. It's interesting that the two presentations I can find at the AMD site look slightly different than what I recalled, and now show distinct % yield or DD plots with no scale. I recalled looking at these in the past and not seeing these two plots. I suspect the sensitive page and reference has since been removed, or the presentation was completely pulled and they have now put in the standard thing I also see from INTEL on this subject.
In the end can we agree that AMD's success, or in this case total failure at fielding a competitive CPU and complete failure at meeting any success metric of a publicly traded company, has little to do with cycle time, efficiencies, or lack of performance of AMD's factories? That is what I find so funny: AMD spends so much time talking about things that really have no material bearing on the mess they are in, nor will improving them even by 20-50% improve matters at all.
To Intel’s credit they talk about efficiencies too, and between similar productivity improvements and aggressive headcount reduction they have materially improved the bottom line, or so they say. That is relevant as they have a huge cost structure, and reducing it will add directly to higher margins and more profits. INTEL already has credibility around its Tick-Tock design strategy, and their process technology and manufacturing leadership is without question among the best. Put all that together - investment, manufacturing leadership, technology leadership, leading edge products - and it leads to a credible positive business plan and a bright future.
Let's contrast that with AMD, where everything needs significant improvement to give them any chance at all to turn a profit - in EVERY frigging area! They talk a lot of nonsense about these manufacturing efficiencies, but to be perfectly honest, if AMD had a 10% advantage in cycle time, in cost per wafer, in utilization, damm, in every manufacturing metric, they still would have bled red in each of the most recent quarters by a huge amount. Why don't they talk about the real fundamental problem facing them? The reason they don't is obvious: if they were to really talk about it, it would be clear how broken their business model is and the stock would fall another 50%, that is why!
The Scientia and Sharikou blogs are nothing but personal soapboxes not worth spending time even trying to post on; both have descended into incoherent excuse mining to keep feeding their wet dream that AMD will somehow rise again to some glory.
"That is what I find so funny: AMD spends so much time talking about things that really have no material bearing on the mess they are in, nor will improving them even by 20-50% improve matters at all."
Look - AMD is going to have to continue cut manufacturing costs to compete with Intel. The best case scenario is they are 1/2 node behind Intel (schedule-wise) so they will be at a disadvantage die size wise, except through design innovation and efficiency (the 0MB L3 part, if it doesn't take a huge performance hit, is a good example of things they need and can do). Even when Intel launches a new node they still remain that distance behind as you have to consider aggregate mix of the two nodes. When AMD starts shipping 45nm, Intel will be ~50% converted, by the time AMD is 50% converted Intel will be largely transitioned.
Intel's plan to cut cost is 450mm - while this will require incredible upfront investments, it will yield SUBSTANTIAL cost reductions (more so then any node transition). AMD will not be able to follow this roadmap unless bags of billions of cash start falling from the skies, so they need an alternative - thus the efficiency / asset smart / cycle time reduction plan.
There are two fundamental problems with this approach:
1) AMD does not have the industry clout (i.e. they do not buy enough equipment)
2) More importantly, any gains done on 300mm should carry over to 450mm so even if they get suppliers to work on this, AMD will gain no competitive advantage.
That said... AMD has to try something - short of outsourcing (which has other issues), what else can they do? Simply trying to stay on the same pace or out-execute Intel in this area is probably not a viable 'plan'.
The cycle time will probably give AMD more of an advantage in terms of flexibility and development times. Intel can always throw money at development to speed it up - you can run many new steppings in parallel - which is risky, but if you can afford the Si and the capacity to do this, it is a good brute force method - the more information turns, the faster the development. Shorter cycle times will increase the information turns during the development phase and potentially reduce the amount of capacity needed - this will have a larger relative benefit to AMD than to Intel.
DOC-
In The Know-
I did a little research (please forgive me if you already knew this) on CRAY. I have a link below of the world top ten supercomputers.
http://www.top500.org/lists/2007/11
I was surprised to see that INTC Xeon 53XX powered units were in the 3rd, 4th, and 5th ranks. I’m not sure when the 53XX’s were released; last year, I think. I think it was Clovertown (65nm). From what you were both saying there are years of development time, and yet these units have surpassed CRAY’s Opteron based units which are currently in 6th, 7th and 9th position. That was pretty quick in terms of development time, and time to surpass CRAY’s lead with the 2.4 Opterons installed.
Why so quick? Was the architecture already in place? Did they upgrade the way I did by a mere CPU swap, and move up the HPC ranks, on the cheap, if you will? Can you do this on these monsters? Additionally, CRAY put all their CPU eggs in the AMD basket; obviously HP didn’t (Ranks 4 and 5). Couldn’t CRAY have done the same?
With this in mind I am certain HP, INTC’s long time partner, is ahead on the development lead with Nehalem based systems, and perhaps others are, too.
I've got some SPEC numbers here. Nehalem makes my QX chip look like an i486 in comparison.
http://blogs.zdnet.com/Ou/?p=1025
SPARKS
"It was widely discussed when the foil appeared in one of his big presentations to analysts, where he alluded to “acceptable” yields and referenced a number."
With AMD we'll never know. Yield data is too sensitive to provide raw data, so the best you can get (in my view) is how Intel presents it - they show normalized data, but they compare one node to another so at least there is some reference.
AMD simply refers to expected yields or mature yields - maturity just means it has stabilized at a given level... the level could still be garbage! (I'm not saying this is the case, but you simply can't tell).
In the past, AMD has shown one node vs another, however they did a very subtle and important thing... they showed yield vs production volume (on the x-axis), instead of an actual calendar date or time.
What's the difference? Well if your yield is low, your production volume is also low so you can still show a fast improvement rate (vs volume) especially if your yields are low for some time. Or if your yield is low you may slow down the ramp which will lower the production volume and again show a potentially different yield/volume slope.
It would be remarkably simple to plot the data vs calendar date - by presenting it vs production volume they are also compounding the data with the various technology ramp rates (unless they are all ramped at the same rate)
This could be a very subtle, and not easily picked up, manner of tweaking the graphs. I'm not saying AMD is doing this intentionally - but by using volume instead of time, it limits the usefulness of the data.
Also AMD had a line called "mature yield" - another trick you could play is to have different mature yield targets for different nodes... (again I'm not saying AMD is doing this, but I don't know that they're not either). When Intel presents the data it is simply defect density so there is no possibility of 'tweaking' the data from node to node and is a far better 'apples to apples' comparison.
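Here's a toy illustration of why plotting yield against cumulative volume can flatter a slow ramp (every number below is invented):

```python
# Invented monthly yields (%) for a 12-month ramp; wafer-out volume
# scales with yield because you ramp starts as yield comes up.
yields = [10, 15, 25, 40, 55, 65, 72, 78, 82, 85, 87, 88]
volumes = [y * 100 for y in yields]

total = sum(volumes)
low_months = sum(1 for y in yields if y < 60)
low_share = sum(v for y, v in zip(yields, volumes) if y < 60) / total

print(f"Months below 60% yield: {low_months} of 12")
print(f"...but only {low_share:.0%} of cumulative volume")
```

On a calendar axis the painful low-yield stretch is five months long; on a volume axis those same months get squeezed into the first fifth of the chart, so the curve looks like it shot up almost immediately.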
Man - I just read Scientia's latest comment about batch processing and MFG and almost fell out of my chair I laughed so hard.
When it starts with:
"I'm not an expert on wafer manufacturing so if someone has more specific information feel free to provide corrections."
I guess what do you expect... instead of asking for more specific info, perhaps he should have just said - "if anyone has any actual info..." More specific?
And the stuff on batching from other folks is just comical - apparently notebook and server chips can't be batched together... well except primarily for litho, THEY CAN BE AND ARE BATCHED TOGETHER!
It's one thing to hypothesize and speculate, but for folks to just throw out random info not grounded on any sort of facts is just too funny.
I think the problem is some folks consider a batch to be a lot, others don't seem to understand that with the exception of litho, most product types go through very similar process flows and can be 'batched' together (or run back to back on the fly, or what folks call 'cascaded'). Automation and controls have become so sophisticated that many areas can retarget and change on the fly real time between lots... suppose for example you were polishing 1000A of Cu on one lot, you can take thickness measurements real time and adjust the polish time for another lot that might need 2000A. You can also now factor in different polish, etch, dep rates and adjust recipes on the fly between lots of different product types to account for differences like pattern density. A lot of this stuff is done in house by many IC manufacturers and I think the level of automation would surprise folks who have this cookie cutter view of how the fab works --> process a lot, stop, measure it, see if it is OK, then adjust tool for the next lot, then process...
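A cartoon of that kind of on-the-fly retargeting, with made-up rates and thicknesses (the EWMA weighting is a common run-to-run control choice, not any specific fab's recipe):

```python
def polish_time(target_removal_a, rate_a_per_s):
    """Feed-forward: polish time = thickness to remove / current rate estimate."""
    return target_removal_a / rate_a_per_s

def update_rate(est_rate, measured_rate, alpha=0.3):
    """Feedback: EWMA update of the rate estimate from post-polish metrology."""
    return (1 - alpha) * est_rate + alpha * measured_rate

rate = 50.0                      # estimated Cu removal rate, A/s
t1 = polish_time(1000, rate)     # lot needing 1000 A removed
rate = update_rate(rate, 55.0)   # metrology says the pad is cutting faster
t2 = polish_time(2000, rate)     # next lot (different product) retargeted
print(t1, t2)
```

The point is the same as in the text: the recipe for the next lot is adjusted in real time from the targets and measurements of the lots around it, no stop-measure-adjust cycle required.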
"Single wafer tools were created when particularly difficult processes needed to work on one wafer at a time but this was not ideal."
You know, I'm not sure I could make this stuff up! Actually in many cases single wafer tools are ideal (even from a cost perspective!) Of course those litho batch tools were the bomb until the process got too difficult! Thank goodness for immersion - though part of the reason I think it took so long was that immersing the whole lot was tricky, so thank goodness single wafer immersion litho was created (I'm kidding, folks)
He then provides a link which talks about NGF (next generation fab) and somehow attributes it to "here's what AMD has to say"..
the guy is so deluded into thinking this is an AMD concept that he didn't even bother to notice these are ISMI proposals! (International SEMATECH) AMD is one of many companies (including Intel, BTW) in this consortium... but apparently this is an AMD idea because he saw a different AMD presentation with NGF in it, and now anything NGF related is an AMD thing.
"ISMI managers published a 19-point Next Generation Factory plan, with many of the changes starting in 300 mm fabs but expected to carry over to the 450 mm generation, whenever it arrives."
So apparently ISMI is now AMD, or perhaps he is confused and thinks this is the IBM fab club (it is not). If he bothered to click on the link in the article he posted he would see the company list, but apparently ignorance is bliss and he would rather just convince himself that "here's what AMD has to say".
Of course had he read the article he might have seen the part "The NGF program requires consensus-building and prioritization, both among the 16 devicemakers within ISMI and between the chip manufacturers and tool vendors."
So, how long before Dementia realizes this is not an AMD 'innovation' but rather a consortium effort (Intel included) of many IC manufacturers? I suppose when he finds this out (and realizes just about EVERYONE is working on this), he'll just dismiss it and move on to the next topic of disinformation.
If any of you patient folks care to explain this to him feel free to cut/clip/paste any of this.
Anonymous said...
Man - I just read Scientia's latest comment about batch processing and MFG and almost fell out of my chair I laughed so hard.
I considered posting a correction here myself, but I wasn't quite sure what he was trying to say until the follow on posts. At that point it became clear, that among other things, there was confusion about what batching is. So I'll try to add some clarity.
The basic processing unit is, of course, the wafer. Wafers are started together in a FOUP (a fancy name for a plastic box with a door on the front of it). The contents of the FOUP are called a lot.
Most tools in the fab process a single wafer at a time. The exceptions to this rule are what we have been referring to here as "batched" tools. A batch is a group of lots that are all processed together in the process chamber at the same time.
With the definitions out of the way, let's move on to processing and efficiency. I'm going to try and explain this in a very general way, so it will be easy to find exceptions to what I'm about to say, but it should apply to the majority of cases.
The most efficient type of process is called a continuous process. In a continuous process raw materials are fed into the process in a continuous stream and finished products move out in continuous stream. So the timing on your feed and your output are in sync. As an aside, if you want to see continuous processes in action, I'd recommend you watch "How it's Made" on the Discovery Channel.
When you first start a continuous process up there is a lag while the process fills up with raw materials, so you need to keep the processor feed constantly and minimize downtime to get the most efficient process possible.
Obviously, continuous processing lends itself well to liquid processing, as there is not a discrete "unit" to feed in. Single wafer tools can come pretty close to this, but they need a buffer system to achieve this kind of efficiency. One buffer will hold and queue lots, so that as one lot finishes the other is getting prepared to start. Another buffer will store the completed product and load it into FOUPs once processing is finished.
You'll notice that the lots have to be staged in a buffer area both before and after processing. This introduces inefficiencies in product flow through the line that wouldn't be seen in true continuous processing. But the flow through an individual tool can be seen as continuous.
Since single wafer processing is the closest thing to maximum efficiency, you might ask "why batch?" The simple answer is long process times. Many deposition and/or film growth processes can take upwards of 20 minutes to complete. If you are processing in single wafer mode, you will get 3 wafers per hour this way, so your batch of 25 wafers will take >8 hours to process. This long process time leaves you with 2 choices.
First, you can buy a lot of tools. Let's say you buy 24 tools for an 8 hour process time. This would allow you to complete processing on a lot on average every 20 minutes. But 24 tools would cost a lot of money, and the cleanroom space is expensive as well.
The second option is batching. Batching entails a lot of inefficiencies, so the process times themselves are long. For our example, let's say that it takes the same amount of time to run our batched process as a single wafer tool would to process a lot, or 8 hours. But now you put 4 lots in the tool at once. Your output is now 4 lots every 8 hours or an average of 1 lot every 2 hours. It's pretty easy to see that with 6 batched tools you can get the same output as 24 single wafer tools.
So you can choose a lot of tools (a huge capital expenditure) and a large ongoing cost in maintaining more cleanroom space, or you can accept inefficiencies in processing time and use batch processing.
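The arithmetic above works out the same either way; a quick sketch using the example numbers:

```python
def lots_per_hour(tools, lots_per_run, hours_per_run):
    """Average toolset throughput in lots per hour."""
    return tools * lots_per_run / hours_per_run

# 24 single-wafer tools, each taking 8 hours to work through one 25-wafer lot
single = lots_per_hour(tools=24, lots_per_run=1, hours_per_run=8)

# 6 batch tools, each running 4 lots at once for the same 8 hours
batch = lots_per_hour(tools=6, lots_per_run=4, hours_per_run=8)

print(single, batch)  # 3.0 3.0 -- same output from a quarter of the tools
```

Same 3 lots per hour either way; the batch option just trades per-lot latency for a quarter of the capital and cleanroom footprint.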
The work that AMD (and Intel as noted previously) is doing is centered around trying to find ways to reduce these inefficiencies.
Sparks, I'm just a simple process guy. Designing HPC systems is way out of my area of expertise. However, I believe that there are Nehalem test chips out there. We've seen systems running on them.
I'd assume the development process would include giving Cray access to these chips to help establish operating parameters for their machine. From this, they can extrapolate X% improvement for the Sandy Bridge processor. They will also be working with Intel's engineers to ensure that required features are included in the design. As test chips for Sandy Bridge become available, Cray would be given access to those to validate the design.
Not a great answer for you, I know, but the best I can give.
Anonymous said...
In the end can we agree that AMD's success, or in this case total failure at fielding a competitive CPU and complete failure at meeting any success metric of a publicly traded company, has little to do with cycle time, efficiencies, or lack of performance of AMD's factories? That is what I find so funny: AMD spends so much time talking about things that really have no material bearing on the mess they are in, nor will improving them even by 20-50% improve matters at all.
I fully agree that AMD's problems go well beyond running their factories efficiently.
However, if they could reduce factory costs by say 20% they probably could have turned a profit in Q4 last year and maybe even in this past Q1.
Their fundamental problems remain, but running in the black would certainly allow them to pull in capital (from, say, their friends in Abu Dhabi) to try and address some of the other issues.
anonymous said...
You'd lose money... Back in the 200mm days (0.5um, 0.35um) CMP was far and away the biggest source of scrap... nowadays it's different but not batch tools. Also many people tend to think mechanical failures (wafer handling inside the tool, etc) when they think of scrap, but that tends to be a rather low amount of the overall scrap.
Of course the excursions are painful - you have the potential to lose a lot of wafers at once but if you look at scrap per 1K wafers processed, you'd be surprised.
Yeah, as a guy in the trenches my focus tends to be on the excursions.
So I just ran some simple line yield numbers. If we assume 30K Wafer starts per month and a 95% line yield, that works out to 1500 wafers scrapped each month. I've seen some big excursions, but never a single event of that size. I also don't think I've seen that much scrap attributed to a batched toolset in a year, let alone a month.
Even if you assume a 98% line yield, the number of wafers scrapped each month in the factory is still 600.
In short, you've made your point. The risk of large losses is high in batched tools, but the low incident rate of those scrap events offsets the large impact.
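The line-yield arithmetic above, for anyone who wants to plug in other numbers:

```python
def monthly_scrap(wafer_starts, line_yield):
    """Wafers lost per month to line-yield scrap (rounded to whole wafers)."""
    return round(wafer_starts * (1 - line_yield))

print(monthly_scrap(30_000, 0.95))  # 1500
print(monthly_scrap(30_000, 0.98))  # 600
```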
AMD's long term problem is that fielding a competitive CPU line, from high margin server down to much lower margin consumer, takes big bucks.
AMD could make money if they didn't compete with a big spender like INTEL but they have a competitor with big bucks.
The reason ATI and Nvidia competed well is that they had the same foundry resources and competed on design. Now that INTEL is going to get into graphics, Nvidia and their CEO like to talk big, but in the end they know their future is limited by INTEL. If INTEL executes, the graphics business will go to INTEL - it won't be a question of if, but when, and after how much money. This is no Itanium story; it's about whether INTEL has the commitment to stay in it and build the graphics drivers to go with their silicon hardware. If they do, ATI and Nvidia are finished in graphics.
In all these performance arenas, the competitor who has the highest performance leading edge semiconductor technology and manufacturing capacity will have the "insurmountable" advantage. Designs are a dime a dozen; the silicon is a huge advantage. To compete, people need both to even make anything competitive.
People who think they can go asset lite are talking out of their Ass. Jerry had it right in some sense. Real CPU competitors need fabs to develop and manufacture at the latest technology node. Without this, AMD can have the best designs but will be handicapped by higher cost, slower performance and higher power. To be behind 1 year on cost and 3 years on performance isn't a very good business proposition.
The reason they can't go to TSMC or other foundrys is they require capacity starts on leading edge of tens of thousands of wafers a week. If you look at the combined volume of TSMC, Charter and others they don't invest enough on the leading edge to support the ramp AMD nees. To go asset light means AMD won't have leading edge capacity and WILL gurantee their consumer products will be slow and not cost effective. It will limit them to only be able to do a few tens of thousands of high end CPUs. Only look at DEC, SUN-TI, IBM to see what that gets you in funding the silicon... you can't afford it.
AMD is BK in their strategy...
“If any of you patient folks care to explain this to him feel free to cut/clip/paste any of this.”
I don’t think it’s possible. In the limited exchange I had with him, I found him to take most disagreements personally. In one of his recent replies to me, he freely admitted to never having worked in a FAB.
With that in mind, during AMD’s past successes, he became a self-proclaimed expert in the field; he could do no wrong, correct by default perhaps? By his own admission, he dropped the ball more often than not thereafter.
You guys, however, do this stuff to put food on the table, thereby challenging that authority with actual practical working knowledge and experience. He said he has been "less than correct". From where I live, NO ONE can argue with actual practical working knowledge and experience; I don’t care if you pump cesspools.
The guy is angry, and resents you guys for undermining his authority on what he calls a "public" forum. You’ll never get through to him. Hey, with 800 lb. process gorillas ready to pounce (you guys), what else can he do to save face?
“In all honesty, the difference between roborat's blog and mine is that he encourages flames and I don't. He let's posters hide behind an anonymous post and act like children; I don't. I'm sorry but that is no improvement for roborat's blog.”
He doesn’t care to offer objective analysis from a practical perspective, or the freedom to allow contributors to express it the way they deem fit. That guy will never concede a point, and his "less than correct" statements are the evidence. His deletes and past denials are the proof.
I’m done.
BTW: You guys keep talking about FOUPs and batches. I tried to get a handle on this, but you never said how many wafers these things hold, or how many tools it takes to crank out a completed wafer. (Industry average for 300mm)
SPARKS
1998 - SPC
2000 - APC
2003 - APM
2005 - LEAN
2008 - SMART
2009 - BK
Touche.....LOL, LOL.
SPARKS
Sorry Sparks, I'm never sure where to assume the basic level of understanding should start.
A standard FOUP holds 25 wafers. The initiative we have been discussing is driving for smaller FOUPs. Batch size is variable and can be anywhere from 1 to 6 lots depending on the tools and process step.
Note that running a single lot is not very efficient, but sometimes the tools are run that way if there is a "hole" in the flow of wafers that would leave a lot stranded for a long time before more arrive. Some processes are sensitive to batch size and you have to hold lots to make a minimum size, but other processes are not.
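A toy sketch of that batching decision: only the 25-wafer FOUP and the 1-6 lot batch range come from the description above; the 2-lot minimum and the hold rule are illustrative assumptions, since real policies vary by tool and process step.

```python
WAFERS_PER_FOUP = 25  # a standard FOUP holds 25 wafers

def form_batch(lots_waiting: int, min_lots: int = 2, max_lots: int = 6) -> int:
    """Return how many lots to run now, or 0 to keep holding.

    Runs whatever is available up to max_lots, but holds below min_lots
    because running a near-empty batch wastes tool capacity.
    """
    if lots_waiting >= min_lots:
        return min(lots_waiting, max_lots)
    return 0  # hold and wait for more lots to arrive

print(form_batch(1))                    # 0 -> hold, batch too small
print(form_batch(4))                    # 4 -> run all four waiting lots
print(form_batch(9))                    # 6 -> cap at a full six-lot batch
print(form_batch(4) * WAFERS_PER_FOUP)  # 100 wafers in that four-lot batch
```

For the batch-size-sensitive processes mentioned above the hold would be mandatory; for insensitive ones, a stranded single lot could be run immediately despite the efficiency hit.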
As to the number of tools that are required to complete processing on a wafer, that rates right up there on the proprietary list with process flow and yield data.
To get a feel for what it takes though, see this image.
Each layer requires a tool to put it down. You can also figure there is a litho tool to image the wafer, an asher to remove the resist after you are finished with that layer, and a wet bench to remove any residue from the asher.
The image is old as it is still using Al interconnects and there are a lot of subtleties that I've left out with the flow above, but it gives you a rough idea of what is involved.
Scientia said:
You may see that as being a forum for free discussion; I see it as laziness on the part of the blog owner.
Funny, I can see how aligned Scientia is with Mugabe and the Chinese government when it comes to silencing dissent. I suppose beating people up is good because it's hard work and can be tiring, while democracy is for lazy governments that can't be bothered to shut people up! Honestly, where does he get his logic?
In all honesty, the difference between roborat's blog and mine is that he encourages flames and I don't. He let's posters hide behind an anonymous post and act like children;
Encourages flames... What can make people more inflamed than deleting their posts? It would be good for him to realise that the angry posts I get here are his own doing.
BTW, I wouldn't swap some of the anonymous posters here for the registered posters on his blog.
My point still stands - even made up pseudonyms are still better than flat "anonymous". I don't need to know your name in real life, I just want to know that you're different from anon2 or anon3 or everyone else posting anonymously, so that I can keep track of a thread. And that to me is just a minimal token of respect for the other people you're conversing with. Otherwise we're all just shouting into a crowd.
" 1998 - SPC
2000 - APC
2003 - APM
2005 - LEAN
2008 - SMART
2009 - BK "
You know what is funny about this, other than the BK acronym... it's the use of the acronyms in a context implying AMD invented these manufacturing processes, or that no one but AMD uses them ....
SPC is statistical process control, taught in any undergraduate statistics class and used by most manufacturers of anything from diapers to potato chips.
APC is advanced process control, a generic term for statistically controlling a process output by examining the output and adjusting the input based on some prior output.
APM is AMD's acronym that collectively refers to their process automation systems. However, there is nothing in that collection of systems that is not part of industry standard practice.
LEAN -- what the heck is this?
SMART -- again, what the heck is this... analysts have been trying to figure this one out for the past year.
Frankly, this is the only thing really frustrating about Scientia's blog ... he speaks with such conviction that people often believe he is all-knowing, when in reality much of what he says is way off target, easily discovered by anyone who can type 'www.google.com'.
Jack, I'm surprised at you; does a lowly electrician need to fill you in on this?
I've determined with my expertise in processing dynamics and engineering that:
LEAN-
Less Explaining Around Newsgroups
SMART-
Shifting Market Analysis Response Training
SPARKS
In The Know-
Ok, you have these Pods running around the FAB loaded with twenty-five VERY expensive 300mm wafers. Let’s say at various stages of the process one wafer in particular becomes unusable. Do the tools or the operators test that wafer and subsequently reject it at that point? How far down the line can a bad one go?
Further, does the whole line get bottlenecked at one area if a tool in its respective group blows a relay, motor, pump, circuit board, etc.?
What do they do when some poor bastard is trying to troubleshoot/fix this thing while the rest of the line is pumping along behind him, or worse, nothing is feeding out in front of him?
Do these guys sleep at night?
SPARKS
If anyone is curious about AMD's FAB in Malta New York I found the...
Supplemental Draft Environmental Impact Statement
It discusses Water, Gas and Power requirements for Fab 4x, also construction timetables (from when they start, not now), Building sizes and Cleanroom Square footages.
All in all, it is pretty interesting to see what it takes to build and operate Fab 4X.
-----------------------------------
Also, if you would like to see siteplans for Fab 4X, aerial overlays etc...
Town of Malta (Luther Forest Technology Campus)
"SMART"
AMD's clever cheats are able to get money from the Arabs and sucker people into continuing to believe in their business plan when they've got none. That is really "SMART": lose billions, have no credible likelihood of ever really being able to compete with your big rival, yet get people to buy your story hook, line and sinker.
But I'm smarter than that.
AMD BK in 2009
Sparks, I'll take a stab at your questions:
"Do the tools or the operators test that wafer and subsequently reject it at that point? How far down the line can a bad one go?"
This varies considerably by both process step and IC manufacturer - it is a question of your chosen monitoring scheme. Ultimately the goal would be a rock-stable process that would require no metrology whatsoever, but that is not the real world (but perhaps the Asset Smart world?).
Sometimes an issue will go all the way through the line and not get caught until sort/test (basically testing and binning the chips). However, there are numerous 'inline' monitors throughout the process flow, where either a test wafer run before or after the lot is checked, or the production lot itself is checked. Many times an IC manufacturer will put test structures in the scribe lines to test problematic issues. The scribe line is used because this is the area where the slicing and dicing takes place, so it is not an active part of the chip which you could potentially damage. There are also 'non-destructive' metrology techniques where you can measure the active areas of the chip inline without doing any damage or contamination.
"Further, does the whole line get bottlenecked at one area if a tool in its respective group blows a relay, motor, pump, circuit board, etc.?"
This is classic constraint theory and is mitigated in several ways - first off I do not know of any fab that runs without redundancy - meaning at least 2 tools to run any given step. This way if a tool goes down, the other tool can be used - this may limit the overall capacity, but at least you have some. The other thing that is often done is so called 'swing tools' (this cannot be done in all areas of the fab). Sometimes if a tool is down hard (meaning for a significant time), a similar tool used in a different step can be quickly converted to cover capacity on a temporary basis.
Finally, in Intel's case (or any other manufacturer with more than one fab), wafers can be packed, shipped and processed in a different fab until the hard down is addressed (this is a rather rare practice though). Herein lies the beauty of Intel's copy-exactly approach - when you ship the lot to another fab, you know the tools are set up identically to the fab you are shipping from and the lot will get identical processing.
"What do they do when some poor bastard is trying to troubleshoot/fix this thing while the rest of the line is pumping along behind him, or worse, nothing is feeding out in front of him?
Do these guys sleep at night?"
Well the managers pester the engineers or ops people, who then pester the technicians working on the tool. Generally speaking there is 7x24 coverage, which can address probably 90-95% of the issues. In the case of a new or uncommon failure, there are strict escalation protocols with the equipment supplier if the tool is down for more than 6 hours, 13 hours, 24 hours (the interval varies by company). It is generally not very long before the equipment supplier's expert is onsite if the problem cannot be addressed by the team that is onsite/oncall 7x24.
Generally speaking these are not pleasant situations, especially if it is in a constraint area in the fab where you need every tool up to meet the fab output goals.
There are other areas in the fab where you may have 7-10 tools and some excess capacity, where it is a bit better (but still not pleasant). Suppose for example you need 7.3 tools' worth of capacity to meet output, so you buy 8 tools. If one of those tools goes down hard, realistically you have dropped from 8 tools to 7 against a need of 7.3.
Now suppose you are in an area that needs 2.9 tools (and therefore you have 3). If you lose one of those tools for a while you are now in a world of hurt.
And then to address your other question the wafers basically start piling up in the queue behind that process step. And what is significant about this is when you finally do get the tool back up, you now process a bunch of those lots and effectively have a 'bubble' moving through the fab which impacts areas downstream as well until you finally get that bubble out of the line.
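The capacity arithmetic in those two examples can be sketched directly (the tool counts and the 7.3/2.9 "tools' worth of demand" figures are the ones from this post):

```python
def capacity_margin(tools_installed: int, tools_required: float) -> float:
    """Fractional capacity headroom at a process step."""
    return (tools_installed - tools_required) / tools_required

# Larger area: need 7.3 tools' worth of output, own 8
print(f"{capacity_margin(8, 7.3):.1%}")   # ~9.6% headroom
# One tool down hard: 7 installed vs 7.3 needed -> queue starts growing
print(f"{capacity_margin(7, 7.3):.1%}")   # ~-4.1%
# Small area: need 2.9, own 3; losing one tool is a world of hurt
print(f"{capacity_margin(2, 2.9):.1%}")   # ~-31.0%
```

Any negative margin means WIP piles up in front of the step, which is exactly the 'bubble' described above once the tool comes back.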
The site experts are generally oncall 7x24 (some may work normal 5x8 shifts or the 3day/4day 12 hour shifts). In 'healthy' areas the oncall responsibility is rotated around. Again this is the second line of defense generally speaking to the FSE's (field service engineers) who are working in the fab (for most areas 7x24)
JumpingJack said...
LEAN -- what the heck is this?
LEAN is the latest corporate buzzword for a methodology to improve process flow. It is the basis for the book "Lean Thinking : Banish Waste and Create Wealth in Your Corporation".
Like most other systems of this sort I've seen it seems to go too far. I can easily see this becoming part of a bureaucratic mindset that requires slavish adherence to a system whether it is applicable or not. It originated out of the Toyota Production and Management System. You can read more about it
here.
I want this:
"The most amazing is that this machine just cost as a better standard PC, but has 24 cores that run each at 2.4 Ghz, a total of 48GB ram, and just need 400W of power!! This means that it hardly gets warm, and make less noise then my desktop pc."
"I tell you what - if I were Ballmer right now... I'd threaten to walk away and say 'wow, if he can get such great performance, perhaps we shouldn't take the company over', and then when the stock crashes to the pre-takeover level and crashes again when Yang misses his ridiculous Q2 numbers, Ballmer should step back in and lowball an offer and say 'how do you like me now?!?'" (comment Apr23)
Fast forward to today - Microsoft walks away from Yahoo deal after the Yang-er thinks his company should have fetched $37/share.
So now instead of getting $31/share (actually MSFT increased it to 33 during negotiations), Yang will have to explain to his shareholders why the stock price is about to plummet to the low 20's. He had conveniently not set a stockholders meeting (so as not to have to answer to his stockholders?)... but I think he is required to do so or face serious repercussions (I think you can eventually get de-listed). Expect calls for Yang's resignation, calls for election of a new board of directors and a potential avalanche of investor lawsuits.
Expect Ballmer to come back in another quarter or two with an offer in the high 20's (though I cannot predict whether he will say 'how do you like them apples?')
Looks like Jerry Yang just pulled a Hector*
* Hector (from Webster's online)
HECTOR
Function: noun
Date: circa 2006
1: one who screws up
2: botch, blunder
3: one who screws stockholders due to poor decision making and an overly active ego.
Also can be used as a verb, as in he really Hector'd that deal...
I have a new respect for Ballmer on this decision (though I'm not sure where MSFT's SW/OS division is headed).
“Sparks, I'll take a stab at your questions:”
Thank you, excellent, that puts a lot of the pieces together. Especially the single-wafer flow vs. batch operations discussed above.
“Here in lies the beauty of Intel's copy exactly approach - when you ship the lot to another fab, you know that tool is setup identically to the fab you are shipping from and will get identical processing.”
Whoa, great observation, one that hadn’t occurred to me, anyway. This seldom, if ever, gets mentioned in the ‘pros and cons’ of the ‘copy exactly’ debate, probably because a lot of people wouldn’t get it anyway. Nonsense; with this kind of flexibility, personally, I think it would be stupid to take any other approach. Standardization of components has been the cornerstone of HV industrial production for over a century.
I saw the test structures that are sacrificed when the wafer is cut here. I’m sure there are proprietary methods to ensure quality control at the very early stages of production. If there aren’t, there ought to be. Additionally, I’ll bet there’s a fixed dollar amount, determined by the corporate bean counters, cost-wise, to get a single wafer through the snake. Going down the entire line ain’t cheap, and a wafer is a terrible thing to waste.
http://www.tf.uni-kiel.de/matwis/amat/elmat_en/index.html
(Great site, by the way.)
This led me to a few more links where I found pictures of the vertical furnaces that heat the wafers in vertical batches. They looked huge, complicated, and expensive. I figured on the redundancy aspect to keep things moving while repairs are made on tools that go down. Some of the HV units queued up a number of FOUPs as part of their specifications, as opposed to the lower volume R+D units. Given Dementia’s single-flow argument and AMD's current execution, it may be to AMD’s advantage to stay small.
“The site experts are generally oncall 7x24 (some may work normal 5x8 shifts or the 3day/4day 12 hour shifts). In 'healthy' areas the oncall responsibility is rotated around. Again this is the second line of defense generally speaking to the FSE's (field service engineers) who are working in the fab (for most areas 7x24)”
I can see (and I know) that this is a nice position to have, especially if you’re a really good troubleshooter with an intimate working knowledge of the equipment’s guts. I’d bet my shares in INTC these guys are “crackerjacks”, and the outstanding guys are really in demand. There’s lots of glory to be had when things are up and running quickly. Pressure, adulation, heroics, instant reward; for me this is an enviable position. I love glory; that’s me, guts and glory.
“Well the managers pester the engineers or ops people who then pester the technicians who are working on the tool.”
I was right; they are poor bastards. Obviously, Silicon rolls down hill, too.
Thanks again, (sigh) maybe in another life.
Very enlightening.
SPARKS
"Fast forward to today - Microsoft walks away from"
You said it. That was the first thing I thought of when I read the announcement. Time to DUMP!!!! Yahoo.
SPARKS
ho ho, that helmer site is awesome.
This is interesting.
Although I haven’t had the QX very long, nor have I explored its absolute limits, I have found the same VERY comfortable point at 4.06 GHz.
Yes, around 4GHz is perfect for the 45nm CPUs, both dual and quad (aside from the lower end quads that wouldn't hit 4GHz due to a FSB limit). Obviously the QX9650 and QX9770 are premium parts and are binned accordingly, so the power use is low and not all that much higher than my E8400 at ~4GHz. With a TRUE 120 equipped with a Scythe S-Flex fan the temperature under a full load has yet to exceed 50C.
I have, however, found the limit for air cooling:
From CPUz:
9.5 x 450= 4.275 GHz @1.408V, 1800 FSB, DDR3@1800 7-7-7-21 2T 2.0V
Sandra:
Processor Arithmetic= ALU 66835 MPS, SSE3 = 61753
Processor Multimedia= 549144 it/s, FP=267068
Memory bandwidth= 9576 m/sec!
(Now it’s clear why I waited for X48)
Cache + Memory Combined=65.47 G/s
32K blocks= 407 G/sec!
Latency=56ns
SuperPi 1M= 10 seconds!!!!!!
Obviously, both chips run cool (yours and mine) and there’s A LOT of headroom (a full GIG!), basically, on the first production run. Binning these chips (?), man, with the way these things run, it’s a shame to deliberately lock in anything below 2.6. It looks like INTC doesn’t have very much to throw away.
I suspected months ago that INTC sandbagged these chips when Barcelona fell on its ass. They were ready for Barcelona even if the son-of-a-bitch had comfortably hit the 3 GHz+ speeds they were howling about for a year. It simply had no chance, ever, against Penryn, right out of the gate. Look at that Pheromone at 3.5 Gig, a cherry-picked slab. The QX9770 pees all over it at well below stock speeds!
The Phenom was cherry-picked, and wasn't even stable at that speed. Eventually he settled for 3.4GHz with 1.58V! That would not be achievable with air cooling. Contrast that to you and I both running these hafnium-infused monsters at 4GHz+ on air! In terms of what Intel could release now, assuming a 1600 FSB, I predict that 3.4 and 3.6GHz for quads would be possible, and up to 3.8GHz for dual core.
I don’t give a flying hoot what anybody says. INTC woke up and hit the floor running. If they don’t believe it, you and I have the evidence in hand to prove it.
E8400 @ 4 Gig
QX9770 @ 4 Gig
WITH NO MEANINGFUL DIFFERENCE IN THERMALS AT THESE SPEEDS!
That's right! The power consumption on this puppy is incredibly low at stock. Even overclocked to 4GHz the power consumption of the CPU is only around 100W; that's easily cooled with high-end air. Obviously, at 4.5GHz and beyond we're pushing the CPU to its limits, so the power consumption is too high for 24/7 use without water IMO.
BTW: With all these runs, I haven’t had a lockup, boot failure, BSOD, or a failed Windows load, yet!
I've had one lockup, that was when I tried to reach 4.5GHz on the P5B deluxe. The northbridge was just running too hot for a 2GHz FSB. As I described in an earlier posting here, I attached a 40mm SilenX fan to it and that reduced the temperature considerably, I had no problems after that. The 790i has been a SUPERB board to me, I've had no issues; none at all.
I’m going to back this gem down to 4 Gig and cruise around nice and comfortable 24/7, all on air.
What sort of bus speed are you running there, and what speed are you running the DDR3 at? As I've mentioned before, 1780MHz works perfectly for me. A beautiful 4GHz clockspeed, 1780Mhz FSB and dual channels of DDR3 at 1780MHz a piece!
-GIANT
GIANT-
If there's any doubt about the consistent quality of these chips - the entire lineup, their speeds, and the way they overclock - this should dispel it without any reservation.
E8300
E8400
And the currently on sale mega bullet,
E8500@3.16 (I was tempted to buy one of these sweeties for shits and giggles)
They all clock, and clock well! Really, think about it: INTC’s standard for binning these things must be pretty high before they lock in those multipliers. It makes you wonder if the relative price structures are based on feature sets, as opposed to speed bins. INTC is only competing with itself here, especially with a dual core solution.
When INTC revealed 45nm hafnium transistor technology as the biggest improvement in twenty years, the press reception generally varied from a yawn to a beer fart. What the knuckleheads fail to realize is that this process will be the foundation for the next-generation architecture with an IMC pumped in. Imagine these chips on steroids. Man, the thrill is back, big time, and the hits just keep on coming.
As far as my setup is concerned, overclocking this GEM was painless and a no brainer.
From CPUz :
9.0 X 450= 4050 MHz
Vcore 1.3975
On this board you set the memory parameter @ “*DDR3-1800 O.C.*”
You set the option to allow the “memory strap to FSB” and you’re done!
450 quad-pumped will give you DDR-1800 at the memory; again this is all factored in by the MOBO automatically. Everything is running synchronous, just like I like it.
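The arithmetic behind those settings, sketched out (the multiplier-times-base-clock relationship and the quad-pumped FSB are standard for these chips; the specific numbers are the ones quoted in this thread):

```python
def core_mhz(multiplier: float, fsb_base_mhz: float) -> float:
    """Core clock = multiplier x base FSB clock."""
    return multiplier * fsb_base_mhz

def fsb_effective_mts(fsb_base_mhz: float) -> float:
    """The front-side bus is quad pumped: 4 transfers per base clock."""
    return fsb_base_mhz * 4

print(core_mhz(9.0, 450))       # 4050.0 MHz - the 4 GHz daily setting
print(core_mhz(9.5, 450))       # 4275.0 MHz - the air-cooling limit above
print(fsb_effective_mts(450))   # 1800 MT/s, matching DDR3-1800 for sync
```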
So much for the idiots who complain about the high prices of premium MOBOs. F**k ‘em; ya get what you pay for, I say, in spades.
Besides, I used to spend a lot more money on things that could have gotten you thrown in jail, and that includes booze! That said, what’s an extra 150-200 bucks? That used to be one night out in a club, easy, back when 200 bucks meant something!
The SuperTalent ‘Project X’ DDR-1800 memory gamble I took for $379 paid off huge. At these speeds it’s cold: not warm, not cool, just drop-dead cold. (After the CPU cold water solution, I may purchase another set. However, stability concerns surface when running 4 discrete DIMMs at high speeds, as opposed to two 2 GIG modules.)
I set the timings manually at the manufacturer’s recommended 7-7-7-21.
The voltage was manually set at the recommended 2.0V
Speeds any higher will necessitate looser timings, 8-8-8-24 give or take on any individual parameter, stability dependent, when locking in the *DDR-2000 O.C.* option in the BIOS.
I’ll trade a few nanoseconds in latency for the looser timings and higher speeds. I haven’t gone there ---- yet.
ALL said, this 4 GIG synchronous solution was basically a no brainer. And to think, last year, I was plodding along at 1066 FSB. Now, that’s what I call leaping ahead.
SPARKS
"Additionally, I’ll bet there’s a fixed dollar amount, determined by the corporate bean counters, cost wise to get a single wafer through the snake."
I've been involved with some cost modeling, and while there are generally specific cost targets (per wafer processed), I've come to the conclusion that it is impossible to measure accurately. There are simply too many fixed costs (building, equipment) and costs shared by the entire fab (fabwide facility costs, metrology, automation, service, headcount...) that are as significant, if not more so, than the true variable costs (the actual silicon substrate, chemicals and gases, waste, etc...). So the best you can do is a modeled/average number.
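A minimal sketch of why that number can only be a modeled average: the fixed and shared costs dominate and must be allocated over however many wafers you happen to run. All dollar figures here are made up for illustration:

```python
def modeled_cost_per_wafer(fixed_monthly: float, shared_monthly: float,
                           variable_per_wafer: float,
                           wafers_per_month: int) -> float:
    """Allocate fixed/shared monthly costs across output, add true variables."""
    allocated = (fixed_monthly + shared_monthly) / wafers_per_month
    return allocated + variable_per_wafer

# Same fab, same spending - the "cost per wafer" swings with loading alone:
print(modeled_cost_per_wafer(40e6, 10e6, 300, 30_000))  # about $1,967/wafer
print(modeled_cost_per_wafer(40e6, 10e6, 300, 20_000))  # $2,800/wafer
```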
As for the VDF (vertical diffusion furnaces), surprisingly they are no more expensive than an average piece of fab process equipment.
Finally, copy exactly has its downsides too - once you enter volume manufacturing it pretty much discourages changes, since you now have to proliferate any change across a huge fleet of tools. Though some would argue that is exactly what you want in the manufacturing stage: minimal risk, and only insert a change if there is a huge ROI. For engineers (and suppliers who want to implement their latest and greatest) it is disheartening, but one minor screwup in implementing a change quickly kills the benefit the change had in the first place.
One add:
"As for the VDF (vertical diffusion furnaces), surprisingly they are no more expensive than an average piece of fab process equipment."
And this is the fundamental problem with the whole single wafer processing move. Sure single wafer processing has some cycle time advantages but when you consider the furnaces (or you can also look at the wet etch benches too) cost as much as single wafer equipment but may have as much as 2-5X the output capability per capital dollar spent, what would you do?
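The economics of that comparison, as a sketch; the tool cost and throughput numbers are invented, chosen only to land inside the 2-5X range quoted above:

```python
def wafers_per_capex_dollar(wafers_per_hour: float, tool_cost_musd: float) -> float:
    """Hourly wafer moves per million dollars of capital spent on the tool."""
    return wafers_per_hour / tool_cost_musd

single = wafers_per_capex_dollar(60, 3.0)    # hypothetical single-wafer tool
batch = wafers_per_capex_dollar(180, 3.0)    # batch furnace at the same price
print(batch / single)  # 3.0x the output per capital dollar
```

What the sketch leaves out is the cycle-time penalty of batching, which is the other side of the trade-off discussed earlier in the thread.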
"I've come to the conclusion that it is impossible to measure accurately"
Coming from you (?!?!), that’s saying something! Kudos for even giving it a shot! I’ll bet it took months.
"Though some would argue that is exactly what you want when you enter the manufacturing stage - minimal risk."
"cost as much as single wafer equipment but may have as much as 2-5X the output capability per capital dollar spent, what would you do?"
Factoring in these two comments, I’ll answer your question; I'll tell you exactly what I did and what I am going to do.
1) Buy a $1500 behemoth-----done!
2) Buy some more INTC------ this week!
I love INTC’s conservative approach, “minimal risk”. I’ve seen too many loose cannons screw up too many times reinventing the wheel midstream.
SPARKS
This "LEAN" is old news. I work at an Intel fab and we've been using this for several years already. We call it something different, but it's basically the same thing Toyota started with "Kaizen" a while back. I personally think the whole concept looks great on paper, but in real-world practice it is not that practical. It just makes management think they have better control of the floor.
Hey DOC, tell ya what, send that guy's IP address to me. I'll even pay you for your efforts.
I've got some friends in law enforcement here in NYC that owe me a few favors. I did some network installs downtown; I gave him a couple of freebies. He'll be more than happy to run this asshole down. Just put it up in your next post.
SPARKS
Hector needs something more than his right hand; he is going crazy.
Funny... Scientia claims no flaming on his blog... well I did a little looking and may have found the magic answer!
He has 20 comments on his last blog, and 12 of them ARE FROM HIM! Comically, Sparks is in second place with 3 comments. To put that in perspective, a mere 25% of the comments are not from him (or Sparks).
Yeah... nothing like good open, objective discussion! Perhaps if he finds a way to get to 100% of the comments being his, he will reach utopia.
Advanced Micro Devices Inc. antitrust lawsuit brief - PDF - 108 pages
"He has 20 comments on his last blog, and 12 of them ARE FROM HIM!"
Heh, it still makes me smile when he used to compare his blog to this one and claim his is better because it had nearly twice as many comments. Of course, when I said half of those were his own he wasn't exactly gentle with me :)
Just a reminder that the "collapse comments" option turns those massive spam posts into a single line of text that can be easily ignored.
"However, it is quite easy to label a site as brilliant when you only count the fraction of posts that are authoritative and accurate and ignore the rest of the garbage."
I'd much rather wade through a bunch of posts and find some good, accurate ones than read filtered-down posts that sound authoritative and accurate but generally aren't, because dissenting opinions are not allowed.
It's much easier to collapse the comments of a few and it is a small price to pay for the good dialogue that does exist.
BTW - AMD shares jumped today on some analyst thoughts that the lawsuit might be worth something and the possibility that asset lean/smart/[insert catchphrase of the week] may be rolled out soon.
I continue to think there is some value to the suit, if for no other reason than its nuisance aspect - I'm sure Intel would pay some finite amount just to make it go away. However, the longer it goes on, the less incentive there is to do so. I think AMD should ask for cash (maybe 250-500 million) and elimination of the outsourcing clause in the x86 license (which would be a big deal for AMD) and move on. If this thing goes to trial and AMD does manage to win, it will likely be appealed and AMD won't see any money until probably 2010-2011 best case. The case has the most leverage it is ever going to have right now (once it goes to trial it becomes a crapshoot).
My reference to the "collapse comments" feature was to deal with the spam posts. Someone copy/pasted enormous amounts of text to create very lengthy posts that made navigating the comments very difficult. They did it on Scientia's blog and then this one, the same way they used to do to Sharikou's blog.
But that feature lets you browse and read the comments you want to see without having to scroll past the extremely large spam posts.
As for the blogs and comments sections, I read both blogs and both comments sections. I used to skim through the AMDZone forums before they locked them for non-registered accounts. The namecalling and flaming are easy enough to filter out on my own because they are not as pervasive as people want to claim, and there is enough worthwhile info to read. I can't judge its accuracy aside from seeing how things turn out in the future, so I don't worry about whether I'm being sold a bill of goods or not. Time and market forces will tell.
scientia sucks
Post a Comment