Anyone proficient with public relations strategies would be familiar with the concepts of “controlled leaks” and “damage control.” AMD’s filing of new charges against Intel and the rumour of an imminent restructuring announcement is just too much of a coincidence. Reading bits and pieces of the lawsuit reveals just how bad AMD’s business has become; one can only conclude that insolvency is inevitable. Conveniently, the asset-lite rumour started floating around, giving just the needed amount of hope to anyone planning to continue doing business with AMD. The last thing AMD wants is for its customers to think it is a risky option for future design platforms.
There are several ways of looking at the two AMD-related events today. One that everyone should be familiar with by now is the direct correlation between AMD’s cries of monopoly and its poor financial performance. Invariably, disastrous news from AMD seems to follow every time they talk about Intel's alleged illegal business practices. Anyone following AMD devotedly needs to hold on tight, because this complaint is the biggest one yet. If AMD wasn't lying all along and does indeed have an asset-lite strategy to announce soon, then based on the proximity to the new lawsuit, I can only conclude it’s going to be bad for AMD. Whatever the deal is, it must be something so awful that AMD needed to file a lawsuit/excuse/scapegoat to justify their decision on asset-lite.
(thanks to enumae for the pdf link)
5.06.2008
AMD's Asset-Lite could be something really awful
DOC-
Add this to the mix.
GURU said it, twice the price for a FAB.
“Analysts say plants using 450mm wafers are likely to be very expensive, making it difficult for any chip maker to build on its own.”
“Chang-won Chung, an analyst at Lehman Brothers, estimated that a 450mm facility could fetch twice as much as a 300mm facility which costs between US$2 billion and US$2.5 billion.”
This was part of the news today with INTC, TSMC, and Samsung reaching a 450mm wafer agreement.
AMD was conspicuously absent. I guess you need some money to join this club.
SPARKS
No problem. Two of the statements that I found very telling were...
1. "Intel has consistently earned more than an 80% revenue share over the past ten years. What's left over is not sufficient to sustain the level of investments necessary to remain a viable innovation competitor."
2. "Through the end of 2007, it (AMD) garnered roughly 13% of total x86 microprocessor revenues, less than half of what it requires to operate long-term as a sustainable business."
LOL, it was in AMD's own brief that they said unless they doubled their market share from their current cut, they were unsustainable.
So let me repeat: by AMD's own assessment, they can't sustain the required investments in this capital intensive business without doubling their market share.
So they are billions in debt, behind a full generation (2 years) in performance and 6 months to a year in cost effectiveness, and have half the market share they need to be viable.
WTF is AMD going to do?
Oh yes, sue INTEL and hope to get some sympathetic judge to give them billions. I don't think even a couple billion will help them.
Lastly, no judge will penalize INTEL, rightly or wrongly, by the 10 billion or so required to make AMD viable, as it would not materially help AMD even if they pursued this for years; worse yet, the whole of CPU progress would halt for the 3 or 4 years before AMD could recover. It would be like what they did to Standard Oil years ago, but in today's economic situation. Regardless of what AMD and INTEL fanatics think about who is right or wrong, the larger world economy won't do well with such a judgment, so it won't happen.
In the end AMD is BK..
Sharikou and Scientia there is no amount of rationalizing that will change that fact.
AMD conveniently, and selectively, cited a 13% revenue share in 2007 ... but failed to mention they were on a steep incline through 2006:
http://www.ibtimes.com/articles/20060727/intel-amd-marketshare.htm
Assuming Intel had not released Core 2 Duo and AMD retained the significant performance margin, one can speculate they would be over 20% by now...
The main point of Intel's response is that when AMD offers a competitive product, the market responds, the data suggests Intel is right.
AMD gained share for 14 quarters with the introduction of K8/Opteron, proving the best product sells -- this does not absolve anything inappropriate Intel may have done, but it does show the strength of Intel's 'the market is functioning as a competitive market should' argument.
DOC is 100% on the money. The biggest, most important, attractive, and sellable products AMD has to offer, currently, are lawsuits.
Factor this. Considering their debt, short and long term, terrible execution and their anemic product lineup, frankly what else is there?
AMD’s highest performing products can barely compete with INTC's oldest and slowest. The once highly touted native quad core solution has, ultimately, been their undoing. Its relatively enormous size, without a smaller, more competitive dual core solution, was a death knell from the onset. This planned, myopic strategy left AMD and its buyers with absolutely no product flexibility or options.
When C2D was introduced in mid 2006 the alarms should have been shaking the walls down at AMD corporate headquarters. On the contrary, they coasted along as if nothing but the ATI purchase, a devil's pact with DELL, and a new unproven architecture on a bad process were their only focus against an extremely powerful competitor. Each of these individually contributed to their failure and to their current position as a whole.
Why am I regurgitating these obvious historical blunders? When they finally go to court in mid 2009 they will have to explain away these failed strategies, while, ironically, proving their products were genuinely strong enough to compete with INTC. That’s going to be a hard sell, if they can survive that long. That said, they could seek bankruptcy protection based on the suit alone, it’s their only real asset at this juncture.
As each day passes their equipment is depreciating, they are bleeding cash to maintain operations, losing market share, maintaining crushing debt, and their current products are selling at a loss. What else could be attractive here?
LAWSUIT(s), that’s what.
SPARKS
"Regardless of what AMD and INTEL fanatics think who is right or wrong the larger world economy won't do well with such a judgment so it won't happen"
Brilliant, well said.
SPARKS
I think AMD's main argument is not that they did not gain when they had a better chip, but rather that the gains were artificially limited. This may be true, but it will be difficult to prove.
What I think is lost on some people is that the best technical product does not always win in the marketplace. While this is the predominant factor (along with price, obviously), there are other factors: support, a stable and proven supply, known performance against a roadmap. When AMD was gaining market share initially, some of these factors were unknown at the time. They also continued to proudly exclaim that they were selling every chip they made, so was their market really limited? (Or were they supply limited?)
It seems like AMD's argument is: we had the better product, we should have gotten more money and more share. In my view that perspective is part of the reason why AMD struggles, and also why engineers often make poor businessmen. It is not only the product that you sell - it is brand, support, roadmap, customer loyalty, customer history (how often do you hear people saying they bought a product because they had owned a previous product from that company and had good experiences with it) and other intangibles.
The other critical hurdle AMD still faces is that most of the chips were made in Germany and Intel is arguing that the US only has jurisdiction over chips eventually sold in the US - there is a fair amount of legal precedence and past rulings to support this and if it holds, Intel's damages exposure goes way down.
One additional thing - AMD not only has to prove that Intel excluded them from the marketplace, but also that AMD was financially injured from that activity.
While I'm sure they will say we would have had more money to invest in fab capacity, which in theory would have allowed them to make more chips - this is not an easy proof. There is significant leadtime to getting that capacity online and it's not like AMD was throwing away chips or eating inventory. They've made so much about being capacity constrained, and record fab ramp and conversion rates...it'd be rather ironic if that comes back to hurt them. Suppose they could have sold more chips... would they have been able to make them?
I remember that a few months ago it was quite popular to calculate money earned per die/wafer. I wish someone would do the same today for the K10, to see how those extremely cheap prices affect AMD.
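Since nobody seems to have redone that math, here is a minimal sketch of the per-wafer calculation. Every input is an assumption for illustration only: the ~283 mm² figure reported for the 65nm quad-core die, a 70% yield guess, and a $150 blended ASP after the price cuts.
```python
# Back-of-envelope revenue-per-wafer sketch. All inputs are assumptions:
# ~283 mm^2 die (reported for the 65nm quad), 70% yield, $150 blended ASP.
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Gross die estimate with the usual edge-loss correction term."""
    r = wafer_diameter_mm / 2
    gross = (math.pi * r ** 2 / die_area_mm2
             - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))
    return int(gross)

gross = dies_per_wafer(300, 283)   # ~210 candidate dies on a 300mm wafer
good = int(gross * 0.70)           # assumed yield
print(f"gross={gross}, good={good}, revenue/wafer ~ ${good * 150:,}")
# -> ~147 good dies and ~$22,000 per wafer, which then has to cover the
#    processed wafer cost plus the SOI premium, extra metal layers, and
#    fab depreciation.
```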
Facts for Scientia and other deluded AMD fanboys..
During AMD's great success competing with Prescott, it mattered little whether INTEL had given Dell and others incentives. If they had said "we don't want Prescott, we want all AMD, please triple our orders," would AMD have been able to materially supply Dell, HP, IBM, Toshiba and the others? No fucking way could they have supplied any more people. They were supply constrained from their one frigging factory. It takes years to build a factory and get it ready to ramp. AMD didn't invest, and that is why they couldn't take advantage; it had NOTHING to do with anything INTEL was alleged to have been doing, NOTHING.
AMD was too much of a weenie to make the investments when it had a chance to pull ahead. And when they had money and credibility to use, what did they do with it? Did they invest in silicon technology, did they invest in more capacity? NO, they decided to go buy a company and bury themselves in debt.
Paul made a comment about how INTEL invests and it being a high stakes game. It went something like this: you dig a big hole and sink a few billion to build a factory for a process you don't know how to do yet, to make a product you haven't designed, for a market you don't know will exist. That is betting big time. INTEL invested at 90nm, 65nm, and 45nm and is now investing big time at 32nm. What is AMD doing? Suing and whining. They don't invest in anything but a lot of hot air and lawyers. Is it any wonder they don't reap the returns on their bets? They aren't courageous enough to bet! Thus they don't deserve to play.
AMD BK I say
"INTEL invested at 90nm, 65nm, and 45nm and is now investing big time at 32nm."
DUDE! Don't forget about those FAT 450mm tools they're betting on. That's a big piece of glass, and a bunch of big machines to feed.
Imagine, 32nm on 450mm.
Now that's juice.
SPARKS
"Imagine, 32nM on 450mm."
It'll probably end up being the 15nm node. 32nm will be 300mm as this starts roughly late 2009/early2010. Figuring 2 year cycle that would put 22nm starting ~late 2011/2012.
Intel/TSMC/Samsung are talking about first tools in 2012 timeframe and keep in mind these probably will not be production worthy at the start.
My best guess would be a similar strategy to the 200mm to 300mm transition. They will do the bulk of the 22nm node on 300mm and there is probably an outside chance of transitioning it to 450mm in fairly low volume, with the following node coming up on 450nm.
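For what it's worth, here's a toy sketch of that timeline logic. The 2-year cadence and the 2012 first-tools date come from the discussion above; everything else is guesswork, not a roadmap.
```python
# Toy projection of the node-vs-wafer-size overlap argued above. The 2-year
# cadence and 2012 first-450mm-tools date are from this thread; the rest is
# an assumption for illustration.
NODE_CADENCE_YEARS = 2
FIRST_450MM_TOOLS = 2012          # and likely not production-worthy then

node_start = {"45nm": 2007.9}     # 45nm HVM began ~Q4 2007
for prev, nxt in (("45nm", "32nm"), ("32nm", "22nm"), ("22nm", "15nm")):
    node_start[nxt] = node_start[prev] + NODE_CADENCE_YEARS

for node, start in node_start.items():
    # allow a year or more for the first tools to become production-worthy
    wafer = "450mm candidate" if start >= FIRST_450MM_TOOLS + 1 else "300mm"
    print(f"{node}: HVM ~{start:.1f} on {wafer}")
# -> 32nm ~2010, 22nm ~2012 (still on 300mm), and 15nm ~2014 as the first
#    real 450mm candidate, matching the guess above.
```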
What might be kind of crazy: if litho runs into major issues, I could almost see 450mm as a cost reduction alternative to tech node shrinks, as scaling litho for wafer size is actually pretty straightforward. Obviously the attempt will be to do both the node shrink and the wafer size increase, but who knows what will happen?
One thing is for certain - there was a lot of division within Sematech and the industry on 450mm vs 300mm 'prime' (this is the improved efficiency/lean approach). With TSMC, Samsung and Intel synching timelines and effort, money will talk and Sematech will be again relegated to a bit player in the grand scheme of things. This may also hurt the 300mm prime efforts - if suppliers are forced to choose between them they will probably side with the folks with the large capital budgets.
The Dell deal really seemed like an albatross; it lost them all the goodwill from the channel partners who had supported them all along. Then when Dell turned around and didn't buy the vast amounts of CPUs they said they wanted, AMD was totally hosed. (Which, by the way, seems to be a bit more than just bad luck...)
It still seems to me that AM2 was premature. DDR was still going strong, and DDR2 had yet to show any real performance gains over it. Given the pretty much 0% performance difference between the S939 and AM2 products, I think they could have waited another year before totally disrupting their user base.
As for AMD not investing - they started up Fab36, I don't see that they had any opportunity to do it any sooner.
I'm still puzzled about the ATI acquisition. It seems to me that rather than spending $5B, they could have tossed a much smaller sum at the problem by way of a partnership, commissioning ATI to develop chipsets to their specification to be sold under the AMD name. The Fusion/GPU thing still doesn't make sense to me. And right now, the ATI brand name still exists, so the notion of "vendors want to just work with a single supplier" still isn't really met - the ATI division is still a very separate operation from the AMD CPU division.
Looking at the situation today without combined CPU/GPU stuff - AMD currently has an advantage over Intel in bandwidth-hungry apps. That tells me that the bandwidth that AMD CPUs currently has is "good" and maybe even "enough" (though that's not proven). If you toss a GPU into the same socket, which has its own gargantuan bandwidth demands, I don't see the combination doing anything besides starving under load.
IMO, discrete coprocessors are still the best choice for overall performance and overall system balance.
Does anyone feel as if this lawsuit may end up like the one filed by the USFL against the NFL? The jury in that case found that the NFL was guilty of abusing its monopoly position, but felt that the USFL's struggles were of its own making (ie, even if the NFL didn't have an unfair advantage, the jury felt that the USFL would have failed). The USFL was asking for $576 million in damages, which would have been tripled to $1.7 billion since it was an antitrust suit.
The jury awarded the USFL a $1.00 "victory", which was tripled to $3.00. The USFL folded soon after.
I wonder if AMD's lawsuit will end with a judgment that Intel was guilty of anticompetitive practices, but that AMD's troubles were of its own devices, and give them a "victory" that leaves them in no better financial position than before.
Tonus-
Ed at Overclockers.com wrote about this a few weeks back. The USFL won the battle but ultimately lost the war. The $1.00 check was never cashed. If AMD were to suffer the same fate, it's game over.
Ed has been pretty much on the money when it comes to AMD. His analysis of INTC can be less than correct, at times. Specifically, his product release analysis can be correct, but for all the wrong reasons.
SPARKS
As for AMD not investing - they started up Fab36, I don't see that they had any opportunity to do it any sooner.
They did invest in FAB36, and they did make a deal with Chartered, but this capacity didn't come until 2006 when Core 2 came along. Until then all they had was FAB30 alone, so the market share they gained was limited. If AMD had an additional fab in their glory days back in 2005 they could have captured large swaths of market share from Intel, no doubt at all.
"They did invest in FAB36, and they did make a deal with Chartered, but this capacity didn't come until 2006 when Core 2 came along"
Keep in mind AMD can only outsource 20% of their CPU production (by the terms of the x86 license) - so the only way AMD truly could have increased capacity further was through fabs - this is really the hole in AMD's case - if the market was more 'open' could they really have sold more chips?
I still see this case settling as the most likely outcome - AMD can't afford to wait until the case is judged and then the eventual appeal should they win. Their highest leverage is now (well actually it was probably about a year ago).
Keep in mind this case is going to cost AMD 10-100's of millions. Fudzilla made mention of over 200 million pages of discovery - that would equate to reading ~12,000 pages an hour for 2 years (24 hours a day).... and that's just the reading part.
"Keep in mind AMD can only outsource 20% of their CPU production (by the terms of the x86 license) -"
Then factor in this new round of rumor and FUD: AMD being split into 2 separate entities.
(What will he think of next?)
Charlie D. at the INQ is spinning quite a yarn.
SPARKS
"Then factor this new round of rumor and FUD. AMD being split into 2 seperate entities."
My best guess is that folks have heard some rumors and misinterpreted them (or in Charlie's case, he'll claim he was just dumbing it down for the masses)
What you will likely see is AMD spin off manufacturing into a separate entity but maintain a 51% stake in it (thus maintaining ownership and getting around the outsourcing issue). Much like Spansion, what people seem to forget is that AMD as the majority holder will have to prop that company up (if it goes under then they lose their manufacturing, and again they can't outsource more than 20%). They will also sustain the bulk of the losses it incurs.
When the x86 license is renegotiated in 2010, AMD will likely push to remove the outsourcing restriction and if they are successful they will quickly sell off their stake in this manufacturing spinoff, which will crash that stock (if it hadn't already) and allow private equity firms or another IC manufacturer to pick at the carcass for pennies on the dollar.
The fundamental problem with the foundry model is that AMD manufacturing is too small and they simply will get beaten like a red-headed step child by the likes of TSMC. The only advantage they will have is they will be the sole producer of AMD chips for a while, but I challenge whether other IC houses will really want to use 'AMD foundry' (I'd also bet that the graphics would continue to be done at TSMC, which at that point would be their competitor)
It is quite comical, as this is simply going to be a transparent attempt to get debt off the books and attempt another cash grab (either through stock issuance, private investments, or loans). I really hope Wall St asks the question: how is this company split into 2 any better than the current company? It makes no sense. How is the manufacturing portion of the company ANY different than today? You will now have more headcount and inefficiencies as two separate entities, and how does the revenue and cost situation change?
So I think people are hearing the rumors and have the general theory, but the conclusions are slightly off. AMD can not just simply dump the manufacturing which is what many AMD cheerleaders hope for as they believe AMD has a superior design and it is simply manufacturing holding them back. The trouble is that they HAD a superior design with K7/K8, but one has to wonder if it was a one hit wonder?
If you read the latest INQ articles, many of the paper plans like Shanghai have gotten ripped up, and now we have the old Intel plan of more is better. This could be a good bandaid if they have a new architecture coming, but Bulldozer seems to be out of the near/intermediate picture now and we are seeing more evolutionary changes to a design that is clearly behind.
And the shell game begins:
AMD discloses 12-core server chip
http://www.eetimes.com/news/semi/showArticle.jhtml;jsessionid=H2V5ZDMJQLOUAQSNDLPSKHSCJUNN2JVN?articleID=207600531
Well, the schedule for this chip is 2010. Yes, that would be nearly 2 years from now. Also interesting: this is a 45nm chip... and here I thought AMD would be on 32nm by then (at least Dementia had me convinced!)... oh, and it's an MCM! The eight core chip is now gone from the roadmap and AMD is saying 6 core monolithic chip in 2009 (hmmm... again this seems to echo Intel).
"Allen would not comment on reports AMD might sell its fabs or form a joint venture around them. He did say the current road map takes into account any business changes with the fabs."
Recall - AMD in the past has outright denied these comments... the no comment and roadmaps taking it into account makes it sound like a done deal (it = joint venture).
So what's the shell game? If they do a spinoff/joint venture they will likely be looking for some money, hence the roadmap 2 years into the future... hey look, we have a 12 core chip, we'll be competitive! Please make that check out to cash....
Charlie, Scientia, and Sharikou are all equally deranged. Their writing styles and conclusions may not appear at first to be similar but in the end they are all equally flawed and so wrong that they can all be called insane.
It's clear AMD has realized Barcelona is busted. This is no different than the early days when INTEL realized Prescott was busted and then threw ever larger caches and deeper pipelines at it to cover their fundamental architecture issue. In the end it took considerable time and resources to recover with C2D. And with the new Tick Tock model they are on pace to crush AMD. AMD can’t keep pace, try as they might.
AMD is now trying the same playbook, talking about more and more cores to try to make up for an inferior architecture and inferior technology. The difference in AMD’s situation is they can’t throw unlimited resources and money at the problem like their competitor. Another major issue is they have neither the capacity nor the technology advantage to execute these multi-core products in volume effectively. It will actually speed their BK, as they are finding out with their quad-core. They are a technology node behind in this strategy. If AMD were 6 months to a year ahead in getting to the 32nm node, their plan to out-core INTEL might be viable. But a 6 core on 45nm, nine to 12 months behind INTEL’s superior 32nm technology, will be a losing proposition: not cost efficient, not performance efficient, not power efficient. They don’t realize how screwed up their strategy is. Anyone who knows technology will laugh at the new plan. Anyone who understands technology and the business of semiconductors will recognize the issue and laugh too. Intel with their new bus can slap on cores, and with their manufacturing capacity and money can afford this kind of arms race. AMD should perhaps study the Cold War for a lesson here. INTEL will BK AMD very easily this way.
As to the comments about AMD investment: AMD during their time of glory should have invested more. Don’t give me this shit about Chartered and F36 as evidence of capacity; those were too little, too late. Where AMD had no balls was investing when times were tough. It was a past INTEL CEO who said you can never save yourself out of a downturn; you invest through it so you can emerge stronger. AMD management had no faith in their design team, their process team, or their manufacturing team. And why would anybody that had been at AMD have any? They have a long history of technology, design or manufacturing letting them down, so why should anything change? And look at them now: it's let them down AGAIN! If they really had doubled their capacity, things might have been interesting a few years ago. Like an earlier poster said, the CPU business is all about taking big risks. You have to decide on spending the billions on each factory years before you have a technology or design. AMD isn’t willing, and as such doesn’t deserve to play in this business. Look at them today; they are too cheap to even develop the process themselves instead willing to farm it out. You think IBM is committed to creating the best process for AMD’s CPU needs, on a critical schedule to intercept a multi billion dollar fab's timing and CPU design intercept? NO, all IBM wants is your money; so what if the technology is a day late or a dollar short in yield or performance, IBM got its money already.
Now look at INTEL: do you think the process team in Oregon, the design team in Israel, or the fab team building the next 32nm ramp factory have any option? They are all committed to a schedule and must hit it right on, or the company is history. AMD doesn’t have its heart in anything besides the determination that if things don’t go well it's someone else’s fault. Boo hoo, it was INTEL that caused our failure, it was INTEL that prevented us from selling more processors, it was INTEL that coerced the OEMs not to buy AMD. It was never AMD’s fault that they didn’t have the technology, then or now, to make a fast processor; it wasn’t AMD’s fault they didn’t invest the billions to manufacture the processors that everyone wanted. It was everyone’s fault but their own.
Till Hector, Dirk and the rest of AMD stand up like men, versus the pussies they are, and say the reason we failed was because we didn’t execute and didn’t invest, AMD will never succeed. Forget the lawsuit; it’s nothing but another smoke screen and distraction. It’s another sign of how dead AMD is that they spend time on a lawsuit at a time like this. To save AMD, AMD has to start producing great leading edge technology, great CPUs, and invest in the capacity to produce that great product in volume. Are they doing any of that credibly? Hell NO; instead all you hear is crying, whining and blaming somebody else. Does this sound like a management team that will win?
AMD BK in 2009.
I think the capacity impact of multicores is overdone with the exception of desktop quads.
Server unit volumes are still VERY, VERY low compared to notebook and desktop. So while 6 core and/or more L2 cache will increase the die size it is on a relatively low volume product. Given the likely prices of these products, the relatively small capacity hit is probably not that big a deal.
That said, what AMD is doing in the desktop space with quad core and tri-core (which is the same die size as quad) is just craziness... by pricing these at next to nothing they are playing right into Intel's hands. The low prices will only hasten the wider adoption of quad desktops, which will further eat into AMD's capacity... they are better off financially selling dual cores.
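To put a rough number on that capacity argument, here's a hedged sketch of wafer starts per million desktop units. The ~283 mm² quad/tri and ~126 mm² dual die sizes are approximations, and the yield and volume are placeholders.
```python
# Rough wafer-starts comparison behind the capacity argument above. Die
# sizes (~283 mm^2 quad/tri, ~126 mm^2 dual, both 65nm) are approximations
# and the 70% yield is a placeholder.
import math

def good_dies(wafer_mm: float, die_mm2: float, yield_frac: float = 0.7) -> int:
    r = wafer_mm / 2
    gross = math.pi * r**2 / die_mm2 - math.pi * wafer_mm / math.sqrt(2 * die_mm2)
    return int(gross * yield_frac)

units = 1_000_000                 # hypothetical monthly desktop demand
for name, die in (("dual core, ~126mm2", 126), ("quad/tri core, ~283mm2", 283)):
    wafers = units / good_dies(300, die)
    print(f"{name}: ~{wafers:,.0f} wafer starts for {units:,} units")
# -> ~2,850 vs ~6,800 wafers: the big die eats ~2.4x the capacity per unit
#    shipped, so pricing it like a dual core burns wafers AND margin at once.
```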
AMD has seemingly lost their way - what got them in the game was the K7/K8 core and its IPC. Yeah, the IMC and HT were important in the 4P+ server space and get most of the press, but they don't really mean much in the volume segments of x86 (especially desktop and notebook). Unfortunately K10's focus seemed driven more by peripheral improvements than by the actual core. Faster HT, independent core clocking, and the monolithic design are good design features, but the actual core improvement seems to be marginal over K8.
AMD seems to have not yet figured this out, or perhaps they have an internal debate like the one you had at Intel during the P4 days. More cache will help marginally short term, but it is not a long term solution. The PR is spinning out DDR3 support, a new socket, 6 cores, etc., but fails to mention whether the IPC will be drastically improved, which is what is needed (either that or a huge clock gain).
Isn't it interesting that back in the last quarter of 2005, Intel looked at the trends in their business and immediately restructured to respond to them ...
It has taken AMD from Q4 2006 to now (assuming they reveal something, anything) to take similar actions... some 1.5 years later ....
Makes one wonder who the nimble company really is?
Whoa, holy shit!
“AMD can not just simply dump the manufacturing which is what many AMD cheerleaders hope for as they believe AMD has a superior design and it is simply manufacturing holding them back.”
“So what's the shell game? If they do a spinoff/joint venture they will likely be looking for some money,”
“they are too cheap to even develop the process themselves instead willing to farm it out.”
Do you know something? I can’t believe how quickly you guys compiled, processed, and collated my last post with such analytical precision. It’s as if you were ready for AMD’s possible move before my last post suggested this whole scheme as a possible reality!
That said, do me a favor once in a while: kick me in the ass, and remind me never to get into a one on one debate, or ever underestimate your intellect or awareness of the current state of affairs regarding the AMD/INTC dynamic.
I really don’t want to look that stupid or ignorant. You guys are really on the fucking ball, my compliments, sincerely.
SPARKS
If that sounds like a locker room, so be it. Like General Patton said, "put it to 'em loud and dirty, so it'll stick."
SPARKS
"I can’t believe how quickly you guys compiled, processed, and collated my last post with such analytical precision."
The 'spinoff' (more like joint venture) has been kicked about for a while - when you combine it with the restrictions in the x86 license terms that forbid >20% outsourcing, there are really only 1 or 2 possible outcomes.
Then you simply consider the PR aspects. Much like companies in the subprime mess - if you are going to look bad might as well air it all out at once as opposed to death by a thousand papercuts.
So you have a revised roadmap with promises out to 2010, stuff that is completely scrapped (8 core project), a promise of asset light (which may have been one of those 'intentional' unintentional leaks) and it is pretty clear the info on asset light, umm...lean?, ummm...smart? will be coming out shortly.
At this point AMD needs money and there are only a few alternatives:
1) issue more stock (impossible given the level of the current stock)
2) another loan (the terms on this would be ridiculous given AMD's credit rating and the credit environment we're in)
3) Start selling assets - the only problem here is that the ones they can sell probably aren't going to raise a whole lot of cash (misc land, the Consumer Electronics unit...)
4) Private equity investment - would be hard to convince anyone to this again without a serious mgm't shakeup or restructuring
5) Sell part of/spin off a minority stake in the fabs and future capacity to another company - this may give AMD enough cash to get the NY fab off the ground (sorry Sparks) or build out the F30 conversion - NY may actually be cheaper thanks to the 1.2Bil payola. The problem here is: what is this new entity going to make other than AMD chips? And as they are an all-SOI house at this point, they STILL can't even do their graphics parts. The foundry model only works at large capacities and high utilizations.
6) Settle the lawsuit with Intel.
I'm still not sure why AMD would not be doing #6 in addition (or instead of) #5. From an economic perspective they can pay down debt and lower the interest payment, or use the cash to further build out / do more R&D. 500Mil today is probably worth 1Bil by the time this case would get adjudicated (when you factor in cash saved by interest avoidance, avoidance of all of those lawyer billable hours, and the NPV of that money over 3-5 years).
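A quick sanity check on that 500Mil-to-1Bil figure, assuming (my numbers, not anything from the filing) a 12-15% cost of capital, which is plausible for junk-rated debt, over 4-5 more years of litigation:
```python
# Sanity check on the "500M today ~ 1B at adjudication" figure. The 12-15%
# cost-of-capital range and the 4-5 year litigation window are assumptions.
def future_value(cash: float, rate: float, years: int) -> float:
    return cash * (1 + rate) ** years

for rate in (0.12, 0.15):
    for years in (4, 5):
        fv = future_value(500e6, rate, years)
        print(f"rate={rate:.0%}, years={years}: ${fv / 1e9:.2f}B")
# -> $0.79B to $1.01B, so the back-of-envelope claim holds at the high end.
```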
Unless Intel has no interest in settling? Which would be a bit surprising in my view - if I were Otellini I would be happy to pay anything under a billion just to be done with it (to put this in perspective it would simply be one of those one time charges on the quarterly report which would still not swing Intel to a loss in that quarter).
I'm sure there is an ego component for Hector - part of him probably wants to say see, we were failing because of the Intel monopoly, not because of curious business decisions and bad planning and marketing. He may also have visions of a multi-billion dollar award, but I just don't see it happening.
Speaking of lawyers' billable hours, Fudzilla is saying the AMD claim was based on over 200 million pages of discovery...
That would be ~11,400 pages read per hour if you were reading 24 hours a day for 2 straight years... now consider how many lawyers are needed to go through that. And that's just the reading; then you have the motions, the planning, the 'stragerizingnessment' and the actual trial and appeals. Hundreds of millions in fees when all is said and done? And you gotta think the firms representing AMD may also get a piece of any settlement after fees are paid (not sure for certain, but this is fairly typical).
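The page arithmetic checks out, for what it's worth; the 100-lawyer team below is purely hypothetical.
```python
# Checking the discovery arithmetic quoted above: 200 million pages read
# nonstop (24 hours a day) for two straight years.
pages = 200e6
hours = 2 * 365 * 24
print(f"{pages / hours:,.0f} pages per hour")         # ~11,416
# Flipping it around: a hypothetical 100-lawyer team reading 50 pages an
# hour around the clock would still need ~4.6 years for a single pass.
team_rate = 100 * 50
print(f"{pages / team_rate / (365 * 24):.1f} years")  # ~4.6
```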
“They don't invest in anything but a lot of hot air and lawyers. Is it any wonder they don't reap the returns on their bets?”
There’s a lot more said here than meets the eye.
Over the past months/year, whatever, AMD has announced some plans/strategies and products. 45nm, 8 core, 16 core, 92 core, I don’t remember or care.
But, given the substantial lead time in design and engineering, how much development time and how many resources (money and people) do they put into these “plans” before they make a new announcement like, “ah---we’re going to 12 cores and – ah---it will be 2 sixes on ---- ah---MCM”?
Really, did they waste a lot of time and money on previously announced future products, or was this some marketing hype they threw together one morning over coffee and Krispy Kremes?
Basically, was there anything substantial here, developmentally? Were their engineers working on the old hype, told to stop, start over, and work on the new hype? I mean, it’s not like changing the bathroom toilet location on a future home remodeling job.
Obviously, if that’s the case, that’s a lot of money, time, and effort lost. This puts them further behind the eight ball. What kind of money are we talking here before they reinvent the wheel, yet again?
SPARKS
I think comparing AMD's investment pattern to Intel's is bogus. Intel can afford to invest in the down times because they are still making money. AMD is losing money in the down times. In my mind this makes it ridiculous to say that "AMD should have invested earlier." The plain and simple fact is that they couldn't.
Now they are reduced to spewing gibberish like Ruiz's latest.
"We are re-architecting the business so that our financial success is not invariably dependent on continuous component performance leadership over a rich and dominant competitor," Ruiz said.
Sounds like they've found the key to mass brainwashing if they think they can convince most people to buy an inferior product and still sell it at a profit.
Ruiz must be reading "The Space Merchants".
“Our plans are bold, and progress is ongoing”
“And I hope to communicate additional details of this complex undertaking in the very near future."
Has the executive team at AMD got its head up its ass? They have lost, what, 4 billion dollars in the past year! It was nearly a year ago, when the bleeding started, that their leader hinted at some grand plan to return to profitability and hinted at a new strategy. It's been 4 quarters and billions of dollars later, and what does he say?
When asked, what does Ruiz say? "I know you'd like it and I feel terrible, but I can't provide details that I'd love to,"
You know why? He doesn't know what the shit to do.
To compete against INTEL, AMD needs leading edge silicon technology, both for performance and for transistor density and routing. He needed 45nm 6 months ago but won’t get it for another 6 months. He needs 32nm in a year but most likely won’t get it for another two years.
He counts on a partner to develop the technology, but that technology is second class. That partner got them stuck with expensive SOI, is late with high-k metal gate by one generation, and isn’t versed in how to ramp from a few hundred wafers to tens of thousands of wafers a month.
He needs billions to fund the next 32nm factory, hundreds of millions to fund multiple design teams, and hundreds of millions a quarter to fund silicon R&D, yet he is bleeding red ink quarter after quarter as his competitor continues to turn out faster products on faster technologies and brings on more and more capacity.
Why doesn’t Hector just share the grand plan? Is he somehow afraid INTEL will turn on a dime and react? Let's get real: INTEL’s course is cast. They’ve got Penryn rolling out and Nehalem on deck; there is nothing else till the 32nm Tick product shows up. If Hector has something, then unless it’s a home run he’s going to shock and awe us with, what does he gain by keeping it to himself? Hector and AMD need to get some money, and the best way is to show investors they have a credible plan. The reason he doesn’t share is because he has NO plan.
Tick Tock Tick Tock AMD going BK.
"Sounds like they've found the key to mass brainwashing if they think they can convince most people to buy an inferior product and still sell it at a profit."
Quite frankly a large portion of the CPU volume is becoming a commodity - and many markets are becoming 'cost is king'. While it is nice to command the performance crown, if the industry becomes commoditized then price becomes the key issue and performance is secondary. Intel obviously also sees this trend, however there is one stark difference in their approach - they designed a low cost product (atom) and didn't just lower prices.
The flaw in AMD's approach is that if they think performance is no longer king, then they need a production cost advantage to compete, and while the 0MB quads are nice from a price perspective, you don't aim a large quad core die at the volume market - you cut down a dual core. AMD has started this 'good enough' philosophy but they are aiming the wrong products (quads and tris) at it. They should be pumping out dual cores for this market.
I'm also not sure it is reasonable for AMD to think they will outdo Intel on production costs. SOI is a huge cost disadvantage, they use more metal layers, and they tend to be 1-2 years behind on scaling. Unless their yields are spectacularly better, this is an uphill battle for AMD unless AMD has a significantly smaller aggregate die size. The other thing to keep in mind is that in addition to some of the wafer cost details, Intel also enjoys economies of scale - they are amortizing some of the huge fixed costs (like development) over a much more significant volume.
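As a crude illustration of that economies-of-scale point (the $3B development bill and both unit volumes below are placeholders, not either company's actual numbers):
```python
# Toy amortization of a fixed development bill over unit volume. The $3B
# figure and both volumes are placeholders for illustration only.
fixed_costs = 3e9                       # assumed process development + fab
for name, units in (("Intel-scale volume", 200e6), ("AMD-scale volume", 50e6)):
    print(f"{name}: ${fixed_costs / units:,.0f} of fixed cost per CPU")
# -> $15 vs $60 per chip before a single variable wafer cost is counted.
```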
Is it just me, or is there no G3MX on AMD's roadmap?
After comparing the comments section on Robo's blog to that of Sharikookoo and Scientia, here are my thoughts:
Robo's blog: The commenters seem to have extensive industry experience as well as advanced engineering education - the comments are quite insightful, albeit occasionally disrespectful and even a little inflammatory. However, mostly fun to read.
Scientia's blog: The commenters, when not edited out, seem to fall into 2 camps - those who lap up Sci's statements like they were carved on a couple stone tablets, & those who are about to be edited out of existence. Sci reminds me of why the scientific method was invented - before Aristotle, the philosophers of the time thought it was unnecessary and even a bit "dirty" to actually verify what "truths" they reasoned out, based solely on 2nd or 3rd party observations, via testing and experimentation. They thought their minds were so brilliant that they could merely deduce the truth through careful reasoning. In other words, they would take "data" observed by others and interpret it according to some theory they probably had crafted beforehand, and then explain away or ignore data that didn't fit. Sound familiar, Sci?
Sharikook's blog: The commenters seem preoccupied with the exact scientific definition for calibrating the "douchbagginess" of the blog author. Clearly this is Ph.D. level research going on, right under our noses!
Sharikook's blog: The commenters seem preoccupied with the exact scientific definition for calibrating the "douchbagginess" of the blog author. Clearly this is Ph.D. level research going on, right under our noses!
DAAMIT, please don't spoil my dissertation.
This is a must read in its entirety.
http://www.tomshardware.com/news/amd-corporate-culture,5206.html
SPARKS
I think the differences in the blogs are a little more concrete than that. The people on Robo's blog who claim expertise or extensive knowledge come from mainly process and manufacturing related backgrounds and as such are very authoritative on those subjects and are qualified to make conclusions and insights across the industry (Note: Most of the expert commentary here is NOT from Intel employees, like Scientia claims). Within the scope of those domains, there is a very healthy and informative dialogue among commenters. Now, there is also a lot of financial speculation made by posters here, but no one is claiming expertise, inside knowledge or absolute certainty as to what will happen to either company. We are simply making educated guesses based on current financial performance and evaluations of balance sheets and cash flows. It should be noted that we are not alone in these conclusions, but are supported by the calls many expert analysts in this field are making.
Now, on Sci's blog, at least historically, I have noticed that he has a fairly extensive background in processor micro-architectures, low level programming, and an overall broad understanding of the x86 Instruction Set and its extensions/derivatives. Within that area, he and several other posters have deep and informative discussions on the relative advantages/disadvantages of both companies' processors and the implications for the market. I don't think many here have ever questioned him on those assertions. Where the disconnect comes for posters here and there is the fact that he readily admits he has never set foot inside a FAB, but continues to make predictions and sweeping claims about manufacturing and process performance. He'll usually include bits and pieces of publicly reported data and AMD insider comments regarding the subject, but will then form conclusions without understanding the implications or nuances behind the data. His commentary on these subjects is really no different than editorials from the likes of The Inq, DailyTech and Tom's. Just generalizations and hype.
Now, Sharikoo's blog? eh, I don't even have to go there, it speaks for itself ;)
The new AMD lawsuit claims seem kind of silly (I don't care if you think I'm biased ;)). Their argument is totally contradictory. Let's look at the raw data.
AMD claims they need to at least double their CPU MSS to be viable (currently at ~13% according to the report), but since Intel is "strong arming" OEMs, they can't increase their MSS or make sales. Then, in other statements, they also claim that they are selling EVERYTHING they make. So which is it: you can't sell your chips, or you can't make enough due to high demand? Sounds like a capacity issue to me. It's also funny that, at the same time, Intel is "capacity limited" and selling everything it makes too. So in other words, MSS is directly a ratio of the FAB capacity of both companies, and the 5:1 to 6:1 ratio of market share between the two companies is EXACTLY the ratio of their FAB capacity, go figure.
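For concreteness, the ratio math using the rough shares cited in this thread; the unit-share split is an assumed figure, while the revenue split comes from the numbers quoted from AMD's own brief.
```python
# The share ratio the comment refers to. The ~78/14 unit-share split is an
# assumption for illustration; the 80/13 revenue split is from the brief
# as quoted earlier in this thread.
for label, intel, amd in (("revenue share", 80, 13),
                          ("unit share (assumed)", 78, 14)):
    print(f"{label}: Intel {intel}% vs AMD {amd}% -> {intel / amd:.1f} : 1")
# -> ~6.2:1 and ~5.6:1. If both companies really do sell everything they
#    make, those ratios are just the ratio of usable fab output, i.e.
#    share is capacity-bound.
```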
Looking deeper and more realistically, this isn't a story about MSS, but ASPs. If they're going to make accusations about anti-competitive behavior, it should be about slashing prices and depressing their ASPs, and in this respect, they may have something. It seems that Intel is setting the price range of chips, regardless of who has the Performance Crown.
Back in 2003-2005, during the P4 days when the A64 ruled the roost, Intel set prices where they could make decent margins and profit, and AMD priced their chips on the price vs. performance curve relative to the P4s, which allowed them to have very healthy margins on the A64 and early X2s; doesn't everyone remember the $300+ A64 3200+? AMD was simply following Intel's lead. Now, fast forward to 2006 and the release of Core 2. Intel again prices those chips in a range where they make a decent margin and profit, and AMD follows suit by slashing their prices along the price vs. performance curve. Since they have yet to release a chip that can compete in the high end Mobile/Desktop space, they continue to suffer.
2007: then we get into a price war where AMD cuts prices (read: ASPs) to increase their price vs. performance attractiveness in an attempt to increase MSS, but Intel then follows suit. AMD cuts again, and so does Intel, in a vicious cycle which only achieves rough parity in MSS in the grand scheme of things, but destroys their margins in the process.
Now, moving forward, even IF AMD were to magically get a new FAB or two up and running to get the capacity to service more than double their current MSS, what do you think that is going to do to ASPs? A huge spike in supply like that will only serve to further depress prices and destroy margins at both AMD and Intel.
This whole thing comes down to pricing and AMD simply doesn't have the product to compete in the profitable product range. If they want to blame Intel for that, they sure can, but it doesn't change the fact that this is an AMD execution and product problem more than anything else along with a delusional management seeking an MSS at all cost strategy.
In conclusion, it's actually quite comical: they want to claim Intel is anti-competitive, but looking at all the actions of AMD (price war) and Intel (new products, Tick Tock), this has only served to make a very competitive market where consumers are being served better than they ever have been for CPUs. The only one "hurt" in all of this is AMD, and it's mostly their fault.
The people on Robo's blog who claim expertise or extensive knowledge come from mainly process and manufacturing related backgrounds and as such are very authoritative on those subjects and are qualified to make conclusions and insights across the industry.
I think this is a key point.
One thing few people seem to realize is that due to the large capital equipment costs there are actually very few equipment suppliers. The consequence of this small set of equipment suppliers is that everyone is using pretty much the same tools. So if I work with, let's say, an Axcelis asher, then I would be in a fairly good position to comment on what anyone can achieve with an asher.
The caveat here, is that there is constant work being done to improve the process tools. The tool modifications that result from those efforts may be considered proprietary and may give a performance advantage to the company that has implemented that mod. However, these are small incremental changes, not radically new tools.
So within that general framework, anyone with knowledge of a specific toolset is in a position to make general observations regarding that toolset across the entire industry.
The argument that posters here are Intel employees and, therefore, don't make accurate predictions regarding AMD's process performance just doesn't hold water in light of these facts.
And my favorite quote from the news media this week has got to be this beautiful piece of misdirection by Randy Allen.
Even though high-k metal gate has shown clear benefits for Intel's (NASDAQ: INTC) new Penryn processors, Randy Allen, general manager of AMD's server and workstation division, said the company would stick with the immersion lithography it uses for 45nm designs.
Comparing immersion litho to high-k metal gate is like our having a conversation about quad core processor performance and I suddenly start telling you why I think Valencia oranges are the best oranges.
High-k metal gate and immersion litho have absolutely nothing in common.
“This whole thing comes down to pricing and AMD simply doesn't have the product to compete in the profitable product range.”
We know this, obviously. The performance speaks for itself. I use the word “we” loosely; you make the stuff and know it well (as do others on this site).
I know it because I buy the stuff and clock the shit out of it, and I’m personally invested.
Perhaps others, shall we say, possible future AMD investors/owners don’t know, or even care? They may see an opportunity in buying into a world class, albeit 2nd class, semiconductor manufacturer.
These cash rich, technology/manufacturing/business poor countries are buying assets, big time. I see it all the time. A good percentage of the hotels in NYC are owned by Middle Eastern concerns looking for a business future when the oil runs out. A couple of billion is nothing to big oil money to buy into a fully operational big name semiconductor firm.
I’ll stick my neck out here, at the risk of losing my head. Wrector’s strategic long term goal is to make the company as attractive as possible to such prospective buyers.
The buyer(s) may see:
That AMD is quite capable of giving its competition a serious challenge, if not outright producing a better product. After all, they’ve done it before.
AMD not only comes with manufacturing capabilities with 2 FABs, but also with a world class graphics component; technically a one stop shop with an established world market in computing.
AMD also comes with a total platform solution, chipsets, laptops, and other marketable Intellectual Property, inclusive.
They have an established supply chain, market exposure, a broad customer base, and a competitive solution for the world’s top 500 HPC solutions. Additionally they have good, if not excellent performance, presently, in the 4P market.
They are slated to start construction on a new state of the art FAB with 1.2 billion dollars of government incentives in July of 2009.
They have open complaints against their main and only competitor, conceivably worth billions.
AMD has strategic allies abroad, the EU inclusive, and a relatively untapped market in the Middle East, where top performance would not be a factor, nor would unvarnished favoritism towards an Arab held company.
IBM is a major technology partner.
Wrector’s long term strategic goal of holding on to market share at all cost, an in-house graphics component, competitive laptop solutions, and major world market inroads may indeed have a silver lining, after all. They could do it again, with a little help.
All they need to do is sell this to a buyer with deep pockets, who already owns an 8 percent stake, has the entire Middle East backing them (read: trillions), and a dream of obtaining a world class facility for a miserable couple of billion.
At 6 to 7 bucks a share and under 4 billion in market cap, this might look like a goddamned bargain. Hey, Blackstone bought the Hilton chain for 26B, a nice buy for private equity; chump change at 4 billion for big Middle Eastern oil.
FTC, SEC, and other choke points, you may ask? So, ultimately this rests in the hands of American political interests to force AMD to die? Nope; this time, unlike the Arab-held Dubai Ports World deal that failed, this one just may fly.
INTC licensing issues? 49% is close to 51%, but not that close.
The slightest pissing hint of this would send AMD stock through the roof, double overnight in fact. We will know soon enough.
Just some food for thought.
SPARKS
Does anyone feel as if VIA's Isaiah processor could impact the AMD/Intel suit?
Let's assume AMD's problem is the following, based on what I've read here mostly:
- They have CPUs that are relatively costly to make.
- They can't price them high enough to make a profit, for two reasons: they are limited in terms of performance levels they can reach, and Intel's pricing doesn't give them much freedom in terms of setting prices.
- The second of those could be a point in their favor, but they do not have the production capacity needed to make that a slam dunk, since they are selling everything that they can make.
Now you get VIA, introducing a CPU that competes at the low and mid ranges, but costs less to produce, and thus can be profitable in the price niches that they would be consigned to. Would Intel point at VIA and say "hey, they did it, so aren't AMD's problems of their own making?"
I don't know if that is a good argument, partly because I don't know much about the Isaiah CPU and partly because I don't think that an argument from Intel that "they'd have been fine if they'd planned to remain in their place" would go over very well in a courtroom. So it's not an argument that they would want to make.
Or is it?
Scientia doesn't read comments on this blog?
Why is that?
I read many sites, Tom's, Scientia, and others; there are people everywhere with a few snippets of juicy info. It is from there that you can easily piece together the larger underlying truth. To not get this information when it's there for the taking is what?
Is Scientia incredibly stupid?
Or is he smart enough, but so afraid to learn something that will burst the farts that come out of his ass about silicon technology and the fact that AMD has a broken business plan?
I do give him credit, as another poster did, that in some areas he makes very educated and correct comments about architecture and design tradeoffs.
But the semiconductor business is perhaps like the car making business. You can create a great engine (read: architecture), but if you can't wrap a car around it that people want to buy, at a price where you can make money and survive to the next model year, it is really very academic whether the AMD or INTEL architecture is more elegant or higher performing.
Scientia needs to get his eyes out of the trees and step back and see the forest. If he does, it'll be obvious we don't care about pipelines, bus architecture, or who is using an IMC versus a separate slow FSB. The larger question is who really has a sustainable business to get to the next platform and technology node.
AMD is bleeding away; it simply doesn't have enough MS, nor momentum in the high margin areas, to carry its larger need for total revenue.
BK is the only outcome. I saw some other commentary that compared AMD to some other large industry giants that survived; that idiot had no understanding of the business.
So what is AMD to do?
Is it going to offer leading edge high performance computational products? Let's see: a few years ago there were PowerPC, SPARC, x86, Itanium, Alpha and many others, all backed by companies with billions in revenue. What happened to them all? They all discovered that to make money you needed performance leadership. To have unquestionable performance leadership you need leading edge silicon; architecture can only get you so far. To guarantee leadership you need both design and technology. The design is easy: string a few hundred smart designers together (they are a dime a dozen), add a few hundred servers, and you can turn a design in a couple of years. The stumbling block is the silicon development; that takes billions. As one CEO said, you need to cough up a few billion for the factory, and that's before you even know you've got the technology; then you've got to cough up a few billion to develop the technology; then you need to hope those few hundred designers deliver the design and it all comes together when the tools start showing up. My math says you need something like 5-7 billion minimum each round. So what happens? All the independent CPU guys had to fold under the onslaught of x86. Between the volumes and profits from the top to the bottom of the market and the huge software infrastructure, in the end none of the independent CPUs had a viable, profitable plan.
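Tallying up that entry ticket with the loose "few billion" numbers from the comment itself, not audited costs:
```python
# Rough tally of the per-generation entry ticket described above. All
# figures are the loose numbers floated in this thread, not audited costs.
costs = {
    "leading-edge fab (shell + tools)": 3.0e9,
    "process technology development": 2.0e9,
    "design team, ~hundreds of engineers for ~2 years": 0.5e9,
}
for item, cost in costs.items():
    print(f"{item}: ${cost / 1e9:.1f}B")
print(f"total per generation: ~${sum(costs.values()) / 1e9:.1f}B")
# -> ~$5.5B, inside the 5-7 billion claim above.
```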
To see where silicon is going, you only have to look back in time.
Every couple of generations the CPU swallows more and more function. First there was the FPU, then there was the cache, now it's the memory controller.
Look forward: where is the remaining performance in computing coming from? Graphics. And what are AMD and INTEL doing? Integrating graphics. No wonder nVidia is so scared. Do they have great designers? Of course! Do they have great silicon? NOPE. They are a generation behind INTEL. What happens if INTEL gets the design right, the software drivers right, and integrates it all on leading edge silicon that is faster and cheaper than what TSMC can ever do for nVidia...
nVidia is in a worse situation than AMD...
AMD going asset-lite is like trying to become an nVidia with a captive manufacturing arm. It doesn't work; look at IBM, whose Microelectronics unit is really limping. Since they spun it off it has fallen further and further behind, and only through pimping itself out does it survive.
AMD is finished
nVidia is finished.
Not if, only when.
In the end, only TSMC, Samsung, and INTEL will be around manufacturing on leading edge wafer sizes in 2015.
"Scientia doesn't read comments on this blog?"
That's a lie... I posted a comment about an error on one of his blogs (he basically misread data off a table), and within hours he had put a comment on his own blog with the same correction. Coincidence, maybe, but I tested the theory...
I posted a term which was completely made up at around the same time, and he used the EXACT SAME (WRONG) TERMINOLOGY on his blog...
He may not read the comments on anymore,but he definitely used to. But quite frankly, given the level of openness and lack of expertise on his site - who cares anymore?
His comment ratio (his comments to total comments) is now up over 60%, so you are not getting a dialogue or multiple points of view, you are simply getting a lecture.
Sure there is bias here, but at least you get MULTIPLE points of view and are allowed to actually pos! Where as on his site, you get just as much bias, but with no points of view but his and frankly a lot of misinformation as he empirically analyzes stuff he reads on the web and incorrectly draws conclusions from it.
Quite frankly, let's move on folks - everyone knows the story and there is no point in continuing on about it.
Intheknow said
I think this is a key point.
One thing few people seem to realize is that, due to the large capital equipment costs, there are actually very few equipment suppliers. The consequence of this small set of suppliers is that everyone is using pretty much the same tools. So if I work with, let's say, an Axcelis asher, then I am in a fairly good position to comment on what anyone can achieve with an asher.
The caveat here, is that there is constant work being done to improve the process tools. The tool modifications that result from those efforts may be considered proprietary and may give a performance advantage to the company that has implemented that mod. However, these are small incremental changes, not radically new tools.
So within that general framework, anyone with knowledge of a specific toolset is in a position to make general observations regarding that toolset across the entire industry.
OK, I know absolutely zero about this area. What exactly does this mean, making mods to the tools? A tool can be an inert lump of material (e.g. a hammer), a complex piece of software, or essentially a big robot driven by a complex piece of software. Are you talking about software mods to the control logic, or physically altering pieces of a machine?
Who performs these mods: the company that bought the tool, or a representative of the tool manufacturer? And what's the global relevance of these mods? Are they the sort of thing that gets fed back to the manufacturer for incorporation into a future tool version, or are they so site-specific that they wouldn't be applicable anywhere else?
"But the semiconductor business is perhaps like the car-making business. You can create a great engine (read: architecture), but if you can't wrap a car around it that people want to buy, at a price that lets you make money and survive to the next model year, then it is really academic whether AMD's or INTEL's architecture is more elegant or higher performing."
A big part of "a car that people want to buy" is marketing and perception. It's pretty clear that AMD hasn't marketed as well as Intel, even when they had the winning product.
The interesting thing about all this is that there will always be a small number of enthusiasts who actually care about the technical merits, marketing be damned. And when their product-of-choice gets withdrawn from the market because it simply didn't sell well enough, they're all left hanging. I think this is a shame in the car market as well. (How I would love to own a shiny new RX7. Sorry, they don't make them any more...)
Somewhat of a digression, but I wish that companies would open source more of their designs when they decide to stop pushing them. Why should the product just completely die when it can't make sufficient sales? If it's good enough that some small number of people still want it, and it's otherwise worthless to the company, why not let them have it? Why just bury it? I have to give some credit to Sun here for open sourcing the Niagara design...
If TSMC can stay near the leading edge, and if designers really are a dime a dozen, then it seems like any company that wanted to ought to be able to resurrect the Alpha or launch a new chip design and get TSMC to build it in enough numbers to suit their own needs.
Why TSMC won't work for leading edge designs.
At the leading edge, TSMC offers a very mediocre package. It is perfect for those who leverage the standard collateral TSMC provides: with each shrink they get a cost advantage, a power advantage, and some performance. But squeezing out a really leading-edge CPU design often requires custom blocks, very dense and well-characterized RAMs, etc. Here TSMC's foundry design rules aren't the best, because they have to be conservative to support a very wide range of products, die sizes, and layouts.
INTEL and the other past silicon developers can work with their leading design teams to tune the design rules and process to squeeze out extra density, performance, or something else.
Do those dime-a-dozen design and layout folks at an IDM have an advantage? A huge one. In a world with no INTEL, TSMC is a viable option; in a world with INTEL, NOPE.
Sure, on the surface TSMC shows a stellar paper at IEDM for their 32nm (note: still no high-k/metal gate), but the devil is in the details. If it were easy, why didn't DEC, SPARC and the others flock to TSMC? Because after you factored in the other stuff it was a non-starter.
High-end CPU design isn't just about what you can do and when; it's also about what the big guy INTEL can offer, and 99 times out of 100 they will beat you.
AMD can't figure that out for some reason and continues to pretend it can go head to head.
That nVidia guy is running scared and spewing things out of his A$$ about how INTEL has to have it all, why can't they let other companies survive, blah blah blah. It's a competitive world: if you can take the business and make your stock grow, why wouldn't INTEL want that market? More importantly, INTEL has to find things to fill their fabs and justify the doubling of transistors every generation. There are only so many more megabytes of cache or so many more cores you need in a general-purpose CPU. The future is integrating a billion high-performance transistors into hundreds of vector processing units to do amazing numerical simulations and visual computing. nVidia is making noise because they are like a pig going to the slaughterhouse: squealing, but soon they will be nothing but bacon on INTEL's plate.
What is AMDZone afraid of?
They have now closed anyone but registered members from being able to read the forum posts.
What are they trying to hide? Wallowing in self-pity and low self-esteem?
Things must be really down with the AMD fanboyz these days.
I can't blame them.
"they are like a pig going to the slaugher house, squealing but soon they will be nothing but bacon on INTEL's plate."
I love you.
SPARKS
What is AMDZone afraid of?
They have now closed anyone but registered members from being able to read the forum posts.
They must know what asset lite really is and don't want the rest of us to know. :-P
HYC, I'll take the easy one first.
Somewhat of a digression, but I wish that companies would open source more of their designs when they decide to stop pushing them. Why should the product just completely die when it can't make sufficient sales? If it's good enough that some small number of people still want it, and it's otherwise worthless to the company, why not let them have it? Why just bury it?
The problem with this suggestion is that the design is tied to the process tech. If someone wanted to produce 90nm netburst processors for some reason (I have no idea why), then they would need not only the design, but Intel's Ni Silicide and strain technology to make the processor work properly.
Intel is currently on their 3rd generation of strained Si, if I'm not mistaken. But here is the rub: their current strained Si process is a derivative of their 90nm strained Si. If you give away the old process tech, you open the door to competitors closing the gap on your current process tech.
HYC,
The term "tool" in the semiconductor industry is generally used to refer to a piece of or multiple pieces of equipment used to perform a single processing step. For example, a litho "tool" is actually several different modules all ganged together that are required to coat the wafer with resist, image the resist, and expose the resist.
When I'm talking about tool mods, it could be just about anything, but typically it will be some small mechanical change, though I've seen software changes as well. These modifications are generally performed to improve process performance.
Some examples would be:
To change the material of a part of the tool that contacts the wafer to reduce defects from handling.
To change the software/firmware to eliminate an alarm response on the tool that has been determined to be less than optimal
To change the design of a physical component of the tool to address issues that have been uncovered during HVM (High Volume Manufacturing) and weren't seen during development.
All of these changes will address fairly small issues, but they add up.
Let's say that a typical wafer gives 400 good die, for example. If the ASP of a die is $300, then saving a single wafer per month puts $120,000 back on the table. It doesn't take much to add up to a big chunk of change.
Regarding who does the modifications, the answer is yes to all of the above. Sometimes the company that bought the tool makes the modifications, but this would be on an older tool that is no longer under warranty, since the owner doesn't want to void the warranty. It could be the tool manufacturer, who is held to certain throughput and uptime standards by the purchasing contract, or it could be a collaborative effort between the tool owner and the manufacturer.
How information of the modification can be shared/used is all covered by contracts and NDAs. Generally, the tool manufacturer would like to apply any learnings to all their tools to improve their ability to sell the tool. The tool owner would prefer that no one else on the planet become aware of what was done. The contracts will determine where on that spectrum any particular modification falls.
Nehalem and X58!
Orthogonal - keep up the good work, buddy!
3 Channel DDR3
QPI!
HOO YAA!
http://www.expreview.com/news/hard/2008-05-09/1210326729d8556.html
SPARKS
"If someone wanted to produce 90nm netburst processors for some reason (I have no idea why), then they would need not only the design, but Intel's Ni Silicide and strain technology to make the processor work properly."
This is not quite true - they would need to know the specific transistor or lot file (a description of all the key transistor, and other, parametrics), but they wouldn't necessarily need the same process - just similar performance.
However, the whole open-source thing is moot in AMD's case due to the x86 license - they couldn't simply open it up to anyone even if they stopped manufacturing.
Folks need to be careful about TSMC's process - they often have two processes for a given node, tweaked differently for different applications. In general people get hung up on the schedule, but as things get ever more complex, the litho feature size is no longer the key driver and can be misleading.
Keep in mind there are plenty of non-high-performance parts that TSMC makes that benefit from the litho scaling but really don't tax the process.
I'm trying to figure out what the point of Nvidia's outrage is... seriously why would he continue to antagonize his biggest long term threat?
- Does he think he will anger Intel and force them into a bad decision (or say speed up a product and cause it to fail)?
- Does he think his customers will like the 'scrappy' attitude and hopefully be more loyal to him?
- Is he trying to set up an eventual monopoly lawsuit, should Intel start to take serious discrete graphics share in the future?
- Is he under pressure from the board and is trying to put up a front that he is strong and ready to take on Intel?
Seriously - what good comes out of this? Unless he is just an attention whore, who thinks his time in the limelight might fade and is trying to get as much attention as possible?
It also seems as though he has dismissed AMD/ATI as a threat - that business looks to be dwindling rapidly with AMD's platform approach (the next gen server roadmaps have all AMD chipset solutions). I think with AMD's slash and burn strategy on the discrete market, Nvidia should not just be worried about Intel.
"Let's say that a typical wafer gives 400 good die for example. If the ASP of a die is $300, then saving a single wafer per month puts $120,000 dollars back on the table. It doesn't take much to add up to big chunk of change."
That's not quite how it's done - you are looking at opportunity cost as opposed to actual cost. Any ROI (return on investment) is typically done with production cost saved, not opportunity cost.
If you use opportunity cost, you can often justify just about any change (which is not a good thing in manufacturing). As fabs are never truly fully loaded, if a modification 'saves a wafer', all you are really saving is the production cost of processing an additional wafer; thus the cost of production (and not the opportunity cost) is used.
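To make the two framings concrete, here is a quick Python sketch using the numbers from the example above; the $3,000 wafer processing cost is purely an assumed figure for illustration:

good_die_per_wafer = 400
asp_per_die = 300.0                  # $ per die, from the example above
production_cost_per_wafer = 3000.0   # assumed wafer processing cost, $

# Opportunity-cost view: value a saved wafer at its full sales potential.
opportunity = good_die_per_wafer * asp_per_die
# Production-cost view: value it at what it actually cost to process.
actual = production_cost_per_wafer

print(f"opportunity-cost view: ${opportunity:,.0f} per saved wafer")   # $120,000
print(f"production-cost view:  ${actual:,.0f} per saved wafer")        # $3,000

In a fab that isn't capacity constrained, only the second number is really recovered, which is why ROI reviews use it.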
This is not quite true - they would need to know the specific transistor or lot file (a description of all the key transistor, and other, parametrics), but they wouldn't necessarily need the same process - just similar performance.
This is indeed a more accurate statement, but in the context of "open sourcing" the design, I'm not sure it would be worth trying to create a whole new process that would match the performance for such a small niche product.
"Nvidia should not just be worried about Intel."
NVDA's share price peaked at 35.23 on Dec 26, 2007. By March 19, 2008 it had dropped to an incredible 17.66; in comparison, that would be like INTC falling to 11. And this after they made record profits during 2007. 'Ultra' video cards were selling for over NINE HUNDRED DOLLARS! They had NO competition in any segment of the market. The AMD 2600 series fell on its ass; ditto for the 3xxx series. AMD didn't gain any market share; in fact, they lost market share. NVDA's balance sheet is, by any standard, to die for.
NVDA lost half their market cap in three months, half!
544.78M shares, you do the math.
So what went wrong? I could be wrong here, but I believe it is investor confidence in the company. NVDA is 86 percent institutionally owned, one of the highest figures in the segment, if not the entire exchange.
During the money meltdown, analysts and investors, together with low consumer confidence, didn't see anyone purchasing high-end, high-priced standalone graphics solutions during these lean times; hence the price slashing across the board over the past few months. The high margins weren't there anymore.
Coincidentally, investors during the same period were keeping an eye on INTC's entry into the high-end market. Nearly every time INTC mentioned Larrabee, NVDA's share price took a hit. NVDA felt it big time. I am no Wall Street genius, but I'll tell you one thing: when INTC speaks, everyone shuts the fuck up and listens. INTC has been on a serious roll; they were listening GOOD!
This guy came out swinging and punching simply as a PR tool to restore investor confidence in his company. He crawled out from behind his desk in mid-April when the stock was below 18 and hasn't shut the hell up since, and it's working. The share price is back to mid-February levels.
Four months ago you couldn't get an interview with this egocentric bastard if you were the President of the United States. Now he's out there doing the same song and dance for anyone who'll listen. And he's not done.
He's no fool. He knows INTC is the elephant in the 10 x 10 living room: when INTC moves, they feel it next door. This little dog is going to keep barking, nipping, and biting so he gets noticed, too.
(Further, let's not forget, he MUST sign a deal with INTC for X58 and beyond. No SLI again? I don't think so.)
He's scared, it's obvious, and they know it.
SPARKS
That's not quite how it's done - you are looking at opportunity cost as opposed to actual cost. Any ROI (return on investment) is typically done with production cost saved, not opportunity cost.
In fact, I would further argue that all you save is the cost of processing the wafer up to the point where it would have been scrapped. But to get a truly accurate measurement you also have to figure in the time value of money for the lost days/weeks on the product wafer.
The real issue, though, is deciding just what the cost of manufacturing is at any point in the process. In my experience, this information is not readily available. The best even those on the "inside" seem to be able to get is an acceptable yield number to make a scrap-vs-run decision.
Given the lack of any hard data, I took the easy way out and used opportunity cost to make my point. The point is still valid even if the numbers aren't, you don't have to save too many wafers to make it worth spending money on a tool mod, given you will see a long term cumulative effect.
Though I'm sure I could have stated my point more clearly in the first place. Hopefully, the above clears up any confusion.
"The point is still valid even if the numbers aren't, you don't have to save too many wafers to make it worth spending money on a tool mod, given you will see a long term cumulative effect."
I don't agree, especially after seeing 'unforeseen' issues post upgrade... you are looking at this ROI way too simplistically.
Here are just some of the costs that are considered:
- cost of the upgrade
- cost to qualify the upgrade (both test wafers and the eventual pilot line wafer risks)
- cost of the time you take down the tools to upgrade them (lost capacity)
- cost of the time to re-qualify the tools
- cost of purging out any affected spares
- cost of documentation and training
- etc...
And then of course there is the risk of a negative yield impact... you actually do need to save a substantial number of wafers to make it worthwhile. Obviously an assessment of potential risk is factored in, but I can assure you that opportunity cost has never been considered in the factories I have worked in or with, and you seem to be painting a picture that I don't think reflects how changes are considered or made.
“they would need to know the specific transistor or lot file (a description of all the key transistor, and other, parametrics)”
Hmmm, this is interesting. Let me see if I get this. In our hypothetical factory, we have a 28-tool production line. Each machine is programmed to do its specific function (dripping, spinning, bombarding, gassing, heating, buffing, whatever) with all its respective time and process variables.
How are these tools programmed individually? Are they networked together at, for lack of a more accurate term, a master station? Or does an engineer walk around with a clipboard and manually set the parameters on each tool?
Was the original recipe determined during the R+D phase of production? Who has the authority (and the huge set of balls) to institute any changes during production? How does one (or the group) know that, for example, we sputtered on tool number four, so we need to heat a little more at tool six and polish a little longer at tool number seven? And when do they find out whether this change has significantly affected yield and/or die quality/performance, either positively or, God forbid, adversely?
Basically, is this change what is called a stepping? Is it a crap shoot on a production level? Or is someone going to be a hero, or get their ass(es) reamed, in seven weeks?
SPARKS
Sparks,
Generally, a process tool has a lot of "knobs" in both software and hardware that you can turn to tune it towards a target (film thickness, refractive index, stoichiometry, etch rate, etc.). When a tool is first qualified, or as part of a major scheduled PM (Preventive Maintenance), the tool will require adjusting to get into the desired target range. Most process tools are fairly hands-off once you get them up and running. APC (Automated/Advanced Process Control) allows you to automatically retarget tools as they drift over time, as metrology data is fed back to the process tool. If a tool happens to drift or shift more than is allowed or expected, a more hands-on approach is required.
"How does one (or the group) know that, for example, we sputtered on tool number four, so we need to heat a little more at tool six, and we need to polish a little longer at tool number 7?"
Generally it doesn't happen that way; the process and recipe are set up so that when a wafer moves from one process step to the next, you don't have to know all those little details to target the next process tool. The target window should (hopefully) be set up with enough flexibility that each process tool can do its thing, and as long as it is on target, the next step will work as desired. In fact, once a process has matured, very little if any product is actually measured in-line after each step. The measurement rate (at least where I work) is statistically calculated based on the process capability at each step. The tools themselves are then just monitored on a regular basis (per-shift, daily, or weekly test-fire wafers). Sure, there is obviously some risk involved, but that's the trade-off for improving cycle time and eliminating wasteful monitors.
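For the curious, here is a minimal Python sketch of the kind of feedback loop an APC system implements: a single-knob EWMA run-to-run controller. The knob (deposition time), the target, and all the numbers are illustrative assumptions, not any fab's actual recipe:

TARGET = 1000.0   # desired film thickness, angstroms
LAMBDA = 0.3      # EWMA smoothing weight for new metrology data

def next_recipe_time(rate_est):
    """Pick the process time so the predicted thickness hits target."""
    return TARGET / rate_est

def update_rate(rate_est, measured, time_used):
    """Blend the newly observed rate into the running estimate (EWMA)."""
    observed_rate = measured / time_used
    return LAMBDA * observed_rate + (1 - LAMBDA) * rate_est

# Simulate a tool slowly drifting from 50 down to 47 angstroms/sec.
rate_est = 50.0
for actual_rate in [50.0, 49.5, 48.8, 48.0, 47.2]:
    t = next_recipe_time(rate_est)
    measured = actual_rate * t          # metrology feedback for this lot
    print(f"time={t:.2f}s thickness={measured:.1f}A")
    rate_est = update_rate(rate_est, measured, t)

Each lot's metrology pulls the rate estimate toward the tool's true (drifting) rate, so the recipe time creeps up and the thickness stays near target without anyone touching the tool.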
Orthogonal-
“Stoichiometry is essential on the surface reaction processes in view of the crystal growth mechanism and impurity doping characteristics”
I didn't know what the term was. Am I on the right track here? What I couldn't figure out was whether this is an analytical measure relating to defect density, or an actual process.
“the process and recipe are set up so that when a wafer moves from one process step to the next, you don't have to know all those little details to target the next process tool.”
So basically, the tools are set up such that when a critical tool is set to different parameters, the other tools relating to that stage of the process are adjusted accordingly and automatically, within target windows.
Now I know why you guys are always laughing and beating the shit out of AMD's CTI (continuous transistor improvement)! Now that could be a crap shoot!
Thanks-
SPARKS
Now I know why you guys are always laughing and beating the shit out of AMD's CTI (continuous transistor improvement)! Now that could be a crap shoot!
I think you are confusing AMD's CTI with APC. APC is where tools or processes or recipes adjust to information from other tools or monitors in the fab (some may call this feed-forward or feedback control). The reason many people on this board laugh at it is not that it doesn't work or isn't potentially a good technology, but that AMD makes it sound as if they have some sort of unique capability here; every IC manufacturer has their own flavor of APC, so it is not clear what the big deal is (other than the PR associated with it).
As for CTI, it has its merits but also significant drawbacks. But this is not an automated process - this is AMD (and/or IBM) engineers making process improvements after they ramp the technology and implementing them along the way (rather than waiting for the next node).
Some of the benefits:
- This allows AMD to move to the next node earlier than they probably could otherwise. To generalize broadly, AMD is mainly just shrinking litho and implementing other changes after ramp; if they waited until everything was ready (like Intel does), they would fall even further behind.
- In theory this is the most efficient way of implementing major changes, as again you are not waiting until the next node to implement them.
The drawbacks:
- This introduces more risk to manufacturing and/or yield. Intel basically takes one risk every two years when they introduce a new node; AMD is essentially breaking this risk into multiple events.
- Impact to fab ops. You now have multiple process iterations, and potentially tool differences to enable those changes, running concurrently. This may mean inefficient use of capital if, for example, a piece of hardware or material is modified, requiring a distinct tool for one step and another tool for the previous CTI step.
- Development/qualification time. You now have a lot more monitors, qualification lots and product certification to do, as opposed to doing this essentially once every two years (or once for every product).
- Finally, while some may see this as more flexible, it may actually make things less flexible: you are less likely to make a major change in one of your CTI steps for fear it may fail (or have significant impact to the fab). When you do a node transition, that risk is already factored in, so you may actually be less risk-averse about a major change with a big ROI.
The process has become such an integrated mess that even subtle changes can have major downstream impacts. Having to deal with this on a continuous basis with CTI, versus focusing on it in one major step, shows a dramatic difference in philosophy between AMD and Intel.
“I think you are confusing AMD's CTI with APC.”
Quite right, but then again, you’re always right. :)
But to me they are both crap shoots (one merely the lesser of two evils), given the complexity of the process and the level of HV production required to be consistently successful in the long term.
“but this is not an automated process - this is AMD (and/or IBM) engineers making process improvements…….”
Uhhh? Not automated?!?
A year ago I wouldn't have known the difference between a wafer and a Frisbee. However, many years ago (the i386-i486 era), INTC's rates of execution, volume, and performance, among other factors, led me to become an investor in the company. Now, in retrospect, I can pat myself on the back for my brilliance (it just took me all this time to figure out why). This model of stability goes to the heart of INTC's core philosophy (pun intended).
Obviously, we can see just how well IBM and AMD have executed, manufactured, and implemented at scale over the course of the past 17 years.
There’s a big picture here and I don’t know how anyone could be blind enough to argue against it.
“- Development/qualification time. You now have a lot more monitors, qualification lots and product certification to do”
“The process has become such an integrated mess that even subtle changes can have major downstream impacts.”
Wouldn't it be safer and cheaper to build an entire production line (without the redundancies you mentioned earlier) solely for the R+D folks? Essentially say, "here ya go boys, you can play and tweak here as much as you like. When you get it dialed in, give us a call and we'll come by and take a look"?
Then, when they get it right, institute the changes, success guaranteed.
SPARKS
Wouldn't it be safer and cheaper to build an entire production line (without the redundancies you mentioned earlier) solely for the R+D folks? Essentially say, "here ya go boys, you can play and tweak here as much as you like. When you get it dialed in, give us a call and we'll come by and take a look"?
Then, when they get it right, institute the changes, success guaranteed.
You are right, it is a much more conservative and safe approach to do it that way, but then again, you have to look at it from AMD's perspective.
First, they only have one operational fab, so there isn't a whole lot of extra space for development. It isn't cheap to dedicate a whole line to R&D, taking up white space in your only production facility. If instead you are just making small, incremental upgrades, most of the R&D tools can double as production tools too. There are probably lots of logistical problems that present themselves, and perhaps some tools cannot double up at all, but where there is enough overlap it would make sense for CTI.
Second, since you hit process shrinks "faster", you get the market perception of being on equal footing with the big boys at an earlier date. Even though the cost benefit of the die shrink will likely be eaten up by further CTI overhead (testing, monitors, training, etc.), it at least gives a boost to die output (assuming good initial yields/bin splits), which helps feed their MSS-at-all-costs strategy.
It is ultimately a gamble, but I think that's the way they have to play it, or at least the way they think they have to in order to compete. They simply don't have the time or money to play it safe. They could certainly improve their execution, but they just don't have the capital to wait it out. If their management was smart, they would have pushed for Dual Cores and not monolithic dies on their new multi-billion dollar architecture. The ROI on Barcelona has been significantly reduced now that they are essentially forced to sell K8's to fill the market.
"AMD does the execu-shuffle"
http://www.theinquirer.net/gb/inquirer/news/2008/05/12/amd-does-execu-shuffle
Central engineering officer, chief talent officer - are you kidding me?
The Central Engineering leadership team will have the charter to define the company’s technology roadmap, in partnership with Dirk and the business unit general managers and CTOs, and develop the system architecture and foundational IP blocks that will be leveraged across the business units.
In layman terms, this means they will take all of the roadmaps from the individual business units, staple them together and say look at us, we are taking a holistic approach.
And out of curiosity, is not the CHIEF TECHNOLOGY OFFICER responsible for defining the company's technology roadmap?
"Leverage" = rather than have the business unit managers talk to each other about their roadmaps (which presumably is part of their jobs?), we will create a new layer of bureaucracy to do it for them. This is what many would call "asset heavy" or perhaps "cost heavy" or "intelligence light".
And there is no truth to the rumors that the chief talent officer is scouting American idol for potential candidates...
"Wouldn’t it be safer and cheaper to build an entire production line (without the redundancies you mentioned earlier) solely for the R+D folks? Essentially say,”here ya go boys you can play and tweek here as much as you like. When you get it dialed in, give us a call, we’ll come by and take a look”?
No playing and tweeking, the complexty of yielding a complex CPU with hundreds of millions of transistors on the tighest DR running at the highest performance is no childs play. It takes dedicated process engineers, design engineers, test enginers, FA engineers etc. etc. They all need to work as one tight team with a single goal to deliver the product on time, with in cost, and performance goals.
Here is where AMD and IBM bust. The IBM consortium is a hodge podge of engineers from different companies with different competing pet peeves. I'll even wager they all have their favorite stepper, deposition tool etc. and in the end they need to compromise on tool set, process trade offs etc. Then they do it all the way in East Fishkil the hot bet of technolog in the middle of no where. Has anyone every gone and visited that place, its really in the middle of no-where.
How can AMD think it can get buy using IBM technology and port it to Dresden and do it successfuly and ramp to millios of CPUs. I don't care about SPC, APM, APC or ass lite or whatever other smart ass words Hector can dream up it doesn't work.
You need the team in a dedicated fab and then do what INTEL claims copy exactly. Then you can ramp them factories anywhere and they are matched, builds redundancy and comfort for the Apples, Dells, and HPs that they will get their chips on time. Imagine if on the AMD glory days everybody lined up to get AMD only? Would they be screwed with Phenom ( the Phenom of a failure ) or Barcelona.
Then to think you could yield those on 65nm as a monolothic quad core. What were they thinking?
Continuous improvement is another fancy set of words invented by AMD to describe what they need to do when their first version sucks, they need to fix it. Sorry we screwed up its too slow. Oh, lets call it improvement and spin it.
LOL, AMD going BK, tick tock tick tock
Poor Sharikou, Poor Scientia, Poor Rick
"Then to think you could yield those on 65nm as a monolothic quad core. What were they thinking?"
The issue is more bin splits than absolute yield - I stand by the theory that the 65nm process doesn't have enough margin to get good bin splits (reasonable power at a given speed) for the quad die. I'm sure at this point the issue is not non-functional die (or non-functional cores within a die), but the inability to get the bin splits up.
I would go so far as to speculate that tri-core is now more a matter of working quads not meeting TDP bins and thus having one core disabled to get the TDP down. Many fans and armchair technologists just assumed that tri-core would enable higher speeds, as it was just a matter of one non-functional core, or simply one 'slow' core... you would have to have a VERY poor process for that to be the case, as the individual cores within a quad are not physically far enough apart to introduce that much fmax or TDP variation (with the possible exception of the edge of the wafer).
Given some of the early data on idle power, and that idle power seems not to be significantly reduced for a tri-core, I'm guessing many of these tri-cores are just quads that couldn't fit into the TDP bins. It is a nice short-term bandaid for AMD to recover some of this potential revenue, but the long-term market/ASP impacts will be harsh if they sell these in volume for an extended period of time.
How can AMD think it can get by taking IBM technology, porting it to Dresden, and doing it successfully while ramping to millions of CPUs?
The issue is not really porting it over; the issue is the window / process margin designed into the process. As previous posters point out, there are quite a few agendas in the IBM fab club, and they are probably not all aligned.
IBM has a very good and well-known research team (world class), but development and manufacturing are a whole different thing. What works in research and early development mode may not lend itself to high-volume manufacturing. A process may look solid on a small pilot line with one or two tools at any given step (you have a lot of engineering expertise on hand, and the tools are probably 'dialed in'), but when you start introducing subtle tool-to-tool variation as you scale up to higher volumes, you will quickly find out whether you have enough margin built into the process, and it will be painful if you don't. The response to this is not APC or tweaking/adjusting on the fly to get things into spec, it is to develop a bigger process window that won't be as susceptible to the inevitable variation. As I was once told 'give me a process I can drive a truck through' - you don't want to be monitoring the heck out of the tool and adjusting it frequently.
Yeah, you may be leaving a little performance on the table, but many times that 'extra' performance at a given process step has no noticeable impact on the final performance of the chip.
“No playing and tweaking; the complexity of yielding a complex CPU with hundreds of millions of transistors on the tightest design rules, running at the highest performance, is no child's play.”
Attila The Anonymous-
I sincerely apologize; I didn't mean to suggest by any stretch that there was anything simple about the extremely complex processes required to build hundreds of millions of atomic-scale circuits on a fingernail-sized substrate.
Obviously, with the various types and methods of deposition, the time and temperature variables of annealing, photomasking, etching, material removal, and chemical interactions and bonding, layer by layer, all at an atomic level, this is no small feat.
Further, I didn't wish to minimize or downplay the disciplines required to work at the forefront of the inner space of our leading-edge technologies in process engineering.
On the contrary, my sole objective was to point out how one company appreciates the complexity involved by taking a conservative approach, while the other takes risks, perhaps unnecessarily, to obtain the same result: a successfully implemented, high-performance product manufactured en masse, all in the interest of a profitable enterprise.
The loss of billions of dollars and tens of thousands jobs is not child’s play.
AMD, subsequently, has taken risks, while Intel has taken a more disciplined and conservative approach to the aforementioned technologies over the years.
One anonymous poster referred to K8 as a "one hit wonder". They got lucky. Conversely, Intel needed no such luck. Kudos. In retrospect, I'll submit to you that these philosophies have determined the current state of financial affairs between the two companies.
In the final analysis, the law of averages dictated the outcome, one by chance, the other by consistent disciplined methodology.
A famous man once said, “God does not play dice with the universe”.
AMD wasn’t listening.
This was my intent, purely from a layman's perspective; my sincerest apologies for my methodology.
SPARKS
Orthogonal-
"If their management was smart, they would have pushed for Dual Cores and not monolithic dies on their new multi-billion dollar architecture. The ROI on Barcelona has been significantly reduced now that they are essentially forced to sell K8's to fill the market."
Truer words were never spoken.
This is gospel.
SPARKS
"The response to this is not APC or tweaking/adjusting on the fly to get things into spec, it is to develop a bigger process window that won't be as susceptible to the inevitable variation. As I was once told 'give me a process I can drive a truck through' - you don't want to be monitoring the heck out of the tool and adjusting it frequently."
Thank you, GURU, this is precisely where I was going. I don't see any other approach working in the long term.
(Nice cliche!)
SPARKS
I don't agree, especially after seeing 'unforeseen' issues post upgrade... you are looking at this ROI way too simplistically.
I can't deny I'm looking at this simplistically. I'm a victim of my experience. I've seen management get a burr under their saddle too many times and drive for a fix that didn't seem justified.
Somebody up the food chain draws a line in the sand that says we will hit this yield/cost target, and it is off to the races. I've seen very expensive changes made to obtain 1 EDI (Estimated Die Impacted, or an average of one die per wafer if you prefer) improvements. If you assume 500 potential die per wafer (a nice round number), that is a 0.2% yield improvement. At 30K wafer starts per month, you get back the equivalent of 60 wafers a month.
So yeah, I'm a bit skeptical about the whole ROI thing. It is the right way to do things, but it is often ignored in the name of expediency.
Yo Sparks, please don't take offense at my taking one of your statements. This is a free-wheeling blog for us to react and put forth commentary, and your bite was simply too juicy not to take a piece of for fun and commentary. Many here, and almost all on a few other blogs, have no clue, can't be bothered to read a little to gain some wisdom, and are too sensitive. We aren't so sensitive here that we can't take our lumps, not like some others we know, heah?
Yo intheknow, you should choose your acronyms more carefully; they might give others a clue to where you work, heah?
Sparks, I enjoy your posts and give you credit for even trying to talk to that deluded Scientia on his blog. I don't waste much time there at all. It's interesting to note that he is almost the largest poster in his own comment section. He doesn't need a comment section anymore; he should just pontificate, LOL.
As to whether bin splits or yield limit the quad-core: it is both.
First, yield to first order can be simply estimated by
Yield = exp(-(DieSize * DefectDensity))
There is no way around it: if you double the die size at a given defect density, your yields go down. If you are limited by random defects, needing 4/4 cores working yields much lower than 2/2, and even 3/4 working cores really impacts yields. Of course, as you move to the next generation the die size drops roughly in half, which really helps, provided the new node has the same defect density. INTEL was very engineering-smart and realistic to build its 65nm quad core by putting two dual-core dice in a package; this effectively raised their quad-core yields by halving the die size from the wafer's point of view. AMD was really stupid: not only late to 65nm, but probably also running an inferior process with lower performance/yields, and it deluded itself into thinking it could yield a native quad core. What a joke; they are in the volume business. This isn't about yielding a few thousand servers that sell for a million a pop, where you get another million or two in service contracts like IBM does. It is all about making money on the silicon, and they don't seem to comprehend that. Going native quad-core requires effectively halving the defect density of what should already be a mature and well-yielding process. Total lunacy!!
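A minimal Python sketch of that first-order (Poisson) yield model, using the ~283mm2 quad die size mentioned later in this thread and an illustrative defect density; neither number is actual AMD or Intel data:

import math

def poisson_yield(area_cm2, d0_per_cm2):
    """First-order die yield: Y = exp(-A * D0)."""
    return math.exp(-area_cm2 * d0_per_cm2)

D0 = 0.5                    # assumed defect density, defects per cm^2
quad_area = 2.83            # hypothetical native quad-core die, cm^2
dual_area = quad_area / 2   # the same silicon split into two dual dice

print(f"native quad die yield: {poisson_yield(quad_area, D0):.1%}")   # ~24%
print(f"dual-core die yield:   {poisson_yield(dual_area, D0):.1%}")   # ~49%

Because any two good dual dice can be paired into an MCM quad, the fraction of silicon ending up in sellable quads tracks the ~49% dual yield rather than the ~24% native-quad yield, which is exactly the point being made above.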
Bin split is another thing. Performance is limited by designing in enough margin at the target frequency to tolerate the expected variability of device performance within a die. The larger the die, the larger the expected within-die variation. Transistor performance can be assumed to follow a Gaussian distribution. Each core will be limited by natural process variation to a given max frequency, and then you need the luck of all four cores in the native quad landing in the right top bin right on top of each other. With Gaussian distributions, the probability of that gets lower with more cores, even if they are adjacent, so in the end you bin the package at the slowest of the four cores. INTEL, on the other hand, only has to deal with half the problem, finding pairs of die, and should always get a higher bin split for a quad-core by assembling two pairs; the two pairs don't even have to come from the same wafer.
For example, just assume that for whatever reason the process is fastest in the center or at the very edge of the wafer, with a gradient across it. AMD would get perhaps one good fast quad-core in the center and possibly none at the edge, as they need the whole die to land just right in that sweet spot. INTEL has a much smaller challenge, as they require a much smaller area of the wafer to be in the sweet spot. You can leverage this to get more die at a given performance, or push the wafer parameters harder to get higher performance; generally, as you push wafer speeds you get more parameter variation, a wider Gaussian distribution. Again, AMD's bad strategy cost them both yield and performance. As you can see, being ahead in both performance and process node yields huge options and advantages in die size, performance, design, and cost. Being first and best is a killer advantage; it doesn't matter whether it's memories or CPUs.
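The slowest-core argument is easy to check with a small Monte Carlo sketch. The mean, sigma, and the assumption that cores vary independently are all illustrative (and they ignore the spatial-correlation objection raised further down in the thread):

import random

MEAN_FMAX, SIGMA = 2.6, 0.1   # GHz; assumed core-to-core spread

def slowest_core(n_cores):
    """A die with n cores bins at its slowest core."""
    return min(random.gauss(MEAN_FMAX, SIGMA) for _ in range(n_cores))

N = 100000
native = [slowest_core(4) for _ in range(N)]    # native quad dice

# MCM route: bin each dual die first, then pair like-binned dice.
duals = sorted(slowest_core(2) for _ in range(2 * N))
mcm = [duals[i] for i in range(0, 2 * N, 2)]    # slower die of each pair

print(f"native quad average bin: {sum(native) / N:.3f} GHz")
print(f"paired MCM average bin:  {sum(mcm) / N:.3f} GHz")

Under these assumptions the native quad bins roughly 50MHz lower on average, because the MCM builder gets to sort dual dice and pair fast with fast.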
Basically, at the leading edge, where performance drives your pricing and your competitive stack, not being there, as AMD is now and was for most of its life, is a money-LOSING prospect. The K8 days were a total anomaly, never to be repeated as long as INTEL doesn't trip and fall on its face like it did during the Itanium, Prescott, Cedar Mill, Tejas days.
My last comment is the latest news found here:
http://www.digitimes.com/mobos/a20080513PD207.html
If this is Hector's asset-light plan, then AMD and nVidia are in for big problems. Since 130nm, TSMC has been slipping its real production starts, with shallower and shallower ramps at the 90 and 65nm nodes; the 45nm node is even later, with a shallower slope still. Now you would have AMD and nVidia, best friends, competing for silicon starts on the same node at the same foundry; get real, what a mess. TSMC will ask for cold hard cash and/or very hard guarantees of wafer starts, with huge back-out penalties, before tooling up to support an aggressive ramp producing millions of logic CPUs a quarter. I don't think TSMC can support AMD CPUs, nVidia graphics and chipsets, and ATI graphics and chipsets on 45nm early next year, or even on 65nm for that matter. Another clean example of why it is only Samsung and INTEL, and then the rest, on the bleeding edge.
Tick Tock Tick Tock Sharikou, Rick and Scientia will soon be really fantasizing. That nVidia CEO will soon be joining that club too!
AMD was really stupid: not only late to 65nm, but probably also running an inferior process with lower performance/yields, and it deluded itself into thinking it could yield a native quad core.
AMD's former CTO was hired from IBM. AMD chose to go with the more elegant, but less practical native quad core approach. I have to believe that this is more than a coincidence.
IBM's culture is to choose the elegant approach over the pragmatic one. When you bring in an IBM guy to plan your technology roadmap you shouldn't expect old habits to just go away. He may be brilliant and highly skilled, but you're going to need someone to yank him out of the ivory tower every so often.
"First for, yield to first order can be simply estimated by
Yield = exp ^-( DieSize * DefectDensity )
There is no way around it if you double the die size for a given defect density your yields are going down"
I hate when people use this - you are assuming the primary yield fallout is due to random defects... any data to support that? Have you seen a yield pareto for 65nm or 45nm (or for that matter 90nm)? It's not a question of whether yield goes down with die size, but of how significant a factor that is to overall yield.
And because of this (flawed) assumption, you can't conclude:
"To go native quad-core requires that they enable effective halving the effective defective density of what should already be a mature and well yielding process, totally lunacy!!"
If the issue WERE really non-functional cores, then the tri-cores should show lower power consumption than quads at idle, no? (One less core?) Why is there virtually no difference?
And on the TSMC foundry question from the other anonymous poster: TSMC runs a bulk Si process, so they will not be able to run 45nm AMD CPUs (which are based on an SOI process). Forget shallow ramps; this is a fundamental issue.
If AMD outsources (or I should say spins off a minority stake), it will likely be to someone from the IBM fab club (like Chartered or IBM), especially as they share a similar process.
In the future TSMC may do some CPU work (Fusion? or a future core?), but having to re-validate the K8/K10 design on a bare Si process would not be a simple chore.
"With Gaussian distributions the probability gets lower with more die even if they are adjacent so in the end up binning to the slowest of the 4 die"
I hear this one a lot... and it sounds really good theoretically and intuitively - but think about the some of the actual dimensions here.
Let's say a 283mm2 die is ~19mm x15mm... that would put each core about 10mm x 5mm (again this is crude as I'm ignoring cache and actual layouts)... so what I'm hearing is that cores within a given die that are probably ~10mm apart (and at most 15-20mm apart) have significantly different fmax's?
On the edge... sure, I would buy the argument... on the inner 2/3 of the wafer, I'm not thinking there will be that much variation across 10mm (or 15-20mm) distances.
I really think this whole one-slow-core theory is way overdone; it sounds nice, but I have yet to see a single shred of supporting data. Again, if this were the case, where are the tri-cores with the high clock speeds (you know, after simply disabling the slow core)? Where are the fast K10 dual cores?
I'm standing by the conclusion that this is a power issue. To get fmax up, one way is to increase the overdrive (the amount of voltage over the threshold voltage of the transistor); of course this has huge implications for power. Conversely, if you are over your power design targets, one way to get there is to lower the clock. Finally, if you have enough product with really poor power splits, you can look at intentionally disabling a working core to get a tri-core within the TDP spec. (Thus I think the tri-cores are simply quad cores with poor power bin splits, more than quads with one non-functional core.)
Given where gate-oxide scaling is at, as well as some of the key transistor process steps, I think it is more likely there is significant power variation (via leakage or Ion/Ioff) than fmax variation (though the two are somewhat inter-related).
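For scale, here is the dynamic-power arithmetic behind that overdrive trade-off, using the usual P ~ C * V^2 * f relation; the voltage and clock numbers are made up purely to illustrate the shape of the curve:

C = 1.0               # normalized switched capacitance
V0, F0 = 1.10, 2.6    # assumed baseline: volts, GHz
V1, F1 = 1.21, 2.8    # assumed +10% overdrive to reach the next bin

p0 = C * V0**2 * F0
p1 = C * V1**2 * F1
print(f"{F1 / F0 - 1:.0%} more clock costs {p1 / p0 - 1:.0%} more dynamic power")

The quadratic voltage term is what makes the last speed bin so expensive in watts, and that is before counting leakage, which rises even faster with voltage.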
You armchair QBs have got no f**king clue.
At some companies, believe it or not, they are competent enough that random defects limit yield. Of course, if you are at one of the incompetent ones, then you have other problems to fix.
If you don't do good up-front DRs, DFM and characterization of the tools and process from generation to generation, perhaps you do get surprised by systematic issues, be it density, OPC, or other things. But if you are competent and work well with your design team, these are easily overcome. Another reason consortium-based technology development, separated from the designers' needs and wants, is going to unravel. You can take the conservative approach of a foundry DR, and you get that result: mediocre. Another reason real men have fabs, and their fabs work with the design teams closely.
People have been making noise about transistor variation for a few generations. It's like the litho people crying wolf about the end at 0.5um, at 0.25um, etc. You characterize it, work again with your designers, and it's easily managed if you understand it. Again, the incompetent make excuses.
Of course power matters; faster transistors are generally leakier, and P = C * V^2 * f too. Power limits everything.
Anyone who thinks you aren't limited by your weakest link is a retard. It's the same for a football team, CPU speed, or multi-core: your performance is limited by the weakest player, the slowest speed path with the lowest margin, or the slowest core. Why isn't that easy to understand?
Of course, in a perfect world maybe all your paths are slow on all your cores, so they are all the same: SLOW. Then you are correct, you've got a crappy design running on a crappy process and you get a uniformly crappy product. I concur AMD has a uniformly crappy product, roadmap, management team and business plan.
Tick Tock Tick Tock game over
“your bite was simply too juicy not to take a piece of for fun and commentary.”
Cool. I thought you were serious for a moment. I would never underestimate the processes of the guys who mess around with 10-to-20-million-dollar tools! The fact is, most people don't even have a clue! But what astounds me is the imbeciles at the top of the food chain who didn't either!
Just look at the posts above!
Your defect density math is a perfect example! I'm a nobody from Oshkosh, but if I were at the "top of the food chain", and one of you boys came to me and said, "Listen, knucklehead, this thing ain't gonna fly. If the defect density math doesn't kill us, the inflexibility of a native quad core will," I would have told the big shots to come up with a better plan. If the primadonnas didn't like it, I would have shown them the door!
Only now, when it's too late, are those IDIOTS at AMD chopping out the dead wood.
There's a big problem here, and it infuriates me. The gods in the ivory towers are totally separated from the guys who really know what the fuck is going on in the real world! Enter the high and mighty American corporate structure: perks, Ferraris, multimillion-dollar salaries, whatever, while the walls were caving in around them due to incomprehensible bad decisions.
I never went to the Henry Kissinger School of Electrical Theory. I don't give a flying frig who the hell these guys think they are. I've been a top foreman for too many years to let any pencil-pushing, finger-pointing son-of-a-bitch come onto my job and try to reinvent the wheel.
I’ve got a little card in my pocket that’s says, “Fuck you, give me my money, and I’ll go work for someone else, and I’ll turn a 30 to 40% profit for them.”
When Kelly Johnson (Lockheed) designed the Skunk Works, he insisted that engineering be on the floor with the techs. There was a reason for this, to obtain practical solutions for a jet that flew faster than a rifle bullet. IT WORKED.
I wouldn’t give two cents for any of today’s corporate megalomaniacs who didn’t keep one or two of you geniuses under each arm, let alone work on the floor for a couple of years.
Those arrogant bastards were probably told by guys just like you that a runaway freight train was heading straight for them. Either they weren't listening, were too busy, or were too high and mighty to listen to reason. Let's hope INTC can get some of the good guys before Samsung or TSMC does.
Further, I hope INTC brass is right next to you guys in a BIG way.
SPARKS
"Another reason real men have fabs, and they fabs work with the design teams closely."
I rest my case.
SPARKS
I am certainly no designer, but the intricacy of matching design to process is very evident just in the progression of various steppings and improvements.
Also, how does outsourcing fit into AMD's CTI model? If they choose to offload most of the manufacturing burden, what leverage/control do they have over improving transistors and steppings over time, as opposed to in-house controls and mechanisms?
While it is easy to understand the economics of moving toward an asset-light model, this ultimately becomes an Achilles heel for AMD, as the marriage between design and process is much harder to make. Just take their own statement about the Barcelona delays, 'we had difficulties tweaking the design to the process' (paraphrased, of course); this speaks volumes about the level of detail required to get something as complicated as a native quad-core design working at a given node.
AMD will find it more difficult, not easier, to compete if they farm out too much of their capacity in my opinion.
Jack
"At some companies believe it or not they are competent enough that random defects limited yield. Of course if you are at one of the incompetent ones then you have other problems to fix."
And I suppose you have inside info as to what is limiting AMD's yield. Well I guess that's apparent as you've concluded it is area (defect density) related. Must be nice having only 1 bar on the yield pareto in utopia process land...
I bow to the superior intellect. Just one stupid question.... you tend to get yield fall off close to the edge... I suppose that is just a 'random' issue related to defects? Is edge yield a significant or insignificant portion of yield loss? I'm not saying random defects aren't an issue, but you are ignoring many other things.
"Anyone who thinks you aren't limited by your weakest link is a retard. That is the same for a football team, CPU speed, or multi-core. You're preformance is limited by the weakest player, the slowest speed path with lowest margin, or the slowest core."
I'm not arguing with this, but the variation across a 15mm space IS NOT THAT SIGNIFICANT... do you really think the transistor speeds are vastly different at r=0 vs r=20 vs r=50 vs r=65 (r = radius in mm)? Again, this is an issue at the outer radius, but I think you are 'intuiting' that this must be the problem. Even if you have a gradual or systematic variation as you move out in radius, the cores within a given quad are only 15-20mm apart; it's not like you are making a quad with one core at r=5mm and another core at r=125mm.
Yes, there are speed mismatches but within a 15-20mm linear spacing you are probably talking 1 or 2 speed bins at most (except possibly at the edge).... it's not like you have 3 cores at 2.6GHz and 1 at 2.0GHz.
"People have been making noise about transistor variation for a few generations. Its like the litho people crying wolf, about the end at 0.5um, at 0.25 um etc. etc. You characterize and work again with your designers and its easily managed if you understand it."
This is completely naive and misleading; even Intel will tell you this is not 'easily managed'... at these geometries transistors are no longer simply on/off, you have on and less-on. You have significant signal-to-noise issues, crosstalk, latching and all sorts of effects.
For you to say 'easily managed' tells me you don't work in the area. Design rules help...but to characterize it as easily managed?
JJ - I think folks have been misrepresenting what the outsourcing is: the 'outsourcing' is AMD maintaining a majority-owned stake in a separate entity which will initially be producing only AMD parts.
I think early on this will behave no different than the current company other than how they count the financials - you will simply have a shift of AMD engineers to whatever the spinoff is. I look at it as not that much different than operating the fab as a separate business unit with a full-time collaborator like Chartered (or whoever the minority owner is)
The question is what will happen if/when AMD renegotiates the x86 license in 2010 and if they are able to remove the no more than 20% outsourcing restriction (and essentially dump their ownership in this spinoff). I think only at that point will you only see a truly separate entity which will introduce the typical foundry issues (communication with designers, alignment on design rules and process targets, etc...)
I suspect they will keep the CTI approach; it is the only thing keeping them close to Intel on tech node scaling (schedule-wise). For a foundry with multiple customers, though, this would be brutal: you are going to have some customers with less critical parts which may not want to move to the next CTI step, and you run the risk of having to support several legacy CTI steps.
And when the spinoff does become a truly separate entity you have AMD design having to pay that foundry a premium to cover the foundry's margins which will make things a little harder to compete on the cost side with Intel.
"JJ - outsourcing is what I think folks have been misrepresenting it as. The 'outsourcing' is AMD maintaining a majority owned stake in a separate entity which will initially only producing AMD parts. "
What you say is very true... I made that statement with the rumor in mind that AMD will start using TSMC for CPU production...
Frankly, the whole strategy AMD is throwing together is not yet known... it will be an interesting shell game, I have no doubt. If they succeed in the heavily rumored 'split the company into a manufacturing entity and a design entity' approach, with majority ownership in the 'new company', then all it really turns out to be is some very creative bookkeeping. The debt stays, the losses stay, the execution problems stay... nothing changes but the names on the letterheads.
Jack
then all it really turns out to be is some very creative bookkeeping. The debt stays, the losses stay, the execution problems stay... nothing changes but the names on the letterheads.
Yes but it gives AMD an artificial excuse to raise money - potentially through a stock offering and also through whatever minority stake they sell. In the hypothetical scenario it also would likely mean the debt would be transferred to the foundry part, which would mean that AMD 'design' might also have a chance of securing more loans (or additional private equity investment)
Yeah, there is no new value generated, but the debt movement would enable additional financial flexibility in terms of raising capital - at this point the current AMD is kind of in a tight box from a credit perspective.
Also, I don't see the TSMC stuff happening in the near future, as TSMC remains a bulk Si foundry and AMD would need to port the K10 design over from an SOI-based process (I don't see where they get the additional resources to do this, given they are already severely resource constrained). If they do an MCM Fusion approach early on, the graphics part would likely be TSMC-made, but I don't see them doing CPUs for AMD in the near future. Given Chartered's participation in the IBM fab club, this to me seems to be the only viable outsourcing for CPU's for the immediate future (and a potential candidate for the minority stake in the AMD foundry spinoff?)
I wanna PUKE!
Talk about in the middle of nowhere!!!!
650 MILLION!!!
5+ BILLION IN DEBT!
6 CONSECUTIVE QUARTERLY LOSSES!
LAYOFFS BY THE BUSHEL!
VISION! WTF!!!!!!
http://news.moneycentral.msn.com/ticker/article.aspx?Feed=ACBJ&Date=20080513&ID=8636907&Symbol=AMD
SPARKS
“Given Chartered's participation in the IBM fab club, this to me seems to be the only viable outsourcing for CPU's for the immediate future (and a potential candidate for the minority stake in the AMD foundry spinoff?)”
Well buddy, looks like you hit the nail on the head.
http://www.tomshardware.com/news/cpu-phenom-amd,5370.html
SPARKS
Be careful with the 'analysis' in the article... Theo is an idiot on these matters.
He somehow concludes SOI is the reason IBM is in game consoles and that SOI = lower power. Another case of people wanting to fit things into what they intuitively think. GPUs will not benefit significantly from SOI (unless it is accompanied by other changes like high-k).
Then you have the IBM/AMD shell game... change two variables at the same time (substrate and gate oxide) and then assign the resultant change to the desired variable (SOI). You see this with their stress claims - they always compare to a completely unstressed transistor, as opposed to the previous generation, to make the benefit sound huge.
I don't know if GURU left a post at that Tom's article, but whoever did copied one of his comments from this blog from 6-9 months ago almost verbatim.
That was me... tired of Theo posing as a technologist when all he does is reprint press releases and then mis-analyze them.
He does the same thing on financial matters.
Orthogonal, Master of the Chipset, your humble servant beseeches you and your wisdom.
I have new, never-before-seen and largely undocumented parameters on my shiny new overclocking beast (X48). From the articles I have read, tweaking these borders on voodoo science and witchcraft. I have read enough about them to know the limits, so as not to jeopardize my lovely QX9770. Do you perhaps have a link on where I could find/obtain specifics on what they do and how they interact with timings/voltage on the board and on the CPU?
CPU GTL Voltage Reference (0/2)
CPU GTL Voltage Reference (1/3)
I'll assume that we are balancing individual die or core voltage(s) in pairs. The parameters are ratios set in 0.005x increments. Why would this enhance stability?
CPU PLL
I'm assuming this is a phase-locked loop regarding the core power supply regulator(s).
The voltages here are considerably larger than core voltage (1.50-2.78), huh?!?!
Am I on the right track here?
SPARKS
Wait wait, I’m thinking okay done.
Radial yield sensitivities are always there; as tool manufacturers got better at 150mm, then 200mm, and then 300mm, they've improved. I'd say at 300mm they are damn good, but you still see systematic issues, both from defects and from process variations. I suspect 450mm will be another adventure in the next decade, and I can't wait!
Tool uniformity: try as they might, 300mm is a big wafer and uniformity is never perfect. What is nice is that there is a lot more center area than edge, and that only gets better the larger you go, which helps a lot. Look at any process and map the tool-level parameters: they always show some center-to-edge variability. Stack a bunch of them on top of each other and you get systematic device and interconnect non-uniformity, and in the worst case you take away further process marginality. For example, take polish, which will always have some center-to-edge non-uniformity. The physics of angular velocity at the edge are just different than at the center, and even when you adjust the down force, getting it right over the lifetime of the pad and heads is very hard. Think of trying to sand a pizza with a circular sander and get it flat to a few tenths of a micron or less. Combine that with center-to-edge differences from, say, etch and other depositions. Now imagine taking this not-so-flat wafer into a stepper and trying to print minimum dimensions with limited depth of field. Do this across hundreds of operations, stacking all the tools with their different center-to-edge uniformities, and you have an interesting integrated process with systematic variation from center to edge.

Now imagine it being developed at IBM, perhaps on only one tool in B323 in East Fishkill, on an experimental chip. Then you transfer this very mature process to Samsung, Chartered, and AMD on a different set of steppers, deposition tools, polish tools, etc. Sure, you've got recipes and targets for everything, and the engineers will work hard to dial in all those things like thicknesses and gas flows, but you NEVER get it perfect on a different tool. Stack hundreds of steps on multiple different tools and you are looking at a potentially very, very long learning process - or maybe you get lucky, or maybe you do what INTEL does: copy exactly, and eliminate the need for luck. If you don't, what you generally get is a surprise of the most unexpected kind. AMD seems to get surprises more often than not, and on the bad-news side; INTEL gets the expected steady output and generally good or steady news.
Edge and center are never the same. At the very least, at some point near the edge you have partial die and the bevel. Things like pattern density and thermal radiation all change as you get to the edge. No matter how good your clamping, rings, and process tweaks are, there is always a small cross-wafer and edge interaction.
Any yield engineer at any company can tell you stories about radial yield. Sometimes the center has a dip, sometimes there's a donut of good or bad, but in the end, on a mature, well-characterized process, the edge will yield lower. How much lower than the baseline is what sets companies apart. Ask any defect engineer to plot the stacked defects from their monitors across all layers: in general the edge shows more stuff. Often it doesn't, and you can be fooled, because the defect tools key off pattern recognition and often have to be desensitized at the edge because it looks different. But if it looks different, won't it produce different chips? Yes, it does. So it's both defects and systematics in play at the edge, and everywhere else. The edge is always harder.
AMD has multiple problems at the moment. They've got a design/process interaction problem, I suspect with the TLB - or something they missed in their validation testing. They've also got a process/performance/power problem, as their design was totally mismatched to the marketing need for power/performance, given what their technology delivered. It's like INTEL's Prescott adventure: what worked well at 130nm for Northwood didn't scale at all at 90nm, and the market had changed its expectations, too, toward power-efficient designs. If people had accepted 200 watts we'd all have space-heater Tejas chips, I'm sure.
Now, in a perfect world where you've fixed the systematic issue on every tool in the process, you still have the random component; that is the nature of thermodynamics - you can't cheat Boltzmann. You will always have Gaussian distributions of CDs, dopant atoms, etch variation... just about everything you'd like to keep tight has a distribution. If you need each core to be constrained within a certain distribution - I don't care if it is at the center or the edge of the Gaussian - and you require four cores, with four separate Gaussians of parametrics/defects, to align to produce one good quad core, it matters not whether they are 10mm or 2mm in size: the math always leads to lower yield, or a tighter window for success, for four cores on one die versus two dice glued together at the package level. And why is AMD going to a glued 12-core? Perhaps they learned something, somewhat painfully, eh? That is why everything gets better at the next process node as die size shrinks, provided you have the same normalized defect density and uniformity. Intel waited until 45nm to do a monolithic quad core - waited till the time was right. I'd call that smart, versus succumbing to some marketing monkey's wishes.
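To put rough numbers on that min-of-four argument, here is a minimal Monte Carlo sketch. The per-core fmax distribution is invented purely for illustration; only the order-statistics point matters:

```python
import random, statistics

random.seed(1)
MU, SIGMA, N = 2.6, 0.08, 100_000  # hypothetical per-core fmax distribution (GHz)

def slowest_of(n):
    # a die's speed bin is set by its slowest core
    return min(random.gauss(MU, SIGMA) for _ in range(n))

quad_bins = [slowest_of(4) for _ in range(N)]  # monolithic quad: min of 4 draws
dual_bins = [slowest_of(2) for _ in range(N)]  # dual-core die: min of 2 draws; duals can
                                               # also be speed-sorted before packaging

print(f"mean quad-limited fmax: {statistics.mean(quad_bins):.3f} GHz")
print(f"mean dual-limited fmax: {statistics.mean(dual_bins):.3f} GHz")
```

The min of four draws always sits below the min of two, so the monolithic quad gives up a bin (or needs a tighter process window) relative to two glued duals, exactly as described above.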
As to 'easy', that is a relative statement. INTEL appears to have a system to make it manageable and economic, as demonstrated by their track record of making money and turning out designs like clockwork every 2 years. For AMD it clearly hasn't been easy: they can't make money more years than not, are deep in debt, and are fundamentally restructuring their core business. Don't confuse this with INTEL, which recently also did some restructuring - but that was trimming people and getting rid of fat to make more money. It looks like one company makes it easy, and for the other it appears impossible.
Any outsourcing that isn't copy-exactly, done in very tight discussion with the designers to tune the process to what they want, is a quick descent into oblivion for leading-edge design. I don't care if it is Chartered, TSMC, or IBM itself. For ATI/nVidia their competitive advantage was in design cadence and software driver support for graphics. The silicon side was neutral as they both used TSMC. For AMD, moving to a foundry to compete against INTEL is a huge step back in competitive positioning. A foundry can't afford to give any customer anything special; if it does, it fails in its fundamental model of being a foundry to all. Foundries are everywhere, tripping over each other: if it isn't TSMC, then there are companies like Samsung, IBM itself, SMIC, Chartered, and UMC all waiting to be the neutral foundry if TSMC gets too cozy and doesn't offer the latest and greatest to all willing, paying partners without bias. Thus the silicon becomes a generic commodity and AMD is left as nothing but another design house. INTEL wins going away. Let's not even talk about the licensing issue coming up with the foundry model for AMD.
AMD even mentioned they hope their IP will be strong enough to extort something out during the license negotiations. My guess is INTEL won't put up a fight, as letting AMD go to a foundry is a win-win for them: no need to worry about a competitor that has shot both its feet off with this ass-lite strategy. It comes back to AMD having a broken business model:
1) Not enough competitive products to grow demand and market share.
2) Not enough market share to generate the revenue needed to feed the required capital investment in design and manufacturing technology to catch up. Which lands you back at item 1) again.
AMD has the perfect vicious feedback loop for losing billions while being the favorite of Scientia, Rick, and Sharikou.
Where did AMD get so stupid, and where did it start? They wasted billions on a design IP purchase when what they needed was billions to catch up on technology. Totally busted, and it set them down the irreversible course of marginalizing their capability until they are nothing but a design house, just like ATI/nVidia. It all started with Hector Ruiz, did it not?
Tick Tock Tick Tock, the end is coming sooner than expected if foundry is where AMD is going. It's a death sentence for them!
“For ATI/nVidia their competitive advantage was in design cadence and software driver support for graphics. The silicon side was neutral as they both used TSMC”
Yes indeed! This is precisely the reason Nvidiot is running scared! I knew they were, but I didn't know exactly why, till now. Jen-Hsun Huang is well aware of these limitations, as you have revealed his weak hand.
(I would add multiple GPU’s on a single card to that design cadence)
Your essay explains why both ATI and NVDA GPUs are such power hogs and leaf blowers. They have reached the thermal limits of their process and size and, more importantly, they are at the mercy of outsourced foundries. Therefore, not only is INTC a threat in design, but also a monumental threat with a refined process that ATI/NVDA will never obtain.
Any graphics component INTC produces will get the benefit of INTC's cutting-edge technology and manufacturing - 45nm and below, with nuclear-control-rod metals.
Well done, brilliant essay.
SPARKS
"Any yield engineering from any company can tell you stories about radial yield."
and...
"At some companies believe it or not they are competent enough that random defects limited yield."
Random defects have NOTHING to do with the edge yield drop-off, and that IS and WAS my point... you make such absolute statements, and quite frankly they are wrong - your own words contradict each other...
You then go on and on about what causes systematic radial issues and again MISS the point... when you are talking about variation across a die with ~15mm linear spacing, it is not sensible to look at center-to-edge variation and assume it also exists across a 15mm distance (it is there, just not to the extent you make it out to be).
So which is it - is all yield random-defect controlled, or will you concede that perhaps that statement was less than accurate?
You have so many mis-statements in your comment that I will try to detail a few in the next one... it's as if you know just enough to be dangerous but not enough to synthesize it.
"At the very least at somepoint near the edge you have partial die and the bevel."
Fantastic - this is called edge exclusion, and partial die, or die that fall beyond the edge exclusion, are not part of the yield calculations (and the bevel is outside the edge exclusion).
"As to “easy” that is a relative statement."
No easily manageable means.... easy... means minimal effort required... means....umm.... easy? Even Intel will tell you this is not easy (as you allege) - this whole 'relative' thing is just you trying to weasel out of a mis-characterization.
"Things like pattern density, thermal radiation all change as you get to the edge."
Hmmm... pattern density changes on the edge? Or does the process response to pattern density change at the edge. Are you printing different pattern densities on the edge? Again, how does this fit the random defect model again? (Or are you now agreeing that perhaps there are other yield factors that may also be important)
Thermal radiation - well depends on the tool and the design you can throw out a bunch of technical sounding terms like Botlzmann and Gaussian and angular velocity... but you are just proving my point - yield (even at first order as you alleged in your intitial post) is not simply correlated to random defect density, there is plenty of systemic variation which will lead to yield issues.
"I’d say at 300mm they are damm good"
Thanks an unsupported, subjective statement is always useful... about as useful as AMD saying 'mature yields'.
Just out of curiosity your Gaussian, Boiltzmanian, Einsteinian, angular velocity x thermal radiation would lead to how much variation across a 15mm space on the wafer (that's not at the edge)? This would cost you at most, what, one speed bin? Again your theory is that with such a big die size and the distributions we are talking about lead to significantly different speeds on one of the 4 cores... Have you looked at the spacing on AMD's dual core design (between cores?) How much linear difference is there vs the quad core? Have you looked at a sort plot? Yeah the speeds are different center to edge but that's 100-150mm apart... how much speed variation do you see on adjacent dual core dies?
Most of your statements have a fair degree of truth to them but you appear to take them to the extreme and make absolute statements in an effort to discredit AMD's approach. I'm no AMD fan and don't agree with their approach on quad core, but most of your statements are stretching things quite a bit and don't have any supporting data.
Yo expert, why don't you tell us the real story, since you are so good at trying to agree yet not agree... what's your real point?
"Yo expert, why don't you tell us the real story, since you are so good at trying to agree yet not agree... what's your real point?"
My real point is that the issue with AMD is thermals... this crap being spewed about a bad 4th core leading to tri-core, or the quad core being limited by one magic slow core, is an unsubstantiated myth that for some reason people keep perpetuating. While this may apply to some die, generalizing it into the key limiter is random speculation. Does anyone have any data?
When you look at the size of the quad die, you just don't have that much GHz variation between cores in that small a space (again, with the possible exception of the very edge). AMD is on a cliff with power, and their choices with quads are either to produce lower clocks to fit in the TDP windows, or to turn one core off for a really bad power bin and make it a tri-core. The idle power comparison between like-clocked tri and quad cores supports this.
If it were simply disabling a really bad core or leaving out a non-functional core, then the tri-core should have a measurably lower idle power than a quad (from the limited data on the web this does not seem to be the case). Also, if quad cores were being limited by one bad core, then the tri-cores should be coming out a bin or two higher than the quads... which also is not the case. And if 65nm were just an issue with defects, or if the problems were K10 design related, then you would have seen higher-clocked K8's by now (which you also don't). AMD's problems are 65nm thermals, which are exacerbated by the K10 design.
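As a sanity check on that logic, here is a small sketch. The per-core fmax distribution is made up; the point is only what the order statistics imply:

```python
import random, statistics

random.seed(2)
MU, SIGMA, N = 2.3, 0.08, 100_000  # hypothetical per-core fmax distribution (GHz)

quad_bin, tri_bin = [], []
for _ in range(N):
    cores = sorted(random.gauss(MU, SIGMA) for _ in range(4))
    quad_bin.append(cores[0])  # quad limited by the slowest of all four cores
    tri_bin.append(cores[1])   # tri-core = slowest core fused off, so it is
                               # limited by the second-slowest core

print(f"quad mean bin: {statistics.mean(quad_bin):.3f} GHz")
print(f"tri mean bin:  {statistics.mean(tri_bin):.3f} GHz")
```

If the 'disable the one bad core' story were the whole explanation, tri-cores should consistently ship a bin or so above like-generation quads; the actual SKU lists don't show that, which is consistent with the thermal argument.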
So you can throw out all these great terms to show why you have center-to-edge variation (which to a large degree is true), but it doesn't apply over the space of a single die; it applies over (relatively) large distances.
Using the words Gaussian and Boltzmann, or talking about the physics of angular velocity, doesn't make it any more right or applicable to the issue at hand (which is local variation within a die, not global wafer variation).
Oh, and the other point is there is more to yield than random defects - even 'competent' companies know this. (And apparently even arrogant posters may now see this.)
Yo, is that any clearer?
SPARKS said...
...specifics on what they do and how they interact with timings/voltage on the board and on the CPU?
CPU GTL Voltage Reference (0/2)
CPU GTL Voltage Reference (1/3)
I'll assume that we are balancing individual die or core voltage(s) in pairs. The parameters are ratios set in 0.005x increments. Why would this enhance stability?
I would suggest you not touch these parameters and just use the defaults if you are not changing the FSB frequency. See the link below for what they do.
http://www.thetechrepository.com/showthread.php?t=87
The A/GTL+ inputs require a reference voltage (GTLREF) which is used by the receivers to determine if a signal is a logical 0 (low) or a logical 1 (high).
Btw, just a bit curious: what is the range of these parameters? What is the default value that you are seeing?
“I would suggest you not touch these parameters and just use the defaults if you are not changing the FSB frequency. See the link below for what they do.”
Pointer-
I've got the thing up to 1800 FSB (9.0 x 450 MHz = 4.05 GHz), memory running synchronous at 7-7-7-21. Rock stable 24/7, the QX9770 at Vcore 1.408 @ 54C. I would like to go higher; obviously you and I are on the same page.
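For anyone following along, the clock math works out like this (standard quad-pumped FSB arithmetic):

```python
# Quick sanity check on the overclock numbers above:
bclk = 450        # base FSB clock, MHz
multiplier = 9.0

core_ghz = bclk * multiplier / 1000  # 4.05 GHz core clock
fsb_mts = bclk * 4                   # quad-pumped bus: 1800 MT/s, the "1800 FSB" figure

print(f"core: {core_ghz:.2f} GHz, FSB: {fsb_mts} MT/s")
```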
“What is the default value that you are seeing?”
Presently it is set to 'Auto'.
Ranges are (0/2) = 0.370x to 0.760x with a 0.005x interval.
Ranges are (1/3) = 0.410x to 0.800x with a 0.005x interval.
I suppose the lower numbers are default. I haven't monkeyed with them, just making inquiries.
Thanks for the link! Any more info would be appreciated.
SPARKS
"Yo, is that any clearer?"
LOL
SPARKS
“I would suggest you not touch these parameters and just use the defaults if you are not changing the FSB frequency. See the link below for what they do.”
Pointer-
Great link, buddy! I can't believe that is an INTC document!!!
READ THIS:
“Intel processors and chipsets have split power planes that allow setting the I/O operating voltage (VTT) to an independent fixed value even though the CPU may be operating at a higher core voltage (VCC). As overclockers we can use this to our advantage.”
“As overclockers we can use this to our advantage.”!!!!!!
Who said INTC was full of stuffed shirts, horseshit! Go Chipzilla!
X48 QX9770 Nothin’ even comes close, baby.
HOO YAA!!!
SPARKS
SPARKS said...
I suppose the lower numbers are default. I haven't monkeyed with them, just making inquiries.
Per the same link, the normal value should be two-thirds, thus 0.67x. You can find this value on other forums too. For the QX9770 this ratio might be different, but it won't be far off. You can experiment around 0.6xx and do a shmoo plot (GTL reference vs. working FSB frequency) to look for the optimal value.
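To make the arithmetic concrete, here's a tiny sketch. The 1.2V VTT is an assumed typical FSB termination voltage; check what your board actually supplies:

```python
# The AGTL+ receivers compare each bus signal against GTLREF = ratio * VTT.
VTT = 1.2  # assumed FSB termination voltage in volts (board-dependent)

for ratio in (0.630, 0.670, 0.690):
    print(f"{ratio:.3f}x -> GTLREF = {ratio * VTT:.3f} V")
```

Raising the ratio raises the logic threshold the receivers switch at, which is why a small nudge (0.63x to 0.69x) can recenter the signal eye as noise and ringing grow at 1800+ MT/s.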
Pointer-
That’s what I love about this stuff, more, better, faster and best of all, another learning curve to ride.
Incidentally, you were right about the default voltage. I took the GTL setting out of default; the nominal settings were listed in the help bar at 0.630, etc. In a perfect world, I would love to check the voltages with my Fluke 45, along with my scope, against the voltage waveforms/peaks mentioned in the article. I would also love to know where these test points are on the MOBO. (I'm sure Mr. Fugger, with the help of INTC Engineering, knows where they are.)
Obviously, at 1800 FSB and above, there is a very narrow window for getting a clean, noise-free DC component. The noise seems to rise exponentially the closer you get to 2K (like anything else when dealing with waveforms).
In any case, an increase from 0.630 to 0.690 seems to be the sweet spot for 1800+ with the least amount of risk. The best Vcore voltages seem to be in the 1.45V to 1.475V range with a 9.5 to 10x multiplier at the CPU. However, I'm not going there on air, that's for sure.
By the way, visuals and speed have improved dramatically in Crysis.
Head shots are cake!
MS FS X is smooth and beautiful.
Thanks a lot Pointer, I really appreciate it.
Clock till ya rock.
SPARKS
Jeeze, it’s kinda quiet today.
I sure hope everyone has their guns holstered.
Anyway,
Last year the Anal-ists beat the life out of the chip sector. Every time we saw significant financial gains, there was some self-appointed financial genius knocking the sector back on its ass, on a quarter-by-quarter basis.
I would like to know what these bastards were doing before the financial institutions, Bear Stearns for example, were ready to cave in.
The chip sector not only survived the 4-month financial onslaught relatively unscathed; 'bellwether' INTC has bounced back brilliantly. 3 bucks on the week!
This will only get better as 45nm margins improve into the third quarter and Atom processors pound out an additional half a billion in revenue - possibly, from what I read, 4 cents a share. I can envision this miniature, power-sipping processor in everything from paper-thin laptops to 'Intelligent Mechanical Phallus'.
This way, Anal-ists could utilize them to get a very special wake-up call.
SPARKS
Yo Sparks,
Lehman had a positive note out today and the stock peaked above 25 before closing just a hair below.
I think you have it right: 45nm ramping big time, Atom with huge revenue per wafer due to the tiny die, and INTEL is poised for a breakout!
AMD, well that's another story. When you've got no technology, no design, no manufacturing, and need to double your market share just to continue, that is BK I say.
What do you think, Scientia, Sharikou and Rick Geek?
Did I call it or what?
Tick Tock Tick Tock
“INTEL is poised for a breakout!”
INTC peaked at 27.73 on 12/3/07, as I predicted somewhere on this blog during the third quarter of 2007 - until, of course, the mortgage meltdown. I am by no stretch a professional in finance; I didn't see the mortgage leverage fiasco coming at all. However, neither did the professional leeches/bloodsuckers who call themselves anal-ists. So much for their 'professional opinions' - that and two bucks gets you on the subway.
I’ve previously mentioned my friend who works on the “floor” at the exchange. In January he kinda/sorta poked fun at INTC’s performance. Now he tells me making money in the financial sector is “like running around the Long Island Expressway during rush hour scrambling for nickels and dimes.”
This is a guy who has two laptops, trades on the floor with a tablet, has four desktop machines in his house, and another laptop on the yacht out at Montauk! I asked him if he saw a trend here!!!
People want cheap computers, yes.
People want fast computers, yes.
People want computers that can do everything, yes.
People want cheap computers to do all of the above, naturally.
People want their cake and eat it too - not so fast.
Fusion, fission, implosion - I think many people wrongly assume that bread-and-butter eMachines boxes are the true future money makers in the industry. Sure, they're good enough for the other guy, but 'ME, I want some performance and functionality in MY machine'.
They always talk about the other guy. What they fail to realize is that they are the other guy from my perspective. (I'm the real lunatic.) Everyone considers performance as a factor when committing to a purchase, everyone. No one wants to intentionally settle for and live with a dog.
Why? Your kids use them. Their friends have them. We adults can no longer live without them. And all of us want a really good machine, especially the kids. They’ll pay a few extra bucks. Dell learned this the hard way, and so will AMD.
Performance is king, and it’s gonna cost ya.
INTC is wearing the crown. Breakout is still, in my opinion, an understatement. We were just held up by high performance mortgages, not processors.
SPARKS
Sparks, the days of Intel's good stock growth are gone. They are a big company, which helps from a stability standpoint when the market tanks, but it also means it will be hard to move the stock price a lot unless the buybacks increase in magnitude.
I think the stock should run a bit in H2'08, but even 45nm is not going to move the needle much (Atom in a year or two may be a different story). The main problem is that people want to whack the chip stocks when things are below expectations, but when things are above, you only get a gradual rise, as Intel is no longer a momentum stock like a RIMM or Google or Apple.
As computers continue their march toward being a commodity business, you just won't see explosive growth.
As for the financial mess - with hindsight there were plenty of indicators; I think people (including myself) just chose not to look. You had financial companies levered at over 30:1, which in hindsight meant a significant slide could easily cause a run on the bank, like what happened at Bear Stearns. You had an increasing number of homes being bought with NOTHING down, no actual checking of mortgage applications (I've heard as many as 1 in 3 subprime loans had lies in the application), and you had people buying beyond their means, assuming they would be able to flip the home in a few years. It's frustrating to hear all the politicians line up to say we need to help the millions of foreclosures on the horizon, with no mention of how many of these foreclosures are second homes being used as investments, homes well over 750K, or mortgages where the applicant lied about their income to get the loan - I for one don't want to bail any of those cases out.
“Sparks, the days of Intel's good stock growth are gone”
Agreed, however back to the low thirties wouldn’t be unreasonable if they got the margins up to traditional levels. (call me an optimist)
“You had financial companies levered at over 30:1”
Bookmakers and shylocks run a better business model!
“I for one don't want to bail any of those cases out.”
Word. I don't like to admit this, simply by association, but I personally know 3 people whose eyes were bigger than their stomachs. One couple in Florida simply packed up and walked out. Their mortgage ballooned from 2,400 a month to 4,000 a month. FLORIDA, no less! They had nothing into it (no down payment). They just walked. My response to them was 'you play, you pay'.
No sympathy here either.
SPARKS
anon: "AMD, well thats another story. When you got no technology, got no design, got no manufacturing and need to double your market share just to continue, that is BK I say"
Isn't it really worse than that? As I understand it, AMD was saying that they need to double their REVENUE share, not market share, in order to survive. In other words, unless Intel lets up on the pricing pressure and "allows" AMD to raise prices on its own CPUs, they will have trouble surviving.
I tend to feel that having one company control the market leads to slower progress and higher prices, yet here is a company basically saying that unless they and Intel are allowed to raise the cost of computing, they (AMD) will have trouble surviving.
So it's something of a paradox-- we want AMD to survive because it means lower prices, but for AMD to survive we need higher prices. That makes me think of that scene in No Country For Old Men, where Chigurh tells Wells "If the rule you followed led you to this, of what use was the rule?"
"I tend to feel that having one company control the market leads to slower progress and higher prices"
I'll play devil's advocate on this one as I hear this constantly repeated... how does Intel grow, if they have a dominant market share and they slow their progress and raise prices?
Markets forces will react - businesses will slow down their upgrade cycles if there is less reason to upgrade. Most casual/home users will slow their purchases as well.
The high end enthusiasts will feel the main pinch as they are the 'captive' market and have probably the most price inelasticity (they will pay the extra $XX, even for marginal performance gains). Even now where Intel appears to be in a more dominant position what is happening on the price front? The top bin parts have gone up but the mainstream is flat to down; Intel has continued on a 2 year node scaling despite ever increasing R&D costs. When you have that much market share you have to grow volume... raising prices and slowing innovation doesn't enable that.
I wouldn't expect them to slow their progress or raise prices to a degree that would harm their growth. But surely a company without competition has greater freedom to determine the pace of progress and the pricing on their products.
"But surely a company without competition has greater freedom to determine the pace of progress and the pricing on their products."
This is a nice academic argument, but look at the rate of progress in the semiconductor industry and stack it up against ANY other industry. At this point Intel is more constrained by overall x86 market growth than by AMD. If Intel exercised its pace-of-progress freedoms and slowed things down, it would take a hit on the stock price, revenues, etc...
The US Post Office could jack their prices up to $0.80 for a stamp, but do they? The market would correct for a company taking greater freedoms.
I know you are not arguing the standard 'we need AMD for competition', but that is a bunch of crap - if you believe in Intel's dominance, then you have a weak company in AMD which is simply enabling that dominance...either get better or fold. Perhaps a better outcome would be for AMD to collapse and get a more viable competitor (or alternative technology).
Propping up or subsidizing companies or trying to restrict larger companies (through regulation, lawsuits, etc...) is nearly always a hugely inefficient market solution - let the market forces work and sort it out (barring illegal activity). Simply being big or having a large market share is not illegal or 'evil', but that is what most people associate it with.
Correct, I am not saying that AMD needs to be propped up in order to "keep Intel honest" or anything like that. While I believe that having one company controlling the market is worse than having two or more companies competing, I don't think that we should create artificial competition by trying to cripple the dominant company in any industry.
My post was an example of why this is a bad thing-- AMD is, in essence, telling us that in order for it to survive, the consumer must bear the burden of higher prices across the board. If one benefit of competition is supposed to be better prices, and AMD is promoting an approach contrary to that, then there's no longer a benefit to having them 'compete.'
I've seen some forums where the attitude is to promote purchasing AMD CPUs, not because they're necessarily better, but because it might just help keep the company above water. And I keep thinking, why would I want to pay for an inferior product, in order to support a company making an inferior product, in the hopes that it might one day produce a superior product?
Why not just buy the superior product?
AMD at one time was a good influence on the industry, but somewhere along the line they got confused and thought their existence was justified just because they competed with INTEL - that they somehow deserved special treatment, kid gloves. Sorry, Hector, you need to compete.
When they had a great product, what happened? They won market share and actually made money. When they had a crap product, what happened? They lost money. Given that their competitor is huge, with a big war chest and a technology lead that gives them a leg up, perhaps they should compete in another industry? Why do they think they deserve to continue in an industry where their business model is BK?
INTEL will innovate; they have to. They invest billions in R&D, in new products, and even more billions in factories. If they can't get people to buy 300 million CPUs and chipsets, then they've got empty factories. Better to innovate than to sell the next generation at a cheaper price and at a loss. Expect INTEL to always offer more each generation - it simply has to.
INTEL as a monopoly is like MS: in the end they too have to innovate, or face rebellion, of which you already see a little. At some point people won't take it if it doesn't add value. Look at Vista as an example!
Nah, AMD ass-lite is more like ass-stupid. In general competition is good; in the case of semiconductors, the doubling of transistors every two years simply puts the business in a different space.
AMD isn't needed they can go
tick tock tick tock to BK
Here's a ray of hope, by the usual suspects, I might add.
To me it sounds like they're blowing a ray of sunshine up the market's ass.
Some of the analysis is contradictory.
http://www.theinquirer.net/gb/inquirer/news/2008/05/16/early-read-rites-amd-analysts
SPARKS
I found this innovative article about AMD's secret plans:
http://www.geek.com/making-the-case-for-zram-an-interview/
What a joke - it's been two years now. AMD should have had time to run this on two or even three successive test chips, then a product, with a good year to productize. Damn, it's two years - a full technology node cycle.
Where is this wonderfully reduced memory cell that will increase AMD's on-die memory - the one INTEL can't do because it doesn't have SOI?
Perhaps AMD wasted all this time on multiple steppings and forgot to pay attention to Barcelona and Phenom?
Guess what, innovation takes years and years, it doesn't fall out of the sky by licensing some technology from some other company.
It takes years to bring SOI, low-k, strained silicon, and high-k/metal gate from the lab to products. NOTE: AMD can't do any of it on their own. And now that they are even more cash poor, how can they afford the innovation needed for 32nm? Look at them: their 45nm is late and a full generation behind INTEL. Sure it's 45nm with immersion, but doesn't immersion just mean they can't figure out how to use far cheaper dry lithography, and instead have to resort to much more expensive, complex wet litho to do the same thing others can do dry?
Tick Tock Tick Tock AMD going BK
"Where is this wonderfuly reduced memory cell that will increase AMD on die memory that INTEL can't do because it doesn't have SOI?"
Well, for one thing there were concerns over speed... not sure if things have progressed, but I believe at the time it was questionable for an L2 cache application (and certainly not fast enough for L1).
There are also alternate high-density approaches that people in the industry have been looking at for some time (instead of the traditional 6-transistor cell). It was probably worth a shot when AMD licensed it - this is pretty common; you have to take a shot while things are still in the research phase, when the tech can be had relatively cheap. If you wait until it gets further along, the price tag on the license goes way up.
Again, AMD spun this a bit, but it was more the press (and the fanboys) who talked it up without knowing the issues and complexities of bringing it to market... or the fact that there were other techniques that could achieve similar packing densities (like a 1-transistor, 1-capacitor cell).
It gets back to the, well it's a potential competitive advantage, the 'underdog' has it, and it gives us a chance to stir up the pot. And of course, you have a ton of non-technology people doing analysis on a press release without understanding the technology. Couple that with laziness, why not talk to some folks who are literate with the technology or may have an alternate view, and you have a lot of hot air. A few sites tried to ask some questions (I think digitimes was one of them), but most just reprinted and spun what was spoon-fed to them.
The tech may prove viable - it's hard to say how much it has advanced since the announcements several years ago, but it was clear at the time that it was pretty far from commercialization, and quite frankly 2 years is not much for something this disruptive. (That's why I thought it was ridiculous when I heard some folks saying maybe K10, maybe 45nm, at the time of the announcement.)
eDRAM makes far more sense coupled with a fast SRAM cache for higher density; those are two well-understood and widely used technologies. INTEL once did DRAMs, as did IBM, and IBM recently had some interesting papers about eDRAM. It'll be interesting to see if someone combines high-density eDRAM at the 3rd level with 2 levels of fast SRAM.
Sure it'll take probably another 5-6 extra masks but think about the potential.
Tick tock the clock is ticking down...
IBM recently had some interesting papers about eDRAM.
Which, unfortunately, tells us nothing about its suitability for HVM.
LOL,
Yeah, that's the same company that invented DRAMs but doesn't make them anymore,
invented the hard drive and sold off that business,
invented the PC (kind of - they selected the CPU and OS) and gave away the golden goose to INTEL and MS.
Didn't they once have the best laptops in those ThinkPads? They sold that too.
If it involves cut-throat engineering, tough manufacturing, and efficiency, don't expect it from IBM.
If they invent it, expect them not to make a dime from it.
Tonus' case for maintaining AMD's sustainability - basically, regulating the x86 CPU industry - is an interesting supposition.
Intel would need to structure its products' price/performance dynamic in cooperation with AMD to ensure both AMD's and INTC's profitability and survival. The end result would be threefold:
Higher margins for AMD.
Higher margins for INTC.
Higher prices for the entire market, ultimately, us.
IF (with the strongest emphasis) this were possible, do you think the FTC and other regulatory bodies would conditionally sanction price fixing across the entire CPU market? Who would mediate the price structure, and set the criteria, and at what prices and margins? And after such regulation, who would approve or disapprove future increases if one or both companies failed to meet the regulated margins?
What, then, would channel customers, OEMS, and distributors do in such a scenario? What about their margins, profitability, competition, and business model?
Where and how do the other related manufacturers fit into all this? Anyone with an x86 license would, by this logic, qualify for the same 'sustainability structure'. Why should the market favor AMD and not VIA, for example? The lawyers would be crawling out of the woodwork like cockroaches.
Would all the foundries raise their prices accordingly? The idea being, 'Well, if the CPU market is regulated, we could step up our prices a bit.' The question is, where would it end? Additionally, there's the whole CPU supply chain to consider: tooling, materials, chemicals, etc.
Where would NVDA fit into all this? As the only GPU competition, wouldn't they need to be included in this scenario too? Helping AMD with its CPU marketing woes indirectly helps nearly half of the GPU market by default. (Unless, of course, they were to ironically force ATI to be separated from AMD.)
Look, ALL of this is 180 degrees from what the Carter Administration did when it DEREGULATED the airline industry. After nearly 30 years we are still feeling the aftershocks. However, we as consumers did benefit from lower prices, which was the ultimate goal, despite over a quarter century of airline industry bedlam.
What I would remind everyone is that this is called price fixing. They don't even do this with GOLD! The memory companies tried it, twice. The first time, legally: they nearly commoditized RAM, just like soybeans. It didn't work. The second time, a few years back, illegally: they were sued and paid dearly in nearly every state in the USA. They were blasted in other countries, too.
The moment AMD signed the ATI cataclysm, their fate, and the dynamics of the entire industry, were set in motion. I'm afraid there is no going back. (Actually, the FTC and the SEC should NEVER have allowed that purchase to go forward.) The whole industry, AMD included, would have been better for it, and we wouldn't be in this conundrum today.
In a free market economy there isn't room for price fixing, not in the interest of saving one foolish company. If companies like Enron, WorldCom, Global Crossing, and Bear Stearns can go belly up, then there's plenty of room for AMD.
After all, we're only talking about a few billion in comparison to hundreds of billions - peanuts!
In the final analysis, there are no entitlements in business, especially at the ultimate expense of the consumer. AMD has screwed the pooch; the market will adjust. Governments, politicians, and regulatory bodies with agendas should stay well clear of this disastrous price fixing of the CPU/semiconductor industry, all in the interest of saving 'The Scrappy Little Company's' ASS.
SPARKS
sparks: "Tonus’ case for maintaining AMD’s sustainability, basically, regulating the x86 CPU industry, is an interesting supposition."
Oh, I wasn't making the case that the industry should be regulated. That was what AMD was insinuating, when they complained that they needed to double their revenue share in order to remain a viable business. If they are capacity constrained, but only making half as much money as they feel they need to make, then they are by extension asking for price controls.
Which is why I pointed out that this goes against the idea of a competitive market. A competitive market is supposed to lead to lower costs, increase progress, or both. By implying that they need relief, AMD is asking the government (or the court) to allow price-fixing.
And my point was, if the results of 'competition' is actually worse than the alternative (having one company in control of the market), then I'm thinking that the alternative isn't so bad.
"If they are capacity constrained, but only making half as much money as they feel they need to make, then they are by extension asking for price controls."
In fairness to AMD, they are alleging Intel operated outside the law to constrain AMD (which has yet to be shown). What AMD is (smartly?) trying to do is make an emotional argument that they can't survive without double the revenue share. The question, however, is: did Intel prevent this from happening through illegal means?
Simply competing aggressively and preventing your competitor from gaining market share is not an abuse of monopoly. AMD focusing on what they need to sustain a viable business is a subtle plea to the US (and other) governments to do as much as they can to enable it (you know, to prevent the big bad monopolist from taking over).
Quite frankly, what AMD needs to have a sustainable business is more or less irrelevant to the case - the only question is whether Intel abused its position in the industry to artificially suppress AMD's market share. What if the number were 50%? What if it were 10%? The question is about Intel's alleged actions... AMD is trying to set a threshold market share below which, by default, a monopoly must be taking over (or soon will). This is a very smart point from a PR perspective but an intellectually bankrupt one.
Look no further than US politics, where you have Barack Obama and Hillary saying we need to get the big bad oil companies who are gouging Americans... out of curiosity I looked up some numbers, and ExxonMobil has a profit margin of 16.5% - healthy, sure, but is that gouging? (Earning 16 cents on the dollar.)
Time Warner has a profit margin of 18.7%... I must have missed Barack's outcries against the entertainment industry, who are robbing people (I say robbing because if ~16% is gouging, then what do you call 18%?).
And in the irony of ironies, First Solar has a profit margin of over 30% (nearly DOUBLE that of Exxon)... What are we doing to prevent these big, bad, evil solar companies from gouging people? Imagine how much more pervasive solar would be if they cut their prices and didn't gouge their customers...
Food for thought. (I'm being a bit sarcastic in order to show how people can spin an argument.) That is, when we're not burning 25% of our corn to subsidize one of the most inefficient ways to produce ethanol.
Tonus, I realize that, and I wasn't pointing my comment at yours. The supposition was theirs; I knew it wasn't yours personally. Even if it WAS, it's a GREAT point for discussion!
Quite the contrary: in AMD's current financial/market position, this would be their equivalent of Valhalla. A decision of this magnitude would negate every monumental failure they've made during the past two years. It would reward failure; they could maintain relative market position and be assured success at the expense of the industry and the consumer. This would be protectionism at its absolute worst, with far-reaching, unforeseen consequences.
This was my point; in fact, I'm happy you brought it to the floor.
“(having one company in control of the market), then I'm thinking that the alternative isn't so bad.)”
Actually, let me ask you something. Would it not be easier to monitor one company’s price gouging tactics, than to regulate an entire industry to protect one company? Here’s where we differ, perhaps.
Thanks
SPARKS
sparks: "Actually, let me ask you something. Would it not be easier to monitor one company’s price gouging tactics, than to regulate an entire industry to protect one company?"
Disclaimer: Take the following with a grain of salt, I'm going on a lot of assumptions and pure guesswork here...
I wouldn't want there to be regulation for the sake of protecting one company, as much as for the sake of protecting consumers, or an industry. If there is no need for regulation, I'd just as soon see no regulation, either.
I like what we have here in the USA, or at least what we strive to have. Industries where competition is encouraged, with no more regulation than is needed to make sure that the public is protected (ie, safety standards, environmental impact, checking worker abuse, etc). That may not be what we get, but I think it's a good ideal. This also means that if one company emerges as a dominant force in any industry, we accept that, making sure only that they don't abuse their position.
The CPU industry has a very high cost of entry, but that's just how it is. How do you regulate an industry where you must make multi-billion dollar investments and squeeze every last dollar out of them before they become obsolete, apparently within just a few years? I wouldn't want to ask the government (of all people) to make sure that Intel was being efficient!
I think that there came a point where AMD had a decision to make-- would they accept staying at the low end and trying to earn a comfortable living producing "budget processors", or should they take the big risk and go toe-to-toe with Intel? They decided to go for it all, and I can't blame them, because it's a really big prize. And for a short time they seemed to be playing the game far better than Intel was.
But size and economics and the reality of the market have caught up with them, and they're left in a bad spot. Do they accept that they've worked themselves into a corner and try to become that bargain basement CPU company? I doubt that they could do that without sucking out whatever value is still left in the stock, and the head guys would be run out of town on a rail. And I think the ATI purchase is where they crossed the point of no return.
They have no choice at this point but to try and become something that it seems they won't be able to become, because they lack the capital and apparently because they don't have the product. It seems as if they need to pull a rabbit out of a hat at this juncture. The admission that they're way short on the necessary revenue share does strike me as a ploy (as one of the anon posters mentioned), but it's also a remarkably candid thing to admit. In a country like this one, which despite its liberal approach to market dynamics still prefers its capitalist approach, admitting that you just can't make it without a serious increase in revenue is tantamount to cutting your own throat.
I think Ed at overclockers said it best-- this company needs a bailout, because it does not seem as if it can survive of its own accord. And I'm wondering if there's anyone who would want to try and bail it out at this point.
“And I'm wondering if there's anyone who would want to try and bail it out at this point.”
Whoa, that’s a tough one. Let’s go this way.
You and I are BIG capital investment, Tonus Sparks Capital Management.
We have 100 billion or so, and we're looking at companies. Say we take 6 billion to get AMD out of debt. We would first have to ask INTC for permission. Then we would need to spend another 7 or 8 billion to purchase the company; we would have to buy the shares from the other investment groups. AMD is leveraged far over 80%, and it will not be at 7 dollars a share.
Now we would have to go to work on getting AMD current, technologically. Modernizing existing FAB’s and/or building new ones, more untold billions.
When we're done, what do we do?
Go right back to competing with NVDA and INTC, once again.
Listen, partner, what do you say we take our money and invest in corn or rice futures instead? Maybe we should buy some solar power manufacturing? What do you say, buddy?
SPARKS
From the horse's mouth:
http://www.eetimes.com/news/semi/showArticle.jhtml;jsessionid=5GOOIHAXHIU4CQSNDLPSKH0CJUNN2JVN?articleID=207801086
"We want to use capital in a more efficient way instead of building a factory by ourselves," said Ruiz.
Asset Smart? What's kind of funny is that the statement somewhat implies AMD would not be able to fully load the factory on its own (otherwise why would it be more efficient with partners?).
Ass smart
What Ruiz means is that he needs to find a partner that can make something in a shared fab that can be sold for some real $, so they can get the investment back.
AMD CPUs are loss leaders at the moment; they can't pay back the design costs, the test costs, the manufacturing costs, the amortized silicon R&D, and most of all, all the lawyer and executive pay.
They need to find some high-margin parts. Damn, maybe INTEL will partner with them, LOL.
Attila The Anon-
Was it you who asked:
“WTF is AMD going to do?”
http://www.mini-box.com/Intel-D945GCLF-Mini-ITX-Motherboard
This little sweetie cooks only 39W @ full load, IGP inclusive!
The form factor is 6.75 in. X 6.75 in., the size of a dollar bill!
Did you see the power supply module on the site above?!?!
80 bucks for the board! OMG!
WTF is AMD going to do?
SPARKS
AMD’s new game.
It seems AMD can't figure out that, in the end, it's the technology that sells.
Build great CPUs, graphics and chipsets and integrate it all and people will come.
Instead they have no CPU and no graphics, but they've got "game".
Oh, did I mention they are bleeding cash and have no technology roadmap to put them back in the "game"?
Perhaps they are confusing that with the fact that they are "game" for INTEL, and INTEL has caught them and is in the process of roasting their goose. At 32nm they'll be fully cooked and finished.
Got game, give me a break.
Good god, what the hell is AMD going to do? They aren't even close to getting 45nm samples, let alone volume, and here comes INTEL dropping the hammer again with their 45nm and very mature 65nm process. How, pray tell, does Hector think he can return to profitability with prices dropping like this?
http://blogs.zdnet.com/hardware/?p=1898
Here are some of the highlights from the rumored price drops:
Core 2 Quad Q9550 - New price: $316 | Old price: $530
Core 2 Quad Q6600 - New price: $203 | Old price: $224
Core 2 Duo E8500 - New price: $183 | Old price: $266
Core 2 Duo E8400 - New price: $163 | Old price: $183
Core 2 Duo E7200 - New price: $113 | Old price: $133
New CPUs
Here are the new processors you can expect:
Core 2 Quad Q9650 (3.0GHz, 45nm Yorkfield, 1,333MHz FSB, 12MB L2) - Price: $530
Core 2 Quad Q9400 (2.66GHz, 45nm Yorkfield, 1,333MHz FSB, 6MB L2) - Price: $266
Core 2 Duo E8600 (3.33GHz, 45nm Wolfdale, 1,333MHz FSB, 6MB L2) - Price: $266
Core 2 Duo E7300 (2.66GHz, 45nm Wolfdale, 1,066MHz FSB, 3MB L2) - Price: $133
Tick Tock Tick Tock
UMPC portal posted a piece on the Atom processor here.
Atom 1.33Ghz: 1159 (Normalised 0.87/Mhz. Recalculated to 1.8Ghz, approx 1560)
Celeron 630Mhz: 997 (Normalised 1.57/Mhz. Recalculated to 900Mhz, approx 1413)
This is a telling set of figures because it shows the result that we've been expecting: clock-for-clock, the Atom processors are less powerful than the older Celeron/Pentium devices, but at 1.8Ghz the Atom Silverthorne processor should be about 10% more powerful than a 900Mhz Celeron.
Is this good? Is this an advancement of processor technology? You might look at the results and say 'No' but there's one important element that has to be taken into consideration: power usage. The 1.8Ghz Silverthorne/Poulsbo combo will return these figures with a platform TDP of about 4.5W. The Celeron at 900Mhz would require a platform with a 10W TDP. That's a 50% improvement in platform efficiency and that's exactly what we need to see for handheld Internet and productivity devices. (emphasis added)
I expect to see a rash of reviews after June 3rd as the designs roll out in force.
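As a quick check on the normalization arithmetic in that quote, here's a small sketch (the scores and clocks are taken from the quoted piece; the assumption that a benchmark score scales linearly with clock is theirs, and rough at best):

```python
# Sanity check of the per-MHz scaling in the UMPC Portal quote above.
# Assumes benchmark score scales linearly with clock - a rough approximation.

def project(score, measured_mhz, target_mhz):
    """Normalize to points-per-MHz, then scale linearly to the target clock."""
    return score / measured_mhz * target_mhz

atom_1800 = project(1159, 1333, 1800)    # ~1565; the quote rounds to ~1560
celeron_900 = project(997, 630, 900)     # ~1424; the quote says ~1413

print(f"Atom @ 1.8GHz (projected):    {atom_1800:.0f}")
print(f"Celeron @ 900MHz (projected): {celeron_900:.0f}")
print(f"Atom advantage: {atom_1800 / celeron_900 - 1:.0%}")   # ~10%
```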
So my question is "where is AMDs bobcat?" The MID/UMPC market seems to be shifting into high gear without AMD.
I'm sure some sort of Intel anti-competitive action is responsible for the lack of a bobcat processor. :P
As an aside, if you go to the article at PC watch, you will notice that the PC board in the case is red. Red soldermask is typically used to designate prototype boards.
So depending on how close the board is to the final design, there may be a bit more power/performance to squeeze out of the design.
"So my question is "where is AMDs bobcat?" The MID/UMPC market seems to be shifting into high gear without AMD. "
This is very similar to what happened when Intel launched Centrino. While a mobile market existed at the time, Intel made significant headway in driving mobility and capitalizing on the momentum. AMD tried their best to shoehorn an Athlon into a laptop but could not make it work until they came along with the Turions.
AMD was pretty much absent from the laptop/notebook space for a long time ...
Similarly here, AMD really has nothing to counter or confront in this space within the same specs... Geode is horrendously insufficient to the task, and the worst part of it is it hits AMD where they have their last bit of success (volume wise, not profit wise) ... the ultra-low cost, ultra low end...
In The Know-
I've got to be honest with you. Given my penchant for high end juice, initially I wasn't too impressed with a processor/microbe like Atom.
Ok, I'm an idiot, no doubt.
I am happy to say I've come down from GORILLA QX9770 intoxication (for a while).
I WANT ONE OF THESE HANDHELDS!
http://www.youtube.com/watch?v=2IbScl8csNg&feature=related
SPARKS
"and here comes INTEL dropping the hammer again with their 45nm and very mature 65nm process."
Well, this makes perfect sense when you consider Fudzilla's (and, at one point, the INQ's) comments about Intel's 45nm production problems. It should be obvious that when you are having production problems and supply is short, you lower prices and start EOL'ing some of your previous generation processors...
OK, sarcasm now over...
Intel focused initial 45nm output on server, then mobile, and finally desktop. Mobile will be largely converted by the end of the year (server already is largely converted) - so perhaps the short supply on desktop was high demand coupled with Intel's focus on the server and mobile space? Perhaps Intel focused the more energy efficient processors on the spaces that care a bit more about energy efficiency (and coincidentally enough have higher margins).
I'm not saying there couldn't have been production problems, but for websites to report them based solely on shortages in one area of the x86 space (desktop) - perhaps that is more a shot in the dark than a conclusion...
Hmmm....could it be that I got lucky and guessed right?
Fudzilla is reporting that all that memory for Nehalem is going to require an 8 layer board due to all the memory tracing.
I'm sure the call had nothing to do with oh, I don't know, experience in the field. We all know how over-rated that is.
For a good laugh, Tom Yager...
So much for the tri-core dominance over dual core.
Got Game?
Naah..
Game over.
http://www.tweaktown.com/reviews/1427/intel_core_2_duo_e7200_budget_penryn/index.html
SPARKS
"I'm sure the call had nothing to do with oh, I don't know, experience in the field."
Using experience and knowledge to make predictions? What are you thinking!?!?
It is far better to simply read stuff on the web, pick out some data points and simply extrapolate to the point you want to get to. You have to start with your conclusion first, then try to backfill data around it. Why would you want to use actual fundamental knowledge and experience?
Oh crap... sorry I'm on the wrong blog... oops!
Quite frankly the early adopters of Nehalem are rather price inelastic... this is only a real issue to Intel if it is a fundamental issue that can't be overcome with time (any thoughts intheknow?) when Nehalem goes more mainstream, probably not until at least mid-2009.
Based on his tone and writing style, it is my opinion that he might be Sharikou himself, although Sharikou commented in the very same blog. :)
To clarify a bit, there is no "production" problem with 45nm chips. It is simply a capacity problem at this point (ya, I know, still production ;) ). All original prognostications were built on the assumption that F28 was originally going to begin ramp in late Q1'08. That obviously didn't happen. It is merely logistical, but the fact remains that until another fab comes online, coupled with the high demand, there will be 45nm chip shortages.
That's an interesting link, enumae. Is Tom Yager really claiming that...
1- Intel pushing the performance envelope via their tick-tock strategy is a bad thing? and...
2- Buyers should purchase AMD CPUs now because in two or three years they'll see a payoff when software 'catches up'?
Who, exactly, is he trying to do favors for? Intel is probably having a good chuckle at the first suggestion, and the AMD board probably wants to string this guy up for the second one. Sheesh...
“To clarify a bit, there is no "production" problem with 45nm chips.”
Didn’t the same self appointed tech “insiders” claim the same nonsense months ago? It’s all crap.
“Fudzilla is reporting that all that memory for Nehalem is going to require an 8 layer board due to all the memory tracing.”
Didn’t the same self appointed tech “insiders” claim the same nonsense months ago? X48 was having the same problems. Yeah, Ok, sure, right.
My X48 is breezing along very nicely at a NATIVE 1800 FSB. Corsair just set a memory record with MY (P5E3 Pre.) motherboard @ 2.4+ GHz!!!! The question BEGS: if X48 mobos are doing so well, why on earth would there be a problem with X58!?!?!
Orthogonal buddy, don’t listen to these jackasses. You just keep cranking out those FAT chipsets. I know you can’t talk much, so, if you'll excuse me, I’ll be more than happy to shoot my big mouth off for ya!
Besides I’ve got the retail hardware to do it!
QX9770, X48-Top Dog, Big Dog
HOO YA!
SPARKS
Sci's at it again, there's a new post. Here's my favorite part
"This would be fine were it not for the fact that Intel is using high-K at 45nm which is looking more and more like the right choice. It has yet to be shown whether or not AMD can be competitive with low-K."
Oh dear...
Hey GURU!
What the hell is low-k?
SPARKS
"What the hell is low-k?"
Sarcasm? Well, assuming it isn't, it has nothing to do with high K or gate oxides.
Low K is the dielectric used in the backend (low K means low dielectric constant). ALL, I'll repeat ALL, IC manufacturers use some form of 'low K' in the backend process; however, IBM (and hence AMD) has claimed the bulk ILD they will use at 45nm has a lower k than the rest of the industry's.
As always, IBM is technically right on this issue, but it is completely misleading. You have 2 major speed constraints in chips - one is switching speed (transistors turning on/off), the other is signal delay in the backend (RC delay). Your overall chip speed is limited by the slower of the two.
In general, things are limited by the transistor speed, and the goal in the backend is to make it fast enough so as not to be a limiter. So making the backend 'faster' via a lower k (which means lower capacitance, which means lower RC delay) does nothing if the transistor is not also sped up. You just get to the red light faster.
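A toy sketch of that point, with completely made-up numbers just to show the shape of the problem (not real process data):

```python
# Stage delay is gated by the slower of transistor switching and wire RC delay.
# All numbers below are invented for illustration - not real process data.

def stage_delay_ps(transistor_ps, wire_r_ohms, wire_c_ff):
    rc_ps = wire_r_ohms * wire_c_ff * 1e-3    # 1 ohm x 1 fF = 1e-3 ps
    return max(transistor_ps, rc_ps)

# Transistor is the limiter (the usual design goal): RC = 6 ps, stage = 10 ps
print(stage_delay_ps(transistor_ps=10.0, wire_r_ohms=200, wire_c_ff=30))

# Cut wire capacitance 25% with a lower-k ILD: RC = 4.5 ps... stage still 10 ps.
# You just get to the red light faster.
print(stage_delay_ps(transistor_ps=10.0, wire_r_ohms=200, wire_c_ff=22.5))
```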
Intel has managed to keep the backend delays from being the limiter without resorting to ultra low k's in the backend - while the press sees this as a disadvantage, it is yet another example of Intel being able to extend an existing technology further. Similar to dry litho vs immersion, this is a good thing for manufacturing, but the press gets easily confused and thinks newer = better.
The other real misleading bit for the process junkies out there is the need to report effective k, not just the bulk k of the film. The reason is that you need thin etch stop layers for patterning in the backend, and these are starting to contribute significantly to the 'effective' (combined) k of the backend. If you use a very low k film but need a really thick barrier or a higher k barrier, this can quickly offset any gains from the bulk ILD film. I have not seen any reports of effective k from IBM, so it is hard to assess whether they are playing games with the data (I suspect they are not).
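A crude way to see the effective-k point (thicknesses and k values below are invented for illustration): for line-to-line capacitance the field runs roughly parallel to the film stack, so the layers combine approximately like parallel capacitors, i.e. a thickness-weighted average of k.

```python
# Rough effective-k of a bulk ILD plus etch stop stack, treating the layers as
# parallel capacitors (thickness-weighted average of k). Real geometries need
# a field solver; every number here is invented for illustration only.

def k_eff(layers):
    """layers: list of (thickness_fraction, k); fractions should sum to 1."""
    assert abs(sum(f for f, _ in layers) - 1.0) < 1e-9
    return sum(f * k for f, k in layers)

bulk_only = k_eff([(1.00, 2.4)])                 # the advertised 'ultra low k'
with_stop = k_eff([(0.85, 2.4), (0.15, 7.0)])    # plus a thin k=7 etch stop

print(f"bulk film alone: k = {bulk_only:.2f}")
print(f"with etch stop:  k_eff = {with_stop:.2f}")   # ~3.1, much of the gain gone
```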
Again, the thing to keep in mind is that Intel somehow can use FEWER metal layers and "OLDER" ILD technology, yet get better performance? Which is more advanced and better again? You don't use a Ferrari to commute 2 miles to work every day and yell "look at me, I'm so much better than the person using a Toyota" when that person gets there at the same time, spends less money, and uses less gas.
Again, the thing to keep in mind is that Intel somehow can use FEWER metal layers and "OLDER" ILD technology, yet get better performance?
That's another important point that has been discussed before, but I think is worth a refresher.
Having an effective lower-k can do one of two things. You can decrease the RC delay for a given feature size, or you can effectively keep the same RC delay and shrink the feature size (or some combo of both) in order to have tighter interconnect packing allowing you to use fewer metal layers.
So the question now becomes, if AMD's ultra low-k ILD is "world class" why do they require more metal layers than on Intel's backend process?
Without knowing details, it's hard to compare apples-to-apples due to the large differences in architecture, layout and process integration, but it's clear that Intel's backend process is good enough, and that AMD's is either overkill or compensating for something.
"So the question now becomes, if AMD's ultra low-k ILD is "world class" why do they require more metal layers than on Intel's backend process?"
Well... a few things... as I mentioned in the last post, the "C" part of RC delay is the total capacitance, not just the capacitance due to the bulk ILD - the etch stop layers (how thin you can push them, and how low a k you can achieve on them) are becoming increasingly important. There is no mention of this in the IBM announcement. Suppose you needed a thicker etch stop layer to pattern the new low k material, or a new material for the etch stop itself - this could eat into a lot of the gains you achieve on the bulk ILD (I have to believe there is an overall net positive, though, or else there would be no reason to switch to this).
Then don't forget the R (resistance) part... it's not simply "well, they both use Cu interconnects." You have barrier and seed layers prior to electroplating the Cu... how thin and how resistive these are also plays a huge part in the overall resistance of the backend.
So you have AMD/IBM playing the shell game - look at all these cool technologies, but just don't look at the integrated performance. Clearly Intel has less of an RC issue even with its use of 'higher' low k (or should that be less-low low k?) ILD's. Makes you scratch your head, doesn't it....
For Scientia to ask "well, we'll see if the low k process can compete" shows that he doesn't get it... the low k doesn't speed the device up; it keeps it from being slowed down (or keeps you from adding metal layers to prevent the slowdown).
I must give him credit for finally starting to open his eyes - he apparently now suggests that perhaps RDR's are helping Intel (I recall an earlier argument that this was garbage), although for him to suggest that Intel's 65nm process is better solely due to small layout/design rules is quite frankly ignorant, and a return to his wishful thinking and wanting something to be true, as opposed to having actual knowledge and being unbiased as he purports to be.
- Intel's conventional SiO2 gate oxide on 65nm is better
- the anneals are better
- the implants are probably better (I don't know this for sure)
- Intel's overall strain is better (despite only using 2 as opposed to 4 types of stress techniques)
- Salicide is also potentially another advantage.
- and again, if you choose not to bury your head in the sand and look at the ACTUAL IEDM-reported data on the two 65nm processes, you will see Intel has better overall transistor performance... that is not just 'layout'
He also makes it sound as if AMD is suddenly adopting DFM at 45nm and therefore should see a boost... well, DFM/RDR's have been in use for YEARS (decades?); it's just a matter of degree (AMD uses a level of RDR's on its 65 and 90nm processes). Again, this is just another desperate 'this could be the silver bullet that AMD needs' wish, as opposed to actual analysis based on some inherent knowledge.
“Intel's lead remains clear”
“what steps AMD might take to become profitable again remain a mystery”
And with that, Father of Sharikou could have stopped, but no, he goes on for more than 1,400 additional words.
In the end there is a very simple conclusion:
INTEL has far superior silicon technology at a given process node. As a matter of fact, at 45nm the lead is larger than it's been in the past three or four generations.
INTEL has almost a year's lead in getting to each technology node.
INTEL has better than a 4-to-1 manufacturing advantage.
Combining these three items allows the designers to architect superior, higher performance, lower power, and more cost efficient products that result in measurable bottom-line EPS.
AMD is like a 2nd tier, small-time college football team, such as Boise State, competing with the tier 1 programs. INTEL, like those programs, has got bigger bank accounts and bigger, better athletes, and all of this feeds back to keep the advantage. Boise State beating Oklahoma is possible, but 9 out of 10 times a focused Oklahoma will beat Boise, and similarly INTEL will beat AMD. Sure, AMD can come out and give a good game, but as long as INTEL leverages all the tools it already has and executes, it will win almost all the time.
Opteron versus Prescott was like the Blue Smurfs versus the unfocused and lazy Schooners; it happened once and will never happen again.
Scientia has got nothing to add. He can speculate all he wants about Nehalem and such, but in the end, regardless of where Nehalem lands, AMD is finished ever competing again in the big time. Sure, they'll still play, but it isn't very interesting at all. It's now more interesting to see how INTEL's thrust into graphics and the low cost Atom do than to watch INTEL versus AMD. That game is over, finished, end result already known.
Tick Tock Tick Tock
No sarcasm intended. I don’t think you guys ever mentioned Low-K.
Here we are, hundreds of millions in R&D invested in the Hi-K/hafnium era, moving forward, and now we step back to low-k and, crazily enough, ultra low-k! However, now I know it has nothing to do with Hi-K in transistors and gates, thanks.
As I see it, this MUST be a supreme balancing act between trace widths (current), the materials' dielectric properties, voltage, layer thickness, and a transistor's threshold voltage and speed.
I've got the concept of a lower-k dielectric lowering capacitance in the backend so the traces don't become miniature RC networks, thereby limiting signal speed. Again, thanks.
Here's what I don't get. Aren't you increasing leakage, and therefore heat and crosstalk, as you go with materials with lower dielectric properties? Is this another factor in the balancing act, or am I totally lost here?
SPARKS
Sparks you are good!
Lower-K materials generally have poorer thermal and structural properties. So it is true that with lower-K materials great care must be taken that your thermal properties aren't busted and your chip doesn't just fall apart. You should ask IBM all about the smooth-as-SILK implementation of their first generation SILK ILD, LOL. Their customers loved them for that one...
Again, good cooperation between the process side and the designers' needs and desires results in the best situation. That is why IDMs, where process and design are in the same company, talk to each other, and take each other's needs into consideration on a daily/weekly basis, are best.
The IBM prostitute gets multiple requests from all her Johns and thus has to balance the needs and desires of all of them, and will end up with a less optimized transistor and interconnect for AMD CPUs.
Like I've said, it takes a dedicated design team, a dedicated process team, and an integrated plan with manufacturing to have a leading edge product.
INTEL has beaten IBM on technology, has beaten IBM/AMD on the combined technology/design, and will surely beat nVidia on graphics.
Tick Tock Tick Tock AMD is gone and nVidia is next
Can't wait to see IBM turn their fancy air-gap into a real product and watch it crumble in the field, smooth as SILK
Tick tock tick tock the clock is ticking
“ Sparks you are good!”
Not that good. The first thing we learn is the dielectric properties of various insulators, especially with my kind of voltages. You screw the pooch here, and you won't be going home, ever.
Crosstalk, now that's a bit trickier, considering the design challenge of working with the shortest possible lengths (and widths), all on an incredibly small scale. I can see where the design people and the process people (materials and thickness) had better be on the same page on a daily basis.
All said, it all boils down to headroom. QX9770 can clock almost a full GIG with nary a sweat. With Pheromone, it simply can't happen.
That’s saying something.
Thanks Fella’s
SPARKS
"IBM prostitute gets multiple requests from all her John's and thus has to balance the needs and desire of all of them will end up with a less optimized transistor and intereconnect for AMD CPUs."
Yeah, but she sure is a class act, looks real good, and knows how to turn a trick.
LOL
SPARKS
"For Scientia to ask well we'll see if the low k process can compete shows that he doesn't get it... the low K doesn't speed the device up it keeps it from being slowed down (or adding metal layers to prevent this from happening). "
Scientia is not a bad fella; he can be debated with, and he does make good arguments now and then... but on this I must agree: his conceptual understanding of device physics is extraordinarily lacking, yet he says it with such conviction... hmmmm.
Anyway, the empirical data overwhelmingly supports the conclusion that AMD/IBM's backend dielectric is somewhat flawed... the Brisbane product generally underperformed the 90nm Windsor equal-clock/equal-cache product, and cache latency measurements showed significant cache cycle loss -- between 2 and 10 cycles. This points squarely at the wire delay going up from 90nm to 65nm, tremendously so... such that even though the cache is physically smaller, latency did not improve. Add to that the fact that AMD felt compelled to go to an 11 layer backend (again: large die, fighting latency issues, and still the L3 latency is astronomical)... and you begin to see why the 'ultra-low k' in the 45nm process is advertised so intensely -- they need to fix this.
AMD's explanation that they were 'reserving ability to increase cache' was an outright lie and shameless play on the technically ignorant. You will never ever see a 1Megx2 Brisbane (ever).
Jack
There is more to the story
6T memory cells on SOI are a bit handicapped, and that is why they are a bit small and limp.
Is it any wonder IBM is looking at edram and AMD looked at zram?
And that ain't no jack
tick tock tick tock amd you got your clock cleaned, lol
"Here’s what I don’t get. Aren’t you increasing leakage, therefore heat and crosstalk as you go with materials with lower dielectric properties."
Sparks - a good thought, but I don't think leakage is as much of an issue here. Remember, with the gate oxide you are talking 10's of Angstroms of thickness; with the ILD's you are talking 100's-1000's of Angstroms - and thickness has an exponential impact on electron tunneling. There are also barriers cladding the Cu to prevent Cu atoms from physically diffusing into the ILD - Cu is a fairly fast diffuser, and if it gets all the way down into the silicon it is a disaster, as it forms a deep trap state in silicon; this is also why you have very clear Cu and non-Cu segregation protocols in the fab.
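To put a rough number on 'exponential', here's a back-of-envelope WKB estimate of how the tunneling exponent scales with thickness (the ~3 eV barrier height is an assumed round number, nothing more):

```python
# Back-of-envelope WKB scaling: tunneling probability ~ exp(-2*kappa*t).
# The 3 eV barrier height is an assumed round number, not a measured value.
import math

def wkb_exponent(thickness_angstrom, barrier_ev=3.0):
    """Return -2*kappa*t; tunneling probability scales as exp() of this."""
    m_e = 9.109e-31                      # electron mass, kg
    hbar = 1.055e-34                     # reduced Planck constant, J*s
    phi_j = barrier_ev * 1.602e-19       # barrier height, J
    kappa = math.sqrt(2 * m_e * phi_j) / hbar * 1e-10   # per Angstrom
    return -2 * kappa * thickness_angstrom

print(wkb_exponent(15))    # ~ -27: exp(-27) ~ 1e-12 -> real gate leakage
print(wkb_exponent(500))   # ~ -890: effectively zero through a thick ILD
```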
The real issue with the low K ILD's is that, generally speaking, the hardness of the film is proportional to the dielectric value, so as you go to lower k's you (generally) tend to get softer films, leading to all sorts of integration issues (ability to polish, dicing the wafer, packaging, etc.).
As for heat in the backend that is a combo of the wire size (resistive heating) as well as the thermal conductivity of the dielectric. I don't believe heat in the backend is a significant issue, though if I recall correctly, IBM claimed the ULK material had good thermal properties.
I'm a front end guy by nature (so I may be a bit biased), but generally the goal of the backend is not to be a speed limiter, not to be a yield limiter, and to be as cheap as you can get. Generally speaking, you want the transistor switching speed to be the limiter on speed (and define the fmax), as that is generally where you run into some of the more severe issues in scaling. (Please note I'm not saying the backend interconnect process is a piece of cake.)
As JJ stated - there were some questions about AMD's L2 cache... many of the technical folk (including myself) believe it was an interconnect (RC delay) issue, though there isn't enough public data to verify. Given that AMD is claiming improvements at 45nm and seems to be making minimal improvements to the transistor, perhaps this is consistent with the interconnect being the limiter at 65nm and AMD addressing it a bit at 45nm with the lower k film. I still find this hard to believe, given they have more metal layers than Intel at a given node, which should in theory allow for more spacing and relax some of the RC delay issues.
6T memory cells on SOI are a bit handicapped, and that is why they are a bit small and limp.
Is it any wonder IBM is looking at edram and AMD looked at zram?
Could you explain this? ZRAM and EDRAM are slower than a traditional 6-transistor (6T) cache cell. The main advantage is that they are 4-5X smaller (which is why the ZRAM developers claim ZRAM could be competitive for L2 cache: the smaller size means shorter line lengths and less signal delay).
Could you be specific as to the issue with a 6T cell on SOI or provide a link? Thanks.
"Here’s what I don’t get. Aren’t you increasing leakage, therefore heat and crosstalk as you go with materials with lower dielectric properties."
Leakage is not a function of the dielectric constant. The mitigating factors that affect leakage are the dielectric quality, the breakdown potential, and the potential between the two electrodes -- this within the classical limits.
Leakage has also become a problem within the quantum mechanical limits, as length scales (of the gate, in the front end) allow tunneling at these dimensions (a purely quantum effect).
The parasitic effect in the wiring of the transistors is the capacitance between adjacent metal lines, which produces electric fields that set up capacitors wherever lines overlap spatially. In this case, one desires small, or lower, capacitance. C=dQ/dV by definition, but in construction C=e0*A/D, where e0 is the permittivity of vacuum. This changes by a factor (k) when something other than a vacuum exists between the electrodes, so C=k*e0*A/D, where A is the area of overlap and D is the distance separating the plates (electrodes). In short, a lower k decreases the crosstalk.
If you study the literature on these materials, the common theme is to go more porous, or less dense, to lower the k value and decrease capacitance. This works, but at some point the material becomes so weak that it falls apart. There are oodles of papers discussing how to maximize the porosity while retaining the strength of the material.
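A worked example of Jack's formula above, with invented dimensions (order-of-magnitude only; real line-to-line capacitance needs a field solver):

```python
# Parallel-plate estimate of line-to-line capacitance, C = k * e0 * A / D,
# for two adjacent metal lines. Dimensions are invented for illustration.
E0 = 8.854e-12   # permittivity of free space, F/m

def line_cap_ff(k, length_um, height_um, spacing_um):
    area_m2 = (length_um * 1e-6) * (height_um * 1e-6)
    return k * E0 * area_m2 / (spacing_um * 1e-6) * 1e15   # F -> fF

c_sio2 = line_cap_ff(3.9, length_um=100, height_um=0.2, spacing_um=0.1)
c_ulk = line_cap_ff(2.4, length_um=100, height_um=0.2, spacing_um=0.1)

print(f"k=3.9: {c_sio2:.2f} fF,  k=2.4: {c_ulk:.2f} fF "
      f"({1 - c_ulk / c_sio2:.0%} less coupling)")
```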
"Could you be specific as to the issue with a 6T cell on SOI or provide a link? Thanks."
SOI has benefits and it has disadvantages... the benefit is that it eliminates parasitic bipolar effects between adjacent transistors (called latch-up); in bulk, adjacent transistors set up thyristors (http://www.siliconfareast.com/latch-up.htm). SOI, however, eliminates the pnpn motif by isolating the transistor completely from its nearest neighbors. As such, latchup does not occur with SOI CMOS devices. SOI also removes junction leakage and improves some components of short channel effects.
However, there are drawbacks. Because SOI completely isolates the transistor, the entire transistor is electrically floating. As such, transistors tend to hold a charge even after they are switched off... this is sometimes called the floating body effect.
In fact, this is what ZRAM is... it takes advantage of the floating body effect in SOI transistors. However, in a typical 6T SRAM cell, a charged transistor is death.
Jack
http://iroi.seu.edu.cn/jssc9697/data/00585285.pdf
Here, here is a paper I was able to find that discusses the deleterious effects of the floating body of SOI on SRAM... best I could do without sending you references requiring a trip to the library.
Jack
sparks: "What the hell is low-k?"
Breakfast cereal?
“C=dQ/dV by definition, but in construction C=e0*A/D, where e0 is the permittivity of vacuum. This changes by a factor (k) when something other than a vacuum exists between the electrodes, so C=k*e0*A/D, where A is the area of overlap and D is the distance separating the plates (electrodes). In short, a lower k decreases the crosstalk.”
Jack-
You have no doubt heard the expression "back to the future"? I would like to submit to you "forward to the past". I hate to bring up what Mister Spock would call "stone knives and bearskins", but these equations look suspiciously like the formulas and parameters used in designing vacuum tubes!
Anode-to-cathode capacitance is a determining factor in a tube's performance, along with the quality of its vacuum. This is why I only build directly heated amplifiers (triodes: 300B, 845), eliminating another source of capacitance, the screen. They aren't as efficient as, say, a pentode (EL34, KT88), but the quality of signal amplification is breathtaking, especially in pure Class A, single-ended operation.
“a charged transistor is death.”
Hmmph, a capacitor in the signal stage/path is death.
Perhaps it took this long for the top evolutionary atomic-scale processes to show their 'DNA roots' in electron tube dinosaurs.
Too bad you guys couldn't develop a way of suspending a structure or grid work of copper traces in the backend enveloped in air, or, in a perfect world, a vacuum! Nothing delivers juice like a wire in free air, let alone a vacuum. (Just take a look at HV transmission lines.) Transistor speed in the microarchitecture would then be, exclusively, the limiting factor.
This explains completely why it's 'the softer the better' in the backend, to the point of the structure collapsing under subsequent process steps.
You guys have added a new dimension to process engineering. The backend guys and the frontend guys deal with entirely different structural approaches. That said, there must be quite a bit of tension between the two factions, especially between groups of disciplined guys who, apparently, don't cherish compromises.
Thanks to ALL.
SPARKS
"As such, latchup does not occur with SOI CMOS devices."
I believe this only applies to fully depleted SOI devices (FDSOI); IBM's (AMD) process is still PDSOI (partially depleted); thus the transistors are still not truly isolated (the STI does not go all the way down to the buried oxide, and you still have some latchup potential).
I'm speculating here, but I think the SRAM 'issues' (added latency) AMD has had on 65nm are more backend (interconnect) process related than due to the use of SOI (especially when you consider they were also using SOI at 90nm).
"I believe this only applies to fully depleted SOI devices (FDSOI); IBM's (AMD) process is still PDSOI (partially depleted); thus the transistors are still not truly isolated (the STI does not go all the way down to the buried oxide, and you still have some latchup potential). "
In a complementary device (i.e. both PMOS and NMOS integrated into the same substrate), the condition that leads to latch-up is the PNPN motif that is created between a source (or drain), the well, the substrate, and the source or drain of a complementary transistor. I guess you could say this is the classical concept of latch-up.
An SOI transistor will not have this motif, whether fully or partially depleted. Now, the floating body effect could lead to a transistor remaining ON when it should be in an off state, since excess charge in the channel could keep the charge density around the inversion level... a transistor that suffers from a floating-body perpetual "on" state could also be termed 'latched up', I guess, but I am more accustomed to reading the literature where latch-up is applied to a thyristor, which is the configuration found in bulk CMOS.
Ohhh... an addition to my comment above --- SOI (partially or fully depleted) is still laterally isolated with trench isolation methods. AMD has a patent on it:
http://www.patentstorm.us/patents/6534379.html
So SOI transistors are fully electrically isolated.
Jack
I'm always amazed when I get misquoted on roborat's blog which seems to be about half the time (latest Sci comment)
Funny, didn't he just say the other day he doesn't read the comments anymore? Putting aside the veracity of the allegation, how would he know he's being misquoted 'half the time' if he doesn't read the comments?
They would look very foolish after trashing the concept in their ads. (They = Intel, referring to what would happen if Intel, for whatever reason, released tri-cores.)
First off, there were no "ads"; there was a quote from Otellini in an interview where he said he prefers to have all the cores working, but I don't recall any ads. See, herein lies the fundamental difference between Ruinz and Otellini - if the market is there and it makes good business sense to release tri-core, Intel will swallow their pride and do it... it's about making money and selling products, not some ego trip where you bite off your nose to spite your face.
I'm just hoping some good news comes out for AMD soon... with 50% of the comments on the blog being his, and the complete stretches his latest blogs have become, it's getting hard for him to keep up the "unbiased" charade. Even the 'great article, Sci' comments have evaporated as his writing has become increasingly transparent.
this is only a real issue to Intel if it is a fundamental issue that can't be overcome with time (any thoughts intheknow?) when Nehalem goes more mainstream, probably not until at least mid-2009.
Well, the board in question is a high end board for the enthusiast Nehalem processor. I don't think that the lower end Nehalems will be set up for that much memory, but that is pure speculation on my part.
As far as the cost impacts go, I think Fudzilla is making a mountain out of a molehill. Worst case, the cost of the bare PCB itself is going to be less than 25% of what you pay for the finished board with mounted components. So if you figure ~$300 for your motherboard, worst case is that $75 of that is the cost of the bare PCB.
Current DDR2 compliant boards have 6 layers (incidentally the DDR boards had 4 layers with the increase to 6 layers coming to support the 2nd memory channel in DDR2). The cost of a board is roughly based on square inches x layers. Since the ATX standard is being kept, you'll only see the cost increase due to the change in layer count. Going from 6 to 8 is a 33% increase. So the cost of the board goes up $24.75.
Is having the creme-de-la-creme worth an extra $25 buck to an enthusiast? I suspect so.
Remember that this is for triple channel DDR3 memory feeding a quad core processor. If you don't use all 3 channels on a lower end board, or you don't allow as many memory slots, you can still use a 6 layer board and keep costs at the current levels.
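Spelling out InTheKnow's arithmetic (the 25% PCB share and ~$300 board price are his stated worst-case assumptions, not real BOM data):

```python
# PCB cost scales roughly with area x layer count; ATX area is fixed, so only
# the layer count moves. The 25%-of-retail PCB share is a worst-case guess.

def added_pcb_cost(board_price, pcb_share, old_layers, new_layers):
    pcb_cost = board_price * pcb_share
    return pcb_cost * (new_layers / old_layers - 1)

delta = added_pcb_cost(board_price=300, pcb_share=0.25, old_layers=6, new_layers=8)
print(f"Worst-case added cost: ${delta:.2f}")   # ~$25 (the comment rounds 8/6-1 to 33%, giving $24.75)
```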
“Is having the creme-de-la-creme worth an extra $25 buck to an enthusiast? I suspect so.”
In The Know- Are you trying to smoke me out for a comment. I’m compelled!
Here’s the short answer…. F**k’en A Bubba!
I am a firm believer that more is better! Less is un American.
Big Hooters, big butts, big caches, big power supplies, big VR's, 8 layers, you just lay out those fat 32 ounce Porter House motherboards, we’ll eat ’em.
This FAT $400 P5E3 Premium motherboard is the absolute BEST motherboard I EVER purchased, bar none! It was worth every penny.
Native 1800FSB---GMAFB.
Hey, if Nehalem needs it, it needs it!
SPARKS
I'm always amazed when I get misquoted on roborat's blog which seems to be about half the time (latest Sci comment)
one cannot be "always" misquoted when it occurs only about "half" the time. He should stop being a girl ("we never go out anymore") and feel lucky we even quote him after being utterly wrong since 2006.
...still waiting for DTX!!!
I also found this blog post regarding the move to 450mm wafers.
If the author is correct, then the Intel group may bear far more of the cost for the transition to 450mm than they might have been prepared for.
Hmmm... My first post disappeared.
I found a short article that detailed some of the issues with the gate first approach to high-K/metal gate.
Intel supposedly is launching a 1.6GHz dual core atom in Q3, and according to Fudzilla (so take with a grain of salt) it will be under 8 Watts.
AMD's 'energy efficient' quad core is at 1.8GHz, right? Could you imagine an MCM of the dual core Atom operating as a quad under 20 Watts? Sure, the FSB is probably not fast enough, but could you imagine the pressure Intel could bring to bear with this thing? (I don't see any scenario of them going this far, as it would also potentially cannibalize dual and quad core Core 2's.)
I think folks are vastly underestimating the potential market for atom... the 2nd gen could be huge, especially if Intel pushes it into the traditional low end x86 market and not just MID's, etc...
"If the author is correct, then the Intel group may bear far more of the cost for the transition to 450mm than they might have been prepared for."
The author also stated ~$100 Bil in R&D, which is the crap the equipment suppliers have been spewing out. $100 Bil? Think maybe 50 major toolsets... $2 Bil of R&D per tool?
Methinks that # may be a bit inflated.
Having been through this on 300mm, the equipment suppliers have a couple of choices... they all band together and no one does it (some may call this collusion), or one or two realize there is money to be made, especially if they can be first to market, and the flood gates open on the development.
As for R&D cost, typically the capital equipment has 15-20% baked into the tool price to cover R&D - if suppliers are expecting to recoup R&D up front, then I'm sure they will strip the R&D out of the tool price in the future?
Having been in negotiations where a GM cockily said, 'our tools have sold so well we have recouped our R&D', I asked when I should expect the 15% for R&D built into the tool price to be stripped out. The GM's face went blank and he mumbled 'well... umm... that's needed for FUTURE R&D...' So they've been collecting for future R&D and now they want additional seed money upfront? And then of course they will leave R&D baked into the future tool prices anyway...
Look at other industries... does Ford ask for hybrid technology R&D money up front? How about Boeing for the next major airplane?
Undoubtedly there will be some seed money (there always has been during these transitions), and equipment suppliers will likely have the IC manufacturers front a lot of the silicon cost and metrology, but to expect the cost to be paid upfront? Not very reasonable... As soon as one major company jumps in, realizing that by going in early with Intel, TSMC, and Samsung they will have a tremendous leg up on the competition, you will see a steady migration.
but to expect the cost to be paid upfront? Not very reasonable
Of course sports team owners aren't reasonable to expect a city to pay for a venue that they will then use to increase their revenues (and prices) either, but they get it. :)
"Of course sports team owners aren't reasonable to expect a city to pay for a venue that they will then use to increase their revenues (and prices) either, but they get it. :)"
Sports are not competitive businesses, they are monopolies... equipment suppliers are not monopolies. Also, sports team owners don't end up owning the stadiums (they are given a lease), nor can they rent out or sell the stadium to others.
Once the equipment suppliers get the upfront money, will they not want to be able to sell equipment to other customers? Will they not retain the IP?
An interesting example, but I don't think it is on point... I think the airplane industry is the closest - huge upfront costs for next-gen development (say, the 787), typically with cost and schedule over-runs and large risks involved, but there aren't that many options and it is more a matter of when than if it will happen. Boeing gets this money back via long-term contracts and advance bookings of planes - they don't ask for a handout / blank check prior to starting the development.
The equipment suppliers should do their research on the potential market, structure some long-term contracts (at 450mm you are talking about companies that do huge volumes of business), and take some risk - isn't that what business is (taking informed risk and measuring risk-reward)? If they ask for zero risk upfront, they should expect close to zero reward.
“I think folks are vastly underestimating the potential market for atom... the 2nd gen could be huge, especially if Intel pushes it into the traditional low end x86 market and not just MID's, etc...”
Well said. And there’s more here than meets the eye.
I must admit, I fall/fell into the category of those who underestimated Atom’s potential. After all, I blindly and squarely fall into the “bigger, faster, give me more power” camp.
However, upon researching INTC's development history of Atom, and the way OEMs are clamoring for this remarkable/innovative product, it clearly shows why I'm out here and they are running the company.
C2D, Q6600, Nehalem, and Atom are incredibly brilliant solutions intuitively set into motion years back, no doubt. Actually, the timing has been flawless. (Yeah, I know: Tic Toc Tic Toc, BK!, BK!)
I don't know how well this will go over with the talented engineers on this site. But I must say, this is what happens when you've got a marketing guy running the company, as opposed to engineers with "elegant" design solutions. (Sorry, fellas.)
SPARKS
I've been an advocate for the potential of Silverthorne since it was first announced.
If you dig hard enough on another blog out there, I'm sure you can find my claims that I thought it was one of the biggest developments of the year. I was mocked and ridiculed for saying such out loud, but looking back, I think it's turning out to be a bigger deal than DTX or SSE5. Both of which were ranked ahead of Silverthorne by said blogger.
The current chip doesn't really have the horsepower it needs to fill the role I envision, but it is the first step towards what I think is a major inflection point.
Potential is NOTHING
Imagine if there were two versions of Silverthorne:
The current one, and one with 100x lower power. 45nm is more than capable of producing such a beast, and it would have been in the iPhone; imagine that seamless experience. Now we've got to wait and wait and wait.
InTheKnow-
“I was mocked and ridiculed for saying such out loud, but looking back, I think it's turning out to be a bigger deal than DTX or SSE5.”
“The current chip doesn't really have the horsepower it needs to fill the role I envision.”
Not so fast. Give yourself (and INTC) a little more credit than that. From what I understand, there's a dual core Atom scheduled for release after the single core Atom's launch.
Since most current OSes are compiled for multicore processing, I believe the dual core's modest extra power draw would be a small price to pay for an immediate and substantial boost in performance in the larger devices that need it.
Larger devices could (and will) be equipped with larger batteries; a mere 9-12 watt power hit at FULL load is not a big deal.
SPARKS
"The current chip doesn't really have the horsepower it needs to fill the role I envision, but it is the first step towards what I think is a major inflection point."
You have been the most vocal - I still think it will be about the 2nd generation product (that is true of many products). It does look like the first gen may do better than I thought - I figured it would be a get it out there, get a bunch of feedback on the deficiencies and get it right on the next iteration.
It will be interesting to see how much Intel pushes this into the computing space and how much more horsepower they might give it - at some point you are talking about eating into Core2 (though these chips probably have better margins).
As for a sales guy running thing - it really comes down to product marketing and timing... this looks to be the right target at about the right time. (of course you need the engineering and manufacturing to be able to deliver it)
There does seem to be a bit of a mindset change though - it used to be get the best you can do out there and force it into the market and invest in infrastructure to build demand (Itanium?) - now there seems to be a bit more balanced and targeted approach with the end market in mind during development. The atom reminds me a bit of the Centrino/Pentium M/Core development - it wasn't about the best possible performance, there was a specific market requirement/segment in mind.
InTheKnow-
Here you go, 200 Mhz faster with Dual Cores, speed and SMT, hello.
It's like having your cake, and eating it. It seems INTC is way ahead of us all.
http://www.engadget.com/2008/03/11/intel-roadmap-reveals-1-87ghz-dual-core-atom-processors/
SPARKS
Oh yeah, even the overclocking/hardware freaks are keeping an eye on this thing. Apparently, they feel the same way I do about the dual core Atoms being used in larger devices, that is.
SPARKS
Whoops, forgot the link.
http://www.overclockersclub.com/news/22428/
SPARKS
anon: "I figured it would be a get it out there, get a bunch of feedback on the deficiencies and get it right on the next iteration."
And of course, if this is what does happen, you can be sure that there will be lots of comments about how it is a "failure." And then, when the 2nd generation comes out and is a success, those same commenters will pat themselves on the back about how "Intel finally gets it."
Without realizing that it was the plan all along. Ahhh, technology...
I still think it will be about the 2nd generation product (that is true of many products).
We are actually in agreement here, it is really about the 2nd generation product. I think the only thing we really differed on was the size of the splash the first gen product would make. I think that Intel needs a success right out of the box to make this thing fly.
Ultimately Intel wants to get into smart phones, where ARM is the entrenched player. To do that, they don't need to knock it out of the park with their 1st gen product, but if they don't get on base, the whole thing could come crashing down around their ears. If gen 1 Silverthorne doesn't look at least respectable, it will be much harder to breach the smartphone space with the next generation.
With the product announcements starting to leak out the race is on, and Intel is the company that is playing catch up.
Sparks, I think Scientia's last reply to one of your posts sums up the problem with his blog. I'll just take the first point he picked to show what I mean.
For example, the discussion about 450mm wafers.
AMD trailed Intel's adoption of 300mm by several years; they will of course do the same thing with 450mm, putting off the transition as long as possible.
It is true that AMD delayed the 300mm transition as long as they could. It is also true that they will delay the transition to 450mm again. Why does this make the conversation pointless?
Did putting off the 300mm transition help AMD? Did it hurt them? I would argue that it hurt them not to have the funding to make the transition sooner as they lost out on the economic benefits for those years. Does that not have any bearing on the financial position that AMD finds itself in today? Again, I think it does and is a relevant point.
He also missed a key point in the conversation, at least in my mind, which revolved around when the transition would take place and who would pay for it.
But since it doesn't interest him, it is "garbage". And I think that is ultimately what is wrong with his blog. His focus is too narrow and results in the excessive censoring of posts that he deems to be "garbage".
Well, you know what they say, one man's garbage is another man's treasure.
“Does that not have any bearing on the financial position that AMD finds itself in today?”
I realized that immediately. AMD being late to the table with 300mm wafers is a matter of history. More importantly, what good is 300mm if you're making chips larger than mosaic tiles and half the ones you make come out cracked anyway? Further, each time you refine the process, the gains never reach what was expected (or desperately needed) - three times now?!?!
As far as him criticizing this site or the individuals on it, I take it with a grain of salt. I've clearly (and gratefully) obtained a wealth of insight and knowledge here on this site, which isn't open to debate, anyway.
However, when it comes to the performance of my beloved QX9770 and X48, well, let us say that's an entirely different matter. I took him up on his challenge to run Prime95 for a half hour. He certainly wouldn't admit that these were, indeed, terrific products, and cleverly avoided any positive comments concerning the results.
I’m not certain you read the entire thread, beginning with my first post. But to his credit, he did post them. His final reply was a meek:
“ I would say that your results show exactly what I said: If Intel sockets were designed to handle more wattage then Intel could easily increase clock speed. But since the great majority of Intel systems are stock this doesn't help Intel much right now. However, you could make the argument that Intel could bump dual core speed enough to remain competitive with AMD's triple cores, but I doubt Intel would actually do this.”
Huh? I’ve got a Gigabyte P35 motherboard released last fall that can clock the piss out of a Q6600. Hell, my Bad Axe (975X) was doing fine up until the X48!
I really didn't want to get into an argument with him. I KNOW most everything I've seen with ANYTHING C2D has TREMENDOUS headroom, especially the dual core stuff at 45nm. INTC can ramp up speeds any damned time they want, damn the sockets.
This was my point. He thinks AMD's broken quads are going to be competitive.
Personally, I would piss on a three core chip. I don't relish the idea of THROWING my money away on a busted, second rate product, even if it were FREE! I'd simply buy a Q6600, clock the hell out of it, and pound the X3 into the pavement.
Wait, huh, I did that last year!
SPARKS
"I realized that immediately. AMD being late to the table with 300mm wafers is a mater of history."
Ignorance is bliss... what is rather amusing is the comment that, well, they trailed by a few years, so he assumes 450mm will be similar. The old 'extrapolation' analysis, which again shows his lack of background on these matters.
One, ummm, slight problem... 300mm saw a greater ratio of equipment capital cost to overall wafer cost (vs 200mm). 450mm will get worse on this ratio, as capital costs will grow at a faster rate than things like consumables, chemicals, silicon costs, etc. (on a normalized basis). So why is this important? Well, the up front cost is now much bigger, while the 'operating' or ongoing costs (I'm not using these in the strict economic sense of the words) shrink. This means AMD needs an even greater cash flow for upfront fab investments.
You also need a sizeable factory to use the equipment efficiently - while you get a theoretical 2-2.5X die output increase, you can't simply cut the fab size in terms of wafer starts in half, because then you will have highly inefficient tool utilization. This means you need a fairly large demand to justify building a fab with ~2X the output of a 300mm fab. AMD is struggling to load a single 300mm fab - they would need to more than 2X their demand to load a reasonably sized and efficient 450mm fab. (Actually this would be more like a 3X demand increase, as I suspect they would not kill their 300mm fab off and would want to keep that loaded efficiently too.)
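A crude illustration of the loading problem (every number here is invented purely to show the shape of the economics, not taken from any fab's books):

```python
# Why an underloaded fab wrecks wafer cost: depreciation is a fixed cost that
# gets spread over however many wafers you actually start. Numbers invented.

def cost_per_wafer(fixed_per_qtr, variable_per_wafer, capacity_wspw, utilization):
    wafer_starts = capacity_wspw * 13 * utilization   # ~13 weeks per quarter
    return fixed_per_qtr / wafer_starts + variable_per_wafer

full = cost_per_wafer(250e6, 1000, capacity_wspw=25000, utilization=0.95)
half = cost_per_wafer(250e6, 1000, capacity_wspw=25000, utilization=0.50)

print(f"95% loaded: ${full:,.0f}/wafer   50% loaded: ${half:,.0f}/wafer")
# The 2-2.5X die-per-wafer gain of 450mm only shows up if demand keeps
# utilization high; otherwise the fixed-cost term eats the advantage.
```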
This is a fairly high level overview from a manufacturing perspective - if folks want more detail on some of the generalizations I made let me know.
Perhaps people will also tie this into the eventual fabless (with the intermediate fab-share) philosophy AMD is heading toward. To build a fully loaded 450mm fab (or, for that matter, a 2nd 300mm fab), AMD will need the cash for the upfront capital investments (which I think everyone sees and understands), but they will also need the partners' demand (in addition to their own) to keep that fab reasonably utilized and give it a shot at operating in the black.
"I would say that your results show exactly what I said: If Intel sockets were designed to handle more wattage then Intel could easily increase clock speed. But since the great majority of Intel systems are stock this doesn't help Intel much right now. However, you could make the argument that Intel could bump dual core speed enough to remain competitive with AMD's triple cores, but I doubt Intel would actually do this."
Isn't it also possible that Intel doesn't raise the speed on a model because they feel they do not need to? If sales were simply a direct reflection of performance, AMD would have gained lots more market share during the time when they had a performance lead, before Intel finally scrapped Netburst and developed Core.
There is also the factor that it is not AMD that is setting the pricing levels, it is Intel. If AMD has a better price/performance ratio at a specific level, I think that it's because they positioned their products and prices in response to Intel, and not that Intel is struggling to match them in either price or performance.
There are lots of other factors, including OEM deals and manufacturing capacity, to consider when thinking about how each company prices its products. Maybe Intel could increase the speed on some dual-cores to better compete against AMD triple-cores, performance-wise. But do they need to?
Tonus-
Exactly! INTC doesn't need to, simply because there's absolutely no threat. Further, as 45nm fully ramps AND the process matures over the next six months, they can simply adjust their products on two fronts, both speed AND price, WITHOUT a major sacrifice in margins. Any other assertion, like the so-called Tri-Core threat, is utter nonsense.
The chip is a last ditch effort to return some money on the entirely failed line called Barcelona. OK, so you got great scaling in HPC and servers with the functional quads. That is not enough to save the day, however.
If they didn't sell the broken quads (X3), they would merely be scrap anyway. Caveat emptor.
SPARKS
Here we go again, Doc, the EU gangsters are at it again!
http://news.moneycentral.msn.com/ticker/article.aspx?Feed=Bcom&Date=20080527&ID=8693851&Symbol=INTC
What a couple days for INTEL!
The arrival of their Centrino 2 is delayed due to a bug and missing some paperwork? Can't wait to hear more about the bug and how they missed filing some papers. AMD gets time for a last meal.
Rumors of a big fine coming from the EU. Too bad it won't make a material difference to INTEL. It's the narrow, old-world thinking of the EU again. INTEL has something like 8 billion bucks in the bank. Even with its massive capital investments and R&D it continues to generate positive cash flow, and it continues to buy back stock at a billion-dollar-plus rate. A fine of a billion or two would be painful, but it would not change matters one bit for AMD. In the worst scenario, where INTEL loses the appeal (and certainly they will appeal and keep this in the courts for a few more years), it matters not. Not one dollar will go to AMD, and not for one quarter will INTEL lift off the gas pedal and slow down process or design development. The money won't go to AMD, so I find it laughable that AMD has a leader who spends time on such things. You'd think Hector would spend more time figuring out whether his design team is doing good pre-silicon validation, or whether the process team is doing real work instead of just taking IBM's shoddy joint venture process and handing it to Dresden. Damn, is Hector doing anything at all to justify the ridiculous pay he earns? He already ruined one company and has all but completed the ruizing of a second.
The first benchmarks showing Barcelona performance are out. What a disappointment; Nehalem will barely have to stretch. Perhaps it was AMD's strategy all along: hey, we are so far behind, let's turn out another turd so they go to sleep, then we'll open a can of whoop ass at 32nm.
In the end I'm surprised the AMD cum lappers aren't celebrating, but then it's clear why: they have experienced so much whoop ass that they've got nothing left in them. Only so much whoop ass can one dog take.
Tick Tock Tick Tock AMD’s clock has been cleaned
Attila The Anonymous-
I’m glad you wrote that above. I was starting to get a little depressed, sincerely. With all this goddamned bad press, along with Fuddie and the INQ, this has all been driving me nuts.
However, what you said about the AMD piss pumpers is very true. They've been smacked down so many times that they're not going to say squat until they see some hard numbers. No doubt that's why they're so quiet.
Frankly, I can't wait to see the 2nd quarter numbers put everything in its proper perspective and put all this bullshit to rest.
Hmmm---Last Meal.
SPARKS
Does anyone else find it humorous that the morally correct EU, which is investigating Intel over fair trade, is now defending itself for adding a tariff to certain electronic goods?
Their defense: well, the parts they are taxing weren't part of the original WTO agreement... so much for fair trade, eh?
http://www.physorg.com/news131191338.html
Well... they're all about free and fair trade, except when it applies to imports where they can make a buck... then it's about re-classifying parts and imposing tariffs. Hypocrisy, anyone? (Or are the actions consistent, in that they are both just EU government money grabs?)
It's all about a level playing field... except when the EU doesn't feel like it.