There is a report that AMD is in talks with TSMC to sell Fab30. As AMD struggles to bring the company back to profitability this particular “rumour” doesn’t sound so outrageous, especially when we’ve already entertained the possibility of such a strategy. The “idle-race-car-in-a-pit-stop” description mentioned by Mr Ruiz on the current situation of Fab30 is probably not the best metaphor for an empty Fab. A Fab with ready capacity is the real “idle race car”, but then again this costs millions to sustain. An empty Fab without cash for the necessary upgrade sounds more like an empty pit stop - no race car! And if such is the case then it’s just a wasted asset. There is also the other problem of excess manpower that AMD isn’t allowed to release lest they lose their grant. Anyone familiar with wafer manufacturing knows that an idle Fab, however less costly, is just as bad as an underutilized one.
If there is one reason why anyone should believe the rumour it has to be based on this simple fact: nobody builds or upgrades a Fab without expecting enough volume to sustain the added operating cost, let alone a return on investment. The initial assumptions made to support an AMD with two Fabs did not materialise, and the result is just one idle Fab that it cannot afford to keep. Even if AMD did have the money to upgrade Fab30 to 300mm (Fab38), there won’t be enough volume to support such a massive increase in operating cost. Clearly AMD finds no use for Fab38 at the moment, and if AMD plans to keep Fab30 idle then that simply goes against any asset-lite approach. An unutilized asset which incurs cost doesn’t make any sense. What makes more sense is a foundry like TSMC, with enough customers to fill a Fab, jumping in and taking over. This even coincides with the other rumour that TSMC will ramp AMD’s 45nm in 2009.
11.13.2007
All Your Fabs Are Belong To Us - TSMC
86 comments:
Perhaps they are saying that it's like an idle race car... with the driver asleep at the wheel. =)
hahahhah great title doc.
Not too sure about this rumor. They may sell/lease some capacity to TSMC, but I think some of the recent German govt subsidies, which are still ongoing, may be at risk if sold outright (though I do not know for sure)
I still think AMD is far better off bringing chipset and GPU biz in house to load their fabs - this would give them some flex to withstand bubbles in CPU demand. I have to assume if they are not doing this that they really have no competitive technology advantage (either cost or performance) over the foundries, despite multitude of AMD fans who think/claim this is the case (APM3.0 anyone?).
If AMD was doing better (or even as good as the foundries), it would be better to produce graphics parts/chipsets in house and not pay the foundry margins. The fact that they don't leads me to believe they are simply not cost competitive (and at best are probably process competitive).
Still I don't see AMD outright selling F30/38.
They may sell/lease some capacity to TSMC, but I think some of the recent German govt subsidies, which are still ongoing, may be at risk if sold outright (though I do not know for sure)
The German gov't could be in on the negotiation. They're not stupid enough to let AMD go under by burdening them further with past agreements. They know they would gain nothing if AMD goes belly up. What the German authorities want is 2 fully loaded fabs in Dresden. They don't care who really owns it.
Still I don't see AMD outright selling F30/38
I know it's a rumour and you may be right. It's just that the "idle race car" plan by Hector Ruiz is less realistic than actually selling or leasing Fab30.
"I know it's a rumour and you may be right. It's just that the "idle race car" plan by Hector Ruiz is less realistic than actually selling or leasing Fab30."
Come on - anyone with half a brain (which obviously excludes the press reporting on it), knew the whole idle racecar stuff was a bunch of crap and meant purely to spin a nice face on putting F30 idle (due to LOWER THAN EXPECTED CPU DEMAND)
Much like the whole asset lite crap - while AMD may start selling some assets and move towards outsourcing, when they spouted out that crap they clearly had no intention of doing so, and were continuing to be vague about even the NY fab build. I realize now if/when AMD starts to move fabless, people will say "ah, that was the asset lite plan", however it will be more a move of desperation.
AMD's inability to read the market and have real solid STRATEGIC planning is absolutely killing them.
The only way to be able to sustain 2 fabs is to have a substantial market share (30+% range) or start making GPU/chipsets in house. Clearly Hector had designs on 30+% market share (at all costs), however in his arrogance several years ago, he did not think to put a backup plan in place - like planning a move for inhouse GPU production should CPU market soften and/or Intel did not give up market share. Instead it was giddiness of:
1) Let's sign up foundry support from UMC (which burned them as they were forced to make CPU's there due to their contract, even when they had the capacity in house)
2) Let's convert F30 on the fly (which by the way is far more costly and time consuming then just taking it down and gutting it)
3) Let's plan a 3rd fab in NY
Why was no one asking at the time what happens if they don't continue to build market share, or, in bizarro world, what if Intel takes some back? The only thing they at least put some hedge on was the NY fab. Now their planning consists of how do we decrease our fab capacity, not how can we utilize what we were putting in place with other products? (Short-sighted to say the least)
Now AMD will do yet another REACTIONARY move and potentially sell/lease out F30, or in the best case just mothball it. If/when they suddenly get a decent product out the door and need capacity, they then won't have it and will have to go the more expensive foundry route (and then they'll be back in the vicious circle of the 'should we build another fab?' debate)
And the board of directors is where in all of this?
Speaking of fabs and capacity, there has been a lot of talk of Intel currently running "capacity constrained". I'd like to clarify a few points on this and show you why that actually isn't the case (at least not right now), as it is a little more complex than that. I've alluded to it in earlier posts, but haven't fully clarified.
It has a lot to do with the current restructure and efficiency program put in place last year (believe it or not, it's not just hyperbole). Intel used to just push everything into the channel that they could, because it would sell eventually, right? Well, that's not the most efficient supply model. When Core 2 was released, there were still large stockpiles of P4 product in the channel. Obviously, with the Core 2's, demand for the old P4's diminished. Intel had to dramatically cut prices on the old chips in order to eventually get rid of them. So, Intel decided to institute a market pull supply model. This has become increasingly more important now that they have introduced the Tick-Tock product release cadence with a full product refresh approximately every 12 months. They hope to ensure they never fall into this trap again.
Now that the new model is in place and the channel is leaner, there is some risk in the event of a sudden increase in demand that there is a short-term drought in certain market segments. Note, the inventory may be lean, but it is not equal across all product lines. Now with the market pull strategy, Intel can focus the HVM (High Volume Manufacturing) sites to place temporary factory priority on certain product lines to meet the demand. Then they can adjust wafer starts on the fly to accommodate changes in the market. There is generally enough WIP (Wafers in Process) and Cycle Times are short enough, that there isn't any real risk of being unable to meet customer demand. It's also nice since there is no visibility to this on the floor because everything is handled through the automation system.
Now, the only real snag or potential pitfall in all of this is that there is now incredible pressure on the HVM sites to execute flawlessly, meaning: maximizing tool uptime, mitigating excursions, etc... (and believe me, this is pounded into our skulls daily). So, as long as we remain on track and demand is strong, Intel should continue to have an excellent 4th quarter.
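The pull model described above ultimately hinges on the relationship between WIP, cycle time and output rate, i.e. Little's Law. Here's a minimal sketch of that trade-off; every number below is a purely illustrative assumption, not an Intel figure:

```python
# Little's Law for a fab line: WIP = throughput x cycle time.
# To sustain a higher output rate, the extra wafers must be started
# a full cycle time ahead of the demand - which is why even a "pull"
# model still depends on forecast accuracy.

def required_wip(wafers_per_day: float, cycle_time_days: float) -> float:
    """Steady-state wafers-in-process needed to sustain an output rate."""
    return wafers_per_day * cycle_time_days

baseline = required_wip(1000, 42)   # assumed: 1,000 wafers/day, 42-day cycle time
upside = required_wip(1200, 42)     # a hypothetical 20% demand spike

print(baseline)            # wafers in the line at steady state
print(upside - baseline)   # extra wafers that had to start ~6 weeks early
```

The point of the sketch: a leaner channel is cheaper to hold, but any upside demand beyond current WIP can only be served one full cycle time later.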
“What the German authorities want is 2 fully loaded fabs in Dresden. They don't care who really owns it.”
Truer words were never spoken. I said it months ago, and I’ll say it again (maybe I won’t get beaten up this time); it looks like the Germans want to get into the chip business.
They chose, what they thought at the time, the best chip manufacturer to subsidize. (Never EVER sell INTC SHORT) Now that AMD is getting their asses kicked in, the Germans still have a VESTED INTEREST. Since they paid into the deal, they are at the table and they have something to say, simple.
The bottom line is they want a state of the art Fabrication facility in the former East Germany. They don’t give a hoot who runs it, just as long as German citizens are hired to run the place. This means floor sweepers to engineers. The spin-off, and what they’re paying for, by the way, is chip fabricating expertise in that part of the country. (This is not to mention the logistical infrastructure in the area.) This knowledge cannot be taken back, nor can anyone remove the components that make up the facility.
Germany has paid for the privilege of keeping it, and its parts, right where they are. Further, I ask you, if YOU had a couple of billion, would you loan more to AMD in their current financial and technical position? The Germans are done. TSMC is a far better choice for the takeover of the FAB, without AMD and its failing products. After all, TSMC is not limited to the production of a single company’s chips. Now, TSMC can manufacture an entire range of products for many companies in Europe, and other companies worldwide. It’s a win, win scenario for both parties. The loser is AMD, of course.
They need the cash, desperately.
The time has come where the rubber meets the road. AMD’s spin machine has run out of gas, with the Industry and the press. Barcelona leaks like a sieve and throws more heat than my Presler 955ee and has revealed itself to be a total flop. IBM, I’ll say it again, I--B--M can’t get enough chips to qualify its products for sale and god knows what CRAY computer is going to do. AMD is billions in debt and is losing millions daily, with NO PROFIT IN SIGHT!
The Germans will get the FAB expertise, employment benefits, the facility they subsidized, and TSMC will get a (couple?) nice FAB(s) on the cheap. AMD will take the money and run to become a mere shell of what it once was.
We all saw this coming, but I think to all our surprise, it’s actually happening, as we predicted. This is just another chapter in the rise and fall of Advanced Micro Devices, superbly orchestrated by IDIOTS who THREW AWAY 5.4 BILLION on a graphics card company a little over a year ago. This will happen, finances dictate that it does.
By the way, Anonymous out there, do you still think Sen. Chuck Schumer is going to ram the AMD NYS facility down our (NY Taxpayers) throats?
SPARKS
Have you guys ever built a race car? I have. Camshafts with long durations and high lift don’t like idling. Plugs foul, high rate springs put excessive strain on the valve train, especially cam lobes. The rockers will probably break first. Low oil pressure during idling exacerbates this wear, as will low operating temp. Most race motors are line bored and honed at operating temperature (185F, water jackets filled and torque plates fastened); this way the close tolerances will expand to race dimensions creating a nice film of flowing protective oil. Oil viscosity is critical at temperature and rpm. Fact: Start a Big Block Chevy and run it hard and cold and you WILL spin a bearing. Idle it a long time and the plugs will foul and you will never see anything over 3500 RPM without it breaking up. There are a multitude of other sins of low temp, prolonged idling in race motors that are well beyond the scope of this discussion.
However, Wrector’s stupidity goes well beyond his expertise in chip manufacturing. Race cars are Born To Run, not sit idly by. They are built for speed, just like FABs, I suppose.
SPARKS
Orthogonal, your points are well taken, but a couple of issues:
"Then they can adjust wafer starts on the fly to accommodate changes in the market." This has always been the case; the only question is how accurate the market forecasts are.
"There is generally enough WIP (Wafers in Process) and Cycle Times are short enough, that there isn't any real risk of being unable to meet customer demand."
This is not entirely true - typical cycle time for a 65nm/45nm type process is now over 100days (>3 months); throw in sort and packaging and you are talking about 4 months from wafer start to die shipment. Intel, like any other supplier cannot build to order (the AMD claims were hilarious on this) - so there is always some chance of having the wrong mix. In general the demand is met by having a decent forecast and some burst capacity; if you do in fact run real lean the risk of having a bad mix increases (ask AMD about this on the desktop/mobile fronts in H2-06/Q1'07).
"Speaking of fabs and capacity, there has been a lot of talk of Intel currently running "capacity constrained"."
This isn't talk/opinion - Otellini stated this directly in the Q3 earnings call - someone asked about inventory levels and Otellini answered it was lower than what he wants (typically ~1 quarter's worth of CPUs) and that there was a risk of not being able to meet any upside demand. He also specifically stated that he had to turn away some low end business in Q3. Not sure if the earnings call is still available online or not.
"Now, TSMC can manufacture an entire range of products for many companies in Europe, and other companies worldwide. It’s a win, win scenario for both parties."
Not sure if it is win-win, unless TSMC is getting the fab for pennies on the dollar, plus I think the more likely outcome is a lease for the capacity and not an outright purchase.
TSMC physically having the fab in Germany has no material impact on their operations - in fact it probably is more a nuisance in terms of managing it, as they can't use the economies of scale of the infrastructure they likely have in place for their other fabs. Once the wafers are done they will be shipped to Southeast Asia for assembly and packaging, so wafer production in Germany is frankly irrelevant. The only "win" for TSMC is if they get this on the cheap and/or they get more AMD foundry business as a secondary effect of having built up goodwill with AMD. The other thing to keep in mind is that TSMC uses a different process flow for 65nm than AMD, so it's not like the fab is a "drop in": there might be tool differences, there will be automation differences and other logistical issues (do they hire the AMD fab folk as contractors? do they transfer to TSMC? Are there intellectual property issues?)
"By the way, Anonymous out there, do you still think Sen. Chuck Schumer is going to ram the AMD NYS facility down our (NY Taxpayers) throats?"
I hope not but he certainly tried!
Quite honestly, having lived in the state for 20 years (but no longer), and given the almost absolute election security Democratic senators have in the state, they are pretty much free to do what they want without reprisal. In fact if you have a good name and are a democrat looking to run for President, it is convenient to temporarily move to the state, get elected, do nothing for the state for a term or two and then run for president (not that I'm directly referring to anyone in particular).
But I digress... I would look for something else to replace AMD though... with the Albany tech center up there now, NY is desperately trying to lure in high tech - there is nothing wrong with this concept, but someone needs to ask at what cost and how much should the subsidy be - the better way to do it would be as an income tax break rather than upfront payments, so you ensure continued operation and give the company an incentive to do well.
Abinstein is such an idiot- much book learning, little practical experience.
He keeps piping on about things he reads about subthreshold swing, etc. to try to support his claims of the extreme importance of reporting the temperature while measuring TDP - he even pulls out an impressive looking graph showing subthreshold swing at different temps...
The problem - the graph compared 300K to 200K to 100K... that's right, 100-degree differences! Sure the temps weren't reported in the various TDP measurements, but who here thinks the delta may have been, say, less than 100 degrees?
It's the difference between book knowledge and applied knowledge - a good theoretical argument, a useless practical one!
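For what it's worth, the temperature scaling being argued about is easy to quantify: subthreshold swing is linear in absolute temperature, SS = n·(kT/q)·ln(10). A rough sketch (the body factor n = 1.3 is an assumed value, not from any datasheet) shows why 100 K steps overstate the effect of a realistic operating-temperature delta:

```python
import math

K_OVER_Q = 8.617e-5  # Boltzmann constant over electron charge, in V/K

def subthreshold_swing(temp_k: float, n: float = 1.3) -> float:
    """Subthreshold swing in mV/decade; n is an assumed body factor."""
    return n * K_OVER_Q * temp_k * math.log(10) * 1000.0

# Linear in T: the 300K -> 100K steps on the cited graph cut the swing
# by a factor of 3, but a realistic ~30K operating delta moves it by
# only about 10%.
for t in (300, 200, 100):
    print(t, round(subthreshold_swing(t), 1))
```

So the theory is sound, but at the temperature deltas that matter for a TDP measurement the effect is an order of magnitude smaller than the graph suggests.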
This is not entirely true - typical cycle time for a 65nm/45nm type process is now over 100days (>3 months); throw in sort and packaging and you are talking about 4 months from wafer start to die shipment.
Just an FYI. 65nm cycle times are currently ~6 weeks. 45nm is a little longer due to more metrology steps and lower skip rates while the process is still maturing but it will continue to get better.
Also, you are right about the inventories being lean, I never denied that, what I meant was that it is only in particular segments that are at risk of missing customer commits. Thus the reason for factory priorities etc...
Sure there's risk to missing upside demand, but at the same time, if the both Intel and AMD are selling everything they can make, this scenario can only be positive and help raise ASP's.
"Just an FYI. 65nm cycle times are currently ~6 weeks."
So, 42 days from Si start to wafer out of the fab? And this is a standard production lot, not a priority lot or hot box correct? 42 days doesn't quite seem right.
This flies in the face of all Sematech benchmarks on things like DML (days per mask layer) - with over 40 lithography steps this would be DML on the order of ~1 (world class is in the 1.2-1.3 range and generally that is only done under certain circumstances, not normal high volume production). And in terms of DML, generally speaking Intel is in the middle of the pack as they are far more conservative on things like metrology and gating monitors (and skip rates) and sacrifice a bit of cycle time for yield and process window.
Are you sure you are quoting cycle time the same as me? (total time from wafer start to wafer out of fab) I'm wondering if you are quoting a priority lot.
Here is some supporting info on DML, from the ITRS roadmap (page 4):
http://www.itrs.net/Links/2006Update/FinalToPost/10_Factory_2006Update.pdf
the target is in the 1.4-1.5 range (make sure to look at production lot row, not priority lots). You'll also notice they claim no manufacturable solutions known down in the 1.2 DML area.
The 42 days of cycle time you quoted would put DML somewhere below 1.0... I don't have an exact count of mask steps but it is north of 40 - there's 2 per each metal layer alone, pretty much one for every implant step (over 15...off the top of my head 2 VT adjust for NMOS, 2 Vt for PMOS, 2 well, 2 tip, 2 compensation, 2 halo, 2 source/drain), and then a bunch of etch steps in the front end. A couple more for the replacement flow for the highK/metal gate, etc...
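The DML arithmetic above is simple enough to check directly; the mask-layer counts used here are the commenter's rough estimates, not published figures:

```python
def dml(cycle_time_days: float, mask_layers: int) -> float:
    """Days per mask layer: fab cycle time divided by lithography layer count."""
    return cycle_time_days / mask_layers

# A 42-day cycle time over 40+ layers lands at or below ~1.0 DML,
# versus the ITRS production-lot target of roughly 1.4-1.5.
print(round(dml(42, 40), 2))    # ~1.05 at exactly 40 layers
print(round(dml(42, 45), 2))    # below 1.0 once layers exceed 42
print(round(dml(100, 40), 2))   # the ">100 days" figure per 40 layers
```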
...Intel, like any other supplier cannot build to order (the AMD claims were hilarious on this) - so there is always some chance of having the wrong mix.
Agreed. In fact AMD & Intel (and most semis) employ the finish-to-order method for meeting customer demand. At this stage, the processors are already past test and waiting to be laser marked. Of course the aim of this is to be more responsive to customer needs rather than to lessen the impact of having the wrong mix.
Intel did some changes with the MCM approach but FTO still remained within the confines of assembly/test.
I love this one:
"2.) Intel should be releasing faster 3.16Ghz chips soon. AMD is probably going to be limited to 2.6Ghz this year."
Intel HAS RELEASED these chips, and at this point the fastest K10 chip is 1.9GHz Barcy (you can possibly argue the 2.0GHz vaporware release)
The Phenoms have yet to be released (but that box sure does look purdy!), and it appears as though the fastest at the initial launch will be 2.3GHz, though AMD will be pricing out 2.4GHz to trick people into thinking this actually exists. If this were a store, some might construe this as bait and switch and could be considered illegal. (Oh you want a 2.4Ghz, well we seem to be out of those right now, but I have a nice 2.3GHz to sell you....)
I have yet to see a single roadmap from AMD showing 2.6GHz by end of year... has anyone? They might do it but given that it is now THE MIDDLE OF NOVEMBER, you'd think if they were going to release anything faster than 2.3 by end of year, they'd have announced it by now. My best guess is they are waiting for the Phenoms to come out of the line and are hoping they can sort a few at higher clocks and claim some sort of PR victory. But put it this way - if AMD was planning ANY VOLUME of 2.6GHz chips, you'd think they would have released that info by now with just 6 weeks left in the year. You'd also think their partners would be aware of this and pricing details would have leaked (again it is Nov 13 and there is not much time left in the calendar year).
I predict a dubious, 65nm-like, we have 'shipped' 2.6GHz chips by end of year claim, conveniently omitting details on volume or actual retail availability. AMD fans, forgetting original estimates of 2.6 AT LAUNCH (mid-year) and their boundless optimism on the INQ report of a magical 3.0GHz sample earlier this year, will claim success and that AMD is back and now executing to plan!
AMD will call it a 2.6GHz "introduction" (remember the K10 Q2 introduction? A press release on the last Friday of the quarter) - this way they don't get drilled on the "we only do hard launches" mantra again.
Robo - I think it is time to get that prediction blog prepared, some suggestions:
1) K10 2.6GHz in "volume" in March (maybe some dribbling out in Feb)
2) 2.8 by end of 2008 (possibly 3.0Ghz)
3) 45nm SAMPLING mid-year, initial release toward end of Q3/early Q4 with much lower clocks than top 65nm bin (~2.4GHz top bin) With slick use of words, AMD claims manufacturing was indeed "ramped" in H1'08, yields are "as expected", process maturity is "as expected".
4) F38 will not have any appreciable 300mm wafer outs (in direct contrast to Dementia's claim of initial 300mm outs in Q1'08)
5) AMD will come close to break even in Q1 (operating net, that is), Q2 will worsen and AMD may get back to black in Q4'08
6) No mention of fusion, bulldozer 2009 schedule narrowed down to H2'09.
7) MCM finally released (Q4'08); AMD claims "it is the right time, and this is what the market wanted". When asked about it being an inelegant solution, AMD says it's about bottom line performance, customers don't buy "nm" or "native" designs - this done as the PR spinmeister swallows hard to keep the vomit in his mouth from coming out.
8) AMD announces high K will be a 32nm introduction, bare Si being considered for 22nm node, and that it will be more than a 2 year span to go from 45nm to 32nm.
9) Hector 'resigns' or 'retires' from AMD (it is widely reported that he left on 'amicable terms' and this transition had been planned for some time). He claims success in starting to break the Intel "monopoly" but more work is needed to give customers "choice". He leaves with a ridiculous fat severance, ummm, I mean 'retirement' package. Not sure who replaces him - maybe Dirk?
10) NY fab plans officially scrapped.
Orthogonal said...
This is not entirely true - typical cycle time for a 65nm/45nm type process is now over 100days (>3 months); throw in sort and packaging and you are talking about 4 months from wafer start to die shipment.
Just an FYI. 65nm cycle times are currently ~6 weeks. 45nm is a little longer due to more metrology steps and lower skip rates while the process is still maturing but it will continue to get better.
are you all sure that the discussed information is not orange or red colored? ;)
Anonymous said...
Abinstein ...- much book learning, little practical experience.
He keeps piping on about things he reads about subthreshold swing, etc to try to back support his claims of the extreme importance of reporting the temperature while measuring TDP - ...It's the difference between book knowledge and applied knowledge - a good theoretical argument, a useless practical one!
He surely has no manufacturing chip testing experience. A chip is spec'ed to have a working temperature range (0 to 70C or -40 to 100C). If there is a TDP test, surely the passing unit meets the TDP requirement across that range. Anything outside of the range is not per spec.
This is Scientia's list of "troll" words of the Intel fannies
It isn't that difficult to see lex's milder trolling. Here is a list of words associated with each:
Intel - lead, better, performance, advantage, first, better, leadership, lead, better, leadership, leader, ahead, superior, superior, competent, superior, untouchable leadership, successful, leagues ahead, earlier, earlier, advantage, ahead, maximize, optimize, superior, superior, first, performance, leadership, further ahead, advantages, focused, power/performance, healthy, well.
AMD - lagged, weakness, handicapped, inferior, late, behind, hampered, noise, propaganda, late, propaganda, fanbois, behind, behind, behind, spinning, behind, boring, late, inferior, problem, penalty, late, behind, ills, lagging, disaster, slow, handicapped, inferior, loses, mistake, dumb, grandiose, wasted, failed, finished, BK, loses, bad, broken.
LOL
Basically he said that unless you praise AMD and bash Intel you are trolling and your posts will likely get removed. How nice of him :)
wow, just wow @ the list of words.
=)
BTW, did you know that whatever benchmarks that will be coming out for the Phenom against the Core2 next week will be BIASED unless the writer mentions "native quad core" (minimum at least once every paragraph) and "elegant design" with AMD.
For example, "we know that the Phenom lags behind the Core2 by 25% in clock speed and also 15% in IPC, but don't forget it is a native quad core and a more elegant design!"
of course, even the quoted sentence above is deemed trolling by Scientia's standards since it used the word "behind" when describing AMD.
ATI bottles Spider launch with canned benchmarks
"Since time is short... we have pulled together the most appropriate benchmarks that allow you to fully test the overall performance of the platform. All benchmarks will be provided to you on site so you will not need to bring anything with you."
Which, we suppose, is very kind of ATI - why make hardware journalists actually do a job when you can provide the canned benchmarks for them?
The tests that DAAMIT has authorised journalists to use are:
PCMark – Vantage
SYSMark’07 – Preview
Windows-Send2CompressedFolder – Vista32-Ultimate
WinRAR (rar-best) – 3.7
iTunes (wav2aac) – 7.4.3
MovieMaker (jpg, EuroVacation) - Vista32-Ultimate
Nero-8 Recode (avi/mpg2, portableAVC) – 8.1.1.0
Nero-8 Recode + Showtime (avi/mpg2, portableAVC) – 8.1.1.0
POV-Ray RTR – 3.7-beta22
3dMark’06 (CPU) – 1.1.0
3dMark’06 (hardware + software) – 1.1.0
Call-of-Juarez (DX-10)
None of which, we're pretty sure, are on the benchmark list of any of the top-tier enthusiast sites. I mean, 3DMark, seriously?
These settings and restricted benchmarks should ensure that every website comes back with the same scores and results, which we're sure ATI has run a million times over to obtain their own results for. We're absolutely sure that journalists at the event will be able to "see for themselves" the benefits of Spider... through red-tinted goggles.
ATI very generously goes on to mention that, "You will be able to test anything you like... in your own labs, when we send you the parts post-launch".
'ATI very generously goes on to mention that, "You will be able to test anything you like... in your own labs, when we send you the parts post-launch".'
Wow, AMD has gotten really desperate - remember the outrage they had over benchmarking not more than 2 years ago... look at how things have changed.
Barcy - samples given to reviewers on the Friday before a Monday launch, they are also asked to sign an NDA if they wanted an early sample.
Phenom - apparently NO BENCHMARKING by reviewers prior to launch. People are supposed to buy these on faith that they'll be good? Or perhaps yet another paper launch is planned such that no one will be able to buy one anyway. By the time they are available, the sites will finally have a review?
AMD apparently has assimilated the movie industry model. The movies that are not allowed to be pre-screened by reviewers... do those generally tend to exceed expectations? (I forget.)
AMD has resorted to a 90sec trailer (cherrypicked benchmarks), in hopes of luring people in for that initial weekend, before word gets out (i.e. more complete and thorough benchmarks) and the movie is panned. Unfortunately this model sometimes works in the movie biz because the revenue is so frontloaded on the first 2-3 weeks of release; with chips AMD will have to live with it for several years! I guess their collaboration with Lucas Arts goes beyond computers and incorporates how to market a box office bomb.
From above
"Wow, AMD has gotten really desperate - remember the outrage they had over benchmarking not more than 2 years ago... look at how things have changed."
Also, it is interesting that AMD has chosen to put the ATI face in front, tout a platform rather than a Phenom launch, and the benchmark list is somewhat obscured...
Based on the initial 3870 reviews, they chose the only game that the card can win in...
http://allyoucanupload.webshots.com/v/2005524076639585514
That data is from the tweaktown review for the 3870.
AMD is trying to divert attention away from any one component, obviously.
"AMD is trying to divert attention away from any one component, obviously."
Wait the spider launch is Phenom? :)
If the press does not ABSOLUTELY 'hammer' (pun intended for the old timers) AMD for this then you know the press has lost all sense of objectivity and is clearly vested in propping up AMD.
By the way good observation on the use of ATI on this platform - don't want to sully that pristine AMD brand image :) I'm just surprised that the launch is not on a Friday evening so it could be a complete stealth launch.
No time to discuss further, I have some serious Nero re-coding to do, prior to playing Call of Juarez - I just wish these applications I use all of the time weren't so damn slow on my C2D - if only there was a platform that was optimized for this!
So let's see if I can do this right.
When comparing benchmarks between AMD and Intel, the AMD processor came in a respectable second, while the Intel processor came in second to last. In order to achieve these results the Intel processor lagged behind the AMD processor, only consuming 89W.
AMD's forthcoming Shanghai processor will defy all the laws of semiconductor physics and vault AMD from second place to first place while only consuming 190W due to AMD's superior process tech on the yet to be released 45nm process. This stunning performance will be achieved with the use of immersion lithography and SOI (both of which Intel is behind on) and without the need to resort to exotic gate materials.
When Nehalem is released with IMC and quickpath, Intel's last remaining advantages will be gone. It will be forced to compete on a level playing field against Shanghai and all of Intel's process flaws will be revealed for the world to see.
Did I miss anything?
"Did I miss anything?"
I think you got it wrong...
'When comparing benchmarks between AMD and Intel, the AMD processor was able to take one of the top 2 slots'
FANBOY!
Advanced Micro Devices files $700 mln stock shelf
http://www.reuters.com/article/marketsNews/idUKN1533368720071115?rpc=44
Those purposes could include reducing outstanding indebtedness, increasing working capital, acquisitions and capital expenditures, the filing said.
A couple of interesting articles on Intel's High-k Metal Gate technology.
The first is here.
It may surprise everyone to learn that AMD's gate tech on 65nm is no better than Intel's. It doesn't sound like it is any worse either. But it sounds like there may be some real issues at 45nm. Is anyone surprised?
Some more interesting reading on High-k Metal gate can be found here.
Meanwhile, Barcelona's travails continue, if this report from Bloomberg is to be believed. It is a good thing AMD only does hard launches or their credibility would start to suffer.
Hmmm.... from our friends at the INQ (via Financial times so this is not an INQ speculation):
"Abu Dhabi is set to take a 9 per cent stake in Advanced Micro Devices, the US chipmaker, in a deal that underlines the purchasing power of sovereign wealth funds across the globe."
"The acquisition would go through the Mubadala Development Company which is a front for the Government of the Emirate of Abu Dhabi, which in the United Arab Emirates"
Well I guess we now know where the $700mil stock raising was going. It'll be interesting to see if the US gov't allows this to go through.
I expect we'll see an article by Sharikou about this shortly!
here it is from AMD's own website:
http://www.amd.com/us-en/Corporate/VirtualPressRoom/0,,51_104_543~122154,00.html
an 8.1% stake in AMD.
"AMD has said on the record that Phenom FX will be the first unlocked Phenom and that it will start shipping at 2.6GHz. This part is scheduled for Q1 2008 and there will also be a version for Quad FX systems with eight cores.
Until then we will be stuck with slower CPUs and 3GHz is not even in the map."
Source: Fudzilla (so take with a grain of salt).
You have to ask why they would unlock the chip early on - generally this is done on the highest bin part (so you can command a premium, like Intel does) or late in a lifecycle to drum up sales (ala the X2 5000 Black Edition). Doing it so early in the lifecycle can be interpreted by some (i.e. me) to mean AMD is not looking at substantially higher clocks in the near future beyond its release, and AMD is hoping enthusiasts will OC this to get it closer performance-wise to Intel chips (but seemingly forget to overclock the Intel chips?)
Hmmm... who was saying that AMD will likely be limited to 2.6GHz THIS year and that 3.0GHz was probably going to come in Q1'08 - you know, because AMD demo'd it 6 months prior, and AMD doesn't cherrypick chips for demos! (of course no details were given on the demo, so it could have simply been an OC'd chip)
Shockingly enough, it appears Scientia is wrong AGAIN and actually had no support behind his statements other than his typical EMPIRICAL observations ("generally speaking, a chip will come out in production 6 months after demo"). This is what happens when you lack knowledge of what is going on and try to form conclusions from empirical observations. (Hey, stock XYZ has gone up in Dec every year for the last 5 years, thus it should go up this year, right? What if the CEO has changed, their products are different, or other underlying conditions are different?) I hope Scientia doesn't apply his logic and reasoning skills to the stock market.
2.4 max this year (and in near paper-launch supplies), likely 2.6 max in Q1, and in all likelihood 2.8GHz in the late Q2/Q3-ish timeframe. 3.0GHz? Recall it was never on AMD's roadmaps in the early stage...
I'm sure Scientia will spin this some way positive for AMD; some possible explanations/FUD:
- they are accelerating 45nm so they don't feel the need to work that hard on clock on 65nm
- K10 is designed for 45nm so it is expected that it will perform worse on 65nm
- AMD is choosing (emphasis on choosing, as if they had a choice) to work on yields, or maybe he'll say power, instead of clock.
Don't forget, the lower the yields are for the quad-core, the higher the yields are for the tri-core.
By correlation,
AMD's delay of their tri-core can only mean one thing: they are having excellent yields on their quad-cores!
See, win-win situation again for AMD.
;)
Scientia's used to eating crow. He's so consistently wrong that I'm beginning to automatically assume the exact opposite of everything he predicts will happen. I've seen more wrong predictions there than at a psychics' convention.
Ah, I don’t know if anyone is aware that AMD has just sold 8.1 percent of the company to a Middle Eastern country. Hmmm, I wonder, what will the State Department, and the Committee on Foreign Investment in the U.S. think about that. I believe AMD has a few U.S. military customers.
AMD is in dire straits, make no mistake.
http://news.moneycentral.msn.com/ticker/article.aspx?Feed=AP&Date=20071116&ID=7828571&Symbol=INTC
What da ya think, DOC?
SPARKS
Anonymous, out there, see, they couldn't get the money from N.Y. State Tax Payers. (Thank God)
We were going to give away well over a billion! READ: Schumer's Folly
SPARKS
"Anonymous, out there, see, they couldn't get the money from N.Y. State Tax Payers."
I agree it was a folly but the issue wasn't getting the money; AMD just doesn't have enough of their own to go through with the build.
Consider yourself lucky that the offer wasn't to someone like Intel or some other company that knows how to plan and execute. And be wary, as I think you will find other high tech companies knocking on Schumer's door for a handout.
Scientia just doesn't get it:
"What I said was that I have seen other people claim that Intel got ahead by instituting RDR. The Prescott design was clearly bad so presumably these changes would have had to have been after Prescott."
Again, with statements like these he CLEARLY has no understanding of what RDRs are and how they are used. Prescott's issues were an ARCHITECTURE ISSUE! How can people not understand this? The architecture was built to run at high clock speeds; look no further than the number of stages in the pipeline. Its success or failure had nothing to do with the use of RDRs!
RDRs are put in place to make things more manufacturable - they widen the process window on many steps (meaning fewer out-of-control events, less potential scrap, and less chance of poor binsplits). They can also lead to a faster development cycle, as it should theoretically be easier to dial in the various process steps and ensure a design is not severely impacted by process/architecture interaction marginalities (see K10 time to market and binsplits).
For Scientia to say Prescott was poor, therefore RDRs must have come after it, is just plain ignorant.
Oh, and here's a little secret for Scientia - many (most?) RDRs are not DESIGN SPECIFIC, they are process-node specific! Things like min/max metal densities apply to all products on a given node! I think he got confused because the acronym contains the word "design". It doesn't mean a set of rules for each design - it means the boundaries a design has to live within for a given process node! But what's the point; it's not like he will be interested in hearing any of this, as it does not support his conclusion and misconceived perception of what RDRs are.
I just find it amazing how he can write a whole (long) blog article on something he clearly has so little understanding of. It's then funny how, after hundreds of comments, virtually every one of his assertions is exposed as incorrect and unsupported. Of course, the reason it takes so long is that it is up to the readers to prove the Great Dementia wrong; he does not have to back up a single thing he says (it is by default correct).
"Well, if Intel ran off GPU's in the same way they run off CPU's then they would have the fastest GPU's through, say, mid 2009. However, I'm not sure that they could hit the same price point for the majority of the cards."
Yeah, given that Intel would be producing GPU chips on the 45nm node while AMD and Nvidia would be using 55nm or 65nm processes, clearly Intel would have a tough time on the cost side of things, with those smaller die sizes (?) Also I hear that it is cheaper to use a foundry to produce chips for you - they generally do this for free and simply pass on only the wafer cost, correct? It's not like the foundries are in business to make money off of producing wafers for other people!?!?
And Intel will have to incur all of those development costs for 45nm - oh wait, they already incur those for CPUs, and there really is no added cost in this area (save the reticles/masks)?
I hear it is also faster to use foundries for development than relying on a process in house which you can optimize and adjust for yields/binsplits as opposed to having to port your design over to a foundry, understanding their specific lot files and then trying to get them to optimize a process for you.
Does he really believe the stuff he says? Does he really think people will believe this crap? I understand some mistakes as he doesn't work in the area but some of the things he says are just so, well frankly, stupid that he must know they are not right?
"Also unlikely. AMD will need to get its processes in order by mid 2009 (if it does high K on 45nm) or 32nm at the latest. AMD absolutely cannot wait until 22nm."
???? Well he's right about AMD not being able to wait until 22nm, but mid-2009 on 45nm? Unfortunately for AMD, 45nm is what it is for them; remember, they are "ramping it" (and I use this term rather loosely) in H1'08, which theoretically starts in about 6 weeks.
45nm is going to be pretty much a "dumb shrink" (to use a description of product shrinks) of 65nm - the major change is feature size reduction, and I'm unaware of any major front-end process changes which would lead to significant transistor performance gains. As such you will see some power gains, but little to no clock performance gains. The only way AMD would be able to squeeze out some performance is if they give back the power gains.
Just look at 90nm to 65nm (and that had actual process improvements and changes beyond the litho shrink!). If people are thinking 45nm will be magical for AMD, they will soon (soon may not be the right word?) be in for a harsh dose of reality. Remember the optimism for 65nm - AMD claiming 40% better, and there were some fairly significant process improvements (selective SiGe, NiSi) - and that's given what performance gains on 65nm K8? AMD has publicly stated they are targeting 20% improvement for the 45nm node transition. So take the gains they got from 90nm to 65nm, cut that in half, and now you see my pessimism for the 45nm products.
I would expect similar binning issues (45nm products will start below 65nm) and a slow grind to get 45nm on par with 65nm - my best guess would be about a year after initial product launch (which would put it out until about mid/late 2009). AMD will continue their strategy of high performance parts on the "old" node and the mid/low range parts on the new node - this is a byproduct of CTI where the transistor performance is essentially the same on the 2 nodes at the transition point (however one process node is a heck of a lot more stable than the other!)
AMD will not implement high K/metal gate in 45nm (or will do so only in a VERY limited case like a specific product type so they get learning for 32nm) - there are far too many issues: the process integration changes, brand new tooling is required and there will likely need to be changes to the design (which means all new masks and tapeouts for a limited time in manufacturing on 45nm before switching everything again to 32nm). This is not the type of change you just decide to put in as a CTI step, nor is it one you want to amortize over 6-12 months.
AMD's only hope is to wade through the muck of 45nm as quickly as they can - they will still enjoy the economic benefits of shrinking, so they can continue the slash-and-burn price strategy (and hope there are more stock offerings?). They then have to hope 32nm is flawless and delivers a quantum leap over 45nm so that it significantly closes the gap with Intel on the performance side. Of course, at this point Intel will be well into 32nm and on their way to 22nm, and they'll also have 2 generations of yield and process learnings on high K.
Unfortunately, I think the only way AMD catches up on the process side (or gets close) is if Intel seriously missteps. The only time I see this happening in the near future is when Intel makes the switch to tri-gate. I don't know when this is; it could be 32nm (unlikely) or 22nm (more likely). Also, AMD is completely reliant on IBM for the technology now, so I don't know if there is much they can do to speed up IBM's schedule. Heck, the major blocks of 32nm are probably already being set for IBM. To put things in perspective, Intel's 32nm process architecture is likely more or less done (on the high-level integration side) - they already have it up and running in D1D and at this point are tuning and shifting focus to manufacturability and yields.
Similarly IBM already has likely made most major decisions on their 32nm process and are now focused on integration.
GURU,
65nm
45nm
32nm
Is it true that some of the layers are only 5 Atoms thick(!)? Can AMD make anything that works at 5 Atoms thick?
SPARKS
It just keeps getting better (Scientia's Si process understanding is pure gold):
"However, I'm somewhat puzzled because if Atomic Layer Deposition is a problem then presumably AMD could have obtained Atomic Vapor Deposition technology from Aixtron."
What he doesn't understand is that these are essentially the same technology - the only real difference is that the Aixtron tool "sprays" (atomizes) the liquid precursor (kind of like fuel injection in a car). ALD tools typically use a conventional bubbler-like approach (passing a carrier gas through or over the source). In either case, once the gas reaches the deposition chamber the deposition is pretty much the same, and both processes use a layer-by-layer approach with traditional alternating pulse/purge routines.
Perhaps he is puzzled because he doesn't understand the technology.
"AMD was talking about Performance of Nitrided Hf Silicate High k Gate Dielectrics back in 2003.
And, so was IBM: Growth and Characterization of Al2O3:HfO2 Nanolaminate Films Deposited by Atomic Layer Deposition (ALD)."
Intel was working on high K as early as 2001 IN HOUSE! (not to mention external research prior to then). What Dementia doesn't understand is that Intel doesn't typically publish sensitive process material (or only does so close to when it is implemented). Simply digging up some old papers and assuming there is a clear timeline between publishing a RESEARCH paper and putting something into production shows his lack of understanding. Also, the papers quoted show only very basic high K data; what he wouldn't notice is the lack of inverted capacitor data and/or clean transistor data in those papers.
The AMD paper is primitive - for one thing it is still using poly-Si gates, it is using MOCVD for deposition, the electrical thicknesses are quite thick for capacitor data, and the use of Quantox data to obtain Dit and Vfb is a bit questionable... other than that they are REALLY REALLY CLOSE! The IBM paper is similarly limited to only capacitor data; it also uses some Al2O3 in the films, which simplifies things significantly in terms of stability (actually, DRAM manufacturers implemented Al2O3 high K films long before Intel). However, I would be a bit surprised if the IBM solution for 45/32nm had Al2O3 in it (its K value is significantly lower than HfO2's, so it would significantly limit the benefit of the high K in the first place).
The trick with high K/metal gate is getting the transistor working and on target in terms of Vfb and getting the right Vt's. The capacitor data is rather trivial, as it has none of the actual process integration that a full Si process flow would have (it's basically substrate, clean, deposit high K, deposit gate, etch, and measure). One of these capacitor samples can literally be processed start to finish in a matter of a few days.
It ignores a lot of the implants, anneals, and careful etches that are needed for a full transistor flow, which also impact the performance of the gate oxide.
Again, this level of understanding is lost on Dementia: he sees a paper from 2003, doesn't understand its technical level at all (but is trying to show off to his minions that he knows what he's talking about), and then says "I'm confused why Intel is ahead..."
"I'm still a bit puzzled exactly how Intel got ahead in this area."
Classic - as if he understands anything about the area... other than Intel starting earlier, likely putting more resources on it, spending more money in terms of Si and tooling, and having a research and development team second to none, I can't understand it either - it just doesn't make any sense! But if they didn't publish any papers they couldn't have been working on it could they? I mean that's what companies are supposed to do right? Publish papers on research?
"For Scientia to say Prescott was poor, therefore RDRs must have come after it, is just plain ignorant."
ROTFLMAO -- Bravo!!!! But you must admit, Scientia's ramblings on such technical things make for a great deal of humor.
"Does he really believe the stuff he says? Does he really think people will believe this crap?"
Unfortunately, he does think he is an authority on the subject, and he states with such conviction that he convinces the ranks of AMDzone that he is some sort of God. So yes, many believe his antics.
"I understand some mistakes as he doesn't work in the area but some of the things he says are just so, well frankly, stupid that he must know they are not right?"
I don't think he does (know his rubbish is not right).... it is kinda sad really.
"Just look at 90nm to 65nm (and that had actual process improvements and changes beyond the litho shrink!). If people are thinking 45nm will be magical for AMD, they will soon (soon may not be the right word?) be in for a harsh dose of reality. Remember the optimism for 65nm - AMD claiming 40% better, and there were some fairly significant process improvements (selective SiGe, NiSi) - and that's given what performance gains on 65nm K8? AMD has publicly stated they are targeting 20% improvement for the 45nm node transition. So take the gains they got from 90nm to 65nm, cut that in half, and now you see my pessimism for the 45nm products."
I tend to agree... in fact, 45 nm will likely be a repeat of the 65 nm introduction, i.e. it will underperform the prior node. While at mid-bin the power/thermals may improve, the drive currents and Vts will be lacking and that adds to slower overall clocking.
People banking on a huge leap forward at 45 nm for AMD will most likely be disappointed.
"Is it true that some of the layers are only 5 Atoms thick(!)? Can AMD make anything that works at 5 Atoms thick?"
Not really - some of the 65nm gate oxide films come close, but this is more of an over-simplification for some good headlines. Take a 12A-thick film, divide by 3-4 A per atom, and you have something that is 3 or 4 atoms thick, right? But if you put a bunch of ping pong balls in a box, the balls would not stack directly on top of each other; some would slide into the recess between two balls on the previous layer. Thus the atomic layers can be thought of as overlapping a bit, so you can't simply take the thickness and divide it by the diameter of an atom.
With high K, the electrical thickness makes the film behave like a 5-atom-thick (or so) SiO2 film; however, it is physically thicker because of its higher K value. These films may be anywhere from 10-20 layers thick (depending on the material used and the electrical thickness targeted).
Of course, at these thicknesses you are looking at a TEM (transmission electron microscope) image, and between sample prep and who is eyeballing it, the physical thicknesses become a little meaningless (in my opinion). What matters anyway is how they behave electrically.
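To put the electrical-vs-physical thickness point above in rough numbers, here's a minimal sketch. The K values, film thickness, and atomic diameter are illustrative assumptions, not any company's actual process parameters:

```python
import math

K_SIO2 = 3.9   # relative permittivity of SiO2
K_HFO2 = 20.0  # rough value often quoted for HfO2-based films (assumption)

def eot(physical_thickness_a, k_high):
    """Equivalent oxide thickness (A): how thin an SiO2 film would have to
    be to match the capacitance of this high-K film."""
    return physical_thickness_a * K_SIO2 / k_high

# A 30 A physical HfO2 film behaves electrically like ~6 A of SiO2:
print(f"EOT of a 30 A HfO2 film: {eot(30.0, K_HFO2):.1f} A")

# Counting 'atomic layers' by dividing thickness by atomic diameter also
# overestimates the spacing: in an ideal close-packed stack, adjacent
# layers sit only sqrt(2/3) of a diameter apart (the ping-pong-ball effect).
ATOM_DIAMETER_A = 3.5  # rough, material dependent (assumption)
layer_spacing_a = ATOM_DIAMETER_A * math.sqrt(2.0 / 3.0)
print(f"A 30 A film is roughly {30.0 / layer_spacing_a:.0f} atomic layers")
```

So a film that reads as "5 atoms of SiO2" electrically can be a physically much thicker, more robust stack - which is the whole point of going high K.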
Sparks,
"Is it true that some of the layers are only 5 Atoms thick(!)? Can AMD make anything that works at 5 Atoms thick?"
Right now, for Brisbane, AMD's oxide thickness is 1.2 to 1.3 nm - that is 12 to 13 angstroms, or about 5 to 7 atomic layers thick - so yeah, they can make it. Growth of SiO2 is so well characterized and so easily controllable that thicknesses of this magnitude are common; while not trivial, it is achievable.
The key is replacing this well characterized process with one that introduces many new variables, i.e. high-K. Regardless of the type of material - whether Hf-based, zirconium, or tantalum-based - the complexity of introducing such radically different materials as the key component of the gate is daunting. Probably the most important issue is the interface between the silicon of the channel and the new material.
Interface properties have a huge and dramatic influence on the electrical behavior of the device; the most commonly reported problem is charge traps at the interface. Think of it this way: when two different materials meet, ideally one wants all atoms at the interface bonded, to a) make good adhesive contact and b) minimize 'dangling bonds', or unsatisfied atomic orbitals. Item B is the critical part, because as current flows, electrons can get trapped at these 'unsatiated' sites and charge up the interface; that charging has the effect of increasing the threshold voltage, which is bad for performance.
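The trap-charging effect described above can be put in rough numbers with the usual parallel-plate relation, delta_Vt = Q_trap / C_ox. The trap density and oxide values below are illustrative assumptions, not measured data for any real process:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m
Q_E = 1.602e-19   # elementary charge, C

def vt_shift(trap_density_per_cm2, k, t_ox_nm):
    """Threshold-voltage shift (V) caused by a sheet of trapped interface
    charge, using delta_Vt = Q_trap / C_ox for a parallel-plate oxide."""
    c_ox = EPS0 * k / (t_ox_nm * 1e-9)         # oxide capacitance, F/m^2
    q_trap = trap_density_per_cm2 * 1e4 * Q_E  # trapped charge, C/m^2
    return q_trap / c_ox

# An assumed 1e12 traps/cm^2 over a 1.2 nm SiO2 gate gives a shift of
# tens of mV - already a noticeable bite out of a modern Vt budget.
print(f"delta Vt ~ {vt_shift(1e12, 3.9, 1.2) * 1000:.0f} mV")
```

This is why the interface quality matters so much: trap densities that sound tiny in absolute terms still move the threshold voltage by a meaningful amount.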
It remains to be seen. If you look at the recent EETimes report on Intel's 45 nm process and the leaked IBM PDF from The Register on their high-K process, the differences are like night and day... my concern?? If Intel has already tried what IBM is doing and determined it did not work, well, IBM (and consequently AMD) are much farther behind the high-K curve than I suspect.
Jack
Last one on Scientia, I just can't resist (sorry for deluging this blog with this stuff)
"I'm still a bit puzzled exactly how Intel got ahead in this area."
(followed by 2 papers written in 2003 by AMD and Intel)
http://cws.huginonline.com/A/132090/PR/200105/823171_5.html
In May 2001, this company (ASMI) was celebrating their ONE YEAR ANNIVERSARY of shipping ALD tools and "and can count seven of the world's top twelve largest IC makers as its ALCVD customers" (FYI, ALCVD = ALD = AVD)
Any guesses on whether Intel may have been one of those customers? (I'll also remind folks AMD was nowhere near the Top 12 back in 2001 and was not licensing technology from IBM back then either).
So while Scientia can't understand how Intel got a lead in this area and linked some papers from 2003 to show how early AMD and IBM were working on this, he fails to realize that development ALD tools were shipping as early as 2000 to some folks... a full 2-3 years earlier than these papers he linked.
And then consider whether these folks would have ordered fab tools without having already done some research on the films even prior to then?
The level of ignorance on that blog just gets more amazing!
"Interface properties have huge and dramatic influence of the electrical behavior of the device, most commonly reported are charge traps at the interface."
Ahhh.... someone with some real background. Most of the integration schemes rely on an ultrathin chemical oxide (left over from the surface clean process) to keep the interface looking more like a layer or two of SiO2 (so you don't have the issues you referenced). The other approach is to go the hafnium silicate (HfSiOx) route, which has a lower K value but tends to have a better interface than HfO2 directly on Si.
Also, it is difficult to initiate ALD growth on a pure Si surface (typically after an HF-last etch process), so a very thin oxide layer helps the ALD growth too. The problem is this needs to be kept VERY thin, or else you defeat the point of moving to a high K film by having a low K film right underneath it. It then becomes a matter of how good your surface prep (wet clean) process is.
Also IBM is using a gate first process so the anneal after high K and gate dep may also make the film interfaces different than Intel's process. It remains to be seen how well IBM's process will integrate.
"Also IBM is using a gate first process so the anneal after high K and gate dep may also make the film interfaces different than Intel's process. It remains to be seen how well IBM's process will integrate."
Again I find myself wondering if this isn't another example of IBM seeking an "elegant" solution. From what I've seen in the articles I referenced earlier, the process flow is more complicated in the gate last process.
The allure of gate first is easy to understand. A simpler process flow is always the first choice when you can take it. It seems to me that it would be very much like IBM to take the gate-first path and seek to engineer solutions to subsequent issues induced by downstream processing. IBM is not exactly known for taking the most expedient path.
"my concern?? If Intel has already tried what IBM is doing and determined it did not work, well, IBM (and consequently AMD) are much farther behind the high-K curve than I suspect."
First, I'd be inclined to think that Intel already went down this road. You should always try the simplest solution first. But I don't really think it is an issue of what will and won't work in this case. I see this as a fundamental difference in philosophy.
IBM has a very academic approach and will invest their resources in finding the most elegant solution possible. They do some of the best pure research in the world and are quite capable of finding solutions to the issues you bring up if those solutions exist.
Intel, on the other hand, is a manufacturing company that happens to depend on research to advance their product. Their focus is to find the process with the widest window and get it into manufacturing on schedule to keep pace with Moore's Law.
I believe it is this fundamental difference in philosophy that has resulted in these companies taking such divergent paths. And sadly, you can't rely on an objective analysis from one company on the other company's process, both for competitive and philosophical reasons.
So I guess in summary, I don't believe that IBM's choice of a different path is necessarily indicative of them being behind or ahead on any particular advancement. I just think that it is the natural result of completely different corporate cultures.
"Small here is very misleading. AMD cranks out over a million chips a week. AMD has about 30% of Intel's capacity but it hasn't topped out FAB 36 or converted FAB 30."
Categorically incorrect - he ignores Intel's manufacturing capacity for chipsets (which AMD outsources) and flash. He also ignores the fact that Intel likely has a greater aggregate die size (even being a half to full node ahead), as Intel has a greater mix of both dual core and quad core than AMD. Manufacturing capacity isn't measured by market share in a single segment, especially when one of the companies produces significant product volumes (in house) in other segments.
And you shouldn't just compare AMD to Intel - if you look at TSMC, UMC, or Samsung, they all dwarf what AMD can do. As always, Scientia is using a singular observation (market share) and interpreting it incorrectly. Next thing you know, he'll look at the IC revenue list, say AMD is in the top 10, and ignore the fact that a good chunk of that revenue is from parts that are outsourced. I mean heck, he mentioned passing MOTO a while back! MOTO, are you freakin' kidding me?
Simply put, Scientia doesn't understand manufacturing.
Scientia takes a look at other posts and criticizes them for minor trolling and statements not adding anything to the argument. Let's take a look in the mirror, shall we:
"Also, we have to wonder why if Intel is doing so well compared to AMD with power draw then why did Supermicro just announce World's Densest Blade Server with Quad-Core AMD Opteron Processors."
Exactly how does this fit into his "is Intel's process tech so far ahead" blog? Also, why do "we" have to wonder - shouldn't Scientia be saying "I" have to wonder, or is he a mind reader now? This use of plurality to make it sound as if his belief is widespread is rather Sharikou-esque.
"There is a slim possibility that K10's could show up in the November Top 500 Supercomputer list."
What does this have to do with the blog question of is Intel's process tech ahead?
"Too often we end up with apples to oranges when we try to compare Intel with AMD"
Like comparing AMD's top clockspeed on one architecture to Intel's top speed on a DIFFERENT architecture, and using that to conclude things about process technology?
“The key is replacing this well characterized process with one that introduces many new variables”
“Intel was working on high K as early as 2001 IN HOUSE!”
“If Intel has already tried what IBM is doing and determined it did not work, well, IBM (and consequently AMD) are much farther behind the high-K curve than I suspect.”
No doubt INTC is leveraging multiple design teams to systematically institute or eliminate, for that matter, process variables in an extremely organized, systematic, and cooperative way.
IBM and INTC have devoted vast resources to R&D, but the similarity ends there, as INTC can scale to full volume production en masse successfully. IBM's discoveries make wonderful reading in scientific journals; rarely, however, are these "breakthroughs" beneficial to the average consumer.
On the other hand, INTC’s average consumer wouldn’t know the difference between ‘Hi-K’ and a ‘High colonic’, yet the technology will be featured in most computers, now, and in the foreseeable future.
As for Arabian Micro Devices, they have now moved the introduction of their 'triple cripples' to 2Q '08 from the previously stated 1Q launch. I suppose they are at GURU's correctly forecasted leakage wall, ceiling, or floor, depending on your perspective. (I'll take the floor.) However, any way you look at it, DOC was right from the beginning. This Barcelona abomination is, in fact, D.O.A.
The newly acquired cash infusion will sustain the company for another year as they try to reorganize without the protection and humiliation of bankruptcy proceedings. With this, the TSMC/German deals, and other measures, AMD is leveraging its multiple accounting teams to ensure survivability.
This is the real difference between INTC and AMD and the way in which they focus and leverage their respective assets. AMD is further behind than most realize.
“LEAP AHEAD”, was an understatement.
SPARKS
According to Paul Otellini's keynote at Oracle OpenWorld, Intel was aware that they would have a problem with Silicon Dioxide power/perf scaling in 1996.
You can watch Hector Ruiz's and Paul Otellini's keynotes here
They are like night and day... poor Hector.
"According to Paul Otellini's keynote at Oracle OpenWorld, Intel was aware that they would have a problem with Silicon Dioxide power/perf scaling in 1996."
Folks let's be clear here, everyone knew the wall was coming - if you plot gate leakage vs thickness you will see an exponential relationship (this is due to quantum tunneling).
The problem Scientia has is he thinks things like high K are done on a schedule. Looking at when you start the research is somewhat meaningless in this specific example - it may tell you when you took the problem seriously, but it says nothing about the expected completion timeline. This is not simply ratcheting down a litho feature, making an oxide thinner, or moving to a shallower implant (while all of these things are difficult, they are not breakthroughs). He apparently also thinks 5 years is a short time for an invention of this magnitude, which it's not. Going from a research paper on a non-integrated flow to production on something like this is, shall we say, not easy. The fact that AMD had papers as "early" as 2003 is a useless trivia fact; no (reasonable) conclusions can be drawn from this single data point alone. Also, as I and others have previously noted, 2003 was in fact not very early at all (though I'm also sure the papers Scientia linked did not mark the beginning of IBM and AMD's research on this).
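The exponential leakage-vs-thickness relationship mentioned above (direct tunneling) can be illustrated with a toy WKB-style model. The prefactor and decay constant here are order-of-magnitude guesses for illustration, not fitted to any real process:

```python
import math

J0 = 1.0e6         # arbitrary current-density prefactor, A/cm^2 (assumption)
KAPPA_PER_A = 0.8  # tunneling decay constant per angstrom (rough guess)

def gate_leakage(t_ox_a):
    """Relative direct-tunneling current density for an oxide t_ox_a
    angstroms thick, using J ~ J0 * exp(-2 * kappa * t_ox)."""
    return J0 * math.exp(-2.0 * KAPPA_PER_A * t_ox_a)

# With these constants, each ~1.4 A of thinning costs roughly 10x in
# leakage - the 'wall' everyone saw coming as SiO2 scaled toward ~12 A.
for t_ox in (20.0, 16.0, 12.0):
    print(f"t_ox = {t_ox:4.1f} A -> relative leakage {gate_leakage(t_ox):.2e}")
```

The exact numbers don't matter; the point is the exponential shape, which is why plotting leakage against thickness made the end of SiO2 scaling obvious to everyone well in advance.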
High-K/metal gate is not a matter of simply engineering a solution; it requires SEVERAL breakthroughs. Simply throwing more money, time, and resources at it, or partnering with others, does not automatically speed things up.
Also, the tricks/issues are in the integration as much as in the material choices - this is important because it is not going to be easy to cut a wafer open and reverse engineer the solution. Intel (like many other IC makers) will patent some of the broad stuff, but the critical "tricks" will likely be kept as trade secrets and specifically not patented, to keep competitors from reverse engineering the solutions.
" Folks let's be clear here, everyone knew the wall was coming - if you plot gate leakage vs thickness you will see an exponential relationship (this is due to quantum tunneling)."
http://researchweb.watson.ibm.com/journal/rd/462/taur.pdf
This is a good article demonstrating your thesis.
Jack
“I have yet to see a single roadmap from AMD showing 2.6GHz by end of year...has anyone?”
According to this report by the ‘RAG’, it ain’t gonna happen.
http://www.theinquirer.net/gb/inquirer/news/2007/11/18/amd-delays-phenom-ghz-due-tlb
Frankly, I don’t know what the hell he’s talking about. Whatever it is, it’s flying in the face of the bad process/bad architecture theory, to which I subscribe, by the way.
He is putting the blame ENTIRELY on “9 to 5” engineers, schmuck! He is giving management a free pass by default.
This guy’s stock, as far as I’m concerned, is falling somewhere between WorldCom and Enron.
SPARKS
"He is putting the blame ENTIRELY on “9 to 5” engineers, schmuck! He is giving management a free pass by default. "
I'm not sure how much I trust the whole article - it claims the issue is the L3 errata that occurs above 2.4GHz. While that may be AN issue, it can't be the only one (ummm...why are there no 2.0, 2.1, 2.2, 2.3 GHz Barcies?)
Also, AMD was running 3.0GHz demos a while ago - did they just find this out now, or did they bury the issue and are only raising it now as a convenient excuse to buy time while they fix other problems? The story smells, and I think AMD is using Charlie (unknowingly?) to spin things. Did they notice this issue when they were "dancing in the aisles" with the 3.0GHz part however long ago? You'd think they would have fully loaded all 4 cores at some point in their test/validation process?!?
In my humble opinion this is another spin to cover up more significant problems that are being addressed on the new steppings. Recall the "we're giving the customers what they want" excuse with the underclocked Barcelonas - they know they can't get away with that ABSURD excuse with the Phenoms. If this really were the key issue, there is no reason why there are no Barcelonas released right now between 2.0 and 2.4GHz.
Oh, by the way, is there any question why the analyst meeting was canceled on such short notice a little while ago? Do you think they may have been just slightly afraid that someone might ask when Phenom was coming out? Or ask about cash flow (they probably wouldn't have been able to comment on the stock offering at that time)? Of course I'm sure they will have a much more accurate and better roadmap discussion, as that was the "REAL" reason it was pushed out.
I completely agree about management getting a free pass. If things go well, then it is management doing a good job setting direction and guiding the ship. If things go poorly, it is an execution problem - AND that execution problem is an engineering issue and not poor contingency and strategic planning?
AMD's mgmt woefully underplanned the strategic side of the K10 release - about the only thing they didn't screw up on the planning side was releasing server first (though these products are underwhelming, they will sell, and it was the right strategy to protect that market first). Other issues:
1) Marketing anyone? The first new architecture in how many years - ever think about advertising it a little more than just on the web and in random interviews? (unless they knew it was going to underperform and would be late so they just decided to save their money)
2) They have screwed up on dual core desktop introduction - why is this waiting until Q2'08 when this market is bigger and yields/bins will likely be better?
3) They botched the PR on cripple core - did they really think people were going to view this as an AMD "unique capability"? Selling it is one thing, drumming it up as something special is another! You don't see supermarkets selling dented cans as an innovative container for canned foods, do you?
4) They were overaggressive on both the performance claims and the timeline of K10. Better to be a bit conservative (perhaps they thought they were being conservative and couldn't even meet those estimates?).
5) They now have (will) completely undercut their pricing power by releasing the bottom bins first on desktop and server.
None of these are execution/engineering issues.
Hexus PHENOM review:
http://www.hexus.net/content/item.php?item=10427&page=1
See not only the awful performance of PHENOM vs. the C2Q Q6600, but also the substandard performance of AMD's chipsets in HDD performance, USB performance, etc. vs. Intel's chipsets.
Also, some excellent quotes from the article:
Irrespective of whether you think that Intel's glue-dual-cores-together approach is architecturally inelegant, the fact remains that Core 2 Quad - in both its Kentsfield and new-and-improved Penryn flavours - is a fast and efficient processor in practically every way.
AMD's nascent Phenom also suffers under the considerable yoke of Intel's Core 2 Quad 6600 pricing, which at £165 for a hugely-overclockable 2.4GHz part is something of a bargain. AMD, though, is pitching its slightly underperforming quad-core part at roughly the same price. The industry needs AMD to survive and succeed yet it's very difficult to make a compelling buying recommendation for a processor that's a year behind its competitor - one who has already moved on to a more-efficient 45nm manufacturing process - is between 10-20 percent slower in most benchmarks, and costs much the same.
Right now, pressed for buying advice, we'd recommend our readers opt for the competition's processor, chipset, and graphics cards.
AMD is a joke right now! All these 'next generation' products (Barcelona, AMD 790 chipset, Phenom CPU, ATI 3800 video cards) are ALL slower than the competition's existing parts!
Let's also not forget that this is Phenom vs. Kentsfield, not vs. Yorkfield, which is even faster. Let's not forget, either, that Intel has speeds of up to 3GHz.
Holy PR Batman:
http://biz.yahoo.com/bw/071119/20071118005069.html?.v=1
"In a new initiative to measure real-world processor power consumption, AMD surveyed consumer and commercial users to understand precise usage patterns. AMD measured power consumption for these usage patterns and has found that AMD Phenom processors with Cool’n’Quiet 2.0 technology rated at 95W TDP can consume an average power of 32W for consumers and 29W for commercial users "
They neglected to mention that the initiative included drugging these users so they'd pass out and the "usage" pattern would consist of things idling! (Actually, if you read the notes, it assumes 39-44% idle time.)
Seriously, why not do a similar comparison study with K8 users to see what those numbers would look like? Then they could truly show off the 'new' benefit of the new platform. Why not? Perhaps the power levels have more to do with the measurement technique than the actual product?!? Just a thought. How about doing it with an Intel CPU?
Fantastic scientific discipline - let's do a study and provide no baseline or reference point from which to draw a reasonable conclusion!
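For what it's worth, the idle-weighting complaint is easy to quantify. With some guessed idle and active power figures (not AMD-published numbers, just plausible stand-ins), averages near the claimed 29-32W fall straight out of the 39-44% idle assumption:

```python
# Back-of-envelope check on how the idle-time assumption drives the
# "32W average from a 95W TDP part" claim. The idle/active power
# figures below are guesses for illustration, not AMD measurements.
TDP = 95.0       # W, rated TDP of the Phenom parts in question
P_IDLE = 10.0    # W, assumed idle draw with Cool'n'Quiet (guess)
P_ACTIVE = 45.0  # W, assumed typical active draw, well under TDP (guess)

def avg_power(idle_fraction):
    """Time-weighted average power for a given fraction of idle time."""
    return idle_fraction * P_IDLE + (1 - idle_fraction) * P_ACTIVE

for f in (0.39, 0.44):  # the idle fractions AMD's notes assume
    print(f"{f:.0%} idle -> {avg_power(f):.1f} W average")
```

With these guesses the averages land in the quoted neighborhood, which is the point: the headline number is mostly a statement about idle time, not about the silicon.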
"AMD Phenom processors 9600 (2.3GHz) and 9500 (2.2GHz) are now available for $283 and $251"
Slightly more than a 10% cost difference for 100MHz? Which would be less than a 5% speed delta... why even bother with 2 bins at this point? Heck, might as well sprinkle some triple cores in there (they probably have tons of those lying around).
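The arithmetic behind that gripe checks out:

```python
# Price premium vs. clock gain for the two launch bins
# (launch prices and clocks as quoted above).
price_9600, price_9500 = 283.0, 251.0  # USD
clock_9600, clock_9500 = 2.3, 2.2      # GHz

price_delta = price_9600 / price_9500 - 1  # extra money for the 9600
clock_delta = clock_9600 / clock_9500 - 1  # extra clock for the 9600

print(f"price premium: {price_delta:.1%}, clock gain: {clock_delta:.1%}")
```

Roughly 12.7% more money for about 4.5% more clock, so two nearly indistinguishable bins at a double-digit price gap is a fair complaint.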
Giant
Hexus PHENOM review
Confirmed to be slower than Kentsfield per clock, as the illicit previews all year have shown. Even worse, the 2.4 GHz Phenom 9700 has apparently been delayed to late Q1 08.
Here are a couple other reviews confirming this cold hard truth:
Hot Hardware
Tom's Hardware
Meyer will inherit a trainwreck of massive proportions from Ruiz.
“Also, AMD was running 3.0GHz demos a while ago - did they just find this out now or did they bury the issue and are now just raising it as a convenient excuse to buy time as they fix other problems?”
I agree, the possibility also crossed my mind.
“Charlie (unknowingly?) to spin things.”
Nah, the article was written by Theo Valich. I agree with the rest of your supposition, however. Besides, in Charlie’s defense, I think he’s done quoting AMD’s spin/pabulum without reservation; unfortunately, he learned it the hard way.
“None of these are execution/engineering issues.”
Exactly the point, this is 100% on target.
Let's take this a bit further, shall we? If they hadn’t spent $5.4B on a graphics company, perhaps they could have spent a little money on some midnight oil.
Then again, why bother? It has never been said, or leaked, how many of AMD’s engineers knew (see the IBM tunneling paper above) that quantum effects would be a limiting factor in Barcelona’s viability. Obviously, the handwriting was on the wall - IBM’s, literally.
Personally, I put these engineers in the GURU league of chip genius. If you knew it, I’m sure they knew it. Perhaps management rammed it down their throats anyway!
Then again, I’m a Union Electrician, if I think something ain’t gonna fly, I’ll tell ‘em to shove it, and I have. Perhaps highly disciplined corporate engineers don’t have that luxury, the poor bastards.
But dumping on the guys out in the field? Nothing infuriates me more. This is what I believe these lying sons of bitches are doing; they have found a very convenient scapegoat: the guys who’ve been busting their asses for over a year trying to make this pig fly.
SPARKS
Meyer will inherit a trainwreck of massive proportions from Ruiz.
Indeed. Dirk Meyer is in for one hell of a job. The only good thing going for him is that Ruiz has already done the damage. If he can save AMD he will be heralded as a great CEO. If he fails, and AMD goes under, Ruiz will get the blame since he's already done the damage.
Confirmed to be slower than Kentsfield per clock, as the illicit previews all year have shown.
That's just pathetic. All these reviews aren't even counting the fact that Intel goes to 3Ghz AND that Yorkfield is coming for the masses in January. Yorkfield offers 5% IPC gain without SSE4, and that can go much higher when SSE4 is used.
The reviews look about where things were expected, given the recent lowering of expectations. Clearly the "40% better" has turned into "well, when you consider the price, it's kind of competitive..."
What's scary, though, are the power numbers, if the THG review is to be believed. The idle draw was horrendous compared to AMD's K8 - the K10 was more than double the highest-bin X2 (and at a MUCH lower clock). It also only seemed to be competitive with / slightly better than Intel's 65nm quad (I was expecting better here).
The Hexus review was similar - even with a total-system power measurement (which hits Intel harder because of the Northbridge), the Q6600 was better at idle and under load. This doesn't even begin to factor in what is already known about the 45nm power consumption numbers.
I also find it amusing that it appears as though the triple core has slipped from Q1 to Q2... I mean, slipping a defective product? How sad is that - they can't even get a part with a defective core out the door! Perhaps they can just keep downbinning these into dual cores and then they won't even need to bother with "native" dual cores.
As JJ astutely pointed out, it is rather obvious why this is a "platform" launch and not a "CPU" launch - AMD is clearly trying to take some attention away from the CPU performance.
So wait, all the review sites have the Phenom 9700 on display in their benchmarking, but at the last moment AMD decides to delay it till Q1 2008?
Wow, is that bait and switch or what? Technically, the reviews all have a CPU on display that is vaporware, even though their articles are written with the 9700 in mind.
Although the Phenom 9700 still loses to the Q6600 in performance, it at least matches it in clockspeed, an important frame of reference for those "clock vs clock" people.
But in reality, AMD has neither caught up to Intel's _slowest_ consumer quad core in clockspeed nor IPC.
Hexus about sums it up:
"But what we've also seen is that AMD cannot match the clock-speed of Intel's slowest quad-core processor and, worse still, can't match Core 2 Quad's performance on a clock-for-clock basis either. "
"Bottom line: the new Phenom quad-core processor and 7-series chipset pack in some potent technology. Trouble is, Intel got there first. You need to be better than the competition if coming from behind: AMD's new launches aren't quite that."
Intel should send out their lower clocked Yorkfield and Wolfdale cpus to reviewers just to fill up their benchmark pages.
I mean, sure you won't be able to buy them till Q1 2008, but why let that interfere with good benchmarking. Just ask AMD. =)
"so wait, all the review sites have the Phenom 9700 on display in their benchmarking but just at the last moment AMD decides to delay it till Q1 2008?"
Come on, you're such a cynic - the INQ article says the error only occurs when all cores are loaded under certain circumstances. Clearly this would have taken months to find, and it was mere COINCIDENCE that they found it right before launch!
You don't think they pulled it this late knowing that the sites would make a note of the lack of availability but in all likelihood keep the benchmarks in the reviews anyway?
It is also obvious that the L3 issue is the ONLY reason the clocks are so far below expectations. Well, it's either that or AMD is 'focusing' on the low-end products customers are apparently demanding.
"so wait, all the review sites have the Phenom 9700 on display in their benchmarking but just at the last moment AMD decides to delay it till Q1 2008?"
Well, AMD is in the habit of publishing benchmark data on processors they are not launching at that speed, aren't they?
Anand did an interesting side-note editorial:
http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3153&p=3
AMD is in rough times, there is no doubt about that.
Anand's review is somewhat interesting: he was able to clock one of the 2.4GHz Phenoms to 2.6GHz (basically AMD's FX processor, when and if it actually launches), and it still cannot beat the Q6600 :) :)
Catch that: "AMD's fastest Quad is Fragged by Intel's Slowest Quad" - a headline you will not see on Sharikaboob's blog.
"I also find it amusing that it appears as thought the triple core has slipped from Q1 to Q2... I mean, slipping a defective product? How sad is that, they can't even get a part with a defective core out the door! Perhaps they can just keep downbinning these into dual cores and then they won't even need to bother with "native" dual cores."
NOW is the time I really wish amdzone's archive was still in place. I read it sometimes for a laugh, and one thread had Sci reasoning that the triple core was delayed because yields on the quad core were so good. Then there were comments like "good point, most people would have missed that...", etc. Absolutely hilarious.
THG - "Our engineering sample was very promising indeed. We were able to overclock the CPU by 25%, resulting in a 15% performance increase in 3DMark."
Promising? A 25% overclock? And how about that scaling - if this scaling holds true for the actual retail chips, there's no way this will be competitive with Penryn (putting aside the fact that Penryn will also likely clock higher). It really looks like AMD will have to price these lower to compensate for lower performance for the foreseeable future.
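The scaling math above is quick to check, and a naive extrapolation (an assumption on my part, since scaling is rarely linear across clocks and workloads) is not flattering:

```python
# THG's result: a 25% overclock yielded only a 15% 3DMark gain.
oc_gain = 0.25
perf_gain = 0.15
efficiency = perf_gain / oc_gain  # fraction of the clock gain realized
print(f"scaling efficiency: {efficiency:.0%}")

# Naive extrapolation (assumption: the same ratio holds elsewhere):
# a hypothetical 2.3 -> 2.6 GHz bump (+13% clock) would then net ~8%.
extra = (2.6 / 2.3 - 1) * efficiency
print(f"implied gain for 2.3 -> 2.6 GHz: {extra:.1%}")
```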
I'm not a big 3DMark benchmark person - is this benefit somewhat representative of what you would expect to see in other applications?
I'm curious as to why Tom views this result as promising? Also I wonder if anything should be read into AMD not allowing folks to play with Vcore - with a power measurement, this probably could have given folks an idea of how good/bad the leakage is on these things. Or perhaps AMD just assumed the journalist wouldn't know what they are doing and didn't want to fry the chips - still would've been nice to see any Vcore change.
I guess we'll have to wait until these become available tomorrow for a review site to purchase and do an overclocking review (he says laughing)
Has anyone else wondered why AMD didn't give journalists retail CPUs? Perhaps if it had, there wouldn't have been enough to send to retailers.
Ho Ho --
Read AnandTech's review, page 3:
http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3153&p=3
Bravo to Anand for standing his ground.
Firingsquad went to the Tahoe dog and pony show, but they labeled it as such:
http://www.firingsquad.com/hardware/amd_phenom_preview/
"I read it sometimes for a laugh and one thread had Sci reasoning that the triple core was delayed because yields on the quad core were so good. then there were comments like "good point, most people would have missed that.., etc." Absolutely hilarious."
I recall seeing that too (though don't recall if it was Sci) and nearly blowing the soda I was drinking through my nose I laughed so hard.
I couldn't find it, but I found this Scientia "nugget" (his blog, Oct '07):
"So, I'm thinking Barcelona is at least 10% of production and that isn't bad for initial launch. I'm thinking that AMD is ramping normally but that the demand is high enough that there are still shortages. The descriptions that I've seen suggest that AMD is ramping aggressively."
I'M THINKING his 'thinking' is more HOPING and WISHING than actual thinking!
And this one, for the next time he says he made no claims about 3.0GHz chips (this was a comment in his '2007: The second half' blog):
"Also, AMD will almost certainly have 3.0Ghz quads by Q1 08 "
Of course this could have been a typo and meant to be Q1'09?. I'm sure if confronted (and in the event he doesn't just censor the post), he'd say that 'almost certainly' doesn't mean "definitely" or some other ridiculous backtrack. He had other comments to this effect (and 2.8's in Q4, with an outside chance of 3.0) - however I can't find those.
Thanks, AMD.
November 19, 2007 – 6:56 am
Thank you for not just letting me down, but letting me down in the most unimaginative way possible. It’s like you didn’t even try. First, you told me your quad core was going to be WAY faster than any of Intel’s offerings, and that turned out to be a bust. And when that didn’t work, you conjured up some sort of proprietary-esque “platform” nonsense. It’s like you’re purposefully trying to hurt my feelings.
I had high hopes for you, AMD, but I don’t see a point in continuing this relationship. It’s like you’re purposefully trying to hurt my feelings. I told people to wait for you. I told people things would shape up. And what do you do? You drop a shitty quad-core CPU with shitty clock speeds, shitty benchmarks, and tell me that it’ll work great with your shitty new video card and your shitty new chipsets. WHY did you have to go and buy ATI? What was the point, really? Nobody cares about ATI anymore. The people who care about ATI and AMD anymore are the same people who flush money down the toilet for fun.
I don’t care if your new processor is cheap. Cheap doesn’t compensate for suck. Go fuck yourselves, AMD. And don’t bother calling me. We’re through
Thanks, AMD
Ha ha ha ha ha
So out of curiosity, I wonder if Rahul Sood of HP/VoodooPC knowingly lied when a couple months ago he spouted the nonsense that Phenom 3.0 Ghz would "kick the living crap" out of any CPU then on the market. Or did he unknowingly regurgitate the lies told him by someone at AMD? Either way, I'm now crossing Sood off my list of credible sources. Good riddance!
jumpingjack:
Unfortunately, he does think he is an authority on the subject, and he states with such conviction that he convinces the ranks of AMDzone that he is some sort of God. So yes, many believe his antics.
It is very likely that a large part of his audience is even less informed on the subject than he is. Being in that category myself, I couldn't tell you if he knows his stuff or is just blowing hot air. I read these sites with interest but also am aware that while I can grasp some of the technical discussion, I lack the experience and schooling to know for sure what is right and what is mistaken.
I think a lot of visitors are like that. If they are heavily pro-AMD, they believe what he is saying because it's what they want to be true. If they are heavily pro-Intel, they hope that he is mistaken because that is what they want to believe. But in the end, the results are what will matter. It's interesting to read all this stuff, but I prefer not to invest myself personally in it.
My next CPU or GPU purchase won't be based on RDR or MCM or whatever acronyms we can throw on the pile. My purchases will be based on performance on applications that matter to me. I don't care if your CPU has better IPC, if it's not the fastest at a given price point, I'm not buying it!
tonus
"I don't care if your CPU has better IPC, if it's not the fastest at a given price point, I'm not buying it!"
but ... how can you ignore things that are just better by design? I mean, K10 is a true quad, it has multiple cache levels for optimal performance. Also split memory controllers and an IMC with uber-advanced HT. Sure, even with all those things making it one of the best CPUs in the world, its final performance is still lacking - but so what? The CPU is just beautiful!
:P
In other news, AMD says its 9900 at 2.6GHz will be released late Q1 next year at 140W.
Anyone want to guess when a 3GHz quad with normal thermals will be available? Will it be before next Christmas?
"So out of curiosity, I wonder if Rahul Sood of HP/VoodooPC knowingly lied when a couple months ago he spouted the nonsense that Phenom 3.0 Ghz would "kick the living crap" out of any CPU then on the market. Or did he unknowingly regurgitate the lies told him by someone at AMD?"
I challenged him on that (I must credit him for having the integrity to publish and respond to the comments). He stood by the comments, said he doesn't speculate on performance (i.e. it was measured), and that he continues to stand by the comment.
He did get a little squirrelly, though, as he says his comment was about kicking the living crap out of CURRENT processors at the time (which I think was the 2.93GHz Kentsfield?).
When I pointed out that that may have been a bit of a copout, as there was no reason to believe a 3GHz Phenom was due out soon when he made that comment, he said he had an inside source which led him to believe it would be coming soon. I also don't think he is allowed to publish the data (either he saw the demo or did the demo under an agreement not to publish any data).
I fully intend to post the 3.0GHz data on his site when it comes out in H2'08 and compare it retroactively to a 2.93GHz Kentsfield to better understand his concept of a "stone cold killer".
Ouuuuuchhhh!
"We were able to overclock the 2.6Ghz Phenom 9900 to 3.06Ghz and even at 3GHz it didn't pose a threat to really any of the Intel processors, so the only way AMD is going to be able to compete with Intel is by lowering prices once again and pitching the whole platform and not just the processor."
Even if you get a magical bin-split and get to 3 GHz, it still does not affect the overall competitive landscape.
Link for the statement in the post just above:
http://www.legitreviews.com/article/597/13/
My post copied for posterity from Scientia's blog.
Ho Ho
If not then I'm afraid only savior AMD can hope is Bulldozer
To be blunt, AMD have almost certainly known since before this past July that K10 would simply be a stopgap to Bulldozer. During the July Analyst Day conference, AMD essentially downplayed K10 and talked up Bulldozer as the real deal, probably for the sole purpose of winning over gullible investors like the Abu Dhabi government so that AMD can survive until late 2009 when Bulldozer is slated to launch.
It hardly matters anymore if 65-nm K10 can hit 3.0 Ghz by Q2 08 with decent thermals. Its IPC to die size ratio is so far behind that of Yorkfield, and Yorkfield's pricing so aggressive, that AMD will not be able to make enough money on K10 to stop the bleeding even if they can get faster clocks out by Q2. At this point we're looking at 2.4 GHz and hopefully 2.6 GHz by late Q1. I doubt that 2.8 GHz would emerge until Q2 at earliest.
Shanghai in late 2008 would bring the die down to competitive sizing but the big question is whether AMD can bring the leakage down to reasonable levels to clock them competitively.
Effectively, AMD are screwed for 2008 with their current form of operations. They must either raise a lot more cash or start selling assets. The Abu Dhabi windfall buys them a quarter or two of time to consolidate and execute on asset lite.
Anonymous poster said:
I fully intend to post the 3.0GHz data on his site when it comes out in H2'08 and compare it retroactively to a 2.93GHz Kentsfield to better understand his concept of a "stone cold killer".
http://enthusiast.hardocp.com/article.html?art=MTQyMiw2LCxoZW50aHVzaWFzdA==
There is an overclocked 3GHz Phenom and 3GHz Kentsfield there, which is close enough to 2.93GHz.
Stone cold killer my ass! ROFL