8.23.2007

Walked The Plank Or Rat Jumping Ship?

Henri Richards departs from AMD:
"...AMD said Richard is resigning on "his own accord and on completely amicable terms."..."

Of course those are the typical nice words used whenever a top-level executive leaves a company. Whether he truly decided to leave or was pressured to do so, there's only a slim chance we'll ever find out what really happened. (Unless of course an AMD executive later decides to write a book entitled "Barcelonagate"). But one thing's for sure: whatever the reason Henri left, it doesn't bode well for AMD.

If Henri was pressured to leave, then the problem points to AMD's abysmal marketing record, which can only get worse with severely handicapped next-gen products. If Henri did decide to leave, it suggests a bitter and irreconcilable disagreement. A product launch is the busiest time for marketing, and leaving before such an event means a withdrawal of support or an unwillingness to be responsible for its outcome. Changing heads before a war suggests internal confusion and conflict. Or maybe it's a simple case of integrity on Henri's part against AMD's increasingly shady claims of leadership.

Meanwhile on the Intel front, the Centrino advertising blitz has begun (Intel Ads). Intel plans to halt AMD's gains in the mobile segment with an estimated $50M advertising budget. This is in contrast to AMD's overall budget of $20 for making this YouTube video. (And no, there isn't a missing 'M' in the $20 figure).

63 comments:

GutterRat said...

Just to let you all know that scientia censored my latest post challenging abistein to provide facts behind his ridiculous statements about Penryn vs Barcelona/Phenom.

I provided concrete evidence of knowledge about rev 10h and scientia is hiding my post from others because he knows I am right.

So there you have it: scientia once again censoring for the sake of protecting something that we know is utterly broken.

Anonymous said...

Things will only get better when the upper level is cleaned out (starting with Ruiz) - Wall St will understand; if only one or two execs go, then it's just another dog and pony show.

In other news did you see where Schumer (Senator in NY) claims AMD is committed to building the NY fab. Though the AMD response is somewhat vague and non-committal.

As a former NY resident, I'm appalled that NY state will be ponying up $1.2B when there are so many infrastructure needs that money could be spent on. Rather amazed at the hypocrisy of Clinton and Schumer making such a big deal over corporate welfare and tax breaks and then turning around and handing AMD $1.2B. I suppose if the money goes to a company building in one's home state, it's OK!?!? I'm glad I don't live in NY anymore.

Anonymous said...

Hmmm, nice analysis, Doc. However, the company’s in the hole for billions and this unabashed, outspoken mouth piece bails? I’d prefer to go with the ‘Walk The Plank’ scenario.

You’ve wisely posted a link, ‘AMD Scrambles To Replace Henri.’ I don’t think the article cuts it. I think they already hired a replacement weeks ago, Lee Brooke, former INTC PR guru and Euro insider. After all, Richards’s abrasive style bordered more on PR than it did Global sales, anyway. Additionally, Brooke may bring some more and extremely welcome Euro sympathy and support (and sales).

Conversely, perhaps I’m all wet here, and he chose to leave as he was told to keep his big mouth shut. Maybe AMD felt there were too many cooks in the spin kitchen and they need to put this high profile salesman on a short leash. Naturally, he would have none of it, not with that kind of ego.

These guys are a special breed; nasty jobs as far as I’m concerned. The bottom line here is I don’t know the difference between PR and sales, but I do know the difference between bullshit and crap. AMD is peddling both. At this juncture, they need a new guy to float the stuff down the toilet.

http://www.theinquirer.net/default.aspx?article=41379

SPARKS

Anonymous said...

"Barcelonagate"

This book, and the resulting royalties, may be what keeps food on the table for many ex-AMD employees who decide to spill the goods.

It's not like they're going to make dough from their ESPP, stock options (not many get them) or restricted shares.

Mario Rivas is the next guy to go.

Hector will leave at the end of the year and the BOD will put Dirk in charge which will be the death blow.

Roborat, Ph.D said...

GutterRat said...
Just to let you all know that scientia censored my latest post...


some pre-dump their argument here before they post it there if they know there's a risk of it being deleted. Especially if what they're about to post makes a valid point. You're welcome to do that here.

Anonymous said...

stupid irrational widowed fanboys

Anonymous said...

Can we start another blog on why Scientia, quite frankly, is an idiot and should not be commenting on process technology and manufacturing.

His latest blog:
"October 2006 to April 2007 would be 6 months. So, this would mean that AMD made a 100% transition in two months less than it took Intel to reach 50%."

Seems reasonable, no? (all his linked info was correct). So why is this such a dumb statement? Because he is comparing Intel's complete production crossover (all fabs) to AMD doing a single fab... AMD will not be at complete 65nm crossover until F30 has stopped producing 90nm chips and Chartered has stopped as well. If I were to only look at a single Intel fab and ignore all other production, can I say they have 100% crossover too?

Oh and the idiot is using the dates Intel claimed CPU shipment crossover (meaning chips were done and being shipped), against AMD's claim that all WAFER (um, F36 only, please ignore F30) STARTS were 100% 65nm. So, add 3-4 months to this for when all SHIPMENTS from F36 would be 65nm.

So let's see the comparison again:
1) factor in 3-4 months to go from starts to wafer shipment - this would put AMD shipment at 100% 65nm from F36 in the ~Jul/Aug timeframe (NOT APRIL!)
2) consider AMD's 90nm, 200mm F30 and Chartered 90nm, 300mm production
- F30 was at 25-30K wafer starts in Q2; divide that by 2 to compare (roughly) to F36 production. This would put equivalent starts at ~the same between 90nm and 65nm. So the ratio would be ~60-70% 65nm when you factor in smaller die size
- Factor in 3K wafer starts (and these are 300mm!) at Chartered now, and that brings the ratio to ~50%

So we now have AMD at crossover at ~Jul/Aug when you consider TOTAL PRODUCTION, which is what Intel claimed. Crazy enough, this puts Intel's transition at roughly the same timeline as AMD's (though Intel managed to do it on a MUCH LARGER PRODUCTION scale).
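The arithmetic above can be sketched in a few lines (all wafer-start figures are the rough per-quarter estimates quoted above, not official numbers, and the 200mm-to-300mm and die-size conversions are crude assumptions):

```python
# Rough sketch of the 65nm-vs-90nm output ratio argued above.
# All inputs are the commenter's per-quarter estimates (assumptions).

f30_90nm_200mm_starts = 27_500              # midpoint of "25-30K" 200mm wafers
f30_equiv_300mm = f30_90nm_200mm_starts / 2 # crude 200mm -> 300mm conversion
chartered_90nm_300mm = 3_000                # Chartered 90nm, 300mm starts
f36_65nm_300mm = f30_equiv_300mm            # "equivalent starts ~ the same"

die_factor = 1.4                            # ~30% smaller 65nm die -> ~1.4x dies/wafer
effective_65nm = f36_65nm_300mm * die_factor
effective_90nm = f30_equiv_300mm + chartered_90nm_300mm

ratio_65nm = effective_65nm / (effective_65nm + effective_90nm)
print(f"65nm share of total output: {ratio_65nm:.0%}")
```

With these inputs the 65nm share lands in the ~50-60% range, i.e. nowhere near a completed crossover.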

Funny if you don't dig into the details on his links and claims, his argument ALMOST sounds like it has some merit, but like a good magician when you look a bit closer you'll notice he has you focusing on the things he wants you to see and ignore the obvious.

"In fact, other than HyperThreading there seems to be no major changes to the core between Penryn and Nehalem."

Have there been architectural details released on Nehalem? Pipeline length, IPC, cache structure? WTF? (I cut his comment where he mentions CSI, IMC.) Has anyone seen architectural details yet? How can he make this statement?

"Intel only got a modest shrink of 25% on 45nm "

Again, rose colored glasses - no data and no actual real comparison (was cache the same? did he consider increased transistor counts?). Funny, when AMD got 30% from a DIRECT / DUMB SHRINK (no architecture tweaks) going from 90nm to 65nm, that was fine. However a 25% shrink, with some architectural changes and larger cache, is simply modest? (this of course is assuming his 25% # is even real!)
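For reference, an ideal linear shrink scales die area with the square of the feature-size ratio, so the "expected" reductions can be computed directly (idealized scaling only; real shrinks fall short because caches, I/O and design changes don't scale linearly):

```python
# Idealized area scaling between nodes: die area scales with the square
# of the linear feature-size ratio. Real shrinks fall short of this.

def ideal_area_reduction(old_nm: float, new_nm: float) -> float:
    """Fractional die-area reduction for a perfect linear shrink."""
    return 1 - (new_nm / old_nm) ** 2

print(f"90nm -> 65nm ideal: {ideal_area_reduction(90, 65):.0%}")  # vs ~30% actual for AMD
print(f"65nm -> 45nm ideal: {ideal_area_reduction(65, 45):.0%}")  # vs the claimed ~25%
```

So both the ~30% and ~25% figures are well short of the ~48-52% an ideal shrink would give, which is why calling one "fine" and the other "modest" is inconsistent.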

"AMD is the sole reason why today Itanium is not the primary processor architecture."

WTF? Don't think Microsoft/SW support had anything to do with Itanium's acceptance in server land? Note he said AMD IS THE SOLE REASON.

"Q1 10 - 32nm Bulldozer Trailing by 1 quarter"

Ummm... Mark this date down... To put this in perspective: AMD is ~6 months into the 65nm tech node and is claiming 45nm SHIPPING (not product available) in Q3'08. By the way, this is a node that has stripped out most of the improvements and is itself pretty much a dumb shrink of 65nm. AMD expects ~20% transistor speed gain, which is HORRENDOUS for a tech node transition. And 32nm will follow in less than 18 months?

"But they aren't; Barcelona shows the same 70% reduction as Brisbane. This suggests to me that AMD has skipped a second die shrink and is concentrating on the 45nm launch. I'm pretty certain that if 45nm were going to be late that we would be seeing another shrink of 65nm as a stopgap."

This shows Scientia's lack of background. So his logic is there is no second shrink because 45nm is on track.... Here's a question - what 65nm product WOULD they shrink? The 90nm Opterons? The 65nm K8's that will be replaced with 65nm K10's? Or maybe he thinks they would shrink the 65nm K10's which aren't even out yet?

At this point in time AMD has no 65nm products TO SHRINK that make sense! The only potential ones would be mobile. Let's suppose Scientia is wrong for a second and 45nm will not be out until early 2009. What would they shrink? K8 desktops? (why, when they will be phased out in 2008 by K10) K8 servers? (well, that would be dumb as those are 90nm) K10 products? (when would they be shrinking these as they are just NOW introducing them?)

He just makes no sense - he just tries to take random observations (like no 65nm shrink beyond the CRAP 30% AMD got) and fits it into this means AMD is doing well (no 65nm shrinks = 45nm good, grunt, grunt). So it is not a bad shrink, it is just AMD is SO GOOD they don't need it...

Perhaps AMD CAN'T shrink 65nm any further, or get enough to justify increased costs for validation, mask/tapeout, development, etc? Or they don't have the MONEY to do it? Or the RESOURCES to do it?

Yeah, I know these are 'crazy' and completely implausible theories, on my end so my apologies... they are not as "fact" based as Scientia's? (Must have missed the link he provided for no shrink = 45nm good to go)

Anonymous said...

'Let's look at 2008 where things will be interesting' (paraphrasing)

H2'06 - ignore; Core 2 is small production, so it's not a good comparison.

Q1'07 - temporary shift, not interesting as K10 will be the key comparison point - H2'07 will be the key!

Q2'07 - market share RECOVERED. Keep in mind you can't look at a single quarter's market share # UNLESS it favors AMD. And on top of that I'll just take, say, the Q2 # and compare it to the 2006 average and say AMD has recovered?

- shouldn't I average all of 2007 to date to do an adequate comparison? Eh... you would think that - but the Q1 #'s were an anomaly because that was only 1 quarter; the Q2 #'s were not an anomaly because those made sense to me!

- H2'06 market share was actually significantly more than H1'07? IGNORE! Must compare ALL of 2006 to a single quarter's data point in 2007 to make the #'s look as favorable as possible!

H2'07 (the previous "exciting period" because that would be when AMD has K10 up against Core 2) - well, that's boring because AMD CHOSE to release low clock speed parts (to keep things best for consumers!). This whole Penryn thing will be low volume anyway, not like Phenom which will be "limited availability"?!?

45nm intel advantage, blah blah blah, skip over, Nehalem, blah blah blah skip over.

In 2010 things will REALLY be competitive! How's that for analysis!

The point of all of this rambling? An attempted walk through the mind of the great Dementia, I mean Scientia (damn typos!)

Anonymous said...

"2007 is far from over but it seems that lately people prefer to talk about 2008. Perhaps this is because AMD is unlikely to get above 2.5Ghz with K10 and Penryn will only have a low volume of about 3%"

Hmmm... that got me thinking... let's take a stab on K10 volume... (you know since Penryn is so low)

SERVER:
Overall it's ~5% of AMD's unit shipments
Assume 0% -> 50% ramp by end of year over 6 months (which is a bit optimistic in my view).

Factoring in yearly total - this is ~12% of the yearly server shipments, which is about 0.6% of total shipments being K10 server

YES THAT's RIGHT 0.6% of yearly shipments!

MOBILE:
Err... that would be 0...(even Dementia can't get that math wrong!)

DESKTOP:
I'll assume intro in Q4 and ramp to ~20% (which is probably VERY GENEROUS).

This would mean ~10% average for Q4 or 2.5% for full year.

I don't know the desktop unit % from AMD, but I'll assume desktop is ~60% of their total unit shipments; heck, let's assume 75%...

That puts K10 desktop at just under 2% for the full year...

So K10 will represent ~2.5% of AMD's total unit (unit, not revenue!) shipments in 2007.
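The estimate can be sanity-checked with simple arithmetic (all mix and ramp percentages are the assumptions stated above, not actual AMD figures):

```python
# Back-of-envelope K10 share of AMD's 2007 unit shipments, using the
# segment mixes and ramp rates assumed in the comment above.

server_mix = 0.05        # servers ~5% of AMD unit shipments
server_ramp_avg = 0.25   # 0% -> 50% linear ramp over H2 averages ~25%
server_k10 = server_mix * server_ramp_avg * 0.5    # H2 is half the year

desktop_mix = 0.75       # generous desktop share of unit shipments
desktop_q4_avg = 0.10    # Q4 intro ramping to ~20% averages ~10%
desktop_k10 = desktop_mix * desktop_q4_avg * 0.25  # Q4 is a quarter of the year

total = server_k10 + desktop_k10   # mobile K10 is zero in 2007
print(f"K10 ~{total:.1%} of AMD's 2007 unit shipments")  # -> ~2.5%
```

Server works out to ~0.6% and desktop to just under 2%, matching the figures above.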

Perhaps folks can now understand why all that CRAP people were spouting about K10 helping AMD's revenues in 2007 was wrong... Heck, even in Q1'08 it will still be fairly small...

But hey, as Scientia suggests, let's ignore Penryn as it is "low volume"...

Of course that low volume will be very similar to TOTAL K10 shipments (keep in mind 3% of Intel's production is ~10-12% of AMD's)

If ignorance is bliss, Dementia must be an extremely happy person.

InTheKnow said...

anonymous said ...

Because he is comparing Intel's complete production crossover (all fabs) to AMD doing a single fab... AMD will not be at complete 65nm crossover until F30 has stopped producing 90nm chips and Chartered has stopped as well. If I were to only look at a single Intel fab and ignore all other production, can I say they have 100% crossover too?

Lol. I just posted something similar on Scientia's blog before reading your statement. I saw the same flaws in the logic. I'll be interested to see his response, because, like you, I think his basic premise is flawed.

pointer said...

Just gave a quick glance over there... and using his same logic/statement... because Intel's Core is so good, AMD got scared and 'quickly' rushed out Barcy, thus Barcy owes to Core... :) hahahah

he has such fanboy thoughts, setting up quite some psychology traps boasting AMD superiority

Roborat, Ph.D said...

pointer said ... because Intel Core is so good, AMD got scared, they 'quickly' rush out Barcy, thus Barcy owe to Core ... :) hahahah

i have to agree with you on this one. his view is oversimplified, thinking design teams react to current trends when pathfinding happens 3-5 years before product launch.

And if he thinks AMD's going to catch up with Intel process transitions because AMD's slide says so then we already know how that's going to turn out.

Anonymous said...

That guy ( dementia ... ) is even more biased than Sharikou .

Sharikou is just a clown trying to push clicks on his blog, regurgitating tons of nonsense claims. Dementia actually tries to paint it with a "fair discussion" veneer, and then each and every point related to AMD is inflated by 10x, viewed in the best light possible, as if they were running downhill with the wind at their backs; on the other hand, everything coming from INTC is treated the other way around (just check that "comparison" stated in the last article where AMD is going to catch INTC on process in 2009 - that will surely make you laugh).

I really don't know why Ho Ho and others keep posting there. That lunatic has the balls to compare a working, booting, benchmarkable Penryn A0 running at 2.9GHz with the circus presented with later steppings of Barcelona running Task Manager... the guy is an idiot.

Holonist

Anonymous said...

I read this over at scientiazone (in reference to early penryn demo):

"Not really. Intel gave no benchmarks which suggests that performance was not that good. To show genuine stability you would need to have seen these ES chips run on motherboards without BIOS updates. And, we didn't see that."

Of course, if you look at the Anandtech Penryn benchmarks, he uses A0 silicon with no BIOS updates. Too bad scientia is too blind to remember details like that.

Also, I think it's safe to assume that Intel will demo a much higher frequency Penryn at IDF (see demos from past fall IDFs). After that happens, I think we'll really get to enjoy some hypocrisy at scientia's fanboy blog. After claiming AMD's 3GHz demo means near-term availability, they will flip around and state that any IDF Penryn demo is just overclocked and won't be feasible on a large scale. Mark my words.

I'd post all this over there, but I much prefer the role of a gutter rat.

pointer said...

Roborat, Ph.D said...

...
i have to agree with you on this one. his view is over simplified thinking design teams react to current trends when pathfinding happens 3-5 years before product launch.


his view is not oversimplified, he is just being a fanboi. this is not the first time he did that. he once said Intel had to come out with SSE because of AMD's 3DNow!, while in fact it was MMX that forced AMD's move. he has always been trying to paint a rosy picture for AMD, a normal act of a fanboi.

on the other hand, if you alter his statement such that K5 owes to Pentium, K6 owes to PII, etc., while these are Intel-fanboyish statements, they look correct too.

Finally, a side note on the idle power of Penryn: we can NEVER obtain a correct idle power benchmark without a BIOS upgrade, because Penryn's supported Cx states have to be turned on to effectively observe the idle power. What Anand measured is simply the power saved from better transistors (offset by the increased transistor count), not the proper idle power with the Cx states on.

Anonymous said...

Too bad you can't turn wishful thinking and cherry-picked data into real-life production and sales. All of the carefully-manipulated data in the world isn't going to make AMD hit their targets on time or have enough CPUs at the hoped-for speeds to matter.

Besides, AMD has had the performance lead in the past. Just having the performance edge won't save them, or they'd be in a much better financial position today. A company that can't take advantage of such a good position probably isn't going to do very well when that performance lead is no longer theirs.

I don't think AMD will "BK", but I think they may wind up limping along if they continue to underperform, and that could be worse than simply dying off.

Anonymous said...

“I don't think AMD will "BK", but I think they may wind up limping along if they continue to underperform, and that could be worse than simply dying off.”

I agree. After all, they do have over a billion in sales per quarter. This, by anyone’s estimation, is still a lot of money and product. They will and must cut their losses, offer a competitive price/performance product on the low end of the market, and squeeze out a living as they have done in the past. The margins, however, will not be there until they can refine viable working product with good yields, as opposed to generating anger and losses with their business partners. On the plus side of things, they did acquire chipsets and graphics capabilities, inroads have been made with the major OEMs, they are trying to woo back the channel, they have established a good product and presence in the server arena, and can have Chartered and TSMC crank out millions of relatively inexpensive chips for emerging markets in the Far East. If they, over time, refine what they have now, and successfully address those markets with cost-effective volume, they will survive. In other words, don’t try to compete with INTC.

Wrector Ruinz realizes his grand dream of AMD “The Scrappy Little Company” who took on behemoth INTC and won, was a pipe dream gone bad. In fact, it was a momentary, Intel ‘brain fart’. Consequently, he made monumentally bad decisions in 2006, the sooner he addresses this failure, the quicker he will be able to save the company. It’s OVER, get on with it, and deal with it.

They’ve got time, thanks to the convertibles.

Their “asset light” model is out of sheer necessity at this juncture. They have been/will be knocked back to the traditional niche market they’ve existed on in the past. They went to war with the big guy and got their asses handed to them. Subsequently, they were quickly and dramatically put back in their place, in 3 quarters.

Paul Otellini has got some serious f--king chops. Man, don’t piss this guy off. Crippled, starving and limping away, AMD needs to stop the bullshit and get to work. They must heal their wounds just to stay alive. Ironically, much as Intel did, they need to get back to and refine their ‘CORE’ business and not spend much needed millions on ridiculous lawsuits. One thing Wrector needs to realize: Otellini has Wrector’s balls in his top desk drawer.

New platform technologies, esoteric technologies, and other wastes of valuable resources are just plain foolish ---now. Further, in their current financial position, building and funding new fabs, even with CHUCK Schumer’s $1.3B FOLLY in my home state, would be impossible.

This is exactly where Intel put them.
This is exactly where Intel wants them.
This is exactly where Intel needs them.

Does this look like the kind of guy that could cut your lungs out while you were still breathing?

http://www.intel.com/pressroom/kits/bios/otellini.htm

I’ll take the N.Y.C subways any day, rather than square off with this guy.

Nice try Wrector.


SPARKS

Unknown said...

Look at the performance on one of the hottest new games out there Bioshock.

http://www.firingsquad.com/hardware/bioshock_directx10_performance/page5.asp

Even an overclocked HD 2900 XT from AMD with 1GB of RAM is fragged all over by Nvidia's fourth fastest GPU - the 8800 GTS 320MB. Pathetic. This is the same kind of performance we will see from Barcelona.

AMD is finished. AMD BK Q2'08.

Ho Ho said...

anonymous
"If ignorance is bliss, Dementia must be an extremely happy person."

I think I just found myself a nice sig :D


Great review of Scientias analysis, btw. Mind if I (try to) paste some of it to his blog to see what he thinks of it? It should make an interesting discussion.

Anonymous said...

Great review of Scientias analysis, btw. Mind if I (try to) paste some of it to his blog to see what he thinks of it? It should make an interesting discussion.

Did you see any interesting discussion here? I only see trolling. Where are the facts? I only see bla bla with nothing good said, in fact all the posts here are personal attacks to Scientia.

Anonymous said...

Even overclocked HD 2900 XT from AMD with 1GB ram is fragged all over by Nvidia's forth fastest GPU - the 8800 GTS 320MB. Pathetic.

The 200$ card from Nvidia is fragged all over by Ati 100$ card. Pathetic.
Mainstream Solutions Performance

Anonymous said...

Finally a side note on the Idle power of the Penryn: we can NEVER obtain the correct idle power benchmark without a BIOS upgrade, because the Penryn's supported Cx state support has to be turned on to effectively observe the idle power.

Those extra power states are for the mobile version not for the desktop.

Anonymous said...

"Walked The Plank Or Rat Jumping Ship?"

Hey RoboRat here is what he said:

“I am leaving AMD at a time when the company is in position to break the monopoly that plagues this industry.”

Giant, Intel the plague BK in Q4/2008.

Chuckula said...

Well it looks like Sharikou is posting on a real blog for a change. Hey, why did you change from Q2 2008 to Q4 2008 for the BK that will only happen in your head?

Anonymous said...

and state that any IDF penryn demo is just overclocked and won't be feasible on a large scale.

Tell me what product is commercially available today from Intel that has already been demonstrated:

The Core 2 Duo 3.5Ghz?
Or the Core 2 Quad 3.33Ghz?

Anonymous said...

Hey, why did you change from Q2 2008 to Q4 2008 for the BK that will only happen in your head?

The K10 delay!

Roborat, Ph.D said...

anonymouse said: Where are the facts?

try looking harder. nothing gets deleted here.

The 200$ card from Nvidia is fragged all over by Ati 100$ card. Pathetic.

i don't think anybody here disagrees with the notion that AMD is the undisputed leader in the cheap and slow value segment.

Your argument only backfires, because the real question you should ask yourself is why NVIDIA's supposedly slower graphics card commands a $100 premium over the supposedly faster AMD card.

Roborat, Ph.D said...

Anonymous said...
Hey, why did you change from Q2 2008 to Q4 2008 for the BK that will only happen in your head?

The K10 delay!


Only 2 QTRs? Someone needs to wake up. If AMD were only 6 months behind with Barcelona, they would be launching 2.6GHz next month.

You can only stop counting AMD's delay the moment they release the 2.6GHz part. Anything sold before that date is a CPU produced with a broken process.

pointer said...

Anonymous said...

Those extra power states are for the mobile version not for the desktop.


nope, desktop has its Cx states too, although not as deep as the mobile version. On Conroe, for example, it supports up to C2.

Anonymous said...

“Only 2 QTRs? Someone needs to wake up.”



Doc, what you said is gospel.

If AMD had ANYTHING capable of performing ANYWHERE near competitively, or even close to the 10 months of hype we (and investors) have been force-fed for nearly a year, you could bet the farm that EVERYONE who was anybody would have a sample delivered to their home doorstep for review. What the hell happened to 40%????

If they had anything they could sell, ANRI RISHARD, wouldn’t have bailed!

Well, some might say they don’t want to tip their hand to Intel. Horseshit, I say. Being this close to a release and still no word? Whatever they have should be in the box and on the truck! The die, literally, was cast six weeks ago. Presently, they must go with what they have, or they go with nothing. Which is precisely what is happening now: they are orchestrating major damage control, and another half billion dollar loss in Q3 2007. The rest of this stuff is pure speculation and conjecture. This is all AMD has left, FUD.

Besides, we all KNOW Otellini could drop the hammer anytime he wants. Intel can leverage multiple design teams, in multiple locksteps, to make 6 months look like antiquated technology. AMD wanted competition; this is precisely what they got. This is NOT fanboyism, children, this is BUSINESS. READ: MONEY!

2.6 GHz; in a pigs eye!

SPARKS

Unknown said...


The 200$ card from Nvidia is fragged all over by Ati 100$ card. Pathetic.


Untrue. Firstly, as Roborat pointed out, it's rather pathetic that AMD's fastest midrange video card is selling for $100. The 8600 GTS starts at $155, not $200 (Newegg prices).

In real results, it's clear that the HD 2600XT can't even keep up with the 8600 GT.

http://www.neoseeker.com/Articles/Hardware/Reviews/powercolor_hd_2600_xt_review/5.html

In fact, in some cases it even gets fragged by a 7600 GT! AMD is pathetic. All their products are late and pre-fragged by the competition. Barcelona will be no exception to this rule.

Ho Ho said...

"I only see bla bla with nothing good said, in fact all the posts here are personal attacks to Scientia."

After I get a few answers to the ray tracing questions, I'll paste some of the stuff there and let Scientia himself tell us whether it is a personal attack to doubt his claims.

Anonymous said...

""If ignorance is bliss, Dementia must be an extremely happy person."

I think I just found myself a nice sig :D


Great review of Scientias analysis, btw. Mind if I (try to) paste some of it to his blog to see what he thinks of it? It should make an interesting discussion."

Go ahead - I've tried posting there and at Sharikook in the past about process/Si technology and they don't listen (also at Abinstein's).

The problem with Scientia is he could afford to be reasonable when AMD was doing reasonably well several years ago. There was no need to cherry pick random data points and try to fashion them into a conclusion.

For comparison of Si technology you need at least 4 things (Scientia tends to focus on one)

1) Introduction and ramp rate. Introduction is relatively easy (except for AMD's "shipping" or "wafer starts" BS vs actual wafers out or product available for purchase). Scientia spent a lot of time comparing one fab's (F36) wafer starts to Intel's total chip production crossover. In this manner he is able to lop 4 months off AMD's timetable (~4 months from wafer start to chip ready to ship) and easily ignores F30 and Chartered production capacity.

2) Actual process performance. People who don't understand this rather easily confuse it with PROCESSOR performance (i.e. compare Intel's 90nm P4 to AMD's 90nm K8 and say obviously Intel's process is crap - while conveniently forgetting the architectural influence; just check out the mobile 90nm performance). Scientia also now uses clockspeed as an indicator, which is just as ridiculous.

The only way to compare Si technology on a purely process basis is from transistor performance or things like SRAM cell size. Things like Ion/Ioff ratios, Idsat, and CV/I all show Intel equal to or better than AMD, and the margin will get worse at the 45nm node as AMD is indicating a 20% gain (Intel is on the order of a 40% gain). Also, if you look at SRAM cell size, Intel is able to more aggressively scale a standard 6T SRAM cell.

The other dirty little secret is Intel hits its key process technology targets prior to ramp (the 30-40% gain) whereas AMD does it incrementally. Neither approach is necessarily good or bad - but you can't simply compare the start of Intel's 65nm process with the start of AMD's 65nm process, because the latter is essentially just a shrunk version of 90nm. So while it seems like a 1-year delta in timelines, it is actually much larger, as it is generally at least 9-12 months into AMD's process when they start hitting mature technology node targets. This is convenient PR fodder for most of the technical press as they don't understand this nuance.

3) Manufacturability. Much is made of APM, but little is actually understood about it. All IC manufacturers have something of this sort, however only AMD sees the need to tout it. In a perfectly ideal manufacturing process you would have NO NEED for APM, as your process should be rock-solid stable (obviously in real life this is not attainable). The more you need to "tweak" your process wafer to wafer, lot to lot, or product to product, the less likelihood you have of a stable, predictable manufacturing process. The best manufacturing has a relatively wide process window that works for a given design, so you don't have severe binsplit or even yield dropoffs with minor process fluctuations.

4) Cost (I'll ignore yield even though this is one of the biggest lever bars as there is no PUBLIC info on AMD vs Intel yields)

AMD touts the move to immersion litho as an advantage, but THIS IS ACTUALLY MORE COSTLY than a two-pass conventional dry 193nm litho process. Hard to believe but true (Intel even provided some normalized data on it). The immersion tools are ~2X higher capital price per tool - which pretty much wipes out the 1-pass vs 2-pass factor - and operate at a SLOWER run rate (wafers per hour), which tilts the cost back in dry litho's favor. There are some negatives to two-pass dry litho - consumable cost, fab floorspace - but we are generally talking about only ~4 critical litho layers (STI, poly, contact, metal 1), so floorspace and consumable considerations are not that big a deal. The immersion tool is also less mature, so its availability/utilization at best will be equal to a dry litho approach (more likely lower). Not to mention Intel can potentially reuse some of their tools from 65nm.

Other key cost issues: AMD uses one (maybe two with K10?) more metal layers than the equivalent Intel node. This is a function of how good their interconnect technology is and/or whether they are having yield problems. If you have poor RC performance then you need greater spacing and in some cases additional metal layers. IBM touts "ultra low-K" but this is complete bunk according to those in the know - a good portion of the effective capacitance comes from the etch stop layers required to pattern the ILD, and there is no mention of the effective K value with this factored in (some really low-K solutions have failed due to poor etch stop solutions, which may actually make the effective K worse than a "higher-K ILD").

IBM/AMD use more strain process steps (again, people view this as more "advanced," but it actually means a more complicated, costlier process). And the SOI vs bare Si cost delta is significant (estimates put it at ~10% of overall wafer cost). The lack of competition among SOI substrate suppliers also puts you in a relatively weak bargaining position, not to mention at the whim of any supplier's production problems.

On a side note, you'll also hear fantastic cost-saving scenarios for AMD due to 300mm (a while back) and the 90nm-to-65nm transition. There are some gains, but don't confuse cost gains with capacity gains.

Most benchmarks (some may be public at Sematech) put 300mm at around 30% savings, and each new tech node generally adds 10-15% to wafer cost. We all know by now that AMD does not get 2X the die count moving from 90nm to 65nm. When you factor in the larger K10 die sizes, the aggregate die size on 65nm will not be much better than on 90nm, especially as K8 ramps down and K10 ramps up. The other dirty little secret is that F30 was pretty much fully depreciated, so the capital aspect of wafer cost (~50%) was "free," while this is obviously not the case for F36.
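A quick sketch of that "cost gains vs capacity gains" arithmetic, using the rough figures above (~12% wafer-cost adder per node, less than 2X the die count); every number here is an illustrative assumption:

```python
# Toy die-cost model for the 90nm -> 65nm argument. Numbers are
# illustrative assumptions, not AMD's actual wafer costs or die counts.

def cost_per_die(wafer_cost, dies_per_wafer):
    return wafer_cost / dies_per_wafer

base = cost_per_die(1.00, 100)   # 90nm baseline (normalized units)

# Ideal full shrink: 2x the dies on a wafer costing ~12% more.
ideal = cost_per_die(1.12, 200)
# More realistic shrink: only ~1.6x the dies (bigger K10 die, imperfect scaling).
real = cost_per_die(1.12, 160)

print(round(ideal / base, 2))  # 0.56 -> ~44% cheaper per die
print(round(real / base, 2))   # 0.70 -> only ~30% cheaper per die
```

The gap between the ideal and realistic cases is exactly the point being made: a shrink that doesn't deliver the full 2X die count delivers far less of a cost win than the headline suggests.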

Sorry for the ramble / rant...I'm done...

Anonymous said...

In real results, it's clear that the HD 2600XT can't even keep up with the 8600 GT.

In real results, it’s clear that the 8600 GTS can only keep up with the HD 2400 XT.

Mainstream

Entry-Level


DirectX 10 Games

In fact, in some cases, the $100 card even frags the $200 card by 160%! AMD is superb and even gives you a sound card for free!
All their products are the best and frag the competition. Barcelona will follow this rule.

Penryn, only 4.75% faster than Conroe, will not be enough. The 25% smaller die is pathetic. Intel's $200 processor will cost $160; Intel will start losing huge amounts of money because it has to price its processors at $60 to remain competitive. Intel BK Q4/2008.

Anonymous said...

So your point is that ATI outperforms NVIDIA at a level where game performance is uniformly unacceptable?

Because being twice as fast as a card that gets just 5 FPS is like being twice as wealthy as someone who is worth $10, when you are trying to get a bank to lend you money so you can buy a house.

Christian H. said...


Meanwhile on the Intel front, the Centrino advertising blitz has begun (Intel Ads). Intel plans to halt AMD's gains in the mobile segment with an estimated $50M advertising budget. This is in contrast to AMD's overall budget of $20 for making this YouTube video. (And no, there isn't a missing 'M' in the $20 figure).



This is all like a soap opera with little kids. From Scientia to Sharikou to Sharikou180 to here, the same jerks say the same things.


Grow up. This is worldwide business not a pissing contest. And don't worry this site ISN'T on my favorites. I puke enough.

Christian H. said...

Oh I forgot to add this little tidbit from TGDaily.

AMD 51% of retail purchases:

66% desktop up 11%
44% mobile up 20% (from January)

62% revenue desktop
42% revenue mobile

Linkage

Snippet:

According to the market research firm, AMD-based desktop and notebook PCs achieved a unit share of 51.0% in July in U.S retail: Compared to June 2007, the company was able to steal 13.4 points from Intel; AMD's share was also up 4.3 points year over year from 46.7% in July of 2006.

Surprisingly, AMD-based notebooks also showed an unusually strong presence during the month of July. 44.8% of all U.S. retail notebooks integrated an AMD processor – a record high for the company. Unit share was up 11.8 points from 33.0% in June and up 20.5 points from 24.3% in January of this year. Intel Centrino notebooks, on the other hand, were estimated at a retail unit share of 55.2% in July - down from 67.0% in June and more than 75% in January.


It looks like the cheap worthless K8 is fooling everyone.

ALL HAIL THE DUOPOLY!!!

Unknown said...

HD 2600 XT compared with several different Nvidia cards:

http://www.hardwarecanucks.com/hardwarecanuck-reviews/1778-ati-hd2600xt-performance-preview.html

A clean kill for Nvidia in all performance tests. Using tests where cards are getting 5 or 10 fps is just stupid.

The results clearly show that Ati's fastest video card is left competing with Nvidia's third-fastest card, the 8800 GTS.

We've seen the Barcelona performance:

http://www.dailytech.com/Quick+and+Dirty+AMD+K10+Cinebench/article7574.htm

It has higher IPC than K8, but is still 5% below Kentsfield in IPC. Add to that the fact that it is running at a poky 2GHz, and we can easily see the performance Barcelona brings.
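As a rough sanity check of that claim (single-thread throughput ~ IPC x clock), assuming a 2.66GHz Kentsfield as the comparison point - the clock figure is my assumption, not from the post:

```python
# Back-of-envelope: relative performance ~ IPC ratio x clock ratio.
# The 5% IPC deficit is from the post; the 2.66GHz Kentsfield clock
# is an assumed comparison point.

def relative_perf(ipc_ratio, clock_ghz, ref_clock_ghz):
    return ipc_ratio * (clock_ghz / ref_clock_ghz)

barcelona_vs_kentsfield = relative_perf(0.95, 2.0, 2.66)
print(round(barcelona_vs_kentsfield, 2))  # ~0.71 of Kentsfield, per core
```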

Intel is making plenty of profit. Intel's Q2'07 profits were up 44% YoY while AMD's were down a whopping 700%! If you do the calculations, AMD is losing $7 every single day.

While AMD loses $7 a day, Intel makes $14M a day. It's perfectly clear who will be going BK within a year.
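The per-day figures (with the missing 'M' restored) are just the quarterly results divided by ~91 days. The quarterly dollar amounts below are rough approximations implied by the posts, used here only to show the arithmetic:

```python
# Back-of-envelope conversion of quarterly results into a daily rate.
# Quarterly figures are rough (AMD ~-$600M, Intel ~$1.3B in Q2'07),
# treated as illustrative inputs only.

DAYS_PER_QUARTER = 91

def per_day(quarterly_millions):
    """Average daily profit/loss in $M for a quarterly figure in $M."""
    return quarterly_millions / DAYS_PER_QUARTER

amd_daily = per_day(-600)    # ~ -6.6 -> "AMD loses ~$7M a day"
intel_daily = per_day(1300)  # ~ 14.3 -> "Intel makes ~$14M a day"
print(round(amd_daily), round(intel_daily))  # -7 14
```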

Unknown said...

$7? $7M! D'oh!

Penryn delivering 10% IPC gains with SSE4 over current Core 2 CPUs, scaling to 4GHz.

http://www.adsensetips.com.cn/2007/08/18/intels-45-nanometer-penryn-core-processors-may-exceed-4ghz/

AMD is finished.

Unknown said...

LOL.

Market share doesn't mean diddly squat when you're selling everything at little to no profit at all.

Anonymous said...

"LOL.

Market share doesn't mean diddly squat when you're selling everything at little to no profit at all."

Bigger LOL at the idiot Howell's remarks:

US RETAIL MARKET!

Let's see: the US overall market is ~35% of overall CPU sales (desktop, mobile, server).

US retail is probably <40% of US sales...


NOW LOOK AT THE ACTUAL ARTICLE:
Compared to June 2007, the company was able to steal 13.4 points from Intel; AMD's share was also up 4.3 points year over year from 46.7% in July of 2006.

So a one-month snapshot with a 13% increase? A 4.3% increase YoY?

Now take that 13% (or 4%) * 35% * 40% * (whatever the desktop mix is)...

And compare that to a 30-35% ASP decrease... hmmm... which one is bigger?

Yay! I've got a business plan: let's cut our prices by >30% and pick up <1% overall market share. Hey, we could always borrow money...
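Spelling out that multiplication (every factor here is the poster's rough estimate, not an audited number):

```python
# The share-gain vs ASP-cut arithmetic above, spelled out.
# All factors are rough estimates from the post, not audited numbers.

retail_share_gain = 0.134   # one-month US retail desktop unit-share jump
us_share_of_world = 0.35    # US as a fraction of worldwide CPU sales
retail_share_of_us = 0.40   # retail as a fraction of US sales (stated as <40%)

overall_gain = retail_share_gain * us_share_of_world * retail_share_of_us
print(round(overall_gain * 100, 2))  # 1.88 points, before the desktop-mix factor

asp_decline = 0.30  # vs a ~30-35% ASP decrease
print(overall_gain < asp_decline)  # True: the price cut dwarfs the share gain
```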

Mr. Howell, your intellect is truly dizzying... I liked you better when you were marooned on the island.

Anonymous said...

HD 2600 XT compared with several different Nvidia cards:

ATI HD2600XT Performance Preview 28-06-2007

A clean kill for Ati: even using an old beta driver, it completely fragged OC'd NVIDIA cards and consumed much less power.
Using tests where cards are getting 500 or 1000 fps is just stupid.

The results clearly show that Ati's fastest video card is left competing with their own cards.

Anonymous said...

Ati's old, no-longer-supported chipset completely frags Intel's new G33 chipset. Intel doesn't let OEMs sell much better products because of this:

ATI Radeon Xpress 1250 Performance Review

Ati, with the old SM2.0, offered much better gameplay, much better performance, and perfect image quality. Intel's SM3.0 worked on paper but didn't run most of the tests. Missing textures and other issues are what you get from faked specs.

Nvidia will have a hard time selling Intel chipsets because of Intel's illegal tactics.
Intel threatens Nvidia: hand over SLI, or else…

Anonymous said...

Intel's CrossFire implementation is buggy.
Intel faked CrossFire support in their chipsets.
This is a real CrossFire implementation:


Gigabyte introduces Mainboard that supports Quad CrossFire

Gigabyte again leads the industry…


An enthusiast's dream motherboard, and much cheaper than Intel's castrated (faked-specs) alternative.

Anonymous said...

"AMD 51% of retail purchases:

66% desktop up 11%
44% mobile up 20% (from January)

62% revenue desktop
42% revenue mobile"

Umm, if they're doing so well in the market, with your keen eye for detail, why are they posting losses of MORE than $500 MILLION per quarter? I could corner the market selling ten-dollar bills for five dollars!

Obviously, you're a bait-the-argument type of guy who reads something, posts anything, but doesn't have a clue what he is reading.

What I usually ask mouthpieces such as yourself is, "Are you holding positions in this company?" I'll wager no, you're not.

You see, that's the rub. Isn't it? AMD lies. My dislike for AMD? They keep sucking in unwary investors with misinformation and FUD. JUST LIKE ENRON.

Still a believer? Then by all means, buy 500 shares of AMD. You can get 'em cheap at $11.67 today!

GOT IT?

SPARKS

InTheKnow said...

Sigh. There goes the neighborhood. At least I don't have to actually read our anonymous troll's posts since he is kind enough to bold them all and warn me.

Anonymous said...

Gosh, Scientia is just so damn uninformed it's embarrassing (because he truly believes he's in the know).

His latest comments on "asset light":

"Since AMD does have a second source (unlike Intel) this is Asset Light."

Actually, Intel does do some outsourcing; it is just not well known, it is done on the foundry's processes, and the vast, vast majority is still done in house. AMD running anywhere from 1000-3000 wafer starts is not "asset light" - this is ~5% of AMD's production. To put this in perspective, AMD is outsourcing something roughly the size of Intel's research and development lines for new technologies. (But again, AMD talks up "flex capacity," and the uninformed press, analysts, and AMD fanboys lap it up.)

Did I ever tell you about how fantastic APM3.0 is? It's much much better than APM2.0 which is much much better than APM1.0. How does it work? Well that's complicated and trade secret, but it is WORLD CLASS!

"Since AMD doesn't have a separate development FAB this is Asset Light."

No, this isn't asset light; it's called doing your development in a different place! Intel's development line turns into production (D1A - F20, D1c - F25, D1d...) - I guess in Dementia's world Intel is also doing "asset light"? The only real difference is the dedicated RESEARCH facility (RP1); both companies turn their development lines into production lines.

"Additional: Intel has its own compiler which is coded in-house whereas AMD chooses to work closely with Portland Group to make sure that a compiler is available that fully utilizes its CPU hardware."

Yeah, because that compiler work sure takes up a lot of fixed assets... are you freakin' kidding me? What, this saves AMD the money of having to buy a bunch of Intel-based computers to do the compiler development - how is that asset light? AMD "chooses" to outsource this, or "doesn't have the expertise and/or desire to do this in house"?

I suppose Henri leaving and not being replaced could also be considered part of the asset light strategy!?!?

"There is nothing wrong with Sharikou's intelligence. His problem is that he wants AMD to succeed so badly that he loses objectivity. You can see similar situations where common sense takes a backseat to wish fulfillment by watching Ghost Hunters"

Is he describing Sharikou or himself? I see a lot of "wish fulfillment" in his more recent blogs - AMD will catch up by the 32nm node, AMD crossed over faster, the "asset light" comments, etc.

His blog is coming close to the sheer entertainment value of Sharikou's. Meaning don't take the content seriously, just read it for the laughs.

Anonymous said...

A paper-launch 2GHz vs Intel's current 3GHz is no match, lol.

Unknown said...

A clean kill for Ati: even using an old beta driver, it completely fragged OC'd NVIDIA cards and consumed much less power.
Using tests where cards are getting 500 or 1000 fps is just stupid.


Who cares if the drivers are old? There are new Nvidia drivers as well.

So according to you, using benchmarks where a card gets 5-15 fps (a slide show, not a game!) is sensible, but looking at cards getting 30-60 fps (a smooth framerate; games run great at 30 fps or above) is stupid? The only reason the HD 2600 XT consumes less power is its 65nm process; the 8600 is still made at 80nm.

Here are a ton more reviews: Go crazy!

http://www.hardocp.com/article.html?art=MTM3NywsLGhlbnRodXNpYXN0
http://techreport.com/articles.x/12843
http://www.hexus.net/content/item.php?item=9601
http://www.hexus.net/content/item.php?item=9187
http://anandtech.com/video/showdoc.aspx?i=3023

The Anandtech review is great, as it includes the HD2900 XT and the 8800 GTS 320MB as well. Look at the results with AA enabled and it's no surprise that Nvidia owns high end gaming.

pointer said...

Hi Roborat,

Just realized you put up a link to my blog. Thanks for that. I won't post often, but I'll try to post more (well, still not that much).

I just saw two new comments there; I guess it's the Roborat link effect... :)

Anonymous said...

"What, this saves AMD the money of having to buy a bunch of Intel-based computers to do the compiler development"


God, I LOVE THIS F--KING SITE!

Why do I think this guy has been on the inside and seen it all?

Impressive!

SPARKS

Anonymous said...

Well, here it is. The rubber has finally hit the road at 1.9 GHz. Maybe, now, we can get some real benchmarks on this thing.

They will, indeed, put all the bullshit to rest. Personally, I can't wait.

http://www.theinquirer.net/default.aspx?article=41980

SPARKS

Anonymous said...

A clean kill for Ati even after using an old beta driver it completely fragged OC NVIDIA cards and used much less power consuming.

From your link:

3DMark 05-- ATI places 3rd, after two NVIDIA cards
Prey-- ATI finishes last
Rainbow Six: Vegas-- ATI finishes fourth
Company of Heroes-- ATI finishes fourth
Medieval II: Total War-- ATI finishes fourth

I'm not even sure why you linked this; the reviewer admits that the ATI card was an engineering sample and that the drivers were very buggy and unstable. How is a pre-release card running buggy drivers and losing every comparison a case of "completely frag[ging]" the competition?

Anonymous said...

the reviewer admits that the ATI card was an engineering sample and that the drivers were very buggy and unstable. How is a pre-release card running buggy drivers and losing every comparison a case of "completely frag[ging]" the competition?

Ask Giant. He was the one who linked to this old preview article. I just changed his story a bit.

Even those results aren't as bad as you say, since the NVIDIA cards were OC'd and the Ati cards cost 50% to 100% less.

Unknown said...

Ask Giant. He was the one who linked to this old preview article. I just changed his story a bit.

Use these links then:



http://www.hardocp.com/article.html?art=MTM3NywsLGhlbnRodXNpYXN0
http://techreport.com/articles.x/12843
http://www.hexus.net/content/item.php?item=9601
http://www.hexus.net/content/item.php?item=9187
http://anandtech.com/video/showdoc.aspx?i=3023

Anonymous said...

Ask Giant. He was the one who linked to this old preview article. I just changed his story a bit.

Even those results aren't as bad as you say, since the NVIDIA cards were OC'd and the Ati cards cost 50% to 100% less.


giant's first link was to a full review, where the 2600XT struggled in games when AA/AF was used. It was unable to differentiate itself very much from a 7600 card.

You responded with a link showing that in a scenario where a game is unplayable due to poor frame rates, a 2400XT has better numbers than an 8600GTS.

I think the point made in most of the articles is that there's a good reason that the 2400/2600 series cards are so inexpensive compared to the NVIDIA brand... they don't perform very well unless you really turn down the quality settings on your games.

This is not good for ATI. I'm not sure that many people will be willing to accept the lower price if it means sacrificing quality to that degree. I do think that there's a market for the 2400/2600 in low-end OEM systems, and ATI could potentially do very well there. But I think that's pretty much the limit for those two cards.

Anonymous said...

It was unable to differentiate itself very much from a 7600 card.

Are DX10, VIVO, and HDMI not enough?
Tell me, what's the difference between the Nvidia 7xxx line and the 8xxx line?
You win a cookie if you have the right answer.
You responded with a link showing that in a scenario where a game is unplayable due to poor frame rates, a 2400XT has better numbers than an 8600GTS.

So, anything wrong with that? Both are unplayable, so what?
Do you prefer the card that does 200 fps and costs $400, or the card that does 100 fps and costs $200?
Or the card that does 5 fps and costs $70, or the card that does 5 fps and costs $199?
Am I guilty because they can't do graphics card reviews?

I think the point made in most of the articles is that there's a good reason that the 2400/2600 series cards are so inexpensive compared to the NVIDIA brand... they don't perform very well unless you really turn down the quality settings on your games.

The Ati cards were as fast as or faster than the Nvidia cards in some games with the same quality settings, and cost much less…
I just don't understand: first you say they are both unplayable, but you want to turn on AA to make the game even more unplayable? Your point is a little strange…

This is not good for ATI. I'm not sure that many people will be willing to accept the lower price if it means sacrificing quality to that degree.

Again, the same crap… what's your point? It's unplayable… but you at least want to turn on AA, because at least you have better AA so you can enjoy a better-quality slide show, is that it?

I do think that there's a market for the 2400/2600 in low-end OEM systems, and ATI could potentially do very well there. But I think that's pretty much the limit for those two cards.

So saving a few bucks, and having better video decoding, good power consumption, and more than enough performance is not OK?
Show me what game is unplayable with the 2400/2600 line that is very playable with the 8400/8600 line.


Anonymous said...

Are DX10, VIVO, and HDMI not enough?
Tell me, what's the difference between the Nvidia 7xxx line and the 8xxx line?


I was referring to the benchmarks.

So anything wrong with that? Both are unplayable so what?

That *is* the point. What good is focusing on gaming benchmarks if the cards being reviewed are awful at games?

Am I guilty because they can't do graphics card reviews?

What? My point was that you were focusing on the wrong review in order to make a point, especially if the intent was to ridicule giant's post.

The Ati cards were as fast as or faster than the Nvidia cards in some games with the same quality settings, and cost much less…

Not in the initial link giant posted, in which the cards produced frame rates ranging from passable to acceptable, and the ATI cards did not compare well. In the second link, where performance was too low to be useful, the ATI cards "won".

I just don’t understand, first you say they are both unplayable but you want to turn on AA to make the game even more unplayable? Your point is a little strange…

There are two links that giant provided, the first showing playable framerates, the other showing extremely low frame rates. On the first one, the ATI cards suffered considerably when AA/AF was turned on, a problem that did not exist with the NVIDIA cards.

So saving a few bucks, and having better video decoding, good power consumption, and more than enough performance is not OK?

I don't see where they had "more than enough performance" in either of the linked reviews.

Show me what game is unplayable with the 2400/2600 line that is very playable with the 8400/8600 line.

You're missing the point. If games are unplayable on either card, I'm not buying either card. Comparing the cards in benchmarks where they both look lousy doesn't help the consumer make a choice.

Roborat, Ph.D said...

pointer said: ...I won't post often, but I'll try to post more (well, still not that much).

i'm sure you can feel the pressure already.

Anonymous said...

AMD "rewards" its loyal folks for holding out?

Here's a crappy early sample: a 1.9GHz chip at 95 watts.

I don't care HOW GOOD THEY MAKE THE BENCHMARKS! In 3 months they will release either something several speed grades up at the same TDP, or this same 1.9GHz chip at a lower TDP.

Why would anyone buy these first chips knowing there will be a substantial improvement right behind them? AMD is just trying to gouge early adopters with not-ready-for-primetime chips. Given the probable low volume of these chips, what is the point of releasing a substandard part? It's not like they are going to make much off of them - it looks to me like they are just trying to save PR face by meeting the "mid-2007" launch.

If you are looking for a server chip, wait for the next stepping (or buy Intel).

Anonymous said...

INTHEKNOW: Saw one of your recent posts on Dementia's blog asking whether a 200mm fab can easily be converted to a 300mm fab.

I refuse to post on Dementia's because of his refusal to listen to anyone who presents a counter argument.

So to answer your question - there are several issues in converting (gutting) a 200mm fab into a 300mm one.

1) Height - this may not be that big an issue. There are standards on 300mm equipment regarding the maximum allowable height, and unless AMD cut their 200mm fab really tight, it should be OK.

2) Actual weight (specifically point loading) of the much bigger/heavier 300mm tools. I actually worked in a fab that did a 200mm/300mm conversion, and special care had to be taken with where certain heavier tools were placed from a structural-integrity perspective. This is more a nuisance and inconvenience than anything else (the industrial engineers deal with it in the layout).

3) Overhead vehicle system - this is new to 300mm, since the weight of a lot is non-ergonomic and therefore not allowed to be carried by hand. This requires a fair amount of work, as tracks and the entire system have to be put in place. Short term, carts can do the job, so again not that big a deal.

4) Facility (subfab) requirements. Power, water, and bulk chemical delivery all differ between 200mm and 300mm, so the subfab may have to be partially or fully gutted as well. Again, not a structural issue, but this is costly/time consuming.

In short, height is the only potential showstopper, but it is unlikely to be one. Of course, when you read Dementia's and everyone else's view of converting the fab, they make it sound simple - just a matter of ripping out old equipment and putting new equipment in. It is actually more costly and complicated than that.

It is also far easier, faster, and less costly to start from scratch than to do this incrementally as AMD is planning to do. Of course, AMD doesn't seem to have a choice, as they want/need the 90nm production - but I think they would be better off outsourcing the 90nm work to Chartered and completely gutting the fab.

GutterRat said...

MUST READ Article.

How AMD's Failures Are Triggering An Intel Monopoly

http://hubpages.com/hub/How_AMDs_Failures_Are_Triggering_An_Intel_Monopoly