1.22.2009

Another wider-than-expected loss for AMD

SAN FRANCISCO (Reuters) - PC chipmaker Advanced Micro Devices (AMD.N) posted a wider-than-expected loss in the fourth quarter as worldwide demand for PCs continued to shrink.
AMD reported on Thursday a net loss of $1.42 billion, or $2.34 a share for the quarter ending December 27, compared with a loss of $1.77 billion, or $3.06 a share, a year ago. Excluding certain items, the company posted a loss of 69 cents a share. Analysts, on average, had expected a loss of 56 cents a share, according to Reuters Estimates.
Revenue for the second-largest maker of central processing units for personal computers fell 33 percent to $1.16 billion, compared to analysts' estimates of $1.19 billion, according to Reuters Estimates. AMD said it expected revenue in the first quarter to decrease from the fourth quarter.

If AMD never made money under the best market conditions, what hope is there when the economy is at its worst? For the first time, the talk of bankruptcy at this company is deadly serious. Looking at the bright side of things, the Zoners could finally get a life.

354 comments:

Tonus said...

"I wonder how Paul Otellini's "we see a distinct advantage to having all our cores work" comment tastes now?"

I suspect that he will brush it aside. What I am wondering though, is if this is in reference to Xeon processors, which normally go into workstations and servers -- who will want such a CPU? Hey, why not try our 6-core Xeon? They cost less, and so what if two of the cores failed validation testing?

Anonymous said...

Hyc: For reads, yes, a single SSD can beat a striped pair of raptors. For writes, no. Striped SSDs however.....

That's what I've heard as well. I've never had a problem with a failed drive in RAID 0, in about 10 years on various PCs; however, I do take precautions with weekly backups on an external drive. So if SSDs come down in price I might just stripe a pair.

Sparks: I suspect, as insiders, they are watching INTC's lawyers very closely on this deal.

Or maybe they're too busy watching their investments that actually return some growth and dividends :). Anyway, I believe AMD will hold another stockholders' vote tomorrow, and since they're close, I expect it will pass. Amazing how the stockholders will vote to dilute their own investment...

ITK: I wonder how Paul Otellini's "we see a distinct advantage to having all our cores work" comment tastes now?

LOL - I always chuckle when I see dry humor like that - much funnier than AMD's or nVidia's 'in your face' style which often comes back to haunt them when they crash & burn.

Isn't Otellini retiring in the next few months? If so, then Intel can just attribute the tricore statement to him and move on :)

Anonymous said...

Ahhh... Dementia at it again over at AMDzone, refusing to accept ACTUAL DATA that Core i7 uses less total power for a given task than a Phenom II.

While the world of AMD fanbois continues to shrink and shrivel, Sci and his diehards continue to stink and snivel :)

If they wish to believe i7 is an inefficient power hog, let 'em. After all, they have had little good news since summer '06. Besides, I think Westmere will shut even them up next October. Well, on 2nd thought, maybe not :)

I saw a posting on Westmere last December on a rumor-type blog, indicating that according to 'sources' Westmere will show remarkable performance gains over Nehalem in certain benchmarks - apparently some features or improvements did not make it into Nehalem in time. I'll see if I can find the link. Dunno how true it is, but the guy may have some inside sources, and one of his predictions was right on the money as well.

Anonymous said...

Another funny Dementia error - from the front page of AMDZone:

Nvidia Regains GPU Share In Q4

Forbes reports from Nvidia's earnings call that they believe they regained share in Q4 from AMD. The bad news is that in Q4 sales fell off a cliff.


Guess this puts the lie to his bogus explanation as to how nVidia's GPU marketshare is sliding downhill:

I've looked over the figures from Jon Peddie research and I'm frankly baffled how some people here have reached the conclusion that nVidia is doing well. nVidia is getting slammed.

For example, let's compare graphic chips Q3 2008 to Q4 2007. Why are we comparing different quarters? Because Q4 is usually the highest quarter of the year, but in 2008 the sales were unusually high in Q3 and much lower in Q4. So the cross comparison should be more accurate.


Sci should learn to just leave his "anal-lysis" at the front door before stepping outside :).

Anonymous said...

Looks like the rumor mill blog I mentioned above, about Westmere's performance enhancements, was on the money after all:

http://www.dailytech.com/Gulftown+is+the+Flagship+of+32nm+Westmere+Line/article14227.htm

All Westmere chips will feature higher performance and lower power consumption. This is made possible through the use of fourth generation strained silicon and second generation high-k/metal gate technology, referring to the use of a High-k gate dielectric and a metal gate electrode.

Intel is reporting at least a 22 percent performance increase clock for clock over their 45nm process, and there are still many steppings to go before they go to market. Westmere also has seven new instructions, designed for accelerating encryption and decryption algorithms. All Nehalem and Westmere based processors will use DDR3 exclusively.

Gulftown is the successor to the Nehalem-based Core i7 and is due in the middle of 2010. Gulftown has six cores, but is capable of efficiently handling twelve threads at once, thanks to its next generation Hyper-Threading. It will use the X58 chipset due to the LGA-1366 socket, but there are rumors of a newer version coming in 2010 that will feature support for USB 3.0 and SATA 6Gb/s.


Now I'm back to waiting for GULFTOWN :)

Tonus said...

The last few comments here seem like a good example of what Guru is saying about scientia. He is quoted as saying (both comments being very recent):

"His argument? The Intel i7 is using DDR3 vs the Phenom using DDR2 and he thus deems it impossible to conclude anything."

...and...

"For example, let's compare graphic chips Q3 2008 to Q4 2007."

It appears as if having different parameters is a problem when it works against his claims, but different parameters are okay to use when they support his analysis. That's pretty convenient.

Anonymous said...

Sci should learn to just leave his "anal-lysis" at the front door before stepping outside :).

He does this... why do you think he has effectively shut down his blog (even with the heavy-handed moderation) and only posts in the friendly confines of AMDzone, where if anyone challenges him, others yell fanboy/troll and attempt to goad that poster into a comment the mods can ban them for.

Anonymous said...

Intel is reporting at least a 22 percent performance increase clock for clock over their 45nm process, and there are still many steppings to go before they go to market.

Here's a better link, as I think Dailytech got something confused:

http://www.lostcircuits.com/mambo//index.php?option=com_content&task=view&id=55&Itemid=42&limit=1&limitstart=1

I think the Dailytech guys messed up the lingo. I've seen other sites reporting a 22% performance increase for 32nm over 45nm (not sure how dailytech is adding in clock for clock language when comparing one tech node for another). Now if they had said Westmere vs Penryn or Westmere vs Core i7, then the clock for clock language would have made sense.

Intel is also reporting significantly lower leakage, which obviously should help power at equivalent clocks (or allow Intel to move the clocks up). Keep in mind, though, that a 5X or 10X reduction in Ioff will not translate into a 5X-10X power reduction!
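To put that in perspective, here's a quick back-of-the-envelope sketch in Python; the leakage/switching split is a made-up illustration, not Intel's actual numbers:

# Illustrative only: assume leakage is 30% of total chip power at some
# operating point (the real split isn't public and varies by design).
total_power_w = 100.0
leakage_w = 0.30 * total_power_w        # 30 W of leakage
dynamic_w = total_power_w - leakage_w   # 70 W of switching power

ioff_reduction = 10.0                   # the claimed 10X Ioff reduction
new_total_w = dynamic_w + leakage_w / ioff_reduction

print(f"old total: {total_power_w:.0f} W, new total: {new_total_w:.0f} W")
# A 10X Ioff cut only shaves ~27% off total power in this example,
# nowhere near a 10X power reduction.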

Also of note - look at the defect density plots (which are ahead of the previous ramps) - remember the whole "AMD will have an advantage by implementing immersion on 45nm as Intel will have to deal with the learning curve" argument? Ummm... not so much... implementing technology for technology's sake is not an advantage unless it has a TANGIBLE benefit (time to market, cost, performance). AMD spent a lot of time talking about implementing immersion at 45nm, but has never published a single data point on the tangible, real-world benefit of doing so.

The final thing of note is how Intel compares performance to the previous (current) technology - there are no BS comparisons to stuff that is 2-3 generations old like IBM/AMD has done with things like strain.

BTW - 14% NMOS, 22% PMOS is nothing to sneeze at, but is not that major for a tech node change. If 32nm is to see major benefits (power, clock, IPC) I think it will likely mostly come from the design side.

Anonymous said...

Tonus: It appears as if having different parameters is a problem when it works against his claims, but different parameters are okay to use when they support his analysis. That's pretty convenient.

Agree 100%. That's why he is known as a biased fanboy, despite the disclaimers on his blog. He jumps all over Anandtech for "biased" testing, not that anybody outside AMDZone really cares, or even knows what Sci blathers on about, and yet his analyses are far more deficient. Either he is too deeply ingrained to see his own bias, or too stupid to realize the yawning cracks in his logic.

Anon: why do you think he has effectively shut down his blog (even with the heavy-handed moderation) and only posts in the friendly confines of AMDzone, where if anyone challenges him, others yell fanboy/troll and attempt to goad that poster into a comment the mods can ban them for.

I think he rarely posts on his blog because (1) it's embarrassing to him that Robo's blog here gets 20 times the traffic and about 100 times the expertise :), (2) the threads quickly die out unless he responds defensively, which takes a lot of time and energy, and (3) even he has reached the bottom of the barrel when it comes to writing good stuff about AMD :). Oh, and (4) he's probably embarrassed by being about 90% wrong over the last 3 years or so - even a buffalo nickel has better odds than he does of calling it right :)

Anonymous said...

Anon: I think the Dailytech guys messed up the lingo. I've seen other sites reporting a 22% performance increase for 32nm over 45nm (not sure how dailytech is adding in clock for clock language when comparing one tech node for another).

Yes, after rereading the Dailytech article and then looking at the Anandtech article, I noticed the "22% improvement" the DT article mentioned, matched the 22% improvement in the P-channel MOSFET performance. And Intel has stated before that their 32nm node had outstanding performance and the highest drive currents of any 32nm node. So you're correct in that DT probably screwed up their story.

Oh well - if Westmere indeed had 22% IPC improvement over Nehalem, I'm sure all of us here would be drooling by now :).

SPARKS said...

How the hell is INTC getting silicon to run at nine tenths of a volt? I've got HEXFETS and Schottky's in my power supplies that are comatose at 1.25 v!

Sure, they've got a low voltage drop (~0.2-0.4 V), but that's AFTER you get 'em to turn on!

What does INTC got, Professor Dumbledore working in the basement?

SPARKS

InTheKnow said...

...only posts in the friendly confines of AMDzone, where if anyone challenges him, others yell fanboy/troll and attempt to goad that poster into a comment the mods can ban them for.

Actually he is being taken to task by a fellow Zoner named kaa. Reading a number of kaa's comments I'd say it is a pity he doesn't post here as well. He seems well informed, and most unusual for those posting in the zone, he isn't blinded by hatred for Intel.

InTheKnow said...

Dug up a couple of links that may be of interest to the more technically inclined on this board.

First is a process flow that is supposed to be based on a generic 90nm process. It shows images after major process steps and names the process steps between each image in green text at the top. A couple of slides don't make any sense unless you can read the Korean, but the flow is all in English.

Second is a more detailed discussion of Intel's metal gate process. It is actually about a year old, but I haven't seen it before.

InTheKnow said...

I always chuckle when I see dry humor like that - much funnier than AMD's or nVidia's 'in your face' style which often comes back to haunt them when they crash & burn.

I thought it was a great line too, but it does make it hard for Intel to claim any kind of "moral high ground" any longer.

InTheKnow said...

What I am wondering though, is if this is in reference to Xeon processors, which normally go into workstations and servers -- who will want such a CPU?

I don't think that is really an issue. The process is no different from fusing off a section of RAM, which is done all the time.

InTheKnow said...

G, was that you who posted the question for Sci over at the Zone about the market share numbers? I noticed that a very straightforward question with real numbers wasn't well received and has been "moderated". LOL

Anonymous said...

I don't post over there anymore - I tried once or twice, but anytime I disagreed with Dementia or Abinstupid (who has actually toned it down quite a bit), I was called a troll or fanboy (or both). When I dared to question some argument (which of course had no links to support it), I, of course, had to provide links to refute it. I questioned this philosophy to no avail (I asked for links to support the original argument), and apparently providing a boatload of links to refute some ridiculous arguments (instead of just 1 or 2) wasn't received too well...

Then again it could have been me saying - if you don't like this link, here's another and here's another and.... I guess I may have been rubbing it in a bit :) Needless to say the message was ignored and the messenger was attacked. And thus ended my posting over there.

Tonus said...

"Actually he is being taken to task by a fellow Zoner named kaa."

I get the impression that Kaa presents something of a problem for them there. He is a long-time member and has a good reputation among them. He is also pro-AMD. So the more fanatical members cannot simply dismiss him with the "intel fanboi" label and persecute him the way they do anyone else who doesn't repeat the party line.

I've read his commentary with great interest. He is clearly a person who really enjoys his line of work and follows it closely. I think he wants to see AMD succeed out of a genuine hope that more companies competing at the top is a good thing, as opposed to simply hoping that AMD crushes all opposition and builds a utopian society on the ashes of Intel.

InTheKnow said...

I get the impression that Kaa presents something of a problem for them there. He is a long-time member and has a good reputation among them. He is also pro-AMD.

I have no problem with the pro AMD bias. In fact, I like seeing the opposing viewpoint if it is presented rationally. I posted at Sci's board for quite a while before it became totally overrun with rabid AMDites and Sci began his moderation campaign.

I'll freely admit to an Intel bias, so it would be rather hypocritical of me to deny others the right to a bias.

I just find it refreshing to see a modicum of sanity show itself in the Zone.

Tonus said...

"I have no problem with the pro AMD bias."

Well, when I say 'bias' I am usually thinking of someone who will try his darnedest to skew things in favor of his bias. That's why I didn't use the word when describing Kaa. I think he's pro-AMD in that he would like to see AMD succeed, but it doesn't cloud his arguments or perception of the CPU industry.

There are a number of people at AMDZone who are pro-AMD, and there are a number who are biased. The ones who are pro-AMD want to see AMD succeed, but they are unwilling to swallow the more outrageous arguments and theories that are presented. The ones who are biased come up with reasoning that ranges from questionable to downright batshit insane.

It's what I like about this place; there are pro-Intel people here, but there's not much bias. Wanting to see company A succeed over company B is one thing. Discarding reality in order to fit the "facts" to your theories or beliefs is entirely another.

Tonus said...

"Well, when I say 'bias' I am usually thinking of someone who will try his darnedest to skew things in favor of his bias."

I mean this to read "in favor of his viewpoint" or "in favor of his desire." It's pretty awkward as written, though I guess it makes sense in some strange way.

pointer said...

yup, being supportive of your favorite company is one thing (true supporter), and being fanboyish is another, and quite a few people there show the latter traits (a few are true supporters):

on the reviews
1) shoot down every single review that goes against what they wish
2) call every single test (benchmark) non-real-world, yet are unable to name any 'real-world' apps they would rather see compared (scared of shooting their own foot if they named one)
3) as one poster who just got perma-banned there said, "there are signs people are ready to throw the SPEC benchmark out the window" (somewhere in the NHM SPEC benchmark thread :)

on name calling
1) accuse people of spreading FUD while the accuser is a real FUD king :)
2) call people troll / fanboy as flame bait

Anonymous said...

ITK: I thought it was a great line too, but it does make it hard for Intel to claim any kind of "moral high ground" any longer.

Maybe Intel can blame their change of heart on the lousy economy?? :) Personally I doubt that many people, outside of AMDZoo - I mean AMDZone - would pay that much attention :)

Speaking of AMDZoo, it looks like the tar & feather gang has succeeded in ousting most of the moderate members who don't libel Intel every other sentence. I bet Kaa and some others will be banned before too long..

Tonus said...

"I bet Kaa and some others will be banned before too long.."

On the one hand I cannot imagine them doing that, as he seems to have the respect of the forum members. On the other hand, of late he has been unmasking a few of them (see his latest posts on power efficiency where he points out how sci and abi are using data and arguments in an out-of-context and/or misleading manner in order to support an incorrect claim).

If you take away too much of their emotional safety net, I suppose they could do just about anything.

Anonymous said...

see his latest posts on power efficiency where he points out how sci and abi are using data and arguments in an out-of-context and/or misleading manner in order to support an incorrect claim.

Yeah but Kaa brought out Abinstein's true 'genius' who has now apparently graduated from the Dementia school for analysis. Apparently the way you measure the power efficiency of a processor or architecture is the delta between the idle and full load power consumption.

So by this amazingly stunning analysis, if you make a processor that idles at 200W but only consumes 205W under load, then you may have made one of the most efficient architectures. The other thing you could do, instead of idling the chip down into the 1GHz range and/or lowering voltage, is just increase the idle speed to close the power delta between idle and full load. Of course you cannot look at ACTUAL power consumption to measure the power efficiency... that's just crazy talk that will earn you a ban.

These delta/ratio arguments are just so damn amusing... the funny thing is these folks have become such slaves to the numbers that when they can generate a number that magically fits their argument/belief, they don't ask what would happen if we apply the same criteria elsewhere.

hyc said...

Yeah but Kaa brought out Abinstein's true 'genius' who has now apparently graduated from the Dementia school for analysis. Apparently the way you measure the power efficiency of a processor or architecture is the delta between the idle and full load power consumption.

So by this amazingly stunning analysis, if you make a processor that idles at 200W but only consumes 205W under load, then you may have made one of the most efficient architectures. The other thing you could do, instead of idling the chip down into the 1GHz range and/or lowering voltage, is just increase the idle speed to close the power delta between idle and full load. Of course you cannot look at ACTUAL power consumption to measure the power efficiency... that's just crazy talk that will earn you a ban.

These delta/ratio arguments are just so damn amusing... the funny thing is these folks have become such slaves to the numbers that when they can generate a number that magically fits their argument/belief, they don't ask what would happen if we apply the same criteria elsewhere.


I think they're just failing to explain themselves completely.

The delta is obviously only interesting once you've already established a clear reference point. "A-B=35" doesn't tell you anything unless you already know A or B. The numbers A and B are already published/known, so the analysis continues from there.

From this report
http://techreport.com/articles.x/16147/12

The idle power for the PII 940 is 143.8W and for the i7 965 is 166.5W. We don't know how much of that figure is the CPU alone, but presumably the majority of it is the system/peripherals/etc.

However, we can assume that the majority of the delta between idle and full load is due to the CPU, and that very little of the remaining system power draw changes with CPU load. So yes, it's an interesting number for seeing how much energy the CPU needs to do X amount of work. But it's only useful because the baseline is also known. Obviously the delta by itself is meaningless.

pointer said...

Hyc said ... However, we can assume that the majority of the delta between idle and full load is due to the CPU, and that very little of the remaining system power draw changes with CPU load. So yes, it's an interesting number for seeing how much energy the CPU needs to do X amount of work. But it's only useful because the baseline is also known. Obviously the delta by itself is meaningless.

yes, the delta would be some indication... however, knowing the baseline is not enough. Under load, we have no idea how much added stress is put on memory / hard disk, etc. compared to when the system is idle, hence added power consumption there too.

Still, Sci's GPU to CPU ratio analysis is amusing :)

InTheKnow said...

So yes, it's an interesting number for seeing how much energy the CPU needs to do X amount of work.

Not really, it is only part of the picture. You also have to look at the time scale. What is of interest is the area under the curve. Just looking at the delta itself can be very misleading.

The energy used to render the scene by all the i7 processors is less than the energy used by the PhII processors. The worst of the Nehalems is 16% more energy efficient than the PhII system on this benchmark.

Interestingly enough, if you look at the i940 idle power it is 16% higher than the PhII system. If you assume the systems are configured comparably from a power perspective (which I'm not sure is the case), it is easy to figure out which system is better for your needs.

If you load the system more than 50% of the time, the Nehalem system is more efficient. If you let it sit idle more than 50% of the time, the PhenomII system is more efficient.
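Rough sketch of that break-even logic, with placeholder numbers (not the TechReport measurements) and assuming both systems chew through the same work while loaded:

# Placeholder numbers: system A idles lower but draws more while working,
# system B idles higher but draws less while doing the same work per hour.
def daily_energy_wh(idle_w, load_w, load_fraction, hours=24):
    return (load_w * load_fraction + idle_w * (1 - load_fraction)) * hours

a_idle, a_load = 140.0, 260.0
b_idle, b_load = 165.0, 230.0

for frac in (0.1, 0.5, 0.9):
    ea = daily_energy_wh(a_idle, a_load, frac)
    eb = daily_energy_wh(b_idle, b_load, frac)
    print(f"loaded {frac:.0%} of the day: A = {ea:.0f} Wh, B = {eb:.0f} Wh")
# Mostly idle -> the low-idle system wins; mostly loaded -> the
# load-efficient system wins. Where the crossover sits depends entirely
# on the real idle and load numbers.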

That is what Abinstein is overlooking, as Kaa correctly pointed out to him.

As a final note, I have to point out that the i7 processors are enthusiast processors. They were not designed with maximum energy efficiency in mind. I expect the server version of Nehalem to have better idle characteristics.

I believe the current PhII processors are server processors by design, so if AMD puts out a "Black Edition" I would expect the idle power to be worse than what we are looking at here.

Anonymous said...

So yes, it's an interesting number for seeing how much energy the CPU needs to do X amount of work

First - the delta is not a measure of how much energy the CPU needs to perform the task. It's not like the CPU is "off" at idle. The energy the CPU needs to perform a task is the total energy (power) the CPU needs.

There are one or two assumptions baked into your number... either that the idle power baseline is similar between the 2 processors, or that both have 'maximum efficiency' starting at the idle state. We know Intel and AMD use different idle clock speeds (and I'm not sure how much the voltage is cranked down - too lazy to look it up), so you can't simply say the work to perform a task is load minus idle, because the definition of idle (in terms of power and what is on/off or clocked down in the CPU) is different between the 2 processors.

What if, for lack of a better term, the idle state for one processor is far less optimal and is closer to a load state then the other?

In the case where one returns to the idle state faster, shouldn't you then put in a weighted average with the time spent idle (where the "delta" would be 0)?

In an extreme case what if one processor is running not far above the idle clock and takes 2X-3X the time? It will look tremendously efficient with the new delta metric...

Yes, you can argue the chipsets, the memory (BTW - thanks for setting Scientia/Abinstein straight on the memory power ridiculousness), the MOBOs - but it's amazingly transparent that a group of people continue to argue you have to consider the platform when evaluating cost or total performance, but then when the #'s aren't 'fitting' they suddenly change their platform religion and have to start isolating variables and making up new metrics (like power efficiency ~ load-idle delta). Is it about the total platform or the components? On AMDzone, it depends on which puts AMD in the most favorable light...

Is it OK to have turbo on? Is that 'cheating'? It makes the power delta between load and idle greater so I guess in this case it would not be 'cheating' as it would make Intel look worse in the new delta measurement?

Looking at this from a higher plane (and not after looking at the #'s)....wouldn't the measure of efficiency simply be the total energy to complete the task in question? If one processor can do a task in 10 joules while another does it in 8 joules - do I really measure the efficiency by something other than the total energy? Do I argue the 10 joules is more efficient because it has less of a power jump from idle to load?

This takes out any differences in idle states, doesn't penalize one chip from finishing faster, and allows the CPU to turn on/off whatever is best suited to perform the task (SMT, turbo, clocking down any unused cores if it is not a full multithreaded application, etc).

Anonymous said...

However, we can assume that the majority of the delta between idle and full load is due to the CPU, and that very little of the remaining system power draw changes with CPU load. So yes, it's an interesting number for seeing how much energy the CPU needs to do X amount of work

Even IF you can assume that, Energy = power * time.... you are attempting to associate (or loosely correlate) changes in power with differences in energy efficiency and completely ignore the time component.

Take the Intel vs AMD thing out of the equation and apply the delta metric to only the AMD chips in the link you provided. How would you rate the efficiency of an X3 8750 dropped into the same exact PII board/system with the "delta" measurement? It has an extremely low load-idle delta; would that be a more efficient processor than the PII 940 quad?

If I look at the bottom of the page at the energy to perform the task, I would see that the X3 takes almost twice as much total energy to perform the task so I would tend to think it was less efficient, no?
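Exactly - a toy example (numbers invented, not the TechReport data) of how the load-idle delta and the total task energy can rank two chips in opposite orders:

# Invented numbers: (idle system watts, load system watts, seconds to finish)
chips = {
    "slow chip": (140.0, 180.0, 200.0),  # small delta, long runtime
    "fast chip": (145.0, 230.0,  90.0),  # big delta, short runtime
}

for name, (idle_w, load_w, secs) in chips.items():
    delta_w = load_w - idle_w
    task_energy_kj = load_w * secs / 1000.0  # energy drawn during the run
    print(f"{name}: delta = {delta_w:.0f} W, task energy = {task_energy_kj:.1f} kJ")
# The "slow chip" wins on the delta metric (40 W vs 85 W) yet burns far
# more total energy for the same task (36.0 kJ vs 20.7 kJ).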

hyc said...

Yes, ultimately the Task energy chart is more important than the delta, and clearly the i7 is the most efficient.

Still, we'd like (OK, *I'd* like) to understand how far off the PII is, and why. The PII 940 *is* a Black Edition processor by the way, so it could certainly be clocked faster.

And yes, that's an open question I have about what the Cinebench workload really represents in terms of disk activity and anything else. Again, it's hard to factor out what's really the CPU and what's really the rest of the system. Even with identical disk drives doing "the same amount of work" the power use will be different if the throughput is different - there may be different intervals of spinup/spindown for the drive motors, etc. (So benchmarks like this which operate on 100% deterministic data should use SSDs to eliminate that factor.)

I'm curious to see if the speed vs power tradeoff is a net win or loss here, if the PII was overclocked and completed the run sooner. In my experience with my laptops, number-crunching in "battery saving" mode is a net loss, because the entire system needs to be running in C0 state for a much longer time. Slower speeds are only a net savings when the CPU is actually lightly loaded.

That's why I'm interested in the delta, although perhaps I should just be interested in the peak power. If the PII 940 was OC'd to a speed such that it matched the i7's power draw, how long would it take, and what would its Task Energy be? More or less than at its stock speed?

Anonymous said...

The PII 940 *is* a Black Edition processor by the way, so it could certainly be clocked faster.

And the i7's couldn't also be clocked faster? Sure, the unlocked multi is nice, but any Intel chip (even the non-EE ones) can be overclocked, too.

I'm curious to see if the speed vs power tradeoff is a net win or loss here, if the PII was overclocked and completed the run sooner.

That's fine, and a very good question - I have the same question, as it also pertains to whether it is better to have turbo mode on/off on the i7's (or OC'ing in general) - but it has nothing to do with this ridiculous load-idle power = efficiency argument that is absurdly being (mis)used and misrepresented at UAEzone.

I guess my point is... putting aside the eternal Intel vs AMD twisting of the facts... the load-idle delta doesn't even hold water if looking at AMD-only (or for that matter Intel-only) products. The fact that it is being used speaks to the level of bias at AMDzone. It appears to be a metric being used to explain away data that is undesirable, but it is clearly not consistent even when used only on AMD products (X3 vs X4). It seems to be a tool (much like the bogus DDR3 vs DDR2 argument) to cast FUD.

I think the point that many have mentioned here, is that it is one thing to be a fan and put spin/bias on certain facts (it is done by folks, including me, on this site). It is a whole different thing when things are created and distorted simply to promote a personal view that is biased. Especially when faced with contradicting data, the messenger or some small portion of the data is attacked to cast doubt on the overall argument.

I think the question you raise on speed vs efficiency is a great one... especially as we go down the road of turbo modes, OC'ing and likely heterogeneous cores. (I wonder how much analysis Intel did on this for the turbo mode, or whether their goal was performance without regard to optimal energy efficiency?) Is it more efficient to finish a task quickly at higher power consumption or more slowly at lower power consumption?

In any event I think it has little to do with the whole load-idle made up metric that is being promoted as some sort of artificial metric of efficiency. (BTW - the Atom would be the ultimate efficient architecture with this metric, no?)

That said, I appreciate your point of view, and the fact that you at least listen to opposing viewpoints, even if you don't agree with them. I would post over at AMDzone (as I think many others here would) if I wouldn't be dismissed as a troll at the first sign of ANY disagreement. While many disagree with you here, you do not see the same name calling or calls for a ban here, or Robo 'moderating' you out (and I hope you appreciate/respect that). Sure there is some ribbing and sarcasm, but I think for the most part people here at least listen to (if not agree with) your point of view.

pointer said...

I think the question you raise on speed vs efficiency is a great one... especially as we go down the road of turbo modes, OC'ing and likely heterogeneous cores. (I wonder how much analysis Intel did on this for the turbo mode, or whether their goal was performance without regard to optimal energy efficiency?) Is it more efficient to finish a task quickly at higher power consumption or more slowly at lower power consumption?


I did read a statement (related to turbo mode) somewhere, some time ago, that goes like this:

"it is more energy efficient to complete your task as soon as possible (hence entering turbo mode), and goes idling once done."

I think the above should be true if the user enables deeper C-states, which I am not sure whether those tests did or not. From some OC settings I've seen in the forums, they tend to disable C-states (meaning C2 and above); worse, some go to the extent of disabling C1E... (this is against spec, btw)
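A crude race-to-idle sketch of that statement (all power figures and times are hypothetical):

WINDOW_S = 60.0  # fixed one-minute window

def window_energy_j(load_w, busy_s, idle_w):
    # busy at load_w until the task is done, then idle at idle_w
    return load_w * busy_s + idle_w * (WINDOW_S - busy_s)

turbo = (90.0, 30.0)   # (watts under load, seconds busy): faster but hungrier
stock = (70.0, 45.0)

for idle_w, label in ((20.0, "deep C-state idle"), (45.0, "shallow idle")):
    print(label,
          "turbo:", window_energy_j(*turbo, idle_w), "J",
          "stock:", window_energy_j(*stock, idle_w), "J")
# With a deep idle state, racing to idle wins (3300 J vs 3450 J); with a
# shallow idle it loses (4050 J vs 3825 J) - which is why disabling
# C-states muddies the comparison.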

Anonymous said...

Tonus: If you take away too much of their emotional safety net, I suppose they could do just about anything.

Agreed, especially Ghostie. He seems to have a low tolerance level for anyone he perceives as posting too many pro-Intel viewpoints, or maybe not enough pro-AMD viewpoints. Apparently there is a minimum Koolaid level, perhaps around belly-button high, that posters must routinely display to him, or else he targets them with stupid posts, twists their statements against them and then perma-bans them :).

I can see why the quality of discussion over there usually goes in the crapper after a while, generally after he bans a couple of people. If the Tom bros. really wanted a reputable website, and not a joke to most people incl. AMD itself, they would put Ghostie out to pasture.

Also, shouldn't Ghostie change his signature waving American flag to the UAE flag now?? Certainly he doesn't embody too many American ideals such as freedom of speech :). In fact, he reminds me more of some crotchety old mullah, who wants to behead all infidels every Friday at noon.

hyc said...

Is it more efficient to finish a task quickly at higher power consumption or more slowly at lower power consumption?

In any event I think it has little to do with the whole load-idle made up metric that is being promoted as some sort of artificial metric of efficiency. (BTW - the Atom would be the ultimate efficient architecture with this metric, no?)


Indeed.

In the absence of any actual testing, I'd hazard a guess that there's a hard threshold for this. Since power is proportional to frequency and also to the square of the voltage, and you must raise the voltage when going for higher overclocks, that says that: in general, getting the job done faster is better, but there's a point where the cost of the increased voltage overshadows the gains from increased speed.
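Using the usual dynamic-power rule of thumb (P roughly proportional to f * V^2), a quick sketch of where that threshold comes from - purely illustrative, leakage and the rest of the system ignored:

def relative_task_energy(freq_scale, volt_scale):
    # power scales ~ f * V^2, runtime scales ~ 1/f for a fixed workload,
    # so task energy scales ~ V^2
    power = freq_scale * volt_scale ** 2
    runtime = 1.0 / freq_scale
    return power * runtime

print(relative_task_energy(1.20, 1.00))  # 1.00: "free" OC, same energy, done sooner
print(relative_task_energy(1.20, 1.10))  # 1.21: +10% volts costs ~21% more energy
# To first order the frequency bump is energy-neutral (and the rest of the
# system is powered for less time, which helps); it is the voltage bump
# that eats into the gains.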


That said, I appreciate your point of view, and the fact that you at least listen to opposing viewpoints, even if you don't agree with them. I would post over at AMDzone (as I think many others here would) if I wouldn't be dismissed as a troll at the first sign of ANY disagreement. While many disagree with you here, you do not see the same name calling or calls for a ban here, or Robo 'moderating' you out (and I hope you appreciate/respect that). Sure there is some ribbing and sarcasm, but I think for the most part people here at least listen to (if not agree with) your point of view.


Thanks. Yes, I do appreciate that and of course I try to give the same in return. I think one of the differences here is that everyone has a sense of humor and perspective about the discussion. Nobody's gonna die if we disagree, or if I'm right or wrong. That frequently seems to get lost in the other forums.

SPARKS said...

Do you guys recall me asking if AMD was having issues with DDR3?

Like I said, "I'm no marketing genius, but I know a bullet in the foot when I see one."

Confirmed, they are having issues with the DDR3 controller, and it ain't pretty.

http://www.techpowerup.com/img/09-02-12/60a.jpg

As far as the above fanboy/amdzone discussions are concerned, I'm afraid anyone with a modicum of objectivity cannot dismiss that AMD's time as a serious player is over.

They seem to develop issues with every newly introduced technology. DDR3 is no exception.

Sure, I'm a rabid INTC fan. But, I'd be pissed as hell if INTC were (God forbid) executing so poorly.

The question begs, does AMD miss these issues completely, or do they release the product fully aware of the deficiencies, simply to 'look' competitive?

Whatever, a miss is a miss. Changes and/or modifications ain't cheap, to the company, or their customers.

SPARKS

Tonus said...

The impression I get is that they should have it fixed by the time it would've been any real issue (i.e., by the time AM3 motherboards with DDR3 RAM slots are widely available). It's definitely not a good sign, but it will likely not be a problem for AMD in the practical sense.

However, negative publicity of any kind at this point will likely have the effect of strengthening the perception of Wall St that AMD continues to do poorly and could hurt the stock further. I have no idea if it would have any effect in any other areas, such as their agreements with Abu Dhabi.

The reaction at the Zone is predictable. And I think that is the topic where someone accused Kaa of spreading anti-AMD FUD. I suppose it was a matter of time before he came under attack.

Anonymous said...

The question begs, does AMD miss these issues completely, or do they release the product fully aware of the deficiencies, simply to 'look' competitive?

KISS... Keep it Simple Stupid.

The whole TLB thing was overblown in my view and may have been hard to uncover during validation. But 2 DIMMs on the same channel? How was this not seen during validation? Even IF it only affects certain DDR3 modules, and even if the workaround (running everything at 1066 instead of 1333) is probably not that big a deal performance-wise, this in my view speaks to the desperation and rush to get products out these days.

Do folks need a DDR2/3 combo controller? Sure it is real nice, but exactly what % of the CPU sales does this impact? (Yeah the upgraders will whine, but quite frankly that is probably AMD's biggest captive audience and they have to do a lot more than that to piss them off). AM3/AM2+ forward/backward/sideward compatibility (kind of)? Also a nice to have.

Did any of this impact AMD's ability to get 2 sticks of DDR3-1333 on the same channel working? I have no idea, but it certainly didn't help! How many more permutations/BIOSes do you now have to validate for each chip? You have all DDR2 (all speeds, major suppliers) x all DDR3 (all speeds, suppliers) x various DIMM configs (1 DIMM, 2 DIMM, 3 DIMM, 4 DIMM, with the various channel combos for the 2 and 3 DIMM cases) x all major AM2+ and AM3 boards (and the various chipsets on each board) x all major CPU SKUs. That is a rather large combinatorial matrix to validate. And right or wrong, whoever's problem this ends up being, it is AMD's problem and AMD's responsibility to make it go well.
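Just to illustrate how quickly that matrix blows up - every count below is an invented placeholder:

# Invented placeholder counts for the validation matrix.
memory_parts = 12 + 9   # DDR2 speeds/suppliers + DDR3 speeds/suppliers
dimm_configs = 8        # 1/2/3/4 DIMMs across the possible channel splits
boards       = 20       # major AM2+ and AM3 boards and chipsets
cpu_skus     = 15

print(memory_parts * dimm_configs * boards * cpu_skus)
# 50,400 combinations with these guesses - and that's before OS, BIOS
# revision, and memory timing permutations.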

Just like the TLB (and in my view this is FAR WORSE as it should have been much easier to catch), this is mainly a PR issue. The problem is when you are small (relatively speaking) and you have virtually zero brand / marketing presence, you can't afford to continue to take PR hits.

Anonymous said...

I love the tweaktown quote (from AMD)

I spoke with Damon Muzny at AMD about this and it is not actually a problem. This behavior is actually by design.

So apparently AMD does not see having to lower your DDR3 speed to get 2 DIMMs/channel working as a problem! Apparently it was by design?

Makes you wonder why there is a fix planned if this was "by design" and "not actually a problem". I love when suppliers issue fixes to things that aren't actually a problem...

They were faced with a choice; either they could drop listing support for DDR3 - 1333 or they could design the system to down clock the memory to 1066 and recommend it to everyone using 2 modules per channel.

Again... not a big deal, but why not be upfront about it? State that the chip officially supports 1066 MHz memory, but will support 1333 MHz in 1 DIMM/channel configurations. To say this isn't a problem is to be living in a state of denial. Regardless of the cause (or the solution), current users may have to derate the speed of their DDR3 to get it working in 2 DIMM/channel configs. You can argue all you want that it probably isn't a big performance hit, but it is a problem! You can't tell me AMD "designed" things to run slower in 2 DIMM/channel configs... this, my friends, is a work-around, not a "design", and that tells me this particular AMD spinner views his customers as idiots he can spin things to.

hyc said...

This sounds like just another AMD PR fiasco, i.e., they have mishandled it in classic fashion.

The first socket 939 chips couldn't run 2 DIMMs per channel at DDR400, they downclocked to DDR333. Big deal. It took till revision E3 before they released chips certified to handle 4 DIMMs at full speed. How is this situation any different?

pointer said...

so the original report by DigiTimes on 15 Jan actually has some legs... just that AMD decided to continue the AM3 release with a BIOS workaround...

http://www.digitimes.com/news/a20090114PD221.html

"Meanwhile, AMD's is still struggling with technical difficulties to achieve stability and compatibly with the DDR3 controller built into its AM3-based CPUs, and so the company is also unlikely to transition to DDR3 until it is able to come out with a workable BIOS, added the sources."

Anonymous said...

How is this situation any different?

I think that's the point?!? Why doesn't AMD learn from its mistakes and handle things better in terms of PR? They have enough problems that PR mishaps should not be acceptable, especially repeats of ones they have made in the past - that is ridiculous.

Why try to play fast and loose, instead of learning from the past and just calling the chips 1066 compatible due to this issue? (Am I allowed to actually call it an issue? AMD PR says it is not a problem and they are eventually going to fix the non-problem).

Which is worse: the PR hit from calling these things 1066 compatible (instead of 1333), or the press coverage about how they (AGAIN) are playing a bit fast and loose about what the chip supports? How many times must they repeat the same problem before LEARNING FROM IT? Did they think people wouldn't notice? How much value do they get out of claiming 1333 vs 1066 support? The folks who do understand the difference frankly won't care, or will know enough to work around it... the folks who don't understand the difference will only see another spurious claim and wonder if it is important. I just don't see the upside in AMD doing this, as the issue is not convincing enthusiasts to buy the chip, but the general masses - all this does is take their credibility down a bit (even if it is just a small bit) and add doubt.

And as they say, the coverup is often worse than the crime. Instead of flat out saying it shouldn't be labeled 1333 compatible and that it was a mistake on their part, AMD tries to talk around the issue by saying it is running slower BY DESIGN and that IT'S NOT ACTUALLY A PROBLEM. At this point, and as you point out after seeing this same movie not too long ago, why not just say these things should be labeled as 1066, and that those only using 1 DIMM per channel are fine with 1333? (Where is the downside in that?)

It's like AMD is living in a dream world and they don't realize or care if brand perception matters or if confidence in a company's claims matters. Every time they play around with things like this (no matter how small) they are making it that much harder to build their brand (even if the "real" effect is small or none).

I don't get how AMD doesn't just seem to get it...

SPARKS said...

"And as they say, the coverup is often worse then the crime. Instead of flat out saying it shouldn't be labeled....."

Please allow me to preface/add "what is---is". Ah, the immortal phrase from Slick Willie Clinton. Billy wanted America to buy into a pipe job being less than sex.

At the time, some did. Perhaps in the high end world of the presidency/public figures, it could be less than sex. But the idiot lied under oath.

In the real world, in my world, I can assure you, my wife would have all my computer equipment, model trains, model planes, electronic test equipment, hi-fi gear, 928, model ships, (MY TOOLS!!!), suits, et al., kicked to the curb, with an order of protection to put things into perspective for yours truly. She would cut me no quarter, none, lie or no lie.

This, my friends, belies the issue as one of 'perception.'

I can't tell you how many times people on this site have said I was absolutely bonkers for spending 1500 bucks on QX9770. Maybe, but I bought into the best chip money could buy.

Are some guys nuts for buying into SSDs at 800 bucks a pop? From my perspective, NFW; as an enthusiast I applaud the purchase. Christ, why didn't they buy two and stripe the bastards? However, I don't want to hear a WHISPER of problems or issues regarding these products, not a frig'en glitch, not at these prices.

In short, AMD has screwed the pooch, again, just like Slick Willie. And, like my wife, we make no compromises when it comes to 'hardware,' if you will. My overall perception is that AMD's QC, testing, etc., is falling/failing due to budgetary constraints. I'm sure industry insiders feel the same way, as they have to buy and sell the stuff.

From the above comments, whether you're crazy enough to spend stupid money or not, we all obviously share a commonality. We are hardware freaks who would cut "no quarter, none, lie or no lie," especially when the hardware gets personal.

Buying flawed components is very personal, regardless of who's doing the buying. There's no way of bullshitting around this fact.

It's gonna cost them. This is gospel.

SPARKS

Anonymous said...

Look - Intel had the same problem with the floating point bug. Was it a big deal? How common could the error occur?

Intel (initially) kept trying to argue it wasn't that big a deal and it wouldn't affect most people. In the end they had their asses handed to them by the court of public opinion (right or wrong), as people didn't want what they viewed as potentially defective chips. Intel seems to have learned from this mistake.

AMD still seems to be learning that often times perception is (or is more important than) reality.

As an engineer in a large company, I learned (the hard way) that it isn't always about being technically right - I kept trying to argue that, well, that SHOULD be the way things are... A wise man told me that's not how they are today, and wished me good luck with busting my ass to generate technically correct solutions and recommendations while no one listens to them. Or, he told me, learn what is driving other people and present the technically correct recommendations in a manner in which people will listen. (Or more basically, while it is good to have the facts on your side, that alone is not enough - perception does matter.)

SPARKS said...

Oh, how right you are. Think of the clowns that, even to this day, will chant the "yeah but Netburst this and Netbust that" mantra, simply because it was a legitimate point at the time.

I can recall you once making a very valid point on why AMD got a pass for being the scrappy little company when they kicked sand in INTC's face. Naturally, I didn't like it, but you were correct (of course). More importantly, those were the perceptions of the times.

Times have changed. The way INTC is executing, with a budget of $6B+ in the worst of times, and what seems to be a non-event of a 32nm transition, hell, they could claim to have anti-gravity chips and people would be getting in line to buy them!

Anyway, from a more down to earth perception/perspective, I truly believe that the Nehalem EP, EX iterations will be revolutionary and set a new bar in performance and efficiency. As a hardware geek, I am looking forward to the analysis and comments of everyone on this site. We've got a very nice and very smart crew here, (ROBO and the Gang), on the hardware end, and now, on the software end. (Yeah, that means you, hyc)

There are exciting times ahead despite the terrible and unfortunate economic climate.

SPARKS

Anonymous said...

Hector RuinedAMD blew his one wad during the Netburst days.

While INTEL was screwing up, AMD could have invested money in technology, invested money in factories and grabbed more market share.

But what did Hector RuinezAMD do? Pontificate about his benchmarks and challenge Paul to benchmark races. Guess what Paul was doing: getting his house in order.

Hector went on to waste billions buying ATI, and millions of executive and engineering man-hours trying to figure out how to get that to work. Imagine what they could have done if they hadn't wasted time doing that.

But INTEL has come out on top, like I always predicted.

Anonymous said...

But INTEL has come out on top, like I always predicted.

Wow... predicting Intel would come out on top.... don't go too far out on a limb there Nostradamus! Tell us, oh wise one, what are your predictions about the sun rising tomorrow? (I'm thinking it will)

SPARKS said...

"Nostradamus", Come on now! He's Sooo 19th and early 20th Century! Pa-leese. Let's get with the times, man.

Edgar Cayce is more like it! I think his comatose induced/sleep apnea/clairvoyance may be more appropriate here.

http://www.sleepapnea.org/info/index.html

http://www.edgarcayce.org/are/edgarcayce.aspx

SPARKS

Anonymous said...

Got to read the Intel Hypocrisy At It's Best thread at the Zone. Abinstupid at it again ...

http://www.amdzone.com/phpbb3/viewtopic.php?f=52&t=136097

Gia

Tonus said...

I guess you can add "samwise" to the list of people who will have a short tenure at AMDZone.

SPARKS said...

Hey! They took my catch phrase, "Triple Cripple!" How dare they!

Well, I suppose they sneak over and browse these pages once in a while, and imitation is the sincerest form of flattery. Besides, the whole crew are representative of a company of imitators anyway.

Even Dementia jumped into the fray.

'Samwise' seems to be a fairly objective fellow. You simply don't dump 6 functional cores, he's right about the yield issue. They're just too biased to see the difference/relevance. Poor Sam was pissing in the wind. I'd like to see him come over here.

SPARKS

Anonymous said...

Looks like this guy "samwise" ran into these clowns before on Larrabee too.

http://www.amdzone.com/phpbb3/viewtopic.php?f=52&t=135409

Gia

Anonymous said...

"The first socket 939 chips couldn't run 2 DIMMs per channel at DDR400, they downclocked to DDR333. Big deal. It took till revision E3 before they released chips certified to handle 4 DIMMs at full speed. How is this situation any different?"

HYC, it is different because back then AMD was not a marketable entity; they suffered from the stigma of being the second-source supplier to Intel. AMD built itself up by catering to the smaller system builders, where mistakes like these were easily overlooked, only really drawing attention from the technical crowd who understood the pitfalls and problems (and workarounds) that came along.

When AMD found success with the Opteron, and built the brand, they also had to establish a certain trust through quality to break through into the high-profile OEMs for servers, desktops etc.

Each time AMD releases a product today, it comes under more intense and different scrutiny. As such, each blot on the product brings back painful pre-Opteron memories of AMD's typical historical performance.

This is where it hurts, this is also part of the reason AMD continues losing marketshare in server even after Shanghai launched in Q4. The memory of Barcelona cast a certain shadow of doubt in terms of quality in the minds of the IT/corporate enterprise (add to this the bad economy) and AMD cannot afford any more PR/technical gaffes.

Anonymous said...

This is where it hurts, this is also part of the reason AMD continues losing marketshare in server even after Shanghai launched in Q4. The memory of Barcelona cast a certain shadow of doubt in terms of quality in the minds of the IT/corporate enterprise

Not sure if this is the right rationale at all. First AMD just launched Shanghai - in Q4 it would have been pretty low volumes. Secondly, even if Barcelona had been 5X faster, the server market simply doesn't move that fast. To expect a new product to change the competitive landscape in the same quarter of the launch of the product is a bit unreasonable in any market, but even more so in the server market. While I have no direct evidence, I'd have to say the market share #'s are more likely due to Penryn continuing to grab market from Barcelona.

I'm not saying Shanghai will recover lost share, but it is WAY too early to assess this. To say this is related to some sort of "trust" issue is a bit premature and trying to fit an argument into the data. The other key factor will obviously be how well Nehalem is ramped and accepted.

SPARKS said...

If any one is interested, or even cares, there's a reason AMD took a 17 cent hit today. The shareholders agreed to the spin-off. So ends another great American company, at the hands of fools who squandered billions on a deal that never really had any traction to begin with. ALL the wrong moves at precisely the wrong time, with, obviously, the wrong people. What a shame.

I wonder what INTC's next move will be?

SPARKS

SPARKS said...

http://www.bizjournals.com/austin/stories/2009/02/16/daily33.html

SPARKS

Unknown said...

Sparks said...

You simply don't dump 6 functional cores, he's right about the yield issue.


You could say the same thing about AMD's tricores, "You simply don't dump 3 functional cores". The ratio is the same, 3 out of 4, or 6 out of 8.

It makes perfectly good business sense. Intel just wanted to take a cheap stab at AMD with the preferring to have all cores on a die working. I'm sure they've sold dies with non-functional (or out-of-spec) cores disabled. Was there ever a true single-core Core 2 derivative, or were they all Core 2 Duos with 1 core disabled?

Anonymous said...

Was there ever a true single-core Core 2 derivative, or were they all Core 2 Duos with 1 core disabled?

There was one HUGE DIFFERENCE which you are ignoring... AMD called tri-core innovative and unique, as if they had created or invented something new, and issued a bunch of press releases on this "unique" product. Early on some even had the delusion that it was a native three-core die (and AMD made little attempt to clear this up). Intel simply put the disabled-cache/disabled-core parts out in the market and did not try to market them as some triumph of engineering or create some new market niche!

In my view that was why Intel took a shot at AMD. AMD made it sound like something innovative, I think Otellini was trying to educate people on the fact that it was simply a chip with a bad (disabled) core, not some novel invention or new product that the marketing and fanboys made it out to be.

And it still kills me how the AMD fans say, well, it makes sense because it is like found money, as they would have scrapped the chip otherwise (yet in other comments somehow argue tri-core doesn't necessarily mean a bad core). What they assume is that these chips are sold in a vacuum and do not impact any other AMD sales, and that every tri-core sold is either a sale taken from Intel or a new sale. Think some of these tri-cores may have cost AMD some quad core sales or some dual core sales? Think it may have exerted a little pricing pressure on AMD dual cores? As AMD could not push quad pricing up for performance reasons, tri-core had to be shoved in between the dual and quad markets. A mere $2-3 decline in dual core ASPs due to this could cost AMD millions and quickly wipe out any tri-core gains. It sounds insane, but there are many scenarios where scrapping a chip is actually the best thing you can do (economically).
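A back-of-the-envelope version of that ASP argument - volumes, margins and the ASP slide are all made up for illustration:

tri_core_units  = 500_000
salvage_margin  = 20.0          # extra $ recovered per salvaged die (guess)
tri_core_gain   = tri_core_units * salvage_margin

dual_core_units = 10_000_000    # much larger volume segment (guess)
asp_erosion     = 2.50          # $ knocked off every dual-core sale (guess)
dual_core_loss  = dual_core_units * asp_erosion

print(f"tri-core gain: ${tri_core_gain/1e6:.0f}M, dual-core hit: ${dual_core_loss/1e6:.0f}M")
# $10M of "found money" from salvaged dies, wiped out by a $2.50 ASP slide
# across 10M dual cores ($25M).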

Intel runs the same risk with 6-core chips, though they are probably a bit better off as there is greater pricing flexibility in the 4/6/8-core space. There will likely be more "pricing room" between the quad and octa-core segments to fit hex-cores in, versus the (virtually non-existent?) gap between duals and quads. I also wonder if Intel will be calling these chips 'innovative' and 'unique' (I doubt they will, and if they do they should be called out for it).

Anonymous said...

If any one is interested, or even cares, there's a reason AMD took a 17 cent hit today.

I think the hit was mainly the dilution - recall that part of the deal was that the Abduls got some stock (or warrants), which means every pre-existing share is now worth a little less (or, basically, AMD just said #!& you to the existing shareholders: we need the money desperately and had to give up part of the company to get the deal accepted, so we effectively printed more shares and screwed the stockholders).

What you will see over the next few months/quarters is a debate over whether AMD is better off with the debt off the books and a much more even cash burn (paying per wafer vs spending in capital/fab cycles). What is lost on the people who are gung ho on this move is that AMD is about to take a hit on gross margins - the foundry, by spinning off, is still manufacturing on the same process with the same expenses, so the cost/wafer has not changed, and now the foundry will be grabbing a little of its own margin on every wafer sold to AMD. The only difference is AMD is spreading the cost of the fab evenly over the wafers while the foundry now has to deal with the cycles - but make no mistake, AMD is still paying those costs; it's just a change in how they are paying.

The long term question is whether the foundry will be able to get enough economies of scale through volume from other customers to offset the now "extra" foundry margin on every wafer. If they don't, this is essentially another added cost that will make it harder for AMD to compete with Intel. Short term the answer is no, as there are no non-AMD customers; long term I think the answer will also be no, as they will be competing for other customers against the likes of TSMC, who can barely turn a profit on their own. That, and there is a huge difference between manufacturing mainly one or two products in high volume vs servicing many customers in low-to-mid volumes.

People forget this is still an AMD fab with the same AMD fab cost and profitability issues, just under a different name - ATIC is bringing no expertise to the table, so I think it is ridiculous that people think a change of the name on the front door of the fabs is going to lead to any change short term. Now that they are "the foundry company", has the cost to produce a wafer changed? Has the fab utilization changed? (Long term there is a CHANCE things may change, but for the next 2 years it's the same challenges, different name.)

Unknown said...

@ Anonymous

Yeah, I think I forgot about the "innovative" claim, probably because at the time I didn't believe it. Disabling parts of a die is innovative how? ... Indeed.

Frankly I have little idea about the economics behind CPU manufacturing, but if AMD thought it'd cost them more to release tricores rather than that same die as a dualcore, don't you think they would have released the die as a dual? Who knows, surely AMD does by now though. And they've done it again with Phenom II (much to some review sites' delight, they seem to love that 720 BE), so maybe it does make economic sense to them.

pointer said...

I believe this is the part that Intel (my company) claim:
The company claims that it can effectively isolate the nonfunctioning cache and cores, so that these extra parts don't increase the chip's power draw by letting through leakage current.

http://arstechnica.com/hardware/news/2009/02/intel-details-eight-core-xeon-cache-and-core-recovery.ars

remember that when the E6400/E6300 first came out, they were actually E6600 4MB-cache dies with 2MB of cache fused off. However, that fused-off part did still consume/leak energy.

and then compare to AMD's X3 version, according to the link below (lazy to search more, anyone is welcome to challenge it):

its idle power shows no improvement (not sure about the absolute CPU idle power on this; if it is low enough, then this is not an issue), its peak power is lower, but the rendering energy is higher compared to the equivalent X4. I know I can't make an absolute/good judgement based on this single example, but the point that I want to bring out is that if one can disable components such that the disabled components do not contribute much additional power consumption, it is an innovation. AMD might or might not have done that. I am not sure. Again, welcome to be challenged with an example.
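(To make the peak-power vs. energy distinction concrete, here is a minimal sketch with invented numbers - the only point is that a part drawing less power but taking longer can still burn more energy per task:)

# Invented numbers, just to show energy-to-complete = avg power x time.
def task_energy_joules(avg_watts, seconds):
    return avg_watts * seconds

x4 = task_energy_joules(avg_watts=95, seconds=100)  # hypothetical quad: faster render
x3 = task_energy_joules(avg_watts=80, seconds=130)  # hypothetical tri-core: slower render
print(f"X4 render energy: {x4/1000:.1f} kJ")
print(f"X3 render energy: {x3/1000:.1f} kJ")  # higher despite the lower power draw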

On a separate note, I do not think one would ever see a 3-core version of WSM-6C, for marketing reasons. WSM-6C would be under the 1366 high-end platform. No point having 3C there since 1156 already has 4C.

Additional separate note :) - adding a new SKU costs money, marketing-wise and manufacturing-wise.

Anonymous said...

Anon: In my view that was why Intel took a shot at AMD. AMD made it sound like something innovative, I think Otellini was trying to educate people on the fact that it was simply a chip with a bad (disabled) core, not some novel invention or new product that the marketing and fanboys made it out to be.

Yeah, that was pretty much my take on it as well. AMD marketing must think their customer base is a herd of idiots, or else they wouldn't trot out such inane "newsmaking" ploys. However, when you're desperate to get your name in the news, even the lamer stuff gets plumped up like a pig with lipstick :).

On 2nd thought, maybe AMD marketing is actually the fanbois over at AMDZoo :). That would explain a LOT of their problems...

Anonymous said...

Anon: What is lost on the people who are gung ho on this move, is AMD is about to take a hit on gross margins

Unfortunately AMD did not have any options left - they admitted they could not afford to spend the $$ on fab upgrades so no 32nm, etc. At least the ATIC deal gives them a cash infusion to hopefully tide them over this rough economy.

What I found funny about AMD's official announcement of the spinoff was the "cautionary statement" at the end, where AMD states the deal may not succeed due to various nefarious acts by Intel:

Risks include the possibility that Intel Corporation's pricing, marketing and rebating programs, product bundling, standard setting, new product introductions or other activities targeting AMD's business will prevent attainment of AMD's current plans

Sounds like AMD is still trying to build "evidence" for their antitrust suit :).

Anonymous said...

FYI, this past week I discovered that my wife had been running (ruining?) her laptop with no AV protection - apparently Trendmicro saw some change, maybe from an XP update, and wanted verification of ownership of the software. In the meantime it inconveniently disabled itself. So she goes browsing some questionable Vietnamese celebrity websites (she is Vietnamese) and comes down with the TROJ_BHO.TY trojan last weekend.

Anyway, she finally got annoyed enough with the continuous popups proclaiming her laptop was infected with 34 viruses, port attacks (even with wifi disabled), and fake scan reports sitting on top of every other window, that she told me about it. So the first thing I find is that the trojan is dumb enough to show up in XP's process window, so I kill its thread, validate Trendmicro, and try to scrub it off her system with several different tools Trendmicro makes available on their website, after the AV software failed to do the job. Although their AV identified the trojan, there was no info on their website about it.

After a couple evening sessions with no success, I gave up on Trendmicro and went to try Microsoft's Onecare, which cleaned it off in one go. Also scrubbed the registry so now her laptop boots faster. Upshot - when Trendmicro's license expires in June, I'll be switching to Onecare.

Tonus said...

"AMD marketing must think their customer base is a herd of idiots, or else they wouldn't trot out such inane "newswmaking" ploys."

I'm curious, though, where these statements 'end up' marketing-wise. It's not like AMD does TV ads anymore, and Joe Average isn't skimming eWeek for the latest headlines.

If this stuff is only used in co-op advertising, it doesn't seem such a big deal. "TRI-CORE, THE EIGHTH WONDER OF THE WORLD" probably gets buried under all of the OEM claims about how multiple cores equal HUGE PERFORMANCE (which is backed by a graph drawn in crayon) and similarly outrageous claims about their hard drives, DVD burners, and so on.

Who *is* AMD trying to convince? OEMs? They can't be trying to convince the average tech geek. Intel-biased people will wave the claims off, and AMD-biased people aren't the people that need convincing about AMD's products.

Anonymous said...

Sound like AMD is still trying to build "evidence" for their antitrust suit :).

This is known as conditioning... you say something often enough and soon nobody asks whether it is really true. You get it out to enough different sources and suddenly people view it as multiple sources saying the same thing, as opposed to a single source just spreading it around.

Just look at the conditioning of the American people that solar and wind power will reduce/eliminate our dependence on foreign oil. If we got 100% of our electricity today from these sources we would not even impact our oil consumption as most of our electricity generation is from coal, nat gas and other sources.

But then we can convert to electric cars, right? Well, only if the focus were on electric cars! We could move to electric cars (if the technology and infrastructure were mature) and still eliminate a huge dependency on oil WITHOUT solar and wind! The difference would be that the electricity would be generated by coal and nat gas. This is an ENVIRONMENTAL ISSUE, not a dependency-on-foreign-oil issue. What the environmentalists have done is meld the development of electric cars to reduce oil dependency and the development of alternative energies to reduce dependency on coal and nat gas into one single issue. The press, not actually caring to do any reporting on the problem (or having the same ulterior motives), simply keeps repeating that alternative energy like wind and solar will eliminate our need for oil... and eventually this statement is challenged by very few people and just accepted.

If you asked 100 people whether solar and wind energy greatly reduce our need for oil, I bet 90-95 would say yes. If you asked 100 people whether moving to electric cars without solar and wind energy development would greatly reduce our oil dependency, I bet 20-25 would say yes.

Conditioning.

SPARKS said...

LEM

"You could say the same thing about AMD's tricores, "You simply don't dump 3 functional cores"."

Sure, I'll see your three core and raise you with a six core. Yield/die size, which, by the way, was what 'Samwise' was trying to point out, if you missed that, again.

The difference is that AMD came late to the table with the X3 BECAUSE their yields WERE so bad on the X4. X3 was not planned, nor was it on their roadmap. To prove the point: before INTC went to a native quad, even they said it would be difficult to do at 65nm at the time - that meant unacceptable yields. This is why INTC waited for a mature 45nm process with Nehalem.

AMD however, being committed to native quads only, and with extremely poor performance overall, only exacerbated the X4/X3 issue price-wise. In an attempt to obtain some revenue from a bad product with terrible yields and to compete with low-end INTC dualies, X3 became a (losing) makeshift, ad-lib solution.

Conversely, INTC knew very well that the large 8-core die was going to have less than optimal yields, so they PLANNED for the six core. Let's not forget we are talking about 6 fully functional, blazingly fast cores as opposed to a mediocre three-core that couldn't compete with a previous-generation dual core, especially on power consumption and ROI.

The difference is huge, especially when INTC has the quad core and a NATIVE quad core market all sewn up. If you think there's no difference between what AMD did with their X3 performance solution and what INTC has done with their XEON X6 solution, think again.

Planning and expecting an 8-core fuckup is one thing; dealing with a fuckup when you put everything in one quad-core basket the way AMD did 2 years ago is quite another. Pheromone and X3, X4 with their terrible yields and terrible performance were the final nail in AMD's coffin. We all have to agree with that, right? INTC's 6-core/8-core 'issue' will not affect 99.9% of the mainstream anyway. So who cares?

The rest is just bullshit.

SPARKS

Orthogonal said...

I haven't been following all the discussions that closely, but I'm not sure what the fuss is over the 6-core salvaged Nehalem-EX chips. The MP market is already niche enough. There isn't likely going to be much downward price pressure from these chips on the Nehalem-EP market. These salvaged chips should not be confused with Gulftown (native 6-core Westmere), which is the i7 follow-on.

Anyway, as far as AMD's decision to release salvaged triple cores (now rumoured dual cores). I don't know, it's a tough decision that has been discussed here in depth, but I'm sure they've tried to model the ROI on releasing these chips vs. the downward pressure on K8 Dual Cores. Without inside info, it's hard to say whether it has paid off.

Unknown said...

Sparks,

My argument was simply the "we prefer to have all cores on a die functional" thing that Intel (Otellini?) said. I don't recall the exact quote, but that was the gist of it. If they really meant it, they wouldn't do a 6 core out of an 8 core. Yeah, it's a really shallow argument, but after all is said and done I don't really care who said what. Things change. As long as there's decent technological innovation and (fair) competition happening, it's all good.

Anonymous said...

"we prefer to have all cores on a die functional"

The question that was asked (and which is now conveniently left out, removing the proper context) was about tri-cores... meaning: unless you are having yield problems with quad cores, why would you do it? Again, AMD was making it sound as if these products were always part of the roadmap and planned innovation. I think one of the implications (or at least my interpretation) of Otellini's statement was: why would you do tri-core unless you were forced to? (thus implying AMD was having less than ideal quad yields) And at the time Intel simply wasn't.

If the roadmaps on the six core product are valid... that to me would be an indication of enough of an EXPECTED (read: planned, read: not thrown together suddenly and spun like a top) yield loss on the octa-core product to justify introducing a 6 core version. The key difference here is market size (and the whole planning part)... AMD was trying to make quad mainstream in the desktop area. My understanding of the 8/6 core products is that they are primarily server parts and thus would be relatively low unit volumes. They would also be significantly bigger than quad cores, even when you factor in the node transition, so a higher yield loss would be expected.

Sure there is a whiff of hypocrisy or irony, but the situations are quite a bit different if folks take the time to analyze it.

InTheKnow said...

Sure there is a whiff of hypocrisy or irony, but the situations are quite a bit different if folks take the time to analyze it.

I should probably respond to this, since I brought this up first.

I'm not trying to say that Intel is not doing the right thing. I also realize that octo-core and quad-core are different beasts and should be treated differently. I just found it a bit ironic that Otellini gave such a smug (and admittedly funny) answer only to be planning to do something similar down the road. I just think it is a bit of a minor PR hit, even if it is lost in the sea of red ink that AMD is bleeding right now.

I will freely admit that I don't know a lot about AMD's tri-core products, but it seems to me that Intel's approach is better. As I understand it, simply fusing off the bad cores/cache doesn't reduce leakage current. The Nehalem power controller seems to give the ability to shut down power to the disabled cores, however. So if my assumptions are correct, the salvaged hexa-core Nehalem would have lower idle power than the octo-core. By contrast, the tri-core Shanghai would still have comparable idle power to its quad-core progenitor.

Feel free to correct me if I'm mistaken here.

InTheKnow said...

From elsewhere on the web....

JulianL wrote:Clearly all the L3 cache is still active but what is the power distribution (i.e. power consumption split between the L3, the IMC, the active cores and the disabled cores) likely to be in a loaded 2.8GHz dual core part harvested from a Deneb 920? (For the sake of simplifying the arithmetic, lets just assume that the actual TDP for the Deneb 920 is 100W.)

With a reply of:

These Deneb chips are sold as dual-core particularly due to their relatively higher TDP if otherwise.
In other words, you are not scaling a 100W QC down to 65W DC, but a 140W QC down to 95W DC.

BTW, SOI transistors have _much_ lower leakage than bulk Si ones when they are turned off.


Sounds like a healthy process nowhere near the edge of the cliff to me. LOL.

SPARKS said...

Yeah, LEM, he said it all right, and I applaud your admission that the argument itself is not profound enough to locate the elusive Higgs boson, nor unify current physics.

When 'Big Paulie' made the comment last year I, being the foaming-at-the-mouth rabid poster boy of an INTC fan, was delighted with the little snicker. The anonymous poster above, who has amplified my argument, tempered my enthusiastic approval with a cautionary observation that it was a silly thing to say for a man in his position. So did many others on this site, even those who WORK for INTC!

Huh? The NERVE, I thought!

Well, as usual, they were right, especially the anonymous poster (GURU), who will suck someone's misguided logic out of their heads and beat them to death with it. The comment, indeed, has come back to bite 'Big Paulie' on the ass PR wise, as you pointed out.

This is why lawyers tell you to keep your mouth shut, and why NVDA's CEO should be kept in a padded office under lock and key.

The fact is, the entire industry has been built on cheaper, less-than-optimal-performance, partially disabled components for decades.

This is why you buy EXTREME, top-end, top-binned chips, and sweep the rest aside.

My soon-to-be-retired QX9770 is absolutely flawless, so there! %)

SPARKS

Anonymous said...

The comment, indeed, has come back to bite 'Big Paulie' on the ass PR wise, as you pointed out.

It's not just a matter of concern over it coming back to bite you on the ass, because even if it doesn't, when you are in a position of dominance (which Intel was at the time), it may also come across as smugness. If done enough, then there is a chance of complacency setting in or a chance of the press trying to take you down a notch 'just because'.

In my view the best statement is silence and simply riding over the noise. Most people root for the underdog and love to see the big guys taken down a notch (whether it be sports, business, politics, celebrities, etc), so when you are in a position of leadership, you generally have a higher chance of doing more harm than good (especially when talking about competitors).

That said it was pretty damn funny at the time. Though I wonder if the hex-core was already planned at the time Otellini said it, remember Otellini comes from sales and I doubt he would make such a statement knowing something similar was planned. Now if he were an engineer like me, he would have said "Tri-cores? Ask me for a comment when they start 'innovating' dual cores out of these quads!" and if asked for a comment now, he should say "we prefer to minimize any issues to 2 bad cores out of 8, not 2 bad cores out of 4"

Anonymous said...

In other words, you are not scaling a 100W QC down to 65W DC, but a 140W QC down to 95W DC.

Hmmm... good to see UAEZone coming around to something that was mentioned on this site 6 (or 9?) months ago. Remember when it was 'random defects' and bad cores. Now the question is who will take the next leap over there and ask, well why would enough of these chips (especially the lower clock bins!) have TDP issues? This speaks to process variation and a small process window (as opposed to a random defect type issue).

Sure you could expect some significant fallout at the top bin(s), but then you could possibly just lower the quad core bin (not convert it to a tri-core or even dual cores now). Why are there so many of the lower clock tri-core bins?

Eventually someone will say, wow AMD was really having 65nm process variation issues and when they said 'mature yield', they were speaking in tongues (sure the die may have been functional, but the process still was poor). One could say the process was near a cliff?

InTheKnow said...

when they said 'mature yield', they were speaking in tongues (sure the die may have been functional, but the process still was poor)

I really think this stems from a failure to address both parametric issues and defect issues when discussing yield. I've never seen anyone post die yield or e-test yield data, so it kind of disappears from the media's (and hence the public's) view. The focus is almost always put on defect density.

AMD, the ever forthright, honest friend of the man on the street, seems to have embraced this fact and has adopted the motto "eschew obfuscation" when discussing their process.

Besides the obvious fact you don't give the competition sensitive data like die yield or e-test data, it is easy to see why the focus is on defect density. You can readily identify defects in-line on a daily basis and the less visible parametric issues don't show up until a wafer gets to a point where you can do e-test. (The transistor has to be complete to test it, so the wafer is done with front end processing by this point.) So even the guys in the trenches are conditioned to focus on defect density.

This focus on defect density seems to be a bit of a paradox to me. Defect density seems to slowly improve over time (look at Intel's yield plots over the last several process nodes), yet the process window gets tighter with each successive shrink. At some point it seems inevitable that parametric yields will become a bigger yield driver than defect density. When defect density gets low enough, the drive for perfect binning will begin.

And my job will be that much more interesting.

Anonymous said...

Defect density seems to slowly improve over time

The claims can be a bit misleading... as it depends on classification of defects. At the module level, I've seen areas where the defect density gets driven down to single digits/wafer and then either the "gain knob" gets cranked up or the bin size on the defect metrology gets altered and suddenly you are back in the 100's or 1000's. You can also be simply measuring surface roughness that shows up as defects to the metrology - so a lot also depends on the classification of defects.

As for the Intel yield plots... not too sure how they do them - is it killer defects/area? An aggregate of various layers? An integrated monitor? Also, the bin size of the defects is probably lower as the tech node shrinks, so even though Intel does its best with those plots, I'm not sure how apples-to-apples it is between generations (perhaps they normalize for this before they plot it?). If Intel is also reducing the bin size then the trend is actually better than what is shown (as the random defect density scales inversely with the square or sometimes cube of the defect radius) - this again assumes random defect density is a significant component of that defect density graph (which it may or may not be).

Even with the limitations, the plot is far better than what AMD shows, and it also gives a very good idea of the learning/improvement curve for Intel and how it compares to previous generations.

But as ITK says, it is far easier for folks to focus on random defects than on understanding the parametric issues that contribute to yield loss (or bin split issues)... I recall having a long conversation with Abinstupid on his blog a long time ago, where he based most of his modeling/thoughts on random defects (to which I asked, if it is random, why is the yield loss generally greatest at the edge of the wafer?). The systematic variation of the various processes, which may not lead to a non-functional die but to a die that is out of spec for one reason or another, is the greatest challenge - when folks talk DFM or RDRs it is not to address random defects... While I think both companies understand this, Intel appears to put more emphasis on it and puts a greater restriction on its design parameters, knowing what the integrated process can deliver (and putting in an acceptable cushion to allow for the inevitable process variation). AMD seems to be coming around to this, but when you get a process from IBM (whose focus in my view is not manufacturability), the process may look sufficient in the pilot line/development stage, but may not have enough of a process window to scale well to larger volumes.
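For anyone who wants to play with just the random-defect piece of this, here is a minimal Poisson yield sketch (the defect density and die areas below are invented; real models add clustering terms and, as discussed above, say nothing about parametric/systematic loss):

import math

def poisson_yield(defect_density_per_cm2, die_area_cm2):
    # Classic Poisson-limited yield model: Y = exp(-D * A).
    # Ignores defect clustering and all parametric/bin-split loss.
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

D = 0.5  # hypothetical killer-defect density, defects/cm^2
for label, area_cm2 in [("quad-core die", 2.0), ("octa-core die", 4.0)]:  # made-up areas
    print(f"{label:14s} ({area_cm2} cm^2): {poisson_yield(D, area_cm2):.1%} defect-limited yield")

Even in this toy version, the defect-limited yield of a die twice the size is the square of the smaller die's yield - which is one reason a planned salvage SKU makes sense for a big server die.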

SPARKS said...

Well, here's a case in point. Jen-Hsun Huang over at NVDA should, considering his company's market dominance, keep his big mouth shut and let the lawyers handle things.

That said, he and NVDA are absolutely furious about the lack of a Nehalem licensing agreement, and I am absolutely delighted with INTC's formal response.

"Intel has filed suit against NVIDIA seeking a declaratory judgment over rights associated with two agreements between the companies. The suit seeks to have the court declare that NVIDIA is not licensed to produce chipsets that are compatible with any Intel processor that has integrated memory controller functionality, such as Intel’s Nehalem microprocessors and that NVIDIA has breached the agreement with Intel by falsely claiming that it is licensed. Intel has been in discussions with NVIDIA for more than a year attempting to resolve the matter but unfortunately we were unsuccessful. As a result Intel is asking the court to resolve this dispute. It is our hope that this dispute will not impact other areas of our companies’ working relationship."

Oh the joy!

http://hothardware.com/News/NVIDIA-Responds-Boldly-To-Intel-Court-Filing/



SPARKS

InTheKnow said...

As for the Intel yield plots... not too sure how they do them - is it killer defects/area? an aggregate of various layers? An integrated monitor?

As I recall the plots are defect density vs time. It was my assumption that it was killer defects/area at end of line. But that is strictly an assumption on my part. As you point out, there are numerous other options.

And as Scientia was always quick to point out, the defect density is on a log basis. He never did enlighten me on what basis Intel chose for their log scale. I would have chosen something random like 8.42751 and defied anyone to try and back real values out of the data without knowing the basis, but I'm just perverse that way. :)

Anonymous said...

As I recall the plots are defect density vs time. It was my assumption that it was killer defects/area at end of line. But that is strictly an assumption on my part. As you point out, there are numerous other options.

Yes, they are log scale. And since they are used to compare defects/wafer back to the dawn of process technology, you can probably guess which scale. (Hint: intel was (and still is) an engineering company at heart)

I'm sure intel plots random (particle) defect density... however the more useful metrics *are* the binned defects, which are obtained *after* sort/etest.

Those are numbers seen internal to the company -- how many functional (within spec) die/bit are seen -- albeit obfuscated by yet another log transformation where high is good. I'd be surprised if the underlying data presented in the "defect density" plots were any different -- all it is, is a summary stat anyhow.

Chuckula said...

So the guys over at Ars Technica used some refrigeration equipment to get a Phenom II up to 4.2GHz and ran some tests - check it out.

At that clock speed the Phenom II is a solid chip, and I will say that it does do very well in games (where the GPU carries a big part of the load anyway). However, it is very interesting that in some tests even the i7 920 at 2.66GHz actually beats a Phenom at 4.2GHz!! The i7 965 beat the overclocked Phenom in the majority of tests as well, while using much less power at 3.2GHz.

The problem for AMD is that this article is sort of like a time machine traveling into the future. By its own roadmaps AMD will not have a new architecture before 2011 at the earliest. The most optimistic roadmaps I've seen are calling for 4GHz Phenom IIs coming out at the end of 2010: so what we are seeing now is a slightly overclocked version of what AMD will be offering at the end of next year.

If Intel is aggressive with the 32nm transition, it will have much smaller (read cheaper) chips than the current phenom that are performing even better than the current i7's.... that cannot be a good thing for AMD.

Tonus said...

I forget, is Ars a "spIntel shill site" or not?

:)

I agree that their comparison doesn't seem to bode well for the Phenom II. AMD needs to break out of the pricing niche that Intel has them stuck in, and if they can't outgun Core i7, that will not happen.

(It still amuses me to see people talk about how AMD is putting pricing pressure on Intel. Let me guess... Intel's recent poor results are entirely due to AMD's pricing pressure, right? AMD's strategy of taking massive losses in order to drive down Intel's ASPs is working!!!)

Anonymous said...

Interesting tidbit on lithography:
http://www.eetimes.com/news/semi/showArticle.jhtml;jsessionid=EZI3SQ1CEWG4GQSNDLPSKHSCJUNN2JVN?articleID=214502317

At 32-nm, Intel will insert its initial immersion scanners, that is, 193-nm wavelength technology. At the node, the company will use single-exposure technology--and not double patterning, Sivakumar said.

The immersion part is not so new, but I thought most (well, OK, at least I did) had expected 32nm to need both immersion and double patterning. This makes immersion a given for 22nm (either through further extension of single-exposure immersion or immersion plus double patterning) and makes me think EUV (or some alternate tech) is not a certainty for 15nm either.

I wonder what AMD's plans are at 32nm? (if they are using double patterning)

SPARKS said...

Chuckula, Tonus- Concerning that Ars Technica article, there's no need to guess what my take is on the report. Imagine ole' SPARKS here interpreting this as anything less than a complete washout for the Pheromone? Yeah, right, OK, sure.

First, they need an "actual CPU temp as reported in the BIOS was -40C" to get to 4 gig? Huh!?! You give me that cooler on a toilet trained Q6600 (GO), I'll show you some benchies.

Second, why the QX9650? How about a bone-stock 3.2 Gig QX9770? (10 to 15% faster, clock for clock)

Third, I could show my daughter how to overclock any one of those INTC baddies and it would blow that PII completely out of the water, even at 4.4 gig!

Fourth, is a 4.2 Pheromone supposed to impress someone? I've been doing this on AIR since last April, hello. Giant was screaming an E8600 at 4.5!

My take? Even under the best of circumstances, this dog has no bite, none, EVEN WITH A $900 refrigeration unit!!!

Maybe Tonus is right, Ars Technica are INTC spinmeisters, and they're showing just how bad AMD is when compared to INTC iron in a very nice way.

The thing's a mutt.

BTW: I wish I knew what the power consumption was on the PII at those speeds.

SPARKS

Tonus said...

"Second, why the QX9650? How's about a bone stock 3.2 Gig QX9770? (-10 to -15% faster, clock for clock)"

The inclusion of the Q9650 was odd. If the idea was to include a CPU that was in the same price range, the Q9650 doesn't quite fit, being close to $100 more. If the idea was that it was a 3GHz quad core to compare with the p2-940, it would've made more sense to OC it. But OC'ing it would've taken away from the thrust of the article, which was to see what a 4.2GHz P2 brings to the table.

Perhaps it was a sort of reference point. I'm not sure. They chose a weird mix of CPUs for the comparison. Just pitting the OC'ed P2 against the i7 965 would've been sufficient, given what they were saying in the article.

InTheKnow said...

My take? Even under the best of circumstances, this dog has no bite, none, EVEN WITH A $900 refrigeration unit!!!

You're too focused on the need to refrigerate the chip to get the overclock. The point was to see what the future potential of Shanghai was against Nehalem.

The author of the piece, whom the inhabitants of AMDzone view as one of their own, drew the correct conclusion. Even future steppings of Shanghai with higher clock rates aren't going to beat Nehalem. A conclusion I'm sure you're more than willing to agree with.

Perhaps not surprisingly, this article isn't getting much press in the zone.

InTheKnow said...

On a more distressing note, the chairman of TSMC sees 3 years before production levels will return to the Q3'08 levels.

SPARKS said...

"You're too focused on the need to refrigerate the chip to get the overclock. The point was to see what the future potential of Shanghai was against Nehalem."

Nah, it still doesn't fly. Would that potential be dependent on speed or temperature? To start with, everything changes when you can keep the temps below 40C, forget -40C!

Christ, maintaining ambient room temp is overclocking Valhalla for 99.9% of overclocking freaks. I'm sure you and GURU have some exponentially driven formula 'where the inverse square of the temperature, multiplied by the cube root of the voltage', who knows? All I know is, it ain't linear.

Frankly, for this chip to do so poorly WITH phase change is precisely what surprised me most. As far as the "future potential" angle goes, I don't buy that either. What future are we talking about? Is it speed, architecture, process, feature size, cache, voltage, switching speeds? There are so many variables YOU GUYS TAUGHT ME!!! This whole thing is like comparing today's lemons with tomorrow's oranges.

Sorry, ITK, my "focused on the need to refrigerate the chip" shows me exactly why AMD doesn't stand a snowballs chance in hell, no mater what they do now, today, given the previously mentioned variables stay the same. And, they won't.

Conversely, shall we take an i7 965, chill the shit out of it, and see what INTC's future potential is?

No matter, at any voltage or temperature, right now, Pheromone is a dog.

SPARKS

InTheKnow said...

As far as the "future potential" angle goes, I don't buy that either. What future are we talking about? Is it speed, architecture, process, feature size, cache, voltage, switching speeds?

I think it was a legitimate attempt to see what you could expect as an upper end best from PhII over the next year through future steppings and CTI. The tester was able to run an absolute best case stable high end clock.

While I don't really understand your venom, I see no reason to adopt Sci's tactics of picking apart the testing methodology, when the author admits a) this is a best case test, and b) that under best case conditions, PhII won't be able to best Nehalem.

In other words, Nehalem owns the top end and will continue to do so until at least the 2010-2011 time frame.

Of course by then, AMD will be looking at Westmere/Sandy Bridge.

Anonymous said...

On a more distressing note, the chairman of TSMC sees 3 years before production levels will return to the Q3'08 levels.

I don't know about you guys, but this seems like a perfect time to be starting a new foundry! Also seems like a time in which small/moderate volume suppliers will be extremely eager to sink money into adapting their existing designs to 32nm instead of simply continuing to produce them on current and lagging technologies.

Sarcasm aside - this appears to be a worst case scenario for the AMD foundry company. A foundry's viability is based on its fab utilization level, and when you can't load one fab and are going ahead building out a second (yes, I know they plan to do their own chipsets/graphics in house), I don't see how they keep the fab utilization high enough. It's not like there's a lot of 45nm SOI foundry business; AMD is waiting for 32nm bulk Si for 3rd party work (perhaps they'll do some small 45nm SOI business?), which means 2010? And most of the foundry work today is still 65nm and folks are just starting to consider 45 or 40nm techs... not sure when the demand for 32nm is really going to start. (Foundry demand significantly lags what most folks are used to on CPU tech.)

Tonus said...

AMD will be okay though, won't they? Once they split up all of their debt disappeared... didn't it? And with the foundry producing chips at a loss, AMD's gross margins should recover beautifully!

Anonymous said...

AMD's gross margins should recover beautifully!

Actually their margins will shrink in the new arrangement (and AMD has acknowledged this). Debt has very little impact on gross margin (I'm not even sure debt payments are factored into GM at all).

The problem with the gross margin? The cost to produce a wafer is STILL THE SAME! So if the foundry intends to make ANY profit, the cost/wafer to AMD will actually go up as it now also includes the foundry's margin (and thus AMD's GM will go down).
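A toy example of that margin math (every number is invented; only the mechanics matter):

# Toy illustration of why a foundry spin-off compresses the design side's GM.
wafer_cost     = 4000.0  # hypothetical cost to process one wafer (unchanged by the spin-off)
foundry_margin = 0.10    # hypothetical markup the foundry now takes per wafer
wafer_price    = wafer_cost * (1 + foundry_margin)  # what AMD now pays per wafer

good_die = 100           # hypothetical sellable die per wafer
asp      = 80.0          # hypothetical average selling price per die
revenue  = good_die * asp

gm_integrated = (revenue - wafer_cost) / revenue   # old integrated model
gm_fabless    = (revenue - wafer_price) / revenue  # fabless + foundry markup

print(f"GM, integrated: {gm_integrated:.1%}")
print(f"GM, fabless   : {gm_fabless:.1%}")

Same wafer cost, same revenue; the only new line item is the foundry's cut, and it comes straight out of gross margin.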

This is all about cash flow... the deal avoids the large capital outlays that come in chunks when building out fabs; all it does is spread that cash outlay evenly over wafer purchases, allow AMD to adjust the cash flow more rapidly as demand changes, and push that risk onto the foundry (once you have spent the money to build a fab, there is not a lot you can do if demand falls off).

AMD hopes long term that if the foundry gets more efficient through larger volumes (and more customers) that the wafer cost will go down and offset the added cost of the foundry margin. In this environment and with the competition and health of the foundry industry overall - I think calling it a tremendous uphill struggle may be overly optimistic.

Anonymous said...

Remember the IBM press on the 22nm EUV process?

IBM, AMD push 16-nm EUV effort despite tool delay
http://www.eetimes.com/news/semi/showArticle.jhtml;jsessionid=BLLIHKFFNHEEYQSNDLRSKH0CJUNN2JVN?articleID=214502843

Some say EUV won't be used until 2016. Others say it won't work at all. Originally, EUV was targeted for the 65-nm node, but the technology has been pushed out due to the lack of power sources, resists, defect-free masks and other technologies.

(My own emphasis added, also please note the 65nm plan was the original overall industry plan not IBM/AMD). Just goes to show how people can figure out how to extend existing tech when up against the wall.

SPARKS said...

Thanks, bud! I suppose my previous post, in your objective estimation, didn't have one valid point? And then, you compare me with Dementia?

Perhaps I should tone down my opinions/style to your more unbiased and objective sensibilities. Clearly, from your perspective, nothing I said made any sense at all, did it?

"I think it was a legitimate attempt to see what you could expect as an upper end best from PhII over the next year through future steppings and CTI."

And from my post..................


"Sorry, ITK, my "focused on the need to refrigerate the chip" shows me exactly why AMD doesn't stand a snowballs chance in hell, no mater what they do now, today,....."

Where did I go wrong to be compared to Sci? I'm sorry. I'll temper/tailor my opinions to your more delicate objectivity in the future when addressing you.

GURU said something about Paul Ottelini that I think you should read, and then re-read.

"Sure there is a whiff of hypocracy or irony, but the situations are quite a bit different if folks take the time to analyze it."

God, he sure does have a way with words. Doesn't he?


SPARKS

InTheKnow said...

Where did I go wrong to be compared to Sci?

To quote the only line from "Cool Hand Luke" that justifies the existence of the movie: "What we have here is a failure to communicate."

First, my apologies. It was not my intent to offend, though I can see where you might take it that way.

All I was saying is, that in this case, it seemed to me that you were adopting a flawed approach in focusing on the methodology over the substance of the review. Since Sci seems to be the master of that approach, the comparison came to mind. In no way, shape, or form do I lump you into the same bucket as him.

The only point that I'm trying to make is that while there are problems with the testing, I think there is some useful information to be pulled out of the effort.

I was fortunate enough to have a chance to spend a number of years in research. And the big take away for me was to try and get past the problems with the data to find the things you could still use to move forward.

Hopefully, the above helps clarify where I'm coming from on this point. I'm trying to separate the wheat from the chaff, and I think the "wheat" is that PHII doesn't pose a threat to Nehalem.

I suspect the folks at AMD zone feel the same way, since there hasn't been any acknowledgment of the article at all.

You can certainly question whether or not I have filtered out enough of the noise and included some "chaff" in my analysis, but I don't think you can legitimately accuse me of failing to look at the situation closely.

Anonymous said...

You boys play nice, or I'll bring some UAEZone moderators over here to ban you!!!!

No need for either of you to tone it down - so long as it is not personal, who cares?

InTheKnow said...

I thought this link was interesting. I see this as an interesting attempt to reduce development costs.

I'm a big believer in modeling, but I have to wonder how well this is going to work out. Modeling has very real limits and you have to know what they are, or you can find yourself in deep kimchee real quick.

InTheKnow said...

No need for either of you to tone it down - so long as it is not personal, who cares?

I agree, I certainly didn't intend anything I wrote to be taken personally. I thought of it more as a good natured dig.

SPARKS said...

"You boys play nice, or I'll bring some UAEZone moderators over here to ban you!!!"

All right, all right, message received.

Although, I nearly dropped a nut being in the same sentence with Dementia. Sure, I'm a vapor deposition/annealed INTC fan. I admit it. What I can't stand about that guy is that he won't admit he is an AMD Flambé fanboy.

ITK, it's cool, no sweat. The thing is though, and I still maintain vehemently (not venomously) that even under the absolute best of circumstances, PHII did poorly. That's all.

I've thought about phase change a number of times and nearly dropped $900 for the beast.

http://www.frozencpu.com/cat/l1/g41/Phase_Change.html?id=6kjqEYD4


However, with the performance of the QX9770, and its overclockability on air no less, the purchase would have been extraneous and superfluous, given its exceptional performance nearly a year ago.

Coincidentally, it was Dementia who challenged QX9770's clock speeds and performance when I first purchased it, if you recall.

SPARKS

Tonus said...

"ITK, it's cool, no sweat. The thing is though, and I still maintain vehemently (not venomously) that even under the absolute best of circumstances, PHII did poorly. That's all."

I don't think that you're that far from what ITK was saying. His point was that a 4.2GHz Phenom II, a CPU that might not show up on the market until... 2010? 2011? ...is not as fast as a 3.2GHz Core i7, a CPU that is going to show up on the market... oh wait, it's already on the market!

In other words (as I read it) a 4.2GHz Phenom 2 would be a lot like the current Phenom processors. Very competitive against an Intel processor that has been available for a good long time now. Of course, AMD will price them to sell, and when Intel knocks down prices on its bottom SKUs, we'll hear about how AMD is putting pricing pressure on Intel again.

InTheKnow said...

I've thought about phase change a number of times and nearly dropped $900 for the beast.

I've never liked the idea of LN2 cooling. The thought of having to wait to run my machine (the way I'm accustomed to) until some yutz with Air Liquide gets around to my place to refill the dewar is a real turn off for me.

SPARKS said...

"His point was that a 4.2GHz Phenom II, a CPU that might not show up on the market until... 2010? 2011?"

OK, let's go with this supposition. Say it's 2010, or 2011 for that matter. Do you guys think 4.2 GHz will be the standard, the way 3.2 or 3.3 is today? Wouldn't it be more sensible to add more IPC the way Core 2 did in 2006, along with other enhancements? Really, my 955EE running at 3.73 is certainly no match for my QX9770 at 3.2 GHz. Nor is the QX9770 a match for an i7 965 at the same 3.2 GHz.

Perhaps I learned a lesson years back when 2.8 Opterons were trouncing the 955EE. This is what I got from the Ars Technica article. Having some overclocking room is nice, especially with the top-bin varieties; that's what you pay for. You push up the multiplier, take your chances (electron tunneling, perish the thought), and presto, you've got the best PERFORMING chip on the block. (Besides, those 4.5+ gig speeds wreak havoc with all the other ancillary components in the system.)

Odd as it may seem, even for this 24/7 overclocker, I've come around to the idea that speed isn't the end-all solution it used to be. For AMD to be competitive in the future, those dates inclusive, they had better go back to the drawing board. PHII in its present incarnation ain't gonna do it. Not at any speed - neither did a 955EE at the beginning of 2006, not at any speed. Even for this overclock whore, as the numbers go, size isn't everything.

Not any more, now it's what you do with what you designed. 'The motion of the ocean', baby.

"Captain, we're going around in circles at warp 8, and we're going no where mighty fast."

I submit, Core I7 is the antithesis to Scotty's remark. The Ars Technica article supports it, obviously. That's what was on my mind.

SPARKS

Anonymous said...

Do you guys think 4.2 GHz will be the standard, the way 3.2 or 3.3 is today? Wouldn't it be more sensible to add another IPC the way Core 2 did in 2006, along with other enhancements?

Sparks, I think that was kind of ITK's point... Bulldozer (last we heard) was pushed to 2011, so between now and then all AMD has up its sleeve is more clock (or possibly more cache?) - thus even under the most optimistic circumstances the best you will probably see performance-wise from AMD is a 4 or 4.2GHz Phenom. I don't want to put word in ITK's mouth, but I think that is what he is trying to point out (which is essentially similar to what you are saying)

Doing this on a stock part is a huge reach on 45nm; it MAY be possible on 32nm with high-k - but that's probably not until late 2010/2011. So while it would make more sense to work on IPC, that means a new architecture (and presumably this is a focus for Bulldozer), and that means 2011, and that means between now and then it will be about how much PhenomII can get ratcheted up via clocks.
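(Crude first-order arithmetic, with an invented IPC gap, just to show why chasing clock alone is a losing race:)

# Performance ~ IPC x frequency, to first order. The IPC ratio below is
# invented purely for illustration, not a measured number.
def relative_perf(ipc, ghz):
    return ipc * ghz

low_ipc_4g2  = relative_perf(ipc=1.00, ghz=4.2)  # hypothetical 4.2GHz part, baseline IPC
high_ipc_3g2 = relative_perf(ipc=1.40, ghz=3.2)  # assume ~40% IPC advantage at 3.2GHz

print(f"4.2GHz, baseline IPC: {low_ipc_4g2:.2f}")
print(f"3.2GHz, +40% IPC    : {high_ipc_3g2:.2f}")  # ahead despite a 1GHz clock deficit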

The problem is OCs never seem to be a reliable roadmap for what the stock clocks will be - there's power, binning, meeting the spec with a stock cooler, oh, and a little something called reliability specs. While a 4.2GHz OC'd part may be stable, do you think it is going to meet an MTTF of 7 years? While this may not be a concern for enthusiasts who are OC'ing and upgrading routinely... when you release a stock part, it kinda has to meet reliability targets. Assuming it can fit into the TDP window (which is a huge assumption), operating at 4.2GHz could introduce other problems like electromigration.

It's kind of funny that the AMD folks who (rightly) reminded people of this when looking at Intel OC's during the early Core 2 days, conveniently forget this when it may also be applied to AMD.

InTheKnow said...

all AMD has up its sleeve is more clock (or possibly more cache?) - thus even under the most optimistic circumstances the best you will probably see performance-wise from AMD is a 4 or 4.2GHz Phenom. I don't want to put word in ITK's mouth, but I think that is what he is trying to point out (which is essentially similar to what you are saying)

Yep, that was my point. All they have to improve with is clocks (or cache) and I seriously doubt they can push them past 4.2GHz. I'm not convinced they can push them that far. So AMD is going to continue to slip until at least 2011.

By then, Intel will be offering a mature Westmere product with Sandy Bridge by the end of the year.

Incidentally, the folks over in the zone just seem to have discovered this article. Does it surprise anyone here to learn that the favorable results are all due to compiler optimizations?

Anonymous said...

Anon: You boys play nice, or I'll bring some UAEZone moderators over here to ban you!!!!

ROFL - good one! :) But without their mod powers, I doubt either Sci or Ghostie would last here very long. Even with them, they'd probably try to ban Robo as well :)

BTW, I see Sci has updated his Anandtech-critique blog, this time concerning the systems choices at different cost points. Methinks he secretly wishes Anand Lal Shimpi would give him a paid job, or something. That way, Sci might not worry so much about the odd $5 here and there for his next el-cheapo build.

Anonymous said...

ITK: I've never liked the idea of LN2 cooling. The thought of having to wait to run my machine (the way I'm accustomed to) until some yutz with Air Liquide gets around to my place to refill the dewar is a real turn off for me.

I actually investigated the cost of buying an LN2 generator once, for an astronomy club (it used to be popular for chilling CCD cameras so as to reduce dark noise during prolonged exposures). The salesguy on the phone told me their cheapest (11-liter-per-day) unit would cost $42K, but if I wrote on the club letterhead they would let it go for a measly $30K.

I suspect the heat output of an oc'd CPU would consume a lot more than 11 liters per day (CCD cameras are much lower power by a couple orders of magnitude). Not to mention the salesman recommended a lot of auxiliary equipment for filtering the compressed nitrogen for impurities. And here I had thought it would work on ordinary air, seeing as how air is 80% nitrogen anyway. Anyway, needless to say that was far too rich for our budget...

SPARKS said...

AMD is at it again.

http://www.theinquirer.net/inquirer/news/175/1051175/more-amd-layoffs

SPARKS

Tonus said...

(Looks like my earlier reply got swallowed up. Here we try again...)

"OK, lets go with this supposition. Say, it's 2010 or 2011, for that matter. Do you guys think 4.2 GHz will be the standard, the way 3.2 or 3.3 is today?"

I think the point was that if there is a 4.2GHz P2 in 2010/2011, it will be competing with the Core i7 965. Of course, by that time Intel will have a Core i7 999 or Core i8 965 or something else that AMD is unable to match. Which leaves them where they are today, having their pricing effectively controlled by Intel.

If AMD cannot get back to performance parity with Intel, they will slowly be bled to death. And that's only taking the desktop and low-end server markets into account. AMD is in for a fight in the 2P/4P/xP server market, and they do not seem to have a viable answer to Atom. Come 2011, Intel's lead in desktop performance may be the least of AMD's problems.

Anonymous said...

If AMD cannot get back to performance parity with Intel, they will slowly be bled to death.

There is one other (improbable) option and that is to have some significant unit production cost benefit which allows AMD to operate at lower pricing while maintaining a healthy margin. Unfortunately, short term the manufacturing cost/unit will likely either be flat or slightly higher than current costs, thanks to the foundry (and long term it is not clear if it will be able to deliver any cost advantage). The other issue on the manufacturing side is the tech node progressions and with Intel maintaining a 1 year lead, this only puts AMD at a further unit cost disadvantage.

So the ability to make cost lower than Intel falls on the same area as the ability to get the performance on parity - it has to come out of the design world (via die size). Unfortunately rather than following Intel's lead and putting together a design for a now quickly growing netbook/low end notebook market, AMD is trying to push performance in this cost sensitive market with a substantially bigger die part and simply shrinking/underclocking an existing part that was originally designed with SERVERS in mind!

AMD has a very good IGP part. While it would obviously take money and time, an SoC solution for them would seem to be a no-brainer - perhaps not in the netbook market, where you don't need IGPs that do 1080p and can play the latest FPS games, but in the low-end notebook market this would seem to be perfect. With the power of their IGP part they don't even need that powerful a core - but they need one that is small, low power and cheap - in other words a stripped-down version of K8, not merely an underclocked one.

I think AMD will live to regret not looking into a specific design for this market and simply thinking they could push down the existing parts while at the same time trying to "raise" the market by telling people they need more capabilities in this segment. What AMD is forgetting is that they are not solely competing against Intel in this market; they also will have ARM. I highly doubt things like 1080p and 3D graphics are going to have more clout than power and cost in this market, and AMD will be forced into either ceding the market completely or putting together a new design.

InTheKnow said...

Tonus, what a great segue into my next comment. :) I don't see AMD raising a serious threat to Intel again any time in the near future.

I think that the real action will be between ARM and Intel. I don't always agree with Rob Enderle, but I think he is right in this case that the real threat to Intel now is ARM.

ARM's power characteristics are sufficient to hold the low end. Now they can focus process shrinks on increasing their computational power while holding power constant.

Intel has the computational power and needs to use the process shrinks to drop the power while maintaining computational power.

I'm puzzled by the lack of urgency on Intel's part to move Atom to 32nm. The Moorestown platform due out at the end of this year is going to be on 45nm, not 32 which will be out at around the same time. I understand not putting a new platform on a new process node, but I haven't seen much press about moving Atom to 32nm. I would think that Moorestown coupled with 32nm would give Intel a clear edge over ARM with more computational muscle and equivalent power.

In any event, it will be an interesting race over the next 2-4 years. I think it will take that long for the dust to settle.

Anonymous said...

"New California emissions rules target IC fabs"
http://www.eetimes.com/news/semi/showArticle.jhtml;jsessionid=JD5UOOYD0ZVE0QSNDLSCKHA?articleID=215000002

Or as I like to title it "New California rules target phase out of all IC fabs for the state"

2 problems:
- While they look like a "leader", this is a state that is 40Bil in debt and looking to drive businesses AWAY? I guess when you have a president that will use the rest of the states of the union (via tax money) to bail you out, it doesn't matter?
- This does the environment absolutely ZERO good if everyone else does not adopt similar rules; it just pushes the pollution elsewhere.

Anonymous said...

Robo... if you were ever going to moderate something.....

let's keep it civil, guys

SPARKS said...

Tonus, I hate to beat this one to death, but I'll give it my best shot. At the risk of sounding like LEX, I don't think AMD has the chops now, or in the foreseeable future to compete with INTC.

Let's forget the phase change for the moment. The original premise behind ROBO's last post was "Wait no more" at stock speeds and stock cooling. His conclusion/observation speaks volumes. They finally bested a Q6600. Bravo, wonderful, magnificent!

Assuming, I repeat, ASSUMING everything remains the same ARCHITECTURALLY and PROCESS-wise by 2010, 2011, they still couldn't beat a QX9650 or ANY Core i7 product made TODAY even if it were clocked at 4.2 GHz on air in 2011. (Fat chance.)

What am I missing here? First, they would be absolutely F**KING crazy trying to tweak this design for the next two years!!! Wake up and smell the coffee: Barcelona was stillborn and shot at any speed - on air, on water, on LN2, on phase change, on the backside of PLUTO, it ain't gonna fly! THEY NEED A NEW PLAN!

Secondly, by that time INTC will not be sitting on their hands WAITING for a 4.2, 4.5, or even 5!!! GHz (what a hope) PHII part in 2011! INTC will be TICK-TOCKING THEM TO DEATH with better designs, superior processes, smaller tech nodes, and 6 BILLION smackaroos! Give me a break. With the way INTC is executing, how can anyone compare anything now to ANYTHING in 2011!?!?!

I'll even go toe to toe with GURU on this one. (very stupid, but I'll take my chances) The only thing I see in this article is that PHII is shot at any speed and any cooling, NOW!

Arab Micro Foundries needs a new design, some serious engineering, and a huge leap in process, now, in 2010, and 2011 to compete with INTC. Frankly, at the risk of sounding venomous, they are in a world of hurt.

I'm sorry guys, pile on me if you must, but I don't see how this Ars Technica article has ANY relevance in 2011, case closed.


(BTW - I didn't take you seriously about the AMDzoners threat. Apparently one slipped in. That was pretty raw.)



SPARKS

SPARKS said...

Hey! Checkout this video! I think I saw Orthogonal in one of the fancy white suits!!!

FAB 32. They're not fooling around with the 32 either.

Naturally, I was lusting over the video. The music is pretty cool, too! 568 miles of wire, check out the concentric offset bends in the rigid pipe!

The lovely automated tools!
The stacks of wafers!
OMG!

God, I love INTC.

http://www.hexus.net/content/item.php?item=17409

SPARKS

Tonus said...

Sparks,

"At the risk of sounding like LEX, I don't think AMD has the chops now, or in the foreseeable future to compete with INTC."

That is my feeling as well, which was influenced by the results from Ars Technica. I think Ars decided to attempt this (OC a P2 that high and test it against Intel CPUs at stock speeds) because P2 is currently competing with Intel's mid-range CPUs. I think they were hoping for some hope, ANY hope, that P2 might be able to tide AMD over until they can get something better online. Their results make it seem like a very small hope.

I can't help but get the feeling that many reviewers are very down about the idea that there is no longer a competition at the top of the desktop CPU ladder, and they're hoping beyond hope that they can find some faint sparkle of hope somewhere. I'd love to see AMD (and a few other companies) produce competitive CPUs and get back on track, but that does not seem likely. And being in denial doesn't change that, either.

Anonymous said...

I can't help but get the feeling that many reviewers are very down about the idea that there is no longer a competition at the top of the desktop CPU ladder

Well, reviewers are generally all about the cream of the crop, testing the latest and greatest and hoping that this brings in the enthusiasts and advertising dollars and more stuff to play with and test.

If you were a reviewer and all you had to look forward to was 2 years of price/performance and platform-performance metrics of the day, would you be all that excited? Couple that with the continued deterioration of the desktop market, the climb of the netbook/nettop market, and essentially nothing new on the horizon from AMD on the CPU side, save some marginal clock increases for the next 1.5-2 years, and I would not be all that enthused about doing reviews for the next 2 years.

Anonymous said...

Why INTEL will win against nVidia and ARM.

1) Because technology is a competitive advantage that matters
2) Moore's law makes architectural superiority less and less important with each generation
3) Economies of scale from x86 allow Intel to spend more on 1) and 2) to stay ahead of everyone

Let me first spin back 10, or was it 15, years ago, when we had x86 and, at the high end, PowerPC, SPARC, Alpha, and other very competitive architectures. Most were far superior to the trusty then-new Pentium. They were supported by some of the biggest technology companies with very leading-edge silicon technology: IBM, HP, DEC, Fujitsu, TI and others.

Remember what happened? INTEL continued to drive Moore's law, and over the 4 generations from 0.5um through 0.13um they were able to throw so many transistors at the problem that the old x86 mistake of an architecture came to conquer all the other competitors. It was simply the power of performance and density, with the huge economies of scale from a multi-hundred-million-unit PC base to re-invest, that overwhelmed the smaller but superior competing architectures.
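
To put rough numbers on that, here is a quick Python back-of-the-envelope; using the node names as a stand-in for actual feature sizes is the usual textbook simplification, not Intel's real layout data:

# Transistor density scales roughly with 1/(feature size)^2.
nodes_um = [0.5, 0.35, 0.25, 0.18, 0.13]   # the 4 shrinks mentioned above
density_gain = (nodes_um[0] / nodes_um[-1]) ** 2
print(f"~{density_gain:.0f}x more transistors in the same area after 4 generations")
# prints ~15x, before counting any die-size growth on top of it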

We can spin back even further to remember when the math coprocessor or stand-alone SRAM was a viable business 15 years ago. Moore's law allowed all of that to be absorbed by the CPU. AMD led the way with the memory controller, and soon more and more functions will also be absorbed.

ARM is a nice little efficient architecture conceived from the bottom up to be superior for power. But now that the x86 guns are looking to grow their volume, all that is left is the nice little embedded ARM space. Today ARM is licensed and manufactured by a host of people. Most are manufactured on 130nm or 90nm, with a few on 65nm and none on 45nm. So today INTEL has a 2 generation process lead. Now that they have their sights on this space, their first shot at ARM was Silverthorne. Give INTEL two more process generations and equal design cycles and they will have caught up in design efficiency and will have such a superior process lead in power/density that ARM products will be uncompetitive. Apple, like other makers, will have no choice but to go to INTEL, no different than the PowerPC versus x86 story.

Now let's look at nVidia. Their CEO is so deluded that he thinks he can compete.
Sure, they have something like a 10 year design/driver lead, but their process technology is 2 to 3 generations behind. INTEL has the time and money to learn the design and drivers. nVidia and its partner TSMC have neither the time nor money to catch INTEL, let alone keep the pace. Worst case 6 years and the can of whoop ass will be served to that arrogant CEO, best case 4 years and nVidia will be another Transmeta, Via, DEC, Sun story.

AMD? Well, I agree Intheknow got it: they are finished long term as a competitor. The spin-off of the fab has forever finished any chance they can compete.

Tick Tock Tick Tock the clock has run out on all of them. They never appreciated the true implication of Moore’s law and the x86 advantage that INTEL got blessed with.

Some can of whoop ASS, heah!
Lex

Anonymous said...

Completely flawed model.... x86 started to win because of the infrastructure around x86, not because it is a better architecture or because of Moore's law... Itanium failed because why? Moore's law allowed x86 to catch up to it? Funny that Moore's law didn't help Itanium...

Things are heading in the EXACT OPPOSITE direction you are suggesting. HW has far outpaced SW, and coupled with the economic conditions we have hit an inflection point where mainstream CPUs are truly becoming a commodity; it is not about performance, it is about cost and integration. The 'good enough' age has hit the masses. Sure, there will always be some demand for more performance or a little better power, whether it be the server space or the high-end desktop space... but what happens when quad core becomes truly standard? Are people going to want 8 cores? Are people going to want 4-5GHz instead of 3.5GHz?

Moore's law will help with the system on a chip progression... but a 'killer App' is needed to fuel Moore's law long term when NGL (next gen litho) tools will be going for 90Mil a pop.

Anonymous said...

Duh

I never said x86 was superior; it was inferior.

The reason it won is because it had volume, and the reason it had volume is because it was backward compatible and every application you wanted was also available on x86. So yes, it was infrastructure. x86 is so strong now nothing can beat it.

I don't disagree that CPU cycles are less and less valuable. But here again Moore's law means the guy ahead wins: a generation or two lead results in a die 1/2 to 1/4 the size that uses less power and is cheaper to make. Guess who has that advantage?

Today the netbook with an Atom is good enough for 95% of us. Guess who makes the fastest and cheapest one while still MAKING MONEY? And who will turn that money around to design the 32nm version first, with even more features, lower power, and lower cost? Who will have more capability on 32nm first?

AMD, nVidia, and the IBM gang have no chance here

InTheKnow said...

Today ARM is licensed and manufactured by a host of people. Most are manufactured on 130nm or 90nm, with a few on 65nm and none on 45nm. So today INTEL has a 2 generation process lead.

Ummm. In case you hadn't noticed Intel is still on 45nm. So that is a 1 generation process lead. TI is scheduled to roll out production on 45nm in 2H09, or a comparable time frame to Intel going to 32nm, so still a 1 generation lead. Leading foundries have 45nm capability and TI shows the designs for 45nm are coming out, so the lead will stay at 1 generation for the next 2 years at least.

You also need to consider just how different ARM's business model is from AMD's or Intel's. ARM has made their living on low power designs. Their designs crush Intel's efforts from a power perspective. And they have done that on an "inferior" process technology. That expertise isn't just going to evaporate overnight. They will focus on providing the designs and let the foundries focus on keeping the process gap from getting too large. Specialization has advantages as well as the obvious disadvantages that have been discussed here before.

To properly evaluate ARM vs Intel you really have to look at the leading edge ARM offerings. So what if a lot of the ARM stuff is made on 130nm? Atom's chipset is made on 130nm as well. Neither of those facts is relevant to this competition. The only thing that matters is that ARM has offerings at about 1/3 the cost and 1/20 the power while offering ~1/2 the performance.

Those metrics are still a net win for ARM and that is the gap Intel has to close in order to win this competition. ARM is going to be working hard to improve their position at the same time Intel is trying to close the gap. A stern chase is always a long chase.
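
To make those ratios concrete, here is a tiny Python sketch using the rough numbers quoted above; they are assumptions for illustration, not measured data for any specific part:

# Normalize the Atom-class part to 1.0 for cost, power and performance,
# then apply the rough ratios for an ARM-class part.
atom = {"cost": 1.0, "power": 1.0, "perf": 1.0}
arm = {"cost": 1.0 / 3, "power": 1.0 / 20, "perf": 1.0 / 2}

for name, chip in (("Atom", atom), ("ARM", arm)):
    print(name,
          "perf/watt = %.1f" % (chip["perf"] / chip["power"]),
          "perf/dollar = %.1f" % (chip["perf"] / chip["cost"]))
# ARM comes out ~10x better in perf/watt and ~1.5x better in perf/dollar,
# which is the gap Intel has to close.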

And we are starting to see the first design wins for ARM in the netbook space as they try and move the fight to Intel's space before Intel moves into theirs.

This fight is far from over and anything but a slam dunk for Intel. I believe that Intel will win the competition, but it is far from the certain outcome you seem to believe it is.

InTheKnow said...

Moore's law will help with the system on a chip progression... but a 'killer App' is needed to fuel Moore's law long term when NGL (next gen litho) tools will be going for 90Mil a pop.

For Intel I really think that has to be speech recognition. If you look at what really prevents small form factors from taking over the computing space, it is input devices and display. The computing power is close enough now.

Intel doesn't play in the display arena, so they are hostage to others there. But they do have the ability to drive computing power to a point where speech recognition becomes a viable option. My best guess is that computing power will reach that point around the 15nm node.

Anonymous said...

But here again Moore's law means the guy ahead wins: a generation or two lead results in a die 1/2 to 1/4 the size that uses less power and is cheaper to make. Guess who has that advantage?

Well, apparently not you, because 2X THEORETICAL SCALING never translates to actual 2X scaling... Again, not saying Moore's law isn't an advantage.... it's just not the advantage it USED to be, and the benefit of Moore's law will keep decreasing with time (for a given time lead). The economic benefit of Moore's law is also starting to diminish, as each node adds significant cost to the wafer (which is why Intel was/is pushing 450mm). You get a 35-40% area reduction, but these days you may be adding 10-15% or more to the wafer cost.
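
Putting rough numbers on that last point (the midpoints of the 35-40% area reduction and the 10-15% wafer cost adder above are the assumptions here):

# Relative cost per die across one node transition, ignoring yield effects.
area_scale = 0.62        # die area shrinks to ~62% (a ~38% reduction)
wafer_cost_scale = 1.12  # wafer cost goes up ~12%
cost_per_die_scale = area_scale * wafer_cost_scale
print(f"cost per die is ~{cost_per_die_scale:.2f}x the previous node's")
# ~0.69x, i.e. roughly a 30% cost reduction instead of the theoretical 50%
# you would get if the 2x density improvement came for free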

And I have to completely agree with ITK on ARM... his opinion in this area is significant, especially after him and Robo putting me (and Sparks?) to shame on the initial impact of Atom. Intel will need a generation lead just to stay competitive in terms of power. And this is not because of poor design or Intel execution. ARM is designed for low power; it is far simpler than x86, so it doesn't need the transistor count. Now, it doesn't have all the capability of x86, but in the markets it competes in, it is probably enough, and it is starting to encroach on the low end of x86.

Intel saw this, and this is why I think Atom was started (that and a way to grow x86). There is no way they would be foolish enough to do what AMD is doing (shrink and underclock an x86 design that was intended for performance and hope people at the ultra low end somehow 'demand' more performance). Intel's play here seems to be: cut down the performance to get the power into the ballpark, but keep enough of the x86 compatibility so they can offer more than ARM can do.

I also think ARM will have a hard time winning in the low end notebook/netbook market, as people have gotten conditioned to having those capabilities (whether or not they actually need them is another story). I think people will accept these devices being slower and less powerful in exchange for paying less (or getting more battery life, lighter weight, etc.). I don't think they will live with less functionality, though.

The smartphone/true MID space is another story - I think ARM will remain competitive in that area.

Anonymous said...

All the responses sound just like the debates from 15 years ago over PowerPC, SPARC, Alpha and PrecisionRISC.

Guess who won that one?

By the way, last time I checked, TI doesn't even do their own silicon at 45nm anymore. Too expensive; they can't afford the investment nor earn the money back. Their 45nm looks worse than some other companies' 65nm or 90nm.

InTheKnow said...

By the way, last time I checked, TI doesn't even do their own silicon at 45nm anymore.

Then you haven't checked close enough.

SPARKS said...

"......Worst case 6 years and the can of whoop ass will be served to that arrogant CEO.....", et al.

OMG, from your post to GOD'S ears. I will relish and savor that day. The inexcusable prices for discrete graphics, co-conspired by ATI/NVDA, must end. Last year's $900 high-end graphics cards were nothing less than obscene. (And they ridicule my QX9770 purchase!)

I can't WAIT to spend my money on an INTC discrete graphics component. I wouldn't care if it were 20% slower in overall performance. Simply getting away from those huge, expensive, power-sucking slobs would be a joy in itself. I sincerely hope their reign is coming to an end.



"him and Robo putting me (and Sparks?) to shame on the initial impact of Atom"

True, quite true. However from some twisted perspective I feel kind of flattered that I was in such good company. Being put to shame with you is a plus for me.

Somewhere between the smart phone market and the netbook market there seems to be a gray area, which as a consumer (forget the enthusiast), I don't quite understand, yet.

Sure, this is one hell of a debate, rightfully so. My mornings on the LIRR, interestingly enough, tell me a great deal about consumer demand. It's rather comical to see a good portion of the riders in each car furiously clicking away at their handhelds at 7:15A.

The crew I ride with, a majority of whom are in IT one way or another, get new devices more often than you would expect. Some carry two or more. One of my buddies who manages servers carries three: a company BlackBerry, a new super iPhone, and a small laptop! I told him he was hooked up like a goddamned Borg, with an additional Bluetooth device sticking out of his head.

Sometimes they have a half-assed impromptu show and tell, as a quarter of the car exchanges devices to see various features and functions. This tells me, as if you didn't already know, that this market is huge. I don't know where or how Atom fits into all of this. But I do know one thing: they absolutely crave power and functionality, as small as they can get it.

As for me, I'm waiting for a clipboard, 3/8 of an inch thick (or less), with a virtual keyboard on the bottom, wireless all the way, that won't need a pencil or a mouse. I'm getting old, my eyes are shot, and I need a good-sized screen. This would be a perfect place for a dual core Atom.

SPARKS

SPARKS said...

Oh yeah, my dream 'clipboard' would need to run XP, all my Windows apps, and be 100% compatible with my big machines and network.

SPARKS

Anonymous said...

Ti 45 nm is a joke if it exists compared to what INTEL delivers.

They aren't even relevant anymore when it comes to silicon technology.

In DSP and analog they are beasts. But once it makes business and form-factor sense to combine analog, DSP and logic, TI will go the way of all the other silicon manufacturers. It's only a matter of a few more generations.

Look at litho: how much does a 32nm immersion litho tool cost today? Who can afford to purchase the pre-production tools to start debugging process and design? Then who can afford to purchase a few dozen for a fab? Not some bit player with one or two products at 10-50 million unit volumes. NO, to be able to get your money back you need a volume of at least 200 million units. The only places you get that are commodity memories and high volume CPUs. Only one architecture has that.

InTheKnow likes to hang on the trees, arguing whether or not TI has 45nm. He forgets the whole fucking forrest is on fire for TI, IBM, AMD and even the foundrys. Such narrow understanding of the bigger picture

Tick tock tick tock

Anonymous said...

He forgets the whole fucking forrest is on fire for TI, IBM, AMD and even the foundrys. Such narrow understanding of the bigger picture

At least he doesn't spout misinformation! And he knows how to pluralize FOUNDRIES (must have been a typo)

Ti 45 nm is a joke if it exists compared to what INTEL delivers.

So you don't know if it exists... but if it does (which again you don't know), it is a joke (again if it does exist, but you're not sure).

I think that about sums up your level of knowledge and analytical abilities.... well played!

Anonymous said...

Oh great jedi master

Tell me about that TI 45nm

What's the SRAM cell size, and how does it compare to Intel's?

What is the drive current at the same Ioff that Intel quotes in their 45nm paper from IEDM a few years ago? Maybe even compare it with their 65nm from 3 years ago.

Let's see where TI is. Hey, didn't TI close the Kilby fab and say they were moving to foundries for future generations of logic....

Is it TI's own process, or some foundry's?

I want to see the numbers...

By the way, you never answered the really bigger question: how do these companies play and compete in technology without the economies of scale to amortize the billion-dollar-a-year TD effort required over many years for each generation?

Nah, those would be insignificant details for a dumb fuck who can't spell foudrys...

Anonymous said...

Civilly, gentlemen.

I *work* for intel. In PTD. But I also tend to agree with ITK's point.

I for one would also like to see the numbers, but as you are the claimant, it's up to *you* (not ITK) to come up with them. I suspect you know the answer already.

However, how do those numbers, which are tuned for performance, affect the "good enough" crowd?

InTheKnow said...

This is circulating on the web.

According to the United Daily News, TSMC has won an Intel contract to manufacture mobile Internet device (MID) chips and Atom processors for Intel.

I'll go on record right now saying this is delusional. You can't build Atom without Intel's 45nm process. Does anyone here really think that Intel would give their HK/MG process to TSMC?

Anonymous said...

Tell me about that TI 45nm

I'm not sure how I can tell you about them, as according to you it doesn't exist... it's like talking to Dementia... make a claim and then leave it up to everyone else to prove it wrong! You were caught (again) and now are back-tracking to the "but, but, but Intel is better" argument. And this also is just like Dementia (except with AMD in place of Intel): when caught in complete ridiculousness (Asset Smart? Core2 capability? Process lead diminishing?), just twist and move the argument and hope no one notices the ridiculous initial statements.

So, I think the comment from a couple of folks is... oops... did he really say TI doesn't have 45nm? And oops... did he just try to cover himself by saying 'well, even if it does, it's worse'! You are entitled to your own viewpoint, just don't make stuff up to support it.

And talk about the forest for the trees - do some research on ARM. As ITK points out, even though it is lagging by a process generation (or however many you want to claim), it is still SIGNIFICANTLY better power-wise, despite Ion/Ioff or high-k/no high-k or whatever nice technical terms you want to throw out... Don't get me wrong, the process is important, but when you get into non-high-performance applications, the DESIGN is far more important! Clearly ARM can't compete performance-wise with an x86 chip, but it can compete in low power, low performance segments. It remains to be seen if it can move "up" into the netbook/nettop segment - I'm a bit skeptical, as some performance does matter here and it's not ONLY about power.

So try to expand your mind and at least try to listen to others (even if you don't agree)... you may actually learn something (like apparently TI has a 45nm process).

Anonymous said...

TI doesn't publish anymore because their technology is sorry.

I don't debate that ARM has a nice and very efficient architecture.

Give INTEL another two spins of Moore's law, at 32nm and the next node, and two more turns at the design crank. The overhead for the extra x86 will become minuscule and ARM will be singing the blues just like all those server guys from 10 years ago.

You sound just like they did.

hyc said...

I think it's way too early to declare this one the same type of victory as x86 over RISCs. As you already pointed out, x86 grew up to meet and exceed the "high end" processor designs, by advancing the process and adding transistors.

Adding transistors isn't going to help x86 grow *downward* to fill the MID space. The x86 frontend at ~5M transistors was a trivial fraction of the transistor count of a Pentium, but it's a significant fraction of an Atom. From this point downward it is a significant liability, not an advantage.
There are several processor families still playing in the embedded space (M68K, ColdFire, MIPS, etc.) that all have more efficient ISAs and equivalent functional capabilities. Aside from the 80186, Intel has never played well in this space.

x86 compatibility doesn't matter here, battery life is it. Symbian, Blackberry, Android/Linux, whatever, are all succeeding just fine without x86/Windows. Indeed, most of those devices would be mostly useless if they were running Windows. Can you imagine what kind of battery life you'd have on your smartphone, if it had Windows and all the requisite anti-malware running on it constantly?

InTheKnow said...

Hyc said...
x86 compatibility doesn't matter here, battery life is it.

I'll agree with this, but only up to a point. Battery life is a key requirement; however, once I have all-day battery life, whether I have to charge my device each night or once a week becomes irrelevant.

All x86 has to do is achieve all-day battery life, something that seems quite achievable now that we are starting to see real 6+ hour netbook designs on the Menlow platform. Moorestown should improve on this substantially.

You are all missing something else, though. The low-end x86 offerings provide more processing power. That translates to speed, and speed is a key component of user satisfaction. Try going back from a more modern processor to a 386 machine and see if the slowdown doesn't detract from your satisfaction with the experiment.

So ARM will have to add transistors to add performance. x86 will have to find ways to cut power through design. I suspect what we will eventually have is pretty much power/performance parity.

Once you have parity, then I think the legacy x86 stack works to Intel's advantage. I know you don't like either Intel or x86, but don't project your wishes on reality.

Anonymous said...

So the whole TSMC-Intel collaboration?

http://channel.hexus.net/content/item.php?item=17430&page=3

Basically TSMC will do SoC work around Atom, but Atom continues to be made at Intel. The whole 'maybe they'll share the high-k IP' or the whole 'maybe they'll outsource Atom/chip production to TSMC'... not so much!

I even saw Abinstupid claiming this was proof that Intel is worried about the AMD foundry (before the announcement was made).

Anonymous said...

Anybody see the Spansion bankruptcy news today?

http://news.moneycentral.msn.com/ticker/article.aspx?symbol=US:AMD&feed=OBR&date=20090302&id=9654183

AMD had about a 10% stake in Spansion I believe - guess that's another investment down the drain.

What's funny is that if you look at Spansion's stock prices, they fluctuated mostly between $2-$3 for the better part of a year, then nosedived into the penny range back when the economy tanked. Sorta reminds me of AMD in a way...

hyc said...

ITK: hm, ok, mostly I agree.

But as a point of reference, a couple months ago one of my buddies at Google showed me his copy of Quake running on his G1 phone. Perfectly smooth, fast action. (Sadly I didn't have a USB cable on me at the time otherwise I would have grabbed a copy for my G1.) Point being, I think ARMs are not as far behind in performance as you may think. And given an efficient software stack (like Linux-based Android vs Microsoft bloatware) I think they're already at least at parity.

I see Atom as being somewhat self-defeating in this respect: it delivers more performance (for some ambiguous definition of performance) and x86 compatibility; but it *needs* more performance if you're actually going to run off-the-shelf x86 code on it. And in reality, very little of common desktop x86 software migrates easily to a handheld formfactor. Menus, scrolling, etc. all need to be redesigned to fit within the smaller confines of such compact displays; keyboards may not exist, etc... Given the integral part a GUI plays in today's OSs and applications, when you're forced to completely redesign the UI, you can't really be said to be running the same app any more anyway.

Picking up on that thread - I wonder what it will take to develop HD display resolutions for head-mounted displays. I'd be pretty happy with lightweight spectacles with 1920x1200 resolution. The last set I looked at was still only 800x600; not even gamers would use that resolution today.

re: Spansion - they were spun off from AMD because they were already a losing division. So none of this comes as any surprise; what's more surprising is how long it took them to die after being divested in the first place.

SPARKS said...

"So ARM will have to add transistors to add performance. x86 will have to find ways to cut power through design. I suspect what we will eventually have is pretty much power/performance parity."

This is a very interesting supposition/observation, pretty much in line with Lex's:

"Give INTEL another two spins of Moore's law and at 32 or the next one and two more turns at the design crank".


The comment about legacy x86 software is a brilliant observation. Combine these with my comment regarding consumers/commuters demanding more powerful devices with a multitude of functions, and it sounds as if Atom may be poised to do exceedingly well in the near term, and perhaps better in the future.

Man, I never saw this coming, the "ATOM impact", and all you guys being pretty much in agreement.

Remarkable.

SPARKS

InTheKnow said...

This is a very interesting supposition/observation, pretty much in line with Lex's:

"Give INTEL another two spins of Moore's law and at 32 or the next one and two more turns at the design crank".


With one important difference. I don't think it is a slam dunk for Intel.

ARM is the entrenched architecture in the smartphone space. You need to bring something extra to the table to break into a market against the entrenched player. That is what I think the x86 legacy software stack brings. I think that gets Intel a look, but they will have to deliver.

InTheKnow said...

Okay, so I was wrong on the TSMC Atom announcement. TSMC will make Atom, but Intel will not TSMC the process tech. Instead they port it to TSMC's process.

That has the potential to give us a process comparison unlike anything we have seen in a long time. We should be able to get a direct comparison between an Intel Atom and a TSMC Atom and get some real insight into how the process tech stacks up.

InTheKnow said...

HYC said...
And given an efficient software stack (like Linux-based Android vs Microsoft bloatware) I think they're already at least at parity.

And Atom wouldn't see the same benefit if you run the same OS?

I think you are putting too much emphasis on Microsoft.

x86 promises something I don't think ARM ever will. It gives the promise of backwards compatibility that ARM has yet to deliver. By nature of being a "custom" solution, ARM's flexibility comes at a price. That is the risk of not being backward compatible from one generation to the next.

I see Atom as being somewhat self-defeating in this respect: it delivers more performance (for some ambiguous definition of performance) and x86 compatibility; but it *needs* more performance if you're actually going to run off-the-shelf x86 code on it.

This link shows the performance difference I'm talking about. This is a real world difference that consumers care about.

InTheKnow said...

but Intel will not TSMC the process tech.

That should read "Intel will not use the HK/MG process tech."

hyc said...

And Atom wouldn't see the same benefit if you run the same OS?

I think you are putting too much emphasis on Microsoft.

x86 promises something I don't think ARM ever will. It gives the promise of backwards compatibility that ARM has yet to deliver.


Yes, of course Atom would/could see the same benefit. But the only reason x86 compatibility matters is because of the legacy of Microsoft software. So you can't have it both ways - either compatibility matters to you, which means you're only going to run bloatware, or you're going to run different software, in which case compatibility is moot.


This link shows the performance difference I'm talking about. This is a real world difference that consumers care about.


Well, that's an interesting read, but it's already pretty dated.

Check this out
http://www.tabletpcreview.com/default.asp?newsID=1338

10-15 hours; it would have a life of only about 3 hours using an Atom.

pointer said...

first, I do agree currently ARM is more power efficient than Atom. However, you are exaggerating:
Check this out
http://www.tabletpcreview.com/default.asp?newsID=1338

10-15 hours; it would have a life of only about 3 hours using an Atom.


if the advertised number were to be believed, check this out:
http://asus.notebooks.computer-technology.eclub.lv/en/computer_technology/notebooks/asus/asus_easy_to_learn_easy_to_work_easy_to_play/Asus_Eee_PC_901_Black_8_9_1024x600_Intel_ATOM_CPU_1_6G_1024MB_DDR2_SSD_12GB_WLAN_802_11N_Bluet.html

The above has the same screen size, a slightly better configuration (SSD size, camera, etc.), and claims to run up to 8 hours on battery. That's with the base 6-cell battery (no wattage number given); I'm not too sure about the ARM system that you referred to.
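
For what it's worth, battery-life claims are easy to sanity-check if you know the pack capacity. The numbers below are illustrative assumptions, not specs for either machine:

# Runtime is roughly pack energy divided by average platform power draw.
def battery_hours(capacity_wh, avg_platform_watts):
    return capacity_wh / avg_platform_watts

# A 6-cell netbook pack is typically somewhere around 45-50 Wh (assumption).
for watts in (5.0, 6.0, 8.0):
    print(f"{watts:.0f} W average draw -> {battery_hours(48, watts):.1f} hours")
# ~6-10 hours depending on how hard the whole platform (screen, SSD,
# wireless, chipset) is pushed; the CPU is only one piece of that budget.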

hyc said...

first, I do agree currently ARM is more power efficient than Atom. However, you are exaggerating:

if the advertised number were to be believed, check this out:


OK, my apologies for that.

And you're right, the advertised numbers are usually so far off from reality that we can't trust them.

But I'm kind of jazzed at the prospect of having a toy that can operate through the full duration of a transatlantic flight. (I don't know what ITK considers to be "all day" battery life. My flight from LAX to ATY was 25 hours, one long drawn-out dull moment after another. I guess such extended trips are pretty rare, but I'll be doing this trip a few more times in the next couple months, and a gadget like this would be a godsend.)

Anonymous said...

Here ya go Sparks (D0 core i7 stepping):

http://www.xtremesystems.org/forums/showthread.php?t=219898

4.15GHz at 1.056V (air cooled)
4.6GHz at 1.56V (air cooled)
These appear to be boot-only results; they claim Prime95 stability at 3.33GHz (it wasn't clear to me why the gap is so large).

Also note, this is a 920! It will be interesting to see a larger sample size of results (and to understand the 3.33GHz stability vs being able to boot at 4.15GHz at such low voltages)
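
One more note on those two data points: dynamic power scales roughly with frequency times voltage squared, so the 4.6GHz/1.56V boot is a very different animal from the 4.15GHz/1.056V one. A rough sketch, ignoring leakage (which also climbs quickly with voltage):

# Relative dynamic CPU power: P ~ f * V^2 (leakage not modeled).
def rel_dynamic_power(f_ghz, volts, f0_ghz, v0):
    return (f_ghz / f0_ghz) * (volts / v0) ** 2

base = (4.15, 1.056)  # the low-voltage boot above
oc = (4.60, 1.56)     # the 4.6GHz air-cooled boot
print(f"~{rel_dynamic_power(*oc, *base):.1f}x the dynamic power")
# roughly 2.4x, which is why the low-voltage result is arguably the more
# impressive of the two.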

If the board prices come down, I may have to rethink the plan to replace my dual core with a Penryn Quad.

Ho Ho said...

"Give INTEL another two spins of Moore's law and at 32 or the next one and two more turns at the design crank. The overhead for the extra x86 will become miniscule and ARM will be seing the blues just like all those server guys from 10 years ago."

If so, then why hasn't Intel yet caught up on the stuff ARM was doing 5-10 years ago?

SPARKS said...

"Here ya go Sparks (D0 core i7 stepping):"

I knew it. Whenever INTC releases a new stepping immediately after a major launch, the second-stepping tweaks are always significant enough to warrant a new product. This is no exception. Thanks for the link. The tax refund will be arriving shortly. The trigger will be pulled.

And I thought 4 gig, 24/7 on QX9770 was something, 4.6 on air, wow!

SPARKS

Anonymous said...

And I thought 4 gig, 24/7 on QX9770 was something, 4.6 on air, wow!

Just a word of caution - it was apparently a boot only (and a CPU-Z shot). Still, it was a 920, and I'm going to go out on a limb and guess you will not be buying the "bottom" bin model?!?

What struck me was the low voltage at 4.15GHz - not clear why it wasn't stable (chip? board? uncore? other?). Gotta think the 965 would perform a bit better.

Anonymous said...

This is gold, it's gold Jerry, gold I tell you! (seinfeld)

In other words, I'm very sceptical that the architecture beyond Nehalem will be as great as it sounds. (Dementia, UAEZone)

This from the person who was skeptical about Core2 (even when early benchmarks existed - they clearly had to be rigged!)

This from the person who was skeptical about Intel being able to pull off IMC and QPI (as AMD had refined this over several generations)

This from the person who was skeptical (oops, I mean sceptical) about atom.

Given this illustrious track record, I'm going to have to defer to Nostradumbass (TM pending), but I will remain a bit 'sceptical'...

SPARKS said...

"....guess you will not be buying the "bottom" bin model?!?

What struck me was the low voltage at 4.15GHz..."


No bottom bins for me. I have been spoiled in two regards; there are more, but basically two. The first is the 'binning' education I have received on these pages with regard to technical issues and variables within the manufacturing process. The second is the unlocked multiplier, which adds another dimension to mild or mildly aggressive overclocks.

The XE chips, I have found, have individual 'personalities', if you will. Like beautiful women, if you treat them with respect and individuality, while tolerating peculiarities and quirks, they will take you to wonderful places, 24/7. Cajoling, rather than pushing, is in serious order here, in both cases.

The low voltage is absolutely incredible. Somewhere on this page I mentioned the low voltage operation INTC has achieved when compared to discrete components, and it was rightfully ignored.

A high Ion/Ioff ratio is INTC's proprietary ace in the hole. I suspect it has to do with the high-k properties, a fringe benefit, perhaps. Nonetheless, it is an outstanding engineering accomplishment which idiots like ShareaKook and Dementia completely underestimate.

That said, some of the tech is drifting down to other facets of the industry. A-DATA has ultra low voltage modules specifically targeted at the high-end i7 market. (As a side note, if those two morons, and their followers, think INTC isn't driving the industry, they have no business posting comments anywhere, let alone here.)

http://www.slashgear.com/overclocked-a-data-xpg-2133x-memory-breaks-world-record-2331744/

As you know, these parts must fall within the 1.65V guideline.
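
The same voltage-squared logic applies to the memory. A rough comparison of JEDEC-standard 1.5V DDR3 against that 1.65V ceiling (switching power only; real module power varies):

# Relative DRAM switching power scales roughly with V^2.
v_standard = 1.5   # JEDEC DDR3 nominal voltage
v_ceiling = 1.65   # the Core i7 memory voltage guideline mentioned above
print(f"~{(v_ceiling / v_standard) ** 2:.2f}x the switching power at 1.65V")
# ~1.21x, which is why low-voltage modules that still hit high clocks are a
# nice fit for i7 boards.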

Incredible.

SPARKS

Anonymous said...

Hector Ruiz, Nostradumbass (TM still pending), Jr.

Hector Ruiz sees 'flight to value'
http://www.eetimes.com/news/semi/showArticle.jhtml;jsessionid=3GG0PIOVF10GWQSNDLOSKH0CJUNN2JVN?articleID=215801326

Some of the 'insights':
"We are going through times in this industry that are really, truly unique"

He said the rising complexity of chip making would continue, leading to more expensive equipment
(I was particularly stunned at this crazy talk)

Using the example of lithography systems, Ruiz said that wafer aligners that cost $5,000 in the 1970s have evolved into scanners that in some cases cost more than $25 million today. Five years from now, he said, these and other systems will cost considerably more. (tools will cost more!? Come on, now he's just screwing with us, right?)

The time-worn cliche about management serving the needs of shareholders has limitations, Ruiz warned, noting that 20 percent of AMD shareholders are people who shorted the stock. (What's Ruiz talking about? He was all about serving the people who shorted AMD stock during his days... so was that the problem? Perhaps he was TOO stockholder friendly (well, toward those who were shorting)? And I'm pretty sure the 'cliche' about serving the stockholders is not directed at those shorting the stock, but it's great that Ruiz apparently felt the need to warn us about it.)

"Know who owns your company," Ruiz said, adding that private equity investors often do not have a company's long-term best interests at heart. See, I kind of think that was the problem... Ruiz ran the company as if he owned it, but didn't (see: market share at all cost strategy). AMD only recently added significant private equity money, so this is a red herring as it did not exist when Ruiz started the AMD downslide - what private equity pressure was there when the stock started to drop off a cliff?)

What is kind of funny about this whole thing (other than the statement of the obvious) is the underlying (subtle?) arrogance in the article... Ruiz is preaching about how companies should look at things as if he had a glorious track record behind him. He also uses some examples from his AMD days almost to justify some of the problems that existed under his stewardship. He blames the failure of the GlobalFoundries proxy vote the first time around on too much institutional ownership - or is this simply another case of poor execution, when you have a 97% yes vote but the measure fails because you can't scare up ~52% of the stockholders to vote?
