In an alarming trend, AMD's stock has been in free fall since last week. Today the stock hit a new 52-week low, dipping below $10, thanks in no small part to a downgrade by analyst Doug Freeman. It's nice to see the investment community finally catching up with our own sentiments about AMD's future.
"We do not expect sentiment to improve anytime soon and believe even improved execution will not be enough to show true operating leverage in the near-term," he wrote in a research note...While we remain bullish on PC demand in general, we don't see a material catalyst for upside emerging in the next 3-4 months and suspect investors will view any announcements on the company's fab-lite strategy with skepticism, regardless of this view's merits."
What I find interesting about this downgrade is the suggestion that even if AMD improves its execution, the outcome still won't be enough. Even if AMD proceeds with its yet-to-be-announced asset-lite strategy, the outcome will still be immaterial. And even if PC demand remains strong, AMD will continue to lose money. In other words, even if AMD gets lucky enough to enjoy a "perfect weather" combination of good execution, sound strategy and excellent market conditions, there is still no hope for the struggling company, not now or anytime soon. "Investor skepticism" has finally caught up with AMD, and it will be interesting to see where the stock settles after hovering in the $13-14 region for a year.
Now if only some of the missing bloggers would return and put up the usual face of hope, maybe AMD's stock could rally back up.
82 comments:
I don't think most folks understand: outsourcing CPU production (with all other things being equal, like ASPs) will ERODE AMD's margins further!
There are several major benefits to outsourcing, but most of them only apply when a company is unhealthy in certain areas. When things are healthy, outsourcing is generally worse (for CPU production).
1) Cash position - Obviously the upfront cost of the fab goes away with outsourcing, but all you are effectively doing is spreading that cost out over the lifetime of buying chips from the foundry (the foundry's capital costs are obviously baked into what it charges for wafer capacity).
2) If the foundry has better technology or better binsplits, then there is potential to lower costs. As the rumors imply this is for LOW-END processors at first, I don't think this is applicable unless AMD is also having trouble producing low-end parts.
3) The foundry has an inherently cheaper manufacturing process. This one may be true (in comparison to AMD) as they have greater economies of scale and don't run an SOI process.
The problems:
1) TSMC runs a bulk Si (not SOI) process. This means a new design, plus validation and support of two designs for as long as AMD continues to produce parts in-house. That means more engineers and development Si, and various costs like extra mask sets... On the plus side, if only one thing gets screwed up you still have some production!
2) The foundry's margin has to be paid. Suppose TSMC is cost-competitive with Intel; AMD is then inherently not cost-competitive, as it now has to pay TSMC's markup for making the chips. The only way this is net neutral is if TSMC manufactures more cheaply than Intel (toy numbers in the sketch below).
3) AMD would be at the whim of TSMC's technology node transitions. TSMC may have a different strategy/philosophy than AMD on when to go from 45nm to 32nm and beyond. And if TSMC screws up a transition, AMD is less in control of the problem (though given its purchase of IBM's technology, it's not like AMD is in that much control right now anyway).
4) TSMC is not part of the IBM fab club (I think). Not sure if this precludes TSMC from running IBM's SOI process (though that might be a GOOD thing).
5) AMD has less flexibility to tweak the process to the design and must make the design fit the process (which is perhaps why they are starting with low-end parts first).
I don't see how signing an outsourcing deal would get the stock a pop from any knowledgeable analyst (and you'll notice the rumors did nothing). Folks covering the industry know this is a sign of significant, fundamental problems driving someone to do this. And while it may help the cash position short-term (by reducing capital expenditures), which may allow AMD to reduce losses and perhaps pay down some debt, long-term this puts AMD in a much less competitive position.
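To put toy numbers on problem 2, here's a minimal sketch (Python). Every figure is invented purely to show the mechanics; these are not actual AMD, TSMC or Intel costs:

# Toy model of the foundry-margin problem: with identical underlying wafer
# cost and yield, the foundry's markup lands on every chip you buy.
wafer_cost = 3000.0        # hypothetical fully loaded cost per wafer
good_die_per_wafer = 300   # hypothetical yield, assumed equal in both cases
foundry_margin = 0.40      # hypothetical markup the foundry must earn

cost_inhouse = wafer_cost / good_die_per_wafer
cost_foundry = wafer_cost * (1 + foundry_margin) / good_die_per_wafer

print(f"in-house: ${cost_inhouse:.2f}/die, foundry: ${cost_foundry:.2f}/die")
# -> in-house: $10.00/die, foundry: $14.00/die

Outsourcing only breaks even if the foundry's cost per good die undercuts yours by at least its own markup, which is the whole point.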
What is odd is that they are not outsourcing to Chartered, which SHARES the same process and ALREADY DOES SOME WORK FOR AMD in the CPU space. I think the choice of partner is the key... in my view it is a signal/admission that they will not catch Intel in Si process tech (if they thought they could, Chartered would seem the better choice). I also see it as a signal that AMD feels the only way it can compete is on cost (which TSMC is better equipped to do).
The wildcard is Fusion - with TSMC already doing graphics for AMD, bringing in the CPU potentially gives them a one-stop shop. Though with the initial implementation likely done as an MCM, I'm not sure how critical this is short-term.
New post up by Scientia
http://scientiasblog.blogspot.com/2007/11/amd-all-dressed-up-but-no-place-to-go.html
Chicken shit
I almost dumped 5 grand into AMD prior to the K10 launch because of all the sweet talk and PowerPoint slides from AMD.
Then I got some sense knocked into me and pulled out (this was when the stock was hovering around $13, I think).
BOY AM I GLAD!!!!!
This from another blog...
I began crunching numbers. I normalized the scores based on the number of processors and clock speed. I was happy to see very tight clustering at 2.0 for dual core Opteron. Then I ran the numbers for Barcelona and it showed 4.0. This was double the value for dual core. This seemed to be a big problem to me since with twice as many cores and double the SSE width it would seem that K10's top SSE speed should be twice per core or four times larger for Barcelona. In other words, I was expecting something around 8.0 but didn't see that. This would suggest a problem.
However, I then ran the Woodcrest and Clovertown numbers. Woodcrest clustered tightly at 4.0. This was not a surprise since it too has twice the SSE width of K8. Unfortunately, the Clovertown numbers continued to cluster at 4.0. This was a surprise since (with twice as many cores) I was expecting 8.0 for Clovertown as well. So, unfortunately, these scores are inconclusive. K10 is showing the same normalized score as Clovertown but neither is showing any increase in speed over Woodcrest. The questions still remain unanswered.
An interesting conclusion. The results don't make sense, but the validity of the normalization scheme isn't being questioned.
I know when I run numbers and they don't make any sense, the first thing I do is check the arithmetic. If the arithmetic is good, I move on to looking at the math (in this case the normalization scheme).
I just find it interesting that from what is posted, it seems the approach must be correct, so we are left without a conclusion. And we take this position even when both sets of quad core processors seem to under-perform. At some point I'd like to think that you change the analysis method and try looking at the data from a different angle.
Of course what do I know. I'm the guy that can't even read a semi-log plot. ;P
"And we take this position even when both sets of quad core processors seem to under-perform."
Problem A: He assumed the jump from Woodcrest to Clovertown (a core-count doubling) was 2X, and similar to the jump from K8 to K10 (an architecture change). In the end this sadly may be true, but neither transition will give 2X from architecture alone.
Problem B: He assumes that 2X the cores = 2X the performance for both architectures. While AMD has shown good SOCKET-level scaling (say from 1P to 2P with K8), that doesn't mean within-die scaling HAS to be the same (it might, and it might not). There are far more complicated issues with within-die multicore scaling in terms of performance (as we are starting to see), and it is not a simple "Hypertransport is THE BEST-EST-EST thing ever" argument!
Also, after listening to him tell us for years why the FSB doesn't scale, he now assumes it scales perfectly. He does this to raise the expectation for the Intel score, so that when it doesn't achieve it he can say AH HAH!
As always with these clowns, it is about the assumptions (much like that Abinstein idiot and his AMD's-yields-are-50%-better-than-Intel conclusion - which now makes perfect sense when you consider the outsourcing rumors?!?!). When those guys assume, they make an ASS out of U and... well, basically they just make an ASS out of themselves.
I suppose Dementia will hold this weak blog up as an example of how he is "objective".
Trying to find a bottom is like trying to catch a falling knife: You may get lucky and do it every once in a while, but more often than not you are going to bleed.
Why would anyone want to try to time a bottom on AMD (or any other stock, for that matter)? Better to see it hit bottom, stabilize, and start coming back up (with some fundamentals behind it) before getting in.
For fans of the show Fast Money (CNBC)... the other expression is that stocks rarely go to 0 in a straight line (meaning they will plateau and may even reverse a bit before dropping further). Anyone investing in AMD at this point might as well go to Vegas and play the slots.
BTW - the foreign investors in AMD just pumped 7.5 BILLION into Citigroup - these guys are clearly bottom-fishing and trying to get lucky on some of these beaten-up stocks.
ROBO - you owe me 20 minutes! This is the time I won't get back after visiting Chicagrafo's site.
If folks actually take a look at some of his older blogs, he called Intel's 65nm "marketing", claimed AMD was far closer to the bleeding edge than Intel (and held out implementation of ZRAM in 1-2 years as an example!), recommended buying AMD in Oct06, claimed AMD's valuation was closer to $57 (I believe this was around the time the P/E multiple was over 100!), and documented the reasons why AMD would get to 50% market share (citing manufacturing capability and smart management as 2 of the reasons!)
How can you link to this clown? Look through some of his old articles - he changes his mind monthly and is yet another blogger who analyzes information he clearly doesn't understand and then tries to fit empirical observation to a conclusion!
In a stunning display of not being able to understand the data I will now offer the following.
Early reports on Albert2, BMW's supercomputer, said the machine used 512 processors, or 1024 processor cores. The Top 500 supercomputer list puts this machine at 1080 "processors" - or, if you prefer, 540 dual-core Woodcrest chips.
It would seem that our intrepid analyst assumed the numbers posted on the Top 500 supercomputer site were sockets, when in actuality the list shows the number of cores. Oddly enough, when that trivial detail is accounted for, the numbers magically make more sense (by our analyst's own reckoning).
So if we set the single-core Opteron to a value of 1, we get the following numbers (in terms of Rpeak per socket):
Opteron = 1
Opteron Dual Core = 2.02
Opteron Quad Core = 8.00
Woodcrest = 3.99
Clovertown = 7.99
Though how this tells you anything with a sample size of 1 for the Opteron Quad core is beyond me.
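If you want to replay that cores-versus-sockets normalization yourself, here is a minimal sketch (Python). The Rpeak figures and socket counts below are invented stand-ins chosen to reproduce the ratios above, not actual Top 500 entries:

# Normalize peak FLOPs per socket per GHz, baselined to a single-core K8
# (2 FLOPs/cycle). All inputs are hypothetical.
BASELINE = 2.0  # single-core Opteron: 2 FLOPs/cycle/socket -> score 1.0

def score(rpeak_gflops, sockets, ghz):
    return (rpeak_gflops / (sockets * ghz)) / BASELINE

systems = {
    # name: (Rpeak in GFLOPS, sockets, clock in GHz)
    "Opteron dual core": (4160.0, 520, 2.0),
    "Opteron quad core": (16640.0, 520, 2.0),
    "Woodcrest":         (8640.0, 540, 2.0),
    "Clovertown":        (17280.0, 540, 2.0),
}

for name, args in systems.items():
    print(f"{name}: {score(*args):.2f}")
# Feed this core counts instead of socket counts and every value halves or
# quarters - which is exactly the mix-up described above.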
AMD fanboys have OUTDONE themselves.
http://www.amdzone.com/index.php/Forums?func=view&id=129&catid=6&limit=10&start=30
"New post up by Scientia
http://scientiasblog.blogspot.com/2007/11/amd-all-dressed-up-but-no-place-to-go.html
Chicken shit"
He actually posted a level-headed blog; he did not try smearing lipstick on a pig.
Yo, Giant, where you at?
The badboy is comin’ down, yo!
Fire up the plastic, I’m get’n spastic
Fire up the plastic, the shit be drastic.
http://www.theinquirer.net/gb/inquirer/news/2007/11/27/record-shattering-first
SPARKS
I think the analyst Roborat quoted is trying to say that he does not expect the situation for AMD to get better. There is a lot of hope that AMD will shore up its problems, get back into the race and start making money. But their performance this year has led to a loss of confidence. The last few weeks were particularly bad.
It's like I've said before: you can argue process and design and good and evil and a million other factors until you are blue in the face. It is what the companies actually manage to put on the table that matters. I remain hopeful that AMD can stay relevant and produce competitive processors, but I am at the "believe it when I see it" stage with AMD.
Anonymous said...
ROBO - you owe me 20 minutes! This is the time I won't get back after visiting Chicagrafo's site.
We're all entitled to make mistakes. I mean, is there anyone here who imagined that Phenom would perform so poorly at launch that it can't even beat Intel's slowest QC? Didn't think so. Phenom performs so badly that I'm sure even Hector Ruiz was caught by surprise. They don't lie, you see.
"we're all entitled to make mistakes..."
I'm fine with folks making mistakes or making a bad prediction, but when people comment authoritatively on something they clearly have no background in, that's a whole different ballgame. When the blog starts with:
"Read if you want to get out of the darkness, bear with me about the technicalities"
It kind of (arrogantly) suggests the person understands the technicalities.
AMD stock...bad call, but just an error, no big deal.
AMD is closer to the bleeding edge of technology because ZRAM might be implemented in 1-2 years (this was said in '06)... no clue what ZRAM is and the issues with it, no clue how long it takes something to go into manufacturing, no clue what bleeding-edge technology is.
"Intel has a manufacturing tech. of cloning factories to the cafeteria that simply is primitive...AMD has APM, a robotic tech. for chips production, leaps and bounds ahead of Intel's. That's why when I hear Hector Ruiz saying that basically they will do 65nm as soon as AMD feels like it..."
"First fallacy: It is cheaper to manufacture the same number of transistors at 65nm than 90nm"
This isn't bad analysis - it is a complete lack of understanding of the concepts being talked about. It isn't a mistake, it is misinformation.
Well, I did read and I did get out of the darkness - the blogger doesn't have a clue about Si technology and is not worth reading further. I posted here so that folks wouldn't similarly waste their time. If he wants to blog about topics he has a background in, great, but if it is about technology, I would advise the folks reading this comment not to bother.
I like this blog as there are a lot of educated people who comment; nothing is taken as gospel. It would be nice to get some more AMD points of view, but given the folks I see on the other blog sites, I think they understand that they will not get away with unsubstantiated marketing and PR fluff without being challenged for facts/supporting links (which is why I suspect they don't post here).
"I think they understand that they will not get away with unsubstantiated marketing and PR fluff without being challenged for facts/supporting links (which is why I suspect they don't post here)."
Oh, yeah, plus, they WILL get eaten alive with actual working experience, facts and supporting links.
SPARKS
By the way, INTC is up a FAT $1.08 today.
Ladies and gentlemen, please, fasten your seat belts.
SPARKS
Some time back I had the temerity to suggest that Intel's Silverthorne processor might be one of the top developments of 2007. I was, of course, mocked, ridiculed, and informed I was at least clueless if not a downright idiot.
Now it seems that Ars Technica might be in agreement with me. Silverthorne does have the potential to be a big deal.
This is another opportunity that it seems AMD will miss out on, as their competing product (Bobcat) won't be out until at least a year after Silverthorne.
I guess the old adage is true, "there is none so blind as he who will not see."
And yes, I know that the EEE PC doesn't currently use Silverthorne. But the plan is to use it next year and that will make a good thing even better.
SPARKS said...
By the way, INTC is up a FAT $1.08 today.
Of course that had nothing to do with the whole market rallying big today right? ;)
Anyway, a quick glance at Intel's and AMD's stock charts shows that Intel has weathered the current market turmoil fairly well, and the technicals are starting to look positive again.
http://tinyurl.com/2vrves
Today's rally marked a bullish crossover for the MACD and Stochastics, though heavy resistance remains in the $27 to $27.50 range.
AMD, on the other hand, continues to take a dive. While getting a small bump from today's rally, it remains on its downward trajectory.
http://tinyurl.com/2socap
After breaking through all intermediate support levels, it has dropped to 4-year lows (no doubt doubly punished by the Abu Dhabi share dilution). Fundamentals and technicals continue to look bearish for the short term.
Anyway, happy trading and gl to all, it's still a bumpy ride out there, but Daddy's got some options deep in the money now and I hope to cash out before anything really bad happens.
This is just too good to pass up.
Our fair and unbiased blogging friend at another URL had this to say.
AMDs big problem right now is that people suck.... up to Intel so Intel can ... force them to take what they get. OR ELSE.
Where is the FTC? I guess getting lobbied.
I suppose that it was Intel's fault that AMD promised the sun, the moon, and the stars and then under delivered with Barcelona. It was Paul Otellini blackmailing Hector Ruiz into buying ATI and leaving his company strapped for cash. And of course it was Intel's design engineers that stole AMD badges and stealthily entered the premises and sabotaged the 2900 series video card.
No, none of the blame for AMD's current problems should be attributed to them. It is all Intel's fault. And of course all those stupid people who let Intel get away with it.
"Some time back I had the temerity to suggest that Intel's Silverthorne processor might be one of the top developments of 2007. I was, of course, mocked ridiculed and informed I was at least clueless if not a downright idiot."
Rightly so... I believe the argument was that with the acquisition of ATI and their CE group, AMD was light years ahead - you can tell from all the ASUS EEE competitors in the market today! :)
I also find it amusing that Negroponte (I'm an MIT alum, so I figure I can hammer on him) is still pissed off that Intel seems poised to "take" his idea and turn it into a potentially profitable business! Apparently you have to do these things as a non-profit for them to be good (despite the fact that Intel is now pretty much cost-competitive with more functionality and a far more viable supply chain, infrastructure, etc...)
http://biz.yahoo.com/bizwk/070924/sep2007tc20070923960941.html?.v=1
It is a good idea, but Negroponte is not a businessman and his ego is getting in the way. His $100 PC is now $188; he went back on his promise not to sell in the US, as the whole buy-one-give-one campaign is the only way he is getting even minimal sales (by the way, the two $188 PCs cost $400); and his projections of large volumes are way off. Now much of the supply chain that signed on to do this is pissed off, as the small numbers OLPC is selling will probably hammer the folks supporting the project.
Not sure why he would be upset if Intel (or even AMD in the future) could deliver comparable or better PCs at similar or potentially lower prices, with a much more viable support infrastructure.
intheknow already posted this but this is too good.
AMDs big problem right now is that people suck.... up to Intel so Intel can ... force them to take what they get. OR ELSE.
- Dementia
LOL
"Some time back I had the temerity to suggest that Intel's Silverthorne processor might be one of the top developments of 2007. I was, of course, mocked ridiculed and informed I was at least clueless if not a downright idiot.
"
I didn't :) ... if Intel plays this right -- it could be huge.
"This isn't bad analysis - it is a complete lack of understanding of the concepts being talked about. It isn't a mistake, it is misinformation."
Indeed, though I've heard pretty much the same thing on a certain other blog where the owner claims to actually know something. Also another source of misinformation.
InTheKnow"
"And yes, I know that the EEE PC doesn't currently use Silverthorne. But the plan is to use it next year and that will make a good thing even better."
Dang, I first thought "cool gadget," but now I'm really thinking I should get one to replace my ancient PDA. Sure, it is twice as big, but it will also offer so much more.
Some classics below. I would post on Dementia's blog, but he appears to be enjoying having a conversation with himself, or only pulling random bits of other comments and responding to them. Is it just me, or has he become a joke now? Is he really that insecure? Why even allow people to comment at all? He should just make up a bunch of aliases and post conversations with himself... kind of like... Sharikou!
"AMD's position does seem fairly good going into 2008 since it still insists that 45nm is on track and will be ready six months after Intel's."
Hmmm....can't wait for those 45nm products in MAY 08! And I'm talking about available for purchase like the Intel ones, not some PR copout of in production or shipping or "hard launch".
"So, AMD is just going to have to bite the bullet until Q3 when things should improve. The chipset and graphic sales should be up by then and AMD should be fully anchored on the desktop with mini-DTX and DTX."
I don't know about everyone else, but as it is now Q4, I'm feeling very anchored with DTX and mini-DTX! It is rather amazing how it has become the de facto SFF standard, just as Scientia opined.
"Assuming Intel's 45nm server chips are available in 3.2Ghz speeds, AMD will need at least 2.4Ghz quad cores."
No comment required here...
"I'm certain the 3.0Ghz quad demo left many people wondering when such chips would actually be available. Anandtech's take seems particularly negative suggesting as late as Q2 08."
Yup it looks like Anand may have been wrong after all...he may have been too OPTIMISTIC!
...."But then Anand hasn't exactly been objective about AMD in the past five years so perhaps we should consider that the upper bound. Realistically, the 3.0Ghz chip could have been cherry picked. And, it generally takes about six months for production to catch up to a cherry picked chip. So, I can't imagine that 3.0Ghz would arrive later than Q1 08."
So the next time Dementia says "I never said Q1'08", feel free to pull this up! Oops...
"This suggests that Intel's bulk production quality lags its initial production quality by a full year. This would seem to explain both having to destroy the initial 45nm chips. Clearly, Intel's demos are not indicative of actual production as seen by the lack of chips clocked above 3.0Ghz."
Yup, all those "destroyed" 45nm chips are now for sale - so much for that theory. And the current 45nm PRODUCTION chips being reviewed clearly appear to be overclocking dogs :) I would also expect that when Intel has its Q4 conference call, it will likely mention that a point or two of margin came from the sale of 45nm chips that were written off in Q3 (for accounting reasons). Or I could be a complete idiot when it comes to business and assume write-off means destroyed and these new chips just magically appeared.
"The number of partners added to IBM/AMD research consortium also vanquished persistent rumors that AMD would toss SOI on 32nm."
Whether or not AMD sticks with SOI, the logic here is completely flawed. Research != production; a good hit rate is about 50% of research activities making it into production as planned (they are often delayed or outright dropped), and the actual rate might be closer to 10-30%. AMD themselves have said they haven't made a decision on 32nm SOI yet! Again Scientia takes an observation and uses it to prove a conclusion he wants to believe but has no data to support.
Want to see something funny? Compare "my" post on Scientia's blog under the name "Ho Ho *Edited*" with this:
_________________
erlindo
"I really expect R700 to blow out of the water any Nvidia offering."
I'd be surprised if R700 were more than twice as fast in games as the HD3870. Sure, multichip sounds very nice, but when you start thinking about how it could be implemented, you see how many problems it really has. Just imagine making a 4P Opteron board with 32-bit HT3 links fit on a GPU board, and how much it would cost.
"Also, according to some news I've read a few days ago (can't remember source), AMD seems confident to fight Nehalem with Shangai."
They also said that Barcelona would be 40%+ faster than high-end Clovertown in various workloads, and now it seems to be quite a bit behind in all but bandwidth-limited scenarios. Now they claim only to be competing, so assuming history repeats itself we can predict that Nehalem will absolutely destroy it.
It's fun to use past experience to predict future events, isn't it?
"what should we expect from Shangai (perfomance wise)?"
I'd say no more than a 5-10% IPC increase; the bigger L3 could help a bit, but probably not much.
randy allen
"It will be key for AMD to ramp up frequency of the Shanghai product, they'll need at least 3.5GHz IMO"
Yes, they will need it. Good luck getting there, though. I bet 45nm won't launch at higher than 2.5GHz; by that time 65nm will hopefully be near 3GHz.
andyw35
"Also, if there is a BIOS workaround why no put out the faster chips like they are doing with the slower?"
Why do you think a BIOS tweak can work around it? I have seen no sign of it being doable. After all, the bug is near the L3; the BIOS can do little there.
"I think it is an excuse only and they cannot do 2.4+ currently in enough numbers to support the market."
That is exactly what I think.
"This is, as pointed out by scientia, opposite to what was claimed on scaling and backed up with results at 3Ghz from various sites."
Well, overall K10 performance has been shown to differ quite a bit from what AMD claimed. I wouldn't take the claims of random people too seriously (superlinear scaling, suddenly gets superfast at >2.4GHz, 3GHz blows everything out of the water, etc.). Those claims are just ridiculous.
"The IPC is yet another disappointment, forgetting the silly 40% claim for a moment I was hoping that a general value of 10-15% above Core2 would be present that would go some way to making up for Intels clock speed advantage, however it seems more like a negative 10-15%."
Actually, there is no big surprise there. Sure, SSE got much better, but the integer and FP ALUs are still quite similar to K8's, so I didn't expect it to beat Core2. Guys much smarter than me said K10 would have lower integer IPC than Core2 and be a bit faster in floating point, just as K8 was. Now with Penryn there is an even bigger IPC gap on integer, and they are on par on floating-point workloads.
Now I guess abinstein and/or Scientia will start telling me to read the architecture manual. Well, I've read it and I still can't find the reason why K10 should be as good as you've claimed. Just make a list of the reasons it should be that much better than Core2 and be done with it.
"I would say that as soon as Core2 results came out last year AMD realised thay they had been caught by surprise and ould not be able to catch up on the IPC side and were hoping for pure speed to somehow be got with later spins."
As even Scientia said, Core2 probably wasn't a big surprise to AMD. The later mobile Pentiums already had higher IPC than K8; Core2 was just an evolution of them.
"The main problem I see for AMD is that even if they get 65nm sorted out, Intel has Nehalem next. We know the process is going to deliver, so the only unknown is whether the architecture will."
One thing about the architecture is pretty certain: it cannot be worse than Core2. And with lower memory latency thanks to the IMC, AMD has lost one of its main trumps. I'd say AMD has hard times coming.
"So it's hardly likely to come out only 1-5% better than Penryn, it is more likely to be higher, nearer to Core2 type jump."
Intel itself once said it will actually be an even bigger jump than Core2 was. I guess the return of SMT and the IMC help quite a bit too. I would certainly like to see as big a jump as Netburst to Core2, but I doubt we will see very big differences in single-threaded workloads.
scientia
"The simple fact is that the Integer IPC on K10 should be at least 20% above K8 and the SSE3 performance should be nearly double."
So, can you make a list of what exactly made integer IPC so much better? You've claimed it several times but could never prove it. "Go read a book" isn't really proof, you know. Is it really that difficult for you? Once you've done it, you'll at least know that I won't bother you with that question again.
"That's not actually true as AMD still has to sell 90nm chips at the higher speeds."
Remind me, when was the 65nm crossover for AMD? Wasn't it done in record time? Now I wonder how many parts are still on 90nm at the moment and how long it will take to get rid of 90nm entirely. Assuming their 65nm works fine, there is no reason to keep making 90nm CPUs, as they are more expensive, and having three different products instead of two certainly makes logistics a bit harder.
"AMDs big problem right now is that people suck.... up to Intel so Intel can ... force them to take what they get. OR ELSE."
Erm, what? Was it you or Howell who said that? If it was you, then please elaborate, as you are not making any sense.
"If they are totally cherry-picking 5000+ BE, then perhaps but they all seem to run above 3GHz without exotic cooling (nearly every reviewer got above 3GHz)."
So the fact that these CPUs run at much higher speeds with in-box coolers says something about them? What about Penryns at 4GHz+ or Core2s at 3.5GHz+ with their in-box coolers? I thought OC'ing didn't mean a thing and only released parts mattered. At least that was your POV when we talked about how high-clocked a Core2 Intel could release if it wanted to. Double standards?
Does anyone know how well other, non-Black-Edition 65nm K8s OC? Is it much different from the Black Edition? I also wonder if anyone has seen power measurements of a 65nm K8 running at 3GHz+.
_________________
Notice how he conveniently cut out parts that considerably changed the point I was trying to make. All the questions I asked him are lost as well.
The deleting and editing of anything aside from obvious trolling or name-calling hurts the quality of the comments section, IMO.
Yes, when reviewers overclock Core2 to 3GHz+, it doesn't matter, because overclocking results do not count and are in no way indicative of how the architecture will perform. More likely, "people suck" and the reviewers are just biased towards Intel.
The closest answer I got from abinstein as to why K10 should be so much faster than Core2 is that it runs SPEC rate faster and that the "toy benchmarks" review sites run simply do not reflect K10's potential.
You see, AMD is so far ahead, even the benchmarks have not caught up to AMD!
To All,
This blog is now consistently receiving more posts than Scientia's. Scientia's blog is near death due to its irrelevance. I would suggest the following:
1. This blog must grow past just being a place to respond to Scientia. At this point he has been exposed for his bias and lack of technical education. So let's cut out the copy-and-paste from his blog.
2. STOP POSTING AT SCIENTIA'S! That means you, Ho Ho :-) Drive down the number of posts over there and leave that forum to Scientia, BaronMatrix and Abinstein. That is a worthless trio.
"STOP POSTING AT SCIENTIA'S! That meas you Ho Ho :-)"
Guilty as charged. I just like to argue with people, especially when I know they are wrong.
Oh well, I'll see what Scientia replies and try to stay away from there.
Copied for posterity, my final post on Scientia's blog after the excessive censoring:
This blog has gone to crap just like Sharikou's but for different reasons. In the past Scientia used to generally tolerate amusingly mild sarcasm, aggressive on-topic debating, and statements questioning his credibility and knowledge. There were periods of censoring and this made the environment somewhat repressive, but it was generally tolerable.
Today, the blog is a regime worthy of Mussolini's best efforts. Scientia now:
- has become extremely sensitive to sarcasm, probably from an increasing awareness that 90% of his predictions over the last year and a half have been way off base
- no longer tolerates even mild aggression in debates (and it's difficult not to get increasingly aggressive when half your posts get deleted)
- can no longer handle it when people question his credentials and joke about his incorrect predictions. This results in immediate deletion.
- enables comment moderation during periods of particularly extreme sensitivity to sarcasm and questioning of credibility, such as following the Phenom flop and its revelation that the AMD marketing volume known as the K10 Optimization Guide was a severely misguided foundation for predictions on performance.
I'm through posting on this joke of a blog. In fact, perhaps Scientia can preclude me from the occasional temptation to post by permanently banning me.
Btw, Barcelona is a paper launch even by Scientia's generous definition. It has been two and a half months since the launch, yet Newegg has pulled all speed grades from its listings due to lack of availability. Not a single major OEM has a Barcelona system in stock.
Now that was certainly a lame comeback from Scientia. He also doesn't seem to remember what he has said in the past.
Anonymous said...
The closest answer I got from abinstein as to why K10 should be so much faster than Core2 is that it runs SPEC rate faster and that the "toy benchmarks" review sites run simply do not reflect K10's potential.
You see, AMD is so far ahead, even the benchmarks have not caught up to AMD!
Ultimately, 3rd-party benchmarks are meaningless; what matters is how a system handles your actual workload. With that understood, you have to look at the real purpose of a given benchmark effort. If the point is to demonstrate the absolute no-holds-barred performance of the machine, then you need test software that is fully optimized for that machine. If you want to demonstrate how that machine will run the software currently on your shelf, that's a different story.
As far as canned closed-source software goes, it's fair to say that the programs have not caught up to the processor. Most reviewers test on Microsoft Windows, and most of those tests are only using the 32 bit OS. Right away you're hogtied because the OS and most of the programs running on it are optimized for some other microarch, and not even making use of the 64 bit programming model.
I guess that's relevant information for a vast majority of people, since they just run a few canned programs on Win32. None of it is interesting to me, since I write open source server software. In my case, I can compile everything on the machine from the bottommost kernel driver to the topmost API with all the architecture-specific tweaks available. Hardly any review sites test that way.
Sure, a new chip ought to handle existing apps well. But if you want to see the potential that a new design offers, you have to use new software too. In that respect, the Spec benchmarks are more relevant because they are compiled specifically for the target platform.
A little off topic, but not a Scientia-bashing comment either:
It looks like we can probably put to rest the 'with one core disabled, AMD should be able to release parts with much higher clocks' theory. You know, the one that went 'if you could disable the one slow core then you could get a much faster tri-core part'.
Well, our favorite rag, the INQ, is reporting 2.3 and 2.5GHz tri-core parts for next year.
I said it a while ago, and I'll say it again: tri-core is a yield issue, not a binsplit issue. The gain comes from parts that have one core dead or badly malfunctioning. You do not get massive frequency variations between cores that are ~10-15mm apart on a wafer. Yes, you might gain a bin (or maybe two), but it's not like you have 3 cores at 3GHz and 1 core at 2.2 or 2.4GHz (which is one example I saw written down).
So perhaps we can put that theory to rest now? Tri-core will help with yields by recycling what would otherwise be scrapped parts, but it's not going to let a tri-core gain 3 speed bins over a quad-core part.
BTW - I agree with the other anonymous poster: if the blog is to grow, the Scientia cut-and-paste should be eliminated (I have been guilty of this in the past and will stop). By now he has been exposed for what he is, so we should give our thanks to Robo for this forum by posting good, intelligent comments and challenging Robo when he is (often? just kidding) wrong!
Thanks Robo.
What is it with AMD fans that makes them so unaccepting of facts or info that may show Intel chips as better performers? I'm seeing a bit of dishonesty and a lot of misleading arguments on their part.
AMDZone, what a joke! I thought Ho Ho's arguments were reasonable and well stated, yet he gets accused of being a "troll" (by the mod, no less!). Meanwhile, guys like Abinstein can say things like "every Core 2 chip has serious bugs that can be exploited as security threats" and suggest it's serious enough to warrant owners claiming their money back.
Seems the Phenom reviews were such a shock to them (their Sept. 11) that now they're going through what was described in another forum as:
1. Denial
2. Anger
3. Bargaining
4. Depression
5. Acceptance
Although I'm not sure acceptance will ever come about.
if you want to see the potential that a new design offers, you have to use new software too. In that respect, the Spec benchmarks are more relevant because they are compiled specifically for the target platform.
So as I understand it, if Spec is the most relevant then the 4 summary benchmarks of interest are: SPEC_FP, SPEC_INT, SPEC_FP_RATE, and SPEC_INT_RATE.
SPEC_FP and SPEC_INT should indicate basic processor performance on floating point and integer respectively. However, as I understand it, the *_RATE tests give a better look at performance when memory access is the bottleneck.
As such, the *_RATE tests the AMD fans love to quote are really an indication of the platform performance rather than the processor.
Is this correct, or did I miss something fundamental along the way?
One last Scientia comment and then I will move on. I attributed the following to Scientia...
AMDs big problem right now is that people suck.... up to Intel so Intel can ... force them to take what they get. OR ELSE.
It was actually a comment made by Howell and edited by Scientia. So my apologies to Scientia for attributing the rabid diatribe to him.
Guys, Guys, GUYS! I know how much you love to beat up Dementia and Sinistilla, but here is something more current you could ram down their throats. Feast!
http://www.dailytech.com/article.aspx?newsid=9838
SPARKS
I have been inspired... I hereby pledge to never again post on Scientia's blog. I certainly will miss Abinstein, but do I seriously need to waste my time explaining to him that core size is not a solid way to determine core performance?
Congratulations to Roborat on the ascension of his blog to the primary blog in the AMD vs. Intel sphere! Great entries, smart technical insight and a bit of humor thrown in has been a successful formula.
I've given up posting on Brent's, umm, Scientia's blog and agree that others ought to stop feeding his ego.
Abinstein, the tool, is a NOP. Always was.
I too will no longer visit Sci's blog; this is home now.
So as I understand it, if Spec is the most relevant then the 4 summary benchmarks of interest are: SPEC_FP, SPEC_INT, SPEC_FP_RATE, and SPEC_INT_RATE.
Most relevant of the off-the-shelf software, sure. With the initial point remaining (what's most relevant to you is your own real workload).
SPEC_FP and SPEC_INT should indicate basic processor performance on floating point and integer respectively. However, as I understand it, the *_RATE tests give a better look at performance when memory access is the bottleneck.
As such, the *_RATE tests the AMD fans love to quote are really an indication of the platform performance rather than the processor.
Is this correct, or did I miss something fundamental along the way?
You're drawing a distinction where no practical one exists. The AMD memory controller is integral to the processor design; you cannot buy an AMD core without a memory controller. You can't buy an AMD core without Hypertransport controllers.
In that respect "the platform" is just a bunch of wires; just a bunch of traces from the CPU to the DIMMs. Are you really claiming that extending the scope from looking just at the CPU socket to including this purely passive component outside the processor is skewing the test results?
Anyway, the _RATE tests aren't only about platform memory bandwidth - they also tell you how well the CPU handles multiple threads, how well it handles concurrency and context switches. They give you the theoretical upper bound for scaling. When you see your own real software in action, you can use these numbers as a yardstick to tell how well your software was written, how close it gets to the maximum scaling improvement that the hardware can achieve.
But the thing is, unlike Intel where the memory controller is a separate component, it just doesn't make sense to talk about "the platform" separate from "the processor" - you can't run a machine without memory. Or from the opposite side - you may be able to load all of your problem set into a Xeon's on-chip cache, but does that really tell you how *the core* is performing? Isn't that on-chip cache really just an extension of the platform? It's certainly not a part of any ALU. It's memory. Fast memory yes, but it's not core logic. It's "platform."
Anyway, if you're testing a multi-core CPU, don't you really want to know how well it does multiple tasks at once? What good is a multi-core processor if you're only going to run a single thread on it? Any fool can design a fast single-threaded system. But if introducing a second thread cuts the system performance in half (or worse), you need to know that up front, because it will affect how you use the machine.
hyc
"Anyway, the _RATE tests aren't only about platform memory bandwidth - they also tell you how well the CPU handles multiple threads, how well it handles concurrency and context switches"
From what I know, no SPEC benchmark is multithreaded unless you let the compiler multithread it. Rate benchmarks only run multiple copies of the same program simultaneously. As most benchmarkers pin the processes to specific cores, it will not show very well how the CPU handles multiple threads, as there are nearly no context switches and the processes do not move from one core to another.
You're drawing a distinction where no practical one exists.
Which is why I asked if I was mistaken. I wasn't sure. :)
As to it being a multithreaded test, my (obviously limited) understanding is that the _RATE tests put a heavier load on the system than it is ever likely to encounter in the real world. By doing so, they are more heavily weighted towards memory bandwidth than towards core performance.
Again I freely admit I could be all wet here, but that is my current understanding. Please correct me if I'm mistaken.
From what I know, no SPEC benchmark is multithreaded unless you let the compiler multithread it. Rate benchmarks only run multiple copies of the same program simultaneously. As most benchmarkers pin the processes to specific cores, it will not show very well how the CPU handles multiple threads, as there are nearly no context switches and the processes do not move from one core to another.
As I said, it gives you an upper bound. Any real multi-threaded program will have additional synchronization overhead that's not reflected here, that will limit it to less than this upper bound. But again, if the idea is to show the absolute limit of performance for the machine, then this is the metric you want. This is the number most interesting to me, because I can compare the efficiency of the software I write against it, and look for bottlenecks and tuning opportunities, and know when it's time to stop looking.
If, for example, it shows that running 4 identical processes on a quad-core machine is only 3x faster than running a single process, I can infer that there are hardware limits (concurrent access to on-chip cache? concurrent access to the bus interface? could be anything) that will place an absolute limit on how well any software scales on the machine.
I have a test of my server software that shows it processing 1.8 GB of data in 0.71 seconds on an AMD X2 3800+, using a single client. Extending the test to use two clients concurrently, the test runtime remains unchanged at 0.71 seconds. Extending to 4 clients, the total time to completion is 1.42 seconds. I.e., as far as is measurable, I get perfectly linear scaling on that machine. The test drives the CPU utilization to 100%; maybe in real world deployments you'll size systems to only hit 80% utilization on average, but it's still a good indication of how the server will perform in the real world.
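That kind of throughput-scaling check is easy to reproduce. Here is a minimal rate-style sketch (Python) using a dummy CPU-bound kernel in place of hyc's server software; the kernel and copy counts are arbitrary:

import time
from multiprocessing import Pool

def work(_):
    # stand-in CPU-bound job; any fixed workload will do
    s = 0
    for i in range(5_000_000):
        s += i * i
    return s

def elapsed(copies):
    # run `copies` identical jobs at once, spec-rate style
    start = time.perf_counter()
    with Pool(copies) as pool:
        pool.map(work, range(copies))
    return time.perf_counter() - start

if __name__ == "__main__":
    t1 = elapsed(1)
    for n in (2, 4):
        tn = elapsed(n)
        # ideal scaling: n copies finish in the same wall time as one copy,
        # until n exceeds the physical core count, after which ideal time
        # grows proportionally (e.g. 4 copies on 2 cores -> 2 * t1)
        print(f"{n} copies: {tn:.2f}s vs {t1:.2f}s for 1 copy")

If the measured times come out much worse than the core count alone predicts, you've found the kind of shared-resource bottleneck (cache, bus interface) described above.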
Posting here, deleting my posts on Sci's blog. I am also done.
In other news.
Quad FX systems are no longer a concern of AMD and could possibly be cancelled.
TechReport already reports that Quad FX is indeed cancelled. I had been saying since its beginning that it was a very lame attempt by AMD to stop the bleeding (and it did not). AMD touted its support for future quad-core processors, 8 cores, Quadfire and so on. With the intro of Spider, Quad FX is no longer a concern of AMD. Sucks for those who got suckered into AMD's Quad FX.
In other news, AMD has dropped out of iSuppli's top 10 chip manufacturers, while Intel holds the top spot.
What Intel Schedule? You got a link?
Scientia, I've asked this before and I don't think it's much of a difficult thing to do.
When you make a claim some will question, why not add a link to your backup and shut them down before they even get a chance to question you?
You make A LOT of claims and fail to back ANY of them up. We're not just going to blindly assume you "know it all".
1) Can you please show us where AMD said K10 is designed for 45nm.
2) Please provide links showing that the 45nm schedule has slipped, or slipped considerably.
So let's start with those and we'll continue to others later.
Just posting it here..... deleting from sci's blog.
EVERYONE PLEASE DELETE YOUR COMMENTS FROM SCI'S BLOG.
Check this out:
http://www.247wallst.com/2007/11/ceos-who-need-t.html
CEOs Who Need To Return To Business School
Looking around the wreckage of some of the big cap companies it is not hard to find a few CEOs who probably need to go back to business school. Wall St. would like to see them at Harvard so they would get the best education possible, but the school in Cambridge might not let them in.
First on the list is Hector Ruiz of AMD (AMD). He already has a PhD, but it may be the money-order kind that you can get through the mail. Ruiz has effectively taken AMD from a high-margin chip company with the technology to compete with larger rival Intel (INTC) to a sad shadow with big debt and small margins. He engineered the buy-out of graphics chip company ATI, which has been a disaster of the first order. Less than two years ago AMD traded above $42. It now sits under $10.
Found this info online. It's from an Intel employee:
He was asked:
Hey JK, how large is the Nehalem die going to be? If you can't give details, can you say whether it is larger or smaller than Phenom's 283mm^2?
He responded:
It's smaller than a conroe.
Rumored Upcoming Nehalem Variations:
Mainstream Dual-Core Havendale/Auburndale: mobile Nehalem-based Havendale/Auburndale. Integrated GPU core. Two CPU cores with a shared 4MB cache connected, via a Quick Path Interconnect (QPI), to a Graphics Memory Controller Hub (GMCH), then to the GPU packing a DDR3 memory controller. Also: Thermal Design Power (TDP) under 95 watts (W), PCI Express Gen2 x16.
Octo-Core Nehalem-EX (Beckton): 8 CPU cores, 16 threads, 24MB shared cache, 90/105/130W TDP, 4 QPI links, QPI link controller, integrated memory controller.
Extreme-Performance Bloomfield: 4 CPU cores, 8 threads, 8MB shared cache, 130W TDP, DDR3, QPI link.
Quad-Core DP (dual-processor) Nehalem-EP (Gainestown): 4 CPU cores, 8 threads, 8MB shared cache, 60/80/130W TDP, DDR3 800/1066/1333 (memory), QPI link, integrated memory controller.
Performance Mainstream Quad-Core Lynnfield/Clarkfield: 4 CPU cores, 8 threads, 8MB shared cache, 95W, DDR3, PCI Express Gen2, integrated memory controller.
http://www.x86watch.com/news/intel-havendale-172.html
giant
"It's smaller than a conroe."
Well, a dual-core Conroe is 143mm^2; put two together and you have 286mm^2. It is quite certainly smaller than that.
A few comments about a certain blog post:
"Well rendering is one of the tasks than can easily be multithread"
Sure, rendering a few triangles is not that difficult. Now try rendering a whole bunch of stuff besides what's on screen (shadowmaps, mirror cubemaps, model skinning, physics simulation, etc.). Also consider that every multi-GPU solution needs identical data in every GPU memory pool. To keep that coherent with an MCM GPU, you'd need either insane bandwidth between the cores (more than a 32-bit HT3 link offers) or duplicated data across all memory pools (VooDoo5). Both are pretty much out of the question. There's also the small matter that the GPU pool needs a central place for thread scheduling. Not simple.
Of course, one could say they'll simply use current multi-GPU techniques like SFR or AFR. Split-frame rendering doesn't scale at all with the amount of geometry. Alternate-frame rendering behaves badly if dynamic data needs to be carried between frames, and it also increases input latency quite a bit. E.g., when you have 4 GPUs running at 60FPS, your input is processed as if the game were running at 15FPS.
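That AFR latency point is just arithmetic; here is a toy version (Python, with hypothetical frame rates):

# With alternate-frame rendering, each GPU starts a new frame only every
# `gpus` display frames, so the input sampled for a frame is roughly
# `gpus` frame-times old by the time it reaches the screen.
fps = 60.0  # displayed frame rate of the whole multi-GPU setup
for gpus in (1, 2, 4):
    frame_ms = 1000.0 / fps
    latency_ms = gpus * frame_ms
    print(f"{gpus} GPU(s): ~{latency_ms:.1f} ms input lag "
          f"(feels like a {fps / gpus:.0f} FPS game)")
# 4 GPUs at 60 FPS -> ~66.7 ms, i.e. the 15 FPS feel mentioned above.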
Classic Hector:
"It is too early to say when and how a fab of our own in India will be attractive to AMD, and we anyway have [sufficient] manufacturing capacity for some years," Ruiz added."
He's joking, right? They still have the Fab 30 conversion and the option on the NY fab, and now he's saying it's too early to consider fabbing in India? It's not too early - the answer should be that there is no need!
I love the "we have sufficient capacity for some years" - so much for asset light? Rather than spin and PR it, why not just say there is no need for it for the foreseeable future?
Next thing you know he'll be saying his priority is to return AMD to profitability. :)
Well, not only has AMD dropped the FX-74 dual-socket platform, it seems there will be no FASN8 platform either! Apparently, TWO (2) (II) Pheromones cannot compete against one (1) (I) QX9650!!!
FASN8, after all the spin, pump, and HYPE! It's a loser and a dog and it's never gonna happen.
AMD is so lost and beaten, I don't believe they have any direction. There is simply nowhere to turn and no way to go. In a word, it's pathetic.
SPARKS
http://www.xbitlabs.com/news/cpu/display/20071129193525_AMD_Abandons_Plans_for_Dual_Processor_Enthusiast_Platforms.html
Enumae asked when 65nm launched. I'm sure the intent was to call Scientia on his claim that Intel's 45nm launch had slipped. But for the record, Intel's launch dates were:
~ Dec04 for 90nm.
~ Jan06 for 65nm.
Nov07 for 45nm.
So Intel is well within the 2-year tick-tock window they were projecting. In short, there is no slip. In fact, the 45nm launch came 2 months ahead of the 24-month schedule.
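A quick arithmetic check of that claim (Python), using the approximate dates exactly as stated above:

from datetime import date

# launch dates as given in the comment above (the 90nm one is approximate)
launches = [("90nm", date(2004, 12, 1)),
            ("65nm", date(2006, 1, 1)),
            ("45nm", date(2007, 11, 1))]

for (a, da), (b, db) in zip(launches, launches[1:]):
    months = (db.year - da.year) * 12 + (db.month - da.month)
    print(f"{a} -> {b}: {months} months (tick-tock target: 24)")
# 65nm -> 45nm works out to 22 months: two months inside the window.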
I finished reading the interview with Wrecktum Ruinz in Bangalore. Then another at the INQ (the Rag), basically agreeing with how innovative AMD has been on every major technological innovation.
Well, let me say this IS ALL HORSESHIT. The goddamned IMC - let's start with that first. INTEL was the first to incorporate one on the die, on the Timna family of processors back, way back, in 2000. (As a side note, it had graphics on die, too. So much for a 5.4-billion-dollar Fusion.)
http://www.geek.com/procspec/intel/timna.htm?rfp=dta
It wasn't the K6, introduced in 1998, which had an FSB of 66 MHz to 100 MHz during its life cycle. Nor was it the K7, whose FSB clocked in at 100 to 200 MHz during its life cycle. Not until 2004, 4 (FOUR!) years after Timna, did the IMITATOR copy the IMC on the Athlon 64 and Opteron processors. READ: COPY!!
It should be said, as Intel did all the heavy lifting with TRULY innovative IDEAS, Dork (K7) Meyer and the other band of clowns were there to capitalize.
After all, the IMC is the key, isn't it? It has given the Athlon and Opteron processors their edge for the past three years, hasn't it? INTC is now kicking AMD's teeth in with the 'old' FSB! At 1600 MHz FSB and higher, latency doesn't amount to a hill of shit anymore. Then, when INTC redeploys its INVENTION, the IMC, in Nehalem, I hope AMD just goes broke and DIES.
Really, I had a bit of sympathy for AMD in its current position (BAD). Now, after reading MORE bullshit and arrogance from WRECTUM, indirectly absolving himself of any fault, I hope INTC grinds their ASSES into the pavement where they should have been, right next to Cyrix and VIA.
Good riddance!
SPARKS
I finished reading the interview with Wrecktum Ruinz in Bangalore. Then another at the INQ (the Rag), basically agreeing with how innovative AMD has been on every major technological innovation.
Yeah, I read the INQ's article myself and had to bite my tongue. The list I compiled looked something like this:
IMC - Intel with Timna.
NiSi - Intel 90nm.
Strained Si - Intel 90nm.
Hypertransport - AMD on x86 architecture.
Quad Core (MCM) - Intel
A viable 65nm process - Intel
Quad core (Native) - AMD but a dog.
High K / Metal Gate - Intel.
A viable 45nm process - Intel
So AMD can claim 2 genuine firsts on the x86 architecture. But one of them was ported from another architecture (HT) and the other was a disaster.
I know which track record I'd be proud to claim a part of.
"IMC - Intel with Timna.
NiSi - Intel 90nm.
Strained Si - Intel 90nm.
Hypertransport - AMD on x86 architecture.
Quad Core (MCM) - Intel
A viable 65nm process - Intel
Quad core (Native) - AMD but a dog.
High K / Metal Gate - Intel.
A viable 45nm process - Intel "
The list grows a bit longer --
low-K - Intel 180 nm (fluorinated SiO2)
copper damascene - AMD (@130 nm)
Strained Si should be expanded:
- Embedded SiGe - Intel 90 nm
- Stress liner - Intel 90 nm
I also would not count IMC in Timna because it never made it to market.
I also would not count IMC in Timna because it never made it to market.
I debated including Timna for that very reason. Yours is probably the fairer assessment. One can't really copy something that was never produced in volume.
The IMC is even older than that. The 386 and 486 had an IMC. It was moved off-die with the Pentium processor.
"The IMC is even older than that. The 386 and 486 had an IMC. It was moved off-die with the Pentium processor."
This is true; there was a 386 variant that I know had the memory controller on die... it was used in embedded applications.
"I debated including Timna for that very reason. Your's is probably the more fair assessment. One can't really copy something that was never produced in volume."
Guys, innovation does not need volume production. The point is INTC had it first; it was not AMD's idea.
Besides, they are not producing Barcelona in volume. :)
Sparks
Guys, innovation does not need volume production. The point is INTC had it first; it was not AMD's idea.
If you're not limiting it to ideas that made it into real x86 products, then it seems silly to restrict it to x86 at all. Every 68000-family processor controlled DRAM directly. There are probably dozens of commercially successful CPU designs that had IMCs long before x86.
64-bit on x86 is an AMD first. It is something that helped AMD make inroads into the server market, and one thing that definitely shook Intel out of complacency.
I believe, though, that 64-bit on the desktop, pushed as future-proofing by AMD, is a con on PC users.
Ask Apple Computer how being innovative allowed them to become the standard in desktop operating systems. Oh, wait...
I'm glad to see people are refraining from posting at sci's blog.
Let's keep it up and show him what his blog really is without us so-called "trolls"....
It is not really about who does it first, it is about who does it better.
There is one thing Ruiz can't claim Intel copied, and that is his management running two major companies into the ground.
Chip problem limits supply of quad-core Opterons
News of this problem is notable because it confirms that the TLB erratum affects Barcelona server processors as well as Phenom desktop CPUs, and that the problem impacts AMD's quad-core processors at lower clock speeds. AMD's initial public statements about the erratum and the delay of the 2.4GHz Phenom seemed to imply that the issue was closely related to clock frequencies.
AMD has publicly estimated the performance penalty for the BIOS fix could be around 10%, and one source pegged the penalty at 10-20%.
--
At present, Microsoft doesn't offer a Windows hotfix to address the problem, and our sources were doubtful about the prospects for such a patch. CPU makers have oftentimes addressed errata via updates to the processor's microcode, but such a fix for this problem also appears to be unlikely.
Come on, Ruiz, let's all blame the Intel monopoly for this one.
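For what it's worth, a 10% BIOS-fix penalty means a 2.3GHz part performs roughly like a 2.07GHz one. And any OS-level workaround would first have to identify the affected silicon. Here's a minimal C sketch of that check (the CPUID leaf 1 decode is standard; treating every early family-10h stepping as suspect is my assumption, not anything AMD has published):

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 1 not supported");
        return 1;
    }

    /* Standard leaf 1 decode: EAX[11:8] base family, EAX[19:16]
       extended model, EAX[27:20] extended family, EAX[3:0] stepping.
       On AMD the extended fields only apply when base family == 0xF. */
    unsigned int family   = (eax >> 8) & 0xF;
    unsigned int model    = (eax >> 4) & 0xF;
    unsigned int stepping = eax & 0xF;
    if (family == 0xF) {
        family += (eax >> 20) & 0xFF;
        model  |= ((eax >> 16) & 0xF) << 4;
    }

    /* K10 (Barcelona/Phenom) reports as family 0x10. Which exact
       steppings carry the TLB erratum is an assumption here. */
    if (family == 0x10)
        printf("Family 10h, model 0x%x, stepping 0x%x - candidate for the TLB workaround\n",
               model, stepping);
    else
        printf("Family 0x%x - not a K10 part\n", family);

    return 0;
}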
The erratum can cause a system hang with certain software workloads. The issue occurs very rarely, and thus was not caught by AMD's usual qualification testing.
Anyone want to guess which important development step AMD skipped to release K10 "on-time"?
"Chipmakers refer to chip-level problems as errata. Errata are fairly common in microprocessors, though they vary in nature and severity. This particular erratum first became widely known when AMD attributed the delay of the 2.4GHz version of its Phenom desktop processor to the problem. Not much is known about the specifics of the erratum, but it is related to the translation lookaside buffer (TLB) in the processor's L3 cache. The erratum can cause a system hang with certain software workloads. The issue occurs very rarely, and thus was not caught by AMD's usual qualification testing."
All right, that’s enough.
GURU, where the hell are you? If you please, explain this TLB spin babble. What does it ‘look’ at in the L3 cache? Is this a design fault, some kind of timing issue, just a bunch of crap, or what?
“The issue occurs very rarely, and thus was not caught by AMD's usual qualification testing.”
They had a year to test this thing. Did someone drop the ball, or is there a hole in the die??
SPARKS
Sparks, if we assume that we can take AMD at their word (an iffy assumption at best recently), then just read this portion of the statement.
"The erratum can cause a system hang with certain software workloads. The issue occurs very rarely...."
They say it only affects certain (conveniently unspecified) workloads and only happens rarely. In fact, it happens so rarely that AMD didn't catch it during their normal testing process.
Q&R (quality and reliability) testing should be pretty extensive on a new architecture. I would also expect them to do some benchmarking to see just how good their product was. They didn't see the bug during any of these tests (their claim, not mine).
So while they may not want to put out a product on the market due to a critical bug, this still doesn't explain the big question.
Why can't they get this pig to clock?
So my conclusion is that while there may be a real bug, AMD is just blowing smoke while they scramble to try and fix this disaster.
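As a back-of-envelope on how a "very rare" bug slides through qualification: if a failure fires with some tiny probability per test-hour, the odds of never catching it over a finite test campaign stay surprisingly high. A toy calculation in C (the failure rate and test-hour figures are made up purely for illustration):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Made-up illustrative numbers: chance the bug fires per
       machine-hour of stress testing, and total test-hours run. */
    double p_per_hour = 1e-5;
    double hours[] = { 1e3, 1e4, 1e5 };

    for (int i = 0; i < 3; i++) {
        /* P(bug never observed) = (1 - p)^N */
        double p_escape = pow(1.0 - p_per_hour, hours[i]);
        printf("%8.0f test-hours -> %5.1f%% chance the bug escapes\n",
               hours[i], 100.0 * p_escape);
    }
    return 0;
}

At these (made-up) rates, even 10,000 hours of stress testing misses the bug nine times out of ten. "Not caught by our usual qualification testing" is entirely plausible, and entirely useless as reassurance.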
"GURU, where the hell are you? If you please, explain this TLB spin babble."
Unfortunately, I'm a process guy not a chip architecture guy. And unlike others who will remain nameless (rhymes with Dementia, Labinstein), I won't speak authoritatively on subjects I don't have the background in.
So, conjecture: I can say that if it were a timing issue, it could easily have been difficult to find in the simulation/design stage (things are so complex now, it is hard to simulate everything). It's hard to say whether it was there all along or cropped up on a recent tapeout (it's possible it may have been there all along and just not seen due to other issues).
From the stuff on the web, it seems like the issue is sensitive to clock, and this is another example of where AMD's 65nm probably hurt them. With the thermals they were/are seeing on the initial steppings, it is likely (speculation on my part) they didn't see it at the low clocks, or it was not as noticeable given other issues.
It is also clear that AMD has completely spun this story and as it dribbles out it just makes AMD look worse and worse - much like politicians, the attempted coverup is often worse than the issue itself.
It is unlikely they saw anything with the A0 tapeout last Dec - recall the "task manager" demo? It is rather ironic that they showed all cores fully loaded, and they say that one of the conditions for the erratum is to have things fully loaded. So if you put the next functional tapeout in the Mar-Apr timeframe, they had much less than a year of validation before the Barcy launch in Sept.
I'll say it again - why did AMD bother to launch Barcelona/Phenom at all? They are undercutting future margins by launching low end (are they suddenly going to be able to justify charging more for a 2.3GHz Phenom, even if it is 10% better when they fix it?) They are also giving K10 a brand hit. And they are spending capacity on products they are not likely getting a good margin on.
Ok, In The Know, GURU, what have we got so far?
1) We've established that AMD's subsequent 65nm stepping was to address thermal issues. The time frame is roughly 3 or 4 months, August to present (the time it takes to crank out a new stepping); therefore, this new issue either surfaced, was masked, or was missed during initial testing. Or worse, perhaps they changed some process variable in the "improved" stepping only to exacerbate a previously minor or insignificant issue. In my mind, they screwed the pooch on the new stepping. Wasn't it the B0 that was supposed to save the world? Think again, I say.
2) During our previous discussions, we established, from a process perspective, that the SOI leakage potential at a mere 5-atom thickness was going to be difficult, if not impossible, to address. Then it is quite possible that, as they tried a new variable at one stage of production, they CAUSED a bigger problem at another stage of production, covering it up with this TLB erratum/errata, yes, no? (I'll take errata, as it is plural - one of many.)
3) It seems DOC has got something. He is fishing with a validation question on how they missed something during testing:
"Anyone want to guess which important development step AMD skipped to release K10 'on-time'?"
OK, DOC, spill it; no one's answering the question, and I am no PhD. Personally, I'm still in the Cro-Magnon stage of chip development.
Thanks In The Know, GURU.
SPARKS
"I'll say it again - why did AMD bother to launch Barcelona/Phenom at all?"
GURU, that's easy, and I'm no chip guy.
They are desperate, as they have nothing else to offer. They pumped this pig to be the best thing since Neil Armstrong doing a two-step on the moon.
They are desperate to supply IBM, HP, SUN, CRAY, etc. with something tangible. These guys must be SCREAMING daily! This failure could shift the entire server market with disastrous consequences for all parties. Basically, the entire space becomes a market crap shoot, as no one knows where anyone stands with this product at this juncture.
They are desperate. They are pumping good money after bad to get something out of this FAILED product.
They are desperate. They want to look as good as possible to future investors.
They are desperate. They are deeply in the hole, they are not making money, and they are losing market share daily.
Basically, THEY ARE DESPERATE.
SPARKS
"We've established that AMD's subsequent 65nm stepping was to address thermal issues. The time frame is roughly 3 or 4 months, August to present (the time it takes to crank out a new stepping)."
I'd be hesitant to paint the stepping timeline with such a broad brush. This has got to be AMD's top priority. If it isn't, it should be.
For a top priority (hot) lot you can push it through the fab in something less than half the time of a standard lot. Let's say 6 weeks or so. If process time isn't the real issue, then what is?
The time is going to go into debug and troubleshooting. You have to sort through this pile of millions of transistors, decide what part of the circuit is broken, and decide what you have to do to fix it. Someone more familiar with e-test than I am could tell you what goes into that.
Once that is done, you have to design and print the new masks. It is not unheard of for a hot lot to arrive at a litho step ahead of mask availability and sit. I have no idea who makes AMD's masks or what their turnaround time is, but I do know that getting new masks is not a trivial consideration. Even with the magic of immersion litho. ;P
With those variables to play with, it is hard to say how long any particular stepping will take. Troubleshooting doesn't follow a timetable.
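To put rough numbers on that, here's a toy cycle-time estimate in C. Every figure in it is an assumption for illustration (layer count, hot-lot speedup, debug and mask turnaround), not an AMD or mask-shop number:

#include <stdio.h>

int main(void)
{
    /* Every figure here is an illustrative assumption, not an AMD
       or mask-shop number. */
    double mask_layers     = 40.0;  /* rough 65nm layer count     */
    double days_per_layer  = 1.5;   /* standard-priority lot      */
    double hot_lot_factor  = 0.5;   /* hot lots move ~2x faster   */
    double debug_days      = 30.0;  /* e-test and fault isolation */
    double mask_build_days = 14.0;  /* new reticle set turnaround */

    double fab_days = mask_layers * days_per_layer * hot_lot_factor;
    double total    = debug_days + mask_build_days + fab_days;

    printf("hot-lot fab time  : %4.0f days\n", fab_days);
    printf("debug + new masks : %4.0f days\n", debug_days + mask_build_days);
    printf("total per stepping: %4.0f days (~%.1f months)\n",
           total, total / 30.0);
    return 0;
}

Even with generous hot-lot treatment, the debug and mask-turnaround time rivals the fab time, which is the point: troubleshooting, not wafer processing, sets the schedule.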
I'm sticking by the theory that the TLB bug is just one of the problems. It is a nice catch-all, as it seems like it will require a new tapeout, and that gives AMD time to fix other things as well. Should they fix these other (speculative) problems in parallel, the public will never know, and management can say, well, if only we didn't get unlucky with the TLB issue... If they don't, and more issues are seen once the TLB bug is fixed, then mgm't looks like clowns, and well, this is no different from how they look right now - so, all in all, a decent risk/reward.
Factor in the fact that they may need to go dipping into the equity well again, and you want to limit the number of visible issues as much as you can (much like a good used-car salesman).
However, consider this - if they had fixed the bug on the current stepping, would that have done anything to the power consumption? This seems to be the ultimate limiter on clockspeed. Even if they fix this bug, they want to stay in the same power envelope ("drop-in replacement", ya-da-ya-da-ya-da). If they start ratcheting up the clocks on the current K10s, that ain't gonna happen.
The issue does, however, explain why perhaps we have seen no higher-clocked K10 dual cores. Even with the thermal issues, with only two cores, AMD should be able to clock higher than the quad and still fit into an acceptable TDP. However, if this bug limited clocks to ~2.6GHz (or 2.4, or whatever), then it makes sense why the K10 dualies were pushed out to Q2.
So back to the quads - while the "TLB /L3 cache/whatever you want to call it" issue may yield 5-10% improvement per clock, they can't get the clocks up without addressing power further. While I'm sure there are some tricks on the design side, this would seem to be more of a process related issue/need. I guess they could still be fighting speed paths - but with the # of steppings they've had by now they should have a handle on this. And I just don't see where they will be getting this on 65nm (unless they try to squeeze more out of strain or work on implant/anneal). However any tweaking is likely just going to push the process closer to the edge of the cliff (APM or no APM). In this regard AMD should be glad to have only one fab.
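The power math backs this up. Using the usual dynamic-power rule of thumb, P ~ C*V^2*f, a clock bump almost always drags a voltage bump along with it. A quick C sketch (the clocks come from the discussion; the voltages and the 95W envelope are my assumed figures for illustration):

#include <stdio.h>

int main(void)
{
    /* Rule of thumb for dynamic power: P ~ C * V^2 * f. The clocks
       are the ones under discussion; the voltages and the 95W
       envelope are assumed figures for illustration only. */
    double f1 = 2.3,  f2 = 2.6;    /* GHz                   */
    double v1 = 1.25, v2 = 1.325;  /* volts (assumed)       */
    double tdp = 95.0;             /* watts (assumed class) */

    double scale = (f2 / f1) * (v2 / v1) * (v2 / v1);
    printf("power scales by %.2fx: ~%.0f W from a %.0f W part\n",
           scale, tdp * scale, tdp);
    return 0;
}

Under those assumptions, that's roughly a 27% power increase - a 95W-class part lands around 120W, well past its envelope. Hence no drop-in 2.6GHz quad.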
"I'm sticking by the theory that the TLB bug is just one of the problems. It is a nice catch-all, as it seems like it will require a new tapeout, and that gives AMD time to fix other things as well."
“With those variables to play with, it is hard to say how long any particular stepping will take. Troubleshooting doesn't follow a timetable.”
This is precisely what I thought. Further, pushing out the dual cores to 2Q fully substantiates the theory. Hence, it’s not merely a four core issue, nor a timing one. These errata (plural) are pointing to a multitude of sins that go to the core. Pun intended.
I suspect the major players are well aware of the problems. It is not to their advantage to disclose these issues publicly, as they are all in the same boat, no matter how much they have collectively invested in the AMD "Smarter Choice"/Barcelona infrastructure. The only alternative they have is to simply present INTC offerings, or stay with the older, proven AMD product line. Currently, however, the upgrade path with AMD doesn't look so attractive to prospective buyers considering new platforms.
As far as HPC is concerned, previous-generation Opterons must, and will, hold the line on that front, since they scale well and the infrastructure is still in place. This, ultimately, will not last, as INTC is not sitting back sipping Pina Coladas in Maui. I am quite certain they understand this fully.
It seems AMD's failure is complete, from the low end to the high end. There lies AMD's vulnerability, no matter how they spin it, cry to the press, or fleece their enthusiast base. Personally, I wouldn't waste my time or money on a broken product with undisclosed bugs. This ain't software patches we're dealing with here.
This is what was on my mind, and you both have confirmed my suspicions.
SPARKS
Oops, one correction to my post two comments up... apparently the fix for the TLB erratum would CAUSE a 10% performance hit. Fixing it would give no performance gain (at a given clock), other than being able to eventually clock higher.
My apologies for the misinformation.
http://www.techreport.com/discussions.x/13724
"Currently, however, the upgrade path with AMD doesn’t look so attractive to prospective buyers considering new platforms."'
I have to disagree - look no further than the 4x4 platform, AMD supported that for at least 9-12 months! Now that's platform stability and upgradeability!
Or you could drop a Phenom chip into an AM2 board and pray it works (some boards are not supporting this). Oh and you can turn off many of the new features on K10 or move to a new board.
"Oh and you can turn off many of the new features on K10 or move to a new board."
I wouldn't have believed this if I hadn't read it myself. Buy a chip, and disable its L3 cache features to get it to run stably. Incredible.
SPARKS