1.25.2008

The importance of the performance crown

Mercury Research published their numbers showing some very interesting facts. Anyone with doubts about the importance of keeping the performance crown needs to look at this graph.
AMD’s drastic decline in overall ASP coincides with the release of Intel’s Core 2 microarchitecture. One thing to note is the increase in the ASP gap between the two vendors from ~$35 in 2006 to ~$55 in 2007, simply because one had a better product. Suffice it to say that an extra $20 per CPU is the price of losing the performance crown. That is a third of the average value of an AMD CPU, which is quite a lot of money.
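The arithmetic behind that claim can be checked in a couple of lines (the ASP figures are the approximate values quoted above):

```python
# Back-of-the-envelope check of the ASP-gap argument above.
# Figures are the approximate values quoted in the post.
gap_2006 = 35.0   # ASP gap between Intel and AMD in 2006, USD
gap_2007 = 55.0   # ASP gap in 2007, after Core 2's release, USD
amd_asp = 60.0    # rough AMD average selling price, USD

crown_penalty = gap_2007 - gap_2006
print(f"Extra gap per CPU: ${crown_penalty:.0f}")                  # $20
print(f"As a share of AMD's ASP: {crown_penalty / amd_asp:.0%}")   # 33%
```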

Breaking it down further by market segment, it becomes apparent what Intel meant by ‘walking away’ from some of the businesses. AMD’s mobile ASP is even lower than Intel’s desktop prices. Another thing to note is the continuous price erosion in Intel’s mobile segment. The gap between mobile and desktop is diminishing due to the tremendous demand for lower-priced laptops towards the end of 2007. One can get a sense of how well entrenched AMD is in this space, forcing Intel to lower prices if it wants to win back market share. The surge in AMD’s Q2'07 server ASP comes from the Iranian supercomputer purchase, which is rumoured to have been paid in oil.
One thing that needs to be taken into context when looking at these figures is that AMD lost a lot of money in 2007 while Intel took in healthy profits. As mentioned in an earlier post, both companies have adjusted their business models to try to meet historic margin levels, and to some degree we have seen results. But as Intel and AMD continue to battle it out, the one without the superior product has no option but to compete on price. The problem is there isn’t a lot of wiggle room starting from $60 per CPU.

179 comments:

Khorgano said...

Just to be clear, is the last graph unit shipment in 1000's?

Great blog post, it puts a lot of things in perspective.

Roborat, Ph.D said...

Just to be clear, is the last graph unit shipment in 1000's?

Yes. Sorry for lack of clarity.

Anonymous said...

Doc, Did you say 60 bucks? Personally, I wouldn't wee wee on it, however, there must be a market for it.

http://www.pcsforeveryone.com/
Product/Intel/BX80557E1200

Rather catchy name "pcsforeveryone", wouldn't you say?

SPARKS

Anonymous said...

By the way, let's get back to the top performance angle, shall we?

http://anandtech.com/mb/showdoc.aspx?i=3208

HOO YAA!


SPARKS

Anonymous said...

Doc -- thanks for posting this info. Some entries back I had commented about AMD's ASPs being in the low 60's, and that DT and mobile were converging. This adds validation to what I said....

If you go back to pre-2006, the precipitous drop in ASPs is quite shocking; AMD's is almost a step function.

Anonymous said...

Robo... the graphs kind of put things in perspective, but what are you doing using ACTUAL DATA in your blog? Aren't blogs supposed to be pulled completely out of the air and unsupported?!?

The mobile ASPs explain it all... AMD is attempting a bottom-up strategy in mobile. They think simply getting a foothold in that market will allow them to eventually gain traction and move to better ASPs - unfortunately they are sorely mistaken. The low-end market share they have taken in that area is completely price sensitive (buyers will take the lowest-priced PC over features/battery life). This is where (long term) Intel's strategy in my view is more sound... by developing a low-cost alternative (Silverthorne and its successors) they will be able to differentiate the high end and the low end. Intel is selling Absolut and Grey Goose vodka to the different segments and will be introducing Gordon's (Silverthorne) for the ultra low end, rather than trashing the Grey Goose brand by slashing prices. AMD is selling Gordon's and hoping that once people buy it they will eventually be able to relabel the vodka up to Absolut - but the problem is there won't be enough of a taste delta for people to want to do this.

It is also what will kill them with K10 - they have set a ridiculously low bar by releasing 'mainstream' first (meaning they couldn't get performance any better and chose not to take the PR hit of delaying the part until they got it right). Now it will be difficult to justify $500 K10's if they are only marginally better than the $200 ones... dual core will be even worse, as they will be benchmarked against the cut-rate K8's.

It's different if you release the parts at the same time - sure, there is probably not enough difference between Intel's top and 2nd bin chips to justify the cost delta, but by releasing them together they let people choose, and some will want the better chip. It is a whole different story than releasing a chip and, 6 months later, wanting people to pay 30-40% more for a new chip which is 10% better.

Ruiz's market-share-at-all-cost strategy is now clearly demonstrated to be an abject failure. It will likely take AMD a completely new product cycle (one that they can release at the high end first and THEN trickle down to mainstream) before they can undo the price erosion. They can probably get back to profitability through cost cutting, assuming the CPU market doesn't go into the tank... but ASPs will not likely recover significantly during the K10 generation. And if the overall market goes south, AMD is screwed, as Intel will have to lower prices to sell out their capacity, which will put additional ASP pressure on AMD.

Anonymous said...

The bottom mobile volume graph is really telling - for all of AMD's talk about mobile market share, look at the volume #'s!

It shows the fallacy of looking purely at market share growth in this area... As AMD was/is starting from such a low base in mobile, a relatively minor volume change will show up as significant market share growth... but when you look at the volumes it is simply a big #/small # effect.

If, for example, AMD goes from 10 units to 13 while Intel goes from 100 to 103, both sell only three more processors, but AMD's share growth would appear significant (roughly two percentage points, from ~9.1% to ~11.2%).
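The big-#/small-# effect is easy to verify with a few lines (the volumes are the arbitrary units from the example; the apparent share gain works out to about two percentage points):

```python
# Illustrate how a small absolute volume gain looks like large
# market-share growth when starting from a small base.
def share(a, b):
    """Market share of vendor A in a two-vendor market."""
    return a / (a + b)

amd_before, intel_before = 10, 100
amd_after, intel_after = 13, 103   # both vendors sell 3 more units

before = share(amd_before, intel_before)   # ~9.1%
after = share(amd_after, intel_after)      # ~11.2%
print(f"AMD share: {before:.1%} -> {after:.1%} (+{after - before:.1%})")
```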

enumae said...

Off topic, but does anyone see the irony, or is it just me?

AMDZone

Axel said...

Anonymous

Now it will be difficult to justify $500 K10's if they are only marginally better than the $200 ones

It's possible that AMD's 45-nm process proves to be a savior from a clock standpoint, but this is unlikely due to probable high current leakage. So the chances are that they are stuck with a ~$200 price ceiling for at least the next TWO YEARS on desktop and mobile parts. In addition, with an increasing mix of quad-core from Fab 36, overall unit fabrication capacity is substantially reduced, so their market share in those spaces will probably shrink through 2008.

Can AMD muddle through and operate for the next two years on a $200 price ceiling and ASP hovering between $60 and $75? Probably, because their CAPEX budget for 2008 is only $1.1 billion, a reduction of some 35% from 2007. This is a tremendous cost savings, but it strongly implies that the tooling of Fab 38 will not be completed in 2008, and therefore AMD are manufacturing from a single fab for the rest of the year and probably well into 2009 if ASPs don't improve. The problem is there is very little upside in their ASPs for the next two years unless 45-nm improves both IPC and clock greatly. Yorkfield & Wolfdale are simply too powerful for 65-nm K10 to compete with. And if Intel's recent claims of Nehalem IPC gains are for real, it could be the hanging sword that keeps AMD's ASPs down where they are all the way through Bulldozer in 2010 and 2011.

They might muddle through but it's going to be pretty ugly for the next couple years. Even if IBM helps them out by buying Fab 36 or constructing the Luther Forest facility for them, their product roadmap as it stands now does not appear poised to support higher ASPs for a long time to come...

Roborat, Ph.D said...

Off topic, but does anyone see the irony, or is it just me?

Jeff Tom must have been crying while doing that article.

Roborat, Ph.D said...

Aren't blogs supposed to be pulled completely out of the air and unsupported?!?

not to worry, i slipped in some Iranian rumour into the blog just to balance it out.

Anonymous said...

Now it will be difficult to justify $500 K10's if they are only marginally better than the $200 ones

LOL how dumb can you be.
So what?
Doesn’t the same situation apply to the Q6600 vs Q6700, where one costs $275 and the other $550?
Or is it that because it’s an Intel CPU, the Q6700 is not marginally better, it's amazingly better!

You Intel folks are dumb as a rock.



Also, soon the new VIA processor will frag all your pitiful processors:
Isaiah Processor Faster Than Silverthorne and Core 2


“I'm proud of the team here. We did this new architecture and first chip with a company of less than 100 people, including lots of testers and support people.”

LOL Intel, with many more people and much more money, not long ago released a CPU with only 32 bits (the Core Duo)…

LOL how can a company so small do what AMD and Intel do, with fewer resources and fewer transistors. Amazing.

“If you look at the timings at the instruction level, [our floating point unit] is faster than anybody else's. We actually have some major inventions there. We can do four single-precision adds and four single-precision multiplies in a single cycle.”

I can already see Intel sending them one of their lawsuits to steal all their technology and make sure they don’t outrun their poop (Garbage/Silverthorne) out of the market before it’s even released.

Isaiah architecture to frag Silverstone and compete with Core 2 Duo notebooks

Unknown said...

From your own article:

... the Core 2 for performance is the fastest architecture in the world.

Anonymous said...

'Doesn’t the same situation apply to the Q6600 vs Q6700, where one costs $275 and the other $550?
Or is it that because it’s an Intel CPU, the Q6700 is not marginally better, it's amazingly better!'

No, it doesn't, because Intel releases these AT THE SAME TIME! The early adopters are rather less price sensitive, and if offered both at the same time they will often go for the best chip.

Now if you are AMD and only release the $200 model and then hope people buy the $500 one, that ain't gonna happen - they miss out on the folks who want to be the first to own the next big thing, and now you are talking about REPLACING a $200 CPU with a $500 CPU with only marginally better performance. In Intel's case you are spending an extra $300 and you don't have to wait - yes, foolish for most, but some will do this.

And if you look at Intel's past pricing strategy, they rarely increase the top bin price within a given product cycle. They will release a new top bin part at the (high) pricing of the old top bin part and then waterfall down - this way there is always an "expensive, money is no object" option for those who choose it. In AMD's case they will now have to explain why they are increasing their top bin part price for marginally better performance - this will be a difficult chore, as this will not be the top-performance chip someone could buy. (It might be a different scenario if they were the top dog performance-wise.)

Anonymous said...

I must say it is excruciatingly funny reading how Scientia attempts to interpret the transistor data from IEDM... He throws around the terms as if he knows what he's talking about, and when he comes to a data point that isn't in the table, he states:

"Unfortunately, the IBM data is absent but if I were to estimate I would say that Intel has about a 16% advantage plus lower power draw."

This would be an estimate based on ....his vast experience working on transistors or his semiconductor physics education? I think he should have stated 'if I had to pull a number out of my ass, with absolutely nothing to back it up other than a gut feeling and my desire to keep AMD close..."

By the way, 16% and a lower power draw is about a generation lead... most generations target ~30% improvement (either in switching speed or power, or a split between the two). He makes it sound insignificant... this would be like Intel being another technology node ahead of AMD on the performance side. Sure, it doesn't have the economic side of things, but Intel's 45nm is essentially what IBM (err...AMD) will hope to achieve at 32nm!

For those who care - the IEDM data is important (specifically the Ion/Ioff RATIO as opposed to the absolute #'s), but what is critical is a plot of Ion vs Ioff and looking at that slope as opposed to a singular point... but hey, what do I know? I'm not the great Dementia, 'expert commentator on all, knowledge of none'.

So if he wants to focus on the 90nm #'s where he thinks IBM/AMD had a lead, he should consider a few things... look at the high-Vt device data in conjunction with the low-Vt data... look at the respective Vdd's (1.0 vs 1.2)... Intel was driving that node for pure Idsat/clocks, hence the higher Vdd targets (and higher Ioff on the low-Vt devices). If you look at the high-Vt devices (which are slower transistors), Intel has an advantage in the Ion/Ioff ratio... it's called cherry-picking #'s out of context, trying to sound authoritative, and trying to fit the data to a pre-formed conclusion. He knows that dog wouldn't hunt anywhere except in the cozy confines of his own blog, where he can simply censor out any post that attempts to point out the folly in his logic.
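The Ion/Ioff ratio being argued about here is a simple figure of merit: drive current over leakage current per micron of gate width. A minimal sketch of the comparison, using made-up placeholder numbers rather than the actual IEDM figures:

```python
# Figure-of-merit comparison of two transistor processes by Ion/Ioff.
# The numbers are illustrative placeholders, NOT the actual IEDM 2007 data.
# Ion is drive current (uA/um); Ioff is leakage current (nA/um).
processes = {
    "process_a": {"ion": 1200.0, "ioff": 100.0},   # strong drive, low leakage
    "process_b": {"ion": 1100.0, "ioff": 1000.0},  # similar drive, 10x leakage
}

ratios = {}
for name, p in processes.items():
    # Convert Ioff from nA/um to uA/um so the ratio is dimensionless.
    ratios[name] = p["ion"] / (p["ioff"] / 1000.0)
    print(f"{name}: Ion/Ioff = {ratios[name]:,.0f}")
```

Two processes with nearly identical Ion can differ by an order of magnitude in this ratio, which is exactly why comparing Ion alone on a low-power process is misleading.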

But hey... I hear experts say AMD is closing the gap... If Intel does nothing for the next 3-4 years on the process technology front this might be true.

Or to put it another way... the 45nm data IBM reported was WORSE than Intel's 65nm process! It was barely better than their own 65nm process... that is scary and should tell you exactly what to expect from AMD's 45nm.

Finally, consider what IBM/AMD is reporting - is this the performance at the beginning of the technology ramp, or after several CTI cycles when the technology hits maturity? (Hint... the answer wouldn't please Scientia.) So you are comparing what Intel achieves at the START of a tech ramp vs something AMD achieves several iterations into the ramp (~1 year? This is a complete guess).

So to conclude (my speculation), AMD's 32nm process ~1 year into the ramp (i.e., 2011) will be performing, transistor-wise, close to what Intel started ramping in Q4'07.

http://www.realworldtech.com/includes/images/articles/iedm-2007-2.gif

Anonymous said...

Forgot this gem:

"It is notable that IBM's low power bulk 65nm process is identical to Intel's but comes 2 years later."


Yeah they had the same Ion, but Intel was reporting 1/10 the Ioff power! (Figures on the right side of the table)

Does he not understand 1 != 0.1? What an idiot... Lex, if you're reading, this is fodder for your cannon!

Anonymous said...

CLASSIC DEMENTIA!!!!!!!

'Oh, a small correction:

"It is notable that IBM's low power bulk 65nm process is identical to Intel's but comes 2 years later."

IBM's ldsat is the same as Intel's however Intel's loff power draw is only 0.1 compared to IBM's 1. Obviously Intel would have substantially less power draw at idle.'

Hmmm... this post seemed to occur sometime after my previous post... what a KO-INK-E-DINK!

Kind of funny how he needs to come to this blog to get the facts and corrections... perhaps if he didn't censor folks, he wouldn't be such a laughing stock and would get some real posts on his blog!

It would also help if he left the real technical analysis to folks who actually have some knowledge and background! Does he even know what Idsat is? (and a tip for folks - Idlin is actually becoming a more important, lesser known and reported metric - Sparks go work on this after you're done with Igate!). I luagh at the superior intellect....said in my best Ricardo Monteblan voice!

Don't drink and blog! (But drinking and commenting is OK, at least in my case!)

- GURU

Anonymous said...

"I luagh at the superior intellect....said in my best Ricardo Monteblan voice!"

Damn... I really need to stop the whole drinkin' and commentin' thing... that would be Kirk, not Kahn! And the specific quote would be 'I'm laughing...' - in Scientia terms, it's a minor correction, other than getting the quote and the person wrong I was dead on!

(I'd blame the typos on the beverages too, but I'm just a crappy typer!)

Anonymous said...

"It is notable that IBM's low power bulk 65nm process is identical to Intel's but comes 2 years later."

Small correction... on the low POWER PROCESS, the POWER is 10X different... but hey, let's compare Ion... because on a LOW POWER process, POWER isn't the key metric! Just a minor thing, you know... really, I know what I'm talking about here...

God forbid I actually just say oops I totally botched that one... let's downplay it - I mean who would really look at Ioff when comparing low power processes? AMD process = Intel...must find data...must ignore data that doesn't fit... must support my conclusion.... must not admit to being out of my league in terms of knowledge in this space... must show the fanboys I am an expert... must show I'm not biased by posting a 'minor' correction to an obvious mistake... must not seem like Sharikou so I'll post something on his blog too...crap this is getting hard...

Did I mention the compilers are RIGGED! Why don't folks use 3rd party compilers instead of Intel optimized ones!

MONOPOLY! Sorry, default utterance when confused and have nothing to support AMD with...

AMD is closing the gap...6 month maximum lead on 45nm... OK, 9 months minimum... Ok we also need to ignore performance... but other than that... CLOSING THE GAP BABY! Did I mention how cool immersion litho and SOI are! Intel still has better process metrics with "older" technologies? ...that's just propaganda by paid pro-Intel sites... oh that is a technical conference that is peer reviewed by scientists? ... well...well... AMD is just "future proofing" with these technologies! (you know like 64bit, baby!)... Did I mention ULK?!?

Anonymous said...

"Hmmm... this post seemed to occur sometime after my previous post... what a KO-INK-E-DINK!"

Hum his post says:
January 27, 2008 3:59 AM

Yours say:
27 January 2008 05:07

Unless you don't know how to read dates...
Nice try, really.

Also, according to The Tech Report, TSMC's 45nm is better than Intel's:
http://techreport.com/discussions.x/13973

"At IEDM, Wang says TSMC discussed an immersion lithography-based 45nm process tech that has finer geometries than Intel's dry lithography-based 45nm tech."

It’s funny that the Intel FUD boys didn't comment on the article. Maybe they were too scared that someone could find out the truth.


And from the IEDM:
“In session 10.1, TSMC presented its 45 nm process technology, utilizing the 193-nm ArF immersion lithography and nitrided oxides as the gate dielectric. TSMC demonstrated NFET/PFET drive current performance of 1200/750 uA/um Idsat at 100 nA/um Ioff and 1.0V Vdd. Despite the lower reported drive currents relative to Intel’s 45 nm process technology, TSMC was able to – perhaps illustrating the capability of the 193-nm immersion lithography - attain finer geometries relative to Intel’s 45 nm process technology. For example, TSMC reported physical gate length of 30 nm relative to Intel’s report of 35 nm gate length, and TSMC further reported that its 45 nm process is able to support SRAM cell sizes ranging from 0.202 um2 to 0.324 um2. Specifically, TSMC reported a functional 32 Mb SRAM test chip with cell size of 0.242 um2. This reported cell size is substantially smaller than the SRAM cell size of 0.346 um2 reported by Intel for its 45 nm process technology.”

And:
“In particular, Intel’s choices of going with metal gate-electrodes and hafnium-based gate-dielectrics, but without immersion lithography at the 45 nm node enabled it to present a process that has higher performance and a lower cost basis than processes at the 45 nm node from those of its competitors. At the same time, these unique choices may also introduce some risk factor into a successful ramp of the 45 nm node. Only time will tell if these aggressive choices will enable Intel to upgrade to a larger steamroller, or merely keep the one that it already has.”

Anonymous said...

Hook, line and sinker....

"IBM's ldsat is the same as Intel's however Intel's loff power draw is only 0.1 compared to IBM's 1. Obviously Intel would have substantially less power draw at idle." (Scientia's blog, latest comment)

'Ioff power draw'...hmmm... Ioff refers to off state current... I wonder where that term and the confusion of current with power came from... That phrase seems remarkably familiar to a post made on this blog...

"Yeah they had the same Ion, but Intel was reporting 1/10 the Ioff power!"

Hmmm... Guru said that? That seems like an absurd mistake for someone like him... that Sparks guy keeps talking him up as a knowledgeable guy... how could he make such an obvious mistake in calling Ioff (which is offstate current) power draw? Man, maybe this Guru is just a poser and has no clue what he is talking about...

Or maybe he introduced a subtle error to see if someone (who will remain nameless) would blatantly copy the term and 'minorly correct' himself... damn introduction of an intentional error.... can you say 'fish on'? ...that's just brutal, eh?

Yeah, it was a bit diabolical on my end, but the truth needed to come out!

Perhaps next time the great Scientia will learn 'cut/paste' does not equal knowledge! And I apologize to the readers on this site - this was a necessary evil to expose a poser! I will not do it again...or will I? To a certain someone - you are VASTLY OUT OF YOUR LEAGUE and you better be careful about what you read and copy, someone (perhaps a bit more knowledgeable?), may introduce a subtle error to test you!

Comment on what you know, feel free to speculate on that which you do not, just don't put your speculation out as fact!

Again my apologies to the faithful readers of this blog. (the error in this case was minor and rather innocuous to draw someone out)

Robo - again, thank you for the lack of censorship on this site... folks should continue to challenge each other in an attempt to have an honest dialog... it shouldn't be about driving toward an agenda... this is the reason I keep posting on this site (despite JJ's appeal/invite to the extreme CPU site - though I must say JJ is one of the most knowledgeable folks I have come across on the blogosphere in terms of Si process, who doesn't seem to have worked in the area, yet seems to want to learn about it). It is also (one of) the reasons I post anonymously.

I'm not out for making a name... I'm out for some intellectual honesty... which is why anonymous posters, if they can support their post with facts and links should be given as much credence as someone who is just as effectively anonymous but happens to use a name.

- GURU (or is it really Guru? Does it really matter?) Damn this is just a bad Matrix comment.... or is it? End program! (OK it's now time to stop- STNG reference for those who noticed!)

BTW - to the idiot, ummm... I mean last poster... TSMC is NOT PART OF THE IBM/AMD consortium! The "T" in the acronym is TOSHIBA, you nimrod! Better to remain silent and have people question your intelligence than to open your mouth and prove it! If you actually read your quote carefully... it said only time will tell and says 'may'... i.e. opinion... i.e. no data at present time... i.e. try again! Why are you comparing TSMC, which has nothing to do with the IBM consortium, to Intel? Do you actually have some data, or are you going to speculate based on someone else's opinion... oh, and are you going to keep confusing TSMC with AMD (I would like to know for future reference)?

Also, finer geometries != better performance (Exhibit A: AMD's 90nm vs 65nm performance). Finer geometries means smaller... not necessarily better.

Dementia, is that you? Come on, man. I (anxiously?) await your explanation as to how you simply confused current with power (if you were not plagiarizing), and in a manner that was remarkably similar to a post here? Time stamps? Are they the same? Did you filter out other comments? Seems like a remarkable coincidence that you would mistakenly use the same terminology as posted here...

I'll leave it to the readers to decide...

Anonymous said...

All right GURU, I got the (Idsat) current thing going on (I think?).

It seems that an increase or decrease in saturation current at the source/drain is dependent on current crowding and/or an increase in contact resistance. From what I’ve read, the actual shape of the S/D structures themselves can be the limiting factor - square, pointy shapes as opposed to nicely rounded ones. Also, the proximity of the S/D structure to the gate is another variable. I suspect if you get them too close then the ugly sub-threshold leakage thing rears its ugly head; too far away and the device won’t saturate. (Ion?)

What they don’t talk about is capacitance at the junctions, but from your past analysis I know it's there. (Even a moron knows what happens when you put a dielectric between two metal anythings.) Holding a charge is a good thing in power supplies, not so good when you want (Ioff).

What they NEVER talk about is the material they use, (of course).

Additionally, I couldn’t find a goddamned thing on (Idlin). Where?

Finally, with all these variables - materials, structure placement and shapes - factored in with high/multiple-angle layer depositions, it's enough to make me drink.

I’ll juice up the 60,000-pound (empty) chiller any day and throw in the control systems, just for shits and giggles.

SPARKS

Anonymous said...

"Hmmm... Guru said that? That seems like an absurd mistake for someone like him... that Sparks guy keeps talking him up as a knowledgeable guy... how could he make such an obvious mistake in calling Ioff (which is offstate current) power draw? Man, maybe this Guru is just a poser and has no clue what he is talking about..."

Hey tough guy, you do have the capacity to understand there is still current draw in the Ioff state, don't you? It's called leakage, and it is precisely why AMD lost 6 billion in 2007 and why Barcelona fell on its ass, read: JUNK. Or haven't you figured that one out yet, with your deceptive, keen eye for detail?

May I suggest Paxil.

SPARKS

Unknown said...

XBITLABS readers choice awards:

http://xbitlabs.com/articles/editorial/display/2007-awards.html

AMD loses out in every single category.

Orthogonal said...

It’s funny that the Intel FUD boys didn't comment on the article. Maybe they were too scared that someone could find out the truth.

So... what does TSMC have to do with the current discussion again? So they were able to attain finer geometries and better packing density but still slower transistors? Ok, w00t!

If you hadn't noticed, this entire discussion has been about comparing IBM's (and consortium) process tech vs. Intel's. The fact that TSMC wasn't added for comparison isn't an admission of FUD or hand-waving; it's just irrelevant to the current situation. If (when?) AMD outsources CPU production to TSMC, we can analyze how that will affect the CPU industry.

Orthogonal said...

BTW - to the idiot, ummm... I mean last poster... TSMC is NOT PART OF THE IBM/AMD consortia! The "T' in the acronym is TOSHIBA, you nimrod!

Actually, the "T" stands for Taiwan, as in Taiwan Semiconductor Manufacturing Corp. I guess you made a mistake, which is human of course, so I'll spare you the indignity of being called a "Nimrod".

InTheKnow said...


Actually, the "T" stands for Taiwan, as in Taiwan Semiconductor Manufacturing Corp. I guess you made a mistake, which is human of course, so I'll spare you the indignity of being called a "Nimrod".


The T is Toshiba if you are referring to ITSA in the summary table given for the IEDM 2007 results. I'm pretty sure that is what the poster was referring to, not TSMC.

InTheKnow said...

Anonymous said...
"At IEDM, Wang says TSMC discussed an immersion lithography-based 45nm process tech that has finer geometries than Intel's dry lithography-based 45nm tech."

Yep, they have a smaller cell size, that's a fact. But in his selective editing, our intrepid poster conveniently left out the following...

"Astute readers will note that in the section on Intel’s 45 nm process technology, we placed the phrase “on paper” into the sentence that pointed to the absolute statistical dominance of Intel’s 45 nm process technology. The reason for the presence of the phrase is that while it is true that, as described, Intel’s 45 nm process will have cost, speed, and power advantages (possibly with a slight deficit in circuit size, as evidenced by the larger SRAM cell size) over that of other semiconductor manufacturers, this statistical dominance of Intel’s 45 nm process technology will be meaningless unless Intel’s fabs can execute and ramp the 45 nm process node to plan."

So what the summary actually said is that if Intel's approach doesn't hit a snag, they will have the best process in terms of cost, power, and speed while giving up a little size.

Only time will tell if Intel's gamble paid off.

There is also an interesting statement regarding TSMC's 32nm process and its lack of suitability for CPUs. That makes you wonder if their 45nm would have similar issues, but I don't have time for the details right now.

Anonymous said...

'The T is Toshiba if you are referring to ITSA in the summary table given for the IEDM 2007 results. I'm pretty sure that is what the poster was referring to, not TSMC.'

Of course I was... clearly the person cutting and pasting the IEDM info has no real background and just assumed the T in the IBM consortium acronym was TSMC... as others pointed out, why even bring up TSMC when Scientia was attempting to compare IBM's process and Intel's... it's called misdirection... when you can't argue the facts, change the argument; let's throw out some TSMC data and see if it sticks, eh?

"Hey tough guy, you do have the capacity to understand there is still current draw at the Ioff state, don't you? It's called leakage, and it is precisely why AMD lost 6 billion in 2007 and why Barcelona fell on it ass, read JUNK. Or, haven't you figured that one out yet, with your deceptive, keen eye for detail."

I'm going to assume this is not Sparks (based on the use of a nickname instead of signing in)... Clearly he would understand that Ioff is not equal to off-state power consumption... Clearly, Scientia CHOSE to use the same incorrect terminology I planted by mere coincidence... "Ioff power draw" is a bunch of FUD I put in, and some poser decided to clip it and use it...

I have no problem with errors - people should just own up to them and not try to cover up mistakes with "I misspoke" or "I made a minor error"... just say "my bad, I was wrong" and move on... reminds me of Abinstein's refusal to admit his mistakes when he calculated that AMD's yields are 100% better than Intel's (which he downgraded to 50% better, as even he knew it was absurd).

intheknow... your comments are dead on; perhaps folks will listen to you?

Unknown said...

AMD 3870 X2 fragged by 15 month old 8800 GTX:

http://enthusiast.hardocp.com/article.html?art=MTQ1NCwxLCxoZW50aHVzaWFzdA==

We need to look at the performance in comparison to the closest competition for this video card, a single GeForce 8800 GTX. In every game except Half Life 2: Episode 2, the GeForce 8800 GTX delivered higher framerates. Keep in mind we are simply using a stock clocked GeForce 8800 GTX, so this doesn’t take into consideration a partner overclocked video card or the 8800 Ultra.

Roborat, Ph.D said...

AMDZONE moderator too sensitive and doesn't seem to know what "ironic" means.
Enumae's discussion with a child

Funny how Scientia keeps casting me as the bad guy and trying to gather AMDZONE readers to rally behind him. As if that site is still relevant. Now if only they would rename their site to VIAZONE...

enumae said...

Some Nehalem Information

Translation sucks, but there are a couple pieces of information (if true) that are interesting.

Anonymous said...

‘Keep in mind we are simply using a stock clocked GeForce 8800 GTX, so this doesn’t take into consideration a partner overclocked video card or the 8800 Ultra.”

Giant, I was hoping that this wouldn’t happen, but it did.

There we are, sitting with a shiny new QX9770, an Asus Rampage Extreme, and we are still stuck with a single card NVDA solution.

Both companies make me sick. AMD/ATI can’t deliver a competitive single card solution and NVDA will not release SLI to all. Well, maybe it’s Skulltrail, after all, with the fancy NVDA chip on board. Let's hope the 98 GX2 thing bears some fruit.

I want to puke.

SPARKS

enumae said...

ISSCC 2008 - PDF

Itanium Tukwila, Quickpath high-speed links enable peak processor-to-processor
bandwidth of 96GB/s and peak memory bandwidth of 34GB/s.

Can someone explain what this means and/or how it may compare to AMD?

Thanks
enumae

Anonymous said...

I Don't even know why people bother having a conversation with Ghost?

He's always looking for a reason to ban people. He'll lure you in, take the subject off topic and then ban you. You'd eat your own brain out before he starts making any sense.

I love how he keeps trying to add fuel to the fire that's not even burning and trying to lure in enumae-k. Props to Enumae for keeping his cool and catching on to Ghost's plan.

Ghost, I know you're reading this.

The whole point is that it's IRONIC (please look up the definition, it seems to me that you don't know what it means).
Ironic that a website that advocates hatred towards Intel (Yes, that is how I view AMDZone) would review an Intel platform. It's not funny, It's Ironic... please learn the difference.

Also, my P5B-Dlx which I bought 15 months ago, Supports Penryn. Please take a moment to swallow that bit of fact. Again, my 15 MONTH old board supports the latest and greatest.

I can put in a Penryn Quad in a few months and squeeze out another year and half easy... 3-4 YEARS of life for an intel board while supporting the latest...Eat your heart out Ghost, eat your heart out.

Anonymous said...

Oh and Ghost, I wish you had the balls to post on any other forum or blog besides AMDZone.

I know you posted at xtreme and we all know how that went.

It's obviously clear that unless you can control other people's actions (as a moderator), you don't have the balls to talk in an open forum.

Anonymous said...

Well, it seems AMD’s stunning 2007 failure has indeed borne fruit which, of course, was just a matter of time. I have always felt (and it could be fairly counter-argued) that in the final analysis, ‘performance per watt’ will always take a back seat to outright performance. Case in point:


http://www.theinquirer.net/gb/inquirer/news/2008/01/29/amd-loses-largest-asia


As INTC's performance AND power advantages grow in the market (not to mention availability), they will have their cake and eat it too. AMD will never come back from this and it will only get worse.


SPARKS

Anonymous said...

Sorry for bringing this up in this thread ... but I have been reading your blog for a while now and think it is truly one of the best and open blogs on AMD / Intel battle that is out there.

One thing that I have read so much about is that if someone buys AMD the x86 license does not come with it... my question is how long does Intel have the patent and does it not expire at some point?

Again, I apologize for asking this particular question in this thread.

Appreciate any insight here and don't mind if you tell me to get lost.

BTW, this would be my first post ever :-(

Anonymous said...

Ghost, abinstein, brent (scientia) are all AMD fanbois who are too chicken to engage in debate outside their sandbox.

Chicken s@(#*@s all of them.

Keep up the good work, roborat!

LMAO

Anonymous said...

GURU, you never answered my question.



“Additionally, I couldn’t find a goddamned thing on (Idlin) Where?”


SPARKS

Roborat, Ph.D said...

.... my question is how long does Intel have the patent and does it not expire at some point?

First of all it's not really an “x86 license”. Instead it’s a cross license agreement that allows Intel and AMD to use each other's technologies without fear of being held liable for infringement. It includes ALL patents held by both companies. It’s not one patent that would expire one day. The agreement is more of a blanket protection from any claims rather than being specific to one technology. The agreement ends in 2011 but everyone knows Intel is obligated to renew, “in good faith”. If someone buys AMD or AMD goes bankrupt, it’s only natural that the agreement terminates. But that doesn’t mean that whoever owns the new AMD will not be able to enter into a new agreement with Intel.

Anonymous said...

Scientia at it again, spinning the delays again lol.


AMD schedule more challenging than intel.

Anonymous said...

SSE5, wasn't that the top development of 2007?

Anonymous said...

"Scientia at it again, spinning the delays again lol.

AMD schedule more challenging than intel."

Man that guy is quite delusional... I mean I joke about his lack of knowledge but now it is clear he just doesn't care about the details on Si process or what it takes to validate and get an architecture out! He simply just looks at a date and then purely projects forward taking no actual facts into account.

One 'minor' flaw in his creative time schedule.... First did AMD actually even give a date on Bulldozer? I only heard pushed back to 2010... where did Q2 come from (of course no link as nothing he says needs to be supported)

Did AMD say anywhere that Bulldozer would be 32nm only or is this another prediction?

Anyone want to comment on the sanity of a new architecture being the first product on a new process which by the way will be using a completely new gate oxide?

He really needs to work on a presidential campaign in the US... I hear Mike Huckabee is always looking for a spinner - we had a victory in Florida thanks to our close to 3rd place finish... the people have spoken!

So putting aside the 'they're actually ahead argument' (though they pushed the product back at least a year)... especially as you know AMD sells products not process technologies (but wait isn't it the PRODUCT that was pushed back?)

I think AMD's reasoning on this is sound. For 45nm without highK (yeah I know they might insert it late...that's a bunch of PR crap), there is no point introducing an architecture late in the tech node cycle and then having to shrink it on a process that is really quite different and would be more than just a simple shrink. Look at Intel's 65 (std gate ox) to 45nm (w/high K) shrink - that wasn't your typical dumb shrink.

So AMD either needed to pull in Bulldozer (which was not feasible due to the issues with K10) or push it out. They also have to get Fusion out, because if they do not release it near or ahead of Intel's solution, they will potentially be forced into whatever standard Intel sets.

Also given K10's current performance, my opinion is that this means Bulldozer is not quite there yet - if it was vastly superior, I would think AMD would be dumping resources into it to speed up the schedule given where K10 is competitively - I'm not saying Bulldozer won't be good, but it apparently has a lot more work needed on it.

Or you can choose to believe the ABSURD Scientia argument and blame it on MOBO pushback - If the chip was really great don't you think MOBO makers would be excited to put out new MOBO's (which they would also be able to price higher if the CPU's performance is that much better?)

Roborat, Ph.D said...

Whoever said Barcelona is delayed is completely out of his mind.

Bug Free Barcelona Ahead of Schedule

Anonymous said...

on a side note, I just can't believe how successful the DTX form factor has become. Looks like someone was right again all along..

Anonymous said...

Let me help you guys, there are a few confused people about technology.

SOI versus bulk: IBM has always been a big proponent of SOI. There is no question in my examination of this technology that there are some fundamental advantages, and IBM does its part to make noise about it. No question that in small volumes, such as server or mainframe chips, the huge ASPs and low volumes make it a reasonable technology to consider for its performance boost and capacitance reduction. IBM has invested a good decade in the technology and design infrastructure to make it go, and it makes sense for their product line. AMD, that is another matter. They leaped to SOI looking for a quick fix to their lagging silicon performance. In the end they fell on their face and went to IBM to get help. SOI, as you scale it, requires some serious improvement in SIMOX control of the very thin top silicon epi to get the benefit at 45nm and 32nm. I think you are seeing the first rumblings of concern with noise about some migration back to non-SOI. In addition, the power of scaling was ignored by the IBM camp: as you scale from generation to generation, the advantage of capacitance reduction falls because junction area shrinks by 2x each node, so what looked like a huge advantage when SOI was introduced at 90nm is 4x less at 45nm and 8x less at 32nm. Combine that with process tightness and I think you'll see some real problems with SOI as we go to smaller nodes and increase the level of strain in the SOI structure; adding HighK/Metal Gate adds even more complexity. My prediction: that is way too much for a company like IBM to debug in a consortium-constrained development, and way too much complexity for AMD to solve with their resources. AMD will be surprised in ramp, I'm certain.
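
The scaling arithmetic in that SOI paragraph can be sketched numerically. This is a toy model with normalized, made-up values (not measured capacitances): if junction area, and hence junction capacitance, roughly halves each node, then the absolute capacitance that SOI eliminates halves too.

```python
# Toy version of the scaling argument: junction capacitance (normalized to
# 90nm) roughly halves each node, so the absolute benefit SOI buys you
# halves too. Illustrative numbers only, not real device data.
nodes = ["90nm", "65nm", "45nm", "32nm"]
saving = {node: 0.5 ** i for i, node in enumerate(nodes)}
for node in nodes:
    print(f"{node}: SOI junction-cap benefit ~ {saving[node]:g}x vs 90nm")
```

Under that assumption the benefit is 4x smaller at 45nm and 8x smaller at 32nm than it was at 90nm, which is the claim above.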

Lithography: Much has been made by some about how Intel took the high risk at 45nm by going with traditional 193nm lithography while AMD/IBM and TSMC are being smart in adopting the more capable immersion technology. I start first with how we've been proven wrong again and again in our ability to extract more resolution out of 248nm and now 193nm. Between clever RET, OPC, and smart design rules we continue to resolve smaller and smaller dimensions very well, long after we ever thought it possible. Which is more the risk: engineering a custom solution for a few very high-running products using a tried and true tool, and buying a lot of those mature tools, like INTEL did? Or would INTEL be smarter to try to adopt a brand new platform, new materials, new process, hope they get debugged before the ramp, and do this across 4 factories across the globe? Compare that with TSMC, AMD, or IBM, who don't have a huge volume runner, only need a few tools in one factory, and have limited money and engineering resources to brute-force a solution. It's a no-brainer for them too; given their constraints they have a better chance of success going immersion. In my opinion here, both have picked the right path. If I were a betting man I would say INTEL's likelihood of not getting into issues is better than AMD/IBM's. The fact that either does or doesn't get into trouble doesn't mean the path was wrong. Don't confuse PR claims about someone using a fancy new tool to print patterns with that tool being better than an older generation of tools. It's like arguing whether a Lexus engine built with robots is better than a hand-built AMG engine for Mercedes. All that matters is how the end result performs.

Ion/Ioff: The ratio is a misleading term. Ioff should include all sources of drain leakage at high bias with the gate at low: you have junction leakage, gate-to-drain leakage, and drain-to-source leakage. Ion is determined by mobility and channel length. You can have a great Ion/Ioff ratio but still a very poor CPU transistor. Depending on the application you can be limited by too-high Ioff and Igate, or by Ion, or by junction leakage. In the end the best comparison of two technologies is to compare the Ion vs Ioff curves across several orders of magnitude of Ioff. Go to the conference and you can see the curves. In this realm I think at 90, 65, and 45nm INTEL's Ion/Ioff is far superior to the respective IBM/AMD published papers. Pick any level of Ioff and INTEL still has superior Ion. For those who can't comprehend it: for a given MPG, INTEL always has more horsepower. With INTEL transitioned to HighK/Metal Gate they have achieved another advantage: greater gate capacitance control of the transistor with a simultaneous improvement in gate leakage. This is the 10x to 1000x reduction in power you hear about. The physics says AMD/IBM simply won't be in the same ballpark at 45nm without HighK/Metal Gate; the gap is HUGE. Without HighK/Metal Gate they can get close in Ion but will suffer hugely in gate leakage, or match drain leakage and miss by a mile in Ion. For those who can't comprehend this: either match INTEL's MPG and get 20-30% less horsepower, or match the horsepower and end up with 2x worse gas mileage. Think of INTEL's HighK like comparing Lexus's hybrids against Chevrolet's non-hybrid offerings. No matter what point you pick, AMD/IBM will lose out.
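
The curve comparison described there (fix a leakage level, read off drive current from each technology's curve) can be sketched like this. The data points are invented purely for illustration; they are not real Intel or IBM/AMD measurements.

```python
import math

# Compare two technologies the way described above: interpolate each
# Ion-vs-Ioff curve in log10(Ioff), then read off Ion at the same leakage
# target. Curves below are HYPOTHETICAL, for illustration only.
def ion_at(target_ioff, curve):
    """Linearly interpolate Ion (uA/um) at a given Ioff (nA/um) in log space."""
    pts = sorted((math.log10(ioff), ion) for ioff, ion in curve)
    x = math.log10(target_ioff)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("target Ioff outside the measured range")

proc_a = [(1.0, 900.0), (10.0, 1100.0), (100.0, 1300.0)]  # hypothetical
proc_b = [(1.0, 750.0), (10.0, 950.0), (100.0, 1150.0)]   # hypothetical

target = 30.0  # pick any leakage level within the measured range
print(f"At Ioff={target} nA/um: A={ion_at(target, proc_a):.0f} uA/um, "
      f"B={ion_at(target, proc_b):.0f} uA/um")
```

With these made-up curves, process A delivers more drive current at every leakage level, which is the shape of the comparison being claimed.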

SRAM size: Much is being made about SRAM cell size, often out of context. It isn't hard to make a very small SRAM. The smaller you make it, the more sensitive it is to process variations and to becoming unstable. Publishing a yielding SRAM isn't the hard part; making yield on huge cache sizes across hundreds of millions of chips, with good stability, is the real task. If demonstration is any test, look at INTEL, which is shipping large server products on 45nm. I've got to figure that for servers you can't tolerate cache data corruption. I don't know of TSMC ever having to deliver a huge-cache product in a fault-tolerant environment. Don't get seduced by size; sometimes size does matter and bigger is better here, no matter what your lover tells you.

Square and round contacts: The physics of lithography says that square drawn contacts end up round, and rectangular drawn contacts end up as ovals. Proximity of a contact to a gate/transistor doesn't change the leakage of the transistor. Proximity reduces the external resistance of the transistor carrying the current to the lower-resistance contact. Putting contacts closer to the gate is always good. The downside of putting contacts too close to the transistor is Miller capacitance and margin for shorting between gate patterning and contact patterning. Put them too close and I assure you your lithography engineer will be a very unhappy engineer.

Bottom line: all the data out there show INTEL with a process node lead of between 6 months and a year, depending on what you believe from Hector. But even at a given node INTEL has a 15-30% advantage at the transistor level, which in principle gives the circuit designers a huge advantage. Properly used by INTEL designers, I see no chance for AMD ever to match INTEL for performance without giving something up. In the end AMD will be relegated to lower ASPs and as such can't invest as required in silicon technology, and they will continue to be dependent on consortiums to help them. So here you have AMD with depressed ASPs, driven by having to work with an inferior process at a given node and getting to the next node a year to a year and a half late, billions in debt. Yet there are still delusional bloggers who think AMD has a sustainable business and they'll be back. Now I ask, are they bent over with their head somewhere? Is this Scientia dude one of those with his head up his rear? I think so!

Anonymous said...

Barcelona delayed?

No way B3 with Q1 2008 was always the schedule.

Hector always said B3 Q1 2008 that is the date..

InTheKnow said...

Enumae said...
Itanium Tukwila, Quickpath high-speed links enable peak processor-to-processor
bandwidth of 96GB/s and peak memory bandwidth of 34GB/s.


I'm not going to claim to be extremely knowledgeable on this subject, but I can tell you what the HT protocols are. HT 1.0 = 12.8 GB/s. HT 2.0 = 22.4 GB/s. HT 3.0 = 41.6 GB/s.

So it would seem that quickpath is superior in all respects to HT 1.0 and 2.0. It would appear that quickpath is better than HT 3.0 at transferring data between processors and not as good at transferring data between the processor and memory. But quickpath should still be far superior to the FSB in multi-chip systems.

As far as I know, AMD has yet to actually implement HT 3.0, though I could be wrong.
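
For what it's worth, those HT figures fall out of a simple back-of-the-envelope formula: clock rate, times 2 for double data rate, times the link width in bytes, times 2 directions, for a full 32-bit link. A quick sketch (the clock rates are the per-spec maximums):

```python
# Aggregate bandwidth of one full-width (32-bit) HyperTransport link:
# clock * 2 (double data rate) * width in bytes * 2 directions.
def ht_bandwidth_gbs(clock_ghz, width_bits=32):
    """Peak aggregate bandwidth of one HT link in GB/s."""
    return clock_ghz * 2 * (width_bits / 8) * 2

for ver, clock in [("1.0", 0.8), ("2.0", 1.4), ("3.0", 2.6)]:
    print(f"HT {ver}: {ht_bandwidth_gbs(clock):.1f} GB/s")  # 12.8 / 22.4 / 41.6
```

Narrower links (8 or 16 bits) scale the number down proportionally, which is why per-link figures quoted elsewhere can look smaller.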

InTheKnow said...

"Scientia at it again, spinning the delays again lol.

AMD schedule more challenging than intel."


The statement is true as far as it goes. AMD claimed to be trying to close the process gap by the 32nm node. They did set a more ambitious schedule than Intel.

My issue with his current position is that he mocked and ridiculed those with the process knowledge to tell him this was a pie in the sky dream. He fell back to the "AMD wouldn't have set the schedule if they couldn't meet it" position.

Now he backtracks with no acknowledgment that the people in a position to know better (and told him so) were right. I find his lack of willingness to listen to people who clearly speak from a position of experience to be sufficiently annoying that I quit posting on his blog.

The big question I keep coming up with, though, is "where are the AMD process experts?" And I mean real, knowledgeable individuals, not self-styled "experts". Why are those AMD experts not posting rebuttals to the Intel experts who post on this (or any other) blog?

Claims of protecting IP are just a smoke screen. There is enough public information to have a meaningful exchange without having to reveal IP as Guru has demonstrated.

And if they think that this forum is too hostile, Scientia certainly provides a forum that would receive their opinions favorably, but they don't post there either.

Perhaps I'm fated to remain eternally puzzled.

Anonymous said...

"In addition the powers of scaling were ignored by the IBM camp, as you scale from generation to generation advantage of capacitance reductions falls as the area of junctions have shrunk by 2x so what looked like a huge advantage when SOI was introduced at 90nm is 4x less at 45nm and 8x less at 32nm."

A very good and important point - Intel had a technical paper (not sure if it was IEDM, can't find a link easily), about how while they saw an improvement with SOI (I believe this was either 130nm or 90nm), they decided against the technology in large part due to the fact that the benefits of SOI diminish over time.

This was a huge internal debate (from what I've been told). You also have to factor in the cost side - despite all the claims, SOI adds a significant cost to a finished wafer (~10%)... and on 300mm when it first started out it wasn't just a matter of what was best technically but also what was viable from a high volume perspective. Standard 300mm wafers (epi quality) were hard enough to come by in the early days... 300mm SOI....fuhgeddaboutit!

The difference between IBM and Intel (or at least one of the key differences) is the weight of manufacturing considerations on technical decisions... it's not just about what's best from a pure technical performance point of view - there is cost, yield risk, equipment maturity risk, materials risk, and process window/sustainability to name a few.

To the anonymous poster above - an excellent description of the technical details.

Anonymous said...

“Ioff should include all sources of drain leakage at high bias with gate at low.”

This is where I was confused.


“Hey tough guy, you do have the capacity………….”


My sincerest and deepest apologies GURU, when I read:

“how could he make such an obvious mistake in calling Ioff (which is offstate current) power draw? Man, maybe this Guru is just a poser and has no clue what he is talking about..."


I thought it WAS someone compromising your Integrity, you have been correct too many times for anyone to get away with that, at least with me. Call it pit-bull, loyal instinct.


I didn’t realize what was going on until I reread the posts a few times, that it was, indeed, you. Then, after the regrettable Paxill statement I knew I made an ass of myself (not the first time), as I incorrectly over reacted, again, sorry.

Will you ever forgive me?

Humbly,
SPARKS

Anonymous said...

One addition - look at the variables mentioned above and consider dry litho double patterning (193nm dry) vs immersion litho for 45nm.

Cost - double patterning is better

Equipment maturity - again double patterning - dry 193nm tools have been around in manufacturing for a while.

Yield risk - difficult to say, but the double patterning is likely better here (immersion may potentially introduce new defect modes)

Process window - tough to say; if the immersion process is mature, then it would likely have the wider process window. Though from a maturity perspective the 193nm dry litho process window is a fairly well known and characterized entity.

Materials risk - double patterning again gets the edge - etch chemistries, solvents, photoresist have all been utilized in HVM.

So while immersion may seem 'cool' and more advanced and may be technically superior from a raw performance point of view... which would you rather ramp in volume with if you had a choice?

Anonymous said...

"And if they think that this forum is too hostile, Scientia certainly provides a forum that would receive their opinions favorably, but they don't post there either.

Perhaps I'm fated to remain eternally puzzled."

Puzzled....come on..man... My own personal theory is they know they will not be able to get away with the blanket unsupported statements that get made on Scientia's blog.... I mean heck just look at the author of the blog... Intel has low 45nm yields? Based on what, 45nm product availability on Newegg? That twisted logic (without support) wouldn't fly here and I think most of those technology posers know it.

It's clear (at least to me) from the censorship and statements that exist on that site, that there is a good deal of insecurity and people get defensive when their 'facts' are either clearly proven wrong or made clear that their 'facts' are really opinions dressed up as facts.

I tried posting a few times and gave up... I'll never know why my posts were rejected; many (but not all) were purely technical and had supporting links, but were not published... I've also been turned off by Scientia clearly editing others' posts, selectively publishing within his own comment, and refuting comments without posting the original. There is no way of knowing how things are taken in or out of context when he does this, and I consider it mental midgetry. He claims it is because of personal attacks, but based on some of the comments that people have copied to this site just in case, it is clear he just didn't like the opposing point of view and would only publish the pieces he could argue with or refute.

I would like to see more of a view of 'the other side' but based on the comments that get posted over there, do you really want some of that unsubstantiated rubbish here? I don't mind the counter opinion so long as it is clear it is opinion and not presented as FACT.

InTheKnow said...

One addition - look at the variables mentioned above and consider dry litho double patterning (193nm dry)vs immersion litho for 45nm.

One factor you didn't mention. Throughput. I've been told that immersion is slower than dry litho, but I suspect it does confer a slight throughput edge over double patterning with dry litho.

Another thing I keep running across is the emphasis on yield learning that AMD will have over Intel when Intel reportedly goes to immersion at the 32nm node. One key thing the people pushing this theory tend to miss is the hidden cost of being an early adopter. The early adopter gets a first generation tool. Those that follow get a more mature design.

Non-disclosure agreements are intended to secure the lead for the early adopter. But somehow equipment vendors seem to find a way to make the second generation tool better despite the agreements. So yes, the early adopter gets an edge in yield learning, but those that follow get a more robust tool. Personally, I think it's a wash from a yield perspective.

Being an early adopter is a flat out no win proposition for the engineer that has to keep the first generation tool up and running. Even with retrofits, a first of a kind tool is always a dog compared to a later generation tool.

Anonymous said...

"...as I incorrectly over reacted, again, sorry....Will you ever forgive me?

Humbly,
SPARKS"

No worries.

I was long winded, and trying to be cute speaking in the third person... I was a little bit over the top on some of those comments, I just get upset with Scientia's fantasy knowledge and his ability to take numbers completely out of context and so I decided to see if I could trap and expose him. And then the arrogance (for fear that he might be exposed?) to try to back his way out of things instead of just saying he was wrong, just set me off.

Anonymous said...

"One factor you didn't mention. Throughput. I've been told that immersion is slower than dry litho, but I suspect it does confer a slight throughput edge over double patterning with dry litho."

Yeah, good point - this is buried in the cost piece... most people assume 2>1, so 2 passes must cost more than 1 pass. People ignore the capital cost of the immersion tools (which is nearly 2X that of dry litho) and the slower throughput. Due to this, double patterning ends up cheaper (counter to most people's intuition).
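
A toy amortization model makes the point. Every number here is a made-up placeholder (tool prices, throughputs, and utilization are assumptions, not real figures); the only claim is that roughly 2x capital cost plus lower throughput can make one immersion pass dearer than two dry passes on scanner capex alone.

```python
# Toy litho-capex model: amortize each scanner's price over an assumed
# 5-year life at 80% utilization, then compare one immersion pass against
# two dry passes. All figures are hypothetical placeholders.
HOURS = 5 * 8760 * 0.8  # assumed tool life in productive hours

def capex_per_wafer(tool_cost, wafers_per_hour):
    """Amortized scanner capital cost per wafer pass."""
    return tool_cost / (wafers_per_hour * HOURS)

dry_tool, dry_wph = 25e6, 120  # hypothetical dry 193nm scanner
imm_tool, imm_wph = 50e6, 90   # hypothetical immersion scanner (~2x price)

double_patterning = 2 * capex_per_wafer(dry_tool, dry_wph)  # two dry passes
immersion = capex_per_wafer(imm_tool, imm_wph)              # one wet pass

print(f"double patterning: ${double_patterning:.2f} capex/wafer")
print(f"immersion:         ${immersion:.2f} capex/wafer")
```

Of course this counts only scanner capex; a fuller model would add the extra etch, clean, and mask costs that the second patterning pass incurs, which pushes the comparison back the other way.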

Anonymous said...

Whew..............!


Thanks

SPARKS

enumae said...

Thanks InTheKnow.

Unknown said...


There we are, sitting with a shiny new QX9770, an Asus Rampage Extreme, and we are still stuck with a single card NVDA solution.

Both companies make me sick. AMD/ATI can’t deliver a competitive single card solution and NVDA will not release SLI to all. Well, maybe it’s Skulltrail, after all, with the fancy NVDA chip on board. Let's hope the 98 GX2 thing bears some fruit.


I hear ya sparks, but it apparently just gets worse:

Apparently, 3 way SLI will only work with nforce 680i and 780i - it won't work with Skulltrail. Even worse, Quad SLI may be limited to the old 7950 GX2 cards on Skulltrail, as opposed to the upcoming 9800 GX2 cards.

Not happy about that at all. I might still end up picking up a 9800 GX2 though; depending on the performance.

-Giant

http://www.fudzilla.com/index.php?option=com_content&task=view&id=5459&Itemid=54

pointer said...

Just realized someone posted as me in the other roborat64 old article: https://www.blogger.com/comment.g?blogID=2602471396566186819&postID=7849487980608162677

A way to check if it is a real me is by looking at the icon beside me, real me has the 'Blogger icon', fake me has a Anonymous icon. You can go to the comment #67 to #70 to compare the difference.

Khorgano said...

InTheKnow said...

Another thing I keep running across is the emphasis on yield learning that AMD will have over Intel when Intel reportedly goes to immersion at the 32nm node. One key thing the people pushing this theory tend to miss is the hidden cost of being an early adopter. The early adopter gets a first generation tool. Those that follow get a more mature design.


Intel may not be an "early adopter" of immersion litho tools when they finally implement them in HVM, but they are definitely NOT new to immersion litho in general; they have had the tools since early this decade (2001 or 2002, I believe). They have just been waiting for the right time to pull the trigger on which process node gets full production adoption.


pointer said...

Just realized someone posted as me in the other roborat64 old article:


Some loser has been posting as several frequent poster aliases with homosexual banter on older blog posts... I wouldn't worry about it. I just hope Robo can block that sucker.

Unknown said...

AMD's deplorable customer support:

http://www.theinquirer.net/gb/inquirer/news/2008/01/31/deal-amd-phenomenon

Roborat, Ph.D said...

To play the devil's advocate on the immersion vs dry litho debate...

I have seen several articles that suggest that immersion at high-volume becomes cost efficient assuming the process matures and the yields are similar to dry litho. Especially, when you consider the small yield fall out from overlay issues with double patterning.

The main argument for immersion is that the overall throughput is better. Although immersion may be slower at the lithography step, which also includes whatever conditioning you add to the resist, double patterning involves twice the normal litho process plus a couple of etch steps and then a couple more cleans. Another cost driver for double patterning is the greater number of mask sets.

If Intel were to run immersion today, I haven’t seen any data suggesting that immersion would not be the more cost-effective solution. But not everyone has the same capacity and cost structure as Intel, which can easily recover capital cost by manufacturing high-volume/high-margin parts. My argument is therefore based on an apples-to-apples comparison. Comparing IBM/AMD immersion vs Intel’s dry litho costs may be another matter.

One good point raised by someone (Dr Yield, I think) is that Intel chose double-patterning dry litho because of Copy Exactly! concerns. Intel normally makes the decision to freeze the process at the discovery-development transition. Looking at the timing of 45nm, this would be sometime in late 2006. Immersion lithography at that time was still immature and likely relatively expensive. It's not surprising that back in 2006, when Intel had to choose what technology to develop for 45nm high-volume manufacturing, dry lithography came to be the best, if not the only, choice. If Intel were to make the same decision today, the outcome might be different.

Unknown said...

DUAL CORE CPU SHOOTOUT:
http://www.xbitlabs.com/articles/cpu/display/dualcore-shootout.html

So, new Intel Core 2 Duo E8000 processors based on 45nm Penryn cores do not have any worthy competitors at this time. They are considerably faster than Core 2 Duo with smaller model numbers and outperform the top AMD Athlon 64 X2 CPUs with overwhelming advantage. Add here their fantastically low power consumption and pretty democratic official pricing and Core 2 Duo E8000 will turn into a potential market hit. It is especially true for Core 2 Duo E8200 and E8400 models.

Only retailers can cast a shadow over this rosy situation, because they keep the prices for these promising models at a pretty high level since the market hasn’t been saturated with them just yet. However, this problem should very soon get resolved.

As for the top Athlon 64 X2 processors, they turned out seriously overpriced after the launch of the new Core 2 Duo E8200. Today they can only compete against Core 2 Duo E4000 and Pentium E2000. Moreover, we can check out how reasonable AMD’s price policy actually is with a simple empirical rule: for AMD Athlon 64 X2 processor to perform as fast as a Core 2 Duo E4000 or Pentium E2000, it should run at about 20% higher clock speed.

It means that from the performance standpoint AMD Athlon 64 X2 6000+ should cost as much as Core 2 Duo E4600, and Athlon 64 X2 5000+ shouldn’t be priced higher than Pentium E2200. Only in this case the dual-core processors pricing would be considered fair and reasonable. Moreover, we will have to disregard the power consumption rates in this case, because regular Athlon 64 X2 cannot be considered economical.

So, the results of our today’s dual-core processor shoot-out indicate clearly that Intel processors win the “Best Buy” title in every single price segment. And it will remain this way until AMD reduces the prices on its Athlon 64 X2, which keep rapidly losing their appeal. The situation may also change if they launch revised triple-core and dual-core processors on Phenom-like architecture, which have a chance of become more competitive against Core 2 Duo. However, it will hardly happen any time soon.


The facts speak for themselves, as this article proves.

Unknown said...

Penryn annihilates in the Dual Core field.

The E8200 (priced LOWER than X2 6400+) walks all over the 6400+.

There is not a single benchmark in xbits test where penryn fell from top 3 spots.

and to top it all off, the penryn consumes more than 100W LESS than the AMD solution.

The power of Penryn is beyond what was expected.

Anonymous said...

"Another cost driver for double patterning is the greater number of mask sets."

Robo, with all due respect, how many additional masks are we talking about? 4 at most? This is only at the critical litho steps, and it is not clear if it is all 4 of the typical critical layers.

If you compare the cost of a single litho tool to the 4 (or fewer) additional masks, the cost in this area is minuscule. There are what, 40-50 mask steps now? Yes, it is an added cost, but in the grand scheme of things it is nothing.

Heck look at the training cost (for operators, maintenance techs, etc). The installation costs (new design packages are needed). These are all minor costs too... but would work against immersion litho short term.

In my view immersion only makes sense if you have no other viable solution. The cost parity is several years out and the maturity is just not there (compared to current dry litho maturity). I'm sure in time the pricing will come down a bit and the maturity will be there - in this rare occurrence Intel will be able to draft off the work of others (it is usually the other way around).

The whole learning curve that others have mentioned is a load of garbage too - Intel (and other IC makers) generally have multiple solutions in house, and even if they don't go into production they get worked on (immersion, EUV, etc.). Some of the AMD fans make it sound as if Intel will be starting from scratch on immersion when they get to 32nm; Intel probably has as much knowledge of the toolset (with the exception of volume production) as anyone else in the world.

Anonymous said...

Sorry one final thing on the high volume learning that AMD will get on immersion litho:

Assuming they are ~20K WSPM (wafer starts per month) at some point on 45nm... this is ~4600 WSPW (per week)

Now let's say an immersion tool runs at 60 wph (I have no clue - can anyone help me out here?). Let's assume a typical 70% utilization, and with 168 hours per week you get

- 7056 wafer passes per tool per week

With ~4600 WSPW and 4 immersion litho passes this means you need 18.4K immersion passes/week

So AMD in "high volume" will need ~3 immersion tools... this is volume learning? How quickly do you think Intel will get this amount of learning? And when do you think AMD will get to this level on 45nm? Mid-2009 maybe? When will Intel start ramping 32nm? Late 2009?
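The back-of-envelope math above can be reproduced in a few lines. Every input here is the commenter's own stated guess (20K WSPM, 60 wph, 70% utilization, 4 immersion layers), not vendor data:

```python
import math

# Back-of-envelope immersion litho tool count, per the comment's assumptions.
wspm = 20_000                  # wafer starts per month (assumed)
wspw = wspm * 12 / 52          # ~4615 wafer starts per week
tool_wph = 60                  # wafers per hour per tool (guessed in comment)
utilization = 0.70             # typical tool utilization (assumed)
hours_per_week = 168

# Wafer passes one tool can process in a week: 60 * 0.70 * 168 = 7056
passes_per_tool_week = tool_wph * utilization * hours_per_week

immersion_layers = 4           # critical layers needing an immersion pass
passes_needed = wspw * immersion_layers  # ~18.5K passes per week

tools_needed = math.ceil(passes_needed / passes_per_tool_week)
print(tools_needed)  # 3
```

So even at full 45nm volume the fab loads only about three immersion scanners, which is the basis for the "one or two tools for most of the ramp" point that follows.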

Perhaps this will put a perspective on AMD's learning on immersion litho - for the better part of their 45nm ramp they will likely be running everything through one or 2 tools. Yes they will learn on these, but what if these early tools are outliers or have subtle differences that the equipment OEM is still working through (as can be the case on the first few tools off the manufacturing line)? AMD won't learn that until well into the ramp.

Roborat, Ph.D said...

Robo with all due respect how many additional masks are we talking about 4? (at most)

that would be 1 new mask set
per tool
per critical layer X 2 per pattern
per device type
per site
not to mention the shorter reticle life cycle.

at about $500K each (guessing), it's not that insignificant.
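Roborat's multiplication can be sketched out explicitly. Every count below is a guess: the $500K/set figure is his, and the device-type and site counts are purely illustrative placeholders I've chosen:

```python
# Hypothetical incremental mask cost for double patterning, following the
# per-tool / per-layer / per-device / per-site breakdown in the comment.
# All numbers are illustrative assumptions, not known Intel or AMD figures.
cost_per_mask_set = 500_000   # USD per set, guessed in the comment
critical_layers = 4           # layers that need double patterning (assumed)
patterns_per_layer = 2        # double patterning splits each layer in two
device_types = 3              # e.g. dual/tri/quad variants (assumed)
sites = 2                     # fabs running the process (assumed)

extra_sets = critical_layers * patterns_per_layer * device_types * sites
total_cost = extra_sets * cost_per_mask_set
print(extra_sets, total_cost)  # 48 24000000
```

Under these made-up counts the multipliers compound to tens of millions of dollars, which is the point of the reply: per-mask cost looks small until you multiply it across layers, devices, and sites (and the shorter reticle life cycle only adds to it).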

Anonymous said...

"There is not a single benchmark in xbits test where penryn fell from top 3 spots...and to top it all off, the penryn consumes more than 100W LESS than the AMD solution."

What's curious is how AMD can have 2 processors (6400, 6000) priced higher than the Intel 8200 which beats both of them pretty easily (even if you completely ignore power consumption!)

And that's raw performance alone - if you factor in performance per watt, it's that much more dramatic. Based on performance, it would appear the $178 6400 and the $167 6000 should now be priced below the $163 8200.

The problem is AMD has EIGHT products in a ~$100 price range.... that is absurd... they need to thin down the # of products so they can drop the price of a SKU or 2 without having to cut down the entire line as they are all so closely spaced together.

Anonymous said...

“The problem is AMD has EIGHT products in a ~$100 price range.... that is absurd...”


“There is not a single benchmark in xbits test where penryn fell from top 3 spots.”


“The facts speak for themselves, as this article proves.”



If you guys recall, two topics past, I posted:


“Comeback kid my ass.

WARNING TO AMD:

BE AFRAID, VERY AFRAID”


The E8XXX series is an absolute murderer. I don’t have a head for the numbers, but if you get ‘JJ’ or ‘In The Know’ to work out the dual-core DPW on these baddies, AMD HAS to be pissing their pants.

INTC will be cranking out (and selling) these 45nm nuclear-control-rod-metal, overclocking, cool-running, power-sipping monsters like no tomorrow! It was incredibly absurd that AMD put all their CPU eggs in the native quad core basket.

I’m no process genius by any stretch. But I know one thing: a lot of fast little dualies on a 300 mm wafer, as opposed to fat slob, broken, TLB-errata, power hog cripple triples, MUST be killing their yield/cost ratio. Further, to make matters worse, single/dual core performance really takes a dive when compared to 45nm C2D single-threaded performance.

The AMD horde should be arrested for their sheer stupidity; absolutely no foresight, none.

Can you guys work out the numbers on the E8xxx’s/Pheromone/DPW/cost thing? It must be horrible.

Doc, this must be a topic on its own.


SPARKS

Anonymous said...

I rest my case!

http://www.theinquirer.net/gb/inquirer/news/2008/01/31/intel-hit-cpu-shortages-report

SPARKS

Anonymous said...

Orthogonal, do you remember when you told me to pull back the “pitchforks” on AMD knowingly selling faulty products? Let the game begin. I still say a class action suit is a possibility. They did it with the IBM ‘Death Star’ hard drives.

http://www.theinquirer.net/gb/inquirer/news/2008/01/31/deal-amd-phenomenon


SPARKS

Anonymous said...

It's obvious that INTEL has AMD by the neck.
Intel will indeed squeeze the supply, maintaining higher ASPs and keeping demand high.

INTEL'S management is top notch when it comes to bringing in money....
and it's also top notch when it comes to making a top of the line product.

Simply put, it's a beast that you just don't wanna mess with right now.

Anonymous said...

"Simply put, it's a beast that you just don't wanna mess with right now."

You are wise beyond your words. Imagine all the AMD takeover horseshit that's been flying around the web and Wall Street.

Buy AMD for billions, eat AMD's long-term debt, and last but not least go up against the INTC juggernaut! Oh yeah, sure, this would be a terrific idea.

What a joke.

SPARKS

InTheKnow said...

Perhaps this will put a perspective on AMD's learning on immersion litho - for the better part of their 45nm ramp they will likely be running everything through one or 2 tools.

My money is on two tools. Not even AMD at their worst could be stupid enough not to have redundant tool sets.

On the other hand, I don't see Intel's PTD group having more than two immersion tools to develop the process on. Their wafer starts are going to be a lot lower than what a production fab would see.

D1D ramp might get another 2. This will put you at 2 tools for Intel for the first 6 months or so (my guesstimate) of product ramp on 32nm in addition to the 2 years that PTD will be playing with them.

So when AMD actually launches 45nm (in volume, that is) we'll be able to better see how much time they'll have to get volume learning before Intel's size kicks in and lets them catch up through sheer volume.

In the meantime, TSMC is potentially cranking out some serious production and helping to work the bugs out of the tools. By the time Intel puts a production tool on the floor in D1D, they should be a lot more robust.

Anonymous said...

"My money is on two tools. Not even AMD at their worst could be stupid enough not to have redundant tool sets."

Well, you always have two tools for redundancy... however, generally speaking one tool is fed or given priority (unless there is significant WIP in the area). In this manner you have fewer tool quals and can minimize idle time (by grouping the lots in blocks). Given the output of the tools it will be some time before 2 tools are fully loaded (we're talking >12K WSPM by my guesstimate). So AMD's near-term 'volume' learning is just a wee bit over-rated. My point is they are likely getting 'volume learning' off just one tool that is fully utilized and a 2nd tool that will likely only be partially utilized for some time.

Intel's 'development' line is fairly considerable in volume; between the development lots, short loops for integration work and monitor lots for various development activities, you're likely talking 1000 wafer starts per week or more - this is on the order of a 4500 WSPM capacity. This will load tools fairly well and enable Intel's engineers to get some 'volume' learning on a tool as well.

Anonymous said...

To beat a dead horse some more, it's clear that AMD's choice to go to immersion at 45nm is a Hail Mary long bomb. Bet money on it, it's a BAD bet, dude!

Figure immersion tool costs will run 30-40 million for the beta tool, then another 10-20 million for the track. AMD will probably have between 120-150 million invested in the earliest beta tools out there running early versions of resist. They will surely learn lots of things about the long-term interaction of liquids, resists, stages, and many other complex interactions this year and next. They will have to work through trying to ramp real volume and debug 2-3 beta tools with their vendors. I'm almost certain that AMD will lose a significant portion of their capacity for periods of time as they recover from unexpected issues. I'm also sure that they don't have an extra 30 million for a spare tool. Expect 45nm to be in short supply from AMD this year and next, with many supply hiccups! I don't know of a single instance in this industry where we've introduced a new tool and materials without a year-to-two-year learning curve on the beta tools. It's like buying the first year of a complete new model redesign. Even the best engineers from Toyota, Honda, BMW, and Mercedes find that first-year products still need some significant cleanup for the follow-on. Those that can't comprehend this simply don't appreciate the complexities of lithography today. A stepper/track combo is orders of magnitude more complex than any car ever made on this planet, even an F1 race car!

INTEL will at the same time be pushing similar beta tools at much lower volumes on their TD line running handfuls of lots for 32nm development. They can fold all of the vendor learnings and their own into their volume tools for 32nm production that doesn't show up for another two years.

Smart, hard-working engineers working on tried and true materials and equipment are a far smarter path to predictable high-volume products than relying on prototype tools with lots of promise but no track record. Technology always works out, but never without painful hiccups. AMD will get all those surprises.

For those that were at IEDM, there was an interesting assessment in the short course on performance boosters of the risks of the INTEL versus the IBM approach to high-k/metal gate. INTEL's approach to high-k/metal gate, like its approach to lithography, is the same philosophy: a lower-risk but engineering-intensive path to guaranteed success. I think you see that in the bottom line too. INTEL dominates market share and profits and meets schedules in general, while AMD has been anything but that.

AMD has a bankrupt business plan; it always has. They only looked good for a three-year period during the Craig Barrett days with the Prescott mess. Intel got a new chief, the ship is righted, and AMD is finished.

Anonymous said...

Some folks need to step back and take a look at the forest....

B3 is ahead of schedule? Which schedule would this be, the latest one or the one that in Q4 said it was out of the fab and being tested by AMD? If you revise the schedule enough times, eventually you will be ahead of it!

"If AMD really thought they were competitive the 6400 would have an FX designation. I doubt this review was a surprise to anyone who reads this blog."

No not a surprise... just a surprise that AMD believes even though they are not competitive they should be priced higher than chips that can outperform them (8400)... you know because AMD is all about the consumer, right?

"It looks like AMD has successfully undercut Intel's lowest quad price."

Fantastic - AMD has undercut Intel's price with a chip that is more expensive to produce! (Bigger die, lower yield.) I hear that's good for the whole 'profitability is our #1 priority' philosophy. Is anyone else troubled that AMD actually had to clarify that to the analysts.... what's scary is that they did have to!


"With such a big gap:

Q6600 2.4Ghz $275
Q6700 2.66Ghz $540

and 2.4 and 2.6Ghz Phenoms due late Q1 my guess is that Q6700 Kentsfield prices will tumble"

First off, they will come down as more 45nm gets phased in... but as always the logic is just not rational... Most reviews have the 2.6GHz Phenom as merely competitive with the 2.4GHz Q6600... why would Intel need to slash prices? Why would Intel need to even touch the 6700 if AMD has nothing that can outperform it? (Again, except for the fact that they will likely do it anyway to clear out 65nm inventory.)

"Perhaps AMD could have 2.8Ghz parts by mid year."

This is classic! Coming from someone who "KNEW" AMD had 2.6GHz chips internally in Q3'07 and thought there was a chance of 3.0GHz being out by end of 2007, with Q1'08 at the latest! Now we have 'perhaps' and 'midyear' (which, if it is on AMD's calendar, I take to mean Sept?). Does Scientia now work for the AMD PR team?

"Perhaps the 2.4Ghz version at least may be available in Q1."

This was supposed to be the 2nd-from-top bin (2.6) at launch, in theory toward the end of Q3'07... kind of puts things in perspective... talk about a reset on expectations (putting aside the fact that it was also supposed to be 40% better!)

"Supposedly the Q9300 that replaces the Q6600 will start at $270. This would not appear to match tri-core."

This is just CRAZY! Why would Intel need to price a functional quad-core chip at or lower than a tri-core chip? By the way, that quad will be higher clocked and have better performance clock for clock (not that AMD can even match clocks, even with a core disabled/nonfunctional), oh, and the power may be just a tad different too... Yeah, Intel better try to match that tri-core price, else they might have difficulty moving those 45nm quads! What planet is he living on?

What he fails to understand is the Q9300 puts a ceiling on AMD's quads (as their top quad will likely, at best, be simply competitive with it)... this ceiling on quads (at ~$270?) pushes AMD's lower-bin quads further down (high $100's?), which then pushes tri's down (unless they think people will pay more for three cores than 4 because of the 'coolness/uniqueness' factor?). Oh, and guess where this puts the top-of-the-line K10 duals, assuming they ever come out...

Anonymous said...

Having already shot both feet, AMD is now reloading and taking aim at their own kneecaps. While most people see the tri-core as a way of recovering money for a poor yielding process, they don't see the impact it will have on dual-core pricing.

This effectively will push the dual-core bins (which, by the way, are the high-volume parts) down a few price points to make space for the tri's. AMD better have a bunch of non-functional die; otherwise they are better off just throwing out the crippled die, which might allow them to keep the dual-core price bins up a bit more. Unless the dual-core chips can outperform the tri-cores (and if they could, why bother trying to sell tri-cores?), the tri-cores will have to be priced higher than duals. Either that or the dual cores will have to have vastly better clock speeds.

As currently is the case with the K8 dual cores, AMD will have an absurd # of SKU's in a very tight price range which gives them ZERO price flexibility - as soon as any price changes they will have to change them all, as they will be so tightly grouped together. Does anyone at AMD have a background in marketing and/or business? They seem to think more bins are better and that it is necessary to have 100MHz increments... They might as well put out enough products to fill every $5 price point from $100-200 so they have all segments covered; maybe they can put out various cache sizes with the same CPU clock or go to 1/4 multipliers! (sarcasm) IDIOTS! (not sarcasm)

I can understand the technical problems with K10 and the difficulties with the process, but to see such lunacy on the product planning, pricing and marketing is just painful and inexcusable... someone fresh out of an MBA program could see the ridiculousness of the number of products. I mean they have the Phenom Black and "non-Black" editions at the SAME PRICE! Why not just unlock all the 2.3's if they are the same clock and the same price... why artificially create yet another SKU?

Most of this is caused by AMD prematurely launching K10... they simply couldn't put out a single SKU, so they announced a 2.2 and a 2.3 and paper launched (for PR and a press review junket) a 2.4GHz... Eventually they will have a 2.6 out (?) and potentially a 2.8 (maybe?). That would give them at least 5 SKU's just for quad core; throw in a couple of Black Editions or an FX-type thing and things are outta control long term.

AMD should not have bothered with the 1/2 multipliers and gone for 2.2, 2.4 and 2.6GHz. It's obvious they couldn't do this and meet their launch schedule for Phenom, or they would have simply had a single 2.2GHz product and looked rather pathetic. So rather than wait until they at least had a 2.4GHz core, they created these smaller more tightly grouped SKU's. Good for the short term launch as you can get a 2nd product out at launch, but not so good for long term pricing strategy. Heck it would have been better to downgrade some 2.2's down to 2.0's (maybe squeeze these into a slightly lower power bin?) and then phase out the 2.0's when the other SKU's come online.

Or they simply could have waited until the 2.4's are ready, perhaps work on yield a bit and then there might not even be a need for tri-cores (if you don't subscribe to the 'unique capability' AMD spin)

AMD really seems to lack a coherent pricing strategy other than reacting to whatever Intel does, and they seem to be winging it on the whole tri-core (which by the way was NEVER a part of AMD's long-term plan and popped up out of nowhere) and the mid-range Black Editions. Black Edition kind of makes sense for the top bin or two, but mid-range? Do you really want your enthusiasts buying unlocked mid-range low-priced chips or the higher-priced higher-bin ones? Do you not want to get ANY price premium for the unlocked multiplier? (Is AMD that flush with cash that it sees no need to get a few dollars for the additional feature?)

It just seems kind of crazy... (or maybe I'm missing something real obvious?)

Anonymous said...

Starting second half 2008, AMD will have like 20 variations (dual, tri, quad) of Phenom all priced under $250. That is, if they ever manage to come out with a dual core Phenom that can exceed the performance of their 90nm 6400+...

Anonymous said...

while most people see the tri-core as a way of recovering money for a poor yielding process

Proof please.


As currently is the case with the K8 dual cores, AMD will have an absurd # of SKU's in a very tight price range which give them ZERO price flexibility - as soon as any price changes they will have to change them all, as they will be so tightly grouped together.Does anyone at AMD have a background in marketing and/or business?

Well and Intel?
Who the hell releases a new product that:
- Outperforms their own older one.
- Uses less power thus more efficient.
- Adds more feature (SSE4)
- But costs the same as the older ones?!?
Who is the dumb guy here?

Who the hell in his right mind will buy today one of the 6xxx when the 8xxx or 9xxx are much better overall and cost the same.
Who has phased out their complete line and R.I.P.E.D. all its products, forcing a premature EOL of its complete product line (98% of its manufactured products)?
You seem to know what’s wrong at AMD, how about you pee to Intel too?


AMD should not have bothered with the 1/2 multipliers and gone for 2.2, 2.4 and 2.6GHz. It's obvious they couldn't do this and meet their launch schedule for Phenom, or they would have simply had a single 2.2GHz product and looked rather pathetic.

Well you seem to suffer from Alzheimer. Because Intel released the Q6700 then the Q6600. Even after ONE year Intel has LESS quad core offerings than AMD. If AMD had not have all those issues you would have a complete near perfect line at lunch, something that Intel after more than a year still have to do.

Anonymous said...

The E8200 (priced LOWER than X2 6400+) walks all over the 6400+.

What's curious is how AMD can have 2 processors (6400, 6000) priced higher than the Intel 8200 which beats both of them pretty easily (even if you completely ignore power consumption!)

I think it’s amazing how a virtual processor (read: non-existent) can beat real, available products.


Also read this:
www.google.com/search?hl=en&q=e8400+temperature+problems&meta=

Do you know why you can’t find problems with the 8200/8300/8xxx?
Because they does NOT EXIST!

Anonymous said...

AMD's deplorable customer support:

Intel deplorable customer support and Idiots!

http://www.ocforums.com/showthread.php?p=5460892

Anonymous said...

Intel has low 45nm yields? Based on what, 45nm product availability on Newegg? That twisted logic (without support) wouldn't fly here and I think most of those technology posers know it.

Well various people here say AMD has poor quad core yields. Based on what, product availability on Newegg?

Axel said...

abinstein

Who the hell in his right mind will buy today one of the 6xxx when the 8xxx or 9xxx are much better overall and cost the same.

And when does this not happen every time a new CPU generation is introduced? 386 to 486, 486 to Pentium, the older generation must be reduced in price as the new one ramps, otherwise there's no reason to buy the old anymore. So what's your point?

The problem for AMD is that their processors are so far behind Intel's that they will be unable to participate at price points beyond $200-$250 for the remainder of 2008 and probably well into 2009. If the rumors that 2.6 GHz K10 has been delayed to Q3 are true, this only guarantees these low ASPs.

Do you know why you can’t find problems with the 8200/8300/8xxx? Because they does NOT EXIST!

Funny, the new E8400 I received three days ago is running fine, not to mention at 3.6 GHz at STOCK volts on a P965 motherboard purchased in the fall of 2006.

I see that NewEgg are temporarily sold out now but that's probably because these $230 E8400s are becoming ridiculously in demand with gaming enthusiasts, and Intel are very likely biding time with the shipments to give 65-nm inventory a chance to clear. They are in no hurry to flood the market with 45-nm when 65-nm is taking AMD to the cleaners perfectly well.

Anonymous said...

To the anonymous individual who believes the E8xxx series in non existent, please refer to the following link.

http://www.tigerdirect.com/applications/searchtools/search.asp?cat=2396&keywords=&mnf=432

It seems there are some that do exist. That said, if you wait two weeks you will probably get them from any vendor. Additionally, it’s not as if you’ll wait a year for a miserably failed product, like a non-existent Pheromone 9900. Even at the make-believe speed of 2.8G+, they still cannot outperform Intel’s last-generation product lineup. And please, spare me the clock-for-clock horseshit. Face the fact: they’re dogs that will NEVER see over 3GHz in sellable mass production.

As far as power requirements and thermals are concerned, current Intel offerings are far better than ANYONE else’s, anywhere. Barcelona’s thermals are an abomination above 2.6 GHz.

Look, you can rant and rave now about product availability, but wait a few weeks and you’ll be eating your words like the others who’ve posted ridiculous assertions on this site. You will be proven wrong and we won’t hear from you again. At least, until you have some more FUD to ejaculate.

By the way, That’s DO NOT EXIST, Junior.

SPARKS

Anonymous said...

"Well various people here say AMD has poor quad core yields. Based on what, product availability on Newegg?"

I don't think the yields are poor - I think the splits are! (Due to leakage)

Most of the 'evidence' is circumstantial:

- Tricore appeared on AMD's roadmaps out of nowhere (apparently due to customer demand for a product that wasn't on the roadmap)
- AMD's power #'s on lower clocked parts are where the targets were for the higher clocked quad parts.
- AMD launched 2-3 bins below expectation on both server and desktop parts and has yet to even get a 2.4GHz part out
- AMD is unable to release a >2.8GHz part on a PROVEN architecture (K8). Their top bin has gone from 3.2GHz on 90nm to 2.8 (do these even exist?) on 65nm.

Now I'm sure you can come to an explanation for each individual item separately, but when you consider it collectively... well I'll let folks decide.

This is in stark contrast to Dementia who bases his 'analysis' purely on the availability of parts on Newegg (in almost Sharikou-esque fashion!)

Anonymous said...

"You seem to know what’s wrong at AMD, how about you pee to Intel too?"

Yeah EOL'ing bigger die parts is an absurd strategy... this is directly supported by Intel's lack of profitability... clearly I need to question Intel's business and marketing strategy.

Thank you for pointing out the error of my ways.... I stand corrected.

Who in their right mind would release a new product that is better than their own older one, like Intel... that's what the AMD fanboys call the Osborne effect, right? Clearly Intel is the one with the brain-dead business strategy and we should bow down to AMD's superior intellect in this area.

Anonymous said...

"Well you seem to suffer from Alzheimer. Because Intel released the Q6700 then the Q6600. Even after ONE year Intel has LESS quad core offerings than AMD. If AMD had not have all those issues you would have a complete near perfect line at lunch, something that Intel after more than a year still have to do."

When you're the only game in town, you don't need it! Why would Intel release a slew of quads when:

A) The market for quad desktops was still maturing (and is hardly significant even today!). News flash: you don't launch 5 SKU's into a market that is <5% of the total desktop market, which puts it at <2% of the total CPU market! That's the whole point of my original comment! You ESPECIALLY DON'T DO THIS WHEN THE PRODUCTS ARE SO CLOSE TOGETHER (100MHz) performance-wise.

B) Intel would only be competing with themselves

C) Intel was constrained (based on the inventory #'s) - so you could make 2 dual cores or 1 quad? Which would you rather do?

D) Intel chose to focus quad output on the more lucrative server business (if you don't believe me look at the relative Core 2 conversion rates in the server and desktop areas)

E) the two processors you talk about are not separated by a mere 100MHz, like AMD's RIDICULOUS product lineup.

"Even after ONE year Intel has LESS quad core offerings than AMD."

Huh? Please tell me you're not counting the Black Edition! All AMD has is a 2.2 and a 2.3 part on the desktop. Let's see, which part do I go for? One is so amazingly different from the other (~5%), I don't know what to do!!! Please help! Why can't I get a 2.25GHz part... that's the real sweet spot!

Anonymous said...

Axel - how reliable is that site? (regarding the 2.6GHz K10 delay)

As I look through it I also see a comment on AMD potentially working on a B4 stepping and I thought AMD had stated clearly that B3 was it until 45nm?

InTheKnow said...

If AMD had not have all those issues you would have a complete near perfect line at lunch,....

Personally, I prefer to avoid lunch lines.

InTheKnow said...

I see that NewEgg are temporarily sold out now but that's probably because these $230 E8400s are becoming ridiculously in demand with gaming enthusiasts, and Intel are very likely biding time with the shipments to give 65-nm inventory a chance to clear. They are in no hurry to flood the market with 45-nm when 65-nm is taking AMD to the cleaners perfectly well.

It is far more likely an issue of capacity. Keep in mind that Intel is still below half of their planned capacity on 45nm with 2 large HVM fabs still to come on line.

If you had to choose between high ASP server chips and lower ASP desktop chips, which would you give manufacturing priority to?

Anonymous said...

speculation versus reality.

To try and yield a native quad-core on 65nm, where the total chip is as large as Barcelona, will result in low total yield. By low yield I simply mean that straight defect density will impact the probability of getting all four cores to be good; the INTEL approach of only needing two good cores and gluing them together at the package level makes far more sense at 65nm. Getting all four cores to hit the high speed bin will also be commensurately more difficult. It's the law of statistical distribution for defects and device variations, and that can't be changed. It's no wonder AMD is going to be offering tri-cores and such to offload all those partially good quad-cores. To do so undercuts their dual-core line and compounds their pricing problem and their profits too.
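The statistics being described are the standard Poisson defect-yield model. Here is a quick sketch; the defect density and die areas below are made-up illustrative numbers, not actual AMD or Intel data:

```python
import math

# Textbook Poisson yield model: P(die is defect-free) = exp(-D * A),
# where D = defect density (defects/cm^2) and A = die area (cm^2).
def die_yield(defect_density: float, area_cm2: float) -> float:
    return math.exp(-defect_density * area_cm2)

D = 0.5                    # defects per cm^2 (illustrative)
dual_area = 1.4            # cm^2, hypothetical dual-core die
quad_area = 2 * dual_area  # native quad ~ twice the dual die area

y_dual = die_yield(D, dual_area)
y_quad = die_yield(D, quad_area)  # equals y_dual**2: big dies hurt fast

# An MCM "quad" also needs two good duals (y_dual**2 of the silicon),
# but the bad dies were discarded as small, cheap duals before packaging,
# and every good small die can still be sold as a dual-core part.
print(round(y_dual, 3), round(y_quad, 3))  # 0.497 0.247
```

The exponential means doubling the die area squares the per-die yield, which is exactly why a monolithic quad on a mature-but-defective process is so much more painful than packaging two known-good duals.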

Now at 45nm, where you halve the die size, things get a lot better. It is no wonder that the company that makes money chose to do native quad-core only when it would make sense.

As to claims that 45nm volume is in short supply, I only ask that you compare availability and pricing of the various products released from INTEL at this stage of 45nm against the same stage of 65nm, 90nm, 130nm, and so on. From that, gauge whether you think the ramp is on schedule or not. Two of 4 45nm factories are running, but you don't just turn a multibillion-dollar factory on like a faucet at full capacity.

InTheKnow said...

To put the shoe on the other foot regarding the whole immersion litho discussion, we can look at AMD and ALD.

I'll state clearly up front that it is an assumption on my part that AMD will be using ALD for metal gate processing when they do finally implement it.

In that case, AMD will get the benefit of Intel's work, at least on the tool level. They will have different issues because they are using a very different process flow, but if they use ALD to deposit the gate material, at least they will have the benefit of using a more mature tool set.

This has been the case for almost all equipment innovation. Intel has led and AMD has followed.

Now the shoe is on the other foot. It will be interesting to see how AMD deals with a new set of issues that being an early adopter brings.

InTheKnow said...

One thing from the link that Robo posted to enumae's discussion with Ghost that I can't wrap my head around is Scientia's graphs at the beginning of the discussion.

Scientia said...
Anyway, I thought it might be more informative to normalize the numbers. The graph below shows the AMD percentage of combined total Intel/AMD ASP and AMD's volume share. However, we have that odd blip in Q4 and Q1 with volume share.

I decided to average the Q4 06 and Q1 07 volume share percentage numbers to get rid of the blip. Basically, it has been suggested that AMD oversold chips in Q4 and this led to the drop in Q1. If this is the case then averaging the two should show things more clearly.


He then goes on to base his analysis on these new graphs he created. However, to follow his argument it is necessary to understand his "normalization".

Can someone brighter than me please explain what the devil "the AMD percentage of combined total Intel/AMD ASP and AMD's volume share" means? I just don't get what the relationship between AMD's and Intel's ASPs has to do with the price of fish in China, or with the fact that AMD suffered a 30% drop in ASPs over that time period.

Another interesting point is how he says "I decided to average the Q4 06 and Q1 07 volume share percentage numbers to get rid of the blip. Basically, it has been suggested that AMD oversold chips in Q4 and this led to the drop in Q1."

I have no issue with the assumption. I just find it interesting that Scientia embraces the concept when it is convenient to his analysis after vehemently rejecting the whole idea that AMD oversold in Q4.

Unknown said...


It seems there are some that do exist.


Indeed they do exist. I picked up an E8400 and a new XFX 8800 GT 512MB card just yesterday for my second system to replace the E6600 and aging XFX 7600 GT.

Funny, the new E8400 I received three days ago is running fine, not to mention at 3.6 GHz at STOCK volts on a P965 motherboard purchased in the fall of 2006.

I haven't tried much overclocking yet, but mine is running fine at 3.2GHz with the included Intel heatsink-fan. This is on a P965 ASUS P5B-E motherboard.



If you had to choose between high ASP server chips and lower ASP desktop chips, which would you give manufacturing priority to?


This is what seems to be happening. Small quantities going to the desktop so that they can say they've met their launch schedule, but the majority of production going into servers where the real $$$ are. As Axel mentioned, Intel isn't having any trouble selling 65nm dual core CPUs for the higher end mainstream market.

Anonymous said...

“This is what seems to be happening. Small quantities going to the desktop so that they can say they've met their launch schedule, but the majority of production going into servers where the real $$$ are.”

Oh, how true. Besides G, the regulars on this site predicted this would happen months ago. Further, with INTC's enormous lead in both production and capacity, compounded by AMD's drop-dead failures, is it any wonder they released 45nm for the desktop at all? After all, a Q6600 alone can bury any product AMD has to offer, be it present or future releases, with a mild overclock.

You know this as do I, because I am typing on one at 3 GHz right now. You’ve been clocking one as long as I have. In fact, I’ll be willing to wager that the ‘G0’ stepping took the Intel engineers a bit by surprise, initially.

I've been building my machines since the late eighties; I have never seen anything clock like these 65nm parts, on straight air, no less! I can't wait to get my hands on that unlocked, Hafnium bad boy, the QX9770.

Look, we all knew the 2008 server assault was coming. What we didn’t know, naturally, is how it was going to be orchestrated. The bottom line, which was common knowledge months ago, was AMD’s last remaining market strength was in servers. What we didn’t know is how aggressively INTC was going to go after it. INTC will ‘struggle (!?!)’ past these minor supply delays, and the IDIOT conspiracy theorists will weave a plethora of fear, uncertainty, and doubt.

Meanwhile, as the ‘Scrappy Little Company’ scrabbles to put lipstick on the Barcelona pig and sell faulty, broken products, INTC will kick their teeth in on the server front during this half of the year. In fact, they already have.

In the second half, it’s game over. Nehalem will own HPC with eight cores and 16 threads, Christ! (I think Nehalem has IBM pissing their Brooks Brothers suits.) Haven’t you been saying ‘AMD BK in 2008’ all along? With nothing to compete with, as far as I can see, it may just happen, especially with 18 months of sequential financial losses.

Frankly, I believe, it was over a year ago. All the rest of this endless noise has been bullshit and nonsense. We haven’t been wrong yet, have we?

SPARKS

Anonymous said...

And when does this not happen every time a new CPU generation is introduced? 386 to 486, 486 to Pentium, the older generation must be reduced in price as the new one ramps

Well, tell me where and when the 65nm parts got their price reduced when the 45nm parts hit the market.


otherwise there's no reason to buy the old anymore.

You agree. Thanks!


Funny, the new E8400 I received three days ago is running fine, not to mention at 3.6 GHz at STOCK volts on a P965 motherboard purchased in the fall of 2006.

Congratulations, you win a prize. Now where are all the other models that were tested at Xbit?

Anonymous said...

To the anonymous individual who believes the E8xxx series in non existent, please refer to the following link.

The link shows a CPU. Is that it?
The whole E8xxx line is a single CPU?


Additionally, it's not as if you'll wait a year for a miserably failed product, like a non-existent Pheromone 9900.

You mean the buggy E8xxx line that has temperature issues, errata, stability issues and other problems that make Intel delay them month after month?


As far as power requirements and thermals are concerned, current Intel offerings are far better than ANYONE else’s, anywhere.

Yes, I know, Intel competes with itself:
Intel Core 2 Duo E6xxx vs. Intel Core 2 Duo E8xxx.

One line exists, the other doesn't. Intel outdated its own CPU line with a virtual one.
Who needs AMD when Intel can create serious issues for itself?


You will be proven wrong and we won’t hear from you again. At least, until you have some more FUD to ejaculate.

By FUD you mean real facts that anyone can see except your blinded fanatics?

Orthogonal said...

One line exists, the other doesn't. Intel outdated its own CPU line with a virtual one.

You realize that AMD owns ~50% of the retail market share, right? Do you understand what that means in terms of product shipments? It means Intel focuses on the OEM channel first and the retail channel second. If there is a shortage of chips in the retail channel initially after launch (less than 2 weeks ago :p), it would mean that there were obviously not enough chips sent to retail outlets to meet demand. There are two explanations: the one you assume, that there aren't enough chips being produced, which is partially true; and the reality, that the chips are going to OEM builders first and foremost, with retail getting what it can. It's not vaporware; there actually was a hard launch, and plenty of people have product to prove it.

Who needs AMD when Intel can create serious issues for itself?

Yeah, this situation is dire I can tell you.

By FUD you mean real facts that anyone can see except your blinded fanatics?

What facts?!?! I'd say there hasn't been enough time for proper data to be collected for just about any position right now, just calm down, you're speculating and conjuring enough on your own.

Anonymous said...

It means Intel focuses on the OEM channel first and the retail channel second.

OEM first? Hardly.
Most OEM machines, especially the business models, still use the outdated Pentium 4 and Pentium D with 512MB RAM.

Desktops are mostly based on the Pentium 2xxx and Core 2 Duo 4xxx.

Your scenario, while a good excuse, is not realistic.


I'd say there hasn't been enough time for proper data to be collected

Just a link or two to OEM machines based on any of the E8xxx models would be nice.

pointer said...

Actually there are some channels selling it. You just need to type E8200 in google search
http://www.google.com/products?q=E8200&rls=com.microsoft:en-us&ie=UTF-8&oe=UTF-8&um=1&checkout=1&sa=X&oi=product_result&resnum=1&ct=checkout-restrict


Quite a few of them are on back order or sold out, but there is one with stock (I guess it will soon be out of stock too, as it is in high demand).

here is the one with stock

http://www.dealiverable.com/ssproduct.asp?pf_id=1011154682

another one claims to ship after 2-3 weeks:
http://www.pcsuperstore.com/products/N60961-Intel-BX80570E8200A.html/froogle/


Stock as of 11:25am 2/2/08 After your order is placed,
Allow approximately 2-3 weeks for delivery

pointer said...

another 2 stores selling E8200, E8300, and E8400 :

http://images.lowyat.net/pricelist/pczone.pdf

http://images.lowyat.net/pricelist/cycom.pdf

The above shops are located in Malaysia's famous Lowyat Plaza, where a lot of computer-related shops are located.

Orthogonal said...

OEM first? Hardly.
Most OEM machines, especially the business models, still use the outdated Pentium 4 and Pentium D with 512MB RAM.


http://www.pcper.com/image.php?aid=486&img=desktop_cpu_shipment_estimate.gif

OIC now, so the Pentium D and Pentium 4, which represented less than 20% of ALL chips Intel sold in 2007 (and which are completely EOL'd now for '08), were what Dell, HP, Acer, Gateway, Compaq etc... were clamoring for, while Newegg and Zip Zoom Fly sold the rest... That makes a lot of sense.

Desktop are mostly based on the Pentium 2xxx and Core 2 Duo 4xxx.

So you're saying the high-volume cheap parts make up the majority of the desktop market? Wow, thanks for the Business 101 lesson, I never would have thought that. While your point is true, it's irrelevant to compare apples and oranges; the discussion is about E8xxx supply, not overall desktop market share. The fact is the majority of the E8xxx supply is going to OEMs, not retailers, and since it is a mid-range to high-end part it is naturally going to be in lower volumes. Notice the graph in the chart linked: what does it say about overall Wolfdale supply in Q1'08? About 5% of all chips shipped. Also notice that when Conroe launched in 2006, it had a similar volume ramp.

Anonymous said...

Orthogonal thanks for link.

Two points:
- The image is named desktop_cpu_shipment_estimate.gif, so it's "just" an estimate.

- I don't know if they are referring to the machine itself or the CPU, because there are still LOTS of 2006 and 2007 models around. Just because they don't show up in that 2008 timeline doesn't mean they've stopped existing in most computer stores:

http://www.shopping.hp.com/product/desktops/z560_series/rts/4/computer_store/RE500AA%2523ABA

http://h10010.www1.hp.com/wwpc/us/en/sm/WF04a/12454-12454-64287-321860-3328893.html

http://h10010.www1.hp.com/wwpc/us/en/sm/WF04a/12454-12454-64287-321860-3328896.html

http://h10010.www1.hp.com/wwpc/us/en/sm/WF06a/12454-12454-64287-321860-3328898-3232030.html

http://h10010.www1.hp.com/wwpc/us/en/sm/WF04a/12454-12454-64287-321860-3328898.html

http://www.dell.com/content/products/productdetails.aspx/precn_670?c=us&cs=28&l=en&s=dfb

http://www.dell.com/content/products/productdetails.aspx/precn_370?c=us&cs=28&l=en&s=dfb

http://www.dell.com/content/products/productdetails.aspx/dimen_e520?c=us&cs=22&l=en&s=dfh

http://www.dell.com/content/products/productdetails.aspx/precn_380?c=us&cs=28&l=en&s=dfb

http://www.dell.com/content/products/productdetails.aspx/optix_320?c=us&cs=28&l=en&s=dfb

http://www.dell.com/content/products/productdetails.aspx/optix_gx520?c=us&cs=28&l=en&s=dfb

http://www.dell.com/content/products/productdetails.aspx/optix_gx620?c=us&cs=28&l=en&s=dfb

http://www.dell.com/content/products/productdetails.aspx/optix_745?c=us&cs=28&l=en&s=dfb

Unknown said...


You mean the buggy E8xxx line that has temperature issues, errata, stability issues and other problems that make Intel delay them month after month?


Buggy? My E8400 has now been running Prime95 for twelve hours with no problems, overclocked from 3GHz to 4GHz. That's using just air cooling, no liquid cooling. Even under a full Prime95 load the temperature has not exceeded 45C. Compared to AMD? Get a Phenom Black Edition and overclock from 2.3 to 2.6! Wow!

You know, Mr. Anonymous, it's quite amusing that you complain about a lack of E8xxx CPUs when AMD launched Barcelona nearly FOUR MONTHS ago and YOU STILL CAN'T BUY A SERVER WITH THIS CPU IN IT. Until Intel approaches AMD's impressive 'king of paper launches' award, you'd be well advised not to complain about limited availability of the E8xxx. As pointer pointed out (sorry for the pun!), the CPUs are possible to find; they are just in extremely high demand right now. You'll recall the same thing happened right after the Conroe launch.

Oh, how true. Besides G, the regulars on this site predicted this would happen months ago. Further, with INTC's enormous lead in both production and capacity, compounded by AMD's drop-dead failures, is it any wonder they released 45nm for the desktop at all? After all, a Q6600 alone can bury any product AMD has to offer, be it present or future releases, with a mild overclock.

Yes indeed Sparks, in fact I see Intel's lead increasing over 2008 and into 2009. A 45nm K10 with more L3 cache and clock speeds at best topping out at ~2.8GHz, as predicted by JumpingJack, Guru and some of the others here, will barely stand up to a Q6600 - and that's not even counting the 45nm quads and Nehalem! But we don't need to worry - I'm sure AMD will point out the exceptional energy efficiency of the 45nm parts, since that's all customers want.


You know this as do I, because I am typing on one at 3 GHz right now. You’ve been clocking one as long as I have. In fact, I’ll be willing to wager that the ‘G0’ stepping took the Intel engineers a bit by surprise, initially.

That's right, I've been using a Q6600 on a P5B-Deluxe motherboard since last April. Before that I had the E6600, which was (and still is) an excellent CPU. Mine runs at 3GHz 24/7 despite being the older B3 stepping. I've had this one as high as 3.3GHz with increased voltage and temperatures. As I understand it, you can get another 300 or so MHz out of the G0 stepping, for a maximum overclock of 3.6 or 3.7GHz on air cooling.

I've been building my machines since the late eighties; I have never seen anything clock like these 65nm parts, on straight air, no less! I can't wait to get my hands on that unlocked, Hafnium bad boy, the QX9770.

I haven't been building PCs that long (think early 90s - I built a killer Pentium machine for running Wing Commander 3, I could run it at 640x480! My friends with 486s that could only handle 320x240 were quite jealous!) but the 65nm and now 45nm parts are the best for overclocking that I've seen in a long while. (Remember the Abit BP6 board? That would run a pair of Celeron 300A CPUs at 450MHz perfectly!) They just overclock superbly and effortlessly. I've got 4GHz on air with my new E8400 without pushing the CPU at all.

Look, we all knew the 2008 server assault was coming. What we didn’t know, naturally, is how it was going to be orchestrated. The bottom line, which was common knowledge months ago, was AMD’s last remaining market strength was in servers. What we didn’t know is how aggressively INTC was going to go after it. INTC will ‘struggle (!?!)’ past these minor supply delays, and the IDIOT conspiracy theorists will weave a plethora of fear, uncertainty, and doubt.

Intel's attack on the server market has been relentless, and it shows. First Woodcrest, then Clovertown, now Harpertown. The release of these products simply means that Intel owns the SP and DP server markets. Whenever AMD finally gets around to shipping Barcelona, they'll do okay in MP servers against Tigerton, provided they can launch at the promised 2.5GHz - but this will end as soon as Nehalem arrives for MP servers in 2009. (Besides, MP servers are a small part of the market - SP and DP servers make up the majority of sales.)


Meanwhile, as the ‘Scrappy Little Company’ scrabbles to put lipstick on the Barcelona pig and sell faulty, broken products, INTC will kick their teeth in on the server front during this half of the year. In fact, they already have.

In the second half, it’s game over. Nehalem will own HPC with eight cores and 16 threads, Christ! (I think Nehalem has IBM pissing their Brooks Brothers suits.) Haven’t you been saying ‘AMD BK in 2008’ all along? With nothing to compete with, as far as I can see, it may just happen, especially with 18 months of sequential financial losses.


Put simply: Intel now owns the SP, DP and MP server markets. Barcelona will give Intel some competition in MP servers but is too little too late in SP and DP servers. When Nehalem hits it's game over for AMD in servers.

I don't think AMD will go BK, but the days of them competing with Intel at the high end are over. AMD's CPU operation will be somewhere between posting a small loss and posting a small profit, depending on how the CPU market does. Long term, they should be slightly profitable if ATI's next-generation products perform well vs. Nvidia.


Frankly, I believe, it was over a year ago. All the rest of this endless noise has been bullshit and nonsense. We haven’t been wrong yet, have we?


You're absolutely right. Abinstein and co. claimed Barcelona and K10 would be much faster than Intel's products. Randy Allen was out boasting it would be 40% faster than Clovertown (search Clovertown on YouTube for some choice videos with Mr. Allen's claims!). We called BS, claiming that K10 would simply not be enough to regain the performance leadership from Intel. Then, as you mentioned, we know how that turned out.

Also, in a strange turn of events it turns out the 8800 GT I bought for my second PC is actually a good deal faster than the 8800 GTS 640MB I had in my main Q6600 machine. So I ended up switching the cards over: PC 1 has a Q6600 at 3GHz and an 8800 GT, PC 2 with the E8400 at 4GHz and the 8800 GTS 640MB. :)

Anonymous said...

"I'll state clearly up front that is an assumption on my part that AMD will be using ALD for metal gate processing when they do finally implement."

I don't know for sure, but I believe this is not the case. In all likelihood IBM is using MOCVD for their high-k, and I also suspect they are using MOCVD (and/or some sputter) for their metal. Keep in mind that with the up-front flow they are likely using a nitrided metal, and this makes it more likely they would take the MOCVD approach. The uniformity requirements are also not as tight on the metal as they are on the high-k, another reason why I suspect they are using MOCVD. And as the films are a bit thicker on the metal side, throughput (and thus cost) becomes an issue with ALD... even keeping in mind that IBM's approach is likely a thin metal film topped with poly-Si.

"Can someone brighter than me please explain what the devil "the AMD percentage of combined total Intel/AMD ASP and AMD's volume share" means?"

It means Scientia needed a way to make the ASP changes appear small on a graph... from an analytical point of view it is ABSURD! If he wanted to normalize, he could look at the direct ratio of AMD to Intel ASP, or start at 1 and normalize the drop in ASP for each over time... but the whole adding thing is just make-believe, driven by someone who doesn't understand and just wants to play with numbers to spit out something that supports his conclusion.

Robo's ASP graph is about as clear as it gets, and it's painfully obvious what is going on... while folks can argue that it is the relative drop that is important, you can see that in the graph too. Intel's ASP has dropped about 10% over the last 8 quarters, and most of that was in Q2'06 with a little bit in Q3'06. AMD's has dropped ~33%, and the drop has been continuous over the first 6 of the 8 quarters. How else can you spin these as being equal? Make up a metric that gives a relatively small absolute value and plot it on a graph with a decent range to minimize the drop.

Given the relative starting ASPs, Scientia's "metric" allows AMD's ASP to fall at approximately 2X the rate of Intel's and still come out flat (since Intel is ~2/3 of the denominator and AMD ~1/3)... couple that with some creative graphing and you can now BURY a 3X faster decline in ASPs.

Hope this enlightens you as much as Scientia's graphs did! :)
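For the curious, a few lines of arithmetic show exactly how that metric buries a decline; the ASPs below are made up for illustration, not the actual Mercury Research numbers:

```python
# Made-up ASPs: Intel falls 10%, AMD falls 30% over the same period.
intel_asp = [130.0, 117.0]   # assumed Intel ASP, $ (start, end)
amd_asp   = [ 75.0,  52.5]   # assumed AMD ASP, $ (start, end)

# Scientia-style metric: AMD's share of the summed Intel+AMD ASP.
metric = [a / (a + i) for a, i in zip(amd_asp, intel_asp)]

amd_drop    = 1 - amd_asp[1] / amd_asp[0]     # the real decline in AMD's ASP
metric_drop = 1 - metric[1] / metric[0]       # what the "metric" shows

print(f"AMD ASP decline:   {amd_drop:.0%}")
print(f"'metric' decline:  {metric_drop:.0%}")
```

With these inputs, a 30% collapse in AMD's ASP shows up as only about a 15% dip in the "metric" - put that on a chart with a generous y-axis range and it looks nearly flat.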

Anonymous said...

Giant (and other INTC fanatics, of course, to which I cheerfully subscribe), check this out for your ultimate pleasure: the Asus P5E3 Premium. How do you turn a Spider into a cockroach? HA HA HA HA HA HA!


Poor, poor AMD, their day in the sunshine is over.

Orthogonal, you just keep cranking out those lovely chipsets, buddy!


http://enthusiast.hardocp.com/article.html?art=MTQzNSwxLCxoZW50aHVzaWFzdA==

SPARKS

Unknown said...

What a superb board Sparks! My P5B Deluxe is still great for me, it's been the best board I've ever used.

You've got your eyes set on the QX9770, but is it really worth buying one in late February/March when Nehalem will be out before the end of the year? I've decided to wait for Nehalem, and skip Yorkfield - my Q6600 is just fine for me.

-GIANT

Anonymous said...

"when Nehalem will be out before the end of the year?"

Desktop? Or are they doing server first? I would be a bit hesitant to be an early purchaser of this, as there are significant changes, and I would prefer others to flush out the issues. There could also be some growing pains on the board or BIOS side.

I don't see the IMC and QPI making that much of a difference on a single socket desktop setup, I think Nehalem's success (in mobile and desktop) comes down to the core improvements.

Anonymous said...

“is it really worth buying one in late February/March when Nehalem will be out”


I have been thinking about this for quite some time. Really, when it comes down to desktop performance and the majority of server requirements, the FSB has held up quite well against AMD's HyperTransport and IMC. If you refer to the P5E3 Premium article, you'll see the memory bandwidth is not too far from the AMD 6400's excellent numbers. Then again, INTC has proven that a processor can be faster without the superior benchmark.

I suspect that with a 2000MHz FSB coupled with a 4GHz processor and DDR3-1866 memory, the OLD FSB may reach what was considered impossible, the 9000 MB/s milestone. As the current benchmarks show, the QX9770 and X48 are quite capable of getting into the mid-8000 MB/s range. That said, if this benchmark is the only fly in the ointment, let it be a little flavor in the stew. Besides, it won't hurt the meat of the combo, which is insanely fast anyway.

I realize the old FSB's time has come, and INTC waited for that time wisely, I might add. I also know this will be my last performance board based on the FSB. Sure, I get pretty nostalgic about these things, but that's why we will always remember getting that Celery 300A clocking like no tomorrow; it was fun. Additionally, there will be plenty of time to adopt Nehalem-based architecture; eventually, I suspect, when the time is right, I'll hear the call.

But for now, I’m certainly not ready yet.


SPARKS

Anonymous said...

Hey fella’s ,

I'm pretty much a Cro-Magnon in blue jeans, IBEW LU#3, NYC foreman, turned technogeek by my contemporaries, protozoan by GURU standards, nevertheless willing to adapt and learn.

That said, I’m a New Yorker first, -----


GIANTS WIN!!!!

HOOO YA!!!


SPARKS

Orthogonal said...

That was a mighty impressive game. I was truly shocked to see them pull it off, not that I really cared who won anyway. Although I'll be glad to see them all leave and let things get back to normal. This place has been crazy the last couple of weeks; you can't go anywhere without someone trying to gouge you. We even had to send some visitors home early since no manager was willing to approve a $1000+ hotel room night. ;)

Anonymous said...

Sparks,

As a former NY'er, you'll be happy to know that Guru is a Geeee-man fan! The game was about as perfect as it gets.

And as I type this, listening to a replay of Brady saying '23-17, I wish he gave us credit for a few more points...' Plaxico owes an apology... to the NY defense!

How come no one called Brady cocky for that comment?

Anonymous said...

"19-0: The Historic Championship Season of New England's Unbeatable Patriots"

http://www.amazon.com/19-0-Historic-Championship-Englands-Unbeatable/dp/1600781500/ref=pd_bbs_sr_3?ie=UTF8&s=books&qid=1202100711&sr=8-3

Gotta love the Boston Globe... for those who waited on this purchase I hear there may be a price reduction!

"New England Patriots 19 Game Winning Streak"

http://www.amazon.com/England-Patriots-Game-Winning-Streak/dp/B0007VSAKY/ref=pd_bbs_8?ie=UTF8&s=sporting-goods&qid=1202100711&sr=8-8

Did they start counting preseason games?! If you look at what customers also bought, it is littered with floor and grout cleaners! Classic!

Sorry for the hijack, Robo; the only thing more annoying than the AMD fanboys is the New England (all sports) fanboys!

enumae said...
This comment has been removed by the author.
Anonymous said...

"AMD responded in saying that in order to fulfill consumer demand, the company has made a decision with its partners to give launch priority to triple-core CPUs in the first quarter of 2008. The Phenom 9700 and 9900 will be launched in the second quarter of 2008."

http://www.digitimes.com/news/a20080204PD200.html

I have to give AMD bonus points - they have a story and they're sticking to it... Is ANY REPORTER actually going to interview an OEM and expose this crap? The press is just flat-out LAZY...

Oh and also in the article...

"Other than Phenom 9900, whose launch date is not yet set, all other CPUs are currently scheduled to launch at Computex 2008 in early June."

TLB fix on schedule? Or, as I originally hypothesized, is it being used as a SCAPEGOAT for AMD's issues trying to get the clocks up on Phenom? The xx50's supposedly have the TLB issue fixed and those are launching, so why isn't the 9900 launching?

Rather than just having reporters take dictation... can someone, anyone, ask AMD's management a reasonable question?

Anonymous said...

"It wouldn't surprise me if Intel spends 50% more for X86 R&D than AMD. However, we can see that this is by no means "billions more" as lex suggested. It's about $660 Million more per year."

Where to start... the calculations are absurd and just plucked out of the air... let's assume this... let's assume that... The "us" part of "let's" makes it sound like these assumptions have some basis in reality and are not the delusion of a single person with an agenda.

Let's assume Intel's flash R&D in 2008 will be significant (you know, because the spinoff of IM Flash won't lower Intel's R&D in this area!?!?)

Let's just pluck an Itanium R&D figure out of the air? Then, just as arbitrarily as the initial number, let's add 25% to it to take more money away from x86 - and support the desired conclusion.

Let's assume none of the R&D in the Itanium area OVERLAPS with X86?

And by FAR THE BEST ASSUMPTION... let's assume AMD's ratio of chip to chipset R&D is 1:3... first off, it's 1:2... but luckily he carried the mistake forward into his calculation so it had no net impact... BUTTTTTTT... do you think any of that spending is on, I don't know, DISCRETE GRAPHICS? Let's just lump this in with chipsets? Do you think Intel has the same level of discrete graphics R&D?

The guy has now officially moved into Sharikou-ville. He will twist and turn numbers to generate the conclusion he wants, rather than being intellectually honest, analyzing the data and THEN coming up with a conclusion!

I mean, the idiot just stated that AMD managed to cut its Q4 losses despite falling ASPs! NEWS FLASH - for the last 2 quarters AMD's ASPs haven't fallen! They've actually increased slightly, but hey, why let some actual data get in the way of a good story! But I guess the story (and I do mean story) sounds better when you stack as many things against AMD as you can... and then make it sound like they pulled off some sort of miracle with everything going against them.

(not that I'm bitter about it)

Anonymous said...

Warning To AMD:

BE AFRAID, VERY AFRAID.


http://www.fudzilla.com/index.php?option=com_content&task=view&id=5524&Itemid=1


SPARKS

Anonymous said...

Sparks, I can confirm that the 45nm dual cores are selling like hotcakes. Microcenter in Santa Clara got 70 E8400s in this morning and I picked one up. I know they were sold out by 5:00 this afternoon because my friend came away empty-handed. Word is getting around that the E8400 is very overclockable and a great value. To be honest I'm a little disappointed in mine; I had to bump the voltage to 1.25V just to keep the system stable @ 3.6GHz. Still, it's running very cool, just 45 degrees at max load during Prime95.

I don't know if this is a great sign for Intel financially just because retail processors are such a small segment of the market. It sure isn't bad news though.

Anonymous said...

“To be honest I'm a little disappointed in mine, I had to bump the voltage to 1.25V just to keep the system stable @ 3.6 GHz.”

Nah, get a good blower, stop being a process guy for a second, pump up the Vcore to 1.4, and you’ll hit 4 Gig with some GOOD memory and a P35 or better. 1.25V ain’t squat, put the Hafnium to work, Bro.

However, I know someone here can do this. Please, instead of getting your nuts all twisted up reading ShariCooCoo and Dementia* web trash, can someone please do the numbers on the D.P.W. and approximate cost per die on these things?

They’ve got this selling at $211:

http://www.mwave.com/mwave/Skusearch.hmx?scriteria=BA24501



They’ve got this selling at $275:

http://www.mwave.com/mwave/Skusearch.hmx?scriteria=BA24052


This is the $64 question; it’s the same speed ET Al!

It’s not like I’m asking for strongly enhanced double patterning calculations or anything. Jeese!

I’m thinking ASP here.


SPARKS



*Dementia is usually caused by degeneration in the cerebral cortex, the part of the brain responsible for thoughts, memories, actions and personality. Death of brain cells in this region leads to the cognitive impairment which characterizes dementia.

The cost of dementia can be considerable. While most people with dementia are retired and do not suffer income losses from their disease, the cost of care is often enormous. Financial burdens include lost wages for family caregivers, medical supplies and drugs, and home modifications to ensure safety. Nursing home care may cost several thousand dollars a month or more. The psychological cost is not as easily quantifiable but can be even more profound. The person with dementia loses control of many of the essential features of his life and personality, and loved ones lose a family member even as they continue to cope with the burdens of increasing dependence and unpredictability.

Anonymous said...

BTW FYI,

http://www.xbitlabs.com/articles/cpu/
display/intel-wolfdale_12.html#sect0

Anton has got the Vcore up to 1.46!

Fry'em up baby!

SPARKS

Anonymous said...

"Warning To AMD:

BE AFRAID, VERY AFRAID."

I think you are confused, Sparks... AMD delayed analyst day last June and 'CHOSE' to do this to show off their DTX platform in July (it had nothing to do with Barcy delays, really!). As predicted by some experts, DTX has taken a foothold and is steamrolling in the low-cost space.

It is highly unlikely, given the lead AMD has with DTX and the strong, broad industry support, that Diamondville will be able to stop that steamroller.

Anonymous said...

Guess what? I did a search on DTX. There's plenty for sale!

These are SOOO hot!


http://shopping.msn.com/reviews/shp/?itemId=896548523

SPARKS

Anonymous said...

"can someone please do the numbers on the D.P.W and approximate cost per die on these things?"

Unfortunately the online die count calculator I use has been taken down, so I'll wing it. Keep in mind this could be off a bit - folks feel free to correct me.

~400-450 potential Penryn dual cores per wafer
~$4000 per processed wafer (45nm)

If yields were perfect this would be ~$10 per dual core + packaging costs ($3-$5?). Assuming yields are closer to 60%... raw variable cost is probably closer to ~$20. Then you have to throw in all the misc costs - R&D, overhead, etc...

As for the various clockspeeds - the finished wafer cost is the same; it is simply a matter of binsplits. Taking yield out of the equation, the production cost of a wafer full of 8400's would be the same as one full of 8500's or 8600's.

All this is pretty crude... Also, if I had to guess, I would say the pricing of the various bins is likely driven more by sales and marketing (i.e. what the demand will be at various speeds) than by the binsplit %'s... that is, if an 8500 is priced 25% more than an 8400, I wouldn't assume that is based on the relative ratios (splits) that Intel gets out of a wafer.
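
The back-of-envelope math above can be sketched in a few lines. All inputs are the rough guesses from this comment (not confirmed Intel figures), and the $4 packaging cost is an assumed midpoint of the $3-$5 range:

```python
# Sketch of the cost-per-good-die estimate above. Inputs are the
# commenter's rough figures, not confirmed Intel data.

def cost_per_good_die(wafer_cost, gross_dies, yield_frac, packaging=4.0):
    """Variable cost per sellable die: processed-wafer cost spread over
    the good dies, plus an assumed per-unit packaging cost."""
    good_dies = gross_dies * yield_frac
    return wafer_cost / good_dies + packaging

# Perfect yield, no packaging: the ~$10/die figure quoted above.
ideal = cost_per_good_die(4000, 425, 1.0, packaging=0.0)

# 60% yield plus assumed $4 packaging lands near the ~$20 estimate.
realistic = cost_per_good_die(4000, 425, 0.6, packaging=4.0)

print(round(ideal, 2), round(realistic, 2))  # ~9.41 ~19.69
```

This ignores R&D and overhead, exactly as the comment notes; it only captures the raw variable cost.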

Tonus said...

Skulltrail review at Tech Report.

No doubt this is priced out of reasonable range for many of us, but the specs are enough to warm my geek heart. I admit that there's a part of me... the part with more dollars than sense, the part that is willing to overspend for performance he doesn't *really* need... that wants to whip out a credit card and do something utterly unreasonable. But I figure I'll jump off of that bridge when I reach it.

Anonymous said...

OK, Scientia has dipped into the sharikou loon phase - he claims he won't intentionally mislead but then says things like this:

"First of all, Intel carries almost nothing from 45nm to 32nm. This is because a different chemistry is required to prevent the deposition surface from reacting negatively with the immersion fluid."

That crazy immersion liquid that is required? It's called di-hydrogen oxide, or more commonly WATER! Oh and by the way, the immersion fluid NEVER TOUCHES THE "DEPOSITION" SURFACE... it only touches the photoresist! My goodness, this is far beyond the "hey, I'm not a person with years of experience in this area" stage to the "I'll just make shit up to sound smart" stage! In future gens the liquid medium may change to help the effective NA (numerical aperture), which would help print smaller features, but even then the liquid will only touch the photoresist.
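
For background on the NA point: optical resolution follows the Rayleigh criterion, R = k1 * lambda / NA, and NA = n * sin(theta) can only exceed 1.0 when the medium between the lens and the resist has a refractive index above 1 (water at 193nm is ~1.44). A rough sketch, with illustrative textbook numbers rather than any specific tool's specs:

```python
# Rayleigh criterion sketch: why immersion (n > 1) prints smaller features.
# k1 and NA values below are illustrative, not actual scanner specs.

def min_feature_nm(k1, wavelength_nm, na):
    """Smallest printable half-pitch per the Rayleigh criterion."""
    return k1 * wavelength_nm / na

dry = min_feature_nm(0.30, 193, 0.93)   # dry 193nm scanner, NA capped below 1
wet = min_feature_nm(0.30, 193, 1.35)   # water immersion allows NA > 1

print(round(dry, 1), round(wet, 1))  # ~62.3 vs ~42.9 nm
```

Same wavelength, same resist contact; the water just raises the achievable NA.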

Oh and there is this minor thing called high K/metal gate which in its current incarnation is extendable to 32nm... as opposed to AMD who will be trying to IMPLEMENT it for the first time. For Intel 32nm will actually be VERY straightforward (much like AMD's move from 65nm to 45nm). For the large part it should be mostly a shrink of the 45nm process - yes they will need to move to immersion (which by the way they have been working on for longer than AMD) and there may be some changes/tweaks to the backend dielectrics, but for the most part with the high K solution they can get back to scaling the gate oxide for performance again. For those with background this gets back to the earlier days of process shifts where it came down to shrinking the litho and scaling the gate ox. Strain, salicide, and various other tricks which have greatly increased the complexity were driven in part by the fact that gate oxide scaling was running out of steam and you had to get performance gains elsewhere.

I know I've said this in the past, but Scientia's latest comments are truly comical and I have no clue how he makes this stuff up.

"Secondly, AMD has an obvious fall back position if 32nm is late since they can shift to 45nm with high-K and still see improvements."

First he assumes 45nm will go swimmingly (as apparently 65nm has gone?). 2nd, this simple "shift" is not something like flipping a switch... unless AMD is planning to run their 45nm process for 3-4 years (which would mean 32nm is VERY LATE)... there is no economic benefit in "switching" to high K late in a tech node! (of course AMD at times tends not to base decisions on economics, so maybe it is naive for me to think they will consider that aspect)

We are talking new masks, potentially a new design (at minimum new circuit layouts), retargeting many process steps, some new tooling (which is probably the easiest part), and oh, they will need to go through a whole suite of certification processes as high K introduces a whole new set of reliability issues that need to be tested and verified... but hey, what do I know - Scientia really seems to have a firm grasp on this whole technology thing!

Serious question:
I know he is a fan and wants to put AMD in the best possible light, but what good does he do just making stuff up? The things he's saying here are not things that can simply be misinterpreted or read on the web - this is just stuff getting plucked out of the air. Why would he do that? (Seriously) Why comment on process technology when he clearly knows nothing about it?

It's also unfortunate that those who don't have the background don't know enough to ask why or ask for supporting data or even have the ability to get exposed to a different point of view. Of course anyone pointing out his obvious mistakes is either censored, or he twists the comments and says he is being misinterpreted or has to make a minor correction.

Anonymous said...

“We are talking new masks, potentially a new design (at minimum new circuit layouts), retargeting many process steps, some new tooling (which is probably the easiest part), and oh they will need to go through a whole suite of certification processes as highK introduces a whole new set of reliability issues that need to be tested and verified”


Ok, let me get this straight. AMD, simultaneously, tried a new process and a new architecture with Barcelona, and it fell on its ass. So, with another stroke of brilliance, they’re going to do it again?

Hmmm, perhaps the issues that have plagued Barcelona are so irrevocable that they need to start with a fresh new design? From what I’ve read there will be no future steppings with Barcelona, currently. It sounds like design/process capitulation to me, in any case. Therefore, they may have no other alternative than to run the whole gamut all over again, despite the risks you mentioned.

Wow, reading between the lines here, it seems this thing must really blow. They are going to chuck the whole Magilla in lieu of the risks!

To quote a famous President, “There you go again”.

I guess it’s, “The Smarter Choice”

SPARKS

hyc said...

As I recall, AMD was already ramped on 65nm K8s, so Barcelona was not a whole new process at the same time as a new architecture.

And as for dropping 65nm and moving to 45nm - damned if you do and damned if you don't. You're giving them flak for trailing Intel to 45nm, you're giving them flak for trying to move to 45nm. Screw it. You're just giving them flak.

A lot of you folks pat yourselves on the backs for posting balanced comments here, especially sparks, but you're no different from any other crowd of fanbois.

Anonymous said...

"So, with another stroke of brilliance, they’re going to do it again?"

No they won't and that's my point - Scientia is crazy to think 45nm high K is a back up plan, it is almost like a brand new process node transition. That is not something you'd do (assuming you are sane of course) unless you were planning to run that process for a few years... so again it would make no sense to do this at the tail end of 45nm, unless 32nm is **VERY** late.

"From what I’ve read there will be no future steppings with Barcelona, currently."

There have been quite a few rumors that B3 is the last 65nm stepping, and frankly this, if true, is the most intelligent decision AMD has made in some time.

45nm should start up reasonably well for AMD - they aren't changing much at all. For all the talk of immersion litho, quite frankly it is just an alternate method to pattern wafers - it is not something that from an integration perspective is going to muck things up. Yes it is not a no-brainer, you will need new resists, work on cleans, have to tune masks in terms of OPC rules, etc... and I don't mean to completely de-value the importance of the technology, but frankly how is this realistically different than what has been done in lithography before? (for example the switch from 248nm to 193nm litho)

As a result 45nm should be reasonably behaved, as it will essentially be an optical shrink of 65nm. As has been discussed it will not have great performance gains - you will see active power benefits but likely minimal clockspeed benefits over what 65nm SHOULD have gotten. You will see clockspeed gains and many will hail this as AMD being great, but it will represent fixing things that they could not on 65nm (so in my view this is a hollow victory)

For example if 45nm churns out a 2.8 or 3.0GHz Phenom, is this a demonstration of a great 45nm process... or simply achieving what was originally EXPECTED for the 65nm process?!? I'm 99% certain the ignorant folks will be saying it's because of immersion litho and ULK ILD's! Keep in mind the lowest original expectation for 65nm was 2.8GHz (most thought at least 3.0 or even 3.2!), so if you believe AMD's 45nm ~25% improvement, it should be in the 3.4-3.8GHz range. In reality 45nm will likely get to where 65nm was originally expected to get, and many with short term, selective memories will be singing AMD's praises.
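
The arithmetic behind that last claim can be sketched quickly. The inputs below are just the expectations quoted in this comment (the original 65nm targets and AMD's claimed ~25% gain), not any official roadmap numbers:

```python
# Applying AMD's claimed ~25% 45nm improvement to the original 65nm
# clockspeed expectations quoted above. Illustrative arithmetic only.
expected_65nm_ghz = [2.8, 3.0, 3.2]  # the range of original 65nm expectations
claimed_gain = 1.25                   # AMD's quoted ~25% improvement

projected_45nm = [round(f * claimed_gain, 2) for f in expected_65nm_ghz]
print(projected_45nm)  # [3.5, 3.75, 4.0]
```

Applying the full 25% to the low end of the range (2.8-3.0GHz) gives roughly the 3.4-3.8GHz ballpark the comment cites; taking the more optimistic 3.2GHz expectation at face value would imply 4GHz.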

Anonymous said...

"but you're no different from any other crowd of fanbois."

There is FAR MORE actual data and knowledge on this board than Dementia's... you don't see absurd process technology claims with no supporting data, or a twisting of information that is not understood in order to support pre-formed conclusions.

Yes people have bias here (all people have bias), but at least there are some actual facts and reasoned logic here. Oh and everyone is allowed to comment as Robo has the integrity and self confidence to allow comments which may not agree with his views!

Tonus said...

"A lot of you folks pat yourselves on the backs for posting balanced comments here, especially sparks, but you're no different from any other crowd of fanbois."

I don't think that sparks has ever hidden his disgust at the awful job that AMD management has been doing, nor his desire to see Intel continue to succeed, as he owns Intel stock.

I don't know if he considers himself biased or balanced, but he has made clear where he is coming from. I don't see him as "especially" attempting to pass off bias as balance. He strikes me as one of the more honest and up-front people posting on these blog comment sections.

Orthogonal said...

If you look at most of the posts on this blog, you would see that they aren't giving AMD flak (except on clear missteps in execution), but more giving flak to those who try to take the facts around each company's financial/technological/product position and make predictions and conclusions that they have no background or expertise to make.

Unknown said...

AMD PHENOM SHIPPING WITH DEFECTIVE CORES?

I think THG should investigate this issue, it is popping up in forums everywhere.

There has been much speculation over why AMD has not released Phenom parts at speeds greater than 2.3GHz. The going perception was that the TLB errata was a big contributor, and possibly an immature manufacturing process. Unfortunately, the problem is actually much deeper than that. Thanks to the release of the Phenom 9600 Black Edition, the problems with Phenom have become painfully obvious. Plainly stated, AMD is selling a busted chip, and many people are getting ripped off, and I think places like THG and OCguide need to call them out on this. Look around the net... it is a huge problem... and one I wouldn't be surprised if AMD eventually got sued over. This problem also explains the true reason why AMD is going to release a Tri-Core chip.

The problem may seem trite, as purchasing a 9600BE is a gamble. But the problem is not just with the Black Edition, but with all current B2 Phenoms. Most of them cannot be overclocked; yes, this is true and a well-known fact. However, there is also a growing number of Phenom buyers who cannot run stably even at stock clockspeeds.

I recently took a chance on one of these chips and have had the same experience that many others on the internet are having. Here is my experience;

My configuration is:

AMD Phenom 9600 Black Edition cooled by a Zalman 9700
Gigabyte GA-MA790X-DS4 Socket AM2+ motherboard
4GB Gskill 5-5-5-15 DDR2-800 Memory (4x1GB Sticks)
3x74GB WD Raptors in RAID 0, Primary Drive
500GB Seagate PMR Hard Disk
eVGA 8800GTS 320MB
Vista Ultimate 64bit

My Phenom experience:

Upon installing the Phenom in my system, it booted up fine without a problem. I have not OC'd the chip at all at this point, simply running it at stock settings. Once it booted into Vista, I played around with it for a bit with no issues. I then decided to do the first real test, which was to see what the Vista rating on the processor was. I clicked on the "refresh my score" link... and the testing began. During the test I got my first BSOD. The details read...

"A clock interrupt was not received on a secondary processor within an allocated time. Error 0x101"

I rebooted the system and tried again. This time the rating completed without a hitch and showed a glowing 5.9 rating for the processor. About 20 minutes later, the same error happened again.

This happens at stock speed. Any attempts to overclock either result in the BSOD or Vista won't finish booting at all.

Over the next two days I fought with this problem to no avail. I tried bumping the Vcore, the NB voltage, tried setting the RAM down... everything I could think of. Nothing.

I then did some research online and found some interesting info on the subject...

http://forums.amd.com/forum/messag [...] TARTPAGE=1

http://www.xtremesystems.org/forum [...] p?t=175878

...a simple search, just google "BSOD clock interrupt Phenom", there are no shortage of hits...

Upon researching this problem I found that many Phenom users are having this same problem. Many only when trying to overclock... but also many when trying to just run their chip at stock speeds. The problem points to one thing... a partially or wholly defective third core. Apparently, many people have had to use AMD Overdrive to purposely *underclock* the 3rd processing core (Core2, no pun intended) by lowering the multiplier specifically for that core, in order for the chip to run stable. The rest have had to do so in order to get any sort of stable overclock beyond 2.4GHz. Does this sound familiar?? It should, as it is the frequency above which all Phenoms were yanked by AMD. The truth appears to be coming out... AMD doesn't have Phenoms above 2.4GHz available because one of the cores is flawed and won't allow for a stable chip at or above 2.4GHz. The errata appears to be more spin than anything... let the masses feast on the errata as the underlying issue while the real issue, a manufacturing process that is quite flawed at this point, tries to fly under the radar. If AMD didn't want this flaw to be exposed, as I'm sure they didn't, they should have never released the 9600BE.

I tested this for myself and came to the same results. Whenever I raised the multiplier on cores 0, 1 and 3 I could go past 13x with no problems at all on stock voltage. The very instant I tried playing with core 2, BSOD. So my problem was the same as all the others... a bad 3rd processing core. But mine won't even run reliably at stock speeds...

I then decided to lower the multiplier to see at what frequency the 3rd core would actually run reliably. I first lowered it to 10.5 from the stock 11.5 (a frequency of 2100MHz) and all stability issues seemed to vanish. I played with the system for the better part of a day and had no issues whatsoever. I stressed it rather intently with some video encoding projects and not a problem to be found. I then decided to push a little bit farther and raised the multiplier for the 3rd core to 11 (a frequency of 2200MHz). Unfortunately at this speed the random BSODs made a re-appearance rather quickly and I promptly re-adjusted the multi back down to 10.5. The bottom line was that the 3rd core can only run stable at or below 2.1GHz while the rest of the chip was capable of 2.6GHz+ on stock voltage.
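
The multiplier-to-frequency math in that post is simple: AM2+ Phenoms derive the core clock from a 200MHz reference clock times a per-core multiplier. A tiny sketch (the function name here is mine, not AMD Overdrive's):

```python
# Per-core clock arithmetic for AM2+ Phenoms: core clock = 200MHz
# reference clock * per-core multiplier.
REF_CLOCK_MHZ = 200

def core_freq_mhz(multiplier):
    return REF_CLOCK_MHZ * multiplier

print(core_freq_mhz(11.5))  # stock Phenom 9600: 2300.0 MHz
print(core_freq_mhz(10.5))  # underclocked 3rd core: 2100.0 MHz
```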

So for the money I spent, I got a functional and overclockable tri core cpu with an additional crippled core. This leads me to believe that the tri-core cpus will be capable of 2.6-3.0 speeds quite easily. Considering this issue is becoming more well-known by the day, AMD is facing a ticking timebomb in terms of when the major sites like THG will have a field day with this.

Bar None... AMD should not have released Phenom, much less the 9600 Black Edition.

It seems this problem is known by newegg already, as they appear to be granting RMAs rather unconditionally with these chips. I am RMAing mine currently, and hoping that the next one I get won't have the problem so severely. The word needs to spread and people need to not get ripped off. Hopefully this will inspire THG to do some investigating...


http://www.tomshardware.com/forum/248265-28-phenom-exposed-shipping-flaky-cores

Unknown said...

Sparks sounds as if he was eager for someone here to really OC their E8400! With my P5B-E motherboard I maxed out at 393 FSB, resulting in a rather mediocre 3.53GHz. I was certainly not happy with such a pathetic overclock with my new hafnium infused E8400!

So determined to get a better OC, I took the Q6600 out of my main computer and put it in the second system with the P5B-E. It's running with no worries at 3GHz in that system. Now since the P5B Deluxe is a MUCH better overclocking board, I was very interested in seeing how far I could go. I pushed the processor voltage to 1.4V and had the FSB at 500MHz in no time at all, resulting in a rather nice clockspeed of 4.5GHz! In order to reach this though, the temperatures were pretty high. I also had to pump a bit more voltage into the northbridge and I installed a small fan on the northbridge to keep it a bit cooler. In the end I wasn't exactly comfortable with the northbridge voltage at 4.5GHz, so I reduced the FSB to 475MHz which resulted in a still very nice 4.275GHz as my 24/7 stable OC - it finished a 24 hour session of Prime95 with no problems. With the reduced clockspeed I also reduced the voltages a little, resulting in cooler temperatures. At full load with both cores at 100%, maximum temperatures weren't exceeding 65C after a few minutes. This is a testament to the effectiveness of the 45nm high-k hafnium process I'd say!

-GIANT

Anonymous said...

“A lot of you folks pat yourselves on the backs for posting balanced comments here, especially sparks, but you're no different from any other crowd of fanbois.”


Tom Cruise, “I’m not Markinson. Are you Markinson?”

Whoa, Whoa, Whoa, easy wrangler. Let’s get a couple of things very clear here.

1)Never, ever, will I post anything “balanced” for the sake of someone else’s opinion. It’s patronizing and condescending. For the record, I definitively and categorically abhor AMD’s way of doing business as they literally SHIT on their shareholders and partners, while filling the web (and the market) with a year’s worth of absolutely inferior GARBAGE.

2)I am on this site offering a clear message to all, as many of my friends have lost “hundreds of thousands” of real money believing AMD’s puke. I am backed up with over 6 BILLION reasons.


3)Intel Fanboi? I’m a goddamned poster child for an INTC fan! I keep a well lit curio cabinet of my beloved INTC chips going back nearly 20 years in my office. I won’t touch a frig’in keyboard/computer with an AMD chip in it, let alone repair it! If my MOTHER bought an AMD machine that screwed the pooch, I’d let it DIE! Further, I have never claimed to be anything less.

4)You wish you had 1 TENTH the shares I have in INTC.

5)I have been GIFTED to become an accepted member of this site whose other members are BRILLIANT industry insiders. They have light years of hands on experience cooking up wafers to put FOOD on their tables. These are not guys who will blow smoke up your ass, pal. If you’re wrong about something you’ll get jumped and buried with hard core data and facts, backed up with specific and relevant references.


6)EVERYTHING that has been posted on this site has, in every way, come to clairvoyant fruition since its inception. I doubt ANY other sites could claim the same.

7)All said, If AMD had a serious product and/or clear technological advantage; I would buy a few hundred shares and hedge my position. I don’t see it happening, not now.

8)Intel got fat and lazy a few years back. That’s over. They are back on track; they are going to kick the shit of anyone who gets in their way. They’re on a serious roll, sweet cakes. Read ‘em and weep.

Jack Nicholson, “Are we clear? ARE WE CLEAR?”

Tom Cruise, “Crystal”

SPARKS

PS Thanks Tonus.

Anonymous said...

“Sparks sounds as if he was eager for someone here to really OC their E8400!”


Giant! Nice work! Well done! Now that’s what I’m talking about! This is precisely what I would like to do to the ‘Big Fella’ when I get my greedy little fingers on one. However, GURU’s “di-hydrogen oxide” may keep the little sweetie stable at the brilliant 4.5 level.

I doubt, however, water will keep the credit card cool that’s burning the back of my ass.

G. I think we’ve got a lovely new historic ‘300A’ O.C. darling in the making!

Ah, the thrill of it all.

KUDOS!


SPARKS

Anonymous said...

“No they won't and that's my point - Scientia is crazy to think 45nm high K is a back up plan, it is almost like a brand new process node transition.”

Understood.

“In reality 45nm will likely get to where 65nm was originally expected to get”

Interesting supposition; you, excuse me, they are in good company, as one of the INQ ‘experts’ describes in some form of voodoo math how AMD’s 45nm process may or may not clock.


http://www.theinquirer.net/gb/inquirer/news/2008/02/05/analysed-amd-nm-comeback-vs


SPARKS

InTheKnow said...

A lot of you folks pat yourselves on the backs for posting balanced comments here, especially sparks, but you're no different from any other crowd of fanbois.

I have a bias, and I think I've been upfront about where my bias lies. If that makes me a fanboi so be it.

I've never claimed to be balanced, just rational. I don't recall trying to defend positions that I couldn't support with data. Or taken positions that required creative interpretations of the data that is out there.

I'm not quite sure what got your knickers in a twist.

Personally, I would like to find a site that allows an open, uncensored discussion of process differences where knowledgeable AMD engineers post. So far I haven't seen hide nor hair of such a site.

I've been called on my mistakes and told I couldn't scratch up enough grey cells to have an intelligent conversation on some topics on this site, so I don't think you can say it is the warm and loving atmosphere that brings me back. :)

But it is possible to have an intelligent discussion about process issues on this site. The prevailing bias is irrelevant to me, it is the quality of the discussion that brings me back.

Anonymous said...

Yes, I can see that the comments here are soooooo accurate. For example, the childish snickering about DTX makes it clear that no one who posts here knows anything about DTX. Ah, but that doesn't prevent the fanboys from commenting. The last update on DTX says that none were expected in 2007. But, of course the lack of DTX motherboards in 2007 proves that the standard is worthless, right? No, the snickering, giggling, and heckling is so much more fun than the facts.

For the moment, DTX is still in its infancy and vendors are yet to release any products based on this standard.

However, motherboards should become available in early 2008, from the likes of Albatron, ASUS, Gigabyte and MSI, while cases, ranging from tiny desktop units to elegant home-theatre components, should become available from Cooler Master, SilverStone and Thermaltake, to name a few.


I would say that this comment will expire in June unless we get another status update on DTX. Now, if this expires with nothing new then you can all knock yourselves out with the sarcasm. But, maybe you should get in some more heckling now because if it proves true what will happen to all your fun?

InTheKnow said...

Scientia is crazy to think 45nm high K is a back up plan, it is almost like a brand new process node transition.

I think this comes from a very optimistic interpretation of IBM's description of their high-k/metal gate process. To read IBM's description of the process it sounds like you just replace the gate furnace with the new process tool and you are off to the races.

No one with any significant amount of process knowledge is going to buy that, but what percentage of the people out there really have the background to question the assertion?

Anonymous said...

"There is FAR MORE actual data and knowledge on this board then Dementia's..."

There could be far more actual data and knowledge (about Intel) on this board but there is also no doubt that this board is a clique and that misinformation (about AMD) thrives here equally well. There is also no doubt that this bias is both caused and encouraged by roborat. This board has some good points but claims to be far more than it actually is. It succeeds at what it wants to be: a fansite for Intel and nothing more.

InTheKnow said...

Yes, I can see that the comments here are soooooo accurate. For example, the childish snickering about DTX makes it clear that no one who posts here knows anything about DTX. Ah, but that doesn't prevent the fanboys from commenting.

I spent 3-1/2 years working in a board shop and owned 7 processes. I think I'm qualified to talk about the impact of PCBs on the market. I watched my company reduce an engineering staff from 23 to 7. I was one of the few fortunate enough to leave on my own terms.

Your credentials would be????

Like it or not, circuit boards are commodities (hence the huge reduction in the labor force where I was working), and there is nothing to prevent a board maker from adapting a successful form factor to accept Intel chips. So I would say that even if DTX is phenomenally successful, it doesn't really confer an edge to AMD.

The real reason for knocking DTX is that it really doesn't bring anything significant to the table. Any gains AMD might see from it are purely transitory.

InTheKnow said...

There could be far more actual data and knowledge (about Intel) on this board but there is also no doubt that this board is a clique and that misinformation (about AMD) thrives here equally well. There is also no doubt that this bias is both caused and encouraged by roborat. This board has some good points but claims to be far more than it actually is. It succeeds at what it wants to be: a fansite for Intel and nothing more.

Then where is the promised land, oh enlightened one?

Post the URL and I'll be the first to check it out.

Anonymous said...

"For example, the childish snickering about DTX makes it clear that no one who posts here knows anything about DTX."

Ahh... you apparently did not read the great Dementia's prediction that DTX would take a foothold in H2'07... that was what the comment was in reference to. He made these grand references to how Intel would get beat in the low end as DTX would take off and Intel would have nothing to compete with.

Yes I know very little about DTX, but apparently that should qualify me to make predictions about market acceptance like a certain other site!

And yes, I will take intheknow's comments in this area seriously as he clearly knows more than most others (especially given his post on the making of MOBO's in the past). DTX is only marginally better than Intel's BTX fiasco - that was done to fix a chip issue (the Prescott temp issues) and it was clear it would not take hold, especially if the chip problem ever got fixed.

DTX's big 'advantage' that I can see is theoretical cost benefits; as chip prices continue to come down, as well as RAM and other components, I don't see this taking hold. It was extremely naive of someone to think this would take hold in 2007, as they did not consider the market acceptance and business aspects of a new standard. But that person was so 'invested' in the greatness of DTX that he probably felt he had no choice but to keep up the charade (or admit he was wrong).

Who knows, in time DTX might work... just not anytime soon, and certainly not in the '07 timeframe that was predicted.

Anonymous said...

There is no doubt that the knowledge on this blog is vastly greater and more accurate than any other source. I will humbly attempt to summarize some of the tremendous wisdom from this blog:

1. Intel is at least 5 years ahead of AMD in process technology and 2 generations ahead in CPU architecture.

2. AMD is unlikely to survive to the end of 2008 because Intel's advantages in process technology and architecture will keep AMD in the red.

3. Even if AMD makes it to 2009 then without a new architecture to compete with Nehalem AMD will go bankrupt early in 2009.

4. If by some miracle AMD manages to survive 2009 then by 2010 when AMD finally releases a new architecture to replace the already outdated and aging Shanghai it will likely be half the speed of Intel's offerings. In other words, Intel's Silverthorne replacement will be as good as AMD's top end processor.

5. AMD is best compared with VIA. After all, Intel is in a class by itself so it simply is not fair to put it into a group with AMD. Since AMD is best compared with VIA (since it is not Intel) we can conclude that AMD is destined to be marginalized just like VIA.

6. Intel is larger than AMD and because it is larger it will always be larger and always be ahead of AMD. It is simply not possible for a smaller company to be competitive with a larger company. Larger always equals better.

7. It is a certainty that AMD will be purchased by a larger company since by definition only a larger company can compete with Intel. This means that AMD will either be bought by IBM or Samsung since these are the only companies large enough to compete with Intel.

8. AMD may not be bought since it is clear that AMD is moving to a foundry based model like VIA (since AMD is obviously not Intel). AMD will likely sell FAB 30 soon. AMD will also sell FAB 36 a bit later as soon as it moves its production to TSMC.

9. The FABless model is the true meaning of "Asset Light" that AMD has mentioned. This will become clear soon when AMD announces the sale of FAB 30.

10. The purchase of ATI was a huge mistake. It is clear that AMD's only reason for purchasing ATI was that they were trying to be like Intel. In fact, AMD was already trying to be like Intel when they acquired their own FAB and when they started making X86 processors. But, obviously, AMD is no Intel.

11. Clearly, AMD's business model is broken. This broken business model means that AMD cannot make enough money.

12. Obviously AMD cannot possibly compete directly with Intel in X86 based processors. However, if AMD were bought by a larger company they might have enough size to survive (probably a generation behind Intel).

13. If AMD switches to a FABless model they could probably survive but even further behind than they are now.

14. AMD would have been better off if instead of purchasing ATI they had accelerated their move to a FABless model.

15. Ruiz will be forced to retire before mid 2008. Afterall, Ruiz was the one who tried to make AMD like Intel and this was clearly wrong.

16. Intel was able to release 3.16Ghz chips in 2006 but chose not to because AMD wasn't competitive enough. Intel is easily capable of releasing > 4.0 Ghz processors in 2008 but Intel may decide not to since AMD is obviously not competitive.

17. AMD cannot gain share in 2008. AMD will lose share in 2008.

18. The only way that AMD could gain share in 2008 would be to sell its processors at even lower prices. But, this would only hasten AMD's bankruptcy.

19. In fact, AMD's position is so bad that it may even lose share to VIA in 2008.

20. We know that even if AMD loses ground and becomes marginalized or goes bankrupt that this will have no effect on Intel's price or chip cadence.

21. We can also be certain that, if AMD were not a factor, Intel would not try to phase out X86 in favor of Itanium. We know that Intel has been trying to phase out X86 for more than 15 years, but nevertheless we are certain that Intel would not do this.

22. However, even if Intel did phase out X86 in favor of Itanium, we can be certain that this would not affect prices or cadence, and games would run just as fast on EPIC. And Itanium would overclock just as well. In short, we are certain that if Intel phased out X86 it would only be to replace it with an even better architecture.

23. Finally, we know with absolute certainty that current Intel processors owe nothing to AMD. Intel would surely have increased speeds just as fast without the competition from K6 and K7. Intel would have added 64 bit extensions on its own without K8.

24. In short, AMD sucks sweating donkeys and Intel rules.

Anonymous said...

To the anony above... give it some time... you are still apparently in the denial stage.

While some folks may have said a few of the 24 items you mention above, you are clearly exaggerating the points in an attempt to humor yourself, and in an attempt to self-validate your conclusion of fanboyism (in that regard you epitomize Scientia's need to 'migrate' the facts to fit your conclusion).

As for the process lead - Intel is ~3 years ahead (as opposed to the complete Scientia disinformation of 9 months). If you are really interested in an intellectually honest discussion of this, let me know and I will detail out the reasons why.

If your hyperbole helps you sleep better at night then feel free to misinterpret the comments of people here... I'm sure Robo is not so insecure that he will feel the need to censor out your absurd comments - almost anyone reading them can see your words for what they are. Also I can suggest a couple of sites where you will be deemed an expert and visionary if you'd like (facts are optional on those sites)

And in response to #11, clearly AMD's business model is hitting on all cylinders... it has enabled them to slow down the F30 conversion, mothball the NY fab, enable a ~50% reduction in ASPs and crack the share price by over 60%... who can argue with a business model that enables all of that?

The only thing working right now is graphics, which I think is actually set to make huge inroads vs Nvidia in 2008. While Intel is dabbling in the area, I doubt Larrabee will be an instant success in graphics; best case it will take one major revision/iteration to take hold (meaning best case for Intel is probably 2010 to be a player in the discrete GPU arena).

Anonymous said...

“But, of course the lack of DTX motherboards in 2007 proves that the standard is worthless, right?”

Nice choice of words; actually, I couldn't have said it better myself. OK, yes I could.

Ask yourself this: if you have got something good to sell, and there's a market for it, the troops will rally, get behind it, and see it as a good thing. It will sell.

However, INTC tried this nonsense with BTX. Bad idea. The entire hardware industry was geared up for the door on the right. They basically told INTC to shove it. (BTX was done to provide a better cooling arrangement for the increasingly hot CPUs of the time. Core 2's efficiency rendered these thermal issues academic.)

DTX was SUPPOSED to be a platform integration of the new ATI/AMD partnership, designed to put a very small, inexpensive, integrated unit on the business desktop. As it stands now, current NVDA and INTC offerings are simply too good for the DTX platform to gain any traction. Further, because of the ATI/AMD buyout/failure, they are fighting for their lives in the current markets just to stay alive and competitive. READ: low end and big.

Pheromone thermals are off the wall, INTC chips are incredibly efficient, and ATI needs two GPUs to compete with one NVDA 8800 GTX.

Need more proof? Check out Apple's new MacBook Air. This thing is revolutionary, and it will SELL!

DTX, frankly, is dead in the water.

GOT IT?

SPARKS

Anonymous said...

"Rumor: AMD doesn't even mention DTX anymore so it must be a failure.

AMD is already counting DTX as a success for 2007:"

Scientia, 7/27/2007

"Without doubt though, the good news for AMD is mini-DTX. Again, the trades have suggested that mini-DTX will shove aside all other contenders to become the dominant small form factor standard on the desktop....

...This leaves AMD mini-DTX with no competitor and Intel with no sandbags to plug the gap in levee."

Scientia, 5/23/07

"In fact, mini-DTX versus mini-ITX is exactly this kind of mismatch with mini-DTX having more memory and memory bandwidith, more expansion capability, and more cpu power. This should at least help AMD hold on in the 2nd half of 2007"

Scientia 5/27/2007

"And, adding insult to injury, even Anandtech now agrees that AMD's Barcelona quad core will begin production in Q3 07 at 2.5Ghz."

"With two and a half weeks of margin, this suggests that a B3 revision could probably be released before the calendar end of Summer" (this would be 2007)

"The two companies' offerings should be very close in performance and features by Q3 or Q4 2007"

OK the last few are off topic...but just too damn funny!

Just so I'm clear... it is the folks on this board who have no clue about DTX?

Anonymous said...

"I will humbly attempt to summarize some of the tremendous wisdom from this blog:"

I am in agreement with points 1 through 23. However, I am compelled to take issue with point 24: I don't think "sweating donkeys" is appropriate; "BIG sweating donkeys" would be far more accurate.

Other than that, yes, I believe that sums it up in a nutshell.

Well Done.

SPARKS

Anonymous said...

I agree with point #24, but completely disagree with 1-23!

See we don't all think alike here!

Anonymous said...

That dude with the silly 24 points got 21 of them wrong. But I do like the thought of Dirk and Hector "sweating donkeys". Where did he come up with that one? It's as good as the 21 points he got wrong.

Tick Tock Tick Tock

Anonymous said...

Hey Giant, it just occurred to me. With a stroke of compulsive/obsessive, mathematically brilliant, fanboy, astute genius, your sweetie E8400 is 2X, err, that's twice, ahhh, 2 times, yeah, right, two times, ahhh, got it, 2 times faster than Pheromones.

We will need the Doc, In The Know, GURU, JJ, and others to confirm my calculations.

Here are the calculations: 2 x 2.3 or 2 x 2.4 = 4.5 GHz!

Wait, wait, my 10 year old daughter has just informed me that the results should be 4.6 or 4.8 GHz!

It just goes to show you, if you’re not careful, fanboy-ism could lead you to make outrageous and unsubstantiated claims. Therefore, I suggest that some people should keep a 10 year old around to keep them honest.

With the help of my 10 year old daughter we can now say, the E8400 clocks NEARLY TWICE AS FAST AS A GODDAMNED PHERMONE, OUT OF A RETAIL BOX!

However, since she is a fifth grader, I would like the rest of the "fanboi" engineers, especially those with PhDs on this site, to check my/our calculations. We want to be factually correct, you know. The DOC's site is gaining traction, and people are watching.

Ever Loving, Intel Sucking, cream in my Jeans, shareholding, Intel FANBOY,

SPARKS

Tonus said...

According to Ed at Overclockers, it seems that AMD will cancel the 9700/9900 Phenom B3 stepping processors and replace them with 'new' models. The only differences appear to be the name (9750/9950) and the availability (moved back approximately one quarter).

Ed claims AMD did it so that they can claim that the CPUs are not delayed. Whatever the reason, we may not see 2.6GHz Phenom processors until late in the year.

Anonymous said...

It looks like the "Bad Core #3" rumor about the Phenom is true. Quite a few sources confirm that they can hit 2.8 - 3.0 GHz on their Phenoms if they hold the clock speed on core #3 back.

Although this is a PR disaster, I now have more confidence in AMD's manufacturing process. It seems they CAN hit higher clocks than 2.4 GHz on 65nm when the design isn't flawed.

I bet Tritanic will ship in huge volume at higher speeds than the quads and all of them will have the same core 3 disabled.

It's interesting to note that the core 3 bug is a direct result of AMD choosing to go with a monolithic quad core design.

Anonymous said...

I bet Tritanic will ship in huge volume at higher speeds than the quads and all of them will have the same core 3 disabled.

While it's certainly true that these cores will be able to easily overclock past their quad-core counterparts, it doesn't mean AMD will be able to ship them at those higher bins in volume. You still have to take into account the very large power consumption and TDP increases as the clocks rise on Phenom. Even if they fix the manufacturing issue on core #3, they still won't be able to achieve 2.8GHz without a >140W TDP.

Anonymous said...

"Ed claims AMD did it so that they can claim that the CPUs are not delayed."

As much as I rail on AMD, I think there is a more plausible explanation. Despite the fact that the TLB errata may not be that big a deal, any 9X00 Phenom now has the stench of that bug. By cancelling these and making everything 9X50, AMD makes it a bit cleaner for consumers and can say these are 'bug free' chips.

Yes it has the benefit of allowing some more creative interpretation of 'on schedule', but I think that is a secondary factor.

I hate to say 'I told you so' about all this tri-core, 'disable the slow-performing core and get good clocks on the tris' crap - in the real world you don't see that much variance in clock on cores that are so physically close on the wafer (again, except potentially at the very edge). Tri-core is simply a means of selling a quad with a NON-FUNCTIONAL core that would have led to scrap. Or, if demand is soft... AMD can intentionally disable a core to move inventory (which is good flexibility if you have a less than competitive product).

The situation on 2.6GHz availability is more brutal than I thought (I thought for sure these would be available for purchase toward the end of Q2). Keep in mind the order availability date of Q3 is not necessarily CUSTOMER order; it is OEM/channel orders.

So those AMD fans wanting a 2.6GHz for Christmas, they may still get their wish - just 2008 instead of the 2007 originally expected.
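The tri-core-as-salvage argument above can be made concrete with a simple binomial yield model. This is only a sketch: the per-core yield `y` is a made-up illustrative number, not AMD data.

```python
from math import comb

# Hypothetical per-core yield: assume each of the 4 cores on a
# monolithic die is functional independently with probability y.
y = 0.90

p_quad = y ** 4                        # all four cores work -> quad-core SKU
p_tri = comb(4, 3) * y**3 * (1 - y)    # exactly one dead core -> tri-core salvage
p_scrap = 1 - p_quad - p_tri           # two or more dead cores

print(f"quad-capable dice: {p_quad:.1%}")
print(f"tri-core salvage:  {p_tri:.1%}")
print(f"scrap or worse:    {p_scrap:.1%}")
```

With these made-up numbers, a sizable fraction of would-be scrap becomes sellable tri-cores, which is exactly the flexibility described above; the model says nothing about whether those salvaged dice bin at useful clocks.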

Anonymous said...

A 2.1GHz tri-core... what's the point? Will this even be competitive (meaning significantly better) with AMD's 3.0GHz K8 dual core? If not, won't it need to be priced under that? (I wonder which part has a higher margin given the die size)

Perhaps, there is another reason why K8 is not clocked above 2.7GHz. It will avoid the embarrassment of their one generation old DUAL core beating a next generation TRI core!

Of course AMD still needs to sell these 2.1GHz tri-cores as they sell 2.1GHz quad cores... what else are you going to do with those defective dice? (Though I hear AMD is at 'expected' 'mature' yields.) This is yet another down-the-line problem of introducing new SKUs at lower bins instead of waiting and getting the bins working.

There is now a freakin 1.8GHz quad "Phenom"! And before someone says well, it was a stopgap and they had to do it so they could sell stuff... AMD has even migrated this to the B3 stepping!

There are 2.2, 2.3 and soon to be 2.4 GHz parts, yet no 2.0GHz? I have absolutely no data, but if I had to guess where the cliff is for the bin splits? Between 1.9 and 2.1? Also notice there are no 1.8GHz tri-cores (suggesting decent splits at 1.8?)...

I can see doing this stuff in server land, but this is the desktop market - quads comprise, what, 5% of all desktops? So AMD now has 4 SKUs in Q2 and will soon add a 5th (2.6GHz) and/or a 6th in H2?

Yeah, that AMD business model is not broken... in a small market the goal is to have as many products as possible! Then price them in such a way that they completely pin down the rest of your market segments.

"sweating donkeys" comes to mind!

Anonymous said...

“It's interesting to note that the core 3 bug is a direct result of AMD choosing to go with a monolithic quad core design.”

I read this, too, and it had me wondering about the process/architecture relationship.

The way I’m reading this, it sounds as if it is third core specific. I was under the assumption that all four cores were identical and that the ‘bad’ or slower core was something that randomly affected one quadrant or another due to variances in the process.

But when they say "core 3 bug" it sounds as if the process or the design was flawed in one specific area common to all Pheromones, hence the third core. How is this possible? Is it a design or process problem that affects that core? Is it a bad mask and they don't know where to look? Why can't they fix that one problem area? Is that one core different than the others in some way? Honestly, I am at a loss to understand the dynamics here. Am I reading this wrong?

SPARKS

Anonymous said...

I think it's impossible to say what is the root cause of the issue (assuming it is Core3).

Yes, all cores are 'identical', but they're not - the interconnects are not the same, as you have right-handed/left-handed layouts and other subtle differences. I'm not saying this is the cause, but the cores are not EXACTLY the same with respect to interconnect, runs to the cache, etc...

It could be a mask issue - either a specific defect on the mask (I highly doubt, with the low volumes AMD is running, that these lots are moving through more than 1 or 2 production tools for some of the steps) or an error in the mask making (OPC or design).

It could be an actual design issue: perhaps as some bit of code cycles through, it gets nailed on the 3rd core, or some specific pathway.

No one (other than AMD) has enough info to say. For folks to flat out rule out a design issue or a process issue or a manufacturing issue is just not possible with the info available.

And in all likelihood we will never know, unless the issue gets fixed on the same stepping. If the stepping is changed to fix this, then we probably will never know what the cause of the issue was.

Anonymous said...

Good News or bad news?

http://search.dell.com/results.aspx?s=gen&c=us&l=en&cs=&k=AMD&cat=all&x=0&am

Dell is no longer selling AMD-based computers online (just desktops?).

As I credit 'cracking' Dell as one of the accelerators for AMD's problems (lower prices and tying up capacity), it is unclear if this is good news or bad news for AMD.

Putting aside the negative PR for a second, I think this may actually be a good thing for AMD. This will allow them to re-focus on the channel (which is probably better margin than Dell anyway).

If I'm Intel, Dell is getting the bottom-bin chips, when they are available, with a note saying:

'Isn't qualification of a second source a good thing for everyone? By the way, your price for chips is going up! AMD tells us that customers are demanding low power chips instead of higher performance, so please feel free to purchase chips from our competitor as needed!'

Axel said...

Anonymous

Dell is no longer selling AMD-based computers online (just desktops?).

So much for the foolish predictions out there of AMD increasing market share in 2008. Losing Dell online (the vast majority of Dell's business) means a substantial, virtually overnight loss for AMD in desktop & mobile share.

Anonymous said...

Fudzilla is saying Intel has shipped over a million 45nm chips (recall 45nm launched ~3 months ago).

If Intel is having 45nm yield problems, as some informed bloggers have indicated, and AMD managed to ship ~400K quad cores in the 5 months from the Barcy launch, what does that say about AMD's 65nm?

I'm just so confused; isn't Newegg product availability a good indicator of process yield? There are no other factors that impact product availability, are there? (like demand, customer priority, segment ramp focus)

Anonymous said...

Man, the Dell news made the CNBC business channel! Apparently they will sell by phone and retail, but no more online sales. The analysis was that this is a huge blow, as the majority of Dell's volume remains online sales.

It'll be interesting to see AMD's PR response... I hear customers are demanding they not be able to order computers online and AMD is just being customer centric.

Anonymous said...

Guru, there I was today, Hi-Pressing 4 sets of 350 KCMIL, 3P, 277-480V in a 24x24 cutout box, and I couldn't help but think of your "pathway" comment.

“bit of code cycles through it gets nailed on the 3rd core, or some specific pathway”

In my world each set MUST be run together with its respective conductor, A, B, C …A, B, C, etc. If you were to run all the A’s or B’s or C’s together, in parallel, as little as ten feet, under load, hysteresis, and/or inductive reactance would heat the galvanized conduit enough to cause insulation failure, eventually.

The thought occurred to me: with the thousands of interconnects, how on God's earth do they find these things on a microscopic level? Hell, all I would do is tell an apprentice to go sit his ass on that raceway, and then watch his reaction. (Just kidding.) Where would an architect/process engineer begin to look? Can they run simulations to isolate the specific area? It seems we're talking about a needle in a haystack. Why can't they examine the sum of the differences between the cores and then target those areas for study?

Then it occurred to me: maybe they have. Perhaps the issue isn't with the process or the "3rd Core". Perhaps the integration of the IMC is limiting ALL 4 cores coming up to speed together. We spoke of these issues months ago, as I recall. We spoke of timing issues. Therefore, if this is the case, then perhaps it's a more fundamental problem, inherent in the foundation of the design. Maybe one core always gets left behind. Perhaps the IMC, in its present incarnation, will never allow all four cores to scale harmoniously. Is this possible, or am I reaching for the wrong phase?

SPARKS

Anonymous said...

Did anyone read the Xbitlabs review of the Q9300?

http://www.xbitlabs.com/articles/cpu/display/core2quad-q9300.html

Something is wrong on the reported die size of the Q9300. With 3MB L2 instead of 6MB how can they be the same (107sq.mm)?

http://www.xbitlabs.com/articles/cpu/display/core2quad-q9300_2.html
http://www.xbitlabs.com/articles/cpu/display/intel-wolfdale_2.html

Anonymous said...

"Something is wrong on the reported die size of the Q9300. With 3MB L2 instead of 6MB how can they be the same (107sq.mm)?"

1/2 the cache is disabled - this can be intentional or unintentional (defect). This allows better yields (in the case where some of the cache may have a defect) and is also much easier in manufacturing, as you don't have another set of masks, metrology recipes, product types... Also, it provides some flexibility with planning, as cache can be disabled if, for example, there is a shortage of the 3MB parts or an excess of 6MB parts.

Intel has done this in the past; at some point they may re-lay out the chip with a smaller cache (to improve die size), but at this stage that is not worth the effort (both financially and engineering-resource-wise).
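The binning flexibility described above can be sketched with a toy allocation model. All quantities here are made-up illustrative numbers, not Intel's actual sort data:

```python
# Cache-binning sketch: dice with an L2 defect must ship as 3MB parts;
# fully working dice can additionally be fused down to 3MB when 6MB
# demand falls short of supply. Numbers are invented for illustration.

good_logic_dice = 1000    # dice whose cores pass sort (assumption)
l2_defect_dice = 120      # of those, dice with a defect in the L2 array
demand_6mb = 700          # orders for full-cache 6MB parts

full_cache = good_logic_dice - l2_defect_dice
fused_down = max(0, full_cache - demand_6mb)   # working 6MB dice fused to 3MB

parts_6mb = full_cache - fused_down
parts_3mb = l2_defect_dice + fused_down
print(f"6MB parts: {parts_6mb}, 3MB parts: {parts_3mb}")
```

The point of the sketch is the `fused_down` term: fusing off working cache lets supply track demand without a second mask set.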

Anonymous said...

One clarification on intentional vs unintentional: the cache still has to be 'fused' in either case. It is a matter of intentionally disabling 3MB of working cache vs disabling a non-functional portion of the cache (in the case where you have a manufacturing defect).

Orthogonal said...

1/2 the cache is disabled - this can be intentional or unintentional (defect).

Precisely!

In fact, if everyone doesn't remember, when Core 2 was released in 2006, the first "Allendale"s were in fact cache-disabled "Conroe"s. It was a while before an actual "Allendale" mask set was put into production.

Anonymous said...

Intentionally disabling any fully functional beauties should be a capital offense, punishable by rereading all of AMD's PR and PowerPoint releases for the entire 2007 calendar year, sifting through AMD's reject bins for marginally running units, and finally, a forced purchase of AMD stock (in lieu of a monetary fine).

Talk about being competitive with AMD's low end. Ecch! Let it live in the basement like some hideous bastard cousin, stoking the low-end fires, seldom discussed, rarely seen, but a necessary evil nevertheless.

Even Charlie D. at the INQ is calling DELL’s recent move, “relegating them to the retail GHETTO!”


http://www.theinquirer.net/gb/inquirer/news/2008/02/08/dell-dumps-amd



SPARKS

pointer said...

新年快乐! (Happy New Year!)

Anonymous said...

"Intentionally disabling any fully functional beauties should be a capital offense"

Why didn't they underclock them?
It's more likely a yield issue, like one of the previous posters said.
But they could have let those go as dual-core parts. Unless they have too many?

Anonymous said...

"But they could let those go for the dual core ones. Unless they have many?"

The Intel quad core is MCM - meaning two dual-core dice in the same package... so it is not a matter of downbinning quad cores into dual cores. While this approach is mocked as a bandaid or an inelegant approach, it allows you to choose the dual-core/quad-core mix AFTER a wafer is processed (this is a 3-4 month benefit). When you have a native quad core, you have to decide from day 1, as the masks for dual and quad cores are different - this makes it significantly harder to get the mix right, as marketing forecasts 4-6 months out can be seriously off.

This is yet another example of the importance of considering the business, strategic and manufacturing aspects of a solution and not just the pure technology aspect.
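The mix-flexibility point above can be illustrated with a toy allocation model. All numbers are hypothetical assumptions, chosen only to show the mechanics:

```python
# Toy model of MCM mix flexibility: a stock of good dual-core dice can be
# split between dual-core and quad-core SKUs AFTER fabrication, once the
# demand picture firms up. All numbers are hypothetical.

good_dual_dice = 1000     # dice that passed wafer sort (assumption)
quad_demand = 180         # quad-core orders that firm up months later

# Each MCM quad consumes two dual-core dice; the remainder sell as duals.
dice_for_quads = min(2 * quad_demand, good_dual_dice)
quads = dice_for_quads // 2
duals = good_dual_dice - dice_for_quads

print(f"quads shipped: {quads}, duals shipped: {duals}")

# A native quad-core die offers no such late binding: the dual/quad split
# is fixed at mask time, months before demand is actually known.
```

The design point being modeled is late binding: the same inventory covers any dual/quad split, whereas a native-quad mask commits the split before the forecast can be trusted.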

Anonymous said...

Well Gents, I've been calling Charlie D. out on another AMD pimping predictions. Enjoy!


" DELL DEAL WITH THE DEVIL

Charlie, can't you remember when you said AMD was going to pound INTC into the pavement with these BULLSHIT E5XX class machines? Man, we even did the calculations on companies buying these in bulk, thereby saving millions!

BLAAAA! Wrong again! I said it then, I'll say it now: not even some poor pent-up nine-to-fiver, locked up in some isolated cubicle (I recall you sitting in one, commenting how terrible they were!), wants to live with some hideous, grey, under-pumped dog of an e-machine.

What you failed to realize back then is that corporations need to spend additional money due to capital improvement and tax considerations; further, they just might really care about the people who work for them. (Remember your report when you were sitting your ugly ass in one of Intel's cubicles last year?)

So much for the brilliant sweetheart AMD/DELL deal that made DELL a litigation non-combatant, made AMD screw the channel, and ultimately killed AMD's margins.

Now they're in the RETAIL GHETTO? That's the crap they're selling to the American public because they CAN'T sell it to Corporate America, or the rest of the "online" world.


DO YOU GET IT, NOW?

SPARKS"


http://www.theinquirer.net/gb/inquirer/news/2008/02/08/dell-dumps-amd

SPARKS

Anonymous said...

"had to laugh when I read this. The posters over at roborat's blog insist almost everyday that I have no idea what I'm talking about and that I just make up things to try to make AMD sound better."

Well at least we know where Dementia goes for some real commentary... as always he has twisted the #'s conveniently to make a simple argument...

"On the other hand there were Intel fans that jumped on the 400K number as "proof" that AMD was doing badly. So, I stated that Intel probably moved a similar ratio of 45nm chips which would be about 1.2 million."

Well that makes sense given the relative market shares... the problem... the timeframes DON'T MATCH UP!

AMD released K10 in September, at least on paper, though we certainly will not call it a paper launch, because we know for a fact AMD doesn't do these. And when did Intel launch 45nm? Mid-November. So Intel shipped over a million chips in just under 3 months; AMD managed to ship ~400K chips in just under 5 months.

AMD did this on a process that was mature, ramped, and at mature yields. Intel did it on a brand new process. AMD simply had to change masks in a lithography tool... Intel had to install new equipment... so I guess his comparisons are reasonable.

And just curious - he claims the low availability of chips is due to poor 45nm yields... if he really believes this, what does it say about the 65nm process?

I guess if you cherrypick data and take the context out of things you can seem rather reasonable to readers who take what you say as gospel and don't question the context of your statements!

Of course this is the readership that talks about the 0.45nm and 0.65nm processes! So 'nuff said.

Oh and by the way the source who talked to Fudzilla never said WHEN Intel passed the 1mil shipment mark or how much over it they are... but let's just assume it means now and 1,000,001 as that is easier to plug into Scientia's pre-determined conclusion.
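Taking the two figures quoted in this thread at face value, the run rates work out as a back-of-envelope sketch. Both inputs ("over a million" and "~400K") are rough press figures, so the ratio is only indicative:

```python
# Back-of-envelope shipment run rates from the rough figures above.
intel_45nm_units = 1_000_000   # "over a million" 45nm chips in ~3 months
intel_months = 3
amd_k10_units = 400_000        # ~400K K10 quad cores in ~5 months
amd_months = 5

intel_rate = intel_45nm_units / intel_months
amd_rate = amd_k10_units / amd_months

print(f"Intel 45nm: ~{intel_rate:,.0f} chips/month")
print(f"AMD K10:    ~{amd_rate:,.0f} chips/month")
print(f"ratio:      ~{intel_rate / amd_rate:.1f}x")
```

Since Intel's number is "over a million" with an unspecified date, the computed ratio is a floor, not a precise comparison.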

Anonymous said...

http://www.nvidia.com/object/io_1202940080671.html

NVIDIA Reports Record Results for Fourth Quarter and Fiscal Year 2008

Company Achieves Record Quarterly Revenue and Record Annual Revenue; Annual Net Income Increases 78 Percent Year-Over-Year

SANTA CLARA, CA—FEBRUARY 13, 2008— NVIDIA Corporation (Nasdaq: NVDA), the world leader in visual computing technologies, today reported financial results for the fourth quarter of fiscal 2008 and the fiscal year ended January 27, 2008.

For the fourth quarter of fiscal 2008, revenue increased to a record $1.20 billion, compared to $878.9 million for the fourth quarter of fiscal 2007, an increase of 37 percent. Net income computed in accordance with U.S. generally accepted accounting principles (GAAP) for the fourth quarter of fiscal 2008 was $257.0 million, or $0.42 per diluted share, compared to net income of $163.5 million, or $0.27 per diluted share, for the fourth quarter of fiscal 2007, a net income increase of 57 percent.

Non-GAAP net income for the fourth quarter of fiscal 2008, which excludes stock-based compensation charges, a charge for in-process research and development related to an acquisition closed during the quarter, and the associated tax impact, was $292.6 million, or $0.49 per diluted share.

Annual revenue for the fiscal year ended January 27, 2008 was a record $4.10 billion, compared to revenue of $3.07 billion for the fiscal year ended January 28, 2007, an increase of 34 percent. GAAP net income for the fiscal year ended January 27, 2008 was $797.6 million, or $1.31 per diluted share, compared to GAAP net income of $448.8 million, or $0.76 per diluted share, for the fiscal year ended January 28, 2007, a net income increase of 78 percent.

Non-GAAP net income for the fiscal year ended January 27, 2008, which excludes stock-based compensation charges, a charge for in-process research and development related to an acquisition closed during the year, and the associated tax impact, was $919.3 million, or $1.56 per diluted share.

"Fiscal 2008 was another outstanding and record year for us. Strong demand for GPUs in all market segments drove our growth. Relative to Q4 one year ago, our discrete GPU business grew 80%. Our growth reflects the ever-increasing use of rich graphics in applications from Google Earth to Apple iTunes to online virtual worlds," said Jen-Hsun Huang, president and CEO of NVIDIA.

Mr. Huang continued: "This is the era of visual computing. The richness of the graphics is increasingly central to our computing experience. And at the core of that experience is the GPU, the processor that defines the modern PC."

Fourth Quarter, Fiscal Year 2008, and Recent Highlights:

* Fourth Quarter revenue grew 37 percent year-over-year to a record $1.20 billion.
* Annual revenue increased 34 percent year-over-year to a record $4.10 billion.
* GAAP annual net income increased 78 percent year-over-year to a record $797.6 million.
* GAAP annual gross margin reached a Company high of 45.6 percent, a year-over-year increase of 320 basis points.
* We launched multiple industry-defining products and initiatives:
o GeForce® 8800 graphics processing family, including the highly-acclaimed 8800GT
o GeForce 7000 mGPU – the first single-chip motherboard GPU for Intel systems
o Tesla ™ computing system – the high performance computing industry's first C-programmable GPU
o Hybrid SLI® technology – the first hybrid technology for PC platforms
o CUDA™ technology – the first C-compiler for the GPU
o PureVideo® HD technology – the first video decode and post processing technology for Blu-ray and HD DVD
* NVIDIA® held #1 segment share in desktop and notebook GPU (Mercury Research PC Graphics 2008 Market Strategy and Forecast Report).
* NVIDIA held #1 segment share in workstation solutions (Jon Peddie Research Q3'07 Workstations and Professional Graphics Report).
* NVIDIA was named Most Respected Public Company by members of the Fabless Semiconductor Association for the second consecutive year.
* NVIDIA was named Forbes Company of the Year.
* We acquired Mental Images, the industry's leading photorealistic rendering technology provider. Mental Images' Mental Ray is the most pervasive ray-tracing renderer in the industry.
* In February, we announced and completed the acquisition of AGEIA, the industry leader in gaming physics technology.

Conference Call and Web Cast Information
NVIDIA will conduct a conference call with analysts and investors to discuss its fourth quarter fiscal 2008 financial results and current financial prospects today at 2:00 P.M. Pacific Time (5:00 P.M. Eastern Time). To listen to the call, please dial 212-231-2901; no password is required. A live Web cast (listen-only mode) of the conference call will be held at the NVIDIA investor relations Web site www.nvidia.com/investor and at www.streetevents.com. The Web cast will be recorded and available for replay until the Company's conference call to discuss its financial results for its first quarter fiscal 2009.

Non-GAAP Measures
To supplement the Company's Condensed Consolidated Statements of Income presented in accordance with GAAP, we use non-GAAP measures of certain components of financial performance. These non-GAAP measures include non-GAAP gross profit, non-GAAP net income, and non-GAAP diluted net income per share. In order for our investors to be better able to compare our current results with those of previous periods, we have shown a reconciliation of GAAP to non-GAAP financial measures. These reconciliations adjust the related GAAP financial measures to exclude stock-based compensation, patent license fees for past usage, in-process research & development charges related to acquisitions, a non-recurring credit associated with the net cumulative impact of estimating forfeitures as a result of the adoption of SFAS 123R, and the associated tax impact, where applicable. We believe the presentation of our non-GAAP financial measures enhances the user's overall understanding of our historical financial performance. The presentation of our non-GAAP financial measures is not meant to be considered in isolation or as a substitute for our financial results prepared in accordance with GAAP, and our non-GAAP measures may be different from non-GAAP measures used by other companies.

About NVIDIA
NVIDIA is the world leader in visual computing technologies and the inventor of the GPU, a high-performance processor which generates breathtaking, interactive graphics on workstations, personal computers, game consoles, and mobile devices. NVIDIA serves the entertainment and consumer market with its GeForce products, the professional design and visualization market with its Quadro® products, and the high-performance computing market with its Tesla products. NVIDIA is headquartered in Santa Clara, Calif. and has offices throughout Asia, Europe, and the Americas. For more information, visit www.nvidia.com.

Certain statements in this release including, but not limited to, statements as to: the use and importance of graphics; visual computing; and the role of the GPU are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: slower than anticipated adoption of new technologies or development of a market; the impact of competition and competitive products; technological advances; the development of more effective or efficient GPUs or CPUs; changes in consumer preferences or product uses; incompatibility of technologies; changes in industry standards; as well as other factors detailed from time to time in the reports NVIDIA files with the Securities and Exchange Commission including its Form 10-Q for the period ended October 28, 2007. Copies of reports filed with the SEC are posted on our website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

###

Copyright © 2008 NVIDIA Corporation. All rights reserved. All company and/or product names may be trade names, trademarks and/or registered trademarks of the respective owners with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.

Note to editors: If you are interested in viewing additional information on NVIDIA, please visit the NVIDIA Press Room at http://www.nvidia.com/page/press_room.html


Intel and Nvidia report record profits and record revenues while AMD reports RECORD LOSSES.

Anonymous said...

But as a self-anointed authority on all things semi-conductor related, he can't be educated. What a waste of a mind.

Guess what I came up with after all the hype, spin and horseshit? Those IDIOTS who bought into this bag of crap have to explain to their shareholders and bosses WHY THEY BOUGHT THE “SCRAPPY LITTLE COMPANY” AND SOLD INTEL SHORT!

As an Intel employee it has been fun to watch things from the sidelines, even over at Scientia's blog, however, the quality of dialogue is vastly superior here. (some may accuse me of bias, but that's ok :) )

I'm impressed by the knowledge and experience many of the posters here have shown. Many of the assumptions and educated guesses about Intel's process and operations are strikingly spot on, while some may be a tad off ;)

Please keep up the good work and I'll comment from time-to-time from an insider's point of view.

Lest anyone be confused by the drivel posted by Scientia regarding Intel's "destruction" of 45nm chips here ...

whatever Semiconductor for Dummies book Scientia might be reading at the moment, it must be really old.

If they raised their standard to INTC's, AMD's Cripple Cores would be trash.

Despite Dementia's assumption that D1d = development fab = low volume, D1d is roughly the size (capacity-wise) of AMD's F36!

But in Scientia's little world, apparently schedule is the only thing that matters; who cares if the wheels are falling off and the process engine is sputtering...

Scientia's thoughts on D1d chips are laughable as many posters are pointing out...

"This suggests that Intel's bulk production quality lags its initial production quality by a full year"

Scientia hitting the bottle!

It is simply astounding how little knowledge Scientia has in this area.

As for Abinstein - the guy is a joke...

Keep up the good work ROBO!

Dementia's critics have pointed out the ridiculousness of his assertions about Intel destroying their early 45nm production because it was inferior to 65nm. He thought this because he misinterpreted a statement by Otellini, as he lacks any sort of financial background, yet somehow felt qualified and compelled to draw an absurd (and wrong) conclusion because it was a potentially negative data point (of course it has been shown not to be the case) for Intel.

Funny how when things are shown to be wrong, it doesn't change his preconceived conclusion (that he tries to fit the data to); occasionally, though, he will update his blog. It remains to be seen if he will update this misinformation as well?!?

As it is the only thing propping up his absurd "45nm is not in as great a shape as those Intel fanboys think" assertion, it is probably unlikely he will correct it. (Unless of course he can twist/spin some other data to once again fit his predetermined conclusions).

I HAD little respect for him; I now have ZERO respect for him. We all make mistakes, but a man with INTEGRITY will stand up and own up to them. A WISE man also will know his limitations and not try to draw absurd conclusions on data/statements he knows very little about. (And then, in Dementia's case, use his own IGNORANCE as an excuse for the misinterpretation!)

I think it is clear to all now that Scientia has neither integrity nor wisdom. (But the blog still makes for good entertainment due to the absurd reasoning and argument skills!)

We need some fresh meat. I am bored of Christian Howel and Abinstein.

You missed the most important point Dementia had on DTX - it will allow lower costs in the budget area....

you talking about someone that rhymes with dementia?

scheming scientia
idle fella
closet fanboy
baked a half ploy
blame it on dementia

In fact Dementia, Abinidiot, et al made a big deal of this saying how great it was and that AMD would not be going back to effectively one fab during the conversion. Now he is saying the exact opposite decision is the right one, because it is easier operationally, blah, blah, blah, never set foot in a fab so I'll spout out more words to make it seem like I'm an expert on this...organizational complexity....blah blah blah...

The truth is whenever AMD changes a decision it is OBVIOUSLY the right thing to do and Scientia obviously has the right argument behind it!?!

So, what does this show Dementia and Company? Ya don’t need a super long, phallic pipeline to get super clocks! All ya need is the best people and chip company IN THE WORLD!

hahahaha lmfao@scientia

Roborat, don't you know that the Crysis benchmark was compiled to favor Intel and cripple AMD? What are you thinking, man? ;P

LOL anonymous poster, thanks for posting Scientia's oh so accurate 'predictions'! :D

45nm doesn't appear to be as poor as Scientia attempted to suggest through his ridiculous analysis that Intel was throwing away inferior 45nm parts (due to his lack of financial knowledge)

I do hate to sound like a broken record (like some of those AMD fanboys)...

PHENOM IS SIMPLY FRAGGED TO PIECES BY EXISTING INTEL CPUS.

SUPERPI 1M scores:

This isn't even counting Yorkfield, which will report even better scores!

RV670 PRE-FRAGGED BY 8800 GT:

In other news, AMD renamed the SPIDER platform to SNAIL platform. This reflects the snail pace that the PHENOM CPU and RV670 graphics cards run at!

It is quite plain to see that Intel is holding back, considering the new 45nm processors can clearly clock up to 4GHz on air and much, much higher with a little effort, yet Intel refuses to release any processors officially clocked over 3.16GHz.

F**k’en A Bubba!

Well, here it is folks. The Pheromone C2D killer, the one that was going to destroy Clovertown by 40%, will be launched @ 2.3GHz. B.F.D. !!!

Further, In The Know, Doc, GURU, and so many others on this site, your analysis and predictions have been 100% correct. All commentary ranging back for one year has been formally substantiated and postulated with clairvoyant precision.

Let me see if I can crawl into the mind of the great Dementia

It's funny he has become like a politician - he understates everything about AMD's roadmap so he can say they met/exceeded it and he intentionally overestimates Intel's roadmap so he can say they are behind or late.

Wow - there is just so much misinformation in Scientia's latest blog it is getting ridiculous

Looks to me like Scientia is just making excuses for why AMD is behind (now that he finally seems to accept that they are behind). I guess an extremely ignorant Intel fanboy could claim that AMD should be much further ahead as they have 4 companies working together as opposed to Intel doing it on its own. That of course would be just as stupid as Dementia's people and spending arguments.

I would sign up for an account and post on his blog but would he really listen? (That's a rhetorical question - the answer is rather clear) Folks here should feel free to post the links I attached if they'd like! I would enjoy trying to see him wriggle out of his completely ignorant OPC comments!

Not a frick'en genius - it's just that compared to Dementia, I appear to be one. But then again, when it comes to Si technology, my dog would also seem like a frick'en genius compared to Scientia!

I just get upset when people pose as experts (under the guise of a blog), don't provide any support to backup their ridiculous statements and then refuse to acknowledge a counter point of view.

I'll say it again - Scientia has concluded in his own mind that AMD is "close" or "equivalent" or "not too far behind" Intel on process technology - he thus tries to make all data FIT that conclusion (rather than looking at the data first and trying to form a conclusion). As a scientist, this is amusing to me as it goes against everything a real scientist or engineer would do. You don't start with a pre-formed conclusion and then try to dig up data to support it and at the same time exclude data that disproves it.

I still find a surprising amount of entertainment in just watching him try to adapt concepts and topics he clearly doesn't understand (like OPC, SRAM cell size, RDR) into a support structure for his ridiculous assertions. It's almost as amusing as his 'followers' writing 'great blog', as they also have no clue about some of the things Scientia is mentioning.

For Scientia to dismiss it, with obviously no technical background on what RDR is, means, and how it is used... is absurd. Not quite as absurd as his talking about SRAM cell size and gate length to suggest that it is not RDR giving Intel an edge, but absurd nonetheless. But still less absurd than his stubborn use of a technology node launch date and a comparison of clockspeeds on 2 different microarchitectures as the key metric to judge how far ahead/behind folks are on process technology. This is just so simplistic it is beyond funny - but then again, what really could you expect given Scientia's limited background on Si processing?

And one of the reasons I think AMD fans/employees don't post here is they know the unfounded crap that they tend to spew will not be taken as gospel without a challenge and a request for supporting information.

Good one! But, I have the real truth here. Some of AMD's engineers were working late one night to fix these bugs and needed to stop for dinner. Hector Ruiz previously agreed that the company would pay for Chinese food, since the engineers had to work such late shifts. But since the company is in serious financial trouble they couldn't afford to have the food delivered. The engineers had to take fifteen minutes of valuable time to collect the food. Since they lost these fifteen minutes they decided that these critical bugs just weren't worth fixing, so they decided to wait until 45nm to fix them!

And as usual, Scientia has a hard time understanding this, especially when things paint positively on Intel's side.

Wow the misinformation on Scientia's blog just continues to mushroom, here's another comment (not from Scientia)

Where's that abinstein douchebag? Looks to be hiding from the Penryn massacre.

Let's motivate George Ou to write an article calling out AMD on this lapse.

This is beyond ridiculous. Even beyond beyond ridiculous is Dementia still holding the faith.

I'm so confused, do I believe digitimes or Scientia's blog - Scientia has such a well documented background in manufacturing and technology (second only to Sharikou of course), I have to believe everything he writes even though he doesn't provide any facts to support it.

Where are abinstein and baronhowell now that Intel's Penryn performance numbers have been outed?

Chicken shits.

AMDZone has gone the way of the dodo. I guess the owner wanted to prevent mass suicides due to Penryn.

Scientia will try his unique brand of spin and censorship and claim that he does not have AMD bias.

Chicken shits.

Come on folks, be nice, you all need to remember this.

Scientia is never wrong, but on occasion, reality has failed to meet his expectations.

Shockingly enough, it appears Scientia is wrong AGAIN and actually had no support behind his statements other than his typical EMPIRICAL observations ("generally speaking a chip will come out in production 6 months after demo"). This is what happens when you lack knowledge of what is going on and try to form conclusions from empirical observations.

I'm sure Scientia will spin this some way positive for AMD; some possible explanations/FUD:

Don't forget, the lower the yields are for the quad-core, the higher the yields are for the tri-core.

By correlation,
AMD's delay of their tri-core can only mean one thing: they are having excellent yields on their quad-cores!

See, win-win situation again for AMD.

Scientia's used to eating crow. He's so consistently wrong that I'm beginning to automatically assume that the exact opposite of everything he predicts will happen. I've seen more wrong predictions than at a psychics' convention.

"For Scientia to say Prescott was poor, therefore RDRs must have come after this, is just plain ignorant."

ROTFLMAO -- Bravo!!!! But you must admit, Scientia's ramblings on such technical things make for a great deal of humor.

"Does he really believe the stuff he says? Does he really think people will believe this crap?"

Unfortunately, he does think he is an authority on the subject, and he states things with such conviction that he convinces the ranks of AMDzone that he is some sort of God. So yes, many believe his antics.

"I understand some mistakes as he doesn't work in the area but some of the things he says are just so, well frankly, stupid that he must know they are not right?"

I don't think he does (know his rubbish is not right).... it is kinda sad really.

"I read it sometimes for a laugh and one thread had Sci reasoning that the triple core was delayed because yields on the quad core were so good. then there were comments like "good point, most people would have missed that.., etc." Absolutely hilarious."

I recall seeing that too (though don't recall if it was Sci) and nearly blowing the soda I was drinking through my nose I laughed so hard.

Stone cold killer my ass! ROFL

Where is that retard abinstein now? Is he hiding under Scientia?

abinstein, how do you like getting your ass kicked?

Blabbermouth.

"how can even the Scientia's/Abinstein continue to be AMD fans and look themselves in the mirror everyday?"

fans ---> short for fanatical --> fanatical is not generally associated with logic and reason.

Fans appear rational when things are going well (the same can be said about Intel fans too); however, it is when things are not going well that the tiger finally shows its stripes.

It's at that point where things become desperate and you have blogs like "K10: A Good start" (Oh, this was meant to be serious!?!?) or a blog on process technology when the author has nary a clue of what Si is.

I like this blog as there are a lot of educated people who comment, and nothing is taken as gospel. It would be nice to get some more AMD points of view, but given the folks I see on the other blog sites, I think they understand that they will not get away with unsubstantiated marketing and PR fluff without being challenged for facts/supporting links (which is why I suspect they don't post here).

"I think they understand that they will not get away with unsubstantiated marketing and PR fluff without being challenged for facts/supporting links (which is why I suspect they don't post here)."

Oh, yeah, plus, they WILL get eaten alive with actual working experience, facts and supporting links.

I can see that someone has done a ‘Sharikou’ on Intel’s recent financial performance. A ‘Sharikou’ is an analysis method popularised by a similarly named blogger, where a desired result is only realised by varying the point of reference ...

You are actually insulting Sharikou ...

I feel far more comfortable to post comment in Sharikou's page instead of that blogger.

Sharikou's style is that Intel is evil and AMD is good, plain and simple.

That blogger's style is complicated to describe clearly. Basically it is being a fanboi but not admitting it: a 99% "I'm right and you are wrong" style, using the other 1% to prove he is not biased, is being reasonable, etc.

I'm not able to describe him fully. But comparing him to Sharikou is an insult to Sharikou, in some sense.

So what do we end up with? A classic case of Scientia trying to sell Intel's advantage as an issue and an indicator of problems.

Jeff Tom must have been crying while doing that article.

Aren't blogs supposed to be pulled completely out of the air and unsupported?!?

Kind of funny how he needs to come to this blog to get the facts and corrections... perhaps if he didn't censor folks, he wouldn't be such a laughing stock and would get some real posts at his blog!

It would also help if he left the real technical analysis to folks who actually have some knowledge and background! Does he even know what Idsat is? (And a tip for folks - Idlin is actually becoming a more important, lesser known and reported metric - Sparks, go work on this after you're done with Igate!) I laugh at the superior intellect... said in my best Ricardo Montalban voice!

Don't drink and blog! (But drinking and commenting is OK, at least in my case!)