3.08.2008

The Task Manager Déjà vu

AMD launched a new batch of promises at CeBIT in Hannover. 45nm Deneb was demonstrated running Task Manager. The last time AMD used tskmgr.exe to demonstrate working silicon, things didn't quite turn out according to plan. AMD promised to ship 45nm products by the second half of 2008, and nobody seems to disagree that this means the fourth quarter of 2008. If AMD keeps its promise, that puts them a year behind Intel, which again exposes a major miscalculation from the blogger next door. Never mind the fact that Intel built up inventory in Q3'07; none of that is as important as the embarrassment for those who thought AMD was closing the gap. As usual, AMD is doing the job of correcting its own disillusioned fan base.

Not to make AMD followers a bit more nervous, but this is the first time AMD will be implementing a microprocessor architecture on a new process alone. At the moment, all that IBM is offering on 45nm are small ASIC designs on SOI and low-power RF CMOS. Not quite a show of confidence in the new process. Clearly there is a sense that AMD is being forced onto the unrefined 45nm process, either because it needs to show it is catching up or because it is in a hurry to move away from its disastrous (and also hurried) 65nm process. Either reason is plausible. Meanwhile, the less-pressured IBM waits for its alternative metal gate solution before implementing the process on its Power architecture. IBM has a crucial reason for waiting, and I’ll leave that for the boards to discuss.

But when it comes to announcements, you have to give it to AMD for cleverly stirring things up a little. It’s hard to get excited about 45nm promises when the competition has been shipping 45nm in volume since last year, but they managed to pull it off. Throwing in technical jargon and spinning process weaknesses into an advantage seems to have worked for the scrappy little company. They got the press to notice, so that’s job done for the overworked AMD PR machine. They just need to work more on coordinating their statements:

AMD spokesperson 1: “A common misconception is that being first to a new process technology generation is the fundamental determinant of performance and energy efficiency leadership. AMD has proven this to be false."
AMD spokesperson 2: “AMD's 45nm process generation is engineered to enable greater performance-per-watt capabilities in AMD processors and platforms”.

202 comments:

GutterRat said...

Roborat,

It should be obvious to those who've been following AMD's K10 debacle that there are two very distinct approaches to "showcasing" technology in development.

AMD chooses to fire up Task Manager on an idle Deneb platform in hopes that people will go "Wow, AMD's a player!"

Intel's approach is to show Task Manager running a non-trivial load, as they did at the September 2007 IDF.

Is there any reason to doubt that AMD is a technology laggard, or any question as to why it can't reconcile its own public PR?

This is what happens when a technology company falls behind and is out of excuses.

Time to update my blog :)

ROFLMAO

Anonymous said...

Where are Baronmatrix and the rest of the AMD fanbois?

Sharikou seemed dead for a while, but he's now posting again, claiming Puma is going to kill Centrino and God knows what other @)#)@_ garbage.

Hector's contract is up in April. Are there any posters here who believe his contract should be renewed?

Anonymous said...

Robo - you got this one COMPLETELY WRONG!

The K10 demo last Dec showed Task Manager FULLY LOADED; the current demo shows Task Manager UNDER VERY LOW LOADS.

It can thus be concluded that this demo shows a chip with much more power as it can run task manager at less than 100% CPU utilization!

Anonymous said...

Anonymous wrote,

It can thus be concluded that this demo shows a chip with much more power as it can run task manager at less than 100% CPU utilization!

What kind of a moron would write this?

Anonymous said...

Oh come on, I was being sarcastic!

Next time I'll explicitly state it so some OTHER MORON doesn't take it seriously**

**I now feel it necessary to include the disclaimer that I'm just kidding, lest you take this comment seriously too!

InTheKnow said...

From the blog...
They just need to work more on coordinating their statements:

AMD spokesperson 1: “A common misconception is that being first to a new process technology generation is the fundamental determinant of performance and energy efficiency leadership. AMD has proven this to be false."

AMD spokesperson 2: “AMD's 45nm process generation is engineered to enable greater performance-per-watt capabilities in AMD processors and platforms”.


I see no inconsistency here. The salient points are:
A) Being first to a process node doesn't help you.
B) AMD is not first to 45nm.
C) Therefore, AMD will see performance and leadership gains.

Seriously, I think there is truth in the first comment. Prescott proved that being first to 90nm and 65nm gave neither performance nor efficiency leadership (despite clear process leadership). It took C2D on 65nm to do that. Intel proved you could throw away the benefits of a good process with a poor architecture.

AMD has yet to prove their second comment. I'm doubtful it will buy them much, but I'm willing to wait and see.

Anonymous said...

Intheknow,

If AMD had hit a home run right out of the gate with K10 on 65nm, you can bet they would be pumping up point A quite a bit.

AMD is in a bit of a quandary now because in order for them to lead, and I mean lead Intel, they will need to move to 45nm in a hurry; it does not seem that the 65nm process and the K10 design marry well.

At the end of the day, it may very well matter to AMD that they be on the same technology node as Intel, given the problems of K10 on 65nm.

Anonymous said...

"AMD's has yet to prove their second comment. I'm doubtful it will buy them much, but I'm willing to wait and see."

Yes, they have yet to prove it, but come on, they will be comparing it to their 65nm process and using K10 as the likely benchmark, so the bar is pretty low here! Note they also didn't say greater than Intel... they just said 'greater' and as usual didn't quantify it (so if it turns out 1% better they can say, see, we were right!). I have little doubt they will be able to beat 2.3GHz @ 89 watts on their 45nm process.

Look I'm willing to give the AMD PR folks a break - they have a job to do and families to support and really have nothing to work with. I admire their creativity and word-smithing as most of the claims are technically true, however misleading they might be.

It is really much more the fault of the press and readers for not asking the right questions, not clarifying these claims, and allowing themselves to be continuously fooled. It is also the press' fault for not holding AMD accountable for PAST claims (like the 40% better one) and for not reminding readers of AMD's historical track record when they willingly publish new claims/spin.

As the saying goes... fool me once shame on you, fool me twice and you can work for AMD's PR department.

Roborat, Ph.D said...

...I see no inconsistency here.

the point is, one cannot make an argument from both sides. there is an element of dishonesty when someone makes a statement dismissing the advantage of transitioning ahead of everyone, then turns around and claims energy efficiency gains when they do so themselves.

a new process brings other advantages like cost, and AMD's first statement, focusing only on energy efficiency and passing it off as OK to be behind, doesn't really do them any good besides showing the world that it's all about sour grapes.

Anonymous said...

The AMD problem is that AMD's needs and IBM's needs are not the same.

IBM needs to shrink Cell and other CPUs without needing to increase IPC, change/improve the design, or raise clocks and yields.

AMD needs to shrink K8/K10 while increasing IPC, improving the design, and raising clocks and yields.

SPARKS said...

“It’s déjà vu all over again” - Yogi Berra-ism

Doc’s new post concerning Task Manager PR and R&D (Razzle Dazzle).

The ‘new and improved’ B3 stepping that will not scale anywhere near 3 GHz.

Then there is this from Hexus:

(One of the few tech sites that will call a dog a dog. By the way, take a close look at the die shots on the wafer. They’re so big you can count ‘em!)


“Native quad-core support will be augmented by tri-core models that will be released in present 65nm flavours real soon. Tri-core will be achieved by either deliberately switching off a perfectly-functioning single core or, as AMD hopes, bringing to market silicon that didn't quite make the quad-core grade first time.”


http://www.hexus.net/content/item.php?item=12201


Ah, let’s get a handle on this, shall we?

1. Whatever they gained in the die shrink, they gave back with cache. Obviously, they needed a performance improvement in lieu of higher DPW.

2. Yields, apparently, have not gotten better, as they are still going with Tri-Core. They didn’t make the grade before the transition and the B3 stepping, and they still can’t make the grade now. Doh!

3. Does size matter? In the chip business it does. (There is a little INTC spin peppered in the article.)

“Wishful thinking on their part.” I’m no process GURU by a light-year stretch, but I am looking at a ~10% yield loss at the peripheral edges of this 300mm high-tech brick (see the sketch after this list).

http://www.tgdaily.com/content/view/31974/135/



Then, there is this article:

http://gizmodo.com/363593/amd-finally-shows-off-its-45nm-processors

With the size of those things they should hide it.


4. Pricing? If there was ever a watermark to gauge a performance/price ratio, please, someone explain it to me like I’m a 3-year-old. Where do these things fit in to make any money, in view of the new production costs associated with the failed 65nm-to-45nm transition, with INTC chips as a price/performance benchmark???

5. Does the 2nd half of 2008 mean December? Didn’t we go through this same “timeline” metric last year?
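Here's the sketch promised in point 3. The textbook die-per-wafer approximation gives a rough check on that edge-loss eyeball; this is a minimal Python sketch, and the 250mm² die size is a hypothetical stand-in for a large quad-core die, not a figure from either article:

```python
import math

def die_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Classic gross die-per-wafer approximation: wafer area over die
    area, minus a term approximating partial dice lost at the round edge."""
    d, s = wafer_diameter_mm, die_area_mm2
    return math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s)

wafer = 300.0   # mm, standard wafer diameter
die = 250.0     # mm^2, hypothetical large quad-core die (assumption)

gross = math.pi * (wafer / 2) ** 2 / die   # ignores the edge entirely
usable = die_per_wafer(wafer, die)

print(f"gross (no edge correction): {gross:.0f} dice")
print(f"edge-corrected: {usable:.0f} dice")
print(f"fraction lost at the edge: {(gross - usable) / gross:.1%}")
```

On those assumptions, partial dice at the edge eat roughly 15% of the gross count, so a ~10% eyeball for a die this size isn't crazy.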


Let’s cut to the chase. This all, again, is a heaping, steaming pile of shit. The way I see it, “The Scrappy Little Company” has been reduced to a scheming bunch of charlatans gasping for air in a tidal wave of monumental failures.

The bottom line is that we, or rather the brilliant guys on this site, predicted these failures last year (during the same month), and by all accounts are doing so again.

Therefore, “It’s déjà vu all over again” - Yogi Berra-ism

SPARKS

SPARKS said...

If you have ANY doubts about the accuracy of the predictions I mentioned in my previous post, here’s what you do.

Go to March 9, 2007 on this site and read DOC’s post. Here’s a sample if you’re too lazy.


“Missed earnings, alarmingly low margins, poor product offerings, slow execution, huge debt, customer inventory, expensive and inefficient Fabs and poor yields. The future doesn’t look bright either. Any chance of AMD at least increasing ASPs with the release of K10s is now completely gone with the aggressive pull in of Intel’s Penryn 45nm family.”


It is entitled, “Everything that could go wrong has gone wrong, for AMD.”

Brilliant, simply brilliant.


SPARKS

Anonymous said...

Random thought:

At some point soon, is Intel going to have to abandon the unified design (a similar core among desktop, server, and mobile parts)?

Part of what made Core so successful was that it started in the mobile arena and its advantages carried through to the two other segments. It also allowed Intel to turn rather quickly (relatively speaking), given that they are such a large company.

Nehalem is starting to look more and more like a server-centric chip (IMC, hyperthreading, native quad)... these advantages, while significant over the previous gen, may not carry over significantly to the mobile sector (and to some extent the desktop world as well).

I wonder if Intel is risking a return to the faster, more power-hungry days, and away from the efficiency philosophy of the Pentium M and Core. At some point the architectures may need to split again.


I think AMD's strategy has been server-first with a waterfall down, but I think that has been a largely unsuccessful approach. While folks may argue "look at the K7/K8 success", that was more a success of the core architecture's IPC improvements than of the IMC, HT, et al. Clearly this approach has had limited success in the mobile world, enough that AMD has gone out of its way to tweak the core specifically for mobile in the future.

I think Intel's Core approach of mobile was/is fundamentally sound, but I wonder if Nehalem is starting to drift away from this strategy.

Anonymous said...

Addressing anonymous's 'Random thought'

There's a set of presentations and webcasts here that show Nehalem being modular.

http://intel_im.edgesuite.net/2008/index.htm

If we are to believe Intel their designs are effectively lego blocks that can be mixed and matched.

We'll see.

Anonymous said...

To the Random thought poster.

I completely agree. I think we will see a Nehalem with very low power draw at idle and at moderate loads, but with a huge power draw when fully loaded.

Nehalem might be great, but I'm pretty sure that, as with Intel's previous attempts to get rid of the Pentium 3 design, it will not be successful this time either. :)

Ho Ho said...

Shanghai vs Nehalem dies. As many knew long ago, their sizes are not that different. Nehalem has much better cache density and much bigger core logic compared to Shanghai.

S said...

IBM already has Cell on 45nm. If I remember it right, they expect a 40% reduction in power consumption and about 30% reduction in die size.

S said...

Wow! It is surprising to see that Intel's core is about 30% larger than AMD's. I hope Intel is making good use of that extra space, as it seems Intel had to give up some L2 cache to fit in the larger core. I would be keen to know how the core sizes compare historically between AMD and Intel and how that impacted the performance lead.

Ho Ho said...

SMT in P4 was supposed to increase die size by a couple of percent. I assume it is (much) higher than that with Nehalem. Improved floating point and vector units will also increase die size quite a bit. Intel itself has claimed a 10-20% performance increase for single-threaded software. It takes quite an effort to get something like that out of an already quite efficient design.

As for caches, my guess is that the L3 is not uniform across the CPU. I assume the block closest to the core is accessible a bit faster than the ones further away. Assuming 8-10 clocks for L2 access, I'd say around 12-14 clocks for the closest block of L3 and an additional 2-4 clocks for each jump to the blocks further out. That would still be much faster than what is in Barcelona.
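A quick sketch of what those guesses imply for average L3 latency, assuming accesses spread evenly across four blocks (every number below is the assumption stated above, not a measurement):

```python
# Non-uniform L3 guess: ~12-14 cycles to the nearest block,
# +2-4 cycles per extra hop, one block per core on a quad-core die.
nearest_l3 = 13    # cycles, midpoint of the 12-14 guess
hop_penalty = 3    # cycles, midpoint of the 2-4 guess
num_blocks = 4     # one L3 block per core (assumption)

latencies = [nearest_l3 + hop_penalty * hop for hop in range(num_blocks)]
average = sum(latencies) / num_blocks

print("per-block L3 latency (cycles):", latencies)  # [13, 16, 19, 22]
print(f"average with evenly spread accesses: {average:.1f} cycles")
```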

Unknown said...

Things just keep getting worse and worse for AMD!

AMD market share in graphics cards drops to new lows:

After unusually strong second and third quarters, the desktop graphics card market saw exceptional growth in shipments in the fourth quarter of last year, according to a new report by Jon Peddie Research. The market research firm says unit shipments increased by a staggering 50.3% between Q4 2006 and Q4 2007, and a still-impressive 23% sequentially between Q3 and Q4 2007. The overall value of the desktop graphics card market also increased 46.3% year-over-year, but lower average selling prices actually led to a sequential decrease of 0.6%.

Looking at market share, JPR's numbers suggest that AMD actually lost a fair amount of ground to Nvidia in Q4 despite the arrival of its Radeon HD 3800 series. Nvidia reportedly expanded its slice of the desktop graphics card market from 64% in the third quarter to 71%, leaving AMD with less than a third of all shipments. The report JPR sent us doesn't include any speculation on the cause of Nvidia's gains, but we'd be willing to bet the late October launch of the popular GeForce 8800 GT had something to do with it.


http://techreport.com/discussions.x/14311

Anonymous said...

Hoho - thanks for the links. I didn't realize Shanghai was that big; I guess I underestimated the cache size (and overestimated the cell density).

My point on SMT is: is it worth it? Yes, I know it is difficult to get another 10-20% performance on a single core, but I still question whether it is worth it. Remember that this is being done on all cores - so while it may help single-threaded performance, is it enough to justify doing it to all cores?

There was more of a return on single-threaded performance in single- or dual-core products, as you were only beefing up one or two cores (and cache was a larger % of overall die size anyway). As the % of logic within the die is now increasing, not to mention the increase in core count, is a modest single-threaded improvement still worth it?

Anonymous said...

Sparks, check this out: NY Governor in Prostitution Ring.

Your favorite anti-Intel governor is in trouble ...

SPARKS said...

GURU, there ya go. The burning, scalding pot calling the tea kettle black! Ah, the joy of it all. To think that $4300 an hour (not including plane fare) could buy me two QX9775’s, a sweet Intel D5400XS, and two 8800 Ultras! Now that’s my tax dollars at work, baby!

Ah, the liberal democrat crusaders, the unabashed guardians of the people, all with the collective interest of the ‘little people’ in mind. They have no agendas; they are driven by pure altruism.

Case in point: take the Governor of the great State of New York. Here’s a man who condemns INTC for “monopolistic practices”, sheds hundreds of millions of dollars on a failing company’s pipe dream, and sues a competitor for being the better company. This is all in the interest of creating jobs. $1.2 billion of taxpayer money for 1200 jobs. There ya go, democracy in action!

Meanwhile, this bastion of integrity is spending untold THOUSANDS of taxpayer dollars (an hour) on jaunts to Washington D.C. to play with the girls while endeavoring in ‘internal affairs’. Don’t ya just love the sick, twisted irony of it all!

By God, I can’t wait to pay my State income tax each week. At 100 bucks a week, at 52 weeks, this should cover at least a good, professional blow job! And hey, I had to work a hundred hours for it, better yet!


‘If I could ------- make it there’
‘I’ll make it ----- anywhere’
‘It’s up to you --- New --- York’
‘New --- York’

La la --- lati da
La la --- lati da
La la --- lati da

SPARKS

Anonymous said...

Sparks - the previous comment wasn't me, but I do find the story ironic.

I think Cuomo (not Spitzer) is the one who brought the investigation against INTC. This is the guy who now holds the post Spitzer had before he became governor, largely on the strength of his attacks on Wall St and his bringing down a couple of prostitution rings...

You just can't make this stuff up - you now have Cuomo taking on the 'evil' big companies (anyone think he has designs on a higher office?) and oh by the way he is also investigating Spitzer for allegedly inappropriate use of the New York State police to follow and dig up dirt on one of his Republican rivals in the NY state senate. Gotta love NY politicians - they will eat their own if needed!

The only reason he hasn't resigned yet is that he is clearly using resignation as a chip to avoid/lessen the charges that could be brought against him.

And your money is not just going to 'services', but apparently also to the TRANSPORTATION of NY 'professionals'; while in D.C., Spitzer deemed the local 'professionals' not good enough. So glad I am no longer a resident of that state funding this crap.

SPARKS said...

“I think Cuomo (not Spitzer) is the one who brought the investigation against INTC.”

Agreed; however, the Governor’s office must be behind this $1.2B boondoggle, as the deal has to be approved (signed and supported) by his office, no doubt. (I wonder how much of AMD’s “incentives” would have trickled down to Kristen and her friends, hmmm?) Certainly, the INTC investigation has all the earmarks of an Attorney General-style ‘probe’, characterized by high-profile, media-grabbing, spotlight-seeking litigation.

This was certainly Spitzer’s, Cuomo’s, and Giuliani’s signature rise to power in our state. With that in mind, looking forward, I suspect this may take some of the wind out of the INTC probe. AMD (with IBM) has some serious friends in New York. But today, I believe they lost a very big one.

AMD’s shares fell 33 cents on the day, and despite the overall market decline, INTC actually rose. Do you think there’s any connection?

It’s like playing the New York State Lottery, “Hey, Ya Never Know”

SPARKS

Anonymous said...

"AMD’s shares fell 33 cents on the day, and despite the overall market decline, INTC actually rose. Do you think there’s any connection?"

The market rarely reacts to crap like this. Though apparently there was a fair amount of applause on the NYSE floor when the news came out - I kid you not!

Clearly Spitzer signed off on Cuomo's investigation of Intel; I believe it started out of the AG's office, though. If I were Intel I would still be concerned - Cuomo now can try to 'bag a big one' and ride that into the Gov's office, especially with no clear incumbent anymore. In a state as liberal as NY, there is much hatred for the evil big companies, yet paradoxically no outrage over handing $1.2 bil to a rather large company in AMD (and I believe NY is running something like $5 bil in debt). It's almost as if when people see a big company in their own state they suddenly realize that perhaps they're not so evil after all, and hey, they tend to employ people and pay taxes so the government can spend more money.

Apparently much of Spitzer's trouble occurred when he was down in D.C. to testify before Congress about bond-related issues. When asked, Spitzer remarked that he was confused and thought it was about BONDAGE issues, and that was why he had Kristen flown down to DC....

I'll be here all week folks, remember to tip your waitresses!

SPARKS said...

"Cuomo now can try to 'bag a big one' and ride that into the Gov's office, especially with no clear incumbent anymore."

Well done! I didn't think of that angle. Christ, Cuomo's campaign people must be drooling at the sight of Spitzer's carcass. Man, Cuomo is going to turn the screws; he needs the coverage.

So that's what the "safe" portion of the FBI transcript was, THE BONDAGE ISSUE!!

This is why you're smarter than me.

SPARKS

Tonus said...

"The market rarely reacts to crap like this. Though apparently there was a fair amount of applause on the NYSE floor when the news came out - I kid you not!"

Spitzer made his mark by being very aggressive in prosecuting Wall Street firms, so it's not entirely surprising that they would react this way. Not entirely mature, but some of them may have felt that karma had reached over and given him a good, hard slap.

Anonymous said...

"AMD's 45nm processors have already entered EVT testing, according to sources at motherboard makers, who added they expect to receive samples by August or September."

http://www.digitimes.com/news/a20080311PD210.html

Samples Aug/Sept ==> Shipping production after that ==> actual product available for purchase after that.

Can someone explain to me how AMD has closed the schedule gap on 45nm again? This DVT/EVT garbage is just another nice way of throwing up a bunch of smoke and meaningless technical jargon without committing to an actual product availability schedule.

If the sampling is in Q3, perhaps they can quit with the H2'08 spin and call it Q4'08?

SPARKS said...

“Wow ! It is surprising to see that Intel core is about 30% larger than AMD's.”

Now you know two things. One, they are server chips. Two, INTC must be very confident about their yields.

Server CPUs, although larger in size, command a higher ASP than their lower-margin desktop cousins. As INTC’s 45nm process steadily improves over time, yields/performance will increase as well. I suspect they are pretty good now for INTC to put something that big on a 300mm wafer. Things will only get better.

Someone here once said that it costs about $60 to produce a desktop chip, no matter what the thing clocks at. Larger server chips’ margins are much higher, and they command higher prices due to size, speed, and tougher validation standards/parameters. JJ, I.T.K., and GURU call it binning.

In any case, these chips (Nehalem) on steroids (IMC) will not be cheap. Count on it.

Enter the Great 2008 server assault. This is Gospel.


“some of them may have felt that karma had reached over and given him a good, hard slap.”

Sounds kinky, if you’re into that sort of thing; her name was Kristen, not karma! ;)

SPARKS

SPARKS said...

Nvidia, thy day of reckoning, approach-ith.

Vengeance is mine say-ith, The Intel!


BOOHOOHAHAHAHAHA!

http://www.fudzilla.com/index.php?option=com_content&task=view&id=6208&Itemid=1

SPARKS

InTheKnow said...

Can someone explain to me how AMD has closed the schedule gap on 45nm again?

The smoke screen blows away in the breeze when you look at when 32nm is due. According to AMD...
Silcott said AMD expects to hit 32nm by 2010 or 2011.

Intel is scheduled to introduce 32nm at the end of 2009.

I read the 2010-2011 to mean that we are looking at the end of 2010 in the best case. These timelines would put AMD 12-24 months behind Intel again.

I'm sure there are those more optimistic than I am that will read 2010 and say that would be early in the year and AMD will only be 3-6 months behind, but I don't see it that way.

And please don't assume that I think AMD's 32nm will be as good as Intel's. AMD will be introducing their 1st generation high-k metal gate vs Intel's 2nd generation. Plus whatever else Intel has up their sleeve.

Anonymous said...

"AMD will be introducing their 1st generation high-k metal gate vs Intel's 2nd generation. Plus whatever else Intel has up their sleeve."

For Intel 32nm, I don't see much beyond gate scaling... sure, they will be using immersion litho at that point, but let's face it, that just allows you to print features; it doesn't yield performance. They were previously quiet about tri-gate being an option on 32nm (that would yield a significant performance gain), but I would guess that gets pushed to 22nm. There's probably also a chance of some SRAM changes, but that's probably also unlikely (things like using a cell that is 1 transistor and 1 capacitor, or some other variant, instead of the typical 6-transistor cell). For those unfamiliar, the 1T-1C cell is much smaller (I think on the order of ~4X) and would enable either more cache or a smaller die. From a performance perspective it's not any better as far as I know (in fact I believe it may be a little slower, but that is probably not an issue for L2 or L3 cache applications - I would guess L1 would remain a 6T cell, as its size is relatively small and speed is more of a concern there).
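To put rough numbers on the 6T vs 1T-1C trade-off just described, here is a minimal Python sketch; the 6T cell area is a hypothetical placeholder rather than a published figure, and the ~4X ratio is the estimate from the paragraph above:

```python
# Compare raw bit-cell array area for a 6T SRAM cache vs a hypothetical
# 1T-1C cache with ~4x smaller cells. Ignores tags, ECC, sense amps, routing.
BITS_PER_MB = 8 * 2**20     # bits in one megabyte of cache

cell_area_6t = 0.35         # um^2 per 6T cell (hypothetical placeholder)
ratio_1t1c = 4.0            # 1T-1C cell assumed ~4x smaller (per the comment)

def array_area_mm2(cache_mb, cell_area_um2):
    return cache_mb * BITS_PER_MB * cell_area_um2 / 1e6  # um^2 -> mm^2

for mb in (8, 16):
    a6 = array_area_mm2(mb, cell_area_6t)
    a1 = array_area_mm2(mb, cell_area_6t / ratio_1t1c)
    print(f"{mb}MB cache: 6T ~{a6:.0f} mm^2 vs 1T-1C ~{a1:.0f} mm^2")
```

Even on these crude assumptions you can see the appeal for big L3 arrays: the 1T-1C bit cells for 16MB fit in roughly the footprint that 6T needs for 4MB.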

As for AMD, the 2010-2011 timeframe is the high-k error bar! When you put a 2-year range on the timeline for a 2-year process, that is, to put it politely, lame... and it smacks of a complete lack of confidence in either the state of the process development or the finances required to pull it off.

These AMD schedules are just getting ridiculous - we are now nearing the end of Q1'08 and the best they can say about the launch of 45nm is "H2'08"? Why not just own up to Q4? If it were really Q3'08, there would be a bit more info on speeds, part #'s and power. Late Q3 with some sort of weaselly 'shipping to OEMs' claim is the best case.

I take the 32nm 'schedule' as yet another confirmation that the talk of high-k possibly going into the 2nd part of 45nm is garbage. Put it this way - if they were about to implement high-k on 45nm, there would be a fairly easy evolution to 32nm, the timeline would not be so vague, and the error bar would not be on the potential-slippage side.

45nm looks like maintenance of the status quo in terms of schedule, coupled with AMD falling further behind on performance. I think AMD can live with a schedule lag of a year, but they can't afford to fall behind on process performance, as they no longer have a superior design to compensate for it. If I look into my crystal ball, AMD should close the gap performance-wise on 32nm (close does not mean eliminate though!), with the schedule gap perhaps widening a bit in Intel's favor. The other wild card is how long AMD stays with SOI; if they switch back to bare Si, that may introduce some risk into the 32nm (or more likely 22nm) schedule.

Anonymous said...

Dementia is at it again. It's unfortunate that he lacks a fundamental understanding of things and simply starts fitting #'s to explain them....

"My guess is that similar amnesia will develop..."

Do we really need to go through the amnesia and roll out all of the predictions Scientia has made? He pulls out a couple of outrageous predictions from this site and naturally attributes them to EVERYONE on the site. I guess that makes him feel better in his own mind.

"However, with 60% more transistors devoted to core logic I don't believe that all of this could be offset."

This shows Scientia's naivete with regard to design and process interaction - the process runs dual-Vt, so what is critical in terms of power draw is the # of low-Vt (high-speed) transistors... not every transistor in the core is a low-Vt or high-speed transistor, and typically cache (where Intel has more overall transistors) is high-Vt, which should help, as it means Intel will likely have a greater % of transistors at high Vt (lower power). I'll discuss his ABSURD transistor count ratio below! It's just yet another example of him not really having the background and trying to fit the #'s (in this case blindly assuming raw transistor count in the logic area determines power) to his preconceived notions.

There is so much more to relative power - the sleep states (and the # of transistors that go into the various sleep states), the relative ratio of high-Vt to low-Vt transistors, I hear the CLOCKSPEED of the chip might make a bit of a difference (AMD will have an advantage here if they keep underdelivering on clockspeed!), the process (which, in fairness, he does mention), and the speed chosen for the idle state... are just a few of the variables!

But hey, let's just look at the transistor count in the cores and use that - those other variables are probably not all that significant. I can start dazzling people with #'s which represent only 1 of many variables, but make it sound real authoritative!

I suspect Nehalem power will be near Shanghai's for similar clockspeeds, but for reasons far different than a shallow look at the logic core transistor count.
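To make the dual-Vt point concrete, here is a toy leakage model in Python. Every device number in it is a generic assumption (textbook-style subthreshold swing and Vt values), not a parameter of either company's actual process:

```python
# Subthreshold leakage falls off roughly exponentially with threshold
# voltage: I_off ~ 10^(-Vt / S), where S is the swing in mV/decade.
S = 100.0                        # mV/decade (assumption)
vt_low, vt_high = 300.0, 450.0   # mV, low-Vt and high-Vt devices (assumptions)

def rel_leakage(vt_mv):
    return 10 ** (-vt_mv / S)

def chip_leakage(n_transistors, frac_low_vt):
    n_low = n_transistors * frac_low_vt
    return (n_low * rel_leakage(vt_low)
            + (n_transistors - n_low) * rel_leakage(vt_high))

# Chip A: more transistors overall, but a smaller low-Vt fraction.
# Chip B: fewer transistors, larger low-Vt fraction. Fractions are made up.
a = chip_leakage(731e6, frac_low_vt=0.10)
b = chip_leakage(700e6, frac_low_vt=0.15)
print(f"relative static leakage, A/B: {a / b:.2f}")   # ~0.76
```

Under these made-up numbers, the chip with more total transistors leaks about 25% less, which is the whole point: the Vt mix, not the raw count, dominates static power.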

What I love the most is his equating of core size with core transistor count... something to think about, Dementia (as he obviously reads this site):

Intel - 731Mil transistors
AMD - 700Mil transistors

AMD - 8MB total cache
Intel - 9MB total cache

Each SRAM cell uses 6 transistors... Intel has 1MB more SRAM... 1MB of cache is 48 million transistors... now which chip has more transistors dedicated to logic again?

Of course, it is far easier to just say, well, the relative size of the cores should be equal to the relative transistor counts of the cores, no? Well, in blogger world that may make sense, I guess...

I can't believe the idiot just assumes the core size ratio can be used to approximate the transistor count ratio... a typical beginner mistake from someone lacking real-world experience.

The scary thing is that his final conclusion on power might be right (or close), but for entirely different reasons, and he will say "see, I told you so" without realizing some of the egregious mistakes in the assumptions he made.
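Spelling out that arithmetic under the thread's own numbers (the 1MB = 48 million transistors convention counts only the 6T bit cells and ignores tags, ECC and peripheral circuits):

```python
# Subtract the 6T SRAM cell transistors from the published totals to
# estimate how much is left for logic. Counts are the ones quoted above.
totals = {"Nehalem": 731e6, "Shanghai": 700e6}   # total transistors
cache_mb = {"Nehalem": 9, "Shanghai": 8}         # total cache, MB

T_PER_MB = 8e6 * 6   # 48M transistors per MB (the thread's convention)

for chip, total in totals.items():
    cache_t = cache_mb[chip] * T_PER_MB
    print(f"{chip}: ~{cache_t / 1e6:.0f}M in SRAM cells, "
          f"~{(total - cache_t) / 1e6:.0f}M left for everything else")
```

On those assumptions, the chip with more total transistors (Nehalem, ~299M non-cache) actually ends up with fewer transistors outside the SRAM arrays than Shanghai (~316M), the opposite of what a naive area comparison suggests.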

Anonymous said...

Intheknow said ...

"AMD will be introducing their 1st generation high-k metal gate vs Intel's 2nd generation. Plus whatever else Intel has up their sleeve."

From an Intel engineer who posts over at XCPUS.com: Intel's 2nd-gen high-k process is sampling now and shows significant improvement over the 1st gen.

Anonymous said...

Anonymous said ...

"These AMD schedules are just getting ridiculous - we are now nearing the end of Q1'08 and the best they can say about the launch of 45nm is "H2'08"? "

IMHO, AMD's problem is that, being the much smaller company, they think they have to out-engineer Intel in order to compete. Maybe it's corporate hubris, especially given their recent track record, but their engineers are no smarter and a lot fewer in number than Intel's. Deciding to put all their marbles in the monolithic basket, 3 complex decoders instead of 3 simple + 1 complex like Intel uses - it's like they don't really grasp where their process actually is, or where the majority of x86 software actually is at the moment. They need to take a good hard look at their philosophy and maybe just start meekly copying Intel's every move :). It's obvious who has the better grip on reality :).

hyc said...

an anonymous moron said
IMHO, AMD's problem is that, being the much smaller company, they think they have to out-engineer Intel in order to compete. Maybe it's corporate hubris, especially given their recent track record, but their engineers are no smarter and a lot fewer in number than Intel's. Deciding to put all their marbles in the monolithic basket, 3 complex decoders instead of 3 simple + 1 complex like Intel uses - it's like they don't really grasp where their process actually is, or where the majority of x86 software actually is at the moment. They need to take a good hard look at their philosophy and maybe just start meekly copying Intel's every move :). It's obvious who has the better grip on reality :).


That's quite possibly the stupidest thing anyone has posted here in a while, and that includes most of Sparks' comments.

x86-64 wouldn't exist on the market today if AMD had just blindly followed Intel. We'd probably all be still waiting for a 64 bit software ecosystem to develop, and Itanic would be Intel's only 64 bit server processor. The fact is, whether Intel's engineers are collectively smarter or not, Intel's management won't let better products out the door unless their existing mediocre products are threatened by something else.

Anonymous said...

"x86-64 wouldn't exist on the market today if AMD had just blindly followed Intel"

While I don't agree with the previous anonymous poster, exactly what has this X86-64 technology bought the average person? Would 95% of the people who buy computers see any negative effects from not having X86-64?

While it is good that AMD pushed the technology (it will eventually be needed) - let's face facts: on desktop and mobile this was about marketing and distinguishing AMD's chips rather than actual real-life benefit. If someone bought an x86-64 chip back in 2004 (or whenever AMD launched it) thinking about 'future-proofing', what REAL benefit have they seen? It is more likely that chip will be replaced before any REAL benefit comes from the technology. It's funny now that AMD's marketing has changed to 'good enough' as the technology situation has changed.

While I have no doubt 64-bit has some real benefits in select areas (server space comes to mind), it has been one of the most OVERHYPED benefits of the last few years.

Anonymous said...

"Intel's management won't let better products out the door unless their existing mediocre products are threatened by something else"

This must be the millionth time I've heard this RIDICULOUS statement. As Intel has ~80% of the CPU market, how exactly do they grow CPU revenue?

Answer - through growing the overall market (or developing new markets) and driving upgrade cycles. If Intel did not improve its products, there would be a significant slowdown in the upgrade cycle, as businesses (and many consumers) are simply not going to upgrade for marginally better performance.

Competition affects pricing, but this claim that chips wouldn't improve without competition is NONSENSE. Folks are confusing a really poor architectural choice in P4 (and the tactical and strategic blunders of not getting off that train earlier) with a lack of willingness to improve products. Don't confuse bad technical/strategic decisions with complacency.

Intel has led Si process technology development for the last 15-20 years. They led the 300mm wafer transition (think prices, with the same chip performance, would be where they are today on 200mm wafers?). And while AMD will whine about Intel 'copying' architectural features, there are plenty of areas where Intel has led which just don't get the pub.

This whole 'the market needs AMD' line is a bunch of liberal crap. Perhaps it would be best if AMD collapsed and a true competitor could rise in its place? If AMD is weak, the free market should work, weed it out, and allow someone new to compete. So long as AMD is in the market, there is little chance of a 3rd competitor coming in. Just how long are you going to watch the Globetrotters vs the Washington Generals?

While things may change and AMD may turn things around, this baseless argument of keeping AMD around simply to keep Intel honest is getting old. You need a strong company to do this - if AMD is that company, great. If they aren't, I hope the market weeds them out. I'm not going to subsidize a company to do this - if/when they have a better chip for the price I'm willing to pay, I will buy it; until then my money will go to the best product available.

Anonymous said...

A wise man understands his own limitations:

"Since the L2 density is similar we can roughly assume that logic density is about the same. Therefore transistors are about equal to area."

L2 density about the same... hmmm... First off, they are about 5% different (Intel being denser). Granted, this is rather small...

Second, and most importantly, Intel has more L3 cache, which means more transistors dedicated to cache (an extra 48 million if you count total cache, an extra 96 mil if you only look at L3). As the overall difference between the 2 chips is only ~31 mil transistors, how can he continue to spew the BS about Intel having more logic transistors based solely on an AREA comparison?

Intel's L3 cache IS significantly more dense (>20%), meaning they pack the "transistor-rich" areas (>300 mil of the transistors) into a much smaller area than AMD does.

Intel puts >400 mil transistors (cache), or >55% of its total budget, into a small overall % of the die (~40%). AMD puts a lower % of its transistors into cache (~50%) spread over a larger % of the die area.

To a rational person, this would of course mean the non-SRAM areas of the Intel die have a much lower transistor density than AMD's, as the overall transistor counts and overall die areas are close. This would seem surprisingly simple to understand, yet Dementia just wants to correlate die area with transistor count... and completely ignores the L3 impacts (in terms of both area and transistor density differences).

The scary thing is most of his readers are too busy kneeling at the altar of AMD to understand this rather obvious flaw in his 'analysis'.
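For anyone who wants the implied density gap spelled out, here is a minimal sketch using the percentages above. AMD's cache-area share and the common die size are illustrative assumptions, since the comment only says "larger" and "close":

```python
# Implied transistor density outside the SRAM arrays, from the figures above.
die_mm2 = 260.0   # assume both dice are roughly this size (assumption)

chips = {
    #        total transistors, cache share of transistors, cache share of area
    "Intel": (731e6, 0.55, 0.40),
    "AMD":   (700e6, 0.50, 0.45),   # 0.45 is an illustrative guess
}

for name, (total, t_frac, a_frac) in chips.items():
    logic_t = total * (1 - t_frac)
    logic_area = die_mm2 * (1 - a_frac)
    print(f"{name}: ~{logic_t / logic_area / 1e6:.2f}M transistors/mm^2 "
          "outside the SRAM arrays")
```

Even with generous rounding, the non-SRAM region of the AMD die comes out denser, consistent with the argument that die-area ratios say little about logic transistor counts.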

SPARKS said...

“that includes most of Sparks' comments.

x86-64 wouldn't exist on the market today if AMD had just blindly followed Intel.”


Well, thanks pal, let’s examine your brilliant observation, and your keen eye for detail.

AMD’s revolutionary x86-64 was so important that they lost:

FY-2006 (47,000,000)
1Q-2007 (611,000,000)
2Q-2007 (600,000,000)
3Q-2007 (396,000,000)
4Q-2007 (1,772,000,000)
1Q-2008 (they're not in yet, but I'm sure x86-64 will make a huge difference)


God, this is terrific, look at how well x86-64 benefited AMD, and I’ve been so stupid!

Actually, aside from baiting the great people on this site, you are very entertaining. You should post more often. It's nice to get some comic relief once in a while. You know, comic, like a clown.

But, if you’re serious about this x86-64 statement, which I doubt, you would be quite pathetic.


SPARKS

SPARKS said...

“Just how long are you going to watch the Globetrotters vs the Washington Generals?”

God, that was good. I wish I said that!

Here’s an interesting read concerning Nehalem. Part of it says:

“The launch of the new chips by Intel will not only strengthen the company’s positions on the market, but will also steal attention from the launch of 45nm microprocessors by Advanced Micro Devices that are also expected to arrive sometime in Q3 2008.”



I love it when INTC pees on AMD’s parade.


http://www.xbitlabs.com/news/cpu/display/20080312115926_Intel_Plans_to_Speed_Up_Introduction_of_Nehalem_Microprocessors_Slides.html



SPARKS

hyc said...

Sparks - that's the same kind of shortsighted thinking that bankrupts our futures by wasting resources in the present.

Overall, your reply is a total non-sequitur. I never said anything about AMD profiting from x86-64. All I said was that if AMD hadn't introduced it, Intel would have kept dragging its feet until Itanium had a stronger position.

Before x86-64 the number of open source developers writing 64-bit capable code was nearly zero. Sparc64 was (and still is) expensive and slow, PA-RISC was still being phased out, and Itanium is (still) expensive and slow. Compiling, testing, and debugging on any of these architectures is at best an exercise in frustration. But in the meantime, with AMD64, large numbers of developers are extending their code for 64 bit who otherwise wouldn't have bothered.

You talk about there being near-zero impact from the introduction of x86-64 but it's obvious that that statement is false. It's also obvious that it takes time for the software development communities to adapt to new hardware technologies. You act as if the software world should've changed overnight, the same date that the chips first went on sale. Moron.

Anonymous said...

hyc,

Why so bitter? Don't be.

x86-64 didn't pan out as well as AMD and Microsoft would have liked.

Intel was right in that 64-bits wasn't needed on the client.

It's 2008. Where are the 64-bit client applications?

64-bits has been around for a while.

Get a clue, moron.

Unknown said...

AMD's 45nm CPUs are a nonissue. They'll launch a few SKUs in November or December this year. Who cares? We'll be lucky if they compete with Intel's 65nm Clovertown CPUs that launched eighteen months ago! To compete with Nehalem AMD will need Bulldozer which comes in 2010 at the earliest.

Of course, Intel is going to keep the pressure on AMD. Price cuts in April will mean even a 2.6 Phenom (7% slower than the Q6600 according to Xbitlabs) will be sold below $200. Meanwhile, Intel will rake in the cash from the sale of its 45nm MONSTERS.

Roll on March 18th. 9800 GX2 and 790i should launch. Hopefully Intel will push out the QX9770 soon as well, if only to keep Sparks happy!

hyc said...

mike said
x86-64 didn't pan out as well as AMD and Microsoft would have liked.

Intel was right in that 64-bits wasn't needed on the client.

It's 2008. Where are the 64-bit client applications?

64-bits has been around for a while.


You're still missing the point. To say "64 bits isn't needed on the client today, therefore it's unimportant" is shortchanging the future. Because the vast majority of software developers are working on desktop processors, not server or workstation processors, keeping them away from 64 bit architectures discourages them from ever making the jump. It becomes a self-fulfilling prophecy that there will never be 64 bit client apps.

Microsoft's position in all this is interesting too; their purely artificial distinction between Windows Server and Windows (Desktop) was just another artificial barrier to development. If you look at Linux or the BSDs you see no such barrier, and people writing code for those platforms don't bother distinguishing between client platform and server platform.

I personally laughed at the notion of a 64 bit web browser, but now that I use Linux x64 every day, I'm pleased to have my 64 bit build of Mozilla Seamonkey running on my desktop. With 4GB of RAM on my desktop machine, there's no point in running a 32 bit OS. All in all it's a bit silly that we have these stupid 32 bit vs 64 bit ABI issues (e.g. http://www.emulators.com/docs/nx05_vx64.htm ), but given the circumstances I'll keep my environment uniformly 64 bit. For a large population of suckers running Windows, they probably still won't see 64 bit code for a couple years yet. For people running decent OSs, 64 bit client apps have been around for years, and it makes a worthwhile difference.

Anonymous said...

"You're still missing the point."

HYC - you are the one missing the point. When AMD came out with x86-64, there was all this rejoicing at buying future-proof CPUs (which, ironically, will be largely obsolete long before 64-bit makes a significant dent in the mainstream). While AMD made some PR gain out of this, I think it is safe to say the optimism on the rate of 64-bit adoption was a bit misguided. I hear some wise man even started a blog about pervasive 64-bit computing (at least I think that's what the blog CLAIMED to be about).

The bottom line today is that there is little need for the mainstream desktop user to have more than 2GB of RAM (unless using Vista bloatware), there is no reason for any mainstream notebook to use more than 2GB, and the actual benefit of 64-bit software (vs 32-bit) is sketchy. The vast majority of people are still using 32-bit OS's, which are MORE than capable of what probably >90% of home users need today.

You can say SW developers need it to code in 64-bit, but that is also a self-fulfilling prophecy - they need 64-bit CPUs so they can code in 64 bits, but the question remains: do we really need 64-bit SW for the vast majority of applications? ...and what is the significant benefit over 32-bit that would justify the transition? You can say it is needed to access more than 4GB of memory (see the sketch below), and I will ask: when and why do I need that much memory?
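For concreteness, here is the sketch mentioned above: the simple arithmetic behind the 4GB ceiling. The 48-bit virtual address width was the commonly cited figure for x86-64 implementations of that era:

```python
# Addressable memory for a flat pointer of a given width.
def addressable_bytes(bits):
    return 2 ** bits

GiB = 2 ** 30
print(f"32-bit pointer: {addressable_bytes(32) / GiB:.0f} GiB")        # 4 GiB
print(f"48-bit virtual (x86-64 era): {addressable_bytes(48) / GiB:,.0f} GiB")
print(f"64-bit architectural limit: {addressable_bytes(64) / GiB:,.0f} GiB")
```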

You don't change simply to change; there should be a reason for it. Again, there may be some needs in the server space, but in desktop/notebook I have yet to see any significant value proposition in switching to 64-bit. I'm glad to hear you are getting benefit from 64-bit; quite frankly, I would get close to ZERO benefit and am quite happy living in an archaic 32-bit world for the near-term future.

Releasing something 5 years before it comes close to gaining acceptance is not shortchanging the future - it is shortchanging consumers through inefficiency. Why design in something that will add cost, extra time, money, transistors, and power when it won't be needed in the intermediate future, and when the product you are putting it into won't be around to take advantage of that capability when it finally goes mainstream?

Bottom line - when there is significant demand for something, a solution will likely evolve... Should folks start working on developing and implementing an x86-128-bit solution TODAY so we don't 'shortchange the future' on 128-bit? Should the next generation of chips have this capability built in NOW, or, let's arbitrarily say, by 2010, so SW can start working on solutions to utilize it? I think most people would ask why we would need 128-bit SW, in much the same way many folks are asking why they need 64-bit SW today.

I'm no SW person, but I see huge inefficiencies in SW already today - giving developers that much more HW capability and memory would seem to eliminate any focus on streamlining SW. Much like water flowing to fill an empty space, I see SW 'bloating' to fill whatever new HW capability is available. And I don't think that will lead to efficient solutions.

hyc said...

When/why would you need more than 2GB of RAM? Do you ever work with video clips on your machine? An ever-increasing number of plain-joe home users do. From copying and editing their home video DV footage to re-encoding DVDs for portable players, it's become a ubiquitous activity, and to do it with any reasonable speed, i.e., without waiting for huge chunks of data to be paged in and out, you need gobs of RAM.

You call it "changing simply to change" because you don't immediately see a benefit. That's only because you're not looking in the right place. That's ok, but just because it isn't apparent to you doesn't mean it's not real and relevant to many others.

x86-64 wasn't 5 years too early. The doubled number of registers in the programming model *enhances* efficiency. Granted, they could have just tweaked the register set without tacking on 64 bit extensions, but any change to the programming model is disruptive, you want to minimize the number of disruptions and get the most benefit possible out of each one.

In some respects x86-128 already exists, in the form of SSE. I think there's a lot of multimedia software out there that you may be taking for granted, that lives and dies by these features.

Anonymous said...

Hyc said ...

"That's quite possibly the stupidest thing anyone has posted here in a while, and that includes most of Sparks' comments.

x86-64 wouldn't exist on the market today if AMD had just blindly followed Intel. We'd probably all be still waiting for a 64 bit software ecosystem to develop, and Itanic would be Intel's only 64 bit server processor. The fact is, whether Intel's engineers are collectively smarter or not, Intel's management won't let better products out the door unless their existing mediocre products are threatened by something else."

Well, thank you :). However, you failed to rebut a single one of my points - all you did was give some bogus credit to AMD for forcing 64-bit x86 capability on Intel. That's living in the past, dude. What has AMD done for itself lately? It has eroding market share and a mountain of debt to overcome - it can't afford to keep misjudging any aspect of the market or its own abilities.

If the point you're trying to make is that Intel was guilty of the same hubris, trying to force the market to 64-bit Itanium, and with Netburst, I'd agree with that. However Intel learned a lesson, one that AMD hasn't demonstrated lately that it has learned yet.

S said...

I bet the majority of those who bought 64-bit-capable Athlons 3-4 years ago will have upgraded before running a single line of 64-bit code.

AMD says it delivers what customers want. But there are many examples to the contrary - AMD64, Quad FX, Barcelona B1-B2..

S said...

BTW, don't write off AMD. OEMs love to have AMD around, just to keep Intel awake. If it were just you and me buying CPUs, AMD would have been ground to dust by now, given the performance lead Intel has held for so long. AMD has far more support in the industry now than it did 8-10 years ago when it faced a similar situation. Expect AMD to keep its unit MSS as long as it has products that are even barely marketable.

Roborat, Ph.D said...

hyc said: "x86-64 wouldn't exist on the market today if AMD had just blindly followed Intel. "

your own statement admits its own flaws. If AMD didn't insist on extending the life of the god-awful and inefficient x86 code, maybe we'd be moving towards a better one by now. you talk as if x86-64 is a good thing that happened to computing.

Unknown said...

Some ridiculous comments in here.

Are we saying that we shouldn't have made 64bit cpu's till we needed them?

Sounds like a great plan. That wouldn't cause problems for millions of developers worldwide at all!

Fair enough, loads of CPUs never ran 64-bit code. But what if 64-bit had only been introduced this year? Then in 2 years, a software company wanting to switch to 64-bit couldn't, because it would be severely limiting its customer base: there would be no mature 64-bit O/S and half the world would still have these 32-bit-only processors.

Changes have to be made before they are required. When they are required it is often too late. Take a look at the US economy as a quick example. Small subtle changes over 5 years would have been a lot easier than printing money, printing money and printing money when it's too late.

Anonymous said...

"Changes have to be made before they are required."

Fantastic observation - though you fail to address the biggest issue: why exactly is x86-64 'required', and most importantly, WHEN is it required? Was the time when the x86-64 CPU was introduced right, or perhaps a bit early? Did customers buying the early chips gain any advantage from the technology? Do the majority of customers today get much benefit from the technology?

I keep hearing these esoteric arguments... but why exactly (from the performance perspective of the end user) is x86-64 better than 32-bit TODAY? Note also that I did not say 64-bit technology in general, but x86-64 specifically.

These countless arguments that it is eventually needed, so we might as well get a head start on it, make a good emotional argument but a very poor market- and business-related one. The devil is in the details and the detail in this case is TIMING.

No one here said it should be delivered the day it is needed - that is fantastic hyperbole in an effort to bolster your argument. The question is how far in advance it is needed (something you conveniently avoid).

Given where we are today on 64-bit... my opinion is that it was introduced earlier than needed, given the lack of penetration in the general market. You can make whatever excuse you want for this (Microsoft, lack of uptake of 64-bit open source), but the bottom line is that we had, and continue to have, hardware capability that is not well utilized. There is an 'efficiency' cost in this (i.e., could 64-bit have been introduced a generation later and those resources put into some other area of improvement within the chip back then?)

Granted, it is difficult to predict when to bring something to market, but simply bringing it to market first doesn't always make sense. And unless the people commenting on 'so and so copied the other' have worked for both companies, these are IMPOSSIBLE claims to support. It is impossible to know what internal programs existed and what strategic business positions were taken regarding technology, and that applies to both Intel and AMD.

Can I say AMD copied Intel's turbo flash? Multi-bit flash? No, because who knows when AMD started working on these things - it is quite possible both companies were working on them in parallel and one just happened to get to market first, or perhaps one decided for business reasons that it wasn't needed at the time. The same can be said for AMD 'innovations' that Intel 'copied' - I have no idea when Intel started working on x86-64 or whether they decided to postpone it for business reasons. The same can be said of 'native' quad core or the IMC. Unless the folks making these claims have actually worked for these companies, it is just idle speculation.

So Andy - how far in advance was HW needed for SW development? This is important as I would like to get started on 128 bit and planning 256bit introduction. Thanks in advance. Also as a future chipmaker should I also exploit marketing these features even if they can't really be used right away?

Anonymous said...

Anonymous said ...

"The devil is in the details and the detail in this case is TIMING."

Exactly right. That was the point I was trying to make earlier, until hyc put a hiccup in the thread with his rant on AMD's giving us the gift of x86-64.

AMD touted monolithic as the "better solution" when their 65nm process was still half-baked --> BAD TIMING. AMD uses 3 complex decoders when the majority of software still runs much faster with 3 simple decoders plus one complex decoder --> MORE BAD TIMING.

At this stage, AMD has to execute their game plan smartly and efficiently. They no longer have the $$ and goodwill to keep stumbling around like a drunken "dodo" -- sorry, Sharikou but you probably meant "doo-doo" on your blog :) A dodo is an extinct flightless bird, sorta like AMD in about 5 years if they don't start executing smartly.

SPARKS said...

“Hopefully Intel will push out the QX9770 soon as well, if only to keep Sparks happy!”

Well said, Giant, well said! It will be the absolute, undisputed, fastest CPU on the planet.


“All in all it's a bit silly that we have these stupid 32 bit vs 64 bit ABI issues (e.g. http://www.emulators.com/docs/nx05_vx64.htm )”

Now I know you’re serious.

Perfect, emulators! Can any one remember the transition from Win3.1 to Win95? What a F**KING NIGHTMARE! They should have named it BSOD 1st Edition.

The transition from 16 bit to 32 bit was a horror, culminating, catastrophically, with Windows Maelstrom Edition!

First, most of the software, and a lot of the hardware, was rendered obsolete nearly overnight! Many software companies didn’t have the money, time, or inclination to recompile most games, apps, and drivers to 32-bit, which is why 95 ran both, and it was a mess.

That said, yes, when we went to full 32-bit with XP, it ran (and runs) GREAT. Now, if AMD believed that just after we got over the 16-bit transition, fresh into XP Pro, we and the industry were cheerfully going to go to 64-bit without reservation, they had to be out of their collective MINDS.

People, Corporations, etc., are reluctant to go to Vista 32, let alone go to 64. Do you get it, yet?

The theme of this post is Déjà vu. With 20-20 FORESIGHT, the entire industry is resistant to changing direction again. We are not going to go through that misery all over again simply because a very insignificant few feel the need for 64, with no appreciable increase in performance with EXISTING SOFTWARE AND HARDWARE, not when things are going so well with FABULOUS WIN XP PRO 32.

If you want it, go buy the 64-bit editions of XP64, Vista64, Linux, or whatever. Go into your little world, by yourself, and play with your little Sea Monkey.

I’ll go with 64, but this time, it will be with a dedicated machine, when the software and the hardware are right, WHEN THERE IS A NEED FOR IT!


GOT IT?

SPARKS

Anonymous said...

An interesting flashback on 64-bit (Sept '03):

http://www.itworld.com/Comp/2055/030918athlon64/

"AMD acknowledges that building a market for 64-bit computing will require a great deal of education, but thinks the ability to address larger amounts of memory and improve overall application performance will win over customers, starting with the gaming market, Crank said."

Yeah that gaming market 4.5 years later has really been won over with 64bit capabilities!

"Since only 5 percent to 10 percent of servers in the world have 64-bit capability, Intel isn't sure why AMD thinks the desktop market needs this type of technology so quickly, said George Alfs, an Intel spokesman."

"Executives at Intel believe the market for 64-bit computing is several years away, and have said they plan to address the market when it is ready, but have offered no details about how they plan to approach it."

"Many in the industry are skeptical that users need 64-bit performance at this stage, but AMD thinks it can drive a entirely new set of applications by bringing 64-bit technology to the masses."

Now looking at some of these comments in 2003, yes 2003, is it fair to question the timing and expectations around X86-64bit?

SPARKS said...

Oh Giant, look what INTC has on their website. FEAST!

http://www.intel.com/cd/channel/
reseller/asmona/eng/products/
desktop/processor/processors
/core2extreme/feature/index.htm

SPARKS

Orthogonal said...

Sparks, I could have sworn I just saw a QX9770 with your name on it. ;)

InTheKnow said...

Another point to take into account. AMD did not introduce x86-64 because they were altruistic, or looking out for the consumer.

AMD thought they saw a marketing advantage, picked it up and ran with it. It was a purely cold-blooded business decision, nothing more.

So many people seem to think that AMD is their friend. I'm here to tell ya that just ain't true. They are a business, and if they were in the position Intel is in, they would be just as ruthless.

This doesn't make either company good or evil. It is just a simple fact that I think often gets lost in all the hyperbole.

I have an admitted bias towards Intel, because it serves my self interest. They invest money in my community and their employees pay taxes on their wages. Thus, if they succeed it helps me. AMD doesn't do this for me, so I see no reason to support them if the products are even remotely comparable.

Ho Ho said...

"This is important as I would like to get started on 128 bit and planning 256bit introduction"

I wonder what benefits a 128/256-bit architecture would bring us.

Here I assume that the width refers to the size of the general-purpose registers.

hyc said...

So Andy - how far in advance was HW needed for SW development? This is important as I would like to get started on 128-bit and planning the 256-bit introduction. Thanks in advance. Also, as a future chipmaker, should I exploit marketing these features even if they can't really be used right away?

You're talking as if you're only looking at the progress of computing from the hardware perspective. It seems a lot of hardware folks do this. You need to work with the software developers in tandem, so that when the hardware is ready, the software developers are on the same page, and all the clever new hardware features you've implemented are being utilized from day one.

roborat - ya got me there. I personally was happy with the M68K and given my druthers I would not be using x86 technology today at all. But that battle was lost long ago, so here we are.

sparks - since it seems Microsoft is your only frame of reference, I don't think you really have enough perspective to be participating in this conversation.

Some thoughts... while Microsoft was bumbling along in 16 bit land and their 32 bit transition, a good chunk of the world was already happily running in a flat 32 bit address space with real VM and memory protection and none of that transition nonsense. (E.g., aside from the workstation class Sun3, Apollo, NeXT, etc. my home systems were Atari STs and TT030s...)

Another bonus for you, since you seem so satisfied with WindowsXP - you realize of course, that Microsoft wrote and marketed that for the AthlonXP, don't you? So again, you have AMD to thank for the existence of a piece of technology that meets some degree of usability. Never mind that it was still a good 15-20 years behind the state of the art when it was newly released. Again, it's Intel's feet-dragging that has kept mainstream computing in such a perpetual state of stagnation.

The Intel-dominated world is finally realizing that multi-core is here to stay. They're finally beginning to consider what Intel has fought to delay for so long, the thought of heterogeneous processors, a brave new world where the x86, the "traditional" CPU is not the most important chip in the system. Tons of other people knew this long ago; they designed systems for Commodore and Atari with a plethora of heterogeneous coprocessors, all optimized for particular tasks. To put it in home/consumer terms, the best game programmers were already accustomed to these environments because that was the same approach used in arcade machines. PC programmers were shielded from all that "complexity", poor dears, and now they're all whining about how hard it is to adapt to this brave new world. A world that in fact was being created 20+ years ago, but was steamrolled by Intel.

Intheknow: thanks. I read your comment and was overcome by a feeling of reasonableness and rationality. Amid all the other junk being spewed, that was truly refreshing. I personally have nothing invested in either Intel or AMD, but your comments make perfect sense to me.

Ho Ho: kinda fun to brainstorm that huh. I remember coding on machines where "256 bits" was a vector register. It was pretty uncommon back then to even need more dynamic range than 16 bits could express. I also remember how cool it felt the first time I realized I could write a DES routine all in a couple of (64 bit) registers, instead of needing to shift and juggle between multiple 32 bit temp variables. The boost in performance was incredible. I would expect that a native 128 or 256 bit processor could totally obsolete a lot of today's crypto algorithms.
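
To sketch the juggling hyc is describing (this is not real DES - the function names, rotation amount, and key mixing below are invented purely for illustration), compare the same toy mix step on a 64-bit block, first in one 64-bit register, then confined to 32-bit registers:

    #include <stdint.h>

    /* One step on a 64-bit block held in a single register:
       XOR in the key, then rotate the whole block left by 13. */
    uint64_t mix64(uint64_t block, uint64_t key)
    {
        block ^= key;
        return (block << 13) | (block >> 51);
    }

    /* The same step with only 32-bit registers: the block lives in two
       halves, and the rotate must ferry bits across the seam by hand. */
    void mix32(uint32_t *hi, uint32_t *lo, uint32_t khi, uint32_t klo)
    {
        uint32_t h = *hi ^ khi;
        uint32_t l = *lo ^ klo;
        *hi = (h << 13) | (l >> 19);   /* top 13 bits of lo slide into hi */
        *lo = (l << 13) | (h >> 19);   /* top 13 bits of hi wrap into lo */
    }

Every rotate, shift, and compare doubles up like that in 32-bit land; with 64-bit registers the juggling simply vanishes.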

pointer said...

Joke of the day :)

hyc said ...Another bonus for you, since you seem so satisfied with WindowsXP - you realize of course, that Microsoft wrote and marketed that for the AthlonXP, don't you?


Just in case you do not know, AMD has been riding on others for the marketing effort, from AthlonXP on WinXP to Live on Viiv. :)

OK, a few things to talk about on 64-bit computing. Some people easily point the finger at Intel, saying it was dragging down 64-bit development... well, since when has Intel pointed a gun at developers' heads, forcing them not to code for it? It was their free will (ROI driven) not to do so at the early stage of x86-64.

Intel did have its own agenda with its Itanium version of 64-bit computing, and put significant effort into designing the 64-bit Itanium. One can say that Intel's original plan failed and Itanium now serves a niche market, but one can't say that Itanium is bad (similar to the 68k).

Intel made a bad business decision and failed to realize the power of the vast IA apps/tools/developer base, and that's about it. AMD didn't point a gun at developers' heads over not coding for Itanium either. It is free will/the market.

pointer said...

Continuing on 64-bit computing.

When Intel invested in Itanium, it put significant effort into the software stack as well: funding to encourage people to port their code to Itanium, making sure there were OSes for it, etc. Yet Itanium is still not as successful as Intel wanted...

When AMD came out with x86-64... it also tried to handle its software needs... but being a smaller-pocketed, less well-known company at that time, things were harder. But what did Intel have to do with this early x86-64 software development? Why must it help its competitor?

It was only when Intel had significant share in x86-64 that it started to pour in money to boost the effort (also helped by the fact that with Intel's big market share, developers are more willing to code for it due to bigger ROI).

Similarly, we can also observe that Intel is aggressively promoting multithreading due to the current computing trend (which benefits Intel).

Anonymous said...

"Another bonus for you, since you seem so satisfied with WindowsXP - you realize of course, that Microsoft wrote and marketed that for the AthlonXP, don't you?"

Who cares? It works, it satisfies the majority of users - I don't care if it was written for AMD, Intel, Apple or Transmeta! Why exactly does it matter who it was written for? Does it not work across all the x86 options?

You seem keen on defending AMD rather than having an actual discussion and you continue to fail to address the main point of specific timing (since you apparently have no rebuttal?).

No one said x86-64 was NEVER needed; no one said x86-64 HW is only needed at the same time the SW is needed (obviously the HW needs to precede the SW). The question remains: when is/was x86-64 needed for the mainstream, and how far in advance is the HW capability needed to achieve that goal?

The HW was marketed and exploited by AMD in 2003... here we are almost 5 years later and X86-64bit is hardly mainstream - the market has spoken. I'm not saying it won't eventually be mainstream, but AMD vastly overestimated the uptake rate and the real benefits from the technology that could be achieved in the near term.

As for the 128-bit and 256-bit comments - this is hyperbole... these obviously would be better, no? They could address even more memory, no? Of course some would ask, looking at the 64-bit memory limits, is that really an issue for the foreseeable future? (and they would be right). Others might ask what real-life benefit 128- or 256-bit HW/SW would give over 32/64-bit, especially considering today's applications. Now apply those questions to x86-64 back in 2003, or actually much earlier when AMD started to design the chip (and take into account that the release slipped and was due on the market even earlier than 9/03).

You can argue it's better, but is better needed in a given timeframe? This I think is the fundamental disagreement on 64-bit HW/SW - the majority of issues/benefits that x86-64 was addressing were non-existent in 2003, and still remain rather marginal 4-5 years later. At some point I imagine it will start yielding benefit, but do you really need HW 5 years in advance, or was this a little early? Perhaps SW development has also been slow in part due to lack of market demand? Perhaps if the benefits were so overwhelming, the development timing would have sped up, or something like WinXP-64 would have more users?

The fact that you keep dodging the specific question about timing and prefer only to say 64>32 shows that you are only interested in an academic debate, and given your comments on WinXP, your second motive appears to be to put AMD in a positive light. You have failed to consider the market and business aspects of x86-64, and you seem to just want to cover your ears regarding the current state of x86-64 and scream that you're not listening.

So was x86-64 HW really needed in 2003? What amount of time after the HW hits the market is appropriate to let the SW 'catch up'?

SPARKS said...

“sparks - since it seems Microsoft is your only frame of reference, I don't think you really have enough perspective to be participating in this conversation”

And yet, you still address me on the issue with further irrelevant banter:

“Some thoughts...”

And……….

“Another bonus for you…..”

Well, which is it? Seems kind of manic and contradictory to be told I don't “have enough perspective to be participating in this conversation” and then be readdressed on the issue. Don’t you think?

What you fail to recognize, with your narrow perspective, is that you are relatively insignificant to the greater reality of what is obvious, useful, useless, and, in fact, empirical. You are delusional, as you fail to offer anything substantive to qualify your position. All passion, no street smarts.

As another poster wrote:

“You have failed to consider the market and business aspects of x86-64, and you seem to just want to cover your ears regarding the current state of x86-64 and scream that you're not listening”

Your myopic arguments are fundamentally flawed, illogical, and irrelevant to the issue.


SPARKS

SPARKS said...

Sparks, I could have sworn I just saw a QX9770 with your name on it. ;)

Tell me more, Orthogonal. Was it nice? Did it shine? Tell me how they slice the little JEWELS from the wafers. Tell me how they qualify to be the best of the best. Tell me a nice EXTREME EDITION story, please! Do the Inteler’s “inside” get their pick of the litter, the prime cuts, if you will? Do they send them off with a kiss from the company ‘hotties’? Ooooh, I can’t wait.

QX9770
X48
DDR3-1800

Nuclear Control Rod Metal INFUSED!

(Enough to make GURU giddy!)

HOO YA!

Sorry, I just get so silly with a new build of this magnitude.

SPARKS

SPARKS said...

Alright fella’s, for your review.

http://www.fudzilla.com/index.php?
option=com_content&task=view&id=
6306&Itemid=1


How can a 4 core chip with one core disabled use only 6.4 watts less power???? This is all at 2.3 GHz. Don’t wait for a punch line, I simply don’t get it.

GURU, I’m sold, you were correct all along, the process must really SUCK. These things are hemorrhaging power.

SPARKS

Anonymous said...

"How can a 4 core chip with one core disabled use only 6.4 watts less power????"

The whole review, including it being from Fudzilla, seems a bit sketchy. For one thing, they found the tri-core system at idle actually uses more power than a quad core at idle - on the surface this makes no sense at all: with a portion of the chip turned off, the chip consumes more?

But taking the #'s at face value, assuming the rest of the system is all equal, and assuming the measurements are accurate, here goes:

I would speculate this is potentially indicative of a binning strategy - chips that were 'out of spec' from a quad-core perspective could have one core fused, which would bring power back down and thus allow the die to be sold as a tri-core. This would also be consistent with the idle power #'s vs the full load #'s. As you lower the freq in the idle speed-step states, you will not see as much of a power difference between chips. As the cores are operating at lower speeds, their impact as a % of overall power consumption is a lot lower, so even though you have one less core, you don't see as great a difference. All things being equal you would still expect the tri-core to be lower at idle (just less so than at full load), but if you factor in my theory that these may be "high power" (above normal) bins, the one less core is offset by this effect.

However, as you load the chip and run at the rated clocks, the core logic becomes a much larger fraction of the overall power consumption, and then you start to see the impact of 4 cores running vs 3 cores. As power rises steeply with clockspeed (dynamic power scales roughly with V^2 * f, and voltage typically rises with clock), this effect wins out over the 'high bin' effect, and thus fewer cores (which means fewer high-speed transistors) is what matters in this scenario.

This would be similar to the speculation of the AMD binning of the EE chips - where it was thought that AMD would 'skim' the good chips off and simply relabel them as EE and the rest would then be marked normally.

The other potential theory is that there is just significant variability chip to chip, and Fuddie happened to get a 'bad' tri-core and a better quad-core. As this was a demo system set up by AMD, this seems unlikely - why would AMD choose a poor-power tri-core to demo? Because of this, I'm going with my theory that these are different power bins at the same clock (you could think of it as in-spec vs out-of-spec for quad cores).
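
To make the hand-waving concrete, here's a toy model in C - every constant below is invented for illustration, these are NOT AMD specs. Per-core power is a dynamic part (~V^2 * f) plus a leakage part (~V), and the hypothetical tri-core is an out-of-spec die with 1.6x the leakage of a good quad:

    #include <stdio.h>

    static double core_w(double v, double f, double leak)
    {
        return v * v * f      /* dynamic: switches with clock and voltage */
             + leak * v;      /* leakage: burns even at idle clocks */
    }

    int main(void)
    {
        const double good = 1.0, leaky = 1.6;  /* bin leakage multipliers */
        const double vi = 1.0,  fi = 1.0;      /* idle P-state: low V, low f */
        const double vl = 1.25, fl = 2.3;      /* full load: rated V and f */

        /* 4 (or 3) cores plus one extra core's worth standing in for the
           shared uncore (L3/IMC/HT), which leaks along with the rest of
           the die */
        double quad_idle = 4 * core_w(vi, fi, good)  + core_w(vi, fi, good);
        double tri_idle  = 3 * core_w(vi, fi, leaky) + core_w(vi, fi, leaky);
        double quad_load = 4 * core_w(vl, fl, good)  + core_w(vl, fl, good);
        double tri_load  = 3 * core_w(vl, fl, leaky) + core_w(vl, fl, leaky);

        printf("idle: quad %.1f  tri %.1f\n", quad_idle, tri_idle);
        printf("load: quad %.1f  tri %.1f\n", quad_load, tri_load);
        return 0;
    }

With these made-up numbers the tri-core actually idles slightly HIGHER than the quad (10.4 vs 10.0 arbitrary units) but pulls roughly 8% less at full load - the same odd shape as the numbers in the review.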

Please note, there is ABSOLUTELY NOTHING WRONG with doing this binning and it makes great sense from a business perspective. If you have a significant % of quad-core chips that don't meet TDP at a certain clockspeed, you have a few options:

1) you can downgrade the clock (and lower Vcore) to hope that this gets it into the bin

2) you could just scrap the chip (obviously not a desirable option)

3) You can increase the TDP rating (also not a great option as then you either now have 2 TDP's and 2 model #'s for the same clockspeed, or you get no benefit from the low power chips and simply lump both good and bad together)

4) You can fuse a core and sell it as a tri-core.

If the quad core power is bad enough in my hypothetical scenario, simply dropping it a single speed bin may not be enough to get it to fit in the required TDP, so fusing it into a tri-core is simply another option for AMD and gives them some increased flexibility.

It would also be consistent with (but not constitute proof of) my belief that AMD's 65nm process is near a cliff. It would also be consistent with the lack of higher speed bin chips for either tri-core or quad cores.

Just my speculation - please feel free to poke some holes, offer alternative theories and provide constructive criticism.

This applies to all except of course for that yahoo Sparks! :)

Anonymous said...

One additional thing, in case my previous post wasn't long enough!

Tri-core, even if the chips are from the same power bin, should not be 3/4 the power. You still have the L3 cache, the IMC, and the HT links being similar on both chips - so in reality you can't think of a tri-core as 3/4 of a quad from a power perspective.

I'm too lazy to approximate relative transistor counts and do not want to use the boneheaded 'area ratio = transistor ratio = power ratio' assumption that Dementia uses. I would guess a tri-core would be expected to be more in the 85-90% range of quad power (just pulled out of the air, so take it for whatever you think it's worth...)
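
(To put a made-up number on it: if the shared uncore - L3, IMC, HT - were, say, 35% of a quad's total draw, a tri-core from the same bin would come in at 0.35 + 0.75 x 0.65 ≈ 84% of the quad's power, right around that guessed range. The 35% is invented purely for the arithmetic.)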

hyc said...

Hey sparks, glad to see you actually have enough of a vocabulary to use more than duo-syllabic words. I'm impressed.

Anonymous: yes, I was intentionally ignoring your point, because arguing with anonymous posters is inherently pointless.
Aside from that, I concede the point - good is the enemy of better. And good enough for the masses is pretty much all that matters for a consumer business. OK, you've won the issue. Congratulations. You've just argued for and won your right to be mediocre. Enjoy...

Tonus said...

You sound bitter. It's just CPUs. *shrug*

I think that AMD championed x86-64 because otherwise they'd have been in the position of promoting a non-x86-compatible 64-bit platform, versus whichever non-x86-compatible 64-bit platform Intel would promote (EPIC?). AMD would have lost that battle and been left out in the cold once 64-bit went mainstream.

So they did the practical thing-- extended x86 to 64 bits, and put PR pressure on Intel to follow them. It was a survival strategy, and it has worked, IMO. AMD does not have to license a 64-bit platform from anyone else, nor will they have to compete in a losing battle to promote the next non-x86 64-bit platform.

In their position I think it was their only real choice. The earlier that they developed and hyped it, the better for them from a survival perspective. A company in AMD's shoes doesn't have the luxury to wait for 64-bit to become mainstream before they jump in.

SPARKS said...

“It would also be consistent with (but not constitute proof of) my belief that AMD's 65nm process is near a cliff.”

You “speculated” this nearly seven months ago. Your analysis has been consistently accurate to date.

“offer alternative theories and provide constructive criticism.
This applies to all except of course for that yahoo Sparks! :)”

I resemble that remark! Anyway, I wouldn’t dare. I doubt anyone else would, either.

So, from your numbered analysis, what’ve we got?

1) Hamburger Helper

2) Frozen fish sticks

3) Pot pies

4) Canned Spaghetti O’s

It doesn’t sound very appetizing. Whatever they’re serving up, it’ll be hard to swallow, even for the most rabid, starving AMD Fanboi. Well, at least it will be a cheap meal.

Conversely, I’ll take my 3.2 Filet Mignon, ‘black and blue’, cooked over mesquite wood, straight from the beautiful Painted Arizona Desert, thank you. And I don’t mind the price.

“Tri-core, even if the chips are from the same power bin, should not be 3/4 the power. You still have the L3 cache, the IMC, and the HT links being similar on both chips - so in reality you can't think of a tri-core as 3/4 of a quad from a power perspective.”

Ah, aside from the elaborate binning process, this is what I (incorrectly) assumed. I suspect prospective AMD buyers will assume the same. No, but I can think of a tri-core as 3/4 of a Quad from hell.

Thanks and Bon Appétit,

SPARKS

(Orthogonal, I am counting on that reply.)

Orthogonal said...
This comment has been removed by the author.
Orthogonal said...

Sparks said...

Tell me more, Orthogonal. Was it nice? Did it shine? Tell me how they slice the little JEWELS from the wafers. Tell me how they qualify to be the best of the best. Tell me a nice EXTREME EDITION story, please!


The Lady of the lake, her arm clad in the purest shimmering samite, held aloft a QX9770 Extreme Edition. There drew he forth the brand, And o’er him, drawing it, the winter moon, Brightening the skirts of a long cloud, ran forth And sparkled keen with frost against the hilt: For all the haft twinkled with diamond sparks, Myriads of topaz-lights, and jacinth-work Of subtlest jewellery.

Do the Inteler’s “inside” get their pick of the litter, the prime cuts, if you will? Do they send them off with a kiss from the company ‘hotties’? Ooooh, I can’t wait.

Oh, how I wish we got "first picks", although it's unfortunately much less glamorous. There are very few people in certain positions with that luxury. However, all is not lost; we do have a "Chip Loaner Program" where we can get 1 free Eng Sample per year, although it's generally only for product that has already entered the market. We also get a nice discount on retail chips, but are forbidden from reselling them.

InTheKnow said...

hyc said...
Congratulations. You've just argued for and won your right to be mediocre. Enjoy...

Either you missed the point, or you are choosing to misrepresent it.

The question isn't whether 64 bit is good or bad. It is obviously better than 32 bit. The question is how much lead time do you think the hardware needs to give the developers before implementation becomes widespread?

I'm of the opinion that it is possible to introduce things too early. Look at Transmeta's low-power efforts as an example. Now the industry is moving in that direction, but Transmeta is reduced to the role of patent troll. And it's largely because of bad timing.

And you can't really blame Transmeta's demise on Intel alone. AMD didn't exactly jump on the low power bandwagon at that point in time either. It was just the right product at the wrong time and they paid the price for poor timing.

In addition to the timing, there is the question of resource allocation. People look at Intel as having unlimited resources, but that isn't true. They have lots of resources, but they aren't unlimited. So you have to decide what to give up in return for selecting a specific design strategy.

Now if you were to argue that Intel should have chosen 64 bit development over the netburst architecture I'd have a hard time arguing against it. :)

SPARKS said...

What price success?



http://biz.yahoo.com/ap/080314
/advanced_micro_devices_executive
_compensation.html?.v=1


You can’t make shit like this up.

SPARKS

Anonymous said...

sparks, your links are unusable... either make them hyperlinks or stop breaking them by hitting enter.

Try copying and pasting what you post as a link and you'll see what I mean.

Thank you.

Anonymous said...

hyc - many thanks for allowing me to be mediocre. As an anonymous poster, it really, really means a lot to be validated by another anonymous poster who just happens to have a screen name, who thinks that makes him less anonymous.

Thankfully, as apparently someone's name is more important than the content of what they write, you came along and blessed me with mediocrity.

Perhaps now I can dare to dream of making up an arbitrary blogger account, misquoting people's positions, disparaging positions that people did not even take in the first place, and then validating myself by calling someone mediocre.

"Aside from that, I concede the point - good is the enemy of better." Who said this? Is this really what you read? No wonder the argument went on as long as it did, apparently you either lack reading comprehensions skills (which I don't think you do) or you are reading into what is being said, simply so you can argue with it.

Anyways... now I feel like my life can go on and a giant weight has been lifted from me as I have managed to ascend to the ranks of mediocrity!

Anonymous said...

Sparks - I saw the Ruiz pay over on overclockers.com - I would urge you to check out the article on K10.5 schedule.

Granted, Ed is, shall we say, a wee bit biased, but he makes an excellent point about the 10.5 and Nehalem schedules. It is imperative that AMD get the product out, and more importantly that the REVIEWS get out, before Nehalem. While it will make little monetary difference, it will be a potential PR disaster if Intel releases Nehalem benchmarks first (this of course assumes Nehalem will give a performance jump, which has yet to be demonstrated).

Should AMD get the K10.5 benchies out first - they clearly can compare it to Penryn, and if the clocks come up a bit as expected, they can say we are closing the gap and we have a chip that is good enough performance-wise and customers should look at price/performance ratio (of course there are some who may say good is the enemy of better, but let's not go there again...)

If Intel pre-releases Nehalem benchmarks, like they did with Conroe, and these are out there before K10.5, now Nehalem becomes the comparison standard. So instead of AMD showing K10.5 as closing the gap and trying to pawn it off as new technology (k10.5 vs K10) - people will focus on how it holds up against Nehalem.

The problem I see is that K10.5 is really just a shrink - in order for AMD to show this off as an improvement, they will need to get the clock up and do the old change-two-variables trick (speed and architecture) and assign the benefit to both. If it shows up as merely 10% better (which quite frankly is a pretty darn good improvement for a shrink), but with no clock improvements, I think the PR will be tough.

In terms of finances it will come down to ramp rates, and Intel will likely dribble out Nehalem for a while and focus mainly on servers. Let's face it, it is far cheaper for Intel to produce Penryn MCMs than Nehalem, and they will likely milk this fact unless pressured.

AMD may even have some advantage if the 0MB L3 part turns out to be close to the 6MB part (or dare I say good enough?). Of course, I once heard a wise man say good is the enemy of better so under that mindset, AMD should do the 'right thing' and produce the bigger, more costly part even if the benefits are small - after all better is better!?!?

It is important for AMD to get out in front on this one and they should release K10.5 benchies as early as possible. They are implying (or at least trying to get the press to imply) they are far along, so this should be easy. I'm skeptical as this EVT garbage could mean anything from pretty close to production to multiple tapeouts away.

And the rather lame excuse of "we don't want to tip our hand to our competitors" ain't gonna fly this time around. Remember early on in the K10 cycle where there was more than just an AMD fan or two who fell hook, line and sinker for that one!?!

I think the summer will be a race to the benchmarks...

SPARKS said...

Orthogonal,

That was lovely. Your distinctive and well-executed prose has inspired me.

With your blessing, I would like to work with a buddy of mine who airbrushes artwork on motorcycles. It would be an excellent theme sprayed on the case of the new machine.

She will be well endowed, of course, ---------err, the machine, that is, with pastel blues, translucent whites, and sparkling gold leaf.

An all black Lian Li aluminum case should be an excellent foundation for the project.

That was wonderful, simply wonderful.

Thank you,

SPARKS

Anonymous said...

Prices on the tri-cores (courtesy of EETimes):

2.1GHz - $159
2.3GHz - $179
2.4GHz - $200

AMD claims a 30% speedup in 'highly multithreaded applications' over dual core (vs a potential max of 50%)... ehhh... doesn't sound so great (though this may be more SW related?)

So a 2.4GHz part with a 30% speedup would put this at about a 3.1GHz AMD dual core (and this again for MULTI-threaded applications only; I imagine other applications would scale worse as they won't take advantage of 3 cores). Pricing seems a bit high even for the most die-hard AMD fans. It appears tri-core is trapped in a rather small pricing box between AMD dual cores and AMD quad cores.

Of course AMD has helped fix this with the 65nm transition as the highest clock K8 has dropped to 2.8 (I think this exists right?) - this helps make the tri-core seem a little more competitive over dual core.

They just need a couple more tech node transitions like 65nm and that will help them bring the max clock speed down that much more and give them a bigger pricing window for tri-cores! (That would be sarcasm folks)

So can someone explain to me why we need all these products on the desktop? The # of products seems insane...

quad - 1.8, 2.2, 2.3, 2.3 Black edition, 2.4 (soon) and 2.6 (eventually)
tri - 2.1, 2.3, 2.4 (for now)
Dual - who knows but I imagine 4-5 as this is the high volume desktop market

I can't help but think of the Tom Cruise line in A Few Good Men: "oh I keep forgetting you were sick the day they taught law in law school." I think AMD was sick the day they taught business in business school... holy market over-segmentation, Batman!

Unknown said...

QX9770 results are in: http://www.nordichardware.com/Reviews/?page=1&skrivelse=529

4GHz on air? Nice. With water cooling you could get the frequency even higher.

They took pity on AMD and didn't put a Phenom in the tests. I suppose they didn't want to embarrass AMD too much after all.

hyc said...

re: made up screen names - you're not the only person posting anonymously here. That makes it difficult to carry on a conversation because we can't always correctly attribute to you what you said, and even on this page there have been people misattributing your posts.

As for myself, there's nothing made up about my name. You can follow the links from my profile and find anything you want about me. My code is running in every major computer system made in the past decade, and my name and/or initials are embedded in the code and credits of all of them. My code is running on the majority of computers on this planet, from embedded devices to mainframes and supercomputers, as well as a number of them off this planet, in orbit, and in deep space.

The point is, information is useless without a handle by which it can be accessed. You can have a conversation with a pseudonymous poster; it doesn't matter if the name has any relation to reality. All that matters is that they're the same person you were talking to a few posts earlier.

SPARKS said...

“And the rather lame excuse of "we don't want to tip our hand to our competitors" ain't gonna fly this time around”

I couldn’t have said it better myself. As they say, once bitten, twice shy. Last year AMD took the press for a ride; they all had egg on their faces. Last May, Digitimes was the first to call them out on the spinning, then the others slowly followed suit. AMD will not get a free pass this time.

I read the K10.5 release schedule, as per your suggestion. The whole thing sounds like a repeat performance of last year’s timeline. Here’s another cliché for your enjoyment: the more things change, the more they stay the same. It’s French, and my least favorite.

I have a sneaking suspicion that Nehalem is far quicker and further along than anyone could imagine. The silence out of INTC is deafening. THEY are playing this one close to the chest, so as not to tip their hand. INTC will wait for the right moment and dial in the numbers to suit their market position vs AMD. With cool, fast, power-sipping Penryns running the way they are, I believe Nehalem will put an end to this AMD/IMC fluff and nonsense once and for all.

At the risk of sounding redundant, enter 2008, the great server assault.
(Or am I sounding developmentally disabled, stupid, or well versed? Only hyc, “the angry troll”, can say.)

SPARKS

SPARKS said...

“ re: made up screen names - you're not the only person posting anonymously here. That makes it difficult to carry on a conversation because we can't always correctly attribute to you what you said, and even on this page there have been people misattributing your posts.”

“WE can’t always….”!?!? So there is more than one of you? I was right, you are baiting us. Either that, or you missed your daily dose of PAXIL. Perhaps you’re working in concert with others to disrupt this site.

Why do you need to know who it is? Don’t you have the intellectual capacity to follow an argument on a post by post basis?

Everyone here has been fine with the way things are. Why are you so special?

And let me, an unqualified, stupid industry outsider, let you, ‘The God of Computing’, in on a little clue. The guy you labeled “mediocre” is absolutely BRILLIANT! I once thought about calling him “V’ger”, as his knowledge of process engineering “spans this universe and possibly the next”!

I settled on GURU. He is light years ahead of you in intellect. He is kind, generous, patient, and an absolute pleasure to have on this site. He even tolerates me. Obviously, he doesn’t need to ID himself or post a resume, as you did, to validate his authority. Genius is seldom a word I use. However, in this case, since I am making the comparison between you both, you come up short, very short.

His accuracy, consistency, and knowledge on this site have never been breached, compromised, or undermined during my tenure here. Further, HE HAS ALWAYS BEEN CORRECT. You lost your argument because it was fundamentally flawed, and now you cry like a little bitch.

You can call me anything you like, knock yourself out, jackass, but don’t fuck with GURU!

Look, do us all a favor: either post something coherent and relevant, or go hump someone else’s leg, muttface. Don’t go away angry, just go away.

SPARKS

Anonymous said...

Easy there Sparks! While I appreciate the kind words, HYC is free to post anything he wants and it's up to the readers to decide. I'm secure enough in my thoughts and opinions not to be offended.

HYC - as for your anonymous-poster issue, it has become a red herring to cover up your recent debacles in posting here. The issue was not following the argument, as you would like to conveniently misrepresent. The issue was that you chose not to read what people (both anonymous and non-anonymous posters, by the way) were saying and wanted to continue to misrepresent those words and change the argument.

I urge you to go back over the posts and ask yourself - was the issue anonymous posters and following them, or was the issue that you wanted to change the basis of the argument as you were clearly shown to be wrong regarding the TIMING around 64-bit?

I have no issues with posting opposing points of view, but I do not feel it is necessary to post a resume to bludgeon people into listening to me... people can see an anonymous post and, based on the content, choose to believe it or not believe it.

I do not want people to take my word solely because of my screen name, or because I have spent 15 years in process development and manufacturing, or have SW in every major computer system in the world. I also believe some people dismiss arguments (regardless of whether they are right or wrong) simply because of the poster and whether he/she is a pro-Intel guy or a pro-AMD guy. If narrow-minded people choose to ignore an anonymous comment simply because I post anonymously, well, that is fine by me, and in my view their loss, not mine.

I would encourage you to READ people's comments and not READ INTO people's comments. I for one would like to see you continue to post here, though this is not my blog, but apparently Robo has no issues either.

Anonymous said...

"“WE can’t always….”!?!? So there is more than one of you?"

I wouldn't take "we" to mean multiple pseudonyms or some grand conspiracy. "We" can be a very 'interesting' literary device to make one's opinion sound like multiple people's opinion. This is often used to support an argument (though I'm not saying that was HYC's intent here).

Take a look at some of Scientia's posts and you will often see "We can conclude...", "we can see..." - doesn't that sound more impressive and authoritative than 'I can conclude' or 'I can see'? "We" almost makes it sound like it is widely accepted by many (and thus proven?) or that anyone reading the statement should conclude the same thing.

Whenever you see "We", just ask yourself who constitutes the "we", whether "we" should really mean "I" and you'll be able to tell if it is a device or not.

You will also see this device used all over Shari-kook's old site; it was a way he attempted to artificially inflate his view into a more common, widespread one - he just did it so frequently that it was painfully transparent.

SPARKS said...

Woof---grrr---Woof.

SPARKS

hyc said...


Anonymous Anonymous said...

Sparks, check this out: NY Governor in Prostitution Ring.

Your favorite anti-Intel governor is in trouble ...

10 March 2008 21:53
Blogger SPARKS said...

GURU, there ya go. The burning, scalding pot calling the tea kettle black!

SPARKS

10 March 2008 23:34
Anonymous Anonymous said...

Sparks - the previous comment wasn't me, but I do find the story ironic.

11 March 2008 00:04


Sparks, in this case, though I hesitate to acknowledge that you and I have anything in common, "we" is at the very least, you and me.

Anonymous: as I said, which you obviously didn't read, or didn't read enough into - I don't care who you are in real life. All I was saying was that even a made up screen name is better than no screen name at all, because it cuts down on confusion and serves as a point of reference. Not all of your posts stand on their own, some of them need historical context, and that context is obscured when your posts are just mixed in with all the other anonymous ones.

And as for bludgeoning people with my resume - it's also quite obvious that I didn't lead my posts with my resume. I only mentioned it because you implied that I was using a fake/made up name.

There's no red herring here, just plain truth.


Anonymous Anonymous said...

Anonymous said ...

"These AMD schedules are just getting ridiculous - we are now nearing the end of Q1'08 and the best they can say about the launch of 45nm is "H2'08"? "

They need to take a good hard look at their philosophy and maybe just start meekly copying Intel's every move :). It's obvious who has the better grip on reality :).

12 March 2008 13:27
Blogger hyc said...

That's quite possibly the stupidest thing anyone has posted here in a while, and that includes most of Sparks' comments.

x86-64 wouldn't exist on the market today if AMD had just blindly followed Intel. We'd probably all be still waiting for a 64 bit software ecosystem to develop, and Itanic would be Intel's only 64 bit server processor. The fact is, whether Intel's engineers are collectively smarter or not, Intel's management won't let better products out the door unless their existing mediocre products are threatened by something else.

12 March 2008 18:58

Anonymous Anonymous said...

"x86-64 wouldn't exist on the market today if AMD had just blindly followed Intel"

While I don't agree with the previous anonymous poster, exactly what has this X86-64 technology bought the average person? Would 95% of the people who buy computers see any negative effects from not having X86-64?

While it is good that AMD pushed the technology (it will eventually be needed) - let's face the facts: on desktop and mobile this was more about marketing and distinguishing AMD's chips than actual real-life benefit. If someone had bought an x86-64 chip back in 2004 (or whenever AMD launched it) thinking about 'future proofing', what REAL benefit have they seen? It is more likely that chip will be replaced before any REAL benefit comes from the technology. It's funny now that AMD's marketing has changed to 'good enough' as the technology situation has changed.

While I have no doubt 64-bit has some real benefits in select areas (the server space comes to mind) - it has been one of the most OVERHYPED benefits of the last few years.

12 March 2008 20:40
Anonymous Anonymous said...

"Intel's management won't let better products out the door unless their existing mediocre products are threatened by something else"

This must be the millionth time I've heard this RIDICULOUS statement. As Intel has ~80% of the CPU market, how exactly do they grow CPU revenue?

Answer - through growing the overall market (or developing new markets) and driving upgrade cycles. If Intel did not improve their product there would be a significant slowdown in the upgrade cycle as businesses (and many consumers) are simply not going to upgrade for marginally better performance.

Competition affects pricing, but this claim that chips wouldn't improve without competition is NONSENSE. Folks are confusing a really poor architectural choice in P4 (and the tactical and strategic blunders of not getting off that train earlier) with a lack of willingness to improve products. Don't confuse bad technical/strategic decisions with complacency.

Intel has led Si process technology development for the last 15-20 years. They led the 300mm wafer transition (think prices, at the same chip performance, would be where they are today on 200mm wafers?). And while AMD will whine about Intel 'copying' architectural features, there are plenty of areas where Intel has led which just don't get the pub.

This whole 'the market needs AMD' line is a bunch of liberal crap. Perhaps it would be best if AMD collapsed and a true competitor could rise in its place? If AMD is weak, the free market should work, weed it out, and allow someone new to compete. So long as AMD is in the market, there is little chance of a 3rd competitor coming in. Just how long are you going to watch the Globetrotters vs the Washington Generals?

While things may change and AMD may turn things around, this baseless argument of keeping AMD around simply because we need to keep Intel honest is getting old. You need a strong company to do this - if AMD is that company, great. If they aren't, I hope the market weeds them out. I'm not going to subsidize a company to do this - if/when they have a better chip for the price I'm willing to pay, I will buy it - until then my money will go to the best product available.

12 March 2008 21:08


OK, so, who is misrepresenting whom?

My post was in response to the statement "AMD should just be a whipped puppydog and blindly follow where Intel leads." While that might be a recipe for a long lingering death, that is clearly not a recipe for success.

Some other anonymous poster (I have no idea who, so up till now I ignored his post) made the comment that "64 bit is overhyped and 95% of people haven't benefited from it or its future-proofing." That doesn't prove that the technology came too early. That only means that 95% of people aren't getting the full value out of the hardware they paid for, and sure, software is to blame for that underutilization. Systems with 1GB of RAM were already common back in 2004, running WindowsXP. There's a funny thing about Virtual Memory - it only works well when your virtual address space is much larger than your physical address space. 32 bit processing was at death's door already, back in 2004. If you don't understand that, shut up and learn something.

http://blogs.msdn.com/oldnewthing/archive/2004/08/17/215682.aspx
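
Here's a minimal sketch of that squeeze in plain C (the 512MB chunk size and loop count are arbitrary choices for illustration): compiled as a 32-bit binary, the reservations typically fail after roughly 2-3GB no matter how much RAM and swap the box has, because the process has simply run out of virtual addresses; the identical source compiled 64-bit sails right past that point.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        enum { CHUNK = 512 * 1024 * 1024 };  /* 512MB per reservation */
        void *blocks[64];
        int i;

        for (i = 0; i < 64; i++) {
            blocks[i] = malloc(CHUNK);       /* claims virtual address space */
            if (blocks[i] == NULL) {
                printf("allocation %d failed after ~%d MB of address space\n",
                       i + 1, i * 512);
                break;
            }
        }
        while (i-- > 0)                      /* release whatever we reserved */
            free(blocks[i]);
        return 0;
    }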

As for incompetence vs complacency - OK, point taken. Never attribute to malice what can be explained by stupidity, fair enough.

hyc said...

More practical info on why 32-bit isn't enough, and hasn't been enough, for the past few years...

http://www.ittvis.com/services/techtip.asp?ttid=3346

Unknown said...

Yeowch. Sparks, it seems INTC isn't the only one that will play hardball. NVDA is apparently threatening to sue Intel over graphics patents in their IGPs if they don't get a QuickPath license for Nehalem!

http://www.fudzilla.com/index.php?option=com_content&task=view&id=6320&Itemid=35

I don't know about the other people here, but I've stopped caring about AMD outside of graphics. They just don't have what it takes to compete with Intel's tick-tock model. AMD's quad-core CPUs that they hyped to all high hell are slower than Intel's eighteen-month-old Clovertown CPUs. Barcelona is still MIA six months after the launch. It goes on and on.

AMD will be lucky if their 45nm products can match Clovertown, forget Penryn! After all, Shanghai is just Barcelona shrunk to 45nm with a few tweaks and some extra cache.

Nehalem will clean kill AMD, it's that simple.

Also Sparks, for your reading pleasure!
http://www.vr-zone.com/articles/3_Way_SLI_Revived%3A_Asus_Striker_II_Extreme_NVIDIA_790i_Motherboard/5637-1.html

Alas, there are no benchmarks yet.

Anonymous said...

"32 bit processing was at death's door already, back in 2004. If you don't understand that, shut up and learn something."

Funny, folks have said Moore's Law has been at death's door since 180nm process technology... then 130nm... then 90nm... then 65nm... then 45nm... then 32nm, and now they are saying it of 22nm!

32-bit might have a limited lifespan... but it is now 4 years after 'being at death's door', and a major OS supplier apparently felt the need (right or wrong, and for whatever reason) to release their brand new OS in both 32- and 64-bit flavors.

So you can argue whatever you want, but you need to learn something about the market and not just techno-speak... like it or not, the best technology doesn't always win or get implemented right away. Here in the real world (again, right or wrong) there are other effects. You can choose to ignore these and continue with the academic arguments, but most of us live in the real world where, despite 32-bit being at death's door in 2004, 32-bit development and support has magically continued. You can say it is wrong that this has happened... but it has. You can say we would have been better off switching to 64-bit earlier... but it still has happened. (Getting the point yet?)

Is it the right thing? The best thing? Whose fault is it? All great questions, but the bottom line is that 95% of people on desktop and mobile are still using 32-bit OSes 4 years after death came a-knocking, and it will be several more years before this even approaches 50/50.

"Shut up and learn something?"

From whom? From someone so narrow-minded that they completely ignore market effects? From someone who lives in some utopian society where the best technology gets instantly implemented and always succeeds?

Sadly and unfortunately, I suspect that I could learn something from you.

I say unfortunately because your arrogance, unwillingness to listen to people, and desire to misinterpret people's comments in an effort to simply 'win' an argument distract people from your knowledge.

And you want to know the real ironic thing? Now when I see HYC, I'll click on the name, and roll up the comment, and if you were to post anonymously I'd be more likely to judge your comment on the content of the post...

"click"...ahhh much better

Unknown said...

Lots of juicy information on Nehalem, Dunnington and Tukwila:

http://media.corporate-ir.net/media_files/irol/10/101302/Smith2008.pdf

And it all comes straight from Intel!

Anandtech has a nice writeup on all this as well: http://anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3264&p=1

Enjoy!

SPARKS said...

Giant, Fuddie is the only one reporting this. It’s kind of premature to bring this out so early. Besides, these guys sue each other so much you would need a law professor to research their histories going back just a few years. I Googled Nvidia and Intel lawsuits; it’s crazy and it left me spinning-------Goddamned Lawyers.

"The first thing we do, let's kill all the lawyers" (Henry VI, Act IV, Scene II)

Anyway, Fuddie says, “if they won't (give them QPI), then Nvidia is considering suing Intel over graphics patent infringements that they claim Intel has done.”

It sounds kind of bitchy and petty to me, as if they were waiting for some unforeseen opportunity to pull something like this off. That’s of course IF they didn’t get their way. However, with all the saber rattling, pissing and posturing on both sides, it’s hard to say what the hell is going on.

I’ll tell you what, though, we all knew this was heating up, and it will definitely come to a head, no doubt. It’s not going to be pretty, but it will be fun to watch.

2007 was the year of the AMD/INTC soap opera. This year it will be the NVDA/INTC daytime drama. INTC is on a serious roll, and Hen Sing Song is one arrogant bastard. Mark my words.

SPARKS

PS. Sorry I'm so protective. You guys are the only ones I can talk computers with and be understood. It’s lonely being a computer geek in a hard hat.

SPARKS said...

GURU, you’ve got to be a frick’n genius. You actually understood HYC’s second-to-last post?!?! I swear on my kids, I couldn’t understand what the hell he was trying to say! Wow, it must take a TON of discipline to decipher that code. Christ, he is so stupid he actually thought I liked the Liberal Democrats in New York, particularly that arrogant son of a bitch, Spitzer. Does this guy comprehend anything he’s reading?

That guy needs a reality check, BIG TIME!

SPARKS

Unknown said...

Sparks, here are a few juicy links for your reading pleasure!

nForce 790i Ultra:

http://techreport.com/articles.x/14350

Geforce 9800 GX2:

http://anandtech.com/video/showdoc.aspx?i=3266

The nForce 790i keeps up great with the X48. The nForce 790i with Quad SLI and dual 9800 GX2s looks to be the definitive gaming platform, unless you want a Skulltrail system, that is.

Already put down my order for an EVGA 790i Ultra board, 4GB Corsair DDR3 1333 memory, a new Corsair 750W PSU and a second 8800 GT for SLI operation. Should make for some crazy gaming performance! :)

Unknown said...

Anyone for some entertainment? Randy Allen is at it again!

http://www.theinquirer.net/gb/inquirer/news/2008/03/18/amd-responds-intel-roadmap

SPARKS said...

Giant, it seems the 9800 X2 is the clear winner for me. You know I will not purchase an Nvidia board; I want X48, and it’s bad enough I’m going with Nvidia for the first time ever.

My 1900XTX CrossFire set has been very good to me. I’m running Crysis @ 1280x1024, medium settings. The only serious frame drop is when I left the ‘ice forest’ towards the end of the game. I may just pump them into the new machine and wait till the drivers mature and all the marketing dust settles before I drop the cash on the 9800 X2. It kills me to give NVDA any money, let alone 600 bananas.

Actually, the 3870 X2 is fairly competitive at lower resolutions, but falls flat on its ass at 1920x1200 and above. With your new “Toy”, those fat resolutions, native, will be imperative. (I want a new toy in the near future, 26-inch or better.)

All said, the 9800 X2 has a clear and consistent edge. I can have my X48, 1600 FSB, 1800 DDR3, and a pseudo-SLI setup. Basically it’s like having your cake and eating it, too. That’s if you are dead set on X48, as I am. Hey, I gotta keep Orthogonal working.

SPARKS

SPARKS said...

Giant

By the way, I see you’re going with 4 gigs of RAM. What’s the deal, besides running Vista bloatware? Would I see any improvement with 4 gigs on XP PRO?

SPARKS

Unknown said...

Sparks,

I use my computer for more than just games, so going with 4GB makes sense. I do rendering in Houdini among other things, and that kind of stuff loves RAM. By going with 2x 2GB now, I've got the option of going to 8GB in the future. BTW, this is on XP x64 edition. With the 32bit version of Windows I would only be able to see 3GB ram, which would be a bit of a waste.

I'll probably end up using the Q6600 in this computer, since programs like Houdini will run much faster on a quad core than on a dual core with a higher frequency.

The E8400 and P5B Deluxe motherboard is still a potent combination, I'll put them in my second machine and my old P5B-E motherboard can go on eBay. :)

There's a first time for everything, I guess. Since AMD bought out Ati, things have gone downhill. I've owned plenty of Ati products in the past - the 9700 Pro was probably the best card I ever owned. I bought it when they were brand new and it lasted two full years, able to run games maxed out. I also had an X1800 XT, which I kept until I purchased my 8800 in late 2006.

Hopefully SLI'd 8800 GTs will be enough to game at 2560x1600 (except in Crysis - probably have to drop to 1920x1200). The board has three PCI-E slots, so I've got plenty of room for upgrades in the future.

SPARKS said...

“BTW, this is on XP x64 edition. With the 32bit version of Windows I would only be able to see 3GB ram, which would be a bit of a waste.”

Despite that other guy’s irrelevant banter, I too would have built a 64-bit machine long ago, but 64-bit apps (that I/we use) are as hard to find as virgins in a Vegas topless joint.

The 64-bit registers are huge, and the amount of RAM you could load into a machine is only limited by physical hardware constraints. Can you imagine a full 16.8 million terabytes (2 to the 64th power bytes) of capacity? Christ, throw the hard drives out and load the whole magilla with everything you’ve got. I believe there are artificial limitations put on the current 64-bit OSes, however, as they are limited for practical considerations.

Imagine 500 Gig of RAM, say at 100 bucks a gig? Hey, at 50 large, it would be something to see. Stupid crazy, no doubt, it could be do-able and it would be something.

That said, what a waste, as I could have learned a great deal from HYC. In fact, I could build a 64-bit machine today with all the hardware I’ve got lying around, just for kicks and giggles. My trusty old 955EE and BADAXE would be perfect for the job. Load the bad boy up with 8 gigs of cheap DDR2 800 and see it fly. Cool!

Then again, it would sit there burning electricity, and the machine would grow progressively obsolete as time passed from lack of use. It makes me wonder what the performance delta would be for identical hardware and software, with the only difference being the code itself.

What a shame in both cases. (sorry for the double pun)

SPARKS

SPARKS said...

Uh oh, this ain't good. They are cannibalizing staff to stay afloat.


http://www.theinquirer.net/gb/inquirer/news/2008/03/19/massive-layoffs-amd

SPARKS

SPARKS said...

Well Giant, here’s my next credit card drop. This is a nice combo.

http://www.hothardware.com/articles/Intel_Core_2_Extreme_QX9770_Performance_Preview/?page=2


SPARKS

SPARKS said...

HotHardware article QX9770 @ 1600 FSB. What latency?


“These gains were largely realized due to the simple increase in core clock speed but also as a result of the increase in system bus speed, in conjunction with a synchronous memory interface speed at 1600MHz. All told, the new Core 2 Extreme QX9770 is the fastest desktop processor we've ever tested to date, bar none.”

It has been worth the wait gentlemen.

HOO YA!!!!

SPARKS

Anonymous said...

"Uh oh, this ain't good. They are cannibalizing staff to stay afloat."

If the INQ stories about a 5% layoff at AMD are true, it is a shame - I have been through a few of these before, and they are rarely equitable, usually terribly inefficient, and generally don't address the problem.

The problem with 5% across the board is that the issues at AMD (or typically at any other company in this position) are not across the board - you lose people in well-performing divisions and you demoralize many of the people who stay. Also, in my experience, upper management is rarely impacted equivalently. If they are impacted, they are often re-shuffled into non-management jobs, which costs someone else that job.

If this is purely a cost cutting measure, it is a mistake, and in my view it is clearly meant for PR (in advance of earnings) and a really poor attempt by Ruiz to show people he has things under control. I think the INQ has this one 'spot on'.

Put it this way: if your CEO is so bad that he makes Jim Cramer's wall of shame (5 worst CEOs), what's it going to take for the board to wake up and remove him? When are investors going to wake up and remove what is seemingly an incompetent board?

The market share at all cost strategy has failed and catastrophically eroded ASP's. This dream that K10 would come along and suddenly win back all that ASP erosion was naive. Obviously poor execution on K10 didn't help, but AMD is in the position they are in today due to K8 ASP's; K10 would barely have dented Q4'07 and Q1'08 even had AMD executed perfectly. Now AMD has shredded their image and become the "cheap" (umm... I mean 'value') alternative.

But hey at least NY has approval to expand the local waste treatment facilities in preparation for the AMD NY fab... what's another 30mil, when you're forking over $1.2Bil?

Hey Sparks - when they told you 1.2Bil, did they say "and change"... I see roads, electrical infrastructure and all sorts of hidden costs and AMD pulling the plug after all this work gets kick started/planned.

SPARKS said...

“ Also in my experience upper management is rarely impacted equivalently.”


Sounds like the good ole boy network within. I work for a union shop; when they reduce the work force (Layoff Reduction), the shops will only keep their best. They're the ones who'll clean the fish and make the company money. Project Managers? You can count their time with an egg timer. Performance is king. I hate it when really hard working people get the shaft and the good ole boys get a slap on the back and stay on. Not on my watch.

“a really poor attempt for Ruiz to show people he has things under control.”

Well, I've been educated; I didn't have a clue. Do you really think this was a PR/Wall Street spin before the earnings call? If so, it gives a new meaning to sacrificial lamb.

The "spot on" was Charlie D. Frankly, I didn't have the balls to tell him 'I told you so'. However, from his article he is clearly blaming upper management. This is a first.

“what's another 30mil”

Ha! I got you for the first time! You're incorrect about those numbers, it's 52mil! (See, I know how to abuse myself) 52 million dollars!?!?! What the hell are they "treating", nuclear waste?!?!?

“I see roads, electrical infrastructure and all sorts of hidden costs and AMD pulling the plug after all this work gets kick started/planned.”

Go ahead, you sadist. You're killing me. We are 4.4B in debt, and we need this?!?! (I'll see your 30m, and raise you 1.2B. Hell of a poker game, eh?) Plus, we've got a bunch of lecherous slime running the state! If they kept their wallets closed, their zippers shut, and got some work done, maybe we'd only be 2B in debt.

You have been correct about the slash and burn, ASP/margin thing for over a year.

I'm sorry if I offend anyone who believes the ATI purchase was a necessary or good move, but I still maintain, and I have said it all along, that it was the biggest corporate blunder in history. Intel proved it when they focused on their "CORE" business, realizing it was fundamental to the success of the company.

Wrecktor Ruinz and his minions were not paying attention.

What a mess.

SPARKS

enumae said...

Intel p0rn

Anonymous said...

"Ha! I got you for the first time! You’re incorrect about those numbers, its 52mil!"

You got me! Though 52mil will also be on the low end.

This deal is looking more and more like an Emperor's Club deal: you pay a significant amount of money up front when things(?) are looking pretty good, but you don't realize that once everything is completely revealed, you'll end up paying a heck of a lot more.

Let's just face it, NY state government has a long history of corruption, and that just encourages more spending and debt. The best you can hope for is gridlock, so no additional damage is done and money starts falling from the sky. The new governor has already come clean on some stuff, including multiple adulterous affairs, one of them with a state employee...who happened to 'work' in the Spitzer administration and...

"Campaign finance records indicate that the Paterson campaign paid the woman $500 for work she did for his State Senate campaign several years ago." (NYTimes)

At least he appears able to negotiate a better rate than Spitzer... maybe there is hope for the NY budget after all! Mark my words - the next governor will be Cuomo!

You'd think these folks could have negotiated a better deal with AMD!

Anonymous said...

If Fudzilla is to be believed, then things are looking not so great for K10 dual cores:
http://www.fudzilla.com/index.php?option=com_content&task=view&id=6401&Itemid=1

"We learned that the upcoming Kuma CPU in 65nm will focus on the energy efficient market, rather than being a fast dual core K10 derived CPU."

For those who don't speak AMD-ese, let me translate: focusing on energy efficient parts = we can't make any fast clocks. Expect to see the typical "customers are demanding energy efficiency" spin to be rolled out (yet again).

"Athlon will remain the fastest dual core CPU from AMD and it looks like it's the only part that can get pushed over the 3GHz limit"

This would be really bad, as it would likely mean a 90nm K8 has better performance than a 65nm K10 (i.e. a newer architecture coupled with a newer process gets beat by an old architecture on a previous gen process). It would however explain why AMD did not launch K10 dual cores first and why it has taken so long to get these out the door. On the glass-half-full side, this will mean that the tri-core is a step up from dual core (on the AMD side).

The ASP strategy on these has got to be painful - can you convince a consumer to pay more for a lower performing part purely based on energy efficiency, or do you simply put K10 at or below the current K8 pricing? Also, you now have a price ceiling set by the tri core parts, unless these dual cores have much faster clocks (which apparently isn't the case).

It'll be interesting to see the clocks and power - these are supposed to launch in Q2, so the silence is a bit peculiar (reminiscent of the K10 pre-launch quad core silence). You'd think with a 'mature' manufacturing process and a 'fixed' architecture, these minor details would be known by now.

So now the question is why even bother with K10 65nm dual cores if these rumors are true? (perhaps 45nm conversion is not as close as expected?). If 45nm was right around the corner, you'd think 65nm would just focus on K10 quads, K8 duals and mobile chips.

AMD needs to just stop the spin, lay the cards out on the table, and stop the absurd verbal jabbing at Intel (they're copying us and simply catching up on architecture technology). They then need to fire Ruiz (or have him decide to leave for personal reasons to make it sound better) and bring in a no-nonsense CEO (from outside of AMD) who will clean out senior management and get people who are focused on execution and prioritize business health over a Napoleonic-complex war with Intel.

Tonus said...

sparks: "Sounds like the good ole boy network within."

The cynic within me is thinking that it's always easier to fire people that you know you won't have to look in the eye. On the other hand, I haven't worked for a 'big' corporation in almost 20 years. If I was being 'downsized', I'd get it straight from the top. I like working for a "small to medium sized business". Especially one that has been recording record profits for three years running!

Sad to hear about the layoffs, though; those always suck. Though I am wondering how Sharikou will spin this; he and some of the more rabid pro-AMD/anti-Intel group harped on Intel's 'workforce reductions' (can you tell I hate some of this jargon?) from the recent past. Will they swallow hard and analyze this reasonably, or fall back on the usual buffoonery?

And if AMD's New York plant gets past the planning stages, we'd probably have some work there as well. Whoa, put the stones down!!!

As for the QX9770 and 64-bit... I am pretty well set for a while with my present setup (Penryn Duo core on one system, quad core on a second, Penryn Duo core on this overpowered laptop), so I can wait for the developments to come. I'm wondering how long it'll be before I'm able to run Photoshop and Illustrator on a 4GHz quad core with 8GB of RAM... whee...

Unknown said...

Apparently AMD is suing behemoth Samsung:

http://www.theinquirer.net/gb/inquirer/news/2008/03/20/samsung-gets-writ-amd-fab-tech

Oops! I think they'll be countersued into oblivion now!

Anonymous said...

AMD laying off...

SPARKS said...

"Athlon will remain the fastest dual core CPU from AMD and it looks like it's the only part that can get pushed over the 3GHz limit"

GURU, again you hit the nail squarely on the head. You “speculated” that Barcelona would hit the wall @ ~ 2.8 to “perhaps” 3 gig.

Somehow, I think you have a mental picture of these layers with their relative thickness and position, in conjunction with types of materials used, to come up with these numbers. Well, you’re not called GURU for nothin’, obviously.

Anal-ists, peh, these are the same guys that missed the Bear Stearns fiasco. Enough said.

Tonus, are you running 64 bit apps? If you are, do you see any difference in performance? Can you make a comparison between two identical machines on a performance basis, one with 32 vs. 64? Is it possible to load the entire 4.7 G Windows OS to ram?

Are there any links?

SPARKS

SPARKS said...

"AMD needs to just stop the spin, lay the cards out on the table and stop the absurd verbal jabbing with Intel"

It ain't gonna happen. After all, with the exception of current Opteron server parts whose days are numbered, they've got nothing left.

SPARKS

SPARKS said...

Did I say Opteron server parts' days were numbered?

Can these numbers be for real?????

Holy cheese!

http://www.theinquirer.net/gb/inquirer/news/2008/03/20/analysis-nehalems-happy

SPARKS said...

Oh Giant, look what I found!

http://www.allstarshop.com/shop/product.asp?ad=fg&pid=20188


Sparks

Anonymous said...

Where are abinstein and baronmatrix hiding these days?

With all the hoopla and circle jerks they were doing around the Barcelona launch timeframe, they seem to have gone into hiding.

Chicken s$hits!

LMAO

Tonus said...

sparks: "Tonus, are you running 64 bit apps?"

Not yet, I'm still on 32-bit Windows, though my 8400 system has 4GB of memory. I've been tempted to take the plunge, but the system is working flawlessly now and I don't want to mess with it until I've got a good block of time to dive in and deal with any glitches first.

pointer said...

Sorry for going off topic for this post... maybe Roborat can turn that into a topic some time :)

Just browsing through AMDzone, I noticed GIANT posted a comment showing the Q6600 has 77% more performance per watt than the Phenom 9900: http://www.amdzone.com/phpbb3/viewtopic.php?f=52&t=135285

Again, the convenient excuse given by the AMD supporters is that the Intel number doesn't include the MC...

Well, two points to raise here:

1) The Intel X38 TDP is 26.5W: http://www.anandtech.com/printarticle.aspx?i=3120
Yes, allow me to conveniently treat this number as the power consumed at full load (actually, when the CPU is fully loaded, chances are that the MCH is not; and TDP doesn't necessarily mean the max power drawn, which is normally smaller). If you add that to the XBIT number at full load, it is still a big gap. If this is not clear enough to the AMD supporters, then look at the second point.

2) Someone posted the Anandtech numbers in the 3rd post; this time it is not CPU power but system power, with retail CPUs. I'd think that system power is what one should look at for the perf-per-watt metric (see the quick sketch below). Yet those AMD supporters totally ignore this conveniently on the desktop side, BUT keep poking at it on the server side when a high number of FB-DIMMs is used in the Intel system :)

Actually, there is nothing wrong with rationally supporting one company... but what intrigues me most about the AMD fanbois is the double standard they impose in most of their comments.
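
To put actual shape on the metric, here's a toy perf-per-watt calculation (Python; every number below is made up purely for illustration, except the 26.5W X38 TDP from the Anandtech link above):

    # toy perf-per-watt comparison -- hypothetical scores and power draws
    def perf_per_watt(score, watts):
        return score / watts

    X38_MCH_TDP = 26.5  # from the Anandtech article above

    intel_cpu_only = perf_per_watt(100.0, 70.0)               # MC lives in the chipset
    intel_plus_mch = perf_per_watt(100.0, 70.0 + X38_MCH_TDP) # charge Intel for the MCH anyway
    amd_cpu_only   = perf_per_watt(85.0, 110.0)               # MC already on the CPU die

    # even after adding the whole MCH to the Intel side, a gap remains:
    print(intel_cpu_only, intel_plus_mch, amd_cpu_only)       # ~1.43  ~1.04  ~0.77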

Unknown said...

At first I was just correcting a few errors over there, e.g. Azmount azryl claiming Intel sold 1.2 million quad core CPUs when the number was in fact ~2.8 million. Then The Ghost tried to convince everyone that Intel has no quad core CPUs because they're an MCM.

Then there was another post quoting Fudzilla's claim that Shanghai could go past 3GHz, when in the same article they also claimed Intel could go past 4GHz.

I laugh every time I see one of them claiming that Shanghai will debut at speeds in excess of 3GHz. The past few die shrinks for AMD have ALWAYS resulted in lower clockspeeds initially, i.e. if Barcelona reaches 2.6GHz I'd expect 2.2 -> 2.4GHz for Shanghai. I wouldn't expect Shanghai to pass 2.8GHz or maybe 3GHz due to severe gate leakage. Without HK/MG, the leakage is just worse at 45nm. The notion that AMD can easily switch to HK/MG before 32nm is absurd. As others pointed out here, it's basically a whole new process. Of course, with the extra cache and improvements, their fastest processor may well be Shanghai at the end of this year, just not in terms of clock speed.

Oh, and another prediction: Consider the current performance lead Intel enjoys with Harpertown and Yorkfield over Barcelona (whenever it's actually available in servers....) and Phenom. With Nehalem, Intel will INCREASE that lead even further against Shanghai and Deneb.

After azmount azryl posted the hilarious thread claiming that Nehalem is "broken" because each thread only has 128kb of L1/L2 cache, I posted a few silly threads of my own. I was banned for a week, which isn't surprising really. I also remember reading about how Intel is copying AMD's cache structure from Barcelona for Nehalem; that's just not true at all. Tulsa was the first x86 processor that gave each core its own L1 and L2, with all cores sharing a larger L3 cache. Nehalem uses that same structure, only of course there is less cache and more cores this time.

Sparks, that looks like a damn fine board to pair a few gigs of uber fast DDR3 memory and the QX9770 with! I've got my EVGA 790i board now, along with the 4GB DDR3 memory and the extra 8800 GT. With the E8400, 790i, 4GB DDR3 and Geforce 8800 GT SLI (one is the EVGA 8800 GT AKIMBO card, while the other is a standard EVGA 8800 GT with the AKIMBO cooling kit) all running at stock speeds, I can play Call of Duty 4 or Command and Conquer 3 at 2560x1600 with everything maxed out to full, with 16X AF and 2x FSAA. This kind of power is simply mind blowing! A pair of 9800 GX2s would be simply astonishing, to say the least. With a single 9800 GX2 you should get results that equal or better mine. Can't wait to start OCing, but that can wait a few days until I'm satisfied that all is running perfectly. I might end up swapping CPUs for the Q6600. I'm torn between having a Q6600 that I can clock at 3 -> 3.3GHz (old B3 stepping) or an E8400 that does 4 -> 4.5GHz. I think you have the right idea Sparks, getting a QX9770 so that you can have 4GHz+ speeds AND quad core!!

Or you could buy a Phenom and overclock from 2.3 to 2.6GHz! But still, according to the AMDzone forum, Phenom offers a "fantastic experience"! Back in 2005, when Intel had the Pentium D, if I had claimed that someone would have a "fantastic experience" with a Pentium D, they would have pointed to benchmarks and said that AMD was faster. Of course, Intel being faster doesn't matter now. Oh, I mean... Intel isn't faster at all. Intel is paying off all the review sites, and the CPU is optimized for benchmarks only!

-GIANT

SPARKS said...

Do you wanna know why they're all hiding?

Do you wanna know why they’re all so quiet?

Do you wanna know why they're hunkered down, eyes and ears covered, in complete FANBOI denial?

HERE'S YOUR ANSWER!!!!!

overclockersclub.com/reviews/intel_qx9770/16.htm


They may say:
It cost too much.
You’re crazy for spending that much money.
See, monopolistic Intel is gouging the market!


Horseshit, I say. With the BILLIONS in development costs, blood, sweat and tears that went into this thing, I'll have it in my machine, unlocked and ready to rock, leaving everything else in its wake, superfluous and irrelevant.


AMD FANBOIS, WARNING: BE AFRAID, VERY AFRAID!

AlBI-FRANKENSTEIN and DEMENTIA, SPIN THIS!

QX9770

SPARKS

Unknown said...

I like this quote from that review:

The Intel QX9770 "IS" the fastest thing going "Today" hands down. That performance does come with a price, but in my eyes its worth it!

Expensive, but the performance it offers is just nothing short of crazy!

SPARKS said...

“I'm torn between having a Q6600 that I can clock at 3 -> 3.3GHz (old B3 stepping) or an E8400 that does 4 -> 4.5GHz. I think you have the right idea Sparks, getting a QX9770 so that you can have 4GHz+ speeds AND quad core!!”

That's it! It's like having your cake and eating it too, on a 1600FSB! I had this on my mind, as you know quite well, months ago. The frame rates that SLI offers at HIGH resolutions will, as you said, be the one sacrifice I'll be making, albeit a minor one. I truly believe that Nvidia wisely offered the 9800 GX2 to lunatics like myself who wanted Intel driven boards with X48 and 1600 (or better) FSB, to counter the 3870 X2. It completely, and unfortunately, eliminates AMD/ATI from the high end position entirely. SLI for all, smart, very smart, and I'm in.

Giant, I read the benchmarks. I do the benchmarks. I read the comments from the 'experts' on the performance differences between dual core and quad core. They say they are basically nonexistent. But bear with me on this one: there is something more 'dynamic' going on with these quads, and it's more than measurable.

The only thing I can equate it to is the difference between a 6 cylinder engine and an 8 cylinder engine with equal horsepower. I don't know if it's 'torque', more metal, or more power distribution. I also don't know how apps are scaled along with the OS to make these things run. I know one thing; these things have a special feel. Almost like smooth acceleration on a power curve. Perhaps it's a subjective thing on my part. It seems like the damned things will not be overloaded! Maybe it's just knowing it's there. I'll say this, though; I'll never go back to a dual, let alone single core, solution again.
Not with this kind of juice.

That said, I believe the Q6600 was the prototype, the test bed, if you will. The QX9770 is the ultra refined, ultra tweaked and massaged race car.
The numbers bear this out. So does the price.

Ask me if I give a good flying F**K about the price. I’m looking at 4 rockets at 350 bucks a piece. I call it a bargain.

SPARKS

Anonymous said...

"I read the ‘experts’ comments on the performance differences between dual core and quad core."

The one thing that is often omitted is the # of minor processes running in the background.

I keep hearing everyone say that if you don't use SW that can take advantage of 4 threads, then there is no benefit from quads - but that's like saying there's no advantage to a dual core if your SW is only single threaded!

The flawed assumption in the quad comment is that there is nothing else running and therefore the other cores aren't needed. However, with MS bloatware, misc office & web tasks, perhaps a video or TV or music playing in the background, you will still get some background gain from a quad if you are, say, using a video encoder that can only use 2 cores, or a game, or whatever...

Granted quads aren't really necessary for everyone, but to say they are only useful for encoding or other tasks that can use 4 threads is just ignorant.

My only concern is that with all of the horsepower Intel is putting out, it gives SW folks little reason to do things efficiently, and crap like Vista comes out. I predict the 64bit self-fulfilling prophecy will soon take place, where Microsoft releases an OS that is so bloated it needs close to 4GB of RAM to run and therefore has to be 64bit. They came close with Vista; they just needed to bloat it out a bit so it needed 3-4GB of RAM, which might have made it impossible to run in 32bit mode (assuming you wanted to run anything beyond the OS).

Perhaps the Yahoo merger and stuffing that inefficiently into the OS will help make this dream (nightmare?) become a reality!

SPARKS said...

"minor processes running in the background"

"The flawed assumption in the quad comment is that there is nothing else running and therefore the other cores aren't needed"

"you will still get some background gain from a quad if you are, say, using a video encoder that can only use 2 cores, or a game, or whatever"

That’s what I’m talk’n about!

I'd really like to know just how they do it. That is, how they split the tasks proportionally, say, in Task Manager. I've run it during a game, in Quake 4 for example. ALL four little green boxes were grinding away quite nicely. The game was upgraded to "multicore" enabled in the first patch. EVERYTHING just ran better. This included the two Raptors (150G RAID 0, the precious darlings), which are not even part of the game. It's as if the entire hard drive subsystem was given more attention as the other core(s) was/were busy with other chores like the game's engine.

I wish there was something I could read on how the OS handles the threads, and how this "feel" could be substantiated with some data. I think the user is left out of the loop on this one so we may have a "more rewarding experience". My ass, they're STEALING my precious clock cycles for pretty, useless glitz.

Perhaps someday they'll give us an OS with a radio box where the user can allocate specific cores to specific applications, as opposed to that bloated fat slob with lipstick, Vista. Bill Gates has always been impressed with Steven Jobs' fancy power robbing OS screens. Now we need OS's with graphic acceleration in glitzy 3D formats, with more TSR's (Background Services) to gum up the works. (This pissing contest has been going on for nearly 20 years.) Give me a button to turn the damned THINGS OFF safely! Further, the only way they'll get XP PRO away from me is when they pry it out of my cold dead fingers.
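
Funny thing is, XP half-has this already: right-click a process in Task Manager and hit 'Set Affinity'. And if you want it scripted, here's a minimal Python sketch of the same idea (Windows-only, and the two-core mask below is just an example value):

    # minimal sketch: pin the current process to cores 0 and 1 on Windows
    import ctypes

    kernel32 = ctypes.windll.kernel32
    handle = kernel32.GetCurrentProcess()  # pseudo-handle for this process
    mask = 0x3                             # one bit per logical CPU: binary 0011 = cores 0 and 1
    kernel32.SetProcessAffinityMask(handle, mask)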

I agree with you wholeheartedly. There is a lot more going on here that reaches beyond raw benchmarks. It's one thing to get good PEAK benchmarks; it's quite another to maintain those numbers consistently throughout the entire game without the dropouts during play that, at best, become irritating.

Incidentally, it goes to show you how stupid NVDA’s new PR spin is on graphics performance being more important than CPU function. That guy’s on CRACK.

In retrospect, a couple of years ago, INTC got a wake up call. Was AMD responsible, perhaps? The excitement is back with this new release, reminiscent of my first 486DX-33 build. INTC answered the call, BIG TIME. Q6600 was an appetizer, QX9770 is the main course, and I’m gonna eat like a pig.

Do you think 1400 bucks is a lot of money? Take 15 or 20 family members out to dinner at a good restaurant, say, for Easter.

Oh yeah, don’t forget the wine.

SPARKS

Chuckula said...

Happy Easter all (in a non-offensive sort of way). In keeping with springtime and renewal, it seems that Scientia has completely changed his view on CPU cache vs. having an IMC.
The new line from our "expert" is that Nehalem's IMC will not actually help memory latency at all! Apparently Penryn's cache and prefetching system has fixed all of Intel's memory problems, such that an IMC and a point to point memory architecture in Nehalem will yield no real improvements!

So let's recap how things operate in Scientia Land:
1. Intel's FSB is hopelessly broken and terrible. Intel only uses cache to fake benchmark scores and its chips perform terribly in the real world.
2. AMD's IMC and hypertransport system are so amazingly superior that they make up for the fact that AMD can't design a cache to save its life, and that the K10 execution engines are incredibly weak.
3. Intel is ripping off AMD by "stealing" its "major discovery" of putting the memory controller on the same piece of silicon as the CPU, and in using a point to point bus. (In all honesty if anybody is stealing anything then BOTH Intel & AMD are stealing from the Alpha but that's another rant).
4. HOWEVER: While an IMC and point to point are miraculously amazing technologies when used by AMD, they will have absolutely no effect on Intel chips IN THE SLIGHTEST. Scientia even seems to do this without the usual cop-out which is that Intel's implementation of a certain piece of technology will (they hope) be completely broken.


Here's my (slightly more rational) take:
1. AMD fanboys are right to a point: the FSB IS a bottleneck... but not as often as they'd like it to be. Frankly, on a single-socket desktop, or even on a dual-socket server with 2 FSBs, it is not a bottleneck that often, with the exception of synthetic memory benchmarks. It DOES become a real bottleneck in 4+ socket systems running memory intensive apps like databases.

2. Conroe/Penryn beat the tar out of anything AMD does because of AMD's own execution problems. Just look at the L3 cache that is being DROPPED from desktop quad cores... half the real estate on the #$@! chip apparently contributes almost nothing to performance!! The very fact that the chips were allowed to ship WITH (broken) L3 cache while AMD bled money should be the subject of a shareholder lawsuit.
If I was writing a research paper on nifty memory interconnects AMD would win. Unfortunately, AMD forgot that the real world is not a white-paper and you have to actually execute on great ideas you got from the Alpha designers.

3. To correct Randy Allen's statements: "Conroe already blows AMD away in every dimension, and Nehalem will blow AMD's future products away in every dimension" Any theoretical advantage that AMD might have had in memory latency or bandwidth is OVER with Nehalem. Scientia realizes this and is already rewriting history such that memory access latency & bandwidth were never really important (yeah right). I'm not sure he's decided on what magical powerpoint bullet will be the thing that makes AMD superior yet, but even he is having a hard time spinning things for Emperor Hector.

SPARKS said...

Chuckula, you should give these links to those morons; this is twenty percent across the board for Nehalem!


theinquirer.net/gb/inquirer/news/2008/03/20/analysis-nehalems-happy

As far as I can see, the good ole FSB has caught up to AMD's vunder weapon! Check out the latency numbers, GONE!!!!

hothardware.com/Articles/Intel_Core_2_Extreme_QX9770_Performance_Preview/?page=3

You probably know this, but INTC had an IMC on chip with Timna back in 2000, with a graphics component! Sure, this was AMD's idea, right.

http://www.cpu-world.com/CPUs/Timna/

Rambus memory politics put the kibosh on this thing.

SPARKS

Tonus said...

Sparks: "Do you think 1400 bucks is a lot of money?"

Yeah, I think it's a lot of money, but it's not really an issue. Both my Q6600 (2.4GHz) and E8400 (3.0GHz) cost ~$270 when I purchased them, and they're excellent performers.

If the only way to get decent performance was to spend $1400 on a CPU, I would consider that a problem. But there is a lot of choice at a lot of price points, and the performance at many price points is quite good.

I won't begrudge the big spenders their QX9770, I've spent big money on computer hardware before. Hey, if you can afford it and it's what you want, why wouldn't you? The days when I'd sit there tweaking hardware and smiling triumphantly at benchmark scores are behind me, but I remember them fondly.

I'm a guy. My toys are silly and expensive, and that's just fine with me!

SPARKS said...

Tonus, don’t get me wrong here. I’m very happy with performance of the Q6600. In fact, if some of you recall, I was fortunate enough to purchase one the same day the price dropped.

I got the GO stepping (SLACR) quite unexpectedly from Mwave for $274. It has been running at 2.93 GHz, 24/7, ever since. I have had the same smile you mentioned in your post, as I have gone through an entire Core2 65nm product cycle on the cheap.

BUT, this QX9770 is a whole new enchilada. It is a refinement of the Core2 product cycle, with Hafnium, tweaked and massaged, shrunk down to 45, blessed with a new instruction set, and a 1600 FSB added for good measure. And, to add a little fuel to the fire, as you are well aware, it comes with an unlocked multiplier.

Factor this: while most 'enthusiasts' will do a number of smaller upgrades over a 12 to 18 month product cycle, I did it on the cheap, $274, plus the cost of a Badaxe2. (I fried the P5DG2-WS.) Core2 was released June 2006. Frankly, if you spread the cost of the QX9770 and an ASUS P5E3 Premium over another 2 years for the life of this new machine, I call it a bargain, even when you factor in the cost of DDR3 1800 AND a 9800 GX2. Some hardware nuts will piss away money in dribs and drabs. I don't.

My machine hasn't been the top dog for a while. Of course this is fleeting, the way things change so rapidly. However, I've been VERY patient, and waited a long time for these components (X48, 1600FSB, and DDR3 1600 native) to come together, not to mention the half assed SLI/9800 GX2 setup. This is the product cycle I've been waiting for. Frankly, INTC hasn't had this much juice come together all at once as far back as I can recall. The iron is hot and I'm gonna slam away, trust me.

It's the most bang for the buck, it's way ahead of the software curve, and it's been a long time since I dropped A LOT of money on hardware. That's why I call it a bargain. In a year or two we will see what Nehalem brings us. In the meantime, quite briefly, I will have one of the fastest machines on the planet, and it's basically future proof. I am a hardware lunatic, no doubt, but I'm not crazy.

Besides the QX9770 is a collector’s item. It will always be remembered as the chip that became the culmination of everything that Barcelona was supposed to be, and the one that brought AMD to its knees.

It is a landmark chip, a milestone in hardware history, the last of the FSB Mohicans, if you will, and I will own it. After all, you can't take it with you, so enjoy it while you can.


SPARKS

Unknown said...

Sparks, for your reading pleasure:

http://www.anandtech.com/weblog/showpost.aspx?i=419

Anandtech got a QX9770 stable at 2GHz FSB with DDR3 running at 2GHz! How's that for raw performance eh? The crazy thing is that sometime in the second half of '08 Intel will once again totally redefine PC performance with Nehalem.

Speaking of Nehalem, this little presentation is quite enlightening:

http://www.intel.com/technology/architecture-silicon/next-gen/demo/demo.htm

Started OCing the E8400 on the 790i board as well, hit 4GHz with 1.3V no problems. I'll leave it there for now, I know this CPU can do 4.5GHz if I ever need it to. :)

Both my Q6600 (2.4GHz) and E8400 (3.0GHz) cost ~$270 when I purchased them, and they're excellent performers.

I felt like crying when I read that Intel was cutting the Q6600 to $266. Why? I had just bought the B3 Q6600 for $535! Still happy with that purchase though, it's an excellent performer. Running the E8400 @ 4GHz on a 790i with dual 8800 GTs and 4GB DDR3, and the Q6600 @ 3GHz on a P5B Deluxe (P965) with an 8800 GTS 640MB and 4GB DDR2.

Tonus said...

sparks: "And, to add a little fuel to the fodder, as you are well aware, it comes with an unlocked multiplier."

That is what intrigues me most about the QX9770. And by intrigues, I mean it gets me reflexively reaching for my wallet! The possibilities for current 45nm parts seem very good right now. And an unlocked multiplier lets you test the limits of the CPU without any worries about the rest of the system.

I'm trying to hold off on upgrades until Nehalem is available. I'm running on GeForce 7800/8800 cards in my systems, so a video upgrade may be next if I find that I can't control the "urge to splurge". Especially with cards that can do HDMI-out now. It's not that I need to see PC video games on a 31" (or soon, 47") screen... but oh man I sure want to!!!

SPARKS said...

Giant! Jesus H! 2 Gig FSB! 48.2 NS Latency!
F**K IMC, FSB is going to go out in style, baby!

In The Know's motherboard trace pattern theories are bearing fruit.

You realize that ASUS board and Nvidiot chipset are looking VERY tempting.
Me buying a NVDA board, sacrilege!
Full blown SLI, you're killing me!
It’s on sale now!
2 ULTRA’S, oh the pain!

Hmmm, we’ll see.

SPARKS

Unknown said...

You realize that ASUS board and Nvidiot chipset are looking VERY tempting.
Me buying a NVDA board, sacrilege!
Full blown SLI, you're killing me!
It’s on sale now!
2 ULTRA’S, oh the pain!

Hmmm, we’ll see.

SPARKS


ASUS makes some top rate boards. The way I see it, with either X48 or 790i you can't go wrong! 2GHz FSB on a quad, with matching DDR3 speed memory. IMC indeed!

Speaking of high end, if you believe FudZilla, we should see the Quad SLI 9800 GX2 results on Tuesday! If they've made that scale well, we should be in for some juicy results!

Either way, a QX9770 and a 9800 GX2 (or two!) should make for an awesome system!

Unknown said...

X48 boards are up and for sale at Newegg:

http://www.newegg.com/Product/Product.aspx?Item=N82E16813131276

http://www.newegg.com/Product/Product.aspx?Item=N82E16813128330

Nice ASUS and Gigabyte boards. There are a few 790i boards as well, plus some X48 DDR2 boards. There are just too many choices!

SPARKS said...

Thanks Giant! I'm glad I checked in. I just ordered the P5E3 Premium! New Egg $389! Yikes!


Let the BUILD BEGIN!

Now for Ram, hmmm!

SPARKS

Unknown said...

$389! Makes me feel a bit better about the $349 I paid for the 790i! That's one hell of a motherboard, and I should think that it will overclock awesomely!

Ram is difficult; there are just so many choices. I've always been partial to Corsair ram myself, but there are tons of other great brands out there. Certainly at least 1333MHz speed, and I would go for 2x 2GB as well.

Are you going to use water cooling or just air cooling for this awesome setup, Sparks? I've been thinking of trying water cooling myself. Maybe when Nehalem comes along I'll give it a go. :)

Sadly, if this is any indication, the QX9770 may not be available for a while yet:

http://www.pcsforeveryone.com/Product/Intel/BX80569QX9770

These guys think the 28th April. :-( You can grab the ram and video card while running your Q6600. Should still run quite nicely!

I had a quick look, but I couldn't find anything from Intel that gave a concrete release date.

Unknown said...

http://hothardware.com/News/Intel_Lauches_New_Lower_Power_Xeons/Default.aspx

SANTA CLARA, Calif., March 25, 2008 – Intel Corporation has further increased its energy-efficient performance lead today with the introduction of two low-voltage 45 nanometer (nm) processors for servers and workstations that run at 50 watts, or just 12.5 watts per core and frequencies as high as 2.50 GigaHertz (GHz). The Quad-Core Intel® Xeon® Processor L5400 Series takes advantage of Intel’s unique 45nm manufacturing capabilities and reinvented transistor formula that combine to boost performance and reduce power consumption in data centers.

For servers this is just incredible performance per watt: 2.5GHz at 50W vs. the 2GHz 95W Barcelona, launched six and a half months ago and still unavailable. Not a hard choice. The Intel 5100 chipset (San Clemente) allows the use of RDDR2 memory, so there's no extra power associated with FB-DIMM memory. George Ou did a nice writeup a while ago showing there isn't much performance penalty at all comparing RDDR2 to FB-DIMM.

I did some more poking around on the QX9770; all I could find was Intel confirming a launch in 2008. There were a few rumors of a March launch as well. Now that the X48 chipset boards are available, they might be planning on launching the QX9770 in the last days of March or in early April.

Unknown said...

Quad SLI results are in at HardOCP and Anandtech. Looks like it's not at all worth the cost of two 9800 GX2s.

Considering the cost of a 9800 GX2, if you have an nForce board two 9600 GTs or two 8800 GTs probably offer the best bang for the buck. On an Intel board 9800 GX2 SLI on a stick would be the way to go. :)

SPARKS said...

“Sadly, if this is any indication, the QX9770 may not be available for a while yet:”

Yes, ---------and no. INTC has listed the QX9770 on its website, which means it will be released shortly. Frankly, I can't blame them, as it will render the QX9650 superfluous. They'll wait a bit longer till the channel lowers its inventory, as they have done during the past few months.

I don't mind the wait. RAM prices, including DDR3, are falling substantially on a week by week basis. Low end DDR3 can be had ultra cheap.


“Are you going to use a water cooling or just air cooling for this awesome setup sparks?”

The short answer,---------Yes!

coolitsystems.com/index.php?option=com_content&task=view&id=65&Itemid=83


It’s called the “Boreas 12 Tec”


“You can grab the ram and video card while running your Q6600. Should still run quite nicely!”

Precisely! By that time a few manufacturers will be overclocking the 9800 GX2 above reference. Some of NVDA’s darlings like XFX are already doing so. My crossfire cards will swap right in, till all the dust settles. Further, we WILL see what the (GO) stepped SLACR Q6600 can do in the interim.

“Certainly at least 1333mhz speed, I would go for 2x 2GB as well.”

Of course. However, my LOW bar will be 1800 MHz. Anything other than MICRON equipped chips need not apply. Besides, what's another hundred bucks or so?

Manufacturers? Memory companies are pissing on each other on a daily basis. With the release of X48 things will only get better. I've had terrific results with Crucial Ballistix (Corsair's biggest nemesis) on the cheap, relatively speaking, naturally. (I have more French fried Mushkins than an SUV load of kids at Mickey D's.)

Basically, as you know, I’ve got a good foundation.
Let’s see where the chips fall from here.

I hope this doesn't bore anyone (important to me on this site, you KNOW who you are), but I'll keep you posted on what this thing can do in the real world, straight out of a retail box, if you're interested. Nuclear control rod metals, inclusive!

HOO YA!

(GURU, close your eyes, I will be overvolting the whole frigg’n enchilada!)


SPARKS

SPARKS said...

“These guys think the 28th April.”

Nah, these guys are full of shit. Tiger Direct has OEM slabs on sale right now. I've never bought an OEM chip yet, and not at 1600 bucks ya don't.

SPARKS

Tonus said...

via Digitimes: "In order to prevent AMD grabbing low-cost desktop PC market share with its Sempron processors, Intel is planning to bring its entry-level desktop Atom 230 processor down to a price below US$29 in thousand-unit tray quantities, according to sources at motherboard makers."

I don't know much about the Atom processor and what devices it's expected to be used in. Ultra-lite notebooks? Phones or handheld devices?

Tonus said...

Gahh, should read my own quoted snippets... "entry-level desktop".

Now I'm curious to learn more about this processor. A googling I shall go...

Ho Ho said...

It should be around the performance level of a low-end Celeron, I assume. Nothing too great, but for the price and manufacturing cost this has to be the best x86 CPU in a long time (ever?)

InTheKnow said...

Tonus said...
I don't know much about the Atom processor and what devices it's expected to be used in. Ultra-lite notebooks? Phones or handheld devices?

The Atom processor is the new brand name for Silverthorne. It has an in-order execution engine and is capable of processing two threads simultaneously (SMT).

Performance is expected to be about equivalent to an early Pentium-M. It will run at speeds up to ~2GHz while consuming around 2 watts.

All the evaluations I've read have indicated that it is not expected to be too impressive due to either being too power hungry or under-powered for the applications that it is expected to go in.

The next generation device (Moorestown) is expected to hit the power and performance targets for the target apps.

Anonymous said...

"All the evaluations I've read have indicated that it is not expected to be too impressive due to either being too power hungry or under-powered for the applications that it is expected to go in."

I think this perception is mainly coming from a smart-phone type application perspective... certainly for a low end notebook, desktop, or set top box, this is not too power hungry. Also keep in mind most ARM-type processors are rated for typical power usage, not TDP - though Atom is still too high for something like cell phones at this point. Also, the idle power is a key issue as well.

My guess is Intel throws the first gen at a bunch of possible applications 'to see what sticks' and this will likely have limited penetration; however, next gen should be more successful. I don't think it is reasonable to expect a smashing success from a first gen product, especially in these competitive areas (and, in other cases, undefined areas).

If Intel can drive the power down, X86 in some of these applications would open up a tremendous amount of flexibility.

Anonymous said...

Oh paid Intel pumpers!! The day of revenge is all upons! On this day The AMD is in release of the newly super-improved Phenoms of the Quad-X varieties!!

No more will pro-Intel pumpers like Scientia or the pro-Intel fanboy Hyc spew their lies against the God Emperor HEctOR!! No more will paid Intel schills who claim AMD chips are only 40% faster than evil Intel pumper chips will be allowed to do the talk!

Just look at what God Emperor Hector has decided to allow you to purchase!! There are the chips that are running with TRUE SUPER QUAD TECHNOLOGY at the 2.5Ghz!! This is the Intel lie!! In reality the chip is 2.5Ghz + 10X generation lead on non-native pumper Intel ^ 4 For quad cores. Therefore it is the simple logics that each AMD chip is really running at the 390,625Ghz!!!

Ha, stupid Intel pumpers talk about quad cores! The AMD is having TRIPLE OF CORES!! These are the most reliable chips ever made!!! There has never been a bug in the AMD chips. AMD has never made a defective chip. It gave away 8 BILLION QUAD CORES to poor children last year in the OLPC. Each OLPC runs at teh 4Ghz and is so efficient it MAKES power to run entire villages! By giving away 8 BILLION chips AMD is now owed $500 TRILLION since Intel is t3h evil because of Monopolies!

Any "review" site that does not acknowledge this superirity is a LIE. These are the rules that must be followed for the AMD reviews:
1. Allowing Intel systems to run: Did Hector say these systems are the NATIVE quads? No? Then they must be DESTROYED and cannot be benchmarked.
2. Running "side by side" comparisons: THis is an Intel pumper lie! It is well known that Intel steals from AMD. PUtting the Intel chips in the same room as the AMD makes the Intel faster because it is the STEALING form the superior AMD!
3. ALL BENCHMARKS ARE LIES!! All software that is not made approved by the AMD is an Intel pumper lie! It is very simple: Take any Intel Fanboy software and multiply the speed by the 1 MILLION. That is how fast it REALLY runs on the AMD chips. The rest is just a lie to make peoples thinking that AMD is not the superior!

4. POWER BENCHMARKS ARE A LIE!! It is well known AMD chips do not use power, they MAKE power. PUmper review sites do not know this. Then Intel chips steal the power to cheat! Intel causes 100% of cancer and grobal walming!! THe Governors Spitzers of NeW York who is the 100% non-corrupt has proven this!

Anonymous said...

Bravo Trogdor. Bravo. That was a masterpiece. But shouldn't you be posting that on Slashdot?

Anonymous said...

AMD quad pricing:

http://www.amd.com/us-en/Corporate/VirtualPressRoom/0,,51_104_609,00.html?redir=CPPR01%3fredir=CPPR01

The good:
- they are all fairly cheap, AMD clearly realizes they cannot charge a premium and have chosen not to gouge the AMD diehards who would have bought these for twice the price (though I'm sure the retailers will not be so noble)
- the 9x00 pricing is the same as the 9x50 pricing... so AMD is not trying to price up the TLB bug fix as a feature.

The bad:
- prices effective 4/7...so much for hard launch. If they are 'launched' why are the prices not effective for another week?
- No parts under a 95 Watt bin?
- 100MHZ speed bin increments... why?


The INSANITY:
- All FIVE 9x50 parts are within $30 of each other! Why have so many different products within such a tight range? Keep in mind this is the quadcore desktop market - not exactly high volume!
- 3 different parts at the same price? WTF? 2.3GHz/95W = 2.4GHz/95W = 2.4GHz/125W
- a $6 (YES SIX!) difference going from a 2.2GHz to a 2.3GHz? Why not slot a 2.25GHz in there so you can have $3 increments?

And the topper:
- The 9600 black edition (2.4GHz, unlocked multi) costs $26 more than the 9750 (2.5GHz)! And it costs $36 more than the 2.4GHz model with the TLB bug fixed! And before you say 'unlocked multi', keep in mind that the LOCKED 2.4GHz 9600 costs the same as the unlocked version!

Excuse me, may I have the more expensive part with the TLB bug as opposed to the cheaper one with the issue fixed and a higher clock speed? Sign me up! I guess the 9600 is considered 'vintage' and thus AMD can sell them at a higher price? I can see etailers selling them higher because of supply/demand... but AMD?

Am I nuts or is this pricing strategy just crazy?!?!

SPARKS said...

“Am I nuts or is this pricing strategy just crazy?!?!”


“Oh paid Intel pumpers!! The day of revenge is all upons! On this day The AMD is in release of the newly super-improved Phenoms of the Quad-X varieties!!”


theinquirer.net/gb/inquirer/news/2008/03/26/amd-launces-b3-phenoms

You guys said it all.

What it all boils down to is that they can't put enough lipstick on this pig, and the sun isn't bright enough to illuminate anyone's ass.

It's official: the whole Barcelona lineup is a major historic catastrophic failure, the poor bastards.

KUDOS to Robo, In The Know, Jumping Jack, and last but never least, GURU, you were right.

SPARKS

Unknown said...

So the new Phenoms are still slower than my 11-month-old Q6600. This is just getting pathetic. Xbitlabs could only overclock their CPU from 2.5 to 2.7GHz. Compare that to Q6600s, where practically 100% of them can do 3GHz easily on stock voltage with decent air cooling. Enter the BIOS and set the FSB from 266 to 333. Done. That's it.
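
(For anyone wondering where the 3GHz comes from: the Q6600's multiplier is locked at 9x, so the final clock is just FSB times multiplier.)

    # the whole Q6600 overclock, arithmetically speaking
    multiplier = 9            # Q6600's locked multiplier
    print(266 * multiplier)   # 2394 MHz -- stock 2.40GHz
    print(333 * multiplier)   # 2997 MHz -- the easy "free" 3GHz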

Oh, btw Sparks:

http://www.newegg.com/Product/Product.aspx?Item=N82E16819115050&Tpk=QX9770

$1450 for the QX9770. That's a bit better than TigerDirect, but it's still only an OEM processor.

BTW, the Striker II Extreme (790i) is at Newegg now as well. It makes me feel a bit better about paying $349 for a board, since Newegg wants a crazy $469 for the Striker II Extreme!

That water cooling of yours, Sparks, that's huge high end water cooling! I hope we see at least 4.5GHz from that QX9770 when you pick one up.

QX9770, X48 ASUS board, DDR3 1800 (minimum!) and a factory OC'd 9800 GX2! That should be some serious crazy performance. You'll put my E8400, 790i, 8800 GT SLI and DDR3 1333 setup to shame. :-(

-GIANT

P.S. Trogdor, nice post! My daily humor intake, thanks! Are you TheBurninator from Sharikou's blog?

SPARKS said...

Giant, the P5E3 Premium came today, 3 days from New Egg. Almost identical to the 'Deluxe', as you know, it is a manufacturing work of art. This thing has more copper than some of my switch gear!

Thinking back to the early nineties, it is incredible that you can get so much for so little in comparison.

Next stop is Ram. The Intel certified Dominators are currently unavailable, as is the boxed QX9770. You were correct about 4/28 being its official release date.

This is a good thing. It will give me time to 'hide' the expense from my wife.

1800 DOMINATORS!

Hoo Ya!

SPARKS

Unknown said...

Giant, the P5E3 Premium came today, 3 days from New Egg. Almost identical to the 'Deluxe', as you know, it is a manufacturing work of art. This thing has more copper than some of my switch gear!

That boards looks absolutely fantastic and should perform just as well!


Thinking back to the early nineties, it is incredible that you can get so much for so little in comparison.


Indeed, the sort of system you can build today even for just $1000 is simply amazing. I recall back in late 1994 when Wing Commander 3 came out and offered the then-incredible ability to play the game at a full 640x480 resolution! I had recently built a Pentium 90 system, and as such had the horsepower to run at 640x480 while my friends with 486s were stuck at 320x240. Those were the good old days! Then a few years later we had the 3Dfx Voodoo and GL Quake. I lost many, many nights of sleep playing GL Quake with the Voodoo based Diamond Monster 3D online over a 33.6kbps connection!


Next stop is Ram. The Intel certified Dominators are currently unavailable, as is the boxed QX9770. You were correct about 4/28 being its official release date.


Speaking from experience here, my Corsair DDR3 memory has overclocked very well. You might be able to get away with DDR3-1600. From what I've seen, the prices skyrocket when you go from 1600MHz to 1800 or 2000MHz.


This is a good thing. It will give me time to 'hide' the expense from my wife.


She might not notice the parts slowly spread out, though it would be hard to hide the bill for that QX9770 monster!

I've got this monster rig overclocked nicely now: the CPU at 4GHz with 1.325V with 1780MHz FSB. Dual channel DDR3 at 1780MHz, and dual 8800 GTs overclocked to run at 700MHz core, 2000MHz memory and 1750MHz on the shaders.

Unknown said...

AMD SALES DROPPING RAPIDLY - HECTOR HAS TO GO!

It's a pretty safe bet that if you were asked to name the leading chip manufacturer, you'd guess right: it's Intel. If you were asked for the top three in this $274 billion market, you'd probably get the other two wrong. They are Samsung Electronics and Toshiba. Hard luck if you said Texas Instruments: this once mighty chip giant is now in fourth place with sales worth $11.8bn, according to Gartner's latest list of The Top 10 Worldwide Semiconductor Vendors by Revenue Estimates.

Intel is top by a wide margin, with sales increasing by 10.7% to $33.8bn. Toshiba ($11.8bn) has just jumped into third place with 20.8% growth. It benefited from increased sales of the Sony PlayStation 3.

AMD is now in ninth place, according to Gartner. AMD's sales slumped by 20.9% to $5.9bn, giving it a market share of 2.1%.

Four of the top 10 saw revenues decline. Gartner says: "In terms of absolute revenue shifts, the largest drop was in dynamic random-access memory (DRAM) which saw a decline of $2.4bn in revenue caused by sharp price declines as a result of oversupply."

But like PCs and software, it's a diverse market. "Others" (outside the top 10) shift $147bn worth of chips, for a 53.8% market share.

Unknown said...

Almost forgot the link!

http://blogs.guardian.co.uk/technology/2008/03/31/intel_sales_up_amds_down.html

SPARKS said...

Giant,

“You might be able to get away with DDR3-1600. From what I've seen, the prices skyrocket when you go from 1600mhz to 1800”

Quite right, the Dominators were unavailable AND close to $550. Super Talent DDR3-1800, however, was available @ $339, with NO user comments, I might add. Since they use the same Micron chips and have the same low 7-7-7-21 timings, I thought I'd give them a shot. The California based company's 'Project X' series is an attempt to dive into the high end memory market fray.

I have an interesting link that compares all this high end DDR3 memory lunacy quite well. In fact, my purchase decision was determined by the article. This, plus saving 200 clams, made it a no brainer. For your review:

benchmarkreviews.com/index.php?option=com_content&task=view&id=117&Itemid=1

It seems counterintuitive and contradictory to build a high end machine AND shop for good deals. Well, we'll see if I just gambled away 340 bucks, after all.

By the way, for all those contemplating a new build on X38/X48, the manual explicitly states to use the A2 and B2 slots FIRST with the 2 DIMM, dual channel solution. As I read somewhere, it seems there is a problem with Windows corruption when going to the A1 and B1 banks first. Go figure, and be advised.

“it would be hard to hide the bill for that QX9770 monster!”

You're not kidding; I'm scheming like a bastard to push this one through the pipe, unseen.

“I've got this monster rig overclocked nicely now: the CPU at 4GHz with 1.325V with 1780MHz FSB. Dual channel DDR3 at 1780MHz, and dual 8800 GTs overclocked to run at 700MHz core, 2000MHz memory and 1750MHz on the shaders.”

F**K'n a bubba! This is what rings my chimes, and that's what we're paying for. INTC is back in the saddle again, and there ain't no go'n back! My hat's off to all you boys out there, on this site, who work for the 'EVIL EMPIRE', for flawless execution and a job superbly done. KUDOS! (No more fly-bys and AMD sky writing at company picnics, eh?)

Clock till ya rock!
HOO YAA!

SPARKS

Anonymous said...

AMD roadmap(?)

http://www.nordichardware.com/news,7412.html

Looks like the 6000 and 6400 will be discontinued, and not manufactured on 65nm. Gee, I wonder if that is by 'choice' or if the 65nm has a few problems getting the clocks up on an OLD, STABLE architecture?

"The roadmap does not speak of Q4, but that is when AMD is suppose to begin shipping 45nm Shanghai processors. Desktop revision should be expected until Q1, 2009."

Hmmm... weren't some folks postulating possibly Q2'08, and if not, Q3'08.... and thus AMD closing the process technology gap! Also keep in mind "shipping" does not equal available for purchase. Looks like the best case is AMD maintains the gap, but with a vastly inferior process (from a raw process performance perspective).

And the key comment:
"All Athlon X2 BE models will be replaced/renamed 4x50e and two additional models will be launched; 6250 and 6050. These are based on the much anticipated Kuma core."

If they are pawning these off (again) with another 'customers want energy efficiency' line, then I have a few speculations:

- These don't look so hot clock for clock vs the K8, so focusing on the energy efficiency may distract people from realizing these are only marginally better than K8 dual cores. This would also be consistent with AMD's plan to launch the quads well ahead of the duals. Get the upgrade crowd on the quads, as the duals are really not much better than K8?
- They may STILL be having some real clock issues.... this could be both process (which I think at this point is a given), but there may also be some architecture issues too.
- Has someone with a business sense finally realized K8 is cheaper to manufacture and K10 is not going to give much of a price premium in the dual core space?

So some questions:
1) Why even bother with ANY K10 65nm dual cores? (unless 45nm is not as healthy or expected to ramp as quickly as planned)
2) What's up with the tricores at only 2.2 and 2.3GHz? Are they just trying to sell off B2 stepping scrap and are the thermals that bad on the quads that this is a way of turning a core off and meeting TDP's (to help binsplits)?
3) If the K10 desktop revision for 45nm is not due until at least Q1'09, is AMD hoping to ride things out for a year with a 2.6GHz 65nm quad (assuming of course they finally get this out)? Why even bother - just focus 65nm quads on servers, get better margins, and cede the quad desktop space (which is still tiny). This seems like an ego based decision vs a business one...

All they are doing is putting a price ceiling on the tri-cores, which in turn puts a ceiling on the dual cores. Why not just sell defective server quads as desktop tri's and forget about 65nm desktop quads altogether (and wait for 45nm)?

Anonymous said...

I think Charlie forgot to take his meds again:

http://www.theinquirer.net/gb/inquirer/news/2008/04/02/amd-fabless

"This may seem like an impediment, but that 49 per cent is still potentially worth billions."

"The influx of cash would be sufficient to pay off the amassed debts and bankroll some of the future plans."

"This may seem like an impediment, but that 49 per cent is still potentially worth billions."

How much debt does AMD have? Now double that # if AMD has to keep a 51% stake in the spinoff (per the Intel x86 license agreement)... is a 1.5-fab company (plus some packaging plants) really going to go for $10+ Bil?

I'm not saying the article is wrong - AMD may go fabless... but Charlie's ridiculous supporting arguments are, well, ridiculous. The IP piece on SOI vs bulk... reinforces the lack of expertise Chuckie has in this space.

And I'm sure the US govt (AMD is a US company after all) would sign off on a transfer of leading edge technology to Dubai? It's not like the US has only recently allowed export of 65nm equipment to China! But I'm sure there'll be no issues with 45nm tech?!?!?

Charlie will be right for the wrong reasons and will probably pat himself on the back for getting it right! Remember even a broken clock is right twice a day! Cough, reverse hyperthreading, cough, dancing in the aisle at 3GHz, cough..

Anonymous said...

ohhh.... and another forgotten part of Charlie's brilliance... One of these companies is going to be operating at a loss (the foundry), and if 'AMD design' owns 51% of it... guess what?

Oh, and the foundry will be charging AMD design a little extra to fab the chips; there'll be more inefficiencies created, as they are now 2 separate companies...

You can't just generate money - I could split a $300K house in half, keep one half, and try to get $200K for the other, but why would anyone pay that, especially if that half of the house has water damage and termites?

Hey Dubai, I would like to sell you 49% of 1/2 the company - the half which is losing money - and I want what is effectively 1/4 of the current company to go for more than the TOTAL MARKET CAP of the whole company... Houston, we have a problem.
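
Putting the house analogy into numbers makes the problem obvious. A minimal back-of-the-envelope sketch, where every figure is an illustrative assumption (not AMD's actual market cap, debt, or any real foundry valuation):

```python
# Back-of-the-envelope check on the spinoff math.
# Every number here is an illustrative assumption, not an actual AMD figure.

market_cap = 3.5e9     # assumed total market cap of the combined company
foundry_half = 0.5     # assume the foundry is ~half the company's value
stake_for_sale = 0.49  # the x86 license forces AMD to keep a 51% majority

foundry_value = market_cap * foundry_half
proceeds = foundry_value * stake_for_sale

print(f"Implied foundry value: ~${foundry_value / 1e9:.2f}B")
print(f"49% stake raises:      ~${proceeds / 1e9:.2f}B")
# ~$0.86B -- nowhere near "billions", unless a buyer values a loss-making
# foundry at a huge premium to the whole company's market cap.
```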

Unknown said...

“Quite right, the Dominators were unavailable AND close to $550. Super Talent DDR-1800, however, were available @ $339, with NO user comments, I might add. Since they use the same Micron chips and have the same low 7-7-7-21 timings, I thought I’d give them a shot. The California-based company’s ‘Project X’ series is an attempt to dive into the high-end memory market fray.”

$339 isn't a bad price at all. Was that for 2GB or 4GB? I paid $349 for 4GB of Corsair DDR3-1333 with timings 9-9-9-24 with a default voltage of 1.7V.


“I have an interesting link that compares all this high end, DDR3 memory lunacy quite well. In fact, my purchase decision was determined by the article. This, plus saving 200 clams, made it a no-brainer. For your review:

benchmarkreviews.com/index.php?option=com_content&task=view&id=117&Itemid=1

It seems counter intuitive and contradictory to build a high end machine AND shop for good deals. Well, we’ll see if I just gambled away 340 bucks, after all.”


I've found from personal experience that buying high end ram is always a good move, but you can save quite a bit by not buying the absolute of highest end memory.


“By the way, for all those contemplating a new build on X38/X48, the manual explicitly states to use the A2 and B2 slots FIRST for a 2-DIMM, dual-channel setup. As I read somewhere, it seems there is a problem with Windows corruption when populating the A1 and B1 banks first. Go figure, and be advised.”


That's odd. I haven't seen that sort of requirement in quite some time. My 790i board will take two sticks of ram in the first and third slots, or the second and fourth slots.




“You’re not kidding, I’m scheming like a bastard to push this one through the pipe, unseen.”


Surely your wife has some expensive habits of her own. If necessary, just remind her of those! :)


“F**K’n a bubba! This is what rings my chimes, and that’s what we’re paying for. INTC is back in the saddle again, and there ain’t no go’n back! My hat’s off to all you boys out there, on this site, who work for the ‘EVIL EMPIRE’, for flawless execution and a job superbly done. KUDOS! (No more fly-bys and AMD sky writing at company picnics, eh?)”


Absolutely. Intel came back with a BANG in 2006! Prior to that I had an AMD 4200 dual core (let's face it, the Pentium D was garbage), but since then I've bought (and still own) an E6600, a Q6600 and an E8400. I'm not sure what I'll do with the E6600; it's been sitting in its box since I bought the E8400.

I think you might find this little link to your liking sparks:

http://www.evga.com/articles/400.asp

Factory OC'd 9800 GX2s. I imagine they'll be available at Newegg, TigerDirect etc. within a few days.


“Looks like the 6000 and 6400 will be discontinued, and not manufactured on 65nm. Gee, I wonder if that is by 'choice' or if the 65nm has a few problems getting the clocks up on an OLD, STABLE architecture?”


I think you've got it all wrong. AMD's customers only care about energy efficient CPUs! Truly, none of AMD's customers want dual core CPUs that can compete with the E8x00 series or a quad that can compete with the eighteen month old Intel quads!

BTW, it seems that Barcelona servers may finally be starting to come out! This is just a mere seven months after the launch! There doesn't seem to be much mention of this from the AMDzone people, though.

Khorgano said...

Anonymous said...

Charlie will be right for the wrong reasons and will probably pat himself on the back for getting it right! Remember even a broken clock is right twice a day! Cough, reverse hyperthreading, cough, dancing in the aisle at 3GHz, cough..


APRIL FOOLS!!!!

*Checks Date: 04/02/2008*

Oh

SPARKS said...

Giant, you confirmed my suspicions that one of NVDA’s darlings would be overclocking the GX2. Well, that didn’t take long, did it? I’ll wait for the benchies; if it’s anything close to 17000 in 3D Mark ’06, I’m in.

http://www.evga.com/products/moreinfo.asp?pn=01G-P3-N897-AR


This one @ a 675MHz clock speed seems to be the weapon of choice for INTC mobos, anyway.


“They may STILL be having some real clock issues.... this could be both process (which I think at this point is a given), but there may also be some architecture issues too.”

This “speculation” on your part was postulated 8 months ago. I don’t think anyone (on this site, anyway) expected anything more. Frankly, the whole damned line up is a dog, and you called it.


“Why even bother with ANY K10 65nm dual cores?”


The answer, if I may be so bold, isn’t obvious to a highly disciplined and conservative individual such as you. It’s pride, pure and simple. May I submit to you that they will milk this cow for whatever they can, until 45nm can justify 2 years of bleeding production costs, hype, and PowerPoint spin. They will never admit publicly, or to the industry, that the whole Barcelona conundrum was a catastrophic failure, process and architecture inclusive. You know I love clichés, ‘Pride goeth before a fall’, and this is no exception, damn the loyal AMD followers and shareholders.

That said, in the final analysis, the 45nm series MUST see 3 gig or better, even if it burns a hole in the motherboard. This is gospel.

“I think Charlie forgot to take his meds again:”

Oh, how true. First, it flies directly in the face of the commitments to both New York State and the European Union. We know they received subsidies for Dresden, and they are looking at a cool 1.2B from the ‘New York State of Mind’. (This is not to mention the money already spent on the Malta FAB infrastructure my buddy GURU tortures me with!)

Any way you look at it, it’s a hard pass. If they go for FAB-less, their credibility for any future agreements will be up there with WorldCom, Enron, and Global Crossing. Further, to add insult to injury, the current lending scenario is tighter than a crab’s ass, and that’s waterproof. Compound this with $5B+ in long-term debt, and ugly gets hideous. The article reeks.


SPARKS

Anonymous said...

"The answer, if I may be so bold, isn’t obvious for highly disciplined and conservative individual such as you. It’s pride, pure and simple"

Individual people have pride, but a whole company? The board of directors? (asleep at the wheel) Stockholders? How can so many people be this stupid (or should I say proud)?

Yes, some will say hindsight is 20/20, but AMD has had time to adjust their plans since K10 samples started rolling off the line in DECEMBER '06! They had to have known what 65nm was capable of based on K8! I can see AMD having to go ahead with the launch - after all the marketing speak about '30% better' and 'true quad core', they had painted themselves into a corner and probably felt they had to launch. But it seems like they could EASILY have adjusted the product mix based on the performance of the chips.

Sure, Hector is driving the car at the wall at 200mph, but there are other people in the car, and someone's gotta slap him or grab the steering wheel, or someone in the back seat should yank on the emergency brake.

One or two bad decisions can be understood, but it is mind-boggling what Hector et al have done to this company... this will be taught at Wharton, Harvard, Northwestern, MIT Sloan and other B-schools for years to come. And this is clearly not all on Hector; the board has to step up - THEY ARE PAID TO PROVIDE STRATEGIC ADVICE and remove idiots when needed. Also, the number 2 and 3 in command at some point have to lean on number 1.

Anonymous said...

"If they go for FAB-less,"

Forget credibility - if they go fabless, THEY POTENTIALLY LOSE THEIR x86 LICENSE! By contract they can't outsource more than 20% of their CPU production (oops) - so that means they will need to be the majority holder in any foundry they spin off!

This means at most AMD can get ~50% of what the foundry spinoff would be worth, and will be saddled with 1/2 of whatever losses it continues to incur (unless you think this mom-and-pop factory shop can compete with the big foundry boys like TSMC and UMC). And if this foundry folded, AMD would be completely F'd, because then they wouldn't be able to keep 80% of CPU production in house/AMD owned! So AMD design would have to pour money into the foundry to keep it solvent anyway! Not much different than today, other than they may get some potential sucker, um investor, to essentially fund 1/2 (or I should say 49%) of their foundry.

The good news Sparky... this may save the NY taxpayers from getting bled 1.2Bil, as I can't see that happening with a spinoff. If anyone in NY had half of a brain, they would have written in that this was an AMD-only subsidy, and thus they would have a potential escape hatch - of course Spitzer may have been otherwise 'preoccupied' at the time of the deal! :)

I must admit AMD was very shrewd to milk people out of their money with the convertible notes (any chance those will see the strike price? Hah?!?) to pay off the Morgan Stanley loan. Had they still had that loan, any money from the spinoff would have gone to Morgan Stanley; now they can just continue to float the convertible notes and debt and use the money to simply survive.

SPARKS said...

“Forget credibility - if they go fabless, THEY POTENTIALLY LOSE THEIR x86 LICENSE!”


Whoops! Thanks for reminding me about that --- uh, ahem, --- minor detail.


http://contracts.corporate.findlaw.com/agreements/amd/intel.license.2001.01.01.html


“If anyone in NY had half of a brain………. Spitzer may have been otherwise 'preoccupied'……”



Personally, I must take exception to this argument, as you didn’t qualify it by saying which half; some of us have a pretty good working half. Also, you don’t say which side of the brain is used when ‘preoccupied’, as opposed to the side used when you’re dealing with finances! HA!

“The good news Sparky... this may save the NY taxpayers from getting bled 1.2Bil”

HOO RA! (See, that time I used the right half, err-- the good half.)

SPARKS

InTheKnow said...

For those interested in the conversation around Design for Manufacturing (DFM) with regards to Intel's pass on immersion litho at 45nm, you might want to check out this article.

InTheKnow said...

I also found this rubbish elsewhere on the web.

“Nehalem is an updated P4.
Intel has said that the hyperthreading is not improved from the P4, but it does have 4 integer units now so Intel is going to call it a Core chip. :)
Pretty funny seeing as Core1 was just a P3 with a huge cache to make up for its shortcomings.”


If Nehalem were a P4, it would have the huge number of pipeline stages characteristic of the NetBurst architecture. It would also show the thermals that NetBurst had. Since Nehalem is intended to compete in the server space, I'll go on record saying this just isn't so.

As to there being no improvement in hyperthreading, I've never heard anyone suggest that hyperthreading didn't work well if you were able to feed it. QuickPath should ensure the processor isn't waiting around for data, so HT should work a whole lot better.

The thing that distresses me about this comment is that it is allowed to go unchallenged on the blog where I found it. However, were I to challenge the ludicrous claim made above on said blog there is a fair chance I would get censored.

Unknown said...

Some of those idiots at AMDzone, and some of the people who comment at that particular blog, are quite stupid.

It's well known that Nehalem is a revised version of the Core Micro Architecture. As for the SMT feature, there's a Pat Gelsinger interview from a while ago in which he says something along the lines of "SMT in Nehalem is similar in many ways to HyperThreading in the Pentium 4, but has been greatly improved upon."

Then we have the illustrious azmount azryl claiming that Phenom is faster and better at overclocking than Intel's quads! Of course, he provided no links at all. He has no proof.

Here's the HARSH REALITY for you AMD fanboys.

Intel has a COMMANDING lead in mobile, in desktop and in SP and DP servers. Even in MP servers Intel has a performance lead, though it's not as great.

Phenom is a stinking pile of crap that's SLOWER THAN EIGHTEEN MONTH OLD QUADs. They still overclock like crap. Xbitlabs got 2.7GHz on air. Pathetic. I've had this year old B3 Q6600 at 3.3GHz on air. My E8400 does 4.5GHz on air.

Barcelona is a seven month old dog that's still unavailable in servers from Dell, Sun etc. It's significantly slower than Intel's 45nm Harpertown processors.

What about future processors? For mobile AMD has yet another K8 based clunker that will be significantly slower than the Core 2 Duo mobile processors out there now. In addition, Intel is introducing faster mobile processors and even a quad core mobile processor later in the year.

What about desktop? AMD has dual core "energy efficient" K10 and older but faster K8 dual core CPUs and the illustrious TRIPLE CRIPPLE Phenom! All fragged by Core 2 Duo today.

AMD has 45nm products coming in the second half of 2008 (read: AMD will paper launch these Dec 31!), but these will offer no more than a 5 -> 10% boost at the same frequency. Clockspeeds will be SLOWER than 65nm. This has been the case for AMD going from 130 -> 90 and from 90 -> 65. Why would this change now?

Meanwhile Intel has the performance lead in all processor segments and will EXPAND this lead even further with Nehalem. Nehalem will offer a large improvement in single threaded applications, while in multithreaded applications the increase will be MASSIVE.

Meanwhile, AMD will continue to lose tons of cash while Intel rakes in over a BN a quarter.

I could continue on for another hour at this, but we all know AMD has no chance of competing with Intel.

pointer said...

Giant said ...
Then we have the illustrious azmount azryl claiming that Phenom is faster and better at overclocking than Intel's quads! Of course, he provided no links at all. He has no proof.


Actually, whatever they said, while it has a higher probability of being false, it could possibly be true too, though most often for the wrong reasons. They mostly don't give educated guesses / rational judgments, just emotional / fanboyish statements.

I was there for some quick fun, poking at their double standard in judging things. While I do think that Super Pi might not be a 'good' benchmark indicator, it is an indicator anyway, and one those AMD fanbois used before the Core era :) I even gave them a link where AMDZone used it. Well, you can quickly see how those fanbois play down Super Pi as if it has zero value.

Anyway, another finding: you posted on Sharikou's page on 1st April - Q2'08!!! :) I predicted people would flock to his site to laugh at him... apparently even I forgot about it. Neither Intel nor AMD has BKed, but his site surely has :)

Unknown said...

AMD CONTINUES TO LOSE MARKETSHARE:

Several analysts said in recently released notes that Advanced Micro Devices was losing market share to Intel Corp. in desktop and server markets due to a product lineup that cannot compete against its rival’s family. One of the factors that does not allow the world’s second largest x86 chip producer to fight back for share is the lack of speedy microprocessors for demanding customers.

J.P. Morgan analyst Christopher Danely recently warned that Advanced Micro Devices was likely to report Q1 revenues below company guidance calling for a “seasonal” March quarter, reports Barron’s web-site. Mr. Danely said AMD was losing market share to Intel in Q1 2008 and that Intel was benefiting from “superior product offerings” and AMD’s lack of high-end server processors. Another analyst – Uche Orji from UBS – also believes that AMD was losing market share to its larger rival.

“AMD is losing more processor market share to Intel than we had expected, primarily from lower estimates on desktops and servers,” Mr. Orji said.

On an overall unit basis, Intel had 76.7% market share, a gain of 0.4% in Q4 2007, according to IDC. AMD commanded 23.1% of shipments, a loss of 0.4%. These shares are nearly identical to those of Q2 2007; however, in Q1 the market share of Advanced Micro Devices may collapse, since Intel introduced a family of very competitive microprocessors made using 45nm process technology, whereas AMD failed to deliver higher-end AMD Phenom microprocessors on time and in mass quantities.

There are other problems that AMD is facing in addition to market share loss. The J.P. Morgan analyst also warned that AMD could be hurt by microprocessor inventory in the channel, noting that microprocessor unit shipments have been well above the long-term trend for the past two quarters. Finally, Mr. Danely said that AMD’s current lineup of central processing units would not allow the firm to report high average selling prices, which means generally lower profits.

“We believe it will be difficult for AMD to make money unless it drastically cuts back production and focuses on execution,” Mr. Danely said.

Axel said...

AMD warns: a 15% drop in sales from last quarter, and 10% of the workforce to be cut over the next six months. I expect asset light or asset smart or whatever other term of the day the spinner execs choose will be fleshed out during the Q1 CC on April 16.

SPARKS said...

Fella’s, I’ve got some news, some interesting facts, an update, and an interesting little story.

First, to all the idiot naysayers who said INTC was having 45nm production problems: the market is being flooded as I type. I believe INTC satisfies OEM’s first, then the boxed stuff comes later. A week ago an OEM QX9770 was released. A week later, New Egg has the boxed version on sale. So much for the 28th of April FUD, as I’m sure INTC has been gracious enough to allow customers/vendors/OEM’s time to clear inventory. They can afford to, with such a performance and production lead. Further, the pricing structure looks very good and well thought out.

The venerable, lowly, bread-and-butter Q6600 (G0) is a goddamned MONSTER! This thing is the absolute, undisputed, 21st century overclocker’s darling!! The fabulous celery 300A has had a special place in my heart, but this thing, from the onset, has been an absolute JOY. All you need to do is get an X38, preferably X48, mobo with some 1600 or better DDR3 RAM, volt the core to 1.38, crank up the FSB to 1450, and voilà, QX9770 speeds on the CHEEP! For a moment, a VERY small moment, I questioned my ‘extreme’ purchase; then, naturally, I regained my senses. This is all on air, mind you, Zalman LED 9700 series. By the way, the ASUS P5E3 Premium is a gorgeous motherboard both inside and out. The onboard wireless N is more than capable, although not as refined as my NETGEAR 854 series card. There are more overclocking CPU/RAM/FSB timing parameters than I ever knew existed. Expensive toy, yes, no doubt. But it’s far, far cheaper than fast cars, party girls, good booze/drugs, and, not to mention, a good divorce.
Trust me.
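
For anyone who wants to sanity-check that claim, the overclock arithmetic is simple. A quick sketch, assuming the Q6600's stock 9x multiplier and the usual quad-pumped FSB convention (both standard for this chip, but verify against your own board):

```python
# Sanity check on the Q6600 overclock arithmetic.
# The FSB rating is quad-pumped, so the real base clock is the
# advertised figure divided by 4; core clock = base clock * multiplier.

fsb_rating = 1450  # MHz, the FSB setting mentioned above
multiplier = 9     # Q6600 stock multiplier (locked upward)

base_clock = fsb_rating / 4           # ~362.5 MHz
core_clock = base_clock * multiplier  # ~3262.5 MHz

print(f"Base clock: {base_clock:.1f} MHz")
print(f"Core clock: {core_clock / 1000:.2f} GHz")
# ~3.26 GHz -- past the QX9770's 3.2GHz stock clock, which is the sense
# in which a cheap Q6600 hits "QX9770 speeds".
```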

Now, for a little story. Giant, especially, you will appreciate this after your previous post. I commuted with an IT guy for a number of years. We would talk computers, INTC, AMD, RAM, motherboards, etc., every morning. He’s a geek, but we got along very well. I think he was impressed that a construction worker could be such a computer/hardware nut. There was one difference, besides me being able to snap his neck with two fingers: he likes AMD, and I like INTC.

He’d had enough of his job at a large university in NYC and subsequently quit, just like that. I hadn’t seen the guy for nearly two years. Frankly, I missed him, as I had no one to talk hardware/software, etc. with. Then last week I ran into him on the way home from work. He was happy about his new job; we then got around to talking computers. I asked him what he was running. His reply was, “Phenom 2.2”. He saw ‘the look’ come across my face and said “Listen, I am not going to get into a pissing contest with you on this”!!!!
My reply was to ask him why he was so touchy about this, and to tell him about the Q6600’s performance and overclockability. His reply was that he had looked at it, but it wasn’t a ‘true quad core’. Huh?

I let it go. He went on to tell me what a fine machine he purchased, how good the price was, and how well it ran VISTA. Huh? There was no more to say. Here’s a guy in the business, in denial, who didn’t want to hear it, not one bit. None of them do. AMD is getting its ass handed to them on a 300mm wafer. They have lost billions, market share, and ultimately, any performance lead they ever had. They may even lose the company. These are the facts, and they are indisputable, except to those AMD fans who choose not to face reality. There is nothing anyone is going to do to change these facts, either.

SPARKS

SPARKS said...

You beat me to it, AXEL. What isn't so obvious, however, is that AMD waited for the NYSE to close before they made this 1600 job cut call. I suspect new lows at tomorrow's opening bell.

My sincerest regrets to those employees and their families.

SPARKS

Anonymous said...

"First, to all the idiot naysayer who said INTC was having 45nM production problems, the market is being flooded as I type."

Come on Sparks, Charlie at the Inq said things were so bad a while back that Intel was developing a second 45nm process (I'm fairly sure this was after reverse hyperthreading)

Axel said: "I expect asset light or asset smart or whatever other term of the day the spinner execs choose to be fleshed out during the Q1 CC on April 16."

I'm fairly sure we'll hear 'we are still working out the fine points and don't want to tip our hand again'. Here's a news flash - there's no such thing as an asset lite strategy - it was a made-up strategerinessment by AMD to buy time from investors! What has AMD done since talking about asset lite A YEAR AGO? ...nothing - they continue to use foundries for the graphics (ATI) products and they only mothballed F30... that is not asset lite, that is 'asset off'.

Going forward it is either:
1) Continue as is and scrounge for cash investments (given the US credit markets this will be difficult without someone really punishing AMD on the terms)

2) Try to figure out a way around the x86 license terms and spin off the manufacturing unit. They may even try some political maneuver and play the 'we'll die if Intel doesn't let us out of this term, and then folks we'll be left with a monopoly' card (the license terms say that AMD can outsource no more than 20% of CPU production)

3) Form an alliance with a cash cow and keep the manufacturing in house on paper (to take care of the x86 license issues), but effectively spin off the fabs. Not quite sure why someone would do this, other than maybe the NY 1.2Bil carrot hanging out there.

I expect a lot more song and dance at the next earnings call with no details... and increased calls to take Ruiz out behind the woodshed and spank him like a stepchild. How is this man getting past the staffing reviews? You'd think he'd be part of the 10% cut!

Anonymous said...

"What isn't so obvious, however, is that AMD waited for the NYSE to close before they made this 1600 job cut call. I suspect new lows at tommorrows opening bell."

I'm actually stunned it wasn't done Friday afternoon - the job cuts were already leaked by then anyway.

I suspect the Wall St reaction may be mixed - the job cuts may offset the earnings shortfall in investors' minds. The stock already has a lot of bad news baked in. Sadly, if it gets dragged down, I suspect Intel will go down as well, as people will just assume this means the market has softened rather than AMD losing share to Intel. (I own INTC stock)

Anonymous said...

I suspect the real winner from the AMD warning may be Nvidia. AMD has some fairly competitive mainstream products in that space and has been pretty aggressive with pricing. I suspect AMD may feel some pressure to ease up on the graphics pricing to help out a bit with overall cash flow, which will give Nvidia some pricing relief (have folks seen how much graphics card pricing has been falling recently?). Nvidia is pretty much screwed now in the integrated graphics space (how many people really need SLI?), so discrete is their cash cow until Intel jumps into that market.

I wonder, if AMD could hold out until Larrabee comes out, whether Nvidia could try to buy/merge with AMD (with Huang taking over). I can't see it happening before then, as that would set up a monopoly in the discrete card space. Just crazy speculation...

SPARKS said...

In The Know, that “this article” link was absolutely terrific. Many of the things you, GURU, the DOC, and others have been explaining gave me quite a bit of insight as I reviewed the data and photos. What was particularly interesting was the graph:

“Figure 11. Improvement in M1 uniformity after Cu etch and CMP enhancements”


Correct me if I’m wrong here, but it seems INTC is getting excellent yields at the edge(s) of the wafers which would be attributed to uniformity within the process.

In any case, thank you for the link, and thanks to all for the little background you’ve given me so that I might have some clue what the hell I was looking at. At least now I can say, conclusively, that I know two meanings of “back end”. I would love to see a 3D rendition of how it all happens layer by layer, with actual photos, like these, as the model.

I can see why GURU stressed his dry litho, as opposed to immersion. Clearly, INTC has a considerable edge (forgive the pun) in small, crisp, precision dry lithography.

Incidentally, the fine details of the 45nm process are far more refined than the 65nm process. I guess “Leap Ahead” was more than just a catchy marketing thing.

Got any more I could bookmark?

SPARKS

Anonymous said...

"Correct me if I’m wrong here, but it seems INTC is getting excellent yields at the edge(s) of the wafers which would be attributed to uniformity within the process."

All yield falls off at the edge, and while there is no concrete data, DFM, design rules, and process capability all help out here. I think Intel's designs are generally more manufacturing friendly if you look at things like the dummification rules (all IC folks use them, but it is a question of how rigorously) - there also appears to be more tradeoff between the design folks and the process folks... if you look at some of the layouts, that certainly limits what the designers probably want to do, but as a process person, that clearly has tremendous benefits for developing a robust process. In the past these were largely 2 separate efforts; now, given the complexities, they are heavily intertwined.

What is rather great about intheknow's link is that it shows (I think) a fundamental difference between Intel and AMD in the process space. Intel tracks everything back to tangible results and data (specific temp uniformity, fmax variation, etc.) and manufacturability, and doesn't rely on what I will refer to as AMD's 'blue crystal' PR approach. Intel is more than happy going with 'old' technology if the benefit of the new technology is not appreciable (vs the cost/risk to implement it) or extendable (SOI leaps to mind here).

Immersion is better just because it is 'more advanced', ULK is better because it has a lower absolute k value, 4 stressors are better than 2 (yet no mention of the amount of stress induced or its impact on mobility and/or Isat). I have seen no actual DATA on any of these changes from AMD or why they matter (in terms of better yield, lower cost or improved transistor performance). I'm sure there are benefits to most of these, but do they outweigh the negatives (cost, risk, manufacturability, time to market, etc.)?

Anonymous said...

The Inq 'reports' that Ruiz' job may be on the line? As always Charlie is on top of things!

'There has been speculation that Hector’s job is on the line'

Ya think?!?!?

And in other news, Charlie is reporting that there has been speculation that the sun may indeed rise from the East tomorrow!

Anonymous said...

Holy spin Batman:

http://www.itweek.co.uk/itweek/analysis/2213690/amd-feels-cores-optimism

“We have taken a lot of criticism over the delays, but partners and customers appreciate that we have taken the trouble to fix this issue,” Allen said.

Allen reportedly had no comment when asked if customers and partners appreciated the chip being inferior to Intel's offerings despite numerous promises to the contrary! 40% what? Next question!

'Such a slip-up ought to have proven costly to AMD, but the company claims not to have suffered significant losses from the episode'

Hey when you have losses as massive as AMD already has, would anyone really notice the impact of this? This was always a paper launch in Q3 with minimal volume in Q4, so delaying things probably isn't a big deal....especially when these things are being sold at such low prices (in desktop space, it's probably better for AMD to sell K8's)

'Nehalem is a threat, he added, but wondered why it has taken Intel so long to come up with QuickPath. “We had this architecture five years ago, and we have been making improvements ever since. Intel is just introducing it at the end of 2008, while we now have HyperTransport version 3, more links and DDR3 memory,” he said.'

Nehalem is a threat?!?! That's balls as PENRYN IS ALREADY A THREAT except in the niche 4P+ server market. DDR3 memory support?!? WTF?!? If only Intel could support DDR3...oh never mind. Version 3....it must be better than Version 2... we just won't actually tie it to performance...

FINAL ANALYSIS:
Allen's got a ways to go to approach the great Henri Richard in the old spin department, but given time, inferior products and continued execution issues, I think he may rise to the challenge.

InTheKnow said...

anonymous said...
if you look at some of the layouts, that certainly limits what the designers probably want to do....

Let me start off by saying that I think that Intel's approach to DFM strikes a good balance. But I also think that AMD's (apparently) less rigorous application of DFM gives them a potential advantage.

By applying less rigorous limitations on their designers, they have the potential to create superior designs. I would say the potential is greater for them to produce another Opteron vs Netburst situation because of this design flexibility.

On the flip side there is also a bigger chance of running into manufacturing issues with poor yields, cost issues, inability to hit parametric targets, etc. But when you are the little guy, you have to take more risks.

So while I don't know if AMD has the right risk vs reward ratio in their design philosophy, I'm inclined to give them a bit of a break here. When the choice is lose or roll the dice, I'll try rolling the dice every time.

Tonus said...

sparks: "There was one difference, besides me being able to snap his neck with two fingers, he likes AMD, and I like INTC."

There may be a correlation there. :)

SPARKS said...

“When the choice is lose or roll the dice, I'll try rolling the dice every time.”

I don’t get it. With the ENORMOUS complexities associated with chip development and successful quantity/quality chip mass production, why would anyone gamble anything with so many unknowns and variables? Christ, the way I see it, you can have everything right, but with a slight variation in temperature somewhere along the process, the whole house of cards comes tumbling down.

In my case, if I gambled on a 4 inch x6 rigid pipe run in an existing structure, went up 20 floors with it, and then had to tell my PM, supervisor and the owner it had to be abandoned because we couldn’t get to the 40th floor, I’d be TOAST, on the spot!

NFW! I’d rather get a Colt revolver, slide one in, give it a spin, point it at my head, and pull the trigger.

“lose or roll the dice”

HUH??? Are/were there no other options, like getting some data, somewhere? Further, as they were rolling the dice at the ‘SOI table’, were they rolling away at the ‘ATI table’, too???

Horseshit, I’ll never be “inclined to give them a bit of a break here”.

Sorry, $5.4B, 16,800 jobs, and a solvent company are an awful lot of juice to crap out with, while rolling away at two very large tables, with some big HIGH ROLLERS watching you from across the way. I don’t buy it, and they don’t get a ‘pass’.

SPARKS

SPARKS said...

“There may be a correlation there. :)”

Tonus, LOL! He’s really a nice guy, but I wasn’t used to him speaking to me that way, ‘I am not going to get into a pissing contest with you about this’.

What took me by surprise was his anger towards himself and his purchase, and venting it out on me.

Hey, I can get that from my wife after a dye job gone bad!

SPARKS

SPARKS said...

“Sadly, if it gets dragged down, I suspect Intel will go down as well, as people will just assume this means the market has softened rather than AMD losing share to Intel. (I own INTC stock)”

Spot on, brother!

“the seesaw battle between semiconductor rivals Intel Corp. and AMD has taken its toll on both companies.”

http://news.moneycentral.msn.com/ticker/article.aspx?Feed=AP&Date=20080408&ID=8447435&Symbol=AMD

Who do they think took up the AMD slack/shortfall, VIA??

Both stocks are down on the day, 1:12 EST.

SPARKS

Anonymous said...

"Both stocks are down on the day, 1:12 EST."

Yeah, it was a fairly predictable reaction - I think the traders probably view this as softness in the market as opposed to Intel taking share. Keep in mind that while 10% in revenue is a lot for AMD... we are talking ~150Mil, which really wouldn't swing Intel's market share or even quarterly #'s unless the pain is being equally shared by Intel. This could just as easily be market share swings on the order of 1-2% as opposed to market softness - it could also easily represent AMD's continued price cutting to retain market share.
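
To put a rough scale on that point, here is a minimal sketch; both quarterly revenue figures are ballpark assumptions for illustration, not reported results:

```python
# Rough scale of AMD's revenue miss next to Intel's quarter.
# Both revenue figures are ballpark assumptions, not reported results.

amd_quarterly_rev = 1.5e9    # assumed ~$1.5B/quarter for AMD
intel_quarterly_rev = 9.5e9  # assumed ~$9.5B/quarter for Intel

shortfall = amd_quarterly_rev * 0.10  # the ~$150M miss discussed above

print(f"AMD shortfall: ~${shortfall / 1e6:.0f}M")
print(f"As a share of Intel's quarter: {shortfall / intel_quarterly_rev:.1%}")
# ~1.6% -- small enough that it could be a 1-2% share shift to Intel
# (or AMD price cuts) rather than any real softening of the whole market.
```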

Next prediction - Intel stock will slowly start creeping up (probably starting Thursday-ish) prior to earnings next week.

SPARKS said...

“Intel stock will slowly start creeping up (probably starting Thursday-ish) prior to earnings next week.”

No doubt. REAL investors, ‘the big guys’, will see this as an opportunity, especially when they factor in today’s news that DELL is scaling back on its losing AMD product lineup, in addition to AMD’s production and market failures.

What you said earlier, as far as AMD’s price drop, is also true, however. The loss from the market share drop may be offset by share price gains due to lower payroll numbers, hence the drastic cost cutting measures. I think you’re right. It might be a wash, from Wall Street’s perspective, anyway.

Finally, if the banking/mortgage mess can be cleared up any time soon, INTC’s share price increase could be substantial towards the third and fourth quarters. I know it’s a lot of ifs and maybes. Then again, they’ve weathered the storm quite well, all things considered, never dropping below twenty during the entire meltdown. Further, from a pure business aspect, they have performed flawlessly.

I’d say they look good; however, I am biased, as they always look good to me.

SPARKS

Anonymous said...

"Then again, they’ve weathered the storm quite well"

If you look at large companies that actually make and sell products, they tend to ride things out better. Financials have become huge houses of cards that are levered in some cases 30 to 1, meaning a little loss in faith and credibility and you have Bear Stearns.

Intel will trade sideways (barring a complete collapse by AMD) until late Q3-ish, and then things should pick up again. By then the shift to 45nm should be 50%+ and the margins will be picking back up; and if AMD continues to flounder, there may be less pricing pressure, helping margins even more.

Anonymous said...

An interesting read:

http://www.eetimes.com/news/semi/showArticle.jhtml;jsessionid=OKH1B4LMQHAZOQSNDLOSKHSCJUNN2JVN?articleID=207100361

Note - this is a NON-BIASED site and tends to focus on data when they do their analysis articles.... the cost # trends at AMD are a bit eye opening.

The major problem, though (in my humble opinion), is the need to INCREASE revenue, not drastic cost cutting (though costs obviously need to be looked at). They should raise the price on ALL graphics parts at least 5-10% across the board and should increase CPU pricing (a small amount on the low end and larger amounts on the high end).

They will lose some market share from this, but right now they are cutting prices to gain market share and aren't gaining anything, as the ASPs are falling faster than the market share they think they can get (thus all they are doing is lowering revenue). If they lose 2-3% market share but gain 5-7% revenue (via better ASPs), that is a good tradeoff. Intel is still at a point where they are somewhat capacity constrained, and I don't think the share losses will be massive (short term). Once they stabilize financing and pay down some debt, then they can try the ridiculous slash-and-burn strategy again.
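
Since revenue is just units times ASP, the tradeoff above is easy to sketch. A toy example with made-up index values (none of these are AMD's actual share or pricing numbers):

```python
# Revenue = units * ASP: comparing the two pricing strategies with
# made-up index values (100 = baseline), not AMD's actual numbers.

units, asp = 100.0, 100.0
baseline = units * asp

# Strategy A: cut prices 10%, chasing ~3% more unit share
cut_prices = (units * 1.03) * (asp * 0.90)

# Strategy B: raise ASPs ~7%, accept losing ~3% of unit share
raise_asps = (units * 0.97) * (asp * 1.07)

print(f"Baseline:   {baseline:.0f}")    # 10000
print(f"Cut prices: {cut_prices:.0f}")  # 9270  -- revenue DOWN ~7%
print(f"Raise ASPs: {raise_asps:.0f}")  # 10379 -- revenue UP ~4%
# A 10% price cut needs ~11% more units (1/0.9 - 1) just to break even,
# far more share than the cuts have actually been buying.
```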

It seems astonishingly simple, but it won't happen until Ruiz is gone. Ruiz continues to feed the market-share-at-all-costs model, except now he has run out of cash and is feeding it with people (via layoffs).

InTheKnow said...

Sparks said...
I don’t get it. With the ENORMOUS complexities associated with chip development and successful quantity/quality chip mass production, why would anyone gamble anything with so many unknowns and variables?

I'm not advocating anything as radical as you might think. As I believe Guru has pointed out in the past, ALL companies that design anything have design rules. Design rules are essential to ensure that the design can be manufactured within the constraints of existing process capabilities.

So I'm not advocating that design rules be abolished. Nor am I saying that AMD should give the keys to the Ferrari to some snot-nosed kid fresh out of college. Let me give you an example from the paper I linked to, to clarify what I am saying.

“...we can see that some of the lines between the logic cells are a darker shade of grey than others in the cells. This is a function of the secondary electrons seen by the SEM detector, but for our purposes it indicates dummy metal lines used in the layout of the part.

Of course, the use of dummy metal is almost as old as the use of CMP, but this is the first time we have seen it employed so extensively, so densely, and so early in the back-end processing. For CMP control, we usually see small structures such as squares of metal; here, the dummy structures are lines squeezed in at every possible position where there is no active metal needed....”


The article goes on to explain how these design decisions have improved the uniformity of line widths and helped limit variation in FMax.

But Intel has paid a price for doing this. By using large metal lines as dummies instead of the smaller, more traditional metal squares, they have limited how designers can route the traces. By limiting the designers, you reduce the number of possible designs.

The more restrictive the design rules become, the more you reduce the designers' freedom.

Sparks also said...
“lose or roll the dice”

HUH??? Are/were there no other options, like getting some data, somewhere?


All I'm saying is that AMD can't afford to compete with Intel at their own game. Intel is just too big. So AMD has to look for another way to find an advantage. I'm proposing that AMD can hope to find an edge by applying somewhat less rigorous limits on their designers at the cost of taking a small risk with yields.

Anonymous said...

"By using large metal lines as dummies instead of the smaller more traditional metal squares, they have limited how designers can route the traces. By limiting the designers you have reduced the number of possible designs."

Intheknow, I respectfully disagree... making things more difficult doesn't necessarily mean eliminating possibilities. The areas where AMD should be taking risks are in the fundamental high-level design (like they did with K7/K8). Making things harder in terms of routing/tracing does introduce restrictions, but it doesn't limit fundamental design choices (do you do an L3 cache, how large an L1/L2, # of FP units, etc...)

AMD's key should be (or I should say needs to be) time to market and agility; while RDRs may make the design process more difficult in the early stages, they should significantly speed up the time from design to production.

In Intel's case the size of their manufacturing makes it a necessity - while they employ a Copy Exactly philosophy, there are too many sites and too many tools with subtle differences which can't always be addressed. In this case RDRs help widen the process window and allow Intel to handle some unexpected variability... with one fab you can attempt to 'brute force' and tweak things; with 3-5 it simply isn't practical.

SPARKS said...

“I'm proposing that AMD can hope to find an edge by applying somewhat less rigorous limits on their designers at the cost of taking a small risk with yields.”

I’m going to go with this, even though I’m not on the same planet as you guys.

Your supposition, and it is a healthy one, implies AMD got lucky with K8 the last time around. Therefore, this time around, with K10, that hole card never materialized. This would be, I’m guessing here, substantiated by the initial low yields (and speed) on K8, which got progressively better over time - enter Opteron. It would then follow that, this time, they weren’t lucky enough to find the K10 solution (wild card?). Then the whole shebang could be written off as a failure, which in fact seems to be the case, as they are moving on to 45nm.

Perhaps AMD gave their designers a little too much latitude? I don’t know. But what I do know is that’s one hell of a game of brinkmanship and voodoo science to be gambling the company on. This is precisely why I am an INTC shareholder, and I always will be.

SPARKS

InTheKnow said...

Anonymous said...
Intheknow, I respectfully disagree... making things more difficult doesn't necessarily mean eliminating possibilities. The areas where AMD should be taking risks are in the fundamental high-level design (like they did with K7/K8). Making things harder in terms of routing/tracing does introduce restrictions, but it doesn't limit fundamental design choices (do you do an L3 cache, how large an L1/L2, # of FP units, etc...)

This is a fair criticism. And you are probably right regarding the high level design issues. Despite Intel's PR spin regarding the FSB, the introduction of HT for multi-socket servers was a good move. It bought them the market share they were able to win in that space.

But there is another facet of RDR that has the potential to impact yields in a negative way. It can result in a larger die. Or alternatively, if the design team has been given a pre-defined die size, it can result in the elimination of features.

And I think we would both agree that AMD needs the ability to include features. If this has to come at the expense of relaxing a design rule, I think it is a risk that should at least be considered on a case by case basis. They can cherry pick tools in the fab on a limited lot basis if needed.

I've been told Intel has sacrificed yield on certain products at times in the name of performance. I can see AMD doing the same.

And don't forget AMD's ace in the hole, APM 3.0. I've seen some very knowledgeable people mention that AMD uses it to tune individual transistors. :P (that was sarcasm in case it wasn't completely obvious)

Anonymous said...

"And don't forget AMD's ace in the hole, APM 3.0."

Too funny - I wonder what happens to all that "IP" if AMD goes fabless! Perhaps they can control the individual transistors in the foundry from thousands of miles away!

I don't think the RDRs are as impactful as you think... I'm sure there may be some space inefficiency introduced, but you need to weigh that against time to market. Time to market in this industry is the key (especially for AMD); then come volume, cost, etc.

Having worked in the process world, designers generally ask for everything 'just in case'; however, when you start spelling out the impact of those requests in terms of performance (a classic example is metal loss via CMP in especially high metal densities), it's amazing how many workarounds are quickly found instead of sacrificing a bit of performance. It's also amazing how 'just in case' suddenly becomes 'well, in truth that's probably not needed'.

InTheKnow said...

anonymous said...
Having worked in the process world, designers generally ask for everything 'just in case'; however, when you start spelling out the impact of those requests in terms of performance (a classic example is metal loss via CMP in especially high metal densities), it's amazing how many workarounds are quickly found instead of sacrificing a bit of performance. It's also amazing how 'just in case' suddenly becomes 'well, in truth that's probably not needed'.

And there you have me. By the time I've ever been involved, the design is pretty much a done deal. All I've been able to do is try to find a workaround for a challenging design. :)

I'll defer to the voice of experience on this one.

InTheKnow said...

Sparks said...

Your supposition, and it is a healthy one, implies AMD got lucky with K8 the last time around. Therefore, this time around, with K10, that hole card never materialized.

I wouldn't say got lucky so much as made a correct reading of future markets. They brought in HT and showed great performance in the multi-socket server space. With this edge, they were able to gain mind share among OEMs and establish themselves as a legitimate player.

On the manufacturing side, they had more margin for error at 130nm when HT was introduced (I believe it was 130nm). That margin has been steadily eroding with each process generation.

45nm will be tougher than 65nm was.
I believe we are reaching the point where major device changes are going to be necessary every 2-3 generations to keep seeing the levels of progress that we have become accustomed to. For 45nm it was metal gate. At 22nm it may be tri-gate transistors, or one of several other options that are out there. It will be interesting to see.

Given my viewpoint, you can see why I'm not sure that Shanghai is going to be any better than Barcelona.

SPARKS said...

"Given my viewpoint, you can see why I'm not sure that Shanghai is going to be any better than Barcelona."

Well there you have it. Your speculations, plus GURU's speculations (you both never cease to amaze me), lead us to TWO conclusions. Shanghai will be another dog, and Penryn will piss all over it.

SPARKS

Anonymous said...

"Shanghai will be another dog, and Penryn will piss all over it."

I wouldn't go that far - it will be a bit better at equivalent clocks given the extra L3 cache and, in all likelihood, some layout optimization based on 65nm learning. The power will be better due to active power reductions (while leakage will be worse on 45nm, the active power will be better thanks to lower Vt's and Vcores). So you will likely see similar clocks to 65nm at launch, with better power #'s under load, but probably similar #'s at idle and a bit better "IPC" performance.
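
As a footnote to the active-vs-leakage point, the first-order CMOS power model shows why a lower Vcore pays off under load even while the leakage floor worsens. A sketch with purely illustrative numbers (the Vcore and capacitance values are assumptions, not actual process specs):

```python
# First-order CMOS power model: active (switching) power goes as C * V^2 * f.
# All parameter values below are illustrative assumptions, not AMD specs.

def active_power(c_eff, vcore, freq):
    """Dynamic switching power in watts: P = C_eff * V^2 * f."""
    return c_eff * vcore**2 * freq

C_EFF = 20e-9  # assumed effective switched capacitance, farads
FREQ = 2.6e9   # 2.6 GHz, same clock on both processes

p_65nm = active_power(C_EFF, 1.20, FREQ)  # assumed 65nm Vcore
p_45nm = active_power(C_EFF, 1.10, FREQ)  # assumed lower 45nm Vcore

print(f"65nm active power: {p_65nm:.1f} W")
print(f"45nm active power: {p_45nm:.1f} W ({p_45nm / p_65nm:.0%} of 65nm)")
# The V^2 term means a ~8% Vcore drop cuts active power ~16% under load,
# while the (worse) leakage floor still dominates the idle numbers.
```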

So it will be better, but the problem is AMD needs it to be A LOT better to simply compete against Penryn, let alone Nehalem, which is what it will truly be matched up against throughout 2009 and 2010. Unless Nehalem flops, I don't see any scenario where AMD doesn't lose substantial server share.

They will still probably do OK in desktop/mobile thanks to the 'we'll just keep cutting prices, employees and stockholders be damned, just give me my bonus' Ruiz philosophy. Hey, another 10% price cut will just be another 1,500 employees - it's all about market share; who cares about profitability and solvency!

SPARKS said...

"while leakage will be worse on 45nm"

This lesson, well taught by you, has been with me ever since Charlie deleted it from his comments.

I know you consider many more factors. However, from a newb's limited perspective, I consider leakage their single biggest problem. The thermals are the evidence.

Beauty is only skin deep, but ugly goes to the bone.

SPARKS
