12.20.2007

Sometimes an Intel delay can be bad news for AMD

A report, or a rumour, whichever you wish to call it, comes from Digitimes suggesting Intel could be delaying the release of its 45nm desktop CPUs. Intel’s reason, according to the site, is a lack of competition in light of AMD’s current problems shipping Phenoms. This claim is highly controversial, and I am sure everyone has read the wide-ranging opinions from all over the net. What I find frustrating is the overzealous analysis from a CPU-performance standpoint. It is ridiculous to think that Intel’s production planning decisions are based on a competitor’s missteps rather than pure market demand. If we assume the rumour is correct, I can assure you that Intel (like any sensible company) will only delay a product because doing so creates the maximum return, either immediate or long term.

For a manufacturing company like Intel, volume is vital. The more volume it can produce on a set of tools (i.e., the new 45nm tools), the lower the unit cost becomes. Bear in mind, too, that 45nm depreciation begins the moment a tool is used for revenue products. Intel has no choice but to feed its new 45nm production lines as close to 100% capacity as possible. With volume fixed at maximum, the real question is not which product Intel is delaying, but which product Intel intends to produce in volume as a replacement.
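The economics here are simple enough to sketch. A toy model (all dollar figures are invented for illustration; they are not Intel's actual costs):

```python
# Toy model: with a fixed monthly depreciation charge on the 45nm toolset,
# cost per wafer falls as utilization rises -- which is why the fab must be
# fed as close to 100% capacity as possible.

def cost_per_wafer(wafer_starts, fixed_cost, variable_cost):
    """Fixed cost is spread over volume; variable cost accrues per wafer."""
    return fixed_cost / wafer_starts + variable_cost

# Hypothetical numbers: $100M/month depreciation, $1,500 variable cost/wafer.
for starts in (5_000, 10_000, 20_000):
    print(starts, cost_per_wafer(starts, 100e6, 1_500))
```

Doubling volume from 10K to 20K wafer starts nearly halves the unit cost in this toy model, so idling 45nm tools to "delay" a product would be the most expensive option on the table.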

Think mobile. For the past two quarters Intel has shifted its focus and thrown all its marketing resources into this space. Nonetheless, AMD remains successful in this segment, taking market share while driving ASPs significantly downward. If Intel wants to fight back using every competitive advantage it can get from 45nm, why not? So again, if the rumour is indeed true, just imagine what AMD’s mobile offering will be up against in the coming months. Armed with a lower cost per unit, Intel should find far fewer businesses worth “walking away from”.

102 comments:

core2dude said...

There is an equally plausible argument: yields on 45nm suck, and Intel didn't see it coming. However, fortunately for them, AMD dropped the ball on performance, and they are using that to spin their own execution problems.

Any sensible publicly traded company would do it this way.

core2dude said...

To add: the problem in mobile is not one of product strength. Intel has a far superior product line-up, and doesn't really need 45nm there. The main problem in mobile is that AMD has a good-enough product, and they are selling it for pennies. Average Joe doesn't care which product is theoretically superior, as long as he can see p0rn at an acceptable rate.

45nm would give them better margin. But if they are selling all that they can make, then it really doesn't matter.

Chuckula said...

From the rumors I've seen, only quad core CPUs are being delayed since Intel wants to milk more $$ out of the 65nm quads and the Qx9650. The mobile CPUs would premiere first, with desktop dual core Wolfdales still scheduled for the 20th of January.
I doubt Intel is having serious production problems, since the mobile and desktop chips comprise the vast bulk of their 45nm production. If these actually get substantially delayed, more than a month or two, then there might be some truth to the rumors of production problems. Right now, Intel has the smaller D1D and the larger Chandler, Arizona fab running. It is not in Intel's best interest to delay selling chips made by these facilities, since they still cost money to run. However, Intel does have the luxury of choosing which segments to put chips into for maximum profit, without worrying about real competition from AMD.

Axel said...

Chuckula

Right now, Intel has the smaller D1D and the larger Chandler Arizona Fab running.

Indeed, for the first half of 2008 Intel will only have 45nm output from those two fabs. Until the other two new fabs come online, they probably need to hold back a bit on quad-core desktop production to ensure ample dual-core supply for the mobile and desktop spaces. Through Q1 2008, quad-core output will likely be dedicated to the server space.

InTheKnow said...

There is an equally plausible argument: yields on 45nm suck, and Intel didn't see it coming.

There is already yield data (at least as much as Intel will ever release) in the public domain that shows 45nm at decent, if not spectacular, yields.

I think it is unlikely that they suddenly found a major yield hit they didn't see in September when the plot was published.

To add: the problem in mobile is not one of product strength. Intel has a far superior product line-up, and doesn't really need 45nm there. ...Average Joe doesn't care which product is theoretically superior...

But Average Joe does care about battery life. IIRC Intel is dropping the power envelope on 45nm mobile. This would mean longer battery life, and that is easy to sell.

Anonymous said...

“There is an equally plausible argument: yields on 45nm suck, and Intel didn't see it coming.”

Dude, dude, DUDE! This is precisely what INTC naysayers and hopelessly incurable, wretched AMD fanboys would love to believe. Don’t believe it for a second.

First, INTC’s 45nm rockets are clocking like no tomorrow. Penryn’s release was basically an underclock. Q6600 is kicking Pheromones ass up and down the benchmark gauntlet. X48 has been delayed, not for any technical reason; it was delayed to allow their partners to clear X38 inventory. What does it all spell? It’s sandbagging, pure and simple.

If anyone thinks, by any stretch, that there are bugs and glitches in Penryn, they have got to be up there with alien abductions and backyard antigravity physics.

The only thing INTC might not have seen coming was that 10K was such an over-pumped, miserable failure. The “Leap Ahead” motto was an understatement. C2D was a killer, Penryn is a tweaked shrink, and Nehalem will be a widow maker. They are lying low because they can. It’s business and it’s smart. I don’t like it. I want that X48, QX9770, 1600 FSB, DDR3 combo like no tomorrow. But I am an INTC shareholder, and I know exactly what they’re doing. This, I like.

If, by some miracle of happenstance, 10K can even approach anything near INTC’s top offerings, INTC will be poised to pee all over AMD’s parade. Everyone knows this; they’re not talking, but you should know this, too.

Further, if INTC drops prices on its low-to-midrange lineup, that would be AMD’s biggest nightmare. It will happen, but not right away, as the margins on C2D and the Qxxxx are entirely too good presently, and AMD is too weak. A price drop in February or March will allow enough time to clear inventory. This is gospel.

SPARKS

Unknown said...

AMD's processors suck for mobile. My AMD laptop is the worst computer I've ever bought. In fact, it was the last of 10 years of AMD computers I bought.

AMD may gain market share, but we know how much it is hurting them. They can't go on selling at a loss forever.

Anonymous said...

Intheknow said "But Average Joe does care about battery life."

Interestingly enough there is some news about silicon nanofibers increasing lithium-ion battery storage capacity 10-fold:

http://www.dailytech.com/Stanford+Researchers+Build+Lithiumion+Battery+Using+Silicon/article10088.htm

"Stanford assistant professor of materials science and engineering Yi Cui, graduate student Candace Chan and five other researchers made a breakthrough for lithium-ion batteries. The researchers used silicon nanowires in the battery anodes to design new lithium-ion batteries that can hold ten times the electrical charge of current batteries of the same size."

Tonus said...

Intel can modify its plans without stopping or slowing production. If most of your line of CPUs is faster than the fastest that your competition has, then increasing the speed-grades quickly would pretty much put you in competition with yourself.

core2dude said...


First, INTC’s 45nm rockets are clocking like no tomorrow. Penryn’s release was basically an underclock.

Speed and yield are related, and yet unrelated. You might be able to cherry-pick fast CPUs from an extremely bad-yielding lot.

The only "relative" yield information we have about Intel's 45nm process is about 9 months old, when Intel showed that the 45nm was at the same maturity level where 65nm was two years ago. But that does not mean that it also holds true even today. Things may have slipped from that point on. The process may not be HVM ready.

Today Charlie Demerjian wrote a story about this on the Inq--and Charlie is no AMD fanboy; he has slammed AMD on many occasions. Charlie also seems to have lots of contacts within Intel, as a lot of what he says about unannounced products turns out to be true.

I do not know whether the rumored delay is true. But if it indeed is, it is hard to imagine that the reason is anything but a yield issue.

And if it in fact is a yield issue, that bodes really badly for Nehalem. People have estimated Nehalem's die size to be around 270 mm^2.

Only time will tell whether Intel's 45 nm process is healthy or not. But again, Intel is not in a hurry either. They can take their time to sort things out (if at all they can be sorted out) as AMD is still struggling with 65nm.

Unknown said...

How can it be a yield issue if dual cores are fine? Intel glue, remember?

Anonymous said...

“Today Charlie Dimerjian wrote a story about this on Inq--and Charlie is no AMD fanboy, he has slammed AMD on many occasions.”

I like Charlie; he is no fanboy, yes. But sometimes he goes a little beyond reporting, covering AMD news and releases with less than an objective, critical journalistic eye.

After all, he was the one who wrote the now-infamous article, “Dancing in the Aisles,” nearly six months ago, when AMD had absolutely nothing but PowerPoint and estimated performance figures against Clovertown. I told him, both privately and publicly, to be careful about printing this spin without concrete product to verify such claims. The rest is history.

As far as INTC “cherry picking”, forget it; that’s absolute nonsense. Why bother at this point? They’ve got 65nm parts beating the guts out of anything AMD has now or in the foreseeable future. I’m typing on a Q6600 (G0) that’s overclocked to 3 GHz and about as stable as a brick.

INTC will not jeopardize its partners’ position by releasing faster products only to compete against its own lineup. They will allow their partners to capitalize on current inventories. There is no need to release a better product when you have the best product; you only do it when everyone is ready. They just keep cranking out the stuff that’s making them money. How do I know this? Go to Sharky Extreme and check out this week’s CPU prices. Most of the good stuff has gone UP in price. That means they are selling, and selling well; there’s demand.

Furthermore, in light of INTC’s current performance lead and increased market share, if people want to speculate that “Intel’s in trouble now!”, INTC couldn’t care less. They are making money on an 18-month-old product. Call it plausible deniability.

As for me, I couldn’t care less. The stock price rebounded 55 cents today, and the QX9770 WILL be released next month. “Cherry picked” or not, I’m in. The rest is speculation and rubbish.

Oh, and one more thing. Intel wants to regain laptop market share. Be prepared to see high-volume, power-sipping 45nm chips in full gear, gone mobile. How do I know this? That’s where the money is.

SPARKS

Anonymous said...

Oh, yeah, by the way read this. You can buy one today!

http://anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3184


SPARKS

Orthogonal said...

"There is an equally plausible argument: yields on 45nm suck"

"The process may not be HVM ready."


Bull$#!T, plain and simple.

http://www.theinquirer.net/gb/inquirer/news/2007/12/21/rumors-swirl-intel-45nm
"What are those rumours? The first one was that Intel is having problems ramping the 45/High-K/Metal process to volume so there is going to be a second 45nm non-High-K/non-Metal process to run the lower end volume chips."

If his sources are feeding him crap like this I'd seriously question the integrity of anything they say.

Also, I don't ever recall Intel making an official announcement that lower-end Quads would be released in January; that was just rumours making the rounds at the usual tech sites. The only thing I remember seeing is dual core and mobile parts in January. Someone correct me if I'm wrong. I believe the official position is, and always has been, Q1 '08.

InTheKnow said...

Speed and yield are related, and yet unrelated. You might be able to cherry-pick fast CPUs from an extremely bad-yielding lot.

You could, but the Q&R guys would kill you for trying. Poor yielding lots are suspect from a reliability point of view, and say what you will about Intel, they do take reliability seriously.

The only "relative" yield information we have about Intel's 45nm process is about 9 months old, when Intel showed that the 45nm was at the same maturity level where 65nm was two years ago. But that does not mean that it also holds true even today. Things may have slipped from that point on. The process may not be HVM ready.

Intel's 45nm process is not "new" in the sense that Barcelona is new. Intel has been ramping 45nm for over a quarter now with a focus on improving yields. Since there are no reports of dead engineers at any of Intel's 45nm facilities, I doubt this is the case. :) I can assure you that Intel would be working their process engineers to death if yields had slipped in any meaningful way.

D1D's charter is to develop, ramp and transfer processes. Intel's strength is in their ability to move processes to HVM, and D1D's whole focus is to do just that. I think they may well be the best in the world at what they do. I'm fairly comfortable saying that they would take it personally if there were significant yield issues on "their" process.

By way of evidence that Intel is not having yield problems, I would point out that D1D produced enough product on 45nm that Intel took a measurable write down on this inventory. They didn't scrap this material, but took the write down because they did not have enough inventory built up to sell it in Q3. That material should be bonused back in on Q4's books.

Anonymous said...

"Also, I don't ever recall Intel making an official announcement that lower end Quad's would be released in January, just that it was rumours making the rounds at the usual tech sites. The only thing I remember seeing is Dual Core and Mobile parts in January. Someone correct me if I'm wrong. I believe the official position is, and always has been, Q1 '08."

This is true... and this was also true of the 'Barcelona delay rumors' that appeared last summer before launch... neither AMD nor Intel provides launch-day specifics until a week or two before launch; they (understandably) give a very broad window, precisely to be able to adjust to unforeseen happenstance.

The 'delay' with respect to anything is relative only to rumors: a rumor started about one date, then a new rumor started about another date, and that second rumored date is a 'delay' compared to the first.

This is nothing new, just the ignorant frenzy of the rumor mill run amok.

On Charlie -- he lost all credibility long ago... shall we post the instances where he printed a rumor and had to retract? The most recent famous one is “Dancing in the Aisles”...

InTheKnow said...

Orthogonal, I know you've said you work for Intel in AZ. Out of curiosity, do you work in F12 or F32?

Orthogonal said...

I currently work at F12.

Orthogonal said...

D1D's charter is to develop, ramp and transfer processes. Intel's strength is in their ability to move processes to HVM, and D1D's whole focus is to do just that. I think they may well be the best in the world at what they do. I'm fairly comfortable saying that they would take it personally if there were significant yield issues on "their" process.


I'm glad someone said this. I don't think people fully understand the culture in Oregon. There is no way they're releasing a process to an HVM site until it's DONE.

Anonymous said...

Just to add a little more fuel to the bullshit fire: INTC has no fewer than 15 Xeon server processors, starting with the monster 5482, available presently, all at 45nm. HELLO!!!! Sounds like pretty good yields to me.

Do you think they are “Cherry Picking” all these, too???

Charlie is way out on this one, and I told him so.

SPARKS

http://www.intel.com/products/processor/xeon5000/specifications.htm?iid=products_xeon5000+tab_specs

Anonymous said...

Core2dude

Here's the real story.

http://www.channelregister.co.uk/2007/12/19/intel_delays_quad_penryns/

SPARKS

Anonymous said...

Well, I have just had it explained to me in some detail what might be going on with this delay.... and it is based on supply/demand.

THIS IS PURE SPECULATION (no better than what Charlie has done, frankly)... take it completely as that.

-- In the last half of this year, demand started building in anticipation of Barcelona; when Barcelona failed to materialize, that pent-up demand was released. As such, Intel was caught on the low side of supply against a high-side demand it had expected Barcelona to partially satiate.

Intel has a choice for the die with the right bonding pads to fit inside a dual-die MCM package: make s775 quad desktop parts or make s771 quad server parts (they are the same arch, just different packaging).

Given the choice between where you want your production going -- quad server or quad desktop, where would you want it to go?

So in a roundabout way, if this perspective is correct (it is speculation on my part, as explained to me)... AMD did cause the delay, but not because they fielded a competitive part; rather, they left a vacuum of demand, and Intel is choosing to sacrifice the DT parts to feed the server segment.

InTheKnow said...

Sparks said...
Just to add a little more fuel to the bullshit fire: INTC has no fewer than 15 Xeon server processors, starting with the monster 5482, available presently, all at 45nm. HELLO!!!! Sounds like pretty good yields to me.

You need to put that in perspective. Here is a very crude estimate. At 107mm^2 for a 45nm die, you will get ~550 good die per wafer. Since you are making an MCM quad, that will give you 275 quad cores per wafer. Your 5482 quad cores would require about 20 wafers.

I'd guess D1D is starting 8K - 10K wafers per month, and I'm guessing that F32 should be starting twice that in the near future. That is between 6.5-8 million quad cores per month. Even if you assume I'm being wildly optimistic and cut my numbers in half, you have 3-4 million quads a month. So 5482 quads is a pimple on the giant's butt.

I don't think you can infer good yields from those quantities. You would need to see the quantities of chips going to OEMs to really get a feel for yields.
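The back-of-the-envelope numbers above can be checked with the standard die-per-wafer estimate. A quick sketch (the 107 mm^2 die size, ~92% yield, and wafer-start figures are the comment's assumptions, not published Intel data):

```python
import math

def gross_die_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Classic gross die-per-wafer estimate with an edge-loss correction term."""
    r = wafer_diameter_mm / 2
    return math.floor(math.pi * r * r / die_area_mm2
                      - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

gross = gross_die_per_wafer(300, 107)   # ~596 candidate die on a 300mm wafer
quads = gross * 0.92 // 2               # assumed ~92% yield, 2 die per MCM quad
# ~274 quads per wafer; at ~24K wafer starts/month that is ~6.6M quads/month,
# in line with the 6.5-8 million range quoted above.
```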

enumae said...

In regards to the 45nm Quads, is this a feasible reason...

Today an associate got some details about the reasons for the Yorkfield announcement delays. According to some computer and motherboard producers, the existing 45nm Core 2 Quad processors cannot work stably in motherboards with a four-layer design. Intel is forced to reexamine this problem before beginning mass deliveries of Yorkfield models; it is assumed that they will move to a new revision in February or March.

But why doesn't this problem affect the already existing 45nm Intel processors? Recall that the 45nm quad-core Core 2 Extreme QX9650 (3.0 GHz) was introduced in the first half of November, and it also uses the 1333 MHz bus.

Those processors are usually used in motherboards with a six-layer design, so there is no operational stability problem in that case. But the 45nm Core 2 Quad models can be used in cheaper motherboards, which have only four-layer PCBs. To eliminate this problem, Intel decided to release a new Yorkfield processor revision. The dual-core Wolfdale processors do not suffer from this problem, so they can be used in motherboards with a four-layer design, and will be announced on January 20 as originally planned.


This is from xtreview

Roborat, Ph.D said...

According to some computer and motherboard producers, the existing 45nm Core 2 Quad processors cannot work stably in motherboards with a four-layer design

come on InTheKnow, it's time for you to spill some insider secrets. Those NDA things you sign when you join a company are nothing but unenforceable scare-tactic nonsense.

So really, what's been going on with the MB's? ;)

Anonymous said...

WHAT????

Change something, anything on my dream date QX9770, Home Coming Queen, because it doesn’t like to run on a cheap ass 4 layer board!?!?

Will I need to contact my accountant to adjust my personal income and portfolio for the extra 100 bucks on a premium board?

The X in QX9770 doesn’t mean x out a few bucks here or there, does it?



Oh, wait let me lower the horsepower on my big block Chevy because it’s ripping the engine mounts off the frame!

Oh my electric bill increased 2 bucks a month, time to pull the second X1900XTX, and the PC Power and Cooling 1KW PS!

Let’s water down the Johnny Walker Blue because the price is too high.

Let’s pull the 572’s out of the Fountain Executioner; it’s burning too much gas.

Those crazy Vipers have the tires letting loose out of second gear, let’s reduce the power!

Let’s get my wife a breast reduction because everyone is staring at her huge Belugas!

God, my Brooks Brothers suits don’t work for me in the crotch area, looks like I need to speak to plastic surgeon!




Stop the nickel-and-dime horseshit and give me the six-layer board so I can overclock the son of a bitch to 1866 FSB, synchronous, with some fat 1866 Dominators, will ya!!!
Don’t even THINK of messing with the chip so it can run on a four-layer board!

GMAFB!!!!!

SPARKS

InTheKnow said...

come on InTheKnow, it's time for you to spill some insider secrets. Those NDA things you sign when you join a company are nothing but unenforceable scare-tactic nonsense.

No NDA worries. I don't know any details on this issue. So I can speculate freely. :)

The claim is that there is a stability issue on 4-layer boards, but not 6-layer boards. That would point towards an issue with impedance on critical timing circuits.

The timing circuits are usually paired traces (Cu lines, if you prefer) that are isolated far enough away from the other circuitry on the board that there is no crosstalk. If you choose the right dielectric values between layers, you can virtually isolate each layer of the board from impedance effects on the other layers. This will give you 33% more area on a 6-layer board than on a 4-layer board. So I would deduce that there just isn't enough room on a 4-layer board to isolate those timing circuits properly.

The big question in my mind is "what can Intel do to change this?" If this report is accurate, and my speculation is right, Intel would have to reduce the number of timing circuits. I don't have the circuit design background to guess how to do this or even decide if it is possible.
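For what it's worth, the layer-count speculation above can be made concrete with the well-known IPC-2141 microstrip approximation. The geometry below is invented purely for illustration (it is not Intel's or any board maker's actual stackup); the point is only that trace impedance depends strongly on the dielectric height available, which a thinner 4-layer stackup constrains:

```python
import math

def microstrip_z0(h_mm, w_mm, t_mm, er=4.5):
    """IPC-2141 surface-microstrip approximation for characteristic
    impedance in ohms. h = dielectric height under the trace, w = trace
    width, t = trace thickness, er = relative permittivity (FR-4 ~4.2-4.8)."""
    return 87 / math.sqrt(er + 1.41) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Same trace, two hypothetical dielectric heights:
z_roomy = microstrip_z0(h_mm=0.2, w_mm=0.3, t_mm=0.035)  # ~53 ohms
z_tight = microstrip_z0(h_mm=0.1, w_mm=0.3, t_mm=0.035)  # ~28 ohms
```

If the speculation is right, a board without room for the proper geometry would run its FSB traces well off their target impedance, and the resulting reflections on a 1333MHz+ bus would show up exactly as the reported instability.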

enumae said...

A little more information...

PC Watch - Translated

core2dude said...

Enumae:

Thanks for the link. This is yet another possible reason, and a delay would definitely make sense.

SPARKS:
The 9650 is available. I don't think the 9770 is being delayed because of the MB issue (if this rumor holds any water). You can be sure as hell that if it is a yield issue, that also won't hold the 9770 back. The simple fact is, Intel does not need the 9770.

InTheKnow:
I am not a circuit engineer, but would it be possible for Intel to match the terminating impedance on the traces at the CPU level, so that the crosstalk would be reduced? (I had read something about this in transmission-line theory.)

1333MHz is a high frequency, and if Intel wants to push it into cheap mobos, they may have to change some CPU characteristics?

Anonymous said...

InTheKnow, what I was referring to was the Xeon processor model number.

X5482 12MB 3.20 GHz 1600 MHz DP 150W

Sorry about that. I forgot the X. However, as you can see, it is a monster nonetheless! More to the point, it's out there in the wild.

SPARKS

Anonymous said...

I'm not sure about these links regarding the MOBO issue and the 45nm delay. I much prefer to bury my head in the sand (AMD fans) or speculate randomly (Charlie@INQ) about more sensationalist process, yield, or even CPU-bug issues.

Why do you guys have to let actual information get in the way of a good delusion? Damn you!

Unknown said...

AMD Phenom 2.4 and 2.6Ghz delayed until Q2'08:

http://www.digitimes.com/mobos/a20071224PD200.html

AMD has recently notified its partners that the launch of higher-end quad-core Phenom processors, including the 9700 and 9900, will be postponed to the second quarter of 2008 from the original schedule of early 2008, according to sources at motherboard makers.

Orthogonal said...

That article just doesn't seem right, Giant. If they are having trouble fixing the TLB erratum, it wouldn't just affect the release of the 9700/9900 Phenoms, but ALL K10 chips. Either they got their wires crossed or they don't know what they're talking about.

Anonymous said...

You see, Giant, we were right again!

That said, Merry Christmas and happy Holidays to all!

SPARKS


G: Still lip smacking for QX9770?

Anonymous said...

Orthogonal,

Nah, that’s what they are reporting. They got the word from the MOBO makers. I suspect it’s more than the TLB bug. There are other problems, among other things, that this bug is conveniently masking.

This, too, shall come to light in the near future, as the geniuses on this site said it would. We spent a week on timing/weak-core/strong-core issues before you arrived.

A bad design on a broken process, and I believe it.

SPARKS

enumae said...

Happy Holidays :)

Unknown said...

Yes indeed Sparks! A QX9770 would be a nice stocking stuffer! Paired up with a nice ASUS X48 Rampage board, 4GB of low latency memory and an 8800 Ultra!

Merry Xmas to all!

Anonymous said...

Check out the Toms Hardware article. Basically in a nut:

“I don't really like the conclusion and AMD won't like it either, but the upgrade situation for users interested in replacing their Athlon 64 X2 processor with a quad core Phenom is all but promising. We looked at ten different motherboards to check how well these would work with the new Phenom quad core processor. The vast majority, eight out of ten motherboards, did not work with Phenom at all, which I found a very frustrating result.”


Holy cow, AMD was chanting about how Pheromones would be a direct plug-in upgrade. Well, add this to the long list of “missteps”. I’m surprised THG even published the article; they have been giving AMD a pass for over a year. I guess they’re done, too. Then again, they are protecting their reader base from buying a product that simply will not work on existing chipsets/boards.

I’ll stick my neck out and say we haven’t heard the last of this.

An unexpected change in platform? This is just more reason for people to go directly to INTC and a Q6600 outright. They killed 939, now this? This is not good, as they are screwing AMD fans once again.

http://www.tomshardware.com/2007/12/26/phenom_motherboards/page15.html

SPARKS

Orthogonal said...

I hadn't bothered reading the article, but was the incompatibility because mobo makers hadn't released a new BIOS, or was there something non-standard in the circuitry rendering the boards incapable of running Phenom?

Also, I wouldn't put too much blame on AMD; it's up to the mobo manufacturers to make sure they're up to date on the standards and product changes, although AMD will likely receive their fair share of criticism for this snafu, warranted or not.

Anonymous said...

"Also, I wouldn't put too much blame on AMD, it's up to the Mobo manufacturers to make sure they're up to date on the standards and product changes, although AMD will likely receive they're fair share of criticism for this snafu, warranted or not.
"

Yes and no.... I don't blame AMD for MB makers not releasing supporting BIOS updates; I do blame AMD for not giving them enough incentive (perhaps subsidies) to produce the updates. Here is why....

If you were a MB maker, would you expend R&D money to develop and proliferate a mechanism that stops the consumer from buying a new MB?

AMD's socket-compatibility insistence is great for the consumer, but sux for the board makers... they make zero sales every time someone decides to do a drop-in upgrade. The only reasonable way a MB maker would support such an endeavor is if AMD paid them for each BIOS downloaded, or some other form of subsidized payment.

Something AMD may have done in the past (via the X2 -- those fetched quite a premium) but might not be willing to pony up for this round; hence the 'feet dragging' wrt updates to support Phenom (pure speculation on my part).

Anonymous said...

Orthogonal, you know, if I didn’t know you worked for my cherished INTC, which has increased my share price by a lovely, FAT 40%, I’d swear your defense of the “Imitator” was almost compulsory. Hmmm, the Digitimes article, and now this? As a matter of fact, you suggested I call off the troops and put down the pitchforks during the TLB expose. Once, twice, and now thrice, hmmm.

No sir, no way. They’ve had plenty of time to ensure compatibility with previously released products. The way I see it, they were adamant about compatibility from the beginning, as they mentioned a mere BIOS upgrade very early in the product cycle.

Therefore, the question begs to be asked, from an objective standpoint: how many revisions have the motherboard partners received? Further, have any of them worked successfully at all? It’s not as if AMD didn’t have time to validate Pheromone on their chipsets, in house. What, they didn’t have time to do this on their own? Maybe they skipped this part and let the motherboard makers worry about compatibility issues on older products?

Perhaps they were so busy with the CPU fixes that all previous BIOSes are no longer of any use? If that’s the case, maybe things have changed so much there will never be a fix.

Hey, I’m not speculating here, I’m reading the article.

They said 8 out of 10 motherboards, with their respective manufacturers’ products, didn’t work. Read the article. Further, what would your boss think if you told him, “Ah, we said Q6600 would work on 975X, but ah, there seems to be a problem. Ah, we think it’s the MOBO makers’ fault, not ours.”

Horseshit. Big Paulie would be out there with a BIG axe, looking for some heads.

SPARKS

Anonymous said...

"Also, I wouldn't put too much blame on AMD, it's up to the Mobo manufacturers to make sure they're up to date on the standards and product changes, although AMD will likely receive they're fair share of criticism for this snafu, warranted or not."

For a company that is migrating to platforms and touting drop-in upgradability as a KEY DIFFERENTIATOR, AMD should take a hit for this. They are talking up new boards and chipsets (the Spider platform), yet have been remarkably quiet on Phenom performance in older boards.

The reason for this is rather obvious: if you put an X2 in a Spider platform and compared it to an X4 (or a K10 X2 variant), it'd be rather obvious that most of the benefit is from the platform and not the chip. Also, the old boards, even if they are upgradeable, drop the 'touted' benefits of K10 like split-plane voltages and HT3...

And does anyone think even the newest boards bought today will be compatible with AMD's next chip architecture? (Is that 2009 or 2010, via Bulldozer/Fusion?) The upgradability days are likely over; welcome to the land of (partial) system-on-a-chip, which means likely constant socket iteration as architectures change.

Unknown said...

Sparks, I'm sure you'll like this. :)

Pictures of Intel's X48 Bonetrail board, and Intel's dual socket Skulltrail board!

http://hardforum.com/showthread.php?p=1031826683

Anonymous said...

G:

Oh my, and the Bonetrail board is a plain Jane INTC factory job! INTC is getting very serious with the enthusiast lunatic fringe.

HOO HA! BRING IT ON, BABY!


SPARKS

Anonymous said...

That other blogger from some zone seems to be writing stuff we already knew months ago. And it even took him almost a month just to do so. Poor guy.

InTheKnow said...

Further, what would your boss think if you told him, “Ah we said Q6600 would work on 975X, but ah, there seem to be a problem. Ah, we think it’s the MOBO makers fault, not ours.”

But isn't this exactly what is happening with the top Intel desktop parts according to some of the rumors? The FSB doesn't seem to work on 4-layer boards if the rumors are right. Since the product was spec'ed for a 6-layer board, it certainly isn't Intel's fault.

That said, I think the most logical explanation (posted somewhere above) is that Intel's capacity is being used to supply the server segment. As we all know, the margins are much better in that space.

InTheKnow said...

Legit Reviews had this to say about the Phenom/Spider platform.

Their new Phenom processors do not perform clock to clock to Intel's latest offerings, so this delay puts AMD even further behind Intel. We were able to overclock the 2.6Ghz Phenom 9900 to 3.06Ghz and even at 3GHz it didn't pose a threat to really any of the Intel processors, so the only way AMD is going to be able to compete with Intel is by lowering prices once again and pitching the whole platform and not just the processor.

Many of our readers might think that the AMD Phenom processor series is late and doesn't bring enough performance to the table and I have to agree with you. AMD needed to have this part out months ago and at 2.6GHz or higher. Hopefully, AMD can get these early BIOS bugs worked out, but in reality, they shouldn't be early bugs. Phenom has been delayed and the quad-core Barcelona server parts have been out for some time now. I'm frustrated with this launch as I am sure you are. Intel needs competition and AMD hasn't bucked up and brought it to the table with their first true quad-core processor. The ATI Radeon 3800 series on the other hand offers amazing price versus performance value and killer features for a low price. The AMD roadmap shows their next new processor series will be the Stars 45 Processor, which will be AMD's move to the 45nm process. All you die hard AMD fans can keep your hopes up that the move to 45nm will go as good, if not better, than what Intel did at 45nm.
(emphasis added)

As we've discussed here in some depth, the secret to Intel's 45nm process is high-k/metal gate. Without it, AMD can't hope to compete. AMD's management team doesn't seem to be doing them any favors. Their best plan seems to be hoping for treble damages on their lawsuit.

Anonymous said...

”That said, I think the most logical explanation (posted somewhere above) is that Intel's capacity is being used to supply the server segment. As we all know, the margins are much better in that space.”

Put a dash of that, add a ¼ teaspoon of 45nm laptop goodness, pepper it with existing 65nm inventory in the field, mix in some AMD flop, simmer. What it all boils down to is INTC is intentionally laying down.

“The FSB doesn't seem to work on 4-layer boards if the rumors are right. Since the product was spec'ed for a 6-layer board”

OK, I’ll bite: “Geez, boss, the chips are running so fast with the 1600FSB (and above) it caught us a little by surprise, we’re getting crosstalk at 4.25 GHz!”

“Release the QX9770 and the QX9775 (high margin) for the customers who are going to buy the premium boards anyway. Hold off on the bread-and-butter stuff till we straighten this thing out. Meanwhile, drop the prices on 65nm, mid-1Q.” (I don’t think this speculation holds water, but perhaps it’s a worst-case scenario?)

Man, I’ll bet AMD wishes they had these problems!


SPARKS

Anonymous said...

In The Know

The Legit Reviews article said this:

“AMD was originally going to launch the AMD Phenom 9700 (2.4GHz) and AMD Phenom 9900 (2.6 GHz) quad core processors today, but they have been held off till Q1 2008. The launch of these two higher performance processors models will now coincide with the introduction of CrossFireX. Higher performance AMD Phenom processors will follow the introduction of the AMD Phenom 9700 and 9900 models, with a 3.0 GHz model in Q2 2008”



Digitimes is reporting this:

“AMD has recently notified its partners that the launch of higher-end quad-core Phenom processors, including the 9700 and 9900, will be postponed to the second quarter of 2008 from the original schedule of early 2008, according to sources at motherboard makers.”

Umm, call me stupid, but does someone else see a discrepancy in the timeline here? The bottom line: we still don’t know what’s going on with these jokers.

http://www.digitimes.com/mobos/a20071224PD200.html


SPARKS

Chuckula said...

Thinking about getting a cheap Phenom upgrade for an existing AM2 system?

Be careful: even Tom's Hardware had issues, and they were about the only ones giving Phenom positive spin on launch day.

From the conclusion:
We looked at ten different motherboards to check how well these would work with the new Phenom quad core processor. The vast majority, eight out of ten motherboards, did not work with Phenom at all, which I found a very frustrating result.

Anonymous said...

Sparks, what I find a bit humorous is LegitReview's reporting indicates a 3.0 GHz Phenom in Q2 ... I would say with 95% certainty that we will not see a 3.0 GHz Phenom (stock/binned) from AMD on 65 nm ever. A few will OC to that level.

pointer said...

InTheKnow said...
But isn't this exactly what is happening with the top Intel desktop parts according to some of the rumors? The FSB doesn't seem to work on 4-layer boards if the rumors are right. Since the product was spec'ed for a 6-layer board, it certainly isn't Intel's fault.


Let's assume the rumor is true on the FSB issue with 4-layer boards. Nevertheless, there is no issue with existing products; Intel is planning to fix it before releasing those Yorkfields, according to the same rumor.

Unknown said...

Sparks, what I find a bit humorous is LegitReview's reporting indicates a 3.0 GHz Phenom in Q2 ... I would say with 95% certainty that we will not see a 3.0 GHz Phenom (stock/binned) from AMD on 65 nm ever. A few will OC to that level.

Mario Rivas claimed that after the 2.6GHz Phenom is launched next year they'll launch 2.8GHz and 3GHz versions. I don't see them getting past 2.6GHz easily. They may hit 2.8GHz eventually, but with the leakage that they're seeing I doubt 3GHz is possible. Going from 2.3GHz to 2.4GHz raises the TDP from 95W to 125W, and from 2.4GHz to 2.6GHz increases it to 140W! That's just insane.
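Those TDP jumps are worth a quick back-of-envelope check. As a sketch (my own arithmetic, using only the TDP figures quoted above), compare each step against what naive linear-with-frequency scaling of dynamic power at fixed voltage would predict:

```python
# Compare the quoted Phenom TDP steps against naive P ~ f scaling.
# TDP figures are the ones cited above; measured power will differ.
bins = [(2.3, 95.0), (2.4, 125.0), (2.6, 140.0)]  # (clock in GHz, TDP in W)

results = []
for (f0, p0), (f1, p1) in zip(bins, bins[1:]):
    clock_gain = f1 / f0 - 1.0   # fractional clock increase
    power_gain = p1 / p0 - 1.0   # fractional TDP increase
    naive_tdp = p0 * (f1 / f0)   # what frequency scaling alone would predict
    results.append((clock_gain, power_gain, naive_tdp))
    print(f"{f0} -> {f1} GHz: clock +{clock_gain:.1%}, TDP +{power_gain:.1%}, "
          f"naive prediction ~{naive_tdp:.0f} W vs rated {p1:.0f} W")
```

The 2.3 to 2.4 GHz step is the telling one: a roughly 4% clock bump costs over 30% more TDP, far beyond what frequency alone explains, which is consistent with a voltage bump and heavy leakage.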

I'd be curious as to what sort of clockspeeds Guru, JumpingJack and all the other knowledgeable folks here think AMD will get from 45nm Shanghai CPUs on launch.

Also, what do you think the introduction date will be? My money is on a paper launch in late '08 with real availability in Q1'09. The same sort of thing happened with Brisbane in Q4'06 and Q1'07.

It's also worth noting that if AMD keeps to its January deadline for sampling Shanghai they'll be four months behind Intel with Nehalem. Intel demonstrated a quad core Nehalem at IDF in September.

Anonymous said...

Anonymous said...

"that other blogger from some zone seems to be writing stuff we already know months ago. and it even took him almost a month just to do so. poor guy."

Not to mention that his buddy Ah-Ben-Stoopid is also MIA for some time now :). Hasn't updated his blog since September, or posted on Sci's blog, or even flamed any fellow AMDers on amdzone. Perhaps the sheer weight of his cumulative stoopid statements has capsized his boat. Or else he is letting A$$Mountie take the pointy end of defending AMD.

Unknown said...

This is hilarious. AMD's 2.6GHz Phenom that you can't even buy yet is fragged by Intel's slowest Q6600 quad core:

http://www.xbitlabs.com/articles/cpu/display/amd-phenom.html

Be sure to check out the power consumption numbers. Phenom chews through 131.4W at full load whilst a Q6600 only consumes 79.1W! Yorkfield embarrasses Phenom even more by consuming under 60W!

Unfortunately, Phenom processors cannot boast low power consumption rates. Core 2 Quad Q6600 that is on average about 7% faster than Phenom 9900 consumes less power in idle mode as well as under workload. Although the Phenom power consumption curve shows that Cool’n’Quiet 2.0 actually works fine, it is not enough to beat the competitor from the energy-efficiency standpoint. Intel solution remains one of the best choices from the performance-per-watt prospective.

Some quick calculations now. The Phenom 9900 is the baseline here: it offers 100% performance, while the Intel chip offers 107% of the Phenom 9900's performance: 7% more.

Therefore, using the power results above for a full load, the Phenom 9900 CPU uses 1.314W to achieve 1% of its full performance. The Intel Q6600 only needs 0.739W to achieve the same 1% of performance. This means that in AMD's vaunted performance-per-watt metric, Intel kills AMD. As these results show, the Intel Q6600 offers 77% greater performance-per-watt than the AMD Phenom 9900! And remember, this isn't even factoring in Yorkfield, which offers greater performance while consuming significantly less power.
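For anyone who wants to verify the arithmetic, here is the same calculation as a quick script (a sketch using only the Xbitlabs figures quoted above):

```python
# Re-derive the performance-per-watt comparison from the quoted figures.
amd_load_w = 131.4    # Phenom 9900 full-load power (W), per Xbitlabs
intel_load_w = 79.1   # Q6600 full-load power (W), per Xbitlabs
amd_perf = 100.0      # Phenom 9900 as the 100% baseline
intel_perf = 107.0    # Q6600 averages ~7% faster

amd_w_per_point = amd_load_w / amd_perf        # ~1.314 W per perf point
intel_w_per_point = intel_load_w / intel_perf  # ~0.739 W per perf point

amd_ppw = amd_perf / amd_load_w        # performance per watt, higher is better
intel_ppw = intel_perf / intel_load_w
advantage = intel_ppw / amd_ppw - 1.0  # the Q6600's perf/watt lead
print(f"Q6600 perf/watt advantage: {advantage:.1%}")
```

It actually comes out to roughly 77.7%, so the 77% figure above is the truncated value.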

Some other observations here. Why did I use the Xbitlabs results? Because they're fair. They used the fastest 1066MHz DDR2 memory you can buy on the AMD system. They used the fastest 1333MHz DDR3 memory you can buy on the Intel system. In each case this is the fastest official standard of memory for each platform. If AMD supported DDR3 memory, Xbitlabs would have used it. AMD does not support DDR3 yet, and thus was stuck with DDR2. Over at AMDZONE some of those idiot fanbois are ranting about Xbitlabs using 1333MHz DDR3 memory for an Intel system. They instead would have you read results from a website (the OverclockersClub review) that used 1066MHz DDR2 on an AMD platform while leaving the Intel system stuck with slow 667MHz DDR2 memory! What kind of crap is that?

You've also got idiots like The Ghost claiming that Intel cheats on TDP ratings, when this is clearly a lie from a desperate AMD fanboi. The Xbitlabs power figures are obtained by stressing all four cores on Prime95 - that's a far more intense workload than most will subject their PC to. Yet, even when this is done, a Q6600 with a TDP of 95W consumes 79.1W. This is clearly under the TDP. The Yorkfield CPU consumes under 60W under a full load. The conclusion from this? Intel is being quite conservative, especially with 45nm Yorkfield products, in terms of TDP. George Ou did a post on this as well. The QX9650 is rated for 130W yet the results showed it using ~65W. Again, a very conservative TDP.

AMD BK by Q2'08.

Unknown said...

Whoops. Please kindly ignore the last line of my previous post. The BK line was only intended for the copy I posted to Sharikou's blog. I like reminding him of how ridiculous his "Intel BK" predictions are. :)

Tonus said...

Thanks for the link, giant.

While reading the scores I kept thinking that it wasn't as bad for AMD as I had feared; their processor seemed to be competing in its price range.

Then I realized that this was the overclocked CPU at 3GHz. The others seem to be solidly behind the Q6600 in terms of performance. If this is accurate, then the Phenom 2.6GHz is already slower than a CPU that is currently at the low end of the Intel price scale.

That is not good news for AMD.

Anonymous said...

From http://blogs.moneycentral.msn.com/topstocks/archive/2007/12/19/tech-train-wrecks-in-2007.aspx

Tech train wrecks in 2007

AMD
Share performance: From $20 in January to $7.68 yesterday.
Oops moment: Hard to choose just one. I'll go with the disappointing delays of its key chips.
What happened: Manufacturing problems affecting production of Opteron and Phenom chips, hurting AMD's chances of catching up to rival Intel.
Chance of recovery in 2008: Low, if you listen to the analysts. Intel has lots of momentum right now.

Anonymous said...

AMD’s news this month has been an absolute disaster. After reading the above posts with everyone’s comments, after reading all the articles with failure after successive failure, it seems no matter what they do, no matter how hard they try, they just can’t get anything right.

From the beginning:

The ridiculous announcement, before Phlegm was launched, of broken quads for sale.

Phlegm has thermals that make Prescott look energy efficient.

Claims that new steppings would produce higher speeds; they never materialized.

Yields were so low that they couldn’t get them SPEC qualified.

The TLB issue that surfaced close to, if not after, the chip being launched and out for sale.

Compatibility issues: they are not, and perhaps never were, drop-in replacements.

Now I read at Digitimes that they are going ahead with tri-core failures WITH the TLB issue AND a broken core. There’s a Kodak moment.

Looking back, since AMD’s Nov 19 financial ‘misguidance’ where WRECTOR and his cohorts are claiming a “return to profitability,” I’d like to know one thing: with what?

Call me crazy, but is there anything in their entire lineup, including ATI products, taken as a whole, where any profitability is possible for the next year, in its entirety, quarter by quarter?

The new spin, as I see it, is the new and improved lifesaving 45nm. Does this mean the entire 65nm product cycle called Barcelona will be written off as a total loss and failure? Some may say, ‘see, it works at 2.4 GHz’ or better, but it is selling so cheap they’re not recouping any initial capital costs, not to mention the additional cost to fix and re-fix.

The way I see it, they are faced with a huge dilemma. Scrap Barcelona in favor of a new architecture, or continue on with Barcelona with its respective issues and try to sort them out. Is it possible that this pig could fly at 45nm, or will more issues surface?

Let’s say they get it to run well at 45nm, how much can they sell it for? There is a little company called Intel they still need to compete with in the interim. Further, they have added more capital costs to a failed product at 65nm only to spend more at 45nm trying to be competitive financially.

Or worse, redesign a new product and break INTC’s golden rule, never introduce a new architecture on a new process, and spend more money on initial capital cost, again?

Where is the money going to come from now? They’ve hit the Germans; they’ve tapped the Arabs, who’s left, the Chinese?

I’ve been in situations like this while playing chess. The king is boxed in with nowhere to go.

It’s called checkmate.


SPARKS

Anonymous said...

So much for the 'benefit' of individually clocking cores. The folks over at AMDzone were able to OC a 2.3GHz Black Edition stably to 2.7GHz. By clocking 2 of the cores low, they were able to increase the other 2 cores to a whopping 2.8GHz (the other 2 were in the 1GHz range).

Now it's quite possible that there are other issues limiting things here (likely both PROCESS and CURRENT ARCHITECTURE), but I think the jury's out on whether individual core clocking is of much useful benefit, save some minor power savings.

Here's the link for those interested:

http://www.amdzone.com/amdzone/index.php?option=com_content&task=view&id=9413&Itemid=29

Anonymous said...

'I'd be curious as to what sort of clockspeeds Guru, JumpingJack and all the other knowledgeable folks here think AMD will get from 45nm Shanghai CPUs on launch.'

I've said in the past that the initial 45nm will be lower than the 65nm top bin (just as things were on 90nm --> 65nm).

I had previously thought AMD would have a chance to hit 3.0GHz mid-year ('08) with an outside chance of 3.2 on 65nm - this was based on an expected 2.6GHz launch last Sept. I'd have to revise my estimate to a likely top 65nm bin of around 2.8 when 45nm paper launches this year, though I think AMD may have a shot at 3.0 (65nm) by end of '08, and will likely raise one more bin on 65nm after 45nm has launched (like 90nm/65nm). I'd say a top 65nm bin in mid-'09 at either 3.0 or 3.2GHz (basically add 200MHz to whatever exists at the end of '08) - now this bin will be IN VERY LIMITED SUPPLY ('cherry picked') and AMD may have to play with the ACP definition to get it under 150 watts - but in all honesty, for the enthusiast market who would buy this, power doesn't and probably shouldn't matter that much (server parts are a different story).

I'll qualify my answer by saying I really need to see the Vcores AMD needs for these 2.6GHz and possibly 2.8GHz chips - if these are relatively close to current K10 Vcores then 3.0/3.2 should be doable - if they need to add 0.1V (or something in that neighborhood) then 3.2 ain't happening and even 3.0 may be a reach.

As for 45nm launch - if they hold to early Q3 release (I'm assuming that's what midyear means) I would likely GUESS the initial parts would top out at around 2.4 or 2.5GHz (possibly 2.6 if AMD has 2.8GHz 65nm parts at the time)

A lot depends on how AMD targets their 45nm process. The active power will go down as 45nm has a lower Vt (threshold voltage) for the transistors, and thus AMD should be able to lower Vcore and therefore reduce active power.

However, off-state power will go up due to both gate and subthreshold leakage. This is where AMD's targeting comes in:

- if they don't want to jack up clock speeds, they will not scale the gate oxide much, leakage will be somewhat under control, and they will likely be able to launch the parts at clock speeds near the 65nm counterparts (maybe 1 bin down); however, there will be little to no clockspeed upside after launch and 45nm will be relatively flat with respect to clockspeed over its lifetime.

- if they want 45nm to eventually be higher bins than 65nm long term, then they will scale the gate oxide (and other parts of the process) and they will suffer leakage issues early on and be 2-3 bins lower at launch but potentially have more clock speed upside long term. The reason for the drop off is that it is likely the sort for the bins near the 65nm parts will be terrible and it will take AMD some time (and possibly several steppings) to get the 45nm process yielding enough parts at the higher bins due to leakage issues.

- the third option is AMD may do something crazy and try to adjust gate oxide with some of their CTI steps (leave it thick for launch, and then try to scale it for speed as 45nm goes on). This in my opinion would be suicide as changing gate oxide drives changes in a ton of other things - generally the first thing you try to lock in is the transistor gate stack (which is also why I don't think you will see high K on 45nm save for a few potential pilot/test runs).

If I were AMD, given their financial position, I would go the conservative route and try to focus on the economic benefits of 45nm (from the smaller die size) - their process doesn't have the leakage headroom to scale 45nm that aggressively (in terms of transistor performance and speed). AMD has already said they are targeting 20-30% improvement, as opposed to the previous generation where they claim they got 40% (emphasis on CLAIM). Also, given their shorter transition time (if their schedule holds), that potentially means they did less to the front-end process and were mainly focused on immersion litho and feature shrinks.

So given the slightly faster transition to 45nm and the lower improvement targets going 65nm to 45nm vs 90nm to 65nm, I'd say 45nm is essentially a dumb shrink (feature size as the only real major change), and therefore has minimal clock upside; however, because of this approach the launch speeds should be close to the 65nm parts.

If they have 2.8GHz 65nm parts when 45nm launches, I'd say a 2.6GHz 45nm launch for the top bin (if AMD only has a 2.6GHz 65nm part when 45nm launches, that would lead me to guess a 2.4GHz 45nm part at launch). However, even if AMD somehow manages a 3.0GHz 65nm part prior to the 45nm launch, I'd still think 45nm would launch at 2.6GHz max.

I'd also say 45nm will likely either have the same top bin as 65nm or possibly one bin (200MHz) better - I don't count these crappy half-multipliers as a bin.

What is really needed though is some 65nm OC K10 data to see how much voltage is required to get each successive bin (especially in the 2.6/2.8 ranges) - that will tell a lot as to how bad the leakage issue really is and how much of a cliff the process is on.

Sorry for the ramble, and lack of cohesiveness... one too many holiday beverages... if this is not clear please let me know.

Anonymous said...

“Sorry for the ramble, and lack of cohesiveness... one too many holiday beverages... if this is not clear please let me know.”

You just keep drinking and typing GURU, your ramblings are like an unexpected Christmas present from a distant relative, never, ever boring.

By the way, nice piece, however, you have been saying these issues would surface all along. They have.


Happy New Year, Guru.


SPARKS

Anonymous said...

It seems the INQ (the Rag) can no longer maintain the “pimp AMD factor”, not with these best-case-scenario, cherry-picked Phlegms. They woke up and smelled some real coffee.

SPARKS

http://www.theinquirer.net/gb/inquirer/news/2007/12/29/first-inqpressions-amd-phenom

Anonymous said...

I'd be curious as to what sort of clockspeeds Guru, JumpingJack and all the other knowledgeable folks here think AMD will get from 45nm Shanghai CPUs on launch.

When AMD/IBM first published information on their 65 nm process (you may recall the 40% improvement headline), they did as most companies do... i.e. they showed typical transistor-level parametrics at IEDM 2005. When comparing that data to the same data Intel published for their 65 nm process, it was pretty straightforward to arrive at the conclusion that the initial revision of AMD's 65 nm process was not going to be quite as good as Intel's (I estimated, based on the Idsat data, a top bin about 10% lower than Intel's top bin).

That estimate was fairly good, as Intel's top bin was 2.93 GHz at launch, and AMD's top bin was 2.6 GHz for their 65 nm.
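As a trivial check (my own arithmetic here, not anything from the IEDM papers themselves), the ~10% estimate lands almost exactly on AMD's actual top bin:

```python
# Back-of-envelope check of the Idsat-based top-bin estimate above.
intel_top_bin = 2.93    # Intel's 65 nm top bin at launch (GHz)
estimated_gap = 0.10    # ~10% lower, per the Idsat comparison
predicted_amd_bin = intel_top_bin * (1 - estimated_gap)  # ~2.64 GHz
actual_amd_bin = 2.6    # AMD's actual 65 nm top bin (GHz)
print(f"predicted ~{predicted_amd_bin:.2f} GHz vs actual {actual_amd_bin} GHz")
```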

Unfortunately, neither IBM nor AMD have published 45 nm parametrics at the last IEDM, so I cannot say with the same kind of confidence that I would have one node back. Frankly, I don't know --- they may pull a 'stressing' rabbit out of the hat, and the ultra-low K will help.

However, if I were guesstimating (for lack of any real data), I could comment on the trends... classical scaling for CMOS devices hit its limit at 90 nm, i.e. not all critical layers and geometries continued to scale at the historical factor of 0.7. Stress engineering bought back some of the performance normally expected of a classical shrink, but 65 nm was more or less a lateral revision with only power density benefits... max clocking was anemic to say the least.

Intel removed the major stumbling block to improved performance at the transistor level with high-K/MG, and based on the trends I suspect that without it, any 45 nm attempt is futile at best ** WITH RESPECT TO PERFORMANCE ON CLOCKING ** ... there will still be a small power density benefit, some headroom there... but I am hard pressed to go out on a limb and predict AMD's top bin on 45 nm without high-K/MG... anything could happen.

But, what the heck, I will venture an educated guess -- I don't think 45 nm will surpass 65 nm for top clocks, i.e. 45 nm will be lucky to hit 3.0 GHz on a quad core product.

Unknown said...

Thanks for the info Guru. That's a superb analysis of the facts :-). AMD overhyped K10/Barcelona/Phenom and really failed to deliver at all. If your clockspeed predictions are accurate, they (AMD) won't even have a product to compete with Intel's fastest Clovertown and Kentsfield CPUs until 2009!

Sparks, perhaps you'll need a new video card for that QX9770 system you're thinking of. Would one of these fit the bill nicely?!

http://bbs.chiphell.com/viewthread.php?tid=14253&extra=page%3D1

- Codenamed G100
- 65nm process
- 256 shader processors
- 780MHz core clock
- 3200MHz memory clock
- 512-bit memory width
- 2048MB (256X8) GDDR5 chips
- GDDR5 @ 0.25-0.5ns
- Dual DVI-out
- Supports DX 10.1, VP3
- 15-25% lower TDP than 8800GTS


From these specifications, the card looks to be at least twice as fast as the 8800 Ultra.

A G100 video card and a new Yorkfield CPU will be the final upgrade for this system I think, before Nehalem comes in the late 2008 timeframe. Hopefully Intel will provide a concrete timeframe for the introduction of products based on Larrabee as well.

Unknown said...

But, what the heck, I will venture an educated guess -- I don't think 45 nm will surpass 65 nm for top clocks, i.e. 45 nm will be lucky to hit 3.0 GHz on a quad core product.

Wow. That's harsh. AMD is going to need a lot more than 3GHz to compete with Intel. Rather than trying to compete with Intel in clockspeeds, AMD should focus their efforts on bringing Bulldozer to market ASAP.

I think we're already seeing a lot of leakage on AMD's 65nm process, the 140W TDP on the Phenom 9900 is proof of that. Without the HK/MG to fix the leakage at 45nm it'll just get worse. I think you're quite correct in stating that AMD won't see much of a performance improvement with the 45nm process in terms of clockspeed due to severe leakage issues. They will, however, enjoy the reduced costs associated with the smaller die size.

Thanks for the write-up JumpingJack. :-) I enjoy reading some of the posts here, they're very enlightening indeed.

Anonymous said...

"Frankly, I don't know --- they may pull a 'stressing' rabbit out of the hat, and the ultra-low K will help."

There's not much more mobility improvement (via stress) out there; there are alternative channel materials, but those are still a long way off (pure Ge, III-Vs) - or perhaps Hector could find a way to transfer the added stress he is feeling at the moment. AMD is already using 4 stress techniques on 65nm, and outside of a little more Ge in the selective SiGe (and there are fundamental limits there), there's not much they can do.

As for ULK (ultra low K) - will this really help? All this does is prevent the backend from being the switching speed limiter, no? Traditionally the front-end switching speed remains the key limiter, and while RC delay in the backend is an ever increasing issue, ULK is more a way of preventing it from becoming a limiter as opposed to making 45nm faster. IBM/AMD could go to an air-gap solution (the ultimate low K), but that still won't make the transistors switch any faster.

On the front end, I'm not sure what AMD is doing for anneal, but there are several techniques (flash, laser) that may squeeze out a little more Idsat (and therefore switching speed/clockspeed). There may also be some more implant/junction engineering tweaks, or Rext (contact, salicide) work, that can be done.

All of these would likely be minor without gate scaling, and I think this is why you see AMD's modest claims of targeting ~25% performance as opposed to the >40% they claimed with the 65nm transition. It is rather remarkable that AMD/IBM have not published 45nm data at IEDM - especially given past claims of how far along they were. I don't think this bodes well for 45nm performance, and it strengthens my theory that AMD's 45nm is just a dumb shrink of 65nm.

RANT ON: I've said it a hundred times now - this is why you can't simply look at schedule to suggest a company is closing the technology gap (like some other blogs have claimed on numerous occasions). The 45nm car may soon be rolling off the line, but it will be firing on only 3 of its 6 cylinders, the tires will be underinflated and the brakes won't work - but hey, it's out the door and that's all that matters, right? (Hmmm.... AMD seems to be utilizing this philosophy not just for technology nodes, but also for products now)....RANT OFF

Public service announcement - Don't drink and blog (or comment)!

Anonymous said...

Giant

“Sparks, perhaps you'll need a new video card for that QX9770 system your thinking of. Would one of these fit the bill nicely?!”

The key word is ONE. I realize, of course, that you know it’s absolutely eating me alive that I cannot have TWO of them cranked into the lovely ASUS X48 MOBO I saw somewhere on the web. The performance step up, as you know, with two of anything, is quite substantial.

“ From these specifications, the card looks to be at least twice as fast as the 8800 Ultra.”

However, if this thing can waste two mutt ATI/AMD cards, I’m in. That’s after I recover from my fits of depression. I WANT SLI!

Big Paulie: if you're listening, when you're done grinding AMD into the pavement, please set your sights on NVDA.


Two ATI/AMD dogs eat one Nvidiot monster. That said, thanks for the thought and the link. Forgive me; it’s time for my Paxil.



SPARKS

Anonymous said...

I will add my comments in parts to split up the quotes...

There's not much more mobility improvements (via stress) out there, there is alternative channel materials but those are still a long way off (pure Ge, III-V's) or perhaps Hector could find a way to transfer the added stress he is feeling at the moment. AMD is already using 4 stress techniques on 65nm and outside of a little more Ge in the selective Si-Ge (and there are fundamental limits there), there's not much they can do.


I don't disagree, which more or less prompted my 'pull a rabbit out of the hat' ...

Some explanation for the regular readers on stress engineering.

First, how an electron travels through a crystalline lattice is a subject in and of itself in physics. Because of the ordered nature of crystals, an electron traveling in one direction experiences different potentials and periodicity than if it were traveling in a different direction (say 15 degrees off angle), i.e. an electron can move with greater ease in a preferential direction within a crystalline lattice. The theoretical model derived from this is the electron dispersion curve, which can be calculated (well, not easily, but it can be modeled) based on the known geometry of the crystal. Phonon scattering and such are taken into account.

If one could pre-arrange atoms in the lattice such that a traveling electron has an 'easier' time moving from point A to point B, then it should be advantageous, in fact it is... one can increase electron mobility by simply 'stretching' atoms apart in a preferred direction, this is called strain.

This is where people get confused: the actual mobility enhancement coming from stress engineering is not from the stressing agent itself, but from the strain induced by the stressor. More stress means more strain (more pushing around of atoms), and thus higher mobility, higher Idsat, faster switching transistors.

Ok... basics aside, stress engineering is good because it is a means to improve performance beyond natural scaling (simply making things smaller); the problem is that the effectiveness of stress engineering decreases rapidly as you get smaller.

Think of it this way: strain is the degree to which the atoms are pulled apart, while stress, as a pressure, is force per unit area. Apply the same stress over a smaller area and the total force, and hence the effective strain, is less. To compensate, you have to drive higher stress over the smaller area just to get back what was lost in the shrink.

The analogy that explains this is quite simple: think of stretching a rubber band. In its stretched state the rubber band is under strain. Now grab each end of the band and stretch it within the confines of a size 10 shoebox; you can stretch it only until the box limits how far you can pull. Now take the same rubber band, same person stretching it, but shrink the box to a size 5 shoebox. As you stretch the rubber band, the smaller shoebox limits how far you can stretch it... i.e., the same rubber band ends up under less strain because of the geometric confines of the smaller shoebox.
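The shrink argument can also be put in toy numbers: at constant stress (a pressure), a smaller cross-section sees less total force. A rough Python sketch, where the stress level and channel geometries are invented purely for illustration (nothing here is a real process value):

```python
# Toy illustration: the same channel stress applied over a smaller
# cross-section delivers less total force on the lattice.
# All numbers below are illustrative, not real process values.

def force_on_channel(stress_pa, width_nm, height_nm):
    """Force (newtons) = stress (pressure, Pa) x cross-sectional area (m^2)."""
    area_m2 = (width_nm * 1e-9) * (height_nm * 1e-9)
    return stress_pa * area_m2

stress = 1.5e9  # 1.5 GPa, a plausible order of magnitude for a channel stressor

f_65nm = force_on_channel(stress, width_nm=65, height_nm=50)
f_45nm = force_on_channel(stress, width_nm=45, height_nm=35)

print(f"65nm-class geometry: {f_65nm * 1e6:.2f} uN")
print(f"45nm-class geometry: {f_45nm * 1e6:.2f} uN")
print(f"Force retained after the shrink: {f_45nm / f_65nm:.0%}")

# To recover the lost strain, the stressor itself must be driven harder:
required_stress = stress * f_65nm / f_45nm
print(f"Stress needed at 45nm for the same force: {required_stress / 1e9:.2f} GPa")
```

With these made-up dimensions, barely half the force survives the shrink, which is the shoebox analogy in arithmetic form.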

Wrapping up ... I agree, stress engineering has its limits. AMD put 4 stressors into the equation and still has not been able to achieve a 65 nm process whose performance matches their 90 nm; I do not have much hope that they can do much more with 45 nm....

In terms of channel mobility, there are other academic research tricks, binary semiconductors being one (SiGe, as you mention above), but approaches of that type are as radical as high-k/MG, and I have seen very little noise about them in practical CMOS manufacturers' technical discussions. Those are a ways out, I would think....

Jack

Anonymous said...

As for ULK (ultra low-k) - will this really help? All it does is prevent the backend from being the switching-speed limiter, no? Traditionally the front-end switching speed remains the key limiter, and while RC delay in the backend is an ever-increasing issue, ULK is more a way of preventing it from becoming a limiter than a way of making 45nm faster. IBM/AMD could go to an air-gap solution (the ultimate low-k), but that still won't make the transistors switch any faster.


The real sexy stuff is at the transistor level, I don't disagree there... however, we often forget the backend when discussing max clock speed, and the backend does indeed contribute capacitance to the circuit, which affects the overall clockability of the chip.

Looking at Intel's 45 nm process, the data does not suggest we are at an RC-limited point on the performance curve (these things are clocking up to 4.2-4.5 GHz, for goodness' sake)... however, there is data suggesting that AMD is somewhat limited by their backend.

- Their design went to 11 metallization layers; you only do this if you are fighting RC delay.

- They lost several cycles of L2 latency going to 65 nm (forget the absolutely bogus 'reserving the right to add more cache' excuse); the poor L2 latency of Brisbane points squarely at a poorly performing backend process.

Finally, improving the backend capacitance will also help on power, and anywhere they can save power opens up a little more headroom to push up clocks. P = C*V*V*F, where C is the total capacitance of the circuit; relative to the transistors themselves the backend part is not huge, but it is a contribution.
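To see how much the C term matters, here is a back-of-the-envelope sketch in Python. The capacitance, voltage, and frequency values are invented for illustration, not real chip parameters:

```python
# Dynamic power: P = C * V^2 * F.
# All values below are hypothetical, chosen only to illustrate the scaling.

def dynamic_power(c_farads, v_volts, f_hz):
    """Dynamic switching power in watts."""
    return c_farads * v_volts**2 * f_hz

C = 20e-9   # 20 nF total switched capacitance (made up)
V = 1.2     # core voltage (made up)
F = 2.6e9   # 2.6 GHz clock (made up)

base = dynamic_power(C, V, F)
print(f"Baseline power: {base:.1f} W")

# Shave 10% off total C (e.g. via backend improvements) at the same clock:
improved = dynamic_power(0.9 * C, V, F)
print(f"With 10% lower C: {improved:.1f} W")

# Or spend the savings on frequency instead: at a fixed power budget,
# F scales as 1/C, so a 10% C reduction buys ~11% more clock headroom.
print(f"Clock headroom at the original power budget: {F / 0.9 / 1e9:.2f} GHz")
```

The point is simply that every bit of capacitance trimmed from the backend can be traded for either lower power or higher clocks, exactly as the P = C*V*V*F relation implies.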

So is the AMD/IBM process currently RC-limited? Is this why 65 nm is struggling to get top clocks even close to 90 nm? I don't have answers to these questions. It is certainly possible, but AMD/IBM's backend k and Cu metallization are much the same as Intel's, and Intel's data does not suggest an RC limit. Who knows; we will find out when IBM/AMD release their 45 nm processors.

Anonymous said...

It is rather remarkable that AMD/IBM have not published 45nm data at IEDM - especially given past claims of how far along they were. I don't think this bodes well for 45nm performance, and it strengthens my theory that AMD's 45nm is "just a dumb shrink of 65nm".


I found it remarkable too... almost shocking, considering that they plan to ramp 45 nm in 2008; typically companies like to 'strut their stuff' at IEDM the year before a process goes into production.

Anyone's guess is as good as mine... it could be:

a) They knew they would be upstaged by Intel's high-k/mg and decided to pass on putting more attention on it than necessary.

b) It's really not anywhere near ready, and they don't want to spook the financial guys into thinking they are running late when they in fact are....

c) They have something truly phenomenal, and want to hold back disclosing anything for competitive reasons.

I don't know, but I was very disappointed in the notable lack of data at IEDM from the IBM camp.

Anonymous said...

RANT ON: I've said it a hundred times now - this is why you can't simply look at schedule to suggest a company is closing the technology gap (like some other blogs have claimed on numerous occasions).

No need to rant, you are precisely correct. Any gap AMD/IBM are closing based on schedule at this point can only be argued from the die-size/cost equation. Even then, what they say, and when they say they are going to do something, often does not turn out to be true in reality... (Barcelona was hoped to launch in Q2 2007, and it is looking like Q2 '08 for real).

My fear (and it should be everyone's fear) is that 45 nm is not going to yield a major performance gain (i.e. clocks/power) and the only real benefit is die size. If so, AMD can use it but will remain in the low-end, low-price arena... this would not be healthy for the industry in general. Without a competitive product in the high-end space, Intel will really stretch out the pricing curve: low-end Intel parts will be priced on the same order as equivalently performing AMD parts, but the high end will continue to command a premium, much more so than if there were real competitive alternatives in that segment.

Jack

Anonymous said...

Now that I have dumped some stuff in several posts.... let's bring in one more topic that needs to be addressed 'badly'....

Have you ever seen someone say "Well, IBM is getting 4.7 GHz with their 65 nm process, AMD should be able to do the same..."?
(I.e. Christian Howell aka BaronMatrix :P )

The reason for this is simple... IBM went into the POWER6 design restricting themselves to keeping each functional unit of the processor as simple as possible.

The max clock a CPU can achieve is dictated by the slowest-actuating circuits in the device. The more complex you make the circuits (i.e. the processor), the harder it is to clock up. Here is an explanation: a circuit of, say, 10 transistors will give a total switching delay of X; take a different circuit of 100 transistors with a total switching delay of Y. Y (in seconds) > X (in seconds) because the cumulative propagated delay goes up with the number of transistors in the circuit.

These higher-order delays are often measured, or designed around, in terms of fan-out-of-4 (FO4) delays; the lower the worst-case FO4 delay, the faster you can clock the processor. This is also why longer pipelines can clock faster: by splitting the work into more and more stages, each stage can be designed as a set of simpler, less complex circuits with fewer FO4 delays, giving higher clocks.
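The pipelining argument can be sketched numerically. A toy model in Python, where the FO4 delay, total logic depth, and per-stage latch overhead are all made-up values for illustration (nothing here is IBM's actual data):

```python
# Toy model: max clock is set by the slowest pipeline stage.
# Splitting the same total logic into more stages shortens the
# worst-case per-stage delay, allowing a higher clock.
# All numbers below are arbitrary illustrations.

FO4_DELAY_PS = 15.0    # hypothetical delay of one FO4 unit, in picoseconds
TOTAL_LOGIC_FO4 = 260  # hypothetical total logic depth, in FO4 units

def max_clock_ghz(n_stages, overhead_fo4=3):
    """Latch/clock-skew overhead (hypothetical) is paid once per stage."""
    per_stage_fo4 = TOTAL_LOGIC_FO4 / n_stages + overhead_fo4
    period_ps = per_stage_fo4 * FO4_DELAY_PS
    return 1000.0 / period_ps  # convert period in ps to frequency in GHz

for stages in (6, 13, 20):
    print(f"{stages:2d} stages -> {max_clock_ghz(stages):.2f} GHz")
```

Note the diminishing returns built into the model: the fixed per-stage overhead means you cannot pipeline your way to arbitrary clocks, but cutting the per-stage FO4 count clearly buys frequency.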

IBM actually just published a nice series of whitepapers on this topic, the most important one is...
http://www.research.ibm.com/journal/rd/516/berridge.pdf

IBM specifically restricted themselves to an FO4 delay of 13 per stage (compared to 43 or so for POWER5); this is what enabled IBM to clock POWER6 so high.... it was actually a nice piece of design work.... but to assume that IBM's 4.7 GHz accomplishment translates to AMD's Barcelona processor (which is the most complex, advanced beast of an x86 :) )... well, that is just foolish.

Unknown said...


The key word is ONE. I realize, of course, that you know it’s absolutely eating me alive that I cannot have TWO of them cranked into the lovely ASUS X48 MOBO I saw somewhere on the web. The performance step up, as you know, with two of anything, is quite substantial.


TBH, I was never really into multi-GPU setups. My single 8800 GTS 640 has run superbly on my P965 ASUS board for over a year now.

I certainly understand your frustration at not being able to use two in an Intel chipset board. It's NVDA being greedy. They want your cash for the nForce chipset AND the GPUs! The nForce 790i, due early next year, has all the DDR3 and 1600MHz FSB support, etc. It might turn out to be just as good as the X48 chipset.

As for ATI having something to compete with G100, I wouldn't bother waiting to find out. They still haven't got anything that competes with the 8800 GTX. Their fastest part now, the Radeon 3800, is only as fast as my GTS 640.

Though if you like multiples of everything perhaps you ought to get a Skulltrail board, two QX9775s running at a sweet 3.2GHz, some good air or water cooling and run them at 4GHz or so, then take three G100s and run 3-way SLI! Such a monster would be enough to run even Crysis at 2560x1600 I imagine!

AFAIK, Nvidia has only licensed Intel to use SLI on Skulltrail boards. Even then, the board has a pair of nForce MCPs on them for SLI compatibility. (in the Skulltrail board pictures, the MCPs and the Intel southbridge are under the large aluminum heatsink that has the fan on it)

Two ATI/AMD dogs eats one Nvidiot monster. That said, thanks for the thought and the link. Forgive me; it’s time for my Paxil.

I don't doubt that a single G100 will toast two of ATI's fastest cards, since R700 is due well after G100.

You're correct for now, though: two Radeon 3870s are faster than a single 8800 Ultra, and SLI is just not an option on Intel chipsets at the moment.


Big Paulie: If you're listening, when you're done grinding AMD into the pavement, please set your sights on NVDA


I'd be content with Paul leaving AMD alone. They've done well: I don't think Hector and his cronies will be boasting about 'dual core duels' or handing out 'multicore for dummies' books anytime soon!

As for NVDA, Intel is in for one hell of a challenge in the discrete GPU market. Intel's previous attempt to gain a foothold, the i740, was a flop; I can only hope that Intel is much more serious with Larrabee and that it has the performance to challenge Nvidia at the high end.

Anonymous said...

“AFAIK, Nvidia has only licensed Intel to use SLI on Skulltrail boards. Even then, the board has a pair of nForce MCPs on them for SLI compatibility.”

I saw that, too. It absolutely galls me that they will go to such lengths to protect their miserable little chipset market, designing dedicated components just to enable SLI functionality. However, thanks to my seemingly never-ending frustration with Nvidiot, combined with my adoration of INTC chipsets (are you listening, Orthogonal?), my use for NVDA chipsets/motherboards has been limited to fairly effective urinal screens in the sleaziest gin mill I could find.

And, yes, if it’s true about Skulltrail SLI functionality, I have considered the option. Picture it: two Xeon X5482s and two G100s, then BLAAA, dead wrong, FB-DIMMs! Did you see the price of these things? They can’t give ‘em away, 4GB for less than $200! Dogs, I tell you, DOGS!

We’ve got DDR3 merrily blasting along at 1866, and I have to put FB-DIMM crud at 667 or 800 in a $6000 full-blown Skulltrail, SLI, dual-Xeon setup? GMAFB!

No; perhaps my beloved INTC will also include NVIDIA nForce 100 MCPs to enable SLI support on the upcoming X48 mobos. Then my world would be complete. I could get relief from my bipolar fits, get off the Paxil, and grudgingly give NVDA a cool $1500 for two gorilla cards.

More fuel for me to despise Wrector Ruinz for upsetting the balance of the Graphics/Computing industry with the ATI purchase. He shot himself in the foot and killed ATI. Is it any wonder why all the ATI key players bailed? What a goddamned fool.

Oh. Yeah, RANT OFF, and I haven’t been sipping the punch, yet.

SPARKS

Anonymous said...

More fuel for me to despise Wrector Ruinz for upsetting the balance of the Graphics/Computing industry with the ATI purchase. He shot himself in the foot and killed ATI. Is it any wonder why all the ATI key players bailed? What a goddamned fool.

You can say that again! Truer words have never been spoken.

Anonymous said...

"They have something truly phenomenal, and want to hold back disclosing anything for competitive reasons"

For many companies I would say this is a possibility, but we are talking about an IBM process (not AMD). IBM is the king of announcements before things are ready (see SiLK, spin-on low-k). If IBM/AMD is not publishing data, I think it is most likely because the process is remarkably similar to 65nm... not much of a horn to toot at IEDM.

As for ULK - I totally agree with your RC analysis, but that is why I think it is no big deal. It is simply a way for IBM/AMD to get to the 45nm shrink without making performance WORSE than 65nm. Ultimately they are still limited by front-end switching; however, on 45nm, RC delay may have been an issue for them and may have become the limiter.

The fact that AMD uses more metal layers is often overlooked - it adds a chunk of money to the wafer cost, is another chance for yield issues (small on the overall risk scale, though), and suggests that despite low-k, ULK, Cu, etc., AMD is not getting the INTEGRATED performance (which is really all that matters).

There are several potential reasons for this:
1) While ULK (or, more generally, the bulk dielectric) gets the press, the key on the C side of RC delay is the effective capacitance, which includes the barrier layers (needed to enable the etching of lines/contacts for the metallization process on each layer). If you use an exotic low-k but it needs an etch-stop layer that is either a significantly higher-k film or has to be thicker, then the bulk ULK ILD really doesn't buy you anything. If you want to compare ILD capabilities, you should compare the effective k of the entire ILD stack (I'm not sure this info is generally published).

2) There is more to the 'R' in RC delay than just Cu. You need a barrier layer around the Cu to keep it from diffusing into the ILD. These materials are generally more resistive, and the thickness required to serve as an effective barrier is also key (the thicker the barrier you need, the worse the overall R). Some layers now also require shunt layers to prevent electromigration (a topic for another day).

While feature size is somewhat driven by litho and the tech node you are on, you can make the lines taller (to help with R) by making the layers thicker - but of course you need the etch, metal-fill, and other capabilities to do this.

3) Design - the density of lines, their length, and layer-to-layer separation all affect RC delay. You can add more metal layers (like AMD) to lower the density of metal lines and avoid line-to-line or layer-to-layer capacitance issues. This is an area that falls into the "restrictive design rules" philosophy: if you have a known process capability, you can give the designers very clear limits on what they can and cannot do. Or you can do a design with no (or fewer) constraints and move to more exotic materials or extra metal layers to make it work.
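Point 1 above is easy to put in numbers. For dielectric films stacked vertically, the stack behaves like capacitors in series, so the effective k is k_eff = total thickness / sum(t_i / k_i). A rough Python sketch, where every layer thickness and k value is hypothetical (chosen only to show the effect, not taken from any real process):

```python
# Effective k of an ILD film stack (vertical layers act like
# capacitors in series): k_eff = sum(t_i) / sum(t_i / k_i).
# All thicknesses (nm) and k values below are hypothetical.

def effective_k(layers):
    """layers: list of (thickness_nm, k) tuples for the stacked films."""
    total_t = sum(t for t, _ in layers)
    return total_t / sum(t / k for t, k in layers)

# An exotic ULK bulk film on its own, versus the same stack height
# with a thick, higher-k etch-stop layer eating into it.
ulk_bulk_only = effective_k([(200, 2.4)])
ulk_with_stop = effective_k([(50, 7.0), (150, 2.4)])

print(f"ULK bulk alone:        k_eff = {ulk_bulk_only:.2f}")
print(f"ULK + thick etch stop: k_eff = {ulk_with_stop:.2f}")
# The etch stop hands back a chunk of the ULK benefit, which is why
# comparing bulk ILD k values alone can be misleading.
```

With these toy numbers, the headline k of 2.4 degrades to roughly 2.9 once the etch stop is counted, illustrating why the effective k of the whole stack, not the bulk film, is what matters.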

Bottom line - it is easy to make new material or process announcements, but the key is how that material performs in an integrated process. This is why I'm still curious about how IBM's high K will look in an integrated process.

Anonymous said...

One additional thing on ULK... the fact that Intel is ramping 45nm with a more conventional ILD is another example of Intel extending things further in manufacturing through better integrated performance (in other words, new != better). That Intel does not need ULK on the order of IBM's speaks to Intel's overall backend capabilities.

This is much like immersion litho - while it is cool and makes AMD sound more advanced for using it, those in the know realize that it is FAR better to extend known technologies and introduce new ones only when they are absolutely needed or yield a huge performance or cost benefit. Since immersion litho will simply enable AMD to achieve the same feature size as Intel does with dry litho, and is at best cost-neutral with a double-pass dry litho process (many think immersion, at least initially, is MORE expensive), folks who work in the biz ask: 'why is moving to immersion litho at 45nm a good thing, again?'

This is a (if not THE) key difference between a company with a manufacturing philosophy and one with a research/engineering philosophy. It's not about who uses the coolest materials in its process; it's about who gets the best performance in a cost-effective, high-volume-capable process in a timely manner.

Does immersion litho or ULK make performance better? Bring the process to market faster? Make the wafer cheaper? When you see these announcements, you'll notice what's missing is the 'so what' bullet on why using these things matters, not just that they're cool or new. You'll notice that when Intel announced high-k/MG, they provided specific performance and power benefits (the 'so what') and didn't just say it was an innovative material.

Anonymous said...

"They have something truly phenomenal, and want to hold back disclosing anything for competitive reasons"


You know, I am so bored with these ridiculous IBM claims of cutting-edge process miracles. Historically, they may have come up with the most esoteric process ‘breakthroughs’, but very few have ever materialized into a revolutionary HV process.

Reading Guru’s and Jumping Jack’s extremely sophisticated technical analysis, which I will go to my grave never fully fathoming, something more earthy occurred to me as I slipped out of orbit. Despite all the spin, IBM still hasn’t got it yet.

Taking something from the backend and putting something in the front end sounds like a serious porn movie gone badly. From the analysis, all I’m getting is that IBM is fishing for the correct variables to an integrated solution Intel has already found the formula for: the successful implementation of high-k dielectrics on successively smaller process nodes, down to 32nm or smaller.

“but to assume that IBM's 4.7 GHz accomplishment translates to AMD's Barcelona processor (which is the most complex, advanced beast for an x86 :) )... well, that is just foolish.”

Obviously, as Barcelona can barely hit half that speed reliably with complete stability. Hot lots or whatever, it will take 10 to 16 weeks of processing to produce and test what Jack and Guru have pointed to as a seemingly never-ending multitude of extremely complex, interrelated variables on an atomic level, no less!

I still believe that Intel, with its penchant for leveraging multiple design teams on process variables, even seemingly bad ideas, has explored these paths regardless. No inside info here; I’m not that important. But I will venture to say they will never, ever be caught flat-footed again. This is precisely what I would do: get a couple of guys like Jack and Guru (if you could find them) and hammer out all the variables.

But then, INTC already did this with the hafnium process, didn’t they?

When Gordon Moore said that high-k was the biggest breakthrough in 40 years of transistor design, it seemed the entire industry yawned. The bottom line here is they’ve got it, it works, they’re in HV production, and IBM and AMD are pissing their pants.

In the meantime, IBM (and indirectly AMD) is still trying to look competitive and innovative from a PR standpoint. IBM can afford to do this; it’s a hobby of theirs.

AMD, you ask? Where are they going to get the money to clean the toilets in the interim?

SPARKS

Anonymous said...

"Get a couple of guys like Jack and Guru (if you could find them) and hammer out all the variables."

While I appreciate the flattery, I think you mean a couple hundred guys! Intel has a research team of >50 people working on solutions for 22nm and beyond (they stay 2-3 generations ahead of the current state of the art). Now throw in a couple hundred development engineers and a half billion for Si and equipment, and you have the *beginning* of a plan!

IBM's research is world class, however they are not in the same biz of truly high volume manufacturing as Intel - yes they do manufacturing but look at the top 10 in semiconductor revenue...IBM is a service and research company that makes some HW. The consortia (AMD, Toshiba, et al) helps but they all have slightly different goals/priorities for their processes. It's probably a bit like herding cats (though not that bad) - some may be on a slower development schedule, some may favor lower power over performance, some may prioritize cost over raw performance, etc.

InTheKnow said...

IBM's research is world class, however they are not in the same biz of truly high volume manufacturing as Intel - yes they do manufacturing but look at the top 10 in semiconductor revenue...IBM is a service and research company that makes some HW.

To elaborate on this a bit, I know a couple people over at IBM and have had some detailed discussions about how things are done there. (I was considering trying to get on there at the time).

What I found was that IBM is very much like working for a university in the publish or perish sense. Patents, publications and to a lesser extent presentations at industry events are what your career is built on. You do the work in the lab or fab, but the ultimate goal is to patent, publish and present your work.

As a result, the engineering focus tends to be on the bits and pieces. No one is really rewarded for making all the bits and pieces fit together.

That is where things like the SiLK fiasco come from. They had a really good piece to the puzzle. The problem was, the piece was from another puzzle. But by golly, it sure looked nice!

But if you look at just the ability to solve very specific problems, IBM is very good. They are certainly among the best, if not the best in the world. Never sell IBM's research ability short.

IBM's weakness is in the development and integration of their "breakthroughs" into the process as a whole.

By comparison, Intel's approach is more of a brute force approach. But their focus is on an integrated process. IBM may have better individual pieces, but Intel's overall process is more robust.

Anonymous said...

I took a look at IBM’s manufacturing facilities. Pictured are their 200mm facility and a 300mm facility. They have a nice virtual tour of the 300mm facility, with plenty of Jacks, InTheKnows, and Gurus running around at Fishkill.

IBM’s main goal, however, is, “The IBM Multi-project Wafer program is designed to help small, innovative organizations design and prototype high-performance chips at a lower cost.”

Hmmph, someone should tell this to Wrector. Anyway, world-class R&D or not, from what I have read elsewhere, IBM has slowly phased out serious manufacturing over the years. I don’t know exact numbers, but it has been declining steadily.

Nah, my money has been and always will be with INTC. Besides, I’m a construction worker; I’ll take a “brute force” approach over elegant, patented, pompous PowerPoint paper presentations any day.

I think I’ll spend the rest of the day touring INTC’s virtual tours, with a Montecristo #3 and a nice glass (or two) of Fonseca ’97 vintage port. INTC has been very good to me this year.

Cheers and Happy New Year, Gents!

http://www-03.ibm.com/technology/ges/semiconductor/manufacturing/


SPARKS

Anonymous said...

Dirt to Destiny!

Virtual Tour of Intel's 45nm Factory!

IBM my ass, this is what I’m talk’in about! Check out the 75 miles of conduit, the networking, the air handlers, and the panels on the utility level! Delicious.


http://intelpr.feedroom.com/?fr_story=33c969acc3f25d0f702e56eb10b55c5692bd0f81&rf=sitemap

SPARKS

Anonymous said...

Hallelujah! NOT IN MY STATE YA DON’T. NOT WITH MY TAX MONEY!!!!!

I TOLD YOU THIS PIG WOULDN’T FLY!

http://www.timesunion.com/AspStories/story.asp?storyID=650977&category=OPINION&newsdate=12/30/2007


SPARKS

Orthogonal said...

sparks said...

Dirt to Destiny!

Virtual Tour of Intel's 45nm Factory!


LOL, all the "in fab" footage was F12 not F32.

Anonymous said...

Just goes to show ya, there's always someone to pee on someone else's parade.

The 'Dirt to Destiny' was a time-lapse photo shoot of the F32 construction site. That's the real deal, my cup of tea!

Besides, take all those tools and robots, turn 'em upside down, and they all look the same ;)

By the way, are you in any of the pictures? Are you the guy in the white suit?

Sparks

Orthogonal said...

The time lapse is pretty cool, they had a link to it on the main intranet site during the whole construction. It was interesting to check up on every once in a while.

Sadly, I'm not in the video, or at least, I'm pretty sure I'm not. Unless some of the shots down by the stockers happened to catch me on some random day. Not that you could really tell the difference anyway. Although, after you've been in the fab long enough, you can start to tell who everyone is by their gait and body shape... :/

Anonymous said...

Power distribution, line conditioning, UPS, emergency generators, 277V/480V, triple redundancy, backup systems, along with environmental Building Management Systems - these are the places where I love to swim.

Powering up a project of this magnitude for the first time is an unforgettable experience for an I.B.E.W. journeyman. It’s an ultimate benchmark in one’s career, one that is never underestimated or unappreciated.

Frankly, I envy those guys, as I truly believe these are 21st-century ‘Apollo Projects’ designed to conquer inner space. It is truly a new frontier.

SPARKS

Anonymous said...

InTheKnow said...

"What I found was that IBM is very much like working for a university in the publish or perish sense. Patents, publications and to a lesser extent presentations at industry events are what your career is built on. You do the work in the lab or fab, but the ultimate goal is to patent, publish and present your work."

I would agree with that. Having worked as a patent examiner for 25+ years, I knew a number of IBM engineers/former engineers who would consult with me on patent novelty searches, and they all mentioned the bonuses and recognition and promotions they would get depending on the number of patents granted for their inventions.

IBM has always had their own peculiar way of doing business, from insisting on their own nomenclature and non-standard symbols (e.g., their symbol for a BJT in their patent drawings) to being thoroughly prepared, complete with exhaustive pre-searches, before even submitting a patent application.

I still recall, with some degree of embarrassment, my first conversation with a very senior IBM patent attorney after giving him what was in retrospect an incorrect refusal to grant a patent. He called me up and wanted to know (1) if the Patent Office still taught Patent Academy to newly hired examiners, and (2) if so, why didn't I attend. LOL

Anonymous said...

http://www.digitimes.com/mobos/a20071225PD219.html
(AMD to launch triple-core Phenom CPUs in March 2008)

Hmmm.... with the B3 stepping supposed to come out for quad cores at the end of Q1, these tri-cores are B2!

Well, the 'price/performance' or 'value' or whatever spin AMD is using these days sounds like a way to dump unsellable B2 quads in addition to defective quads. If the new B3 stepping is available at the end of Q1, why else would you bother launching tri-cores at the same time on B2?

After shooting both feet in 2007, it appears as though AMD is now shooting at its own kneecaps and slowly working its way up...

Anonymous said...

http://www.digitimes.com/mobos/a20071227PD210.html

Another head-scratcher.... AMD apparently will be launching K10 dualies "in the latter half of the 2nd quarter"

...if AMD is as far along on 45nm as they say (they're 'ramping' now, right?), why even bother with 65nm DUAL-core K10s? Given that the dual-core K10s will at best be ~15% better clock-for-clock than K8s (and it's debatable whether K10 will reach the same clocks as K8), why bother with the expense and logistics of qualifying a product that will soon be converted to 45nm? Why would OEMs bother qualifying new 65nm K10 dual-core systems if they are only marginally better than the chips and systems (K8) they have been shipping for some time? It's not like they are going to get that much of a premium for 45nm-based systems.

With their world-class manufacturing skills and fab operational excellence (struggles to stop laughing), one would think AMD would be able to quickly ramp 45nm and would be best off just making K10 dualies on 45nm.

Or perhaps 45nm will be a paper/soft launch in 2008, will have no real major benefit over 65nm and AMD thinks 65nm might be more stable for their volume desktop product? (just idle speculation on my part)

Are there any executives with a business clue at AMD? The dual-core 65nm K10s will be competing with the 65nm dual-core K8s! If K10 is only 10-15% better than K8 (assuming they somehow get the clocks up), they have no pricing power against themselves (let alone Intel)!

Simple solution (assuming AMD's 45nm is as healthy and on time as AMD leads people to believe):
1. Continue dual core K8's on 65nm
2. Forget K10 quad desktop altogether on 65nm (or launch a handful to keep the enthusiasts happy)
3. Focus all K10 65nm production on server quads
4. Ramp 45nm on K10 dualcore desktops and quadcore servers. Start with the dualcore desktops as these are smaller and will be less susceptible to initial yield issues and may also be a bit easier to bin. Follow closely with server (lag maybe 3 months)
5. Ramp quad desktops after #4. Again, don't bother doing this in any volume - just a token to keep enthusiasts happy, as they will have to bottom-price these for performance reasons.
6. Continue mobile on 65nm, transition to 45nm when 45nm process is under control (late H1'09?)
7. Start plans to outsource desktop dual cores to TSMC on 45nm. Don't bother with 65nm as by the time they have this up and running and qualified it will be difficult for TSMC (and therefore AMD) to compete from a cost/performance perspective with Intel's 45nm dualcores.
8. Give up the 'we can compete with Intel on all fronts' strategy and focus high end performance on server and mobile (and hope advancements in this area will be applicable to desktop)

The reason for doing the desktop dual cores first on 45nm (which is generally what you don't do, as these are the lowest-margin parts) is that AMD's 45nm will likely lag 65nm performance (as happened in the 90nm-to-65nm transition). You don't want to launch with low-bin parts in the lucrative markets; that kills the brand and eliminates the fat you get from early adopters (in other words, don't do what they did with the K10 quads!). With dual-core K10s they can price low, since the parts will benefit from the smaller die size and will not be expected to be top-end chips. They can then use this ramp to 'fix' the process/design and get a stepping or two (and a process iteration or two) in before producing the higher-end parts. It should also allow yields/bin splits to stabilize.

Anonymous said...

You've also got idiots like The Ghost claiming that Intel cheats on TDP ratings, when this is clearly a lie from a desperate AMD fanboi.


hahaha, the latest topic of rage over at AMDZone is how Intel is paying all reviewers to say good things about the Core 2, how Core 2 performance is an outright lie, and how Tom's Hardware is paying other sites to link back to them.

Man, that conspiracy must run really deep: not only is Intel paying sites to review their CPUs positively, those sites are in turn paying other sites to link back to them!

Ho Ho said...

Long time, no posting. I found WoW for myself during the holidays :)


Another blog said something about K10 performance:

"If we take the E6750 score and scale it to quad core at 2.4Ghz we can see that Q6600 scales 98% which is a very good score. However, K10 scales 102%. This suggest that K10 is at least 4% faster than K8"

Wow, a new CPU architecture is a whole 4% better than their old one. That is almost half as good as Intel's dumb die-shrink to 45nm!


"Judging from the small increase from K8 to K10 and C2D it is clear that PovRay is not using 128 bit SSE operations. It does say that the 32 bit version only supports up to SSE2."



Sigh, he still has no idea about the difference between SSE, MMX and SIMD in general. There is no such thing as 128-bit SSE. There is only SSE, and it just happens that K10 and Core 2 run SSE code at twice the speed of older hardware.

If PovRay supports SSE2 in 32-bit, I see no reason why it wouldn't in 64-bit; it has none of the incompatibility problems that regular code has. Though my guess is their code isn't all that optimized and the SSE parts are not where the majority of the time is spent.


"Unfortuately, we still have not turned up any real proof of whether K10's 128 bit SSE functions are on par with C2D's (and roughly twice as fast as K8's)."

Yes, I agree we have had no proof that K10 is on par with Core 2. Though we have had plenty of proof and examples that it isn't.

Anonymous said...

'However, K10 scales 102%. This suggest that K10 is at least 4% faster than K8'


What is even more hilarious is the EGREGIOUS MATH MISTAKE. If we assume perfect (100%) scaling and the numbers turned out to be 102%, that is not 2% scaling over the old architecture, as he is not accounting for cores - it would actually be 1%... if we use the 98% as the benchmark, it would be 4% better in aggregate, but that would mean 2% per core...
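The aggregate part of this arithmetic is easy to sanity-check; a minimal sketch in Python, where the 98% and 102% scaling figures come from the quoted comparison and the baseline score of 100.0 is a hypothetical stand-in:

```python
# Sanity check of the scaling arithmetic from the quoted comparison.
# baseline is a hypothetical perfectly-scaled quad-core score (stand-in value).
baseline = 100.0
q6600 = 0.98 * baseline   # Q6600 reportedly reaches 98% of perfect scaling
k10 = 1.02 * baseline     # K10 reportedly reaches 102%

# The meaningful comparison is the ratio of the two achieved scores,
# not the difference of their scaling percentages.
advantage = k10 / q6600 - 1.0
print(f"aggregate advantage: {advantage:.3%}")  # roughly 4.1%, not a flat 2%
```

The point is that 102% vs. 98% is a ratio comparison (1.02/0.98), not a subtraction, which is where the quoted "at least 4%" figure actually comes from.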

Tonus said...

"Man, that conspiracy must run really deep, not only is intel paying sites to review their cpus positively, those sites are in turn paying other sites to link back to them!"

The reason I learned how much better the Athlon was than the P4 is that I used to read the same hardware review sites that are now being accused of bias and of being "bought" by Intel. The same sites ripped Intel over the P3 1.13GHz fiasco, pointing out that the whole mess was due to the Athlon's better performance and Intel's desperate rush to compete.

Some of them mentioned how NVIDIA tried putting pressure on review sites in the past. How do we know this happened? Oh, right -- the site owners told us about it! But now that they post reviews that show Intel CPUs ahead, we are supposed to believe that they lack honesty and are paid off.

To me it sounds as if AMD's own fans have written them off, and are preparing themselves mentally for the fall, by claiming that it's a big conspiracy. When your most staunch supporters are wearing black, I guess that is a pretty bad sign...

Khorgano said...

When your most staunch supporters are wearing black, I guess that is a pretty bad sign...

As long as there is Sharikou, AMD has nothing to worry about for lack of support ;)

Roborat, Ph.D said...

i think mr. barb is addressing AMD shareholders. smart.


BTW, I would just like to say that I represent Mr. Mebuko Obuki, a political prisoner in Nigeria. We desperately need your assistance in transferring $200M out of the country...

Anonymous said...

I tend to agree with the current set of problems. AMD and/or governments have the right to pursue this for potential past transgressions, but clearly the 'monopoly' is not the cause of AMD's current problems. I think AMD's management conveniently intermixes these, hoping folks may not realize the distinction.

My question is this, though... is the EU looking for recent stuff (within the last two years, say) or older stuff? And what exactly is the SPECIFIC damage done to the EU? I can see AMD trying to make a claim (if anything is proved), but what about the EU? If rebates or whatever are deemed anticompetitive, then I can see how this impacted AMD, but if the prices consumers were paying were still low and competitive (like they are now), how exactly is the EU consumer injured? These are not European companies. Does the EU fine companies that sell clothes in the EU made by labor at ridiculously low wages?

This, to me, is disingenuous. If the argument is that consumers got hurt, then I eagerly wait to see whether the EU distributes checks to everyone who purchased a computer in the period they allege (should they actually levy a fine). Somehow, though, I don't think we will see that (call me a cynic).


If the issue is that AMD was injured, then AMD (NOT THE EU) should be the one pursuing this, as it is doing in the US.
