3.03.2009

"At heart, we're a reverse engineering design company" - AMD

"Advanced Micro Devices Inc closed a deal to spin off its manufacturing operations on Monday, and said it expects the new company to assume responsibility for paying off about $1.1 billion of debt.
The plants which make AMD's chips are now part of a $5 billion joint venture with Advanced Technology Investment Co, of Abu Dhabi, temporarily called The Foundry Co."
- Reuters

In a very creative way, AMD has rid itself of its crippling debt and of the massive burden of capital investment going forward. While AMD may spin this strategic move as bringing it closer to its core expertise, there is no doubt that this backed-into-a-corner decision was the only way for AMD to remain viable. This new lease on life allows AMD to continue, maybe for a few more product lifecycles, as the only challenger to Intel.

At the bleeding edge of semiconductor technology, it remains to be seen whether a fabless company can challenge one with a foundry. In less bleeding-edge segments such as memory, companies with their own fabs, like Samsung, dominate the rest of the industry, though competition remains vibrant. But in the x86 space, where process leadership creates cost and performance advantages, history isn't kind to fabless companies. Starting this week, AMD is effectively what Transmeta was back in 2000. The difference is that Transmeta had a lot of hype going for it, and probably a more compelling product offering in the mobile space.

966 comments:

InTheKnow said...

I wonder how much anger there will be on the zone to see AMD doing the very thing they accuse Intel of doing, i.e. making something that forces users to buy their processor. LOL.

The browser, called Fusion Media Explorer, allows users to browse music and video albums stored on a PC, and share those files with social networks, said Casey Gotcher, director of product marketing at AMD, in a blog entry.

The software is available for download from AMD's website, but only works on PCs with an AMD processor.

pointer said...

I wonder how much anger there will be on the zone to see AMD doing the very thing they accuse Intel of doing, i.e. making something that forces users to buy their processor. LOL.


well, this would be called value added over there :)

anyway, I just went over there to have a look at the funny arguments. The 'one' who always calls almost everyone a FUD-spreader whenever they post anything not so positive about AMD is... spreading FUD again :) After accusing people of spreading FUD and saying people should reference publicly available info, he went on to say 'for sure Larrabee not supporting DP at all' and bolded the sentence... guess what, when challenged with facts (publicly available info)... he twisted what he just said into the DP running at 1/8th speed... and in another thread, he doesn't understand what embedded means and goes on telling people, "see, I told you, the netbook is not even on the list (embedded stuff...)" hahaha

enjoy:
http://www.amdzone.com/phpbb3/viewtopic.php?f=52&t=136244&st=0&sk=t&sd=a&start=25

http://www.amdzone.com/phpbb3/viewtopic.php?f=52&t=136221&p=155670#p155670

Tonus said...

I saw those conversations. Kaa must be tearing his hair out, but if there's anyone who should understand how it works there, it is him. At the same time, it's fascinating to see his and x704's conversations with Abinstein and Scientia, and how the latter two react when they're dealing with people who clearly are more knowledgeable than they are.

One benefit is that Abinstein is making specific predictions about Larrabee's performance. Although I suspect that by the time Larrabee is actually available and its performance is more in line with Kaa's estimates, he will claim that Intel is just messing with benchmarks and thus the real performance cannot be properly estimated.

I think there's a pretty big divide occurring there between the people who aren't blind fanboys and the ones that are. I think that the current trend, where fewer of the non-worshipers have been posting lately, will continue to become more marked as time goes on. There's only so much that a sane person can take, IMO.

A Nonny Moose said...

Right, but the site owners went and made Abinstein a moderator, or at least Ghost did, so they are actively encouraging the blind fanboyism.

I wonder just why any reasonable person, let alone an Intel supporter, would bother going there except for the amusement factor. Maybe to whiff the ozone coming out of AMDZone??

Samwise said...

And, as someone predicted here, I have been banned by The Ghost ... because I called Abinstein on all his FUDing about Larrabee and Nehalem EX. : )

Anonymous said...

I wonder how much anger there will be on the zone to see AMD doing the very thing they accuse Intel of doing, i.e. making something that forces users to buy their processor

AMD is all about giving customers choice, right? :) (as long as you choose AMD)

Seriously though - is anyone in the world buying a processor based on this SW app? I have no problems with this, and if it is value added, so much the better. From what I know about the app (which is minimal, and I have never used it), it doesn't seem like it offers much, and there are alternatives out there. I wonder why AMD would bother devoting ANY resources to something like this.

InTheKnow said...

I have no problems with this and if it is value added, much the better.

I don't have a problem with it either. If you have something the world is beating the door down to get, I don't see why you have to let everyone else have a piece of the pie.

I just find the attitude of the more exuberant AMD fans that AMD is somehow a more moral corporation than Intel laughable. These entities exist to make money for their stockholders. Neither of them is any more or less benevolent than the other.

I wonder why AMD would bother devoting ANY resources to something like this.

I would speculate that they are trying to offer a unique value proposition. I think they need to do something along those lines to compensate for their inability to compete on performance. This does seem to fit in with their recent media centric platform experience mantra.

That said, this offering doesn't seem compelling enough to me to justify the expenditure of scant resources.

InTheKnow said...

And, as someone predicted here, I have been banned by The Ghost ... because I called Abinstein on all his FUDing about Larrabee and Nehalem EX. : )

As the moderators on that site are gravitating towards the more rabid types, I have to wonder how much longer Kaa will last. He may be a long time AMDer with a well established rep, but now that Abinstein has the mod stick, I have trouble seeing him exercising a lot of restraint. Kaa is certainly not a good sycophant, and that board seems to be trending that direction rapidly.

Personally, I'd love to see him posting here. His view may not be the popular view on this board, but it is rational and defensible. I think he would be a great addition.

Tonus said...

"I just find the attitude of the more exuberant AMD fans that AMD is somehow a more moral corporation than Intel laughable."

I think it's a case of misinterpreting the facts to fit a worldview.

Why is AMD more willing to partner with other companies and work on open standards? Because they're not in a position to do differently. Intel has the size and resources to try and drive the industry in directions favorable to it, and thus they're more likely to act unilaterally. Those same factors make it easier for them to recover when their initiatives fail (RDRAM, for example).

AMD is not in a position to gamble that they can push technologies on their own. So they do the smart thing, which is to partner with other companies and work on shared technologies. They don't do it because they've got some higher sense of ethics, they do it because it's their best choice. Of course, when it comes to marketing their approach, they can't admit the realities, so they climb up on their high horse.

I would expect any company in their position to do the same if they wanted to remain viable.

Tonus said...

"And, as someone predicted here, I have been banned by The Ghost ..."

Woohoo! I got a prediction right!

Sorry about that. ;)

They've been making it more and more clear lately, from the comments there, that they consider themselves an AMD fan site, and that means that they're not interested in hearing anything that's not slanted towards AMD. Read some of the comments they make to "trolls." They have no interest in anything that might put a pin in their bubble. Such as facts, reality, the truth, logic, etc.

I'm impressed that enumae manages to hang on, but he has to walk on eggshells while the others insult him and call him names. Not worth it, IMO.

InTheKnow said...

I'm impressed that enumae manages to hang on, but he has to walk on eggshells while the others insult him and call him names.

Frankly, I don't understand the motivation. I can see wanting to get the view from the AMD side of the fence, but that site is moving further and further away from the fence and into the lunatic fringe.

At some point the views become so unreal that they cease to be of value.

Anonymous said...

I would speculate that they are trying to offer a unique value proposition.

While I agree that is probably the motivation, the tool they are using (this kind of SW app) is not the right one for the job. No one is going to base purchasing decisions on it, and I doubt it will build any brand loyalty. Their value proposition is graphics and chipsets right now, but they can't seem to capitalize on it.

The problematic thing is that building a "unique" value proposition in most cases plays into some sort of bundling or exclusionary approach - and once they start down this road, Intel can easily follow and just say "we didn't start this philosophy, we're just competing with something AMD is trying," which completely undercuts any monopoly-type argument and enables Intel to do it themselves. It is a very difficult argument to make SW run only on a certain HW set when the 2 companies are cross-licensed on the HW (on the CPU side); it is one thing if the SW is taking advantage of some specific HW instruction set that is unique to a chip, it is another to discriminate on manufacturer, and it goes against what AMD has been preaching. They can potentially go down this path when the integrated GPU/CPU products start hitting the market - as they can tailor the SW to the GPU portion of the HW, tying it to the integrated chip.

It's a fine line AMD is walking and while this App probably doesn't matter, if they follow this approach I think they will lose for 2 reasons:
1) Intel can throw more resources/cash at a competing solution
2) The industry is less likely to 'bet' on an AMD-only value proposition than an Intel-only one, given the relative market shares. The only way this happens is if what AMD is offering is VASTLY superior.

SPARKS said...

"AMD is not in a position to gamble that they can push technologies on their own."

"if they follow this approach I think they will lose for 2 reasons:"

There's a third. They're isolating themselves from 80% of the market. Who do they think they are, APPLE???

SPARKS

SPARKS said...

ITK,

For some odd reason or another you've got me going on this Netbook thing. Before your "Atom Awakening" I couldn't have cared less. Last year I wouldn't have pee'd on one of these things.

Not so anymore, you've tempered my 'Extreme' arrogance.

There's a new twist to the Netbook/OLPC craze. In perfect 20/20 hindsight, as the hardware and software get cheaper, the lines between the two are getting less distinct. Our crusading, and often uncompromising, Mr. N. had much difficulty sleeping with the two big players. It seems both INTC and MS are really going to make this thing happen without him, AND make money! HELLO MR. N!

Atom is a resounding success, obviously. The question was, what OS would be the standard? Surely, the right answer was there all along, and the big boys knew it. It was all simply a matter of market timing.

http://hothardware.com/News/96-Of-Netbooks-Run-Windows-XPSurprised/

'Obsolete' XP is perfect for the job. Whoever said that this would happen naturally as the market matured (I think it was you), you were right!

Incidentally, that low-powered little monster ATOM has found another home/use.
(TONUS, this baby's for you!)

http://www.guru3d.com/news/atombased-nas-ts439-pro-surfaces/


Now this thing looks like a perfect solution for storing 8TB of DVD's, home media, etc.


Little ATOM and Ole' XP, who would have guessed?

SPARKS

SPARKS said...

COPS
Prison Guards
Social Workers
Highway Maintenance
(among others totaling 9000 Jobs)

http://www.wetmtv.com/news/local/story/NYS-Layoffs/-CJ7R4AQDUyk8oPxTsuuIw.cspx


Luther Forest Creates JOB's!

http://www.globalfoundries.com/sites/default/files/fab_2_new_renderings.pdf

I'm beside myself with anger.


SPARKS

A Nonny Moose said...

Sparks - at least Octo-Mom is getting a reality TV show :). According to the news report on MSNBC:

“She wants to be able to do endorsements and she wants a book, too,” a source close to Suleman said. “She does have a lot of mouths to feed, and she doesn’t want to take handouts.”

As to when the single mother of 14 would find the time to pen a book, the source only said, “I guess that’s a detail that would need to be worked out.”

Sheesh - I bet now some other woman will try for 9 babies or more - sorta the reproductive Guinness record book, I guess.

At least AMD didn't have anything to do with Octo-Mom polluting the gene pool, that I know of anyway. Although I wouldn't be surprised to hear that Hector Ruiz was her donor :)

InTheKnow said...

Well I finally got around to going over to spec.org to see how Nehalem compares to Shanghai. The data below is only from 2P systems as that is where AMD was the most competitive against Intel's last generation processor.

I'm comparing the average of all the 2P 2.93GHz Nehalem systems to the average of all the 2P 2.7GHz Shanghai systems. I'm showing the percent difference in performance, with positive values indicating Intel is better.

For Peak performance we have:

CINT...........41.10%
CFP............42.30%
CINT Rates....46.70%
CFP Rates.....39.90%

If you prefer normalized data the per GHz values are:

CINT...........36.09%
CFP............37.74%
CINT Rates....42.17%
CFP Rates.....34.77%

Perhaps even more telling though is the baseline performance where the raw data looks like this:

CINT...........45.40%
CFP............50.40%
CINT Rates....52.00%
CFP Rates.....44.60%

And the normalized data looks like this:

CINT...........40.28%
CFP............46.37%
CINT Rates....47.90%
CFP Rates.....39.97%

So the baseline data is about 5% better than the peak performance.
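(For anyone who wants to check the math: the normalized figures line up, within rounding, if the percent difference is taken as (Intel - AMD)/Intel. That definition is an assumption on my part, since I didn't spell it out above; a quick Python sketch:)

    # Assumed definition: percent difference = (Intel - AMD) / Intel.
    GHZ_INTEL, GHZ_AMD = 2.93, 2.70
    peak = {"CINT": 41.10, "CFP": 42.30, "CINT Rates": 46.70, "CFP Rates": 39.90}
    for name, pct in peak.items():
        amd_vs_intel = 1 - pct / 100                        # AMD relative to Intel
        per_ghz = 1 - amd_vs_intel * (GHZ_INTEL / GHZ_AMD)  # divide out the clocks
        print("%s: raw %.2f%% -> per-GHz %.2f%%" % (name, pct, per_ghz * 100))
    # Prints ~36.08, 37.38, 42.16 and 34.78 -- matching the normalized table
    # above to within rounding, except CFP, which lands a few tenths off
    # (presumably rounding in the raw averages).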

How about power, you ask? Well, according to the SPEC power rating, which is calculated as the sum of the operations over the sum of the power using a Java-based workload, we get a whopping 52.2% advantage for Intel.

Looking at the numbers though, all is not gloom and doom for the AMD crowd. By borrowing a page from Intel (gasp!), AMD's next-gen Shanghai (with 6 cores) should actually be able to out-muscle a 4-core Nehalem.

Of course that won't address the power rating, but it at least gives AMD the ability to compete on performance for a while.

Small wonder the big topic at AMDZone seems to be Larrabee. It is always easier to attack a product that has yet to be shown publicly than it is to attack a piece of working Si that has been benchmarked extensively.

pointer said...

Small wonder the big topic at AMDZone seems to be Larrabee. It is always easier to attack a product that has yet to be shown publicly than it is to attack a piece of working Si that has been benchmarked extensively.

yup :) AB is making a big hoo-hah about the Larrabee DP flop per SP flop ratio... and the funny thing about it is that before he started the hoo-hah, he spread the FUD that Larrabee doesn't support DP at all! If he is so certain about that ratio, that would mean he was deliberately spreading FUD about Larrabee not supporting DP :)

While I am not sure if the ratio he cited is correct or not, since I am not connected to the Larrabee team at all... he won't be on moral high ground whether his statement is true or false anyway:

if he is wrong, then he is spreading yet another piece of FUD;
if he is right, but with no real public information available on it, then he is likely breaching an NDA, or the trust of whoever entrusted him with that information...

InTheKnow said...

if he is right, but with no real public information available on it, then he is likely breaching an NDA, or the trust of whoever entrusted him with that information...

The thing that I thought was hilarious is that he claimed others should believe he has inside information. This from a guy who tells people he knows more than they do within their own area of expertise. He won't give others the benefit of the doubt, but they should assume he is the oracle of all things silicon.

I just about fell out of my chair laughing.

Tonus said...

"AMD's next gen Shanghai (with 6 cores) should actually be able to out muscle a 4 core Nehalem.

Of course that won't address the power rating, but it at least gives AMD the ability to compete on performance for a while."

The most interesting part of this will be watching the AMDroids suddenly make another about-face and claim that top-end performance is what matters, not power consumption or performance-per-watt. :)

"Small wonder the big topic at AMDZone seems to be Larrabee. It is always easier to attack a product that has yet to be shown publicly than it is to attack a piece of working Si that has been benchmarked extensively."I expect larrabee to remain in their sites for some time, as the initial product will probably not be near to the performance of high-end video cards. Larrabee may turn out like Atom in that respect-- an early product that isn't expected to provide world-class performance out of the gate, which gives the fanatics something to mock.

Of course, if Larrabee's early models sell the way Atom did, I doubt Intel will care about what a few hard core AMD nuts think about it. :)

InTheKnow said...

Potential performance aside, the thing that I find really interesting about Larrabee is that Intel brought end users into the design process. I've seen mention in multiple places that Intel went out and solicited input from Tier 1 vendors regarding features and functionality that they would like to see in the product.

This is a huge break from Intel's past approach, which to be honest was an "if we build it they will come" kind of attitude. Intel has taken a page right out of AMD's book with this one and I don't think the AMD faithful want to acknowledge that. So a lot of the buzz around Larrabee is not just Intel "pumping" the product, but the anticipation of the graphics community to see some of their own ideas become reality.

My expectation is that Larrabee will probably fall short of expectations. Not because of a failure of the product, but because expectations are being built to an almost unachievable level.

Anonymous said...

Potential performance aside, the thing that I find really interesting about Larrabee is that Intel brought end users into the design process...

ITK... good observation, though this is a bit different, as we are talking graphics, which is fairly dependent on developer/end user support. It is also a market where there are current options in volume.

In CPU space, Intel could probably do a bit more about end user input, but there aren't viable alternatives due to volumes (I think Dell found this out the hard way) - the niches where there are alternatives (like HPC) are where AMD made its greatest inroads early on.

I look at Larrabee like I did Atom... the true key to its success will be the 2nd iteration of the product. I was wrong on Atom, and I think this was in part due to the economy and in part due to my failure to see the market, but with Larrabee it is entering an established market with viable competitors. I think the first gen will be for the high-end folks who will customize their SW to the HW, but the mainstream impact will be somewhat limited. The 2nd gen, where Intel can iterate the HW and work out the inevitable driver/SW issues, will be the true test for the mainstream.

The question is how this can/will be ported to the IGP side (or the integrated CPU version). At this point that is really Intel's only vulnerability - if Larrabee can shore this up long term, that is probably enough to make it a success.

InTheKnow said...

Crave did a review comparing netbooks with atom (Intel), neo (AMD) and nano (Via) processors.

They took a different approach than the one you typically see. They picked one netbook to represent each processor, so it isn't an apples-to-apples comparison. But as the article pointed out, it does represent the purchasing decision a consumer is faced with when looking for a netbook.

I wanted to point out one particular item from the review here.

...battery life is especially hard to judge (at least our three example systems all offered six-cell batteries). While we credit Asus' excellent reputation for power management more than the CPU itself, the Intel-Atom-powered Asus 1000HE ran for 381 minutes, the AMD-Neo-powered HP DV2 ran for 149 minutes, and the Via-Nano-powered Samsung NC20 ran for 275 minutes on our video playback battery drain test. Regardless of the why, these numbers cast the Neo in a very poor light. At 149 minutes, battery life on the Neo netbook was a full 2 hours less than the Nano netbook and almost 4 hours less than the Atom netbook.

If this is going to be representative of the builds the Neo appears in, I don't think there will be much of a market for it. Battery life is number 2 on the hit parade (right behind price) for this form factor and offering 1/2 to 1/3 the battery life of the competition isn't going to cut it.

Anonymous said...

Battery life is number 2 on the hit parade (right behind price) for this form factor and offering 1/2 to 1/3 the battery life of the competition isn't going to cut it.

Yeah, but it can handle 1080p (on a screen with what, 600-800 lines?), and it gets better FPS on 3D games... and these are 2 obvious things people look for in netbooks!

It's a bit funny - it's like Intel and AMD have switched places in this market segment. Intel is arguing 'good enough' in the netbook segment (which I think is the right read) and AMD is trying to argue performance and features (which generally doesn't win in a commodity market segment). It'll be interesting to see if Intel is indeed able to keep some market segmentation while AMD tries to blur it.

Anonymous said...

Q1 INTC earnings are in:

Profit of $647Mil (so much for the whole loss thing)
EPS $0.11/share vs analyst expect of $0.03/share
Sales were $7.1Bil, down 26% from a year ago.

Intel was guiding flat sales for Q2, and Otellini said that he believes sales have bottomed out in Q1 and expects things to behave seasonally going forward - which is fairly bullish, though the stock was down in after-hours trading.

AMD reports next Tuesday.

Anonymous said...

For those interested, the earnings call transcript:

http://seekingalpha.com/article/130923-intel-corp-q1-2009-earnings-call-transcript?source=yahoo

Some highlights (the person doing the transcript is responsible for the spelling!)

We have pulled in Westmere, our fist 32-nanometer product family, and will now be shipping those products later this year. We have shipped thousands of Westmere samples to over 30 EOM customers already.

If you combine the core I-7 and the Intel Zion 5500 processors, this week Intel expects to ship its one-millionth Nehalem-based microprocessor, demonstrating that the market still responds to the best performing products

First quarter revenue for Atom-based microprocessor and associate chip sets was $219.0 million, down 27% from the fourth quarter as a result of reductions in downstream inventory.

(Even down 27%, this is still >1Bil/year business...)

Excluding Atom microprocessor, overall microprocessor average selling prices were also approximately flat to the fourth quarter.

Gross margin percentage of 45.6%, or 7.5 points lower than the fourth quarter, and better than the low-40% outlook that we provided in January.

Approximately 6 points of the sequential decline came from under-utilization charges and about 2 points came from higher start up charges as we began the ramp of 32 nanometer.

Total inventories were down $699.0 million, or 19%, from the fourth quarter.

I think we would still expect netbooks as a category to be approximately 2x in 2009 versus 2008. This is still one of the hot categories out there and one of the great stories of technology this year.

Anonymous said...

huh... the formatting on that got messed up after hitting post...

SPARKS said...

"Intel Zion 5500"

Oy Vay! Whether Reform or Orthodox, that still doesn't sound Kosher!

SPARKS

Tonus said...

"(Even down 27%, this is still >1Bil/year business...)"And again, it has to be pointed out that this was for a first generation CPU that wasn't expected to have anywhere near the impact that it did. To those who felt that Atom was going to be an albatross around Intel's neck, it looks as if it will generate more than a billion dollars a year in revenue, based on sales of the part that was described in some circles as "a failure."

What happens going forward, as Intel tweaks the design and process in order to find an ideal balance of power use and performance? I suspect that they are hoping to continue to "fail." :)

Tonus said...

Okay Sparks, Nonny... here is the latest on my i920 adventure.

First things first, did a quick check and it appears that the source of the loud fan noise is... the Radeon cards. I unplugged the case fans and started the system up and the noise level was the same as usual. I am considering my options at the moment, possibly underclocking the cards and lowering voltages, etc. Or even replacing them with two low end fanless cards (I don't play video games on this system anymore, it's just for graphics and such).

Anyway... went into the BIOS, set the OC option to something like XEPC (which claims to set the DRAM and QPI automatically, based on the bus clock setting), set the bclk from 133 to 166 and rebooted. The system booted and then rebooted before reaching the desktop, so I decided to tweak the voltages. Here is where it gets funny.

The range for the CPU voltage is something like 0.80 to 1.50, and I am pretty sure the i920 isn't running on 0.80v! So I decide to set it conservatively and choose 1.15v. I set the next option (PLL I think) just slightly up from its lowest setting. I reboot, and the system boots just fine, running at 3.33GHz and 1.15v. Then I check online for the default voltage for the i920.

It's 1.20v.

Hee hee! I actually got a 667MHz overclock while undervolting the chip! Anyway, just to be safe I set the voltage to 1.225v instead and it's humming along at 3.3GHz. So I assume that it was the slight bump in the PLL setting that stabilized it, and the 1.225v is actually more than I need, though I haven't tested the system under load as yet.

Time to put Photoshop through its paces!

Tonus said...

Okay, it's not the video cards, so it has to be either the HSF or the PSU or both. I'm guessing the HSF then.

I installed the latest ATI drivers with the Control Center and used it to ramp the fan on one of the cards to 100%, and that made it audible enough to separate it from the other fan noise. Plus, the CC shows the cards running the fans at anywhere between 0% and 23%. Oddly, it shows one card running at 49c and the other at 76c, though they're in adjacent slots and the 'cooler' one is closer to the CPU. Hrm.

Well, in May or June I'll order up a good replacement HSF and see how much further I can push this thing. I'm suddenly curious as to how far I can get it on default CPU voltage...

SPARKS said...

Tonus,

You didn't give any core temps @ 1.2 V, either idle or load. Use Prime95; it will beat the chip like a redheaded stepchild. Secondly, 1.2 V is the minimum for overclocks. From every article I've read, 1.300 V is a good starting point. Watch your temps. Higher voltages will widen the voltage band (so as to avoid ripple), so I don't think horsing around with PLL thresholds is necessary at these speeds and voltages. The only time I screwed around with PLL was with serious overclocks on the QX9770. Those tweaks are necessary above 4.27 Gig, and that's living on the edge. Besides, your ASUS P6T motherboard regulators/PS are well engineered and are the best in the industry.

I've found that overclocking the chipset ALONG with the CPU is far more effective as opposed to a straight CPU overclock. All said, the stock HSF has got to go, no matter what you do----immediately!

That lump of extruded aluminium will hold heat like a stone in a pizza oven. Copper fins and heat pipes will dissipate heat like a sheer Victoria's Secret nighty in a summer breeze.

Here's a nice article that you might want to read before you get hot and heavy with your new gal.

http://hothardware.com/Articles/Overclocking-Intels-Core-i7-920-Processor/

Let's not forget GURU's horror stories of slow death by electron tunneling! They still give me the cold sweats. Keep them in mind as a reality check.

SPARKS

Anonymous said...

and see how much further I can push this thing. I'm suddenly curious as to how far I can get it on default CPU voltage...

If you are not OC'ing full time, I'd recommend undervolting the CPU while running it at stock - this will help lower power consumption (a bit) and probably lower temps a bit - it may also allow you to run the heatsink fan at a lower speed if it turns out that that is the source of the noise.

If the noise is from the video cards, I saw a decent review on an aftermarket cooler which can run fanless or with a fan - I think it was the Arctic Cooling Accelero (or something like that)... I can't find the review, but it lowered temps pretty well (esp. under load). Not sure about compatibility with the latest Radeons, and also if SLI is an issue (space-wise).

Anonymous said...

...meant crossfire....

Tonus said...

I don't run crossfire since I prefer the dual-screen setup for my apps and I don't intend to play video games.

It is holding up fine so far, I used Photoshop, did some scanning and some image-editing. Ran pretty much all of the apps I have at the moment (not many) and ran 3DMark and PCMark just to put a bit of mild stress on it. No problems at all so far. I even clocked the video cards up to 775/1005 from the stock 750/900 and they seem to handle it just fine and without pushing the fans past 30% at worst.

I don't think I have anything for measuring the CPU temps. I guess I can install the Asus utility and see what it reports. So far, though, it seems as if the system isn't having any issues with stability. Aside from the aborted reboot the first time I increased the speed, it hasn't crashed or acted up at all. I would clock the video cards back to stock, but the truth is that they spend most of their time running at 500MHz core because I'm rarely using anything that stresses them.

All told I'm really happy with this thing. Its performance is better than I expect even at stock speeds, and the stability at 3.3GHz with almost zero tweaking gives a promise of speed increases to come. But I won't clock it any further until I've got a good HSF on it.

SPARKS said...

"the stability at 3.3GHz with almost zero tweaking gives a promise of speed increases to come."

That's the ticket. Consider yourself Top Dog. 3.2, ~3.3 is what these things were designed for anyway, and given INTC's penchant for conservatively building chips with tons of headroom, you're in fat city. At 3.3 on i7 you've just about blown 99.9% of machines into the weeds. Hell, enjoy the ride.

Then again, if you want to LIVE at 3.8 to 4.0, you're gonna need a top binned model.

Clock till ya rock.

SPARKS

Tonus said...

I don't overclock much these days, and I would've been happy with 3.0GHz out of this chip. I get the feeling that I can probably get it to 3.67GHz or better once it's got a decent cooler, but I'm very happy with 3.3GHz.

I read an interesting post from Kaa this morning, explaining just how much heat he took when he was 'warning' the Zoners about Core2 some time ago. He claims he was banned for being an Intel fanboy at the time! It's interesting to see history repeating itself, he's already been accused of pro-Intel spin by several people (implied or direct), and abinstein in particular has been hammering at him for over inflating Larrabee's performance.

How long before he gets another temp ban for not being a good company man? And why would he put himself through that, time and again? I understand that he likes AMD and wants to see it succeed, but he'd really be better off at a site where he isn't treated like shit just for being objective.

InTheKnow said...

Enumae, not sure if you still read this blog, but if you do I would be interested in an estimate of the die size of the Larrabee wafer that Gelsinger showed at IDF.

I read somewhere yesterday where Otellini said that it was an "extreme edition" version, but it should give us an idea of the upper limit on 45nm.

Thanks.

Beeblebrox said...

I read somewhere yesterday where Otellini said that it was an "extreme edition" version, but it should give us an idea of the upper limit on 45nm.

http://www.amdzone.com/phpbb3/memberlist.php?mode=viewprofile&u=23026

A Nonny Moose said...

Tonus - is this the C0 stepping or D0 stepping of your i920? In either case, it looks like you got an excellent chip! What're your temps, BTW?

As for your 4870's, I would think there's a utility where you can set the default fan speed, like I have with my 8800GTX. Of course, as the GPU loading increases, the fan speeds up. My default speed was something like 70% of max which made for some noise, so I lowered it to 50% and can barely hear it unless I start gaming.

This morning I had to shell out $500 for a 15" refurbished Lenovo lappy for my daughter, as her 17" Dell (2nd one in a year) quit working and she has a bunch of term papers due before her college semester ends next month. Just uses the Intel IGP 4500 chipset which'll make the AMDZonerz all pissy, but she doesn't play games on it anyway - just iTunes, AIM and the very occasional paper :).

So, as long as she doesn't spill beer into this one, I should be in good shape to start my i920 build shortly...

Tonus said...

Nonny- it is a C0 stepping according to CPU-Z. I haven't installed the Asus software so I don't know the temps; I will try that soon just to see. The fans on the ATI cards seem to generally stay in the 0% to 25% range, and they're not audible over the HSF unless I crank them to 90-100%.

I don't think it's a temperature issue, I think the fan is just running at max speed all the time. When the system slips into sleep mode the sound drops to nil. Otherwise, it's at a constant level.

pointer said...

I don't think it's a temperature issue, I think the fan is just running at max speed all the time. When the system slips into sleep mode the sound drops to nil. Otherwise, it's at a constant level.

How many pins does your CPU fan have? If 3 pins, then it supports only 2 speeds - on/off; if 4 pins, then it is PWM.

A Nonny Moose said...

Tonus: I don't think it's a temperature issue, I think the fan is just running at max speed all the time. When the system slips into sleep mode the sound drops to nil.

Hmm, an aftermarket fan like the Zalman 9900 with a 120mm fan would probably be much quieter. I have a 9700 on my QX6700 rig and it is pretty quiet, even though I have a mild oc to 3.0GHz. What makes the most noise is the buzzy little NB fan that came with my 680i mobo (and yes - I have learned my lesson and will not be buying any more nVidia chipset mobos :). That sucker sounds like a blender chewing on marbles..

I have read where the C0's usually get up to 3.7+ GHz with no sweat, and quite a few hit 4.0. But unless you really need that level of performance, I'd keep it safe at least until the time you're considering another upgrade. Then you can beat the tar outta it, like Sparks and his 9770 :).

SPARKS said...

"Then you can beat the tar outta it, like Sparks and his 9770 :)."

I resemble that remark!

Let me say I had every intention of beating the tar out of the QX9770. Then GURU, with his keen eye for detail and technical prose, explained in vivid detail the subtle long-term degradation of copper interconnects and dielectric breakdown due to heat and overvolting.

In short, he scared the shit out of me. (His intellect frightens me more.)

I may be stupid, but I'm not dumb. Living at 4 gig @ 24/7 @ 1.4 V was, and still is, a very comfortable spot for both me and this magnificent piece of hardware, with no ill effects for well over a year.

However, 4.27 is an entirely different issue. This is beating the tar out of it. PLL needed to be skewed on both die, thermals were out of control (air), and most importantly, long term stability would have been compromised. All said, there simply wasn't the need nor the difference in performance to warrant those speeds.

I suspect G and ITK will bear me out on this one: strange things go on with ALL the hardware above 4 gig. Here we are leaving 'Ultra High Frequency' and moving well into 'Super High Frequency'. There's some weird physics going on that I have absolutely no clue about. I can't explain it, but I know it's there.

Better to pick up another IPC, run at or near 4 gig, and be done with it. That's exactly what the geniuses at INTC have done with both Core2 and Core i7.

It works for me at any price.

SPARKS

Lem said...

Beeblebrox, was this the post you were intending to link to?

http://www.amdzone.com/phpbb3/viewtopic.php?p=155598#p155598

I believe that's a wafer with Larrabee dice on it, where enumae has tried to separate and count them.


While I'm here, I was meaning to ask some of you guys with Core i7s, what's the latency on core up-clocking on load? K8 was pretty crap at it, taking between 300 and 500ms to scale up (and down, for that matter). I've found Phenom to do it instantaneously, which makes a huge usability improvement (smooths everything out noticeably). How about Core 2? I've never used either to test it myself, so any feedback would be appreciated, thanks.

Beeblebrox said...

Sorry for the wrong link.

http://www.amdzone.com/phpbb3/viewtopic.php?f=52&t=136244&st=0&sk=t&sd=a&start=100#p156017

Anonymous said...

I'm not too sure I would read that much into the Larrabee die size on the engineering sample. As mentioned somewhere, and I think by a couple of folks here, there will be several flavors of the part, and I'm sure there will be a 'cut down' version with fewer cores and/or less cache; and then there's the question of 45nm vs 32nm. (My gut is telling me it will be 45nm for a while.)

Intel will release large-die products for niche areas - I seem to recall server chips with massive amounts of cache that had huge die; those were not huge-volume parts, but they served a niche market and were priced to offset the low volume and large die. I'm sure there will be a couple of ranges for Larrabee, and there will be some folks doing high-end work, where the SW is tailored and developed specifically to the architecture, who will be willing to pay a premium (if the performance is there) and probably will not care about power all that much.

I do find it humorous at AMDZone where, if someone speculates in favor of Intel, they get yelled at for being a fanboy and talking about simulations and vaporware, and of course when someone speculates negatively about Larrabee... well, you don't need any actual evidence, it is 'just speculation' based on something 'I heard' or 'know' (without any supporting info). Pot... Kettle... AMDZone.

Anonymous said...

I see IBM is up to its old tricks: they are talking up a bare-Si 28nm process (a half node down from 32nm) as 40% better than 45nm.

Funny thing though, I seem to recall similar estimates for 32nm vs 45nm... which would make 28nm pretty much similar, performance-wise, to 32nm! (Which is not that strange, considering 28nm will be largely an optical shrink of 32nm.)
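To make the baseline game concrete (purely illustrative, using nothing but the 40% figures above):

    # Both claims are referenced to 45nm, so together they imply
    # essentially nothing new at 28nm:
    perf_45 = 1.00
    perf_32 = perf_45 * 1.40   # the ~40%-over-45nm estimate recalled for 32nm
    perf_28 = perf_45 * 1.40   # IBM's new 28nm claim, same 45nm baseline
    print(perf_28 / perf_32)   # 1.0 -> 28nm buys roughly nothing over 32nm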

Once again IBM is playing with the baseline to talk up (or more accurately, hoping others get confused and talk up) the 28nm process. This is eerily similar to when IBM started comparing there 65nm transistors to unstrained devices, which led to a nice performance claim. The idiot press ran with it and just substituted 90nm for the unstrained devices, and suddenly folks were expecting big things on 65nm; it was never realized, because IBM was not doing the comparison to 90nm.

I suspect IBM will see a significant jump from 45nm to 32nm as they finally get around to high-k/metal gate, but people will be wondering where the jump is when they go from 32nm to 28nm, forgetting that IBM's claim was referenced to 45nm. IBM certainly knows how to play the game, as nothing they say is technically inaccurate, and they know that most folks don't have the background to understand and analyze exactly what it is they are saying.

There's a few articles around on Tom's and INQ, too lazy to link! :)

One would think the press would not be this lazy and would have learned from the past... welcome to the internet age, where 'journalism' is basically copying snippets out of a press release, having little to no background on the article content, and being too lazy to actually fact check and/or do some research.

Anonymous said...

their, there, they're... :) I think I'm just going to use 'thare' from now on....

Tonus said...

Lem: "While I'm here, I was meaning to ask some of you guys with Core i7s, what's the latency on core up-clocking on load?".

Lem- what does that refer to and how would I test it?

InTheKnow said...

There's a few articles around on Tom's and INQ, too lazy to link!

I read the INQ's article on this. Other than the smarmy anti-Intel tone that they take, the information didn't seem too bad. At least not when compared to the comments.

The lack of knowledge displayed by those that felt compelled to post comments was mind boggling. The folks posting on the INQ made the usual suspects on AMDZone look like technical geniuses.

I think my personal favorite was the guy who was claiming transistor development was going to end at 22nm, so this was putting AMD within spitting distance of the best possible transistor size.

Along the lines of the press eating up whatever they are spoon-fed (and in fairness, Intel takes advantage of this too), I love the 'AMD is closing the gap on process technology' line. No one seems to notice that the move to 32nm at Globalfoundries in H1 2010 will not affect AMD's microprocessors, because it is a bulk Si process.

And the press has been sucked into playing a mix-and-match game with the dates on the SOI vs bulk processes as well. If people stop and think a bit, they will remember that TSMC launched their bulk Si 45nm process a month or two before Intel launched their 45nm process. That was around Sep07. Intel was in Nov07. I've found references to SMIC adopting IBM's bulk 45nm process in Dec07. So here we are at the end of 2009, and Globalfoundries is pushing out IBM's 32nm bulk process. Oddly enough, in the same time frame as Intel.

Just like they did 2 years ago.

A Nonny Moose said...

I guess those experts at UAEZone are worried not only by Larrabee, but perhaps also by this 2-month-old tidbit from the MSNMoney message board:

The imbedded GPU (Fusion) part is currently going through the 4th or 5th re-re-design and they still can't get it right. Two VP's have thrown in the towel and left the company and these are long time AMD guys. There have been several different approaches including competing designs between ATI group and the AMD design group. The current design sounds like a scaled back core perhaps K6/K7 technology with the GPU in bulk process rather than SOI. The ATI GPU seems to not like SOI. All that means is that the processor cores would run at pre K8 speed. That would seriously degrade the Fusion performance. My guess is that product will never tape out just like others. AMD has spent $Millions on designs then cancelled the project before completion. Besides the huge design engineering there is also all of the new product test hardware and software costs. The hardware gets trashed when the project is cancelled and all of the engineering costs for these projects is wasted money. The new CEO Dirk Meyers seems to have a much bigger mess on his hands then he inherited from Hector Ruiz and it sounds like it is getting worse. Internal reorgs are not making things better and more lay offs in manufacturing are rumored. It's hard to be optimistic in that environment.

Guy seems to have some inside knowledge on Fusion..

Anonymous said...

Guy seems to have some inside knowledge on Fusion..

It's hard to say how accurate that might be (I'd take it with a grain of salt unless the poster has some sort of track record). I would imagine there are some snippets of truth, and perhaps some things may have been filled in (accurately or inaccurately?).

I would not be surprised if there was an SOI issue on the graphics; it is not always easy to simply port a design over to SOI. It makes you wonder, if this is indeed the case, whether a "glued" (gasp!) approach would be better for AMD.

I would also not be surprised if there were competing designs... all you need to do is go to the AMD website and look at the search feature, where you HAVE TO choose between ATI and AMD, to see the level of integration that still needs to be done.

Anonymous said...

"Shocking:"Phenom II proves to be better performaer than i7 on high-resolution gaming.NOTE: Also, notice that the Phenom II 955 is not using yet DDR3 memory and it's being run on a cheap 780G mobo.

Anonymous said...

And why would DDR2 vs DDR3 affect gaming performance?

I also assume you are referring to the min frame rate data only? The data is all over the place - on some tests it has the Phenom 920 OUTPERFORMING both the Phenom 940 and 955. In some cases it has the i7 920 outperforming the i7 965 and i7 940 (on the min framerate tests).

So from the data you use to draw your conclusions, would you then also conclude a 920 is better than a 955 (or a 965 in Intel's case) for gaming?

Tonus said...

The strange part is that the guy who did those benchmarks is getting ripped apart at AMDZone. Even when someone runs benchmarks that favor AMD, it's not enough.

Of course, the other benchmarks where the i7 won (such as Cinebench) were dismissed, and no one has asked him to explain the strange result for the i7 965 in Crysis (minimum frame rate), though I suspect that if there had been an unusually low score for one of the Phenom processors, he'd be hearing about it.

But I guess it is good to discover that when you get to the CPU-bound portion of a gaming benchmark, the Phenom is ahead by an amount smaller than the margin of error. Heh.

Anonymous said...

But I guess it is good to discover that when you get to the CPU-bound portion of a gaming benchmark, the Phenom is ahead by an amount smaller than the margin of error. Heh.

I suspect you mean GPU bound?

The min framerate seemed kind of hokey (in terms of consistency of results). There were a few benchmarks where both the lower-clocked AMD and Intel CPUs had better min framerates than their higher-clocked brethren. Either something strange is going on (CPUs toggling in and out of idle states?) or the min framerate metric is a bit fishy, as there is no reason why the lower-clocked chips should be better than the higher-clocked ones. Given that, it's hard to really take ANY of the min framerate data seriously.

InTheKnow said...

The min framerate seemed kind of hokey (in terms of consistency of results).

Ironically, the benchmark with some real value to the company has been totally dismissed. The PhII actually seems to do really well on the encryption stuff. While that may not be a big deal to you or me, I have to think that could score AMD a big win with the NSA and the military.

What better excuse could the government find to stimulate the Dresden economy than to buy a bunch of AMD chips for their intelligence efforts?

As to the rest of the review, my German ain't what it used to be, but I make out the recommendations (or conclusions if you prefer) to say that the Core i7 is clearly the better performing chip. However, if you take the price performance relationship into account the PhII is the better choice.

It is more than a little ironic that the zoners are taking this guy to task for saying exactly the same thing they have been saying.

SPARKS said...

"AMDs zweiter Versuch: Phenom II"

The whole article is a steaming pile of horseshit.

Any benchmark comparison is graphics dependent above 800 x 600 resolution. Why do INTC chips smoke the Pheromone chips when the resolutions are CPU bound, only to fall behind at higher, graphics-card-dependent resolutions?

Both AMD and NVDA drivers are a HUGE factor. They may not be optimized for X58/i7------yet.

Why didn't they throw in a Yorkie/X48 for the testing? Because they would have gotten their asses handed to them, that's why.

If these "comparisons" had any real substance on this side of the pond, the AMD marketing boys would screaming their asses off from every available rooftop.

All I hear is the sound of crickets from every AMD-pimping site stateside. Especially from those idiots at Fuddie, and at the INQ.

Nonsense.


With regard to the IBM 28nm consortium, I saw the story/link a few days back. (I was too lazy AND unimpressed to post it.)

"The IBM-led alliance consists of IBM, Samsung Electronics, STMicroelectronics, Globalfoundries, Chartered Semiconductor and Infineon. The companies have joined forces to develop the 28nm high-k metal gate process, and IBM claims customers can start the design process in 32nm and then transition to 28nm at a later point."

http://www.fudzilla.com/index.php?option=com_content&task=view&id=13207&Itemid=1

Three things:

IBM HIGH-K???? Where?

They had big problems going from 65nm to 45nm, but a jump to 28nm is a breeze!

TSMC is conspicuously absent. I suppose the ATOM deal didn't sit too well with the 'consortium.'



I'm not sure many on this site listen to 'Motley Crue', but they do have an excellent song. It's called "Same Old Situation"

http://www.youtube.com/watch?v=M1knx2NLCS0

Or perhaps 'Aerosmith', 'The Same Old Song and Dance'?

http://www.youtube.com/watch?v=BqG2lOfD9cc


SPARKS

Tonus said...

"I suspect you mean GPU bound?".

Yeah, that's probably the term I was thinking of, the point at which more CPU power doesn't have much effect on scores.

I was under the impression that long before now, most review sites had already pointed out that Core i7 did not perform that much better than Penryn in games, that it really shone in more computing-intensive workloads like 3D rendering and databases, etc etc.

Perhaps this is the new tactic for people who are unwilling to deal with reality; take a known issue and act as if it's a new discovery, and crow about how the Phenom II is a better deal for a gamer than Core i7. Never mind the scores in non-gaming apps, look at the FPS!

SPARKS said...

"Why do INTC chips smoke the Pheromone chips when the resolutions are CPU bound?"

I should have added, "as opposed to higher resolutions, when they are GPU bound."

The point is taken and you've clarified it. As I said before, they are giving a new meaning to 'DFM': 'Design For Marketing.' AMD will never come out publicly and make this claim; they'd get creamed.

Besides, Tonus, you're in the big leagues; you know what you've got, and nothing even comes close.

Lieutenant Weinberg in "A Few Good Men":

"You object once to get it on the record. If you keep at it, and it makes it look like a bunch of fancy lawyer tricks. It's the difference between trial law and paper law."

You can substitute "trial law and paper law" with "actual performance and paper performance."

SPARKS

Lem said...

Lem: "While I'm here, I was meaning to ask some of you guys with Core i7s, what's the latency on core up-clocking on load?".TonusLem- what does that refer to and how would I test it?Well, I simply use FLAC-Vorbis transcoding with oggenc (command line) to observe the latency of frequency scaling. Something as simple as oggenc -q8 some_music_file.flac, which will proceed to transcode that FLAC to Ogg Vorbis quality 8 (~256kbit nominal bitrate). Once I press enter, I watch the % complete counter very closely. On K8, it counts up quite slowly (1GHz idle with most K8s), then after about half a second, goes much faster (up to 3x on my 6000+, max 3GHz). That indicates the latency of the 6000+'s frequency changes.

On this Phenom 9850BE (idle 1.25GHz, max 2.5GHz), there's no noticeable delay; the core which oggenc runs on clocks up instantly.

I'm sure there would be other ways to test this, but oggenc provides a really easy way to observe it. In the grand scheme of getting work done, the latency of frequency scaling doesn't really matter, since the CPU core(s) will spend most of their time in their highest P-state, but for desktop usability with Cool'n'Quiet enabled, the implementation in K10 is far superior to K8's, and it makes a noticeable difference for me.
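For anyone without a FLAC file handy, roughly the same observation can be made in a few lines of Python: sample how much busy-work a core gets through per fixed window, and look for the jump when the governor raises the clock. A rough sketch of the idea (the 10ms window, 2-second warm-up sleep and 90% threshold are my own arbitrary choices, not anything from the oggenc test):

    import time

    WINDOW = 0.01  # sampling window in seconds (arbitrary choice)

    def measure_rampup(duration=1.0):
        """Record busy-loop throughput per window after letting the core idle."""
        time.sleep(2.0)                  # give the governor time to drop to idle
        samples, start = [], time.time()
        while time.time() - start < duration:
            t0, count = time.time(), 0
            while time.time() - t0 < WINDOW:
                count += 1               # trivial work; rate tracks the core clock
            samples.append(count)
        return samples

    samples = measure_rampup()
    # the first window reaching ~90% of peak throughput marks the ramp-up point
    ramped = next(i for i, c in enumerate(samples) if c >= 0.9 * max(samples))
    print("approx. ramp-up latency: %.0f ms" % (ramped * WINDOW * 1000))

Going by the numbers above, a K8 should print something in the 300-500ms range, while a K10 should come out near zero.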

SPARKS said...

LEM, if I'm reading you right, I think the term "latency" may be a bit confusing here. I know it was for me. For most hardware nuts like myself, latency usually refers to system memory and is measured in nanoseconds. Memory timings of 7-7-7-21 would lower my DDR3 system latency compared to running at 9-9-9-24. The catch is that the numerically higher timings allow for higher memory speeds.

For my particular machine, I can run 7-7-7-21 @ 1866 MHz with a 65ns latency between accesses. However, I can "loosen the timings" to 9-9-9-24 and get very close to 2 GHz, but the latency is well into the 70s or worse. This is a hardware/performance balancing act. Definitely not for the newbie.
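The arithmetic behind that balancing act is simple enough to sketch (standard DDR timing math; note the 65ns Sandra figure is a full-system latency, of which the CAS term below is only one component):

    # Absolute CAS latency in ns = CAS cycles / memory clock,
    # where the memory clock is half the DDR data rate.
    def cas_ns(cas_cycles, ddr_rate):
        return cas_cycles / (ddr_rate / 2.0) * 1000.0

    print(cas_ns(7, 1866))   # ~7.5 ns for CL7 at DDR3-1866
    print(cas_ns(9, 2000))   # ~9.0 ns for CL9 at DDR3-2000

Which is exactly why loosening the timings while raising the clock can leave the absolute latency nearly unchanged.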

It must be said that AMD has had absolute dominance over the years in lower memory latency and higher throughput in MB/s.

In contrast, I think you are referring to actual processing time for your specific application. It's really hard to say how your application would compare without a direct comparison on a machine loaded with your software. Further, if your program runs well and, as an added bonus, is optimized for AMD processors/platforms and multicore threads, hey, more power to you. You get more done in less time.

As for speculating how well your program would do on new/other processors (no matter who makes them), that would be impossible without direct machine-to-machine comparisons.

Try doing a search with your application. Perhaps there's a forum where others are trying new hardware and getting good objectively timed results.

SPARKS

SPARKS said...

Check that LEM, I'm getting 57ns in Sandra, 7-7-7-21 5-28-14-8

SuperTalent Project X 1866
1.67V

QX9770
X48
System timer 3.87 GHz
CPU speed 4.06 GHz
FSB speed 4x 478 (1.912 GHz)

SPARKS

SPARKS said...

Whoops! That's 2.05V on the Mem!
SPARKS

pointer said...

Sparks, Lem was referring to the latency involved in transitioning the CPU from a lower frequency to a higher one. Each P-state has 2 components, the frequency multiplier and its VID (in the i7's case, the VID is 'hidden', as it is controlled by the power control unit). To transition to a higher frequency, the voltage must first be raised, and only then the multiplier. Generally, the time taken to raise the voltage is longer than the time taken to change the ratio multiplier. The earliest SpeedStep was even worse, as it involved System Management Mode to help the transition. Later and current EIST should be pretty fast, but I do not have any numbers to support the claim, except that the transition only involves one MSR write.
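For the curious, on Linux you can actually watch that MSR write take effect: IA32_PERF_STATUS (MSR 0x198) reports the ratio/VID the core is currently running, and the kernel's msr driver exposes it as a file. A minimal sketch, assuming root and the msr module loaded (modprobe msr); the bit layout of the low word varies by CPU generation, so treat the raw value as a change indicator rather than trying to decode it:

    import struct
    import time

    IA32_PERF_STATUS = 0x198   # architectural MSR: current operating ratio/VID

    def read_msr(msr, cpu=0):
        # /dev/cpu/N/msr is provided by the Linux 'msr' kernel module
        with open("/dev/cpu/%d/msr" % cpu, "rb") as f:
            f.seek(msr)                      # the MSR address is the file offset
            return struct.unpack("<Q", f.read(8))[0]

    # Poll while a workload starts up; the value changes as the core
    # steps through P-states.
    for _ in range(20):
        print("perf status: 0x%04x" % (read_msr(IA32_PERF_STATUS) & 0xFFFF))
        time.sleep(0.1)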

Unknown said...

Ah, thanks Pointer. Yeah Sparks, I wasn't talking about RAM timings or even application performance. I was talking about Intel SpeedStep (or Enhanced SpeedStep?), and AMD Cool'n'Quiet.

Here's the situation that I've observed:

AMD K8 (X2 6000+): Takes 300-500ms to jump from lowest power state (usually 1GHz/~1.0V) to highest power state (3GHz/~1.35V on X2 6000+), and the same to clock back down when the load finishes. Both cores must be at the same clock, so even if one is idle, it must clock to 3GHz if the other does.

AMD K10 (Phenom 9850): No perceivable lag (better term than latency? :)) between lowest and highest power state. Each core clocks independently according to load.

This is just my experience, and I'm curious whether Intel's power management (PM) is similar. I know Intel enhanced its PM with Nehalem significantly, as AMD did with K10 over K8. I know Penryn had independent core clocking. I would expect Nehalem to have the same (just an even better implementation).

For pure performance, turning off Cool'n'Quiet on K8 is usually a good idea (especially for sporadic loads which would cause frequent up/down clocking), but with K10 (Phenom and Phenom II, and their Opteron brothers), this is no longer necessary, since power state transitions appear to be instant and shouldn't affect performance at all. This is a big plus as there's now no reason not to use it (saves power, heat, etc).

So my main question is: Is there any lag when changing between power states on Core i7 (and Core 2, if anyone's willing)?

A quick and easy way for me to test this lag between power states is just to run oggenc on the command line, which shows its progress as a numerical % to one decimal place (rather than progress bar or whatever). I can watch how fast it counts up, which is a very good indicator of clockspeed, as it's very obvious when the core clocks up. That's how I observed the 300-500ms lag on K8, and the non-existent lag on K10. Someone could use any application at all, as long as it's possible to tell if there's a jump in performance within the first second of operation (I wouldn't expect any worthwhile implementation of CnQ or SpeedStep to take longer than this to activate).
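For anyone who'd rather not use oggenc, here's a rough way to script the same check: sample the throughput of a trivial busy loop in small time slices, starting from idle. If the governor takes 300-500ms to clock up, the first few slices will be visibly slower than the rest. The 50ms slice size and 90%-of-peak threshold are arbitrary choices on my part, not anything magic:

```python
# Watch per-slice throughput of a busy loop to spot the P-state ramp.
import time

def slice_iters(duration=0.05):
    # Count trivial loop iterations completed in `duration` seconds.
    end = time.perf_counter() + duration
    n = 0
    while time.perf_counter() < end:
        n += 1
    return n

time.sleep(2)  # let the core settle into its lowest P-state first
samples = [slice_iters() for _ in range(40)]  # ~2 seconds of load
peak = max(samples)
for i, s in enumerate(samples):
    note = "  <- still ramping?" if s < 0.9 * peak else ""
    print(f"{i * 50:4d} ms: {s:9d} iters{note}")
```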

Sorry for the confusion.

Unknown said...

Oh, with respect to my comment about CnQ or SpeedStep not activating within 1 second, I guess it's possible for a conservative CPU governor to take longer than this to clock up (would be useful on a notebook for example).

The CPU governor I was using to test was "on demand", which clocks a loaded core to maximum clock as soon as possible.

Anonymous said...

Lem, good observations... but is 0.3 sec really that noticeable? Obviously it can be detected, but still. :) Do you have any idea what the new transition time is on the Phenom? (i.e. did you measure the transition time or are these guesstimates?)

I'm also curious as to how much of it is individual cores being able to turn on and off vs moving in and out of sleep states. My guess (and it is a pure guess) is that individual cores clocking up/down is the reason for the improvement, as it is probably not normally going from 4 cores idle to 4 cores loaded all at once, but rather a mix and match depending on what you are running - this would probably lead to more of a 'blended' transition, which makes individual cores going up/down less noticeable.

Unknown said...

Anonymous, Yep 0.3-0.5sec is definitely noticeable. The case I presented with oggenc actually doesn't matter from an end-user's perspective, since who cares if your vorbis file encodes in 0.2 seconds less?

Where this is actually noticeable, going to K10 from K8, is scrolling in complex web pages, PDF documents, and in OpenOffice (I run Linux). If I'm browsing or editing (say for review) my documents in OpenOffice or in Evince (GNOME PDF viewer), the demand on the CPU is extremely erratic, causing up/down clocking almost constantly. This doesn't happen on simple files, but it does on more complex files with images etc. Phenom provides a much smoother user experience in this regard (compared to K8 that is.. I've never used a C2D, C2Q or i7, which is why I'm asking these questions).

As to what the new transition time on Phenom is, it's easily less than 100ms as I can't actually perceive it with my oggenc test. These are guesstimates, I have no real way to measure this :) It seems there's no lag when a high-CPU load process migrates between cores either, which is nice.

"My guess (and it is a pure guess) is that individual cores clocking up/down is the reason for the improvement..."

I'm not entirely sure what you meant by this. The test I'm doing with oggenc is single threaded.

SPARKS said...

Pointer, Lem, thanks for clearing that up! P-states! Slowing down a CPU? THREE-TENTHS OF A SECOND of power state LAG!!!!! ALL SELF INDUCED!!!!

Sorry Fella's, I have never used any of this stuff. I haven't got a clue.

The day I intentionally SLOWDOWN my CPU's, someone will have a shotgun pointed to the back of my head. (I might hesitate if it's only a single barrel)

SPARKS

SPARKS said...

Lem, I found this nifty little utility that may interest you.

http://cpu.rightmark.org/products/rmclock.shtml#rmclock_pro

SPARKS

Unknown said...

That looks like a great little utility Sparks, pity it's for Windows (I run Linux almost exclusively.. I only game in Windows, where power management isn't required since everything is pegged at max).

I understand you're a performance nut, and to enable something like Cool'n'Quiet or SpeedStep seems counter-intuitive. And it was, in K8 at least (since you'd get that 0.3s lag), but in K10... not any more; it's seamless and totally transparent. There's now no reason not to use power management even when you're a performance nut, since you won't even notice that it's on (except lower CPU temp, lower fan speeds, and a slightly lower power bill :)).

So my main curiosity here is: How does Intel SpeedStep perform? Does it have any lag, on Core 2, on Core i7? I suspect Nehalem does it as well as K10, but I have no proof, nor anecdotal evidence.

SPARKS said...

"There's now no reason not to use power management even when you're a performance nut, since you wont even notice that it's on (except lower CPU temp, lower fan speeds, and a slightly lower power bill :))."

Nah, it ain't gonna happen, not for me anyway. There is something inherently intrusive about some algorithm dictating/estimating my power usage at any given moment. (Big Brother Watching Syndrome.)

I deal with power that can heat steel enclosures uncomfortable to the touch (due to hysteresis) if the cables aren't properly configured - phase A, B, C, Neutral, 12 runs in total. Power fluctuations are off the wall from cable to cable. It could get hot enough to break down the insulation and cause real honest-to-God HAVOC! (Read: Catastrophic FIRE!)

To me, when I am in the thousands of amps at 277/480V, a couple hundred watts ain't squat coming from a computer.

I don't even like my drives spinning down. (Not good for RAID 0)

I've got a 1 kW PS that says there's plenty of juice on hand.

"On All The Time" is in my property box. I don't like power fluctuations upsetting my computers delicate innards/timings, either.

I despise boots and reboots.

All the fans on/in my computer sound like a F-22 Raptor in full afterburner. The plus is, I can't hear my wife complain over the noise.

SquareD Industrial Grade spike/surge protectors are hard wired to my home service.

They don't call me 'Sparks' for nothin', baby.

SPARKS

InTheKnow said...

"There is something inherently intrusive about some algorithm dictating/estimating my power usage at any given moment. (Big Brother Watching Syndrome.)"

Then you'll need to stay away from the whole Core i7 line since that is what turbo boost does. The processor itself monitors your power usage (in the form of temperature) and adjusts your clock speed accordingly. :)

SPARKS said...

"Then you'll need to stay away from the whole Core i7 line since that is what turbo boost does."

ITK, Cute, real cute. From what I understand, (I could be mistaken because I don't own an i7 --- yet.) this Turbo Boost thingy is an option that the user could enable/disable in the BIOS.

Further, I also understand that this (option)? was incorporated for the newbie to painlessly and safely overclock. Hey, preordained overclocking for the masses. ".....everyone will be famous for fifteen minutes." (Andy Warhol)


5 years ago the word 'overclock' was verboten in the hallowed halls of Hillsboro, Oregon. I think this was INTC's way of being enthusiast friendly by mildly sanctioning the practice, while simultaneously loosening up the conservative stiff collars in corporate.

Thank GOD they came to their senses.

Personally, I'd much rather buy a chip with an unlocked multiplier, one that's (D0) stepped, and do it the old fashioned way: slow incremental steps right to the edge, then back it off a notch. I do it slowly, over weeks, running every app, until one hiccups. Updated, matured BIOS and drivers can't hurt, either.

Then there is the Hyper Threading can of worms.

Oh, don't forget the Fluke 45.

SPARKS

Anonymous said...

"Then you'll need to stay away from the whole Core i7 line since that is what turbo boost does."

When you're in Sparks' operating range (under 4GHz need not apply), there is no Turbo Boost, just good ole Turbo Charged!

Anonymous said...

http://lists.laptop.org/pipermail/devel/2009-April/024127.html

Sorry but I got to keep hating on the OLPC... EGO-ponte is just driving this into the ground.

They switched to the Via C7 which will be clocked somewhere between 400MHz and 1GHz... but... it gets better... it is able to be clocked EVEN LOWER if thermal issues arise (those 400MHz processors can run pretty hot........in 1980!). Whew, I was hoping there would be a feature where you could clock it even lower than 400MHz, and EGOponte delivers yet again - proud to be an MIT alum!

And the best part - the new chipset... when you think OLPC, what do you think....come on... yeah, that's right HD audio and HD video! Sweet! That's what I'm talkin' about! When I think low cost PC designed for education in 3rd world countries, I think 'gots to have my HD'! I'm sorry what is the screen size and resolution again? HD-freakin-video!?!? Does it come with the 5.1 external speaker set?

You have to wonder if they took all of the development money and salaries of the people working on OLPC and just used it for coupons for EXISTING netbooks... yeah I know, most netbooks don't feature the capability to clock lower than 400MHz! Damn... so close to a solution... yet so far...

InTheKnow said...

"Further, I also understand that this (option)? was incorporated for the newbie to painlessly and safely overclock."

Actually, turbo boost is designed to change the clock speed as the number of operating cores changes. So if you are running a single threaded app on one core, then you will get better performance on that app. In short, it was a way to give back some of the single threaded performance that you lose when you go to the lower clock speeds that quad core thermals dictate. Hate from the lunatic fringe aside, I thought it was a pretty clever innovation.
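To put rough numbers on it, here's a toy model of the bins using the published i7 920 figures (133 MHz base clock, 20x multiplier, +2 bins with one core active and +1 otherwise). Treat the bin table as illustrative rather than gospel:

```python
# Toy model of Core i7 920 turbo boost: extra multiplier bins depend on
# how many cores are active. Bin counts are the published 920 values.
BCLK_MHZ = 133.33
BASE_MULT = 20
TURBO_BINS = {1: 2, 2: 1, 3: 1, 4: 1}  # extra bins per active-core count

def effective_clock_ghz(active_cores):
    return BCLK_MHZ * (BASE_MULT + TURBO_BINS.get(active_cores, 0)) / 1000.0

for cores in range(1, 5):
    print(f"{cores} active core(s): {effective_clock_ghz(cores):.2f} GHz")
```

So a single-threaded app gets roughly 2.93GHz out of a nominally 2.66GHz part, which is exactly the single-threaded give-back I described.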

And yeah, G is right. When you've pushed the CPU up to the 4Ghz range, turbo boost is a moot point.

I just couldn't resist the dig. ;-)

Tonus said...

Lem: "I understand you're a performance nut, and to enable something like Cool'n'Quiet or SpeedStep seems counter-intuitive."I turn it off as a reflex. :)

I had speedstep on when I had my Penryn system and all I can say is that any lag that may occur while it is ramping up to handle load wasn't noticeable offhand. And since I use Windows, it'd be impossible to distinguish such lag from the usually slow start-up performance you get when you launch an application.

That seems like something more relevant for a notebook user, as it may impact his experience somewhat, and possibly affect battery life.

SPARKS said...

"EGOponte delivers yet again - proud to be an MIT alumn!"

Cheese, G's, don't be so hard on yourself! He's a bright guy who forgot his roots as a scientist/engineer. His objectivity has been clouded by politics, crusading causes, and fame, just like Timothy Leary who taught at Harvard (the other school).

(Sorry, I couldn't resist.)

I saw this new OLPC hardware detour the other day. I chose not to comment, so as not to piss you off. Since we're in it, I may as well mention that by not going Windows/x86 he's cut the 'children' out of a multitude of older programs like 'Reader Rabbit', and God knows how many other compatible children's learning/teaching tools written over the decades. This is what confounded me most.

Frankly, I don't get it, unless the whole thing is a sham to get computers into the hands of third world adults, the real target. What better way to view world news, politics, and social events than in HD?

I'm just guessing, naturally.

SPARKS

Anonymous said...

"Frankly, I don't get it, unless the whole thing is a sham to get computers into the hands of third world adults, the real target"

Now that I think about it, perhaps it is EGO-ponte's genius behind this move... he makes a series of ridiculous decisions which destroy the utility of the product. It fails miserably, and he starts going to the gov'ts of the world he made all his pie-in-the-sky promises to, telling them he was forced into it by the big bad evil companies who wouldn't cooperate, in an attempt to have gov'ts pressure the large companies into doing the work.

Perhaps I am underestimating the tank job that appears to be going on, and it is not a tank job but a beautifully architected strategy. I could see Schumer, Cuomo, Patterson, et al providing an OLPC subsidy to AMD to make these chips in upstate NY, while levying a "we don't think they play nice with others" fine on Intel to pay for it.

Anonymous said...

Damn tags screwing up the formatting

Testing....
If you put an extra return before you close the tag you can get the return.

Tonus said...

Considering the mess they're making of GM, I don't think that AMD would consider government involvement to be a favor, even given their current financial condition.

SPARKS said...

It all makes sense in some convoluted way, doesn't it? Labeling the whole project as something beneficial to children, thereby gaining world traction in its truest, most altruistic form. After all, who would deny children?

It is a brilliantly conceived plan to get technologically challenged countries into a larger, more educated view of the world. The leaders of these countries are more than happy to subsidize the idea. They can spew whatever political propaganda they choose. (Much the same way we do here!)

The whole thing has been suspect from the start:

UN Official: Here's your Laptops boys and girls.
African Zulu Chief to his tribe: Never mind those things, let's go hunting.

Ah, I don't think so. The chief is going to grab one of these things to see what it's all about, so will every other Dad and Mom in a third world village.

Look, a Laptop is a communication device, educational tool, TV, everything rolled into a tidy little package. Sorry G, I think you underestimate your buddy's ambitions; he doesn't want to educate children, his ultimate goal is to educate the WORLD.

I'm sure his goals include the "United Nations.Net" as the vehicle behind it all.

Frankly, teaching everyone about AIDS, hygiene, agriculture, etc., plus a larger view of the world, isn't such a bad idea. It's the name "OLPC" which threw us, nothing more.

One Laptop Per Family wasn't politically/Diplomatically attractive enough to rally support on the world stage. The 'child' was marketing brilliance, absolute GENIUS. However, when you get down to grass roots, it's all the same thing really.

The guy's brilliant, and he's chosen the perfect tool/instrument to unify the world to his objective. Brilliant, simply brilliant, a credit to the Massachusetts Institute of Technology, no doubt.

SPARKS

A Nonny Moose said...

MSN's "Ahead of the Bell" is predicting another big loss for AMD when it reports this afternoon:

Sunnyvale-based AMD reports its quarterly numbers Tuesday after the market closes.

Analysts expect a loss of 66 cents per share and sales of $977.8 million, which represents a 35 percent drop from last year.

AMD is trying to shore up its finances by spinning off its manufacturing division — one of the most significant moves in the company's 40-year history — in a joint venture with the Abu Dhabi government. That deal got shareholders' OK in February.

However, I think AMD got around $800M from the UAE, so maybe it'll be a wash...

Anonymous said...

AMD Q1
Net Loss: - $416Mill (when was breakeven again?)
EPS: -$0.66/share

Split apart AMD was -$189Mil, so they have managed to offload a little more than 1/2 the operating loss with the foundry arrangement.

Gross Margin was 43%, but there was a +5% due to some inventory corrections (so ~38%).

SPARKS said...

I believe that's 9 (nine) consecutive quarterly losses, totaling nearly 7 billion dollars.

That's an average of 3/4 billion per quarter.

Ouch.

They need another cash infusion. Is there anything else they can spin off, ATI perhaps?

SPARKS

SPARKS said...

The hits just keep on coming for OLE' AMD. I can recall when our very own clairvoyant 'G' predicted AMD's un-sustainability going to quad core only. It seems like years ago.

In fact, it was.

How they can make any money at all is beyond me. They've introduced a 'new' dual core to compete with Core2 Duo. Yes, sports fans, TWO (read: 2!) disabled Pheromone cores. They have enough of these to market as a separate SKU!!! TWO bum cores, one bum core, "Let's see, the dual cores are selling so well, we'll have to disable some good quads to fill in the demand."

I thought the Triple Cripples were supposed to compete with C2D?

Selling half a die! What will they think of next to get wafer ROI, Single Slobs?

Triple Cripples
Double Troubles


http://www.fudzilla.com/index.php?option=com_content&task=view&id=13286&Itemid=1



SPARKS

SPARKS said...

Hey "G"!!!!

Get a load of this!!!!

http://www.fudzilla.com/index.php?option=com_content&task=view&id=13269&Itemid=1

Man, the DEMS are in a pickle now!

I AM LMAO!!!

What say you?

SPARKS

Tonus said...

"Net Loss: - $416Mill (when was breakeven again?)".

Next quarter.

A Nonny Moose said...

Anybody have any idea on how i7 sales vs. P2 sales are faring? I see some fanbois are claiming i7 has only sold 500K or so since Nov. 17, whereas Otellini said over 1M shipped by the end of Q1. No word on Phenom 2 sales that I can dig up, other than some mysterious mobo makers according to Fudzilla...

Anonymous said...

Sparks, it's a tough one - I suspect the Dems will cave on their union principles and do a "mostly" union job arrangement. For those not pasting the link, there is a bit of brouhaha over whether the GF NY fab is a public works project (since they are getting 1.2Bil of PUBLIC money), and if it is, they have to use all union labor - the plan is to use 'some'.

The problem is the Dems know they have the unions in their pockets (esp in NY where the state is so Democratic now, they don't have to worry as much about fundraising), so my bet is they totally cave on their theoretical 'union principles' knowing they will still get the support, and they will try to weasel out by justifying it with saying 'technically' they cannot force it. With many politicians, ESPECIALLY NY Democrats, principles are merely an inconvenience. What you will be able to find out in this scenario is if any of the Dems truly do have principles, and which ones pretend to in order to get money and votes.

Of course no one seems concerned that GF is incorporated out of the Cayman Islands (as opposed to the US), largely to avoid taxes. I must have missed the Democratic outrage over corporate loopholes and tax avoidance on this deal... instead it was a rubber-stamp transfer of the check from AMD to the Middle East (or the Cayman Islands?)

InTheKnow said...

There is some guy out there claiming to be a tech analyst who has run the numbers and "proven" that Intel is losing money off of Atom. The basis of his analysis is that Intel's 45nm factory network is fully loaded (I'm not sure what planet he is living on. In case no one has noticed, we are officially in a major recession), so every wafer devoted to making Atom is a wafer that isn't going to be Penryn (I'm not exactly sure where the Nehalem wafers are coming from either). Since Penryn has a higher ASP, it must make more money for Intel. Therefore, every Atom wafer is taking money out of Intel's pocket.

So I decided to run the numbers. I'll start with his numbers for Atom. He claims that Atom gives 2436 die per wafer. I put the dimensions into a die calculator and played around with the yields and got 2435 good die per wafer. I'll call that close enough. Incidentally, that works out to a 94.2% yield.

The analyst then goes on to state that the yields will be the same for Penryn. His rationale is that the process, tools and factories are the same. What he has missed is that yield is dependent on die size. The argument should be that the defect density is the same.

He also is reported as saying that Penryn gives 660 die per wafer. I have no idea how he comes up with this. Plugging numbers into my handy dandy free public source die calculator, I get 573 die per wafer max. If we give him the benefit of the doubt and use the same defect density we used for Atom, we get 445 good die per wafer. The way I learned to do math, that gives a 77.6% die yield. Note that I haven't even tried to estimate parametric yield loss in all of this.

So where does this leave us? Our intrepid analyst assumes a selling price of $29 per chip for Atom. He assumed $279 per Penryn chip. A quick look at Newegg shows the low end Penryn (with 6M cache) actually sells for $167.99. I'll use this number since it compares well with the low end price for Atom. These are the retail prices, not what Intel gets, but I'll call it good enough.

So Atom gives us 2435 chips per wafer at $29 per die. That is $70,615 per wafer. Penryn gives us 445 chips per wafer at $168. That is $74,760 per wafer. Net result is that an Atom wafer earns Intel all of 5.5% less per wafer.

He is claiming that Atom is responsible for much larger losses to Intel's margins than this. After looking at some real numbers, I suggest he should look for another argument, if not another line of work.

Disclaimer: I've completely left out any analysis of parametric yield losses, which should be worse for Penryn due to its larger die. I've also left out the increased packaging costs of Atom due to the larger number of die that need packaging. There are probably at least a dozen more bad assumptions in these numbers, but I wanted to demonstrate that the purported analyst had not done his homework, so I used his methodology.
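For anyone who wants to play with the numbers themselves, here's a sketch of the methodology: a standard gross-die-per-wafer approximation plus a simple Poisson defect-yield model (yield = exp(-defect density x die area)). The die areas (~25 mm^2 for Atom, ~107 mm^2 for a 6M dual-core Penryn) are approximate public figures, and the edge-loss term is one common textbook variant, so expect outputs in the neighborhood of my numbers rather than identical:

```python
# Gross die per wafer with an edge-loss correction, plus Poisson yield.
import math

WAFER_DIAMETER_MM = 300.0

def gross_die(die_area_mm2):
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def good_die(die_area_mm2, d0):
    # d0 = defects per mm^2; Poisson model: yield = exp(-d0 * area)
    y = math.exp(-d0 * die_area_mm2)
    return int(gross_die(die_area_mm2) * y), y

# Back out the defect density that gives ~94.2% yield on the small Atom
# die, then apply that same defect density to the big Penryn die.
d0 = -math.log(0.942) / 25.0
for name, area, price in [("Atom", 25.0, 29), ("Penryn", 107.0, 168)]:
    n, y = good_die(area, d0)
    print(f"{name}: {n} good die/wafer ({y:.1%} yield) -> ${n * price:,} per wafer")
```

The key point survives any reasonable tweaking of the inputs: the same defect density that barely dents the tiny Atom die takes a real bite out of the much larger Penryn die, so the per-wafer revenue gap all but disappears.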

SPARKS said...

"Sparks it's a tough one - I suspect the Dems will cave on their union principles and do a "mostly" union job arrangement."....et al.

I'm in total agreement with you. The "mostly" will consist of the union organized companies that have the manpower, skills, infrastructure, and experience to deal with a project of this size. That would mean Steelworkers, Plumbers, Boiler Makers, and my favorite of course, Electricians.

However, since this is funded in part by the State, it will be structured according to State mandated Prevailing Wage Laws. This is for Saratoga County:

http://wpp.labor.state.ny.us/wpp/viewPrevailingWageSchedule.do?county=8

I'm wondering if the deal AMD made with NYS is binding on GF. They could conceivably bring in foreign workers in lieu of the local work force. Wouldn't that be a gas!

All said, since GF has an issue with the 'high cost of union labor,' I suppose their pockets aren't as deep as most think. Well, wait till they get to the cost of specialized tools!

Further, do they think they can get a state-of-the-art FAB built on the cheap if they nickel-and-dime labor so early in the game? What could they save, 10 maybe 15 million a year? Peanuts. AMD loses over twice that a month!

What will the future hold in store as they go forward? This certainly is not a huge pile of rocks in the ocean shaped like a palm tree.

These are the guys going up against INTC, TSMC, and Samsung? Man, welcome to the big leagues; are they in for an education!

I think it will be comical, and I don't think the Paula's have a clue how complicated this 'little' undertaking is. High Tech ain't cheap!

(ITK, nice analysis!)

SPARKS

Anonymous said...

"There are probably at least a dozen more bad assumptions in these numbers, but I wanted to demonstrate that the purported analyst had not done his homework, so I used his methodology."

Didn't seem too bad, especially as you have mentioned you are probably putting Penryn yield in a favorable light (which would only make the analyst's argument that much weaker).

Most people don't want to do the homework of top line (revenue) vs bottom line (earnings)... they think smaller ASPs = always bad, without factoring in unit production cost. The other really heinous assumption (which you mentioned) is that Intel is capacity constrained on 45nm, and thus any Atom (or "X" # of Atoms on a die size equivalency) is a lost Penryn sale or lost Penryn production. This is not the case, BOTH from a sales point of view and from a production point of view.

The other thing to factor in is the chipset coupling - if you (wrongly) assume a 1:1 attach rate, a wafer full of Penryns sells a lot fewer chipsets than the accompanying Atom wafer. Granted, the Atom chipsets are older, but you are talking >4X the chipset volume.

I also suspect the $29 ASP is on the low side. Especially the faster Atoms - they are selling for well over $50 (which is kind of scary when you consider what AMD gets for their Athlons).

Anonymous said...

Never believe an industry analyst!

Anyone that knows their shit wouldn't be looking at the industry from the outside; they'd be working IN the industry.

If they used to work in the industry and claim to know their shit, you know they don't - if they did, they wouldn't have left. Either too stupid to keep the pace, or too lazy.

WTF is the deal with AMD and its huge loss? Even after spinning off the Foundry they are still losing their shirts. Then Dirk says he can't see the end. The reason he can't see the end is because it's still in free fall at AMD. The only end he sees is his own company's.

AMD is finished

Anonymous said...

"WTF is the deal with AMD and its huge loss? Even after spinning off the Foundry they are still losing their shirts."

You can't possibly be this dense - the foundry + AMD is the equivalent of the old AMD (with a cash infusion from Abu Dhabi); why would the profit/loss change short term when the business situation is the same? A cash infusion on a deal that closed a little over a month ago could not possibly change the P&L.

If/when PC industry picks up or GF picks up some new customers things might change, but I don't see how anyone with half a brain could expect things to change short term. Even the analysts were predicting significant losses, so I'm not sure what you are surprised or talking about.

SPARKS said...

"If/when PC industry picks up or GF picks up some new customers things might change, but I don't see how anyone with half a brain could expect things to change short term. Even the analysts were predicting significant losses, so I'm not sure what you are surprised or talking about."

Many years back, I had a very good friend who grew up and lived in a small hamlet in Connecticut. The name of the town was Darien. Being young and naive I had absolutely no idea places like this existed. They defined the pinnacle of American social hierarchy.

Most of the homes there were modest for the owners' social position and relative income. Low key was the order of the day. Any ethnic background whose lineage could not be traced back to the Mayflower, in one way or another, need not apply for residence. They did have one family of Italian descent who was accepted as a member of the community. He was quite successful with his private enterprise; after all, they needed SOMEONE to pick up the garbage. He was not, however, allowed membership to the country club. So he built an elaborate playground/recreational facility in his backyard.

As for myself, being rather successful in my own right, educated, kind of flashy with my exotic sports car and wristwatches, I was quite the novelty. They also knew I was looking out for my buddy who lived in a rat hole in the East Village of Manhattan. He also worked for me. As an Italian American, I knew my place. (Besides, the women there loved me.)

Something occurred to me during my visits to Darien. It seemed like every other family or so had some odd-behaving, weird family member living or tucked away in the attic. These characters were straight out of a Woody Allen movie.

Every so often, this individual would venture out of his or her social refuge and make an unexpected appearance. They would make some innocuous, off beat, off the wall comment, and then retire back to their private sanctuary.

It was obvious they were loved and accepted as family members and, surprisingly, very intelligent. But they were treated firmly by the dominant household patriarch to keep them in an acceptable social order, lest they become overly aggressive and unmanageable.

I love ya LEX, for your unvarnished loyalty to Intel, but as Dad said, its time to go back up stairs.

SPARKS

Anonymous said...

"SHOCKING"Indeed, Phenom II proves to be better performer in high-resolution gaming.

Source:
www.Techspot.com

Anonymous said...

And from the same article in the conclusion section we get:

"Back when we tested the original Phenom II X4 940 and 920 processors, we were impressed by the gaming performance improvements that these offered over the older Phenom X4 9950. The Phenom II X4 955 was very similar in this respect; although gaming benchmarks can be somewhat misleading due to GPU limitations, the Phenom II X4 955 is a very competitive gaming processor."
And even more telling we get:

"The subtotal cost of a Phenom II X4 955 system using a basic setup (Phenom II X4 955 = $245, 4GB DDR2 = $40, and AM2+ motherboard = $100) could set you back less than $400. Building a Core i7 920 system will cost at the very least $500 (Core i7 920 = $245, 3GB DDR3 = $60, and LGA1366 motherboard $200). That said, for the extra $100 or so, the Core i7 920 processor was considerably faster in a number of real-world tests.

While it might make more sense to go for the Core i7 920 option if you are building from the ground up, the Phenom II X4 955 offers existing AMD users with an appealing upgrade path."
So this is a great deal for existing AMD owners that can upgrade, but maybe not so hot if you are buying a new system.

What does this prove again?

SPARKS said...

"Shocking???"

Predictable!

If you really looked at the test results, when the Core i7 chips were HANDICAPPED with DDR3-1066, the Pheromone won nearly every time. When the i920 and i940 were given bread-and-butter DDR3-1333, THEY led nearly every benchmark.

The starting point for any Nehalem setup is 1600. An XE gets memory TWICE (2X) as fast as the memory 'chosen' in the article.

Not one test included a 965XE with anything BUT DDR3-1066, and it STILL won every benchmark. What a joke.

I'll give you my 2 cents: buy a clue. The chips run best with faster memory, like 1600 to 2000, not the pasty GARBAGE they handicapped the entire Intel line with.

The article shows how important fast memory is to the Core i7 line, nothing more.

Shocking? I don't think so, cupcake.

SPARKS

InTheKnow said...

I'll follow up my anonymous post (too lazy to log in from another machine) with this little jewel. I loved this part.

"Checking today's prices at Newegg.com, we were able to put together a system.... That gave us a subtotal of $879.95, which doesn't give us much breathing room for extras such as power supply, case, lights, cooling, and cabling if our aim is to stay under $1,000.

On the other hand, what would our price be for an Intel system that's comparable in performance to the new Black Edition.... Tom's Hardware tests reveal that the closest performer in the Intel category is the 2.83 GHz Core 2 Quad Q9550....

Right now, Newegg is selling an Asus motherboard with Intel's P45 chipset, for $99 after rebate. That more than compensates for the CPU price premium, since with this motherboard we can still use DDR2 memory. I can keep the other Dragon components and still save ten bucks, paying $869.94."

That is $10 cheaper than the AMD system. At least as far as this article is concerned, AMD's value proposition only brings price parity with the Intel system. So much for all the claims that AMD is seriously undercutting the price of comparable Intel systems.

I also find myself wondering why gaming is suddenly so important. What happened to all those "real world" applications?

AMD has fallen behind on those and now they don't matter.

SPARKS said...

Whoops! My apologies, ITK.

SPARKS

Tonus said...

I think that what is "shocking" is that out of the spread of pages from 7-13, this person chose page 11. Page 11 shows us that in tests that are GPU-bound, the scores for several CPUs fall within the margin of error.

Now consider the benchmarks on pages 7, 8, 9, 10, 12 and 13, where the tests allow the CPUs to differentiate themselves, and notice which CPUs win every test, and which CPUs fall towards the middle of the pack.

Then consider that this is the second review of the Phenom II where the best OC was 3.7GHz, and that as pages 12 and 13 show, even a 3.7GHz Phenom II doesn't beat the Core i7 in their benchmarks.

And ask yourself just how much less you are willing to pay for a CPU that loses every benchmark except those which are GPU-bound, and which doesn't OC very well when you run out of liquid nitrogen, and that doesn't perform so well even when overclocked.

At the beginning of the review, the reviewer was impressed that AMD can offer such value at this performance level. This statement must present a bitter irony for AMD. Does this reviewer ever sit there, wondering why it is that Intel turns a big profit nearly every quarter, while AMD hemorrhages money at the same time, even in "good" quarters? No, probably not.

InTheKnow said...

Just to be clear, the SHOCKING post wasn't mine. Mine was the anonymous reply that followed.

InTheKnow said...

"I think that what is 'shocking' is that out of the spread of pages from 7-13, this person chose page 11."

Yeah, that is why I was asking why high res games are now the only benchmark that matters. I don't think you can get any more "real world" than the excel tests, but those are conveniently ignored.

At the risk of being redundant, I'll post this statement from the conclusions of that same article again.

While it might make more sense to go for the Core i7 920 option if you are building from the ground up, the Phenom II X4 955 offers existing AMD users with an appealing upgrade path.

What is "shocking" to me that an apparent AMD fan would stoop to Intel's level and engage in selective benchmarking. :)

SPARKS said...

"Just to be clear, the SHOCKING post wasn't mine. Mine was the anonymous reply that followed."


ITK

Oh. Then save the apology for a future or past transgression (that may have escaped me.) You know how much I admire you.

There is another review that is a bit more objective. It also includes my beloved QX9770, which did very well, I might add.

What a buy I got a YEAR ago for $1500.

"Only now, Luke, do realize the power of the Dark Side"

For a year, it swept away all, at stock speeds!

%D

http://www.guru3d.com/article/amd-phenom-x4-945-and-955be-processor-review-test/

SPARKS

Tonus said...

ITK: "At the risk of being redundant, I'll post this statement from the conclusions of that same article again.

While it might make more sense to go for the Core i7 920 option if you are building from the ground up, the Phenom II X4 955 offers existing AMD users with an appealing upgrade path.
"

I knew those were two different people. I just assumed the second one was Guru. :)

re: your comment I quoted here- potential upgraders should also keep in mind that if they do get a Phenom II as a drop-in replacement, they are unlikely to see scores as high as in those benchmarks, since they were using DDR2-1066 and DDR3-1333. Many older AM2+ systems will likely be using DDR2-667 or DDR2-800 RAM, and the rest of the components may also fall short of what they used (1GB 280GTX, SATA II, etc).

This doesn't mean that it's a bad deal as a drop-in replacement- on the contrary, it should be an excellent investment. But it's a bit disingenuous to show benchmarks with new hardware and then tell the reader that the CPU is best purchased as a drop-in upgrade. Some people might call that FUD. :)

Anonymous said...

"re: your comment I quoted here- potential upgraders should also keep in mind that if they do get a Phenom II as a drop-in replacement, they are unlikely to see scores as high as in those benchmarks"

And therein lies the classic bait and switch... they apparently have learned from IBM (on the process side) on this front. Talk up the drop-in, but do the reviews on the latest and greatest HW.

Also, if you drop in an upgrade you could lose some of the other features as well. I think the split-plane voltage support (to lower Phenom II power by splitting the voltages to the cores) was also a change somewhere along the way of the AM2, AM2+, AM3 mobos (not sure where).

Look reviewers are going to do reviews with the latest HW and I have no problems with that (just like I'm not sure why people objected to the core i7 reviews with DDR3 while the Phenom reviews use DDR2 which at the time was all they were capable of)

I do have several problems with the linked review. The i7 965 review was with DDR3-1066, while even the i7 920 and i7 940 reviews used DDR3-1333! (why not just "drop" the i7 965 into one of the boards they used for the 920 or 940?)

They talked about "drop in" of the Phenom II - I have no problems with this, but if you are going to talk this up, drop it into an older setup (in addition to the new setup) and see how it works and if there is any difference. Or at least point out some of the potential drawbacks of putting it into an older system.

I also find it interesting that when comparing power it was the Phenom II with 2 DIMMs of DDR2, while the i7 965 had 3 DIMMs of DDR3.

SPARKS said...

I guess MCM is a good idea, after all.

http://blogs.wsj.com/digits/2009/04/22/
amd-no-longer-feels-the-need-to-go-native/

hyc said...

Sorry to barge in on a different topic, but I just read that GM is trading its bond debt for stock; the US government and the UAW will end up owning 89% of GM as a result.

http://www.dailytech.com/article.aspx?newsid=14977

I'm not so sure about the government involvement, but I see the UAW side as a potentially good thing. Historically, management has been bound to serve the shareholders, at the expense of the work force. Now, the work force and the shareholders will be one and the same. Will they be smart enough to quit their short term greed and work for the long haul? Or will they just implode in a frenzy of internal pillaging?

Tonus said...

I think employee ownership is great, anything that gives them a stake in the company's success is good. I'm not so sure about union ownership though, that could bring them uncomfortably close to management and at cross-purposes with the workers.

I definitely do not like the idea of government having any stake in a company. For one thing, as badly as they manage the economy, I don't want them running a company. Second, government has access to resources that can make for a very messy situation. They can write or change laws, and they spend our money via taxation. That's way too much potential power to abuse in favor of a government-owned company in a capitalistic environment.

The GM situation is a mess, and getting worse. The more the government invests itself and the UAW, the harder it will be for them to allow GM to fail if that is what it needs to do. It's like what we're doing with the economy-- propping it up instead of letting it self-correct. At some point you can no longer prop it up, and the fallout is much, much worse because we invested so much in preventing the normal course of events from occurring.

Anonymous said...

"I'm not so sure about the government involvement, but I see the UAW side as a potentially good thing."

Basically the bondholders, who were pumping money into the endless pit known as GM, get screwed for doing so, and one of the groups in large part responsible for the slide now gets rewarded with ownership? How again is that good? It may be good going forward, but it screws the people who put the money into GM to keep it afloat and rewards the union.

This would be like Globalfoundries getting its share cut way down and that stock/ownership being HANDED to AMD management.

There is a reason the US automakers are not competitive with the rest of the world. People blame the economy or oil prices forcing movement to hybrid/smaller cars, but anyone with half a brain knows these companies were sliding in BOOM times and were merely keeping afloat on the fat margins of the SUV business. The pensions, medical benefits, other benefits and hourly wages are still not competitive - while they are better, it's not good enough, and there is no innovation coming out of Detroit to justify higher car prices to offset the higher costs.

This is just the beginning of US subsidization of the US auto companies and the first step in gov't by central planning by a socialist majority party. Banks, soon US autos, next will be various 'green' technologies.

At some point you have to let the forest fire burn in order to clear out the underbrush and overgrowth and allow a new forest to grow in its place in a sound manner. You can keep sending planes to dump loads of water (money) as a stopgap measure - but if the fire doesn't go out, it's just a waste of money in the long run.

I completely agree with Tonus - this is just setting up way too many conflicts of interest and putting elected representatives (who are largely lawyers these days) who have no idea how to run a business in charge of a massive industry with an endless bucket of cash (US taxpayer money) to pour into it. This could be worse than Ruiz and AMD, to put things in perspective. Remember market share at all cost... think hybrid at all cost, or fuel cell at all cost, or [insert popular green tech of the day with the biggest political contribution] that the auto industry will become.

GM has to be allowed to stand or fall of its own accord - the bankruptcy laws are so screwy in this country you will see a Chapter 11 (reorg) and not a liquidation (Chapter 7)... you would not see an end to the auto industry, as politicians are trying to scare people into thinking. It would force the union to the table with minimal leverage, and well, the Dems can't have that from one of their largest constituencies. And make no mistake, this is not about jobs, it's about keeping the UAW empowered (and indebted to the Dems).

SPARKS said...

Toms Hardware has a review of new SSD's. More importantly, the extensive testing and comparisons between all the major players drives, INTC included, make this article a bible for the SSD current state of affairs.

I'm sure many of you have had your fair share of ipecac for the number of times I have said you get what you pay for. Again, this is no exception. All said, the INTC drives are absolutely in a high end class of their own, and I'm lusting for the expensive little bastards as RAID 0 boot drive(s).

http://www.tomshardware.com/reviews/256gb-samsung-ssd,2265.html

BTW: Has anyone got a handle on this irrevocable fragmentation issue without resorting to a pure wipe and complete reload to regain optimal performance over time?

SPARKS

InTheKnow said...

"BTW: Has anyone got a handle on this irrevocable fragmentation issue without resorting to a pure wipe and complete reload to regain optimal performance over time?"

Intel issued a firmware update to address this. Sorry, but I don't have time to look for the link this morning.

As is usually the case, the problem gets lots of press and the fix just gets a mention back on page 14 somewhere.

Where are all the paid Intel pumpers when you need them? :)

SPARKS said...

ITK,

I read about the INTC firmware 'fix,' but as you said, the performance results of the patch haven't been fully covered yet. Rather, if they have, they've been relegated to the ass end of some obscure website somewhere.

Speaking of pimp pumpers, Ole' Charlie at the INQ is reporting a leak that INTC's newest G55 IGP has the graphics community pissing in their pants! I'm loving every minute of it. I said it before, and I'll say it again, INTC is serious about graphics this time around, and this ain't no i740 cluster f**k. (That's your cue, LEX)

Imagine Charlie D. giving INTC accolades!?!?! God, he really must hate NVDA's guts!

http://www.theinquirer.net/inquirer/news/992/1051992/intel-caught-graphics-shocker

SPARKS

SPARKS said...

ITK, all,

I found this regarding the X25 fix. They've done well, and these things are really looking attractive.

http://www.pcper.com/article.php?aid=691&type=expert&pid=1

SPARKS

InTheKnow said...

I see you found the article I was going to link. I liked the fact that it was by the guy who wrote the original article that started the whole firestorm, and that his attitude towards Intel was positive at the end of the whole experience.

Imagine Charlie D. giving INTC accolades!?!?

The real killer is that in the comments Charlie is being accused of being pro-Intel.

Some people just don't get out often enough. :)

InTheKnow said...

I also wanted to post this comparison from CFD online.

"i have some results for Nehalem
unfortunately for this test i use only small model (it use about 2 gb ram)
now i waiting test with more RAM space

testmodel1 CFX11sp1
8 core Intel QuadCore Xeon 5345 2.33GHz, RAM 16GB - 2610 s
8 core Intel DualCore Xeon 5160 3.0GHz, RAM 16GB - 1380 s
8 core Intel QuadCore Xeon 5472 3.0GHz, RAM 32GB - 1620 s
AMD SHANGHAI - 1690 s
Intel Nehalem - 873 s

testmodel1 CFX12 p12
AMD SHANGHAI - 1559 s
Intel Nehalem - 770 s

summary: 1620 (Xeon 5472) / 873 (Nehalem) = 1.85 times faster"
The poster later says he doesn't think the advantage would be so large with a larger job. My point in posting this is simple. I wanted to point out that in a "real world" app like a CFD program Nehalem is a killer.

Tonus said...

ITK: "The real killer is the comments Charlie is being accused of being pro Intel. ".

Yeah, typical knee-jerk reaction. AMD fanatics have become very paranoid as of late. Many of them are in full "with us or against us" attack mode. Just ask kaa...

"I wanted to point out that in a "real world" app like a CFD program Nehalem is a killer.".

Well then that application must be optimized for Intel processors!

Which is a crazy idea- imagine a software developer optimizing his applications to run best on a CPU that is created by a company that holds approximately 80% of the overall market? That would be like writing software primarily for Microsoft Windows! Why would anyone do something so downright insane???

A Nonny Moose said...

And now for something completely different...

Anybody here use non-Comcast VOIP with Comcast as their ISP? If so, do you detect any packet throttling going on?

When I first moved to my new house some 18 months ago, Comcast gave me all sorts of trouble about moving my account from the old address (about a mile away) to the new one, so eventually I gave up on them and went with Verizon as my ISP. Unfortunately Verizon had promised me 3 Mbps service on their DSL (FIOS not yet available), but only delivered 1.5. So I went back to Comcast as a "new" customer and got 6Mbps for 18 months at $30 a month. Well that deal has run out and now the price is nearly double. Unfortunately, Verizon has dropped their DSL service to my area, and still no FIOS on the horizon, so I'm stuck with Comcast.

So I have decided to drop my wired phone line from Verizon ($28 a month for the most basic package) and also my LD carrier (that I used because they charge 5.5 cents per minute to Vietnam, and my wife spends 15-20 hours a month calling her family who live there). I'm paying about 25% of my LD charges on the "FUSF" and "carrier cost recovery" charges anyway.

Anyway, PhonePower VOIP seems to be the best alternative since they only charge 4.9 cents per minute to Vietnam, and there's none of the extra charges, plus free unlimited calling to USA & Canada. So I was about to pull the trigger and switch, when I ran Phonepower's QOS app to measure the quality of my broadband connection via Comcast. Turns out the upload & download speeds, jitter & packet loss are all in the excellent zone, but there's around 50% packet throttling going on (ran the test twice) so that VOIP sound would be pretty choppy.
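(For reference, the jitter number these QOS tools report is typically a running average of the differences between successive packet delays, along the lines of RFC 3550. A sketch with made-up RTT samples - in practice you'd collect them with ping or a UDP echo loop:)

```python
# RFC 3550-style jitter estimate: J += (|D| - J) / 16 for each new
# inter-packet delay difference D. RTT samples below are invented.
rtts_ms = [22.1, 23.0, 22.4, 48.9, 22.8, 22.5, 61.2, 23.1]

jitter = 0.0
for prev, cur in zip(rtts_ms, rtts_ms[1:]):
    jitter += (abs(cur - prev) - jitter) / 16.0
print(f"estimated jitter: {jitter:.1f} ms")
```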

So I contacted PP's tech support and they informed me that they have had similar problems with Comcast - Comcast of course wants to sell their own VOIP, which is in a higher band than their regular broadband service, and ridiculously expensive international calling to boot. I know, as my brother uses Comcast's VOIP which has a special modem tuned for the higher band.

PP's tech support suggested that I try their service for a month or two to see if quality would be acceptable, but not to inform Comcast since that would exacerbate my problems. Also, I should take some screenshots of the packet throttling going on and keep in case I decide to complain to the FCC. In other words, despite this affecting PP's business model, I'm on my own :).

There's a blog entry over on Overclockers about "deep packet sniffing" being used by some ISPs in an effort to block P2P and other activities frowned upon by RIAA, MPAA etc; I'm just wondering if Comcast is doing that already with their service.

So if anybody else uses Comcast with a non-Comcast VOIP service, I'd like to hear from you as to how the quality is, thanks.

Tonus said...

Update- I set the 920 to 175 FSB and 1.225v (1.82v for PLL voltage) and it ran at 3.5GHz for a week with two issues-

1- it would apparently shut down at some point if I left it on for a few hours without using it (such as leaving it on overnight). I had the sleep settings off so I don't think it was due to any attempts to go into sleep/hibernate. When I started it up again, it would give me the "windows did not shut down properly" dialogue, but would boot up normally. It had no problems at all in normal use until this afternoon, when...

2- it blue screened on me when I was using Photoshop. I rebooted and set the speeds back to default. I'm not sure why it decided to BSOD at that moment, I've used Photoshop for a week without incident (and with numerous other programs running, as usual).

I still haven't upgraded the HSF on it, but I may do that soon. With a very minor voltage bump and the stock HSF I've gotten almost 900MHz out of it, and aside from that one blue screen at 3.5GHz, it's been stable during use. I'll probably leave it at stock speed until I upgrade the HSF and then I'll do more research into the settings to see how far I can push it and have it remain stable during normal use and extended non-use.

SPARKS said...

Tonus, good buddy, I thought we've been through the stock heat sink issue. I did say immediately! Did I not? Allow me to be a bit clearer. Get rid of the son-of-a-bitch!

That hunk of extruded aluminum will hold heat like the bottom of a 'Tools Of The Trade' nine-inch fry pan! Heat pipes, plenty of copper fins, and some good grease are in order here. The core's internal thermocouple will say 'NFW', just like the Prom Queen on a first date!

GET THE LATEST BIOS UPGRADE, and flash away! ASUS has a nifty little utility that makes it as painless as it's ever been.

Adobe is using ALL the cores, bet your life on that one. 1.2 V ain't squat. The old girl's thirsty for some juice, but you ain't gonna do it without a good cooler.

Check out Task Manager during your runs to see who's using what, and how much.

I'm guessing here, but I would say your overnight BSOD may have to do with memory timings/voltage. This kind of issue can be a real PITA to nail down. Give 'em all the juice they can handle; after all, they are expendable(s) when OCing. You can't spend enough on good memory. It just eliminates one PITA variable from the mix.

Sandra Mem benchmarks will expose any weak sisters.


WATCH YOUR CORE TEMPS 24/7!!!!!

SPARKS

SPARKS said...

Randy Allen is gone. The ATI we all knew and loved is gone. I wasn't too far off when I asked 'What's left to spin off, ATI perhaps?' in an above post. The fact of the matter is, they can no longer write down the 5.4B 2006 purchase. ATI is history.

This is a very interesting article and it speaks volumes.

http://www.eweek.com/c/a/IT-Infrastructure/AMD-Merges-Chip-and-Graphics-Businesses-753664/

On a personal note, I have never purchased an NVDA product.

Times are surely changing.

SPARKS

InTheKnow said...

Well, according to this story from the Inq, Globalfoundries is beating the APM drum.

I have to admit that I find myself torn on the whole APM thing. On the one hand, it smacks of a method of trying to fix mistakes that result from an unstable process or one with a very small process window.

On the other hand, assuming a stable process with a decent sized process window, it seems to address the idea of a loss function (which quantifies how much variation from target impacts the final product) by putting things back in the center of the process window.

[For more on the idea of a loss function, see this site]

The thing that will really convince me that APM has value though is to see how well it allows an identical process to transfer from one fab to another. If it can do this better than Intel's copy exactly methodology, then it is a winner. If it can't, then I don't think it is worth the added complexity.

SPARKS said...

ITK, both you and Orthogonal gave me a fairly descriptive explanation of how tools can be configured and reconfigured on the fly with the "Hot Box" scenario a few posts back.

The way I see it, from a layman's perspective, isn't this APM thing basically what INTC does? I'm sure there are tons of subtle differences which make INTC's approach/methodology differ from GF/AMD's. If so, what are they?

Frankly, the way I'm reading it, this seems like another smoke-and-mirrors campaign designed to counter this little tidbit.

http://www.timesunion.com/AspStories/story.asp?storyID=798724&category=SARATOGA&TextPage=1

SPARKS

Anonymous said...

"The way I see it, from a layman's perspective, isn't this APM thing basically what INTC does?"

Actually EVERYONE does it; it's just a matter of degree. AMD hypes up how dynamic their system is and how many inputs it takes in and sifts through as if this is a GOOD thing, but for many folks in manufacturing this is symptomatic of an 'iffy' (to get all technical on you) process and something you really don't want. I would see the value of this potentially more in the development and startup phase (if it is as good as advertised, it should accelerate the learning curve), but once in manufacturing you really don't want a process that drifts from tool to tool and wafer to wafer and requires significant adjustments.

Again this is what separates a good manufacturing company that has good engineering and an engineering company that thinks it is a good manufacturer.

Ultimately the proof is in the pudding and AMD/GF has yet to offer up one specific tangible benefit of APM over other systems. They describe (a little) how it works and why it SHOULD be great, but never get to the bottom line impact (cost savings, time to market, better yield).

If I had a car that had wheels that drift out of alignment over time and an automated system that adjusted the steering wheel to compensate for that, you can call that advanced... but I would prefer to fix the root cause, not just use an advanced system to account for drift.

While in semiconductor manufacturing it is impossible to eliminate all drift, you try to make a bigger process window so that even if something drifts it remains in the 'good' process window. You can crank down the window and tune the process continuously to maintain it - but ultimately that is a dangerous proposition, as you are doing it to multiple, inter-related process steps - sooner or later, with enough volume, the system will break. It might work on a single fab level for a while, but I wouldn't bet on it long term.

InTheKnow said...

Sparks, this isn't quite the same thing as what you would do with a hot box for a new product.

Let me start with some basics.

All processes have some degree of variation due to the capability of the equipment. For example, your typical thermocouple is accurate to about 0.1 degree. If your process is highly temperature dependent, then the difference from that 0.1 degree variation could produce some uncontrolled variation from one run to the next.

The process window that you have heard people talk about is one way to look at whether or not the degree of variation is acceptable. If you keep each individual process within some (process specific) limit then the final product will give an acceptable yield. The bigger the process window, the more variability the process can tolerate.

Taguchi (the link I provided earlier) takes this concept one step further. According to Taguchi there is a function (or formula) that can be determined that will show how yield will change as you move farther from the target. So as you move away from the target there is some cost associated with this decreased yield.

What APM is attempting to do is address this yield loss due to variation by measuring the difference between the target and making adjustments in the downstream processing to bring the product back on target.

Doing this is not unique to AMD/GF. Every manufacturing company I'm aware of does this to some degree or another. The question is how extensively AMD/GF is relying on these techniques to control yield.

In theory, combining feedforward and feedback control with a large process window would give you a very high yielding process, but there are costs and risks involved with using this type of approach.

We would need a lot more information than what is ever likely to be publicly available to determine if AMD/GFs implementation is really worth it.
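
To put some rough numbers on the loss function idea, here is a toy sketch in Python (every value is invented for illustration; this is not anything from AMD's or Intel's actual systems):

# Toy comparison of a Taguchi-style quadratic loss vs. simple
# pass/fail spec limits. All numbers are made up for illustration.

def taguchi_loss(measured, target, k=4.0):
    # Cost grows with the square of the deviation from target.
    return k * (measured - target) ** 2

def pass_fail_loss(measured, target, window=0.5, scrap_cost=10.0):
    # Traditional view: zero cost inside the process window, scrap outside.
    return 0.0 if abs(measured - target) <= window else scrap_cost

target = 50.0  # say, a nominal film thickness in nm
for thickness in (50.0, 50.2, 50.4, 50.6):
    print(f"{thickness:5.1f} nm -> Taguchi: {taguchi_loss(thickness, target):5.2f}, "
          f"pass/fail: {pass_fail_loss(thickness, target):4.1f}")

The point is that a wafer near the edge of the window still "passes" but costs you something (worse parametrics, lower bin), which is the loss a system like APM is presumably chasing.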

Anonymous said...

APM APM APM blah blah blah

When there is nothing to talk about, AMD reverts back to the same old crap. Every high volume manufacturer uses some sort of feed forward. Every process has some sort of normal distribution in an ideal world. Measure the output from process X, then, where you can, adjust process X+1 to compensate if process X drifts a little to the left or right. No different than when you see your car drifting to the left on the highway and you nudge it to the right. It's common sense, but it ain't free.

You often have to expend precious time on a metrology tool or other resource to measure the wafer or lot. If a process is well controlled, you'd rather let 100 or maybe 200 wafers or more run before measuring. Your metrology tool is an expensive, non-value-added tool; it costs lots of money and slows down your processing time.

People who make noise about APM are people who have a process or product that has a minuscule process window and have to measure it way too much to get any yield.

Companies that make noise about APM are companies with a marginal process. Why do you think only AMD makes noise about this?
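
If you want to see why measuring less is the prize, here's a crude simulation (every number is invented, and the names like run/sample_every are just mine, nothing to do with any real APM system):

# Crude sketch of the sampling tradeoff: a slowly drifting process,
# re-centered only when you spend time on the metrology tool.
import random

def run(wafers=1000, sample_every=100, drift_per_wafer=0.002, window=0.5):
    random.seed(0)
    offset, scrap, measurements = 0.0, 0, 0
    for i in range(wafers):
        offset += drift_per_wafer                # the tool drifts a little each wafer
        result = offset + random.gauss(0, 0.05)  # plus normal process noise
        if abs(result) > window:
            scrap += 1                           # landed outside the process window
        if (i + 1) % sample_every == 0:
            measurements += 1                    # costly metrology step
            offset = 0.0                         # feedback: re-center the tool
    return scrap, measurements

for interval in (25, 100, 400):
    scrap, meas = run(sample_every=interval)
    print(f"measure every {interval:3d} wafers: {meas:2d} measurements, {scrap} scrapped")

A stable process (tiny drift) lets you stretch the sampling interval without scrapping anything; a marginal one forces you to measure constantly. That's the point about who needs APM the most.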

Tonus said...

Europe fines Intel $1.45bn for anticompetitive practices. Intel will appeal, but by European rules they must pay the fine and change their actions immediately. If they win the appeal they get their money back, and presumably may reinstate whatever policies were in question, but that may take years.

The article indicates that Intel is just one of many tech companies that are being aggressively investigated by the European Union for anticompetitive practices. Microsoft has been investigated and fined a few times for more than $1bn total. They're also focusing on companies like IBM and Cisco.

I can't help but get the feeling that this is more about increasing revenues for EU members and less about improving the competitive landscape (since when are socialist governments concerned about competition?). For example, instead of demanding that Intel stop certain actions that are clearly anticompetitive (i.e., providing incentives to companies if they avoid AMD products), they demand that Intel drop its entire rebate program. All that this does is raise prices for European consumers, since OEMs who no longer get a rebate from Intel will simply raise prices to make up the loss. Then again, I suppose nothing would stop the EU from demanding that these OEMs keep prices low.

I also suspect that Intel will be dealing with a very unfriendly DoJ here in the USA. The intent will be the same- not to protect consumers, but to find a way to squeeze more money out of a huge corporation in order to help fund the unsustainable spending plans that the current administration has in store. If anyone ever wondered what happens when people with entitlement syndrome are in charge, you're seeing it on a global scale. I keep hoping that if things go as badly as they are likely to go, that we can at least avoid hyper-inflation.

Khorgano said...

I keep hoping that if things go as badly as they are likely to go, that we can at least avoid hyper-inflation.

It may be too late for that. The pump is primed for it. Zero percent interest rates (negative real interest rates). Deficits piled to the moon and no end in sight for the gov't to expand its role. The only reason prices are low now is because the overall purchasing power of the market has been reduced by the credit market collapse. The gov't has been trying to supplement the loss of credit with new paper to keep the demand stable. The money supply has nearly doubled since last year. However, once the credit market fully recovers, you'll see a HUGE expansion in overall purchasing power and demand will skyrocket, along with prices. People will call it a boom, bull market, whatever, but it's just inflation. Nothing of substance.

Anonymous said...

Khorgano - spot on.

'People' argue that this will never happen because people value the dollar. The difference now is you have several logs feeding the fire.

You have the deficit which will steal more from every tax dollar. I think it is now ~$0.30 out of every dollar that is 'wasted' paying interest on debt - and this is only going to grow as the national debt TRIPLES within 10 years under Obama's "plan".

This in turn will fuel higher taxes - and that will HAVE to happen on the entities that can pay taxes (corporations and rich people). These are the entities that hire people and spend money... so if this slows, who does the spending and hiring? The answer of course becomes the gov't (welcome to socialism). The gov't already partly owns the banks and auto companies now - I don't see the exit plan on these. You start taxing businesses more and they either move out of the country or become the next auto industry (though they failed thanks to poor mgmt and out-of-scale wages).

I see Jimmy Carter-style inflation (and worse) coming. The unemployment rate will turn around and the economy will pick up a bit at the end of this year/next year. Then the years of low/zero interest rates and the massive additions to the debt will lead to massive inflation (which will lead the gov't to knee-jerk react and do a massive swing in interest rates to counteract it); as Khorgano astutely pointed out, this will be confused with growth in the early stages, and the media will be too busy falling over themselves praising our savior Obama to realize it is fool's gold.

The Fed should start increasing interest rates this year to fend this off - but it won't, because it will slow the recovery and limit the housing recovery (and we all know 2 things: everyone, regardless of qualification, is entitled to a mortgage, and owning a home has to be a zero-risk proposition and housing prices HAVE to increase), so politically we can't have that. :) I fear in ten years we will look back and say what the hell were we thinking with these economic policies?

After the stock market does its next major spurt - it will probably trade down then sideways for ~6-9 months, then you will see it pick up as the 'recovery' takes shape - it will be time to buy gold and/or commodities; inflation is a-coming.

Anonymous said...

To put things in perspective....

...if Intel were to cut the stock dividend the next 2 quarters, that would be enough to pay the EU fine and still leave a little left over (They paid out 3.1Bil in cash last year in dividends).

So apparently now Intel is forced to pay dividends to the EU and not just stockholders? Shouldn't the EU have to buy stock to get paid? :)

Anonymous said...

I can't help but get the feeling that this is more about increasing revenues for EU members and less about improving the competitive landscape (since when are socialist governments concerned about competition?).
I'm an Intel fan, but I understand the need to take action if Intel was directly paying folks to shelve AMD products (that is clearly wrong). But I do have an issue with the European ruling:
- the companies don't have to accept the 'bribes' or the rebates; how come the EU distributors are not also getting sanctioned for ACCEPTING the bribe and ACTUALLY DOING THE EXCLUSION OF AMD???
- so long as the rebate doesn't cut the prices below cost (which would be illegal in most jurisdictions); how is this significantly different from simply cutting prices?

I actually kind of hope that Intel tells Europe that they will comply with their wishes, and simply cuts their prices to the post-rebate levels and just punishes AMD in Europe while telling the EU, "look ma, no rebates and we're not below costs, now what are you going to fine us for?"

In the end, I think this is a little about trying to level the playing field and a lot about making money. I would/will feel differently when the EU goes after the distributors for their part in this 'illegal' behavior.

Next up - Google, IBM, Cisco... who knows, after Win7 comes out maybe they'll take a bite out of the Microsoft apple again? Wait, did someone say Apple? Sure, they aren't a monopoly, but they have to be able to come up with some sort of smugness fine, no?

Anonymous said...

EU fines..

The more they increase the fines, the more it becomes apparent what a bunch of losers they are.

They can fuck INTEL if they try, but in the end it matters little. INTEL still innovates and delivers more for less. The EU can talk all they want out of their ass; we are better off now with INTEL making billions than with AMD versus INTEL. Too bad, AMD fanbois, that not one penny goes to AMD. AMD products still suck, AMD technology still sucks. Let's hear more about AMD APM...

InTheKnow said...

I would see the value of this potentially more in the development and startup phase (if it is as good as advertised, it should accelerate the learning curve), but once in manufacturing you really don't want a process that drifts from tool to tool and wafer to wafer that requires significant adjustments.

While in semiconductor manufacturing it is impossible to eliminate all drift - you try to make a bigger process window so that even if something drifts it remains in the 'good' process window.

I think you are painting the nightmare scenario for manufacturing here. Your assumption is that the process is on the hairy edge of the cliff and only APM is keeping it from falling off the edge. That may be the case here, I don't know, but let's look at the other side of the coin.

If you have done your job up front and have a large process window with tools that are stable, the whole idea of APM really comes into its own. Now rather than saving wafers, you are optimizing performance by reducing the deviation from target. This is the whole essence of Taguchi's loss function. Any deviation from target costs money, even on product that is not "discrepant".

If you can eliminate some of that variation from target through the application of a system like APM within tightly controlled limits then I think that this could be a good thing. In theory, the result would be more material in the high performing bin splits. And that means more money in the bank at the end of the day.

Now having said that, APM requires a lot of automation overhead and potentially some pretty high metrology costs, so the ROI has to be looked at closely. Not to mention the risk of having the whole process tweak itself right off the edge of the cliff.

But it is my impression that as the process nodes get smaller and the slope on the loss functions becomes steeper, we will see an increase in the use of this type of approach.
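
Just to put a number on why chasing deviation pays, here's a back-of-the-envelope calculation (made-up bin width and sigmas, not real product data):

# Fraction of die landing in a "prime" bin within +/-0.1 of the target,
# as the process deviation shrinks. Invented numbers for illustration.
import math

def fraction_in_prime_bin(sigma, half_width=0.1):
    # For a normal distribution centered on target, the fraction within
    # +/- half_width is erf(half_width / (sigma * sqrt(2))).
    return math.erf(half_width / (sigma * math.sqrt(2)))

for sigma in (0.20, 0.10, 0.05):
    print(f"sigma = {sigma:.2f} -> {fraction_in_prime_bin(sigma):.1%} in the prime bin")

Halving the deviation roughly doubles the material in the center bin in this toy case, and that's money in the bank at the end of the day.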

Anonymous said...

I want to learn more about process technology ... Is this a good book? Thanks!

CMOS VLSI Design: A Circuits and Systems Perspective (3rd Edition)
by Neil Weste (Author), David Harris (Author)

SPARKS said...

EU fines.

For those who have grown to know and love me, you all know that I adore INTC. Actually, the same is true for all those who hate my guts. However, fanboy biases aside, I am an American first and I see the European business GESTAPO systematically attacking commercially successful American Companies.

Obviously, others here feel the same too.

"Next up - Google, IBM, Cisco... who knows after Win7 comes out maybe they'll take a bite out that the Microsoft apple again? Wait did someone say Apple, sure they aren't a monopoly, but they have to be able to come up with some sort of smugness fine, no?"

I couldn't agree more.

You see, this is the price of doing business in Europe when AMERICA OWNS THE MARKET. "smugness fine, no?", perfect!

Strong and dominant American companies are in the crosshairs of the EU. The Microsoft windfall, in addition to this new INTC windfall, will give them more power and more people to find/fabricate new reasons to punish successful American companies.

They are generating vast sums of money. The EU incentive to do so is painfully clear after this travesty, while that idiot in the White House is in a EURO popularity contest. Incidentally, there was not one comment from his office!

Make no mistake, this is about the Europeans not having ANY market presence in the IT industry, none. This is not about their altruistic concerns for "poor AMD" , they lost billions on that debacle in Dresden, they are just evening the score.

Graphics, nothing.
Super Computers, nothing.
Networking, et al. nothing.
CPU's nothing.

At the risk of sounding like a pumped up union worker at a pep rally, it is time for all of us to think America FIRST and buy American, before it's too late. Buy American, even if it's a goddamned coffee maker.

This is about the EU against American IT market dominance abroad.
And this is gospel.

SPARKS

InTheKnow said...

I want to learn more about process technology ... Is this a good book? Thanks!

This book is geared towards circuit design.

If you are interested in the manufacturing process I'd recommend:

"Handbook of semiconductor manufacturing technology" - by Yoshio Nishi, Robert Doering

It is the single most comprehensive volume I'm aware of.

Anonymous said...

"Handbook of semiconductor manufacturing technology" - by Yoshio Nishi, Robert Doering

It is the single most comprehensive volume I'm aware of.
ITK, Thanks!!

Anonymous said...

A good book to take you from Moron to Novice:

Silicon Processing for the VLSI Era by Richard Tauber

Go study hard boys.

Tonus said...

HardOCP comes to a very different conclusion regarding gaming performance on some current and recent AMD and Intel CPUs. From the conclusion:

"The Core i7 920 gives all our processors a run for the money, producing higher performance across the board. The Core i7 920 provides the best performance in every game tested. You will have to pay for that performance though compared to the Phenom II CPUs though. When you take the Core i7 920 to 3.6GHz though, nothing can touch it. Outside of gaming, the Core i7 have proven to be the most capable desktop processor on the planet in terms of applications that can actually harness it HyperThreading and multi-threading abilities."

Their advice is to buy an AMD CPU only if you are either very tight on cash or plan to overclock it; otherwise the performance is poor even for the price, considering that a better-performing i7-920 costs less than $300. But they also admit (as seen in the snippet above) that if you OC the 920 to 3.6GHz, it will outperform those OC'ed AMD CPUs handily.

(Why they bothered to include a $1,100 QX CPU is a mystery to me, unless it was intended to represent the Q6xxx/Q9xxx series processors)

I'd say that if you're on a really tight budget, the AMD CPUs are not a bad deal. The unlocked triple core goes for ~$140, the quad core is ~$170. Or $110/130 less than the i920. This doesn't include the motherboard, as X58 boards are still costlier than AM2+ and AM3 boards, if I'm not mistaken. The savings can be put towards the video card. The savings may cover the cost of upgrading from a 4870 to a 4890, and help defray the cost of upgrading to a 4870 X2.

But if you want performance badly enough, an additional $130-250 may not seem like such a premium.

SPARKS said...

"(Why they bothered to include a $1,100 QX CPU is a mystery to me, unless it was intended to represent the Q6xxx/Q9xxx series processors)"


It probably was, as this was AMD's intended target by default. They can't even smell Nehalem's dust.

Let's not forget the Q6600 was released in January 2007. AMD has been trying to catch the nasty little bastard since. I don't have a Ph.D. in Applied Mathematics like GURU, but I know at least two years when I see it. Let's not even get into steppings (G0).

More importantly, although clearly not stated, is the platform change. There are an awful lot of LGA 775 MOBOs out there in Never-Neverland. This is really AMD's true competition, clearly not the LGA 1366. This is why they brought the older platform into the comparison. They serve up the performance vs. price comparison like cheap hamburgers at Mickey D's.

However, if you were like that dumb-ass SPARKS, who bought into the QX9770/1600MHz FSB on the LGA 775 platform over a year ago, after two years with an O.C.'d G0-stepped Q6600, the entire discussion would be rendered academic. He has been kicking AMD's ass up and down the block 'till Nehalem.

And now, TONUS, you're kicking his ass up and down the street with an overclocked i920 at bargain basement prices. So what's a 150-buck difference for this and limitless possibilities on a future upgradable platform?

Looks good to me, no matter how much crap they fling into the fan. Especially when considering AMD fanboys dumping their OLDER, STALE platform in lieu of a shiny new 1366 - AMD's ultimate nightmare scenario.

SPARKS

SPARKS said...

http://enthusiast.hardocp.com/article.html?art=MTY0NCwxLCxoZW50aHVzaWFzdA==

TONUS,

This article shows where the rubber meets the road, especially the conclusion.........

"Our results are quite clear on how this compares, though we know this is a highly debated topic, all of our testing points to both Intel CPUs providing superior gameplay performance. In Crysis: Warhead, Flight Simulator X, and GTA4 both Intel CPUs consistently allowed us to play with higher in-game settings. In every other game framerates were higher on the Intel CPUs, even if the actual gameplay experience was the same. For gaming, Intel Core i7 and Intel Core 2 processors provide more performance allowing you to get the most value from your high-end graphics card investment."

'nough said.

SPARKS

Tonus said...

Erm... You wonder how much abuse a person is willing to take (look at the last two or three pages). Best line in the thread: "AMDZ has a problem that if not checked is going to cost it lots of credibility." The response? A temporary ban.

SPARKS said...

"You wonder how much abuse a person is willing to take."

That's the rub, isn't it? Whereas here, at this site, when you get out of line, you get a serious education from ITK, or "G".

No need for bans, you just get put in your place with facts and experience!

Imagine getting an education from Albert Einstein? I'd rather be stupid!

SPARKS

InTheKnow said...

I watched the presentation by Dadi Perlmutter and Rene James on the Intel investors website last night. If Intel is going to be successful in the moves that they are outlining, I believe there is going to have to be a big shift in Intel's culture.

Let's deal with Dadi's part of the presentation first. I think it is also the less obvious change on the surface. He was talking about Intel's move into SoCs, among other things. And at first glance, that would seem to be a very natural move for Intel, since it is an exercise in high volume manufacturing.

But there is a wrinkle there that might not be obvious at first. While the volumes may be higher, the number of product variations is also a lot higher. To succeed in this space, Intel's factory network will have to become a lot more flexible. The 2x reduction in cycle times mentioned in one of the presentations will help with this, but now they will have to learn to manage frequently changing products and smaller production runs.

Each product seems to introduce new wrinkles to the process and brings new failure modes. Intel's copy-exactly philosophy will need to adapt so that the process changes needed to maintain product yield on this changing product mix can be implemented more quickly. This will also open up more opportunities for Intel's process engineers to show their stuff. I fear those like our friend Mr. Tock may have a hard time adapting to the new reality.

Rene presented the second big shift: a shift towards a growing dependence on software expertise. Intel is very much a manufacturing company. The methods and systems that work in HVM are not the methods and systems that work in software development. If Intel is going to succeed in growing their software expertise, they are going to have to change some of their business practices. One might be tempted to say that Intel can just separate the software groups from the manufacturing side of the house. But that would result in throwing away the advantages that come with in-house software development. So these changes will spread throughout the company over time.

I'm not saying this is a bad thing, or even a good thing. I'm just pointing out that the move into these adjacent markets that Intel is looking to make has implications that aren't apparent at first glance. If Intel succeeds in moving into these areas, it will not be the same company in 5-10 years that it is today. Out of necessity it will be more nimble and more creative than ever. Intel is in for some interesting times.

InTheKnow said...

Hey Sparks, I see the New York Times is beating the AMD drum pretty hard these days. You'd think they had a dog in the fight. :)

hyc said...

ITK: interesting points.

Reminds me of chasing down data sheets for the MC68HC11, the number of variants of that processor was truly bewildering. They seemed to be quite a favorite of Ford for many years, and their GPIOs and PWM controllers all seemed to evolve with the needs of particular model year engines...

As for software expertise - this seems pretty obvious to me. Designing tons of cool new enhancements into your chips is pointless if nobody knows how to write the software to use those features. Of course, Intel's compiler team is excellent, so people don't tend to need to worry about it at very low levels very often. But certainly it makes a difference to have OS designers on the same page as the chip designers. Seems like every chip company in history has pushed their own OS except Intel, and I'd say the disconnect there has been a major mistake for Intel. Pushing the problem over to Microsoft has worked to an extent (because mediocre is mostly good enough), but it's well known that when Intel releases chipset drivers, they perform better than Microsoft's. If you want to make your hardware shine at its absolute best, you gotta write the software yourself.

SPARKS said...

ITK,

First and foremost, from my perspective, the New York Times is the most biased liberal rag on the planet. The paper's so-called reporting is infested with political agendas and rife with liberal social commentary.

I am reminded of Ayn Rand's 'The Fountainhead,' subsequently turned into a movie starring Gary Cooper and Patricia Neal, with the turn of every page. Reading one editorial, in my opinion, is akin to listening to Charles Schumer, Nancy Pelosi, Barney Frank, Joe Biden, and Henry Waxman condensed into a hypodermic needle and shot intravenously.

It gives a new meaning to the term 'Liberal Overdose.'

The AMD/NYS/Luther Forest 1 billion plus fiasco is no exception. The Times is rallying support and public sentiment with their new campaign that is fueled by the Euro-pee-on decision to fine INTC.

New York's liberal social elite are, and always have been, enamored of European social aristocracy, bordering on an inferiority complex.

Unfortunately, both Microsoft and INTC are in the crosshairs as what they consider 'big evil corporations'. Make no mistake, this comes on the tail of the cataclysmic banking/insurance failure they so conveniently ignored until lately. (Now they are on a CEO witch hunt.)

The agenda is AMD.
The target is INTC.

The goal is more complex:

A few thousand workers in New York State.

Political salvation for those who didn't have a clue about the dynamics of the industry, who thought they were building another automobile plant.

This is gospel.

SPARKS

Anonymous said...

If the unlikely happens and the liberal left gets its way.

The innovative and rich will get taxed

The innovative and rich companies will get throttled.

Imagine a world where CPU supply is constrained and we have AMD dictating price and innovation. We all know how they innovate: they compete by suing and throttling the competition instead of focusing on their own execution. If they screw up, it's someone else's fault. Jerry, Hector, Dirk: was it their fault their manufacturing and technology were bad? NO, it was big bad INTEL.

Was it their fault they didn't take risks? No, it was INTEL.

WTF, let's hear it for the US soon going the way of Europe. What was the great USA will soon look like Africa and Europe, and all real competition and innovation will come from the Far East.

SPARKS said...

"If the unlikely happens and the liberal left gets is ways.

The innovative and rich will get taxed

The innovative and rich companies will get throttled."

It will be, as you say, unlikely.

MS and INTC constitute a substantial portion of the US economy, directly and indirectly. I seriously doubt the rest of the nation, along with both companies' constituents/lobbyists, would blindly subscribe to the New York Times crusade, thereby crippling the entire industry and its associated infrastructure. It simply wouldn't gain traction on a national level. I'm quite certain there are many Democrats who would wish to leave both companies alone given the condition of the nation's economy, presently.

Once again, the great state of New York distances itself from the rest of the nation through a powerful medium, with its biggest mouth, and with its penchant for highbrow elitism and political posturing.

The New York Times is, after all, a local rag.

SPARKS

Anonymous said...

A few thousand workers in New York State.

Political salvation for those who didn't have a clue about the dynamics of the industry, who thought they were building another automobile plant.

It is a bit amusing - it's 1400 jobs (at full ramp) and papers and articles keep referring to a study which estimated ~5000 new jobs when you consider the secondary effects. This is nuts - a 3x job multiplier (or 2x, depending on how you want to look at it)? 1400 people (and associated families) will drive some new business, and corresponding jobs, but how much new PERMANENT infrastructure is needed to support an additional 1400 people? I assume the study includes some transitory jobs like construction and trades which will diminish over time when the fab is built out.

The other real fundamental problem is if NY is thinking real long term, each new fab brings even fewer jobs as many will be shared between fabs. So unless this brings in new companies (which I think is NY's long term hope) - a single semiconductor company has diminishing returns over time; the fab part is not a labor intensive industry and with continued automation advances it has become even less labor intensive over time. (assembly and packaging on the other hand are a different story and that is why you see a lot of that work where labor is cheaper)

SPARKS said...

"The other real fundamental problem is if NY is thinking real long term, each new fab brings even fewer jobs as many will be shared between fabs. So unless this brings in new companies (which I think is NY's long term hope)"

You know, I drink your observations and technical analysis like a Tony Schumacher Top Fuel engine drinks nitromethane when the pedal is mashed to the floor.

http://www.edmunds.com/insideline/do/Features/articleId=120159


I'm sure that the "infrastructure" groundwork to the new facility could accommodate future expansion in the area. I hadn't realized the long term prospects of a 'Luther Forest', "Silicon Valley" scenario!

This is why they spent so much time and money on infrastructure, the reason they put it up in the sticks, why the "mostly Union" card was played so early in the game, and why the tax incentives would make any company turn its head in that direction! Furthermore, they'll even give free legal assistance by way of the Attorney General's Office!

Oh, that clockwork ticker of yours! They are seeding the area with the AMD deal!!!

We had such an area on Long Island. We did it first! Remember Grumman, Republic and all the interrelated smaller companies going back to WWII? Hell, I knew/know guys that were machining mil-spec components in their basements!

The IDIOT local politicians (and unions) saw Long Island's aerospace industries as a federally subsidized cash cow! However, Dick Cheney (among others) saw the corruption, graft, and nepotism (characterized by Grumman), and made it his business to put them out of business! He did. Today, Long Island is dead technologically. The cost of business is extraordinarily high, even for Uncle Sam, let alone the corporate sector.

Enter Luther Forest (and slob taxpayers like me), with AMD, which they believed to be the jewel in their crown before the bottom dropped out in 2007! One thing is standing in their way: Intel Corporation.

They are presently laying off State workers by the thousands, but they are going to ram this thing down our throats.

Please forgive the expletive, but they chose the wrong fucking company. I dug clams in the Great South Bay as a kid, and even I knew that, all along. I still do.

Thanks for the wake up call. I'm so naive. I'll even accept Lex's "moron to novice" assertion.

SPARKS

InTheKnow said...

I'll even accept Lex's "moron to novice" assertion.

Don't.

That is a major problem I have with many who have come out of our educational system. Far too many of them seem to be unable to distinguish between lack of intelligence and lack of education.

The two are decidedly NOT synonymous.

I'll get off my soapbox now before this turns into a 3 page rant.

SPARKS said...

ITK,

You, sir, are a gentleman.

Sparks

InTheKnow said...

I'd be curious to see comments from anyone that has done threaded software as to the value of Intel's new Parallel Studio software. Maybe I'm just drinking the Kool-Aid, but it looks to me like this software could be a big step in the right direction. It looks like it is supposed to identify the parts of the code that would benefit from running concurrently and then modify the code so it runs that way.
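
For what it's worth, the pattern these tools hunt for is the loop whose iterations don't depend on each other. A trivial before/after sketch (in Python for brevity; Parallel Studio itself targets C/C++, and everything here is my own made-up example, not the tool's output):

# A loop with independent iterations is safe to run concurrently.
from concurrent.futures import ProcessPoolExecutor

def transform(x):
    # Stand-in for some expensive computation where each call is
    # independent of all the others.
    return sum(i * x for i in range(10_000))

if __name__ == "__main__":
    data = list(range(1_000))
    serial = [transform(x) for x in data]        # the original serial loop
    with ProcessPoolExecutor() as pool:          # the same loop, run concurrently
        parallel = list(pool.map(transform, data))
    assert parallel == serial                    # independence => identical results

The hard part, and presumably what the analysis tooling is for, is proving that independence in real code where iterations share state in non-obvious ways.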

Anonymous said...

Many hear try to get educated, but lack any intelligence.

The intelligence level of course at AMDzone is even lower, level of high school fanbois living in their grand ma's basement.

THings are so boring now that INTEL has completely trounced AMD just like I predicted years ago.

Luther forest, fucking arabs, global foundries matter not.

AMD is finished

These days it ain't even interesting to go to sharidouche's site anymore. Its sad to have so kicked ass

Tonus said...

"Many hear try to get educated, but lack any intelligence."

Hear?

SPARKS said...

"Many hear try to get educated, but lack any intelligence."

Do you really feel that way about all of us, LEX? We lack any intelligence at all? Even I know, with your limited capacity for compassion, coupled with your arrogance within your extremely narrow field of expertise, you can't truly believe this about us all.

Aside from your obvious flame baiting, your quote above is rather contradictory for someone who claims to be above us all. Why, you may ask?

Exceptionally intelligent people don't make such absolutely ridiculous statements; only those with low self esteem and something to prove do.

Secondly, your technical peers, for lack of a better term (they are clearly more rounded intellectually and professionally), have handed you your technical ass on a platter quite a number of times. Actually, you never answered a question about AMD's SOI process asked of you over a year ago. Why, pray, is that?

I really don't understand why someone such as yourself, who claims to be so intelligent, posts here (spelled h-e-r-e). Why do you read these posts, why do they hold your interest, and why do you post, besides the obvious trolling? After all, you're so intelligent and above such nonsense, right?

Finally, let's not forget, you have an open invitation to come and meet me here in the BIG APPLE. I'm currently working at 61st and 5th Ave. We'll meet for lunch in Central Park. We'll discuss the matter on a more personal level, and I'll give you a sorely needed, good education. Bring some friends, if you have any; I could use the workout, I haven't used one of my other disciplines in quite a while.

Anytime.

SPARKS

Anonymous said...

Many hear try to get educated, but lack any intelligence.

Anyone else find this amusing? (look closely).

Anonymous said...

OK so Eye didn't reed the previous comments two closely... and I was redundant, repetitive, not to mention I was repeating things. Must bee the lack of intelligence and edukation :)

Tonus said...

This is not GURU's finest moment. :)

SPARKS said...

It's OK. He can do no wrong.

SPARKS

SPARKS said...

If there was one reason why CRAY went INTC, here it is.

Whoa! 8 cores X 2 Threads X 8 processors, that's a lot of juice.
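
Quick back-of-the-envelope on that juice (just multiplying out the figures above):

# Threads on a maxed-out Nehalem-EX box: 8 cores x 2 threads x 8 sockets.
cores, threads_per_core, sockets = 8, 2, 8
print(cores * threads_per_core * sockets, "hardware threads")  # -> 128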

http://hothardware.com/News/Intel-Unveils-NehalemEX-OctalCore-Server-CPU/

SPARKS

BTW: GF is going bulk on 32 and 28.

"The Globalfoundries is ready to produce 32nm and 28 nm chips in bulk process"

http://www.fudzilla.com/content/view/13930/1/

Say, what?

SPARKS

InTheKnow said...

Some very entertaining parsing of words from Globalfoundries if you go to the source article. Here are the juicy bits.

Tom: We're killing the latency between Intel and us in 32nm. AMD was late with 65nm and introduced 45nm with a nine month delay. With 32nm we are reducing this on just a single quarter or three months.

This facility manufactures most of AMD processors and we are moving forward in order to be ready for 32nm processors [sexa-core Sao Paolo and 12-core Interlagos, Ed.] in the second half of 2010.

Fab1 Module 2, formerly known as Fab30 and Fab38 is bringing up the 32nm bulk-technology. We are currently installing the equipment in the F1M2 and adding more units than there were in the past. We are on track to be ready for initial customer designs in 4Q09 with products available in mid-2010.

So they are introducing bulk Si 32nm 3-6 months ahead of SOI. Intel is bringing out 32nm 3-6 months ahead of Globalfoundries. So if you look at the worst case deltas on microprocessors (which is what we have always compared in the past), the delta is about a year. Or right where they have always been. If you look at the best case, then you can say they are closing the gap.

I'm not going to tell you that closing the gap would be bad, because it isn't. But I will tell you it isn't free. It means they have had less time to squeeze the value from their 45nm investment. Maybe Globalfoundries can afford this. AMD could not.

SPARKS said...

"So if you look at the worst case deltas on microprocessors (which is what we have always compared in the past), the delta is about a year."

I don't have your experience in figuring these timelines out. However, what I don't get is the 'half step' between 32 and 22, unless they feel they can 'tweak' (for lack of a better term) their 32nm tools to process at 28, as opposed to a 28nm facility built from the ground up. Can this be done? Whereas INTC can go straight to 22 from 32, wouldn't this 'half step' put them further behind, unless the Luther Forest facility will be built exclusively for 22nm?

Additionally, both articles speak nothing of design. Core i7 presently outperforms everything by 15 to 80 percent due to design enhancements. Can AMD pump out a design that will at least double the performance of their current offerings merely to be on par with the i7 selling today?

Then what about yields? TSMC, from what I'm reading, is having trouble with 40nm. I've got a phat 45nm chip in my case that's been kicking some serious ass for well over a year.

Can GF get their house in order to push out good yields? AMD has an awful lot of dual core and tri-core failures out there in Never-Neverland. Can GF do what AMD could not?

Seems to me that's a lot to shovel and a big dumpster to fill.

SPARKS

InTheKnow said...

However, what I don't get is the 'half step' between 32 and 22

Half nodes are generally just optical shrinks of the parent process. So 28nm is the 32nm process with a tighter pitch. A half node is typically implemented as a cost saving measure since it reduces die size and doesn't cost as much to develop or implement as a full process node.

However, I have to believe that as feature sizes get smaller, half nodes become more expensive to implement. Aspect ratios (the ratio of a feature's depth to its width) become larger with each progressive process shrink. As the aspect ratios go up, it becomes progressively harder to completely fill in the spaces between the features.

This leads to a whole new host of failure modes related to voiding in the various layers. This drives yields down and costs up. Fixing the new problems requires engineering resources, tool time and Si.

At some point the yield hit and the cost of fixing the problems offsets the gains from the shrink.
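
To put rough numbers on what the shrink buys (simple geometry only, ignoring the real-world layout and yield effects described above):

# Back-of-the-envelope on a 32nm -> 28nm optical shrink.
linear = 28 / 32            # linear shrink factor
area = linear ** 2          # die area scales with the square of the linear shrink
print(f"linear shrink factor: {linear:.3f}")
print(f"area factor: {area:.3f} (~{1 - area:.0%} smaller die)")
print(f"extra die per wafer: ~{1 / area - 1:.0%} (ignoring edge effects)")

So roughly 30% more die per wafer on paper, which is exactly why the voiding and aspect ratio problems have to eat a lot of yield before the shrink stops paying.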

Anonymous said...

With all the press 'interpretations' of the, once again, very carefully parsed and nebulous 'AMD' announcements - it's a whole bunch of throwing stuff out there and hoping the illiterate press interprets it incorrectly (yeah, it's GF now, but it has the same AMD vagueness and it's the same people, just with a different company name).

The foundry is working on 2 32nm bulk Si processes and 1 SOI process. They are doing a low power bulk Si process (with presumably lower performance) and a 'higher' performance 32nm process (for things like graphics and other demanding product types)

I don't remember the analyst day foils, but I believe one of the bulk processes was with high-k and the other was without. So it's really difficult to assess what taking orders/ramping/in production means with this many variables... and it all translates into the generic AMD "mid year XX" schedule.

The real measuring stick is the AMD SOI process as that is their 'high performance' process. Last I saw it was still end of 2010 type timeline (and I think the older product roadmaps had 32nm CPU's in early 2011)... this would seem to read that they are simply maintaining the gap with Intel - which actually is not bad considering they have to implement high K.

As ITK noted, the half node stuff is of limited use - and it's not really even a half node anymore (28nm is not halfway between 32 and 22). The problem with a process like this is it is only useful for folks moving straight to the process. When you consider the design, qual and mask costs, there would be no reason to move something from 32nm to 28nm unless you are doing very high volumes. So 28nm is really only valuable for customers intending to skip 32nm (or, in the case of an established foundry, moving from an older node like 45nm).

So now, in addition to playing around with terms like 'ramping' and 'in production' to make the timelines look better than they are, Globalfoundries will be able to use 'chips' (which won't necessarily mean CPUs), confuse timelines across 2 different bulk processes and 1 SOI process, and introduce 'taking orders' and other terms to make the timeline so vague that the press will interpret it any way they want to.

In the end, look at Intel 32nm CPU availability and AMD 32nm CPU availability to understand what is actually happening. Unless AMD can make CPUs work on a low performance 32nm bulk Si process, that GF timeline is useless from an AMD CPU perspective.

Tonus said...

A HardForum member got a new Bloomfield Xeon and promptly overclocked it 2GHz... ON AIR.

From 2667MHz to 4600MHz with a voltage increase from 1.184v to 1.472v. He says he hasn't tried lower voltages yet. Should be interesting. Come to think of it, it's about time I got that HSF for my 920. I will probably do that next month, if impatience doesn't get the best of me.

SPARKS said...

"will probably do that next month, if impatience doesn't get the best of me."

Tonus, I can't believe you haven't replaced that stock HS yet!

SPARKS

Tonus said...

I'm spoiled at how well it works at stock settings, otherwise I'd have done it sooner. :)

SPARKS said...

Tonus,

It reminds me of a past life of mine, when I was many years younger, and much more foolish. I can recall my first Big Block build. I dropped a lightly massaged 427 in my 68 Camaro. The open chamber heads were cleaned up, gasket matched, and hooked up with a relatively mild cam, for around-town drivability. 4.11 gears with an M-22 "rockcrusher" transmission topped off a very fast, durable car. It ran in the mid 11's.

One evening, a friend of mine with a Hemi Roadrunner pulled up next to me, and beat me to the next light by a car length. I was pissed!

Subsequently, that week, I pulled out the bread and butter cam, and threw in a nice factory L-88 cam.

The world as I knew it changed. What was a very fast car turned into an absolute monster, a freak by all who knew it, an engineering miracle! Engine timing, port volume, cam timing, compression ratio, intake design, headers, all came together and tuned in like a goddamned Stradivarius, right to a sing song 6800 rpm.

The car was feared by all; my father downright refused to even get in it, especially when he saw the front wheels leave the ground. He said the car was "overpowered and dangerous".

Weeks later, I met the Hemi again. This time, nothing had changed except my rougher, slightly faster idle. We gave it a go. I beat him by 4 car lengths. The car ran in the high tens.

It's a great feeling to know what potential you have under the hood. It's quite another thing to realize that potential and, naturally, all its rewards.

Please, ditch the factory heat sink, Bro., even if you never overclock. You just may get a heavy foot someday, and you'll be ready.

SPARKS

Tonus said...

Hehe. I have every intention of overclocking this thing as far as it will go once I have a new HSF. I just haven't been in a hurry to do it because it runs so well as it is. Under better economic times I would've taken the plunge already, but I'm slowing my purchasing pace at the moment.

I'm pretty confident that once I've got a good cooler strapped to it, it will hit 4.0GHz with relatively little tweaking. I guess I'll have to deal with the loss of turbo mode (cackle).

SPARKS said...

http://www.theinquirer.net/inquirer/news/1184462/huang-throws-intel-bone

Oh, how sweet it is. Hang Sing Song is making what must be a very painful about-face regarding INTC and the GPU/CPU debate.

Naturally, for me, this isn't quite enough sucking up, yet. This little tidbit/snippet was like CRACK! I just can't get enough!

Does anyone else here smell blood in the water?

SPARKS

Anonymous said...

hello... hapi blogging... have a nice day! just visiting here....

Tonus said...

Here's a fascinating topic at the Zone. Lots of interesting dynamics at play.

SPARKS said...

Well, it seems INTC has found a way around ARM and the embedded software market. This should be particularly interesting to ITK, as he's been following the market so closely. INTC paid $884M in cash.

ITK, (and others) this is an interesting development. What's your take? Where are they going with this purchase?

http://www.dailytech.com/article.aspx?newsid=15341

SPARKS

Anonymous said...

Does anybody have any more information on Larrabee? I'm itching to get my hands on one.

According to Tom's Hardware, it's going to be one huge chip. I know Itaniums and Nehalem EXs are huge, but they're also expensive big iron and server chips. I wonder what a Larrabee might cost.

And they're speculating that we won't see it until 2011. What gives?

Anonymous said...

How sad we got links to Porno now. Times must be desperate in the la la land of AMD.

Have no fear, Larrabee will come. INTEL may not get it right at first, but they will crush Nvidia.

A huge can of whoop-ass is coming, just like what happened to AMD.

Everyone is struggling with yield at 45nm and they don't even have high-k. INTEL on the other hand has shipped hundreds of millions and is turning on 32nm and will soon have SOC too.

The world belongs to intel. Poor sharikou, poor amd fanbois... soon all the best hardware from top to bottom will be INTEL inside.

Anonymous said...

INTEL on the other hand has shipped hundreds of millions

Hundreds of millions... that would be an interesting 'analysis' of sales, then again what else would you expect?

SPARKS said...

Well, there is a better 'analysis' (for the lack of a better term), you just need to look at the facts.

AMD has been absolutely crippled since the purchase of ATI.
They don't process chips anymore.
They've sustained 6 billion in losses since.

Current graphics in their present iterations, as they farm out process to major foundries, have hit a thermal wall. Why, you may ask? There is no close association between process and engineering (like in the same building). Farmed out process is, at best, a compromise geared to many customers. (The needs of the many outweigh the needs of the few.)

At best, the software and hardware from both NVDA and ATI/AMD scales poorly from a performance perspective. At worst, the drivers are quirky, and the hardware runs as hot as hell.

As of late, performance gains have not scaled 1/4 as well as INTC's gains in CPU performance, power consumption, size, leakage, and TDP.

INTC, love them or hate them, is on the tip of the pyramid when it comes to process research and engineering. No one else can even come close. Personally, I think Larrabee is primed and ready to bring new power and performance to the GPU arena.

The old way of doing graphics has run its course. The time has come for the revolution. Buying two, three, and four very expensive graphics cards that cost two and three times the amount of a barebones high-end system only to obtain modest gains in performance is absurd. This is not to mention that it's a crap shoot whether your particular game and/or application will even scale with multiple GPU solutions. Some do and some don't.

Graphics today are stuck in the performance mud. If you are a real enthusiast, you know this to be true, no matter who you like or dislike.

The world is ready for an entirely new approach to graphics. I hope Larrabee answers the ever increasing need to do so.

Don't think so? Ask yourself this. Why is CPU hardware performance way ahead of the software curve, when GPU performance is ALWAYS behind? Still don't think so? Go buy Crysis and turn up all the settings to MAX @ 1900x1200.

You'll find your answer, trust me.

We need a new plan.

SPARKS

Anonymous said...

Sparks you are learning from the big dick!

Consortium process development is a compromise. The bigger reason it doesn't work is that you have the merging of multiple company cultures, needs, and politics. Ever hear of the United Nations?

Look back at history: never has absolute power come from a consortium. They win sometimes, but in the end they ALWAYS fail. They are fighting a company with a singular focus, a singular design team, and factories focused on a single customer. Who do you think produces a better process for the customer? The IDM, of course.

Now you got Global foundries outside looking for business only diluting what little focus AMD had on the fab side.

Finished just like I predicted 4 years ago at geek.com

SPARKS said...

"Consortium process development is a compromise."

On the surface, with regard to GPU development, this may seem over the top. However, need I remind those who doubt it of the NVDA and ATI "consortium", when they were both charged with price fixing years back and subsequently fined last year.

http://www.extremetech.com/article2/0,1558,2066727,00.asp?kc=ETRSS02129TX1K0000532

http://www.techpowerup.com/73757/US_Dept._of_Justice_Spares_ATI_of_Antitrust_Charges.html

http://www.dailytech.com/NVIDIA+Offers+Settlements+in+Suits+Over+GPU+Price+Fixing/article13079.htm

They were slapped on the wrist with a paltry $850,000 fine each, peanuts. Whereas INTC is supposed to hand over roughly a thousand times that amount, despite no harm to consumers ever being proven. Conversely, ATI's and NVDA's fines were paid DIRECTLY to those vendors that SHOWED damages.

Additionally, let's not forget the notebook GPU debacle, where NVDA's chips were de-laminating, "cracking" at the seams. Consumers were stuck between a rock and a hard place and it cost them millions collectively. No action was ever filed.

To the EURO PEE ONS, price fixing and faulty components are OK, but dealer incentives, discounts, and rebates are illegal! Despicable.

The Eighth Air Force missed a spot when they firebombed Dresden 64 years ago. That's OK, INTC will finish it off eventually.

Zig Heil!

SPARKS

InTheKnow said...

ITK, (and others) this is an interesting development. What's your take?

Sparks, been locked in the land of dial up connections (yes, much of rural America lacks high speed access) this past week and just got caught up.

This is largely unrelated to the whole netbook issue since this is targeted at embedded processors. These will be aimed at things like cars, TVs, stereos, etc. I believe this acquisition is intended to be a significant enabler for the move into SOCs which I believe is scheduled to begin in earnest late next year.

As I've said before, I think the SOC move is going to bring significant changes to Intel's culture and business practices over time. I recall reading somewhere that this acquisition is going to be run as a wholly owned subsidiary. If I'm right, that is almost an acknowledgment by Intel that their culture isn't currently a good fit for this type of business.

InTheKnow said...

Well, there is a better 'analysis' (for the lack of a better term), you just need to look at the facts.

....
They don't process chips anymore.



I'm not sure what you mean by not "processing chips anymore." Last I checked Globalfoundries and AMD were still joined at the hip and likely will continue to be for several years from what I've seen.

In fact, I'm not sure how the GF "spin off" can really hope to draw new customers under terms of the current agreement. From the bits of it I have seen, it looks like you might as well just send your designs straight to AMD. Until AMD and GF are less dependent on each other I would have significant concerns about IP protection using GFs services.

InTheKnow said...

The world is ready for an entirely new approach to graphics. I hope Larrabee answers the ever increasing need to do so.

Word on the street is that the first iteration of Larrabee may not be all you are hoping for, on at least three counts.

First, it is going to take time for Intel to get all of their newly hired talent working together like the team they need to be. I think AMD shows a good example of the issues with this. Even with an outright purchase of an entire graphics company, they have had a very rocky start to developing in-house graphics. Intel has been buying up graphics talent from many different companies. This will take even longer to integrate.

Second, once you have the team working as an integrated whole, you still have to get the architecture right. I think that hoping Intel pulls this off with Larrabee on the first cut may be hoping for too much.

Third, Larrabee is about more than just graphics. Since it is not primarily a graphics processor, I don't expect to see it come out of the gate smoking everything else in sight.

I think Larrabee will be a long term success, but I'm not completely sold on the first iteration.

Tonus said...

I get the feeling that Larrabee will start off slowly, with a first iteration that will be a 'laughing stock' to all of the usual suspects. And unlike Atom, I don't think there will be a killer app that provides an early silver lining.

But I think this is what Intel expects, and that Larrabee will be a process that is designed to pay off in the long term. Maybe it will and maybe it won't, but I doubt that the first product will be impressive.

SPARKS said...

"AMD were still joined at the hip and likely will continue to be for several years from what I've seen."

Yes, and no. At the risk of sounding facetious, I've read somewhere they have gone "fabless." Sure, there is a close association, however, business is still business. If GF is to develop a serious market position against the likes of TSMC, Samsung, CSMF, and UMC, AMD will not be the only game in town.

In fact, I read GF is courting everyone possible to become a serious player. In that lies the rub. They are too small to compete with the big boys, and AMD's margins and production won't be enough to pay the bills (read: new tooling dedicated to AMD's special proprietary needs).

Sure, NYS's $1B handout is substantial, but it ain't built yet, and the other companies are primed and ready to fight over whatever the world market has to offer. Then there is INTC. More importantly, the big guys are established and their mass volume production infrastructure is already in place.

Does the AMD/ATI association with CSMF conflict with the needs of GF? What say you if it's cheaper to stay with CSMF, as G. so succinctly pointed out a month ago? Frankly, I don't have his ticker for the intricacies of this game AMD is playing. What I do know, it ain't pretty, no matter how you slice it. There's a lot of big boys on the block and AMD is still feeding some of them, maybe at a loss, but feeding them still the same.

Indeed, when does the partner become the competition and how will they compete when AMD is in the middle of it all?

Hell, if I know.

SPARKS

InTheKnow said...

I know that people have been predicting the end of Moore's Law for over a decade, and so far it has kept right on going despite all the predictions of doom and gloom.

This article takes a bit of a different spin on the whole idea though. The analyst doesn't say that continued shrinks are going to be technically impossible. Instead he argues that they are going to cease to be economically viable.

I don't know that I agree with his time frame of 2014, but I do agree with the point that even more than the technical challenges of continuing shrinks, eventually the cost will slow down progress on process node transitions. In that regard, I think that the fact that Intel has been able to extend dry litho further than other manufacturers bodes well for them staying on the cutting edge. If they continue to extend the life of current technologies, they will be able to afford to move to the next node while others cannot.

I noted the article mentioned other techniques to enable the increase in transistor count without a process shrink almost as an afterthought. These types of techniques are going to become increasingly important once process shrinks slow down. But they are going to require large investments now to be ready when they are needed.

InTheKnow said...

Sure, there is a close association, however, business is still business. If GF is to develop a serious market position against the likes of TSMC, Samsung, CSMF, and UMC, AMD will not be the only game in town.

I agree that this is what GF wants to do. But my point was that given the closeness of the relationship between GF and AMD, the last place I would take my design to be fabbed is GF. Not because their process isn't good, or the cost structure isn't right, but because of the risk of any design I send them being revealed to AMD.

IP is extremely valuable in this industry and it only confers a temporary advantage. Why would you risk exposing your IP to a competitor when other alternatives exist?

SPARKS said...

"IP is extremely valuable in this industry and it only confers a temporary advantage. Why would you risk exposing your IP to a competitor when other alternatives exist?"

Excellent point. Of course, you had to spell it out for me (and others, I suspect). In any case, that's three reasons why (so far) some competing companies may not go to GF for production.

The big foundries' already-established infrastructure.
Cost and high-volume mass production.
And, as you point out, IP issues.

As a side note, just to complicate matters, GF is eyeing the beleaguered Chartered Semi for a possible acquisition. This interesting development would put GF in the big leagues overnight, while the "design house" AMD would be dragged in on GF's coattails, reaping all the rewards of the partnership they've had for years.

That's if they can wrestle it out of the hands of ATIC.

It looks like the foundry business is consolidating along with the financial institutions and the auto companies.

http://blogs.barrons.com/techtraderdaily/2009/05/29/chartered-semi-denies-receiving-abu-dhabi-takeover-bid/

SPARKS

SPARKS said...

And the hits keep on coming. ITK, you said AMD/ATI and GF are glued at the hip?

Well, NVDA wants x86, and it looks like they just might have found a way.

Core i7 and Larrabee? These bastards are running scared.

SPARKS

http://www.xbitlabs.com/news/other/display/20090617070941_Nvidia_in_Discussions_with_Globalfoundries_over_Manufacturing__Chief_Executive_Officer.html

InTheKnow said...

I found this report on IBM's HK/MG process that I thought was interesting. It gives a little more detail than I've seen up to this point on IBM's process. It seems to confirm the speculation on this board that they are using techniques that require some additional masking and deposition steps compared to Intel's gate-last process.

You'll need two deposition steps (along with associated litho and masks) for the "capping" layers that Intel doesn't appear to use. Intel trades off a dummy gate deposition and etch in their process instead.
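
To make the trade-off concrete, here's a deliberately simplified flow comparison (a sketch only; the step names and ordering are my own shorthand, not either company's actual published integration):

# Deliberately simplified HK/MG integration flows. Step names are my own
# shorthand for illustration and omit most of the real process.

gate_first = [
    "deposit high-k dielectric",
    "deposit first capping layer",                 # extra deposition...
    "litho + etch: clear cap from one polarity",   # ...plus its mask
    "deposit second capping layer",                # second extra deposition
    "litho + etch: clear cap from the other polarity",
    "deposit metal gate and pattern stack",
    "high-temperature source/drain anneal",        # final gate stack must survive this
]

gate_last = [
    "deposit and pattern dummy poly gate",         # the trade-off Intel accepts
    "high-temperature source/drain anneal",
    "etch out dummy gate",
    "fill with high-k and work-function metals (N and P separately)",
]

for name, flow in (("gate-first", gate_first), ("gate-last", gate_last)):
    print(f"{name}: {len(flow)} simplified steps")
    for step in flow:
        print("  -", step)

The point isn't the exact step count; it's that gate-first pays with the two capping depositions and their masks up front, while gate-last pays with the dummy gate deposition and etch, and in exchange the final gate stack never has to survive the high-temperature anneal.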

Tonus said...

Looks like Intel is taking steps to convert their whole lineup to Nehalem-based CPUs. They quietly introduced the i3/i5 lines that use Socket 1156. Based on the basic specs, it looks like these will be the low-end and mid-range CPUs: everything from dual core with no turbo mode to quad core with HT and turbo mode.

Anonymous said...

With hundreds of millions to spend on marketing, and having spent billions on Intel Inside, Pentium, Centrino, Atom, Core, Core 2, Core 2 Duo...

Now comes, from the brilliant MBAs and marketing monkeys at Intel, their best: i3, i5, i7. WTF??

Six-, seven-, and likely eight-figure-salary monkeys, and all they can do is copy BMW.

How sad: they've got the greatest CPUs on earth, the greatest factories, and stupid-ass marketing that can only think of copying BMW. Paul should fire them all.

Anonymous said...

ITK, all high-k metal gate processes, gate-first or gate-last, require some additional patterning and film removal to adjust the workfunction for the P and N FETs. It's physics; no matter what spin IBM or Intel puts on it, they still have to obey the laws of physics.

Personally, I believe that despite the complexity, doing gate-last is superior. You make fewer compromises in designing your transistors, gate-stack patterning, and dielectric and work-function tuning. It's all a matter of how much time and money you've got, and whether you are willing to go that extra bit for that extra little performance. As that recent analyst noted, it's all about ROI. IBM and the consortium are pooling resources and have little ability to invest as much as Intel, so they are taking the easy route, one that will result in an inferior transistor, no ifs or buts about it.

Anonymous said...

http://www.eetimes.com/news/semi/showArticle.jhtml;jsessionid=5YYMEKJAEGFNGQSNDLOSKH0CJUNN2JVN?articleID=218100243

"Intel pushes 193nm to 15nm"

In a lab environment, Intel was able to push immersion litho with double exposure (and presumably phase shift masks?) to 15nm.

The whole IBM EUV 22nm thing... yeah, not so much (though I'm sure Dementia will set in and talk about how a $40+ million EUV tool is cheaper than doing two immersion exposures... much like his ridiculous arguments on dual dry 193nm printing vs. 193nm immersion on 45nm while ignoring differences in capital costs and tool throughputs).

Intel will apparently use SINGLE-exposure 193nm immersion at 32nm (the prevailing view in the industry not too long ago was that dual patterning would probably be needed). And while the 15nm work was in the lab, this likely puts the nail in the coffin for EUV at 22nm and may also push it beyond the next node (16nm)...

This is beyond even what I thought about EUV, and I've been fairly pessimistic (I thought it would likely not make 22nm but would be inserted at 16nm).

While the 15nm stuff was research-based and should be taken with a grain of salt, this would be a node 4-4.5 years away, and this is an Intel announcement, not an IBM one!
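
For context on why double exposure is the enabler here, the standard Rayleigh scaling gives a quick back-of-the-envelope check (the k1 = 0.25 limit and the NA = 1.35 water-immersion value are textbook assumptions on my part, not figures from the article):

\[
\text{half-pitch} = k_1 \frac{\lambda}{\mathrm{NA}} \approx 0.25 \times \frac{193\,\mathrm{nm}}{1.35} \approx 36\,\mathrm{nm}
\]

That ~36nm is the single-exposure floor; splitting the pattern across two exposures halves the effective pitch to roughly 18nm half-pitch, so hitting 15nm means squeezing out a bit more on top of straight pitch-splitting. Either way, it makes clear why the economics favor stretching 193nm immersion rather than buying into EUV early.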
