6.03.2008

The Atomic Breakthrough

Doubts (including my own) about the viability of Intel’s Silverthorne program have been rife. The company’s exit from ARM-based microprocessors shows that the business environment in the low-power embedded market is pretty hostile. When you have more than 10 different vendors, led by high-volume manufacturers like Texas Instruments and Samsung, the margins can get very unattractive. Intel’s re-entry, however, is a show of confidence and a belief in the ultra-mobile segment as a market with huge potential for growth. Having learned its lesson, Intel is now differentiating itself from ARM by sticking with the general-purpose, off-the-shelf and widely compatible x86 Intel Architecture.

It’s a great idea until you realise that the Atom processor isn’t really on par with ARM-based processors when it comes to the flexibility of delivering the ideal performance at the right power draw. x86 is just too rigid in terms of performance, continues to be power hungry and remains too expensive to implement. My scepticism about Intel’s Silverthorne strategy rested solely on the belief that the right market for it (high performance, low power, and high margins) simply did not exist. I was wrong.

There is a growing (and very mobile) market where an Atom-based PC just happens to slot right in. It is a segment where high performance is required, power draw isn’t much of a factor and price isn’t the primary concern. I am referring to the in-car PC. Bill Gates once envisioned a PC in every home. Now Intel is betting on having a PC in every vehicle – that’s a potential market of ~72 million units annually by 2010. I know we’ve heard plenty about this in the past, but this time around the automotive industry is ready. Expect high-end/super cars to have built-in PCs in 2010 just to establish the concept and expect wider deployment after 2012. This information comes from actual product roadmaps and not just my prediction.

The industry's move to the PC as the system that integrates the ever-increasing functionality of the modern car is driven by cost. Believe it or not, an in-car PC will be the cheaper alternative very soon once the volume kicks in. A single system that drives the audio and video, sat nav, phone system, internet, HVAC, telemetry, security and customised vehicle settings is fast becoming cheaper than having all the different parts bought, assembled and wired individually. Simplification goes a long way in high-volume automotive manufacturing, and the building blocks that integrate a PC with the automotive networking standards (CAN/LIN) are very mature. The advent of the Atom removed the cost hurdles that stood in the way of complete integration.
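To make that integration concrete, here is a minimal sketch (my own illustration, not anything from an OEM roadmap) of how a Linux-based in-car PC could listen to vehicle traffic through the kernel's SocketCAN layer. The interface name "can0" is an assumption:

    import socket
    import struct

    # A classic CAN frame over SocketCAN is 16 bytes:
    # 32-bit ID, 1-byte data length, 3 pad bytes, up to 8 data bytes.
    CAN_FRAME_FMT = "=IB3x8s"

    def read_frames(interface="can0"):
        # Requires a Linux kernel with SocketCAN and a configured interface.
        sock = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
        sock.bind((interface,))
        while True:
            frame, _ = sock.recvfrom(16)
            can_id, length, data = struct.unpack(CAN_FRAME_FMT, frame)
            yield can_id & socket.CAN_EFF_MASK, data[:length]

    if __name__ == "__main__":
        # A real in-car PC would map IDs to HVAC, telemetry, locks, etc.
        # according to the OEM's CAN database; here we just print frames.
        for can_id, payload in read_frames():
            print(f"ID 0x{can_id:03X}: {payload.hex()}")

LIN subnetworks are typically bridged onto CAN by gateway nodes, so a single listener like this can observe most body-electronics traffic.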

At the moment the competition is between fragmented ARM-based systems and x86 (Intel’s and VIA’s low-power CPUs). Intel is currently leading the pack with fully developed embedded solutions and with major vehicle OEMs and system integrators as partners. Even Microsoft has stepped up its push for the Windows Embedded platform, now competing head to head with automotive-grade Linux, which is quickly gaining support to become the automotive standard. Either way, developers are finding it easier to design human-machine interface applications using the widely adopted x86 instruction set. The Atom processor is beginning to shape up as the breakthrough product Intel has been looking for.

314 comments:

InTheKnow said...

Expect high-end/super cars to have built-in PCs in 2010 just to establish the concept and expect wider deployment after 2012. This information comes from actual product roadmaps and not just my prediction.

Link, please? I haven't stumbled across this and would be interested in reading what is out there.

hyc said...

Sounds nice, I would certainly like a fully programmable PC instead of the stereo head unit I currently have in my car. But I'm still not sure x86 ISA matters here. Windows Embedded also runs on ARM. (Not that I would ever run Windows-anything in a piece of consumer electronics.) I would guess these things will be running Java, and the underlying ISA will be irrelevant.

Even without Java, binary compatibility is mostly irrelevant too. No desktop PC apps are going to be suitable for use on a Car PC without extensive UI modifications. If you have to recompile the app anyway, you can retarget it without much additional effort.

Anonymous said...

Do a Google search on in-car PC.

This is just one example of a system integrator working with OEMs and Intel.

http://www.azentekonline.com/cms/content/view/13/47/

SPARKS said...

DOC-

“I was wrong.”

So was I. I think I mentioned it on the previous post. What I didn’t figure on was the system being integrated into the auto market. Oddly enough, with 20/20 blindness, I have witnessed, and personally USED, such a system over 2 YEARS ago, duh!

One of my buddies has such a system in his boat! It costs about the price of a good-sized car. It integrates radar, GPS, depth finder, autopilot, engine, fuel, the whole damned magilla. Additionally, it is fully PC compatible with remote locations; it can overlay all the information, and do all this with multiple monitors!

http://www.raymarine.com/default.aspx?site=1&section=2&page=1813

Nice toy, eh? (Stop drooling, Giant)

Putting one of these things, albeit a less functional one, in a car would save me untold grief. My wife gets lost every time she visits her friends in New Jersey. It would be worth an extra 1500 to 2 grand on the sticker price to keep her and the kids safe, well informed, and give me peace of mind.

It would give a new meaning to the terms ‘Idiot Light’ and ‘Mobile Office’.

SPARKS

SPARKS said...

For those who aren't as well heeled as my buddy......

http://www.raymarine.com/default.aspx?site=1&section=2&page=1821


SPARKS

Tonus said...

For some reason Rob's post made me think of this.

Ho Ho said...

Heh, is that sonar image in the introduction videos really the best they have available? If yes, then they royally suck. I'm currently working on technology that will by far surpass it. Even the first tests done about 10 years ago on the first prototype look a lot better than that, not to mention the massively overblown thing we are building at the moment. I wouldn't be surprised if every navy in the world wanted a few of those sonars :D

Orthogonal said...

Don't forget WiMAX. That is a significant feature that would complement an in-car PC. I saw some press a while ago touting WiMAX interconnectivity in a few high-end vehicles. I haven't heard an update on an ETA for a roll-out, but I guess that depends on Sprint/Nextel now more than anything.

Anonymous said...

Let’s talk about the “A”BCs of that supposed “A”tomic breakthrough.

For INTEL these days all the hope for growth is with "A"tom. But it won't be what the INTEL faithful and Paul expect. All the “A”tom will do is drastically reduce INTEL’s “A”SPs.

Right now shortages abound on “A”tom, as INTEL seems to have underestimated the demand. I figure it takes about a quarter for wafers to end up as chips, so by September and back-to-school we’ll see lots of “A”toms.

It’ll come as a shock to Paul and the rest of the OEMs when those cheap laptops sell out. They’ll find most of the buying public will embrace a low-power 40-60GB HD and small displays, and be very happy with the under-500-buck price. With USB HDs going for less than 100 bucks for 100GB, who needs expensive laptops?

What will happen with “A”tom is that the whole laptop/portable market will shift lower. This isn’t unlike what happened on the desktop, except that any family can think of buying a cheap laptop for each person versus one cheap desktop. For the price of one high-end gamer desktop a family can get 4 very capable laptops. For 99% of what we do, it’ll be good enough.

Come end of 2008 you’ll see INTEL report record units, record revenue, but lower ASPs and the stock will still be stuck in the 20’s. Atom will fly but it won’t be the answer to growth or get INTEL out of simply being a CPU company.

Atom can be a compelling argument for the car, and phone, but for those high volume applications it won’t happen. There are no magical new 100 million volume units coming in a new space.

Now on to the second “A”, “A”utomobiles. For autos there has been recent talk that an x86 like the “A”tom is a perfect fit for that fast-growing business in cars. But let’s be serious: in cars it’ll be about price and simplicity. x86 isn’t compelling enough; the last thing you want is to be rebooting your MS OS in your car on the freeway. Current embedded 8- and 16-bit controllers and ARM cores are more than any car needs. The auto market also demands long product life cycles and reliability requirements that are totally foreign to the AMDs and INTELs of the world.

The next revolution in mobile computing devices isn’t about “A”tom; it’s about the third “A”. “A”pple has already demonstrated you can get by very very very well without x86. With the competitive ARM environment it makes little sense for Apple or any other smartphone maker to migrate to expensive silicon from a captive high-price supplier hell-bent on ensuring all the profits stay in the CPU while commoditizing everything else.

Notice Apple is the model that is succeeding here. The product is successful because of the software and integration. Apple has never been about how many MHz or what SPEC score the hardware gets. INTEL will surely bid for the next iPHONE slot, but I bet Apple will play hardball both on price and power. INTEL would probably concede price, but in the power area the ATOM strikes out.

Thus as good as “A”tom looks, it’s nothing but the mobile version of Celeron. And the future fight with “A”RM will be a long slugfest in low “A”SPs that INTEL is not really built to fight.

Now on to my last and most favorite “A”, “A”MD. They got nothing here. They are getting their clocks cleaned across the board now and have NOTHING on the cheap, low-power front, where even more of the bottom end will be eaten away.

With their far inferior 45nm process without high-k/metal gate, their low-power spins will be left in the dust and sold for a few Jacksons.

Tick Tock Tick Tock AMD got its clock cleaned.

SPARKS said...

This little article brought a tear of joy to my eye. I'm sooo happy NVDA will be cut out of the Nehalem loop.

FUDDIE the Despicable, the eternal INTC hater, has called INTC the bully.

I have a solution for NVDA: give up SLI and do some serious sucking up, or else kiss off and DIE.

http://www.fudzilla.com/index.php?option=com_content&task=view&id=7713&Itemid=1

SPARKS

Anonymous said...

Come end of 2008 you’ll see INTEL report record units, record revenue, but lower ASPs and the stock will still be stuck in the 20’s. Atom will fly but it won’t be the answer to growth or get INTEL out of simply being a CPU company.

Let's talk the ABCs of economics... the stock price is sensitive to GROSS MARGINS, not ASPs. As you may realize, you get quite a few more Atoms on a given wafer due to the smaller die size (the finished wafer cost is ~constant as this is done on a similar 45nm process).

So the question is not lower ASPs, it's how the unit production cost scales compared to the reduction in ASPs - if the cost per chip is falling faster than the ASP, then Intel's margins will actually RISE (as will the stock price). So if you get, say, 5X the dies per wafer but the ASP only drops 4X, this is a net margin win for Intel (and Wall St would reward this as it would show up in improved gross margin).
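To put rough numbers on that argument (all hypothetical, purely to illustrate the mechanism - these are not Intel's actual wafer costs or prices):

    # Hypothetical numbers, purely to illustrate the margin mechanism.
    wafer_cost = 5000.0                              # assumed finished-wafer cost, USD
    big_die = {"dies_per_wafer": 100, "asp": 200.0}  # a hypothetical mainstream CPU
    atom    = {"dies_per_wafer": 500, "asp": 50.0}   # 5x the dies, ASP down 4x

    for name, d in (("big die", big_die), ("atom", atom)):
        unit_cost = wafer_cost / d["dies_per_wafer"]
        margin = (d["asp"] - unit_cost) / d["asp"]
        print(f"{name}: unit cost ${unit_cost:.0f}, gross margin {margin:.0%}")

    # big die: unit cost $50, gross margin 75%
    # atom: unit cost $10, gross margin 80%

Same wafer cost, lower ASP, higher margin - which is why the ASP-only argument misses the point.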

A”pple already has demonstrated you can get by very very very well without x86.

Actually Apple's 'rise' in recent times has been in large part due to Mac sales... which just so happened to coincide with their switch to x86! Given that they only recently switched to x86 and computer sales jumped at the same time, the statement above is a head scratcher (you also may want to hold your ARM comments until you see the next gen iPhone/iTouch/iwhatever... and I'm not talking about the 3G phone soon to be released).

Sure there are hurdles for an x86 chip in this area (mainly power); but from a SW perspective X86 will greatly simplify and expand potential SW development on MID devices.

As for the car market, who knows, I don't see it as a game changer at least in the near term - the auto makers are already taking it on the chin and will have to focus on fuel efficiencies and alternative tech so I'm not sure how fast a shift to x86 inside the car would happen.

Tock-tick, tock-tick!

Anonymous said...

http://yahoo.brand.edgar-online.com/displayfilinginfo.aspx?FilingID=5897064-3846-155765&type=sect&dcn=0001193125-08-097759

page 23... ~50% of ALL of Apple's revenues now come from Macintosh sales, and if you look at the quarter-on-quarter comparisons this is their fastest growing revenue segment. So I would tend to be more inclined to say Apple's dependence on x86 is growing.

Anonymous said...

Fuddie is an idiot...

At this point Intel simply doesn’t want to honor its commitments toward the chipset license deal.

Apparently he believes Intel is required to license technology forever and cannot say no thanks on new technologies (CSI)? What commitment are they not honoring? Is he referring to the FSB license?

...and after Nvidia’s bold decision to keep SLI for itself

Just so we all understand... when Nvidia doesn't license a technology (that it had licensed in the past) it is a 'bold move', but when Intel does it they are failing to honor a contract. No hypocrisy here, move along...

I think Jensen should continue with the bombastic statements, I'm sure those are helping?!?!? As for Fuddie, well at least folks know what they are going to get when they go to his site.

Ho Ho said...

sparks
"I'm sooo happy NVDA will be cut out of the Nehalem loop."

I don't care about SLI, but I still prefer NV over every other GPU maker; they are the only one with half-decent 3D support. Unfortunately Intel isn't quite there (yet?).

Ho Ho said...

Nehalem benches!

This thing is insane. It averages around 20-30% faster than the similarly clocked Q9450, much more in some cases. Extremely fast cache and memory, in both latency and bandwidth. This is going to get pretty tough for AMD.

pointer said...

South Korea fines Intel for antitrust breach:

http://www.reuters.com/article/marketsNews/idINSEO28782420080605?rpc=44

bad news for Intel ... Intel will be appealing though ...

Anonymous said...

Fined $25M

Pennies for Intel.

They'll appeal but not because they care about the fine.

Tonus said...

re: Anandtech Nehalem benchmarks

Not only is it 20-30% faster, but they ran the tests on what can kindly be referred to as a pre-production motherboard that apparently had memory issues.

So the question becomes... just how fast will it be when it's running on a stable and optimized motherboard that doesn't have memory problems?

Tonus said...

Correction... 20-40% performance increase, with only a 10% increase in power consumption... on an as-yet untuned and slightly buggy platform. If these performance levels hold, I will wait for Nehalem for my next CPU upgrade, unless AMD comes up with something that is faster.

Seeing some of the early hype for the NVIDIA 280 card, I'm glad I held off on upgrading my 7800/8800 cards when the 9600/9800 came out. I'm happy with the Intel 8400 right now, but Nehalem is calling me... calling meeeee...

Axel said...

The AMDZone forums are accessible to unregistered users again, and there's a new thread about Anand's Nehalem preview. It's hilarious to see Scientia and others show EXACTLY the same attitude of disbelief and denial as when the first Conroe benches came out.

The preview indicates that Nehalem is a much more significant jump over Conroe than K10 was over K8. K10.5 is likely to just be a minor bump over K10, similar to Penryn over Conroe. Therefore it's not difficult to predict that throughout all of 2009, Intel's performance lead over AMD will actually be greater than it was in 2006 & 2007.

The bottom line is that Nehalem will clearly keep AMD's ASPs in the gutter through 2009 and be the final nail in the coffin of AMD's current "asset heavy" business model.

pointer said...

Blogger Axel said...

The AMDZone forums are accessible to unregistered users again, and there's a new thread about Anand's Nehalem preview. It's hilarious to see Scientia and others show EXACTLY the same attitude of disbelief and denial as when the first Conroe benches came out.


Yeah, those fanbois are at work again; you can see anything from denial to conspiracy theory there.

I'll quote part of what they say here and go from there.

amdzone scientia said..
... Do you think this means that Intel's SPEC scores will jump 30% for Nehalem at the same clock?


I remember he said that a lot of times: why do some sites benchmark single-threaded applications using quad cores, when people buying QC are supposedly more interested in multithreaded applications...

Also, with turbo mode the Nehalem will run at a higher clock on single-threaded applications, so the correct real-life (non-academic) comparison now would be the speed difference between the Nehalem and another CPU (Yorkfield, Phenom, etc.) rated at the same frequency running a single-threaded application. Wait, I hear someone shouting: not fair, you must turn off turbo mode for the comparison... :)

amdzone scientia:
...Also, this was for a server processor; the desktop models only have 2 memory channels. This is what Phenom is going up against, not the server chips.


Well, 2 things here:
1) There are actually desktop Nehalems that come with 3 memory channels; I believe he knew about it... and I wonder why he suddenly forgot.
2) Except for the memory testing, Anand was actually using the crippled board for the other benchmarks.

anand: We had access to a 2.66GHz Nehalem for the longest time, unfortunately the motherboard it was paired with had some serious issues with memory performance. Not only was there no difference between single and triple channel memory configurations, memory latency was high. We know this was a board specific issue since our second Nehalem platform didn't exhibit any issues. Unfortunately we didn't have access to the more mature platform for very long at all, meaning the majority of our tests had to be run on the first setup (never fear, Nehalem is fast enough that it didn't end up mattering).

Scientia was able to dig through hundreds of pages of manuals to make his points in some of his discussions with others... and yet he could easily miss this?

amdzone scientia said
Nehalem's L1 cache is 33% slower than Penryn's. Yet when Brisbane's cache was slower people immediately insisted that it was broken. Does this mean that Nehalem's cache is broken? Secondly, people insisted the same thing about Barcelona's L3. Isn't it odd then that Nehalem's L3 is only 10% faster than Barcelona's?


A real example of looking for a needle in a haystack! :)
And he is talking out of context too. Brisbane is a shrink of the 90nm K8, and thus people were not expecting slower cache access.

amdzone The_Ghost said ..
i have said before that anand has taken tom's hardware place, i have not noticed any other sites benchmarking that cpu, the question is why? and how did anand end up with one? well let's make that how did anand end up with two of them?


here comes the conspiracy theory :)
anand:Nehalem itself is very stable but it has only been in Taiwanese motherboard manufacturer hands for a relatively short while now, so the only truly mature motherboards are made by Intel. Unfortunately since Intel didn't sanction our little Nehalem excursion, we were left with little more than access to some early X58 based motherboards in Taiwan. Thankfully we had two setups to play with, each for a very limited time.

I'd believe Intel would send out samples a few weeks before the Nehalem launch to get the review halo effect at launch. But what benefit does Intel gain by purposely leaking the information now? It hardly helps at all, since it is already holding the performance crown.

Tonus said...

" amdzone scientia:
...Also, this was for a server processor; the desktop models only have 2 memory channels. This is what Phenom is going up against, not the server chips."


I'm not sure what point he is making there. If Phenom is going up against a 'crippled' Nehalem, that would indicate that Intel feels all it needs in order to compete at that level is a 'crippled' CPU, wouldn't it? Is there anything stopping Intel from releasing a Nehalem with more than 2 memory channels for the desktop?

Admitting that Intel is producing a lesser version of Nehalem to compete with Phenom is not a good thing for AMD...

Anonymous said...

AMDZone is absolutely hilarious:

This response made me laugh:

IF you discount hyperthreading i doubt it very much. Non threading code bases will see perhaps not much of 20% at the same clock speed in average. That is what is supposed to be achieved with Shangai from Barcelona. Then there is the clock speed potential... if you have been reading some of other threads on this forum... a 4 core K10 design with 45nm HKMG and VAG will have the possibility to put a K10 well above 4,0GHz.

Better the K10 design can prevail, and a derivative with its circuits tweaked for a much lower FO4 number, like the IBM Power5 to Power6, can put a 4 core K10 design with 45nm HKMG and VAG well above the 5GHz... (IBM Power 6 at 65nm reaches 4,7Ghz, but if it where at 45nm HKMG and VAG it probably would be well over 6Ghz... or 7Ghz...)

Anonymous said...

Folks - come on, of course every attempt is going to be made to attack the #'s, attack the reviewer and attack anything on the periphery in an attempt to keep hope alive. This is exactly what happened with the early Core 2 benchies that leaked. When you don't like the data, or it doesn't fit into your version of reality, you instantly deny it and attack it.

Yes you should take the #'s with a grain of salt, but it shows a few key things:

1) It is better than Penryn (on a core and clock for clock basis). You can argue over the amount, but it is not just a feature/scaling update (with things like IMC, QPI, new sleep states, SMT, etc...). Is there any data that EVEN EXISTS TODAY which shows an apples to apples clock for clock improvement measurement of K10 over K8? (The answer is no, as AMD has failed to deliver a K10 dual core which would allow this comparison)

2) The Si appears relatively healthy considering it is still ~6 months from launch. Yeah, there are still bugs on the overall integration side (PCIe, and it seemed like there were some memory issues), but the Si doesn't appear to be broken or likely to slip the schedule in a major way. There is still potential for gotchas and slips, but things at this point appear to be on track.

I find the 'well, how much of the benefit is SMT' attack hilarious - who the ^&@#%^#! cares?!? If it shows 20% on typical applications that I use in real life, I don't care if Intel came up with a technology to shrink mice and managed to shove them in there to get the job done. This is the same crappy argument as "glued cores" vs native (which will conveniently be dropped when AMD releases an MCM - then it will be considered a 'practical', market-driven approach).

Finally 20% is a significant jump... AMD fans should become politicians - they falsely hype the competitor in the runup to the benchmarks, so that when the benchmarks come out they can say it failed to meet expectations. "I was expecting more", or it is "OK, but not as good as they were saying". Or the classic of associating reviewers (or fan) expectations with actual Intel claims.

I thought the interesting thing was the power - it looks ~10% higher than an equivalently clocked Penryn and quite frankly I was expecting worse. When you consider the added transistors for SMT, the inclusion of IMC and QPI, this seems like a modest increase. We are talking about removing the Northbridge and adding 10 Watts to the CPU (in addition to the Nehalem improvements). I've heard some already call this disappointing!

Finally - it is time to stop the clock for clock crap - this is useful on similar architectures on different technology nodes, but with different architectures that are designed (and run) at different clocks, this is a nice academic study. I can't wait for the exciting data on how a 2.1 GHz Nehalem would compare to a K10. Just give me something that is comparably priced and in the same relative ballpark and compare those!

Anonymous said...

Tick Tock Tick Tock, game is over AMD. Looks like the early peeks at INTEL’s second Tock are in, and the combination of focused designers given world-class silicon is yielding some incredible results.

I hear that Scientia and others are in denial and wallowing in their tears. There is no denying the big swinging Dick. When you combine a big bank account, great process engineers, huge factories and competent designers, AMD has nothing to play with in this game.

Those that lament the little guy rooting for the little dick need to realize that making CPUs is for big swinging dicks. It takes balls to play and AMD has none. They are too little, and here size matters.

A few of my favorite quotes from the Anand review:
“run on a partly crippled, very early platform… the fact that Nehalem is still able to post these 20 - 50% performance gains says only one thing about Intel's tick-tock cadence: they did it.”

“Nehalem is already faster than the fastest 3.2GHz Penryns on the market today”

“it shows that the tick-tock model can work”

I find the following one proof against all those who said INTEL would stop innovating once AMD was gone. AMD is for all practical purposes gone, done, finished. Did INTEL stop innovating? NO, they have the pedal to the metal. They are working hard to convince you to upgrade from Conroe to Nehalem and soon to their next Tock. Yo, is Rick Geek from geek.com paying attention? INTEL will always innovate regardless of competition.

“The fact that we're able to see these sorts of performance improvements despite being faced with a dormant AMD says a lot.”

“It's months before Nehalem's launch and there's already no equal in sight, it will take far more than Phenom to make this thing sweat”

How can AMD compete? There is no miracle coming. It’s bleak, as the IBM/AMD 45nm process is barely competitive with INTEL’s 65nm. With that kind of handicap the CPU landscape isn’t interesting at all.

Shanghai on 45nm without high-k metal gate is simply equal in performance to INTEL doing a quad-core with IMC on 65nm. Then give INTEL ½ the die size and another 30% performance with high-k/metal gate. There is no magic here; AMD can’t compete because it doesn’t have the goods. No amount of magic from their architects can overcome this. INTEL stayed close without an IMC on the strength of a superior process. After getting the boost of IMC there isn’t another silver bullet. INTEL will keep the lead due to a far superior process.

Tick Tock Tick Tock AMD is done.

InTheKnow said...

South Korea fines Intel for antitrust breach:

I can't find the story now, but the report I saw on this early today had an interesting tidbit that I haven't seen elsewhere. The report said that Intel's practices had been found to be anti-competitive, but that no award was being given to AMD because there wasn't enough evidence to prove Intel's behavior had actually damaged AMD.

If that report is accurate, it doesn't set a good precedent for AMD going forward.

Anonymous said...

Dailytech has a blurb, fine was a whopping 25 million.

Tick Tock Tick Tock AMD is dead

pointer said...

...It’s a Catch-22 situation, as Nvidia wants SLI support for itself, while Intel would love to get SLI for its Nehalem chipset; but Nvidia refuses to give it away.

Intel was always the bully and they simply won't let Nvidia make Nehalem chipsets, which is not really a fair and honest way of playing the game....
http://www.fudzilla.com/index.php?option=com_content&task=view&id=7713&Itemid=1


So... Intel is a bully for not giving NVidia a CSI license, while it is OK for NVidia to (boldly) not give Intel an SLI license?

I wonder what logic this is...

SPARKS said...

"They are working hard to convince you go upgrade from Conroe to Nehalem"

Ooooh, how true!

"Dailytech has a blurb, fine was a whopping 25 million."

Yeah, peanuts, probably twice INTC’s yearly worldwide budget for toiletries: TP, feminine napkins, soap, deodorizers, paper towels, etc.

BUT, agreeing to pay this fine does set a precedent for the EU to hammer home the big numbers. Of course, the AMD legal team will leverage this to their full advantage in future litigation. It is not the amount; it’s the admission, conviction, and obviously, guilt. Japan first, now the Koreans, then the EU, and finally, AMD’s case. This will all be ammunition in AMD’s legal gun come 2009.

AMD’s share price has soared on the news. Expect an INTC appeal of the Korean decision.

SPARKS

Anonymous said...

"Intel-AMD trial delayed to 2010 in U.S. court"

http://www.eetimes.com/news/semi/showArticle.jhtml;jsessionid=YNWE1I14DXRD0QSNDLPSKH0CJUNN2JVN?articleID=208402462

let's see best case:
- it actually does start in 2010 (which is probably not a slam dunk)
- Concludes in late 2010 or 2011
- Should AMD win, Intel will appeal... meaning add at least 1-2 years.

So if AMD is successful with both the suit and the appeal, they may start getting paid around 2012-2013 (or later). Once again if you factor in the net present value of money, the chances of actually winning, the lawyer costs (easily 10's, if not 100's of millions), and the current cash flow issues... why is AMD not aggressively trying to settle this?

I'm thinking 500mil.
- for Intel this is good risk aversion and they save the future lawyer costs. This is a one-time charge for them, they can spin it as simply avoiding the costs of going to trial, and it is not enough to fund a fab for AMD.
- for AMD the 500mil today is worth a ~1Bil award in 5-6 years (see the sketch below). Factor in the lawyer costs avoided and the debt they can pay down (interest saved), and a 500mil settlement today is probably the equivalent of a 1.5Bil award in 2013. They can also try to spin any Intel settlement as tacit acknowledgment that Intel was guilty.
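A rough present-value check of that claim (a minimal sketch; the 12% discount rate is my own assumption, and none of these figures come from AMD or Intel):

    # What a hypothetical $1B award won years from now is worth today.
    def present_value(future_award, years, rate=0.12):
        return future_award / (1 + rate) ** years

    for years in (5, 6):
        pv = present_value(1.0e9, years)
        print(f"$1B in {years} years is worth about ${pv / 1e6:.0f}M today")

    # $1B in 5 years is worth about $567M today
    # $1B in 6 years is worth about $507M today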

Given that their lawsuit applies to pre-2005/6-ish conduct (I think) and they can only claim damages on sales in the US, just what are they expecting to get in court?

Anonymous said...

Oh I also forgot - Intel could throw in (or AMD could request) a favorable renegotiation of the x86 license (like dropping the outsourcing restrictions?) as part of any settlement... something also of significant value to AMD at this point in time?

I'm thinking this may be yet another grudge/ego issue with Hector (much like the 30% market share at all cost philosophy) - someone in that company needs to explain to Hector that it may be best from a business perspective to get a settlement now rather than roll the dice and bleed all those lawyer costs along the way. They should take what they can get, knowing Intel will still have to deal with all the governmental suits (like NY, EU, FTC, S.Korea...) for many years to come.

Ho Ho said...

People who are worried about Nehalem power usage should remind themselves this is an engineering sample. IIRC the first Penryn ES chips had nearly 2x higher power usage than what was eventually released.

Unknown said...

Maybe it's just me, but I found this extremely hilarious:

http://www.fudzilla.com/index.php?option=com_content&task=view&id=7749&Itemid=69

After reading Scientia's essays about how DTX was the greatest thing since sliced bread it's humorous to know that the first DTX board is actually designed for the Intel Atom CPU!

Anonymous said...

Actually Giant I think Fuddie meant the first DTX board they saw at Computex (not first ever).

If you recall, many on this blog (myself included) also mentioned that it was not a proprietary (or AMD-specific) form factor; so if it indeed took off like Dementia thought it would, it would obviously be quickly used for Intel designs as well.

So basically he was wrong on 2 fronts - the widespread/pervasive DTX usage in 2007 and the fact that he thought this would somehow be a competitive advantage for AMD.

Other than those 2 minor things he was spot on with his analysis.

Now he is railing on the Anand Nehalem article as if it were some formal review whose data needed to be parsed carefully... despite all of the clear and upfront qualifications Anand mentioned in the article about the issues and it just being a quick glimpse.

Of course he is just being objective and non-biased; it's not like he has an axe to grind or a clear favorite he is trying to play up!

Anonymous said...

If you guys recall a few months ago; some ("some" = me!) postulated that it was critical for AMD to get the Shanghai benchmarks out early so they could use Penryn as the comparison point and allow AMD to do it on their terms.

AMD is quickly running the risk of Nehalem benchies coming out in force and if that happens it will be what Shanghai is compared against. They should re-think their 'keep it hidden so the competitor doesn't know' strategy (stifles laughter) and get those Shanghai benchies out soon. Even if it is a modest improvement, the clocks are not that impressive and it has trouble keeping up even with Penryn (which is what I suspect), they are better off with that comparison than with being viewed as getting soundly beaten by Nehalem. If they get it out first they can play the "well, Nehalem is Intel's next-gen architecture, of course it should be better" card.

I suspect they have at most another 2 months or so to get ahead of the PR on this one and try to make up a new price-per-performance-per-clock-per-transistor-per-watt metric. (Recall ACP was the attempted benchmark innovation on K10 to make the power #'s look better.)

Unknown said...

Nice toy, eh? (Stop drooling, Giant)

It's unfortunate that I can't really afford a top end boat with such a system on my current rate of pay!

Instead I'll have to settle for a few GTX 280s in a few weeks and a Nehalem CPU, board and memory towards the end of the year!


Giant, as far as your applications are concerned, you couldn’t have made a wiser choice. That’s what we’re here for, top performance. Now both you and I have machines that can handle anything they throw at them, no compromises, no bullshit excuses; just the way I like ‘em.


I couldn't have put it better myself. Top performance is what we want, and we're willing to pay for it - so top performance is what we get. After salivating over those Nehalem benchmarks that Anand posted I get the distinct impression that I'll need to upgrade once again by the end of the year to avoid my PC ending up on the Antiques Roadshow!

-GIANT

Ho Ho said...

"If you guys recall a few months ago; some ("some" = me!) postulated that it was critical for AMD to get the Shanghai benchmarks out early so they could use Penryn as the comparison point and allow AMD to do it on their terms."

No problem, they'll just keep their comparisons to 4P where Intel is still mostly using old Conroes. Just the same as they did when Intel only had Netbursts.

Tonus said...

sparks: "Expect and INTC appeal to the Korean decision."

I was under the impression that they had already filed an appeal or at least had announced their intention to do so. $25 million is a small number in context, but my belief is that companies like Intel don't get to where they are by treating it like a small number.

I agree with the others on two main points as well:

> The indirect finding that AMD was at fault for its troubles is a bad omen for them. If the various courts find against Intel but leave AMD on its own, AMD won't really be in a better position. Intel can absorb the losses from fines, but AMD needs more than a judge telling them "see, we fined Intel!" They need cash, and it gets harder and harder to imagine anyone investing in them under those circumstances. Intel will not be 'crippled' by fines.

> AMD's best option is to settle its lawsuit against Intel. The way the courts work here, the timeline that assumes that it might take until 2012 or beyond to finally conclude the case seems accurate, and I don't think AMD has the luxury of time anymore. A settlement could give them a way to save face (ie, claiming that a settlement is tantamount to Intel admitting guilt or at least 'having something to hide'), while also possibly allowing them to modify the x86 license to terms that they consider more favorable. Dragging out a court case doesn't seem as if it will do them any good when they're in need of cash right now.

Orthogonal said...

For someone so uptight about proper testing methodologies and calling all the details into question, it would be helpful if he actually looked at the details of the tests before spouting off. When Sci points out that Anand's old review showed the Q9450 getting a score of 3297 in the single-threaded Cinebench run, obviously higher than the Q9450 score of 2931 reported in the Nehalem preview, it should be noted that this was because the old test used the 64-bit Vista OS while the new one used 32-bit Vista.

Looking at this Vista comparison benchmark by Extremetech, the Vista 64-bit gives a 10% advantage over 32-bit, which easily explains the difference.

http://www.extremetech.com/article2/0,2845,2280813,00.asp
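For what it's worth, a quick back-of-the-envelope check of that explanation, using the two published Q9450 scores:

    # The gap between the two published scores is in the same ballpark as
    # the ~10% advantage ExtremeTech measured for 64-bit Vista over 32-bit.
    score_vista64 = 3297   # Q9450, Anand's earlier review (64-bit Vista)
    score_vista32 = 2931   # Q9450, Nehalem preview (32-bit Vista)
    print(f"score gap: {score_vista64 / score_vista32 - 1:.1%}")   # 12.5%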

Anonymous said...

"Looking at this Vista comparison benchmark by Extremetech, the Vista 64-bit gives a 10% advantage over 32-bit, which easily explains a the difference."

Come on man, there you go letting the facts get in the way of a good story and theory that Anand is either incompetent or cooking the books. Dementia's theory sounds plausible to me... as a review site it clearly is in his best interest to cook the #'s - it's not like other review sites will come out with the correct data, expose this, and tank his viewership (and reputation). This seems like a plausible strategy - maybe Anand is hoping that EVERY other site will make similar mistakes, or that Intel is paying them off.

Personally I like the theory! I usually read non-fiction, so Scientia's blog is a nice changeup.

Anonymous said...

Anonymous said "AMD is quickly running the risk of Nehalem benchies coming out in force and if that happens it will be what Shanghai is compared against."

Speaking of which, where are the ES samples of Shanghai? Why no previews yet? If AMD is gonna release these in <6 months, shouldn't we be seeing some preliminary benchies by now??

The problem with AMD is that they make grandiose claims and then fail miserably at them: underperforming, late and buggy. Intel, on the other hand, looks like the consummate professional, mostly delivering on time and on its claims.

SPARKS said...

2012 is when both 2007 share dilutions are payable. Correct me if I am wrong, but wasn’t the price 20 or 25 dollars a share? Don’t these things have a BBB- rating, classified as near junk status?

Given AMD’s current circumstances, plus Nehalem’s magnificent numbers on a crippled MOBO, locked in at a paltry 2.66GHz, time is NOT on AMD’s side. Kiss the server market goodbye.

With INTC’s flawless execution, can anyone guess how AMD could survive that long? I mean, they’re looking at at least 4 years here, even if they get to the ‘break even’ point, while INTC is looking straight down 32nm’s throat. Then there is the 450mm piece of glass INTC is cooking up.

Further, to add insult to injury, AMD has to renegotiate the x86 license in 2010 at the beginning of the trial, perfect! Has anyone here ever tried to negotiate with a soon-to-be ex-wife during a divorce? They don’t give up empty peanut shells, trust me!

Plus, factor in Larrabee. INTC wouldn’t need AMD/ATI as much as they do now even if it is a marginally performing solution. If, perchance, it evolves into a killer product, it’s a win-win scenario for INTC. (Nehalem’s ray tracing numbers were outstanding in the Anand article.)

Do you really think it’s to INTC’s advantage to settle anytime soon? Fuck ‘em I say. Let the legal action be another expense (and diversion) that AMD has to deal with during the next dozen or so quarters as they try to make money in the low end gulag while simultaneously fighting both NVDA and INTC.

Death by uncompetitive product execution.
Death by financial attrition.

INTC has a perfect strategy: let the other son-of-a-bitch sweat.

SPARKS

Anonymous said...

'If AMD is gonna release these in <6 months, shouldn't we be seeing some preliminary benchies by now??'

My guess is that, as an astute philosopher once said, it is deja vu all over again.

Recall the task manager 100% loaded K10 demo? Recall the fans saying, look, it had to be running hard to load all the cores. If I recall, there were a few who said maybe it's just running at a really low clock...

Flash forward 18 months... AMD claims they have samples out to OEMs, and I suspect they do - really low-performing parts that are good enough to do the initial BIOS and development work. I'd speculate (as I have no proof) that AMD is seeing similar process issues on 45nm as they had on 65nm - after all, without a change to the gate oxide the power issues are going to be just as bad (if not worse).

AMD probably fears having a low-clocked demo again - they played that game with K10, saying things are healthy, we are on track, we'll get the clocks up... well, if they try that story AGAIN with K10.5, maybe even the gullible press won't be that gullible.

Now couple this with AMD seemingly trying to sell a minority share in their fab capacity with claims of "world class manufacturing"... kind of a hard sell (and bad bargaining positioning) if demos of an underclocked part start leaking out.

AMD played this same game on the financial side before their loan deals/convertible notes in early 2007 - they initially stated that everything was on track, with no mention of issues at the Dec '06 analyst day... and mere weeks later, just prior to their earnings call, they warned there would be significant shortfalls.

Of course I could be wrong and AMD is just holding everything secret in order to 'shock the world' :)

Anonymous said...

"I mean, they’re looking at least 4 years here, even if they get to the ‘break even’ point, while INTC is looking straight down 32nM’s throat."

32nm?!?!? I think you mean 22nm! 32nm should be end09/early2010... 22nm should be early2012. (I still think this will start at 300mm - though this may be the crossover node to 450mm?)

Do you really think it’s to INTC’s advantage to settle anytime soon?

Absolutely! You always like to negotiate from a position of strength. Barring bankruptcy, AMD is as weak now as they are ever going to be: they need cash, they need a way to re-structure manufacturing, and they need to do all this without really having a competitive product in the high-margin markets. I still think (unless AMD somehow knows they have the goods on Intel) there is a better than 30% chance you will see a settlement in 2009 (probably after depositions start, as then both AMD and Intel will have an idea how the trial will go).

For Intel it is all about risk aversion. While civil suits are hard to win (I think <50% of all suits are successful), the potential liability could be huge.

AMD is probably trying to figure out if they can last that long (makes no sense to go to trial if you go bankrupt before then) and probably wants to gauge what they will uncover during the depositions.

2009 in this sense is perfect timing - it is in advance of the x86 license so AMD can drag that into any settlement discussions and Intel might be willing to do this to:
A) get rid of the suit and liability
B) It will probably help (at least politically) to get the governments a bit off their back - look we're negotiating in good faith with AMD and not trying to put them out of business... blah, blah, blah...

So I think there is plenty of incentive for Intel to go to the negotiating table, even if they are confident they will win the suit.

Anonymous said...

Anonymous said "Recall the task manager 100% loaded K10 demo? "

I recall a bunch of AMD BS:
- "40% faster than Conroe over a wide variety of workloads" - more like 20% slower over most workloads.
- A "Tsunami of products" - more like a 'Sue-nami' of Intel lawsuits.
- "Dancing in the aisles" - well, OK that was Fuddo talking out of his rearmost mouth.

Anyway I bet we'll start seeing prerelease Nehalem benchies in the next couple months, and the AMDZonerz are gonna be like floating Egyptians -- neck deep in de Nile :)

Axel said...

Nice. One more, perhaps the one most bordering on outright deception: "I think if you look at the four key value propositions of Barcelona, you know it's going to be the highest performing x86 processor out there. It's going to completely blow away the existing Clovertown product in every dimension." Randy Allen

hyc said...

"Looking at this Vista comparison benchmark by Extremetech, the Vista 64-bit gives a 10% advantage over 32-bit, which easily explains a the difference."

Come on man, there you go letting the facts get in the way of a good story and theory that Anand is either incompetent or cooking the books.


Well, he admitted he got it wrong.
http://anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3326&cp=7#comments

And for someone who has run so many of these benchmarks to blithely post blatantly bogus data, you really do have to wonder about his competence or integrity. The old saying goes "never attribute to malice what can be explained by stupidity" but it's hard to call him stupid.

Anonymous said...

Anonymous said "My guess is that, as an astute philosopher once said, it is deja vu all over again.
"

That was Yogi Bear, right? :)

SPARKS said...

“- "Dancing in the aisles" - well, OK that was Fuddo talking out of his rearmost mouth.”

Check that. It was Charlie D., AMD pimp extraordinaire. He has been giving (and receiving) AMD hand jobs since 2006. He has enough technical knowledge to be dangerous.

SPARKS

SPARKS said...

"32nm?!?!? I think you mean 22nm!"

Hmmm, does that mean we are looking at EUV and, perhaps, tri-gate?

http://www.intel.cc/technology/silicon/tri-gate-demonstrated.htm?iid=tech_silicon_pipeline+rhc_trigatedemo

SPARKS

Anonymous said...

HYC: "Well, he admitted he got it wrong..."

And so the problem is what? Scientia's general bias against Intel, and specifically his dislike for Anand, is really transparent. So apparently, based on one mistake in the article, we should completely dismiss the entire article or assume malice?

Put the article in context (as Anand did, if you cared to read it). It was done on a not completely functional board, had some issues, was clearly not done with Intel's blessing, and was meant to be an early snapshot, not a detailed assessment of what the chip could actually do. Are you really going to sit there and say that, based on the use of an incorrect Penryn #, he made everything up? To what end?

I would question his integrity (as I do Scientia's) if he didn't own up to the mistake. How many mistakes has Scientia made in his blogs? How many has he acknowledged? (We'll never know, as he usually just refuses to post comments which illustrate his mistakes.) Based on your folksy saying, I guess you assume Scientia is either incompetent or has malice in his heart as well?

Again, Anand corrected the #; what's the big deal? As always, if you don't like the message, attack the messenger. So in your view the entire data set he put up there is either mistakenly measured or blatantly cooked?

Orthogonal said...

Hyc and Anonymous: To clarify, Scientia has issues with Anand's "corrected" numbers. Originally, Anand had a typo that showed Nehalem beating Yorkfield by 25% in the single-threaded Cinebench, which was shortly thereafter corrected to show a 2.6% gain.

Scientia is claiming that the "corrected" numbers are in fact intentionally lowered to put Nehalem in a better light, and he tried to prove that by showing results from a February review that had higher Q9450 numbers.

It has now been shown that the discrepancy in the benchmark scores is due to the different OSes used in the 2 test setups.

Anonymous said...

Scientia seems to be really grasping at straws for reasons why the recent Nehalem numbers are somehow fake, misleading, or incorrect. For him and the others: wake up, the big swinging dick is full on and there is no denying him!

Let’s reflect a bit! Why would Intel knowingly allow a widely read technology site to publish a misleading performance evaluation? Today INTEL is the undisputed performance leader across all segments except the very highest-end, bandwidth-intensive one. Intel rules the roost. It would undermine all the hard-earned credibility they’ve built over the past few years to allow the earliest review to be incorrect. Nothing would cause more confusion or hurt Intel’s reputation more than the current assessment eventually being proven outright false. If Nehalem were a dud, no amount of misinformation, even early lies, would change reality. There will be too many people, sites, and organizations scrutinizing it when it is released officially. Better to hide it until the last moment like AMD is doing; when you’ve got nothing, don’t show your hand till you have to. That is AMD these days: all talk and no show, as they are limp and no amount of Viagra will help them!

My assessment is that the INTEL team implemented the IMC at the right time. They milked the microarchitecture and process for all they were worth between the initial Conroe and the incremental improvement of Penryn. Now they introduce the IMC plus a few more incremental tricks and show another huge improvement on their second Tock. INTEL’s strategy of waiting on the IMC until the end was brilliant: it forced the designers to milk improvements everywhere, and then they bring in the easy, guaranteed boost from the IMC at the end!

Going forward there will be few additional tricks for either INTEL or AMD. The playing field at design/architecture is pretty much level now, and the only things that will enable leadership are crispness of execution, small tradeoffs and stepping cadence. What will make INTEL pull ahead of AMD is the technology the design is running on. I don’t expect one design team to materially produce more than a 2-8% best-case performance advantage based on architecture alone. On the process side, INTEL will be able to leverage a greater-than-one-year lead to give its design team 2x density for a huge transistor and cost advantage. Then you have INTEL’s superior transistor performance: you give them not only double the transistors but a 30% performance advantage. Any questions about who is going to win?

NO design team can overcome this advantage. A correctly engineered Nehalem compared to the same Shanghai will result in a 10-20% advantage for Nehalem. Nothing to debate: Shanghai has no chance unless INTEL’s Nehalem team totally bungled their architecture and execution versus what the Shanghai team chose to do. The Shanghai team also started out knowing they were going to be a dollar short (performance) and a day late (one year later), and is probably taking bigger risks with less design margin. The result will be a buggy stepping with many misses, resulting in extra unforeseen steppings. Yes, you read it here: Shanghai, like Barcelona, will be LATE; and if it ain’t LATE, expect bugs to surface and/or it to be nothing more than the bastard son of a Phenom.

Tick Tock Tick Tock, no need for any more clock as AMD is gone. AMD’s only hope was a few bucks from the lawsuit, and that is getting pushed back so far that AMD is going to be nothing but a memory by the time it goes to trial.

Anonymous said...

Thanks for the clarification Orthogonal - though at this point it really doesn't matter 5%, 10%, 20%... what I took from this:
1) it appears to be better than Penryn at a given clock.
2) it doesn't look like Intel will be playing the "we're focusing on efficiency card" (read we can't get the clocks up)
3) the architecture doesn't appear to be a flaming car wreck
4) the addition of the IMC does not appear it will blow the TDP's out of the water (as some have insinuated in the past)
5) There are still the typical issues which need to be worked out prior to launch (i.e. there won't be any early surprise launch).

What bothers me is the way people try to characterize reviewers who put any data out which they don't like. You can criticize the conclusions or recommendations but to insinuate incompetence (scientia) and/or intentional malice (hyc) at this stage of testing is utterly ridiculous.

I'm sure Anand was thinking "how can I depress the scores to make Intel look better" (while also thinking nobody will find out when more reviews eventually come out) - because that obviously has huge benefits and is in the best interest of his site?!?!

I'm going to go out on a limb and assume a more thorough review will be done at (or just prior to) launch, and this won't be the definitive review. As this is an ES sample anyway, I don't know why people are so concerned about accuracy at this point (I'm looking for the order of magnitude, which will help me decide between getting a Penryn quad right now or waiting another year for a Nehalem quad). This tells me to at least hold off until more data comes out.

hyc said...

The problem is that he didn't triple check those numbers before publishing them initially. The problem is that he published the very first public preview of the Nehalem, so everything he said is going to color the world's perceptions. It should be obvious that being the first to publish gives him tremendous influence.

The problem is that if it weren't for "rabid AMD fanboys" questioning the validity of the numbers, this mistaken perception would perpetuate until god knows when.

There's nothing to like or dislike about a set of numbers. But if they're clearly wrong, and the person publishing them ought to know better, there's plenty to dislike about the author. Carelessness deserves to be criticized.

Anonymous said...

"so everything he said is going to color the world's perceptions."

Oh give me a break hyc - you really think this review has colored 'the world's perception'? You are sincerely going to sit here and say you actually believe that? A bit dramatic don't you think? Perhaps AMD should sue Anand?

this mistaken perception would perpetuate until god knows when.

Ummm.. probably until another set of benchmarks were done (whether it be at Anand on a later sample or another site)? Oh the horror! Could you be any more dramatic and over the top? Could you imagine the nightmare of one error on a benchmark going that long? Thank goodness for the rabid fanboys, now I can sleep at night, knowing THE WORLD HAS NOT BEEN DUPED BY THE DIABOLICAL ANAND!

Carelessness deserves to be criticized.

I agree 100% on this, but there is a huge difference between criticism of sloppiness/carelessness vs accusations of intentionally cooking the #'s (which is what you insinuated in your earlier comment).

I don't understand the seeming fervor by some on this one - it's one damn benchmark, clearly on an early engineering sample in a clearly non-optimal and non-fully functional setup. To attempt to parse and distill the #'s into an actual percent improvement at this stage is absurd (hence you see wide ranges given). It's like there is a compelling need to discredit any positive indication about Nehalem. If you recall there was the same outrage about the early Conroe benchmarks - the #'s may not have been exact, but they gave a good general indication of what the architecture could do.

If you, or for that matter anyone reading a site like Anand (Joe Public is not reading this site), cannot intelligently understand the qualifications that he put in the article... well then you/they deserve to be misled/tricked/snookered or whatever else you think Anand was trying to do.

Anyone with half a brain could see the preliminary nature of the numbers and would not try to draw detailed conclusions (that goes for both the Intel and AMD fanboys).

Thankfully I can sleep at night knowing the world has not been misled by the mighty Anand! I owe a debt of gratitude to all those who corrected the grave injustice that he was perpetrating on the world.

Anonymous said...

hyc said...
"The problem is that he didn't triple check those numbers before publishing them initially. The problem is that he published the very first public preview of the Nehalem, so everything he said is going to color the world's perceptions. It should be obvious that being the first to publish gives him tremendous influence."

Probably Anand rushed to be first to publish a review - that's always a coup for tech enthusiast websites. It gets lots of eyeballs and hence gets those advertising dollars rolling in.

Personally I'll wait for the more thorough reviews to come out before making judgment. But from what I've read by those-in-the-know (Intel engineers) posting on other websites (XTreme CPUs for one), I suspect that the thorough benchies are gonna be at the high range of estimates, and Sci is gonna look like an uninformed moron. Again.

InTheKnow said...

Right or wrong, good or bad, the EeePC seems to have really started the ball rolling in the subnotebook market. Unlike what we've seen with previous attempts at this form factor, there seems to be a lot more real interest from OEMs and consumers around these devices.

I've said in the past that it looked like AMD was going to miss the boat on these devices. Now it seems that MarketWatch.com shares that opinion. By the time Bobcat arrives it will be too little, too late.

InTheKnow said...

On a related note, I have seen a statement from Intel somewhere that they believe they will be able to match ARM in performance and power in the 32nm node. Wish I could find the link, but I'm having trouble tracking it down again.

In any case, Intel's argument seems to be that if you are an OEM and have a choice between multiple software solutions for an ARM device or a single software solution for an x86 device the choice should be a no brainer. You take the solution that doesn't require any additional development (x86).

Assuming that Intel hits its target and matches ARM for performance and power, is there a reason to choose ARM over x86? It seems to me performance and power are the only real advantages that ARM has over x86. Or am I just drinking the Kool-Aid here?

Anonymous said...

"Or am I just drinking the Kool-Aid here?"

Perhaps sipping it a bit - power is still a significant issue for the 'mainstream' MID segment.

I'm not sure 32nm will bring about that much of an impact (especially not as much as 45nm for Intel did). Power will be somewhat improved, but I think Intel will need to continue to work on the design side of things to get the power competitive.

For the ultra-low-end desktop/netbook/set-top/etc market I think x86 will eventually dominate. Here power is a concern, but think of the power differences between the CPUs and then consider power consumption for displays, hard drives, peripherals, etc., and the CPU power differences will be much less important than in, say, a smartphone.

Intel appears to have timed this market perfectly - things are still in the development stage, but when demand really picks up, Intel should be on their 2nd-gen product, which will have allowed them to iterate the design once and address any major shortcomings. The main problem I see for AMD is that their first product will need to be very good out of the box, as they will not have a similar time window to iterate the design.

I think Ed at Overclockers had it right when he said in the future there'll be a split CPU market - one ultra-low-end 'good enough' market, and one high-end market. It will be difficult to have something in the middle - if you try to raise the performance of the low end (where things will likely be ultra-competitive, with cost being the key market driver), it'll be hard to keep up on the cost side. You can always drop the high end down, but that runs the risk of eating away at the high-end market.

Anonymous said...

" Carelessness deserves to be criticized."

To a point, yes ... however, if in good faith the person publishes the appropriate retraction and corrects the mistake ... then that should be admired in and of itself.

Of all the reviewers, Anand is one of the most reproducible I have found... Lost Circuits is the least.

To do what people are doing and trying to destroy his credibility is simply wrong ... Anand has published glowing reviews of both CPU makers when the product at the time warranted it. He is also critical of the companies when the product or behavior warrants it. In the end, what Anand says should be respected, even if it is something that one does not agree with... data is data; when it is wrong it should be explained and corrected, which it was... it is when bogus data remains posted (i.e. Fudzilla with the bogus OC screen shots) that credibility is destroyed and harsh criticism is warranted.

Jack

Roborat, Ph.D said...

looks like Scientia has banned everyone on his latest blog.

Scientia said: It remains to be seen if Nehalem fairs better in benchmarking than Penryn. Penryn benefited from code that used lots of cache and was predictable enough for prefetching to work...

Don't forget that not only were all the review sites Intel-paid liars, the benchmarks are also designed to favour Intel CPUs. It's a conspiracy!

Anonymous said...

Sci will probably think it is also an Intel conspiracy that no one is posting over at his place anymore.

Tonus said...

Isn't it normal for at least some software to favor Intel CPUs? I figure that in many industries, software is just as competitive a field as hardware, and if one CPU holds 75-80% market share (and possibly more, in your particular segment) you're going to make sure you optimize your code for that CPU.

Newtek doesn't optimize Lightwave 3D for multiple processors because it's good for the CPU business. Newtek does this because it's good for Newtek, since Lightwave has to compete with other 3D modeling and rendering programs, and speed is very important in that field. I don't think companies optimize software for Intel processors because it's good for Intel, but because it's good for the software companies.

It's a tough position to be in if you're AMD, but those are the realities faced by software developers, IMO.

SPARKS said...

“looks like Scientia has banned everyone on his latest blog.”

DOC-

He hasn’t banned me,---yet.
Nor, will I give him the opportunity.
He knows I am no INTC paid liar.

I absolutely refuse to be baited into an argument, as you saw with my last series of posts concerning the performance of QX9770 with that ridiculous Prime95, half hour challenge.

In fact, I was so disappointed in his response to the challenge he suggested, I will never again waste my time posting anything there again.

He couldn’t bring himself to answer my challenge:


“I’m not bragging Sci, it’s just a simple fact that this is one terrific piece of hardware.

Will you admit that?

Finally, does THIS tell you something?”


Frankly DOC, the Nehalem/Anand benchmark issue is a steaming pile of horseshit. Perhaps we can get a general idea on the performance, but to draw conclusions based on substandard immature hardware is complete nonsense.

While the Anand article was a rushed preview to make headlines and sell soap, in contrast, I gave Sci real world benchmarks on 100% retail purchased products. His last response was far less than objective.

As with anything in this life, you put your money where your mouth is, I did. It’s the difference between a wimp who talks tough, talks about the hot cars and hot women, and the guy who’s out there, living on the edge.

Living on the edge ain’t easy, and it’s gonna cost ya. There’s always some asshole that’s gonna put you down, one way or another.

I’ll stay here in the “locker room” with you.

Sincerely,

SPARKS

SPARKS said...

TONUS-

I'll tell you what. Give me a benchmark that is the WORST for INTC hardware. I'll run it for ya and still blow anything out there out of the water, with the exception of a Skulltrail setup or Sandia National Laboratory.

SPARKS

Tonus said...

Oh, I don't follow benchmarking much anymore. My comment is directed at the squabbling over benchmarks "favoring" Intel processors and the like. It's just practical. If I write software, and I am competing against companies that write similar software, I have to make sure my software runs fast and optimized. If the people I write software for are running processors from Intel ~80% of the time, I will optimize for that CPU first, and any other CPUs later... if I can justify the expense.

It's not an apology for Intel or AMD, it's just the reality of the situation. I expect that some software will favor Intel CPUs, and you can't discount benchmarks for that reason, because those will reflect how things run in real life.

This is even separate (though not entirely) from another complaint I see, about how single-threaded benchmarks show a lower performance gap. I don't think this should be surprising, either. The entry-level desktop these days is running a dual-core CPU, isn't it? Software that requires some muscle (video, graphics, animation, 3D games) is being programmed to support multiple cores, or already does.

Sure, Joe Average won't see an improvement when he runs Firefox and his email client. But who said that Intel/AMD are aiming Nehalem/Shanghai at Joe Average?

Unknown said...

Samsung obtains world's first 450mm tool: http://www.eetimes.com/news/semi/showArticle.jhtml;jsessionid=LSWSTP1Q2FO02QSNDLRSKH0CJUNN2JVN?articleID=208402622

I'm certainly no expert on process technologies, but I figured this might be of interest to Guru, JumpingJack and all the others that have a seemingly limitless knowledge on the subject. :-)

-GIANT

InTheKnow said...

From Giant's article...

the industry has quoted a 30 percent cost improvement with a move to the next wafer size

and...

Economics is the big factor. ''It might be possible to obtain the 30 percent improvement over time as equipment throughput improve, but initially, this level of cost improvement is unlikely,'' he said.

From the first quote you can see that the 30% we are talking about is a cost reduction.

The second quote is where things go off the deep end. To get a 30% reduction in cost, you not only have to get 30% more wafers out, but you have to use 30% less chemicals as well.

You might be able to reduce chemical costs on some tool sets, but not most of them. And the idea that there is another 30% throughput to be squeezed out of the tools is laughable. Almost all tools run at better than 70% availability. So even if you never had a tool break or need preventative maintenance, you would still not squeeze another 30% out of the tools.

Since you won't get the 30% from just trying to eliminate downtime, you have to get wafers through the tools faster. In the cases where this might be possible, you would need to do extensive modifications to the tools. Who is going to pay for this R&D? And how much will the modifications to the existing tools cost?

I agree with the anonymous poster when he stated in the past that the equipment OEMs' position is not going to hold up. There will be 450mm at some point. I think that all this is just the wrangling over who will bear the R&D costs and how much the semi manufacturers will pay for the tools.

Anonymous said...

"In the second quote is where things go off the deep end. To get a 30% reduction in cost, you not only have to get 30% more wafers out, but you have to use 30% less chemicals as well.

You might be able to reduce chemical costs on some tool sets, but not most of them."

Please remember chem costs, while substantial, remain a relatively small amount of overall wafer cost. So to get a 30% reduction you do not necessarily need a 30% chemical cost reduction.

Also keep in mind these reductions are generally quoted on a normalized area basis (per cm2) - the cost of a 450mm processed wafer will obviously increase; it's just a question of keeping it significantly under the area scaling to make it economical. You don't need to get 30% more wafers out, as you are getting 2x-2.25x the area on every wafer. So on a per-wafer basis, to get a 30% cost reduction, you need to hold things to about a 50%-60% cost increase (assuming of course you work out any yield problems).
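
A quick back-of-the-envelope sketch of that arithmetic (Python, with normalized made-up costs - purely illustrative, not real fab numbers):

    import math

    # Per-area cost scaling from 300mm to 450mm wafers, ignoring edge exclusion.
    def per_area_cost(cost_per_wafer, diameter_mm):
        return cost_per_wafer / (math.pi * (diameter_mm / 2) ** 2)

    cost_300 = 1.0                  # normalized cost of a processed 300mm wafer
    area_ratio = (450 / 300) ** 2   # 2.25x the area per wafer

    # For a 30% per-area cost reduction, a 450mm wafer can cost at most:
    max_cost_450 = 0.7 * area_ratio * cost_300
    print(max_cost_450)             # ~1.575, i.e. roughly a 57% per-wafer increase
    print(per_area_cost(max_cost_450, 450) / per_area_cost(cost_300, 300))  # ~0.7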

This goes for equipment costs, chemical cost, factory cost and whatever other costs. Unless you end up doubling the equipment cost/price (which would be hard for even the equipment OEMs to justify) it will likely be hard to deny there is a positive ROI. The OEMs will likely cook up some story on R&D costs, but these are generally overstated - if you look at the OEM balance sheets you can see the R&D vs revenue, and you will also fail to see the catastrophic impact that supposedly occurred with 300mm.

InTheKnow said...

Please remember chem costs, while substantial, remain a relatively small amount of overall wafer cost. So to get a 30% reduction you do not necessarily need a 30% chemical cost reduction.

I'm not sure I'm following you here.

Let's say for the sake of argument that chem costs are 10% of the total cost. If I am going to reduce overall costs by 30%, then I would need to reduce chem costs by 30% as well. That would equate to a 3% overall reduction in cost coming from chem cost reductions, but it is still a 30% reduction in chem costs.

Is the fact that only 3% (in this example) of the overall cost reduction would come from chems and not 30% of the overall cost what you were pointing out?

Anonymous said...

"Let's say for the sake of argument that chem costs are 10% of the total cost. If I am going to reduce overall costs by 30%, then I would need to reduce chem costs by 30% as well. That would equate to a 3% overall reduction in cost coming from chem cost reductions, but it is still a 30% reduction in chem costs."

30% cost reduction doesn't mean 30% across the board - it will come out to different amounts in different areas. While all areas will obviously be worked on, I'm saying don't assume you need a 30% reduction in chems to achieve an overall 30% reduction.

As an example, litho equipment scales poorly from a cost perspective: with scanners you basically halve the throughput, as you have twice as many exposures to do. And in other areas you will get far more than 30% cost improvements.

Also, as you change wafer sizes, the ratio of capital equipment cost to total cost changes (it goes up). So even if chems are hypothetically 10% of the cost on 300mm, on 450mm that share will drop on a relative basis (somewhat 'artificially').

The bulk of the cost savings will come from capital equipment expenditures.

SPARKS said...

Alright fellas, of course, as usual, you have us lowly minions at a disadvantage. What you guys take for granted as one of Julia Child's recipes has us out here thinking about Mr. Spock and Scotty, down in engineering, working on the Di-lithium Crystals.

Chemical costs, for example - what are we talking here? Do you guys use this stuff by the gallon, per wafer, at each process step? I've seen demos where a little spigot points itself down at the center of a spinning wafer, squirts enough to ensure coverage, completes the process after some unknown length of time, and then moves on. Hell, it looks like I use more liquid when I splash my Johnny Black with a little soda.

Or is there another process step somewhere that washes these dinner plates like my Kenmore dishwasher, using an equivalent amount of solution that's constantly refreshed?
Finally, without divulging the mysterious black art of fabrication: typically, how much liquid, in quarts, gallons, whatever, does it take to crank out one wafer?

What does this stuff cost per gallon? Take the most esoteric, proprietary, noxious concoction as an example.

Who makes this stuff? If it’s an outside vendor, and if it’s highly sensitive intellectual property, how do they prevent the secret from getting to the competition? Or, is this stuff mixed in house?

SPARKS

Tonus said...

Ed at Overclockers said that there has been no sign of 45nm AMD CPUs at Computex, which could indicate that AMD won't have them ready in time for a Q4 rollout.

Anonymous said...

Tonus said...
Ed at Overclockers said that there has been no sign of 45nm AMD CPUs at Computex, which could indicate that AMD won't have them ready in time for a Q4 rollout.

IIRC AMD "demoed" Barcie in June of last year, under extensive NDAs and not letting anybody within 10 feet of the machines. Then they released a couple in September, and then announced the Phenom the day before Intel's Penryn announcement. However, the non-buggy versions weren't available until March of this year.

If I had to bet on AMD's execution to the 45nm node transition, I'd guess the above scenario would either repeat itself or get worse.

pointer said...

Blogger Orthogonal said...

Hyc and Anonymous: To clarify, Scientia has issues with Anand's "corrected" numbers. Originally, Anand had a typo in there that showed Nehalem beating Yorkfield by 25% in the single-threaded Cinebench, which was shortly thereafter corrected to show a 2.6% gain.


It is not a typo; Anand claimed it was a mistake in the Excel sheet (too lazy to look up exactly what he said now).

Anyway, a 3% gain is just too low. Thus, I am assuming he didn't have turbo mode on; when it is on, the core runs at xx% higher clock than the rated clock. If the said benchmark scales linearly with CPU clock, we should see about an xx% improvement, assuming no other IPC improvement.
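
A minimal sketch of that expectation (the 5% turbo uplift below is a placeholder assumption - the actual turbo bin isn't given above):

    # If a benchmark scales linearly with clock, the expected single-threaded
    # gain from turbo alone is just the clock ratio.
    rated = 2.66                   # GHz, hypothetical rated clock
    turbo = rated * 1.05           # assumed ~5% turbo bump, for illustration only
    ipc_gain = 0.0                 # assume no single-threaded IPC change

    expected_gain = (turbo / rated) * (1 + ipc_gain) - 1
    print(f"{expected_gain:.1%}")  # 5.0% - a measured ~3% suggests turbo was off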

pointer said...


amdzone:
Postby scientia on Wed Jun 11, 2008 12:19 am
The fact is that up to now Intel fans have insisted that single threaded benchmarks are the most important because not surprisingly these are the benchmarks where C2D does best. This is another reason why Intel fans prefer gaming benchmarks because the great majority are single threaded. So, what is going to happen when Nehalem shows up worse than Penryn in a lot of benchmarks? Will Intel fans:

1.) proclaim that single threaded benchmarks remain most important and Nehalem is a dog?
2.) suddenly decide that single threaded benchmarks are not as important as multi-threaded afterall?
3.) try very, very hard to avoid comparisons between Penryn and Nehalem preferring to focus on Nehalem versus Shanghai?


and in his blog:

Hoho asked:
"Quoting yourself, "why should anyone test single-threaded performance on quadcore processor?".

He replied
My feelings have not changed: I still say that it is nonsense to judge a quad core processor by single threaded performance. However you are making my point quite well that the very same Intel enthusiasts who vehemently insisted that single threaded benchmarks were more important (when they scored higher with C2D) will now flipflop and insist that multi-threaded is far more important (when they score higher with Nehalem).


People are actually questioning his flip-flopping, since Scientia has said multiple times that quad cores shouldn't be tested with single-threaded apps. And he has the gall to ask for single-threaded benchmarks for NHM while saying his feelings have not changed, AND accusing others of flip-flopping instead! :)

I'm not sure about others. For me, single-threaded, lightly threaded and heavily threaded application performance are equally important for QC usage. In the real world you will see a mix of these applications, with more single-threaded nowadays, moving towards more multithreaded in the coming years (still won't be 100%).

When one buys a QC, will he/she not use any single-threaded apps? Don't be silly. Will he/she not use multithreaded apps? Don't be silly again.

AND NHM's turbo mode actually helps in less-threaded environments, such that you won't see as big a waste running single-threaded apps on it, as they will run at a higher clock than more heavily threaded workloads.

Anonymous said...

"People are actually questioning his flip-floping,"

It's obvious... if there were only single-threaded benchmarks, he would claim those were useless for quad core or don't show scaling. He's going to spin the argument (in a completely unbiased fashion, of course) no matter what is done. If it's good... just an engineering sample (or 'cherry-picked')... if it's bad... bad architecture/Intel struggling.

Let's face it... this is a guy who wrote a blog about K10 being a good start! (nearly a year later, how's that looking?), about the AMD losses being somewhat misleading, about Intel's Conroe chips being 'morning mist in the sun', about RDR being not that big a deal and overblown in importance (until AMD apparently started talking it up, at which point it suddenly became the MAIN reason for Intel's lead)... I could go on and on...

He is simply a spin doctor who thinks he is unbiased - I have no problem with the spin; everyone does it and has some degree of bias. And at the time, many of his arguments, however unlikely and ungrounded in reality they may be (see 65nm 3.0GHz K10), at least had a chance of being true... The issue is he passes himself off as some bastion of objectivity and knowledge (Asset Smart anyone?), and that is where the problem lies.

So why bother arguing? He will either not post your comment, deny it by asking for data (which many of his original arguments do not have), call it irrelevant, call you a fanboy, or claim you are personally attacking him.

This whole Anand thing is ridiculous - I for one am not putting too much stock into the absolute #'s, but let's be realistic here, it would be crazy for Anand to intentionally cook the books like some have fantasized. It is just as ridiculous to expect a complete set of benchmarking and analysis on what is obviously a very early and less than fully functional engineering sample (as well as boards and probably chipsets).

So instead of taking the data with a grain of salt and looking for a general early trend for relative performance, the people who have some sort of personal and emotional investment in putting Nehalem and Intel in the most negative light, decide attacking the author is the best course of action. Of course this house of cards comes apart when benchmarks start to come out in force in 3-6 months, and then we will have to listen to the vast Intel benchmark conspiracy theory (again), just as it happened with Conroe. In the meantime, might as well try to drag Anand through the mud...

Core 2 timeline:
- Intel benchmarks... obviously can't be believed.
- Some select sites benchmark engineering samples... paid Intel pumpers, or they are using cherry-picked chips with only favorable benchmarks.
- Increasing # of sites come out... well, these are just good benchmark chips.
- Lots of sites come out... well, they probably won't be able to ramp them quickly, and by the time they do, AMD's latest and greatest will be out.

Expect a similar 4 stages of denial on Nehalem (if it is good). Again assuming it is good, I expect to see a quick jump to the 'well, it will take a while to ramp' argument (as if Penryn will struggle to hold its own in the interim). Intel will focus on the server space and, mark my words, all you will hear from the peanut gallery is how slowly Intel is ramping the desktop space.

Hornet331 said...

http://www.xtremesystems.org/forums/showthread.php?t=190762

Quite interesting result for Nehalem in SuperPi (~17.2s).

A similarly clocked E6600 scores ~21s and Wolfdale ~19s.

Tonus said...

It seems to me that differences in architectures and the slow but steady move towards multiprocessor support (well, not so slow in some industries!) mean that the old trick of cherry-picking benchmarks is even more effective now.

People will always need to judge performance based on their needs. I'm curious as to why single CPU performance is that important these days, but that's because my own usage patterns make a multicore processor more useful to me.

The early benchmarks from Anand are bound to be suspect, after all he admitted that it was being run on a motherboard with problems. But they do generate some excitement because it seems as if the shipping product will provide a pretty good performance boost. If that turns out not to be the case, it's disappointing. But until then I'll remain hopeful that there will be a powerful new option available by year's end or early next year.

Anonymous said...

"AMD to scrap new Kuma microprocessor"

http://www.theinquirer.net/gb/inquirer/news/2008/06/11/amd-scrap-kuma-microprocessor

(Yes it is the INQ, but it is sourced from HKPEC)

I mentioned it before... I suspected the delay was due to the fact that the dual core K10 would not be much better than the dual core K8 and/or they couldn't get the clocks up in a meaningful way.

This is a sound business decision for 65nm (why make a K10 if you can make a K8 just as cheap, if they aren't that different?) - it'll be interesting to see how it is spun. I do question how long it took to make this decision - have they been actively spending money trying to get this out the door, or have they just stuck it to their customers again by sitting on the decision until there was a good time from a PR perspective to leak the info?

What will probably happen now is you will see a 45nm dual-core K10, but no 45nm dual-core K8... in this way AMD can avoid any actual direct architecture comparisons and sweep this whole thing under the rug. I wonder from an architectural perspective how much better K10 is on a "core for core", "clock for clock" basis... as I'm not a server person I really don't care about HT3.0 or MP scaling.

Lastly, perhaps this fits in with Kuma being largely K8-based?

InTheKnow said...

Chemical costs, for example, what are we talking here? Do you guys use this stuff by the gallon, per wafer, at each process step?

There are 2 problems with this question.

First is the number of chemicals in use. There are bulk process gases, resists, acids, bases, slurries, etc. Off the top of my head, I can't think of any tool other than metro (metrology) tools that doesn't use some sort of chemical. I'm sure someone here can think of an exception I'm missing, though.

And the quantities used are as variable as the number of chemicals.

Second is the whole proprietary issue. What you are asking for is bordering on, if not actually in, trade secret territory.

In the broadest sense, volumes are measured in liters down to milliliters depending on the tool.

SPARKS said...

In The Know-

Whew! Jesus, that was a raw nerve I hit. No wonder it took someone 3 days to answer. Obviously, from what I’m feeling here, this must be the dark/mysterious art of processing. Even Attila The Anon was conspicuously silent, and that’s saying something.

Therefore, from what I gather, it would seem rather impossible to determine exactly how much more in "chemical costs" the 300mm to 450mm transition would incur. Further, I'll take it this stuff ain't cheap, and from what I'm reading these concoctions are downright deadly.

Here’s the short list.

Arsenic
Boron
Antimony
Phosphorus

Arsine
Phosphine
Silane

Hydrogen peroxide
Nitric Acid
Sulfuric Acid
Hydrofluoric acid

God knows what the hell else.

Three things occurred to me.

One, since this is such a sensitive area, they are mixed in house.

Two, I hope you guys get hazard pay for working with this stuff.

Three, I’m thinking of the poor bastard who’s rushing to get a malfunctioning tool up and running, while trying to save himself a trip to the hospital, or worse, in the process.

Thanks for the reply; I'll take your 20% to 30% increase estimate and call it a day.

SPARKS.

SPARKS said...

"or have they just stuck it to their customers again by sitting on the decision until there was a good time from a PR perspective to leak the info?"

As if it didn't matter, their stock took a big hit yesterday. They are back to the sixes.

SPARKS

SPARKS said...

Oh, Joy of JOY'S.........

http://www.siggraph.org/s2008/attendees/program/item/?type=papers&id=34%20%20%20


and................


http://www.theinquirer.net/gb/inquirer/news/2008/06/12/amd-teams-havok-physics



Me thinks NVDA is getting the squeeze play from INTC.

SPARKS

Anonymous said...

From THG:

http://www.tomshardware.com/news/Intel-Nehalem-Kuma,5642.html

Contrary to what the FUDsters over at AMDZone are saying, Nehalem is on time and will be widely available, with mainstream, performance and extreme editions available in Q4 of this year.

Same article mentions that Kuma duals and Agena duals & quads are canceled, due to performance/cost price points for Kuma and due to TDP concerns for Agena - 140 watts at 2.66 GHz. AMD will wait for their 45nm process node, maybe in Q4 of this year. Since nobody yet has actually seen any 45nm CPUs from AMD, I'm betting 1H 09 before AMD can trot out underclocked and overheated EBO's (Easy-Bake Ovens :).

Anonymous said...

"Three, I’m thinking of the poor bastard who’s rushing to get a malfunctioning tool up and running, while trying to save himself a trip to the hospital, or worse, in the process."

The safety track record in most fabs is phenomenal; safety standards generally require at least 2 points of failure for there to be any chance of exposure. Breathing air is often used for anything which has even a remote chance of exposure; and if there is concern, the air is measured prior to anyone working.

"Thanks for the reply; I’ll take your increase of 20% to 30% estimation, and call it a day."

On a per-area (or per-die) basis 450mm will be CHEAPER in terms of chemical consumption; this is the key metric, not per wafer - I'm fairly sure ITK was quoting per-wafer estimates. Again, with over 2x the area, you simply need to hold the increase to <2x for a positive ROI in this area.

The reason you got no answer is that there is no concise answer to your question - it's not really a mysterious art. The issue is all tools use different amounts of chems and gases, and costs are highly variable depending on the purities needed and obviously the rarity of the material. While ultimately you need actual data and experiments to validate the increase in chem usage from 300mm to 450mm (on a per-wafer basis!), it can be (and is) modeled.

Anonymous said...

AMD's Kuma still "on track"

http://www.hexus.net/content/item.php?item=13775

Ok, there's spinning and then there's spinning. The ORIGINAL K10 dual core schedule was Q4'07. This was "updated" to Q2'08....

So now apparently it is on track for H2'08? Which by the way means Q4'08 (since Q3 starts in a bit over 2 weeks - you don't think they would say Q3'08 if it was gonna happen in Q3?)

So I guess 1 year behind the original means "on track for launch"... why doesn't anyone call AMD on this ridiculousness... they just keep ripping up the schedules and then when the product finally makes the re-re-re-re-revised schedule, they say look... on track!

http://images.dailytech.com/nimage/3032_large_AMD_Nov2006_Roadmap.png

...notice... Kuma... in H2'07! Even back then they were playing fast and loose with sloppy half-year schedules to give themselves as much wiggle room as possible... however, when you are over 1 year off target, even half-year increments look bad! Also note the K10 4x4 :)

SPARKS said...

“it's not really a mysterious art.”

Maybe not to you it’s not. Previously we basically discussed application and removal methodologies, basic voltage parameters, leakage, tools, etc. Now, we have chemicals and gases with their various compositions, concentrations, exposure time, and the new addition, purity; sure, piece of cake.

Yeah, I realize these things are all sorted out during R&D. But conceptually grasping the entire process in minute detail on a tool-by-tool basis, while successfully keeping quality standards high in mass production, is a testament to your abilities, as opposed to the bullshit artists who haven't a fucking clue.

Please pardon the questions; they're ignorant ones, not stupid ones. Besides, the deeper you dig into this stuff, the sooner you ultimately reach a site that wants you to pay for the privilege.

SPARKS

Anonymous said...

Dreaming big for the big dicks.

Oh why oh why do they want 450mm?

It's the area, my boys: A = Pi*r^2. Going from 300mm to 450mm more than doubles the yieldable area on each wafer. Intel, with the same number of fabs, could get greater than 4x the chips with a process shrink in combination with 450mm. That is a lot of cores and caches. I predict it'll be 8 cores with integrated memory controllers; damn, they'll even have a multi-core graphics engine on it too by the 22nm node when they go to 450mm.
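
As a quick sketch of that arithmetic (the 2x-per-shrink factor here is the usual rule of thumb, an assumption rather than a quoted spec):

    import math

    def wafer_area(diameter_mm):
        return math.pi * (diameter_mm / 2) ** 2

    area_ratio = wafer_area(450) / wafer_area(300)  # (450/300)^2 = 2.25
    shrink = 2.0   # rough doubling of dice per area for one full node shrink

    print(area_ratio)           # 2.25x the yieldable area per wafer
    print(area_ratio * shrink)  # ~4.5x the chips per wafer, combining both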

Will chemical usage double? NO way!
When you consider steps like wet etch, deposition, diffusion and such, given the incremental size of the tool, chemical usage is just a tad bit more. The actual amount depends on the specific step. Let's take CVD: in general only 10-20% of the reactants will be used, with the rest pumped away. Thus scaling up will not materially change the incoming chemical usage by as much as the productivity (productivity measured as silicon area / processing step-time).

Lithography is the one place where more tools or faster tools will be required, as with step-scan exposures you'll have a lot more steps, so in principle they will have 1/2 the throughput. That will be mighty expensive, as litho tools are the most expensive. I don't think there is a 2x increase in productivity to be had on these tools, as the stages move so fast already and they need to align to nanometers these days.

Going to 450mm will kill AMD, as INTEL with its huge factories can double the output with just a tad more investment. The issue is who will foot the bill to develop these tools. The tool guys probably have yet to recoup their development costs. Going to 450mm will mean double the output with 1/2 the factories and 1/2 the tools bought. INTEL, TSMC, and Samsung will have to pay for the development or we'll be at 300mm till the cows come home.

In the end this is a billion dollars per quarter of R&D across tools, silicon, and process. AMD and the IBM whore club can't afford it, as they don't have the volumes to develop both the silicon and do the size conversion. Only the big dicks are going to play.

45nm is a fine example. Only INTEL could bet the farm and spend the money to bring high-k to production. AMD/IBM and the rest had neither the courage nor the money nor the business behind their silicon R&D to commit the resources to develop it. After all, no one believed it could be done anyway. Once INTEL did it, they had to go too, and the gate-first approach is inferior, as the recent publications clearly show.

Tick tock tick tock AMD is dead

Anonymous said...

"But, conceptually grasping the entire process in minute detail on a tool by tool basis,"

No one really has that much understanding - there is simply too much detail for a single person... perhaps in the 'olden' days, but now things are way too complex. No offense here, but the stuff on this site barely scratches the surface of what engineers working on specific toolsets have to know and understand.

The purity thing, though, is key, and not really known by many... many of the process gases are "4 or 5 9's", meaning 99.99% to 99.999% purity. There is a delicate balance on this (especially in precursors used in deposition processes), as purifying those is a lot harder - you are talking about chemical compounds which are at times hard to get to ultra-high purity. The other dark side of purity is the tools themselves... run a high-purity chemical through a less pure pipe/chamber/wafer holder/precursor vessel/etc and all that work was for nothing! There is an amazing amount of work and attention to detail in this area.

And remember, there's no such thing as stupid questions; just stupid people who ask those questions! (I'm kidding.) Part of the problem is the questions seem (and are) very simple but do not have easy answers... the folks who answer here can BS (like some other sites may?) or try to give a high-level overview and point out the omissions or areas where things have been simplified (and may not be 100% technically correct).

"you ultimately reach a site that wants you to pay for the privilege."

And you shouldn't pay... the majority of those sites, while they have people with the technical background and academic understanding, lack the practical experience to assimilate the key points and simply do not have enough access to the trade secret info. Or they may not understand the commercial aspect to a technical decision or a risk/manufacturing aspect and come to incorrect conclusions (better does not always get implemented).

Tonus said...

Two questions here:

1- Do any of you feel that we will reach a tipping point before very long as regards the costs for CPU development? I see people talking about costs in the billions for developing tools, for testing and refining tools, for building fabs, etc. Do you think we'll reach a point where the costs start killing off companies or effecting large increases in CPU costs? Maybe it's not all as fragile as it seems from so far away.

2- Intel is going to develop GPUs to compete with ATI/NVIDIA. How much of an advantage is their knowledge of design and process, as well as their ownership of their own fabs? They are generally ahead of the GPU companies in terms of adoption of process shrinks and implementation of technologies (presumably because the GPU companies do not own their own fabs or do their own process development?). If current high-end GPUs are running on 55/65/80nm processes, what are the benefits (if any) of developing one on 45/32/22nm? Just heat and power?

Anonymous said...

Looks like Sciatica is spewing over on AMDZone once again:

http://www.amdzone.com/phpbb3/viewtopic.php?f=52&t=135197&st=0&sk=t&sd=a&start=75

"The final common cheat is to avoid testing the integrated graphics. Again, a lot of computer systems will be sold with integrated graphics and never upgraded but these tests are rare indeed at places like Anandtech. Using the first thing they do is pop in a faster video card so that they can avoid trying to explain why you should buy an Intel system with bad graphics. I noticed that a lot of X38 boards are still being sold and you should avoid these like the plague unless you plan to add a graphics card. X38 boards not only have poor graphics but are substandard for Vista."

Geez - ya wonder if Sci knows the difference between an enthusiast site and Wal-Mart? Next he'll wanna know why Car & Driver doesn't have a shootout between the Chevy Perspire - er, Aspire - and a riding mower.

Enumae is attempting to set things straight but he's getting perilously close to the "Intel Inside/Idiot Outside" ban-orama from Der Ghosse, whose full-time job apparently is to prevent the general public from noticing the tin god of tech has feet of clay.

Anonymous said...

Wow there are some real idiots over there who seem intent on misleading people. I love these two:

The delta between a Penryn quad core (4 logical) vs a Nehalem quad core (8 logical) is only 17.3%
Adding one more real core to the Penryn i.e. Quint core (5 logical) would basically equal the Nehalem using HT. What we end up having on a per core scalability basis boils down to:

1. The Nehalem core logic changes equate to about a 4% per core increase.
(MKruer)

You can't simply add the % gain like he is trying to do. Here are some simple #'s (made up to show the EGREGIOUS error in his logic)

           1 core   4 cores
Penryn        100       400
Nehalem       104       416
Gain           4%   **STILL 4%**

What a freakin idiot... He took the 17% gain on 4 cores and divided by 4!!!! Ummm... you can't do that mathematically!!! Or think of it this way... if I can get all 4 tires on my car going 4% faster, does my car go 16% faster? Or how about stroke rate in the cylinders?
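
In code form, with the same made-up scores as the table above - a uniform per-core gain stays the same percentage no matter how many cores you sum it across:

    # Made-up per-core scores; you can't divide a total % gain by core count.
    penryn_per_core, nehalem_per_core = 100, 104

    for cores in (1, 4):
        gain = (nehalem_per_core * cores) / (penryn_per_core * cores) - 1
        print(f"{cores} core(s): {gain:.0%}")   # prints 4% both times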

The scary thing is this apparently is one of the more knowledgeable posters over there!

Then there is this gem from our favorite (Dementia):
Nehalem's L1 cache is 33% slower than Penryn's. Yet when Brisbane's cache was slower people immediately insisted that it was broken. Does this mean that Nehalem's cache is broken?

This almost (I said almost) sounds like a plausible argument; however, Brisbane was a DUMB SHRINK of 90nm, while Nehalem is a new architecture (vs Penryn). If Nehalem were a dumb shrink... I would think they do indeed have issues. However, he is trying to look at a comparison between 2 architectures on the same node and draw a conclusion similar to one about the SAME ARCHITECTURE on TWO DIFFERENT NODES!

Either he knows better and is intentionally trying to spin things as negatively as possible for Nehalem or he doesn't get the difference between a technology node transition and an architecture transition. Either way it shows how much weight should be put into his "analysis".

Anonymous said...

Is there too much for one person to understand?

Actually NO, but no one in the analyst world has the capacity, let alone the experience, to appreciate the details of process development, equipment processing details, or the bigger strategic and business implications. If they had the capacity they would have a far better paying and more interesting job than being a silly analyst. If they have the capacity then they need to be lucky enough to get one of the few unique jobs that manage this cycle. But there are people whose job is to understand. That number of people is small, and they are at the big semiconductor companies. The ones at the design houses will NEVER understand unless they came from a job at the manufacturer.

When does CPU development end?

When the company can't make money any more. Many people have said this: the end of Moore's law will NOT be physical but economical. The cost has already killed a long list of companies; IBM, TI, Moto and AMD. Yes, AMD is dead; the minute they went to the consortium they died, as did IBM. Sure, they both pretend to be CPU manufacturers, but they are laggards now. They have neither the volume nor the courage nor the business model to make the bets, nor are they motivated to take the risks and make the effort to push the envelope. The likes of TSMC and the consortium simply look to ride the trend and turn out mediocre products on a mediocre process. The power of Moore is still there, but they aren't at the bleeding edge. TSMC's business isn't to offer the fastest, but just enough to get new business and grow using the power of Moore too. Those AMD cum lappers who are happy that the FTC and other governments are investigating INTEL, and who hope that billion dollar fines are coming, will be in for a rude surprise if INTEL does get the big penalty. For if it does happen it will take ALL innovation out of the CPU progression. INTEL makes huge bets and will only continue if it can gain the rewards of those bets. AMD has always competed and INTEL has NOT technically stopped them; they are just too small to invest. They lost that a few nodes back. What people don't understand is that this is the hardball of big business; those that don't like it don't play. AMD is a bunch of pussies. They want their cake but are not willing to invest to earn it, instead looking for handouts from governments and states. The barriers to play are huge. Neither AMD nor IBM, the last two pretenders, has enough volume or enough cash flow from just the CPU business to justify pushing the envelope; be it the process, like high-k/metal gates, or low-k for that matter, or 450mm.
INTEL continues to reap the benefit, so I expect them to continue to push unless the FTC and governments shackle them. With every generation they get more transistors and they get the world to upgrade. That is a pretty good model, as they will do 300 million CPUs for 30 billion or so in revenue in the coming years. But if you don't have 100 million units and 10 billion profit you can't play. Nothing anticompetitive here; you are just a small dick and can't make it in this business, AMD. As to how long, it will go on for as long as we need more powerful CPUs. When we run out of atoms laterally or vertically we'll start stacking chips. Rest assured, if you can think of needing more bits or more CPU cycles, CPU development and progress will continue.


Does being an integrated IDM give a fundamental advantage in producing GPUs?

Yes - today GPUs are limited by power and area. Being on a superior node with the highest performance gives HUGE advantages. nVidia knows their days are limited only by INTEL's capability to execute on both the silicon and software sides. There is NOTHING nVidia can do but spew whoop-ass comments, as their ass is already whooped. Imagine what a designer can do if he had 30% more transistors at 10x lower power as the building block. Does that mean Larrabee will fly? That is up to INTEL's execution and resource decisions. As with AMD, for nVidia it isn't if but only a question of when, as long as INTEL's heart is in it. It sure looks like they are serious, so a serious can of whoop-ass is coming to nVidia. If it wasn't for silly antitrust laws and big egos we'd have 50% better graphics within 2 years of INTEL buying nVidia. Why can AMD buy ATI but INTEL not buy nVidia?

Regardless Tick Tock Tick Tock AMD and nVidia are done.

SPARKS said...

“ questions seem (and are) very simple but do not have easy answers”

Don’t underestimate the power of a simple question, with a contradictory answer. (just kidding)

For example, take the noxious gases and chemicals. From my perspective, which is industrial power and control, I immediately recognized that there must be entire factory infrastructures devoted to safely handling these substances before and after production - scrubbers, containment, detection, disposal, etc., to name a few - and the 'lowly minions' who maintain the production line.

Further, as you already know, the nuts and bolts of things fascinate me, especially those lovely tools. Your 'simple' answer regarding 4 or 5 nines purity indicates the high level of sophistication in tool maintenance. What good would the 4 or 5 nines gases be if the ancillary plumbing was contaminating the reactants down to 3's or 4's?

No, I cannot subscribe to your interpretation of what does and does not scratch the surface. Even if the ice is twenty feet thick, no one skates across without leaving a mark.

So shoot me for my interest in thinking outside the clean room (box).


Attila The Anon-

“Imagine what a designer can do if he had 30% more transistors at 10x lower power as the building block. Does that mean Larrabee will fly? That is up to INTEL's execution and resource decisions. As with AMD, for nVidia it isn't if but only a question of when, as long as INTEL's heart is in it. It sure looks like they are serious, so a serious can of whoop-ass is coming to nVidia.”

From your lips to GOD'S ears - when you write things like this I get a warm fuzzy feeling inside. 2560 x 1600 with all the bells and whistles maxed out, on one card? Oh, imagine the possibilities.

You don’t know what it’s like living with two leaf blowers consuming over 40 percent of a 1 kW PS in your computer case that can barely handle 1600 X 1200, and paying for the compromise, while having a processor that can easily feed the pipe, trust me.

I hope NVDA gets their asses kicked back to the dark ages.


SPARKS

Anonymous said...

>"Wow there are some real idiots over there who seem intent on misleading people. "

AMDZonerz spend more time "analyzing" and dissecting what is essentially a quick & dirty bench on a crippled system, and from this they are convinced that Nehalem sucks and Intel is doomed. Might as well just read Sharikook's latest spew and be done with it.

Anonymous said...

"AMDZonerz spend more time "analyzing" and dissecting what is essentially a quick & dirty bench on a crippled system, and from this they are convinced that Nehalem sucks and Intel is doomed. Might as well just read Sharikook's latest spew and be done with it."

:) ... They do live in their own little world, don't they ... :)

Anonymous said...

6 nines so what!

Do you know that you can be at 6 nines, and if the manufacturer makes a small change that still meets the certificate of purity, all hell can still break loose? Damn, I spent lots of time figuring those out on numerous instances. Processes are so sensitive these days that almost any change can affect them.

Thus companies who don't match development tools and process to ramp tools, or don't run enough volume in their development phase can be in for nasty surprises. You know who those companies are, the ones doing it on the cheap.

As for the AMD cum lappers, they should focus on executing versus worrying about the competition. AMD is so far behind in technology and manufacturing that their products are like the US autos of old trying to compete with superior Japanese auto techniques. The difference is AMD is like AMC of old: got no size, got no money. They are finished.

Tick Tock Tick Tock AMD is done

Anonymous said...

"Damm spent lots of time figuring those out on numerous instances"

I call BS - please give me three (I'll settle for 3 being "numerous") specific examples of a six 9's purity chemical that you had to troubleshoot, who it was made by, what tool it was, what the issue was and the change that caused it... and how much time it took to figure out.

tick tock, tick tock... I hear someone who is full of crock!

Sure there are issues with purity, but holy hyperbole Batman. And yes the process is sensitive, but spare us the dramatics.

Anonymous said...

Sorry, meet me over beers; can't talk about that stuff on the net.
You buy!

Oh, the stories of the fun we have. It's the greatest job on earth!



Tick Tock Tick Tock

SPARKS said...

“Oh, the stories of the fun we have. It's the greatest job on earth!”

Ahhh, you're full of yourself, as usual. There's nothing like building, and then incrementally powering up, a huge facility - giving it life - so you pencil-necked geniuses can go about your business.

Those fancy assed machines ain’t gonna do squat without good clean reliable juice from me. It’s invisible, the way it’s supposed to be, as you never give it a second thought. The way I look at it, you’re working in a BIG chip.

Besides, HV distribution, triple-redundant UPS, network communication, and integrated building management systems are barely scratching the surface here. Think hospitals and surgical theaters.


Tic Tock Tic Tock, clock must NEVER stop.

:P

SPARKS

Anonymous said...

Facilities: power, water... don't get me started....

I always have a field day when those guys come in and show us their trends for dissolved solids, resistivity and such. I also love the stories and follow-ups from postmortems when we lose power. Don't get me started...

What will kill a person is far easier to manage than what will cause yields or speed to go south.


Did I hear the clock? Tick Tock Tick Tock AMD is done

InTheKnow said...

It’s invisible, the way it’s supposed to be, as you never give it a second thought.

Unless of course you've been involved with a fab when the power fails. I've seen it, and it ain't pretty. This despite the supposedly redundant power systems that are meant to prevent it from happening.

First the fab turns into an uninhabitable toxic wasteland as the air fills with noxious fumes from all the chemicals. You need to send in guys with SCBA to monitor the air quality after the air handling is turned back on.

Once the fab is safe for human life, you can start to worry about how to recover all the wafers that are stranded in the tools. You also have to recover the tools themselves. They are all controlled by computers, and they aren't real happy when you just yank the plug out of the wall.

An event like this will scrap hundreds of wafers. That isn't counting the downtime for the fab, which equals lost production. It can easily take a week or more to fully get a fab back on its feet. You don't have to be a genius to see the cost is enormous.

So it's not at the front of our minds, certainly. But forgotten? Not likely, if you've seen the consequences of losing it.

Anonymous said...

Fab filled with toxic chemicals? Yes.

Toxic chemicals everywhere when you lose power? Not likely, except maybe in some 3rd-world country.

WTF sorry ass fab do you work in? A properly constructed fab with correct safety in place can safely tolerate sudden power loss without it filling with toxic fumes.

Diffusion furnaces with hundreds of wafers will be dead, wafers will be caught in acid tanks, and wafers in deposition or etch chambers will have the wafer being processed ruined, but that is it. In a modern fab with environmentally isolated FOUPs and load-lock exchange chambers, and even in older fabs with boxes, most wafers are protected during the fab failure. Fab filled with toxic gases? Could you tell me where they come from? All critical systems include power to ensure safe shutdown and evacuation of toxic gases. You don't have a fab full of silane, or resist spewing onto the floor, or such. Nice story, but not reality at all.

Good story but not reality, readers - but maybe that is Dresden?

I hear a clock tick tock tick tock

Anonymous said...

'Sorry, meet me over beers; can't talk about that stuff on the net'

Just as I thought - the problem is you are talking about quality control issues, not purity issues.


My guess: a manufacturing site? With a little bit of development experience? (Enough to BS about it, but not enough to put away the arrogance.)

Please stick with what you can talk about and stop passing yourself off as an expert.

tock tick, tock tick.

Anonymous said...

ITK - I started to type out a flame (like Mr. Tick Tock), but then I realized you were talking about the factory-level air handling (though this is somewhat isolated from the tools, as those go through a separate exhaust system).

I guess in an extreme case it could cause an issue - I think Mr. Tick Tock is confusing this with tool-level exhaust, which is designed such that in the event of loss of power or pneumatic air (some call this CDA, clean dry air), the valves on the tool fail to a combination of normally open and normally closed states. The purge (inert, safe) gases are on normally open valves (if they fail, they remain open) while the noxious gases are on normally closed valves.
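
A toy sketch of that fail-safe logic (gas names and valve assignments are illustrative, not from any real tool):

    # On loss of power/CDA, each valve falls back to its spring-driven state:
    # normally open (NO) stays open, normally closed (NC) slams shut.
    VALVES = {
        "N2_purge": "NO",      # inert purge keeps flowing on failure
        "SiH4_process": "NC",  # noxious process gas is cut off on failure
    }

    def state_on_failure(valve_type):
        return "open" if valve_type == "NO" else "closed"

    for gas, vtype in VALVES.items():
        print(gas, "->", state_on_failure(vtype))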

Of course, Mr. Tick Tock is only interested in inflating his own self-importance. I almost did the same, but fortunately caught myself when I read your comment more carefully.

SPARKS said...

“I also love the stories and follow-ups from postmortems when we lose power”

“Unless of course you've been involved with a fab when the power fails. I've seen it, and it ain't pretty.”


Fellas-

Ahh, the age-old question: which comes first, the chicken or the egg?

Was it the mechanics who installed the systems, or the pencil necked geniuses who designed them? The Engineers and Architects are the BANE of my life. Get you started? I can’t count the times I’ve sat in construction meetings telling those guys “This ain’t gonna fly”.

What’s their agenda? Construction costs, that’s what. You would think those idiots would oversize and overbuild everything.

I ask you: how many times have you had shutdowns after the "postmortem" because they had to "strengthen the distribution system"? Do you think they'd calculate loads for 5 or 6 tools starting up simultaneously? Surge currents can be 200% to 300% of running load (think motor FLA), duh! Nah, they didn't factor that in, especially with those controlled short circuits they call "annealing ovens"!

Capital costs rise exponentially as you increase the size of the feeder by a single step! Going from 1/0 AWG to 2/0 AWG will cost you twice the price, all the way down the line, up to and including the step-down transformers. They don't pull this shit in hospitals, trust me. Not with that kind of liability they don't.

In fact, in the old days they over built everything. Christ, I recently pulled an old network transformer, which was working perfectly, out of the Essex House in NYC. It was installed in 1910!!!! Con Ed is going to stick the heavy bastard in a museum!!! They should, because they certainly don’t build them like that anymore!

Would you skimp on the width or depth of a copper trace, in the backend, to save you 5 minutes in a tool, merely to save a couple of bucks per wafer? In my world, they do it all the time.

Pay now or pay later, idiots.

SPARKS

SPARKS said...

“I realized you were talking about the factory-level air handling (though this is somewhat isolated from the tools, as those go through a separate exhaust system).”

Well said, hand that man a cigar.

ITK-

For me, to have something like what you described happen, it had to come from a MAJOR brownout or blackout. This, for a FAB, is a cataclysmic event of insurmountable proportions. For me, it is an embarrassment for the entire industry.

Take the Utility Companies, the other bane of my life. You see, they are authorities. Their motto: Basically, here’s what you get, live with it. Getting good clean power? Here is a little scenario:

I hope I don’t get too technical for Attila The Tic Tock.

Power conditioning is quite another matter. There are a multitude of ugly beasties that go through the best UPS systems like rats under the rails of the E Train: harmonics, sine wave distortions, distorted phase angles, unbalanced loads, imbalanced neutrals, just to name a few. Losing a neutral in a multi-phase system will fry a UPS in a heartbeat! If you think this stuff can be stopped by your $100, 1000VA UPS, think again.

More to the point, comprehensive power conditioning requires Inductors, Isolation Transformers, power conditioners, Isolated Separately Derived Grounding, oversized conductors, and finally computer controlled line conditioners. With all this the Engineers still sweat when we hook up the Industrial Grade 3000 pound UPS. This stuff is big, heavy and costs MONEY, big money. Hell, we monitored a huge news organization’s power for 6 weeks continuously with state of the art equipment and noise was still sneaking in! Three engineers went over the data for weeks! We discovered that another building’s (THREE BUILDINGS AWAY!!!) Variable Frequency Drive on THEIR chiller was causing the problem, especially during startup! HELLO!
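
If anyone wants to see what chasing harmonics looks like in numbers, here's a toy Python sketch: a synthetic 60 Hz waveform with made-up 5th and 7th harmonic pollution (NOT real monitoring data), estimating total harmonic distortion with an FFT:

import numpy as np

fs = 10_000                       # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)     # 100 ms window
# Synthetic 120 V RMS mains plus fabricated 5th (300 Hz) and 7th (420 Hz) harmonics
v = (120 * np.sqrt(2)) * np.sin(2 * np.pi * 60 * t) \
    + 8 * np.sin(2 * np.pi * 300 * t) \
    + 5 * np.sin(2 * np.pi * 420 * t)

spectrum = np.abs(np.fft.rfft(v)) / len(v)
freqs = np.fft.rfftfreq(len(v), 1 / fs)

fund = spectrum[np.argmin(np.abs(freqs - 60))]
harm = [spectrum[np.argmin(np.abs(freqs - 60 * n))] for n in range(2, 10)]
thd = np.sqrt(sum(h ** 2 for h in harm)) / fund
print(f"THD ~ {thd * 100:.1f}%")  # ~5.6% for these made-up amplitudes

That's the easy part. Finding WHERE it comes from, see above, took three engineers and six weeks.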

More importantly, even in the best installations, the root of the problem, which is getting worse, is the power supplied by the utility company’s network. Bottom line, it’s ugly and it ain’t gonna get prettier.

SPARKS

SPARKS said...

“ started to type out a flame (like mr tick tock), but then i realized you were talking about the factory level air handling”


I suppose he thinks the power going to the tools is the same power going to the fart fans in the ladies’ bathroom. Ha Ha, typical homeowner/Bob Vila-taught mentality.

“What will kill a person is far easier to manage then what will cause Yields or speed to go south.”

You really should get your priorities in order here. Losing 20 or so incubators in an ICU premature nursery is not my idea of a good day, pal. Power transfer in these theaters is absolutely seamless. Apparently, my hospitals are put together and/or designed a bit better than your FABs, at least electrically.

We had a blackout a few years back that knocked out the entire city, you know, The Big City. We didn’t lose one miniature teraflop computer cranked out by unskilled labor, that’s a “preemie” infant to you. Not on my watch.

Further, if it were up to me, every tool on your production line would have its own UPS with enough power for each tool to finish its cycle.

But, they don’t want to pay for that, do they? Hmmm?

SPARKS

Tic Toc, Tic Toc, you guys are living on the edge.

InTheKnow said...

LOL, that did sound a bit dramatic now that I re-read it. I can picture billowing clouds of noxious chemicals pouring out of all the tools.

The reality is far less exciting, but no less serious. As it was explained to me the biggest offenders were the wet benches. With a wet bench what you have is essentially loosely covered tubs of heated acids. Without the fab level exhaust or the tool exhaust systems to take the vapors to the scrubbers they stay in the fab.

When you work with the stuff the semi-industry does, you make safety a priority or you are out of business. You'll notice that you've never heard of a disaster like Bhopal associated with the semi-industry.

So the fab will be evacuated long before it becomes lethal to be in there. After a few hours without exhaust, the levels easily exceed the acceptable limits for people without SCBA to be in the fab. Note that these limits are typically set well below the TLV. With limits set this low it takes a while to recover to "safe" levels.

For something like what you described to happen, it had to come from a MAJOR brownout or blackout. For a FAB this is a cataclysmic event of insurmountable proportions. To me, it is an embarrassment for the entire industry.

You nailed it. The fab has triple redundancy on the power supply. But when a substation goes up in a fireball (it took a week to bring it back up) there is only so much you can expect of a redundant system.

Further, if it were up to me, every tool on your production line would have its own UPS with enough power for each tool to finish its cycle.

Sparks, a diffusion furnace can be required to run at up to 1000C for around 8 hours to complete processing (this is towards the extreme end, but the numbers are in the ballpark). You'd have a UPS battery bigger than the tool to keep this sucker up and running through a full cycle.
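
To put a rough number on that (the 50 kW average draw below is my own assumption for illustration, not a real furnace spec):

# Rough UPS sizing sketch; the furnace draw is an assumed figure
furnace_kw = 50
cycle_hours = 8
energy_kwh = furnace_kw * cycle_hours        # ride out a full process cycle
lead_acid_wh_per_kg = 35                     # ballpark lead-acid energy density
battery_kg = energy_kwh * 1000 / lead_acid_wh_per_kg
print(f"{energy_kwh} kWh -> roughly {battery_kg / 1000:.0f} tonnes of lead-acid")

# Versus a 5-minute bridge to on-site generation:
bridge_kwh = furnace_kw * 5 / 60
print(f"5-minute bridge: about {bridge_kwh:.1f} kWh")

Bridging to a generator is a few kWh; riding out the whole cycle is tonnes of battery.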

Incidentally, we didn't lose a single computer that I'm aware of. What we did lose was tools knowing what state their various components were in. The mechanical controllers all still knew where "home" was, but they didn't know if they were "home". Each moving part needed to be manually driven to the home position, and wafers recovered and removed from the tools, before the controllers could be re-initialized.

The needs of a hospital and the needs of a fab are entirely different. When a fab loses power, you put hundreds of millions of dollars on the line (I'm assuming a big fab, fully loaded). When a hospital loses power, you put human lives at risk. In the grand scheme of things, losing money is unpleasant. Losing a life is unacceptable.

SPARKS said...

“Incidentally, we didn't lose a single computer that I'm aware of.”

This I knew. Typically fitted with Liebert-type systems, servers and computers are pretty much isolated from these events. What troubles me, however, is the vulnerability of the gas evacuation system and/or scrubbers. I hope, at least, emergency exhaust is transferred to the in-house power plant(s).

The same would go for those mega furnaces. A five minute run time should be sufficient for the generation plant to wind up and accept and/or transfer the load. A UPS at each tool wouldn’t be as big as the tool itself, in this case.

Then again, the corporate bean counters have already weighed the cost of wafer losses against a marginal UPS installation and its maintenance, and decided to take the wafer/production loss instead.

It all boils down to numbers. If you take a hit of this magnitude once or even twice a year, you’ve saved yourself a buck. Take a hit like this once a month and you’re in a world of hurt.

SPARKS

SPARKS said...

The heads are still rolling at the Scrappy Little Company.


http://channel.hexus.net/content/item.php?item=13828

SPARKS

Tonus said...

sparks: "Would you skimp on the width or depth of a copper trace, in the backend, to save you 5 minutes in a tool, merely to save a couple of bucks per wafer? In my world, they do it all the time."

Ain't it the truth. It's both annoying and frustrating to watch people disregard your recommendations and then badger you all the time to deal with the problems that would not have occurred if they'd just done as you had suggested. sigh...

Anonymous said...

AMD Developing Atom Rival

Anonymous said...

atom rival? AMD just doesn't get it.

8 Watts @ 1.0 GHz... yeah, you'll hear the old "but it has the northbridge in it" argument, but even considering that, it is still higher power.

So the target of this chip? Is it ultra low end notebooks? If so, we are talking about cost, power and performance (probably in that order). Even if this chip performs well, I see no mention of cost and the power appears worse.

Is this a small die? If not, AMD is once again competing for the sake of competing and will get eaten alive on margins and cost. This smells like a slow X2 with maybe a little bit of the die size cut down (just a guess).

What the heck is AMD's marketing team doing? Are they telling people internally that there is this untapped market if only we can get performance into this cost and power driven segment?

On a related note AMD just announced a 1.9GHz tri-core at 65 Watts... energy efficiency! (probably because customers are demanding it). People looking at tri-cores are not on the fence about 95 Watts vs 65 Watts, thinking if only they could somehow slow the chip down so it can fit in a 65 Watt envelope! People that are worried about power will buy a dual core, which will both perform better and use a similar amount of power.

Anonymous said...

Why must AMD release cryptic, intentionally nebulous foils?

The new atom rival is apparently in a 27mm x 27mm package... (based on the slide at Hexus and various other sites), but it's hard to say anything about actual die size.

To put this in perspective Intel's atom is a 25mm2 package (about 4X smaller)... it will be interesting to see the die size comparison. Unless there is a lot of dead space in the package, this die would appear to be significantly larger than the atom.

Asked for comment AMD apparently stated that customers were requesting larger packages and slightly more power consumption for this netbook market.

Anonymous said...

Clarification on previous comment - the Atom die size is ~25mm2, the package is ~12mmx13mm (which is ~1/4 of the AMD package size).
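
Doing the area math on those numbers: 27mm x 27mm = 729mm2 versus ~12mm x 13mm = 156mm2, so the AMD package has roughly 4.7X the area of the Atom package.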

Anonymous said...

"Asked for comment AMD apparently stated that customers were requesting larger packages and slightly more power consumption for this netbook market."

Bwahhhaaahhhaahhhhh, irony becomes you....

Anonymous said...

Disturbing Change in AMD’s routine?

It is funny that Scientia would find it significant to comment about a change in AMD’s analysts press days.

What does it matter what AMD says it will do, or what their roadmap says?

I could have told you 4 years ago, during the dog days when Netbust and Itanic ruled the roost, that AMD was history. The day AMD decided to go to IBM for silicon technology and close down their own silicon development was their kiss of death.

Leading edge CPUs that rely on external process development are not going to be leading edge. AMD, through arrogance or stupidity (I think it was both), assumed INTEL would continue to mis-execute. Did AMD really think their designers were better than INTEL’s forever? That INTEL would continue to be misguided and mis-execute? That AMD had some better mousetrap all to itself? Get real, there is no secret to great micro architecture nor to great process. All the micro architecture tricks are widely known, and it’s all about what tradeoffs you want to go after, and at what cost in power, silicon area and additional effort versus additional return. All tricks result in significant overhead if you pursue them too far.

As to process, it’s all about money and commitment to the future. Once AMD decided to outsource, I knew their clock had started ticking. Without close linkage between the chip definers and the process technologists, you won’t get a process tuned to EXACTLY what maximizes the designers’ complex tradeoffs. Could be fast transistors, could be low leakage high voltage IO, could be fat low-resistance wires, could be 10 tweaked dense metal layers; whatever it is, once you outsource it to a bunch of consortium-paid engineers you get a mediocre process. A consortium process is like a bill in a democracy. It is full of compromises and paybacks that try to satisfy (buy off) various parties and in the end never makes the hard compromises that land the best balance between what the process and the design will collectively deliver.

Now you’ve got a bleeding company with no competitive current design and a mediocre process, getting its clock cleaned. Why is it a surprise that they push back schedules, products end up with surprises, and dies are big, slow and power hungry? You start from the fact that they have crappy silicon technology not aligned with their architecture choices. Without great silicon you can’t build a great CPU. You can have great silicon and still produce garbage, look at Prescott and Cedar Mill as examples of that. But without great silicon you are guaranteed NOT to be competitive if the other guy is executing. As long as INTEL continues to execute it will be NO surprise that AMD products (I don’t care what fancy roadmap they do or don’t show) will be behind a full generation or two. Scientia got it right, Penryn will do fine for the next year and a half, and Nehalem is just icing on the cake.

I find it even funnier that Scientia somehow thinks that in 2009 AMD will have 45nm HighK/Metal gates. Where has he been? AMD/IBM missed this by not investing in it two years ago. They were caught with their pants around their ankles and fell back to SiON for 45nm. It will be 2010 before IBM has a viable HighK/Metal Gate process on 32nm.

For the first time it looks like Scientia has come to realize that it’s all in INTEL’s hands. Like I said, the future was cast when Hector made the decisions that sunk AMD back in the Prescott days. It’ll go down as a Harvard Business Review classic of how David let Goliath off the hook through arrogance and stupidity.

Today INTEL is the leading producer of the highest performing CPUs.
Today INTEL is producing CPUs with the highest performing silicon.
Today INTEL is the largest and most profitable semiconductor company.

Today AMD is the leader in driving frivolous lawsuits.
Today AMD is the leader in creating fictitious roadmaps of immaterial real value.
Today AMD is the leader in losing money and gaining sympathy from fanbois who won’t appreciate innovation, or the cost and energy it takes to really innovate.

WTF, AMD says customers want bigger? It’s not their manhood the customers want, they want a low power, small form factor die. It’s all about smaller and cooler. Don’t get confused between your manhood and your chip, Hector. In both, size matters, but you can only fantasize.

Tick Tock Tick Tock AMD got its clock cleaned.

Anonymous said...

I thought AMD listened to their customers and delivered what they wanted.

They now want higher power and bigger die.

They got Barcelona: high power, big and slow. Perfect for the customer who wants AMD.

LOL WTF are they smoking over there.

Now they are developing an Atom rival. I thought AMD listened and knew what customers wanted, so who were they listening to the past 3 years? Did they miss the cheap guys? Or are they just copying INTEL again?

Tick Tock Tick Tock, the clock has run out on AMD

Tonus said...

anon: "8Watts @ 1.0 GHz... yeah you'll here the old "but it has the northbridge in it" argument but even considering that, it is still higher power."

The arguments I saw at one site that reported on this did include the northbridge-is-built-in comment. Others stated that the 1.6GHz Atom is hurt by the in-order processing and that it is probably slower than a 1GHz AMD CPU.

Others pointed out that if AMD doesn't have these out very soon, they won't be competing versus Intel's first generation, and thus comparisons are moot. I don't think I saw any comments discussing the financial factors, which at this point seem to be the most interesting.

Isn't it likely that if Intel can produce many more Atom CPUs at a lower cost (due to the smaller die size) that even if AMD has a faster product, they may face the same old problem? They can't produce many of them, and they need to keep prices low to compete with Intel. So even if they sell as many as they make, they're losing money.
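
To put rough numbers behind that (using the standard gross-die-per-wafer approximation; the 100mm2 "competitor" die is a made-up figure purely for comparison):

import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Classic approximation: wafer area / die area, minus an edge-loss term
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(gross_dies_per_wafer(25))    # ~2700 candidate dies at Atom's ~25mm2
print(gross_dies_per_wafer(100))   # ~640 for a hypothetical 100mm2 die

Yields aside, four-plus times the candidate dies per wafer is the whole pricing story.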

They're in a death spiral at this point, it seems.

InTheKnow said...

It is funny that Scientia would find it significant to comment about a change in AMD’s analysts press days.

I think that Scientia has missed the boat with his analysis.

Reading between the lines of AMD's latest press releases and the products they are pushing, I don't believe they are shooting for top end CPUs. It really seems to me that they have bought into Nvidia's position that the platform is all about graphics. All you need is a "good enough" microprocessor.

Much like Intel has used a "good enough" approach to integrated graphics, I have to wonder if AMD isn't going with a "good enough" approach to CPUs in a graphics driven platform.

In support of this position I offer the following press releases from AMD for the month of June.

AMD Demonstrates the Cinema 2.0 Experience, Punches Hole in ‘Sensory Barrier’ Separating Cinema and Games

AMD Stream Processor First to Break 1 Teraflop Barrier

AMD Pushes Mac® Based Visual Computing Beyond HD

AMD and Havok to Optimize Physics for Gaming

AMD Announces Growing Support for its Next-Generation Notebook Platform with Eight New HP Notebooks

AMD Next-Generation OpenGL® ES 2.0 Graphics Technology Achieves Industry Conformance

AMD Empowers IT Managers with New Approach to “Scaling Up” Datacenters with the Quad-Core AMD Opteron™ SE Processor

Commercial Channel Partners Embrace the Quad-Core AMD Opteron™ Series 1300 Processor

AMD Offers Digital Entertainment Solutions for The Ultimate Visual Experience™ in PCs


Of the 9 press releases, 6 of them deal with graphics, 1 deals with their new laptop "platforms" with an emphasis on graphics capability, and 2 focus on the upgrade path that AMD's long awaited quad cores provide.

To me this does not smack of a company that is fighting for a share of the leading edge cpu market.

InTheKnow said...

The arguments I saw at one site that reported on this did include the northbridge-is-built-in comment. Others stated that the 1.6GHz Atom is hurt by the in-order processing and that it is probably slower than a 1GHz AMD CPU.

As I have read the comments following reviews of the new netbooks, one thing stands out. Battery life is viewed as a key component by those that are buying these things. At 8 watts vs. the 6.5 watts of the Atom platform, this new AMD chip will be at a disadvantage. I'm not convinced a performance advantage is going to offset the reduced battery life.
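
Back-of-envelope: with a hypothetical 30 Wh netbook battery, 30/6.5 gives about 4.6 hours of runtime versus about 3.75 hours at 8 watts. That's most of an hour given up before performance even enters the picture.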

InTheKnow said...

Some interesting stuff on Hi-K Metal Gate in this link.

SPARKS said...

“AMD, through arrogance or stupidity (I think it was both), assumed INTEL would continue to mis-execute.”


I swear to all that’s holy, to this day, since 2006, I can’t believe AMD did what they did when they did it.

All of us here have bandied this thing back and forth for nearly two years.
We’ve argued the arrogance thing.
We’ve beat up the ATI thing.
We’ve completely dissected the Barcelona thing.
We’ve even trashed the SOI thing.

The whole mess of historical missteps reeks of complete and utter failure. If I were a tad more delusional, I would say they planned it this way.

AMD is like a runaway freight train, running at full throttle, downhill, with wild-eyed Wrector screaming out the cab window with a fist full of legal papers and Power Point slides saying, “these will stop us!”

They didn’t just get it wrong, they simply can’t do a goddamned thing right. Here they are with this sham of a competitor for ATOM, and once again, they’re a day late and 5 billion short.

I mean really, can there be a day that goes by where I won’t ask myself, “WTF are they thinking?”

What a wild ride, they’re shot.

SPARKS

Anonymous said...

"I'm not convinced a performance advantage is going to offset the reduced battery life."

Let me go a bit further... for this market performance is close to IRRELEVANT, barring a huge chasm. This is all about cost and battery life and good enough.

I'm stunned at the lack of business sense... AMD's strategy in the "high end" market is 'good enough' and 'energy efficiency' (tri-cores, low clocked quads, low clocked duals), competing on cost.

Now on the low end market they are going to compete with performance?

Huhhh...

Anonymous said...

ITK - the IMEC HighK/MG stuff was interesting. In general if you talk to folks in the industry stuff coming out of IMEC is much more respected than say Sematech (which for many things is now a laughing stock).

That said... keep in mind the 'improvements' they are claiming. They started out with the most complex process - a separate high K for NMOS and PMOS devices and a separate metal for NMOS and PMOS and managed to "simplify" it down to 2 high K's and one metal.

Intel's process is 1 high K and 2 metals (I suspect IBM's process is also though I'm not sure if they have published anything) so with the exception of the gate first vs gate last flows this is hardly a "breakthrough". The stress memorization validation though was impressive.

What I found interesting was that neither IBM nor AMD (or anyone in the 'fab club') are partners in this work.

pointer said...

Blogger InTheKnow said...

The arguments I saw at one site that reported on this did include the northbridge-is-built-in comment. Others stated that the 1.6GHz Atom is hurt by the in-order processing and that it is probably slower than a 1GHz AMD CPU.

As I have read the comments following reviews of the new netbooks, one thing stands out. Battery life is viewed as a key component by those that are buying these things. At 8 watts vs. the 6.5 watts of the Atom platform, this new AMD chip will be at a disadvantage. I'm not convinced a performance advantage is going to offset the reduced battery life.


Actually, one more thing is conveniently missing from the equation: Intel's 6.5 W includes the IGP and AMD's 8 W doesn't.

Tonus said...

anon: "Let me go a bit further... for this market performance is close to IRRELEVANT, barring a huge chasm. This is all about cost and battery life and good enough."

That is what I was thinking. One of the comments I saw was along the lines of "I want to buy one of these and run Crysis on it!" And I realized that this person just didn't get it.

I still think that the money issue is the big one now, even overshadowing performance and efficiency. I don't think AMD can be saved by having a better low-end processor, especially if it costs them more to make them. They just fall into the same trap, losing money while boasting that they've got the better product, and watching things get worse as Intel delivers better and better designs over time.

You almost wonder if the board has left Ruiz at the helm out of sheer spite, figuring that the captain should go down with the ship, since he's the reason she is sinking.

InTheKnow said...

I suspect IBM's process is also though I'm not sure if they have published anything

I think I saw a report that said IBM's process is 2 Hi-K materials and one metal. I'll hunt for a link.

Anonymous said...

Anonymous said...
AMD Developing Atom Rival

AMD is gonna pair 2 Atoms together and call it the Molecule...

Anonymous said...

"I think I saw a report that said IBM's process is 2 Hi-K materials and one metal."

That's possible - you can tune the work function (use different metals), but you can also choose different high K's, which will impact the Fermi levels as well.

Either way you have extra masking for either the 2nd metal or 2nd high K. As the metal has slightly looser requirements from a processing perspective, I would much rather try to put down a <30A high K oxide once and the metal twice as opposed to the other way around.

Of course IBM may not have a choice as the gate first flow requires metals which can survive high temps so they may have to go with two high K's/one metal out of necessity? (pure speculation)

For folks scratching their heads over what ITK and I are talking about... for CMOS you need 2 different transistor types, PMOS and NMOS, which are electrically different. For conventional SiO2 and poly-Si gates (the technology until high K/MG came out) the poly was doped to get the right P and N characteristics. With metals you cannot really just pick one metal and dope it (this is not entirely true - there are folks researching doping/modification), so you need to use 2 metals, each with the right electrical characteristics for NMOS and PMOS. The other alternative, which is what IMEC is exploring (link in earlier comment), and apparently IBM too, is to use 2 different high K's with one metal... this will also get you 2 electrically different transistors.
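
To put the work-function point in textbook terms (standard MOS theory with the fixed-charge terms dropped; nothing here is specific to Intel's or IBM's actual stacks):

V_T = \phi_{MS} + 2\phi_F + \frac{Q_{dep}}{C_{ox}}, \qquad \phi_{MS} = \frac{\Phi_M - \Phi_{Si}}{q}

The gate work function \Phi_M feeds straight into the threshold voltage. NMOS wants an effective gate work function near the Si conduction band edge (~4.1 eV) and PMOS near the valence band edge (~5.2 eV). N+ and P+ doped poly hit both targets for free; with a metal gate you have to engineer each one, either with two different metals or by shifting one metal's effective work function with two different high K stacks.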

Anonymous said...

You guys who are really interested should go look at the IEDM paper from INTEL on what they have in PRODUCTION, then compare it with what IBM is promising to have in production sometime in 2009; they are publishing their claim to a better mousetrap in Hawaii right about now.

Remember, today you can buy HighK/Metal gate from INTEL, which is producing it by the millions in 2 300mm factories with another two 300mm factories coming online soon. In the other camp you have a PowerPoint of what IBM believes it can produce in 2009.

As to IBM HK, they have to go with two different highK dielectrics and one metal, as that is the ONLY way to get the matched characteristics for both the NMOS and PMOS. As the previous poster mentioned, N and P transistors require different electrochemical potentials for the gate. Polysilicon doped N and P type was a gift from god. Now that they need to replace the polysilicon and the SiON, the issue is very, very complicated, as god decided not to give the material engineers easy, ready-made elements that worked. A complicated combination of either different HighKs and one metal, or one HighK and different metals, is the only way to get close to the right combination. IBM is going the quick and dirty way. The other, harder and better, way is one dielectric and two metals.

I don’t know about you, but common sense says that with metals you have a lot to choose from in work function, and they all conduct, so the permutations are far larger than with finding a single metal and two insulators, because the insulator options are very, very limited. Some may ask why that is. Well, the dielectric you pick must have a couple of must-have properties: high K, a large band gap, and the right valence band and conduction band offsets to the gate metal you pick. Old boring oxide in combination with N-doped silicon on NMOS and P-doped on PMOS was perfect. But in the case of HighK it is harder. Once you pick the dielectric, you then have to tune the gate dielectric and metal electrode stack so it doesn’t trap charge, leak, break down, degrade over time, or degrade mobility for the transistors, AND provides the right electrical characteristics for high performance transistors.

I will go out on a limb and predict. No actually I will promise that gate first will ALWAYS be inferior to gate last. Thus AMD/IBM will always have an inferior transistor. It’s the physics and thermodynamics of the system. You can’t change it, cheat it or engineer around it. Make the wrong choice, like IBM did for their last big material bet, and you have a disaster. For those that don’t remember, IBM came out very boldly a few years ago, beating everyone with the wrong LowK backend, and it collapsed in their faces. Their HighK choice won’t collapse, but it won’t produce the results they need, and give it a generation and they will be doing gate last too. In the meantime, AMD is fucked.

But count on IBM spin doctors to say theirs is best because of its simplicity and the ability to put it into a traditional CMOS flow. They will continue to advocate it based on simplicity, cost, and the ability to drop it into a standard flow. But has anyone figured out why they are advocating this? They started very late, they haven’t had time to really explore gate last and the other myriad of material options and combinations. Being late, all they can do is trumpet that they've got HighK coming, that it is easy to insert, blah blah blah. For those that are interested, go get the transistor drive current IBM is promising versus what INTEL is PRODUCING. When in doubt, don’t believe the speed-up of one thing versus the previous thing. Compare the IBM/AMD process against its competitors at TSMC and INTEL, and vice versa. Who cares if GM's newest car gets 30% better gas mileage or is more reliable than its last car? The question is whether GM’s newest car is better than Toyota's or Honda's. It isn’t. No different than comparing AMD/IBM’s latest process to INTEL's. Who the fuck cares how much better it is compared to the last shit you were producing? Whether it is better than the competition is all that matters!

Now my favorite AMD blogger…

“Just part of my blatant pro-AMD spin”

The real Scientia comes out, if there was ever a question.


I also got a laugh from a comment on Scientia's blog explaining that AMD isn't telling anyone what they are doing, and is slipping analyst day, because they are afraid of INTEL. That is a good one. Why weren’t they afraid of INTEL 2 years ago when Core2 came out? Do they really think they materially change INTEL’s direction by hiding some secret PowerPoint plan for 6 months? Nehalem is done, and for that matter INTEL has their next couple of Tick Tocks all going, as it takes a good year or more to go from concept to full chip. If AMD thinks it's got another revolution coming that they can hide, that is the funniest thing I've heard. There are really no secrets; the only secret was how arrogant and inward looking INTEL was during the Netbust and Itanic days. INTEL was fucked up in those days and paid for it. They’ve woken up.

When AMD came out with dual cores, how long did INTEL take to come out with their poor-ass shame of a dual core? You think hiding 6 months changes the real impact? NO, not much.

If AMD thinks hiding roadmaps and plans for a few months is strategic and helps them get ahead, they've got their priorities in all the wrong places.

The reason it's pushed is they've got NOTHING. They hope they'll have some credible updates in 6 months. Hope is all they've got, but they haven’t invested for good things.

Tick Tock Tick Tock AMD got NOTHING.

InTheKnow said...

But has anyone figured out why they are advocating this? They started very late, they haven’t had time to really explore gate last and the other myriad of material options and combinations.

Check the literature and you will find you are mistaken. IBM has been working on this for nearly 10 years, just like Intel. They have had plenty of time to explore alternatives. You need to understand IBM's culture to understand their decision. IBM will always go for the "elegant" solution. And frankly, from a process point of view, gate-first is the more elegant solution.

I will go out on a limb and predict. No actually I will promise that gate first will ALWAYS be inferior to gate last.

You are aware that there are potential fill issues beyond the 32nm node for the gate last approach, right? Intel may have to go to gate-first at some point due to the technical difficulties of dealing with the high aspect ratio issues that future nodes seem to pose. Will you be as quick to say gate-first is not any good if Intel has to go down that road?

A complicated combination of either different HighKs and one metal, or one HighK and different metals, is the only way to get close to the right combination. IBM is going the quick and dirty way. The other, harder and better, way is one dielectric and two metals.

Again, I disagree. From a process perspective it is going to be harder to do the masking and etches as well as accurately control the thickness of 2 Hi-K materials than it will be to do 2 metals. The previous poster already mentioned this. Not to mention all the work to get something that will work with the anneals. IBM's approach is not quick and dirty. There are huge hurdles to overcome.

All that said, I expect IBM to under-deliver. IBM is absolutely world class in the research realm. But they fall on their face when it comes to integration and scaling to production.

InTheKnow said...

Here is the link I was referring to regarding IBM's materials choices.

From the second to last paragraph:

Engineering the dielectric stack to be either fastest/leaky or fast/tight for a target HP or LSTP, there’s a single HK gradient-stack and one metal used for both NFET and PFET gates. Poly-silicon tops the metal gates. “After more than three years on the 300mm pilot line, there’s been a lot of learning and we’re on track,” Khare noted. (emphasis added)

Anonymous said...

"They started very late, they haven’t had time to really explore the gate last and the other myrid of material options and combinations."

IBM and Intel probably started at similar times - initial feasibility and research was pre-2000. I think Intel started a bit earlier on the process integration and bringing tools into the fab, which is one of the reasons why they are in production first.

The other thing to keep in mind with gate last (or replacement) is you have to fill the smallest feature on the technology and need to do this without voiding... this is not a trivial task - you are talking features that are 300A (or smaller) wide. You also have more integration to do as you introduce new etches as well as 1 (or two) polish steps.
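
(To make that concrete: if the gate stack is on the order of 1000A tall, my number purely for illustration, a 300A-wide opening is already better than a 3:1 aspect ratio that has to fill void-free, and it gets worse with every shrink.)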

If you can get gate first to work it is definitely the way to go - the integration is simpler and the process is cheaper. The main problem you have is your metal choices are limited by the S/D anneal (typically done around 1000C) - many metals will melt, or at least flow at this temp (it may also start reacting a bit with the high K).

So one of the problems I see with IBM's gate first approach, is that if they shift to a new high K dielectric (which will be needed in 2 generations) they have to hope they can find another magic single metal solution. Intel's challenge will be to continue to fill the smaller and smaller features (though as the metal is put down after the anneals they have significantly more metals to choose from).

This whole thing reminds me a bit of SOI - Intel saw the same benefits that IBM/AMD saw at 90nm, but their research showed the improvement slowly evaporating with each new node. So they chose to skip a short term solution. Fans and some less knowledgeable folk saw this as IBM being more advanced; however, others saw this (correctly) as Intel avoiding a more complex and significantly more expensive solution for what was a 1-2 generation gain (in terms of large advantages of SOI over bulk Si).

The gate first stuff reminds me of this - the approach Intel took is harder and more costly, but ultimately it should give much more flexibility on future nodes. IBM chose probably the best short term solution with gate first in terms of simplicity and cost, but there is a question of how extendable it will be.

Anonymous said...

All that said, I expect IBM to under-deliver. IBM is absolutely world class in the research realm. But they fall on their face when it comes to integration and scaling to production.

This is definitely the industry insider view... IBM makes a lot of money on licensing and does like to patent and publish. Intel tends to keep more stuff trade secret and publish things that DON'T work. Keep in mind IBM is no longer producing mass volumes of chips so to some extent they can attempt to 'brute force' a process that may be on the edge; while their fab club may start uncovering issues as this scales to higher volumes where tool and fab variations start coming into play more significantly.

Orthogonal said...

For $199, the new 4850's are quite a bargain. I'm mighty impressed with the results. I think it's time to upgrade from my aging X800 Pro.

http://techreport.com/articles.x/14967
http://anandtech.com/video/showdoc.aspx?i=3338

Unknown said...

For $199, the new 4850's are quite a bargain.

Indeed.

I'm as yet undecided on whether or not I'll replace my Geforce 8800 GT SLI setup. If I do replace them, it could be with either an Nvidia or Ati setup. This would be at least two GPUs, as no single card offers much of a performance increase over my aforementioned 8800 GT SLI setup.

So my question is this: Can I run a Crossfire setup with two or three cards on an EVGA 790i board? I have another board (ASUS P5B Deluxe) that I know is Crossfire capable, but the second PCI-E slot is limited to 4X bandwidth. The 790i delivers a full 16X to each slot, so I'd be getting much more performance if I could keep that.

3 4870s or 2 GTX 260s could be quite the combination, I imagine.

Tonus said...

orthogonal: "For $199, the new 4850's are quite bargain. I'm mighty impressed with the results."

When I was skimming the Tech Report review, the numbers seemed almost disappointing, then I realized that this was the 4850, a $199 card that is designed as the lesser of the new offerings. Suddenly, being able to 'only' keep up with a 9800GTX wasn't a liability, but a plus.

NVIDIA must have been impressed as well, as they released info on a 9800GTX+ version and announced pricing changes out of the blue. Seems as if they intend to follow Intel's model, owning the high end and keeping the competition sweating in any areas where they have a product. Still, it looks like the 4850 and 4870 are nice cards. I will wait a bit longer before I make any decisions, but I'm definitely not disappointed at what I'm seeing so far.

SPARKS said...

Tonus-
Giant-
Orthogonal-

You know, none of this would be an issue for me if NVDA would allow INTC to have SLI.

It really pains me to factor that at every purchase decision. I would simply buy two of NVDA best and be done with it.

It's not gonna happen, idiots.

If the 4850 is any indication of the 4870's performance, it looks like two will work well with my X48 MOBO, albeit another generation of graphic card compromises for me.

I am really tired of this bullshit graphics market.

SPARKS

Roborat, Ph.D said...

Scientia said: "So we are supposed to trust that Intel didn't massage its own benchmark a bit to favor its own processors?"

after having accused every independent technology review site of bias, he has now turned towards intel and accused it of selective benchmarking on its own website, also widely known as advertising.

can anyone help me understand what's the point of his last blog?

Anonymous said...

"can anyone help me understand what's the point of his last blog?"'

Some of the critical things you missed:

1) 4P and higher is far and away the most important and largest volume market in server land... oh wait, nevermind...

2) With Nehalem coming online soon (despite Scientia's expectations for Intel to run into issues and delays), Dementia needs to get his "we scale better than Intel in the all-important 4P market" argument in early. Even with no 4P Nehalem until H2'09, Nehalem will be able to gain credibility in the 1P and 2P server space, which should allow for a pretty fast 4P adoption rate (assuming Nehalem delivers).

3) Anything Intel does is, by default, not believable and they are out to mislead, cheat and steal from everyone. On the other hand when AMD makes crap up like "ACP", there's always a good reason and they are simply ahead of the curve in terms of benchmarking.

4) With no good news on the horizon for AMD (with the notable exception of the 48xx graphics - I'm surprised this was not a blog), he needs filler. Perhaps by now he has realized, after his closing-the-process-technology-gap and Asset Smart blog fiascos, that he should not be blogging about manufacturing and process technology?

5) Apparently in his little Utopian world he does not understand that when you market a product you focus on its strong points and minimize the weaknesses. When you go to a car dealer you do not hear that this car needs new tires, a transmission in 2 years and some brakes. Have you seen the benchmarks AMD favors (SPEC rate and floating point)? Apparently he has no issues with AMD simply presenting simulated/estimated SPEC scores.

Quite frankly, when I go to a website which is advertising its own product, I kind of expect them to show the most favorable data and conditions, and it is up to me to do some more digging to get the whole picture. Then again, I live in the real world... thank goodness we have Dementia to police Intel for the folks living in fantasy land.

Anonymous said...

The other thing which Dementia kindly OMITS in his blog is that Intel did in fact include its own dual core in the benchmarks ON THE SAME GRAPH (which gives the user both a comparison between 2 and 4 cores as well as dual core Intel vs dual core AMD). It is not as one-sided as his blog would lead you to believe.

It was also done as of Dec '07 - it's not like there were a lot of AMD quads out at the time. And had they used a "top end" AMD quad at 2.3GHz, Scientia would have cried about the clock mismatch.

Finally - I loved the "I've already seen people suggesting that Nehalem's triple channel IMC will increase your gaming performance" line. Where? Who? This is yet another case of trying to build up expectations in order to set up the inevitable "failed to deliver on expectations" article. Why would triple channel have a significant impact on gaming performance? I have not seen this theorized anywhere.

Having been let down by K10 (especially the clocks, which he told everyone would be at 3.0GHz by now), he is trying to pre-hype the Nehalem jump (I know this sounds counter-intuitive) in order to set himself up for the "didn't meet expectations" piece. Expect him in future blog articles and comments to talk about 40% better (he may even start saying things like "I've seen people suggest 40-70% better" or "people are expecting 3.5-4GHz clocks").

SPARKS said...

Peh, Dementia is as biased as they come, despite his transparent claims to the contrary.

I took him up on his 4 gig challenge with the QX9770. His response was as superficial as he and his "analysis".

SPARKS

Anonymous said...

What is the value of benchmarks if you can't even make money?

Scientia, you really don't have a clue and continue to lap at irrelevant things these days. A benchmark is of less importance today than at any time in the history of computing. It really is a matter of the total cost of that benchmark, and whether the incremental value is worth it at all. And in the case of your cum loving AMD, it matters not what the benchmark is, as that company isn't producing them nor does it have a viable business. It matters not if Hector's SPECbbbjcim is tops to you.

Anonymous said...

"can anyone help me understand what's the point of his last blog?
"
I don't see much of a point to the last post at all, other than that he appears to be pulling the same nefarious glossing-over of the benchmark info without first understanding the details, then pulling that together to make a point that is both incorrect and biased.

If you check his vConsolidated link, in which he accuses Intel of foul play, you will see in the configuration that this particular link is showing data taken in Dec 2007 ... long before any Barcelona quad was available (though it had launched some 3 months earlier).

Jack

SPARKS said...

Jack-(and all)

You’re right about the benchmark thing. In my opinion it is all a steaming pile of crap. Most of these morons, especially the most vocal twerps, would never actually go out and BUY a very expensive top of the line chip. It’s like talking macho stud, but only learning the technique from what others say, or worse, from what they read!

Ask them this:

Ever date an exotic dancer? No? Then STFU!
Ever done the quarter in the nines? NO? Then STFU!
Ever drive in an exotic car over 170? NO? Then STFU!
Ever done over 80 on the water? NO? Then STFU!

You may ask if this is relevant here, perhaps, perhaps not, BUT-----

I BOUGHT A HIGH END, TOP BIN CHIP! (Go to NewEgg, box processors; go to QX9770, user comments, read my comment where it says “user purchased this product from NewEgg”, in blue.)

I can tell you guys what my long term relationship with QX9770/X48 DDR3-1800 has evolved into.

Bliss,--- pure--- fucking---- bliss---, that’s what. You guys who make this stuff probably don’t have the “end user experience” a top chef in a Five Star restaurant may have. But let ME, a certified lunatic end user tell you, you really cooked up one masterpiece here. Nothing, absolutely nothing stops this BADBOY. Further, there is NO looking back; it makes everything else quite superfluous and ancillary.

Who the hell ever thought you could play games like MSFS X or Crysis, simultaneously do virus scans and defrags, and not miss a beat, not even a hint! Everything, I mean everything, just ratchets up a couple of notches, with no exception!

There is a crispness to the machine that is indescribable. Even touching a lesser computer is at best tedious, at worst, slow motion.

Those shit heads can talk computer benchmarks till they gasp their last miserable sheltered bullshit breath. They’ll never know what I know, forget Nehalem. What they don’t want to admit is that they are looking down the barrel of the big gun, QX9770. And it’s in their faces, smacking down on all takers on a daily basis, right here at HOME, 24/7!

Hypothetical benchmarks = hypothetical sex. It simply doesn’t play out, unless of course they're playing with themselves.

I’m no Gordon Moore, but I can only imagine what this core in conjunction with an IMC will bring to the table.

And I speak from experience.


SPARKS

Anonymous said...

Intel never releases a technology before its time.

It savored the FSB and Northbridge for billions in profits while it let the IMC and QPI ripen. Then, when they had beaten AMD to a pulp with that old boring yesterday technology called FSB and Northbridges that the cum lappers love to beat against, they unleash Nehalem, like a fine wine, right on time.

Tick Tock Tick Tock AMD is nothing but a footnote in technology history. They will be right up there with DEC-Alpha

Anonymous said...

"I’m no Gordon Moore, but I can only imagine what this core in conjunction with and IMC will bring to the table"

On the desktop, probably not a lot... People keep talking up the Athlon K8's IMC/HT, but what made it better than the P4 was the IPC of the core. It is/was far easier to say, well, IMC/HT is unique to AMD so that must be the cause of the differences - but as you saw with Core2, which wiped out that lead, it still mainly comes down to the core (IPC and clock) performance (in <4P space). Sure, you can keep up the abstract theoretical arguments about why IMC/HT should be better for a single socket system, or how benchmarks don't show the 'goodness', but it is of secondary importance (assuming you are running a decent FSB).

As usual, Intel went with the market driven, not the elegant, solution. FSB was easier and faster to market (much like the MCM quad). However, as we go further into multicore space, the amount of cache (and the resulting die area) will be an issue. While you can keep adding cache, eventually you hit a point where the IMC is the better solution economically.

Scientia likes the academic arguments... sure, it's great to talk about virtual machines in 4P+ space, but quite frankly, unless you are a server/IT person it just doesn't matter for >95% of computer purchases. Keep in mind this is the sole remaining 'stronghold' of AMD; everything else has become a "good enough" price/performance play. I suspect when Nehalem comes around you will hear the 'yeah, but no 4P solutions yet, paper launch' arguments until the end of '09 when 4P solutions start coming out. At that point it'll be interesting to see what the argument becomes.

In the end it is about actual performance, not what is better on paper, or what has a more elegant design. Give me the 'inelegant' better performing solution every day of the week and twice on Sundays.

Tonus said...

anon: "Finally - I loved the I've already seen people suggesting that Nehalem's triple channel IMC will increase your gaming performance. Where? who?"

This is one of the things that is easy to hide behind because with so many hardware-related sites and forums, you can always find people who make claims that range from difficult to substantiate all the way to downright ludicrous. I get the feeling that whenever Scientia is referring to what 'Intel supporters' have said, he is picking from one of those statements out there. By lumping them under a broad umbrella ('Intel fans') he can make it seem as if people are being hypocritical when they criticize him.

He uses this device a lot to wonder aloud why 'Intel fans' say one thing now and another thing later, even though it's likely that the people saying those things were not the same people each time, and therefore it's not a case of hypocrisy, it's just a case of differing opinions or points of view from different people.

That sort of thing gets done a lot on message boards, lumping a group of people together to use the words of one person against another. Be it CPUs, GPUs, whatever else, lots of people find it easier to argue a point by trying to make other people defend things they never said.

I think he spends too much time trying to defend himself from accusations of bias, and in doing so he seems more biased than he would if he just blogged and dealt with the comments in a straightforward manner.

Tonus said...

And it seems as if AMD's answer to Intel's Atom will be a hobbled Sempron CPU.

I have to echo Ed's sentiment here- how could AMD be caught so flat-footed? Another opportunity, small and unlikely as it was, to strike at Intel and they appear to have blown it completely. It seems that Intel's primary competition in this space might be NVIDIA and VIA, not AMD. That's mind-boggling.

InTheKnow said...

Tonus,

AMD has been playing down and dismissing this space since Silverthorne was first announced. Based on their comments, they seemed to think that Intel was premature in going this route.

I believe their comment was something to the effect of "we will have a product in this space when the market is large enough to justify it." With the benefit of 20-20 hindsight, I'd have to say they figured this was going to go the route of Microsoft's UMPC effort.

The problem with AMD's attitude is that given the small die size, I really think this is the market they should have been focusing on. Atom's margins are good, and AMD has more than enough capacity to supply a large portion of this market without additional fab capacity if they had been prepared.

Anonymous said...

I don't think AMD had much of a choice (except to wait) with the cut down Sempron... it's not like they have a bunch of resources to do a new design on this (look at how much they have been cutting out of their other CPU designs).

However what AMD is doing is lunacy (again)... they are taking an extremely cost and power sensitive market and plugging in a bigger die and slightly higher power solution and will likely argue performance (which is secondary in this market). Once again they will be competing with big competitors (Intel, Nvidia) with a more costly solution that may not even be that much better performance-wise anyway.

It should have been very clear how serious Intel was taking this when they put it on 45nm; this is the leading node and typically Intel puts "less important" products on the trailing edge process (chipsets, embedded product, etc). The announcement a long time ago that this would be done on 45nm should have been a clear signal to AMD.

So now AMD does a reactionary solution which still costs resources but will probably not be competitive, and will continue to suck down more resources to keep it competitive. Why not wait, save the resources and use them on a solution that can be competitive in the long run and not just a bandaid? AMD still doesn't get the business side of things and continues to be led around by Hector's we'll-compete-against-Intel-on-all-fronts strategy.

Anonymous said...

Here is an interesting article on the little spat between INTEL and AMD. I think it sums up things nicely.

Do you believe that behavior enabling the lowest prices and best products for consumers rules above all? Or do you believe in the older view that competition must be preserved even if it means keeping inefficient and mismanaged companies in the game by punishing efficient and larger companies at the expense of the consumer?

Most AMD fanboys really don’t realize this, but they blindly lap at the idea that competition, even from an inferior and incompetent little dick, is better than letting market efficiencies play out. Innovation still appears and forces matters; look at Transmeta, Via, and Opteron. And look at Barcelona for how things naturally end up with AMD.

AMD was never willing to invest a buck in technology and now they are finished!

Tick Tock Tick Tock, AMD’s game is over, done and finished, and no antitrust circus is going to save the dinosaur.


A.M.D. and Its War With Intel
By JOE NOCERA
Published: June 21, 2008
A few weeks ago, Stephen Labaton of The New York Times broke the news that the Federal Trade Commission had decided to open a formal antitrust investigation into Intel, the world’s dominant maker of microprocessors. Subpoenas had gone out not just to Intel, but to many of the computer manufacturers who rely on Intel chips. The investigation, as Mr. Labaton wrote, was going to revolve around “accusations that Intel’s pricing is intended to maintain a near monopoly on the microprocessor market.”
The chief accuser, of course, was Intel’s main (some would say only) rival, Advanced Micro Devices. I say “of course” because I can scarcely remember a time when A.M.D. hasn’t been complaining about Intel’s supposed predatory behavior. But I can also recall the company’s many missteps and execution failures over the years, which have tended to undercut its claims. It was always a little hard to swallow A.M.D.’s argument that it was being hurt by Intel’s anticompetitive practices when it had such a long history of snatching defeat from the jaws of victory.
In recent years, however, two things have happened. First, in 2003, A.M.D. came out with a chip called Opteron, which was far superior to anything Intel had on the market. Indeed, this was one of the few times that Intel was the company stubbing its toe; it took a year before it had a competitive chip. What’s more, the Opteron was aimed at the highly profitable server market, which has long been Intel’s domain. It is fair to say that Intel was none too happy with this state of affairs, and it wasn’t too long before A.M.D. was complaining that Intel was cutting deals to keep computer makers from straying, even though many of them wanted to use the Opteron.
Which perhaps explains the second thing that happened: A.M.D.’s accusations finally began to gain some traction. In 2005, after a lengthy investigation, Japan’s Fair Trade Commission asserted that Intel had violated the country’s antitrust laws by, in effect, paying Japanese computer manufacturers to limit their business with A.M.D. That same year, A.M.D. sued Intel in federal court, charging predatory pricing; the case is scheduled to be tried in February 2010. Meanwhile, the European Commission began looking into Intel’s pricing practices; it has since made several preliminary rulings that don’t bode well for the chip giant. And in South Korea this month, Intel was fined $25.4 million for giving rebates to two South Korean computer manufacturers, which had the effect of “excluding” A.M.D., according to the Korea Fair Trade Commission. Intel has said it will appeal. (The New York attorney general, Andrew Cuomo, has also started an investigation, but he’s just piling on.)
With all this ferment, it was probably inevitable that the F.T.C. would follow suit. If the rest of the world is busy imposing sanctions on Intel for abusing its monopoly power, it hardly looks good for the nation’s chief antitrust enforcer to be sitting on its hands.
When I made some inquiries this week, the strong sense I got was that the commissioners wanted to get to the bottom of the Intel accusations once and for all, and needed subpoena power to gather all the evidence they needed.
But in antitrust, the notion of “getting to the bottom of it” is notoriously squishy, and this case is squishier than most. There is no question that Intel offers large discounts to its big customers, along with rebates, quarterly marketing dollars and other goodies. Because those discounts are directly related to how much business a manufacturer gives to Intel, it necessarily has the effect of excluding A.M.D.—since it’s the only other company competing for the business.
Is that predatory behavior? Or is that good old-fashioned competition? What makes antitrust so maddening is that the answer depends as much on who is asking the question — and where — as it does on the evidence.

Let’s start with a simple question: Are discounts good or bad? When I put it like that, the answer is obvious: discounts are clearly good. They allow consumers to buy things at lower prices. Indeed, price competition is at the very heart of free-market capitalism, and it is the natural result of competition. It’s what we as a society want companies to do.
For as long as we’ve had antitrust laws in the United States, predatory pricing — pricing intended solely to prevent a rival from being able to compete — has been against the law. After all, if a big company drops its prices on a short-term basis to drive a smaller rival out of business — and then can raise prices with impunity because it has eradicated its competitor — consumers are ultimately harmed by the price cuts.
But our definition of predatory pricing has tended to vary over time. In the 1950s and 1960s, United States antitrust enforcers — and the courts — tended to view many forms of discounting as predatory. One sorry result was that actions that actually helped consumers were considered illegal practices.
But in the 1970s, that all changed, as legal scholars argued persuasively that anticompetitive behavior had to be defined in more rigorously economic terms, and that there needed to be a high standard of proof that monopolistic behavior was harming consumers. This became known as the Chicago School of antitrust theory, and in time, the courts embraced many of its theories.
One consequence is that today, it is almost impossible to bring a discounting case, even if it has exclusionary consequences. It is presumed by the courts that discounting benefits consumers. The only form of discounting that is now viewed by the courts as proof of predatory behavior is pricing below cost. When I spoke to Robert E. Cooper, a lawyer at Gibson, Dunn & Crutcher, who is representing Intel, he cited a series of Supreme Court cases, going back 20 years, that has come down in favor of discounting — even enormous discounts based on market share, which have the effect of excluding rivals. Intel insists that it doesn’t price below cost, and given the nature of these things — selling in huge volume brings Intel’s own costs down, thanks to economies of scale — it will be almost impossible to prove otherwise.
Does this mean that Intel is all warm and fuzzy when it is negotiating with the big computer manufacturers? Not remotely. Roger Kay, the president of Endpoint Technology Associates, and a very close observer of the Intel-A.M.D. wars, laid out a scenario to me that he thought likely. A computer manufacturer sees its market share declining. When it comes time to negotiate a new microprocessor contract with Intel, it is told that its volume has diminished so much that it can no longer get the same big discounts it has come to depend on for its own profits. But, the Intel salesman adds, if the company is willing to shift more business to Intel, and increase the volume a little, it will still get the discount. Naturally, the company agrees.
Indeed, Mr. Kay says he believes this is precisely what happened in Japan, where the two companies that abandoned A.M.D. completely were Toshiba and Sony — which were both losing market share to competitors. “Thus,” he wrote me in an e-mail message, “Intel can claim it is doing nothing wrong and A.M.D. can claim its options are being foreclosed, and in a sense, they’re both right.”
One reason A.M.D. has had more success pressing its case abroad than in the United States is that in many places in the world, the reigning antitrust view is what’s called the post-Chicago School, which holds that there are times when pricing above cost can still constitute predatory behavior. Indeed, the European Commission tends to put far more emphasis on competition than on consumers, and views with suspicion companies with oversize market share, like Intel.
But what happens if the commission rules against Intel — as seems likely — but the F.T.C. and the United States courts rule in favor of the company? That’s what happens these days with mergers that need government approval — Europe has turned thumbs down on a number of mergers involving two American companies, and as a result they haven’t gone through. The world economy really won’t function very well if multinational companies have to dance between dueling regulators. Either we need to adopt their standards, or they need to adopt ours. The Intel-A.M.D. case shows, if nothing else, how untenable the current state of play is in antitrust.
Despite its recent move, the F.T.C. has long been reluctant to pursue a formal investigation of Intel; I hear that even now many of its economists simply do not believe that Intel’s policies amount to predatory pricing. But there is one other reason why A.M.D. faces an uphill struggle pressing its case in the United States. Its own market share numbers don’t seem to back up its contention that Intel is preventing it from competing. According to data provided to me by Ashok Kumar, a well-known technology analyst with CRT Capital Group, A.M.D.’s overall market share in microprocessors was 17 percent in March 2005. By December 2006, it had risen to 25 percent. Today it has sunk back to 20 percent.
The rise in market share is directly attributable to A.M.D. having strong products like the Opteron. But then Intel came roaring back, leap-frogging A.M.D.’s technology. Meanwhile, the smaller company’s latest high-end chip, code-named Barcelona, was delayed when a flaw was discovered in it. A.M.D. has lost money for six quarters in a row.
Apparently, some things never change.

Khorgano said...

^Couldn't you have just linked to the article instead?

Anonymous said...

Intel Nehalem 'Bloomfield' B0 stepping benchmarks

Nordic Hardware Nehalem Results


AMD is finished.

RIP, AMD fanboys.

LMAO

Anonymous said...

"Intel Nehalem 'Bloomfield' B0 stepping benchmarks"

1.1V Vcore for 2.93GHz... this baby should have some clockspeed upside both for stock models and OC'ing. It'll be interesting to see if the rumors are true that folks will need to spring for a high-end SKU to overclock... not that Sparks will care! (Sorry, couldn't resist!)

I see things topping out at 3.2GHz for a while though (transistor count is up with the IMC/QPI, not to mention the ratio of low-Vt to high-Vt transistors increases as cache shrinks relative to the critical-speed logic). It is a good sign that it doesn't seem like Intel will need to jack Vcore to get the clocks early on though.

It'll be nice to see a couple of speed bins with their respective Vcores to guess where the TDP wall might be. It looks like the architecture will have some significant clockspeed upside - just a question of whether it'll be later 45nm or the 32nm shrink that shows it (or if Intel 'spends' the clock upside on more cores/graphics cores).
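
If you want to put numbers on that intuition, here's a toy dynamic-power scaling model in Python (P roughly tracks C*V^2*f for the dynamic part; the baseline is the rumored 1.1V @ 2.93GHz, and every other voltage/frequency point is made up for illustration):

# Toy dynamic-power scaling: P ~ C * V^2 * f (leakage and uarch ignored).
# Baseline is the rumored 1.1V @ 2.93GHz; the other points are hypothetical.
def relative_power(v, f_ghz, v0=1.1, f0_ghz=2.93):
    return (v / v0) ** 2 * (f_ghz / f0_ghz)

print(relative_power(1.1, 3.2))   # ~1.09x - frequency alone is cheap
print(relative_power(1.2, 3.6))   # ~1.46x - jacking Vcore is what hurts

Same die, same capacitance: a speed bin or two barely moves the needle, but the moment extra voltage is needed, power climbs with the square of it - which is why not needing to jack Vcore is the good sign here.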

In the end it'll be the same - you'll hear the naysayers claiming 'that's it, I was expecting 4GHz'. And then you'll hear 'I was expecting a 50% jump'... just like K10 over K8... ummm, scratch that...

Funny how the AMD fans no longer seem interested in the K8 to K10 jump (if there was one); it has some nice features, but the core itself has a lot of theoretical differences, just not a lot of seemingly measurable ones.

I think the words of Eve6 ('Inside Out') sum up AMDZone:

I burn, burn like a wicker cabinet, chalk white and oh so frail [K10]
I see our time has gotten stale
The tick tock of the clock[INTEL] is painful
In all sane and logical, I want to tear it off the wall[SUE INTEL]
I hear words and clips and phrases [NEHALEM BENCHMARKS]
I think sick like ginger ale
My stomach turns and I exhale

Tonus said...

According to Digitimes, Intel will release three Nehalem CPUs by year's end.

"Although official model names have not yet been set, the CPUs are currently identified by the codenames XE, P1 and MS3 with core frequencies of 3.2GHz, 2.93GHz and 2.66GHz, respectively. All three have a TDP of 130W, 8MB L3 cache and will support simultaneous multi-threading (SMT) technology, the sources detailed"

Does the TDP of 130W seem high for these processors?

Anonymous said...

Does the TDP of 130W seem high for these processors?

It does a bit, but some key info is needed before making a judgment on it.

With IMC, and addition of SMT, you would expect some increase in TDP due to the extra transistors (also keep in mind these are on the same process node).

More importantly, actual #'s are needed... TDP bins have a market/business aspect to them as well. Suppose the 3.2GHz part cannot meet a lower TDP bin, but the other 2 can... would Intel create 2 separate TDP bins at initial launch, or just lump them all into the same bin? Intel generally will not split bins like that (AMD, on the other hand, will create hundreds of bins - see the quad desktop lineup).

That said, the # appears to be a bit on the high side; but keep in mind Intel has recently been rather conservative with their TDP ratings.

Orthogonal said...

The TDP is high because it is for the Bloomfield platform. Even if the lowest speed chips (2.66GHz) come nowhere near 130W TDP, the motherboard manufacturers are still compelled to design a solution for it, since the platform is intended to be overclocked and can thus reach those thermal limits.

Anonymous said...

"Does the TDP of 130W seem high for these processors?"

I have found this very odd... the QX9650 is rated at 125 or 130W, but all the review data shows actual consumption somewhere near 1/2 to maybe 2/3 of that.

In all, it blows a hole in AMD's PR mantra that Intel's TDP is an average, when the measured full load is 1/2 of the TDP envelope.

The only rationale I can come up with is that they want to keep the TDP power bins consistent for the product groupings. I would guess it also makes it easier to up the clock and release faster processors within the same generation, since you have already dictated to the OEMs that their thermal solutions must meet a higher standard. Who knows...

Anonymous said...

"The only rationale I can come up with is that they want to keep the TDP power bins consistent for the product groupings."

Exactly... if you are only releasing 3 SKUs, why would you use 2 different power bins (except AMD, of course)?

Folks need to factor in the marketing on this - who is buying Nehalem desktop quads in Q4'08? How many of those are worried about 90Watt vs 130Watt?

People put too much emphasis on the TDP rating - it is mostly to provide some design targets for the MOBO makers. Folks seem to confuse TDP (which is a design spec) with actual power consumption.

My guess, as I mentioned before, is that the 3.2GHz part falls out of the next lower power bin. Rather than having 2 different power bins for the first release, it is simpler just to put them all in the same bin (not to mention it gives you that much more margin during the binning process).

Folks need to wait for the actual power measurements.
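
For what it's worth, here's a minimal sketch of that single-bin logic in Python - the bin ladder and the per-SKU worst-case numbers are invented for illustration, not actual spec data:

# Hypothetical TDP bin ladder and made-up per-SKU worst-case power draws.
BINS_W = [65, 95, 130]

def common_bin(worst_case_watts):
    # Rate every SKU at the smallest standard bin covering the hungriest part.
    worst = max(worst_case_watts)
    return min(b for b in BINS_W if b >= worst)

print(common_bin([88, 97, 112]))   # -> 130, though the slowest SKU alone would fit 95

One bin for all three SKUs means one thermal design target for the MOBO makers and extra margin during binning, at the cost of a scary-looking number on the spec sheet.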

Anonymous said...

331... 112... 213... 218... 122... 95... 103... 21... 23... 35... 36... 34... 49... 11... 6... 5...

This is the # of comments over at Scientia's... anyone see a trend? As the articles have become just a bit more biased and baseless (Asset Smart garbage) - is that site slowly going the way of Sharikou's?

Anonymous said...

Okay, okay, all is forgiven. I stumbled onto this post by Scientia and forgive him for his total lack of understanding of why AMD is finished. He can’t be held responsible for anything he posts on AMD, as based on this post the guy has the comprehension of less than a fly when it comes to what CPUs are and how they are architected. From the highest level of architecture abstraction down to the most basic decision about what gate material to use, he is clueless. Even if he assumed his readers had the IQ of a fly and he that of Einstein, he wouldn’t describe it like this. The only logical conclusion: this guy is one dumb fuck!


“Okay, let's go over how chips are designed. At the start you have a research lab. This lab would be working the specific recipe of a component like a gate. I assume these labs would have lots of Phd's. They probably use some computer modeling to predict gate behavior. They probably use some analysis like electron microscopy or perhaps xray imaging. I assume the rest is a lot of trial and error. I assume they try out promising doping agents and when they find something useful they then work out the best gate dimensions.

Eventually, they come up with working gate designs and these go into a CAD library. I assume these then go into low level circuits which are then checked and tweaked by circuit designers. These circuits would then become the basis of the macros. Presumably the microprocessor designers work primarily with macros. It looks like these macros are something like 40,000 transistors apiece. The Power6 design says that it uses more than one kind of transistor; I wonder if this is true of K10. There also appears to be more circuit tuning built into the Power6 timing tree than was used with Power4. We know that a timing tree has a PLL at the base and ends with buffers driving latches. Ideally, every branch of the tree would be the same length with all the wiring and transistors having identical dimensions and properties. Obviously, this ideal is not going to happen so you allow tuning however you can't tune out manufacturing variation.

Once you have the timing tree designed and macros in your CAD library you start designing. The only problem is that you are trying to design for multiple limitations at the same time like speed, power draw, cost, etc. So, you have another round of trial and error where you create preliminary designs and then analyze them in several different ways to make sure that they are feasible and then make adjustments until you converge on a design. I assume at this point your design then shifts from an abstract design to a physical design and you begin laying out real circuits. I'm going to assume again that these are then run periodically as test wafers so that you have real data flowing back into the design process. I have no idea how many tests you would do since each test would require masks and masks are expensive. I'm assuming that these tests would be in between the first demonstrated SRAM tests and so-called first silicon. By the time you hit first silicon the design should only need tweaking and verification.”


Materially, it changes nothing: this blogger doesn’t have a clue, and AMD’s clock has run out. Tick Tock Tick Tock, AMD’s business model is BK.

Anonymous said...

"People put too much emphasis on the TDP rating - it is mostly to provide some design targets for the MOBO makers. Folks seem to confuse TDP (which is a design spec) with actual power consumption."

I am afraid you are talking to brick walls in many cases if you take this explanation to more visible venues. The TDP debate is one that sticks in my craw and touches a nerve.

TDP is a spec, people, a spec! (*eJack thinks to self -- as if I need to explain that to the people who post here :) ) ... TDP is a power figure that both AMD and Intel give to OEMs: the minimum dissipation their thermal solution must handle to ensure the processor operates within the safe temperature zone under worst-case conditions. AMD specs it as max power, derived from the maximum possible current and voltage their CPUs can handle ... Intel specs it by exercising the CPU with worst-case loading and setting the limit to ensure the worst case is captured! Read their specs, for god's sake :) ...

It just touches a nerve....

Jack

Tonus said...

Thanks, all, for the explanations; it's one of the reasons I read and participate here. You can ask a question and get comprehensive replies that even someone looking from the outside in (i.e., me) can understand, and which sound reasonable.

Every bit helps!

SPARKS said...

Jack-

“TDP is a power figure that both AMD and Intel give to OEMs: the minimum dissipation their thermal solution must handle to ensure the processor operates within the safe temperature zone under worst-case conditions”

I really think that no matter how hard you try, even with your eloquently perfect definition above, you will never get most AMD pimps to understand this dynamic.

Case in point: I was running (and still am) a QX9770 at 4.06 Gig quite nicely for weeks with nary a glitch. This was, of course, before Dementia challenged me to Prime95.

The 4 cores, at full swing, kicked in the thermal limiter on the chip. I throttled back the bad boy to 3.86 and it ran this thermal/torture gauntlet for a half hour. I shut it down out of boredom. It would have run all day.

I learned a couple things.

The chip, running normal dynamic loads, will rarely if ever see 130+ W.

At 3.2 stock speeds, with a stock cooler (yuk), full-load runs at 130W would be a piece of cake.

If anyone thinks there isn’t any headroom factored into the speeds and thermals on these chips, with their published specifications, then a reality check is in order. I know this because mine comes with a juicy unlocked multiplier. Hello. I’ll see your ~700 MHz, and raise you one Gig on water.

Finally, regarding Nehalem: AT THE SAME 3.2 Gig speeds, if Nehalem can trounce the QX9770 by any significant margin (which it will), they don’t need to go to 4 Gig, even though they can with the right cooling. Hey, why give away the house when you’ve got lunatics like me willing to pay the extra rubles for the prime cuts?

Which, I will. (Sorry, I couldn’t resist)

And, Tonus, as far as the explanations are concerned, the boys are 100% on the money. I bought the hardware to prove it. Besides, we’ve already got 4 GHz. We’re already there; you just have to be an extremist with excellent cooling to find out.

Clock till ya rock!
QX9770 Hoo Ya!
Nehalem, IMC HOO YA!



SPARKS

SPARKS said...

Mack The Knife-

Nordic Hardware-

I hate/love to say this, bro, but my QX9770 @ 4.06 is smokin' Super Pi @ 11 seconds. I think GURU's right (again) when he said there isn't that much more of a performance increase. 13 seconds for Super Pi ain't squat, ---- yet.

I think it's a server thing that will be the defining "Spec".

SPARKS

SPARKS said...

Yo, Giant! I think we’ve got a winner here.

http://www.fudzilla.com/index.php?option=com_content&task=view&id=8111&Itemid=1


Well, at least for INTC chipsets, anyway. It looks like the 4870 card can give the NVDA 9800 single-chip cards a kick in the teeth. As I suspected, and I am glad I waited for the 4870 to surface, as opposed to the 9800 X2 I nearly purchased a month ago. Two 4870’s in CF should do rather well. It will not be NVDA’s 280 in SLI, however, but it will be close enough. Frankly, I HATED the idea of giving any money to NVDA. I’d rather give it to AMD--------DOOH!

Ironic ain’t it? Me giving money to AMD, and ATI is carrying the company! I swear it’s making me manic, like some twisted, perverted love/hate thing.

Business sure makes some strange bedfellows these days.

SPARKS

InTheKnow said...

I stumbled onto this post by Scientia and forgive him for his total lack of understanding of why AMD is finished.

If you want to be helpful, how 'bout correcting the misinformation? Just posting it and leaving it hanging only propagates the errors.

I'm more offended by his overbearing tone and insults than many, but when I cross post, I at least try to correct the errors. At least that adds value to the conversation.

Anonymous said...

Intheknow..

Tomorrow's post will tell you how it's done. Or if I finish some work and take care of the honey early tonight (wink wink), you'll get the story by morning.

Anonymous said...

"Two 4870’s in CF should do rather well."

Haven't seen the cooler on the 4870, but if it's anything like the 4850, you'll need to rip that crap off and put on a real cooling solution.

The 4850 is pretty nice, but 70-80C at IDLE???? WTF??? Granted, some of this is due to the fact that they greatly cut down the fan speed to keep it quiet while idle... but that is just crazy.

So Sparks, if you buy a 4870, and especially if you x-fire them, do a search for 4850 cooling solutions... some folks were getting the temps down by well over 30C (both idle and under load). Perhaps they put a better stock cooler on the 4870?

It's a shame that with a pretty good card (4850) and pricing, the cooling on it is so crappy.

Anonymous said...

Oops - just looked at some of the links, looks like they improved the cooler a bit on the 4870...

Still 80C though? (in my view that's too hot)

Anonymous said...

So the question was asked: how do AMD, INTEL and others design these complex chips, with hundreds of millions of transistors all connected in complicated ways, get them to run at GHz frequencies, and have them work? Damn, they do a billion different permutations of things without an error (well, most of the time). For perspective on just how fast they work, take a chip clocked at 2GHz - not the fastest by any means, but still fast. 2GHz means that the clock that controls all the logic execution has a cycle period of 500 picoseconds. That is fast! The whole chip calculates lots of bits, and that is a bunch of high or low voltage signals racing across multiple layers of metal, turning a transistor on and then turning it off. How do we go from one generation, say 65nm to 45nm, and make these complex things that make everything work better?
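
As a quick worked check of that period number (my arithmetic, nothing more):

T = \frac{1}{f} = \frac{1}{2 \times 10^{9}\,\mathrm{Hz}} = 5 \times 10^{-10}\,\mathrm{s} = 500\,\mathrm{ps}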

First, how long is the cycle? It takes about two years from one node to the next, but the work to define, say, 32nm started many, many months ago, both on the chip design side and the process side - way more than two years before the chip finally arrives at market. It isn’t like the 45nm team finishes 45nm and then moves on to 32nm. At most companies you have an early definition team that does much of the research; then, as things scale up, the developers roll off the 45nm design and process as it is finished and are added to the 32nm team, which quickly doubles or triples or more in size for the final two years of execution.

First off, remember there really are two parallel activities. On one side you have the chip designers and on the other you have the silicon technologists. Both have their unique expertise. Neither can wait for the other to finish their stuff before starting their own. Each will make midstream corrections, if the teamwork and integration is good, as each learns something during development.

Both groups have the same goal. In two years or so the chip designers will tape out a new CPU design that will be faster and use less power per calculation than the last generation. The process guys need to have a process that yields and meets the targets agreed on more than two years earlier. Lastly, the factory that costs billions and takes a year to get ready must come online at the same time too. Lots of things have to fall in place, all taking lots of money!

In this story I will leave tidbits as to why AMD and the IBM fab consortium will always lag an IDM (INTEL, in this case) in this competition.

Let's begin at the beginning, shall we…

The early designers and process guys meet often and debate what is needed and what is achievable. The silly designers don’t know device physics and always want faster transistors with lower leakage. They also want low-resistance and low-capacitance wires, and the ability to draw them in any shape and form at any density. Pretty much they want to have their cake and eat it too. Then it's up to the process guys to figure out what they think they can offer in 3 years' time. Here is where the metal hits the road. It's years before the process guys have proven they can make the transistor and yield it, yet they need to commit it to the designers so they can start their designs and figure out how many transistors they will have, how big a cache, how many metal layers, and ultimately how big a die it will be and how hot it will run. So in some smoke-filled room an agreement is made and the teams go their merry ways. Now imagine the IBM team - with, who is it, Chartered, Samsung, TSMC, AMD and every other paying john - in that room. They all want something for their application. IBM has got to settle on something: how fast and leaky a transistor, what kind of metal stack. It will be a messy compromise. You think a company like IBM or TSMC commits at the bleeding edge for designs that might include everything from a PowerPC to a game chip to who knows what? No way - they will be conservative. Not unlike any congressional bill. Who does INTEL need to compromise with? NOBODY. They know they can push the envelope, as their process only needs to support one or two products, and they know the people designing them really well. They only need to satisfy one group of people, and they are the same company. Which system will give you the better targets with the faster transistor?

Now some retard talked about PhDs and fancy machines to figure out gates. Sure, that kind of thing happens too. What really happens is that once targets are agreed to, the silicon guys release SPICE models to the designers. These are things like drive currents, capacitances, junction leakage, variability models. Now, do the silicon guys have working silicon to measure? Hell no - all they have is a bunch of extrapolated targets of what they think can be done. Who do you think stretches harder, the guys at IBM or the guys at INTEL? So once the designers get their SPICE files, they pretty much go off and start designing stuff. They use lots of computer tools to turn logic described in RTL into logic gates. This would be stuff like adders, MUXes, buffers, cache arrays, decoders, PLLs. Some of this stuff you want to check out, so hopefully you design it, get it on a shuttle run in the development fab, and get it back to characterize and tweak your designs and models for the final CPU product.
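
To make "designing against extrapolated targets" concrete, here's a toy first-order gate-delay estimate of the kind a designer can compute from nothing but target numbers - the classic CV/I metric. Every value below is invented for illustration, not any real node's SPICE data:

# First-order gate delay from process targets: t ~ C * V / Idsat.
def gate_delay_ps(c_load_ff, vdd, idsat_ua_per_um, width_um):
    i_on = idsat_ua_per_um * width_um * 1e-6   # drive current in amps
    c = c_load_ff * 1e-15                      # load capacitance in farads
    return c * vdd / i_on * 1e12               # delay in picoseconds

# Hypothetical "current node" vs "promised next node" targets:
print(gate_delay_ps(2.0, 1.2, 1100, 1.0))   # ~2.2 ps per gate
print(gate_delay_ps(1.6, 1.0, 1400, 1.0))   # ~1.1 ps - the promise designers build on

If the process guys later walk back the drive current or the capacitance, every delay estimate built on those targets moves with it - which is exactly why those midstream model updates matter so much.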

Now the process guys are also busy; they have goals for transistor and metal targets, and they need to invent new processes or find new materials to hit them. In the case of 90nm and 45nm, INTEL figured out it needed more performance; in the end they showed the world strained silicon and HighK metal gates. They simply had to stretch further. In the other camp you had the IBM consortium compromising and agreeing to lesser targets. The net result is AMD gets stuck with an inferior process. So the process guys will be running that test shuttle from the design guys and putting in new tools, recipes, and materials to hit the targets and get working silicon. With each new run they learn something new, and if they need to update the SPICE models they do that and give it to the design guys so they can update their designs. Let me ask you: if the IBM guys find out something in the middle, good or bad, do you think they would run over and tell the designers in real time? My guess is probably not. It's a different company; they probably hope they can work through it. Another reason why an IDM like INTEL is better positioned: close design and process interaction means midstream learning updates the design guys. It could easily go the other way, too, with the designers discovering they need something from the process guys that wasn't asked for earlier.

So spin forward a year or two. The design guys are getting close to TO and the process guys have the shuttle chips on target and yielding. You tape out and start the tight debug of the design. Now the CPU is no test shuttle, so you discover all sorts of new things. You think AMD, doing this in Dresden with none of the IBM development engineers, is in a better position than INTEL, which is likely doing the first stepping of the CPU in the development fab with the same engineers that developed the process? Again, INTEL has the advantage here. So figure about a year of debug and qualification of the design, through a few steppings, to fix logic bugs, speed escapes, and silicon-to-design gaps. At the same time the process guys are tweaking the process to improve yields and performance. Again, do you think INTEL has an advantage doing this in the development fab, versus AMD doing it in Dresden when development was done in Fishkill? Or versus nVidia or ATI/AMD, with their big gaps in shared learning with the process guys at TSMC in Taiwan? Who has faster learning and better interaction?

What I haven’t spent time on yet is the details of what the process guys are doing, or what the design guys are doing to get from targets to the end, but that is a story for another night.

In the end, a few lessons. Tight coordination of design and process is required many years before the final product comes. It takes huge sums of money committed up front, for a product not yet designed, on a process nobody yet knows how to do. You think doing this spread out across a few continents, with people from 5 or 6 different companies with different cultures, business needs and goals, and short of money, is better than one team from one company, with one culture and the money and motivation to keep making money? I think I'll pick the one team with a unified goal over that consortium any time. That is why, even when Prescott was all that INTEL had, it was clear that INTEL would win in the end. AMD has no game plan to win the war. It's like saying Boise State can win day in, day out playing football in the Pac-10 against the likes of Oregon, USC, UCLA, and Cal. They may win one or two games, but in the long run the better-funded program with better talent always wins.

AMD has neither the talent nor the money. Tick tock tick tock. The clock has run out.

SPARKS said...

“Oops - just looked at some of the links, looks like they improved the cooler a bit on the 4870...”

You're right about 80C being too hot. I don't know how the bastards survive for years.

Nah, you want coolers? Quiet as a mouse, cool as a cat, I’ve got something for ya…….


http://www.frozencpu.com/products/7050/ex-blc-442/Danger_Den_3870_X2_Video_Card_Liquid_Cooling_Block_w_RAM_Sinks_DD_-_3870_X2.html?tl=g30c87s585

A 4870 setup is not too far away. Danger Den and Koolance are the big kahunas in this market.

You’re gonna need a pump.

SPARKS

Anonymous said...

Pretty nice, but for us mere mortals I was thinking a 4850 with this (~$30):

http://www.hardforum.com/showthread.php?t=1261100

The work was done on an Nvidia 8800, but I don't see why it wouldn't work just as well on the 4850/4870 (either passive or with a fan strapped on).

On a side note, for those not afraid to get their hands dirty (and void a warranty), I would recommend investigating simply pulling off the stock heat sink, cleaning it and the chip, and re-applying thermal compound. I saw an 8 deg C drop on an overclocked 8500GT I have (it's been a temp solution until I figure out Penryn quad/Nehalem and an upgraded video card).

Orthogonal said...

For anyone who is interested, Intel has publicly released a technical document regarding 45nm process variation compared to the previous two process nodes. It's a good read if you're into that sort of thing ;)

http://download.intel.com/technology/itj/2008/v12i2/3-managing/3-Managing_Process_Variation_in_Intels_45nm_CMOS_Technology.pdf

SPARKS said...

Did I say the 4870 setup wasn’t too far away? No sooner said than done.

http://www.fudzilla.com/index.php?option=com_content&task=view&id=8153&Itemid=1

I’m in.

BTW, I have a new-found respect for the guys on this site. It is very rare for anyone here to speak of something they don't have direct, intimate knowledge of.

However, me, as a newbie, can get a pass once in a while, that is if I don’t really screw the pooch. That said, compared to what you boys dissect down to the atomic level regarding CPU’s, from where I sit these GPU’s are fat, bleeding, leaking power hogs/slobs. 230W plus! That’s nearly two QX9770’s at full tilt at stock speeds. WTF!?

Obviously, the process sucks. The dies are HUGE, and anything less than a 550W power supply need not apply for a dual-card setup.

The bottom line is, we as consumers are not seeing this trend going away anytime soon. NVDA 200-series chips are larger than mosaic tiles in a Brooklyn pizza oven. Further, their yields must suck, simply because they are so big and there are 1.4 billion places to get it wrong.

(If you think I’m baiting anyone to comment on Larrabee, you’re damned straight I am.)

WHEN?

Tell Big Paulie the time is ripe to put these two mediocre bit players to rest!

SPARKS

InTheKnow said...

Sparks, from what I've seen Larrabee will be neither small nor something that belongs in a low powered system. According to this article at Ars Technica, the part will be ~2" square and will draw over 150W.

And it isn't due out anytime soon. From what I've seen in the press, I'm expecting a Q4'09 launch. Hopefully when Intel presents a paper at Siggraph in August we'll get some more details.

Anonymous said...

http://www.geizhals.eu/a346956.html
(link courtesy of Fuddie)

2.6GHz Phenom X4, with a mere 140Watt TDP... apparently AMD has seen increased customer demand for power-sucking CPU's. So now 1.9GHz-2.6GHz quads, with all sorts of TDP's, to serve what AMD has called a niche market on the desktop, with a top-to-bottom price range of about a buck-fitty (OK, I exaggerate, but the price range between top bin and bottom bin is absurdly small given the # of products AMD has in the desktop quad space).

Could you imagine the cooling needed for this if paired with two 4870's x-fired?

And before JumpingJack gets mad, the TDP is a good indicator here because AMD already has a 125Watt bin, and I guess this couldn't fit into that bin (especially as this is the last 65nm speed bin so there is no need to plan ahead for future products).

SPARKS said...

I. T. K. - Nice link. Thanks.

4Q 2009 - I think I'll abuse a few apprentices today.


SPARKS

Anonymous said...

"AMD has neither the talent nor the money. Tick tock tick tock. The clock has run out."

AMD's stock closed below $6 today, the first time since May 1st (May Day, or M'Aidez as the French say - means "Help Me!" :). It was $7.72 just 10 days ago. In January 06 it was over $40... Incredible.

If oil keeps climbing & the economy goes into a recession, I predict AMD will "BK" (Shark00k's Engrish for bankrupt) within a year.

Anonymous said...

AMD's stock closed below $6 today, the first time since May 1st (May Day, or M'Aidez as the French say - means "Help Me!" :). It was $7.72 just 10 days ago.

There were rumors 'on the street' that AMD may be looking for another round of equity (which would lead to more stock dilution)... and I think this was the cause of a good chunk of the drop. You are also continuing to see a steady trickling of higher level people leaving, which probably isn't helping.

As I've mentioned numerous times, a slowing world economy hurts AMD much more, relatively speaking, than Intel. Intel is still capacity constrained, so AMD could theoretically make up the slack without having to resort to the standard slash-and-burn price strategy. As demand lessens and Intel can theoretically supply a large % of it, it becomes more of a price and performance war, which AMD is not winning until it:
- gets to high yields and significant volumes on its 45nm process (mid 2009?)
- has a competitive product to enable a decent ASP (?)
- releases some new dual cores, so it doesn't have to keep cutting K8 prices, which are pretty much already cut to the bone (H1'09?)
- fields a viable higher-end notebook solution - simply eating away at the low end of the NB segment when that is the fastest growing segment is not really helping AMD; it is just compensating for desktop losses. Also, this low-end segment is about to get a huge squeeze as Atom(s) disperse into the low-end market. (?)

At some point Wall St could potentially choose to hammer AMD through shorting (like some have speculated was done to Bear Stearns and some of the financials). There already is a significant short position (>15%).

With the shenanigans Ruiz has been playing, between the "I'll tell you sooner rather than later about Asset Smart/Light/Firesale" routine and the now-standard last-minute pre-earnings warnings, the street, if it desired, could crash the stock: the market cap is not that big, and a few more downgrades plus someone taking an even larger short position would do it. The top 10 institutional holders have more than 40% of the stock. It would only take a few of those abandoning ship (though with institutional holders it takes a little time to exit a position).

We'll see what happens in a few weeks when AMD reports the Q2 #'s. My predictions:

- 'Good progress' toward Asset Smart (it may even get a new name); though Hector will continue to be evasive about when the details will be rolled out ("soon")
- Operational losses continue, but better than Q1. AMD will downplay the overall net loss and keep on the operational loss theme.
- Continued ATI acquisition losses (in my view, this is a bit of a financial shell game at this point, simply hiding losses under the guise of "one time" charges)
- Hector predicting possible Q4'08 operating income breakeven. Analysts will not realize that even if this is true it will likely mean a return to operating losses in Q1'09 due to seasonality. Also lost in all of this "operating" shell game is that AMD will continue to post overall net losses.
- Cash position will be said to be a bit concerning, but "no extraordinary measures will be needed" (Note: this is what was said before the Abu investment and my guess is the street will not buy it this time around)
- No mention of the NY fab, probably no mention of the F30 conversion.
- 45nm "on track" with another non-definitive scheduling term like "in production", "shipping to OEMs", "sampling". AMD of course will avoid committing to an "available on the market by X/Y date".
- Graphics will be the bright spot. Possibly positive operating income, with strength expected throughout H2'08.

InTheKnow said...

Graphics will be the bright spot. Possibly positive operating income, with strength expected throughout H2'08.

I'm becoming more and more convinced that they intend to hide their poor processor performance behind graphics centric "platforms."

I saw an announcement about their upcoming fusion product (aka shrike) on Ars Technica today that seemed to support that viewpoint. It looks like it will use the long overdue K10 dual core derivative on 45nm.

The article notes that "AMD has yet to publicize very many details of its 32nm transition, but the company isn't expected to move to 32nm until 2010 at the earliest". With Intel moving to 32nm towards the end of next year, I just see them getting further and further behind on the cpu.

If they can't compete, they need to hide their weakness, and it looks to me like they are going to hide it behind ATI's graphics.

Of course it goes without saying that Intel will have Nehalem on 45nm with an IGP in 2008. It looks like AMD is "leading" from about a year behind.

Anonymous said...

The graphics business is already a very competitive business with nVIDIA. The one leveling factor there is that they both use TSMC as their main foundry. But there is very little that separates these two. It is a leapfrogging affair until one, either through stupidity or bad luck, falls on its face in one of the leaps.

The larger problem is that the returns in graphics can barely fund reinvestment in that business, and are not nearly enough to fund any portion of the CPU business.

The clock has run out on AMD.

Anonymous said...

No rest for the weary:
http://www.eetimes.com/news/semi/showArticle.jhtml;jsessionid=TGYOWB3LUMXFSQSNDLOSKH0CJUNN2JVN?articleID=208801373

(Yes it references a blog, but eetimes is a very reputable site)

For those who don't want to follow the link:

1) Next iPhone design (beyond the 3G) will likely be based on Atom (either 2009 or 2010)
2) It will most likely be based on a 32nm process!
3) Savitz quoted Feeney as saying the Atom development program is "well ahead of schedule," and that this could allow Intel to demo the 32-nm Atom processor at the Intel Developer Forum in San Francisco in August. While I doubt this would be demo'd at IDF (this would be well over 1 year before 32nm is due out), who knows? Intel has had the 32nm development line going for some time and demo'd a working SRAM vehicle a while ago.... so you never know. (I'm skeptical)

Looks like getting the Apple business a while ago was a real coup. Apple's PC growth has far outpaced the overall x86 market growth (though it is still a small absolute % of it), and it apparently is opening up other business opportunities within Apple.

Anonymous said...

"AMD has yet to publicize very many details of its 32nm transition, but the company isn't expected to move to 32nm until 2010 at the earliest"

I hate it when people show some sort of surprise at these timelines... For reference, I believe it is now 2008 (right?), and AMD has yet to release 45nm, so how could they possibly be thinking 32nm in 2009?!?!? We would be talking about a 1-year transition for this to happen - not only would this not be technically feasible, it would be economic death.

Putting aside Dementia's crazy past technology schedules, you have to figure 2 years. With 45nm in Q4'08(?) - and I'm talking about product you can buy, not some 'shipping' or 'sampling' crap - I think it is optimistic to even think 32nm in 2010. Best case in my view would be Q4'10, but keep in mind AMD has to implement HighK/MG along the way, as well as figure out double patterning (I'm assuming that even with immersion litho, double patterning will be needed for 32nm).

The technology treadmill is becoming rather daunting... simply scaling nodes (and ramping to volume) is a massive undertaking, but now you are talking about doing this while maintaining 2 CPU architectures (I'm assuming a low-end line like Atom is necessary now), plus chipsets, plus graphics solutions (and for AMD the graphics have to be done on a completely different process flow at TSMC).

I don't see AMD succeeding if they maintain the status quo - they will need to bring GPU production in house or start outsourcing CPU volumes (at least the low-end CPU designs). The key problem is AMD now has 2 potential failure points (their own tech node transitions and TSMC's) plus 2 separate sets of lot files and design integration support to maintain.

SPARKS said...

Well, here we are, well into the summer of 2008, and the CPU wars have gotten rather boring. Even the AMD lunatic fringe, with their wild-eyed fantasies concerning Nehalem performance, can barely elicit a yawn.

Can you blame me? I got exactly what I paid for. Monster X48/P5E3 Prem., monster DDR3-1800, and quad monster QX9770 are all beating the tar out of everything else in sight @ 4.06 Gig. Hey, it’s lonely at the top of the hill. ;))

Incidentally, Ano moose, you may be right about the AMD/economy thing. There is something happening at AMD/ATI. It just so happens it’s something that has taken me out of the summer doldrums and will simultaneously forestall an early AMD BK.

Sure, Barcelona is a raging monumental failure, no doubt; hence the performance-per-watt, low-cost marketing bilge. But from where I sit, the $300 4870 is a world beater when compared to a slightly better NVDA 280 at $650.

Frankly, I don’t mind spending the money. Since I can only run one 280 on my X48 MOBO, two 4870’s at the same price is a no-brainer. Besides, it’s the only way AMD is going to pry $600 plus from my fingers. Conversely, giving NVDA $1300+ would be useless since SLI will not work. NVDA needs a wake-up call and a return to its “CORE” business, the way INTC did a few years back. They just lost a $1300 sale from me.

The point is, I suspect there are a lot of folks thinking the way I am. And taking a 10% performance hit for a $350 savings, AMD/ATI has a clear winner, up to and including a single-slot solution for all. Obviously, you have been watching AMD stock tank with the economy, but NVDA has also tanked along with it, at 19 and change.

NVDA’s chip is huge. Its ROI will not go as planned against ATI’s performance-per-buck solution(s). Further, as of today, some of NVDA’s exclusive partners have jumped ship to ATI/AMD. Therefore, money WILL be coming in for AMD. The money will not come from processor sales; that’s for sure. It will come from graphics, and it just may buy AMD some badly needed time to survive another few quarters, or perhaps reel in another sucker.

I know exactly where AMD is going to get $650 in the next few weeks.
Trust me.

http://www.overclockersclub.com/reviews/powercolor_hd4870/


http://www.tcmagazine.com/comments.php?shownews=20575&catid=2

SPARKS

Anonymous said...

"They just lost a $1300 sale from me."

They are going to lose a whole lot more going forward... with no QuickPath license, due to the shenanigans with SLI, they have no ability to make chipsets after Penryn... hence, moving forward, all enthusiasts will have a choice between Crossfire and a single-slot Nvidia solution. But hey, I'm sure Nvidia feels a moral victory over the big bad Intel by not licensing SLI (good call, Jen-Hsun)... now there will be no SLI at all for 75-80% of the market, they can grapple with AMD for the remaining scraps of the dual-GPU market on AMD platforms, and as an added bonus they will lose all remaining chipset biz on Intel CPU's (except for whatever legacy Penryns are sold until Nehalem is fully converted over).

Many years down the line, there will be a few harsh lessons about Nvidia and AMD, but the most important one will be to check your ego at the door and follow sound business principles instead of tilting at windmills. Intel nearly fell into this themselves with the continued push of P4 when it was clear it was the wrong direction, but the difference with them is they appeared to have learned the lesson.

SPARKS said...

“They are going to lose a whole lot more going forward...”, et al.

Your comments mirror my sentiments exactly, transistor by transistor, gate by gate.

I never enjoyed NVDA’s way of doing business over the years, but business is business, quite right, quite right.

However, this SLI proprietary/exclusivity thing, in an industry practically built and founded on ISA implementation (ah, the good ole days), has stuck in my craw like a seething, festering sore.

My loathing for NVDA can be addressed on many different levels, from disgust to full-blown rage. There are times when I envision Jen Hung Son skewered (3-inch diameter) on a Texas-sized pig-roast pit (with an apple to keep him from squealing) while Big Paulie is at the disconnect switch, firing up the 220V 3P, 5 HP motor. Craig Barrett attentively stokes the pit with 50-pound bags of charcoal and hickory. Pat Gelsinger and Andy Bryant stand by, prodding the pig with 3-foot-long meat thermometers.

Oh, the joy! Sorry about that, I got a little carried away.

But it will be extremely interesting to see how this QPI thing pans out. Pray, I hope it unfolds exactly as you say, word for word. I also hope the boys mentioned above cut NVDA absolutely no quarter. I'll take mine well done with crispy skin.

It would give a new meaning to defining me as an ‘Enthusiast’.

SPARKS

Anonymous said...

Tick Tock Tick Tock, nVidia's days are numbered.

Their competition lost their way in high-end performance CPUs, but they still make a good graphics card! What's more, the merger of the two companies got them some silicon knowledge too. All nVidia has got is a bunch of designers who need TSMC to make the dreams go. ATI, by contrast, got built-in process experts, and who knows, maybe Dresden will even fab a few high-end GPUs soon. Makes sense, as AMD CPUs at the prices they are going for are wasting valuable 65nm capacity. If AMD yields are as good as they claim, they probably get more good GPUs out of their 65nm process than they could out of a 55nm TSMC process.

The only can of whoop ass nVidia is going to get is when AMD-ATI spanks them from the bottom and INTEL from the top.

Tick Tock Tick Tock, nVidia's business model is in worse shape than AMD's.

SPARKS said...

Did you guys think I was kidding about the size of NVDA’s fat slob chips? The goddamned things make Penryn look like Atoms!!! Where will the ROI be at 650 bucks a pop?


http://www.guru3d.com/article/radeon-hd-4870-review--asus/2



SPARKS

Anonymous said...

"If AMD yields are as good as they claim they probably get more good 65nm GPUs then they could out of a 55nm TSMC process"

You are talking about two vastly different processes (AMD - SOI, TSMC - bare Si), not to mention the node difference above.

Also, I don't think AMD has that much excess capacity right now. They have some, but certainly not enough to cover ALL GPU production - and are you going to support 2 different processes for GPU's? Perhaps if AMD gets around to converting F30 or building NY, things will be different, but even if yields were perfect in F36 it would not make sense. Perhaps AMD should start with something a bit less critical, like chipsets, which they could then also outsource to UMC if they run into capacity issues.

Tick Tock, Tick Tock - have you looked at ANY financial #'s recently? Nvidia's business model may not be the best, but they are turning a profit (which I believe is the goal in business?)!

Anonymous said...

Tick Tock Tick Tock

Is that Larrabee coming on 45nm HighK/MetalGate?

Compared to TSMC's 55, 45, or 32nm SiON, nVidia will be getting some can of whoop ass.

InTheKnow said...

Is that Larrabee coming on 45nm HighK/MetalGate?

I'm not sold on Larrabee yet. I think it has potential, but I won't be surprised if the first iteration still lags NVIDIA and ATI.

Below is a quote from an NVIDIA PR rep. Sure the source is biased, but I think the points he raises are legitimate.

Taking on NVIDIA and AMD in the GPU space is clearly important to Intel, as it has accelerated its development by at least two years in order to promise something tangible by next year.

Berraondo clearly thinks that may have implications of its own. ‘Intel has shortened the development period for Larrabee, so you have to wonder what compromises were made in order to achieve that.’

Good point, and you also have to wonder what kind of product might initially emerge from this hastened roadmap. Berraondo appears to think that getting the technology right might just be the start of Larrabee’s challenges.

‘Even if Larrabee does deliver on the technology side, things like drivers and developer tools also need to be right,’ he said. ‘Intel’s lack of infrastructure, time and experience in this field means there are bound to be teething troubles.’

Anonymous said...

The clock is ticking in graphics

Think about who has the fastest transistors - the ones that can drive the fastest circuits.

Think about who has high yield and high volume on the most advanced process node, and could turn out chips with billions of transistors on them at low cost and high profit.

Think about who has the lowest-leakage devices, which would enable them to produce such large chips running fast at low power.

INTEL, INTEL, INTEL. Is it any wonder nVIDIA's CEO is crapping in his pants over the can of whoop ass that is coming!

Look, today Moore has given INTEL the ability to put in megabytes and megabytes of cache and more cores than OSes and programmers know how to use efficiently. But when you think about what can use those hundreds of millions of transistors in an efficient manner, it is GPU applications. In tomorrow's CPU you'll find 2 to 4 cores - not likely more for desktop applications - and a huge honking piece of the rest of the chip for the GPU.

Larrabee has a long road to go, but if any company has the technology, the design capability, and the money to develop the graphics drivers, it's INTEL. Look at the money they sank, and continue to sink, into Itanium. If they spend 1/2 that, they will crush nVidia - maybe not next year or in 2010, but by 2012, on the 22nm node, INTEL's silicon and manufacturing lead over the likes of TSMC and IBM will be even bigger, and they will have had perhaps 4 or 5 spins of Larrabee by then.

In 2014 we'll think back on nVidia, and there will be stories about them just like the ones we have now about DEC and Alpha.

The clock is ticking down for nVidia. And it's already over for AMD.

Anonymous said...

"I think it has potential, but I won't be surprised if the first iteration still lags NVIDIA and ATI."

Completely agree... all HighK/MG does is help with power and speed; Larrabee's success or failure will come down to design and SW/driver support. To think that Larrabee will be good simply because of the process it is made on shows a lack of understanding. This is a first-gen product that will also potentially suffer through driver immaturities early on.

What Intel needs to do is follow what appears to be the Atom model: get the first gen in the ballpark (it doesn't have to be better), show OEMs the product is serious, and have a credible roadmap for the next iteration. The first gen is simply there to get inertia behind the product.

Keep in mind we are talking late 2009 (more likely 2010) for the first product. By then Intel will be starting to ramp 32nm, which would allow them to do the 2nd iteration at 32nm which should further help with power and speed as they get the design sorted out.

Anonymous said...

"Think who has the lowest leakage devices that would enable them to produce such large chips running fast with low power"

Please take this Scientia/Sharikou type of logic away from here. Sure, transistors are important, but Intel had one of the lowest-leakage 90nm processes, and how did that work out on the Pentium 4? And yet that same process (which the AMD fanboys claim was terrible) worked pretty well for the original Core (mobile) product.

Power is not simply transistor (process) performance... it is transistor count, transistor size, sleep states, the # of high-Vt vs low-Vt transistors needed, etc...

And the same goes for clockspeeds!

Intel's process will give them SOME advantage, but let's not overstate/oversimplify things... in the end it is going to come down to DESIGN (and support).
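
One toy way to see the "design, not just process" point: split power into a dynamic term and a leakage term, and change only the design's low-Vt mix. All coefficients below are invented for illustration - this is a cartoon, not anyone's real power model:

# Toy power split: total = dynamic (CV^2f-style) + leakage (Vt-mix dependent).
def total_power_w(n_xtors, low_vt_frac, v, f_ghz):
    dyn = 8e-9 * n_xtors * v**2 * f_ghz                  # activity-weighted dynamic term
    leak_per_xtor = low_vt_frac * 60e-9 + (1 - low_vt_frac) * 4e-9
    leak = n_xtors * v * leak_per_xtor                   # crude leakage term
    return dyn + leak

# Same process, same clock - only the design's Vt mix differs:
print(total_power_w(7.3e8, 0.05, 1.1, 3.2))   # ~28W
print(total_power_w(7.3e8, 0.30, 1.1, 3.2))   # ~39W

Same transistors, same node, roughly 40% more power - the design choices can swamp the raw process numbers.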
