6.03.2008

The Atomic Breakthrough

Doubts (including my own) about the viability of Intel’s Silverthorne program have been rife. The company’s exit from ARM-based microprocessors shows that the business environment in the low-power embedded market is pretty hostile. When you have more than 10 different vendors, led by high-volume manufacturers like Texas Instruments and Samsung, the margins can get very unattractive. Intel’s re-entry, however, is a show of confidence and a belief in the ultra-mobile segment as a market with huge potential for growth. Having learned its lesson, Intel is now differentiating itself from ARM by sticking with the general-purpose, off-the-shelf and widely compatible x86 Intel Architecture.

It’s a great idea until you realise that the Atom processor isn’t really up to par with ARM-based processors when it comes to the flexibility of delivering the right performance at the right power draw. x86 is just too rigid in terms of performance, continues to be power hungry and remains too expensive to implement. My scepticism about Intel’s Silverthorne strategy was based solely on the belief that the right market for it (high performance, low power, and high margins) truly did not exist. I was wrong.

There is a growing (and very mobile) market where an Atom-based PC just happens to slot right in. It is a segment where high performance is required, power draw isn’t much of a factor and price isn’t the primary concern. I am referring to the in-car PC. Bill Gates once envisioned a PC in every home. Now Intel is betting on having a PC in every vehicle – that’s a potential market of ~72 million units annually by 2010. I know we’ve heard plenty about this in the past, but this time around the automotive industry is ready. Expect high-end/super cars to have built-in PCs in 2010 just to establish the concept, and expect wider deployment after 2012. This comes from actual product roadmaps, not just my prediction.

The industry’s move to the PC as the system that integrates the ever-increasing functionality of the modern car is driven by cost. Believe it or not, an in-car PC will be the cheaper alternative very soon once the volume kicks in. A single system that drives the audio and video, sat nav, phone system, internet, HVAC, telemetry, security and customised vehicle settings is fast becoming cheaper than having all the different parts bought, assembled and wired individually. Simplification goes a long way in high-volume automotive manufacturing, and the building blocks that integrate the PC with the automotive networking standards (CAN/LIN) are very mature. The advent of the Atom removed the cost hurdles that stood in the way of complete integration.
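For a sense of what that PC-to-CAN integration looks like at the lowest level, here is a minimal sketch. The message ID and field layout are invented for illustration; in a real vehicle these come from the OEM's CAN database:

```python
import struct

# Hypothetical example: a classic CAN 2.0 data frame carries an 11-bit
# identifier plus up to 8 data bytes. The ID and field layout below are
# invented for illustration only.
HVAC_SETPOINT_ID = 0x3B0  # hypothetical 11-bit arbitration ID

def pack_hvac_setpoint(temp_c: float, fan_level: int) -> bytes:
    """Encode a cabin temperature setpoint (0.5 C steps) and a fan
    level into an 8-byte payload: two data bytes, six zero pad bytes."""
    raw_temp = int(round(temp_c * 2))  # 21.5 C -> 43
    return struct.pack("<BB6x", raw_temp, fan_level)

payload = pack_hvac_setpoint(21.5, 3)
print(hex(HVAC_SETPOINT_ID), payload.hex())
```

An in-car PC would hand frames like this to a CAN controller, which handles arbitration and signalling on the bus itself.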

At the moment the competition is between fragmented ARM-based systems and x86 (Intel’s and VIA’s low-power CPUs). Intel is currently leading the pack with fully developed embedded solutions and with major vehicle OEMs and system integrators as partners. Even Microsoft has stepped up its push of the Windows Embedded platform, competing head to head with (automotive-grade) Linux, which is quickly gaining support to become the automotive standard. Either way, developers are finding it easier to design human-machine interface applications using the widely adopted x86 instruction set. The Atom processor is beginning to look like the breakthrough product Intel has been looking for.

314 comments:

SPARKS said...

“Intel has shortened the development period for Larrabee, so you have to wonder what compromises were made in order to achieve that”

I realize that this is from a biased source. However, is it possible that INTC has made breakthroughs as opposed to “compromises”?


http://news.cnet.com/8301-13579_3-9895892-37.html


And, Big Paulie seems to be very serious about Intel’s resolve regarding its position in future graphics.


"In graphics, as we move up the food chain, we're bouncing into ATI via AMD and Nvidia more than we used to. And I don't expect that to abate anytime soon."


http://news.cnet.com/nanotech/?keyword=rasterization


Go Big Paulie!


SPARKS

InTheKnow said...

Sparks, of course it is possible that Intel has made a breakthrough of some kind, but I don't think that is driving this pull in. I think Intel sees this as the right time to push this development and wants to be at or near the leading edge.

Given my premise, I can only see 2 ways to pull in the project: either throw more bodies at it (up to a point), or cut something out.

If you throw more bodies at it, then you have to do that early. It takes a long time (relatively) for new people to contribute to a project. They have to take a step back and understand everything that has been done before they arrived and why it was done that way. After a certain point, it is too late for them to come up to speed and still make a contribution.

The other, and I think more likely, choice is to eliminate features. If you dig through a lot of Intel's presentations in the last year, particularly around Silverthorne, you will find that Intel seems to be focused on eliminating feature creep. I get the impression that even if someone comes up with a feature that will completely dominate the market, it will just go in the next iteration of the product. They won't hold the current one.

I suspect that is what has been done with Larrabee as the earlier poster suggested. Get a respectable product out the door now, establish mind share, and really ratchet up the performance with the next generation product.

I also found this comment from the article you linked to be very interesting.

Gelsinger claims that big software development houses are excited about what they've seen so far from the project, but he did not offer any specific examples.

Note that it says "what they have seen", not "what they have heard". In my mind this has two implications. First, there is at least working prototype Si out there. Second, it implies that the "big software development houses" have been pulled into the design process early and are getting the features they want into the product before it is even released as a sample.

If that is really what Intel is doing, this shouldn't be a hard sell once they release it. The big boys already know what it can do, and know that it will do what they asked for. Now that is an exciting prospect.

Anonymous said...

LOL, are you an idiot or what?

A great design on a crappy process will NEVER turn out to be competitive compared to the same design on a great process.

Even in the Prescott days the Pentium 4 did better than AMD is doing right now with a crappy design on a crappy process.

If you don't have the fundamental building block, everything else suffers.

Ask a great builder to work with bricks, or give him steel and concrete; it makes a difference.

I guess you aren't a designer, are you? Given 30% faster transistors, 10x lower leakage, double the density and higher yields, you'd still pick the hotter, bigger option every time.

You don't need a clock, as you can't tell time.

Anonymous said...

Dude are you a designer or a retard?

Would you rather design on 45nm with 10x lower leakage, 20% faster performance and double the transistor density?

Or would you pick TSMC's 55nm SiON or IBM's similar 65nm to do a design?

You are the architect that prefers the bricks in your head to concrete and steel.

As you have no clock, time has no meaning to ya.

Anonymous said...

"Dude are you a designer or a retard?

Would you rather design on 45nm with 10x lower leakage, 20% faster performance and double the transistor density?"

please take your tick tock fanboyism out of here... or use your brain.

If you look over my comment, I said that to think process ALONE will make or break this product is naive. Simply stating that it's on a 45nm process so it must be good is rather ignorant (in the true sense of the word).

What will make or break Larrabee is the design, not whether or not it is done on 45nm.

"I guess you aren't a designer are you."

No, I'm not, but it is abundantly clear that you are not either. It is also abundantly clear that while you have a little background in process/manufacturing (I would guess at a manufacturing site at Intel), you only seem to know the powerpoint details and not the underlying fundamentals.

Of course one would like to use the best process when doing a design, but the process alone will not make the product good (just like the design alone will not make the product good).

So I'll go back to your original (ignorant) comment:

Is that Larabee coming on 45nm HighK/MetalGate.

Compared to TSMC's 55, 45, or 32nm SiON nVidia will be getting some can of whoop ass


You know absolutely NOTHING about Larrabee's performance; you probably only know the high-level architectural details that have been on Tom's, Anand, etc... and you imply that Nvidia's chips done on SiON will be worse because they are not on HK/MG?

Larrabee may be great, it may suck, it may be mediocre. No one knows at this point, and to imply that HighK/MG will make it great is no better than the fanboy comments at Scientia's site.

If you have some actual detail or analysis, bring it and have an honest discussion. The tick-tock crap just makes it clear to folks who and what you are. Just like AMD fanboys get grilled here, folks should do the same to the blatant crap coming from the other side as well.

Oh, and the Pentium 4 did not 'do better' as you state. Intel did better, primarily because they know how to market and sell a product in addition to manufacturing one... but don't imply the P4 PRODUCT was better.

Node scaling does not give you double density (not to mention you are comparing 45nm vs 65nm, as opposed to 45nm vs 55nm when Larrabee is released). It does on paper, but show me a single product that achieves this. Since you appear to work in the real world, one would think you would understand this distinction (or perhaps you are intentionally trying to mislead?)

Care to spout any more mis-information?

They say even a broken clock is right twice a day.... it's just not much more than that though!

Anonymous said...

" Sure transistors are important, but Intel had one of the lowest leakage 90nm processes and how did that work out on Pentium4? And yet that same process (which the AMD fanboys claim was terrible) worked pretty OK for the original Core (mobile) product. "

This is what I have been able to gather from the engineering literature, as well as from looking empirically at the Pentium-M on desktop data which sporadically (and anemically) appeared on the web. From what data I have seen, there was nothing wrong with the 90nm process; the transistor parametrics fell right in line with the trends shown for most of the historical scaling from a performance perspective.

Couple that with:
http://www.matbe.com/articles/lire/298/pc-desktop-le-core-duo-face-a-la-concurrence/page21.php

The 90nm Dothan has better power characteristics at 2.8 GHz than any other CPU at this clock....

It is unfortunate that Prescott gave the process tech such a bum rap, but it goes to show...

- Put a crappy architecture with a good process and get a crappy product.

- Put a great architecture with a crappy process and get a crappy product.

- Put a great architecture with a great process and get Core 2 Duo.

Jack

SPARKS said...

Whoa! WTF just happened? And, GURU, yikes! I’d never want to meet you down a dark tech alley! Whew!

Well, it seems we all have been giving this Larrabee thing quite a bit of thought. I feel like we just opened one of those little novelty cans and twenty 2-foot-long snakes flew out.

Ah, I’m going to try to get this out without getting my head bashed in.

We all know INTC has been working on its Teraflops project, 80 cores; demo’d Feb 11, 2007.

http://download.intel.com/pressroom/kits/Teraflops/Teraflops-Chip.jpg

AMD and NVDA are using many multiple shader processors.

http://www.amd.com/us-en/assets/content_type/Additional/45843A_RV770_single_Chip_Frt.jpg


Therefore, given these two points and the highly parallel functions of GPUs, it would seem this is the way INTC is going with Larrabee. I think you guys addressed the process and design fundamentals quite well, whew! We all know INTC is way ahead on those fronts. The devil’s in the details, that would be the software development, no?

However, from what I remember of the past (and would like to forget), INTC’s graphics performance, support, and finally, commitment were, shall we say, less than exemplary?

http://en.wikipedia.org/wiki/Intel740


They say once bitten, twice shy. Frankly, INTC MUST pull its ass out of the “i740 stigma”; obviously, most folks here feel the same way. That was ten years ago.

I personally think they’ve got something big going on. NVDA’s bitching and pissing, both AMD and NVDA looking into GPGPU future prospects, InTheKnow’s analysis of the big guys getting serious with INTC: it can mean only one thing. They are very committed, and may, once again, pull a very big rabbit out of the hat.

After all, they do have the bodies (design), the process, the FABS, and the money to do it.

Hey, AMD has the ATI design group. Why can’t INTC who is 10 times larger and quite profitable do the same?

Jack- I am going to amend your, “Put a great architecture with a great process and get Core 2 Duo” with ‘Put a great architecture with a great process and get a revolutionary graphics solution”, I hope you don’t mind.

Fellas, “Keep the faith, baby”, this will not be Real3D.

SPARKS

pointer said...

ITK said ...
Gelsinger claims that big software development houses are excited about what they've seen so far from the project, but he did not offer any specific examples.

Note that is says "what they have seen", not "what they have heard". In my mind this has two implications. First, is that there is at least working prototype Si out there. Second, it implies that the "big software development houses" have been pulled into the design process early and are getting the features that they want into the product before it is even released as a sample.


Well, Larrabee aims at 2 usage models, graphics and HPC, so we don't know what the BIG sw dev houses did with it, right? I am inclined towards the HPC side on that news, as I do not think the graphics driver was in any shape by that time. Or maybe they showed off the rasterization capability, which won't help in legacy games, or even in games still 1-2 years out.


SPARKS said...


Jack- I am going to amend your, “Put a great architecture with a great process and get Core 2 Duo” with ‘Put a great architecture with a great process and get a revolutionary graphics solution”, I hope you don’t mind.


too bad, you still missed out one part, the driver :)

Anonymous said...

Did Fudzilla have an epiphany (or maybe he started reading this site)?

It would be very tough to sell dual core Phenom and not to destroy the current channel but more important, AMD couldn’t clock them that high to make them worthwhile. Athlon dual core CPUs are holding this position until middle next year when you can expect to see 45nm dual core based Phenoms from AMD.

A smart business move... but I wonder how much effort and money was wasted trying to get the 65nm dual core K10 to market?

It looks that dual core Phenom at 65nm cannot actually clock that high and at the same time, it is very hard to sell its dual core Phenom marchitecture as the cheapest of Triple core sells for €93 in German etail.

Hmmm... so there is a downside to triple (or as Sparks likes to say, cripple) core? I thought this was such a shrewd thing?!? Not throwing away less-than-functional quads is sound business policy, no? And in so doing they threw away any chance to sell higher-priced 65nm dual cores! Could it be that triple core is costing more money than it is saving... I know, it's crazy, and it's not like I've been saying it for 6 months.

It [45nm K10 dual core] looks awfully far off but AMD has slipped and it pays the debts of its CPU arrogance.

They also are paying for a complete lack of business sense... triple core completely crashed their pricing potential for dual cores, and this of course was brought about by launching really low clocked quads that they had to price accordingly.

I know it sounds crazy, but on desktop AMD should have tabled the launch if they couldn't get the clocks up. Had they waited until at least 2.4GHz, they could have kept the triple core a bit higher in price (and even launched these prior to the quad) and have some space for a dual core.

While AMD saved a nickel with the triple cores and energy (cough) efficient (cough) quads on desktop, it will end up costing them a dime.

On a side note... middle of next year for K10 dualies? Keep in mind the ORIGINAL AMD roadmaps had K10 dual cores in Q4'07... this would put them 1.5 years behind plan. (I'm sure when they do get released it will be considered on schedule, as people will only look at the re-re-re-re-revised plan.)

SPARKS said...

“too bad, you still missed out one part, the driver :)”

I did not! I resemble that remark. :)

Seriously, I did say, “The devil’s in the details, that would be the software development, no?”

Driver/software, it’s the same thing, right? I guess not. It just goes to show you how much I know, whoops!

Pointer, I have absolutely no clue how you machine language geniuses talk to the hardware. In any case, kudos! I stand corrected.

SPARKS

SPARKS said...

Ah, that’s ‘triple cripple’, but close enough. Besides, anything they excrete, my QX9770 will trash with two cores.

HOO YA!

SPARKS

Anonymous said...

Wow... Nvidia being taken out to the woodshed after hours (stock down 22%) after an earnings warning.

http://www.reuters.com/article/marketsNews/idINN0242556420080702?rpc=44

http://www.thestreet.com/story/10424473/2/nvidia-slashes-forecast-update.html

"Nvidia also disclosed Wednesday that certain of its graphics processors designed for notebook PCs were failing at higher-than-normal rates due to what it described as "weak die/packaging materials."

Might be kind of nice to have control of these things in house? Just a thought.

No word yet on whether Jen-Hsun is blaming Intel on the earnings :)

Anonymous said...

Ask any designer: would they want their chip manufactured on TSMC 55nm, INTEL 45nm, or IBM 65nm?

Of course a person can use bricks and build a better building than a moron with steel and concrete. Just look at Itanium and Prescott for evidence of that.

But if you start with the better foundation and have the basic engineering talent, lots of money, time and management support, I think you have a fundamental lead that equally competent and funded teams without the same technology could never match.

Does that mean Larrabee will be competitive? NO. But if INTEL continues to plug away, they will WIN this market. No different than how they beat back all the RISC architectures a decade ago. Nothing beats having faster transistors as your building block and huge volume running on the technology to amortize R&D and manufacturing costs. Yeah, I think I saw that in a powerpoint from some stupid manufacturing presentation. Would you like me to post it?

No question nVidia long term won’t be able to compete. It is symbolized by the fact they don’t have HighK/Metal-gate. The fact that INTEL does and no one else does demonstrates how far ahead they are. Be sure AMD/IBM will make lots of noise about their 45nm, but it won’t be nearly as competitive as INTEL’s. I saw the TSMC presentation for their 32nm SOC. I’m sure they will make lots of noise about it. But ask nVidia privately and I’m sure they wish they had INTEL’s 32nm. Damn, was that another stupid manufacturing powerpoint I saw it on?

Ask nVidia today if they wish they had better silicon and better assembly. Ummm, Houston, we have a problem. The wheels are starting to fall off.

WTF do I know, it’s all from some powerpoint. There seems to be a trend since the 130nm node: producing silicon, which was easy for everyone from AMD to TSMC back then, hasn’t been easy at 90nm, 65nm or 45nm. It’s getting harder and harder and the little guys don’t have what it takes. Oh, I got that from a foil too.

Tell me, do you have some detailed analysis you want to share beside the fact that design matters? Everyone here knows that, you brilliant man.

By the way, the make or break for Larrabee will be in the software drivers and in getting people to move to x86-based graphics. That, like the MIDI, will be decided by the market. Personally I think INTEL should have also invested in a parallel graphics approach. Perhaps, just like their skunk-works 64-bit project, they’ve got one going and aren’t talking.

Now why don’t things really scale by 2x every generation? That is very simple: in principle, the minimum patterning and design rules enable it. Again, it’s the designers and time to market that often prevent the ideal scaling. Many times the libraries can’t take advantage of the scaling to its full extent. If the designers would bother, I’m sure they could get double the density, but they like to recycle their cells with minimum work. Another often-seen reason is that designers’ eyes are bigger than their stomach. Have you ever known a design that comes in on schedule and at target? Designers always want more features, and that leads to more transistors, bigger die and more power.

Regardless, it changes nothing; tick tock, tick tock. Time has run out on AMD and nVidia.

Anonymous said...

Tick/Tock... please stop the trolling. (And I hope people are not putting much stock in what he says.)

If you care to have an honest conversation, let us know... if you just want to troll, it is easy enough to just skip over your comments.

Your last post was funny, as the argument apparently is now changing (as your original comments have been shown to be simple fanboyism).

So thanks for letting us all know that designers will want to start on the best process... apparently you thought some of us were arguing the opposite? People have been saying that process alone will not make or break Larrabee (and in my view it is not even a significant factor in the first generation!), and it is good to see that you finally acknowledge this in your last comment, as your original comment went:

Is that Larabee coming on 45nm HighK/MetalGate.

Compared to TSMC's 55, 45, or 32nm SiON nVidia will be getting some can of whoop ass


My apologies, I must have misread this and not understood it to mean "designers will want to start on the best process". How stupid of me.

And quite frankly your scaling comments are rather humorous... and don't explain why you do not get 2X scaling in a simple dumb shrink of a given architecture! You try to obfuscate this by saying (in part) that designers want to put in new features or are simply limited by time to market (which is a limitation, but not the main one).

You also imply that if designers had the desire (and the time/money), they could achieve 2X scaling... this of course is not true, for reasons which are probably way above your head (and are not simply design-rule related). Some things to think about: what percentage of transistors do you think actually use the minimum feature size? Have you considered the length of the transistor and how that scales? Is the transistor width the limiting scaling feature for die size? Hint: the smallest feature may not be the limiter, on both the process and design sides, in terms of die-size scaling.
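To put rough numbers on why a dumb shrink falls short of the paper 2x, here is a back-of-envelope sketch (the fractions and factors are illustrative assumptions, not measured data): a full node scales linear dimensions by ~0.7, so area scales by ~0.5, but only the well-scaling portion of the die gets that benefit.

```python
# Back-of-envelope sketch with illustrative numbers only. A full node
# scales linear dimensions by ~0.7, hence area by 0.7^2 ~= 0.49 -- the
# "2x density" on paper. If only part of the die (logic) shrinks fully
# while the rest (analog, I/O) barely shrinks, the saving is smaller.

linear_shrink = 0.7
full_scale = linear_shrink ** 2  # ~0.49

def die_area_ratio(logic_fraction: float, nonscaling_factor: float = 0.9) -> float:
    """New/old die area when only `logic_fraction` of the die gets the
    full shrink and the remainder scales by `nonscaling_factor`."""
    return logic_fraction * full_scale + (1 - logic_fraction) * nonscaling_factor

# If 70% of the die is well-scaling logic:
print(f"area ratio: {die_area_ratio(0.7):.2f}")  # ~0.61 -> ~1.6x density, not 2x
```

The point of the sketch is only that the gap between paper scaling and product scaling follows directly from the mix of scaling and non-scaling structures on the die.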

SPARKS said...

“Wow... Nvidia being taken out to the woodshed after hours (stock down 22%) after an earnings warning.”

AND HOW! 30% loss @ $5.35 a share! Woodshed, you say? It’s more like down a back alley! My sincerest sympathy to NVDA shareholders for another fine example of corporate arrogance run amok. Here is a leading company with a fine and outstanding product lineup that should be doing much better.

Hey, I’m a nothing from Podunk. But if Jen-Hsun Huang were in my field, I wouldn’t work in a live service room with him. An arrogant son of a bitch like that would get you killed in a New York heartbeat.

“Can of whoop ass”, what a dick.

SPARKS

SPARKS said...

“This has been a challenging experience for us. However, the lessons we've learned will help us build far more robust products in the future, and become a more valuable system design partner to our customers. As for the present, we have switched production to a more robust die/package material set and are working proactively with our OEM partners to develop system management software that will provide better thermal management to the GPU,” said Jen-Hsun Huang, chief executive officer of Nvidia."

Eat crow, you bastard. The GPU more important than my QX9770? HORSE SHIT!

No SLI for my 975X and X48!?! GO F**K YOURSELF!

YEARS, YEARS I've been waiting for this day!


http://www.xbitlabs.com/news/video/display/20080703160712_Nvidia_Lowers_Sales_Guidance_amid_Chipset_Flaws_and_Decreasing_Prices_of_Graphics_Chips.html



SPARKS

SPARKS said...

GIANT! Where are you, bro?

"Nehalem's X58 supports Quad Crossfire"

http://www.fudzilla.com/index.php?option=com_content&task=view&id=8288&Itemid=1


PLUS NO QPI FOR NVIDIA

HA HA HA HA HA!!!

SPARKS

SPARKS said...

"When asked about GPGPU languages such as NVIDIA's CUDA, Gelsinger described it as "a cool new idea that promises 10-20x performance but you have to go through this little orifice called a new programming model".

That, says Gelsinger, will see various GPGPU solutions become footnotes in the history of computing annals, whilst Intel's x86-based Larrabee will prevail."

HA HA HA HA HA!!!!

GURU - Check out the link. Pat G. is looking at 10nm!!! He's got some comments on 450mm, too. As usual, your "speculations" have come to fruition.

http://www.hexus.net/content/item.php?item=14148


SPARKS

Tonus said...

NVIDIA seems to have fallen prey to the same problem that finally killed 3Dfx: focusing too much on performance at any cost. The cost wound up being their own existence.

Anonymous said...

Duh, why don't we get 2x in a shrink?

Ask the troll.

Could it be the analog I/O?

Did I get it right?

Any other questions?

You don't like my attitude, but do you disagree with my claim?

IBM and TSMC don't have the money to support the R&D spending required to do the every-two-years cycle with all the stops pulled out.

As such, those that use the consortium or TSMC will have inferior silicon to INTEL's and less-than-optimised design rules, as both of those companies design for the larger base and the silicon must support the weakest design.

As such, if you compete directly with INTEL you will be handicapped both because of fundamental performance as well as never having the same scales of volume. Nothing beats the efficiency and learning of having huge factories, and multiple ones at that.

Any more questions for the troll?

Tick Tock, Tick Tock, the clock is ticking.

Anonymous said...

nVidia stock down, way down.
AMD stock down too, even with the positive news about MS stabilization.

Perhaps the owners of those shares have figured it out:

They can't compete with Intel.

The clock has run out.

Did I hear that Atom is a huge success and INTEL can't make enough? AMD says it'll have something in a few years?

Did AMD miss the laptop market? Did it miss the ultra-low-cost segment?

Did it lay a turd on its high-margin, high-end server business?

The wheels have fallen off at both nVidia and AMD.

Anonymous said...

"GURU-Check out the link. Pat G. is looking at 10nM!!! He's got some comments on 450mm, too."

Most of this is pretty much a retread - but it is wrapped up well. Though I'm not sure if Pat misspoke or Hexus got the nodes wrong: 32nm... 22nm (not 24nm)... 15nm... 10nm.

Back in '05 Intel had shown a cross-section of a transistor with a 10nm width (this would be roughly the 15nm node, as the transistor width is smaller than the name of the node) - so Intel having visibility down to the 10nm node is not that surprising. The question, as always, will be cost and manufacturability.

However, while people have been wrongly predicting an end to Moore's law since the 1um days... it is coming to an end (in the traditional sense). At the 10nm node, which would put the transistor gate length in the 6-7nm range (or 60-70A), you are talking somewhere around 15-20 atoms wide. At some point the channel between the source and drain becomes so small that the transistor is always on, and you can't distinguish on from off. I don't know where this point is, but as we are now talking about the # of atoms, it is not very far away. (Though I'm sure there are some very smart people at IBM and Intel who have a reasonable guess at where this point is.)
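As a sanity check on that 15-20 atom figure, here is the rough count, assuming an effective interatomic spacing of ~0.35-0.40 nm (a round number of my own; silicon's lattice constant is ~0.543 nm):

```python
# Rough atom count across a 6-7nm gate. The spacing values are round
# numbers assumed for illustration, not precise crystallographic data.
def atoms_across(length_nm: float, spacing_nm: float) -> int:
    return round(length_nm / spacing_nm)

print(atoms_across(6.0, 0.40), "to", atoms_across(7.0, 0.35))  # 15 to 20
```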

I'd guess there is a chance Moore's law does continue, though... the law refers to transistor density, and there is always another way to improve transistor density... build up! I have no idea how or if this could work, but if folks can stack transistors, Moore's law would have a chance of marching on after the traditional X-Y scaling limits have been reached. Just a guess, and as this is more than 10 years out, it will be a while before this prediction could become true (I'd put it at around 10%).

The other thing being kicked around is wafer stacking or bonding... this is another way of getting more transistors in a given area. Though as you are manufacturing 2 (or more) wafers, this only provides marginal cost benefits.

Anonymous said...

"Any more questions for the troll"

Yes - what is TSMC's manufacturing capacity compared to Intel's, and who has a higher CapEx? I think in '07 (or maybe '06?) TSMC bought more capital than Intel, and the manufacturing capacity (and thus economies of scale) is also closer than you think. The difference here is the size of fabs - while Intel has more numerous fabs, TSMC's are generally far larger from a capacity perspective (and fewer in number).

As such, if you compete directly with INTEL you will be handicapped both because of fundamental performance as well as never having the same scales of volume.

Again you miss the larger picture... the question is not who has the better performance, but how significant the gap is. At 45nm, with an inflection point in gate oxide technology, Intel took a large jump while their competitors did not. However, Intel will not see as large a jump on future processes - you will go back to evolutionary scaling (until the next big breakthrough occurs).

Now what happens when the rest of the world finally does implement highK/MG? (Presumably at 32nm.) They will have a large jump, and while Intel will be on its 2nd gen and there will still likely be a gap in performance, the gap will be far less pronounced.

You just don't seem to get it - raw process performance is not as important as you make it out to be... and the reason is that the rest of the world 'catches up'. The question is the gap - in both time and performance - and while it is big right now at the introduction of a revolutionary process technology, it is naive to think that gap is sustainable. Even if we assume Intel always maintains some gap, what if that gap were 1%? 5%? 50%? How much do you think that gap is (quantitatively) right now?

If you said one product was 5% better than another would that be a death blow or opening up a can of whoop ass? How about 10% better? 25%?

Look at Intel's 45nm vs 65nm products - there was a very nice jump (for similar architectures) in power, but how much speed have we seen? Now translate this to graphics, where speed is king and power be damned. While power will eventually hit a wall and become a concern (as with CPUs), right now it's about performance in the discrete world.

Sure, Intel can trade some of the 45nm power gains for performance in this space (and note this is not a '1 for 1' trade - you can't trade 1% power for 1% speed; the relationship is much steeper), but it is still primarily going to come down to architecture and support. Once (and after) Intel gets that nailed down, then process and manufacturing might will come into play.
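A quick sketch of why that trade is so lopsided: dynamic CMOS power goes roughly as C·V²·f, and attainable frequency rises roughly linearly with voltage (a textbook simplification assumed here), so speed bought through voltage costs roughly the cube of the gain in power.

```python
# Illustrative only: dynamic power P ~ C * V^2 * f, and f ~ V
# (simplifying assumption), so raising V for a k-times frequency
# gain costs roughly k^3 in dynamic power.
def relative_power(speed_gain: float) -> float:
    return speed_gain ** 3

for gain in (1.05, 1.10, 1.25):
    print(f"{gain:.2f}x speed -> {relative_power(gain):.3f}x power")
```

Under this simplification a 25% speed gain costs nearly double the power, which is why a power-efficient process buys more speed headroom than a 1-for-1 trade would suggest.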

So please stop making obvious comments (designers want to start with the best process, you need a strong foundation, etc...), put away the fanboy/trolling and try to objectively analyze the situation. If you still think Intel's 45nm process will be the key difference and success maker for Larrabee, try to articulate your arguments instead of just trolling.

InTheKnow said...

If you still think Intel's 45nm process will be the key difference and success maker for Larrabee, try to articulate your arguments instead of just trolling.

Sorry, I'm not anonymous, but I do think Intel's 45nm is a key differentiator and here is why.

1) Die size - To the best of my knowledge, the graphics folks are on the 55nm half node right now. That gives Larrabee the edge in terms of either die size (cost) or transistor count (performance).

2)Power - you are right that it is performance at all cost right now. And you clearly see the power wall coming. But I think that while it hasn't been hit from a technical viewpoint, it is very close from an economic viewpoint.

I think Intel will need to push this technology (though not the Larrabee product) into the smaller form factors in the same way it is doing with Atom. To do that, they will need to become power conscious very quickly.

3)Design - I might be all wrong here, but as I understand it, Larrabee is ultimately intended to contain both CPU and GPU cores. 45nm is a big deal on CPU cores right now as it helps deal with thermal issues from a packaging design perspective.

Note that other designs just moving to 45nm design without incorporating Hi-K/MG will only address the first issue.
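Point 1's half-node edge can be sanity-checked with ideal-shrink geometry. Treat this as an upper bound; real layouts never shrink perfectly:

```python
# Ideal-shrink comparison of a 55nm half node vs a 45nm node.
# Purely geometric; real designs don't scale perfectly.

def area_ratio(new_nm, old_nm):
    """Relative die area for the same design after an ideal linear shrink."""
    return (new_nm / old_nm) ** 2

ratio = area_ratio(45, 55)
print(f"Same design on 45nm: ~{ratio:.0%} of the 55nm die area")
print(f"Or ~{1 / ratio:.2f}x the transistors in the same area")
```

So in the best case that's roughly a one-third smaller die, or about 1.5x the transistor budget, before any yield or cost differences.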

Feel free to let me know what I missed in an admittedly superficial analysis.

Anonymous said...

ITK - good comments...

"I think Intel will need to push this technology (though not the Larrabee product) into the smaller form factors in the same way it is doing with Atom. To do that, they will need to become power conscious very quickly."

I think your trend is dead on, but I don't see this as Larrabee. In graphics you will have the good-enough crowd, which will be IGP (and eventually integrated GPU-CPU Nehalem/Fusion/etc.), and the discrete, 'let me see how many PCIe power connectors I can attach to the card' crowd. At the ultra low end, Atom will likely end up as an SOC (system on a chip) design, similar to the ill-fated Timna design (which bet on the wrong memory standard, among other things), though hopefully more successful for Intel. The Atom market is getting nowhere near discrete cards/high performance graphics.

Again, I'm not saying the process technology is not an advantage for Intel; it clearly is one. But for the initial generation of Larrabee, it will not be a significant factor in the product's success. When you get to the next iteration (mid-late 2010?), where things like power and speed can be optimized and Intel can attempt to leverage pricing and cost, then process/design integration will be key. But this first gen product is going to be all about the architecture and the driver support.

A question out there for the masses... how much better (in terms of % performance) is high-K/MG than a standard SiON implementation? It is a significant breakthrough, but it's not like it's a doubling of performance, and I don't know if people have a real feel for the benefits. Even if you could translate the transistor-level benefit perfectly into chip-level performance (which you can't), it's not going to matter as much as architectural differences.

55nm vs 45nm is important for cost, but bear in mind a 45nm process will cost more per wafer to produce, which will offset some of the die size gains, and additionally high-K/MG is a cost adder. Ultimately yield and bin splits will be the $$ driver if you are looking at a half-node difference, and this is where Intel's advantage probably lies (though there is no public data).
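To illustrate how a pricier wafer offsets (but may not erase) the die-size gain, here is a toy gross-die calculation. Every number in it - wafer costs, die size - is invented for illustration, and edge loss, scribe lines and yield are all ignored:

```python
import math

# Toy cost-per-die estimate. All dollar figures and die sizes are made up;
# edge loss and yield differences are deliberately ignored.

def cost_per_die(wafer_cost, die_area_mm2, wafer_diameter_mm=300):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    gross_dies = wafer_area / die_area_mm2   # crude: ignores edge loss
    return wafer_cost / gross_dies

c55 = cost_per_die(wafer_cost=3000, die_area_mm2=200)                    # assumed
c45 = cost_per_die(wafer_cost=4000, die_area_mm2=200 * (45 / 55) ** 2)   # +33% wafer cost
print(f"55nm: ${c55:.2f}/die vs 45nm: ${c45:.2f}/die")
```

With these made-up numbers the 45nm die still comes out cheaper despite a 33% higher wafer cost - but as noted above, yield and bin splits would dominate the real comparison.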

Feel free to let me know what I missed in an admittedly superficial analysis.

I don't think it is superficial at all. Though transistor count might not be the right metric - as far as I know the 4870/4850 has far fewer transistors than the Nvidia solutions (yet they are pretty close performance-wise).

4850/70 - 996Mil (source: Anandtech)
GT200 - 1.4Bil (source: Anandtech)

I look at Larrabee like the P4 and Dothan on 90nm. The architecture will be the big swinger, and a good design like Dothan would work on most processes (though obviously the better the process, the better the chip).

Anonymous said...

"DId I hear that atom is a huge success and INTEL can't make enough?"

I think you heard there are shortages of Atom... you are ASSUMING this means it is successful and not a supply issue. Keep in mind some of Intel's 45nm capacity is coming online a bit slower than planned, so it may also be a matter of Intel focusing a greater share of 45nm on CPUs (I don't know, just offering another plausible reason).

Do you have specific information or a link pointing to atom sales/success or are you just speculating on this based on the shortage rumors floating around the web?

Before you misinterpret my comment (again), I'm not saying Atom is not successful (nice double negative, eh?); I'm saying the shortage rumors don't prove it is successful.

AMD says it'll have something in a few years?

Again, thanks for the superficial analysis... the competition for Atom is ARM-based technologies. I suppose in your myopic view of the world it all comes down to Intel vs AMD vs Nvidia... but Atom is an attempt to grow the x86 market beyond computers. So while low cost notebooks and set-tops are important and a nice market for Atom, that ultimately just eats into the traditional low end notebook/desktop markets. The real competition for Atom is in the MID-type market (like smartphones), and ARM is the competitor there. While Nvidia may be some competition here as well (depending on how Tegra performs), you have completely mis-analyzed the Atom market. (Of course, the analysis having come from you, should folks really be surprised?)

Anonymous said...

Yawn...

How different are process and how important?

If you ask the process guys it can be confusing. By the clearest yardstick - saturation drive current at a given leakage current (assuming they all meet the same gate oxide reliability) - there is one company that has had a huge lead for a few generations now. Actually, this same company's transistors on a larger node often perform better than any other competitor's at the smaller, next node. And what they have at 45nm is 10x lower leakage too.

Another metric is SRAM cell size. Many cells are published, but it is funny that you often don't find them in the actual product. Those that have reverse engineering teams know what each company ends up fielding. It has a lot to do with low voltage capability. But trust me, it's clear that the same team that fields the best drive current fields the best SRAM too.

All things equal you'd want to be on the more advanced process; anyone who says they don't is saying it because they can't. Anybody who says it doesn't matter doesn't compete directly. What the next node gives you is double the density and 20-30% performance at the same leakage, or the same performance with a 20-50% power improvement. I don't know of one designer who wouldn't lust for that competitive starting building block. Now it's a separate matter if they waste those transistors on VLIW, deep pipelines or other inefficient uses, but no question every architect's and marketing team's dream would be to have that one-year cost/density/performance advantage.
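Compounding that rule of thumb over a couple of nodes shows why the head start matters. This is just the quoted 2x/20-30% figures applied naively; real node-to-node gains vary:

```python
# Compound the quoted per-node scaling rule of thumb over multiple nodes.
# Naive compounding of the figures above; illustration only.

def compound(per_node_factor, nodes):
    return per_node_factor ** nodes

nodes = 2  # e.g. 65nm -> 45nm -> 32nm
print(f"{compound(2.0, nodes):.0f}x density over {nodes} nodes")
print(f"{compound(1.20, nodes):.2f}x-{compound(1.30, nodes):.2f}x speed "
      f"at the same leakage")
```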

Where Atom will make a new market is in nettops and netbooks. These $299-$499 segment machines will be a huge new low-cost market that will really push computing to the masses in the third world. Atom and the widespread manufacturing of these machines will make that silly MIT guy, Negroponte, history.

As to whether it will make inroads into ARM: no, not this generation. Like the Pentium Pro, the original Pentium 4, or Banias, all this does is get the guns aimed. The second generation will be 10x lower power, and on 32nm it will crush ARM. Atom in its current incarnation is too power hungry, but even the Apple iPhone will go to Atom II. The compelling advantages of an advanced process, ever lower power, and x86 are just too good to beat.

Think back to the days of RISC. The same arguments were made then that x86 couldn't make a run at the business. In the end the volumes behind x86 pushed superior technology, and with superior technology it slowly but surely won. History already tells us what will happen. INTEL only needs to execute. Itanium is the counter-proof of how powerful x86 is.

Remember there once were DEC, IBM, HP and SUN competing. Now they are relegated to the niche high end. ARM is more widespread and the competitors are many, but they too will end up the same story. Wait for 32nm for equalization and 22nm for the winner to be clear.

As to TSMC versus INTEL: remember INTEL manufactures only a handful of products, while TSMC services a huge number of customers. That was once the strength that made them big at 0.25um, 180nm and 130nm. But the compromises of supporting so many different customers, and the complexity of process and design interaction at 90nm and smaller, are forcing them to more conservative design rules. The complexity of the process is also pushing longer lead times and longer learning to get to the same yields. INTEL has none of these issues, with the ability to focus on optimizing process and yield for only a couple of products each generation.

The competitive barriers become bigger and bigger, and they become not only barriers but huge weapons to bring to bear on new markets. INTEL's old strategy of trying to buy into communications and other markets was misguided and naive. Their new strategy of x86 everywhere builds from their core strength and will drive them to absorb all.

With ONE caveat: management must be willing to support the lower margins in many of these markets. If they do, they will grow and win.

Call me a fanboy, call me a troll. But tell me why ARM has a chance?

Tell me how AMD could have a chance.

Tell me how IBM can come back?

Tell me how Samsung or TSMC could compete?

In the meantime, I'm going to enjoy some hotdogs and shoot some fireworks.

Tick Tock Tick Tock, it's countdown time for AMD, nVidia and ARM

InTheKnow said...

INTEL has none of these issues with the ability to focus on optimizing process and yield for only a couple products on each generation.

Intel is giving up this advantage as they begin to diversify. Traditionally, they have had to support only the x86 CPU line, Itanium and chipsets. Now they are adding Atom and Larrabee to that mix, plus the new chipsets being developed for the Atom line of products.

So as Intel expands their markets, they will have to give up their laser-like focus on just a few products and develop a more general purpose process.

I believe this is part of the real story behind Nehalem. It seems like nobody pays any attention to Nehalem's modularity. I think modularity is a key component in Intel's strategy.

I believe Intel intends to provide more customized solutions by linking modules together within the die. The challenge for them will be balancing the product mix in the factory network and deciding what products do and don't deserve customization.

But tell me why ARM has a chance?

Remember, I'm a huge advocate of Atom and believe Intel will do well in this space. But I don't see it as a clear cut victory for Intel anytime soon. I also don't agree with several analysts out there that think Intel has no chance.

ARM has a head start in the power game. To say that they are going to stand still and let Intel catch them is naive at best and disingenuous at worst. They will continue to improve. I'll be pleasantly surprised if Intel gets system level parity with them by 22nm. My bet is on 15nm.

In the meantime, it will be a competition between performance and software availability versus somewhat longer battery life. Once Intel can run all day on a single battery charge, I think ARM's advantage begins to fade here. Who really cares if their device charges while they sleep?

The other advantage ARM brings is end user customization. Each customer is free to combine ARM's modular blocks to fit their specific needs, and can then have a foundry build "their" specific product made to meet that customer's unique specifications.

This goes back to the Nehalem modularity thing above. Intel can probably match this. But unless they intend to change their business model to include foundry work, there will be a lot of customers that stick with the flexibility of ARM.

Incidentally, 450mm wafers work against the trend to diversify the product mix. You get fewer wafers between mask changes if you run multiple products. The minimum job size needed to justify the mask changes will go up at the next wafer size. Intel will either have to limit the product mix or give up some of the gains from going to 450mm here.

Anonymous said...

"Then what they have at 45nm is 10x lower leakage too."

Yawn... good to see you can regurgitate stuff seen on the web...

Actually, if you talk to a process guy... they will tell you it is not merely Idsat that is important anymore, they will tell you it is both the Ion/Ioff plot (which you refer to, kind of) and Idlin.

Also when you say 10X lower leakage, you realize it is 10X lower to SPECIFIC leakage modes, not Ioff in general (but obviously you knew this and were not merely reciting the press on Intel's 45nm process)

Oh, and I hear SRAM cell size benchmarks are an important process metric for GPUs! (I understand, though, as it is easy to confuse these things when you are just copying and pasting and trying to sound like you have expertise, without actually thinking about what will pertain to Larrabee vs Nvidia vs ATI.)

All things equal you'd want to be on that more advance process, anyone who says they don't is because they can't

I'm going to start calling you Captain Obvious... really, you're telling us that all things being equal you'd want to be on the more advanced process?!?! Surely you jest? Is there anyone who didn't say this? No one is questioning this - the question is how much of a difference it will make on a first generation graphics part. If you actually took the time to read and listen, you wouldn't need to state obvious truths that have nothing to do with the conversation.


Call me a fanboy, call me a troll. But tell me why ARM has a chance?

Please tell me you're kidding, right? OK, I'll pretend this was a serious question as a wise teacher once told me there was no such things as stupid questions, just stupid people who ask those questions!

I'm just spitballing here... but maybe because ARM uses about 1/10th the power even though it is produced on an OLDER technology node! Or that it has a fair amount of existing infrastructure in place already?

Sure, ARM is not ideal and not as simple as something that would be X86 compatible... that would open up a lot of easier SW development options. But to say ARM doesn't have a chance just shows you are completely ignorant of the market, the technologies and just want to troll...

Also take a look at ITK's response (well thought out and far more patient than mine, I might add); clearly you are in the wrong on this one. Again, x86 may overtake ARM in the LONG RUN (I think the 15nm node is probably not a bad guesstimate), but for you to say ARM doesn't have a chance is absolute lunacy and a lack of knowledge. I'm not sure Atom has even made it into any ARM-dominated product category yet in any significant volume.

Tell me how Samsung or TSMC could compete

Are you serious? Ask anyone reasonably high up within Intel (this would likely be several pay grades above you, as it is clear that you work there) and they will tell you that Samsung is the largest competitive threat. Yes, I know they don't make CPUs, but trust me, their manufacturing capability is a definite concern of Intel's.

The competitive barriers become bigger and bigger,

Thanks for pointing this out Captain Obvious! It's getting harder and harder to scale too? Come on, now you're just toying with us! And tell us, are the fabs getting more expensive too?

Where ATOM will make a new market is in nettops or netbooks.

Again, while this is a market, it is not Intel's main target - again you have a very Intel, CPU-centric, myopic view of the world. There will soon be BILLIONS of mobile devices sold each year - Intel wants atom to take a slice out of that pie (in addition to the low end notebooks/settop devices).

I'm starting to wonder if Mr. Tick Tock is not just a bored Sharikou who is interested in stirring things up now that his blog has died. Remember, it is better to keep your mouth shut and have people just think you are an idiot than to open it and prove it!

And there you go with the double density thing again... wasn't that scaling limited by analog I/Os? (tries to stifle laughter) If you must know, it is limited by the ability to scale the flux capacitor :)

Anonymous said...

Yawn,

So obvious yet so stupid to even argue..

You state the obvious but don't tell me the reasoning behind anything

You are doing no analysis yourself, just spitting things out of your backside without any digestion.

Tell me something more intelligent than stupid quotes from other experts.


You are either a silly arrogant INTEL TD guy who doesn't toe the party line or a jealous INTEL wannabe.

InTheKnow said...

Tell me something more intelligent than stupid quotes from other experts.

I know the comment above wasn't directed at me, but I believe I did provide analysis. I told you I don't think ARM is dead and I told you why. Further, I told you that I think it will take longer for Intel to catch up to ARM in the power area than you expect.

But you chose not to comment on my analysis. So I'm left to conclude that:

a) you found my analysis so lacking in insight that it was beneath your notice

b) you are unable to refute the points and have chosen to ignore them.

I'm curious, which is it?

Anonymous said...

ARM is compelling right now with the right power/performance tradeoff.

Today it is made on 90nm and in some cases 130nm node.

It has just enough performance for very little power.

x86 on a leading edge node still draws about 10-50x too much power. That was a choice, correct or not, that INTEL made. My suspicion is that at 45nm, the in-order x86, if it were tuned down to 10x lower power, wouldn't offer a compelling enough experience on native x86 code. Come 32nm and the 2nd revision of this design, you will get another doubling of transistor density, more architecture hooks for further power reduction, plus the natural advantage of 32nm tuned for lower power. INTEL has historically focused on performance/power; if I read their rhetoric correctly they are now sensitive to power/performance, and as such 32nm will get even more optimized transistors from their low power team.

Combine that with the economies of scale at 32nm: piggybacking on the R&D and the ~400 million unit volume of the larger x86 business will give INTEL a huge 2 year head start on any ARM design. When INTEL gets to the 32nm Atom II, ARM will at best be on 45nm, and likely still on SiON with its commensurately higher leakage.

Like RISC: when the first x86 competitors to it came out, the transistor overhead and power needed to decode the x86 instruction set were a considerable portion of the silicon area and power budget. But a couple of density doublings later, the RISC-vs-x86 decode overhead became a trivial portion of the equation. That same trend will apply in the low power x86 space too. The silicon area needed to provide a usable MID/phone experience on x86 will cost milliwatts by the time it gets to 32nm. Today at 45nm, a usable experience still takes real power. INTEL also had to compromise and attack two areas with a single design. Next time around I expect INTEL to field possibly 4 designs from a common core: server, mainstream, low-cost, and ultra-low power. Today on 45nm they have only 3.
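The "decode tax shrinks away" argument can be sketched numerically: hold a fixed-function block (the x86 decode overhead) constant in transistors while the rest of the budget doubles each node. Both transistor counts below are invented purely for illustration:

```python
# Fixed decode overhead vs a doubling transistor budget.
# Both counts are made-up numbers for illustration only.

decode_transistors = 1_000_000     # assumed fixed x86 decode overhead
core_budget = 5_000_000            # assumed starting core budget

for doublings in range(4):
    total = core_budget * 2 ** doublings + decode_transistors
    share = decode_transistors / total
    print(f"after {doublings} density doublings: decode is {share:.1%} of the die")
```

The overhead that started as roughly a sixth of the budget ends up in the low single digits after a few doublings, whatever the exact starting numbers.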

Will ARM go away? No, but just as with RISC in servers and the high end, it will face a very compelling alternative. If the rumors are true that the iPhone goes to x86, the native capability to support the huge existing base of x86 applications will be a competitive advantage that all the other manufacturers will have to acknowledge, and they will go dual source.

Look at SUN, HP, IBM and others. 10 years ago servers were the domain of everything but x86; now look at them today.

The same will happen at the low end. ARM won't go away, but for the best experience, having native x86 capability on a node 1-2 generations ahead, with the huge x86 volume to support you, is something even Samsung can't compete with.

Samsung is big, but Samsung's silicon expertise is spread across DRAM and flash. That expertise doesn't directly translate into a competitive manufacturing advantage for the highest performance or best power/performance logic. Building phones, LCDs, etc. doesn't help that much either.

The clock is ticking.

Anonymous said...

"But you chose not to comment on my analysis. So I'm left to conclude that...."

Come on InTheKNow, I'm assuming this is a rhetorical question.

Mr TickTock is your classic poser. He is an Intel fan (nothing wrong with that), he likes to state his opinion (also nothing wrong with that), and then he tries to pass himself off as some sort of knowledgeable person by throwing out information he doesn't really understand to feed his opinions (SOMETHING DEFINITELY WRONG WITH THAT).

When exposed (like you did with his ridiculous ARM comment), he will either:
1) ignore the info,
2) attack the messenger,
3) change the argument (an oft-used tactic of Mr Tock),
4) attack some other comment, or
5) demand links and info to disprove his half-baked theories (as he is obviously above having to support his own statements).

So what has he added to the board? Well he has courageously informed all of us that designers will want to work with the best process - something that I certainly did not realize (of course I'm a "retard"), and something most probably would not have understood.

What are some of his ridiculous conclusions?
1) ARM has no chance
2) Scaling could be ideal between tech nodes if designers had the time, money and effort. Note: this was later amended to the just-as-absurd conclusion that scaling is limited by analog I/O.

3) A shortage of a part means it is successful (as if there were no other plausible reasons for a shortage).

4) He has confused Intel's claim of a 10X reduction in GATE LEAKAGE with a 10X reduction in GENERAL (OFF-STATE) LEAKAGE... as he apparently doesn't understand there are other significant components to off-state leakage, or has chosen to mislead folks (again).

I still think Mr Tock might just be an experiment by a disgruntled AMD fan, to see if they can pass some ridiculous info through here and say only AMD fans get challenged on this board. It has even turned into the Scientia model - prove I'm wrong with links (and just assume all the unsupported crap that I say is gold).

Hopefully folks have gotten a flavor for Mr Tock's knowledge and analysis capabilities. I'm done responding and feeding the troll. Again Mr Tock, if/when you can demonstrate you are interested in having an open discourse and not just flaming, I will respond to your comments. InTheKnow, I would suggest you do the same.

Anonymous said...

must suck to be an AMD fan...

LOL

hyc said...

Ah, Mr. Tock, that's a good name.

Once again - roborat, please, ban anonymous posts. I don't care what made up names posters use, just as long as they all use different names so that we can distinguish who's speaking...

InTheKnow said...

InTheKnow, I would suggest you do the same.

But it's so hard to leave the uncorrected misinformation lying there in front of me. :)

As to the 10x reduction in power, I do want to clear up one common misconception. That 10x reduction applies to idle power consumption. Just read all the press releases. The information is in there.

That kind of reduction isn't going to come from some sort of magic at 32nm. It is going to come from architectural changes and possibly software/firmware implementations to change what is drawing power when the chip is idle and what gets activated in any given power state.

Not being a design guy I'm admittedly vague on exactly how this is pulled off.

What does 32nm promise? Combined with the rev2 design, it will give a power draw of 1/2 the current product when the chip is running. Still an impressive reduction, but nowhere near 10x and still a higher power draw (0.25-1.25W) than ARM.

Anonymous said...

But it's so hard to leave the uncorrected misinformation lying there in front of me.

That 10x reduction applies to idle power consumption.


And now, my friend, I need to correct you :) Actually that 10X reduction applies to gate leakage specifically (not idle leakage/power). Gate leakage and subthreshold leakage (leakage from source to drain when the transistor is off) are the 2 biggest sources of off-state (idle) leakage and hence are a big chunk of the idle power consumption. A 10X reduction in one of these doesn't mean an overall 10X reduction.

On 65nm the gate leakage was starting to overtake the subthreshold leakage in magnitude. With High K implemented I would imagine subthreshold leakage has retaken the #1 spot (not certain on this).

By the way, Intel also claims that high-K gives them either a 5X subthreshold leakage reduction OR a 20% performance gain.

Bottom line... the 10X is applicable to one specific leakage mode (albeit a large one) and should not be confused with an overall 10X reduction in leakage (or idle power). If you go to Intel's website, research link, one of the links on high K has a bulleted list of detailed improvements.
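A quick worked example of that bottom line. The 50/40/10 leakage split below is an assumption for illustration, not measured data:

```python
# Why a 10x cut in one leakage mode is not a 10x cut overall.
# The normalized split below is assumed, not measured.

gate = 0.5           # assume gate leakage is half of idle leakage
subthreshold = 0.4
junction = 0.1

before = gate + subthreshold + junction       # normalized to ~1.0
after = gate / 10 + subthreshold + junction   # 10x cut to gate leakage only
print(f"idle leakage after: {after / before:.2f}x of before "
      f"(~{before / after:.1f}x overall, not 10x)")
```

Even with gate leakage assumed to be half the total, a 10x cut there only buys roughly a 1.8x reduction overall.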

Anonymous said...

Here's the link on high K/MG benefits (page 5)

http://download.intel.com/pressroom/kits/45nm/Press45nm107_FINAL.pdf

SPARKS said...

“It seems like nobody pays any attention to Nehalem's modularity.”

ITK- I’m paying attention, ---- really.

“ (INTC) which bet on the wrong memory standard, among other things.”

Man, I LOVED Rambus memory. At the time, right up to DDR2, it was slamming everything else. Frankly, it still is; XDR, I think they call it.

I still have an excellent 850E MOBO running in the house (with my cherished P4 3.06); even for its time, the 32-bit memory still has excellent bandwidth.

“I'm starting to wonder if Mr. Tick Tock is not just a bored Sharikou……”

Oooooh, that’s gotta hurt!

You may not like ‘Attila The Tock’s’ perspective, and he may be less than correct when in-depth, detailed analysis is applied (beyond my scope); however, he may be right about the process limitations at TSMC. As we all suspected, they are pushing the thermal envelope with their current process generation. It looks like NVDA is finger-pointing and sharing the blame for their relative failure to keep GPU thermals and power dynamics under control.

http://www.fudzilla.com/index.php?option=com_content&task=view&id=8293&Itemid=1

Besides, he is an ultra right wing Intel fanster, so my personal bias gives him a pass. After all, we do need our extremists on both ends of the spectrum so we moderates can find the center. I wonder; am I a moderate?



SPARKS

Unknown said...

Scientia is at it again. He's claiming that no Intel quad can hit 4GHz on air without overheating. What a complete and utter lie; the QX9650 sitting in front of me will run at 4GHz 24/7 with no problems on air. That requires ~1.38V in the BIOS, which is well within the safety limits for a 45nm CPU. The thing doesn't even go past 60C with a TRUE 120. Now, not all Intel quads can do 4GHz like that, but the cherry-picked 'cream of the crop' QX9650 and QX9770 shouldn't have any real problems doing it!

Well, what happened when I added water? Just this week, in fact, I had some time off work, so I picked up a Koolance Exos-2.5 external water cooling kit (I didn't want to ruin my Silverstone case by modding it) and water cooled the CPU. Now at the same voltage I used to get 4GHz on air, I can hit a crazy 4.5GHz (450 x 10). This thing is just an utter monster. That is totally stable after 15 hours of running ORTHOS across all four cores.

By comparison, what can a Phenom do? Best I've seen is 3.7GHz with water cooling, and that wasn't even stable. It was just a CPU-Z screenshot.

Scientia also claimed that all of the new Phenoms (I'm assuming he's referring to the B3 stepping only here) should reach 3.2GHz on air cooling, but there are numerous people at AMDZone that run at lower frequencies. Some of the reviewers weren't able to hit these speeds either, Xbitlabs only got 2.7GHz from a 9850, and HardwareZone only got 3.1GHz from a 9950. The highest I've seen is AMDZone's own review where they got 3.3GHz on air 100% stable.

Perhaps he ought to rephrase to state that SOME Phenoms can reach these 3.2 -> 3.3GHz speeds. But to claim that they all can is ludicrous.

He also conveniently omits the fact that the C2Q both 65 and 45nm are faster at the same frequency than any Phenom across a wide variety of benchmark tests as the many reviews show. ArsTechnica stated when they reviewed the Phenom 9850 that it would take a 2.7GHz Phenom just to match the old 65nm Q6600. The 45nm CPUs are 5 -> 8% faster on average at the same frequency over the old 65nm CPUs, further increasing the gap.

Besides, 3.2GHz on air really isn't all that interesting. I was doing that 15 months ago with a Q6600. :)

BTW, SPARKS, Nvidia's price cuts have taken effect - there are some sweet deals to be had here:

The GTX 280 for $459: http://www.newegg.com/Product/Product.aspx?Item=N82E16814127360

The GTX 260 for $299: http://www.newegg.com/Product/Product.aspx?Item=N82E16814127361

I still haven't decided what I'll upgrade to as of yet. With my 30" LCD doing a 2560x1600 resolution the 4870s don't have enough video memory for my liking. I hope they'll do a 1GB version soon so I can enjoy 2560x1600 with all the eye candy on!

-GIANT

SPARKS said...

“Scientia is at it again. He's claiming that no Intel quad can hit 4GHz on air without overheating. What a complete an utter lie, the QX9650 sitting in front of me will run at 4GHz 24/7 with no problems on air.”

Quite right; as you recall, I went down the same path with him several weeks back. I confronted him with the same thing; however, he qualified his statement by saying that running Prime95 for a half hour - "now that would be something to brag about".

Two things occurred: one, the most I could get the QX9770 to run this full-load torture test at was 3.85GHz on air. Secondly, NOWHERE did he ever mention, nor has it ever been shown, that Pheromone ran the same test under the same conditions.

I think, as pompous as this may sound, that I brought a refreshing change to his blog for the short period of time I posted there. I could be wrong, but I refuse to be baited by that egocentric moron. I said I was done, and I meant it. I'll never waste my time even reading his blog again. He could say INTC is using ALIEN technology acquired during the Apollo Moon missions, and I couldn't care less.

Rarely do I suggest to people what to do (unless they're working for me), but may I suggest that you not be baited by that AMD shill, no matter what subjective, arbitrary nonsense he concocts.

Besides, we’ve got a healthy discussion right here with “Attila The Tock” and GURU battling it out on the merits of ARM and x86. (Hell, last week I didn’t know what ARM was until they brought it to light.)

Now I know. (Frankly, I'll have to side with GURU; embedded RISC solutions meagerly sipping power with very efficient programming ARE a formidable challenge for x86 in the compact mobile market, regardless of feature size and clever design.)

Giant, there’s no doubt; I’m going with 4870. Two in Crossfire at 300 bucks a pop is simply a no brainer since X48 cannot do SLI. Additionally, ATI has resolved its Anti-Aliasing/Hi-RES issues; the cards are not falling flat on their asses. Actually, they are scaling quite well.

Hmmm, Exos, nice pick, BTW. I don’t believe you mentioned which water block you’re using with the Exos.

SPARKS

Anonymous said...

Yawn

What are the components of power?

There is junction leakage from the drain junctions under bias.

There is sub-threshold leakage from drain to source.

There is gate leakage.

Then of course, when you switch transistors on and off, you have active current flowing, but also all the charge being switched to and from the long capacitive lines. When you run at 2, 3 or even 4GHz, that is a lot of charge, and therefore power.
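The components listed above fold into the standard back-of-envelope model: dynamic power scales as activity × C × V² × f, while the leakage terms add a static floor. A minimal sketch (every number here is made up for illustration, not measured from any real chip):

```python
def dynamic_power(c_eff, vdd, freq, activity=1.0):
    """Switching power: charge moved on/off the capacitive lines each cycle (watts)."""
    return activity * c_eff * vdd ** 2 * freq

def leakage_power(vdd, i_sub, i_gate, i_junction):
    """Static power: subthreshold + gate + drain-junction leakage currents at Vdd (watts)."""
    return vdd * (i_sub + i_gate + i_junction)

# Illustrative numbers only: 20 nF effective switched capacitance, 1.2 V supply.
p_dyn_3ghz = dynamic_power(20e-9, 1.2, 3e9)   # dynamic power at 3GHz
p_dyn_4ghz = dynamic_power(20e-9, 1.2, 4e9)   # linear in frequency
p_leak = leakage_power(1.2, 8.0, 2.0, 0.5)    # static floor from the three leakage paths
```

Note the V² term: this is why scaling that lets you hold speed at a lower voltage and smaller width pays off quadratically on the active side, while leakage has to be attacked separately.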


Now you can reduce active power by shrinking the transistors. That is the power of scaling: reducing the width you need. The faster and stronger your transistors, the less width you need. You need high-k to get a lower electrical oxide thickness. You must have a metal gate to get rid of poly depletion. Thin the electrical oxide and you can better scale the transistors to shorter L while still controlling DIBL and subthreshold leakage.

But the layman only needs to know that overall, with high-k/metal gate, you get orders-of-magnitude improvement.

If you want to debate the details or point out what I left out... feel free, and call for my ban.

But you never answered the question. How can TSMC, IBM and AMD compete as a consortium? They are already two years behind on high-k/metal gate learning. This is similar to their whiff with strained silicon.

Tell me, how do you field a competitive product if you start with an inferior process? Don't tell me about Prescott and Itanium; I know about those.

Let's study Core 2 versus Opteron, or Nehalem versus Barcelona. Atom versus ARM is a story still in its first chapter. Like I said earlier, ask the RISC boys about x86 about 10 years ago.

Call for my ban because you don't like my story. That makes you a sad loser, does it not?

Feel free to point out other omissions; they don't mislead, they just aren't relevant to the larger issue I'm talking about.


AMD and nVidia are finished, and no consortium can help.

Tick Tock Tick Tock the clock is ticking

InTheKnow said...

Actually that 10X power reduction applies to gate leakage specifically (not idle leakage/power).

Here is a reference to my idle power comment. Specifically it states:

Moorestown should reduce overall package size by 50%, reduce power by half and idle power down by a factor of 10x. Intel didn't clarify any of the statements on power consumption, so your guess is as good as ours as to how.

So the 10x reduction is in idle power according to this article (and others). Maybe I missed something more current than the October '07 IDF?

InTheKnow said...

Just to be clear here, I'm not attributing the power reduction to Hi-K/MG. Though the link was interesting, thanks.

Menlow and Moorestown are both targeted to be built on the 45nm process. Since they are both built with the same process technology, Moorestown must be getting these power savings from something else.

Anonymous said...

Don't feed the troll.

SPARKS said...

Well, it seems NVDA’s finger-pointing claims have some legitimacy. It does show no one can do stuff in the 40s like INTC, after all.

http://www.theinquirer.net/gb/inquirer/news/2008/07/05/tsmc-delays-40nm


I wonder how this will affect AMD and its plans.

SPARKS

Anonymous said...

Looks like TSMC is really struggling at the bleeding edge

Problems with nVIDIA on something as simple as packaging.

Now more bad news as 40nm slips.

Wonder when the newslines about AMD 45nm slipping or shortages will start?

Anonymous said...

"Problems with nVIDIA on something as simple as packaging"

Has this been confirmed to be a TSMC specific issue? Nvidia uses 3 different folks in the packaging area so it may or may not be TSMC. Any links on this? Or is this speculation / TSMC bashing?

"I wonder how this will affect AMD and its plans" (40nm 'slip')

Sparks... this may actually help ATI (AMD) as it only affects graphics and may actually hurt Nvidia worse as they have a far more urgent need to shrink the graphics parts than AMD. The 48X0 is already done on 55nm, is smaller than the comparable Nvidia part, and likely yields better.

I'm not sure what the roadmap for the AMD IGPs (780/790) was - if these were slated to move to 40nm early on, there may be some impact, but the INQ article is rather vague as to how much of a slip this really is to begin with (if it goes beyond Q1 then it may affect graphics parts); otherwise this is probably noise in the grand scheme of things. A 2-month delay on a tech node shift is really not much to write about.

Seems like much ado about nothing (and a slow news weekend for the INQ).

Anonymous said...

Oh, and Mr. Tock (comment after Sparks), just because you drop the signature catch-phrase and lose the vulgarity... your conclusions are just as bad and biased, and as usual lack any supporting information!

Tonus said...

No mention of the curious issue Anand had with some benchmarks and the "Cool and Quiet" feature for AMD CPUs? He claimed that one or two benchmarks showed degraded performance (50% for Photoshop) until C&Q was disabled. Then he stated that after a while Photoshop continued to test much slower even when C&Q was disabled.

Have any other review sites discovered this? Has Adobe or AMD made any mention of it? If other sites have this same issue, it could point to another problem with Phenom. But if no one else sees this occur, wouldn't it call Anand's competence into question?

Tonus said...

This is from an Asian review site (hence the translator and the poor English):
http://en.hardspell.com/doc/showcont.asp?news_id=3687

"Coolarer have posted a detailed preview of Intels upcoming CPU, the entry level E5200. The CPU is the first 45nm CPU with 2MB L2,default FSB keeps 800MHz. E5xxx series CPU will take over E2xxxs place, to become a next entry CPU with good OC ability... Thanks to its high multiplier, Bus Speed only OC to 320MHz and the core can achieve 4GHz. BTW, the CPU will be available at Q3, only in 84USD."

An entry level $84 CPU (2.5GHz, dual-core) that can OC past 3GHz easily and possibly up to 4GHz. It'll be impressive if it's able to reliably OC past 3.5GHz on air.
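For anyone checking the arithmetic in the quote: the core clock is just the bus speed times the multiplier, and the 12.5x multiplier is inferred here from the quoted 2.5GHz stock clock on a 200MHz bus (the quad-pumped 800MHz FSB). A quick sanity check:

```python
def core_clock_mhz(bus_mhz, multiplier):
    """Core clock (MHz) = front-side bus speed x CPU multiplier."""
    return bus_mhz * multiplier

stock = core_clock_mhz(200, 12.5)  # 800MHz FSB is a quad-pumped 200MHz bus
oc = core_clock_mhz(320, 12.5)     # the overclocked bus speed quoted in the preview
```

So the 320MHz bus figure checks out: 12.5 x 320MHz lands exactly on 4GHz, which is why a high multiplier makes this chip easy to overclock on ordinary boards.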

Anonymous said...

My predictions:

1. When oil hits $150 a barrel, maybe sometime this week, AMD's stock will drop into the 4 dollar range.

2. Scientia the Shrimpy will continue to attack Anand Lal Shimpi, even though Scientia couldn't even test his own arse for gas emissions. It's easy to call others incompetent when you don't do any testing. Besides methane emissions & entertainment value, what the hell has Scientia contributed anytime, anywhere?

3. AMD will post a smaller loss for the quarter just ended - probably under $200M.

SPARKS said...

“Sparks... this may actually help ATI (AMD) as it only affects graphics and may actually hurt Nvidia worse as they have a far more urgent need to shrink the graphics parts than AMD.”

In my case, this is law. I can recall the purchase and subsequent performance increase of my Radeon 9700, years back. I am getting that warm and fuzzy feeling about ATI products, as opposed to settling for a $900 compromise when I purchased two X1900 XTXs.

Three weeks will tell the story, 4870 vs. 4870 X2. Either way, I’m in. Frankly, as you may suspect, I will not have ANY sleepless nights over NVDA’s troubles, now or for the next six months. Now they can shove SLI down their shareholders’ throats; the <1% sales in this area will not help them out of this pickle. Deep price cuts are/were the indicator here. As you said, the SLI “shenanigans” with INTC to save their chipset business will have their payoff, in spades.

Anony Moose- Those are formidable predictions. I don’t see it happening since most AMD stock is institutionally/privately owned. If it does happen, which I seriously doubt, it would mean some of the big holders are going to cut their losses and dump. I don’t think so; they would be cutting each other’s throats, and things are bad enough in every other sector. Why cause more trouble?

The ATI division has surprised everyone, gained market share, acquired what used to be exclusive NVDA partners, mobile sales are up at the expense of NVDA, and finally, they are going to make money.

In fact, if they can beat the street’s predicted losses, you may even see an uptick! It’s been happening at the end of every quarter ---- so far.

SPARKS

SPARKS said...

Love him or hate him, I think Charlie may just have this one right. Allow me to qualify this by disclosing that he hates NVDA’s guts even more than INTC’s. So does everyone at the INQ.

http://www.theinquirer.net/gb/inquirer/news/2008/07/07/nvidia-meltdown-blame-game

SPARKS

SPARKS said...

Giant-Did you say GPU memory for big fat flat screen? How’s about TWO GIG GDDR5!

2560 x 1600, turn up the settings, HOO YAA!

SPARKS

http://www.fudzilla.com/index.php?option=com_content&task=view&id=8299&Itemid=1

Anonymous said...

"No mention of the curious issue Anand had with some benchmarks and the "Cool and Quiet" feature for AMD CPUs? He claimed that one or two benchmarks showed degraded performance (50% for Photoshop) until C&Q was disabled."

Quite frankly, and this will sound pretty harsh, the whole K10 launch has been a joke.
- AMD started with much lower speed bins than expected
- They had the whole B3 errata episode (which I continue to contend wasn't a big deal - but it gave them an excuse to work out some thermal issues on 65nm and get a new stepping)
- Had the "energy efficiency first" marketing campaign (read: an excuse for low clocks while they tried to work out the issues)
- Had board compatibility problems
- Have now launched extremely high TDP parts (so much for the energy efficiency customers were demanding) - which has caused more board issues
- Kept playing with "introduced" vs "launched" vs "in production" vs "shipping to OEMs"... when can we buy these things again?

So the whole Cool'n'Quiet issue, or some crazy SW interaction where cores are clocking up and down as threads move from core to core, is hardly surprising. (I'm not saying the reports are true, as there isn't enough data, but it wouldn't surprise me.)

I would imagine these things get worked out with BIOS patches or other workarounds. The real issue for AMD is the thermals, which appear to be limiting clocks much more than AMD anticipated on 65nm. That wall seems to be around 2.5GHz, and those 2.6GHz, 140-watt chips have to be pretty much right on the 65nm cliff.

Anonymous said...

How far they have fallen.

Latest news on AMD says dreamworks is going to INTEL. Looks like the animators are going for real horsepower versus poop power now.

Anonymous said...

"Latest news on AMD says dreamworks is going to INTEL. Looks like the animators are going for real horsepower versus poop power now."

This is a bigger blow than people realize, in my opinion, and it has really established a pattern.

Ironically, it started with Steve Jobs' switch from PowerPC to Intel in Macintosh systems, and the list of high-profile names breaking AMD exclusivity is growing (imagine that - AMD had exclusivity deals, too)...

- Jobs announces the switch to Intel (not AMD), citing Intel's stronger roadmap as the reason for not considering AMD. Not surprisingly (or perhaps surprisingly), Pixar went Intel in 2003.

- Google begins working Intel servers into their server farms.

- Sun breaks the exclusivity it had with AMD for years, and now includes Intel in its portfolio.

- Cray announces they will develop systems with Intel CPUs. Formerly all AMD for x86 procs.

- Dreamworks now announces a switch to Intel... this is the most visible move overall, citing a stronger roadmap.

I would not be surprised, and it would be the icing on the cake, if Lucas Films announced conversion to Intel based platforms.

Anonymous said...

"I would not be surprised, and it would be the icing on the cake, if Lucas Films announced conversion to Intel based platforms."

JJ... great comment... most of these folks will be loyal to the best product/roadmap, not the supplier. AMD had a superior product in a lot of these areas a while ago, and until Core 2 and Nehalem, Intel really had no credible roadmap - Opteron was very scalable, and AMD appeared to be very customer-oriented/hungry.

I think along the way AMD underestimated the support needed for some of these top-tier OEMs like Dell. You have production guarantees, and many of these OEMs probably expect significant platform validation support (I believe Intel did a lot of internal platform validation work for some of the larger OEMs). When AMD was servicing the channel, things were probably significantly different, and AMD could devote more support to the Crays, Suns and Dreamworks of the world. I don't think AMD understood how much additional support and infrastructure would be required with their increase in market share and their shift to corporate markets (beyond the obvious manufacturing scale-up).

Fast forward to now: Intel has a superior core, and it looks like Nehalem will eliminate any scalability issues (and you have to think Dreamworks, Cray, et al. have been given some insight into this). Couple this with a company that rarely has serious supply issues (especially for the top-tier customers), add AMD's recent woes, and it is a no-brainer for many of these folks.

Right or wrong, AMD as the smaller company has to be better, probably cheaper, and have stellar support. A tie is likely going to go to Intel and any Intel advantage makes these decisions a no-brainer.

Tonus said...

Do companies like Pixar and Dreamworks design and build their own server farms and workstations? Or does it work some other way, where "switching to Intel" means that they expect their hardware suppliers to spec Intel hardware?

Tonus said...

Ah, never mind, saw the article. They will buy workstations built using Intel CPUs. So it's the second option.

InTheKnow said...

most of these folks will be loyal to the best product / roadmap, not the supplier.

Which is why this migration kind of surprises me. With the introduction of QPI, Intel will eliminate Opteron's HyperTransport advantage - but only in the 2-way server space for the first-generation Nehalems. Nehalem with 4 QPI links isn't due out for another year. I would have expected these announcements to come at that point.

It makes you wonder what AMD's server roadmap looks like. Surely if there were something compelling, AMD's customers would have stood pat until Intel had the clear advantage in the 4P space.

Tonus said...

I think the announcements come now because their three-year contract with AMD was up and they were going to sign a contract with one company or another and announce it. A Dreamworks rep also explained that much of their software is proprietary and that Intel will be helping them develop their software tools, implying that optimization for the platform is also important to them.

AMD's financial position could have a lot to do with it as well. Would Dreamworks want to sign a three-year contract with a company that is in pretty tight financial straits? If AMD couldn't deliver for some reason, DW would probably need to buy some sturdy kneepads before approaching Intel.

Anonymous said...

"It makes you wonder what AMD's server roadmap looks like."

Publicly... there really is not much available (AMD has gone silent on roadmaps more than about 6 months out)... they hope to get the clocks up with 45nm (probably not by much), and probably more if/when they go to high-k late in 45nm or at 32nm.

Fusion is meaningless in this space, as the initial applications for it are low-end (which makes sense in my view) and probably mainly targeted at notebooks, and to a lesser extent desktops. They have some sort of memory access improvement (which is probably more evolutionary). And eventually they will do MCMs and more cores; but the issue is not the number of cores, it is the capability of the cores.

Publicly, things on Bulldozer have been completely silent, and it is looking like 2010 best case.

I also think the wild card is Larrabee, ESPECIALLY in this space, where Dreamworks, Pixar, etc. are not gaming; they are probably customizing their software to a large extent and looking for raw computational power. And as Tonus said, Intel is probably offering up a significant amount of support.

GutterRat said...

Hey all,

How about some Nehalem numbers to rain on the AMD fanboy parade?

:)

Anonymous said...

"How about some Nehalem numbers to rain on the AMD fanboy parade?"

Come Q4, Intel may just use Nehalem to drive the final nail into the coffin :)

Imagine Intel cutting prices on the Q6600/Q6700 and Q9300/Q9450 to make room for the new inventory.

Anonymous said...

"Imagine Intel cutting prices on the Q6600/Q6700 and Q9300/Q9450 to make room for the new inventory."

I'd imagine Intel will attempt to fatten margins initially by launching high-end Nehalems, and leave the 65nm and 45nm Core 2s around where they are. When mainstream Nehalems start coming out (mid-'09?), Intel will be pretty much phasing out 65nm and will use the 45nm parts (9300/9450) to push AMD's quads down. I wouldn't be surprised if you never really see any truly 'low end' Nehalems.

Also, I wonder if there will ever be dual-core Nehalems (at least ones that are not disabled quads)... does anyone know?

Thanks for the Nehalem link; now Scientia can write another blog about how corrupt THG is, especially given the outpouring of response to his Anand article!

Tonus said...

Another review of Nehalem shows a 20-30% (and higher) performance increase, with the warning that it's still early and the supporting hardware is neither finished nor tweaked.

Thanks to these early looks, Nehalem will either be a sledgehammer to AMD's face, or one of the biggest disappointments to come down the pipe in a long time.

Waiting for this chip is going to be torture. =x

Anonymous said...

Sparks said...

"Anony Moose- Those are formidable predictions. I don’t see it happening since most AMD stock is institutionally/privately owned. If it does happen, which I seriously doubt, it would mean some of the big holders are going to cut their losses and dump."

Well it's 10 minutes to closing and AMD is currently trading at $4.93. Oil jumped $5 to $142+ a barrel, probably due to the Iranians acting out again with missile tests. I think even the big (institutionalized :) investors know that if the economy tanks, AMD is in a much weaker position than Intel.

In other news, and I use the term loosely, it looks like the AMDZonerz are cannibalizing themselves once again. This time Ah Been Stoopid is ragging on some guy who works in the render farm industry over the latter's statement of how much faster Xeons render than Opterons. Based, of course, on A.B. Stoopid's vast experience re-encoding home videos. Remind me to never accept an invite from him to watch his home movies :)

Anonymous said...

Not a whole lot of new info, but some stuff on 450mm and 300mm 'prime':

http://www.eetimes.com/news/semi/showArticle.jhtml;jsessionid=ANAZSU4N1JDAMQSNDLSCKHA?articleID=208808246

What I found most interesting is that this next-gen 300mm program has the grand goal of a 30% cost reduction and a 50% cycle time improvement (essentially to delay the need for 450mm).

Current status: It has simulated a 30-to-40 percent boost in cycle times and 10-to-15 percent improvement in cost.

This would lead to 'simulated' margin increases, 'simulated' improved profits, and 'simulated' success. 10%? How much effort is going to be required for 10%? I'm guessing this estimate is a 'greenfield' analysis, meaning it is a benefit starting from scratch - I doubt it takes into account the cost of retrofitting (lost capacity, labor, facility de-install costs, etc...)

Of course Sematech is the master of subtlety:
The goal is to process wafers without any delays, according to Sematech. Whew, glad they cleared that one up.... that's what the industry's been doing wrong!

And remember all that single-wafer vs. batch talk by Dementia? The bottleneck remains moving wafer lots from one tool to another. While single-wafer processing will eventually become important, the largest percentage of time a lot sees in the fab is transport/idle time in a stocker.


I'm also wondering why Sematech failed to mention that any cost improvements they make on 300mm should be applicable to 450mm as well.

Look for these programs to be largely political, with people driving their own interests and in some cases driving programs they don't even want in order to delay others. Some basic work will get done, but the heavy lifting will be done by Intel/TSMC/Samsung (and the equipment suppliers).

Anonymous said...

Well it's 10 minutes to closing and AMD is currently trading at $4.93.

First... is your back OK? You're patting it pretty hard! :)

Second.... 4.93 is in THE FIVE DOLLAR RANGE... "four dollar range" would be ~$3.75-$4.25. (or at least sub $4.50)

Anonymous said...

Anonymous said..

"First... is your back OK? You're patting it pretty hard! :)"

Not yet patting my back since all my predictions have yet to come to pass :)

"Second.... 4.93 is in THE FIVE DOLLAR RANGE... "four dollar range" would be ~$3.75-$4.25. (or at least sub $4.50)"

And oil hasn't hit $150 a barrel yet :). Looking at the 10-day trend, AMD has shed over a dollar, or about 20% of its value. Whatcha wanna bet the next 10 days bring? Another 47 cents from today's close at $4.96 and bingo! :)

The quarterly report is due out one week from today. Analysts are predicting a loss of 51 cents per share, or >$300M. So I might have to eat prediction #3. But we'll see. If the analysts are correct, obviously the TLB bug in my Crystal Ball will be to blame :).

I'll add a 4th prediction - sometime in the next 6 months, before GWB leaves office, there's gonna be war between Iran and the US & Israel, and oil will double in price. Worrying about who's ahead in the CPU wars will be the least of our problems. When it costs over $100 to fill my stinkin' Honda Civic, I'm reverting back to hoof power :).

SPARKS said...

A nony Moose- Nice call. Historically, AMD hasn’t been in the 4-dollar range since the 3rd quarter of 1982. Further, INTC had, by today’s standards, a great day, up 81 cents a share, while AMD gains failed to materialize. They usually uptick together. Not good for AMD, by any measure.

All said, INTC is, and has been, holding the line like a rock, despite soaring oil prices. If, by some stretch, this economy can turn around, I can see INTC back to the traditional high twenties, and perhaps, possibly break into the thirties.

Factoring in what everyone is saying above, if Nehalem scales well in HPC, your prediction may just come to fruition. Sadly for the thousands of AMD employees, AMD will indeed have nothing competitive to sell, at least on the CPU front.

Then there is AMD’s quarterly report to factor in. This will be a defining moment, no doubt.

GutterRat- I wish that Nehalem test had gone up against its real competition, my beloved QX9770, as opposed to the QX6800. (I know, I know, being top dog is fleeting at best.) I would have liked to have seen the numbers, just to see how much of a shellacking I’m going to take in any case. (Sigh)

SPARKS

Anonymous said...

"If, by some stretch, this economy can turn around, I can see INTC back to the traditional high twenties, and perhaps, possibly break into the thirties."

Sparks, I admire the optimism, but the last time Intel was over $30 was Q1'04, and the last time they were in the high 20's was mid'05, for about the time it took to have a cup of coffee. With the ASP erosion in the CPU space, $25-27 is what you should hope for.

Intel has a $108 billion market cap... to hope for a 50% move (a $30 stock price) is quite optimistic - unless Intel starts buying back a serious amount of stock (more than $1-2B).

As for AMD - who knows, but soon you will get the inevitable merger/takeover talk, which should artificially bump the price a bit. Also, they may surprise a bit on the earnings call... graphics may be profitable (and is almost certainly showing a good trend moving into H2), and nobody is really expecting anything in the CPU space. I'm not an AMD fan, but I'd be fairly optimistic given such low expectations.

Anonymous said...

AMD closed below 5 bucks. Poor Arabs - they must be kicking themselves for buying their shares so high.

Can't wait to see how AMD plans to finance their vapor fab in NY to run their vapor 32nm technology with their vapor products.

Penryns continue to be released at way below their capability, as AMD simply is no competition these days, with crappy designs running on a crappy process.

SPARKS said...

“and the last time they were in the high 20's was mid'05”

Well, yes and no - if you call 26.90 in Dec 2007 the mid-twenties, certainly. You guys know me by now; I’m always optimistic about INTC. Therefore, for me, anything over 25 is the high twenties, whereas I should be saying mid-twenties. The glass IS half full, damn it!

But, given the stock’s performance of late, may I suggest that if it were not for the double whammy of astronomical oil prices coupled with the simultaneous mortgage meltdown, INTC would have seen those 28 or 29 prices. After all, they were at 26.90 in December.

The stock has held the line despite everything else tanking. Their product lineup and execution are spectacular, and margins will be better as they ramp 45nm. A decent turnaround in the economy may bring a windfall as companies increase CAPEX. Plus, with new powerful server solutions, INTC will have the right products at the right time, all the way from Atom to Apple.

Unless, of course, we are to believe we are never going to get out of this economic downturn and we subscribe to the notion the end is coming in 2012. Ya gotta hand it to OPEC, Nostradamus, Edgar Cayce, and those Mayans - doom and gloom, et al. Is the ‘R’ word fundamental here?

Incidentally, what surprised me was Anony Moose’s predicting AMD’s drop below 5. Frankly, I didn’t see it coming; he did. Lucky call, perhaps? AMD’s products leave a lot to be desired, their bonds are junk status (BBB-, I believe), the $5.4B debt isn’t going away any time soon, and their roadmap looks like the road to Burma.

He did, in fact, call it.

SPARKS

SPARKS said...

“Second.... 4.93 is in THE FIVE DOLLAR RANGE... "four dollar range" would be ~$3.75-$4.25. (or at least sub $4.50)”

Here we were talking about INTC in the mid to high twenties, and you are quibbling about 25 or 50 cents at 5. Chasing AMD stock is like chasing nickels and dimes rolling across the Long Island Expressway at night, blindfolded.

It’s ugly.

SPARKS

Anonymous said...

"Here we were talking about INTC in the mid to high twenties, and you are quibbling about 25 or 50 cents at 5."

Hey, when you are that low, $0.50 is a 10% move!!! Welcome to the world of gambling... ummm, I mean investing in low-priced stocks!

And you've got to hand it to OPEC (and Russia and Venezuela)... they will keep messing around with the supply and price of oil until they see demand really start to crack (and a fundamental shift to alternative energies). I think they may have pushed it too far this time.

It is all just a form of conditioning... just think: when oil went down from $145 to $135, people were talking about a precipitous drop! If oil settled at ~$100 and gasoline in the US were ~$3.00, people would probably be pretty happy with that now.

Just like AMD going to $10... a few years ago this would have been horrendous, now people would be ecstatic!

Anonymous said...

At < $5.00 / share, AMD is in the realm of being considered a penny stock.

Tonus said...

My 4870s arrived today, replacing the 7800GTX and 8800GTS cards in my two desktop systems. This system is a go and the other is installing the drivers.

Hey, a nice performance boost for $299 a pop? I'll take it!

Anonymous said...

And the financial shell games start again:

http://www.eetimes.com/news/semi/showArticle.jhtml;jsessionid=GIOQEXW5QJUVWQSNDLPCKH0CJUNN2JVN?articleID=208808603

Advanced Micro Devices announced it will take nearly a billion dollars in charges in its second quarter results to be reported July 17

So the highlights:
- $880M for the DTV and handheld businesses (from ATI)
- $32M for layoffs (this should help to cover Hector's salary)
- $36M in investments (Spansion)
+ $190M from the sale of 200mm equipment
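Adding up the figures above (a quick sanity check; the labels are my paraphrase): the gross charges come to $948M, which is the "nearly a billion" headline, while the equipment sale nets it down to roughly a $758M hit.

```python
# Figures in $M from the EETimes summary quoted above; labels paraphrased.
charges = {
    "DTV/handheld impairment (ATI)": -880,
    "layoffs": -32,
    "investment writedown (Spansion)": -36,
    "sale of 200mm equipment": +190,  # a gain, not a charge
}

gross = -sum(v for v in charges.values() if v < 0)  # total charges before the gain
net = -sum(charges.values())                        # net hit after the equipment sale
```

So "nearly a billion dollars in charges" is the gross number; the net damage to the quarter is about $190M smaller.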

Someone help me out here... didn't AMD already take a $1.66B impairment charge for the ATI acquisition in Q4'07? And then there are the recurring 'one time' charges related to integration.

Just curious... shouldn't the latest impairment charge have been part of Q4'07? This seems a fishy (though legal) way of dumping/hiding charges. Death by a thousand papercuts (releasing charges piecemeal over time) is probably not the best way to go about things from a stock perspective. It's best to just get all the charges out there at once and establish a little credibility. Sure, you take a hit, but you get it out there and behind you. Do it the other way and you either appear incompetent (you have no clue how to write down assets) or it appears you are trying to mislead investors. Either way, it's not good.

Should we start the countdown on Hector's job? I'm putting the over/under at late October (don't want to do it too close to Christmas).

My theory on this:
- It will give the groomed CEO one last chance to report all of the garbage/baggage on the Q3 quarterly report, before he starts being held accountable
- It gives Q4 (which is generally a good quarter) as the first quarter for the new CEO - this will help the "we're turning things around', 'set the ship on the right course' story
- It's already July, and it will take a little planning for the "pursuing other interests", "amicable split", and "this has been planned for a while" stories.

Anonymous said...

Sparks said...

"Incidentally, what surprised me was Anony Moose’s predicting AMD’s drop below 5. Frankly, I didn’t see it coming; he did. Lucky call, perhaps?"

Nothing up my sleeve... :)

Actually just a recent trend observation, such as today's oil closing up 2.3% and AMD down 2.4%.

A shame that we continue to send hundreds of billions overseas to people who don't like us much... We are awash in solar power - 174 petawatts according to Wiki - but it's too dilute, and consequently too expensive, to gather and store. We need a Manhattan Project-style effort to make some technical breakthroughs - my vote goes to the candidate that'll make it happen.

SPARKS said...

"It gives Q4 (which is generally a good quarter) as the first quarter for the new CEO - this will help the "we're turning things around', 'set the ship on the right course' story"

Sounds exactly like something Dirk Meyer would say.

SPARKS

Anonymous said...

"We are awash in solar power - 174 petawatts according to Wiki"

Great, another energy expert... how many solar-powered cars are there? How about solar-powered planes? Ships? Trucks?

Like some really ignorant politicians, many believe all energy is completely interchangeable and fungible. It is not, at least not without additional technological breakthroughs.

It is like trying to force people who drill into the ground to invest in alternative energy. (Oh wait, there are some politicians who actually think this makes sense.) Do I really want Chevron trying to develop a compound thin-film semiconductor solution? Do they have the expertise? Is that an efficient solution? Wouldn't you rather have people who - I don't know, I'm spitballing here - have expertise in semiconductors developing solar solutions? You know what, we should have car companies developing airplanes... after all, it's just an alternate mode of transportation!

What solar will help most with is electricity generation, which will help limit dependency on coal (which of course is more of an environmental issue than a cost/import issue, as the US has some of the world's greatest coal reserves).

Unless we figure out things like coal gasification or plug-in hybrids, improving electricity generation does not significantly impact oil demand (it helps to some extent if people move off heating oil, as in areas of the Northeast). Or, if we could develop the infrastructure for natural gas-powered cars, then solar would help, as we could replace the natural gas used for heating and electricity generation with solar/wind/etc.

We don't need a Manhattan project for power... that is just the latest politician catch-phrase and double-speak to make the problem sound really complicated so that they don't have to go into any details and show their complete ignorance and lack of a plan (I know, how about we start a bipartisan commission to study it?). You need to incentivize the private sector, and this doesn't mean de-incentivizing/punishing industries like Obama wants to do. The government also needs to be 'energy agnostic' and not drive to pre-determined inefficient/ridiculous solutions like corn-based ethanol... and this is the problem with a government-based Manhattan project - too much politics, too many special interests, etc.

My vote will go to a candidate that doesn't use fear and demagogues on the energy issues. Both candidates are missing the boat. You need someone like T Boone Pickens who understands both oil and alternative energies (and is putting his money where his mouth is) leading the way.

Obama is clearly trying the us-against-the-evil-oil-companies angle... If a company grows and is successful, won't it pretty much continue to earn record profits? You hear RECORD PROFITS, RECORD PROFITS, but is a 10% profit margin really excessive, and why no mention of the amount of taxes they pay? If 10% is excessive, there are a lot of businesses that should be windfall-taxed... how about:
- dreamworks, >27% margin (gouging the American entertainment consumer, no?)
- first solar >30% margin (get those evil, price gouging solar companies!)

Show me the outrage and then I'll believe you are not just trying to create an opponent and something for the average US citizen to fear and get angry at.

And then you have McCain - we should drill more in addition to working on alternative energy (which I support)... but it's irresponsible to drill in some desolate piece of land in the Arctic (ANWR)? Has anyone seen the piece of land we are trying to protect? But it's OK to drill in the oceans? The 'where' part should be data driven; clearly for every gallon we don't drill here, it's another gallon drilled in someone else's backyard (and are they more or less environmentally friendly than what we would be doing in the US?)

PS: Wind and some other alternative methods are more efficient than solar and do not require Manhattan-project breakthroughs - it's just another example of press and politicians getting stuck on a desired outcome... corn-based ethanol? Let's subsidize that, lest we import something more efficient like sugar (or bio, which is under development) based solutions which are cheaper to produce AND USE LESS ENERGY TO PRODUCE (isn't that kinda the point?)! Of course there is not a big sugar lobby in the US, and you have places like Iowa where I think the first primaries are held? (Though I'm sure that has nothing to do with it!)

End rant... sorry, just get tired of seeing some of the politician spins starting to actually gain footing in the consciousness of people without anyone actually vetting or challenging some of the absurdities...

Now back to our regularly scheduled program.

Anonymous said...

Tick Tock Tick Tock

AMD writes off another billion

What is it, two billion of the 4 billion written off now? So did AMD overpay for ATI?... but in the end still a smart move, as all AMD will have next year is graphics, and two years from now INTEL will have that too.

Tick tock tick tock. Hector, you've been fired.

Anonymous said...

Anonymous said...

"Great, another energy expert... how many solar power cars are there? How about solar powered planes? Ships? Trucks?"

Great, another comprehension-challenged poster. If you had read the 2nd half of the sentence - but it's too dilute and consequently expensive to gather & store. - you would know that I too don't think solar powered vehicles are feasible with today's technology. Who wants to drive around with a football-field-sized photovoltaic collector on their roof, between the hours of 10AM - 4PM on cloudless days? Considering a typical auto engine is roughly 100KW for reasonable acceleration, and that a collector generates maybe 200 watts/m^2, well we can all do the math. Clearly some substantial improvements in battery & collector technology are in order.
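"We can all do the math" indeed - a quick sketch, using only the 100 kW engine and 200 W/m^2 collector figures quoted above:

```python
# Back-of-envelope collector area for a solar-powered car,
# using the figures from the comment above.
engine_power_w = 100_000      # ~100 kW for reasonable acceleration
collector_w_per_m2 = 200      # optimistic midday photovoltaic output

area_m2 = engine_power_w / collector_w_per_m2
print(area_m2)  # 500.0 -- square meters of collector just to match the engine
```

500 m^2 is roughly a 22 m x 22 m panel, which no car is carrying on its roof.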

The point I was trying to make was that we don't have an energy problem - we have an energy density problem. Hydrocarbon fuels are quite concentrated, so you can obtain a lot of energy from a relatively small volume or mass. To achieve similar results with solar, nuclear, wind, etc. will require a lot of R&D. It's been 36 years since the first "oil" crisis for the US - where are these alternative energy sources?

Photovoltaics isn't the only solar technology of course - reflector arrays feeding heat pipes or boilers to drive generators, or high temperature electrolysis to convert steam into H2 in a hydrogen economy. The latter would require significant improvements in generation, storage, distribution & fuel cells.

I probably should have used the Apollo program as an example of a successful gov't project, in lieu of the Manhattan project, since it wasn't secret and got a good deal of public support at the time. However, both programs achieved spectacular results in a few short years. Both were results-oriented, none of this "let's have a committee study it" crap. When you have a large-scale, complex, technically difficult project, its success largely depends on the amount of resources you can throw at it. So your essentially legislative "incentivize the private sector" seems too small in scale, too short-sighted, and more likely to be subject to politics as big oil and other industries try to grab the lion's share of the action.

Just my 2 cents, but the clock is ticking and I personally would feel a lot better about my 2-yr-old son's future if we could tell Hugo Chavez and our "friends" in Saudi Arabia and Iran to go f*ck themselves.

/rant

SPARKS said...

“PS: Wind and some other alternative methods are more efficient than solar and do not require Manhattan project breakthroughs –“

Whoa, easy wrangler. The other alternative methods have absolutely nothing to do with Manhattan Project breakthroughs; electron storage does! Even if we had Bob Lazar ALIEN exotic power sources, it wouldn't change the fact that every oil burner in every house, and every existing car parked in front of every house worldwide, uses petrochemicals as fuel.

It’s all 100 years of cheap liquid fossil fuel infrastructure well established and dug in like an Alabama tick, worldwide. Those are the facts, and they are indisputable.

The world will resist a fundamental change in this infrastructure. Nothing compelling has surfaced to displace oil as an economically feasible alternative to heating, transportation and power generation.

However, you give me a battery that is extremely light, lasts five years, has an extremely high storage capacity, AND is inexpensive, and I'll make you a billionaire overnight. You would even turn Detroit around. It's not the electrical energy; it is the ability to store it cheaply, effectively, and compactly. This is gospel.

Cheap roof top solar cells wouldn’t hurt either, but plugging in your car, say a Tesla Roadster that didn’t cost 90 G’s, that had batteries that can store 3 times the Ah as existing technologies, in a package the size of a toaster, costing a couple of hundred bucks and that lasts for years, now that’s the ticket.

It's like one of you process/architectural geniuses trying to build ANY circuit without a cap. It ain't gonna happen. In fact, give me a Hi-V (>100 Volts) cap that can store, say, 20 Farads, the size of a regular car battery, and I'll make you another billion.

Current battery technology goes back to the Egyptians. All the solar power IN THE WORLD generated during the day is useless as tits on a bull at night. As usual, most people are looking at the donut, not the hole.

Here are some prices for the best solar batteries available today, and it ain't pretty. You can buy a whole fossil fuel engine for the price of ONE of these, and you're gonna need quite a few in your basement and in your car. Disgusting! Oh, yeah, get a big truck and some big boys to move 'em.

These are currently the best for solar apps.

http://gbindustrialbattery.com/Forklift_Battery_Sizes_and_Specifications_Zone15.html

I'll throw in a FAT inverter, for shits and giggles. It's a beaut' and it ain't cheap.

http://www.affordable-solar.com/Xantrex%20XW6048.120.240.60.Inverter.Charger.htm

SPARKS

SPARKS said...

Heh, A nony moose beat me to it.

SPARKS

SPARKS said...

A nony Moose - Nice presentation. Got a little engineering background in there, bubba?

Not too shabby.

SPARKS

Anonymous said...

Brent Rehmel is weird. He's lonely, kinky, and is "working on several science fiction and fantasy novels and would like to be published someday."

I know it's him because on his blog he links to his ebay page, where he likes sailing. That yearbook page also likes sailing. weiird. Dude is weirrd. It should be a crime to further prove him wrong because he's clearly delusional.

Anonymous said...

Another "energy efficient" launch on the horizon? (AMD 45nm Deneb)

http://www.tcmagazine.com/comments.php?id=20790&catid=2

Yeah, it's only an eng sample, but then again folks were saying the same thing about 65nm K10's early on. (Well, except Charlie at the Inq who was dancing in the aisles)

The Vcore seems quite high (1.224V) for the stock speed of 2.3GHz, though it seems they were able to get it to 2.8GHz without more voltage. However, to get the OC over 3.0 Ghz, they were jacking the Vcore to 1.4V (and up).

Perhaps a new stepping will help, else expect 45nm to have similar clocks and thermals to the current 65nm K10's early on. They may get another bin (2.8GHz)?

Anonymous said...

SPARKS said...

"A nony Moose- Nice presentation. Got a little engineering backround in there, a bubba?"

Thanks. Yes, I'm an electrical engineer, but currently employed in the IP (intellectual property) field. Although I specialized in VLSI design & semiconductor fab, I did a lot of research into power generation for a senior project. I still keep up with it informally. Scientific American has the occasional overview issue as well, such as one several years ago on the "hydrogen economy". The entire issue was devoted to ongoing research in generation, storage, transmission & fuel cells.

For example, to achieve about the same energy density as gasoline, you'd need to compress H2 to around 30KPSI. An aluminum scuba tank stores 80 cubic feet of air compressed to 3KPSI and is quite explosive - equivalent to several sticks of dynamite - if the tank ruptures. Imagine driving around with heavy tanks of flammable gas compressed 10 times as much. The way some kamikaze bozos drive around here in Washington DC, that's just asking for trouble :).

There was research on using carbon nanotube material to capture & hold H2 at much lower pressures to achieve the same density that seemed promising at the time. Haven't heard much about it since. But nanotech seems worthwhile exploring - look at the recent articles on using silicon nanoparticles in Lithium batteries to greatly increase the energy density.

Most interesting (to me) future technology is power satellites in geosynchronous orbit, using some tens of square kilometers of mylar reflectors to concentrate sunlight on either photovoltaic arrays or a heat engine (maybe even a boiler driving a steam turbine). Each powersat could beam several gigawatts of microwave energy back to a 'rectenna' farm, presumably located in an unpopulated area near major metropolitan areas (i.e., adjacent to airport property, fenced off of course). The microwave density would be too low to inflict short-term damage to people or animals.

Unfortunately with today's technology it would be far too expensive to launch into orbit. So either Moon-based manufacturing or a 'space elevator' from Earth would be needed to get the cost down to where it would be competitive.

PS - with beam steering on the powersat, it might be feasible to use it as a weapon - we could cook Iran if it got out of line :)

And now back to our CPU wars (yawn)...

hyc said...

Sparks: you linked to a page of lead-acid batteries. They're way too heavy to ever be viable for general automotive use.

The entire T-zero weighed less than the battery pack of the electric RAV4 and the T-zero had enough energy to drive 300 miles on a charge. We don't need triple the Ah of today's tech, we just need the production volume to go up so that economies of scale can kick in.

The T-zero used 6831 off-the-shelf LiIon cells (18650s, the same as used in laptop battery packs) in a 69x99 array. The Tesla is using the same electrical/electronics as the T-zero; I think the Tesla is a little heavier, so its driving range is a bit shorter than 300 miles.

In 2003 (when it was driven from LA to Las Vegas on a single charge) the T-zero's battery pack would have used cells with 1350mAH each. Today those cells are roughly the same price, but now 2200mAH each. More than 50% better capacity now, so at least 450 miles on a single charge.

So you're talking about a pack that would be rated for 370V @ 151AH today, with a weight of about 700lbs, with more than 6 times the energy density of those forklift batteries. It's already practical today; it was already practical 5 years ago. Detroit wasn't interested then and still isn't interested now.
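Those pack figures hold up arithmetically - a quick sanity check, where the 3.7 V nominal cell voltage is an assumed typical 18650 figure, not from the comment:

```python
# Sanity-check of the quoted T-zero/Tesla pack numbers.
series_cells = 99      # cells in series per string
parallel_strings = 69  # strings in parallel (69 x 99 = 6831 cells)
cell_v = 3.7           # assumed nominal LiIon 18650 voltage
cell_ah = 2.2          # today's 2200 mAh cells

total_cells = series_cells * parallel_strings
pack_v = series_cells * cell_v        # nominal pack voltage
pack_ah = parallel_strings * cell_ah  # pack capacity
pack_kwh = pack_v * pack_ah / 1000    # stored energy

# 6831 366.3 151.8 55.6 -- matching the quoted 370V @ 151AH
print(total_cells, round(pack_v, 1), round(pack_ah, 1), round(pack_kwh, 1))
```

At the quoted ~700 lbs that is roughly 175 Wh/kg, versus the ~30-40 Wh/kg typical of flooded lead-acid, which is in the same ballpark as the "6 times the energy density" claim.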

Right now it would still cost around $20K for that battery pack. With the whole Tesla at $90K, I guess it's not too ridiculous. If a major car manufacturer stepped in, with high volumes, they could bring the costs down to earth.

SPARKS said...

“They're way too heavy to ever be viable for general automotive use.”


I rest my case.
People are looking at the hole, and not the donut.

At the risk of being redundant,-----

“Nothing compelling has surfaced to displace oil as an economically feasible alternative to heating, transportation and power generation.”

And-----

“However, you give me a battery that is extremely light, that will last five years, with an extremely high storage capacity, AND inexpensive”

The thrust of my comment wasn't singularly aimed at the auto market. In fact, the auto market is a worst-case scenario for a battery. (Telco has been using lead acid for decades. READ: HUGE AND HEAVY.) Actually, what I didn't say and what I thought was implied (I was obviously wrong when I made the comparison) is that today's PRACTICAL application of battery technology is either too heavy or too expensive to even compete with fossil fuels on a cost, weight, energy basis, no matter how many Energizer bunnies or Lion packs you cram in a car.

Aside from DANGEROUS Lion packs blowing up laptops, the technology is so bad they are seriously thinking about replacing them with fuel cells. Piddle puddle in your lap?

Hey, you're an esoteric-thinking guy, so let's go this way. What we need today is a Grand Unified Battery Theory! Like Einstein's shaving cream analogy, we should have a long-life, light, safe, trouble-free, inexpensive battery that will work well in full-size autos AND keep my beer cold for a week without sunshine.

(Yeah, SPARKS Grand Unified Battery Theory, above being the 4 forces. GURU- Send the Nobel Prize this way!)

Further, I’ve got a lot of beer, and I have a heavy foot. (I am NOT advocating drinking and driving, however.)

SPARKS

SPARKS said...

A nonny Moose-

That's kilo-PSI! Not under my kids' rear ends they don't! Can you imagine the cowboys on the L.I.E. (I-495) doing 70 and 80 MPH with hydrogen bombs (sorry, I couldn't resist) in the trunk, slamming into each other?!? To hell with it, I say, let's give 'em hypergolic fuels!

I read about the satellite microwave/grid power generation years ago. Nice idea. They were thinking about subsidizing farmers to use their land over a few states in the Midwest. I think they were afraid of giving a new meaning to microwave popcorn.

Clearly, your background was evident because the energy/density math was spot on.

Battery technology is a nightmare.

Lion = voltage- and heat-temperamental, with explosive results. (Sir, I'm afraid we can't allow that laptop on this flight.) How they got those little bastards from a motherboard to electric cars is beyond me.

Nicad= There goes another $5000 model Jet. Thanks for the ‘memories’. (Cheese, it was 5.2 Volts when I took off!)

Lead acid = (Hun, why are your arms all burned and what's wrong with your back? Oh, I was replacing a few dozen batteries in a Liebert UPS today.)

All = series, parallel. (Hmmm, where is the weakest/dead cell? Eeny, meeny, miny, moe.) Death by overcharge, death by undercharge, seven years tops, at best. The key word is death.

Frankly, batteries and I go back a very long time, all from a very practical perspective.

Sorry, fellas, enough rant. I'd rather talk about AMD's Luther Forest plans while N.Y. State is $4+ billion in debt and AMD is writing off another BILLION!

SPARKS

Khorgano said...

http://www.sec.gov/Archives/edgar/data/2488/000119312508149499/d8k.htm

“Also, during the fiscal quarter ended June 28, 2008, the Company expects to recognize a gain in connection with sales of certain 200mm wafer fabrication tools which the Company expects will have a materially favorable impact on its gross margin for the second quarter of 2008. The Company’s estimate is that the gross margin impact will be approximately $190 million”

They sure are getting desperate out in Sunnyvale. In order to pump up their margins, they are applying the sale of the 200mm equipment against cost of operations on this quarterly report. Don't be surprised with a 50%+ Margins report.

Anonymous said...

They sure are getting desperate out in Sunnyvale. In order to pump up their margins, they are applying the sale of the 200mm equipment against cost of operations on this quarterly report. Don't be surprised with a 50%+ Margins report.

Most knowledgeable analysts will understand and see through this. They will also pay attention to margin forecasts going forward as next quarter(s) will not have this one time benefit. The report and reaction is going to come down to outlook - when is AMD projecting to get back in the black and how is the cash flow (i.e will they need to raise more money and potentially dilute stock further)

I still think, save an extremely dire outlook or a clear signal that they will need to raise more capital, most of the potential bad news is already priced in.

BTW - folks above should check out 'Who Killed the Electric Car?' (documentary). I thought it was a pretty fair presentation (as opposed to some opinion pieces which masquerade as documentaries) - and it concludes there was plenty of blame to go around (technology, car companies, various levels of government, etc...)

Most of these efforts, in my view, are a matter of will and determination - if we wanted to do it or truly were forced into doing it, it would get done. The problem is we live in a reactionary world where the squeaky wheel gets the grease (e.g. dropping money out of the sky in the form of $600 rebates for a short term economic stimulation, bailing out homeowners and speculators who put nothing down and couldn't afford to buy houses in the first place, etc...), so it will likely take much more dire circumstances to get real progress in the energy area.

Anonymous said...

An interesting interview... it is amazing how many non-answers an exec can give and the interviewer either not care or just be oblivious.

http://www.techtree.com/India/News/EXCLUSIVE_--_An_Interview_with_AMD/551-90984-579.html

This was an interesting bit:
Lastly, about Puma; what is happening on that end? It was supposed to be out by now.

R: Laptop technology is not like the discrete stuff; [for the discrete parts] when you announce, you can go out and buy soon. With the notebook stuff there's always a lag of six months. So we are done with our work on Puma. Now the laptops would be going through their stress testing and whatnot. By this Christmas, we expect a wide range of Puma laptops.


Paper launch anyone? I have now seen a few articles/interviews on the lack of Puma systems. Despite the 'up to 100 design wins' AMD was claiming many, many months ago... where are the reviews? where are the benchmarks?(other than a random data point on gaming here or there, with things like BATTERY LIFE missing!)

Also, he confirmed first-gen Fusion will be an entry-level product and not targeted for gamers/enthusiasts (I think most of us rational folks expected this).

And here is the great double-speak:
TT: How about the issue of product availability and competitive pricing for the Indian market?

R: The first thing I did after I landed in India last week, was that I called up one of my engineers at the AMD lab in Hyderabad and asked him, 'hey, where do you buy your cards from? Can you call them up and ask for a 4850?' You know, just to check how available they are in India. Because we're shipping hundreds and thousands of these across the world and I'd like to make sure that we have them available at decent prices here in India, and not disadvantage people here.


If you read the response, what was the answer... well I called folks... and... and... ??? He never actually did say whether pricing and availability was an issue or not!

Anonymous said...

The numbers will soon speak... Intel reports after market close this Tues, AMD this Thurs.

Anyone for predictions?

1) I think Intel will run up a bit Mon and Tues and either flatten out later this week or slightly decline.

2) Despite what will probably be a bad quarter I see AMD rising a bit - perhaps Wed/Thurs (after Intel's report and prior to AMD's report) or Friday (after their report).

3) ASP's for both companies will be relatively flat (AMD may be up a bit) with both companies giving good outlooks for Q3/Q4 ASP's.

4) We will still have no clue what Asset Smart/Light is - but we'll be re-assured, yet again, that all will soon be revealed. (Really going out on a limb on this one, eh?!?!?)

Tonus said...

Ed at Overclockers ripped into AMD for its accounting practices this weekend. He is predicting that they will use some cheap tactics to hit break-even on operational costs in Q2. Should be interesting to see what they do and if there is any fallout at all.

SPARKS said...

NVDA is giving SLI up to Bloomfield. They must be really hurting. No mention of an agreement with INTC concerning QPI, however.

As far as I'm concerned, it's too little, too late. So this is the can of "whoop ass?"

Oh, yeah, the news was so big on Wall Street NVDA shed another 4.5%. Did I say too little, too late? Jin Ching Hung is on the balls of his ass, not so arrogant now, heh?

SPARKS

http://news.moneycentral.msn.com/ticker/article.aspx?Feed=PR&Date=20080714&ID=8888319&Symbol=NVDA

SPARKS said...

OUCH!

http://www.overclockers.com/tips01361/

SPARKS

hyc said...

Paper launch anyone? I have now seen a few articles/interviews on the lack of Puma systems. Despite the 'up to 100 design wins' AMD was claiming many, many months ago... where are the reviews? where are the benchmarks?(other than a random data point on gaming here or there, with things like BATTERY LIFE missing!)

It's a paper launch when there are no products for sale. And yet, HP has been selling Puma systems since last month. I guess there aren't a lot of benchmarks, true, but these aren't the type of systems that people find interesting to benchmark. I.e., they're not performance/gaming oriented.

In the meantime

HP Tablet user review
http://www.tabletpcreview.com/default.asp?newsID=1213

Toshiba A305 review
http://www.notebookreview.com/default.asp?newsID=4440

HP dv5z review
http://www.notebookreview.com/default.asp?newsID=4486

SPARKS said...

Hmmm, this is nice! Oh Giant, are you paying attention? Triple channel DDR3! Hoo Ya!

SPARKS

http://www.hothardware.com/News/Intel_Nehalem_Processor_and_SSD_Sneak_Peek/

Anonymous said...

It's a paper launch when there are no products for sale. And yet, HP has been selling Puma systems since last month.

You are correct, sir. Paper launch was probably a mis-characterization; how about "launch light" or "launch smart"? (I'm joking.) This thing was launched/introduced/PR'd in April, no? And as early as Jan (if not earlier) AMD was claiming up to 100 design wins? You have AMD's own PR people saying CHRISTMAS for widespread parts and talking about needing 6 months to allow for burn-in (on the notebook side, not on the chip itself)... shouldn't these minor details be considered when the chip is introduced/launched?

Intel has clearly slipped on the Montevina launch, and that is an execution issue... but you won't hear Intel folks saying, well, we are launching today, but expect an additional 6 months for widespread parts to allow our partners to do burn-in.

Anonymous said...

hyc... thanks for the links, I had seen those, and those (in my view) pretty much prove my point.

No old-vs-new comparison, no specific battery life test (there are several now relatively common benchmarks for battery life), just a bunch of "well, battery life was good" or "slightly disappointing" or "probably just an issue as it is an ES sample".

I can't even find much info on AMD's own page... you can argue it may not be interesting for review sites, but is AMD not interested in benchmarking their own product? There's a lot of technical jargon and details on why things like battery life should be good, but no benchmarks which actually demonstrate that it IS good.

This smacks a lot of the whole native vs MCM PR... sure native is theoretically better, but when it came to actual performance, for the most part the arguments turned out to be, how can I put this nicely, academic ones.

InTheKnow said...

HYC said...
I guess there aren't a lot of benchmarks, true, but these aren't the type of systems that people find interesting to benchmark. I.e., they're not performance/gaming oriented.

I would think that AMD would want to see these benchmarked. This is the "platform" that is supposed to be competitive with the Intel systems. If AMD's best effort is only going to secure the low end then their problems are bigger than I suspected.

Anonymous said...

$284 Nehalem launching this year: http://www.reghardware.co.uk/2008/07/14/intel_prices_up_nehalem/

AMD is fucked.

Anonymous said...

"AMD is f%#^!"

You're missing the bigger picture... Intel already owns the $200+ market - so this price only means something if it means the Penryn quad price is pushed down to a point where it puts extreme pressure on the AMD quad (and tri) Phenoms. It will be interesting if Intel overlaps the Nehalem/Penryn quad pricing or if Intel kills off some of the higher end Penryn's to keep pressure off Nehalem.

What will potentially hurt AMD the most is when Nehalem 4P+ is released - this is really AMD's last claim to fame from a performance point of view; however these Nehalems are not due out until H2'09 (and while a high margin segment, these are relatively low volume).

In any event, the financial impact of Nehalem will not likely be seen until the 2nd half of 2009; there will be small volumes this year (with focus on server, no doubt) and things will start ramping H1'09.

hyc said...

I would think that AMD would want to see these benchmarked. This is the "platform" that is supposed to be competitive with the Intel systems. If AMD's best effort is only going to secure the low end then their problems are bigger than I suspected.

Good point. On the graphics front, it's pretty clear that the new chipset kicks ass, every review has said this. Not only are the integrated graphics strong, but they can still be combined in Crossfire with a discrete GPU for the serious gamers.

Personally I just want fast compile times and long battery life, so the IGP is good enough for me. I've just requested a quote on one of these:

http://h10010.www1.hp.com/wwpc/us/en/sm/WF25a/321957-321957-64295-321838-89315-3687779.html

Will probably replace the HDD with an SSD...

Anonymous said...

Personally I just want fast compile times and long battery life, so the IGP is good enough for me. I've just requested a quote on one of these:

So how does the computer you linked compare to competitive products in compile times and long battery life?

No one is disagreeing that the chipset is great from a graphics perspective, but is that the focal point for a notebook? Is that the only thing that should be benchmarked? I realize AMD is trying to market their advantage, but doesn't it ring a little hollow that AMD doesn't benchmark their own "next gen" notebook? They talked up Puma quite a bit (including to the financial analysts) and now they only seem to be talking about graphics as the key point on a notebook.

Seems a bit transparent and disingenuous to me. Much like their statements that the reason they were releasing 2.2 and 2.3GHz quads was because customers were demanding energy-efficient products (anyone who knew anything knew they had problems getting the clocks up).

Anonymous said...

"Nvidia reportedly scores QPI license while Intel gets SLI for X58 motherboards"

http://www.digitimes.com/news/a20080715PD205.html

Looks like Jensun blinked... early on, rumor has it, Nvidia will only be producing mainstream and low end Nehalem chipsets (on the 1160 socket) and will not be doing the enthusiast platform (socket 1366).

Nvidia really had no choice here - AMD is a small part of the market and they have some serious competition from crossfire in the 'I only buy AMD products enthusiast' space.

Intel avoids the 'anti-competitive' spin, they protect their high end chipset biz and they get SLI for the money is no object crowd.

Anonymous said...

Looks like Sharikook has finally updated his blog, complete with the ever-so-informative and unbiased, scientifically accurate Newegg reviews on the G280 being DOA :).

I also noted that he edited out all comments on his later blogs that were anti-AMD or pro-Intel. Whatta BOZO!

Roborat - time to update your blog - soon this page will have over 400 comments :).

Anonymous said...

From http://www.tgdaily.com/content/view/38412/122/:

Strong mobile processor demand improves Intel profit by 25%
Business and Law
By Wolfgang Gruener
Tuesday, July 15, 2008 15:41
Intel delivered a solid second quarter thanks to record mobile processor and chipset sales. Revenue was up 9% year over year from $8.7 billion to $9.5 billion, while the company’s net income jumped by 25% from $1.3 to $1.6 billion. The result came in at the high end of the Intel’s guidance.

"Intel had another strong quarter with revenue at the high end of expectations and earnings up substantially year over year," said Paul Otellini, Intel president and CEO. "As we enter the second half, demand remains strong for our microprocessor and chipset products in all segments and all parts of the globe."

Overall, Intel said that total microprocessor units were up sequentially and higher than seasonal. We don’t want to jump the gun here, but if Intel is any indication and our sources that AMD had successful launches with mobile Puma platform and its triple-core Phenom processors are correct, then AMD should have had a good quarter as well.
