What's In A Price?

In basic economic terms, 'price' is the monetary equivalent of 'value' at a particular point in time. But 'value' is such an abstract concept that it varies from person to person, from time to time, and depends very much on circumstance. One such circumstance is of course availability: we know that the scarcer an object is, the more valuable it becomes. The second is the benefit it provides. If Phenom is the best processor by any meaningful measure, naturally it should also be the most valuable processor in the market.

Then why is Phenom rumoured to launch at a price only equivalent to Intel's mainstream quad-core processor, the Q6600, at $288? For a consumer, the bottom line is value for money, and usually that means the most performance per dollar spent. Processors are typically priced based on how they will perform, and both AMD and Intel follow this accordingly. So at the end of the day, it doesn't matter what the benchmarks say or what some crazy blogger thinks. If AMD has priced its next-generation processor right around the bottom of Intel's old-generation products, there isn't much anyone can say about how badly K10 turned out.

How much is Phenom valued? (as per Newegg 5th Nov 2007):
Intel Core2 Extreme QX6850 3.0GHz at $1029.99
Intel Core2 Extreme QX6800 2.93GHz at $1015.00
Intel Core2 Extreme QX6700 2.66GHz at $949.99
Intel Core2 Quad Q6700 2.66GHz at $534.99
Intel Core2 Duo E6700 2.66GHz at $319.99
AMD Athlon64 FX-74 at 3.0GHz at $299.99
<-- Phenom launches here -->
Intel Core2 Quad Q6600 2.4GHz at $284.99
Intel Core2 Duo E6850 at 3.0GHz at $279.99

Phenom performance sits where it is priced. End of discussion? Not really, because we can still discuss how launch prices are typically overvalued due to anticipated initial demand. But that's just beating a dead horse.
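The "performance per dollar" yardstick the post describes can be sketched in a few lines. The prices below are from the Newegg list; the relative-performance scores are hypothetical placeholders chosen only for illustration, not real benchmark results.

```python
# Performance-per-dollar sketch. Prices come from the Newegg list above;
# the "perf" scores are HYPOTHETICAL placeholders, not real benchmarks.
cpus = [
    ("Core2 Extreme QX6850 3.0GHz", 1029.99, 1.00),
    ("Core2 Quad Q6600 2.4GHz",      284.99, 0.80),
    ("Phenom X4 2.4GHz (rumoured)",  288.00, 0.80),
]

def perf_per_dollar(price, perf):
    """Scaled so bigger is better: hypothetical perf units per $1000 spent."""
    return perf / price * 1000

for name, price, perf in cpus:
    print(f"{name:30s} ${price:8.2f}  perf/$1000: {perf_per_dollar(price, perf):.2f}")
```

On these assumed numbers the flagship delivers the worst value per dollar, which is exactly why a flagship launch normally carries a premium rather than landing at the bottom of the stack.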


Anonymous said...

It doesn't get any better than this.

AMDZone is getting into the act, this time it's ATI's turn.


ATI's Embarrassment
Written by Matthew Cameron
Monday, 05 November 2007 13:59

On Thursday November, 1st, 2007, I had the opportunity to go to the AMD/Microsoft PowerTogether Tour in Itasca, IL. To be honest, I learned very little about AMD's upcoming product lines - most, if not all of what was reiterated at this event, I was aware of. With that said, I still had a very enjoyable time.

I had planned on posting this write-up sooner; however, technical difficulties have not allowed that to happen. AMDZone is currently working on getting everything back to normal and on track. We are hoping to make several improvements to our site - improvements that will make functionality smoother and the feel of the site better. We are striving to make the layout fully operational and responsive. As can be expected, it will take a while for the site to be fine tuned.

Below is just an excerpt of what is to come. The rest will be posted shortly.

AMD has had their share of embarrassment in the past. Remember the time that the Inquirer reported that the Ferrari racing team was using an Intel laptop to take measurements of their cars? Ironic since Advanced Micro Devices is their technology sponsor.

On Thursday, I witnessed another such embarrassment. During the Q&A session near the end of the event, a man in the row in front of me (I was in the second row) asked if ATI would be releasing better drivers now that they have merged with AMD. An ATI spinner was quick to say that they are working thoroughly on drivers and pointed out that AMD/ATI currently have better drivers than their competition. He reiterated that AMD had the upper hand with the release of Vista as well.

About an hour before the Q&A, I was wandering around the premises looking at the various booths. One in particular caught my attention. Four relatively nice LCD monitors supplied by BenQ were attached to a single computer. I noticed that all four screens went black for a split second - all at once. I found this odd, so I approached the system. On the main monitor, I noticed a pop-up box stating that the display driver had stopped responding.

SPARKS said...

Does anyone remember, at ANY time, going all the way back to the 386 days, where a newly released desktop chip/product was EVER priced SO LOW at launch???? Forget adjusted 90's dollars.

I can't. Please, enlighten me.

A $288 launch is absolutely pathetic.

This is INDEED an historic moment.





Anonymous said...

Boy, I'd like to be a fly on the wall at the AMD executive conference room and the board of director meetings.

All that needs to happen now is for one of the AMD debt holders to cash in their chips and then a domino effect would ensue.

If I were on the board, I would not extend Hector's employment contract.

Axel said...


A $288 launch is absolutely pathetic.

And even that is artificially inflated due to initial demand for a new flagship product, as Roborat indicated. The fact is that Phenom X4 2.4 GHz will probably be slower than Q6600, based on all indications so far. The latter is currently $266, so the $288 Phenom would have to be priced somewhere around $220 to have fair market value. In addition, the A64 X2 6400+ will probably beat the X4 2.4 GHz in many games, so Phenom will be competing with AMD's own older generation CPUs.

So this means that if the volume is there, the prices will have to waterfall just like the K8 prices did in order to move the volume. If the volume isn't there, AMD can keep the prices high but revenue will be immaterial.
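Axel's back-of-envelope can be made explicit. Only the two prices come from his comment; the relative-performance factor is an assumption, picked at 0.83 simply so the result lands near his ~$220 figure.

```python
# Backing a "fair" Phenom price out of the Q6600 street price.
# rel_perf is an ASSUMPTION (Phenom "probably slower" than Q6600);
# 0.83 is chosen only to illustrate the ~$220 figure in the comment.
q6600_price   = 266.00   # current Q6600 street price cited above
phenom_launch = 288.00   # rumoured Phenom launch price
rel_perf      = 0.83     # hypothetical Phenom-vs-Q6600 performance ratio

fair_price = q6600_price * rel_perf
premium    = phenom_launch - fair_price
print(f"fair price ~${fair_price:.0f}, launch premium ~${premium:.0f}")
```

Even with the demand-driven launch premium stripped out, the waterfall argument follows: if fair value sits below the next Intel price cut, the whole stack has to slide with it.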

Anonymous said...

Speaking of AMD's board of directors.

AMD appoints Dirk Meyer to Board of Directors


Hector is on the way out

Anonymous said...

This is embarrassing

Phenom fragged on 3DMark06

AMD Fanbois?

Ho Ho said...

This looked quite interesting:

"The company has been in business for 38 years. Through almost 4 decades of existence, their lifetime cumulative net profit is ... -$1.3 Billion dollars as of last quarter. If you take out the brutal last year when they lost over $2B, their lifetime cumulative profits will be $863 million. That's about a bad quarter's worth of profits for Intel. To put it in another perspective, they lost over two times more money over the last 4 quarters than they made over the preceding 145+. During their "best ever" stretch of four consecutive quarters (which incidentally more than doubled their lifetime net profits up to that point), AMD made $505 million. Over the last 4 quarters, Nvidia made $576 million."
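The quoted arithmetic is internally consistent and can be checked in two lines. All figures below are taken from the quote itself, in millions of dollars.

```python
# Sanity check on the quoted figures (all in $ millions, from the quote).
lifetime_cumulative = -1300   # "-$1.3 Billion dollars as of last quarter"
excluding_last_year =   863   # "their lifetime cumulative profits will be $863 million"

# The trailing-year loss implied by those two numbers:
implied_last_year_loss = excluding_last_year - lifetime_cumulative
print(f"implied loss over the last 4 quarters: ${implied_last_year_loss}M")
```

That works out to roughly $2.16B, which squares with the quote's "lost over $2B" for the trailing year.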

Tonus said...

Such tight pricing across the three speed grades is very surprising. Is there any possible positive explanation for it?

As a new product (and highly anticipated, I would think, being the first native quad-core product specifically targeted at the desktop) I would've expected the top speed-grade to carry a reasonable premium, even if it was just 100MHz faster.

It just feels as if AMD has resigned itself to its fate. If even for a short time, I'd have expected them to try and premium-price the 2.4GHz model. There will always be people willing to spend the money for it.

Or even cherry pick some 2.6GHz and offer them for a high premium... anything but such a benign entry into the market.

SPARKS said...

“All that needs to happen now is for one of the AMD debt holders to cash in their chips and then a domino effect would ensue.”

AH HA! Brilliant!! Anonymous, You SAID IT! THANKYOU!

ALL the fuss, all the spin, and all the PowerPoint presentations were designed over the past YEAR to prevent this catastrophic, China Syndrome investor/Wall Street meltdown. This has been Wrector Ruinz’s absolute worst nightmare. I’m sure he has had many sleepless nights. Did anyone say Lunesta!

Wrecktor and his Minions, even in the face of mounting and compounding debt, have had ONE and ONLY one alternative: MAINTAIN MARKET SHARE. One savvy poster called it the “Scorched Earth Policy” months ago.

There ain’t enough lipstick on the PLANET to make this PIG look good. The time to get out was at the first convertible note.

This will continue 'till the END. And the END is coming. Think $5,500,000,000 in debt plus mounting quarterly losses.


PS. 3 GHz Pheromone Super Pi @ 1M= 28 Sec.

SPARKS O.C’d Q6600 (GO stepping), 2.6 GHz Super Pi @ 1M=19 sec! 49 degree C!!!

Anonymous said...

“All that needs to happen now is for one of the AMD debt holders to cash in their chips and then a domino effect would ensue.”

Please explain how this would happen in the real world of business. Most of the debt was raised through convertible bonds this year - I would like to understand how someone can "cash in their chips" holding one of these bonds.

While your 2nd-grade understanding of debt is amusing to read, please don't spread misinformation.

If you disagree, please provide a SPECIFIC example about how a current AMD debtholder could "cash in their chips"

Anonymous said...

Again, with prices like these why would AMD proceed with the desktop launch? This is not the server world, where customers will deal with lower performance for a lower price. Before folks say they have no choice, ask yourself whether they will earn more money on a quad core which takes up double the Si area (and likely has poor yields) or on a dual core K8 (or better yet, a dual core K10, if that works). I have yet to run the #'s, but I would think cutting prices even further on a smaller dual core die (which yields better) might still be the better economic thing to do. Sure, they would have some PR egg on their face, but at some point someone has to suck it up and say things aren't working out the way we planned and we need to run this company as a business, not as an ego trip.

They should have focused on dual-core desktop K10's first, ceded the quad desktop market (which is still very, very small) and waited until they could get the quads up to respectable speeds. They will now be "milking" the early adopters at a "phenom"enally low price. Those same folks will not turn around in 6 months and buy something 2 or 3 speed grades up at >$500; they will either overclock their current chip or wait for AMD's next big thing.

Now if they had gone the dual-core K10 route, those same early adopters would have purchased the K10, and you would possibly have an upgrade market among those same folks once the higher-clocked quads were working (and who knows, SW may be further along to take advantage of quads by then).

AMD just shot themselves in the foot, reloaded and shot themselves in the other foot! They have just wasted the early adopter demand on low priced chips.

The tight pricing seems to indicate an all-or-nothing yield. Either the chips work in the 2.2-2.4 range or they don't. The falloff must be severe outside this range. I will also speculate that these chips are in fact virtually the same performance-wise coming off the wafer. AMD can't launch with just one or two speed bins - so you make 3. You then tweak the Vcore up or down or play with the multiplier to get the bins. If this is true, it is a bad situation, as it is more than just a "tweak" to fix things.
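The die-size economics in this comment can be sketched with the classic Poisson die-yield model. The die areas and defect density below are illustrative guesses, not AMD's actual numbers.

```python
import math

def poisson_yield(area_cm2, d0):
    """Classic Poisson die-yield model: Y = exp(-A * D0)."""
    return math.exp(-area_cm2 * d0)

# ILLUSTRATIVE numbers only - not AMD's real die sizes or defect density.
dual_area, quad_area = 1.4, 2.8   # cm^2: the quad die at double the area
d0 = 0.5                          # defects per cm^2

y_dual = poisson_yield(dual_area, d0)   # doubling the area...
y_quad = poisson_yield(quad_area, d0)   # ...squares the yield hit

# Good dice per cm^2 of wafer: on these numbers the quad must command
# well over 2x the dual's price just to match revenue per wafer.
good_dual = y_dual / dual_area
good_quad = y_quad / quad_area
print(f"dual: {good_dual:.3f}/cm^2, quad: {good_quad:.3f}/cm^2")
```

That is the argument in the comment made concrete: at any plausible defect density, a double-area die is disproportionately expensive per good part, so selling it near dual-core prices is painful.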

Anonymous said...

Wow - there is just so much misinformation in Scientia's latest blog it is getting ridiculous (not coincidentally, it is an examination of Intel process technology).

Where to start:
"One example of OPC is that you can't actually have square corners on a die mask so this is corrected by rounding the corners to a minimum radius"

This is categorically incorrect - OPC generally uses serifs or other NON-ROUND features. To visualize it, think of using a shape like Mickey Mouse to print a small square - instead of circles, though, BOTH the face and the ears would be squares (with the square ears jutting out at the corners of the larger square). Based on this very simplistic error, I have to assume Scientia has no idea how OPC works...

"When we see that Intel's gate length and cache memory cell size are both smaller than AMD's and we see the smooth transition to C2D and now Penryn we would be inclined to give credit to RDR"

This is hilarious - a 6T cache cell is THE SIMPLEST DESIGN ON A CHIP! Restrictive design rules (RDR's) are intended for non-repetitive, non-routine structures (read: LOGIC, anyone?). Also, gate length is not an RDR thing - I'm now starting to realize Scientia has no clue what RDR's are. Things like metal density, isolation vs nesting, dummification, length runs, amount of resistance budget, landed vs unlanded contacts (and specifically the amount a contact can be unlanded), etc... these are all examples of design rules. Scientia throws out two completely BOGUS examples and thinks he can draw conclusions from them - what a joke.

"We also know that Intel has more design engineers and more R&D money than AMD does for the CPU design itself."

He uses the number of people and money as a metric! If Intel is 2 years ahead, they are 2 years ahead... I don't care if they have 50X the engineers - clearly they are doing this AND ARE MORE PROFITABLE AND HAVE BETTER MARGINS THAN AMD! Perhaps we should normalize Intel's lead to the # of design and process engineers? How about we also throw a "platformance" metric in there too! So I understand we should believe Intel is not as far ahead simply because they have more people and spend more money - last I checked, it's not like AMD can spend more money or hire more people (heck, they aren't even doing the Si research anyway!)

"It is possible that differences between SOI and bulk silicon are factors as well."

hmmm... am I starting to see a crack in the "SOI is the BEST-EST technology in the world, AMD is so much more advanced!" line?

BTW - he completely butchered the Si information and quoted it completely out of context. For one thing, AMD is not using FDSOI (fully depleted) yet; they are using PDSOI (partially depleted).

"It then appears that it is possible for AMD and IBM to continue using SOI down to a scale smaller than 22nm."

WHO CARES! If Intel meets requirements without using an expensive substrate, aren't they the ones who are better off? NO ONE has yet shown that AMD's SOI-based process is better than Intel's bulk Si process - in fact, the only data publicly available (IEDM) shows that Intel's process for a given technology node has better parametrics. Scientia seems to be enamored with the idea that SOI may still be scalable - again, who cares? It's not like this is cheaper or better than bulk Si processes at this point. That's like asking whether Via's chip designs are extendable to 45nm (who cares if they are?)

"The fact that AMD was able to push 90nm to 3.2Ghz is also inconclusive."

He is right on this one - but given the 65nm process performance to date, it suggests that perhaps something in AMD's process flow is not extendable (my guess is an SOI or strain issue).

"The fact that AMD was able to get better speed out of 90nm than Intel was able to get out of 65nm "

This is categorically FALSE. He is comparing 2 different architectures. If I used his 3rd grade wisdom, I could look at any of the P4 65nm clockspeeds and say that Intel was better (I too would be wrong, using this poor example).

The comparison that needs to be made is on parametrics for a process. As AMD and Intel use different CPU designs, with different #'s of pipeline stages, looking at top bin clockspeeds is absurdly simpleminded! (But what else would you expect from Dementia.)

"Something substantially less than twice as fast per core would indicate a design problem." (K10)

Actually, something 2X faster would also represent a design flaw, as K10 has double the cores AND supposedly better IPC and performance clock for clock. This is yet another attempt by Scientia to sandbag - if K10 comes out 2X faster, he'll say great, look at the improvements. However, if you dig further, going from 1 socket to 2 sockets on K8 scales at something like 1.9X, so is K10 at 2X really a good design if it has all of these theoretical core improvements?

The blog is replete with errors - just about every third sentence has misinformation or flawed logic. The Si stuff is the most egregious (especially Scientia's depiction of OPC and RDR).

Anonymous said...

"We could set upper limits but there is no way to tell exactly how much and this does make a difference. For example, if the IBM/AMD process consortium are spending twice as much as Intel on process R&D then I would say that Intel is doing great."

Look at the balance sheets: Intel's R&D spend was ~3X higher than AMD's (I think it was something like ~$1.4B vs ~$0.37B).

Now factor in that Intel spends R&D on things AMD doesn't (flash, MOBO's, etc...) and that IBM's, UMC's, and Chartered's spending as part of the consortium should be factored in.

Also keep in mind that IBM buys all of the equipment and provides the fab space for the R&D work in Fishkill. Also keep in mind that equipment, Si (which perhaps AMD shares the cost of?) and the fab are the biggest portions of Si R&D costs. The # of people is a joke in terms of costs...

Looks to me like Scientia is just making excuses for why AMD is behind (now that he finally seems to accept that they are behind). I guess an extremely ignorant Intel fanboy could claim that AMD should be much further ahead, as they have 4 companies working together as opposed to Intel doing it on its own. That of course would be just as stupid as Dementia's people and spending arguments.

The lead is the lead - it's not like AMD is suddenly going to triple its R&D spending or hire 5X more people. (It's also not clear that would even matter.)

Anonymous said...

"One example of OPC is that you can't actually have square corners on a die mask so this is corrected by rounding the corners to a minimum radius."

Ha! It's actually the exact opposite - the printed features on the wafer are rounded (due to light effectively double exposing at corners - this is a bit simplistic, but you are essentially hitting the corner features on the wafer with light from 2 directions). The OPC features on the mask are SQUARE!

Methinks someone knows nothing about OPC and is trying to google an education on it.

Here's a link with an example of OPC (not the best link...):
(see figure 3)

You'll notice those features on the end (the larger squares) are there to minimize the amount of rounding. If the mask had a simple rectangle, the rounding of the features (pic next to it on the left) would have been worse. Again, this is due to my simplistic (and poor) description of the feature essentially getting illuminated from 2 directions during the exposure process (as opposed to the long sides, which only get 'hit' from one direction).

Here's another link (figures 1,2,5):

You'll notice all the rounded OPC features that Scientia refers to. Note you will have to be drunk off your ass and looking through your AMD rose-colored glasses to see those as round-looking features.

CONCLUSION: As Scientia's knowledge in an area approaches 0 (like Si technology), the more fancy terms and random metaphors he'll throw in to impress his readers and make it seem like he actually knows what he is talking about.

Did anyone else notice he spent time suggesting why RDR's might not be the cause of Intel's lead, yet failed to give even one example of an RDR? (Perhaps because he has no clue?) And then he threw in Intel's lead in SRAM cell size and gate length as an example of why it may not be RDR's that put Intel ahead - as if SRAM cell size/gate length were mutually exclusive with RDR's? (They're not.)

I would sign up for an account and post on his blog, but would he really listen? (That's a rhetorical question - the answer is rather clear.) Folks here should feel free to post the links I attached if they'd like! I would enjoy seeing him try to wriggle out of his completely ignorant OPC comments!

Anonymous said...


This is what I was looking for - see figure 1 for a very good example of OPC - notice the "cutouts" on the inside corners and the extensions (serifs) on the outside corners.

If you squint hard enough, these features may appear round to some!

SPARKS said...

This Date,

INTC 27.49, a new high since July 2004, up 9 and change.

From Nov. 2006, up $6.00, Year over Year.

Further, they have paid out dividends all along.

I reached my target of $27, 3Q, 2007. This is Intel’s version of a late release.

Oh, yeah, by the way, Anonymous who said to the other Anonymous,

”While your 2nd-grade understanding of debt is amusing to read, please don't spread misinformation.”

Easy there big fella, let me ask you something.

What would happen if these players decided THEY would DUMP AMD, hmmmm?

Capital Research & Management Company

Fidelity Management & Research
OppenheimerFunds, Inc.

AllianceBernstein L.P.

Barclays Global Investors, N.A.

Vanguard Group, Inc.

State Street Global Advisors (US)

Janus Capital Management LLC


SPARKS said...

"Things like metal density, isolation vs nesting, dummification, length runs , amount of resistance budget, landed vs unlanded contacts"

GURU is a frick'en Genius.


Anonymous said...

"What would happen if these players decided THEY would DUMP AMD, hmmmm?"

These are convertible notes - not STOCK SHARES! You can't just "dump" them - well, I guess you could, but considering the holders have already given AMD the money from these notes, why would you dump them?

I don't mean to be harsh - but don't confuse convertible notes with normal debts & loans and/or stock shares.

The note holders GAVE AMD the value of the notes ($2.2B in the last round). In exchange, they get interest on each of the notes for the next few years (maybe 4, I forget), and should the stock hit a certain level they can exchange the note for stock. If the stock doesn't hit that level, well, the note holder is SOL (other than continuing to receive the interest on the note from AMD, which I think is something like 5-6%). It's not like they can just ask for the money back.
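The mechanics described above can be sketched as a simple payoff function. The face value, term, and conversion price below are hypothetical examples, and the ~6% coupon is only the commenter's own estimate.

```python
# Why a note holder can't just "cash in": held to maturity, the payoff is
# the coupons plus either the face value or, if the stock is high enough,
# the value of the converted shares. ALL figures here are HYPOTHETICAL.
def note_payoff(face, coupon_rate, years, share_price, conv_price):
    """Total cash to the holder if the note is held to maturity."""
    coupons = face * coupon_rate * years
    shares = face / conv_price                 # shares received on conversion
    return coupons + max(face, shares * share_price)

# Stock stuck below the conversion price: collect coupons + face (~1240).
print(note_payoff(1000, 0.06, 4, share_price=20, conv_price=25))
# Stock well above it: conversion pays off instead (~1440).
print(note_payoff(1000, 0.06, 4, share_price=30, conv_price=25))
```

Either way there is no "cash in your chips" lever: the holder's downside case is simply collecting interest until maturity, which is the point being made above.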

Anonymous said...

"GURU is a frick'en Genius."

Not a frick'en genius - it's just that compared to Dementia, I appear to be one. But then again, when it comes to Si technology, my dog would also seem like a frick'en genius compared to Scientia!

I just get upset when people pose as experts (under the guise of a blog), don't provide any support to backup their ridiculous statements and then refuse to acknowledge a counter point of view.

I'll say it again - Scientia has concluded in his own mind that AMD is "close" or "equivalent" or "not too far behind" Intel on process technology, and he thus tries to make all data FIT that conclusion (rather than looking at the data first and then forming a conclusion). As a scientist, this is amusing to me, as it goes against everything a real scientist or engineer would do. You don't start with a pre-formed conclusion and then try to dig up data to support it while excluding data that disproves it.

I still find a surprising amount of entertainment in just watching him try to adapt concepts and topics he clearly doesn't understand (like OPC, SRAM cell size, RDR) into a support structure for his ridiculous assertions. It's almost as amusing as his 'followers' writing "great blog!", as they also have no clue what some of the things Scientia is mentioning actually mean.

The absolute kicker was "You'll need to know that OPC is Optical Proximity Correction and that DFM is Design For Manufacturability"

Apparently he can look up the acronyms on the web - but understanding the theory and facts behind those acronyms... well, that's a whole different story. He may know what the acronyms stand for, but he clearly demonstrated he doesn't know what they MEAN and how THEY ARE USED!

SPARKS said...

”I don't mean to be harsh - but don't confuse convertible notes with normal debts & loans and/or stock shares.”

Agreed, the use of the words (debt holders) was poorly chosen by other Anonymous.

I am by no means in the financial business; however, many, if not most, of these Mutual Fund holdings were purchased BEFORE (record date) the first of the two Senior Convertible Notes. Obviously, in addition to the Senior Debt, there are a considerable number of outstanding shares, of the 551M shares in total, that could conceivably be sold on the open market, agreed? (With SEC approval, of course.)

As you know, the Senior Notes must be paid first in the event of a catastrophic collapse, then the Preferred Holdings, lastly and least of all, the common stock; i.e. poor slobs like me.

Further, if AMD wasn’t worried about an investor sell-off, then why all the hype and spin? They are worried about something, no? Besides, YOU could buy as much AMD as you could afford tomorrow morning! (You seem like a nice guy - don’t do it.)

Kmart did the same years back under Chapter 11; the common stock evaporated, they then restructured and reorganized.

Here is a link for your review.



SPARKS said...

"But then again when it comes to Si technology, my dog would also seem like a frick'en genius compared to scientia!"



Roborat, Ph.D said...

"A is for Apple...DFM is Design For Manufacturability"

DFM is an old concept, and it's an insult to AMD to say that they don't practice it. DFM is too broad an idea and too vague to define, so it's ridiculous to make comparisons between two companies that are probably applying the same principles in different ways.

This is similar to making a statement that Intel is a safe place to work because they have a Safety First program in place.

Claiming that Intel is better than AMD because of DFM is an empty statement. Anyone making such claims needs to come out and point out exactly what DFM strategy is in place. Even an Intel employee would find it difficult to do so, because although there may be obvious ones in place (like joint development teams with high-volume manufacturing engineers involved in R&D, or technology/process target specs benchmarked from the last process node), these are only a tiny part of a bigger DFM program.

If I were to name only one thing that sets Intel apart from anyone, it is the strict discipline it has on meeting cost targets - which is really the bottom line. It doesn't matter how fast your processor is; if it cannot be made cheaper than the previous generation, it won't get past the discovery phase.

Anonymous said...

"As you know, the Senior Notes must be paid first in the event of a catastrophic collapse, then the Preferred Holdings, lastly and least of all, the common stock; i.e. poor slobs like me."

I think we are shifting off the main point - an anonymous poster indicated that the debt holders could just cash in the debt and potentially force bankruptcy - this is simply not true.

As for share price, quite honestly it is almost irrelevant in terms of AMD's solvency, unless they are intending to do a stock offering (which they won't, as no one in their right mind would go for it). If the price of the stock is $12 or $6, AMD will still have the same amount of cash on hand, debt, and revenue. As AMD does not offer a dividend, and I'm unaware of any buyback plans, the stock price will not come into play.

If the stock price dropped to $1, would it affect AMD's revenues? Debt? Profit (loss) margin? The only thing it might do is drive away employees - which could be either a good or a bad thing.

People investing in AMD are simply speculating - if they lose their money, frankly, they somewhat deserve it. As for mutual funds and the like, those funds have TINY amounts of any single stock in their basket - even in the extreme case of the AMD stock price going to $0, it should have minimal impact on the overall fund's return (1-2%). If a mutual fund had, say, 10% AMD stock in a single fund, well then, frankly, that fund shouldn't (and likely won't) be in business long, and people should do a little research.

People love to hear about all of these stocks that doubled or tripled in a 1-year period and want that themselves - folks should target a 10% return, and anything else is gravy. I mean, did you hear all those jokers saying they were glad the AMD stock price was dropping because after Barcelona launched they would be able to make even more money by buying low prior to it? Has anyone looked at the Intel or AMD stock chart and been able to correlate it to product launches? It's folks like this that make it easier for the pros to make money...

Anonymous said...

"If I were to name only one thing that sets Intel apart from anyone, it is the strict discipline it has on meeting cost targets - which is really the bottom line. It doesn't matter how fast your processor is; if it cannot be made cheaper than the previous generation, it won't get past the discovery phase."

Well, there's that, but the manufacturing philosophy at Intel is also different from IBM's or AMD's. Once something is ramped, Intel will rarely implement a major change unless it has a HUGE (>10%) cost or performance benefit. You will not see Intel do CTI, because frankly it is not worth the risk. You are better off waiting 2-3 quarters and having things stabilized on a near-final process rather than taking the first step now and dealing with the second and third steps later (and hoping everything works out OK).

And along the lines of your strict cost discipline point is market discipline - Intel will not implement something for coolness or elegance (see SOI, native quad core, IMC...). If these things don't have a specific tangible benefit, Intel generally will not go for them. SOI is a classic example - was it better at 130nm or 90nm? Sure. Was the added cost worth the performance benefit? Probably not. Is SOI a viable long-term (multi-generation) solution? In Intel's view (which I think is slowly becoming the consensus view), SOI's benefits become smaller over time - while it still may be better, you have to have the discipline to ask yourself how much better, and at what cost.

By the way, robo, your DFM comments are dead on - it is just management speak for "let's make things manufacturable." Perhaps some out-of-the-box thinking would help improve things at AMD, or they could work smarter, not harder...

Scientia holding up DFM is just another way he attempts to diminish Intel's clear and obvious lead in both manufacturing and process technology. I suppose high-K is part of Intel's DFM approach? Perhaps being the first to implement a selectively strained process back at 90nm was too? NiSi? These are inventions and breakthroughs, not manufacturing philosophies. While manufacturing is a core component of Intel's lead, the Si development is really second to none in terms of speed (meaning schedule), performance, and eventual manufacturing cost.

Things like needing 1-2 fewer metal layers than AMD are significant (in terms of cost and complexity) but rarely get noticed by the press. I would say that is a much bigger deal than SOI, but how much do you see written about each of these things?

Heck, look at all the press about ZRAM way back when - remember that cool technology AMD licensed that would allow 5X SRAM packing densities but could only be done on an SOI process? How's that doing? Wait, you mean there are alternative technologies like 1T-1C cells that are better (faster), just as dense, and will likely be implemented before ZRAM? (Assuming, of course, ZRAM ever gets implemented.)

If you want to understand the technology roadmaps, you need to go to scientific conferences, talk to scientists in the area, and usually do some work in the area (or at least have a good background in it). Reading something on the INQ, or FUD, or even worse Scientia's blog, should set off all sorts of alarm bells.

InTheKnow said...

This from another blog...

Some of the commentary on roborat's blog came from Intel employees.

Which I can only assume means that the Intel employees who post here are full of hot air?

Since I don't work at Intel it would be difficult for me to match this.

Fair enough, but a couple of questions come to mind. First, why aren't they posting on the blog in question? And second, if the blogger in question can't match that level of expertise, why not accept the word of an expert?

Might I go so far as to suggest it is because their expert opinions aren't valued on the other blog?

However, the point that you also miss is that none of the commentary on roborat's blog came from AMD employees so it is more than a little one-sided.

This might well be true, but we see the classic ploy of making an unfounded assertion without any way of supporting the statement. Do all AMD employees have to report their blogging activity to this individual, making him qualified to make this statement?

And no matter what semiconductor company someone may work for, there is one big constant that makes anyone in the industry qualified to comment on some aspects of the business. That constant is the toolsets.

There are only a handful of equipment suppliers. I would hazard a guess that at least 80% of any manufacturer's toolset is common with its competitors'. Companies may make proprietary modifications to the tools that vary from one semi company to the next, but they are all starting from the same place: common tools.

And, I am sorry if your preference is for a one-sided, pro-Intel discussion.

Perhaps if the blogger who posted this didn't censor posts that questioned his assumptions and totally dismiss the input of the experts here, they would post on his site and he could have an honest, valuable discussion of these points on his blog.

The question that has not been answered is whether or not Intel has gained a clear lead due to RDR.

As has been mentioned above, without a clear understanding of what RDR is and its strengths and limitations, the question will never be answered. Not to mention that this assumes (wrongly, I believe) that AMD does not practice RDR.

Here is an example from the PCB industry.

BGA pads need to be oversized to allow for variance in the drill process. However, they also need a certain amount of clearance between them to allow for the etch process. Since the center of each pad is fixed by the device that will be mounted in the BGA socket, designers are forced to walk a narrow window between making the pads big enough for the drill process and far enough apart for the etch process.

Now this isn't really an example of RDR, which would, as an extreme example, disallow the use of BGA areas on the board. But it does show the type of tradeoffs considered when deciding which paths RDR should close off. You decide where the process cliffs lie and simply don't approach them.
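The tradeoff described above can be sketched as simple arithmetic: the drill process sets a floor on pad diameter, the etch process sets a ceiling, and the design rule is simply the requirement that the window stay open. All numbers below are made up for illustration; real values come from the fab and assembly house, not from any source quoted here.

```python
# Hypothetical numbers for illustration only; real design rules
# come from the fab/assembly house.
pitch = 1.00          # mm, fixed by the BGA package ball pitch
drill = 0.30          # mm, nominal drill diameter
drill_tol = 0.075     # mm, worst-case drill wander (true position)
annular_ring = 0.05   # mm, copper that must remain around the hole
min_clearance = 0.15  # mm, minimum copper-to-copper gap for the etch

# Drill process sets a floor on pad diameter:
pad_min = drill + 2 * (drill_tol + annular_ring)
# Etch process sets a ceiling (pads must not crowd each other):
pad_max = pitch - min_clearance

print(f"pad diameter window: {pad_min:.3f} mm .. {pad_max:.3f} mm")
# If pad_min > pad_max, the window is closed and the design (or the
# process) has to change -- that is the "cliff" a restrictive rule
# keeps you away from.
```

With these numbers the window is 0.550-0.850 mm; tighten the pitch or loosen the drill tolerance far enough and it slams shut.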

It is strictly my opinion, but I believe that AMD and Intel both practice RDR. The real question is how far away from the process cliffs each company sets the barriers.

Anonymous said...

"Not to mention that this assumes (wrongly I believe) that AMD does not practice RDR."

ALL companies have design rules - you must if you are going to make integrated circuits. The subjective thing is what is considered "restrictive". This is obviously impossible to quantify/define and, like "DFM", is a blanket statement; the question is not whether folks use RDRs, but to what degree.

I can say this is a core part of Intel's design - if you cross-section chips and look at things like dummification (putting in non-working structures to make the overall wafer more homogeneous for processing), Intel really focuses on this and has for many generations (I think they were at the forefront of this in the early 200mm days). I'm certain AMD has its own dummification rules too; it's just a matter of degree.

For Scientia to dismiss it, with obviously no technical background on what RDR is, means, and how it is used, is absurd. Not quite as absurd as his talking about SRAM cell size and gate length to suggest that it is not RDR giving Intel an edge, but absurd nonetheless. Still less absurd than his stubborn use of a technology node launch date and a comparison of clockspeeds on two different microarchitectures as the key metric for judging how far ahead/behind folks are on process technology. This is just so simplistic it is beyond funny - but then again, what could you really expect given Scientia's limited background in Si processing?

And one of the reasons I think AMD fans/employees don't post here is that they know the unfounded crap they tend to spew will not be taken as gospel without a challenge and a request for supporting information. Heck, I saw some idiot saying:

"but anyhow, I'm more inclined to believe what you've said about AMD solving this flaw with the 45nm process (Shangai any one?)"

And this inclination is based solely on a leap of faith - is there a single data point, rumor, or even an AMD suggestion about this? This is pure hope and wishing. It may turn out to be true, but the theory that it ain't working on 65nm, so AMD said screw it, we'll launch 65nm as is and fix it on 45nm, is pulled out of thin air! Almost as solid as the claim that the reason there are no higher-clocked K8 65nm parts is not that the 65nm process has issues, but that they need to utilize their 90nm capacity. Well, with 90nm capacity coming offline, I guess the excuse will be that they don't need those high-clocked K8 parts anymore as K10 comes online.

I believe that, as it was a non-leap year, AMD decided that without the extra day they couldn't get K10 fixed on 65nm, and that the coming leap year would give them the additional time to fix it. What do you guys think, makes sense, no? I mean, AMD HAS TO FIX IT, and next year is a leap year, so the two must be related, no?

Giant said...

I believe as it was a non leap year AMD decided that without the extra day they couldn't get K10 fixed on 65nm and that with a leap year coming up that would give them the additional time to fix it.

Good one! But I have the real truth here. Some of AMD's engineers were working late one night to fix these bugs and needed to stop for dinner. Hector Ruiz had previously agreed that the company would pay for Chinese food, since the engineers had to work such late shifts. But since the company is in serious financial trouble, they couldn't afford to have the food delivered. The engineers had to take fifteen minutes of valuable time to collect the food. Since they lost those fifteen minutes, they decided these critical bugs just weren't worth fixing, so they would wait until 45nm to fix them!

Anonymous said...

SPEC Barcelona scores are being branded non-compliant because systems apparently cannot meet the 90-day-from-submission availability rule:



Does the price matter if there is no product to sell? If they are having such a hard time satisfying a low-volume market such as servers, how are they going to meet the demands of the high-volume desktop market?

pointer said...

Since I don't want to post any comments on Scientia's side, I believe posting here will also convey the needed message to those who visit there.

abinstein said...
TDP is potentially the max power usage under the worst permissible cooling environment. CMOS circuits take less power under lower temperature; most reviews use very good if not ideal cooling, which reduces power consumption much under TDP. The same is not true for general application.

Besides, as scientia said, most benchmarks do not stress all four cores 100%.

While there is some truth in that long-winded explanation, TDP is not a rating for a particular CPU; it is for a series of CPUs. One simple fact explains this: within the 65W TDP range, would anyone think the E6300 actually consumes as much power as the E6700? I'm not saying power is a pure, direct function of frequency, as the occasional part has leakier transistors, but I'd still say the E6300 consumes less power than the E6700 every time. If a part's transistors were in such bad shape that an E6300 drew as much power as an E6700, its chances of passing the burn-in (reliability) test would be low.

And as usual, Scientia has a hard time understanding this, especially when things reflect positively on Intel's side.
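The point that one TDP bin covers many parts can be sketched with the usual first-order dynamic power relation, P ~ C * V^2 * f. The capacitance and voltage figures below are invented for illustration and are not real E6300/E6700 specs:

```python
def dynamic_power(c_eff, vcore, freq_ghz):
    """First-order CMOS dynamic power: P ~ C * V^2 * f (leakage ignored)."""
    return c_eff * vcore ** 2 * freq_ghz * 1e9

# Hypothetical effective capacitance and voltage, identical for both parts.
C_EFF = 1.4e-8   # farads, made-up effective switched capacitance
VCORE = 1.3      # volts

p_slow = dynamic_power(C_EFF, VCORE, 1.86)  # lower-clocked part
p_fast = dynamic_power(C_EFF, VCORE, 2.66)  # higher-clocked part

# Both parts can sit in the same 65 W TDP bin, yet the slower one
# necessarily burns less dynamic power at the same voltage.
print(f"{p_slow:.1f} W vs {p_fast:.1f} W (same TDP bin)")
```

TDP is set by the worst part the bin must cover, which is exactly why identical TDP ratings say nothing about identical actual draw.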

Anonymous said...

Here's an AMD first - first to be ruled NON-COMPLIANT by SPEC for not having availability within 3 months of the test date:


that background image is almost hypnotic...

Anonymous said...

Wow, the misinformation on Scientia's blog just continues to mushroom. Here's another comment (not from Scientia):

"On the process side AMD needs a good .45 nm process. The process needs to fix both leakage and and the maximum achievable speed of current .65 nm process. As AMD has stated before they will use ultra low-K in their .45 nm process which theoretically should improve power consumption, but I'm having a finding meaningful information in this regard."

Putting aside the 0.45 and 0.65nm (I'll assume he/she was confusing um and nm), they have completely bought into the RIDICULOUS press on ultra low-K. Ultra low-K is for the interconnect and addresses RC delay. As the overall delay is generally limited by the transistor switching speed, you simply need the RC delay to be as good as or better than that (it's a bit more complicated than that; I'm simplifying for brevity).

Either way, the ILD used in the backend HAS NOTHING TO DO WITH LEAKAGE! The reason the commenter is having a hard time finding meaningful data is that there is none, because he is WRONG! (But hey, might as well print and theorize away.) I would be very curious to hear more about these wonderful theories about ultra low-K solving leakage problems!

This is yet another case of wishing and hoping things are better and then searching for anything that could possibly fit that narrative.

Heck, you had Scientia making a bunch of claims on power because he didn't like what one reader was concluding (that 45nm uses MUCH less power than 65nm).

First it was Scientia saying prove it - the reader responded. Then he said they were using a single-threaded Prime to load the CPU, so it was bogus. The reader pointed out that it was the multithreaded version. Scientia then made up some BS saying that, well, the architecture is SO EFFICIENT (rather ironic that he was forced into this statement) that Prime couldn't fully load it, therefore he would just randomly assume 70% loading and tweak the power numbers up.

Did he prove his assertions? Of course not; he referred to having read it somewhere previously... and after all of this, the power was still better and he was forced to admit he was wrong.

The thing to take note of is that Scientia put up as many hurdles and unfounded arguments as he could to say he was right. The underlying assumption on his blog is that everything he writes is by default correct and you have to prove him wrong. On the other hand, anything anyone else says is assumed to be wrong and unfounded and must be proven correct.

Double standard? Perhaps the reason there are more pro-AMD comments there is that those get past the Scientia screen without the need for any support. Anything anti-AMD or pro-Intel needs to be carefully supported (and still might get screened if Scientia is embarrassed by the truth).

JumpingJack said...

"Putting aside the 0.45 and 0.65nm (I'll assume he/she was confusing um and nm), they have completely bought into the RIDICULOUS press on ultra low K. Ultra low K is for the interconnect area and addresses RC delay. As the overall delay is generally limited by the transistor switching speed, you simply need the RC delay to be as good or better than that (it's a but mre complicated that, I'm simplifying for brevity)."

:) :) People across the net are confusing high-K and low-K. One is used for the gate oxide material, the critical component of the switch (high-K), while the other is used in the backend for 'wiring' up the transistors into circuits (low-K). In one case you want high capacitance with high electric fields (the gate) to increase the switching speed; in the other you want very low capacitance (the interlayer dielectric) to avoid cross-wire coupling and reduce overall signal propagation delay, which limits frequency.

In fact, you can get by without a great backend, but the latency just mushrooms.

We see this evidenced in AMD's inability, with their 65nm process, to keep L2 latency the same as or lower than at 90nm... i.e. they did not do a great job engineering their backend process.

I have not read the Scientia post or comments, typically I cannot get through the first few paragraphs without laughing hysterically.

Anonymous said...

One area where ultra low-K will help is potentially in reducing, or at least keeping under control, the number of metal layers. Typically one metal layer is added each generation due to the shrink, but if you can't keep RC delay under control you may need more.

As JJ said, low-K addresses the "C" part of RC delay - the higher the K value, the worse the capacitance and backend delays. That means wider spacing between lines and/or between layers, and it potentially requires shorter runs and thus more metal layers. Thicker layers mean slower throughput on tools (more cost per wafer) and more difficult integration - higher AR (aspect ratio) etches, more difficult metal fills...
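As a back-of-the-envelope illustration of why low-K only touches the "C" term: line-to-line capacitance scales directly with the dielectric constant of the ILD, so swapping SiO2 (k ~ 3.9) for an ultra-low-k film (say k ~ 2.5) cuts the RC delay proportionally. This is a crude parallel-plate model with made-up geometry, not a real interconnect stack:

```python
# Relative RC delay of a wire scales with the dielectric constant k
# of the ILD, since C ~ k * eps0 * area / spacing.  Illustrative only.
EPS0 = 8.854e-12  # F/m, vacuum permittivity

def wire_capacitance(k, area_m2, spacing_m):
    """Parallel-plate approximation for line-to-line capacitance."""
    return k * EPS0 * area_m2 / spacing_m

# Same wire geometry (made-up numbers), two different dielectrics:
area, spacing = 1e-12, 100e-9
c_sio2 = wire_capacitance(3.9, area, spacing)   # classic SiO2
c_ulk  = wire_capacitance(2.5, area, spacing)   # an ultra-low-k film

# With wire resistance fixed, RC delay drops in direct proportion:
print(f"delay ratio ~ {c_ulk / c_sio2:.2f}")
```

Note that nothing in this expression involves gate leakage; the "K" knob here is entirely a backend wiring matter, which is the commenter's point.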

While IBM/AMD are making a big deal out of this, the question no one seems to ask is why Intel can get away with FEWER metal layers (RC and/or design has to be better to enable this). Just another example of the press and fans latching on to technology they don't really understand.

Did I mention how great SOI and immersion litho are!?!

InTheKnow said...

ALL companies have design rules - you must if you are going to make integrated circuits.

I agree. In fact, companies that intend to manufacture anything more complicated than teddy bears need design rules. I wasn't trying to imply otherwise. Sorry if that wasn't clear.

The subjective thing is what is considered "restrictive". This is obviously impossible to quantify/define and, like "DFM", is a blanket statement; the question is not whether folks use RDRs, but to what degree.

Again, I agree; that is the crux of the issue: trying to define what constitutes "restrictive". Based on what I know of the companies involved, I would wager that Intel's design rules are more restrictive than AMD's.

I make that statement primarily because IBM is prone to push the process envelope (and let's face it, AMD's process is IBM's).

Intel, on the other hand, is prone to be conservative, to the point where they can stick with something too long and occasionally get burned by something like Opteron. But Intel's size, market share and deep pockets allow them to afford the occasional mistake.