2.28.2008

If you can't beat them, join them

I am currently in the middle of a big project and am temporarily finding it quite a challenge to maintain the same level of updates you may have seen in the past. AMD's lack of response, together with the apologetic tone of its followers, doesn't make it easy either.

To improve the discussion and level of activity I would like to open up the blog to co-authors. I encourage everyone to nominate who they think should be hitting the front page instead of the comments page. Sort of "voluntelling" them to become authors. Anyone courageous enough can also come forward and anyone wishing to remain anonymous can e-mail me at roborat2000@yahoo.com.

I ask only that we maintain our dedication to "truthiness".

52 comments:

Anonymous said...

So I take it the "truthiness" requirement rules out Scientia?? :)

In that case I nominate JumpingJack as coauthor - he is quite knowledgeable and is actually testing a Phenom system, noting its strengths and problems, according to his posts over at XCPUs.com.

Anonymous said...

I absolutely will never post again at the blogs of those AMD spunk lappers.

The latest post from sir AMD spunk lapper goes on and on, then finally at the end says what?

"AMD currently has nothing even close"

Then he drinks more spunk and talks on and on.

He forgets two things

1) When AMD goes to 45nm it will be a good one year later, and thus one year behind on yield learning and process learning. This spunkmeister still believes the AMD fodder about "mature yields". You only need to see sir Gross to know what AMD considers "mature yields" and to understand how poor they are.

2) AMD's 45nm will be a generation behind INTEL's in performance. That will give INTEL a fundamental lead at the transistor level in either power or frequency. Unless INTEL's design is busted like it was with the Prescott, CedarMill, and Tejas families, there is really no question AMD will be behind.

AMD lags on performance and lags on learning. That is a gap they can't make up in design unless INTEL falls on its face.

Lastly, why the hell is AMD behind on process? Is it a coincidence that AMD's problems all started when they decided to move to SOI and went to IBM, and that they then fell further and further behind? Do you think a consortium whore really cares more about one john versus another? They just want the money. Then AMD gets a 65nm and a 45nm process (someday), but it isn't optimized for AMD's circuit needs. Then poor AMD needs to "tune" it, so to speak. Pretty expensive, I say.

What is the logic of a fab in NY? Does NY have a large installed base of trained engineers? Does AMD have a large base of experienced engineers there? Are there some really good schools there? Is it a cheap and desirable place that people want to move to? Oh yes, to all of those. Makes sense to tell your successful site "we don't think you are important, we will go somewhere that has nothing" and invest billions there. INTEL, on the other hand, has billions and time to develop new sites, but look at them: they only put fabs where they already have a trained, experienced, and successful factory. Makes total business sense there... bankrupt business sense.

Lastly, did you hear that the IBM/AMD silicon wafer supplier is up for sale because business ain't so good?

Giant said...

This is hilarious. AzmountAryl was posting false information at AMDzone:


Yes Viperiii, you heard it correct, AMD has shipping more Quad Cores in respect to their other products than what spIntel did (~1 200 000 Quads in 4Q2007). Now tell me, does AMD needs to be thought anything when their product broke competition's record in the same quarter it was released? Or maybe spIntel is the one who should learn from AMD?


I responded with fact:




Where did you get that information? Intel shipped two million quad core CPUs in Q3'07, and in Q4'07 reported 40% growth in quad core CPUs on desktops (servers weren't mentioned, so it's impossible to get an exact figure). That makes for ~2.8M quad core CPUs, not 1.2M.

Q3'07 numbers:
http://media.corporate-ir.net/media_files/irol/10/101302/2007Q3IntelEarningsReleaseFinal_2.pdf

Intel shipped more than 2 million quad core processors during the quarter and now offers
more than 20 quad-core processor designs.



Q4'07 numbers:
http://www.bloggingstocks.com/2008/01/16/intel-q4-2007-transcript (you can also listen to the Q4'07 earnings webcast for the same info)

Our desktop Quad-Core product shipments grew more than 40% from the third quarter...
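The arithmetic behind that estimate is easy to reproduce. A minimal sketch, treating the "40% growth" figure as if it applied to the whole 2M-unit Q3 base (the earnings call only stated it for desktop parts, so the real number is an estimate):

```python
# Back-of-envelope check of the quad-core figures quoted above.
q3_quad_cores = 2_000_000   # "more than 2 million" in Q3'07
q4_growth = 0.40            # "grew more than 40%" into Q4'07

q4_estimate = q3_quad_cores * (1 + q4_growth)
print(f"~{q4_estimate / 1e6:.1f}M quad-core CPUs in Q4'07")
# → ~2.8M quad-core CPUs in Q4'07
```

Since both source figures are stated as "more than," ~2.8M is a floor, not a ceiling.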


Then TheGhost replied with this humour:

i don't know where you get uour information, but in reality, so far, intel has never released a quad core, i don't call a cpu with two dual core dies on it a quad core

this would be like selling some one two motorcycles welded together and call it a car

by the way i fixed your second link

Anonymous said...

'What is the logic of a fab in NY?'

This one is rather easy... $1.2 billion!

Germany has space for 3 more fabs (this is in addition to the possibility of getting around to bringing F38 up to date). If this were about infrastructure, resource optimization, or anything but money, it would be far more sensible to just continue to build out Dresden.

Of course the cash cow of German subsidies is done, so additions would cost AMD near full price... in NY they are essentially getting ~33% off thanks to one of the greatest thefts from NY legislators in some time...

The only other reason to do this that I can see, is if they are planning to do some sort of deal/lease with IBM - a NY site would be more attractive to IBM than Germany.

There is an attempt to grow a high tech space in upstate NY - there is the Albany center which is Sematech (I believe) operated now. Having visited that site it has a bit of a ways to go before being 'important' in the grand scheme of things though (just look at Sematech in Texas which has become largely marginalized these days).

As often is the case, if you want to understand many large decisions, follow the money.

Anonymous said...

Dementia's latest blog is rather funny for those looking for entertainment. As he apparently has started to realize that his process knowledge is a joke, he is now using MODEL #'s to PREDICT FUTURE CLOCKSPEEDS!

"Another question is what 9850 might be. 9750 is supposed to be 2.4 Ghz while 9950 has been suggested to be 2.6 Ghz. So, would 9850 be 2.5Ghz perhaps? The topping out of the naming scheme does lend some credibility to the idea that AMD will suspend 65nm development and try to move to Shanghai as quickly as possible"

Err.... one small problem with this otherwise well-founded, fundamentally rooted analysis. AMD's model #'s came out when? Back in late Q2'07? Should we now believe that at that point AMD knew where the clockspeed of 65nm K10 would top out and set the model #'s accordingly? As per usual, he has a conclusion and is spending time trying to backfill RIDICULOUS SUPPORTING 'FACTS' TO BUTTRESS HIS 'ANALYSIS'. I did like the interpolation of the 9850 being a 2.5GHz chip....BRILLIANT! I think we will probably never see a 9850, as AMD would already have 6 quad-core desktop SKU's if/when the 2.6GHz chip comes out.

"It certainly seems that it wouldn't take much effort to modify an AMD HT based chipset to work with Intel's CSI interface. That would seem to remove a lot of Intel's current proprietary FSB advantage."

Freakin' moron... CSI is not proprietary? Not much effort to modify an HT chipset? Why would AMD do this? They already abandoned FSB support (by their own choosing). And Via? Are you kidding me?

"Curiously, these are the same people who also insisted that Intel's 90nm process was fine and that it was only a poor design with Prescott that was the problem."

Pentium M.... hello... how many times does he need to be told this!

'I think it is entirely plausible that AMD could surpass IBM to become the senior partner in process development over the next few years.'

I nearly spat out the coffee I was drinking when I read this.... this is based on what, some ridiculous analysis of EUV? The problem with this analysis: all EUV does is shrink dimensions. It doesn't make the process faster or use less power; it simply enables a physical shrink...

- Where will AMD's 2nd gen high K come from (assuming they get around to implementing Gen 1 on 32nm)?
- Where will their ILD solution come from? (Or are they just going to explode the # of metal layers?)
- Advanced anneal?
- Implant technology?

It is just remarkable how Scientia reads a few press clippings about EUV, and somehow, out of just that now finds it plausible that AMD will be the senior partner on 22nm? THIS IS UTTERLY LAUGHABLE!!! I CAN'T BELIEVE THAT IDIOT WOULD SAY THIS BASED ON SOME EUV ARTICLES!!!

Just when I think his analysis couldn't get any worse the driveway ends up just a few more feet short of the curb....

Anonymous said...

Dresden or NY

I doubt the state of Saxony (or whatever that government in Dresden is called) would fail to put up more tax breaks or money to keep a few thousand extra jobs and investments of a few billion. All AMD has got to do is ask. Also, being in the EU has some significant local tax breaks too versus doing business in the US.

As to NY, I'm sure there is all sorts of scratch-my-back-and-I'll-scratch-yours politics there. Look only at the NY attorney general's games to see one scratch. As to Albany, that is a joke waiting to happen, like Sematech. You think Sematech did one thing to help AMD, IBM, INTEL? Look at the other members like Moto, National, TI, and I can only laugh at how successful that is. The quality of talent there can't compare with the level at places like MIT, Stanford, and the UC system.

EUV: I thought INTEL had a big program going. It's funny how they are lying low here, just like they did with high-K too.

SPARKS said...

"EUV, I thought INTEL had a big program going. Its funny how they are lying low here, just like they did with HighK too."

Well, not so low, perhaps; more under the radar...


"Moving into Ultra-Violet
Extreme ultra-violet lithography, or EUV, is another dramatic step forward in doubling the density of circuitry. Today's most advanced lithography is limited by the wavelength of visible light, which is 400 to 650nm. In contrast, EUV lithography uses a wavelength of 13.5nm, which will enable the printing of features that are 10nm and below (By contrast, Intel's current volume manufacturing produces feature sizes of 50nm). This leap in feature-size reduction will enable Intel to continue to achieve the predictions of Moore's Law.

Of course, there are interesting challenges in making EUV—and other advances—a viable manufacturing technology, For example, EUV light is absorbed by glass, so instead of lenses, mirrors must be used. Also, since EUV light will not pass through a glass mask, a reflective mask must be used to reflect the light in certain regions and absorb it in others, so that a circuit pattern can be effectively transferred to the wafer. Intel's researchers and engineers are already working to address and solve those challenges, and deliver yet another breakthrough technology for advancing the silicon industry."

Again, GURU's "speculations" have coincidentally come to pass. GURU you never cease to impress me.

SPARKS

http://www.intel.com/technology/magazine/silicon/moores-law-0405.htm

Anonymous said...

"Again, GURU's "speculations" have coincidentally come to pass."

I said speculations because I didn't have the supporting links (until after I had posted). I try not to pawn things off as 'fact' like some other bloggers.

As for EUV, Intel has been working on this perhaps longer (or at least as long as) anyone else, however EUV has always been 'required' for several generations now (only to be continually pushed out). The technical hurdles are still substantial and I think 22nm is a reach.

I think it was clear to anyone within the industry that Intel had a significant lead on high K development - they had a 200mm (yes 200mm!) tool in house very early on (~2000) for research and development and had focused on the integration issues associated, as opposed to simply publishing meaningless EOT (equivalent oxide thickness) and leakage on capacitor structures.

It is unclear to me if anyone has a significant lead in EUV - ultimately I think the breakthroughs that come will be owned by the equipment suppliers and thus will be available to anyone. High K/metal gate is a far different scenario as the critical IP (intellectual property) is in the integration, not the equipment. As usual, Scientia has completely mis-analyzed the situation and is again showing his lack of background in the area.

While EUV (or some other alternative technology) will eventually be required and will represent a significant achievement, it is by no means a "breakthrough"; it is simply using a shorter-wavelength light source and addressing the technical hurdles associated with it. Now if something like nanoimprint comes along, that in my view would be a breakthrough...

Anonymous said...

"I doubt the county of Saxony or whatever that goverment is called in Dresden wouldn't put up more tax breaks or money to keep a few extra thousands jobs and investments of a few billion."

Based on? The key in your speculation is 'more' - I can see Saxony offering future tax breaks/subsidies, but I fail to see why we should expect it to be more than NY.

The # of jobs created in NY will be significantly more than the # created by simply adding to an existing location like Dresden, where many of the jobs can be 'shared' between facilities. NY, being a new fab location for AMD, cannot leverage existing infrastructure. And it's not about keeping jobs, it's about adding more jobs.

As such, NY will see better job creation, and thus I think it would be difficult for Germany to put up a similar amount. The governments don't get much value from capital invested; it's from jobs created and tax revenue. Thus it would appear NY would be in a better position to offer mo' money, as they would have an edge on the job side.

Anonymous said...

Try and hire a few thousand experienced technicians and engineers and get them to relocate to a high-tax state called NY....

Or you could simply shift a few hundred of your experienced engineers and technicians from one fab to another and have them train some new employees. Or perhaps you'd shut down one fab and just move all the employees to another. I think expanding in Dresden is far simpler than building a new fab in the middle of a state with little recent high tech, nor much high-tech education either... It's a no-brainer: AMD will have a hard time staffing that site in NY, and I don't care what the state gives them in tax breaks; it's a big project for a company that has neither the money nor the time. What a stupid idiot thinks it'll be easy in NY...

SPARKS said...

Holy Christ, GURU, there is a plethora of up-and-coming hopefuls on the emerging sub-45nm front. It's like a group of debutantes being primped and prepped for the big ball.

The mentioned EUV,
EBDW Electron Beam Direct Write Lithography,
SPL Scanning Probe Lithography,
Charged particle Lithography with its sub variants,
X-Ray Lithography,
And finally, the electron tunneling lithography (and I thought this was just a microscope)

So, are you saying NIL (Nanoimprint Lithography) is the biggest hopeful and may become Queen of the Ball?

Obviously, you’re reading Dr. Chou’s work; you are having lunch with him, too?

Do you like this more because it is a three-dimensional patterning process? Can this technique 'imprint' materials directly on the substrate without the need for an etch?

Who does he work for/with, or will INTC simply buy the technology? Doesn’t Toshiba have a lock on this?

http://www.eetimes.com/showArticle.jhtml;jsessionid=IXHJ0MM4I1WNUQSNDLPCKHSCJUNN2JVN?articleID=202403251

SPARKS

Anonymous said...

Silverthorne brand:

http://www.tomshardware.com//2008/03/02/intel_preps_atom_for_world_domination/

I must say I like Atom... nice 2 syllable brand, scientific allusion and not overpromising like 'phenom' which implies performance... That said, if the first generation doesn't do well (I suspect the first gen will struggle in the market) I can already see the 'atom bomb' jokes!

Anonymous said...

Sparks - hard to say...(and keep in mind litho is not really my thing)

EUV and X-ray in my view are pretty much the same thing - where exactly is the line between EUV and X-ray? If I were a betting man I'd still put my money on EUV as that has the most momentum and money behind it and seems the natural successor / evolution from 193nm. That said something disruptive like nanoimprint or ebeam could come along and knock people on their you know what... One of the issues with EUV is the cost - litho suppliers have come to expect the money is no object solution is OK so long as they can meet a timeline, at some point the economics may win out over the technology/timing, and it's not clear if EUV will have any chance of keeping costs under control.

I believe immersion pretty much came out of nowhere - that wasn't really on any major roadmaps until recently (recent in semiconductor terms) and now it looks to be a 2 generation or more extension of dry litho. That's why I don't discount the non-EUV solutions, I only see EUV getting more expensive as time goes by as there are still fundamental issues to be resolved and who knows what impact these might have on eventual tool cost and manufacturability.

This is partly why I'm amused when I read Scientia theorizing that AMD may become the 'senior' development partner of the IBM fab club within a few years based purely on a couple of articles about POTENTIAL EUV timing (and those being paper claims and lacking typical scientific formality).

Giant said...

http://www.techtree.com/India/News/AMD_Opteron_Ready_for_Launch/551-87273-581.html


AMD (Advanced Micro Devices) has announced that its quad-core Opteron chip (previously codenamed "Barcelona") for servers is ready for launch. The company will start shipping the B3 version of the chip to channel and distribution partners this week.


That's odd. I thought the launch was six months ago?! Barcelona truly goes down in history as the worst x86 launch EVER. Prescott might have been awful in terms of performance and power consumption, but at least the processor was available at launch. It's inexcusable, totally inexcusable, for AMD to not have any substantial quantities of Barcelona CPUs until SIX MONTHS after the launch.

SPARKS said...

“few years based purely on a couple of articles about POTENTIAL EUV”

Pa-lease! First, from what I gather from my elementary understanding, the technical challenge of getting EUV to volume production seems insurmountable. Glass absorption, limited angle projection (only 9 degrees presently), new chemicals, polymer resists, dictate an entirely new and very expensive (and unproven for HV) tool set.

From what I'm reading, the differences between the two processes are like the differences between making Baked Alaska and making bagels. I suppose that imbecile you referred to in your post doesn't even bother to read this much.

I think, from what you said previously, if the industry big guns don't rally together to make this thing (EUV) fly, the peanut gallery certainly ain't gonna pull it off, that's for sure.

In any event, thanks for the insight, and the lead, into the next process generation. Please, keep the good stuff coming, thanks.

SPARKS

SPARKS said...

Giant, forget that nonsense and those idiots. I've got a couple of juicy links for your, shall we say, extreme pleasure?

http://www.xbitlabs.com/news/memory/display/20080228180835_Kingston_Achieves_Unprecedented_Memory_Speeds_with_DDR3_Nvidia_s_New_Core_Logic.html

With…

http://www.youtube.com/watch?v=QOM5cA-E42I

And……..

http://www.hothardware.com/articles/Wolfdale/?page=1

PLUS….


http://www.tomshardware.com/2008/02/25/x48_motherboard_comparison/


Oh my god, my wallet is burning a hole in my ass!

Hoo Ya!

SPARKS

SPARKS said...

“I can already see the 'atom bomb' jokes!”

I wouldn’t throw the baby out with bath water, just yet. Ed has got some ideas for the little monster.

http://www.overclockers.com/tips01300/

Hey, ya never know!

SPARKS

Anonymous said...

Wow, the level of ignorance in the press is scary, and no one seems to ask: is what people are spoon-feeding me really true, or should I actually try to assess the facts?

From EEtimes - this is not a blog folks, but an actual site that theoretically specializes in this general area:

http://www.eetimes.com/news/semi/showArticle.jhtml;jsessionid=UMOK431CGW1VAQSNDLOSKHSCJUNN2JVN?articleID=206901553

"The latest chips also contain faster transistors and wire insulation made with a new material called ultra-low-K dielectrics. The material absorbs less energy, so the chips are cooler and are more power efficient, En said. The overall reduction in power consumption is 15%, compared to comparable 65-nm chips."

ULK has NEARLY NOTHING TO DO WITH THE OVERALL POWER REDUCTION! Notice the careful parse of the words... they talk about ULK absorbing less energy (which is technically true, but a relatively meaningless factor in the overall power consumption) and then lead into a quote which talks about 'overall' power reductions and lower heat, most of which are due to OTHER CHANGES (primarily the lower Vt's associated with the 45nm process)

As a person with a technical background, En should be embarrassed by what he is implying and should know is misleading - this is not a marketing guy, he is the manager of logic development at AMD. He is clearly trying to play up ULK, and also hoping (?), implying (?) that the power and heat reduction is due to this fantastic ULK technology. I guess the author of the article didn't feel it necessary to ask how much of the 15% power reduction is actually attributed to ULK (hint: it's pretty damn close to something that rhymes with hero).

'The use of immersion lithography is 40% more efficient than using conventional lithography, lowering the cost of making the chip'

Notice the use of the word 'efficient' and not '40% lower cost'! This is another subtle parsing of words which, in my view, is intended to mislead people. Why not directly state the % cost savings? I also doubt the assumption that the cost is actually lower... there is likely some 'clock to clock' type comparison baked into this garbage. Does he factor in the lower throughput with immersion, the lack of re-use of equipment from previous generations, and lower infrastructure costs like training, spares, and service? Again, notice there is NO SPECIFIC, DATA-BASED claim on actual cost benefit!

SPARKS said...

“wire insulation”?????

Wait a second, what does he mean by "wire insulation"? First, I thought that these "wires" in your line of work were called 'traces'. In my line of work, insulation is THHN, THWN (thermoplastic-coated), RH (rubber), XHHN (cross-linked polymer), etc., among gangs of others (heat- and voltage-dependent). This stuff would never survive an annealing process. Trust me. Further, to 'etch' my insulation requires a razor knife.

Secondly, I thought most of the heat/leakage came from the transistor at the gate.

Are they trying to explain this to a less informed, wider audience, say, some one like me? Sorry, I don’t get it.


SPARKS

Anonymous said...

Any questions now why AMD is behind on technology? They can't even get their press release right on technology; how can they actually make the stuff?

I liked this blurb: "That's the expectation." Lots of people had expectations, and so far AMD has let them all down. I won't be surprised if they don't meet this one either.

Yo Hector I have a tip, keep this guy away from talking to the press he is speaking out of his backside.

InTheKnow said...

Sparks said...
This stuff would never survive an annealing process.

The high temp anneal you are thinking of is a front end process.

The anneals in the back end where the ULK materials are being used are on the order of a couple hundred degrees or less.

The copper "wires" (I would call them traces:) ) are electroplated at near room temperature.

I'm not sure what the ash temps are like to remove the resists, but since ash processes are plasma based, the temps shouldn't need to be too high.

Wet etch temps are, not surprisingly, below the boiling temp of water.

So the highest temps you should see in the backend would be the fairly low temp anneals.

Not working in backend processes, I could have missed something, but I don't think so.

InTheKnow said...

This from another blog...
The clock speed issue is exactly what I've been talking about. AMD has had slower initial clocks on new processes since it began using SOI. However, it is possible that 45nm could start at 2.8Ghz. This could be the case as I mentioned if AMD really did have a timing mismatch with its 65nm process.

Never, say never, but I wouldn't put a plugged nickel on the chances of seeing 2.8GHz out of the gate on AMD's 45nm.

I can't really comment on the "timing mismatch" comment, because I have no idea what that is supposed to be.

This is all pure speculation, but I think the real issue is that AMD hit the thermal wall at 65nm. Just like Intel hit it at 90nm. I think SOI probably helped AMD push the thermal wall out a generation past Intel (65nm vs 90nm). Now that they are up against the wall, though, they are going to have to do something more than simple shrinks to get past it.

Intel got past the thermal wall by completely redesigning the processor (C2D) and going to multiple cores. Then they went to high-k metal gate to reduce gate leakage.

So far, I don't see anything comparable from AMD. Barcelona isn't going to get much better at 45nm than it is at 65nm, though it will allow more cache on the same size die. But Barcelona hasn't shown good thermals like C2D, so if my assumption is correct, they need a different architecture. I also don't see any revolutionary breakthroughs from AMD to reduce gate leakage any time soon.

Tonus said...

"Yo Hector I have a tip, keep this guy away from talking to the press he is speaking out of his backside."

I think that this is why most companies hire PR people, to carefully present facts in such a manner that you can deceive people but still claim that you "told the truth."

You would hope that industry magazines would have an editorial section to deal with the statements you get from PR departments, so that they can explain what we are really being told.

Anonymous said...

"So the highest temps you should see in the backend would be the fairly low temp anneals."

Generally speaking, the backend thermal budget is somewhere in the 300-400C range (this is the max temp). The ILD is put down with a CVD process, which needs elevated temperatures to work (though these are typically plasma-enhanced to keep the temperatures relatively low). I think these may be the highest-temp processes in the backend.

Many of the ashes are now far lower temp, and there is some movement toward chemical strips to remove resist (which would mean really low temps). You also have some temperature exposure during the bump process, and there are some 'low' temp anneals thrown in along the way as well.

The ULK absorbing less energy is a bunch of crap - it is technically accurate but much of the heat is still from the front end (transistor). This is a poor attempt by AMD to talk up ULK as some great technology.

I will point out again, Intel is able to do the same size technology with an ILD that isn't as low as the IBM low K AND THEY USE FEWER METAL LAYERS. That shows the relative superiority of the backend processes. The 'problem' AMD doesn't mention is that they HAVE to use ULK or the interconnect would become the limiter, or they would have to use even more metal layers to cut down on the RC delay.
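The RC-delay point is worth making concrete. Interconnect delay scales with line resistance times line capacitance, and at fixed wire geometry the capacitance is roughly proportional to the ILD's dielectric constant k. A rough sketch with typical published k values (illustrative ranges only, not AMD's or Intel's actual numbers):

```python
# Relative interconnect RC delay vs. the ILD dielectric constant k.
# Line capacitance C scales ~linearly with k, so at fixed geometry and
# metal stack, RC delay scales ~linearly with k as well.

K_SIO2 = 4.2  # undoped SiO2 reference

def relative_rc_delay(k, k_ref=K_SIO2):
    """RC delay relative to an SiO2 ILD at identical wire geometry."""
    return k / k_ref

for name, k in [("SiO2", 4.2), ("low-k CDO", 3.0), ("ultra-low-k", 2.4)]:
    print(f"{name:12s} k={k:.1f} -> {relative_rc_delay(k):.2f}x delay")
# → SiO2 1.00x, low-k CDO 0.71x, ultra-low-k 0.57x
```

Dropping from a k~3.0 low-k to a k~2.4 ULK buys roughly 20% lower wire delay; the alternative ways to get that headroom are fatter wires or more metal layers, which is exactly the trade described above.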

The press should be asking why they need this on 45nm when others don't, but they don't have the background to comprehend this, so they just say "wow, cool technology!" It'd be like someone integrating liquid into the package, which then interfaces with a standard cooler, and getting the same performance as someone else who doesn't have the technology and uses just a plain old cooler. The idiots would say wow, that is real innovative... the smart people would say why do you need to do that when your competitor can get the same performance without it?

Anyone notice one thing about AMD's 45nm press release?

Where's the beef? Plenty of talk about technologies and expectations, but no actual information on how much less power or how much faster the process is. The closest you get is a claim of "efficiency"... that would be measured in what units again? AMD is simply talking about concepts and individual technology - if they are sampling chips, shouldn't they have relative performance data between 65nm and 45nm by now?

Let me guess they don't want to release that information and tip their hand to competitors! You know, Intel may try to copy IBM's 45nm process!

Ekwon said...

"The idiots would say wow, that is real innovative... the smart people would say why do you need to do that when your competitor can get the same performance without it?"

Long time reader, first time commenter. But I wish this guy wasn't anonymous. Great points. I would nominate him for a coauthor spot....

Anonymous said...

'I can't really comment on the "timing mismatch" comment, because I have no idea what that is supposed to be.'

No worries - the person making that comment has no idea either.

"This is all pure speculation, but I think the real issue is that AMD hit the thermal wall at 65nm. Just like Intel hit it at 90nm"

I think you are partially correct here, but I think both companies hit the thermal wall on 65nm. Don't confuse Prescott hitting a thermal wall with Intel's 90nm hitting a wall. Intel is perceived to have hit the wall quicker due to clockspeed choices (over IPC) and a poor decision on architectures. Additionally, Intel has also tended to scale the gate oxide a little more aggressively than the rest of the industry.

If I had to guess I think AMD's 45nm will get to 2.8 or possibly 3.0GHz. I think there is something quite wrong on 65nm from a performance perspective (which of course could still mean 'good' yields just slow parts...) 45nm will use a lower Vt which means less active power, which means they could budget more power for offstate power (via leakage).

This would mean they could squeeze more performance at the expense of increased leakage by using the active power savings to 'pay' for the offstate losses. I also suspect there are still architectural related issues with K10 - not necessarily functionality but more power and circuit optimization. This would mean future steppings should yield more power improvements, and in my opinion this could have happened on either 65nm or 45nm, but at this point it makes no sense to do it on 65nm (and AMD has made a wise decision to just cut bait rather than do new tapeouts on 2 different technologies for K10)
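The active-versus-leakage budget trade described above comes straight from the dynamic power relation P ≈ C·V²·f: a lower Vt lets you hold frequency at a lower Vdd, and the V² term is where the active-power saving comes from. A minimal back-of-the-envelope sketch (capacitance, voltages, and frequency are made-up illustrative numbers, not AMD specs):

```python
# Dynamic (switching) power: P = C_eff * Vdd^2 * f.
# All numbers below are hypothetical, chosen only to show the V^2 scaling.

def dynamic_power(c_eff_farads, vdd_volts, freq_hz):
    """Switching power in watts."""
    return c_eff_farads * vdd_volts**2 * freq_hz

C_EFF = 20e-9        # 20 nF effective switched capacitance (made up)
FREQ = 2.6e9         # 2.6 GHz

p_hi_vdd = dynamic_power(C_EFF, 1.30, FREQ)   # ~87.9 W active
p_lo_vdd = dynamic_power(C_EFF, 1.15, FREQ)   # ~68.8 W active

headroom = p_hi_vdd - p_lo_vdd
print(f"Active-power headroom from the Vdd drop: {headroom:.1f} W")
# That ~19 W of savings is budget that can be 'spent' on extra leakage
# from lower-Vt (faster) transistors, or on higher clocks, within one TDP.
```

An ~12% Vdd cut yields a ~22% active-power cut, which is the headroom that can be traded for off-state (leakage) power as the comment describes.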

What is scary is that the press will likely confuse these improvements with a magical 45nm process and the pixie dust of immersion litho and ULK technologies. I must say AMD's 65nm performance has put them in a fantastic position to spin 45nm. But in the end all 45nm will probably achieve is what 65nm was ORIGINALLY EXPECTED TO DO (2.8GHz, possibly 3.0).

Anonymous said...

Just want to congratulate ATI here for the great products they have been bringing to market in the last few months, especially this last one, the 780G.

Too bad we can't see such great products paired with Intel processors.

Anonymous said...

"You would hope that industry magazines would have an editorial section to deal with the statements you get from PR departments, so that they can explain what we are really being told."

Guys re-read the article... this 'En' guy is a manager of logic development! He is not a PR guy... that is part of my outrage as he clearly knows better and is carefully choosing his words and stringing them together in a way which is not technically false, but is highly misleading. He talks about ULK and then quickly migrates into an overall 15% gain, leading folks to believe that the ULK is responsible for it.

He says ULK absorbs less heat, which would allow chips to run cooler, but that is like me adding a gram of mass to my CPU cooler and crediting it for a big overall cooling benefit... I can technically say the extra gram helps absorb more energy, which helps the cooling, but if I didn't mention that, oh by the way, I also added a 120mm fan and lowered the Vcore, would it be just a bit misleading?

Sure the extra gram probably helps with the cooling, but would that be the major source of benefit?

What the announcements lack is context and perspective... how much speed is due to ULK vs, say, strain improvements? How much is heat reduced by ULK absorbing less energy vs, say, using a lower Vt (and likely a lower Vcore)? Of course, if they put the whole picture out there, people might see the improvements are coming from changes that others (specifically Intel) are already making, and that their 'unique' technologies are really not the major source of the benefits.

SPARKS said...

Ekwon, that anonymous poster you refer to is unofficially called GURU. Make no mistake, he is an industry insider with a Ph.D. and world-class knowledge in the field.

However, don't challenge his anonymity; he has his reasons, so leave it at that. His knowledge and style IS his signature, and it's quite unimpeachable.

Frankly, I wouldn't dare pressure him to do anything, as he certainly enjoys writing (and educating) on this blog, and we certainly don't want to lose him.

Welcome aboard.

"If I had to guess I think AMD's 45nm will get to 2.8 or possibly 3.0GHz. I think there is something quite wrong on 65nm from a performance perspective"

By the way, GURU, you've said this all along. I still think the design is a dog, however. Forget the failed SOI.

SPARKS

SPARKS said...

By the way, while we are on the subject of AMD's 45nm production, is this thing a dumb shrink, or what?

SPARKS

Anonymous said...

"By the way, as we are the subject of AMD's 45nM production, is this thing a dumb shrink, or what"

Not sure I understand your question - a shrink describes a product transition from one generation to another, and generally not the process itself. In the technical sense, a dumb shrink is a pure geometrical scaledown of a product from one node to the next. This is rarely the case - with K10 the size of the cache will be changed (it is also unclear if anything will be done to the core portion of the chip).

In the past I have called AMD's transition from 65nm to 45nm effectively a simple or dumb shrink as it is more or less a 'let's just ratchet down the litho and print smaller features' process. That's all immersion litho is - it doesn't make the chip faster, it is just an alternate way to allow smaller features to be printed and will end up doing the same thing that Intel does with its double exposure process.

In all fairness IBM has made changes to the ILD in the backend, but realistically this is evolution, despite the press coverage. There are also some changes in the front end to increase strain - nothing really new here just 'more' and some typical tweaks for a new generation to take care of things like voltages.

The fact that the 65 and 45nm processes are so similar is one of the reasons AMD was able to speed up the transition (assuming their schedule holds), but the bottom line is that from a performance perspective it is not comparable to Intel's 45nm, and Intel's lead has widened (if you don't myopically look at just the schedule). The claimed 15% improvement is rather small for a tech node transition, as the industry target is generally 30% better. For some perspective, on 65nm AMD claimed 40% better than 90nm (whether they actually achieved that is open for debate).

That said, barring a switch to tri-gate on 32nm for Intel, IBM should close the gap somewhat on 32nm when they get around to implementing high-K (though they will still remain behind on schedule, and Intel will be on a 2nd gen of high-K). Intel's 32nm will be more evolutionary, and their implementation of immersion will also not lead to some performance leap; it will just allow Intel to print smaller features (much like AMD's transition on 45nm). As IBM/AMD will be implementing high-K, they will likely see a comparatively larger performance gain from 45nm to 32nm (with Intel still having an absolute lead).

It is also unclear to many in the industry if the gate first implementation will compromise some of the high K benefits (it might) - it definitely has potential advantages in terms of being simpler from a process flow point of view and lower manufacturing cost, but it is not clear if there is any significant performance hit to this.
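The per-node numbers quoted above (a ~30% typical node-to-node target vs AMD's claimed 15% for 45nm) can be compounded with simple arithmetic - this is just math on the figures in the comment, not new data:

```python
# Arithmetic on the per-node performance figures quoted above.
base = 1.00
industry_target = base * 1.30   # ~30% typical node-to-node target
amd_claim_45 = base * 1.15      # AMD's claimed 15% for 65nm -> 45nm

shortfall = industry_target / amd_claim_45 - 1
print(f"Shortfall vs a typical transition: {shortfall * 100:.1f}%")

# Two nodes at 15% vs two at 30% widens the gap further:
gap_two_nodes = (1.30 ** 2) / (1.15 ** 2) - 1
print(f"Compounded over two nodes: {gap_two_nodes * 100:.1f}%")
```

A single 15% node leaves you about 13% short of a "typical" transition, and two such nodes in a row leave you nearly 28% behind - which is why a below-target node gain matters more than it first looks.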

Giant said...

By the way, while we are on the subject of AMD's 45nm production, is this thing a dumb shrink, or what?


They added more cache and made a few tweaks. Expect performance to improve in the range of 5 -> 10%.

In other words, it MIGHT match the performance of Intel's 18 month old 65nm quad core CPUs!

As for those links, very juicy indeed! The X48 chipset is pretty underwhelming - it's just a slight upgrade to the already excellent X38 chipset.
The nForce 790i memory overclocking was incredible! Dual channel DDR3 running at 2.13GHz! If the nForce 790i overclocks well and performs well, I might pick up a board to replace my venerable P5B Deluxe. Another 8800 GT for SLI wouldn't go astray either given my latest new toy (30" Dell display!).

I think the Wolfdale review shows just how well these things perform. In the gaming tests the Intel CPUs were in some cases more than twice as fast as Phenom! I wouldn't run one of those Phenom CPUs even if it was given to me for free!

-GIANT

Ho Ho said...

giant
"They added more cache and made a few tweaks. Expect performance to improve in the range of 5 -> 10%."

Or to put it in other words, it is basically the same as Conroe -> Penryn, minus SSE4. Though Penryn also saw higher FSB speeds in some configurations.

InTheKnow said...

Or to put it in other words, it is basically the same as Conroe -> Penryn, minus SSE4. Though Penryn also saw higher FSB speeds in some configurations.

With one major difference. Penryn's power usage is quite a bit lower while providing the same performance. I don't see anything from AMD to suggest that their 45nm is going to see the same kind of drop in thermals.

JumpingJack said...

"By the way, as we are the subject of AMD's 45nM production, is this thing a dumb shrink, or what?"

Well, it is more than a dumb shrink by virtue of the increased L3 cache. Rumor mills around the net peg IPC gains at 10-20%, but this is probably unrealistic for a jump from 4 meg to 6 meg of L3 (that would bring maybe 5%, app dependent). IPC gains from increased cache diminish rapidly as cache grows, i.e. 256 KB to 512 KB gives bigger gains than 4 meg to 8 meg (as a rule of thumb, miss rate goes roughly as cache size to the -1/2 power).

So to get 10-20%, they will need to have done much more to the core.. widening it would be appropriate. However, looking at the die shot, the cores don't appear that much different... basically, I am skeptical of the 10-20% IPC ... I can rationalize 10-20% higher performance top bin vs top bin which also includes any increase in clock speeds.

It is a waiting game... it is possible, but I remain skeptical.
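JumpingJack's square-root rule of thumb can be back-of-enveloped. The 25% "stall fraction" below is a made-up illustrative number, not a measured Phenom figure:

```python
# Back-of-envelope for diminishing cache returns: miss rate ~ size ** -0.5.
# The 25% stall fraction below is a made-up illustrative number.

def relative_miss_rate(cache_mb, ref_mb=4.0):
    # Square-root rule of thumb: misses scale as cache_size ** -0.5
    return (cache_mb / ref_mb) ** -0.5

m = relative_miss_rate(6.0)   # ~0.82, i.e. ~18% fewer L3 misses for 4 -> 6 MB
stall_frac = 0.25             # assumed share of CPI spent stalled on L3 misses
cpi_new = (1 - stall_frac) + stall_frac * m
ipc_gain = 1.0 / cpi_new - 1.0
print(f"IPC gain: {ipc_gain * 100:.1f}%")  # ~5%, consistent with the estimate above
```

Even with a generous stall fraction, the 4 meg to 6 meg bump alone lands in the ~5% range, which is why the 10-20% IPC rumors would need changes well beyond the cache.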

Anonymous said...

http://www.fudzilla.com/index.php?option=com_content&task=view&id=6116&Itemid=1

(Legal disclaimer - it's Fudzilla, but there's an actual picture...so I give it a little more credibility than normal)

2.3GHz Tri-core model 8600... so much for the old 'disable the slow core and watch the tri-core clocks skyrocket' theory! If I use the ole Dementia forecasting through model #'s, where is the tri-core going to top out? An 8900, if it ever comes, would be, what, 2.6GHz? Or will there be 'black' and 'FX' editions (you know, if customers are demanding it!)?

Not quite clear what the price window for these will be. With dual core coming out and the current quads already fairly low, where are these going to fit?

Tonus said...

"I am skeptical of the 10-20% IPC ... I can rationalize 10-20% higher performance top bin vs top bin which also includes any increase in clock speeds."

I think that Ed at overclockers.com suggested that the 10-20% performance gain was taking into account the fix for the TLB bug, which generally hurt performance by 10-20%. In other words, they're getting back the performance that was lost when they applied the fix for the TLB errata.

Anonymous said...

"I think that Ed at overclockers.com suggested that the 10-20% performance gain was taking into account the fix for the TLB bug, which generally hurt performance by 10-20%"

Perhaps, but virtually all benchmarking has been done WITHOUT the TLB fix, so in essence people are adding back in something that was talked about but never really taken out. Realistically I see a 5-10% improvement, with perhaps some outliers here and there on certain benchmarks. Also keep in mind 65nm will also theoretically have the TLB fix with B3, so there is a lot of spin that can be done with the benchmarks...

I can already see some sites suddenly plugging the TLB fix in on 65nm B2 and comparing that to 45nm... if you want to compare 65nm to 45nm, I think you should look at 65nm B3 to 45nm. This is much like AMD's bogus process comparisons on strain, where they continue to quote performance vs an unstrained device instead of just comparing the performance to the previous generation.

SPARKS said...

Ho Ho, I really don’t want to make challenging your posts a cottage industry here, but I am compelled to ask: where on this planet do you see Penryn as just a relatively small improvement?

First and foremost, if you haven’t realized by now that Penryns are SERIOUSLY underclocked, then a quick review is definitely in order.

No one, not even the guys on this site WHO WORK for INTC, is going to tell me Penryn wasn’t sandbagged. Not when GIANT is cranking 4.275GHz 24/7 on a bread-and-butter E8300 out of a retail box! This is overhead Wrector Ruinz can only DREAM about at a full GHz less!!

If AMD had anything even close to Penryn’s CURRENT speed, thermals, and performance, INTC would have dropped the hammer and pumped up the volume last year (this is March!), no doubt. The point is…. they didn’t need to.

Further, if you want to make a fair comparison of actual performance between Conroe and Penryn, here’s what you do. Get a Conroe Extreme Edition and a Penryn Extreme Edition, mix in a good cooler, Vmax ‘em (GURU’s probably cringing), then clock the piss out of them (multiplier only) at the same FSB. THEN run the benchies.

(Don’t forget the thermals at the same speeds)

Then and only then will you get a true metric of the performance improvements between “Conroe->Penryn”. I apologize, but anything else is just bullshit.

As a side note, the good ole FSB everyone left for dead seems to be pushing some serious numbers, eh?

SPARKS

Ho Ho said...

sparks, my comment was about the IPC improvement. It had nothing to do with thermals or achievable clock speeds.

Anonymous said...

http://www.theinquirer.net/gb/inquirer/news/2008/03/05/amd-shows-45nm

Comical to say the least... I must say I appreciate the fellow readers here as they seem to be a bit more open minded...

I guess that'll teach me to do substance over style... if you look at the "AT adds" edit in my first comment apparently the INQ prefers people to have inaccurate information that readers can understand as opposed to potentially misunderstanding the actual facts. That'll likely be the last time I comment on that site (though I can't help but think Charlie read my comment and smiled - I believe he knows better but took the easy way out). And granted I was a bit over the top on the technical details, but I did feel a sense of vindication when I read the last comment by "B"...

Moral of the story - why let the dull, and potentially hard to understand, facts get in the way of a good story... After all, accuracy is a secondary concern to 'readability'.

- GURU (and no, my real name is not Joe, as Sparks will tell you, it's ANONYMOUS)

Anonymous said...

http://www.tomshardware.com/2008/03/06/intel_churning_out_100_000_45_nm_processors_every_day/

While Dementia may be saying Intel 45nm is having manufacturing problems, apparently he is ignoring the fact that Intel is turning out enough 45nm product in 4 DAYS to equal what AMD claimed was K10 PRODUCTION IN AN ENTIRE QUARTER (Q4'07)!

So if Dementia's claims of Intel having problems with 45nm are to be believed, what does that say about AMD's K10 production? He can do all of the funny math he wants, but I'd like to see how he compares market share and figures out how to equate 4 days of Intel's production to all of AMD's Q4 K10 production!

"Otellini said that Intel has shipped more than four million 45 nm processors so far since their launch late last year."

AMD was bragging about 400K K10s in a quarter... Otellini is talking 4 million 45nm products since November... Dementia can make up all the crap he wants about yields, theoretical production and use of only one 45nm fab... but welcome to the big leagues, boys... Intel just 10X'd your production while supposedly suffering from production issues and only using D1D... yeah, that D1D facility is small... it only 10X'd your new (K10) product volume from AMD's "high" volume fab!

Care to revise your theories Dementia?!?!?

"The executive estimated the development investment at about $1 billion, product development (Penryn, Nehalem, Silverthorne) at about $2 billion and manufacturing facilities at about $9 billion. Add everything up and you have $12 billion at the bottom line."

This is ~ triple the current market cap of AMD! Hello Dubai... we have some more stock to offer you... it has lots of upside (forget about the bloodbath you've taken on your initial investment)

"Meanwhile, Intel noted that 32 nm is also on track for a H2 2009 introduction. The technology will debut with Westmere, a 32 nm refresh of Nehalem."

Yup... AMD is closing the gap... just a matter of time...

Though in an attempt to be objective, I'm still skeptical about Silverthorne's and Larrabee's prospects (at least the first gen)

Orthogonal said...

SPARKS said...

First, and foremost, if you haven’t realized it by now Penryns are SERIOUSLY underclocked, then quick review is definitely in order.

No one, not even the guys on this site WHO WORK for INTC, is going to tell me Penryn wasn’t sand bagged.


Sand-bagged? How could you say that, that's the motherland you're insulting there. But seriously, I've tried to make sense of it and the cpu product division is obviously playing the game. I've tried to get someone in that department to come clean with me, but they're not talking. I guess they don't want some goober spreading information outside of their control.

Anonymous said...

Intel just 10X'd your production while apparently suffering from production issues and only using D1d... yeah that D1d facility is small... it only 10X'd your new (K10) product volume in AMD's "high" volume fab!


No, he'll just revise his statements to say that F32 made a large contribution to the die output and conveniently skirt his prior comments on the situation ;)

Anonymous said...

“Why David won’t ever beat Goliath again”

Them fanbois out there somehow continue to hold hope that AMD, with just one more stepping and a conversion to 45nm at the end of this year, will roar back to glory and again be on top of the world. I got news for those fools: ain’t going to happen. Sure, AMD with an improved design will hold the lead in a few segments, but those will be fleeting as the INTEL onslaught moves forward. Never again will AMD stand alone and ahead like they did a few years ago during the Prescott/Itanium bust days at INTEL. The CEO and his minions that rallied behind Prescott and Itanium are gone, and they got a new, hard-charging, ambitious group of executives that won’t make the same mistake as Craig and his lot.

Let’s look back at, say, the early 1990s: go to a chip conference and what did you see? You saw the likes of TI, Moto, National, IBM, AMD. Damm, even the Japanese firms like Toshiba, Hitachi, NEC and others were publishing good CMOS technology papers. Most surprising was the lack of any INTEL paper in those days. Spin forward to this year’s IEDM, and the two most significant papers were the INTEL paper and the TSMC papers. Notably missing were quality CMOS logic papers from IBM, and if you notice, none of the names that were tooting their technology horn in the 90s had anything of significance. The reason is that to play in leading-edge high-performance CMOS logic takes big bucks. It takes a long and far-reaching research effort that is working on things 8 to 10 years out, it takes a commitment to spend hundreds of millions a year during the 3-year final development push, and then it takes a billion or so to bring those next-generation products out. Then to recoup your investment you need a market that needs them chips by the hundreds of millions, and that takes something like a 4 to 10 billion investment in factories. All told, doing the 32nm node easily takes huge sums of money. INTEL is claiming about 12 billion total to do their 45nm. The increased complexity at 32nm (EUV) and other things like high-K/metal gate and ultra-low-K dielectrics will probably easily push the cost above 15 billion. Where the fuck is AMD going to find that kind of money?

So let’s say a David can do everything on a bit smaller scale; that still says AMD needs a handful of billions. Perhaps if that was all they were doing, maybe they could do it. But at the same time as doing the CPU, they got to do the graphics chips, the chipsets, and the multiple product lines. Oh, and manage their huge debt and limited cash flow. Sorry, it can’t be done. What is worse, they aren’t even even with INTEL; they are way behind. They are a good year behind on being transistor-density matched at 45nm. You’ll hear AMD make lots of noise about being on 45nm at the end of this year, and in some sense, in terms of transistor density, they will be matched to INTEL. But it will be lost on the AMD idiots that they are probably 2 years behind on performance/power. And they are a good full design cycle behind, with Barcelona just trying to catch C2D, let alone being ready to compete with Nehalem with their shrinks. Lost on the fanbois is that INTEL isn’t standing still and that AMD’s current track record has been horrendous. Why do you expect INTEL to fall on its face and all of a sudden the poor execution at AMD to become flawless? AMD is in turmoil: integration turmoil, losing money, losing people, under huge pressure. This is the kind of environment that produces misses like the TLB bug, and at 45nm there will be even more complexity to trip them up. More likely AMD hasn’t seen the last of the design and process misses, and you’ll be seeing a lot more “excuses” about transistor tuning from AMD executives. Or perhaps more farts from a Technology manager’s backside about low-K and the cost improvements of immersion.

Again, look at IBM and MOTO with PowerPC; with their huge deep pockets, they bowed out. Look at TI with SPARC and their huge DSP and analog gold mine; they bowed out. Then you had the likes of all the Japanese companies, with their huge internal consumption through their CE arms, all bow out too. All did this because they realized they couldn’t compete and make money. WTF does AMD think it has that IBM, TI, and the others don’t? Better technologists? Better designers? Better architects? Better marketing? More money? NO to all. The only reason AMD is still doing it is their management failed to find an alternative and viable business to pursue as an exit strategy. The smart get out when they know they can’t win; the stupid and the poor with no other options just keep plugging away till they are dead.

AMD is like the Japanese army on Iwo Jima: they got no choice but to go down and fight to the death, as they got nothing else. But like the US invasion force, it is simply too much. Like the Japanese, can AMD win? Only if INTEL shoots itself, but the top brass at INTEL appears to be less arrogant and more motivated than the last bunch that ruled during the late 90’s, so AMD is finished.

David will die soon, no miracle stories here. You can see Sharikou and Scientia got nothing to spew anymore from their backside.

SPARKS said...

“Sand-bagged? How could you say that, that's the motherland you're insulting there.”

Quite right, quite right, I beg your pardon. This was not meant as a criticism. Please allow me to explain, as I type on my FABULOUS, overclocked Q6600 @ 3.0 GHz.

You see, we can criticize our PARENTS, our BROTHERS, and SISTERS, it doesn’t mean we don’t adore them! Don’t anyone OUTSIDE the FAMILY, however, criticize them. They will be met with the wrath of ecumenical proportions.

For example, MOM might have deliberately missed a meal, for good reason in the long term. But the kids still screamed WHEN THEY DIDN’T GET THEIR SUCCULENT, JUICY QX9770 AT THEIR REGULAR FEEDING!!!! “NO macaroni and sauce on SUNDAY!!!!”

When DAD got us a car, it was a 440 Road Runner with DUAL QUADS! But, we still wanted the HEMI ROAD RUNNER!

When sis’s friends were 13 we thought, “what a bunch of DOGS”. YEARS later, we were drooling to get our grubby hands on one. It’s funny how a little time and anticipation evokes SUCH PASSION!

What I didn’t say is that this KID IS SPOILED ROTTEN and he wants MORE POWER! But the FAMILY isn’t ready yet, because they have such a commanding lead.

You see, this was a little bitching about the Family, “Don’t ever go against the family again”, said Michael Corleone.

DON ORTHOGONAL, CAN YOU EVER FORGIVE ME?

SPARKS

SPARKS said...

"http://www.theinquirer.net/gb/inquirer/news/2008/03/05/amd-shows-45nm

Comical to say the least... I must say I appreciate the fellow readers here as they seem to be a bit more open minded..."

First, when I read it, I knew who it was.

Secondly, the web trash that replied couldn't grasp the complexity of the draft. The INQ is infested with close-minded AMD pimps.

Finally, I cannot ever remember the INQ calling anyone "clever", ever.

In fact, I didn't think the word was part of their vocabulary until now, of course.

KUDOS!

SPARKS

Giant said...

Sparks, I thought you might find this interesting:

http://anandtech.com/video/showdoc.aspx?i=3256&p=1

It's truly quite a disappointment. Out of the four games they test, two Nvidia GPUs beat FOUR ATI GPUs half the time. This isn't even counting the fact that Nvidia will release the 9800 GX2 and quad SLI again soon, or the fact that you can add a third 8800 Ultra to Anandtech's tests.

As it stands, provided the nforce 790i overclocks well, I'll be picking up a 790i board from Gigabyte or ASUS, 4GB DDR3 1333 memory and another Geforce 8800 GT for SLI operation.

SPARKS said...

“two Nvidia GPUs beat FOUR Ati GPUs half the time.”


Well, I guess that’s it, AMD’s failure is complete. Then again, we all suspected this would happen last year when ATI’s top execs either bailed or were shown the door. Wrector’s hand and direction in the matter is quite obvious, as this is all too late, and all too little.

Further, looking back at the timeline concerning Barcelona (and the shape it’s been in up to now), you don’t need a degree in economics from HARVARD to realize where the money went during the first half of 2007. Can you imagine the internal chaos as they faced failure after failure? The handwriting was on the wall when Henri Richard bailed.

That said, it will take a lot for one company to pull off a top CPU AND a top graphics solution. The company will need deep pockets, an excellent R+D staff, time, some experience in the market, and a great deal of perseverance. Frankly, they need an easy target: SIS first, then on to the big-game safari.

I can only think of one company with juice to pull off this MAJOR coup, and it AIN’T AMD.

The 9800 GX2, unfortunately, is shaping up to be the weapon of choice here presently, for me, anyway.

In any event, I absolutely refuse to separate The Brothers Blue, QX9770 and X48 under any graphic card nightmare circumstance.

Thanks for the link, Giant

SPARKS

Anonymous said...

I'd hate to be the guys at Cray who had to negotiate terms with Intel after this happened.

Almost as much as I'd hate to be the guy at AMD who ever has to contact Cray about a strategic partnership in the future.

29 April 2008 01:56


InTheKnow said...
Sparks,

I have found your discussion with Scientia to be interesting to say the least.

He says:
I'm sorry but it is just very difficult for me to understand why someone would prefer the locker room/clique mentality of roborat's blog.

That is easy to explain. I can post what I want on this blog. No one will a) delete it, or b) cut it up and repost it in part with "responses" to the parts of my post that the blog owner deems fit to address.

Right, wrong, good or bad, I took the time to compose my thoughts and type them up. As long as there isn't an issue with the language or defamation of character, I don't see a need to remove posts.

He goes on to wonder
The really ironic part is that if I wanted to I could certainly make my blog very pro-AMD but I have not done that. It amazes me that the people who post on roborat's blog and Roborat himself are not smart enough to see that.

So now we insult my intelligence since I don't share his viewpoint. This is just a polite way of telling me I'm stupid for posting here.

As to being a pro-AMD blog, perhaps he should look at the people who post there now. AMD supporters one and all (okay, enumae would be an exception). That makes his blog no better than Roborat's. It is just a hangout for AMD fans. There is no longer a dissenting voice. He has killed it.

In my final post on his site, I told him that I had found that his blog was not a place for open and honest discussion. My post was quickly deleted, but removing my post does not change my perception.

I don't think anyone would accuse me of being an AMDroid, but I am interested in the opposing viewpoint. In that regard I find this blog a bit lacking, since the few souls (like HYC) who have ventured to post here get eaten alive. But even with its shortcomings, I've found it to be the best industry blog around.

29 April 2008 06:17


Anonymous said...
"In that regard I find this blog a bit lacking since the few souls (like HYC) who have ventured to post here get eaten alive."

Quite frankly, with some of the things he has said - he deserved to be "eaten alive"... if his statements were 'my opinion'... or 'my theory'... but when he concludes and compares things incorrectly, well it should be challenged. That said I do respect his posting here and not simply posting in a 'friendly' environment.

The difference between this blog and Scientia's is that Robo won't selectively filter and selectively edit the discourse. Selective editing, in my view, is worse than refusing to post a comment, as there is no way of knowing what is being taken out of context.

As for content, in my opinion, there is an absolutely huge chasm in expertise in the process and manufacturing areas on this blog - you have people who have both academic and practical knowledge in these areas and are not simply trying to interpret things seen on the web. In my view this is clear when you see predictions on things like clockspeed or TDP or launch dates based on the underlying fundamentals, vs what I see as largely empirical extrapolations on Scientia's blog. Things like solely looking at release schedules to assess technology differences between companies, or creatively interpreting Sematech presentations to fit a desired blog entry, illustrate a lack of understanding of what lies the next level down (in terms of info) from that data - which is needed to truly understand it and draw conclusions from it.

That said, there are some very good SW and architecture people on that blog (in my view, MUCH more so than here), but there seems to be a need for some of those folks, who are clearly out of their element in other areas, to try to convince people of AMD's prowess in the process and manufacturing area. You don't see the process or manufacturing folks here cashing in on their reputation to make claims in areas they don't understand.

What I like is when folks are open about what they don't know and don't try to pawn themselves off as experts in areas where they are not. That will always happen to some extent, but when you try to refute some of this on Scientia's blog it gets filtered - in my view this is due to fear of being viewed as less knowledgeable in certain areas, or just a desire to make AMD seem better or less far behind (though admittedly, I'm no psychologist and this is solely one of the anonymous robo-trolls' views).

I do find the 'I was very accurate in 2003-2005, but since Core 2 I have been less so' (I'm paraphrasing) evaluation somewhat amusing. One (or should I say 'we' to make it sound better?) could naively view this as Scientia's predictions are accurate when AMD is doing well...perhaps because he largely predicts good things about AMD and bad things about Intel. I'd suspect if Intel continues to do well and AMD struggles, Scientia's predictions will continue to be poor, but if AMD starts to do well the accuracy will pick up.

The difference with the blog here, is there is much less false pretense - many people who are fans, don't pretend to be unbiased objective posters and as many have pointed out the comments are posted regardless of ideology and don't get deleted if Robo doesn't personally agree with them.

29 April 2008 07:30


Anonymous said...
Robo - did the press releases state Cray was dumping AMD or simply that they would start using Intel in the future? It may not necessarily mean that AMD is being dumped, but rather Cray is hedging its bets and may go forward with both suppliers.

Realistically, for a company Cray's size and the area they operate in, it probably doesn't lend itself to this approach but from what I read on the web it was not clear this was a 'dumping' of AMD.

I also think the supercomputer list will evolve slowly even if Intel takes all of the Cray business. (It is also in my view not a good indicator anyway as it seems to be a lagging indicator).

To me, setting aside the HPC applications - what will be interesting is whether, with the growth in computing power and number of cores, 1P and 2P will continue to eat into the need for 4P+ servers. If you start talking about a 2P server with 8 cores in each socket, 4P may really diminish except in niche applications. (If I'm not mistaken, 4P+ is still relatively small compared to the 1P and 2P market.)

29 April 2008 07:45


Anonymous said...
What the heck?!?!
http://www.digitimes.com/mobos/a20080428PD219.html
(AMD desktop lineup revealed)

Some highlights:
- 'while the low-power 8450e (Tollman) will see production begin in the second quarter' You mean they are INTENTIONALLY starting these, or is this just when the wafers they expect to have yield problems on will start?
- 'The Phenom X4 9150e, which was originally planned to be launched in the second quarter, will not be available for orders until the third quarter, along with the 9350e. In the fourth quarter, AMD will launch another low-power CPU'

So 9150, 9350, 9450, 9550, 9650, 9750, 9850 and potentially a 9950... now also throw in some 0MB variants... Huh? 8+ products (probably at least 10) to cover the quad desktop space? Are you kidding me? Is it just me or is this insanity? you gotta think the top price is in the $250 range... with 10 products what are the price increments going to be?

"if the process goes smoothly, 45nm Phenom X4 CPUs should appear in the market by the end of November, added the sources."

Leaving AMD squarely a year behind Intel (or more if you consider actual process node performance) and this is with AMD running at breakneck speed to new tech nodes - I just don't see the closing of any gaps that others have foretold.

And it looks like 2.8GHz is the top potential speed through Q4'08 (ranges of 2.5-2.8GHz were given for the top 45nm SKU in Q4'08) with a 95 Watt TDP. The 95 Watt TDP is a bit of good news, as it is improved over the current 125 Watt top bin parts - though AMD is expecting to reduce this on 65nm as well, so it's hard to say whether this is a 45nm improvement or not.

29 April 2008 09:52


hyc said...

"In that regard I find this blog a bit lacking since the few souls (like HYC) who have ventured to post here get eaten alive."

"Quite frankly, with some of the things he has said - he deserved to be 'eaten alive'... it would be different if his statements were framed as 'my opinion' or 'my theory', but when he concludes and compares things incorrectly, well, it should be challenged. That said, I do respect his posting here rather than simply posting in a 'friendly' environment."

Obviously I don't know the facts behind AMD's decisions, so anything I said previously about their honesty/whatever could only be taken as "my opinion" or "my theory."

While, like any other person, I have obvious biases, I am no fanboy. As you folks have noted, if scientia or anyone else makes a statement that I suspect is wrong, I will call it out. I have no investment in Intel or AMD one way or the other; there are no sacred cows here for me.

When I make a wrong statement, I expect that to be called out too, because I'd rather learn the truth than stay ignorant. I might prefer a few less slings and arrows, but what the hell, I throw plenty of my own in other venues.

Ultimately what matters to me is software efficiency and performance. The largest deployments of my software run on SGI Altix - Intel Itaniums. For a few years there nothing else on the market could even approach them in terms of single-system-image scaling. Other folks can have religious wars about whether Itanic is a good thing or not, but what matters to me is that it solves an otherwise unsolvable problem for my customers.

There's an old joke that "there's nothing more dangerous than a computer programmer with a screwdriver." My degree was in computer engineering; I studied both hardware and software design in college but my last VLSI design course was more than 20 years ago and since then I've only kept up my software skills. I expect to be wrong more often than right in conversations in this crowd. (Thanks for delivering on my expectations...)

29 April 2008 10:53


Tonus said...
"The difference between this blog and Scientia's is that Robo won't selectively filter and selectively edit the discourse..."

That's really the only thing I don't like about the comments section on his blog. I agree with his deleting of comments that are mostly flames or trolling, but there are times when he deletes a post and then responds to the deleted post, and you don't know if he left any relevant parts out. Or if you *did* see the post before he removed it, you may wonder why he didn't respond to certain points.

I think it's a good idea to remove posts when people are being abusive or trolling, and then leave it at that. I think that people will either start making posts that just address issues and leave out the crap, or they will stop posting (and who will miss them?). But removing a post and then responding comes off as a suspicious act.

***

As for myself, I'm more interested in looking back and reading about why things have happened than in looking ahead. So much of the technical information is over my head, and lots of details are kept secret by the companies involved, which makes predictions difficult and questionable most of the time.

But I can usually follow the discussion to some degree and enjoy seeing the technical points being made, even if I don't know enough to dispute or support any of them. And since I'm mostly observing, I don't really have anything at stake. Nothing at stake, and I get to read interesting commentary. Win-win situation.

29 April 2008 15:09


Axel said...
Tonus

But I can usually follow the discussion to some degree and enjoy seeing the technical points being made...

The problem is there's practically no technical dialogue of significance anymore in the discussion section of Scientia's blog. You may have been following over the last year or so, but I think it's pretty clear that that section of his blog died months ago due to the excessive censoring / moderating. As has already been noted here, the bulk of the comments on that blog are now posted by ignorant anti-Intel zealots grasping at Scientia's flawed predictions as the last rays of light left amid AMD's darkening fortunes.

For me, Scientia's posts themselves have consistently been somewhat interesting to read (though laughably misguided and lacking common sense). It's the discussion section that has gone to total crap. A year ago the discussions were far more engaging and Scientia's moderation more lenient. Then as the accuracy of his predictions continued to sour in the second half of 2007 (e.g. K10 performance, significance of DTX, etc.), he became increasingly defensive and intolerant of dissent, leading to the current useless state of the discussion section. It's now nearly on the same level as Sharikou's.

29 April 2008 18:39


Roborat, Ph.D said...
Anonymous said...
Robo - did the press releases state Cray was dumping AMD or simply that they would start using Intel in the future? It may not necessarily mean that AMD is being dumped, but rather Cray is hedging its bets and may go forward with both suppliers.

The $250M DARPA contract, which should be prototype complete by 2010, will be coming out with Intel CPUs instead of AMD. Cray's direction for HPC systems has switched sides. I consider that fundamentally dumping one technology for another. Cray didn't say they will be Intel exclusive, but you must agree it's a catchy title.

29 April 2008 20:27


SPARKS said...
In The Know

You know me Bro, I call 'em like I see 'em.

BTW: UPS did not arrive today :( :(

SPARKS

29 April 2008 20:52


SPARKS said...
“The $250M DARPA contract that should be prototype complete by 2010”

Doc-

Since CRAY will be using Xeon chips in the interim, and Nehalem has been seen here and there up and running, can we assume they were at least impressed enough by its performance to make the swing to INTC? Further, would they use a four or eight core part for their specific needs? Will other manufacturers follow CRAY's move eventually?

What’s your take on Itanium with regards to Nehalem?

SPARKS

29 April 2008 21:31


Anonymous said...
Read Scientia's parting sentences here and judge for yourself: has he become Sharikou junior?

"The basic strategy involves replacing batch tooling with single wafer tooling and reducing batch size. AMD wants to drop below the current batch size of 25 wafers. AMD figures that this will dramatically reduce Queue Time between process steps as well as reduce the actual raw process time. Overall AMD figures a 76% reduction in cycle time is possible so a 50% reduction should be reasonable. Today, running off a batch of 25 wafers is perhaps 6,000 dies. Reducing batch size would allow AMD to catch problems sooner and allow much easier manufacturing of smaller volume chips like server chips. Faster cycle time means more chips with the same tooling. It also means a smaller inventory because orders can be filled faster and smaller batches mean that AMD can make its supply chain leaner. All of these things reduce cost and this is exactly how AMD plans to get its financial house in order"

This is a most funny line of thought, showing how desperately Scientia is stretching to spin something out of NOTHING!

AMD really doesn't have the option to replace batching. They are a small fry in the chip business with little leverage over tool manufacturers. Last I checked, all major process steps continue to be "batch." The largest buyers of tools also do huge volumes and thus will choose the right processing for the most cost effective manufacturing. AMD can talk till they are blue, but it is just noise from a mouse. It's AMD jumping and waving, trying to distract from the real issues. Everyone is working on cycle time and batching. Everyone is doing SPC, APC, APM, blah blah blah - but where everyone else is guarded, no one wants to give away their competitive advantage. It's funny that Doug Grose let it slip in one presentation what AMD considers good yield. What they judge acceptable would be judged dreadful by many others - similar to AMD's financial performance: dreadful!

Let's revisit Scientia's silly thoughts on batching.

1) Wafer transportation is done in FOUPs that hold 25 wafers. Using them for fewer than 25 wafers - say, 5 - would increase the number of FOUPs by 5x. That would fill the fab with FOUPs and overwhelm the automation system. Sorry, but unless AMD gets the whole fab automation tool set changed, they won't get much speed-up in tool-to-tool moves without busting the fab stockers and creating an automation bottleneck. You'd have one huge fab moving a bunch of nearly-empty FOUPs around.
Scientia, have you any clue how a modern fab works and what the constraints and considerations are in one?

2) All major tools are still batch. They come in two groups: ones that process in batch, and ones that process wafers singly but load/unload in batch, making true single-wafer station-to-station flow totally BS. They include pretty much the whole damn tool set: furnaces, rapid thermal anneals, deposition, etch tools, steppers, etc. Everything - so Scientia doesn't know WTF he is talking about. Again I ask: Scientia, have you ever even seen a semiconductor tool in action?

"Faster cycle time means more chips with the same tool." LOL here Scientia totally shows his stupidity again. You should just shut up and stay away from technology as you show again and again you have no clue. THe capability of the tool hasn't changed whether you do it batch or singular. Take a rapid thermal anneal tool, or a sputter with 4 chambers.

NOTHING has changed for the wafers batching or not. It still needs the same fixed time for anneal and or deposition.

Today these type of tools permit queuing two FOUPS so when one is completed the next can start with NO wait. The tool is so expensive that most factories already have them running full out 7x24. Single wafer or batch will NOT increase the number of wafers that can be processed by most tools in the fab. The capacity of a factory will NOT increase by a materially amount with faster cycle times. WTF is this idiot talking about? More spin control like Hector. Smoke and mirrors versus deliver the result. Might as well be walking thru an argument about why INTEL will go BK like Sharikou did.


"Allof these things reduce cost and this is exactly how AMD plans to get its financial house in order" AMD's problem has

little to do with Fab cost. It has less to do with the billion dollar plus factory not running efficiently or not. AMD is

trying to turn attention away from the most fundamental problem that they have.



AMD's real problem and one they refuse to admit they need to fix to compete with INTEL

It takes billions of dollars of R&D every year, for many years, to field a leading edge process merged with a leading edge design, ramping it to produce hundreds of millions of CPUs just in time to capture the billions in revenue and the high margins required to do that cycle again.

Right now AMD hasn't invested in the process, so they are stuck with billions of dollars of depreciating equipment that produces hundreds of millions of processors they have to sell at prices so low they can barely break even.

They try to cover up this fundamental chicken-and-egg problem with fancy words. Bottom line: today they don't have a high-end leadership product to set their ASPs across the product lines. Thus they take expensive new designs and fab them in expensive depreciating factories at commodity prices. This is totally bankrupt! Reducing costs won't fix it. This is like a commodity memory producer thinking he can produce more and more chips at ever cheaper prices to make up for the loss he incurs on every chip.

AMD can only fix its problem by getting a high margin product and a medium margin, high volume product. Today they have no product in that space. They make noise about 45nm coming by the end of the year. What is most funny is that their 45nm product at that time will be competing with the top end 65nm from INTEL at the bottom, while Nehalem and Penryn products command the premium to mid range and rake in the profits as AMD sucks more red ink.

Losing Cray is a death blow, everyone will now start the moving to Nehalem and thus AMD will lose their last high profit segment.

Tick Tock, Tick Tock - your time has run out, AMD.

If you look back you'll see all those that tried and failed, and they all had bigger bank accounts: Digital with Alpha, IBM with PowerPC, TI with SPARC and DSPs, the Japanese consortium with TRON, HP with PA-RISC. Yawn, it's so obvious; why are people so silly as to believe the AMD story will be different? Oh yes, because it's x86. But let's not forget they are only in the game because INTEL treats them with kid gloves, and the only reason anyone even believed they had hope had more to do with an INTEL screwup than any AMD execution or strategic brilliance. Now it's all over for AMD... Tick Tock, Tick Tock.

29 April 2008 23:02


InTheKnow said...
Sparks said...

Since CRAY will be using Xeon chips in the interim, and Nehalem has been seen here and there up and running, can we assume they were at least impressed by its performance enough to make the swing to INTC?

Cray isn't like Dell. They don't design a system in a couple of months and start shipping. It takes them 2-3 years to develop a new product. I'm pretty sure they won't be using the existing Xeon chips, but will only be using Nehalem.

I also saw something that indicated they would at least continue selling their existing designs based on the Opteron processor in the interim. I can't remember where the link is to that one offhand. I'll post it if I stumble across it again.

30 April 2008 00:15


InTheKnow said...
anonymous said...
It's funny that Doug Grose let it slip in one presentation what AMD considers good yield. What they judge acceptable would be judged dreadful by many others...

Link please! I'd like to see that! Or if all the evidence has been scooped up and swept back under the rug, I'd still like to see a number here.

30 April 2008 00:19


Roborat, Ph.D said...
Sparks said: Since CRAY will be using Xeon chips in the interim, and Nehalem has been seen here and there up and running, can we assume they were at least impressed by its performance enough to make the swing to INTC? Further, would they use a four or eight core for their specific needs? Will other manufactures follow CRAY move eventually?

the $250M contract is for concept development only, therefore the choice of multi-core CPU depends on what is available at the time of build. The original requirement in 2002 was for at least 8-core CPUs.

I would say that there are other considerations for Cray's CPU selection besides performance, one being the ability to scale and work with their existing interconnect technology. It is more AMD's unstable execution and poor roadmap that has made Cray look elsewhere. Of course, what Nehalem brings to the table - like using the multi-chip variant with the IGP as a possible accelerator - is definitely a bonus. The capabilities and guaranteed availability of Nehalem and Sandy Bridge in 2010 are just too good to pass up.

30 April 2008 01:37


SPARKS said...
Doc-

Thanks, I suspect we all knew this was coming after AMD's failure last year.

Minimum 8 cores, native. Impressive.

SPARKS

30 April 2008 02:31


InTheKnow said...
Anonymous, I'm going to play devil's advocate here.

1) Wafer transportation are done in FOUPS that are 25 wafers in capacity. Using them for less then 25 wafers, say even 5 would increase the number by 5x. That will fill the fab with so many FOUPS, and also overwhelm the automation system.

First, you've chosen an extreme example. Say that you want a batch size of 12. Now you've approximately doubled the number of FOUPs in the system. Still an impact but hardly 5x.

Also remember, the goal is to reduce cycle time. With a reduction in cycle time, FOUPs are spending less time in the stockers and more time in the tools. Since FOUPs aren't spending as much time in the stockers doing nothing, you are able to reduce the FOUP count in the factory at any given time.

So by choosing a smaller, but more reasonable, FOUP size based on the graphs in the Intel slide I posted earlier, I'd estimate this would only lead to about a 20% increase in the number of FOUPs in the factory.

Depending on loadings, this could be a bit tight, but still manageable.

All major tools are still batch. They come in two groups, ones that process in batch and those that process singular but load/unload in batch making true single wafer station to station totally BS.

It is true that the whole FOUP enters and leaves the tools together, but to pretend there is no difference between the two is at best disingenuous.

True batched tools do have a very real negative impact on cycle time. You have to hold lots on station until sufficient wafers accumulate to build a batch.

Then you have to move the wafers to the tool. This entails additional delays while the tool waits for the automation system to bring all the FOUPs to the tool. They don't start loading once the first lot arrives at the tool.

Finally, there is the scrap risk. Modern semiconductor tools have the capability to run self-diagnostics as they process the wafers. This allows single wafer tools to stop processing with only a wafer or two impacted. By the time a batched tool reports an error, you have multiple LOTS at risk. If those wafers are scrapped, you now need to start new lots of wafers, not just absorb a couple of onesie-twosie losses.

Smaller lot size = less risk/cost.

"Faster cycle time means more chips with the same tool." LOL here Scientia totally shows his stupidity again.

This is true in one sense but false in another. Faster cycle times can result in increased output by reducing the time that lots sit in front of a batched tool before processing.

No tool is assumed to run 100% of the time. Some amount of downtime is always built into the model. So improving tool utilization and/or availability is an excellent way to improve tool output. You basically redefine the model by reducing the time tools wait for batch quantities to be reached.

"All of these things reduce cost and this is exactly how AMD plans to get its financial house in order" AMD's problem has little to do with Fab cost. It has less to do with the billion dollar plus factory not running efficiently or not.

Inventory carries a very real cost. Intel was able to reduce their inventory significantly by reducing their cycle time. If AMD were able to improve their cycle time, they too would realize the cost savings this brings.
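The FOUP-count arithmetic above follows from Little's law (work in progress = start rate × cycle time). Here's a toy sketch of it; the start rate, lot sizes, and cycle-time savings below are all illustrative guesses, not real AMD or Intel fab data:

```python
# A toy Little's-law model of the FOUP-count argument above.
# WIP = wafer starts per day x cycle time (days); FOUPs in flight =
# WIP / wafers per FOUP. Every number here is an illustrative guess,
# not real AMD or Intel fab data.

def foups_in_flight(starts_per_day, cycle_time_days, wafers_per_foup):
    """Little's law: average wafers in the line, divided by carrier size."""
    wip = starts_per_day * cycle_time_days
    return wip / wafers_per_foup

baseline = foups_in_flight(starts_per_day=1000, cycle_time_days=45, wafers_per_foup=25)
# Suppose dropping to 12-wafer lots buys a 30% cycle-time reduction:
small_lot = foups_in_flight(starts_per_day=1000, cycle_time_days=45 * 0.7, wafers_per_foup=12)

print(f"baseline FOUPs in flight: {baseline:.0f}")    # 1800
print(f"small-lot FOUPs in flight: {small_lot:.0f}")  # 2625
print(f"increase: {small_lot / baseline - 1:.0%}")    # 46%
```

Note that with these particular made-up numbers the carrier count rises roughly 46%, not 20% - the outcome is quite sensitive to how much cycle time the smaller lots actually buy you.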

30 April 2008 02:34


SPARKS said...
Well gents the UPS delivered the baby, 7:30 PM EST

ALL systems go all we’re on the clock, 8:35 PM EST

GURU- This ‘top bin’ baby is cranking along at a mere 4 Gig, 3rd boot right out of the box. So much for your buddies, Q6600, Phenom comparison.

SuperPi=11 sec, TWICE as fast as Pheromones 22 sec @ an unstable 3.5

Mem bandwidth = 8403MB/s
Cache and Mem.=54.4 GB/s
Multimedia=51696 iT/s - it BLOWS away the Xeon X5482 by 20%
Mem Latency=64ns speed factor 85

The memory is a (stock) 1600 running synchronous with the FSB. It's rated for 1800.

Vcore 1.4125, automatic set by the motherboard
10X multiplier.
Air cooling, of course.

These are preliminary numbers. Nothing hardcore as yet, I am waiting for a drink of water.

Obviously these are 100% stable, with much more to spare. I’ll tool it around for week, just to get a feel. Time and H2O will tell.

Nice job fella’s, Thanks.

Giant- Stop F**king around, buy one.
Tonus- getting that itch in your back pocket yet?

SPARKS

30 April 2008 02:36


Anonymous said...
"Smaller lot size = less risk/cost."

Perhaps risk, but actual factory scrap does not correlate to batch size. When a scrap event does occur it may impact a larger number of wafers, but there are also far fewer scrap events on a batch tool.
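The trade-off being described - rare but large scrap events on batch tools versus frequent small ones on single-wafer tools - can be put in expected-value terms. The event rates and wafers-at-risk here are invented numbers purely for illustration:

```python
# Expected-value framing of the batch vs. single-wafer scrap argument.
# Event rates and wafers-at-risk are invented numbers for illustration.

def expected_scrap(events_per_year, wafers_per_event):
    """Average wafers scrapped per year = event frequency x event size."""
    return events_per_year * wafers_per_event

# Batch tool: rare excursions, but each one can hit multiple lots.
batch_tool = expected_scrap(events_per_year=4, wafers_per_event=100)
# Single-wafer tool: frequent small faults, caught after a wafer or two.
single_wafer_tool = expected_scrap(events_per_year=200, wafers_per_event=2)

print(batch_tool, single_wafer_tool)  # 400 400 - same mean loss, very different variance
```

With these assumed rates the mean loss is identical; the real difference is in the variance, i.e. how painful any single excursion is.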

As for the whole single wafer processing, small lot sizes - there are many proponents of this - AMD is not breaking new ground here... they see the threat of 450mm on the horizon which only the large volume manufacturers will be able to afford so they are looking for alternatives to compete from a cost perspective.

The problem is the best time to implement things like smaller lot sizes or switching from batch to single wafer processing is at the start of a new wafer size transition (and in fact you will see many of these changes come about if 450mm goes forward).

The problem with doing it after the start of a new wafer size transition is that you start to impact the reuse model of a fab (~70% of the equipment is reused from generation to generation). You will have significant impacts on the actual fab which are difficult to handle on the fly - there would be some automation changes and most likely substantial facility changes: things like waste line sizing, exhaust laterals, and chemical supply are all impacted if you are talking about a batch tool vs. a single wafer tool. This then has a cascading impact on other tools in the fab that share exhaust laterals (exhaust needs to be rather carefully balanced for the multiple tools you may have on one large lateral) or are on the same water loops (you may now have different pressure drops)... etc.

Then consider the impact to the equipment suppliers. Not everyone is going to implement these changes so you now force suppliers to support 2 different toolsets on the same wafer size, while also developing equipment for a new wafer size (450mm) and to some extent still support legacy 200mm equipment.

The natural breakpoint is on a wafer size transition as you have to buy all new equipment anyway and you are generally starting with new fab designs so you can plan the automation (lot size, etc) and facility impacts accordingly. You also have fewer design constraints so it makes it easier for equipment suppliers to come up with an optimal solution.

Finally, who's going to pay for new 300mm equipment development? Many suppliers are still trying to recoup development costs on the initial 300mm equipment. Many folks with multiple fabs and a lot of experience on existing batch equipment will probably not make the switch, so how big a market is there for this new equipment?

The AMD presentation is fine and it is consistent with many other presentations I have seen on cycle time improvements. The problem is there really is no coverage of the negative impacts of the approach - the benefits are touted, but there is no modeling of fab impacts, cost impact, financial impact to the equipment suppliers, impact on tool reuse and technician support, fab layout, sub-fab impacts, etc...

This is a nice academic study, but quite frankly that's the problem with it - it is largely academic. To make these types of changes you need full industry support and need to have an honest discussion of the negative impacts (and who pays for them). It'd be a different story if AMD and the little consortia listed were putting up some seed money, but clearly that 'ain't gonna happen'

30 April 2008 03:32


Anonymous said...
http://www.custompc.co.uk/news/602511/amd-next-cpu-architecture-will-be-completely-different.html

"AMD’s technical director of sales and marketing for EMEA, Giuseppe Amato, told Custom PC that ‘if I look at the next generation architecture of our CPU, then it will definitely not be, how can I say, comparable with the Phenom. It will look completely different.’"

Man... K10 barely out of the womb, and it's already starting to shift to, you should see our next generation... and distancing the next gen from the K10 design.

While I'm sure some will spin this as AMD's relentless pursuit of new and innovative approaches, others may see it as a lack of ability of the K10 architecture to carry forward.

30 April 2008 03:51


SPARKS said...
Electromigration, hmmm.

GURU- I’ve discovered that sleep apnea/insomnia and its derivations can be brought on by the following equation.

http://en.wikipedia.org/wiki/Black's_equation


MTTF = A · j^(-n) · e^(Q/kT)

Where:

A is a constant
j is the current density
n is a model parameter
Q is the activation energy in eV (electron volts)
k is Boltzmann's constant
T is the absolute temperature in K
w is the width of the metal wire (entering through the constant A)


WHERE MTTF IS FUGLY!

“The model's value is that it maps experimental data taken at elevated temperature and stress levels in short periods of time to expected component failure rates under actual operating conditions”

AH----the key words are----ahhh---STRESS and FAILURE!

“the Black's equation, is commonly used to predict the life span of interconnects in integrated circuits tested under "stress", that is external heating and increased current density, and the model's results can be extrapolated to the device's expected life span under real conditions.”


“under "stress", that is external heating and increased current density”


Nice. I’ll think about this every time I step up the Vcore .01 volts on a $1500 chip.
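For a sense of what Black's equation implies for an overclocker, here is a small numerical sketch. The values n = 2, Q = 0.7 eV, and the +10% current density with a 60°C to 80°C temperature rise are textbook-style assumptions, not figures for any real chip:

```python
import math

# A sketch of Black's equation, MTTF = A * j**(-n) * exp(Q / (k*T)).
# Comparing two operating points lets the constant A (and the wire
# geometry folded into it) cancel out. n, Q, and both operating points
# are assumed textbook-style values, not data for any real CPU.

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def mttf_ratio(j1, t1_kelvin, j2, t2_kelvin, n=2.0, q_ev=0.7):
    """Return MTTF(point 2) / MTTF(point 1) per Black's equation."""
    current_term = (j1 / j2) ** n
    thermal_term = math.exp(q_ev / K_BOLTZMANN_EV * (1.0 / t2_kelvin - 1.0 / t1_kelvin))
    return current_term * thermal_term

# Hypothetical overvolt: +10% current density, die temp 60C -> 80C.
ratio = mttf_ratio(j1=1.0, t1_kelvin=333.15, j2=1.1, t2_kelvin=353.15)
print(f"remaining lifetime fraction: {ratio:.2f}")  # roughly a 5x lifetime hit
```

Under these assumed numbers the interconnect lifetime drops to roughly a fifth of its original value, with the exponential temperature term doing most of the damage.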

This guy J. R. Black - was he in any way related to a guy named MURPHY???

Let me get this straight. You guys have a little channel in the substrate, you seed it, grow (sputter?) some lovely copper, then grind it down flush. You watch your corners and bends because ‘crowds’ gather there. Then, because of the ‘Black’ thing, you have to watch your widths - made uglier by capacitances if you go too wide. (It's no wonder AMD dropped the ball, all on SOI, no less. Time to start from scratch.)

Alright, spill it. How far during development do they take these things to failure?

Why isn’t there a data sheet that says, “Attention, MORON, we’ve tested this thing to ‘X’ voltage (and temperature), and if you keep f**king around at or past this point, you’re really gonna screw the pooch”?

SPARKS

30 April 2008 15:45


Giant said...
Well gents the UPS delivered the baby, 7:30 PM EST ...

GURU- This ‘top bin’ baby is cranking along at a mere 4 Gig, 3rd boot right out of the box. ...

SuperPi=11 sec, TWICE as fast as Pheromones 22 sec @ an unstable 3.5 ...

Giant- Stop F**king around, buy one.

SPARKS

Oh my! My finger is seriously close to the trigger! But $1489 at the Egg, how would I explain that one to my gf?

I've already bought Grand Theft Auto IV (truly excellent game, btw) for PS3 and a new speaker system this week, I'm pushing the envelope here! :-(

Congratulations on a fine purchase there sparks, certainly a MONSTER cpu, and one of the best motherboards one could hope to pair it with!

You've hit 4GHz very easily. Are you increasing the CPU multiplier, or the FSB to OC at this stage?

I have an eventual challenge for you as well Sparks, I've pushed my E8400 to a 515MHz FSB (2060MHz!) on the excellent EVGA 790i board. That gave me a clockspeed of 4.635GHz, on air no less. I wouldn't run the CPU at that speed for very long, but it was good for a few runs of SuperPi and 3DMark. (24/7 speed for me is 1780MHz FSB with 1780MHz DDR3, 4GHZ CPU clockspeed)

I'm sure all this talk of these crazy clockspeeds achieved on air must be driving the AMD fanboys mad, who continually link to a single person hitting 3.5 with WATER on a Phenom!

BTW, have you picked up an equally impressive video card to go with this monster CPU? I'd be very interested in seeing some 3DMark results for such a setup!

-GIANT

30 April 2008 15:54


Tonus said...
sparks: "Tonus- getting that itch in your back pocket yet?"

4GHz for starters, oh man...

I will have to start paying more attention to this stuff again. Memory timings, motherboard features, overclocking... buying a 3.x GHz chip and not OCing it now would just feel criminal.

Good thing I just bought a new TV, and don't have the inclination to spend any more money right this moment!

30 April 2008 17:03


SPARKS said...
Giant-
Tonus-

“You've hit 4GHz very easily. Are you increasing the CPU multiplier, or the FSB to OC at this stage?”

The 4 Gig run was done strictly with a 10X multiplier, with memory set at the board's natively assigned DDR3-1333 BIOS parameter. Incidentally, also listed in those options are DDR3-1600, *DDR3-1600 O.C.*, and *DDR3-1800 O.C.* I had to manually assign these parameters, but the board SAW the 1600 natively.

Subsequently, I keyed in the DDR3-1600 native setting and checked the latency; it went down to 57ns. That’s well within reach of an IMC.

There is an interesting option I have, frankly, never seen before. The frequency multiple can be increased or decreased by .5. I always felt that a full multiple was too much of a jump; ASUS has addressed this issue quite nicely.

“BTW, have you picked up an equally impressive video card do go with this monster CPU?”

Unfortunately, no I haven’t. I am still using the 1900XTX Crossfire setup, which really isn’t bad. The score I got with that setup along with the Q6600 was 11,490. With this chip the score increased to 12,857 - not too bad for a 2 year old setup. They’ve got some new things on the horizon in the interim. I really would like a substantial increase.

That said, the ATI purchase really turned the graphics industry sideways.

My next purchase will be that “electric cooler” we spoke of. GURU’s electromigration and carrier mobility abstracts have me pissing my pants. The next thing you know I’ll be wearing a dress and high heels.

With that in mind, that E8400 is absolutely beautiful, spectacular, in fact. I thought that Q6600 was something irreplaceable and unique. Man, was I all wet, it was only the beginning.

I’ll keep you posted as I develop a relationship with the new chip. Next stop, 1800 FSB, then back to 4 Gig and beyond.

SPARKS

30 April 2008 17:59


InTheKnow said...
Perhaps risk, but actual factory scrap does not correlate to batch size. When a scrap does occur it may impact a larger # of wafers, but there are also far fewer scrap events on a batch tool.

This is true, but if you were to break out scrap over a year, I'd bet the batch tools are way out front, even if you normalize the wet etch tools for the number of passes.

Since no-one is going to give that level of detail in the public domain, we will probably never know for sure. But my bet is that the batched tools are the largest sources of scrap in the factory.

01 May 2008 02:14


InTheKnow said...
I know there has been some question about how long it takes to get a wafer through the factory. It is a lot less than many people seem to think. Here is what Paul Otellini had to say.

It was legendary that our factory throughput times were close to 90 days for many, many years. We've cut that in half.

That puts fab time at just over 6 weeks.

01 May 2008 02:19


Anonymous said...
"But my bet is that the batched tools are the largest sources of scrap in the factory."

You'd lose money... Back in the 200mm days (0.5um, 0.35um) CMP was far and away the biggest source of scrap... nowadays it's different but not batch tools. Also many people tend to think mechanical failures (wafer handling inside the tool, etc) when they think of scrap, but that tends to be a rather low amount of the overall scrap.

Of course the excursions are painful - you have the potential to lose a lot of wafers at once but if you look at scrap per 1K wafers processed, you'd be surprised.

01 May 2008 03:49


Anonymous said...
Largest source of wafer scrap?

It varies widely over the many years I've worked in fabs. Sure, when a batch tool goes bad it can be a couple hundred wafers. On the other side of it, you generally discover it pretty quickly.

Single wafer tools even with the best of monitoring can result in many surprises that go undetected and result in much more costly scraps.

How fast a wafer moves is dependent on lots of things. If you balance a factory well you can get great cycle time. You could also choose to load up the factory, have wafers queued up at every operation, and suffer worse cycle times. Also, don't let it be measured in days; it really is about days per mask layer. INTEL could do 4 weeks for all I know, but if they have fewer metal layers than AMD, which they do, then it's an apples-to-oranges comparison.

01 May 2008 05:46


SPARKS said...
“24/7 speed for me is 1780MHz FSB with 1780MHz DDR3, 4GHZ CPU clockspeed”

This is interesting.

Although I haven’t had the QX very long, nor have I explored its absolute limits, I have found the same VERY comfortable point at 4.06 GHz.

I have, however, found the limit for air cooling:

From CPUz:

9.5 x 450= 4.275 GHz @1.408V, 1800 FSB, DDR3@1800 7-7-7-21 2T 2.0V

Sandra:

Processor Arithmetic= ALU 66835 MPS, SSE3 = 61753
Processor Multimedia= 549144 it/s, FP=267068
Memory bandwidth= 9576 m/sec!
(Now it’s clear why I waited for X48)
Cache + Memory Combined=65.47 G/s
32K blocks= 407 G/sec!
Latency=56ns
SuperPi 1M= 10 seconds!!!!!!

Obviously, both chips run cool (yours and mine) and there’s A LOT of headroom (a full GIG!), basically, on the first production run. Binning these chips (?), man, with the way these things run, it’s a shame to deliberately lock in anything below 2.6. It looks like INTC doesn’t have very much to throw away.


I suspected months ago that INTC sandbagged these chips when Barcelona fell on its ass. They were ready for Barcelona even if the son-of-a-bitch had comfortably hit the 3 GHz+ speeds they were howling about for a year. It simply had no chance, ever, against Penryn, right out of the gate. Look at that Pheromone at 3.5 Gig, a cherry-picked slab. The QX9770 pees all over it at well below stock speeds!

I don’t give a flying hoot what anybody says. INTC woke up and hit the floor running. If they don’t believe it, you and I have the evidence in hand to prove it.

E8400 @ 4 Gig
QX9770 @ 4 Gig

WITH NO MEANINGFUL DIFFERENCE IN THERMALS AT THESE SPEEDS!

That’s saying something, especially when I’m packing another set of jewels. Call it a pocket full of hafnium.

BTW: With all these runs, I haven’t had a lockup, boot failure, BSOD, or a failed Windows load, yet!

I’m going to back this gem down to 4 Gig and cruise around nice and comfortable 24/7, all on air.

HOO YAA!


SPARKS

01 May 2008 14:51


Anonymous said...
To InTheKnow: I looked through the two big updates from Gross and can no longer find the reference. It was widely discussed when the foil appeared in one of his big presentations to analysts, where he alluded to "acceptable" yields and referenced a number. We all took this as acceptance of the minimum lower limit by AMD management. It was a number that I think many companies would not consider acceptable. It's interesting that the two presentations I can find at the AMD site look slightly different than what I recalled, and now show only % yield or defect density plots with no scale. I recall looking at these in the past and not seeing those two plots. I suspect the sensitive page and reference has since been removed, or the presentation was completely pulled, and they have now put in the standard thing I also see from INTEL on this subject.

In the end, can we agree that AMD's success, or in this case total failure, in fielding a competitive CPU, and its complete failure to meet any success metric of a publicly traded company, has little to do with cycle time, efficiency, or the performance of AMD's factories? That is what I find so funny: AMD spends so much time talking about things that have no material bearing on the mess they are in, and improving them even by 20-50% won't improve matters at all.

To Intel’s credit, they talk about efficiencies too, and between similar productivity improvements and aggressive headcount reduction they have materially improved the bottom line, or so they say. That is relevant, as they have a huge cost structure, and reducing it will add directly to higher margins and more profits. INTEL already has credibility around its Tick-Tock design strategy, and their process technology and manufacturing leadership is without question among the best. Put all that together: investment, manufacturing leadership, technology leadership, and leading-edge products add up to a credible, positive business plan and a bright future.

Let's contrast that with AMD: EVERY frigging area there needs significant improvement for them to have any chance at all of turning a profit! They talk a lot of nonsense about these manufacturing efficiencies, but to be perfectly honest, if AMD had a 10% advantage in cycle time, in cost per wafer, in utilization, damn, in every manufacturing metric, they still would have bled red ink in each of the most recent quarters by a huge amount. Why don't they talk about the real fundamental problem facing them? The reason they don't is obvious: if they were to really talk about it, it would be clear how broken their business model is, and the stock would fall another 50%. That is why!

Scientia's and Sharikou's blogs are nothing but personal soapboxes not worth spending time even trying to post on; both have descended into incoherent excuse-mining to keep feeding their wet dream that AMD will somehow rise again to some glory.

02 May 2008 04:23


Anonymous said...
"That is what I find so funny: AMD spends so much time talking about things that have no material bearing on the mess they are in, and improving them even by 20-50% won't improve matters at all."

Look - AMD is going to have to continue to cut manufacturing costs to compete with Intel. The best-case scenario is they are 1/2 node behind Intel (schedule-wise), so they will be at a disadvantage die-size-wise, except through design innovation and efficiency (the 0MB L3 part, if it doesn't take a huge performance hit, is a good example of things they need and can do). Even when Intel launches a new node, AMD still remains that distance behind once you consider the aggregate mix of the two nodes. When AMD starts shipping 45nm, Intel will be ~50% converted; by the time AMD is 50% converted, Intel will be largely transitioned.

Intel's plan to cut cost is 450mm - while this will require incredible upfront investments, it will yield SUBSTANTIAL cost reductions (more so than any node transition). AMD will not be able to follow this roadmap unless bags of billions of cash start falling from the skies, so they need an alternative - thus the efficiency / asset smart / cycle time reduction plan.

There are two fundamental problems with this approach:
1) AMD does not have the industry clout (i.e. they do not buy enough equipment)
2) More importantly, any gains done on 300mm should carry over to 450mm so even if they get suppliers to work on this, AMD will gain no competitive advantage.

That said... AMD has to try something - short of outsourcing (which has other issues), what else can they do? Simply trying to stay on the same pace or out-execute Intel in this area is probably not a viable 'plan'.

The cycle time will probably give AMD more of an advantage in terms of flexibility and development times. Intel can always throw money at development to speed it up - you can run many new steppings in parallel - which is risky, but if you can afford the Si and the capacity to do this, it is a good brute force method - the more information turns, the faster the development. Shorter cycle times will increase the information turns during the development phase and potentially reduce the amount of capacity needed - this will have a larger relative benefit to AMD than to Intel.

02 May 2008 06:52


SPARKS said...
DOC-
In The Know-

I did a little research (please forgive me if you already knew this) on CRAY. I have a link below of the world top ten supercomputers.

http://www.top500.org/lists/2007/11

I was surprised to see the INTC Xeon 53XX-powered units in the 3rd, 4th, and 5th ranks. I’m not sure when the 53XXs were released; last year, I think. I think it was Clovertown (65nm). From what you were both saying there are years of development time, and yet these units have surpassed CRAY’s Opteron-based units, which are currently in 6th, 7th and 9th position. That was pretty quick in terms of development time, and time to surpass CRAY’s lead with the 2.4 Opterons installed.

Why so quick? Was the architecture already in place? Did they upgrade the way I did, by a mere CPU swap, and move up the HPC ranks on the cheap, if you will? Can you do this on these monsters? Additionally, CRAY put all their CPU eggs in the AMD basket; obviously HP didn’t (ranks 4 and 5). Couldn’t CRAY have done the same?

With this in mind, I am certain HP, INTC’s long-time partner, is ahead on the development lead with Nehalem-based systems, perhaps others, too.

I've got some SPEC numbers here. Nehalem makes my QX chip look like an i486 in comparison.

http://blogs.zdnet.com/Ou/?p=1025


SPARKS

02 May 2008 07:01


Anonymous said...
"It was widely discussed when the foil appeared in one of his big presentations to analysts, where he alluded to 'acceptable' yields and referenced a number."

With AMD we'll never know. Yield data is too sensitive to provide raw data, so the best you can get (in my view) is how Intel presents it - they show normalized data, but they compare one node to another so at least there is some reference.

AMD simply refers to expected yields or mature yields - maturity just means it has stabilized at a given level... the level could still be garbage! (I'm not saying this is the case, but you simply can't tell).

In the past, AMD has shown one node vs another, however they did a very subtle and important thing... they showed yield vs production volume (on the x-axis), instead of an actual calendar date or time.

What's the difference? Well if your yield is low, your production volume is also low so you can still show a fast improvement rate (vs volume) especially if your yields are low for some time. Or if your yield is low you may slow down the ramp which will lower the production volume and again show a potentially different yield/volume slope.

It would be remarkably simple to plot the data vs calendar date - by presenting it vs production volume they are also confounding the data with the various technology ramp rates (unless they are all ramped at the same rate).

This could be a very subtle, and not easily picked up, manner of tweaking the graphs. I'm not saying AMD is doing this intentionally - but by using volume instead of time, it limits the usefulness of the data.

Also AMD had a line called "mature yield" - another trick you could play is to have different mature yield targets for different nodes... (again I'm not saying AMD is doing this, but I don't know that they're not either). When Intel presents the data it is simply defect density so there is no possibility of 'tweaking' the data from node to node and is a far better 'apples to apples' comparison.
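For what it's worth, the volume-axis effect described above is easy to demonstrate with invented numbers: two nodes improving at the identical rate per quarter look very different when plotted against cumulative volume if one ramps slowly. A minimal sketch (all figures are made up for illustration, not real yield data):

```python
# Invented numbers only - this just illustrates why plotting yield vs.
# cumulative production volume (instead of calendar time) can flatter a
# slow ramp. Both "nodes" improve identically per quarter.

yields = [40, 55, 65, 72, 78]          # % yield by quarter (same for both)
vol_fast = [100, 200, 300, 400, 500]   # wafers/quarter, aggressive ramp
vol_slow = [10, 20, 50, 100, 200]      # wafers/quarter, ramp held back

gain = yields[-1] - yields[0]          # 38 points of improvement either way

# "Improvement per wafer started" looks roughly 4x steeper for the slow ramp:
fast_slope = gain / sum(vol_fast)      # 38 / 1500 wafers
slow_slope = gain / sum(vol_slow)      # 38 / 380 wafers
print(fast_slope, slow_slope)
```

Same calendar-time improvement in both cases, yet the yield-vs-volume curve for the slow ramp appears far steeper, which is exactly the ambiguity a calendar axis would remove.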

02 May 2008 07:10


Anonymous said...
Man - I just read Scientia's latest comment about batch processing and MFG and almost fell out of my chair I laughed so hard.

When it starts with:
"I'm not an expert on wafer manufacturing so if someone has more specific information feel free to provide corrections."

I guess what do you expect... instead of asking for more specific info, perhaps he should have just said - "if anyone has any actual info..." More specific?

And the stuff on batching from other folks is just comical - apparently notebook and server chips can't be batched together... well except primarily for litho, THEY CAN BE AND ARE BATCHED TOGETHER!

It's one thing to hypothesize and speculate, but for folks to just throw out random info not grounded on any sort of facts is just too funny.

I think the problem is some folks consider a batch to be a lot; others don't seem to understand that, with the exception of litho, most product types go through very similar process flows and can be 'batched' together (or run back to back on the fly, or what folks call 'cascaded').

Automation and controls have become so sophisticated that many areas can retarget and change on the fly, in real time, between lots... suppose for example you were polishing 1000A of Cu on one lot; you can take thickness measurements in real time and adjust the polish time for another lot that might need 2000A. You can also now factor in different polish, etch, and dep rates and adjust recipes on the fly between lots of different product types to account for differences like pattern density.

A lot of this stuff is done in house by many IC manufacturers, and I think the level of automation would surprise folks who have this cookie-cutter view of how the fab works --> process a lot, stop, measure it, see if it is OK, then adjust the tool for the next lot, then process...
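The on-the-fly retargeting described above amounts to simple run-to-run recipe scaling. A toy sketch (the removal rate and the thicknesses are made-up illustrative numbers, not any fab's real recipe):

```python
# Toy sketch of on-the-fly recipe retargeting between lots, as described
# above: the tool scales the polish time from each lot's Cu removal target.
# The removal rate here is a made-up illustrative number.

RATE_A_PER_S = 50.0  # assumed Cu removal rate, angstroms per second

def polish_time_s(removal_target_a):
    """Polish time needed for this lot's removal target (angstroms)."""
    return removal_target_a / RATE_A_PER_S

# One lot needs 1000 A of Cu removed, the next (a different product) 2000 A;
# the recipe time is simply retargeted between lots with no manual stop.
print(polish_time_s(1000))  # 20.0 seconds
print(polish_time_s(2000))  # 40.0 seconds
```

Real controllers also fold in measured incoming thickness and drifting rates, but the principle is the same: the recipe adapts per lot rather than the tool stopping for requalification.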

"Single wafer tools were created when particularly difficult processes needed to work on one wafer at a time but this was not ideal."

You know, I'm not sure I could make this stuff up! Actually, in many cases single wafer tools are ideal (even from a cost perspective!). Of course those litho batch tools were the bomb until the process got too difficult! Thank goodness for immersion - though part of the reason I think it took so long was that immersing the whole lot was tricky, so thank goodness single wafer immersion litho was created (I'm kidding, folks).

He then provides a link which talks about NGF (Next Generation Factory) and somehow attributes it to "here's what AMD has to say"..

the guy is so deluded into thinking this is an AMD concept that he didn't even bother to notice these are ISMI proposals! (International SEMATECH) AMD is one of many companies (including Intel, BTW) in this consortium... but apparently this is an AMD idea because he saw a different AMD presentation with NGF in it, and now anything NGF-related is an AMD thing.

"ISMI managers published a 19-point Next Generation Factory plan, with many of the changes starting in 300 mm fabs but expected to carry over to the 450 mm generation, whenever it arrives."

So apparently ISMI is now AMD, or perhaps he is confused and thinks this is the IBM fab club (it is not). If he bothered to click on the link in the article he posted he would see the company list, but apparently ignorance is bliss and he would rather just convince himself that "here's what AMD has to say".

Of course had he read the article he might have seen the part "The NGF program requires consensus-building and prioritization, both among the 16 devicemakers within ISMI and between the chip manufacturers and tool vendors."

So, how long before Dementia realizes this is not an AMD 'innovation' but rather a consortium effort (Intel included) of many IC manufacturers? I suppose when he finds this out (and realizes just about EVERYONE is working on this), he'll just dismiss it and move on to the next topic of disinformation.

If any of you patient folks care to explain this to him feel free to cut/clip/paste any of this.

03 May 2008 07:39


InTheKnow said...
Anonymous said...
Man - I just read Scientia's latest comment about batch processing and MFG and almost fell out of my chair I laughed so hard.

I considered posting a correction here myself, but I wasn't quite sure what he was trying to say until the follow-on posts. At that point it became clear that, among other things, there was confusion about what batching is. So I'll try to add some clarity.

The basic processing unit is, of course, the wafer. Wafers are started together in a FOUP (a fancy name for a plastic box with a door on the front of it). The contents of the FOUP are called a lot.

Most tools in the fab process a single wafer at a time. The exceptions to this rule are what we have been referring to here as "batched" tools. A batch is a group of lots that are all processed together in the process chamber at the same time.

With the definitions out of the way, let's move on to processing and efficiency. I'm going to try and explain this in a very general way, so it will be easy to find exceptions to what I'm about to say, but it should apply to the majority of cases.

The most efficient type of process is called a continuous process. In a continuous process raw materials are fed into the process in a continuous stream and finished products move out in a continuous stream. So the timing on your feed and your output are in sync. As an aside, if you want to see continuous processes in action, I'd recommend you watch "How It's Made" on the Discovery Channel.

When you first start a continuous process up, there is a lag while the process fills up with raw materials, so you need to keep the process fed constantly and minimize downtime to get the most efficient process possible.

Obviously, continuous processing lends itself well to liquid processing as there is not a discrete "unit" to feed in. Single wafer tools can come pretty close to this, but they need a buffer system to achieve this kind of efficiency. One buffer will hold and queue lots, so that as one lot finishes the other is getting prepared to start. Another buffer will store the completed product and load it into FOUPs once processing is finished.

You'll notice that the lots have to be staged in a buffer area both before and after processing. This introduces inefficiencies in product flow through the line that wouldn't be seen in true continuous processing. But the flow through an individual tool can be seen as continuous.

Since single wafer processing is the closest thing to maximum efficiency, you might ask "why batch?" The simple answer is long process times. Many deposition and/or film growth processes can take upwards of 20 minutes per wafer to complete. If you are processing in single wafer mode, you will get 3 wafers per hour this way. So your lot of 25 wafers will take >8 hours to process. This long process time leaves you with 2 choices.

First, you can buy a lot of tools. Let's say you buy 24 tools for an 8-hour process time. This would allow you to complete processing on a lot, on average, every 20 minutes. But 24 tools would cost a lot of money, and the cleanroom space is expensive as well.

The second option is batching. Batching entails a lot of inefficiencies, so the process times themselves are long. For our example, let's say that it takes the same amount of time to run our batched process as a single wafer tool would take to process a lot, or 8 hours. But now you put 4 lots in the tool at once. Your output is now 4 lots every 8 hours, or an average of 1 lot every 2 hours. It's pretty easy to see that with 6 batched tools you can get the same output as with 24 single wafer tools.

So you can choose a lot of tools (a huge capital expenditure) and a large ongoing cost in maintaining more cleanroom space, or you can accept inefficiencies in processing time and use batch processing.
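Running the hypothetical numbers above (20 minutes per wafer, 25-wafer lots, 4-lot batches per 8-hour run) as a quick sanity check:

```python
import math

# Sanity check on the hypothetical numbers above: 20 min/wafer single-wafer
# processing vs. a batch tool running 4 lots per 8-hour run. These are the
# example's illustrative figures, not real fab data.
LOT_SIZE = 25            # wafers per lot (one FOUP)
PROC_MIN = 20            # minutes per wafer on a single-wafer tool

lot_hours = round(LOT_SIZE * PROC_MIN / 60)   # ~8 hours for one lot, one tool

# Option 1: enough single-wafer tools to finish a lot every 20 minutes.
single_tools = lot_hours * 60 // PROC_MIN     # 24 tools

# Option 2: batch tools, 4 lots per 8-hour run = 0.5 lots/hour per tool.
target_lots_per_hour = 60 / PROC_MIN          # 3 lots/hour, same output
batch_tools = math.ceil(target_lots_per_hour / (4 / 8))   # 6 tools

print(single_tools, batch_tools)  # 24 6
```

Same lot-per-20-minutes output either way; the batch option just trades per-lot latency for a quarter of the tool count.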

The work that AMD (and Intel as noted previously) is doing is centered around trying to find ways to reduce these inefficiencies.

03 May 2008 15:31


InTheKnow said...
Sparks, I'm just a simple process guy. Designing HPC systems is way out of my area of expertise. However, I believe that there are Nehalem test chips out there. We've seen systems running on them.

I'd assume the development process would include giving Cray access to these chips to help establish operating parameters for their machine. From this, they can extrapolate X% improvement for the Sandy Bridge processor. They will also be working with Intel's engineers to ensure that required features are included in the design. As test chips for Sandy Bridge become available, Cray would be given access to those to validate the design.

Not a great answer for you, I know, but the best I can give.

03 May 2008 15:38


InTheKnow said...
Anonymous said...
In the end, can we agree that AMD's success, or in this case total failure, in fielding a competitive CPU, and its complete failure to meet any success metric of a publicly traded company, has little to do with cycle time, efficiency, or the performance of AMD's factories? That is what I find so funny: AMD spends so much time talking about things that have no material bearing on the mess they are in, and improving them even by 20-50% won't improve matters at all.

I fully agree that AMD's problems go well beyond running their factories efficiently.

However, if they could reduce factory costs by say 20% they probably could have turned a profit in Q4 last year and maybe even in this past Q1.

Their fundamental problems remain, but running in the black would certainly allow them to pull in capital (from, say, their friends in Abu Dhabi) to try and address some of the other issues.

03 May 2008 15:44


InTheKnow said...
anonymous said...
You'd lose money... Back in the 200mm days (0.5um, 0.35um) CMP was far and away the biggest source of scrap... nowadays it's different but not batch tools. Also many people tend to think mechanical failures (wafer handling inside the tool, etc) when they think of scrap, but that tends to be a rather low amount of the overall scrap.

Of course the excursions are painful - you have the potential to lose a lot of wafers at once but if you look at scrap per 1K wafers processed, you'd be surprised.

Yeah, as a guy in the trenches my focus tends to be on the excursions.

So I just ran some simple line yield numbers. If we assume 30K Wafer starts per month and a 95% line yield, that works out to 1500 wafers scrapped each month. I've seen some big excursions, but never a single event of that size. I also don't think I've seen that much scrap attributed to a batched toolset in a year, let alone a month.

Even if you assume a 98% line yield, the number of wafers scrapped each month in the factory is still 600.
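The line-yield arithmetic above in a couple of lines of code (30K starts per month is the comment's assumed figure, not real data):

```python
# The line-yield arithmetic above: wafers scrapped per month for a given
# line yield, using the comment's assumed 30K wafer starts per month.
WAFER_STARTS_PER_MONTH = 30_000

def scrapped_per_month(line_yield):
    """Wafers lost per month at the given line yield (0..1)."""
    return round(WAFER_STARTS_PER_MONTH * (1 - line_yield))

print(scrapped_per_month(0.95))  # 1500
print(scrapped_per_month(0.98))  # 600
```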

In short, you've made your point. The risk of large losses is high in batched tools, but the low incident rate of those scrap events offsets the large impact.

03 May 2008 15:52


Anonymous said...
AMD's long-term problem is that fielding a competitive CPU line, from high-margin server parts down to much lower-margin consumer parts, takes big bucks.
AMD could make money if they didn't compete with a big spender like INTEL, but they have a competitor with big bucks.

The reason ATI and Nvidia competed well is they had the same foundry resources and competed on design. Now that INTEL is going to get into graphics, Nvidia and their CEO like to talk big, but in the end they know their future is limited by INTEL. If INTEL executes, the graphics business will go to INTEL; it won't be a question of if, but when, and after how much money. This is no Itanium story; it's about whether INTEL has the commitment to stay in it and build the graphics drivers to go with their silicon hardware. If they do, ATI and Nvidia are finished in graphics.

In all these performance arenas, the competitor with the highest-performance leading-edge semiconductor technology and manufacturing capacity will have the "insurmountable" advantage. Designs are a dime a dozen; the silicon is a huge advantage. To compete, people need both to make anything competitive.

People who think they can go asset lite are talking out of their ass. Jerry had it right in some sense. Real CPU competitors need fabs to develop and manufacture at the latest technology node. Without this, AMD can have the best designs but will be handicapped by higher cost, slower performance and higher power. Being behind 1 year on cost and 3 years on performance isn't a very good business proposition.

The reason they can't go to TSMC or other foundries is they require leading-edge capacity starts of tens of thousands of wafers a week. If you look at the combined volume of TSMC, Chartered and the others, they don't invest enough on the leading edge to support the ramp AMD needs. Going asset light means AMD won't have leading-edge capacity and WILL guarantee their consumer products will be slow and not cost effective. It will limit them to only a few tens of thousands of high-end CPUs. Just look at DEC, SUN-TI, and IBM to see what that gets you in funding the silicon... you can't afford it.

AMD is BK in their strategy...

03 May 2008 16:36


SPARKS said...
“If any of you patient folks care to explain this to him feel free to cut/clip/paste any of this.”

I don’t think it’s possible. With the limited exchange I had with him, I have found him to take most disagreements to heart personally. In one of his recent replies to me, he freely admitted not ever working in a FAB.

With that in mind, during AMD’s past successes he became a self-proclaimed expert in the field; he could do no wrong, correct by default perhaps? By his own admission, he dropped the ball more often than not thereafter.

You guys, however, do this stuff to put food on the table, thereby challenging that authority with actual practical working knowledge and experience. He said he has been “less than correct”. From where I live, NO ONE can argue with actual practical working knowledge and experience; I don’t care if you pump cesspools.

The guy is angry, and resents you guys for undermining his authority on what he calls a "public" forum. You’ll never get through to him. Hey, with 800 lb. process gorillas ready to pounce (you guys), what else can he do to save face?

“In all honesty, the difference between roborat's blog and mine is that he encourages flames and I don't. He let's posters hide behind an anonymous post and act like children; I don't. I'm sorry but that is no improvement for roborat's blog.”

He doesn’t care to offer objective analysis from a practical perspective, or the freedom to let contributors express it the way they deem fit. That guy will never concede a point, and his “less than correct” statements are the evidence. His deletes and past denials are the proof.

I’m done.

BTW: You guys keep talking about FOUPs and batches. I tried to get a handle on this, but you never said how many wafers these things hold, and how many tools it takes to crank a completed wafer out. (Industry average for 300mm)

SPARKS

03 May 2008 23:20


Anonymous said...
1998 - SPC
2000 - APC
2003 - APM
2005 - LEAN
2008 - SMART
2009 - BK

04 May 2008 00:40


SPARKS said...
Touche.....LOL, LOL.

SPARKS

04 May 2008 00:48


InTheKnow said...
Sorry Sparks, I'm never sure where to assume the basic level of understanding should start.

A standard FOUP holds 25 wafers. The initiative we have been discussing is driving for smaller FOUPs. Batch size is variable and can be anywhere from 1 to 6 lots depending on the tools and process step.

Note that running a single lot is not very efficient, but sometimes the tools are run that way if there is a "hole" in the flow of wafers that would leave a lot stranded for a long time before more arrive. Some processes are sensitive to batch size and you have to hold lots to make a minimum size, but other processes are not.

As to the number of tools that are required to complete processing on a wafer, that rates right up there on the proprietary list with process flow and yield data.

To get a feel for what it takes though, see this image.

Each layer requires a tool to put down that layer. You can also figure there is a litho tool to image the wafer, an asher to remove the resist after you are finished with that layer, and a wet bench to remove any residue from the asher.

The image is old as it is still using Al interconnects and there are a lot of subtleties that I've left out with the flow above, but it gives you a rough idea of what is involved.

04 May 2008 05:47


Roborat, Ph.D said...
Scientia said:
You may see that as being a forum for free discussion; I see it as laziness on the part of the blog owner.

Funny, I can see how aligned Scientia is with Mugabe and the Chinese government when it comes to silencing dissent. I bet beating someone up is good because it's hard work and can be tiring, while democracy is for the lazy government that can't be bothered to shut people up! Honestly, where does he get his logic?


In all honesty, the difference between roborat's blog and mine is that he encourages flames and I don't. He let's posters hide behind an anonymous post and act like children;

Encourages flames... What can make people more inflamed than deleting their posts? It would be good for him to realise that the angry posts I get here are his own doing.

BTW, I wouldn't swap some of the anonymous posters here for the registered posters in his blog.

04 May 2008 06:31


hyc said...
My point still stands - even made up pseudonyms are still better than flat "anonymous". I don't need to know your name in real life, I just want to know that you're different from anon2 or anon3 or everyone else posting anonymously, so that I can keep track of a thread. And that to me is just a minimal token of respect for the other people you're conversing with. Otherwise we're all just shouting into a crowd.

04 May 2008 11:04


jumpingjack said...
" 1998 - SPC
2000 - APC
2003 - APM
2005 - LEAN
2008 - SMART
2009 - BK "

You know what is funny about this, other than the BK acronym... it's the use of the acronyms in a context that implies AMD invented these processes for manufacturing, or that no one but AMD uses them...

SPC is statistical process control, taught in any undergraduate statistics class, and used by almost all manufacturers of anything from diapers to potato chips.

APC is advanced process control, a generic term which refers to a means of statistically controlling any process output by examining the output and adjusting the input, or vice versa, adjusting the input based on some prior output.

APM is AMD's acronym that collectively refers to their process automation systems. However, there is nothing in the collection of those systems that is not part of the industry standards.

LEAN -- what the heck is this?

SMART -- again, what the heck is this... analyst have been trying to figure this one out for the past year.

Frankly, this is the only thing really frustrating about Scientia's blog... he speaks with such conviction that people often believe he is all-knowing, when in reality most of what he says is way off target, easily discovered by anyone who can type the 'www.google.com' URL.

04 May 2008 12:20


SPARKS said...
Jack I'm surprised at you, does a lowly electrician need to fill you in on this?

I've determined with my expertise in processing dynamics and engineering that:

LEAN-

Less Explaining Around Newsgroups

SMART-

Shifting Market Analysis Response Training

SPARKS

04 May 2008 14:25


SPARKS said...
In The Know-

Ok, you have these pods running around the fab loaded with twenty-five VERY expensive 300mm wafers. Let's say at various stages of the process one wafer in particular becomes unusable. Do the tools or the operators test that wafer and subsequently reject it at that point? How far down the line can a bad one go?


Further, does the whole line get bottlenecked at one area if a tool in its respective group blows a relay, motor, pump, circuit board, etc.?

What do they do when some poor bastard is trying to troubleshoot/fix this thing while the rest of the line is pumping along behind him, or worse, nothing is feeding out in front of him?

Do these guys sleep at night?

SPARKS

04 May 2008 15:18


enumae said...
If anyone is curious about AMD's FAB in Malta New York I found the...

Supplemental Draft Environmental Impact Statement

It discusses water, gas and power requirements for Fab 4X, along with construction timetables (measured from when they start, not from now), building sizes and cleanroom square footage.

All in all, it is pretty interesting to see what it takes to build and operate Fab 4X.


-----------------------------------


Also, if you would like to see siteplans for Fab 4X, aerial overlays etc...

Town of Malta (Luther Forest Technology Campus)

04 May 2008 18:09


Anonymous said...
"SMART"

AMD's clever cheats are able to get money from the Arabs and sucker people into continuing to believe in a business plan when they've got none. That is really "SMART": lose billions with no credible likelihood of ever being able to compete with your big rival, yet get people to buy your story hook, line and sinker.

But I'm smarter than that.

AMD BK in 2009

05 May 2008 01:16


Anonymous said...
Sparks, I'll take a stab at your questions:

"Do the tools or the operators test that wafer and subsequently reject it at that point? How far down the line can a bad one go?"

This varies considerably by both process step and IC manufacturer - it is a question of your chosen monitoring scheme. Ultimately the goal would be a rock-stable process that requires no metrology whatsoever, but that is not the real world (but perhaps the Asset Smart world?).

Sometimes an issue will go all the way through the line and not get caught until sort/test (basically testing and binning the chips). However, there are numerous 'inline' monitors throughout the process flow, where either a test wafer run before or after the lot is checked, or the production lot itself is checked. Many times an IC manufacturer will put test structures in the scribe lines to test problematic issues. The scribe line is used because this is the area where the slicing and dicing takes place, so it is not an active part of the chip that you could potentially damage. There are also 'non-destructive' metrology techniques where you can measure the active areas of the chip inline without doing any damage/contamination.

"Further, does the whole line get bottlenecked at one area if a tool’s in it’s respective group blows a relay, motor, pump, circuit board, etc.?"

This is classic constraint theory and is mitigated in several ways. First off, I do not know of any fab that runs without redundancy - meaning at least 2 tools to run any given step. This way if a tool goes down, the other tool can be used; this may limit the overall capacity, but at least you have some. The other thing that is often done is so-called 'swing tools' (this cannot be done in all areas of the fab): sometimes if a tool is down hard (meaning for a significant time), a similar tool used in a different step can be quickly converted to cover capacity on a temporary basis.

Finally, in Intel's case (or any other manufacturer with more than 1 fab), wafers can be packed, shipped and processed in a different fab until the hard down is addressed (this is a rather rare practice though). Herein lies the beauty of Intel's copy exactly approach - when you ship the lot to another fab, you know the tools there are set up identically to the fab you are shipping from, so the lot will get identical processing.

"What do they do when some poor bastard is trying to troubleshoot/fix this thing while the rest of the line is pumping along behind him, or worse, nothing is feeding out in front of him?

Do these guys sleep at night?"

Well, the managers pester the engineers or ops people, who then pester the technicians who are working on the tool. Generally speaking there is 7x24 coverage, which can address probably 90-95% of the issues. In the case of a new or uncommon failure, there are strict escalation protocols with the equipment supplier if the tool is down for more than 6 hours, 13 hours, 24 hours (the intervals vary by company). It is generally not very long before the equipment supplier's expert is onsite if the problem cannot be addressed by the team that is onsite/oncall 7x24.

Generally speaking these are not pleasant situations, especially if it is in a constraint area in the fab where you need every tool up to meet the fab output goals.

There are other areas in the fab where you may have 7-10 tools and some excess capacity, where it is a bit better (but still not pleasant). Suppose for example you need 7.3 tools to meet output, so you therefore buy 8. If one of those tools goes down hard, realistically all you are doing is dropping available capacity from 8 tools to 7 against a need of 7.3.

Now suppose you are in an area that needs 2.9 tools (and therefore you have 3). If you lose one of those tools for a while you are now in a world of hurt.
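The capacity arithmetic in those two examples is easy to make concrete (illustrative numbers only, not any real fab's):

```python
# Toy capacity model for a fab tool group: what fraction of demand can
# still be met when one tool goes down hard? Numbers are illustrative.
def capacity_margin(tools_installed, tools_needed, tools_down=0):
    """Remaining capacity expressed as a fraction of demand (1.0 = exactly enough)."""
    return (tools_installed - tools_down) / tools_needed

# Healthy area: need 7.3 tools, bought 8, one goes down -> ~96% of demand.
print(round(capacity_margin(8, 7.3, tools_down=1), 2))
# Tight area: need 2.9 tools, bought 3, one goes down -> only ~69% of demand.
print(round(capacity_margin(3, 2.9, tools_down=1), 2))
```

Same single-tool failure, wildly different pain: the small group drops to roughly two-thirds of required output, which is why the queue behind that step starts growing immediately.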

And then to address your other question: the wafers basically start piling up in the queue behind that process step. What is significant about this is that when you finally do get the tool back up, you process a bunch of those lots at once and effectively have a 'bubble' moving through the fab, which impacts areas downstream as well until you finally work the bubble out of the line.

The site experts are generally oncall 7x24 (some may work normal 5x8 shifts or the 3-day/4-day 12-hour shifts). In 'healthy' areas the oncall responsibility is rotated around. Again, this is generally speaking the second line of defense behind the FSEs (field service engineers) who are working in the fab (for most areas 7x24).

05 May 2008 04:16


InTheKnow said...
JumpingJack said...
LEAN -- what the heck is this?

LEAN is the latest corporate buzzword for a methodology to improve process flow. It is the basis for the book "Lean Thinking : Banish Waste and Create Wealth in Your Corporation".

Like most other systems of this sort I've seen, it seems to go too far. I can easily see this becoming part of a bureaucratic mindset that requires slavish adherence to a system whether it is applicable or not. It originated in the Toyota Production and Management System. You can read more about it
here.

05 May 2008 04:16


Ho Ho said...
I want this:
"The most amazing thing is that this machine costs just the same as a better standard PC, but has 24 cores that each run at 2.4 GHz, a total of 48GB of RAM, and needs just 400W of power!! This means that it hardly gets warm, and makes less noise than my desktop PC."

05 May 2008 06:33


Anonymous said...
"I tell you what - if I were Ballmer right now... I'd threaten to walk away and say 'wow, if he can get such great performance, perhaps we shouldn't take the company over' - and then when the stock crashes to the pre-takeover level, and crashes again when Yang misses his ridiculous Q2 numbers, Ballmer should step back in, lowball an offer and say 'how do you like me now?!?'" (comment Apr 23)

Fast forward to today - Microsoft walks away from the Yahoo deal after the Yang-er decided his company should have fetched $37/share.

So now instead of getting $31/share (actually MSFT increased it to $33 during negotiations), Yang will have to explain to his shareholders why the stock price is about to plummet to the low 20's. He had conveniently not set a stockholders meeting (so as not to have to answer to his stockholders?)... but I think he is required to do so or face serious repercussions (I think you can eventually get de-listed). Expect calls for Yang's resignation, calls for election of a new board of directors, and a potential avalanche of investor lawsuits.

Expect Ballmer to come back in another quarter or two with an offer in the high 20's (though I cannot predict whether he will say 'how do you like them apples?').

Looks like Jerry Yang just pulled a Hector*

* Hector (from Webster's online)

HECTOR
Function: noun
Date: circa 2006

1: one who screws up
2: botch, blunder
3: one who screws stockholders due to poor decision making and an overly active ego.

Also can be used as a verb, as in he really Hector'd that deal...

I have a new respect for Ballmer on this decision (though I'm not sure where MSFT's SW/OS division is headed).

05 May 2008 08:19


SPARKS said...
“Sparks, I'll take a stab at your questions:”

Thank you, excellent - that puts a lot of the pieces together, especially the single-wafer flow vs. batch operations discussed above.

“Here in lies the beauty of Intel's copy exactly approach - when you ship the lot to another fab, you know that tool is setup identically to the fab you are shipping from and will get identical processing.”

Whoa, great observation - one that didn't occur to me, anyway. This is seldom, if ever, mentioned in the 'pros and cons' of the 'copy exactly' debate, probably because a lot of people wouldn't get it anyway. Nonsense - with this kind of flexibility, personally, I think it would be stupid to take any other approach. Standardization of components has been the cornerstone of HV industrial production for over a century.

I saw the test structures that are sacrificed when the wafer is cut here. I'm sure there are proprietary methods to ensure quality control at the very early stages of production; if there aren't, there ought to be. Additionally, I'll bet there's a fixed dollar amount, determined by the corporate bean counters, for what it costs to get a single wafer through the snake. Going down the entire line ain't cheap, and a wafer is a terrible thing to waste.

http://www.tf.uni-kiel.de/matwis/amat/elmat_en/index.html

(Great site, by the way.)

This led me to a few more links where I found pictures of the vertical furnaces that heat the wafers in batches. They looked huge, complicated, and expensive. I figured on the redundancy aspect to keep things moving while repairs are made on the tools that go down. Some of the HV units queue up a number of FOUPs as part of their specifications, as opposed to the lower-volume R&D units. Given Dementia's single-wafer-flow argument and AMD's current execution, it may be to AMD's advantage to stay small.

“The site experts are generally oncall 7x24 (some may work normal 5x8 shifts or the 3day/4day 12 hour shifts). In 'healthy' areas the oncall responsibility is rotated around. Again this is the second line of defense generally speaking to the FSE's (field service engineers) who are working in the fab (for most areas 7x24)”

I can see (and I know) that this is a nice position to have, especially if you're a really good troubleshooter with an intimate working knowledge of the equipment's guts. I'd bet my shares in INTC these guys are "crackerjacks", and the outstanding ones are really in demand. There's lots of glory to be had when things are up and running quickly. Pressure, adulation, heroics, instant reward - for me this is an enviable position. I love glory; that's me, guts and glory.


“Well the managers pester the engineers or ops people who then pester the technicians who are working on the tool.”

I was right; they are poor bastards. Obviously, silicon rolls downhill, too.

Thanks again, (sigh) maybe in another life.

Very enlightening.

SPARKS

05 May 2008 13:29


SPARKS said...
"Fast forward to today - Microsoft walks away from"

You said it. That was the first thing I thought of when I read the announcement. Time to DUMP!!!! Yahoo.

SPARKS

05 May 2008 13:44


Tonus said...
ho ho, that helmer site is awesome.

05 May 2008 13:46


Comment deleted
This post has been removed by the author.

05 May 2008 16:58


Giant said...
This is interesting.

Although I haven't had the QX very long, nor have I explored its absolute limits, I have found the same VERY comfortable point at 4.06 GHz.

Yes, around 4GHz is perfect for the 45nm CPUs, both dual and quad (aside from the lower end quads that wouldn't hit 4GHz due to a FSB limit). Obviously the QX9650 and QX9770 are premium parts and are binned accordingly, so the power use is low and not all that much higher than my E8400 at ~4GHz. With a TRUE 120 equipped with a Scythe S-Flex fan the temperature under a full load has yet to exceed 50C.



I have, however, found the limit for air cooling:

From CPUz:

9.5 x 450= 4.275 GHz @1.408V, 1800 FSB, DDR3@1800 7-7-7-21 2T 2.0V

Sandra:

Processor Arithmetic= ALU 66835 MPS, SSE3 = 61753
Processor Multimedia= 549144 it/s, FP=267068
Memory bandwidth= 9576 m/sec!
(Now it’s clear why I waited for X48)
Cache + Memory Combined=65.47 G/s
32K blocks= 407 G/sec!
Latency=56ns
SuperPi 1M= 10 seconds!!!!!!

Obviously, both chips run cool (yours and mine) and there's A LOT of headroom (a full GIG!), basically, on the first production run. Binning these chips (?) - man, with the way these things run, it's a shame to deliberately lock in anything below 2.6. It looks like INTC doesn't have very much to throw away.


I suspected months ago that INTC sandbagged these chips when Barcelona fell on its ass. They were ready for Barcelona even if the son-of-a-bitch had comfortably hit the 3 GHz+ speeds they were howling about for a year. It simply had no chance, ever, against Penryn, right out of the gate. Look at that Pheromone at 3.5 GHz, a cherry-picked slab. The QX9770 pees all over it at well below stock speeds!

The Phenom was cherry-picked, and wasn't even stable at that speed. Eventually he settled for 3.4GHz at 1.58V! That would not be achievable with air cooling. Contrast that with you and me both running these hafnium-infused monsters at 4GHz+ on air! In terms of what Intel could release now, assuming a 1600 FSB, I predict that 3.4 and 3.6GHz quads would be possible, and up to 3.8GHz for dual core.

I don't give a flying hoot what anybody says. INTC woke up and hit the floor running. If anyone doesn't believe it, you and I have the evidence in hand to prove it.

E8400 @ 4 Gig
QX9770 @ 4 Gig

WITH NO MEANINGFUL DIFFERENCE IN THERMALS AT THESE SPEEDS!

That's right! The power consumption on this puppy is incredibly low at stock. Even overclocked to 4GHz the power consumption of the CPU is only around 100W; that's easily cooled with high-end air. Obviously, at 4.5GHz and beyond we're pushing the CPU to its limits, so the power consumption is too high for 24/7 use without water IMO.

BTW: With all these runs, I haven’t had a lockup, boot failure, BSOD, or a failed Windows load, yet!


I've had one lockup, that was when I tried to reach 4.5GHz on the P5B deluxe. The northbridge was just running too hot for a 2GHz FSB. As I described in an earlier posting here, I attached a 40mm SilenX fan to it and that reduced the temperature considerably, I had no problems after that. The 790i has been a SUPERB board to me, I've had no issues; none at all.


I’m going to back this gem down to 4 Gig and cruise around nice and comfortable 24/7, all on air.

What sort of bus speed are you running there, and what speed are you running the DDR3 at? As I've mentioned before, 1780MHz works perfectly for me. A beautiful 4GHz clockspeed, 1780Mhz FSB and dual channels of DDR3 at 1780MHz a piece!

-GIANT

05 May 2008 17:03


SPARKS said...
GIANT-

If there was any doubt about the consistent quality of these chips - the entire lineup, their speeds, and the way they overclock - this should dispel it without any reservation.

E8300
E8400
And the currently on sale mega bullet,
E8500 @ 3.16 (I was tempted to buy one of these sweeties for shits and giggles)

They all clock, and clock well! Really, think about it - INTC's standard on binning these things must be pretty high before they lock in those multipliers. It makes you wonder if the relative price structures are based on feature sets, as opposed to speed bins. INTC is only competing with itself here, especially with a dual-core solution.

When INTC revealed 45nm hafnium transistor technology as the biggest improvement in twenty years, the press reception generally varied from a yawn to a beer fart. What the knuckleheads fail to realize is that this process will be the foundation for the next-generation architecture with an IMC pumped in. Imagine these chips on steroids? Man, the thrill is back, big time, and the hits just keep on coming.

As far as my setup is concerned, overclocking this GEM was painless and a no brainer.

From CPUz :

9.0 X 450= 4050 MHz

Vcore 1.3975

On this board you set the memory parameter to *DDR3-1800 O.C.*
You set the option to strap the memory to the FSB and you're done!

450, quad pumped, at the memory gives you DDR3-1800; again, this is all factored in by the MOBO automatically. Everything runs synchronous, just like I like it.
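For anyone following along, the numbers here are just bus arithmetic; a quick sketch using the settings quoted above (core clock = multiplier x base clock, the FSB transfers four times per clock, DDR3 run synchronously lands at the same effective rate):

```python
# Overclocking arithmetic: core clock from multiplier x base FSB clock,
# and effective FSB transfer rate from "quad pumping" (4 transfers/clock).
def core_mhz(multiplier, fsb_mhz):
    return multiplier * fsb_mhz

def effective_fsb(fsb_mhz, transfers_per_clock=4):
    return fsb_mhz * transfers_per_clock

print(core_mhz(9.0, 450))   # 4050.0 MHz -> the 4.05 GHz quoted
print(effective_fsb(450))   # 1800 MT/s -> matches DDR3-1800 run synchronously
```

The earlier 4.275 GHz run is the same formula with a 9.5 multiplier at the same 450 MHz base clock.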

So much for the idiots who complain about the high prices of premium MOBOs - F**K 'em. Ya get what you pay for, I say, in spades.

Besides, I used to spend a lot more money on things that could have gotten you thrown in jail, and that includes booze! That said, what's an extra 150-200 bucks? That used to be one night out in a club, easy, back when 200 bucks meant something!

The SuperTalent 'Project X' DDR3-1800 memory gamble I took for $379 paid off huge. At these speeds it's cold - not warm, not cool, just drop-dead cold. (After the CPU cold-water solution, I may purchase another set. However, stability concerns surface when running 4 discrete DIMMs at high speeds, as opposed to two 2GB modules.)

I set the timings manually at the manufacturer's recommended 7-7-7-21.
The voltage was manually set at the recommended 2.0V

Any higher speeds will necessitate looser timings - 8-8-8-24, give or take on any individual parameter, stability dependent - when locking in the *DDR3-2000 O.C.* option in the BIOS.

I’ll trade a few nanoseconds in latency for the looser timings and higher speeds. I haven’t gone there ---- yet.

All said, this 4 GIG synchronous solution was basically a no-brainer. And to think, last year I was plodding along at a 1066 FSB. Now that's what I call leaping ahead.

SPARKS

05 May 2008 19:31


Anonymous said...
"Additionally, I’ll bet there’s a fixed dollar amount, determined by the corporate bean counters, cost wise to get a single wafer through the snake."

I've been involved with some cost modeling, and while there are generally specific cost targets (per wafer processed), I've come to the conclusion that it is impossible to measure accurately. There are simply too many fixed costs (building, equipment) and costs shared by the entire fab (fabwide facility costs, metrology, automation, service, headcount...) that are as significant as, if not more than, the true variable costs (actual silicon substrate, chemicals and gases, waste, etc...). So the best you can do is have a modeled/average number.
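A back-of-the-envelope version of that modeled number (every dollar figure below is invented for illustration): allocate the fixed and fab-wide shared costs across annual wafer starts, then add the true variable cost per wafer.

```python
# Toy per-wafer cost model: fixed and shared costs spread over wafer starts,
# plus direct variable cost. All dollar figures are hypothetical.
def modeled_cost_per_wafer(fixed_annual, shared_annual,
                           variable_per_wafer, wafer_starts_per_year):
    """Modeled/average cost of one wafer through the line."""
    allocated = (fixed_annual + shared_annual) / wafer_starts_per_year
    return allocated + variable_per_wafer

# e.g. $900M fixed (building, depreciation), $300M shared services,
# $500 direct materials per wafer, 1M wafer starts/year.
print(modeled_cost_per_wafer(900e6, 300e6, 500, 1e6))  # 1700.0
```

The point of the comment stands out in the structure itself: the allocated term dominates and depends entirely on how you choose to spread the fixed costs, so "cost per wafer" is only ever a modeled average.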

As for the VDF (vertical diffusion furnaces), surprisingly they are no more expensive than an average piece of fab process equipment.

Finally, copy exactly has its downsides too - once you enter volume manufacturing it pretty much discourages changes, as you now have to proliferate any change across a huge fleet of tools. Though some would argue that is exactly what you want in the manufacturing stage: minimal risk, and only insert a change if there is a huge ROI. For engineers (and suppliers who want to implement their latest and greatest changes) it is disheartening, but one minor screwup in implementing a change can quickly kill the benefit the change had in the first place.

05 May 2008 20:42


Anonymous said...
One add:

"As for the VDF (vertical diffusion furnaces), surprisingly they are no more expensive than an average piece of fab process equipment."

And this is the fundamental problem with the whole single-wafer processing move. Sure, single-wafer processing has some cycle time advantages, but when you consider that the furnaces (or the wet etch benches, for that matter) cost as much as single-wafer equipment yet may have as much as 2-5X the output capability per capital dollar spent, what would you do?
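That tradeoff is easy to quantify (throughputs and prices below are hypothetical, chosen only to match the 2-5X range mentioned): if a batch furnace costs about the same as a single-wafer tool but pushes several times the wafers, its output per capital dollar dominates.

```python
# Compare wafers-per-hour per capital dollar for batch vs single-wafer tools.
# Tool prices and throughputs are hypothetical illustrations.
def output_per_dollar(wafers_per_hour, tool_cost_dollars):
    return wafers_per_hour / tool_cost_dollars

single = output_per_dollar(30, 3e6)    # single-wafer tool: 30 wph, ~$3M
batch = output_per_dollar(100, 3e6)    # batch furnace: 100 wph, similar price
print(round(batch / single, 1))        # ~3.3x the output per capital dollar
```

Under these assumptions the batch tool delivers over three times the output per dollar, which is the commenter's point: the cycle-time benefit of single-wafer processing has to be worth that capital penalty.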

05 May 2008 20:49


SPARKS said...
"I've come to the conclusion that it is impossible to measure accurately"

Coming from you (?!?!), that’s saying something! Kudos for even giving it a shot! I’ll bet it took months.

"Though some would argue that is exactly what you want when you enter the manufacturing stage - minimal risk."

"cost as much as single wafer equipment but may have as much as 2-5X the output capability per capital dollar spent, what would you do?"

Factoring in these two comments, I’ll answer your question; I'll tell you exactly what I did and what I am going to do.

1) Buy a $1500 behemoth-----done!
2) Buy some more INTC------ this week!


I love INTC's conservative approach - "minimal risk". I've seen too many loose cannons screw up too many times, reinventing the wheel midstream.

SPARKS

05 May 2008 21:33


Anonymous said...
This "LEAN" is old news. I work at an Intel fab and we've been using this for several years already. We call it something different, but it's basically the same thing Toyota started with "Kaizen" a while back. I personally think the whole concept looks great on paper, but in real-world practice it's not that practical. It just makes management think they have better control of the floor.

Anonymous said...

I think the problem that many are facing is that after months of posts that were looking forward to the release of AMD's Barcelona processors, the rollout has been very disappointing. And it's hard to continue to make posts about how things will improve after the most recent events. If AMD can get things rolling again (Phenom @ 2.6GHz and faster, soon) then there will be more optimism and reason to speculate with some hope.

I think that AMD burned a lot of its loyal supporters this last couple of months. Not being able to catch up or keep up with Intel, under the circumstances, is not such a bad thing as long as they were being honest about where they stood. But the many broken promises and mess they made leaves many people with a sour taste in their mouths.

If you can't beat off then there are many here who will help you.
"Perhaps AMD should stop spending time and energy trying to sue others and blaming others for their failures."

I tend to agree on the current set of problems. AMD and/or governments have the right to pursue this for potential past transgressions, but clearly the 'monopoly' is not the cause of AMD's current problems. I think AMD's mgmt conveniently intermixes the two, hoping folks won't notice the distinction.

My question is this though... is the EU looking for recent stuff (within the last 2 years say) or older stuff? And what exactly is the SPECIFIC damage done to the EU? I can see AMD trying to make a claim (if anything is proved) but what about the EU? If rebates or whatever is deemed anticompetitive then I can see how this impacted AMD, but if the prices that consumers were paying were still low and competitive (like they are now), how exactly is the EU consumer injured? These are not European companies. Does the EU fine companies who sell clothes in the EU that are done by labor at ridiculously low salaries?


We all know that Intel fans are the biggest cock suckers of them all and just as long as we get what we want everyone else can go fuck themselves.

The truth is that this board really sucks. The main people who come here are Intel employees who just want a chance to stroke themselves anonymously.

If the issue is that AMD was injured, than AMD (NOT THE EU) should be the ones pursuing this as they are doing in the US.

This, to me, is disingenuous. If the argument is that consumers got hurt, then I eagerly wait to see if the EU distributes checks to all those who purchased a computer in the time period they allege (should they actually levy a fine). Somehow, though, I don't think we will see that (call me a cynic).
