"Advanced Micro Devices Inc closed a deal to spin off its manufacturing operations on Monday, and said it expects the new company to assume responsibility for paying off about $1.1 billion of debt.
The plants which make AMD's chips are now part of a $5 billion joint venture with Advanced Technology Investment Co, of Abu Dhabi, temporarily called The Foundry Co." - Reuters
In a very creative way, AMD has rid itself of its crippling debt and the massive burden of capital investment going forward. While AMD may make it sound as if this strategic move brings them closer to their core expertise, there is no doubt that this backed-into-a-corner decision was the only way for AMD to remain viable. This new lease on life allows AMD, maybe for a few more product lifecycles, to continue as the only challenger to Intel.
At the bleeding edge of semiconductor technology, it remains to be seen whether a fabless company can challenge one with a foundry. In less bleeding-edge segments such as memory, companies with their own fabs, like Samsung, dominate the rest of the industry, yet competition remains vibrant. But in the x86 space, where process leadership creates cost and performance advantages, history isn't kind to fabless companies. Starting this week, AMD is effectively what Transmeta was back in 2000. The difference is that Transmeta had a lot of hype going for it and probably a more compelling product offering in the mobile space.
3.03.2009
"At heart, we're a reverse engineering design company" - AMD
966 comments:
Maybe AMD should change its name to AIG and show up on Capitol Hill with hat in hand?
http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2001/10/04/BU153179.DTL
http://www.businessweek.com/archives/1994/b336675.arc.htm
Well, that's that. Anybody wanna guess what INTC's next move will be? Hmmm?
SPARKS
Boy was that a trip down memory lane. I remember when Cirrus Logic and Trident were the first names that came to mind when I thought of 'great graphics cards.' And PCI graphics cards simply BLEW AWAY anything that came before.
16-color platform shooters, anyone? Commander Keen! Ha ha ha! Good stuff.
Tonus, how about this one. Do you recall the intermediate graphics solution called the VESA Local Bus (VLB)? They put a little add-on connector on an ISA slot; I think it had a 22-pin pinout. The solution was offered just before PCI hit the streets. I had a Diamond graphics card, FULL 32 BIT (whoa!), with an S3 chip.
http://en.wikipedia.org/wiki/VESA_Local_Bus
It performed well for its day. I loved DOOM2, especially the 10th level. I ran a 486DX4-100, officially called "Overdrive". I still have the chip in my collection.
INTC part # DX40DPR100. The integral heat sink was huge for the time, with its little screw-on fan. Attached to the chip was a discrete component that looks very much like a VR (voltage regulator).
http://www.cpu-museum.de/?m=Intel&f=Overdrive+CPUs
It's the 11th one down on the page, and the last 486DX built before the Intel Pentium 60MHz.
http://www.cpu-museum.de/?m=Intel&f=Pentium+P5
I have both P54C types, with and without the large heat spreader.
SPARKS
This move, while vital to AMD's survival, could hardly be considered 'ridding them of their debt': $1.1 billion of debt was transferred to GlobalFoundries; the other ~$4 billion stays on AMD's books.
I thought that TG Daily had a pretty good analysis of the Intel-TSMC deal yesterday. Unlike many other analyses out there (including one on Ars Technica, who usually do a better job), this one looks past the bogus "Intel can't make money on Atom" remarks and gets to what I think is the heart of the issue.
Intel gets to have customized designs done by TSMC for a number of volume customers, a service which Intel does not generally do all that well, and is not set up to do efficiently.
I think that is true at present, but will need to change for Intel long term. Intel is looking for 1/3 of their future revenues to come from embedded/CE type devices according to the presentations they have given. I don't think they want to depend on foundries to manufacture 1/3 of their revenue stream long term.
It is a good deal for now, and they may want to always use foundries for smaller customers. But I think at some point they will need to re-purpose some of their capacity to address this portion of their business.
"Step" aside all ye nay sayers. There's a new Boss in town by the name of i7 975, and he ain't playing.
The Boss has new power.
http://www.guru3d.com/news/core-i7-975-extreme-spotted-and-breaks-world-record/
Hoo Ya
SPARKS
INTC is absolutely relentless. 32 up and running baby!
http://www.fudzilla.com/index.php?option=com_content&task=view&id=12420&Itemid=1
SPARKS
Love him or hate him, LEX may be right about NVDA. Obviously, they've hit the "thermal wall". Enter Larrabee!
http://www.fudzilla.com/index.php?option=com_content&task=view&id=12382&Itemid=1
SPARKS
"temporarily called The Foundry CO."
This spin is so appropriate to the times. We are all familiar with the recent financial meltdown, where people repackaged loans made to people who couldn't afford to pay and sold them to unsuspecting investors who had no clue what they were buying. In the end it changed little: the people who couldn't afford their homes still couldn't, regardless of the fact they got the loan. It all ended as it had to, with people losing their homes and the people holding the debt getting stuck with nothing too.
AMD and the silly Arabs are the analogous case here. Here we have AMD, the silly borrower who can't afford to play in the semiconductor business nor pay for the fab. And just like with AIG/Citi/etc., you've got the dumb Arab investor who doesn't understand what they just got sold. No matter how you spin it, the whole foundation for their business is broken.
Here we have the new Foundry Company opening for business in a situation where you've got TSMC, Chartered, Samsung, IBM, and all the other similar foundries with capacity galore, in a recession and tech meltdown not seen in any of their lifetimes. Pricing is soft and competition is fierce. All the foundries essentially share from the same whore (IBM). Whatever crap process it kicks out, all of them will have. There is no competitive advantage the Foundry has compared to TSMC or the others.
Things are actually worse than just the fact that they have essentially the same process capability. The stupid Foundry is actually handicapped compared to TSMC, SMIC, and Chartered. For one, they don't have any of the standard industry collateral that the other guys offer. Without that, how the hell does an upstart design company put their design into the Foundry's fab? All the Foundry knows how to run is AMD's unique process and design. I'm sure that AMD uses industry-standard design tools, but the process is still specialized and likely not just a simple port of, say, something like a PLL, DDR, or USB interface block for someone else. Look no further than the fact that after buying ATI, AMD didn't move all of its graphics and chipsets to the Foundry. Wouldn't it make sense, in this underutilized situation, to move all the wafers into their own fab to save money? Why give TSMC a cut when you can save that in your own fab? Also, isn't AMD's technology superior, since it does high-performance CPUs? Clearly high-performance logic from AMD is going to make an ATI design go faster than on TSMC. Guess what: maybe it ain't so easy just taking a foundry-designed product and porting it to AMD's Foundry silicon process. Guess the customers will be few and far between.
Second, why would any upstart that can get a design to work want to go to the Foundry? You know that the elephant for them is AMD. No matter how important you think your product run is at the Foundry, you'll always stand behind AMD when it comes to fab priority of any sort. If you go to TSMC you have a more agnostic supplier, for sure. The Foundry won't attract serious business unless they really undercut TSMC's and Chartered's offerings with something much cheaper for the few people that can port their designs. But if you charge that much less, how are you going to get back the $ you invested?
That brings us to the third issue: $. Let's now look at the $$$$$$$$$$ that the Foundry is paying for this fab and technology and the $$$$$$$$$$ that it will have to cough up in the coming years. They will want some return for that. Now AMD has got to share the profits. Clearly the Foundry isn't in this to throw away money. I only ask: would AMD have had to pay more for thousands of guaranteed leading-edge 45nm wafer starts at TSMC, or to do it internally? Clearly TSMC or the Foundry needs to make a profit. To make a profit it has to charge a fair price. How can AMD's design side afford that fair price and still expect to make money? They are already a year and a half behind in technology, which results in larger and more expensive transistors. They are 3 years behind in getting to high-k/metal gate, which will leave those more expensive transistors performing poorly, handicapping their designs even further.
Bottom line: spinning off the fabs to the Foundry changes nothing fundamental. All it does is make the balance sheet look different. But like our current financial mess, it really changes nothing. AMD never could afford to be in this business, and spinning it off changes nothing. All that has happened is they tricked some dumb fuck Arabs into coughing up billions while they thought they were rich. I guess in some ways it's no big deal for the Arabs, as they would have blown their wad on Citi or some other thing if they hadn't blown it on Hector. But all this really does is allow AMD to bleed for another couple of years. The Arabs are ill-prepared, and there is no reason to keep throwing billions at this for another 5 years in this environment; by next year they'll have smelled the shit they walked into and walked away.
As Jerry Sanders once said, "real men have fabs." Does that make Dirk the pussy and Ruiz the man? Got to hand it to Hector: he f***ed Motorola, then he f***ed AMD, and now as the head of the Foundry he gets to f**k AMD again, and his unmanly buddy Dirk, LOL.
"Here we have the new Foundary Company opening for business in a situation where you got TSMC, Charter, Samsung, IBM, and all the other similar foundrys with capacity galore in a recession and tech meltdown not seen in any of their lifetime. Pricing is soft and competition is fierce."
Well said, and it is significant. Case in point: outsourcing ATOM is a brilliant move by INTC, given a VERY soft market. INTC gets ATOM built for a SONG and holds its reserves for, excuse the sick-ass enthusiast term, the 'REAL CHIPS'. (Higher performance/margins.) TSMC gets a bit of the business to keep its production moving. Work, after all, is work. INTC gets its cake and eats it too: cheap ATOMs that flood the market, and plenty of CORE products kicking down doors come the turnaround. Brilliant, simply brilliant.
I must agree here.
SPARKS
Saw an article on Fudzilla about an AMD "senior" rep stating the Phenom II is increasing AMD's market share, especially with the X3 -> X4 free upgrade trick. From the thread over on Tom's, it looks like most people trying this trick have had limited or no success - certainly not too stable with all 4 cores loaded.
One idiot claimed this was a great marketing scheme for AMD; however I think it may backfire on them if too many people are disappointed and given one more reason to dislike AMD.
At any rate, Gartner is predicting a 30% drop in desktop sales this year, and an increase of 80% in the netbook market. Methinks AMD's crystal ball is still murky and, as usual, they are targeting the wrong market.
"Gartner's is predicting a 30% drop in desktop sales this year, and an increase of 80% in the netbook market."
Moose, you said it. Here is the little bugger that's driving it.
ITK, this ATOM's for you.
http://www.fudzilla.com/index.php?option=com_content&task=view&id=12454&Itemid=1
SPARKS
It's funny how people on AMDZone are saying stuff like "I wish AMD would shrink the Phenom die and put more cache on it to improve performance" when about 1-2 years ago they were trashing Intel for having lots of cache, as they thought it was there just to hide architectural flaws.
Did someone say shrink the die and add more cache?
Do you know that having SOI hurts the cache size?
Do you know that to shrink the die and shrink the cache you need R&D focused on delivering what your product team requires?
Do you know that consortiums will develop a technology that meets the minimum needs of all the members and is a compromise for all?
Do you realize that the leader of the consortium will work hard to maximize press coverage to get more folks to pay to amortize a fab and a company that can't make a profit for its mother company?
Now how can AMD compete going to such an organization and expect to get the technology they need to compete against a foe so much larger and so focused on delivering the best performance possible?
Tick Tock Tick.... and poof AMD is gone
ITK, this ATOM's for you.
Nice, Sparks. I kinda like the tablet form factor. Wonder if it has a touch screen.
On a side note, I just bought my daughter an Asus Eee PC. Performance is perfectly acceptable for the things we bought it for (primarily web browsing/social media, etc.). Despite dire predictions from Abinstein, she is thrilled to have her own machine and sees the small size as an advantage.
Her dad likes the fact it didn't break the bank. :)
Sparks: ITK, this ATOM's for you.
Hmm, got a Dell direct-marketing email about Dell's 10" netbook, standard with a 160GB HD and an Atom N270, for $399, but can no longer find it on Dell's website. Maybe a preview of a coming offer, since all it had standard was WiFi, no Bluetooth or even the GPS option in the marketing email. Now that would be a great use for a netbook - all the GPS options of a high-end Garmin, along with the standard netbook features.
Now if they could throw in a cellphone module and make it rotating flip-screen with touch, you'd have a giant iPhone too :). Too big to fit into a pocket (unless you're wearing a Captain Kangaroo coat :), but I'd bet all those iPhone app writers would love to port over to a new market :).
"One idiot claimed this was a great marketing scheme for AMD; however I think it may backfire on them if too many people are disappointed and given one more reason to dislike AMD."
Moose, I was thinking about this for a while. I'm compelled to comment.
Not so fast with the "idiot." He may be on to something, as ugly as it may seem. I don't think AMD cares if it backfires or not. At this juncture, I think they'd do or say anything to bring in more cash, especially with a broken product.
As far as disappointing anyone, the damage has already been done over the past 2 years. And now, they are fabless. Really, what have they got to lose in this very narrow segment of fiddling fanboys looking for something for nothing on a product that was declared unreliable with the fourth core enabled?
Certainly, OEMs are not going to monkey around with a product that hasn't passed the grade at the factory. Further, I suspect serious AMD enthusiasts/fans are not going to waste time and money on a broken 'wing and a prayer' when AMD's top-bin products are so cheap anyway.
From a marketing perspective, it's a great idea to move a busted product that some very narrow segment of fanboys could pop into an old MOBO and tinker with in their spare time, on the cheap.
Hey, it's a volume thing. They're out there anyway, so what's the harm in boosting sales a bit when the purchaser knows he may fail? There are no guarantees, it's money in AMD's bank, and this horseshit costs them nothing!
What's next, the pencil trick?
I said it before, and I'll say it again. They're selling garbage. And now, they're making it look attractive. AMD needs cash any way they can get it. It will buy them more time to engineer something hopefully more competitive, and obviously, every penny counts. This is gospel.
Caveat Emptor.
SPARKS
lol lol lol. ;D
The typical trolls. Definitely, this site is for you guys.
Please, stay here and don't go out to talk crap and fud on other forums and blogs. :)
Roborat made this site specifically for you all. ;)
Roborat and sharikou will be remembered as the clowns of the tech community. :D
I see that our resident AMD fanatic has been reduced to "neener neener" posts. How appropriate. :)
sparks: "Do you recall the intermediate graphics solution called the VESA Local Bus (VLB)?"
I do indeed! I never had one, by the time I had enough money to afford one, PCI had been established and that would tide me over until AGP came along.
"What's next, the pencil trick?"
LOL, those were fun days for OC'ing, when we were figuring out all kinds of tricks for getting around multiplier locks and PCI divisors. I OC'ed one or two AMD chips with that trick.
ITK: "Despite dire predictions from Abinstein, she is thrilled to have her own machine and sees the small size as an advantage."
I think that is where much of the disconnect occurs when people are analyzing the industry. Too many people take the "self-centric" (to coin a term) view, instead of considering the market as a whole.
For many people, an Atom-based netbook is little more than a toy. But so often we talk to people who want to buy a computer for web-browsing and email, and perhaps to type up that grocery list. I was at Best Buy with a friend this weekend and the only notebooks (out of many on display) that he noticed and commented on were... you guessed it, netbooks. Low cost and a "cuteness factor" seem to go a long way with the non-techie consumer.
Same thing with the ability to 'unlock' the fourth core on a tri-core Phenom. Does anyone think that AMD is thinking that the key to higher ASPs is selling tri-core CPUs to tech-geeks? Of course not, and that shouldn't be any part of the discussion when analyzing how those CPUs will affect AMD's bottom line.
ATOM and the netbook is marketing smarts.
The world is in a recession, but everyone still wants their electronic gizmos. They go to their local Best Buy and what do they see? A $299 or $399 full-featured small PC.
They can do their email, do their websurfing and likely everything else.
Some find out the limitations and get frustrated. What do they do? Go buy more HP in a Centrino 2 or something. Will they get down on it? Likely not; what the fuck do you expect for $399? You get built-in WiFi, a screen, a 160GB HD, 1GB of memory and Vista. Not a bad value.
By the way, where the hell is AMD in this fast-growing and important market?
Tick Tock Tick Tock AMD is finished.
Let the trolls do more than tease me. I challenge them to tell me how AMD can come back... no one will, because if you look, the deck is stacked and the odds are ridiculous.
Now go play in AMDZone, you dumb fuck.
"I see that our resident AMD fanatic has been reduced to "neener neener" posts. How appropriate. :)"
What else can he do? While we "talk crap and fud", every prediction, every post, on this site for the past 2 YEARS has been 100% spot on.
AMD has been reduced to a has-been chip maker/design house, deeply in debt, with no competitive products on the high end or the low end.
They have failed to deliver for two years and have been completely trounced by INTC.
Further, comparing ROBO to Abstenchia is like comparing gold to sand.
He's the real "clown" who can't see the difference.
SPARKS
comparing gold to sand.
Both of which are important in chip making... ;)
ATOM and the netbook is marketing smarts.
Let's stop rewriting history, folks. Atom was conceived and started BEFORE the meltdown. It was also targeted at the MID market, not the "netbook" market (which didn't even exist at the time).
While credit needs to be given to Intel for having the guts to try to crack the MID market, and for delivering a very good rev-0 design in Atom (discounting of course the initial chipset tied to it), it is a reach to say where Atom is currently headed is what Intel intended. Credit has to be given to Intel for swallowing their pride and capitalizing on it (I think you will soon see dual-core Atoms in the netbook arena despite Intel currently refusing to allow it).
Both of which are important in chip making... ;)
Gold? Maybe a while back...
"They are sitting on $14 billion in cash and generated close to $10 billion in cash last year ... any fine would be more a hit to the mind than a hit to the balance sheet," he said."
The EU, in an effort to line its coffers with sorely needed cash since the banking meltdown and its deal with AMD that has soured into a multibillion, multinational boondoggle, is at it again, this time with record fines.
Never before has so much computing power been available for so little. So the Euro pee-on cockroaches are crawling out of the woodwork.
http://www.ciol.com/Global-News/News-Reports/Intel-pricing-model-seen-facing-EU-scrutiny/10309116982/0/
Well, at least they're not squabbling over each other's land and killing each other over it, en masse. They've found a new venue, MSFT and INTC.
The EU, the self-ordained world business police. Sieg Heil!
SPARKS
Well, at least they're not squabbling over each other's land and killing each other over it, en masse. They've found a new venue, MSFT and INTC.
Well, when you convert to a socialist state where the gov't does (or thinks it does) everything for the people, and you cause your citizens to think "why work hard?", you end up tanking the economy and destroying your tax base. You then need to develop an alternate revenue base or the existing power base fails. Hello, big foreign multinationals... make no mistake, these fines are not punitive fines, they are tax revenue. If they truly were punitive fines to discourage behavior, the EU would also go after the distributors who participated with Intel in this "unfair" behavior (did any of these firms say no to the rebates? No, they took the money and ran, and for some reason the EU has no problem with their OWN companies profiting from Intel's behavior) - these guys get a pass because 'they are one of ours'. The money would also go to the aggrieved parties (the logic is it's the consumers getting hurt, right?), but the money doesn't go back to computer purchasers - it goes into the EU slush fund to fund whatever programs the gov't deems necessary - the people who got hurt... well, thanks for taking one for the team.
This is where the US is headed under the Chosen One (and you'll notice how much Europeans love him). The only question is whether the (soon to be structural) damage being done becomes permanent - in my view debt here is the concern, and the potential for large entitlement programs to become established and deemed politically untouchable (Social "Security" anyone?)
Corporations have become the target of gov'ts as it is easy to paint them as these evil entities and they are quickly becoming the only significant source of tax revenue. The problem is once these are destroyed and the gov't becomes the major employer, where does tax revenue come from?
"...gov't does (or thinks it does) everything for the people..."
"punitive fines"
"get a pass because 'they are one of ours"
"slush fund"
"under the Chosen One"
Well said; all points coherent, well thought out, and spot on. Unfortunately, the last point (above) scares the Bejesus out of me: lots of polish, substantively short.
This guy is going to set a new standard that only Jimmy Carter could rival, maybe.
Just get on TV, tell the American people that your kids are advising you (Amy), and that they should lower their thermostats during these hard times.
This is what happens when one makes the "popular decisions".
OMG, we're in it deep, my friend.
SPARKS
Hello, big foreign multinationals... make no mistake, these fines are not punitive fines, they are tax revenue.
This ain't gonna happen, but I would love to see Intel take their marbles and go home. Leave Europe in the capable hands of AMD and see what that gets them. It would keep AMD afloat, Europe gets rid of unfair, non-government sanctioned competition, and Intel gets to avoid the $1B fine. Europe's inability to get enough processors for the next several years is just a trivial detail.
Everyone wins. :)
Just get on TV, tell the American people that your kids are advising you (Amy)....
That has always been one of my favorite Jimmy Carter moments. Thanks for the flashback.
I thought the comments section on this ARM vs Intel piece was more interesting than the article and well worth the read.
Some may want to believe that Windows' days are numbered (and I'm not a big fan of Windows; I want to control my OS, not the other way around), but the comments bring out some key reasons why Linux isn't going to take over the world. The single biggest reason is inertia. The masses are going to go with what is easy, and like it or not, that is Windows, a known quantity. Learning Linux is not easy.
One Laptop Per Child set to dump x86 for Arm chips in XO-2
http://www.macworld.co.uk/business/news/index.cfm?newsid=25368
And this apparently is where Negroponte dissolves into complete oblivion. At this point, well, at many points to be honest, Negroponte has let ego get in the way. OLPC refused to work with Intel when Intel refused to agree to OLPC's demand that it not sell chips to competing designs (apparently competition is good for most spaces, but really bad in the one-laptop-per-child space!). Now EGO-ponte thinks he will persuade Microsoft to develop Windows on ARM (as opposed to Win Mobile)... I'm sure MS will be willing to do this for him and at a next-to-$0 price... sure... maybe they'll even do a buy-one-get-one-free program too! I heard he's willing to buy a million or so units, so heck, MS should stop work on x86 (hundreds of millions of units) and go work on this zero-return effort.
Unfortunately (and I say this because I'm an MIT alum and embarrassed by this egomaniac), Negroponte has lost sight of his vision, which was to get cheap laptops to children, and has turned it into a personal crusade about HIM delivering cheap laptops to the children (and him getting the glory). Rather than ride the momentum of netbooks, which OLPC as a non-profit is having a hard time keeping up with (which tells you a bit about the viability of OLPC the non-profit company!), he thinks it best to continue on his own personal crusade to manufacture the PCs himself (via OLPC), even if it is inefficient, non-competitive, and there are existing commercial solutions getting close to his ultimate goal.
Rather than be happy that, through Atom and the netbook movement, more progress has been made in a short amount of time than in all his time spent tilting at windmills, he apparently needs to strike off in a new direction. It's readily apparent that he will be extremely disappointed when a for-profit organization achieves his goal quicker than he can. Rather than be happy at things moving closer to his goal, he will remain embittered that it owed little to him and his ridiculous execution skills.
Kinda makes me think of the EU... cheap computers via EU distributors taking rebates! We must correct that and fine people... we don't want cheap computers, we want competition with higher prices! Can't you see these poor consumers being crushed by the low prices of these for-profit organizations! We need to bring in some non-profit organizations (AMD) and discontinue volume discounts so our consumers can pay higher prices for the sake of competition.
InTheKnow
"Learning Linux is not easy."
Let me fix that for you:
Learning anything new is not easy if you don't bother to try.
It still boggles my mind how people can complain about the "complexity" of a package manager when they are capable of downloading pirated software from N+1 torrent sites. Or alternatively, how people learnt to use the (very unintuitive) user interface of Windows at one point in time and can't do the same with Linux.
My mom and dad are both >50 years old, and both had used Windows for about a year for very basic surfing and text editing. At one point I installed Gentoo on their PC and let them work with it. The only thing I had to show them was where the new My Documents folder was; the rest they picked up themselves without problems. They even liked it much more, as it was in their native tongue; no such luxury with Windows.
Linux is not hard, it's just a tiny bit different. I'd even say that from an end-user point of view it's much easier to use than Windows; it's just that people are already used to Windows and can't see the flaws of its design.
hoho: "It still boggles my mind how can people complain about the "complexity" of a package manager when they are capable of downloading pirated software from N+1 torrent sites."
Those aren't the same people, though. In any case, I think it's mostly a case of perception and software availability. Linux isn't harder to use than Windows these days (at least as long as it has a competent interface). But there is a perception that it's harder to use. More than that, I think it's just software availability. If you go to a typical shareware or download site, the software is 99.9% for Windows. So Joe and Jane Average feel safe running Windows from that point of view.
Tonus
"Those aren't the same people, though"
Well, in the forums I frequently visit, people I know who use pirated software as freely as I use a package manager are the first to bring up that argument. Yes, Joe Average probably doesn't download and install that stuff himself, but why should we expect that someone who can't handle installing Windows applications would have to handle doing it on some other OS?
"More than that, I think it's just software availability. If you go to a typical shareware or download site, the software is 99.9% for Windows."
The problem with that is people simply can't imagine that they don't have to visit any web page to get the software they need; all they need to do is open up the package manager GUI front-end and type in a few keywords. Way easier than anything on Windows, not to mention they won't have to be bothered with evaluation software nagging them or some piece of malware trying to sneak onto their computers from many of those so-called shareware sites.
all they need to do is open up the package manager GUI front-end and type in a few keywords (emphasis added)
And therein lies the problem. Most people don't want to have to learn keywords when Windows will lead them around by the nose. If you are old enough to remember the world when DOS was your only option on the PC, you will remember how intimidated many people were by a command line.
Just look at the reports on netbook returns. More Linux based machines are returned than Windows machines. This is because people don't want to learn something new. The experiences aren't equal for them. They just want to plunk down their money and start running their netbook.
Like it or not, Windows has been an integral part of the growth of the PC because it has made computing accessible. DOS was cheap, Windows wasn't, but DOS died because Windows was accessible. Just because I don't like the fact that I am being forced into a smaller straitjacket with each successive iteration of Windows doesn't change that fact.
You can argue that Linux is accessible now too, but until there is some compelling reason for the general public to choose Linux over Windows in terms of added capability that they feel they need or a clear improvement in ease of use, Windows will continue to dominate.
You have to look at this from the point of the lowest common denominator, and I suspect most of us on this board don't fall into that category.
"And therein lies the problem. Most people don't want to have to learn keywords when Windows will lead them around by the nose."
Do you really suggest people are so ridiculously stupid that if they want to get a drawing application, they can't figure out to type "draw" or "paint" or anything else similar? If they are, then those people have much bigger problems than the ability to learn things.
Btw, how is it usable to have your applications grouped by manufacturer instead of by purpose under the Start menu? Great accessibility and ease of understanding indeed. As I said before, people simply have no idea there are far better solutions out there and eat what they are given while praising it as the best thing in existence, just because they are so stupid they cannot learn a couple of basic things.
As I said before, people simply have no idea there are far better solutions out there and eat what they are given while praising it as the best thing in existence, just because they are so stupid they cannot learn a couple of basic things.
HoHo, you just don't seem to get it... it's not about there being better solutions out there, and it's not about being stupid. When you buy a PC with Windows on it, why would you (you meaning the average consumer) ever change the OS? It really doesn't matter if Linux is better or has a more organized start menu... there is infrastructure in place, there is inertia and there is largely a feeling that the current SW is good enough.
Is this right? It doesn't matter... the average person never opens up their computer, never upgrades their OS (or accomplishes this task by purchasing a new computer), and doesn't care if the start menu is organized better.
On top of this there is the ROI factor (not talking about money); suppose you have a bunch of minor improvements that will make things more efficient - that "benefit" has to be traded off against the "cost" of learning the new system and installing the new system. At this point the only way things change is if there is a substantial cost delta for NEW computers. This may be significant enough in the netbook/nettop arena, but in desktops/notebooks I suspect MS will keep the SW pricing just acceptable enough to avoid significant sales erosion.
You are viewing this from a technical/engineer/computer-savvy perspective. Computers have become much more like TVs or DVD players or a camera or a fancy remote... there may be better firmware out there, and it may not take much time to flash the new firmware, but how many people would do this even if the new firmware is free and better? Most say "why bother," or would just hold off until they bought a new TV, DVD player, computer component, etc...
Do you really suggest people are so ridiculously stupid that if they want to get a drawing application, they can't figure out to type "draw" or "paint" or anything else similar?
Nope they aren't stupid. I just don't think they care about what you care about. They don't care enough to learn to change their habits and do what you suggest, because it doesn't add enough to their experience.
Anonymous hit it square on the head when he said:
It really doesn't matter if Linux is better or has a more organized start menu... there is infrastructure in place, there is inertia and there is largely a feeling that the current SW is good enough.
Speaking of "average joes" using Linux .. my parents use Ubuntu these days. My mother said she'd never go back to Windows, she's got used to Compiz and having 6 easily accessible workspaces (using Expo bound to bottom left screen corner), she likes how she doesn't need virus or malware cleaning apps, she likes how all of her software is legally free. She likes how all of her installed software automatically gets updated when security updates become available - not just for the core of the operating system, but everything installed. She also likes how every six months, basically everything gets a refresh and it can all happen with one click in the update manager. There is however, one Windows dependency left, and that's the Australian Tax Office's eRecord software. Using a virtual machine takes care of that though (she didn't set that up herself, so obviously this would have been a barrier to switching to Linux)
I guess as time goes on, more average Joes will become exposed to Linux, and like it. But for now it takes the more technically able people to install it for them and show them around. Then it'll be no turning back, because going back would be a regression. ;)
Linux or Windows, why debate? Windows will continue to dominate, but the go-go days of growth are shrinking as we migrate to cheaper and cheaper platforms.
The real question is who will win the CPU socket in 4 years for the smart phone?
Will it be INTEL, with x86 cores migrated and optimized for power on their superior process?
Will it be ARM, upping the CPU power to multicore and higher frequencies on inferior foundry silicon?
Like the 64-bit adventure, INTEL may shoot itself in the foot with arrogance and stupidity, but in the long run it will win. Superior technology with deep pockets will give the designers the valuable time to fix all their missteps, like they did with the Itanic and IA64.
AMD is gone
ARM is next
Intel will rule it all with x86
The tock is ticking
"I say this because I'm an MIT alumn and embarassed by this egomaniac."
So, that's where all those brains and discipline were fine tuned. Kudos. A great American Institution.
At the tender age of 18, I had no such ambition. I was, however, led by the nose into premed, where I had absolutely no aptitude or interest. It was horrible. What I've picked up over the years regarding our mutual love affair with the electron has been, for all practical purposes, self-taught and hands-on.
More interesting, however, is that Mr. N's egocentricity doesn't go without enabling support from your great Uncle Sam. I had the opportunity to build his private residence on the top of a New York City hotel. (I will not disclose which.) He is attended by armed Secret Service, 24/7. The 'apartment' is lavish and no expense has been spared.
The Mrs. is an absolute gem; kind, articulate, polite, extremely bright, very British, with the disposition of a lark. She's a great fan of Tony Blair. So am I. We got along famously.
All said, I must agree, Mr. N has turned OLPC into a monumental personal cause, much the same as a certain character did in a classic novel written by Herman Melville.
"... to the last I grapple with thee; from hell's heart I stab at thee; for hate's sake I spit my last breath at thee."
Well, I guess that about covers his feelings towards INTC.
By the way Ahab was a Quaker.
SPARKS
As I said before, people simply have no idea there are far better solutions out there and eat what they are given while praising it as the best thing in existence, just because they are so stupid they cannot learn a couple of basic things.
Huh... never realized Melville had ripped off that line from Star Trek 2!
I think Mr N's cause is fine, and a noble one. I think he just needs to be more concerned about making progress toward that goal and getting there than about playing the hero enabler and being the arbiter of the "right" way to get there. If others can get there better, faster and/or cheaper, then he needs to swallow his pride, get off his high horse, and start shoveling the ---- every once in a while instead of directing people to do it when he hasn't taken his turn with the shovel.
I'm convinced that when netbooks cheaper than his solution appear (and don't need to be subsidized by the component manufacturers), he will be bitter about it. He will be even more bitter if folks are turning a profit on it!
This whole '5 Watts is too much; I see it as a problem' is a slimy attitude when he knows there are things other than the chip taking up more than 5 Watts! He also doesn't mention a goal and the tradeoffs associated with it. For example, what if you drove it to 3-4 Watts but at 1/2 or 2/3 the functionality? It is just another case of him trying to drive an agenda and being bitter that Intel (and now AMD) and Microsoft aren't doing exactly what he wants. Rather than working with them and looking at potential compromises which might still get to the end state (cheap, available, manufacturably sustainable laptops?), he'd rather go a more difficult route so that he can be the chief.
OOPS... I meant to quote:
... to the last I grapple with thee; from hell's heart I stab at thee; for hate's sake I spit my last breath at thee.
Kind of kills the joke...
"If others can get there better, faster and/or cheaper, then he needs to swallow his pride"
"I'm convinced when netbooks cheaper then his solution appear (and don't need to be subsidized by the component manufacturers), he will be bitter about it. He will be even more bitter if folks are turning a profit on it!"
Perfect, that's what escapes him, and I mistakenly thought that this was obvious.
As an enthusiast and an electronics freak, I always assumed anyone with a lick of smarts (including those from M.I.T.) knew that as time passes and volume grows, semiconductor prices fall in direct proportion, sometimes exponentially.
Indeed, I'm quite certain he didn't factor in that buying into the latest tech would command a price premium. Any half-assed engineer knows that utilizing proven, off-the-shelf, mass-produced components is far cheaper than new, state-of-the-art tech.
Cheap, mass produced laptop/netbook components, if not here already, are just around the corner. Now, with the worst possible timing, he wants to go in another direction just when ATOM is gaining serious traction!
By the time they regroup for a new OLPC approach, INTC and TSMC will be cranking out ATOMs like Frito-Lay. The Far East, especially in this very competitive economy, will eventually fulfill the OLPC dream without any, shall we say, 'high-brow' social intervention.
A great idea is wonderful, but discipline, timing, and volume is everything.
George Westinghouse and Nikola Tesla.
Gates, INTC, and ISA
Ford and the 'Tin Lizzy'
Gutenberg and the printing press.
The list is endless, but I'm afraid he is not going to be on it.
SPARKS
Sparks and Company,
I wish you would refrain from degrading Arabs in your blog. I really thought this blog was more intelligent than many fanboy blogs such as sharikou's, scientia's, or Rahul's. I know that you and many of the participants are Intel employees, and so am I. I want to remind you that several of your colleagues at work who developed the architecture and process technology for Core i7 and beyond are of Arab descent, and they are decent people. I want to also remind you that the UAE is one of the few countries that went haywire with investing in the latest and greatest IT infrastructure by buying tons of Intel-based machines. The government of Abu Dhabi did invest in AMD, but also bought millions of dollars' worth of Intel products. It does not make them stupid just because they are investing in technology. Blindly degrading Arabs, who are by no means represented by the deal, is just low intelligence, which I believe this blog should be above.
I hope you and your blog participants pay attention to this point so you do not get painted as racists. Do not be a sharikou, man!
I hope you and your blog participants pay attention to this point so you do not get painted as racists. Do not be a sharikou, man!
While I 100% agree with that statement, I fail to see any wrongdoing by most people here on that.
1) For those I would guess are Intel employees, just like me, I do not see them insulting Arabs (none of us used the word Arab in a bad way).
2) I do not think that Sparks is an Intel employee. I also fail to see him insulting Arabs, although he does show his dislike towards Arabs, just as some Americans do: doubts about US-Arab 'friendship', worries about technology transfer to Arabs, etc. For me, calling AMD 'Arab Micro Devices' is inappropriate, but I do not think that it is insulting.
3) And those involved in intelligent discussion while not being Intel employees also show no trace of insulting Arabs.
As for the 1 or 2 anonymous posters here that did insult Arabs, I do hope that they stop doing that.
http://arabracismislamofascism.wordpress.com/2008/09/11/remember-911-islamofascisms-attack-on-3000-innocent-people/
http://www.camera.org/index.asp?x_context=7&x_issue=17&x_article=265
http://www.youtube.com/watch?v=UufTBjgEE5Q&NR=1
http://www.youtube.com/watch?v=8VJnNab2zYI&NR=1
http://www.youtube.com/watch?v=9DMHBKSPG9g&feature=related
http://www.fdnylodd.com/BloodofHeroes.html
I am so sorry, this last one has so affected my personal biases. You are 100% correct, I indeed will consider my lack of objectivity here.....
http://www.homeschoolblogger.com/ChathamMommy/200684/
...with all the pieces they never found of my buddy Paul and all the others.
SPARKS
PS: http://www.lowermanhattan.info/construction/project_updates/freedom_tower_26204.aspx
To whom it may concern, in the interest of disclosure, I am not an INTC employee.
I am an NYC IBEW LU#3 electrician who is part of a brotherhood of workers currently rebuilding the most beautiful towers in the world, 'The Freedom Towers'. We lost ten Brothers in the attacks, most of them family men, among the 3000 in total.
Freedom of speech is a right, unless otherwise personally directed by ROBO himself, as this is his blog.
I do not dare compromise his judgement or wisdom.
SPARKS
Sparks,
Not sure what point all these website links you posted above reinforce. Everybody knows that there are views on both sides of the argument, and there are more victims than anybody can count.
What happened on 9/11 is absolutely a tragedy that no one in their right mind can deny. There is no justification, period. But posting links to protests against a government or a paper does not justify racism against a whole race on any side of the argument.
I read your blog because it has intelligent views, and some of these bloggers, including yourself, are pretty smart; their opinions stand out because they talk from experience. It just saddens me to see some bloggers sink to Sharidouche's level sometimes, and I was hoping your blog would not sink to that level.
Intel Employee 2 cents!
It just saddens me to see some bloggers sink to Sharidouche's level sometimes, and I was hoping your blog would not sink to that level.
There's mainly just one, and everyone here knows who that is - it's a question of the value of censoring that person vs. having to censor (or impact) everyone else. If you start censoring frequently, then you start getting blogs and forums like Dementia's blog or AMDZone, where censoring quickly migrates into suppressing opposing viewpoints that can't easily be refuted, or into only allowing personal attacks in one direction. While there are many Intel "fans" here, you will not see them calling for an AMD fan to be banned or an AMD viewpoint to be deleted.
Back to the original point - as in many religions, political parties, advocacy groups, etc., there is always a small minority fringe group whose actions/words don't represent the main group. (I'll avoid giving examples at the risk of offending people.) If you are offended at something specific, I would mention it in real time, instead of trying to paint a generalization.
"It just sadens me to see some bloggers sink to Sharidouche's level sometimes and I was hoping your blog does not sink to that level."
First and foremost, it is not my blog. I have, over time, been accepted as a member whose opinion is regarded as (perhaps) an intelligent one. Sometimes a bit overzealous, and perhaps a bit over the top. I have been set straight by other members with far more intellect, justifiably and rightfully so.
However, make no mistake, I am an American first and foremost. I am a card-carrying, flag-waving union worker who believes not in some twisted religious ideology (I am Jesuit trained, Regis University, Colo.), but in America and what it affords its citizens: equal rights, life and liberty, freedom of speech and thought, and the pursuit of happiness under the Constitution of the United States of America.
The system is flawed, but it is always under development. If you love America and its people, you will come to recognize this dynamic.
My opinions are my own, completely and wholly. They are in no way to be construed as the opinions of ANY other member of this site. I am more than happy to take credit for them, solely.
SPARKS
Sparks,
Nothing wrong with being an American Patriot. All power to you man. I think a man who does not appreciate his country should be without one!
Let me ask you and the other bloggers this: Where do you think the netbook market is going? Is it going to dominate? Is it going to drag down ASPs? How does Intel use it to stay as profitable even if it becomes a bigger part of the product mix?
On a side note, when are these towers expected to be completed?
Netbooks strike me as an unexpected new market, which could grow enormously large in the next few years. They sit between the small handheld devices (BlackBerrys and iPods and such) and larger notebooks and desktop systems. They may be the right blend of portability and functionality for many people.
Since the CPUs are small and will likely remain relatively so, they may generate similar or greater profits based on volume sales of parts that are less expensive to make. That remains to be seen, but desktop computers seem to be a shrinking market, and thus the move towards smaller and better integrated components is already in full swing. Netbooks may be the next large battlefield, and Intel seems well poised to make a serious run at dominance in that area.
As for the Freedom Tower, it is scheduled to open in 2013/2014, which is a bit of a sore spot for me. The late completion date is wholly the function of the ridiculous political fighting that went on for far too long after the site had been cleared.
Let me ask you and the other bloggers this: Where do you think the netbook market is going? Is it going to dominate? Is it going to drag down ASPs? How does Intel use it to stay as profitable even if it becomes a bigger part of the product mix?
Constructively, ASP is the WRONG way to look at it... you are associating lower ASP with less profitability, which is sometimes the case but not necessarily. You have to look at the cost side of the equation - the press is so focused on the price alone and ignores the fact that Intel is getting thousands of these chips per wafer as opposed to hundreds. You also see Intel porting some of the more customizable SoC solution needs over to TSMC, which is a nod to the cost side of the picture.
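To put rough numbers on the die-economics point, here is a back-of-the-envelope sketch in Python. Every figure in it (wafer cost, die sizes, yields, ASPs) is a made-up illustrative assumption, not an Intel number; the only point is that a tiny die can carry a sub-$30 ASP and still earn a fat margin.

    # Rough, illustrative die-economics sketch. All numbers are assumptions
    # (hypothetical wafer cost, die sizes, yields, and selling prices), NOT
    # actual Intel figures -- the point is only that a small die can carry a
    # low ASP and still earn a healthy gross margin.
    import math

    WAFER_COST = 5000.0       # assumed fully loaded cost per 300mm wafer, USD
    WAFER_DIAMETER_MM = 300

    def gross_dies_per_wafer(die_area_mm2):
        """Classic dies-per-wafer approximation with an edge-loss correction."""
        d = WAFER_DIAMETER_MM
        return int(math.pi * (d / 2) ** 2 / die_area_mm2
                   - math.pi * d / math.sqrt(2 * die_area_mm2))

    def cost_and_margin(die_area_mm2, yield_fraction, asp):
        good_dies = gross_dies_per_wafer(die_area_mm2) * yield_fraction
        cost_per_die = WAFER_COST / good_dies
        gross_margin = (asp - cost_per_die) / asp
        return good_dies, cost_per_die, gross_margin

    # Small netbook-class die (~25 mm^2) at a ~$29 ASP vs. a large
    # quad-core die (~260 mm^2) at a ~$180 ASP -- all assumed values.
    for name, area, yld, asp in [("small netbook die", 25, 0.90, 29),
                                 ("large quad-core die", 260, 0.75, 180)]:
        good, cost, margin = cost_and_margin(area, yld, asp)
        print(f"{name}: ~{good:.0f} good dies/wafer, "
              f"~${cost:.2f}/die, ~{margin:.0%} gross margin")

With assumptions in that ballpark, the small die costs a couple of dollars to produce while the big die costs closer to thirty, so the "low ASP = low profit" shorthand falls apart.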
You also need to analyze whether this is driving new sales or taking away existing sales. There is clearly a mix of this, but I think the whole cannibalization thing is something that makes good writing but doesn't have a lot of factual data behind it (at least at this point). It also matters what Atom is cannibalizing - are high-end notebooks ($1500+) really being cannibalized by netbooks, or is it just a matter of current economic conditions? Are people really opting for a netbook instead of a $1000 laptop (again, separating out the economic environment)? And how many people are simply adding a netbook to have something cheap & portable to augment what they already own?
The netbook market will grow for a few years and then saturate (in my view), especially in developed markets/countries. The problem I see is that these are not cell phones, where you get a new one when you switch to a new plan, and there will not be the 'coolness' factor that drives the upgrade cycle on some of the other MID-type devices come Christmas time, or school time, or birthday time, etc...
I also think that given the razor-thin margins on the entire computer, you will not see the same pricing declines as you do on higher-end systems... prices will come down, but I think you will more likely see pricing holding up better, with the trend being more performance at the price point.
Anon,
Honestly, there are others here who saw the whole netbook/Atom phenomenon coming way before I did. Frankly, I didn't give it a thought at the time, as I am a sucker for the fastest, most powerful chips you boys at INTC can put on silicon.
I am in good company. INTC was way ahead of everyone, especially AMD who was caught completely flatfooted in this exponentially expanding segment. That said, credit must be given to VIA as they have a very competitive solution.
However, in perfect 20/20 hindsight, now that I've been awakened to the ultra-light, ultra-efficient, ultra-portable, relatively inexpensive market, I believe the sky's the limit. The most important ingredient in this potent mix is power usage, or the lack of it, if you will.
I am sure other members here will gladly follow up my observations with more poignant ones.
As far as the status of WTC 2 is concerned, one must recognize that the loss of the Twin Towers, horrible in its own right, also tragically destroyed most, if not all, of the underlying infrastructure hidden from view below street level during and after the collapse(s).
There was a multitude of mass transit infrastructure destroyed. Power, water, communications, shops, stores, eateries, and a host of ancillary utility and support systems for the 'city within the city' were also left essentially useless. The area on the whole was a nerve center for the entire city, and a good part of the world, whose shock effect is still rippling through the city to this very day.
The key to any rebuild, as slow as it may seem, is the foundation. WTC 2 is a prime example, if not the model. In fact, a buddy of mine who is working on the foundation tells me he has never seen anything as massive. He's been in the business over 35 years and built the first towers.
The cut was deep and hit a vital area. Compounding the difficulties is that the entire area is, in fact, a national shrine and burial site. The original footprints will be left relatively untouched in memory of those who lost their lives, or, like my fellow commuter/buddy Paul, were never found. (They did recover his skates and returned them to his family.)
In closing, since the recent collapse of the banking/insurance industry, fed by corporate greed and simultaneously rising energy prices, I can only guess that the movement forward has been hurt overall. We shall wait and see.
SPARKS
"The late completion date is wholly the function of the ridiculous political fighting that went on for far too long after the site had been cleared."
Well said, my brother, well said.
The insurance company(s) that underwrote the Towers stalled compensation by declaring that it was a single event loss, as opposed to two separate events. Basically they only wanted to pay half. They lost, but it took years.
Further complications included the various political factions at the Federal, State, City, and local levels. It was "Déjà vu all over again," as the same political infighting underscored the first towers' design and completion. Compounding the rebuild problems was a quorum of concessions made to the victims' families as a multitude of design proposals were accepted and subsequently rejected, as you undoubtedly know.
In fact, incredibly, some factions wanted the entire area declared a burial site where nothing would be built! A pockmarked scar in the heart of NYC as testament to the success of radical terrorism. This, in my view, was capitulation and utterly unacceptable.
Finally, everyone is on the same page as the anger and pain have subsided. The ball is now in our court, the construction companies and their workers, to realize the dream of rebuilding this fully functional National Monument. The only restriction I see is the flow of money these days, and nothing moves without money.
Never Forget
9 11 01
SPARKS
AMD cornered themselves there, didn't they? But to their credit, they had no way to survive but to do this... Jen-Hsun, are you watching?
And it begins...
I'm still fairly convinced this is the first step of an eventual license renegotiation/AMD civil suit settlement where Intel drops the outsourcing terms and settles the AMD civil claim (relatively) cheap. If/when there is the EU money grab, and other governments look to jump on the gravy train, it will be hard for Intel to win the civil suit regardless of the facts or applicability of any of the government rulings - so Intel has an incentive to get this thing out of the way.
AMD had to do the foundry move; it was that or a realistic probability of bankruptcy. However, even if AMD is technically meeting the subsidiary definition (which is debatable), they will not be able to maintain this over time as ATIC pours in more money for new capital. AMD will have to match proportionally or their stake will drop further. AMD seems to be banking on being able to hold out until 2011, when the licensing agreement is re-negotiated (where I assume they will be pushing for an elimination of any outsourcing terms).
When AMD was at ~46% equity in the original deal this was probably easily achievable with no additional cash infusion into the foundry; now, with the renegotiation, they are at ~34% and don't have much wiggle room before they dip below 30%. The F30/38 retrofit money is already baked into the deal (I think), but there will be F36 retrofit (for 32nm) and NY fab startup costs that will need to be funded over the next 2-3 years. So AMD will probably need to sink in some money to stay above 30% (when they were at 46%, they probably didn't need to at all)...
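A minimal sketch of that dilution arithmetic, using a hypothetical venture value and hypothetical capital calls (none of these numbers come from the actual agreement):

    # Illustrative sketch of how AMD's stake in the foundry dilutes if ATIC
    # funds new capital alone. The starting stake, venture value, and capital
    # calls are assumptions for illustration, not figures from the actual deal.

    def diluted_stake(stake, venture_value, new_capital, amd_contribution=0.0):
        """Ownership fraction after a capital call.

        stake            -- AMD's current ownership fraction (e.g. 0.34)
        venture_value    -- assumed current equity value of the venture, $B
        new_capital      -- total new money injected in this round, $B
        amd_contribution -- portion of the new money AMD itself puts in, $B
        """
        amd_equity = stake * venture_value + amd_contribution
        total_equity = venture_value + new_capital
        return amd_equity / total_equity

    stake, value = 0.34, 5.0                     # assumed: ~34% of a ~$5B venture
    for round_no, capital in enumerate([1.0, 1.5], start=1):   # assumed capital calls, $B
        stake = diluted_stake(stake, value, capital)            # ATIC funds it all
        value += capital
        print(f"after round {round_no} (${capital}B from ATIC alone): AMD ~ {stake:.1%}")

Under assumptions like these, AMD slips below 30% after the very first round it doesn't match, which is exactly the squeeze described above.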
So AMD has some incentive to settle this early: even though they are better off cash-wise, they will still have losses for the foreseeable future, will need the cash for their own work, and probably can't afford to sink a bunch more into Global Foundries. If the outsourcing terms are gone then they simply stop putting money in and can potentially even start selling off their stake if they need to raise more cash (not certain about the last part, as there may be some terms in the foundry deal about how/when they can sell their stake).
"Analyst Brian Piccioni of BMO Capital Markets said "it would be a mistake to assume that Intel does not have a legal basis upon which to make their assertion.".......
But he said Intel could also be using the issue as leverage in the antitrust battle."
Well, there you go, GURU, with your usual clairvoyant accuracy; you predicted this turn of events as if you had read it in a newsletter a year ago. And to the Anonymous who gave him an argument at the time, well, again, there it is.
INTC's timing was absolutely perfect, and the turn of events has played out flawlessly. The ink wasn't even dry on the deal.
"So AMD has some incentive to settling this early as even though they are better cash-wise....."' et al. You've been saying this all along, for well over a year.
I love it. As I said before, the Day Time Drama continues......!
To my credit (of course I'll pat myself on the back), I challenged anyone, in an above post, to speculate what INTC's next move would be. No one replied. Well, there ya go.
INTC has let the legal Piranhas LOOSE!
Well done, G.
SPARKS
"Jan-Hsung, are you watching?"
I can't help it, sometimes I just love LEX. He knows my weak spots. I know, I know, I'm so easy.
SPARKS
"Intellectual property is a cornerstone of Intel's technology leadership and for more than 30 years, the company has believed in the strategic importance of licensing intellectual property in exchange for fair value. However AMD cannot unilaterally extend Intel's licensing rights to a third party without Intel's consent," said Bruce Sewell, senior vice president and general counsel for Intel. We have attempted to address our concerns with AMD without success since October. We are willing to find a resolution but at the same time we have an obligation to our stockholders to protect the billions of dollars we've invested in intellectual property."
http://www.intel.com/pressroom/archive/releases/20090316corp.htm
SPARKS
http://idea.sec.gov/Archives/edgar/data/2488/000119312509054552/d8k.htm
SPARKS
INTC's timing was absolutely perfect, and the turn of events has played out flawlessly.
Not sure about how planned the timing was... Intel raised the concern during the process while the deal was being done, but they could not officially do anything until after the deal was closed. AMD was also not very forthcoming with the terms - if you recall when Intel wanted to understand the terms of the agreement (to see if it was in line with the licensing agreement), AMD basically said, trust us, it is, it's not your business.
The timing on the AMD side is also not so bad, as they will probably lean on any EU decision to dissuade/distract/disinform/dissemble on this issue and try to make the emotional "monopoly" argument as opposed to just looking at the details of the licensing agreement... they've already brought it up and trumpet it more than the "trust us, we are in compliance" line (one would think that if they were so clearly in compliance, they would get out in front of this, present a compelling case to the press, and put Intel on the defensive from the get-go).
I say let the issue be decided on its merits; not on emotion, or "we need competition" or big businesses are evil. The big problem for AMD is that if this goes on, I believe it goes to a mediator, where an emotional argument will not work, and it will be decided on (gasp!) the actual terms of the agreement and legal arguments (while AMD has lots of emotional arguments to make, it is not clear how strong the legal one is).
The other impact this has on AMD is the whole war of attrition - AMD will have to spend more money on lawyers, which takes money (and resources) away from other things. Same thing with the civil suit... even if AMD wins and then wins the eventual appeal(s), it will probably be at least 3-5 years before AMD sees a nickel. When you combine this with the current environment, then even with the cash infusion AMD may be in trouble 9-12 months from now - and what if they need to do another cash infusion into Global Foundries to maintain their stake? It's not like there are any credit options in this market.
I don't see any way this game ends without some sort of settlement - even if Intel wins, they lose! If AMD loses the license, the new socialist American gov't will step in faster than Obama can say "Change we can believe in" and will simply rewrite the agreement - that is a wild card Intel will not want to see. AMD is probably at the highest leverage point on the civil suit (or will be when the socialist EU gov't steps in with their new tax, ummm, "fine") and has to realize that SOME money now and no license restrictions on the foundry are probably worth more to AMD than more money later with a potentially crippled foundry.
All good no doubt.
However, can INTC file an injunction that would stop AMD CPU production until the case is settled? (This could take years!)
SPARKS
However, can INTC file an injunction that would stop AMD CPU production until the case is settled? (This could take years!)
Don't know the exact details of the agreements, but if they have provisions for a mediator, I doubt any court would grant an injunction. And even if they didn't, what court would grant it? An injunction like that would require some extreme circumstances... it's not like the court would (or should) presume that AMD's breaching the contract - Intel would have to have compelling evidence.
I don't think this would take years - mediation is not a trial and should be vastly expedited. I would see this taking less than 2 months (if that) if it goes to a mediator. This should be pretty cut and dry whether the terms are being breached or not.
(while AMD has lots of emotional arguments to make, it is not clear how strong the legal one is).
From the article on Ars Technica it would seem AMD's position might not be that strong.
Intel's own official statement is fairly terse, but confirms that the CPU giant has notified AMD of its belief that the latter has breached the x86 cross-licensing agreement, and that Globalfoundries is not a subsidiary as defined within the agreement. The Intel statement also alleges that "the structure of the deal between AMD and ATIC breaches a confidential portion of that agreement." Intel goes on to state that it has requested that AMD make the relevant redacted portion of the agreement public (presumably that portion dealing with ATIC) and that AMD has thus far declined to do so.
AMD's reply?
"AMD would be happy to make the entire agreement public if Intel drops its insistence on secrecy concerning its exclusionary business practices under the guise of confidentiality it has imposed on evidence in the US civil antitrust case,"
If the legality of the cross licensing agreement wasn't subject to some significant legal interpretation, I think they would be anxious to release the documentation.
As to Intel's position, would AMD be willing to reveal their pricing structure to all of their clients in order to get Intel to do the same? AMD has nothing to hide after all, right? Let them take the moral high road and publish their cost structure.
We all know AMD wouldn't do it, because that information, is indeed confidential. Choosing to keep the information confidential is not proof of illegal activity, it is just sound business practice.
The thing I found most interesting though, was this bit.
It's standard operating procedure in any patent infringement lawsuit to claim that your opponent's patents are invalid/inapplicable, but the x86 agreement contains a bit of victori spolia. In the event a material breach is upheld, the breaching party loses all rights to the nonbreaching party's patents but the inverse is not true.
Given this is the case, I don't think Intel would go down this road if they weren't very sure that they could win this. If a ruling is issued on this, one company or the other will be ruined.
That said, credit must be given to VIA as they have a very competitive solution.
While Via's Nano has some upside, I think it misses the sweet spot. It offers about 2x Atom's performance, but based on the last actual numbers I saw, the Nano die was larger than Atom's (read: more expensive) and it used about 40% more power than Atom on the Menlow platform.
There is some speculation that Via has done some work on the Nano power consumption, but if not, then it just won't cut it. The only consistent knock I see on the Atom is the battery life. I believe consumers still want an all day device. Neither Nano nor Atom deliver this, but from what I've seen, Atom is closer.
And to answer HYC's question of what I consider "all day": to me that is 10-12 hours of active use. The best units I am aware of are at about half this right now. To get there we will need to improve the energy density of batteries, slash the power consumption of wireless connection solutions, and/or see some significant improvements in display technology.
I also think given the razor margins on the entire computer, you will not see the same pricing declines as you do on higher end systems... the prices will come down, but I think you will more likely see pricing holding up better with the trend being more performance at the price point.
I think this is spot on. Much like HDDs, they will reach a price floor. Rather than decreases in price, you will pay the same amount over time and get a more powerful machine.
"I believe it goes to a mediator, where an emotional argument will not work, and it will be decided on (gasp!) the actual terms of the agreement and legal arguments."
That's the bottom line.
Look, anyone who has a moderate interest in this industry and the relationship between these two companies, be it professionals, analysts, or enthusiast schmucks like me, knew AMD was tap dancing on a land mine here. The other alternative is/was, to quote LEX, "BK BK BK"
Come on, we all expected this to happen in one form or another. It was simply a matter of time. There's no shocker here. AMD has paid into the "Agreement" for nearly 20 years. They've paid the royalties, and they have done business within those guidelines. Why the change? Simple: they have lost control and their ability to MANUFACTURE CHIPS ACCORDING TO THEIR SIGNED AGREEMENT!
Do people need Braille and a cane to realize that AMD has made irrevocable missteps during the past 3 years, lost billions, and that this Mideast deal was a last-ditch effort to save their hides?
Monopolistic!
Unfair Business!
EU!
Horseshit.
Since the ATI BLUNDER!!!, C2D, and NVDA's powerhouse 7*** series, AMD/ATI has been on the balls of their ass.
THE MARKET HAS DICTATED THE OUTCOME.
March 30 is coming with Nehalem EP. This will not be pretty for the 'Scrappy Little Company', chipmaker turned lawsuit maker, where staying alive is more important than staying competitive. They will lose this market, too.
SPARKS
In the event a material breach is upheld, the breaching party loses all rights to the nonbreaching party's patents but the inverse is not true.
Given this is the case, I don't think Intel would go down this road if they weren't very sure that they could win this. If a ruling is issued on this, one company or the other will be ruined.
ITK - I think you are reading the first part above incorrectly... what the top sentence says is that if AMD is indeed found to have breached the license agreement, they will lose all licensing rights to Intel IP in the agreement, but that will not cancel Intel's licensing rights to AMD IP (as they would have done nothing wrong). So I don't think your interpretation of one company or another being ruined (if this goes to a decision) is accurate.
However, separate from the note above, I believe AMD is making a claim that Intel did not follow the correct protocol in going through the grievance process and they are interpreting that to possibly mean that this may put Intel in breach of the agreement (I'm not too sure on the particulars on this, but it strikes me as a typical countersuit type move). I'll see if I can find a link on this.
To get this we will either need to improve the energy density on batteries, slash the power consumption on wireless connection solutions, and/or see some significant improvements on display technology
This is a key comment which is overlooked! Sure Intel needs to improve the chipset power (and will with the SOC solution); but if the chip had near 0 power consumption, it still would be a challenge to get an all day device because of everything else (for reasons ITK hit upon).
This is why I went on that diatribe against EGO-ponte who is trying to make such a big deal about 5 Watt consumption of a CPU not being good enough, trying to use that to justify a move to ARM and get MS to support it. He is mixing several issues into one to try to justify his desired end result. He knows that the CPU-side of things alone will not get him or OLPC to where it wants to be; he is just using it as a wedge.
BEGIN RANT:
This is much like the stupid US politicians who say solar and wind will get us away from our oil dependency - it's a slap in the face to semi-educated people and an attempt to mix two distinctly different issues together. We currently get very little electricity from oil; solar and wind are ENVIRONMENTAL fixes, as they would reduce dependency on COAL, NOT OIL!!! Whereas hybrid/all-electric cars would put the dent in oil dependency - of course they run on ANY electricity, including coal- and nuclear-based sources. It just doesn't sound as good if a politician says we need electric/hybrids for oil dependency and solar/wind for environmental issues, so they lump these 2 disparate projects together to confuse and scare people and free up money.
If we went 100% solar and wind tomorrow, it would probably have <2% impact on our oil consumption... now that's not something you want to tell the people though; that doesn't exactly light a fire under people to dole out billions, does it? Sure it would help quite a bit environmentally (and in my view it is a very meaningful and worthwhile project), but the catchphrase of the day is 'cutting oil dependency', so politicians need to twist the facts to somehow tell us it will impact oil (when all it does is help the environment).
If we went to hybrid/all-electric cars tomorrow with absolutely zero solar and wind, we would put a huge dent in our oil consumption - another inconvenient truth. So what do politicians do? They link the issues together and use one issue (dependence on oil) to justify work on another issue (environmental). If they just told it straight, I think people would understand and support both efforts (solar/wind for environmental reasons and hybrid/all-electric cars for oil consumption) - it pisses me off that they need to distort and artificially link these two together hoping people are idiots (and of course the press has fallen for it hook, line and sinker and just laps this up as opposed to doing some actual RESEARCH on the issue).
And don't get me started on nuclear! (which by the way accounts for ~20% of our electricity today and is done in a zero carbon emission fashion!) :)
END RANT
ITK - I think you are reading the first part above incorrectly...
No, I read it correctly. I just didn't include the context from the article to support my point. So my bad.
The article points out that AMD is claiming that Intel's claim of a breach is in "bad faith", and thus they abrogated their [Intel's] rights under the cross licensing agreement.
I guess that the ruling could state that AMD's deal does not breach the agreement and that Intel's claims were not made in bad faith. In that case nothing would change, but a null result seems unlikely to me.
Sorry - thought you were using the language to assess the situation. (not the overall link)
I guess that the ruling could state that AMD's deal does not breach the agreement and that Intel's claims were not made in bad faith. In that case nothing would change, but a null result seems unlikely to me.
I think this is fairly likely... it's nearly impossible to prove/demonstrate intent (unless the person or party is an idiot and documents it in an email or over the phone and says something like, "hey, why don't we just bring a complaint against AMD so we create a diversion with the whole antitrust thing, or so we can intentionally scare their customers").
Look, as evil as some folks (elsewhere) like to think Intel is, they also wouldn't risk their own business on a whim. I highly doubt Intel would simply say "let's take a shot, it won't win, but at least we'll get to spend lots of money and potentially eliminate our ability to manufacture chips! Sure, let's do it." It's also not a question of whether the complaint hurt AMD's reputation or impacted their business - they have to prove that AND that this was Intel's true purpose in bringing the complaint. Not sure how AMD would be able to demonstrate either, let alone both.
Intel may lose the complaint/mediation (impossible to say without knowing the redacted portions of the ATIC agreement), but I find it unlikely they would be found guilty of bringing a frivolous/bad-faith complaint.
I think the 2 most likely outcomes are Intel wins (and AMD gets some time relief to amend/correct the language in the ATIC agreement, or they have to kill it altogether) or nothing happens (the complaint fails but there is no bad-faith finding). If I had to bet, I would bet on AMD either needing to make some minor changes to the deal or nothing coming of it (but Intel gets what it wanted all along - to see the redacted portions of the agreement). If a mediator shut down AMD's license rights, do you have any idea how fast the federal gov't would step in to "fix" things, or how quickly a power-hungry state AG would step in (like, I don't know, maybe one from NY, where there are financial interests in AMD)?
Of course the true most likely outcome is a settlement! Have I mentioned this theory before? :) It's called "Lawsuit Smart" and I'm pretty sure Ruiz will not be in charge of this program on the AMD-side.
Anon: Sparks and Company,
I wish you would refrain from degrading the Arabs in your blog. I really thought this blog was more intelligent than many fanboy blogs such as sharikou's, scientia's, or Rahul's.
Personally, I use "UAEZone" occasionally when referring to AMDZone, as a jibe at the administrator there, Ghost, who has a waving American flag as his signature symbol. It's not intended as an anti-Arab or anti-UAE reference - apologies if it seemed that way.
"And don't get me started on nuclear! (which by the way accounts for ~20% of our electricity today and is done in a zero carbon emission fashion!) :)"
Let's go for it.
Do you realize how unsafe those spent-fuel transportation containers can be when hurled through concrete at 100 mph? The physical damage is devastating! Imagine one on the highways! Even though one has never, ever breached its contents, they will careen for hundreds of yards destroying everything in their path before they grind to a halt!
Tragic!
And what about Yucca Mountain? Think about the consequences if all that spent nuclear material, encased in layers of metal and concrete and God knows what other materials, reached critical mass! The fusion reaction would wipe out at least half the PLANET!
Further, Yucca Mountain, the garden spot for tourists WORLDWIDE, would be seriously compromised! Think of the loss of revenue and the environmental effects of those sealed containers miles beneath the earth. No tennis, no shuffleboard, no casinos, and no theme parks in the area for thousands of nuclear half-lives. Unacceptable! We may even get radioactive CRICKETS!
Where oh where is RALPH NADER when we need him most?
And let's not forget JANE FONDA in the China Syndrome, and look at Chernobyl!
No, I say we keep bitching about carbon emissions, talk about possible solutions, throw hundreds of millions toward study groups, and do absolutely nothing.
We have just the Administration to do just that.
SPARKS
http://www.dailytech.com/article.aspx?newsid=14588
SPARKS
http://channel.hexus.net/content/item.php?item=17643
SPARKS
ITK, ATOM is taking off like a shot.
http://www.theinquirer.net/inquirer/news/437/1051437/atom-half-entry-desktop-sales
SPARKS
What a failure! It should have taken 75%!
By the way, both chip stocks have done well recently... I soon may be changing the plan from a Penryn quad drop-in (on one of those supposedly "non-upgradable" Intel-based boards I bought over 3 years ago) to an i920 build.
Booked some more profit on AMD stock today and will once again wait for it to drop to the low 2's... The nice thing about the stock being so low, is you get some nice moves!
ITK, ATOM is taking off like a shot.
I can already hear the cries of "Cannibalization! Intel's margins will crash". :)
One of these days, even the press will wake up and realize that Atom's margins are comparable to Celeron's. And since this is being pushed as a Celeron replacement, Intel will do just fine. In fact, I think they will do better because they will sell more of them.
Realistically, I think Atom is a bit anemic on 45nm for a desktop system (I suspect it could be beefed up a bit on 32nm), but if you just want a cheap system for casual use, I'm sure it is good enough.
"Realistically, I think Atom is a bit anemic on 45nm for a desktop system (I suspect it could be be beefed up a bit on 32nm), but if you just want a cheap system for casual use, I'm sure it is good enough."
Sure, perhaps from our perspective they may be anemic, especially to a power twit like myself. However, from my own personal observations of family, extended family, and friends, basic computing tasks are the order of the day. (I am the resident "Geek Squad".) Web browsing, e-mailing, online shopping/banking, etc., are about the only real tasks most people are interested in.
I think the most demanding applications are the photo editing and media copying/recording software for most "average" users. On closer examination, this may be a good thing in the long term. People, especially teens, starting out with basic e-machines (entry level) moving up to more powerful stuff as the need or demand increases with the users experience.
"Wow, Sparks, can you get my computer to run like yours?"
Or.....
"I bought this game, but it does work very well, can you make run a bit faster."
My reply? "No, but if you want one that does it's gonna cost ya"
It seems in previous years the newest hardware was always the latest and greatest. Conversely, today it's like a kid learning to drive, starting out in an econobox as opposed to starting out with a Corvette. (Everyone in my house wants to drive "Dad's Car," if you will.) The market, finally, has diversified. INTC brilliantly saw this with ATOM, truly a chip for the masses.
And we certainly know how addictive powerful machines can be, once you get a taste.
Just like, "sex and drugs and rock'n roll."
SPARKS
"The nice thing about the stock being so low, is you get some nice moves!"
....and some big cojones.
SPARKS
Ah, is there any substance to this report?
http://www.networkworld.com/community/node/39825?t51hb
SPARKS
I think the most demanding applications are the photo editing and media copying/recording software for most "average" users.
Which is why I said that Atom was a bit anemic. Photo editing is not a task the chip was designed for and (no surprise), reportedly doesn't do it very well.
That is fine for a tiny portable, but I'd expect my desktop to handle photo editing.
the amFUDZone starts to spread FUD on the SMM attack on 'IA' (well, they only use the term Intel... maybe AMD CPUs have no SMM mode :))
anyway, I believe this is Intel's answer to this attack:
http://www.freepatentsonline.com/y2007/0156978.html
Conroe G0, Penryn, NHM, WSM have this feature :)
"(well, they on use the term Intel ... may be AMD CPU has no SMM mode:))"
The article at NetworkWorld and the statement from the security researchers names Intel CPUs specifically. Are you saying that AMD CPUs have the same vulnerability, and that Intel (and presumably AMD) have already patched this with recent steppings?
Tonus said...
"(well, they on use the term Intel ... may be AMD CPU has no SMM mode:))"
The article at NetworkWorld and the statement from the security researchers names Intel CPUs specifically. Are you saying that AMD CPUs have the same vulnerability, and that Intel (and presumably AMD) have already patched this with recent steppings?
I do not know how AMD set up the caching for its SMM memory range, thus I am not 100% sure. But as long as it allows caching on the SMM range (which I am quite sure it does, else the performance would be really bad) and does not have a special mechanism to prevent caching outside of SMM mode, then it is open to a similar attack.
Basically the attack uses caching. In a correctly implemented BIOS (including on older systems), the MTRRs should not be set up to cache the SMM range. Only once the CPU enters SMM should caching be enabled for that range, and it should be disabled again on exit. This is secure as long as the exploit cannot change the MTRRs. So the system (including older ones) is pretty safe if the BIOS is implemented correctly (it needs to lock SMM too).
But if the attacker has Ring 0 access (which, by definition, already lets it do quite some harm), the MTRRs could be changed. Thus, Intel implemented SMRR for this scenario.
Hey LEX!
Are you paying attention? If this is what INTC has in store for possible future graphics, then truly, "NVIDIA are you watching?"
10 TIMES LESS ENERGY.
http://www.xbitlabs.com/news/video/display/20090317231622_Intel_Develops_Breakthrough_Graphics_Accelerator_for_Small_Mobile_Devices.html
SPARKS
"but, if the attack has Ring0 access (which by definition, already can perform quite some harm), MTRR could be changed. Thus, Intel implemented SMRR for this scenario."
Cheese POINTER, what the hell does that all mean? Hey, give the dopes a break, will ya?
SPARKS
@Sparks
MTRR is a pair of registers that maps an address range to a cacheability type. SMRR is a special type of MTRR that:
1) sets up cacheability for a given memory range within SMM
2) aborts cycles that attempt to access that memory range when not in SMM mode
3) is accessible only in SMM mode and can be locked
This effectively prevents non-BIOS code from changing the cacheability of the SMM range once it is set and locked. Anyway, this is just extra hardening of the security. I believe that on older systems, if the MTRRs are set up correctly and assuming the attacker has no rights to modify the MTRRs, it is pretty safe. If the attacker has the ability to change the MTRRs, then by itself it can already do quite a bit of harm.
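To make that a little more concrete, here is a rough sketch (mine, not pointer's) of how you could peek at the MTRR setup from Linux using the stock msr driver. The MSR numbers come from Intel's public documentation; everything else (root access, the msr module being loaded, whether a given part actually reports SMRR) is an assumption, and note that per point 3 above the SMRR base/mask registers themselves are only accessible from inside SMM, so all this does from the OS side is read the capability bit and the ordinary variable-range MTRRs:

# Illustrative sketch only: dump the variable-range MTRRs and the SMRR capability
# bit via Linux's /dev/cpu/N/msr interface (needs root and 'modprobe msr').
import os, struct

IA32_MTRRCAP   = 0xFE    # bits 7:0 = number of variable MTRR pairs, bit 11 = SMRR supported
MTRR_PHYSBASE0 = 0x200   # variable-range MTRRs are base/mask pairs starting here

def rdmsr(msr, cpu=0):
    fd = os.open("/dev/cpu/%d/msr" % cpu, os.O_RDONLY)
    try:
        # the msr device returns the 64-bit MSR value as an 8-byte read at offset = MSR number
        return struct.unpack("<Q", os.pread(fd, 8, msr))[0]
    finally:
        os.close(fd)

cap = rdmsr(IA32_MTRRCAP)
print("SMRR supported:", bool(cap & (1 << 11)))

for i in range(cap & 0xFF):
    base = rdmsr(MTRR_PHYSBASE0 + 2 * i)
    mask = rdmsr(MTRR_PHYSBASE0 + 2 * i + 1)
    if mask & (1 << 11):                       # valid bit for this range
        # memory type lives in bits 7:0 of the base register; type 6 = write-back (cacheable)
        print("MTRR%d: base=%#x type=%d" % (i, base & ~0xFFF, base & 0xFF))

If one of those valid ranges showed up as write-back over the SMRAM region, that is exactly the kind of misconfiguration the attack leans on.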
I'm glad I held off on any CPU upgrades. Looks as though Intel will be cutting prices over the next few months.
I did 'merge' two of my systems and toss in some upgrades. Currently running a Q6600 (2.4GHz) with 8GB memory and Vista64. I will probably wait until Windows 7 is out to go hog wild, though I might continue to wait since this setup is working very nicely right now (after some real headaches due to a bad HDD).
Lex says AMD is finished and next is Nvidia
Nothing like having a superior technology, superior manufacturing, superior management to put a lot of whoop ass on everyone.
I find anyone who thinks they can beat INTEL at CPUs laughable. They got the factories, they got a TD team so tuned to developing the best, fastest and highest yielding process for CPUs. Got to give some credit to them AMD designers, they got some balls and have pulled some rabbits out. The INTEL designers are spoiled. If only INTEL had them green designers they'd truly be awesome.
Nvidia is history, it's the natural progression to integrate it all. This is especially true for logic functions. They've run out of need for cores; absorbing the graphics transistors is the next logical step. Nvidia is all pomp and no substance. They got no chance. They use the far crappier TSMC process that is poor yielding, poor performing and doesn't have the economies of scale to compete.
Can you say whoop assed.
Tick Tock Tick Tock its all like clock work.
Sharikou is a pussy
Scientia is a pussy
and Abinstein is gay.
None can provide any credible scenario how that silly green puke smelling company can compete or nvidia either.
The only real question is can INTEL ever learn how to do anything but x86.
"The only real question is can INTEL every learn how to do anything but x86."
Only a fool would underestimate INTC.
http://www.tomshardware.com/news/Intel-ray-tracing,5650.html
https://www.cmpevents.com/GD09/a.asp?option=C&V=11&SessID=9139
....And since you mentioned the three stooges, I'm compelled to concur with your, ah--- observations.
SPARKS
Tonus: I will probably wait until Windows 7 is out to go hog wild
Keep us informed when you do let that hog outta his pen :). I'm thinking an i920 D0 stepping, Asus P6T Deluxe mobo, 6GB of 1333 DDR3 (Crucial I guess), and one of the 40nm GPU video cards if and when they finally appear, hopefully in a couple months. Also 8TB of storage for movies, and a decent ATSC/clearQAM tuner for HDTV.
I went a bit nuts with my tax refund and got a 50" Pioneer Kuro Elite plasma TV (Pioneer discontinued their plasma production as of this month), just before all the other fence-sitters jumped off and the price went up by $700. I have it sitting on an L-shaped computer desk in my "study" or den or library - although I do none of the above in that room :). Currently it's hooked up to my Dell laptop via a DVI-to-HDMI cable and although standard-def DVD looks very good on that TV, my brother's Blu-ray player on his Dell laptop yields an absolutely hands-down "best video I've ever seen" experience. I had thought my Sony XBR4 LCD TV had a great picture - this one is so much better that even my wife (who carped about the expense and "too many TVs already in this house") commented on it.
Anyway, now I want to build an HTPC/home server/gaming machine, and use the TV as a 2nd monitor.
"Also 8TB of storage for movies"
Uhh???
SPARKS
"Also 8TB of storage for movies"
Uhh???
With a normal DVD at 4-8GB/movie, that's ONLY 1000-2000 movies - less if he has other stuff on there like music, games, and if he's using it as a TIVO - that's another 500GB or so to store a decent amount of HDTV programming.
And if he's storing Blu-ray movies that 1000-2000 comes down a lot more.
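A quick back-of-envelope on that math, just to show where the numbers come from (the per-title sizes here are my own rough assumptions, using the drive makers' decimal gigabytes and terabytes):

# Rough title counts for an 8TB movie library; per-title sizes are assumptions.
GB, TB = 1000**3, 1000**4
capacity = 8 * TB
for label, size_gb in [("DVD rip", 6), ("HDTV recording", 4), ("Blu-ray rip", 35)]:
    print("%-15s ~%d titles" % (label, capacity // (size_gb * GB)))

Which lands right in that 1000-2000 range for DVDs, and only a couple hundred if it all goes Blu-ray.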
Moose - Are you doing a RAID 5 type setup? Are you talking 4 2TB drives (or 5 if doing RAID)? I'm toying with the same idea but am considering whether to just set up a separate NAS-type box for storage and use either a wireless or ethernet connection.
Yeah,----- I figured out HOW it was done, but for a guy who sounds like there was a WAF ("wife acceptance factor", read: budget), the available solution, at least the one I put together, ain't cheap.
http://www.newegg.com/Product/Product.aspx?Item=N82E16822136344
http://www.newegg.com/Product/Product.aspx?Item=N82E16817198028
http://www.newegg.com/Product/Product.aspx?Item=N82E16815116036
We're talking about 2 large.
I assume we are talking separate volumes here (as opposed to RAID 0).
SPARKS
By the way, will DRM be a fly in this 8TB ointment?
SPARKS
Nonny: "Keep us informed when you do let that hog outta his pen :)."
Will do! I had been thinking of holding off for a while longer since my needs are changing (less PC gaming, more graphics). But this 'merged' PC has been balky and I'm not sure how much of it is Windows Vista 64 (though I suspect that it's a LOT of it) and how much is from jamming so much hardware into it.
At the moment it runs a Q6600 (2.4GHz, not OCed), 8GB RAM (decent Corsair stuff, not cheap and not expensive), two RAID 0 arrays on SATA drives (2 x 400 for the C:, 2 x 250 for the D:) and an external eSATA RAID 1 (2 x 500) for my data. I had both of my Radeon 4870s in there but only have one of them in there now. I've also got two DVD-R and one BD-R drive in there (hey, I'm a hardware geek!).
I've had to reinstall a few times already, once because of a bad HDD, once because... I'm not sure. Windows blue-screened on me after I made a minor registry edit (to allow OpenGL support in Photoshop, but I didn't even get the chance to reboot with the new changes, much less try them out!) and then the C: partition decided that it wasn't a bootable Windows partition. Several attempts at repair went nowhere, and after a reinstall it works just fine again.
All of that is not so bad (the aggravation factor of reinstalling Windows aside) but not being able to boot into Windows means that I cannot manually "deactivate" my Adobe software, and thus I get closer to being locked out of it until I call Adobe to clear up the problem. They're good about it, but it's annoying and I wonder how many times they'll put up with "yeah, hard drive crashed again" before they decide that maybe I've installed their product enough times already.
So instead of doing incremental upgrades, I may just rebuild from scratch. Since it's primarily 2D/3D graphics that I want to run, and not games, I can go easy on the video cards, mostly to avoid those 11" long and double-wide power hogs. I'd like to get two cards, although one card handles dual monitors like a champ as it is.
Tonus wrote...
All of that is not so bad (the aggravation factor of reinstalling Windows aside) but not being able to boot into Windows means that I cannot manually "deactivate" my Adobe software, and thus I get closer to being locked out of it until I call Adobe to clear up the problem. They're good about it, but it's annoying and I wonder how many times they'll put up with "yeah, hard drive crashed again" before they decide that maybe I've installed their product enough times already.
Probably doesn't happen with a warez'd version of Photoshop. Kind of amusing and paradoxical that software and media DRM ends up hurting legitimate users more than illegitimate users.
And here's something I was unaware of (Yeah, it's Fudzilla, but it's only the component pricing that I'm talking about)
http://www.fudzilla.com/index.php?option=com_content&task=view&id=12756&Itemid=1
So a 1.86 Z540 goes for $135... compare that to AMD quad cores!!!! I'm amazed that Intel can price like that (granted, this is probably one of the lower volume parts, but still)... yeah, these Atoms must really be killing margins :) Shame it's such a failure; with a much smaller die size, Intel has to hate selling these puppies right about where AMD is selling their tri-cores and quad-cores.
I'm now convinced I VASTLY underestimated the hold these things will have, and once the SOC solution is out, AMD is going to have to come up with a low-end design - I simply can't see the stripped-down K8/K10 competing once Intel addresses the chipset and puts some distance on the power front.
The only real long term problem is balancing making these more powerful without eating into the mainstream market (and keeping it low end). I guess the other option is to make Nehalem a truly high end chip and gradually phase out Core2 as the atoms start increasing in performance. (though it seems like Intel is looking at Nehalem/integrated IGP as the mainstream option longer term)
Thoughts?
"I'm now convinced I VASTLY underestimated the hold these things will have"
Man! Now I know I'm in good company!
"I guess the other option is to make Nehalem a truly high end chip and gradually phase out Core2 as the atoms start increasing in performance."
No doubt. Socket 1156 for the bread and butter, 1366 for the power crowd, and ATOM for the small power-miser market. Perfect.
BTW: It seems INTC "is seeking permission from its shareholders to revalue worthless employee stock options, a controversial move that the world's biggest chipmaker says is needed to retain critical staff."
They are putting it to a shareholder vote. I say go for it. You boys have earned it, and then some. Besides, with the way you all have been executing, especially in these times, a nice bonus wouldn't hurt either. All said, to the victors go the spoils. This should be no exception, as performance is KING.
http://www.reuters.com/article/rbssITServicesConsulting/idUSN2134018820090323
SPARKS
Well let's just say that I have been a loyal Netflix subscriber for the last 6 years :). Many of the movies I watched more than once or twice, I wound up buying anyway since I used DVD-R single-layer discs for the most part, until the last year or so when the dual-layers dropped below $1 apiece in bulk. And for those really special movies, I further upgraded to Blu-ray, after making sure the studio did proper justice to them on the port to Blu-ray (I find the user reviews on Amazon.com and some of the home theater websites to be invaluable on whether the Blu-ray version is worth the price). For example, the LOTR trilogy is expected out next year (Peter Jackson is too busy working on the prequel The Hobbit at the moment, and he wants to do justice to Blu-ray on the trilogy). Ditto for The Matrix, and for the Indiana Jones quadrilogy. And I already have Blade Runner director's cut on Blu-ray. However, if I had to pay full-price, I would never have bought Predator or Total Recall - instead I got them for under $10 - basically they're just a port to Blu-ray from the DVD.
The P6T Deluxe supports up to 6 SATA devices, so yes I'm looking for the 2TB drives to come down in price. The Seagate 1.5's were under $100 on Tigerdirect.com earlier this month, so I suspect maybe this summer once the 2TBs are no longer the top rung, there might be a similar sale. Otherwise it'll be an external enclosure and about 6 of the 1.5's as individual volumes. A DVD burner and a BD-R drive and maybe an SSD boot volume plus a RAID0 volume for non-entertainment storage is probably how I'll go. If I need to expand, there's always NAS and MS home server :). There's only a few shows I'll record - Lost and Scrubs for example - although my wife will want to record American Idol and Dancing with the Idiots (Stars) which I only watch once in a while, mainly in hopes of one of those strapless outfits coming loose - no luck so far! :) They'd probably edit that part out anyway...
Sony used to make a 400-disc DVD changer that I considered at one time, but their software wasn't compatible with MS Media Center back when I was interested in it. They really intended it for use with their version of a media PC or HTPC. And the price back then was about $400 anyway - what I'd pay for four 2TB drives, which would hold nearly 5 times as much data...
PS - if somebody made a 400-disc BDR changer, and if 50GB recordable discs were, say, $1 apiece, I'd jump all over that :)
Sparks - DRM is a moving target for BD, I believe, but not a problem for DVD. Either DVDShrink (which hasn't been updated for about 4 years now) or DVDFabDecrypter works just fine for me, although I prefer DVDShrink's 2-pass compression algorithm as (1) I think it does a better job, and (2) it is completely selective - you choose which files to compress, trim, or both. However, it'll choke on CRC errors, which some DVDs have. For those I'll use Fab with no compression, then compress with Shrink. If it's a really good movie then I'll use a dual-layer disc and not compress at all, or buy it on sale on Amazon.
"Probably doesn't happen with a warez'd version of Photoshop. Kind of amusing and paradoxical that software and media DRM ends up hurting legitimate users more than illegitimate users."
Believe me, that was foremost on my mind while this whole shitfest was going on. And it's ironic, because I gave up "warezing" years ago for two reasons: one, I felt it was reasonable for companies to get my dollars for programs that I wanted to use and two, hacked software had become a minefield. Both trying to find a reliable download and getting a clean file had become difficult and painful.
So nowadays, instead of download headaches and virus threats, I deal with installation headaches and DRM threats... except that now I pay upwards of $500-1500 for the privilege. And this is supposed to make me WANT to stay legit????
PS- regarding Atom, I saw one person at AMDZone opine that Intel's CEO should be fired for the disaster that Atom has become. What caused this reaction? A story on how Intel may need to raise the price on Atom CPUs due to high demand because so many Asian OEMs are planning netbooks around it.
Yes, you read that right, Intel's CEO should be fired because Intel has a product that is so popular that demand is far outstripping supply. I'm not sure whether to laugh or cry. Can people really be that stupid?
I'm now convinced I VASTLY underestimated the hold these things will have....
The only real long term problem is balancing making these more powerful without eating into the mainstream market (and keeping it low end).
I'll admit I'm a bit surprised as well. I saw the mobility aspect as the big driver. I didn't anticipate that anyone would be willing to settle for that much less power in a desktop. I guess that goes to show I'm not exactly a mainstream user either. :)
The whole balancing act with increasing power is fascinating. The only significant weakness Atom has relative to the next tier of CPUs is its inability to handle video editing. It wouldn't take the addition of much processing power to address that issue.
I think Intel's best bet is to go ahead and give Atom that power, but limit the amount of memory that the CPU can access through the IMC. This would allow a clear differentiation of the high and low end processors without having to artificially limit the processing power.
I actually see two diverging trends in processors driven by the ability of modern processors to do pretty much all the average user needs. I would focus on Atom for the low end as specified above and create a new mid-range tier of processors that focus on truly shrinking the processor with each die shrink. The processing power wouldn't change much from node to node, but the cost and power would go down.
The upper end would maintain die size and power consumption near current levels and use the increased transistor count to add more processing power. This would satisfy the power users and they typically have been willing to pay the premium Intel could demand for these parts.
I've said it before, but I think speech recognition is the enabling app for these high end processors, but I believe it is still two more nodes away. Once speech recognition becomes a viable app, Intel could work on pushing the ability down the stack over successive die shrinks.
I ran across this analysis of the ARM vs Intel battle. I thought it was one of the most well thought out and balanced pieces I've seen.
It also got me to thinking that there might not need to be a clear-cut winner in this space. With Intel porting a version of Atom to the TSMC process, they will most likely combine the core Atom CPU with other ARM modules to produce custom designs. I see no reason for this model not to continue forward, with Intel possibly getting access to ARM IP (through licensing), and eventually making these types of SoCs in house.
Yes, you read that right, Intel's CEO should be fired because Intel has a product that is so popular that demand is far outstripping supply. I'm not sure whether to laugh or cry. Can people really be that stupid?
They can only be that stupid by choice. They simply don't want to see the truth, so they focus on ASPs.
It is cheaper than Nehalem, so it must be losing money. The fact they are raising prices is clear proof that they are losing money on Atom. If they were making money, they wouldn't raise prices. Isn't that obvious?!?!
They will never accept the idea that Intel is going to use pricing to reduce demand because of a capacity issue (in assembly, not manufacturing), because they don't want it to be true.
Self-deception is a powerful force. I learned a cheesy little rhyme years ago that keeps being proved true over and over again.
"A man convinced against his will,
is of the same opinion still."
I do think Intel misjudged the demand (and some of that obviously has to do with the economy).
Also I still am of the belief that Intel just wanted to get it out on the market, see what niches it fit in, and then "fix" things on the 2nd gen (like the archaic and power hungry chipset it was tied to). If Intel had spent some more time and got the chipset right on the first gen, they would own the market for 5+ years.
I think they gave AMD a window - but AMD's response of "we'll just lower the clocks and voltage" will not cut it long (or even intermediate) term. They had an opportunity if they had executed on their Bobcat roadmap - now, once the game changes again and the SOC (and 2nd gen Atom) drops the power dramatically, AMD will be in severe catch-up mode, trying to argue performance over power and cost in a market that really is not performance driven.
Tonus: PS- regarding Atom, I saw one person at AMDZone opine that Intel's CEO should be fired for the disaster that Atom has become. What caused this reaction? A story on how Intel may need to raise the price on Atom CPUs due to high demand because so many Asian OEMs are planning netbooks around it.
That's probably because AMDZonerz are not used to a company or CEO actually making a profitable product [gasp!].
I guess we'll see in about 3 weeks how the current quarter treated AMD and Intel. Haven't seen any updated profit warnings from either company so maybe they're both doing as expected or better. I've seen a bunch of HP desktops featuring the older Phenoms such as the 9950 at the local Sams club, for $900 - $1200 which includes a 22" or 24" monitor. Now if they had the 25.5" Samsung "touch of color" HDTV/1920x1200 monitor included, that would be a bargain :)...
Also I still am of the belief that Intel just wanted to get it out on the market, see what niches it fit in, and then "fix" things on the 2nd gen (like the archaic and power hungry chipset it was tied to).
I think you are mostly right. I don't agree that they were looking for a "niche" market however (though I agree they were planning on starting in a niche and spreading out from there). This chip is a big part of their strategy to triple their revenue stream by moving into adjacent markets, one of which is CE devices.
What Intel missed was the extent of the already existing desire on the part of consumers for products that fit the low-cost/portable/power efficient model. Sales wildly exceeded Intel's initial expectations. This left them saddled with a weak chipset until the 2nd generation comes out.
Regarding the timing, I really think the decision to release the product early on a cobbled together chipset was driven by pressure to try and get into the iPhone. I think they felt a need to show Apple a viable, compelling CPU that would add value to Apple's existing product.
However, in my opinion that is exactly the wrong approach to take with Apple. Apple wants something that is exclusively theirs. Something that will give them product differentiation. And a mass produced Atom that anyone can use is the wrong thing to offer a customer that wants something uniquely theirs.
x86 is going everywhere
Moore's law and INTEL's superior process will allow it to morph the x86 into many different applications
Low power with lower performance is coming that will compete with and beat ARM. Wait only another generation or so and the difference in ARM's advantage will be so small that x86 compatibility will win.
In the mainstream and high end x86 has already won.
Graphics is also going to be owned by INTEL
Can you say AMD and nVidia got a can of whoop ass
ARM is going to get WHOOOOOPED ASS toooooo
Apple without Jobs is going to be nothing. Jobs will move on and Sony and others are gunning for them.
Cool companies only are cool for so long before others copy and its no longer so cool.
I think you are mostly right. I don't agree that they were looking for a "niche" market however
Yeah, niche was a poor choice of words... I wasn't intending to refer to "niche markets" but to "finding its niche" (or finding its position in the various markets)
Wait only another generation or so and the difference in ARM's advantage will be so small that x86 compatibility will win.
You keep saying this, as if you say it enough others might listen and agree with you. ARM's advantage is in its specialization... ARM products that are 2 technology nodes behind Atom still have lower power. You make it sound as if everything is netbook space. ARM will have trouble "moving up" to that space with the x86 compatibility tradeoffs, but in the areas it currently OWNS (MID, smartphones, regular phones, etc...) the SW is typically customized anyway and x86 compatibility is not a huge deal.
You seem unable to comprehend anything beyond x86 and computer markets. In all likelihood Intel will begin to make a dent in the high end smartphones and the ultramobile markets (if the SOC solution delivers) but even with that it still is not on par with ARM in power (and 32nm with the SOC will also not probably enable that).
Sparks, this link is for you.
In all likelihood Intel will begin to make a dent in the high end smartphones and the ultramobile markets (if the SOC solution delivers) but even with that it still is not on par with ARM in power (and 32nm with the SOC will also not probably enable that).
I think the real question is whether or not the performance and x86 legacy solutions of the 32nm Atom SOC will justify giving up the battery life that ARM will provide. That is the trade off as I see it in for the upcoming process node. Intel's move to 22nm should see both ARM and Atom at near power and performance parity (by my estimation). At that point we will be able to really see whether or not the x86 legacy stack is worth anything.
Another interesting development is all the reports of how MS is working to slim down Windows 7 to make it fly on smaller form factors. Clearly they see a need to play in the smaller form factor arena as well. If they can successfully deliver an acceptable experience on the small form factor devices, this will be a huge boost to Atom and whatever solution AMD eventually decides to cobble together. (Note that I agree they HAVE to play in this space if they want to be a serious contender.)
ITK! Thanks!
Some guys have A-Rod, some guys have Michael Jordan, and some have Tiger Woods. But I have Mr. Bohr, sports fans. Quiet and unassuming, the man walks very softly and carries a HUGE $7B stick.
Of particular interest, is the defect density reduction he spoke of, going from 65 to 45. He's not an easy guy to read, not by a long shot. However, I detected a bit of pride, confidence, and accomplishment in his answer. Surprisingly, features are shaping up as they go smaller at the same wavelength.
'Copy exactly', which our own resident GURU has been hammering away at, has indeed been a cornerstone of INTC's undeniable success. Mr. Bohr goes on to talk about 32nm much the same way I do when I replace a washer and dryer! More importantly, he stresses (pun not intended) the necessary symbiotic relationship between design and manufacturing. I suppose AMD feels this isn't important anymore, especially with regard to yields.
What I found particularly interesting was the 9 layer, low-k interconnects. I'm assuming this is improved lower capacitance between layers speeding up traffic, if you will, without the whole shebang crumbling beneath your feet. (I'll bet NVDA wishes they had this tech.)
Additionally, he feels very confident about 193nm @ 22nm, but he's left the door open on EUV @ 15nm. (Easy, G, don't pounce on me.)
Plus, he has no problem making two passes on the pattern. Obviously, the tools line up just right every time.
And finally, a new term, Computational Lithography?
Thanks ITK, I've got plenty of Googling to do.
SPARKS.
Oh, yeah, no mention of Tri-Gate.
SPARKS
Oh, yeah, no mention of Tri-Gate.
I wouldn't read much into that. If you notice, he gave up almost nothing regarding 22nm. The only solid statement I caught was that immersion litho will be used on 22nm.
'Copy exactly', which our own resident GURU has been hammering away at, has indeed been a cornerstone of INTC's undeniable success.
Copy exactly may be effective, but I'm fairly confident in saying that you would have no trouble finding plenty of engineers at Intel that do find it to be restrictive. Of course, anything that keeps an engineer from trying out his latest tweak is restrictive, so maybe that isn't saying much. :)
Of particular interest, is the defect density reduction he spoke of, going from 65 to 45.
There are those that subscribe to the idea that since there was room for improvement, that yields must have been crummy on 65nm. Don't let the fact that to my knowledge no one in the industry has ever had a perfect 12" wafer influence your view of this one. Always side with a forcefully stated position regardless of what the facts may say.
What I found particularly interesting was the 9 layer, low-k interconnects. I'm assuming this is improved lower capacitance between layers speeding up traffic, if you will, without the whole shebang crumbling beneath your feet. (I'll bet NVDA wishes they had this tech.)
Intel maintaining 9 layers is impressive... there had to have been some RC improvements to enable this. While some of it is probably a lower-k dielectric, I suspect Intel did some stuff on the "R" (resistance) side of things too. One guess would be a scaled barrier/seed - Intel may be using ALD on this now? The barriers tend to be fairly resistive, and even though they are fairly thin compared to the actual copper, they add significantly to the overall R - so the desire is to scale these as thin as possible, but they still have to work as a barrier (and as a seed for the electroplating process).
There may have also been some work done on the etch stop layer and not just the bulk dielectric (which is what generally gets all the press). The etch stop layer impacts the overall K and is fairly similar from the barrier perspective - you want to scale these as thin as possible, as long as they still work as an etch stop. (The etch stop layers have a higher K value than the bulk dielectric and add to the effective K of the combined stack - which is what ultimately matters.)
This is another example of looking at the integration and actual impact as opposed to the buzzwords. IBM's high performance 65nm tech used 10 metal layers (and I would have to assume 32nm would be 10 or even 11). So you can throw around low k claims, but if you end up using 10-20% more metal layers, is it really worth getting excited about?
Does anyone have the metal layer count on AMD's 65nm and 45nm process? (I think 65nm might have been 11 layers for Barcy?)
So all of this talk about individual techs... immersion... low k... it again comes down to what is the end performance/cost result? When this is not spelled out, be wary...
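To put a rough number on that etch-stop point, here is a toy series-capacitor model. The thickness split and the k values are made-up but plausible assumptions, not anyone's actual process numbers; the point is just that even a thin, higher-k etch stop drags the effective k of the stack up from the headline bulk value:

# Toy model: effective k of a bulk low-k ILD plus its etch-stop layer, treated as
# dielectrics in series (plate-to-plate through the stack). Numbers are assumptions.
def k_eff_series(layers):
    total = sum(t for t, _ in layers)            # layers = [(thickness fraction, k), ...]
    return total / sum(t / k for t, k in layers)

stack = [(0.85, 2.7),   # bulk low-k dielectric (assumed k)
         (0.15, 5.0)]   # SiC/SiN-style etch stop (assumed k)
print("bulk-only k = 2.7, stack effective k = %.2f" % k_eff_series(stack))
# prints ~2.90, i.e. the 'low k' headline number isn't quite what the wires see

A parallel-path model (line-to-line within a layer) pushes the effective value up even further, which is why the combined stack, not the bulk film, is the number that ultimately matters.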
Copy exactly may be effective, but I'm fairly confident in saying that you would have no trouble finding plenty of engineers at Intel that do find it to be restrictive. Of course, anything that keeps an engineer from trying out his latest tweak is restrictive, so maybe that isn't saying much. :)
There are times when CE! transforms from Copy Exactly to Close Enough. ;)
"The etch stop layers have a higher K value then the bulk dielectric and adds to the effective K of the combined stack - which is what ultimately matters."
What are we talking about here, in picofarads that is? The smallest caps I ever loaded onto a circuit board (a superhet receiver) were about 100 pF. Christ, there are mil-spec plugs and sockets with higher values!
How do they measure this? Shielded leads on the testers have higher values, too!
Amazing.
Hey Orthogonal, one mention of the 'backend' was enough to flush you out.
In any case, nice work.
SPARKS
I see in the news that AMD's Foundry spinoff will start accepting orders for 32nm bulk Si products in Q4 of this year. Granted that won't include any AMD CPUs since IIRC those are SOI, not bulk. Anyone think in this economy they'll actually get any orders?
Why mess with success? Why allow some manufacturing engineer to try an idea on a process that is already working and high yielding? You invest billions in a factory that is producing billions a quarter in revenue, and you let some engineer play? Their job is to keep the process on track. Why mess with success?
There is nothing to comprehend BUT x86. x86 generates more revenue and more income for INTEL than all other semiconductor business segments combined.
There is a reason INTEL has risen from a bit-player DRAM/EPROM manufacturer to the biggest chip company in the world, and it is pulling away more and more now that the memory bust has sunk Samsung.
They don't need to look beyond x86. Wherever they set their sights with x86 they can conquer, given time and money. Last I checked they have a lot of both.
ARM and Nvidia should be afraid, very afraid.
Analog less so, as it's simply not efficient to integrate the high-performance FE into bulk silicon. But sometime in the future, say 2015 or so, when they move to compound semiconductors, they'll take that business too.
"There are times when CE! transforms from Copy Exactly to Close Enough. ;)"
LOL. I work with engineers (geotechnical/civil) and I know that a lot of them would appreciate that joke. :)
Process is so complicated that these manufacturing guys really can't comprehend what is going on, thus copy exactly is the only way to go.
According to this Anandtech article the XEON 7xxx series has admirably held up the fort. As we all know, and the test results verified, the XEON has had a few weak spots. However, the article's conclusion factors in some new chip by INTC called---uh---NEHALEM.
http://it.anandtech.com/IT/showdoc.aspx?i=3484
Today is the 26th and if my calculations are correct, this Monday is the 30th. DELL, and others, are going to open the flood gates with the new badasses.
http://www.vnunet.com/vnunet/news/2239211/dell-unveils-nehalem-systems
http://blogs.zdnet.com/Foremski/?p=414
27B? Hmmm, INTC share price is back to November 2008 levels----hmmm.
AMD's days in the sunshine are over, come Monday morning.
SPARKS
Why mess with success? Why allow some manufacturing engineer to try an idea on a process that is already working and high yielding? You invest billions in a factory that is producing billions a quarter in revenue, and you let some engineer play? Their job is to keep the process on track. Why mess with success?
-----------
Process is so complicated that these manufacturing guys really can't comprehend what is going on, thus copy exactly is the only way to go.
Just because the process has reached HVM doesn't mean the process doesn't continue to improve. There are times that the full scope of a problem isn't realized until you ramp it to 6K+ wafers across significantly more tools. Yes, CE! is a great way to transfer the technology since the HVM site hasn't had the same experience with it as the development site, but it doesn't mean that improvement ends.
Development sites have significantly more manpower and nearly an order of magnitude the budget! HVM has to figure out how to do the same things on tighter purse strings.
Have you ever seen one of the graphs that Intel shows with average defect density over time? Notice the long tails that slowly improve over the 2+ years of the process? That is done almost exclusively by the HVM sites. Once the first HVM site is on its feet, the development site drops it like a rock. After that, it's up to HVM to make it better. CE! is really for the transfer and startup of a process/fab. Once the development site has moved on to the next generation, there's no reason to just sit on it.
What does HVM stand for? High Volume Manufacturing?
Anyone think in this economy they'll actually get any orders?
Depends on the cancellation policy! Accepting orders is YET ANOTHER ambiguous AMD term to make things sound far along, when it could mean anything. Accepting orders for when? How much volume? As always the lack of specificity should be a clue. With Intel 32nm around the corner, AMD wants to make it seem like they are closing the gap (again).
This will be a lower performance target process (by design) and will be bare Si based (so 32nm CPU's are still a ways out). And keep in mind that if the foundry is taking orders this has to be done WELL in advance as since it is a new company from a foundry perspective, customers will have to understand the process specs, design rule requirements, and go through a tapeout, debug, new mask, debug... etc...
So when you figure the time for a customer to make sure it meets the foundry process/design rules, the 6-9 months from first silicon product to revenue product, all this means is that 32nm bulk Si process is probably mid-2010 at the earliest, and probably in low volumes, with limited customers.
It seems odd, as one would think the first "customer" would be AMD in terms of chipset and graphics products (which are on bulk Si) - I would suspect this would soak up most of the early capacity and would probably be the biggest bang for the buck in terms of volume and engineering resources. Why farm out 32nm capacity until AMD gets its products on it first?
"What does HVM stand for? High Volume Manufacturing?"
At AMD I think it means 'High Volume Marketing'.
SPARKS
Orthogonal, Please don't feed the troll. It only results in more rubbish being posted here.
You are entirely correct about the contribution of HVM to the process yields.
For those that don't use this kind of data on a regular basis, I should probably point out that being plotted on a log scale means that the further down the y-axis you move, the smaller the degree of improvement. I only bring this up to point out the fallacy of the argument that continued improvement as shown on the charts means that the yields were poor when the product went into production.
I don't mean to disparage the effort of the process engineers in the manufacturing group when I say this. If anything, incremental improvement gets harder as the process improves. When you have 1000 defects on a wafer at the end of the line it is easy to find ways to get rid of 500 of them (relatively speaking). When you only have 50 defects, all the obvious solutions are gone and getting rid of the next 25 is very hard.
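To put a toy number on that diminishing-returns point, here's a quick sketch using the simple Poisson yield model, Y = exp(-A x D0), with an invented 1 cm^2 die. The defect densities are made up; the shape of the curve is the point - each halving of D0 buys less and less once yields are already decent.

import math

# Toy Poisson yield model for an invented 1 cm^2 die; D0 values are made up.
area_cm2 = 1.0

def poisson_yield(d0_per_cm2):
    return math.exp(-area_cm2 * d0_per_cm2)

for d0 in (2.0, 1.0, 0.5, 0.25, 0.1, 0.05):
    print(f"D0 = {d0:4.2f}/cm^2  ->  yield ~ {poisson_yield(d0) * 100:4.1f}%")

With these numbers you go roughly 14% -> 37% -> 61% -> 78% -> 90% -> 95%, which is exactly the kind of long flattening tail you see on those log-scale charts.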
Regarding "copy exactly" I believe it is also useful in HVM after process transfer as a means to keep the various factories on a given process node aligned. If I've read the history right, it was implemented by Craig Barrett as a means to eliminate factory-to-factory variation in HVM and not really as a process transfer tool. The added benefit of bringing new factories up more quickly was a bonus as I understand it.
As an aside, this is something AMD has never had to deal with. It will be interesting to see how they manage any process overlap in their factories after Luther Forest is built (sorry, Sparks).
I also happen to believe that D1D is one of the best things to ever happen to Intel.
Prior to the construction of D1D I believe that the development group moved into a fab and did their development. Then the manufacturing group would join the development group in the fab and development would train them to take over. Once the manufacturing group was trained up, the development group would move on to another fab.
Now, with D1D both the development group and the ramp group occupy the same building. Why do I think this makes a difference?
Because under the old system, the development group was divorced from the old technology once they moved on. So they weren't in a position to learn from all the yield improvements that Orthogonal and all his buddies have come up with. The development group would go about their business without the benefit of everything that was learned on the old process.
Now that they are in the same building with a group that runs higher volumes, they get to see the issues and the resolution first hand. It has to be hard to ignore the fact that one of your fellow employees is cursing the day you were born every day because you gave him a cruddy process. If that doesn't motivate you to learn from the problems on the last technology and bring the solutions forward, I don't know what will.
And having that information get fed back to the development group is a huge win for Intel's future processes.
I never worked in the fab, but I did work in assembly and test before. Copy Exactly doesn't mean you cannot change from the original point, just that any change, if one is to be made, and if more than ONE factory is producing the product, has to be proliferated to the other factories as well.
This could get interesting, assuming it's not just a stunt or a 'no-other-option' type of move.
NVIDIA Corporation today announced that it has filed a countersuit in the Court of Chancery in the State of Delaware against Intel Corporation for breach of contract. The action also seeks to terminate Intel's license to NVIDIA's valuable patent portfolio.
Anon: Accepting orders is YET ANOTHER ambiguous AMD term to make things sound far along, when it could mean anything. Accepting orders for when? How much volume? As always the lack of specificity should be a clue. With Intel 32nm around the corner, AMD wants to make it seem like they are closing the gap (again).
Yeah, it struck me as some chest-thumping by ATIC/AMD/UAE/GF, both as 'in your face, Intel' in regards to the x86 dispute, and also as 'closing the gap'. However a third thought occurred to me (yes, a rare event indeed :). Maybe GF is actually worried about AMD losing the x86 license and this is a way to dig themselves outta that mess if it should happen.
And I agree with your statement that we're unlikely to see any actual 32nm products until mid-2010. GF may be a 'new company' in name only, but AFAIK they have never had to fab anybody's products except AMDs before. So there may be a learning curve involved. I just wonder what TSMC thinks of all this...
Sparks: At AMD I think it means 'High Volume Marketing'.
LOL - good one, Sparks! Reminds me of those late-night TV commercials where the station turns the volume up on some fast-talking, BS-spouting, "too good to be true" offer sure to help relieve that unsightly bulge in your wallet.
but AFAIK they have never had to fab anybody's products except AMDs before. So there may be a learning curve involved. I just wonder what TSMC thinks of all this...
It's not just a learning curve of different products (which is an issue), you don't just sell first silicon. Any design given to GF will have to be taped out, then tested, mask(s) likely modified and then run again (and this could be iterated a few times...). Think about "A0" silicon to what is typically C2, C3... granted these are CPUs which might be a bit complicated, but any product is going to require at least a couple of silicon turns.
And unless you are planning on hot boxing every product, you are talking a couple of months for each information turn (moving the Si thru the fab, fixing the issue, reworking the masks).
I think GF is trying to figure out what demand for 32nm Si will be beyond AMD chipset/graphics. It would be best for them to run this stuff first as you have a close working tie with all the ex-AMD'ers and a somewhat predictable demand. Building the capacity out beyond that needs to be done well in advance and by "taking orders" they may be trying to prevent building out too much capacity too quickly?
The TSMC thing will be interesting to play out... does their work just go 100%-0% real quick once 32nm is proven out? Or is it phased out? What if TSMC is cheaper, due to their economies of scale? Does AMD negotiate TSMC against GF?
"What if TSMC is cheaper, due to their economies of scale? Does AMD negotiate TSMC against GF?"
Brilliant. Painting a picture of this broad landscape leaves very rich and vibrant undertones.
That clockwork ticker never ceases to impress. Your analysis is unprecedented.
Well done. Indeed, the plot thickens.
SPARKS
"Maybe GF is actually worried about AMD losing the x86 license and this is a way to dig themselves outta that mess if it should happen."
The big "G" must be raising the bar, Moose. You're up there on this one, don't cut yourself short.
It makes one wonder if GF factored in a fight with INTC before they signed on to the deal. Perhaps, they received reassurances from a certain AG of NYS that they/he would be in on the battle, should one materialize.
And now the NVDA scenario?
This is going to be one helluva year for industry daytime drama. Very juicy stuff, and think of the deals we'll never know about!
Well almost all of us. GURU's got that crystal ball in his head. He's probably got the whole thing computed, categorized, and collated.
SPARKS
And unless you are planning on hot boxing every product, you are talking a couple of months for each information turn (moving the Si thru the fab, fixing the issue, reworking the masks).
This doesn't really take the foundry operating model into account. The use of "shuttles" has become commonplace in foundries.
UMC describes their shuttle service in the link above as follows: UMC's Silicon Shuttle provides a cost-effective means for you to verify your designs, prototypes, and IP in UMC silicon. The program allows separate "seats" to be purchased on the same Silicon Shuttle test wafer, allowing customers to split the overall mask cost among multiple parties to reduce cost-per-customer to a fraction of the total.
Lumping multiple products on the same wafer also allows a reduction in the number of lots that would need to be hotboxed in order to improve your customers' data turns. And at least at the PCB shop where I worked, we charged a premium to expedite designs. So you sell "seats" on an expedited shuttle and make a bunch of extra cash without significantly impacting capacity or existing products. Most factories I know of always have a few hotboxes in the line, so it would really just be business as usual for foundries to expedite new customer designs this way.
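A quick toy example of the seat economics, with an invented mask-set cost (this is not UMC's actual pricing, just arithmetic):

# Splitting one hypothetical mask-set cost across shuttle "seats".
mask_set_cost = 1_000_000   # invented full mask-set cost, USD
for seats in (1, 4, 8, 16):
    print(f"{seats:2d} seat(s) -> ~${mask_set_cost / seats:,.0f} per customer")

Even if the real numbers are very different, the shape of the deal is the same: the mask set is the big fixed cost, so every extra seat on the shuttle is nearly free margin for the foundry and a big discount for each customer.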
It occurs to me that not everyone here will know what a hotbox is. A hotbox is a lot of wafers that is given priority when it arrives at each operation. Hotboxes can be assigned different levels of priority. Lower level hotboxes get bumped to the top of the queue when they arrive (i.e. they get loaded on the next available tool). For higher level hotboxes you actually hold tools idle so these very high priority lots don't have to wait. This can be particularly important for diffusion furnaces where processing can take several hours.
A high level hotbox can move through the factory in around half the time it takes a standard lot. The trade off for the increased velocity is slowing down the other production in the line because you are making those lots wait.
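If it helps to picture the dispatching, here's a bare-bones sketch - nothing like a real fab's automation system, just a priority queue where hot lots jump the line while standard lots stay first-in, first-out:

import heapq, itertools

_seq = itertools.count()   # tie-breaker so equal-priority lots stay FIFO

def enqueue(queue, lot_id, priority=3):
    """priority: 1 = hold-the-tool hot lot, 2 = hot lot, 3 = standard lot."""
    heapq.heappush(queue, (priority, next(_seq), lot_id))

def next_lot(queue):
    return heapq.heappop(queue)[2] if queue else None

tool_queue = []
enqueue(tool_queue, "LOT1001")               # standard lot
enqueue(tool_queue, "LOT1002")               # standard lot
enqueue(tool_queue, "HOT7734", priority=1)   # hot lot arrives last, runs first
print(next_lot(tool_queue))                  # -> HOT7734

The real systems obviously track far more (tool state, reserved tools, recipe requirements), but the queue-jumping is the essence of why everything else in the line slows down.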
It occurs to me that not everyone here will know what a hotbox is. A hotbox is a lot of wafers that is given priority when it arrives at each operation. Hotboxes can be assigned different levels of priority. Lower level hotboxes get bumped to the top of the queue when they arrive (i.e. they get loaded on the next available tool). For higher level hotboxes you actually hold tools idle so these very high priority lots don't have to wait. This can be particularly important for diffusion furnaces where processing can take several hours.
Horror Story!!! Hot Boxes can cause you to lose sleep. Literally. I just love it when you get a call at 2AM to find out a hotbox is stuck on your tool. Or worse, the wafer handling unit dropped one of the wafers. Not fun. The chances of it happening are so remote, but sometimes Murphy likes to take up residence and overstay his welcome.
A little confused about the shuttles... yeah it may help with the number of priority lots, but the lots still take time to turn, then analyze, then fix, then re-layout.
Also while you may be able to share some mask costs... you can only get so much into a single field/exposure. As litho is step and scan anything outside the size of one field means a new mask and a new wafer. Unless you are somehow using a common interconnect structure (which would seem difficult given the differences in design), I'm not sure how much you are actually saving in mask costs.
It's not like you can make a single mask with 10 different customer designs stuffed into one field for validation. While I see the speed/clutter improvement aspect of this approach... not seeing the mask savings.
(That said it was good info on the concept of shuttlers)
"Horror Story!!! Hot Boxes can cause you to lose sleep. Literally."
OK fellas, give the rest of us who haven't had the good fortune to enter "Tech-Freak" Valhalla, a break. Just a couple of basic questions about the "Hot Box Nightmare scenario."
I'll make a few assumptions that these special case Pods are:
1. Clearly marked so you see them coming a 100 yards away.
Do they elicit a "Oh shit, look what's coming" response?
2. Each "hot box" has some type of identifying signature encoded on the 'hot box' that tells the particular tool(s)/engineer to change its parameters, exposure time, solution strength, aperture settings, spin time, etc., and a myriad of other variables I'll never know about.
If these tool variables do change, does an engineer (you) ever get the tool dialed in back to its original, say, perfectly dialed-in, full production setup?
Or is it "Damn it, I had that/these frig'in tool(s) tweaked perfectly"?
3. Tool setup is not a "Set it, and Forget it" affair.
If the variables do change as mentioned above, do the tools easily go back to the perfectly set, original configuration, or is there additional tweaking involved to make life more miserable?
4. Obviously, this slows production.
Does your Supervisor recognize that "Hey the poor bastard had to deal with two "hot boxes today," during his/her shift?
5. 2 AM phone calls are not fun.
Is it "Christ, Sweetheart, I'd better get down there" or is it, " You better finish what you started, big boy?"
(Don't answer that one.)
SPARKS
1. Clearly marked so you see them coming a 100 yards away.
Do they elicit a "Oh shit, look what's coming" response?
It depends on the tool. Some tools require recipe development on the lot if it is a new product. Most tools however, process the wafers exactly the same regardless of what it is. Many times you don't even know the lot went through your tool.
2. Each "hot box" has some type of identifying signature encoded on the 'hot box' that tells the particular tool(s)/engineer to change its parameters, exposure time, solution strength, aperture settings, spin time, etc., and a myriad of other variables I'll never know about.
If these tool variables do change, does an engineer (you) ever get the tool dialed in back to its original, say, perfectly dialed-in, full production setup?
Or is it "Damn it, I had that/these frig'in tool(s) tweaked perfectly"?
Hot boxes are marked physically and virtually (in the automation system). If future operations require someone to be present or write recipes, the system will notify them ahead of time. In the case of very high priority lots, there will be someone assigned as a "Lot Shepherd" to physically watch it go from tool to tool. Most of the time, a primary tool and backup tool are assigned ahead of time for processing and thus a tool can be idled well ahead of time if necessary. Since the tool is chosen ahead of time, you know what is dialed in or targeted correctly.
3. Tool setup is not a "Set it, and Forget it" affair.
If the variables do change as mentioned above, do the tools easily go back to the perfectly set, original configuration, or is there additional tweaking involved to make life more miserable?
I can't speak for every tool, maybe Guru has more detailed response, but generally, Hot Boxes don't have any special targeting. Most targeting is done automatically through APC (Automated Process Control). Hands-on targeting is generally only required after maintenance (if that).
4. Obviously, this slows production.
Does your Supervisor recognize that "Hey the poor bastard had to deal with two "hot boxes today," during his/her shift?
Generally supervisors recognize it, yes, Hot Boxes have factory priority.
5. 2 AM phone calls are not fun.
Pleads the fifth ;)
Sparks, a lot of what you are asking about is handled by automation systems in modern factories. In older factories a lot more "human glue" is required, with an accompanying increase in stress level.
In a state of the art fab, if a special recipe (i.e. different processing) is required, the automation is set to notify the person loading the tool that a non-standard recipe is needed. Instructions clearly identifying the recipe needed are appended to the lot file and easily looked up when the lot needing special processing arrives at the target tool for processing. The automation system will not allow the lot to be "accidentally" loaded with the standard recipe. It will not load without some degree of human intervention.
So the engineer will write a special recipe for the hot box and the person loading the tool runs that recipe. Previous and subsequent lots continue to run the standard recipe. If some sort of conditioning is required to return the tool to a state where it can run standard material, those instructions will also be included with the instructions that were attached to the special lot. Though Orthogonal is right, it is usually just back to business as usual.
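For the software-minded, that "will not load without some degree of human intervention" behavior boils down to something like the sketch below. This is purely my own toy illustration - the lot IDs, recipe names, and function are all invented, and real MES software is far more involved:

# Toy sketch: a lot record carries its required recipe, and the tool refuses
# a mismatched start unless a person explicitly confirms the special recipe.
class RecipeMismatch(Exception):
    pass

def start_process(lot, loaded_recipe, operator_confirmed=False):
    required = lot.get("required_recipe", loaded_recipe)
    if required != loaded_recipe and not operator_confirmed:
        raise RecipeMismatch(
            f"{lot['id']} requires '{required}', tool has '{loaded_recipe}' loaded")
    return f"running {lot['id']} with recipe '{required}'"

print(start_process({"id": "LOT1001"}, "ETCH_STD"))       # standard lot, no fuss
hot_lot = {"id": "HOT7734", "required_recipe": "ETCH_SPECIAL_R2"}
# start_process(hot_lot, "ETCH_STD")                      # raises RecipeMismatch
print(start_process(hot_lot, "ETCH_STD", operator_confirmed=True))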
The number of variables you can adjust on a tool is huge. However in practice, there are only a few of those variables that you actually play with. Limiting the number of variables that get changed and how far you are allowed to move them is a fundamental requirement in order to ensure that you don't fall out of the process window.
And remember, you aren't just talking about falling outside of the process window for one tool, you are talking about the whole integrated process window from multiple tool interactions.
Now if you are in litho, every new product uses a new mask, and therefore requires a lot of personal involvement from the engineers. That is one of the many reasons I stay as far away from litho as I can. I like my job, but I also like to pretend that I have a life as well on occasion. :)
Ah, fascinating. Thanks for the new terms and concepts.
Orthogonal,
"APC"
"Targeting"
"Special Targeting"
"Lot Shepherd"
"physically and virtually"
ITK,
"engineer will write a special recipe for the hot box"
"And remember, you aren't just talking about falling outside of the process window for one tool, you are talking about the whole integrated process window from multiple tool interactions."
Whoa, now that's scary. The amazing thing is how well engineered/developed the tools are to accurately change on the fly and then back again. Now I know how they tweak these variables when they move to different steppings.
Thanks for the insight.
SPARKS
Oooh, tomorrow is the 30th! I can't wait to read the reviews on the new XEON badasses! It's nearly 10 PM EST here.
Finally, we take back the server crown in full!
HOO YA!
SPARKS
NHM-EP benchmark from anandtech
http://it.anandtech.com/IT/showdoc.aspx?i=3536&p=1
wow, just tried to visit the it.anandtech.com again to see what people has commented, guess what, the page has been removed ... NDA not lifted yet? :)
well, i have finished reading all its pages anyway :) and the results are really good for the NHM-EP. Guess those that miss it have to wait for another few hours? haha
"wow, just tried to visit the it.anandtech.com again to see what people has commented, guess what, the page has been removed ... NDA not lifted yet? :)"
I think that they removed it almost as quickly as they posted it. I visited the link shortly after you posted it and it was already gone.
I'm more interested in the comments as well. It was funny to see the comment section on his previous blog post. After months of AMD fans accusing him of being paid off by Intel, this time it was someone claiming that he was 'paying too much attention to AMD PR.'
I guess that is when you know you're taking the right approach, when fanbois from both sides complain about your fairness. :)
PS- I guess we can start the countdown for x704's removal from the Zone. Although he did seem to recognize that his tenure there would be short, heh.
The article on Anandtech is back up. Intel pretty much sweeps the awards ceremony.
TONUS, POINTER, not to worry, I've got the link.
Just to put things in perspective, two 5570s halved my beloved QX9770 Cinebench 10 scores, from 70 seconds to 35 seconds. Rendering farms are a lock.
http://www.legitreviews.com/article/943/2/
Anand's review is saying the 5570 is shattering all records.
http://it.anandtech.com/IT/showdoc.aspx?i=3532
Power vs. performance is in the stratosphere. I don't want to EVER see a billboard on Park Ave. that says otherwise, AGAIN!
No word on the 5580-----yet!
HOO YA! It's about time. I wonder what "The Three Stooges" are going to say now?
SPARKS
Check that it's a KILLER!
http://www.bit-tech.net/hardware/cpus/2009/03/30/intel-xeon-w5580-nehalem-ep-review/1
SPARKS
Oh did INTEL release a great server chip or what?
Nehalem cleaning up on Servers
Penryn cleaning up on desktop and mobile.
Silverthorne cleaning up the netbook space
32nm coming with Westmere and more. The onslaught continues.
AMD on the other hand spinning off fabs. Foundry desperately looking for new business.
Yawn, game over, finished. AMD is done.
A can of Whoop Ass if I ever saw one.
"Oh did INTEL release a great server chip or what? ... Yawn, game over, finished. AMD is done. A can of Whoop Ass if I ever saw one."
Intel's recent achievements were supposed to be good news, but not when you write them up in those tones.
Poor LEX, I empathize with his overzealous and downright unabashed comments. I too, after all, am a cold blooded, go for the throat, Blue through and through, fanboy.
I admit it.
However, there are more tactful ways of going about things, and this is one of them. It comes from the INQ (no less). As we are all well aware, they don't have any great love for INTC. That said, this review and commentary speaks volumes.
For all those diplomatically challenged, pay close attention, please.
http://www.theinquirer.net/inquirer/opinion/
612/1051612/nehalem-ep-impress
Whoa.
SPARKS
Well, sports fans, my venerable (now discontinued) QX9770 is getting rather long in the tooth, from an enthusiast's perspective that is. INTC's 16-thread behemoths (2 quad-core CPUs w/HT) have absolutely trashed my beloved 1 year old chip. (Sigh) Such is life in the fast lane.
The web is replete with dual processor XEON W5580 benchmarks. I realize this setup is a bit over the top for even the most rabid hardware freaks, especially for gaming. Further, at nearly 1700 bucks a pop, the whole enchilada is way above the WAF "Wife Acceptance Factor" radar.
But, it's nice to lust and drool anyway.
I've got a nice link that shows the current CPU charts from PassMark Software.
http://www.cpubenchmark.net/
Incidentally, I can't find any SLI enabled, dual processor 'server' motherboards, but I do have this lovely piece of hardware that will accept dual 16X graphics cards.
It seems INTC's and NVDA's pissing fight isn't going to be very pretty, after all.
http://www.hexus.net/content/item.php?item=17868&page=3
This is the S5520SC, INTC's workstation board, undoubtedly a 'Dreamworks' and 'Boeing' dream come true. I'm sure with the substantial increase in performance, productivity will scale 30% or better. That's a lot of engineering time at the large companies, here and abroad. I suspect many corporate bean counters will see the value and overall savings in the (not so) long term.
http://www.intel.com/products/workstation/
motherboards/s5520sc/s5520sc-overview.htm
The new XEON's are prepped and ready for cost savings/productivity in a very tight market. They will do well in the professional sector.
SPARKS
Here's a far more informative link (and better written) to substantiate my rather conservative estimate, along with some pretty graphs. INTC is claiming an 8 month ROI; the remaining 4 months of the fiscal year will be gravy, and factor in a nice capital expenditure tax credit to boot. Not bad: four months of increased productivity, free.
So much for the "we need so-and-so to remain competitive" theory. Here is a prime example of why the bullshit 'stagnation theory' falls flat on its ass. The key words are performance, productivity and market demand.
Duh.
http://www.theinquirer.net/inquirer/news/617/
1051617/xeon-5500-servers-pay-months-claim
SPARKS
Question about drop-in replacements at the server level: Do a lot of companies ever do this, or even consider it?
I see talk of how AMD can have an advantage with its socket-compatible CPU upgrades, because you can just drop them into your existing server, flash the BIOS if needed, and BAM-- faster server in just a few minutes! Whereas with Nehalem you're going through the expense and tedium of a full replacement.
But I never hear anyone talk about actually doing this. We don't do it here (relatively small shop, mostly basic dual-socket servers for file/print, email, etc), especially since our servers run for 5 years or more, which means that when it is time to upgrade them, they're past their warranty period. Even if we could, we would never consider just dropping in a new CPU on a system where the surrounding hardware is already 5 years old, especially when the warranties are expired and support becomes more limited. But our usage may not be typical, or may be typical for a company of our size.
Is it really that different elsewhere? Are there companies that would consider drop-in CPU upgrades for 2P, 4P, etc servers instead of full system replacements?
From what I hear, the answer is "no".
However, what "drop in replacement" implies is that you can use all the same hardware other than the CPU. Which might reduce the validation time quite substantially for new systems as you have already "shaken out" issues in most of the setup. So, for new purchases, I wonder if it could be an advantage for IT in that they don't have to validate as much.
Purely a conjecture.
Anand is calling the new XEON's :
"giant tsunami in the server world."
Part two of this seventeen page, detailed analysis is very revealing. Basically, a 2P 5570 setup will kick the guts out of ANY 4P machine by a substantial margin. If I'm not mistaken, that is half the software licensing fees (per core[s]), adding to the overall savings in productivity and ROI. The article is well done and the conclusion is spot on.
http://it.anandtech.com/IT/showdoc.aspx?i=3536
SPARKS
The best part of the review is some of the comments. You know which ones they are. :)
"You know which ones they are. :)"
I certainly do. I dare not paste, in the interest of diplomacy, of course.
See G, I'm loining.
SPARKS
"AMDZone is the biggest joke on the internet. I just went there to see how the zealots like abinstein are still doing their damage control; just like before he went on rambling how the Penryn is still weak against Shanghai, and the old and tired excuses like how if people all bought AMD they can drop in upgrades etc etc. ZootyGray...he's the biggest joke on AMDZone. None of them had the mental capacity to accept AMD has been DEFEATED, which is disappointing but funny to say the least"
LOL
Drop in means you drop it in and it will work. So easy.
I've tried it in one of my servers, it went blazingly fast after the drop in.
How's the bankruptcy prediction going there?
You really need to start them up again. They were the better of your posts. Now go back to your own little sandbox there, boy.
"it went blazingly fast after the drop in."
Well, is it merely a drop-in? Fascinating! I suggest you catch up on your tech. (If you are who you say you are.)
May I suggest the following:
http://www.xbitlabs.com/news/other/display/
20090403120624_Intel_Claims_New_Materials_
Can_Trim_Consumption_of_Microprocessors_by_90.html
Then:
http://www.intel.com/technology/silicon
/future.htm?iid=SEARCH
The third item on the page.
"Integrating III-V on a silicon substrate"
While you're at it, you may as well read the .PDF:
http://download.intel.com/technology/
silicon/chau_IEDM_2008.pdf
You see INTC is not going to stop at Hafnium; actually, it was just the beginning. The new tech using InSb Quantum Well Field Effect Transistors demonstrates remarkably low operating voltages, as low as 0.5V. These "poor metals" and superior alloys are going to have a profound impact on process tech and transistor performance over the next few years. Transistor speed, gate delay, and low power are unprecedented in the industry.
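To see why a 0.5V operating voltage is such a big deal: dynamic switching power goes roughly as C x V^2 x f, so here's a quick back-of-the-napkin check (the 1.1V "today" figure is just a round illustrative number, not a spec):

# Rough dynamic-power scaling, P ~ C * V^2 * f, with C and f held constant.
v_today, v_qwfet = 1.1, 0.5        # illustrative supply voltages only
ratio = (v_qwfet / v_today) ** 2
print(f"~{(1 - ratio) * 100:.0f}% lower switching power from voltage scaling alone")

That's roughly 79% from the voltage term by itself, which puts the ~90% headline claim in the believable range once other savings pile on.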
Call it World Class.
The 55XX series is simply a foreshadow of future performance gains. You must realize INTC has only itself to compete with, and the sky's the limit, indeed.
These future devices may not be drop-in replacements, then again, with these performance gains, who cares?
Oh yeah, welcome aboard.
SPARKS
AMD on the other hand is counting on a bunch of Arabs for their technology. Now that is a BK strategy if I ever saw one!
"Integrating III-V on a silicon substrate"
Sparks, I wouldn't get too far ahead of myself on this one. Successfully integrating III-V materials on Si has been an ongoing effort for a long time now. And it still has a long way to go.
As the article noted, it is still a "science project". Kinda like EUV litho.
III-V on Si is tough, and manufacturability (cost) is also going to be a big deal as it is currently (typically) grown with MBE (molecular beam epitaxy), which is very slow and not really a tool that is capable of high volume manufacturing. Also, due to the lattice mismatch between the III-V's and Si you have to grow a buffer layer in order to get a crystalline material (if you have too much lattice mismatch, you get strain, which begets dislocations, which begets 'poly' crystalline material, which begets crappy transistor performance). These buffer layers are not simple from a technical perspective and also add to the thickness you need to grow... which means slower throughput, more tools needed, higher costs.
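To put a number on that mismatch, f = (a_film - a_substrate) / a_substrate, using rough room-temperature textbook lattice constants (treat these as approximate):

# Approximate lattice constants in Angstroms: Si 5.431, GaAs 5.653, InSb 6.479
a_si, a_gaas, a_insb = 5.431, 5.653, 6.479

for name, a_film in (("GaAs on Si", a_gaas), ("InSb on Si", a_insb)):
    print(f"{name}: ~{(a_film - a_si) / a_si * 100:.1f}% mismatch")

That's roughly 4% for GaAs on Si and nearly 20% for InSb on Si, which is exactly why those thick, slow-to-grow buffer layers are unavoidable.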
The other problem I see with III-V's is that the small bandgaps potentially create difficult leakage issues... while it is nice in terms of not needing a lot of voltage to turn them on, it is difficult to really distinguish between on and off (meaning significant leakage could be a problem).
This is probably farther out in terms of timeline and issues that need to be solved than EUV. In terms of which is more likely to occur first... hard to say - not certain EUV will ever happen anymore :)
I didn't read the links, so if I'm just mentioning what Intel has mentioned, my apologies
"In terms of which is more likely to occur first... hard to say - not certain EUV will ever happen anymore :)"
Yeah, sure, I know III-V is a bit of a stretch. MBE is as slow as it gets, 1000 nm in a 24 hour period, I believe. Ultra high vacuum, 10^-9 Torr or better, tools made of very expensive Tantalum (among other exotic materials), and lattice structure breakdown make things nearly technically impossible.
Then EUV eating your tools, mirrors, and lenses, among other things, makes this tech equally challenging.
However, if anyone gets there first, it will be Intel Corporation, this I'm sure.
SPARKS
What it takes to make Moore's Law continue for the next 15 years
1) Money
2) Money
3) Money
Now if you got 1-3 then you got what it takes.
Last I checked there will be 3 companies on the bleeding edge in 10 years: INTEL, Samsung, TSMC.
Funny, AMD got a green logo, but that ain't enough; they need something else that is green, something they sorely lack and haven't made for 3/4 of their existence.
EUV may or may not happen, but if anyone can afford it, like SPARKS said, it's the guys with the $$$$$$$ and that is INTEL.
"Keep us informed when you do let that hog outta his pen :)."
Well Nonny, the order is in. I had hoped to wait until summer but my current system, while running well, has some minor problems that have me worried. Lots of error messages, of the nature of "your application did not install correctly" after an application installs just fine, and "your program has stopped responding" after I close some applications (Photoshop most notably, both 32 and 64 bit versions). If I install Lightwave there is a problem where the sentinel key software chews up 25% of the CPU upon bootup, and I have to stop the service. Nothing that keeps me from doing the stuff I want to do, but I keep looking over my shoulder. :P
This may just be a BIOS issue, but do I really need an excuse to splurge a little? *cackle*
Anyway, assuming there are no problems processing the order, this week Newegg will be sending me the following:
Cooler Master ATCS 840 case- a monster of a case with lots of room and lots of large (and thus quiet) fans. I've got room for it, the only question is can I avoid a hernia while handling it.
Corsair 750W 80+ certified, SLI/Crossfire, i7 ready power supply- Probably more than I need, but I may eventually wind up with six hard drives and two Radeon 4870s in there. I went with Corsair because it has the longest cables, which are a must with this case. It doesn't have any modular cables, but with a case this big a few extra wires are not so bad.
Asus P6T mobo and Core i7-920 CPU- Based on the reviews I've seen, the P6T is as good as any of the other boards from Asus, with the main difference being that it costs a lot less ($240 discounted). I'd love to grab a 940 or 965, but my software runs just fine on my Q6600 (2.4GHz quad) and paying $600 or $1000 just doesn't seem cost effective to me. A 2.66GHz Nehalem will be a nice speed boost, and I should be able to get it past 3GHz with both the stock HSF and no voltage boost. All that for $289?
(Sparks, skip this next part! If I'd waited, I would have been sorely tempted to get a Phenom 2 3GHz quad, but I want DDR3 and I want it now! Plus, I have the potential for a 4GHz CPU if I grab a decent HSF!)
OCZ Platinum 12GB (6 x 2GB) DDR3 kit (7-7-7-20)- I am no longer as conversant as I was regarding memory timings, but I know lower is better and 12GB of 7-7-7-20 memory for $185 sounds pretty damned good. I get chills when I think about telling Photoshop that it can use 8GB+ of memory. *giggle*
Two WD Caviar 500GB hard drives- SATA 3.0Gb/s, 7200RPM, 32MB buffer, these will do just fine in RAID 0 as a primary drive. I wanted to get Spinpoint F1s from Samsung but Newegg is out of stock. I have a 500GB RAID 1 external drive for my data, this purchase is mostly for speed via RAID 0 and the 32MB cache. At some point they may be joined by my current internal RAID setups, which consist of two 400GB drives and two 250GB drives. Why? Because I can! <-- I know that Sparks can appreciate this part!
In any case, the drives are $70 each. I could certainly have gotten bigger drives and faster drives, but I wanted to keep a lid on the price. And just as with the CPU, this will outperform my current setup, which I'm happy with already.
MS Windows Vista Ultimate 64-bit- I guess every silver lining has a cloud. This was part of a bundle with the CPU that provided a $55 discount.
This will be paired up with some of my current hardware. An SATA Blu-Ray recordable drive (hm, this might short-circuit my plans for six internal hard disks!), two Radeon 4870s (only using one at the moment due to lack of sufficient case room, not a problem with the 840!!!), and at least one DVD-RW drive just because there is room. All of this will connect to my existing monitors, a 28" Hanns-G and a 26" LG monitor. The 28" display cost almost half as much as the 26" and the picture quality is much more than 2x better. Oh well. :p
In any event, it's a pretty big upgrade for just ~$1,300 +shipping ($40 from UPS thanks to some items having free shipping! The case alone probably accounts for $35+). If the new system avoids any problems with Lightwave and eliminates the strange errors, it will be well worth it. Especially considering that the software that will be running on it costs more than 3x as much...
PS- and as I am typing this, I get the confirmation emails. The stuff should arrive Wednesday or Thursday. If it arrives Friday I have no problem with that, as I do not expect to start building it until Saturday.
TONUS, great system. The price/performance metric is top shelf. (I'm compelled to say you would have broken my heart with the 'alternative CPU').
Interestingly enough, only a few months back, most tech 'analists' were factoring in the DDR3 purchase as too cost prohibitive for a complete build on i7. Here you get 12 gigs of goodness for 185 bucks, contrary to obsolete DDR2/dual channel DDR3 memory that will be headed for the junk drawer in short order. Good move, bro. Those 3C DDR3/i7 naysayers were pissing me off.
Historically, they always resist a platform change, then eight or ten months go by, prices drop (marginally), then they all take the plunge. It's almost like an industry standard, and they're always behind the friggin curve. "The more things change, the more they stay the same." It's a French expression, I know, but they do get something right once in a great while.
The P6T is one f**king good MOBO! It may not be the lunatic fringe, super tweaked 'Rampage' setup, but guess what? The P6T basically has all the same guts, sans heat sinks/regulators and extreme BIOS tuning, BFD. The board is as stable as a brick, and there is a good deal of tuning flexibility contained within, as with all ASUS products. Don't sell that selection short, it was a perfect pick at any price!
Your 3/4 kW power supply choice was shrewd and prudent from a peripheral perspective. Add on, and don't sweat. Personally, I prefer PC Power and Cooling products. In fact, they were so good the OCZ Group bought the whole shebang last year. However, Corsair doesn't play games with their stuff either, so the point is moot, nice.
Frankly, this is the perfect system, perhaps more so than the one I recommended to ROBO as an entry into the Core i7 platform. More importantly, a slight overclock, and you will blow my overclocked QX9770/P5E3 Premium into the weeds along with everything else that isn't overclocked. At these performance levels, coupled with exceptional stability, the tweak would be superfluous and unnecessary.
Wise choices, all. Your homework will payoff in excellent dividends, make no mistake.
KUDOS
SPARKS
TONUS,
May I suggest:
http://www.legitreviews.com/article/880/4/
SPARKS
ITK, Check this out. They're already sold out.
http://www.abacocomputers.com/Computer-Desktop/Primo.html
SPARKS
"http://www.legitreviews.com/article/880/4/"
Thanks Sparks. I was browsing a few reviews and noticed that one. There is also one from Thermaltake which does a moderately good job of cooling but has a much lower cost. Since the 840 case has a cutout under the motherboard tray (which means that I can replace the HSF without having to remove the motherboard) I can take a wait-and-see approach.
I decided to try and overclock this Q6600 today and had problems booting to Windows at 3GHz, but it runs just fine at 2.7GHz. The case is very cramped and this room is a bit warm most of the time, and I'm using the stock HSF. So a 300MHz OC isn't so bad, and I could probably squeeze out some more speed if I wanted to spend the time. 3GHz with the stock HSF in the larger case should be easy with the i920.
I tried to find corners that I could cut without hurting the build and I think I did pretty well. Now to play the waiting game. >.<
Well, the packages have shipped and according to UPS tracking, they should be here... tomorrow?!?!
That's impressive on Newegg's part. I mean, I made the order yesterday and went with guaranteed three-day shipping specifically to avoid the high cost of next day (more than $120) and two day (around $90) shipping. So I saved $80 and still get what amounts to next-day shipping? BOO-YAH!
This article talks about upcoming ARM based netbooks. In particular it says:
Ian Drew, senior vice president at ARM, told Computerworld recently that he expects to see "six to 10 ARM-based netbooks this year, starting in Q3." The devices will run Linux or a Linux derivative, such as the Google-backed Android smartphone operating system, boast eight to 12 hours of battery life and cost about $200, he said.
Drew's price estimates for upcoming ARM-based netbooks represent savings of up to half off compared with today's cheapest Intel Atom-based netbooks, which range from $300 to $400.
First, I'm skeptical of the 8-12 hour claims of battery life if it has the same form factor as an Atom based netbook. If you look at the trend in netbooks, it is to move to larger form factors. The larger screens do not lend themselves to increasing battery life. In fact, the wireless radios and larger screens can offset the advantage of a low power processor and be the primary driver of battery life.
I suspect what ARM is really looking at here is more in the MID category. That is where I expect the real clash between Atom and ARM to take place.
The timing is quite interesting as well, as it corresponds with when Moorestown should be hitting the market. I'll be quite interested to see how Moorestown and ARM compare on a similar form factor.
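A rough power-budget sketch of why I'm skeptical - every number below is a guess, purely to show the proportions, not measured data for any actual netbook:

# Invented component draws (watts) and battery size, for illustration only.
battery_wh = 24.0   # hypothetical 3-cell pack

fixed_draw_w = {"display + backlight": 2.5, "wifi radio": 0.8, "chipset/memory/storage": 2.0}

for cpu, cpu_w in (("Atom-class CPU", 2.5), ("ARM-class CPU", 0.5)):
    total_w = cpu_w + sum(fixed_draw_w.values())
    print(f"{cpu}: ~{battery_wh / total_w:.1f} hours")

With these guesses you get roughly 3 hours versus 4 hours - the thriftier CPU buys an hour or so, not a doubling, because the screen and radio dominate the budget.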
ARM management needs to take a look back at history.
The past is littered with superior architectures that didn't offer the one thing that has made Intel so successful. Its called x86 compatibility.
Go ask your local salesperson what the return rate is on LINUX versus Windows based netbooks.
Please don't debate the simplicity or whatever of LINUX. History has already shown that Windows based computers only need to be almost as good and they will always win. People prefer the known versus any little extra for something slightly superior.
x86 will rule all power computing platforms. Time will tell if they can get the smartphone socket. I give INTEL 3 more years and they will have that too.
I give INTEL 3 more years and they will have that too.
Anyone notice the "shariqu-esque" poster...
1) Repeat the same thing over and over and over and over and over and over.
2) Substitute actual data with anecdotal data ('go ask....')
3) Start evolving prediction over time if it looks like prediction is not coming true.
Lather, Rinse, Repeat... he'll be posting the same thing in a week... he thinks he's a geek... but he's confused from baking in the heat... (of AZ)
Tonus: Well Nonny, the order is in.
Whoa, dude - I'm totally jealous right now!!! :)
Unfortunately for me, my time and discretionary budget have been consumed by my daughter managing to trash her second laptop in about 8 months. Both of these were Dell XPS lappies from a couple years ago - I figure I spent over $7K on them, plus getting the one repaired last fall set me back around $1K. Now the 2nd is giving her fits. One has an nVidia 7900GTX and the other 7950GTX, but since the first one got trashed by liquid being spilled into it (daughter who is in college says it was water, but I suspect beer), I can't blame nVidia for that one.
BTW, did you see this blog entry on Anandtech? http://www.anandtech.com/weblog/showpost.aspx?i=584
Looks like the D0 stepping is going to do for i7 what G0 did for the Q6600...
"Looks like the D0 stepping is going to do for i7 what G0 did for the Q6600..."
Hmmmmmm.
All courtesy of my beloved INTC and its GENIUS engineers, a very special (D0) chip.
Is anyone here familiar with the concept of 'Rapture' and its religious significance?
SPARKS
LOL yeah that Az heat is tough on them old brain cells.
At least I have some, versus the douche bag spouting about BK when the only company going BK was the green one.
Never a doubt INTEL was going to rule it all, even in those old dog Prescott days; nobody believed it, but it was always so obvious. Now the advantage INTEL has is even larger, and the gap harder and harder to close.
First, I'm skeptical of the 8-12 hour claims of battery life if it has the same form factor as an Atom based netbook. If you look at the trend in netbooks, it is to move to larger form factors. The larger screens do not lend themselves to increasing battery life. In fact, the wireless radios and larger screens can offset the advantage of a low power processor and be the primary driver of battery life.
Good point. But I'm thinking this is a problem with the current radio implementations. My G1 phone can sit idle for 2 days and the battery gauge still reads above 95%, if I leave bluetooth and wifi off. But if I leave wifi on, the battery will be drained within 10 hours (overnight, +/- a bit). That's even though the router is close by and it's seeing 100% signal strength, so it doesn't need very much transmit power.
That really doesn't make much sense to me, the entire radio in a laptop is on a mini-PCIE card that's relatively tiny. How can it be dissipating so much more power than the CPU?
Obviously the GSM radio protocols are more thrifty, they only key up the transmitter once in a long while, to let the base station know they're still alive. 802.11 and bluetooth seem to be far more chatty. Or, there's too much background server traffic keeping the wifi interface active. Dunno what the issue is, but that needs to be fixed.
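Crude sanity check on that overnight drain - the battery size is roughly G1-class, but the average radio draws below are pure guesses on my part:

# Invented average power draws; only the battery capacity is roughly realistic.
battery_wh = 4.3           # ~1150 mAh at 3.7 V, approximately a G1-class pack
draw_w = {"radios off": 0.02, "wifi associated": 0.45}

for label, watts in draw_w.items():
    print(f"{label}: ~{battery_wh / watts:.0f} hours to empty")

With those guesses you get a couple hundred hours of standby with the radios off versus about ten hours with wifi up - the same ballpark as what I'm seeing, so an always-listening 802.11 radio really can dominate an idle handheld's power budget.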
"Lather, Rinse, Repeat...
he'll be posting the same thing
in a week..."
He thinks he's a geek...
but he's confused from baking,
in the heat..."
Poetry?
Sorry for the edit. I hope you don't mind. It reads better this way.
Not bad, actually. However, may I suggest you stay with process/production analysis?
You're much closer to a Mark Bohr than a T.S. Eliot.
OXOX
SPARKS
Sparks: "Your 3/4 kW power supply choice was shrewd and prudent from a peripheral perspective. Add on, and don't sweat."
You were right, it's holding up very nicely. I got it assembled last night and installed Windows and the drivers before calling it a night, then installed Adobe Creative Suite this morning.
The system seems much 'snappier' than the Q6600, possibly as a result of the faster HDDs and the better CPU. Maybe it's in part due to my wanting it to be faster, but I didn't really have any particular expectations. This is Microsoft Windows I was installing, after all. Still, it feels more responsive than it did before, though I still have a bit of software to install. We'll see how it goes once it's ready for action.
I'll say this, it's very noisy. It might be the two 230mm fans up top, but the thing sounds like a shrill air conditioner. I think that most of the fans have speed controls, and if they do, I'm betting that they're all turned to max. I'll have to look into that. I'd gotten used to the barely-audible hum from the previous setup.
I'm not sure if HT is better for me or not (Photoshop mostly, and the occasional 3D modeling for fun), but it's a kick to see eight CPUs running in task manager. To say nothing of having it show that the system is using 1.5GB of memory... or 8% of the total. :)
The positioning of the hard drive bays (turned 90 degrees) is a slight problem. This is a large case (pictures and videos do not quite do it justice) but with six drives in the drive bays, putting the back panel on was a bit of a chore, because there's not much space between the hard drive connectors and the back of the case. It closed and I was able to get it screwed on with some effort, but it does bow outward slightly at the bottom.
The power supply worked out almost perfectly. It has four separate PCI-E connectors, which meant that I could use my two 4870s without worrying about how to power them. It has eight SATA power connectors, which made it easy to hook up the hard drives. However, the position of the cage meant that none of the connectors could reach the Blu-Ray drive once I had plugged the hard drives in. So I scrounged around and found a converter and that was that. It also has eight standard power connectors. It's hard to imagine a system that it couldn't handle!!!
"The system seems much 'snappier' than the Q6600, possibly as a result of the faster HDDs and the better CPU."
I got the same feeling going from Q6600 to the big Yorkie. In your case, no doubt, more so.
You'll know for certain once everything gets settled in. The snappy feeling is unmistakable and undeniable, that's a fact.
Plodding around with 'other solutions', I believe, would have been a disappointment. But then, you get/got what you paid for.
SPARKS
But if I leave wifi on, the battery will be drained within 10 hours (overnight, +/- a bit).
Yeah, and that is on a phone. Now add power draw for a 10" display and you can easily see that dropping to the 4-6 hours that you see for typical netbooks. ARM might give you an hour or so advantage by having a more miserly chipset, but I think you will find Moorestown is very comparable to ARM's offerings on large display devices.
Since battery life is a key driver for this type of device (along with cost), I think you will see something done to deal with the radio issue in the next few years.
There also seems to be a lot of activity on the display front.
Sparks "You'll know for certain once everything gets settled in. The snappy feeling is unmistakable and undeniable, that's a fact."
That certainly seems to be the case. I have almost all of my software installed and set up (or at least, the stuff I use most frequently these days). I haven't installed Lightwave yet. But right now it's running smooth as glass.
So far only two error messages: One "this driver may not have installed correctly" after installing the Radeon drivers (ha) and one "this application may not have closed properly" after I exited Paint Shop Pro 7 (a fairly old version that still has one of the best thumbnail viewers I've ever used). Well, PowerDVD is still glitchy, but that's a minor consideration.
Other than that my applications are liking this setup. On the old setup I simply could not get the Adobe updater to properly update my applications, but this time it went very smoothly. The hitching and stuttering that I would occasionally get when playing music or videos is gone. I get the feeling that something in the Q6600 system wasn't working right after I rebuilt it. But even putting aside that, the i920 is purring like a kitten. Noticeably so, which I really didn't expect (but which you did!).
And I haven't even overclocked it yet. I plan to do some reading up on the P6T's BIOS settings before taking that plunge, as well as putting the system through its paces at default. There should be fun times ahead (he cried, while crossing his fingers)!
Tom Cruise in Days of Thunder says to car builder Robert Duvall, as if it's some big revelation/secret, "I don't know anything about cars." Robert Duvall's matter-of-fact response, with a smile: "Hell, I never met a race driver who did."
But, ole' Tommy knew when the car felt good/right/tight/loose at 220+ MPH!
I couldn't care less what L1, L2, L3 cache does.
I know what BIOS means and how important it is to the hardware; I'm totally clueless how it does what it does.
The entire scale of hardware, software, hardware, OS, memory, more hardware, drivers, software interaction is absolutely mind boggling. At times, over the years, I sometimes get the feeling the poor bastards who assemble the software, firmware, drivers, etc., are up against a wall with a gun to their head trying to write/optimize code for every piece of slob hardware and/or miserable bloat/App ever manufactured!
It's no wonder current graphics drivers are in the 20 to 30 Meg range. They never truly work well with everything until the end user gets the holy grail of the software/hardware business, PATCHES!! God bless the f**king patches for every app ever made!
Your last post typifies the trauma of a new build up. (It sounds great, so far. That said, you've got bigger balls than me. Vista scares me stupid.)
I'll tell ya, I wouldn't know well executed code from well optimized memory registers. But, I do know what it feels like running a well tweaked machine at 4 Gig plus with my hair on fire! That snappy, shoot from the hip mouse click, wait for nothing feeling, is what it's all about.
There's nothing like it, it feels great, it ain't cheap, and that's what they're in business for, to give me a good reason to part me with my hard earned money.
(i7 975 might be enough, you and INTC are pushing my buttons)
Want to go fast? Go buy one of these.
http://www.bugatti.com/en/veyron-16.4.html
Want to go fast? Go buy one of these.
http://www.intel.com/products/processor/corei7/pix/core_i7.jpg
They're in the same league, and in your case, you've got just one more to go. Good show.
SPARKS
ITK: Since battery life is a key driver for this type of device (along with cost), I think you will see something done to deal with the radio issue in the next few years.
That's my feeling too since 802.11 and Bluetooth are both older standards compared to GSM, I believe. However, I'm waiting for all the super-duper battery tech that DailyTech used to go on about, such as using silicon or maybe it was carbon nanotubes to give a 10x increase in energy storage in a Li battery. That was at least a year or so ago - so where's my super battery already?? :).
That's the trouble with DailyTech - they're like the "Star" or "In Touch" moviestar mags that my wife loves to read while sitting on the potty - all breathtaking news bits like Brangelina splitting up, yadda yadda - then nothing. Story over, get on with your life - it never happened..
Tonus: I'll say this, it's very noisy. It might be the two 230mm fans up top, but the thing sounds like a shrill air conditioner.
That would be a turn-off for my usage as an HTPC, so variable fanspeeds would be a requirement. Or else go to watercooling, which I might just do since the fancy L-shaped computer desk has an enclosed space for the PC. Well it has a front and back door so I could always open up a straight-through path if the PC casefans suck in air from the front and push it out the back, but still it would look kinda funky in use.
The space also has a large square hole on the inside wall, opening up into the kickspace under the desk, and that might work as a heat exchanger for watercooling - keep my feet toasty during winter maybe :). I've never done watercooling before and I wouldn't want any leaks since I have hardwood flooring in that room.