July 21, 2006
After a delay of nearly a year, this week Intel finally launched its dual-core Itanium 2 Processor 9000 series (formerly code-named Montecito). The 9000 series was introduced in five different flavors, with a variety of clock speeds and cache memory sizes. Over the next several weeks, all eight OEMs that produce Itanium-based servers are expected to announce systems that incorporate the new dual-core chip.
The Itanium represents Intel's four-year venture into the mainframe microprocessor market. The company promotes the chip as an industry-standard alternative to the proprietary 64-bit RISC architectures, specifically Sun Microsystems' UltraSPARC processor and IBM's Power processor. Itanium's Explicitly Parallel Instruction Computing (EPIC) architecture differs from both CISC and RISC approaches, using instruction-level parallelism (ILP) to achieve high levels of performance.
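To make the ILP idea concrete, consider a loop whose iterations do not depend on one another (a hypothetical C sketch, not Intel or HP code). With EPIC, it is the compiler rather than out-of-order hardware that is expected to find such independent operations and pack them into instruction bundles the processor can issue in parallel.

    /* Hypothetical illustration of instruction-level parallelism.
       Each iteration below is independent of the others, so an EPIC
       compiler can schedule several loads, multiplies and adds from
       different iterations into the same instruction bundles. */
    void axpy(double a, const double *x, const double *y, double *z, int n)
    {
        for (int i = 0; i < n; i++)
            z[i] = a * x[i] + y[i];  /* no dependence across iterations */
    }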
Itanium's declared market turf is mission-critical enterprise servers and high-end supercomputers, neither of which is a particularly high-volume segment when compared to the overall server and commodity cluster computing market. But according to IDC, revenue for Itanium-based servers will grow to approximately $6.6 billion by 2009. And over the next five years, the compound annual growth rate for Itanium-based servers is expected to be 35 percent, compared to 3.4 percent for the overall server market.
SGI is particularly enthusiastic about the new chip since it has made a large investment in the architecture in its Altix systems. The company claims that its new Itanium 2 9000-equipped platforms, which are expected to be commercially available at the end of August, are already achieving record performance on applications such as computational structural mechanics, molecular dynamics, weather forecasting and environmental modeling.
HP, which sells the vast majority of Itanium-based servers, is also happy to see the chip. In this issue of HPCwire, Ed Turkel, Manager, HPC Product Marketing at HP, discusses the future of Itanium 2-based systems in the rapidly developing HPC enterprise market. Industrial applications such as seismic modeling, aerospace/aeronautical design, financial forecasting/modeling and automotive CAE represent some of the more prominent HPC enterprise workloads. Turkel makes a case that Itanium is well-suited for this growing market.
Says Turkel: "With vastly superior on-board memory caching and I/O systems designed to deal with larger data volumes, servers based on the Intel Itanium 2 processor can provide faster, more accurate calculations at a lower price point than comparable RISC-based systems."
The Intel chip does appear to be steadily eroding the market share of its RISC competitors. In its first real year of production (2003), Itanium-based systems represented only about a tenth of the market occupied by RISC systems. But as of this year, Itanium-based systems generate almost half as much revenue as either UltraSPARC- or Power-based systems.
Even though Itanium chip volumes have grown steadily, both Intel and HP originally envisioned a faster penetration into the IT market. Both analysts and customers expected more from the earlier Itanium versions, so the architecture developed a reputation as an underachiever. But Intel and its Itanium OEM fans are certainly placing a lot of their hopes on the new dual-core offering. The chip doubles the performance of the previous-generation single-core Madison, and accomplishes this with less power.
Although Intel sees the IBM Power and Sun UltraSPARC RISC chips as Itanium's competitors, its biggest threat may be from below -- the AMD Opteron and Intel's own Xeon microprocessor. These dual-core 64-bit x86 chips are being used in systems throughout the high-end enterprise server and HPC markets. Even though the Itanium has certain technological advantages over the x86 chips -- such as greater memory reach and higher levels of instruction parallelism -- for many applications these benefits are outweighed by the price/performance advantages of Opterons and Xeons. In addition, the software momentum that is associated with the x86 architecture creates a formidable barrier for the establishment of competing architectures. All of this pressure tends to push the Itanium- and RISC-based systems towards higher-end and more specialized applications.
Considering that Intel and the other vendors in the Itanium Solutions Alliance have already poured billions of dollars into the architecture, it's hard to imagine that they'll pull the plug anytime soon, even if they achieve only modest success. It would be unfortunate if the chip disappeared entirely. With regard to general-purpose microprocessors, there is not a whole lot of diversity in the IT industry right now. And if Itanium fails, could any new architecture survive?
Elsewhere in the Issue
Speaking of surviving, global warming appears to be in full swing this summer in the Northern Hemisphere. There are still learned people who don't quite believe in the whole concept, but I'm guessing few of them live in southern England. This past week in Wisley, just south of London, the temperature reached 97.7 degrees Fahrenheit (36.5 Centigrade), a record for July in the usually temperate British Isles. This was part of an overall heat wave that has affected large areas of Europe. Meanwhile in the U.S., much of the central and western parts of the country are also enduring sizzling temperatures. The mile-high city of Denver hit a record 101 degrees Fahrenheit just last week. While daily weather extremes don't necessarily indicate the climate is changing, meteorological data recorded over the past several decades does point to global warming.
But in order to really understand climate changes and their effects, accurate simulations have to be developed. As climatologists have accumulated knowledge of the Earth's weather and as supercomputing power has increased, the models have become increasingly sophisticated. In this week's issue of HPCwire, our feature article, "The Next Generation of Climate Models," talks about one of the more advanced climate models in use today. In this article, Per Nyberg, Earth Sciences Segment Director at Cray, describes the Community Climate System Model (CCSM) and the supercomputing resources behind it. CCSM integrates a variety of component systems such as ocean simulations, atmospheric simulations and ice sheet modeling into a unified picture of the Earth's climate. The next version of CCSM will do even more:
"We expect that, when the next climate model is released, we'll have options for essentially full atmospheric chemistry, dynamic vegetation processes on the land, ocean ecosystems, and more," says ORNL's John Drake, chief computational scientist for the Climate Science End Station effort. "By pulling all of these processes together, we'll be able to create not only a physically coupled model, but a chemically coupled and biologically coupled climate model. That's a big stretch over where we are now."
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - July 20, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.