SGI LAUNCHES FAMILY OF HIGH-PERFORMANCE COMPUTERS

July 28, 2000

COMMERCIAL NEWS

Mountain View, CA — SGI announced the launch of the SGI 3000 family of systems, which employ a breakthrough in modular design and computer architecture that stands to revolutionize high-performance computing. Available immediately, the SGI Origin 3000 series of servers and SGI Onyx 3000 series of visualization systems promise flexibility, resiliency, overall investment protection, superior performance and scalability.

SGI Origin 3000 series servers and SGI Onyx 3000 series visualization systems utilize the unique SGI NUMAflex modular technology, a “brick”-style system for constructing small to very large systems from a common set of building blocks. The SGI NUMAflex modular system allows users to build the optimum configuration one component at a time and adopt new technologies that map to their specific business or research needs. In contrast, traditional high-performance computers may need to be replaced all at once as often as once a year to keep up with competitive demands and technological changes, a costly and cumbersome process.

SGI Origin 3000 series servers enable “capability computing”: the ability to analyze and solve complex problems that were previously unsolvable. For existing projects or applications, SGI Origin 3000 series servers provide greater precision, quicker results and breakthroughs in price/performance.

SGI Onyx 3000 series visualization systems offer users a unique combination of graphics capability and compute power. This combination allows for visualization of large, complex volumetric data (e.g., brain mapping); allows interactivity and realism (e.g., pilot training simulation); provides bandwidth and image quality for real-time, high-definition special effects (e.g., broadcast); and has the visual accuracy and compute power that enable interactive design (e.g., photo-realistic automotive modeling).

SGI Origin 3000 and SGI Onyx 3000 series systems utilize the SGI IRIX operating system, the world’s premier 64-bit UNIX operating system for high-performance computing, advanced visualization and production supercomputing. SGI IRIX is renowned for its leadership in scalable computation; high-performance data movement, sharing, and management; real-time applications support; and media streaming capabilities. All technical applications currently available on SGI 2000 series and Silicon Graphics Onyx2 systems run on SGI 3000 family systems without recompilation, with as much as twice the previous performance.

With NUMAflex technology, each drawer-like module in a system has a specific function and can be linked, through the patented SGI high-speed system interconnect, to many other bricks of varying types to create a fully customized configuration. The same bricks, depending on their number or configuration, can be used for a continually expanding range of high-performance computing needs: C-brick (CPU module), P-brick (PCI expansion), D-brick (disk storage), R-brick (system/memory interconnect), I-brick (base I/O module), X-brick (XIO expansion) and G-brick (InfiniteReality graphics). New brick types will be added to the NUMAflex modular offering for specialized configurations (e.g., broadband data streaming) and as new technologies, such as PCI-X and Infiniband, enter the market. The systems can also be deployed in clusters or as large shared-memory systems, depending on users’ needs.
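The brick taxonomy above can be sketched as a small data structure. This is purely illustrative: the brick names and roles come from the press release, but the dictionary, the helper function and the example configuration are assumptions for demonstration, not an SGI tool or spec.

```python
# Brick types as named in the press release; the mapping itself is
# just an illustrative model, not SGI software.
BRICK_TYPES = {
    "C": "CPU module",
    "P": "PCI expansion",
    "D": "disk storage",
    "R": "system/memory interconnect",
    "I": "base I/O module",
    "X": "XIO expansion",
    "G": "InfiniteReality graphics",
}

def describe_config(bricks):
    """Summarize a hypothetical NUMAflex configuration.

    `bricks` maps a brick-type letter to a count, e.g. {"C": 8, "I": 1}.
    """
    for kind in bricks:
        if kind not in BRICK_TYPES:
            raise ValueError(f"unknown brick type: {kind}")
    return ", ".join(
        f"{count}x {kind}-brick ({BRICK_TYPES[kind]})"
        for kind, count in sorted(bricks.items())
    )

# A compute-heavy build adds C-bricks; an I/O-bound one would add P-bricks.
print(describe_config({"C": 8, "R": 2, "I": 1, "D": 4}))
```

The point of the sketch is the article's central claim: a configuration is an inventory of interchangeable parts, extended one brick at a time rather than replaced wholesale.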

“The scalability, flexibility and performance of these systems are what customers have been asking for,” said Jan Silverman, vice president, Advanced Systems Marketing, SGI. “SGI is proud to be the first to successfully bring modular computing to the industry.”

Customer and analyst reaction to the product launch has been very favorable. Notable SGI 3000 family clients, including the U.S. Army Engineer Research Development Center and NASA/Ames Research Center, have either ordered or already taken delivery. These organizations will use the systems for a variety of needs, ranging from financial analytics to crash-test simulation and aircraft testing. In addition, Sony Computer Entertainment Inc. has selected the SGI Origin 3400 as the broadband server for a next-generation entertainment demonstration at SIGGRAPH 2000.

“One of the key elements when we’re designing a vehicle that is going to fly in the atmosphere and reenter is that it takes a large number of engineers a long period of time, three years or more, to design the vehicle,” said Henry McDonald, director of NASA/Ames Research Center, Mountain View, Calif. “By improving the turnaround time and increasing the number of calculations possible while increasing the fidelity, we reduce the overall development time of the vehicle.”

“The installation of a 512-processor, single system supercomputer from SGI using next-generation SGI Origin 3000 series technology will give government and academic researchers across the country access to the most advanced NUMA shared-memory computing architecture available today,” said Bradley Comes, director of the U.S. Army Engineer Research Development Center’s Major Shared Resource Center (ERDC MSRC), Vicksburg, Miss. The ERDC MSRC is one of four Major Shared Resource Centers established under the Department of Defense High-Performance Computing Modernization Program. Although the system is physically located at the ERDC MSRC, the Arctic Region Supercomputing Center (ARSC) at the University of Alaska in Fairbanks is a partner in the deployment of the new system. Dr. Frank Williams, director of ARSC, added, “We are looking forward to leveraging the combined expertise at the two centers along with the new SGI system to address the large computational requirements the DoD research and development and test and evaluation communities are demanding.”

“IDC believes that NUMAflex and its current implementation in the form of the SGI Origin 3000 product line should strongly position SGI to regain customer mind share and sales,” said Earl Joseph, Research Director, IDC. “SGI should see strong acceptance of this product in its core technical markets as well as in the markets that service the creative user. Moreover, we see this as a potentially strong product to support emerging Internet workloads given its flexibility, scalability and modular attributes.” For more information on these products, please refer to http://www.sgi.com/origin3000 or http://www.sgi.com/onyx3000.

Addendum: by Uwe Harms

Munich, GERMANY — At a press meeting, SGI announced the long-expected successor to the Origin 2000 series, previously known under the code name SN 1; the name O3000 has now been chosen. In addition, a new graphics system with InfiniteReality was presented. It is based on the Origin 3000, which can be seen as combining the best of the Origin 2000 and the Cray T3E. There will be two lines: a MIPS R12000/R14000 (400/500 MHz), IRIX-based machine with up to 512 processors, which can be delivered now; and, with the advent of the Intel Itanium, an Itanium-based machine running Linux. The new Origin and Onyx are based on an innovative concept, NUMAflex: building blocks, called bricks, are developed for different tasks, so a machine can be built directly to a specific user’s needs. The first systems have shipped in America; in Germany, the first machine will be installed in August at the Center for High-Performance Computing at the Technical University of Dresden.

SGI takes an improved approach with its NUMA (Non-Uniform Memory Access) technology, raising the previous 128-processor limit to 512 processors. The next innovation is the modular NUMAflex technology. Today, seven building blocks, or bricks, are available:

– R-brick: the router interconnect, realized as a high-speed crossbar that replaces the system bus and connects memory and processors

– C-brick: the processor module, containing 4 MIPS CPUs and up to 8 GB of local memory. The new crossbar memory controller improves CPU-memory bandwidth by 200%.

– I-brick: the base I/O module, providing basic I/O functionality, system disk, CD-ROM, Ethernet and 4 PCI slots

– P-brick: the PCI expansion, providing 12 hot-swappable slots with a total I/O bandwidth of more than 3 GB/s

– X-brick: the XIO expansion, offering 4 XIO slots that support HIPPI, GSN, VME and digital video

– G-brick: the graphics subsystem with SGI InfiniteReality3 graphics, enabling high-performance visualization

– D-brick: disk storage, allowing modular integration of JBOD (Just a Bunch of Disks) and RAID; each brick supports up to 12 drives of 16, 36 or 73 GB

This approach is flexible, and the configuration can follow the user’s needs: compute-intensive applications call for more C-bricks, while I/O-bound workloads call for more I/O bricks. Individual bricks can be exchanged when SGI offers new hardware, keeping the system up to date. Furthermore, SGI plans to develop more building blocks for specific applications.
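The scaling implied by the brick list can be checked with simple arithmetic. Per the article, each C-brick carries 4 MIPS CPUs and up to 8 GB of local memory; the helper below merely scales those published per-brick figures and is a hypothetical sketch, not an SGI sizing tool.

```python
# Per-brick figures as stated in the article; everything else here
# is an illustrative assumption.
CPUS_PER_C_BRICK = 4
MAX_GB_PER_C_BRICK = 8

def size_from_c_bricks(n_c_bricks):
    """Return (cpu_count, max_memory_gb) for a given number of C-bricks."""
    return (n_c_bricks * CPUS_PER_C_BRICK,
            n_c_bricks * MAX_GB_PER_C_BRICK)

# The 512-CPU, 1 TB top-end Origin 3800 corresponds to 128 C-bricks:
cpus, mem_gb = size_from_c_bricks(128)
print(cpus, mem_gb)  # 512 CPUs, 1024 GB (= 1 TB) maximum memory
```

Note how the per-brick numbers reproduce the Origin 3800 limits quoted later in the article, which is exactly the modularity argument: capacity is a multiple of identical bricks.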

The Origin 3000 MIPS series

SGI offers three models:

Origin 3200, with 2 to 8 MIPS processors, up to 16 GB memory, 11.2 GB/s system bandwidth, one I-brick, no R-brick and an 18 GB system disk

Origin 3400, with 4 to 32 processors, up to 64 GB memory, 44.8 GB/s system bandwidth, a 6-port router (R-brick), one I-brick and an 18 GB system disk

Origin 3800, with 16 to 512 processors, up to 1 TB memory, 716 GB/s system bandwidth, an 8-port meta router (R-brick), one I-brick and one P-brick. As the peak performance of the R12000 is 800 (820) MFlop/s, the largest system reaches 410 (420) GFlop/s. The high-end system will probably rank around the 50s in the Top500 Linpack list.
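The quoted peak figures follow directly from multiplying the per-CPU peak by the processor count; the short check below reproduces the article's arithmetic.

```python
# Verify the article's peak-performance arithmetic for a fully
# populated 512-processor Origin 3800.
for mflops_per_cpu in (800, 820):  # R12000 peak per the article
    peak_gflops = 512 * mflops_per_cpu / 1000
    print(f"{mflops_per_cpu} MFlop/s x 512 = {peak_gflops:.1f} GFlop/s")
# 800 MFlop/s -> 409.6 GFlop/s (~410); 820 MFlop/s -> 419.8 GFlop/s (~420),
# matching the rounded figures in the text.
```

Note that these are theoretical peaks; the actual Top500 placement depends on sustained Linpack performance, which is lower.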

The next step toward a bigger system is clustering. Using the integrated meta router, clusters of thousands of CPUs can be realized with internal clustering, making a single system image with shared memory possible. Alternatively, if the user wants different partitions for different applications or usage, the machine can be partitioned; each partition runs its own operating system instance, and different operating systems in different partitions are allowed. Switching between the modes is done in software.

External clustering is done via the NUMAlink interconnect technology.

SGI will also build this machine on IA-64 (Itanium) once the processor is available in mass production; the operating system will be Linux. Initially, this machine will not scale as high as the 512-processor MIPS series.

SGI Onyx 3200 is tuned for small teams, power users and readily deployable tasks. It scales up to eight CPUs, two graphics pipelines and 16 GB of memory.

SGI Onyx 3400 scales from 4 to 32 CPUs and drives up to eight full graphics pipelines and eight simultaneous graphics users.

SGI Onyx 3800 scales from 16 to 512 CPUs and from 1 to 16 graphics pipelines in a single, shared-memory system. For the ultimate in scalability, clusters of SGI Onyx 3800 systems offer thousands of CPUs and hundreds of graphics pipelines.

——- Uwe Harms is a supercomputing consultant and owner of Harms-Supercomputing-Consulting in Munich, Germany.
