TACC Receives $59 Million NSF Award For Sun Supercomputer

By Nicole Hemsoth

September 29, 2006

The University of Texas at Austin, Arizona State University, Cornell University and Sun Microsystems to deploy the world's most powerful general-purpose computing system on the TeraGrid

The National Science Foundation (NSF) has made a five-year, $59 million award to the Texas Advanced Computing Center (TACC) at The University of Texas at Austin to acquire, operate and support a high performance computing system that will provide unprecedented computational power to the nation's research scientists and engineers.

“This is a very valuable resource for the scientific community and society in general,” said William Powers Jr., president of the university. “This award confirms that The University of Texas at Austin is an innovative leader in high performance computing and research.” The award is the largest NSF award ever to The University of Texas at Austin.

The University of Texas at Austin project team is led by Dr. Jay Boisseau, director of TACC, and includes leading researchers from TACC and the Institute for Computational Engineering & Sciences (ICES). UT Austin, in collaboration with Sun Microsystems, Arizona State University and Cornell Theory Center (CTC) at Cornell University, submitted the proposal in response to the NSF's High Performance Computing System Acquisition Program's inaugural competition. The program is designed to deploy and support world-class high performance computing systems with tremendous capacity and capability to empower the U.S. research community. The award covers the acquisition and deployment of the new Sun system and four years of operations and support to the national community to enhance leading research programs. TACC will be the lead partner, with assistance from ICES, ASU and CTC in the areas of applications optimization, large-scale data management, software tools evaluation and testing, and user training and education.

High performance computing has become a vital investigative tool in many science and engineering disciplines. It enables testing and validation of theories and analysis of vast volumes of experimental data generated by modern scientific instruments, such as the very high-energy particle accelerators in the United States and Europe. HPC makes it possible for researchers to conduct experiments that would otherwise be impossible — studying the dynamics of the Earth's climate in the distant past, for example, investigating how the universe developed, or discovering how complex biological molecules mediate the processes that sustain life. In industry, high performance computing is used in everything from aircraft design and improvement of automobile crashworthiness to the creation of breathtaking animations in the cinema and the production of snack food.

The NSF Office of Cyberinfrastructure (OCI) coordinates and supports the acquisition, development and provision of state-of-the-art cyberinfrastructure resources, tools and services essential to 21st century science and engineering research and education, including HPC systems. The TeraGrid, sponsored by OCI, integrates a distributed set of high capability computational, data management and visualization resources to enable and accelerate discovery in science and engineering research, making research in the United States more productive. The new Sun HPC system at TACC will become the most powerful computational resource in the TeraGrid.

Juan Sanchez, vice president for research at UT Austin, said the new supercomputer will enable a new wave of research and researchers. “The Texas Advanced Computing Center is highly qualified to manage this powerful system, which will have a deep impact on science,” Sanchez said. “The scale of the hardware and its scientific potential will influence technology research and development in many areas, and the results and possibilities will contribute to increasing public awareness of high performance computing. In addition, the project team is deeply committed to training the next generation of researchers to use HPC resources.”

TACC is partnering with Sun Microsystems to deploy a supercomputer system specifically developed to support very large science and engineering computing requirements. In its final configuration in 2007, the supercomputer will have a peak performance in excess of 400 teraflops, making it one of the most powerful supercomputer systems in the world. It will also provide over 100 terabytes of memory and 1.7 petabytes of disk storage. The system is based on Sun Fire x64 (x86, 64-bit) servers and Sun StorageTek disk and tape storage technologies, and will use over 13,000 of AMD's forthcoming quad-core processors. It will be housed in TACC's new building on the J.J. Pickle Research Campus in Austin, Texas.
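The headline performance figure can be roughly sanity-checked from the processor count alone. The sketch below is illustrative only: the clock frequency and per-core FLOP rate are assumed values typical of AMD quad-core parts of that era, not figures stated in the announcement.

```python
# Rough peak-performance estimate for the quoted configuration.
# Only the processor and core counts come from the article;
# clock speed and FLOPs/cycle are assumptions for illustration.
processors = 13_000        # "over 13,000" quad-core processors (from the article)
cores_per_proc = 4         # quad-core (from the article)
clock_ghz = 2.0            # assumed clock frequency
flops_per_cycle = 4        # assumed double-precision FLOPs per core per cycle

peak_tflops = processors * cores_per_proc * clock_ghz * flops_per_cycle / 1000
print(f"Estimated peak: ~{peak_tflops:,.0f} TFLOPS")  # ~416 TFLOPS, consistent with ">400 teraflops"
```

Under these assumptions the quoted "over 13,000" processors works out to more than 50,000 cores, which is what puts the aggregate peak above the 400-teraflop mark.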

This system marks Sun's largest HPC installation to date. “Sun's new supercomputer and storage technologies create a powerful combination that will allow TACC to build and operate a supercomputer delivering more than 400 teraflops,” said Marc Hamilton, director of HPC Solutions, Sun Microsystems. “We are excited about extending our long-standing relationship with TACC with this system, making it possible for scientists and engineers to reap the benefits of one of the world's most powerful supercomputers.” Kevin Knox, AMD's vice president for worldwide commercial business, said, “The design and performance of the AMD Opteron processor and our planned quad-core processor roadmap have been integral in supplying the best option for high performance computing deployments to customers such as Sun to provide to businesses, universities and government research centers.”

“The new Sun system will provide unmatched capability and capacity for scientific discovery for the open research community,” Boisseau said. “The technologies in the new Sun systems will enable breakthrough performance on important science problems.” Added Tommy Minyard, assistant director for advanced computational systems at TACC and the team project manager, “With tremendous and balanced processor, memory, disk, and interconnect capabilities, this powerful system will enable both numerically intensive and large-scale data applications in many scientific disciplines.”

Under the agreement with the NSF, five percent of the computer's processing time will be allocated to industrial research and development through TACC's Science & Technology Affiliates for Research (STAR) program. “High performance computing is essential to innovation, in business as well as in science,” said Melyssa Fratkin, TACC's industrial affiliates program manager. “We anticipate collaborations with a wide range of companies that will take advantage of this powerful computing system, to achieve the breakthrough insights they need to maintain a competitive edge in the global marketplace.”

Another five percent will be allocated to other Texas academic institutions. “This resource will help Texas academic researchers provide answers to some of the most perplexing scientific questions,” said Dr. Mark Yudof, chancellor of the University of Texas System.

The initial configuration of this system will go into production on June 1, 2007, with the final configuration in operation by October 2007. User training will begin shortly before deployment to help researchers utilize this resource. “Our Virtual Workshop technology will help researchers across the US rapidly come up to speed on using the new system,” said Dave Lifka, CTC's director of high performance & innovative computing. Added Dan Stanzione, director of the ASU High Performance Computing Initiative, “Effectively training and supporting a national community will be just as important to addressing the most important scientific challenges as making the hardware available.”

HPC systems are enabling researchers to address important problems in nearly all fields of science. From understanding the 3D structure and function of proteins to predicting severe weather events, HPC resources have become indispensable to knowledge discovery in life sciences, geosciences, social sciences and engineering, producing results that have direct bearing on society and quality of life. Furthermore, HPC resources are required for basic research across disciplines, from understanding the synthesis of all heavy elements via supernova explosions to mapping the evolutionary history of all organisms throughout the history of life on Earth.

“The new TACC/Sun system has great potential for advancing the study of quantum chromodynamics,” said Bob Sugar, a research professor in the department of physics at the University of California, Santa Barbara. Sugar and his colleagues study the fundamental forces of nature to obtain a deeper understanding of the laws of physics — electromagnetism, weak interactions, and quantum chromodynamics (QCD), the theory of the strong interactions. They also study the properties of matter under extreme conditions of temperature and density, such as those that existed immediately after the Big Bang.
 
“Our research requires highly capable computers,” Sugar continued. “This system will lead to major advances in our work and that of many other high energy physicists. I expect to see important progress on problems that are presently beyond our reach.”

As the head of the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign, Klaus Schulten conducts groundbreaking research in computational life science, investigating how cells in all organisms synthesize new proteins from genetic instructions and how plants convert sunlight into chemical energy. Schulten also assists bioengineers in developing medical nanodevices.

“TACC is a major provider of supercomputer power to U.S. researchers,” Schulten said. “The new TACC/Sun system, combined with our group's award-winning parallel molecular dynamics code, promises to simulate the largest structures yet of living cells. This will turn the TACC/Sun system into a new type of microscope that shows how viruses infect human cells, how obesity is fought through the cell's own proteins, and how nature harvests sunlight to fuel all life on Earth,” Schulten concluded.

“TeraGrid users will be able to conduct simulations that are currently impossible, and researchers from diverse fields of science will develop entirely new applications for scientific discovery,” said Omar Ghattas of ICES, the project's chief applications scientist. Ghattas, Karl Schulz of TACC, and Giri Chukkapalli of Sun will lead the high-level collaboration activities with leading researchers across the US, such as Sugar and Schulten, to ensure that the Sun system is used most effectively on important and strategic research challenges.

“This Sun system will enable scientific codes to achieve greater performance on vastly larger problems, with higher resolution and accuracy, than ever before. It is no exaggeration to say it will be one of the most important scientific instruments in the world,” concluded Boisseau.

—–

Source: Texas Advanced Computing Center
