January 22, 2010
World-renowned nuclear physics laboratory explores cloud computing with Platform ISF and Platform ISF Adaptive Cluster
TORONTO, Dec. 17 -- Platform Computing, the leader in cluster, grid and cloud management software, today announced that CERN, the European Organization for Nuclear Research, has advanced its current Platform LSF grid infrastructure to pilot the world's largest cloud computing environment for scientific collaboration. Using Platform's private cloud management and HPC cloud-enabling software solutions, Platform ISF and Platform ISF Adaptive Cluster, CERN believes the cloud project will allow it to deliver increased computing performance and offer better infrastructure services to its 10,000 researchers from 85 countries. The scientists are working at the new high-energy frontier in particle physics to explain fundamental mysteries of the universe, such as why particles have mass and the nature of all the "missing mass" in the universe.
Founded in 1954, CERN is one of the world's largest and most respected scientific research facilities. At CERN, complex scientific instruments are used to study fundamental physics: the basic constituents of matter that reveal how the universe works. In addition to the center's renowned research into particle physics, CERN is known for its global network of collaborative scientific research partners and for its technological innovation, including the HTTP protocol that led to the creation of the World Wide Web; the construction of the world's most powerful particle accelerator, the Large Hadron Collider (LHC); and now the advancement of cloud computing.
"For CERN's cloud computing initiative, we needed an infrastructure that would support our existing grid in a heterogeneous environment that could manage both the VMs and physical machines necessary for our researchers to run projects smoothly, since their computing needs change constantly as the data is processed," said Tony Cass, Group Leader, Fabric Infrastructure and Operations, CERN. "Platform's ISF and ISF Adaptive Cluster, combined with the Platform LSF grid workload management solution already in place, will provide our users the scalability and flexibility they need to manage their clusters and share datacenter resources while adhering to our requirements for open standards."
At CERN, massive amounts of scientific data are processed and must be distributed to researchers in near real time. As a result, CERN's cloud infrastructure has to provide the capacity to support production and analysis of more than 15 petabytes of data per year, processed on 60,000 CPU cores, while allowing scientists to manage workloads themselves rather than relying on a centralized IT department at CERN's laboratory near Geneva. Because CERN already uses Platform LSF for the extensive scalability needed to analyze its vast research data, the laboratory chose to partner again with Platform to explore how to utilize its resources more effectively in a virtualized cloud environment.
Platform ISF and ISF Adaptive Cluster provide an open, low-cost common platform for CERN's scientists, allowing the management of both virtual and physical servers in the cloud. In addition, scientists can manage their own application environments and control projects dynamically for maximum flexibility and efficient workload processing in a more cost-effective manner than with a centralized IT management department.
"CERN is where the fields of computing and scientific discovery intersect on a grand scale to advance our understanding of the universe and ourselves," said Songnian Zhou, CEO, Platform Computing. "Since collaboration and sharing are fundamental to scientific research, Platform ISF and ISF Adaptive Cluster give researchers the ability to collaborate easily and manage data, along with the supercomputing power they need to capture, simulate, reconstruct and analyze scientific events. Our partnership with CERN started in the early 1990s in support of their large distributed computing production and is expanding along with CERN and enterprises worldwide as they lead the evolution of the Internet from networking protocol to grid to cloud."
CERN, the European Organization for Nuclear Research, is the world's leading laboratory for particle physics. It has its headquarters in Geneva. At present, its Member States are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland and the United Kingdom. India, Israel, Japan, the Russian Federation, the United States of America, Turkey, the European Commission and UNESCO have Observer status. Visit www.cern.ch.
About Platform Computing
Platform Computing is the leader in cluster, grid and cloud management software - serving more than 2,000 of the world's most demanding organizations. For 17 years, our workload and resource management solutions have delivered IT responsiveness and lower costs for enterprise and HPC applications. Platform has strategic relationships with Cray, Dell, HP, IBM, Intel, Microsoft, Red Hat, and SAS. Visit www.platform.com.
Source: Platform Computing