Although the pre-Memorial Day news slowdown is in effect across the tech spectrum, there have been several notable stories this week from some heavy-hitters in HPC, including Cray, MathWorks, and others.
In the lead-up to ISC ’14, we were able to gather an incredibly diverse group of leaders from both the HPC and “big data” communities this week to talk about some of the most pressing issues in commercial and scientific computing at our own Leverage Big Data event, which just wrapped up tonight in Carlsbad. We had a wonderful time at the five-star Four Seasons Aviara surrounded by some brilliant folks, including our keynote and featured speakers, among them Dr. Jack Collins from the National Cancer Institute, astrophysicist and data scientist Dr. Kirk Borne, Jack Levis of UPS, Dr. Ari Berman of BioTeam, and many others from major companies, leading supercomputing sites, and research centers.
While we’ll discuss the event in more detail in the coming week, especially in terms of the most prominent themes that emerged, it’s fair to say for now that the challenges of research and commercial HPC and those found in enterprise “big data” are strikingly similar.
The problems of finding (and retaining) the right people; storing, managing and curating large archives of data; separating hype from real-world value with emerging tools for large-scale data analysis and visualization; and taking new approaches to infrastructure (e.g. converged, specialized and/or commodity) were all hot topics.
Editors from Datanami, EnterpriseTech and HPCwire were on hand for some very lively panels, and many lessons were learned and shared.
Again, more on this coming next week as we delve into some of the keynote topics and other themes. For now, however, let’s kick around the few but important bits of info that crossed our wires while we were gathering with so many of you here in Carlsbad….
Top News Items This Week
Cray has announced that the Center for Computational Sciences (CCS) at the University of Tsukuba in Japan has put another Cray CS300 cluster supercomputer into production – the second Cray CS300 system unveiled at the university in the last six months.
With the addition of the new Cray CS300 system, named “COMA (PACS-IX),” which stands for Cluster Of Many-core Architecture processors, and the previously announced Highly Accelerated Parallel Advanced system for Computational Sciences (HA-PACS), the university now has two petascale Cray cluster supercomputers.
MathWorks announced that the Swedish National Infrastructure for Computing (SNIC) has selected MATLAB and MATLAB Distributed Computing Server as vehicles to enable researchers at all Swedish universities to utilize resources at the national datacenters for high-performance computing (HPC) and to more effectively collaborate with colleagues across the country.
Starting in April, MATLAB Distributed Computing Server will be available in SNIC’s six datacenters, allowing researchers to run their computationally intensive MATLAB programs on SNIC’s high-performance computing clusters. Researchers will be able to develop parallel MATLAB applications on their own computers and then scale them to SNIC’s infrastructure from within the MATLAB environment.
The European Grid Infrastructure (EGI) launched the Federated Cloud – a cloud service tailored for European researchers. The announcement was made at the annual EGI Community Forum, in Helsinki.
EGI’s Federated Cloud provides researchers with a flexible, scalable, standards-based cloud infrastructure. The service includes support and expertise provided by EGI and its partners to ensure researchers benefit fully from the infrastructure. Any European researcher can start using the Federated Cloud today by contacting EGI ([email protected]), or consulting the instructions at http://www.egi.eu/how-to/use_the_federated_Cloud.html.
The DoE released a report through its Office of Science detailing the top ten research challenges on the road to exascale computing, once again calling on the University of Tennessee’s Dr. Jack Dongarra for input.
Dongarra has long been at the forefront of exascale computing, or computing at roughly a thousand times the capability of recent supercomputers. “Numerous reports have documented the technical challenges and nonviability of simply scaling existing computer designs to reach exascale,” said Dongarra. “Drawing from these reports and experience, our subcommittee has identified the top 10 computing technology advancements that are critical to making a productive, economically viable exascale system.”
Servergy and the University of Texas at San Antonio announced an open innovation bridge and new lab between IBM’s OpenPOWER and the Open Compute community, intended to accelerate the pace of open innovation for the benefit of both communities and the industry at large.
Servergy, a Texas-based cleantech IT innovation and design firm, announced with the University of Texas at San Antonio’s Cloud and Big Data Lab – the only Open Compute lab in North America – that the two would partner to create the world’s first lab focused on accelerating the development of OpenPOWER for the Open Compute community.
We’ll be back next week with a news cycle shortened in the U.S. by the Memorial Day holiday, catching our breath after a great event week – and before the ISC news rush begins.