Tag: Argonne National Laboratory
Grid computing pioneer and big data visionary Charlie Catlett recently delivered a presentation on “Big Data and the Future of Cities” as part of the Argonne OutLoud series, hosted by Argonne National Laboratory. Catlett explores how emerging technologies in high-performance computing, embedded systems and data analytics can help mitigate some of the challenges associated with Read more…
Argonne National Laboratory recently published several sessions from its Summer 2013 Extreme-Scale Computing program to YouTube. One of these is a lesson on combining performance and portability presented by Argonne Assistant Computational Scientist Jeff Hammond. For some reason the video image does not match the lecture, but you will find a link to Hammond’s slide deck here. Read more…
One of the most pressing issues faced by the HPC community is how to attract and train the next generation of HPC users. The staff at Argonne National Laboratory is tackling this challenge head-on by holding an intensive summer school in extreme-scale computing. One of the highlights of the 2013 summer program was a Read more…
A few themes run through this week's picks for the top research items of the last seven days. Among these is making systems running HPC applications more efficient, at both the VM and storage layers. We also present research on energy efficiency, job scheduling and resource sharing.
Prominent figures in government, national labs, universities and other research organizations are worried about the effect that sequestration and budget cuts may have on federally funded R&D in general, and on HPC research in particular. They have been defending the concept in hearings and on editorial pages across the country. It may be a tough argument to sell.
Everybody loves predictions. Here are a few made by IEEE group members at SC12, in case you missed them.
Argonne’s 10-petaflop Blue Gene/Q will be used to gain a better understanding of dark matter.
DOE lab is taking applications from researchers who want time on 8-petaflop super.
As a result of the dissolution of DARPA's UHPC program, the driving force behind exascale research in the US now resides with the Department of Energy, which has embarked upon a program to help develop this technology. To get a lab-centric view of the path to exascale, HPCwire asked three of the top directors at Argonne National Laboratory — Rick Stevens, Michael Papka, and Marc Snir — to provide some context for the challenges and benefits of developing these extreme-scale systems.
China and Europe are already planning for the next supercomputing era.