January 05, 2009
2008 marked the 19th year of CASC, the Coalition for Academic Scientific Computation. CASC now has 60 member institutions, including many of the leading supercomputing, grid, and visualization centers in the US. In this brief article I will highlight some of CASC's recent accomplishments. I also hope to encourage academic organizations with a major commitment to scientific computing that are not yet members of CASC to consider applying for membership.
Highlights of CASC activities in the past year include:
The CASC brochure (at http://www.casc.org/) describes the full extent and quality of the work being done by CASC members. CASC began in 1989 as the Coalition of Academic Supercomputer Centers. Since then CASC members have worked together and learned from each other as scientific computing evolved from a few supercomputers to many supercomputers and then to grids and webs, changing its name and expanding its activities along the way. CASC members are already at work pursuing the best ways to derive scientific benefit from cloud computing. Any academic organization with a major commitment to scientific computing will benefit from becoming a CASC member and joining in CASC's activities as the field of scientific computing continues to evolve.
The architect Daniel Burnham said: "Make no little plans. They have no magic to stir men's blood and probably themselves will not be realized. Make big plans; aim high in hope and work."
CASC and CASC members have been making -- and implementing -- big plans for years. As CASC comes to the end of its second decade of activities, I see before us a new decade in which the US and the world as a whole need, more desperately than ever, the insights CASC members and the scientific computing community generally can help bring about. The application of the technology CASC members are building, implementing and using can help us find the right answers to important questions more quickly, and where there are no right answers at least better answers. Our education and training efforts can help accelerate the transformation of science and engineering while at the same time opening new and exciting career opportunities for the young people of the US and the world as a whole.
The Daniel Burnham quote goes on, "Remember that our children and grandchildren are going to do things that amaze us." Our children and grandchildren will indeed do things that amaze us, and the work we are doing now will help provide better lives for them in the future.
Let me close by noting that the immediate future of CASC is in excellent hands. As announced at SC08, the CASC leadership team for 2009 will include Dick Pritchard as Secretary-Treasurer, Amy Apon as Vice Chair, and Stan Ahalt as Chair. It has been an honor, a privilege, and a tremendous learning experience to serve CASC as its chair for the past two years. I look forward to continuing to serve CASC and being a part of this community in the future.
Craig A. Stewart, Ph.D., was CASC Chair for 2007-2008 and is the Executive Director of the Pervasive Technology Institute as well as Associate Dean of Research Technologies at Indiana University.