March 21, 2013
BEIJING, China, March 21 — Finalists for the 2013 Asia Student Supercomputer Challenge (ASC13) were announced in Beijing on March 19. Ten Asian university student supercomputing teams will compete head-to-head for the highest computing performance next month at Shanghai Jiaotong University in China.
The 10 university teams are from the Chinese University of Hong Kong, King Abdulaziz University, National Tsinghua University, University of St. Petersburg, Ulsan University of Science and Technology, Sun Yat-Sen University, Tsinghua University, National University of Defense Technology, Shanghai Jiaotong University, and Huazhong University of Science and Technology.
ASC is one of the three major global student supercomputing contests, alongside those held at the Supercomputing Conference (SC) and the International Supercomputing Conference (ISC). The contest was initiated by China and jointly launched and organized by experts and institutions from Japan, Russia, South Korea, Singapore, Taiwan, Hong Kong, and other countries and regions. It is hosted by the Chinese supercomputer manufacturer Inspur and aims to cultivate supercomputing application talent and to strengthen supercomputing exchange and cooperation across Asia.
ASC13 is considered an all-round test of supercomputing ability. Mo Zeyao, chairman of the ASC13 evaluation committee, commented in his review that all teams showed outstanding skill both in building their supercomputing platforms and in the three preliminary-round tests: HPL, GROMACS, and BSDE.
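HPL (High-Performance Linpack) scores a system by the sustained floating-point rate of a large dense linear solve, and contest teams typically comb through many tuned runs for their best figure. As a hypothetical illustration (not part of the contest materials), a short script could pull the Gflops column out of the standard xhpl output format:

```python
import re
import sys

# Hypothetical helper: extract performance numbers from HPL output.
# Assumes the standard xhpl result-line layout:
#   T/V        N    NB   P   Q   Time      Gflops
#   WR11C2R4   35000 192  4   4   123.45    2.316e+02
HPL_RESULT = re.compile(
    r"^W[RC]\S+\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s+([\d.]+)\s+([\d.eE+-]+)"
)

def best_gflops(output: str) -> float:
    """Return the highest Gflops figure found in an HPL output dump."""
    scores = []
    for line in output.splitlines():
        match = HPL_RESULT.match(line.strip())
        if match:
            scores.append(float(match.group(6)))
    if not scores:
        raise ValueError("no HPL result lines found")
    return max(scores)

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        print(f"Best run: {best_gflops(f.read()):.2f} Gflops")
```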
According to the ASC13 committee, the finals will be more difficult. In addition to the three earlier tests, the finals add two new ones: OpenCFD and WRF. OpenCFD is computational fluid dynamics software independently developed by Chinese scientists. WRF (the Weather Research and Forecasting model) is a numerical weather prediction system developed by the National Centers for Environmental Prediction (NCEP), the National Center for Atmospheric Research (NCAR), and other American scientific institutes.
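Because the finals reward application tuning, a team might, for example, time WRF at several MPI process counts to find the sweet spot on its cluster. A minimal sketch, assuming a working wrf.exe build with input files already staged (the script and the process counts are illustrative, not from the contest rules):

```python
import subprocess
import time

# Hypothetical tuning loop: time wrf.exe under MPI at several process
# counts. Assumes mpirun is on PATH and WRF inputs are in the current
# directory; wrf.exe is the standard WRF executable name.
def time_wrf(nprocs: int) -> float:
    start = time.perf_counter()
    subprocess.run(["mpirun", "-np", str(nprocs), "./wrf.exe"], check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    for nprocs in (16, 32, 64):
        print(f"{nprocs} processes: {time_wrf(nprocs):.1f} s")
```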
The supercomputer, known as the brain of modern science and technology, is a significant tool for boosting scientific and technical innovation, economic development, social advancement, and national defense. Asia's presence in the global supercomputer market has been expanding steadily: in the TOP500 list released in November 2012, Asia contributed 91 systems, a close third behind America and Europe. Compared with America and Europe, however, exchange and cooperation in this field among Asian countries and regions remain immature.
A number of ASC experts from China, Japan, South Korea, Singapore, Taiwan, Hong Kong, and Thailand expressed their views on, and expectations for, Asian supercomputing development.
Mr. Marek Michalewicz, Senior Director of the A*STAR Computational Resource Centre in Singapore, said that Asian countries should jointly strengthen the cultivation of supercomputing talent, as well as communication and cooperation, so as to raise the overall level of supercomputing application in Asia.
Mr. Hu Leijun, Vice Director of the State Key Laboratory of High-end Server & Storage Technology and Vice President of Inspur, said he believes ASC13 will prove a beneficial initiative for Asian supercomputing development.
Since ASC13 was announced, many Asian universities have joined with great enthusiasm, and hundreds more have inquired by telephone. Although the challenge is being held for the first time, its influence is already broad and far-reaching.