March 14, 2013
BEIJING, March 14 — ASC13 is the first international student supercomputer challenge held in Asia. Registration has now closed and all teams have entered the preliminary contest. So far, 43 universities from mainland China, Hong Kong, Taiwan, Russia, South Korea, India, Kazakhstan, and Saudi Arabia are preparing for the preliminary round, competing for places in the finals. The ASC13 finals will be held April 15-19 in Shanghai, China.
ASC in Asia, SC in the U.S., and ISC in Germany are the world's three top-level student supercomputer challenges. Proposed by China, the competition was jointly launched and is organized by supercomputing experts and organizations from Japan, Russia, South Korea, Singapore, Thailand, Taiwan, and Hong Kong, with Inspur as the major sponsor.
According to the ASC13 organizing committee, the challenge aims to promote exchange and training among young supercomputing talent, improve the use, research, and development of supercomputers, strengthen supercomputing as a driver of science and technology, and boost innovation in Asian science, technology, and industry.
As the first supercomputer challenge held in Asia, ASC13 has attracted many well-known Asian universities; the Top 10 list of finalists will be announced in mid-March. The leading contenders include Tsinghua University, winner of the ISC12 championship in Germany; the National University of Defense Technology, which took the top prize for computing performance; and National Tsing Hua University, two-time winner of the SC title in the United States. They are joined by other prominent universities from across Asia, among them Pukyong National University from South Korea, Bauman Moscow State Technical University from Russia, the Chinese University of Hong Kong, Mumbai University from India, Kazakh National University, and King Saud University, as well as mainland Chinese universities including Shanghai Jiaotong University, the University of Science and Technology of China, Huazhong University of Science and Technology, Nanjing University, Wuhan University, Tongji University, Fudan University, Sun Yat-Sen University, Northwestern Polytechnical University, and more. Only 10 of these universities will earn tickets to the finals.
According to the ASC13 organizing committee, each team consists of six university students. Working from a common problem set, every team must design a supercomputer system, test and optimize it, and submit its proposal within the allotted time; the evaluation committee will then review the submissions. Each team's system must be built within a total power budget of 3,000 W. For the application tests, the challenge includes commonly used benchmarks such as HPL, which measures a system's floating-point performance, and GROMACS, which is used for molecular dynamics research on biomolecular systems. In addition, the committee has set a many-core parallel optimization test for the MIC architecture based on BSDE, an option pricing application, which reflects the applied, practical character of the challenge.
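As a rough illustration of how the HPL benchmark and the 3,000 W power cap interact (this is not part of the ASC13 rules, and all numeric values below are assumed for the example), the following Python sketch shows how a measured HPL result is typically turned into efficiency figures:

    # Illustrative sketch only: how an HPL result and a fixed power budget
    # combine into efficiency figures. The rmax/rpeak numbers are made-up
    # example values, not ASC13 results.

    def hpl_efficiency(rmax_tflops: float, rpeak_tflops: float) -> float:
        """Fraction of theoretical peak achieved by the HPL run."""
        return rmax_tflops / rpeak_tflops

    def gflops_per_watt(rmax_tflops: float, power_watts: float) -> float:
        """Energy efficiency of the HPL run in GFLOPS per watt."""
        return rmax_tflops * 1000.0 / power_watts

    if __name__ == "__main__":
        rmax = 8.0      # assumed measured HPL result, in TFLOPS
        rpeak = 11.0    # assumed theoretical peak of the cluster, in TFLOPS
        power = 3000.0  # ASC13 power budget, in watts

        print(f"HPL efficiency: {hpl_efficiency(rmax, rpeak):.1%}")
        print(f"Energy efficiency: {gflops_per_watt(rmax, power):.2f} GFLOPS/W")

Under the stated power cap, teams effectively compete on achieved performance per watt rather than raw peak performance alone.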
In quieter times, sounding the bell of funding big science with big systems tends to resonate further than when ears are already burning with sour economic and national security news. For exascale's future, however, the time could be ripe to instill some sense of urgency....
In a recent solicitation, the NSF laid out its needs for furthering scientific and engineering infrastructure with new tools that go beyond top performance. Having already delivered systems like Stampede and Blue Waters, the agency is turning an eye to solving data-intensive challenges. We spoke with the NSF's Irene Qualters and Barry Schneider about...
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate work and to absorb peak computational demand that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
May 23, 2013 |
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call ‘Climate in a Box,’ a system they note acts as a desktop supercomputer.
May 22, 2013 |
At some point in the not-too-distant future, building powerful, miniature computing systems will be considered a hobby for high schoolers, just as robotics or even Lego-building are today. That could be made possible by recent advances with Raspberry Pi computers.
May 16, 2013 |
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud, benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 15, 2013 |
Supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) have worked on important computational problems such as the collapse of the atomic state, the optimization of chemical catalysts, and now the modeling of popping bubbles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/15/2013 | Bull | “50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this White Paper by Analysts Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes.
In this demonstration of the SGI DMF ZeroWatt disk solution, Dr. Eng Lim Goh, SGI CTO, discusses how SGI DMF software can reduce costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms, featuring the latest processor and network technologies and supporting a wide range of datacenter cooling requirements.