After five days of intense competition in Wuhan, China, the Huazhong University of Science and Technology team was crowned champion of the Asia Supercomputer Community’s 2016 Student Supercomputer Challenge (ASC16).
The awards ceremony took place last Friday (April 22) on the Huazhong University campus, where the event was hosted. Earlier that day, all 16 teams presented their findings to the distinguished panel of judges and the final tallies were recorded.
After the runners-up were recognized for making it into the final round, the top-ranking teams were brought up on stage to receive their placards and trophies. The overall championship prize as well as the e-Prize went to the Huazhong University team, earning them a cash prize of approximately $19,500.
The winners of the six award categories with cash prizes totaling $36,000 are listed below:
Champion (100,000 CNY / $15,443):
- Huazhong University of Science and Technology
Second Place (50,000 CNY / $7,722):
- Shanghai Jiao Tong University
e-Prize (27,182 CNY / $4,198):
- Huazhong University of Science and Technology
Highest Linpack Award (10,000 CNY / $1,544):
- Zhejiang University
Application Innovation Award (10,000 CNY / $1,544):
- Sun Yat-Sen University
- Beijing University of Aeronautics and Astronautics
- Northwestern Polytechnical University
- Nanyang Technological University
Best Popularity Award (5,000 CNY / $772):
- Hong Kong Baptist University
- Nanyang Technological University
For the e-Prize challenge, the Huazhong University of Science and Technology team optimized a deep neural network program to create a highly precise training model for approximately 600,000 speech data segments in English, Mandarin Chinese and the Sichuan dialect. Computing performance was improved by a factor of 108. “This was an amazing achievement on DNN for HUST,” said judge Kwan Wing Keung, Asst. IT Director, Information Technology Services, the University of Hong Kong (HKU).
The e-Prize application was worth 25 percent of the total score; with such heavy weighting, a strong showing on this application could boost a team’s overall ranking and improve its chance of winning the championship. Performance optimization of the DNN program was carried out on eight nodes of the Tianhe-2 supercomputer — each node outfitted with two CPUs (Xeon E5-2692 v2, 12 cores) and three MIC cards (Intel Xeon Phi 31S1P, 57 cores), for a total of 24 Phi coprocessors. CPU and GPU configurations have traditionally been the norm for DNNs, so the use of the Xeon Phi was an innovative aspect of the competition. The HUST team coach commented that because the students are undergraduates, they haven’t developed a strong preference for one platform over another and were able to approach the Phi with a fresh outlook.
The students came into the competition with varying levels of experience with the Phi architecture. Some of the teams had access to Phi hardware at their home institution or a sister organization; some were able to purchase a node to experiment with; and others were using the coprocessors for the very first time at the competition. During the preparation period for the finals, the ASC committee and primary sponsor Inspur provided remote test platforms, including a four-node CPU cluster and a four-node CPU+MIC cluster. Students were able to practice the DNN challenge with a sample dataset of 15,000 segments of speech data, but for the actual test, the dataset was greatly expanded to approximately 600,000 segments. The voice data was provided by speech recognition company iFlyTek. Like Microsoft Cortana, Skype Translator, Apple Siri and Google Now, iFlyTek’s voice recognition software relies on deep learning methods.
In addition to the High Performance Linpack workload and the DNN program, the other applications required by the contest were the benchmark standard HPCG, the surface wave numerical model MASNUM, and the materials simulation software ABINIT. The “mystery application,” a tradition of student cluster competitions, was revealed at the start of the official testing period to be ABySS, a de novo parallel sequence assembler designed for short paired-end reads and large genomes. Zhu Hong, an event official from Inspur, commented that this genome sequencing application is known for being difficult to parallelize, so teams may be more successful using fewer cores.
For the competition, the teams from Huazhong University and Shanghai Jiao Tong University (first and second place, respectively) each designed and built a cluster with eight Inspur server nodes equipped with a total of 16 CPUs (160 cores) and six Nvidia K80 GPUs. The team from Zhejiang University deployed a four-node cluster with a total of eight CPUs (80 cores) and eight K80 GPUs.
The closing speech of the conference award ceremony was delivered by ASC Expert Committee member Jack Dongarra.
“I’ve been involved in many of the student challenges and what I see here is quite different and it does represent a higher level of intensity,” said the father of the High Performance Linpack benchmark, standing before a packed auditorium. “When you think about it, this didn’t start just a few weeks ago; this process started back in December when 175 teams joined the competition. That represents over one thousand students who have participated in this effort, and it culminates today in your 16 teams. Please stand up so we can give you a round of applause. The future of supercomputing is in good hands.”