Tokyo Institute of Technology
Satoshi Matsuoka has been a Full Professor at the Global Scientific Information and Computing Center (GSIC), a Japanese national supercomputing center hosted by the Tokyo Institute of Technology, and since 2016 a Fellow at the AI Research Center (AIRC), AIST, the largest national lab in Japan. He received his Ph.D. from the University of Tokyo in 1993. He is the leader of the TSUBAME series of supercomputers, including TSUBAME2.0, which was the first supercomputer in Japan to exceed petaflop performance and became the 4th fastest in the world on the Top500 in Nov. 2010, as well as the recent TSUBAME-KFC, which became #1 in the world for power efficiency on both the Green 500 and Green Graph 500 lists in Nov. 2013. He is also currently leading several major supercomputing research projects, such as MEXT Green Supercomputing, JSPS Billion-Scale Supercomputer Resilience, and JST-CREST Extreme Big Data. He has written over 500 articles according to Google Scholar and has chaired numerous ACM/IEEE conferences, most recently serving as overall Technical Program Chair of the ACM/IEEE Supercomputing Conference (SC13) in 2013. He is a fellow of the ACM and the European ISC, and has won many awards, including the JSPS Prize from the Japan Society for the Promotion of Science in 2006, presented by His Imperial Highness Prince Akishino; the ACM Gordon Bell Prize in 2011; the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology in 2012; and, most recently, the 2014 IEEE-CS Sidney Fernbach Memorial Award, one of the most prestigious honors in the field of HPC.
HPCwire: Hi Satoshi. Congratulations on being selected as an HPCwire 2017 Person to Watch. Japan, along with the US, was an early supercomputing pioneer. What is the basis for Japan’s long-time leadership in supercomputing?
Satoshi Matsuoka: Ever since the Meiji Restoration over 100 years ago, Japan became, and has continued to be, one of the world's leaders in science and technology. Many Nobel Prizes have been won by Japanese scientists, and Japanese tech companies are household names globally. Since supercomputing both drives and is driven by science and technology, it is only natural that Japan has sustained its investment and its community over the years. In fact, Japan preceded the US in providing widespread supercomputing power to general academia, starting in the 70s; there has been a continuous series of national supercomputer development projects such as the NWT, the Earth Simulator, and the K computer, as well as top universities hosting, and sometimes developing, national supercomputing centers. These have now collectively evolved into a coalition of national supercomputing infrastructure called HPCI, which includes Tokyo Tech's GSIC and the TSUBAME supercomputers we designed and deployed.
HPCwire: It is often said that necessity is the mother of invention. How have the unique power/space constraints of Japan shaped its supercomputing program?
Matsuoka: In the past, power and space were not major constraints for supercomputers, but with the arrival of MPPs and many-core processors, with machines hosting thousands to millions of cores, they have become the major constraints. As with other social infrastructure in Japan, supercomputers are now designed and procured with these parameters as top-level constraints, due to their relative expense. But of course such constraints lead to innovation: there have been many "green" IT projects in Japan, including those in supercomputing by my research group. Japan has often taken the top spots on the Green 500 list, including with our TSUBAME-KFC. The average power consumption of TSUBAME2.5, a 5-petaflops machine, is a mere 0.8 megawatts in production, including cooling, thanks to various power-saving tactics; TSUBAME3 could become one of the densest machines with the lowest PUE ever. The Post-K supercomputer, to be deployed by 2021, has as its primary project target a power-efficiency goal higher than that of any current machine on the Top500 list.
HPCwire: In 2016 you were named Designated Fellow for AIST AIRC and you are leading the ABCI AI project to stand up a 130 petaflops (FP16) AI supercomputer. What is the HPC-AI connection and where do you see AI intersecting with traditional HPC techniques?
Matsuoka: Firstly, 130 petaflops is the lower bound, and it is not restricted to FP16 but rather to some "precision" sufficient for convergence in the majority of machine learning workloads; we are looking at various interesting proposals by the vendors in this regard to define the concept of "AI-FLOPS."
There are many connections between HPC and AI, in that the third "rebirth" of AI, primarily driven by machine learning and data analytics, was absolutely enabled by modern HPC. Backpropagation algorithms for neural networks have been around since the late 70s, but at the time the algorithm was just too expensive, and we had too little training data, or the I/O bandwidth to move it. Since then, supercomputing capabilities have grown by 7-8 orders of magnitude in both compute and data, making such expensive algorithms viable, and the AI field is now exploding as a result. The demand for HPC in AI is growing ever stronger, as neural networks become extremely deep and complex, and as training sets and hyperparameter searches become ever larger to meet real application needs.
In fact, it is my strong belief that AI is, and will continue to become, the primary field HPC will leverage for its existence and evolution, just as it leveraged mainframes in the 70s-80s, killer micros in MPPs in the 90s-2000s, and AV/entertainment in many-core processors and GPUs in the 2010s with TSUBAME and other machines. That is to say, the demand and growth of AI will be the primary fuel for growth and innovation in the HPC industry. Moreover, it is not just that data analytics workloads will grow alongside traditional simulations; rather, simulations based on first principles and analytics based on empirical data will become very closely knit, further accelerating HPC. For example, simulations could be steered using AI, or even accelerated by "skipping" timesteps based on empirical approximations.
Some people are worried that the emphasis on AI and data analytics will steer optimized machine architectures away from traditional first-principles HPC simulations. I feel that will not be the case, and in fact it will actually help accelerate HPC. I have already mentioned the tight knitting of the two, but even for pure simulations, the machine learning community's tolerance for lower precision may cause breakthroughs in accelerating simulation calculations as well. We are already seeing some of this in areas such as molecular dynamics and CFD, where the use of lower precision is becoming the norm, as well as in studies of fault-oblivious algorithms and uncertainty quantification, where we try to quantify the allowed error in an algorithm. Despite these advances, the motivation to adopt such algorithms used to be low; but with real hardware arriving that caters to the uncertainty tolerated in machine learning, the counterpart algorithms in simulation may become pragmatic as well.
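[Editor's note: as a minimal illustration of the precision trade-off discussed above — this sketch is not from the interview, and the specific numbers are chosen only for demonstration — the following NumPy snippet shows why simply running a simulation's arithmetic in FP16 can fail, and why mixed-precision tactics such as a higher-precision accumulator are commonly paired with low-precision data.]

```python
import numpy as np

# 20,000 values of 0.1 stored in FP16 (a hypothetical workload).
values = np.full(20000, 0.1, dtype=np.float16)

# Naive FP16 running sum: once the accumulator grows large, the FP16
# spacing between representable numbers exceeds the increment, so each
# addition rounds away to nothing and the sum stalls far below 2000.
naive = np.float16(0.0)
for v in values:
    naive = np.float16(naive + v)

# Same FP16 data, but accumulated in FP64 — a common mixed-precision
# tactic: keep the data low-precision, keep the accumulator wide.
mixed = np.float64(0.0)
for v in values:
    mixed += np.float64(v)

print(f"naive FP16 sum = {float(naive)}, FP64-accumulated sum = {mixed:.2f}")
```

The FP64-accumulated result lands close to the true total, while the naive FP16 sum stalls once the accumulator's rounding step exceeds the increment; the same effect governs whether a low-precision simulation kernel converges or drifts.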
HPCwire: Outside of the professional sphere, what can you tell us about yourself – personal life, family, background, hobbies, etc.? Is there anything about you your colleagues might be surprised to learn?
Matsuoka: Many people who meet me for the first time are surprised by how "American" my English is. This is because I spent four years near D.C. during my childhood, a critical period for acquiring a language natively, and I was very fortunate to have been raised multilingual. Of course, I kept training my English during my later school years, just as I did for Japanese after returning to Japan; otherwise I would sound verbally unprofessional. It may not be just the accent itself, but rather the expressions and mannerisms that come through when I speak, that strike listeners as "American." This has certainly benefited me significantly during my career: many of my Japanese colleagues say I give a much better talk in English than in Japanese!
Nonetheless, I grew up mostly in urban Tokyo, and my passion during my elementary school days was to spend hours in Akihabara, the world-famous electronics town in Tokyo. Modern-day Akihabara is dominated by IT and subcultures like anime, but before I went to the US at age nine, Akihabara was synonymous with electronics. Every weekend I would ride a train to Akihabara and scout around the numerous six-foot-wide stall-like shops selling electronics parts such as transistors and transformers, to build electronics projects like radios. Times were such that elementary school kids were allowed to travel alone across a megalopolis like Tokyo and spend a whole day out, without cell phones or GPS trackers!
My passion for computing then started when, to my astonishment, I saw microprocessor evaluation boards based on chips such as the 8080 and 6800 being sold in Akihabara, literally hanging in the storefronts, right after my return from the US, when I was thirteen. I was immediately hooked, and that determined my career in computing. I honed my skills and became a good games programmer: during high school for early personal computers such as the Commodore PET and related gaming consoles, and during my Univ. of Tokyo days for more popular platforms such as the Nintendo NES (Family Computer) and the MSX machines driven by Microsoft Japan. Some of my early games, such as PET INVADERS and PET STARFORCE, can still be found on the Internet, and those for the NES and MSX can still be played using emulators.
In fact, the late Satoru Iwata, the former CEO of Nintendo and a graduate of Tokyo Tech, where I now teach, was a work partner and a good friend of mine in the early days. A company called "HAL Laboratories" was founded around 1980, and we worked there together, spending countless hours in the development office in Akihabara and collaborating on many games projects, including the NES Pinball, which became a million-seller. Iwata was already working full-time for HAL, while I was still in my junior or senior year at Univ. of Tokyo, working part-time. There was an Easter egg in the game depicting our partnership; it was quickly discovered and infuriated Nintendo, but the ROM cartridges were already on the market, so they decided to let it go.
There are many stories to tell; hopefully someday I will write a memoir of those early personal computing days. What I have been doing recently is collecting historical artifacts of 8-bit personal computing from the late 70s to early 80s. I have a collection of US machines such as the Altair 8800a and 680b, IMSAI 8080, SOL-20, SWTPC 6800, Motorola MEK6800D2, MOS Technology KIM-1, Apple ][, etc., as well as Japanese machines such as the NEC TK-80, Fujitsu Lkit-8, Hitachi Basic Master, and Sharp MZ-80. Someday I hope to curate these machines properly and preserve them in a museum. Amazingly, some of them are quite pristine and still work after 40 years!
Nonetheless, I had never imagined I would become an academic in CS at one of the top universities in Japan and build a fairly successful career. I considered myself a capable engineer but possessed little confidence in my abilities as an academic researcher. Upon finishing my Master's degree at Univ. of Tokyo, I had a choice between pursuing a Ph.D. and becoming an engineer at some major IT company. I decided I would try the Ph.D. but bail out after a year if it did not work out. At the time I met with Iwata briefly. I had already resigned from my part-time job at HAL to focus on my studies, and he and I were somewhat distant then, but he told me, "It is likely that you are a better fit for an academic career, while I will pursue a business and engineering career; and we will both likely succeed." At the time I really did not believe him, but it turned out that he was correct.
As such, IT has dominated both my career and personal life since adolescence, and I could not be happier; my passion persists in both. For example, I always carry two iPhones and an iPad along with a laptop, and use them continuously. One major use is Twitter, where I have now accumulated over 14,000 followers, a fairly large number for a simple academic. I also know that there are many people in the HPC business who do not follow me but still check my tweets. So the audience base is fairly large now, and I have to be a little more cautious about what I tweet.