October 01, 2012
On September 18th, Vietnam National University-Ho Chi Minh City (VNU-HCM) and Intel Vietnam signed a memorandum of understanding (MOU) to team up on a high performance computing project for the city. The partnership comes in the context of growing HPC capability across the region. China, Vietnam's neighbor to the north, has been rapidly developing its supercomputing infrastructure and expertise over the past five years. Meanwhile, India, to the west, and Singapore, to the south, are also emerging as regional HPC hubs.
Under the agreement, Intel will provide an HPC platform, including hardware and tool support, along with training courses for university faculty, teachers, and administrators. For its part, VNU-HCM will develop a Master's program in HPC, to be instituted in the 2013-2014 school year.
Le Manh Ha, deputy chairman of the HCM City People's Committee, and Raj Hazra, GM of Intel's Technical Computing Group and VP of the Intel Architecture Group, signed the agreement to support the project.
An HPC research center will be built to house the needed computing infrastructure. The facility's goal will be to support research relevant to local problems and socioeconomic development in Ho Chi Minh City, such as traffic simulation and urban flooding.
The initial phase (2012-2015) of the partnership entails building the HPC research center and deploying a 30-teraflop system powered by Intel processors. According to the university's press release, the project will be evaluated at the end of the first phase, and a decision will be made on whether to proceed to phase two. If all goes as planned, the second phase, which would run to 2020, would involve an upgrade to a 200-teraflop machine. The release goes on to state that after 2020, Vietnam will need to build a computer system that can deliver up to 1 petaflop of performance.