November 04, 2010
With all the recent talk of China's ascent to supercomputing superpower status, let's not forget the other up-and-comers in the HPC world. Brazil, for instance.
A recent story in Nature describes a supercomputer to be installed at Brazil's Centre for Weather Forecast and Climate Studies in Cachoeira Paulista, northeast of São Paulo. The machine, called Tupã, will be used for climate modeling; in particular, the system will focus on simulating the effects of carbon soot and other aerosols from Amazonian wildfires. The Brazilians will team up with climate modeling researchers at the Hadley Centre in Exeter, UK, to help propel the effort forward. From the Nature article:
Brazilian science minister Sergio Rezende proposed the initiative three years ago, as a strategic investment intended to nurture a relatively small climate-modelling team and help bolster Brazilian climate science on the international stage. Tupã builds on several decades of effort to develop a weather- and climate-modelling capacity; in time, the supercomputer could help to earn Brazil a place in the small club of nations that contributes global climate-modelling expertise to the Intergovernmental Panel on Climate Change (IPCC). China has paved the way among developing countries, but Brazil would be the first country in the Southern Hemisphere, apart from Australia, to develop such a capacity.
Tupã is a Cray XT6 machine that clocks in at more than 244 teraflops. And while that falls short of the petaflop club, it will likely be the most powerful super in the Southern Hemisphere when it becomes fully operational in February 2011. The system is actually scheduled for boot-up later this month, but will run at only 20 percent capacity until the facility can tap into the power needed to drive the machine at full tilt.
Besides the collaboration with the Brits mentioned above, the Brazilians have also teamed up with climate researchers in South Africa and India, and plan to host a new Earth system modeling workshop for scientists from all three countries next summer.
Full story at Nature