June 16, 2008
R$ 3.1 million project will span seven university campuses with 368 servers, capable of 33.3 trillion calculations per second
June 14 -- UNESP (São Paulo State University) has this semester begun to set up the largest computational cluster in Latin America, on seven different sites in the State of São Paulo. GridUNESP (Computational Capacity Integration at UNESP), powered by Sun Microsystems’ technology, will allow research groups at the university access to the highest levels of data processing and storage capacity for particle physics, genetics, meteorology, medicine, and other areas of scientific investigation.
The central system, which will be installed at the new UNESP campus in Barra Funda, São Paulo, will have 2,048 processing cores and a performance capacity of about 23.2 teraflops (trillions of calculations per second) for the whole cluster (a system with various linked processing nodes, which operate as if they were one single computer). The complex formed by the central cluster and another seven will reach 33.3 teraflops.
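A quick back-of-the-envelope check ties the article's figures together (the per-core rate below is derived from the reported numbers, not stated in the article, and actual rates depend on the processors used):

```python
# Sanity-check of the reported GridUNESP performance figures.
central_cores = 2048       # cores in the central Barra Funda cluster
central_tflops = 23.2      # central cluster peak, in teraflops
total_tflops = 33.3        # central cluster plus seven secondary clusters

# Implied per-core rate of the central cluster, in gigaflops.
gflops_per_core = central_tflops * 1000 / central_cores

# Combined peak of the seven secondary clusters, in teraflops.
secondary_tflops = total_tflops - central_tflops

print(f"~{gflops_per_core:.1f} GFLOPS per core")          # ~11.3 GFLOPS
print(f"secondary clusters: ~{secondary_tflops:.1f} TFLOPS")  # ~10.1 TFLOPS
```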
The cost of the project, at about R$ 3.1 million, has been supported by the Ministry of Science and Technology, through the Study and Project Finance Office (FINEP). Computational infrastructure, including Intel processors and comprising a central cluster and another seven secondary clusters, will be set up at the campuses at Araraquara, Bauru, Botucatu, Ilha Solteira, Rio Claro, São José do Rio Preto and São Paulo.
GridUNESP will be connected at high speed to the United States' Internet2 through the MetroSampa network -- which connects educational, cultural and research institutions in the metropolitan region of São Paulo -- and the ANSP/RNP/Florida International University connection between São Paulo and Miami. The connection between the clusters in São Paulo will be made through KyaTera -- the Optic Platform for Research into the Development of the Advanced Internet at FAPESP (the São Paulo State Research Assistance Foundation).
The selection of Sun Microsystems for GridUNESP was carried out in strict observance of the requirements of the Bidding and Contracts Law and was preceded by a wide-ranging consultation of companies specializing in high-capacity computational processing. The drafting of the specifications and the analysis of the technical and commercial proposals were overseen by a multi-institutional committee of specialists in the area. "Sun was selected for having presented the best technical features and the best cost amongst all proposals," said the general coordinator of GridUNESP, Sérgio Ferraz Novaes, a professor at the Theoretical Physics Institute (IFT), at the São Paulo campus.
GridUNESP has established a partnership with the Open Science Grid (OSG), in the United States, which brings together grid infrastructures with computational resources from 50 sites in the United States, Asia and Latin America. It thereby joins a group that includes Enabling Grids for E-sciencE (EGEE), in Europe; TeraGrid, in the United States; NorduGrid, in Scandinavia; TWGrid, in Taiwan; the Australian Partnership for Advanced Computing (APAC), in Australia; and NYSGrid, in New York, amongst others. GridUNESP will use OSG middleware and equitably share its computational resources.
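In practice, grid middleware of the kind OSG distributes lets a researcher describe a batch job once and have the grid schedule it on whichever participating cluster has free capacity. As an illustrative sketch only (the executable and file names are hypothetical, and the exact middleware stack GridUNESP deploys is not detailed in the article), a job for the HTCondor batch system commonly used in OSG might be described like this:

```
# job.sub -- hypothetical HTCondor submit description file
universe                = vanilla
executable              = analyze.sh
arguments               = input.dat
output                  = job.out
error                   = job.err
log                     = job.log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue
```

The researcher would then submit it with `condor_submit job.sub`, leaving the middleware to match the job to an available node anywhere on the grid.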
GridUNESP will have centralized management, operation, and maintenance, and will be accessible to any researcher at the University. Novaes says the project will serve research areas that require the processing, analysis, and storage of large amounts of data. Examples include genetic sequencing, weather forecasting, molecular and cellular modeling, medical image reconstruction, the development of new materials, quantum chemistry, large-scale numerical simulations and high-energy physics, amongst others.
"With its multi-campus structure, UNESP has the profile of an institution that can benefit a lot from this approach. The interconnecting of the main data processing and storage centers at the University shall allow the equitable distribution of these resources and access to the whole computational infrastructure that, in another way, would be either unfeasible or extremely expensive," explains Novaes.
"The development of our research will be aided in terms of speed calculation and the memory availability and will facilitate interaction between the different theoretical research groups," says Elson Longo, of the Chemistry Institute at the Araraquara campus, where he is coordinator of the Multidisciplinary Center for the Development of Ceramic Materials.
"The development of GridUNESP will give the University a capacity to join complex international projects in the area of grid computation," says Gastão Krein, director of IFT. Physicist Ney Lemke, from the Institute of Biosciences at the Botucatu campus, says his studies in the areas of medical biology and physics will make great progress. "With the computational facilities of GridUNESP, calculation time for research will be reduced, allowing us to develop more detailed studies."
Adriano Mauro Cansian, coordinator of the Security Research Laboratory at the Institute of Biosciences, Language, and Science (IBILCE), at the São José do Rio Preto campus, says his team's project on detecting attacks against large-scale computer network infrastructures will benefit from the higher data processing and storage capacity. "We believe the Grid will allow faster processing when analyzing and detecting attacks in real time."
"GridUNESP shall allow the University to overcome the challenge of sharing resources from the high performance international computational processing circuit, and shall be an important tool in our research centers continuing to contribute significantly to maintaining the rate of growth and improvement in scientific studies in Brazil," says the dean of UNESP, Marcos Macari.
Carlos Thomaz, a specialist in high-performance computing at Sun Microsystems Brazil, says GridUNESP is a milestone for the Brazilian academic community. "The project comprises a set of interconnected clusters, forming a computational grid in the European and North American mold. Challenges like this are met not just with systems, but with an infrastructure defined specifically to meet the needs of UNESP, including software, hardware and service solutions."
Joaquim Merino, an executive at Sun Microsystems Brazil, says, "The GridUNESP project is a pioneer in the implementation of a computational grid connected to the world's large research centers, such as OSG. We expect this project not only to be a success for UNESP, but also an example for the whole Brazilian scientific community."
"The grid formed by servers equipped with processors from the Quad Core Intel Xeon 5400 family offers leadership in performance with the lowest energy consumption. The computational grid will offer the most advanced technology developed in 45 nanometers, which has much greater processing power and, consequently, a faster response time, contributing to the advancement of the most important researches carried out in the country," adds Marcel Saraiva, server product manager at Intel for Latin America.
For further information from UNESP, visit www.unesp.br.