January 11, 2013
Encanto supercomputer, still intact – Source: NMCAC
It seems like just yesterday that Encanto – that's Spanish for "enchantment" – was launched as the pride (and potential salvation) of New Mexico, primed to spur economic development by attracting high-tech companies to the state. But money troubles have plagued the system since its launch in 2008.
Last summer the State of New Mexico repossessed Encanto from the non-profit that managed it, the New Mexico Computing Applications Center. The system had racked up substantial debt, and there was little funding for Encanto's maintenance and operation.
Now, according to a story in the Albuquerque Journal, this former number-three superstar is headed to the chopping block. The state is planning to sell off parts of the system to local research universities – the University of New Mexico, New Mexico State University, and the New Mexico Institute of Mining and Technology – to recoup some of its investment and pay off outstanding debts.
"Barring someone offering to buy the whole machine, we can still get piecemeal use from it," state Information Technology Secretary Darryl Ackley told the paper. "The universities have proposed to cannibalize it to put some of the assets back into service."
The project was troubled from the start as the New Mexico government made the unusual decision that the computer should pay for itself by selling cycles to interested parties. Proponents of limited government lambasted the project as a waste of taxpayer money, while researchers expressed doubt over the sustainability of the supercomputer-as-revenue-generator business model.
In short, the supercomputer was never adequately funded, although Encanto and other high-end systems did their part to attract federal research dollars. The Computing Applications Center estimates that the state's computational resources drew $60 million in federal funding to New Mexico universities.
Salvaging Encanto piecemeal may be the best outcome at this point. While the 172 teraflop (peak) SGI Altix machine was the third-fastest in the world in late 2007, by November 2012 the supercomputer had slid to number 185 on the TOP500 list. And despite Encanto's $11 million original price tag – with another $9 million going to operational expenses – it's now worth only a few hundred thousand dollars.
After trying unsuccessfully to find a buyer for the machine, Ackley invited the University of New Mexico to house and manage the system and make it available to researchers at UNM, NMSU and New Mexico Tech. UNM responded that this would not be economically feasible: the university would need to generate $1 million per year for five years to cover operating costs, but the supercomputer's expected useful lifetime is only another four years.
In the words of UNM's interim vice president for research and development, John McGraw: "To operate the computer as a whole entity is just not possible. But putting a number of racks to use at each research university is one way the state could at least recover some value from its investment."
Divvying up the supercomputer's 28 racks would extend its effective lifetime. Plus, the universities in question already operate small Encanto replica machines, called "exemplars," each consisting of a single rack of processors. If the fire sale goes through, UNM will get 10 additional racks, New Mexico State University will take four, and the New Mexico Institute of Mining and Technology will claim two, significantly boosting the schools' computational power.
But that still leaves 12 of the 28 racks unaccounted for. The state's IT department is reviewing what to do with the unsold racks and leftover components, and legal questions about distributing a state asset among the universities still need to be ironed out. Meanwhile, Encanto awaits its fate at Intel's Rio Rancho facility.
Takeaway: In this age of budget cuts, austerity measures and self-inflicted fiscal cliffs, Encanto's decline serves as a cautionary tale, one that casts doubt on the strategy of expecting research tools to double as profit centers.