January 11, 2013
Encanto supercomputer, still intact – Source: NMCAC
It seems like just yesterday that Encanto – that's Spanish for "enchantment" – was launched as the pride (and potential salvation) of New Mexico, primed to spur economic development by attracting high-tech companies to the state. But money troubles have plagued the system since its launch in 2008.
Last summer the State of New Mexico repossessed Encanto from the non-profit that managed it, the New Mexico Computing Applications Center. The system had racked up substantial debt, and there was little funding for Encanto's maintenance and operation.
Now, according to a story in the Albuquerque Journal, this former number-three superstar is headed to the chopping block. The state is planning to sell off parts of the system to local research universities – the University of New Mexico, New Mexico State University, and the New Mexico Institute of Mining and Technology – to recoup some of its investment and pay off outstanding debts.
"Barring someone offering to buy the whole machine, we can still get piecemeal use from it," state Information Technology Secretary Darryl Ackley told the paper. "The universities have proposed to cannibalize it to put some of the assets back into service."
The project was troubled from the start as the New Mexico government made the unusual decision that the computer should pay for itself by selling cycles to interested parties. Proponents of limited government lambasted the project as a waste of taxpayer money, while researchers expressed doubt over the sustainability of the supercomputer-as-revenue-generator business model.
In short, the supercomputer was never adequately funded, although Encanto and other high-end systems did their part to attract federal research dollars. The Computing Applications Center estimates that the state's computational resources drew $60 million in federal funding to New Mexico universities.
Salvaging Encanto piecemeal may be the best outcome at this point. While the 172 teraflop (peak) SGI Altix machine was the third-fastest in the world in late 2007, as of November 2012 the supercomputer had slid to number 185 on the TOP500 list. And despite Encanto's $11 million original price tag – with another $9 million going to operational expenses – it's now worth only a few hundred thousand dollars.
After trying unsuccessfully to find a buyer for the machine, Ackley invited the University of New Mexico to house and manage the system and make it available to researchers at UNM, NMSU and New Mexico Tech. UNM responded that this would not be economically feasible: the university would need to generate $1 million per year for five years to cover operating costs, but the supercomputer's expected useful lifetime is only another four years.
In the words of UNM's interim vice president for research and development, John McGraw: "To operate the computer as a whole entity is just not possible. But putting a number of racks to use at each research university is one way the state could at least recover some value from its investment."
By divvying up the supercomputer's 28 racks, the effective lifetime is extended. Plus, the universities in question already operate small Encanto replica machines, called "exemplars," each with one rack of processors. If the fire sale goes through, UNM will get 10 additional racks, New Mexico State University will take four, and the New Mexico Institute of Mining and Technology will claim two, significantly boosting the schools' computational power.
But that still leaves 12 of the 28 racks unaccounted for. The state's IT department is reviewing what to do with the unsold racks and leftover components. There are also legal aspects of distributing a state asset among the universities that need to be ironed out. Encanto awaits its fate while being housed at Intel's Rio Rancho facility.
Takeaway: In this age of budget cuts, austerity measures and self-inflicted fiscal cliffs, Encanto's decline serves as a cautionary tale, one that casts doubt on the strategy of expecting research tools to double as profit centers.