January 12, 2012
Over the past several years, volunteer computing grids that use donated PC cycles to tackle grand challenge-type science problems have been sprouting up like weeds after a rain. That's mostly thanks to BOINC (Berkeley Open Infrastructure for Network Computing), an open source grid framework that has made volunteer computing easy and cheap.
Wikipedia lists 20 current projects and 40 others in development. Most, like SETI@home (searching for extraterrestrial life), the Clean Energy Project (finding the best materials for solar cells and energy storage), and Folding@home (seeking cures for cancer, Alzheimer's, and other diseases), are devoted to what you would call good causes.
The latest non-profit grid, Charity Engine, embraces the ethically minded culture of volunteer computing and takes it to a new level. The UK-based grid is the brainchild of Mark McAndrew, who founded the organization in 2011. McAndrew, a former software developer, came up with the idea of building a supercomputing grid service with a social conscience.
In a nutshell, the grid's computational power is sold to clients with a need for extra compute cycles, in much the same way a user would rent cloud computing cycles. Half of the proceeds are donated to the company's select charities. The current list includes Amnesty International, Médecins Sans Frontières (MSF, also known as Doctors Without Borders), WaterAid, Oxfam, Sightsavers, War on Want, CARE International, ActionAid, and Practical Action.
Charity Engine compute cycles come from volunteers who download software onto their PCs that makes the machines available to the grid. But the company does not rely entirely upon the kindness of strangers. The remaining half of the money collected from clients is distributed as prizes to lucky PC donors, chosen at random.
So far, two beta users have been awarded prizes -- one a $1,000 prize, the other an iPad 2 and an iPhone 4S (that winner promptly donated the cash equivalent back to Amnesty and CARE International). The next prize drawing, which takes place this month, is reportedly for $10,000.
As with a traditional volunteer grid, the PC's compute cycles are essentially harvested between keystrokes so the user is not normally aware that their system is being tapped. According to the Charity Engine site, being a donor typically "adds less than 10 cents per day to a PC's energy costs and can generate $10-$20 for charity – and the prize draws – for each $1 of electricity consumed."
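The economics quoted above can be spelled out with some simple arithmetic. This is an illustrative sketch only; all of the input figures are the article's claims, not measured values.

```python
# Illustrative arithmetic based on the figures quoted by Charity Engine.
# All constants below come from the article's claims, not from measurement.

ENERGY_COST_PER_DAY = 0.10     # "less than 10 cents per day" per donor PC
RETURN_PER_DOLLAR_LOW = 10.0   # "$10-$20 ... for each $1 of electricity"
RETURN_PER_DOLLAR_HIGH = 20.0

def daily_value(energy_cost, return_per_dollar):
    """Money generated per day (for charity plus the prize pool combined)."""
    return energy_cost * return_per_dollar

low = daily_value(ENERGY_COST_PER_DAY, RETURN_PER_DOLLAR_LOW)
high = daily_value(ENERGY_COST_PER_DAY, RETURN_PER_DOLLAR_HIGH)
print(f"One donor PC generates roughly ${low:.2f}-${high:.2f} per day")
```

At the quoted rates, a single donated PC would generate on the order of $1-$2 per day, of which half would flow to the partner charities.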
As far as the application side goes, Charity Engine is not restricted to specific causes like cancer cures or extraterrestrial sleuthing. The grid is open to any application, scientific or otherwise, that is amenable to being distributed across a set of loosely coupled computers.
Well, perhaps not any application. McAndrew is committed to making the grid available only to what he refers to as "well-chosen, ethical projects." The Charity Engine website describes the policy thusly:
We are signed up to the same ethical policies as our charity partners, so the grid is off-limits to any organisation they don't want using it. Only the good guys can use Charity Engine.
Pricing for the grid has not been made public, nor has the current set of clients.
[Update: According to Mark McAndrew, the typical cost to users runs $0.01 per CPU-hour, which is around a tenth the cost of Amazon EC2. In addition, they are currently porting Mathematica and a distributed web crawler client to support their first customer. With the help of BOINC project lead David Anderson, they are also completing a distributed storage feature aimed at big science datasets.]
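To put that price in perspective, here is a hypothetical cost comparison for a 100,000 CPU-hour batch job, using the $0.01 per CPU-hour figure McAndrew quotes. The EC2 rate is an assumption derived from the article's "around a tenth" claim (roughly $0.10 per CPU-hour), not an official Amazon price; the 50/50 charity split comes from the revenue model described earlier.

```python
# Hypothetical comparison using the article's figures. The EC2 rate is an
# assumed ~10x multiple per the "around a tenth" claim, not an official quote.

CHARITY_ENGINE_RATE = 0.01  # $ per CPU-hour (quoted by McAndrew)
EC2_RATE_ASSUMED = 0.10     # $ per CPU-hour (assumption: ~10x Charity Engine)
CHARITY_SHARE = 0.5         # half of proceeds go to the partner charities

def job_cost(cpu_hours, rate):
    """Total price of a batch job at a flat per-CPU-hour rate."""
    return cpu_hours * rate

cpu_hours = 100_000
ce_cost = job_cost(cpu_hours, CHARITY_ENGINE_RATE)
ec2_cost = job_cost(cpu_hours, EC2_RATE_ASSUMED)
to_charity = ce_cost * CHARITY_SHARE

print(f"Charity Engine: ${ce_cost:,.0f} (${to_charity:,.0f} to charity)")
print(f"EC2 (assumed):  ${ec2_cost:,.0f}")
```

Under these assumptions, the same job that would cost $10,000 on EC2 runs for $1,000 on the grid, with $500 of that passed through to the charities.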