October 29, 2012
Chipmaker Adapteva has hit its funding goal for the company's Parallella project. As HPCwire reported last month, the company decided to use the crowdfunding platform Kickstarter to speed development of its manycore Epiphany floating point accelerator. The latest design promises a stunning 50 gigaflops per watt, but getting the silicon into the hands of developers has been a challenge.
The company set out to raise at least $750,000 in pledges to fund a project that would deliver an Epiphany coprocessor board to all interested parties – mainly programmers looking to kick the tires on a unique and highly parallel computing architecture. The pledge window was just 30 days, beginning on September 27. When the Kickstarter clock ran out, Parallella had signed up $898,921 in pledges from 4,965 backers.
The pitch was that for $99, backers would receive a PCIe "supercomputing" board equipped with a dual-core ARM A9 chip driving Adapteva's 16-core (26-gigaflop) Epiphany accelerator. Of course, 26 gigaflops is hardly a supercomputer by today's standards, but the ability to develop a 16-way parallel application on a hundred-dollar platform is quite valuable in and of itself.
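The appeal of that 16-way platform is easy to illustrate. The sketch below is a generic host-side data-parallel decomposition in plain Python (not the Epiphany SDK); the worker count of 16 simply mirrors the chip's core count:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker sums its own slice of the data independently.
    return sum(chunk)

def parallel_sum(data, workers=16):
    # Split the input into one contiguous chunk per worker,
    # mirroring a 16-core data-parallel decomposition.
    n = len(data)
    size = (n + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, n, size)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(100_000))))  # 4999950000
```

The same split-compute-reduce pattern is what a developer would express on the Epiphany itself, with one chunk of work mapped to each of the 16 cores.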
An investment of $199 would have bought the more powerful 64-core chip (100 gigaflops) had $3 million in total funding been pledged. Alas, that "stretch goal" did not come to pass.
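The quoted figures hold up to back-of-the-envelope arithmetic, assuming (as Adapteva's materials of the era suggested) roughly an 800 MHz clock and one fused multiply-add – two flops – per core per cycle; the clock rate and flops-per-cycle values here are assumptions, not numbers from this article:

```python
def peak_gflops(cores, ghz=0.8, flops_per_cycle=2):
    # flops_per_cycle=2 assumes one fused multiply-add per core per cycle.
    return cores * ghz * flops_per_cycle

print(peak_gflops(16))       # 25.6  -> the quoted "26 gigaflops"
print(peak_gflops(64))       # 102.4 -> the quoted "100 gigaflops"
# At the promised 50 gigaflops per watt, the 64-core part
# would draw only about 2 W:
print(peak_gflops(64) / 50)  # ~2.05 W
```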
When HPCwire spoke with Adapteva CEO and founder Andreas Olofsson last month, he said the money would be used to produce the Epiphany processors in greater volumes, thereby reducing the cost to a few dollars per chip. And now, since thousands of developers will soon be getting their own Adapteva board, the Epiphany architecture will get the additional benefit of more applications and software tools ported to the platform.
For a deeper dive into the Epiphany architecture and Parallella, check out the project's Kickstarter page.