December 05, 2012
DALLAS, Dec. 5 – Texas Instruments Incorporated (TI) and Nimbix, a leader in heterogeneous high performance cloud computing, announced their collaboration on the Nimbix Accelerated Compute Cloud (NACC). By selecting TI's high-performance KeyStone multicore DSPs, Nimbix is significantly reducing power consumption and accelerating workflows for video processing and imaging applications, making high performance computing in the cloud easier than ever before.
"When choosing technology for our accelerated compute cloud, we looked no further than TI's KeyStone multicore DSPs," said Steve Hebert, CEO of Nimbix. "With the Nimbix Accelerated Compute Cloud, customers can leverage hardware acceleration technology, including TI's multicore DSPs to increase the speed and ease of use for video and imaging applications, as well as reducing overall development costs. Together with TI, we are lowering the adoption barrier, while helping users achieve better cloud economics."
The cloud has become an increasingly important computing paradigm for high performance and data-intensive applications. As analytics challenges and cloud data processing volumes continue to explode, having an infrastructure tuned to handle these diverse workloads efficiently becomes paramount. The TI and Nimbix collaboration demonstrates the power of combining low-power acceleration technology with an easy-to-use high performance cloud service. Nimbix leveraged the fixed- and floating-point capabilities of TI's KeyStone-based TMS320C66x DSPs to provide real-time, high-definition video processing in a cloud compute environment. Multicore DSPs deliver better performance per watt than other processors, resulting in a lower total cost of ownership (TCO) for a variety of cloud-based workloads, including video, imaging and multimedia.
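To make the fixed- versus floating-point distinction concrete, the following is a minimal Python sketch of a Q15 fixed-point multiply, the style of integer arithmetic DSP cores execute natively alongside floating-point math. It is an illustration only; the Q15 helpers here are not drawn from TI's libraries.

```python
# A minimal, illustrative sketch (not TI library code): emulating a Q15
# fixed-point multiply with plain integers, next to the same math in
# floating point.

Q = 15  # Q15 format: 1 sign bit, 15 fractional bits

def to_q15(x: float) -> int:
    """Quantize a float in [-1, 1) to a saturated 16-bit Q15 integer."""
    return max(-(1 << Q), min((1 << Q) - 1, round(x * (1 << Q))))

def q15_mul(a: int, b: int) -> int:
    """Fixed-point multiply: take the full-precision product, then rescale."""
    return (a * b) >> Q

a, b = 0.5, 0.25
fixed = q15_mul(to_q15(a), to_q15(b)) / (1 << Q)
print(f"floating point: {a * b}, Q15 fixed point: {fixed}")
```

Fixed-point paths like this trade dynamic range for cheaper, lower-power integer hardware, which is the source of the performance-per-watt advantage the announcement cites.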
"We are very excited to be working with Nimbix on their Accelerated Compute Cloud," said Ramesh Kumar, business manager, multicore processors, Texas Instruments. "In just two weeks Nimbix got our multicore DSP technology integrated with their cloud environment, and we look forward to continuing our work with them in the high performance computing market as they implement accelerated cloud platform for additional DSP workloads."
Nimbix offers its cloud computing service through the Nimbix Accelerated Compute Cloud Portal at https://nacc.nimbix.net and through its innovative Task Processing API, which enables enterprises to launch many concurrent cloud processing tasks with a simple web command.
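As a rough illustration of launching concurrent tasks through a web API of this kind, here is a hedged Python sketch. The endpoint path, payload fields, and credential below are hypothetical assumptions, not Nimbix's documented Task Processing API; only the portal URL comes from the announcement.

```python
# Hypothetical sketch of submitting concurrent tasks to a REST-style task
# API. Endpoint, fields, and API key are illustrative placeholders.
import concurrent.futures

import requests

NACC_BASE = "https://nacc.nimbix.net"  # portal URL from the announcement
API_KEY = "your-api-key"               # hypothetical credential

def submit_task(input_url: str) -> str:
    """Submit one processing task and return its (hypothetical) task ID."""
    resp = requests.post(
        f"{NACC_BASE}/api/tasks",  # hypothetical endpoint
        json={
            "application": "video-transcode",  # hypothetical field names
            "input": input_url,
        },
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["task_id"]

# Launch several tasks at once; each POST returns a task identifier,
# so many jobs can be in flight concurrently.
inputs = [f"s3://example-bucket/clip-{i}.mp4" for i in range(8)]
with concurrent.futures.ThreadPoolExecutor() as pool:
    task_ids = list(pool.map(submit_task, inputs))
print(task_ids)
```

Because each submission returns immediately with an identifier rather than blocking on the job, a client can fan out many tasks in parallel, matching the concurrency the announcement describes.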
About TI's KeyStone multicore architecture
TI's KeyStone multicore architecture is the platform for true multicore innovation, offering developers a robust portfolio of high performance, low-power multicore devices. Unleashing breakthrough performance, the KeyStone architecture is the foundation upon which TI's new TMS320C66x DSP generation was developed. KeyStone differs from other multicore architectures in its capacity to provide full processing capability to every core in a multicore device. KeyStone-based devices are optimized for high performance markets including wireless base stations, mission-critical systems, test and automation, medical imaging and high performance computing.
About Nimbix
Nimbix is a provider of cloud-based high performance computing (HPC) infrastructure and applications. Nimbix offers HPC applications as a service through the Nimbix Accelerated Compute Cloud, dramatically speeding up data processing for life sciences, oil and gas, and rendering applications. Nimbix operates unique high performance hybrid systems and accelerated servers in its Dallas, Texas datacenter.
About Texas Instruments
Texas Instruments semiconductor innovations help 90,000 customers unlock the possibilities of the world as it could be – smarter, safer, greener, healthier and more fun. Our commitment to building a better future is ingrained in everything we do – from the responsible manufacturing of our semiconductors, to caring for our employees, to giving back to our communities. This is just the beginning of our story.
Source: Texas Instruments