November 05, 2008
CHAMPAIGN, Ill., Nov. 5 -- Wolfram Research today announced an initiative to develop a cloud computing service for users of its flagship technical computing software, Mathematica. The project is a collaborative effort by Wolfram Research; Nimbis Services, Inc., a clearinghouse for access to third-party compute resources and commercial software; and R Systems NA, Inc., a provider of computing resources to the commercial and academic research community.
According to Deborah Wince-Smith, president of the Council on Competitiveness, "High-performance computing (HPC) systems remain a largely underutilized competitiveness asset in the United States for the majority of companies. Opening access to HPC represents a huge productivity opportunity for the nation and a competitiveness transformation challenge." The collaboration of Wolfram Research, Nimbis Services, and R Systems will ease the transition from desktop to HPC systems for Mathematica users by providing efficiently structured access to larger, more powerful computing systems.
Nimbis Services will enable the Mathematica cloud service to access many diverse HPC systems, including TOP500 supercomputers and the Amazon Elastic Compute Cloud. Nimbis Services, Inc. President and CEO Robert Graybill echoes the council's views on HPC systems and explains that the founding principle of Nimbis Services is ease of use: giving experimental and periodic business users a choice of large-scale computing services, all in one "instant" computing storefront.
"Our partnership with Wolfram Research immensely benefits software users attempting to increase efficiency and capacity," says R Systems founder Brian Kucic. "As Mathematica users seek to extend resource capacity, the exceptionally large memory of our multicore HPC resources and the double-data-rate and quad-data-rate InfiniBand network will increase performance." HPC resources such as R Systems' R Smarr cluster, recently named the 44th-fastest system on the TOP500 list of supercomputers, are helping bring HPC technology to the forefront.
The Mathematica cloud computing service will provide flexible and scalable access to HPC from within Mathematica, simplifying the transition from desktop technical computing to HPC. "The two largest challenges in using HPC are programming the HPC application itself and ensuring that you can get enough computing power to do the job," says Tom Wickham-Jones, Wolfram Research executive director of kernel technology. "Mathematica answers the programming challenge by providing an integrated technical computing platform, enabling computation, visualization, and data access. Cloud computing offers consistent access to large-scale computing capabilities. We are excited to be working with Nimbis and R Systems to offer HPC access to our customers."
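The programming model Wickham-Jones describes builds on Mathematica's built-in parallel primitives, which let the same code run on local cores or on remotely launched kernels. The sketch below uses standard Wolfram Language functions (LaunchKernels, ParallelTable); the kernel count is illustrative, and the configuration needed to reach a specific cloud or cluster back end is not detailed in this announcement.

```
(* Minimal sketch: the same parallel code scales from a desktop to
   remote kernels; 4 is an arbitrary illustrative kernel count. *)
LaunchKernels[4];

(* Distribute an embarrassingly parallel computation across kernels *)
primes = ParallelTable[Prime[10^6 + i], {i, 1, 100}];

Length[primes]  (* -> 100 *)
```

In this model, moving a job from the desktop to an HPC or cloud resource is chiefly a matter of where the worker kernels are launched, not of rewriting the computation itself.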
About Wolfram Research
Wolfram Research is the world's leading developer of computational software for science and technology, offering organization-wide computing solutions. Led by Mathematica, its flagship product, the company's software is relied on today by several million enthusiastic users around the world and has been the recipient of many industry awards. Wolfram Research was founded in 1987 by Stephen Wolfram, who continues to lead the company today. The company is headquartered in the United States, with offices in Europe and Japan. Go to www.wolfram.com for more information about Wolfram Research and its products.
About Nimbis Services, Inc.
Nimbis Services, Inc., is a start-up company in the digital analysis computing (DAC) industry that transparently connects its clients, through an industry-wide clearinghouse, with computing services, software, and expertise. Its primary goal is to provide low-risk, low-effort, "pay-as-you-go" access to DAC for small to midsize companies that are currently unable to move beyond technical computing on the desktop. Nimbis partners with the world's leading computing services companies to offer experimental and periodic users a growing menu of pre-qualified, pre-negotiated services from HPC cycle providers, independent software vendors, domain experts, and regional solution providers. For additional information, visit www.nimbisservices.com.
About R Systems
R Systems NA, Inc. is a privately held corporation providing high-end computing resources for research. R Systems offers a rapid-response, queue-sensitive production environment with utility or dedicated access, depending on the project, and custom service-level agreements are available if needed. R Systems provides services aimed at benefiting the commercial research community and improving quality of life throughout the planet.
Source: Wolfram Research, Inc.