June 05, 2012
GUILDFORD, UK, June 5 -- Adaptive Computing, manager of the world’s largest supercomputing systems and an expert in HPC workload management and cloud management solutions, and NICE Software, a visualization software and services company, today announced a joint solution that delivers a Technical Visualization Private Cloud built on Adaptive’s Moab HPC Suite and NICE’s Desktop Cloud Visualization (NICE DCV) and EnginFrame products. By centralizing physical or virtual visualization workstations in the data center and transferring pixels instead of data, the solution makes it possible to view and manipulate complex 3D simulations remotely on PCs and mobile devices. This approach reduces capital and management costs and network congestion while improving data processing, security, workforce collaboration, and productivity. The complete solution will be demonstrated at ISC ’12 in Hamburg, Germany, June 18-20, 2012, in booth #147.
Moab HPC Suite works in concert with NICE DCV and EnginFrame to centralize access to visualization servers running OpenGL 3D applications, transferring pixels instead of data and intelligently placing and managing graphics workloads alongside regular HPC cluster workloads. Moab’s policy-driven placement and management capabilities help maximize resource utilization and ensure the success of 3D visualization applications. Moab also manages multiple user sessions on a single machine and enables dynamic re-provisioning of the OS and applications, improving availability and the ability to respond to workload demands.
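The announcement does not detail Moab’s placement policy, so the following is only a minimal sketch of the general idea: packing interactive visualization sessions onto GPU-equipped nodes so capacity stays consolidated. The node names, session limits, and best-fit rule are illustrative assumptions, not Moab configuration or API.

    # Illustrative sketch only (not Moab's actual API or configuration):
    # a greedy best-fit policy that packs 3D visualization sessions onto
    # GPU-equipped nodes, keeping sessions consolidated on as few nodes
    # as possible. All names and limits are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        has_gpu: bool
        max_sessions: int
        sessions: list = field(default_factory=list)

        def free_slots(self) -> int:
            return self.max_sessions - len(self.sessions)

    def place_viz_session(session_id: str, nodes: list) -> Node:
        """Place a visualization session on the fullest GPU node with room."""
        candidates = [n for n in nodes if n.has_gpu and n.free_slots() > 0]
        if not candidates:
            # A real scheduler might re-provision an idle node here.
            raise RuntimeError("no GPU capacity available")
        # Best-fit packing: prefer the node with the fewest free slots left.
        target = min(candidates, key=lambda n: n.free_slots())
        target.sessions.append(session_id)
        return target

    cluster = [Node("viz01", True, 4), Node("viz02", True, 4), Node("hpc01", False, 16)]
    print(place_viz_session("user-a-dcv-session", cluster).name)  # -> viz01

Best-fit packing mirrors the “placing and packing” behavior described above: consolidating sessions leaves whole nodes idle and available for re-provisioning or for regular HPC jobs.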
“Cloud computing is transforming business, allowing greater productivity nearly anywhere for an increasingly mobile workforce. But as the technical computing workforce becomes more diverse and distributed, traditional workstations quickly show bottlenecks and constraints,” says Andrea Rodolico, CTO of NICE. “3D applications are in demand inside and outside the office for employees and partners who need to process and visualize large datasets, often in geographically distributed collaborative settings. In situations like that, computing can be a real concern. By tightly integrating DCV and EnginFrame with Moab HPC Suite, we use policies to place and pack visualization workloads on the optimal resources, maximizing resource utilization and performance and ensuring 3D application success.”
A Technical Visualization Private Cloud moves the “seat of power” in 3D simulation from Linux and Windows workstations distributed throughout the enterprise to a highly efficient, secure, centralized location: a private cloud in the corporate data center. It is a natural outgrowth of Software as a Service (SaaS) and cloud adoption: what SaaS did for 2D applications, sending business productivity applications down the wire to thin clients, this solution now does for complex 3D technical visualization applications. “By deploying visualization applications in the data center instead of on distributed workstations, and transferring pixels back to users’ systems instead of large data models, it is now possible to view and manipulate complex 3D simulations remotely on PCs and mobile devices with the NICE solution. Users can collaborate easily—anytime, anywhere—on the same session,” said Michael Jackson, President and Co-Founder of Adaptive Computing. “And compute resources, including GPUs, can be used more efficiently than ever before. It’s all possible thanks to the collaborative efforts with NICE.”
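A rough back-of-the-envelope comparison shows why pixels can be cheaper to move than models. Every figure below (model size, link speed, resolution, frame rate, compression ratio) is an illustrative assumption, not a number from either vendor:

    # Back-of-the-envelope: time to copy a 3D model to a workstation vs.
    # the sustained bandwidth needed to stream its rendered pixels instead.
    # All numbers are illustrative assumptions.

    DATASET_GB = 50          # assumed model size
    LINK_MBPS = 100          # assumed WAN link speed
    FRAME_W, FRAME_H = 1920, 1080
    BYTES_PER_PIXEL = 3      # 24-bit RGB before compression
    FPS = 30                 # interactive frame rate
    COMPRESSION = 50         # assumed video-codec compression ratio

    transfer_min = DATASET_GB * 8e9 / (LINK_MBPS * 1e6) / 60
    stream_mbps = FRAME_W * FRAME_H * BYTES_PER_PIXEL * 8 * FPS / COMPRESSION / 1e6

    print(f"copy model first:  ~{transfer_min:.0f} min before work can start")
    print(f"stream pixels:     ~{stream_mbps:.0f} Mbit/s, interactive immediately")

Under these assumptions a user waits over an hour for the model to arrive, but an interactive pixel stream starts immediately and uses a fraction of the link, and the raw data never leaves the data center.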
About NICE
NICE delivers comprehensive grid and technical computing cloud products and solutions. The NICE product portfolio boosts the productivity of private and public clouds by increasing the usability and user-friendliness of HPC and 3D applications without sacrificing flexibility and control. NICE’s global customers include leading companies in automotive, aerospace, industrial manufacturing, oil and gas, and life sciences, as well as universities and scientific research institutions.
For more information, call +39 0141 90.15.16 or visit http://www.nice-software.com.
About Adaptive Computing
Adaptive Computing manages the world’s largest supercomputing environments with its self-optimizing dynamic cloud management solutions and HPC workload management systems driven by Moab®, a patented multi-dimensional intelligence engine. Moab delivers policy-based governance, allowing customers to consolidate and virtualize resources, allocate and manage applications, optimize service levels, and reduce operational costs. Adaptive Computing offers a portfolio of Moab cloud management and Moab HPC workload management products and services that accelerate, automate, and self-optimize IT workloads, resources, and services in large, complex, heterogeneous computing environments such as HPC, data centers, and clouds. Its products act as a brain on top of existing and future infrastructure and middleware, enabling it to self-optimize and deliver higher ROI to the business.
For more information, call (801) 717-3700 or visit http://www.adaptivecomputing.com.
Source: NICE, Adaptive Computing