May 09, 2012
CINCINNATI, Ohio, May 9 -- The Fred Hutchinson Cancer Research Center can now provide employees with remote access to high-performance computing, data analysis, and administrative resources thanks to NoMachine software. NoMachine allows Center researchers to maintain rigorous research schedules, no matter where they are located.
The Center consists of more than 3,000 staff, including world-renowned scientists and Nobel Laureates, who are dedicated to understanding, treating, and preventing cancer, HIV/AIDS, and other life-threatening diseases. The Center is headquartered in Seattle, Washington, with researchers also spread across remote sites in North America and Europe. With NoMachine, the Center can connect 250-300 researchers to its high-performance computing (HPC) cluster, with up to 25 users connecting at the same time.
NoMachine also gives Center researchers access to server-based applications that are critical to their day-to-day work, such as Firefox, the R language, and MATLAB. To simplify support, the Center's scientific computing department reduced the organization to two operating systems. New employees are now given a Windows computer, while those who prefer Linux or similar desktops are steered toward Macs, with a Linux connection provided by the NoMachine NX Server.
"We needed a way to offer a Linux desktop interface to a lot of people without giving them each a physical system. This way we can concentrate our support efforts on high-performance computing, instead of desktop support," explained System Administrator Carl Benson.
NoMachine's NX Enterprise Server and Client software have enabled the scientists at Fred Hutchinson Cancer Research Center to remain productive while simplifying desktop support. Whether researchers are on site or at a remote location, NoMachine provides a full Linux desktop experience and access to X11 applications, combining responsive remote use with easier administration. The software's dependability and session resiliency also make it well suited for researchers who need to monitor projects that may run for months at a time.
"NoMachine is way less hassle, uses fewer resources per session on the server and is reliable," said Benson. "It has more capabilities...it's really excellent."
Read more about the project at www.nomachine.com/hutchinson.php.
NoMachine is the creator of NoMachine NX software, an enterprise-class solution for secure remote access, application delivery, and hosted desktop deployment. NoMachine revolutionizes the way users access their computing resources across the Internet to make seamless desktop access as easy and widespread as Web browsing. For more information, visit www.nomachine.com.