December 02, 2008
LabVIEW Real-Time Module establishes high-performance computing benchmarks in European Extremely Large Telescope application
AUSTIN, Texas, Dec. 2 -- National Instruments was recently named a finalist in the 2008 Supercomputing Conference Analytics Challenge for accomplishments in high-performance computing (HPC) with the NI LabVIEW graphical system design platform. The recognition honors the most innovative solutions to the most complex problems in supercomputing applications. For the competition, the National Instruments LabVIEW research and development team submitted a technical paper establishing multicore programming benchmarks for developing real-time control for the forthcoming European Extremely Large Telescope (E-ELT), a project that presents historic computational challenges.
"We are excited to be a finalist in this challenge because it recognizes the parallel programming potential National Instruments has been developing since introducing LabVIEW more than 20 years ago," said Dr. James Truchard, CEO, cofounder and president of National Instruments. "In addition to acknowledging the impressive high-performance computing capabilities of LabVIEW and our work on the European Extremely Large Telescope, this honor positions National Instruments as a leader in real-time control applications. This achievement also complements the major solutions National Instruments has facilitated for the Max Planck Institute for Plasma Physics in the field of nuclear fusion and for CERN in particle acceleration, which represent two of the biggest technical challenges of our time."
The Analytics Challenge was held in conjunction with SC08, the international conference on high-performance computing, networking, storage and analysis, Nov. 15-21 in Austin, Texas. Each year, the Analytics Challenge provides a forum for researchers and industry representatives to present solutions that embody all facets of high-performance computing, such as comprehensive computational approaches, large-data-set processing and innovative analysis and visualization techniques.
For their Analytics Challenge submission, National Instruments engineers documented their breakthrough work with the European Southern Observatory (ESO) on the E-ELT project, which is currently in the proof-of-concept phase and, once constructed, will be the largest telescope ever built. ESO needed help proving the viability of a commercial off-the-shelf (COTS) solution for controlling the two most complex of the telescope's five mirrors. The primary active mirror will be 42 m in diameter and will comprise 984 hexagonal segments, all of which must remain in strict alignment continuously, even in windy conditions. To maintain segment alignment, the control system must respond to 6,000 sensor inputs, then send control signals to 3,000 actuators, and it must complete this input-output cycle up to 1,000 times per second.
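LabVIEW Real-Time expresses such a loop graphically, so any text rendering is only an analogy; still, the timing constraint quoted above can be illustrated with a hypothetical 1 kHz control cycle in C. Only the sensor count, actuator count and loop rate come from the article; the I/O routines below are placeholder stubs, not any real driver API.

```c
/* Hypothetical 1 kHz hard-real-time control cycle. The 6,000/3,000/1 ms
   figures come from the article; everything else is illustrative. */
#define _POSIX_C_SOURCE 200112L
#include <time.h>

#define NS_PER_CYCLE 1000000L   /* 1 ms period = 1,000 cycles per second */
#define N_SENSORS    6000
#define N_ACTUATORS  3000

/* Placeholder stubs standing in for real sensor/actuator drivers. */
static void read_sensors(float *s, int n)    { for (int i = 0; i < n; i++) s[i] = 0.0f; }
static void compute_commands(const float *s, float *a) { for (int i = 0; i < N_ACTUATORS; i++) a[i] = s[i]; }
static void write_actuators(const float *a, int n) { (void)a; (void)n; }

int main(void)
{
    static float sensors[N_SENSORS], commands[N_ACTUATORS];
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int cycle = 0; cycle < 5000; cycle++) {   /* demo only; a real controller loops indefinitely */
        read_sensors(sensors, N_SENSORS);          /* acquire the 6,000 inputs  */
        compute_commands(sensors, commands);       /* control law goes here     */
        write_actuators(commands, N_ACTUATORS);    /* drive the 3,000 actuators */

        /* Sleep until the next absolute deadline so timing jitter does
           not accumulate from cycle to cycle. */
        next.tv_nsec += NS_PER_CYCLE;
        if (next.tv_nsec >= 1000000000L) { next.tv_nsec -= 1000000000L; next.tv_sec++; }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}
```

Sleeping to an absolute deadline (TIMER_ABSTIME) rather than a relative delay is a common way to keep a fixed-rate loop from drifting, which matters when the closed-loop budget is a hard 1 ms.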
To solve this problem, NI engineers used the multicore programming functionality of LabVIEW Real-Time to create a highly deterministic, hardware-in-the-loop (HIL) communication network that moves 36 MB of data per second. The benchmarks achieved included distributing control algorithms across up to eight cores simultaneously and performing a 3,000-by-6,000 matrix-vector multiplication within 0.5 ms, meeting a monumental computational challenge while maintaining the determinism required in real-time applications and breaking the 1 ms closed-loop threshold.
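The article does not describe the LabVIEW block diagram itself, but the underlying operation, a row-partitioned matrix-vector product spread across the available cores, can be sketched in C with OpenMP standing in for LabVIEW's graphical parallelism. The 3,000-by-6,000 dimensions come from the benchmark above; everything else is illustrative.

```c
/* Minimal sketch of a row-partitioned multicore matrix-vector multiply.
   Build with: cc -O2 -fopenmp matvec.c */
#include <stdio.h>
#include <stdlib.h>

#define ROWS 3000   /* one output per actuator */
#define COLS 6000   /* one input per sensor    */

static void matvec(const float *A, const float *x, float *y)
{
    /* Static scheduling gives each thread a contiguous block of rows,
       e.g. roughly 375 rows per core on an eight-core machine. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < ROWS; i++) {
        float acc = 0.0f;
        for (int j = 0; j < COLS; j++)
            acc += A[(size_t)i * COLS + j] * x[j];
        y[i] = acc;
    }
}

int main(void)
{
    float *A = malloc((size_t)ROWS * COLS * sizeof *A);
    float *x = malloc(COLS * sizeof *x);
    float *y = malloc(ROWS * sizeof *y);
    for (size_t k = 0; k < (size_t)ROWS * COLS; k++) A[k] = 1.0f;
    for (int j = 0; j < COLS; j++) x[j] = 1.0f;

    matvec(A, x, y);
    printf("y[0] = %.0f (expected %d)\n", y[0], COLS);

    free(A); free(x); free(y);
    return 0;
}
```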
The team also documented its work on the even larger problem of developing control for the telescope's 2.5 m adaptive mirror, which will comprise a thin, flexible mirror membrane spread across 8,000 actuators. Instead of maintaining alignment, this mirror will deform continuously to compensate for wavefront aberrations caused by atmospheric disturbances. The computational requirements for controlling this mirror are nearly 15 times greater than those of the large primary mirror. NI engineers determined that the problem could be solved only with a state-of-the-art multicore blade system, and they tested their solution on the Dell M1000, a 16-blade system in which each blade features eight cores. Although the solution is still in progress, results from the Dell system show that LabVIEW has already effectively distributed the control problem across 128 cores, a groundbreaking achievement in itself.
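The release does not say how the 128-way decomposition was organized. One conventional approach for a 16-blade, 128-core system is block-row distribution over a message-passing layer; the MPI sketch below is purely illustrative, assuming only the 8,000-actuator output dimension cited above plus a made-up 6,000-element input, and is not the NI implementation.

```c
/* Hedged sketch: block-row distribution of an actuator-command update
   across MPI ranks (e.g. 16 blades x 8 cores = 128 ranks). */
#include <mpi.h>
#include <stdlib.h>

#define ROWS 8000   /* one command per adaptive-mirror actuator */
#define COLS 6000   /* illustrative input dimension             */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Rows owned by this rank; the first ROWS % nranks ranks take one
       extra row, so uneven divisions (8000/128 = 62.5) still work. */
    int base = ROWS / nranks, rem = ROWS % nranks;
    int count = base + (rank < rem);

    float *A = malloc((size_t)count * COLS * sizeof *A);  /* local matrix rows */
    float *x = malloc(COLS * sizeof *x);                  /* shared input      */
    float *y = malloc(count * sizeof *y);                 /* local outputs     */
    for (size_t k = 0; k < (size_t)count * COLS; k++) A[k] = 1.0f;
    for (int j = 0; j < COLS; j++) x[j] = 1.0f;

    MPI_Bcast(x, COLS, MPI_FLOAT, 0, MPI_COMM_WORLD);  /* all ranks see the same inputs */

    for (int i = 0; i < count; i++) {                  /* local block of the product */
        float acc = 0.0f;
        for (int j = 0; j < COLS; j++)
            acc += A[(size_t)i * COLS + j] * x[j];
        y[i] = acc;
    }

    /* Root gathers the full command vector; counts and displacements
       mirror the ownership formula above. */
    int *counts = NULL, *displs = NULL;
    float *full = NULL;
    if (rank == 0) {
        counts = malloc(nranks * sizeof *counts);
        displs = malloc(nranks * sizeof *displs);
        full   = malloc(ROWS * sizeof *full);
        for (int r = 0; r < nranks; r++) {
            counts[r] = base + (r < rem);
            displs[r] = r * base + (r < rem ? r : rem);
        }
    }
    MPI_Gatherv(y, count, MPI_FLOAT, full, counts, displs, MPI_FLOAT, 0, MPI_COMM_WORLD);

    free(A); free(x); free(y);
    if (rank == 0) { free(counts); free(displs); free(full); }
    MPI_Finalize();
    return 0;
}
```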
"The leading-edge power of the Dell Precision workstation and PowerEdge servers together with the real-time and graphical programming capabilities of NI LabVIEW deliver impressive capabilities to efficiently distribute computing loads across all the nodes in HPC applications," said Greg Weir, senior manager of Worldwide Business Development for Dell Precision Workstations. "The full memory and graphics potential of our workstations is realized with the key visualization functions of LabVIEW that HPC applications require."
Other parallel hardware that may add processing power to the final E-ELT solution for ESO includes field-programmable gate arrays (FPGAs), which LabVIEW already supports with the NI LabVIEW FPGA Module, and general-purpose graphics processing units (GPGPUs), which are being researched as a viable acceleration platform. In addition to the Dell proof of concept, a prototype in which LabVIEW harnesses NVIDIA's CUDA technology has been thoroughly benchmarked, with impressive computational results.
For more information about LabVIEW implementation in the E-ELT project, read the full case study at www.ni.com/eelt.
About National Instruments
National Instruments (Nasdaq: NATI) is transforming the way engineers and scientists design, prototype and deploy systems for measurement, automation and embedded applications. NI empowers customers with off-the-shelf software such as NI LabVIEW and modular, cost-effective hardware, and sells to a broad base of more than 25,000 companies worldwide, with no single customer representing more than 3 percent of revenue and no single industry representing more than 10 percent of revenue. Headquartered in Austin, Texas, NI has more than 5,000 employees and direct operations in more than 40 countries. For the past nine years, FORTUNE magazine has named NI one of the 100 best companies to work for in America. Readers can obtain investment information from the company's investor relations department by calling (512) 683-5090, e-mailing firstname.lastname@example.org or visiting www.ni.com/nati.
Source: National Instruments