September 09, 2010
Computing for Clean Water
Volunteer computing is still going strong, as evidenced by an announcement this week from IBM's World Community Grid. The worldwide network of personal computers is being used in several projects, all focused on developing techniques that will lead to better water quality. The projects could hardly be more timely: clean water is in desperately short supply for more than 1.2 billion people.
From the release:
To accelerate the pace, lower the expense, and increase the precision of these projects, scientists will harness the IBM-supported World Community Grid to perform online simulations, crunch numbers, and pose hypothetical scenarios. The processing power is provided by a grid of 1.5 million PCs from 600,000 volunteers around the world. These PCs perform computations for scientists when the machines would otherwise be underutilized. Scientists also use World Community Grid -- equivalent to one of the world's fastest supercomputers -- to engineer cleaner energy, cure disease and produce healthier food staples.
One initiative aims to find ways to filter disease-causing pathogens from water; another is trying to uncover how human behaviors affect water quality. A third group, based in Brazil, is seeking a cure for schistosomiasis, a parasitic disease found in tropical regions and spread by contaminated water.
Stanley S. Litow, IBM vice president of Corporate Citizenship & Corporate Affairs and president of IBM's Foundation, commented on the program:
"I can think of few endeavors more important than making sure people across the globe have ready access to clean water. I would even suggest that it's a basic human right, and a hallmark of sophisticated and compassionate societies everywhere. That's why IBM is so incredibly proud to help scientists harness the resources of World Community Grid to make strides in this vital arena."
I couldn't agree more.
The World Community Grid relies on the unused cycles of its volunteers' computers to power humanitarian research projects. If you'd like to add your computer to the effort, sign up at www.worldcommunitygrid.org.
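The mechanics behind a volunteer grid are simple: a client runs at low priority on each PC, fetches a work unit from the project's servers, crunches it while the machine is otherwise idle, and reports the result back. A minimal sketch of that loop follows; the function names and the placeholder computation are hypothetical and stand in for World Community Grid's actual client, not its real API:

```python
def fetch_work_unit():
    """Hypothetical stand-in for downloading a work unit from a project server."""
    return {"id": 42, "numbers": list(range(1_000_000))}

def machine_is_idle():
    """A real client checks CPU load and user activity; we assume idle here."""
    return True

def crunch(unit):
    # Placeholder computation standing in for a real simulation step.
    return sum(unit["numbers"])

def report_result(unit_id, result):
    # A real client uploads the result; here we just print it.
    print(f"work unit {unit_id} -> {result}")

if machine_is_idle():
    unit = fetch_work_unit()
    report_result(unit["id"], crunch(unit))
```

The key design point is that the volunteer's machine always comes first: work only proceeds when the idle check passes, which is why participation costs the donor essentially nothing.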
China's First Petaflop System Up and Running
People's Daily reported late last week that China's first petaflop supercomputer is now fully assembled and running at the National Supercomputing Center in Tianjin. Named Tianhe-1, which means River in the Sky, the system is scheduled to undergo debugging and testing this month.
The supercomputer, housed in 13 computer cabinets, employs a hybrid design in which GPUs serve as accelerators. Its 2,560 compute nodes each contain two Xeon processors and two AMD GPUs, for a total of 71,680 cores. (For a deeper explanation of core counts, check out this blog.) Tianhe-1 achieves a sustained Linpack result of 563.1 teraflops against a peak theoretical performance of 1.2 petaflops.
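A quick back-of-the-envelope check shows how those figures hang together. Assuming quad-core Xeons (the article doesn't specify the model), each node's two CPUs account for 8 of its 28 cores, leaving 20 attributed to the GPU pair, and the Linpack-to-peak ratio works out to roughly 47 percent:

```python
# Back-of-the-envelope check of the Tianhe-1 figures reported above.
nodes = 2560
xeons_per_node = 2
cores_per_xeon = 4                 # assumption: quad-core Xeons

total_cores = 71_680
cores_per_node = total_cores // nodes                     # 28
cpu_cores_per_node = xeons_per_node * cores_per_xeon      # 8
gpu_cores_per_node = cores_per_node - cpu_cores_per_node  # 20 for the GPU pair

sustained_tf = 563.1               # Linpack, teraflops
peak_tf = 1200.0                   # 1.2 petaflops theoretical peak
efficiency = sustained_tf / peak_tf

print(cores_per_node, gpu_cores_per_node, round(efficiency, 3))
```

That sub-50-percent Linpack efficiency was typical of early GPU-accelerated hybrids, which traded some benchmark efficiency for much higher peak throughput per cabinet.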
The article gives a real-world comparison of all that computing power:
One second of calculations conducted by Tianhe-1 is equivalent to 88 consecutive years of calculations by 1.3 billion people, and the data that the supercomputer can store is equivalent to the sum of the collections in four national libraries with 27 million books each.
Tianhe-1 was developed in 2009 by the Changsha-based National University of Defense Technology and is China's first domestically made petaflop supercomputer. It ranked seventh on the latest TOP500 list. The machine will be used for a variety of high-performance applications in the fields of animation and rendering, biomedical research, aircraft simulation, petroleum exploration, data analysis for financial engineering, weather forecasting, and general science.
If you're still craving deeper insight into Tianhe-1, there's a good primer at the TOP500 site, here.
Myricom Gets New CEO
This week Myricom announced that it had named co-founder Nanette (Nan) Boden as president and chief executive officer. Boden was promoted from her position as CFO, and replaces Chuck Seitz, another of Myricom's founders.
From the release:
Since helping found Myricom in 1994, Nan Boden has participated in nearly every aspect of Myricom's operations. She was named Executive Vice President in 1999, CFO in 2001, and CEO in 2010. Nan has been a member of Myricom's Board of Directors since 2001. She received her M.S. and Ph.D. degrees in Computer Science from the California Institute of Technology (Caltech), and her B.S. degree in Applied Mathematics from the University of Alabama.
Myricom, a Caltech spin-off, made its foray into the 10 Gigabit Ethernet (10GbE) field by providing HPC interconnect technology for high-end clusters and supercomputers, but has since branched out into more mainstream networking applications. With its fourth generation of networking products, Myri-10G, the company delivers 10GbE solutions for specialized vertical markets such as financial trading, packet capture, video streaming and IPTV, and HPC.