December 16, 2010
Here is a collection of highlights from this week's news stream as reported by HPCwire.
LSU Center for Computation & Technology Names New Deputy Director
New Scilab Release Adds Improved Ergonomics, Parallel Execution
Scientists Identify the Largest Network of Protein Interactions Related to Alzheimer's Disease
LSU CCT Group Develops New Cyberinfrastructure Environment
PRACE Announces 'SuperMUC' System for LRZ
LSU Scientist Identifies Chess Benchmark Parameters for DARPA UHPC Project
Red Bull Racing Revs Up Formula 1 Race Car Simulations with Netlist's HyperCloud Memory
LSU Center for Computation & Technology Appoints New Director
MathWorks Delivers New Parallel Computing Support for Real-Time Workshop
AMAX Introduces Multi-Petabyte NAS Storage Solution
Leibniz Supercomputing Centre to Acquire Multi-Petaflop Supercomputer from IBM
NIST Team Awarded Millions of Supercomputing Hours, Aims for 'Concrete Results'
Rensselaer's Francine Berman Named IEEE Fellow
IU Open Systems Lab Researchers Receive SC10's Best Paper Award
EUREKA Project Develops Advanced HPC Tools
ACM Names IU's Bobby Schnabel as 2010 Fellow
Open Source Parallel File System Gets European Backers
Parallel file systems might not sound like the sexiest aspect of high-end computing, but they've been getting a lot of attention lately, specifically the popular, open source Lustre technology. After Oracle swallowed Sun early this year, Lustre's fate was left uncertain, prompting concerned parties to take matters into their own hands. In July 2010, startup Whamcloud vowed to support the Lustre standard for Linux platforms. That same month, the High Performance Cluster File System (HPCFS) Software Foundation was founded -- an international, non-profit organization dedicated to furthering the needs of parallel file system user communities worldwide. Then, there was the launch of another pro-Lustre alliance, Open Scalable File Systems, in October of this year.
Well, this week another Lustre champion was announced, but this one's based in Europe. ParTec Cluster Competence Center GmbH announced the creation of the European Open Filesystem (OSF) cooperative. The entity is organized as a Societas Cooperativa Europaea (SCE), a European cooperative society, making the OSF-SCE the first Europe-wide endeavor of this sort.
According to the news release, the group's purpose is "to promote the establishment and adoption of an open source parallel filesystem, sustain and enhance its quality, capabilities and functionality and to ensure that the specific requirements of European organizations, institutions and companies are considered."
Jean Gonnord, program director for numerical simulation & computer sciences at CEA/DAM, explains the project's importance:
"Having an open source filesystem is a necessity for high-end supercomputers producing massive amounts of data. Such an important piece of the filesystem software cannot be made proprietary with no absolute guarantee of future access. This is the main reason why CEA has supported and will continue to support Lustre as an open source filesystem. As with any open source software, Lustre should be supported by the largest community of users. We hope that the organization will prioritize its development efforts to continually improve the functionality and stability of an open source code base."
So far, the group's impressive roster of 14 members includes Forschungszentrum Jülich, Bull GmbH, CEA/DAM, DataDirect Networks, the Universities of Zürich (Switzerland) and Paderborn (Germany), GSI Helmholtzzentrum für Schwerionenforschung GmbH, credativ GmbH, T-Platforms, HPCFS, Mellanox, Whamcloud, Leibniz Rechenzentrum (LRZ) and ParTec GmbH.
IBM/Jeopardy! Match Date Set
Set your TiVos and DVRs. The long-anticipated faceoff between IBM's "Watson" supercomputer and the ever-popular, long-running Jeopardy! quiz show is finally at hand. The competition will air on February 14, 15 and 16, 2011, with two matches played over the three consecutive days. Watson will go up against two of the show's most successful contestants -- Ken Jennings and Brad Rutter.
According to IBM officials, Watson was designed for the type of natural-language processing challenges that the Jeopardy! quiz show provides. Watson's skills will be thoroughly tested "because the game's clues involve analyzing subtle meaning, irony, riddles, and other complexities in which humans excel and computers traditionally do not."
Watson's software runs on IBM POWER7 servers optimized to analyze and respond to the kind of complex language that makes up Jeopardy! clues. The system performs at rapid speeds and is capable of processing an enormous number of concurrent tasks in real time.
Watson has been prepped to play the game, having played more than 50 test matches against former Tournament of Champions contestants. Watson has also taken and passed the same qualification test given to human contestants. These early results give both Jeopardy! producers and IBM officials confidence that the opponents are well-matched.
The winner will receive a cool $1 million, while $300,000 will go to the second-place finisher and $200,000 to third place. In the event that Watson is victorious, IBM will donate 100 percent of the winnings to charity; Rutter and Jennings have agreed to donate 50 percent of theirs.
The PBS science show NOVA is even getting in on the action. NOVA is producing a documentary about this historic event, called Smartest Machine On Earth. The one-hour documentary will delve into the subject of artificial intelligence (AI) and will follow Watson's journey culminating in the first man versus machine Jeopardy! competition.
The in-depth documentary is scheduled to premiere on February 9, 2011, at 10pm ET/PT (local listings may vary) on PBS, three days in advance of the Watson/Jeopardy! showdown.
Can't wait until February? Highlights of the sparring matches are accessible at www.ibmwatson.com.