April 08, 2010
Here is a collection of highlights from this week's news stream as reported by HPCwire.
Supermicro Delivers Platinum Level Servers
Tokyo Institute of Technology Selected as Japan's First CUDA Center of Excellence
Criterion HPS Unveils the Phantom Extreme Featuring Intel Xeon 5600 Processors
Woodward Taps IBM High Performance Cloud Services to Simulate Aircraft Component Design
GridCentric Announces Copper Cluster Management Software
NVIDIA Quadro GPUs Are Certified for AutoCAD
NCAR Orders Cray XT5 Supercomputer
RenderStream Announces Its VDAC 8-16 GPU Systems
Fixstars Releases 'The OpenCL Programming Book'
Lomonosov Supercomputer Tops New Russian List of Most Powerful HPC Systems
AccelerEyes Upgrades Jacket Software for GPU Computing
New Computer Cluster Ups the Ante for Notre Dame Research
Xilinx Helps University of Regensburg Launch Most Power-Efficient Supercomputer
Pittsburgh Supercomputing Center Accelerates Scientific Research with SGI Altix UV
Software Design Technique Allows Programs to Run Faster
New AMAX Solutions Powered by NVIDIA Tesla 20-Series GPU
National Petascale Computing Facility Reaches Substantial Completion
Netezza TwinFin Appliance Used for Data-Intensive Computing Applications at PNNL
Memristor Technology Holds Intriguing Promise
HP Labs this week announced advances in memristor technology that could fundamentally change the design of computing. Memristors could be the key that enables computers to handle the ongoing information explosion, where data from a slew of devices, both explicit and embedded, threatens to overwhelm our current computing limits.
So what is a memristor? According to the HP Labs announcement, it's "a resistor with memory that represents the fourth basic circuit."
If you're familiar with electronics, you will recognize the language. The trinity of fundamental components encompasses the resistor, the capacitor, and the inductor. In 1971, a University of California, Berkeley engineer, Leon Chua, predicted that there should be a fourth element: a memory resistor, or memristor. However, when memristors were first theorized 40 years ago, they were too big to be practical.
It was not until two years ago, in 2008, that researchers from HP Labs rediscovered Chua's earlier work. As transistor sizes shrank, even more capabilities of the memristor were realized, owing to the way material properties behave at the nanoscale.
What makes the memristor different from the other circuit elements is that when the voltage is turned off, it remembers its most recent resistance, and it retains that memory indefinitely until a voltage is applied again. A full explanation would take many more paragraphs, but if you are interested, I suggest the easy-to-understand primer at the IEEE Spectrum Web site.
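To make the "resistor with memory" idea concrete, here is a minimal toy model in Python. It is purely illustrative, not HP's device physics: the resistance is a function of an internal state that is nudged by current flow, and applying zero volts leaves that state (and hence the resistance) unchanged, which is the memory property.

```python
class ToyMemristor:
    """Toy memristor: resistance depends on charge history, not just voltage."""

    def __init__(self, r_on=100.0, r_off=16000.0):
        self.r_on = r_on    # limiting resistance when fully "on" (ohms)
        self.r_off = r_off  # limiting resistance when fully "off" (ohms)
        self.x = 0.0        # internal state in [0, 1], set by charge history

    def resistance(self):
        # Linear mix between the two limiting resistances.
        return self.r_off + (self.r_on - self.r_off) * self.x

    def apply_voltage(self, volts, dt):
        # Current flow nudges the internal state; zero volts leaves it
        # unchanged -- the device "remembers" its most recent resistance.
        current = volts / self.resistance()
        self.x = min(1.0, max(0.0, self.x + 0.01 * current * dt))


m = ToyMemristor()
r_before = m.resistance()
m.apply_voltage(5.0, dt=100.0)    # drive current through the device
r_programmed = m.resistance()     # resistance has drifted lower
m.apply_voltage(0.0, dt=1000.0)   # "power off": no current, no state change
assert m.resistance() == r_programmed
```

The key contrast with a plain resistor is in `apply_voltage`: the device's behavior depends on its past, so the stored state survives with no power applied.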
One advantage of memristors is that they require less energy to operate, and they are already being considered as a replacement for transistor-based flash memory.
Researchers predict that in five years, such chips, when stacked together, could be used to create handheld devices that offer ten times greater embedded memory than exists today, and could also be used to power supercomputers for digital rendering and genomic research applications at far greater speeds than Moore's Law suggests is possible.
Memristors work more like human brains. In fact, Leon Chua explained that our "brains are made of memristors," referring to the function of biological synapses.
And according to R. Stanley Williams, senior fellow and director of the Information and Quantum Systems Lab at HP:
Memristive devices could change the standard paradigm of computing by enabling calculations to be performed in the chips where data is stored rather than in a specialized central processing unit. Thus, we anticipate the ability to make more compact and power-efficient computing systems well into the future, even after it is no longer possible to make transistors smaller via the traditional Moore's Law approach.
The promises this technology offers sound almost too good to be true. If even half of what is promised holds up, then this will go down in history as one of the great breakthroughs in computer technology.
48-Core Intel Processor for Educational Purposes Only
Intel announced plans to ship "limited quantities" of computers with an experimental 48-core processor to researchers by the middle of the year. The 48-core processors will be shipped mainly to academic institutions, an Intel rep said during an event in New York on Wednesday. And while the chip will probably not become commercially available, certain features may make their way into future products.
The 48-core chip operates at about the clock speed of Atom-based chips, said Christopher Anderson, an engineer with Intel Labs. Intel's latest Atom chips are power-efficient, are targeted at netbooks and small desktops, and run at clock speeds between 1.66GHz and 1.83GHz. The 48-core processor, built on a mesh architecture, could deliver a massive performance boost when all the cores communicate with one another, Anderson said.
The new processor reportedly draws between 25 and 125 watts, and individual cores can be powered off, or clocked down, to save energy. The chip touts better on-die power management than current multicore chips and comes with power-management software that lowers energy consumption to match performance requirements.
During the Wednesday event, researchers demonstrated the processor's advanced power management features. While running a financial application, sets of cores were deactivated and the power consumption went from 74 watts to 25 watts in under a second.
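The demo's core idea, powering down unneeded cores so chip-level draw tracks the active-core count, can be sketched with a toy model. The per-core and uncore wattage figures below are hypothetical, chosen only so the totals land near the 74-watt and 25-watt numbers reported from the demo; Intel has not published a breakdown at this level of detail.

```python
class ManyCoreChip:
    """Toy model of per-core power gating on a 48-core chip."""

    def __init__(self, cores=48, uncore_watts=13.0, core_watts=1.27):
        self.cores = cores
        self.active = cores               # all cores on at start
        self.uncore_watts = uncore_watts  # mesh fabric, I/O, memory controllers
        self.core_watts = core_watts      # assumed draw of one active core

    def power(self):
        # Total draw: fixed uncore cost plus each active core's share.
        return self.uncore_watts + self.active * self.core_watts

    def gate_cores(self, keep_active):
        # Power-gate everything beyond `keep_active` cores.
        self.active = max(0, min(self.cores, keep_active))


chip = ManyCoreChip()
full_load = chip.power()   # all 48 cores active
chip.gate_cores(9)         # deactivate most of the chip
idle_load = chip.power()   # only the uncore plus a few cores still drawing
```

The point of the sketch is the shape of the curve, not the exact numbers: because each gated core stops contributing to the total, power falls roughly linearly with the active-core count, which is why deactivating sets of cores produced such a sharp drop in the demo.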
The new 48-core chip is based on the 80-core Teraflop prototype created in 2007 by Intel's Tera-scale Computing Research Program. That chip, in turn, was the forerunner of the 48-core "Single-chip Cloud Computer" announced in December 2009, also a product of the Tera-scale Computing Research Program.
Those processors, however, were only prototypes and were never released into the wild. However, the 48-core chips announced this week are almost ready to leave the research nest, and will be released if not into the fierce corporate jungles at least into the relatively tamer academic habitat.