As we move into the pre-exascale era, concerns about power consumption, network bandwidth, and I/O will continue to push toward increasing integration. This was a key theme during the International Supercomputing Conference this week in Germany, where Intel’s Raj Hazra provided an early look into how it will integrate various pieces of the future…
This week during SC13, Intel hosted a roundtable session to discuss the future of its upcoming Knights Landing product, touching on where the key benefits are expected for technical computing users and how Knights Landing might influence the shape of next-generation systems and applications. As Intel turns its focus on the Xeon front to…
If we had to pick from the most compelling announcements of SC13, the news from memory vendor (though that narrow label may soon change) Micron about its new Automata processor is at the top of the list. While at this point there’s still enough theory to lead us to file this…
With the advance of multicore and manycore processors, managing caches becomes more difficult. Researchers at MIT suggest that it might make sense to let software, rather than hardware, manage these high-speed on-chip memory banks.
The market for computer memory is entering a period of punctuated evolution as a result of several forces, including the continued growth of mobile devices like smartphones and tablets, as well as growth in the cloud data centers and communication networks that serve data to mobile users. HPC workloads also play a part in the changing memory landscape.
With ever-mounting CPU advancements that promise superior performance, the blame for poor delivery on those promises lies squarely with memory. This problem isn’t just a matter of application performance; it’s also a matter of efficiency. This week Micron, with partners including Intel, Altera, IBM, ARM, and Xilinx…
This week at NVIDIA’s GPU Technology Conference, the priorities for GPU computing’s future, including fast access to high memory bandwidth, were cited as critical to growing user ranks. Energy consumption, data volume, and velocity requirements are driving new, more efficient, higher-bandwidth approaches, including Volta, which was revealed during the keynote.
DRAM manufacturers gear up for DDR4.
As one of the world leaders in memory solutions, Samsung Semiconductor has been a key supplier of DRAM and NAND components that end up in high performance computing systems. Dr. Byungse So, who heads the Memory Product Planning & Application Engineering team at Samsung, shares his thoughts about the memory technologies needed by performance-minded users today and what might come next.
While SSDs are expensive, prices are falling and some users are seeing remarkable returns from their investment.