May 14, 2012
MILPITAS, CA, May 14 -- Appro, a leading provider of supercomputing solutions, today announced the deployment of the Appro Xtreme-X™ Supercomputer in a four-socket configuration based on the new Intel® Xeon® processor E5-4600 product family at Kyoto University in Japan. The system will be used primarily by researchers and engineers at the Academic Center for Computing and Media Studies (ACCMS) at Kyoto University, and by others in the academic community across Japan.
The Appro Xtreme-X Supercomputer based on the new Intel Xeon processor delivers improved HPC performance, combining a large memory footprint, strong I/O, and high FLOPs in a dense system, accelerating data-intensive HPC applications while lowering data center infrastructure costs. The Intel Xeon processor E5-4600 product family features up to four channels of DDR3-1600 memory per socket (48 DIMMs per system), 8 cores with 16 threads and 20MB of cache per CPU, two QPI links at up to 8GT/s per socket, and integrated PCIe 3.0 with up to 40 lanes per socket. The system also supports the new Intel Integrated I/O and Intel® Turbo Boost Technology 2.0.
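The memory-performance claim can be put in rough numbers with a back-of-the-envelope sketch. The channel count comes from the release; the per-channel figures (1600 MT/s over a 64-bit channel) are standard DDR3-1600 values assumed here, not stated in the announcement:

```python
# Rough peak memory bandwidth for a four-socket E5-4600 node,
# assuming standard DDR3-1600 figures: 1600 MT/s, 8 bytes per transfer.
CHANNELS_PER_SOCKET = 4      # stated in the release
TRANSFER_RATE_MTS = 1600     # DDR3-1600: mega-transfers per second (assumed)
BYTES_PER_TRANSFER = 8       # 64-bit channel width (assumed)
SOCKETS_PER_NODE = 4         # four-socket Xtreme-X configuration

per_channel_gbs = TRANSFER_RATE_MTS * BYTES_PER_TRANSFER / 1000  # GB/s
per_socket_gbs = per_channel_gbs * CHANNELS_PER_SOCKET
per_node_gbs = per_socket_gbs * SOCKETS_PER_NODE

print(per_channel_gbs)  # 12.8 GB/s per channel
print(per_socket_gbs)   # 51.2 GB/s per socket
print(per_node_gbs)     # 204.8 GB/s per node
```

These are theoretical peaks; sustained bandwidth on real workloads will be lower.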
“We are excited to have completed the installation and deployment of the Appro Xtreme-X™ Supercomputer based on the Intel® Xeon® processor E5 product family at Kyoto University in Japan,” said Daniel Kim, CEO of Appro. “These cutting-edge supercomputers, configured with four-processor servers, provide 1.5TB of shared memory per node in a single address space, delivering outstanding memory performance, density, and manageability to support HPC scientific research projects.”
The Appro Xtreme-X Supercomputer configured for Kyoto University consists of two systems. The first delivers 10 TFlops from four-processor servers with a total of 24TB of shared memory in a 16-node configuration. The second delivers 200 TFlops from two-processor servers in a 601-node configuration, coupled with 64 GPU nodes to provide even higher density and performance in a small footprint. The two systems are interconnected with the latest Mellanox dual-rail FDR InfiniBand networks.
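A quick sanity check of the quoted figures for the first system, using only the totals stated in the release (24TB of memory and 10 TFlops across 16 nodes):

```python
# Per-node arithmetic for the first Kyoto University system,
# using the aggregate figures quoted in the release.
total_memory_tb = 24   # total shared memory, stated
total_tflops = 10      # aggregate peak performance, stated
nodes = 16             # node count, stated

memory_per_node_tb = total_memory_tb / nodes  # 1.5 TB, matching the 1.5TB/node claim
tflops_per_node = total_tflops / nodes        # 0.625 TFlops per four-socket node

print(memory_per_node_tb)  # 1.5
print(tflops_per_node)     # 0.625
```

The 24TB/16-node split is consistent with the 1.5TB-per-node shared-memory figure quoted above.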
These systems are designed to optimize floor space, offering an outstanding memory footprint per processor, efficient power utilization, and cooling efficiency using the latest cluster technologies and techniques. They are also configured with Appro’s HPC Software Stack, including Appro Cluster Engine™ (ACE) remote management software, which delivers diskless operation, fast booting, load balancing, and failover capabilities for non-stop operation; all factory integrated and ready to ship with Appro’s turn-key System Delivery Integration Services.
“The most complex computational tasks required by workloads ranging from data-intensive computing to breakthroughs at the leading edge of science can take advantage of the solutions delivered by Appro in their new Intel® Xeon® E5-4600 product family based platforms,” said Raj Hazra, General Manager of the Intel Technical Computing Group. “These solutions offer Intel’s new Integrated I/O technology with PCI Express 3.0 support and deliver breakthrough performance through improved system memory bandwidth.”
Appro is a leading developer of innovative supercomputing solutions, uniquely positioned to support High-Performance Computing (HPC) markets with a focus on medium- to large-scale deployments where lower total cost of ownership is essential. Appro accelerates technical applications and business results through outstanding price/performance, power efficiency, and fast time to market, building on the latest open-standards technologies and innovative cluster tools and management software, packaged with HPC professional services and support.
Appro supercomputing solutions enable scientists and engineers to use data-intensive, capacity, capability, and hybrid computing for scientific research, data modeling, engineering simulations, and seismic visualization. Appro is headquartered in Milpitas, CA, with offices in Korea, Japan, and Houston, TX. To receive Appro news and feature stories automatically, subscribe to the Appro RSS feeds at http://www.appro.com, visit us on Facebook at http://www.facebook.com/ApproSupercomputers, or interact with us at http://twitter.com/approhpc.