November 23, 2011
It's been a little over a year since Nimbix announced the initial beta launch of its Nimbix Accelerated Compute Cloud (NACC). During the SC11 show in Seattle last week, HPC in the Cloud sat down with Nimbix Co-Founder and CEO Steve Hebert to find out where the company fits in with the small-but-growing stable of cloud providers who specialize in supporting HPC workloads. Read more…
November 22, 2011
Indiana University's SCinet Research Sandbox entry sets new records, renews promise of cloud for data-intensive science workloads. Read more…
November 19, 2011
HPC in the Cloud talks to Cycle Computing CEO Jason Stowe at SC11 to get the details on the CycleCloud BigScience Challenge 2011. Cycle crafted the contest around the noble ideal that science should not be hindered by a lack of computational resources. So the company put out a call to non-profit institutions: Do you have an HPC problem that will benefit humanity on a large scale? Read more…
November 18, 2011
When NVIDIA CEO Jen-Hsun Huang delivered his keynote at SC11 this week, it was easy to forget that a few short years ago, the company and its GPU products had absolutely nothing to do with supercomputing. Today, of course, the technology is a driving force in the HPC ecosystem and is challenging the entrenched interests of chip makers Intel, AMD, and IBM. Read more…
November 18, 2011
If you thought Lustre and GPFS were your only two choices for a high performance, scalable parallel file system, then you've probably never heard of OrangeFS. We talked with three of the file system's developers and backers to discuss the unique attributes of OrangeFS and how it's being used in the field. Read more…
November 17, 2011
With a number of government and commercial exascale projects in full swing, SC11 has provided a convenient venue for vendors, academics, and government types to tout their vision of the future of supercomputing. To get a broad perspective on these efforts, we spoke with Thomas Sterling, Professor of Informatics and Computing at Indiana University and one of the foremost experts on supercomputing architectures. Read more…
November 17, 2011
Advances in silicon photonic integration will present an opportunity for hardware engineers to reconsider basic computer designs. That topic is the theme of a Disruptive Technology session at SC11 on Thursday conducted by Keren Bergman of Columbia University and Nadya Bliss of MIT Lincoln Laboratory. Prior to the conference, we asked Bergman and Bliss to discuss the technology issues surrounding integrated photonics and how it could impact computer systems, including HPC machines. Read more…
November 16, 2011
Although women make up the majority of the United States labor force, account for 60 percent of college graduates in developed countries and most internet users, and start more than half of the new companies created each year in the US, they have made surprisingly few inroads into high performance computing. On Thursday at SC11, there will be two sessions where HPC community members can discuss these issues and exchange ideas on how to change the status quo. Read more…
Today, manufacturers of all sizes face many challenges. Not only must they deliver complex products quickly, but they must also do so with limited resources while continuously innovating and improving product quality. With computer-aided engineering (CAE), engineers can design and test ideas for new products without having to physically build many expensive prototypes. This helps lower costs, enhance productivity, improve quality, and reduce time to market.
As the scale and scope of CAE grow, manufacturers need reliable partners with deep HPC and manufacturing expertise. Together with AMD, HPE provides a comprehensive portfolio of high-performance systems and software, high-value services, and an outstanding ecosystem of performance-optimized CAE applications to help manufacturing customers reduce costs and improve quality, productivity, and time to market.
Read this whitepaper to learn how HPE and AMD set a new standard in CAE solutions for manufacturing and can help your organization optimize performance.
A workload-driven system capable of running HPC and AI workloads is more important than ever, yet organizations face many challenges in building one, along with considerable complexity in system design and integration. Building a workload-driven solution requires expertise and domain knowledge that organizational staff may not possess.
This paper describes how Quanta Cloud Technology (QCT), a long-time Intel® partner, developed the Taiwania 2 and Taiwania 3 supercomputers to meet the research needs of Taiwan's academic, industrial, and enterprise users. The Taiwan National Center for High-Performance Computing (NCHC) selected QCT for its expertise in building HPC/AI supercomputers and its worldwide end-to-end support, from system design through integration, benchmarking, and installation, ensuring success for end users and system integrators.