March 12, 2013
Tommy Minyard has been selected as this year's Community Representative Director at Open Scalable File Systems (OpenSFS). Minyard will continue to hold down his "day job" as Director of Advanced Computing Systems (ACS) at the Texas Advanced Computing Center (TACC), where his group is responsible for operating and maintaining TACC's production systems and infrastructure. He replaces 2012 Community Representative Director Stephen Simms, manager of High Performance File Systems at Indiana University.
Minyard's current projects at TACC include enabling world-class science through leadership in high performance computing, HPC research using clusters, system performance measurement and benchmarking, and fault tolerance for large-scale cluster environments. He earned a PhD in Aerospace Engineering from the University of Texas at Austin.
HPCwire asked Minyard about his new role at OpenSFS and the future of Lustre.
HPCwire: How would you describe the status of Lustre today?
Minyard: Lustre has become a very stable and robust filesystem, and it provides an extremely scalable platform for building HPC systems of any size. We have run many versions of Lustre here at TACC, and each release has improved in stability while adding features that make installing and administering Lustre much easier.
Also, with the establishment of the OpenSFS and EOFS community organizations, the overall Lustre community remains vibrant and active with growing international participation.
HPCwire: What opportunities do you see ahead for Lustre, and why?
Minyard: One key opportunity for Lustre is in the new push addressing big data. We are starting to see Lustre being used in large-scale data analytics and as a possible alternative or complement to Hadoop for MapReduce-style problems. With its scalable architecture and upcoming feature enhancements, I expect to see Lustre used in many more non-traditional HPC environments that have big data problems to solve.
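The MapReduce-style problems Minyard mentions share a simple shape: a map step emits key/value pairs from each input record, and a reduce step combines all values for each key. A minimal, framework-free sketch of that pattern in Python (not tied to Hadoop or Lustre; the function names and sample data are illustrative only):

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal MapReduce: map each record to (key, value) pairs,
    group the pairs by key, then reduce each group to one result."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count, the canonical MapReduce example.
lines = ["lustre scales", "lustre is open source"]
counts = map_reduce(
    lines,
    mapper=lambda line: [(word, 1) for word in line.split()],
    reducer=lambda word, ones: sum(ones),
)
# counts["lustre"] is 2; every other word appears once.
```

In a Hadoop deployment the grouping step runs as a distributed shuffle over HDFS; the point of Minyard's remark is that a shared parallel filesystem like Lustre can serve as the storage layer for the same class of workload.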
HPCwire: What are the most important issues the Lustre community needs to deal with in the near term?
Minyard: One is the continued development of features while ensuring that a single source tree for Lustre is maintained. Many parties contribute to Lustre, and each has its own development priorities, which could lead to bifurcation of the Lustre source tree. OpenSFS is funding Lustre development and source-tree maintenance activities, along with coordinating with EOFS, to avoid this situation as new features are implemented and bugs are fixed.
HPCwire: Long term?
Minyard: Longer term, I think the most important issue for Lustre will be growing and sustaining the community development efforts initiated by OpenSFS and EOFS. I am glad to see vendors willing to support and offer Lustre-based solutions. However, it will be the funding from these two organizations that will continue development of new Lustre features for the community in order to keep it a viable open source package. Without increasing and broadening community support, these organizations could have trouble continuing to fund the very expensive software development required to keep Lustre at the forefront as new hardware and software technologies become available.
HPCwire: What kinds of features or capabilities would you most like to see added to Lustre? How will these capabilities improve Lustre?
Minyard: Some of the features that I would like to see added to Lustre are already on the development roadmap – primarily, the distributed namespace (DNE) and Hierarchical Storage Management (HSM) features slated for the 2.4 and 2.5 releases. DNE will resolve one of the primary metadata bottlenecks present in current Lustre releases. HSM will allow Lustre to be used in a wider range of systems, including tape archival libraries.
HPCwire: Do you have any particular initiatives you'd like to implement this year?
Minyard: One of my primary initiatives for this year will be to expand the Lustre community and encourage more institutions to join the OpenSFS organization. The Lustre community has always been active on the various mailing lists, but I would like to see more of its members participate in and attend the annual Lustre User Group meeting. I look forward to LUG 2013 in April, where we will hear about recent developments from existing and new Lustre users, see presentations from the vendors, and discuss the roadmap for upcoming releases.
HPCwire: In the last couple years, we've seen Intel buy Whamcloud and more recently Xyratex buy Oracle's Lustre assets. What does this mean for the Lustre community going forward?
Minyard: My hope is that these two acquisitions will enhance the long-term viability of Lustre by ensuring that the filesystem remains supported and commercially viable, yet still available to the community as an open-source package.
Intel's purchase of Whamcloud shows me that Intel considers Lustre to be critical to today's HPC platforms as well as to future exascale systems. I was also pleased to see Xyratex acquire the Lustre rights from Oracle, as it was my understanding that Oracle had stopped active development of future releases. I think Xyratex will ensure that the Lustre trademark and brand are maintained for the community.
HPCwire: What are the long-term prospects of the technology for future exascale systems?
Minyard: Lustre will be a key piece of software for the deployment of exascale systems. Lustre has already proven itself to be extremely scalable, keeping pace as systems have grown from terascale to petascale, and it has already demonstrated bandwidth exceeding 1 TB/sec. With the features planned for the next few years, Lustre should evolve into one of the few filesystems able to work reliably on exascale systems.