March 16, 2009
March 16 -- The Texas Advanced Computing Center (TACC) has named Dan C. Stanzione Jr. to the newly created position of deputy director at the center. As deputy director, Stanzione will track the center's vision, strategy and planning, ensure effective and efficient operations of center-wide activities and programs, and articulate the center's impact and plans to the broader scientific community. He will also play a key role in funding new activities through proposals and partnerships. He will officially begin in this role in June 2009.
TACC is well known for providing high-end advanced computing resources and services to researchers nationwide; conducting leading research and development projects; and providing training and education for the local and national scientific community.
"The University of Texas at Austin is proud to have emerged as one of the leading computational science institutions in the world," Juan M. Sanchez, vice president for research, said. "TACC is the foundation of that growth. We look forward to hiring more individuals of Dan's national reputation and deep expertise to help the university continue to reinforce and grow its leadership position."
Most recently, Stanzione was the director of the Ira A. Fulton High Performance Computing Institute (HPCI) at Arizona State University (ASU). Over the past four years, he founded and led the development of this new HPC organization from conception to a fully functioning center with the 10th largest system in academia and a staff of 22 people. Prior to his directorship at ASU, Stanzione was a science policy fellow in the Division of Graduate Education at the National Science Foundation (NSF).
"I'm incredibly excited to join TACC and The University of Texas at Austin," Stanzione said. "Certainly, the systems and the facilities are among the best in the world, but what makes TACC special is the enormous talent of the people. I'm thrilled to be joining this team and look forward to what we can accomplish together. Large-scale computation is a crucial element in addressing the enormous challenges facing science and our society, and I can think of no better place in the world to bring these challenges than to the Texas Advanced Computing Center."
TACC Director Jay Boisseau said, "Dan is one of the emerging leaders in the supercomputing community, and has a deep understanding for how researchers and educators use HPC technologies. He's well known and highly respected for his expertise and for his success in building a strong center at ASU. We look forward to having him help us increase TACC's impact as a world-class computing center, and enable more breakthrough discoveries that advance science and society."
Stanzione has led numerous synergistic activities while at ASU, including teaming with TACC and Cornell University to deploy and support Ranger, NSF's first "Path to Petascale" system, as part of the NSF TeraGrid initiative. In this role, Stanzione was instrumental in the training and user support efforts, developing online content, and delivering in-person training as far afield as Coimbra, Portugal. In addition, he completed a project under the Department of Defense (DOD) Programming Environment and Training (PET) program to examine programming models for next-generation DOD systems, and his focus on user productivity has been adopted by a number of new communities through training and advanced user support programs.
Stanzione began his career at Clemson University, where he earned his doctoral and master's degrees in computer engineering, as well as his bachelor of science in electrical engineering. He then directed the supercomputing laboratory at Clemson, and also was an assistant research professor of electrical and computer engineering. His research focuses on the tools, software and architectures to advance scientific research through high-end computing.