May 02, 2012
COLUMBUS, Ohio, May 1 -- A senior researcher in computer science at the Ohio Supercomputer Center has been designated a Campus Champion – charged with empowering researchers and educators to advance scientific discovery by serving as their local source of knowledge about national high performance computing opportunities and resources.
Karen Tomko, Ph.D., was designated a Campus Champion by officials at the Extreme Science and Engineering Discovery Environment (XSEDE), the most advanced, powerful, and robust collection of integrated advanced digital resources and services in the world. The Ohio Supercomputer Center is one of 17 XSEDE partner organizations across the country.
The five-year, $121 million National Science Foundation (NSF) XSEDE project replaces and expands on the NSF TeraGrid project. More than 10,000 scientists used the TeraGrid to complete thousands of research projects, at no cost to them. That same sort of work – only in more detail, generating more new knowledge and improving our world in an even broader range of fields – continues with XSEDE.
“I’d like to see more researchers thinking about the problems they could solve with ten times or 100 times more computing resources than they are currently using, and to encourage researchers to be bolder in their computational goals,” Tomko said.
As a Campus Champion for OSC, Tomko will support the researchers who leverage the center’s resources. Through Tomko, those researchers will have direct access to XSEDE and its staff, help in securing resource allocations, and assistance in using those resources.
Tomko will receive regular correspondence from XSEDE on new resources, services and offerings. She also will participate in User Services Working Group teleconferences and in forums for sharing information with other Campus Champions and XSEDE personnel, as well as in training offered at the XSEDE conference, at regular meetings and through online forums.
“Having Dr. Tomko in a position to collect and share this vital information with our research community will help dramatically lower the technological barriers to the access and use of extremely powerful computing resources,” said Steven Gordon, interim co-executive director of OSC and the lead for XSEDE education programs. “This flow of information, coupled with Karen’s rich history of collaborations in computer science and engineering research, will prove invaluable.”
Using XSEDE, researchers can establish private, secure environments that have all the resources, services, and collaboration support they need to be productive. Initially, XSEDE supports 16 supercomputers and high-end visualization and data analysis resources across the country. It also includes other specialized digital resources and services to complement these computers.
OSC serves the high performance computing and data storage needs of about 1,000 researchers each year, advancing vital research in the biosciences, advanced materials, energy, the environment and many other fields. Currently, the center’s flagship computing system, the Oakley Cluster, is an HP-Intel Xeon array with more than 8,300 cores and 128 graphics processing units.
Tomko earned her doctorate from the University of Michigan in 1995 and spent 11 years in academia as a faculty member in computer science and engineering. She has been working in collaboration with computational scientists for more than 15 years. Her experience with scientific applications ranges from ground-motion simulation to quantum many-body physics.
Source: Ohio Technology Consortium