June 05, 2012
A team of scientists at the Stanford Predictive Science Academic Alliance Program (PSAAP) is modeling the fuel and airflow of a scramjet engine. The work is part of a $20 million project that researchers hope will lead to a major breakthrough in hypersonic flight. Eureka Magazine covered the PSAAP's progress.
Hypersonic flight refers to extremely fast speeds (roughly Mach 5 and above), and NASA has been flight-testing vehicles at these speeds since 2001. The X-43A supersonic combustion ramjet (scramjet) is a small experimental aircraft that draws on atmospheric air to burn its fuel. Flight tests have clocked the hypersonic craft at Mach 9.6, or roughly 7,000 mph, which earned it a spot in the Guinness World Records book.
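As a back-of-the-envelope check on the figures above (illustrative, not from the article), the Mach-to-mph conversion can be sketched in a few lines. The speed of sound varies with altitude, so the sea-level value used here is only a rough reference, but it lands in the same range as the quoted record:

```python
# Rough sanity check: convert Mach 9.6 to miles per hour.
# The speed of sound depends on altitude; the sea-level standard-atmosphere
# value below is only a convenient reference point.

SPEED_OF_SOUND_SEA_LEVEL_MS = 340.3   # m/s, standard atmosphere at sea level
MS_TO_MPH = 2.23694                   # metres per second -> miles per hour

def mach_to_mph(mach, a_ms=SPEED_OF_SOUND_SEA_LEVEL_MS):
    """Convert a Mach number to miles per hour for a given speed of sound."""
    return mach * a_ms * MS_TO_MPH

print(round(mach_to_mph(9.6)))  # on the order of 7,000 mph, as quoted
```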
Amazing as the speed records may be, the X-43A's scramjet engine suffers from a problem called 'unstart.' Parviz Moin, a professor in Stanford's School of Engineering, explained the issue:
“If you put too much fuel in the engine when you try to start it, you get a phenomenon called thermal choking, where shock waves propagate back through the engine. Essentially, the engine doesn't get enough oxygen and it dies. It's like trying to light a match in a hurricane.”
Preventing unstart is the focus of PSAAP's research, and the team plans to make design changes based on results from supercomputing models. The project also draws on Stanford's computer science, mathematics, aeronautics and astronautics, and engineering departments.
Moin recognized that a multi-departmental effort was required to model scramjet behavior. He says the mechanical and aeronautical engineers, who understand the problem, need to work together with the computer scientists so that more sophisticated simulations can be built. That will be especially crucial as supercomputers move into the exaflops realm.
With that in mind, the PSAAP team developed a new computer language, known as LISZT, for processing complex simulations across large numbers of compute cores. The tool is designed for exascale computing, enabling scientists to study combustion, turbulence, fluid dynamics, and other mathematically complex simulations at scale.
In the video below, LISZT was used to simulate exhaust temperature fluctuations from a passenger jet engine. The simulation ran for four days on 160,000 cores.
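LISZT's own syntax is not shown in the article, so the following is purely an illustration, not LISZT code: a minimal Python sketch of the kind of mesh-based stencil computation (here, 1-D heat diffusion via Jacobi iteration) that a mesh-oriented language like LISZT is built to express once and then map onto many cores.

```python
# Illustrative only -- NOT LISZT syntax. A serial sketch of a mesh-based
# stencil update; a DSL like LISZT expresses this kind of per-element loop
# once and compiles it to run in parallel across many compute cores.

def jacobi_step(u, alpha=0.25):
    """One explicit diffusion update over the interior mesh points."""
    new = u[:]  # copy; boundary values stay fixed
    for i in range(1, len(u) - 1):
        new[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
    return new

def solve(u, steps):
    """Apply the stencil repeatedly, as a time-stepping solver would."""
    for _ in range(steps):
        u = jacobi_step(u)
    return u

if __name__ == "__main__":
    # A hot spot in the middle of a cold rod with fixed-temperature ends:
    # heat spreads outward and decays toward the boundaries.
    mesh = [0.0] * 21
    mesh[10] = 100.0
    result = solve(mesh, 200)
    print(round(result[10], 2))
```

The per-element independence of each `jacobi_step` update is what makes this pattern easy to distribute: every interior point can be computed concurrently, which is the property a mesh DSL exploits at scale.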
Full story at Eureka Magazine