October 05, 2007
When Neil Armstrong set foot on the lunar surface in 1969, he made more than a historic and enduring mark on the moon; he made a monumental impact on the collective imagination of the world. Nearly 40 years later, NASA is preparing for another journey to the moon -- and beyond -- with a new class of explorers sure to strike a whole new generation with the same overwhelming sense of awe and wonder.
In the volatile realm of space travel, pyrotechnics are both a necessary evil and a force to be reckoned with. Controlled combustion on a monumental scale is essential during liftoff to achieve the extraordinary thrust required to escape Earth's gravity. Any unexpected or unplanned pyrotechnics, however, can be cataclysmic for the mission and its crew.
The Quest for Controlled Destruction
Few understand this pyro-proposition better than pyrotechnic engineer Christopher W. Brown at NASA's Johnson Space Center in Houston, Texas. Brown works with the team of engineers responsible for the design, testing and implementation of pyrotechnics for a variety of space exploration initiatives, including the Constellation Program, which will send human explorers back to the moon and onward to other unexplored parts of the solar system.
"Pyrotechnics is popular in aerospace when it comes to one-time actuation or separation," Brown says. "One example is the frangible nut, which is used to separate the space shuttle from the external fuel tanks."
The frangible nut, critical to the separation process, is designed to fracture when its explosive charges are commanded to fire, producing a clean and thorough separation of the external fuel tank once its propellant is exhausted. The idea is to blow it apart on command, with as little collateral damage as possible.
"I call it controlled destruction," Brown says. "There is a lot of shock and debris that need to be contained. It is a very tricky balance. We need to have the nut or the separation structure strong enough to hold the pieces together, but brittle and weak enough to fracture when commanded."
The Test before the Test
In order to achieve the proper design of the components, determine the precise amount of combustion required without going overboard, and predict the behavior of the debris field to prevent incidental damage, NASA is using finite element analysis (FEA) software from MSC.Software in concert with visualization software from CEI, Inc. of Apex, N.C. The goal is to produce 3D simulations of various test scenarios and, ultimately, to design a means of controlling the end results.
Using MSC's Dytran to model the various applications, NASA imports the results into CEI's EnSight for post-processing, where they are converted into graphical images and movies that can be shared with colleagues. Using this process, Brown's team can simulate proposed design modifications and other variables to predict behavior under given conditions before any "live" testing takes place. In the case of a shuttle launch, for example, the results can have life-or-death implications.
"The simulations will give you an idea of what could happen," Brown says. "We want to know as much about the possible outcomes before conducting a real test, to get a heads up on what to watch out for and what to avoid."
In one such experiment, Brown and his team are working to perfect a newly designed back-up release mechanism for a new docking system to be put in place on the Orion Crew Exploration Vehicle. Orion is expected to fly its first mission to the International Space Station by 2014 and carry the next generation of astronauts to the moon by 2020. The mechanism works by using a charge to apply pressure at one side, which engages a piston release system that allows the bolt to slip out, with no metal fracture.
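The release mechanism described above can be pictured as a simple force balance: the gas pressure generated by the charge, acting on the piston face, must overcome whatever preload and friction hold the bolt in place. A minimal sketch of that balance, using entirely hypothetical numbers (none of the actual design values appear in the article):

```python
import math

# All figures below are assumed for illustration only;
# the real NASA design values are not given in the article.
PRESSURE_PA = 30e6          # gas pressure from the pyrotechnic charge, Pa (assumed)
PISTON_DIAMETER_M = 0.05    # piston face diameter, m (assumed)
RETENTION_FORCE_N = 40e3    # preload plus friction holding the bolt, N (assumed)

# Force on the piston face: F = P * A, with A = pi * r^2
area_m2 = math.pi * (PISTON_DIAMETER_M / 2) ** 2
force_n = PRESSURE_PA * area_m2

# The bolt slips out (with no metal fracture) only if the
# piston force exceeds the retention force.
releases = force_n > RETENTION_FORCE_N
```

With these placeholder values the piston sees roughly 59 kN, comfortably above the assumed 40 kN retention load; a designer would also check the margin the other way, to confirm the charge cannot fire the bolt prematurely.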
Modeling this procedure allows thorough testing of every component to achieve the flawless operation critical for space travel. Before any physical testing took place, Brown was able to identify hang-ups in the parts, modify the design, and rerun the simulations until the mechanism worked. In the end, the live testing was quite successful.
In the case of the frangible nut, the objective is for the explosive to break the nut apart cleanly rather than send its energy in the wrong direction. With a booster on each side of the nut (the second in place only for redundancy), Brown tests one booster at a time.
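The value of that redundant second booster can be quantified with a basic reliability calculation: if each booster fires independently with probability p, the nut fails to fracture only when both boosters fail. A minimal sketch with an assumed, purely illustrative reliability figure:

```python
# Hypothetical single-booster reliability; real pyrotechnic
# reliability figures are not stated in the article.
p_fire = 0.999

p_fail_single = 1 - p_fire          # one booster fails to fire
p_fail_both = p_fail_single ** 2    # both independent boosters fail
p_system = 1 - p_fail_both          # nut fractures if either booster fires
```

Under this assumption, a 1-in-1,000 chance of a single booster failing becomes a 1-in-1,000,000 chance of the separation system failing, which is why testing one booster at a time matters: each must be able to fracture the nut on its own.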
Powerful Combination Yields Powerful Results
The one-two punch of MSC's and CEI's software allows NASA to achieve accurate results in an easily viewable format. EnSight quickly and seamlessly imports the Dytran files for post-processing, and the results can be viewed as movies or frame by frame for thorough analysis.
Annotations can be added to the resultant images, and color contours can be adjusted to emphasize particular results or variables and achieve the desired graphical image. This output is then used for collaboration among fellow engineers and with the test area.
"Using a 3-D viewer provides a quick way of viewing motion and rotation at any angle without having to start the post processing all over again," Brown says. "It allows you to view the animation, not just like a video, but something that you can rotate and move around."
For more information
MSC.Software's Dytran overview: http://www.mscsoftware.com/products/dytran.cfm?Q=396&Z=287&Y=387
Human Space Flight: The Shuttle -- External Tank Separation System: http://spaceflight.nasa.gov/shuttle/reference/shutref/orbiter/sep/sepsystem.html
NASA Constellation Program: Orion Crew Vehicle: http://www.nasa.gov/mission_pages/constellation/orion/index.html
MSC.Software signs agreement with CEI enhancing graphics visualization for SimXpert: http://www.ensight.com/msc.software-signs-agreement-with-cei-enhancing-graphics-visualization-for-sim.html
Source: CEI Inc.