NCSA Allows Faster, Better, Cheaper Engineering

By Kathleen Ricker, NCSA Science Writer

April 29, 2005

You're a commuter, your daily rush-hour ordeal made even more grueling by the hassle of unexpected merging lanes, the heady essence of asphalt, and the sign-toting, orange-clad road crew ahead. Resurfacing the road again? you think. But they just did that two years ago! Is this why my taxes are so high?

When it comes to major public construction projects, it's not just the public who wants the end product to be faster, cheaper, and better. The Federal Highway Administration estimates that a staggering $94 billion will be spent on transportation infrastructure every year for the next twenty years. Not surprisingly, state and federal transportation departments want to make sure that their significant infrastructure investments are worthwhile, and they've upped the stakes. The traditional bidding process, in which the least expensive estimate wins the contract, has undergone a transformation in recent years. Cost is no longer the primary factor in determining who gets the job; project duration (the amount of time that drivers will be negotiating the construction), quality, and durability are now important criteria as well.

The best of all possible worlds

Of course, tradeoffs are inevitable. An old saw in engineering and software development says that you can't have faster, cheaper, and better; you can only have two out of three. “If you're trying to minimize the duration, you have to use overtime, and that means increasing your costs,” says Khaled El-Rayes, an assistant professor in the Department of Civil Engineering at the University of Illinois at Urbana-Champaign. “If you're trying to improve quality, in many cases you have to pay more for that increase in quality.”

How can a comfortable tradeoff be reached among these conflicting objectives? That's the focus of the research that El-Rayes and his research assistant Amr Kandil are currently conducting, using NCSA machines to optimize the decision-making process. El-Rayes, who received an NSF CAREER Award for optimizing construction resource utilization in transportation infrastructure systems, is developing an optimization model that can determine the optimal tradeoffs among multiple conflicting objectives. This is no simple problem. For each task involved in a large-scale construction project, there are at least three important criteria to consider: cost, duration, and quality. Plug in different combinations of possible values for each, and you can generate a large number of permutations involving different kinds of construction, equipment, and crews, the addition or omission of overtime, an off-peak work schedule, and other possible factors. With the average infrastructure project involving 600 or 700 different activities, the task of determining the optimal balance of duration, cost, and quality is impossibly overwhelming for a human being.

Instead, El-Rayes uses a genetic algorithm-based model that allows him to generate a large number of possible construction resource utilization plans providing a wide range of tradeoffs among project cost, duration, and quality, and to quickly eliminate the vast majority of suboptimal plans. “At the end,” he says, “what you want is a set of optimal tradeoffs which decision-makers can use to determine, according to their preferences, the best possible combination of resources.” This might mean, for example, that a longer project duration is tolerable if cost or quality is a bigger concern, or that a reduced duration is a greater priority than cost or quality.
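To make the approach concrete, here is a minimal sketch in Python of how a genetic algorithm can evolve a set of Pareto-optimal tradeoffs among duration, cost, and quality. The activity data, population size, and operators are all invented for illustration; this is not El-Rayes's model, just the general technique it builds on.

```python
import random

# Hypothetical toy data: each activity offers a few resource-utilization
# options, each with a (duration, cost, quality) profile.
ACTIVITIES = [
    [(5, 10_000, 0.90), (3, 16_000, 0.85), (4, 12_000, 0.95)]
    for _ in range(20)  # 20 activities for illustration, not 700
]

def evaluate(plan):
    """Duration and cost as sums; quality as the mean quality level."""
    d = sum(ACTIVITIES[i][g][0] for i, g in enumerate(plan))
    c = sum(ACTIVITIES[i][g][1] for i, g in enumerate(plan))
    q = sum(ACTIVITIES[i][g][2] for i, g in enumerate(plan)) / len(plan)
    return d, c, q

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one; duration and cost minimized, quality maximized."""
    no_worse = a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2]
    better = a[0] < b[0] or a[1] < b[1] or a[2] > b[2]
    return no_worse and better

def pareto_front(population):
    """Keep only plans whose fitness no other plan dominates."""
    scored = [(plan, evaluate(plan)) for plan in population]
    return [p for p, f in scored
            if not any(dominates(g, f) for _, g in scored if g != f)]

def mutate(plan, rate=0.1):
    return [random.randrange(len(ACTIVITIES[i])) if random.random() < rate else g
            for i, g in enumerate(plan)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

random.seed(42)
pop = [[random.randrange(len(opts)) for opts in ACTIVITIES] for _ in range(100)]
for _ in range(50):  # evolve for 50 generations
    elite = pareto_front(pop)
    pop = elite + [mutate(crossover(random.choice(pop), random.choice(pop)))
                   for _ in range(100 - len(elite))]

for plan in pareto_front(pop)[:5]:
    print("duration=%d cost=$%d quality=%.2f" % evaluate(plan))
```

Each generation keeps the nondominated plans and refills the population with mutated crossovers, so the surviving front approximates the tradeoff surface that decision-makers would then browse according to their priorities.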

The advantage of this optimization model is its ability to transform the traditional two-dimensional time-cost tradeoff analysis into an advanced three-dimensional time-cost-quality tradeoff analysis. Introducing the third dimension in construction projects is a challenging task, particularly because quality is itself a difficult factor to quantify. “The cost is simply dollar value, and so it is easy to aggregate by adding it all up,” says El-Rayes. “Quality is more challenging.”

El-Rayes's model, which incorporates quality, is currently based on data from the Illinois Department of Transportation (IDOT), which keeps records on, for example, the types of construction crews used and their measured performance on various quality metrics, such as compressive and flexural strength for concrete pavement work. Examining this data in aggregate, El-Rayes can determine how frequently, and by how much, a given combination of resources exceeded IDOT-specified quality limits, allowing him to assign a quality level to that specific combination of crew and resources.
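As a hedged illustration of how such records might be turned into a quality level, the sketch below scores one hypothetical crew/resource combination by how often, and by how much, its measured strengths exceed specified limits. The metric names, values, limits, and the equal weighting are invented stand-ins, not IDOT's actual data or El-Rayes's actual scoring scheme.

```python
# Hypothetical measured performance for one crew/resource combination
# on concrete pavement work (all values and limits are invented).
SPEC_LIMITS = {"compressive_strength_psi": 3500, "flexural_strength_psi": 650}

measurements = {
    "compressive_strength_psi": [3620, 3710, 3480, 3900, 3655],
    "flexural_strength_psi":    [640, 690, 705, 660, 685],
}

def quality_level(measurements, limits):
    """Score each metric by how often and by how much it exceeds the
    specified limit, then average across metrics (equal weights assumed)."""
    scores = []
    for metric, values in measurements.items():
        limit = limits[metric]
        # Fraction of samples meeting or exceeding the spec:
        frequency = sum(v >= limit for v in values) / len(values)
        # Average relative margin above the spec (shortfalls count as zero):
        margin = sum(max(v - limit, 0) / limit for v in values) / len(values)
        # Scale so a ~10% average exceedance saturates (arbitrary choice):
        scores.append(0.5 * frequency + 0.5 * min(margin * 10, 1.0))
    return sum(scores) / len(scores)

print(f"quality level: {quality_level(measurements, SPEC_LIMITS):.2f}")
```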

In the future, El-Rayes and his research team hope to add even more factors for consideration, including safety, service disruption, and environmental impact. He would also like to make the process more user-friendly by including an interactive tool that would let users rank solutions using weighting factors that reflect their preferences.

Optimizing the optimal

While El-Rayes's model makes the decision-making process easier for humans by automatically weeding out all less-than-optimal scenarios, there is no getting around the fact that it still requires an enormous calculation. “If we had a project that included 700 activities, an average-sized construction project,” explains El-Rayes, “and each activity had a potential 3 to 5 options, and that's conservative, it would create a solution space which is exponential in the number of activities.” It's a huge solution space, one which, El-Rayes estimates, would require around 430 hours of computation on a single processor. “Solving this problem wouldn't be feasible,” El-Rayes says. “Nobody's going to wait 430 hours for the solution.”
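A back-of-the-envelope calculation shows why exhaustive search is hopeless. Using the figures El-Rayes quotes (700 activities, 3 to 5 options each), the number of distinct plans is the option count raised to the power of the activity count:

```python
from math import log10

activities = 700                  # an average-sized project, per El-Rayes
for options in (3, 5):            # conservative options per activity
    # Independent choices multiply, so the plan count is options**activities;
    # report it as a power of ten to keep the number readable.
    digits = activities * log10(options)
    print(f"{options} options/activity -> ~10^{digits:.0f} possible plans")
```

Even at a billion plan evaluations per second, enumerating roughly 10^334 to 10^489 plans is out of the question, which is why a genetic algorithm that samples the space intelligently, plus parallel hardware, is the practical route.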

This is where NCSA comes in. Using NCSA's Tungsten cluster, El-Rayes and his research team, with the help of Nahil Sobh, who heads NCSA's Performance Engineering and Computational Methods group, are currently exploring how to parallelize the computations over a number of processors. Rather than running on a single processor in the office of a contractor or a state, local, or federal transportation department, the computations can be distributed over a number of idle office processors, drastically reducing the run time to the duration of a weekend.
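The article does not detail the parallelization scheme, but the natural decomposition for a genetic algorithm is to fan the per-plan fitness evaluations, by far the dominant cost, out across processors. Here is a minimal sketch using Python's standard multiprocessing pool, with a placeholder evaluation function standing in for the real duration/cost/quality computation:

```python
from multiprocessing import Pool
import time

def evaluate_plan(plan):
    """Stand-in for the expensive duration/cost/quality evaluation of one
    candidate resource-utilization plan."""
    time.sleep(0.01)               # pretend each evaluation takes real work
    return sum(plan)               # placeholder fitness value

if __name__ == "__main__":
    # One generation's worth of candidate plans (700 activities each):
    population = [[i % 3 for i in range(700)] for _ in range(200)]

    # Serial evaluation of the generation:
    start = time.perf_counter()
    serial = [evaluate_plan(p) for p in population]
    t_serial = time.perf_counter() - start

    # The same generation fanned out over worker processes -- conceptually,
    # the idle office machines El-Rayes describes:
    start = time.perf_counter()
    with Pool(processes=10) as pool:
        parallel = pool.map(evaluate_plan, population)
    t_parallel = time.perf_counter() - start

    assert serial == parallel      # same results, just computed faster
    print(f"serial {t_serial:.2f}s vs parallel {t_parallel:.2f}s")
```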

In his experiments on the NCSA Tungsten cluster, El-Rayes examined the computational time required to optimize three construction projects of different sizes: 180 activities, 360 activities, and 720 activities, analyzing each on one processor and on multiple processors up to a maximum of 50. So far, he says, parallelization has succeeded in transforming the analysis of the largest project of 720 activities from an impractical problem requiring several weeks (430 hours) on a single computer into a feasible task that can be accomplished in 55 hours, over a weekend, on a network of idle office computers. “We don't even need 50 processors for this size project,” he says. “For bigger projects, we might benefit from an increase in the number of processors, but the improvement starts to level off after maybe 10 to 15 processors, which is a reasonable number for an office to have available over a weekend.”

The problem he has chosen for these computations is a hypothetical highway construction project, but he says that the optimization model would be equally applicable to other kinds of large-scale projects, such as the construction of a convention center or a bridge, which would involve a greater variety of activities than highway construction does. What all large-scale projects have in common, however, is their complexity, and that is the problem El-Rayes hopes his computations will help solve.
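The leveling-off El-Rayes describes is what Amdahl's law predicts whenever some fraction of each run (selection and recombination over the whole population, plus communication) stays serial. The sketch below assumes a 10 percent serial fraction purely for illustration; under that assumption the curve happens to echo the reported numbers, dropping 430 single-processor hours to roughly 55 hours somewhere between 25 and 50 processors, with most of the gain already captured by 10 to 15.

```python
def amdahl_speedup(n, serial_fraction):
    """Amdahl's law: speedup on n processors when a fixed fraction of the
    work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

T1 = 430.0   # reported single-processor run time, in hours
s = 0.10     # assumed serial fraction (illustrative only)

for n in (1, 5, 10, 15, 25, 50):
    speedup = amdahl_speedup(n, s)
    print(f"{n:3d} processors: {speedup:4.1f}x -> ~{T1 / speedup:5.1f} hours")
```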

“We want to transform an infeasible problem into a practical problem. That's what we're aiming for,” says El-Rayes.

Funding statement

This research is supported by the National Science Foundation.
