NCSA Allows Faster, Better, Cheaper Engineering

By Kathleen Ricker, NCSA Science Writer

April 29, 2005

You're a commuter, your daily rush-hour ordeal made even more grueling by the hassle of unexpected merging lanes, the heady essence of asphalt, and the sign-toting, orange-clad road crew ahead. Resurfacing the road again? you think. But they just did that two years ago! Is this why my taxes are so high?

When it comes to major public construction projects, it's not just the public who wants the end product to be faster, cheaper, and better. The Federal Highway Administration estimates that a staggering $94 billion will be spent on transportation infrastructure every year for the next twenty years. Not surprisingly, state and federal transportation departments want to make sure that their significant infrastructure investments are worthwhile–and they've upped the stakes. The traditional bidding process, in which the least expensive estimate wins the contract, has undergone a transformation in recent years. Cost is no longer the primary factor in determining who gets the job; now project duration (the amount of time that drivers will be negotiating the construction), quality, and durability are also important criteria.

The best of all possible worlds

Of course, tradeoffs are inevitable. An old saw in engineering and software development holds that you can't have faster, cheaper, and better–you can only have two of the three. “If you're trying to minimize the duration, you have to use overtime, and that means increasing your costs,” says Khaled El-Rayes, an assistant professor in the Department of Civil Engineering at UIUC. “If you're trying to improve quality, in many cases you have to pay more for that increase in quality.”

How to reach a comfortable tradeoff between these conflicting objectives? That's the focus of the research that El-Rayes and his research assistant Amr Kandil are currently conducting, using NCSA machines to optimize the decision-making process. El-Rayes, who received an NSF CAREER Award for optimizing construction resource utilization in transportation infrastructure systems, is developing an optimization model that can determine the best tradeoff among multiple conflicting objectives. This is no simple problem: for each task involved in a large-scale construction project, there are at least three important criteria to consider–cost, duration, and quality. Plug in different combinations of possible values for each, and you can generate a large number of permutations involving different kinds of construction, equipment, and crews, the addition or omission of overtime, an off-peak work schedule, and other possible factors. With the average infrastructure project involving 600 or 700 different activities, determining the optimal balance of duration, cost, and quality is impossibly overwhelming for a human being.

Instead, El-Rayes uses a genetic algorithm-based model that allows him to generate a large number of possible construction resource utilization plans providing a wide range of tradeoffs among project cost, duration, and quality, and to quickly eliminate the vast majority of suboptimal plans. “At the end,” he says, “what you want is a set of optimal tradeoffs which decision-makers can use to determine, according to their preferences, the best possible combination of resources.” This might mean, for example, that a longer project duration is tolerable if cost or quality is a bigger concern, or that a reduced duration is a greater priority than cost or quality.
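The article doesn't detail the model's internals, but the underlying technique can be sketched. Below is a minimal, hypothetical illustration in Python–all activity data, genetic-algorithm parameters, and scoring choices are invented for illustration, not taken from El-Rayes's model–that evolves a population of resource plans and keeps the Pareto front: the plans that no other plan beats on duration, cost, and quality simultaneously.

```python
# Minimal sketch of a genetic algorithm for time-cost-quality tradeoffs.
# All numbers below are invented; this is not El-Rayes's actual model.
import random

# Each activity can be done with one of several resource options, each
# with its own duration (days), cost ($), and quality score (0-1).
ACTIVITIES = [
    [(10, 5000, 0.90), (7, 8000, 0.85), (5, 12000, 0.80)]
    for _ in range(20)  # a toy project with 20 activities
]

def evaluate(plan):
    """Total duration, total cost, and average quality for one plan."""
    options = [ACTIVITIES[i][choice] for i, choice in enumerate(plan)]
    duration = sum(o[0] for o in options)  # activities in sequence, for simplicity
    cost = sum(o[1] for o in options)
    quality = sum(o[2] for o in options) / len(options)
    return duration, cost, quality

def dominates(a, b):
    """True if score a is at least as good as b on all three objectives
    (lower duration, lower cost, higher quality) and better on one."""
    better = (a[0] <= b[0], a[1] <= b[1], a[2] >= b[2])
    strictly = (a[0] < b[0], a[1] < b[1], a[2] > b[2])
    return all(better) and any(strictly)

def pareto_front(population):
    scored = [(p, evaluate(p)) for p in population]
    return [p for p, s in scored
            if not any(dominates(t, s) for _, t in scored if t != s)]

def evolve(generations=200, size=100):
    pop = [[random.randrange(len(a)) for a in ACTIVITIES] for _ in range(size)]
    for _ in range(generations):
        front = pareto_front(pop)
        children = []
        while len(children) < size - len(front):
            p1, p2 = random.sample(pop, 2)
            cut = random.randrange(1, len(p1))   # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = random.randrange(len(child))     # point mutation
            child[i] = random.randrange(len(ACTIVITIES[i]))
            children.append(child)
        pop = front + children  # elitism: the current front always survives
    return pareto_front(pop)

for plan in evolve()[:5]:
    print(evaluate(plan))
```

A decision-maker would then choose among the surviving plans according to preference–accepting, say, a longer duration in exchange for lower cost or higher quality.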

The advantage of this optimization model is its ability to transform the traditional two-dimensional time-cost tradeoff analysis into a three-dimensional time-cost-quality tradeoff analysis. Introducing the third dimension is challenging, particularly because quality is itself a difficult factor to quantify. “The cost is simply dollar value, and so it is easy to aggregate by adding it all up,” says El-Rayes. “Quality is more challenging.”

El-Rayes's model, which incorporates quality, is currently based on data from the Illinois Department of Transportation (IDOT), which keeps records on, for example, the kinds of construction crews used and their measured performance on quality metrics such as compressive and flexural strength for concrete pavement work. By examining this data in aggregate, El-Rayes can determine how frequently and by how much a given combination of resources exceeded IDOT-specified quality limits, allowing him to assign a quality level to that specific crew and resource combination.
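The article doesn't specify how a quality level is computed from those records, but a plausible sketch–assuming measured strengths are compared against a specified limit, with invented weights–might score how often a crew meets the spec and by how much it exceeds it:

```python
# Hypothetical sketch of assigning a 0-1 quality level to a crew/resource
# combination from inspection records; the actual IDOT data and scoring
# method are not described in the article.
def quality_level(measurements, spec_limit):
    """Score a crew by how frequently its measured strengths (e.g., concrete
    compressive strength, in psi) meet the spec limit, and by what margin."""
    passing = [m for m in measurements if m >= spec_limit]
    pass_rate = len(passing) / len(measurements)            # how frequently
    margin = (sum(passing) / len(passing) / spec_limit - 1) if passing else 0.0
    return 0.7 * pass_rate + 0.3 * min(margin, 1.0)         # invented weights

# Example: one crew's compressive-strength tests against a 4,000 psi spec.
print(quality_level([4200, 3900, 4500, 4100, 4800], spec_limit=4000))
```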

In the future, El-Rayes and his research team hope to be able to add even more factors for consideration, including safety, service disruption, and environmental impact. He would also like to make the process more user-friendly by including an interactive tool that would allow users to rank solutions based on weighting factors according to their preferences.

Optimizing the optimal

While El-Rayes's model, by automatically weeding out less-than-optimal scenarios, makes the decision-making process easier for humans, there is no getting around the fact that it is still an enormous calculation. “If we had a project that included 700 activities, an average-sized construction project,” explains El-Rayes, “and each activity had a potential 3 to 5 options–and that's conservative–it would create a solution space which is exponential to the number of activities.” It's a huge solution space, one which, El-Rayes estimates, would require around 430 hours of computation on a single processor. “Solving this problem wouldn't be feasible,” El-Rayes says. “Nobody's going to wait 430 hours for the solution.”
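A quick back-of-the-envelope calculation, using El-Rayes's own figures, shows why brute-force enumeration is hopeless:

```python
import math

# With 700 activities and 3 to 5 resource options per activity, the number
# of candidate plans is 3^700 to 5^700 -- vastly more than could ever be
# enumerated, which is why a genetic algorithm searches the space instead.
print(f"3^700 is about 10^{700 * math.log10(3):.0f}")  # roughly 10^334
print(f"5^700 is about 10^{700 * math.log10(5):.0f}")  # roughly 10^489
```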

This is where NCSA comes in. With the help of Nahil Sobh, who heads NCSA's Performance Engineering and Computational Methods group, El-Rayes and his research team are using NCSA's Tungsten cluster to explore how to parallelize the computations over a number of processors. Rather than running on a single processor in a contractor's office or a state, local, or federal transportation department, the calculations can be distributed over a number of idle office processors, drastically reducing the run time to the duration of a weekend.
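The article doesn't describe the Tungsten implementation, but because each candidate plan can be scored independently of the others, the expensive fitness evaluation in a genetic algorithm is embarrassingly parallel. A minimal sketch using Python's standard multiprocessing module, with an invented stand-in for the real duration/cost/quality computation:

```python
# Sketch of parallel fitness evaluation: each generation's population is
# split across worker processes. The evaluate() body is a cheap stand-in
# for the real duration/cost/quality computation.
from multiprocessing import Pool

def evaluate(plan):
    return sum(plan), max(plan), min(plan)  # placeholder objectives

if __name__ == "__main__":
    population = [[i % 5 for i in range(700)] for _ in range(100)]
    with Pool(processes=8) as pool:  # e.g., eight idle office machines
        scores = pool.map(evaluate, population)
    print(len(scores), "plans evaluated in parallel")
```

Each worker scores its share of the population and sends back only compact score tuples, which is what makes a loosely coupled network of office machines plausible for this workload.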

In his experiments on the NCSA Tungsten cluster, El-Rayes examined the computational time required to optimize three construction projects of different sizes–180 activities, 360 activities, and 720 activities–analyzing each on anywhere from a single processor up to 50. So far, he says, parallelization has transformed the analysis of the largest project, with 720 activities, from an impractical problem requiring several weeks (430 hours) on a single computer into a feasible task that can be accomplished in 55 hours–a weekend–on a network of idle office computers. “We don't even need 50 processors for this size project,” he says. “For bigger projects, we might benefit from an increase in the number of processors, but the improvement starts to level off after maybe 10 to 15 processors, which is a reasonable number for an office to have available over a weekend.” The problem he has chosen for these computations is a hypothetical highway construction project, but he says the optimization model would be equally applicable to other kinds of large-scale projects, such as the construction of a convention center or a bridge, which involve a wider variety of activities than highway construction does. What all large-scale projects have in common, however, is their complexity, and that is the problem El-Rayes hopes his computations will help solve.
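The leveling-off El-Rayes describes is consistent with Amdahl's law: if some fraction of each generation (selection, recombination, communication) must run serially, speedup is capped no matter how many processors are added. A small illustration–the serial fraction below is a hypothetical value chosen only to roughly echo the reported 430-to-55-hour improvement, not a measurement:

```python
# Amdahl's law: with serial fraction s, speedup on n processors is
# 1 / (s + (1 - s) / n), which flattens out as n grows.
def amdahl(n_procs, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

s = 0.08  # hypothetical 8% serial fraction, chosen for illustration
for n in (1, 5, 10, 15, 25, 50):
    print(f"{n:3d} processors: {amdahl(n, s):4.1f}x speedup")
```

With these assumed numbers, most of the gain arrives by 10 to 15 processors, matching the plateau El-Rayes reports.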

“We want to transform an infeasible problem into a practical problem. That's what we're aiming for,” says El-Rayes.

Funding statement

This research is supported by the National Science Foundation.
