AI vs. Humans: Upending the Division of Labor

By Ken Chiacchia

July 27, 2018

Despite transitional growing pains, artificial intelligence (AI) promises a future of better innovation and decision-making, with decisions made at the command of humans but not by them. That’s what Pradeep Dubey, director of the Parallel Computing Laboratory at Intel, told attendees of a plenary talk at the PEARC18 conference in Pittsburgh, Pa., on July 25.

“Humans and machines have had this very nice separation of labor,” Dubey said. “Humans make decisions; machines crunch numbers … but humans are terrible decision makers.”

Pradeep Dubey, Intel Fellow

The annual Practice and Experience in Advanced Research Computing (PEARC) conference—with the theme Seamless Creativity—stresses key objectives for those who manage, develop and use advanced research computing throughout the U.S. and the world. This year’s program offered tutorials, plenary and contributed talks, workshops, panels, poster sessions and a visualization showcase.

Progress in AI research promises to upend the current hierarchy of decision-making between brains and computers, and with it the division of labor. That prospect has been controversial among the general public. But perhaps it shouldn’t be, Dubey argued.

“Most errors in medicine are human errors; most accidents in driving are human errors,” he said. “Humans [have] had this job for way too long.”

Dubey argued that the power of deep learning offers a future in which answers are “reverse engineered” from the data rather than being generated by hypothesis testing of data—essentially taking humans out of the analytical loop.

“We can learn via brute force from the data,” he said. “We don’t need to wait on a Newton born once every 100 years.”

Today, Dubey said, the AI field has begun succeeding at task-specific learning—a deep-learning model can create an algorithm that addresses a particular problem encompassed by a learning dataset. But it can’t generalize to other problems, and it can’t adapt autonomously as a human can. One limitation is posed by training data—slight mismatches between the data used to optimize the algorithm and the real world can generate large errors in output. Applying an algorithm from a particular problem even to a closely related but different problem only increases that mismatch.
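
The training-data limitation Dubey described is, in essence, a distribution-shift problem. A minimal NumPy sketch (a hypothetical toy example, not from the talk) shows how a model that fits its training slice almost perfectly can produce large errors once the deployment data drifts even modestly away from that slice:

```python
import numpy as np

rng = np.random.default_rng(0)

# The "real world": a mildly nonlinear relationship the model never sees in full.
def world(x):
    return np.sin(x)

# Training data covers only a narrow slice of the input space.
x_train = rng.uniform(0.0, 1.0, 200)
y_train = world(x_train) + rng.normal(0.0, 0.01, x_train.size)

# A cubic polynomial fits that slice very well...
coeffs = np.polyfit(x_train, y_train, deg=3)

def predict(x):
    return np.polyval(coeffs, x)

def rmse(x):
    return np.sqrt(np.mean((predict(x) - world(x)) ** 2))

# ...but a modest shift in the deployment data inflates the error sharply.
x_in_distribution = rng.uniform(0.0, 1.0, 1000)
x_shifted = rng.uniform(1.0, 2.0, 1000)

print(f"RMSE on training-like data: {rmse(x_in_distribution):.4f}")
print(f"RMSE after a modest shift:  {rmse(x_shifted):.4f}")
```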

“How do you generalize from there?” he said. “That’s the next Holy Grail.”

Dubey reviewed Intel Labs’ work on neuroscience, scalable algorithms for learning and decision-making, and the development of an end-to-end pipeline from data to developing solutions for new problems.

Pervasive model deployment, in which feedback continually re-trains the algorithm, is a major focus. So is computing architecture, including quantum computing, which could allow better scaling than is possible with the classic von Neumann architecture.
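
As a rough sketch of what such a feedback loop can look like (a hypothetical online-learning toy, not Intel's pipeline), each deployed prediction is later paired with an observed outcome, and that feedback continually updates the model's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

w = np.zeros(3)        # deployed model parameters (a simple linear model for illustration)
lr = 0.05              # learning rate for feedback-driven updates
true_w = np.array([0.5, -1.0, 2.0])   # the hidden relationship the model must track

def predict(features):
    return features @ w

# Simulated deployment: predict, observe the real outcome, re-train on the feedback.
for step in range(2000):
    features = rng.normal(size=3)
    outcome = features @ true_w + rng.normal(0.0, 0.1)  # observed after the fact
    error = predict(features) - outcome
    w -= lr * error * features                          # continual re-training step

print("learned parameters:", np.round(w, 2))
print("target parameters: ", true_w)
```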

In theory, Dubey said, “we should be able to encode [deep learning] in a quantum machine in a single shot,” with all the layers of the learning process superimposed on one quantum waveform. The fundamental problem, he argued, is that quantum phenomena are linear phenomena, while the layers in a deep-learning network are connected by nonlinear relationships. “Otherwise, it could collapse into one layer.”
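
Dubey's point about nonlinearity reflects a standard observation: a stack of purely linear layers is mathematically equivalent to a single linear layer. The short NumPy sketch below (an illustration, not Intel's code) makes that "collapse into one layer" concrete and shows how an elementwise nonlinearity between layers breaks the equivalence:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=8)            # an input vector
W1 = rng.normal(size=(16, 8))     # weights of the first linear "layer"
W2 = rng.normal(size=(4, 16))     # weights of the second linear "layer"

# Two linear layers with no activation in between...
two_linear_layers = W2 @ (W1 @ x)
# ...are exactly one linear layer whose weight matrix is the product W2 @ W1.
one_collapsed_layer = (W2 @ W1) @ x
print(np.allclose(two_linear_layers, one_collapsed_layer))   # True

# An elementwise nonlinearity (here ReLU) between the layers breaks the collapse,
# which is what gives a deep network more expressive power than a single layer.
def relu(z):
    return np.maximum(z, 0.0)

nonlinear_stack = W2 @ relu(W1 @ x)
print(np.allclose(nonlinear_stack, one_collapsed_layer))      # generally False
```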

Another focus at Intel, he said, is to “hack the chip” of the brain using fMRI brain scans. Yet another is to derive physical constants directly from the data rather than from the classical hypothesis/experimentation loop. At the SC18 HPC conference in Dallas in November, he added, Intel Labs researchers will present the derivation of cosmological constants directly from simulations, including a measure of the curvature of the Universe five times more precise than those derived from scientific testing.

One very difficult problem for AI will be code generation, also being studied at Intel: “using AI for AI development.” AIs are attractive in this sphere because AI systems working from different standards or conventions wouldn’t need to be told about those differences; their algorithms would adjust.

Ken Chiacchia is a Senior Science Writer with the Pittsburgh Supercomputing Center.
