CCC Offers Draft 20-Year AI Roadmap; Seeks Comments

By John Russell

May 14, 2019

Artificial Intelligence in all its guises has captured much of the conversation in HPC and general computing today. The White House, DARPA, IARPA, and Department of Energy all have issued strategies or undertaken programs intended to foster AI development and use. Yesterday, the Computing Community Consortium (CCC) weighed in with a 100-plus page draft report – A 20-Year Community Roadmap for Artificial Intelligence Research in the US – and CCC is seeking comment on its concepts and recommendations.

The CCC, of course, is a body formed to “define the large-scale infrastructure needs of the computing research community” that was created in response to a National Science Foundation (NSF) solicitation in 2006. In turn, the CCC is part of the Computing Research Association (CRA) founded in 1972 and encompassing academia, industry, and government; its proposals, among other things, help inform NSF activities and federal computing priorities.

As noted on the CCC website, “The CCC Council meets three times every calendar year, including at least one meeting in Washington, D.C., and has biweekly conference calls between these meetings. Also, the CCC leadership has biweekly conference calls with the leadership of NSF’s Directorate for Computer and Information Science and Engineering (CISE).”

CCC began work on the new AI roadmap last fall, held three workshops and a ‘Town Hall’ meeting in 2019, and yesterday issued a blog post calling for comment on the roadmap. Comments are due by May 28, 2019.

Honestly, parsing such a large document is best done by reading it directly; CCC has packed its AI roadmap with all manner of observation and suggestion. Here are its major recommendations, excerpted from the blog:

I – Create and Operate a National AI Infrastructure to serve academia, industry, and government through four interlocking capabilities:

a) Open AI platforms and resources: a vast interlinked distributed collection of “AI-ready” resources (curated high-quality datasets, software libraries, knowledge repositories, instrumented homes and hospitals, robotics environments, cloud-scale computing services, etc.) contributed by and available to the academic research community, as well as to industry and government. Recent major innovations from companies demonstrate that AI breakthroughs require large-scale hardware investments and open-source software infrastructures, both of which require substantial ongoing investments.

b) Sustained community-driven AI challenges: organizational structures that coordinate the formulation of grand-challenge problems by AI and domain experts to drive research in key areas, building upon—and adding to—the shared resources in the Open AI Platforms and Facilities.

c) National AI Research Centers: physical and virtual facilities that bring together Faculty Fellows from a range of academic institutions and Industry Fellows from industry and government in multi-year funded projects focused on pivotal areas of long-term AI research.

d) Mission-Driven AI Laboratories: living laboratories that provide sustained infrastructure, facilities, and human resources to support the Open AI Platforms and the AI Challenges, and work closely with the National AI Research Centers to integrate results to address critical AI challenges in vertical sectors of public interest such as health, education, policy, ethics, and science.

II – Re-conceptualize and Train an All-Encompassing AI Workforce, building upon the elements of the National AI Infrastructure listed above to create:

a) Development of AI Curricula at All Levels: guidelines should be developed for curricula that encourage early and ongoing interest in and understanding of AI, beginning in K-12 and extending through graduate courses and professional programs.

b) Recruitment and Retention Programs for Advanced AI Degrees: including grants for talented students to obtain advanced graduate degrees, retention programs for doctoral-level researchers, and additional resources to support and enfranchise AI teaching faculty.

c) Engaging Underrepresented and Underprivileged Groups: programs to bring the best talent into the AI research effort.

d) Incentivizing Emerging Interdisciplinary AI Areas: initiatives to encourage students and the research community to work in interdisciplinary AI studies—e.g., AI-related policy and law, AI safety engineering, as well as analysis of the impact of AI on society—will ensure a workforce and a research ecosystem that understands the full context for AI solutions.

e) Training Highly Skilled AI Engineers and Technicians, to support and build upon the Open AI Platform to grow the AI pipeline through community colleges, workforce retraining programs, certificate programs, and online degrees.

III – Core Programs for AI Research are critical. These new resources and initiatives cannot come at the expense of existing programs for funding theoretical and applied AI. These core programs—which provide well-established, broad-based support for research progress, for training young researchers, for integrating AI research and education, and for nucleating novel interdisciplinary collaborations—are critical complements to the broader initiatives described in this Roadmap, and they too will require expanded support.

As you can see, there’s a lot here and that includes calling for increased spending in the context of a global race for AI. The report declares, “U.S. leadership in AI is at risk without significant, strategic investments, new models for infrastructure and resources, and attention to the education and training pipeline. Other major industrialized countries are already embarking on substantial AI research programs.

  • The EU has announced funding of 20B Euros for AI and is currently evaluating proposals for decade-long 1B Euro science projects, one of them in the area of AI assistants. Germany and France have allocated 3B and 1.5B Euros to AI, respectively. The UK has pledged an investment of 1B Pounds in AI, together with dedicated funding for 1,000 PhDs and 8,000 specialized teachers in AI, and has repurposed its flagship Turing Institutes into major data-driven AI research centers.
  • China has announced that it will invest billions in AI over the next five years, creating at least four $50M/year AI Centers and a $1B/year National AI Research laboratory with thousands of AI researchers and engineers, and committing to training 500 instructors and 5,000 students at major universities…”

These efforts, argues CCC, “are in line with major U.S. research investments in the past, such as the LIGO project ($1.1B), the Human Genome project ($2.7B), and the Apollo program ($144B), all of which not only led to major scientific advances, but also produced significant economic and societal benefits.”

The recommended national Pivot AI Research Centers (PAIRCs), intended to create unique and stable environments for large multi-disciplinary teams devoted to long-term AI research, aren’t cheap. The report says, “Each PAIRC would be funded in the range of $100M/year for at least 10 years. With this level of funding, a PAIRC would be able to support an ecosystem of roughly 100 full-time faculty (in AI and other relevant disciplines), 50 visiting fellows (faculty and industry), 200 AI engineers, and 500 students (graduate and undergraduate), and sufficient computing and infrastructure support.”

By way of analogy, the report notes, “There are a few examples of AI research centers that have long-term funding. The University of Maryland’s Center for Advanced Study of Language (CASL), founded in 2003 as a DoD-sponsored University Affiliated Research Center (UARC) funded by the National Security Agency, includes about 60 researchers and 70 visitors from academia and industry focused on natural language research with a defense focus.”

How the report translates into NSF or further US-funded AI activities, of course, remains to be seen.

Link to CCC blog: https://www.cccblog.org/2019/05/13/request-comments-on-draft-a-20-year-community-roadmap-for-ai-research-in-the-us/

Link to draft AI Roadmap: https://cra.org/ccc/wp-content/uploads/sites/2/2019/05/AIRoadmapDraftforCommunityMay2019.pdf

Link to comment form: https://computingresearch.wufoo.com/forms/s15u6ssf15mvnlg/

