The intersection of HPC and AI is creating a vibrant new market: “High Performance Artificial Intelligence” (HPAI) that is fueling the growth of AI platforms and products.
After decades of slow progress, HPC has given AI the boost it needed to be taken seriously. Enabled by supercomputing technologies, HPC-powered techniques such as deep learning are transforming AI, making it practical for many new use cases.
The necessary ingredients:
- Big data, generated by digitized processes, sensors, and instruments
- Massive computational power, often in the form of cloud computing, and
- Economically attractive use cases
are coming together to create a new breed of “Thinking Machines” that can automate complex tasks and decision processes, augmenting or replacing mechanical and electrical machines and people.
The intersection of HPC and AI is showing that cognition can be computable in a practical way (see, for example, the 1978 paper “Computability and Cognition”). It represents a blend of logic processing with numerically intensive computation. It is an area of intense activity in academic, commercial, industrial, and government settings. HPAI combines HPC (numerically intensive statistical analysis and optimization) with traditional AI (search algorithms and expert systems) to profoundly impact the IT industry and customer investment priorities, to influence every aspect of human life, and to pose its own grand challenges.
HPAI techniques, technology drivers and core technologies, characteristics, practical applications, and future directions are all important topics. Here, we focus on the future of HPAI.
The Future of HPAI
AI has been evolving for decades. Initial inference-based expert systems laid the foundation and taught us how to formulate and solve AI problems. With deep learning and HPC technologies, AI is taking an evolutionary leap into a new phase.
HPAI will include the following challenges and advances:
More Sophisticated Algorithms

Current algorithms make simplifying assumptions that will be relaxed in the future. In addition to the depth and breadth of layers, there will be cross-links connecting various layers, and dynamically created mini-layers, to provide more flexibility for deep neural networks. Furthermore, while current algorithms iteratively approach an optimum set of parameters, future algorithms will pursue many paths in parallel.
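The cross-links between layers described above can be pictured as skip connections, where a later layer receives not only its predecessor's output but also the raw input directly. A minimal NumPy sketch, with all layer sizes, weight names, and values as illustrative assumptions rather than a prescribed architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, w1, w2, w_skip):
    """Two-layer network with a cross-link (skip connection):
    the input also feeds the second layer directly, bypassing layer 1."""
    h1 = relu(w1 @ x)                # first hidden layer
    h2 = relu(w2 @ h1 + w_skip @ x)  # second layer also sees the raw input
    return h2

rng = np.random.default_rng(0)
x = rng.normal(size=4)
w1 = rng.normal(size=(8, 4))
w2 = rng.normal(size=(3, 8))
w_skip = rng.normal(size=(3, 4))  # weights of the cross-link itself
y = forward(x, w1, w2, w_skip)
```

Such cross-layer paths give information (and, during training, gradients) a shorter route through the network, which is one reason they add flexibility to deep neural networks.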
More Realistic Neurons
Current implementations of neuron models are simplistic, with S-curve-like or other simple transfer functions. Real-world neurons have much richer connectivity and often exhibit very spiky signaling behavior; the frequency of spikes can itself transmit information. Future neural nets will incorporate such additional complexity for higher accuracy, or to achieve similar results with fewer neurons in the model. Computational complexity will increase, however.
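As a concrete illustration of spiky signaling, a leaky integrate-and-fire model (a standard simplification from computational neuroscience, not a formulation from this article) shows how spike frequency can encode input strength. All parameter values below are illustrative:

```python
import numpy as np

def lif_spikes(current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    toward rest while integrating input current, and a spike is emitted
    whenever it crosses the threshold. Returns a 0/1 spike train."""
    v = 0.0
    spikes = np.zeros(len(current), dtype=int)
    for t, i_in in enumerate(current):
        v += dt * (-v / tau + i_in)  # leaky integration step
        if v >= v_thresh:            # threshold crossing -> spike
            spikes[t] = 1
            v = v_reset              # reset after spiking
    return spikes

# Rate coding: a stronger constant input drives a higher spike frequency,
# so the spike rate itself carries information about the input.
weak = lif_spikes(np.full(200, 0.06))
strong = lif_spikes(np.full(200, 0.12))
```

Here `strong` produces more spikes than `weak` over the same window, illustrating how frequency, not just amplitude, can transmit information.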
New Hardware Architectures

Deep learning is already accelerating new system architectures and component technologies. We expect a period of blossoming innovation across the board: accelerator technologies, new types of CPUs specifically optimized for new workloads, new data storage and processing models such as In-Situ Processing, and entirely novel approaches such as Quantum Computing. These will all evolve rapidly in the coming years.
New Interfaces

Natural language processing, augmented and virtual reality, haptic and gesture systems, and brain-wave analysis are examples of new forms of interaction between humans and information machines.
Synergy with IoT and HPC
HPAI relies on large bodies of data, which are often generated by sensors and edge devices. Depending on the use case, this data can feed cognitive processing. At the same time, the quest for higher accuracy across an ever-wider range of situations will continue to demand supercomputing-class resources, justifying the designation HPAI.
Smart and Autonomous Devices
Because learning can be separated from practice, and practice can be computationally cheap, a proliferation of smart devices can be expected. This trend is already visible but will expand to entirely new classes of devices. Edge devices, wearables, artificial limbs and exoskeletons, and near-permanent attachments such as smart contact lenses are examples.
Robots

A special class of autonomous devices, robots aim to mimic humans and animals. As such, they will not only perform tasks better than humans but also perform tasks that humans are unable to perform. They will also become increasingly social, and Turing tests will be passed. Humans are social animals and can easily develop emotional bonds with robots.
Cyborgs

Cyborgs represent the ultimate integration of technology and humans into a single cognitive being. Cyborg technologies will become a permanent part of their human hosts.
Challenges and Grand Challenges
HPAI can help solve existing grand challenge problems by better integrating theory, simulation, and experiment, but it will create new grand challenges that span multiple disciplines.
HPAI shows that sufficiently complex sets of equations can make cognition computable. But that same complexity makes them unpredictable.
Consequences of AI systems are not always adequately or widely understood, and advanced applications of AI can be a monumental case of unintended consequences. In short, system complexity can easily exceed human competence.
Like any advanced tool, AI can be used for good or evil. Most often, it is straightforward to tell whether an application of a technology is good or bad for its users or for society. With AI, this is not always so simple.
Current anxieties about AI include the imminent elimination of large classes of jobs by AI systems. Future concerns are about humans making a so-called Darwinian mistake: creating something that will threaten the survival of its creators.
Counterarguments point to the still-primitive nature of AI systems in terms of the breadth of their capabilities and their grasp of the more nuanced aspects of human intelligence.
An ethical framework, similar to the one Asimov proposed for robots, would allow a more structured discussion. Ethical concerns about AI are valid, even if they temper the adoption of AI technologies, and they call for formal efforts to study the ethical implications of AI.
Arguably more consequential than further technological advances, and in light of AI's ethical complexities, the legal dimension poses significant challenges: legal systems will require new norms and legislation. We expect progress in this area to lag actual deployments of the technology and to be more reactive than proactive.
Autonomy will be limited by the precise definition of the tasks that are automated, the environment (exact boundaries) in which they operate, and tolerance for mistakes.
Of course, for some tasks, machines do not have to be perfect, merely better than humans, or more practically, better than the specific human responsible for a task at a given time and place. In such cases, mistakes will be made. Being at peace with a mistake made by a machine may or may not be easier than being at peace with one made by a human. Society is far from accepting mistakes made by machines to the degree it accepts human error.
Fully autonomous systems are far from imminent.
The intersection of HPC and AI has created the HPAI market, a vibrant and rapidly growing segment with far-reaching implications not just for the IT industry but for humanity as a whole.
Driven by digitization and the dawn of the Information Age, HPAI relies on the presence of large bodies of data, advanced mathematical algorithms, and high performance hardware and software.
Just as industrial machines ushered in a new phase in human history, new “information machines” will have a profound impact on every aspect of life. Like industrial machines, information machines can help when the scope of their activity is fully defined.
If it can be defined, it can be automated. Whether, or how well, it can be defined is the crux of the matter. Can we successfully program in Asimov’s three laws?