Thinking Forward: Conrad Wolfram on the Computational Knowledge Economy

By Daniel Araya

January 12, 2012

Conrad Wolfram founded Wolfram Research Europe in 1991 and serves as its managing director. He is also the strategic director of US-based Wolfram Research, which is run by his older brother, Stephen Wolfram. As such, Conrad is intimately involved in developing the company’s flagship product, Mathematica, as well as other Wolfram technologies, like CDF Player and webMathematica, the web framework that underlies Wolfram|Alpha. In an interview for HPCwire, conducted by Daniel Araya of the Institute for Computing in the Humanities, Arts and Social Sciences (I-CHASS), Conrad describes his thoughts on the evolving knowledge economy, math and science education, and the application of computational science to the arts and humanities.

Daniel Araya: As the founder and managing director of Wolfram Research Europe, and strategic and international director of Wolfram Research, how would you describe your overall research interests?

Conrad Wolfram: Short answer: applying computation everywhere. That’s why we’ve taken to describing Wolfram as the company where “computation meets knowledge,” which I think encapsulates our objectives of pushing the envelope of doing, deploying and democratizing computation, including applying it to knowledge. Inventing new levels of automation and usability is as critical a part of achieving those aims as continuing to step up raw computational power. Computation is such a powerful concept, and unleashing it with modern high-performance computing multiplies that power. It’s particularly exciting to see at the moment how broadly applicable our technology has become.

Araya: You’ve been a particularly strong proponent of math education reform through greater use of technology. What kinds of new affordances do you think technology makes possible for learning and education?

Wolfram: Clearly technology introduces new modalities of learning for all subjects — be they video, interactivity or geographical independence. Though it’s only just begun, individualized learning that enables students to discover at their own pace and at least to some extent set their own learning paths is clearly crucial too.

But here’s why math is different. Unlike, say, history, math outside education has fundamentally changed over the last few decades because computers have liberated it from what’s typically the limiting step: hand-calculating. We live in a far more mathematical world than we did precisely because computers now do the calculating.

But in education that transformation hasn’t happened yet. Around the world, almost all students still learn traditional hand-calculating, not computer-based math. Sometimes it’s “computer-assisted,” that is, applying some of the new modalities to the traditional subject. That’s holding them and their countries back from more creative, conceptual math. Indeed, a larger and larger chasm is opening up between math for the real world and math in education. Technology isn’t an optional extra for math; it’s fundamental to the mainstream subject of today.
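To make the distinction concrete, here is a minimal sketch of what “computers doing the calculating” looks like in practice. It uses Python with the SymPy library purely as an illustration (Mathematica plays this role in the Wolfram toolchain), and the physics problem is a made-up example: the student’s work is framing the problem and interpreting the answer, while the mechanical solving step is automated.

```python
# A minimal sketch of computer-based math: the student frames the
# problem, the machine handles the mechanical calculation.
# (Python/SymPy used purely for illustration; the problem is made up.)
from sympy import symbols, Eq, solve

t = symbols("t", positive=True)

# Framing: a ball is thrown upward at 20 m/s from a height of 2 m.
# When does it land?  Model: h(t) = 2 + 20*t - 4.9*t**2 = 0.
height = Eq(2 + 20 * t - 4.9 * t**2, 0)

# The hand-calculating step (solving a quadratic) is automated,
# freeing attention for setup and interpretation of the result.
print(solve(height, t))  # ~4.18 seconds, the positive root
```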

Araya: You suggest that computers could potentially support a shift from a knowledge-based economy to a “computational knowledge economy.” Could you explain what you mean by this?

Wolfram: There are various definitions of “a knowledge economy,” but I think of it as one in which the majority of economic activity is based on knowledge rather than manual labor. But now the value chain of knowledge is shifting. The question is not whether you have knowledge but whether you know how to compute new knowledge from it, almost always applying computing power to help.

In the most developed economies of the future, the majority of economic activity will be innovating: computing or generating new knowledge and applying it, not just deploying existing knowledge. It’s in this sense that I believe we’re heading for a “computational knowledge economy.”

Araya: How do you envision expanding the capacities of computational technologies like Wolfram|Alpha and Mathematica in the decades to come?

Wolfram: I can’t claim to foresee that far ahead. But I can say what’s guiding us: both our key principles and the picture we see outside. Computation has come of age and is now used by everyone, either explicitly or implicitly. Our job is to drive adoption of computation by engineering ever greater abilities. Those abilities may be raw computational power or the power of automation to refine or dramatically re-engineer workflows in every field and at every level of computational endeavor.

In a sense, Wolfram|Alpha was just such a re-engineering: injecting computation into knowledge. Our technology, both released and in the pipeline, is really strong right now and is proving what we’ve said for years. But it’s taken until now to build up to what it is today: a single coherent, integrated platform that delivers dramatically more power, usability and reliability.

And our rate of development is so much greater than before because we’re using Mathematica technology as our prime development environment. In fact, Wolfram|Alpha’s development and HPC deployment are only practically possible because of the technology tower we’ve built up over more than 20 years. Not everyone checks out Wolfram technologies for what they’re building or analyzing, but I think that, from the Wolfram language to our Workbench IDE to CDF and web deployment, we have a very compelling offering.

Araya: Mathematica has been at the forefront of high-performance computing with gridMathematica and now GPU computing. Can you comment on the current state of HPC?

Wolfram: I think it’s where personal computing was with the Apple I back in the late seventies: the base components are there, but not the workflows and automation, and therefore not the eventual ubiquity of use. For that reason I like to re-characterize the “P” in HPC as productivity, not only raw performance. I’m excited to see a more rational online-offline hybridization emerging, where we’re marrying the best characteristics of the cloud and of local computation.
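As a rough illustration of that hybridization, here is a sketch of the pattern in Python. This is not Wolfram’s implementation; the endpoint and service are hypothetical. The idea is simply to prefer a cloud computation service when it is reachable and fall back to computing locally when it is not.

```python
# Sketch of the online-offline hybrid pattern: use the cloud when
# reachable, compute locally otherwise.  The endpoint is hypothetical.
import json
import urllib.request

CLOUD_ENDPOINT = "https://compute.example.com/api/integrate"  # hypothetical

def integrate_remotely(expression: str) -> float:
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps({"expr": expression}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=2) as response:
        return json.load(response)["result"]

def integrate_locally(expression: str, n: int = 10_000) -> float:
    # Crude local fallback: trapezoidal rule over [0, 1].
    f = lambda x: eval(expression, {"x": x})
    h = 1.0 / n
    return h * (0.5 * (f(0.0) + f(1.0)) + sum(f(i * h) for i in range(1, n)))

def integrate(expression: str) -> float:
    try:
        return integrate_remotely(expression)  # best case: cloud computes it
    except OSError:
        return integrate_locally(expression)   # offline: compute locally

print(integrate("x**2"))  # ~0.333 either way
```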

Araya: What in your view are the implications of using large-scale computing as a research platform? Does this make interdisciplinary models of research and learning more likely?

Wolfram: Really, interdisciplinary work is not new for innovators; it’s just more talked about at the moment. I think it does have an interesting intersection with HPC, because some of the most dramatic technique improvements span many subjects and require HPC. Large-scale data science and image processing are examples, and areas we’re very engaged in at Wolfram.

In the past these techniques only got applied where major funding and technique experts were available, e.g., for weather forecasting. Now they’re being applied across many fields, including delivering results directly to consumers through the cloud.

Automation is key to interdisciplinary success. Users know their fields but rarely the methods or techniques they want to apply. This is a key area where the computing environment needs to provide the delivery intelligence, not just raw computational power. Our technology map fits ideally with this approach.
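Here is a sketch of what that “delivery intelligence” can look like in practice. It isn’t Wolfram’s mechanism; it uses scikit-learn in Python purely to illustrate the pattern: the user supplies data and a goal, and the environment tries several candidate methods and selects one automatically.

```python
# Sketch of "delivery intelligence": the user supplies data and a goal;
# the environment tries several methods and picks the best automatically.
# (scikit-learn used here purely as an illustration of the pattern.)
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=100),
    "k-nearest neighbors": KNeighborsClassifier(),
}

# The user never chooses the algorithm; the environment evaluates each
# candidate by cross-validation and reports the best-performing one.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"auto-selected method: {best} ({scores[best]:.3f} accuracy)")
```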

Araya: Some suggest that technology is replacing workers so quickly that we have essentially entered an age of automated labor. How do you envision changes to society and the economy over the coming decades?

Wolfram: Technology enables us to stand on progressively higher levels of automation. For example, technology automated farm labor, allowing people to work in factories; it automated factories, allowing them to do knowledge-processing work; and now we’re starting to automate knowledge processing so that we can take on the higher task of creating knowledge. History shows that, rather than replacing workers (in the aggregate long term, though not always in the short run or in a particular locale), this increases our appetite for improvements.

Creativity is clearly at the center of more and more future jobs, particularly the most lucrative. But so are logical thinking and experience. What’s falling away is rote knowledge of base facts. Yes, we need some of that, but more important is knowledge of how to work things out: how to get machines to do stuff for us rather than necessarily commanding other humans to do it or doing it ourselves. For example, programming is a crucial yet hardly educated-for skill today.

Araya: We know that digital technologies are having a huge impact on the hard sciences. Do you see computers having an equal impact on the arts and humanities?

Wolfram: As I mentioned before, computers have fundamentally changed the subject of math and therefore dramatically extended the practical scope of those “hard” sciences that for centuries have been very math-based. But this effect of computers has extended far further, introducing computation to a wide range of new, previously non-computational fields.

Since we first launched Mathematica nearly 25 years ago, it’s been amazing to see how much more analytical and computational virtually every field has become, including many in the arts and humanities. I always find it instructive to look at our demonstrations.wolfram.com project to remind myself how many fields people have submitted examples from. Sure, there are more in traditionally mathematical subjects, but the scope is broad.

I think a key driver for turning fields computational is the huge range of practically deployable computational approaches not available to previous generations, for example large-scale data analysis or image processing. Often, the problems in the humanities are harder to apply these to, needing modern HPC to get real results. So while computers may not have fundamentally changed ancient arts and humanities subjects the way they have math, the application of math and computing is starting to change almost every field, though there’s a lot further to travel to ubiquitous computationalization.
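For a flavor of how such techniques land in the humanities, here is a minimal sketch of a common digital-humanities preprocessing step: separating ink from background in a scanned manuscript page. Pillow and NumPy are used purely for illustration, and the file name is hypothetical.

```python
# Sketch of a now-commonplace computational step in the humanities:
# isolating ink from background in a scanned manuscript page by simple
# global thresholding.  (Pillow/NumPy for illustration; the file name
# is hypothetical.)
import numpy as np
from PIL import Image

# Load the scan and convert to grayscale intensities in [0, 255].
page = np.asarray(Image.open("manuscript_page.png").convert("L"))

# Simplest possible rule: pixels darker than the page's mean intensity
# are treated as ink.  Real pipelines use adaptive thresholds.
ink_mask = page < page.mean()

print(f"estimated ink coverage: {ink_mask.mean():.1%} of the page")
Image.fromarray((~ink_mask * 255).astype(np.uint8)).save("binarized.png")
```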

Araya: Technology has become fundamental to an age in which digital networks serve as platforms for creativity and the imagination. How do you understand this changing milieu?

Wolfram: Every age has its own platforms for expression, whether papyrus, paper or the web. Each has offered a richer, more democratized canvas than the last. I’d argue that what’s different now is not just the platform of digital networks but the rate of change of that platform. Whereas a milieu might once have lasted a generation or much more, I now enhance my canvas and reconceptualize my workflows every few years.

Speaking of the web, I think Web 2.0 has been about users generating the content, for example in social networking. I’d argue Web 3.0 is about computers generating new, derivative content, either from Web 2.0-style user-generated content or from base information. Wolfram|Alpha is one manifestation of this direction, using a computational process to compute a custom answer specific to each question.
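A toy sketch of that “computed, not retrieved” idea follows. It is nothing like Wolfram|Alpha’s actual implementation, and the figures are illustrative placeholders; the point is simply that the answer is never stored verbatim but derived from base facts at query time.

```python
# Toy illustration of computed, derivative content: store base facts
# once, derive each answer on demand.  (Figures are illustrative
# placeholders, not authoritative data.)
BASE_FACTS = {
    "france":  {"population": 65_000_000, "area_km2": 551_695},
    "germany": {"population": 81_000_000, "area_km2": 357_022},
}

def answer(query: str) -> str:
    # The answer is nowhere stored verbatim: population density is
    # computed from base facts at query time.
    country = query.removeprefix("population density of ").strip()
    facts = BASE_FACTS[country]
    density = facts["population"] / facts["area_km2"]
    return f"{country.title()}: about {density:.0f} people per km^2"

print(answer("population density of france"))
```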

Araya: It has become commonplace to suggest that Web-based technologies are driving a unique democratic shift across a wide array of technological, political, and social spaces. What do you think of this?

Wolfram: There’s no question we’re living through the fundamental shift that close-to-ubiquitous information has provided. And we’re not done yet. In fact, I think we’re in a curious transition where, for the first time, the underlying information is out there, but sheer information overload is obscuring much of its real worth. For example, there’s a big gap between publishing government data and democratizing its practical use for the average citizen. As I’ve argued, I think the computational approach can increasingly bridge this divide with customized, pre-processed results.
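As a sketch of what that bridging can look like, consider reducing a raw open-data table to the single pre-processed figure a resident actually wants. The example uses pandas in Python; the file and column names are hypothetical.

```python
# Sketch: from published raw data to a citizen-friendly answer.
# (pandas for illustration; file and column names are hypothetical.)
import pandas as pd

# Raw published data: one row per school per year, straight off a portal.
spending = pd.read_csv("district_school_spending.csv")

# Customized, pre-processed result: per-pupil spending trend for the
# reader's own district, one understandable number per year.
mine = spending[spending["district"] == "Springfield"]
totals = mine.groupby("year").agg(
    spending=("spending", "sum"), pupils=("pupils", "sum")
)
print((totals["spending"] / totals["pupils"]).round(0))
```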

As with all technical advances, new problems arise, for example around security and privacy: whom to trust, and how, among the information holders that these changes have newly placed in positions of power. But history tells us that over time society will gain experience with today’s dangers and find ways to mitigate them.

-----

About the author

Daniel Araya is a Research Fellow in Learning and Innovation with the Institute for Computing in the Humanities, Arts and Social Sciences (I-CHASS) at the National Center for Supercomputing Applications (NCSA). His research focuses on the impact of digital technologies and economic globalization on learning and education. He has worked with the Wikimedia Foundation and the Kineo Group in Chicago. In 2011, he received the Hardie Dissertation Award and was selected for the HASTAC Scholars Fellowship. He is currently the co-editor of the Journal of Global Studies in Education. His newest books include The New Educational Development Paradigm (2012, Peter Lang), Higher Education in the Global Age (2012, Routledge) and Education in the Creative Economy (2010, Peter Lang).
