Talk Xanga: Capturing Gen-Z’s Computational Imagination

By Krishna P.C. Madhavan, Sebastien Goasguen, Gary R. Bertoline

September 16, 2005

Rosen Center for Advanced Computing, Purdue University

In a recent HPCwire article entitled “New Directions for Computational Science Education,” the authors claim that computational science education in the United States is broken. We agree wholeheartedly with this assessment.

A more important point, however, is that it is not just HPC education that is broken; there is a dire need to think in terms of a cyberinfrastructure-enabled educational science, in which information technology provides the foundation on which pedagogy is built. If cyberinfrastructure is poised to change the way discoveries happen in science, technology and engineering, as well as in the social and behavioral sciences, the educational system as a whole needs to reflect this urgency.

The overt presence of cyberinfrastructure in every aspect of our lives — from HDTV at home to high-speed cellphone networks — throws into sharp relief the need to rethink education from the ground up in light of recent advances in the cyberinfrastructure (CI) sciences.

The truth is that cyberinfrastructure is everywhere, an overt market force that we educators have yet to completely understand. No pedagogical solution will be successful if it treats the CI sciences as a static entity, hidden and waiting to be uncovered, rather than as an area of science that is evolving to meet the grand challenges of the next century.

While the article cited above gives partial consideration to the changing landscape of the high performance computing world, there is a real need to tailor our arguments to the modern teenager — the aptly named Gen-Z, or the “CI generation.” The challenge is to teach our students the form of computational science that will help them enter modern cross-disciplinary fields such as nanotechnology, genomics and biomedical engineering. While HPC is partly about understanding MPI and speed-up, it is now — in this new CI world — a tool that will let scientists solve the next generation of scientific grand challenges, including ones in computing itself.
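For readers who have never written message-passing code, the sketch below suggests what “understanding MPI and speed-up” amounts to in practice. It is our own minimal illustration, not drawn from the article above or from any curriculum it discusses: each MPI rank sums one slice of the integers from 1 to N, and a reduction combines the partial results.

```c
/* Minimal MPI sketch (illustrative only): parallel sum of 1..N.
 * Each rank handles a contiguous slice; MPI_Reduce combines results. */
#include <mpi.h>
#include <stdio.h>

#define N 1000000L

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Divide 1..N into one contiguous slice per rank;
       the last rank absorbs any remainder. */
    long long chunk = N / size;
    long long lo = rank * chunk + 1;
    long long hi = (rank == size - 1) ? N : lo + chunk - 1;

    double t0 = MPI_Wtime();

    long long local = 0;
    for (long long i = lo; i <= hi; i++)
        local += i;

    long long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("sum = %lld with %d ranks in %.6f s\n", total, size, t1 - t0);

    MPI_Finalize();
    return 0;
}
```

Running the same binary with one rank and then with several (for example, mpirun -np 1 versus mpirun -np 4) and dividing the two times gives the speed-up; discovering why it falls short of the rank count is the classic first lesson of HPC.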

The problem is not that teenagers lack access to powerful computers or are unaware of their complexity; rather, it is that most students prefer to use their powerful desktop computers to play online games, chat with friends, or share audio and video with friends and strangers around the world. A teenager today is no stranger to superior computational power. They use gaming devices that have significant computing capacity; they carry 60 GB hard drives that store songs, pictures, calendars and pretty much everything else they need. To this impressive technology arsenal they add a variety of all-in-one, trendy cellphones that seem to have an unending list of capabilities. They create social networks and “collaborate” with students all over the world on common tasks.

The current generation of students, and increasingly the next, will be completely comfortable playing a critical role in establishing new economic models that will shrink the world even further. On campuses and in high schools, today's students are increasingly driving home the message that we need not only to deploy a robust cyberinfrastructure but also to revise existing teaching methods to increase the focus on students.

They use words like “Xanga,” sentences like “let's 360 at my Xanga,” and fuel a multi-billion dollar economy by trading “e-props” online. The problem is that we, the educators, are left dumbfounded by what these words and concepts mean. Now think about the teacher in a high school or the faculty member at a university; this is the generation of students they must entice into careers in science, technology, engineering and mathematics. Given the fast-moving nature of high performance computing, we need to look systematically and carefully to the future of cyberinfrastructure — simply because the fundamental fabric of high performance computing is changing, and we cannot pretend that preserving the status quo will do the trick.

Gen-Z students are quick to adopt new technologies; they are faster, smarter and more in tune with the new CI world because they are being raised in it. They understand CI better than the generations of students before them and want more out of it than any of us. They are among the major forces driving innovation in this new world, and we cannot keep up.

As the new world is one of global knowledge and global sharing, the challenge lies in teaching the fundamentals of science, technology, engineering and mathematics without compromising the way Gen-Z students think about technology. Offering more workshops, more lectures, more software tools, or for that matter even grids on CDs, will deliver on this requirement only partially. One might present counter-arguments and vouch for the success of traditional methods; after all, most of us were brought up under the traditional paradigm, and it has apparently worked for most people. The simple truth of the matter, however, is that this methodology is nothing new and has essentially failed. Future generations of students are growing up in an “ever flattening world” in which the Internet and the World Wide Web are viewed as technologies of the 1990s. Education in the United States stands at a crossroads: we can either choose to be bold and innovative, or sit back and pave the road for the next generation of innovations to come from countries such as China and India.

Simply put, we as scientists, educators and researchers, despite our best efforts, are slowly losing touch with the current generation of students. We argue for practices that we know have failed to deliver concrete results — mostly because they are the safest route to take and we are comfortable with them. This boxed-in approach to education in general, and to computational science education in particular, is slowly forcing us down a path to irrelevance.

We have failed to capture the imagination of Generation-Z — something that corporate America has managed so successfully. Why is it that an artifact of computational science like the iPod is a mega-success, while computational science education is broken? Why are gaming companies able to set up multi-million dollar gaming environments, complete with their own internal currencies, targeted at teenagers and college students, while we are not able to attract students to computational science programs in our colleges and universities, or keep them from dropping out? The only rational conclusion one can draw is that we are not leveraging our understanding of our audience — the students, our customers — nor are we leveraging our expertise in information technology.

We talk Matlab when students talk Xanga; we talk simulations when students talk gaming; we talk parallel algorithms when students talk invisible computing.

In August 2005, Time magazine published a special issue on “Being 13,” which beautifully depicted the lifestyle and technology choices of teenagers. What was really surprising about the findings was that all the technologies we scientists, educators and researchers think of as important were conspicuously absent. There was absolutely no mention of computers, and certainly none of television; for that matter, teenagers seem to regard the current form of the Internet as a relic. What was present on the list of technology choices were devices like iPods and gaming devices, cyber-services like iTunes and Rhapsody, and really trendy communication devices.

One message that is growing louder, and that is consistent with the findings expressed in the book “Educating the Net Generation” (http://www.educause.edu/ir/library/pdf/pub7101.pdf), is that students spend more time playing online games and instant messaging than doing their homework. This being the case, perhaps we as educators should leverage the emerging national cyberinfrastructure to tap into that playing and instant messaging time.

Incidentally, serious gamers will tell you that they spend a considerable amount of time researching their games and discussing strategies for winning them. It is well established among educators that one of the greatest predictors of success or failure on a specific learning task is “time on task” combined with carefully structured instruction. The more time students spend learning and thinking about a specific concept in any subject area, the greater their chance of understanding it. Given that our students' attention is focused elsewhere, the real challenge lies in transforming their day-to-day activities into learning moments and filling their field of vision with learning experiences. Clearly, this requires some Gen-Z thinking.

Learning experiences of the future will be multi-sensory and completely engaging, drawing continuously and invisibly on technology and significant computational power. The future of learning will weave science, technology, engineering and mathematics concepts seamlessly into students' everyday lives.

We need to look into methodologies that transform common day-to-day student activities, such as gaming, eating at the cafeteria or visiting the supermarket, into learning experiences. Our vision for the future will need to develop a Cyberinfrastructure Education Ecosystem in which learning coexists with students' lifestyles, their technology choices and the emerging national cyberinfrastructure.

This new vision has to signal a revolutionary shift toward a user-focused discovery and pedagogical paradigm in which information technology and computational science are a tacit part of all campus activities. Furthermore, any such effort should place students' learning experiences on university campuses squarely at the intersection of cyberinfrastructure science and pedagogical theory and practice.

The future of discovery and learning will force us to break through computational, data, bandwidth and, more importantly, imagination walls. Given current trends in technology and the dramatic innovations that the National Science Foundation is fostering through its cyberinfrastructure, engineering and educational initiatives — the question we need to think about, and more importantly encourage future students to think about, is this: “What would we do if we were provided with virtually unlimited computational power, storage and bandwidth? What grand challenges would we take on? What would be the manifestation of this freedom on our university campuses, and in K-12 education?”

We need the vocabulary and the courage to think in those terms. For this, we really need to “deschool” our minds of traditional assumptions about education. Metaphorically, the solution lies in our learning to talk Xanga.

About the authors

Dr. Krishna P.C. Madhavan is a research scientist with the Rosen Center for Advanced Computing, Information Technology at Purdue University. He is also the educational technology director for the NSF-funded Network for Computational Nanotechnology. Dr. Madhavan serves as the chair of the Supercomputing 2006 Education Program and the curriculum director of the Supercomputing 2005 Education Program. His interests lie in the new and emerging field of cyberinfrastructure-enabled educational science.

Dr. Sebastien Goasguen is a senior research scientist with the Rosen Center for Advanced Computing, Information Technology at Purdue University. He is the site lead for the TeraGrid at Purdue University. Dr. Goasguen is the PI for the NSF-funded middleware effort at Purdue University as part of the NSF Middleware Initiative (NMI). He is also the co-PI on the NSF Tier-2 CMS award at Purdue University. Dr. Goasguen has also served as the technical director for the NSF-funded Network for Computational Nanotechnology.

Dr. Gary R. Bertoline is an associate vice president and director of the Rosen Center for Advanced Computing and the NSF-funded Envision Center for Data Perceptualization at Purdue University. He is also a full professor of computer graphics technology at Purdue University. Dr. Bertoline serves as the Chair of the Supercomputing 2005 Education Program and the co-chair of the Supercomputing 2006 Education Program.
