UC San Diego’s JupyterHub Platform Aids Students with Data-Intensive Computing Needs

January 24, 2022

Jan. 24, 2022 — In classic UC San Diego fashion, an overheard conversation at a campus coffee cart has turned into an interdisciplinary project that’s making computing-intensive coursework more exciting while saving well over one million dollars so far. The effort gives UC San Diego graduate and undergraduate students – and their professors – better hardware and software ecosystems for exploring real-world, data-intensive and computing-intensive projects and problems in their courses.

Larry Smarr, Distinguished Professor Emeritus, Department of Computer Science and Engineering at the UC San Diego Jacobs School of Engineering.

It all started while UC San Diego computer science and engineering professor Larry Smarr was waiting for coffee in the “Bear” courtyard at the Jacobs School of Engineering a little more than three years ago. In line, Smarr overheard a student say, “I can’t get a job interview if I haven’t run TensorFlow on a GPU on a real problem.”

While this one student’s conundrum may sound extremely technical and highly specific, Smarr heard a general need; and he saw an opportunity. In particular, Smarr realized that innovations coming out of a U.S. National Science Foundation (NSF) funded research project he leads—the Pacific Research Platform (PRP)—could be leveraged to create better computing infrastructure for university courses that rely heavily on machine learning, data visualizations, and other topics that require significant computer resources. This infrastructure would make it easier for professors to offer courses that challenge students to solve real-world data- and computation-intensive problems, including things like what he heard at the coffee cart: running TensorFlow on a GPU on a real problem.
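For readers unfamiliar with the phrase, "running TensorFlow on a GPU" is conceptually simple once the hardware, drivers, and libraries are in place, which is precisely the infrastructure the platform provides. The short sketch below is purely illustrative (it is not taken from any UC San Diego course material); it checks for a visible GPU and trains a small model, with TensorFlow placing the heavy math on the accelerator automatically.

```python
# Minimal sketch: train a small model, letting TensorFlow use a GPU if one is visible.
# The dataset, model size, and hyperparameters are illustrative assumptions only.
import tensorflow as tf

print("GPUs visible:", tf.config.list_physical_devices("GPU"))

# MNIST is used here purely as a stand-in for "a real problem".
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])

# On a GPU-backed node, the matrix math below runs on the accelerator automatically.
model.fit(x_train, y_train, epochs=2, validation_data=(x_test, y_test))
```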

Fast forward to 2022, and Smarr’s spark of an idea has grown into a cross-campus collaboration called the UC San Diego Data Science/Machine Learning Platform, or the UC San Diego JupyterHub. Through this platform, the inexpensive, high-performance computational building blocks combining hardware and software that Smarr and his PRP collaborators designed for use in computation-intensive research across the country are now also the backbone of dynamic computing ecosystems for UC San Diego students and professors who use machine learning, data visualization, and other computing- and data-intensive tools in their courses. The platform has been widely used in every division on campus, with courses taught in biological sciences, cognitive science, computer science, data science, engineering, health sciences, marine sciences, medicine, music, physical sciences, public health and more. See a list of Jacobs School-affiliated faculty and the names of the courses they have taught using the UC San Diego Data Science/Machine Learning Platform.

It’s a unique, collaborative project that leverages federally funded computing research innovations for classroom use. To make the jump from research to classroom applications, a creative and hardworking interdisciplinary team at UC San Diego came together. UC San Diego’s IT Services / Academic Technology Services stepped up in a big way. Senior architect Adam Tilghman and chief programmer David Andersen led the implementation effort, with leadership and funding support from UC San Diego CIO Vince Kellen and Academic Technology Senior Director Valerie Polichar. The project has already helped the campus avoid well over one million dollars in cloud-computing spend, according to Kellen.

Usage patterns for the UC San Diego Data Science/Machine Learning Platform. The green regions represent available capacity for non-coursework use.
Examples of the software that students are able to run on the UC San Diego Data Science/Machine Learning Platform.

At the same time, the project gives the UC San Diego community tools to encourage the back-and-forth flow of students and ideas between classroom projects and follow-on research projects.

“Our students are getting access to the same level of compute capacity that normally only a researcher using an advanced system like a supercomputer would get. The students are exploring much more complex data problems because they can,” said Smarr, who was also the founding Director of the California Institute for Telecommunications and Information Technology (Calit2), a UC San Diego / UC Irvine partnership.

Personal genomics

One of the many professors from all across campus using the UC San Diego Data Science / Machine Learning Platform for courses is Melissa Gymrek, who is a professor in both the Department of Computer Science and Engineering and the Department of Medicine’s Division of Genetics.

Her students write and run code in a software environment called Jupyter Notebooks that runs on the UC San Diego platform. “They can write code in the notebook and press execute and see the results. They can build figures to visualize data. We focus a lot more now on data visualizations,” said Gymrek.
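As a rough illustration of that notebook workflow (the file name and column names below are hypothetical placeholders, not data from Gymrek’s course), a single cell can load a dataset, summarize it, and render a figure inline:

```python
# Hypothetical notebook cell: load data, summarize it, and plot it inline.
import pandas as pd
import matplotlib.pyplot as plt

# "variants.csv" and its columns are placeholders, not actual course data.
df = pd.read_csv("variants.csv")
print(df.describe())

df["allele_frequency"].hist(bins=50)
plt.xlabel("Allele frequency")
plt.ylabel("Count")
plt.title("Distribution of allele frequencies")
plt.show()
```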

Xuan Zhang (UC San Diego Chemistry PhD, ’21) is one of the tens of thousands of UC San Diego students and young researchers who has used the UC San Diego Data Science/Machine Learning Platform extensively in courses.

One of the thousands of UC San Diego students who has used the platform extensively is Xuan Zhang. Through the data- and visualization-intensive coursework in CSE 284, Zhang realized that the higher-order genetic structures at the center of her chemistry Ph.D. dissertation – R-loops – could be regulated by the short tandem repeats (STRs) that are at the center of much of the research in Gymrek’s lab. Without the computing infrastructure for real-world coursework problems, Zhang believes she would not have made the research connection.

After taking Gymrek’s course, Zhang also realized that she could apply to obtain her own independent research profile on the UC San Diego Data Science / Machine Learning Platform in order to retain access to all her coursework and to keep building on it. (When Jupyter Notebooks are hosted on the commercial cloud, students generally lose access to their data-intensive coursework when the class ends, unless they download the data themselves.)

“I thought it was just for the course, but then I realized that Jupyter Notebooks are available for research, without losing access through the UC San Diego JupyterHub,” said Zhang.

This educational infrastructure has added benefits for professors as well.

“With these Jupyter Notebooks, you can automatically embed the grading system. It saves a lot of work,” said Gymrek. You can designate how many points a student gets if they get the code right, she explained. Before using this system, students sent PDFs of their problem sets, which made grading more time-intensive. “It was hard to go past a dozen students. Now, you can scale,” said Gymrek. In fact, she has been able to expand access to her personal genomics graduate class to more than 50 students, up from a dozen before she had access to these new tools.
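The article does not name the grading tool, but one widely used open-source option for embedding grading in Jupyter Notebooks is nbgrader, and the hedged sketch below follows its conventions: the instructor’s reference answer sits between solution markers that are stripped from the student release, and a separate autograded test cell, worth a point value set in its metadata, asserts on the student’s code.

```python
# Instructor source cell: the solution block is stripped from the student release.
def gc_content(seq):
    """Return the fraction of G/C bases in a DNA sequence."""
    ### BEGIN SOLUTION
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)
    ### END SOLUTION

# Autograded test cell (its point value is set in the cell's nbgrader metadata):
# passing asserts earn the student the cell's points.
assert abs(gc_content("GGCC") - 1.0) < 1e-9
assert abs(gc_content("ATAT") - 0.0) < 1e-9
assert abs(gc_content("ATGC") - 0.5) < 1e-9
```

Because the tests run automatically when assignments are collected, grading effort stays roughly constant as enrollment grows, which matches the scaling Gymrek describes.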

Direct uploading of assignments and grades to the campus learning management system, Canvas, is also now available.

“The platform is truly transforming education. Unlike many learning technology innovations, classes in every division at UC San Diego have used the Data Science/Machine Learning Platform. Many thousands of students use it every year. It’s innovation with real impact, preparing our students in many — sometimes unexpected — fields to be leaders and innovators when they graduate,” said Polichar.

Professors and students from all six departments at the UC San Diego Jacobs School of Engineering are making great use of the UC San Diego Data Science/Machine Learning Platform. The numbers on each stacked bar represent the number of students in that department using the DSMLP in that quarter.
Courses from all six departments at the UC San Diego Jacobs School of Engineering are run on the UC San Diego Data Science/Machine Learning Platform.
This graph shows courses in all disciplines. The numbers in the bars are the number of courses that quarter, and the colors show the campus divisions (HDSI is the UCSD Halıcıoğlu Data Science Institute) that used the UCSD DSMLP. This shows how JupyterHub is bringing data science and machine learning computing to a broad set of disciplines.

Commodity hardware for research and education

“If you build your distributed supercomputer, like the PRP, on commodity hardware then you can ride Moore’s Law,” explained Smarr.

UC San Diego ITS senior architect Adam Tilghman poses with some of the innovative computing hardware that has opened the door to more data-intensive and computing-intensive coursework for UC San Diego students.

Following this commodity hardware strategy, Smarr and his PRP collaborators developed hardware designs where performance goes up while prices go down over time. The computational building blocks developed by the PRP, and repurposed by UC San Diego’s ITS, are rack-mounted PCs containing multi-core CPUs and eight Graphics Processing Units (GPUs), optimized for data-intensive projects, including accelerating machine learning on the GPUs. These PCs run a wide range of leading-edge software to help students program the system, record their results in Jupyter Notebooks, and execute a variety of data analytic and machine learning algorithms on their problems.
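From inside a notebook session, a student can quickly confirm which of those GPUs their container can see. The sketch below uses PyTorch purely as an example; the article does not enumerate the installed frameworks, so treat the specific library as an assumption.

```python
# Quick check of the GPU resources visible to a notebook container (PyTorch assumed).
import torch

if torch.cuda.is_available():
    n = torch.cuda.device_count()
    print(f"{n} GPU(s) visible")
    for i in range(n):
        print(f"  cuda:{i} -> {torch.cuda.get_device_name(i)}")
    # Move a tensor onto the first GPU to confirm compute works end to end.
    x = torch.randn(1024, 1024, device="cuda:0")
    print("Matrix multiply on GPU:", (x @ x).shape)
else:
    print("No GPU visible to this container")
```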

Building on this commodity hardware approach to high performance computing has allowed UC San Diego to create a dynamic and innovative “on premises” ecosystem for data- and computing-intensive coursework, rather than relying solely on commercial cloud computing services.

“The commercial cloud doesn’t provide an ecosystem that gives students the same platform from course to course, or the same platform they have in their courses as they have in their research,” said Tilghman. “This is especially true in the graduate area where students are starting work in a course context and then they continue that work in their research. It’s that continuity, even starting as a lower division undergraduate, all the way up. I think that’s one of the innovative advantages that we give at UC San Diego.”

UC San Diego professors and students interested in learning more about the Data Science / Machine Learning Platform can find additional details and contact information on its website.

“I’ve been at this for 50 years,” said Smarr. “I don’t know of many examples where I’ve seen such a close linking of research and education all the way around, in a circle.”

This alignment of research and education feeds into UC San Diego’s culture of innovation and relevance.

“It’s essential for the nation that students all across campus learn and work on computing infrastructure that is relevant for their future, whether it’s in industry, academia or the public sector,” said Albert P. Pisano, dean of the UC San Diego Jacobs School of Engineering. “These information technology ecosystems being created and deployed on campus are critical for empowering our students to leverage innovations to serve society.”

Pacific Research Platform

Click here for a video that gives an overview of the Pacific Research Platform (PRP) and includes a sampling of research projects the platform has enabled.

Larry Smarr serves as Principal Investigator on the PRP and allied grants (NSF Awards OAC-1541349, OAC-1826967, CNS-1730158, CNS-2100237) which are administered through the Qualcomm Institute, which is the UC San Diego Division of Calit2.


Source: Daniel Kane, UC San Diego
