You Can Lure Unicorns to Water, but You Can’t Make Them Drink

By Elizabeth Leake, STEM-Trek

May 30, 2019

Lessons learned from Practice & Experience in Advanced Research Computing

If you’ve spent much time cruising employment ads lately, you’ve probably noticed that certain research computing specializations are in high demand. Some university-based centers have had positions open for months; others, for years. It’s the same story in densely populated communities that compete with regional industry as it is everywhere else. This climate has forced managers and human resource professionals to explore novel ways to fill the prospect pipeline.

Few academic programs provide the practical knowledge necessary to support research computing, so some universities have begun to incorporate advanced skills training into the curriculum. But what do you call the course, and where should it live? If you’re just starting out, there are considerations even more critical than the name; how you approach this effort could make or break your program.

In smaller schools, if you do manage to get a for-credit course approved, it’s possible that at some point an administrator with no background in computer science (CS) will judge it, unfairly, on economics alone: how many students were served, and could that classroom be better utilized by a course that draws more paying customers? It may also invite scrutiny of the return on investment of the computational cluster itself. Some administrators are unlikely to prioritize something with such immense power, network and personnel operational costs, especially when budgets are tight and an athletic program might be on the chopping block!

If you envision a course structured around the use of a cluster, that is, one that trains advanced computer science students to run, maintain and optimize workflows on it, you might draw 5-15 students in a 400-level CS program. Frankly, depending on the size of your resources and data center, that’s about as many as you’d want in a hands-on lab. If you’re only communicating with CS students, it could be called “Distributed and Parallel Computing.” Uber-geeks will understand what they’re in for. But if you do this, don’t encumber a full classroom. Occupy a conference room and pull them into the data center when it’s appropriate. That’ll keep the space auditors happy.

But if you want your program to grow, you should call it something that denotes employment potential and economic prosperity, for example, “Performance Computing for Research and Industry.” That title will resonate favorably with a broader range of prospective stakeholders (and advocates). At the master’s level, plan to train 5-15 students the first year, with the goal of doubling that number after two years. A worthy goal for the future would be to attract interdisciplinary students; that’s where the magic happens in terms of scientific and engineering discoveries.

And, if you’re all-in for the academic approach, you might want to create an undergraduate-level, general education course with the same title. This might convene in a small classroom the first year and move to an auditorium as the number of registrants grows. Open that course to all majors, targeting computationally curious students, with the lion’s share coming from CS, engineering, physics, biology, and business (big users in terms of allocations awarded). That course could serve as a prerequisite for any computationally intensive graduate program.

If this type of course is not established as an undergraduate, interdisciplinary gen-ed course from the beginning, it will invariably get political. As demand for computational knowledge increases across disciplines, each college will begin to sponsor its own version, and departments will make a power grab for seats that would otherwise go to CS, if that’s where the course is initially housed.

A gen-ed survey course should present applications for HPC across the full spectrum of industries so that more students can envision the economic outcomes: research advances, startups that employ regional workers, collaborating industrial partners, and grant awards. You might incorporate a lecture about cloud computing and how to determine whether it’s a good fit for a given workflow. I recently visited a company (1,000 employees, mostly technical) with a data-intensive mission for which AWS Lambda plus cloud-based GPU computing performs quite well. It has no interest in supporting HPC; it pays as it goes, much as you’d pay for a utility, and carries no capital burden. That wouldn’t work for universities whose mission is to prepare the workforce for a range of occupations; those that do this well support a diverse portfolio of systems and services, including cloud. But understanding when cloud is appropriate would prepare students for a cloud-exclusive scenario upon graduation, especially those who take jobs in industries where it’s the norm and there’s no need to employ people who can spell HPC.
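One way to ground such a lecture is a back-of-the-envelope break-even exercise students could run themselves. The sketch below is illustrative only; the functions, rates, lifetime, and utilization figures are hypothetical assumptions, not real pricing from any provider or center.

```python
# A minimal classroom sketch: compare an annualized on-premises cluster cost
# with pay-as-you-go cloud cost for a given utilization level.
# All figures below are hypothetical placeholders, not real pricing.

def annual_onprem_cost(capital_cost, lifetime_years, annual_power_network_staff):
    """Amortized hardware cost plus recurring power, network, and personnel cost per year."""
    return capital_cost / lifetime_years + annual_power_network_staff

def annual_cloud_cost(node_hours_per_year, price_per_node_hour):
    """Pay-as-you-go cost: you only pay for the node-hours you actually use."""
    return node_hours_per_year * price_per_node_hour

if __name__ == "__main__":
    # Hypothetical on-prem figures: $500k of hardware amortized over 5 years,
    # plus $150k/year for power, network, and staff.
    onprem = annual_onprem_cost(500_000, 5, 150_000)

    # Hypothetical cloud figures: 40 nodes busy 30% of the year at $1.20 per node-hour.
    node_hours = 40 * 0.30 * 365 * 24
    cloud = annual_cloud_cost(node_hours, 1.20)

    print(f"On-prem: ${onprem:,.0f}/year")
    print(f"Cloud:   ${cloud:,.0f}/year")
    print("Cloud looks cheaper" if cloud < onprem else "On-prem looks cheaper")
```

The point of the exercise isn’t the numbers themselves but the shape of the decision: bursty, lightly utilized workloads tend to favor pay-as-you-go, while sustained high utilization tends to favor owning the iron.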

There’s a lot to be said for vocational training; in that case, you can bypass academic credit hurdles and politics altogether. Most of the senior sysadmins I know, especially generalists capable of handling a range of tasks well, earned their stripes as student employees at one point. But plan to focus on quantity in anticipation of attrition, even among student employees. Students with LinkedIn profiles showing two or three years of in-house experience are getting noticed by talent scouts from the 14 big tech companies that recently waived degree requirements. While the starting salary is tempting to an undergrad who thinks that dropping out would reduce student debt, they need sound advice when assessing cost-of-living differences between communities. It’s also difficult to return to school after leaving, and student loan repayment begins soon after departure. Many tech companies offer a combination of salary and stock, but that grass isn’t always greener.

Someone recently explained to me that a year after joining a big tech company, he realized it wasn’t all he had hoped it would be. His salary doubled, but one-third of it came in the form of stock that isn’t currently performing well. At the same time, he had to move to a region that costs three times as much as the one he left behind. When comparing lifestyles, he said, “I can’t eat stocks; I live in cramped quarters and there’s nothing left at the end of the month.”
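To make that arithmetic concrete for a student weighing an offer, it helps to strip out the stock and normalize by local prices. The sketch below uses hypothetical baseline figures; only the “doubled salary, one-third in stock, triple the cost of living” proportions come from the anecdote above.

```python
# A minimal sketch of the comparison students should be coached to make before
# accepting an offer. Baseline salary and cost-of-living index are hypothetical.

def effective_cash(total_comp, stock_fraction, cost_of_living_index):
    """Cash compensation after removing stock, normalized by local cost of living."""
    cash = total_comp * (1 - stock_fraction)
    return cash / cost_of_living_index

if __name__ == "__main__":
    # Hypothetical current job: $70k, no stock, baseline cost of living.
    current = effective_cash(total_comp=70_000, stock_fraction=0.0,
                             cost_of_living_index=1.0)
    # The offer: salary doubled, one-third paid in stock, triple the cost of living.
    offer = effective_cash(total_comp=140_000, stock_fraction=1/3,
                           cost_of_living_index=3.0)

    print(f"Current effective cash: ${current:,.0f}")
    print(f"Offer effective cash:   ${offer:,.0f}")
```

Under those assumptions the “doubled” offer delivers less than half the spendable, locally adjusted cash, which is exactly the “I can’t eat stocks” lesson.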

By 2025, three-fourths of the world’s workers will have been born between 1977 and 1995. According to BVK Marketing research, this demographic is impatient; 91 percent expect to change jobs every three years. The gig culture, which gained popularity after the 2008 economic downturn, affects both employee and employer loyalty. A 2017 study by Intuit (the company behind TurboTax) found that by 2020, 43 percent of the workforce will be temporary. While I devoted 22 years of service to Illinois’ public university system, and STEM-Trek’s Vice President, David Stack, recently retired after a long career in Wisconsin’s, such commitment and loyalty will be extremely rare in the future.

Quality of life is important to this demographic; if the stars aren’t in alignment where they land, they won’t stick around for long. While a competitive salary is important, university employers who can’t compete with industry would do well to focus on fringe benefits over which they have more control, such as professional development and related travel, and the ability to work from home. In many cases, the latter is institutionally frowned upon, so it’s incumbent upon technical leadership to drive positive change on their campuses and offer peer support for such changes to others through professional organizations.

Do you have experience to share, or suggestions that I haven’t thought of?

Many would love to hear from you. Please continue the dialogue during the Practice & Experience in Advanced Research Computing (PEARC19) panel titled “Stop Chasing Unicorns in the Global Gig Economy,” Wednesday, July 31, 2019, where five senior research computing center directors will share lessons learned and road-tested recruitment and retention strategies. PEARC19 is July 28-August 1, 2019, in Chicago, Illinois; early registration ends June 23.

About the Author

HPCwire Contributing Editor Elizabeth Leake is a consultant, correspondent and advocate who serves the global high performance computing (HPC) and data science industries. In 2012, she founded STEM-Trek, a global, grassroots nonprofit organization that supports workforce development opportunities for science, technology, engineering and mathematics (STEM) scholars from underserved regions and underrepresented groups.

As a program director, Leake has mentored hundreds of early-career professionals who are breaking cultural barriers in an effort to accelerate scientific and engineering discoveries. Her multinational programs have specific themes that resonate with global stakeholders, such as food security data science, blockchain for social good, cybersecurity/risk mitigation, and more. As a conference blogger and communicator, her work drew recognition when STEM-Trek received the 2016 and 2017 HPCwire Editors’ Choice Awards for Workforce Diversity Leadership.
