Cubes, Culture, and a New Challenge: Trish Damkroger Talks about Life at Intel—and Why HPC Matters More Than Ever

By Jan Rowell

September 13, 2017

Trish Damkroger wasn’t looking to change jobs when she attended SC15 in Austin, Texas. Capping a 15-year career at Department of Energy (DOE) laboratories, she was acting Associate Director for Computation at Lawrence Livermore National Laboratory (LLNL). Her mission was to equip the lab’s scientists and research partners with resources that would advance their cutting-edge work in stockpile science and other critical fields.

But a chance conversation brought up the possibility of a career jump, and one thing led to another.


Today, Damkroger is Vice President of Intel’s Data Center Group and General Manager of its Technical Computing Initiative. Her work helps shape Intel’s high-performance computing (HPC) products and services for the technical market segment. Under that umbrella are the next-generation platform technologies and frameworks that will take Intel toward exascale and advance the convergence of traditional HPC, big data, and artificial intelligence workloads.

Along with her new job, Damkroger and her husband have moved to Oregon and joined the state’s $3.35 billion wine and grape industry. He recently retired, and she commutes to work from their 12-acre winery on Bald Peak, 20 miles or so south of Intel’s Hillsboro facilities. The two met at an executive coaching program run by the University of California, Berkeley’s Haas School of Business.

Damkroger is also a certified coach and a strong advocate for women in science, technology, engineering, and math (STEM). She has played leadership roles in the industry’s annual Supercomputing Conference for more than a decade. She chaired SC14—the year of HPC Matters—headed the SC15 Steering Committee, and led the SC16 Diverse HPC Workforce Committee. She has signed on as Vice Chair of SC18.

Trish, you’re very well known in the industry, but I wonder if you could tell us a bit about your background and career path. What were some of the steps that led you to where you are today?

My dad is an electrical engineer. I had an older brother and a younger brother, and when we were growing up, the expectation was always that we would go to college, we could go to any state school we wanted, and we could be any kind of engineer. Those were our choices.

Of course, I thought that was awful, but now, with my own kids, I sometimes think it would have been good to give them a little more clarity, a few more guard rails—one of them is in computer engineering, and the other is still figuring out his passion. In any case, I chose electrical engineering, graduating from Cal Poly. My older brother is a computer engineer, and my younger brother is a mechanical engineer.

When I started out, I was fascinated by the Six Million Dollar Man and Bionic Woman television shows. I wanted to do robotics, create prostheses that connected to the brain, and make that whole thing work. But I was a little ahead of my time, and the programs to do that really weren’t there.

After graduation, I worked full time at Hewlett-Packard and got my Master’s at Stanford studying AI and neural networks, which were in their infancy. That’s always been a passion for me—to figure out how the brain and body work together and how we can make prosthetics that mimic real limbs. It’s cool to see that coming to fruition now.

So you had worked at HP?

Yes. I left HP to marry my husband, who lived in Livermore, California, and I took a job with Sandia National Laboratories. I worked at Sandia for 10 years, and left there in 2000 to manage a product line for an IT service management company.

After 9/11, I wanted to return to the national security sector. I missed the labs and the national security mission. Plus, the company I worked for was relocating, and I was not interested in moving.

So I went to Lawrence Livermore, and I loved it, and I never expected to leave. I had worked my way up the organizational ladder and was probably in the last position I would have at the laboratory—and I realized I didn’t want to do what I was doing for another 10 years.

I came to Intel because it’s a chance to do something totally different. I love new challenges. I love to learn new things, and I have more chances to do that at Intel. It’s a completely different mindset and a completely new skillset to learn. I feel like I could spend decades here and continue to learn and grow.

Intel is in the middle of everything. It’s just a tremendously exciting place to be.

How did you make the move to Intel? Were you recruited? Were you job hunting?

Not job hunting at all. I ran into Debra Goldfarb [formerly of Microsoft and now Intel’s Senior Director of Market Intelligence] at SC15, and Deb asked if I was attending a women’s recruiting event Intel was putting on. I was already booked, and wasn’t looking to change jobs, so I didn’t attend. I made one of those “If the right job comes up, keep me in mind” comments, but I wasn’t that serious—it was more in a spirit of not wanting to close doors.

Well, Deb set up a dinner meeting for me with Diane Bryant [president of the Intel Data Center Group], and I loved Diane. I mean, who doesn’t love Diane? We connected. She pointed out that I was passionate when I talked about all the things I was doing outside my job, with women and STEM, with SC. But, she said, “I don’t hear that same passion when you talk about your work.”

She was right, and it was a real “Wow” moment. It made me aware and got me thinking.

Has the Intel culture surprised you in any ways? Is it different from what you expected?

I’ll share a story. At Livermore, I had a beautiful office and my own conference room. On my first day at Intel, they walked me to my office, which is a small cube, and I asked, “Is this temporary?” But Intel being very egalitarian, they said, “No, everyone has a cube. BK—CEO Brian Krzanich—has a cube.”

I knew Intel was very egalitarian, and I think it’s a good thing. I like that philosophy. It’s a part of the culture going back to [Bob] Noyce and [Gordon] Moore. But the cube was a surprise.

People warned me about the pace. I’ve always worked hard and long hours, so that hasn’t changed, but being in a worldwide company is different. I have lots of early morning and evening calls. Intel’s business is truly global, and it’s 24/7. You’re dealing with China, with Europe—you have to be available. I knew about it intellectually, but it’s different when you’re actually doing the 6 am and 8 pm calls.

Another thing I love about Intel, and it’s huge, is how open everyone has been. They’ve been very welcoming, very willing to throw me in the middle of everything very quickly, and have the confidence in me that I can represent Intel all over the world. I love it. It shows the trust they have in their people.

You’re an advocate for diversity in STEM, and I know Intel is out front on this issue. Why is diversity so important?

The real importance of diversity in HPC is that we need more people to go into tech fields. Period. Demand is growing, and we can’t meet it with only white men. The other point is that we’re selling to a diverse market. If we’re not engineering for that diversity, we’re going to lose. Everybody loses.

I’m very supportive of women in STEM. I’m continuing to push that, and to coach women who are in male-dominated fields.

You’ve focused industry attention on why HPC matters. Could you talk a bit about why sustained federal investments in HPC are so crucial?

My one-sentence answer is that HPC is absolutely essential to national competitiveness. China recognizes this. China expects to be at exascale in 2020. They’re getting there first because they’re making the investments. They’re developing indigenous technologies, and seeing HPC as a core element of competitiveness.

HPC is important because it is the way we are going to solve problems in every field. If we want the US to be at the forefront of innovation, we have to continue to invest. If we aren’t making those upstream investments to drive HPC innovation, we will lose our competitive edge.

That’s manufacturing and financial modeling and drug discovery. It’s autonomous driving and cognitive computing and bulletproof cyber security. It’s curing cancer, managing the electrical grid and safeguarding the nuclear arsenal. It’s sustainable agriculture and precision medicine.

Our digital infrastructure is just as important as our highways and airports. We need all hands on deck to help the government’s policymakers and funders understand HPC’s importance and why we need to push forward. We have to expand the capacity to support the nation’s critical science and technology research—DOE systems are running at greater than 90 percent capacity, and that utilization is hard to sustain because you have to bring the systems down for maintenance.

We need to educate funders and decision makers about the ways government investment funds the full ecosystem—the labs and universities that build the large machines, conduct the research, do the applied math for the models, develop the applications and algorithms, explore the new technologies, and do all the things that will be in everyday computing environments 5-10 years out, and in your smartphone and wearables after that. If we stop those investments, the middle of the pyramid eventually collapses and the innovation stops. That’s an outcome no one wants.

About the Author

Jan Rowell writes about technology trends and impacts in HPC, healthcare, life sciences, and other industries.
