HPCwire recently announced our 2021 People to Watch and will be running featured interviews with these 14 thought leaders and HPC influencers in the coming weeks. First up we are happy to bring you our interview with Jim Keller, president and chief technology officer of Tenstorrent.
One of the top chip architects of our time, Keller has had an impactful career. He has held high-profile roles at AMD (where he designed the Zen cores that helped the company compete in datacenters once more), Tesla and Apple. Keller joined AI chip startup Tenstorrent earlier this year following two years as senior vice president of Intel’s silicon engineering group.
Hi Jim, congrats on your new position as CTO & President of Tenstorrent and being named an HPCwire Person to Watch for the second time! Tell us about your role at Tenstorrent, your areas of responsibility, and what drew you to the company.
Thank you for this opportunity.
As CTO, I’m working on new technology at Tenstorrent. Following our roadmap, we have a chip (Grayskull) currently starting production. We are powering up our second-generation part and designing our third and fourth generations of processors as we speak. I’m spending my time working on all of these parts and the system designs around them.
As President, I’ve been working with our growing team on business strategy. We’ve gained significant traction with various companies, system builders and their customers, which we can now start translating into revenue.
I was the first investor at Tenstorrent. Ljubisa Bajic (Tenstorrent Founder and CEO) and I go way back. We worked together at AMD and I was always impressed by his approach to building AI silicon. He knows how GPUs work, how the software works, and he also knows the math behind AI, which is a rare combination. That’s why I was interested in investing with him.
Personally, I think the AI revolution is bigger than the Internet. Joining Tenstorrent is a great way for me to contribute to it, and so far it’s been super fun.
With so many startups engaged in designing and commercializing AI silicon, what sets Tenstorrent apart?
There are a few different things to consider. First, and it took us a while to realize this, you have to get all the basics right at a very deep level: memory, compute, and network bandwidth, together with programmability.
We’ve talked to a number of customers who are frustrated about the current state of AI silicon at its core.
The second thing I really like is the approach to the software. It begins with a unique compiler and software strategy, with hardware designed around it properly.
Some AI chip companies build chips with lots of GFLOPS or TFLOPS, and then they design the software later.
But Tenstorrent has always been different. We build hardware in collaboration with software right from the start.
The original software team consists of people who worked at Altera on FPGA compilers and CAD tools, which are both very complicated problems; we have people from AI and also people who work on HPC computers. There’s a big presence of talent in Toronto from companies and institutions like Intel, Nvidia, AMD and the University of Toronto.
How does the Tenstorrent approach differ in terms of architecture and in the combination of software and hardware? What is “Software 2.0” and why is it important?
What sets Tenstorrent apart is the networking, data transformation, and math engines of the software stack working in sync with the hardware.
When you look at the Tenstorrent processor, it looks like an array of math processors, which is pretty common. There’s actually a real matrix multiplier and convolution engine, so you don’t have to write programs to emulate that kind of math. The Tenstorrent engine does it naturally. This reduces the number of programs you have to write for high performance, because it runs the AI idioms of matrix multiplication and convolution natively.
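The difference between emulating matrix math in software and having a native engine can be sketched in a few lines. This is an illustrative analogy only, not Tenstorrent code: NumPy’s `@` operator stands in for a native hardware matrix engine, while the explicit loops stand in for the software emulation a chip without one would need.

```python
import numpy as np

def matmul_emulated(a, b):
    """Emulate a matrix multiply with scalar loops, as a chip without
    a native matrix engine would have to do in software."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    out = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]
    return out

a = np.random.rand(8, 4)
b = np.random.rand(4, 6)

# A native engine exposes the whole idiom as a single operation.
native = a @ b
emulated = matmul_emulated(a, b)
assert np.allclose(native, emulated)
```

The results match, but the single-operation form is what lets the hardware, rather than hand-written inner loops, carry the performance burden.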
Then there are two units we call “Unpacker” and “Packer”, which are data transformation engines. Rather than writing programs to move bytes around, we have hardware that does it in a very straightforward way and presents a common data format into the math engine, which simplifies the programming.
And finally, networking is built into the Tenstorrent technology from the ground up. When the compute engines finish their work, they have to send data somewhere: they send data packets to the other engines.
We use the same on-chip and off-chip protocol to connect multiple chips together.
The term Software 2.0 was coined by Andrej Karpathy, who is the director of AI and Autopilot at Tesla.
His idea was that we’re going from a world where you write programs to modify data to where you build neural networks and then program them with data to do the things you want. So modern computers are literally programmed with data.
It means a very different way of thinking about programming in many places where AI has had so much success. I think in the Software 2.0 future, 90% of computing will be done that way.
There will always be some computing that runs standard C programs but more and more of the actual cycles will be done in AI hardware running what we think of as Software 2.0.
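The shift Karpathy describes can be sketched with a toy contrast. This is not Tenstorrent code; the task (classifying numbers as “large”) and the learning rule are invented for illustration.

```python
# Software 1.0: a human writes the rule explicitly.
def is_large_v1(x):
    return x >= 10

# Software 2.0: the "program" is a parameter learned from labeled data.
examples = [(2, False), (7, False), (9, False), (10, True),
            (11, True), (12, True), (15, True)]

def train_threshold(data):
    """Pick the threshold that classifies the training data best."""
    best_t, best_correct = None, -1
    for t in sorted(x for x, _ in data):
        correct = sum((x >= t) == label for x, label in data)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

threshold = train_threshold(examples)  # learned from data, not hand-written

def is_large_v2(x):
    return x >= threshold

assert all(is_large_v2(x) == label for x, label in examples)
```

A real neural network learns millions of parameters instead of one threshold, but the principle is the same: the behavior of the program comes from the data it was trained on, not from rules a programmer typed in.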
What is the status of Grayskull and Wormhole and what markets and use cases do these chips address?
We’ve started our first production run of Grayskull, which we’re sampling to our customers. Our chip goes on a PCIe card and we have 75 W, 150 W and 300 W form factors. People can buy and plug them into their server infrastructure. We’ve released our inference software, and in a month or so, we are going to release training software. It’s built for a broad variety of AI applications, both training and inference.
Wormhole is our second-generation part that is going to take Tenstorrent to the next level, because it has native networking between chips and lets us scale from a single chip to many-chip systems using just our own network. This greatly improves bandwidth between chips and lowers the cost of building a system.
What excites you most about being a computer architect right now?
I’m sort of amazed by this but I’ve been building and designing computers for 40 years. The complexity of the computers that we build today is so far past what we did or even considered hard 40 years ago.
The reason we can build these computers is that modern tools and software have gotten so much better. You can think of an idea, write down RTL, synthesize it and build it into a chip with a really small team.
People at one point thought there’d be so many transistors and things would be so complicated we wouldn’t be able to build silicon because it’d be too expensive. But the opposite is true. Tenstorrent built Grayskull and Wormhole as a very small team of really great people. They took a very clever approach to modularity and design. We have a relatively small number of units that we put together to make a very complex chip. The amount of change I’ve seen in the last 5 or 10 years of computer design is probably greater than the previous 20.
We’ve been through a lot of revolutions. I think the AI revolution is going to be the biggest one so far.
Outside the professional sphere, what activities, hobbies or travel destinations do you enjoy in your free time?
I like to be active and fairly physical: I kitesurf and snowboard. I like to run and work out. I find it’s almost meditative, especially when I’m working on a hard problem. I get the problem loaded up in my head and I go run or snowboard for four hours. Somehow or other, it sorts itself out.
I like to travel. I went to Egypt with my kids a couple of years ago, and it was great. I went to Serbia last year; we had a really great time there before the country shut down due to the pandemic. I often go to Hawaii to surf, and I really enjoy the beach. The last year has been tough on travel, so we’ll see about next year.
Keller is one of 14 HPCwire People to Watch for 2021. You can read the interviews with the other honorees at this link.