Citrix CTO Simon Crosby on Battling VMware

By Derrick Harris

July 7, 2008

This interview was spurred by an article discussing VMware’s now-complete acquisition of B-hive Networks, which I used as a jumping-off point to get into a variety of topics affecting the virtualization marketplace. Citrix Systems CTO Simon Crosby wanted to clear the air around the acquisition, technology-wise, and also wanted to let our readers know that although XenServer can’t compete with VMware in terms of sheer number of users, Xen has its own claims to fame, including a very rich partner ecosystem and the undisputed title as the hypervisor of choice for cloud service providers like Amazon and Google.

I think Crosby accomplished his mission, and then some. He gave candid opinions on a wide range of virtualization-related topics, including how Citrix views its partnership with Microsoft around Hyper-V through a bullfighting metaphor, with XenServer being the ring in Microsoft’s nose and VMware being the matador. Every once in a while, he says, we just give it a tug and point Microsoft in the direction of VMware. Oh, and Crosby is not shy about calling out VMware — in fact, he takes shots at the market leader whenever possible. “I think VMware has a lot to lose, they’re going to fight ’til the end to keep their world proprietary and isolated from everybody,” said Crosby, “and the only thing I have to say to them is that there was a billion-dollar business in TCP/IP stacks before it went into the OS.”

VMware believes its acquisition of B-hive Networks gives it some very innovative IP and an interesting set of capabilities. What are your thoughts on this move?

SIMON CROSBY: [The B-hive acquisition is] mildly interesting. [First,] it creates numerous problems; it’s a bump in the wire. VMware doesn’t know how to deal with bumps in the wire, generally, and bumps in the wire always impose some reliability and/or performance concerns. Second, it’s not actionable stuff. It’s good at finding things out: it does a reasonable job of figuring out what apps are running on what VMs and where those VMs are running on physical infrastructure, and it paints a pretty good picture of that. It gives you some performance data as a result, but it is not an action engine. There is no way to turn that information into actionable ways to tune, modify, alter or control the infrastructure. Third, of course, it’s never been known to scale.

Technology-wise, Citrix has an incredibly rich set of capabilities in that area. Citrix EdgeSight is a massively scalable, distributed SQL query engine that allows us to monitor performance SLAs, installed software, reasons for application failure — and so on, and so on, and so on — in very large deployments of application infrastructure. EdgeSight is just a part of the Citrix portfolio, so it’s part of XenApp, which used to be Presentation Server, it’s part of XenDesktop, which is our VDI offering, and, indeed, we will at some point leverage those core assets for use in datacenter infrastructure with XenServer.

But we also have the actionable component here, too. One of the things I’d love to have happen in front of any VMware audience is for somebody to say, “But you don’t do distributed resource scheduling in your product.” To which I respond, “Yes we do.” We have a product called NetScaler, with an ability to sit in-band in traffic and do Layer 7 policy-based, content-aware, application-level-aware management of the infrastructure. NetScaler plays a strategically critical role for us. We have customers who are using XenServer in production today, with NetScaler sitting in front of XenServer resource pools, and on the fly dynamically provisioning new VMs, or natively running instances of apps on servers, based on sensing the application’s performance. It is the world’s fastest Layer 7 switch and, moreover, it gives us the ability to do some way cool stuff. For example, if an application fails, it can move all traffic onto a redundantly provisioned VM or bring up a new VM. It also can be used for disaster recovery: if you lose a datacenter, NetScaler will simply move the traffic to an alternate site.
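To make that pattern concrete, here is a minimal sketch of the application-aware failover Crosby describes: probe an application at Layer 7 (an HTTP health check, not just a TCP connect) and, on failure, steer traffic to a redundantly provisioned VM. The endpoints and function names are invented for illustration; this is not NetScaler’s actual interface.

```python
# Hypothetical sketch of Layer 7, application-aware failover. The URLs
# and functions are placeholders, not NetScaler's API.
import time
import urllib.error
import urllib.request

PRIMARY_HEALTH = "http://app-primary.example.com/health"  # hypothetical
STANDBY_TARGET = "http://app-standby.example.com"         # hypothetical

def healthy(url, timeout=2.0):
    """Layer 7 probe: the app must answer HTTP 200, not merely accept TCP."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def switch_traffic(target):
    """Placeholder for retargeting the virtual server at the standby VM."""
    print("steering all traffic to", target)

while True:
    if not healthy(PRIMARY_HEALTH):
        switch_traffic(STANDBY_TARGET)  # or provision a fresh VM instead
        break
    time.sleep(5)
```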

Now, when we talk about application delivery, we are not talking about virtualization; virtualization is just a part of application delivery. And this is why I think my friends at VMware have got the whole thing backward. They think that virtualization is the raison d’etre of the datacenter, and we think that virtualization is kind of a cool feature set to have around for some circumstances. This is a very good example of that. NetScaler, today, drives 75 percent of Internet transactions, and that’s application delivery for native workloads and for virtual workloads. VMware is well behind, but they’re starting out on an interesting path.

Was Citrix working with B-hive or sending customers there for any reason?

CROSBY: I’m not aware of us having done that, although they are a partner and they’re Citrix-ready. The good thing I can say about the world is that we’re rapidly moving toward one that is hypervisor-agnostic. Even XenServer today, in the Platinum Edition, has the ability to provision VMs onto VMware, Microsoft Hyper-V, Xen itself, and onto bare metal. At the Platinum level in every Citrix product, we support VMware VMs, so we don’t see a particular issue there. It’s a nice opportunity for us to leverage our application-delivery assets, and if a customer has purchased VMware, that’s fine — they just happened to pay too much for it.

So virtualization isn’t the be-all and end-all in the datacenter?

CROSBY: What I meant by that is that virtualization is a technology that occurs at multiple layers of the stack, and it connotes agility, dynamism and availability. We already do many of those things in production today for native workloads. How do we do that? Well, with Provisioning Server, which is just a feature of XenServer at the Platinum level, we have the ability to dynamically and instantly provision native workloads. That means on any server, at any time, I can bring up any app in a VM. And the fact that that VM is running without a hypervisor — running native on a server — shouldn’t particularly worry you. I’ve managed to achieve agility, dynamism and availability for native workloads, and if you wanted to run a VM on a hypervisor, we could serve a VM onto Xen, Hyper-V or VMware.

So some of the benefits of virtualization, which essentially are encapsulation, centralization and dynamism, are conferred even on native workloads. We have the ability to address 100 percent of the datacenter workload today, even those workloads you wouldn’t want to virtualize for performance-related or other reasons. So, we’re not worried about the fact that only some 10 percent of servers are virtualized, and, moreover, we don’t think that the operating system and server level is the only level at which you virtualize. If you want to deliver massively scalable virtualization, you need to separate the apps from the OS, and we do this today in XenDesktop…

What about virtual environments being the ideal platform for mission-critical applications? Is this a legitimate hope? Are enterprises comfortable enough to port their most sacred apps onto VMs?

CROSBY: Yes, they are. It is very much the case that large applications are moving into production on virtualization, and it really has come down to “are the app vendors ready to support it,” “are they certified,” and so on. The app set there is increasingly large, and it really comes down to performance. It’s happening.

Is virtualization becoming the best platform on which to run applications?

CROSBY: There are some things that will never be virtualized, just because there are different ways of building the app. For example, if you look at the way Web 2.0 apps are built, or if you look at Oracle RAC, you have a cluster controller that throws up new instances of native execution workloads on demand. That’s a different way of virtualizing infrastructure, and a perfectly legitimate way to do so. Do I need a hypervisor to do that? Not necessarily. At that point, the only value that hypervisor-based virtualization would offer is flexibility in provisioning a unit of work, which essentially would be my VM, my packaged execution unit, and making it boot and run.

There are some that will be virtualized, and there are some that won’t, but we can confer the key properties of agility, availability and dynamism to all workloads, whether they are virtualized or not. Virtualized workloads are really focused on the notion of provisioning the resources of a server for multiple VMs so that you can make the best use of your infrastructure.

Yes, apps are moving to production on virtualization, but the other thing that’s happening is the concepts of virtualization — the key one being encapsulation of the workload as a VM, and then provisioning that VM onto some piece of infrastructure, including native infrastructure — also are being achieved, and we do that today with XenServer, while still delivering dynamism and agility.

Speaking of dynamism and agility, VMware has been talking recently about the notion of cloud computing, about having a virtualized cloud to run your entire datacenter. Is Citrix looking at this kind of a future for virtualization technology?

CROSBY: The question I would ask VMware in response to that is: “Great. Sounds cool. Now, which cloud are you in?” What does VMware know about clouds, and how does Virtual Center scale? Virtual Center does not scale; it has a huge issue, which is that once you reach a decent number of servers, the thing falls over, because it’s a single point of management control. VMware knows a lot less about scaling, in general, than does the Xen community.

And all of the interesting cloud implementations of virtualization are built on Xen. No. 1 in the world would be Amazon, and it’s Xen and it’s massive — absolutely massive. Google has a massive Xen deployment. I don’t claim any money out of that, but I claim victory. That is, an open architecture that encourages people to build all of the orchestration capabilities for a very flat, very efficient, massively optimized datacenter environment. Yep, we, collectively, have done this.

From a resource-model standpoint, we believe that XenServer scales better today than VMware does. Obviously, in terms of real customer deployments, the number of our customers who have hundreds of servers running XenServer is small — it’s in the tens — so we have very little to offer there in terms of very large-scale proof points. But I’ve got to believe the architecture we’ve got is far more scalable than something that is based on the notion that a single point of control — Virtual Center — can be anything that drives a cloud.

If a cloud is a big deal for VMware, I can tell you another thing that’s not going to happen: that cloud is not going to be composed of servers for which the virtualization layer costs $6,000 per server. That just is not going to happen at scale.

Price and scalability aside, is it a legitimate notion that this could happen?

CROSBY: Absolutely. Two things happen: first, IT becomes a cloud to the rest of the organization, and the interface to the rest of the organization essentially becomes a provisioning interface, whereby if I have an application workload to run, I submit it to my cloud. I don’t get to choose what server it runs on, but I have an SLA, and so on. All of the packaging standards to build that are things that we worked on in the OVF work that is now part of the DMTF.
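For context, OVF (the Open Virtualization Format) packages one or more VMs together with an XML descriptor that declares their disks, networks and resource requirements. As a minimal sketch, assuming an OVF 1.0-style namespace and a hypothetical descriptor file, Python’s standard library is enough to list the virtual systems a package contains:

```python
# Minimal sketch: list the virtual systems declared in an OVF descriptor.
# The file name is hypothetical; an OVF 1.0-style namespace is assumed.
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

root = ET.parse("appliance.ovf").getroot()
for vs in root.iter("{%s}VirtualSystem" % OVF_NS):
    print("virtual system:", vs.get("{%s}id" % OVF_NS))
```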

I think that is a very real transformation that is happening. Large organizations are moving from an acquisition model — “I need a server, how do I buy one?” — to one where, if you want a physical server, you have to go all the way up to the CIO. The predominant use case is that you get a virtual server unless you make a very good case for buying a physical one. That is absolutely changing.

The other kind of cloud that’s interesting is the real cloud, the third-party ones. There are some very interesting opportunities there in disaster recovery, availability and instant scalability of applications.

What are you building into XenServer to address the cloud computing model?

CROSBY: XenServer itself has a very interesting architecture, in that XenServer is inherently composed of resource pools. The basic building block of the architecture is a resource pool. Resource pools themselves contain all of the management of the pool — every server in the pool contains all of the management information needed to manage the entire pool — and we leverage a much more scalable storage architecture than VMware does. VMFS, their clustered file system, does not currently scale above 16 servers. Ours scales arbitrarily, because we can use something very flat like iSCSI or NFS or any kind of backend storage mechanism.

And resource pools in our architecture simply provide APIs to be driven by a provisioning system, or any system that wants to drive a virtualized infrastructure. XenCenter is a management user interface which is perfect for managing our products for SMEs or enterprise departments, but we have partners who directly leverage XenServer’s APIs to build massively scalable systems. Platform Computing has taken all of its grid stuff and wrapped it around XenServer, and there are various other folks also doing this. At the resource pool level, we look to scale massively through collaboration with vendors who can sell it to different markets.
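Those pool-level APIs are exposed through XenAPI, for which the XenServer SDK ships language bindings. A minimal sketch with the Python bindings, using placeholder addresses and credentials, would enumerate a pool like this (any member can answer, since each host carries the full pool management state):

```python
# Sketch: enumerate a XenServer resource pool via the XenAPI Python
# bindings from the XenServer SDK. Address and credentials are placeholders.
import XenAPI

session = XenAPI.Session("https://pool-master.example.com")
session.xenapi.login_with_password("root", "password")
try:
    for host_ref in session.xenapi.host.get_all():
        print("pool member:", session.xenapi.host.get_name_label(host_ref))
finally:
    session.xenapi.session.logout()
```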

When it comes to dynamism and agility, how much of this can be done through a hypervisor? Where do things like storage virtualization and I/O virtualization come into play?

CROSBY: This is an area where I think, until now, the virtualization architecture that is out there has really limited innovation to the software stack running on the server, because that’s the stuff VMware sells. Everybody else in the industry has been out there innovating; the storage guys are doing a fabulous job. VMFS basically turns storage into dumb blocks, where VMs are invisible to the storage. What our architecture does is expose VMs as first-class objects to the storage infrastructure, so that we can leverage all the capabilities of the storage infrastructure to use array-based snapshots, clones, thin provisioning, HA, backup, DR, etc., instead of doing all of this in software on the host. Storage virtualization is about to get a huge lift by collaborating with us on an open architecture.
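The same API treats virtual disks as first-class objects, which is what lets an array-integrated storage repository satisfy a snapshot request in the array rather than in host software. A hedged sketch, again with the XenAPI Python bindings and placeholder names:

```python
# Sketch: snapshot each virtual disk of a VM via XenAPI. With an
# array-integrated storage repository, the backend can implement this as
# an array-based snapshot rather than copying blocks in host software.
import XenAPI

session = XenAPI.Session("https://pool-master.example.com")
session.xenapi.login_with_password("root", "password")
try:
    vm_ref = session.xenapi.VM.get_by_name_label("my-app-vm")[0]  # placeholder VM
    for vbd_ref in session.xenapi.VM.get_VBDs(vm_ref):
        if session.xenapi.VBD.get_type(vbd_ref) != "Disk":
            continue  # skip CD/DVD devices
        vdi_ref = session.xenapi.VBD.get_VDI(vbd_ref)
        snap = session.xenapi.VDI.snapshot(vdi_ref, {})
        print("snapshot VDI:", session.xenapi.VDI.get_uuid(snap))
finally:
    session.xenapi.session.logout()
```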

And, indeed, this same architecture transfers directly into Hyper-V. Bear in mind that all a hypervisor does is virtualize the resources of a single server. What really becomes interesting is how you make multiple servers scale into pooled resources in a datacenter. That always involves a conversation about how you deal with storage and how you deal with networks and fabrics. I believe passionately that our open architecture is the right way to do that, because then the best storage solutions will shine. When VMs can be seen from the storage infrastructure, then the storage can snapshot VMs, rather than us having to do all of that with more software in our layer.

You mentioned Hyper-V. What are your thoughts on Microsoft’s foray into the virtualization world?

CROSBY: We’ve always been a close partner with Microsoft, and for around two and a half years we have been working to enable Hyper-V to be a better hypervisor. We have partnered extensively with Microsoft on several projects to make Hyper-V a more competitive product in the industry. Our specific goal has always been fast, free, compatible, ubiquitous hypervisors. XenServer is compatible with Hyper-V.

Why would I want to do this? First of all, [XenServer] is free, so why would I not want to do this? Second, because it’s compatible, their footprint gives us a terrific opportunity to expand and upsell. There are use cases we can address that they cannot; we are significantly ahead on the enterprise feature set. The other thing is that we are embedded into hardware and they are not. So, customers can be confident that if they buy a server with XenServer built in, it’s going to run Hyper-V, it’s going to run Windows Server VMs, and it’s going to do it in a way that is entirely plugged into the Microsoft ecosystem.

Also, we extend that architecture with XenServer. Today, we can provision VMs onto Hyper-V from XenServer, and we have other things coming up that we haven’t announced, and we are completely aligned in VDI with XenDesktop, our desktop solution. The Microsoft channel will recommend XenDesktop to customers because it’s the world’s most scalable implementation of virtual desktop infrastructure. XenDesktop is all of the Citrix technology developed over the last 15-18 years applied to desktops as opposed to applications. That partnership with Microsoft is a tremendously strong one, and one that I believe is suited to a next-generation collaboration on virtualization.

So, the partnership is very strong. It’s strong because both we and Microsoft will make money out of our exploitation of that Hyper-V footprint.

Which brings up the question of interoperability, in general, of hypervisors. How important do you think it is for all the products out there — VMware, Xen, Microsoft, Virtual Iron, etc. — to work together?

CROSBY: Or indeed Red Hat’s KVM, right? There is even a lack of interoperability between Red Hat Enterprise Linux with Xen and with KVM — that’s a vendor with two virtualization products that don’t interoperate. From all perspectives, interoperability is critical. In that brief rattling off of a list, we’ve gone from the absurd to the sublime, in a funny sense. Interoperability is going to be key because every customer I speak to is not going to bet the farm on a single vendor.

It needs to be said that VMware has been sufficiently heavy-handed with the ISV ecosystem to date that nobody has confidence in anything VMware says about openness. They have taken the whole market to themselves, and it’s too late for them to say, “Yeah, we’ll open it up and anybody can go to market with us.” Nobody believes it. They’ve cooked their goose on that one, I’m afraid, which means that customers will go for multiple-hypervisor or multiple-virtualization models.

Is there a chance that we will see across-the-board interoperability, or is that unlikely?

CROSBY: VMware arguably has everything to lose, and the rest of the ecosystem has everything to gain. Because we are founded on the notion that the hypervisor should be free and we want everyone to be in the business of making that the case and competing with VMware, I view Microsoft as a bull. We are the ring through the nose of that bull, and we have a rope that we tug every once in a while to point them in the direction of VMware.

The reason it’s so interesting right now is that we are an embedded option on something approaching 50 percent of x86 servers worldwide. Microsoft is not there, but we’re compatible with Hyper-V, so VMs you create on that embedded XenServer will just run on Hyper-V. Customers definitely care about that. It’s also going to be compatible with what Stratus does with HA, what Marathon does in HA and fault tolerance, with what Symantec does with its Veritas Virtual Infrastructure, and with what Egenera does with PAN Manager. All of these products are compatible because we’re just an embedded component. What you’re seeing is a whole ecosystem of ISV virtualization offerings, each with its own value-add, in which we are arguably a perfectly form-factored component. Customers can be confident that, from all of these vendors, VMs will just boot and run. I think that’s a very important thing to say.

I think VMware has a lot to lose, they’re going to fight ‘til the end to keep their world proprietary and isolated from everybody, and the only thing I have to say to them is that there was a billion-dollar business in TCP/IP stacks before it went into the OS.

In a nutshell, how would you describe XenServer’s fit in the virtualization marketplace?

CROSBY: One: The world’s largest virtualization deployment — bar none — in production, is Xen-based. We hold the title. That’s maybe the equivalent of holding the title of the fastest supercomputer.

Two: We have a much richer ecosystem of offerings around us now than VMware does, a richer feature set in the form of things like fault tolerance, high availability and continuous availability. Why? Because the architecture is open and it encourages multiple ISVs to add value — and they can make money, whereas nobody is making money around VMware.

Three: We are compatible with Hyper-V, and we simply view the two as different tools for use in different projects. The Microsoft footprint is one that’s going to be important to us from a scale perspective, and our footprint is going to be important to Microsoft because (1) it counters the VMware footprint and addresses advanced use cases that they can’t yet address, and (2) we are completely partnered in the add-on stuff, like System Center VMM and XenDesktop.

In my view, the whole industry is now set up to compete with VMware. Will we pull it off? It’s going to be an interesting fight. I think that VMware has done a great job, they are an extremely competitive vendor, and they have done a fabulous job of winning customers’ hearts and minds. To them, hats off. It’s now time for the party to end.

Finally, I’m wondering what other factors you see driving virtualization advancement in the immediate to near future.

CROSBY: Virtualization, the kind we do and VMware does, is just an emergent property of Moore’s Law, and Moore’s Law is super-normal right now. So, no surprise, we have to virtualize these boxes, because the only thing that’s interesting about x86 is the legacy — large numbers of legacy single-threaded apps. That’s why virtualization is so relevant now.

Are those guys stopping? Gosh, no. We’ll see many-core systems very shortly. We’ll find that the hypervisor will become a key differentiator again, and there again the ability to scale to 64 or 128 cores is going to be key, as is the ability to scale to massive memory architectures, and so on. Can Xen do this? Yes. Xen already runs on a 4,096-node supercomputer from SGI, and I have absolute confidence that an open architecture there will always win. The real test out there is whether a proprietary hypervisor development team sitting in one place — Palo Alto — can do a better job than the world’s best engineers sitting at 42 of the world’s leading IT companies. The answer is no, they can’t. They just cannot pull it off.

—–

Comments about GRIDtoday are welcomed and encouraged. Write to me, Derrick Harris, at [email protected].
