Citrix CTO Simon Crosby on Battling VMware

By Derrick Harris

July 7, 2008

This interview was spurred by an article discussing VMware’s now-complete acquisition of B-hive Networks, which I used as a jumping-off point to get into a variety of topics affecting the virtualization marketplace. Citrix Systems CTO Simon Crosby wanted to clear the air around the acquisition, technology-wise, and also wanted to let our readers know that although XenServer can’t compete with VMware in terms of sheer number of users, Xen has its own claims to fame, including a very rich partner ecosystem and the undisputed title as the hypervisor of choice for cloud service providers like Amazon and Google.

I think Crosby accomplished his mission, and then some. He gave candid opinions on a wide range of virtualization-related topics, including how Citrix views its partnership with Microsoft around Hyper-V through a bullfighting metaphor, with XenServer being the ring in Microsoft’s nose and VMware being the matador. Every once in a while, he says, we just give it a tug and point Microsoft in the direction of VMware. Oh, and Crosby is not shy about calling out VMware — in fact, he takes shots at the market leader whenever possible. “I think VMware has a lot to lose, they’re going to fight ’til the end to keep their world proprietary and isolated from everybody,” said Crosby, “and the only thing I have to say to them is that there was a billion-dollar business in TCP/IP stacks before it went into the OS.”

VMware believes its acquisition of B-hive Networks gives it some very innovative IP and an interesting set of capabilities. What are your thoughts on this move?

SIMON CROSBY: [The B-hive acquisition is] mildly interesting. [First,] it creates numerous problems: it’s a bump in the wire. VMware doesn’t know how to deal with bumps in the wire, generally, and bumps in the wire always impose some reliability and/or performance concerns. Second, it’s not actionable stuff. It’s good at finding things out: it does a reasonable job of figuring out what apps are running on what VMs and where those VMs are running on physical infrastructure, and it paints a pretty good picture of that. It gives you some performance data as a result, but it is not an action engine. There is no way to use that information to then turn it into actionable ways to tune, modify, alter or control the infrastructure. Third, of course, it’s never been known to scale.

Technology-wise, Citrix has an incredibly rich set of capabilities in that capacity. Citrix EdgeSight is a massively scalable, essentially distributed SQL query engine that allows us to monitor performance SLAs, installed software, reasons for application failure — and so on, and so on, and so on — in very large deployments of application infrastructure. EdgeSight is just a part of the Citrix portfolio, so it’s part of XenApp, which used to be Presentation Server, it’s part of XenDesktop, which is our VDI offering, and, indeed, we will at some point leverage those core assets for use in datacenter infrastructure with XenServer.

But we also have the actionable component here, too. One of the things I’d love, in front of any VMware audience, is to have somebody say, “But you don’t do distributed resource scheduling in your product.” To which I respond, “Yes we do.” We have a product called NetScaler, with an ability to sit in-band in traffic and do Layer 7 policy-based, content-aware, application-aware management of the infrastructure. NetScaler plays a strategically critical role for us. We have customers who are using XenServer in production today, with NetScaler sitting in front of XenServer resource pools, and on the fly dynamically provisioning new VMs, or natively running instances of apps on servers, based on sensing the application’s performance. It is the world’s fastest Layer 7 switch and, moreover, it gives us the ability to do some way cool stuff. For example, if an application fails, it can move all traffic onto a redundantly provisioned VM or bring up a new VM. It also can be used for disaster recovery: if you lose a datacenter, NetScaler will simply move the traffic to an alternate site.

Now, when we talk about application delivery, we are not talking about virtualization; virtualization is just a part of application delivery. And this is why I think my friends at VMware have got the whole thing backward. They think that virtualization is the raison d’etre of the datacenter, and we think that virtualization is kind of a cool feature set to have around for some circumstances. This is a very good example of that. NetScaler, today, drives 75 percent of Internet transactions, and that’s application delivery for native workloads and for virtual workloads. VMware is well behind, but they’re starting out on an interesting path.
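(To picture the kind of Layer 7 “sense and respond” loop Crosby is describing, here is a minimal, purely hypothetical sketch: probe application-level health behind a load balancer, shift traffic to a standby instance on failure, and ask the provisioning layer for a replacement. The endpoints and the redirect_traffic/provision_replacement helpers are illustrative placeholders, not NetScaler or XenServer APIs.)

```python
# Hypothetical sketch of an application-aware health loop. On a failed
# Layer 7 check it drains the member and requests a replacement VM.
# The URLs and helper functions below are placeholders for illustration.
import time
import urllib.request

POOL = ["http://app-vm-1/health", "http://app-vm-2/health"]   # hypothetical members
STANDBY = "http://app-vm-standby"                             # hypothetical standby

def healthy(url, timeout=2):
    """Layer 7 check: does the application itself answer with HTTP 200?"""
    try:
        return urllib.request.urlopen(url, timeout=timeout).status == 200
    except Exception:
        return False

def redirect_traffic(bad, good):
    """Placeholder: tell the load balancer to drain 'bad' and route to 'good'."""
    print(f"draining {bad}, routing traffic to {good}")

def provision_replacement(bad):
    """Placeholder: ask the provisioning system for a fresh VM or native instance."""
    print(f"provisioning a replacement for {bad}")

while True:
    for member in POOL:
        if not healthy(member):
            redirect_traffic(member, STANDBY)   # keep the app available now
            provision_replacement(member)       # restore capacity afterward
    time.sleep(10)
```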

Was Citrix working with B-hive or sending customers there for any reason?

CROSBY: I’m not aware of us having done that, although they are a partner and they’re Citrix-ready. The good thing I can say about the world is that we’re rapidly moving toward one that is hypervisor-agnostic. Even XenServer today, in the Platinum Edition, has the ability to provision VMs onto VMware, Microsoft Hyper-V, Xen itself, and onto bare metal. At the Platinum level in every Citrix product, we support VMware VMs, so we don’t see a particular issue there. It’s a nice opportunity for us to leverage our application-delivery assets, and if a customer has purchased VMware, that’s fine — they just happened to pay too much for it.

So virtualization isn’t the be-all and end-all in the datacenter?

CROSBY: What I meant by that is that virtualization is a technology that occurs at multiple layers of the stack, and it connotes agility, dynamism and availability. We already do in production today many of those things for native workloads. How do we do that? Well, with Provisioning Server, which is just a feature of XenServer at the Platinum level, we have the ability to dynamically and instantly provision native workloads. That means on any server at any time, I can bring up any app in a VM. And the fact that that VM is running without a hypervisor — is running native on a server — shouldn’t particularly worry you. I’ve managed to achieve agility, dynamism and availability, and we’ve managed to do that for native workloads, and if you wanted to run a VM on a hypervisor, we could serve a VM onto Xen, Hyper-V or VMware.

So some of the benefits of virtualization, which essentially are encapsulation, centralization and dynamism, are conferred even on native workloads. We have an ability to address 100 percent of the datacenter workload today, even those which you wouldn’t want to virtualize for some performance-related or other reason. So, we’re not worried about the fact that only some 10 percent of servers are virtualized, and, moreover, we don’t think that virtualization at the operating system and server level is the only level at which you virtualize. If you want to deliver massively scalable virtualization, you need to separate the apps from the OS, and we do this today in XenDesktop…

What about virtual environments being the ideal platform for mission-critical applications? Is this a legitimate hope? Are enterprises comfortable enough to port their most sacred apps onto VMs?

CROSBY: Yes, they are. It’s significantly the case that large applications are moving into production on virtualization, and it really has come down to “are the app vendors ready to support it,” “are they certified,” and so on. The app set there is increasingly large, and it really comes down to performance. It’s happening.

Is virtualization becoming the best platform on which to run applications?

CROSBY: There are some things that will never be virtualized, just because there are different ways of building the app. For example, if you look at the way Web 2.0 apps are built, or if you look at Oracle RAC, you have a cluster controller that throws up new instances of native execution workloads on demand. That’s a different way of virtualizing infrastructure, and a perfectly legitimate way to do so. Do I need a hypervisor to do that? Not necessarily. At that point, the only value that hypervisor-based virtualization would offer is flexibility in provisioning a unit of work, which essentially would be my VM, my packaged execution unit, and making it boot and run.

There are some that will be virtualized, and there are some that won’t, but we can confer the key properties of agility, availability and dynamism to all workloads, whether they are virtualized or not. Virtualized workloads are really focused on the notion of provisioning the resources of a server for multiple VMs so that you can make the best use of your infrastructure.

Yes, apps are moving to production on virtualization, but the other thing that’s happening is the concepts of virtualization — the key one being encapsulation of the workload as a VM, and then provisioning that VM onto some piece of infrastructure, including native infrastructure — also are being achieved, and we do that today with XenServer, while still delivering dynamism and agility.

Speaking of dynamism and agility, VMware has been talking recently about the notion of cloud computing, about having a virtualized cloud to run your entire datacenter. Is Citrix looking at this kind of a future for virtualization technology?

CROSBY: The question I would ask VMware in response to that is: “Great. Sounds cool. Now, which cloud are you in?” What does VMware know about clouds? How does Virtual Center scale? Virtual Center does not scale; it has a huge issue, which is that you reach a decent number of servers and the thing falls over because it’s a single point of management control. VMware knows a lot less about scaling, in general, than does the Xen community.

And all of the interesting cloud implementations of virtualization are built on Xen. No. 1 in the world would be Amazon, and it’s Xen and it’s massive — absolutely massive. Google has a massive Xen deployment. I don’t claim any money out of that, but I claim victory. That is, an open architecture that encourages people to build all of the orchestration capabilities for a very flat, very efficient, massively optimized datacenter environment. Yep, we, collectively, have done this.

From a resource model, we believe that XenServer scales better today than VMware does. Obviously, in terms of real customer deployments, the number of our customers who have hundreds of servers running XenServer is small — it’s in the tens — so we have very little to offer there in terms of very large-scale proof points. But I’ve got to believe the architecture we’ve got is far more scalable than something that is based on the notion that a single point of control — Virtual Center — can be anything that drives a cloud.

If a cloud is a big deal for VMware, I can tell you another thing that’s not going to happen: that cloud is not going to be composed of servers for which the virtualization layer costs $6,000 per server. That just is not going to happen at scale.

Price and scalability aside, is it a legitimate notion that this could happen?

CROSBY: Absolutely. Two things happen: first, IT becomes a cloud to the rest of the organization, and the interface to the rest of the organization essentially becomes a provisioning interface, whereby if I have an application workload to run, I submit it to my cloud. I don’t get to choose what server it runs on, but I have an SLA, and so on. All of the packaging standards to build that are things that we worked on in the OVF work that is now part of the DMTF.

I think that is a very real transformation that is happening. Large organizations are moving from an acquisition model — “I need a server, how do I buy one?” — to one where, if you need a physical server, you have to go all the way up to the CIO. The predominant use case is that you get a virtual server unless you make a very good case for buying a physical one. That is absolutely changing.

The other kind of cloud that’s interesting is the real cloud, the third-party ones. There are some very interesting opportunities there in disaster recovery, availability and instant scalability of applications.

What are you building into XenServer to address the cloud computing model?

CROSBY: XenServer itself has a very interesting architecture, in that XenServer is inherently composed of resource pools. The basic building block of the architecture is a resource pool. Resource pools themselves contain all of the management of the pool — every server in the pool contains all of the management information needed to manage the entire pool — and we leverage a much more scalable storage architecture than VMware does. VMFS, their clustered file system, does not currently scale above 16 servers. Ours scales arbitrarily, because we can use something very flat like iSCSI or NFS or any kind of backend storage mechanism.

And resource pools in our architecture simply provide APIs to be driven by a provisioning system, or any system that wants to drive a virtualized infrastructure. XenCenter is a management user interface which is perfect for managing our products for SMEs or enterprise departments, but we have partners who directly leverage XenServer’s APIs to build massively scalable systems. Platform Computing has taken all of its grid stuff and wrapped it around XenServer, and there are various other folks also doing this. At the resource pool level, we look to scale massively through collaboration with vendors who can sell it to different markets.
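(As a concrete illustration of driving those resource-pool APIs, here is a minimal sketch using the XenAPI Python bindings that XenServer exposes. The host address, credentials and the start-any-halted-VM logic are placeholders for illustration, not a recommended management policy.)

```python
# Minimal sketch: connect to a XenServer resource pool and drive it through
# the XenAPI bindings. Host, credentials and the start policy are placeholders.
import XenAPI

session = XenAPI.Session("https://xenserver-pool-master.example.com")
session.xenapi.login_with_password("root", "password")
try:
    # Any host in the pool holds the management state for the whole pool,
    # so one connection to the pool master can enumerate and control every VM.
    for ref, record in session.xenapi.VM.get_all_records().items():
        if record["is_a_template"] or record["is_control_domain"]:
            continue  # skip templates and dom0
        print(record["name_label"], record["power_state"])
        if record["power_state"] == "Halted":
            # Arguments are (vm, start_paused, force)
            session.xenapi.VM.start(ref, False, False)
finally:
    session.xenapi.session.logout()
```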

When it comes to dynamism and agility, how much of this can be done through a hypervisor? Where do things like storage virtualization and I/O virtualization come into play?

CROSBY: This is an area where I think, until now, the virtualization architecture that is out there really limits innovation to the software stack running on the server, because that’s the stuff they sell. Everybody else in the industry has been out there innovating; the storage guys are doing a fabulous job. VMFS basically turns storage into dumb blocks, where VMs are invisible to storage. What our architecture does is expose VMs as first-class objects to the storage infrastructure, so that we can leverage all the capabilities of the storage infrastructure to use array-based snapshots, clones, thin provisioning, HA, backup, DR, etc., instead of doing all of this in software on the host. Storage virtualization is about to get a huge lift by collaborating with us on an open architecture.

And, indeed, this same architecture transfers directly into Hyper-V. Bear in mind that all a hypervisor does is virtualize the resources of a single server. Really, what becomes interesting is how you make multiple servers scale into pooled resources in a datacenter. That always involves a conversation about how you deal with storage and how you deal with networks and fabrics. I believe passionately that our open architecture is the right way to do that, because then the best storage solutions will shine. When VMs can be seen from the storage infrastructure, then storage can snapshot VMs rather than us having to do all that with more software in our layer.
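(To illustrate what exposing VMs as first-class objects to storage can look like from the API side, here is a hedged sketch, again using the XenAPI Python bindings: the snapshot request goes through the storage-repository layer, where an array-aware driver can satisfy it with a hardware snapshot instead of host-side copying. The host, credentials and VM name are placeholders.)

```python
# Sketch: snapshot a VM's disks through the XenServer storage API. With an
# array-aware storage repository (SR) driver, VDI.snapshot can be backed by
# the array's own snapshot capability rather than software copying on the host.
# Host, credentials and the VM name below are placeholders.
import XenAPI

session = XenAPI.Session("https://xenserver-pool-master.example.com")
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("my-app-vm")[0]
    for vbd in session.xenapi.VM.get_VBDs(vm):
        if session.xenapi.VBD.get_type(vbd) != "Disk":
            continue  # skip virtual CD drives
        vdi = session.xenapi.VBD.get_VDI(vbd)
        snap = session.xenapi.VDI.snapshot(vdi, {})  # driver_params left empty
        print("snapshotted", session.xenapi.VDI.get_name_label(vdi),
              "as", session.xenapi.VDI.get_name_label(snap))
finally:
    session.xenapi.session.logout()
```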

You mentioned Hyper-V. What are your thoughts on Microsoft’s foray into the virtualization world?

CROSBY: We’ve always been a close partner with Microsoft, and for around two and a half years we have been working to enable Hyper-V to be a better hypervisor. We have partnered extensively with Microsoft on several projects to make Hyper-V a more competitive product in the industry. Our specific goal has always been fast, free, compatible, ubiquitous hypervisors. XenServer is compatible with Hyper-V.

Why would I want to do this? First of all, [XenServer] is free, so why would I not want to do this? Second, because it’s compatible, their footprint gives us a terrific opportunity to expand and upsell. There are use cases we can address that they cannot; we are significantly ahead on the enterprise feature set. The other thing is that we are embedded into hardware and they are not. So, customers have a rational right to expect that if they buy a server with XenServer built in, it’s going to run Hyper-V, it’s going to run Windows Server VMs, and it’s going to do it in a way that is entirely plugged into the Microsoft ecosystem.

Also, we extend that architecture with XenServer. Today, we do VM provisioning to them in XenServer, and we have other things coming up that we haven’t announced, and we are completely aligned in VDI, which is our desktop solution. The Microsoft channel will recommend it to customers because it’s the world’s most scalable implementation of virtual desktop infrastructure. XenDesktop is all of the Citrix technology developed over the last 15-18 years applied to desktops as opposed to applications. That partnership with Microsoft is a tremendously strong one and one that I believe is suited to a next-generation collaboration on virtualization.

So, the partnership is very strong. It’s strong because both we and Microsoft will make money out of our exploitation of that Hyper-V footprint.

Which brings up the question of interoperability, in general, of hypervisors. How important do you think it is for all the products out there — VMware, Xen, Microsoft, Virtual Iron, etc. — to work together?

CROSBY: Or indeed Red Hat’s KVM, right? And a lack of interoperability between Red Hat Enterprise Linux with Xen and KVM — that’s a vendor with two virtualization products that don’t interoperate. From all perspectives, it’s critical. In that brief rattling off of a list, we’ve gone from the absurd to the sublime, in a funny sense. Interoperability is going to be key because every customer that I speak to is not going to bet the farm on a single vendor.

It needs to be said that VMware has been sufficiently heavy-handed with the ISV ecosystem to date that nobody has confidence in anything VMware says about openness. They have taken the whole market for themselves, and it’s too late for them to say, “Yeah, we’ll open it up and anybody can go to market with us.” Nobody believes it. They’ve cooked their goose on that one, I’m afraid, which means that customers will go for multiple-hypervisor or multiple-virtualization models.

Is there a chance that we will see across-the-board interoperability, or is that unlikely?

CROSBY: VMware arguably has everything to lose, and the rest of the ecosystem has everything to gain. Because we are founded on the notion that the hypervisor should be free and we want everyone to be in the business of making that the case and competing with VMware, I view Microsoft as a bull. We are the ring through the nose of that bull, and we have a rope that we tug every once in a while to point them in the direction of VMware.

The reason it’s so interesting right now is that we are an embedded option on something approaching 50 percent of x86 servers worldwide. Microsoft is not there, but we’re compatible with Hyper-V, so VMs you create on that embedded footprint will just run on Hyper-V. Customers definitely care about that. It’s also going to be compatible with what Stratus does with HA, what Marathon does in HA and fault tolerance, with what Symantec does with its Veritas Virtual Infrastructure, with what Egenera does with PAN Manager. All of these products are compatible because we’re just an embedded component. What you’re seeing is a whole ecosystem of ISV virtualization stuff, all with its own value-add, in which we are arguably a perfectly form-factored component. Customers can be confident that from all of these vendors, VMs will just boot and run. I think that’s a very important thing to say.

I think VMware has a lot to lose, they’re going to fight ’til the end to keep their world proprietary and isolated from everybody, and the only thing I have to say to them is that there was a billion-dollar business in TCP/IP stacks before it went into the OS.

In a nutshell, how would you describe XenServer’s fit in the virtualization marketplace?

CROSBY: One: The world’s largest virtualization deployment — bar none — in production, is Xen-based. We hold the title. That’s maybe the equivalent of holding the title of the fastest supercomputer.

Two: We have a much richer ecosystem of offerings around us now than VMware does, a richer feature set in the form of things like fault tolerance, high availability and continuous availability. Why? Because the architecture is open and it encourages multiple ISVs to add value — and they can make money, whereas nobody is making money around VMware.

Three: We are compatible with Hyper-V, and we simply view the two as different tools for use in different projects. The Microsoft footprint is one that’s going to be important to us from a scale perspective, and our footprint is going to be important to Microsoft because (1) it counters the VMware footprint and it addresses advanced use cases that they can’t yet address, and (2) because we are completely partnered in the add-on stuff, like System Center VMM and XenDesktop.

In my view, the whole industry is now set up to compete with VMware. Will we pull it off? It’s going to be an interesting fight. I think that VMware has done a great job, they are an extremely competitive vendor, and they have done a fabulous job of winning customers’ hearts and minds. To them, hats off. It’s now time for the party to end.

Finally, I’m wondering what other factors you see driving virtualization advancement in the immediate to near future.

CROSBY: Virtualization, the kind we do and VMware does, is just an emergent property of Moore’s Law. Super-normal is what Moore’s Law is right now. So, no surprise, we have to virtualize these boxes because the only thing that’s interesting about x86 is the legacy — large numbers of legacy single-threaded apps. That’s why virtualization is so relevant now.

Are those guys stopping? Gosh, no. We’ll see many-core systems very shortly. We’ll find that the hypervisor will become a key differentiator again, and there again the ability to scale to 64 or 128 cores is going to be key, as is the ability to scale to massive memory architectures, and so on. Can Xen do this? Yes. Xen already runs on a 4,096-node supercomputer from SGI, and I have absolute confidence that an open architecture there will always win. The test that is really out there is whether a proprietary hypervisor development team sitting in one place — Palo Alto — can do a better job than the world’s best engineers sitting at 42 of the world’s leading IT companies. The answer is no, they can’t. They just cannot pull it off.

—–

Comments about GRIDtoday are welcomed and encouraged. Write to me, Derrick Harris, at [email protected].
