Citrix CTO Simon Crosby on Battling VMware

By Derrick Harris

July 7, 2008

This interview was spurred by an article discussing VMware’s now-complete acquisition of B-hive Networks, which I used as a jumping-off point to get into a variety of topics affecting the virtualization marketplace. Citrix Systems CTO Simon Crosby wanted to clear the air around the acquisition, technology-wise, and also wanted to let our readers know that although XenServer can’t compete with VMware in terms of sheer number of users, Xen has its own claims to fame, including a very rich partner ecosystem and the undisputed title as the hypervisor of choice for cloud service providers like Amazon and Google.

I think Crosby accomplished his mission, and then some. He gave candid opinions on a wide range of virtualization-related topics, including how Citrix views its Hyper-V partnership with Microsoft through a bullfighting metaphor: XenServer is the ring in Microsoft’s nose, and VMware is the matador. Every once in a while, he says, we just give it a tug and point Microsoft in the direction of VMware. Oh, and Crosby is not shy about calling out VMware — in fact, he takes shots at the market leader whenever possible. “I think VMware has a lot to lose, they’re going to fight ’til the end to keep their world proprietary and isolated from everybody,” said Crosby, “and the only thing I have to say to them is that there was a billion-dollar business in TCP/IP stacks before it went into the OS.”

VMware believes its acquisition of B-hive Networks gives it some very innovative IP and an interesting set of capabilities. What are your thoughts on this move?

SIMON CROSBY: [The B-hive acquisition is] mildly interesting. [First,] it creates numerous problems: it’s a bump in the wire. VMware doesn’t know how to deal with bumps in the wire, generally, and bumps in the wire always impose some reliability and/or performance concerns. Second, it’s not actionable stuff. It’s good at finding things out; it does a reasonable job of figuring out what apps are running on what VMs and where those VMs are running on physical infrastructure, and it paints a pretty good picture of that. It gives you some performance data as a result, but it is not an action engine. There is no way to use that information to then turn it into actionable ways to tune, modify, alter or control the infrastructure. Third, of course, it’s never been known to scale.

Technology-wise, Citrix has an incredibly rich set of capabilities in that capacity. Citrix EdgeSight is a massively scalable, essentially distributed SQL query engine that allows us to monitor performance SLAs, installed software, reasons for application failure — and so on, and so on, and so on — in very large deployments of application infrastructure. EdgeSight is just a part of the Citrix portfolio: it’s part of XenApp, which used to be Presentation Server; it’s part of XenDesktop, which is our VDI offering; and, indeed, we will at some point leverage those core assets for use in datacenter infrastructure with XenServer.

But we also have the actionable component here, too. One of the things I’d love to hear in front of any VMware audience is for somebody to say, “But you don’t do distributed resource scheduling in your product.” To which I respond, “Yes we do.” We have a product called NetScaler, with an ability to sit in-band in traffic and do Layer 7 policy-based, content-aware, application-aware management of the infrastructure. NetScaler plays a strategically critical role for us. We have customers who are using XenServer in production today, with NetScaler sitting in front of XenServer resource pools, and on the fly dynamically provisioning new VMs, or natively running instances of apps on servers, based on sensing the application’s performance. It is the world’s fastest Layer 7 switch and, moreover, it gives us the ability to do some way cool stuff. For example, if an application fails, it can move all traffic onto a redundantly provisioned VM or bring up a new VM. It also can be used for disaster recovery: if you lose a datacenter, NetScaler will simply move the traffic to an alternate site.
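The control pattern Crosby describes — probe application performance in-band, fail traffic over to a redundant VM, then backfill capacity from the resource pool — can be sketched in a few lines. The sketch below is purely illustrative: the function names, threshold and pool hooks are hypothetical placeholders, not actual NetScaler or XenServer APIs.

```python
# Illustrative sketch only: placeholder hooks standing in for whatever
# monitoring and provisioning interfaces a real deployment exposes.
import time
from dataclasses import dataclass

LATENCY_SLA_MS = 250      # assumed per-request latency target
CHECK_INTERVAL_S = 5      # how often to probe the application


@dataclass
class Backend:
    name: str
    healthy: bool = True


def probe_latency_ms(backend: Backend) -> float:
    """Stand-in for an in-band Layer 7 health probe."""
    return 120.0 if backend.healthy else float("inf")


def provision_replacement(pool: str) -> Backend:
    """Stand-in for asking a resource pool to bring up a new VM."""
    print(f"provisioning a new VM in pool {pool}")
    return Backend(name=f"{pool}-replacement")


def steer_traffic(vserver: str, backend: Backend) -> None:
    """Stand-in for repointing the virtual server at a backend."""
    print(f"{vserver}: now sending traffic to {backend.name}")


def control_loop(vserver: str, active: Backend, standby: Backend,
                 pool: str, cycles: int = 10) -> None:
    for _ in range(cycles):
        if probe_latency_ms(active) > LATENCY_SLA_MS:
            # Fail over to the warm standby first, then backfill the pool.
            steer_traffic(vserver, standby)
            active, standby = standby, provision_replacement(pool)
        time.sleep(CHECK_INTERVAL_S)
```

The same loop generalizes to the disaster-recovery case he mentions: the standby backend simply lives at an alternate site.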

Now, when we talk about application delivery, we are not talking about virtualization; virtualization is just a part of application delivery. And this is why I think my friends at VMware have got the whole thing backward. They think that virtualization is the raison d’etre of the datacenter, and we think that virtualization is kind of a cool feature set to have around for some circumstances. This is a very good example of that. NetScaler, today, drives 75 percent of Internet transactions, and that’s application delivery for native workloads and for virtual workloads. VMware is well behind, but they’re starting out on an interesting path.

Was Citrix working with B-hive or sending customers there for any reason?

CROSBY: I’m not aware of us having done that, although they are a partner and they’re Citrix-ready. The good thing I can say about the world is that we’re rapidly moving toward one that is hypervisor-agnostic. Even XenServer today, in the Platinum Edition, has the ability to provision VMs onto VMware, Microsoft Hyper-V, Xen itself, and onto bare metal. At the Platinum level in every Citrix product, we support VMware VMs, so we don’t see a particular issue there. It’s a nice opportunity for us to leverage our application-delivery assets, and if a customer has purchased VMware, that’s fine — they just happened to pay too much for it.

So virtualization isn’t the be-all and end-all in the datacenter?

CROSBY: What I meant by that is that virtualization is a technology that occurs at multiple layers of the stack, and it connotes agility, dynamism and availability. We already do in production today many of those things for native workloads. How do we do that? Well, with Provisioning Server, which is just a feature of XenServer at the Platinum level, we have the ability to dynamically and instantly provision native workloads. That means on any server at any time, I can bring up any app in a VM. And the fact that that VM is running without a hypervisor — is running native on a server — shouldn’t particularly worry you. I’ve managed to achieve agility, dynamism and availability, and we’ve managed to do that for native workloads, and if you wanted to run a VM on a hypervisor, we could serve a VM onto Xen, Hyper-V or VMware.

Some of the benefits of virtualization — which essentially are encapsulation, centralization and dynamism — are conferred even on native workloads. We have an ability to address 100 percent of the datacenter workload today, even those workloads you wouldn’t want to virtualize for some performance-related or other reason. So, we’re not worried about the fact that only some 10 percent of servers are virtualized, and, moreover, we don’t think that the operating system and server level is the only level at which you virtualize. If you want to deliver massively scalable virtualization, you need to separate the apps from the OS, and we do this today in XenDesktop…

What about virtual environments being the ideal platform for mission-critical applications? Is this a legitimate hope? Are enterprises comfortable enough to port their most sacred apps onto VMs?

CROSBY: Yes, they are. Large applications are very much moving into production on virtualization, and it really has come down to “are the app vendors ready to support it,” “are they certified,” and so on. The app set there is increasingly large, and it really comes down to performance. It’s happening.

Is virtualization becoming the best platform on which to run applications?

CROSBY: There are some things that will never be virtualized, just because there are different ways of building the app. For example, if you look at the way Web 2.0 apps are built, or if you look at Oracle RAC, you have a cluster controller that throws up new instances of native execution workloads on demand. That’s a different way of virtualizing infrastructure, and a perfectly legitimate way to do so. Do I need a hypervisor to do that? Not necessarily. At that point, the only value that hypervisor-based virtualization would offer is flexibility in provisioning a unit of work, which essentially would be my VM, my packaged execution unit, and making it boot and run.

There are some that will be virtualized, and there are some that won’t, but we can confer the key properties of agility, availability and dynamism to all workloads, whether they are virtualized or not. Virtualized workloads are really focused on the notion of provisioning the resources of a server for multiple VMs so that you can make the best use of your infrastructure.

Yes, apps are moving to production on virtualization, but the other thing that’s happening is that the concepts of virtualization — the key one being encapsulation of the workload as a VM, and then provisioning that VM onto some piece of infrastructure, including native infrastructure — are also being achieved. We do that today with XenServer, while still delivering dynamism and agility.

Speaking of dynamism and agility, VMware has been talking recently about the notion of cloud computing, about having a virtualized cloud to run your entire datacenter. Is Citrix looking at this kind of a future for virtualization technology?

CROSBY: The question I would ask VMware in response to that is: “Great. Sounds cool. Now, which cloud are you in?” What does VMware know about clouds? How does Virtual Center scale? Virtual Center does not scale; it has a huge issue, which is that you reach a decent number of servers and the thing falls over because it’s a single point of management control. VMware knows a lot less about scaling, in general, than does the Xen community.

And all of the interesting cloud implementations of virtualization are built on Xen. No. 1 in the world would be Amazon, and it’s Xen and it’s massive — absolutely massive. Google has a massive Xen deployment. I don’t claim any money out of that, but I claim victory. That is, an open architecture that encourages people to build all of the orchestration capabilities for a very flat, very efficient, massively optimized datacenter environment. Yep, we, collectively, have done this.

From a resource model, we believe that XenServer scales better today than VMware does. Obviously, in terms of real customer deployments, the number of our customers who have hundreds of servers running XenServer is small — it’s in the tens — so we have very little to offer there in terms of very large-scale proof points. But I’ve got to believe the architecture we’ve got is far more scalable than something that is based on the notion that a single point of control — Virtual Center — can be anything that drives a cloud.

If a cloud is a big deal for VMware, I can tell you another thing that’s not going to happen: that cloud is not going to be composed of servers for which the virtualization layer costs $6,000 per server. That just is not going to happen at scale.

Price and scalability aside, is it a legitimate notion that this could happen?

CROSBY: Absolutely. Two things happen: first, IT becomes a cloud to the rest of the organization, and the interface to the rest of the organization essentially becomes a provisioning interface, whereby if I have an application workload to run, I submit it to my cloud. I don’t get to choose what server it runs on, but I have an SLA, and so on. All of the packaging standards to build that are things that we worked on in the OVF work that is now part of the DMTF.

I think that is a very real transformation that is happening. Large organizations are moving from an acquisition model — “I need a server, how do I buy one?” — to one where, if you need a physical server, you have to go all the way up to the CIO. The predominant case is that you get a virtual server unless you make a very good case for buying a physical one. That is absolutely changing.

The other kind of cloud that’s interesting is the real cloud, the third-party ones. There are some very interesting opportunities there in disaster recovery, availability and instant scalability of applications.

What are you building into XenServer to address the cloud computing model?

CROSBY: XenServer itself has a very interesting architecture, in that XenServer is inherently composed of resource pools. The basic building block of the architecture is a resource pool. Resource pools themselves contain all of the management of the pool — every server in the pool contains all of the management information needed to manage the entire pool — and we leverage a much more scalable storage architecture than they do. VMFS, their clustered file system, does not currently scale above 16 servers. Ours scales arbitrarily, because we can use something very flat like iSCSI or NFS or any kind of backend storage mechanism.

And resource pools in our architecture simply provide APIs to be driven by a provisioning system, or any system that wants to drive a virtualized infrastructure. XenCenter is a management user interface which is perfect for managing our products for SMEs or enterprise departments, but we have partners who directly leverage XenServer’s APIs to build massively scalable systems. Platform Computing has taken all of its grid stuff and wrapped it around XenServer, and there are various other folks also doing this. At the resource pool level, we look to scale massively through collaboration with vendors who can sell it to different markets.
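For a concrete sense of what “directly leverage XenServer’s APIs” looks like, here is a minimal sketch, assuming XenServer’s XML-RPC XenAPI — the interface a resource pool exposes to provisioning systems. The host address and credentials are placeholders, and the exact method names and record fields should be verified against the XenServer SDK documentation for the release in use.

```python
# Minimal sketch of driving a XenServer resource pool over its XML-RPC
# XenAPI. Host and credentials are placeholders; verify method names and
# record fields against the SDK for your XenServer release.
import xmlrpc.client

HOST = "https://xenserver-pool-master.example.com"  # hypothetical pool master
USER, PASSWORD = "root", "secret"                    # placeholder credentials

xapi = xmlrpc.client.ServerProxy(HOST)

# XenAPI calls return a structure with 'Status' and 'Value' fields.
session = xapi.session.login_with_password(USER, PASSWORD)["Value"]
try:
    # Walk the VMs the pool knows about and start any halted, non-template VM.
    for vm in xapi.VM.get_all(session)["Value"]:
        record = xapi.VM.get_record(session, vm)["Value"]
        if not record["is_a_template"] and record["power_state"] == "Halted":
            xapi.VM.start(session, vm, False, False)  # start_paused, force
finally:
    xapi.session.logout(session)
```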

When it comes to dynamism and agility, how much of this can be done through a hypervisor? Where do things like storage virtualization and I/O virtualization come into play?

CROSBY: This is an area where I think, until now, the virtualization architecture that is out there has really limited innovation to the software stack running on the server, because that’s the stuff they sell. Everybody else in the industry has been out there innovating; the storage guys are doing a fabulous job. VMFS basically turns storage into dumb blocks where VMs are invisible to storage. What our architecture does is expose VMs as first-class objects to the storage infrastructure, so that we can leverage all the capabilities of the storage infrastructure — array-based snapshots, clones, thin provisioning, HA, backup, DR, etc. — instead of doing all of this in software on the host. Storage virtualization is about to get a huge lift by collaborating with us on an open architecture.

And, indeed, this same architecture transfers directly into Hyper-V. Bear in mind that all a hypervisor does is virtualize the resources of a single server. Really, what becomes interesting is how you make multiple servers scale into pooled resources in a datacenter. That always involves a conversation about how you deal with storage and how you deal with networks and fabrics. I believe passionately that our open architecture is the right way to do that, because then the best storage solutions will shine. When VMs can be seen from the storage infrastructure, then storage can snapshot VMs rather than us having to do all that with more software in our layer.

You mentioned Hyper-V. What are your thoughts on Microsoft’s foray into the virtualization world?

CROSBY: We’ve always been a close partner with Microsoft, and for around two and a half years we have been working to help make Hyper-V a better hypervisor. We have partnered extensively with Microsoft on several projects to make Hyper-V a more competitive product in the industry. Our specific goal has always been fast, free, compatible, ubiquitous hypervisors. XenServer is compatible with Hyper-V.

Why would I want to do this? First of all, [XenServer] is free, so why would I not want to do this? Second, because it’s compatible, their footprint gives us a terrific opportunity to expand and upsell. There are use cases we can address that they cannot; we are significantly ahead on the enterprise feature set. The other thing is that we are embedded into hardware and they are not. So, customers can reasonably expect that if they buy a server with XenServer built in, it’s going to run Hyper-V, it’s going to run Windows Server VMs, and it’s going to do it in a way that is entirely plugged into the Microsoft ecosystem.

Also, we extend that architecture with XenServer. Today, we do VM provisioning to them in XenServer, and we have other things coming up that we haven’t announced, and we are completely aligned in VDI, which is our desktop solution. The Microsoft channel will recommend it to customers because it’s the world’s most scalable implementation of virtual desktop infrastructure. XenDesktop is all of the Citrix technology developed over the last 15-18 years applied to desktops as opposed to applications. That partnership with Microsoft is a tremendously strong one and one that I believe is suited to a next-generation collaboration on virtualization.

So, the partnership is very strong. It’s strong because both we and Microsoft will make money out of our exploitation of that Hyper-V footprint.

Which brings up the question of interoperability, in general, of hypervisors. How important do you think it is for all the products out there — VMware, Xen, Microsoft, Virtual Iron, etc. — to work together?

CROSBY: Or indeed Red Hat’s KVM, right? And there’s a lack of interoperability between Red Hat Enterprise Linux’s Xen and its KVM — that’s one vendor with two virtualization products that don’t interoperate. From all perspectives, it’s critical. In that brief rattling off of a list, we’ve gone from the absurd to the sublime, in a funny sense. Interoperability is going to be key, because every customer that I speak to is not going to bet the farm on a single vendor.

It needs to be said that VMware has been sufficiently heavy-handed with the ISV ecosystem to date that nobody has confidence in anything VMware says about openness. They have taken the whole market to themselves, and it’s too late for them to say, “Yeah, we’ll open it up and anybody can go to market with us.” Nobody believes it. They’ve cooked their goose on that one, I’m afraid, which means that customers will go for multiple-hypervisor or multiple-virtualization models.

Is there a chance that we will see across-the-board interoperability, or is that unlikely?

CROSBY: VMware arguably has everything to lose, and the rest of the ecosystem has everything to gain. Because we are founded on the notion that the hypervisor should be free and we want everyone to be in the business of making that the case and competing with VMware, I view Microsoft as a bull. We are the ring through the nose of that bull, and we have a rope that we tug every once in a while to point them in the direction of VMware.

The reason it’s so interesting right now is that we are an embedded option on something approaching 50 percent of x86 servers worldwide. Microsoft is not embedded there, but we’re compatible with Hyper-V, so VMs you create on XenServer will just run on Hyper-V. Customers definitely care about that. It’s also going to be compatible with what Stratus does with HA, what Marathon does in HA and fault tolerance, with what Symantec does with its Veritas Virtual Infrastructure, with what Egenera does with PAN Manager. All of these products are compatible because we’re just an embedded component. What you’re seeing is a whole ecosystem of ISV virtualization stuff, all with its own value-add, in which we are arguably a perfectly form-factored component. Customers can be confident that from all of these vendors, VMs will just boot and run. I think that’s a very important thing to say.

I think VMware has a lot to lose, they’re going to fight ‘til the end to keep their world proprietary and isolated from everybody, and the only thing I have to say to them is that there was a billion-dollar business in TCP/IP stacks before it went into the OS.

In a nutshell, how would you describe XenServer’s fit in the virtualization marketplace?

CROSBY: One: The world’s largest virtualization deployment — bar none — in production, is Xen-based. We hold the title. That’s maybe the equivalent of holding the title of the fastest supercomputer.

Two: We have a much richer ecosystem of offerings around us now than VMware does, a richer feature set in the form of things like fault tolerance, high availability and continuous availability. Why? Because the architecture is open and it encourages multiple ISVs to add value — and they can make money, whereas nobody is making money around VMware.

Three: We are compatible with Hyper-V, and we simply view the two as different tools for use in different projects. The Microsoft footprint is one that’s going to be important to us from a scale perspective, and our footprint is going to be important to Microsoft because (1) it counters the VMware footprint and addresses advanced use cases that they can’t yet address, and (2) we are completely partnered in the add-on stuff, like System Center VMM and XenDesktop.

In my view, the whole industry is now set up to compete with VMware. Will we pull it off? It’s going to be an interesting fight. I think that VMware has done a great job, they are an extremely competitive vendor, and they have done a fabulous job of winning customers’ hearts and minds. To them, hats off. It’s now time for the party to end.

Finally, I’m wondering what other factors you see driving virtualization advancement in the immediate to near future.

CROSBY: Virtualization, the kind we do and VMware does, is just an emergent property of Moore’s Law. Moore’s Law is super-normal right now. So, no surprise, we have to virtualize these boxes, because the only thing that’s interesting about x86 is the legacy — large numbers of legacy single-threaded apps. That’s why virtualization is so relevant now.

Are those guys stopping? Gosh, no. We’ll see many-core systems very shortly. We’ll find that the hypervisor will become a key differentiator again, and there again the ability to scale to 64 or 128 cores is going to be key, as is the ability to scale to massive memory architectures, and so on. Can Xen do this? Yes. Xen already runs on a 4,096-node supercomputer from SGI, and I have absolute confidence that an open architecture there will always win. The test that is really out there is whether a proprietary hypervisor development team sitting in one place — Palo Alto — can do a better job than the world’s best engineers sitting at 42 of the world’s leading IT companies. The answer is no, they can’t. They just cannot pull it off.

—–

Comments about GRIDtoday are welcomed and encouraged. Write to me, Derrick Harris, at [email protected].
