Hyperscalers Google, Amazon, and Meta have developed barebones servers by stripping out unneeded parts, but you cannot buy those machines off the shelf.
Startup Oxide took on that idea this week and started shipping a mega-server with custom motherboards stripped of unnecessary parts, wires, and components typically found in off-the-rack servers.
The company’s secretive hardware-software co-design plan has developed a cult following among open-source and hardware enthusiasts since its inception.
Oxide introduced its concept of rethinking the server in 2019. Since then, the company has developed custom silicon and software that prioritize security and introduce new techniques to manage compute, storage, and networking.
Conventional server designs pile on layers of software and hardware that push software further from the silicon, Oxide argues. The goal of Oxide’s minimalistic design is to bring software back closer to the silicon.
Hyperscalers are also stripping unnecessary parts out of their servers. Meta last year talked about paring servers down to the bare minimum for its new data center design, citing more predictable performance and power efficiency; the company could not scale its data centers on commodity hardware overloaded with components.
Minimalistic servers for the cloud are not readily available from the likes of HPE or Dell. Oxide wants to fill that gap.
“To meaningfully build a cloud computer, one must break out of the shackles of the 1U or 2U server and think about the rack as the unit of design,” Oxide co-founder Bryan Cantrill wrote in a blog entry posted this week. The Oxide cloud computer ships as a combined hardware and software system, with no separate software licensing costs.
Oxide started developing its on-premise cloud servers just when companies were shutting down their data centers and going all in on the cloud. But the trend is slowly reversing, with companies bringing workloads back to on-premise systems.

At a recent HPC+AI on Wall Street conference in New York City, financial IT executives argued that on-premise hardware could run some applications faster and prevent lock-in to cloud vendors. Applications are also being repatriated to in-house systems for security and financial reasons.
Oxide’s patience has paid off: it has started shipping servers just as interest in on-premise systems rebounds. The company has also received $44 million in venture funding from firms including Eclipse and Intel Capital.
Oxide systems use off-the-shelf AMD Milan CPUs, but the server architecture takes a different direction from commodity systems sold by HPE and Dell. I spoke with Oxide co-founders Cantrill and Steve Tuck last year about how the system was developed and how it works. Tuck is Oxide’s CEO, while Cantrill, who created DTrace at Sun Microsystems, is chief technology officer.
The system is built from compute sleds, each with CPUs, memory, storage, and networking; the networking connects directly to the switch over Ethernet. Oxide also developed its own custom switch, which communicates with an adjacent compute sled via external PCIe.
“We felt like it was a real shame to have a traditional switch, where x86 is kind of a colostomy bag on the side of the switch: you have got this kind of low-powered Xeon D, or what have you, and on that an Intel Management Engine, and a bunch of things that we didn’t want in it,” Cantrill said.
Unlike commodity hardware, the switch has no hard-coded switching or routing rules. That lets the Oxide rack decide how to handle different traffic patterns, which makes the system more flexible, and because the switch is programmable, customers can get closer to the silicon.
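To make that concrete, here is a rough Rust sketch of the kind of match-action table a programmable switch exposes, with forwarding rules installed at runtime rather than baked into the silicon. All names and types here are illustrative assumptions, not Oxide’s actual switch software.

```rust
// Illustrative only: a toy match-action table of the sort a programmable
// switch pipeline exposes. Not Oxide's switch software.
use std::collections::HashMap;
use std::net::Ipv4Addr;

#[derive(Clone, Copy)]
enum Action {
    Forward { port: u16 },
    Drop,
}

struct FlowTable {
    rules: HashMap<Ipv4Addr, Action>, // match on destination IP
    default: Action,
}

impl FlowTable {
    fn lookup(&self, dst: Ipv4Addr) -> Action {
        *self.rules.get(&dst).unwrap_or(&self.default)
    }
}

fn apply(action: Action) {
    match action {
        Action::Forward { port } => println!("forward out port {}", port),
        Action::Drop => println!("drop packet"),
    }
}

fn main() {
    let mut rules = HashMap::new();
    // Rules can be installed (or changed) by the control plane at runtime,
    // instead of relying on fixed-function behavior.
    rules.insert(Ipv4Addr::new(10, 0, 0, 42), Action::Forward { port: 7 });
    let table = FlowTable { rules, default: Action::Drop };

    apply(table.lookup(Ipv4Addr::new(10, 0, 0, 42))); // matches the rule
    apply(table.lookup(Ipv4Addr::new(10, 0, 0, 99))); // falls to the default
}
```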
Oxide has replaced the increasingly bloated baseboard management controller with a slimmer service processor, which handles power, the serial console, and environmental monitoring.
The service processor is versatile: it also handles power cycling and remote management of servers. It is an example of hardware-software co-design, in which functions move closer to the hardware without piling on components.
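As a rough illustration of the duties that shift onto the service processor, here is a minimal Rust sketch of a command handler for power control and environmental readings. The commands, values, and structure are hypothetical, not Oxide’s firmware.

```rust
// Hypothetical sketch of service-processor duties: power control and
// environmental monitoring. Not Oxide's actual firmware.
enum SpCommand {
    PowerOn,
    PowerOff,
    PowerCycle,
    ReadEnvironment,
}

struct ServiceProcessor {
    powered: bool,
    inlet_temp_c: f32,
}

impl ServiceProcessor {
    fn handle(&mut self, cmd: SpCommand) -> String {
        match cmd {
            SpCommand::PowerOn => { self.powered = true; "sled powered on".into() }
            SpCommand::PowerOff => { self.powered = false; "sled powered off".into() }
            SpCommand::PowerCycle => {
                // Drop power, then restore it; a remote operator can do this
                // without a bloated BMC stack in the path.
                self.powered = false;
                self.powered = true;
                "sled power-cycled".into()
            }
            SpCommand::ReadEnvironment => {
                format!("powered: {}, inlet temperature: {:.1} C", self.powered, self.inlet_temp_c)
            }
        }
    }
}

fn main() {
    let mut sp = ServiceProcessor { powered: false, inlet_temp_c: 24.5 };
    println!("{}", sp.handle(SpCommand::PowerOn));
    println!("{}", sp.handle(SpCommand::ReadEnvironment));
}
```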
The Oxide server has a control plane, running on the compute sleds, that handles tasks such as virtual machine management, storage, and networking. Customers do not need VMware or OpenStack, and the system does not require proprietary hardware controllers.
The control plane is an important piece of Oxide’s server system and the glue that keeps the cloud running smoothly. It coordinates the interconnects, switches, service processors, and other key pieces.
For example, a request for a new virtual machine goes to the control plane, which finds a compute sled with capacity. The sled then works with the control plane to provision the new instance, along with its storage, its network connectivity, and a new IP address.
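That flow can be sketched in a few lines. The Rust snippet below is a simplified illustration of the placement and provisioning steps, with hypothetical struct and function names rather than the actual Oxide control-plane API.

```rust
// A simplified sketch of the provisioning flow described above. Names and
// types are illustrative, not the actual Oxide control-plane API.
use std::net::Ipv4Addr;

struct Sled { id: u32, free_vcpus: u32, free_mem_gb: u32, free_disk_gb: u32 }

struct InstanceRequest { vcpus: u32, memory_gb: u32, disk_gb: u32 }

struct Instance { sled_id: u32, ip: Ipv4Addr }

// Pick the first sled with enough spare CPU, memory, and disk.
fn place<'a>(sleds: &'a mut [Sled], req: &InstanceRequest) -> Option<&'a mut Sled> {
    sleds.iter_mut().find(|s| {
        s.free_vcpus >= req.vcpus
            && s.free_mem_gb >= req.memory_gb
            && s.free_disk_gb >= req.disk_gb
    })
}

// Reserve resources on the chosen sled and hand back the placement with an IP.
// A real control plane would also carve out storage volumes and program the switch.
fn provision(sleds: &mut [Sled], req: InstanceRequest, ip: Ipv4Addr) -> Option<Instance> {
    let sled = place(sleds, &req)?;
    sled.free_vcpus -= req.vcpus;
    sled.free_mem_gb -= req.memory_gb;
    sled.free_disk_gb -= req.disk_gb;
    Some(Instance { sled_id: sled.id, ip })
}

fn main() {
    let mut sleds = vec![
        Sled { id: 1, free_vcpus: 8,  free_mem_gb: 32,  free_disk_gb: 200 },
        Sled { id: 2, free_vcpus: 64, free_mem_gb: 512, free_disk_gb: 4000 },
    ];
    let req = InstanceRequest { vcpus: 16, memory_gb: 64, disk_gb: 100 };
    if let Some(instance) = provision(&mut sleds, req, Ipv4Addr::new(10, 1, 0, 5)) {
        println!("instance placed on sled {} with IP {}", instance.sled_id, instance.ip);
    }
}
```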
Oxide has also removed the power supplies from the compute sleds. Instead, the company has installed a DC busbar at the back of the rack that distributes power to the compute sleds, which improves power efficiency and reliability.
The DC busbar is a hunk of copper in the middle of the rack: AC power is converted to DC once at the rack level, and the busbar runs that DC up and down the rack. When a compute sled is plugged in, it does not have to convert from AC to DC. All it needs to do is step the higher busbar voltage down to something its electronic components can use.
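A back-of-the-envelope comparison shows why that matters. The sketch below assumes illustrative efficiency figures (roughly 92 percent for per-sled AC supplies, 97 percent for a single rack-level rectifier) and a hypothetical 800 W load per sled; the numbers are assumptions for the sake of the arithmetic, not Oxide measurements.

```rust
// Illustrative arithmetic only: efficiency figures and per-sled load are
// assumptions, not measurements of the Oxide rack.
fn main() {
    let sleds = 32.0_f64;            // hypothetical sled count
    let load_per_sled_w = 800.0_f64; // hypothetical DC load per sled, in watts
    let rack_load_w = sleds * load_per_sled_w;

    // Case 1: every sled carries its own AC power supply (~92% efficient, assumed).
    let per_sled_supply_eff = 0.92;
    let wall_draw_per_sled_supplies = rack_load_w / per_sled_supply_eff;

    // Case 2: one rack-level rectifier feeds the DC busbar (~97% efficient, assumed);
    // each sled only steps the busbar voltage down for its components.
    let rack_rectifier_eff = 0.97;
    let wall_draw_busbar = rack_load_w / rack_rectifier_eff;

    println!("per-sled supplies draw {:.0} W from the wall", wall_draw_per_sled_supplies);
    println!("busbar design draws {:.0} W from the wall", wall_draw_busbar);
    println!("difference: {:.0} W per rack", wall_draw_per_sled_supplies - wall_draw_busbar);
}
```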
“If you look at any kind of traditional 1U/2U server, you’ve got your AC power cords, power supplies, fans. Those fans fail, the power supplies fail … you should not be running compute that way. No one is running compute at scale that way. Yet you cannot buy a DC busbar-based system from Dell, HPE, or Supermicro. It is not on the price list,” Cantrill said.
Oxide servers also have a cabling backplane that connects the compute sleds to the switch, which in turn connects out to the wider network. The blind-mated networking backplane keeps cables off the front of the rack and allows for higher density and easier serviceability.
“This was mechanically tricky, but the payoff is huge: capacity can be added to the Oxide cloud computer simply by snapping in a new compute sled — nothing to be cabled whatsoever,” Cantrill said in a blog entry, adding: “This is a domain in which we have leapfrogged the hyperscalers, who … don’t do it this way.”
Oxide has also invested heavily in the security side of the computing infrastructure by creating a confidential computing environment that keeps VMs and data secure. The server has a hardware root of trust that works alongside the service processor, control plane, and blind-mated networking to help prevent attacks on both hardware and software.
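Conceptually, a hardware root of trust anchors a chain of measurements: each boot stage is hashed before control passes to it, and the final value can be compared against an expected one to detect tampering. The Rust sketch below illustrates that general idea with hypothetical stage names; it is not Oxide’s implementation, and it uses the sha2 crate for hashing.

```rust
// Conceptual sketch of measured boot behind a hardware root of trust.
// Stage names are hypothetical; this is not Oxide's implementation.
// Requires the `sha2` crate.
use sha2::{Digest, Sha256};

// Extend the measurement chain: hash(previous_measurement || stage image).
fn measure(previous: &[u8], stage_image: &[u8]) -> Vec<u8> {
    let mut hasher = Sha256::new();
    hasher.update(previous);
    hasher.update(stage_image);
    hasher.finalize().as_slice().to_vec()
}

fn hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

fn main() {
    // Hypothetical boot stages; in practice these would be firmware binaries.
    let stages: [&[u8]; 3] = [b"service-processor-firmware", b"host-bootloader", b"host-os"];

    let mut measurement = vec![0u8; 32]; // the root of trust starts the chain
    for stage in stages {
        measurement = measure(&measurement, stage);
        println!("measurement after stage: {}", hex(&measurement));
    }
    // A verifier comparing the final measurement against an expected value
    // can detect tampering with any stage in the chain.
}
```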