The Open Rack creates a new, open standard for server rack design: an innovative platform for rack infrastructure that lowers TCO in the scale compute space. It's the first rack standard designed for data centers, integrating the rack into the data center infrastructure. That integration reflects the Open Compute Project's "grid to gates" philosophy, a holistic design process that considers the interdependence of everything from the power grid to the gates in the chips on each motherboard.
The Open Rack standard seeks to right some historic wrongs in traditional rack design, which was set by the EIA 310-D specification, a standard that traces its origins back to railroad signaling relays. EIA 310-D wasn't designed with data centers and scale computing in mind; it couldn't have been, because it was established in the 1950s. EIA 310-D standardized the width between the inner rails in a rack but left other rack specifications, such as height, depth, mounting and cabling schemes, and connectors, to the manufacturers, each of which came up with its own proprietary designs. This resulted in gratuitous differentiation in server and rack designs, locking consumers into specific vendors and their implementations.
The Open Rack features a simple design, built with the scale compute space at its core. A slightly taller rack unit, called an OpenU, or OU, is 48mm high (the traditional rack unit is 44.5mm tall), which increases airflow and improves air economization; it also allows for better cable and thermal management and more efficient use of space. Fewer parts and front-of-rack servicing improve serviceability. The modular design of the IT chassis (anywhere from 0.5 OU to 12 OU high) allows for flexible density within the racks.
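To make the numbers concrete, here's a quick sketch of the OU pitch arithmetic. The chassis heights shown assume a chassis is a simple multiple of the 48mm unit, which the 0.5 OU to 12 OU range implies but the text doesn't spell out:

```python
# Illustrative arithmetic only: the 48mm OpenU (OU) pitch versus the
# traditional 44.5mm EIA 310-D rack unit. Chassis heights below assume
# simple multiples of the OU pitch for the sake of the sketch.
OU_MM = 48.0   # OpenU height (Open Rack)
U_MM = 44.5    # traditional rack unit height

print(f"Extra height per unit: {OU_MM - U_MM:.1f} mm")  # 3.5 mm

# A few chassis sizes from the 0.5 OU to 12 OU modular range:
for ou in (0.5, 1, 2, 3, 12):
    print(f"{ou:>4} OU chassis = {ou * OU_MM:.0f} mm")
```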
Even though it's a new standard, the Open Rack doesn't deviate from the 24" column width, which is driven by the standard floor tile pitch. And while the Open Rack's IT equipment space is 21" wide, it can be adapted to accommodate existing 19" equipment. The racks adhere to their own standard but still allow for innovation within the compute space. For example, the wider equipment bay allows for implementations with three motherboards or five 3.5" disk drives side by side in one chassis. The wider rack is also much more space efficient than the 19" rack: once you factor in the sidewalls and rails, a 19" rack leaves just 17.5" for equipment, about 73% space efficiency, whereas the Open Rack delivers all 21" of the 24" available, for 87.5% space efficiency.
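The space-efficiency figures fall straight out of the widths quoted above; here's a quick check, using the 17.5" usable width cited for a 19" rack:

```python
# Quick check of the space-efficiency figures quoted above.
TILE_WIDTH_IN = 24.0   # standard floor-tile pitch / rack column width

usable_19 = 17.5       # 19" rack after sidewalls and rails
usable_open = 21.0     # Open Rack IT equipment space

print(f'19" rack:  {usable_19 / TILE_WIDTH_IN:.1%} space efficiency')    # 72.9%
print(f"Open Rack: {usable_open / TILE_WIDTH_IN:.1%} space efficiency")  # 87.5%
```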
The Open Rack lowers total cost of ownership because it maximizes the product life cycle of each compute component. Rather than replacing the whole server on a regular, short (2.5-year) cycle, each component gets replaced according to its own life cycle, which can be as long as 10 years in some cases. This disaggregation of compute components (CPUs, hard drives, NICs) improves efficiency and reduces industrial waste.
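A back-of-the-envelope sketch of the disaggregation argument, with purely hypothetical component costs and lifecycles (none of these figures come from the Open Rack materials; only the 2.5-year server cycle and 10-year upper bound are from the text above):

```python
# Hypothetical illustration of disaggregated replacement: each component
# is replaced on its own cycle instead of the whole server turning over
# on the shortest cycle. Costs and lifecycles are invented for the sketch.
HORIZON_YEARS = 10

# component: (cost in dollars, replacement cycle in years) -- assumed values
components = {
    "CPU/motherboard": (1500, 2.5),
    "hard drives":     (600, 5.0),
    "NIC":             (200, 10.0),
}

# Monolithic server: everything replaced together every 2.5 years.
server_cost = sum(cost for cost, _ in components.values())
monolithic = server_cost * (HORIZON_YEARS / 2.5)

# Disaggregated: each component replaced on its own lifecycle.
disaggregated = sum(cost * (HORIZON_YEARS / cycle)
                    for cost, cycle in components.values())

print(f"Monolithic 10-year spend:    ${monolithic:,.0f}")     # $9,200
print(f"Disaggregated 10-year spend: ${disaggregated:,.0f}")  # $7,400
```

Even with invented numbers, the shape of the saving is clear: long-lived components stop being replaced on the shortest component's schedule.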
The Open Rack implements an innovative, cable-less power distribution system, the first of its kind. Servers no longer have their own power supplies; they simply plug into bus bars at the back of the rack. The bus bars connect to the power shelves within each rack. The power shelves use the same highly efficient power supply used with the OCP Intel v2 motherboard. Two PDUs supply AC and DC power; in the event of loss of AC power, DC power can be provided by the Open Compute Project Battery Cabinet or by battery backup units (BBUs) within each power shelf in the rack.
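One way to picture the power path is as a simple fail-over chain. The sketch below is a toy model of the behavior described above, not actual power-shelf logic:

```python
# Toy model of the Open Rack power path -- an assumed sketch, not real
# power-shelf firmware. Servers draw from the rack bus bars; the bus bars
# are fed by the power shelves; when AC is lost, DC can come from the
# Open Compute Battery Cabinet or from in-shelf battery backup units.
def bus_bar_source(ac_available: bool,
                   battery_cabinet_online: bool,
                   bbu_charged: bool) -> str:
    """Pick which source feeds the rack bus bars in this toy model."""
    if ac_available:
        return "power shelf (AC mains)"
    if battery_cabinet_online:
        return "Open Compute Battery Cabinet (DC)"
    if bbu_charged:
        return "in-shelf battery backup units (DC)"
    return "no power"

# Example: AC lost, battery cabinet still online.
print(bus_bar_source(ac_available=False,
                     battery_cabinet_online=True,
                     bbu_charged=True))
```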
The Open Rack is a new direction even for the Open Compute Project. The initial OCP motherboard specifications still adhered to the 19" rack standard, and tradeoffs were made with cable routing and the necessity of putting the PSU in the back of the chassis; with the disk drives in front, their connections had to traverse the chassis. That experience made us realize a few truths:
- Simplicity is hard.
- When dealing with scale compute ecosystems, you have to design for the data center; the motherboard is just a module in a server.
- When you engineer for density, you can do the wrong thing. The Open Rack aims for practical density.
Download and read the specification and charter. And join the conversation with the community.