Power is the backbone of a data center, ensuring its IT infrastructure remains operational, even amidst interruptions. It often constitutes the largest expenditure in running a data center. Overprovisioning or improper distribution of power is not only costly but also wasteful, while underprovisioning can leave the facility unable to absorb workload spikes, risking outages.

Power, specifically in the form of electricity, is fundamental to data centers because it is required to run the servers, cooling systems, storage systems, networking equipment, backup systems, security systems, and lighting that allow for data storage, management, and distribution.

Power consumption is a crucial aspect of data center operations, but there’s more to the story than watts and volts. From exploring what consumes the most power within a data center to dissecting the complexities of power distribution hierarchies and understanding the role of redundant systems, Dgtl Infra unravels the intricacies of power in these massive facilities. Delve deeper into the electrifying world of data centers.

Why do Data Centers Need Power?

Data centers require power for several essential functions, including running servers, cooling systems, storage systems, networking equipment, backup systems, security systems, and lighting.

  • Servers: Servers, with their core components such as the central processing unit (CPU), memory (RAM), hard drives, and fans, all need electrical power to operate
  • Cooling Systems: Data centers house servers, storage systems, networking equipment, power equipment, and lighting. These elements collectively generate a significant amount of heat. To avoid hardware failures, this heat must be managed, which requires power to run cooling systems that keep these components at an optimal temperature
  • Storage Systems: Hard drives and solid-state drives consume power to spin disks and read/write data, particularly in data centers that handle vast amounts of data
  • Networking Equipment: A multitude of networking devices like routers, switches, and firewalls are required in data centers to maintain connectivity. These devices need power to function
  • Backup Systems: To ensure continuity of service in the event of a power failure, data centers utilize backup power systems. These include uninterruptible power supply (UPS) systems and diesel generators, both of which require power
  • Security Systems: Data centers use security systems such as surveillance cameras, access control systems, and alarms. These systems need power to operate
  • Lighting: Data centers need power for lighting to ensure that technicians and staff members working in the facility can effectively manage its components with clear visibility

What Uses the Most Power in a Data Center?

Servers use the most power in a data center. A typical server might consume anywhere from 100 to 600 watts of power under standard operation. High-end servers, particularly those running intensive high-performance computing (HPC) tasks, could draw a few thousand watts of power. Therefore, the power usage of servers can vary significantly depending on the specific tasks they are executing.

How Much Power Does a Data Center Require?

The power requirements for a data center can vary significantly depending on the scale and design of the facility, as well as the efficiency of its equipment. Small data centers, which span from 5,000 to 20,000 square feet and host between 500 and 2,000 servers, may only require 1 to 5 megawatts of power. On the other hand, large ‘hyperscale’ data centers, ranging from 100,000 square feet to several million square feet and accommodating tens of thousands of servers, can demand anywhere from 20 to over 100 megawatts of power. For a detailed comparison, see the table below:

                  | Small Data Center       | Medium Data Center        | Large Data Center
Building Size     | 5,000 – 20,000 sqft     | 20,000 – 100,000 sqft     | 100,000 sqft to millions of sqft
Server Count      | 500 – 2,000 servers     | 2,000 – 10,000 servers    | 10,000 – 100,000 servers
Power Capacity    | 1 – 5 megawatts (MW)    | 5 – 20 megawatts (MW)     | 20 – 100+ megawatts (MW)
Design/Efficiency | Basic power management  | Robust power management,  | High efficiency,
                  | and cooling             | partial efficiency        | renewable energy use
Example Company   | Equinix                 | Digital Realty            | Amazon Web Services
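
To make these figures concrete, below is a minimal back-of-the-envelope sketch in Python. The per-server wattage and the overhead multiplier covering cooling, lighting, and distribution losses are illustrative assumptions, not measured values:

```python
# Rough sizing sketch: estimate total facility power from server count.
# All constants are illustrative assumptions, not measured figures.

def estimate_facility_power_mw(server_count: int,
                               watts_per_server: float = 400.0,
                               overhead_multiplier: float = 1.6) -> float:
    """Estimate total facility power in megawatts.

    overhead_multiplier (assumed) accounts for cooling, lighting, and
    power distribution losses on top of the raw IT load.
    """
    it_load_watts = server_count * watts_per_server
    return it_load_watts * overhead_multiplier / 1_000_000  # W -> MW

# Example: a medium data center with 10,000 servers
print(f"{estimate_facility_power_mw(10_000):.1f} MW")  # ~6.4 MW
```

Plugging in 10,000 servers lands within the 5 – 20 MW band quoted above for a medium data center.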

How Much Power Does a Server Rack Require?

A typical server can consume anywhere between 100 and 600 watts of power. Therefore, a fully populated server rack, housing 42 1U servers, can consume anywhere between 4 kilowatts (kW) and 25 kW of power, not counting cooling and other devices. Additionally, data centers often need to provide power for cooling the server rack, a process that can require as much power as the IT equipment itself.

On average, the power density in a traditional data center ranges from 4 kW to 6 kW per rack. However, Cloud Service Providers (CSPs), such as Amazon Web Services (AWS), and large internet companies, like Meta Platforms, operate at power densities ranging from 10 kW to 14 kW per rack. Additionally, newer, high-density artificial intelligence (AI) workloads are pushing densities beyond 20 kW per rack.
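
The rack-level arithmetic can be sketched the same way; the per-server wattages below are assumed purely for illustration:

```python
# Rack power sketch: a fully populated rack of 42 x 1U servers.
# Per-server wattages are illustrative assumptions.

def rack_power_kw(watts_per_server: float, servers_per_rack: int = 42) -> float:
    return servers_per_rack * watts_per_server / 1000  # W -> kW

for watts in (100, 300, 600):
    print(f"{watts} W/server -> {rack_power_kw(watts):.1f} kW per rack")
# 100 W -> 4.2 kW, 300 W -> 12.6 kW, 600 W -> 25.2 kW
```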

What are Data Centers Powered By?

Data centers predominantly run on electrical power. Understanding the intricacies of the diverse power distribution methods, voltage levels, and single-phase or three-phase systems is crucial.

Do Data Centers use AC or DC Power?

Most data centers use AC (Alternating Current) power to energize their servers, cooling systems, and networking equipment, even though some components may convert this to DC (Direct Current) for specific uses. This is primarily because most commercial power grids supply AC power, which is easier to distribute over long distances.

Alternating Current (AC) in Data Centers

Alternating Current (AC) plays several key roles in the functioning of a data center, including:

  • Distribution and Efficiency: AC is typically used because it can easily be transformed to higher or lower voltages, which makes it more efficient for distribution over long distances, such as from the power plant to the data center. Within the data center, the voltage is stepped down to levels that the servers and other IT equipment can use
  • Servers and IT Equipment: Most servers, networking equipment, and storage devices in a data center use AC due to its compatibility with the power grid and because the voltage of AC can be easily stepped up or down using a transformer. Some newer equipment, however, uses DC for its greater power efficiency, since fewer conversions mean lower energy losses
  • Cooling Systems: AC is used to power a data center’s cooling systems, like HVAC (Heating, Ventilation, and Air Conditioning) systems, which maintain an optimal temperature for the servers and other hardware
  • Lighting and Security Systems: AC powers the remaining infrastructure in a data center, including the lighting and security systems

Direct Current (DC) in Data Centers

Direct Current (DC) plays a secondary role in the operations of a data center, with its importance primarily lying in the following areas:

  • Backup Systems: DC power is most commonly used in Uninterruptible Power Supply (UPS) systems, which provide backup power to the data center. DC power is used in these systems because lead-acid and lithium-ion batteries store and deliver DC power
  • Power Conversion: Power Supply Units (PSUs) in servers and other IT equipment operate on DC power. Therefore, when AC power reaches the server, it is converted into DC for the server’s electronic components to use. These components include the microprocessor and memory. This conversion process takes place within the PSUs of each server

Overall, DC power distribution can lead to significant energy savings. However, it carries higher upfront costs and complexities associated with retrofitting existing data centers. For new, greenfield data center projects, a DC power distribution architecture could potentially provide long-term operational cost benefits and be more sustainable, given its superior power efficiency.
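
The efficiency case for DC distribution comes down to chained conversions: every AC-to-DC or DC-to-AC stage loses a little energy, and the stage efficiencies multiply. A minimal sketch, with all stage efficiencies assumed for illustration:

```python
# Chained conversion losses: end-to-end efficiency is the product of stages.
# Stage names and efficiencies are assumed values for illustration.
from math import prod

ac_chain = {"UPS (AC->DC->AC)": 0.94, "Server PSU (AC->DC)": 0.92, "DC-DC regulators": 0.90}
dc_chain = {"Rectifier (AC->DC)": 0.96, "DC-DC regulators": 0.90}

for name, chain in (("AC distribution", ac_chain), ("DC distribution", dc_chain)):
    print(f"{name}: end-to-end efficiency ~{prod(chain.values()):.1%}")
# Fewer conversion stages -> less energy lost before reaching the chips.
```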

What Voltage is Data Center Power?

The voltage for data center power varies depending on the region, country’s standard, and the type of equipment being used.

  • United States: In the U.S., common voltages for smaller data center equipment and servers are 120V and 208V. Larger equipment, such as a UPS system, uses 277V or 480V, which comes from three-phase power supplies
  • Europe: In Europe, the standard voltage is generally higher, often 230V for servers and smaller data center equipment. Larger equipment uses 400V, which comes from three-phase power supplies
  • Asia-Pacific: Power standards vary by country in Asia-Pacific – most, like China, India, Australia, New Zealand, and South Korea, use 220V to 230V for single-phase power and 380V to 400V for three-phase power, while Japan is an outlier with its uncommon 100V standard

Single-Phase vs. Three-Phase Power

To address the rising power demands in data centers, systems capable of delivering multiple circuits, higher voltages, and increased currents are being implemented. Consequently, data center power distribution is typically classified into single-phase and three-phase power systems.

Single-Phase Power

Single-phase power is a simple form of Alternating Current (AC) power transmission and is suitable for lower power requirements. In this system, power is transmitted via a single wire (phase), and another wire (neutral) serves as a return path for the current. In colocation data centers, single-phase power is often used to energize individual pieces of smaller equipment, including standard server racks, routers, and switches.

Three-Phase Power

Three-phase power is widely used in data centers. It employs three AC waveforms, offset from one another by 120 degrees and each delivered over its own conductor, plus a fourth, neutral wire as a return path. This effectively triples the power capacity while keeping the cable bulk under control.

Three-phase power systems deliver power constantly and smoothly, eliminating points in the cycle where power falls to zero. As a result, they are more efficient, provide more power with less voltage drop, and are better suited for high-capacity, high-uptime environments such as data centers.

A three-phase circuit also offers greater power density than a single-phase one at the same amperage, thus reducing wiring size and costs. Therefore, it is favored for powering high-density server racks, which are optimal for running artificial intelligence (AI) and machine learning (ML) computations.
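
The capacity difference follows from the standard formulas: single-phase power is P = V × I × PF, while three-phase power is P = √3 × V × I × PF, where V is the line-to-line voltage and PF is the power factor. A quick sketch, using common North American circuit ratings as assumptions:

```python
# Deliverable power: single-phase vs. three-phase at the same amperage.
# Voltage and current ratings are common values, assumed for illustration.
from math import sqrt

def single_phase_kw(volts: float, amps: float, pf: float = 1.0) -> float:
    return volts * amps * pf / 1000

def three_phase_kw(volts_line_to_line: float, amps: float, pf: float = 1.0) -> float:
    return sqrt(3) * volts_line_to_line * amps * pf / 1000

# 30 A circuits derated to 24 A continuous (the 80% rule in US practice)
print(f"Single-phase 208V/24A: {single_phase_kw(208, 24):.1f} kW")  # ~5.0 kW
print(f"Three-phase 208V/24A: {three_phase_kw(208, 24):.1f} kW")   # ~8.6 kW
```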

How is Power Supplied to Data Centers?

To maintain continuous operation, data centers need a dependable, uninterrupted flow of electricity, primarily sourced from the local electrical grid, known as utility power.

Path of Electricity from Power Plants to Data Centers

The journey of electricity from a power plant to a data center begins with the generation of electricity through various sources. The majority of data center electricity worldwide still comes from conventional grid electricity, which primarily consists of fossil fuels such as natural gas, coal, oil, and in some regions, nuclear power.

Renewable energy sources, including wind, solar, and hydroelectric power, currently contribute a smaller portion to the power supply of data centers. However, their share is growing as data centers increasingly embrace sustainability.

The electricity that is generated, initially at a low voltage, is then transformed to a high voltage via a step-up transformer for efficient long-distance transmission. This high-voltage electricity, often between 115 kV and 765 kV, is transported through the transmission lines of the electrical grid to substations. At these locations, step-down transformers reduce the voltage before the electricity is distributed.

Electricity is transmitted through distribution lines to data centers. Here, with the assistance of transformers, the voltage may be further reduced before it enters the facility. The electricity typically enters a data center at a medium voltage level, with common standards being approximately 13.2 kV, 13.8 kV, or 27.6 kV.
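
The reason for stepping the voltage up before transmission is that, for a fixed power, current falls as voltage rises, and resistive line losses scale with the square of the current (P_loss = I²R). A simplified single-conductor sketch, with the line resistance and load assumed for illustration:

```python
# Why transmit at high voltage: loss = I^2 * R, and I = P / V.
# Line resistance and load are assumptions; the model is deliberately simplified.

def line_loss_fraction(power_watts: float, volts: float, resistance_ohms: float) -> float:
    current = power_watts / volts
    return current ** 2 * resistance_ohms / power_watts

P = 50e6  # 50 MW of demand (assumed)
R = 0.5   # ohms of line resistance (assumed)
for kv in (13.8, 115, 345):
    print(f"{kv:>6.1f} kV: {line_loss_fraction(P, kv * 1000, R):.2%} lost in the line")
# 13.8 kV: ~13%, 115 kV: ~0.2%, 345 kV: ~0.02%
```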

How is Power Distributed in a Data Center?

The distribution of power in a data center is a critical aspect of its design and operation. Reliable power distribution ensures the continuous operation of servers, storage systems, and networking devices that comprise the data center’s infrastructure.

Power Distribution Hierarchy in Data Centers

Here’s a simplified explanation of how power is typically distributed in a data center:

  • Utility Feed: Power to a data center initially comes from the local power grid, also known as the utility feed. To prevent a single point of failure, data centers typically have multiple utility feeds, ideally from separate substations, for redundancy
  • Switchgear: This is the initial point of contact for utility power within the data center. It divides the incoming power into smaller, more manageable circuits
  • Transformer: After the main switchgear, power may flow to transformers, which adjust the voltage to levels suitable for the data center infrastructure. Transformers can step up or step down voltage as required
  • Uninterruptible Power Supply (UPS): After the switchgear and transformers, power flows to the UPS systems. These systems store energy and provide emergency power – usually lasting for a few minutes – to the data center during an outage until generators start. They also smooth out power quality issues, such as sags or surges, which could damage equipment
  • Power Distribution Units (PDUs): Power is sent from the UPS systems to PDUs. PDUs convert this power into a voltage suitable for the data center’s equipment, then distribute it to individual server racks, switches, and other equipment. To ensure redundancy, a data center typically operates with multiple PDUs
  • Remote Power Panels (RPPs): Power is then distributed to RPPs, which are smaller protective enclosures containing fuses, circuit breaker panels, and ground fault protection devices. These function as localized distribution points, dividing the power received from the PDUs into separate circuits, which facilitate the distribution of power to individual server racks, switches, and other equipment
  • Power Whips: These are conduit systems composed of flexible cables used to distribute power from a PDU or RPP to server racks and other IT equipment within a data center. Depending on the data center’s design, power whips can be routed overhead or beneath a raised floor
  • Server Racks: Each server rack has its own power strip, also known as a rack PDU. From here, power is provided to the IT equipment, including the individual servers, storage systems, and networking devices within the rack

This power distribution process is designed to provide a continuous, high-quality power supply to the data center equipment, protecting against potential outages or power quality issues.
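
One way to reason about this hierarchy is as an ordered chain of stages, each passing power downstream with a small loss. The stage efficiencies in this sketch are illustrative assumptions, not vendor specifications:

```python
# Model the distribution hierarchy as an ordered chain of lossy stages.
# Efficiency values are illustrative assumptions, not vendor specs.

STAGES = [
    ("Switchgear",   0.999),
    ("Transformer",  0.985),
    ("UPS",          0.950),
    ("PDU",          0.985),
    ("RPP / whips",  0.995),
    ("Rack PDU",     0.990),
]

power_kw = 1000.0  # power drawn at the utility feed (assumed)
for stage, efficiency in STAGES:
    power_kw *= efficiency
    print(f"after {stage:<12} {power_kw:7.1f} kW")
# Whatever remains at the end is what actually reaches the IT equipment.
```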

Utility Power Capacity vs IT Power Capacity

Data center operators often cite two key metrics when describing their data center’s energy resources: utility power capacity and IT power capacity.

  • Utility Power Capacity: This is the maximum amount of power that the data center can draw from the utility grid. It represents the upper limit of power that a data center can use at any one time, and it encompasses all power needs of the data center, including IT equipment, cooling, lighting, and other support systems
  • IT Power Capacity: This is the amount of power dedicated to running the IT equipment itself, which includes the servers, storage devices, and networking equipment. This capacity does not account for power used for cooling, lighting, or any other non-IT functions. It is a crucial metric for determining the power leased by a data center customer. Data center operators often refer to IT power capacity using a variety of other terms, such as critical load, IT load, and critical power

Referencing the power distribution hierarchy above, utility power capacity covers the systems from the initial utility feed down to, and including, the Power Distribution Units (PDUs). On the other hand, IT power capacity begins from the Remote Power Panels (RPPs) and extends to the server racks, which includes the actual distribution and consumption of power by the IT equipment such as servers, storage systems, and networking devices. Therefore, the transition point between utility power capacity and IT power capacity is situated between the PDUs and the RPPs.
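
The relationship between the two metrics reduces to simple arithmetic; both capacity figures in this sketch are assumed for illustration:

```python
# Utility capacity covers everything; IT capacity is only the critical load.
# Both figures below are assumptions for illustration.

utility_capacity_mw = 48.0  # maximum draw from the grid
it_capacity_mw = 32.0       # critical load available to lease to customers

print(f"Overhead (cooling, lighting, losses): {utility_capacity_mw - it_capacity_mw:.1f} MW")
print(f"Utility-to-IT ratio: {utility_capacity_mw / it_capacity_mw:.2f}")
```

The utility-to-IT ratio computed here is conceptually similar to a facility's design Power Usage Effectiveness (PUE): the lower it is, the more of the purchased power reaches the IT equipment.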

Redundant Power Systems in Data Centers

Redundant power systems are a crucial part of any data center. They ensure uninterrupted operation by providing a continuous power supply, even if there’s a failure in the primary power source.

Components of Redundant Power Systems in Data Centers

The key components of a redundant power system in a data center include:

  • Uninterruptible Power Supplies (UPS): These systems provide emergency power when the primary power source fails, usually lasting for a few minutes until generators start. Unlike auxiliary or emergency power systems, a UPS offers near-instantaneous protection from power interruptions by supplying energy stored in batteries (either lead-acid or lithium-ion) or flywheels
  • Backup Generators: In the event of a long-term power outage, backup generators can supply power to the data center. These generators are usually diesel- or gas-powered and can operate for extended periods
  • Power Distribution Units (PDUs): PDUs distribute electrical power to the various components within the data center. To provide redundancy and ensure an uninterrupted power supply to servers and other equipment, multiple PDUs are used, mitigating the impact of a single PDU’s failure
  • Redundant Power Paths: To avoid a single point of failure, data centers often use redundant power paths. This approach creates separate and independent paths for power to travel from the source (e.g., the local power grid or backup generators) to the IT equipment. In practice, UPS modules feed separate distribution panels that support distinct power supplies within each IT equipment unit. One UPS sustains a single power path and a specific power supply in the IT equipment, while another UPS supports the alternate power supply
  • Transfer Switches: An Automatic Transfer Switch (ATS) is used to switch the power source from the main supply to the UPS or generator in the event of a power failure. In contrast, a Static Transfer Switch (STS) facilitates rapid switching to an alternative power source, such as a second power feed from the utility company, a UPS, or a backup generator, if the primary source becomes unavailable or falls outside of acceptable parameters (see the toy sketch below)
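
As a toy illustration of the decision a transfer switch automates (the voltage window and source names are assumptions, not any vendor's logic):

```python
# Toy decision logic for an automatic transfer switch (ATS).
# The acceptable voltage window and source names are assumptions.

ACCEPTABLE_RANGE = (456.0, 504.0)  # e.g. 480V nominal +/- 5%

def select_source(primary_volts: float, backup_available: bool) -> str:
    lo, hi = ACCEPTABLE_RANGE
    if lo <= primary_volts <= hi:
        return "primary"  # utility power is healthy
    return "backup" if backup_available else "none"

print(select_source(480.0, True))  # primary
print(select_source(0.0, True))    # backup (utility outage)
```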

Power System Redundancy Methods in Data Centers

In data centers, redundancy methods are crucial for maintaining system stability and uptime. The primary redundancy methods for power systems in data centers include:

‘N’ Method

The ‘N’ method refers to the capacity needed to power, cool, and back up the data center under normal conditions. It does not have extra capacity built in for equipment failure, meaning it has no redundancy.

N+1 or N+X Redundancy

N+1 or N+X redundancy ensures that there is one or ‘X’ number of backup components (such as UPS, HVAC, or generator systems) in addition to the necessary components. The ‘+’ sign indicates that there are backup components available that can take over if a primary component fails. Power flows from the utility, through the UPS/PDU, and connects to the server. Within the N+1 framework, there are different strategies that provide varying levels of fault tolerance:

  • N+1 Block Redundant: Every single component has at least one spare backup, ensuring system robustness
  • N+1 Distributed Redundant: Backup components are dispersed throughout the system for immediate takeover if needed
  • N+1 Isolated Parallel Bus: Each component has an extra, isolated bus ready for immediate takeover upon failure

2N Redundancy

The 2N redundancy approach involves a fully redundant set of ‘N’ in addition to the original ‘N’, allowing the system to operate normally even if the entire original ‘N’ fails. Essentially, the system has double the necessary UPS, HVAC, and generator systems to provide full fault tolerance. Power flows from the utility, passes through the UPS/PDU of two separate systems, and then connects to the server.

2N+1 Redundancy

The 2N+1 redundancy method combines the concepts of 2N and N+1. It doubles the necessary capacity (2N), plus adds an extra backup component to each element of the N architecture. It offers high resilience and is capable of withstanding multiple component failures. Even when the primary system is offline, N+1 redundancy is sustained.

3N/2 Redundancy

The 3N/2 redundancy configuration is effectively halfway between N and 2N redundancy. It implies that the total infrastructure of the data center is designed to handle 1.5 times the base load of ‘N’. This means that even if half the total system fails, the remaining system will still operate normally.
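
The arithmetic behind all of these schemes condenses into a short sketch; the base load and unit size below are assumptions chosen for illustration:

```python
# Installed component counts under common redundancy schemes.
# The load and unit-size figures are assumptions for illustration.
import math

def units_required(load_mw: float, unit_mw: float, scheme: str) -> int:
    n = math.ceil(load_mw / unit_mw)  # 'N': just enough units for the load
    return {
        "N":    n,
        "N+1":  n + 1,
        "2N":   2 * n,
        "2N+1": 2 * n + 1,
        "3N/2": math.ceil(1.5 * n),
    }[scheme]

load_mw, unit_mw = 10.0, 2.5  # 10 MW load served by 2.5 MW UPS/generator units
for scheme in ("N", "N+1", "2N", "2N+1", "3N/2"):
    print(f"{scheme:>5}: {units_required(load_mw, unit_mw, scheme)} units")
# N: 4, N+1: 5, 2N: 8, 2N+1: 9, 3N/2: 6
```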

Trade-Off of Power System Redundancy in Data Centers

Power system redundancy in data centers ranges from the cost-efficient N configuration, which offers no backup capacity, to the highly reliable but expensive 2N+1 method, which maintains redundancy despite multiple component failures. Configurations like N+1, 2N, and 3N/2 offer a balance of resilience at a moderated cost.

READ MORE: How Much Does it Cost to Build a Data Center?

Mary Zhang covers Data Centers for Dgtl Infra, including Equinix (NASDAQ: EQIX), Digital Realty (NYSE: DLR), CyrusOne, CoreSite Realty, QTS Realty, Switch Inc, Iron Mountain (NYSE: IRM), Cyxtera (NASDAQ: CYXT), and many more. Within Data Centers, Mary focuses on the sub-sectors of hyperscale, enterprise / colocation, cloud service providers, and edge computing. Mary has over 5 years of experience in research and writing for Data Centers.
