A data center is a specialized facility outfitted with the necessary space, power, and cooling systems to accommodate servers and various IT equipment. The space within a data center is critical to its operation and has several types: ‘white space,’ where the IT equipment is actively housed and operated; ‘gray space,’ which includes support areas and infrastructure; and ‘rack space,’ specifically for mounting IT equipment.
Data center space is a dedicated area where companies house their critical applications and data. It includes equipment like servers, storage systems, and networking devices, all of which are supported by power and cooling infrastructure to ensure smooth and continuous operation.
Dgtl Infra delves into the fundamental importance of data center space, covering everything from the traditional white space and gray space to the innovative concept of data centers in outer space. We explore how different types of spaces like rack space, support space, and the building’s shell, along with adjacent land, play crucial roles in the efficiency and functionality of data centers. Intriguingly, Dgtl Infra takes a leap into the future, discussing the advantages, challenges, and real-world examples of data centers located in outer space, a frontier that may reshape how we think about data storage and management.
Types of Data Center Space
Data center space is strategically organized into distinct areas, each serving a specific function crucial for the facility’s operation. The primary types of spaces within a data center are: white space, gray space, rack space, support space, building shell, and adjacent land. Each of these spaces serves a specific purpose. Together, they ensure the efficient operation and scalability of the data center.
Space is also an important factor in defining the capacity of a data center. This capacity is determined by the size of the white space, which comprises both the rack space available for IT equipment and the surrounding floor space. Beyond space, the capacity of a data center is heavily influenced by the capabilities of its power and cooling systems.
1. White Space
White space in a data center refers to the main operational area where key computing hardware, including servers, storage systems, and networking devices, is housed. This area is considered the data center’s usable space, primarily occupied by IT equipment arranged in hot and cold aisles. Data center white space is usually measured in square feet or square meters.
The white space, often termed the server room or computer room, is distinguished from the data center’s “gray space”, which includes the support infrastructure, such as switchgear and cooling systems. In data centers, the proportion of white to gray space typically approximates a one-to-one ratio. This highlights a significant difference between data centers and conventional building types like offices, the latter requiring considerably less space for electrical and cooling systems.
White Space vs Gray Space – Examples
| White Space | Gray Space |
| --- | --- |
| Servers | Uninterruptible Power Supplies (UPS) |
| Networking Devices | Backup Generators |
| Cabling Systems | Computer Room Air Conditioning (CRAC) |
| Racks | Computer Room Air Handler (CRAH) |
| Cabinets | Meet-Me Room (MMR) |
In larger data centers, sections known as data halls contain the white space, where server racks, cabinets, and IT equipment are ultimately housed. Within these data halls, cages or private suites can further subdivide the space by providing secure enclosures for the server racks, cabinets, and IT equipment. Access to the white space is highly restricted, ensuring entry only to qualified personnel like network engineers, operations managers, or facility managers.
Data centers’ white spaces are constructed with either a raised floor system or a hard floor, which is essentially a concrete slab. Below is further detail on the raised floor in a data center:
A raised floor is a type of flooring that creates a gap, approximately 2 feet high, known as a plenum, between the solid foundation (like a concrete slab) and the walking surface (typically tiles). This space houses critical infrastructure components, including electrical wiring, cooling systems, and network cabling. The raised floor design facilitates underfloor air distribution, which is used for cooling IT equipment, and offers convenient cable management and straightforward access for maintenance and repairs to these systems.
Constructed from durable materials such as steel or aluminum, a raised floor has removable tiles capable of supporting the weight of racks and IT equipment above. It also provides necessary grounding for cabinets and equipment, contributing to electrical safety like stabilizing voltage levels.
In data center white space, raised floor tiles come in two types: solid (without holes) and perforated (with holes). Solid tiles control airflow, maintaining pressure in the plenum, whereas perforated tiles, positioned next to equipment racks or under bottom-cooled heavy equipment, direct cold air into the room or straight to the racks, facilitating effective cooling.
2. Gray Space
Gray space in a data center refers to the area designated for supporting infrastructure essential to the operation of IT equipment located in the white space. It includes components such as uninterruptible power supply (UPS) systems, switchgear, backup generators, and various cooling systems, including Computer Room Air Conditioning (CRAC) and Computer Room Air Handler (CRAH) units. Additionally, gray space includes meet-me rooms (MMRs) and onsite storage for diesel fuel.
The functionality and reliability of the data center’s white space, where actual data processing takes place, heavily depend on the gray space. As the amount of white space increases, so does the need for more extensive gray space infrastructure to support it. Facilities and operations managers typically oversee the operation and maintenance of the data center’s gray space.
The electrical components crucial to a data center, such as UPS systems and lithium-ion batteries, are usually housed in specialized electrical or battery rooms within the gray space. Some components, however, like backup generators and external transformer units, are positioned outside the data center building. The space allocated for these power systems is determined by the data center’s capacity and the necessary level of redundancy and reliability of the electrical systems.
READ MORE: Data Center Power – A Comprehensive Guide
Many cooling components for a data center, including chillers and cooling towers, are located outside the building to facilitate heat exchange with the external environment. Within the data center, devices like CRAC units, CRAH units, and in-row cooling systems are used to directly control the temperature and humidity around IT equipment. The space designated for cooling systems depends on the data center’s capacity and the expected level of redundancy and reliability in its design. It also varies based on the type of cooling system chosen, such as air-based or water-based systems.
READ MORE: Data Center Cooling – A Comprehensive Guide
Meet-Me Room (MMR)
Meet-me rooms (MMRs) in data centers are usually situated in the gray space of the facility. These rooms function as neutral spaces where various customers in the data center, such as telecommunications carriers and internet service providers (ISPs), can interconnect their networks. This interconnection is achieved through a cross-connect, allowing for network integration without the need to run extensive cabling between individual cages or racks of different customers within the data center.
3. Rack Space
Rack space in a data center refers to the designated physical area and infrastructure for housing servers, storage systems, and networking equipment like routers, switches, and load balancers. This space is organized within racks and cabinets, measured in rack units (U). One rack unit, denoted as 1U, represents a standard height of 1.75 inches, fitting within the metal frame of a rack. Typically, a data center rack stands at about 42U or approximately 73.5 inches tall.
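The rack-unit arithmetic above is simple to express in code. A minimal sketch in Python, using the 1.75-inch-per-U figure from the text:

```python
RACK_UNIT_INCHES = 1.75  # standard height of one rack unit (1U)

def rack_height_inches(units: int) -> float:
    """Convert a rack size in rack units (U) to its height in inches."""
    return units * RACK_UNIT_INCHES

# A typical 42U data center rack:
print(rack_height_inches(42))  # 73.5 inches, matching the figure above
```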
Rack space is distinct from the data center’s white space. While rack space is specifically the area occupied by racks and their IT equipment, white space comprises the entire floor area allocated for IT infrastructure. This includes rack space plus additional areas for airflow management, in-row cooling systems, and access aisles.
In a typical data center, approximately 50% of the white space is occupied by racks or standalone IT equipment. The other half comprises aisles, ramps, gaps between rows of racks, and other open areas used for airflow management.
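As a rough illustration of this 50% figure, rack capacity can be estimated from a given white space footprint. This is a sketch only; the 8-square-foot per-rack footprint is an assumed value, not a figure from this article:

```python
def estimated_rack_count(white_space_sqft: float,
                         rack_footprint_sqft: float = 8.0,
                         utilization: float = 0.5) -> int:
    """Estimate how many racks fit in a white space, assuming roughly
    half the floor area goes to aisles, ramps, and airflow gaps (the
    ~50% figure above). The 8 sq ft rack footprint is an assumption."""
    return int(white_space_sqft * utilization / rack_footprint_sqft)

print(estimated_rack_count(10_000))  # ~625 racks in a 10,000 sq ft white space
```

Actual densities vary widely with rack dimensions, aisle widths, and cooling design, so a calculation like this only frames an order of magnitude.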
Aisles in data centers are the spaces between rows of server racks. Their design plays a crucial role in managing airflow. Cold aisles, facing the front of servers, intake cool air, while hot aisles located at the backs of servers exhaust hot air. This setup is key to maintaining optimal operating temperatures. For functionality, aisles are designed to be wide enough to allow for the movement of racks and other large and heavy equipment.
Server Types – Space Requirements
Data centers utilize three main types of servers: rack servers, blade servers, and tower servers, each differing in size and the amount of space they occupy. Rack servers are designed to occupy standardized rack units and generally use more space compared to blade servers. Blade servers are known for their compact and space-efficient design within a blade enclosure. On the other hand, tower servers, due to their standalone, tower-like structure, consume the most space on an individual basis.
4. Support Space
Support spaces in a data center are areas not allocated for specific technical operations like server racks or cooling systems. These spaces cover various functions critical to data center operations, including the Network Operations Center (NOC), security areas, telecommunications entrance rooms, storage areas, office spaces, loading docks, and other ancillary spaces within the facility.
Network Operations Center (NOC)
The Network Operations Center (NOC) serves as a centralized hub where IT staff continuously monitor and manage the data center network’s performance and security. This monitoring extends to various components, such as servers, routers, switches, and firewalls. Operating 24/7, the NOC typically features a large room equipped with numerous monitors that display the network’s status and key performance indicators. While commonly situated adjacent to the data center’s white space, a NOC can also be located offsite and operated remotely.
Security spaces are areas specifically dedicated to security operations within the data center. These include entry points, security checkpoints, biometric access controls, and monitoring rooms. Their primary function is to ensure that only authorized personnel have access to the data center’s sensitive areas. An example of such a space is the Security Operations Center (SOC), which focuses on centrally analyzing the data center’s security posture. This is crucial for the protection and integrity of the data center’s data and infrastructure.
Telecommunications Entrance Room
The telecommunications entrance room in a data center is a specialized area where external network and connectivity service cables enter the building. This room acts as the primary distribution point for network connectivity within the data center. It houses demarcation hardware, serving as the point where external service provider networks connect with and integrate into the data center’s internal network systems.
Storage space refers to the physical area used for storing spare parts, tools, and equipment necessary for the maintenance and repair of the data center. A staging area within this space is dedicated to pre-configuring, testing, assembling, and securely storing high-value IT equipment. This equipment might include spare line cards and network interface cards (NICs), which are prepared here before being deployed into the operational white space environment.
Office space in data centers comprises areas designated for administrative functions. This includes offices for staff, meeting rooms, and, in some cases, accommodations for personnel.
The loading dock, or receiving area, of a data center is a secure space for receiving and shipping hardware deliveries. It is equipped with ramps for transporting equipment and has a floor with sufficient loading capacity to handle the weight of large and heavy material and equipment.
Ancillary space within a data center includes all general-purpose areas such as common areas, break rooms, restrooms, maintenance areas, and parking spaces. These spaces are essential for supporting the staff and overall operations of the data center.
5. Building Shell
The building shell, or envelope, of a data center is its outermost physical layer, comprising its walls, roof, and foundation. This shell delineates the data center’s total space.
Acting as a protective barrier, the data center’s building shell safeguards the interior and its equipment against external environmental factors, including weather, temperature changes, and physical security threats. It often incorporates design elements to support the data center’s infrastructure, such as power supplies, cooling systems, and network connectivity.
A data center’s building shell can either be purpose-built or retrofitted. A purpose-built shell is designed specifically for data center operations, ensuring an optimal layout and seamless integration of infrastructure. On the other hand, a retrofitted shell means modifying an existing building (e.g., a warehouse) to house data center equipment and systems. This approach leverages the building’s existing infrastructure, like electrical and cooling systems, within the spatial constraints of the original structure.
6. Adjacent Land
The adjacent land, also known as the expansion potential area, of a data center refers to the total area of undeveloped or vacant land contiguous to the data center property that is available for use or development. This includes land with electrical infrastructure already in place, which is a particularly significant factor for data center development.
This space is crucial for planning and expansion purposes. Adjacent land indicates the available room for scaling up operations or constructing additional data centers. The presence of such land allows for a modular, phased construction approach, with new capacity built incrementally as demand grows.
Renting Data Center Space
Renting data center space, commonly known as colocation, entails leasing space in a data center facility to house servers and networking equipment. This approach provides access to essential infrastructure, such as power, cooling systems, connectivity to network providers, and physical security. Organizations can utilize these resources without the capital expenditure of constructing and maintaining their own data centers.
Colocation offers companies the advantage of utilizing a third-party data center’s facilities and infrastructure while they focus on their primary business activities. When renting data center space, companies should consider factors such as location, power usage efficiency, network connectivity options, and scalability. These considerations are crucial to ensure alignment with their business goals and operational needs.
Data centers offering colocation services typically rent space to customers in standard rack sizes, which include:
- Quarter Rack: Known as a ¼ rack, it accommodates up to 10U of space, with 8U to 9U being usable
- Half Rack: This option provides support for 20U of space, typically with 18U to 19U usable
- Full Rack: Offering between 38U and 40U of space, a full rack is suited for servers, storage systems, and networking equipment
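The options above can be compared programmatically. A sketch using the usable-U figures from the list (the full rack’s usable figure is an assumption, since the list does not specify it, and actual allocations vary by provider):

```python
# Colocation rack options: total and usable rack units (U) from the list above.
# The full rack's usable_u is assumed; providers quote their own figures.
RACK_OPTIONS = {
    "quarter": {"total_u": 10, "usable_u": 9},
    "half":    {"total_u": 20, "usable_u": 19},
    "full":    {"total_u": 40, "usable_u": 38},
}

def servers_that_fit(option: str, server_height_u: int = 2) -> int:
    """Number of servers of a given height (in U) that fit in the usable space."""
    return RACK_OPTIONS[option]["usable_u"] // server_height_u

print(servers_that_fit("half"))  # 9 two-U servers fit in 19 usable U
```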
For more extensive requirements, customers have the option to rent data center spaces spanning several thousand square feet or more in a colocation environment.
Data Centers in Outer Space
Data centers in outer space are an emerging concept that integrates space technology with information technology to provide a supplement or alternative to traditional terrestrial data centers. They involve placing data storage and processing facilities in Earth’s orbit and on other celestial bodies, such as planets and moons.
Advantages and Disadvantages of Space-Based Data Centers
Deploying data centers in space leverages unique conditions such as microgravity, vacuum, and abundant solar energy, potentially enhancing their operations and capabilities. One significant advantage is the potential reduction in cooling costs: the extreme cold of space allows waste heat to be radiated away, offering an efficient cooling solution. Additionally, the near-constant exposure to solar energy in space allows these facilities to harness solar power, potentially reducing energy expenses.
However, operating data centers in space also introduces considerable challenges. The lack of Earth’s insulating atmosphere leads to extreme temperature fluctuations in space, complicating the data center’s ability to maintain optimal server operating temperatures. Moreover, the high levels of cosmic and solar radiation in space pose a significant risk to electronic components and data storage devices, potentially leading to damage and data loss. Another major concern is the data transmission to and from Earth, as the physical distance creates latency, impeding effective communication.
Examples of Data Centers in Space
In response to these challenges, Hewlett Packard Enterprise (HPE), in collaboration with NASA, is at the forefront of exploring the feasibility of space-based data centers. This partnership is assessing both the benefits and drawbacks through practical experimentation and technological advancement.
An example of this endeavor is the HPE Spaceborne Computer-2 (SBC-2). Launched to the International Space Station (ISS) in February 2021, SBC-2 marked the first deployment of a traditional data center, equipped with standard off-the-shelf servers, into space. It comprises four servers, each designed with enhanced tolerance to shock, vibration, and temperature fluctuations, aided by water cooling, enabling the execution of modern applications and production workloads in space.
The SBC-2 space-based data center has been used for tasks such as conducting DNA analysis directly aboard the ISS. This approach significantly reduces the need for data transmission to Earth. Additionally, SBC-2 is supporting various experiments, including those in 5G technology, satellite-to-satellite communications, and processing the vast quantities of satellite imagery being taken of Earth.
Despite these advancements, the hardware of HPE Spaceborne Computer-2 is not specifically hardened against radiation. Instead, it employs software-based protection mechanisms against radiation exposure.
Nonetheless, SBC-2 represents a significant improvement over its predecessor, the HPE Spaceborne Computer-1 (SBC-1), launched in 2017. SBC-1 demonstrated the feasibility of using standard computing equipment in space, incorporating two servers housed in standard 19-inch racks. However, it encountered issues such as higher failure rates of solid-state drives in space compared to similar device performance on Earth.
Future Space Data Centers
In the future, plans are underway to extend data centers and computing operations into space, with a key focus on applications such as imaging analysis. This expansion will include low Earth orbit (LEO) satellites, lunar stations (data centers on the Moon), and Mars. The Moon is particularly suited for data centers due to its stable environmental conditions – there is no weather or atmosphere. Additionally, because the Moon is tidally locked, its near side always faces Earth, allowing for constant line-of-sight communication.
Companies like NTT, Thales Alenia Space, and partnerships between HPE and OrbitsEdge are actively developing designs and technologies for these space-based data centers. Although Google Cloud’s previous announcement about expanding its platform to Mars with the “Ziggy Stardust” data center was fictional, such interplanetary projects are increasingly feasible due to rapid advancements in space and information technology.