Designing a data center is a complex task that requires careful planning, specialized engineering skills, and seamless coordination between various teams. To ensure a successful data center design, the facilities team – which includes building architects and engineers – must work closely with the IT team, composed of server, network, and storage experts. This collaborative approach ensures that the data center can accommodate multiple technology refresh cycles.

Data center design is the process of defining the layout, architecture, and configuration of a data center to meet operational and business requirements. Key elements to consider include electrical systems, cooling mechanisms, server setups, network topology, security measures, and energy efficiency.

Dgtl Infra explores the intricate elements involved in designing a data center. We examine five critical aspects of data center design: architectural design, which concentrates on layout and aesthetics; structural design, which deals with materials and engineering; facility operations design, focusing on power and cooling systems; IT operations design, covering server and network configurations; and commissioning design, to verify that all systems function as intended before going live.

Pre-Design Essentials for a Data Center

Before beginning the design of a data center, it’s essential to establish the project’s scope and objectives. This involves defining the intended size of the data center in terms of square footage and power capacity, measured in kilowatts (kW) or megawatts (MW). Setting a budget is also a crucial step. These initial preparations are vital because a data center that is too small may not meet requirements, while an excessively large one could result in wasted financial resources.
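
For illustration only, a back-of-the-envelope sizing sketch in Python shows how a planned rack count and power density translate into the square footage and kW/MW figures that scope a project. All figures below are hypothetical.

```python
# Rough data center sizing sketch (all figures are hypothetical).
racks = 200                 # planned number of server racks
kw_per_rack = 8.0           # assumed average IT power density per rack (kW)
sq_ft_per_rack = 60         # assumed gross floor area per rack, incl. aisles

it_load_kw = racks * kw_per_rack
floor_area_sq_ft = racks * sq_ft_per_rack

print(f"IT load:    {it_load_kw:,.0f} kW ({it_load_kw / 1000:.1f} MW)")
print(f"Floor area: {floor_area_sq_ft:,} sq ft")
```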


To ensure all aspects of the data center’s design and operation are covered, assemble a multidisciplinary team of experts. This team should consist of architects, electrical engineers, HVAC specialists, environmental consultants, security experts, IT personnel, real estate brokers, energy procurement advisors, legal consultants, and project managers.

Before proceeding further, undertake a feasibility study to assess the project’s viability. This should include an evaluation of prospective locations, a review of energy supply options, a risk assessment, and an estimate of the projected return on investment (ROI).

Architectural Design in Data Centers

Architectural design in data centers requires careful attention to elements such as site selection, spatial layout, and functional specifications. These encompass a wide range of considerations, from assessing natural hazards and complying with regulations to configuring hardware and planning for future scalability.

Site Selection

Site selection for a data center involves a thorough assessment of various factors, including the risk of natural disasters, availability of utility services, and compliance with local regulations. To assess financial viability, examine the complete range of economic considerations such as the cost of land, electricity, water, construction, ongoing operational expenses, and any local tax incentives. If repurposing an existing building, confirm that it can be easily retrofitted to meet data center-specific requirements.


Natural Disasters

Carefully evaluate the risks of natural disasters that could disrupt data center operations. Choose locations that are not susceptible to seismic activity (earthquakes), wildfires, or floods to reduce the risk of damage. Extreme weather events, such as tornadoes, hurricanes, and cyclones, can damage external infrastructure, like cooling towers, impacting the data center’s operation. Additionally, avoid sites near active volcanoes.

Natural Environment

The natural environment of a data center’s site is a key factor in its overall resilience. To minimize foundation and structural issues, choose sites with stable ground and soil conditions. Areas with low lightning activity, determined by flash rate, are ideal for reducing risks to electrical systems and preventing downtime.

Additionally, sites with low water tables are less prone to groundwater flooding, enhancing the data center’s stability. High-quality air is also essential, both for fresh air intake and for external mechanical elements such as cooling towers and heat exchangers, as it improves cooling efficiency and lowers the risk of corrosion.

Site Access and Location

Opt for a data center site that is conveniently located near major roads and highways to allow for easy transportation, while also being physically set back from adjacent road traffic to minimize risks. Evaluate the function and security of neighboring properties, as they could pose hazards, such as fires or structural collapse, that adversely affect the data center facility.

For improved system redundancy, choose a site that is neither too close nor too far from existing primary data centers. The data center site should also be in close proximity to emergency services like fire and police stations to ensure quick response times in case of incidents. Additionally, a location near a skilled labor pool is beneficial for hiring operations, maintenance, and security staff, as well as for working with vendor technicians.

Utility and Infrastructure Services

For uninterrupted operations, it’s essential to select locations with reliable utility services. Choose sites that offer stable and redundant power supplies, along with abundant access to clean water for cooling systems.

Make sure the location has robust and reliable telecommunications infrastructure, including fiber optic connections. Sanitation services, such as sewage systems, should meet regulatory standards and be capable of handling the facility’s waste needs. Additionally, confirm the availability of natural gas and alternative backup fuels like diesel and propane. Don’t forget to consider renewable energy sources, such as wind and solar, as well.

Regulatory Compliance

Compliance with regulations at the local, regional, and national levels is essential. These regulations cover a wide range of factors, including air quality standards such as generator emissions, acceptable noise levels from equipment, and height limitations for cooling towers and communications antennas. Proper regulatory compliance also encompasses fuel storage guidelines, generator operating procedures, truck traffic restrictions, parking space requirements, building and perimeter security setbacks, and visibility sight lines.

Space Planning

Effective space planning in a data center involves accurately assessing the facility’s total capacity and strategically designing its layout to meet both present needs and future growth. Key elements to consider include the total square footage, power requirements, and available rack space.


Specific areas should be designated for essential components like electrical infrastructure, cooling systems, power distribution, piping, ductwork, machine rooms, and maintenance areas. The electrical infrastructure should guarantee a continuous power supply to all data center equipment, while cooling systems need to be designed for optimal airflow and temperature control.

In addition to electrical infrastructure and cooling systems, other critical areas for effective data center space planning include computer rooms and supporting spaces:

Computer Room

The computer room houses the data center’s core computational hardware, including servers, storage arrays, and networking switches. Design this space for easy access, robust security, and optimal cooling.

Opt for versatile server racks and frames that accommodate various types of equipment. Create separate spaces for telecommunications equipment to avoid interference with computational hardware. Also, set up secure, dedicated pathways for telecom providers to connect their services without compromising the data center’s security.

Supporting Spaces

Data center design involves the creation of a number of supporting spaces, including the following:

  • Administrative: Position administrative offices close to computer rooms to simplify system maintenance and upgrades
  • Security Zones: Incorporate dedicated areas for security, equipped with surveillance and access control systems. Use either biometric or key card systems for secure entry points that are exclusive to the data center
  • Telecommunications Entrance Room: Establish this as the main hub for all external telecom services, and make sure it’s easily accessible
  • Network Operations Center (NOC): Equip the NOC with redundant systems, real-time monitoring, and a direct communication line to emergency services
  • Helpdesk: Situate a helpdesk close to the NOC for quick communication and efficient problem-solving of hardware malfunctions, software errors, and network issues
  • Loading Dock: Allocate a specific area for a loading dock to facilitate equipment deliveries
  • Storage Space: Reserve a separate area for storing spare parts and tools
  • Waste and Recycling Zones: Place these near the loading dock to simplify the disposal process

Functional Planning

Begin by identifying the main functions and services the data center will provide, such as data storage, computation, or web hosting. Decide if the data center facility will cater to multiple customers via colocation or focus solely on a single organization, tailoring plans accordingly. If the data center is located in a building with other tenants, robust security and isolation measures must be in place.

Select an optimal location within the building for data halls or computer rooms, considering cooling efficiency, security, and ease of access. Account for the data center hardware’s life cycle and allocate resources for regular upgrades and replacements. Redundancy and failover mechanisms for power, cooling, and network systems should be implemented to improve data center uptime and reliability.

Lastly, determine if on-site personnel are required or if remote management is feasible. Ensure the data center design permits continuous 24/7 operations.

Structural Design for Data Centers

The structural design of data centers aims to create a resilient environment capable of withstanding a range of natural and man-made disasters. Specialized planning is essential for ensuring facility stability, safety, and uninterrupted operations.


Floor Load Capacity

The ability of the floor to support the weight of servers, racks, storage arrays, networking equipment, as well as power and cooling systems, is crucial in data center design. Engineers assess the maximum floor load to decide whether reinforcement or specialized materials are necessary.

Raised Flooring

Raised flooring provides a modular, easily accessible platform for routing cables, cooling systems, and power distribution. This setup enhances organization, simplifies underfloor air circulation, and offers flexibility for accommodating future technological needs.

In contrast, the absence of raised floors can complicate the deployment of traditional cooling technologies like Computer Room Air Conditioning (CRAC) units and Computer Room Air Handler (CRAH) units. In such cases, additional data center planning is needed for aspects like floor drainage slopes and alternative grounding systems.

Column Placement

In large data centers, structural columns or compartmentalization are necessary to support the roof and additional server rack systems distributed throughout the facility. Conversely, small data centers aim to limit the number of columns to simplify layout planning. These smaller facilities typically feature centralized network distribution and may not require additional server rack systems.

Wind Resistance

Wind resistance is a key factor, particularly for data centers with external components such as cooling towers. To counteract the forces of high winds, designers use aerodynamic roof designs, create dual-roof architectures with a gap of several feet between inner and outer roofs, and choose materials such as solid steel that are both airtight and capable of withstanding winds of up to hundreds of miles per hour.

Earthquake Resilience

To maintain structural integrity during seismic events, earthquake-resistant designs utilize flexible materials and joints, along with seismic bracing. Additional features like reinforced structures, isolated platforms, and specialized mountings protect mission-critical IT equipment from the impacts of seismic activity.

Terrorist Attack Mitigation

Due to their critical role, data centers may be potential targets for terrorist attacks. To mitigate this risk, structural safeguards such as reinforced walls, blast-resistant doors, and perimeter fencing are put in place to minimize the damage from explosions or other types of attacks.

Ice Shard Mitigation

For data centers located in cold climates, falling or wind-driven ice shards pose a risk to the building’s exterior and roof. By using durable materials and designing angled surfaces, the risk of damage from ice shards can be minimized.

Facility Operations Design in Data Centers

Designing facility operations in data centers requires attention to electrical systems, cooling mechanisms, redundancy measures, security features, and fire protection protocols.

Electrical Systems

Electrical systems serve as the backbone of a data center, powering all its operations and ensuring high availability and uptime. These systems include utility service, power distribution mechanisms, and various types of power units. To eliminate single points of failure, electrical systems often incorporate redundancies, such as dual power feeds, backup generators, and uninterruptible power supply (UPS) units.


To effectively determine a data center’s power needs, designers must assess both facility and IT infrastructure requirements. Facility power needs include HVAC systems and lighting, while IT infrastructure demands vary based on server workloads and specific hardware configurations, such as CPUs or GPUs.

Utility Service

Evaluating the capacity of the local utility grid to meet varying data center power demands is essential. Data centers use different electrical voltages: low-voltage is suitable for smaller operations and is easier to install, whereas medium and high-voltage are optimal for larger data centers but entail higher installation costs.

Unit substations convert high-voltage electricity into a form that data centers can use. Generally owned by the electric utility company, these substations can be situated either outdoors or, in certain instances (e.g., urban environments), within the data center building itself. The decision to locate a unit substation on-site is influenced by factors such as the data center’s size, the level of operational uptime it is designed to achieve, and its geographic location.

Utility transformers serve as another vital component in this system. Positioned as the final utility-provided device before the electric meter, they delineate the boundary between the utility company and the data center operator. Typically located on-site, these transformers usually convert medium-distribution voltage received from a unit substation into a lower voltage appropriate for the data center’s use. The specific voltage levels vary by geographic location and are typically mandated by the regional regulatory authority where the data center operates.

Power Distribution

Power distribution systems play a critical role in ensuring scalability, flexibility, and adaptability to varying electrical loads. The system is hierarchical and encompasses uninterruptible power supply (UPS) systems, power distribution units (PDUs), remote power panels (RPPs), and equipment-level power distribution.

UPS systems act as immediate backup power sources, equipped with batteries like lead-acid or lithium-ion, to maintain operations during short-term power failures. They also improve power quality by filtering out voltage spikes and frequency variations. These systems often include automatic static transfer switches for load transfer to a backup power source and emergency power off (EPO) systems for rapid shutdown in emergencies.

PDUs serve as hubs for distributing electrical power to servers and other data center equipment. They break down the main electrical supply into smaller, usable units and are commonly connected via power cables or busways. Busways offer a more flexible and modular approach than traditional cabling; they are overhead track systems that allow for easier modification and scalability in the power distribution network.

RPPs are also utilized as localized distribution points that further break down power from PDUs into separate circuits for individual server racks and switches. Power whips provide the final stage of power delivery, connecting the PDUs or busways to server racks and other IT equipment. They offer flexibility in routing, either overhead or beneath a raised floor, depending on the data center’s design.

Finally, at the equipment level, power strips equipped with multiple outlets and surge protection deliver electricity directly to servers and network switches.
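
To make this hierarchy concrete, the following minimal sketch models the UPS → PDU → RPP → rack-circuit chain and flags any level loaded beyond its rating. The device names and capacity figures are hypothetical, not a prescribed configuration.

```python
# Hierarchical power distribution sketch: UPS -> PDU -> RPP -> rack circuits.
# All device names and capacity figures are hypothetical.
rpp_a = {"name": "RPP-A", "capacity_kw": 50, "circuit_loads_kw": [12, 9, 14, 10]}
rpp_b = {"name": "RPP-B", "capacity_kw": 50, "circuit_loads_kw": [16, 15, 13]}
pdu_1 = {"name": "PDU-1", "capacity_kw": 120, "children": [rpp_a, rpp_b]}
ups_1 = {"name": "UPS-1", "capacity_kw": 250, "children": [pdu_1]}

def total_load_kw(node):
    """Sum rack-circuit loads up through the distribution hierarchy."""
    if "circuit_loads_kw" in node:                      # RPP level
        return sum(node["circuit_loads_kw"])
    return sum(total_load_kw(child) for child in node["children"])

def check(node):
    """Print the load on each level and whether it exceeds capacity."""
    load = total_load_kw(node)
    status = "OK" if load <= node["capacity_kw"] else "OVERLOADED"
    print(f"{node['name']}: {load} kW of {node['capacity_kw']} kW -> {status}")
    for child in node.get("children", []):
        check(child)

check(ups_1)
```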

Backup Power Systems

Backup generators are designed to handle extended interruptions in data center operations. They typically run on diesel or natural gas, depending on local regulations and fuel availability. It’s crucial to size generators so that they not only meet but also exceed peak load requirements, allowing for future expansion.
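
As a hedged illustration of that sizing rule, with hypothetical load, headroom, and unit-size figures:

```python
import math

# Generator sizing sketch (all figures are hypothetical).
peak_load_kw = 1800        # estimated peak facility load
growth_headroom = 0.25     # assumed 25% allowance for future expansion
unit_kw = 750              # assumed standard generator unit size

min_capacity_kw = peak_load_kw * (1 + growth_headroom)
units_needed = math.ceil(min_capacity_kw / unit_kw)

print(f"Minimum generator capacity: {min_capacity_kw:,.0f} kW")   # 2,250 kW
print(f"Units required: {units_needed} x {unit_kw} kW")           # 3 x 750 kW
```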


To ensure safe and efficient operation, provide proper ventilation to expel generator exhaust. Additionally, the generator should be equipped with an effective cooling system to avoid overheating. Installing vibration isolators when setting up the generator can significantly reduce both noise and vibration.

Cooling Equipment Support

The electrical system in a data center must be designed to fully support all cooling equipment – such as chillers, cooling towers, and air handlers – to maintain optimal temperature levels. To ensure uninterrupted operation of cooling systems, the electrical setup should include redundant circuits and backup generators. Additionally, comprehensive load calculations are essential for determining the electrical capacity required to operate both the cooling equipment and other functions within the data center.

Lighting and Safety Measures

Proper lighting in data center design is crucial for ensuring visibility during human operations, although it doesn’t directly affect server performance. In server rooms, opt for task-specific lighting that brightens equipment without creating screen glare. In contrast, support areas like offices and break rooms can make do with standard office lighting. To save energy, many data centers dim or switch off lights when the rooms are unoccupied.

Essential to the electrical safety and reliable operation of a data center are bonding and grounding systems, lightning protection mechanisms such as lightning rods, and surge suppressors. These measures guard against electrical surges caused by storms, protect sensitive equipment from voltage spikes, and mitigate electrical interference.

READ MORE: Data Center Power – A Comprehensive Guide

Cooling Systems

Cooling systems in data centers consist of specialized equipment, heat rejection systems, airflow management techniques, humidity control, and ventilation systems. These components work together to maintain environmental conditions for servers and other hardware.


Cooling Equipment

Cooling equipment in data centers is engineered to ensure uninterrupted facility operation while maintaining ideal temperature and humidity conditions. The specific cooling solutions employed may differ between data centers but usually comprise some of the following components:

  • Computer Room Air Conditioning (CRAC) and Computer Room Air Handler (CRAH) Units: These units regulate and circulate air in the data center, effectively controlling both temperature and humidity
  • Chilled Water Systems: These systems circulate cold water to absorb excess heat within the data center environment
  • Chillers: Mechanical units that cool the water circulated by the chilled water systems
  • Cooling Towers: Units designed to expel waste heat from chillers into the atmosphere
  • Adiabatic Cooling: Systems that utilize water evaporation to lower air temperature, offering a more energy-efficient but less commonly deployed cooling option
  • Humidifiers: Devices that help regulate humidity levels to prevent static electricity and extend the lifespan of equipment
  • Fans: These devices circulate air to disperse heat generated by servers and other hardware, thereby aiding in temperature regulation and system performance
  • Thermal Storage: Systems that store chilled water or other coolants for use during high-demand periods or outages, serving as a backup to conventional cooling methods
  • Piping and Pumps: These systems circulate coolant to regulate temperature and dissipate heat generated by computing equipment

At this point, a mechanical engineer will perform detailed calculations to estimate Heating, Ventilation, and Air Conditioning (HVAC) system efficiencies, including factors such as Power Usage Effectiveness (PUE). These calculations and schematics are then used to fine-tune the overall system design and help in the selection of appropriate equipment.

To create an efficient cooling system, it’s essential to know the heat output and airflow specifications of every piece of equipment in the data center. This information allows for the design of a cooling system capable of effectively managing the facility’s thermal load.
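
Tying these two steps together, here is a minimal sketch (with hypothetical equipment figures) that sums per-device heat output into a thermal load and computes PUE as total facility power divided by IT power:

```python
# Thermal load and PUE sketch (all figures are hypothetical).
# For IT gear, essentially all electrical power draw ends up as heat,
# and 1 W of heat equals roughly 3.412 BTU/hr.
equipment_watts = {"servers": 900_000, "storage": 150_000, "network": 50_000}

it_load_w = sum(equipment_watts.values())
thermal_load_btu_hr = it_load_w * 3.412

facility_load_w = 1_500_000          # assumed total: IT + cooling + lighting + losses
pue = facility_load_w / it_load_w    # PUE = total facility power / IT power

print(f"IT load:      {it_load_w / 1000:,.0f} kW")
print(f"Thermal load: {thermal_load_btu_hr:,.0f} BTU/hr")
print(f"PUE:          {pue:.2f}")                      # 1.36 in this example
```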

Heat Rejection Systems

Heat rejection systems are essential for regulating temperature in data centers. They transfer the heat generated by servers and other IT equipment to an external location, ensuring optimal operating conditions. This is usually accomplished through a heat exchanger, a device that transfers heat energy from one fluid to another, such as from air to water. Selecting the right system is crucial for effective cooling, and several options are available:

  • Fluid-Based Systems: Utilizing liquids such as water or glycol, these systems efficiently absorb and transfer heat. However, they require a complex network of pipes and pumps for operation
  • Direct Expansion (DX): These systems employ refrigerants to absorb heat directly from the air
  • Air-Side Economizers: These systems take advantage of cooler outdoor air to lower data center temperatures when external conditions permit. While energy-efficient, they necessitate robust air filtration systems
  • Water-Side Economization: This method typically uses a plate/frame heat exchanger combined with a chilled water system. When outdoor temperatures are low, the heat exchanger uses the cool air to chill the water, reducing the need to operate chillers
  • Dual Coil Solutions: Offering redundancy, these systems employ two separate coils for cooling – one for chilled water and another for refrigerant
  • Liquid Cooling: This approach is a server cooling technology that uses chilled liquid mediums like water or specialized coolants to efficiently dissipate heat from electronic devices in data centers. It has two main types: immersion cooling and direct liquid cooling

These heat rejection systems are vital for sustaining optimal temperature and humidity levels within the data center. The recommended temperature for most data centers ranges from 64°F to 81°F (18°C to 27°C). Likewise, the advised relative humidity levels fall between 40% and 60%.
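
A small sketch shows how those ranges might be enforced in monitoring code; the thresholds come straight from the paragraph above, while the sensor readings are made up:

```python
# Environmental range check: temperature 18-27 °C (64-81 °F), RH 40-60%.
def check_environment(temp_c, rh_pct):
    alerts = []
    if not 18 <= temp_c <= 27:
        alerts.append(f"temperature {temp_c} °C outside 18-27 °C range")
    if not 40 <= rh_pct <= 60:
        alerts.append(f"relative humidity {rh_pct}% outside 40-60% range")
    return alerts or ["environment within recommended ranges"]

print(check_environment(temp_c=24.5, rh_pct=45))   # within range
print(check_environment(temp_c=29.0, rh_pct=35))   # two alerts
```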

Airflow Management

Airflow management is crucial for regulating air circulation within a data center, which in turn ensures effective cooling and minimizes hotspots. Maintaining optimal temperatures is vital for energy efficiency and extending the lifespan of the equipment. Here are some key airflow management design strategies:

  • Hot Aisle and Cold Aisle Containment: This approach separates hot and cold air streams by strategically positioning server racks. Servers face the cold aisle, drawing in cool air at the front and expelling heated air into a designated hot aisle behind them
  • Access Floor Air Distribution: In this system, cool air circulates through perforated tiles on a raised floor. This ensures that equipment receives a consistent flow of cool air
  • Overhead Air Distribution: Cold air is released from ducts situated above the server racks. This method capitalizes on natural convection to circulate air efficiently
  • Row-Integrated Cooling: In this setup, cooling units and hot air exhaust systems are built directly into the rows of server racks. This localized cooling minimizes the distance cool air needs to travel, making it more efficient

To optimize air distribution, it’s essential to:

  • Arrange IT equipment and cables in a way that doesn’t obstruct airflow
  • Carefully plan cool air supply pathways to maintain a uniform temperature across the data center
  • Design return air paths that effectively capture and channel hot air back to cooling units

Additional Cooling Design Considerations

  • Ventilation: In computer rooms, outside air is an energy-efficient ventilation option, but it should be used only if it meets specific quality and humidity criteria. Battery rooms, however, require separate ventilation systems to manage the potential off-gassing of harmful fumes
  • Altitude: Cooling efficiency decreases at higher altitudes. Data center design considerations should therefore factor in the elevation of the location
  • Noise Levels: Cooling systems, such as fans, air handler units, and air conditioners, can generate disruptive noise. Acoustic design can help mitigate this issue

READ MORE: Data Center Cooling – A Comprehensive Guide

Redundancy

Redundancy levels are critical for ensuring high availability and fault tolerance in data center design. These levels impact a range of equipment, from power supply units to cooling systems. The primary configurations for redundancy in data centers are N, N+1, and 2N.

  • N: This configuration means that there is exactly the amount of equipment needed to power, cool, and back up the data center under normal operating conditions. There is no redundancy. If one component fails, it could lead to partial or complete downtime
  • N+1: In this configuration, there is one extra unit of each critical system component in addition to the baseline ‘N’ components required for normal operation. This standard approach allows for one piece of equipment to fail without causing system downtime, as the additional unit can take over
  • 2N: This configuration provides a complete duplicate of every piece of equipment in addition to the original ‘N’. In a 2N configuration, the system can continue operating normally even if an entire set of ‘N’ components fails. Essentially, the system has double the necessary UPS, HVAC, and generator systems to provide full fault tolerance. Power flows from the utility, passes through the UPS/PDU of two separate systems, and then connects to the server
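
A quick numeric sketch shows how these configurations translate into installed equipment counts, assuming a hypothetical baseline of N = 4 units (for example, four chillers needed to carry the full load):

```python
# Redundancy unit counts for a hypothetical baseline of N = 4 units.
n = 4
configs = {"N": n, "N+1": n + 1, "2N": 2 * n}
for name, units in configs.items():
    spares = units - n
    print(f"{name}: {units} units installed, {spares} can fail without downtime")
```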

READ MORE: Data Center Tiers – What’s the Difference Between 1, 2, 3, and 4?

Security

Data center security is a comprehensive approach involving multiple layers of strategies and protocols. These measures are designed to protect the facility and its stored information from a range of threats, including physical intrusion, cyberattacks, environmental hazards, and risks within the computer rooms.


Physical Security

The focus of physical security is to shield the data center from unauthorized access, theft, and physical damage. Access control serves as the initial line of defense and employs various mechanisms:

  • Perimeter fencing
  • Bollards
  • Security badges
  • Biometric authentication
  • Electronic locks
  • Mantraps

These elements work together to regulate entry into the data center and specific zones within the facility.

To minimize external attention to the data center, keep signage to a bare minimum. However, the interior should feature adequate signage to guide authorized personnel and discourage unauthorized entry to particular areas.

Security measures such as CCTV cameras and alarm systems should be in place and monitored 24/7 to swiftly identify and respond to any security incidents. Bright, motion-activated lighting enhances visibility and surveillance, serving both as a deterrent to potential intruders and as a means to capture clear CCTV footage.

Cybersecurity

The objective of cybersecurity is to guard the data center’s IT infrastructure from virtual threats such as cyberattacks and unauthorized data access. A robust defense relies on a combination of firewalls and intrusion detection systems. These tools serve to filter out malicious traffic and notify administrators of suspicious activities.

Consistent software updates (for example, antimalware programs), regular penetration testing, and vigilant network monitoring are essential for identifying and mitigating potential threats.

Environmental Security

Environmental security is primarily concerned with the physical conditions that impact data center design. Both natural disasters – like earthquakes, floods, hurricanes, tornadoes, and wildfires – and man-made threats, such as terrorist attacks, have the potential to compromise the facility. A Building Management System (BMS) typically oversees this aspect of security, continuously monitoring for environmental and emergency scenarios and alerting managers when action is required.

Computer Room Security

Ensuring the integrity of computer rooms involves stringent access control measures. Two-factor authentication and 24/7 surveillance cameras are standard protocols for monitoring and safeguarding these sensitive areas.

Additional security layers include secure cages, partitioned constructions, and key-based access systems for individual racks and servers. Raised floors facilitate better cable management, while shielding techniques for data cables and white noise generators protect against electronic eavesdropping. Finally, backup media should be securely stored in a fireproof safe or at an off-site location, and encrypted to safeguard sensitive data.

Fire Protection

Fire protection in data centers refers to the measures, systems, and practices implemented to prevent, detect, and suppress fires within a facility.


These fire protection systems include:

  • Fire Prevention: These are methods and technologies used to minimize the likelihood of a fire starting. They include implementing stringent housekeeping practices, temperature monitoring, housing lithium-ion batteries in a separate room, regular inspections and maintenance of critical infrastructure, and instituting a proper cable management plan
  • Fire Detection: Involves technologies that aim to identify fires at the earliest possible stage. They include smoke detectors, heat detectors, air sampling systems, gas detectors, and video detection
  • Fire Suppression: Once a fire is detected, these systems work to control and extinguish it. They utilize a range of methods, including water-based sprinklers, gas-based systems, inerting agents, and chemical extinguishers

READ MORE: Data Center Fires – A Detailed Breakdown with Examples

IT Operations Design in Data Centers

Designing IT operations for data centers involves careful planning of the computer room, network infrastructure, network operations center (NOC), and disaster recovery strategies.

Computer Room

The layout of a computer room in a data center is crucial for achieving optimal airflow, energy efficiency, and ease of maintenance. Arrange servers, storage, and networking equipment in a way that enhances cooling efficiency, such as hot and cold aisles. Additionally, ensure that all IT components are easily accessible for maintenance.


Racks and Cabinets

In the computer room, racks and cabinets both play a crucial role in housing and organizing servers, storage systems, networking equipment, and other IT hardware. However, they have distinct differences in terms of security, cooling, and cost:

Criteria     | Racks                           | Cabinets
Definition   | Framework for mounting hardware | Enclosed structure with racks
Security     | Lower, no built-in locks        | Higher, features like electronic locks
Cooling      | Dependent on room cooling       | Can have built-in cooling systems
Cost         | Generally less expensive        | More expensive due to added features

Racks are typically constructed from materials such as steel or aluminum and commonly come in two configurations: two-post racks and four-post racks. Both types feature open frames with standardized rail spacing (typically 19 inches).

  • Two-Post Racks: These racks consist of two vertical posts and a single rail. They are suitable for mounting lighter IT equipment like switches and patch panels but lack the stability required for heavy server equipment
  • Four-Post Racks: These racks have four vertical posts and include both a front and back rail. They are designed to accommodate heavier equipment like servers, offering greater weight capacity and stability

Cabinets, on the other hand, are enclosed frames with the same rail spacing. Standard guidelines divide the mounting rails into rack units (RU), with 1 RU equivalent to 1.75 inches.
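
The rack-unit arithmetic is simple enough to sketch directly; the 42U cabinet height used here is a common size, assumed for illustration:

```python
# Rack unit (RU) arithmetic: 1 RU = 1.75 inches.
RU_INCHES = 1.75
cabinet_ru = 42                          # assumed common cabinet height (42U)
usable_height_in = cabinet_ru * RU_INCHES

print(f"{cabinet_ru}U cabinet = {usable_height_in:.1f} in "
      f"({usable_height_in * 2.54:.0f} cm) of mounting space")   # 73.5 in (~187 cm)
```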

Rack Power Requirements

There are two traditional methods for estimating power requirements: rack-based and nameplate-based.

  • Rack-Based: This method assigns a standard power estimate per rack, typically ranging from 4 kW to 10 kW. While easy to apply, this approach can be inaccurate because it fails to account for the specific equipment in each rack
  • Nameplate-Based: This method calculates the power needs by summing the values listed on the nameplates of each server or IT device. Although more detailed, this approach may not be reliable, as nameplate values don’t always match actual power usage
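
The gap between the two methods is easy to see in a small sketch with a hypothetical rack of equipment; the derating factor applied to the nameplate sum is an assumption, since nameplate ratings typically overstate actual draw:

```python
# Comparing rack-based vs. nameplate-based power estimates (hypothetical rack).
rack_based_kw = 6.0                                  # flat per-rack assumption

nameplate_watts = [750, 750, 750, 1100, 1100, 500]   # per-device nameplate ratings
nameplate_kw = sum(nameplate_watts) / 1000

derating = 0.67                                      # assumed actual-to-nameplate ratio
estimated_actual_kw = nameplate_kw * derating

print(f"Rack-based estimate: {rack_based_kw:.1f} kW")
print(f"Nameplate sum:       {nameplate_kw:.2f} kW")
print(f"Derated nameplate:   {estimated_actual_kw:.2f} kW")
```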

Network Infrastructure

Network infrastructure is fundamental to data center design. It serves as the arteries through which data is transmitted, processed, and stored.


Network Topology

Network topology is the structural layout that governs how servers, storage, and other network devices are interconnected via physical cables and software. This layout has a significant impact on data flow within the data center, influencing factors like speed, redundancy, and fault tolerance.

In a data center, cables are usually arranged in a hierarchical star topology. This arrangement consists of three primary elements: backbone cabling, horizontal cabling, and cross-connects.

  • Backbone Cabling: These cables connect core network components, such as switches and routers, across different rooms or even separate buildings. Originating from a central hub, the backbone cabling extends to various distribution areas
  • Horizontal Cabling: This type of cabling links individual servers to local network switches, commonly within the same rack or adjacent ones. It stems from main distribution points to connect end-user equipment
  • Cross-Connects: These are specialized patch panels that connect cables via patch cords. Cross-connects efficiently route and reroute data traffic among servers, network devices, and external connections

Alternative network topologies like ring or mesh can also be utilized to meet specific data center design requirements or to add redundancy. Redundant cabling provides alternative routes for data, thereby increasing network reliability. If a cable or pathway fails, the data can be automatically rerouted through an alternate route.
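
As a toy illustration of that rerouting behavior, the sketch below models cabling paths as a graph (node names are hypothetical) and finds an alternate route when a link fails:

```python
# Toy model of redundant cabling: find a path, then reroute around a failed link.
from collections import deque

links = {
    "core":   ["dist-1", "dist-2"],
    "dist-1": ["core", "rack-A"],
    "dist-2": ["core", "rack-A"],    # redundant second path to rack-A
    "rack-A": ["dist-1", "dist-2"],
}

def find_path(graph, start, goal, failed=frozenset()):
    """Breadth-first search that skips failed links."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen and frozenset((path[-1], nxt)) not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path(links, "core", "rack-A"))    # primary route via dist-1
print(find_path(links, "core", "rack-A",
                failed={frozenset(("dist-1", "rack-A"))}))   # reroute via dist-2
```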

Structured Cabling Systems

Data center design incorporates specific telecommunications spaces into a typical facility layout. These spaces enable organized network connectivity, minimize signal loss, and provide fault tolerance.

Entrance rooms serve as the primary point of connection between the data center and external networks. Here, telecommunications cables enter the building and connect to physical IT hardware located at demarcation points. These points mark the shift in responsibility from service providers – such as ISPs and telecom companies – to data center operators.

Within the data center, a structured cabling system organizes the cable infrastructure across different hierarchical layers, each with its own set of functions. These layers handle tasks like core network management – which includes hosting main switches, routers, and backbone cabling. They also oversee data routing to aggregation and edge switches, as well as direct connectivity to computing hardware through horizontal cabling. This hierarchical structure ensures organized and efficient connections to individual racks, servers, and storage devices.

Additionally, a separate telecommunications room serves as the hub for cabling to peripheral areas like support offices and security rooms. This room is usually situated outside the main computer room.

Types of Data Center Cables

Data centers use various types of cables for telecommunications, each with its own set of characteristics such as flexibility, size, shielding, and load capacity. The main types are coaxial cables, twisted pair cables, and fiber optic cables.

  • Coaxial Cables: These cables have a central conductor, an insulator, a metallic shield, and an outer jacket. They are often used for wide-area circuits and occasionally for distributing TV signals within data centers
  • Twisted Pair Cables: Also known as Ethernet cables, these feature multiple pairs of copper conductors twisted together to reduce electromagnetic interference. They are frequently used in local area networks (LANs) and come in different performance categories, such as Cat 5e, Cat 6, and Cat 7
  • Fiber Optic Cables: Made of a thin glass filament encased in cladding, these cables provide high bandwidth and are capable of transmitting data over long distances. They are available in both single-mode and multimode varieties. InfiniBand, a high-speed, low-latency networking technology, predominantly uses fiber optic cables in data centers

READ MORE: Fiber Optics – What is it? and How Does it Work?

Cabling Pathways in Data Centers

The routes where these cables are installed within the data center are known as telecommunications cabling pathways.


These pathways can take several forms:

  • Raised Floors: These are installed at ground level and provide a space between the building’s actual floor and an elevated surface, which is used for cable routing
  • Conduits: These can be located either below the raised floors or elevated along walls and ceilings to house cables
  • Cable Trays: Generally situated near the ceiling, these trays provide overhead routing for cables

It is crucial for each type of pathway to have spare capacity to accommodate future installations of fiber optic cables.

Outside Plant Cabling

Outside plant cabling comprises the physical cables and support structures that link a data center to external networks. This infrastructure includes conduits, poles, and manholes designed to house and safeguard the cables. The cabling can be organized into two types of service pathways:

  • Underground: This involves tunnels or conduits situated below the ground to hold network cables. These underground pathways provide a higher level of protection against environmental conditions and unauthorized access
  • Aerial: This method employs overhead lines or cables supported by poles. Although aerial pathways are generally less secure than underground ones, they are often easier, faster, and more cost-effective to install

Network Operations Center (NOC)

The Network Operations Center (NOC) serves as the data center’s nerve center, coordinating all monitoring and administrative activities. It is equipped with workstations, multiple screens, and direct connections to all critical systems. Ideally, the NOC should be located near the computer room, yet sufficiently isolated to guard against physical threats such as fire.


To maintain the data center’s integrity, the NOC continuously monitors electrical systems, HVAC equipment, and fire suppression mechanisms. It utilizes a variety of sensors and remote monitoring tools for this purpose. One such tool is Data Center Infrastructure Management (DCIM) software, specifically designed for centralized planning, monitoring, measurement, management, control, and automation of data center operations.

Keyboard, Video, and Mouse (KVM) switches streamline the management process by enabling operators to control multiple servers using a single set of peripherals, thus conserving physical space in the NOC.

Disaster Recovery

Disaster recovery plans are crucial for quickly restoring data center operations after unexpected events like natural disasters, fires, or cyberattacks. These plans are also necessary for dealing with a range of system failures, such as power outages and cooling system malfunctions.


Key disaster recovery strategies include:

  • On-Premise Data Center Redundancy: This strategy involves maintaining duplicate systems and data backups within the same on-premise data center. While effective for fast recovery from hardware failures, it doesn’t adequately protect against larger-scale disasters
  • Off-site Data Storage: Keeping backup data at a geographically separate location ensures that essential information can be recovered even if the primary data center is severely compromised
  • Colocation Facility: Utilizing a colocation data center enhances reliability and expedites disaster recovery. These facilities provide shared physical space and resources, making them valuable for backup and redundancy efforts

Commissioning of Data Center Design

Commissioning is essential for effective risk management in data center facilities. It involves a rigorous series of tests on installed systems to confirm they perform according to design specifications.


This commissioning takes place before the facility becomes operational and before IT systems are activated. Commissioning is deeply integrated into the data center design process. For example, test ports are built into piping systems, and access doors are strategically placed in key areas of the air handler units.

Steps in Commissioning

The data center commissioning process is organized into specific milestones, collectively leading to the ultimate goal of substantial completion. The testing steps involved are:

  • Factory Acceptance Testing: This step involves testing facility infrastructure equipment such as UPS units, generators, chillers, CRAC or CRAH units, and cooling towers. A facilities manager is appointed, and a commissioning review is conducted during this phase
  • Installation Testing: During this phase, suppliers or subcontractors test facility infrastructure components, including cables and pipework
  • Site Acceptance Testing: Also known as plant commissioning, this stage includes comprehensive tests on equipment like UPS units, pumps, CRAC or CRAH units, and chillers. These tests are conducted for client or consultant review and involve powering on the equipment and observing its performance
  • Interoperability Testing: This phase assesses how different systems – such as UPS, generators, and building management systems (BMS) – interact with each other to ensure design functionality. Activities in this stage include the handover of IT racks, chemical cleaning, and training sessions
  • System Integration Testing: The final step is to ensure that all systems, including mission-critical facilities, operate cohesively as a single unit

Commissioning Workflow

The data center design team drafts the initial blueprint and specifications for the facility. Their role is to ensure that the systems align with the project requirements set by the data center owner. Separately, a commissioning agent, serving as an independent overseer, manages the entire commissioning process to guarantee that all systems and components fulfill the defined objectives.

Contractors and subcontractors are tasked with constructing the systems based on the design specifications. They collaborate closely with both the data center design team and the commissioning agent to ensure the project’s accurate execution.

Towards the end of the commissioning process, operation and maintenance staff receive training to manage the systems effectively. Once the data center becomes operational, they assume responsibility for its ongoing operation and maintenance.

Mary Zhang covers Data Centers for Dgtl Infra, including Equinix (NASDAQ: EQIX), Digital Realty (NYSE: DLR), CyrusOne, CoreSite Realty, QTS Realty, Switch Inc, Iron Mountain (NYSE: IRM), Cyxtera (NASDAQ: CYXT), and many more. Within Data Centers, Mary focuses on the sub-sectors of hyperscale, enterprise / colocation, cloud service providers, and edge computing. Mary has over 5 years of experience in research and writing for Data Centers.
