Data center architecture is a complex integration of modern facility, IT, and network systems working together to support critical business applications. These systems are highly interconnected, requiring a well-planned and synchronized approach to their design and operation. Because of this interdependence, a change made to one system can have far-reaching consequences across several others.

Data center architecture represents the design and layout of a computing facility that houses IT infrastructure, including servers, storage systems, and networking equipment. This architecture also incorporates the facility’s physical infrastructure, such as power distribution and cooling systems.

Dgtl Infra explores the world of data center architecture, delving into its key components, architectural principles, and best practices. We examine the network, storage, and server architectures that form the backbone of modern data centers, providing insights into their design and implementation. Finally, Dgtl Infra takes a look at the physical architecture of data centers, including site selection, building considerations, and design, to deliver a comprehensive understanding of this critical infrastructure.

What Is Data Center Architecture?

Data center architecture refers to the design and layout of a computing facility that houses servers, storage systems, and networking equipment for processing, storing, and distributing large amounts of information. It involves detailed planning of the physical space, power and cooling systems, network connectivity, security measures, and software to ensure optimal performance, reliability, and scalability of IT resources and services. The ultimate goal is to create an efficient, resilient, and secure environment for hosting critical IT infrastructure of modern businesses and organizations.


Components of Data Center Architecture

The key components of a typical data center architecture include:

  1. Servers: Classified into different types based on their physical structure and size, including rack servers, blade servers, and tower servers
  2. Storage Systems: Data centers use various storage technologies such as Storage Area Networks (SANs), Network Attached Storage (NAS), and Direct Attached Storage (DAS) to store and manage data
  3. Networking Equipment: Switches, routers, firewalls, and load balancers provide efficient data communication and security within the data center and to external networks
  4. Power Infrastructure: Uninterruptible Power Supply (UPS) systems, backup generators, and power distribution units (PDUs) deliver a stable and reliable power supply to the data center equipment
  5. Cooling Systems: Computer Room Air Conditioning (CRAC) units, liquid cooling systems, and hot/cold aisle containment maintain optimal temperature and humidity levels for the hardware to function properly
  6. Enclosures: Racks and cabinets used in data centers include open frame racks (two- and four-post racks), enclosed racks, wall-mounted racks, and network cabinets
  7. Cabling: Structured cabling systems, including twisted pair cables (for Ethernet, such as Cat5e, Cat6), fiber optic cables (single-mode and multi-mode), and coaxial cables
  8. Security Systems: Physical security measures like biometric access control, surveillance cameras, and security personnel, as well as cybersecurity solutions like firewalls, intrusion detection/prevention systems (IDS/IPS), and encryption protect the data center from unauthorized access and threats
  9. Management Software: Data Center Infrastructure Management (DCIM) software helps monitor, manage, and optimize the performance and energy efficiency of the data center components
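
As a rough illustration of the monitoring role DCIM software plays, the sketch below checks hypothetical sensor readings against illustrative environmental thresholds (the 18–27 °C band loosely follows ASHRAE's recommended envelope; the sensor names and limits are assumptions, not any vendor's API):

```python
# Hypothetical readings a DCIM tool might collect from facility sensors
readings = {
    "cold_aisle_temp_c": 24.5,
    "humidity_pct": 45.0,
    "ups_load_pct": 62.0,
}

# Illustrative operating envelopes (min, max) for each metric
limits = {
    "cold_aisle_temp_c": (18.0, 27.0),  # loosely based on ASHRAE's recommended range
    "humidity_pct": (20.0, 80.0),
    "ups_load_pct": (0.0, 80.0),        # keep headroom below full UPS load
}

# Flag any metric that falls outside its envelope
alerts = [name for name, value in readings.items()
          if not limits[name][0] <= value <= limits[name][1]]
print("alerts:", alerts or "none")
```

A real DCIM platform would poll these values continuously and correlate them across power, cooling, and IT systems, but the threshold-check logic is the same idea.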

Modern Data Center Architecture Principles

The modern data center architecture principles of scalability, reliability, efficiency, and security are fundamental to the design and layout of data centers.

  1. Scalability: Data centers must be designed to easily accommodate future growth in data volume, processing power, and storage needs without significant redesign or downtime. This principle ensures that infrastructure can expand in a modular fashion and that new hardware and resources can be added to meet increasing demands. Data centers need to both ‘scale out’ (adding more machines or instances to a pool of resources to handle increased load) and ‘scale up’ (adding more power, such as CPU and RAM, to an existing machine to improve its capacity)
  2. Reliability: Ensuring high availability and minimizing downtime are critical aspects of data center architecture, which can be achieved through redundant systems, fault tolerant hardware, and robust backup and disaster recovery solutions. Architecture incorporates redundancy at various levels, including power supply, cooling, networking, and storage, to maintain continuous data center operations even in the event of component failures
  3. Efficiency: Efficient data center design focuses on maximizing performance while minimizing the consumption of resources, particularly energy. This involves optimizing the layout, implementing energy-efficient hardware, using intelligent power management systems, utilizing efficient cooling systems (such as free cooling and liquid cooling), and improving key metrics like power usage effectiveness (PUE) to reduce operational costs and environmental impact
  4. Security: Protecting sensitive data and ensuring the integrity of the data center’s infrastructure are paramount in data center architecture, requiring a multi-layered security approach. This includes implementing physical security measures, such as access control systems and surveillance, as well as logical security measures, like firewalls, intrusion detection and prevention systems, and data encryption, to safeguard against unauthorized access, data breaches, and cyber threats
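
The efficiency principle's key metric, power usage effectiveness (PUE), is simple to compute: total facility power divided by the power delivered to IT equipment, with 1.0 as the theoretical ideal. A minimal sketch, using hypothetical meter readings:

```python
def power_usage_effectiveness(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power; 1.0 is the ideal."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical meters: 1,500 kW total facility draw, 1,000 kW reaching IT equipment
pue = power_usage_effectiveness(1500.0, 1000.0)
print(f"PUE: {pue:.2f}")  # → PUE: 1.50
```

Everything above 1.0 is overhead (cooling, power conversion losses, lighting), which is why efficiency measures such as free cooling push PUE downward.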

Network Architecture for Data Centers

Data center network architecture refers to the design and layout of the interconnected nodes and pathways that facilitate communication and data exchange within a data center. It includes the physical and logical layout of network equipment, such as switches, routers, and cabling, to enable efficient data transmission between servers, storage systems, firewalls, and load balancers.

Proper network architecture provides high-speed, low-latency, and reliable connectivity while delivering scalability, security, and fault tolerance.


For decades, the three-tier architecture has been the standard model for data center networks. However, an alternative topology, the spine-leaf architecture, has emerged and gained prominence in modern data center environments. This architecture is especially prevalent in high-performance computing (HPC) settings and has become the predominant choice among cloud service providers (CSPs).

The following is a comparison of these two distinct data center networking architectures:

Three-Tier Data Center Network Architecture

The three-tier data center network architecture is a traditional network topology that has been widely adopted in many older data centers and is often referred to as the ‘core-aggregation-access’ model or the ‘core-distribution-access’ model. Redundancy is a key part of this design, with multiple paths from the access layer to the core, helping networks achieve high availability and efficient resource allocation.

Here’s an overview of each tier in the three-tier data center network architecture:

  • Access Layer: As the lowest tier in the three-tier data center network architecture, it functions as the entry point for servers, storage systems, and other devices into the network, providing connectivity through switches and cables. Access layer switches, often arranged in a top-of-rack (ToR) configuration, enforce policies such as security settings and VLAN (Virtual Local Area Network) assignments
  • Aggregation Layer: Also known as the distribution layer, it consolidates data traffic from the access layer’s top-of-rack switches before transmitting it to the core layer for routing to its ultimate destination. This layer enhances the data center network’s resilience and availability through redundant switches that eliminate single points of failure, and it controls network traffic through policies like load balancing, quality of service (QoS), packet filtering, queuing, and inter-VLAN routing
  • Core Layer: Also known as the backbone, it is the high-capacity, central part of the network designed for redundancy and resilience, interlinking aggregation layer switches and connecting to external networks. Operating at Layer 3, the core layer prioritizes speed, minimal latency, and connectivity using high-end switches, high-speed cables, and routing protocols with lower convergence times

Ultimately, the traditional three-tier data center architecture struggles to efficiently handle the increased east-west (server-to-server) traffic generated by modern server virtualization technologies due to latency introduced by multiple hops between layers.
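
The extra hops are easy to see by counting the switches a worst-case east-west packet traverses in each topology (an illustrative tally, not a latency measurement):

```python
# Switch path for east-west (server-to-server) traffic, assuming the worst
# case where the two servers sit under different aggregation blocks.
THREE_TIER_PATH = ["access", "aggregation", "core", "aggregation", "access"]

# In spine-leaf (covered below), any two servers on different leaves are
# always exactly one spine apart.
SPINE_LEAF_PATH = ["leaf", "spine", "leaf"]

print(len(THREE_TIER_PATH), "switches traversed (three-tier, worst case)")
print(len(SPINE_LEAF_PATH), "switches traversed (spine-leaf, any pair)")
```

Each switch adds forwarding delay and a potential congestion point, so shortening and equalizing this path is the main motivation for the two-tier design described next.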

Spine-Leaf Architecture

Spine-leaf architecture, often referred to as a Clos design, is a two-tier network topology that is widely used in data centers and enterprise IT environments. It brings multiple benefits for data center infrastructure, such as scalability, reduced latency, and improved performance over traditional three-tier network architectures.

The components of spine-leaf architecture are as follows:

  • Leaf Switches: These are top-of-rack switches in the access layer that connect to servers and storage devices within the rack. They form a full mesh network by connecting to every spine switch, ensuring all forwarding paths are available and nodes are equidistant in terms of hops
  • Spine Switches: These form the backbone of the data center network, interconnecting all leaf switches and routing traffic among them. They do not connect directly to each other, as the mesh network architecture eliminates the need for dedicated connections between spine switches. Instead, they route east-west traffic through the spine layer to enable fully non-blocking data transfers between servers on different leaf switches

The spine-leaf architecture offers superior scalability, reduced latency, predictable performance, and optimized east-west traffic efficiency compared to the traditional three-tier architecture. It also provides fault tolerance through high interconnectivity, eliminates network loop concerns, and simplifies data center network management.
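
To make the scalability claim concrete, here is a back-of-the-envelope fabric-sizing sketch. The port counts are hypothetical, and the oversubscription figure assumes all links run at the same speed:

```python
def spine_leaf_capacity(spine_ports: int, leaf_ports: int, uplinks_per_leaf: int) -> dict:
    """Rough sizing for a two-tier spine-leaf fabric.

    Each leaf dedicates `uplinks_per_leaf` ports to the spine (one per spine
    switch, full mesh); the remaining leaf ports face servers. The number of
    leaves is capped by the port count of a single spine switch.
    """
    max_leaves = spine_ports  # every leaf consumes one port on every spine
    server_ports_per_leaf = leaf_ports - uplinks_per_leaf
    # Ratio of server-facing to spine-facing bandwidth, assuming equal link speeds
    oversubscription = server_ports_per_leaf / uplinks_per_leaf
    return {
        "max_leaves": max_leaves,
        "max_server_ports": max_leaves * server_ports_per_leaf,
        "oversubscription": oversubscription,
    }

# Hypothetical fabric: 32-port spines, 48-port leaves, 4 spine switches
fabric = spine_leaf_capacity(spine_ports=32, leaf_ports=48, uplinks_per_leaf=4)
print(fabric)  # 32 leaves, 1408 server ports, 11:1 oversubscription
```

Growing the fabric is then a matter of adding leaves (more server ports) or adding spines (more east-west bandwidth, lower oversubscription) without redesigning the topology.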

Storage Architecture for Data Centers

Data center storage architecture refers to the design and organization of storage systems that dictate how data is physically stored and accessed within a data center. It defines the types of physical storage devices used, like hard disk drives (HDDs), solid-state drives (SSDs), and tape drives, as well as how they are configured, such as direct-attached storage (DAS), network-attached storage (NAS), and storage area networks (SANs).

Additionally, storage architecture involves the method by which stored data is accessed by servers – either directly or over a network.

Storage Architecture Layout of Backup Tape Drives in the Google Data Center in Berkeley County South Carolina
Source: Google.

Here are the main types of storage architecture in data centers:

Direct-Attached Storage (DAS)

Direct-attached storage (DAS) is a digital storage system used in data centers, characterized by its direct physical connection to the server it supports, without a network connection in between. The server communicates with the storage devices using protocols like SATA, SCSI, or SAS, and a RAID controller manages the data striping, mirroring, and disk management.

Diagram of Direct-Attached Storage DAS from Application Server to File System

DAS offers cost-effectiveness, simplicity, and high performance for individual servers, but has limitations in scalability and accessibility compared to networked storage solutions like NAS and SAN.
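
The striping and mirroring that a DAS RAID controller performs in hardware can be sketched in a few lines of Python. This is a toy illustration of the two techniques, not production RAID logic:

```python
def raid0_stripe(data: bytes, disks: int, chunk: int = 4) -> list:
    """RAID 0 striping: distribute fixed-size chunks round-robin across disks."""
    stripes = [bytearray() for _ in range(disks)]
    for i in range(0, len(data), chunk):
        stripes[(i // chunk) % disks].extend(data[i:i + chunk])
    return [bytes(s) for s in stripes]

def raid1_mirror(data: bytes, disks: int) -> list:
    """RAID 1 mirroring: write identical copies of the data to every disk."""
    return [data for _ in range(disks)]

striped = raid0_stripe(b"ABCDEFGHIJKL", disks=3)
print(striped)  # → [b'ABCD', b'EFGH', b'IJKL']
```

Striping spreads I/O across spindles for speed with no redundancy; mirroring trades capacity for fault tolerance. Real controllers combine both (e.g., RAID 10) and add parity schemes such as RAID 5/6.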

Network-Attached Storage (NAS)

Network-attached storage (NAS) is a dedicated file-level storage device that facilitates data access for multiple users and client devices through TCP/IP Ethernet in local area networks (LANs). These systems are designed for easy data storage, retrieval, and management without the need for an intermediary application server.

Diagram of Network-Attached Storage NAS with Application Server connecting to Ethernet Switches

NAS offers easy access, sharing, and management benefits, but faces scalability and performance limitations due to its dependence on shared network bandwidth and physical constraints.

Storage Area Network (SAN)

Storage area networks (SANs) are dedicated, high-speed networks that connect servers to shared storage devices, typically utilizing the Fibre Channel protocol. These systems provide block-level access to storage within data centers, enabling servers to interact with storage devices as if they were directly attached, streamlining operations like backups and maintenance by offloading these tasks from the host servers.

Diagram of Storage Area Network SAN with Application Server connecting to Fibre Channel Switch

SANs offer high performance and scalability, but they come with high costs and complex management requirements that demand specialized IT expertise.
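
The key contrast with file-level NAS access is that a SAN exposes raw blocks. The sketch below imitates block-level addressing against a temporary file standing in for a hypothetical LUN; the block size and payload are illustrative:

```python
import os
import tempfile

# Block-level access (SAN-style): the initiator addresses raw blocks by
# logical block address (LBA); any file system lives on the host server,
# not on the storage target. A temporary file stands in for a LUN that a
# SAN would expose over Fibre Channel or iSCSI.
BLOCK_SIZE = 512

fd, lun_path = tempfile.mkstemp()
os.close(fd)

with open(lun_path, "r+b") as lun:
    lun.write(b"\x00" * (BLOCK_SIZE * 8))            # an 8-block "LUN"
    lun.seek(3 * BLOCK_SIZE)                         # jump straight to block 3
    lun.write(b"payload".ljust(BLOCK_SIZE, b"\x00")) # write one full block
    lun.seek(3 * BLOCK_SIZE)
    block3 = lun.read(BLOCK_SIZE).rstrip(b"\x00")    # read the block back

os.remove(lun_path)
print(block3)  # → b'payload'
```

With NAS, by contrast, the client names a file path and the filer decides where the bytes land; with a SAN, the server owns the layout and addresses blocks directly, which is why it behaves like directly attached storage.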

Server Architecture for Data Centers

Server architecture in data centers refers to the design and organization of servers and related components to efficiently process, store, and manage data.

Server Architecture Arranged in Rows within Microsoft Washington Data Center IT Equipment
Source: Microsoft.

It can typically be broken down into three categories: form factor (physical structure), system resources, and support infrastructure.

Form Factor (Physical Structure)

  • Rack Servers: These are the most common type of servers found in data centers. They are designed to be mounted in standard 19-inch racks and are typically 1U to 4U in height
  • Blade Servers: These servers are designed to maximize density and minimize physical space. Multiple blade servers are housed in one single chassis, sharing common resources such as power supplies, cooling, and networking
  • Tower Servers: While less common in large data centers, tower servers are still used in smaller-scale deployments or where rack space is not a constraint. They resemble desktop computer towers and can be standalone units
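
As a quick illustration of rack-server density, the sketch below estimates how many servers of a given U-height fit a standard 42U rack; the 4U reserved for switches and PDUs is an assumption, not a standard:

```python
RACK_U = 42  # a common full-height rack size

def servers_per_rack(server_u: int, reserved_u: int = 4) -> int:
    """Servers of height `server_u` that fit a 42U rack after reserving
    space (assumed 4U) for top-of-rack switches, PDUs, and cable management."""
    return (RACK_U - reserved_u) // server_u

print(servers_per_rack(1))  # → 38 one-U servers
print(servers_per_rack(2))  # → 19 two-U servers
```

In practice, power and cooling budgets per rack often cap density before the physical space does.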

System Resources

  • CPU (Central Processing Unit): The CPU is the brain of the server, responsible for executing instructions and processing data. It performs arithmetic, logical, and input/output operations
  • Memory: RAM (Random Access Memory) is the server’s primary memory, providing fast access to data and instructions. It temporarily stores data and programs that are currently in use
  • Storage: Devices, such as hard disk drives (HDDs) or solid-state drives (SSDs), store data and files persistently. They hold the operating system, applications, databases, and user data
  • Networking: NICs (network interface cards) connect the server to the network, enabling communication with other devices. They handle the sending and receiving of data packets
  • GPU (Graphics Processing Unit): GPUs are specialized processors designed for parallel processing and graphics rendering. They excel at handling computationally intensive tasks, particularly those for AI, machine learning, and scientific simulations. However, not all servers require GPUs

Support Infrastructure

  • Power: Power Supply Units (PSUs) provide stable and reliable power to all the server components. They convert AC power from the wall outlet to the appropriate DC voltages required by the server
  • Cooling: Servers generate a significant amount of heat, and the cooling system ensures that the components operate within safe temperature ranges. Cooling options include fans, heatsinks, liquid cooling, and air conditioning in server rooms
  • Motherboard: This is the main printed circuit board that connects all the server components together. It provides the necessary interfaces, buses, and slots for the CPU, RAM, storage, and other peripherals

Cloud Data Center Architecture

Cloud data center architecture refers to the design and organization of compute, storage, networking, and database resources within a remote data center that enables the delivery of cloud computing services. This architecture is built on virtualization technology, allowing for the efficient sharing and utilization of physical resources to provide scalable, reliable, and flexible cloud-based applications and services.


Here’s a breakdown of the main components of a cloud data center architecture:

  1. Compute: Cloud compute services provide virtual machines (VMs), containers, and serverless computing resources for running applications and workloads. These services allow users to provision and scale computing power on-demand, without the need to manage physical hardware. For example, major cloud compute services include Amazon EC2, Microsoft’s Azure Virtual Machines, and Google Cloud’s Compute Engine
  2. Storage: Cloud storage services offer scalable and durable storage solutions for various data types, such as files, objects, and backups. These services provide high availability, automatic replication, and data encryption to ensure data integrity and security. Examples of popular cloud storage services include Amazon S3, Microsoft’s Azure Blob Storage, and Google’s Cloud Storage
  3. Network: Cloud networking services enable users to create, configure, and manage virtual networks, subnets, and network security rules. These services provide connectivity between cloud resources, on-premises networks, and the internet, allowing for secure and efficient data transfer. For example, key cloud networking services include Amazon Virtual Private Cloud (VPC), Microsoft’s Azure Virtual Network, and Google Cloud Virtual Private Cloud (VPC)
  4. Database: Cloud database services offer managed and scalable database solutions for storing, retrieving, and managing structured and unstructured data. These services support various database engines, such as relational databases (e.g., MySQL, PostgreSQL), NoSQL databases (e.g., MongoDB), and data warehouses. Cloud database services handle tasks like provisioning, scaling, backups, and security, allowing developers to focus on application development. For example, well-known cloud database services include Amazon RDS, Microsoft’s Azure Cosmos DB, and Google Cloud SQL
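
Cloud platforms scale these compute resources on demand. The function below is a simplified, target-tracking-style scale-out rule, loosely inspired by how autoscalers such as EC2 Auto Scaling behave; it is not the actual algorithm of any provider:

```python
import math

def desired_instances(current: int, cpu_util_pct: float, target_pct: float = 60.0) -> int:
    """Scale the instance pool so average CPU utilization approaches the target.

    If 4 instances run at 90% CPU, the pool is doing 4 * 90 = 360 "units" of
    work; at a 60% target, that work needs ceil(360 / 60) = 6 instances.
    """
    return max(1, math.ceil(current * cpu_util_pct / target_pct))

print(desired_instances(4, cpu_util_pct=90.0))  # → 6
print(desired_instances(2, cpu_util_pct=30.0))  # → 1
```

The same elasticity logic underpins the "scale out" principle discussed earlier: the cloud layer adds or removes VMs against a shared pool of physical servers in the data center.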

Physical Data Center Design

The physical architecture and design of a data center is crucial for ensuring optimal performance, security, and reliability.

Physical Architecture for Exterior of Google Data Center Building Outside Atlanta in Douglas County Georgia
Source: Google.

Here are the key elements of physical data center architecture design:

Site Selection

  • Location: Data centers are often built in areas with low risk of natural disasters, away from areas prone to earthquakes, floods, and hurricanes
  • Climate: Colder locations can reduce the cost of cooling a data center by using ambient air, while hotter climates require more energy-intensive cooling solutions
  • Accessibility: The site must be easily accessible for staff and in proximity to major roads and airports for the transport of equipment and emergency response
  • Power: Reliable and cost-effective energy sources are vital. The presence of multiple high-voltage transmission lines and power substations is important for power delivery
  • Connectivity: Proximity to major fiber optic routes and having multiple service providers results in better connectivity

Building and Structural

  • Construction Materials: Data centers are typically constructed using durable, fire-resistant materials such as concrete, steel, and specialized wall panels
  • Structure: While single-story data centers are more common, multi-story data centers are increasingly being built in areas with limited land availability or high real estate costs
  • Ceiling Heights: High ceiling heights, usually between 12 and 18 feet, are necessary to accommodate raised floors, overhead cable trays, and air conditioning ducts while providing adequate clearance for equipment and maintenance
  • Load-Bearing Capacity: Data centers require high floor loading capacity to support the weight of heavy server racks, cooling systems, and Uninterruptible Power Supply (UPS) systems. The load-bearing capacity typically ranges from 150 to 300 pounds per square foot
  • Internal Layout: The internal architecture of a data center, including columns and partition walls, plays a crucial role in the overall design and functionality of the facility. These elements impact space utilization, airflow associated with cooling systems, power distribution, and the accessibility of equipment for maintenance purposes
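
The load-bearing figures above can be sanity-checked with simple arithmetic. This sketch computes the floor load a hypothetical rack imposes; the weight and footprint are illustrative:

```python
def floor_load_psf(rack_weight_lbs: float, footprint_sqft: float) -> float:
    """Load a rack places on the slab, in pounds per square foot (psf)."""
    return rack_weight_lbs / footprint_sqft

# Hypothetical: a 2,400 lb fully loaded rack on a 24" x 48" footprint (8 sq ft)
load = floor_load_psf(2400.0, 8.0)
print(f"{load:.0f} psf")  # → 300 psf, at the top of the typical 150-300 psf range
```

Structural engineers also spread point loads with floor tiles or plinths, but this back-of-the-envelope check shows why dense, fully populated racks push slab ratings to their limits.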

Design Systems

Data centers have been designed and built based on various architectural factors, such as size, purpose, ownership, and location. Notable types of data centers include:

  • Enterprise Data Centers: Owned and operated by individual companies to support their specific business needs and applications. They are often build-to-suit, meaning that they are customized to meet the specific needs of a single organization
  • Colocation Data Centers: Offer a shared infrastructure where multiple customers can rent space, power, and cooling to house their own IT equipment in the colocation facility
  • Hyperscale Data Centers: Massive, centralized facilities designed to support the needs of hyperscalers, which are large cloud service providers (CSPs) and internet companies
  • Edge Data Centers: Smaller facilities that utilize a distributed data center architecture. Edge data centers are located closer to end users or data sources, designed to reduce latency and improve application performance by processing data closer to its origin
  • Containerized Data Centers: Also known as micro data centers, these are modular, portable facilities housed in shipping containers, offering flexibility and rapid deployment
  • Artificial Intelligence (AI) Data Centers: Specialized facilities optimized for AI workloads, featuring high-performance computing, GPUs (Graphics Processing Units), and liquid cooling systems

Frequently Asked Questions

What is a Data Center Architect?

A data center architect is a professional responsible for the strategic planning, design, and oversight of data center infrastructure construction and implementation. This includes considerations for physical layout, environmental controls, energy efficiency, and the integration of IT network systems and security measures.

Data center architects collaborate closely with various stakeholders, such as clients, contractors, project managers, IT teams, and facilities management personnel. They ensure that the designed infrastructure aligns with the organization’s business goals while adhering to industry standards and regulations.

The primary objective of a data center architect is to design and deliver a reliable, efficient, and scalable data center that meets the current and future needs of the organization.

Mary Zhang covers Data Centers for Dgtl Infra, including Equinix (NASDAQ: EQIX), Digital Realty (NYSE: DLR), CyrusOne, CoreSite Realty, QTS Realty, Switch Inc, Iron Mountain (NYSE: IRM), Cyxtera (NASDAQ: CYXT), and many more. Within Data Centers, Mary focuses on the sub-sectors of hyperscale, enterprise / colocation, cloud service providers, and edge computing. Mary has over 5 years of experience in research and writing for Data Centers.
