Cloud networking, a vital component of cloud computing, has revolutionized the way network services are provisioned and managed. By seamlessly connecting and scaling resources on-demand, cloud networking offers unparalleled flexibility, cost efficiency, and performance, enabling enterprises to deliver increasingly data-intensive applications and services.

Cloud networking involves designing, deploying, and managing network infrastructure in a cloud-based environment, using virtualized resources hosted on servers in data centers. It offers networking solutions without the physical hardware and maintenance associated with traditional networking.

Dive deeper into cloud networking as Dgtl Infra explores its components, compares it to traditional networks, and analyzes the evolution of cloud data traffic patterns. Learn about various cloud networking technologies, such as network virtualization, SDN, and NFV, and discover the different types of cloud networking, including public, private, hybrid, and multi-cloud options. Additionally, we uncover the numerous benefits cloud networking can bring to your organization.

What is Cloud Networking?

Cloud networking refers to the practice of designing, deploying, managing, and operating network infrastructure and services within a cloud-based environment. It involves the integration and interconnection of virtualized network resources, services, and applications that are hosted on servers in cloud data centers. This approach enables enterprises to utilize scalable, flexible, high-performing, and cost-effective networking solutions without having to invest in the physical hardware and maintenance associated with traditional networking.

Underlay and Overlay Networks

Cloud networking utilizes underlay networks as the physical infrastructure foundation, while overlay networks create virtual connections and services on top of the underlay.

Underlay Networks

The underlay network is the physical network infrastructure, owned, built, and managed by the cloud service provider, which resides inside their data centers. It consists of elements such as routers, switches, and Ethernet cables and remains transparent to end users, who have no visibility into it. Enterprises can connect their on-premises networks to the cloud network, establishing a robust, global network system.

Overlay Networks (Virtual Private Clouds)

Overlay networks, also known as virtual private clouds (VPCs), are isolated, software-defined networks that exist within the cloud and allow connectivity among virtual machines (VMs) and other cloud resources. Enterprises create, configure, and manage these private virtual networks, which run atop the cloud service provider’s underlay network.

Configuration options for VPCs encompass routing policies, access control lists, network gateway rules, security groups, IP address assignment, and subnet creation. Additional network services that can be configured include load balancers, application-layer firewalls, content delivery networks (CDNs), caching systems, and domain name system (DNS) services. A cloud service provider’s orchestration platform handles the underlying connectivity details.
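To make these configuration options concrete, the sketch below models a VPC with a subnet and an ingress rule in plain Python. The class and field names are hypothetical, invented for illustration; real providers expose equivalent settings through their own consoles and APIs.

```python
from dataclasses import dataclass, field

# Hypothetical data model of a VPC and a few of its configuration
# options; not any cloud provider's actual API.

@dataclass
class SecurityGroupRule:
    protocol: str      # e.g. "tcp"
    port: int          # destination port to allow
    source_cidr: str   # which source addresses the rule applies to

@dataclass
class Subnet:
    cidr: str          # address range carved out of the VPC's block
    name: str

@dataclass
class VPC:
    cidr: str
    subnets: list[Subnet] = field(default_factory=list)
    ingress_rules: list[SecurityGroupRule] = field(default_factory=list)

    def add_subnet(self, name: str, cidr: str) -> Subnet:
        subnet = Subnet(cidr=cidr, name=name)
        self.subnets.append(subnet)
        return subnet

    def allow_ingress(self, protocol: str, port: int, source_cidr: str) -> None:
        self.ingress_rules.append(SecurityGroupRule(protocol, port, source_cidr))

# Example: a VPC with one public subnet that accepts HTTPS from anywhere
vpc = VPC(cidr="10.0.0.0/16")
vpc.add_subnet("public-a", "10.0.1.0/24")
vpc.allow_ingress("tcp", 443, "0.0.0.0/0")
```

In a real deployment, the provider's orchestration platform translates declarations like these into the underlying routing and connectivity, as noted above.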

Example of Cloud Networking

The following illustration depicts an example of a virtualized cloud data center, demonstrating the role of cloud networking. It highlights the physical servers and network components that underlie the virtualized infrastructure.

Cloud Network Diagram

Within the context of cloud networking, the diagram displays three distinct customers sharing the same physical resources. The cloud service provider (CSP) is responsible for allocating virtual machine (VM) resources across multiple servers in the network. Simultaneously, the CSP assigns virtual network resources to each customer, connecting their respective virtual machines (VMs).

To ensure security and privacy, the software-defined virtual networks and connections are private and isolated from one another, preventing any potential interference or unauthorized access between customers.

What is Different about Cloud Networking?

Unlike traditional networking, cloud networking leverages remote data centers and virtualized resources to provide network infrastructure and services to enterprises without on-site physical hardware. This approach enables rapid application deployment and cost-effective updates while accommodating the increased east-west traffic patterns in modern data centers, as opposed to the north-south traffic patterns of traditional networks.

Cloud Networks Accelerate Application Deployment

In cloud computing, applications are distributed across thousands of servers. These cloud servers are interconnected by high-speed networking switches to form a pool of resources that allows applications to be rapidly deployed and cost-effectively updated. Cloud computing enables ubiquitous and on-demand network access to these applications from internet-connected personal computers, smartphones, tablets, and Internet of Things (IoT) devices.

Almost all consumer applications are now provided through cloud services, with enterprise applications quickly following suit. Large internet companies such as Amazon, Microsoft, Google, and Meta Platforms (Facebook) have led the way in developing “hyperscale” cloud data centers to support these applications and accommodate the increasing demands of users. Consequently, these technology giants have also emerged as leaders in cloud networking, rearchitecting the way data is managed and transmitted around the world.

Furthermore, the advent of cloud-native applications, involving technologies such as containers, container orchestration, and microservices, as well as SaaS (Software as a Service) models, is repositioning siloed ‘places in the network’ to be ‘places in the cloud’.

Traditional Networks vs Cloud Networks

Traditional networks involve the deployment and management of physical hardware, such as routers, switches, cabling, firewalls, and load balancers, within an enterprise’s on-premises data center. This approach requires substantial upfront investment in equipment, as well as ongoing maintenance and manual configuration.

On the other hand, next-generation data center networks, known as cloud networks, are designed and built differently to meet the unique demands of modern applications. Cloud networks leverage remote data centers and virtualized resources to provide network infrastructure and services without the need for physical hardware on-site.

In cloud networks, the networking functions provided by traditional routers, switches, firewalls, and load balancers are implemented in the cloud service provider’s proprietary software, meaning they are not discrete resources that the customer can manage.

Evolution of Data Traffic Patterns in Cloud Networks

Cloud networks and traditional networks have fundamental differences in their structure and the nature of data traffic. Traditional data centers typically host specific applications on a limited number of servers, resulting in primarily server-to-client, or north-south traffic. This accounts for an aggregate network bandwidth of a few terabits per second. In contrast, cloud networks experience a much higher volume of server-to-server, or east-west traffic, with aggregate network bandwidth exceeding 1 petabit per second, which is over 300 times greater than that of traditional data center networks.
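The "over 300 times" figure follows from simple arithmetic, assuming "a few terabits per second" means roughly 3 Tbps:

```python
# Back-of-the-envelope check of the bandwidth comparison above.
# Assumes "a few terabits per second" means roughly 3 Tbps.
traditional_tbps = 3        # traditional data center, mostly north-south
cloud_tbps = 1_000          # 1 petabit per second = 1,000 Tbps
ratio = cloud_tbps / traditional_tbps
print(f"Cloud aggregate bandwidth is ~{ratio:.0f}x traditional")  # ~333x
```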

The increase in east-west traffic in cloud data centers can be attributed to several factors, such as the growing use of cloud services, proliferation of content distribution networks (CDNs), rapid data transfer between multiple clouds (i.e., multi-cloud), and data replication across various data centers. This shift in traffic patterns is also driven by the rise of highly distributed applications that generate east-west traffic for workloads, workflows, workstreams, and distributed personal computers, smartphones, tablets, and Internet of Things (IoT) devices.

Addressing the Increase in East-West Traffic

In response, cloud network architectures are being developed and adopted to optimize the flow of east-west traffic and reduce latency. These architectures focus on accommodating evolving server workloads and the growing demand for server virtualization, which has driven the rapid growth of east-west traffic relative to traditional data center networks.

The growing volume of east-west traffic, which refers to server-to-server communication in cloud data centers, is effectively managed by the widely adopted leaf-spine architecture, which was designed specifically for modern data center networks. The leaf-spine architecture simplifies network management, enhances scalability, and reduces latency by employing spine switches and leaf switches to connect servers with the wide area network (WAN) or core router.
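The key property of a leaf-spine fabric can be sketched in a few lines of Python: every leaf switch links to every spine switch, so any two servers on different leaves are always the same small number of switch hops apart, which is what makes east-west latency predictable. The switch counts below are arbitrary examples.

```python
from itertools import product

# Minimal sketch of a leaf-spine fabric: every leaf switch links to
# every spine switch, so any two servers on different leaves always
# traverse exactly three switches (leaf -> spine -> leaf).

def build_fabric(num_spines: int, num_leaves: int) -> set[tuple[str, str]]:
    spines = [f"spine{i}" for i in range(num_spines)]
    leaves = [f"leaf{i}" for i in range(num_leaves)]
    return {(leaf, spine) for leaf, spine in product(leaves, spines)}

def east_west_switch_count(src_leaf: str, dst_leaf: str) -> int:
    # Same leaf: one switch; different leaves: up via any spine, then down.
    return 1 if src_leaf == dst_leaf else 3

links = build_fabric(num_spines=4, num_leaves=16)
print(len(links))                            # 64 uplinks: 16 leaves x 4 spines
print(east_west_switch_count("leaf0", "leaf9"))  # 3, regardless of placement
```

Because the path length is uniform, adding capacity is a matter of adding spines (more bandwidth) or leaves (more servers) without redesigning the topology.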

Networking Inside and Outside of the Cloud Data Center

Cloud infrastructure utilizes local area networks (LANs) to connect internal physical resources, like servers, routers, and switches within data centers, and wide area networks (WANs) to establish connections between data centers and external networks or users.

Networking Inside Cloud Data Centers

Networking inside a cloud data center primarily involves the use of local area networks (LANs). Utilizing networking technologies such as Ethernet, Fibre Channel, and InfiniBand, LANs connect the physical resources, including servers, routers, switches, and storage devices, within a data center. This internal connectivity enables effective data transfer, communication, and management of resources within the data center.

Key characteristics of Ethernet, Fibre Channel, and InfiniBand are as follows:

  • Ethernet: the most widely used LAN technology in cloud networking, which offers high-bandwidth performance and cost-effectiveness within data centers. Key characteristics of Ethernet include its extensive use of Virtual Local Area Networks (VLANs) for segmenting cloud network infrastructure into multiple virtual networks, enhancing scalability, security, and efficiency in managing large, multi-tenant cloud environments
  • Fibre Channel: a dedicated storage networking standard designed to ensure resilience and security in data centers, where storage traffic cannot tolerate retransmission delays. This protocol also offers high-speed data transfer rates, ranging from 1 Gbps to 128 Gbps, making it suitable for cloud networking applications that require rapid and efficient data exchange between servers and storage devices
  • InfiniBand: high-speed interconnect technology that offers low-latency communication between devices in data center and cloud networking environments. InfiniBand’s high-performance and low-latency characteristics make it suitable for applications requiring fast, efficient communication between nodes, such as high-performance computing (HPC), machine learning (ML), and large-scale data processing
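The VLAN segmentation mentioned above rests on a small piece of frame structure: an 802.1Q tag is 4 bytes, beginning with the 0x8100 TPID, followed by 3 priority bits, 1 drop-eligible bit, and a 12-bit VLAN ID (so IDs 1-4094 are usable). As a sketch, the snippet below extracts the VLAN ID from such a tag using only the Python standard library:

```python
import struct

# Sketch: pulling the VLAN ID out of an 802.1Q tag. The tag is 4 bytes:
# a 0x8100 TPID, then 3 priority bits, 1 drop-eligible bit, and a
# 12-bit VLAN ID in the low bits of the Tag Control Information field.

def parse_vlan_id(tag: bytes) -> int:
    tpid, tci = struct.unpack("!HH", tag)  # two big-endian 16-bit fields
    if tpid != 0x8100:
        raise ValueError("not an 802.1Q tag")
    return tci & 0x0FFF  # low 12 bits carry the VLAN ID

# Build a tag for VLAN 100 with priority 0, then parse it back
tag = struct.pack("!HH", 0x8100, 100)
print(parse_vlan_id(tag))  # 100
```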
Innovations Driving Cloud Network Demands

Recent innovations, including dense server virtualization and IP storage, have placed significant demands on networking inside cloud data centers by increasing the number of virtual machines (VMs) and data traffic. For example, IP storage uses standard Ethernet and TCP/IP protocols, whereas traditional storage uses proprietary, non-IP based networks like Fibre Channel. As organizations transition to IP storage, they create more data traffic since IP storage networks can handle file-level (NAS), block-level (iSCSI), and object-level access to data, while traditional storage primarily focuses on block-level access (Fibre Channel SANs).

Networking Outside Cloud Data Centers

Cloud networking enables global distribution and simplified development of the enterprise network system outside cloud data centers. It connects resources across the regions and availability zones that make up a cloud service provider's global infrastructure.

Regions and Availability Zones

Cloud service providers (CSPs) organize their cloud infrastructure into different geographical regions, which are defined geographical areas whose names and boundaries vary by provider. In practice, all cloud service providers typically establish regions in key data center markets within the United States, Europe, Asia Pacific, and Latin America.

These cloud regions contain multiple availability zones (AZs). Each availability zone is a logical grouping of one or more closely located data centers. Each AZ is self-contained and physically isolated from other AZs in the same region to provide additional fault tolerance and resiliency.

Cloud networking relies on geographically distributed cloud regions and availability zones (AZs) to enhance resiliency, enable efficient, low-latency connections between resources, and distribute applications globally.
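One way to picture how AZs deliver resiliency is to spread an application's replicas across the zones of a region, so the loss of any single zone leaves the application running. The round-robin placement below is a simplified sketch; the zone names are illustrative, not any provider's actual identifiers.

```python
from itertools import cycle

# Sketch: round-robin placement of application replicas across the
# availability zones of one region. Zone names are made up for
# illustration; real providers use their own naming schemes.

def place_replicas(replicas: list[str], zones: list[str]) -> dict[str, str]:
    zone_cycle = cycle(zones)
    return {replica: next(zone_cycle) for replica in replicas}

placement = place_replicas(
    replicas=["web-1", "web-2", "web-3"],
    zones=["region1-az1", "region1-az2", "region1-az3"],
)
print(placement["web-2"])  # region1-az2 — each replica lands in its own AZ
```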

Virtual Networks

Virtual networks, provided by cloud vendors, are logical groupings of infrastructure resources that connect and exchange data. Typically, customers can connect to virtual networks through three different methods:

  • Internet: as a public network, the Internet provides constant accessibility but may experience high, unpredictable latency due to its open and shared nature
  • Virtual Private Network (VPN) Gateway: offers secure, encrypted connections for applications that do not require constant access, and are commonly used to connect cloud resources to on-premises networks, offices, or remote data centers
  • Dedicated Private Connections: provide predictable latency and dedicated bandwidth through mediums such as dark fiber, but lack built-in encryption and internet access
Subnets

Virtual networks can be subdivided into subnets to allow different types of processing or to define specific traffic rules. Subdividing virtual networks is important for cloud networking because it enables granular control over network traffic and enhances security by isolating different types of processing.
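Subdividing an address block in this way can be sketched with Python's standard ipaddress module; the /16 block, the /24 subnet size, and the tier assignments below are arbitrary examples.

```python
import ipaddress

# Carve a virtual network's /16 address block into /24 subnets,
# e.g. one subnet per tier (web, app, database) or per traffic rule.
vnet = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vnet.subnets(new_prefix=24))

print(len(subnets))        # 256 possible /24 subnets
print(subnets[0])          # 10.0.0.0/24 — e.g. the web tier
print(subnets[1])          # 10.0.1.0/24 — e.g. the app tier

# Traffic rules can then match on subnet membership:
addr = ipaddress.ip_address("10.0.1.57")
print(addr in subnets[1])  # True — this host sits in the app tier
```

Because each subnet is a distinct address range, firewall rules and route tables can isolate tiers from one another, which is the granular control and isolation described above.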

Cloud Networking Technologies

Cloud networking architecture relies on key technologies including network virtualization, software-defined networking (SDN), and network functions virtualization (NFV).

Network Virtualization

Network virtualization is the process of abstracting physical network resources into a flexible, software-based virtual environment. This approach allows for the creation of multiple, isolated virtual networks on the same underlying hardware, each with its own configurations, policies, and management. Network virtualization is useful in multi-tenant cloud data centers, where each customer can be allocated multiple virtual machines (VMs) as well as a virtual network connecting these VMs.

Key technologies in network virtualization include software-defined networking (SDN) and network functions virtualization (NFV), which are discussed below:

Software-Defined Networking (SDN)

Software-defined networking (SDN) enables centralized management, programmability, and automation of network resources by decoupling the control and data planes within a network. This technology is crucial in large cloud data center networks, where software is used for the setup, configuration, and monitoring of servers, storage resources, and network equipment.

SDN’s programmability enables dynamic network configuration adaptation based on workload and facilitates policy-based network management schemes to achieve cloud service providers’ objectives in performance, utilization, and availability. In particular, SDN has been adopted by hyperscale companies like Microsoft, Google, and Meta Platforms (Facebook), which have specific IT workloads for which network traffic routing decisions can be optimized.
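The control/data plane split can be illustrated with a toy model: one controller computes forwarding rules centrally and pushes them to simple switches, which only match packets against their installed flow tables. This is a conceptual sketch, not a real SDN protocol such as OpenFlow.

```python
# Toy sketch of SDN's control/data plane split: the controller holds
# global policy and programs every switch; switches only do lookups.

class Switch:
    """Data plane: forwards packets purely by flow-table lookup."""
    def __init__(self, name: str):
        self.name = name
        self.flow_table: dict[str, str] = {}  # dst prefix -> output port

    def forward(self, dst: str) -> str:
        for prefix, port in self.flow_table.items():
            if dst.startswith(prefix):
                return port
        return "drop"  # no rule installed for this destination

class Controller:
    """Control plane: one decision point, pushed to all switches."""
    def __init__(self):
        self.switches: list[Switch] = []

    def register(self, switch: Switch) -> None:
        self.switches.append(switch)

    def install_rule(self, prefix: str, port: str) -> None:
        for switch in self.switches:
            switch.flow_table[prefix] = port

ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.register(s1)
ctrl.register(s2)
ctrl.install_rule("10.0.", "port2")  # one policy change, network-wide effect

print(s1.forward("10.0.3.7"))     # port2 — rule came from the controller
print(s2.forward("192.168.1.1"))  # drop — no matching rule installed
```

The point of the sketch is the shape of the system: policy lives in one place, and reprogramming the network is a software operation rather than per-device manual configuration.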

READ MORE: Software-Defined Networking (SDN) Explained

Network Functions Virtualization (NFV)

Network functions virtualization (NFV) utilizes cloud-native software to virtualize network services, including routers, switches, firewalls, and load balancers, which were previously dependent on dedicated, proprietary hardware. By virtualizing these services, NFV enhances cloud networking through improved resource allocation, increased flexibility, and a lower total cost of ownership, leading to more efficient and adaptable networks.

NFV allows for the flexible deployment of virtual network functions (VNFs) across the four major cloud computing models: public cloud, private cloud, hybrid cloud, and multi-cloud. As such, VNFs can be moved to, or instantiated in, various locations in the network as required, without the need for installation of new hardware.
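The "moved to, or instantiated in, various locations" property can be sketched as follows; the VNF class and location names are hypothetical, standing in for a real NFV orchestrator.

```python
# Sketch: with NFV, a network function (e.g. a firewall) is software
# that can be instantiated at, or moved to, any compute location on
# demand, rather than a hardware appliance shipped to a site.
# Location names below are made up for illustration.

class VNF:
    def __init__(self, kind: str):
        self.kind = kind
        self.location = None  # not yet running anywhere

    def instantiate(self, location: str) -> None:
        self.location = location   # spin up the software at this site

    def migrate(self, location: str) -> None:
        self.instantiate(location)  # moving is just re-instantiating

firewall = VNF("firewall")
firewall.instantiate("datacenter-eu-west")
firewall.migrate("datacenter-us-east")   # no new hardware installed
print(firewall.location)  # datacenter-us-east
```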

READ MORE: Network Functions Virtualization (NFV) Explained

Types of Cloud Networking

There are several types of cloud networking, which can be broadly categorized based on their deployment models, including public cloud networking, private cloud networking, hybrid cloud networking, and multi-cloud networking.

Public Cloud Networking

Public cloud networking offers cost-effective, scalable, and on-demand network resources and services from third-party providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. These resources are shared among multiple users or organizations over the internet.

Enterprises manage virtual networks, security policies, and other network-related services through the provider’s management console or APIs. Public cloud hosting uses multi-tenant data centers, ensuring isolated, scalable, flexible, and secure virtual networks for communication between thousands of servers or virtual machines (VMs) belonging to different customers.

READ MORE: Top 10 Cloud Service Providers Globally

Private Cloud Networking

Private cloud networking, either on-premises or in a remote data center, offers a single organization increased control, customization, and security compared to public cloud networking. Enterprises with strict regulatory and security requirements or needing a customized infrastructure benefit from deploying, managing, and scaling network resources within the private cloud environment.

Cloud service providers (CSPs) create isolated, multi-tenant environments with virtual machines (VMs) and networks, using server virtualization, network tunneling protocols, and storage virtualization for efficiency and optimized performance.

READ MORE: Private Cloud – What is it? and How Does it Work?

Hybrid Cloud Networking

Hybrid cloud networking integrates public and private cloud models with secure connections and unified management tools. Organizations benefit from the cost savings, scalability, and flexibility of public clouds while retaining the control, security, and customization of private clouds.

Hybrid cloud networking enables enterprises to optimize their infrastructure based on specific workloads and requirements, such as keeping sensitive data on-premises and using public cloud resources for less critical applications. Services like AWS Direct Connect exemplify hybrid cloud networking solutions.

READ MORE: Hybrid Cloud – What is it? and How Does it Work?

Multi-Cloud Networking

Multi-cloud networking uses multiple cloud service providers (CSPs) to optimize network resources, enhance flexibility, and mitigate risks associated with relying on a single provider. Integrating various public, private, or hybrid clouds allows organizations to leverage each provider’s unique strengths and avoid vendor lock-in.

Multi-cloud network architectures use APIs to abstract, automate, secure, and improve networking within and between multiple public clouds.

READ MORE: Multi-Cloud – What is it? and How Does it Work?

Benefits of Cloud Networking

The benefits of cloud networking include scalability and flexibility, performance and reliability, cost savings and efficiency, simplified network management, and openness and programmability.

Scalability and Flexibility

Cloud networking offers organizations scalability and flexibility by utilizing software to swiftly adjust their network infrastructure according to fluctuating demands. This approach ensures necessary resources are readily available without requiring additional hardware investments. Cloud networking achieves scalability and flexibility through internal networks that interconnect hundreds of thousands of physical servers and millions of virtual machines (VMs), as well as storage nodes. Furthermore, these networks maintain strong connections to external carrier networks for seamless communication and data transfer between the cloud data center and the broader internet or other data centers.

To deliver scalability and flexibility in hardware, cloud data center networks rely on the use of highly adaptable software operating systems, which can be deployed on various networking hardware appliances, such as bare metal switches and proprietary switches. Such scalable cloud networking platforms cater to the needs of cloud service providers (CSPs) and large internet companies.

Moreover, cloud networks function as hardware-agnostic platforms, capable of running and scaling on custom application-specific integrated circuits (ASICs), merchant silicon, or even hypervisors and containers. This flexibility allows for seamless integration with various hardware types, further enhancing the adaptability of cloud networking.

Performance and Reliability

Cloud networking enhances performance and reliability by leveraging multiple data centers and network redundancy measures, such as diverse network paths. Additionally, cloud networking achieves performance isolation, ensuring that the performance of one user, application, or tenant on the shared network does not negatively affect the performance of other users, applications, or tenants. Ultimately, these measures reduce the risk of downtime, which is especially crucial for large-scale cloud networks, where network outages can be costly to customers.

Resilient, self-healing infrastructure and virtualization techniques are employed in cloud networking environments, allowing for automated workload reallocation in the event of physical equipment failures. This further strengthens reliability and operational manageability. In the context of hyperscale data centers, such as those operated by Amazon or Microsoft, these features are vital since hardware failures are statistically more likely to occur compared to smaller data centers, making a manual IT support approach impractical.

Merchant silicon, widely used in cloud data center networks, enables 400G and 800G Ethernet network interfaces, allowing cloud network providers to deliver switches with industry-leading capacity, low latency, port density, and power efficiency.

Cost Savings and Efficiency

Cloud networking delivers cost savings and increased efficiency for organizations by reducing both capital expenditures and operating expenses. This is primarily achieved by eliminating the need to purchase, manage, and maintain physical hardware, such as routers, switches, cabling, firewalls, and load balancers. Additionally, organizations benefit from the cloud’s pay-as-you-go model, where they only pay for the resources they consume, further enhancing cost-effectiveness.

The use of programmable, scalable leaf-spine architectures in conjunction with software applications significantly reduces networking costs compared to traditional network designs. This approach enables faster time to service and improved availability, contributing to overall efficiency. Automation tools also play a crucial role in lowering operational costs by streamlining provisioning, managing, and monitoring cloud data center networks.

Additionally, many cloud data center operators purchase lower-cost, custom-built networking equipment from original design manufacturers (ODMs), such as Edgecore Networks (Accton), Celestica, and Quanta. Software-defined networking (SDN) initiatives facilitate this shift by creating a central orchestration layer for rapid, error-free network and server configuration, simplifying networking equipment and enabling hardware-agnostic network operating systems. In turn, this allows for the use of multiple network equipment vendors, which reduces costs.

Simplified Network Management

Cloud networking simplifies network management by providing a centralized platform equipped with programmability and automation features. These capabilities reduce the complexity of managing disparate systems and enable efficient provisioning of multi-tenant applications that serve multiple users.

  • Programmability: allows cloud networks to integrate with third-party applications for network management, automation, orchestration, and network services, which is crucial for enabling customization and enhancing the capabilities of cloud networking platforms
  • Automation: facilitates workload mobility across the cloud network, while container technology makes automation easier by enabling rapid, agile provisioning that reduces deployment time from hours or days to minutes

The centralized approach of software-defined networking (SDN) serves as a key enabler of cloud network automation. Automated software programs can be written to allow organizations to configure, provision, secure, and optimize network resources as needed. For cloud service providers, automating manual tasks is particularly important, with methods such as zero-touch provisioning (ZTP) being employed to configure a switch without human intervention.
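The ZTP flow described above can be sketched as a lookup against a provisioning server: a factory-fresh switch boots, identifies itself by serial number, and pulls its staged configuration with no human intervention. The serial numbers and config contents below are invented for illustration.

```python
# Sketch of zero-touch provisioning (ZTP): a new switch boots and
# fetches its configuration keyed by serial number. Serial numbers
# and config contents are hypothetical examples.

PROVISIONING_SERVER = {
    "SN-0001": {"hostname": "leaf-rack12", "mgmt_ip": "10.10.0.12"},
    "SN-0002": {"hostname": "leaf-rack13", "mgmt_ip": "10.10.0.13"},
}

def zero_touch_provision(serial: str) -> dict:
    config = PROVISIONING_SERVER.get(serial)
    if config is None:
        raise LookupError(f"no config staged for {serial}")
    # A real switch would now apply the config and boot into service.
    return config

cfg = zero_touch_provision("SN-0001")
print(cfg["hostname"])  # leaf-rack12 — assigned without human intervention
```

Real deployments drive this flow over DHCP options and a config/image server, but the essential idea is the same: the device's identity, not an operator, selects its configuration.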

Open and Programmable

Cloud networking emphasizes being open and vendor-agnostic, allowing organizations to purchase bare metal switches from a variety of vendors. This approach results in a diversified hardware supply chain and eliminates vendor lock-in. Additionally, control plane software can be built in-house, purchased from different vendors, or even implemented using open-source versions of protocols to facilitate communication with hardware devices.

Programmable interfaces provide greater control by enabling software and cloud networking platforms to integrate with a wide range of third-party applications. This eliminates the need to manually program multiple vendor-specific hardware devices. Instead, developers can control packet forwarding and traffic flow over a network by programming an open standards-based software controller.

Merchant silicon also plays a key role in enabling open standards-based networking, as it offers faster time-to-market and is driven by technology advances associated with Moore’s Law. These off-the-shelf, high-performance semiconductor chips eliminate vendor lock-in and facilitate multi-sourcing from various vendors.

Mary Zhang covers Data Centers for Dgtl Infra, including Equinix (NASDAQ: EQIX), Digital Realty (NYSE: DLR), CyrusOne, CoreSite Realty, QTS Realty, Switch Inc, Iron Mountain (NYSE: IRM), Cyxtera (NASDAQ: CYXT), and many more. Within Data Centers, Mary focuses on the sub-sectors of hyperscale, enterprise / colocation, cloud service providers, and edge computing. Mary has over 5 years of experience in research and writing for Data Centers.
