Edge data centers provide compute and storage resources at the periphery of a network, improving the performance, operating costs, and security of applications and services. These facilities enable new low-latency applications that cannot afford to send all of their data back to centralized regional or cloud data centers. By placing computing resources close to end users, edge data centers allow data to be processed in real-time.
Edge data centers are small, decentralized facilities that offer computing and storage services closer to where data is generated and consumed. Unlike regional and cloud data centers, these facilities reduce latency and optimize bandwidth, facilitating the deployment of new applications.
Dgtl Infra offers a comprehensive guide to edge data centers, covering their advantages, various types such as metro, mobile, aggregation, and access, and the companies involved in building and utilizing them. We also explore the market size and factors driving demand. To provide a complete perspective, we compare edge data centers with regional data centers and cloud solutions, and address whether 5G relies on edge computing data centers.
Understanding Edge Data Centers
Edge data centers are decentralized facilities with their own power and cooling systems. These facilities are strategically located closer to where data is generated or used. Unlike traditional data centers that route traffic to the nearest major market for processing at regional or cloud-based facilities, edge data centers handle data storage, processing, and analysis near the end user’s location.
In terms of location, edge computing data centers are deployed either as standalone facilities or within a number of different environments, such as telecommunications central offices, cable headends (i.e., local distribution points), the base of cell towers, or on-premises at an enterprise.
To this end, edge data centers are smaller facilities, located closer to end users than regional data centers – which are large facilities in close proximity to urban population centers – and cloud data centers – which are massive, centralized, and remote facilities in areas where land and power are relatively inexpensive. Typically, edge sites are connected via fiber optics to the larger regional and cloud data centers.
Why Move Data Centers to the Edge?
There are four main benefits of moving data centers to the edge, which involve improvements to latency, bandwidth, operating costs, and security:
- Latency: edge data centers facilitate lower latency, meaning much faster response times. Locating compute and storage functions closer to end users reduces the physical distance that data packets need to traverse, as well as the number of network “hops” involved, which lowers the probability of hitting a transmission path where data flow is impaired
- Bandwidth: edge facilities process data locally, reducing the volume of traffic flowing to-and-from central servers. In turn, greater bandwidth across the user’s broader network becomes available, which improves overall network performance
- Operating Cost: because edge computing data centers lower network bandwidth usage, they inherently reduce the cost of data transmission and routing. This is particularly beneficial for high-bandwidth applications. More specifically, edge facilities lessen the number of necessary high-cost circuits and interconnection hubs leading back to regional or cloud data centers, by moving compute and storage closer to end users
- Security: edge data centers enhance security by: i) reducing the amount of sensitive data transmitted, ii) limiting the amount of data stored in any individual location, given their decentralized architecture, and iii) decreasing broader network vulnerabilities, because breaches can be ring-fenced to the portion of the network that they compromise
What are the Types of Edge Data Centers?
There are two major types of edge data centers, namely metro edge facilities, which are located in suburban markets, and mobile edge facilities, which are deployed in C-RAN (Cloud-Radio Access Network) hubs and at the base of cell towers. Additionally, within the broader mobile edge definition, facilities located at C-RAN hubs can be referred to as the aggregation edge, while deployments situated at the base of cell towers can be referred to as the access edge.
Architecture of Metro and Mobile Edge
Visually, the relationship between metro and mobile edge data centers can be depicted through a hub-and-spoke system. In the below example, Jacksonville, Florida acts as the hub or metro edge facility, while five smaller and proximate markets, including Tallahassee, Gainesville, and Palm Coast in Florida, as well as Savannah and Augusta in Georgia, are the spokes or mobile edge data centers.
The hub and spoke architecture of edge data centers can be further defined as follows:
- Hub: metro edge data centers located in a suburban market (e.g., Jacksonville, Florida) allow customers to access connectivity services from telecommunications carriers, internet service providers (ISPs), and cloud service providers
- Spoke: mobile edge data centers situated in a smaller, underserved, local market (e.g., Gainesville, Florida) provide colocation services for a customer’s IT infrastructure. These smaller mobile edge computing data centers, which are deployed at C-RAN hubs and the base of cell towers, connect via fiber to the larger metro edge facilities
Connection Between Metro and Mobile Edge
Edge Data Centers – Summary by Type
Here are additional details on metro and mobile edge data centers, covering both the aggregation edge, represented by C-RAN hubs, and the access edge, located at the base of cell towers. Often, micro edge data centers are deployed at these cell tower bases to form the access edge.
|Criteria|Metro Edge|Mobile Edge|
|---|---|---|
|Location|Suburban markets|Cell towers and C-RAN hubs|
|Power Capacity|5+ megawatts|50 to 150+ kilowatts|
|Area|50,000+ square feet|Hundreds of square feet|
|Customer Deployments|25+ cabinets|1/4+ cabinet|
Metro Edge Data Centers (Suburban Markets)
Metro edge data centers typically comprise over 5 megawatts of power capacity and more than 50,000 square feet, located in suburban (Tier II/III) markets. These data centers serve individual customer deployments in the range of 250 kilowatts to 3 megawatts of power capacity, and can supply 25 to 100 cabinets.
Metro edge data centers facilitate aggregation, content delivery networks (CDNs), and peering for Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) companies. For example, cloud service providers and internet companies, including Amazon Web Services (AWS), Google Cloud, Apple, Meta Platforms (Facebook), Netflix, and Salesforce have taken capacity in metro edge computing data centers.
Geographically, metro edge data centers exist in the suburbs, just outside of large cities. For instance, Cleveland, Ohio; Jacksonville, Florida; Minneapolis, Minnesota; Overland Park, Kansas; Plano, Texas; Salt Lake City, Utah; and St. Louis, Missouri, are all markets where metro edge facilities have been deployed. In these Tier II/III markets, edge computing data centers can reduce latency to <20 milliseconds, which is important for a number of emerging applications.
Mobile Edge Data Centers (Cell Towers and C-RAN Hubs)
Mobile edge data centers are facilities found at C-RAN hubs, also known as the aggregation edge, while those located at the base of cell towers are referred to as the access edge. Collectively, mobile edge data centers comprise 50 to 150+ kilowatts of power capacity across hundreds of square feet per site.
The aggregation edge resides at C-RAN (Cloud-Radio Access Network) hubs, which serve as central aggregation points for various radio communication equipment that support cellular networks. These hubs are strategically positioned near multiple cell towers, allowing several baseband units (BBUs) to be colocated at a single facility.
C-RAN hubs reduce latency and improve performance for applications by shifting compute closer to a wireless carrier’s end users.
The access edge is located at the base of cell towers, often taking the form of container-like, modular structures, known as micro edge data centers.
Over 165,000 cell towers exist in the United States and their exclusive real estate represents a significant new location where thousands of edge data centers can be deployed. While not all of these cell towers are suitable, SBA Communications, an independent tower company, states that it has more than 8,000 sites in the United States that could house an edge facility.
Placing edge data centers near a cell tower reduces the physical distance between the network’s final segment and the main processing functions of an application located in the same edge facility. An edge computing data center at the base of a cell tower could be less than 10 miles away from the end user.
Ultimately, edge computing at the access edge results in the lowest latency and highest performance from the application, to the mobile user’s device – which is particularly important for applications that require proximity.
What is a Micro Edge Data Center?
Micro edge data centers are container-like, modular structures that are prefabricated and then moved to locations that have the requisite real estate, power, and connectivity. While the base of cell towers are a major location where micro edge facilities are deployed, these facilities can also be placed at office buildings, retailers, stadiums/arenas, universities, parking lots, and the intersection of major fiber routes.
In terms of form factor, micro edge data centers are built to different size, power capacity, and cooling specifications. Still, the components of these micro edge facilities remain similar to larger facilities, including power distribution units (PDUs) and uninterruptible power supply (UPS) systems.
Example – EdgePod by EdgePresence
Below is an illustration of EdgePresence’s micro edge data center, known as an EdgePod, which supports 100 kilowatts (kW) of power availability and is 360 square feet in size, with dimensions of 30 feet in length and 12 feet in width. Typically, the facility’s fit-out includes 8 usable cabinets, made up of 20 quarter-cabinet lockers (i.e., 5 full cabinets’ worth of space) and 3 full cabinets.
While these micro edge data centers can support power densities of over 15 kW per cabinet, average customer usage is only 3 kW per cabinet. As a reference point, a typical enterprise data center realizes average power densities of 6 to 12 kW per cabinet. Eventually, power density could become more important for micro edge facilities given that space is scarce and expensive – at pricing of more than $300 per kW per month.
To build, deliver, and install one of these micro edge data centers, total costs range from $500,000 to $1+ million, depending on the size of the facility.
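The capacity, pricing, and build-cost figures above can be tied together in a rough back-of-the-envelope sketch. The inputs below are the figures cited in this section; treating them as single point estimates (e.g., taking the midpoint of the build-cost range and assuming every cabinet bills at the average usage) is an assumption for illustration, not quoted deal terms:

```python
# Back-of-the-envelope economics for a micro edge data center (EdgePod-style).
# All inputs are the illustrative figures from this section, not actual deal terms.

usable_cabinets = 8          # 20 quarter-cabinet lockers (5 cabinets) + 3 full cabinets
avg_kw_per_cabinet = 3       # average customer usage cited above
price_per_kw_month = 300     # $/kW/month pricing cited above
build_cost = 750_000         # assumed midpoint of the $500k to $1+ million range

billed_kw = usable_cabinets * avg_kw_per_cabinet
monthly_revenue = billed_kw * price_per_kw_month
simple_payback_months = build_cost / monthly_revenue

print(f"Billed load: {billed_kw} kW")                          # 24 kW
print(f"Monthly revenue: ${monthly_revenue:,}")                # $7,200
print(f"Simple payback: {simple_payback_months:.0f} months")   # ~104 months
```

The sketch also shows why power density matters: at the cited 15 kW per cabinet maximum, the same 8 cabinets could bill several times more load from the same scarce footprint.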
For more details on the technical specifications, check out the walkthrough tour in this video:
Edge Data Center Companies
Edge data center companies include those building and using the facilities:
Who is Building Edge Data Centers?
Large data center operators including Equinix, Digital Realty, CoreSite, and Cyxtera operate in urban (Tier I) data center markets. By definition, these facilities sit outside of both the metro edge, meaning suburban (Tier II/III) markets, and the mobile edge, meaning C-RAN hubs and the base of cell towers.
However, these large data center operators market their “edge” capabilities on the premise of reaching large populations rapidly. For example, Equinix states that 80% of the population of North America, Western Europe, and most Asia-Pacific metros is within a 10 millisecond round trip of its data centers. CoreSite, in turn, states that its United States portfolio of data centers covers 75% of U.S. businesses with latency of 5 milliseconds or less.
More importantly, the companies actually building edge computing data centers, including metro edge and mobile edge facilities, include the following operators:
- Metro Edge: AtlasEdge, Cologix, DartPoints, DataBank, Edge Centres, EdgeConneX, Evoque, Flexential, Leading Edge Data Centres, T5 Data Centers, TierPoint
- Mobile Edge (Micro Edge Data Centers): EdgePresence, Vapor IO
As a point of comparison, metro edge data center provider DataBank estimates its latency to be 3 to 5 milliseconds, within a 10-mile radius of its data centers.
Who is Using Edge Data Centers?
Edge data center customers vary by type and purpose, including the following groups and customers:
- Wireless Carriers: Verizon, AT&T, T-Mobile, and DISH Network are using these facilities as Open RAN (O-RAN) and virtualized RAN (vRAN) gain importance with 5G
- Cloud Service Providers (CSPs): Amazon Web Services (AWS), Microsoft Azure, and Google Cloud are deploying edge cloud services
- Internet Companies: Apple, Meta Platforms (Facebook), Netflix, Salesforce
- Content Delivery Networks (CDNs): Akamai, Cloudflare, Fastly, Edgio (formerly Limelight Networks) are deploying in multiple sites for applications such as video caching
- Connectivity Providers: Zayo, Lumen Technologies, Cogent Communications, and Megaport use these sites to reduce latency and enhance network resilience
- Bare Metal Providers: VMware, Rackspace, OVHcloud
- Enterprises: companies that lack on-premises IT infrastructure and/or prefer to access a third-party facility in close proximity
Edge Cloud Services
The cloud service providers, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud, are all utilizing edge data centers to deploy their edge cloud services, including:
- Amazon Web Services (AWS): AWS Local Zones (places compute, storage, and database close to large population and industry centers) and AWS Wavelength (compute and storage services at wireless carrier facilities for 5G networks)
- Microsoft Azure: Azure Edge Zones and Azure Private Multi-Access Edge Compute (customers build their own private Azure data center)
- Google Cloud: Global Mobile Edge Cloud (GMEC) is a portfolio and marketplace of 5G edge computing solutions built jointly with wireless carriers
Edge Data Center Market Size
While the edge data center market size is unquestionably growing, a number of market participants have attempted to quantify this growth through their total addressable market (TAM) estimations.
American Tower, as part of its acquisition of CoreSite, estimated that, by 2026, edge data centers will have a TAM of approximately $3 billion in revenue from space and power solutions. More specifically, this comprises the mobile edge, with a TAM of approximately $1 billion in revenue, and the metro edge, with a TAM of approximately $2 billion in revenue.
Additionally, PwC (PricewaterhouseCoopers) expects the broader global market for edge data centers to grow to $13.5 billion in 2024, from $4 billion in 2017. At the same time, Technavio, a research firm, estimates that the global edge data center market (including components) will grow by $8.7 billion between 2020 and 2025, implying a compound annual growth rate (CAGR) of 16.7%.
Finally, Tolaga Research forecasts that, by 2028, there will be over 1.6 million servers residing in edge data centers around the world. By this time, edge servers will support 10% of cloud workloads globally, which is an increase from just over 1% at present.
Bringing content, and the servers that the content resides on, closer to end users has driven demand for edge data center space. To date, short-form video applications (e.g., TikTok), video streaming services (e.g., Netflix), audio streaming (e.g., Spotify), and ridesharing (e.g., Uber) have all driven a need for more edge facilities.
Over the next 5 years, low-latency applications including artificial intelligence (AI), Internet of Things (IoT) applications, augmented reality (AR), virtual reality (VR), telemedicine, real-time analytics, autonomous vehicles, video/live streaming, and network functions virtualization (NFV) for 5G, will all drive a greater need for edge computing data centers.
Edge Data Center Architecture
Presently, to deliver applications, all of the required computing is performed in centralized regional or cloud data centers, and data is then transmitted back, via high-bandwidth connections, to end users. Edge data centers offer a decentralized alternative, while being physically smaller and consuming less power than regional and cloud data centers.
Edge vs Regional Data Center
Edge data centers are located closer to where data is being generated and/or used, such as only 10 to 50 miles away from the end user. In contrast, regional data centers are located in larger urban areas and service a more sizable geographic region, meaning that they are often located 100 to 200+ miles away from the end user. Therefore, edge facilities are better suited to serving applications with low-latency requirements, while regional data centers can handle less performance-sensitive functions such as storage and analysis.
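The distance figures above translate directly into propagation delay. As a rough sketch, assuming signals travel through fiber at about two-thirds the speed of light (roughly 200 km per millisecond) and ignoring routing, queuing, and processing delays:

```python
# Round-trip propagation delay over fiber for the distances cited above.
# Assumes light in fiber travels at ~200,000 km/s (about 2/3 of c); real network
# paths add routing, queuing, and processing delay on top of this lower bound.

FIBER_KM_PER_MS = 200        # ~200,000 km/s expressed per millisecond
MILES_TO_KM = 1.609

def round_trip_ms(miles: float) -> float:
    """Round-trip propagation delay in milliseconds for a one-way distance in miles."""
    km = miles * MILES_TO_KM
    return 2 * km / FIBER_KM_PER_MS

for label, miles in [("edge (10 mi)", 10), ("edge (50 mi)", 50),
                     ("regional (100 mi)", 100), ("regional (200 mi)", 200)]:
    print(f"{label}: {round_trip_ms(miles):.2f} ms propagation round trip")
```

The propagation component alone is small; in practice, the reduction in network hops and congested transit paths that edge placement enables contributes at least as much to the latency savings described earlier.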
Edge Data Center vs Cloud
Cloud data centers are massive in scale and located outside of urban areas, where land and power are relatively inexpensive. Specifically, cloud data centers are often hundreds or thousands of miles away from the end user, increasing latency to where the data is ultimately delivered. As such, edge data centers can appropriately serve applications with low-latency requirements, while cloud data centers can more optimally enable use cases like web & mobile applications, website hosting, and e-commerce.
Does 5G Use Edge Data Centers?
5G networks often use edge data centers to reduce latency – the time it takes for users to access data from a source – and improve data processing speed closer to end users. As wireless carriers aim to support low-latency applications, the demand for edge data centers is growing. Located near the devices that both generate and use data, these facilities play a crucial role in improving 5G network performance.
Edge Data Centers – 4G / LTE and 5G Networks
As shown above, in a 4G / LTE network, average latency ranges between 50 to 100 milliseconds (ms), comprised of core / cloud, transport, and air interface latency. In a 5G network, by contrast, latency can be reduced to a total of 10 milliseconds (ms), with only 2 to 3 milliseconds of that attributable to components other than transport. Decreasing transport latency requires moving the core compute and cloud interface closer to the edge and, in turn, to the end user.
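The 4G-to-5G comparison above can be restated as a simple latency budget. The per-component splits below are illustrative assumptions chosen to be consistent with the totals cited in this section, not measured values:

```python
# Illustrative end-to-end latency budgets (milliseconds), consistent with the
# totals cited above. The per-component splits are assumptions for illustration.

budget_4g = {"air interface": 10, "transport": 20, "core / cloud": 45}   # ~50-100 ms total
budget_5g = {"air interface": 2, "transport": 7, "core / cloud": 1}      # ~10 ms total

for name, budget in [("4G / LTE", budget_4g), ("5G", budget_5g)]:
    total = sum(budget.values())
    print(f"{name}: {total} ms total ({budget})")

# Moving core compute into an edge data center attacks the transport component:
# 2-3 ms of non-transport latency implies transport is ~7-8 ms of the 10 ms total.
transport_share_5g = budget_5g["transport"] / sum(budget_5g.values())
print(f"Transport share of 5G budget: {transport_share_5g:.0%}")
```

Under this split, transport dominates the 5G budget, which is why shortening the transport path with edge data centers yields most of the remaining latency gain.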