Underwater data centers and servers provide a glimpse into the future of a rapidly evolving industry where submerged oceanic facilities help support and connect underserved communities around the world.

There are major underwater data center and underwater server initiatives ongoing across the globe, with the most notable of these developments being Microsoft’s Project Natick. By putting the “cloud” into the “ocean”, services like compute, storage, and networking can help democratize innovation.

Dgtl Infra provides an in-depth overview of underwater data centers and underwater servers, covering the rationale as to why these new forms of infrastructure exist. Additionally, we review Microsoft’s Project Natick, including past developments in Phase 1 and Phase 2, as well as details on a potential Phase 3 in the future. Finally, Dgtl Infra highlights ongoing underwater data center and underwater server initiatives from lesser-known companies who are taking different deployment approaches.

Underwater Data Centers

Underwater data centers are submerged facilities equipped with power and cooling infrastructure that house computer servers.

Are there Data Centers Underwater?

Since 2015, several data centers have been submerged underwater in both the Pacific Ocean and Atlantic Ocean. The first underwater data center was deployed by Microsoft into the Pacific Ocean, off the coast of California, through an experiment called Project Natick, with Phase 1 being a vessel carrying 1 rack, containing 24 servers.

Following Microsoft’s initial proof-of-concept tests, oceanic data centers have grown in scale, with Phase 2 of Project Natick being a shipping container-sized data center carrying 12 racks, containing 864 servers.

Microsoft’s Project Natick Phase 2 – Underwater Data Center

Source: Microsoft.

Additionally, underwater data center prototypes and tests have been conducted by China-headquartered Beijing Highlander Digital Technology and Los Angeles, California-based Subsea Cloud. Speculation has also arisen that Amazon Web Services (AWS), Google, and Facebook (Meta Platforms) could be conducting their own underwater data center research.

Why are Data Centers Underwater?

Data centers are being placed underwater to realize benefits in cooling, latency, time to market, reliability, and sustainability. Below are further details on the five reasons why these oceanic data centers are being submerged underwater:


Cooling

Core to the value proposition of underwater data centers is cooling. To this end, oceans provide a consistently cold setting which removes heat produced by the facilities that house servers. Moreover, this cooling can be used at effectively no cost.

Heat removal methods are an important consideration in data centers overall as cooling is a significant component of operating costs at the facility-level. Therefore, if underwater data centers can reduce cooling costs, then they are able to hold an operating cost advantage over land-based data centers.

As an example, Microsoft notes that its most recent underwater data center, which was submerged 117 feet (36 meters) below sea level, experienced temperatures about 18 degrees Fahrenheit (10 degrees Celsius) cooler than land-based data centers. As measured using energy efficiency metrics, Microsoft’s underwater data center attained a power usage effectiveness (PUE) of 1.07, whereas the company’s newly constructed land-based data centers produce a PUE of about 1.125.
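The PUE comparison can be made concrete with a short sketch. PUE is total facility energy divided by IT equipment energy, so any PUE above 1.0 represents cooling and power-distribution overhead; the 1 GWh annual IT load below is a hypothetical figure chosen purely for illustration.

```python
# PUE = total facility energy / IT equipment energy,
# so non-IT overhead (cooling, power distribution) = (PUE - 1) * IT energy.
def overhead_kwh(pue: float, it_energy_kwh: float) -> float:
    """Energy consumed by everything other than the IT equipment."""
    return (pue - 1.0) * it_energy_kwh

it_load_kwh = 1_000_000  # hypothetical 1 GWh annual IT load (illustrative)

underwater = overhead_kwh(1.07, it_load_kwh)   # PUE from Project Natick Phase 2
land_based = overhead_kwh(1.125, it_load_kwh)  # PUE of new land-based facilities

print(f"Underwater overhead: {underwater:,.0f} kWh")  # 70,000 kWh
print(f"Land-based overhead: {land_based:,.0f} kWh")  # 125,000 kWh
print(f"Annual savings:      {land_based - underwater:,.0f} kWh")
```

At these PUE levels, the underwater facility spends roughly 44% less energy on overhead than its land-based counterpart for the same IT load.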

READ MORE: How Data Centers Impact the Environment


Latency

Underwater data centers provide a solution for low-latency connectivity, reducing the time it takes for data to travel between its source and destination. In particular, oceanic data centers can deliver low-latency connectivity to coastal populations, which is important, as more than 50% of the world’s population lives within 120 miles (200 kilometers) of the coast.

By placing underwater data centers in close proximity to a large proportion of the world’s population, faster and smoother internet browsing, video streaming, gaming, and cloud services can be brought to underserved communities. As such, underwater data centers could become an important edge computing tool for cloud service providers including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.

READ MORE: What is an Edge Data Center? (With Examples)

Time to Market

Generally, underwater data centers are built as pre-fabricated and standardized modules, which enables quick construction and delivery times. For example, Microsoft’s underwater data center initiative targets a deployment timeframe of “less than 90 days from factory to operation”.

Ultimately, the goal with underwater data centers is to deploy these facilities faster than land-based data centers. On land, data center “construction” requires permits and adaptation to various physical environments. Underwater data centers, by contrast, involve more of a “manufacturing” process, which aims to produce modules at scale for deployment in very similar oceanic conditions.

READ MORE: How Much Does it Cost to Build a Data Center?


Reliability

Underwater data centers possess a high degree of reliability and more predictable performance because these pre-fabricated modules are built to precise specifications in a controlled factory environment. As such, these oceanic data centers can operate with no personnel on-site and without maintenance for up to 5 years.

READ MORE: Data Center Tiers – Measuring Availability

As Microsoft’s underwater data center project highlighted, after each 5-year deployment cycle, the data center vessel would be retrieved, reloaded with new servers, and then redeployed. Overall, this process could be repeated for a total of 4 deployments over a 20-year lifespan. Subsequently, the underwater data center would be decommissioned and recycled.

Also, the servers within an underwater data center exhibit greater longevity, a proxy for reliability – which is discussed in further detail in the next section.


Sustainability

Underwater data centers enable operators to meet their sustainability requirements because these facilities can be co-located with offshore renewable energy sources that produce no greenhouse gas emissions. For example, renewable energy sources for underwater data centers include offshore wind, solar, tidal, and wave power. By not connecting to the electrical grid, these oceanic data centers can reduce strain on local power networks.

Additionally, underwater data centers consume no water for cooling or any other operational purpose. As such, these facilities do not strain fresh water resources that are essential to people and the environment.

READ MORE: Data Center Water Usage – Billions of Gallons Every Year

As measured using water sustainability metrics, underwater data centers operate with a “perfect” water usage effectiveness (WUE) of exactly zero. In comparison, land-based data centers consume up to 4.8 liters of water per kilowatt-hour (kWh).
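The gap between a WUE of zero and a land-based worst case can be sketched in a few lines. WUE is liters of water consumed per kilowatt-hour of IT energy; the annual IT load below is a hypothetical figure for illustration.

```python
def annual_water_liters(wue_l_per_kwh: float, it_energy_kwh: float) -> float:
    """Water consumed per year, given a WUE in liters per kWh of IT energy."""
    return wue_l_per_kwh * it_energy_kwh

it_load_kwh = 1_000_000  # hypothetical 1 GWh annual IT load (illustrative)

land_worst_case = annual_water_liters(4.8, it_load_kwh)  # up to 4.8 million liters
underwater = annual_water_liters(0.0, it_load_kwh)       # exactly zero

print(f"Land-based (worst case): {land_worst_case:,.0f} liters")
print(f"Underwater:              {underwater:,.0f} liters")
```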

Underwater Servers

Underwater servers are submerged computing devices with data processing and storage components designed to house and operate applications, websites, and content, which ultimately connect to a network.

Are there Underwater Servers?

In total, thousands of servers, housed in underwater data centers, have been submerged into the Pacific Ocean and Atlantic Ocean.

Microsoft’s Project Natick Phase 2 – Underwater Servers

Source: Microsoft.

Why are Servers Underwater?

Servers are being placed underwater in sealed containers on the ocean floor because it allows for greater server longevity, a proxy for reliability. Underwater server reliability is driven by four primary factors:

  1. Atmosphere: underwater data center containers provide an atmosphere of dry nitrogen, meaning there is no oxygen – this is important as nitrogen is less corrosive than oxygen. In turn, a benign oxygen-free environment allows servers to last much longer
  2. Humidity: low humidity in an underwater environment helps reduce the risk of corrosion, as a result of excessive condensation, and extends the life expectancy of servers
  3. Temperature Fluctuations: consistent ocean temperatures reduce the risk of significant increases or decreases in ambient temperatures, as a result of HVAC / cooling system issues or failures. Large temperature fluctuations can cause servers and networking equipment to expand and contract, which contributes to equipment failure
  4. Absence of People: underwater data centers operate with no personnel on-site, as opposed to land-based data centers which utilize facility management staff and technical engineers. By having no personnel on-site, underwater data centers eliminate bumps and jostles to the facility’s servers from people who replace broken components

As a reference point, Microsoft states that the servers in its latest underwater data center are 8 times more reliable than those on land. Said differently, Microsoft’s underwater servers showed a failure rate of 1/8th of what the company experiences in land-based server deployments.
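The 8x reliability claim can be expressed as a quick back-of-the-envelope calculation. The land-based failure rate below is an assumed, illustrative number, not a figure Microsoft has published.

```python
land_failure_rate = 0.08  # assumed 8% annualized server failure rate on land (illustrative)
underwater_failure_rate = land_failure_rate / 8  # 1/8th of the land-based rate

servers = 864  # Project Natick Phase 2 server count
expected_land_failures = servers * land_failure_rate              # ~69 servers per year
expected_underwater_failures = servers * underwater_failure_rate  # ~8.6 servers per year

print(f"Expected failures on land:    {expected_land_failures:.1f}")
print(f"Expected failures underwater: {expected_underwater_failures:.1f}")
```

Over a 5-year maintenance-free deployment, that difference compounds, which is why server longevity is central to the underwater model.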

Microsoft’s Underwater Data Center – Project Natick

Microsoft’s research experiment to build an underwater data center and place servers in the ocean is called Project Natick. To-date, Microsoft’s Project Natick has successfully completed Phase 1 and Phase 2 testing, while a forthcoming Phase 3 has been speculated.

Project Natick   Phase 1        Phase 2          Phase 3
Launched         August 2015    June 2018        Future
Duration         105 days       2 years          5 years
Length           10 ft (3 m)    40 ft (12.2 m)   <300 ft (<91.5 m)
Depth            Shallow        117 ft (36 m)    >131 ft (>40 m)
Note: Phase 3 of Project Natick represents a combination of reports and assumptions.

Below is further detail on each of the phases of Microsoft’s Project Natick:

Phase 1 – Microsoft’s Project Natick

Microsoft’s Project Natick Phase 1 was a proof-of-concept prototype underwater data center that was launched in August 2015. Phase 1 of Project Natick was placed onto the seafloor in calm, shallow waters, located approximately 0.6 miles (1 kilometer) off the Pacific Ocean coast of Avila Beach, which is near San Luis Obispo, California, United States. This 10-foot (3-meter) by 7-foot (2.1-meter) and 38,000-pound oceanic data center was operated for a 105-day period, until November 2015.

Phase 1 of Microsoft’s Project Natick comprised an underwater data center loaded with 1 standard 42U rack, containing 24 servers. The servers occupied ~1/3rd of the space of the rack, with the other ~2/3rds being filled with “load trays” for the purposes of generating heat – to effectively test the underwater data center’s cooling system.

Why did Microsoft put a Data Center Underwater?

Microsoft put a data center underwater to demonstrate its ability to deploy, operate (with no personnel on-site), and cool a submerged oceanic facility for an extended period of time.

Phase 2 – Microsoft’s Project Natick

Microsoft’s Project Natick Phase 2 was an underwater data center that was deployed for a period of two years, from June 2018 to July 2020. Phase 2 of Project Natick was placed onto the seafloor in the Northern Isles, and was specifically located at the European Marine Energy Centre (EMEC) in the Orkney Islands, Scotland, United Kingdom.

Microsoft’s Project Natick Phase 2 – Location Map

Source: Microsoft.

This shipping container-sized underwater data center was manufactured by Naval Group, a French naval defense company, with the following components and dimensions:

  • Pressure Vessel: steel cylinder with dimensions of 40 feet (12.2 meters) in length, 9.2 feet (2.8 meters) in diameter or 10.4 feet (3.2 meters) including external components. As such, the pressure vessel has approximately the same dimensions as an ISO shipping container. This design was deliberate to ensure the underwater data center could be transported utilizing existing logistics supply chains
  • Subsea Docking Structure: ballast-filled triangular base with dimensions of 47 feet (14.3 meters) in length and 41.7 feet (12.7 meters) in width. This subsea docking structure resided on the seabed and was attached to the pressure vessel

Microsoft’s Project Natick Phase 2 – Underwater Data Center

Source: Microsoft.

Phase 2 of Microsoft’s Project Natick was placed 117 feet (36 meters) deep, onto the rock slab seafloor. The facility comprised an underwater data center loaded with 12 racks, containing 864 standard servers with FPGA acceleration. Each of the 864 servers had 32 terabytes of disk, equating to 27.6 petabytes of total disk.
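The total disk figure follows directly from the server count, as this quick check shows (using decimal units, i.e., 1 PB = 1,000 TB):

```python
servers = 864
disk_tb_per_server = 32

total_tb = servers * disk_tb_per_server  # 27,648 TB
total_pb = total_tb / 1_000              # 27.6 PB in decimal units (1 PB = 1,000 TB)

print(f"Total disk: {total_tb:,} TB = {total_pb:.1f} PB")  # Total disk: 27,648 TB = 27.6 PB
```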

In terms of electrical power consumption, Phase 2 of Microsoft’s Project Natick required 240 kilowatts (kW), meaning just under a quarter of a megawatt of power, when operating at full capacity. This power was sourced from 100% locally-produced renewable electricity, including on-shore wind and solar, as well as off-shore tidal and wave energy.

Microsoft’s Project Natick Phase 2 – Underwater Data Center Power
Source: Microsoft.

Regarding cooling, Phase 2 of Microsoft’s Project Natick used an air-to-liquid heat exchange process. This system piped seawater directly through radiators on the back of each of the underwater data center’s 12 server racks and back out into the ocean.

Finally, the internal operating environment of Phase 2 of Project Natick was 100% dry nitrogen at 1 atmosphere of pressure.

Why are Microsoft’s Data Centers Underwater?

Microsoft’s Project Natick Phase 2 sought to determine whether it was economically feasible to manufacture full-scale underwater data center modules and deploy them in under 90 days.

In addition, for a period of two years, Microsoft was able to test and monitor the performance and reliability of the underwater data center’s servers. For example, Microsoft monitored metrics including power consumption, temperature, internal humidity levels, fan speed, sound, and speed of the current.

Phase 3 – Microsoft’s Project Natick

Microsoft’s future Project Natick Phase 3 has been described as a “pilot”. Specifically, Microsoft would build an underwater data center at a “larger scale” for Phase 3 of Project Natick, which “might be multiple vessels” and “might be a different deployment technology” than Phase 2.

In any commercial deployment, Phase 3 of Microsoft’s Project Natick would be placed at a depth greater than 117 feet (36 meters) – which was the depth at which Phase 2 was deployed.

To-date, speculation has arisen that for Phase 3, a long steel frame, spanning less than 300 feet (91.5 meters), could hold 12 underwater data center cylinders, similar in size to the cylinders used in Phase 2.

Illustrative Depiction of Microsoft’s Project Natick Phase 3

Source: Microsoft.

Assuming each underwater data center cylinder was loaded with 12 racks, the Phase 3 steel frame could carry a total of 144 racks. Additionally, assuming Phase 2’s ratio of 72 servers per rack, Phase 3 of Project Natick would be able to support a total of 10,368 servers.
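That scaling arithmetic can be laid out as a short sketch. The cylinder count and per-cylinder rack loading are the speculative Phase 3 assumptions described above, not confirmed specifications.

```python
# Known Phase 2 figures
racks_phase2 = 12
servers_phase2 = 864
servers_per_rack = servers_phase2 // racks_phase2  # 72 servers per rack

# Speculative Phase 3 assumptions (per the reported steel-frame design)
cylinders = 12
racks_per_cylinder = racks_phase2  # assume each cylinder matches Phase 2's loading

total_racks = cylinders * racks_per_cylinder    # 144 racks
total_servers = total_racks * servers_per_rack  # 10,368 servers

print(f"Phase 3 capacity: {total_racks} racks, {total_servers:,} servers")
```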

Additional Companies with Underwater Data Centers

Underwater data centers and servers have been tested by China-headquartered Beijing Highlander Digital Technology and U.S.-based Subsea Cloud. Speculation has also arisen that Amazon Web Services (AWS), Google, and Facebook (Meta Platforms) could be conducting their own research on underwater data centers and servers.

Beijing Highlander Digital Technology

In early 2021, Beijing Highlander Digital Technology launched a prototype underwater data center carrying four racks in the city of Zhuhai, which resides in China’s southern Guangdong province.

Subsequently, in mid-2021, Beijing Highlander Digital Technology announced plans to build 100 underwater data center modules, by 2025, at a cost of RMB5.6 billion ($880 million). These underwater data center modules will be deployed in the city of Sanya, which resides on the southern portion of China’s Hainan Island.

Beijing Highlander Digital Technology – Underwater Data Centers in Sanya

Source: China Daily.

For Beijing Highlander’s latest project, the company is partnering with Offshore Oil Engineering Co (COOEC), an engineering contractor, and Beijing Sinnet Technology Co, a carrier-neutral data center operator in China, which operates Amazon Web Services (AWS) cloud products and services in the Beijing region of China.

READ MORE: Top 10 Cloud Service Providers Globally in 2023

Subsea Cloud

Subsea Cloud is a Los Angeles, California-based start-up focused on deploying underwater data centers and servers. Presently, Subsea Cloud plans to launch the following three underwater data centers:

  1. Jules Verne: underwater data center deployment near Port Angeles, Washington, with similar size and dimensions to a 20-foot (6-meter) shipping container. This underwater data center will be loaded with 16 racks, containing about 800 servers and placed in shallow waters, at a depth of 30 feet (9.1 meters)
  2. Njord01: underwater data center deployment in the Gulf of Mexico, which will be placed at a depth of 700 to 900 feet (213 to 274 meters)
  3. Manannán: underwater data center deployment in the North Sea (Europe), which will be placed at a depth of 600 to 700 feet (183 to 213 meters)

Initially, Subsea Cloud expects to make the Jules Verne underwater data center commercially available before the end of 2022.

Mary Zhang covers Data Centers for Dgtl Infra, including Equinix (NASDAQ: EQIX), Digital Realty (NYSE: DLR), CyrusOne, CoreSite Realty, QTS Realty, Switch Inc, Iron Mountain (NYSE: IRM), Cyxtera (NASDAQ: CYXT), and many more. Within Data Centers, Mary focuses on the sub-sectors of hyperscale, enterprise / colocation, cloud service providers, and edge computing. Mary has over 5 years of experience in research and writing for Data Centers.

