Underwater data centers and servers provide a glimpse into the future of a rapidly evolving industry where submerged oceanic facilities help support and connect underserved communities around the world.
There are major underwater data center and underwater server initiatives ongoing across the globe, with the most notable of these developments being Microsoft’s Project Natick. By putting the “cloud” into the “ocean”, services like compute, storage, and networking can help democratize innovation.
Dgtl Infra offers a comprehensive analysis of underwater data centers and servers, explaining the reasons behind the emergence of this innovative infrastructure. Our review includes an examination of Microsoft’s Project Natick, detailing its past achievements in Phase 1 and Phase 2, and explores the possibilities of a forthcoming Phase 3. Additionally, Dgtl Infra showcases ongoing initiatives in underwater data center and server projects from lesser-known companies, each employing unique deployment strategies.
What are Underwater Data Centers?
Underwater data centers are submerged facilities equipped with power and cooling infrastructure that house computer servers and related IT equipment. These facilities aim to increase sustainability and efficiency in computing operations, while also exploring new frontiers in data storage and processing technology.
Are there Data Centers Underwater?
Since 2015, several data centers have been submerged underwater in both the Pacific Ocean and Atlantic Ocean. The first underwater data center was deployed by Microsoft into the Pacific Ocean, off the coast of California, through an experiment called Project Natick, with Phase 1 being a vessel carrying 1 rack, containing 24 servers.
Following Microsoft’s initial proof-of-concept tests, oceanic data centers have grown in scale, with Phase 2 of Project Natick being a shipping container-sized data center carrying 12 racks, containing 864 servers.
Microsoft’s Project Natick Phase 2 – Underwater Data Center
Additionally, underwater data center prototypes and tests have been conducted by China-headquartered Beijing Highlander Digital Technology and Los Angeles, California-based Subsea Cloud. Speculation has also arisen that Amazon Web Services (AWS), Google, and Facebook (Meta Platforms) could be conducting their own underwater data center research.
Why are Data Centers Underwater?
Data centers are being placed underwater to realize benefits in cooling, latency, time to market, reliability, and sustainability. Below are further details on the five reasons why these oceanic data centers are being submerged underwater:
1. Cooling
The primary advantage of underwater data centers lies in their cooling capabilities. Oceans offer a naturally cold environment that efficiently dissipates the heat generated by server-hosting facilities. Furthermore, this form of cooling incurs virtually no additional cost.
Cooling is a critical aspect of data center operations, often accounting for a substantial portion of their operating expenses. Consequently, the ability of underwater data centers to minimize cooling costs gives them a financial edge over traditional, land-based data centers.
As an example, Microsoft notes that its most recent underwater data center, which was submerged 117 feet (36 meters) below sea level, experienced temperatures about 18 degrees Fahrenheit (10 degrees Celsius) cooler than land-based data centers. As measured using energy efficiency metrics, Microsoft’s underwater computing facility attained a power usage effectiveness (PUE) of 1.07, whereas the company’s newly-constructed land-based data centers produce a PUE of about 1.125.
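As a back-of-the-envelope sketch (not Microsoft’s measurement methodology), PUE is simply total facility power divided by IT equipment power, so the 1.07 versus 1.125 figures translate into non-IT overhead as follows:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def overhead_pct(pue_value: float) -> float:
    """Non-IT overhead (cooling, power distribution) as a percent of IT load."""
    return (pue_value - 1.0) * 100

print(f"Underwater (PUE 1.07): {overhead_pct(1.07):.1f}% overhead")    # 7.0%
print(f"Land-based (PUE 1.125): {overhead_pct(1.125):.1f}% overhead")  # 12.5%
```

In other words, for every kilowatt of IT load, the underwater facility spends roughly 70 watts on everything else, versus about 125 watts on land.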
2. Latency
Underwater data centers provide a solution for low-latency connectivity, meaning reducing the time it takes for data to travel between its source and destination. In particular, oceanic data centers can deliver low-latency connectivity to coastal populations, which is important, as more than 50% of the world’s population lives within 120 miles (200 kilometers) of the coast.
By placing underwater data centers in close proximity to a large proportion of the world’s population, faster and smoother internet browsing, video streaming, gaming, and cloud services can be brought to underserved communities. As such, underwater facilities could become an important edge computing tool for cloud service providers including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.
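To see why proximity matters, note that a signal in optical fiber travels at roughly the speed of light divided by the fiber’s refractive index. The sketch below uses a typical single-mode fiber index of 1.47, which is an assumption rather than a figure from the article:

```python
SPEED_OF_LIGHT_KM_S = 299_792.458  # speed of light in vacuum, km/s
FIBER_REFRACTIVE_INDEX = 1.47      # typical single-mode fiber (assumption)

def one_way_fiber_latency_ms(distance_km: float) -> float:
    """One-way propagation delay through optical fiber, in milliseconds."""
    signal_speed_km_s = SPEED_OF_LIGHT_KM_S / FIBER_REFRACTIVE_INDEX
    return distance_km / signal_speed_km_s * 1000

# A coastal user 200 km (about 120 miles) from an offshore data center
print(f"{one_way_fiber_latency_ms(200):.2f} ms one-way")  # roughly 1 ms
```

At that distance, propagation delay is on the order of a millisecond, which is why siting compute near coastal populations pays off for latency-sensitive workloads.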
3. Time to Market
Generally, underwater data centers are built as pre-fabricated and standardized modules, which enables quick construction and delivery times. For example, Microsoft’s underwater computing initiative targets a deployment timeframe of “less than 90 days from factory to operation”.
Ultimately, the goal with underwater data centers is to deploy these facilities faster than land-based data centers. On land, data center “construction” requires permits and adaptation to various physical environments. Whereas underwater data centers involve more of a “manufacturing” process, which aims to produce modules at-scale for deployment in very similar oceanic conditions.
4. Reliability
Underwater data centers offer a high level of reliability and predictable performance. This is because they are pre-fabricated modules, constructed to precise specifications in a controlled factory environment. Consequently, these oceanic data centers can function autonomously, without any on-site personnel, and do not require maintenance for up to 5 years.
As Microsoft’s underwater data center project highlighted, after each 5-year deployment cycle, the data center vessel would be retrieved, reloaded with new servers, and then redeployed. Overall, this process could be repeated for a total of 4 deployments over a 20-year lifespan. Subsequently, the underwater facility would be decommissioned and recycled.
Also, the servers within an underwater data center exhibit greater longevity, a proxy for reliability – which is discussed in further detail in the next section.
5. Sustainability
Underwater data centers enable operators to meet their sustainability requirements because these facilities can be co-located with offshore renewable energy sources that produce no greenhouse gas emissions. For example, renewable energy sources for underwater data centers include offshore wind, solar, tidal, and wave power. By not connecting to the electrical grid, these oceanic data centers can reduce strain on local power networks.
Additionally, underwater data centers consume no water for cooling or any other operational purpose. As such, these facilities do not strain fresh water resources that are essential to people and the environment. As measured using water sustainability metrics, underwater data centers operate with a “perfect” water usage effectiveness (WUE) of exactly zero. In comparison, land-based data centers consume up to 4.8 liters of water per kilowatt-hour (kWh).
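The WUE comparison can be made concrete with a quick calculation. WUE is liters of water consumed per kilowatt-hour of IT energy; the 240 kW IT load below is borrowed from Project Natick Phase 2 purely as an illustrative figure:

```python
def annual_water_use_liters(wue_l_per_kwh: float, it_load_kw: float) -> float:
    """Yearly cooling water consumption: WUE (L/kWh) x IT energy (kWh/year)."""
    hours_per_year = 8760
    return wue_l_per_kwh * it_load_kw * hours_per_year

# Hypothetical 240 kW IT load running year-round
land = annual_water_use_liters(4.8, 240)        # worst-case land-based WUE
underwater = annual_water_use_liters(0.0, 240)  # WUE of exactly zero

print(f"Land-based: {land:,.0f} L/year")   # ~10.1 million liters
print(f"Underwater: {underwater:,.0f} L/year")  # 0 liters
```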
What are Underwater Servers?
Underwater servers are submerged computing devices with data processing and storage components designed to house and operate applications, websites, and content, which ultimately connect to a network.
Are there Underwater Servers?
In total, thousands of servers, housed in underwater data centers, have been submerged into the Pacific Ocean and Atlantic Ocean.
Microsoft’s Project Natick Phase 2 – Underwater Servers
Why are Servers Underwater?
Servers are being placed underwater in sealed containers on the ocean floor because it allows for greater server longevity, a proxy for reliability. Underwater server reliability is driven by four primary factors:
- Atmosphere: Underwater data center containers provide an atmosphere of dry nitrogen, meaning there is no oxygen – this is important because nitrogen is inert, whereas oxygen promotes corrosion. In turn, a benign oxygen-free environment allows servers to last much longer
- Humidity: Low humidity in an underwater environment helps reduce the risk of corrosion, as a result of excessive condensation, and extends the life expectancy of servers
- Temperature Fluctuations: Consistent ocean temperatures reduce the risk of significant increases or decreases in ambient temperatures resulting from HVAC / cooling system issues or failures. Large temperature fluctuations can cause servers and networking equipment to expand and contract, which contributes to equipment failure
- Absence of People: Underwater data centers operate with no personnel on-site, as opposed to land-based data centers which utilize facility management staff and technical engineers. By having no personnel on-site, these facilities eliminate bumps and jostles to the facility’s servers from people who replace broken components
As a reference point, Microsoft states that the servers in its latest underwater data center are 8 times more reliable than those on land. Said differently, Microsoft’s underwater servers showed a failure rate at 1/8th of what the company experiences in land-based server deployments.
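To illustrate what an 8x reliability gain means in practice, the sketch below applies a hypothetical 2% annual failure rate (an assumed figure, not one Microsoft has published) to a Phase 2-sized fleet of 864 servers:

```python
def expected_failures(server_count: int, annual_failure_rate: float) -> float:
    """Expected annual server failures under a given per-server failure rate."""
    return server_count * annual_failure_rate

land_rate = 0.02               # hypothetical 2% annual failure rate on land
underwater_rate = land_rate / 8  # 1/8th of the land-based rate

print(expected_failures(864, land_rate))        # 17.28 failures/year on land
print(expected_failures(864, underwater_rate))  # 2.16 failures/year underwater
```

For an unmanned vessel sealed for 5 years, cutting expected failures from roughly 17 to 2 per year is the difference between losing a meaningful fraction of capacity and losing almost none.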
Microsoft’s Underwater Data Center – Project Natick
Microsoft’s research experiment to build an underwater data center and place servers in the ocean is called Project Natick. To date, Microsoft’s Project Natick has successfully completed Phase 1 and Phase 2 testing, while a forthcoming Phase 3 has been speculated.
| Phase | Vessel Length | Deployment Depth |
| --- | --- | --- |
| Phase 1 | 10 ft (3 m) | – |
| Phase 2 | 40 ft (12.2 m) | 117 ft (36 m) |
| Phase 3 (speculated) | <300 ft (<91.5 m) | >131 ft (>40 m) |
Below is further detail on each of the phases of Microsoft’s Project Natick:
Phase 1 – Microsoft’s Project Natick
Microsoft’s Project Natick Phase 1 was a proof-of-concept prototype underwater data center that was launched in August 2015. Phase 1 of Project Natick was placed onto the seafloor in calm, shallow waters, located approximately 0.6 miles (1 kilometer) off the Pacific Ocean coast of Avila Beach, which is near San Luis Obispo, California, United States. This 10-foot (3-meter) by 7-foot (2.1-meter) and 38,000-pound oceanic data center was operated for a 105-day period, until November 2015.
Phase 1 of Microsoft’s Project Natick comprised an underwater data center loaded with 1 standard 42U rack, containing 24 servers. The servers occupied approximately one-third of the rack’s space, with the remaining two-thirds filled with “load trays” that generated heat – to effectively test the underwater data center’s cooling system.
Why did Microsoft put a Data Center Underwater?
Microsoft put a data center underwater to demonstrate its ability to deploy, operate (with no personnel on-site), and cool a submerged oceanic facility for an extended period of time.
Phase 2 – Microsoft’s Project Natick
Microsoft’s Project Natick Phase 2 was an underwater data center that was deployed for a period of two years, from June 2018 to July 2020. Phase 2 of Project Natick was placed onto the seafloor in the Northern Isles, and was specifically located at the European Marine Energy Centre (EMEC) in the Orkney Islands, Scotland, United Kingdom.
Microsoft’s Project Natick Phase 2 – Location Map
This shipping container-sized underwater data center was manufactured by Naval Group, a French naval defense company, with the following components and dimensions:
- Pressure Vessel: Steel cylinder with dimensions of 40 feet (12.2 meters) in length, 9.2 feet (2.8 meters) in diameter or 10.4 feet (3.2 meters) including external components. As such, the pressure vessel has approximately the same dimensions as an ISO shipping container. This design was deliberate to ensure the underwater data center could be transported utilizing existing logistics supply chains
- Subsea Docking Structure: Ballast-filled triangular base with dimensions of 47 feet (14.3 meters) in length and 41.7 feet (12.7 meters) in width. This subsea docking structure resided on the seabed and was attached to the pressure vessel
Microsoft’s Project Natick Phase 2 – Underwater Data Center
Phase 2 of Microsoft’s Project Natick was placed 117 feet (36 meters) deep onto the rock slab seafloor. The facility comprised an underwater data center loaded with 12 racks, containing 864 standard servers with field programmable gate array (FPGA) acceleration. Each of the 864 servers had 32 terabytes of disk storage, equating to 27.6 petabytes of total disk storage.
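The total disk storage figure follows directly from the per-server specification:

```python
servers = 864
disk_per_server_tb = 32

total_tb = servers * disk_per_server_tb
total_pb = total_tb / 1000  # decimal units: 1 PB = 1,000 TB

print(f"{total_tb:,} TB = {total_pb} PB")  # 27,648 TB = 27.648 PB
```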
In terms of electrical power consumption, Phase 2 of Microsoft’s Project Natick required 240 kilowatts (kW), meaning just under a quarter of a megawatt of power, when operating at full capacity. This power was sourced from 100% locally-produced renewable electricity, including on-shore wind and solar, as well as off-shore tidal and wave energy.
Regarding cooling, Phase 2 of Microsoft’s Project Natick used an air-to-liquid heat exchange process. This system piped seawater directly through radiators on the back of each of the underwater data center’s 12 server racks and back out into the ocean.
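Naval Group’s exact thermal design has not been published, but a textbook heat-balance sketch gives a feel for the seawater flow involved. Both the 10 °C seawater temperature rise and the specific heat value below are assumptions for illustration:

```python
def seawater_flow_kg_s(heat_load_kw: float, delta_t_c: float,
                       cp_kj_per_kg_c: float = 3.99) -> float:
    """Seawater mass flow needed to absorb a heat load: m = Q / (cp * dT).

    cp of ~3.99 kJ/(kg*C) is a typical value for seawater (assumption).
    """
    return heat_load_kw / (cp_kj_per_kg_c * delta_t_c)

# Phase 2's full-capacity 240 kW heat load, assumed 10 C seawater temperature rise
print(f"{seawater_flow_kg_s(240, 10):.1f} kg/s of seawater")  # about 6.0 kg/s
```

A flow of a few kilograms per second is modest by marine-engineering standards, which helps explain why ocean cooling is so cheap relative to mechanical chillers.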
Finally, the internal operating environment of Phase 2 of Project Natick was 100% dry nitrogen at 1 atmosphere of pressure.
Why are Microsoft’s Data Centers Underwater?
Microsoft’s Project Natick Phase 2 aimed to evaluate the economic feasibility of manufacturing and deploying full-scale underwater data center modules within a 90-day timeframe.
In addition, for a period of two years, Microsoft was able to test and monitor the performance and reliability of the underwater data center’s servers. For example, Microsoft monitored metrics including power consumption, temperature, internal humidity levels, fan speed, sound, and speed of the current.
Phase 3 – Microsoft’s Project Natick
Microsoft’s future Project Natick Phase 3 has been described as a “pilot”. Specifically, Microsoft would build an underwater data center at a “larger scale” for Phase 3 of Project Natick, which “might be multiple vessels” and “might be a different deployment technology” than Phase 2.
In any commercial deployment, Phase 3 of Microsoft’s Project Natick would be placed at a depth greater than 117 feet (36 meters) – which was the depth at which Phase 2 was deployed.
To date, speculation has arisen that for Phase 3, a long steel frame, spanning less than 300 feet (91.5 meters), could hold 12 underwater data center cylinders, similar in size to the cylinders used in Phase 2.
Depiction of Microsoft’s Project Natick Phase 3
Assuming each underwater data center cylinder was loaded with 12 racks, the Phase 3 steel frame could carry a total of 144 racks. Additionally, assuming Phase 2’s ratio of 72 servers per rack, Phase 3 of Project Natick would be able to support a total of 10,368 servers.
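The arithmetic behind these speculated Phase 3 totals is straightforward:

```python
cylinders = 12           # speculated cylinders on the Phase 3 steel frame
racks_per_cylinder = 12  # matching Phase 2's loading per vessel
servers_per_rack = 72    # Phase 2 ratio: 864 servers / 12 racks

total_racks = cylinders * racks_per_cylinder
total_servers = total_racks * servers_per_rack

print(total_racks)    # 144
print(total_servers)  # 10368
```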
Additional Companies with Underwater Data Centers
Underwater data centers and servers have been tested by China-headquartered Beijing Highlander Digital Technology and U.S.-based Subsea Cloud. Speculation has also arisen that Amazon Web Services (AWS), Google, and Facebook (Meta Platforms) could be conducting their own research on underwater data centers and servers.
Beijing Highlander Digital Technology
In early 2021, Beijing Highlander Digital Technology launched a prototype of an underwater data center carrying four racks in the city of Zhuhai, which resides in China’s southern Guangdong province.
Subsequently, in mid-2021, Beijing Highlander announced its intention to build 100 underwater data center modules, by 2025, at a cost of RMB5.6 billion ($880 million). These modules are set to be located in the city of Sanya, which resides on the southern portion of China’s Hainan Island.
Beijing Highlander – Underwater Data Centers in Sanya
By December 2022, Beijing Highlander had completed the construction of its first commercial underwater data center near Hainan Island. Following this, the company began preparing the facility for operational loads.
Subsequently, in November 2023, Beijing Highlander launched its inaugural commercial underwater data center near Hainan Island, China. This facility, with a weight of 1,433 tons (1,300 tonnes), was installed by HiCloud and is submerged 115 feet (35 meters) beneath the ocean’s surface on the seabed, utilizing seawater for its cooling system.
Remarkably, the data center can process over four million high-definition images within a span of 30 seconds, a task that would require the combined effort of 60,000 traditional computers.
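Taking the reported figures at face value, the implied processing rates work out as follows (a simple division, not a benchmark of the actual facility):

```python
images = 4_000_000
seconds = 30
traditional_computers = 60_000

facility_rate = images / seconds  # images per second for the whole facility
per_computer_rate = facility_rate / traditional_computers

print(f"Facility: {facility_rate:,.0f} images/s")  # 133,333 images/s
print(f"Equivalent: {per_computer_rate:.2f} images/s per traditional computer")
```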
The construction of Beijing Highlander’s underwater data center was carried out by Offshore Oil Engineering Co (COOEC), an engineering contracting firm. Additionally, Beijing Highlander further benefitted from a collaboration with Beijing Sinnet Technology Co (Sinnet), a leading carrier-neutral data center operator in China. Notably, Sinnet is responsible for operating Amazon Web Services (AWS) cloud products and services in the Beijing region.
To date, notable customers like China Telecom and SenseTime, a Hong Kong-based AI software company, have placed orders with Beijing Highlander for underwater data centers.
Subsea Cloud
Subsea Cloud is a Los Angeles, California-based start-up focused on deploying underwater data centers and servers. Initially, Subsea Cloud planned to launch the following three underwater data centers:
- Jules Verne: Deployment near Port Angeles, Washington, with similar size and dimensions to a 20-foot (6-meter) shipping container. This underwater data center will be loaded with 16 racks, containing about 800 servers and placed in shallow waters, at a depth of 30 feet (9.1 meters)
- Njord01: Deployment in the Gulf of Mexico, which will be placed at a depth of 700 to 900 feet (213 to 274 meters)
- Manannán: Deployment in the North Sea (Europe), which will be placed at a depth of 600 to 700 feet (183 to 213 meters)