CoreWeave, the leader among a new generation of specialized cloud providers focused on serving AI workloads at massive scale, has received over $12 billion in funding to build data centers in various global locations. The company specializes in providing its customers with access to cutting-edge NVIDIA GPU resources, enabling them to efficiently train and deploy complex AI models and handle compute-intensive workloads.

CoreWeave plans to have a data center portfolio spanning 28 facilities by the end of 2024, including locations in Weehawken, New Jersey; Chicago, Illinois; Las Vegas, Nevada; Plano, Texas; Austin, Texas; Chester, Virginia; Hillsboro, Oregon; Douglasville, Georgia; London, United Kingdom, and more.

Dgtl Infra provides a comprehensive overview of CoreWeave, highlighting its GPU-as-a-Service solutions that empower AI innovators worldwide. From CoreWeave’s strategically located data centers to its array of GPU offerings, customers, and competitive positioning, this detailed analysis provides valuable insights into the company’s operations, market presence, and financial backing.

CoreWeave – Company Overview

CoreWeave is a specialized cloud infrastructure provider for large-scale GPU-accelerated workloads. Founded in 2017 and headquartered in Roseland, New Jersey, the company currently employs over 400 people.

[Image: CoreWeave’s cloud infrastructure connects high-performance compute for AI, machine learning, and VFX workloads.]

CoreWeave belongs to a new category of companies offering GPU as a Service (GPUaaS) or Artificial Intelligence as a Service (AIaaS). It serves AI workloads by providing access to hundreds of thousands of the latest-generation GPUs across its cloud, sourced through strategic partnerships with hardware vendors like NVIDIA.

CoreWeave operates within the infrastructure layer of the AI technology stack, distinguishing itself from top cloud service providers (CSPs) like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). The company provides specialized cloud solutions with a Kubernetes-native cloud, offering products such as GPU compute, CPU compute, containers, virtual servers, cloud storage, and high-performance networking.

[Image: CoreWeave’s Kubernetes-native cloud, spanning networking, storage, and GPU/CPU compute infrastructure. Source: CoreWeave.]

CoreWeave delivers cloud solutions for compute-intensive use cases, including machine learning, AI, visual effects (VFX), rendering, batch processing, real-time pixel streaming, life sciences, drug discovery, and the metaverse. Its infrastructure supports both the training phase – where models learn from large datasets – and the inference phase – where models generate predictions based on user input.

CoreWeave states that its compute solutions are up to 35 times faster and 80% less expensive than large, generalized public clouds, and that its inference service performs 8 to 10 times faster than those of major generalized cloud providers. The company’s specialized networking fabric, built to reduce latency and boost chip utilization, can deliver up to a 50% reduction in latency. Additionally, CoreWeave provides value-added software and technical resources.

Data Centers

CoreWeave’s operational data center portfolio consists of 14 facilities across the United States, including locations in New Jersey, Illinois, Nevada, Texas, Virginia, Oregon, and Georgia.

[Image: IT technician working on a server within CoreWeave’s GPU cloud infrastructure. Source: CoreWeave.]

The company plans to double its number of data centers to 28 by the end of 2024, expanding further within the U.S. and into Europe, specifically in London, UK, and Barcelona, Spain.


Leadership Team

CoreWeave is led by a management team that includes Michael Intrator (Chief Executive Officer), Nitin Agrawal (Chief Financial Officer), Brannin McBee (Chief Development Officer), Brian Venturo (Chief Strategy Officer), Peter Salanki (Chief Technology Officer), and Mike Mattacola (Chief Business Officer).

Financial and Market Projections

CoreWeave has significantly expanded its contracted business, growing from an estimated $500 million of revenue in 2023 to $7 billion in signed contracts extending through 2026.

Over the past 12 months, the company has raised more than $12 billion in equity and debt funding. Multiple sources report that CoreWeave was valued at $19 billion in its latest Series C investment round.

According to IDC, CoreWeave’s total addressable market (TAM) in AI is expected to reach $160 billion by 2027.

Data Center Regions and Locations – CoreWeave

CoreWeave’s data center regions house high-performance GPU clusters, giving over 51 million people low-latency access to cloud infrastructure for AI workloads.

CoreWeave currently operates 14 data centers across the U.S. and aims to expand to 28 facilities by the end of 2024, with additional plans for growth in the U.S. and Europe.

Data Center Location | Operator | Capacity / Role
Weehawken, New Jersey | — | Serves Eastern U.S.
Chicago, Illinois | — | Serves Central U.S.
Las Vegas, Nevada | Switch, Inc | Serves Western U.S.
Plano, Texas | Lincoln Rackhouse | 30 MW, 454,421 sqft
Austin, Texas | Core Scientific | 16 MW, 118,000 sqft
Chester, Virginia | Chirisa Technology Parks | 28 MW, 250,000 sqft
Hillsboro, Oregon | Flexential | 9 MW
Douglasville, Georgia | Flexential | 9 MW
London, United Kingdom | — | Two UK data centers
Barcelona, Spain | EdgeConneX | Colocation

Data Center Regions – CoreWeave Cloud

CoreWeave Cloud currently operates three data center regions in the United States: US East, US Central, and US West. These geographically diverse regions are supported by data centers located in Weehawken, New Jersey; Chicago, Illinois; and Las Vegas, Nevada, respectively.

[Image: CoreWeave’s data center regions: LGA1 (New York City area), ORD1 (Chicago), and LAS1 (Las Vegas). Source: CoreWeave.]

US East – Weehawken, New Jersey

CoreWeave’s US East (LGA1) data center region is located in Weehawken, New Jersey, serving the Eastern U.S. with ultra-low latency to over 20 million residents across the New York City metropolitan area, including sub-1 millisecond latency to Manhattan.

US Central – Chicago, Illinois

CoreWeave’s US Central (ORD1) data center region is situated just outside downtown Chicago, Illinois, serving the central United States.

US West – Las Vegas, Nevada

CoreWeave’s US West (LAS1) data center region is located at 5605 Badura Avenue in Las Vegas, Nevada, serving the Western U.S. The GPU as a Service (GPUaaS) provider has secured colocation capacity at this location, which is Switch, Inc’s data center, known as the Core Campus.

[Image: Switch, Inc’s Core Campus on Badura Avenue in Las Vegas, Nevada, home to CoreWeave’s LAS1 data center region. Source: Switch, Inc.]

Network Infrastructure

Each of CoreWeave’s data center regions provides redundant public Internet connectivity exceeding 200 Gbps (gigabits per second) from Tier 1 global carriers. Additionally, these data center regions are interconnected with over 400 Gbps of dark fiber transport, enabling easy and free data transfers within CoreWeave Cloud.
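To put these link capacities in context, here is a back-of-the-envelope sketch of transfer times. The 10 TB dataset size is an illustrative assumption, not a figure from CoreWeave:

```python
# Illustrative transfer-time arithmetic for the stated link capacities.
# The 10 TB dataset size is a hypothetical example, not a figure from the article.

def transfer_time_seconds(data_terabytes: float, link_gbps: float) -> float:
    """Time to move `data_terabytes` over a `link_gbps` link at full utilization."""
    bits = data_terabytes * 1e12 * 8          # terabytes -> bits
    return bits / (link_gbps * 1e9)           # bits / (bits per second)

dataset_tb = 10  # hypothetical training dataset
print(f"Over 200 Gbps public Internet: {transfer_time_seconds(dataset_tb, 200) / 60:.1f} minutes")
print(f"Over 400 Gbps dark fiber:      {transfer_time_seconds(dataset_tb, 400) / 60:.1f} minutes")
```

Real-world transfers would run below full line rate, so these figures are lower bounds.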

Data Center Locations – CoreWeave

To accommodate its growth, CoreWeave has been actively leasing or building data centers in additional locations across the United States, including Texas, Virginia, Oregon, and Georgia. The company has also expanded internationally, establishing a presence in London, United Kingdom, and Barcelona, Spain.

Plano, Texas

CoreWeave opened a new $1.6 billion, 454,421-square-foot data center facility in Plano, Texas, which became operational at the end of 2023. The facility is situated on a 23.8-acre campus and is located within Lincoln Property Company’s Lincoln Rackhouse building at 1000 Coit Road in Plano, Texas.

The data center features four 14,000-square-foot data halls, with an additional 50,000 square feet of powered shell space available for future expansion.

[Image: CoreWeave’s Plano, Texas data center within the Lincoln Rackhouse building at 1000 Coit Road. Source: CoreWeave.]

According to filings with the District Court of Dallas County, Texas, the 1000 Coit Road data center includes the following provisions:

  • CoreWeave has the potential to utilize up to 30 megawatts (MW) of critical IT load at Lincoln Rackhouse’s data center, structured in multiple phases to accommodate necessary electrical and mechanical upgrades
  • Lincoln Rackhouse estimated it would need to make capital expenditures of approximately $90 million to support the 30 MW critical IT load
  • The initial phase of CoreWeave’s deployment involved 12 MW
  • Lincoln and CoreWeave executed a 6-year Master Services Agreement (MSA) – not a lease – agreeing to a base rent of $75 per kilowatt (kW) per month
  • Under the MSA, CoreWeave agreed to pay Lincoln over $260 million for the initial 30 MW installation
  • Lincoln and CoreWeave also included two 2-year renewal options, allowing for a total possible contract length of 10 years. The renewal options were estimated at an additional $180.4 million in payments
  • Tax incentives and abatements with the City of Plano resulted in roughly $141 million in tax benefits for both CoreWeave and Lincoln
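The base-rent economics in these filings can be sketched with simple arithmetic. Note this covers base rent only; the $260 million total cited above presumably includes power and other charges beyond the $75 per kW per month base rate:

```python
# Sketch of the Plano MSA base-rent arithmetic ($75 per kW per month, 6-year term).
# Base rent alone understates the $260 million total cited in the filings, which
# would also reflect power and other contractual charges.

RATE_PER_KW_MONTH = 75        # USD, from the court filings
TERM_MONTHS = 6 * 12          # 6-year Master Services Agreement

def base_rent(mw: float, months: int = TERM_MONTHS) -> float:
    """Total base rent in USD for `mw` megawatts of critical IT load."""
    return mw * 1000 * RATE_PER_KW_MONTH * months

print(f"Initial 12 MW phase:  ${base_rent(12):,.0f} over the 6-year term")
print(f"Full 30 MW build-out: ${base_rent(30):,.0f} over the 6-year term")
```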

Austin, Texas

CoreWeave has signed an 8-year lease agreement for 16 megawatts (MW) of data center capacity, spanning 118,000 square feet of compute space, at Core Scientific’s data center located at 3301 Hibbetts Road in Austin, Texas. This space will host CoreWeave’s infrastructure and support its GPU cloud compute workloads. Currently, the Austin data center has an operating capacity of 12 MW.

Over the duration of the colocation hosting contract, CoreWeave will pay Core Scientific over $100 million, which includes $97.8 million in lease payments for the 8-year term.

Chester, Virginia

CoreWeave has signed a 12-year license agreement, with two 5-year extension options, for a data center known as CTP-01. This facility is owned by Chirisa Investments and operated by Chirisa Technology Parks. It is located at 1401 Meadowville Technology Parkway in Chester, Chesterfield County, Virginia, approximately 15 miles south of Richmond.

[Image: Chirisa Technology Parks’ CTP-01 data center in Chester, Chesterfield County, Virginia. Source: Chirisa Technology Parks.]

According to filings with the Supreme Court of the State of New York, CoreWeave initially agreed to license 18.6 megawatts (MW) of critical power at a rate of $115 per kilowatt (kW) per month, with annual 3% escalations. The starting monthly recurring charge for this contract was approximately $2.14 million. The future expansion of the Richmond, Virginia site resulted in the estimated total value of the contract between CoreWeave and Chirisa being approximately $365 million.
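The filing figures can be cross-checked with simple arithmetic; the 12-year escalated total below is a rough illustration, not a number taken from the filings:

```python
# Cross-check of the Chirisa (Chester, Virginia) contract arithmetic:
# 18.6 MW of critical power at $115 per kW per month, escalating 3% annually.

mw, rate, years = 18.6, 115, 12

monthly = mw * 1000 * rate                              # starting monthly recurring charge
total = sum(monthly * 12 * 1.03 ** y for y in range(years))

print(f"Starting monthly charge: ${monthly:,.0f}")       # matches the ~$2.14 million cited
print(f"12-year total with 3% escalations: ${total:,.0f}")
```

The starting monthly charge reproduces the ~$2.14 million figure exactly, and the escalated 12-year sum lands in the neighborhood of the cited ~$365 million contract value.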

This fully operational, turn-key data center now offers 28 MW of power capacity across three data halls and 250,000 square feet of space. It is part of a larger 88-acre campus owned by Chirisa, which includes adjacent land that can support future build-to-suit data center construction of up to 100 MW of power capacity and up to 500,000 square feet of floor space.

Hillsboro, Oregon

CoreWeave has expanded into a colocation data center, owned and operated by Flexential, known as Portland – Hillsboro 4. This facility is located at 4915 NE Starr Blvd in Hillsboro, Oregon, a city situated 13 miles (21 kilometers) west of Portland. The CoreWeave data center has a 9-megawatt (MW) power allocation and supports 3600 Gbps InfiniBand networking.

[Image: Flexential’s Portland – Hillsboro 4 data center at 4915 NE Starr Blvd in Hillsboro, Oregon. Source: Flexential.]

Separately, CoreWeave has secured a 36 MW lease with Digital Realty at one of its data centers in Hillsboro, Oregon, to deploy tens of thousands of GPUs in a single facility. This shell-ready facility has a high-density design confined to a small footprint in one building, with no fiber exceeding 100 meters (328 feet). This design allows CoreWeave to create a single “brain” of GPUs, enabling the entire GPU cluster to be configured as a single system and run as a single InfiniBand fabric.

Douglasville, Georgia

CoreWeave has expanded into a colocation data center, owned and operated by Flexential, known as Atlanta – Douglasville. This facility is located at 1700 N. River Road in Douglasville, Georgia, a city situated 13 miles (21 kilometers) west of Atlanta. The CoreWeave data center has a 9-megawatt (MW) power allocation.

[Image: Flexential’s Atlanta – Douglasville data center at 1700 N. River Road in Douglasville, Georgia. Source: Flexential.]

Other U.S. Data Center Locations

In addition to the U.S. data center locations mentioned above, CoreWeave has further deployments in Ashburn, Virginia; Atlanta, Georgia; Chicago, Illinois; Secaucus, New Jersey; and San Jose, California.


CoreWeave has deployed networking and inference-type nodes at several of Equinix’s retail (IBX) data centers in the United States. These deployments leverage Equinix’s multi-cloud on-ramps and network connectivity across multiple metropolitan areas, enabling CoreWeave to access various cloud providers and network service providers.

CoreWeave is present in the following Equinix interconnection data centers:

  • Ashburn, Virginia: Equinix DC1-DC15, DC21. Here, CoreWeave also has access to the Equinix Ashburn public peering exchange point
  • Atlanta: Equinix AT1
  • Chicago: Equinix CH1/CH2/CH4
  • New York (Secaucus, New Jersey): Equinix NY2/NY4/NY5/NY6
  • Silicon Valley (San Jose): Equinix SV1/SV5/SV10. Here, CoreWeave also has access to the Equinix San Jose public peering exchange point

Core Scientific – Contracts for HPC Services

CoreWeave has signed a series of 12-year contracts with Core Scientific, each including two 5-year renewal options, to secure approximately 200 megawatts (MW) of infrastructure for hosting its high-performance computing (HPC) services.

Core Scientific will modify multiple existing bitcoin mining-focused sites it owns to accommodate CoreWeave’s NVIDIA GPUs. These site modifications are expected to begin early in the second half of 2024, with the sites becoming operational in the first half of 2025.

CoreWeave’s 200 MW of long-term hosting agreements with Core Scientific are expected to result in annual payments of approximately $290 million, totaling over $3.5 billion during the initial 12-year contract term.
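A quick sanity check of these hosting economics, including the implied per-kilowatt rate (an inferred figure, not one disclosed by either company):

```python
# Rough check of the Core Scientific hosting economics cited above:
# ~$290 million per year across ~200 MW over the initial 12-year term.

annual_payment = 290e6
term_years = 12

total = annual_payment * term_years
print(f"Implied initial-term total: ${total / 1e9:.2f} billion")   # ~ $3.5 billion

# Implied monthly rate per kW for ~200 MW (an inferred, illustrative figure)
rate_per_kw_month = annual_payment / (200 * 1000 * 12)
print(f"Implied rate: ${rate_per_kw_month:.0f} per kW per month")
```

The implied ~$121 per kW per month rate is broadly consistent with the $115 per kW per month rate in CoreWeave’s Chirisa contract described earlier.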


TierPoint – Colocation Services

CoreWeave has signed a long-term agreement with TierPoint to deploy colocation services with a capacity of 16 MW in one of TierPoint’s U.S. data centers, which remains unnamed.


Potential Future Data Center Locations

According to CoreWeave’s Careers page, the company lists data center job openings in Alpharetta, Georgia; Breinigsville, Pennsylvania; Cincinnati, Ohio; Clarksville, Virginia; East Windsor, New Jersey; Lynnwood, Washington; Reno, Nevada; and Sparks, Nevada. This could indicate present or future data center locations in these markets.

London, United Kingdom

CoreWeave has committed to investing $1.3 billion (£1 billion) in the UK to enhance the country’s AI capabilities. This investment will support the opening of two UK data centers in 2024, with further expansion planned for 2025. Additionally, CoreWeave has established its European headquarters in London as part of its broader expansion into Europe.

Continental Europe – Norway, Sweden, and Spain

CoreWeave has committed to investing $2.2 billion to expand and open three new data centers in continental Europe (Norway, Sweden, and Spain) by the end of 2025. This investment is in addition to its $1.3 billion commitment in the UK, bringing its total European investment to $3.5 billion.

CoreWeave’s expansion aims to provide advanced compute solutions, including the NVIDIA Blackwell platform and NVIDIA Quantum-2 InfiniBand networking, at scale for the first time in the region. The company’s European data centers will offer low-latency performance, data sovereignty, and will be powered by 100% renewable energy.

Barcelona, Spain

According to CoreWeave’s Careers page, the company has data center job openings in Barcelona, Spain, specifically within EdgeConneX’s BCN01 data center located at Ctra. de la Santa Creu de Calafell, 99, 08840 Viladecans, Barcelona, Spain. This could indicate a potential CoreWeave data center location in this market.

[Image: EdgeConneX’s BCN01 data center in Viladecans, Barcelona, Spain. Source: EdgeConneX.]

Data Center GPUs – CoreWeave Solutions

CoreWeave primarily focuses on providing access to high-performance NVIDIA GPUs for its cloud-based services, and to a lesser extent, CPUs from AMD and Intel. These solutions offer customers powerful computing resources, available both on-demand and through reserved instance contracts.

[Image: Data center hardware specifications – CPU, GPU, RAM, and storage. Source: CoreWeave.]

Current GPU Offerings

Currently, CoreWeave offers 13 NVIDIA GPU SKUs, which fall into the following main categories:

  • NVIDIA H100: The NVIDIA Hopper H100 Tensor Core GPU is NVIDIA’s flagship data center GPU designed for large-scale AI and high-performance computing (HPC) workloads. Built on the Hopper architecture, it also forms the GPU half of the NVIDIA Grace Hopper Superchip, which pairs a Hopper GPU with a Grace CPU
  • NVIDIA A100: The NVIDIA A100 Tensor Core GPU is a data center GPU designed for AI, data analytics, and HPC workloads. It is the predecessor to the NVIDIA H100 and is based on the older Ampere architecture
  • NVIDIA A40: The NVIDIA A40 is a data center GPU intended for AI, deep learning, data analytics, and HPC workloads. Also based on the Ampere architecture, it is a more affordable option than the A100, positioned between the A100 and V100 in terms of performance
  • NVIDIA V100: The NVIDIA Tesla V100 Tensor Core GPU is a data center GPU designed for machine learning, rendering, and simulation. It is the predecessor to the NVIDIA A100 and is based on the older Volta architecture
  • NVIDIA RTX Series: The NVIDIA RTX A6000, A5000, and A4000 are professional-grade GPUs designed for visual computing workloads such as visual effects (VFX), rendering, simulation, and data science. These GPUs are based on the Ampere architecture

Future GPU Offerings

In 2024, CoreWeave will expand its offerings to include the NVIDIA B200, utilizing NVIDIA’s Blackwell GPU architecture, as well as the GB200 (Grace Blackwell Superchip). This new architecture enables trillion-parameter AI models with up to 25 times lower cost and energy consumption compared to its predecessor, the Hopper architecture. CoreWeave is one of the companies in the NVIDIA Cloud Partner program that will be among the first to offer Blackwell-powered instances to customers.

[Image: NVIDIA Blackwell architecture data center GPU. Source: NVIDIA.]

AI Supercomputer with NVIDIA

CoreWeave has also built, in partnership with NVIDIA, what the two companies describe as the world’s fastest AI supercomputer. In an MLPerf benchmark test, it trained a 175-billion-parameter GPT-3 large language model (LLM) in under 11 minutes – over 29 times faster than competing submissions.

Customers – AI Leaders and Innovators

CoreWeave serves a diverse range of customers across various industries, with a particular focus on large language model (LLM) builders.

[Image: Logos of CoreWeave customers, including Microsoft, NovelAI, Inflection AI, and Mistral AI.]

Notable examples of CoreWeave’s customers include:

  • Microsoft: CoreWeave provides cloud computing infrastructure and access to NVIDIA GPUs to support the computing needs of OpenAI and Azure AI workloads. Microsoft is one of CoreWeave’s largest customers, with agreements worth billions of dollars over multiple years
  • NovelAI: NovelAI, a subscription service for AI-assisted authorship and storytelling, was among the first to deploy NVIDIA HGX H100 GPUs on CoreWeave’s cloud platform. CoreWeave’s H100 clusters enable NovelAI to be more flexible with model design, iterate on training faster, and serve their AI models to millions of users each month
  • Inflection AI: Inflection AI, an AI lab, utilized CoreWeave’s supercomputing infrastructure powered by over 3,500 NVIDIA H100 GPUs to train their LLM, Pi. Additionally, Inflection AI has partnered with CoreWeave and NVIDIA to build the world’s largest AI cluster, comprising 22,000 NVIDIA H100 Tensor Core GPUs
  • Mistral AI: Mistral AI, a Paris-based open-source AI startup, uses CoreWeave’s infrastructure to provide fast and reliable performance through NVIDIA H100 Tensor Core GPUs. CoreWeave’s infrastructure has streamlined Mistral AI’s training workflows and boosted efficiency


GPU Instance Pricing – CoreWeave

CoreWeave provides à la carte pricing for NVIDIA GPUs, with hourly costs varying depending on the specific GPU model. Customers can tailor their hardware configurations by selecting the desired GPU, CPU, RAM, and storage resources when scheduling workloads. Below is a breakdown of CoreWeave’s NVIDIA GPU instance pricing:

GPU Model | Price per GPU per Hour | VRAM
H100 HGX (80GB) | $4.76 | 80GB HBM3
A100 HGX (80GB) | $2.21 | 80GB HBM2e
A100 HGX (40GB) | $2.06 | 40GB HBM2e
A40 | $1.28 | 48GB GDDR6
V100 for NVLINK | $0.80 | 16GB HBM2
RTX A6000 | $1.28 | 48GB GDDR6
RTX A5000 | $0.77 | 24GB GDDR6
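As an illustration of how these hourly rates translate into project costs, the sketch below assumes a hypothetical 8-GPU node running around the clock for a 30-day month; neither assumption comes from CoreWeave’s pricing page:

```python
# Sketch of à la carte GPU cost arithmetic using the hourly rates listed above.
# The 8-GPU node size and 30-day month are illustrative assumptions.

HOURLY_RATES = {                 # USD per GPU per hour, from the table above
    "H100 HGX (80GB)": 4.76,
    "A100 HGX (80GB)": 2.21,
    "A100 HGX (40GB)": 2.06,
    "A40": 1.28,
    "V100 for NVLINK": 0.80,
    "RTX A6000": 1.28,
    "RTX A5000": 0.77,
}

def monthly_cost(gpu: str, count: int = 8, hours: float = 24 * 30) -> float:
    """Cost in USD to run `count` GPUs of type `gpu` for `hours` hours."""
    return HOURLY_RATES[gpu] * count * hours

print(f"8x H100 for one month:      ${monthly_cost('H100 HGX (80GB)'):,.0f}")
print(f"8x A100 80GB for one month: ${monthly_cost('A100 HGX (80GB)'):,.0f}")
```

Reserved-instance contracts would typically price below these on-demand rates.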

Competitive Landscape – GPU as a Service (GPUaaS)

CoreWeave primarily competes with other GPU as a Service (GPUaaS) companies and AI-oriented cloud providers such as Lambda Labs, Denvr Dataworks, Applied Digital, and Crusoe. These providers offer Infrastructure as a Service (IaaS), serving customers who need direct access to “raw” GPUs and bare metal performance for computationally intensive tasks, as well as the ability to move large volumes of data between different parts of the GPU fabric.

[Image: Logos of GPU-as-a-Service competitors Lambda Labs, Denvr Dataworks, Applied Digital, and Crusoe.]

More broadly, CoreWeave’s high-performance computing (HPC) solutions face competition from companies like DigitalOcean (following its acquisition of Paperspace), FluidStack, Iris Energy, and RunPod. These platforms cater to customers early in their AI journey. For instance, DigitalOcean provides both IaaS solutions and Platform as a Service (PaaS) via its Gradient application, which supports the full lifecycle for software developers building AI-enabled applications.

Additionally, CoreWeave and NVIDIA’s DGX Cloud have a complex relationship, involving both partnership through investment and technology and indirect competition. They offer differentiated cloud solutions for AI workloads to slightly different market segments.

Ownership and Funding Sources – CoreWeave

CoreWeave has raised over $12 billion in equity and debt capital in the past 12 months through multiple funding rounds. Notable investors include NVIDIA, Blackstone, Magnetar Capital, Coatue, Fidelity Management, and BlackRock, among others. According to various sources, CoreWeave was valued at $19 billion in its latest Series C investment round.

Funding Round | Date | Capital Type | Amount
Series B | April 2023 | Equity | $421 million
Secondary Sale | December 2023 | Equity | $642 million
Series C | May 2024 | Equity | $1.1 billion
Initial Term Loan | August 2023 | Debt | $2.3 billion
Follow-On Term Loan | May 2024 | Debt | $7.5 billion
Total Funding | | | ~$12 billion

Equity Capital

  • Series B: CoreWeave raised a total of $421 million in Series B funding, specifically as primary capital. NVIDIA invested $100 million in this round, granting CoreWeave preferential access to its GPUs. Additionally, Magnetar Capital, as well as Nat Friedman and Daniel Gross (nfdg) participated in this funding round
  • Secondary Sale: CoreWeave closed a secondary sale for a minority investment of $642 million, led by Fidelity Management. Additional participants included the Investment Management Corporation of Ontario (IMCO), Jane Street, J.P. Morgan Asset Management, Nat Friedman and Daniel Gross (nfdg), Goanna Capital, and Zoom Ventures
  • Series C: CoreWeave raised a total of $1.1 billion in Series C funding, specifically as primary capital. This funding round was led by Coatue, with participation from Magnetar Capital, Altimeter Capital, Fidelity Management, and Lykos Global Management

Debt Financing

In August 2023, CoreWeave secured $2.3 billion in debt financing, led by Magnetar Capital and Blackstone Tactical Opportunities, with participation from Coatue, DigitalBridge Credit, BlackRock, PIMCO, and Carlyle. This facility was structured into two components:

  • First Lien Secured Term Loan: Interest rate of SOFR + 8.75%, with a maturity date of July 31, 2028, and a 5-year term
  • Delayed Draw Term Loan: Interest rate of SOFR + 8.75%, with a maturity date of June 30, 2028, and an unused commitment fee of 1.0%
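To illustrate what SOFR-linked pricing means in dollar terms, the sketch below assumes a hypothetical 5.3% SOFR level; only the $2.3 billion principal and the 8.75% spread come from the reported terms:

```python
# Illustrative interest cost of SOFR-linked debt. The 5.3% SOFR level is a
# hypothetical assumption for this sketch, not a figure from the article.

principal = 2.3e9          # August 2023 facility, per the reported terms
spread = 0.0875            # SOFR + 8.75%, per the reported terms
sofr = 0.053               # hypothetical reference rate

annual_interest = principal * (sofr + spread)
print(f"All-in rate: {(sofr + spread) * 100:.2f}%")
print(f"Illustrative annual interest: ${annual_interest / 1e6:,.0f} million")
```

At that assumed SOFR level, the all-in cost of the facility would exceed 14% per year, which is expensive debt relative to typical data center financing.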

In May 2024, CoreWeave closed an additional $7.5 billion in debt financing, again led by Blackstone and Magnetar Capital, with participation from Coatue, Carlyle, CDPQ (Caisse de dépôt et placement du Québec), DigitalBridge Credit, BlackRock, Eldridge Industries, and Great Elm Capital.

A unique aspect of CoreWeave’s debt financing arrangements is the asset-backed collateral. The debt is collateralized by NVIDIA GPUs, including H100 GPUs.

Use of Proceeds

CoreWeave has used this equity and debt funding to acquire computing resources, including hundreds of thousands of the latest-generation NVIDIA GPUs, as well as related infrastructure such as servers and networking equipment. Additionally, the funding has enabled CoreWeave to expand its data center portfolio into new geographic regions, cover working capital needs, and grow its team. These financings and associated growth capital expenditures (CapEx) are aimed at meeting the contracted demand for CoreWeave’s GPU-accelerated cloud infrastructure.

Frequently Asked Questions

Is CoreWeave a Publicly Traded Company?

No, CoreWeave is not a publicly traded company. As a privately-held company, CoreWeave’s shares are not available for purchase on public stock exchanges.

What is CoreWeave’s Stock Symbol on the Market?

Since CoreWeave is a private company, it does not have a stock symbol on any public market. Privately-held companies like CoreWeave are not required to disclose their financial information or stock ownership to the public.

How has CoreWeave’s Stock Price Performed Since its IPO?

As CoreWeave has not had an Initial Public Offering (IPO), there is no information available about its stock price performance. Privately-held companies like CoreWeave do not have publicly traded stocks, so there is no stock price history to analyze.

How does CoreWeave’s Infrastructure enable faster Machine Learning Workloads?

CoreWeave’s infrastructure is designed to accelerate machine learning workloads through a combination of powerful hardware, optimized architecture, and streamlined deployment:

  • CoreWeave provides access to the industry’s broadest range of high-performance NVIDIA GPUs, allowing users to right-size their workloads for optimal performance and efficiency
  • The Kubernetes-native infrastructure enables bare-metal performance by eliminating hypervisors from the stack, resulting in faster spin-up times and responsive auto-scaling
  • CoreWeave’s full-stack machine learning expertise is reflected in an infrastructure built to reduce setup time and improve performance, whether users are training or deploying models
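As a rough illustration of what “Kubernetes-native” means in practice, the sketch below builds a minimal pod specification that requests GPUs through the standard “nvidia.com/gpu” device-plugin resource; the pod name and container image are hypothetical, not CoreWeave-specific:

```python
# Hypothetical sketch of how a workload requests GPUs on a Kubernetes-native cloud.
# "nvidia.com/gpu" is the standard NVIDIA device-plugin resource name; the pod
# name and container image below are illustrative, not CoreWeave-specific.
import json

pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "train-llm"},               # hypothetical name
    "spec": {
        "containers": [{
            "name": "trainer",
            "image": "example.com/trainer:latest",   # hypothetical image
            "resources": {
                "limits": {"nvidia.com/gpu": 8}      # schedule 8 GPUs on one node
            },
        }]
    },
}

print(json.dumps(pod_spec, indent=2))
```

On a bare-metal Kubernetes stack, a spec like this schedules directly onto GPU hardware with no hypervisor layer in between.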
Mary Zhang covers Data Centers for Dgtl Infra, including Equinix (NASDAQ: EQIX), Digital Realty (NYSE: DLR), CyrusOne, CoreSite Realty, QTS Realty, Switch Inc, Iron Mountain (NYSE: IRM), Cyxtera (NASDAQ: CYXT), and many more. Within Data Centers, Mary focuses on the sub-sectors of hyperscale, enterprise / colocation, cloud service providers, and edge computing. Mary has over 5 years of experience in research and writing for Data Centers.

