Steven Hill takes a virtual journey through Microsoft’s data centers, drawing insights from the presentation and his own exploration of Azure’s data center infrastructure.
Having spent more than two decades as a journalist and analyst with a keen interest in data centers, I’ve always relished the behind-the-scenes access that facility tours provide to the IT industry. Peering under floor tiles, marveling at the intricacy of the infrastructure, and feeling the airflow in the warm aisles offer a tangible connection to the physicality of data centers, an experience usually reserved for a select few.
However, the recent Microsoft Datacenter Tour: Virtual Experience, hosted by Alistair Speirs, Director of Global Infrastructure for the Microsoft Azure Business Group, presented a starkly different encounter. Unlike the usual sensory immersion with its sounds, scents, and vibrant lights (NICHT FÜR DER GEFINGERPOKEN UND MITTENGRABEN!), this event unfolded as a more sanitized affair. Instead of the customary ambiance, a 50-slide deck took center stage, highlighting Microsoft Azure’s cloud data center achievements and virtues.
Creating a webinar, even on simple IT subjects, demands considerable effort, and a one-hour program on the data center ecosystem that supports and expands the Azure cloud inevitably omits a vast amount of information. Admittedly, it’s necessary to break the subject into key areas while pointing the more curious among us to the website for further detail. That thirst for knowledge has to be balanced against protecting proprietary Microsoft technology that could offer a technological or business advantage, and it’s a fine line the company must walk.
Microsoft Data Center Tour Highlights
The Microsoft Datacenter Tour: Virtual Experience offered a sweeping overview of key facets of the Azure ecosystem:
- Cloud Infrastructure: Microsoft operates more than 200 data centers across 60+ data center regions worldwide, many offering multiple availability zones.
- Network Connectivity: A dedicated WAN built on 175,000 miles of fiber reaches 190+ network points of presence (PoPs), and regional network gateways keep latency to about 2.0 ms for zones within a ~60-mile radius.
- Availability Zones: Regions that support availability zones let customers distribute mission-critical workloads across three or more fully isolated data centers. Application latency requirements drive placement, with baselines of 0.4 ms for zones less than 40 km apart and 1.2 ms for those under 120 km apart (figures that track the physics of fiber; see the quick calculation after this list). Globally, there are 47 regions hosting 103 availability zones today, with expansion to 63 regions and 161 availability zones planned.
- Comprehensive Edge-to-Cloud Management: The Azure Arc platform extends security, identity, and management from Azure Sphere and Azure IoT devices at the edge, through on-premises options such as Azure Stack Hub and Azure Private Edge Zones, up to Azure cloud Edge Zones and Regions.
- Connectivity Solutions: With over 190 PoPs, Azure offers versatile connectivity options, including ExpressRoute direct-to-WAN private links that provide consistent latency, support IPv6 workloads, and scale bandwidth up to 100GbE.
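Those baseline figures track closely with the physics of fiber: light in glass travels at roughly two-thirds the speed of light, or about 5 microseconds per kilometer each way. Here is a quick back-of-the-envelope check, my own arithmetic rather than Microsoft's, and it assumes the quoted numbers are round-trip propagation floors:

```python
# Back-of-the-envelope check (my assumption: the quoted latencies are
# round-trip propagation floors over fiber). Light in optical fiber covers
# roughly 5 microseconds per km one way, so ~10 microseconds per km round trip.

US_PER_KM_ROUND_TRIP = 10.0

def fiber_floor_ms(distance_km: float) -> float:
    """Approximate round-trip propagation delay in milliseconds."""
    return distance_km * US_PER_KM_ROUND_TRIP / 1000.0

for km in (40, 60 * 1.609, 120):  # 60 miles is roughly 96.5 km
    print(f"{km:6.1f} km -> ~{fiber_floor_ms(km):.2f} ms round trip")

# Prints ~0.40 ms for 40 km and ~1.20 ms for 120 km, matching the zone
# baselines; the ~60-mile case comes out near 1 ms, leaving headroom for
# switching and routing overhead within the quoted 2.0 ms figure.
```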

While the presentation emphasized pivotal aspects of the cloud ecosystem, its coverage of data center infrastructure was relatively limited, offering only a fractional view. With over 200 Azure data centers in existence, the complexity and variation are vast. Microsoft briefly touched on a few points, noting that a typical Azure data center houses more than 1 million miles of fiber cable and that the internal network is protected by Azure Firewall and distributed denial-of-service (DDoS) mitigation, built around SONiC (Software for Open Networking in the Cloud). Yet details on servers, racking, storage, power, and cooling, the quintessential components of a data center, were notably absent.
Fortunately, digging into https://datacenters.microsoft.com/ turned up additional details that supplement the tour’s content. The information below is sourced from that site and offers a more complete picture than the tour itself; keep in mind there may be additional features I haven’t covered.
- Server Blades: Built on OCP Olympus Gen 6 or Gen 7 designs, these blades support various processors, including Intel Xeon or AMD EPYC x64 CPUs and Cavium ThunderX2 or Qualcomm Centriq Arm SoCs. Equipped with 16 DIMMs, the full-rack-depth blades feature 50GbE FPGA-based NICs and accommodate up to eight PCIe NVMe drives, four SATA drives, and a three-phase, battery-backed power supply.
- GPU Blades: Support a range of GPU options from Nvidia, AMD, and other manufacturers.
- Storage Blades: Offering up to 88-disk JBOD (Just a Bunch of Disks) arrays.
- Racks: Available in 44 or 48 rack units (RU), these 19-inch open frame racks come integrated with a three-phase, blind-mate PDU and adapters for various power services.
- Emergency Power: Provided by diesel generators.
- Cooling: Hot-aisle containment with optimized adiabatic cooling, combining ambient-air and evaporative cooling methods. The virtual tour showed a machine room built on a slab, with overhead cable routing and hot aisles enclosed behind doors.
These details offer a more nuanced perspective on the hardware and infrastructure supporting Azure’s data centers, augmenting the information provided during the virtual tour.

Fairly straightforward details indeed, but the standout revelation was the use of 50GbE connections to every server, implemented with field-programmable gate arrays (FPGAs) on the NICs. The choice aligns with the data processing unit (DPU) approach now appearing in networking strategies from a number of vendors. Although Microsoft doesn’t use that term, the concept is the same: offload routine networking tasks to the network adapter itself, conserving CPU cycles for workloads.
When asked which infrastructure components are upgraded most often, our host Alistair noted that, software aside, networking changes most frequently; using FPGAs instead of conventional networking application-specific integrated circuits (ASICs) is what makes that ongoing adaptation practical.
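The most visible customer-facing expression of that FPGA offload is Azure Accelerated Networking, which moves much of the virtual-switch data path onto the SmartNIC. As a minimal sketch, assuming the azure-identity and azure-mgmt-network Python packages and hypothetical resource names, enabling it on an existing NIC looks roughly like this:

```python
# Minimal sketch (not Microsoft reference code): enable Accelerated Networking
# on an existing Azure NIC via the azure-mgmt-network SDK. Subscription,
# resource group, and NIC names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"   # placeholder
resource_group = "demo-rg"              # hypothetical resource group
nic_name = "web01-nic"                  # hypothetical NIC

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Fetch the NIC, flip the offload flag, and push the update back.
nic = client.network_interfaces.get(resource_group, nic_name)
nic.enable_accelerated_networking = True
nic = client.network_interfaces.begin_create_or_update(
    resource_group, nic_name, nic
).result()
print(nic.enable_accelerated_networking)  # True once the update completes
```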
Azure’s Focus on Security and Sustainability
Returning to the tour, Alistair underscored Azure’s robust security and resilience features. Amid the escalating global focus on personal privacy, international services like Azure are exposed to significant risks if not aligned with local and regional security protocols.

To address this concern, Azure provides more than 100 compliance-oriented offerings and is developing an EU-specific data boundary to align with GDPR. More notably, Microsoft’s vision for Azure confidential computing keeps data entirely under the customer’s control while ensuring that Azure itself has no access to it.
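In practice, confidential computing surfaces in Azure as confidential VM capabilities, with the VM’s security posture declared in its security profile. A minimal sketch, assuming the azure-mgmt-compute Python models; a complete deployment would also need a confidential-capable VM size and OS-disk encryption settings, which are omitted here:

```python
# Rough sketch (my illustration, not Microsoft's reference code): a VM's
# security posture is expressed through its security profile. For a
# confidential VM the security type is "ConfidentialVM", with UEFI secure
# boot and a virtual TPM enabled.
from azure.mgmt.compute.models import SecurityProfile, UefiSettings

security_profile = SecurityProfile(
    security_type="ConfidentialVM",
    uefi_settings=UefiSettings(secure_boot_enabled=True, v_tpm_enabled=True),
)
print(security_profile.security_type)  # ConfidentialVM
```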
Continuing the tour, I was heartened by Microsoft’s endeavors to ensure the sustainability of its Azure cloud. This aligns with a trend we’ve long advocated for, and it’s commendable to witness Microsoft’s serious commitment—pledging to achieve zero waste and carbon negativity by 2030 and aiming to eliminate all historical carbon emissions since its inception by 2050.
Equally significant is its aim to achieve water positivity by 2030. Numerous regions globally face water scarcity crises, and data centers can consume vast amounts of fresh water annually. The reversal of this trend by a leading international cloud provider is encouraging. Moreover, Microsoft offers tools for cloud sustainability alongside an emissions monitoring dashboard.
Naturally, any comprehensive cloud program includes a nod to AI, and for good reason. AI stands as one of the most compute-intensive applications today, with a burgeoning demand for cloud-based AI resources projected for years to come.
Microsoft’s approach to AI seems pragmatic, offering a diverse array of services and GPU-based resources adaptable to achieve the optimal balance of cost, performance, and outcomes for virtually any customer use case. This dynamic environment fulfills the cloud’s promise from over a decade ago, democratizing access to top-tier resources for customers who may lack the means to invest in high-end infrastructure necessary for AI capabilities.
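As a rough illustration of what that GPU menu looks like from the outside, here is a small sketch of my own, assuming the azure-identity and azure-mgmt-compute packages and a placeholder subscription ID, that lists the GPU-backed N-series VM sizes offered in a region:

```python
# Minimal sketch (not Microsoft sample code): list GPU-class (N-series) VM
# sizes offered in one Azure region. Subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"  # placeholder
region = "eastus"

compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Azure's GPU-backed families are the N-series (NC, ND, NV).
GPU_PREFIXES = ("Standard_NC", "Standard_ND", "Standard_NV")

for size in compute.virtual_machine_sizes.list(location=region):
    if size.name.startswith(GPU_PREFIXES):
        print(f"{size.name}: {size.number_of_cores} vCPUs, "
              f"{size.memory_in_mb // 1024} GiB RAM")
```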