Five Core Data Center Architectures
过儿和迪奥 · 2024-08-06 16:18 · Published in China

Source: Architect Technology Alliance Official Account

Data center (DC) architecture is a complex integration of modern facilities, IT, and network systems that work together to build, design, and support mission-critical apps. These systems are highly interconnected, requiring meticulous, synchronized planning in both their design and operation. DC architecture includes the design and layout of physical infrastructure (such as power distribution and cooling systems) and IT infrastructure (including network architecture, storage architecture, server architecture, and cloud DC architecture). This necessitates a detailed planning of physical space, power and cooling systems, network connections, security measures, and software to ensure optimal performance, reliability, and scalability of IT resources and services. The ultimate goal is to build an efficient, flexible, and resilient environment to support the critical IT infrastructure of modern enterprises and organizations.

Components of DC architecture

Servers: Servers are classified into rack servers, blade servers, and tower servers based on their physical structure and sizes.

Storage systems: Various storage technologies, such as storage area network (SAN), network attached storage (NAS), and direct attached storage (DAS), are used to store and manage data in DCs.

Network devices: Switches, routers, firewalls, and load balancers provide efficient data communication and resilience within the DC and to external networks.

Power infrastructure: The uninterruptible power supply (UPS) system, backup generator, and power distribution unit (PDU) combine to offer stable and reliable power supply for DC devices.

Cooling systems: Computer room air conditioner (CRAC) units, liquid cooling systems, and cold/hot aisle containment maintain optimal temperature and humidity levels to ensure the proper operation of hardware.

Enclosures: Racks and cabinets used in DCs include open frame racks (two-post and four-post racks), enclosed racks, wall-mounted racks, and network cabinets.

Cabling: Structured cabling systems, including twisted pair cables (for Ethernet, such as Cat5e, Cat6), fiber-optic cables (single-mode and multi-mode), and coaxial cables.

Security systems: Physical security measures, such as biometric access control, monitoring cameras, and security personnel, combined with network security solutions like firewalls, intrusion detection/prevention systems (IDS/IPS), and encryption, protect DCs from unauthorized access and threats.

Management software: Data center infrastructure management (DCIM) software helps monitor, manage, and optimize the performance and energy efficiency of DC components.

Network architecture of DCs

The network architecture of DCs refers to the design and layout of interconnected nodes and paths that facilitate communication and data exchange within DCs. It includes the physical and logical layout of network devices (such as switches, routers, and cables) to enable efficient data transmission between servers, storage systems, firewalls, and load balancers. A well-designed network architecture can deliver high-speed, low-latency, and reliable connections while ensuring scalability, resilience, and fault tolerance.

The three-tier architecture was the standard model for DC networks for decades. A newer topology, the spine-leaf (leaf-spine) architecture, has emerged and gained wide popularity in modern DC environments. It is particularly common in high-performance computing (HPC) deployments and has become a leading choice for cloud service providers (CSPs).

The following is an overview of these two network architectures.

Three-tier network architecture

The three-tier network architecture is a traditional network topology that has been widely used in older DCs and is commonly referred to as the "core-aggregation-access" model. Redundancy is a key part of this design: multiple paths from the access layer to the core layer help the network achieve high availability and efficient resource allocation.

[Figure: three-tier (core-aggregation-access) network architecture]

Access layer: As the bottom layer of the three-tier network architecture, the access layer serves as the entry point for servers, storage systems, and other devices to connect to the network. It provides these connections through switches and cables. Switches at the access layer, often arranged in a top-of-rack (ToR) configuration, enforce policies such as security settings and virtual local area network (VLAN) assignments.

Aggregation layer: The aggregation layer is also known as the distribution layer. It consolidates data traffic from the access layer's ToR switches before transmitting it to the core layer for routing to its ultimate destination. This layer enhances the resilience and availability of the DC network through redundant switches, eliminates single points of failure, and manages network traffic using strategies such as load balancing, Quality of Service (QoS), packet filtering, queuing, and inter-VLAN routing.

Core layer: The core layer is also called the backbone network. It is the high-capacity, central part of the network designed for redundancy and resilience, interlinking aggregation layer switches and connecting to external networks. Operating at Layer 3, the core layer prioritizes speed, minimal latency, and connectivity, using high-end switches, high-speed cables, and routing protocols with short convergence times.

Ultimately, the traditional three-tier DC architecture struggles to efficiently handle the increased east-west (server-to-server) traffic generated by modern server virtualization technologies, because of the latency introduced by the multiple hops between layers. It also suffers from wasted bandwidth, large fault domains, and difficulty adapting to ultra-large-scale networks.
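
To make the hop-count problem concrete, here is a minimal Python sketch (an illustration only; the device names and pod layout are assumptions, not taken from this article) that traces an east-west flow through a core-aggregation-access topology.

# Illustrative sketch: path of an east-west flow in a three-tier topology.
# Device names and the rack/pod layout are assumptions for demonstration.

def three_tier_path(src_rack: str, dst_rack: str) -> list:
    """Return the switch hops between servers in two racks."""
    if src_rack == dst_rack:
        # Same rack: traffic stays on the ToR (access) switch.
        return ["access-" + src_rack]
    # Different racks: up through aggregation and core, then back down.
    return [
        "access-" + src_rack,
        "agg-" + src_rack[0],   # aggregation switch serving the source pod
        "core-1",
        "agg-" + dst_rack[0],   # aggregation switch serving the destination pod
        "access-" + dst_rack,
    ]

path = three_tier_path("a1", "b7")
print(len(path), "switch hops:", " -> ".join(path))
# Output: 5 switch hops: access-a1 -> agg-a -> core-1 -> agg-b -> access-b7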

Traffic patterns in DCs include:

North-south: This traffic refers to the flow of data between external clients and the servers within a DC, or the traffic generated when DC servers access the internet.

East-west: This traffic refers to the flow of data between servers within a DC.

Cross-DC: This traffic refers to the flow of data across different DCs, such as cross-DC disaster recovery and communication between private and public clouds.

In traditional DCs, each service is typically deployed in a dedicated fashion on one or a few physical servers that are physically isolated from other systems. As a result, east-west traffic makes up a small share of traffic in a traditional DC, while north-south traffic constitutes approximately 80% of the total.
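
As a simple illustration of this taxonomy, the following Python sketch (the address ranges are invented for demonstration) classifies a flow by where its two endpoints sit.

import ipaddress

# Assumed, illustrative prefixes: the local DC's internal range and a peer DC's range.
LOCAL_DC = ipaddress.ip_network("10.1.0.0/16")
PEER_DC = ipaddress.ip_network("10.2.0.0/16")

def classify_flow(src: str, dst: str) -> str:
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if s in LOCAL_DC and d in LOCAL_DC:
        return "east-west"      # server-to-server inside the DC
    if (s in LOCAL_DC and d in PEER_DC) or (s in PEER_DC and d in LOCAL_DC):
        return "cross-DC"       # e.g. disaster recovery, private-public cloud traffic
    return "north-south"        # one endpoint is an external client or the internet

print(classify_flow("10.1.3.4", "10.1.9.9"))     # east-west
print(classify_flow("203.0.113.7", "10.1.3.4"))  # north-south
print(classify_flow("10.1.3.4", "10.2.0.5"))     # cross-DC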

In cloud DCs, the service architecture has gradually evolved from monolithic applications to a web-application-database model, and distributed technology has become mainstream for enterprise applications. Service components are typically spread across multiple virtual machines (VMs) or containers. Services no longer run on one or a few physical servers but on many servers working together, resulting in a rapid increase in east-west traffic.

The emergence of big data services makes distributed computing a standard configuration for cloud DCs. Big data services can be distributed on hundreds of servers in a DC for parallel computing, which greatly increases east-west traffic.

The traditional three-tier network architecture, designed for conventional DCs with dominant north-south traffic, is not well-suited for cloud DCs where east-west traffic predominates.

East-west traffic, such as inter-pod Layer 2 and Layer 3 traffic, must be forwarded by devices at the aggregation and core layers, needlessly traversing many nodes. Traditional networks are usually built with a bandwidth oversubscription ratio of 1:10 to 1:3 to improve device utilization, so performance deteriorates significantly each time traffic passes through an oversubscribed node. In addition, xSTP technologies in the Layer 2 network exacerbate this deterioration.

Therefore, if a large amount of east-west traffic is transmitted through the traditional three-tier network architecture, devices connected to the same switch port may compete for bandwidth, resulting in poor response times for end users.
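
A rough worked example (the port counts and speeds below are assumptions, not figures from this article) shows what an oversubscription ratio means for each server:

# Illustrative oversubscription arithmetic for one access switch (assumed figures).
servers_per_access_switch = 40
server_nic_gbps = 10
uplinks_per_access_switch = 4
uplink_gbps = 10

offered = servers_per_access_switch * server_nic_gbps    # 400 Gbit/s offered downstream
available = uplinks_per_access_switch * uplink_gbps      # 40 Gbit/s of uplink capacity
ratio = offered / available                              # 10:1 oversubscription

# If every server transmits upstream at line rate at the same time,
# each one gets only a fraction of its NIC bandwidth past the access layer.
per_server_gbps = available / servers_per_access_switch

print("Oversubscription ratio: %d:1" % ratio)                           # 10:1
print("Effective bandwidth per server: %.1f Gbit/s" % per_server_gbps)  # 1.0 Gbit/s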

Spine-leaf architecture

The spine-leaf architecture, often referred to as a Clos design, is a two-tier network topology that has become prevalent in DCs and enterprise IT environments. Compared to traditional three-tier architectures, it offers multiple benefits, including scalability, reduced latency, and improved overall performance.

[Figure: spine-leaf (leaf-spine) network architecture]

Leaf switches: These are ToR switches located at the access layer, responsible for connecting servers and storage devices. They form a full mesh network by connecting to every spine switch, ensuring all forwarding paths are available and nodes are equidistant in terms of hops.

Spine switches: These form the backbone of the DC network, interconnecting all leaf switches and routing traffic between them. Spine switches do not connect directly to each other, as the full-mesh links between the two layers eliminate the need for such connections. Instead, east-west traffic is routed through the spine layer, enabling non-blocking data transmission between servers attached to different leaf switches.

Compared with the traditional three-tier architecture, the spine-leaf architecture features excellent scalability, lower latency, predictable performance, and optimized east-west traffic efficiency. It also provides fault tolerance through high interconnectivity, eliminates network loop concerns, and simplifies DC network management.
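
The following minimal Python sketch (an assumed fabric of four spines and eight leaves; the sizes are arbitrary) illustrates two of these properties: any two leaves are exactly two hops apart, and the number of equal-cost paths between them equals the number of spine switches.

from itertools import combinations

# Assumed fabric size, for illustration only.
spines = ["spine-%d" % i for i in range(1, 5)]   # 4 spine switches
leaves = ["leaf-%d" % i for i in range(1, 9)]    # 8 leaf (ToR) switches

# Full mesh: every leaf has one uplink to every spine.
links = {(leaf, spine) for leaf in leaves for spine in spines}
print("Links in the fabric:", len(links))        # 8 x 4 = 32

# Any two leaves are exactly two hops apart (leaf -> spine -> leaf),
# with one equal-cost path per spine available for ECMP load balancing.
src, dst = next(iter(combinations(leaves, 2)))
paths = [(src, spine, dst) for spine in spines]
print("%s -> %s: %d equal-cost 2-hop paths" % (src, dst, len(paths)))
# Output: leaf-1 -> leaf-2: 4 equal-cost 2-hop paths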

However, the fabric architecture is not perfect. Leaf network devices have greater performance and functionality demands compared with access devices in traditional architectures. Serving as gateways between various protocols (e.g., Layer 2 and Layer 3, VLAN and VXLAN, VXLAN and NVGRE, and FC and IP), leaf devices require advanced chip processing capabilities. However, no commercial chip currently supports seamless interworking across all these protocols. Due to the absence of standardized solutions, vendors rely on proprietary encapsulation methods for forwarding between spine and leaf nodes, which presents challenges for future interoperability. In addition, there are other drawbacks:

Each rack forms an independent Layer 2 domain, which limits applications that depend on a larger Layer 2 domain: such applications can only be deployed within the same rack. The independent Layer 2 domains also restrict server migration, because a migrated server's gateway and IP address must be changed.

In addition, the number of subnets increases greatly. Each rack is assigned its own subnet, and each subnet corresponds to a route, so the overall number of routes in the DC grows significantly. Distributing this routing information to every leaf node is itself a complex task.

Before designing a spine-leaf network architecture, several key factors need to be considered, including the oversubscription (convergence) ratio, the ratio of leaf switches to spine switches, the uplink configuration from the leaf layer to the spine layer, and whether the uplinks operate at Layer 2 or Layer 3, among other considerations.
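
As an illustration of two of these factors, the sketch below (all port counts and speeds are assumptions, not recommendations) computes a leaf switch's oversubscription ratio and how large a fabric a given spine layer can support.

# Illustrative sizing math for a spine-leaf design (all figures are assumptions).
leaf_downlink_ports, leaf_downlink_gbps = 48, 25   # server-facing ports per leaf
leaf_uplink_ports, leaf_uplink_gbps = 6, 100       # spine-facing ports per leaf
spine_ports = 64                                   # ports per spine switch

downlink_capacity = leaf_downlink_ports * leaf_downlink_gbps   # 1200 Gbit/s
uplink_capacity = leaf_uplink_ports * leaf_uplink_gbps         # 600 Gbit/s
oversubscription = downlink_capacity / uplink_capacity         # 2.0, i.e. a 2:1 ratio

# With one uplink from each leaf to each spine, each leaf consumes one port on
# every spine, so the spine port count caps the number of leaves in the fabric.
spines_needed = leaf_uplink_ports   # 6 spine switches
max_leaves = spine_ports            # up to 64 leaf switches

print("Leaf oversubscription ratio: %d:1" % oversubscription)
print("Spine switches needed: %d, maximum leaves supported: %d" % (spines_needed, max_leaves))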

Storage architecture of DCs

The storage architecture of a DC refers to the design and organization of its storage systems, which determine how data is physically stored and accessed. It defines the types of physical storage devices, such as hard disk drives (HDDs), solid state drives (SSDs), and tape drives, as well as their configuration modes, such as direct attached storage (DAS), network attached storage (NAS), and storage area network (SAN). Furthermore, the storage architecture determines how a server accesses data, either directly or over a network. The main types of storage architecture in DCs are:

Direct Attached Storage (DAS)

DAS is a digital storage system physically connected to a server, with no network connection in between. Servers use protocols like SATA, SCSI, or SAS to communicate with the storage devices, while RAID controllers manage data striping, mirroring, and the underlying disks.

[Figure: direct attached storage (DAS)]

DAS is cost-effective, simple, and offers high performance for a single server. However, it is limited in scalability and accessibility compared to network storage solutions like NAS and SAN.
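
Because DAS performance and resilience hinge on the RAID level chosen, the short sketch below (a simplified model that ignores hot spares and controller overhead) shows how usable capacity varies with striping and mirroring.

# Simplified usable-capacity model for common RAID levels (illustrative only).
def usable_tb(raid_level: str, disks: int, disk_tb: float) -> float:
    if raid_level == "RAID0":    # striping only: full capacity, no redundancy
        return disks * disk_tb
    if raid_level == "RAID1":    # mirroring: half the raw capacity survives
        return disks * disk_tb / 2
    if raid_level == "RAID5":    # striping with single parity: one disk's worth lost
        return (disks - 1) * disk_tb
    if raid_level == "RAID10":   # mirrored stripes: half the raw capacity
        return disks * disk_tb / 2
    raise ValueError("unsupported RAID level: " + raid_level)

for level in ("RAID0", "RAID1", "RAID5", "RAID10"):
    print(level, usable_tb(level, disks=8, disk_tb=4.0), "TB usable")
# RAID0 32.0, RAID1 16.0, RAID5 28.0, RAID10 16.0 TB usable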

Network Attached Storage (NAS)

NAS is a dedicated file-level storage device that facilitates data access for multiple users and client devices over a local area network (LAN) using TCP/IP Ethernet. It is designed to simplify data storage, retrieval, and management without an intermediary application server.

[Figure: network attached storage (NAS)]

NAS offers simple access, sharing, and management, but faces performance and scalability limitations due to its dependence on shared network bandwidth and physical constraints.

Storage Area Network (SAN)

A SAN is a dedicated high-speed network that uses the Fibre Channel protocol to connect servers to shared storage devices. It provides block-level access to storage, allowing servers to interact with storage as if it were directly attached. This simplifies tasks like backups and maintenance by offloading these tasks from host servers. SANs offer high performance and scalability but come with high costs and complex management needs that require specialized IT expertise.

Next-generation storage solutions

Various next-generation solutions with innovative technologies are emerging in the DC storage field to meet growing demands for efficiency, scalability, and performance. These include:

All-flash storage array: This high-speed storage system uses SSDs instead of traditional HDDs, offering superior performance and reduced latency. The adoption of storage protocols designed specifically for SSDs, such as Non-Volatile Memory Express (NVMe) and NVMe over Fabrics (NVMe-oF), is growing. These protocols improve performance, reduce latency, and increase throughput for all-flash storage arrays in DCs.

Scale-out file system: This is a storage architecture that enables the expansion of both storage capacity and performance by adding more nodes, offering flexibility and scalability.

Object platform: This storage solution is designed for managing large volumes of unstructured data. It uses a flat namespace and unique identifiers to facilitate data retrieval (see the sketch after this list).

Hyper-converged infrastructure (HCI): This is an integrated system that combines storage, compute, and networking into a single framework to simplify management and enhance scalability.

Software-defined storage (SDS): SDS uses software to manage and abstract underlying storage resources, offering flexibility and efficiency through policy-based management. Companies like Meta Platforms (Facebook), Google, and Amazon have adopted SDS technology.

Heat-assisted magnetic recording (HAMR): HAMR is a magnetic storage technology that increases magnetic recording density on storage devices such as HDDs, by temporarily heating the disk material. It meets the growing storage demands of modern DCs.
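
As a toy illustration of the flat-namespace idea behind object platforms (a Python dictionary stands in for a real object store; nothing here reflects any specific product's API):

import uuid

# Toy flat-namespace object store: no directory hierarchy, just unique IDs.
class ObjectStore:
    def __init__(self):
        self._objects = {}                    # object_id -> bytes

    def put(self, data: bytes) -> str:
        object_id = str(uuid.uuid4())         # unique identifier, not a path
        self._objects[object_id] = data
        return object_id

    def get(self, object_id: str) -> bytes:
        return self._objects[object_id]

store = ObjectStore()
oid = store.put(b"unstructured blob, e.g. an image or a log archive")
print(oid, len(store.get(oid)), "bytes")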

Server architecture of DCs

The server architecture of DCs refers to the design and organization of servers and related components to efficiently process, store, and manage data. It can be broken down into the following categories:

Form factor (physical structure)

Rack servers: These are the most common types of servers in DCs. They are designed to be mounted in standard 19-inch racks and are usually 1 U to 4 U high.

Blade servers: These servers are designed to maximize density and minimize occupied physical space. Multiple blade servers are housed in one chassis to share resources such as power supplies, cooling, and networking.

Tower servers: While less common in large DCs, tower servers are still used in smaller-scale deployments or where rack space is not a constraint. They resemble desktop computer towers and can function as standalone units.

System resources

Central Processing Unit (CPU): The CPU is the brain of a server, responsible for executing instructions and processing data. It performs arithmetic, logical, and input/output operations.

Memory (RAM): Random Access Memory (RAM) is a server's main memory, offering quick access to data or instructions and temporarily storing active programs and data.

Storage: HDDs and SSDs provide permanent data storage for operating systems, apps, databases, and user data.

Network: A Network Interface Card (NIC) connects servers to a network, enabling them to send and receive data packets.

GPU: The Graphics Processing Unit (GPU) is an optional component designed for parallel processing and graphics rendering, excelling in computationally intensive tasks like artificial intelligence (AI), machine learning, and scientific simulations.

Support resources

PSU: The power supply unit (PSU) delivers stable power to all server components by converting AC mains power to the low-voltage direct current the components require.

A/C: To manage heat generated by servers, an air conditioning (A/C) system keeps components within optimal temperature ranges, utilizing fans, radiators, liquid cooling, and air conditioners.

Mainboard: The mainboard connects all server components, providing interfaces, buses, and slots for the CPU, RAM, storage, and peripherals.

Cloud DC architecture

Cloud DC architecture encompasses the design and organization of compute, storage, network, and database resources to deliver cloud computing services. Leveraging virtualization, this architecture enables efficient sharing of physical resources, ensuring scalable, reliable, and flexible apps. Key components include:

Compute: Offers VMs, containers, and serverless compute resources for running apps. Users can easily configure and scale computing capabilities without managing physical hardware. Major services include Amazon EC2, Microsoft Azure VMs, and Google Cloud Compute Engine.

Storage: Delivers scalable and durable solutions for various data types, including files, objects, and backups. It features high availability (HA), automatic replication, and data encryption for data integrity and resilience. Popular storage services include Amazon S3, Microsoft Azure Blob Storage, and Google Cloud Storage.

Network: Allows users to create, configure, and manage virtual networks, subnets, and security rules, connecting cloud resources, local networks, and the Internet for efficient data transmission. Key services include Amazon VPC, Microsoft Azure Virtual Network, and Google Cloud VPC.

Database: Provides scalable, hosted solutions for storing, retrieving, and managing structured and unstructured data. It supports various database engines, including relational (MySQL, PostgreSQL), NoSQL (MongoDB), and data warehouses, while handling tasks like configuration, scaling, and backups to enable developers to focus on app development. Notable services include Amazon RDS, Microsoft Azure Cosmos DB, and Google Cloud SQL.
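
For a concrete flavor of how such services are consumed programmatically, here is a minimal sketch using the AWS SDK for Python (boto3); the region, AMI ID, and bucket name are placeholders, and the Azure and Google Cloud SDKs offer equivalent calls.

import boto3

# Placeholders: the region, AMI ID, and bucket name below are examples, not real resources.
ec2 = boto3.client("ec2", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

# Compute: launch a single small virtual machine.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# Storage: upload a backup file to an object storage bucket.
s3.upload_file("backup.tar.gz", "example-dc-backups", "2024/backup.tar.gz")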

Physical structure and design of DCs

The physical structure and design of DCs are vital for performance, resilience, and reliability. Key components include:

Site selection

Location: DCs should be situated in areas with low risks of natural disasters, such as earthquakes, floods, and hurricanes.

Climate: Cold regions can reduce cooling costs, while hot areas require energy-efficient cooling solutions.

Transportation: Accessibility to main roads and airports is essential for equipment transport and emergency responses.

Power: A reliable and cost-effective energy source is critical, supported by multiple high-voltage transmission lines and substations.

Fiber connectivity: Proximity to major fiber routes and access to multiple service providers result in better connectivity.

Building and structure

Building materials: Durable, fire-resistant materials like concrete, steel, and specialized wall panels are commonly used.

Structure: While single-story DCs are typical, multi-story designs are emerging in areas with land-use restrictions or high real-estate costs.

Ceiling height: A ceiling height of 12 to 18 feet is necessary for raised floors, overhead cable trays, and air conditioning systems, allowing room for equipment and maintenance.

Load capacity: Floors must support heavy server racks, cooling systems, and UPS systems, with a capacity typically ranging from 150 to 300 pounds per square foot.

Layout: The arrangement of columns and partition walls impacts space utilization, airflow, power distribution, and equipment maintenance.

Types and functions of DCs

DCs vary in design and construction based on factors like scale, usage, ownership, and location. Common types include:

Enterprise DC: Owned and operated by a single enterprise to support its own services and apps, tailored to its needs.

Hosting (colocation) DC: Offers shared infrastructure where multiple tenants rent space and resources (such as power and cooling) to house their own IT equipment.

Hyperscale DC: Large centralized facilities serving CSPs and major internet companies.

Edge DC: Smaller facilities built on distributed architecture, located closer to end users or data sources to reduce latency and enhance performance.

Containerized DC: Modular and portable facilities housed in containers, allowing for quick deployment; also known as micro DCs.

AI DC: Specialized facilities optimized for AI workloads, featuring high-performance computing, GPUs, and advanced cooling systems.

 

Source: DC O&M and management
