Hyperscalers continue to expand their network capacity to meet growing bandwidth demands. One area of focus is evolving their internal network infrastructure to handle the growing internal data traffic that supports information collection, analysis, and content transfer. Another is minimizing network hops to support latency-sensitive cloud-based applications. During the ongoing COVID-19 pandemic, some network operators have reported surges in bandwidth demand as more of the population has moved to online meetings and cloud services, which in turn has driven an increase in traffic between data centers and throughout the access network.

These examples illustrate the important role that interconnects, which make up the data center network infrastructure, play in a hyperscaler's network evolution. Recently introduced 400G switches and pluggable optical modules give hyperscalers new tools to transform how data center networks are architected, with an anticipated impact comparable to the introduction of 10G and, later, 100G solutions. These 400G solutions are designed to let network operators address increasing bandwidth demand through a simplified network architecture, targeting a reduction in both capex and opex.

Figure 1: Data center network operators can scale up DCI bandwidth with 400G 400ZR and OpenZR+ solutions.

High-capacity switch and router platforms with 400 Gigabit Ethernet ports are transforming hyperscale data center networks by enabling higher switching capacity (using 12.8/25.6Tbps ASICs). Recently introduced 400ZR and OpenZR+ coherent optical modules in QSFP-DD and OSFP form factors are designed to plug into these ports. A network operator with a sizeable percentage of 400G optical Ethernet connections between switches/routers can use 400ZR modules for edge-network links shorter than 120km, while OpenZR+ modules address regional links longer than 120km. Network operators can plug these modules into ports alongside shorter-reach client optics modules.
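As a rough illustration of this reach-based choice, the selection logic could be sketched as follows. The 120km threshold comes from the description above; the function name and structure are hypothetical for illustration, not a product API:

```python
# Illustrative sketch only: choose a 400G coherent pluggable by link reach,
# using the ~120 km edge/regional threshold described above.
# The function name and structure are hypothetical, not a vendor API.

def select_module(link_km: float) -> str:
    """Return a suitable 400G coherent pluggable for a given link length."""
    if link_km <= 120:
        return "400ZR"     # edge/DCI links up to ~120 km
    return "OpenZR+"       # regional links beyond ~120 km

for km in (80, 450):
    print(km, "km ->", select_module(km))
```

In practice the decision also depends on factors such as amplification, fiber plant, and interoperability requirements, so a reach threshold alone is a simplification.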

New deployments can leverage the ability to plug both transport (400ZR/OpenZR+) and client optics into the same switch/router to support an IP-over-DWDM (IPoDWDM) network architecture, in which switching is performed at the IP layer rather than the optical transport layer. An IPoDWDM network reduces cost per bit as well as operational overhead, since a separate transport platform layer is not required and network management can be consolidated. Eliminating the separate transport layer can also improve solution density and reduce power consumption by approximately 25%.
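As a back-of-envelope sketch of where that power saving comes from, consider one 400G link in each architecture. All wattage figures below are illustrative assumptions chosen only to show the shape of the comparison, not measured or vendor-specified values:

```python
# Back-of-envelope comparison of per-link power in the two architectures
# from Figure 2. Every wattage here is an assumed, illustrative value.

ROUTER_PORT_W = 25     # switch/router 400G port electronics (assumed)
CLIENT_OPTIC_W = 10    # short-reach gray client optic in the router (assumed)
TRANSPONDER_W = 20     # per-port share of a separate transport platform (assumed)
COHERENT_PLUG_W = 16   # 400ZR/OpenZR+ pluggable in the router port (assumed)

# Traditional: IP layer plus a separate DWDM transport layer.
traditional = ROUTER_PORT_W + CLIENT_OPTIC_W + TRANSPONDER_W

# IPoDWDM: the coherent pluggable sits directly in the router port.
ipodwdm = ROUTER_PORT_W + COHERENT_PLUG_W

savings = 1 - ipodwdm / traditional
print(f"per-link power reduction: {savings:.0%}")  # ~25% with these assumptions
```

The exact percentage depends entirely on the platforms involved; the point is simply that removing the transport platform's chassis, line cards, and client-side optics takes a meaningful slice out of the total.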

Figure 2: Two architectures to support 400G IP/Ethernet traffic over an optical infrastructure are (1) traditional separation of the IP/Ethernet layer from the DWDM optical transport layer (top) or (2) IP-over-DWDM using 400ZR or OpenZR+ modules that plug directly into the switches/routers (bottom).

Plugging transport optics in client pluggable form factors directly into routers/switches is not an entirely new concept. What makes 400ZR/OpenZR+ different from earlier 100G solutions (besides the 4x capacity increase) is longer reach via coherent transmission, plus wavelength tunability, which eases deployment and reduces the number of spares operators must stock.

Legacy architectures that use a separate DWDM optical transport platform with a modular design (via line-cards or sleds) can be designed with an upgrade path to support these new 400G interfaces. Ethernet-centric ports can then be economically optimized using pluggable 400ZR or OpenZR+ modules.


Figure 3: Acacia 400G pluggable coherent optical modules supporting 400ZR and OpenZR+ (QSFP-DD on left, OSFP on right).

Some hyperscalers may find it necessary to maintain an IP layer separate from the optical transport layer, especially to support legacy infrastructure. Others, without legacy infrastructure to support, may prefer IPoDWDM to reduce the amount of equipment they must manage as their networks scale.

To enable wide adoption of 400ZR, these modules should be designed for volume production. However, packaging coherent optics into the QSFP-DD/OSFP form factors is challenging. Complying with these compact mechanical designs while meeting specifications for performance, power consumption, and cost requires focus on three important areas: the DSP, optical/electrical component consolidation, and high-density packaging.

Acacia’s 3D Siliconization follows the example of the electronics world, applying integration and co-packaging techniques such as 3D stacking. Its advantages include fewer electrical interconnects while preserving robust signal integrity, and the use of silicon photonics to leverage semiconductor fabrication processes suited to volume production and high yields.

After much anticipation, the curtain has been drawn open. Entering onto the stage…400G pluggable coherent transceiver modules! Recently introduced 400G solutions, such as Acacia’s 400ZR and OpenZR+ pluggable coherent optical modules, are designed to bring about another transformative step in optical networking for data center interconnects.

Stay tuned for our next 400G blog, where we will go into more detail on the applications driving OpenZR+ requirements.