AI’s impact on coherent transport demand is indisputable, and the industry is preparing to deliver the bandwidth required for this new generation of computing. Large language model (LLM) version updates and higher-performing GPUs have been introduced, terminology such as neocloud provider and AI factory has become increasingly mainstream, and new entities are being formed worldwide to build the infrastructure for massive AI training facilities. In addition, multiple service providers that have traditionally offered wholesale bandwidth and fiber connectivity services have publicly announced deals to support AI connectivity growth. These deals specifically address connectivity between data center sites, with the AI connectivity opportunity not limited to hyperscaler sites but also including enterprise customer connectivity.
AI Driving Coherent Transport Growth
Short-reach IMDD-based (intensity modulation/direct detection) optical interconnects supporting AI training clusters continue to receive much of the attention around advances in optical connectivity technology such as co-packaged optics (CPO). While the industry has been evaluating CPO solutions for quite some time, AI cluster interconnects may be the application that propels these solutions into the mainstream market, much as data center interconnect requirements helped propel coherent pluggable optics toward widespread industry adoption. As mentioned in our previous blog, the ramp-up of AI networks and infrastructure has increased coherent transport traffic, contributing to current 400G coherent pluggable module adoption as well as expected 800G coherent module adoption.

Today, we continue to see AI driving coherent transport growth. Cignal AI recently reported that growth in 400G coherent pluggable shipments over recent quarters is being driven by data center interconnect (DCI) for AI data centers, with AI software being designed to span multiple physical locations. This distributed architecture is referred to as a scale-across network. Understanding how longer-reach coherent transport traffic is affected by the growth of AI applications can help shape future technology requirements for coherent optics.
AI Elements Filling the Coherent Transmission Pipe
AI-related applications for both consumers and enterprises continue to drive training demand, leading to either in-house AI training or GPU-as-a-service (GPUaaS) investments. The datasets that feed these training clusters can reach petabyte scale and need to be transported over campus, metro, regional, and long-haul networks. If these petabyte-sized datasets reside on an enterprise network and must be moved to an AI factory site, either multiple high-capacity links could be used or, if transfer time were not a factor, a single link could suffice. Lower-cost GPUaaS regions may require longer-distance dataset transmission, creating a trade-off between GPUaaS pricing and transport bandwidth pricing.
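To make the transfer-time trade-off concrete, a rough back-of-the-envelope calculation (illustrative numbers only; it ignores protocol overhead, FEC, and link utilization) shows why petabyte-scale dataset moves may justify multiple high-capacity links:

```python
# Rough sketch: time to move a petabyte-scale dataset over coherent links.
# Illustrative only -- ignores protocol overhead, FEC, and utilization.

def transfer_time_s(dataset_bytes: float, link_gbps: float, num_links: int = 1) -> float:
    """Seconds to move dataset_bytes over num_links parallel links of link_gbps each."""
    total_bps = link_gbps * 1e9 * num_links
    return dataset_bytes * 8 / total_bps

PETABYTE = 1e15  # bytes

# One 400G link versus four in parallel for a 1 PB dataset.
print(f"1 PB over 1 x 400G: {transfer_time_s(PETABYTE, 400) / 3600:.1f} h")
print(f"1 PB over 4 x 400G: {transfer_time_s(PETABYTE, 400, num_links=4) / 3600:.1f} h")
```

A single 400G link needs roughly five and a half hours for a petabyte; parallel links cut that proportionally, which is the crux of the transfer-time-versus-bandwidth-cost decision.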
Figure 2. Examples of drivers impacting coherent transport bandwidth demand between sites.
In addition to dataset traffic, inference models also need to be pushed closer to end users to reduce latency. To address this, cloud providers are offering inference-model hosting/caching as an evolving service. There have also been ongoing efforts to demonstrate distributed training across multiple physical sites to mitigate power availability constraints. While distributed training is less efficient than co-located training due to latency, the reality of power constraints will continue to push distributed training toward acceptable efficiency levels. Simultaneously, the Ultra Ethernet Consortium is working to optimize AI-centric traffic flows, leveraging the dominant Ethernet-based DCI transport infrastructure. Leveraging existing DCI infrastructure will be key to accelerating the expansion of distributed training architectures.
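To give a rough sense of the latency penalty behind these distance considerations, a quick sketch (assuming light in standard single-mode fiber travels at about c/1.47, roughly 4.9 µs per km one way; the distances are hypothetical examples):

```python
# Sketch of fiber propagation delay between sites -- assumes a group
# index of ~1.47 for standard single-mode fiber; distances are illustrative.

C = 299_792_458.0    # speed of light in vacuum, m/s
FIBER_INDEX = 1.47   # approximate group index of single-mode fiber

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over distance_km of fiber."""
    one_way_s = distance_km * 1e3 / (C / FIBER_INDEX)
    return 2 * one_way_s * 1e3

for label, km in [("campus (2 km)", 2), ("metro (80 km)", 80), ("long-haul (1000 km)", 1000)]:
    print(f"{label}: ~{round_trip_ms(km):.2f} ms RTT")
```

Even before any switching or queuing, a 1,000 km span adds nearly 10 ms of round-trip propagation delay, orders of magnitude above intra-cluster GPU interconnect latencies, which is why distributed training efficiency is so sensitive to site separation.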
All these activities provide ongoing growth avenues for coherent transport solutions. Acacia’s coherent modules are already being deployed in these AI build-outs, and customers are projecting increasing bandwidth requirements. These deployments will also drive technology advancements supporting higher coherent transmission baud rates as well as higher-capacity coherent links at shorter reaches.

