Fiber Capacity Mining, Then and Now

By Acacia | Posted on August 28, 2019


Introduction

Ever since the invention of single-mode fiber optic cable decades ago, the industry has continued to develop new ways of increasing the amount of data that can be transmitted over an optical fiber link. Single-wavelength, on-off-keying (OOK) modulation, in which laser light is turned on and off to represent a digital “1” and “0”, offered significant improvements over electrical transmission, but used only a fraction of the fiber optic cable’s capability. Two developments have dramatically improved fiber utilization: (1) the simultaneous transmission of multiple lasers of different wavelengths over a single fiber, a technique called wavelength division multiplexing (WDM), and (2) coherent transmission using digital signal processors (DSPs) to more efficiently modulate and detect multiple levels in both phase and amplitude of laser light on two polarizations, resulting in increased spectral efficiency. More recently, coherent transmission shaping techniques have pushed capabilities still closer to the maximum theoretical transmission capacity per channel, referred to as the Shannon limit.
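
As a rough illustration of the Shannon limit mentioned above, the sketch below evaluates the theoretical capacity of an additive-white-Gaussian-noise channel, C = B·log2(1 + SNR), doubled for the two polarizations used in coherent transmission. The 50GHz bandwidth and 15dB SNR figures are illustrative assumptions, not measured values.

```python
import math

def shannon_capacity_gbps(bandwidth_ghz: float, snr_db: float, polarizations: int = 2) -> float:
    """Theoretical capacity C = B * log2(1 + SNR), scaled by the polarization count.

    Illustrative only: real links fall short of this bound due to implementation
    penalties, fiber non-linearities, and FEC overhead.
    """
    snr_linear = 10 ** (snr_db / 10)
    return polarizations * bandwidth_ghz * math.log2(1 + snr_linear)  # Gbps

# Hypothetical example: a 50GHz channel at 15dB SNR on two polarizations.
print(f"{shannon_capacity_gbps(50, 15):.0f} Gbps upper bound")
```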

This white paper reviews the technological advancements that have played a role in increasing the capacity of information that can be transmitted over a single mode fiber link. It also discusses how parameters in coherent transmission such as modulation order, baud rate, and transmission shaping determine overall fiber capacity.

Historical Perspective

The Gray Period
The scientific and engineering breakthroughs that ushered in the first generation of long-distance fiber optic transmission included (but were not limited to) the ability to reliably manufacture single-mode lasers with a transmission wavelength that coincided with low-loss wavelength windows of SiO2 glass fiber, and the ability to reliably manufacture optical fiber that could contain single-mode light transmission within the fiber’s core structure.

Over three decades ago, terrestrial single-wavelength, single-mode optical transmission operated in the low gigabits-per-second (Gbps) range, with the transmission capacity defined by this single-wavelength data rate. Optical transceivers with this characteristic are called “gray optics” because they do not require light to be transmitted on a specific wavelength, or color. Distance extension was accomplished by means of optical-to-electrical-to-optical regeneration. Initial deployments, which utilized this method of regeneration, operated at transmission speeds in the range of a few Gbps for terrestrial applications and a few hundred megabits per second (Mbps) for submarine applications.

EDFAs and DWDM
A major breakthrough in the 1980s that accelerated the deployment of optical networks occurred with the invention of the erbium-doped fiber amplifier (EDFA). The EDFA enabled wavelengths within the 1550nm low-loss window to be amplified in the optical domain, eliminating the cumbersome and costly need to electrically regenerate the optical signal. Because the EDFA was a broadband amplifier, multiple wavelengths within the window could be amplified with a single amplifier. Around the same time, laser designs and wavelength stabilization techniques were maturing to the point that multiple wavelengths could be densely packed within the same 1550nm window. EDFAs and stable lasers brought dense wavelength division multiplexing (DWDM) into mainstream applications. The invention of the EDFA provided a tremendous boost to the adoption of DWDM for optical networks, especially submarine and long-haul networks and later metro networks. Commercial terrestrial DWDM deployments in 1995 consisted of eight wavelengths at 2.5Gbps, and by 1999, systems with 40 wavelengths at 10Gbps were being deployed1.

Multiple techniques were incorporated into the links to overcome various dispersion effects arising from the interaction of the laser light with the glass fiber, an interaction that limited the overall distance achievable in these amplified links. These techniques included dispersion-shifted fiber and passive dispersion compensators, to name a few. However, since the topic of dispersion is quite expansive and beyond the scope of this paper, we will not discuss it in detail.

The reach limitations imposed by fiber non-linearities, together with the electronic capabilities of directly modulated lasers and direct-detect receivers, meant that practical commercial OOK deployments topped out at 10Gbps per wavelength. Phase modulation was introduced in the 2000s to help commercial deployments push towards 40Gbps per wavelength using differential phase shift keying (DPSK) and differential quadrature phase shift keying (DQPSK). These schemes were able to leverage existing direct-detect receiver technology.

DWDM Standardization
To help advance the commercialization of DWDM systems, standards activity played a critical role. The international standards body, the International Telecommunication Union (ITU), released a frequency grid plan for DWDM transmission in 2002, known as Recommendation ITU-T G.694.1. This standard defined a grid with channel spacings of 100GHz, 50GHz, and 25GHz, with a granularity of 12.5GHz.
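
For illustration, the fixed grids in ITU-T G.694.1 are defined relative to an anchor frequency of 193.1THz, with channels spaced at integer multiples of the chosen grid spacing. The short sketch below is an illustration of that relationship, not normative text from the Recommendation.

```python
def dwdm_center_frequencies_thz(spacing_ghz: float, count: int, anchor_thz: float = 193.1):
    """Center frequencies on the ITU-T G.694.1 fixed grid: 193.1THz + n * spacing.

    Returns `count` channels on either side of the anchor, for illustration.
    """
    return [anchor_thz + n * spacing_ghz / 1000.0 for n in range(-count, count + 1)]

# Example: five 100GHz-spaced channels centered around the 193.1THz anchor.
for f in dwdm_center_frequencies_thz(100, 2):
    print(f"{f:.3f} THz")
```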

The grid spacing that initially became widely used was the 100GHz grid. This enabled lasers and optical passive filters to be manufactured in volume with ample margin while still adhering to the grid. A standardized DWDM grid enabled network operators to design an optical network topology that allowed traffic to be dropped and added at various network nodes. These nodes relied on fixed optical add-drop multiplexers (FOADMs) to accomplish these tasks. Changes in network topology required manual intervention (“truck rolls”) to modify the FOADM fiber connections or the transmission sources. Later, with tunable lasers and tunable optical filters in front of the receiving detectors, these changes could be performed remotely via software commands.

Improvements in laser stability, along with optical filtering technology (to increase isolation between adjacent DWDM channels), led to deployments using the tighter 50GHz grid. This not only doubled the number of DWDM channels compared to the 100GHz grid, but also provided a way to upgrade an optical network from a 100GHz to a 50GHz grid as bandwidth demand grew, by pre-planning the installation of FOADMs and interleavers.
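
A back-of-the-envelope count shows why the move from a 100GHz to a 50GHz grid doubles the channel count. The roughly 4.4THz of usable C-band spectrum assumed below is an approximation for illustration only.

```python
def channel_count(usable_band_ghz: float, spacing_ghz: float) -> int:
    """Number of DWDM channels that fit in the usable band at a given grid spacing."""
    return int(usable_band_ghz // spacing_ghz)

C_BAND_GHZ = 4400  # assumed usable C-band spectrum (~1530-1565nm), illustrative

for spacing in (100, 50):
    print(f"{spacing}GHz grid: {channel_count(C_BAND_GHZ, spacing)} channels")
# Halving the grid spacing doubles the number of channels: 44 -> 88 in this example.
```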


Figure 1. Illustrations of 100GHz channels and 50GHz channels adhering to the ITU DWDM grid.

 

The Rise of Coherent and Flexible Grid WSS Technology
Let’s return to the progress of single-wavelength transmission before going back to DWDM. Another confluence of technology advancements in the 2010s brought coherent optical transmission into the mainstream. Aided by advancements in CMOS DSP technology to implement complex detection and error-correction algorithms, which mitigated dispersion effects using signal processing, transmission over long reaches at >100Gbps with modulation schemes much more complex than OOK, DPSK, and DQPSK became achievable.

Early coherent solutions utilized fixed modulation order and fixed baud rate transmission. With advancements in DSP technology, software programmable modulation orders and baud rates became achievable, providing the ability to address multiple applications with common hardware, referred to as multi-haul solutions. Modulating via QPSK (2 bits mapped into a symbol), 8QAM (3 bits mapped into a symbol), 16QAM (4 bits mapped into a symbol), and higher orders provided a means to increase the amount of data that could be transmitted. In addition, the rate at which these symbols were transmitted (aka baud rate) could also be selected. Selectable modulation and baud rate were steps towards achieving a basic level of capacity and spectral optimization over a coherent optical channel.
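
As a simple illustration of how modulation order and baud rate combine, the sketch below computes the raw line rate of a dual-polarization coherent signal as baud rate × bits-per-symbol × 2 polarizations. The 15% allowance for FEC and framing overhead is an assumed placeholder, not a value from any specification.

```python
def line_rate_gbps(baud_gbaud: float, bits_per_symbol: float,
                   polarizations: int = 2, overhead: float = 0.15) -> float:
    """Approximate net data rate: baud * bits/symbol * polarizations, less overhead.

    `overhead` (FEC + framing) is an illustrative assumption; real values vary.
    """
    raw = baud_gbaud * bits_per_symbol * polarizations
    return raw * (1 - overhead)

# Hypothetical examples at 32Gbaud, dual polarization:
print(line_rate_gbps(32, 2))  # QPSK  -> ~109 Gbps net (a "100G" wavelength)
print(line_rate_gbps(32, 4))  # 16QAM -> ~218 Gbps net (a "200G" wavelength)
```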

Early coherent transmission was able to co-exist on legacy 100GHz line systems (with some limitations due to detrimental interactions between OOK and coherent transmissions over a common amplified link). The spectral width of coherent 100G QPSK modulation fit within a 50GHz channel. However, just as increased bandwidth demands had driven a need to increase the number of DWDM channels on a fiber link, continuing growth in bandwidth demand drove coherent modulation requirements towards higher capacities and longer reaches. This meant that a 50GHz channel was no longer wide enough to support every coherent modulation format.

As previously mentioned, early DWDM networks relied on FOADMs to add or drop wavelengths at network nodes. The advent of wavelength selective switch (WSS) technology ushered in the era of reconfigurable optical add/drop multiplexers (ROADMs), allowing wavelengths to be dropped or added via software control. Another benefit of WSS technology was that it enabled line systems to create a tunable/flexible DWDM grid rather than a fixed 50GHz or 100GHz grid. This allowed coherent transmissions with different spectral widths to coexist on the same fiber, providing a tremendous level of network flexibility and future proofing.


Figure 2. (a) Example of flexible grid, (b) 75GHz channels.

In response to a need for additional network planning options beyond a fixed-grid system, the ITU revised ITU-T G.694.1 to include a flexible grid framework (Figure 2a). More recently, in 2017, the Optical Internetworking Forum (OIF) proposed a framework to aid in the development of coherent DWDM transmission with flexible characteristics, which included the adoption of 37.5GHz, 50GHz, 62.5GHz, and 75GHz channels. Although the OIF’s separate 400ZR effort, which standardizes 400Gbps data center interconnects using coherent 16QAM, was not an application focus at the time, 75GHz channels turn out to be a good fit for this application (Figure 2b). As 400G transmission becomes more widely deployed in data center and carrier networks, the share of 75GHz channels may increase, affecting network planning activities.
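
To see how a signal maps onto the flexible grid, the sketch below estimates occupied spectrum as baud rate × (1 + roll-off) plus a guard band, and rounds it up to the 12.5GHz slot granularity of the flexible grid. The roll-off, guard band, and the roughly 60Gbaud figure used for the 400G 16QAM example are assumptions for illustration, not values taken from the ITU or OIF documents.

```python
import math

SLOT_GRANULARITY_GHZ = 12.5  # flexible-grid slot width granularity

def required_channel_ghz(baud_gbaud: float, rolloff: float = 0.05,
                         guard_ghz: float = 5.0) -> float:
    """Smallest flexible-grid channel (multiple of 12.5GHz) that holds the signal.

    Occupied spectrum is approximated as baud * (1 + rolloff) plus a guard band;
    the roll-off and guard-band values are illustrative assumptions.
    """
    occupied = baud_gbaud * (1 + rolloff) + guard_ghz
    slots = math.ceil(occupied / SLOT_GRANULARITY_GHZ)
    return slots * SLOT_GRANULARITY_GHZ

print(required_channel_ghz(32))   # ~32Gbaud QPSK  -> 50.0 GHz
print(required_channel_ghz(60))   # ~60Gbaud 16QAM -> 75.0 GHz (400G-class signal)
```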

Coherent Shaping
Another advancement in coherent transmission was the introduction of transmission shaping solutions, such as Acacia’s 3D Shaping2, which address a wide range of multi-haul applications and allow granular control of transmission characteristics to improve performance, reach, and capacity.

Higher modulation orders (more bits per symbol) provide increased capacity at the expense of reach, while lower modulation orders (fewer bits per symbol) provide reduced capacity with farther reach. Integer (quantized) bits-per-symbol steps such as QPSK, 8QAM, and 16QAM may result in sub-optimal capacity utilization due to gaps in link margin. Granular modulation techniques such as 3D Shaping enable non-integer bits-per-symbol modes, which help to close these capacity gaps.
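
The capacity gaps left by integer bits-per-symbol steps can be seen with a short calculation. The 64Gbaud rate and the hypothetical 3.5 bits-per-symbol shaped mode below are illustrative assumptions, not specific 3D Shaping mode definitions.

```python
BAUD_GBAUD = 64      # assumed symbol rate
POLARIZATIONS = 2    # dual-polarization transmission

def raw_rate_gbps(bits_per_symbol: float) -> float:
    """Raw (pre-FEC) dual-polarization line rate for a given bits-per-symbol."""
    return BAUD_GBAUD * bits_per_symbol * POLARIZATIONS

# Integer steps leave coarse jumps in capacity ...
for name, bps in [("QPSK", 2), ("8QAM", 3), ("16QAM", 4)]:
    print(f"{name:6s} {raw_rate_gbps(bps):.0f} Gbps")

# ... while a shaped, non-integer mode (hypothetical 3.5 bits/symbol) lands in
# between, letting the link carry whatever its margin actually supports:
print(f"shaped {raw_rate_gbps(3.5):.0f} Gbps")
```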

Using 3D Shaping, the probability and location of coherent constellation points can be adjusted to optimize for reach and capacity. 3D Shaping also provides the ability to fine-tune the transmission spectrum using Adaptive Baud Rate, reducing stranded spectrum to help increase capacity utilization over a fiber link. This continuously variable method of adjusting baud rate allows for more efficient optimization of a channel by filling up the channel spectrum, since the spectrum of the transmission varies proportionally with baud rate. By adjusting the baud rate to occupy the maximum spectrum supported by the channel, network operators can either increase the capacity of the channel for a given reach, or achieve greater reach at a given data rate by operating at a lower modulation order. Modulations with fixed baud rates (whether high or low) leave channel spectrum unused, wasting fiber capacity.
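
One way to picture adaptive baud rate is as the inverse of the slot-sizing calculation shown earlier: given the channel width, solve for the largest baud rate whose spectrum still fits, then take the capacity that follows. The roll-off, guard band, and bits-per-symbol values below are assumed for illustration and are not Acacia mode parameters.

```python
def max_baud_gbaud(channel_ghz: float, rolloff: float = 0.05,
                   guard_ghz: float = 5.0) -> float:
    """Largest symbol rate whose spectrum, baud * (1 + rolloff) + guard, fits the channel."""
    return (channel_ghz - guard_ghz) / (1 + rolloff)

def channel_capacity_gbps(channel_ghz: float, bits_per_symbol: float) -> float:
    """Raw dual-polarization capacity when the baud rate is adapted to fill the channel."""
    return max_baud_gbaud(channel_ghz) * bits_per_symbol * 2

# A fixed 60Gbaud signal in a 75GHz channel strands a few GHz of spectrum, while
# adapting the baud rate up to ~66.7Gbaud recovers it (illustrative numbers):
print(60 * 4 * 2)                           # fixed baud, 16QAM -> 480 Gbps raw
print(round(channel_capacity_gbps(75, 4)))  # adaptive baud, 16QAM -> ~533 Gbps raw
```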

An understandable misperception is that overall fiber capacity can always be increased simply by increasing the baud rate of each transmission in the DWDM system. While this might be true when spectral gaps exist between the transmission spectrum and a fixed channel window (e.g., when using 100GHz channel spacing with 32Gbaud or 64Gbaud transmission), it is not the case when the transmission spectrum is tightly packed. In the latter case, increasing the transmission baud rate has no impact on total fiber capacity because proportionally fewer channels can be supported within the fiber.
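
This follows directly from a total-capacity calculation: total fiber capacity is roughly spectral efficiency × usable spectrum, so doubling the per-channel baud rate halves the channel count and leaves the total unchanged. The C-band width and roll-off used below are illustrative assumptions.

```python
C_BAND_GHZ = 4400   # assumed usable spectrum, illustrative
ROLLOFF = 0.05      # assumed spectral roll-off

def total_fiber_capacity_gbps(baud_gbaud: float, bits_per_symbol: float) -> float:
    """Total capacity when channels are packed edge-to-edge: channels * per-channel rate."""
    channel_width = baud_gbaud * (1 + ROLLOFF)       # tightly packed, no stranded spectrum
    channels = int(C_BAND_GHZ // channel_width)
    per_channel = baud_gbaud * bits_per_symbol * 2   # dual polarization, raw
    return channels * per_channel

# Doubling the baud rate halves the channel count; total capacity stays essentially flat:
print(total_fiber_capacity_gbps(32, 4))   # 130 channels x 256 Gbps ~ 33.3 Tbps
print(total_fiber_capacity_gbps(64, 4))   #  65 channels x 512 Gbps ~ 33.3 Tbps
```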

Conclusion

Benefiting from the multiple technology advances described above, the amount of information that can be transmitted across a single optical fiber has increased by a factor of more than 10,000 since the 1980s. The latest transport solutions offer exceptional performance and adjustable transmission features that approach the practical limits of fiber capacity. Network operators are looking to address their specific traffic requirements in the most cost-effective manner possible. Addressing these requirements will require further advances in integration, packaging, automation, scale, and flexible architectures.

References

1P. Winzer, D. Neilson, and A. Chraplyvy, “Fiber-optic transmission and networking: the previous 20 and the next 20 years [Invited],” Optics Express, vol. 26, pp. 24190-24239, September 2018.

2Acacia Communications (2018), “Network Optimization in the 600G Era,” Retrieved from: https://acacia-inc.com/wp-content/uploads/2018/12/Network-Optimization-in-the-600G-Era-WP1218.pdf

 
