Integrating optics into the same package as switching ASICs improves signal integrity and increases data rates, but challenges remain. Near-packaged optics could emerge as an interim solution to the problem.
The constant need for more throughput in data centers pushes engineers to develop ever-faster optical and electrical links. In addition to designing for more speed, engineers must optimize these links for physical space, power consumption, cost, reliability, and scalability.
Traditionally, most data-center traffic moved into and out of the data center (north-south traffic). A shift toward distributed computing, however, is increasing server-to-server traffic (east-west traffic). As east-west traffic grows exponentially, unprecedented levels of data-center traffic threaten to outpace the development of new switches. Co-packaged optics can help mitigate signal-integrity and power-consumption problems, but it introduces new test issues of its own.
At the heart of a switch lies a specialized application-specific integrated circuit (ASIC) capable of terabits-per-second throughput. Previously, most of these ASICs were developed in-house by the switch manufacturers. That paradigm has, however, shifted with the rise of merchant silicon — ASICs developed by third-party silicon vendors and sold to switch manufacturers for final-product integration.
Data center optics
In traditional switches, the switching ASIC drives data over multiple channels across the printed-circuit board (PCB) to ports on the switch chassis' front panel. The ports and their pluggable modules have evolved alongside the switching silicon by increasing the speed or the number of channels per link. As Figure 1 shows, throughput per port has grown exponentially from the original 1 Gb/sec small form-factor pluggable (SFP) links to the latest quad small form-factor pluggable double-density (QSFP-DD) modules, with QSFP-DD 800 supporting up to 800 Gb/sec. Modules with copper cabling, otherwise known as direct-attach copper (DAC), can connect switches to one another. Unfortunately, copper cannot handle the speeds and distances necessary for most data-center communication. Instead, data centers leverage fiber-based optical interconnects between switches, which preserve signal integrity over long distances with the added benefits of lower power consumption and better noise immunity.
Fiber cabling requires transceiver modules in the switch ports to convert signals from the electrical domain of the switching silicon to the optical domain of the cabling and back. Figure 2 shows a conventional transceiver with two key components: the transmit optical subassembly (TOSA) handles the electrical-to-optical conversion, while the receive optical subassembly (ROSA) handles conversion in the opposite direction. The copper fingers of the transceiver plug into the switch, while an optical connector plugs into the other end. The optical connectors themselves come in a separate variety of form factors and variants. Multi-source agreement (MSA) groups work to ensure standardization and interoperability between vendors as new transceivers and cable technologies enter the market.
SerDes
The path between the pluggable transceiver and the ASIC consists of copper-based serializing and deserializing (SerDes) circuitry. As the switching silicon scales, the copper interconnects must equally scale, which switch vendors achieve by increasing either the number or speed of SerDes channels. The highest-bandwidth switch silicon today supports 51.2 Tb/sec, which manufacturers accomplished by doubling the number of 100 Gb/sec PAM4-modulated SerDes lines from 256 to 512.
If a 51.2 Tb/sec ASIC serves a front panel of 16 ports, the switch requires a 3.2T link at each port to fully utilize the provided switching capacity. While today’s highest-bandwidth pluggable implementations provide 800 Gb/sec per port, standards groups are actively working to expand the capacity of these links through channel density and speed (e.g., 16 channels at 200 Gb/s to reach 3.2T). Table 1 shows the progression of symbol rate, data rate, channels, and aggregate capacity over the years.
| Year | 2010 | 2012 | 2014 | 2016 | 2018 | 2020 | 2022 | 2024 (predicted) |
|---|---|---|---|---|---|---|---|---|
| SerDes symbol rate (GBd) | 10 | 10 | 25 | 25 | 25 | 50 | 50 | 100 |
| Modulation type | NRZ | NRZ | NRZ | NRZ | PAM4 | PAM4 | PAM4 | PAM4 |
| SerDes data rate (Gb/sec) | 10 | 10 | 25 | 25 | 50 | 100 | 100 | 200 |
| Number of SerDes channels | 64 | 128 | 128 | 256 | 256 | 256 | 512 | 512 |
| Total capacity (Tb/sec) | 0.64 | 1.28 | 3.2 | 6.4 | 12.8 | 25.6 | 51.2 | 102.4 |
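To make the relationship between symbol rate, modulation, lane count, and aggregate capacity concrete, here is a minimal Python sketch that reproduces the arithmetic behind Table 1 and the 16-port front-panel example above. The function names and structure are illustrative only, not taken from any standard or product.

```python
# Illustrative sketch: reproduces the arithmetic behind Table 1 and the
# 51.2 Tb/sec front-panel example; names and structure are purely illustrative.

def lane_rate_gbps(symbol_rate_gbd: float, bits_per_symbol: int) -> float:
    """Data rate of one SerDes lane = symbol rate x bits per symbol (NRZ = 1, PAM4 = 2)."""
    return symbol_rate_gbd * bits_per_symbol

def aggregate_capacity_tbps(num_lanes: int, lane_gbps: float) -> float:
    """Total switch capacity in Tb/sec for a given lane count and per-lane data rate."""
    return num_lanes * lane_gbps / 1000

# 2022 column of Table 1: 512 lanes of 50-GBd PAM4, i.e., 100 Gb/sec per lane
lane = lane_rate_gbps(50, 2)                      # 100 Gb/sec
capacity = aggregate_capacity_tbps(512, lane)     # 51.2 Tb/sec

# Spread across a 16-port front panel, each port must carry 3.2 Tb/sec,
# which 16 channels at 200 Gb/sec would satisfy.
per_port_tbps = capacity / 16                     # 3.2
channels_per_port = per_port_tbps * 1000 / 200    # 16.0
print(capacity, per_port_tbps, channels_per_port)
```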
Research is underway into 224 Gb/sec technology, which yields a 200 Gb/sec SerDes data rate and would enable 1.6 Tb/sec interfaces at the front panel. With increased speed, however, comes the added challenge of more complex signal-transmission methods and higher power consumption per bit. Even so, evolutions such as the move toward 224 Gb/sec have helped avoid the ominous predictions made in the early 2010s, which forecasted power consumption skyrocketing in step with data-center traffic. Figure 3 shows how competing factors have kept data-center power consumption relatively steady.
Along with updates to network layouts and cooling systems, the increased data rate of a single switch allowed for fewer devices, thus reducing the footprint and overall power consumption. Technology experts suggest that we are approaching a physical limit on copper channel data rates within the existing server form factor. While breakthroughs in interconnect technology have supported scaling to 800 Gb/sec and 1.6 Tb/sec links, driving beyond these data rates will require a fundamental change in switch design.
On-board optics to co-packaged optics
As SerDes speed and density continue to increase, so does the power required to drive and preserve signals across PCBs. A large component of the power increase comes from the additional retimers needed to ensure proper data recovery at the receiver. To reduce switch power, research groups and standards bodies have been pursuing ways to shorten the copper distance over which the signal must travel.
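To see roughly how electrical link power scales with lane count and data rate, the sketch below multiplies an assumed energy-per-bit figure by the aggregate throughput. The pJ/bit values are hypothetical placeholders chosen for illustration, not measured figures for any particular device or channel.

```python
# Back-of-the-envelope sketch: link power ~ (energy per bit) x (data rate).
# The pJ/bit figures below are assumed for illustration; real values depend on
# channel loss, the number of retimers, and the silicon process.

def link_power_w(energy_pj_per_bit: float, data_rate_gbps: float) -> float:
    """pJ/bit x Gb/sec = mW, so divide by 1000 to get watts per lane."""
    return energy_pj_per_bit * data_rate_gbps / 1000

num_lanes = 512
lane_rate_gbps = 100

long_reach_w = num_lanes * link_power_w(5.0, lane_rate_gbps)   # assumed retimed PCB channel: ~256 W
short_reach_w = num_lanes * link_power_w(2.0, lane_rate_gbps)  # assumed shortened co-packaged channel: ~102 W
print(long_reach_w, short_reach_w)
```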
The Consortium for On-Board Optics (COBO) represents the earliest collaboration to move the signal conversion away from the front panel and closer to the ASIC. On-board optics (OBO) move the primary functionality of the pluggable transceiver to a module on the switch's PCB, shortening the copper channel that electrical signals must cross. This methodology relies on recent advancements in silicon photonics, in which optical functions are built into the die-fabrication process. Silicon photonics enables more compact conversion options in the form of optical engines (OEs), which cost less and use less power than conventional pluggable transceivers. While these improvements help reduce the copper channel length, they have not yet outweighed the complications introduced by deviating from the industry-standard pluggable architecture. As a result, the industry may leapfrog OBO in favor of a more advanced form of integration.
Beyond OBO, the terminology becomes debatable. The term near-packaged optics (NPO) describes optical engines placed on the PCB or interposer along the boundary of the switching silicon's substrate. This method shortens the electrical channel even further than OBO but still requires significant power to reliably drive the SerDes signals. Near-packaged optics could prove to be a valuable transitional step, as it provides significant signal benefits and reuses existing silicon designs, at the modest price of requiring a new approach to relaying optical signals from the front panel to the optical engine.
Co-packaged optics (CPO) is a design approach that integrates the optical engine and switching silicon onto the same substrate without requiring the signals to traverse the PCB. The levels of integration between the optical and electrical functions of the package exist on a spectrum, some of which appear in Figure 4.
For instance, some devices leverage a 2.5D co-packaging strategy, which places the optical engines on the same substrate as the ASIC with millimeter-scale connections between the two. Some manufacturers use a 2.5D chiplet integration scheme to provide flexible interface options for the silicon (e.g., mixed use of co-packaged optics and pluggable transceivers), as demonstrated in Broadcom's Tomahawk 5 co-packaged optics switch. Even more integrated co-packaged optics designs are still nascent, including:
- Direct-drive configurations where the digital signal processing migrates from the optical engine to the ASIC
- 3D (stacked) integration between the optical and electrical functions
- Integration of the driving lasers into the package
- Fully integrated monolithic electro-photonic ICs
Impact on test
While technologies such as CPO and NPO reduce the span over which electrical signals must travel, providing both signal integrity and power consumption benefits, the interoperability requirement remains. That is, the optical data signal from the transmitter must traverse an optical channel and be correctly received at the other end by a receiver that may have been produced by another vendor. The signals will likely need to comply with specifications such as those developed in IEEE 802.3.
A key difference from testing pluggable optics is the difficulty and expense of correcting a problem once the CPO/NPO has been integrated into the switch: the CPO/NPO cannot simply be swapped out like a pluggable module. Test strategies must evolve not only to verify signal performance for compliance but also to identify problems early in the manufacturing process, with additional testing to ensure long-term reliability. While the electrical path to the CPO/NPO covers shorter distances, the high symbol rates still require careful design, validated with the same test-and-measurement methods employed for classic chip-to-module interfaces.
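As a simple illustration of catching problems early, here is a minimal sketch of a per-lane pre-FEC bit-error-ratio screen of the kind a manufacturing test flow might run. The 2.4e-4 limit and the lane data are assumptions chosen for illustration; actual limits come from the applicable IEEE 802.3 clause or MSA for the interface under test.

```python
# Minimal pass/fail sketch for a per-lane pre-FEC BER screen during manufacturing test.
# The limit below is an assumed example; real limits are set by the relevant
# IEEE 802.3 clause or MSA for the interface under test.

PRE_FEC_BER_LIMIT = 2.4e-4  # assumed illustrative threshold

def screen_lanes(measured_ber: dict[int, float], limit: float = PRE_FEC_BER_LIMIT) -> dict[int, bool]:
    """Return a pass/fail verdict for each lane's measured pre-FEC BER."""
    return {lane: ber <= limit for lane, ber in measured_ber.items()}

# Hypothetical measurements for a four-lane optical engine
verdicts = screen_lanes({0: 1.1e-5, 1: 3.0e-4, 2: 8.7e-6, 3: 5.2e-5})
failed = [lane for lane, ok in verdicts.items() if not ok]
print(verdicts, failed)  # lane 1 exceeds the assumed limit
```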
Where from here?
While there are many paths to co-packaged optics, challenges around these new technologies work against rapid adoption. New technologies are rarely adopted overnight: the progression toward new standards requires a path for data centers to gradually upgrade or replace their infrastructure, while component manufacturers incorporate the new technology by iterating on existing designs.
Key challenges manufacturers and data centers face include:
- Development and standardization of new fiber-based front panel connections
- Silicon flexibility that allows for the co-existence of pluggable, on-board, near-packaged, and/or co-packaged optics
- Service and replacement of components
- Manufacture and yield of advanced packages
- Unproven return on investment
While these hurdles may ultimately outweigh the benefits of co-packaged optics, it is difficult to deny the possibilities created by moving in this direction. Whether or not co-packaged optics sees widespread adoption, the forecasted explosion in data traffic signals an approaching, and necessary, end to how we do things today and the start of a new approach to data-center interconnect technology.