Chris Nunn, National Instruments Corp.
High-speed serial is a growing technology aimed at reducing device footprint and boosting data communication rates. Digital communication buses began a drastic shift from parallel to serial formats in the early 2000s, and that transition has produced many of the technologies consumers take advantage of today, such as SATA, USB, and PCI Express.

There is a physical limitation on the clock rates of parallel buses at around 1 to 2 GHz, because skew between the individual clock and data lines causes bit errors at faster rates. High-speed serial buses instead send encoded data that carries both data and clocking information in a single differential signal, allowing engineers to avoid the speed limitations of parallel buses. Today, it is common to see high-speed serial links with data lanes running at 10 Gb/sec, and multiple serial lanes can be coherently bonded together to form communication links with even higher data throughput.
Serializing data reduces IC pin counts, and because the serial lanes operate at much faster clock speeds, the link achieves better data throughput than was possible with parallel buses.
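To see why skew becomes the limiting factor, it helps to compare the bit period (unit interval) against a fixed amount of lane-to-lane skew as the clock rate climbs. The short Python sketch below does that arithmetic; the 150-ps skew figure is purely illustrative and not taken from any standard.

# Compare the unit interval (UI) to an assumed lane-to-lane skew as rates rise.
ASSUMED_SKEW_PS = 150  # illustrative board-level skew, in picoseconds

for rate_gbps in (0.5, 1, 2, 10):
    unit_interval_ps = 1e3 / rate_gbps  # UI = 1 / line rate, in picoseconds
    fraction = 100 * ASSUMED_SKEW_PS / unit_interval_ps
    print(f"{rate_gbps:>4} Gb/s  UI = {unit_interval_ps:6.1f} ps  skew = {fraction:.0f}% of UI")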

Reduced pin count can ease design complexity, but the faster speeds required introduce additional design challenges. When data rates reach RF frequencies, the circuits handling the signals must be designed like RF transceivers to preserve signal integrity. To alleviate these signal-integrity issues, high-speed serial links implement techniques such as encoding, pre-emphasis, and equalization.

For a serial connection to work, each end must agree to operate within specific parameters. These parameters can be abstracted to multiple functional layers.
The lowest layer is the physical layer, which is responsible for successful transmission and recovery of 0s and 1s. Above this, the data-link layer is responsible for mapping the raw bits to meaningful data and for providing functions that let the physical layer transmit and receive successfully. Finally, upper layers above the physical and data-link layers can provide additional context with features such as error correction, packetizing, or data routing information.
The physical layer is responsible for ensuring electrical compatibility between transmitting and receiving devices and for presenting synchronously clocked bits to the data-link layer.
Different high-speed serial protocols define different requirements for the electrical interface of the transmitter and receiver. The electrical signal for high-speed serial links is differential: information is transmitted as two complementary signals (the original signal and its inverse), which form a differential pair, each traveling on its own conductor. This technique is necessary to achieve the extremely fast rise and fall times required at speeds above 1 Gb/sec, minimize electromagnetic emissions, and improve noise immunity by rejecting common-mode noise. Furthermore, peak-to-peak voltages rarely rise above 1 V at these speeds, and the electrical standards are typically low-voltage differential signaling (LVDS), emitter-coupled logic (ECL), or current-mode logic (CML).
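The short sketch below illustrates the common-mode rejection point: the same noise couples onto both legs of the differential pair, so subtracting the two legs at the receiver cancels it. The 400-mV swing and the noise amplitude are illustrative assumptions, not values from any electrical standard.

import random

def differential_receive(bit, common_mode_noise_v):
    swing_v = 0.4                                    # assumed single-ended swing
    v_pos = (swing_v if bit else 0.0) + common_mode_noise_v
    v_neg = (0.0 if bit else swing_v) + common_mode_noise_v
    return 1 if (v_pos - v_neg) > 0 else 0           # slice the difference only

bits = [random.randint(0, 1) for _ in range(8)]
noise = [random.uniform(-0.2, 0.2) for _ in bits]    # noise common to both legs
recovered = [differential_receive(b, n) for b, n in zip(bits, noise)]
assert recovered == bits                             # common-mode noise is rejected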
Another important feature of the physical layer for high-speed serial links is clock and data recovery (CDR): the ability of the receiving device to synchronize to the incoming data stream without a separate clock signal. This is done with help from the data-link layer, which ensures frequent bit transitions through encoding. Those transitions let phase-locked loop (PLL) and phase interpolator (PI) circuitry recreate the transmit clock and use it to capture the incoming data stream with minimal timing error.
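As a rough software analogy for CDR, the toy model below oversamples the incoming waveform, treats every data transition as a timing reference, and places its sample point half a unit interval later. Real receivers do this with PLL and phase-interpolator hardware, not a software loop, and the oversampling ratio here is an arbitrary choice.

OVERSAMPLE = 8  # samples per unit interval (arbitrary for this model)

def serialize(bits, oversample=OVERSAMPLE):
    # Ideal transmitter: hold each bit for one unit interval.
    return [b for b in bits for _ in range(oversample)]

def recover(samples, oversample=OVERSAMPLE):
    bits, next_sample = [], oversample // 2          # first sample point: mid-bit
    for i in range(1, len(samples)):
        if samples[i] != samples[i - 1]:             # transition = timing reference
            next_sample = i + oversample // 2        # re-center on the new edge
        if i == next_sample:
            bits.append(samples[i])
            next_sample += oversample                # free-run until the next edge
    return bits

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
assert recover(serialize(data)) == data              # data recovered with no clock line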

Equalization is the process of compensating for the channel's electrical behavior in an effort to boost the portions of the signal spectrum that the channel attenuates. This compensation may be applied on either the transmitter or receiver side of the communication link to improve link margin, but the term equalization typically refers to the receiver side.
As the high-speed serial signal travels over PCB traces, through connectors and cables, and into the receiver, attenuation does not affect all frequency components of the signal equally, and the result is signal distortion. The equalization settings on multi-gigabit transceivers (MGTs) can apply gain or attenuation to different frequency components of the signal before it is sampled, improving the signal and the link margin. Many MGTs feature auto-equalization that can automatically detect the ideal equalizer settings and continuously update them.
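As a rough illustration of receive-side equalization, the sketch below models the channel as leaking half of each symbol into the one that follows (post-cursor ISI) and uses a one-tap decision-feedback equalizer, one common receiver technique, to subtract that leakage before the slicer decides each bit. The channel coefficient and the tap value are assumptions chosen for the example.

import random

random.seed(0)
bits = [random.randint(0, 1) for _ in range(64)]
tx = [2 * b - 1 for b in bits]                       # map 0/1 to -1/+1 symbols

POST_CURSOR = 0.5                                    # assumed channel ISI coefficient
rx = [tx[i] + (POST_CURSOR * tx[i - 1] if i else 0.0) for i in range(len(tx))]

decisions, prev = [], 0.0
for sample in rx:
    corrected = sample - POST_CURSOR * prev          # cancel ISI from the last decision
    prev = 1.0 if corrected > 0 else -1.0            # slicer output feeds back
    decisions.append(int(prev > 0))

assert decisions == bits                             # ISI removed, data recovered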
Pre-emphasis is the term typically used for equalization on the transmitter side of a high-speed serial link. It is primarily used to overcome the analog challenges presented by inter-symbol interference (ISI): at fast line rates, transmitted data bits start to affect one another. Pre-emphasis pre-distorts the transmitted signal to counteract the degradation that the channel introduces.
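The sketch below shows the transmit-side counterpart: a two-tap FIR filter that enlarges the swing on bits that follow a transition (equivalently, de-emphasizes repeated bits) so that the channel's smearing largely cancels and the worst-case eye opens up. The 0.5 coefficients and the channel model are illustrative assumptions, not values from any protocol.

def pre_emphasize(symbols, k=0.5):
    # 2-tap transmit FIR: current symbol minus a fraction of the previous one.
    return [s - k * (symbols[i - 1] if i else 0.0) for i, s in enumerate(symbols)]

def channel(symbols, k=0.5):
    # Simple channel model: half of each symbol leaks into the next one.
    return [s + k * (symbols[i - 1] if i else 0.0) for i, s in enumerate(symbols)]

tx = [1, -1, -1, 1, 1, 1, -1]                        # -1/+1 symbols
plain = channel(tx)
shaped = channel(pre_emphasize(tx))

print("worst-case eye without pre-emphasis:", min(abs(s) for s in plain))    # 0.5
print("worst-case eye with pre-emphasis:   ", min(abs(s) for s in shaped))   # 0.75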

The data-link layer is responsible for data manipulation to improve signal integrity, ensure successful communication, and map physical bits to data. Features to enable this data manipulation include encoding schemes and control characters for alignment, clock correction, and channel bonding.
The goal of encoding is to guarantee frequent bit transitions for successful CDR and to ensure dc balance in the data. For successful CDR, the encoding scheme must provide enough transitions in the data signal for the CDR circuitry to remain phase-locked to the data stream. If the PLL inside the CDR circuitry cannot stay locked because of too few transitions, the receiver cannot guarantee synchronous clocking of the data bits, and bit errors or link failure will result. Guaranteeing the transmission of symbols with frequent bit transitions comes at the cost of overhead bits added to the data.
DC balance is also important for a functioning serial link: without it, the signal can drift away from its ideal logic high and low levels, and bit errors can arise. DC balance is ensured by balancing the number of 1s and 0s in the transmitted symbols so that, over time, the counts of 1s and 0s are statistically equal. Common encoding schemes include 8b/10b (which maps 8-bit data to 10-bit symbols in the interest of more transitions and dc balance), 64b/66b, and 128b/130b.
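The snippet below illustrates the two pieces of bookkeeping behind encoding: checking that a symbol stream stays dc balanced (equal counts of 1s and 0s) and computing the payload throughput left over after the encoding overhead. The 10-bit symbols and the 10-Gb/sec line rate are made-up examples; real 8b/10b encoders use lookup tables and running-disparity rules rather than this simple count.

def disparity(symbols):
    # Running (ones - zeros) count across a stream of bit strings.
    ones = sum(s.count("1") for s in symbols)
    zeros = sum(s.count("0") for s in symbols)
    return ones - zeros

stream = ["1010110010", "0101001101"]                # made-up 10-bit symbols
print("disparity:", disparity(stream))               # 0 means no dc drift over time

line_rate_gbps = 10.0                                # illustrative line rate
for name, data_bits, symbol_bits in (("8b/10b", 8, 10), ("64b/66b", 64, 66), ("128b/130b", 128, 130)):
    payload = line_rate_gbps * data_bits / symbol_bits
    print(f"{name:>9}: {payload:.2f} Gb/s of payload after encoding overhead")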
If the line rate and encoding scheme cannot deliver the required data throughput on a single serial lane, multiple lanes can be used. As an example, the HDMI standards use three serial data lanes to realize their overall data bandwidth.
When sending data across multiple lanes, differing propagation delays cause the data on each lane to arrive at the receiver at different times. Depending on the application, it might be necessary to align the data across all lanes at the receiver, a process known as channel bonding. The elastic buffer in each receive lane, which is also used for clock correction, is what makes channel bonding possible.
Channel bonding requires that a special control character be chosen and reserved. The serial link has one master lane, and the rest are considered slaves. The master and slaves all transmit the channel bonding character simultaneously. When the master receiver sees the channel bonding sequence at a certain location in its elastic buffer, all slaves are instructed to find their own bonding sequences, and the read pointer of each elastic buffer is adjusted by the offset of that lane's bonding sequence. Because each data lane applies its own offset to its own elastic buffer, the receiver reads from a different location in each buffer and the data comes out aligned.
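The sketch below mimics that procedure with three small elastic buffers skewed by different propagation delays: each lane locates the reserved bonding character, offsets its read pointer accordingly, and subsequent reads come out aligned. The character name and payload values are placeholders invented for the example.

lanes = {
    "lane0 (master)": ["d3", "K", "a", "b", "c", "d"],
    "lane1 (slave)":  ["d2", "d3", "K", "a", "b", "c"],
    "lane2 (slave)":  ["K", "a", "b", "c", "d", "e"],
}

# Each receiver finds the reserved bonding character in its own elastic buffer...
read_pointers = {name: buf.index("K") for name, buf in lanes.items()}

# ...and starts reading just after it, so every lane presents the same data.
aligned = {name: buf[read_pointers[name] + 1:read_pointers[name] + 4]
           for name, buf in lanes.items()}
assert all(data == ["a", "b", "c"] for data in aligned.values())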

One important control character is the idle character. The clock and data recovery circuitry can only remain phase-locked if the transmitter continuously sends bits, so when there is no data to send, an idle character must be sent. This is a control character defined by the protocol, and the receiver knows it is not true data.
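A minimal sketch of idle insertion, assuming a placeholder character name rather than one defined by any particular protocol, might look like this:

from collections import deque

IDLE = "IDLE"                                        # placeholder control character
tx_fifo = deque(["d0", "d1"])                        # application data waiting to go

def next_symbol():
    # Keep the link (and the receiver's CDR) alive even with nothing to send.
    return tx_fifo.popleft() if tx_fifo else IDLE

sent = [next_symbol() for _ in range(5)]             # ['d0', 'd1', 'IDLE', 'IDLE', 'IDLE']
received_data = [s for s in sent if s != IDLE]       # the receiver discards idles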
Additional upper layers sit above the data-link layer and let engineers tailor the communication to specific needs. Some protocols have set standards for upper layer features, while other protocols leave those layers up to the designer. Some common features could include error checking/correction, header information for packet-based communication, or even link status information.
The layers above the data-link and physical layers are the layers most commonly customized for specific applications. Examples of common upper-layer features include error detection and correction through cyclic redundancy checking (CRC) and forward error correction (FEC). At the cost of some data-transfer efficiency, these schemes can detect or correct bit errors.
CRC implements rules for determining whether bit errors occurred in transmission, but it cannot correct them; the application can decide whether it supports re-requesting the data. FEC, by contrast, embeds additional error-correction information in the transmitted data, letting the receiver recover from a limited number of bit errors.
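As a simple illustration of CRC-based detection, the sketch below appends a CRC-32 checksum from the Python standard library (standing in for whatever polynomial a given protocol actually specifies) and shows that a single flipped bit is detected but not corrected.

import zlib

payload = b"high-speed serial payload"
frame = payload + zlib.crc32(payload).to_bytes(4, "big")     # append a 32-bit CRC

def crc_ok(frame):
    data, received = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(data) == received

assert crc_ok(frame)                                 # clean frame passes the check
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
assert not crc_ok(corrupted)                         # a single bit error is detected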
The industry is continuously improving the fundamentals of high-speed serial, allowing faster and faster line rates and enabling the world of big data. One example of a recent advance is the shift to multilevel signaling such as PAM-4 and PAM-8 to enable faster data rates in the same channel bandwidth. NI is following the trends of the high-speed serial market and now offers high-speed serial transceivers on products ranging from digital functional testers and embedded stand-alone processing nodes to high-end RF testers such as the second-generation NI PXI Vector Signal Transceiver.