14.4 Terabits in a single rack unit?

Have switching ASICs gotten too fast?

Looking back at the last few years, it certainly appears that ethernet switching ASICs and front panel interface bandwidth are moving at different paces: a faster switching ASIC arrives just ahead of the ethernet interface speed and optic form factor needed to drive the full bandwidth the ASIC actually provides while still fitting into a 1RU top-of-rack ethernet switch or line card profile.

Current 6.4+ Tbps system-on-a-chip (SoC) based switching solutions have moved past the front panel interface bandwidth available inside a single rack unit (RU).  The QSFP28 (Quad SFP) form factor already occupies the entire front panel real estate of a 1RU switch at 32x100G QSFP28 ports, prompting switching vendors to release 2RU platforms in order to cram in 64x100G ports and fully drive the newest switching ASICs. With higher bandwidth switching ASICs on the near horizon, the industry clearly needs a higher ethernet interface speed and new form factors to address the physical real estate restrictions.

So where do we go from here?

First, let's look at the three dimensions at our disposal for scaling up interface bandwidth (a quick back-of-the-envelope sketch follows the list).

1.)  Increase the symbol rate per lane.

This means we need an advance in the actual optical components and thermal management used to deliver the needed increase in bandwidth in a power efficient manner.  Put more simply, in the words of a certain Evil Scientist who wakes up after being frozen for 30 years: “I’m going to need a better laser, okay”.

2.)  Increase the number of parallel lanes that the optical interface supports.

As an example, in the case of the 40Gbps QSFP form factor this meant running 4 parallel lanes of 10Gbps to achieve 40Gbps of aggregate bandwidth.

3.)  Stuff (encode) more bits into each symbol per lane by using a different modulation scheme.

For example, PAM4 encodes 2 bits per symbol, which effectively doubles the bit rate per lane and is the basis for delivering 50Gbps per lane and 200Gbps aggregate across 4 lanes.
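To see how these three dimensions multiply together, here is a quick back-of-the-envelope sketch in Python. The helper function name and the example values are illustrative only, using the lane rates mentioned above.

```python
# Quick sketch: aggregate interface bandwidth as the product of the three
# scaling dimensions discussed above. The helper name is illustrative only.

def interface_bandwidth_gbps(lanes, symbol_rate_gbd, bits_per_symbol):
    """Aggregate bandwidth = parallel lanes x symbol rate x bits encoded per symbol."""
    return lanes * symbol_rate_gbd * bits_per_symbol

print(interface_bandwidth_gbps(4, 10, 1))   # 40  -> 40G QSFP:    4 x 10GBd NRZ
print(interface_bandwidth_gbps(4, 25, 1))   # 100 -> 100G QSFP28: 4 x 25GBd NRZ
print(interface_bandwidth_gbps(4, 25, 2))   # 200 -> 200G:        4 x 25GBd PAM4 (2 bits/symbol)
```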

Looking Beyond QSFP28

Next, let's look at what is potentially coming down the pike for higher interface bandwidth (greater than 100Gbps) and better front panel port density.

Smaller Form Factor 100G

One approach is to simply use a more compact form factor, and this is exactly what the micro QSFP (uQSFP) is being designed to do.  uQSFP is the same width as an SFP form factor optic yet uses the same 4 lane design as QSFP28. This translates into a 33% increase in the front panel density of a 1RU switch compared with the existing QSFP28 form factor, while drawing the same 3.5W of power as the larger QSFP28.  It now becomes possible to fit up to 72 ports of uQSFP (72x100G) into a 1RU platform or line card, enough to support switching ASICs operating at 7.2Tbps when the uQSFP runs at 25Gbps per lane (4 lanes of 25Gbps).  If broken out into 4x25G ports, a single RU could terminate up to 288 x 25G ports.  uQSFP is also expected to support PAM4, enabling 50Gbps per lane for an effective bandwidth of 200Gbps in a single port and paving the way for enough front panel bandwidth to drive 14+Tbps of switching ASIC capacity in a 1RU switching device.  There may, however, be technical challenges in engineering a product with 3 rows of optics on the front panel.

Image courtesy of http://www.microqsfp.com/
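As a rough sanity check on those uQSFP density numbers, here is a sketch using the port counts and per-lane rates quoted above; nothing here comes from a vendor specification.

```python
# Back-of-the-envelope uQSFP front panel math, using the figures quoted above.
UQSFP_PORTS_PER_RU = 72   # three rows of SFP-width ports
LANES_PER_PORT = 4

nrz_port_gbps = LANES_PER_PORT * 25    # 100G per port at 25Gbps per lane (NRZ)
pam4_port_gbps = LANES_PER_PORT * 50   # 200G per port at 50Gbps per lane (PAM4)

print(UQSFP_PORTS_PER_RU * nrz_port_gbps / 1000)    # 7.2  Tbps of front panel bandwidth
print(UQSFP_PORTS_PER_RU * LANES_PER_PORT)          # 288  x 25G ports when fully broken out
print(UQSFP_PORTS_PER_RU * pam4_port_gbps / 1000)   # 14.4 Tbps with PAM4
```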

Double-Density Form Factors

Another approach is the QSFP-DD (double density) form factor.

QSFP28-DD is the same height and width as QSFP28, but slightly longer, allowing for a second row of electrical contacts.  This second row provides 8 signal lanes operating at 25Gbps for a total of 200Gbps in the same amount of space as a QSFP28 operating at 100Gbps.  That is enough interface bandwidth and front panel density for 36 x 200Gbps ports and a 7.2Tbps switching ASIC.  Break-out solutions are coming that will allow splitting a port into 2x100Gbps QSFP28 connections, with QSFP-DD optics on the 100G end.  What is not yet clear is whether a product will emerge that would allow for 8x25G breakouts of a QSFP28-DD into server cabinets.
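The same arithmetic for QSFP28-DD, again as a sketch using only the numbers above:

```python
# QSFP28-DD arithmetic, using the numbers quoted above.
lanes_per_port = 8          # second row of contacts doubles the lane count
lane_rate_gbps = 25
port_gbps = lanes_per_port * lane_rate_gbps   # 200 Gbps per port
ports_per_ru = 36

print(port_gbps)                          # 200
print(ports_per_ru * port_gbps / 1000)    # 7.2 Tbps -> enough to drive a 7.2Tbps ASIC
print(port_gbps // 100)                   # 2   -> breaks out into 2 x 100G QSFP28
```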

400G

CFP8 is going to be the first new form factor to arrive for achieving 400G, but it is too large to fit the more traditional model of 32 front panel ports in 1RU of space.  CFP8 dimensions are roughly 40mm (W) x 102mm (L) x 9.5mm (H), which should max out at around 18 ports per 1RU of space.  At 15-18W (roughly 3x the power of QSFP28), power consumption is another challenge in designing a line card that can accommodate it.  CFP8 is more likely to be used by service providers for router-to-router and router-to-transport longer haul transmission than in the traditional ethernet switching devices found in the Data Center rack.

QSFP56-DD consists of 8 lanes of 50Gbps with PAM4 modulation for 400Gbps operation.  It is the same size form factor as QSFP/QSFP28, allowing for up to 36 ports in 1RU of space and flexible product designs where QSFP, QSFP28 or QSFP56-DD modules could alternatively be used in the same port.  These 36 ports of 400Gbps would support ASICs with 14.4Tbps in a single RU of space.  QSFP56-DD should also support a short reach 4x100Gbps breakout into 4x SFP-DD, which is the same size as SFP+/SFP28, eventually making it ideal for server connectivity.

Octal SFP (OSFP) is another new form factor with 8 lanes of 50Gbps for an effective bandwidth of 400G.  It is slightly wider than QSFP, but should still be capable of supporting up to 32 ports of 400G, a total of 12.8Tbps in 1RU of front panel space.  The challenge for OSFP adoption will be that it is a completely different size than the previous QSFP/QSFP28, requiring a completely new design for 1RU switches and line cards.  In other words, there will be no backwards compatibility where a QSFP/QSFP28 could alternatively be plugged into the same port on a line card or fixed switch. An adapter allowing a QSFP28 optic to be inserted into the OSFP form factor is apparently under discussion.
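To put the 400G options side by side, here is a quick sketch of how the per-RU totals above fall out; the port counts are the estimates quoted in the text, not vendor specifications.

```python
# Rough comparison of the 400G form factors discussed above.
# Port counts per 1RU are the estimates quoted in the text.
form_factors = {
    "CFP8":      18,   # too large for the traditional 32-port faceplate
    "QSFP56-DD": 36,   # same footprint as QSFP/QSFP28
    "OSFP":      32,   # slightly wider than QSFP
}

PORT_GBPS = 400
for name, ports in form_factors.items():
    print(f"{name:10s} {ports:2d} x {PORT_GBPS}G = {ports * PORT_GBPS / 1000:.1f} Tbps per RU")

# CFP8       18 x 400G = 7.2 Tbps per RU
# QSFP56-DD  36 x 400G = 14.4 Tbps per RU
# OSFP       32 x 400G = 12.8 Tbps per RU
```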

So in conclusion, just as ASICs seemed to be quickly outpacing interface bandwidth and front panel real estate, viable options are coming soon that will be able to take us to the 12.8 to 14.4Tbps level in a single RU.

Disclaimer: The views expressed here are my own and do not necessarily reflect the views of my employer Juniper Networks