
A Quick Primer on Cisco Fabric Extension (FEX-Link)

August 20th, 2012

One of the many technologies used by Cisco’s Unified Computing System (UCS) is Fabric Extension. Before we discuss the applications of Fabric Extension and how it’s used within UCS (in subsequent blog posts), let’s first discuss the basic problem that fabric extension solves, walk through a quick component overview, and lastly, discuss a simple design showing the logical vs. physical topology.

The Basic Problem and Basic Solution

In the data center today, there are three basic server connectivity design models: 1) push server ports to a central switch, 2) push little switches to the servers, or 3) push switch ports to the servers. The oversimplified description of the problem is that designs 1 & 2 require a trade-off – either too many cables or too many switches to manage. The solution provided by Cisco FEX-Link is design 3 – less cabling without lots of little switches to manage. Allow me to explain in more detail.

Design 1 shows a model that we’re all well aware of: the administrator pushes the server NICs up to the central switches. An administrator wants to keep switch management overhead low, so they decide to deploy two very large central switches to connect everything. The pair of large modular switches (Note: The diagram only depicts one side of a redundant design for simplicity’s sake.) are in a central location (e.g. middle of row, end of row, middle of data center, etc.). The servers are located in racks with no top of rack (ToR) switches. Instead, each server’s NIC must have a long cable run all the way back to the central switch.

This model applies to either traditional rack servers or blade servers using pass-through modules. The central switch is depicted as a modular switch with a Supervisor module and a single line card. The light blue box with a dotted outline depicts the “management domain” for this modular switch. If the network admin adds more line cards to this modular switch to increase its port capacity, the existing switch management domain (e.g. IOS/NX-OS) extends to include the additional line cards.

The upside for this model is that the switch management overhead is low. The downside for this model is that, typically, you end up with lots of server->switch cabling to manage.


Design 2 depicts a model where an administrator pushes lots of little switches closer to the servers to minimize the amount of cabling needed in the data center. The administrator realizes that home running all servers to a pair of central switches requires WAY too much cabling. So, the admin installs top of rack (ToR) switches in each rack. The servers in each rack only have to connect to the switch at the top of their rack, and the ToR then home runs back to the central switch.

Typically, you see this in two flavors: 1) the traditional top of rack (ToR) design and 2) the legacy blade switch (a.k.a. the mini-rack) design. The upside for this option is that overall cabling is less than with Design 1. However, the major downside is that you end up with lots of little switches to deploy, configure, manage, monitor, troubleshoot, upgrade firmware on, etc.

As they say in life, everything is a trade-off. So, do you want fewer switches to manage (Design 1) or do you want fewer cables to manage (Design 2)? What if I said that you could have both?

Design 3 is a model Cisco first delivered with the Nexus architecture using a feature called “Fabric Extension”. In this design, the administrator pushes the switch ports from the central switch closer to the servers using Cisco Fabric Extenders (a.k.a. remote line cards).

Figuratively, start with a modular switch (e.g. a Catalyst 6500) and strip away the sheet metal surrounding the supervisor module, line cards, backplane, power supplies, and fans. Next, take the supervisor module and one line card and wrap sheet metal around them. Then give each remaining line card its own sheet metal, power supplies, and fans, and place them very near the servers. Finally, extend the switch’s backplane out to these remotely deployed line cards using 10GE/40GE technology. The result is a very large, distributed, modular switch that provides very low management overhead (the same as a single large modular switch) while also reducing cabling, since the remote line cards are now very close to the servers. By close, I mean in the same rack or blade chassis as the physical servers.

This design from Cisco doesn’t require a trade-off. Cisco’s Fabric Extension (FEX-Link) technology provides the best of both worlds to the customer: less cabling AND less switch management overhead.
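
To make the trade-off concrete, here is a minimal back-of-the-envelope sketch. The rack counts, servers per rack, NICs per server, and uplink counts are hypothetical example values (not from any Cisco design guide); the point is only to show how the managed-switch count and the long cable runs shift between the three designs.

```python
# Back-of-the-envelope comparison of the three designs.
# All quantities below are hypothetical example values.
RACKS = 10
SERVERS_PER_RACK = 40
NICS_PER_SERVER = 2          # one NIC toward each side of a redundant design

servers = RACKS * SERVERS_PER_RACK

# Design 1: every server NIC is a long home run to one of two central modular switches.
d1_managed_switches = 2
d1_long_cable_runs = servers * NICS_PER_SERVER

# Design 2: ToR switches in each rack; servers cable within the rack,
# and only the ToR uplinks are long runs back to the central switches.
TOR_PER_RACK = 2
UPLINKS_PER_TOR = 2
d2_managed_switches = 2 + RACKS * TOR_PER_RACK
d2_long_cable_runs = RACKS * TOR_PER_RACK * UPLINKS_PER_TOR

# Design 3 (FEX-Link): FEXes in each rack, but a FEX is a remote line card,
# not a managed switch -- only the two central switches are managed.
FEX_PER_RACK = 2
FABRIC_LINKS_PER_FEX = 2     # external backplane links per FEX
d3_managed_switches = 2
d3_long_cable_runs = RACKS * FEX_PER_RACK * FABRIC_LINKS_PER_FEX

for name, sw, cables in [("Design 1", d1_managed_switches, d1_long_cable_runs),
                         ("Design 2", d2_managed_switches, d2_long_cable_runs),
                         ("Design 3", d3_managed_switches, d3_long_cable_runs)]:
    print(f"{name}: {sw} managed switches, {cables} long cable runs")
```

Under these example numbers, Design 3 ends up with the managed-switch count of Design 1 and the cabling profile of Design 2, which is the “best of both worlds” described above.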


Component Overview

There are three main components to Cisco’s Fabric Extension (FEX-Link) technology: 1) the central switch, 2) the external backplane, and 3) the remote line card.

The ‘central switch’ handles all management and configuration functions for all remote line cards (a.k.a. Fabric Extenders), provides the management interfaces (CLI, SNMP, etc.), serves as the centralized control & data plane, and makes all L2/L3 frame movement decisions. The central switch can be a Nexus switch (5000, 5500, 7000, etc.) or a UCS Fabric Interconnect (6100 or 6200).


The ‘external backplane’ is a collection of one or more physical Ethernet cables (10GE, etc.) carrying management and data traffic (IP, FCoE, etc.) between the central switch and the remote line cards. The cables used for the external backplane can be SFP+ copper (Twinax), SFP+ FET (MM OM2/3/4), SFP+ short reach (SR), and SFP+ long reach (LR).

The ‘remote line card’, or Fabric Extender (FEX), provides physical connectivity for multiple hosts. Fabric Extenders are not switches. They do not make forwarding decisions based on L2 (MAC address) or L3 (IP, etc.) information. Fabric Extenders only move frames based on the VNTag information embedded in each frame by the upstream VNTag-aware central switch (Nexus switch or UCS Fabric Interconnect). Fabric Extenders are FEX-Link devices and have two very basic functions: 1) receive an inbound frame from an uplink (the external backplane) and move that frame to one or more downlinks by interpreting the VNTag information, and 2) receive inbound frames from downlinks, add the VNTag associated with the downlink, and transmit the frame up the external backplane toward the central switch for switching.
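
To illustrate those two functions, here is a minimal sketch. The tag field, port names, and vif numbers are simplified placeholders for illustration, not the actual VNTag wire format or a real FEX implementation; the point is that the FEX moves frames purely on tag-to-port mappings, never on MAC or IP lookups.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    payload: bytes
    vntag_vif: int | None = None   # simplified stand-in for the VNTag vif field

class FabricExtender:
    """Moves frames between uplinks and downlinks using only tag information."""
    def __init__(self, downlink_to_vif: dict[str, int]):
        self.downlink_to_vif = downlink_to_vif                   # e.g. {"eth1/1": 101}
        self.vif_to_downlink = {v: k for k, v in downlink_to_vif.items()}

    def from_uplink(self, frame: Frame) -> str:
        # Function 1: interpret the tag stamped by the central switch and
        # deliver the frame to the matching downlink (no MAC/IP lookup).
        return self.vif_to_downlink[frame.vntag_vif]

    def from_downlink(self, port: str, frame: Frame) -> Frame:
        # Function 2: stamp the tag associated with the ingress downlink and
        # send the frame up the external backplane for switching.
        frame.vntag_vif = self.downlink_to_vif[port]
        return frame

# Hypothetical usage:
fex = FabricExtender({"eth1/1": 101, "eth1/2": 102})
tagged = fex.from_downlink("eth1/1", Frame(b"hello"))     # tagged toward the central switch
print(fex.from_uplink(Frame(b"reply", vntag_vif=102)))    # delivered out eth1/2
```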

Terminology Clarification: A Fabric Extender (FEX) is the device that uses Cisco’s FEX-Link technology; FEX-Link is simply Cisco’s marketing term for the fabric extension capability itself.

VNTag and 802.1BR

Cisco developed the VNTag format; however, the capability is being standardized by the IEEE as 802.1BR. The tag format used by the IEEE draft differs from VNTag. Once 802.1BR is ratified, look for Cisco to include support for it in products going forward. It’s kind of like how VLAN tagging got started: Cisco developed it (called ISL) and submitted it to the IEEE to make it an open standard. The IEEE changed the tag format and ratified it as 802.1Q. After ratification, Cisco continued supporting ISL but began adding support for 802.1Q. For a while, many Cisco switches supported both ISL and 802.1Q. Gradually, over several years, ISL was deprecated and 802.1Q was carried forward in Cisco products. It’s reasonable to assume the same will happen with VNTag and 802.1BR.

Example Deployment: Logical vs. Physical

To help bring all of the above together, below is a graphic showing two diagrams for the same 440 rack servers – a logical diagram and a physical diagram. The physical view shows 10 racks of 44 servers each (only the first and last racks are shown). The physical view shows that two central switches (Switch A and Switch B) are each connected to a FEX (remote line card) at the top of each rack, and each FEX is physically connected to every server in its rack. FEX modules A1 and A2 belong to Switch A, and FEX modules B1 and B2 belong to Switch B. The logical view shows how the physical diagram effectively operates – as two very large central switches that are each connected to all 440 servers.

Note: There are many deployment scenarios for FEX connectivity to Nexus switches. Some scenarios include Server vPC, FEX vPC, EvPC, and vPC+. Consult Cisco documentation for examples. (e.g. see Figure 7 in this Nexus 2000 data sheet)


The purpose of showing these two diagrams side-by-side is to help the reader understand the similarities. Both diagrams show 440 servers that are connected to two central switches. In both diagrams, all servers are only ONE layer 2 hop from ANY of the other 439 servers and all servers have the same latency to reach ANY of the other 439 servers. In other words, Cisco’s FEX-Link technology delivers an extremely flat, low hop count, low latency switching architecture that provides both minimized cabling and very little switch management overhead.
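
A small sketch of that idea follows (the rack numbering and helper names are made up for illustration and simply mirror the example above): because a FEX is part of its parent switch, a path that physically traverses server -> FEX -> central switch -> FEX -> server still counts as a single layer-2 switching hop in the logical view.

```python
# Physical path vs. logical hop count for the example deployment.
# FEXes are remote line cards of the central switch, so they do not
# count as layer-2 switching hops.

def physical_path(src_rack: int, dst_rack: int, switch: str = "A") -> list[str]:
    return [f"server (rack {src_rack})",
            f"FEX {switch}{src_rack}",
            f"central switch {switch}",
            f"FEX {switch}{dst_rack}",
            f"server (rack {dst_rack})"]

def layer2_hops(path: list[str]) -> int:
    # Only the central switch makes forwarding decisions; FEX hops are transparent.
    return sum(1 for node in path if node.startswith("central switch"))

path = physical_path(src_rack=1, dst_rack=10)
print(" -> ".join(path))
print("Layer-2 switching hops:", layer2_hops(path))   # 1, for any pair of servers
```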

In a follow-on article, we’ll discuss the different types of deployment models that make use of this technology – Rack FEX, Chassis FEX, Adapter-FEX, and VM-FEX.

Author: mseanmcgee
