
UCS Networking: Simplicity of Rack Connectivity PLUS All The Benefits of Blade and Virtualization

October 18th, 2012

There are many reasons why Cisco UCS isn’t a “me too” blade solution. Just some of the differentiators compared to its competitors are: consolidated management, stateless configuration and identity using Service Profiles, form factor design that accommodates the best CPU/memory/IO footprint, highly efficient power and cooling design, extensive 3rd party management integration, open API (published SDK), fully featured PowerShell implementation, virtualization enhancements and integration, and last but certainly not least is networking. This IS a Cisco product, right?

In a previous post, we covered current server connectivity design options, the trade-offs they required (too many cables to run or too many switches to manage), and how Cisco’s FEX-Link architecture solves both problems. Next we’ll outline the networking trade-offs required when deploying legacy blade technologies and server virtualization. Lastly, we’ll discuss how UCS provides the simplicity of the rack server connectivity design while also providing all the benefits of blades and virtualization. In other words, no trade-offs.

In The Beginning Was The Rack

Aaah, the tried and true, simple top-of-rack (ToR) design that most of us cut our networking teeth on. This classic design is characterized by a rack of servers with a pair (or more) of switches at the top. The servers are all directly connected to the ToRs.

The great thing about this design was how simple it was…put a server in a rack, cable the server’s NICs to the ToR, configure the ToR ports…done. We didn’t care “where” in the rack the server was. We didn’t care whether it was physically adjacent to another particular server, because they were all connected to the same switch anyway, so east-west traffic wasn’t that big of a deal. The design was very flat (compared to legacy blades or legacy blades + virtualization). We had full traffic visibility. We had all the networking features we needed on the ToR switch port that the server NIC connected directly to. The only real downside was all the cabling required when our servers had lots of NICs and HBAs.
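To put a rough number on that cabling downside, here’s a quick back-of-the-envelope sketch in Python. The server count and per-server NIC/HBA counts are made-up example values, not figures from any particular deployment.

```python
# Back-of-the-envelope cable count for a classic ToR rack design.
# All numbers below are illustrative assumptions, not measured values.

servers_per_rack = 32        # assumed count of 1U rack servers
nics_per_server = 4          # assumed: 2x LAN, 1x backup, 1x management
hbas_per_server = 2          # assumed: dual-fabric Fibre Channel

cables_per_server = nics_per_server + hbas_per_server
cables_per_rack = servers_per_rack * cables_per_server

print(f"{cables_per_rack} cables per rack "
      f"({cables_per_server} per server x {servers_per_rack} servers)")
# -> 192 cables per rack (6 per server x 32 servers)
```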

Along Came Blades…

Then (circa 2000) the eventually-to-be-legacy blade design entered the picture. We got almost all of the benefits they promised us. They promised us reduced cabling (check), better power and cooling (check), density (check), and simpler management (uh, negative on that last one. See mini-rack).

Let’s just focus on ‘reduced cabling’. We did get it. They were right. However, life is full of trade-offs, and they didn’t tell us about those. The trade-off for the benefits of blades was MORE switch management, FEWER networking features, REDUCED server visibility, and MORE oversubscription on blade switch uplinks. Whoa, quite the trade-offs. No wonder so many of us never adopted blades.
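To make “MORE oversubscription on blade switch uplinks” concrete, here’s a small sketch. The port counts and speeds are generic assumptions about a hypothetical legacy blade switch, not the specs of any particular product.

```python
# Oversubscription of a hypothetical legacy blade switch: server-facing
# bandwidth versus the uplink bandwidth actually cabled to the core.
# All values are illustrative assumptions.

downlink_ports = 16          # assumed: one internal port per blade slot
downlink_gbps = 10           # assumed 10 GbE server-facing ports
uplink_ports = 4             # assumed uplinks cabled to the aggregation layer
uplink_gbps = 10

oversubscription = (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)
print(f"Oversubscription ratio: {oversubscription:.0f}:1")   # -> 4:1
```

Multiply that by a blade switch (or two) in every chassis and the uplink math gets worse as the environment grows.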

Note: Know how to identify a legacy blade architecture vendor? They talk about how great blade switches are because of east-west traffic problems. I’ll save that glorious topic for a post all its own soon.

For the many customers still using all rack servers and no blades… Are they behind the times, or are they just being pragmatic? If a few extra cables aren’t a bother, if a little more cost for power & cooling doesn’t offset the more expensive blade form factor, and if space isn’t an issue…then why buy blades and complicate the network? Let me ask you this question: if you wouldn’t deploy little switches between every 16 rack servers, then why deploy little switches between every 16 blade servers? Exactly…you shouldn’t. Cisco UCS doesn’t use blade switches for that very reason.

Then Virtualization Piled On…

We didn’t stop at adopting blades and their associated trade-offs. We also added server virtualization on top. Don’t get me wrong…I LOVE server virtualization for the benefits it provides; however, most of us have seen the trade-off: even worse network visibility for our servers. Think about it: when our OS and apps were running in the top diagram (traditional rack) as a physical server, we had full visibility. Our OS and apps are now running in a VM. For the sake of getting all the benefits of legacy blades and virtualization, we’ve had to give up more and more server networking visibility, features, control, troubleshooting capability, and security.

What if a customer didn’t have to choose trade-offs? What if a customer could get the benefits of the rack design plus all the benefits of blades and virtualization in a single solution?

UCS Networking: Have Your Cake And Eat It Too

What if a blade vendor came along and said “I can give you the benefits of the rack design plus all the benefits of blades and virtualization. No networking trade-offs.”? But how would a vendor do that? Without using a blade switch, how do you reduce cabling? Without using a vSwitch, how do you aggregate VM traffic out of the hypervisor? Don’t make the assumption that you MUST use “switches” to solve these problems. What would you say if Cisco told you “Hey! You’re using too many switches. Stop it!”?

The quick answer is: Use Cisco’s FEX-Link technology. In a previous post, we discussed deploying FEX-Link technology at the rack level (something I refer to as “Rack FEX”). The same solution can be applied at the blade chassis level and at the server level to solve the trade-offs mentioned above.


Chassis FEX

Cisco UCS provides cable reduction without using bunches of little blade switches. Cisco replaces the blade switch (used in the legacy blade design) with remote line cards (FEX modules) that are connected to the ToR switch (called a Fabric Interconnect). The UCS Fabric Interconnects (built on the Nexus 5000/5500 platform) are the central controlling switch for the FEX modules deployed into each UCS blade chassis. As we discussed in a previous post, FEX modules are not switches. FEX modules are remote line cards (analogous to line cards from a Catalyst 6500). A Fabric Interconnect can have up to 20 FEX modules deployed into 20 different UCS blade chassis and they all operate together as a single switch. It’s like having 160 rack servers all connected to a single pair of redundant switches – a flat, low latency fabric that’s equidistant from any server to any server no matter what blade chassis they’re in.
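The scale claim above is just arithmetic, sketched below. The 20-chassis figure comes from the paragraph above; 8 half-width blades per chassis is the standard UCS 5108 chassis capacity.

```python
# Scale of a single UCS domain behind one Fabric Interconnect pair.
# 20 chassis per Fabric Interconnect is the figure quoted above;
# 8 half-width blades per chassis is the standard UCS 5108 capacity.

chassis_per_domain = 20
blades_per_chassis = 8

servers = chassis_per_domain * blades_per_chassis
print(f"{servers} servers behind one pair of redundant switches")   # -> 160
```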

Adapter FEX and VM-FEX

There are two common points of network complexity in servers: 1) too many physical NICs and HBAs, along with all of their wiring, and 2) the need for virtual switches to aggregate virtual machine traffic. Replacing blade switches with FEX modules solved the blade chassis connectivity conundrum. Could the same approach also simplify both of these network complexity issues INSIDE the server?

Cisco asked the same question, and the answer was to develop a FEX module in the form factor of a PCIe expansion card that is inserted into the physical server. This FEX-on-a-card device is called a Virtual Interface Card (or VIC). The Cisco VIC adapter can be used in Cisco B-Series servers (blades) or C-Series servers (racks). The Cisco VIC can operate in two modes: Adapter FEX or VM-FEX.

Adapter FEX is the mode used for bare metal blade/rack servers (or for hypervisors when a virtual switch, like the Nexus 1000v, is still desired). Adapter FEX presents one or more software-defined Ethernet NICs, iSCSI NICs, or Fibre Channel HBAs to the operating system. The software (UCS Service Profile) defines the type and number of interfaces for each server, and then the hardware (VIC) creates the required PCIe functions to present to the OS. While the OS “sees” what looks like multiple physical ports, the devices are really logical ports on the VIC card in the server.
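To visualize what “software-defined interfaces” means here, the sketch below models a Service Profile’s interface definitions as plain data and “presents” one PCIe function per definition. This is a conceptual illustration only; the class and field names are hypothetical and are not the UCS Manager API (in practice you would define vNICs/vHBAs in UCS Manager, via its XML API, or with the UCS PowerShell toolkit).

```python
# Conceptual illustration only: a Service Profile declares interfaces in
# software, and the VIC instantiates a matching PCIe function for each one.
# The class and field names here are hypothetical, not the UCS Manager API.

from dataclasses import dataclass

@dataclass
class VirtualInterface:
    name: str
    kind: str        # "ethernet", "fc", or "iscsi"
    fabric: str      # "A", "B", or "A-B" (hardware Fabric Failover)

service_profile_interfaces = [
    VirtualInterface("eth0", "ethernet", "A-B"),   # OS management
    VirtualInterface("eth1", "ethernet", "A-B"),   # application traffic
    VirtualInterface("fc0", "fc", "A"),            # SAN fabric A
    VirtualInterface("fc1", "fc", "B"),            # SAN fabric B
]

# The OS "sees" one device per definition, as if each were a separate
# physical adapter port, even though they are all logical ports on one VIC.
for pcie_function, vif in enumerate(service_profile_interfaces):
    print(f"PCIe function {pcie_function}: {vif.kind} '{vif.name}' on fabric {vif.fabric}")
```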

Note: As of this writing, the ASIC used in a VIC adapter supports up to 256 PCIe functions, although the current OS-supported limit is 116.

Many folks are familiar with HP’s Virtual Connect FlexFabric product. Let’s use it for comparison. FlexFabric allows the creation of up to 8 virtual interfaces per adapter (8 FlexNICs or 6 FlexNICs and 2 FlexHBAs/iSCSI). By comparison, Cisco’s VIC 1280 allows up to 116 virtual interfaces – any of which can be Ethernet, HBA, or iSCSI. In addition, Cisco’s VIC supports failover NIC teaming in hardware (called Fabric Failover). Basically, Cisco’s VIC is like a FlexFabric adapter on steroids.

So Adapter FEX provides the simplicity for a server at the hardware layer (fewer physical NICs, HBAs and cabling to manage). What about using the technology to provide simplicity at the virtualization layer?

VM-FEX is an extension of Adapter FEX functionality that allows the logical ports (PCIe functions) of the VIC to be coupled directly with a virtual machine. This eliminates the need for a virtual switch in the hypervisor, since the VM is directly connected to the top-of-rack switch via FEX-Link. Every VM’s logical port on the VIC is seen and managed independently on the top-of-rack switch as a separate interface. The configuration, troubleshooting, monitoring/sniffing, and statistics collection are now all done PER VM on the upstream switch. And yes, vMotion is supported when using VM-FEX.

Now it’s understandable why the Cisco VIC needed to support so many logical interfaces: it has to accommodate dense virtual machine deployments on a single physical host. Since this “virtual switch bypass” mode requires a VIC logical port per VM, the VIC needs to support 116+ interfaces.
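Here’s why that interface count matters in practice. A rough sizing sketch, where the VM density and per-VM vNIC counts are assumed example values:

```python
# Rough sizing of the VIC logical ports needed for VM-FEX on one host.
# The VM density and per-VM vNIC counts are assumed example values.

vms_per_host = 50            # assumed virtual machine density
vnics_per_vm = 2             # assumed vNICs per VM
hypervisor_interfaces = 4    # assumed: management, vMotion, storage, etc.

logical_ports_needed = vms_per_host * vnics_per_vm + hypervisor_interfaces
print(f"VIC logical interfaces needed: {logical_ports_needed}")   # -> 104
```

Even at that density the requirement lands just under the 116-interface OS limit noted above, which is exactly why an 8-interface adapter can’t play in this space.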

Note: To configure the ports for Adapter FEX or VM-FEX, the administrator uses UCS Service Profiles configured from within UCS Manager. This method is used for B-Series blade servers or C-Series rack servers managed by UCS Manager. To configure Adapter FEX or VM-FEX for C-Series rack servers in standalone mode, use Cisco Integrated Management Controller (CIMC).

To bring it all together, below is a graphic showing the four deployment types for Cisco’s FEX technology. As you can see, it can be deployed within a rack (Rack FEX), within a blade chassis (Chassis FEX), within a bare metal server (Adapter FEX), or within a hypervisor (VM-FEX). Non-Cisco server customers can use only Rack FEX and Chassis FEX (offered for HP, Dell, and Fujitsu blade chassis); Adapter FEX and VM-FEX are not available to them. For Cisco UCS customers, all four deployment types are possible.

image

In summary, Cisco’s FEX-Link architecture provides the largest, flattest, layer 2 network available today for blade servers while still providing all the benefits of blades and virtualization. For UCS customers, there are no networking trade-offs…

Author: mseanmcgee