
Cisco UCS Direct Attached Storage Migration to SAN Fabric Attached

Unfortunately this is disruptive. I have worked through this process in the lab and have not found a way to do it online. Moving from the “bad idea” of direct-attached storage to SAN fabric-attached storage seems to be a headache many are dealing with. Here is how I am planning to do it. In these notes you will notice an XtremIO mentioned; just substitute whatever SAN array fits your environment.

Virtualization Team shuts down all VMs and ESX servers

Connect via serial console to both FIs for visibility during reboots
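
The FI console port runs at 9600 baud, 8 data bits, no parity, 1 stop bit. From a Linux jump box that is simply (the device path is a placeholder for your USB-serial adapter):

    screen /dev/ttyUSB0 9600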

TURN OFF Call Home
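
From the UCSM CLI the equivalent is roughly the following sketch (verify the scope names against your UCSM release):

    UCS-A# scope monitoring
    UCS-A /monitoring # scope callhome
    UCS-A /monitoring/callhome # disable
    UCS-A /monitoring/callhome* # commit-buffer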

Disassociate all Service Profiles from blades. Note which SP maps to which blade.
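
This can also be scripted from the UCSM CLI; a rough sketch, where the org path and the Service Profile name ESX-01 are placeholders:

    UCS-A# scope org /
    UCS-A /org # scope service-profile ESX-01
    UCS-A /org/service-profile # disassociate
    UCS-A /org/service-profile* # commit-buffer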

Remove VSAN from vHBA Templates (set to Default or 1)

Remove the VSAN from the Storage FC interfaces on each FI under SAN/Storage Cloud/Fabric A&B/<interface name> (set to Default or 1).

Delete the Storage FC interfaces under SAN/Storage Cloud/Fabric A&B/<interface name>. Note the ports. When done, check under SAN/SAN Cloud/Uplink FC Interfaces; the interfaces should appear there now. If not, check under Equipment/Fabric Interconnect A&B/FC Ports/<port number> and select “Configure as Uplink Port”.

Remove VSAN’s under Storage Cloud (these should belong to nothing at this point). Note these have FC Zoning enabled

Remove any Storage Connection Policy that is attached to a SAN Connectivity Policy (under SAN/Policies/SAN Connectivity Policies/<Policy Name>/vHBA Initiator Groups)

Remove any Storage Connection Policies not in use (under SAN/Policies/Storage Connection Policies/<Policy Name>)

On the subordinate FI, switch to “Set FC End-Host Mode” (a CLI equivalent is sketched after the notes below)

  • UCSM will disconnect
  • BOTH FIs will go to “End-Host Mode”
  • Reboot can be viewed via Serial connection
  • NOTE: The FIs being in End-Host Mode WILL NOT hurt the XtremIO
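
For reference, the UCSM CLI equivalent of the mode change is roughly the sketch below; as noted above, both FIs will reboot after the commit:

    UCS-A# scope fc-uplink
    UCS-A /fc-uplink # set fc-switching-mode end-host
    UCS-A /fc-uplink* # commit-buffer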

After the reboot, ensure both FIs are in “End-Host Mode”. FIs in switch mode attached to the SAN fabric may crash the upstream MDS (be careful).

Once FI’s are ensured in End-Host mode FC cabling change can be started for both FI’s and XtremeIO. This can be done in any order

After FI’s reboot Service Profiles will re-associate to blades. This is fine, servers will not be rebooted by migration again. Servers will need to reboot to boot from SAN. If Service Profiles do not re-associate do manually at any point on.

Create the required VSANs under SAN Cloud for each FI
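
If you script this step, creating a named VSAN in the SAN Cloud from the UCSM CLI looks roughly like the sketch below. VSAN 100 with FCoE VLAN 100 on fabric A is just an example, and the argument order is from memory, so confirm it against the UCSM CLI guide for your release:

    UCS-A# scope fc-uplink
    UCS-A /fc-uplink # scope fabric a
    UCS-A /fc-uplink/fabric # create vsan VDI-A 100 100
    UCS-A /fc-uplink/fabric/vsan* # commit-buffer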

Disable zoning on VSAN 1 under SAN/Storage Cloud (this will update both fabrics)

Create a new SAN port-channel on each FI using the same FC ports previously used for the Storage Cloud (direct attached). Add the new VSAN to each new port-channel (note: the port-channel will not come up until the VSANs match on each side). If the FC cabling is done and the upstream MDS switches are configured, these port-channels should come up. Troubleshoot if not.
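
On the upstream MDS, the matching end of each port-channel looks roughly like the following. Port-channel 1, interfaces fc1/1 - 2, and VSAN 100 are placeholders for your environment; the FI side should come up as a trunking F port-channel:

    vsan database
      vsan 100 name UCS_FAB_A
    interface port-channel 1
      switchport mode F
      switchport trunk mode on
      switchport trunk allowed vsan 100
      channel mode active
    interface fc1/1 - 2
      switchport mode F
      channel-group 1 force
      no shutdown
    interface port-channel 1
      no shutdown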

Update the vHBA template for each fabric with the new VSAN

Check a VDI Service Profile to ensure the new VSANs made it to the vHBAs
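
The vHBA WWPNs verified here are what gets zoned on the upstream MDS. A minimal single-initiator zone sketch, with placeholder names, VSAN, and WWPNs (substitute your own); zoning two array targets per fabric is what yields the four paths expected at the F6 check below:

    zone name ESX-01_hba0_XtremIO vsan 100
      member pwwn 20:00:00:25:b5:aa:00:01
      member pwwn 51:4f:0c:50:00:00:00:01
      member pwwn 51:4f:0c:50:00:00:00:05
    zoneset name UCS_FAB_A vsan 100
      member ESX-01_hba0_XtremIO
    zoneset activate name UCS_FAB_A vsan 100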

Associate a Service Profile to its original blade (unless a server pool has already associated it)

Boot the Service Profile and use F6 to verify that all expected paths to the boot LUN exist; four paths should be present.

Virtualization Team checks running Service Profile/ESX server
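
Once a host is back up, the path count can also be confirmed from the ESXi shell, for example:

    esxcli storage core path list
    esxcli storage core path list | grep -c "Runtime Name"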

Associate all Service Profiles to their original blades, or reboot the remaining blades

TURN ON Call Home
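
The CLI counterpart of the earlier Call Home sketch, again assuming the same scope names:

    UCS-A# scope monitoring
    UCS-A /monitoring # scope callhome
    UCS-A /monitoring/callhome # enable
    UCS-A /monitoring/callhome* # commit-buffer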

Done
