One of the interesting things about working with Cisco’s high-end data center switches is seeing how well they adapt to new challenges. We put a lot of engineering and R&D into products like the Nexus 9000, and for us it’s not just “faster and denser and more reliable.” We like the challenges that network managers throw at us, and we like to help solve their problems.
Recently, a new type of storage transport, NVMe over Fabrics (NVMe-oF), has been evolving, and we’re excited to see it shake up the world of storage area networking. Until now, network managers haven’t had much of a challenge when it comes to storage: the network was so fast, and spinning hard drives so slow, that routine network upgrades to 10G Ethernet (iSCSI) and faster Fibre Channel (FCP) speeds have kept storage managers happy.
What we’ve found in testing these new NVMe ultra-fast solid-state storage SANs in our labs is that things aren’t so simple anymore. Storage teams now have the ability to saturate data center networks with incredibly fast devices. And this means that network managers need to look closely at this new generation of storage to understand what is different—and how they can meet the performance demands of truly high-speed storage.
NVMe storage offers multiple transport options: NVMe-FC for Fibre Channel fabrics, and NVMe-RoCEv2 and NVMe-TCP for IP storage. Fibre Channel networks can handle NVMe-FC traffic seamlessly, but for IP storage we found some very interesting challenges for the network manager, including:
- Flexibility – Integrate NVMe systems into existing networks using evolving protocols.
- Security – Securely deliver networking to NVMe storage systems, ensuring that clients and servers are isolated and that network connections between devices are tightly controlled.
- Performance – Deliver a high-performance network that meets the strict latency and loss requirements of these new storage protocols, even in the face of congestion and oversubscription.
- Visibility – Look deep across a whole network fabric to ensure that SLAs are being respected from end-to-end, all the way from client to server, and that capacity is available to handle failover events.
- Manageability – Accurately and easily deploy complex configurations, including access controls and traffic engineering, across large switch fabrics.
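To make the flexibility point concrete: on the host side, an NVMe-TCP target can typically be discovered and attached with the standard Linux nvme-cli tool, independent of the switch vendor. This is an illustrative sketch only; the address, port, and NQN below are placeholders, not values from our testing.

```shell
# Discover NVMe-oF subsystems exposed by a target
# (placeholder address; NVMe-TCP commonly listens on port 4420).
nvme discover -t tcp -a 192.168.10.20 -s 4420

# Connect to a discovered subsystem by its NQN (placeholder NQN shown).
nvme connect -t tcp -a 192.168.10.20 -s 4420 \
    -n nqn.2014-08.org.example:subsystem1

# The attached namespaces then appear as ordinary block devices.
nvme list
```

The same `nvme connect` invocation with `-t rdma` attaches the subsystem over RoCEv2 instead, which is part of what makes the choice of IP transport largely transparent to applications — the pressure lands on the network underneath.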
We found that Cisco CloudScale ASIC-powered Nexus 9000 switches and ACI fabrics are well suited to meeting the challenges of NVMe-oF for IP-based storage systems in the data center. Our testing findings are written up in a white paper that takes a deep dive into these five areas, with test results and configuration advice for network managers.
Here’s a summary of our results:
Please check out the NVMe-IP storage white paper.
Cisco believes that different workloads will utilize different NVMe transport options, and Cisco switches support all three (NVMe-FC, NVMe-RoCEv2, and NVMe-TCP) today. We’re also working on a companion white paper showing how NX-OS-powered Cisco MDS 9000 family products take advantage of NVMe-FC innovations, including, but not limited to, the NVMe analytics engine on the Cisco MDS 64G FC ASIC. Stay tuned for the NVMe-Anywhere white paper.
The post Meeting the Network Requirements of NVMe Storage with Cisco Nexus 9000 and ACI appeared first on Cisco Blogs.
Author: Yousuf Khan