Set up NVMe over RoCE on the host side
NVMe initiator configuration in a RoCE environment includes installing and configuring the rdma-core and nvme-cli packages, configuring initiator IP addresses, and setting up the NVMe-oF layer on the host.
You must be running the latest compatible service pack of RHEL 7, RHEL 8, RHEL 9, or SUSE Linux Enterprise Server 12 or 15. See the NetApp Interoperability Matrix Tool for a complete list of the latest requirements.
-
Install the rdma-core and nvme-cli packages:
SLES 12 or SLES 15
# zypper install rdma-core
# zypper install nvme-cli
RHEL 7, RHEL 8, and RHEL 9
# yum install rdma-core
# yum install nvme-cli
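Both distributions are rpm-based, so you can optionally confirm that the packages installed with a quick query (an extra check, not part of the original procedure):
# rpm -q rdma-core nvme-cli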
-
For RHEL 8 and RHEL 9, install network scripts:
RHEL 8
# yum install network-scripts
RHEL 9
# yum install NetworkManager-initscripts-updown
-
Get the host NQN, which will be used to configure the host to an array.
# cat /etc/nvme/hostnqn
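If /etc/nvme/hostnqn does not exist on your host, nvme-cli can generate one. This is a sketch; the NQN shown only illustrates the expected format:
# nvme gen-hostnqn > /etc/nvme/hostnqn
# cat /etc/nvme/hostnqn
nqn.2014-08.org.nvmexpress:uuid:<uuid>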
-
Set up IPv4 addresses on the Ethernet ports used to connect NVMe over RoCE. For each network interface, create a configuration script that contains the different variables for that interface.
The variables used in this step are based on server hardware and the network environment. The variables include IPADDR and GATEWAY. These are example instructions for SLES and RHEL:
SLES 12 and SLES 15
Create the example file /etc/sysconfig/network/ifcfg-eth4 with the following contents:
BOOTPROTO='static'
BROADCAST=
ETHTOOL_OPTIONS=
IPADDR='192.168.1.87/24'
GATEWAY='192.168.1.1'
MTU=
NAME='MT27800 Family [ConnectX-5]'
NETWORK=
REMOTE_IPADDR=
STARTMODE='auto'
Then, create the example file /etc/sysconfig/network/ifcfg-eth5:
BOOTPROTO='static'
BROADCAST=
ETHTOOL_OPTIONS=
IPADDR='192.168.2.87/24'
GATEWAY='192.168.2.1'
MTU=
NAME='MT27800 Family [ConnectX-5]'
NETWORK=
REMOTE_IPADDR=
STARTMODE='auto'
RHEL 7 or RHEL 8
Create the example file /etc/sysconfig/network-scripts/ifcfg-eth4 with the following contents:
BOOTPROTO='static'
BROADCAST=
ETHTOOL_OPTIONS=
IPADDR='192.168.1.87/24'
GATEWAY='192.168.1.1'
MTU=
NAME='MT27800 Family [ConnectX-5]'
NETWORK=
REMOTE_IPADDR=
STARTMODE='auto'
Then, create the example file /etc/sysconfig/network-scripts/ifcfg-eth5:
BOOTPROTO='static'
BROADCAST=
ETHTOOL_OPTIONS=
IPADDR='192.168.2.87/24'
GATEWAY='192.168.2.1'
MTU=
NAME='MT27800 Family [ConnectX-5]'
NETWORK=
REMOTE_IPADDR=
STARTMODE='auto'
RHEL 9
Use the nmtui tool to activate and edit a connection. Below is an example file /etc/NetworkManager/system-connections/eth4.nmconnection that the tool generates:
[connection]
id=eth4
uuid=<unique uuid>
type=ethernet
interface-name=eth4

[ethernet]
mtu=4200

[ipv4]
address1=192.168.1.87/24
method=manual

[ipv6]
addr-gen-mode=default
method=auto

[proxy]
Below is an example file /etc/NetworkManager/system-connections/eth5.nmconnection that the tool generates:
[connection]
id=eth5
uuid=<unique uuid>
type=ethernet
interface-name=eth5

[ethernet]
mtu=4200

[ipv4]
address1=192.168.2.87/24
method=manual

[ipv6]
addr-gen-mode=default
method=auto

[proxy]
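If you prefer the command line to nmtui, nmcli can apply the same settings. This is a sketch only; it assumes the NetworkManager connection profiles are named eth4 and eth5, matching the examples above:
# nmcli connection modify eth4 ipv4.method manual ipv4.addresses 192.168.1.87/24
# nmcli connection modify eth5 ipv4.method manual ipv4.addresses 192.168.2.87/24
# nmcli connection up eth4
# nmcli connection up eth5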
-
Enable the network interfaces:
# ifup eth4
# ifup eth5
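To confirm that the addresses were applied, you can inspect each interface with the standard ip tool (an optional check, not part of the original procedure):
# ip addr show eth4
# ip addr show eth5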
-
Set up the NVMe-oF layer on the host. Create the following file under /etc/modules-load.d/ to load the nvme_rdma kernel module and ensure that it is always loaded, even after a reboot:
# cat /etc/modules-load.d/nvme_rdma.conf
nvme_rdma
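If you want to load the module immediately instead of waiting for the reboot in the next step, modprobe can load it by hand; this is an optional sketch:
# modprobe nvme_rdma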
-
Reboot the host.
To verify the nvme_rdma kernel module is loaded, run this command:
# lsmod | grep nvme
nvme_rdma              36864  0
nvme_fabrics           24576  1 nvme_rdma
nvme_core             114688  5 nvme_rdma,nvme_fabrics
rdma_cm               114688  7 rpcrdma,ib_srpt,ib_srp,nvme_rdma,ib_iser,ib_isert,rdma_ucm
ib_core               393216  15 rdma_cm,ib_ipoib,rpcrdma,ib_srpt,ib_srp,nvme_rdma,iw_cm,ib_iser,ib_umad,ib_isert,rdma_ucm,ib_uverbs,mlx5_ib,qedr,ib_cm
t10_pi                 16384  2 sd_mod,nvme_core
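With the module loaded, a quick way to confirm end-to-end RoCE connectivity is an NVMe-oF discovery against the array. The target address below is a placeholder; substitute the IP of an NVMe over RoCE port configured on your array (4420 is the standard NVMe-oF service port):
# nvme discover -t rdma -a 192.168.1.100 -s 4420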