Set up NVMe over InfiniBand on the host side
Configuring an NVMe initiator in an InfiniBand environment involves installing the infiniband-diags, rdma-core, and nvme-cli packages, configuring initiator IP addresses on the IB interfaces, and setting up the NVMe-oF layer on the host.
You must be running the latest compatible RHEL 7, RHEL 8, RHEL 9, or SUSE Linux Enterprise Server 12 or 15 service pack operating system. See the NetApp Interoperability Matrix Tool for a complete list of the latest requirements.
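Before you begin, you can confirm which release you are running; this is a quick sanity check, not part of the documented procedure:
# cat /etc/os-release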
-
Install the infiniband-diags, rdma-core, and nvme-cli packages:
SLES 12 or SLES 15
# zypper install infiniband-diags
# zypper install rdma-core
# zypper install nvme-cli
RHEL 7, RHEL 8, or RHEL 9
# yum install infiniband-diags
# yum install rdma-core
# yum install nvme-cli
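As an optional sanity check, you can confirm that all three packages installed correctly; rpm is available on both RHEL and SLES:
# rpm -q infiniband-diags rdma-core nvme-cli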
-
For RHEL 8 or RHEL 9, install network scripts:
RHEL 8
# yum install network-scripts
RHEL 9
# yum install NetworkManager-initscripts-updown
-
For RHEL 7, enable ipoib. Edit the /etc/rdma/rdma.conf file and modify the entry for loading ipoib:

IPOIB_LOAD=yes
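If you want the change to take effect without a reboot, you can also load the module by hand; a convenience step, assuming the in-box ib_ipoib driver:
# modprobe ib_ipoib
# lsmod | grep ib_ipoib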
-
Get the host NQN, which will be used to configure the host to an array.
# cat /etc/nvme/hostnqn
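If /etc/nvme/hostnqn does not exist on your host, nvme-cli can generate one. A minimal sketch, assuming you want to persist the generated NQN:
# nvme gen-hostnqn > /etc/nvme/hostnqn
# cat /etc/nvme/hostnqn
nqn.2014-08.org.nvmexpress:uuid:<generated uuid>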
-
Check that both IB port links are up and the State = Active:
# ibstat
CA 'mlx4_0'
        CA type: MT4099
        Number of ports: 2
        Firmware version: 2.40.7000
        Hardware version: 1
        Node GUID: 0x0002c90300317850
        System image GUID: 0x0002c90300317853
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 40
                Base lid: 4
                LMC: 0
                SM lid: 4
                Capability mask: 0x0259486a
                Port GUID: 0x0002c90300317851
                Link layer: InfiniBand
        Port 2:
                State: Active
                Physical state: LinkUp
                Rate: 56
                Base lid: 5
                LMC: 0
                SM lid: 4
                Capability mask: 0x0259486a
                Port GUID: 0x0002c90300317852
                Link layer: InfiniBand
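To pull just the link status out of that output, a short filter works; this is a convenience only, assuming the two-port adapter shown above:
# ibstat | grep -E 'State|Physical state'
                State: Active
                Physical state: LinkUp
                State: Active
                Physical state: LinkUp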
-
Set up IPv4 IP addresses on the ib ports.
SLES 12 or SLES 15
Create the file /etc/sysconfig/network/ifcfg-ib0 with the following contents.
BOOTPROTO='static'
BROADCAST=
ETHTOOL_OPTIONS=
IPADDR='10.10.10.100/24'
IPOIB_MODE='connected'
MTU='65520'
NAME=
NETWORK=
REMOTE_IPADDR=
STARTMODE='auto'
Then, create the file /etc/sysconfig/network/ifcfg-ib1:
BOOTPROTO='static'
BROADCAST=
ETHTOOL_OPTIONS=
IPADDR='11.11.11.100/24'
IPOIB_MODE='connected'
MTU='65520'
NAME=
NETWORK=
REMOTE_IPADDR=
STARTMODE='auto'
RHEL 7 or RHEL 8
Create the file /etc/sysconfig/network-scripts/ifcfg-ib0 with the following contents.
CONNECTED_MODE=no
TYPE=InfiniBand
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR='10.10.10.100/24'
DEFROUTE=no
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME=ib0
ONBOOT=yes
Then, create the file /etc/sysconfig/network-scripts/ifcfg-ib1:
CONNECTED_MODE=no
TYPE=InfiniBand
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR='11.11.11.100/24'
DEFROUTE=no
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME=ib1
ONBOOT=yes
RHEL 9
Use the nmtui tool to activate and edit a connection. Below is an example file /etc/NetworkManager/system-connections/ib0.nmconnection that the tool generates:

[connection]
id=ib0
uuid=<unique uuid>
type=infiniband
interface-name=ib0

[infiniband]
mtu=4200

[ipv4]
address1=10.10.10.100/24
method=manual

[ipv6]
addr-gen-mode=default
method=auto

[proxy]
Below is an example file /etc/NetworkManager/system-connections/ib1.nmconnection that the tool generates:

[connection]
id=ib1
uuid=<unique uuid>
type=infiniband
interface-name=ib1

[infiniband]
mtu=4200

[ipv4]
address1=11.11.11.100/24
method=manual

[ipv6]
addr-gen-mode=default
method=auto

[proxy]
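If you prefer a non-interactive alternative to nmtui, nmcli can create equivalent connections. A minimal sketch, assuming the same interface names and addresses used above:
# nmcli connection add type infiniband ifname ib0 con-name ib0 ipv4.method manual ipv4.addresses 10.10.10.100/24
# nmcli connection add type infiniband ifname ib1 con-name ib1 ipv4.method manual ipv4.addresses 11.11.11.100/24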
-
Enable the ib interfaces:

# ifup ib0
# ifup ib1
-
Verify the IP addresses you will use to connect to the array. Run this command for both ib0 and ib1:

# ip addr show ib0
# ip addr show ib1
As shown in the example below, the IP address for ib0 is 10.10.10.100.

10: ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc pfifo_fast state UP group default qlen 256
    link/infiniband 80:00:02:08:fe:80:00:00:00:00:00:00:00:02:c9:03:00:31:78:51 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    inet 10.10.10.100/24 brd 10.10.10.255 scope global ib0
       valid_lft forever preferred_lft forever
    inet6 fe80::202:c903:31:7851/64 scope link
       valid_lft forever preferred_lft forever
As shown in the example below, the IP address for ib1 is 11.11.11.100.

11: ib1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc pfifo_fast state UP group default qlen 256
    link/infiniband 80:00:02:08:fe:80:00:00:00:00:00:00:00:02:c9:03:00:31:78:52 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    inet 11.11.11.100/24 brd 11.11.11.255 scope global ib1
       valid_lft forever preferred_lft forever
    inet6 fe80::202:c903:31:7852/64 scope link
       valid_lft forever preferred_lft forever
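Optionally, verify basic IP reachability over each interface before moving on; the target addresses below are placeholders for your array's InfiniBand port IPs, which are not part of this example:
# ping -c 3 -I ib0 <array IB port IP on the 10.10.10.0/24 subnet>
# ping -c 3 -I ib1 <array IB port IP on the 11.11.11.0/24 subnet>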
-
Set up the NVMe-oF layer on the host. Create the following file under /etc/modules-load.d/ to load the nvme_rdma kernel module and make sure it is always loaded, even after a reboot:

# cat /etc/modules-load.d/nvme_rdma.conf
nvme_rdma
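If you want the module available immediately, you can also load it by hand instead of waiting for the reboot in the next step; a convenience step only:
# modprobe nvme_rdma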
-
Reboot the host.
To verify the nvme_rdma kernel module is loaded, run this command:

# lsmod | grep nvme
nvme_rdma              36864  0
nvme_fabrics           24576  1 nvme_rdma
nvme_core             114688  5 nvme_rdma,nvme_fabrics
rdma_cm               114688  7 rpcrdma,ib_srpt,ib_srp,nvme_rdma,ib_iser,ib_isert,rdma_ucm
ib_core               393216  15 rdma_cm,ib_ipoib,rpcrdma,ib_srpt,ib_srp,nvme_rdma,iw_cm,ib_iser,ib_umad,ib_isert,rdma_ucm,ib_uverbs,mlx5_ib,qedr,ib_cm
t10_pi                 16384  2 sd_mod,nvme_core
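As a final, optional check that the host side is ready, you can attempt an NVMe-oF discovery against one of the array's InfiniBand ports; the address below is a placeholder for your array's port IP, and 4420 is the conventional NVMe-oF service port:
# nvme discover -t rdma -a <array IB port IP> -s 4420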