NVMe/FC Host Configuration for AIX with ONTAP
You can enable NVMe over Fibre Channel (NVMe/FC) on IBM AIX and VIOS/PowerVM hosts using ONTAP storage as the target. For additional details on supported configurations, see the NetApp Interoperability Matrix Tool.
The following support is available for NVMe/FC host configuration for an AIX host with ONTAP:
- Beginning with ONTAP 9.13.1, NVMe/FC support is added for IBM AIX 7.2 TL5 SP6, AIX 7.3 TL1 SP2, and VIOS 3.1.4.21 releases with SAN boot support for both physical and virtual stacks. See the IBM documentation for more information on setting up SAN boot support.
- NVMe/FC is supported with Power9 and Power10 IBM servers.
- No separate PCM (Path Control Module), such as the Host Utilities for AIX SCSI Multipath I/O (MPIO) support, is required for NVMe devices.
- Virtualization support with NetApp (VIOS/PowerVM) is introduced with VIOS 3.1.4.21. It is supported only through the NPIV (N_Port ID Virtualization) storage virtualization mode using the Power10 IBM server.
- Verify that you have 32Gb FC Emulex adapters (EN1A, EN1B, EN1L, EN1M) or 64Gb FC adapters (EN1N, EN1P) with adapter firmware 12.4.257.30 or later (see the example after this list).
- If you have a MetroCluster configuration, NetApp recommends changing the default AIX NVMe/FC APD (All Path Down) time to support MetroCluster unplanned switchover events and to prevent the AIX operating system from enforcing a shorter I/O timeout. For additional information and the recommended changes to default settings, refer to NetApp Bugs Online - 1553249.
- By default, the Asymmetric Namespace Access Transition Timeout (ANATT) value for the AIX host OS is 30 seconds. IBM provides an interim fix (ifix) that caps the ANATT value at 60 seconds; you need to install the ifix from the IBM website to ensure that all ONTAP workflows are nondisruptive.
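You can confirm the adapter model and firmware level from the AIX host before you begin. The following is a minimal sketch, assuming the FC adapters enumerate as fcs0, fcs1, and so on (adapter names vary by system); the firmware level is reported in the adapter vital product data:
# lsdev -Cc adapter | grep fcs
# lscfg -vl fcs0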
For NVMe/FC AIX support, you must install an ifix on the GA versions of the AIX OS. The ifix is not required for the VIOS/PowerVM OS. The ifix details are as follows:
- For AIX level 72-TL5-SP6-2320, install the IJ46710s6a.230509.epkg.Z package.
- For AIX level 73-TL1-SP2-2320, install the IJ46711s2a.230509.epkg.Z package.
For more information on managing ifixes, see Managing Interim Fixes on AIX.
You need to install the ifixes on an AIX version with no previously installed ifixes related to devices.pciex.pciexclass.010802.rte on the system. If such ifixes are present, they conflict with the new installation.
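As a hedged example, you can use the AIX emgr command to check for previously installed ifixes and then install the downloaded package; the file name below is the AIX 7.2 package listed above, and it is assumed to be in the current directory:
# emgr -l
# emgr -e IJ46710s6a.230509.epkg.Z
Here, emgr -l lists the interim fixes already installed on the system, and emgr -e installs the specified epkg package.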
The following table shows the HBAs assigned to the AIX LPAR (AIX Logical Partition) or to the physical stack:
Host OS | Power Arch | Power FW version | Mode | Comments |
---|---|---|---|---|
AIX 7.2 TL5 SP6 | Power9 | FW 950 or later | Physical stack | ifix available through TS012877410. |
AIX 7.2 TL5 SP6 | Power10 | FW 1010 or later | Physical stack | SAN booting is supported. ifix available through TS012877410. |
AIX 7.3 TL1 SP2 | Power9 | FW 950 or later | Physical stack | ifix available through TS012877410. |
AIX 7.3 TL1 SP2 | Power10 | FW 1010 or later | Physical and virtual stack | ifix available through TS012877410. |
The following table shows the HBAs assigned to the VIOS with NPIV-enabled support in a virtualized mode:
Host OS | Power Arch | Power FW version | Mode | Comments |
---|---|---|---|---|
VIOS/PowerVM 3.1.4.21 | Power10 | FW 1010 or later | Virtual stack | Support starts from AIX 7.3 TL1 SP2 for the VIOC. |
Known limitations
NVMe/FC host configuration for AIX with ONTAP has the following known limitations:
- QLogic/Marvell 32G FC HBAs on an AIX host do not support NVMe/FC.
- SAN boot is not supported for NVMe/FC devices on Power9 IBM servers.
Multipathing
IBM MPIO (Multi Path I/O), used for NVMe multipathing, is provided by default when you install the AIX OS.
You can verify that NVMe multipathing is enabled for an AIX host by using the lsmpio command:
#[root@aix_server /]: lsmpio -l hdisk1
Example output
name    path_id status  path_status parent connection
hdisk1  8       Enabled Sel,Opt     nvme12 fcnvme0, 9
hdisk1  9       Enabled Sel,Non     nvme65 fcnvme1, 9
hdisk1  10      Enabled Sel,Opt     nvme37 fcnvme1, 9
hdisk1  11      Enabled Sel,Non     nvme60 fcnvme0, 9
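If you need more detail than the path listing, lsmpio can also report device details and per-path statistics. This is a minimal sketch, assuming your AIX level supports these flags and that hdisk1 is the NVMe disk of interest:
# lsmpio -q -l hdisk1
# lsmpio -S -l hdisk1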
Configure NVMe/FC
You can use the following procedure to configure NVMe/FC for Broadcom/Emulex adapters.
- Verify that you are using the supported adapter. For the most current list of supported adapters, see the NetApp Interoperability Matrix Tool.
- By default, NVMe/FC protocol support is enabled on the physical FC adapter; however, it is disabled on the Virtual Fibre Channel (VFC) adapter on the Virtual I/O Server (VIOS).
Retrieve a list of virtual adapters:
$ lsmap -all -npiv
Example output
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost0      U9105.22A.785DB61-V2-C2                 4 s1022-iop-mcc- AIX

Status:LOGGED_IN
FC name:fcs4                    FC loc code:U78DA.ND0.WZS01UY-P0-C7-T0
Ports logged in:3
Flags:0xea<LOGGED_IN,STRIP_MERGE,SCSI_CLIENT,NVME_CLIENT>
VFC client name:fcs0            VFC client DRC:U9105.22A.785DB61-V4-C2
- Enable support for the NVMe/FC protocol on an adapter by running the ioscli vfcctrl command on the VIOS:
$ vfcctrl -enable -protocol nvme -vadapter vfchost0
Example output
The "nvme" protocol for "vfchost0" is enabled.
- Verify that NVMe/FC support has been enabled on the adapter:
# lsattr -El vfchost0
Example output
alt_site_wwpn                WWPN to use - Only set after migration   False
current_wwpn  0              WWPN to use - Only set after migration   False
enable_nvme   yes            Enable or disable NVME protocol for NPIV True
label                        User defined label                       True
limit_intr    false          Limit NPIV Interrupt Sources             True
map_port      fcs4           Physical FC Port                         False
num_per_nvme  0              Number of NPIV NVME queues per range     True
num_per_range 0              Number of NPIV SCSI queues per range     True
- Enable the NVMe/FC protocol for all current adapters or for selected adapters:
  - Enable the NVMe/FC protocol for all adapters:
    - Change the dflt_enabl_nvme attribute value of the viosnpiv0 pseudo device to yes.
    - Set the enable_nvme attribute value to yes for all of the VFC host devices.
# chdev -l viosnpiv0 -a dflt_enabl_nvme=yes
# lsattr -El viosnpiv0
Example output
bufs_per_cmd    10  NPIV Number of local bufs per cmd                     True
dflt_enabl_nvme yes Default NVME Protocol setting for a new NPIV adapter True
num_local_cmds  5   NPIV Number of local cmds per channel                 True
num_per_nvme    8   NPIV Number of NVME queues per range                  True
num_per_range   8   NPIV Number of SCSI queues per range                  True
secure_va_info  no  NPIV Secure Virtual Adapter Information               True
  - Enable the NVMe/FC protocol for selected adapters by changing the enable_nvme value of the VFC host device attribute to yes, as sketched below.
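The following is a minimal sketch for a single adapter, assuming vfchost0 is the VFC host device you want to change; run it from the root shell on the VIOS, as with the chdev example above:
# chdev -l vfchost0 -a enable_nvme=yes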
- Verify that the FC-NVMe Protocol Device has been created on the server:
# [root@aix_server /]: lsdev | grep fcnvme
Example output
fcnvme0 Available 00-00-02 FC-NVMe Protocol Device
fcnvme1 Available 00-01-02 FC-NVMe Protocol Device
- Record the host NQN from the server:
# [root@aix_server /]: lsattr -El fcnvme0
Example output
attach     switch                                                               How this adapter is connected  False
autoconfig available                                                            Configuration State            True
host_nqn   nqn.2014-08.org.nvmexpress:uuid:64e039bd-27d2-421c-858d-8a378dec31e8 Host NQN (NVMe Qualified Name) True
# [root@aix_server /]: lsattr -El fcnvme1
Example output
attach     switch                                                               How this adapter is connected  False
autoconfig available                                                            Configuration State            True
host_nqn   nqn.2014-08.org.nvmexpress:uuid:64e039bd-27d2-421c-858d-8a378dec31e8 Host NQN (NVMe Qualified Name) True
- Check the host NQN and verify that it matches the host NQN string for the corresponding subsystem on the ONTAP array:
::> vserver nvme subsystem host show -vserver vs_s922-55-lpar2
Example output
Vserver          Subsystem               Host NQN
---------------- ----------------------- ----------------------------------------------------------------------
vs_s922-55-lpar2 subsystem_s922-55-lpar2 nqn.2014-08.org.nvmexpress:uuid:64e039bd-27d2-421c-858d-8a378dec31e8
- Verify that the initiator ports are up and running and that you can see the target LIFs.
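On the ONTAP side, a hedged way to confirm this is to list the FC-NVMe data LIFs for the SVM and the NVMe controllers created by the host; the vserver name below is the example SVM used in this document:
::> network interface show -vserver vs_s922-55-lpar2 -data-protocol fc-nvme
::> vserver nvme subsystem controller show -vserver vs_s922-55-lpar2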
Validate NVMe/FC
Verify that the ONTAP namespaces are correctly reflected on the host by running the following command:
# [root@aix_server /]: lsdev -Cc disk |grep NVMe
Example output
hdisk1 Available 00-00-02 NVMe 4K Disk
You can check the multipathing status:
#[root@aix_server /]: lsmpio -l hdisk1
Example output
name    path_id status  path_status parent connection
hdisk1  8       Enabled Sel,Opt     nvme12 fcnvme0, 9
hdisk1  9       Enabled Sel,Non     nvme65 fcnvme1, 9
hdisk1  10      Enabled Sel,Opt     nvme37 fcnvme1, 9
hdisk1  11      Enabled Sel,Non     nvme60 fcnvme0, 9
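You can also cross-check from ONTAP that the namespaces backing these disks are online and mapped to the subsystem. This is a minimal sketch using the example SVM from this document:
::> vserver nvme namespace show -vserver vs_s922-55-lpar2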
Known issues
The NVMe/FC host configuration for AIX with ONTAP has the following known issues:
Burt ID | Title | Description |
---|---|---|
1553249 | AIX NVMe/FC default APD time to be modified for supporting MCC unplanned switchover events | By default, AIX operating systems use an all path down (APD) timeout value of 20 seconds for NVMe/FC. However, ONTAP MetroCluster automatic unplanned switchover (AUSO) and Tiebreaker-initiated switchover workflows might take slightly longer than the APD timeout window, causing I/O errors. |
 | AIX NVMe/FC caps ANATT at 60 seconds instead of the 120 seconds advertised by ONTAP | ONTAP advertises an ANA (asymmetric namespace access) transition timeout of 120 seconds in the controller identify data. Currently, with the ifix, AIX reads the ANA transition timeout from the controller identify data but clamps it to 60 seconds if it exceeds that limit. |
 | AIX NVMe/FC hits EIO after ANATT expiry | For any storage failover (SFO) event, if the ANA transition exceeds the ANA transition timeout cap on a given path, the AIX NVMe/FC host fails with an I/O error despite having alternate healthy paths available to the namespace. |
 | AIX NVMe/FC waits for half or the full ANATT to expire before resuming I/O after an ANA AEN | IBM AIX NVMe/FC does not support some asynchronous event notifications (AENs) that ONTAP publishes, which results in suboptimal ANA handling and reduced performance during SFO operations. |
Troubleshoot
Before troubleshooting any NVMe/FC failures, verify that you are running a configuration that is compliant with the Interoperability Matrix Tool (IMT) specifications. If you are still facing issues, contact NetApp support for further triage.