Use Red Hat Enterprise Linux 8.0 with ONTAP
You can use the ONTAP SAN host configuration settings to configure Red Hat Enterprise Linux 8.0 with ONTAP as the target.
Install the Linux Unified Host Utilities
You can download the NetApp Linux Unified Host Utilities software package as a 64-bit .rpm file from the NetApp Support Site.
NetApp strongly recommends installing the Linux Unified Host Utilities, but it is not mandatory. The utilities do not change any settings on your Linux host. The utilities improve management and assist NetApp customer support in gathering information about your configuration.
- Download the 64-bit Linux Unified Host Utilities software package from the NetApp Support Site to your host.
- Install the software package:
  rpm -ivh netapp_linux_unified_host_utilities-7-1.x86_64
You can use the configuration settings provided in this document to configure cloud clients connected to Cloud Volumes ONTAP and Amazon FSx for ONTAP.
SAN Toolkit
The toolkit is installed automatically when you install the NetApp Host Utilities package. This kit provides the sanlun utility, which helps you manage LUNs and HBAs. The sanlun command returns information about the LUNs mapped to your host, multipathing, and information necessary to create initiator groups.
In the following example, the sanlun lun show command returns LUN information.
# sanlun lun show all
Example output:
controller(7mode/E-Series)/                            device     host              lun
vserver(cDOT/FlashRay)   lun-pathname                  filename   adapter  protocol size    Product
---------------------------------------------------------------------------------------------------
data_vserver             /vol/vol1/lun1                /dev/sdb   host16   FCP      120.0g  cDOT
data_vserver             /vol/vol1/lun1                /dev/sdc   host15   FCP      120.0g  cDOT
data_vserver             /vol/vol2/lun2                /dev/sdd   host16   FCP      120.0g  cDOT
data_vserver             /vol/vol2/lun2                /dev/sde   host15   FCP      120.0g  cDOT
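Because sanlun lun show prints one row per path, the output can be post-processed to check how many paths each LUN has. The following is a minimal sketch: the sample rows are copied from the example output above, and on a live host you would pipe the real command output instead of the inlined text.

```shell
# Count paths per LUN pathname in `sanlun lun show` output.
# The sample rows below are copied from the example output above; on a
# live host you would pipe the real command output instead.
sanlun_output='data_vserver /vol/vol1/lun1 /dev/sdb host16 FCP 120.0g cDOT
data_vserver /vol/vol1/lun1 /dev/sdc host15 FCP 120.0g cDOT
data_vserver /vol/vol2/lun2 /dev/sdd host16 FCP 120.0g cDOT
data_vserver /vol/vol2/lun2 /dev/sde host15 FCP 120.0g cDOT'

# Field 2 is the LUN pathname; each matching line is one path.
paths_per_lun=$(printf '%s\n' "$sanlun_output" \
    | awk '{count[$2]++} END {for (lun in count) print lun, count[lun]}' \
    | sort)
printf '%s\n' "$paths_per_lun"
```

Each LUN in the sample has two paths, one per host adapter.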
SAN Booting
If you decide to use SAN booting, it must be supported by your configuration. You can use the NetApp Interoperability Matrix Tool to verify that your OS, HBA, HBA firmware and the HBA boot BIOS, and ONTAP version are supported.
- Map the SAN boot LUN to the host.
- Verify that multiple paths are available.
  Multiple paths become available after the host operating system is up and running on the paths.
- Enable SAN booting in the server BIOS for the ports to which the SAN boot LUN is mapped.
  For information on how to enable the HBA BIOS, see your vendor-specific documentation.
- Reboot the host to verify that the boot was successful.
Multipathing
For Red Hat Enterprise Linux (RHEL) 8.0 the /etc/multipath.conf file must exist, but you do not need to make specific changes to the file. RHEL 8.0 is compiled with all settings required to recognize and correctly manage ONTAP LUNs.
You can use the multipath -ll command to verify the settings for your ONTAP LUNs.
The following sections provide example multipath outputs for a LUN mapped to ASA and non-ASA personas.
All SAN Array configurations
All SAN Array (ASA) configurations optimize all paths to a given LUN, keeping them active. This improves performance by serving I/O operations through all paths at the same time.
The following example displays the correct output for an ONTAP LUN.
# multipath -ll
3600a098038303634722b4d59646c4436 dm-28 NETAPP,LUN C-Mode
size=80G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 11:0:7:1  sdfi 130:64  active ready running
  |- 11:0:9:1  sdiy 8:288   active ready running
  |- 11:0:10:1 sdml 69:464  active ready running
  `- 11:0:11:1 sdpt 131:304 active ready running
A single LUN shouldn't require more than four paths. Having more than four paths might cause path issues during storage failures.
Non-ASA configurations
For non-ASA configurations, there should be two groups of paths with different priorities. The paths with higher priorities are Active/Optimized, meaning they are serviced by the controller where the aggregate is located. The paths with lower priorities are active but are non-optimized because they are served from a different controller. The non-optimized paths are only used when optimized paths are not available.
The following example displays the correct output for an ONTAP LUN with two Active/Optimized paths and two Active/Non-Optimized paths.
# multipath -ll
3600a098038303634722b4d59646c4436 dm-28 NETAPP,LUN C-Mode
size=80G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:7:1  sdfi 130:64  active ready running
| `- 11:0:11:1 sdpt 131:304 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 11:0:9:1  sdiy 8:288   active ready running
  `- 11:0:10:1 sdml 69:464  active ready running
A single LUN shouldn't require more than four paths. Having more than four paths might cause path issues during storage failures.
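To confirm that a non-ASA LUN has two path groups with different priorities, you can extract the prio= values from the multipath -ll output. The following is a minimal sketch with a shortened two-group sample inlined (WWID and device names reused from the examples above); on a live host you would pipe the real multipath -ll output instead.

```shell
# Extract the priority of each path group from `multipath -ll` output.
# A shortened two-group (non-ASA) sample is inlined for illustration.
mpath_output="3600a098038303634722b4d59646c4436 dm-28 NETAPP,LUN C-Mode
size=80G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:7:1  sdfi 130:64  active ready running
\`-+- policy='service-time 0' prio=10 status=enabled
  |- 11:0:9:1  sdiy 8:288   active ready running"

# Each path-group line carries a prio=N token; collect the values.
prios=$(printf '%s\n' "$mpath_output" | grep -o 'prio=[0-9]*' | cut -d= -f2)
printf '%s\n' "$prios"
```

Two distinct values (here 50 and 10) indicate an Active/Optimized group and an Active/Non-Optimized group; a single value across all paths indicates an ASA configuration.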
Recommended Settings
The RHEL 8.0 OS is compiled to recognize ONTAP LUNs and automatically set all configuration parameters correctly for both ASA and non-ASA configurations.
The multipath.conf file must exist for the multipath daemon to start. If this file doesn't exist, you can create an empty, zero-byte file by using the touch /etc/multipath.conf command.
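The existence check and the touch can be combined into a small guard. The following is a minimal sketch, demonstrated on a temporary path because creating the real /etc/multipath.conf requires root privileges.

```shell
# Ensure the multipath config file exists, creating an empty zero-byte
# file when it is missing. A temporary directory stands in for /etc here;
# on a real host the target is /etc/multipath.conf (run as root).
conf_dir=$(mktemp -d)
conf_file="$conf_dir/multipath.conf"

[ -e "$conf_file" ] || touch "$conf_file"

# The file now exists and is empty, which is all multipathd requires.
ls -l "$conf_file"
```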
The first time you create the multipath.conf file, you might need to enable and start the multipath services by using the following commands:

# systemctl enable multipathd
# systemctl start multipathd
There is no requirement to add devices directly to the multipath.conf file, unless you have devices that you do not want multipath to manage or you have existing settings that override defaults. You can exclude unwanted devices by adding the following syntax to the multipath.conf file, replacing <DevId> with the WWID string of the device you want to exclude:

blacklist {
    wwid <DevId>
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^cciss.*"
}
In the following example, you determine the WWID of a device and add the device to the multipath.conf file.
- Determine the WWID:

  # /lib/udev/scsi_id -gud /dev/sda
  360030057024d0730239134810c0cb833

  sda is the local SCSI disk that you want to add to the blacklist.
- Add the WWID to the blacklist stanza in /etc/multipath.conf:

  blacklist {
      wwid 360030057024d0730239134810c0cb833
      devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
      devnode "^hd[a-z]"
      devnode "^cciss.*"
  }
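Rather than editing the stanza by hand, the blacklist entry can be generated from the WWID. The following is a minimal sketch that prints a stanza ready to paste into /etc/multipath.conf; the WWID is the example value determined with scsi_id in the steps above.

```shell
# Build a minimal multipath blacklist stanza for a given WWID. The WWID
# below is the example value from the scsi_id step above.
wwid=360030057024d0730239134810c0cb833

stanza=$(printf 'blacklist {\n    wwid %s\n}' "$wwid")
printf '%s\n' "$stanza"
```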
You should always check your /etc/multipath.conf file, especially in the defaults section, for legacy settings that might be overriding the default settings.
The following table demonstrates the critical multipathd parameters for ONTAP LUNs and the required values. If a host is connected to LUNs from other vendors and any of these parameters are overridden, they will need to be corrected by later stanzas in the multipath.conf file that apply specifically to ONTAP LUNs. If this is not done, the ONTAP LUNs might not work as expected. You should only override these defaults in consultation with NetApp and/or an OS vendor and only when the impact is fully understood.
| Parameter | Setting |
| --- | --- |
| detect_prio | yes |
| dev_loss_tmo | "infinity" |
| failback | immediate |
| fast_io_fail_tmo | 5 |
| features | "2 pg_init_retries 50" |
| flush_on_last_del | "yes" |
| hardware_handler | "0" |
| no_path_retry | queue |
| path_checker | "tur" |
| path_grouping_policy | "group_by_prio" |
| path_selector | "service-time 0" |
| polling_interval | 5 |
| prio | "ontap" |
| product | LUN.* |
| retain_attached_hw_handler | yes |
| rr_weight | "uniform" |
| user_friendly_names | no |
| vendor | NETAPP |
The following example shows how to correct an overridden default. In this case, the multipath.conf file defines values for path_checker and no_path_retry that are not compatible with ONTAP LUNs. If they cannot be removed because of other SAN arrays still attached to the host, these parameters can be corrected specifically for ONTAP LUNs with a device stanza.
defaults {
    path_checker readsector0
    no_path_retry fail
}

devices {
    device {
        vendor "NETAPP "
        product "LUN.*"
        no_path_retry queue
        path_checker tur
    }
}
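A quick way to spot such incompatible defaults before they cause trouble is to grep the configuration text for the known-bad values. The following is a minimal sketch that scans an inlined copy of the defaults section from the example above; on a live host you would read /etc/multipath.conf instead.

```shell
# Count occurrences of defaults that are incompatible with ONTAP LUNs
# (path_checker readsector0 and no_path_retry fail). The config text is
# inlined here; on a real host, read it from /etc/multipath.conf.
conf='defaults {
    path_checker readsector0
    no_path_retry fail
}'

conflicts=$(printf '%s\n' "$conf" \
    | grep -cE 'path_checker[[:space:]]+readsector0|no_path_retry[[:space:]]+fail')
echo "conflicting settings found: $conflicts"
```

A nonzero count means the defaults section needs an ONTAP-specific device stanza such as the one shown above.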
Configure KVM settings
You can use the recommended settings to configure Kernel-based Virtual Machine (KVM) as well. There are no changes required to configure KVM because the LUN is mapped to the hypervisor.
Known issues
The RHEL 8.0 with ONTAP release has the following known issues:
| NetApp Bug ID | Title | Description |
| --- | --- | --- |
|  | Kernel disruption on RHEL8 with QLogic QLE2672 16GB FC during storage failover operations | Kernel disruption might occur during storage failover operations on a Red Hat Enterprise Linux (RHEL) 8 kernel with a QLogic QLE2672 host bus adapter (HBA). The kernel disruption causes the operating system to reboot. The reboot causes application disruption and generates the vmcore file under the /var/crash/ directory if kdump is configured. Use the vmcore file to identify the cause of the failure. In this case, the disruption is in the "kmem_cache_alloc+160" module. It is logged in the vmcore file with the following string: "[exception RIP: kmem_cache_alloc+160]". Reboot the host OS to recover the operating system and then restart the application. |
|  | RHEL8 OS boots up to "emergency mode" when more than 204 SCSI devices are mapped on all Fibre Channel (FC) host bus adapters (HBA) | If a host is mapped with more than 204 SCSI devices during an operating system reboot process, the RHEL8 OS fails to boot up to "normal mode" and enters "emergency mode". This results in most of the host services becoming unavailable. |
|  | Creating a partition on an iSCSI multipath device during the RHEL8 installation is not feasible | iSCSI SAN LUN multipath devices are not listed in disk selection during RHEL 8 installation. Consequently, the multipath service is not enabled on the SAN boot device. |
|  | The "rescan-scsi-bus.sh -a" command does not scan more than 328 devices | If a Red Hat Enterprise Linux 8 host maps with more than 328 SCSI devices, the host OS command "rescan-scsi-bus.sh -a" only scans 328 devices. The host does not discover any remaining mapped devices. |
|  | Remote ports transit to a blocked state on RHEL8 with Emulex LPe16002 16GB FC during storage failover operations | Remote ports transit to a blocked state on RHEL8 with Emulex LPe16002 16GB Fibre Channel (FC) during storage failover operations. When the storage node returns to an optimal state, the LIFs also come up and the remote port state should read "online". Occasionally, the remote port state might continue to read as "blocked" or "not present". This state can lead to a "failed faulty" path to LUNs at the multipath layer. |
|  | Remote ports transit to blocked state on RHEL8 with Emulex LPe32002 32GB FC during storage failover operations | Remote ports transit to a blocked state on RHEL8 with Emulex LPe32002 32GB Fibre Channel (FC) during storage failover operations. When the storage node returns to an optimal state, the LIFs also come up and the remote port state should read "online". Occasionally, the remote port state might continue to read as "blocked" or "not present". This state can lead to a "failed faulty" path to LUNs at the multipath layer. |