NVMe-oF Host Configuration for Oracle Linux 9.4 with ONTAP

NetApp SAN host configurations support the NVMe over Fabrics (NVMe-oF) protocol with Asymmetric Namespace Access (ANA). In NVMe-oF environments, ANA is equivalent to asymmetric logical unit access (ALUA) multipathing in iSCSI and FCP environments. ANA is implemented using the in-kernel NVMe multipath feature.

About this task

The following support and features are available with the NVMe-oF host configuration for Oracle Linux 9.4 with ONTAP storage. You should also review the known limitations before starting the configuration process.

  • Support available:

    • Support for NVMe over TCP (NVMe/TCP) in addition to NVMe over Fibre Channel (NVMe/FC). The NetApp plug-in in the native nvme-cli package displays ONTAP details for both NVMe/FC and NVMe/TCP namespaces.

    • Running both NVMe and SCSI traffic on the same host. For example, you can configure dm-multipath for SCSI mpath devices for SCSI LUNs and use NVMe multipath to configure NVMe-oF namespace devices on the host. An optional dm-multipath precaution is sketched after this list.

      For additional details on supported configurations, see the NetApp Interoperability Matrix Tool.

  • Features available:

    • Beginning with ONTAP 9.12.1, secure in-band authentication is supported for NVMe-oF. You can use secure in-band authentication for NVMe-oF with Oracle Linux 9.4.

    • In-kernel NVMe multipath is enabled for NVMe namespaces by default, so no explicit settings are required.

  • Known limitations:

    • SAN booting using the NVMe-oF protocol is currently not supported.
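
If you run SCSI and NVMe traffic side by side, the following /etc/multipath.conf excerpt is a minimal sketch of an optional precaution that keeps dm-multipath from claiming NVMe namespace devices. Treat it as an assumption to validate for your environment; by default, dm-multipath doesn't manage NVMe devices because the in-kernel NVMe multipath feature handles them.

    # Optional /etc/multipath.conf excerpt (assumption: needed only if
    # dm-multipath in your environment attempts to claim NVMe devices)
    blacklist {
        devnode "^nvme.*"
    }

After editing the file, reload the configuration with systemctl reload multipathd.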

Validate software versions

You can use the following procedure to validate the minimum supported Oracle Linux 9.4 software versions.

Steps
  1. Install Oracle Linux 9.4 GA on the server. After the installation is complete, verify that you are running the specified Oracle Linux 9.4 GA kernel.

    uname -r
    5.15.0-205.149.5.1.el9uek.x86_64
  2. Verify the version of the installed nvme-cli package:

    rpm -qa|grep nvme-cli
    nvme-cli-2.6-5.el9.x86_64
  3. Verify the version of the installed libnvme package:

    rpm -qa|grep libnvme
    libnvme-1.6-1.el9.x86_64
  4. On the Oracle Linux 9.4 host, check the hostnqn string at /etc/nvme/hostnqn:

    cat /etc/nvme/hostnqn
    nqn.2014-08.org.nvmexpress:uuid:9c5d23fe-21c5-472f-9aa4-dc68de0882e9
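    Note If the /etc/nvme/hostnqn file is missing, you can generate one with the nvme gen-hostnqn command:

    nvme gen-hostnqn > /etc/nvme/hostnqn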
  5. Verify that the hostnqn string matches the hostnqn string for the corresponding subsystem on the ONTAP array:

    vserver nvme subsystem host show -vserver vs_coexistence_149
    Vserver Subsystem Priority  Host NQN
    ------- --------- --------  ------------------------------------------------
    vs_coexistence_149
            nvme
                      regular   nqn.2014-08.org.nvmexpress:uuid:9c5d23fe-21c5-472f-9aa4-dc68de0882e9
            nvme_1
                      regular   nqn.2014-08.org.nvmexpress:uuid:9c5d23fe-21c5-472f-9aa4-dc68de0882e9
            nvme_2
                      regular   nqn.2014-08.org.nvmexpress:uuid:9c5d23fe-21c5-472f-9aa4-dc68de0882e9
            nvme_3
                      regular   nqn.2014-08.org.nvmexpress:uuid:9c5d23fe-21c5-472f-9aa4-dc68de0882e9
    4 entries were displayed.
    Note If the hostnqn strings don't match, you can use the vserver modify command to update the hostnqn string on your corresponding ONTAP array subsystem to match the hostnqn string from /etc/nvme/hostnqn on the host.
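    For example, the following sketch (with placeholder values) removes a stale host NQN from a subsystem and adds the correct one:

    vserver nvme subsystem host remove -vserver vs_coexistence_149 -subsystem nvme -host-nqn <stale_host_nqn>
    vserver nvme subsystem host add -vserver vs_coexistence_149 -subsystem nvme -host-nqn nqn.2014-08.org.nvmexpress:uuid:9c5d23fe-21c5-472f-9aa4-dc68de0882e9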

Configure NVMe/FC

You can configure NVMe/FC with Broadcom/Emulex FC or Marvell/QLogic FC adapters. For NVMe/FC configured with a Broadcom adapter, you can enable I/O requests of size 1 MB.

Broadcom/Emulex

Configure NVMe/FC for a Broadcom/Emulex adapter.

Steps
  1. Verify that you are using the supported adapter model:

    1. cat /sys/class/scsi_host/host*/modelname

      LPe32002-M2
      LPe32002-M2
    2. cat /sys/class/scsi_host/host*/modeldesc

      Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
      Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
  2. Verify that you are using the recommended Broadcom lpfc firmware and inbox driver:

    1. cat /sys/class/scsi_host/host*/fwrev

      14.4.317.7, sli-4:2:c
      14.4.317.7, sli-4:2:c
    2. cat /sys/module/lpfc/version

      0:14.2.0.13

      For the most current list of supported adapter driver and firmware versions, see the NetApp Interoperability Matrix Tool.

  3. Verify that lpfc_enable_fc4_type is set to 3:

    cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type

    3
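    Note A value of 3 enables both FCP and NVMe on the lpfc initiator ports. If the value differs, one way to set it persistently is with a modprobe option (a sketch; merge it with any existing lpfc options in the file), then rebuild the initramfs with dracut -f and reboot:

    cat /etc/modprobe.d/lpfc.conf
    options lpfc lpfc_enable_fc4_type=3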
  4. Verify that you can view your initiator ports:

    cat /sys/class/fc_host/host*/port_name

    0x100000109b3c081f
    0x100000109b3c0820
  5. Verify that your initiator ports are online:

    cat /sys/class/fc_host/host*/port_state

    Online
    Online
  6. Verify that the NVMe/FC initiator ports are enabled and that the target ports are visible:

    cat /sys/class/scsi_host/host*/nvme_info

    NVME Initiator Enabled
    XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc0 WWPN x100000109b3c081f WWNN x200000109b3c081f DID x081600 ONLINE
    NVME RPORT       WWPN x2020d039eab0dadc WWNN x201fd039eab0dadc DID x08010c TARGET DISCSRVC ONLINE
    NVME RPORT       WWPN x2024d039eab0dadc WWNN x201fd039eab0dadc DID x08030c TARGET DISCSRVC ONLINE
    
    NVME Statistics
    LS: Xmt 00000027d8 Cmpl 00000027d8 Abort 00000000
    LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 00000000315454fa Issue 00000000314de6a4 OutIO fffffffffff991aa
            abort 00000be4 noxri 00000000 nondlp 00001903 qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 00000c92 Err 0000bda4
    
    NVME Initiator Enabled
    XRI Dist lpfc1 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc1 WWPN x100000109b3c0820 WWNN x200000109b3c0820 DID x081b00 ONLINE
    NVME RPORT       WWPN x2027d039eab0dadc WWNN x201fd039eab0dadc DID x08020c TARGET DISCSRVC ONLINE
    NVME RPORT       WWPN x2025d039eab0dadc WWNN x201fd039eab0dadc DID x08040c TARGET DISCSRVC ONLINE
    
    NVME Statistics
    LS: Xmt 00000026ac Cmpl 00000026ac Abort 00000000
    LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 00000000312a5478 Issue 00000000312465a2 OutIO fffffffffffa112a
            abort 00000b01 noxri 00000000 nondlp 00001ae4 qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 00000b53 Err 0000ba63
Marvell/QLogic

Configure NVMe/FC for a Marvell/QLogic adapter.

Note The native inbox qla2xxx driver included in the Oracle Linux 9.4 GA kernel has the latest fixes. These fixes are essential for ONTAP support.
Steps
  1. Verify that you are running the supported adapter driver and firmware versions:

    cat /sys/class/fc_host/host*/symbolic_name
    QLE2872 FW:v9.15.00 DVR:v10.02.09.100-k
    QLE2872 FW:v9.15.00 DVR:v10.02.09.100-k
  2. Verify that ql2xnvmeenable is set. This enables the Marvell adapter to function as an NVMe/FC initiator:

    cat /sys/module/qla2xxx/parameters/ql2xnvmeenable
    1
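    Note If ql2xnvmeenable isn't set to 1, one way to enable it persistently is with a modprobe option (a sketch; verify for your environment), then rebuild the initramfs with dracut -f and reboot:

    cat /etc/modprobe.d/qla2xxx.conf
    options qla2xxx ql2xnvmeenable=1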

Enable 1MB I/O size (Optional)

ONTAP reports an MDTS (Max Data Transfer Size) of 8 in the Identify Controller data, which means that the maximum I/O request size can be up to 1 MB (2^8 multiplied by the 4 KiB base page size). To issue 1 MB I/O requests for a Broadcom NVMe/FC host, you should increase the value of the lpfc_sg_seg_cnt parameter from the default of 64 to 256.
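
You can confirm the advertised MDTS from the host with the nvme id-ctrl command; the device name below is a placeholder and the output line is illustrative. The 1 MB limit follows from 2^MDTS multiplied by the 4 KiB base page size: 2^8 × 4 KiB = 1024 KiB.

    nvme id-ctrl /dev/nvme0 | grep mdts
    mdts      : 8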

Note These steps don't apply to QLogic NVMe/FC hosts.
Steps
  1. Set the lpfc_sg_seg_cnt parameter to 256 in the /etc/modprobe.d/lpfc.conf file:

    cat /etc/modprobe.d/lpfc.conf
    options lpfc lpfc_sg_seg_cnt=256
  2. Run the dracut -f command, and reboot the host.

  3. Verify that the expected value of lpfc_sg_seg_cnt is 256:

    cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt

Configure NVMe/TCP

The NVMe/TCP protocol doesn't support the auto-connect operation. Instead, you can discover the NVMe/TCP subsystems and namespaces by performing the NVMe/TCP connect or connect-all operations manually.

Steps
  1. Verify that the initiator port can fetch the discovery log page data across the supported NVMe/TCP LIFs:

    nvme discover -t tcp -w host-traddr -a traddr
    nvme discover -t tcp -w 192.168.166.4 -a 192.168.166.56
    
    Discovery Log Number of Records 10, Generation counter 15
    =====Discovery Log Entry 0======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  13
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.cf84a53c81b111ef8446d039ea9ea481:discovery
    traddr:  192.168.165.56
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 1======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  9
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.cf84a53c81b111ef8446d039ea9ea481:discovery
    traddr:  192.168.166.56
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 2======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  13
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.cf84a53c81b111ef8446d039ea9ea481:subsystem.nvme_tcp_2
    traddr:  192.168.165.56
    eflags:  none
    sectype: none
  2. Verify that the other NVMe/TCP initiator-target LIF combinations can successfully fetch discovery log page data:

    nvme discover -t tcp -w host-traddr -a traddr
    nvme discover -t tcp -w 192.168.166.4 -a 192.168.166.56
    nvme discover -t tcp -w 192.168.165.3 -a 192.168.165.56
  3. Run the nvme connect-all command across all the supported NVMe/TCP initiator-target LIFs across the nodes:

    nvme connect-all -t tcp -w host-traddr -a traddr
    nvme connect-all -t tcp -w 192.168.166.4 -a 192.168.166.56
    nvme connect-all -t tcp -w 192.168.165.3 -a 192.168.165.56
    Note Beginning with Oracle Linux 9.4, the default setting for the NVMe/TCP ctrl_loss_tmo timeout is turned off, which means there is no limit on the number of retries (indefinite retry). As a result, you don't need to manually configure a specific ctrl_loss_tmo timeout duration when using the nvme connect or nvme connect-all commands (option -l). With this default behavior, NVMe/TCP controllers don't experience timeouts in the event of a path failure and remain connected indefinitely.
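
To make the NVMe/TCP connections persist across reboots, one option is to record each discovery entry in the /etc/nvme/discovery.conf file and enable the nvmf-autoconnect service shipped with nvme-cli. This is a sketch under the assumption that your nvme-cli packaging provides the service; the addresses repeat the examples above.

    cat /etc/nvme/discovery.conf
    --transport=tcp --traddr=192.168.166.56 --host-traddr=192.168.166.4
    --transport=tcp --traddr=192.168.165.56 --host-traddr=192.168.165.3

    systemctl enable nvmf-autoconnect.service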

Validate NVMe-oF

To support correct operation for ONTAP LUNs, verify that the in-kernel NVMe multipath status, ANA status, and ONTAP namespaces are correct for the NVMe-oF configuration.

Steps
  1. Verify the following NVMe/FC settings on the Oracle Linux 9.4 host:

    1. cat /sys/module/nvme_core/parameters/multipath

      Y
    2. cat /sys/class/nvme-subsystem/nvme-subsys*/model

      NetApp ONTAP Controller
      NetApp ONTAP Controller
    3. cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy

      round-robin
      round-robin
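    Note If iopolicy reports a value other than round-robin, you can change it at runtime through sysfs (a sketch; nvme-subsys0 is a placeholder for your subsystem):

    echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy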
  2. Verify that the namespaces are created and correctly discovered on the host:

    nvme list
    Node         SN                   Model
    ---------------------------------------------------------
    /dev/nvme0n1 81K2iBXAYSG6AAAAAAAB NetApp ONTAP Controller
    /dev/nvme0n2 81K2iBXAYSG6AAAAAAAB NetApp ONTAP Controller
    /dev/nvme0n3 81K2iBXAYSG6AAAAAAAB NetApp ONTAP Controller
    
    
    Namespace  Usage            Format        FW Rev
    -----------------------------------------------------------
    1          3.78GB/10.74GB   4 KiB + 0 B   FFFFFFFF
    2          3.78GB/10.74GB   4 KiB + 0 B   FFFFFFFF
    3          3.78GB/10.74GB   4 KiB + 0 B   FFFFFFFF
  3. Verify that the controller state of each path is live and has the correct ANA status:

    NVMe/FC
    nvme list-subsys /dev/nvme0n1
    nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.5f074d527b7011ef8446d039ea9ea481:subsystem.nvme
                   hostnqn=nqn.2014-08.org.nvmexpress:uuid:060fd513-83be-4c3e-aba1-52e169056dcf
                   iopolicy=round-robin
    \
     +- nvme10 fc traddr=nn-0x201fd039eab0dadc:pn-0x2024d039eab0dadc,host_traddr=nn-0x200000109b3c081f:pn-0x100000109b3c081f live non-optimized
     +- nvme15 fc traddr=nn-0x201fd039eab0dadc:pn-0x2020d039eab0dadc,host_traddr=nn-0x200000109b3c081f:pn-0x100000109b3c081f live optimized
     +- nvme7 fc traddr=nn-0x201fd039eab0dadc:pn-0x2025d039eab0dadc,host_traddr=nn-0x200000109b3c0820:pn-0x100000109b3c0820 live non-optimized
     +- nvme9 fc traddr=nn-0x201fd039eab0dadc:pn-0x2027d039eab0dadc,host_traddr=nn-0x200000109b3c0820:pn-0x100000109b3c0820 live optimized
    NVMe/TCP
    nvme list-subsys /dev/nvme1n22
    nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.cf84a53c81b111ef8446d039ea9ea481:subsystem.nvme_tcp_1
                   hostnqn=nqn.2014-08.org.nvmexpress:uuid:9796c1ec-0d34-11eb-b6b2-3a68dd3bab57
                   iopolicy=round-robin
    \
     +- nvme2 tcp traddr=192.168.166.56,trsvcid=4420,host_traddr=192.168.166.4,src_addr=192.168.166.4 live optimized
     +- nvme4 tcp traddr=192.168.165.56,trsvcid=4420,host_traddr=192.168.165.3,src_addr=192.168.165.3 live non-optimized
  4. Verify that the NetApp plug-in displays the correct values for each ONTAP namespace device:

    Column
    nvme netapp ontapdevices -o column
    Device         Vserver             Namespace Path
    -------------- ------------------- ----------------------------
    /dev/nvme0n1   vs_coexistence_147  /vol/fcnvme_1_1_0/fcnvme_ns
    /dev/nvme0n2   vs_coexistence_147  /vol/fcnvme_1_1_1/fcnvme_ns
    /dev/nvme0n3   vs_coexistence_147  /vol/fcnvme_1_1_2/fcnvme_ns

    NSID  UUID                                  Size
    ----- ------------------------------------- --------
    1     e605babf-1b54-417d-843b-bc14355b70c5  10.74GB
    2     b8dbecc7-14c5-4d84-b948-73c7abf5af43  10.74GB
    3     ba24d1a3-1911-4351-83a9-1c843d04633c  10.74GB
    JSON
    nvme netapp ontapdevices -o json
    {
      "ONTAPdevices":[
        {
          "Device":"/dev/nvme0n1",
          "Vserver":"vs_coexistence_147",
          "Namespace_Path":"/vol/fcnvme_1_1_0/fcnvme_ns",
          "NSID":1,
          "UUID":"e605babf-1b54-417d-843b-bc14355b70c5",
          "Size":"10.74GB",
          "LBA_Data_Size":4096,
          "Namespace_Size":2621440
        },
        {
          "Device":"/dev/nvme0n2",
          "Vserver":"vs_coexistence_147",
          "Namespace_Path":"/vol/fcnvme_1_1_1/fcnvme_ns",
          "NSID":2,
          "UUID":"b8dbecc7-14c5-4d84-b948-73c7abf5af43",
          "Size":"10.74GB",
          "LBA_Data_Size":4096,
          "Namespace_Size":2621440
        },
        {
          "Device":"/dev/nvme0n3",
          "Vserver":"vs_coexistence_147",
          "Namespace_Path":"/vol/fcnvme_1_1_2/fcnvme_ns",
          "NSID":3,
          "UUID":"c236905d-a335-47c4-a4b1-89ae30de45ae",
          "Size":"10.74GB",
          "LBA_Data_Size":4096,
          "Namespace_Size":2621440
        }
      ]
    }

Set up secure in-band authentication

Beginning with ONTAP 9.12.1, secure in-band authentication is supported over NVMe/TCP and NVMe/FC between an Oracle Linux 9.4 host and an ONTAP controller.

To set up secure authentication, each host or controller must be associated with a DH-HMAC-CHAP key, which is a combination of the NQN of the NVMe host or controller and an authentication secret configured by the administrator. To authenticate its peer, an NVMe host or controller must recognize the key associated with the peer.

You can set up secure in-band authentication using the CLI or a config JSON file. If you need to specify different dhchap keys for different subsystems, you must use a config JSON file.

CLI

Set up secure in-band authentication using the CLI.

Steps
  1. Obtain the host NQN:

    cat /etc/nvme/hostnqn
  2. Generate the dhchap key for the Oracle Linux 9.4 host.

    The following describes the gen-dhchap-key command parameters:

    nvme gen-dhchap-key -s optional_secret -l key_length {32|48|64} -m HMAC_function {0|1|2|3} -n host_nqn
    • -s: secret key in hexadecimal characters used to initialize the host key
    • -l: length of the resulting key in bytes
    • -m: HMAC function to use for key transformation (0 = none, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512)
    • -n: host NQN to use for key transformation

    In the following example, a random dhchap key with HMAC set to 3 (SHA-512) is generated.

    # nvme gen-dhchap-key -m 3 -n nqn.2014-08.org.nvmexpress:uuid:9796c1ec-0d34-11eb-b6b2-3a68dd3bab57
    DHHC-1:03:zSq3+upTmknih8+6Ro0yw6KBQNAXjHFrOxQJaE5i916YdM/xsUSTdLkHw2MMmdFuGEslj6+LhNdf5HF0qfroFPgoQpU=:
  3. On the ONTAP controller, add the host and specify both dhchap keys:

    vserver nvme subsystem host add -vserver <svm_name> -subsystem <subsystem> -host-nqn <host_nqn> -dhchap-host-secret <authentication_host_secret> -dhchap-controller-secret <authentication_controller_secret> -dhchap-hash-function {sha-256|sha-512} -dhchap-group {none|2048-bit|3072-bit|4096-bit|6144-bit|8192-bit}
  4. A host supports two types of authentication methods: unidirectional and bidirectional. On the host, connect to the ONTAP controller and specify dhchap keys based on the chosen authentication method (a unidirectional sketch follows the command):

    nvme connect -t tcp -w <host-traddr> -a <tr-addr> -n <subsystem_nqn> -S <authentication_host_secret> -C <authentication_controller_secret>
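    For example, the following unidirectional sketch reuses the subsystem NQN and dhchap key shown earlier in this procedure and omits the -C option:

    nvme connect -t tcp -w 192.168.166.4 -a 192.168.166.56 -n nqn.1992-08.com.netapp:sn.cf84a53c81b111ef8446d039ea9ea481:subsystem.nvme_tcp_1 -S DHHC-1:03:zSq3+upTmknih8+6Ro0yw6KBQNAXjHFrOxQJaE5i916YdM/xsUSTdLkHw2MMmdFuGEslj6+LhNdf5HF0qfroFPgoQpU=: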
  5. Validate the nvme connect authentication command by verifying the host and controller dhchap keys:

    1. Verify the host dhchap keys:

      cat /sys/class/nvme-subsystem/<nvme-subsysX>/nvme*/dhchap_secret
      Example output for a unidirectional configuration:
      cat /sys/class/nvme-subsystem/nvme-subsys0/nvme*/dhchap_secret
      DHHC-1:01:OKIc4l+fs+fmpAj0hMK7ay8tTIzjccUWSCak/G2XjgJpKZeK:
      DHHC-1:01:OKIc4l+fs+fmpAj0hMK7ay8tTIzjccUWSCak/G2XjgJpKZeK:
    2. Verify the controller dhchap keys:

      cat /sys/class/nvme-subsystem/<nvme-subsysX>/nvme*/dhchap_ctrl_secret
      Example output for a bidirectional configuration:
      cat /sys/class/nvme-subsystem/nvme-subsys0/nvme*/dhchap_ctrl_secret
      DHHC-1:03:zSq3+upTmknih8+6Ro0yw6KBQNAXjHFrOxQJaE5i916YdM/xsUSTdLkHw2MMmdFuGEslj6+LhNdf5HF0qfroFPgoQpU=:
      DHHC-1:03:zSq3+upTmknih8+6Ro0yw6KBQNAXjHFrOxQJaE5i916YdM/xsUSTdLkHw2MMmdFuGEslj6+LhNdf5HF0qfroFPgoQpU=:
JSON file

When multiple NVMe subsystems are available on the ONTAP controller, you can use the /etc/nvme/config.json file with the nvme connect-all command.

To generate the JSON file, you can use the -o option. See the nvme-connect-all man page for more syntax options.
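
For example, the following sketch assumes the --dump-config (-O) option available in recent nvme-cli releases, which prints the resulting JSON configuration so that you can redirect it to a file:

    nvme connect-all -t tcp -w 192.168.165.3 -a 192.168.165.56 --dump-config -o json > /etc/nvme/config.json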

Steps
  1. Configure the JSON file:

    cat /etc/nvme/config.json
    [
      {
        "hostnqn":"nqn.2014-08.org.nvmexpress:uuid:9796c1ec-0d34-11eb-b6b2-3a68dd3bab57",
        "hostid":"9796c1ec-0d34-11eb-b6b2-3a68dd3bab57",
        "dhchap_key":"DHHC-1:01:OKIc4l+fs+fmpAj0hMK7ay8tTIzjccUWSCak\/G2XjgJpKZeK:",
        "subsystems":[
          {
            "nqn":"nqn.1992-08.com.netapp:sn.cf84a53c81b111ef8446d039ea9ea481:subsystem.nvme_tcp_1",
            "ports":[
              {
                "transport":"tcp",
                "traddr":"192.168.165.56",
                "host_traddr":"192.168.165.3",
                "trsvcid":"4420",
                "dhchap_key":"DHHC-1:01:OKIc4l+fs+fmpAj0hMK7ay8tTIzjccUWSCak\/G2XjgJpKZeK:",
                "dhchap_ctrl_key":"DHHC-1:03:zSq3+upTmknih8+6Ro0yw6KBQNAXjHFrOxQJaE5i916YdM\/xsUSTdLkHw2MMmdFuGEslj6+LhNdf5HF0qfroFPgoQpU=:"
              },
              {
                "transport":"tcp",
                "traddr":"192.168.166.56",
                "host_traddr":"192.168.166.4",
                "trsvcid":"4420",
                "dhchap_key":"DHHC-1:01:OKIc4l+fs+fmpAj0hMK7ay8tTIzjccUWSCak\/G2XjgJpKZeK:",
                "dhchap_ctrl_key":"DHHC-1:03:zSq3+upTmknih8+6Ro0yw6KBQNAXjHFrOxQJaE5i916YdM\/xsUSTdLkHw2MMmdFuGEslj6+LhNdf5HF0qfroFPgoQpU=:"
              }
            ]
          }
        ]
      }
    ]
    Note In the preceding example, dhchap_key corresponds to dhchap_secret and dhchap_ctrl_key corresponds to dhchap_ctrl_secret.
  2. Connect to the ONTAP controller using the config JSON file:

    nvme connect-all -J /etc/nvme/config.json
    traddr=192.168.165.56 is already connected
    traddr=192.168.165.56 is already connected
    traddr=192.168.165.56 is already connected
    traddr=192.168.165.56 is already connected
    traddr=192.168.165.56 is already connected
    traddr=192.168.165.56 is already connected
    traddr=192.168.166.56 is already connected
    traddr=192.168.166.56 is already connected
    traddr=192.168.166.56 is already connected
    traddr=192.168.166.56 is already connected
    traddr=192.168.166.56 is already connected
    traddr=192.168.166.56 is already connected
  3. Verify that the dhchap secrets have been enabled for the respective controllers for each subsystem:

    1. Verify the host dhchap keys:

      cat /sys/class/nvme-subsystem/nvme-subsys0/nvme0/dhchap_secret
      DHHC-1:01:OKIc4l+fs+fmpAj0hMK7ay8tTIzjccUWSCak/G2XjgJpKZeK:
    2. Verify the controller dhchap keys:

      cat /sys/class/nvme-subsystem/nvme-subsys0/nvme0/dhchap_ctrl_secret
      DHHC-1:03:zSq3+upTmknih8+6Ro0yw6KBQNAXjHFrOxQJaE5i916YdM/xsUSTdLkHw2MMmdFuGEslj6+LhNdf5HF0qfroFPgoQpU=:

Known issues

There are no known issues for the Oracle Linux 9.4 with ONTAP release.