Using Solaris 11.4 with NetApp ONTAP


Installing the Solaris Host Utilities

You can download the compressed file containing the Host Utilities software packages from the NetApp Support Site. After you have the file, you must uncompress it to get the software packages you need to install the Host Utilities.

  1. Download a copy of the compressed file containing the Host Utilities from the NetApp Support Site to a directory on your host.

  2. Go to the directory containing the download.

  3. Uncompress the file.

    The following example uncompresses files for a SPARC system. For x86-64 platforms, use the x86/x64 package.

    gunzip netapp_solaris_host_utilities_6_2N20170913_0304_sparc.tar.gz

  4. Use the tar xvf command to untar the file.

    tar xvf netapp_solaris_host_utilities_6_2N20170913_0304_sparc.tar

  5. Add the packages that you extracted from the tar file to your host.

    pkgadd -d NTAPSANTool.pkg

    The packages are added to the /opt/NTAP/SANToolkit/bin directory.

To complete the installation, you must configure the host parameters for your environment (MPxIO in this case) using the host_config command.

    The host_config command has the following format:

    /opt/NTAP/SANToolkit/bin/host_config <-setup> <-protocol fcp|iscsi|mixed> <-multipath mpxio|dmp|non> [-noalua] [-mcc 60|90|120]

    The host_config command does the following:

    • Makes setting changes for the Fibre Channel and SCSI drivers for both X86 and SPARC systems

    • Provides SCSI timeout settings for both the MPxIO and DMP configurations

    • Sets the VID/PID information

    • Enables or disables ALUA

    • Configures the ALUA settings used by MPxIO and the SCSI drivers for both X86 and SPARC systems.

  6. Reboot the host.
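Putting the configuration step together, a typical invocation for an FC host using MPxIO might look like the following (flag spelling assumed from the format shown above; verify the options against your Host Utilities version before running it):

```
/opt/NTAP/SANToolkit/bin/host_config -setup -protocol fcp -multipath mpxio
```

A reboot is still required afterward for the driver setting changes to take effect.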

SAN Toolkit

The toolkit is installed automatically when you install the NetApp Host Utilities package. This kit provides the sanlun utility, which helps you manage LUNs and HBAs. The sanlun command returns information about the LUNs mapped to your host, multipathing, and information necessary to create initiator groups.


In the following example, the sanlun lun show command returns LUN information.

#sanlun lun show

controller(7mode)/                 device                                            host             lun
vserver(Cmode)     lun-pathname    filename                                         adapter protocol  size  mode
data_vserver       /vol/vol1/lun1  /dev/rdsk/c0t600A098038314362692451465A2F4F39d0s2  qlc1  FCP       60g   C
data_vserver       /vol/vol2/lun2  /dev/rdsk/c0t600A098038314362705D51465A626475d0s2  qlc1  FCP       20g   C
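Output like the above can also be consumed by scripts. Below is a minimal, hypothetical sketch; the whitespace-separated column layout is assumed from the sample above and may differ across Host Utilities versions.

```python
# Minimal sketch: parse `sanlun lun show` tabular output into records.
# The column layout is assumed from the sample output above.

SAMPLE = """\
data_vserver  /vol/vol1/lun1  /dev/rdsk/c0t600A098038314362692451465A2F4F39d0s2  qlc1  FCP  60g  C
data_vserver  /vol/vol2/lun2  /dev/rdsk/c0t600A098038314362705D51465A626475d0s2  qlc1  FCP  20g  C
"""

COLUMNS = ("vserver", "lun_pathname", "device_filename",
           "host_adapter", "protocol", "size", "mode")

def parse_sanlun(text):
    """Split each data row on whitespace and zip the fields with column names."""
    rows = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) == len(COLUMNS):
            rows.append(dict(zip(COLUMNS, fields)))
    return rows

luns = parse_sanlun(SAMPLE)
print(luns[0]["lun_pathname"])  # /vol/vol1/lun1
```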

SAN Booting

What you’ll need

If you decide to use SAN booting, it must be supported by your configuration. You can use the NetApp Interoperability Matrix Tool to verify that your operating system, HBA, HBA firmware, HBA boot BIOS, and ONTAP version are supported.

SAN booting is the process of setting up a SAN-attached disk (a LUN) as a boot device for a Solaris host.

You can set up a SAN boot LUN to work in a Solaris MPxIO environment using the FC protocol and running the Solaris Host Utilities. The method you use to set up a SAN boot LUN can vary depending on your volume manager and file system. See the Solaris Host Utilities Installation and Setup Guide for details on SAN booting LUNs in a Solaris MPxIO environment.


Multipathing allows you to configure multiple network paths between the host and storage system. If one path fails, traffic continues on the remaining paths. Oracle Solaris I/O Multipathing (MPxIO) is enabled by default for Solaris 11.4; accordingly, the default setting in /kernel/drv/fp.conf is mpxio-disable="no".
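On a default Solaris 11.4 installation, the relevant fp.conf entry therefore looks like the following (MPxIO is enabled because the disable flag is set to "no"):

```
# /kernel/drv/fp.conf (Solaris 11.4 default)
mpxio-disable="no";
```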

Non-ASA Configuration

For a non-ASA configuration, there should be two groups of paths with different priorities. The paths with the higher priorities are Active/Optimized, meaning they are serviced by the controller where the aggregate is located. The paths with the lower priorities are active but are non-optimized because they are served from a different controller. The non-optimized paths are only used when no optimized paths are available.


For a non-ASA configuration, the correct output for an ONTAP LUN shows two Active/Optimized paths and two Active/Non-Optimized paths. The path priorities are displayed in the Access State field for each LUN in the output of the OS native mpathadm show lu <LUN> command.
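As a rough illustration of what the two groups of paths look like, the sketch below tallies per-path access states such as those reported by mpathadm. The sample states are illustrative, not captured from a real system.

```python
# Minimal sketch: group path access states as reported per path
# (for example, from `mpathadm show lu <LUN>`). The sample data below
# is illustrative, not real mpathadm output.
from collections import Counter

# One entry per path: its reported access state. A healthy non-ASA LUN
# on a two-node HA pair shows two optimized paths (local controller)
# and two non-optimized paths (partner controller).
path_states = ["active optimized", "active optimized",
               "active not optimized", "active not optimized"]

counts = Counter(path_states)
assert counts["active optimized"] >= 1, "no optimized paths available"
print(dict(counts))  # {'active optimized': 2, 'active not optimized': 2}
```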

All SAN Array Configuration

In All SAN Array (ASA) configurations, all paths to a given Logical Unit (LUN) are active and optimized. This means I/O can be served through all paths at the same time, thereby enabling better performance.


The following example displays the correct output for an ONTAP LUN:

The output for the sanlun command is the same for ASA and non-ASA configurations.

The path priorities are displayed in the Access State field for each LUN in the output of the OS native mpathadm show lu <LUN> command.

#sanlun lun show -pv sparc-s7-16-49:/vol/solaris_vol_1_0/solaris_lun

                    ONTAP Path: sparc-s7-16-49:/vol/solaris_vol_1_0/solaris_lun
                           LUN: 0
                      LUN Size: 30g
                   Host Device: /dev/rdsk/c0t600A098038314362692451465A2F4F39d0s2
                          Mode: C
            Multipath Provider: Sun Microsystems
              Multipath Policy: Native
Note All SAN Array (ASA) configurations are supported beginning with ONTAP 9.8 for Solaris hosts.

The Host Utilities sets the recommended parameter values for Solaris 11.4 SPARC and x86_64 with NetApp ONTAP LUNs. For additional settings for Solaris 11.4 systems, see Oracle Doc ID 2595926.1.

By default, the Solaris operating system fails I/Os after 20 seconds if all paths to a LUN are lost. This is controlled by the fcp_offline_delay parameter. The default value for fcp_offline_delay is appropriate for standard ONTAP clusters. However, in MetroCluster configurations, the value of fcp_offline_delay must be increased to 120 seconds to ensure that I/O does not time out prematurely during operations, including unplanned failovers. For additional information and recommended changes to default settings, refer to NetApp KB1001373.
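In /etc/system, the MetroCluster tuning described above might be applied as follows (a sketch; confirm the exact parameter name and value against NetApp KB1001373 before changing a production host):

```
* /etc/system: raise the FCP offline delay for MetroCluster configurations
set fcp:fcp_offline_delay = 120
```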

Oracle Solaris Virtualization

  • Solaris virtualization options include Solaris Logical Domains (also called LDOMs or Oracle VM Server for SPARC), Solaris Dynamic Domains, Solaris Zones, and Solaris Containers. These technologies have generally been rebranded as "Oracle Virtual Machines," even though they are based on very different architectures.

  • In some cases, multiple options can be used together such as a Solaris Container within a particular Solaris Logical Domain.

  • NetApp generally supports the use of these virtualization technologies where the overall configuration is supported by Oracle and any partition with direct access to LUNs is listed on the NetApp Interoperability Matrix in a supported configuration. This includes root containers, LDOM IO domains, and LDOMs using NPIV to access LUNs.

  • Partitions and/or virtual machines which use only virtualized storage resources, such as a vdsk, do not need specific qualification as they do not have direct access to NetApp LUNs. Only the partition/VM that has direct access to the underlying LUN, such as an LDOM IO domain, must be found in the NetApp Interoperability Matrix.

When LUNs are used as virtual disk devices within an LDOM, the source of the LUN is masked by virtualization, and the LDOM will not properly detect the block sizes. To prevent this issue, the LDOM operating system must be patched for Oracle Bug 15824910, and a vdc.conf file must be created that sets the block size of the virtual disk to 4096. See Oracle Doc 2157669.1 for more information.
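A minimal vdc.conf for this workaround might look like the following (file path and property name assumed from Oracle Doc 2157669.1; verify against the Oracle documentation for your release):

```
# /platform/sun4v/kernel/drv/vdc.conf
# Force the virtual disk block size to 4096 bytes (Oracle Bug 15824910 workaround)
block-size=4096;
```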

To verify the patch do the following:

  1. Create a zpool.

  2. Run zdb -C against the zpool and verify that the value of ashift is 12.

    If the value of ashift is not 12, verify that the correct patch was installed and recheck the contents of vdc.conf.

    Do not proceed until ashift shows a value of 12.
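The check in step 2 works because ashift is the base-2 logarithm of the allocation block size ZFS uses for the vdev; a quick sketch of the relationship:

```python
# ashift is log2 of the vdev block size, so an ashift of 12 means
# 2**12 = 4096-byte blocks, matching the 4096 block size set for
# the virtual disk in vdc.conf.
def block_size_from_ashift(ashift: int) -> int:
    return 2 ** ashift

print(block_size_from_ashift(12))  # 4096
print(block_size_from_ashift(9))   # 512 (the legacy default)
```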

Note Patches are available for Oracle bug 15824910 on various versions of Solaris. Contact Oracle if assistance is required in determining the best kernel patch.

To ensure that Solaris client applications remain non-disruptive when an unplanned site switchover occurs in a SnapMirror Business Continuity (SM-BC) environment, the following setting must be configured on the Solaris 11.4 host. This setting overrides the failover module f_tpgs to prevent the code path that detects the contradiction from being executed.

Note Beginning with ONTAP 9.9.1, SM-BC configurations are supported on Solaris 11.4 hosts.

Follow the instructions to configure the override parameter:

  1. Create configuration file /etc/driver/drv/scsi_vhci.conf with an entry similar to the following for the NetApp storage type connected to the host:

    scsi-vhci-failover-override =
    "NETAPP  LUN","f_tpgs"
  2. Use devprop and mdb commands to verify the override has been successfully applied:

    root@host-A:~# devprop -v -n /scsi_vhci scsi-vhci-failover-override
    scsi-vhci-failover-override=NETAPP  LUN + f_tpgs
    root@host-A:~# echo "*scsi_vhci_dip::print -x struct dev_info devi_child | ::list struct dev_info devi_sibling| ::print struct dev_info devi_mdi_client| ::print mdi_client_t ct_vprivate| ::print struct scsi_vhci_lun svl_lun_wwn svl_fops_name"| mdb -k

    svl_lun_wwn = 0xa002a1c8960 "600a098038313477543f524539787938"
    svl_fops_name = 0xa00298d69e0 "conf f_tpgs"
Note conf is appended to svl_fops_name when a scsi-vhci-failover-override has been applied.
For additional information and recommended changes to default settings, refer to NetApp KB article Solaris Host support recommended settings in SnapMirror Business Continuity (SM-BC) configuration.

Known Problems and Limitations

  • HUK 6.2 and Solaris 11.4 FC driver binding changes

    Solaris 11.4 and HUK recommendations: the FC driver binding changed from ssd(4D) to sd(4D). Move any configuration you have in ssd.conf to sd.conf as detailed in Oracle Doc ID 2595926.1. The behavior differs between newly installed Solaris 11.4 systems and systems upgraded from 11.3 or earlier versions.

    Oracle ID: Doc ID 2595926.1

  • Solaris LIF problem during GB with Emulex 32G HBA on x86 Arch

    Seen with Emulex firmware version 12.6.x and above on the x86_64 platform.

    Oracle ID: SR 3-24746803021

  • Solaris 11.x cfgadm -c configure resulting in I/O error with End-to-End Emulex configuration

    Running cfgadm -c configure on Emulex End-to-End configurations results in an I/O error. This is fixed in ONTAP 9.5P17, 9.6P14, 9.7P13, and 9.8P2.

  • Abnormal path reporting on Solaris hosts with ASA/PPorts using OS native commands

    Intermittent path reporting issues on Solaris 11.4 with ASA.