Logical interfaces

Oracle databases need access to storage. Logical interfaces (LIFs) are the network plumbing that connects a storage virtual machine (SVM) to the network and, in turn, to the database. Proper LIF design is required to ensure that sufficient bandwidth exists for each database workload and that a failover does not result in the loss of storage services.

This section provides an overview of key LIF design principles for ASA r2 systems, which are optimized for SAN-only environments. For complete details, see the ONTAP network management documentation. As with other aspects of database architecture, the best options for SVM (known as a vserver at the CLI) and LIF design depend heavily on scaling requirements and business needs.

Consider the following primary topics when building a LIF strategy:

  • Performance. Is the network bandwidth sufficient for Oracle workloads?

  • Resiliency. Are there any single points of failure in the design?

  • Manageability. Can the network be scaled nondisruptively?

These topics apply to the end-to-end solution, from the host through the switches to the storage system.

LIF types

There are multiple LIF types. The ONTAP documentation on LIF types provides more complete information on this topic, but from a functional perspective, LIFs can be divided into the following groups:

  • Cluster and node management LIFs. LIFs used to manage the storage cluster.

  • SVM management LIFs. Interfaces that permit access to an SVM through the REST API or ONTAPI (also known as ZAPI) for functions such as snapshot creation or volume resizing. Products such as SnapCenter must have access to an SVM management LIF.

  • Data LIFs. Interfaces for SAN protocols only: FC, iSCSI, NVMe/FC, NVMe/TCP. NAS protocols (NFS, SMB/CIFS) are not supported on ASA r2 systems.

Note It is not possible to configure an interface for both iSCSI (or NVMe/TCP) and management traffic, despite the fact that both use an IP protocol. A separate management LIF is required in iSCSI or NVMe/TCP environments. For resiliency and performance, configure multiple SAN data LIFs per protocol per node and distribute them across different physical ports and fabrics. Unlike AFF/FAS systems, ASA r2 does not allow NFS or SMB traffic, so there is no option to repurpose a NAS data LIF for management.
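
As an illustration of this separation, the following ONTAP CLI sketch creates a dedicated SVM management LIF and two iSCSI data LIFs on separate physical ports of different nodes. All SVM, LIF, node, port, and address values are placeholders, and the built-in service-policy names are assumptions that can vary by ONTAP release, so verify the exact syntax against the ONTAP network management documentation for your ASA r2 system.

Create the dedicated SVM management LIF:

  network interface create -vserver oracle_svm -lif oracle_svm_mgmt -service-policy default-management -home-node node1 -home-port e0e -address 192.168.10.10 -netmask 255.255.255.0

Create iSCSI data LIFs on different physical ports of each node:

  network interface create -vserver oracle_svm -lif oracle_iscsi_n1a -service-policy default-data-blocks -home-node node1 -home-port e5a -address 192.168.20.11 -netmask 255.255.255.0
  network interface create -vserver oracle_svm -lif oracle_iscsi_n2a -service-policy default-data-blocks -home-node node2 -home-port e5b -address 192.168.20.12 -netmask 255.255.255.0

Verify LIF placement and status:

  network interface show -vserver oracle_svm

The same layout principle applies to FC and NVMe LIFs: management and SAN data traffic use separate interfaces, and data LIFs are spread across physical ports and nodes.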

SAN LIF design

LIF design in a SAN environment is relatively simple for one reason: multipathing. All modern SAN implementations allow a client to access data over multiple independent network paths and select the best path or paths for access. As a result, performance with respect to LIF design is simpler to address because SAN clients automatically load-balance I/O across the best available paths.

If a path becomes unavailable, the client automatically selects a different path. The resulting simplicity of design makes SAN LIFs generally more manageable. This does not mean that a SAN environment is always more easily managed, because there are many other aspects of SAN storage that are much more complicated than NFS. It simply means that SAN LIF design is easier.
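
As a brief host-side sketch, assuming a Linux client with dm-multipath, nvme-cli, and optionally the NetApp Host Utilities installed, the following generic commands confirm that multipathing actually sees multiple independent paths:

List the FC or iSCSI LUN paths managed by dm-multipath; each LUN should report several active paths:

  multipath -ll

If the NetApp Host Utilities are installed, correlate those paths back to the SVM and its LIFs:

  sanlun lun show -p

For NVMe/FC or NVMe/TCP, list the NVMe subsystems and the state of each path:

  nvme list-subsys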

Performance

The most important consideration for LIF performance in a SAN environment is bandwidth. For example, a two-node ASA r2 cluster with two 32Gb FC ports per node allows up to 64Gb of FC bandwidth to and from each node. Similarly, for iSCSI or NVMe/TCP, make sure that sufficient 25GbE or 100GbE connectivity is available for the expected Oracle workloads.
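
With two 32Gb ports per node, the theoretical ceiling is therefore 64Gb per node and 128Gb across the HA pair, before protocol overhead. To confirm that the physical ports deliver the planned bandwidth, the following ONTAP CLI checks can be used; output details vary by release, so treat this as a sketch.

Check the negotiated speed and state of the FC target adapters on each node:

  network fcp adapter show

Check the link speed and health of the Ethernet ports used for iSCSI or NVMe/TCP:

  network port show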

Resiliency

SAN LIFs do not fail over in the same way as NAS LIFs. ASA r2 systems rely on host multipathing (MPIO and ALUA) for resiliency. If a SAN LIF becomes unavailable because of a controller failover, the host's multipathing software detects the loss of a path and redirects I/O to an alternate path. ASA r2 might relocate LIFs after a short delay to restore full path availability, but this does not interrupt I/O because active paths already exist on the partner node; the relocation simply restores host access on all defined ports.
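
A minimal way to observe this behavior during failover testing, reusing the placeholder SVM name from the earlier sketch and a Linux host:

On the storage system, confirm where each SAN LIF is currently hosted and whether it is operationally up:

  network interface show -vserver oracle_svm -fields home-node,curr-node,status-oper

On the host, confirm that active paths remain after a takeover; dm-multipath should still report active paths, and NVMe hosts should still show optimized paths through the surviving controller:

  multipath -ll
  nvme list-subsys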

Manageability

There is no need to migrate a LIF in a SAN environment when volumes are relocated within the HA pair. That is because, after the volume move has completed, ONTAP sends a notification to the SAN about a change in paths, and the SAN clients automatically reoptimize. LIF migration with SAN is primarily associated with major physical hardware changes. For example, if a nondisruptive upgrade of the controllers is required, a SAN LIF is migrated to the new hardware. If an FC port is found to be faulty, a LIF can be migrated to an unused port.
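
For example, the generic ONTAP procedure for rehoming a SAN LIF to a different port is to take it down, change its home port, and bring it back up, moving one LIF at a time so that the host always retains other active paths. The names below are placeholders, and the exact steps on ASA r2 should be verified against the current documentation.

  network interface modify -vserver oracle_svm -lif oracle_iscsi_n1a -status-admin down
  network interface modify -vserver oracle_svm -lif oracle_iscsi_n1a -home-node node2 -home-port e5a
  network interface modify -vserver oracle_svm -lif oracle_iscsi_n1a -status-admin up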

Design recommendations

NetApp makes the following recommendations for ASA r2 SAN environments:

  • Do not create more paths than are required. Excessive numbers of paths make overall management more complicated and can cause problems with path failover on some hosts. Furthermore, some hosts have unexpected path limitations for configurations such as SAN booting.

  • Very few configurations should require more than four paths to a LUN. The value of having more than two nodes advertising paths to LUNs is limited because the aggregate hosting a LUN is inaccessible if the node that owns the LUN and its HA partner fail. Creating paths on nodes other than the primary HA pair is not helpful in such a situation.

  • Although the number of visible LUN paths can be managed by selecting which ports are included in FC zones, it is generally easier to include all potential target points in the FC zone and control LUN visibility at the ONTAP level.

  • Use the selective LUN mapping (SLM) feature, which is enabled by default. With SLM, any new LUN is automatically advertised from the node that owns the underlying aggregate and from that node's HA partner. This arrangement avoids the need to create port sets or configure zoning to limit port accessibility. Each LUN is available on the minimum number of nodes required for both optimal performance and resiliency.

  • In the event a LUN must be migrated outside of the two controllers, the additional nodes can be added with the lun mapping add-reporting-nodes command so that the LUNs are advertised on the new nodes (see the sketch after this list). Doing so creates additional SAN paths to the LUNs for LUN migration. However, the host must perform a discovery operation to use the new paths.

  • Do not be overly concerned about indirect traffic. It is best to avoid indirect traffic in a very I/O-intensive environment for which every microsecond of latency is critical, but the visible performance effect is negligible for typical workloads.
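
The following ONTAP CLI sketch illustrates the SLM checks and the reporting-nodes workflow described above. The path, igroup, and aggregate names are placeholders, and because the object model on ASA r2 differs from traditional ONTAP systems, verify the exact commands against the ASA r2 reference before use.

Confirm which nodes currently report paths for each LUN (by default, the owning node and its HA partner):

  lun mapping show -vserver oracle_svm -fields reporting-nodes

Advertise additional paths before migrating a LUN to a different HA pair:

  lun mapping add-reporting-nodes -vserver oracle_svm -path /vol/oradata/lun1 -igroup oracle_hosts -destination-aggregate node3_aggr1

After the migration completes and the host has rescanned for the new paths, trim the path set back to the new owner and its HA partner:

  lun mapping remove-reporting-nodes -vserver oracle_svm -path /vol/oradata/lun1 -igroup oracle_hosts -remote-nodes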