Create intercluster LIFs for remote FabricPool tiering with ONTAP S3

If you are enabling remote FabricPool capacity (cloud) tiering using ONTAP S3, you must configure intercluster LIFs. You can configure intercluster LIFs on ports shared with the data network. Doing so reduces the number of ports you need for intercluster networking.

Before you begin
  • The underlying physical or logical network port must be configured to the administrative up status.

  • The LIF service policy must already exist. You can verify both prerequisites with the commands shown below.
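
The following is a minimal check, assuming the node and port names used in the examples in this procedure (cluster01-01 and e0c); the up-admin field reports the port's administrative status, and default-intercluster should appear in the service-policy listing:

    network port show -node cluster01-01 -port e0c -fields up-admin
    network interface service-policy show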

About this task

Intercluster LIFs are not required for local FabricPool tiering or for serving external S3 applications.

Steps
  1. List the ports in the cluster:

    network port show

    The following example shows the network ports in cluster01:

    cluster01::> network port show
                                                                 Speed (Mbps)
    Node   Port      IPspace      Broadcast Domain Link   MTU    Admin/Oper
    ------ --------- ------------ ---------------- ----- ------- ------------
    cluster01-01
           e0a       Cluster      Cluster          up     1500   auto/1000
           e0b       Cluster      Cluster          up     1500   auto/1000
           e0c       Default      Default          up     1500   auto/1000
           e0d       Default      Default          up     1500   auto/1000
    cluster01-02
           e0a       Cluster      Cluster          up     1500   auto/1000
           e0b       Cluster      Cluster          up     1500   auto/1000
           e0c       Default      Default          up     1500   auto/1000
           e0d       Default      Default          up     1500   auto/1000
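
    If the cluster has many ports, you can optionally limit the listing to the IPspace that contains the candidate data ports (Default in this example):

    network port show -ipspace Default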
  2. Create intercluster LIFs on the system SVM:

    network interface create -vserver Cluster -lif LIF_name -service-policy default-intercluster -home-node node -home-port port -address port_IP -netmask netmask

    The following example creates intercluster LIFs cluster01_icl01 and cluster01_icl02:

    cluster01::> network interface create -vserver Cluster -lif cluster01_icl01 -service-policy default-intercluster -home-node cluster01-01 -home-port e0c -address 192.168.1.201 -netmask 255.255.255.0

    cluster01::> network interface create -vserver Cluster -lif cluster01_icl02 -service-policy default-intercluster -home-node cluster01-02 -home-port e0c -address 192.168.1.202 -netmask 255.255.255.0
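
    If a subnet has been created for intercluster addresses, you can instead supply -subnet-name and let ONTAP assign the address and netmask from that subnet. The following is only a sketch; the subnet name ic_subnet is a placeholder:

    network interface create -vserver Cluster -lif cluster01_icl01 -service-policy default-intercluster -home-node cluster01-01 -home-port e0c -subnet-name ic_subnet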
  3. Verify that the intercluster LIFs were created:

    network interface show -service-policy default-intercluster

    cluster01::> network interface show -service-policy default-intercluster
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    cluster01
                cluster01_icl01
                           up/up      192.168.1.201/24   cluster01-01  e0c     true
                cluster01_icl02
                           up/up      192.168.1.202/24   cluster01-02  e0c     true
  4. Verify that the intercluster LIFs are redundant:

    network interface show -service-policy default-intercluster -failover

    The following example shows that the intercluster LIFs cluster01_icl01 and cluster01_icl02 on the e0c port will fail over to the e0d port.

    cluster01::> network interface show -service-policy default-intercluster -failover
             Logical         Home                  Failover        Failover
    Vserver  Interface       Node:Port             Policy          Group
    -------- --------------- --------------------- --------------- --------
    cluster01
             cluster01_icl01 cluster01-01:e0c   local-only      192.168.1.201/24
                                Failover Targets: cluster01-01:e0c,
                                                  cluster01-01:e0d
             cluster01_icl02 cluster01-02:e0c   local-only      192.168.1.201/24
                                Failover Targets: cluster01-02:e0c,
                                                  cluster01-02:e0d
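
After you finish

After the intercluster LIFs are in place, the remote ONTAP S3 bucket is typically attached as a FabricPool cloud tier with the storage aggregate object-store config create command. The following is only a sketch; the object-store name, server address, bucket name, and access keys are placeholders for values from your remote ONTAP S3 environment:

    storage aggregate object-store config create -object-store-name my_ontap_s3 -provider-type ONTAP_S3 -server 192.168.2.101 -container-name fabricpool-bucket -access-key my_access_key -secret-password my_secret_key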