
Migrate from a two-node switchless cluster to a cluster with Cisco Nexus 3232C cluster switches


If you have a two-node switchless cluster, you can migrate to a two-node switched cluster that includes Cisco Nexus 3232C cluster network switches. This is a nondisruptive procedure.

Review requirements

Migration requirements

Before migration, be sure to review Migration requirements.

What you'll need

Ensure that:

  • Ports are available for node connections. The cluster switches use the Inter-Switch Link (ISL) ports e1/31-32.

  • You have appropriate cables for cluster connections:

    • The nodes with 10 GbE cluster connections require QSFP optical modules with breakout fiber cables or QSFP to SFP+ copper breakout cables.

    • The nodes with 40/100 GbE cluster connections require supported QSFP/QSFP28 optical modules with fiber cables or QSFP/QSFP28 copper direct-attach cables.

    • The cluster switches require the appropriate ISL cabling: 2x QSFP28 fiber or copper direct-attach cables.

  • The configurations are properly set up and functioning.

    The two nodes must be connected and functioning in a two-node switchless cluster setting.

  • All cluster ports are in the up state.

  • The Cisco Nexus 3232C cluster switches are supported.

  • The existing cluster network configuration has the following:

    • A redundant and fully functional Nexus 3232C cluster infrastructure on both switches

    • The latest RCF and NX-OS versions on your switches

    • Management connectivity on both switches

    • Console access to both switches

    • All cluster logical interfaces (LIFs) in the up state without having been migrated

    • Initial customization of the switch

    • All ISL ports enabled and cabled
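Before you begin, you can confirm from the ONTAP CLI that both nodes are healthy and eligible. The following is a minimal sketch using the node names from the examples in this procedure:

cluster::*> cluster show
Node                 Health  Eligibility
-------------------- ------- ------------
n1                   true    true
n2                   true    true
2 entries were displayed.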

Migrate the switches

About the examples

The examples in this procedure use the following switch and node nomenclature:

  • Nexus 3232C cluster switches, C1 and C2.

  • The nodes are n1 and n2.

The examples in this procedure use two nodes, each utilizing two 40 GbE cluster interconnect ports e4a and e4e. The Hardware Universe has details about the cluster ports on your platforms.

  • n1_clus1 is the first cluster logical interface (LIF) to be connected to cluster switch C1 for node n1.

  • n1_clus2 is the second cluster LIF to be connected to cluster switch C2 for node n1.

  • n2_clus1 is the first cluster LIF to be connected to cluster switch C1 for node n2.

  • n2_clus2 is the second cluster LIF to be connected to cluster switch C2 for node n2.

  • The number of 10 GbE and 40/100 GbE ports is defined in the reference configuration files (RCFs) available on the Cisco® Cluster Network Switch Reference Configuration File Download page.

Note

The procedure requires the use of both ONTAP commands and Cisco Nexus 3000 Series Switches commands; ONTAP commands are used unless otherwise indicated.

Step 1: Display and migrate physical and logical ports

  1. If AutoSupport is enabled on this cluster, suppress automatic case creation by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=xh

    x is the duration of the maintenance window in hours.

    Note

    The AutoSupport message notifies technical support of this maintenance task so that automatic case creation is suppressed during the maintenance window.
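    For example, the following invocation suppresses case creation for two hours (the two-hour window is an assumption for illustration):

    cluster::*> system node autosupport invoke -node * -type all -message MAINT=2h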

  2. Determine the administrative or operational status for each cluster interface:

    1. Display the network port attributes:

      network port show -role cluster

      Show example
      cluster::*> network port show -role cluster
        (network port show)
      Node: n1
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- -----
      e4a       Cluster      Cluster          up   9000 auto/40000  -
      e4e       Cluster      Cluster          up   9000 auto/40000  -        -
      Node: n2
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- -----
      e4a       Cluster      Cluster          up   9000 auto/40000  -
      e4e       Cluster      Cluster          up   9000 auto/40000  -
      4 entries were displayed.
    2. Display information about the logical interfaces and their designated home nodes:

      network interface show -role cluster

      Show example
      cluster::*> network interface show -role cluster
       (network interface show)
                  Logical    Status     Network            Current       Current Is
      Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
      ----------- ---------- ---------- ------------------ ------------- ------- ---
      Cluster
                  n1_clus1   up/up      10.10.0.1/24       n1            e4a     true
                  n1_clus2   up/up      10.10.0.2/24       n1            e4e     true
                  n2_clus1   up/up      10.10.0.3/24       n2            e4a     true
                  n2_clus2   up/up      10.10.0.4/24       n2            e4e     true
      
      4 entries were displayed.
    3. Verify that switchless cluster detection is enabled using the advanced privilege command:

      network options detect-switchless-cluster show

      Show example

      The output in the following example shows that switchless cluster detection is enabled:

      cluster::*> network options detect-switchless-cluster show
      Enable Switchless Cluster Detection: true
  3. Verify that the appropriate RCFs and image are installed on the new 3232C switches and make any necessary site customizations such as adding users, passwords, and network addresses.

    You must prepare both switches at this time. If you need to upgrade the RCF and image software, you must follow these steps:

    1. Go to the Cisco Ethernet Switches page on the NetApp Support Site.

    2. Note your switch and the required software versions in the table on that page.

    3. Download the appropriate version of the RCF.

    4. Click CONTINUE on the Description page, accept the license agreement, and then follow the instructions on the Download page to download the RCF.

    5. Download the appropriate version of the image software.

  4. Click CONTINUE on the Description page, accept the license agreement, and then follow the instructions on the Download page to download the image software.
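    After the RCF and image are installed, you can confirm the NX-OS release from each switch console with the show version command. The following is an illustrative sketch; the version string matches the examples later in this procedure and will reflect your installed release:

    C1# show version | include NXOS
      NXOS: version 7.0(3)I6(1)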

  5. On Nexus 3232C switches C1 and C2, disable all node-facing ports, but do not disable the ISL ports e1/31-32.

    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    Show example

    The following example shows ports 1 through 30 being disabled on Nexus 3232C cluster switches C1 and C2 using a configuration supported in RCF NX3232_RCF_v1.0_24p10g_26p100g.txt:

    C1# copy running-config startup-config
    [] 100% Copy complete.
    C1# configure
    C1(config)# int e1/1/1-4,e1/2/1-4,e1/3/1-4,e1/4/1-4,e1/5/1-4,e1/6/1-4,e1/7-30
    C1(config-if-range)# shutdown
    C1(config-if-range)# exit
    C1(config)# exit
    C2# copy running-config startup-config
    [] 100% Copy complete.
    C2# configure
    C2(config)# int e1/1/1-4,e1/2/1-4,e1/3/1-4,e1/4/1-4,e1/5/1-4,e1/6/1-4,e1/7-30
    C2(config-if-range)# shutdown
    C2(config-if-range)# exit
    C2(config)# exit
  6. Connect ports 1/31 and 1/32 on C1 to the same ports on C2 using supported cabling.
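    After cabling, you can optionally confirm from the switch console that both ISL links report up. The following is an illustrative sketch of the show interface brief output trimmed to the ISL ports; the VLAN, mode, and port-channel values are assumptions based on a typical RCF ISL configuration:

    C1# show interface eth1/31-32 brief
    --------------------------------------------------------------------------------
    Ethernet        VLAN    Type Mode   Status  Reason                 Speed     Port
    Interface                                                                    Ch #
    --------------------------------------------------------------------------------
    Eth1/31         1       eth  trunk  up      none                   100G(D)   1
    Eth1/32         1       eth  trunk  up      none                   100G(D)   1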

  7. Verify that the ISL ports are operational on C1 and C2:

    show port-channel summary

    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    Show example

    The following example shows the Cisco show port-channel summary command being used to verify the ISL ports are operational on C1 and C2:

    C1# show port-channel summary
    Flags: D - Down         P - Up in port-channel (members)
           I - Individual   H - Hot-standby (LACP only)        s - Suspended    r - Module-removed
           S - Switched     R - Routed
           U - Up (port-channel)
           M - Not in use. Min-links not met
    --------------------------------------------------------------------------------
          Port-
    Group Channel      Type   Protocol  Member Ports
    -------------------------------------------------------------------------------
    1     Po1(SU)      Eth    LACP      Eth1/31(P)   Eth1/32(P)
    
    C2# show port-channel summary
    Flags: D - Down         P - Up in port-channel (members)
           I - Individual   H - Hot-standby (LACP only)        s - Suspended    r - Module-removed
           S - Switched     R - Routed
           U - Up (port-channel)
           M - Not in use. Min-links not met
    --------------------------------------------------------------------------------
    
    Group Port-        Type   Protocol  Member Ports
          Channel
    --------------------------------------------------------------------------------
    1     Po1(SU)      Eth    LACP      Eth1/31(P)   Eth1/32(P)
  8. Display the list of neighboring devices on the switch.

    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    Show example

    The following example shows the Cisco command show cdp neighbors being used to display the neighboring devices on the switch:

    C1# show cdp neighbors
    Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                      S - Switch, H - Host, I - IGMP, r - Repeater,
                      V - VoIP-Phone, D - Remotely-Managed-Device, s - Supports-STP-Dispute
    Device-ID          Local Intrfce  Hldtme Capability  Platform      Port ID
    C2                 Eth1/31        174    R S I s     N3K-C3232C  Eth1/31
    C2                 Eth1/32        174    R S I s     N3K-C3232C  Eth1/32
    Total entries displayed: 2
    C2# show cdp neighbors
    Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                      S - Switch, H - Host, I - IGMP, r - Repeater,
                      V - VoIP-Phone, D - Remotely-Managed-Device, s - Supports-STP-Dispute
    Device-ID          Local Intrfce  Hldtme Capability  Platform      Port ID
    C1                 Eth1/31        178    R S I s     N3K-C3232C  Eth1/31
    C1                 Eth1/32        178    R S I s     N3K-C3232C  Eth1/32
    Total entries displayed: 2
  9. Display the cluster port connectivity on each node:

    network device-discovery show

    Show example

    The following example shows the cluster port connectivity displayed for a two-node switchless cluster configuration:

    cluster::*> network device-discovery show
                Local  Discovered
    Node        Port   Device              Interface        Platform
    ----------- ------ ------------------- ---------------- ----------------
    n1         /cdp
                e4a    n2                  e4a              FAS9000
                e4e    n2                  e4e              FAS9000
    n2         /cdp
                e4a    n1                  e4a              FAS9000
                e4e    n1                  e4e              FAS9000
  10. Migrate the n1_clus1 and n2_clus1 LIFs to the physical ports of their destination nodes:

    network interface migrate -vserver vserver-name -lif lif-name -source-node source-node-name -destination-node destination-node-name -destination-port destination-port-name

    Show example

    You must execute the command for each local node as shown in the following example:

    cluster::*> network interface migrate -vserver cluster -lif n1_clus1 -source-node n1
    -destination-node n1 -destination-port e4e
    cluster::*> network interface migrate -vserver cluster -lif n2_clus1 -source-node n2
    -destination-node n2 -destination-port e4e

Step 2: Shut down the reassigned LIFs and disconnect the cables

  1. Verify the cluster interfaces have successfully migrated:

    network interface show -role cluster

    Show example

    The following example shows that the "Is Home" status for the n1_clus1 and n2_clus1 LIFs is "false" after the migration is completed:

    cluster::*> network interface show -role cluster
     (network interface show)
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    Cluster
                n1_clus1   up/up      10.10.0.1/24       n1            e4e     false
                n1_clus2   up/up      10.10.0.2/24       n1            e4e     true
                n2_clus1   up/up      10.10.0.3/24       n2            e4e     false
                n2_clus2   up/up      10.10.0.4/24       n2            e4e     true
     4 entries were displayed.
  2. Shut down the cluster ports for the n1_clus1 and n2_clus1 LIFs, which were migrated in step 10:

    network port modify -node node-name -port port-name -up-admin false

    Show example

    You must execute the command for each port as shown in the following example:

    cluster::*> network port modify -node n1 -port e4a -up-admin false
    cluster::*> network port modify -node n2 -port e4a -up-admin false
  3. Verify the connectivity of the remote cluster interfaces:

ONTAP 9.9.1 and later

You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:

network interface check cluster-connectivity start and network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait several seconds before running the show command to display the details.

cluster1::*> network interface check cluster-connectivity show
                                  Source           Destination      Packet
Node   Date                       LIF              LIF              Loss
------ -------------------------- ---------------- ---------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2         n2_clus1         none
       3/5/2022 19:21:20 -06:00   n1_clus2         n2_clus2         none

n2
       3/5/2022 19:21:18 -06:00   n2_clus2         n1_clus1         none
       3/5/2022 19:21:20 -06:00   n2_clus2         n1_clus2         none
All ONTAP releases

For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>

cluster1::*> cluster ping-cluster -node local
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1        e4a    10.10.0.1
Cluster n1_clus2 n1        e4e    10.10.0.2
Cluster n2_clus1 n2        e4a    10.10.0.3
Cluster n2_clus2 n2        e4e    10.10.0.4
Local = 10.10.0.1 10.10.0.2
Remote = 10.10.0.3 10.10.0.4
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 4 path(s)
Basic connectivity fails on 0 path(s) ................
Detected 9000 byte MTU on 4 path(s):
    Local 10.10.0.1 to Remote 10.10.0.3
    Local 10.10.0.1 to Remote 10.10.0.4
    Local 10.10.0.2 to Remote 10.10.0.3
    Local 10.10.0.2 to Remote 10.10.0.4
Larger than PMTU communication succeeds on 4 path(s)
RPC status:
1 paths up, 0 paths down (tcp check)
1 paths up, 0 paths down (udp check)
  4. Disconnect the cable from e4a on node n1.

    You can refer to the running configuration and connect the first 40 GbE port on switch C1 (port 1/7 in this example) to e4a on n1 using cabling supported for Nexus 3232C switches.

Step 3: Enable the cluster ports

  1. Disconnect the cable from e4a on node n2.

    You can refer to the running configuration and connect e4a to the next available 40 GbE port on C1, port 1/8, using supported cabling.

  2. Enable all node-facing ports on C1.

    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    Show example

    The following example shows ports 1 through 30 being enabled on Nexus 3232C cluster switch C1 using the configuration supported in RCF NX3232_RCF_v1.0_24p10g_26p100g.txt:

    C1# configure
    C1(config)# int e1/1/1-4,e1/2/1-4,e1/3/1-4,e1/4/1-4,e1/5/1-4,e1/6/1-4,e1/7-30
    C1(config-if-range)# no shutdown
    C1(config-if-range)# exit
    C1(config)# exit
  3. Enable the first cluster port, e4a, on each node:

    network port modify -node node-name -port port-name -up-admin true

    Show example
    cluster::*> network port modify -node n1 -port e4a -up-admin true
    cluster::*> network port modify -node n2 -port e4a -up-admin true
  4. Verify that the cluster ports are up on both nodes:

    network port show -role cluster

    Show example
    cluster::*> network port show -role cluster
      (network port show)
    Node: n1
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- -----
    e4a       Cluster      Cluster          up   9000 auto/40000  -
    e4e       Cluster      Cluster          up   9000 auto/40000  -        -
    
    Node: n2
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- -----
    e4a       Cluster      Cluster          up   9000 auto/40000  -
    e4e       Cluster      Cluster          up   9000 auto/40000  -
    
    4 entries were displayed.
  5. For each node, revert all of the migrated cluster interconnect LIFs:

    network interface revert -vserver cluster -lif lif-name

    Show example

    You must revert each LIF to its home port individually as shown in the following example:

    cluster::*> network interface revert -vserver cluster -lif n1_clus1
    cluster::*> network interface revert -vserver cluster -lif n2_clus1
  6. Verify that all the LIFs are now reverted to their home ports:

    network interface show -role cluster

    The Is Home column should display a value of true for all of the ports listed in the Current Port column. If the displayed value is false, the port has not been reverted.

    Show example
    cluster::*> network interface show -role cluster
     (network interface show)
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    Cluster
                n1_clus1   up/up      10.10.0.1/24       n1            e4a     true
                n1_clus2   up/up      10.10.0.2/24       n1            e4e     true
                n2_clus1   up/up      10.10.0.3/24       n2            e4a     true
                n2_clus2   up/up      10.10.0.4/24       n2            e4e     true
    4 entries were displayed.

Step 4: Enable the reassigned LIFs

  1. Display the cluster port connectivity on each node:

    network device-discovery show

    Show example
    cluster::*> network device-discovery show
                Local  Discovered
    Node        Port   Device              Interface        Platform
    ----------- ------ ------------------- ---------------- ----------------
    n1         /cdp
                e4a    C1                  Ethernet1/7      N3K-C3232C
                e4e    n2                  e4e              FAS9000
    n2         /cdp
                e4a    C1                  Ethernet1/8      N3K-C3232C
                e4e    n1                  e4e              FAS9000
  2. From the console of each node, migrate clus2 to port e4a:

    network interface migrate -vserver vserver-name -lif lif-name -source-node source-node-name -destination-node destination-node-name -destination-port destination-port-name

    Show example

    You must migrate each LIF individually as shown in the following example:

    cluster::*> network interface migrate -vserver cluster -lif n1_clus2 -source-node n1
    -destination-node n1 -destination-port e4a
    cluster::*> network interface migrate -vserver cluster -lif n2_clus2 -source-node n2
    -destination-node n2 -destination-port e4a
  3. Shut down the cluster ports for the clus2 LIFs on both nodes:

    network port modify

    Show example

    The following example shows the specified ports being set to false, shutting the ports down on both nodes:

    cluster::*> network port modify -node n1 -port e4e -up-admin false
    cluster::*> network port modify -node n2 -port e4e -up-admin false
  4. Verify the cluster LIF status:

    network interface show

    Show example
    cluster::*> network interface show -role cluster
     (network interface show)
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    Cluster
                n1_clus1   up/up      10.10.0.1/24       n1            e4a     true
                n1_clus2   up/up      10.10.0.2/24       n1            e4a     false
                n2_clus1   up/up      10.10.0.3/24       n2            e4a     true
                n2_clus2   up/up      10.10.0.4/24       n2            e4a     false
    4 entries were displayed.
  5. Disconnect the cable from e4e on node n1.

    You can refer to the running configuration and connect the first 40 GbE port on switch C2 (port 1/7 in this example) to e4e on node n1, using the appropriate cabling for the Nexus 3232C switch model.

  6. Disconnect the cable from e4e on node n2.

    You can refer to the running configuration and connect e4e to the next available 40 GbE port on C2, port 1/8, using the appropriate cabling for the Nexus 3232C switch model.

  7. Enable all node-facing ports on C2.

    Show example

    The following example shows ports 1 through 30 being enabled on Nexus 3232C cluster switch C2 using the configuration supported in RCF NX3232_RCF_v1.0_24p10g_26p100g.txt:

    C2# configure
    C2(config)# int e1/1/1-4,e1/2/1-4,e1/3/1-4,e1/4/1-4,e1/5/1-4,e1/6/1-4,e1/7-30
    C2(config-if-range)# no shutdown
    C2(config-if-range)# exit
    C2(config)# exit
  8. Enable the second cluster port, e4e, on each node:

    network port modify

    Show example

    The following example shows the second cluster port e4e being brought up on each node:

    cluster::*> network port modify -node n1 -port e4e -up-admin true
    cluster::*> network port modify -node n2 -port e4e -up-admin true
  9. For each node, revert all of the migrated cluster interconnect LIFs:

    network interface revert

    Show example

    The following example shows the migrated LIFs being reverted to their home ports.

    cluster::*> network interface revert -vserver Cluster -lif n1_clus2
    cluster::*> network interface revert -vserver Cluster -lif n2_clus2
  10. Verify that all of the cluster interconnect ports are now reverted to their home ports:

    network interface show -role cluster

    The Is Home column should display a value of true for all of the ports listed in the Current Port column. If the displayed value is false, the port has not been reverted.

    Show example
    cluster::*> network interface show -role cluster
     (network interface show)
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    Cluster
                n1_clus1   up/up      10.10.0.1/24       n1            e4a     true
                n1_clus2   up/up      10.10.0.2/24       n1            e4e     true
                n2_clus1   up/up      10.10.0.3/24       n2            e4a     true
                n2_clus2   up/up      10.10.0.4/24       n2            e4e     true
    4 entries were displayed.
  11. Verify that all of the cluster interconnect ports are in the up state:

    network port show -role cluster
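    The following output is a sketch that mirrors the port display from earlier in this procedure; all four cluster ports should report a Link value of up:

    cluster::*> network port show -role cluster
      (network port show)
    Node: n1
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- -----
    e4a       Cluster      Cluster          up   9000 auto/40000  -        -
    e4e       Cluster      Cluster          up   9000 auto/40000  -        -

    Node: n2
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- -----
    e4a       Cluster      Cluster          up   9000 auto/40000  -        -
    e4e       Cluster      Cluster          up   9000 auto/40000  -        -

    4 entries were displayed.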

  12. Display the cluster switch port numbers through which each cluster port is connected to each node:

    network device-discovery show

    Show example
    cluster::*> network device-discovery show
                Local  Discovered
    Node        Port   Device              Interface        Platform
    ----------- ------ ------------------- ---------------- ----------------
    n1          /cdp
                e4a    C1                  Ethernet1/7      N3K-C3232C
                e4e    C2                  Ethernet1/7      N3K-C3232C
    n2          /cdp
                e4a    C1                  Ethernet1/8      N3K-C3232C
                e4e    C2                  Ethernet1/8      N3K-C3232C
  13. Display discovered and monitored cluster switches:

    system cluster-switch show

    Show example
    cluster::*> system cluster-switch show
    
    Switch                      Type               Address          Model
    --------------------------- ------------------ ---------------- ---------------
    C1                          cluster-network    10.10.1.101      NX3232CV
    Serial Number: FOX000001
    Is Monitored: true
    Reason:
    Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 7.0(3)I6(1)
    Version Source: CDP
    
    C2                          cluster-network     10.10.1.102      NX3232CV
    Serial Number: FOX000002
    Is Monitored: true
    Reason:
    Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 7.0(3)I6(1)
    Version Source: CDP
    2 entries were displayed.
  14. Verify that switchless cluster detection changed the switchless cluster option to disabled:

    network options switchless-cluster show
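    The value should now be false, indicating that ONTAP has detected the cluster switches. The following output is an illustrative sketch:

    cluster::*> network options switchless-cluster show
    Enable Switchless Cluster: false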

  15. Verify the connectivity of the remote cluster interfaces:

ONTAP 9.9.1 and later

You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:

network interface check cluster-connectivity start and network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait several seconds before running the show command to display the details.

cluster1::*> network interface check cluster-connectivity show
                                  Source           Destination      Packet
Node   Date                       LIF              LIF              Loss
------ -------------------------- ---------------- ---------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2         n2_clus1         none
       3/5/2022 19:21:20 -06:00   n1_clus2         n2_clus2         none

n2
       3/5/2022 19:21:18 -06:00   n2_clus2         n1_clus1         none
       3/5/2022 19:21:20 -06:00   n2_clus2         n1_clus2         none
All ONTAP releases

For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>

cluster1::*> cluster ping-cluster -node local
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1        e4a    10.10.0.1
Cluster n1_clus2 n1        e4e    10.10.0.2
Cluster n2_clus1 n2        e4a    10.10.0.3
Cluster n2_clus2 n2        e4e    10.10.0.4
Local = 10.10.0.1 10.10.0.2
Remote = 10.10.0.3 10.10.0.4
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 4 path(s)
Basic connectivity fails on 0 path(s) ................
Detected 9000 byte MTU on 4 path(s):
    Local 10.10.0.1 to Remote 10.10.0.3
    Local 10.10.0.1 to Remote 10.10.0.4
    Local 10.10.0.2 to Remote 10.10.0.3
    Local 10.10.0.2 to Remote 10.10.0.4
Larger than PMTU communication succeeds on 4 path(s)
RPC status:
1 paths up, 0 paths down (tcp check)
1 paths up, 0 paths down (udp check)
  16. If you suppressed automatic case creation, re-enable it by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=END