Move NAS data LIFs owned by node1 from node2 to node3 and verify SAN LIFs on node3

Before you relocate aggregates from node2 to node3, you must move the NAS data LIFs belonging to node1 that are currently on node2 from node2 to node3. You also must verify the SAN LIFs on node3.

About this task

Remote LIFs handle traffic to SAN LUNs during the upgrade procedure. Moving SAN LIFs is not necessary for cluster or service health during the upgrade. SAN LIFs are not moved unless they need to be mapped to new ports. You will verify that the LIFs are healthy and located on appropriate ports after you bring node3 online.

Steps
  1. List all the NAS data LIFs not owned by node2 by entering the following command on either node and capturing the output:

    network interface show -role data -curr-node node2 -is-home false -home-node node3
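
    For example, assuming two NAS data LIFs named "data1" and "rads1" in SVM "vs0" (illustrative names), the output might look similar to the following:

    cluster::> network interface show -role data -curr-node node2 -is-home false -home-node node3
                 Logical     Status      Network             Current        Current  Is
    Vserver      Interface   Admin/Oper  Address/Mask        Node           Port     Home
    -----------  ----------  ----------  ------------------  -------------  -------  ----
    vs0
                 data1       up/up       10.63.0.50/18       node2          e0c      false
                 rads1       up/up       10.63.0.51/18       node2          e1a      false
    2 entries were displayed.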

  2. If the cluster is configured for SAN LIFs, record the adapter and switch-port configuration information for the SAN LIFs in this worksheet for use later in the procedure.

    1. List the SAN LIFs on node2 and examine the output:

      network interface show -data-protocol fc*

      The system returns output similar to the following example:

      cluster1::> net int show -data-protocol fc*
        (network interface show)
                   Logical     Status     Network            Current        Current Is
      Vserver      Interface   Admin/Oper Address/Mask       Node           Port    Home
      -----------  ----------  ---------- ------------------ -------------  ------- ----
      svm2_cluster1
                   lif_svm2_cluster1_340
                               up/up      20:02:00:50:56:b0:39:99
                                                             cluster1-01    1b      true
                   lif_svm2_cluster1_398
                               up/up      20:03:00:50:56:b0:39:99
                                                             cluster1-02    1a      true
                   lif_svm2_cluster1_691
                               up/up      20:01:00:50:56:b0:39:99
                                                             cluster1-01    1a      true
                   lif_svm2_cluster1_925
                               up/up      20:04:00:50:56:b0:39:99
                                                             cluster1-02    1b      true
      4 entries were displayed.
    2. List the existing configurations and examine the output:

      fcp adapter show -fields switch-port,fc-wwpn

      The system returns output similar to the following example:

      cluster1::> fcp adapter show -fields switch-port,fc-wwpn
        (network fcp adapter show)
      node         adapter  fc-wwpn                  switch-port
      -----------  -------  -----------------------  -------------
      cluster1-01  0a       50:0a:09:82:9c:13:38:00  ACME Switch:0
      cluster1-01  0b       50:0a:09:82:9c:13:38:01  ACME Switch:1
      cluster1-01  0c       50:0a:09:82:9c:13:38:02  ACME Switch:2
      cluster1-01  0d       50:0a:09:82:9c:13:38:03  ACME Switch:3
      cluster1-01  0e       50:0a:09:82:9c:13:38:04  ACME Switch:4
      cluster1-01  0f       50:0a:09:82:9c:13:38:05  ACME Switch:5
      cluster1-01  1a       50:0a:09:82:9c:13:38:06  ACME Switch:6
      cluster1-01  1b       50:0a:09:82:9c:13:38:07  ACME Switch:7
      cluster1-02  0a       50:0a:09:82:9c:6c:36:00  ACME Switch:0
      cluster1-02  0b       50:0a:09:82:9c:6c:36:01  ACME Switch:1
      cluster1-02  0c       50:0a:09:82:9c:6c:36:02  ACME Switch:2
      cluster1-02  0d       50:0a:09:82:9c:6c:36:03  ACME Switch:3
      cluster1-02  0e       50:0a:09:82:9c:6c:36:04  ACME Switch:4
      cluster1-02  0f       50:0a:09:82:9c:6c:36:05  ACME Switch:5
      cluster1-02  1a       50:0a:09:82:9c:6c:36:06  ACME Switch:6
      cluster1-02  1b       50:0a:09:82:9c:6c:36:07  ACME Switch:7
      16 entries were displayed.
  3. Take one of the following actions:

    If node1…                                           Then…
    --------------------------------------------------  -----------------------------
    Had interface groups or VLANs configured            Go to Step 4.
    Did not have interface groups or VLANs configured   Skip Step 4 and go to Step 5.

  4. Perform the following substeps to migrate any NAS data LIFs hosted on interface groups and VLANs that were originally on node1 from node2 to node3:

    1. Migrate any data LIFs hosted on node2 that previously belonged to node1 on an interface group to a port on node3 that is capable of hosting LIFs on the same network by entering the following command, once for each LIF:

      network interface migrate -vserver vserver_name -lif LIF_name -destination-node node3 -destination-port netport|ifgrp

    2. Modify the home port and home node of the LIF in Substep a to the port and node currently hosting the LIFs by entering the following command, once for each LIF:

      network interface modify -vserver vserver_name -lif LIF_name -home-node node3 -home-port netport|ifgrp

    3. Migrate any data LIF hosted on node2 that previously belonged to node1 on a VLAN port to a port on node3 that is capable of hosting LIFs on the same network by entering the following command, once for each LIF:

      network interface migrate -vserver vserver_name -lif LIF_name -destination-node node3 -destination-port netport|ifgrp

    4. Modify the home port and home node of the LIFs in Substep c to the port and node currently hosting the LIFs by entering the following command, once for each LIF:

      network interface modify -vserver vserver_name -lif LIF_name -home-node node3 -home-port netport|ifgrp
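
      For example, the following commands migrate a hypothetical LIF named "lif_data1" in SVM "vs0" to interface group "a0a" on node3 and then make that node and port its home:

      cluster::> network interface migrate -vserver vs0 -lif lif_data1 -destination-node node3 -destination-port a0a
      cluster::> network interface modify -vserver vs0 -lif lif_data1 -home-node node3 -home-port a0a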

  5. Take one of the following actions:

    If the cluster is configured for…   Then…
    ----------------------------------  -------------------------------------------------
    NAS                                 Complete Step 6 through Step 8, skip Step 9, and
                                        complete Step 10 through Step 12.
    SAN                                 Disable all the SAN LIFs on the node to take them
                                        down for the upgrade, using the following command:

    network interface modify -vserver vserver_name -lif LIF_name -home-node node_to_upgrade -home-port netport|ifgrp -status-admin down
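
    For example, assuming a SAN LIF named "fc_lif1" in SVM "vs_san" that is homed on port "1b" of node2 (hypothetical names):

    cluster::> network interface modify -vserver vs_san -lif fc_lif1 -home-node node2 -home-port 1b -status-admin down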

  6. If you have data ports that are not the same on your platforms, add the ports to the broadcast domain:

    network port broadcast-domain add-ports -ipspace IPspace_name -broadcast-domain mgmt -ports node:port

    The following example adds port "e0a" on node "8200-1" and port "e0i" on node "8060-1" to broadcast domain "mgmt" in the IPspace "Default":

    cluster::> network port broadcast-domain add-ports -ipspace Default -broadcast-domain mgmt -ports 8200-1:e0a, 8060-1:e0i
  7. Migrate each NAS data LIF to node3 by entering the following command, once for each LIF:

    network interface migrate -vserver vserver_name -lif LIF_name -destination-node node3 -destination-port netport|ifgrp
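
    For example, to migrate a hypothetical NAS data LIF named "data1" in SVM "vs0" to port "e0c" on node3:

    cluster::> network interface migrate -vserver vs0 -lif data1 -destination-node node3 -destination-port e0c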

  8. Make sure that the data migration is persistent:

    network interface modify -vserver vserver_name -lif LIF_name -home-port netport|ifgrp -home-node node3
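
    For example, to make port "e0c" on node3 the home of the same hypothetical LIF:

    cluster::> network interface modify -vserver vs0 -lif data1 -home-port e0c -home-node node3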

  9. Confirm that the SAN LIFs are on the correct ports on node3:

    1. Enter the following command and examine its output:

      network interface show -data-protocol iscsi|fcp -home-node node3

      The system returns output similar to the following example:

      cluster::> net int show -data-protocol iscsi|fcp -home-node node3
                    Logical     Status      Network             Current        Current  Is
       Vserver      Interface   Admin/Oper  Address/Mask        Node           Port     Home
       -----------  ----------  ----------  ------------------  -------------  -------  ----
       vs0
                    a0a         up/down     10.63.0.53/24       node3          a0a      true
                    data1       up/up       10.63.0.50/18       node3          e0c      true
                    rads1       up/up       10.63.0.51/18       node3          e1a      true
                    rads2       up/down     10.63.0.52/24       node3          e1b      true
       vs1
                    lif1        up/up       172.17.176.120/24   node3          e0c      true
                    lif2        up/up       172.17.176.121/24   node3          e1a      true
    2. Verify that the new adapter and switch-port configurations are correct by comparing the output from the fcp adapter show command with the configuration information that you recorded in the worksheet in Step 2.

      List the new SAN LIF configurations on node3:

      fcp adapter show -fields switch-port,fc-wwpn

      The system returns output similar to the following example:

      cluster1::> fcp adapter show -fields switch-port,fc-wwpn
        (network fcp adapter show)
      node        adapter fc-wwpn                 switch-port
      ----------- ------- ----------------------- -------------
      cluster1-01 0a      50:0a:09:82:9c:13:38:00 ACME Switch:0
      cluster1-01 0b      50:0a:09:82:9c:13:38:01 ACME Switch:1
      cluster1-01 0c      50:0a:09:82:9c:13:38:02 ACME Switch:2
      cluster1-01 0d      50:0a:09:82:9c:13:38:03 ACME Switch:3
      cluster1-01 0e      50:0a:09:82:9c:13:38:04 ACME Switch:4
      cluster1-01 0f      50:0a:09:82:9c:13:38:05 ACME Switch:5
      cluster1-01 1a      50:0a:09:82:9c:13:38:06 ACME Switch:6
      cluster1-01 1b      50:0a:09:82:9c:13:38:07 ACME Switch:7
      cluster1-02 0a      50:0a:09:82:9c:6c:36:00 ACME Switch:0
      cluster1-02 0b      50:0a:09:82:9c:6c:36:01 ACME Switch:1
      cluster1-02 0c      50:0a:09:82:9c:6c:36:02 ACME Switch:2
      cluster1-02 0d      50:0a:09:82:9c:6c:36:03 ACME Switch:3
      cluster1-02 0e      50:0a:09:82:9c:6c:36:04 ACME Switch:4
      cluster1-02 0f      50:0a:09:82:9c:6c:36:05 ACME Switch:5
      cluster1-02 1a      50:0a:09:82:9c:6c:36:06 ACME Switch:6
      cluster1-02 1b      50:0a:09:82:9c:6c:36:07 ACME Switch:7
      16 entries were displayed.
      Note If a SAN LIF in the new configuration is not on an adapter that is still attached to the same switch-port, it might cause a system outage when you reboot the node.
    3. If node3 has any SAN LIFs or groups of SAN LIFs that are on a port that did not exist on node1 or that need to be mapped to a different port, move them to an appropriate port on node3 by completing the following substeps:

      1. Set the LIF status to "down":

        network interface modify -vserver vserver_name -lif LIF_name -status-admin down

      2. Remove the LIF from the port set:

        portset remove -vserver vserver_name -portset portset_name -port-name port_name

      3. Enter one of the following commands:

        • Move a single LIF:

          network interface modify -vserver vserver_name -lif LIF_name -home-port new_home_port

        • Move all the LIFs on a single nonexistent or incorrect port to a new port:

          network interface modify {-home-port port_on_node1 -home-node node1 -role data} -home-port new_home_port_on_node3

      4. Add the LIFs back to the port set:

        portset add -vserver vserver_name -portset portset_name -port-name port_name

        Note You must move SAN LIFs to a port that has the same link speed as the original port.
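
        For example, the following sequence applies these substeps to a hypothetical SAN LIF named "fc_lif1" in SVM "vs_san", moving it from port "1b" to port "1a" in port set "ps1" (all names are illustrative):

        cluster::> network interface modify -vserver vs_san -lif fc_lif1 -status-admin down
        cluster::> portset remove -vserver vs_san -portset ps1 -port-name 1b
        cluster::> network interface modify -vserver vs_san -lif fc_lif1 -home-port 1a
        cluster::> portset add -vserver vs_san -portset ps1 -port-name 1a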
  10. Modify the status of all LIFs to "up" so the LIFs can accept and send traffic on the node:

    network interface modify -home-port port_name -home-node node3 -lif data -status-admin up
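
    For example, assuming the data LIFs are homed on port "e0c" of node3:

    cluster::> network interface modify -home-port e0c -home-node node3 -lif data -status-admin up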

  11. Verify that the LIFs have been moved to the correct ports and that they have a status of "up" by entering the following command on either node and examining the output:

    network interface show -home-node node3 -role data

  12. If any LIFs are down, set the administrative status of the LIFs to "up" by entering the following command, once for each LIF:

    network interface modify -vserver vserver_name -lif LIF_name -status-admin up
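
    For example, to bring up a hypothetical LIF named "data1" in SVM "vs0":

    cluster::> network interface modify -vserver vs0 -lif data1 -status-admin up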

  13. Send a post-upgrade AutoSupport message to NetApp for node1:

    system node autosupport invoke -node node3 -type all -message "node1 successfully upgraded from platform_old to platform_new"
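
    For example, if node1 was upgraded from a hypothetical FAS8080 to an AFF A400:

    cluster::> system node autosupport invoke -node node3 -type all -message "node1 successfully upgraded from FAS8080 to AFF A400"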