Move NAS data LIFs owned by node2 from node3 to node4 and verify SAN LIFs on node4

After you verify the node4 installation and before you relocate node2 aggregates from node3 to node4, you must move the NAS data LIFs owned by node2 that are currently hosted on node3 from node3 to node4. You also need to verify the SAN LIFs on node4.

About this task

Remote LIFs handle traffic to SAN LUNs during the upgrade procedure. Moving SAN LIFs is not necessary for cluster or service health during the upgrade. SAN LIFs are not moved unless they need to be mapped to new ports. You verify that the LIFs are healthy and located on appropriate ports after you bring node4 online.

Steps
  1. List all the NAS data LIFs that are not owned by node3 by entering the following command on either node and capturing the output:

    network interface show -role data -curr-node node3 -is-home false
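
    The system returns output similar to the following example; the SVM and LIF names shown here are illustrative only, not names from this procedure:

    cluster::> network interface show -role data -curr-node node3 -is-home false
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    vs0
                datalif1   up/up      172.17.176.10/24   node3         e0c     false
                datalif2   up/up      172.17.176.11/24   node3         e0d     false
    2 entries were displayed.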

  2. If the cluster is configured for SAN LIFs, record the SAN LIFs and existing configuration information in this worksheet for use later in the procedure.

    1. List the SAN LIFs on node3 and examine the output:

      network interface show -data-protocol fc*

      The system returns output similar to the following example:

      cluster1::> net int show -data-protocol fc*
        (network interface show)
                   Logical     Status     Network            Current        Current Is
      Vserver      Interface   Admin/Oper Address/Mask       Node           Port    Home
      -----------  ----------  ---------- ------------------ -------------  ------- ----
      svm2_cluster1
                   lif_svm2_cluster1_340
                               up/up      20:02:00:50:56:b0:39:99
                                                             cluster1-01    1b      true
                   lif_svm2_cluster1_398
                               up/up      20:03:00:50:56:b0:39:99
                                                             cluster1-02    1a      true
                   lif_svm2_cluster1_691
                               up/up      20:01:00:50:56:b0:39:99
                                                             cluster1-01    1a      true
                   lif_svm2_cluster1_925
                               up/up      20:04:00:50:56:b0:39:99
                                                             cluster1-02    1b      true
      4 entries were displayed.
    2. List the existing configurations and examine the output:

      fcp adapter show -fields switch-port,fc-wwpn

      The system returns output similar to the following example:

      cluster1::> fcp adapter show -fields switch-port,fc-wwpn
        (network fcp adapter show)
      node         adapter  fc-wwpn                  switch-port
      -----------  -------  -----------------------  -------------
      cluster1-01  0a       50:0a:09:82:9c:13:38:00  ACME Switch:0
      cluster1-01  0b       50:0a:09:82:9c:13:38:01  ACME Switch:1
      cluster1-01  0c       50:0a:09:82:9c:13:38:02  ACME Switch:2
      cluster1-01  0d       50:0a:09:82:9c:13:38:03  ACME Switch:3
      cluster1-01  0e       50:0a:09:82:9c:13:38:04  ACME Switch:4
      cluster1-01  0f       50:0a:09:82:9c:13:38:05  ACME Switch:5
      cluster1-01  1a       50:0a:09:82:9c:13:38:06  ACME Switch:6
      cluster1-01  1b       50:0a:09:82:9c:13:38:07  ACME Switch:7
      cluster1-02  0a       50:0a:09:82:9c:6c:36:00  ACME Switch:0
      cluster1-02  0b       50:0a:09:82:9c:6c:36:01  ACME Switch:1
      cluster1-02  0c       50:0a:09:82:9c:6c:36:02  ACME Switch:2
      cluster1-02  0d       50:0a:09:82:9c:6c:36:03  ACME Switch:3
      cluster1-02  0e       50:0a:09:82:9c:6c:36:04  ACME Switch:4
      cluster1-02  0f       50:0a:09:82:9c:6c:36:05  ACME Switch:5
      cluster1-02  1a       50:0a:09:82:9c:6c:36:06  ACME Switch:6
      cluster1-02  1b       50:0a:09:82:9c:6c:36:07  ACME Switch:7
      16 entries were displayed.
  3. Take one of the following actions:

    If node2… Then…

    Had interface groups or VLANs configured

    Go to Step 4.

    Did not have interface groups or VLANs configured

    Skip Step 4 and go to Step 5.

  4. Take the following substeps to migrate from node3 to node4 any NAS data LIFs that are hosted on interface groups and VLANs and that originally belonged to node2; an example sequence follows these substeps.

    1. Migrate any LIFs hosted on node3 that previously belonged to node2 on an interface group to a port on node4 that is capable of hosting LIFs on the same network by entering the following command, once for each LIF:

      network interface migrate -vserver vserver_name -lif lif_name -destination-node node4 -destination-port netport|ifgrp

    2. Modify the home port and home node of the LIFs in Substep a to the port and node currently hosting the LIFs by entering the following command, once for each LIF:

      network interface modify -vserver vserver_name -lif datalif_name -home-node node4 -home-port netport|ifgrp

    3. Migrate any LIFs hosted on node3 that previously belonged to node2 on a VLAN port to a port on node4 that is capable of hosting LIFs on the same network by entering the following command, once for each LIF:

      network interface migrate -vserver vserver_name -lif datalif_name -destination-node node4 -destination-port netport|ifgrp

    4. Modify the home port and home node of the LIFs in Substep c to the port and node currently hosting the LIFs by entering the following command, once for each LIF:

      network interface modify -vserver vserver_name -lif datalif_name -home-node node4 -home-port netport|ifgrp
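
    The following sequence shows how the migrate and modify commands in these substeps fit together for a single hypothetical LIF; the SVM name (vs0), LIF name (datalif1), and interface group name (a0a) are illustrative only:

      cluster::> network interface migrate -vserver vs0 -lif datalif1 -destination-node node4 -destination-port a0a
      cluster::> network interface modify -vserver vs0 -lif datalif1 -home-node node4 -home-port a0a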

  5. Take one of the following actions:

    If the cluster is configured for… Then…

    NAS

    Complete Step 6 through Step 9, skip Step 10, and complete Step 11 through Step 14.

    SAN

    Skip Step 6 through Step 9, and complete Step 10 through Step 14.

    Both NAS and SAN

    Complete Step 6 through Step 14.

  6. If you have data ports that are not the same on your platforms, enter the following command to add the ports to the broadcast domain:

    network port broadcast-domain add-ports -ipspace IPspace_name -broadcast-domain mgmt -ports node:port

    The following example adds port "e0a" on node "6280-1" and port "e0i" on node "8060-1" to broadcast domain mgmt in the IPspace Default:

    cluster::> network port broadcast-domain add-ports -ipspace Default -broadcast-domain mgmt -ports 6280-1:e0a, 8060-1:e0i
  7. Migrate each NAS data LIF to node4 by entering the following command, once for each LIF:

    network interface migrate -vserver vserver-name -lif datalif-name -destination-node node4 -destination-port netport|ifgrp -home-node node4

  8. Make the data migration persistent by entering the following command, once for each LIF:

    network interface modify -vserver vserver_name -lif datalif_name -home-port netport|ifgrp
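
    For example, the following commands migrate one hypothetical LIF and then make the move persistent by updating its home port; the SVM, LIF, and port names are illustrative only:

    cluster::> network interface migrate -vserver vs0 -lif datalif1 -destination-node node4 -destination-port e0c -home-node node4
    cluster::> network interface modify -vserver vs0 -lif datalif1 -home-port e0c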

  9. Verify that all network port links are up by entering the following command to list all the network ports and examining the output:

    network port show

    The following example shows the output of the network port show command, in which all links are up:

    cluster::> network port show
                                                                 Speed (Mbps)
    Node   Port      IPspace      Broadcast Domain Link   MTU    Admin/Oper
    ------ --------- ------------ ---------------- ----- ------- -----------
    node3
           a0a       Default      -                up       1500  auto/1000
           e0M       Default      172.17.178.19/24 up       1500  auto/100
           e0a       Default      -                up       1500  auto/1000
           e0a-1     Default      172.17.178.19/24 up       1500  auto/1000
           e0b       Default      -                up       1500  auto/1000
           e1a       Cluster      Cluster          up       9000  auto/10000
           e1b       Cluster      Cluster          up       9000  auto/10000
    node4
           e0M       Default      172.17.178.19/24 up       1500  auto/100
           e0a       Default      172.17.178.19/24 up       1500  auto/1000
           e0b       Default      -                up       1500  auto/1000
           e1a       Cluster      Cluster          up       9000  auto/10000
           e1b       Cluster      Cluster          up       9000  auto/10000
    12 entries were displayed.
  10. If the output of the network port show command displays network ports that were present on the old nodes but are not available on the new node, delete the old network ports by completing the following substeps; an example sequence follows these substeps:

    1. Enter the advanced privilege level by entering the following command:

      set -privilege advanced

    2. Enter the following command, once for each old network port:

      network port delete -node node_name -port port_name

    3. Return to the admin level by entering the following command:

      set -privilege admin
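
    For example, the following sequence deletes one hypothetical stale port; the node and port names are illustrative only:

      cluster::> set -privilege advanced
      cluster::*> network port delete -node node4 -port e0k
      cluster::*> set -privilege admin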

  11. Confirm that the SAN LIFs are on the correct ports on node4 by completing the following substeps:

    1. Enter the following command and examine its output:

      network interface show -data-protocol iscsi|fcp -home-node node4

      The system returns output similar to the following example:

      cluster::> network interface show -data-protocol iscsi|fcp -home-node node4
                  Logical    Status     Network            Current       Current Is
      Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
      ----------- ---------- ---------- ------------------ ------------- ------- ----
      vs0
                  a0a          up/down  10.63.0.53/24      node4         a0a     true
                  data1        up/up    10.63.0.50/18      node4         e0c     true
                  rads1        up/up    10.63.0.51/18      node4         e1a     true
                  rads2        up/down  10.63.0.52/24      node4         e1b     true
      vs1
                  lif1         up/up    172.17.176.120/24  node4         e0c     true
                  lif2         up/up    172.17.176.121/24  node4
    2. Verify that the new adapter and switch-port configurations are correct by comparing the output from the fcp adapter show command with the configuration information that you recorded in the worksheet in Step 2.

      List the new SAN LIF configurations on node4:

      fcp adapter show -fields switch-port,fc-wwpn

      The system returns output similar to the following example:

      cluster1::> fcp adapter show -fields switch-port,fc-wwpn
        (network fcp adapter show)
      node         adapter  fc-wwpn                  switch-port
      -----------  -------  -----------------------  -------------
      cluster1-01  0a       50:0a:09:82:9c:13:38:00  ACME Switch:0
      cluster1-01  0b       50:0a:09:82:9c:13:38:01  ACME Switch:1
      cluster1-01  0c       50:0a:09:82:9c:13:38:02  ACME Switch:2
      cluster1-01  0d       50:0a:09:82:9c:13:38:03  ACME Switch:3
      cluster1-01  0e       50:0a:09:82:9c:13:38:04  ACME Switch:4
      cluster1-01  0f       50:0a:09:82:9c:13:38:05  ACME Switch:5
      cluster1-01  1a       50:0a:09:82:9c:13:38:06  ACME Switch:6
      cluster1-01  1b       50:0a:09:82:9c:13:38:07  ACME Switch:7
      cluster1-02  0a       50:0a:09:82:9c:6c:36:00  ACME Switch:0
      cluster1-02  0b       50:0a:09:82:9c:6c:36:01  ACME Switch:1
      cluster1-02  0c       50:0a:09:82:9c:6c:36:02  ACME Switch:2
      cluster1-02  0d       50:0a:09:82:9c:6c:36:03  ACME Switch:3
      cluster1-02  0e       50:0a:09:82:9c:6c:36:04  ACME Switch:4
      cluster1-02  0f       50:0a:09:82:9c:6c:36:05  ACME Switch:5
      cluster1-02  1a       50:0a:09:82:9c:6c:36:06  ACME Switch:6
      cluster1-02  1b       50:0a:09:82:9c:6c:36:07  ACME Switch:7
      16 entries were displayed.
      Note If a SAN LIF in the new configuration is not on an adapter that is still attached to the same switch-port, it might cause a system outage when you reboot the node.
    3. If node4 has any SAN LIFs or groups of SAN LIFs that are on a port that did not exist on node2, move them to an appropriate port on node4 by completing the following substeps (an example sequence follows the note below):

      1. Set the LIF status to down:

        network interface modify -vserver vserver_name -lif lif_name -status-admin down

      2. Remove the LIF from the port set:

        portset remove -vserver vserver_name -portset portset_name -port-name port_name

      3. Enter one of the following commands:

        • Move a single LIF:

          network interface modify -lif lif_name -home-port new_home_port

        • Move all the LIFs on a single nonexistent or incorrect port to a new port:

          network interface modify {-home-port port_on_node2 -home-node node2 -role data} -home-port new_home_port_on_node4

      4. Add the LIFs back to the port set:

        portset add -vserver vserver_name -portset portset_name -port-name port_name

    Note You must move SAN LIFs to a port that has the same link speed as the original port.
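
    For example, the following sequence moves one hypothetical SAN LIF through these substeps; the SVM, LIF, portset, and port names are illustrative only:

      cluster::> network interface modify -vserver vs0 -lif fc_lif1 -status-admin down
      cluster::> portset remove -vserver vs0 -portset ps1 -port-name 1a
      cluster::> network interface modify -lif fc_lif1 -home-port 1b
      cluster::> portset add -vserver vs0 -portset ps1 -port-name 1b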
  12. Modify the status of all LIFs to up so the LIFs can accept and send traffic on the node by entering the following command:

    network interface modify -vserver vserver_name -home-port port_name -home-node node4 -lif lif_name -status-admin up
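
    For example, with hypothetical names (the SVM, port, and LIF names are illustrative only):

    cluster::> network interface modify -vserver vs0 -home-port 1b -home-node node4 -lif fc_lif1 -status-admin up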

  13. Verify that any SAN LIFs have been moved to the correct ports and that the LIFs have a status of up by entering the following command on either node and examining the output:

    network interface show -home-node node4 -role data

  14. If any LIFs are down, set the administrative status of the LIFs to up by entering the following command, once for each LIF:

    network interface modify -vserver vserver_name -lif lif_name -status-admin up
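
    For example, the following command brings one hypothetical LIF up; the SVM and LIF names are illustrative only:

    cluster::> network interface modify -vserver vs0 -lif datalif1 -status-admin up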