Verify the node1 installation

You must verify the node1 installation with the replacement system modules. Because there is no change to physical ports, you are not required to map the physical ports from the old node1 to the replacement node1.

About this task

After you boot node1 with the replacement controller module, you verify that it is installed correctly. You must wait for node1 to join quorum and then resume the controller replacement operation.

At this point in the procedure, the controller upgrade operation should have paused as node1 attempts to join quorum automatically.

Steps
  1. Verify that node1 has joined quorum:

    cluster show -node node1 -fields health

    The health field in the output should show true.
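
    For reference, a query with the -fields parameter returns one column per requested field. You should see output similar to the following; values are illustrative:

    Cluster::> cluster show -node node1 -fields health
    node  health
    ----- ------
    node1 true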

  2. Verify that node1 is part of the same cluster as node2 and that it is healthy:

    cluster show
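
    You should see output similar to the following example, with both nodes reported as healthy and eligible; node names are illustrative:

    Cluster::> cluster show
    Node                  Health  Eligibility
    --------------------- ------- ------------
    node1                 true    true
    node2                 true    true
    2 entries were displayed.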

  3. Switch to advanced privilege mode:

    set advanced
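
    ONTAP warns you before entering advanced mode, and the prompt changes to include an asterisk (*), similar to the following:

    Cluster::> set advanced
    Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
    Do you want to continue? {y|n}: y
    Cluster::*>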

  4. Check the status of the controller replacement operation and verify that it is paused and in the same state it was in before node1 was halted to perform the physical tasks of installing the new controllers and moving cables:

    system controller replace show

    system controller replace show-details
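
    Because the operation was paused before node1 was halted, the status should still report a paused state, similar to the following truncated example:

    Cluster::*> system controller replace show
    Node          Status                       Error-Action
    ------------  ------------------------     ------------------------------------
    Node1         Paused-for-intervention      Follow the instructions given in
                                               Step Details
    Node2         None
    2 entries were displayed.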

  5. Resume the controller replacement operation:

    system controller replace resume

  6. The controller replacement operation pauses for intervention with the following message:

    Cluster::*> system controller replace show
    Node          Status                       Error-Action
    ------------  ------------------------     ------------------------------------
    Node1         Paused-for-intervention      Follow the instructions given in
                                               Step Details
    Node2         None
    
    Step Details:
    --------------------------------------------
    To complete the Network Reachability task, the ONTAP network configuration must be manually adjusted to match the new physical network configuration of the hardware. This includes:

    1. Re-create the interface group, if needed, before restoring VLANs. For detailed commands and instructions, refer to the "Re-creating VLANs, ifgrps, and broadcast domains" section of the upgrade controller hardware guide for the ONTAP version running on the new controllers.
    2. Run the command "cluster controller-replacement network displaced-vlans show" to check if any VLAN is displaced.
    3. If any VLAN is displaced, run the command "cluster controller-replacement network displaced-vlans restore" to restore the VLAN on the desired port.
    2 entries were displayed.
    Note In this procedure, the section "Re-creating VLANs, ifgrps, and broadcast domains" referenced in the message has been renamed Restore network configuration on node1.
  7. With the controller replacement in a paused state, proceed to Restore network configuration on node1.

Restore network configuration on node1

After you confirm that node1 is in quorum and can communicate with node2, verify that node1’s VLANs, interface groups, and broadcast domains are present on the node. Also, verify that all node1 network ports are configured in their correct broadcast domains.

About this task

For more information on creating and re-creating VLANs, interface groups, and broadcast domains, see the Network Management content linked in References.

Steps
  1. List all the physical ports that are on upgraded node1:

    network port show -node node1

    All physical network ports, VLAN ports, and interface group ports on the node are displayed. From this output, you can see any physical ports that have been moved into the Cluster broadcast domain by ONTAP. You can use this output to aid in deciding which ports should be used as interface group member ports, VLAN base ports, or standalone physical ports for hosting LIFs.
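
    The following truncated example shows the general form of the output; ports, broadcast domains, and values are illustrative:

    Cluster::*> network port show -node node1

    Node: node1
                                                      Speed(Mbps) Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
    --------- ------------ ---------------- ---- ---- ----------- --------
    e0M       Default      Mgmt             up   1500 auto/1000   healthy
    e4a       Cluster      Cluster          up   9000 auto/auto   healthy
    e4e       Cluster      Cluster          up   9000 auto/auto   healthy
    e9a       Default      Default          up   1500 auto/auto   healthy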

  2. List the broadcast domains on the cluster:

    network port broadcast-domain show
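
    You should see output similar to the following truncated example; broadcast domain names, MTU values, and port lists are illustrative:

    Cluster::*> network port broadcast-domain show
    IPspace Broadcast                                     Update
    Name    Domain Name   MTU   Port List                 Status Details
    ------- ------------- ----- ------------------------- --------------
    Cluster Cluster       9000
                                node1:e4a                 complete
                                node1:e4e                 complete
    Default Default       1500
                                node1:e9a                 complete
                                node1:e9b                 complete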

  3. List the network port reachability of all ports on node1:

    network port reachability show -node node1

    You should see output like the following example:

    Cluster::> reachability show -node node1
      (network port reachability show)
    Node      Port     Expected Reachability                Reachability Status
    --------- -------- ------------------------------------ ---------------------
    Node1
              a0a      Default:Default                      ok
              a0a-822  Default:822                          ok
              a0a-823  Default:823                          ok
              e0M      Default:Mgmt                         ok
              e11a     -                                    no-reachability
              e11b     -                                    no-reachability
              e11c     -                                    no-reachability
              e11d     -                                    no-reachability
              e3a      -                                    no-reachability
              e3b      -                                    no-reachability
              e4a      Cluster:Cluster                      ok
              e4e      Cluster:Cluster                      ok
              e5a      -                                    no-reachability
              e7a      -                                    no-reachability
              e9a      Default:Default                      ok
              e9a-822  Default:822                          ok
              e9a-823  Default:823                          ok
              e9b      Default:Default                      ok
              e9b-822  Default:822                          ok
              e9b-823  Default:823                          ok
              e9c      Default:Default                      ok
              e9d      Default:Default                      ok
    22 entries were displayed.

    In the preceding example, node1 booted after the controller replacement. Some ports do not have reachability because there is no physical connectivity. You must repair any ports with a reachability status other than ok.

    Note During the upgrade, the network ports and their connectivity should not change. All ports should reside in the correct broadcast domains and the network port reachability should not change. However, before moving LIFs from node2 back to node1, you must verify the reachability and health status of the network ports.
  4. Repair the reachability for each of the ports on node1 with a reachability status other than ok by using the following command, in the following order:

    network port reachability repair -node node_name -port port_name

    1. Physical ports

    2. VLAN ports

    You should see output like the following example:

    Cluster::> reachability repair -node node1 -port e11b
    Warning: Repairing port "node1:e11b" may cause it to move into a different broadcast domain, which can cause LIFs to be re-homed away from the port. Are you sure you want to continue? {y|n}:

    A warning message, as shown in the preceding example, is expected for ports whose reachability status might differ from the reachability status of the broadcast domain where they currently reside. Review the connectivity of the port and answer y or n as appropriate.

    Verify that all physical ports have their expected reachability:

    network port reachability show

    As the reachability repair is performed, ONTAP attempts to place the ports in the correct broadcast domains. However, if a port’s reachability cannot be determined and does not belong to any of the existing broadcast domains, ONTAP will create new broadcast domains for these ports.

  5. Verify port reachability:

    network port reachability show

    When all ports are correctly configured and added to the correct broadcast domains, the network port reachability show command should report the reachability status as ok for all connected ports, and the status as no-reachability for ports with no physical connectivity. If any port reports a status other than these two, perform the reachability repair and add or remove ports from their broadcast domains as instructed in Step 4.

  6. Verify that all ports have been placed into broadcast domains:

    network port show

  7. Verify that all ports in the broadcast domains have the correct maximum transmission unit (MTU) configured:

    network port broadcast-domain show

  8. Restore the home ports of any displaced LIFs, specifying the Vserver and LIF name for each, by using the following steps:

    1. List any LIFs that are displaced:

      displaced-interface show

    2. Restore LIF home nodes and home ports:

      displaced-interface restore-home-node -node node_name -vserver vserver_name -lif-name LIF_name
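
      For example, to restore a hypothetical data LIF named lif1 on Vserver vs0 to node1 (names are illustrative):

      Cluster::*> displaced-interface restore-home-node -node node1 -vserver vs0 -lif-name lif1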

  9. Verify that all LIFs have a home port and are administratively up:

    network interface show -fields home-port,status-admin
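
    Each LIF should report status-admin as up and show its expected home port, similar to the following example; Vserver and LIF names are illustrative:

    Cluster::*> network interface show -fields home-port,status-admin
    vserver lif           home-port status-admin
    ------- ------------- --------- ------------
    Cluster node1_clus1   e4a       up
    Cluster node1_clus2   e4e       up
    vs0     lif1          e9a       up
    3 entries were displayed.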