Verify the node2 installation

You must verify the node2 installation with the replacement system modules. Because there is no change to physical ports, you are not required to map the physical ports from the old node2 to the replacement node2.

About this task

After you boot node2 with the replacement system module, verify that it is installed correctly. You must wait for node2 to join quorum and then resume the controller replacement operation.

At this point in the procedure, the operation pauses while node2 joins quorum.

Steps
  1. Verify that node2 has joined quorum:

    cluster show -node node2 -fields health

    The output of the health field should be true.
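
    You should see output similar to the following illustrative example; the exact formatting can differ slightly between ONTAP versions:

    Cluster::> cluster show -node node2 -fields health
    node   health
    ------ ------
    node2  true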

  2. Verify that node2 is part of the same cluster as node1 and that it is healthy:

    cluster show
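
    For a healthy two-node cluster, the output resembles the following illustrative example; the node names reflect your environment:

    Cluster::> cluster show
    Node                  Health  Eligibility
    --------------------- ------- ------------
    node1                 true    true
    node2                 true    true
    2 entries were displayed.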

  3. Switch to advanced privilege mode:

    set advanced
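
    ONTAP displays a warning similar to the following; enter y to continue:

    Cluster::> set advanced
    Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
    Do you want to continue? {y|n}: y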

  4. Check the status of the controller replacement operation and verify that it is paused, in the same state it was in before node2 was halted to perform the physical tasks of installing new controllers and moving cables:

    system controller replace show

    system controller replace show-details

  5. Resume the controller replacement operation:

    system controller replace resume

  6. The controller replacement operation pauses for intervention with the following message:

    Cluster::*> system controller replace show
    Node          Status                       Error-Action
    ------------  ------------------------     ------------------------------------
    Node2         Paused-for-intervention      Follow the instructions given in
                                               Step Details
    Node1         None
    
    Step Details:
    --------------------------------------------
    To complete the Network Reachability task, the ONTAP network configuration must be manually adjusted to match the new physical network configuration of the hardware. This includes:
    
    
    1. Re-create the interface group, if needed, before restoring VLANs. For detailed commands and instructions, refer to the "Re-creating VLANs, ifgrps, and broadcast domains" section of the upgrade controller hardware guide for the ONTAP version running on the new controllers.
    2. Run the command "cluster controller-replacement network displaced-vlans show" to check if any VLAN is displaced.
    3. If any VLAN is displaced, run the command "cluster controller-replacement network displaced-vlans restore" to restore the VLAN on the desired port.
    2 entries were displayed.
    Note: In this procedure, the section "Re-creating VLANs, ifgrps, and broadcast domains" has been renamed "Restore network configuration on node2".
  7. With the controller replacement in a paused state, proceed to Restore network configuration on node2.

Restore network configuration on node2

After you confirm that node2 is in quorum and can communicate with node1, verify that the VLANs, interface groups, and broadcast domains that were previously configured on node2 are seen on node2. Also, verify that all node2 network ports are configured in their correct broadcast domains.

About this task

For more information on creating and re-creating VLANs, interface groups, and broadcast domains, refer to References to link to the Network Management content.

Steps
  1. List all the physical ports that are on upgraded node2:

    network port show -node node2

    All physical network ports, VLAN ports, and interface group ports on the node are displayed. From this output, you can see any physical ports that have been moved into the Cluster broadcast domain by ONTAP. You can use this output to aid in deciding which ports should be used as interface group member ports, VLAN base ports, or standalone physical ports for hosting LIFs.
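
    The following abbreviated example shows the general format of the output; the ports, link states, MTU values, and column layout shown here are illustrative only and vary by platform and ONTAP version:

    Cluster::*> network port show -node node2
    Node: node2
                                                      Speed(Mbps) Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
    --------- ------------ ---------------- ---- ---- ----------- --------
    e0M       Default      Mgmt             up   1500 auto/1000   healthy
    e2a       Default      Default-1        up   9000 auto/10000  healthy
    e9a       Default      Default          up   1500 auto/10000  healthy
    e11a      Cluster      Cluster          up   9000 auto/100000 healthy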

  2. List the broadcast domains on the cluster:

    network port broadcast-domain show

  3. List network port reachability of all ports on node2:

    network port reachability show -node node2

    You should see output similar to the following example. The port and broadcast domain names in your environment might vary.

    Cluster::*> network port reachability show -node local
    Node      Port     Expected Reachability                Reachability Status
    --------- -------- ------------------------------------ ---------------------
    Node2
              e0M      Default:Mgmt                         no-reachability
              e10a     Default:Default-3                    ok
              e10b     Default:Default-4                    ok
              e11a     Cluster:Cluster                      no-reachability
              e11b     Cluster:Cluster                      no-reachability
              e11c     -                                    no-reachability
              e11d     -                                    no-reachability
              e2a      Default:Default-1                    ok
              e2b      Default:Default-2                    ok
              e9a      Default:Default                      no-reachability
              e9b      Default:Default                      no-reachability
              e9c      Default:Default                      no-reachability
              e9d      Default:Default                      no-reachability
    13 entries were displayed.

    In the preceding example, node2 has booted and joined quorum after controller replacement. It has several ports that have no reachability and are pending a reachability scan.

  4. Repair the reachability for each of the ports on node2 with a reachability status other than ok by using the following command, in the following order:

    network port reachability repair -node node_name -port port_name

    1. Physical ports

    2. VLAN ports

    You should see output like the following example:

    Cluster::*> network port reachability repair -node node2 -port e9d
    Warning: Repairing port "node2:e9d" may cause it to move into a different broadcast domain, which can cause LIFs to be re-homed away from the port. Are you sure you want to continue? {y|n}:

    A warning message, as shown in the preceding example, is expected for a port whose reachability status differs from the reachability status of the broadcast domain where it is currently located. Review the connectivity of the port and answer y or n as appropriate.
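
    Continuing the illustrative example from step 3, you would repair each remaining physical port that does not report ok, and then repair any VLAN ports in the same way:

    Cluster::*> network port reachability repair -node node2 -port e0M
    Cluster::*> network port reachability repair -node node2 -port e9a
    Cluster::*> network port reachability repair -node node2 -port e9b
    Cluster::*> network port reachability repair -node node2 -port e9c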

    Verify that all physical ports have their expected reachability:

    network port reachability show

    As the reachability repair is performed, ONTAP attempts to place the ports in the correct broadcast domains. However, if a port’s reachability cannot be determined and does not belong to any of the existing broadcast domains, ONTAP will create new broadcast domains for these ports.

  5. Verify port reachability:

    network port reachability show

    When all ports are correctly configured and added to the correct broadcast domains, the network port reachability show command should report the reachability status as ok for all connected ports, and the status as no-reachability for ports with no physical connectivity. If any port reports a status other than these two, perform the reachability repair and add or remove ports from their broadcast domains as instructed in Step 4.
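
    Continuing the illustrative example, after the repairs the connected ports report ok and the ports with no physical connectivity continue to report no-reachability:

    Cluster::*> network port reachability show -node node2
    Node      Port     Expected Reachability                Reachability Status
    --------- -------- ------------------------------------ ---------------------
    Node2
              e0M      Default:Mgmt                         ok
              e10a     Default:Default-3                    ok
              e10b     Default:Default-4                    ok
              e11a     Cluster:Cluster                      ok
              e11b     Cluster:Cluster                      ok
              e11c     -                                    no-reachability
              e11d     -                                    no-reachability
              e2a      Default:Default-1                    ok
              e2b      Default:Default-2                    ok
              e9a      Default:Default                      ok
              e9b      Default:Default                      ok
              e9c      Default:Default                      ok
              e9d      Default:Default                      ok
    13 entries were displayed.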

  6. Verify that all ports have been placed into broadcast domains:

    network port show

  7. Verify that all ports in the broadcast domains have the correct maximum transmission unit (MTU) configured:

    network port broadcast-domain show
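
    The MTU for each broadcast domain appears in the MTU column of the output. The values, ports, and column layout in the following example are illustrative only:

    Cluster::*> network port broadcast-domain show
    IPspace Broadcast                                         Update
    Name    Domain Name    MTU  Port List                     Status Details
    ------- ----------- ------  ----------------------------- --------------
    Cluster Cluster       9000
                                node2:e11a                    complete
                                node2:e11b                    complete
    Default Default       1500
                                node2:e0M                     complete
                                node2:e9a                     complete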

  8. Restore the home node and home port of any displaced LIFs, specifying the Vserver and the LIF name, by using the following steps:

    1. List any LIFs that are displaced:

      cluster controller-replacement network displaced-interface show

    2. Restore LIF home nodes and home ports:

      cluster controller-replacement network displaced-interface restore-home-node -node node_name -vserver vserver_name -lif-name LIF_name
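
      For example, to restore a hypothetical LIF named lif1 that belongs to an SVM named vs1:

      Cluster::*> cluster controller-replacement network displaced-interface restore-home-node -node node2 -vserver vs1 -lif-name lif1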

  9. Verify that all LIFs have a home port and are administratively up:

    network interface show -fields home-port,status-admin
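
    Illustrative output, using hypothetical LIF names and showing only a few entries, looks like the following; every connected LIF should report status-admin up and its expected home port:

    Cluster::*> network interface show -fields home-port,status-admin
    vserver lif          home-port status-admin
    ------- ------------ --------- ------------
    Cluster node2_clus1  e11a      up
    Cluster node2_clus2  e11b      up
    vs1     lif1         e9a       up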