Confirm that the new controllers are set up correctly

To confirm the correct setup, you verify that the HA pair is enabled. You also verify that node1 and node2 can access each other's storage and that neither owns data LIFs belonging to other nodes on the cluster. In addition, you verify that all data aggregates are on their correct home nodes, and that the volumes for both nodes are online. If one of the new nodes has a unified target adapter, you must restore any port configurations and you might need to change the use of the adapter.

Steps
  1. After the post-checks of node2 are complete, storage failover and the cluster HA pair for the node2 cluster are enabled. When the operation is done, both nodes show as completed and the system performs some cleanup operations.

  2. Verify that storage failover is enabled:

    storage failover show

    The following example shows the output of the command when storage failover is enabled:

    cluster::> storage failover show
                              Takeover
    Node        Partner       Possible      State Description
    ----------  -----------   ------------  ------------------
    node1       node2         true          Connected to node2
    node2       node1         true          Connected to node1
  3. Verify that node1 and node2 belong to the same cluster by using the following command and examining the output:

    cluster show
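
    The following illustrative output shows both nodes as healthy, eligible members of the same cluster; the node names are placeholders and the values on your system will differ:

    cluster::> cluster show
    Node                  Health  Eligibility
    --------------------- ------- ------------
    node1                 true    true
    node2                 true    true
    2 entries were displayed.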

  4. Verify that node1 and node2 can access each other's storage by using the following command and examining the output:

    storage failover show -fields local-missing-disks,partner-missing-disks
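
    In the representative example below, a value of None in both fields indicates that each node can see all of its partner's disks; the node names are placeholders:

    cluster::> storage failover show -fields local-missing-disks,partner-missing-disks
    node   local-missing-disks partner-missing-disks
    ------ ------------------- ---------------------
    node1  None                None
    node2  None                None
    2 entries were displayed.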

  5. Verify that neither node1 nor node2 owns data LIFs home-owned by other nodes in the cluster by using the following command and examining the output:

    network interface show

    If node1 or node2 owns data LIFs that are home-owned by other nodes in the cluster, revert the data LIFs to their home owner:

    network interface revert
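
    As an illustration, in the following abbreviated output an Is Home value of false identifies a LIF that is not on its home node; the vserver and LIF names are hypothetical:

    cluster::> network interface show
                Logical    Status     Network          Current  Current Is
    Vserver     Interface  Admin/Oper Address/Mask     Node     Port    Home
    ----------- ---------- ---------- ---------------- -------- ------- ----
    vs1
                datalif1   up/up      10.10.10.1/24    node2    e0c     false

    cluster::> network interface revert -vserver vs1 -lif datalif1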

  6. Verify that the aggregates are owned by their respective home nodes:

    storage aggregate show -owner-name node1

    storage aggregate show -owner-name node2
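
    Output similar to the following illustrative example lists the aggregates that node1 owns; the aggregate name, size, and volume count are placeholders:

    cluster::> storage aggregate show -owner-name node1
    Aggregate     Size Available Used% State   #Vols  Nodes  RAID Status
    --------- -------- --------- ----- ------- ------ ------ -----------
    aggr1       2.19TB    1.82TB   17% online       3 node1  raid_dp,
                                                             normal
    1 entries were displayed.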

  7. Determine whether any volumes are offline:

    volume show -node node1 -state offline

    volume show -node node2 -state offline
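
    If no volumes are offline, each command returns a message similar to the following:

    cluster::> volume show -node node1 -state offline
    There are no entries matching your query.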

  8. If any volumes are offline, compare them with the list of offline volumes that you captured in the section Prepare the nodes for upgrade. Bring any of the offline volumes online, as required, by using the following command once for each volume:

    volume online -vserver vserver_name -volume volume_name
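
    For example, to bring a hypothetical volume named vol_data1 in vserver vs1 back online:

    volume online -vserver vs1 -volume vol_data1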

  9. Install new licenses for the new nodes by using the following command for each node:

    system license add -license-code license_code,license_code,license_code…

    The license-code parameter accepts a comma-separated list of license keys, each consisting of 28 upper-case alphabetic characters. You can add one license at a time, or you can add multiple licenses at once by separating each license key with a comma.
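
    For example, the following command adds two licenses by using placeholder keys; these are not valid license codes:

    system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA,BBBBBBBBBBBBBBBBBBBBBBBBBBBB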

  10. Remove all of the old licenses from the original nodes by using one of the following commands:

    • Delete all unused and expired licenses at once:

      system license clean-up -unused -expired

    • Delete all expired licenses:

      system license clean-up -expired

    • Delete all unused licenses:

      system license clean-up -unused

    • Delete a specific license:

      system license delete -serial-number node_serial_number -package licensable_package

    • Delete all licenses associated with the serial numbers of the original nodes by using the following commands on the nodes:

      system license delete -serial-number node1_serial_number -package *
      system license delete -serial-number node2_serial_number -package *

    The following output is displayed:

    Warning: The following licenses will be removed:
    <list of each installed package>
    Do you want to continue? {y|n}: y

    Enter y to remove all of the packages.

  11. Verify that the licenses are correctly installed by using the following command and examining its output:

    system license show

    You can compare the output with the output that you captured in the Prepare the nodes for upgrade section.
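
    The output resembles the following illustrative example; the serial number, owner, and installed packages on your system will differ:

    cluster::> system license show
    Serial Number: 1-80-000011
    Owner: cluster1
    Package           Type     Description           Expiration
    ----------------- -------- --------------------- -------------------
    Base              license  Cluster Base License  -
    NFS               site     NFS License           -
    CIFS              site     CIFS License          -
    3 entries were displayed.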

  12. If self-encrypting drives are being used in the configuration and you have set the kmip.init.maxwait variable to off (for example, in Boot node2 with the replacement system modules, Step 1), you must unset the variable:

    set diag; systemshell -node node_name -command sudo kenv -u -p kmip.init.maxwait
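
    For example, on a hypothetical node named node1, entering the diagnostic privilege level prompts for confirmation before the variable is unset:

    cluster::> set diag
    Warning: These diagnostic commands are for use by NetApp personnel only.
    Do you want to continue? {y|n}: y
    cluster::*> systemshell -node node1 -command sudo kenv -u -p kmip.init.maxwait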

  13. Configure the SPs by using the following command on both nodes:

    system service-processor network modify -node node_name

    Refer to References to link to the System Administration Reference for information about the SPs, and to the ONTAP 9 Commands: Manual Page Reference for detailed information about the system service-processor network modify command.
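
    For example, the following command assigns a static IPv4 address to the SP of one node; the node name and all addresses are placeholders:

    system service-processor network modify -node node1 -address-family IPv4 -enable true -dhcp none -ip-address 192.168.1.202 -netmask 255.255.255.0 -gateway 192.168.1.1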

  14. If you want to set up a switchless cluster on the new nodes, refer to References to link to the NetApp Support Site and follow the instructions in Transitioning to a two-node switchless cluster.

After you finish

If Storage Encryption is enabled on node1 and node2, complete the section Set up Storage Encryption on the new controller module. Otherwise, complete the section Decommission the old system.