Preparing for the upgrade when moving storage

Before upgrading by moving storage, you must gather license information from the original nodes, plan network configuration, send an AutoSupport message about the upgrade, record the system IDs, destroy the mailboxes, power down the nodes, and remove the chassis.

Steps

  1. Display and record license information from the original nodes by using the system license show command.
  2. If you use Storage Encryption on the original nodes and the new nodes have encryption-enabled disks, make sure that the original nodes' disks are correctly keyed:
    1. Display information about self-encrypting disks (SEDs) by using the storage encryption disk show command.
    2. If any disks are associated with a non-manufacture secure ID (non-MSID) key, rekey them to an MSID key by using the storage encryption disk modify command.
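    Example
    For instance, with a placeholder cluster named cluster1 and hypothetical disks 1.10.*, checking and rekeying the SEDs might look like the following. (In ONTAP, the MSID key is conventionally represented by the all-zero key ID 0x0; verify the correct key ID for your release before rekeying.)
    cluster1::> storage encryption disk show
    cluster1::> storage encryption disk modify -disk 1.10.* -data-key-id 0x0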
  3. Record port and LIF configuration information on the original nodes:
    To display information about...    Enter...
    Shelves, numbers of disks in each shelf, flash storage details, memory, NVRAM, and network cards
      system node run -node node_name sysconfig
    Cluster network and node management LIFs
      network interface show -role cluster,node-mgmt
    Physical ports
      network port show -node node_name -type physical
    Failover groups
      network interface failover-groups show -vserver vserver_name
      Note: Record the names and ports of failover groups that are not clusterwide.
    VLAN configuration
      network port vlan show -node node_name
      Note: Record each network port and VLAN ID pairing.
    Interface group configuration
      network port ifgrp show -node node_name -instance
      Note: Record the names of the interface groups and the ports assigned to them.
    Broadcast domains
      network port broadcast-domain show
    IPspace information
      network ipspace show
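    Example
    Assuming an original node named node1 and an SVM named vs1 (placeholder names), the recording commands from the table can be run in sequence:
    cluster1::> system node run -node node1 sysconfig
    cluster1::> network interface show -role cluster,node-mgmt
    cluster1::> network port show -node node1 -type physical
    cluster1::> network interface failover-groups show -vserver vs1
    cluster1::> network port vlan show -node node1
    cluster1::> network port ifgrp show -node node1 -instance
    cluster1::> network port broadcast-domain show
    cluster1::> network ipspace show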
  4. Obtain information about the default cluster ports, data ports, and node management ports for each new node that you are upgrading to.
  5. As needed, adjust the configuration of the network broadcast domains on the original nodes for compatibility with that of the new nodes: network port broadcast-domain modify
  6. If VLANs are configured on interface groups, remove the VLANs: network port vlan delete -node node_name -port ifgrp_name -vlan-id VLAN_ID
  7. If any interface groups are configured on the original nodes, delete the ports that are assigned to the interface groups: network port ifgrp remove-port -node node_name -ifgrp ifgrp_name -port port_name
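    Example
    For instance, with a hypothetical interface group a0a carrying VLAN 10 over ports e0c and e0d on node1 (all placeholder names), the cleanup in steps 6 and 7 would be:
    cluster1::> network port vlan delete -node node1 -port a0a -vlan-id 10
    cluster1::> network port ifgrp remove-port -node node1 -ifgrp a0a -port e0c
    cluster1::> network port ifgrp remove-port -node node1 -ifgrp a0a -port e0d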
  8. Send an AutoSupport message from each original node to inform technical support of the upgrade: system node autosupport invoke -node node_name -type all -message "Upgrading node_name from platform_original to platform_new"
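    Example
    With placeholder values filled in (node1 as the node name, and FAS8080 and AFF A300 as hypothetical original and new platforms), the invocation might look like this:
    cluster1::> system node autosupport invoke -node node1 -type all -message "Upgrading node1 from FAS8080 to AFF A300"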
  9. Disable high availability or storage failover on each original node:
    If you have a...    Enter...
    Two-node cluster
      1. cluster ha modify -configured false
      2. storage failover modify -node node_name -enabled false
    Cluster with more than two nodes
      storage failover modify -node node_name -enabled false
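    Example
    In a two-node cluster with original nodes named node1 and node2 (placeholder names), the commands would be:
    cluster1::> cluster ha modify -configured false
    cluster1::> storage failover modify -node node1 -enabled false
    cluster1::> storage failover modify -node node2 -enabled false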
  10. Reboot the node: system node reboot -node node_name
    You can suppress the quorum check during the reboot process by using the -ignore-quorum-warnings option.
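    Example
    For instance, rebooting a node named node1 (placeholder name) while suppressing the quorum check:
    cluster1::> system node reboot -node node1 -ignore-quorum-warnings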
  11. Interrupt the reboot process by pressing Ctrl-C to display the boot menu when the system prompts you to do so.
  12. From the boot menu, select (5) Maintenance mode boot to access Maintenance mode.
    A message might appear asking you to ensure that the partner node is down or takeover is manually disabled on the partner node. You can enter yes to continue.
  13. Record each original node's system ID, which is obtained through disk ownership information in Maintenance mode: disk show -v
    You need the system IDs when you assign disks from the original nodes to the new nodes.
    Example
    *> disk show -v
    Local System ID: 118049495
    DISK    OWNER               POOL    SERIAL NUMBER          HOME
    ----    -----               ----    -------------          ----
    0a.33   node1 (118049495)   Pool0   3KS6BN970000973655KL   node1 (118049495)
    0a.32   node1 (118049495)   Pool0   3KS6BCKD000097363ZHK   node1 (118049495)
    0a.36   node1 (118049495)   Pool0   3KS6BL9H000097364W74   node1 (118049495)
    ...
    
  14. If the original nodes have FC or CNA port configurations, display the configuration in Maintenance mode: ucadmin show
    You should record the command output for later reference.
    Example
    *> ucadmin show
    Current Current Pending   Pending
    Adapter Mode    Type      Mode    Type    Status
    ------- ------- --------- ------- ------- ------
    0e      fc      initiator -       -       online
    0f      fc      initiator -       -       online
    0g      cna     target    -       -       online
    0h      cna     target    -       -       online
    ...
    
  15. In Maintenance mode, destroy each original node's mailboxes: mailbox destroy local
    The console displays a message similar to the following:
    Destroying mailboxes forces a node to create new empty mailboxes, which 
    clears any takeover state, removes all knowledge of out-of-date plexes and 
    mirrored volumes, and will prevent management services from going online in
    2-node cluster HA configurations.
    Are you sure you want to destroy the local mailboxes?
  16. Confirm that you want to destroy the mailboxes: y
    The system displays a message similar to the following:
    .............Mailboxes destroyed
    Takeover On Reboot option will be set to ON after the node boots.
    This option is ON by default except on setups that have iSCSI or FCP license.
    Use "storage failover modify -node <nodename> -onreboot false" to turn it OFF.
    
    *>
  17. Exit Maintenance mode: halt
  18. Turn off the power to the original nodes, and then unplug them from the power source.
  19. Label and remove all cables from the original nodes.
  20. Remove the chassis containing the original nodes.