Upgrading a management node

You can manually upgrade the management node for your cluster running NetApp Element software using the appropriate image for your configuration. If you are upgrading the management node from a recent version of NetApp Element software, you must create a new VM using an OVA or ISO image. An upgrade script is provided to pull proxy and cluster information from the previously installed management node running version 11.0 or later.

About this task

Prior to completing this procedure, you should have an understanding of persistent volumes and whether or not you want to use them. Persistent volumes allow management node data to be stored on a specified storage cluster so that data can be preserved in the event of management node loss or removal.

After you have upgraded to management node 11.3, you can keep up to date with management services, including the SIOC service for the Element Plug-in for vCenter, the Active IQ collector service, the NetApp Monitoring Agent (for NetApp HCI installations only), and additional services, by using service updates. See updating management services for more information about service updates.

Updating management services

Note: A procedure for upgrading the existing management node 11.0 or 11.1 to management node 11.3 in-place (without requiring a new VM deployment) is also available as an alternative to this upgrade process. https://kb.netapp.com/app/answers/answer_view/a_id/1088660

Steps

  1. Download the OVA or ISO for your installation from the NetApp Support Site:
    1. Click the <Select Platform> drop-down list for NetApp HCI or NetApp Element software and select a version number.
    2. Click Go.
    3. Read and click through the required prompts, accept the EULA, and select the management node image you want to download.
  2. If you downloaded the OVA, follow these steps:
    1. Deploy the OVA (an illustrative command-line example follows these substeps).
    2. If your storage cluster is on a separate subnet from your management node (eth0) and you want to use persistent volumes, add a second network interface controller (NIC) to the VM on the storage subnet (eth1) or ensure that the management network can route to the storage network.
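    If you prefer to deploy the OVA from a command line instead of from the vSphere client, the following ovftool invocation is one possible approach. It is a sketch only: every bracketed value is a placeholder from your environment, and it assumes that the VMware ovftool utility is available on the workstation you deploy from.
    ovftool --acceptAllEulas --name=[VM name] --datastore=[datastore] --network=[management network] [path to management node OVA] vi://[vCenter user]@[vCenter IP or host name]/[datacenter]/host/[cluster]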
  3. If you downloaded the ISO, follow these steps:
    1. Create a new 64-bit virtual machine from your hypervisor with the following configuration:
      • Six virtual CPUs
      • 12GB RAM
      • 400GB virtual disk, thin provisioned
      • One virtual network interface with internet access and access to the storage MVIP.
      • (Optional for SolidFire all-flash storage) One virtual network interface with management network access to the storage cluster. Add this configuration if you want to use persistent volumes and your storage network is on a different subnet than your management network.
      Attention: Do not power on the virtual machine until the step later in this procedure that directs you to do so.
    2. If your storage cluster is on a separate subnet from your management node (eth0) and you want to use persistent volumes, add a second network interface controller (NIC) to the VM on the storage subnet (eth1).
    3. Attach the ISO to the virtual machine and boot to the .iso install image.
      Note: Installing a management node using the image might result in a 30-second delay before the splash screen appears.
  4. Power on the virtual machine for the management node after the installation completes.
  5. Using the terminal user interface (TUI), create a management node admin user.
    Tip: To enter text, press Enter on the keyboard to open edit mode. After you enter text, press Enter again to close edit mode. To navigate between fields, use the arrow keys.
  6. Configure the management node network (eth0).
    Note: If you have a second NIC on eth1, see instructions on configuring a second NIC.

    Configuring a storage NIC (eth1)

  7. SSH into the management node or use the console provided by your hypervisor.
  8. Using SSH, run the following command to gain root privileges. Enter your password when prompted:
    sudo su
  9. Ensure time is synced (NTP) between the management node and the storage cluster.
    Note: In vSphere, the Synchronize guest time with host box should be checked in the VM options. Do not disable this option if you make future changes to the VM.
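    For example, assuming the management node image is a systemd-based Linux distribution (an assumption, not something this procedure guarantees), you can check from the root shell whether the system clock reports as synchronized:
    timedatectl status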
  10. Configure the management node upgrade command (a complete example command with placeholder values follows the parameter descriptions below):
    Note: You will be prompted to enter passwords if you do not include them in the command. The script attempts to pull storage credentials from the previous management node and will fail if credentials cannot be pulled. You can optionally add storage credentials manually to ensure the script is successful.
    /sf/packages/mnode/upgrade-mnode --mnode_admin_user [username] --prev_mnode_ip_or_hostname [ip] --prev_mnode_admin_user [username] --telemetry_active [true]
    1. Replace the values in [ ] brackets for each of the following required parameters:
      Note: The abbreviated form of each parameter name is shown in parentheses ( ) and can be substituted for the full name.
      --mnode_admin_user (-mu) [username]
      The user name for the management node administrator account. This is likely to be the user name for the user account you used to log into the management node.
      --prev_mnode_ip_or_hostname (-pmi) [IP or host name]
      The IP or host name of the previous management node from which configuration data will be pulled.
      --prev_mnode_admin_user (-pmu) [user name]
      The user name for the administrator account from the previous management node.
      --telemetry_active (-t) [true]
      Retain the value true to enable data collection for analytics by Active IQ.
    2. (Optional): Add administrator credential, storage credential, endpoint, and MVIP parameters to the command. You will be prompted to enter these passwords at a secure prompt if you do not include them in the command:
      --mnode_admin_password (-mp) [password]
      The password of the management node administrator account. This is likely to be the password for the user account you used to log into the management node.
      --prev_mnode_admin_password (-pmp) [password]
      The password for the administrator account of the previous management node.
      --storage_mvip (-sm) [MVIP address]
      The MVIP (management virtual IP address) of the storage cluster running Element software. If you do not specify this parameter in the upgrade command, the script attempts to pull the MVIP from the previous management node.
      --storage_username (-su) [username]
      The storage cluster administrator user name for the cluster specified by the --storage_mvip parameter. If you do not specify this parameter in the upgrade command, the script attempts to pull the user name from the previous management node.
      --storage_password (-sp) [password]
      The password of the storage cluster administrator specified by the --storage_mvip parameter. If you do not specify this parameter in the upgrade command, the script attempts to pull the password from the previous management node.
      --remote_host (-rh) [AIQ_endpoint]
      The endpoint where Active IQ telemetry data is sent to be processed. If the parameter is not included, the default endpoint is used.
    3. (Optional): Add the following persistent volume parameters:
      Attention: Do not modify or delete the account and volumes created for persistent volumes functionality or a loss in management capability will result.
      --use_persistent_volumes (-pv) [true]
      Enter the value true to enable persistent volumes functionality.
      --persistent_volumes_account (-pva) [account_name]
      Enter the storage account name that will be used for persistent volumes.
      Note: Use a unique account name for persistent volumes that is different from any existing account name on the cluster. It is critically important to keep the account for persistent volumes separate from the rest of your environment.
      --persistent_volumes_mvip (-pvm) [mvip]
      Enter the MVIP (management virtual IP address) of the storage cluster running Element software that will be used with persistent volumes. This is only required if multiple storage clusters are managed by the management node. If multiple clusters are not managed, the default cluster MVIP will be used.
    4. (Optional): Configure a proxy server:
      --use_proxy (-up) [true/false, default: false]
      Enable or disable the use of the proxy. This parameter is required to configure a proxy server.
      --proxy_hostname_or_ip (-pi) [host]
      The proxy host name or IP. This is required if you want to use a proxy. If you specify this, you will be prompted to input --proxy_port.
      --proxy_username (-pu) [username]
      The proxy user name. This parameter is optional.
      --proxy_password (-pp) [password]
      The proxy password. This parameter is optional.
      --proxy_port (-pq) [port, default: 0]
      The proxy port. If you specify this, you will be prompted to input the proxy host name or IP (--proxy_hostname_or_ip).
      --proxy_ssh_port (-ps) [port, default: 443]
      The SSH proxy port. This defaults to port 443.
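    The following assembled command is for illustration only; the user names, IP address, and persistent volumes account name are placeholder values that you replace with values from your environment:
    /sf/packages/mnode/upgrade-mnode --mnode_admin_user admin --prev_mnode_ip_or_hostname 10.117.0.20 --prev_mnode_admin_user admin --telemetry_active true --use_persistent_volumes true --persistent_volumes_account mnode-pv-account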
  11. (Optional) Use parameter help if you need additional information about each parameter:
    --help (-h)
    Returns information about each parameter. Parameters are defined as required or optional based on initial deployment. Upgrade and redeployment parameter requirements might vary.
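    For example, the following displays usage information for the upgrade script:
    /sf/packages/mnode/upgrade-mnode --help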
  12. Run the command.
    Important: If you have a NetApp HCI installation, in addition to running the upgrade script you also need to add a controller asset for vCenter by using the REST API UI for the management node (https://[mNode IP]/mnode). A controller asset is necessary for NetApp HCI monitoring and cloud control functionality to operate properly, and it is not installed as part of the manual upgrade. To create a controller asset, use the following REST API: POST /assets/{asset_id}/controller. You can acquire the base asset ID needed to complete the command by using GET /assets. An illustrative sketch of these calls follows.
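    The following curl calls sketch what these requests might look like when issued from a command line instead of from the REST API UI. The bearer token, the -k flag (for a self-signed certificate), and the JSON field names (ip, username, password, type) are assumptions; confirm the exact request schema and authorization flow in the REST API UI before use.
    # List assets to find the parent (base) asset ID; the token value is a placeholder.
    curl -k -H "Authorization: Bearer [token]" https://[mNode IP]/mnode/assets
    # Add a vCenter controller asset under that base asset; the payload field names are assumed, so verify them against the POST /assets/{asset_id}/controller schema.
    curl -k -X POST -H "Authorization: Bearer [token]" -H "Content-Type: application/json" -d '{"ip": "[vCenter IP]", "username": "[vCenter user name]", "password": "[vCenter password]", "type": "vCenter"}' https://[mNode IP]/mnode/assets/[base asset_id]/controller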