You can perform an in-place upgrade of the management node from 11.0 or 11.1 to version 11.3 without needing to provision a new management node virtual machine.
Before you begin
- Storage nodes are running Element 11.3. (See the example after this list for one way to verify the cluster version.)
Note: Use the latest HealthTools to upgrade Element software.
- The management node you intend to upgrade is version 11.0 or 11.1 and uses IPv4 networking. Management node 11.3 does not support IPv6.
Note: For management node 11.0, you need to manually increase the VM memory to 12GB. (See the sketch after this list for one scripted way to do this.)
- You have configured an additional network adapter (if required) using the instructions for configuring a storage NIC (eth1) in the management node user guide for your product.
Note: Persistent volumes might require an additional network adapter if eth0 cannot be routed to the SVIP. Configure a new network adapter on the iSCSI storage network to allow the configuration of persistent volumes.
- You have logged in to the management node virtual machine using SSH or console access.
- You have downloaded the management node ISO for NetApp HCI or Element software from the NetApp Support Site to the management node virtual machine.
Note: The name of the ISO is similar to solidfire-fdva-sodium-patch3-11.3.0.xxxx.iso
- You have checked the integrity of the download by running md5sum on the downloaded file and comparing the output to what is available on the NetApp Support Site for NetApp HCI or Element software, as in the following example:
sudo md5sum -b <path to iso>/solidfire-fdva-sodium-patch3-11.3.0.xxxx.iso
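To verify that the storage cluster is running Element 11.3, one option is to call the Element API method GetClusterVersionInfo. The following is a minimal sketch; the MVIP, cluster admin credentials, and JSON-RPC endpoint version are placeholders for your environment:
# Query the cluster software version over the Element JSON-RPC API (placeholder MVIP and credentials)
curl -k -u <cluster admin user>:<password> https://<storage mvip>/json-rpc/11.3 \
  -d '{"method":"GetClusterVersionInfo","params":{},"id":1}'
# Confirm that the "clusterVersion" value in the response reports 11.3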
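For the management node 11.0 memory increase, the usual path is the vSphere client with the VM powered off. If you prefer a scripted alternative, the following govc (VMware govmomi CLI) sketch is one possibility; the VM name is a placeholder, and govc with its GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables must already be set up on the machine you run it from:
# Power off the VM, raise its memory to 12GB (12288 MB), and power it back on
govc vm.power -off <management node vm name>
govc vm.change -vm <management node vm name> -m 12288
govc vm.power -on <management node vm name>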
Steps
- Mount the management node ISO image and copy the contents to the file system using the following commands:
sudo mkdir -p /upgrade
sudo mount solidfire-fdva-sodium-patch3-11.3.0.xxxx.iso /mnt
cd /mnt
sudo cp -r * /upgrade
- Change to the home directory, and unmount the ISO file from /mnt:
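For example, assuming the ISO is still mounted at /mnt from the previous step:
cd ~
sudo umount /mnt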
- Delete the ISO to conserve space on the management node:
sudo rm <path to iso>/solidfire-fdva-sodium-patch3-11.3.0.xxxx.iso
- Run one of the following commands, matching your current management node version (11.1 or 11.0), to upgrade the management node OS. Each command retains all necessary configuration files after the upgrade, such as the Active IQ collector and proxy settings.
- On an 11.1 (11.1.0.73) management node, run the following command:
sudo /sf/rtfi/bin/sfrtfi_inplace file:///upgrade/casper/filesystem.squashfs sf_upgrade=1 sf_keep_paths="/sf/packages/solidfire-sioc-4.2.3.2288 /sf/packages/solidfire-nma-1.4.10/conf /sf/packages/sioc /sf/packages/nma"
- On an 11.1 (11.1.0.72) management node, run the following command:
sudo /sf/rtfi/bin/sfrtfi_inplace file:///upgrade/casper/filesystem.squashfs sf_upgrade=1 sf_keep_paths="/sf/packages/solidfire-sioc-4.2.1.2281 /sf/packages/solidfire-nma-1.4.10/conf /sf/packages/sioc /sf/packages/nma"
- On an 11.0 (11.0.0.781) management node, run the following command:
sudo /sf/rtfi/bin/sfrtfi_inplace file:///upgrade/casper/filesystem.squashfs sf_upgrade=1 sf_keep_paths="/sf/packages/solidfire-sioc-4.2.0.2253 /sf/packages/solidfire-nma-1.4.8/conf /sf/packages/sioc /sf/packages/nma"
- After the process completes, access the management node CLI using SSH or console access, and relink the per-node UI at <management node ip>:442 to the upgraded configuration:
sudo unlink /etc/nginx.legacy.conf.d/node.conf
sudo ln -s /sf/etc/webmgmt/11.3/nginx_conf/node.conf /etc/nginx.legacy.conf.d/node.conf
sudo systemctl restart nginx
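Optionally, you can confirm that the per-node UI responds after the relink. This is just a quick check and assumes curl is available where you run it:
curl -k -I https://<management node ip>:442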
- Ensure there are no escape characters (for example: '\') in the "password=" field of the /sf/packages/sioc/app.properties file. These characters might cause the upgrade process to fail.
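One way to inspect the field is a simple grep against the path named above:
sudo grep "password=" /sf/packages/sioc/app.properties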
- On the 11.3 management node, run the upgrade-mnode script to copy the Active IQ collector to the new configuration format.
Note: Because this is an in-place upgrade, the -mu, -pmi, and -pmu options point to the upgraded 11.3 management node IP address and user name, not to a newly installed 11.3 management node. You need to enter the same password twice. A filled-in example follows the command variants below.
- For a single storage cluster managed by the existing management node, with persistent volumes:
/sf/packages/mnode/upgrade-mnode -mu <mnode user> -pmi <current mnode ip> -pmu <current mnode user> -pv <true - persistent volume> -pva <persistent volume account name - storage volume account>
- For a single storage cluster managed by the existing management node, with no persistent volumes:
/sf/packages/mnode/upgrade-mnode -mu <mnode user> -pmi <current ip address> -pmu <current mnode user>
- For multiple storage clusters managed by the existing management node, with persistent volumes:
/sf/packages/mnode/upgrade-mnode -mu <mnode user> -pmi <current mnode ip> -pmu <current mnode user> -pv <true - persistent volume> -pva <persistent volume account name - storage volume account> -pvm <persistent volumes mvip>
- For multiple storage clusters managed by the existing management node, with no persistent volumes (the -pvm flag is used only to provide one of the clusters' MVIP addresses):
/sf/packages/mnode/upgrade-mnode -mu <mnode user> -pmi <current ip address> -pmu <current mnode user> -pvm <mvip for persistent volumes>
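For illustration only, a single-cluster invocation with persistent volumes might look like the following; the user name, IP address, and account name are hypothetical placeholders, not values from your environment:
/sf/packages/mnode/upgrade-mnode -mu admin -pmi 10.117.0.25 -pmu admin -pv true -pva mnode-pv-account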
- (For installations with the NetApp Element Plug-in for vCenter Server) Upgrade the vCenter Plug-in on the 11.3 management node:
- Log out of the vSphere Web Client.
- Browse to the registration utility (<management node ip>:9443).
- Click the vCenter Plug-in Registration tab.
- Under Manage vCenter Plug-in, select Update Plug-in.
- Update the vCenter address, vCenter administrator user name, and vCenter administrator password.
- Click Update.
- Log in to the vSphere Web Client and verify that the plug-in information has been updated by browsing to .
- (For NetApp HCI only) Add a vCenter controller asset.
- Open a browser to the storage MVIP and log in. This causes the certificate to be accepted for the next step.
- Open a browser to https://<mnodeip>/mnode.
- Click Authorize and enter your MVIP username and password credentials. Close the pop-up window.
- Execute GET /assets to retrieve the base asset ID needed to add the vCenter controller asset.
- Execute POST /assets/{ASSET_ID}/controllers to add a controller asset with vCenter credentials. (A curl sketch of both calls follows these steps.)
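If you prefer the command line to the API UI, the following curl sketch shows the same two calls. The authentication shown here (HTTP basic auth with MVIP credentials) and the controller payload field names are assumptions based on this procedure; confirm the request schema and authorization flow in the API UI before using them:
# List assets and note the "id" of the base asset (placeholder credentials and addresses throughout)
curl -k -u <mvip user>:<mvip password> https://<mnode ip>/mnode/assets
# Add the vCenter controller asset; payload field names are assumptions, so check the schema in the API UI
curl -k -u <mvip user>:<mvip password> -X POST \
  -H "Content-Type: application/json" \
  -d '{"username":"<vcenter admin user>","password":"<vcenter admin password>","ip":"<vcenter ip>","type":"vCenter"}' \
  https://<mnode ip>/mnode/assets/<ASSET_ID>/controllers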