Element Software

Recover a management node

Contributors: netapp-pcarriga, netapp-dbagwell

You can manually recover and redeploy the management node for your cluster running NetApp Element software if your previous management node used persistent volumes.

You can deploy a new OVA and run a redeploy script to pull configuration data from a previously installed management node running version 11.3 or later.

What you'll need
  • Your previous management node was running NetApp Element software version 11.3 or later with persistent volumes enabled.

  • You know the MVIP and SVIP of the cluster containing the persistent volumes.

  • Your cluster version is running NetApp Element software 11.3 or later.

  • Your installation uses IPv4. Management node 11.3 does not support IPv6.

  • You have permission to download software from the NetApp Support Site.

  • You have identified the management node image type that is correct for your platform:

    Platform            Installation image type

    Microsoft Hyper-V   .iso
    KVM                 .iso
    VMware vSphere      .iso, .ova
    Citrix XenServer    .iso
    OpenStack           .iso

Download ISO or OVA and deploy the VM

  1. Download the OVA or ISO for your installation from the Element software page on the NetApp Support Site.

    1. Select Download Latest Release and accept the EULA.

    2. Select the management node image you want to download.

  2. If you downloaded the OVA, follow these steps:

    1. Deploy the OVA.

    2. If your storage cluster is on a separate subnet from your management node (eth0) and you want to use persistent volumes, add a second network interface controller (NIC) to the VM on the storage subnet (for example, eth1) or ensure that the management network can route to the storage network.

  3. If you downloaded the ISO, follow these steps:

    1. Create a new 64-bit virtual machine from your hypervisor with the following configuration:

      • Six virtual CPUs

      • 24GB RAM

      • 400GB virtual disk, thin provisioned

      • One virtual network interface with internet access and access to the storage MVIP.

      • (Optional for SolidFire all-flash storage) One virtual network interface with management network access to the storage cluster. If your storage cluster is on a separate subnet from your management node (eth0) and you want to use persistent volumes, add a second network interface controller (NIC) to the VM on the storage subnet (eth1) or ensure that the management network can route to the storage network.

        Important Do not power on the virtual machine prior to the step indicating to do so later in this procedure.
    2. Attach the ISO to the virtual machine and boot to the .iso install image.

      Note Installing a management node using the image might result in a 30-second delay before the splash screen appears.
  4. Power on the virtual machine for the management node after the installation completes.
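The second-NIC decision in the steps above hinges on whether the management interface (eth0) and the storage MVIP share a subnet. As a rough aid, that check can be scripted; the following is a minimal sketch, with illustrative IP addresses and prefix length rather than values from your environment:

```shell
#!/usr/bin/env bash

# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    local IFS=.
    set -- $1
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Print "yes" if the two addresses fall in the same network
# for the given prefix length, "no" otherwise.
same_subnet() {
    local a b mask
    a=$(ip_to_int "$1")
    b=$(ip_to_int "$2")
    mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
    [ $(( a & mask )) -eq $(( b & mask )) ] && echo yes || echo no
}

# Hypothetical eth0 address vs. storage MVIP on a /24.
same_subnet 10.117.30.5 10.117.30.200 24   # same subnet: second NIC not needed
same_subnet 10.117.30.5 10.117.64.200 24   # different subnet: add eth1 or a route
```

On the management node itself, you would compare the address configured on eth0 with the cluster MVIP; if they differ, add eth1 on the storage subnet or confirm that a route exists between the two networks.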

Configure the network

  1. Using the terminal user interface (TUI), create a management node admin user.

    Tip Press the Up or Down arrow keys to move through the menu options and between fields; press Tab to move through the buttons and from the buttons to the fields.
  2. Configure the management node network (eth0).

    Note If you need an additional NIC to isolate storage traffic, see instructions on configuring another NIC: Configure a storage Network Interface Controller (NIC).

Configure time sync

  1. Ensure time is synced between the management node and the storage cluster using NTP:

Note Starting with Element 12.3.1, substeps 1 through 5 are performed automatically. For management node 12.3.1, proceed to substep 6 to complete the time sync configuration.
  1. Log in to the management node using SSH or the console provided by your hypervisor.

  2. Stop NTPD:

    sudo service ntpd stop
  3. Edit the NTP configuration file /etc/ntp.conf:

    1. Comment out the default servers (for example, server 0.gentoo.pool.ntp.org) by adding a # in front of each.

    2. Add a line for each time server you want to use. These must be the same NTP servers used on the storage cluster that you will reference in a later step.

      vi /etc/ntp.conf
      
      #server 0.gentoo.pool.ntp.org
      #server 1.gentoo.pool.ntp.org
      #server 2.gentoo.pool.ntp.org
      #server 3.gentoo.pool.ntp.org
      server <insert the hostname or IP address of the default time server>
    3. Save the configuration file when complete.

  4. Force an NTP sync with the newly added server:

    sudo ntpd -gq
  5. Restart NTPD:

    sudo service ntpd start
  6. Disable time synchronization with the host through the hypervisor (the following is a VMware example):

    Note If you deploy the mNode in a hypervisor environment other than VMware (for example, from the .iso image in an OpenStack environment), refer to the hypervisor documentation for the equivalent commands.
    1. Disable periodic time synchronization:

      vmware-toolbox-cmd timesync disable
    2. Display and confirm the current status of the service:

      vmware-toolbox-cmd timesync status
    3. In vSphere, verify that the Synchronize guest time with host box is unchecked in the VM options.

      Note If you make future changes to the VM, ensure that this option remains disabled.
Note Do not edit the NTP configuration file after you complete the time sync configuration, because doing so affects NTP when you run the redeploy command on the management node.
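The configuration-file edits in step 3 above lend themselves to scripting. The following sketch demonstrates the same transformation with sed on a scratch copy of the file, so the live /etc/ntp.conf is untouched; ntp.example.com is a placeholder for your cluster's NTP server:

```shell
#!/usr/bin/env bash
set -e

# Work on a scratch copy so the live /etc/ntp.conf is untouched.
tmpdir=$(mktemp -d)
conf="$tmpdir/ntp.conf"

# A minimal stand-in for the default configuration.
cat > "$conf" <<'EOF'
server 0.gentoo.pool.ntp.org
server 1.gentoo.pool.ntp.org
server 2.gentoo.pool.ntp.org
server 3.gentoo.pool.ntp.org
EOF

# Comment out every default pool server...
sed -i 's/^server .*pool\.ntp\.org/#&/' "$conf"

# ...and append the NTP server used by the storage cluster
# (placeholder value; use your cluster's NTP server).
echo "server ntp.example.com" >> "$conf"

cat "$conf"
```

On a real management node you would apply the same sed expression to /etc/ntp.conf between stopping and restarting NTPD, as in substeps 2 through 5.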

Configure the management node

  1. Create a temporary destination directory for the management services bundle contents:

    mkdir -p /sf/etc/mnode/mnode-archive
  2. Download the management services bundle (version 2.15.28 or later) that was previously installed on the existing management node and save it in the /sf/etc/mnode/ directory.

  3. Extract the downloaded bundle using the following command, replacing the value in [ ] brackets (including the brackets) with the name of the bundle file:

    tar -C /sf/etc/mnode -xvf /sf/etc/mnode/[management services bundle file]
  4. Extract the resulting file to the /sf/etc/mnode/mnode-archive directory:

    tar -C /sf/etc/mnode/mnode-archive -xvf /sf/etc/mnode/services_deploy_bundle.tar.gz
  5. Create a configuration file for accounts and volumes:

    echo '{"trident": true, "mvip": "[mvip IP address]", "account_name": "[persistent volume account name]"}' | sudo tee /sf/etc/mnode/mnode-archive/management-services-metadata.json
    1. Replace the value in [ ] brackets (including the brackets) for each of the following required parameters:

      • [mvip IP address]: The management virtual IP address of the storage cluster. Configure the management node with the same storage cluster that you used during NTP servers configuration.

      • [persistent volume account name]: The name of the account associated with all persistent volumes in this storage cluster.

  6. Configure and run the management node redeploy command to connect to persistent volumes hosted on the cluster and start services with previous management node configuration data:

    Note You will be prompted to enter passwords in a secure prompt. If your cluster is behind a proxy server, you must configure the proxy settings so you can reach a public network.
    sudo /sf/packages/mnode/redeploy-mnode --mnode_admin_user [username]
    1. Replace the value in [ ] brackets (including the brackets) with the user name for the management node administrator account. This is likely the user name of the account you used to log in to the management node.

      Note You can add the user name or allow the script to prompt you for the information.
    2. Run the redeploy-mnode command. The script displays a success message when the redeployment is complete.

    3. If you access Element web interfaces (such as the management node or NetApp Hybrid Cloud Control) using the Fully Qualified Domain Name (FQDN) of the system, reconfigure authentication for the management node.
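Steps 1 through 5 above can be collected into one script. The sketch below runs the same mkdir/tar/echo sequence against a fabricated dummy bundle in a temporary directory; the bundle contents, MVIP, and account name are stand-ins, and on a real management node you would use /sf/etc/mnode and the bundle downloaded from NetApp:

```shell
#!/usr/bin/env bash
set -e

# Stand-in for /sf/etc/mnode on a real management node.
base=$(mktemp -d)
mkdir -p "$base/mnode-archive"

# Fabricate a dummy bundle: an outer tar holding
# services_deploy_bundle.tar.gz, mirroring the real layout.
work=$(mktemp -d)
echo "dummy service payload" > "$work/services.txt"
tar -C "$work" -czf "$work/services_deploy_bundle.tar.gz" services.txt
tar -C "$work" -cf "$work/bundle.tar" services_deploy_bundle.tar.gz

# Step 3: extract the downloaded bundle into the base directory.
tar -C "$base" -xf "$work/bundle.tar"

# Step 4: extract the inner archive into the mnode-archive directory.
tar -C "$base/mnode-archive" -xzf "$base/services_deploy_bundle.tar.gz"

# Step 5: write the accounts/volumes configuration (placeholder values).
cat > "$base/mnode-archive/management-services-metadata.json" <<'EOF'
{"trident": true, "mvip": "10.117.0.10", "account_name": "mnode-volumes"}
EOF
```

After the metadata file is in place, step 6 (the redeploy-mnode command) is run interactively, so it is not included in the sketch.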

Important SSH capability that provides NetApp Support remote support tunnel (RST) session access is disabled by default on management nodes running management services 2.18 and later. If you had previously enabled SSH functionality on the management node, you might need to disable SSH again on the recovered management node.

Find more information