
Install a management node

You can manually install the management node for your cluster running NetApp Element software using the appropriate image for your configuration.

This manual process is intended for NetApp HCI administrators who are not using the NetApp Deployment Engine for management node installation.

What you'll need
  • Your cluster version is running NetApp Element software 11.3 or later.

  • Your installation uses IPv4. The management node 11.3 does not support IPv6.

    Note If you need IPv6 support, you can use management node 11.1.
  • You have permission to download software from the NetApp Support Site.

  • You have identified the management node image type that is correct for your platform:

    Platform              Installation image type
    Microsoft Hyper-V     .iso
    KVM                   .iso
    VMware vSphere        .iso, .ova
    Citrix XenServer      .iso
    OpenStack             .iso

  • (Management node 12.0 and later with proxy server) You have updated NetApp Hybrid Cloud Control to management services version 2.16 before configuring a proxy server.

About this task

The Element 12.2 management node is an optional upgrade. It is not required for existing deployments.

Prior to following this procedure, you should have an understanding of persistent volumes and whether or not you want to use them. Persistent volumes are optional but recommended for management node configuration data recovery in the event of a virtual machine (VM) loss.

Download ISO or OVA and deploy the VM

  1. Download the OVA or ISO for your installation from the NetApp HCI page on the NetApp Support Site:

    1. Select Download Latest Release and accept the EULA.

    2. Select the management node image you want to download.

  2. If you downloaded the OVA, follow these steps:

    1. Deploy the OVA.

    2. If your storage cluster is on a separate subnet from your management node (eth0) and you want to use persistent volumes, add a second network interface controller (NIC) to the VM on the storage subnet (for example, eth1) or ensure that the management network can route to the storage network.

  3. If you downloaded the ISO, follow these steps:

    1. Create a new 64-bit VM from your hypervisor with the following configuration:

      • Six virtual CPUs

      • 24GB RAM

      • Storage adapter type set to LSI Logic Parallel

        Important The default for your management node might be LSI Logic SAS. In the New Virtual Machine window, verify the storage adapter configuration by selecting Customize hardware > Virtual Hardware. If required, change LSI Logic SAS to LSI Logic Parallel.
      • 400GB virtual disk, thin provisioned

      • One virtual network interface with internet access and access to the storage MVIP.

      • One virtual network interface with management network access to the storage cluster. If your storage cluster is on a separate subnet from your management node (eth0) and you want to use persistent volumes, add a second network interface controller (NIC) to the VM on the storage subnet (eth1) or ensure that the management network can route to the storage network.

        Important Do not power on the VM until the later step in this procedure that instructs you to do so.
    2. Attach the ISO to the VM and boot to the .iso install image.

      Note Installing a management node using the image might result in a 30-second delay before the splash screen appears.
  4. Power on the VM for the management node after the installation completes.
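
If you are deploying the .iso on KVM, you can create and boot the VM from the command line instead of a graphical hypervisor client. The following is a minimal sketch using virt-install that matches the specification in step 3 (six vCPUs, 24GB RAM, 400GB thin-provisioned disk); the VM name, image paths, bridge names, and OS variant are placeholders for your environment, and the second NIC is needed only if your storage cluster is on a separate subnet and you plan to use persistent volumes.

    # Sketch only: placeholder names and paths; adjust for your libvirt environment.
    virt-install \
      --name mnode \
      --vcpus 6 \
      --memory 24576 \
      --disk path=/var/lib/libvirt/images/mnode.qcow2,size=400,sparse=yes \
      --cdrom /path/to/mnode.iso \
      --network bridge=br-mgmt \
      --network bridge=br-storage \
      --os-variant generic \
      --noautoconsole

This combines creating the VM and booting to the .iso install image (substeps 3.1 and 3.2); power on the VM after the installation completes, as described in step 4.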

Create the management node admin and configure the network

  1. Using the terminal user interface (TUI), create a management node admin user.

    Tip To move through the menu options, press the Up or Down arrow keys. To move between buttons, or from a button to a field, press Tab. To navigate between fields, press the Up or Down arrow keys.
  2. If there is a Dynamic Host Configuration Protocol (DHCP) server on the network that assigns IPs with a maximum transmission unit (MTU) less than 1500 bytes, you must perform the following steps:

    1. Temporarily put the management node on a vSphere network without DHCP, such as iSCSI.

    2. Reboot the VM or restart the VM network.

    3. Using the TUI, configure the correct IP on the management network with an MTU greater than or equal to 1500 bytes.

    4. Re-assign the correct VM network to the VM.

    Note A DHCP server that assigns IPs with an MTU less than 1500 bytes can prevent you from configuring the management node network or using the management node UI.
  3. Configure the management node network (eth0).

    Note If you need an additional NIC to isolate storage traffic, see instructions on configuring another NIC: Configure a storage Network Interface Controller (NIC).
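
After the network is configured, you can optionally confirm the interface settings from the management node shell. This quick check assumes eth0 is the management interface, as described above:

    # Confirm the assigned IP address and that the MTU is at least 1500 bytes
    ip addr show eth0
    ip link show eth0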

Configure time sync

  1. Ensure time is synced between the management node and the storage cluster using NTP:

    Note Starting with Element 12.3.1, substeps 1 to 5 are performed automatically. For management node 12.3.1 or later, proceed to substep 6 to complete the time sync configuration.
    1. Log in to the management node using SSH or the console provided by your hypervisor.

    2. Stop NTPD:

      sudo service ntpd stop
    3. Edit the NTP configuration file /etc/ntp.conf:

      1. Comment out the default servers (server 0.gentoo.pool.ntp.org) by adding a # in front of each.

      2. Add a new line for each default time server you want to add. The default time servers must be the same NTP servers used on the storage cluster that you will use in a later step.

        vi /etc/ntp.conf
        
        #server 0.gentoo.pool.ntp.org
        #server 1.gentoo.pool.ntp.org
        #server 2.gentoo.pool.ntp.org
        #server 3.gentoo.pool.ntp.org
        server <insert the hostname or IP address of the default time server>
      3. Save the configuration file when complete.

    4. Force an NTP sync with the newly added server.

      sudo ntpd -gq
    5. Restart NTPD.

      sudo service ntpd start
    6. Disable time synchronization with the host via the hypervisor (the following is a VMware example):

      Note If you deploy the mNode in a hypervisor environment other than VMware, for example, from the .iso image in an OpenStack environment, refer to the hypervisor documentation for the equivalent commands.
      1. Disable periodic time synchronization:

        vmware-toolbox-cmd timesync disable
      2. Display and confirm the current status of the service:

        vmware-toolbox-cmd timesync status
      3. In vSphere, verify that the Synchronize guest time with host box is un-checked in the VM options.

        Note Do not enable this option if you make future changes to the VM.
Note Do not edit the NTP configuration after you complete the time sync configuration, because doing so affects NTP when you run the setup command on the management node.
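
To confirm that the management node is synchronizing with the intended servers, you can query the NTP peers. This assumes the ntpq utility is available alongside ntpd on the management node:

    # List NTP peers; an asterisk (*) marks the server currently selected for sync
    ntpq -p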

Set up the management node

  1. Configure and run the management node setup command:

    Note You will be prompted to enter passwords in a secure prompt. If your cluster is behind a proxy server, you must configure the proxy settings so you can reach a public network.
    sudo /sf/packages/mnode/setup-mnode --mnode_admin_user [username] --storage_mvip [mvip] --storage_username [username] --telemetry_active [true]
    1. Replace the value in [ ] brackets (including the brackets) for each of the following required parameters:

      Note The abbreviated form of each parameter name is shown in parentheses ( ) and can be substituted for the full name.
      • --mnode_admin_user (-mu) [username]: The username for the management node administrator account. This is likely to be the username for the user account you used to log into the management node.

      • --storage_mvip (-sm) [MVIP address]: The management virtual IP address (MVIP) of the storage cluster running Element software. Configure the management node with the same storage cluster that you used during NTP server configuration.

      • --storage_username (-su) [username]: The storage cluster administrator username for the cluster specified by the --storage_mvip parameter.

      • --telemetry_active (-t) [true]: Retain the value true to enable data collection for analytics by Active IQ.

    2. (Optional): Add Active IQ endpoint parameters to the command:

      • --remote_host (-rh) [AIQ_endpoint]: The endpoint where Active IQ telemetry data is sent to be processed. If the parameter is not included, the default endpoint is used.

    3. (Recommended): Add the following persistent volume parameters. Do not modify or delete the account and volumes created for persistent volumes functionality, or a loss in management capability will result.

      • --use_persistent_volumes (-pv) [true/false, default: false]: Enable or disable persistent volumes. Enter the value true to enable persistent volumes functionality.

      • --persistent_volumes_account (-pva) [account_name]: If --use_persistent_volumes is set to true, use this parameter and enter the storage account name that will be used for persistent volumes.

        Note Use a unique account name for persistent volumes that is different from any existing account name on the cluster. It is critically important to keep the account for persistent volumes separate from the rest of your environment.
      • --persistent_volumes_mvip (-pvm) [mvip]: Enter the management virtual IP address (MVIP) of the storage cluster running Element software that will be used with persistent volumes. This is only required if multiple storage clusters are managed by the management node. If multiple clusters are not managed, the default cluster MVIP will be used.

    4. Configure a proxy server:

      • --use_proxy (-up) [true/false, default: false]: Enable or disable the use of the proxy. This parameter is required to configure a proxy server.

      • --proxy_hostname_or_ip (-pi) [host]: The proxy hostname or IP. This is required if you want to use a proxy. If you specify this, you will be prompted to input --proxy_port.

      • --proxy_username (-pu) [username]: The proxy username. This parameter is optional.

      • --proxy_password (-pp) [password]: The proxy password. This parameter is optional.

      • --proxy_port (-pq) [port, default: 0]: The proxy port. If you specify this, you will be prompted to input the proxy host name or IP (--proxy_hostname_or_ip).

      • --proxy_ssh_port (-ps) [port, default: 443]: The SSH proxy port. This defaults to port 443.

    5. (Optional) Use parameter help if you need additional information about each parameter:

      • --help (-h): Returns information about each parameter. Parameters are defined as required or optional based on initial deployment. Upgrade and redeployment parameter requirements might vary.

    6. Run the setup-mnode command.
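
For reference, a complete invocation with persistent volumes enabled might look like the following. All values shown are placeholders for your environment, and you are still prompted for passwords at a secure prompt:

      sudo /sf/packages/mnode/setup-mnode \
        --mnode_admin_user admin \
        --storage_mvip 10.117.0.10 \
        --storage_username clusteradmin \
        --telemetry_active true \
        --use_persistent_volumes true \
        --persistent_volumes_account mnode-pv-account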

Configure controller assets

  1. Locate the installation ID:

    1. From a browser, log into the management node REST API UI:

    2. Go to the storage MVIP and log in. This action causes the certificate to be accepted for the next step.

    3. Open the inventory service REST API UI on the management node:

      https://<ManagementNodeIP>/inventory/1/
    4. Select Authorize and complete the following:

      1. Enter the cluster user name and password.

      2. Enter the client ID as mnode-client.

      3. Select Authorize to begin a session.

    5. From the REST API UI, select GET /installations.

    6. Select Try it out.

    7. Select Execute.

    8. From the code 200 response body, copy and save the id for the installation for use in a later step.

      Your installation has a base asset configuration that was created during installation or upgrade.

  2. (NetApp HCI only) Locate the hardware tag for your compute node in vSphere:

    1. Select the host in the vSphere Web Client navigator.

    2. Select the Monitor tab, and select Hardware Health.

    3. The node BIOS manufacturer and model number are listed. Copy and save the tag value for use as the hardware tag in a later step.

  3. Add a vCenter controller asset for NetApp HCI monitoring (NetApp HCI installations only) and Hybrid Cloud Control (for all installations) to the management node known assets:

    1. Access the mnode service API UI on the management node by entering the management node IP address followed by /mnode:

      https://<ManagementNodeIP>/mnode
    2. Select Authorize or any lock icon and complete the following:

      1. Enter the cluster user name and password.

      2. Enter the client ID as mnode-client.

      3. Select Authorize to begin a session.

      4. Close the window.

    3. Select POST /assets/{asset_id}/controllers to add a controller sub-asset.

      Note You should create a new NetApp HCC role in vCenter to add a controller sub-asset. This new NetApp HCC role will limit the management node services view to NetApp-only assets. See Create a NetApp HCC role in vCenter.
    4. Select Try it out.

    5. Enter the parent base asset ID you copied to your clipboard in the asset_id field.

    6. Enter the required payload values with type vCenter and vCenter credentials.

    7. Select Execute.
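
For reference, the request body for POST /assets/{asset_id}/controllers provides the vCenter address and credentials. The field names below are representative only; confirm the exact schema in the Model tab of the API UI before you select Execute, and replace the placeholder values with your own:

      {
        "username": "administrator@vsphere.local",
        "password": "vcenter-password",
        "ip": "10.117.0.20",
        "type": "vCenter"
      }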

(NetApp HCI only) Configure compute node assets

  1. (For NetApp HCI only) Add a compute node asset to the management node known assets:

    1. Select POST /assets/{asset_id}/compute-nodes to add a compute node sub-asset with credentials for the compute node asset.

    2. Select Try it out.

    3. Enter the parent base asset ID you copied to your clipboard in the asset_id field.

    4. In the payload, enter the required payload values as defined in the Model tab. Enter ESXi Host as type and enter the hardware tag you saved during a previous step for hardware_tag.

    5. Select Execute.
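
For reference, a representative request body for POST /assets/{asset_id}/compute-nodes is shown below. Only type and hardware_tag are named in the procedure above; the remaining field names are representative, so confirm them in the Model tab, and replace the placeholder values with your own. The hardware_tag value is the tag you saved from vSphere:

      {
        "username": "root",
        "password": "esxi-password",
        "ip": "10.117.0.30",
        "type": "ESXi Host",
        "hardware_tag": "<hardware tag saved from vSphere>"
      }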

Find more information