
Install a management node


You can manually install the management node for your cluster running NetApp Element software using the appropriate image for your configuration.

This manual process is intended for SolidFire all-flash storage administrators and NetApp HCI administrators who are not using the NetApp Deployment Engine for management node installation.

What you’ll need
  • Your cluster version is running NetApp Element software 11.3 or later.

  • Your installation uses IPv4. Management node 11.3 does not support IPv6.

    If you need IPv6 support, you can use management node 11.1.
  • You have permission to download software from the NetApp Support Site.

  • You have identified the management node image type that is correct for your platform:

    Platform              Installation image type
    Microsoft Hyper-V     .iso
    KVM                   .iso
    VMware vSphere        .iso, .ova
    Citrix XenServer      .iso
    OpenStack             .iso

About this task

The Element 12.2 management node is an optional upgrade. It is not required for existing deployments.

Before following this procedure, you should understand persistent volumes and decide whether you want to use them.

Download ISO or OVA and deploy the VM

  1. Download the OVA or ISO for your installation from the NetApp Support Site:

    1. Click Download Latest Release and accept the EULA.

    2. Select the management node image you want to download.

  2. If you downloaded the OVA, follow these steps:

    1. Deploy the OVA.

    2. If your storage cluster is on a separate subnet from your management node (eth0) and you want to use persistent volumes, add a second network interface controller (NIC) to the VM on the storage subnet (for example, eth1) or ensure that the management network can route to the storage network.

  3. If you downloaded the ISO, follow these steps:

    1. Create a new 64-bit virtual machine from your hypervisor with the following configuration:

      • Six virtual CPUs

      • 12GB RAM for most configurations or 24GB RAM for Element 12.2 configurations.

        For Element 12.2 configurations, the increased provisioned memory capacity accommodates management services upgrades and is not used in normal operation.
      • 400GB virtual disk, thin provisioned

      • One virtual network interface with internet access and access to the storage MVIP.

      • (Optional for SolidFire all-flash storage) One virtual network interface with management network access to the storage cluster. If your storage cluster is on a separate subnet from your management node (eth0) and you want to use persistent volumes, add a second network interface controller (NIC) to the VM on the storage subnet (eth1) or ensure that the management network can route to the storage network.

        Do not power on the virtual machine until the step later in this procedure that directs you to do so.
    2. Attach the ISO to the virtual machine and boot to the .iso install image.

      Installing a management node using the image might result in a 30-second delay before the splash screen appears.
  4. Power on the virtual machine for the management node after the installation completes.
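
For a KVM installation, the VM sizing above can be sketched as a single virt-install command. The VM name, ISO path, bridge name, and graphics setting below are placeholder assumptions, not values from this procedure; the command is assembled into a string and echoed so you can review it before running it for real.

```shell
# Sketch only: sizing from the steps above, with placeholder name/paths/network.
VCPUS=6
RAM_MB=12288            # use 24576 (24GB) for Element 12.2 configurations
DISK_GB=400             # thin-provisioned virtual disk

CMD="virt-install \
 --name element-mnode \
 --vcpus $VCPUS \
 --memory $RAM_MB \
 --disk size=$DISK_GB,sparse=yes \
 --cdrom /var/lib/libvirt/images/mnode.iso \
 --network bridge=br0 \
 --graphics none"

# Echoed for review; remove the echo and run the command when satisfied.
echo "$CMD"
```

Add a second --network argument if you need a storage-subnet NIC for persistent volumes, as described above.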

Create the management node admin and configure the network

  1. Using the terminal user interface (TUI), create a management node admin user.

    To move through the menu options, press the Up or Down arrow keys. To move between buttons, or from the buttons to the fields, press Tab. To navigate between fields, press the Up or Down arrow keys.
  2. Configure the management node network (eth0).

    If you need an additional NIC to isolate storage traffic, see instructions on configuring another NIC: Configure a storage Network Interface Controller (NIC).

Configure the management node

  1. SSH into the management node.

  2. Using SSH, run the following command to gain root privileges. Enter your password when prompted:

    sudo su
  3. Ensure time is synced (NTP) between the management node and the storage cluster.

    In vSphere, the Synchronize guest time with host box should be checked in the VM options. Leave this option enabled if you make future changes to the VM.
  4. Configure and run the management node setup command:

    You will be prompted to enter passwords in a secure prompt. If your cluster is behind a proxy server, you must configure the proxy settings so you can reach a public network.
    /sf/packages/mnode/setup-mnode --mnode_admin_user [username] --storage_mvip [mvip] --storage_username [username] --telemetry_active [true]
    1. Replace the value in [ ] brackets (including the brackets) for each of the following required parameters:

      The abbreviated form of the command name is in parentheses ( ) and can be substituted for the full name.
      • --mnode_admin_user (-mu) [username]: The username for the management node administrator account. This is likely to be the username for the user account you used to log into the management node.

      • --storage_mvip (-sm) [MVIP address]: The management virtual IP address (MVIP) of the storage cluster running Element software.

      • --storage_username (-su) [username]: The storage cluster administrator username for the cluster specified by the --storage_mvip parameter.

      • --telemetry_active (-t) [true]: Retain the value true to enable data collection for analytics by Active IQ.

    2. (Optional): Add Active IQ endpoint parameters to the command:

      • --remote_host (-rh) [AIQ_endpoint]: The endpoint where Active IQ telemetry data is sent to be processed. If the parameter is not included, the default endpoint is used.

    3. (Recommended): Add the following persistent volume parameters. Do not modify or delete the account and volumes created for persistent volumes functionality, or a loss of management capability will result.

      • --use_persistent_volumes (-pv) [true/false, default: false]: Enable or disable persistent volumes. Enter the value true to enable persistent volumes functionality.

      • --persistent_volumes_account (-pva) [account_name]: If --use_persistent_volumes is set to true, use this parameter and enter the storage account name that will be used for persistent volumes.

        Use a unique account name for persistent volumes that is different from any existing account name on the cluster. It is critically important to keep the account for persistent volumes separate from the rest of your environment.
      • --persistent_volumes_mvip (-pvm) [mvip]: Enter the management virtual IP address (MVIP) of the storage cluster running Element software that will be used with persistent volumes. This is only required if multiple storage clusters are managed by the management node. If multiple clusters are not managed, the default cluster MVIP will be used.

    4. Configure a proxy server:

      • --use_proxy (-up) [true/false, default: false]: Enable or disable the use of the proxy. This parameter is required to configure a proxy server.

      • --proxy_hostname_or_ip (-pi) [host]: The proxy hostname or IP. This is required if you want to use a proxy. If you specify this, you will be prompted to input --proxy_port.

      • --proxy_username (-pu) [username]: The proxy username. This parameter is optional.

      • --proxy_password (-pp) [password]: The proxy password. This parameter is optional.

      • --proxy_port (-pq) [port, default: 0]: The proxy port. If you specify this, you will be prompted to input the proxy host name or IP (--proxy_hostname_or_ip).

      • --proxy_ssh_port (-ps) [port, default: 443]: The SSH proxy port.

    5. (Optional) Use parameter help if you need additional information about each parameter:

      • --help (-h): Returns information about each parameter. Parameters are defined as required or optional based on initial deployment. Upgrade and redeployment parameter requirements might vary.

    6. Run the setup-mnode command.
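
A filled-in sketch of the setup command with persistent volumes enabled might look like the following. Every value (usernames, MVIP address, account name) is a placeholder for illustration only; the command is assembled into a string and echoed for review rather than executed.

```shell
# Placeholder values throughout; replace with your own before running.
MNODE_USER=admin
MVIP=10.117.78.195
STORAGE_USER=admin
PV_ACCOUNT=mnode-pv-account   # must be unique on the cluster

CMD="/sf/packages/mnode/setup-mnode \
 --mnode_admin_user $MNODE_USER \
 --storage_mvip $MVIP \
 --storage_username $STORAGE_USER \
 --telemetry_active true \
 --use_persistent_volumes true \
 --persistent_volumes_account $PV_ACCOUNT"

# Echoed for review; run the assembled command interactively so you can
# respond to the secure password prompts.
echo "$CMD"
```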

Configure controller assets

  1. Locate the installation ID:

    1. From a browser, log into the management node REST API UI:

    2. Go to the storage MVIP and log in. This action causes the certificate to be accepted for the next step.

    3. Open the inventory service REST API UI on the management node:

      https://[management node IP]/inventory/1/
    4. Click Authorize and complete the following:

      1. Enter the cluster user name and password.

      2. Enter the client ID as mnode-client.

      3. Click Authorize to begin a session.

    5. From the REST API UI, click GET /installations.

    6. Click Try it out.

    7. Click Execute.

    8. From the code 200 response body, copy and save the id for the installation for use in a later step.

      Your installation has a base asset configuration that was created during installation or upgrade.

  2. (NetApp HCI only) Locate the hardware tag for your compute node in vSphere:

    1. Select the host in the vSphere Web Client navigator.

    2. Click the Monitor tab, and click Hardware Health.

    3. The node BIOS manufacturer and model number are listed. Copy and save the tag value for use in a later step.

  3. Add a vCenter controller asset for NetApp HCI monitoring (NetApp HCI installations only) and Hybrid Cloud Control (for all installations) to the management node known assets:

    1. Access the mnode service API UI on the management node by entering the management node IP address followed by /mnode:

      https://[management node IP]/mnode
    2. Click Authorize or any lock icon and complete the following:

      1. Enter the cluster user name and password.

      2. Enter the client ID as mnode-client.

      3. Click Authorize to begin a session.

      4. Close the window.

    3. Click POST /assets/{asset_id}/controllers to add a controller sub-asset.

    4. Click Try it out.

    5. Enter the parent base asset ID you copied to your clipboard in the asset_id field.

    6. Enter the required payload values with type vCenter and vCenter credentials.

    7. Click Execute.
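
The inventory lookup and controller-asset POST above can also be sketched with curl instead of the Swagger UI. The bearer token, IP addresses, asset ID, and the JSON field names (ip, username, password, type) shown here are assumptions for illustration; check the Model tab in the API UI for the exact payload schema. The requests are assembled into strings and echoed for review.

```shell
# All values are placeholders.
MNODE=10.117.78.200          # management node IP
TOKEN=example-bearer-token   # obtained via the Authorize step
ASSET_ID=example-asset-id    # base asset id from GET /installations

# List installations to find the base asset id:
GET_CMD="curl -sk -H 'Authorization: Bearer $TOKEN' \
 https://$MNODE/inventory/1/installations"

# Add a vCenter controller sub-asset (field names are assumed):
POST_CMD="curl -sk -X POST -H 'Authorization: Bearer $TOKEN' \
 -H 'Content-Type: application/json' \
 -d '{\"ip\":\"10.117.78.210\",\"username\":\"admin@vsphere.local\",\"password\":\"example\",\"type\":\"vCenter\"}' \
 https://$MNODE/mnode/assets/$ASSET_ID/controllers"

echo "$GET_CMD"
echo "$POST_CMD"
```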

(NetApp HCI only) Configure compute node assets

  1. (For NetApp HCI only) Add a compute node asset to the management node known assets:

    1. Click POST /assets/{asset_id}/compute-nodes to add a compute node sub-asset with credentials for the compute node asset.

    2. Click Try it out.

    3. Enter the parent base asset ID you copied to your clipboard in the asset_id field.

    4. In the payload, enter the required payload values as defined in the Model tab. Enter ESXi Host as type and enter the hardware tag you saved during a previous step for hardware_tag.

    5. Click Execute.
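
The compute-node POST above can be sketched with curl in the same way. The token, IPs, asset ID, hardware tag, and the payload field names other than type and hardware_tag (which the steps above name) are placeholder assumptions; confirm the schema in the Model tab. The request is assembled and echoed for review.

```shell
# All values are placeholders.
MNODE=10.117.78.200
TOKEN=example-bearer-token
ASSET_ID=example-asset-id
HW_TAG=example-hardware-tag   # value saved from vSphere Hardware Health

CMD="curl -sk -X POST -H 'Authorization: Bearer $TOKEN' \
 -H 'Content-Type: application/json' \
 -d '{\"ip\":\"10.117.78.220\",\"username\":\"root\",\"password\":\"example\",\"type\":\"ESXi Host\",\"hardware_tag\":\"$HW_TAG\"}' \
 https://$MNODE/mnode/assets/$ASSET_ID/compute-nodes"

echo "$CMD"
```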

Find more information