Install a management node
You can manually install the management node for your cluster running NetApp Element software using the appropriate image for your configuration.
This manual process is intended for NetApp HCI administrators who are not using the NetApp Deployment Engine for management node installation.
What you'll need:
- Your cluster is running NetApp Element software 11.3 or later.
- Your installation uses IPv4. Management node 11.3 does not support IPv6. If you need IPv6 support, you can use management node 11.1.
- You have permission to download software from the NetApp Support Site.
- You have identified the management node image type that is correct for your platform:
  Platform             Installation image type
  Microsoft Hyper-V    .iso
  KVM                  .iso
  VMware vSphere       .iso, .ova
  Citrix XenServer     .iso
  OpenStack            .iso
- (Management node 12.0 and later with proxy server) You have updated NetApp Hybrid Cloud Control to management services version 2.16 before configuring a proxy server.
The Element 12.2 management node is an optional upgrade. It is not required for existing deployments.
Before following this procedure, you should understand persistent volumes and decide whether you want to use them. Persistent volumes are optional but recommended for recovering management node configuration data if the VM is lost.
Download ISO or OVA and deploy the VM
- Download the OVA or ISO for your installation from the NetApp HCI page on the NetApp Support Site:
  - Select Download Latest Release and accept the EULA.
  - Select the management node image you want to download.
- If you downloaded the OVA, follow these steps:
  - Deploy the OVA.
  - If your storage cluster is on a separate subnet from your management node (eth0) and you want to use persistent volumes, add a second network interface controller (NIC) to the VM on the storage subnet (for example, eth1) or ensure that the management network can route to the storage network.
- If you downloaded the ISO, follow these steps:
  - Create a new 64-bit virtual machine from your hypervisor with the following configuration:
    - Six virtual CPUs
    - 24GB RAM
    - Storage adapter type set to LSI Logic Parallel
      The default for your management node might be LSI Logic SAS. In the New Virtual Machine window, verify the storage adapter configuration by selecting Customize hardware > Virtual Hardware. If required, change LSI Logic SAS to LSI Logic Parallel.
    - 400GB virtual disk, thin provisioned
    - One virtual network interface with internet access and access to the storage MVIP
    - One virtual network interface with management network access to the storage cluster. If your storage cluster is on a separate subnet from your management node (eth0) and you want to use persistent volumes, add a second network interface controller (NIC) to the VM on the storage subnet (eth1) or ensure that the management network can route to the storage network.
  Do not power on the virtual machine until the step that directs you to do so later in this procedure.
- Attach the ISO to the virtual machine and boot to the .iso install image.
  Installing a management node using the image might result in a 30-second delay before the splash screen appears.
- Power on the virtual machine for the management node after the installation completes.
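For the KVM/ISO path, the VM configuration listed above can be expressed as a single virt-install command. This is only a sketch: the VM name, ISO path, and bridge names are placeholders, and you should verify the flags against your virt-install version. The command is printed here rather than executed.

```shell
# Placeholders: VM name, ISO path, and network bridges.
# 24GB RAM = 24576 MiB; size=400,sparse=yes gives a 400GB thin-provisioned disk.
cmd='virt-install --name element-mnode --vcpus 6 --memory 24576 \
  --disk size=400,sparse=yes \
  --cdrom /var/lib/libvirt/images/mnode.iso \
  --network bridge=br-mgmt --network bridge=br-storage \
  --graphics vnc --noautoconsole'
echo "$cmd"
```

The second `--network` line corresponds to the optional storage-subnet NIC described above; omit it if your management network can route to the storage network.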
Create the management node admin and configure the network
- Using the terminal user interface (TUI), create a management node admin user.
  To move through the menu options, press the Up or Down arrow keys. To move through the buttons, press Tab. To move from the buttons to the fields, press Tab. To navigate between fields, press the Up or Down arrow keys.
- Configure the management node network (eth0).
  If you need an additional NIC to isolate storage traffic, see the instructions on configuring another NIC: Configure a storage Network Interface Controller (NIC).
Configure time sync
- Ensure that time is synced between the management node and the storage cluster using NTP:
  Starting with Element 12.3.1, substeps (a) to (e) are performed automatically. For management node 12.3.1, proceed to substep (f) to complete the time sync configuration.
  - Log in to the management node using SSH or the console provided by your hypervisor.
  - Stop NTPD:
    sudo service ntpd stop
  - Edit the NTP configuration file /etc/ntp.conf:
    - Comment out the default servers (server 0.gentoo.pool.ntp.org) by adding a # in front of each.
    - Add a new line for each default time server you want to add. The default time servers must be the same NTP servers used on the storage cluster that you will use in a later step.

      vi /etc/ntp.conf
      #server 0.gentoo.pool.ntp.org
      #server 1.gentoo.pool.ntp.org
      #server 2.gentoo.pool.ntp.org
      #server 3.gentoo.pool.ntp.org
      server <insert the hostname or IP address of the default time server>

    - Save the configuration file when complete.
  - Force an NTP sync with the newly added server:
    sudo ntpd -gq
  - Restart NTPD:
    sudo service ntpd start
  - Disable time synchronization with the host via the hypervisor (the following is a VMware example):
    If you deploy the mNode in a hypervisor environment other than VMware, for example, from the .iso image in an OpenStack environment, refer to the hypervisor documentation for the equivalent commands.
    - Disable periodic time synchronization:
      vmware-toolbox-cmd timesync disable
    - Display and confirm the current status of the service:
      vmware-toolbox-cmd timesync status
    - In vSphere, verify that the Synchronize guest time with host box is unchecked in the VM options.
      Do not enable this option if you make future changes to the VM.
Do not edit the NTP configuration after you complete the time sync configuration, because editing it affects NTP when you run the setup command on the management node.
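The ntp.conf edit in the substeps above can be scripted instead of done interactively in vi. A minimal sketch, run here against a temporary copy of the file; on the management node the target would be /etc/ntp.conf, and the server name is a placeholder for your storage cluster's NTP server:

```shell
# Work on a temporary copy; on the management node this would be /etc/ntp.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
server 0.gentoo.pool.ntp.org
server 1.gentoo.pool.ntp.org
server 2.gentoo.pool.ntp.org
server 3.gentoo.pool.ntp.org
EOF

# Comment out the default gentoo pool servers by prefixing each with #.
sed -i 's/^server [0-9]\.gentoo\.pool\.ntp\.org/#&/' "$conf"

# Append the storage cluster's NTP server (placeholder hostname).
echo 'server ntp.storage.example.com' >> "$conf"

cat "$conf"
```

After writing the file, the force-sync and restart commands (sudo ntpd -gq, sudo service ntpd start) apply unchanged.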
Set up the management node
- Configure and run the management node setup command:
  You will be prompted to enter passwords in a secure prompt. If your cluster is behind a proxy server, you must configure the proxy settings so you can reach a public network.

  sudo /sf/packages/mnode/setup-mnode --mnode_admin_user [username] --storage_mvip [mvip] --storage_username [username] --telemetry_active [true]
- Replace the value in [ ] brackets (including the brackets) for each of the following required parameters:
  The abbreviated form of each command name is shown in parentheses ( ) and can be substituted for the full name.
  - --mnode_admin_user (-mu) [username]: The username for the management node administrator account. This is likely the username for the user account you used to log in to the management node.
  - --storage_mvip (-sm) [MVIP address]: The management virtual IP address (MVIP) of the storage cluster running Element software. Configure the management node with the same storage cluster that you used during NTP server configuration.
  - --storage_username (-su) [username]: The storage cluster administrator username for the cluster specified by the --storage_mvip parameter.
  - --telemetry_active (-t) [true]: Retain the value true to enable data collection for analytics by Active IQ.
- (Optional) Add Active IQ endpoint parameters to the command:
  - --remote_host (-rh) [AIQ_endpoint]: The endpoint where Active IQ telemetry data is sent to be processed. If the parameter is not included, the default endpoint is used.
(Recommended): Add the following persistent volume parameters. Do not modify or delete the account and volumes created for persistent volumes functionality or a loss in management capability will result.
-
--use_persistent_volumes (-pv) [true/false, default: false]: Enable or disable persistent volumes. Enter the value true to enable persistent volumes functionality.
-
--persistent_volumes_account (-pva) [account_name]: If
--use_persistent_volumes
is set to true, use this parameter and enter the storage account name that will be used for persistent volumes.Use a unique account name for persistent volumes that is different from any existing account name on the cluster. It is critically important to keep the account for persistent volumes separate from the rest of your environment. -
--persistent_volumes_mvip (-pvm) [mvip]: Enter the management virtual IP address (MVIP) of the storage cluster running Element software that will be used with persistent volumes. This is only required if multiple storage clusters are managed by the management node. If multiple clusters are not managed, the default cluster MVIP will be used.
- Configure a proxy server:
  - --use_proxy (-up) [true/false, default: false]: Enable or disable the use of the proxy. This parameter is required to configure a proxy server.
  - --proxy_hostname_or_ip (-pi) [host]: The proxy hostname or IP. This is required if you want to use a proxy. If you specify this, you will be prompted to input --proxy_port.
  - --proxy_username (-pu) [username]: The proxy username. This parameter is optional.
  - --proxy_password (-pp) [password]: The proxy password. This parameter is optional.
  - --proxy_port (-pq) [port, default: 0]: The proxy port. If you specify this, you will be prompted to input the proxy hostname or IP (--proxy_hostname_or_ip).
  - --proxy_ssh_port (-ps) [port, default: 443]: The SSH proxy port. This defaults to port 443.
- (Optional) Use parameter help if you need additional information about each parameter:
  - --help (-h): Returns information about each parameter. Parameters are defined as required or optional based on initial deployment. Upgrade and redeployment parameter requirements might vary.
- Run the setup-mnode command.
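Putting the required and recommended parameters together, a full invocation can be assembled from shell variables. This is a sketch only: every value below is a placeholder, and the command is printed rather than executed.

```shell
# Placeholder values -- substitute your own before running setup-mnode.
MNODE_USER="admin"
STORAGE_MVIP="10.0.0.10"
STORAGE_USER="clusteradmin"
PV_ACCOUNT="mnode-pv"   # unique account reserved for persistent volumes

# Assemble the command with required parameters plus persistent volumes enabled.
cmd="sudo /sf/packages/mnode/setup-mnode \
--mnode_admin_user $MNODE_USER \
--storage_mvip $STORAGE_MVIP \
--storage_username $STORAGE_USER \
--telemetry_active true \
--use_persistent_volumes true \
--persistent_volumes_account $PV_ACCOUNT"

echo "$cmd"
```

Passwords are not passed on the command line; setup-mnode prompts for them securely, as noted above.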
Configure controller assets
- Locate the installation ID:
  - From a browser, log in to the management node REST API UI:
    - Go to the storage MVIP and log in. This action causes the certificate to be accepted for the next step.
    - Open the inventory service REST API UI on the management node:
      https://<ManagementNodeIP>/inventory/1/
    - Select Authorize and complete the following:
      - Enter the cluster user name and password.
      - Enter the client ID as mnode-client.
      - Select Authorize to begin a session.
  - From the REST API UI, select GET /installations.
  - Select Try it out.
  - Select Execute.
  - From the code 200 response body, copy and save the installation id for use in a later step.
    Your installation has a base asset configuration that was created during installation or upgrade.
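If you prefer the command line to the REST API UI, the same GET /installations call can be made with curl and the id extracted with standard tools. A sketch, with token handling simplified and a canned response standing in for the live API:

```shell
# The live call would look roughly like this (placeholder IP and token):
#   curl -sk "https://<ManagementNodeIP>/inventory/1/installations" \
#        -H "Authorization: Bearer $TOKEN"
# A canned response stands in for the API so the extraction can be shown.
response='{"installations":[{"id":"a1b2c3d4-ffff-0000","name":"hci"}]}'

# Pull the first "id" value out of the JSON body.
install_id=$(printf '%s' "$response" | sed -n 's/.*"id": *"\([^"]*\)".*/\1/p')
echo "$install_id"
```

On a real management node, a JSON processor such as jq (if installed) is a more robust way to extract the id than sed.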
- (NetApp HCI only) Locate the hardware tag for your compute node in vSphere:
  - Select the host in the vSphere Web Client navigator.
  - Select the Monitor tab, and select Hardware Health.
  - The node BIOS manufacturer and model number are listed. Copy and save the value for tag for use in a later step.
- Add a vCenter controller asset for NetApp HCI monitoring (NetApp HCI installations only) and Hybrid Cloud Control (for all installations) to the management node known assets:
  - Access the mnode service API UI on the management node by entering the management node IP address followed by /mnode:
    https://<ManagementNodeIP>/mnode
  - Select Authorize or any lock icon and complete the following:
    - Enter the cluster user name and password.
    - Enter the client ID as mnode-client.
    - Select Authorize to begin a session.
    - Close the window.
  - Select POST /assets/{asset_id}/controllers to add a controller sub-asset.
    You should create a new NetApp HCC role in vCenter to add a controller sub-asset. This new NetApp HCC role limits the management node services view to NetApp-only assets. See Create a NetApp HCC role in vCenter.
  - Select Try it out.
  - In the asset_id field, enter the parent base asset ID that you copied and saved in a previous step.
  - Enter the required payload values with type vCenter and vCenter credentials.
  - Select Execute.
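The controller payload can be prepared ahead of time as JSON. This is a sketch only: the credentials are placeholders, and the field names (host_name, username, password) are assumptions about the payload shape; verify them against the Model tab in the mnode service API UI before use.

```shell
# Placeholder vCenter credentials; field names are assumptions -- confirm
# against the Model tab in the mnode service API UI.
VCENTER_HOST="192.0.2.50"
VCENTER_USER="administrator@vsphere.local"
VCENTER_PASS="replace-me"

payload=$(cat <<EOF
{
  "type": "vCenter",
  "host_name": "$VCENTER_HOST",
  "username": "$VCENTER_USER",
  "password": "$VCENTER_PASS"
}
EOF
)
echo "$payload"
```

The compute-node sub-asset in the next section takes an analogous payload with type ESXi Host and a hardware_tag field.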
(NetApp HCI only) Configure compute node assets
- (For NetApp HCI only) Add a compute node asset to the management node known assets:
  - Select POST /assets/{asset_id}/compute-nodes to add a compute node sub-asset with credentials for the compute node asset.
  - Select Try it out.
  - In the asset_id field, enter the parent base asset ID that you copied and saved in a previous step.
  - In the payload, enter the required payload values as defined in the Model tab. Enter ESXi Host as type, and enter the hardware tag you saved during a previous step for hardware_tag.
  - Select Execute.