Deploy a StorageGRID node as a virtual machine
You use VMware vSphere Web Client to deploy each grid node as a virtual machine. During deployment, each grid node is created and connected to one or more StorageGRID networks.
If you need to deploy any StorageGRID appliance Storage Nodes, see Deploy appliance Storage Node.
Optionally, you can remap node ports or increase CPU or memory settings for the node before powering it on.
- You have reviewed how to plan and prepare for installation, and you understand the requirements for software, CPU and RAM, and storage and performance.
- You are familiar with VMware vSphere Hypervisor and have experience deploying virtual machines in this environment.
  The open-vm-tools package, an open-source implementation similar to VMware Tools, is included with the StorageGRID virtual machine. You don't need to install VMware Tools manually.
- You have downloaded and extracted the correct version of the StorageGRID installation archive for VMware.
  If you are deploying the new node as part of an expansion or recovery operation, you must use the version of StorageGRID that is currently running on the grid.
- You have the StorageGRID Virtual Machine Disk (.vmdk) file: NetApp-SG-version-SHA.vmdk
- You have the .ovf and .mf files for each type of grid node you are deploying:

  | Filename | Description |
  | --- | --- |
  | vsphere-primary-admin.ovf, vsphere-primary-admin.mf | The template file and manifest file for the primary Admin Node. |
  | vsphere-non-primary-admin.ovf, vsphere-non-primary-admin.mf | The template file and manifest file for a non-primary Admin Node. |
  | vsphere-storage.ovf, vsphere-storage.mf | The template file and manifest file for a Storage Node. |
  | vsphere-gateway.ovf, vsphere-gateway.mf | The template file and manifest file for a Gateway Node. |
  | vsphere-archive.ovf, vsphere-archive.mf | The template file and manifest file for an Archive Node. |
- The .vmdk, .ovf, and .mf files are all in the same directory. (A sketch for verifying these files against the manifest follows this list.)
- You have a plan to minimize failure domains. For example, you should not deploy all Gateway Nodes on a single virtual machine server.
  In a production deployment, don't run more than one Storage Node on a single virtual machine server. Using a dedicated virtual machine host for each Storage Node provides an isolated failure domain.
- If you are deploying a node as part of an expansion or recovery operation, you have the instructions for expanding a StorageGRID system or the recovery and maintenance instructions.
- If you are deploying a StorageGRID node as a virtual machine with storage assigned from a NetApp ONTAP system, you have confirmed that the volume does not have a FabricPool tiering policy enabled. For example, if a StorageGRID node is running as a virtual machine on a VMware host, ensure the volume backing the datastore for the node does not have a FabricPool tiering policy enabled. Disabling FabricPool tiering for volumes used with StorageGRID nodes simplifies troubleshooting and storage operations.
  Never use FabricPool to tier any data related to StorageGRID back to StorageGRID itself. Tiering StorageGRID data back to StorageGRID increases troubleshooting and operational complexity.
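Optionally, you can confirm that the extracted files are intact before you deploy. The following is a minimal sketch, not part of the StorageGRID tooling, that checks each file listed in a .mf manifest against its checksum. It assumes the standard OVF manifest line format (for example, SHA256(filename)= digest); adjust it if your manifest uses a different algorithm.

```python
# Minimal sketch: verify the files listed in an OVF manifest (.mf) against their
# checksums. Assumes manifest lines like "SHA256(filename)= hexdigest"; some
# archives use SHA1 instead, which this pattern also accepts.
import hashlib
import re
from pathlib import Path

def verify_manifest(mf_path: str) -> None:
    directory = Path(mf_path).parent
    pattern = re.compile(r"^(SHA1|SHA256)\((?P<name>[^)]+)\)\s*=\s*(?P<digest>[0-9a-fA-F]+)$")
    for line in Path(mf_path).read_text().splitlines():
        match = pattern.match(line.strip())
        if not match:
            continue
        algorithm = match.group(1).lower()
        target = directory / match.group("name")
        actual = hashlib.new(algorithm, target.read_bytes()).hexdigest()
        status = "OK" if actual == match.group("digest").lower() else "MISMATCH"
        print(f"{target.name}: {status}")

# Example call (hypothetical path):
# verify_manifest("./vsphere-primary-admin.mf")
```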
Follow these instructions to initially deploy VMware nodes, add a new VMware node in an expansion, or replace a VMware node as part of a recovery operation. Except as noted in the steps, the node deployment procedure is the same for all node types, including Admin Nodes, Storage Nodes, Gateway Nodes, and Archive Nodes.
If you are installing a new StorageGRID system:
- You must deploy the primary Admin Node before you deploy any other grid node.
- You must ensure that each virtual machine can connect to the primary Admin Node over the Grid Network.
- You must deploy all grid nodes before configuring the grid.

If you are performing an expansion or recovery operation:
- You must ensure that the new virtual machine can connect to the primary Admin Node over the Grid Network.
  If you need to remap any of the node's ports, don't power on the new node until the port remap configuration is complete.
1. Using vCenter, deploy an OVF template.
   If you specify a URL, point to a folder containing the following files. Otherwise, select each of these files from a local directory.
   NetApp-SG-version-SHA.vmdk
   vsphere-node.ovf
   vsphere-node.mf
   For example, if this is the first node you are deploying, use these files to deploy the primary Admin Node for your StorageGRID system:
   NetApp-SG-version-SHA.vmdk
   vsphere-primary-admin.ovf
   vsphere-primary-admin.mf
   (A scripted alternative is sketched after this step.)
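If you prefer to script this step, the same OVF deployment can be driven from the command line with VMware OVF Tool (ovftool). The following Python sketch is illustrative only: the vCenter address, credentials, inventory path, datastore, and network names are placeholders, and the --prop keys shown are assumptions that you should confirm against the properties defined in the .ovf file you are deploying.

```python
# Illustrative sketch: deploy the primary Admin Node OVF with VMware OVF Tool.
# Assumes ovftool is installed and on the PATH. All names, credentials, and
# property keys below are placeholders; confirm the exact OVF property names
# against the .ovf file before using anything like this.
import subprocess

ovftool_args = [
    "ovftool",
    "--acceptAllEulas",
    "--name=dc1-adm1",                    # virtual machine name (placeholder)
    "--datastore=datastore1",             # target datastore (placeholder)
    "--net:Grid Network=VM Network",      # source network -> destination port group (placeholders)
    "--prop:NODE_NAME=dc1-adm1",          # assumed property key; check the .ovf
    "--prop:GRID_NETWORK_IP=10.96.0.10",  # assumed property key; check the .ovf
    "vsphere-primary-admin.ovf",          # .mf and .vmdk must be in the same directory
    "vi://administrator%40vsphere.local@vcenter.example.com/Datacenter/host/Cluster",
]

subprocess.run(ovftool_args, check=True)
```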
2. Provide a name for the virtual machine.
   The standard practice is to use the same name for both the virtual machine and the grid node.
3. Place the virtual machine in the appropriate vApp or resource pool.
4. If you are deploying the primary Admin Node, read and accept the End User License Agreement.
   Depending on your version of vCenter, the order of the steps will vary for accepting the End User License Agreement, specifying the name of the virtual machine, and selecting a datastore.
5. Select storage for the virtual machine.
   If you are deploying a node as part of a recovery operation, perform the instructions in the storage recovery step to add new virtual disks, reattach virtual hard disks from the failed grid node, or both.
   When deploying a Storage Node, use 3 or more storage volumes, with each storage volume being 4 TB or larger. You must assign at least 4 TB to volume 0. (A sketch for checking a planned volume layout follows this step.)
   The Storage Node .ovf file defines several VMDKs for storage. Unless these VMDKs meet your storage requirements, you should remove them and assign appropriate VMDKs or RDMs for storage before powering up the node. VMDKs are more commonly used in VMware environments and are easier to manage, while RDMs might provide better performance for workloads that use larger object sizes (for example, greater than 100 MB).
   Some StorageGRID installations might use larger, more active storage volumes than typical virtualized workloads. You might need to tune some hypervisor parameters, such as MaxAddressableSpaceTB, to achieve optimal performance. If you encounter poor performance, contact your virtualization support resource to determine whether your environment could benefit from workload-specific configuration tuning.
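As a quick sanity check before deployment, you can verify a planned Storage Node volume layout against the sizing rules above: at least 3 volumes, each at least 4 TB, with at least 4 TB assigned to volume 0. This is an illustrative sketch only; the volume sizes are placeholders.

```python
# Illustrative check of a planned Storage Node volume layout against the
# guidelines above: 3 or more volumes, each 4 TB or larger, and at least
# 4 TB assigned to volume 0. The sizes below are placeholders.
MIN_VOLUMES = 3
MIN_VOLUME_TB = 4

planned_volumes_tb = [4, 8, 8]   # index 0 is volume 0

problems = []
if len(planned_volumes_tb) < MIN_VOLUMES:
    problems.append(f"only {len(planned_volumes_tb)} volumes planned; need at least {MIN_VOLUMES}")
for index, size_tb in enumerate(planned_volumes_tb):
    if size_tb < MIN_VOLUME_TB:
        problems.append(f"volume {index} is {size_tb} TB; each volume should be at least {MIN_VOLUME_TB} TB")

print("volume plan OK" if not problems else "\n".join(problems))
```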
6. Select networks.
   Determine which StorageGRID networks the node will use by selecting a destination network for each source network.
   - The Grid Network is required. You must select a destination network in the vSphere environment.
   - If you use the Admin Network, select a different destination network in the vSphere environment. If you don't use the Admin Network, select the same destination you selected for the Grid Network.
   - If you use the Client Network, select a different destination network in the vSphere environment. If you don't use the Client Network, select the same destination you selected for the Grid Network.
7. For Customize Template, configure the required StorageGRID node properties.
   a. Enter the Node name.
      If you are recovering a grid node, you must enter the name of the node you are recovering.
   b. Use the Temporary installation password drop-down to specify a temporary installation password, so that you can access the VM console or use SSH before the new node joins the grid.
      The temporary installation password is only used during node installation. After a node has been added to the grid, you can access it using the node console password, which is listed in the Passwords.txt file in the Recovery Package.
      - Use node name: The value you provided for the Node name field is used as the temporary installation password.
      - Use custom password: A custom password is used as the temporary installation password.
      - Disable password: No temporary installation password will be used. If you need to access the VM to debug installation issues, see Troubleshoot installation issues.
   c. If you selected Use custom password, specify the temporary installation password you want to use in the Custom password field.
   d. In the Grid Network (eth0) section, select STATIC or DHCP for the Grid network IP configuration.
      - If you select STATIC, enter the Grid network IP, Grid network mask, Grid network gateway, and Grid network MTU.
      - If you select DHCP, the Grid network IP, Grid network mask, and Grid network gateway are automatically assigned.
   e. In the Primary Admin IP field, enter the IP address of the primary Admin Node for the Grid Network.
      This step does not apply if the node you are deploying is the primary Admin Node. If you omit the primary Admin Node IP address, the IP address will be automatically discovered if the primary Admin Node, or at least one other grid node with ADMIN_IP configured, is present on the same subnet. However, it is recommended to set the primary Admin Node IP address here.
   f. In the Admin Network (eth1) section, select STATIC, DHCP, or DISABLED for the Admin network IP configuration.
      - If you don't want to use the Admin Network, select DISABLED and enter 0.0.0.0 for the Admin Network IP. You can leave the other fields blank.
      - If you select STATIC, enter the Admin network IP, Admin network mask, Admin network gateway, and Admin network MTU.
      - If you select STATIC, enter the Admin network external subnet list. You must also configure a gateway.
      - If you select DHCP, the Admin network IP, Admin network mask, and Admin network gateway are automatically assigned.
   g. In the Client Network (eth2) section, select STATIC, DHCP, or DISABLED for the Client network IP configuration.
      - If you don't want to use the Client Network, select DISABLED and enter 0.0.0.0 for the Client Network IP. You can leave the other fields blank.
      - If you select STATIC, enter the Client network IP, Client network mask, Client network gateway, and Client network MTU.
      - If you select DHCP, the Client network IP, Client network mask, and Client network gateway are automatically assigned.
   (A sketch for sanity-checking STATIC IP settings follows this step.)
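Before you enter STATIC values, you can sanity-check that each planned IP address, network mask, and gateway are consistent with one another. The following is an illustrative sketch only; the addresses are placeholders and the check is not part of the StorageGRID installer.

```python
# Illustrative sanity check for STATIC network settings: verifies that each
# gateway falls inside the subnet defined by the node IP and mask.
# All addresses below are placeholders.
import ipaddress

planned_networks = {
    "Grid Network (eth0)":  {"ip": "10.96.0.11", "mask": "255.255.255.0", "gateway": "10.96.0.1"},
    "Admin Network (eth1)": {"ip": "10.97.0.11", "mask": "255.255.255.0", "gateway": "10.97.0.1"},
}

for name, settings in planned_networks.items():
    interface = ipaddress.ip_interface(f"{settings['ip']}/{settings['mask']}")
    gateway = ipaddress.ip_address(settings["gateway"])
    ok = gateway in interface.network
    print(f"{name}: {interface} gateway {gateway} -> {'OK' if ok else 'gateway not in subnet'}")
```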
8. Review the virtual machine configuration and make any changes necessary.
9. When you are ready to complete, select Finish to start the upload of the virtual machine.
10. If you deployed this node as part of a recovery operation and this is not a full-node recovery, perform these steps after deployment is complete:
    a. Right-click the virtual machine, and select Edit Settings.
    b. Select each default virtual hard disk that has been designated for storage, and select Remove.
    c. Depending on your data recovery circumstances, add new virtual disks according to your storage requirements, reattach any virtual hard disks preserved from the previously removed failed grid node, or both.
       Note the following important guidelines:
       - If you are adding new disks, you should use the same type of storage device that was in use before node recovery.
       - The Storage Node .ovf file defines several VMDKs for storage. Unless these VMDKs meet your storage requirements, you should remove them and assign appropriate VMDKs or RDMs for storage before powering up the node. VMDKs are more commonly used in VMware environments and are easier to manage, while RDMs might provide better performance for workloads that use larger object sizes (for example, greater than 100 MB).
11. If you need to remap the ports used by this node, follow these steps.
    You might need to remap a port if your enterprise networking policies restrict access to one or more ports that are used by StorageGRID. See the networking guidelines for the ports used by StorageGRID.
    Don't remap the ports used in load balancer endpoints.
    a. Select the new VM.
    b. From the Configure tab, select Settings > vApp Options. The location of vApp Options depends on the version of vCenter.
    c. In the Properties table, locate PORT_REMAP_INBOUND and PORT_REMAP.
    d. To symmetrically map both inbound and outbound communications for a port, select PORT_REMAP.
       If only PORT_REMAP is set, the mapping that you specify applies to both inbound and outbound communications. If PORT_REMAP_INBOUND is also specified, PORT_REMAP applies only to outbound communications.
       - Scroll back to the top of the table, and select Edit.
       - On the Type tab, select User configurable, and select Save.
       - Select Set Value.
       - Enter the port mapping:
         <network type>/<protocol>/<default port used by grid node>/<new port>
         <network type> is grid, admin, or client, and <protocol> is tcp or udp.
         For example, to remap ssh traffic from port 22 to port 3022, enter:
         client/tcp/22/3022
       - Select OK.
    e. To specify the port used for inbound communications to the node, select PORT_REMAP_INBOUND.
       If you specify PORT_REMAP_INBOUND and don't specify a value for PORT_REMAP, outbound communications for the port are unchanged.
       - Scroll back to the top of the table, and select Edit.
       - On the Type tab, select User configurable, and select Save.
       - Select Set Value.
       - Enter the port mapping:
         <network type>/<protocol>/<remapped inbound port>/<default inbound port used by grid node>
         <network type> is grid, admin, or client, and <protocol> is tcp or udp.
         For example, to remap inbound SSH traffic that is sent to port 3022 so that it is received at port 22 by the grid node, enter the following:
         client/tcp/3022/22
       - Select OK.
    (A sketch for validating port-remap values follows this step.)
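Because a malformed value is easy to mistype, you can check PORT_REMAP and PORT_REMAP_INBOUND values against the format described above before setting them. This validator is an illustrative sketch only, not part of the installer.

```python
# Illustrative validator for the PORT_REMAP / PORT_REMAP_INBOUND value format
# described above: <network type>/<protocol>/<port>/<port>, where the network
# type is grid, admin, or client and the protocol is tcp or udp.
NETWORKS = {"grid", "admin", "client"}
PROTOCOLS = {"tcp", "udp"}

def validate_port_remap(value: str) -> bool:
    parts = value.split("/")
    if len(parts) != 4:
        return False
    network, protocol, port_a, port_b = parts
    if network not in NETWORKS or protocol not in PROTOCOLS:
        return False
    return all(part.isdigit() and 1 <= int(part) <= 65535 for part in (port_a, port_b))

# Examples from this procedure:
print(validate_port_remap("client/tcp/22/3022"))   # True  (PORT_REMAP)
print(validate_port_remap("client/tcp/3022/22"))   # True  (PORT_REMAP_INBOUND)
print(validate_port_remap("client/tcp/22"))        # False (missing a field)
```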
12. If you want to increase the CPU or memory for the node from the default settings:
    a. Right-click the virtual machine, and select Edit Settings.
    b. Change the number of CPUs or the amount of memory as required.
       Set the Memory Reservation to the same size as the Memory allocated to the virtual machine.
    c. Select OK.
13. Power on the virtual machine.
    If you deployed this node as part of an expansion or recovery procedure, return to those instructions to complete the procedure.