Proxmox VE with ONTAP
Shared storage in Proxmox Virtual Environment (VE) reduces VM live-migration time and provides a better target for backups and consistent templates across the environment. ONTAP storage can serve the needs of Proxmox VE host environments as well as guest file, block, and object storage demands.
Proxmox VE hosts need FC, Ethernet, or other supported interfaces cabled to switches and must be able to communicate with the ONTAP logical interfaces.
Always check the Interoperability Matrix Tool (IMT) for supported configurations.
High-level ONTAP Features
Common features
- Scale-out clusters
- Secure authentication and RBAC support
- Zero-trust multi-admin support
- Secure multitenancy
- Replicate data with SnapMirror
- Point-in-time copies with Snapshots
- Space-efficient clones
- Storage efficiency features such as deduplication and compression
- Trident CSI support for Kubernetes
- SnapLock
- Tamper-proof Snapshot copy locking
- Encryption support
- FabricPool to tier cold data to an object store
- BlueXP and Cloud Insights integration
- Microsoft Offloaded Data Transfer (ODX)
NAS
- FlexGroup volumes are scale-out NAS containers, providing high performance along with load distribution and scalability
- FlexCache allows data to be distributed globally while still providing local read and write access to the data
- Multiprotocol support enables the same data to be accessible via both SMB and NFS
- NFS nConnect allows multiple TCP connections per mount point, increasing network throughput and improving utilization of the high-speed NICs available on modern servers
- NFS session trunking provides increased data transfer speeds, high availability, and fault tolerance
- SMB Multichannel provides increased data transfer speed, high availability, and fault tolerance
- Integration with Active Directory/LDAP for file permissions
- Secure connections with NFS over TLS
- NFS Kerberos support
- NFS over RDMA
- Name mapping between Windows and UNIX identities
- Autonomous Ransomware Protection
- File System Analytics
SAN
- Stretch clusters across fault domains with SnapMirror active sync
- ASA models provide active/active multipathing and fast path failover
- Support for FC, iSCSI, and NVMe-oF protocols
- Support for iSCSI CHAP mutual authentication
- Selective LUN Map and portsets
Proxmox VE storage types supported with ONTAP
NAS protocols (NFS/SMB) support all Proxmox VE content types and are typically configured once at the datacenter level. Guest VMs can use disks of type raw, qcow2, or VMDK on NAS storage.
ONTAP Snapshots can be made visible so that point-in-time copies of the data can be accessed from the client.
Block storage with SAN protocols (FC/iSCSI/NVMe-oF) is typically configured on a per-host basis and is restricted to the VM Disk and Container Image content types supported by Proxmox VE. Guest VMs and containers consume block storage as raw devices.
| Content Type | NFS | SMB/CIFS | FC | iSCSI | NVMe-oF |
|---|---|---|---|---|---|
| Backups | Yes | Yes | No¹ | No¹ | No¹ |
| VM Disks | Yes | Yes | Yes² | Yes² | Yes² |
| CT Volumes | Yes | Yes | Yes² | Yes² | Yes² |
| ISO Images | Yes | Yes | No¹ | No¹ | No¹ |
| CT Templates | Yes | Yes | No¹ | No¹ | No¹ |
| Snippets | Yes | Yes | No¹ | No¹ | No¹ |
Notes:
¹ Requires a cluster file system to create the shared folder and use the Directory storage type.
² Use the LVM storage type.
SMB/CIFS Storage
To utilize SMB/CIFS file shares, the storage admin must first complete a few tasks; the virtualization admin can then mount the share using the Proxmox VE UI or from the shell. SMB Multichannel provides fault tolerance and boosts performance. For more details, refer to TR-4740: SMB 3.0 Multichannel.
The password is saved in a clear-text file accessible only to the root user. Refer to the Proxmox VE documentation.
Storage Admin Tasks
If new to ONTAP, use the System Manager interface to complete these tasks for a better experience.
- Ensure the SVM is enabled for SMB. Follow the ONTAP 9 documentation for more information.
- Have at least two LIFs per controller. Follow the steps from the above link.
- Use Active Directory or workgroup-based authentication. Follow the steps from the above link.
- Create a volume. Remember to check the option to distribute data across the cluster to use FlexGroup.
- Create an SMB share and adjust its permissions. Follow the ONTAP 9 documentation for more information.
- Provide the SMB server, share name, and credentials to the virtualization admin so they can complete their tasks.
Virtualization Admin Tasks
- Collect the SMB server, share name, and credentials to use for share authentication.
- Ensure at least two interfaces are configured in different VLANs (for fault tolerance) and that the NICs support RSS.
- If using the management UI at https://<proxmox-node>:8006, click Datacenter, select Storage, click Add, and select SMB/CIFS.
- Fill in the details; the share name should auto-populate. Ensure all content types are selected. Click Add.
- To enable the multichannel option, go to the shell on any one of the nodes in the cluster and run:
pvesm set pvesmb01 --options multichannel,max_channels=4
- Here is the content of /etc/pve/storage.cfg for the above tasks.
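As a sketch, the resulting SMB/CIFS entry in /etc/pve/storage.cfg would look something like the following. The storage ID pvesmb01 matches the pvesm command above; the server address, share name, and username are hypothetical placeholders for your environment:

```
cifs: pvesmb01
        path /mnt/pve/pvesmb01
        server 172.21.121.101
        share pvesmb01
        content backup,images,iso,rootdir,snippets,vztmpl
        username pvesmb
        options multichannel,max_channels=4
```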
NFS Storage
ONTAP supports all the NFS versions supported by Proxmox VE. To provide fault tolerance and performance enhancements, ensure session trunking is utilized; session trunking requires NFS v4.1 or later.
If new to ONTAP, use System Manager Interface to complete these tasks for a better experience.
Storage Admin Tasks
- Ensure the SVM is enabled for NFS. Refer to the ONTAP 9 documentation.
- Have at least two LIFs per controller. Follow the steps from the above link.
- Create or update the NFS export policy, providing access to the Proxmox VE host IP addresses or subnet. Refer to Export policy creation and Add rule to an export policy.
- Create a volume. Remember to check the option to distribute data across the cluster to use FlexGroup.
- Assign the export policy to the volume.
- Notify the virtualization admin that the NFS volume is ready.
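If working from the ONTAP CLI rather than System Manager, the steps above can be sketched roughly as follows. The SVM, policy, aggregate, and volume names, and the client subnet, are hypothetical examples, not values from this solution:

```
vserver export-policy create -vserver svm_pve -policyname pve
vserver export-policy rule create -vserver svm_pve -policyname pve -clientmatch 172.21.118.0/24 -rorule sys -rwrule sys -superuser sys
volume create -vserver svm_pve -volume pvenfs01 -aggregate aggr1 -size 1TB -junction-path /pvenfs01 -policy pve
```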
Virtualization Admin Tasks
- Ensure at least two interfaces are configured in different VLANs (for fault tolerance). Use NIC bonding.
- If using the management UI at https://<proxmox-node>:8006, click Datacenter, select Storage, click Add, and select NFS.
- Fill in the details. After providing the server info, the NFS exports should populate; pick one from the list. Remember to select the content options.
- For session trunking, on every Proxmox VE host, update the /etc/fstab file to mount the same NFS export using different LIF addresses, along with the max_connect and NFS version options.
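As a hedged example, the fstab entries for session trunking might look like this. The export path, mount point, and LIF addresses are hypothetical; both entries mount the same export through different LIFs:

```
# /etc/fstab - same NFS export mounted via two different LIFs for session trunking
172.21.118.112:/pvenfs01 /mnt/pve/pvenfs01 nfs vers=4.1,max_connect=2 0 0
172.21.118.113:/pvenfs01 /mnt/pve/pvenfs01 nfs vers=4.1,max_connect=2 0 0
```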
- Here is the content of /etc/pve/storage.cfg for NFS.
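A minimal sketch of the NFS entry in /etc/pve/storage.cfg; the storage ID, server LIF, and export path are hypothetical placeholders:

```
nfs: pvenfs01
        export /pvenfs01
        path /mnt/pve/pvenfs01
        server 172.21.118.112
        content backup,images,iso,rootdir,snippets,vztmpl
        options vers=4.1,max_connect=2
```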
LVM with iSCSI
To configure the Logical Volume Manager (LVM) for shared storage across Proxmox hosts, complete the following tasks:
Virtualization Admin Tasks
- Make sure two Linux VLAN interfaces are available.
- Ensure multipath-tools is installed on all Proxmox VE hosts and starts on boot.
apt list --installed | grep multipath-tools
# If not installed, run:
apt-get install multipath-tools
systemctl enable multipathd
- Collect the iSCSI host IQN from all Proxmox VE hosts and provide it to the storage admin.
cat /etc/iscsi/initiatorname.iscsi
Storage Admin Tasks
If new to ONTAP, use System Manager for a better experience.
- Ensure an SVM is available with the iSCSI protocol enabled. Follow the ONTAP 9 documentation.
- Have two LIFs per controller dedicated to iSCSI.
- Create an igroup and populate it with the host iSCSI initiators.
- Create a LUN of the desired size on the SVM and map it to the igroup created in the step above.
- Notify the virtualization admin that the LUN is created.
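From the ONTAP CLI, the igroup and LUN steps can be sketched as follows. The SVM, igroup, volume path, size, and initiator IQNs are hypothetical examples, and the containing volume is assumed to already exist:

```
lun igroup create -vserver svm_pve -igroup pve_hosts -protocol iscsi -ostype linux -initiator iqn.1993-08.org.debian:01:pve01,iqn.1993-08.org.debian:01:pve02
lun create -vserver svm_pve -path /vol/pvelvm01/lun0 -size 500GB -ostype linux
lun mapping create -vserver svm_pve -path /vol/pvelvm01/lun0 -igroup pve_hosts
```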
Virtualization Admin Tasks
- Go to the management UI at https://<proxmox-node>:8006, click Datacenter, select Storage, click Add, and select iSCSI.
- Provide a storage ID name. Given the iSCSI LIF address from ONTAP, the target should be discovered automatically when there are no communication issues. Because the intention is not to provide LUN access directly to the guest VMs, uncheck that option.
- Now, click Add and select LVM.
- Provide a storage ID name, pick the base storage that matches the iSCSI storage created in the step above, and pick the LUN as the base volume. Provide a volume group name. Ensure Shared is selected.
- Here is a sample storage configuration file for LVM using an iSCSI volume.
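Taken together, the iSCSI and LVM definitions in /etc/pve/storage.cfg would look roughly like this sketch. The storage IDs, portal address, target IQN, and base device ID are hypothetical placeholders:

```
iscsi: pveiscsi01
        portal 172.21.118.153
        target iqn.1992-08.com.netapp:sn.<serial>:vs.<index>
        content none

lvm: pvelvm01
        vgname pvelvm01
        base pveiscsi01:0.0.0.scsi-<device-id>
        content images,rootdir
        shared 1
```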
LVM with NVMe/TCP
To configure Logical Volume Manager for shared storage across Proxmox hosts, complete the following tasks:
Virtualization Admin Tasks
- Make sure two Linux VLAN interfaces are available.
- On every Proxmox host in the cluster, execute the following command to collect the host initiator info.
nvme show-hostnqn
- Provide the collected host NQN info to the storage admin and request an NVMe namespace of the required size.
Storage Admin Tasks
If new to ONTAP, use System Manager for a better experience.
- Ensure an SVM is available with the NVMe protocol enabled. Refer to the NVMe tasks in the ONTAP 9 documentation.
- Create the NVMe namespace.
- Create a subsystem and assign the host NQNs (if using the CLI). Follow the above reference link.
- Notify the virtualization admin that the NVMe namespace is created.
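From the ONTAP CLI, the namespace and subsystem steps can be sketched as follows. The SVM, volume path, subsystem name, size, and host NQN are hypothetical examples:

```
vserver nvme namespace create -vserver svm_pve -path /vol/pvens02/ns1 -size 500GB -ostype linux
vserver nvme subsystem create -vserver svm_pve -subsystem pve_hosts -ostype linux
vserver nvme subsystem host add -vserver svm_pve -subsystem pve_hosts -host-nqn nqn.2014-08.org.nvmexpress:uuid:<host-uuid>
vserver nvme subsystem map add -vserver svm_pve -subsystem pve_hosts -path /vol/pvens02/ns1
```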
Virtualization Admin Tasks
- Navigate to the shell on each Proxmox VE host in the cluster, create the /etc/nvme/discovery.conf file, and update the content specific to your environment.
root@pxmox01:~# cat /etc/nvme/discovery.conf
# Used for extracting default parameters for discovery
#
# Example:
# --transport=<trtype> --traddr=<traddr> --trsvcid=<trsvcid> --host-traddr=<host-traddr> --host-iface=<host-iface>
-t tcp -l 1800 -a 172.21.118.153
-t tcp -l 1800 -a 172.21.118.154
-t tcp -l 1800 -a 172.21.119.153
-t tcp -l 1800 -a 172.21.119.154
- Log in to the NVMe subsystem.
nvme connect-all
- Inspect and collect device details.
nvme list
nvme netapp ontapdevices
nvme list-subsys
lsblk -l
- Create the volume group.
vgcreate pvens02 /dev/mapper/<device id>
- Go to the management UI at https://<proxmox-node>:8006, click Datacenter, select Storage, click Add, and select LVM.
- Provide a storage ID name, choose the existing volume group option, and pick the volume group just created with the CLI. Remember to check the Shared option.
- Here is a sample storage configuration file for LVM using NVMe/TCP.
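A minimal sketch of the resulting LVM entry in /etc/pve/storage.cfg, assuming the pvens02 volume group created above; the storage ID is a hypothetical example:

```
lvm: pvelvm02
        vgname pvens02
        content images,rootdir
        shared 1
```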