NetApp Solutions

Proxmox VE with ONTAP

Contributors kevin-hoke sureshthoppay

Shared storage in Proxmox Virtual Environment (VE) reduces the time for VM live migration, provides a better target for backups, and enables consistent templates across the environment. ONTAP storage can serve the needs of Proxmox VE host environments as well as guest file, block, and object storage demands.

Proxmox VE hosts need to have FC, Ethernet, or other supported interfaces cabled to switches and communication to ONTAP logical interfaces (LIFs).
Always check the Interoperability Matrix Tool for supported configurations.

High-level ONTAP Features

Common features

  • Scale out Cluster

  • Secure Authentication and RBAC support

  • Zero-trust multi-admin verification support

  • Secure Multitenancy

  • Replicate data with SnapMirror.

  • Point in time copies with Snapshots.

  • Space efficient clones.

  • Storage efficiency features like dedupe, compression, etc.

  • Trident CSI support for Kubernetes

  • SnapLock

  • Tamperproof Snapshot copy locking

  • Encryption support

  • FabricPool to tier cold data to object store.

  • BlueXP and CloudInsights Integration.

  • Microsoft offloaded data transfer (ODX)

NAS

  • FlexGroup volumes are a scale out NAS container, providing high performance along with load distribution and scalability.

  • FlexCache allows data to be distributed globally and still provides local read and write access to the data.

  • Multiprotocol support enables the same data to be accessible via SMB, as well as NFS.

  • NFS nconnect allows multiple TCP connections per NFS mount, increasing network throughput. This improves utilization of the high-speed NICs available on modern servers.

  • NFS session trunking provides increased data transfer speeds, high availability and fault tolerance.

  • SMB multichannel provides increased data transfer speed, high availability and fault tolerance.

  • Integration with Active Directory/LDAP for file permissions.

  • Secure connection with NFS over TLS.

  • NFS Kerberos support.

  • NFS over RDMA.

  • Name mapping between Windows and Unix identities.

  • Autonomous ransomware protection.

  • File System Analytics.

SAN

  • Stretch cluster across fault domains with SnapMirror active sync.

  • ASA models provide active/active multipathing and fast path failover.

  • Support for FC, iSCSI, NVMe-oF protocols.

  • Support for iSCSI CHAP mutual authentication.

  • Selective LUN Map and Portset.

Proxmox VE storage types supported with ONTAP

NAS protocols (NFS/SMB) support all content types of Proxmox VE and are typically configured once at the datacenter level. Guest VMs can use disks of type raw, qcow2, or VMDK on NAS storage.
ONTAP Snapshots can be made visible to clients to access point-in-time copies of the data.
Block storage with SAN protocols (FC/iSCSI/NVMe-oF) is typically configured on a per-host basis and is restricted to the VM Disk and Container Image content types supported by Proxmox VE. Guest VMs and containers consume block storage as raw devices.

Content Type    NFS    SMB/CIFS    FC        iSCSI     NVMe-oF
Backups         Yes    Yes         No (1)    No (1)    No (1)
VM Disks        Yes    Yes         Yes (2)   Yes (2)   Yes (2)
CT Volumes      Yes    Yes         Yes (2)   Yes (2)   Yes (2)
ISO Images      Yes    Yes         No (1)    No (1)    No (1)
CT Templates    Yes    Yes         No (1)    No (1)    No (1)
Snippets        Yes    Yes         No (1)    No (1)    No (1)

Notes:
1 - Requires a cluster file system to create the shared folder and use the Directory storage type.
2 - Use the LVM storage type.

SMB/CIFS Storage

To utilize SMB/CIFS file shares, certain tasks need to be carried out by the storage admin; the virtualization admin can then mount the share using the Proxmox VE UI or from the shell. SMB multichannel provides fault tolerance and boosts performance. For more details, refer to TR-4740 - SMB 3.0 Multichannel.

Note: The password is saved in a clear-text file that is accessible only to the root user. Refer to the Proxmox VE documentation.
SMB shared storage pool with ONTAP
Storage Admin Tasks

If new to ONTAP, use System Manager Interface to complete these tasks for a better experience.

  1. Ensure SVM is enabled for SMB. Follow ONTAP 9 documentation for more information.

  2. Have at least two LIFs per controller. Follow the steps from the above link. For reference, here is a screenshot of the LIFs used in this solution.

    nas interface details

  3. Use Active Directory or workgroup based authentication. Follow the steps from the above link.

    Join domain info

  4. Create a volume. Remember to check the option to distribute data across the cluster to use FlexGroup.

    FlexGroup option

  5. Create an SMB share and adjust permissions. Follow ONTAP 9 documentation for more information.

    SMB share info

  6. Provide the SMB server, share name, and credentials to the virtualization admin for them to complete the task.

Virtualization Admin Tasks
  1. Collect the SMB server, share name and credentials to use for the share authentication.

  2. Ensure at least two interfaces are configured in different VLANs (for fault tolerance) and that the NICs support RSS.

  3. If using the management UI at https://<proxmox-node>:8006, click on Datacenter, select Storage, click Add, and select SMB/CIFS.

    SMB storage navigation

  4. Fill in the details; the share name should auto-populate. Ensure all content types are selected. Click Add.

    SMB storage addition

  5. To enable the multichannel option, go to the shell on any one of the nodes in the cluster and run: pvesm set pvesmb01 --options multichannel,max_channels=4

    multichannel setup

  6. Here is the content in /etc/pve/storage.cfg for the above tasks.

    storage configuration file for SMB
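As a sketch, the resulting /etc/pve/storage.cfg entry might resemble the following (the server address, share name, and username are hypothetical placeholders; adjust them to your environment):

```
cifs: pvesmb01
        path /mnt/pve/pvesmb01
        server 172.21.121.50
        share pvesmb01
        content backup,images,iso,rootdir,snippets,vztmpl
        options multichannel,max_channels=4
        username pvesmbuser
```

The options line reflects the multichannel setting applied with pvesm in the previous step.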

NFS Storage

ONTAP supports all the NFS versions supported by Proxmox VE. To provide fault tolerance and performance enhancements, ensure session trunking is utilized. Session trunking requires NFS v4.1 at a minimum.

If new to ONTAP, use System Manager Interface to complete these tasks for a better experience.

NFS nconnect option with ONTAP
Storage Admin Tasks
  1. Ensure SVM is enabled for NFS. Refer to ONTAP 9 documentation

  2. Have at least two LIFs per controller. Follow the steps from the above link. For reference, here is a screenshot of the LIFs used in our lab.

    nas interface details

  3. Create or update NFS export policy providing access to Proxmox VE host IP addresses or subnet. Refer to Export policy creation and Add rule to an export policy.

  4. Create a volume. Remember to check the option to distribute data across the cluster to use FlexGroup.

    FlexGroup option

  5. Assign export policy to volume

    NFS volume info

  6. Notify virtualization admin that NFS volume is ready.

Virtualization Admin Tasks
  1. Ensure at least two interfaces are configured in different VLANs (for fault tolerance). Use NIC bonding.

  2. If using the management UI at https://<proxmox-node>:8006, click on Datacenter, select Storage, click Add, and select NFS.

    NFS storage navigation

  3. Fill in the details. After providing the server info, the NFS exports should populate; pick one from the list. Remember to select the content options.

    NFS storage addition

  4. For session trunking, on every Proxmox VE host, update the /etc/fstab file to mount the same NFS export using a different LIF address along with the max_connect and NFS version options.

    fstab entries for session trunk
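As a sketch, the /etc/fstab entries for session trunking might look like the following, assuming an export named /pvenfs01 and two ONTAP LIF addresses (all names and addresses here are hypothetical; adjust to your environment):

```
# Mount the same export through two different LIFs; with NFS v4.1,
# the client trunks the additional mount onto the existing session.
172.21.118.112:/pvenfs01 /mnt/pve/pvenfs01 nfs4 vers=4.1,max_connect=4 0 0
172.21.119.112:/pvenfs01 /mnt/pve/pvenfs01 nfs4 vers=4.1,max_connect=4 0 0
```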

  5. Here is the content in /etc/pve/storage.cfg for NFS.

    storage configuration file for NFS
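For reference, the resulting /etc/pve/storage.cfg entry might resemble the following sketch (the export path and server address are hypothetical placeholders):

```
nfs: pvenfs01
        export /pvenfs01
        path /mnt/pve/pvenfs01
        server 172.21.118.112
        content backup,images,iso,rootdir,snippets,vztmpl
        options vers=4.1
```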

LVM with iSCSI

LVM shared pool with iSCSI using ONTAP

To configure Logical Volume Manager for shared storage across Proxmox hosts, complete the following tasks:

Virtualization Admin Tasks
  1. Make sure two Linux VLAN interfaces are available.

  2. Ensure multipath-tools is installed on all Proxmox VE hosts. Ensure it starts on boot.

    apt list --installed 2>/dev/null | grep multipath-tools
    # If not installed, execute the following lines.
    apt-get install multipath-tools
    systemctl enable multipathd
  3. Collect the iSCSI host IQN from every Proxmox VE host and provide it to the storage admin.

    cat /etc/iscsi/initiatorname.iscsi
Storage Admin Tasks

If new to ONTAP, use System Manager for a better experience.

  1. Ensure SVM is available with iSCSI protocol enabled. Follow ONTAP 9 documentation

  2. Have two LIFs per controller dedicated to iSCSI.

    iscsi interface details

  3. Create an igroup and populate it with the host iSCSI initiators.

  4. Create a LUN of the desired size on the SVM and map it to the igroup created in the step above.

    iscsi lun details

  5. Notify the virtualization admin that the LUN is created.

Virtualization Admin Tasks
  1. Go to the management UI at https://<proxmox-node>:8006, click on Datacenter, select Storage, click Add, and select iSCSI.

    iscsi storage navigation

  2. Provide a storage ID name. Given the iSCSI LIF address from ONTAP, the target should populate automatically when there is no communication issue. As our intention is not to provide LUN access directly to the guest VMs, uncheck that option.

    iscsi storage type creation

  3. Now, click Add and select LVM.

    lvm storage navigation

  4. Provide a storage ID name, and pick the base storage matching the iSCSI storage that we created in the step above. Pick the LUN for the base volume and provide the volume group name. Ensure Shared is selected.

    lvm storage creation

  5. Here is the sample storage configuration file for LVM using iSCSI volume.

    lvm iscsi configuration
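As a sketch, the resulting /etc/pve/storage.cfg entries might resemble the following (storage IDs, the portal address, the target IQN, and the base-volume identifier are hypothetical placeholders; the actual base value is generated by Proxmox VE from the LUN you select):

```
iscsi: pveiscsi01
        portal 172.21.120.101
        target iqn.1992-08.com.netapp:sn.<serial>:vs.<index>
        content none

lvm: pvelvm01
        base pveiscsi01:0.0.0.scsi-<lun-id>
        vgname pvevg01
        content images,rootdir
        shared 1
```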

LVM with NVMe/TCP

LVM shared pool with NVMe/TCP using ONTAP

To configure Logical Volume Manager for shared storage across Proxmox hosts, complete the following tasks:

Virtualization Admin Tasks
  1. Make sure two Linux VLAN interfaces are available.

  2. On every Proxmox host in the cluster, execute the following command to collect the host initiator info.

    nvme show-hostnqn
  3. Provide the collected host NQN info to the storage admin and request an NVMe namespace of the required size.

Storage Admin Tasks

If new to ONTAP, use System Manager for better experience.

  1. Ensure an SVM is available with the NVMe protocol enabled. Refer to the NVMe tasks in the ONTAP 9 documentation.

  2. Create the NVMe namespace.

    nvme namespace creation

  3. Create a subsystem and assign the host NQNs (if using the CLI). Follow the above reference link.

  4. Notify virtualization admin that the nvme namespace is created.

Virtualization Admin Tasks
  1. Navigate to the shell on each Proxmox VE host in the cluster, create the /etc/nvme/discovery.conf file, and update the content specific to your environment.

    root@pxmox01:~# cat /etc/nvme/discovery.conf
    # Used for extracting default parameters for discovery
    #
    # Example:
    # --transport=<trtype> --traddr=<traddr> --trsvcid=<trsvcid> --host-traddr=<host-traddr> --host-iface=<host-iface>
    
    -t tcp -l 1800 -a 172.21.118.153
    -t tcp -l 1800 -a 172.21.118.154
    -t tcp -l 1800 -a 172.21.119.153
    -t tcp -l 1800 -a 172.21.119.154
  2. Log in to the NVMe subsystem.

    nvme connect-all
  3. Inspect and collect device details.

    nvme list
    nvme netapp ontapdevices
    nvme list-subsys
    lsblk -l
  4. Create volume group

    vgcreate pvens02 /dev/mapper/<device id>
  5. Go to the management UI at https://<proxmox-node>:8006, click on Datacenter, select Storage, click Add, and select LVM.

    lvm storage navigation

  6. Provide a storage ID name, choose the existing volume group option, and pick the volume group just created with the CLI. Remember to check the Shared option.

    lvm on existing vg

  7. Here is a sample storage configuration file for LVM using NVMe/TCP

    lvm on nvme tcp configuration
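As a sketch, the resulting /etc/pve/storage.cfg entry might resemble the following (the storage ID is a hypothetical placeholder; the volume group name matches the vgcreate command shown above):

```
lvm: pvelvm02
        vgname pvens02
        content images,rootdir
        shared 1
```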