
Overview of Proxmox Virtual Environment


Proxmox Virtual Environment is an open-source type-1 hypervisor (installed on bare-metal servers) based on Debian Linux. It can host virtual machines (VMs) as well as Linux containers (LXC).

Overview

Proxmox Virtual Environment (VE) supports both full VM and container-based virtualization on the same host. Kernel-based Virtual Machine (KVM) and Quick Emulator (QEMU) are used for full VM virtualization. QEMU is an open-source machine emulator and virtualizer that uses the KVM kernel module to execute guest code directly on the host CPU. Linux Containers (LXC) allows containers to be managed like VMs, with data persistence across reboots.

VM and LXC on Proxmox host

A RESTful API is available for automation tasks. For information on API calls, check the Proxmox VE API viewer
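
As a minimal sketch (not taken from the official documentation), the following Python snippet queries the API over HTTPS with an API token. The host name, token ID, and secret are placeholders, and certificate verification is disabled only because a default installation uses a self-signed certificate.

import requests

# Placeholder host and API token; replace with values from your environment.
API = "https://pve1.example.com:8006/api2/json"
HEADERS = {"Authorization": "PVEAPIToken=automation@pve!mytoken=<secret-uuid>"}

# Query the installed Proxmox VE version and the list of cluster nodes.
version = requests.get(f"{API}/version", headers=HEADERS, verify=False).json()["data"]
nodes = requests.get(f"{API}/nodes", headers=HEADERS, verify=False).json()["data"]
print(version["version"], [n["node"] for n in nodes])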

Cluster Management

The web-based management portal is available on each Proxmox VE node at port 8006. A collection of nodes can be joined together to form a cluster. The Proxmox VE configuration, stored under /etc/pve, is shared among all nodes of the cluster. Proxmox VE uses the Corosync cluster engine to manage the cluster. The management portal can be accessed from any node of the cluster.

Management Interface
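
The same API can also be used to check cluster membership from a script. The sketch below lists the nodes reported by the cluster status endpoint; host and token values are again placeholders.

import requests

API = "https://pve1.example.com:8006/api2/json"   # placeholder host
HEADERS = {"Authorization": "PVEAPIToken=automation@pve!mytoken=<secret-uuid>"}

# /cluster/status returns one entry for the cluster itself plus one entry per node.
status = requests.get(f"{API}/cluster/status", headers=HEADERS, verify=False).json()["data"]
for item in status:
    if item["type"] == "node":
        print(item["name"], "online" if item.get("online") else "offline")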

A cluster enables VMs and containers to be monitored and restarted on other nodes if the hosting node fails. VMs and containers need to be configured for high availability (HA). VMs and containers can be restricted to a specific subset of hosts by creating HA groups. The VM or container is hosted on the node with the highest priority. For more info, check HA manager

HA Group priority
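
As a hedged illustration of the priority behavior described above, the snippet below creates an HA group in which pve1 has the highest priority and then places VM 100 under HA management. The group, node, and VM names are assumptions for the example.

import requests

API = "https://pve1.example.com:8006/api2/json"   # placeholder host
HEADERS = {"Authorization": "PVEAPIToken=automation@pve!mytoken=<secret-uuid>"}

# Create an HA group where pve1 has the highest priority, then add VM 100 to it.
requests.post(f"{API}/cluster/ha/groups", headers=HEADERS, verify=False,
              data={"group": "prod-group", "nodes": "pve1:3,pve2:2,pve3:1"})
requests.post(f"{API}/cluster/ha/resources", headers=HEADERS, verify=False,
              data={"sid": "vm:100", "group": "prod-group", "state": "started"})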

Authentication options include Linux PAM, the Proxmox VE authentication server, LDAP, Microsoft AD, or OpenID Connect. Permissions can be assigned via roles and the use of resource pools, which are collections of resources. For additional details, check Proxmox User Management

Tip: LDAP/Microsoft AD connection credentials might be stored in clear text in a file that must be protected by host filesystem permissions.
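
For example, a user in the Proxmox VE authentication realm can be created and granted a built-in role on a resource pool through the API. This sketch assumes a pool named prod-pool already exists; the user and role names are illustrative.

import requests

API = "https://pve1.example.com:8006/api2/json"   # placeholder host
HEADERS = {"Authorization": "PVEAPIToken=automation@pve!mytoken=<secret-uuid>"}

# Create a user in the pve realm and grant it the built-in PVEVMAdmin role
# on the resource pool "prod-pool".
requests.post(f"{API}/access/users", headers=HEADERS, verify=False,
              data={"userid": "operator@pve", "comment": "VM operator"})
requests.put(f"{API}/access/acl", headers=HEADERS, verify=False,
             data={"path": "/pool/prod-pool", "users": "operator@pve", "roles": "PVEVMAdmin"})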

Compute

The CPU options for a VM include the number of CPU cores and sockets (to specify the number of vCPUs), an option to enable NUMA, CPU affinity, CPU limits, and the CPU type.

VM CPU options

For guidance on CPU types and how they affect live migration, check the QEMU/KVM Virtual Machines section of the Proxmox VE documentation
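
A hedged sketch of setting these CPU options through the API is shown below. The node name, VM ID, and CPU model are assumptions; available CPU model names vary by Proxmox VE release.

import requests

API = "https://pve1.example.com:8006/api2/json"   # placeholder host
HEADERS = {"Authorization": "PVEAPIToken=automation@pve!mytoken=<secret-uuid>"}

# Create VM 101 on node pve1 with 2 sockets x 4 cores (8 vCPUs), NUMA enabled,
# and a CPU model that live-migrates across reasonably recent hosts.
requests.post(f"{API}/nodes/pve1/qemu", headers=HEADERS, verify=False,
              data={"vmid": 101, "name": "demo-vm", "sockets": 2, "cores": 4,
                    "numa": 1, "cpu": "x86-64-v2-AES", "memory": 4096})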

The CPU options for an LXC container are shown in the following screenshot.

LXC CPU options

Both VMs and LXC containers can specify a memory size. For VMs, memory ballooning is available for Linux guests. For more info, refer to the QEMU/KVM Virtual Machines section of the Proxmox VE documentation
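
As an illustrative sketch, the memory ceiling and the ballooning floor of a VM can be set in one API call. The VM ID and sizes are placeholders.

import requests

API = "https://pve1.example.com:8006/api2/json"   # placeholder host
HEADERS = {"Authorization": "PVEAPIToken=automation@pve!mytoken=<secret-uuid>"}

# Give VM 101 an 8 GiB memory ceiling and allow the balloon driver to reclaim
# memory down to 2 GiB when the host is under pressure.
requests.put(f"{API}/nodes/pve1/qemu/101/config", headers=HEADERS, verify=False,
             data={"memory": 8192, "balloon": 2048})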

Storage

A virtual machine consists of a configuration file, /etc/pve/qemu-server/<vm id>.conf, and virtual disk components. Supported virtual disk formats are raw, QCOW2, and VMDK. QCOW2 can provide thin provisioning and snapshot capabilities on various storage types.

VM disk formats

There is an option to present iSCSI LUNs to a VM as raw devices.
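
A minimal sketch of allocating a new QCOW2 virtual disk through the API is shown below; the storage name "local", the VM ID, and the size are assumptions.

import requests

API = "https://pve1.example.com:8006/api2/json"   # placeholder host
HEADERS = {"Authorization": "PVEAPIToken=automation@pve!mytoken=<secret-uuid>"}

# Allocate a new 32 GiB qcow2 disk on the file-based storage "local" and attach
# it to VM 101 as scsi1 (the "<storage>:<size in GiB>" syntax allocates a new volume).
requests.put(f"{API}/nodes/pve1/qemu/101/config", headers=HEADERS, verify=False,
             data={"scsi1": "local:32,format=qcow2"})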

An LXC container also has its own configuration file, /etc/pve/lxc/<container id>.conf, and container disk components. Data volumes can be mounted from the supported storage types.

Container additional mount
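
The sketch below adds such a data volume as a mount point on a container; the container ID, the storage name nfs-data, the size, and the mount path are placeholders.

import requests

API = "https://pve1.example.com:8006/api2/json"   # placeholder host
HEADERS = {"Authorization": "PVEAPIToken=automation@pve!mytoken=<secret-uuid>"}

# Add an 8 GiB data volume from the storage "nfs-data" to container 200,
# mounted inside the container at /data.
requests.put(f"{API}/nodes/pve1/lxc/200/config", headers=HEADERS, verify=False,
             data={"mp0": "nfs-data:8,mp=/data"})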

Supported storage types include local disk, NAS (SMB and NFS), and SAN (FC, iSCSI, NVMe-oF, etc.). For more details, refer to Proxmox VE Storage

Every storage volume is configured with allowed content types. NAS volumes support all content types, while SAN support is limited to VM and container images.

Note: The directory storage type also supports all content types. SMB connection credentials are stored in clear text and are accessible only to root.

Content types with NAS

Content types with SAN
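
The allowed content types shown above can also be set when a storage is registered through the API. The following sketch adds an NFS export as cluster-wide storage; the storage name, server address, export path, and content list are assumptions.

import requests

API = "https://pve1.example.com:8006/api2/json"   # placeholder host
HEADERS = {"Authorization": "PVEAPIToken=automation@pve!mytoken=<secret-uuid>"}

# Register an NFS export as cluster-wide storage and allow VM images,
# container root disks, ISO images, and backups on it.
requests.post(f"{API}/storage", headers=HEADERS, verify=False,
              data={"storage": "nfs-data", "type": "nfs",
                    "server": "192.168.1.50", "export": "/export/pve",
                    "content": "images,rootdir,iso,backup"})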

To import VMs from a Broadcom vSphere environment, the vSphere host can also be included as a storage device.

Network

Proxmox VE supports native Linux networking features, such as the Linux bridge or Open vSwitch, to implement software-defined networking (SDN). The Ethernet interfaces on the host can be bonded together to provide redundancy and high availability. For other options, refer to the Proxmox VE documentation

Bonded network

Guest networks can be configured at the cluster level, and changes are pushed to the member hosts. Separation is managed with zones, VNets, and subnets. A zone defines the network type, such as Simple, VLAN, VLAN stacking (QinQ), VXLAN, EVPN, etc.

Depending on the type of zone, the network behaves differently and offers specific features, advantages, and limitations.

Use cases for SDN range from an isolated private network on each individual node to complex overlay networks across multiple PVE clusters in different locations.

After configuring a VNet in the cluster-wide datacenter SDN administration interface, it is available locally on each node as a common Linux bridge that can be assigned to VMs and containers.
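
As a hedged sketch of this workflow, the snippet below creates a VLAN zone on bridge vmbr0, adds a VNet with tag 100, and applies the pending SDN configuration; the zone, VNet, bridge, and tag values are assumptions.

import requests

API = "https://pve1.example.com:8006/api2/json"   # placeholder host
HEADERS = {"Authorization": "PVEAPIToken=automation@pve!mytoken=<secret-uuid>"}

# Create a VLAN zone backed by bridge vmbr0, add a VNet with VLAN tag 100,
# then apply the pending SDN configuration cluster-wide.
requests.post(f"{API}/cluster/sdn/zones", headers=HEADERS, verify=False,
              data={"zone": "vlanzone", "type": "vlan", "bridge": "vmbr0"})
requests.post(f"{API}/cluster/sdn/vnets", headers=HEADERS, verify=False,
              data={"vnet": "vnet100", "zone": "vlanzone", "tag": 100})
requests.put(f"{API}/cluster/sdn", headers=HEADERS, verify=False)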

When a VM is created, the user can pick the Linux bridge to connect to. Additional interfaces can be added after the VM is created.

VM network selection
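
For instance, a second virtio NIC can be attached to an existing VM and connected to any local Linux bridge or SDN VNet; the VM ID and bridge name below are placeholders.

import requests

API = "https://pve1.example.com:8006/api2/json"   # placeholder host
HEADERS = {"Authorization": "PVEAPIToken=automation@pve!mytoken=<secret-uuid>"}

# Attach a second virtio NIC to VM 101 and connect it to the VNet bridge
# created earlier (any local Linux bridge name works the same way).
requests.put(f"{API}/nodes/pve1/qemu/101/config", headers=HEADERS, verify=False,
             data={"net1": "virtio,bridge=vnet100"})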

The following screenshot shows the VNet information at the datacenter level.

VNet information at datacenter

Monitoring

The summary page for most objects, such as the datacenter, host, VM, container, and storage, provides details and includes some performance metrics. The following screenshot shows the summary page of a host, including information about the installed packages.

Host package view

Statistics about hosts, guests, storage, etc. can be pushed to an external Graphite or InfluxDB database. For details, refer to the Proxmox VE documentation.

Data Protection

Proxmox VE includes options to back up and restore VMs and containers to storage configured for backup content. Backups can be initiated from the UI or from the CLI using the vzdump tool, or they can be scheduled. For more details, refer to the Backup and Restore section of the Proxmox VE documentation.

Proxmox VE backup storage content
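
An on-demand backup can also be triggered through the API, as in the hedged sketch below; the node, guest IDs, and target storage are placeholders.

import requests

API = "https://pve1.example.com:8006/api2/json"   # placeholder host
HEADERS = {"Authorization": "PVEAPIToken=automation@pve!mytoken=<secret-uuid>"}

# Run an on-demand vzdump backup of VM 101 and container 200 to the storage
# "nfs-data", using snapshot mode and zstd compression.
requests.post(f"{API}/nodes/pve1/vzdump", headers=HEADERS, verify=False,
              data={"vmid": "101,200", "storage": "nfs-data",
                    "mode": "snapshot", "compress": "zstd"})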

The backup content should be stored offsite to protect against a disaster at the source site.

Veeam added support for Proxmox VE with version 12.2. This allows restoring VM backups taken from vSphere to a Proxmox VE host.