
Proxmox VE 7.3 released with ZFS dRAID and CRS


The open-source virtualization solution Proxmox VE has received a major update: Proxmox VE 7.3 is based on Debian 11.5 "Bullseye" and optionally ships with Linux kernel 5.19.

New on board is ZFS dRAID (available since OpenZFS 2.1.0). This vdev layout works with distributed hot-spare disks: several redundancy groups together form a RAIDZ-style vdev that contains both data and parity. Thanks to this distribution, recovery after a disk failure (resilvering) is sped up considerably, especially on larger RAIDZ systems. In Proxmox VE, dRAID pools can be set up via the GUI.
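In OpenZFS, a dRAID vdev is described by a specifier of the form `draid<parity>:<data>d:<children>c:<spares>s`. As a hedged illustration (the helper name and the sanity check are my own, not part of Proxmox or OpenZFS), a small function that assembles such a specifier could look like this:

```python
# Hypothetical helper: build an OpenZFS dRAID vdev specifier of the form
# draid<parity>:<data>d:<children>c:<spares>s (e.g. "draid2:4d:12c:2s").
def draid_spec(parity: int, data: int, children: int, spares: int = 0) -> str:
    if parity not in (1, 2, 3):
        raise ValueError("dRAID supports 1-3 parity disks per group")
    # Each redundancy group needs data+parity disks; the spare capacity is
    # distributed across all children instead of sitting on idle disks.
    if children < data + parity + spares:
        raise ValueError("not enough child disks for the requested layout")
    return f"draid{parity}:{data}d:{children}c:{spares}s"

print(draid_spec(2, 4, 12, 2))  # draid2:4d:12c:2s
```

On the command line, such a specifier would be passed to `zpool create` in place of `raidz2`, e.g. `zpool create tank draid2:4d:12c:2s sda … sdl` — in Proxmox VE 7.3 the same layout can be chosen in the GUI's pool-creation dialog.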

Also new is the Cluster Resource Scheduler (CRS). This is a kind of automatic placement tool that distributes newly started virtual machines across the available nodes in the HA stack. The configured CPU and RAM settings are taken into account, so that ideally the optimal node is used and the cluster runs under an even load.
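The release notes say the "static load" CRS mode ranks nodes with the TOPSIS multi-criteria decision method over memory and vCPU figures. The sketch below is not Proxmox's implementation — the node data, weights, and function name are made up — but it shows the TOPSIS idea: normalize each criterion, then prefer the node closest to the ideal-best and farthest from the ideal-worst point.

```python
import math

# Hedged sketch of TOPSIS-style node selection for the CRS "static load"
# idea: rank nodes by their (free memory, free vCPU) headroom.
def topsis_pick(nodes: dict[str, tuple[float, float]],
                weights=(0.5, 0.5)) -> str:
    names = list(nodes)
    cols = list(zip(*nodes.values()))
    # Vector-normalize each criterion, then apply the weights.
    norms = [math.sqrt(sum(v * v for v in col)) or 1.0 for col in cols]
    scored = {n: [w * v / z for v, z, w in zip(vals, norms, weights)]
              for n, vals in nodes.items()}
    best = [max(c) for c in zip(*scored.values())]   # both are benefit criteria
    worst = [min(c) for c in zip(*scored.values())]

    def closeness(row):
        d_best = math.dist(row, best)
        d_worst = math.dist(row, worst)
        return d_worst / (d_best + d_worst or 1.0)

    return max(names, key=lambda n: closeness(scored[n]))

# Node -> (free memory in GiB, free vCPUs); the least loaded node wins.
print(topsis_pick({"pve1": (8, 2), "pve2": (64, 14), "pve3": (32, 6)}))
```

With these made-up numbers, `pve2` has the most headroom on both criteria and is therefore selected as the placement target.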

Another new feature is the Proxmox Offline Mirror tool, a local APT mirror for Debian-based VMs and containers. It allows systems without internet access, including air-gapped systems, to be set up or supplied with updates, for example via a USB stick.

Also worth noting: virtual machines now support CPU pinning and can be bound to fixed CPU cores. USB devices can be hot-plugged at runtime, provided the VM runs with KVM/QEMU 7.1 and the guest system is Windows 8 or newer, or Linux with kernel 2.6 or newer.
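According to the release notes, the pinning is done via taskset on the VM's QEMU process. The snippet below only illustrates the underlying Linux mechanism (the `sched_setaffinity` syscall that taskset wraps); the helper name is mine, and this is not how you would configure a Proxmox VM in practice.

```python
import os

# Illustration of what CPU pinning via taskset does under the hood:
# restrict a process to a fixed CPU set with the Linux affinity syscall.
def pin_to_cpus(pid: int, cpus: set[int]) -> set[int]:
    os.sched_setaffinity(pid, cpus)   # roughly: taskset -cp <cpus> <pid>
    return os.sched_getaffinity(pid)  # read back the effective mask

# Pin the current process (pid 0) to one CPU it is already allowed to use.
first_cpu = min(os.sched_getaffinity(0))
print(pin_to_cpus(0, {first_cpu}) == {first_cpu})
```

Pinning a VM's vCPU threads this way keeps them on fixed cores, which helps with cache locality and NUMA-sensitive workloads.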

Highlights Proxmox VE 7.3

  • Debian 11.5 (Bullseye), but using a newer Linux kernel 5.15 or 5.19
  • QEMU 7.1, LXC 5.0.0, and ZFS 2.1.6
  • Ceph Quincy 17.2.5 and Ceph Pacific 16.2.10
  • Tags for virtual guests are enabled in the GUI
  • Initial support for Cluster Resource Scheduling with the new "static load" mode. The new TOPSIS tool uses the "total memory" and "vCPU" properties of the HA resource to guide the decision on which node in the cluster a HA resource is started on.
  • New container templates: Fedora 37, Devuan Daedalus (Devuan 12), AlmaLinux 9, Rocky Linux 9, Ubuntu 22.10, etc.
  • Proxmox Offline Mirror: to update policy-restricted or air-gapped systems

Proxmox VE 7.3 Release Notes

  • Based on Debian Bullseye (11.5)
  • Latest 5.15 Kernel as stable default (5.15.74)
  • Newer 5.19 kernel as opt-in
  • QEMU 7.1
  • LXC 5.0.0
  • ZFS 2.1.6
  • Ceph Quincy 17.2.5
  • Ceph Pacific 16.2.10


  • Ceph Quincy support. It is also the default for new installations
  • Initial Cluster Resource Scheduling (CRS) support
  • Tags for Virtual Guests in web-interface for better categorizing/searching/…
  • Support for Proxmox Offline Mirror to update and manage subscriptions of air gapped systems

Changelog Overview

  • Enhancements in the web interface (GUI):
    • Show tags for virtual guests in the resource tree and allow edits.
    • Improved UX for mediated PCIe devices – they now also show the name.
    • Improved Certificate View – for example for certificates with many SANs.
    • Node disk UI: gracefully handle adding the same local storage (e.g. a zpool with the same name) to multiple nodes.
    • Expose node configurations like wakeonlan and the delay for starting guests on boot for each node in the web interface.
    • Improved translations, among others:
      • Arabic
      • Dutch
      • German
      • Italian
      • Polish
      • Traditional Chinese
      • Turkish
    • Improve rendering of complex formats in the api-viewer widget
  • Virtual Machines (KVM/QEMU)
    • New major version of QEMU 7.1
    • Support for pinning a VM to certain CPU cores via taskset
    • In the web interface, new VMs default to iothread enabled and VirtIO SCSI-Single selected as SCSI controller (if supported by the guest OS)
    • New VMs use qemu-xhci USB controller, if supported by the guest OS (Windows >= 8, Linux >= 2.6)
    • USB devices can now be hot-plugged
    • Pass through up to 14 USB devices (previously 5) to a virtual machine
    • Align virtio-net parameters for the receive- (rx) and transmit- (tx) queue sizes with upstream best-practices
    • Use the more efficient packed format for multi-queues
    • Allow up to 64 rx and tx multi-queues (previously 16)
    • Cloud-init improvements: changes to the cloud-init settings now can be made available in the config-drive ISO inside the guest directly
    • Disable io_uring for CIFS backed VM disks by default – to mitigate an issue with CIFS and io_uring present since kernel 5.15
    • Improved handling for VMs with passed through PCIe-devices:
      • Cleanup of created mdev devices, even if the VM could not be started
      • Longer timeouts between sending SIGTERM and SIGKILL to allow for a cleanup upon termination
      • Prevent suspending a VM with passed through PCIe device, as the device’s state cannot be saved
  • Containers (LXC)
    • New major LXC version 5.0.0
    • More robust cgroup mode detection, by explicitly checking the type of /sys/fs/cgroup
    • Support for newer distribution versions:
      • Fedora 37 and preparation for 38
      • Devuan 12 Daedalus
      • Preparation for Ubuntu 23.04
    • Bind-mounts are now also directly applied to a running container
    • Fix a bug when cloning a locked container: It does not create an empty config anymore, but fails correctly
    • Improvements to the systemd version detection inside containers
    • Volumes are now always deactivated upon successful move_volume, not only if the source volume is to be removed: preventing dangling krbd mappings
    • New pre-made templates available for:
      • AlmaLinux 9
      • Alpine 3.16
      • Centos 9 Stream
      • Fedora 36
      • Fedora 37
      • OpenSUSE 15.4
      • Rocky Linux 9
      • Ubuntu 22.10
    • Refreshed existing templates:
      • Gentoo (2022-06-22-openrc)
      • ArchLinux (2022-11-11)
  • General improvements for virtual guests
    • Add option to disable MAC learning on bridges (the guest MAC addresses are added statically, so no broadcast packets are flooded to those ports and no spurious answers are sent, which previously broke certain hosting providers' network setups)
    • Improve cleanup of backup-jobs upon purging the configuration for a removed VM
    • Optionally restart a guest after rollback to snapshot
    • Framework for remote migration to cluster-external Proxmox VE hosts
  • HA Manager
    • Cluster Resource Scheduling (CRS) tech-preview: Improve new-node selection for when the HA Manager needs to find a new host node for a HA service, in the following cases:
      • recovering it after fencing its node
      • on node shutdown, if the migrate shutdown-policy is enabled
      • on HA group configuration changes, if the current node is not in the highest priority set anymore
    • Use the TOPSIS multi-criteria decision analysis method for finding a better target
    • Having established the CRS foundation, Proxmox developers plan to extend it with a dynamic load scheduler and live load balancing in future releases
  • Cluster
    • Fix a permission issue in the QDevice status API
    The API call for obtaining the API status needs privileged access, but was run directly in the unprivileged daemon leading to spurious permission denied errors
    • Fix race-condition between writing corosync.conf and reloading corosync on update
  • Backup/Restore
    • Improved namespace support for the Proxmox Backup Server storage type
    • Improvements to the parsing of the template variables of the backup notes
    The notes template for backups, introduced in Proxmox VE 7.2, received a number of bug-fixes and improvements
    • Added option repeat-missed, allowing one to opt-out from the default behavior of running missed jobs on a new boot
    • The VM used for single-file-restore with QEMU guests now has support for increasing its memory (e.g. to handle many ZFS datasets inside the guest)
    • Improved configuration validation with Proxmox Backup Server encryption (for example, do not fall back to plain-text if the encryption key is missing)
    • When deleting vzdump backup files the corresponding notes and log are also removed.
  • Storage
    • Support ZFS dRAID vdevs when creating a zpool via the API & GUI. dRAID improves recovery times when a disk failure occurs.
    A dRAID setup makes most sense for either a large (15+) amount of disks, or a medium+ amount of huge disks (15+ TB).
    • Align SMART status API with Proxmox Backup Server fields
    • Support Notes and the Protected setting for backups stored on BTRFS storage types.
  • Storage Replication
    • Don’t send mails on bogus errors: e.g. when a replication could not be started because the guest is currently being migrated.
    • Upon replication failure the first 3 retries are scheduled in a shorter time, before waiting for 30 minutes before retrying – improving the consistency upon short network hiccups.
    • Clean up the replication state of guests running on another node, as can happen after an HA fence.
    • Make interaction of replication state and configuration changes more robust: e.g. in the case of first removing all volumes from one storage, and then removing the VM before the next replication was run.
  • pve-zsync
    • support --compressed option resulting in an already compressed dataset to be sent as is to the destination (thus removing the need to decompress and potentially re-compress on the target).
  • Ceph
    • Improved UX when creating new clusters
    The network selection and duplicate-IP checking were improved. It is no longer possible to run into an error by selecting a different node for the first monitor than the one you are connected to (prevents trying to create a monitor on nodes without installed Ceph packages).
    • Added heuristic checks if it is OK to stop or remove a ceph MON, MDS, or OSD service.
    The Web UI will now show a warning if the removal / stop of a service will affect the operation of the cluster.
    • Support for installing Ceph Quincy via Proxmox VE CLI and GUI.
  • Access Control
    • Improve naming of WebAuthn parameters in the GUI.
    • Increase OpenID code size – compatibility with Azure AD as OpenID provider.
    • Only require write-access (quorum) to TFA config for recovery keys.
    All other TFA methods only need read-access to the config. This makes it possible to login to a node, which is not in the quorate partition, even if your user has TFA configured.
    • Fix a hard to trigger update issue with rotating the private key used for signing the access tickets, resulting in falsely rejected API calls.
    • Fix creation of tokens for other users, by anyone except root@pam
    a bug prevented user A from creating a token for user B, despite having the relevant permissions
    • Better logging for expired tokens.
  • Firewall, Networking & Software Defined Networking (tech-preview)
    • Fix setting MTU on setups using OVS.
    • ifupdown2 now handles point-to-point settings correctly
    • ifupdown2 can now add an OVSBridge with a vlan-tag as a port to another OVSBridge (fakebridge)
    • Fix updating MTU if a bridge-port is plugged into a different bridge.
    • Firewall security groups can now be renamed with the changes directly being picked up from pve-firewall
    • Stricter parsing of guest config files in pve-firewall, making it possible to actually disable the guest firewall while keeping the config file around.
    • Improved handling of externally added ebtables rules: previously, if a rule was added to a table other than filter, pve-firewall still tried to parse it and add it to the filter table upon rule compilation.
  • Improved management for Proxmox VE clusters:
    • Proxmox Offline Mirror: the newly added proxmox-offline-mirror utility provides repository mirrors and subscription handling for air-gapped clusters, keeping Proxmox VE nodes without access to the public internet up to date and running with a valid subscription.
    • New mail-forwarding binary proxmox-mail-forward: no functional change, but unifies the configuration for sending the system-generated mails to the email address configured for root@pam
    • Improvements to pvereport – providing a better overview of the status of a Proxmox VE node the following information was added/improved:
      • ceph-device-list
      • stable ordering of guests and network information
      • proxmox-boot-tool output
      • arcstat output
  • HTTP & REST-API Daemon
    • File-uploads now support filenames with spaces.
    • File-uploads now support files with a size smaller than 16 KB
    • Improved input sanitization of API URLs as additional security layer.
  • Installation ISO
    • Fixed the permissions of /tmp inside the installation environment (e.g. for the edge-case of users installing software manually from a debug-shell)
    • Make the size requirement of 8 GB a warning – most installations can run with less space, but might need adaptation after install (e.g. moving the log-destination to another device) – keep 2 GB as hard limit
    • Rework auto-sizing of root, guest-data and SWAP partitions & avoid creating the LVM-Thin pool in favor of root partition space on small setups.
  • Mobile App
    • update to flutter 3.0
    • support and target Android 13
    • fix buttons hidden behind Android’s soft nav buttons
    • provide feedback about running backup tasks: a bug prevented any visual feedback in the app, when starting a backup (although the backup was started).

Known Issues & Breaking Changes

  • Virtual guest tags: Duplicate tags are now filtered out when updating the tag property of a virtual guest. Duplicate detection and sorting are case-insensitive by default, i.e. all tags are treated as lower-case. This can be changed in the datacenter.cfg configuration using the case-sensitive boolean property of the tag-style option.
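As a sketch of where that switch lives: the tag-style option sits in the cluster-wide /etc/pve/datacenter.cfg. The exact syntax below is assumed from the release notes (a boolean case-sensitive sub-property of tag-style), so verify it against the Proxmox VE reference documentation before relying on it.

```
# /etc/pve/datacenter.cfg — assumed syntax: make tag handling case-sensitive
tag-style: case-sensitive=1
```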

