Proxmox VE 9.1 landed in November and brings a noticeable lift right across the stack. The move to Debian 13.2, kernel 6.17, QEMU 10.1, LXC 6.0.5, ZFS 2.3.4 and Ceph Squid 19.2.3 gives the platform a much more modern foundation. Newer guest OSes run more smoothly, hardware compatibility is better, and storage behaves more predictably. It feels like the right baseline for the next few years of virtualisation workloads. But the real improvements sit in the operational details.
OCI image support for LXC
The biggest shift in 9.1 is that LXC can now use OCI images directly. This is a HUGE deal for anyone who has wanted simple, lightweight containers on Proxmox without running Docker or Kubernetes in a VM.
You can pull images from a registry or upload your own and treat them as first-class templates. Entry points can be overridden, environment variables can be defined in the GUI, and Proxmox will handle DHCP for images that don’t ship with their own network tooling.
It means you can finally deploy modern containerised services the same way you deploy LXC system containers. Internal utilities, vendor-provided app images, and small microservices become far easier to run on the same cluster as your VMs. It brings LXC into a much more relevant place for today’s deployment patterns.
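To make that concrete, here is a minimal sketch of scripting the creation step once an image has already been pulled from a registry or uploaded as a template on the node. The VMID, template reference and storage names are placeholders for your own environment, and the entrypoint and environment-variable overrides mentioned above are assumed to be set through the GUI rather than shown here.

```python
#!/usr/bin/env python3
"""Minimal sketch: create an LXC container from an image that has already been
pulled/uploaded as a template on the node. VMID, template reference, storage
and network settings are placeholders -- match them to what your 9.1 node
actually shows under the template storage after pulling the image."""
import subprocess

VMID = "120"                                  # hypothetical container ID
TEMPLATE = "local:vztmpl/nginx-latest.tar"    # placeholder template reference

cmd = [
    "pct", "create", VMID, TEMPLATE,
    "--hostname", "oci-demo",
    "--memory", "512",
    "--rootfs", "local-lvm:4",                # 4 GiB root volume
    "--net0", "name=eth0,bridge=vmbr0,ip=dhcp",
    "--unprivileged", "1",
]

print("Running:", " ".join(cmd))
subprocess.run(cmd, check=True)
```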
TPM state in qcow2 makes Windows far easier to manage
For years, TPM support on Proxmox has made running Windows 11 or modern Windows Server on NFS or SMB storage annoying. TPM state lived in raw volumes, which didn’t behave reliably with snapshots or templates.
Proxmox VE 9.1 switches to qcow2 for TPM state, which instantly removes most of that pain. Snapshots on file-based storage become safer, Windows templates are easier to clone, backups behave, and DR workflows stop tripping over mismatched state. Anyone managing Windows at scale on Proxmox will appreciate the difference.
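As a rough illustration, attaching UEFI and TPM state to an existing Windows guest is the same qm call as before; the VMID and storage name below are placeholders, and on 9.1 the TPM state volume on file-based storage is the piece that now lands as qcow2.

```python
#!/usr/bin/env python3
"""Minimal sketch: attach UEFI + TPM state to an existing Windows guest.
VMID and storage name are placeholders; on 9.1 the TPM state volume on
file-based storage (e.g. NFS) is expected to be created as qcow2."""
import subprocess

VMID = "201"              # hypothetical Windows 11 VM
STORAGE = "nfs-vmstore"   # placeholder file-based storage

subprocess.run([
    "qm", "set", VMID,
    "--machine", "q35",
    "--bios", "ovmf",
    "--efidisk0", f"{STORAGE}:1,efitype=4m,pre-enrolled-keys=1",
    "--tpmstate0", f"{STORAGE}:1,version=v2.0",
], check=True)
```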
Nested virtualisation with proper boundaries
Nested virtualisation has always been useful, but the way Proxmox enabled it wasn’t ideal. The usual method involved setting the CPU type to “host”, which exposed a huge set of CPU flags and often caused migration inconsistencies.
Proxmox 9.1 introduces a dedicated nested-virt flag that can be applied to a CPU model matching the vendor and generation of your hardware. It gives nested workloads the virtualisation extensions they expect, without turning the VM into a hardware clone of the underlying host. The result is a more predictable cluster and far fewer surprises during live migrations.
If you run nested ESXi, Hyper-V, Windows VBS, or any kind of training/lab environment, this change lands well.
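Whichever way you enable it, it’s worth confirming from inside the guest that the extensions actually arrived. A small check like the following (run inside a Linux guest) is enough before you install a nested hypervisor; it only verifies what the guest sees, it doesn’t touch the VM configuration.

```python
#!/usr/bin/env python3
"""Quick check to run *inside* a Linux guest: confirms the virtualisation
extensions (vmx on Intel, svm on AMD) are actually exposed before installing
a nested hypervisor. Purely a verification sketch."""

with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if "vmx" in flags:
    print("Intel VT-x (vmx) exposed - nested virtualisation should work")
elif "svm" in flags:
    print("AMD-V (svm) exposed - nested virtualisation should work")
else:
    print("No virtualisation extensions visible to this guest")
```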
Intel TDX support
Confidential computing is gradually becoming part of mainstream infrastructure, and with early Intel TDX support now available, Proxmox aligns with both major vendor ecosystems (Intel TDX and AMD SEV/SNP). It’s still early and not everything supports it yet, but the path is there for teams preparing for higher isolation requirements or building multi-tenant environments.
LXC feels smoother overall
Beyond OCI support, LXC benefits from a lot of subtle fixes. Startup delays caused by systemd-networkd are reduced, DHCP behaviour is more reliable, compatibility with newer Debian/Ubuntu/AlmaLinux/CentOS versions is improved, and the GUI shows assigned IPs more clearly. None of these changes is dramatic on its own, but together they make container management feel far more predictable.
SDN gets proper visibility
The SDN interface has finally grown up. You can now see which guest NICs sit on which bridges and VNets, which MACs and IPs EVPN zones have learned, and the full fabric layout (routes, neighbours, interfaces) directly in the tree view.
This makes troubleshooting far less painful. Anyone running VXLAN overlays, multi-tenant networks, or stretched fabrics will immediately notice the difference.
More consistent cluster behaviour
Several changes reduce friction in clustered environments. VMs can be added to HA after creation without odd side effects. Metrics collection is faster and less prone to stalling. Affinity rules behave in a way that matches real-world expectations. Long-term RRD data stays intact after upgrading. It all contributes to a smoother operational experience, especially on larger clusters.
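As a small illustration of the first point, putting an existing VM under HA management after the fact is a single ha-manager call. A minimal sketch, with the VMID and restart/relocate limits as placeholder values:

```python
#!/usr/bin/env python3
"""Minimal sketch: put an existing VM under HA management after creation.
The VMID and limits are placeholders for your own environment."""
import subprocess

VMID = "105"   # hypothetical existing VM

subprocess.run([
    "ha-manager", "add", f"vm:{VMID}",
    "--state", "started",
    "--max_restart", "1",
    "--max_relocate", "1",
], check=True)
```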
Storage refinements and cleaner ESXi imports
Storage gets a few important improvements. Snapshot volume chains behave more safely and avoid corner cases. Ceph Squid performs better under snapshot-heavy loads and offers cleaner pool behaviour. ESXi imports work more reliably thanks to QEMU 10.x and clean up properly when removed. These changes matter a lot when you’re migrating environments or running Proxmox+Ceph in production.
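If you’re moving disks across by hand rather than through the import wizard, the classic route still applies. A hedged sketch, assuming the VMDK has already been copied onto the Proxmox node; VMID, source path and target storage are placeholders.

```python
#!/usr/bin/env python3
"""Minimal sketch: import a VMDK copied from an ESXi host into an existing VM.
VMID, source path and target storage are placeholders; the GUI import wizard
covers the same ground when an ESXi source is configured as storage."""
import subprocess

VMID = "310"                                   # hypothetical target VM
SOURCE = "/mnt/migration/appserver.vmdk"       # placeholder copied disk
TARGET_STORAGE = "local-lvm"                   # placeholder storage

# The imported disk is attached as an unused disk; assign it to a bus afterwards.
subprocess.run(["qm", "disk", "import", VMID, SOURCE, TARGET_STORAGE], check=True)
```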
Should you upgrade?
If you’re running 9.0, the upgrade is straightforward. If you’re running 8.4, this is the point where 9.x starts to feel settled and predictable.
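For the 9.0 case it really is the routine package upgrade; a minimal sketch, with the 8.4 path flagged as the bigger job it is.

```python
#!/usr/bin/env python3
"""Minimal sketch of the routine 9.0 -> 9.1 path: refresh package lists and
apply the upgrade. Coming from 8.4 is a major-version jump instead -- run the
pve8to9 checklist script and follow the official upgrade guide first."""
import subprocess

for cmd in (
    ["apt", "update"],
    ["apt", "dist-upgrade", "-y"],
):
    subprocess.run(cmd, check=True)

# For 8.4 -> 9.x, start with the readiness check before touching repositories:
#   pve8to9 --full
```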
Test your Windows templates with the new TPM handling, check nested-virt behaviour on mixed hardware, validate a small OCI-LXC deployment (we’re excited for this), and confirm Ceph performance under kernel 6.17. Once those pieces check out, the upgrade should be routine.
Proxmox VE 9.1 focuses on the parts operators interact with every day: containers, Windows VMs, nested hypervisors, SDN visibility, storage behaviour and cluster stability. Everything feels a little more modern, a little more consistent, and a lot more aligned with how people deploy workloads in 2025 and beyond.
It’s a strong release, and Proxmox is clearly growing into a more capable platform without losing the simplicity that makes it so appealing.
