Here is my process. Assume the VM is named zabbix, one of the Proxmox hosts is named proxmox1.example.com, the ID of your new VM is 1015, and your Ceph pool is named FirstPool.
- Shut down the VM. (Optionally, change the interface name in /etc/network/interfaces to ens18 first.)
- Disable autostart on the VM:
virsh autostart --disable --domain zabbix
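For the shutdown itself, you can also do it from the KVM host rather than from inside the guest; a sketch, assuming the libvirt domain is named zabbix:
virsh shutdown --domain zabbix
virsh list --all
The second command is just to confirm the domain shows as shut off before you start copying.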
- Copy the qcow2 file to the Proxmox server. A couple of options, both run from the old KVM server (double-check the source/destination order of the scp command!):
- To the Proxmox local disk:
scp /data/vms/zabbix.qcow2 root@proxmox1.example.com:/var/lib/vz/images/zabbix.qcow2
- Or maybe your VMs are big and your Proxmox disks are small. You can copy directly to a CephFS if you have created one:
scp /data/vms/zabbix.qcow2 root@proxmox1.example.com:/mnt/pve/cephfs/zabbix.qcow2
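Either way, it's worth verifying the copy before you delete anything on the old host; comparing checksums on both ends is one option (a sketch, using the local-disk destination as an example):
sha256sum /data/vms/zabbix.qcow2
ssh root@proxmox1.example.com sha256sum /var/lib/vz/images/zabbix.qcow2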
- Import the VM into Proxmox:
qm create 1015 --scsi0 FirstPool:0,import-from=/var/lib/vz/images/zabbix.qcow2 --boot order=scsi0
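If you copied the image onto CephFS instead of local disk, the same command should work with import-from pointed at the CephFS mount (a sketch, reusing the path from the scp step above):
qm create 1015 --scsi0 FirstPool:0,import-from=/mnt/pve/cephfs/zabbix.qcow2 --boot order=scsi0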
FirstPool is the Ceph pool; the import automatically creates a disk in it named vm-1015-disk-0.
- Find the VM in the Proxmox web GUI and make some changes. Probably many of these could be included in the previous step, or done from the CLI as sketched after this list.
- Add a network adapter.
- Bump up CPU cores and RAM to whatever you want.
- Change "Start at boot" to Yes.
- Change the name of the VM (to "zabbix" for this example).
- Change the OS type.
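If you would rather script those changes than click through the GUI, qm set can do roughly the same thing (a minimal sketch; the bridge name vmbr0, the core/RAM numbers, and the ostype value are assumptions to adjust for your environment):
qm set 1015 --name zabbix --net0 virtio,bridge=vmbr0 --cores 4 --memory 8192 --onboot 1 --ostype l26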
- Boot the VM
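From the GUI, or from the shell:
qm start 1015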
- If you didn't do it before shutting down, change the interface name in /etc/network/interfaces to ens18. Check the interface name with ip a first to verify that it really is ens18. I have no idea why mine are always set to that, but it seems consistent.
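For reference, the change in /etc/network/interfaces is just renaming the interface in the existing stanza. A minimal sketch for a DHCP setup (if yours uses a static address, keep the rest of the stanza and only change the interface name):
auto ens18
iface ens18 inet dhcp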
BTW, copying to CephFS could be slower. I have three nodes, all SSDs, with 1 Gbps connections, and the copy runs at about 35 MB/s versus the 120 MB/s I was seeing to local disk. Of course the biggest VMs are the ones that need to be copied to the slower storage! But that pain was dulled when I observed that import speeds from CephFS were over 1 GB/s, which is way faster than importing from local storage.
Even though I think I have thin provisioning enabled in KVM, it seems to copy the full disk size across the network and into the destination .qcow2 file. But then when it imports into Ceph, it seems to be thin provisioned again. Smart but mysterious.
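If you want to see what is actually happening instead of guessing, compare virtual size against allocated size at each stage (a sketch; vm-1015-disk-0 is the disk name from the import step):
qemu-img info /data/vms/zabbix.qcow2
rbd du FirstPool/vm-1015-disk-0
The first shows "virtual size" versus "disk size" for the source image; the second shows provisioned versus used space for the imported RBD image.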
Seeing surprisingly high Ceph space usage afterwards? You could be like me and assume that thin provisioning is broken. Or just go delete that .qcow2 you stuffed onto CephFS (now that you have imported it), since CephFS gets replicated 3x with no thin provisioning.
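In other words, something like this, once you are sure the VM boots and you no longer need the intermediate copy (paths are the ones used earlier):
rm /mnt/pve/cephfs/zabbix.qcow2
ceph df
The pool usage reported by ceph df should come back down once the file is gone.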