Dell Latitude 7430
This was my secondary system for work, not often used.
Purchased as a birthday present for myself in 2015. I had been in Best Buy and thought that the thinness of the computer was incredible, like two pieces of cardboard stuck together.
I knew from reviews that the CPU would be weak, and it is, but it has been good enough for almost five years now as I write this.
The PC people at work had some of these with extra features like a touchscreen and fingerprint reader, which they were looking for people to adopt, so I agreed to go through the swap process.
Purchased from Dell Outlet while I was in St. Augustine. I'm guessing it had been returned due to some sort of nick in the metal on one of the hinges.
Here is my process. Assume the VM is named zabbix, one of the Proxmox hosts is named proxmox1.example.com, the ID of your new VM is 1015, and your Ceph pool is named FirstPool.
1. Disable autostart for the VM on the old KVM host (I change /etc/network/interfaces to ens18 first; see step 4):

      virsh autostart --disable --domain zabbix

2. Copy the disk image to the Proxmox host, either to local storage:

      scp /data/vms/zabbix.qcow2 root@proxmox1.example.com:/var/lib/vz/images/zabbix.qcow2

   or to CephFS:

      scp /data/vms/zabbix.qcow2 root@proxmox1.example.com:/mnt/pve/cephfs/zabbix.qcow2

3. Create the VM and import the disk:

      qm create 1015 --scsi0 FirstPool:0,import-from=/var/lib/vz/images/zabbix.qcow2 --boot order=scsi0

   FirstPool is the Ceph pool. It creates a disk in there automatically named vm-1015-disk-0.

4. If you haven't already, change /etc/network/interfaces in the guest to ens18. Make sure you check the interface name with ip a first to verify that it is ens18. I have no idea why mine are always set to that, but it seems consistent.

BTW, copying to CephFS can be slower. I have three nodes, all SSDs, and 1Gbps connections, and it seems to run at about 35MB/s versus the 120MB/s I was seeing to local disk. Of course the biggest VMs need to be copied to slower storage! But that pain was dulled when I observed that my import speeds were over 1GB/s, which is way faster than when importing from local storage.
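For reference, here is how those steps chain together as one script. This is only a sketch under the naming assumptions above (zabbix, proxmox1.example.com, 1015, FirstPool); the shutdown step is my own assumption, since the disk shouldn't be copied while the guest is still running.

      #!/bin/sh
      # Sketch of the migration steps above; names are the example
      # assumptions from the text, not anything special.
      VM=zabbix
      HOST=proxmox1.example.com
      VMID=1015
      POOL=FirstPool

      virsh shutdown --domain "$VM"            # assumed: stop the guest before copying
      virsh autostart --disable --domain "$VM" # keep it from coming back on the old host
      scp "/data/vms/$VM.qcow2" "root@$HOST:/var/lib/vz/images/$VM.qcow2"
      ssh "root@$HOST" "qm create $VMID --scsi0 $POOL:0,import-from=/var/lib/vz/images/$VM.qcow2 --boot order=scsi0"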
Even though I think I have thin provisioning enabled in KVM, it seems to copy the full disk size across the network and into the .qcow2 file. But then when it imports into Ceph, it seems to be thin provisioned again. Smart but mysterious.
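If you want to see what is actually allocated, qemu-img will show it. And rsync's sparse mode is one way to avoid inflating the file in transit; I didn't use it above, so treat it as an untested alternative:

      # On the KVM host: compare virtual size to what is really allocated
      qemu-img info /data/vms/zabbix.qcow2

      # scp writes every byte; rsync --sparse can keep holes sparse in
      # transit (an alternative to the scp step above, not what I did)
      rsync --sparse /data/vms/zabbix.qcow2 root@proxmox1.example.com:/var/lib/vz/images/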
Seeing surprisingly high Ceph space usage afterwards? You could be like me and assume that thin provisioning is broken. Or you could just go delete that .qcow2 (now that you have imported it) that you stuffed onto CephFS, which gets replicated 3x with no thin provisioning.
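To confirm that is what happened, something like this on any of the Proxmox/Ceph nodes; names match the example above:

      ceph df                          # pool-level usage; the leftover qcow2 counts 3x
      rbd du FirstPool/vm-1015-disk-0  # actual allocation of the imported, thin disk
      rm /mnt/pve/cephfs/zabbix.qcow2  # reclaim the space once the import is verified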