KVM Snapshot/Backup Script
-
@Romo said in KVM Snapshot/Backup Script:
I use this to create my image and then use virt-manager to finish the install
qemu-img create -f qcow2 -o preallocation=metadata centos-7.qcow2 25G
I preallocated the original template, and when I clone with Virt-Manager or the CLI I don't usually change it afterward. I did some tests and didn't see any difference between running the preallocation on the clone and not. I'm not sure if the preallocation carries over when you clone, but like I said, I haven't seen much of a read/write difference.
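If you want to check whether a clone actually kept the preallocation, qemu-img info shows the virtual size next to the space the file really uses. A minimal sketch, assuming the images live under /vmrepo (the path is just an example):
# Compare what the guest sees ("virtual size") with what the file uses ("disk size")
qemu-img info /vmrepo/centos-7.qcow2        # preallocated template
qemu-img info /vmrepo/centos7-clone.qcow2   # clone made with Virt-Manager/virt-clone
# A metadata-preallocated image still shows a disk size well below its virtual size,
# since only the qcow2 metadata is allocated up front.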
-
@Romo said in KVM Snapshot/Backup Script:
@stacksofplates I only did an ls -lh
Output of ls -lsh:
2.9G -rw-r--r--. 1 root root 2.9G Oct 26 17:39 centos7-clone.qcow2
1.1G -rw-r--r--. 1 root root  26G Feb  8 15:35 centos-7.qcow2
Yeah, so it's thin provisioned. I wonder why it's taking so long. I don't think the disk speeds would make that much of a difference.
-
What's your host specs? Mine is a DL380 G6. Dual 4 core Xeons and 96GB RAM. Don't think the RAM would have much to do with it. I had 24 originally and I'm pretty sure it cloned at the same speed.
-
@stacksofplates said in KVM Snapshot/Backup Script:
What's your host specs? Mine is a DL380 G6. Dual 4 core Xeons and 96GB RAM. Don't think the RAM would have much to do with it. I had 24 originally and I'm pretty sure it cloned at the same speed.
It's tiny:
ML110 G7, 8GB RAM, single 4-core Xeon
-
@Romo said in KVM Snapshot/Backup Script:
@stacksofplates said in KVM Snapshot/Backup Script:
What's your host specs? Mine is a DL380 G6. Dual 4 core Xeons and 96GB RAM. Don't think the RAM would have much to do with it. I had 24 originally and I'm pretty sure it cloned at the same speed.
It's tiny:
ML110 G7, 8GB RAM, single 4-core Xeon
Hmm. Do you have anything else running while you clone? You would think 4 cores would be enough as long as you're not way over provisioned.
-
@stacksofplates said in KVM Snapshot/Backup Script:
@Romo said in KVM Snapshot/Backup Script:
@stacksofplates said in KVM Snapshot/Backup Script:
What's your host specs? Mine is a DL380 G6. Dual 4 core Xeons and 96GB RAM. Don't think the RAM would have much to do with it. I had 24 originally and I'm pretty sure it cloned at the same speed.
It's tiny:
ML110 G7, 8GB RAM, single 4-core Xeon
Hmm. Do you have anything else running while you clone? You would think 4 cores would be enough as long as you're not way over provisioned.
3 VMs:
virsh # list
 Id    Name             State
----------------------------------------------------
 111   FreePBX          running
 144   rocket-chat      running
 160   ubt-ans-ininja   running
This is a clone of the CentOS image.
[root@kvm2 ~]# virt-clone -o centos-7 -n clone-test -f /vmrepo/clone-test.qcow2
Allocating 'clone-test.qcow2'              | 25 GB     00:00:33
Clone 'clone-test' created successfully.
-
@Romo said in KVM Snapshot/Backup Script:
@stacksofplates said in KVM Snapshot/Backup Script:
@Romo said in KVM Snapshot/Backup Script:
@stacksofplates said in KVM Snapshot/Backup Script:
What's your host specs? Mine is a DL380 G6. Dual 4 core Xeons and 96GB RAM. Don't think the RAM would have much to do with it. I had 24 originally and I'm pretty sure it cloned at the same speed.
It's tiny:
ML110 G7, 8GB RAM, single 4-core Xeon
Hmm. Do you have anything else running while you clone? You would think 4 cores would be enough as long as you're not way over provisioned.
3 VMs:
virsh # list
 Id    Name             State
----------------------------------------------------
 111   FreePBX          running
 144   rocket-chat      running
 160   ubt-ans-ininja   running
This is a clone of the CentOS image.
[root@kvm2 ~]# virt-clone -o centos-7 -n clone-test -f /vmrepo/clone-test.qcow2
Allocating 'clone-test.qcow2'              | 25 GB     00:00:33
Clone 'clone-test' created successfully.
Maybe it is hardware limitations. I'm not sure. Still, 33 seconds is much faster than building by hand.
-
@stacksofplates yeah it must be my hardware, and indeed it is way faster than building by hand. I will still be jealous of your cloning times =).
-
@Romo said in KVM Snapshot/Backup Script:
@stacksofplates yeah it must be my hardware, and indeed it is way faster than building by hand. I will still be jealous of your cloning times =).
Oh mine is nothing. Google can spin up thousands with Kubernetes in seconds. That's something to be jealous of.
-
@stacksofplates said in KVM Snapshot/Backup Script:
@Romo said in KVM Snapshot/Backup Script:
@stacksofplates yeah it must be my hardware, and indeed it is way faster than building by hand. I will still be jealous of your cloning times =).
Oh mine is nothing. Google can spin up thousands with Kubernetes in seconds. That's something to be jealous of.
But they spin up containers, don't they?
-
@Romo said in KVM Snapshot/Backup Script:
@stacksofplates said in KVM Snapshot/Backup Script:
@Romo said in KVM Snapshot/Backup Script:
@stacksofplates yeah it must be my hardware, and indeed it is way faster than building by hand. I will still be jealous of your cloning times =).
Oh mine is nothing. Google can spin up thousands with Kubernetes in seconds. That's something to be jealous of.
But they spin up containers, don't they?
True, good point, but I would think it's relative. For example, I could probably only spin up a handful in a few seconds. With their equipment they have to be able to spin up a ton of full VMs pretty quickly.
Plus there are the really trimmed down cloud versions of these OSs that spin up even faster.
-
Thanks.
-
Very nice!
Question:
I'm looking to set up an oVirt engine + node setup. I think this script will help, as ideally I do not want to store my snapshots on the primary VM storage (iSCSI). I would rather have them saved to a secondary storage location (NFS).
How would I modify your script to specify another storage location? I do not see the argument for modifying the snap location.
-
@ntoxicator I think he is grabbing a temp snapshot and then he tar.gz's the snap to the destination (e.g. an NFS mount). Then the snap is destroyed; otherwise you get worse and worse performance.
-
@matteo-nunziati said in KVM Snapshot/Backup Script:
@ntoxicator I think he is grabbing a temp snapshot and then he tar.gz's the snap to the destination (e.g. an NFS mount). Then the snap is destroyed; otherwise you get worse and worse performance.
This is correct. The snapshot only lives long enough to copy the disk to the specified location and is then merged back into the original disk. Here's what it would look like using a disk called vm-test.qcow2:
vm-test.qcow2 (base) <---- vm-test-snap.qcow2 (overlay)
When the snapshot is taken, writes are directed to the overlay file (vm-test-snap.qcow2) and reads are done from the base (vm-test.qcow2). The script then copies the base image to whatever storage you want. Once it's copied, the overlay is blockcommitted to the base disk and the VM is pivoted back to the base disk. Then it just deletes the snapshot and the overlay file.
The snapshots shouldn't be very large at all. If there is nothing going on, the size could be kilobytes until it's blockcommitted.
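For anyone who wants to see the moving parts, the flow described above can be reproduced by hand with virsh. This is only a rough sketch of the same idea, not the script itself; the domain name vm-test, the disk target vda, and the /vmrepo and /mnt/backup paths are assumptions used for illustration:
# 1. Take a disk-only external snapshot; new writes now land in the overlay file
virsh snapshot-create-as vm-test backup-snap \
    --diskspec vda,file=/vmrepo/vm-test-snap.qcow2 \
    --disk-only --atomic

# 2. The base image is no longer being written, so copy it to the backup target (e.g. an NFS mount)
cp /vmrepo/vm-test.qcow2 /mnt/backup/vm-test.qcow2

# 3. Merge the overlay back into the base and pivot the running VM onto the base disk
virsh blockcommit vm-test vda --active --verbose --pivot

# 4. Remove the snapshot metadata and the now-unused overlay file
virsh snapshot-delete vm-test backup-snap --metadata
rm -f /vmrepo/vm-test-snap.qcow2
Step 2 is the only part that touches the destination, so that is where a secondary location such as an NFS mount would come in.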
-
Thank you for clarification.
Once the snap is stored/saved to the location, can a full VM be recreated from it?
What if the original VM disk (.qcow2) was destroyed or lost (say, a failure)? Could the VM be recreated from that snap (restored from the snap save location)?
Essentially, I'm trying to find a way to completely back up KVMs within a production environment. Most snapshots will take up more space on the originating VM storage location, so I'd have the snaps offloaded to an NFS share -- less expensive storage than the originating VM location.
Snaps would be stored for life on the NFS, and I could restore from a snapshot or roll back, as I would like to use these snapshots for production within or for Xen Desktop...
I'm trying to find a production replacement for XenServer due to its virtual disk storage limitation: you can only have a 2TB vdisk assigned to any VM at any time. The storage repository or iSCSI LUN can be of any size, but I'm capped at 2TB for a VM disk...
It's love at first sight for the Xen Orchestra integration for XS (snapshots, rolling/delta, etc.), but the storage limitation kills me...
Sorry for getting off topic! Just trying to show my proof of concept and need for the snapshot script.
-
What if the original VM disk (.qcow2) was destroyed or lost (say, a failure)? Could the VM be recreated from that snap (restored from the snap save location)?
Snapshots would not work if the base file is written to or doesn't exist.
As @stacksofplates responded
the snapshot only lives long enough to copy the disk to the specified location and is then merged back into the original disk. Here's what it would look like using a disk called vm-test.qcow2.
Think of the process as cloning the original disk without shutting down the VM; the snapshot is just temporary and ceases to exist once the original file is copied.
Most snapshots will take up more space on the originating VM storage location, so I'd have the snaps offloaded to an NFS share -- less expensive storage than the originating VM location.
-
Right, we aren't copying the snapshot, we are copying the disk before the snapshot was taken. Then the snapshot is blockcommitted back into the original.
-
@ntoxicator said in KVM Snapshot/Backup Script:
What if the original VM disk (.qcow2) was destroyed or lost (say, a failure)? Could the VM be recreated from that snap (restored from the snap save location)?
Yes, the saved disk (not the snapshot) can be copied back to the normal image directory and you can start the VM again.
This is assuming you are using images and not block storage. If you've just mounted the LUN through the hypervisor and are using block storage, this won't work. If you've mounted the LUN to a directory and created a file system on it to store images, then this will work.
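A rough sketch of that restore, assuming the backup produced by the earlier example sits on an NFS mount at /mnt/backup and the domain XML is still available (all names and paths here are examples, not part of the script):
# Copy the backed-up image back into the normal image directory
cp /mnt/backup/vm-test.qcow2 /vmrepo/vm-test.qcow2

# If the domain definition was lost too, redefine it from an XML dump
# taken earlier with: virsh dumpxml vm-test > vm-test.xml
virsh define vm-test.xml

# Boot the VM from the restored disk
virsh start vm-test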
-
@ntoxicator when you grab a snapshot on libvirt/KVM it is like grabbing a snapshot in LVM or ZFS: the snap is a (temporary) log of changes since it was taken (usually, but not necessarily, a COW). It is used to freeze the data at a given point in time; in fact the actual VM disk is not written to anymore, but the snapshot is.
When you are done processing the VM disk (whatever that processing is) you merge the snapshot back (you commit it) so that the actual VM disk gets all the writes that occurred between taking the snapshot and finishing the processing.
This ensures that the VM disk doesn't change while you back it up!
**Note** that this does not imply that the disk is in a coherent state: it is simply frozen. Whether the disk is correctly flushed and a proper write barrier is placed on it when snapshotted depends on other factors, which KVM honors via host/guest FS settings.
A brief overview of barriers is here. Here is a description of what to expect from a KVM/libvirt snapshot.
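If a simple freeze isn't enough, one common option (an addition here, not something the script necessarily does) is to install the QEMU guest agent in the VM and pass --quiesce so the guest filesystems are flushed and frozen for the instant the snapshot is created. Reusing the hypothetical vm-test example from above:
# Inside the guest (CentOS 7 shown): install and start the agent
yum install -y qemu-guest-agent
systemctl enable qemu-guest-agent
systemctl start qemu-guest-agent

# On the host: --quiesce asks the agent to sync and freeze guest filesystems
# while the external snapshot is taken (requires the guest agent channel in the domain XML)
virsh snapshot-create-as vm-test backup-snap \
    --diskspec vda,file=/vmrepo/vm-test-snap.qcow2 \
    --disk-only --atomic --quiesce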