XenServer NFS Storage Repo in the SMB
-
About thin provisioning in the next XenServer version (Dundee):
THIN PROVISIONED BLOCK STORAGE
iSCSI and HBA block storage can now be configured to be thinly provisioned. This is of particular value to users who provision guest storage with a high-water mark, expecting that some allocated storage won't be used. With XenServer 6.5 and prior, the storage provider would allocate the entire disk space up front, which could significantly reduce effective storage utilization and in turn increase the cost of virtualization. Now block storage repositories can be configured with an initial size and an increment value. Since storage is critical in any virtualization solution, we are very interested in feedback on this functional change.
I have to make some tests on my side.
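For anyone else wanting to try it, here's a rough sketch of what creating such an SR might look like. To be clear, the sm-config key names below are my assumption based on the announcement's "initial size and an increment value" wording, not confirmed Dundee parameter names, and the device-config values are placeholders for a real iSCSI target:

# Sketch only: the sm-config keys are assumed, not confirmed;
# device-config values are placeholders for a real iSCSI target.
xe sr-create name-label="thin-iscsi-sr" type=lvmoiscsi content-type=user \
    device-config:target=192.0.2.10 \
    device-config:targetIQN=iqn.2015-01.com.example:storage \
    device-config:SCSIid=<scsi-id> \
    sm-config:allocation=thin \
    sm-config:initial_allocation=8GiB \
    sm-config:allocation_quantum=1GiB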
-
@olivier said:
That's why it's important to use thin provisioned storage as much as possible,
like local files and/or NFS.
-
@DustinB3403 said:
@olivier is it a reasonable assumption that you'd want to have at least double the capacity that you're using on the Local Xen SR when building the Delta for that process?
Not generally, no. That's huge. You have to know something about your system, of course, but this is a theoretical limit, not a practical number.
-
@scottalanmiller said:
@DustinB3403 said:
@scottalanmiller the reason for scaling up to 22TB would be for the time & space it takes to build the delta, which is a snapshot on the host, until it gets put onto the NFS server.
Which would then copy it to an External NAS (and with planning another external device like a USB)
3-2-1 Backup.
Live and 2 copies on different media and one off site.
Okay, so that's assuming 11TB of native VMs, 100% deltas, and backing up the entire server in a single go to be able to hit that? Do those things happen?
No, but if I don't plan for it now, when it happens in a year's time because of poor decision making, I'll be the asshole who didn't plan for stupidity.
-
@DustinB3403 said:
@scottalanmiller said:
@DustinB3403 said:
@scottalanmiller the reason for scaling up to 22TB would be for the time & space it takes to build the delta, which is a snapshot on the host, until it gets put onto the NFS server.
Which would then copy it to an External NAS (and with planning another external device like a USB)
3-2-1 Backup.
Live and 2 copies on different media and one off site.
Okay, so that's assuming 11TB of native VMs, 100% deltas, and backing up the entire server in a single go to be able to hit that? Do those things happen?
No, but if I don't plan for it now, when it happens in a year's time because of poor decision making, I'll be the asshole who didn't plan for stupidity.
So the thing that you are ACTUALLY planning for is using 16TB of storage for VMs. That's fine, but be transparent about what you are planning for. It's not the backups, it's overrunning the intended storage.
-
@scottalanmiller I wasn't trying to be non-transparent.
Just trying to make a point that you need to have enough space for the eventuality of growth, which might change plans.
-
@DustinB3403 said:
@scottalanmiller I wasn't trying to be non-transparent.
Just trying to make a point that you need to have enough space for the eventuality of growth, which might change plans.
To make it transparent, you would state it as X storage + Y for backup overhead, not project 11TB as the growth number and then add another growth number and the backup overhead together.
So in your case, accounting for future unknown growth, you'd have something like 16TB plus a reasonable 5TB of space for snaps during the backup process, which sounds a lot more reasonable.
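Put as a worked example with the numbers from this thread (the split is illustrative, not a measured requirement):

  16 TB  planned VM storage (11 TB today plus growth headroom)
+  5 TB  snapshot/export space consumed during the backup window
= 21 TB  usable capacity to provision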
-
So this is kind of a related question. Normally thin LVMs look like the test one I created here:
root@Megatron:/home/jhooks# lvs
  LV     VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root   ubuntu-vg -wi-ao----   1.81t
  swap_1 ubuntu-vg -wi-ao----   7.96g
  test   ubuntu-vg twi-a-tz-- 100.00m             0.00   0.88
root@Megatron:/home/jhooks#
However, XenServer doesn't show the Data% or Meta% columns. Is that just because it's CentOS 6? It shows a type of linear for each LV. So if they aren't thin provisioned, does it create the LV and then attach a thin provisioned VHD on top?
-
@johnhooks If you are on a pre-Dundee XenServer, that's normal: LVM is not thin provisioned in that case.
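You can confirm it from dom0 by asking lvs for the segment type; on a classic XenServer LVM SR every volume comes back as linear rather than thin. A minimal sketch (the volume group name comes from vgs, and the output lines are illustrative):

# Show the segment type of each LV in the SR's volume group:
lvs -o lv_name,lv_size,segtype VG_XenStorage-<sr-uuid>
#   LV              LSize  Type
#   MGT             4.00m  linear
#   VHD-<vdi-uuid>  4.02g  linear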
-
@olivier said:
@DustinB3403 Even with classical backup: for a running VM we need to export a snapshot. So if all your VMs are running and you are backing up everything at once, you'll need to double your space usage (at least during the VM export process).
That's why it's important to use thin provisioned storage as much as possible.
Huh? Creating a snap doesn't instantly create a double of your current in-use storage. Assuming a 2 TB VM, you snap it. As I understand it, what happens is that the current 2 TB file is no longer written to, and a new additional file is created where all of the changes are written. If that VM isn't very change-heavy, the snap file will most likely remain small. Of course there are times when the system might be changing data like crazy (out with the old, in with the new) so total usage doesn't change, but the actual changes could be epic.
-
@Dashrender It's not exactly like that on thick storage; it's a little bit more complicated, in fact.
It depends on the current disk content. XenServer will try to deflate the VHD as much as possible. Let's take, for example, a single VDI provisioned at the total size of the disk space (let's say on LVMoiSCSI):
Then, if you take a snapshot, you'll have 3 disks:
- the original one will become the parent
- the newly created active VDI will be remapped to be the current VM disk
- the snapshot
Doubling the size is the worst case, when deflating won't free any space (or only very little). In that case, the initial snapshot mechanism will double the space used.
But any extra snapshot (after the initial one) won't consume a lot of space.
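Schematically, the VHD chain after that first snapshot looks like this (names are illustrative):

parent VDI (read-only base, holds the original deflated data)
├── active VDI (remapped as the VM's current disk; new writes land here)
└── snapshot VDI (small; mostly just references the parent)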
-
By the way, you can spot the difference with a thin provisioned SR, like NFS in this case:
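You can see it directly on the NFS mount in dom0 (a minimal sketch; NFS SRs mount under /var/run/sr-mount/<SR UUID>, and the VDI filename is a placeholder). The .vhd file only occupies roughly what the guest has actually written, while vhd-util reports the much larger virtual size:

cd /var/run/sr-mount/<sr-uuid>
ls -lh <vdi-uuid>.vhd                  # file size: only the data actually written
vhd-util query -n <vdi-uuid>.vhd -v   # virtual size (in MB) as seen by the guest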
Far better...
-
Interesting. What happens when you take a second snap? When it is deleted, are there two or three files then?
-
@Dashrender Okay, let's start again with a clean example: one VM in an SR, with one 4GB VDI:
# lvscan
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/MGT' [4.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/VHD-770ceeac-e97e-4e05-b9c5-892b97b9d16e' [4.02 GiB] inherit
After first snapshot:
# lvscan
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/MGT' [4.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/VHD-38e2156f-da74-4edb-ac83-56fda54cfe55' [1.75 GiB] inherit
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/VHD-770ceeac-e97e-4e05-b9c5-892b97b9d16e' [4.02 GiB] inherit
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/VHD-f18856a5-039b-4d84-bf6c-a259d0f49a9e' [8.00 MiB] inherit
After second snapshot:
# lvscan
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/MGT' [4.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/VHD-38e2156f-da74-4edb-ac83-56fda54cfe55' [1.75 GiB] inherit
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/VHD-f18856a5-039b-4d84-bf6c-a259d0f49a9e' [8.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/VHD-770ceeac-e97e-4e05-b9c5-892b97b9d16e' [4.02 GiB] inherit
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/VHD-68408f33-5a69-4b3b-afdd-a2cfabcad9ba' [8.00 MiB] inherit
As you can see, we got a second 8 MiB logical volume, nothing more (the base parent and the active VDI don't change).
Let's remove the latest snapshot:
# lvscan
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/MGT' [4.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/VHD-38e2156f-da74-4edb-ac83-56fda54cfe55' [1.75 GiB] inherit
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/VHD-f18856a5-039b-4d84-bf6c-a259d0f49a9e' [8.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/VHD-770ceeac-e97e-4e05-b9c5-892b97b9d16e' [4.02 GiB] inherit
It removes the previously created volume, as expected. Now, let's remove the initial snapshot. For a few seconds, we'll have this:
# lvscan
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/MGT' [4.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/VHD-38e2156f-da74-4edb-ac83-56fda54cfe55' [1.75 GiB] inherit
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/VHD-770ceeac-e97e-4e05-b9c5-892b97b9d16e' [8.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/leaf_770ceeac-e97e-4e05-b9c5-892b97b9d16e_38e2156f-da74-4edb-ac83-56fda54cfe55' [4.00 MiB] inherit
But it will be automatically "garbage collected" when the system sees that the chain doesn't have any snapshot left in it (after a few seconds in this case):
# lvscan
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/MGT' [4.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-e27c48de-509f-3fec-d627-7f348062ab1a/VHD-770ceeac-e97e-4e05-b9c5-892b97b9d16e' [4.02 GiB] inherit
We are back to the initial situation.
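For reference, the walkthrough above boils down to a handful of xe commands (the VM name and snapshot labels are illustrative; run lvscan between each step to watch the volumes appear and disappear):

# take the two snapshots
xe vm-snapshot vm=test-vm new-name-label=snap1
xe vm-snapshot vm=test-vm new-name-label=snap2

# find a snapshot's UUID, then remove it along with its disks
xe snapshot-list name-label=snap2 params=uuid
xe snapshot-uninstall uuid=<snapshot-uuid> force=true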