Vultr, Block Storage CentOS



  • I have a Vultr VPS running a custom ISO (FreePBX). The disk is getting full due to call recordings. We don't want to move the
    recordings off the box yet, because we want them linked to the CDRs for another couple of weeks so they can easily be played through the web interface. So, the way I see it, I have 3 options here.

    1. Resize the VPS to a bigger instance and expand the disks (not preferred, because I don't need more CPU or RAM, just storage)
    2. Add a Vultr block device to the VPS and expand the disk to use the block device, thus expanding my overall disk space.
    3. Add a Vultr block device to the VPS and point the directory for call recordings to this new block device.

    Before getting into specifics about how to accomplish each task, I'd like opinions on what the best approach would be here, and the pros/cons of each method.

    I am using LVM.



  • Because you're using LVM, options 2 and 3 are both doable. To me, 2 is quicker and easier, but I've used LVM long enough that I know how to do most of that off the top of my head. The only thing I normally have to look up is how to expand the file system, because the process tends to be a little different depending on the file system. Real quick here:

    pvcreate /dev/device                                          # initialize the new device as a physical volume
    vgextend volume_group_name /dev/device                        # add it to the existing volume group
    lvextend -l +95%FREE volume_group_name/logical_volume_name    # grow the logical volume into the new space
    xfs_growfs /dev/volume_group_name/logical_volume_name         # grow the file system (xfs in this example)
    

    done. Shouldn't take but 5 minutes, if that.

    Edit: I normally go with either 90% or 95% of the available space in the volume group to keep space available for a local snapshot.
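
    For illustration, using that reserved space for a snapshot looks something like this (same placeholder names as above; the 2G size is just an example and has to fit in the VG's free space):

    lvcreate -s -n lv_snap -L 2G volume_group_name/logical_volume_name
    # ... back up or inspect the snapshot, then drop it:
    lvremove volume_group_name/lv_snap
    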



  • @travisdh1 Option 2 is the way I was leaning as well, since it just seems cleaner.

    I'll post my disk details in the next post.



  • It's worth noting that I have already upgraded this server and expanded the disk once before, just not with block storage.

    Relevant disk stats:

    [root@host ~]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/vg_data-lv_root
                           55G   32G   20G  62% /
    tmpfs                 1.9G     0  1.9G   0% /dev/shm
    /dev/vda1             477M   32M  420M   8% /boot
    
    [root@host ~]# vgs
      VG      #PV #LV #SN Attr   VSize  VFree
      vg_data   2   2   0 wz--n- 59.50g    0
    
    [root@host ~]# lsblk
    NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    vda                        252:0    0   60G  0 disk
    ├─vda1                     252:1    0  500M  0 part /boot
    ├─vda2                     252:2    0 39.5G  0 part
    │ ├─vg_data-lv_root (dm-0) 253:0    0 55.6G  0 lvm  /
    │ └─vg_data-lv_swap (dm-1) 253:1    0    4G  0 lvm  [SWAP]
    └─vda3                     252:3    0   20G  0 part
      └─vg_data-lv_root (dm-0) 253:0    0 55.6G  0 lvm  /
    sr0                         11:0    1 1024M  0 rom
    
    
    [root@host ~]# pvdisplay
      --- Physical volume ---
      PV Name               /dev/vda2
      VG Name               vg_data
      PV Size               39.51 GiB / not usable 3.00 MiB
      Allocatable           yes (but full)
      PE Size               4.00 MiB
      Total PE              10114
      Free PE               0
      Allocated PE          10114
      PV UUID               ZPKVBC-lzNg-UiHV-UaCL-V2Ep-1FIo-rxYHsC
    
      --- Physical volume ---
      PV Name               /dev/vda3
      VG Name               vg_data
      PV Size               20.00 GiB / not usable 3.77 MiB
      Allocatable           yes (but full)
      PE Size               4.00 MiB
      Total PE              5119
      Free PE               0
      Allocated PE          5119
      PV UUID               Inu87j-N6Vb-lQ6e-CsUj-3JZi-CQBD-RREPVT
    
      [root@host ~]# vgdisplay
      --- Volume group ---
      VG Name               vg_data
      System ID
      Format                lvm2
      Metadata Areas        2
      Metadata Sequence No  5
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                2
      Open LV               2
      Max PV                0
      Cur PV                2
      Act PV                2
      VG Size               59.50 GiB
      PE Size               4.00 MiB
      Total PE              15233
      Alloc PE / Size       15233 / 59.50 GiB
      Free  PE / Size       0 / 0
      VG UUID               WETe1B-n9cD-DwP9-T9k1-91fZ-wm8Z-5KTVuG
    
    
      [root@host ~]# lvdisplay
      --- Logical volume ---
      LV Path                /dev/vg_data/lv_root
      LV Name                lv_root
      VG Name                vg_data
      LV UUID                6aLAzL-8vbD-zjlF-yX10-b516-3gkv-6aP9A9
      LV Write Access        read/write
      LV Creation host, time xxx.xxx.xxx.xxx.vultr.com, 2017-05-08 18:05:33 -0400
      LV Status              available
      # open                 1
      LV Size                55.57 GiB
      Current LE             14225
      Segments               2
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:0
    
      --- Logical volume ---
      LV Path                /dev/vg_data/lv_swap
      LV Name                lv_swap
      VG Name                vg_data
      LV UUID                IxfAkn-FtI3-0o7Y-TcM9-DTeF-7NV2-sNLfde
      LV Write Access        read/write
      LV Creation host, time xxx.xxx.xxx.xxx.vultr.com, 2017-05-08 18:05:35 -0400
      LV Status              available
      # open                 1
      LV Size                3.94 GiB
      Current LE             1008
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:1
    


  • @fuznutz04 How much recording do you expect to be doing in the next 2 weeks? 20G is a LOT of call recordings. I'd keep an eye on it, and add block storage if you need it. Like I said, shouldn't take more than 5 minutes.

    Do you know what the file system is? I think the default for CentOS is xfs, but I'd rather be sure: cat /etc/fstab
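
    Alternatively, either of these shows the file system type directly:

    df -T /     # adds a Type column for the mounted file system
    lsblk -f    # lists file systems for all block devices
    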



  • @travisdh1 said in Vultr, Block Storage CentOS:

    cat /etc/fstab

    [root@host ~]# cat /etc/fstab
    
    #
    # /etc/fstab
    # Created by anaconda on Mon May  8 18:05:59 2017
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    /dev/mapper/vg_data-lv_root /                       ext4    defaults        1 1
    UUID=2d9dfe5e-db4c-4936-b234-3dbdf62a90e1 /boot                   ext4    defaults        1 2
    /dev/mapper/vg_data-lv_swap swap                    swap    defaults        0 0
    tmpfs                   /dev/shm                tmpfs   defaults        0 0
    devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
    sysfs                   /sys                    sysfs   defaults        0 0
    proc                    /proc                   proc    defaults        0 0
    


  • @travisdh1 This system has high usage. I fully expect it to fill up rapidly.



  • @fuznutz04 said in Vultr, Block Storage CentOS:

    @travisdh1 This system has high usage. I fully expect it to fill up rapidly.

    Ah, ok. Yeah, block is probably your most efficient use of resources for this then.

    pvcreate /dev/sdb                # I'm assuming the new block storage will show up as an sd device
    vgextend vg_data /dev/sdb        # volume group first, then the new PV
    lvextend -l +95%FREE /dev/vg_data/lv_root
    resize2fs /dev/vg_data/lv_root   # root is ext4 here, so resize2fs rather than xfs_growfs
    

    Shouldn't take much time at all. As always, we'd want to have a backup available before touching storage things.
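
    Afterwards, a quick sanity check (these are all read-only reporting commands):

    lsblk             # the new device should be attached and part of the LV
    pvs; vgs; lvs     # the new PV should appear, and vg_data/lv_root should be bigger
    df -h /           # the mounted file system should now show the extra space
    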



  • @travisdh1 said in Vultr, Block Storage CentOS:

    Ah, ok. Yeah, block is probably your most efficient use of resources for this then.

    pvcreate /dev/sdb                # I'm assuming the new block storage will show up as an sd device
    vgextend vg_data /dev/sdb        # volume group first, then the new PV
    lvextend -l +95%FREE /dev/vg_data/lv_root
    resize2fs /dev/vg_data/lv_root   # root is ext4 here, so resize2fs rather than xfs_growfs
    

    Shouldn't take much time at all. As always, we'd want to have a backup available before touching storage things.

    I'm going to give this a test on a test system just because I have never dealt with Vultr block storage. Then if all is well, I'll give this a go.

    I'm assuming that if there is an issue with block storage (like there was earlier this week with the NJ data center), it would cause the VPS to crash. Hopefully that isn't the case. 🙂



  • @travisdh1 Thank you for this BTW. I'll probably do this this weekend, but wanted to get prepared and wrap my head around this first.



  • @fuznutz04 said in Vultr, Block Storage CentOS:

    @travisdh1 Thank you for this BTW. I'll probably do this this weekend, but wanted to get prepared and wrap my head around this first.

    Glad to help. LVM is one of my big knowledge wheelhouses if you can't tell.



  • @travisdh1 said in Vultr, Block Storage CentOS:

    Glad to help. LVM is one of my big knowledge wheelhouses if you can't tell.

    I noticed. Before I met you this year, I watched your LVM presentation from MangoCon 2016.



  • @travisdh1 said in Vultr, Block Storage CentOS:

    Because you're using LVM, options 2 and 3 are both doable. To me, 2 is quicker and easier, but I've used LVM long enough that I know how to do most of that off the top of my head. The only thing I normally have to look up is how to expand the file system, because the process tends to be a little different depending on the file system. Real quick here:

    pvcreate /dev/device                                          # initialize the new device as a physical volume
    vgextend volume_group_name /dev/device                        # add it to the existing volume group
    lvextend -l +95%FREE volume_group_name/logical_volume_name    # grow the logical volume into the new space
    xfs_growfs /dev/volume_group_name/logical_volume_name         # grow the file system (xfs in this example)
    

    done. Shouldn't take but 5 minutes, if that.

    Edit: I normally go with either 90% or 95% of the available space in the volume group to keep space available for a local snapshot.

    If you pass -r to lvextend it will auto resize the filesystem. That way you don't need to remember the differences between them.



  • @stacksofplates said in Vultr, Block Storage CentOS:

    If you pass -r to lvextend it will auto resize the filesystem. That way you don't need to remember the differences between them.

    So you are saying instead of this:

    lvextend -l +95%FREE volume_group_name/logical_volume_name
    

    Use this?

    lvextend -r volume_group_name/logical_volume_name
    


  • @fuznutz04 said in Vultr, Block Storage CentOS:

    So you are saying instead of this:

    lvextend -l +95%FREE volume_group_name/logical_volume_name
    

    Use this?

    lvextend -r volume_group_name/logical_volume_name
    

    More like

    lvextend -l +95%FREE -r volume_group_name/logical_volume_name
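
    With this box's actual names, that would be:

    lvextend -l +95%FREE -r /dev/vg_data/lv_root
    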


  • @travisdh1 Thanks



  • @fuznutz04 said in Vultr, Block Storage CentOS:

    @travisdh1 Thanks

    If you want to see just how powerful LVM has become through the years, you should run lvm and look at the help screens sometime. That's how I dove into it initially, at least.



  • @travisdh1 Run LVM? You mean just look at the man pages? Or are you referring to something else?



  • @fuznutz04 said in Vultr, Block Storage CentOS:

    @travisdh1 Run LVM? You mean just look at the man pages? Or are you referring to something else?

    It's got a whole environment just for itself. Literally just lvm at a command line.
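
    For example (the exact command set varies by LVM version):

    [root@host ~]# lvm
    lvm> help    # lists every available LVM command
    lvm> lvs     # the usual reporting commands work inside the shell
    lvm> quit
    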



  • @travisdh1 Ah nice. Will do. I want to learn a lot more about it, so I'll take your advice.



  • @fuznutz04 said in Vultr, Block Storage CentOS:

    @travisdh1 Ah nice. Will do. I want to learn a lot more about it, so I'll take your advice.

    The number of options is almost staggering. What's more, ZFS, Btrfs, and a number of other file systems have just as many options and choices to make.



  • @travisdh1 said in Vultr, Block Storage CentOS:

    The number of options is almost staggering. What's more, ZFS, Btrfs, and a number of other file systems have just as many options and choices to make.

    Yeah, one at a time. I've worked with ZFS in the past, but it was when I was using FreeNAS back in the day.



  • @fuznutz04 said in Vultr, Block Storage CentOS:

    Yeah, one at a time. I've worked with ZFS in the past, but it was when I was using FreeNAS back in the day.

    At least it's all the same stuff, just called something different for the most part.



  • @travisdh1 Right. Seeing as how the world is built on storage systems, I need to dig in a little.

    That's all for now. Good night NotJengaMaster.



  • @fuznutz04 said in Vultr, Block Storage CentOS:

    @travisdh1 Right. Seeing as how the world is built on storage systems, I need to dig in a little.

    That's all for now. Good night NotJengaMaster.

    whispers it's the glasses



  • @travisdh1 So on second thought, I'm thinking it might be a better approach to redirect the call recordings to the block device directly, without extending the LVM volume onto it. So it would be like this:

    1. Attach the block device and create a partition and file system.
    2. Mount the new device at a new directory (/callrecordings).
    3. In FreePBX, point the call recordings to this new directory.

    This way, the VPS disk is still completely separate from the block device. In my head, this just seems cleaner, and has less potential for errors if the block device is ever unavailable.

    Thoughts?



  • @fuznutz04 said in Vultr, Block Storage CentOS:

    @travisdh1 So on second thought, I'm thinking it might be a better approach to redirect the call recordings to the block device directly, without extending the LVM volume onto it. So it would be like this:

    1. Attach the block device and create a partition and file system.
    2. Mount the new device at a new directory (/callrecordings).
    3. In FreePBX, point the call recordings to this new directory.

    This way, the VPS disk is still completely separate from the block device. In my head, this just seems cleaner, and has less potential for errors if the block device is ever unavailable.

    Thoughts?

    Yes, that makes way more sense.
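
    A minimal sketch of the first two steps, assuming the block device shows up as /dev/vdb (it could be /dev/sdb depending on the driver) and using ext4 to match the rest of the system:

    # partition the new block device and put an ext4 file system on it
    parted -s /dev/vdb mklabel gpt mkpart primary ext4 0% 100%
    mkfs.ext4 /dev/vdb1
    # mount it where FreePBX will be pointed for recordings
    mkdir /callrecordings
    mount /dev/vdb1 /callrecordings
    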



  • @scottalanmiller said in Vultr, Block Storage CentOS:

    Yes, that makes way more sense.

    The only thing that made me think of that was that about 2 weeks ago, Vultr NJ had some issues with block storage. If they have an issue again, at least I could still boot the VM. (Although I would have to remove the block device from the fstab first; then it should boot fine, I suppose.) (crosses fingers)
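
    Alternatively, the nofail mount option should cover that case; with it in the fstab entry, the VM can still boot even if the block device is missing. A sketch (the UUID is a placeholder):

    # nofail lets the system boot even if this device is absent
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /callrecordings ext4 defaults,nofail 0 2
    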

