
    Linux: Working with LVM

    • scottalanmiller

      Working with Linux LVM mostly comes down to manipulating three things: Physical Volumes (PV), Volume Groups (VG) and Logical Volumes (LV). Each must exist in that order for anything to work, meaning we can't make an LV until we have a VG, and we can't make a VG until we have at least one PV.
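      As a quick preview of the shape of that workflow, the three layers map onto three "create" commands run in the same order. The device and names here are only placeholders, not the ones we will use below:

      # pvcreate /dev/sdX                               # hypothetical device: mark a block device as a PV
      # vgcreate vg_example /dev/sdX                    # pool one or more PVs into a VG
      # lvcreate -n lv_example -l 100%FREE vg_example   # carve an LV out of the VG's free space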

      The first thing that we need to do is to place the block device(s) that we are going to be working with under LVM management by designating them as PVs. Traditionally we use an entire block device as presented to the operating system for simplicity. You can, of course, create partition(s) on top of the device and only add one or more partitions as PVs, but generally this is avoided.
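      If you did choose the partition route, the command is exactly the same, just pointed at a partition rather than the whole disk. A quick sketch, assuming a partition such as /dev/vdb1 had already been created (we do not do this in the example below):

      # pvcreate /dev/vdb1   # hypothetical partition used as the PV instead of the whole disk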

      First, we need to determine the block device that we are going to work with. In this example we have VIRTIO devices as our block devices, which show up as /dev/vd* devices.

      # ls /dev | grep vd
      vda
      vda1
      vda2
      vdb
      

      The /dev/vda block device was partitioned by the installation system and has two partitions: /dev/vda1 and /dev/vda2. What we are interested in is the unused second device, /dev/vdb. This is a device that we added just for this example and is commonly what we would find in the wild. To bring a block device under LVM, we use the pvcreate command. This is very straightforward: just pvcreate and the name of the device.

      # pvcreate /dev/vdb
        Physical volume "/dev/vdb" successfully created
      

      There are two commands for each layer of the LVM system that show us the status of that layer. For the PV layer, these commands are pvs and pvdisplay. The short, three-letter names give quick summaries and the longer "display" forms give us a lot of detail.

      First let's look at the summary.

      # pvs
        PV         VG     Fmt  Attr PSize  PFree 
        /dev/vda2  centos lvm2 a--  18.13g 40.00m
        /dev/vdb          lvm2 ---  37.25g 37.25g
      

      From this we can see that /dev/vda2, which we know was a partition of /dev/vda, is added as a PV under LVM. This is the standard default build configuration for CentOS and RHEL, so you will see it much of the time; our example here is CentOS 7. What is of more interest to us is that /dev/vdb has been successfully added as a PV. The pvs command shows us a quick list of the PVs: whether each is actively part of a Volume Group (VG), which LVM format is used (all will be LVM2 currently), any special attributes, the physical size of the device (PSize) and how much of it remains free (PFree).
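      When the summary is not enough, the longer pvdisplay form gives the full detail for a PV, and vgdisplay and lvdisplay do the same at the VG and LV layers. The output is verbose and varies from system to system, so it is not reproduced here:

      # pvdisplay /dev/vdb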

      Now that we have /dev/vdb as a PV, we need to add it to a VG in order to use it. For this we use the vgcreate command. With this command we will make our first VG and add our PV to it at the same time. We can name our VG whatever we want, but calling it something useful is pretty important. A common reason for creating a VG is to hold data files, so calling it "data" is handy. To make it very obvious that we are dealing with a VG, it is common practice to preface the name with "vg_". So in our example we will call our first VG "vg_data", but you could just as easily have named it "saturn", "snoopy", "vg1" or "myvg".

      # vgcreate vg_data /dev/vdb
        Volume group "vg_data" successfully created
      

      Now, as before, we can look at the summary of our VG with the vgs command.

      # vgs
        VG      #PV #LV #SN Attr   VSize  VFree 
        centos    1   2   0 wz--n- 18.13g 40.00m
        vg_data   1   0   0 wz--n- 37.25g 37.25g
      

      We can see, again, that the CentOS default install has already created a VG called "centos" that is fully used (all except for 40MB, which is really just spillover overhead). And we can see that our new "vg_data" VG is now there. The first column shows us the VG name; the second, #PV, tells us how many PVs are included in the VG. In both cases here, there is only one PV per VG. The third column shows us how many Logical Volumes (LVs) currently exist on top of each VG. The existing "centos" VG has two LVs created on top of it and, of course, our new "vg_data" VG has none so far.

      Now that we have a working VG, we are free to create our LV. We can make just one LV on top of each VG or we can make many. Think of LVs much like partitions in the "old days", but more flexible. In this example, we will make just one LV that uses all of the available free space on the existing VG. There are many ways to specify how large to make an LV; we will use the simplest syntax in this example, "-l 100%FREE", which simply uses all available space. We also specify the name of the LV like this: "-n lv_data".
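      As an aside, if we only wanted to use part of the VG rather than all of the free space, the -L flag takes an explicit size instead of a percentage. The 10G here is purely illustrative and not something we run in this example:

      # lvcreate -L 10G -n lv_data vg_data

      In our case, though, we want everything: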

      # lvcreate -l 100%FREE -n lv_data vg_data
        Logical volume "lv_data" created.
      

      Quick and easy. We now have our first LV. But before we do anything with it, we should take a quick look using, you guessed it, lvs to get a summary of the LVs on our system now.

      # lvs
        LV      VG      Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
        root    centos  -wi-ao---- 16.23g                                                    
        swap    centos  -wi-ao----  1.86g                                                    
        lv_data vg_data -wi-a----- 37.25g 
      

      As we can see, the "centos" VG from the system installation has two LVs created automatically, "root" and "swap". Our new LV "lv_data" is visible there, too. So we know that our new LV is now available for us to use as a normal block device.

      So now we have to find where this new block device exists on the system so that we can use it in standard commands. In CentOS and RHEL, this will show up by default in two locations. The first is under /dev/vgname/lvname and the second is under /dev/mapper/vgname-lvname. For the moment we will use the former, but both work just fine.

      # ls /dev/vg_data/
      lv_data
      # ls /dev/mapper/
      centos-root  centos-swap  control  vg_data-lv_data
      

      Of course, what we have currently is simply a block device, nothing special. To the OS we have just added a new hard drive that is ready to be used. So, just like any block device that we want to use, we need to start by applying a filesystem to it. We've successfully configured our first LV and are now just making a filesystem as usual.

      Continuing with our example and assuming that we will use XFS we would do this:

      # mkfs.xfs /dev/vg_data/lv_data 
      meta-data=/dev/vg_data/lv_data   isize=256    agcount=4, agsize=2441216 blks
               =                       sectsz=512   attr=2, projid32bit=1
               =                       crc=0        finobt=0
      data     =                       bsize=4096   blocks=9764864, imaxpct=25
               =                       sunit=0      swidth=0 blks
      naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
      log      =internal log           bsize=4096   blocks=4768, version=2
               =                       sectsz=512   sunit=0 blks, lazy-count=1
      realtime =none                   extsz=4096   blocks=0, rtextents=0
      
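      Nothing about LVM ties us to XFS, by the way. Had we preferred ext4, the equivalent step would simply have been (not run in this example):

      # mkfs.ext4 /dev/vg_data/lv_data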

      Now we are ready to mount our new LV. Since we called it "lv_data" it probably makes sense to mount it under /data, right? So here we go:

      # mkdir /data
      # mount /dev/vg_data/lv_data /data
      # df -x tmpfs -x devtmpfs
      Filesystem                  1K-blocks    Used Available Use% Mounted on
      /dev/mapper/centos-root      17008640 1193816  15814824   8% /
      /dev/vda1                      508588  151840    356748  30% /boot
      /dev/mapper/vg_data-lv_data  39040384   32928  39007456   1% /data
      

      As you can see, the system mounts using the "/dev/mapper" syntax under the hood automatically, regardless of which format we used ourselves. Both paths are actually just symlinks anyway, so this is just fine. Six of one, half dozen of the other.
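      If you want to see those symlinks resolve for yourself, readlink will show what both friendly names point at; the exact /dev/dm-N target will vary from system to system:

      # readlink -f /dev/vg_data/lv_data
      # readlink -f /dev/mapper/vg_data-lv_data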

      Part of a series on Linux Systems Administration by Scott Alan Miller

      • scottalanmiller

        Fixed a typo. Thanks @BRRABill for checking.

        • BRRABill

          And by checking, he means learning and just happening to find it.
