    Adding New Drives to an HP Proliant SmartArray with LVM

    Tags: linux, rhel, centos, storage, lvm, smartarray, proliant, hpacucli, smartarray p410, proliant dl380 g6, hpe
    scottalanmiller
      In this real world example post I am going to deal with the everyday administration task of adding four new hard drives to an existing, and running, production server: configuring them into an array, adding them to LVM management, creating a filesystem and mounting that filesystem. The server in question is an HP Proliant DL380 G6 with a SmartArray P410i RAID controller, but these directions should apply to any SmartArray controller. We will be using HP’s hpacucli utility to manage the array from the command line. We will be doing this from Red Hat Enterprise Linux 5.6 (Tikanga) but, again, the commands are very generic and will apply across many Linux versions. The SmartArray management utility, hpacucli, is not a standard component of any Linux distro, so if you are missing it you will need to acquire it from HP in order to manage your hardware.
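
      If hpacucli is missing, HP supplies it as an RPM package. As a minimal sketch, assuming the RPM has already been downloaded from HP (the exact filename varies by version, so a wildcard is shown here):

      [root@snoopy ~]# rpm -ivh hpacucli-*.rpm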

      The hpacucli utility is generally used interactively. Commands entered at the “=>” prompt are within the hpacucli utility rather than in our normal shell. Once in the utility we will run “ctrl all show config” to get an overview of the status of our array. In our example we have a server with eight available drive bays. Four were previously populated and configured (this is an existing production server) and the remaining four have just had drives inserted into their hot swap bays live, so we will need to verify here that their details show up correctly.
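
      Getting to the “=>” prompt is simply a matter of running hpacucli from the shell with no arguments; the same commands can also be passed as arguments for one-off, non-interactive use. A quick sketch (banner and output omitted):

      [root@snoopy ~]# hpacucli
      =>

      [root@snoopy ~]# hpacucli ctrl all show config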

      => ctrl all show config
      
      Smart Array P410i in Slot 0 (Embedded) (sn: 50123456789ABCDE)
      
        array A (SAS, Unused Space: 0 MB)
      
          logicaldrive 1 (136.7 GB, RAID 1+0, OK)
      
          physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 72 GB, OK)
          physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 72 GB, OK)
          physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 72 GB, OK)
          physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 72 GB, OK)
      
        unassigned
      
          physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 146 GB, OK)
          physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 146 GB, OK)
          physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS, 146 GB, OK)
          physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS, 146 GB, OK)
      
        SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 50123456789ABCED)
      

      As you can see here we have, as expected, four configured physical drives in a RAID 10 array and four unconfigured drives which we will now assign to a second RAID 10 array. Of course we could also make two RAID 1 arrays, one RAID 5 array or one RAID 6 array, but we want RAID 10 for performance and reliability. We assign the four unassigned drives to the new array with the hpacucli utility as well.

      => ctrl slot=0 create type=ld drives=2I:1:5,2I:1:6,2I:1:7,2I:1:8 raid=1+0
      

      This command should silently create the array. We will rerun the config command to confirm that the new array was created as expected.

      => ctrl all show config
      
      Smart Array P410i in Slot 0 (Embedded)    (sn: 50123456789ABCDE)
      
        array A (SAS, Unused Space: 0 MB)
      
           logicaldrive 1 (136.7 GB, RAID 1+0, OK)
      
           physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 72 GB, OK)
           physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 72 GB, OK)
           physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 72 GB, OK)
           physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 72 GB, OK)
      
        array B (SAS, Unused Space: 0 MB)
      
           logicaldrive 2 (273.4 GB, RAID 1+0, OK)
      
           physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 146 GB, OK)
           physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 146 GB, OK)
           physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS, 146 GB, OK)
           physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS, 146 GB, OK)
      
        SEP (Vendor ID PMCSIERA, Model  SRC 8x6G) 250 (WWID: 50123456789ABCED)
      

      Success. We can see that our four unassigned physical drives have turned into a single RAID 10 logical drive. Now we are done in the hpacucli utility.
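
      To leave the interactive utility we simply type exit at the prompt (a trivial step, shown here for completeness):

      => exit
      [root@snoopy ~]#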

      Our next step is to add the newly created RAID array as a physical volume under LVM.

      [root@snoopy ~]# pvcreate /dev/cciss/c0d1
        Physical volume "/dev/cciss/c0d1" successfully created
      

      Now that we have a physical volume we can create our volume group. In my example I already have a first volume group named vg0, so my new one here will be named vg1.

      [root@snoopy ~]# vgcreate vg1 /dev/cciss/c0d1
        Volume group "vg1" successfully created
      

      Following the creation of our volume group we can now carve it up into the logical volumes that we want to use. In my example I just want one logical volume that will include all of the space within vg1. This is where you might commonly make several smaller logical volumes depending upon your goals, as sketched just after the example below.

      [root@snoopy ~]# lvcreate -l 100%FREE -n lv_data vg1
        Logical volume "lv_data" created
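
      For reference, had we wanted to split vg1 into several smaller volumes instead, it would just be a matter of multiple lvcreate calls. A sketch with hypothetical volume names lv_app and lv_logs (not used in this example):

      [root@snoopy ~]# lvcreate -L 100G -n lv_app vg1
      [root@snoopy ~]# lvcreate -L 50G -n lv_logs vg1
      [root@snoopy ~]# lvcreate -l 100%FREE -n lv_data vg1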
      

      Before we continue we will examine the status of the volume groups and logical volumes just to ensure that everything is as we expect it to be.

      [root@snoopy ~]# vgdisplay
        --- Volume group ---
        VG Name               vg1
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  2
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                1
        Open LV               0
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               273.40 GB
        PE Size               4.00 MB
        Total PE              69991
        Alloc PE / Size       69991 / 273.40 GB
        Free  PE / Size       0 / 0
        VG UUID               Edt4KX-Cfdv-rdpj-KtgJ-69o7-Sdom-uiWeo5
      
        --- Volume group ---
        VG Name               vg0
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  3
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                2
        Open LV               2
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               136.56 GB
        PE Size               32.00 MB
        Total PE              4370
        Alloc PE / Size       4370 / 136.56 GB
        Free  PE / Size       0 / 0
        VG UUID               p2asMu-lkgf-3wYd-GGXX-4Cjc-vedk-z090AD
      
      [root@snoopy ~]# lvdisplay
        --- Logical volume ---
        LV Name                /dev/vg1/lv_data
        VG Name                vg1
        LV UUID                D7flkf-KYzz-EdUA-B5FL-rBKo-Hv0v-xYzkZl
        LV Write Access        read/write
        LV Status              available
        # open                 0
        LV Size                273.40 GB
        Current LE             69991
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           253:2
      
        --- Logical volume ---
        LV Name                /dev/vg0/lv_root
        VG Name                vg0
        LV UUID                TjeUwx-kt0G-CBnl-Dp7b-VBWZ-oaXv-2VjB63
        LV Write Access        read/write
        LV Status              available
        # open                 1
        LV Size                128.56 GB
        Current LE             4114
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           253:0
      
        --- Logical volume ---
        LV Name                /dev/vg0/lv_swap
        VG Name                vg0
        LV UUID                31GU2w-Dl4s-EfXx-OQLk-XeAe-CtPW-2DDba5
        LV Write Access        read/write
        LV Status              available
        # open                 1
        LV Size                8.00 GB
        Current LE             256
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           253:1
      

      Everything looks good. Our new logical volume of /dev/vg1/lv_data is the expected size and ready to be used. Now we simply need to create a filesystem, create a mount point and mount it. As this is Red Hat 5 that I am using in this example, we will use ext3 as our example filesystem. I would likely use ext4 on RHEL 6. BtrFS would be a logical option on a SUSE or Oracle system today and will only continue to become more and more popular.

      [root@snoopy ~]# mkfs.ext3 /dev/vg1/lv_data
      mke2fs 1.39 (29-May-2006)
      Filesystem label= 
      OS type: Linux 
      Block size=4096 (log=2) 
      Fragment size=4096 (log=2)
      35848192 inodes, 71670784 blocks
      3583539 blocks (5.00%) reserved for the super user 
      First data block=0
      Maximum filesystem blocks=4294967296
      2188 block groups
      32768 blocks per group, 32768 fragments per group
      16384 inodes per group
      Superblock backups stored on blocks:
       32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
       4096000, 7962624, 11239424, 20480000, 23887872, 71663616
      
      Writing inode tables: done
      Creating journal (32768 blocks): done
      Writing superblocks and filesystem accounting information: done
      
      This filesystem will be automatically checked every 32 mounts or
      180 days, whichever comes first.  Use tune2fs -c or -i to override.
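
      For comparison, on a RHEL 6 box the only change at this step would be the mkfs command itself. A sketch only, not run on this RHEL 5 system:

      [root@snoopy ~]# mkfs.ext4 /dev/vg1/lv_data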
      

      Now to create our mount point and mount our new filesystem. We’ll create the somewhat obvious mountpoint named /data.

      [root@snoopy ~]# mkdir -p /data
      [root@snoopy ~]# mount /dev/vg1/lv_data /data
      

      Now that we have mounted successfully we can use the df command to test that everything is working as expected. No surprises here.

      [root@snoopy ~]# cd /data
      [root@snoopy data]# df -h .
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/mapper/vg1-lv_data
                            270G  192M  256G   1% /data
      

      As you can see, our filesystem is looking good and is already usable. At this point everything works, but when we reboot our filesystem will not be mounted again automatically. Not ideal for normal business usage. So we just need to append a line to our /etc/fstab file to mount this new filesystem automatically at boot. This is the entry that I added:

      /dev/vg1/lv_data   /data   ext3   defaults   0 0
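
      To be sure that the fstab entry is correct before we depend on it at the next reboot, we can step out of the directory, unmount the filesystem, mount it again using only the fstab entry and check df one more time:

      [root@snoopy data]# cd /
      [root@snoopy /]# umount /data
      [root@snoopy /]# mount /data
      [root@snoopy /]# df -h /data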
      

      Originally posted on my Linux blog in 2012 at: http://web.archive.org/web/20140823021548/http://www.scottalanmiller.com/linux/2012/04/22/adding-new-drives-to-an-hp-proliant-smartarray-with-lvm/
