
    Building Out XenServer 6.5 with USB Boot and Software RAID 10

    IT Discussion
    xen virtualization xenserver xenserver 6.5 how to
    • scottalanmiller

      Finding almost no concise guide to doing this online, and trying to help someone through it in the simplest manner possible, I thought it would be good to document the process. We typically don't see XenServer with software RAID, and the guides that do exist assume that you are booting from the software RAID array as well, which adds a lot of unnecessary complication.

      echo "modprobe raid10" > /etc/sysconfig/modules/raid.modules
      modprobe raid10 
      mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
      cat /proc/mdstat
      chmod a+x /etc/sysconfig/modules/raid.modules
      mdadm --examine /dev/sd[b-e]
      mdadm --detail /dev/md0
      mkfs.ext3 /dev/md0
      mkdir /data
      mdadm --detail --scan --verbose >> /etc/mdadm.conf
      xe sr-create type=ext device-config:device=/dev/md0 shared=false host-uuid:!(mdadm --detail /dev/md0 | grep UUID | cut -d' ' -f3) name-label="Local OBR10"
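
      If the SR was created successfully, it should show up in the SR list; a quick check, assuming a standard xe CLI:

      xe sr-list name-label="Local OBR10"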
      
      • DustinB3403

        Here's a guide on installing mdadm on XS 6.5: http://petrbena.blogspot.com/2015/02/how-to-install-mdadm-on-citrix-xen-65.html

        • scottalanmiller

          Like all of the ones that I have found, this assumes installing to the RAID array. In this particular case we don't want to partition at all; we want the MD RAID to be a single array made of the entire devices, so /dev/sda rather than /dev/sda1.
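
          To make the difference concrete, here is a minimal sketch of the two styles (device names are illustrative):

            # what we want here: one array built from the whole devices
            mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
            # what most guides show instead: an array built from partitions
            mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1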

          • DustinB3403

            Just run the installation to the USB drive.

            Configure mdadm from the guide.

            • scottalanmiller @DustinB3403

              @DustinB3403 said:

              Configure mdadm from the guide.

              Didn't I just explain that it's not right for that? I've found guide after guide with way too much complexity and improperly set up software RAID, given that we are installing from USB and not installing to the disks. This is just like all of the other guides: good for what it is, but wrong for what we are doing here.

              • scottalanmiller

                These are the same:

                http://djlab.com/2014/03/xenserver-6-2-with-software-raid/
                https://blog.trendelkamp.net/2015/02/configure-software-raid-xenserver-6-5/

                That's why I made this thread: to specifically fix what all of these guides do differently from what is needed.

                • DustinB3403

                  Well here's a guide : https://major.io/2012/01/16/xenserver-6-storage-repository-on-software-raid/

                  For a storage repo on Fake Raid

                  • scottalanmiller @DustinB3403

                    @DustinB3403 said:

                    Well here's a guide : https://major.io/2012/01/16/xenserver-6-storage-repository-on-software-raid/

                    For a storage repo on Fake Raid

                    Same, booting from the software RAID array. Like I said above, if you ever see a partition number in the device list, it's the wrong style of guide. Way too much complexity.

                    This is exactly the same as the other three.

                    • scottalanmiller

                      A guide that does what we want will make an array out of /dev/sda, /dev/sdb, etc., not out of /dev/sda3, /dev/sdb3, etc.

                      • scottalanmiller @DustinB3403

                        @DustinB3403 said:

                        For a storage repo on Fake Raid

                        Not for FakeRAID, just for software RAID.

                        • DustinB3403

                          Well, after our sidebar, it's decided that we need a guide that simply instructs someone on how to build a RAID array on CentOS.

                          I have two contenders.

                          Option 1

                          Option 2

                          • DustinB3403

                            Actually, here is a guide specifically for configuring RAID 10 (4 disks) with mdadm: http://www.tecmint.com/create-raid-10-in-linux/

                            • DustinB3403

                              Cutting out the individual details and simplifying the steps might be ideal, though...

                              • DustinB3403

                                Taken and simplified from here.

                                 RAID 10 is a combination of RAID 0 (striping) and RAID 1 (mirroring). To set up RAID 10, we need at least 4 disks.

                                 Assume we have some data saved to a logical volume created on a 4-disk RAID 10 array. As an example, when we save the word "apple", it is spread across all 4 disks in the following way.

                                 Creating RAID 10
                                 The RAID 0 layer stripes the data across the two mirrored pairs in round-robin fashion: "a" goes to the first pair, "p" to the second, the next "p" back to the first, and so on, so each pair ends up holding half of the data.
                                 The RAID 1 layer then mirrors every chunk within its pair, so both disks of a pair hold identical copies.
                                 That is how RAID 10 works, combining RAID 0 and RAID 1. With four 20 GB disks we have 80 GB in total, but only 40 GB of usable capacity; half of the total is consumed by mirroring.

                                 Requirements
                                 In RAID 10 we need a minimum of 4 disks: two mirrored pairs that are striped together. As said before, RAID 10 is just a combination of RAID 0 and RAID 1. If we need to extend the RAID group, we must add a minimum of 4 more disks.
                                 My Server Setup
                                 Operating System : CentOS 6.5 Final
                                 IP Address : 192.168.0.229
                                 Hostname : rd10.tecmintlocal.com
                                 Disk 1 [20GB] : /dev/sdb
                                 Disk 2 [20GB] : /dev/sdc
                                 Disk 3 [20GB] : /dev/sdd
                                 Disk 4 [20GB] : /dev/sde
                                 There are two ways to set up RAID 10; the first method, followed here, makes the work a lot easier.

                                Method 1: Setting Up Raid 10

                                 1. First, verify that all four added disks are detected, using the following command.

                                   ls -l /dev | grep sd    
                                  
                                 2. Once the four disks are detected, check whether any RAID already exists on the drives before creating a new one.

                                  mdadm -E /dev/sd[b-e]
                                  mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
                                  

                                 Note: In the output above, no superblock is detected yet, which means no RAID is defined on any of the four drives.
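
                                 If a leftover superblock does turn up (for example, on a disk reused from an older array), it can be cleared before proceeding; a minimal sketch, assuming the disks really are free to wipe:

                                   # WARNING: destroys any existing md metadata on these disks
                                   mdadm --zero-superblock /dev/sd[b-e]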

                                Step 1: Drive Partitioning for RAID

                                1. Now create a new partition on all 4 disks (/dev/sdb, /dev/sdc, /dev/sdd and /dev/sde) using the ‘fdisk’ tool.

                                   fdisk /dev/sdb
                                   fdisk /dev/sdc
                                   fdisk /dev/sdd
                                   fdisk /dev/sde
                                  

                                 Let me show you how to partition one of the disks (/dev/sdb) using fdisk; these steps are the same for all of the other disks.

                                   fdisk /dev/sdb
                                

                                 Use the steps below to create a new partition on the /dev/sdb drive.

                                 1. Press 'n' to create a new partition.
                                 2. Choose 'p' for a primary partition.
                                 3. Choose '1' to make it the first partition.
                                 4. Press 'p' to print the partition just created.
                                 5. Press 't' to change the partition type; press 'L' if you need to list all available types.
                                 6. Choose 'fd' (Linux raid autodetect) as the type.
                                 7. Press 'p' again to print and verify the changes.
                                 8. Press 'w' to write the changes.


                                 Note: Use the same instructions to create partitions on the other disks (sdc, sdd, and sde), or use the scripted alternative below.
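
                                 Rather than repeating the interactive fdisk session for each remaining disk, the same layout can be scripted; a minimal sketch, assuming the classic sfdisk syntax shipped with CentOS 6:

                                   # one full-size partition of type fd (Linux raid autodetect) per disk
                                   for d in /dev/sdc /dev/sdd /dev/sde; do
                                       echo ',,fd' | sfdisk $d
                                   done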
                                 2. After creating all four partitions, examine the drives again for any existing RAID using the following commands.

                                    mdadm -E /dev/sd[b-e]
                                    mdadm -E /dev/sd[b-e]1
                                

                                OR

                                   mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
                                   mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
                                

                                 Note: The outputs above show that no superblock is detected on any of the four newly created partitions, which means we can move forward and create RAID 10 on these drives.

                                Step 2: Creating ‘md’ RAID Device

                                 1. Now it's time to create the 'md' device (i.e. /dev/md0), using the 'mdadm' RAID management tool. Before creating the device, your system must have the 'mdadm' tool installed; if not, install it first.

                                  yum install mdadm           
                                  

                                 Once the 'mdadm' tool is installed, load the raid10 kernel module (the array cannot be created without it; this also makes it load on every boot), then create the 'md' RAID device using the following commands.

                                   # load the raid10 module before creating the array
                                   echo "modprobe raid10" > /etc/sysconfig/modules/raid.modules
                                   chmod a+x /etc/sysconfig/modules/raid.modules
                                   modprobe raid10

                                   mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1

                                 2. Next, verify the newly created RAID device using the 'cat' command.

                                   cat /proc/mdstat

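
                                 The initial sync can take a while; it can be watched live with standard tools:

                                   watch -n 5 cat /proc/mdstat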

                                 3. Next, examine all four drives using the command below. Its output will be long, as it displays the information for all four disks.

                                  mdadm --examine /dev/sd[b-e]1
                                  
                                 4. Next, check the details of the RAID array with the following command.

                                  mdadm --detail /dev/md0
                                  

                                 Note: The results above show that the status of the RAID array is active and re-syncing.

                                Step 3: Creating Filesystem

                                 1. Create an ext4 filesystem on 'md0' and mount it under '/mnt/raid10'. I've used ext4 here, but you can use any filesystem type you want.

                                  mkfs.ext4 /dev/md0
                                  

                                 2. After creating the filesystem, mount it under '/mnt/raid10' and list the contents of the mount point using the 'ls -l' command.

                                     mkdir /mnt/raid10
                                     mount /dev/md0 /mnt/raid10/
                                     ls -l /mnt/raid10/
                                

                                 Next, add some files under the mount point, append some text to one of the files, and check the content.

                                    touch /mnt/raid10/raid10_files.txt
                                    ls -l /mnt/raid10/
                                    echo "raid 10 setup with 4 disks" > /mnt/raid10/raid10_files.txt
                                    cat /mnt/raid10/raid10_files.txt
                                

                                 3. For automatic mounting, open the '/etc/fstab' file and append the entry below; the mount point may differ in your environment.

                                    vim /etc/fstab
                                
                                    /dev/md0                /mnt/raid10              ext4    defaults        0 0
                                

                                 To save and quit, type ':wq!'.
                                

                                 4. Next, verify the '/etc/fstab' entry for errors before restarting the system, using the 'mount -av' command.

                                       mount -av
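
                                 Optionally, the fstab entry can reference the filesystem UUID instead of /dev/md0, which is more robust if the md device number ever changes; a small sketch, assuming blkid is available:

                                   blkid /dev/md0
                                   # then in /etc/fstab, substituting the UUID that blkid printed:
                                   # UUID=<uuid>   /mnt/raid10   ext4   defaults   0 0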
                                


                                Step 4: Save RAID Configuration

                                 1. By default the RAID setup has no config file, so after completing all the steps above we need to save the configuration manually to preserve these settings across reboots.

                                  mdadm --detail --scan --verbose >> /etc/mdadm.conf
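
                                 A quick sanity check that the scan actually landed in the config file (the UUID will differ per array):

                                   cat /etc/mdadm.conf
                                   # expect something like:
                                   # ARRAY /dev/md0 level=raid10 num-devices=4 UUID=...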
                                  


                                That’s it, we have created RAID 10 using this method.

                                • Romo

                                  We need to load the RAID modules into the kernel prior to creating the md RAID device, like this:
                                  echo "modprobe raid10" > /etc/sysconfig/modules/raid.modules
                                  modprobe raid10
                                  chmod a+x /etc/sysconfig/modules/raid.modules
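
                                  A quick way to confirm the module actually loaded:

                                    lsmod | grep raid10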

                                  • DustinB3403 @Romo

                                    @Romo said:

                                    We need to load the raid modules to the kernel prior to creating the md raid Device. Like this:
                                    echo "modprobe raid10" > /etc/sysconfig/modules/raid.modules
                                    modprobe raid10
                                    chmod a+x /etc/sysconfig/modules/raid.modules

                                    So slipping your code in just before 'Create md raid Device', you're saying, should address the issue?

                                    • Romo

                                      We can also use the whole disks, without the need to create partitions on them. I don't really know if this is better, but it is a possibility.

                                      This is a screenshot of the RAID array created using the whole disks:
                                      [Screenshot from 2015-11-04 20:06:04.png]

                                      • Romo @DustinB3403

                                        @DustinB3403 Yes, I couldn't create the md RAID 10 device in my setup without loading the modules into the kernel.

                                        • scottalanmiller @Romo

                                          @Romo correct, that's part of the purpose of the new guide, to use the whole disk rather than to partition it first. Fewer steps, better results.

                                          • Romo

                                            This shows the filesystem added to our RAID array:

                                            [Screenshot: creating_file_system.png]
