NAS alternative on the cheap



  • Hello,

    I have a project and I need to learn more and understand it. Let's say we don't want to purchase any hardware and we want to repurpose old hardware, and they have plenty of it.

    I want to make ONE BIG RAID 10 machine + KVM for light VM hosting, only a couple of VMs.

    I've got the KVM part and understand it, but I usually purchase NAS systems, and this is kind of donation work. One thing I don't understand: let's say I have 4 drives of similar capacity, like 2 TB each. Do I just plug in those 4 drives and install (like I'm prototyping inside VMware Workstation), or do you install the OS on a single separate drive (I heard some do RAID 1 for this) and then create another RAID 10 on the 4 other drives?

    But if I created RAID 10 on the other 4 drives, how can the system use them for KVM? They would only be used as a data mount point, not a system mount point, right?

    And should I do this at installation time, or is it better to install CentOS on one disk and then configure the rest afterwards? What would you do, basically?

    And is it okay to want both RAID 10 + KVM on the same machine? Or is this better done on two systems, one only for RAID 10 and the other for KVM, with the KVM host saving its files on the RAID 10?

    (two screenshots: CentOS 7 64-bit (RAID+KVM) in VMware Workstation)

    Also note that USB drives are available, so the OS could be installed on USB.

    There is no money for anything. This needs to be done in a way that helps them, installing some VMs to improve their workflow without adding any cost; maybe we can purchase a 64 GB USB flash drive, but that is it.

    And if I install the OS on a USB drive, which is something I am leaning towards, should I create a swap partition or not? To me it does not sound practical.



  • @emad-r said in NAS alternative on the cheap:

    Hello,

    I have a project and I need to learn more and understand it. Let's say we don't want to purchase any hardware and we want to repurpose old hardware, and they have plenty of it.

    I want to make ONE BIG RAID 10 machine + KVM for light VM hosting, only a couple of VMs.

    I've got the KVM part and understand it, but I usually purchase NAS systems, and this is kind of donation work. One thing I don't understand: let's say I have 4 drives of similar capacity, like 2 TB each. Do I just plug in those 4 drives and install (like I'm prototyping inside VMware Workstation), or do you install the OS on a single separate drive (I heard some do RAID 1 for this) and then create another RAID 10 on the 4 other drives?

    But if I created RAID 10 on the other 4 drives, how can the system use them for KVM? They would only be used as a data mount point, not a system mount point, right?

    And should I do this at installation time, or is it better to install CentOS on one disk and then configure the rest afterwards? What would you do, basically?

    And is it okay to want both RAID 10 + KVM on the same machine? Or is this better done on two systems, one only for RAID 10 and the other for KVM, with the KVM host saving its files on the RAID 10?

    (two screenshots: CentOS 7 64-bit (RAID+KVM) in VMware Workstation)

    Also note that USB drives are available, so the OS could be installed on USB.

    There is no money for anything. This needs to be done in a way that helps them, installing some VMs to improve their workflow without adding any cost; maybe we can purchase a 64 GB USB flash drive, but that is it.

    And if I install the OS on a USB drive, which is something I am leaning towards, should I create a swap partition or not? To me it does not sound practical.

    OBR10 (One Big RAID 10). Create a small partition to put a decent OS on (Fedora Server, most likely, in this case), and you should be good to go.

    The automatic partitioning system will not create a RAID for you, so you'll have to do that manually with the "I will configure partitioning" radio button (I think it's the same option in Fedora).
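    For reference, here is a rough sketch of what an equivalent per-disk layout could look like from a shell instead of the installer's GUI (device names like /dev/sda and the sizes are assumptions; repeat for each of the four disks):

    # one small ESP, a /boot-sized partition, and the rest left for the RAID 10
    sgdisk -n 1:0:+512M -t 1:ef00 /dev/sda   # EFI System Partition (UEFI installs)
    sgdisk -n 2:0:+1G   -t 2:fd00 /dev/sda   # /boot, typed as a Linux RAID member
    sgdisk -n 3:0:0     -t 3:fd00 /dev/sda   # remainder of the disk, for the RAID 10

    The "I will configure partitioning" screen ends up producing an equivalent layout, just interactively.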



  • @travisdh1

    So you suggest a small OS on one drive and RAID 10 on the other 4 drives, right?

    Is it okay if this small OS is on a USB drive? I could clone it and back it up easily.



  • @emad-r said in NAS alternative on the cheap:

    @travisdh1

    So you suggest a small OS on one drive and RAID 10 on the other 4 drives, right?

    Is it okay if this small OS is on a USB drive? I could clone it and back it up easily.

    No, a small OS partition on the OBR10 array. You can do that with good software RAID like what is included in the Linux kernel.

    I'd never put a standard OS on a USB stick. The sticks just aren't made with enough reliability to handle all of the small writes that go to things like log files, not just swap space. There are a few operating systems that were designed to run from a USB stick or memory card, but if the OS wasn't custom built for it, all I see is a dead USB stick.



  • @travisdh1

    Interesting, so everything on those 4 drives, no need for anything extra.

    I would really love it if you could help me partition them. This is where I see many options, especially since the boot partition needs to be a basic, standard partition on one of the drives, and that gave me pause.

    If I have

    Drive A = 2 TB
    Drive B = 2 TB
    Drive C = 2 TB
    Drive D = 2 TB

    I know all the options in the installer, and I know how to select drives and make software RAID 10, but is it as simple as:

    creating a 2 GiB boot partition on A,
    then creating LVM on A/B/C/D with RAID 10 giving 4 TB usable?
    And does swap need to be a separate partition?


  • Service Provider

    @emad-r said in NAS alternative on the cheap:

    @travisdh1

    So you suggest a small OS on one drive and RAID 10 on the other 4 drives, right?

    Is it okay if this small OS is on a USB drive? I could clone it and back it up easily.

    No. Just one array. No need to split off the OS.


  • Service Provider

    @emad-r said in NAS alternative on the cheap:

    @travisdh1

    Interesting, so everything on those 4 drives, no need for anything extra.

    I would really love it if you could help me partition them. This is where I see many options, especially since the boot partition needs to be a basic, standard partition on one of the drives, and that gave me pause.

    If I have

    Drive A = 2 TB
    Drive B = 2 TB
    Drive C = 2 TB
    Drive D = 2 TB

    I know all the options in the installer, and I know how to select drives and make software RAID 10, but is it as simple as:

    creating a 2 GiB boot partition on A,
    then creating LVM on A/B/C/D with RAID 10 giving 4 TB usable?
    And does swap need to be a separate partition?

    Single array resulting in 4 TB usable. Then just make a /boot of 1 GB and a / of maybe 14 GB, and then a datastore with the rest.
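    If it helps to see that outside the installer, a rough command-line sketch of the same idea (device names and the volume group name are assumptions, not anything from the screenshots):

    # RAID 10 across the four large partitions
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
    # LVM on top of the array: ~14 GB for /, the rest as the VM datastore
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 14G -n root vg0
    lvcreate -l 100%FREE -n datastore vg0
    mkfs.xfs /dev/vg0/root
    mkfs.xfs /dev/vg0/datastore

    /boot itself stays a small plain partition (or RAID 1, as comes up further down), since the installer will not put it inside the RAID 10.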



  • @scottalanmiller said in NAS alternative on the cheap:

    @emad-r said in NAS alternative on the cheap:

    @travisdh1

    So you suggest a small OS on one drive and RAID 10 on the other 4 drives, right?

    Is it okay if this small OS is on a USB drive? I could clone it and back it up easily.

    No. Just one array. No need to split off the OS.

    Interesting. I was just thinking about how a small commercial NAS works, because I thought they have a ROM image with the OS, so I would do something similar with a USB drive; that's how my idea originated.


  • Service Provider

    @emad-r said in NAS alternative on the cheap:

    @scottalanmiller said in NAS alternative on the cheap:

    @emad-r said in NAS alternative on the cheap:

    @travisdh1

    So you suggest a small OS on one drive and RAID 10 on the other 4 drives, right?

    Is it okay if this small OS is on a USB drive? I could clone it and back it up easily.

    No. Just one array. No need to split off the OS.

    Interesting. I was just thinking about how a small commercial NAS works, because I thought they have a ROM image with the OS, so I would do something similar with a USB drive; that's how my idea originated.

    No need to do what they do, though. They do it for reasons that are of no value to the customer. You can do it better in this case.

    Not that the USB approach is bad. You are just free to do what is best here, rather than deal with zero-disk scenarios.



  • @scottalanmiller @travisdh1

    Hi, what do you think of the updated config? Note that each screenshot shows a partition, and the next screenshot shows which disk it resides on. Thanks again for checking this with me. The reason I went with a big / is that I want to make sure all the KVM files will be stored on the RAID. Note that /boot and /boot/efi cannot be inside the RAID; the installer will error out.

    (eight screenshots of the partitioning layout: each shows a partition, followed by the disks it resides on)


  • Service Provider

    Looks fine to me.



  • @emad-r said in NAS alternative on the cheap:

    @scottalanmiller @travisdh1

    Should I worry about the fact that the boot partition is not included in the RAID array? What can you advise to fix this or increase durability, without a dedicated RAID card?

    Also, the setup is very nice, especially with Cockpit.



  • @scottalanmiller @travisdh1

    Should I worry about the fact that the boot partition is not included in the RAID array? What can you advise to fix this or increase durability, without a dedicated RAID card?

    Also, the setup is very nice, especially with Cockpit.

    I answered this:

    https://en.wikipedia.org/wiki/Mdadm

    Since support for MD is found in the kernel, there is an issue with using it before the kernel is running. Specifically it will not be present if the boot loader is either (e)LiLo or GRUB legacy. Although normally present, it may not be present for GRUB 2. In order to circumvent this problem a /boot filesystem must be used either without md support, or else with RAID1. In the latter case the system will boot by treating the RAID1 device as a normal filesystem, and once the system is running it can be remounted as md and the second disk added to it. This will result in a catch-up, but /boot filesystems are usually small.

    With more recent bootloaders it is possible to load the MD support as a kernel module through the initramfs mechanism. This approach allows the /boot filesystem to be inside any RAID system without the need of a complex manual configuration.

    So yeah, you can make the boot partition and the EFI boot partition RAID 1 and the installer will allow this. It seems a bit much, but good to know; with motherboards in 2018 having dual M.2 slots + 8 SATA ports, this makes for interesting choices... I think the only thing left to check is swap, because I don't think I checked that it was working when put on a separate non-RAID partition. I even made a catchy name for this: RAID 1 for boot, RAID 10 for root.

    (two screenshots: Clone of Fedora 64-bit in VMware Workstation showing the RAID 1 boot layout)
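    For reference, a minimal sketch of the "RAID 1 for boot, RAID 10 for root" idea from the command line (partition names are assumptions; the installer does the equivalent through its GUI, and the RAID 10 part is the same as sketched earlier):

    # /boot mirrored across all four drives; 1.0 metadata puts the superblock at the end
    # of the partition so it still looks like a plain filesystem to the bootloader
    mdadm --create /dev/md0 --level=1 --raid-devices=4 --metadata=1.0 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
    mkfs.ext4 /dev/md0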



  • I have mine setup like this:

    cat /proc/mdstat
    Personalities : [raid10] [raid1] 
    md0 : active raid1 sda1[0] sdb1[1] sdd1[3] sdc1[2]
          409536 blocks super 1.0 [4/4] [UUUU]
          
    md2 : active raid10 sdd3[3] sdc3[2] sda3[0] sdb3[1]
          965448704 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
          bitmap: 4/8 pages [16KB], 65536KB chunk
    
    md1 : active raid10 sdd2[3] sdc2[2] sda2[0] sdb2[1]
          10230784 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
    
    $ df -Th
    Filesystem           Type   Size  Used Avail Use% Mounted on
    /dev/mapper/vg_mc-ROOT
                         ext4    25G  1,7G   22G   7% /
    tmpfs                tmpfs  3,9G     0  3,9G   0% /dev/shm
    /dev/md0             ext4   380M  125M  235M  35% /boot
    /dev/mapper/vg_mc-DATA

    Boot: RAID 1
    Swap: RAID 10
    Everything else: RAID 10 and then LVM on top
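    A few commands that go with a layout like this, for checking on the arrays and recording their definitions (the config file location varies by distro, e.g. /etc/mdadm.conf on Fedora/CentOS, /etc/mdadm/mdadm.conf on Debian-based systems):

    cat /proc/mdstat              # quick health/resync overview, as above
    mdadm --detail /dev/md2       # full status of the RAID 10 data array
    mdadm --detail --scan         # one-line array definitions, suitable for the mdadm config file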



  • @romo said in NAS alternative on the cheap:

    Everything else: RAID 10 and then LVM on top

    No issues from having /boot on RAID 1? With the system and kernel updating? And is this a UEFI setup?



    @emad-r No issues booting from RAID 1; the system has been live for 4 years.
    No, it is not using UEFI.



  • @emad-r

    I love COCKS
    I mean cockpit-storage
    I removed 1 HDD from my test environment and it was very easy to detect and handle.
    Cockpit is shaping up to become the de facto standard for managing Linux boxes; I hope they don't stop or sell out.

    (five screenshots: Cockpit storage view on centos.kvm.raid10 and a root shell)

    Regarding RAID 1 on /boot: I did this on one test environment and removed a RAID 1 disk, and now the system is in emergency mode, so I'm not sure if I will add the complexity and do this, especially since the chance of that partition corrupting is low, because it is mostly used at startup and on reboots.
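    For reference, on a physical box the equivalent disk swap with mdadm would look roughly like this (array and device names here are hypothetical):

    mdadm --manage /dev/md1 --fail /dev/sdd3     # mark the member failed if the kernel hasn't already
    mdadm --manage /dev/md1 --remove /dev/sdd3   # pull it out of the array
    # after physically replacing the drive, copy the partition layout from a healthy disk
    sgdisk -R /dev/sdd /dev/sda                  # replicate the table from /dev/sda (source) onto /dev/sdd (new disk)
    sgdisk -G /dev/sdd                           # give the new disk its own GUIDs
    mdadm --manage /dev/md1 --add /dev/sdd3      # re-add; the array rebuilds automatically
    cat /proc/mdstat                             # watch the resync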



  • @emad-r said in NAS alternative on the cheap:

    I removed 1 HDD from my test environment and it was very easy to detect and handle.

    How did you remove it? Usually with soft RAID you would not eject a drive without purging it from the RAID first...



  • @matteo-nunziati said in NAS alternative on the cheap:

    @emad-r said in NAS alternative on the cheap:

    I removed 1 HDD from my test environment and it was very easy to detect and handle.

    How did you remove it? Usually with soft RAID you would not eject a drive without purging it from the RAID first...

    In a virtual environment you can do whatever you want.



  • @emad-r said in NAS alternative on the cheap:

    @matteo-nunziati said in NAS alternative on the cheap:

    @emad-r said in NAS alternative on the cheap:

    I removed 1 HDD from my test environment and it was very easy to detect and handle.

    How did you remove it? Usually with soft RAID you would not eject a drive without purging it from the RAID first...

    In a virtual environment you can do whatever you want.

    Ah! Sorry, I misunderstood: I was sure you had removed a disk from a physical OBR10!



  • @emad-r

    So I deleted the contents of the boot partition + the EFI boot partition in Fedora, trying to simulate a failure of the boot partitions like I did in CentOS, restarted Fedora, and the system worked! (The boot partitions are not inside the RAID, and if you read the edit below, it only works because it auto-detects the copies I made of the boot partitions.)

    Anyway, I found the best approach, which is OBR10 where I don't let the array take the maximum size of the drives: I spare 8 GB and copy the boot partitions into that empty space at the end of each drive in case I ever need them as a backup.

    Edit: I deleted the boot partitions, but it seems Fedora is smart enough to notice the copies I had already made of the boot partitions, so it automatically used them without me restoring anything.

    (screenshots: Fedora Server in Oracle VM VirtualBox)

    Note that it detected the boot partition without the boot flag.

    Okay, next step:

    I restored the flags + boot partitions on the original drive, then rebooted: it still boots from /dev/sdf3+2 (the copies).

    So I deleted /dev/sdf3+2 and now it is using /dev/sdg3+2 (another copy; yes, I copied the boot partitions onto the end of every RAID drive).

    This is surprisingly interesting to me; I never knew about this auto-recovery and detection of boot partitions, even when they are not at the beginning of the drive. The system is using the copies at the end of the drive.

    I wonder if CentOS is the same...

    The last test was to delete all the other boot partition copies at the end of the drives and keep only the ones I restored at the original location on the first drive, and the system booted fine from those. So all is good, and again I'm surprised at how resilient the system is.
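    For what it's worth, a minimal sketch of that boot-partition backup idea, assuming spare backup partitions at least as large as the originals were created on another drive (/dev/sdb5 and /dev/sdb6 here are hypothetical names):

    dd if=/dev/sda1 of=/dev/sdb5 bs=4M   # clone the ESP
    dd if=/dev/sda2 of=/dev/sdb6 bs=4M   # clone /boot
    blkid /dev/sdb5 /dev/sdb6            # the clones keep the same filesystem UUIDs as the originals

    The identical UUIDs on the clones are presumably what lets the system find them when the originals disappear; the UUID angle comes up again further down.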


  • Service Provider

    @emad-r said in NAS alternative on the cheap:

    @emad-r said in NAS alternative on the cheap:

    @scottalanmiller @travisdh1

    Should I worry about the fact that the boot partition is not included in the RAID array? What can you advise to fix this or increase durability, without a dedicated RAID card?

    Also, the setup is very nice, especially with Cockpit.

    It can't be; you have to manually copy /boot to each partition.
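    A minimal sketch of that manual copy, assuming a second boot partition (/dev/sdb1 here is hypothetical) already exists and is formatted the same way:

    mkdir -p /mnt/bootcopy
    mount /dev/sdb1 /mnt/bootcopy
    rsync -a --delete /boot/ /mnt/bootcopy/   # mirror the current /boot contents
    umount /mnt/bootcopy

    The copy has to be refreshed after every kernel or GRUB update, which is why the RAID 1 /boot approach discussed above is less hands-on.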



  • @emad-r

    CentOS does not behave the same. It does not switch to another boot partition and auto-detect it if it's there; it just errors out on the next reboot. And you can't just restore the partition to recover the system either...



  • @scottalanmiller said in NAS alternative on the cheap:

    @emad-r said in NAS alternative on the cheap:

    @emad-r said in NAS alternative on the cheap:

    @scottalanmiller @travisdh1

    Should I worry about the fact that the boot partition is not included in the RAID array? What can you advise to fix this or increase durability, without a dedicated RAID card?

    Also, the setup is very nice, especially with Cockpit.

    It can't be; you have to manually copy /boot to each partition.

    Try it out: do a Fedora system install and copy the boot partitions to another disk, then delete the original boot partition and reboot the system. The system will auto-detect and boot from the duplicated boot partition on the other disk; on CentOS this does not happen. I tested this with EFI.



  • @emad-r said in NAS alternative on the cheap:

    @scottalanmiller said in NAS alternative on the cheap:

    @emad-r said in NAS alternative on the cheap:

    @emad-r said in NAS alternative on the cheap:

    @scottalanmiller @travisdh1

    Should I worry about the fact that the boot partition is not included in the RAID array? What can you advise to fix this or increase durability, without a dedicated RAID card?

    Also, the setup is very nice, especially with Cockpit.

    It can't be; you have to manually copy /boot to each partition.

    Try it out: do a Fedora system install and copy the boot partitions to another disk, then delete the original boot partition and reboot the system. The system will auto-detect and boot from the duplicated boot partition on the other disk; on CentOS this does not happen. I tested this with EFI.

    I suspect this behavior is related to the fact that CentOS uses XFS for the boot partition while Fedora uses ext4, so I'm doing more tests; perhaps the UUID is another possible reason.



  • @emad-r said in NAS alternative on the cheap:

    @emad-r said in NAS alternative on the cheap:

    @scottalanmiller said in NAS alternative on the cheap:

    @emad-r said in NAS alternative on the cheap:

    @emad-r said in NAS alternative on the cheap:

    @scottalanmiller @travisdh1

    Should I worry about the fact that the boot partition is not included in the RAID array? What can you advise to fix this or increase durability, without a dedicated RAID card?

    Also, the setup is very nice, especially with Cockpit.

    It can't be; you have to manually copy /boot to each partition.

    Try it out: do a Fedora system install and copy the boot partitions to another disk, then delete the original boot partition and reboot the system. The system will auto-detect and boot from the duplicated boot partition on the other disk; on CentOS this does not happen. I tested this with EFI.

    I suspect this behavior is related to the fact that CentOS uses XFS for the boot partition while Fedora uses ext4, so I'm doing more tests; perhaps the UUID is another possible reason.

    I am dropping this. I matched the UUID and flags and everything; if you delete /boot + /boot/efi on CentOS it does not get restored no matter what, even if you clone the partition back and restore the same UUID + flags.

    ext4 seems better for /boot because it retains the UUID when copying, while XFS changes it.
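    A small sketch of checking and matching UUIDs on a cloned boot partition (device names are hypothetical; tune2fs is for ext4, xfs_admin for XFS, and the filesystem should be unmounted when changing its UUID):

    blkid /dev/sda2 /dev/sdg2                                      # compare the original and the copy
    tune2fs -U "$(blkid -s UUID -o value /dev/sda2)" /dev/sdg2     # ext4: stamp the copy with the original's UUID
    xfs_admin -U "$(blkid -s UUID -o value /dev/sda2)" /dev/sdg2   # XFS equivalent, for a CentOS-style /boot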


  • Service Provider

    Oh interesting. Good catch.



  • @scottalanmiller

    Yeah, by accident today I was able to understand it better.

    (three screenshots: Fedora OBR10 in VMware Workstation)

    Fedora suffers the same fate as CentOS and wants to error out, but when it does not see the boot partitions it errors out quickly, checks the next drive, and if it finds a copy there it updates the EFI boot entries and adds another line, so it has a fail-safe method; CentOS won't do this. The 1st entry belonged to the boot partition that was deleted, and the 2nd as well, so Fedora will keep finding the copies if you have them, auto-updating the EFI entries as it boots and adding another line, so then you can boot normally. All of this is done in the background; you just see a boot error for 1-2 seconds and then boot normally, because Fedora has already written another entry.
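    To see those entries for yourself, efibootmgr lists what the firmware currently has (the sample lines below are illustrative of what a Fedora install typically writes, not taken from the screenshots):

    efibootmgr -v
    # Boot0000* Fedora    HD(2,GPT,...)/File(\EFI\fedora\shimx64.efi)
    # Boot0001* Fedora    HD(2,GPT,...)/File(\EFI\fedora\shimx64.efi)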

    I am starting to like Fedora for servers (minimal install), and if I want CentOS I can use it as a guest VM.