Raid 6 Amateur File Server Setup Questions
-
Mind if I ask you what speaks against the approach of having the OS not on the array? You said performance, number of ports, etc...
Just as a trial I threw in a small SSD connected to the motherboard rather than the controller. It produced the results above; write is not great, but OK for my purposes (and could you really expect much more with RAID 6?). Read could actually have been even better; I think 115 MB/s should also be close to the maximum of what my Gigabit Ethernet can do, right?
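For what it's worth, the ~115 MB/s ceiling can be sanity-checked with a quick back-of-envelope calculation. A sketch only; the frame-overhead figures are the standard Ethernet/TCP values, not numbers from this thread:

```python
# Back-of-envelope: usable TCP payload rate on a 1 Gbit/s link.
# Overhead per standard 1500-byte MTU frame (well-known values):
#   20 B IP header + 20 B TCP header inside the MTU,
#   14 B Ethernet header + 4 B FCS + 20 B preamble/inter-frame gap outside it.
link_bytes_per_s = 1_000_000_000 / 8          # 125 MB/s raw line rate
payload = 1500 - 20 - 20                      # 1460 B TCP payload per frame
on_wire = 1500 + 14 + 4 + 20                  # 1538 B actually on the wire

max_mb_s = link_bytes_per_s * payload / on_wire / 1_000_000
print(f"theoretical max ≈ {max_mb_s:.1f} MB/s")  # ≈ 118.7 MB/s
```

So an observed 115 MB/s is within a few percent of what a single GigE link can deliver; the network, not the array, is the bottleneck for reads.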
I think with the way I write, that is, I copy it over from my main PC, I could even risk enabling the write cache. That would mean that in the event of a power failure the data currently being written is lost, right? The likelihood of that happening while I copy something over is almost zero, but even if it happened, I would still have the data on my PC. (Write performance is not so critical, so I won't be doing it; just out of interest.)
I have also booted the entire thing with Ubuntu on a stick and I could access the array with no issues.
So it almost seems like a panacea for me; I could switch OS later on, while taking full advantage of the energy conservation options of the controller (switching drives off, etc.). The OS would still run on the SSD, instead of having 6 disks spinning up and down for access to the OS alone.
I'm probably missing something though.

The issue with drives only being recognized at 1.5 Gb/s has returned. I think I can rule out a bad cable, since the drives are attached to a 1-to-4 splitter cable (SFF-8087 to 4x SATA) and there is no pattern of, say, one cable producing bad results. I haven't changed any of the cables, and yet the speed seems to change with every reboot. Write performance seems not to be impacted (still got around 10 MB/s when 5 out of 6 were recognized as SATA I; next time I got 4/6 recognized as SATA II).
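If it helps with debugging: when booted from the Ubuntu stick, with the drives visible to the Linux kernel rather than hidden behind the RAID controller, the negotiated link speed of each port can be read from sysfs. A hypothetical sketch; the sysfs path is standard libata, but whether your drives appear there depends on the controller mode:

```python
import glob

# Map the kernel's reported link rate to the SATA generation.
GENERATION = {"1.5 Gbps": "SATA I", "3.0 Gbps": "SATA II", "6.0 Gbps": "SATA III"}

def negotiated_speeds():
    """Return {sysfs link path: SATA generation} for every ATA link the kernel sees."""
    speeds = {}
    for path in sorted(glob.glob("/sys/class/ata_link/*/sata_spd")):
        with open(path) as f:
            raw = f.read().strip()          # e.g. "3.0 Gbps"
        speeds[path] = GENERATION.get(raw, raw)
    return speeds

for link, generation in negotiated_speeds().items():
    print(link, "->", generation)
```

Running this after each reboot would at least show whether the kernel agrees with the controller BIOS about which links come up degraded.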
-
@geertcourmacher said in Raid 6 Amateur File Server Setup Questions:
Mind if I ask you what speaks against the approach of having the OS not on the array? You said performance, number of ports, etc...
Just as a trial I threw in a small SSD connected to the motherboard rather than the controller. It produced the results above; write is not great, but OK for my purposes (and could you really expect much more with RAID 6?). Read could actually have been even better; I think 115 MB/s should also be close to the maximum of what my Gigabit Ethernet can do, right?
I think with the way I write, that is, I copy it over from my main PC, I could even risk enabling the write cache. That would mean that in the event of a power failure the data currently being written is lost, right? The likelihood of that happening while I copy something over is almost zero, but even if it happened, I would still have the data on my PC. (Write performance is not so critical, so I won't be doing it; just out of interest.)
I have also booted the entire thing with Ubuntu on a stick and I could access the array with no issues.
So it almost seems like a panacea for me; I could switch OS later on, while taking full advantage of the energy conservation options of the controller (switching drives off, etc.). The OS would still run on the SSD, instead of having 6 disks spinning up and down for access to the OS alone.
I'm probably missing something though.

The issue with drives only being recognized at 1.5 Gb/s has returned. I think I can rule out a bad cable, since the drives are attached to a 1-to-4 splitter cable (SFF-8087 to 4x SATA) and there is no pattern of, say, one cable producing bad results. I haven't changed any of the cables, and yet the speed seems to change with every reboot. Write performance seems not to be impacted (still got around 10 MB/s when 5 out of 6 were recognized as SATA I; next time I got 4/6 recognized as SATA II).
I assume you are letting your RAID controller automatically negotiate the speed. There should be a way to manually set it to 3.0 Gb/s. I would wait until someone more knowledgeable than I am responds, because I don't know how risky this is to do in your situation. I have read of Adaptec controllers and WD Reds not getting along in this way, though.
-
As I have seen, sometimes RAID controllers and drives do not play nicely together.
Did you Google your drives and the adapter? Maybe there is some info out there.
-
@geertcourmacher said in Raid 6 Amateur File Server Setup Questions:
Mind if I ask you what speaks against the approach of having the OS not on the array? You said performance, number of ports, etc...
Waste of performance, waste of capacity, extra effort...
Ask the question the opposite way: given that splitting arrays causes huge losses in capacity and performance, what factor would make it a consideration? Try to answer the "pro" of going against standard best practices instead of only looking for the cons of following them.
-
-
@geertcourmacher said in Raid 6 Amateur File Server Setup Questions:
Just as a trial I threw in a small SSD connected to the motherboard rather than the controller. It produced the results above; write is not great, but OK for my purposes (and could you really expect much more with RAID 6?). Read could actually have been even better; I think 115 MB/s should also be close to the maximum of what my Gigabit Ethernet can do, right?
Yes, 115MB/s is roughly GigE speeds. However, drive performance is primarily measured in IOPS, not network speed. MB/s isn't a very useful number here.
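To illustrate that point with numbers (illustrative values, not measurements from this thread): a drive delivering a fixed number of IOPS produces wildly different MB/s depending on the I/O size, which is why MB/s alone doesn't characterize a disk:

```python
def throughput_mb_s(iops: float, io_size_kb: float) -> float:
    """MB/s resulting from a given IOPS rate at a given I/O size."""
    return iops * io_size_kb / 1024

# ~75 random IOPS is a typical ballpark for one 5400 rpm NAS drive (assumption).
small = throughput_mb_s(75, 4)     # 4 KB random I/O   -> ~0.3 MB/s
large = throughput_mb_s(75, 1024)  # 1 MB streaming I/O -> 75.0 MB/s
print(f"{small:.1f} MB/s vs {large:.1f} MB/s at the same 75 IOPS")
```

The large sequential copies described in this thread sit at the friendly end of that spectrum; a database or VM workload would sit at the other.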
-
@scottalanmiller said in Raid 6 Amateur File Server Setup Questions:
@geertcourmacher said in Raid 6 Amateur File Server Setup Questions:
Mind if I ask you what speaks against the approach of having the OS not on the array? You said performance, number of ports, etc...
Waste of performance, waste of capacity, extra effort...
Ask the question the opposite way: given that splitting arrays causes huge losses in capacity and performance, what factor would make it a consideration? Try to answer the "pro" of going against standard best practices instead of only looking for the cons of following them.
Again, I'll take your word for it, so no questions there. But I'm not really wasting any capacity by attaching a 60 GB SSD I already own (I own two of them, in fact, and find them too small for anything). I'm not wasting a slot, since the ones on the motherboard aren't used at all (and I wouldn't be wasting a slot on the controller either, since I probably won't ever attach 20 drives). No extra effort either.
That leaves performance. If I'm fine with the write, and the read is maxed out anyway, would you still advise (not on principle, but in my case) against it? I've read your article on RAID 1 + RAID 5. Am I wrong that my scenario deviates enough from standard industry use?

As for drive performance, I just provided what I thought would be my only usage scenario. Does this give you more info, or what tool should I use to measure the IOPS?
With write-cache enabled (just for test purposes)
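On the RAID 6 side, the standard write-penalty arithmetic explains why write numbers lag so far behind reads. A rough sketch with assumed per-drive figures; the penalty of 6 (each small write costs three reads plus three writes) is the textbook RAID 6 value:

```python
def raid6_random_iops(drives: int, per_drive_iops: float, read_fraction: float) -> float:
    """Blended random IOPS of a RAID 6 array for a given read/write mix."""
    raw = drives * per_drive_iops
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + 6 * write_fraction)  # RAID 6 write penalty = 6

reads  = raid6_random_iops(6, 75, read_fraction=1.0)  # all spindles serve reads: 450
writes = raid6_random_iops(6, 75, read_fraction=0.0)  # penalty of 6 eats the rest: 75
print(f"pure reads: {reads:.0f} IOPS, pure writes: {writes:.0f} IOPS")
```

A controller write cache helps precisely because it can absorb and coalesce those small writes into full stripes, sidestepping part of the penalty.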
-
Yes, indeed the controller negotiates the speed. I couldn't find an easy way to adjust that, though.
A Google search only led me back to this thread.
But if it's to do with the WD Reds, I'll just have to live with it, I guess.

Temperature: the controller gets really hot, with temperatures of up to 78°C. It still reports as normal, but that seems really hot to me. I probably shouldn't compare this to a normal PC, but I actually doubt that I could cool it much better with air; I suspect the heatsink is just too small. Temps don't really drop much when inactive. Water cooling is probably not feasible, or even available.
Scrubs: I have two options on the controller that seem similar. One is called background consistency check, which can be set to anything, even a few hundred days. The second is called verify and fix, which can be set to at most once a month. If you know the specifics, all the better, but if you don't: given the very rare writes, what would be a good number of days/months between scrubs?
-
Not much point to faster speeds with Reds; they can't push very much speed at all.
-
Why are you disabling the cache? No battery backup? You can buy one on eBay for like $50; totally worth it for the performance boost you show, in my opinion.
-
The improved speed is indeed impressive, but I'd only feel the difference right now, as I am filling up the array. And even then, I won't get to use all of the improved speed (it's either limited by the network or, worse, USB 2.0). I've actually gone ahead and left it enabled, despite the fact that I don't have a battery, just for filling up the array. I figured, hopefully not wrongly, that I can't lose any data as long as I don't delete it from my external drives/work PC, even if the first power failure in 5+ years were to occur.
-
@geertcourmacher said in Raid 6 Amateur File Server Setup Questions:
The improved speed is indeed impressive, but I'd only feel the difference right now, as I am filling up the array. And even then, I won't get to use all of the improved speed (it's either limited by the network or, worse, USB 2.0). I've actually gone ahead and left it enabled, despite the fact that I don't have a battery, just for filling up the array. I figured, hopefully not wrongly, that I can't lose any data as long as I don't delete it from my external drives/work PC, even if the first power failure in 5+ years were to occur.
Meaning... you have a backup?
-
Not really a backup. For the purpose of creating this array I had to fill up every hard drive I could find. I know you're probably cringing right now, but I actually had to delete backups for that, so for the moment even my critical data is not backed up. Only a few more hours, though. *fingers crossed*
-
Huh - as cheap as 4 TB USB 3.0 drives are today, this seems like living on the edge.
-
@Dashrender said in Raid 6 Amateur File Server Setup Questions:
Huh - as cheap as 4 TB USB 3.0 drives are today, this seems like living on the edge.
At least just for a few hours.
-
Just a little update for those interested.
I have now set up two 60 GB SSDs as a software-based RAID 0 with Windows 10 installed, and I keep an Ubuntu stick by the server. Every now and then I boot into Ubuntu just to see whether everything is accessible, and it seems to work fine.
According to Adaptec, all new RAID cards should be compatible with mine, so the biggest remaining risk to me seems to be user error.

I don't post this to disregard your great advice; I am sure there are better options, but I had to fill the server rather quickly to escape the situation described above, and hence didn't have the time to get acquainted with entirely new OSes or whole notions of virtualization (for the next update I shall, however, try to do this your way).
The server is actually running anything but 24/7 - I only turn it on every other day for a few hours at most. Whenever I have to move more than a few gigabytes I turn on the write cache - I figured as long as I'm copying it from my main PC there is no risk involved. Afterwards I turn it off again.
The issue with the WD Reds being recognized as SATA I keeps recurring - and changing with every reboot - but I haven't seen any negative impact given my usage scenario.
I installed the best fan I am aware of blowing air directly at the card (a 140 mm Noctua Industrial). At 1300 RPM I get up to 62°C (shaving off 20°C compared to a 120 mm fan I had installed before); at 2000 RPM I could get it down to 52°C, but the noise is unbearable.
The only thing I haven't figured out yet is scrubs. I have two options on the controller that seem similar. One is called background consistency check, which can be set to anything, even a few hundred days (I set this to 180 days, so it hasn't occurred yet). The second is called verify and fix, which can be set to at most once a month. If I don't run either, does that mean the controller doesn't scrub at all?
-
Sounds like that would indicate a no-scrubbing situation.