Storage Question



  • First off: hello everyone! (First post!) 😃

    So, I did a bunch of storage research before buying my latest server. The server I eventually ended up with was a DELL T320 with the PERC H310 card. I outfitted it with (2) 500GB 7.2K SATA drives in a RAID 1 array. The plan was to put a Server 2012 hypervisor on that array, then use my own SSDs for the VMs. This server will be used as a DC, plus either a moderate-use file server or an email server for a 20-person SMB.

    I worked directly with Kingston and received some trial SSDs (KC300). The drives work, but as many here have probably seen, they constantly flash amber, throw a lot of errors in Server Administrator, and of course provide no SMART monitoring or advance warnings. I'm a little nervous that at some point this is going to come back and bite me.

    So now I am rethinking this whole thing and where I should go from here. Even though you cannot buy them configured this way from DELL, I've been told the server can run SATA and SAS concurrently, so I think I have a lot of options. This server is new, so data loss is not a concern.

    Would love to hear any thoughts about my situation, or answers to the following questions.

    Thanks in advance!

    1. Am I nuts to consider these third party SSDs without any monitoring or predictive failure?

    2. Would it make sense to trash the 7.2K drives and just buy (4) 10K SAS drives and make one big RAID 10 array?

    3. Considering what I already have and our requirements, would it just make sense to buy a few more 7.2K drives and make a RAID 10 array out of them? Is there a huge performance difference between those two arrays? (7.2K vs. 10K both in a RAID 10.)

    4. Sub-question: where did 15K SAS drives go? DELL doesn't really offer them anymore.

    5. Do people buy the PERC H310? Seems like the H710 is mentioned a lot.



  • How risk averse are you? How much does disk speed impact your environment?

    edit: Welcome!! Good first post.



  • Well, we're a smaller company, so we could probably be down for a day or so.

    We also have pretty good backups.

    Not sure about the disk speed requirement, to be honest. Always just assumed go big! 🙂

    Oh and thanks for the welcome. I hope it was a good first post, ha ha!



  • @BRRABill

    Hi BRRABill, if you can swing it, why not go for a RAID 5 out of SSDs? SSDs aren't subject to the same failings as classic spinning rust.

    I wouldn't trash the drives; you might need them for something.

    I wouldn't consider buying spinning rust at all at this point. Everything comes with SSDs, or I buy SSDs and put them in myself. (See a lot of my recent topics.)

    15K SAS drives are around, but expensive.


  • Banned

    @BRRABill said:

    Not sure about the disk speed requirement, to be honest. Always just assumed go big! 🙂

    I'd look into your IOPS. I'd say it's fairly low though.


  • Banned

    @BRRABill said:

    1. Considering what I already have and our requirements, would it just make sense to buy a few more 7.2K drives and make a RAID 10 array out of them? Is there a huge performance difference between those two arrays? (7.2K vs. 10K both in a RAID 10.)

    Four drives at 7.2K can still be fairly slow. The bigger the array, the faster it will be.



  • @BRRABill said:

    1. Sub question: where did 15K SAS drives go? DELL doesn't really offer them any more.

    Firstly - welcome to the boards.

    Where did they go? They've been more or less replaced by SSDs. 15K drives are so expensive that you get better performance per dollar with SSDs.

    Where are the errors coming from with these SSDs installed? Is the PERC controller giving them because the drives aren't Dell drives?



  • My concern is more not knowing if anything is going wrong with the drives.

    Or having to look at these stupid flashing lights all day. (<= kidding)

    I thought RAID 5 was frowned upon these days?



  • I thought of one more question:

    A lot of the talk of the enterprise SSD and battery on PERC cards revolves around power loss.

    But why is that an issue if the server (probably) has a UPS and automated shutdown?



  • @BRRABill said:

    My concern is more not knowing if anything is going wrong with the drives.

    Or having to look at these stupid flashing lights all day. (<= kidding)

    I thought RAID 5 was frowned upon these days?

    RAID 5 on spinning rust is. RAID 5 on SSD is A-OK.


  • Banned

    @BRRABill said:

    My concern is more not knowing if anything is going wrong with the drives.

    Or having to look at these stupid flashing lights all day. (<= kidding)

    I thought RAID 5 was frowned upon these days?

    Only for HDDs, not SSD.

    Have you updated the PERC RAID firmware?



  • @BRRABill said:

    I thought RAID 5 was frowned upon these days?

    The problems with RAID 5 on spinning disks are UREs and the amount of time it takes to rebuild the new drive. SSDs don't suffer either of these issues, so it's back on the table.
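The rebuild-risk half of that is easy to put numbers on. A rough back-of-envelope sketch in Python; the 1-in-10^14-bits URE rate is the commonly quoted consumer spinning-disk spec, and the 4TB figure is just an illustrative amount of data read during a rebuild, not anything from this thread:

```python
import math

def rebuild_failure_probability(bytes_read, ure_rate_per_bit=1e-14):
    """Chance of hitting at least one unrecoverable read error (URE)
    while reading bytes_read of surviving data during a RAID 5 rebuild."""
    bits = bytes_read * 8
    # Model each bit read as an independent trial; for tiny per-bit
    # rates this is well approximated by 1 - e^(-bits * rate).
    return 1 - math.exp(-bits * ure_rate_per_bit)

# Illustrative: a rebuild that must read 4 TB from the remaining drives.
p = rebuild_failure_probability(4e12)
print(f"~{p:.0%} chance of at least one URE during the rebuild")
```

With consumer-class spinning disks that works out to roughly a 1-in-4 chance per 4TB read, which is why large RAID 5 arrays of rust fell out of favor; plug in an SSD-class rate such as 1e-16 and the risk all but vanishes.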



  • @BRRABill said:

    I thought of one more question:

    A lot of the talk of the enterprise SSD and battery on PERC cards revolves around power loss.

    But why is that an issue if the server (probably) has a UPS and shutdown?

    What happens if the power supply in the server dies? You still want something to back up the cache on the controller card to cover that case. The other option is a controller with flash memory instead of battery-backed cache.



  • Yes, everything in the server is totally updated firmware-wise.

    Pretty sure it's just an issue of it being a non-DELL drive. I read that a lot of drives from other manufacturers were exhibiting the same symptoms.

    I guess some SSDs work, and some don't. Kingston has been good with working with me, but this might not be fixable on their end.

    I guess that would be ANOTHER question ... anyone using 3rd party SSDs that work with DELL servers?



  • @Dashrender said:

    What happens if the power supply in the server dies? You still want something to back up the cache on the controller card to cover this instance.

    Ah, good point. Possibly a reason they go with the H710 then! And enterprise-class SSDs with power protection.


  • Service Provider

    @BRRABill said:

    I thought RAID 5 was frowned upon these days?

    On spinning rust (aka Winchester drives). On SSD it is the norm.



  • @BRRABill said:

    @Dashrender said:

    What happens if the power supply in the server dies? You still want something to back up the cache on the controller card to cover this instance.

    Ah, good point. Possibly a reason they go with the H710 then! And enterprise-class SSDs with power protection.

    We've had a really deep dive recently on why enterprise-class drives are a waste of money (search through the IT Discussions). Perhaps you could use a non-Dell RAID controller to solve this problem?


  • Service Provider

    @BRRABill said:

    @Dashrender said:

    What happens if the power supply in the server dies? You still want something to back up the cache on the controller card to cover this instance.

    Ah, good point. Possibly a reason they go with the H710 then! And enterprise-class SSDs with power protection.

    Yes, in general with servers a RAID controller is one of the first places that I recommend making a bigger investment. The larger cache and faster CPUs of better cards, plus other features, really make a difference. Obviously battery or flash backing of your data is a big deal.


  • Service Provider

    @Dashrender said:

    We've had a really deep dive recently on why enterprise-class drives are a waste of money (search through the IT Discussions). Perhaps you could use a non-Dell RAID controller to solve this problem?

    LSI or Adaptec are the good choices. Once you go down that road, be sure to reconsider being on Dell hardware. At a minimum hit up @BradfromxByte to talk about refurbed gear.



  • This is the topic regarding Consumer SSDs vs Enterprise SSDs



  • @BRRABill said:

    1. Considering what I already have and our requirements, would it just make sense to buy a few more 7.2K drives and make a RAID 10 array out of them? Is there a huge performance difference between those two arrays? (7.2K vs. 10K both in a RAID 10.)

    Do you know your IOPS requirement? You might be able to get away with four 7.2K drives in a RAID 10. Splitting them into two RAID 1s, as you're currently planning, is actually the worst thing you can do: it wastes the majority of the performance offered by the drives you install the OS on.

    For this server you should install Hyper-V or Xen onto an SD card or USB stick, then run your VMs from the storage. If you have enough storage space with a RAID 1 of SSDs, that's probably good enough. If not, moving to a four-drive RAID 5 on SSD would probably be the next place to look.
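The RAID 10 vs. two-RAID-1s point can be put into quick numbers. A hypothetical Python back-of-envelope, assuming a ballpark ~75 random IOPS per 7.2K spindle (a common rule-of-thumb figure, not anything measured on this server):

```python
def raid10_iops(drives, per_drive_iops=75):
    """Ballpark random IOPS for a RAID 10 (or RAID 1) array of identical spindles."""
    reads = drives * per_drive_iops        # every spindle can serve reads
    writes = drives * per_drive_iops // 2  # each write lands on both mirrors
    return reads, writes

# All four drives pooled into one RAID 10:
print(raid10_iops(4))  # (300, 150)

# Versus two separate RAID 1 pairs: a busy VM confined to one pair
# only ever sees that pair's performance.
print(raid10_iops(2))  # (150, 75)
```

The pooled array lets every workload draw on all four spindles at once, which is the performance that splitting into pairs wastes.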


  • Service Provider

    My very first thought here is.... are SSDs or even SAS drives worth it? Before we talk first party or third party drives, let's get some performance ideas under our belts. Twenty users is not very many. AD needs no IOPS at all. File servers tend to not use a lot, on average. Email even less.

    SAS is fine here, but probably NL-SAS not 10K and very unlikely 15K. SATA will probably do the trick too. I wouldn't go cheap on 5400 RPM SATA or anything. But standard 7200 RPM SATA is probably just fine.

    If we move to SATA drives, it gets cheaper, and we can incorporate the two 500GB drives that already exist into a single OBR10 array for better value and performance. Eight 500GB SATA drives isn't too shabby nor too expensive.
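To show why 7.2K SATA is probably plenty here, a rough supply-vs-demand sketch; the per-user and per-drive IOPS figures below are illustrative assumptions (measure your real load before committing):

```python
def demand_iops(users, iops_per_user=2):
    # AD adds almost nothing; light file/email use for a small office.
    return users * iops_per_user

def obr10_supply_iops(drives, per_drive_iops=75, write_fraction=0.3):
    # Blended random IOPS for a RAID 10 of spindles: reads hit every
    # spindle, writes cost double because of mirroring.
    reads = drives * per_drive_iops
    writes = reads / 2
    return (1 - write_fraction) * reads + write_fraction * writes

need = demand_iops(20)
have = obr10_supply_iops(8)
print(f"need ~{need} IOPS, eight-drive OBR10 supplies ~{have:.0f}")
```

Even with pessimistic assumptions, the eight-drive OBR10 has roughly an order of magnitude of headroom over a 20-user office's likely demand.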



  • @scottalanmiller said:

    @Dashrender said:

    We've had a recent really deep dive on why enterprise class drives are a waste of money (search through the IT Discussions). Perhaps you could use a non Dell RAID controller to solve this problem?

    LSI or Adaptec are the good choices. Once you go down that road, be sure to reconsider being on Dell hardware. At a minimum hit up @BradfromxByte to talk about refurbed gear.

    I only mentioned using a non-Dell card to get rid of the errors, versus moving to Dell's expensive supported drives.



  • @scottalanmiller said:

    If we move to SATA drives we get cheaper and we can incorporate the two 500GB drives that already exist into a single OBR10 array for better value and performance. Eight 500GB SATA drives isn't too shabby nor too expensive.

    This is what I did for my first VM host, and it's pretty darned decent.



  • @scottalanmiller said:

    My very first thought here is.... are SSDs or even SAS drives worth it? Before we talk first party or third party drives, let's get some performance ideas under our belts. Twenty users is not very many. AD needs no IOPS at all. File servers tend to not use a lot, on average. Email even less.

    SAS is fine here, but probably NL-SAS not 10K and very unlikely 15K. SATA will probably do the trick too. I wouldn't go cheap on 5400 RPM SATA or anything. But standard 7200 RPM SATA is probably just fine.

    If we move to SATA drives we get cheaper and we can incorporate the two 500GB drives that already exist into a single OBR10 array for better value and performance. Eight 500GB SATA drives isn't too shabby nor too expensive.

    I agree in general, but it really depends on his needs. If space isn't required in bulk, then nuts to the rust: go SSD.



  • Space is definitely not a problem. The SSDs I have are fine for space.

    So does it seem we are leaning towards keeping the SSDs and throwing warnings to the wind?

    As an afterthought, I thought about skipping the PERC controller and going with a straight LSI/Adaptec card. But wouldn't that cause the same problems, where the DELL server itself doesn't like the drive, hence the flashing amber? Or are the hotplug cage lights controlled by the adapter itself?



    Our mail server is currently sitting at around 110GB, and the data server at 130GB. That includes data and OS (Server 2003). (I'll be buying a second server in a few weeks once I get this all worked out. I didn't want to buy them both in case any issues came up. And did they!! 🙂 )

    So
    server 1 = dc/data
    server 2 = dc/mail

    The three Kingston SSDs are 480GB capacity. So there would be more than enough space.



  • @BRRABill said:

    Our mailserver is currently sitting at around 110GB. The data server at 130. That's including data and OS (Server 2003). (I'll be buying a second server in a few weeks once I get this all worked out. I didn't want to buy them both in case any issues came up. And did they!! 🙂 )

    So
    server 1 = dc/data
    server 2 = dc/mail

    The three Kingston SSDs are 480GB capacity. So there would be more than enough space.

    No!

    Two servers, both as hypervisor hosts.

    Virtualize every server you have, and run everything between the two hosts.

    Virtualize Everything!



    Sorry to keep replying, but everyone is posting so quickly I don't want anyone to miss anything.

    The 480GB Kingston SSDs are under $250 each, including cage and 3.5" adapter. So cost isn't really a concern there either.

    The KC300 is a pro-level drive (similar to the Samsung 850) that is tuned slightly for server use. It's not enterprise-grade, but my Kingston rep (who, along with the rest of Kingston, has been GREAT) thinks it will be fine for my usage.



  • @DustinB3403 said:

    Virtualize every server you have, and run everything between the two hosts.

    Virtualize Everything!

    That is what we are going to do.

    Each server will run 2 VMs.

    machine 1, VM 1 = DC
    machine 1, VM 2 = data
    machine 2, VM 1 = DC
    machine 2, VM 2 = mail

    I wanted to have 2 servers for DC redundancy.


