biggen

    • Topics 13
    • Posts 156

    Posts

    • RE: RAID5 SSD Performance Expectations

      @Pete-S said in RAID5 SSD Performance Expectations:

      @Pete-S said in RAID5 SSD Performance Expectations:

      @scottalanmiller said in RAID5 SSD Performance Expectations:

      @Pete-S said in RAID5 SSD Performance Expectations:

      Having a drive failure will become as odd a failure as having a RAID controller, a motherboard, or a CPU fail. You'd just replace it and restore the entire thing from backup.

      I think drives already fail less often than RAID controllers. From working in giant environments, the thing that fails more often than mobos or CPUs is RAM. That's the worst one, as it does the most damage and is hard to mitigate.

      The difference, though, is that mobos, controllers, and PSUs are stateless to the system, but drives are stateful. So their failure has a different type of impact, regardless of frequency.

      Well, the statefulness of the drives is not something we can fully count on, hence the saying "RAID is not backup".

      What I'm proposing is that when it becomes very unlikely that a drive will fail, we could rethink our strategy and go for single drives instead of RAID arrays. In the very unlikely event that a failure did occur, we would restore from backup, which we are prepared to do anyway.

      With HDDs the failure rate is too high but with enterprise SSDs it's starting to get into the "will not fail" category.

      As an example, assume we have 4 servers with a RAID10 array of 4 x 2TB drives each. The annual failure rate of HDDs is a few percent, say 3% for argument's sake. With 16 drives in total, every year there is about a 50% chance that a drive will fail. So over the lifespan of the servers it's very likely that we will see one or more drive failures.

      Now assume the same 4 servers with a single enterprise 4TB NVMe drive in each. Annual failure rate is 0.4% (actual number a few years back). With 4 drives in total, every year there is less than 2% chance that any drive will fail. So over the lifespan of the server it's very unlikely that we will ever see a drive failure at all. Sure, if it does happen anyway, we are restoring from backup instead of rebuilding the array.

      As long as you can justify the downtime in the event that a single drive failure takes an entire server down (albeit with a low statistical chance).

      If that isn't a concern, there's no use running RAID anyway.
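
      To put rough numbers on the failure-rate argument quoted above, here is a minimal sketch in Python. It assumes independent drive failures, reuses the 3% and 0.4% AFR figures from the post, and picks a 5-year lifespan purely for illustration.

      # Probability that at least one drive in a fleet fails, assuming independent failures.
      # AFR figures are the ones quoted in the post; the 5-year lifespan is an assumption.

      def p_any_failure(afr: float, drives: int, years: float = 1.0) -> float:
          """P(at least one of `drives` fails within `years`), given a per-drive AFR."""
          p_one_survives = (1.0 - afr) ** years
          return 1.0 - p_one_survives ** drives

      # 4 servers x RAID10 of 4 HDDs each = 16 drives, 3% AFR per drive
      print(f"HDDs, 1 year:  {p_any_failure(0.03, 16):.1%}")     # ~38.6%
      print(f"HDDs, 5 years: {p_any_failure(0.03, 16, 5):.1%}")  # ~91.3%

      # 4 servers x 1 NVMe drive each = 4 drives, 0.4% AFR per drive
      print(f"NVMe, 1 year:  {p_any_failure(0.004, 4):.1%}")     # ~1.6%
      print(f"NVMe, 5 years: {p_any_failure(0.004, 4, 5):.1%}")  # ~7.7%

      The "about 50%" in the quote is closer to the expected number of failures per year (16 x 3% = 0.48) than to the probability that at least one drive fails (~39%), but either way the single-NVMe scenario is roughly an order of magnitude less likely to see any drive failure.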

      posted in IT Discussion
      biggen
    • RE: Reconsidering ProxMox

      Thanks @scottalanmiller. This is for my home system so I guess I'll just run with a ZFS mirror since no HW Raid on this machine.

      posted in IT Discussion
      biggen
    • RE: Reconsidering ProxMox

      @scottalanmiller Are you installing these at customer locations? Since you aren't using ZFS with Proxmox are you doing hardware RAID and then using LVM backed storage on top of that for customers?

      posted in IT Discussion
      biggen
    • RE: Reconsidering ProxMox

      I've been playing with Proxmox quite a bit. I still love xcp-ng, but Proxmox does some things that xcp-ng doesn't. A built-in host management interface, for instance, is wonderful.

      I do wish Proxmox supported MD. I know I can easily configure it but don't really want to do that.

      posted in IT Discussion
      biggen
    • RE: Reconsidering ProxMox

      @scottalanmiller said in Reconsidering ProxMox:

      @biggen said in Reconsidering ProxMox:

      @scottalanmiller What’s your storage configuration like?

      I’ve been playing with it on ZFS Raid 1 mirror. Proxmox OS and VMs all on same mirror. Performance is “OK”. Not as good as MD with same setup though.

      Wonder if it’s better to create separate Raid 1 ZFS pools. One for the Proxmox OS and one for the VMs.

      We don't use ZFS - it's slow and we don't want its features (few actually do.) LVM is what we use. What is making you want to look at ZFS? It's not meant for speed and has little general purpose these days. It's not bad, but mostly it's deployed by accident when people aren't sure what it is. Then people swear by "features" that everything has, thinking they are unique to ZFS.

      ZFS is a great system, with niche applicability.

      I wanted to just mirror an SSD pair but thought the only way to "officially" do that with Proxmox was ZFS since they don't support MD.

      posted in IT Discussion
      biggen
    • RE: Reconsidering ProxMox

      @VoIP_n00b I'll read over your link. I admit I haven't messed with it a ton. Kinda assumed it would work "out of the box" but looks like I need to tinker.

      posted in IT Discussion
      biggen
    • RE: Reconsidering ProxMox

      @VoIP_n00b It's a lab for testing so no enterprise drives. Just a pair of Samsung 970 Pros.

      Box only has 32GB of RAM so that would mean that ZFS on Proxmox would be using at most 16GB of RAM for the ARC by default. Seems like ZFS needs a ton of RAM.
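
      For reference, OpenZFS on Linux caps the ARC at roughly half of physical RAM by default, which is where the 16GB figure comes from. Below is a minimal sketch of that arithmetic and of generating a lower cap; the 4 GiB target and the config path are illustrative assumptions, though zfs_arc_max itself is the real module parameter.

      # Sketch: compute the default ARC ceiling and emit a module-option line to lower it.
      # The 4 GiB target is an arbitrary example; writing /etc/modprobe.d/zfs.conf and
      # refreshing the initramfs / rebooting are left to the reader.

      GIB = 1024 ** 3

      ram_gib = 32                    # RAM in the box from the post
      default_cap_gib = ram_gib // 2  # Linux OpenZFS defaults the ARC ceiling to ~50% of RAM
      print(f"Default ARC ceiling on {ram_gib} GiB of RAM: ~{default_cap_gib} GiB")

      target_gib = 4                  # hypothetical lower cap for a small lab box
      print(f"options zfs zfs_arc_max={target_gib * GIB}")  # line for /etc/modprobe.d/zfs.conf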

      posted in IT Discussion
      biggen
    • RE: Reconsidering ProxMox

      @scottalanmiller What’s your storage configuration like?

      I’ve been playing with it on ZFS Raid 1 mirror. Proxmox OS and VMs all on same mirror. Performance is “OK”. Not as good as MD with same setup though.

      Wonder if it’s better to create separate Raid 1 ZFS pools. One for the Proxmox OS and one for the VMs.

      posted in IT Discussion
      biggen
    • RE: Xeoma NVR

      I get it. But most don't. Companies that give free software for reviews usually target well-known review sites/bloggers in private. They don't simply have a dedicated web page for all to see so that any Tom, Dick, and Harry who can open up a Wordpress account can post a 250 word review and get free stuff.

      It’s just led to some shady practices in the past, where they were asking people to spam forums with reviews if they didn't have a blog to post the review to.

      I actually purchased a 2 camera license from them but decided to move on to Blue Iris, which seems to have better documentation and development. I was also a bit concerned about what information the Xeoma server instance "phones home" to the Kremlin while it's running. It's bad enough we have to use Dahua and Hikvision cameras that are Chinese-made and rife with security issues most of the time.

      posted in IT Discussion
      biggen
    • RE: Xeoma NVR

      The issue I have with these guys is that they are a Russian outfit and they pay reviewers with free camera licenses for good reviews posted online.

      I just don’t care for that business practice.

      posted in IT Discussion
      biggen
    • RE: Linux Desktop: what's the "preferred" distro?

      Typing this from Mint Cinnamon. Pretty impressed with it so far. Pretty snappy interface and browsing with Chromium is good.

      I haven't used Linux on a Desktop/Laptop in probably 10 years. I've only used it for servers since the late 90s. Not having to export/import bookmarks and having Google Chrome save passwords sure makes the transition easier. I'm not a power user with a lot of custom Windows applications so most of my use is on the web which is OS agnostic anyway.

      I'll have to see what custom/pretty themes are out there for Cinnamon.

      posted in IT Discussion
      biggen
    • RE: Linux Desktop: what's the "preferred" distro?

      Getting ready to throw Mint Cinnamon onto a laptop although Pop OS seems to be the new belle of the ball nowadays.

      posted in IT Discussion
      biggen
    • RE: OpenVPN vs WireGuard vs ZeroTier

      I've yet to play with Wireguard even though the home lab guys love it over on reddit. The issue I have is that OpenVPN AS is so darn easy to setup and use. Wireguard looks much more "unpolished" from the small bit I've researched. As @scottalanmiller says, speed isn't really a big deal. I need ease of installation and maintenance which OpenVPN AS has going for it currently over any speed benefits that Wireguard provides.

      I also need Windows Wireguard clients but last I looked those were still in beta testing.

      posted in IT Discussion
      biggen
    • RE: XenServer gave error I'm not familiar with

      @krisleslie So are you on Citrix Xen or xcp-ng?

      posted in IT Discussion
      biggen
    • RE: Dell PERC H740 with SSDs?

      @scottalanmiller Just not as commonplace to find enterprise gear to whitebox with. Where are you looking for enterprise motherboards?

      posted in IT Discussion
      biggen
    • RE: Dell PERC H740 with SSDs?

      @Pete-S

      NVMe is simply outstanding. And the drives themselves are roughly the same price as SAS for comparable models.

      I wish there were more choices for whiteboxed servers as far as NVMe goes. I've not found any prosumer motherboards that support dual M.2 22110 and there are virtually no choices at all for U.2 unless going with a vendor (e.g. Dell, HPE, etc...)

      posted in IT Discussion
      biggen
    • RE: RAID rebuild times 16TB drive

      @StorageNinja No personal experience with it. I've only ever run RAID 1 or 10. Just the reading I've done over the years from people reporting how long it took to rebuild larger RAID 6 arrays.

      BTW, are you the same person who is/was over at Spiceworks? I always enjoyed reading your posts on storage. I respect both you and @scottalanmiller in this arena immensely.

      posted in IT Discussion
      biggen
    • RE: Cloudflare for Families, Anyone?

      Pretty cool. I’ll have to try it and see how it goes.

      I’ve been using Unbound for several years running on a Raspberry Pi and using a custom black list. Love not having to run ad blockers on each computer browser since it’s all taken care of with Unbound.
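
      Since the post mentions feeding Unbound a custom blacklist, here is a minimal sketch of turning a plain one-domain-per-line list into Unbound local-zone rules. The file names are made up; local-zone with always_nxdomain is standard Unbound syntax, but check the version shipped on the Pi.

      # Sketch: convert a one-domain-per-line blocklist into Unbound local-zone rules.
      # "blocklist.txt" and "blocklist.conf" are hypothetical paths; include the output
      # from unbound.conf (e.g. via an include: directive) and reload Unbound afterwards.

      from pathlib import Path

      domains = [
          line.strip().lower()
          for line in Path("blocklist.txt").read_text().splitlines()
          if line.strip() and not line.lstrip().startswith("#")  # skip blanks and comments
      ]

      rules = "\n".join(f'local-zone: "{d}." always_nxdomain' for d in domains)
      Path("blocklist.conf").write_text(rules + "\n")
      print(f"Wrote {len(domains)} local-zone rules")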

      posted in IT Discussion
      biggen
    • RE: RAID rebuild times 16TB drive

      @scottalanmiller @Pete-S

      Excellent. Thanks for that explanation guys and that nifty diagram Pete!

      I guess I was skeptical that I had understood what @Pete-S said correctly because I've seen so many reports that it's taken days/weeks to rebuild [insert whatever size] TB RAID 6 arrays in the past. But I guess that was because those systems weren't just idle. There was still IOPS on those arrays AND a possible CPU/cache bottleneck.

      posted in IT Discussion
      biggen
    • RE: RAID rebuild times 16TB drive

      Just so I understand what you are saying, you are indicating that the array size or RAID type doesn't matter, and that it will simply take ~24 hrs to fully write those 16TB drives if the system isn't doing anything other than a rebuild?

      So, for example, a RAID 1 16TB mirror would have the same rebuild time as a RAID 6 32TB array (4 x 16TB) or a RAID 10 32TB array (4 x 16TB)? I must be misunderstanding.
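
      The back-of-the-envelope version of that claim: an otherwise idle rebuild only has to stream writes to the replacement drive, so it is bounded by that drive's capacity divided by its sustained write speed, regardless of RAID level, as long as the rest of the array and the controller can feed it. A minimal sketch, with the ~200 MB/s write speed assumed for a large HDD:

      # Sketch: idle rebuild time ~= replacement-drive capacity / sustained write speed.
      # The write speed is an assumed figure for a large 7200 rpm HDD; real rebuilds take
      # longer when production IOPS or controller/CPU limits compete with the rebuild.

      capacity_tb = 16
      write_mb_s = 200  # assumed sustained sequential write speed

      seconds = capacity_tb * 1e12 / (write_mb_s * 1e6)
      print(f"{capacity_tb} TB at {write_mb_s} MB/s ~= {seconds / 3600:.1f} hours")
      # -> ~22 hours, in line with the ~24 hr figure discussed in the thread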

      posted in IT Discussion
      biggen