
    Posts by PhlipElder

    • RE: On prem Exchange hardware questions.

      @scottalanmiller said in On prem Exchange hardware questions.:

      @PhlipElder said in On prem Exchange hardware questions.:

      EDIT: Which was great because we weren't being hit by a bus load of calls when O365/M365 went down.

      Sure, but that's like driving without a seatbelt in one car, and having someone in another car with a seatbelt. Then saying "ha, by being in the car without a seatbelt, we weren't hurt when the other car had an accident." It sounds like you are doing something safer, but you aren't, it's just presented in an emotionally misleading way.

      The real advantage to lots of on-prem hosted systems is that outages tend to be temporally isolated - each outage has no connection to another. So you don't get swamped with outage calls all at once, even though your overall downtime is likely many, many times higher and requires way more engineering effort.

      Supporting both, I know the difference is huge. On-prem outages mean we have to dedicate engineering time, generally billable, and do all kinds of customer management. During O365 outages, our service center can just point customers to the DownDetector page and explain that the service is down until MS corrects it. Even with loads of O365 customers calling in at once, it's less effort to deal with 100 customers on O365 during an outage than one on-prem customer that we have to actually fix.

      Again, on prem makes lots of sense at the right times. Just saying that presenting the recent outage as if it would affect the decision of any logical IT shop doing its evaluation properly is misleading. It's an emotional plea, but someone using proper risk assessment would understand that it's just part of any system and the fact that it was recent is not relevant and doesn't affect future risk assessment.

      Not going down this road with you again Scott.

      TTFN

      posted in IT Discussion
    • RE: On prem Exchange hardware questions.

      @scottalanmiller said in On prem Exchange hardware questions.:

      @PhlipElder said in On prem Exchange hardware questions.:

      Exchange via SPLA is a dollar or two a month per SAL. Cheap like Borscht.

      That's the licensing cost. That doesn't include the Windows cost, the hardware cost, the IT costs... that stuff all adds up. I'm not saying that on premises never makes sense, just that you have to compare apples to apples.

      We won a competition against cloud. Our solution set all-in was less than the O365 competition.

      Cloud is never cheaper.

      posted in IT Discussion
    • RE: On prem Exchange hardware questions.

      @scottalanmiller said in On prem Exchange hardware questions.:

      @PhlipElder said in On prem Exchange hardware questions.:

      After the big outage, is that still going to be a go as far as M365/O365?

      That really wasn't a very long outage. Nothing compared to the normal outages from on-prem solutions. Par-for-the-course outages are already assumed in any planning, and that's all that the "big" outage a few days ago was. It's not outside of the standard operation of O365, and everyone using O365 should have already been expecting something like that. And that's not condemning MS, it's just how it is. Hosted isn't perfect, and O365 isn't balanced for maximum uptime, that's not its goal. This wasn't a huge outage in time, nor were there a lot of other outages recently.

      It was for those that depended on it.

      Our on-premises solutions are 100% uptime, with the exception of one due to environmental issues.

      posted in IT Discussion
    • RE: On prem Exchange hardware questions.

      @JasGot said in On prem Exchange hardware questions.:

      I need to provide an on prem exchange solution for a company with 100 users and about 135GB in mailbox database use.

      I am interested if anything has changed in the last few years concerning Exchange on a VM and Spinning v. SSD drives?

      If Exchange were going to be the ONLY VM on a host, would you still go VM?

      Are SSDs overkill for Exchange?

      I know there will be lots of questions and thoughts, but this is what is on my mind right now, so I thought it would be a good place to start.

      I will also be proposing Microsoft 365, but I want to have a solid on prem plan if they choose to stay on prem.

      After the big outage, is that still going to be a go as far as M365/O365?

      Exchange via SPLA is a dollar or two a month per SAL. Cheap like Borscht.

      100 users with a 135GB database is tiny.

      Depending on the users' reliance on search, we would set up as follows:

      • Virtual Machine with 4 vCPUs or 6 vCPUs depending on underlying setup.
      • 24GB vRAM to 32GB vRAM to start
        ** Tuning for search post install - Exchange needs RAM not I/O
      • VHDX/VMDK 0: Operating System
      • VHDX/VMDK 1: Exchange install
      • VHDX/VMDK 2: Database(s)
      • VHDX/VMDK 3: Logs

      We install into Windows Server 2019 Core with the latest updates slipstreamed into the image.
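
      For anyone wanting to replicate that layout on Hyper-V, here's a rough PowerShell sketch. It's a sketch only: the names, paths, and disk sizes are placeholders, not our production values, so tune them to the host.

      # Rough sketch of the Exchange guest layout described above (Hyper-V).
      # Names, paths, and disk sizes are illustrative placeholders - adjust to the host.
      $vm   = 'EXCH01'
      $root = 'D:\Hyper-V\EXCH01'

      # Generation 2 guest with the OS disk (VHDX 0) created up front.
      New-VM -Name $vm -Generation 2 -MemoryStartupBytes 24GB `
             -NewVHDPath "$root\0-OS.vhdx" -NewVHDSizeBytes 120GB

      # 4 vCPUs to start; bump to 6 depending on the underlying setup.
      Set-VMProcessor -VMName $vm -Count 4

      # Static RAM - Exchange needs RAM, not I/O, and not dynamic memory games.
      Set-VMMemory -VMName $vm -DynamicMemoryEnabled $false -StartupBytes 24GB

      # Separate VHDX files for the Exchange install, database(s), and logs.
      foreach ($disk in @(
              @{ Name = '1-ExchangeInstall'; Size = 150GB },
              @{ Name = '2-Databases';       Size = 500GB },
              @{ Name = '3-Logs';            Size = 200GB })) {
          $path = "$root\$($disk.Name).vhdx"
          New-VHD -Path $path -SizeBytes $disk.Size -Dynamic | Out-Null
          Add-VMHardDiskDrive -VMName $vm -Path $path
      }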

      So far, we've done quite well.

      As an FYI: Almost all of our clients are on-premises for their workloads including Exchange.

      EDIT: Which was great because we weren't being hit by a bus load of calls when O365/M365 went down. 😉

      posted in IT Discussion
    • RE: How can we recover data from Hard Drives were on RAID 10 without controller?

      @openit www.runtime.org
      GetDataBack for NTFS with RAID Reconstructor.
      We've had excellent success with their product.

      posted in IT Discussion
    • RE: What Are You Watching Now

      @scottalanmiller said in What Are You Watching Now:

      Some classic Frasier with the family.

      We're talking about what we're going to watch as a family next. Currently, it's The Waltons.

      I'm thinking of picking up Star Trek: The Next Generation and Deep Space Nine, then on to Babylon 5.

      posted in Water Closet
    • RE: What Are You Watching Now

      Inception in the theatre. Twice.

      Saw TENET on Thursday. The theatre's sound was hooped, so I missed most of the dialogue and came away with a migraine because of the overdriven bass. Still got the gist of it, but I need to see it in a better theatre.

      Tonight, family friends are coming over to watch The Dark Knight Rises. We've watched the previous two already (we get together once every two weeks). Before that it was the Lord of the Rings Extended Editions. It's becoming a bit of a tradition here. 🙂

      posted in Water Closet
    • RE: NVMe and RAID?

      @biggen said in NVMe and RAID?:

      But what about the server case itself? What models are you putting these components in? I'd probably do a tower for the initial build.

      Pedestal: Silverstone CS381.
      Rack Chassis: We go barebones from a variety of vendors: Intel, TYAN, ASRock Rack, and others.
      Rack Chassis Standalone: Chenbro comes to mind. Silverstone also makes them. We've looked into iStar and Rosewill, though we never jumped on board.

      posted in IT Discussion
    • RE: NVMe and RAID?

      @biggen said in NVMe and RAID?:

      The SFF-8654 to dual SFF-8643 is a bit of a unicorn, isn't it? Heck, the SFF-8654 isn't even listed in the SAS wiki.

      They are now. Finding them was a real challenge. And even then, we need to order them in bulk.

      We may put a few up for sale for folks doing custom builds since they are so hard to find.

      We have plans for them. 🙂

      posted in IT Discussion
    • RE: NVMe and RAID?

      @biggen said in NVMe and RAID?:

      So this Icy Dock enclosure would connect to both of those SlimSAS port with what exactly? Four of these?

      Edit: No that wouldn't work. Like you said, need a Y-cable. Something like this?

      Correct on both counts.
      https://blog.mpecsinc.com/2020/07/27/custom-build-s2d-the-elusive-slimsas-8x-sff-8654-cable/

      posted in IT Discussion
    • RE: NVMe and RAID?

      @biggen said in NVMe and RAID?:

      @PhlipElder

      The ROMED6U-2L2T is mATX? What's the advantage there over a full-size ATX board?

      Smaller chassis. It's the next best thing to Mini-ITX but without the pains of dealing with Mini-ITX.

      posted in IT Discussion
    • RE: NVMe and RAID?

      @biggen said in NVMe and RAID?:

      Yeah I have no problem whiteboxing stuff for me (or close family), but when you do it for others, they expect tech support for life. I don't really want to go down that road 🙂

      But a PoC build may be more "in line" with his budget needs. Thanks for that @PhlipElder !

      That's what we do as a business.

      We've been system builders since day one of MPECS in 2003, and for myself since the late 1990s.

      We have a parts bin full of broken promises.

      But, we also have a defined solution set that we know works so we run with them.

      Our support terms are clearly defined and require a contract.

      We are either building a mutually beneficial business relationship or it ain't gonna happen. We don't do one-offs unless there's good reason to.

      posted in IT Discussion
    • RE: NVMe and RAID?

      @scottalanmiller said in NVMe and RAID?:

      @PhlipElder said in NVMe and RAID?:

      @marcinozga said in NVMe and RAID?:

      @PhlipElder said in NVMe and RAID?:

      @marcinozga said in NVMe and RAID?:

      @PhlipElder said in NVMe and RAID?:

      @biggen said in NVMe and RAID?:

      I appreciate all the help guys. Yeah I'm compiling a price list but it ain't cheap. Server alone would be about $7k and that's on the low end with smaller NVMe drives (1.6TB). Then still have to purchase the switch and then have to purchase the 10GbE NICs for the workstations themselves.

      It's a large investment that I bet never sees the light of day. It will turn into "I have $2k, what can you build with that?"

      FleaBay is your best friend. 😉

      10GbE pNIC: Intel x540: $100 to $125 each.

      For a 10GbE switch, go for the NETGEAR XS712T, XS716T, or XS728T depending on the port density needed. The 12-port is $1K.

      As far as the server goes, is this a proof of concept driven project?

      • ASRock Rack Board
        ** Dual 10GbE On Board (designated by -2T)
      • Intel Xeon Scalable or AMD EPYC Rome
      • Crucial/Samsung ECC Memory
      • Power Supply

      The board should have at least one SlimSAS x8 or preferably two. Each of those ports gives you two NVMe drives. An SFF-8654 Y cable to connect to a two drive enclosure would be needed. I suggest ICYDOCK.

      The build will cost a fraction of a Tier 1 box.

      Once the PoC has been run and the kinks worked out, then go for the Tier 1 box tailored to your needs.

      I love ASRock Rack products; their support is great if they can actually fix the damn issues. If not, you're SOL. My next server refresh will have this board: https://www.asrockrack.com/general/productdetail.asp?Model=ROMED8-2T#Specifications

      We just received two ROMED6U-2L2T boards:
      https://www.asrockrack.com/general/productdetail.asp?Model=ROMED6U-2L2T#Specifications

      They are perfect boards for our cluster storage nodes with two built-in 10GbE ports. Add an AMD EPYC Rome 7262 processor, 96GB or 192GB of ECC memory, four NVMe drives via SlimSAS x8 on board, and up to twelve SATA SSDs or HDDs for capacity, and we have a winner.

      FYI: We only use EPYC Rome processors with a TDP of 155 watts or higher. Cost wise, there's very little increase while the performance benefits are there.

      EDIT: Missed the Slimline x8 beside the MiniSAS HD ports. That's six NVMe drives if we go that route.

      You're probably overpaying with that CPU, here's a deal not many know about, Epyc 7302P for $713
      https://www.provantage.com/hpe-p16667-b21~7CMPTCR7.htm

      We're in Canada. We overpay for everything up here. :S

      And even when you pay a lot, you often can't get things. We tried to order stuff from Insight Canada for our Montreal office and after a week of not being able to ship, they eventually just told us that they couldn't realistically service Canada.

      We're creative with our procurement process so we don't have issues with getting product.

      Insight is tied to Ingram Micro. If they don't have it, Insight doesn't.

      Our Canadian distribution network used to be quite homogeneous with all three major distributors having similar line cards. The competition was good though pricing was fairly consistent across the three.

      We have a number of niche suppliers that help when we can't get product from the Big Three, always making sure we're dealing with legit product, not grey market. We verify that with our vendor contacts.

      PING if you need anything. 😉

      posted in IT Discussion
    • RE: What Are You Watching Now

      My daughter saw Inception last week with friends. When she came home she wanted to see it again so I took her.

      Wow. What an amazing movie.

      Bought the Blu-Ray so we're going to watch it as a fam probably this evening.

      posted in Water Closet
    • RE: NVMe and RAID?

      @marcinozga said in NVMe and RAID?:

      @PhlipElder said in NVMe and RAID?:

      @marcinozga said in NVMe and RAID?:

      @PhlipElder said in NVMe and RAID?:

      @biggen said in NVMe and RAID?:

      I appreciate all the help guys. Yeah I'm compiling a price list but it ain't cheap. Server alone would be about $7k and that's on the low end with smaller NVMe drives (1.6TB). Then still have to purchase the switch and then have to purchase the 10GbE NICs for the workstations themselves.

      It's a large investment that I bet never sees the light of day. It will turn into "I have $2k, what can you build with that?"

      FleaBay is your best friend. 😉

      10GbE pNIC: Intel x540: $100 to $125 each.

      For a 10GbE switch, go for the NETGEAR XS712T, XS716T, or XS728T depending on the port density needed. The 12-port is $1K.

      As far as the server goes, is this a proof of concept driven project?

      • ASRock Rack Board
        ** Dual 10GbE On Board (designated by -2T)
      • Intel Xeon Scalable or AMD EPYC Rome
      • Crucial/Samsung ECC Memory
      • Power Supply

      The board should have at least one SlimSAS x8 or preferably two. Each of those ports gives you two NVMe drives. An SFF-8654 Y cable to connect to a two drive enclosure would be needed. I suggest ICYDOCK.

      The build will cost a fraction of a Tier 1 box.

      Once the PoC has been run and the kinks worked out, then go for the Tier 1 box tailored to your needs.

      I love ASRock Rack products; their support is great if they can actually fix the damn issues. If not, you're SOL. My next server refresh will have this board: https://www.asrockrack.com/general/productdetail.asp?Model=ROMED8-2T#Specifications

      We just received two ROMED6U-2L2T boards:
      https://www.asrockrack.com/general/productdetail.asp?Model=ROMED6U-2L2T#Specifications

      They are perfect boards for our cluster storage nodes with two built-in 10GbE ports. Add an AMD EPYC Rome 7262 processor, 96GB or 192GB of ECC memory, four NVMe drives via SlimSAS x8 on board, and up to twelve SATA SSDs or HDDs for capacity, and we have a winner.

      FYI: We only use EPYC Rome processors with a TDP of 155 watts or higher. Cost wise, there's very little increase while the performance benefits are there.

      EDIT: Missed the Slimline x8 beside the MiniSAS HD ports. That's six NVMe drives if we go that route.

      You're probably overpaying with that CPU, here's a deal not many know about, Epyc 7302P for $713
      https://www.provantage.com/hpe-p16667-b21~7CMPTCR7.htm

      We're in Canada. We overpay for everything up here. :S

      posted in IT Discussion
    • RE: NVMe and RAID?

      @marcinozga said in NVMe and RAID?:

      @PhlipElder said in NVMe and RAID?:

      @biggen said in NVMe and RAID?:

      I appreciate all the help guys. Yeah I'm compiling a price list but it ain't cheap. Server alone would be about $7k and that's on the low end with smaller NVMe drives (1.6TB). Then still have to purchase the switch and then have to purchase the 10GbE NICs for the workstations themselves.

      It's a large investment that I bet never sees the light of day. It will turn into "I have $2k, what can you build with that?"

      FleaBay is your best friend. 😉

      10GbE pNIC: Intel x540: $100 to $125 each.

      For a 10GbE switch, go for the NETGEAR XS712T, XS716T, or XS728T depending on the port density needed. The 12-port is $1K.

      As far as the server goes, is this a proof of concept driven project?

      • ASRock Rack Board
        ** Dual 10GbE On Board (designated by -2T)
      • Intel Xeon Scalable or AMD EPYC Rome
      • Crucial/Samsung ECC Memory
      • Power Supply

      The board should have at least one SlimSAS x8 or preferably two. Each of those ports gives you two NVMe drives. An SFF-8654 Y cable to connect to a two drive enclosure would be needed. I suggest ICYDOCK.

      The build will cost a fraction of a Tier 1 box.

      Once the PoC has been run and the kinks worked out, then go for the Tier 1 box tailored to your needs.

      I love ASRock Rack products; their support is great if they can actually fix the damn issues. If not, you're SOL. My next server refresh will have this board: https://www.asrockrack.com/general/productdetail.asp?Model=ROMED8-2T#Specifications

      We just received two ROMED6U-2L2T boards:
      https://www.asrockrack.com/general/productdetail.asp?Model=ROMED6U-2L2T#Specifications

      They are perfect boards for our cluster storage nodes with two built-in 10GbE ports. Add an AMD EPYC Rome 7262 processor, 96GB or 192GB of ECC memory, four NVMe drives via SlimSAS x8 on board, and up to twelve SATA SSDs or HDDs for capacity, and we have a winner.

      FYI: We only use EPYC Rome processors with a TDP of 155 watts or higher. Cost wise, there's very little increase while the performance benefits are there.

      EDIT: Missed the Slimline x8 beside the MiniSAS HD ports. That's six NVMe drives if we go that route.

      posted in IT Discussion
    • RE: NVMe and RAID?

      @biggen said in NVMe and RAID?:

      I appreciate all the help guys. Yeah I'm compiling a price list but it ain't cheap. Server alone would be about $7k and that's on the low end with smaller NVMe drives (1.6TB). Then still have to purchase the switch and then have to purchase the 10GbE NICs for the workstations themselves.

      It's a large investment that I bet never sees the light of day. It will turn into "I have $2k, what can you build with that?"

      FleaBay is your best friend. 😉

      10GbE pNIC: Intel x540: $100 to $125 each.

      For a 10GbE switch, go for the NETGEAR XS712T, XS716T, or XS728T depending on the port density needed. The 12-port is $1K.

      As far as the server goes, is this a proof of concept driven project?

      • ASRock Rack Board
        ** Dual 10GbE On Board (designated by -2T)
      • Intel Xeon Scalable or AMD EPYC Rome
      • Crucial/Samsung ECC Memory
      • Power Supply

      The board should have at least one SlimSAS x8 or preferably two. Each of those ports gives you two NVMe drives. An SFF-8654 Y cable to connect to a two drive enclosure would be needed. I suggest ICYDOCK.

      The build will cost a fraction of a Tier 1 box.

      Once the PoC has been run and the kinks worked out, then go for the Tier 1 box tailored to your needs.
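
      Once the drives are cabled in, it's worth a quick sanity check from the OS side before layering storage on top. We're an all-Microsoft shop, so a few lines of PowerShell along these lines (nothing vendor-specific, just the in-box Storage module) will confirm the NVMe drives behind the SlimSAS ports are visible:

      # List the NVMe drives the OS can see (Windows Storage module, in-box).
      Get-PhysicalDisk |
          Where-Object { $_.BusType -eq 'NVMe' } |
          Select-Object FriendlyName, MediaType, HealthStatus,
              @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB) } } |
          Sort-Object FriendlyName |
          Format-Table -AutoSize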

      posted in IT Discussion
    • RE: NVMe and RAID?

      @scottalanmiller said in NVMe and RAID?:

      @Pete-S said in NVMe and RAID?:

      If you do a fileserver like this, skip the hypervisor completely and run it on bare metal. You'll lose a ton of performance otherwise.

      Agreed. This is one of those rare exceptions.

      I'm not sure about this claim. Maybe ten years ago.

      The above solution I mentioned has the workloads virtualized. We've had no issues saturating a setup with IOPS or throughput by utilizing virtual machines.

      It's all in the system configuration, OS tuning, and fabric putting it all together. Much like setting up a 6.2L boosted application, there's a lot of pieces to the puzzle.

      EDIT: As a qualifier, we're an all Microsoft house. No VMware here.

      posted in IT Discussion
    • RE: NVMe and RAID?

      @biggen said in NVMe and RAID?:

      @Pete-S I'll have to look again then at Intel offering. I figured AMD had Intel blown out of the water as far as cost-per-core offerings go nowadays.

      On a pound-for-pound basis, the AMD EPYC Rome platforms we are working with are less expensive and vastly superior in performance.

      posted in IT Discussion
    • RE: NVMe and RAID?

      @biggen said in NVMe and RAID?:

      @PhlipElder said in NVMe and RAID?:

      @biggen Right now, the only place we're using NVMe in servers is for either cache in a hybrid storage setting (cache/capacity or cache/performance/capacity) or for servers with all NVMe.

      Intel's VROC plug-in dongle enables RAID 1 in certain settings. That's driven by the CPU. Not sure Dell supports it.

      For most applications, an R740xd with high performance NV-Cache and SATA SSD in RAID 6 will do. Intel SSD DC-S4610 series (D3-4610).

      We have plenty of setups like that for virtualized SQL/database workloads as well as 4K/8K video storage.

      EDIT: Forgot, in the Intel Server Systems we deploy we install a couple Intel NVMe drives, the VROC dongle for Intel only NVMe, and RAID 1 them for the host OS.

      Thanks for that info. Yeah, I'm thinking NVMe is probably overkill for video editing over a network connection. Especially considering the fact that he would be network bound anyway. I was thinking either 12Gb SAS SSDs in RAID 1 (2TB+ variety) or 6Gb SATA SSDs in RAID 1. This at least gives the option to go back to hot/blind swap with the appropriate PERC.

      We deployed an Intel Server System R2224WFTZSR 2U dual socket with a pair of Intel Xeon Gold 6240Y processors. We set up two dual-port Intel x540-T2 10GbE network adapters and a pair of LSI SAS HBAs for external SAS cable connections. Its purpose was to host two to four virtual machines for 150 to 300 1080P cameras throughout a building.

      Between 5 and 15 of those camera streams would be processed by recognition software and fire e-mail flags off to management staff for various conditions.

      Storage is a pair of Intel SSDs for the host OS, a pair of Intel SSD D3-S4610 series in RAID 1 for the high I/O processing, and an HGST 60-bay JBOD loaded with 12TB NearLine SAS drives.

      We used Storage Spaces to set up a 3-way mirror on the drives in the JBOD, yielding 33% of raw capacity as production storage.
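
      For reference, the pool setup is nothing exotic. A minimal Storage Spaces sketch along those lines is below; the friendly names are placeholders rather than our actual ones, and the 33% figure simply falls out of the 3-way mirror keeping one third of raw capacity (roughly 60 x 12TB = 720TB raw, so about 240TB usable before filesystem overhead).

      # Minimal Storage Spaces sketch: pool the JBOD spindles, then carve a 3-way mirror.
      # Friendly names are placeholders; a 3-way mirror keeps ~1/3 of raw capacity.
      $disks = Get-PhysicalDisk -CanPool $true |
          Where-Object { $_.MediaType -eq 'HDD' }

      New-StoragePool -FriendlyName 'JBODPool' `
          -StorageSubSystemFriendlyName (Get-StorageSubSystem -FriendlyName '*Windows*').FriendlyName `
          -PhysicalDisks $disks

      # Three data copies = 3-way mirror; it survives two simultaneous drive failures.
      New-VirtualDisk -StoragePoolFriendlyName 'JBODPool' -FriendlyName 'CameraStore' `
          -ResiliencySettingName Mirror -NumberOfDataCopies 3 `
          -ProvisioningType Fixed -UseMaximumSize

      # Initialize, partition, and format (ReFS or NTFS) as usual after that.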

      Constant throughput is about 375MB/Second to 495MB/Second depending on how many folks are moving through the building.

      We've put a number of other virtual machines on the server to utilize more CPU.

      4K video editing is something we have on the radar for these folks as they've started filming their vignettes and other recordings in 4K.

      posted in IT Discussion