Tags: blade
    • Pete.S

      Experience with Supermicro Microcloud servers?
      IT Discussion • supermicro blade • Pete.S

      0 Votes • 17 Posts • 483 Views

      Dashrender

      @pete-s said in Experience with Supermicro Microcloud servers?:

      @dashrender said in Experience with Supermicro Microcloud servers?:

      @pete-s said in Experience with Supermicro Microcloud servers?:

      @dashrender said in Experience with Supermicro Microcloud servers?:

      @pete-s said in Experience with Supermicro Microcloud servers?:

      @dashrender said in Experience with Supermicro Microcloud servers?:

      What's the use case?

      For blades in general? It's hyperconverged infrastructure, hosting environments, container clusters, etc. Basically anywhere you want to cram as much compute as possible into the least amount of rack space.

      I suppose - but damn - that seems like a HUGE amount of compute power next to a low amount of storage. If that's the setup you need - again, a HUGE amount of compute and tiny storage - then it's probably just fine.

      I know what you mean, but it's not really that low. Consider that the server I linked to has 3.5" bays. So you can have 2 x 18TB (a standard enterprise size in stock) per node, or 288 TB of raw storage per 3U chassis. A rack full of those will give you over 3 PB of disk, or 1.5 PB of SSDs (8TB each) - rough math in the sketch after this excerpt.

      There are other models too; some have 4 bays per node. So you have some options.

      That storage ends up being so incredibly slow that the power of the CPUs seems like it would be wasted.

      Now if all of the storage is hanging off a single node, or split between two/three nodes, then we start looking more like a Scale box, only way smaller.

      I'd be worried about only having two power supplies in there too. That might be folly on my part, but with that many drives/CPUs and only two PSUs?

      Today you don't need a lot of spindles in an array to get speed. Storage would be blazing fast with, for example, two NVMe drives per node.

      8TB is readily available but you could get 16TB NVMe drives too.

      Yeah, NVMe would be fast... I made an assumption, before looking more closely at your picture, that it was limited to HDDs, which today would just be stupid... so my bad.
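
      [Editor's note: a quick back-of-the-envelope Python sketch of the capacity figures quoted above. The assumptions are mine, not from the thread: 8 nodes per 3U chassis (which is consistent with the 2 x 18 TB = 288 TB per chassis figure) and 12 chassis per 42U rack, leaving room for switching.]

          NODES_PER_CHASSIS = 8   # assumed; consistent with 288 TB / (2 x 18 TB)
          CHASSIS_PER_RACK = 12   # assumed; 36U of a 42U rack, rest for switches
          BAYS_PER_NODE = 2       # 3.5" bays on the model discussed

          def raw_capacity(drive_tb):
              """Raw capacity per node (TB), per chassis (TB), and per rack (PB)."""
              node_tb = BAYS_PER_NODE * drive_tb
              chassis_tb = node_tb * NODES_PER_CHASSIS
              rack_pb = chassis_tb * CHASSIS_PER_RACK / 1000
              return node_tb, chassis_tb, rack_pb

          print(raw_capacity(18))  # 18 TB HDDs -> (36, 288, 3.456): over 3 PB/rack
          print(raw_capacity(8))   # 8 TB SSDs  -> (16, 128, 1.536): ~1.5 PB/rack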

    • scottalanmiller

      Reset HP BladeSystem ILO to DHCP
      IT Discussion • hp hpe blade ilo • scottalanmiller

      0 Votes • 3 Posts • 283 Views

      scottalanmiller

      This process removes the pre-assigned static IP address from the blade. This commonly happens when the blade has been moved from another enclosure. Very annoying.

      If your chassis has EBIPA set, the blade will use that instead of DHCP upon this reset.
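
      [Editor's note: a rough sketch of checking and clearing EBIPA from the Onboard Administrator CLI. Syntax is from memory and bay 3 is a placeholder, so verify against your OA firmware's command reference.]

          SHOW EBIPA                (list fixed iLO addresses assigned per device bay)
          DISABLE EBIPA SERVER 3    (stop assigning a fixed address to bay 3's iLO)
          SAVE EBIPA                (EBIPA changes only take effect after a save)
          SHOW SERVER INFO 3        (confirm what address the blade's iLO picked up)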

    • scottalanmiller

      HP c7000 Blade Enclosure Failed Validation Error in Virtual Connection Manager on FlexFabric
      IT Discussion • hp hewlett-packard hpe blade hp c7000 hpe virtual connect manager hpe virtual connect flexfabric • scottalanmiller

      0 Votes • 9 Posts • 971 Views

      scottalanmiller

      Here is what showed up in the unit...

      https://assets.curvature.com/sites/default/files/pdf/GLC-SX-MMD-CURV.pdf

    • scottalanmiller

      Deployment Scenarios for the Dell PowerEdge VRTX
      xByte • dell dell poweredge dell vrtx blade perc8 das dell poweredge m630 dell poweredge m830 vdi robo • scottalanmiller

      1 Vote • 4 Posts • 1736 Views

      scottalanmiller

      @Tim_G said in Deployment Scenarios for the Dell PowerEdge VRTX:

      I'd certainly love one of these for a home lab. That's for sure!

      I can see the whole ROBO deployment scenario. Even in single/several-use situations, like you mentioned... such as for specific critical high-performing services that won't outgrow it. Anywhere else, I'd feel stuck with it... what happens if you start to outgrow the resources? How scalable is it? That's what would determine its worth in place of regular 2U deployment servers.

      Maybe I'm just not a blade type person.

      It's really meant for contained scenarios where you know how big you will grow. For a normal SMB, the issue is growing into it, rather than outgrowing it.

    • scottalanmiller

      What is a Blade Server
      IT Discussion • servers blade • scottalanmiller

      2 Votes • 18 Posts • 3616 Views

      StrongBad

      Blades seem to make you give up a lot of flexibility. With an old-fashioned server I can run it diskless today and add disks tomorrow if the way that I want to use it changes. But if I have a blade, I'm stuck.

    • Eric

      Dell VRTX
      IT Discussion • dell dell vrtx server san das storage blade • Eric

      2 Votes • 14 Posts • 4965 Views

      scottalanmiller

      Something to consider with a VRTX is that a loaded chassis holds eight to sixteen Intel Xeon CPUs. That is a massive amount of compute power (with very little storage throughput). So you have the CPU power to easily handle ~400 typical VMs, but the storage capacity and throughput of no more than an R510. Even an R720xd or R730xd has more drive capacity than the VRTX. So the ratio of IOPS and capacity to CPU is wildly different than with normal servers.
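
      [Editor's note: a rough Python sketch of the consolidation math behind that ~400 VM figure. The core count, VM sizing, and oversubscription ratio are assumptions of mine, not numbers from the post; only the socket counts come from the thread (four blades, two to four sockets each).]

          SOCKETS = 16           # 4 blades x 4 sockets (8 in a 2-socket config)
          CORES_PER_SOCKET = 12  # hypothetical mid-range Xeon
          VCPU_PER_VM = 2        # assumed "typical" VM
          OVERSUB = 4            # assumed vCPU:pCPU ratio for light workloads

          physical_cores = SOCKETS * CORES_PER_SOCKET           # 192 cores
          vm_capacity = physical_cores * OVERSUB // VCPU_PER_VM
          print(vm_capacity)  # 384 -- in the neighborhood of the ~400 VM claim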