To vSAN or not to vSAN?



  • @Shaman06 said in To vSAN or not to vSAN?:

    Try as I might, I can't get them to separate ops and dev. Ops is day to day stuff like file servers and the local domain controller.

    That's ops for sure, aka production.



  • @Shaman06 said in To vSAN or not to vSAN?:

    Additional questions aside, it seems that no one is in favor of a vSAN given the provided details so I'm probably going to stick with DAS. I was pretty sure it was the way to go, I just love the idea of instant vMotion.

    Yeah, that's the way to go here. Don't overcomplicate things; that's a lot of effort without any real need.



  • @DustinB3403 said in To vSAN or not to vSAN?:

    @Shaman06 said in To vSAN or not to vSAN?:

    Additional questions aside, it seems that no one is in favor of a vSAN given the provided details so I'm probably going to stick with DAS. I was pretty sure it was the way to go, I just love the idea of instant vMotion.

    This isn't that we aren't in favor of vSAN. It's that you aren't in a place to take advantage of it. DAS, while slower, is what you have today and you can't change that. Adding vSAN on top will make things more complex and slower (because of the complexity).

    The simple answer is to replace everything and use Direct Attached Storage.

    DAS is faster than vSAN, but slower than local storage. vSAN requires replication, which introduces latency.



  • @scottalanmiller said in To vSAN or not to vSAN?:

    @Obsolesce said in To vSAN or not to vSAN?:

    @DustinB3403 said in To vSAN or not to vSAN?:

    DAS while slower

    DAS is fast AF

    But when compared to internal local storage, it's slower. Don't confuse absolute and relative performance.

    No confusion here. I was staying within the context of vSAN vs. DAS (or vSAN using DAS) in this post, where Dustin said DAS is slower than vSAN. Nobody else mentioned local storage. I'm guessing you got confused over things not said (or said only by you) and thought you needed to add in some random facts that aren't even really a factor in this case.

    Also, it seems apparent that in the OP's case, the throughput difference between local storage and DAS is negligible. So much so that it's not even a consideration. There wasn't any mention of any factors to assume otherwise.



  • @scottalanmiller said in To vSAN or not to vSAN?:

    @Obsolesce said in To vSAN or not to vSAN?:

    @DustinB3403 said in To vSAN or not to vSAN?:

    DAS while slower

    DAS is fast AF

    But still one more hop, so slower.

    In absolute terms it's on the faster side of things.

    But when compared to internal local storage, it's slower. Don't confuse absolute and relative performance.

    I thought DAS (direct attached storage) was the same as internal storage; it's just sitting in a box external to the server. The server's RAID card has a cable to the DAS chassis, which then connects to the backplanes in the DAS chassis and on to disk... no different from RAID controller > cable > backplane in server > disk.
    Where is my misunderstanding?

    Now the cables are longer, so maybe that's where it's slower?



  • @scottalanmiller said in To vSAN or not to vSAN?:

    @Shaman06 said in To vSAN or not to vSAN?:

    @scottalanmiller said in To vSAN or not to vSAN?:

    What's making them want to keep running on it and investing your expensive time into supporting very cheap old gear (cheap now that it is old)?

    The environment is split. Our "prod" is almost brand new and uses an NVMe vSAN. As long as that environment is quick and running, very little care is given to our "ops" environment. To the business's credit, they seem to acknowledge that it's old hardware and don't seem to care a ton about downtime. As an IT professional, I loathe unscheduled downtime.

    If the business doesn't care, then it doesn't matter. As an IT pro, your desires should exactly mimic the business's. Any deviation, with IT desiring something different from what the business desires, is a mistake on IT's side. If downtime isn't important, it isn't important, period. Don't let IT become emotional; that's common and leads to an unhealthy mismatch. IT has no needs outside of the business's needs; they conceptually don't exist.

    This is something you constantly preach - but really, when have you ever heard (and being who you are, you've somehow miraculously heard it) an owner/CEO say, "I don't care about downtime"? In general, they simply don't say that.



  • @Dashrender said in To vSAN or not to vSAN?:

    @scottalanmiller said in To vSAN or not to vSAN?:

    @Shaman06 said in To vSAN or not to vSAN?:

    @scottalanmiller said in To vSAN or not to vSAN?:

    What's making them want to keep running on it and investing your expensive time into supporting very cheap old gear (cheap now that it is old)?

    The environment is split. Our "prod" is almost brand new and uses an NVMe vSAN. As long as that environment is quick and running, very little care is given to our "ops" environment. To the business's credit, they seem to acknowledge that it's old hardware and don't seem to care a ton about downtime. As an IT professional, I loathe unscheduled downtime.

    If the business doesn't care, then it doesn't matter. As an IT pro, your desires should exactly mimic the business's. Any deviation, with IT desiring something different from what the business desires, is a mistake on IT's side. If downtime isn't important, it isn't important, period. Don't let IT become emotional; that's common and leads to an unhealthy mismatch. IT has no needs outside of the business's needs; they conceptually don't exist.

    This is something you constantly preach - but really, when have you ever heard (and being who you are, you've somehow miraculously heard it) an owner/CEO say, "I don't care about downtime"? In general, they simply don't say that.

    I've not heard anyone say this. I've heard owners/CEOs say they don't care (about the minutiae), and that as long as the business is functional, they neither care nor want anyone or anything to cost more money.



  • @Dashrender said in To vSAN or not to vSAN?:

    @scottalanmiller said in To vSAN or not to vSAN?:

    @Shaman06 said in To vSAN or not to vSAN?:

    @scottalanmiller said in To vSAN or not to vSAN?:

    What's making them want to keep running on it and investing your expensive time into supporting very cheap old gear (cheap now that it is old)?

    The environment is split. Our "prod" is almost brand new and uses an NVMe vSAN. As long as that environment is quick and running, very little care is given to our "ops" environment. To the business's credit, they seem to acknowledge that it's old hardware and don't seem to care a ton about downtime. As an IT professional, I loathe unscheduled downtime.

    If the business doesn't care, then it doesn't matter. As an IT pro, your desires should exactly mimic the business's. Any deviation, with IT desiring something different from what the business desires, is a mistake on IT's side. If downtime isn't important, it isn't important, period. Don't let IT become emotional; that's common and leads to an unhealthy mismatch. IT has no needs outside of the business's needs; they conceptually don't exist.

    This is something you constantly preach - but really, when have you ever heard (and being who you are, you've somehow miraculously heard it) an owner/CEO say, "I don't care about downtime"? In general, they simply don't say that.

    Words mean nothing. What people actually SAY, they say through policy. And MOST CEOs don't just say it, they demand it. Loudly. Clearly. Without question. Constantly.



  • @Dashrender said in To vSAN or not to vSAN?:

    I don't care about downtime.

    The only thing they SHOULD say is "I care about profits." As soon as a CEO cares about downtime rather than the cost that downtime might cause, they have no business being a CEO, or even in business.



  • @Dashrender said in To vSAN or not to vSAN?:

    @scottalanmiller said in To vSAN or not to vSAN?:

    @Obsolesce said in To vSAN or not to vSAN?:

    @DustinB3403 said in To vSAN or not to vSAN?:

    DAS while slower

    DAS is fast AF

    But still one more hop, so slower.

    In absolute terms it's on the faster side of things.

    But when compared to internal local storage, it's slower. Don't confuse absolute and relative performance.

    I thought DAS (direct attached storage) was the same as internal storage; it's just sitting in a box external to the server. The server's RAID card has a cable to the DAS chassis, which then connects to the backplanes in the DAS chassis and on to disk... no different from RAID controller > cable > backplane in server > disk.
    Where is my misunderstanding?

    Now the cables are longer, so maybe that's where it's slower?

    Let's take a theoretical, perfectly straight road. Is it going to take longer to drive 1 mile or 2 miles? It's basic physics. Each additional piece in the chain adds time, even if it's just an external JBOD.



  • @travisdh1 said in To vSAN or not to vSAN?:

    @Dashrender said in To vSAN or not to vSAN?:

    @scottalanmiller said in To vSAN or not to vSAN?:

    @Obsolesce said in To vSAN or not to vSAN?:

    @DustinB3403 said in To vSAN or not to vSAN?:

    DAS while slower

    DAS is fast AF

    But still one more hop, so slower.

    In absolute terms it's on the faster side of things.

    But when compared to internal local storage, it's slower. Don't confuse absolute and relative performance.

    I thought DAS (direct attached storage) was the same as internal storage; it's just sitting in a box external to the server. The server's RAID card has a cable to the DAS chassis, which then connects to the backplanes in the DAS chassis and on to disk... no different from RAID controller > cable > backplane in server > disk.
    Where is my misunderstanding?

    Now the cables are longer, so maybe that's where it's slower?

    Let's take a theoretical, perfectly straight road. Is it going to take longer to drive 1 mile or 2 miles? It's basic physics. Each additional piece in the chain adds time, even if it's just an external JBOD.

    So we're talking microseconds here, or even nanoseconds... gotcha.



  • @Dashrender said in To vSAN or not to vSAN?:

    @travisdh1 said in To vSAN or not to vSAN?:

    @Dashrender said in To vSAN or not to vSAN?:

    @scottalanmiller said in To vSAN or not to vSAN?:

    @Obsolesce said in To vSAN or not to vSAN?:

    @DustinB3403 said in To vSAN or not to vSAN?:

    DAS while slower

    DAS is fast AF

    But still one more hop, so slower.

    In absolute terms it's on the faster side of things.

    But when compared to internal local storage, it's slower. Don't confuse absolute and relative performance.

    I thought DAS (direct attached storage) was the same as internal storage; it's just sitting in a box external to the server. The server's RAID card has a cable to the DAS chassis, which then connects to the backplanes in the DAS chassis and on to disk... no different from RAID controller > cable > backplane in server > disk.
    Where is my misunderstanding?

    Now the cables are longer, so maybe that's where it's slower?

    Let's take a theoretical, perfectly straight road. Is it going to take longer to drive 1 mile or 2 miles? It's basic physics. Each additional piece in the chain adds time, even if it's just an external JBOD.

    So we're talking microseconds here, or even nanoseconds... gotcha.

    Yes, and in storage you notice it, because storage involves insane numbers of back-and-forth trips. No one is saying it isn't fast; the physics just say it can never be "as fast", all other factors being equal.

    It's physically farther, and it is another logical device doing a translation. More latency.



  • @Dashrender said in To vSAN or not to vSAN?:

    @travisdh1 said in To vSAN or not to vSAN?:

    @Dashrender said in To vSAN or not to vSAN?:

    @scottalanmiller said in To vSAN or not to vSAN?:

    @Obsolesce said in To vSAN or not to vSAN?:

    @DustinB3403 said in To vSAN or not to vSAN?:

    DAS while slower

    DAS is fast AF

    But still one more hop, so slower.

    In absolute terms it's on the faster side of things.

    But when compared to internal local storage, it's slower. Don't confuse absolute and relative performance.

    I thought DAS (direct attached storage) was the same as internal storage; it's just sitting in a box external to the server. The server's RAID card has a cable to the DAS chassis, which then connects to the backplanes in the DAS chassis and on to disk... no different from RAID controller > cable > backplane in server > disk.
    Where is my misunderstanding?

    Now the cables are longer, so maybe that's where it's slower?

    Let's take a theoretical, perfectly straight road. Is it going to take longer to drive 1 mile or 2 miles? It's basic physics. Each additional piece in the chain adds time, even if it's just an external JBOD.

    So we're talking microseconds here, or even nanoseconds... gotcha.

    Tell that to a database admin. Those milliseconds turn into minutes when you have to do millions of them for a single perceived action by the end user. This is why client/server architecture is often so insanely slow, even over super fast connections.
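
    As a rough illustration of that arithmetic (the latency and request-count numbers below are assumptions chosen for the example, not measurements of any specific system):

    ```python
    # Illustrative only: how small per-request latencies compound over
    # millions of dependent round trips, where each request must wait
    # for the previous one to finish.

    def total_time_seconds(round_trips: int, latency_us: float) -> float:
        """Total wall time for sequential round trips at a given per-trip latency."""
        return round_trips * latency_us / 1_000_000  # microseconds -> seconds

    # 2 million dependent I/Os at an assumed 10 microseconds each is 20 seconds;
    # at an assumed 1 millisecond each (a chatty client/server link) it's ~33 minutes.
    print(total_time_seconds(2_000_000, 10))          # 20.0 (seconds)
    print(total_time_seconds(2_000_000, 1000) / 60)   # ~33.3 (minutes)
    ```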



  • The added latency when you have an external drive enclosure comes from the SAS expander inside. On modern SAS expanders it's about 0.01 ms (10 µs).

    However, the connection between the server and the drive enclosure is limited by the number of SAS lanes. It's 4 lanes per connector, so a SAS-1 enclosure can't deliver more than 4 x 3 Gbit/s ≈ 1200 MB/s of data (after 8b/10b encoding overhead). SAS-2 is twice as fast and SAS-3 is four times as fast as SAS-1.

    Sometimes you have several enclosures connected to one server in a daisy chain. Still, the maximum transfer rate is the same - even if you now have perhaps 48 drives or something like that.

    When you have servers with many drives inside (more than 8), there is often a SAS expander inside the server as well, and the same limits apply.
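
    The lane math above can be sketched as a quick back-of-the-envelope calculation (assuming 8b/10b encoding, which applies to SAS-1 through SAS-3, so each 1 Gbit/s of line rate carries roughly 100 MB/s of usable data):

    ```python
    # Rough usable bandwidth for a 4-lane ("wide") SAS connector.
    # Assumes 8b/10b encoding: 1 Gbit/s line rate ~= 100 MB/s of data.

    LANES_PER_CONNECTOR = 4

    def connector_mb_per_s(line_rate_gbit: float) -> float:
        """Usable MB/s through one 4-lane connector after encoding overhead."""
        return LANES_PER_CONNECTOR * line_rate_gbit * 100

    print(connector_mb_per_s(3))    # SAS-1: 1200 MB/s
    print(connector_mb_per_s(6))    # SAS-2: 2400 MB/s
    print(connector_mb_per_s(12))   # SAS-3: 4800 MB/s
    ```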



  • The problem, as I see it, with old gear like the OP has is that while there is nothing wrong with the technology itself, everything is relatively slow by today's standards and much more complex than needed.

    When it was new you had to accept the added complexity because it was the only way to get a huge amount of storage at reasonable speed. Today you can build a simpler solution that is also faster.

    For instance, a SAS array of 16 x 300 GB drives will get absolutely destroyed in performance by 4 x 2 TB SSDs. And you can put those 4 drives directly in the server.
    And the failure rate of the old solution is many, many times higher than that of the new solution because there are so many more things that could possibly fail.

    Even on a shoestring budget you can still improve things. If you can simplify, you increase reliability. If you can use larger drives, you need fewer of them, which also increases reliability.
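
    A toy failure model shows why fewer drives helps (the annualized failure rates below are illustrative assumptions, not vendor figures):

    ```python
    # Toy model: probability that at least one component fails in a year,
    # assuming independent failures with a per-component annualized
    # failure rate (AFR). AFR values here are assumptions for illustration.

    def p_any_failure(n_components: int, afr: float) -> float:
        """P(at least one of n components fails), given per-component AFR."""
        return 1 - (1 - afr) ** n_components

    # 16 old drives at an assumed 4% AFR vs 4 SSDs at an assumed 1% AFR:
    print(p_any_failure(16, 0.04))  # ~0.48 -> ~48% chance of some failure
    print(p_any_failure(4, 0.01))   # ~0.04 -> ~4%
    ```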



  • @Pete-S said in To vSAN or not to vSAN?:

    The problem, as I see it, with old gear like the OP has is that while there is nothing wrong with the technology itself, everything is relatively slow by today's standards and much more complex than needed.

    It's very true. And the risk is that we are so accustomed to that complexity that we often avoid the simple answers today.

