Thin Provisioning vs Thick



  • Our standard practice is to thick provision everything. I was looking at getting this changed. Our corporate Director of IT wants me to put together a list of pros/cons, with sources and articles to back it up. Any good sources and thoughts?

    Here are some of the current cons they have for thin:

    Easy to over-provision arrays
    Reclaiming space is not easy
    Performance hit with thin (especially while it's growing)
    We don't have DRS, which makes managing difficult (I'm not sure what DRS is).



  • @thecreativeone91 said:

    Easy to over-provision arrays: only if you are being ridiculous and not using common sense OR monitoring!
    Reclaiming space is not easy: but at least it is possible.
    Performance hit with thin (especially while it's growing): the hit is trivial and should normally be ignored except in special circumstances.
    We don't have DRS, which makes managing difficult: I'm unclear what he is saying here.
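    The "over-provisioning" con above is really a monitoring problem. As a rough illustration (the figures and the 2.0x threshold below are made up, not a recommendation), a check like this is all it takes to keep thin provisioning honest:

```python
# Hypothetical over-provisioning check for a datastore full of thin disks.
# All figures are illustrative; pick a threshold that fits your environment.

def overprovision_ratio(provisioned_gb, capacity_gb):
    """Ratio of space promised to guests vs. space the array actually has."""
    return sum(provisioned_gb) / capacity_gb

def check(provisioned_gb, capacity_gb, max_ratio=2.0):
    """Return (ok, ratio); ok is False once promises exceed max_ratio x capacity."""
    ratio = overprovision_ratio(provisioned_gb, capacity_gb)
    return ratio <= max_ratio, ratio

ok, ratio = check([500, 300, 400], capacity_gb=1000)
print(f"ratio={ratio:.2f} ok={ok}")  # ratio=1.20 ok=True
```

    Feed in the real numbers from whatever your array or hypervisor exposes and alert on the ratio; over-provisioning only bites when nobody is watching it.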



  • Downsides to thick:

    • Requires you to waste tons of capacity putting "safety buffers" into every virtual disk. This is basically paying lots of money to allow the IT team to be really lazy and poorly organized. It's basically betting against yourself.
    • Can't reclaim space.
    • Requires far more planning on a more granular basis.
    • Lowers agility and flexibility.
    • Fundamentally eschews key thought processes around consolidation.


  • I'm guessing based on what you mention here that you have a VMware environment. DRS is VMware's Distributed Resource Scheduler; it lets you set rules and thresholds so that VMs are migrated to other hosts if the load is too high on a specific host in the cluster. There is also Storage DRS, which operates in a similar way for datastores.



  • I thin provision everything apart from SQL Server, where I'm more interested in performance than convenience, but I don't really know how "trivial" the performance hit from thin actually is. I just read that you shouldn't do it for SQL Server.



  • @Carnival-Boy said:

    I thin provision everything apart from SQL Server, where I'm more interested in performance than convenience, but I don't really know how "trivial" the performance hit from thin actually is. I just read that you shouldn't do it for SQL Server.

    If I remember correctly, that is because thin-provisioned SQL Server disks can quickly balloon and use up all available storage if log files aren't truncated or backed up correctly. It had/has nothing to do with performance.

    I assumed thin provisioning was the de facto standard several years ago; the thought of not doing it never crossed my mind.



  • @coliver said:

    @Carnival-Boy said:

    I thin provision everything apart from SQL Server, where I'm more interested in performance than convenience, but I don't really know how "trivial" the performance hit from thin actually is. I just read that you shouldn't do it for SQL Server.

    If I remember correctly, that is because thin-provisioned SQL Server disks can quickly balloon and use up all available storage if log files aren't truncated or backed up correctly. It had/has nothing to do with performance.

    I assumed thin provisioning was the de facto standard several years ago; the thought of not doing it never crossed my mind.

    It has to do with transaction log files not being backed up and truncated. They should be fairly small, though we did have one that was 100 GB when I started. It's fixed now.



  • There will be a performance bottleneck whilst thin disks are being expanded, won't there?



  • @Carnival-Boy Yes. But generally, that is negligible. I usually only notice it when doing installs of large software packages (in the several GB range).



  • @Carnival-Boy said:

    I thin provision everything apart from SQL Server, where I'm more interested in performance than convenience, but I don't really know how "trivial" the performance hit from thin actually is. I just read that you shouldn't do it for SQL Server.

    It's a lot of "it depends." If you have a SQL Server with a stable set size (lots of updates or reads), the impact approaches zero. If you are constantly ingesting new data and usage increases steadily, you have a small growth hit going on behind the scenes on a semi-regular basis.



  • @coliver said:

    If I remember correctly, that is because thin-provisioned SQL Server disks can quickly balloon and use up all available storage if log files aren't truncated or backed up correctly. It had/has nothing to do with performance.

    That's a concern for people not managing logs, but it could be addressed by putting logs on a different virtual disk or shipping them off. For regularly growing databases there is the separate concern that the DB itself will grow on a regular basis.

    If your concern was ballooning, then that's a moot fear, since going to thick provisioning takes 100% of the risk of that and hits you right up front. It's like being afraid that someone is going to steal your car and, in response to that fear, running out and giving your car away to the first stranger you see, so that you take 100% of the damage of having your car stolen right now. Um... that makes no sense, right? Same with avoiding thin provisioning because it might balloon 🙂 The worst-case scenario, which should never happen in theory, is that thin becomes "as" bad as thick; it can't become worse.
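    That worst-case argument is easy to put in numbers. A toy model, with a hypothetical 500 GB disk, showing that a thin disk's footprint on the array is always capped at exactly what a thick disk of the same size consumes on day one:

```python
# Toy comparison of on-array footprint for a thick vs. thin virtual disk.
# The 500 GB size is hypothetical.

PROVISIONED_GB = 500

def thick_footprint(used_gb):
    # Thick reserves the full size up front, regardless of actual use.
    return PROVISIONED_GB

def thin_footprint(used_gb):
    # Thin grows with the data but can never exceed the provisioned size.
    return min(used_gb, PROVISIONED_GB)

for used in (50, 250, 500, 9999):  # 9999 = runaway log growth scenario
    assert thin_footprint(used) <= thick_footprint(used)
print("thin never exceeds thick")
```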



  • @coliver said:

    I assumed thin provisioning was the de facto standard several years ago; the thought of not doing it never crossed my mind.

    It was not, long ago, but many years back it became so. There are still good times and reasons to do thick, but the majority of the time, unless you are aware of a specific performance need, thin is the best practice.

    And similar to jagged tables in a relational database: once you have any thin disks, half of thick's advantages (no need to monitor your storage pool itself) are gone, and only the performance hit remains as a concern.



  • @Carnival-Boy said:

    There will be a performance bottleneck whilst thin disks are being expanded, won't there?

    Not a bottleneck, but an impact. The impact is small; this is the equivalent of making any file larger on a normal filesystem. Every time that you add data to a file, it has to grow, and that takes a tiny bit of system overhead. So just as writing to a log file has overhead, this is like writing to another log file, just one at the VMFS layer rather than the NTFS layer. There is overhead, but it is really small. It's not a massive disk operation or anything.
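    A toy model of that grow-on-write behavior (block size and write pattern are made up for illustration): space is allocated in fixed-size blocks the first time a write touches them, so most writes pay nothing extra.

```python
# Toy thin disk: backing-store blocks are allocated lazily on first write.
# The 1 KB "block" here stands in for whatever grain size the storage layer
# really uses; it is illustrative, not a VMFS default.

class ThinDisk:
    def __init__(self, block_size):
        self.block_size = block_size
        self.allocated = set()   # block indices that exist on the array
        self.grow_events = 0     # how many times the disk had to expand

    def write(self, offset, length):
        first = offset // self.block_size
        last = (offset + length - 1) // self.block_size
        for block in range(first, last + 1):
            if block not in self.allocated:
                self.allocated.add(block)  # the "growth" overhead happens here
                self.grow_events += 1

disk = ThinDisk(block_size=1024)
for i in range(1000):          # 1000 sequential 512-byte writes
    disk.write(i * 512, 512)
print(disk.grow_events)        # 500: only every other write grew the disk
```

    The point of the model: growth cost is paid per allocated block, not per write, which is why the overhead mostly shows up during bulk ingest (big installs, big loads) and disappears for steady-state workloads.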



  • @dafyre said:

    @Carnival-Boy Yes. But generally, that is negligible. I usually only notice it when doing installs of large software packages (in the several GB range).

    And I believe that you can tune it to grow in larger leaps, so that it takes the hit in fewer, larger increments rather than many smaller ones.
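    Whether such a knob exists, and what it's called, depends on the platform, but the arithmetic behind "larger leaps" is simple (sizes below are hypothetical):

```python
# Growing in bigger increments means fewer growth operations for the same
# amount of new data. Sizes are hypothetical, not vendor defaults.

def grow_operations(data_mb, increment_mb):
    """How many times the disk must expand to absorb data_mb of new data."""
    return -(-data_mb // increment_mb)   # ceiling division

print(grow_operations(1024, 1))    # 1024 tiny expansions
print(grow_operations(1024, 64))   # 16 larger, less frequent expansions
```

    The trade-off is the usual one: bigger increments mean fewer, chunkier pauses but slightly more space allocated ahead of need.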

