Sizing a Server and Disks - SQL VM
-
@dashrender said in Sizing a Server and Disks - SQL VM:
@dafyre said in Sizing a Server and Disks - SQL VM:
@scottalanmiller said in Sizing a Server and Disks - SQL VM:
@dashrender time for a video, I guess.
After mulling over your comments and talking it over with someone else, I see the benefits of using several smaller disks rather than one big one. Especially when you can grow a disk relatively easily. That is something I hadn't considered until now.
But why not just use one large disk? Then you can expand that as much as you want?
Because you have to manage moving the partitions around. This is a huge pain in the ass compared to just expanding it.
-
@dashrender said in Sizing a Server and Disks - SQL VM:
@dafyre said in Sizing a Server and Disks - SQL VM:
@scottalanmiller said in Sizing a Server and Disks - SQL VM:
@dashrender time for a video, I guess.
After mulling over your comments and talking it over with someone else, I see the benefits of using several smaller disks rather than one big one. Especially when you can grow a disk relatively easily. That is something I hadn't considered until now.
But why not just use one large disk? Then you can expand that as much as you want?
What happens if you have to resize a partition that is between C and E? The default Windows utilities (as far as I'm aware) won't let you do this.
Splitting it up into separate VMDKs eliminates that issue.
-
@dustinb3403 said in Sizing a Server and Disks - SQL VM:
@dashrender said in Sizing a Server and Disks - SQL VM:
@dafyre said in Sizing a Server and Disks - SQL VM:
@scottalanmiller said in Sizing a Server and Disks - SQL VM:
@dashrender time for a video, I guess.
After mulling over your comments and talking it over with someone else, I see the benefits of using several smaller disks rather than one big one. Especially when you can grow a disk relatively easily. That is something I hadn't considered until now.
But why not just use one large disk? Then you can expand that as much as you want?
Because you have to manage moving the partitions around. This is a huge pain in the ass compared to just expanding it.
You missed the topic change - we're talking one partition per disk now. Scott says Partitions are done - over - pointless.
-
@dafyre said in Sizing a Server and Disks - SQL VM:
@dashrender said in Sizing a Server and Disks - SQL VM:
@dafyre said in Sizing a Server and Disks - SQL VM:
@scottalanmiller said in Sizing a Server and Disks - SQL VM:
@dashrender time for a video, I guess.
After mulling over your comments and talking it over with someone else, I see the benefits of using several smaller disks rather than one big one. Especially when you can grow a disk relatively easily. That is something I hadn't considered until now.
But why not just use one large disk? Then you can expand that as much as you want?
What happens if you have to resize a partition that is between C and E? The default Windows utilities (as far as I'm aware) won't let you do this.
Splitting it up into separate VMDKs eliminates that issue.
Nope, it's just easier to use separate VMDKs for everything. It's also faster to recover and easier to manage than a single large VMDK.
-
@dashrender said in Sizing a Server and Disks - SQL VM:
@dustinb3403 said in Sizing a Server and Disks - SQL VM:
@dashrender said in Sizing a Server and Disks - SQL VM:
@dafyre said in Sizing a Server and Disks - SQL VM:
@scottalanmiller said in Sizing a Server and Disks - SQL VM:
@dashrender time for a video, I guess.
After mulling over your comments and talking it over with someone else, I see the benefits of using several smaller disks rather than one big one. Especially when you can grow a disk relatively easily. That is something I hadn't considered until now.
But why not just use one large disk? Then you can expand that as much as you want?
Because you have to manage moving the partitions around. This is a huge pain in the ass compared to just expanding it.
You missed the topic change - we're talking one partition per disk now. Scott says Partitions are done - over - pointless.
100% are.
-
@dafyre said in Sizing a Server and Disks - SQL VM:
@dashrender said in Sizing a Server and Disks - SQL VM:
@dafyre said in Sizing a Server and Disks - SQL VM:
@scottalanmiller said in Sizing a Server and Disks - SQL VM:
@dashrender time for a video, I guess.
After mulling over your comments and talking it over with someone else, I see the benefits of using several smaller disks rather than one big one. Especially when you can grow a disk relatively easily. That is something I hadn't considered until now.
But why not just use one large disk? Then you can expand that as much as you want?
What happens if you have to resize a partition that is between C and E? The default Windows utilities (as far as I'm aware) won't let you do this.
Splitting it up into separate VMDKs eliminates that issue.
Right - so don't have a D and E anymore - only have D. Put all data on D. Splitting partitions gives you nothing.
-
There might be a situation-specific reason to have something on its own space - OK, fine. I guess in that case, assign one partition per drive and move on.
But my point was - if you don't need a separate partition for a very specific reason, then why have more than one VMDK with one partition each?
Now, a reason to split might be to put the SQL VMDK onto SSD while putting the data VMDK on spinning rust.
-
The OS still needs to assign a letter to use the drive. . .
-
@dustinb3403 said in Sizing a Server and Disks - SQL VM:
The OS still needs to assign a letter to use the drive. . .
Sure - but so?
Oh, and that's not true. Windows has supported mount points for a while now. I know I did it as a test more than 5 years ago... hell, maybe more than 10.
-
@dashrender hrm. . . I might need to do some digging on that.
-
@dashrender said in Sizing a Server and Disks - SQL VM:
@dustinb3403 said in Sizing a Server and Disks - SQL VM:
The OS still needs to assign a letter to use the drive. . .
Sure - but so?
Oh, and that's not true. Windows has supported mount points for a while now. I know I did it as a test more than 5 years ago... hell, maybe more than 10.
It's been around since Server 2012 IIRC. They didn't work well in 2012 but have been working really well in R2 and 2016.
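For anyone who wants to sanity-check a mount point from inside the guest, here is a minimal Python sketch - the folder path is purely illustrative - that distinguishes a volume mounted into a folder from an ordinary directory:

```python
import os

# Hypothetical path: a data volume mounted into a folder on C: instead of
# being given its own drive letter.
candidate = r"C:\Mounts\SQLData"

# os.path.ismount() recognises Windows volume mount points (and drive roots),
# so it can tell a mounted volume apart from a plain folder on C:.
if os.path.ismount(candidate):
    print(f"{candidate} is a separate volume mounted into the folder tree")
else:
    print(f"{candidate} is just an ordinary folder on the C: volume")
```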
-
@dustinb3403 said in Sizing a Server and Disks - SQL VM:
@dashrender hrm. . . I might need to do some digging on that.
-
I've never heard that - wow, I feel bad now.
Good to know for the future.
-
We have a Hyper-V host with two tiers of storage: an all SSD RAID, and an all HDD RAID.
When I set up the MS SQL Server (mainly for MS Dynamics purposes, but it also serves some other critical business functions), I had to do it according to what the Dynamics consultant suggested:
- MS SQL VM (each virtual disk and its OS drive letter):
  - serv-SQL.vhdx (C:)
  - serv-SQL-DATA.vhdx (D:)
  - serv-SQL-LOG.vhdx (E:)
  - serv-SQL-BACKUP.vhdx (F:)
The D: and E: virtual disks are located on the SSD RAID on the physical host; the other two are on the HDD RAID.
The stuff that needs to be fast - TempDB, logs, the main DB, etc. - is all on SSD, while the backups and the OS are on the HDD RAID.
I do have SSD caching for the HDD RAID, so the other stuff is actually sped up too, though not 100% of the time. Most writes are, anyway.
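If it helps to see that layout in one place, here is a rough Python sketch - just the mapping described in this post, written as a plain dictionary - that flags any latency-sensitive disk that isn't on the SSD tier:

```python
# Layout as described above: one VHDX per role, one volume per VHDX.
layout = {
    "serv-SQL.vhdx":        {"letter": "C:", "role": "OS",     "tier": "HDD RAID"},
    "serv-SQL-DATA.vhdx":   {"letter": "D:", "role": "data",   "tier": "SSD RAID"},
    "serv-SQL-LOG.vhdx":    {"letter": "E:", "role": "log",    "tier": "SSD RAID"},
    "serv-SQL-BACKUP.vhdx": {"letter": "F:", "role": "backup", "tier": "HDD RAID"},
}

# The latency-sensitive roles are the ones that should sit on SSD.
fast_roles = {"data", "log"}

for vhdx, info in layout.items():
    on_right_tier = info["role"] not in fast_roles or info["tier"] == "SSD RAID"
    flag = "" if on_right_tier else "  <-- expected SSD"
    print(f'{vhdx:22} {info["letter"]} {info["role"]:6} on {info["tier"]}{flag}')
```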
-
Sorry peeps, things got a bit crazy at work and I've been busy with house stuff.
Will try and go through the thread in the morning and answer what I can. I will say the guide I linked doesn't seem to be geared toward virtualization.
@Tim_G's setup is what I was thinking of, with the split arrays and the separate VMDKs.
-
@hobbit666 said in Sizing a Server and Disks - SQL VM:
I will say the guide I linked doesn't seem to be geared to virtualization.
Wouldn't need to be. There hasn't been a case where you should have a physical database in a VERY long time; databases were among the first workloads to go 100% virtual. And even so, the storage considerations for a database are not impacted by physical vs. virtual - they're always the same.
-
@tim_g said in Sizing a Server and Disks - SQL VM:
We have a Hyper-V host with two tiers of storage: an all SSD RAID, and an all HDD RAID.
When I set up the MS SQL Server (mainly for MS Dynamics purposes, but it also serves some other critical business functions), I had to do it according to what the Dynamics consultant suggested:
- MS SQL VM (each virtual disk and its OS drive letter):
  - serv-SQL.vhdx (C:)
  - serv-SQL-DATA.vhdx (D:)
  - serv-SQL-LOG.vhdx (E:)
  - serv-SQL-BACKUP.vhdx (F:)
The D: and E: virtual disks are located on the SSD RAID on the physical host; the other two are on the HDD RAID.
The stuff that needs to be fast - TempDB, logs, the main DB, etc. - is all on SSD, while the backups and the OS are on the HDD RAID.
I do have SSD caching for the HDD RAID, so the other stuff is actually sped up too, though not 100% of the time. Most writes are, anyway.
A lot of that is primarily offset by RAM, anyway.
-
@dustinb3403 said in Sizing a Server and Disks - SQL VM:
FYI, nothing in your OP states the type of drives, so we have to make an assumption based on the drawings.
But if you are using SSDs, unless you need some really insane IOPS, use OBR5 - you get more storage and it is more than reliable enough.
If using HDDs, use RAID 10.
Obviously all of the usual conditions apply to both (including RAID 5 on SSD): don't use consumer gear, enable monitoring, replace equipment when it fails, etc.
Well that's the thing. With the requirements of SQL, is it better to go full SSD? If so, we will price it up. If that's too many ££££, then we will look at splitting the array into two like @Tim_G has set up.
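To put rough numbers on that trade-off, here is a back-of-the-envelope Python sketch - drive counts, sizes, and per-drive IOPS are invented for illustration - comparing usable capacity and a crude write-IOPS ceiling for OBR5 versus RAID 10, using the usual write penalties of 4 and 2:

```python
def usable_tb(drives, size_tb, level):
    """Usable capacity: RAID 5 loses one drive's worth, RAID 10 loses half."""
    return (drives - 1) * size_tb if level == "RAID5" else drives / 2 * size_tb

def write_iops(drives, iops_per_drive, level):
    """Very rough write ceiling: raw IOPS divided by the RAID write penalty."""
    penalty = 4 if level == "RAID5" else 2
    return drives * iops_per_drive / penalty

# Illustrative numbers only: eight 1.92 TB SSDs (~20k write IOPS each)
# versus eight 2 TB HDDs (~150 IOPS each).
configs = [
    ("8 x SSD OBR5  ", 8, 1.92, 20000, "RAID5"),
    ("8 x HDD RAID10", 8, 2.00, 150, "RAID10"),
]
for label, drives, size, iops, level in configs:
    print(f"{label}: {usable_tb(drives, size, level):5.1f} TB usable, "
          f"~{write_iops(drives, iops, level):,.0f} write IOPS")
```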
-
@scottalanmiller said in Sizing a Server and Disks - SQL VM:
Separate VMDKs never means separate RAIDs. They are recommending different arrays for each.
They are wrong and this is ridiculously horrible guidance, but that is what they mean. What you are seeing is a 1990s guide regurgitated by someone non-technical who parroted back a "rule of thumb" based on the assumption of using spinning disks, with RAID 5, without cache - basically, a run-of-the-mill, physical, 1998 install.
Whatever guide this is, it hasn't matched any product in the real world for nearly two decades.
You say that, but the document is dated 2017?
The problem I have (and this is not a dig at you, it's more what I observe from our MSP and others in the department) is that without forums like this and people in the real world, how would we know this is bad??? It's a Microsoft document giving advice on their product.
So I now have to convince my manager and the board that what M$ are saying in their guide is wrong.
-
@scottalanmiller said in Sizing a Server and Disks - SQL VM:
Definitely not. You should "never" partition today. If you want partitions, that means that you actually wanted volumes. Partitions are effectively a dead technology - an "after the fact" kludge that exists for cases where voluming wasn't an option - which should never be the case today as this is solved universally. Partitions are fragile and difficult to manage and have many fewer options and less flexibility. They have no benefits, which is why they are a dead technology.
Partitions exist today only for physical Windows installs, where there is no hypervisor and no enterprise volume manager to do the work - in essence, they are for "never".
But with the recommended setup for SQL having separate drives (in Windows) for Logs, TempDB, Backup, etc., the rule should be separate VMDK disks?
Like this (see the sketch below the list):
vmdk1 = OS
vmdk2 = Logs
vmdk3 = TempDB
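As a rough way to double-check a split like that once the VM is running, here is a small Python sketch - it assumes the third-party psutil package is installed - that lists every mounted volume so you can confirm OS, Logs, and TempDB really do sit on separate virtual disks:

```python
import psutil  # third-party: pip install psutil

# With one volume per VMDK, you should see one entry per role (OS, logs,
# TempDB, ...), each backed by its own device.
for part in psutil.disk_partitions(all=False):
    usage = psutil.disk_usage(part.mountpoint)
    print(f"{part.device:12} at {part.mountpoint:20} "
          f"{usage.total / 1e9:8.1f} GB total, {usage.free / 1e9:8.1f} GB free")
```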