SQL Server - best practices for SMB



  • We've just purchased Dynamics NAV ERP. Somewhat predictably, I'm now getting into a bit of a row with my new Dynamics NAV partner about the install, principally over SQL Server.

    I want one big RAID10 on a virtualised host (local storage), shared with other VMs.

    They want a dedicated, physical host with separate small arrays for data, logs and temp files.

    I'm like "dude, it's 2015 not 2005!" (I'm not really).

    It's so tiring. I thought the physical vs virtualisation and RAID10 versus separate RAID1 arguments had been done and virtualised OBR10 was best-practice. I don't even want to talk about it any more. Obviously there are exceptions, but I'm talking about a normal SMB running a normal SQL ERP server.

    It's uncomfortable. They're not DBAs and I'm not a DBA. Neither of us are remotely qualified (they're a small company too). They're like "well this is what we always do and so we're comfortable with it", but I'm like "well this is what I always do and so I'm comfortable with it". It's not the best way to start a major project with a new partner.

    The only argument I can see is for having a separate array for SQL server, rather than sharing it with other VMs. This is because with a shared array, you can't guarantee performance. You might get better overall performance most of the time (thanks to the extra spindles), but there might be other times when those other VMs are hogging resources. You get contention. I see this as less of an issue in an SMB where there are only one or two IT guys who have a lot of control. Those IT guys can make sure other VMs behave themselves.

    One thing I'm not sure about is whether data, logs and temp files should each reside on their own separate logical drives, or whether I should install the OS on one logical drive (C:) and everything else on another logical drive (D:).


  • Service Provider

    DBAs aren't the ones who should be making these decisions anyway. They don't know systems; they know databases.

    Remind them that the world's biggest databases, the ones doing literally trillions of transactions per day, are all virtualized and don't use individual disks like that.


  • Service Provider

    You can use Storage QoS on a single array to guarantee a higher SLA than possible with separate arrays.
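    On Hyper-V 2012 R2, for example, that can be done per virtual disk. A rough sketch (the VM names and IOPS figures below are made up):

    ```powershell
    # Hyper-V Storage QoS (2012 R2+): cap a noisy neighbour and set a floor
    # for the SQL VM so it keeps its SLA on a shared OBR10 array.
    # IOPS are counted in normalized 8 KB units.

    # Limit a chatty file server VM so it can't hog the array
    Set-VMHardDiskDrive -VMName 'FS01' -ControllerType SCSI `
        -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 500

    # Set a 1000 IOPS floor on the SQL VM's data disk (Hyper-V raises an
    # event if the floor can't be met, which is your early warning)
    Set-VMHardDiskDrive -VMName 'SQL01' -ControllerType SCSI `
        -ControllerNumber 0 -ControllerLocation 1 -MinimumIOPS 1000
    ```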


  • Service Provider

    I had a client buy an accounting package named Dysel that is based on NAV. The vendor attempted to tell me that I had to do that. I simply ignored them.

    I installed the guest VM and created 3 VHDX and attached them. They logged in via RDP and installed from there. To this day I have no idea if they know it is virtualized or not.
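    For reference, that kind of layout is only a few lines of PowerShell on the Hyper-V host (the paths, names and sizes here are just placeholders):

    ```powershell
    # Create two extra fixed VHDX files (data, logs) and attach them to an
    # existing guest VM alongside its OS disk.
    New-VHD -Path 'D:\VMs\NAV\data.vhdx' -SizeBytes 100GB -Fixed
    New-VHD -Path 'D:\VMs\NAV\logs.vhdx' -SizeBytes 50GB  -Fixed

    Add-VMHardDiskDrive -VMName 'NAV-SQL' -Path 'D:\VMs\NAV\data.vhdx'
    Add-VMHardDiskDrive -VMName 'NAV-SQL' -Path 'D:\VMs\NAV\logs.vhdx'
    ```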



  • @JaredBusch said:

    I had a client buy an accounting package named Dysel that is based on NAV. The vendor attempted to tell me that I had to do that. I simply ignored them.

    I installed the guest VM and created 3 VHDX and attached them. They logged in via RDP and installed from there. To this day I have no idea if they know it is virtualized or not.

    I was told by our SAP vendor that they couldn't guarantee performance if it wasn't on a physical box. When I asked how they ran their ERP suite they said it was virtualized with VMware; they still didn't seem to understand the point I was trying to make.


  • Service Provider

    @coliver said:

    @JaredBusch said:

    I had a client buy an accounting package named Dysel that is based on NAV. The vendor attempted to tell me that I had to do that. I simply ignored them.

    I installed the guest VM and created 3 VHDX and attached them. They logged in via RDP and installed from there. To this day I have no idea if they know it is virtualized or not.

    I was told by our SAP vendor that they couldn't guarantee performance if it wasn't on a physical box. When I asked how they ran their ERP suite they said it was virtualized with VMware; they still didn't seem to understand the point I was trying to make.

    Guarantees are one of those tricky things. A guarantee is for politics. It's for pointing fingers, not doing the best, in nearly all cases.



  • We run SQL 2012 as a VM (vSphere) for the back end of Epicor on a server that is OBR10 (all 10K SAS drives, not SSD). There are several other VMs on that server. We have the OS on C: and then one virtual disk for logs, one for TempDB, and one for the databases.

    If the server you get has enough IOPS to blow the requirements of your existing workloads out of the water, you should be fine (I would think).





  • @NetworkNerd Thanks. That's an interesting read. How many concurrent users do you have on Epicor?



  • @coliver said:

    I was told by our SAP vendor that they couldn't guarantee performance if it wasn't on a physical box.

    I'd be interested to know how they could guarantee performance on a physical box. Surely there are too many variables at both the hardware and software level to guarantee anything?



  • @scottalanmiller said:

    DBAs aren't the ones who should be making these decisions anyway. They don't know systems; they know databases.

    I wouldn't know. I'd have thought a DBA should have a good understanding of storage, and equally a storage guy should have a good understanding of mainstream databases.



  • @scottalanmiller said:

    Remind them that the world's biggest databases, the ones doing literally trillions of transactions per day, are all virtualized and don't use individual disks like that.

    Quite. It's ironic that Microsoft is explicitly saying that their preferred installation is hosted on Azure. The majority of their marketing for the latest release of NAV is focussed on Azure. You won't currently get any Microsoft employee saying virtualisation is bad.

    I'm not engaging my partner. I'm just telling them how it will be. Life's too short to argue about the merits of virtualisation to people who only like to stick with what they know.


  • Service Provider

    @Carnival-Boy said:

    @scottalanmiller said:

    DBAs aren't the ones who should be making these decisions anyway. They don't know systems; they know databases.

    I wouldn't know. I'd have thought a DBA should have a good understanding of storage, and equally a storage guy should have a good understanding of mainstream databases.

    DBAs really don't. Both fields are a specialty. Most DBAs know zero about storage. Most storage people know almost nothing about databases.

    Storage, however, is storage, and the idea that you need much knowledge about databases to know how to set it up is a myth. It's one of those things propagated by the database people because they don't want to be "just another app" but feel like their needs must be special and require special knowledge. They do not.

    There are a few things to know, but they are not database specific. Like what is the read/write ratio, is it random or sequential, what kind of app level caching takes place, etc.

    Database specific things that need to be known and understood are things like SQL Server's unique ability to do "application level RAID" if you want to get hard core performance. This requires serious coordination between DBA and Storage Team and I've never heard of anyone considering it. Unless going to that level, no DBA should be involved and storage should treat it like any generic random access, write heavy file.
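    For what it's worth, my reading of "application level RAID" in SQL Server terms is multiple data files in one filegroup spread across drives, which SQL Server stripes across with its proportional-fill algorithm. A sketch, with hypothetical instance name and paths:

    ```powershell
    # Stripe a database across two drives at the file level: SQL Server
    # allocates across the files in a filegroup proportionally, so two
    # files on two arrays spread the I/O without any RAID layer involved.
    Invoke-Sqlcmd -ServerInstance 'SQL01' -Query @"
    CREATE DATABASE StripedDB
    ON PRIMARY
        (NAME = data1, FILENAME = 'E:\SQLData\striped1.mdf'),
        (NAME = data2, FILENAME = 'F:\SQLData\striped2.ndf')
    LOG ON
        (NAME = log1,  FILENAME = 'G:\SQLLogs\striped.ldf');
    "@
    ```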



  • Just doing some background reading. Turns out my new NAV partner doesn't get involved in SQL server installs anyway, so I don't know why they were raising any objections. So I'm on my own (gulp!).

    Just read this:
    http://sqlmag.com/sql-server/sql-server-virtualization-tips
    Five tips:

    1. Take advantage of Second Level Address Translation
    I've never heard of this before, but I presume this will just happen on my new Proliant, without me doing anything.

    2. Don't Overcommit Your Physical Processors
    This is standard virtualisation practice, so I'm cool with it. Plus, I'm limited to four cores with my SQL Server licence anyway.

    3. Use Dynamic Memory
    "In order for SQL Server to take advantage of dynamic memory, the guest OS must be able to recognize hot-add RAM". Hmmn, I've never heard of hot-add RAM before. However, further reading says that the Standard edition doesn't support it anyway, so I can forget about this.

    4. Use Fixed Virtual Hard Disks or Pass-Through Disks
    I generally use dynamic VHDs, but our ERP system will be relatively small, so I will use a fixed VHD. Pass-through disks look like overkill, so I won't bother with them.

    5. Don't Use The Default Storage Configuration
    Hmmn, they're saying use separate disks for data and log files. Don't use OBR10. I thought this was an old-fashioned concept, since disk speed hasn't improved whilst everything else has? So lack of IOPS becomes your constraint, and lack of IOPS can only be solved by more spindles.

    Next read is this:
    http://www.vmware.com/files/pdf/solutions/SQL_Server_on_VMware-Best_Practices_Guide.pdf
    but it's a bit heavier, so may wait until bedtime reading.
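    A couple of the tips above can be checked or applied from PowerShell. A sketch (the VHD path is hypothetical):

    ```powershell
    # Tip 1: confirm the CPU supports SLAT. On a machine without the
    # Hyper-V role enabled, systeminfo lists it under Hyper-V Requirements.
    systeminfo | Select-String 'Second Level Address Translation'

    # Tip 4: an existing dynamic VHDX can be converted to fixed offline.
    Convert-VHD -Path 'D:\VMs\sql-data.vhdx' `
                -DestinationPath 'D:\VMs\sql-data-fixed.vhdx' -VHDType Fixed
    ```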



  • @Carnival-Boy said:

    @NetworkNerd Thanks. That's an interesting read. How many concurrent users do you have on Epicor?

    We have 14 data collection users and 30 office users if memory serves. The data collection users are for use out in the shops, and the menus have limited functionality compared to the full office user experience.

    We also use SSRS to generate reports for Epicor (which runs on the same SQL Server as the Epicor database).

    But what makes us a bit unique is that my boss has written some ASP pages that allow people to pull Epicor data from our company intranet site. Many reports are now generated from that area rather than in the Epicor application itself.



  • I know Epicor recommends you use the paravirtualized SCSI adapter for best performance, as well as VMXNET3 and thick-provisioned eager-zeroed disks. They even tell you to reserve the CPU and memory for best performance of the VM (which we haven't done much of, really).



  • And how is your performance? I assumed you had more users based on your specs, so I'm guessing (hoping) you're flying. I hadn't heard of a Fusion IO card, but RAM seems really cheap at the moment so I'd probably always spend money on that rather than anything else, although SQL Server Standard is limited to 64GB. I think the Fusion card only kicks in when SQL can't get data out of RAM so wouldn't get used that much, depending on how big your database is (how big is it, by the way?)

    For our existing, out-going ERP system, most of our reports are written by me in ASP (classic ASP) directly accessing the database. I also use ASP for writing data a lot of the time, avoiding the ERP client entirely. I love classic ASP, but really need to retire it and learn something a bit more up-to-date.

    I'm planning on running the application server and SQL server on the same VM, rather than splitting them like you did. I'm not sure what the benefits are of splitting them?

    Anecdotally, I hear that Microsoft have done a great job of improving performance over the last couple of versions of Dynamics NAV. The new version is supposed to be considerably faster than the previous one. So hopefully everything will be fine. Their goal is for everyone to run it on Azure, so they have a stake in performance that perhaps other ERP vendors, like Epicor, don't have. Whilst worse performance would mean more Azure revenue as people have to purchase more resources, if it's slow it would create a negative image of Azure and put people off going down the cloud route.



  • What language did you use for classic ASP? JScript and VBScript were the most popular, if I remember.

    ASP.NET with C# is pretty powerful and easy to use.



  • VBScript.



  • @Carnival-Boy said:

    And how is your performance? I assumed you had more users based on your specs, so I'm guessing (hoping) you're flying. I hadn't heard of a Fusion IO card, but RAM seems really cheap at the moment so I'd probably always spend money on that rather than anything else, although SQL Server Standard is limited to 64GB. I think the Fusion card only kicks in when SQL can't get data out of RAM so wouldn't get used that much, depending on how big your database is (how big is it, by the way?)

    For our existing, out-going ERP system, most of our reports are written by me in ASP (classic ASP) directly accessing the database. I also use ASP for writing data a lot of the time, avoiding the ERP client entirely. I love classic ASP, but really need to retire it and learn something a bit more up-to-date.

    I'm planning on running the application server and SQL server on the same VM, rather than splitting them like you did. I'm not sure what the benefits are of splitting them?

    Anecdotally, I hear that Microsoft have done a great job of improving performance over the last couple of versions of Dynamics NAV. The new version is supposed to be considerably faster than the previous one. So hopefully everything will be fine. Their goal is for everyone to run it on Azure, so they have a stake in performance that perhaps other ERP vendors, like Epicor, don't have. Whilst worse performance would mean more Azure revenue as people have to purchase more resources, if it's slow it would create a negative image of Azure and put people off going down the cloud route.

    Performance was very good on Epicor 9, but it seems a little slower overall in Epicor 10. I have been monitoring that very closely, and I am seeing that we still seem fine in terms of the number of IOPS the server has compared to its workload, even with an increased number of users.

    The real reason for splitting was encapsulation. There may be times when we need to reboot the application server but want to keep the SQL server running for people to be able to access our intranet system using the classic ASP pages my boss has written.

    The database is about 36 GB now (maybe a little smaller, but very close). It dropped in size by 5-10% in the upgrade. Epicor split their database into multiple schemas in version 10, and I know that made updating some of the ASP pages tough for my boss. The custom fields that were part of normal tables for jobs, orders, etc. are now split off into user-defined tables, adding many, many extra join statements to the code (which I feel pretty confident is slowing some things down, especially since the user-defined tables did not get indexes applied to them like all the other tables). I think we may still have some tuning to do.

    We have a 4-core SQL license, and thus far we are not pushing it, but I think we had some oversight in the upgrade regarding SSRS. That is the new engine for all reports, whereas they were Crystal Reports that ran mostly on the application server in the previous version. In this version the SQL Server takes a hit. I beefed up the RAM to 50 GB this weekend, with 40 GB of that reserved, and set the SQL memory limit to 40 GB as well.
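    For anyone wanting to replicate that memory cap, it is a single sp_configure call. A sketch (the instance name is a placeholder, and this assumes the SQL Server PowerShell module is installed):

    ```powershell
    # Cap SQL Server's memory at 40 GB (the value is in MB) so the guest
    # OS keeps the remainder of the 50 GB for itself.
    Invoke-Sqlcmd -ServerInstance 'SQL01' -Query @"
    EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 40960; RECONFIGURE;
    "@
    ```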

    But we have another problem going on right now with our web server being on the fritz and still physical. We have to find a way to get that moved very soon and are working on creative solutions.



  • Quick question on SQL Server.

    I'm installing it on OBR10, so no issues there. I will install the OS on its own partition (C:). SQL Server will also be installed there (C:\program files). I will then create at least one extra partition for the database(s).

    Is it best practice to install databases on a single partition (E:), or should I create separate partitions for data, logs and tempdb (E:, F:, G:)?

    This isn't a performance issue, obviously (unless it affects fragmentation in any way?). But from an admin perspective, is there any benefit to creating extra partitions? I would have thought that one of the biggest risks with a database is running out of space, and keeping an eye on space on one partition is easier than on three.

    I've googled and can't find any answers. Lots of discussions on the merits of physically separating data and logs, but not much on logically separating them. I guess this means it doesn't matter too much, but I need to make a decision one way or the other.



  • @Carnival-Boy said:

    Quick question on SQL Server.

    I'm installing it on OBR10, so no issues there. I will install the OS on its own partition (C:). SQL Server will also be installed there (C:\program files). I will then create at least one extra partition for the database(s).

    Is it best practice to install databases on a single partition (E:), or should I create separate partitions for data, logs and tempdb (E:, F:, G:)?

    This isn't a performance issue, obviously (unless it affects fragmentation in any way?). But from an admin perspective, is there any benefit to creating extra partitions? I would have thought that one of the biggest risks with a database is running out of space, and keeping an eye on space on one partition is easier than on three.

    I've googled and can't find any answers. Lots of discussions on the merits of physically separating data and logs, but not much on logically separating them. I guess this means it doesn't matter too much, but I need to make a decision one way or the other.

    We have ours split up into OS, Data, and Logs. Not sure if it is best practice or not; it was recommended by both of our ERP vendors.



  • @coliver said:

    @Carnival-Boy said:

    Quick question on SQL Server.

    I'm installing it on OBR10, so no issues there. I will install the OS on its own partition (C:). SQL Server will also be installed there (C:\program files). I will then create at least one extra partition for the database(s).

    Is it best practice to install databases on a single partition (E:), or should I create separate partitions for data, logs and tempdb (E:, F:, G:)?

    This isn't a performance issue, obviously (unless it affects fragmentation in any way?). But from an admin perspective, is there any benefit to creating extra partitions? I would have thought that one of the biggest risks with a database is running out of space, and keeping an eye on space on one partition is easier than on three.

    I've googled and can't find any answers. Lots of discussions on the merits of physically separating data and logs, but not much on logically separating them. I guess this means it doesn't matter too much, but I need to make a decision one way or the other.

    We have ours split up into OS, Data, and Logs. Not sure if it is best practice or not; it was recommended by both of our ERP vendors.

    We're in the same boat. Though I'm sure the split is from old school separate drive groups (separate RAID'ed drives).



  • @Dashrender said:

    @coliver said:

    @Carnival-Boy said:

    Quick question on SQL Server.

    I'm installing it on OBR10, so no issues there. I will install the OS on its own partition (C:). SQL Server will also be installed there (C:\program files). I will then create at least one extra partition for the database(s).

    Is it best practice to install databases on a single partition (E:), or should I create separate partitions for data, logs and tempdb (E:, F:, G:)?

    This isn't a performance issue, obviously (unless it affects fragmentation in any way?). But from an admin perspective, is there any benefit to creating extra partitions? I would have thought that one of the biggest risks with a database is running out of space, and keeping an eye on space on one partition is easier than on three.

    I've googled and can't find any answers. Lots of discussions on the merits of physically separating data and logs, but not much on logically separating them. I guess this means it doesn't matter too much, but I need to make a decision one way or the other.

    We have ours split up into OS, Data, and Logs. Not sure if it is best practice or not; it was recommended by both of our ERP vendors.

    We're in the same boat. Though I'm sure the split is from old school separate drive groups (separate RAID'ed drives).

    Yep, I was thinking the same thing, when the performance of an individual drive/array wouldn't be enough for the entire database server.


  • Service Provider

    I always install SQL to C: and then make a new virtual disk for the data and logs, then a third vdisk for the local SQL backups.

    Strictly for ease of seeing usage at a glance. No performance concerns.
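    With that layout, new databases just need their files pointed at the right disks at create time. A sketch (instance name, database name and paths are all placeholders):

    ```powershell
    # Place data and log files on the D: vdisk and send backups to the
    # E: vdisk, so each disk's usage is visible at a glance.
    Invoke-Sqlcmd -ServerInstance 'SQL01' -Query @"
    CREATE DATABASE NAV
    ON PRIMARY (NAME = NAV_data, FILENAME = 'D:\SQLData\NAV.mdf')
    LOG ON     (NAME = NAV_log,  FILENAME = 'D:\SQLLogs\NAV.ldf');
    GO
    BACKUP DATABASE NAV TO DISK = 'E:\SQLBackup\NAV.bak';
    "@
    ```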



  • I've planned on only one vdisk. I hadn't considered creating more than one.


  • Service Provider

    @Carnival-Boy said:

    I've planned on only one vdisk. I hadn't considered creating more than one.

    Both approaches are completely valid. I like keeping the partitions separate, but I only like it a little. So I lean that way, but keeping them all in one is perfectly fine too.



  • We had a rather large MS Dynamics GP deployment for ERP that handled both employee data as well as all citizens' data and issues. It ran fine on a VM with SQL, and the storage was an iSCSI vDisk (connected to ESX, not Windows) on the SANs, and we had no issues with that.



  • @scottalanmiller said:

    @Carnival-Boy said:

    I've planned on only one vdisk. I hadn't considered creating more than one.

    Both approaches are completely valid. I like keeping the partitions separate, but I only like it a little. So I lean that way, but keeping them all in one is perfectly fine too.

    If you only create one virtual disk and then partition it (e.g. C: for the OS, E: for data) and you later want to increase the size of C:, I believe you can't.

    If you create separate disks then you can. Go into Windows Disk Management, right-click C:, and select 'Extend Volume'.

    For this reason you should always create separate disks, I think. Correct me if I'm wrong.
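    For reference, with separate virtual disks the grow is just two steps on Hyper-V (the path and drive letter below are placeholders):

    ```powershell
    # Step 1, on the host: expand the VHDX itself (the VM must be off,
    # or the disk must be on a SCSI controller for an online resize).
    Resize-VHD -Path 'D:\VMs\NAV\data.vhdx' -SizeBytes 200GB

    # Step 2, inside the guest: extend the partition into the new space.
    $max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
    Resize-Partition -DriveLetter E -Size $max
    ```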


  • Service Provider

    @Carnival-Boy said:

    If you create separate disks then you can. Go into Windows Disk Management, right-click C:, and select 'Extend Volume'.

    For this reason you should always create separate disks, I think. Correct me if I'm wrong.

    You can do it with a single vdisk, but you have to go through a lot more work and you have to use third party disk partitioning tools to do it.