Always Virtual - A Topic discussing the many cases of Virtualizing your Server Fleet



  • Seeing as there are so many topics along the lines of "Should I virtualize this?", "What can I do to back this up?", or "I'm dead in the water and need help rebuilding my server", I figure this is a good place to discuss the different questions we see around the communities where a virtual fleet would be a perfect fit.

    Discussing the reasons why virtual makes total sense in almost every case: the backup options for virtual (incremental and full), implementation options (which hypervisors are commonly used), how to pick a hypervisor for your business, and how to build reliability by virtualizing, along with workload considerations for different software: SQL, AD, and file shares, to name a few.

    Here we have the case of an employee who is new to the world of IT, looking to build a more reliable backup solution for 4 physical servers, currently running Server 2003 with Backup Exec and a few custom scripts.

    Without any specifics regarding why everything is still physical, and without any glaring reason why this shouldn't be virtual, the backup options for this case seem pretty straightforward.

    Virtualize the fleet on XenServer CE and use NAUBackup to export the backup files off host. Reuse the existing physical servers as storage targets.
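
    As a rough sketch of the snapshot-then-export flow that a tool like NAUBackup automates on XenServer (the VM label, NFS mount point, and snapshot UUID below are all made-up placeholders, not from the case above):

    ```shell
    # Sketch of a snapshot-then-export backup on XenServer. VM label and
    # NFS mount are assumptions. DRY_RUN=echo prints the xe commands
    # instead of running them; clear it (DRY_RUN=) on a real host.
    DRY_RUN="${DRY_RUN:-echo}"
    VM="fileserver1"                 # assumed VM name-label
    DEST="/mnt/backup-nas"           # assumed off-host NFS mount
    STAMP="$(date +%Y-%m-%d)"

    # 1. Snapshot, so the export is consistent while the VM keeps running.
    $DRY_RUN xe vm-snapshot vm="$VM" new-name-label="$VM-$STAMP"

    # vm-snapshot prints the snapshot UUID; substitute it below.
    SNAP_UUID="<uuid-from-step-1>"

    # 2. Clear the template flag so the snapshot can be exported like a VM.
    $DRY_RUN xe template-param-set is-a-template=false uuid="$SNAP_UUID"

    # 3. Export a full image of the snapshot to the NAS as an .xva file.
    $DRY_RUN xe vm-export vm="$SNAP_UUID" filename="$DEST/$VM-$STAMP.xva"

    # 4. Remove the snapshot once the export has finished.
    $DRY_RUN xe vm-uninstall uuid="$SNAP_UUID" force=true
    ```

    Each dated .xva is a complete image, so any single file on the NAS is enough to restore that VM.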

    For the incremental portion of the equation there are many options, both free and paid. Name a few.
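
    To name one free approach: rsync with --link-dest, where each dated snapshot hard-links any file unchanged since the previous run, so every snapshot looks like a full backup but only changed files consume new space. A minimal sketch using throwaway demo directories (real use would point SRC at the data and STORE at the NAS):

    ```shell
    # Incremental backups with rsync --link-dest: each dated directory is
    # a complete-looking snapshot, but files unchanged since the previous
    # run are hard links and take no extra space. Demo paths only.
    SRC="$(mktemp -d)"               # stand-in for the data being protected
    STORE="$(mktemp -d)"             # stand-in for the backup NAS mount
    echo "payroll data" > "$SRC/file.txt"

    # First run: a full copy.
    rsync -a "$SRC/" "$STORE/2016-01-01/"

    # Second run: unchanged files are hard-linked against the prior snapshot.
    rsync -a --link-dest="$STORE/2016-01-01" "$SRC/" "$STORE/2016-01-02/"
    ```

    Restoring from any dated directory is a plain copy back; no chain of incrementals has to be replayed.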



  • Obviously we have the Big 3 hypervisors, Xen, ESXi (ESX) and Hyper-V, as the major players in the virtualization world. There are also Proxmox, KVM and RHEV as far as Type 1 hypervisors go, at least that I'm aware of (or have seen used).

    Many of us in this community use one of the Big 3 for the obvious reasons: they're the most common, best known, and most thoroughly developed hypervisors, with a wealth of features.

    XenServer is quite simply one of the most usable platforms in the hypervisor world. It's practically limitless in the features you can add to it, using tools like HA-Lizard or NAUBackup, to name just two.

    HA-Lizard makes it possible to get HA between two hosts without the added complexity of XenServer's native HA.

    NAUBackup offers the ability to create complete backup images of your VMs and migrate them off host (so you can use it as a backup solution with just a CentOS file server or a NAS).

    In addition to these options you have the full gamut of software available from your guest OS, be it Linux, Windows, FreeBSD, or even Mac OS. Software such as ShadowProtect for Windows Server, Back In Time, or the far too many options to even list on Linux.

    The management of building these backups can be completely automated, depending on the hypervisor and third-party software you're using, and the overhead is negligible to say the least: it takes almost no effort to effectively protect your business from almost every failure scenario imaginable.
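
    For instance, on a Linux backup host the whole schedule can live in cron. The script names and times below are purely illustrative placeholders for whatever tooling you use:

    ```
    # Illustrative crontab: weekly full export, nightly incrementals.
    # Script names are placeholders, not real tools.
    0 1 * * 0    /usr/local/bin/full-vm-export.sh     >> /var/log/backup.log 2>&1
    0 1 * * 1-6  /usr/local/bin/incremental-backup.sh >> /var/log/backup.log 2>&1
    ```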

    If you need an additional backup location offsite there are hundreds of options. Services such as Amazon S3 or Backblaze (cloud storage), or any other off-site storage provider, make keeping your systems and data safe straightforward.

    Cloud storage is likely the most reliable solution for off-site, managed file storage you can get. Your data is "portable" globally: should one data center have a massive outage, you can still access your data from another one on the other side of the world. So long as you have internet access you can get your data. Fees may apply, but this is a truly reliable solution if you need it.
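
    As a hedged example of what that off-site push can look like with the AWS CLI (the bucket name and paths are invented, credentials are assumed to already be configured, and DRY_RUN=echo keeps this as a dry run):

    ```shell
    # Push the local backup store to S3 as the off-site copy. Bucket and
    # paths are made-up examples. DRY_RUN=echo prints the command instead
    # of uploading; clear it on a real, credentialed host.
    DRY_RUN="${DRY_RUN:-echo}"
    BACKUP_DIR="/mnt/backup-nas"            # assumed local backup store
    BUCKET="s3://example-corp-backups"      # assumed bucket name

    # --delete is deliberately omitted so a wiped local store cannot
    # silently erase the off-site copy as well.
    $DRY_RUN aws s3 sync "$BACKUP_DIR" "$BUCKET/backups/"
    ```

    Since sync only uploads files that changed, nightly runs cost bandwidth roughly proportional to your churn, not your total data size.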

    If you don't like the cloud approach you can look into COLOs (colocation facilities), where you either rent a server from the COLO or send in a server configured as you need it. They manage the power, AC, and internet so your data is always available. COLOs by design have one issue: if that one site goes up in smoke, so do your data and backups.

    And if you're really tight on budget, there's the friendly "Rent-a-Center" approach. Not many IT professionals will like the approach I'm about to describe, but it's still completely viable. It's not without risk, though.

    The Rent-a-Center approach involves putting a server into an employee's home. Ideally this home would have reliable internet (possibly with dual ISPs), backup power options, and be fully AC managed and monitored.

    The very last option would be backing up locally, on site, to a server with enough storage to cover the company's needs. A standalone server by itself in this case is complete insanity: a single backup is not enough to protect a business. If you choose this option, you really should consider one of the other options above as well.

    Because if you don't have a backup of your backup, you might as well throw in the towel. Always implement two backup plans whenever possible. One by itself will give you a decent level of protection in the event of an outage, but two is better.

    Especially if it can be done for a very reasonable price.

    Hopefully this conversation can continue on how to build and implement backup solutions for companies of all sizes in one central location (here).

    I hope to hear your thoughts on the above options, what you might recommend or what you do.



  • Lastly we have the options for fail-over, whether fail-over is part of the guest OS and its subfunctions, or you're using fail-over at the hypervisor level.

    In either case, nothing comes close to being able to watch one host go up in proverbial flames (or literal ones) and still continue your day-to-day operations, because the systems you've implemented were designed for exactly these cases.

    Only when you virtualize your server fleet, build reliability into it by considering your availability needs (standard or high), and design your system to meet those needs can you actually rest peacefully at night.

    Only in a few scenarios is "reliability by doubling everything" dangerous, namely when both copies are in the same facility. Geographic location has very little impact on modern virtualization platforms, so don't limit yourself to a single closet.

    Sister systems can sit on opposite sides of the world in an active-active state, each just waiting for the other to go offline for one reason or another, immediately taking over in those events and continuing to work without issue. These sister systems can be in separate branch offices or COLOs. Geographic distance only improves the system's reliability when you're thinking about natural disasters: the further apart, the better.

    Windows has many features built into its current Server 2012 operating system where you don't even have to migrate the virtual machine itself: you have a separate VM running on a separate host that will take over should something happen to the "primary" server. Although in this case "primary" is probably not the best word to describe these servers.

    Linux also has a ton of tools for this, far too many, and well outside my expertise, to discuss in full. Some of them are Amanda, Bacula, rsync, Unitrends and Veeam. They all offer different options, and many are for specific uses, but in many circumstances they're up to, if not better than, what Microsoft has to offer.

    Always continue poking and prodding your backup solution. It can most definitely be improved in 99.9999% of cases.



  • @DustinB3403 said:

    Obviously we have the Big 3 Hypervisors, Xen, ESX (ESXi) and Hyper-V as the major players in the Virtualization world. Also there's Proxmox, KVM and RHEV as far as Type 1 Hypervisors go that I'm aware of (seen used)

    Proxmox and RHEV are KVM rebranded. There are only four Type 1 hypervisors in the AMD64 world.



  • The biggest reasons for always virtualizing are hardware abstraction and cost: virtualization is free and easy, which is important because it takes away the "why not virtualize" caveats. Virtualization has, effectively, no downsides. It actually lowers the cost and effort of systems administration, and through the miracles of abstraction it actually makes the overall system simpler, rather than more complex!

    The hardware abstraction aspect is critical because it makes our systems more stable, rather than less stable, and more flexible for whatever we might need in the future. It reduces technical debt with no real cost of its own. These aspects mean equal or lower cost with lower risk.

    It's these aspects, the "always pros" and the "lack of cons," that put virtualization into the solid "always" category.

