I should add that this isn't exactly true. "Windows Dedupe" is not available in Hyper-V, which might sound obvious, as it is a Windows feature. But Hyper-V does have ReFS, which gains a new feature in 2016 (block cloning) that is not called dedupe but acts like a really lightweight dedupe option. So while "Windows Dedupe" is not available, there is a dedupe technology available to use. It's not a great one, it's super basic, but it also has essentially zero overhead, so it can be pretty nice to use even if it doesn't do a lot.
I just face-palmed reading that first paragraph:
LVM RAID support
One of the biggest features of blivet-gui 2.0 is support for LVM RAID. When adding a new LV, you can now choose a RAID level for it. LVM supports the same RAID types mdraid does, but you can choose a different level for every logical volume, so it's more flexible than using LVM on top of an mdraid array.
LVM still uses md raid, just like you would normally.
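For anyone who wants to see what that per-LV flexibility looks like from the command line, here is a minimal sketch. The VG and LV names are made up for illustration, and the commands are only printed here (running them needs root and a real volume group):

```shell
# Sketch only -- "datavg", "fastmirror" and "bulk" are hypothetical names.
# Each LV in the same VG can use a different RAID level, which is the
# flexibility the blivet-gui release notes are describing.
VG=datavg
CMDS="lvcreate --type raid1 -m 1 -L 20G -n fastmirror $VG
lvcreate --type raid5 -i 3 -L 100G -n bulk $VG"
printf '%s\n' "$CMDS"   # on a real system you would run these, not print them
# Afterwards, 'lvs -o lv_name,segtype,stripes datavg' shows the per-LV types.
```

Note that, as said above, these segment types are still backed by the kernel's md raid code; LVM is just driving it per-LV.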
That said, people like their GUIs for all the things, so why not have yet another tool to manage everything.
So when they are ready to make their software production-ready, it will be the other way around.
Exactly. Like if they ONLY supported VMware or ONLY supported Hyper-V or ONLY supported Windows we'd understand. Those are all potential production ways to run software. But a physical deployment? This means that they've never tested and don't support ANY production environment.
There is a big gap between having "one tested environment for production" and "not tested or supported for production."
Exact same reason we dropped FogBugz. They only supported non-production platforms.
I still love XenServer and XenOrchestra as my combo of choice for now. But then I'm only running small Linux/Windows servers on it anyway, like Zabbix, RADIUS, and Unifi. Nothing fully production like the DC or Dynamics/SQL.
Thank you for your feedback on 5nine Manager, guys.
Our goal is to provide an intuitive yet powerful product for a complete, centralized Hyper-V management experience.
It is now the de facto standard for managing environments with up to 20 Hyper-V hosts.
Now we are going even further and have prepared a great product that can help you scale, segregate user management roles with RBAC, add automation support, and more. Enjoy!
I know there is a best practice that discourages an environment with only one domain controller.
Why? Do you really need two domain controllers? How many authentications are you doing? How much downtime can you afford? Would it be better to have a single domain controller on a VM that you can back up and restore in a few minutes versus having two running at all times?
Why = because a document from Microsoft said so and at the time when I made our domain I didn't know any better :).
What you're asking me is what I'm asking myself, which leads me to the conclusion that when it's time to make the VM for the accounting software, the old box should just go away. Especially since my tiny number of users would be able to log into their workstations with cached credentials until I can get the domain controller VM functioning again.
Who cares what some paper from the company selling you the licensing says.
What does your company need?
I have never used two domain controllers in the SMB space. Even before virtualization at my clients.
It is simply not something needed.
You don't think the downtime justifies the cost for an SMB, I'm assuming, and load balancing isn't a concern?
Rarely is downtime worth the cost of mitigating it in an SMB environment. They often don't actually understand what the true cost of downtime is and exaggerate it more often than not. If you're getting enough requests that you're hitting a performance threshold on the domain controller, then you may be out of the SMB space.
And authentication often has a near-zero impact for short durations. A DC could easily be down for 30 minutes and literally have no one notice.
@Kelly yeah the HC3 Move (powered by Double-Take) is certainly easier and more direct...
If the XenServer export virtual disk file is a VHD file, you can probably skip converting it to qcow2 and simply rename the file to match the GUID of the exported empty qcow2 file (so it will still be named <guid>.qcow2 even though it's actually your VHD file). Our import actually uses the qemu-img convert tool, which will automagically detect that it's really VHD format and do the conversion at the same time it imports, saving that extra step. (And you can reuse that same "import shell" over and over... just keep replacing the guid.qcow2 file with the one you want to import.)
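To make the rename trick concrete, here is a rough sketch. Every name below is a placeholder, a throwaway temp directory stands in for the real HC3 datastore, and the qemu-img step is shown only in a comment because the importer runs that part for you:

```shell
# All names here are hypothetical, for illustration only.
WORK=$(mktemp -d)
GUID="00000000-0000-0000-0000-000000000000"   # stand-in for the real disk GUID
: > "$WORK/xenserver-export.vhd"              # stands in for the exported VHD
# The key trick: rename the VHD to the qcow2 name the empty import created.
mv "$WORK/xenserver-export.vhd" "$WORK/$GUID.qcow2"
# During import, the tooling effectively runs something like:
#   qemu-img convert -O qcow2 "$GUID.qcow2" <target>
# and qemu-img auto-detects that the input is really VHD (vpc) format.
ls "$WORK"
```

The file extension lies, in other words, but qemu-img goes by the file's header rather than its name, which is why the shortcut works.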
and after a lot of episodes, I've fixed the thing.
please let me state this again: I HAVE.
Not the HPE support, not the reseller tech support.
I = an almost-idiot ex-embedded-software developer illogically cast into the sysadmin role at a nonsense company!
There was a mismatch between the OS version, BIOS version, controller firmware version, and controller driver version. Selecting the right combination has (apparently) fixed the issue!
How storage controllers affect reboot signals is still a mystery... but actually it was all related to the smart paging I mentioned before. Moving it from the default location caused all the issues.
I discovered the "bug" by recreating a new VM with a copy-pasted HDD but with hypervisor defaults... two twin VMs, one running, one not... the only difference: the location of the smart paging file. XD
@Kelly I have seen the feature matrix, and that's why I asked... because there are two editions in the feature matrix... but @DustinB3403 said above that the matrix includes support. Is this correct or not?
The matrix is from Citrix, not XenServer. It is really confusing terminology.
Standard and community are the same. The only difference is that Standard comes with support from Citrix.
Enterprise and the features listed there are only available to those with a paid support subscription for that version.
The codebase may be identical, but you can't access things like GPU virtualization without the Enterprise version.
Even more confusing is that those things are add-ons rather than unlocks... so it isn't really XenServer doing it.
KVM with virt-manager.
The latest CentOS is my preferred KVM host.
And the Fedora LXDE Spin is my preferred virt-manager choice; I use it from inside Windows via VirtualBox.
Because it resembles the old ESXi philosophy with the vSphere C# client.
And you can do whatever you want with it, for free. Especially when you use virtio drivers for network and disk, you get speeds nearly identical to bare metal.
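For reference, wiring up virtio looks roughly like this with virt-install. The guest name and paths are made up, and the command is only printed here since it needs a real libvirt/KVM host to actually run:

```shell
# Hypothetical guest name, image path and ISO -- adjust for your host.
# bus=virtio and model=virtio are what give the near-bare-metal disk and
# network performance mentioned above (the guest needs virtio drivers).
CMD="virt-install --name demo-vm --memory 4096 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/demo.qcow2,bus=virtio \
  --network network=default,model=virtio \
  --cdrom /var/lib/libvirt/images/install.iso"
printf '%s\n' "$CMD"   # run the command directly on a real KVM host
```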
But sadly it's lacking other tools to let users connect to and use the VM; you'll have to handle that from inside the VM, e.g. with RDP or VNC.