@Dashrender That's because our pricing is totally flat (compared to Veeam/VMware). So if XOA costs more for 2 sockets, that's not a general truth when you have more of them (that was the point of my post).
-
RE: Xen Orchestra Upgrading
-
RE: Xen Orchestra Upgrading
@Dashrender Depends on the number of sockets you have.
-
RE: Xen Orchestra Upgrading
Note this opinion applies to XS + XO built from the sources. If you pay for the turnkey version (appliance + updater + support), this comment is not relevant.
-
RE: Free bare metal virtualization, HyperV free or VMware ESXi 6 free or else ?
@John-Nicholson This is something you can do easily on your NFS/SMB remote (e.g. with ZFS), and because it's handled transparently at the FS level, it's no problem for XO.
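To make that concrete, here is a minimal sketch (Python wrapping the standard `zfs` CLI) of turning on compression for the dataset behind the remote. The dataset name `tank/xo-backups` is purely an assumption for illustration:

```python
# Minimal sketch: enable transparent compression on the ZFS dataset that
# backs the NFS/SMB remote. "tank/xo-backups" is a hypothetical dataset name.
import subprocess

DATASET = "tank/xo-backups"  # assumption: the dataset exported to XO

def zfs_set(prop, value, dataset=DATASET):
    """Set a ZFS property; new writes pick it up transparently."""
    subprocess.run(["zfs", "set", "%s=%s" % (prop, value), dataset], check=True)

def zfs_get(prop, dataset=DATASET):
    """Read back the current value of a ZFS property."""
    out = subprocess.run(
        ["zfs", "get", "-H", "-o", "value", prop, dataset],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    zfs_set("compression", "lz4")  # cheap and usually a clear win for backups
    print("compression:", zfs_get("compression"))
    print("compressratio:", zfs_get("compressratio"))  # ratio achieved so far
```

XO just sees a plain NFS/SMB share; the space savings happen entirely on the storage side.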
-
RE: XenServer hyperconverged
I also updated the benchmarks with CrystalDiskMark working on a 4GiB file (to avoid the ZFS cache). The performance difference is now huge, so in the end the impact of the replication is not that bad.
-
RE: XenServer hyperconverged
@Danp That's because, despite the added layers, we are doing some operations locally and on an SSD (vs the existing NAS, where everything is done remotely).
It's more indicative than a real apples-to-apples benchmark. The idea here is to show that the system would be usable; performance is not the main objective.
-
RE: Free bare metal virtualization, HyperV free or VMware ESXi 6 free or else ?
Being able to read VHD files and do all the needed work on top of that is currently a large amount of work (and therefore a cost for us), which explains why it will be in the Premium version of XOA. But I hope we'll be on time!
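To give an idea of what is involved, here is a small illustrative sketch (not our actual code) that only parses the 512-byte footer every VHD file ends with; the real work (block allocation tables, differencing chains, merging, exporting, etc.) sits on top of this:

```python
# Illustrative sketch: read the 512-byte footer at the end of a VHD file.
import struct

VHD_FOOTER_SIZE = 512
DISK_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}

def read_vhd_footer(path):
    with open(path, "rb") as f:
        f.seek(-VHD_FOOTER_SIZE, 2)  # the footer lives in the last 512 bytes
        raw = f.read(VHD_FOOTER_SIZE)

    # Big-endian fields, per the Microsoft VHD specification
    (cookie, _features, _version, _data_offset, _timestamp,
     creator_app, _creator_ver, _creator_os,
     original_size, current_size, _geometry,
     disk_type, _checksum, _uuid, _saved_state) = struct.unpack(
        ">8s I I Q I 4s I I Q Q I I I 16s B 427x", raw)

    if cookie != b"conectix":
        raise ValueError("not a VHD file (bad footer cookie)")

    return {
        "creator": creator_app.decode("ascii", "replace"),
        "virtual_size": current_size,   # size seen by the VM, in bytes
        "original_size": original_size,
        "disk_type": DISK_TYPES.get(disk_type, "unknown"),
    }

# Example: read_vhd_footer("/path/to/export.vhd")
```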
-
RE: XenServer hyperconverged
@black3dynamite I can't use LVM because it's block-based; I can only work with a file-level backend.
I did try to play with blocks, and performance was also correct (one more layer, so a small extra overhead), but I hit a big issue in certain cases. It was also less scalable in the "more than 3 hosts" scenario.
-
RE: XenServer hyperconverged
@black3dynamite I'm not sure I understand the question.
So far, the "stack" is:
- Local Storage in LVM (created during XS install)
- on top of that, it is filled by a big data disk used by a VM
- the VM will expose this data disk
- XenServer will mount this data disk and create a file-level SR on it
- VMs will use this SR
It sounds like a ton of extra layers, but it's the easiest approach I found after a lot of tests (you can see it as a compromise between modifying the host too deeply to reduce the layers vs. not modifying anything on the host but having more complexity to handle at the VM level). You can consider it a "hybrid" approach.
Ideally, XenServer could be modified directly to allow this (like VMware does with VSAN), and expose the configuration via XAPI.
I think if we (the XO project) show the way, it could (maybe) trigger some interest on Citrix's side (which is only into XenDesktop/XenApp, but hyperconvergence makes sense even there).
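To illustrate the fourth step (XenServer creating a file-level SR on the mounted data disk), here is a hedged sketch using the XenAPI Python bindings. The host address, credentials, mount point and SR name are assumptions, and the exact SR type/device-config may differ from what we end up using:

```python
# Hedged sketch: create a file-level SR on an already-mounted data disk,
# through XAPI. All names/paths here are illustrative assumptions.
import XenAPI

session = XenAPI.Session("https://xs7-host.example")   # hypothetical host
session.xenapi.login_with_password("root", "password")
try:
    host = session.xenapi.host.get_all()[0]
    sr = session.xenapi.SR.create(
        host,
        {"location": "/mnt/shared-disk"},  # assumed mount point of the data disk
        "0",                               # physical size: let the driver decide
        "Shared file SR",                  # name label (assumption)
        "File-level SR backed by the replicated data disk",
        "file",                            # file-based SR driver
        "user",                            # content type
        True,                              # flagged as shared for the whole pool
        {},                                # sm_config
    )
    print("SR created:", session.xenapi.SR.get_uuid(sr))
finally:
    session.xenapi.session.logout()
```

Flagging the SR as shared is what lets the pool treat a local mount as if it were a NAS.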
-
RE: XenServer hyperconverged
Okay, so after just a few days of technical experiments, here is the deal.
Context
- 2x XS7 hosts, installed directly on 1x Samsung EVO 750 (128 GiB) each
- dedicated 1Gb link between those 2 machines (one Intel card, the other is Realtek garbage)
Usually, in a 2-host configuration, it's not trivial to avoid split-brain scenarios.
In a very small setup like this (only 2 hosts, with little disk space), you'd expect the overhead to be the worst possible in proportion to the resources. But we'll see that it's still reasonable.
Current working solution
Shared file storage (thin provisioned):
What's working
- data replicated on both nodes
- fast live migration of VMs (just the RAM) between hosts, without a NAS/SAN
- very decent perfs
- "reasonable" overhead (~2GiB RAM on each Node + 10GiB of storage lost)
- scalable up to the max pool size (16 hosts)
- if one node is killed, the VMs on the other host keep working
- using XenServer HA on this "shared" storage to automatically bring back the VMs that were on the killed node (see the sketch after this list)
- no split-brain scenario (at least during my tests)
- no overcomplicated configuration on the hosts
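To make the live migration and HA points above concrete, here is a hedged sketch via the XenAPI Python bindings; the VM, host and SR names are illustrative assumptions:

```python
# Hedged sketch: live migrate a VM between the two hosts and enable pool HA
# on the "shared" SR. All name labels below are illustrative assumptions.
import XenAPI

session = XenAPI.Session("https://xs7-host.example")   # hypothetical host
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("test-vm")[0]         # assumed VM
    target = session.xenapi.host.get_by_name_label("xs7-node2")[0] # assumed host

    # The SR is seen as shared, so only the RAM has to move: a plain pool migrate
    session.xenapi.VM.pool_migrate(vm, target, {"live": "true"})

    # Enable XenServer HA using the shared SR for heartbeat/metadata storage
    sr = session.xenapi.SR.get_by_name_label("Shared file SR")[0]  # assumed SR
    session.xenapi.pool.enable_ha([sr], {})
finally:
    session.xenapi.session.logout()
```

Nothing special on the XAPI side: since the SR looks shared, the standard migration and HA machinery just works.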
Overhead
- RAM overhead: <5GiB RAM on 32GiB installed
- Storage overhead: lost around 9GB of disk space per host
Obviously, with large local HDDs the storage overhead becomes negligible.
Scalability
In theory, going for more than 3 nodes opens up interesting performance scalability. So far it's just replicating data, but you can also spread the data across nodes once you have 3+ of them.
Perfs
I'm comparing against a dedicated NAS with ZFS RAID10 (6x 500GiB HDDs), 16GiB of RAM (a very efficient cache for random reads/writes) and semi-decent hardware (a dedicated IBM controller card), over an NFS share.
|  | ZFS NAS | XOSAN | diff |
| --- | --- | --- | --- |
| Sequential reads | 120 MB/s | 170 MB/s | +40% |
| 4K reads | 9.5 MB/s | 9.4 MB/s | draw |
| Sequential writes | 115 MB/s | 110 MB/s | -5% |
| 4K writes | 8.4 MB/s | 17 MB/s | +200% |

As you can see, that's not bad.
Drawbacks
- right now, it's a fully manual solution to install and deploy, but it could be (partly) automated
- it's a kind of "cheating" with XAPI to create a "shared" local file SR (but it works ^^)
- the XS host can't mount the share automatically on boot for some reason, so I'm currently looking for a way to do that correctly (maybe by creating a XAPI plugin? see the sketch after this list)
- you'll have to deploy 2 or 3 RPMs on dom0, but the footprint is pretty light
- it will probably (very likely in fact) work only on XS7 and not before
- the only clean way to achieve this is to have SMAPIv3 finished. Until then, we (at XO) will have to glue things together as best we can to provide a correct user experience.
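About the XAPI plugin idea mentioned in the drawbacks: a plugin is just a small Python script dropped into /etc/xapi.d/plugins/ on dom0 and called via `xe host-call-plugin`. Here is a minimal sketch; the export, mount point and plugin name are assumptions:

```python
#!/usr/bin/env python
# Minimal sketch of a XAPI plugin (e.g. /etc/xapi.d/plugins/xosan-mount).
# Only the XenAPIPlugin dispatch mechanism is standard XenServer; the export
# and mount point are illustrative assumptions.
import subprocess
import XenAPIPlugin

def mount_share(session, args):
    """Mount the replicated share so the file SR can be plugged after boot."""
    source = args.get("source", "10.0.0.1:/xosan")  # hypothetical export
    target = args.get("target", "/mnt/xosan")       # hypothetical mount point
    subprocess.check_call(["mkdir", "-p", target])
    subprocess.check_call(["mount", "-t", "nfs", source, target])  # assuming NFS
    return "mounted %s on %s" % (source, target)

if __name__ == "__main__":
    # Called with: xe host-call-plugin host-uuid=<uuid> plugin=xosan-mount \
    #              fn=mount_share args:source=... args:target=...
    XenAPIPlugin.dispatch({"mount_share": mount_share})
```

It could then be triggered at boot, or by XO itself, before re-plugging the SR.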
Conclusion
It's technically doable, but there is a mountain of work to turn this into a "one click" deployment. I'll probably run a closed beta for some XOA users and deploy things semi-manually, to validate the concept a bit before spending too much time scaling something that nobody would use in production for whatever reason (interest, complexity, etc.).
-
RE: Why is VMWare considered so often
There is a middle ground between knowing everything and using proprietary software everywhere.
Some people consider this a philosophical debate, some don't.
My point of view is that transparency is key: the value is in the service/experience, not in the (closed) code itself. At least, this is how things are evolving (see the proportion of OSS in companies 15 years ago vs now).
/my 2 cents
-
RE: Why is VMWare considered so often
@John-Nicholson That's what we are doing in the XenServer market (for a lot of reasons, especially that the API itself is good enough for us to stay agentless).
-
RE: Why is VMWare considered so often
@John-Nicholson I never looked into VMware's contributions to OSS, so I could be wrong, but I've never heard of a lot of people contributing to it (on major projects, not those tailored only for VMware itself).
-
RE: Why is VMWare considered so often
@John-Nicholson I'm not here to attack the product at all (I don't even know what half of the acronyms mean). I'm not building a hypervisor.
I'm just trying to survive on the crumbs left from the server virt market, without abandoning my philosophy (making Free software).
-
RE: Why is VMWare considered so often
Anyway, my point was:
-
Free software is great and powerful; you just trade it against the time needed to understand how it works (or you cross your fingers, but that's not acceptable in production). Note that you can mitigate the risk in different ways, but you should understand most of your infrastructure.
-
Support/Service on proprietary software can be useful if you have money and don't care about what's happening inside (i.e. it's not your core business).
-
Support/Service on Open Source software is a kind of best of both worlds.
But that's my opinion
-
RE: Why is VMWare considered so often
@John-Nicholson Indeed, I think you are not yet the leader in licensing FUD: Oracle always seems better at it than anyone else on earth.
-
RE: Why is VMWare considered so often
So you edited your post and now I don't agree with you
You can have support for XS and also support for XO (our pricing model is based on a dedicated appliance + support).
-
RE: Why is VMWare considered so often
@John-Nicholson I think you are mixing Xen and XenServer here.
-
RE: Why is VMWare considered so often
@John-Nicholson said in Why is VMWare considered so often:
Years ago, I had a DRBD cluster go split-brain on me
This, totally agree. Before adopting a product in a real prod env, you have 2 choices:
- pay for a turnkey solution (packaged with support)
- install it by yourself, ONLY after having enough knowledge of how it works.
As you said earlier, you can't master something in 4 hours. You need to practice, validate, crash it, restore it, etc.