XenServer, local storage, and redundancy/backups
-
So I have four hosts that had OpenStack Cloud installed on them (before my time). They were running Ceph locally to provide storage and redundancy. I'm trying to evaluate whether moving them to XS is feasible without any capital expenditure. As far as I know the machines do not have RAID cards, but they do have several mismatched drives (1-4 TB). Is there any way I can achieve a reasonable amount of redundancy in this scenario?
-
Can you tear down the OpenStack Cloud while you work on rebuilding these systems into a reasonable configuration?
-
@DustinB3403 said:
Can you tear down the OpenStack Cloud while you work on rebuilding these systems into a reasonable configuration?
I'm going to blow it away and reinstall completely fresh. The OpenStack version is pretty old and it is running on Ubuntu 12.04.
-
Since you have more than two hosts, you could use the built-in tools from Xen to create a highly reliable pool.
I believe @halizard might even work with more than two hosts, but their bread and butter, so to speak, is a two-host setup from everything I've read.
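For reference, pooling hosts is done with the `xe` CLI; a rough sketch with placeholder addresses and credentials (note that XenServer's built-in HA additionally requires a shared storage repository for its heartbeat, which local-only storage won't provide):

```shell
# Run on each additional host to join it to the pool master
# (address and password are placeholders).
xe pool-join master-address=10.0.0.10 master-username=root master-password=secret

# Built-in HA can then be enabled on the master, but only with a shared SR
# available for the heartbeat/statefile:
xe pool-ha-enable heartbeat-sr-uuids=<shared-sr-uuid>
```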
-
I thought @halizard was for more than two hosts and that you needed iSCSI-HA for just two nodes?
-
No, @halizard makes it so you can work with just two hosts.
-
Possibly more, but I haven't dug into that.
-
Can you break down the hardware better? I'm unclear if you have an OpenStack compute structure AND a Ceph one, or if that is all on the same hardware?
If the former, why not keep Ceph and only move the top layer from OpenStack to XS?
Also, what is driving the move away from OpenStack? Just a desire for simplicity?
-
@DustinB3403 said:
Since you have more than two hosts, you could use the built-in tools from Xen to create a highly reliable pool.
I believe @halizard might even work with more than two hosts, but their bread and butter, so to speak, is a two-host setup from everything I've read.
It is built on DRBD, which really supports just two hosts.
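For context, DRBD replicates a block device between exactly two peers, which is why the HA-Lizard design is fundamentally a two-node pairing. A minimal sketch of a DRBD resource definition (hostnames, devices, and addresses are all made up):

```
resource r0 {
    protocol C;               # synchronous replication between the two peers
    device    /dev/drbd0;
    disk      /dev/sdb1;      # local backing device (assumed)
    meta-disk internal;
    on xen1 {
        address 10.0.0.1:7788;
    }
    on xen2 {
        address 10.0.0.2:7788;
    }
}
```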
-
@scottalanmiller Thank you for the clarification.
-
@scottalanmiller said:
Can you break down the hardware better? I'm unclear if you have an OpenStack compute structure AND a Ceph one, or if that is all on the same hardware?
If the former, why not keep Ceph and only move the top layer from OpenStack to XS?
Also, what is driving the move away from OpenStack? Just a desire for simplicity?
I have four hosts that are all OpenStack and Ceph nodes. Bad design on all counts. I wish they were separated. The current hardware requirements preclude that at this point. My ultimate goal is to move the storage to dedicated hardware, perhaps utilizing Ceph then, but until then I need to get a working virtualization platform.
-
As for the driving factor, it is that we're trying to simplify our infrastructure. It looked like we might be able to achieve this by using Mirantis to package up OpenStack, but we're having issues getting their deployment tools to work. If I had more time to play with things I might fight it to the point of working, but we're currently running in a semi-crippled state (one of the hosts removed itself from the cloud), and I need to get something up and running sooner rather than later that can also be a longish-term solution. We don't really need a private cloud. It is mostly a convenience, and at this point it appears not to be worth the overhead to set up and maintain.
-
@Kelly said:
@scottalanmiller said:
Can you break down the hardware better? I'm unclear if you have an OpenStack compute structure AND a Ceph one, or if that is all on the same hardware?
If the former, why not keep Ceph and only move the top layer from OpenStack to XS?
Also, what is driving the move away from OpenStack? Just a desire for simplicity?
I have four hosts that are all OpenStack and Ceph nodes. Bad design on all counts. I wish they were separated.
Why do you feel the need to separate the storage and compute? What business reason exists to justify the added cost and management headache?
I get that you currently have a management headache, and I do like the idea of moving to something more reliable. XenServer with HA-Lizard and Xen Orchestra would be a great drop-in replacement; it's what I'm migrating to, at least.
-
@Kelly said:
@scottalanmiller said:
Can you break down the hardware better? I'm unclear if you have an OpenStack compute structure AND a Ceph one, or if that is all on the same hardware?
If the former, why not keep Ceph and only move the top layer from OpenStack to XS?
Also, what is driving the move away from OpenStack? Just a desire for simplicity?
I have four hosts that are all OpenStack and Ceph nodes. Bad design on all counts. I wish they were separated. The current hardware requirements preclude that at this point. My ultimate goal is to move the storage to dedicated hardware, perhaps utilizing Ceph then, but until then I need to get a working virtualization platform.
Gotcha, okay. I was hoping that the Ceph infrastructure could remain. But I guess not.
-
@Kelly said:
As for the driving factor, it is that we're trying to simplify our infrastructure. It looked like we might be able to achieve this by using Mirantis to package up OpenStack, but we're having issues getting their deployment tools to work. If I had more time to play with things I might fight it to the point of working, but we're currently running in a semi-crippled state (one of the hosts removed itself from the cloud), and I need to get something up and running sooner rather than later that can also be a longish-term solution. We don't really need a private cloud. It is mostly a convenience, and at this point it appears not to be worth the overhead to set up and maintain.
I totally get that a private cloud isn't likely to make sense with just four nodes; that's pretty crazy.
Was just thinking of what might be easy going forward.
-
Do you really need HA? HA adds complication. Although there is an option here... with four nodes you could do TWO HA-Lizard clusters and, I think, put it all under XO for a single pane of glass. Not as nice as a single four-node cluster, but free and works with what you have, more or less.
-
The mismatched local drives will pose a challenge for local RAID. You can use them, but you will get crippled performance.
-
@scottalanmiller said:
The mismatched local drives will pose a challenge for local RAID. You can use them, but you will get crippled performance.
It looks like there is an onboard 8-port LSI SAS controller in them, so I might be able to do RAID. My desire for HA is not for HA per se, but more for survivability in the case of a single drive failing, since I was considering not running any kind of RAID so as not to lose storage capacity.
-
@Kelly said:
It looks like there is an onboard 8-port LSI SAS controller in them, so I might be able to do RAID. My desire for HA is not for HA per se, but more for survivability in the case of a single drive failing, since I was considering not running any kind of RAID so as not to lose storage capacity.
You are going to lose storage capacity to either RAID or RAIN; you can't do any sort of failover without losing capacity. The simplest thing, if you are okay with it, would be to do RAID 6 or RAID 10 (depending on the capacity that you are willing to lose) using MD software RAID, and not do HA but just run each machine individually. Use XO to manage them all as a pool.
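To put rough numbers on the capacity trade-off: md software RAID sizes every member down to the smallest drive in the array, so mixing 1-4 TB disks wastes the extra space on the larger ones. A back-of-the-envelope sketch (drive count and sizes assumed):

```shell
# Usable capacity for a 4-drive md array whose smallest member is 1 TB.
# md truncates every member to the smallest drive's size.
smallest=1   # TB
drives=4
raid6=$(( smallest * (drives - 2) ))   # RAID 6 loses two drives' worth to parity
raid10=$(( smallest * drives / 2 ))    # RAID 10 loses half to mirroring
echo "RAID 6: ${raid6} TB usable, RAID 10: ${raid10} TB usable"
```

With four drives both layouts land on the same usable size; RAID 6 survives any two drive failures, while RAID 10 generally rebuilds faster. The array itself would be created with something like `mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]` (device names hypothetical).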
-
@Kelly You would have to have a SAS controller of some type in there already, for the drives that are attached for Ceph now.