Hp storage D2d4324 nfs slow xenserver
-
@mroth911 said in Hp storage D2d4324 nfs slow xenserver:
So in your expert opinion, what would be the proper setup for what I am trying to achieve? I am trying to create a failover cluster running web servers using cPanel.
Well, that's a little complex, so let's delve into it. What processes do we need to protect? In most cases we do high availability at the application layer, not at the platform layer. This is where you get the best reliability. What web servers are you running, what kind of applications are on them, and what dependencies (like databases) do they have?
-
@DustinB3403 said in Hp storage D2d4324 nfs slow xenserver:
What you're running as a VM on the hypervisor doesn't affect the design of the hypervisor and failover capabilities. (generally)
Sort of... but only because more than 50% of the time what you run tells you that HA has no function at the platform level at all. Web servers, file servers, Active Directory and the like generally do their own HA, and you avoid hypervisor HA because it interferes with the HA that is already there.
-
We are running WordPress websites, nothing crazy, using MySQL. We are using XenServer, and that's basically it.
The server specs are:
HP DL360 G5, dual quad core, I believe 32 or 64 GB of RAM.
-
@mroth911 said in Hp storage D2d4324 nfs slow xenserver:
We are running WordPress websites, nothing crazy, using MySQL. We are using XenServer, and that's basically it.
The server specs are:
HP DL360 G5, dual quad core, I believe 32 or 64 GB of RAM.
MySQL / MariaDB has its own means of doing HA that is more powerful than what the hypervisor can do (the hypervisor can only be crash consistent, whereas the database itself can do true HA and fault tolerance with zero data loss), and making WordPress highly reliable is just a matter of load balancing the traffic.
So in a case like this, I would not have any HA at any level except for the applications. Just have two (or more) host nodes with zero shared infrastructure (no HA at any level, no shared storage, etc.) and let the applications (Apache and MySQL) do their jobs.
-
So are you saying... use HA-Lizard or something like that?
-
@mroth911 said in Hp storage D2d4324 nfs slow xenserver:
So are you saying... use HA-Lizard or something like that?
No, he's not.
If you need to create HA for Active Directory, you buy two servers and install AD DS on each. The HA is provided at the software level of Windows, not at the hardware or hypervisor level.
Scott is telling you that Apache and MySQL can do exactly that - skip the hypervisor HA and only use application HA. You'll also probably want to put a load balancer in front of these two servers as well.
-
I am starting to get the idea now. Sorry, I am very green with this. Before finding out about this website, everyone I spoke to told me I needed the 3-2-1 concept, which is what I was starting to do. I am glad I was referred to this website. Let me see if I understand... I can take two web servers and have them link to each other, with no shared storage? Is that what I am understanding?
-
@Dashrender said in Hp storage D2d4324 nfs slow xenserver:
If you need to create HA for Active Directory, you buy two servers and install AD DS on each. The HA is provided at the software level of Windows, not at the hardware or hypervisor level.
In the case of AD, even higher, actually. The HA is purely within the application, even Windows doesn't know it's HA.
-
@mroth911 said in Hp storage D2d4324 nfs slow xenserver:
Let me see if I understand... I can take two web servers and have them link to each other, with no shared storage? Is that what I am understanding?
So yes and no. Let's break it down into discrete parts (and tell me if there are more parts that I don't know about, I'm just talking vanilla WordPress right now.)
You have two things that need to be HA here, the application and the database. These are two different things with different needs so we need to talk about them completely separately.
-
WordPress Application High Availability
WordPress itself is a web application; it relies on a database for its data. WordPress itself is stateless (it does not change under normal operations.) Because of this, there is no need for "nodes" in a WordPress cluster to "talk" to each other. They don't even need to be similar. You can have three nodes, one with WordPress running on Windows, one with WordPress running on Apache on FreeBSD and one with WordPress running on Nginx on CentOS, and they all act the same and you can load balance between them. The stateless nodes have nothing to say to each other, so there is no need for them to be the same (other than the WP code itself.)
So making WordPress HA at the application layer is super easy. It is as simple as running two or more instances of it, having your load balancer point to them as a pool, sending traffic to each as needed, and removing any that stop responding from the pool. Easy peasy.
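For example, a minimal sketch of that pool with HAProxy (HAProxy, the IPs, and the hostnames are just assumptions for illustration; any load balancer with health checks does the same job):

```bash
# Hypothetical HAProxy pool for two WordPress nodes (appended to an existing haproxy.cfg).
# Nodes that stop answering the health check are dropped from the pool automatically.
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
frontend www
    mode http
    bind *:80
    default_backend wordpress_pool

backend wordpress_pool
    mode http
    balance roundrobin
    option httpchk GET /
    server wp1 10.0.0.11:80 check
    server wp2 10.0.0.12:80 check
EOF
systemctl restart haproxy
```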
Because it is stateless, you have a few choices for making the different copies of WP identical.
You can...
- Do it by hand
- Use a simple tool like rsync to take a master and make the others identical to it (see the sketch after this list)
- Build each node pristine each time using a tool like Ansible, Chef or Puppet
- Automate using a custom script
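Here is a minimal rsync sketch of that second option, assuming the first node holds the master copy (the paths and hostname are placeholders):

```bash
# Hypothetical: push the WordPress docroot from the master node to a second node.
# --delete keeps the copies identical; exclude wp-config.php if it differs per node.
rsync -az --delete --exclude 'wp-config.php' \
    /var/www/html/ wp2.example.com:/var/www/html/
```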
And, being stateless, WordPress can safely be made highly available using the hypervisor-layer high availability tools as well, but that is silly in a case like this because you would give up load balancing to do it, which is just throwing away your value. So in this case, this would not make sense.
-
MySQL or MariaDB High Availability
The database portion of your WordPress stack is the critical one. Unlike the stateless application server, the database is stateful - which means that it is constantly in a state of change, is mutable and cannot be protected without knowing its current state. This means that tools like platform layer high availability cannot protect it well because they will treat the database as having crashed and could corrupt it or lose data during a failover. Not ideal. Nor will they allow for load balancing, which we often will not do anyway for the DB, but they eliminate that option.
For the database we need the database applications to speak to each other and keep the database nodes (two or more) synchronized, with identical data in each place. We can do that in a master/slave way (aka active/passive), or we can do it in a multi-master way, which is far more complex. But this has to be done in the database itself.
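As a rough sketch of the master/slave approach, assuming MySQL with GTID-based replication and a replication user already created on the master (the hostnames, user, and password are placeholders; MariaDB's syntax differs slightly):

```bash
# Hypothetical master settings (my.cnf):
#   [mysqld]
#   server-id = 1
#   log_bin = mysql-bin
#   gtid_mode = ON
#   enforce_gtid_consistency = ON
#
# On the replica (server-id = 2), point it at the master and start replicating:
mysql -u root -p <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='db1.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='replica-password',
  MASTER_AUTO_POSITION=1;
START SLAVE;
SQL
```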
-
In both cases above, any shared storage would introduce a single point of failure that does not exist naturally. Without shared storage, each application copy and each database instance has a full copy of the entire application or dataset. So if one fails, another can take over. Zero data loss, zero shared points of failure.
With the 3-2-1 design (or ANY design with shared storage) any storage corruption OR any failure in the storage node causes the entire stack to be lost - there is no high availability or protection of any significance. The HA aspect is completely skipped in that case.
Shared storage also makes load balancing pointless, as the most critical component for performance is the piece that is shared, so "scaling out" doesn't really do very much: the database delays from the storage will remain the same no matter how many database nodes or application nodes you add. It's like hooking more cars together with tow ropes but still only engaging the engine in the first car. It doesn't make things go faster, it just makes the single engine work harder (in many cases.) This is because you are unlikely to be CPU bound in a case like this.
-
So, from a hardware perspective, you would just want two physical servers (or more if you need greater performance than two can provide, but if that is the case consider bigger servers rather than more servers.) If you feel that you need more than two servers, we should talk about scaling. This site, MangoLassi, handles over two million full thread loads per month and over 160 million resource requests (hits) per month on a small fraction of the resources that you are talking about using here. Just for a capacity perspective.
From an operating system perspective, each OS is completely independent and knows nothing about the others, either.
It is the two applications (Apache and MySQL) alone that need their respective layers to be replicated for fault tolerance. No other pieces need to be "cluster aware".
-
This D2D is a paperweight for me. Can I install FreeNAS or something like that on this device?
-
@mroth911 said in Hp storage D2d4324 nfs slow xenserver:
This D2D is a paperweight for me. Can I install FreeNAS or something like that on this device?
Why hurt your situation with FreeNAS? Just put some flavor of Linux that you like on it and share the space out!
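If you go that route, sharing the space out can be as simple as an NFS export (a rough sketch only; the path, network range, and the choice of NFS over Samba are assumptions):

```bash
# Hypothetical: export a directory over NFS from a generic CentOS/Ubuntu install.
sudo mkdir -p /srv/share
echo '/srv/share 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo systemctl enable --now nfs-server
sudo exportfs -ra
```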
-
@mroth911 said in Hp storage D2d4324 nfs slow xenserver:
This D2D is a paperweight for me. Can I install FreeNAS or something like that on this device?
Avoid FreeNAS, but "something" might be good: FreeBSD, openSUSE, CentOS or Ubuntu. I would expect that you can, but you are into totally unsupported territory and trying to treat a device purchased to be a black box as a white box. So you are left with a hobby-class device, at best. Is there a good reason to not just scrap it? It's a spent device.
-
What do you mean by spent device? I need help setting up HA for my web hosting system. I want to be able to fill up the servers that I have for web hosting, and also have automation: when clients purchase hosting, their domain/account auto-provisions. Right now I am using cPanel with WHM. So my original thought was to have a NAS/SAN house my VMs and connect the nodes to the NAS/SAN. Now, reading here, I am learning that is not a good idea. So I want to try to reuse the D2D and not throw it away completely.
-
@mroth911 What, exactly, is the make/model of this D2D device?
-
@mroth911 said in Hp storage D2d4324 nfs slow xenserver:
What do you mean by spent device? I need help setting up HA for my web hosting system.
It's a device that depends on its black box nature and support from the vendor to be useful. It no longer has that and is now a useless device in a business setting, at least for production use.