@Obsolesce said in What is the Latest With SodiumSuite?:
I think it went the way of many Google products.
Too bad, I'd love to have seen Spiceworks taken down a notch.
It relies on DNS (or hosts files) to resolve itself and generate SSL certs, and this has to work before you install the engine. If resolution did work and you still get this error, this is a bug.
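As a rough sketch of the kind of pre-install check I mean (the FQDN here is purely hypothetical, substitute whatever name the engine is supposed to answer on), something like this in Python is enough to confirm the name resolves before you start:

```python
# Minimal pre-install sanity check: the engine's name must resolve (via DNS
# or /etc/hosts) before installation, or SSL cert generation will fail.
# "sodium.example.lan" is a made-up hostname for illustration only.
import socket

fqdn = "sodium.example.lan"
try:
    ip = socket.gethostbyname(fqdn)
    print(f"{fqdn} resolves to {ip} - OK to install the engine")
except socket.gaierror as err:
    print(f"{fqdn} does not resolve ({err}) - fix DNS or /etc/hosts first")
```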
I hate to necropost, but I'm wondering how the project is doing right now? The site is up, but I haven't seen any updates in two years.
Just got back from Georgia (the real one, not the US state), been drinking homemade wine there. That stuff is damn amazing
@DustinB3403 said in Testing oVirt...:
Also I feel like the most popular instructions were a blog post, rather than official documentation.
I'm not even sure they have a dedicated technical writer. This is why it is better to just follow the official RHV docs. You'll have to filter out the Red Hat specific details, like subscription-manager, but you'll definitely have a better experience.
@DustinB3403 said in Testing oVirt...:
No, because I was so fed up with the instructions that I all but abandoned the test.
Well, oVirt is an open source and free project. If you want it to be better, get involved, even just by reporting bugs. I really don't get people who expect to have everything served up perfectly, and for free.
Really, every time someone tells me this or that OSS project is bad, I ask for the links to the opened issues.
@Obsolesce said in Testing oVirt...:
Enterprise storage I've seen is SAS. Where do you see SAN in the enterprise everywhere?
Pretty much everywhere, really. HCI did not fly as the SAN killer, and SDS is cheap to start with, but it can turn into hell at scale (and scale is where it is supposed to shine). New companies like Infinidat are having a blast selling SANs, despite the pervasive notion that SAN is dead and gone. It's just like with tape: it's still out there, getting developed, sold and used, no matter how much we all hate dealing with it.
@DustinB3403 said in Testing oVirt...:
I tested oVirt and was really just generally confused by it, as it really makes the case of "we're great for SMBs who only have 2-3 hosts".
No way, oVirt is exactly the wrong solution for a tiny setup. It starts to shine when you really scale the hypervisor count.
Which in my lab would be a good fit to test with, but in a practical implementation it's anything but: the documentation is split and the setup was just overly difficult.
I tell everyone to just use the official RHV docs, they are much better. I really don't know what's going on with the upstream documentation in oVirt, I've been away from the project for 6 years now, but the downstream stuff is always up to date and in order.
It has a use case for sure, but I'd much rather use standalone hosts and just have them set up to be usable as failover targets, like with XenServer/XCP-ng or Hyper-V.
In a tiny SMB - certainly. One of the use-cases where I replaced the existing setup with RHV in 2010-2011-ish, was a cluster of 300 Xen hosts, each paired in a 2-node failover cluster. A huge waste of hardware and a nightmare to manage. For the cost of 150 servers they could get rid of, they managed to buy a really nice SAN. And that's before the actual benefits of consolidating the resources of so many hosts.
The clustering bit is just way more complex than any use case I'd ever expect to need.
Clustering in oVirt is very simple - standard fencing mechanisms. What exactly was complex?
It could be easily cleaned up with a simple "install this ISO on your 3 standalone hosts, on the first host do XYZ, on hosts 2 and 3 do ABC".
I agree, documentation is always bad, especially when talking about large multi-node systems. Did you open a documentation BZ for the stuff that was hard to properly understand?
@scottalanmiller said in Testing oVirt...:
And in that use case it makes sense. But since then, they've changed their official message and now it fails pretty hard at the thing that it claims to be.
Are you sure it's the message and not the way you perceive what it says?
I was aware of oVirt and avoiding it in those days because it met no need I would run into anywhere. It was off the radar. But they've gotten enough attention, and changed what they claim their use case to be, so it seemed like they had broadened their use cases and were ready for much more common ones. But it appears to just be disingenuous marketing.
What they added was integration with a bunch of projects - several OpenStack modules, OVN, Foreman, Ansible etc. The main concept and the sweet spot for the use case remain the same. However, with these integrations you can share the deployment, which I've done quite a lot of: deploy RHV and OpenStack side by side, let OpenStack deal with the cloud-oriented use cases while RHV deals with the more old-fashioned non-ephemeral workloads (e.g. running the company's AD and Exchange), all under the same SDN provided by Neutron, with the image store on Cinder, etc. Again, a specific use case, but there are no generic ones out there.
The issue being... when testing for either how we'd want something in the majority of use cases or how it is stated as being intended to be used, how do we evaluate oVirt - and judging against all reasonable expectations, it falls very short. It is extremely limited.
And again - the majority of which use cases? Where did you get the statistics? Can you prove that that is the majority?
The classic virtualized DC was always run on shared storage. Live migration and HA don't make sense without it.
This is only true if you define "how it is intended" after the fact.
No, that is what you are doing here. You defined your own use case, which doesn't match oVirt's, and then you claim that is the "enterprise" and the "majority" use case. Without providing any proof. oVirt was designed to cover the pretty standard virtualized DC use case, that's a bunch of hypervisors using shared storage and providing VM HA and other features around that. That isn't a "niche" use case, it's quite common, from SMBs to large enterprises. Of course distributed workloads with local storage and N replicas on every node don't fit that bill, but those are niche, and should be run on specialized management systems. When I want to run my standard company infrastructure, e.g. my email servers, directory, file dumps and the like, I'll pick a solution that fits. Now please tell me how those are niche and uncommon, and should be using local storage because they are so latency-sensitive.
Not in the real world. SAN is a legacy technology in nearly all use cases, even in the enterprise. Common, yes. But mostly because salesmen drive more sales than IT decision making does.
Right. Because you say so, as the "ultimate authority". HCI has flunked as much as VDI did. SDS solutions are also a maintenance nightmare. SANs are old, clunky and damn expensive, but they work well, and THAT is the reason they are still being sold.
You made up that use case based on the limitations. It's so limited that you had to make a use case specifically to address them.
No, you made up the use case, and you base the limitations on it (btw, what are the limitations? You keep failing to actually state them).
You are setting your expectations by what the product does, not what it is supposed to do and/or by what the thread asked about it.
No, I set the expectations, then came up with the product. Remember, I was there when oVirt wasn't even oVirt yet. There was plenty of market research done, and weeks spent in customer meetings defining what they would like to see as a replacement for vmware.
Because it requires Gluster and/or remote storage. It doesn't offer straight local (highest performance) or high performance local cluster options.
So you are saying SAN performance cannot be high? Have you heard of this very new tech called "fibre channel" or this even newer one called "SSD"? How about infiniband? You're in for a surprise!
we see shops running databases
The most important definition of enterprise workloads would include "broad disparity in needs." Something that "only good for a niche" solutions of any type can't fulfill when looking for a central, unified option.
That's as good as saying "I have no answer". Don't evade, just describe your use case, or even better, give me an example of a system that can manage all that broadness you are trying to escape into.
This is BS as we've established
We only established the fact you have no concrete answers, just vague "limitations" with nothing to back your words.
We are discussing oVirt as a "unified solution"
Unified with what? I already described how you can unify it with OpenStack to create a joint system. There are other solutions, like MIQ, that can unify different types of infrastructure. Define what you understand as "unified".
I bet 0% use it as stated as essentially a central, universal management platform for all (or reasonably "nearly all") workloads
I'm not sure where you got that. I never said it was meant for all workloads; I keep saying it is not. You are saying its niche is non-existent because it is so "limited". Please, PLEASE just say what you want implemented, and we can find the right solution. Bashing oVirt because it doesn't fit your specific niche is not productive.
@scottalanmiller said in Testing oVirt...:
Yes, very much so, but they don't promote it that way, they promote it as being for a different use case.
In my day the message was pretty clear - an alternative to the typical vsphere cluster with shared storage.
But obviously core to their stated use case - central enterprise KVM management.
What's wrong with that? The enterprise is where you get to see the large SAN installations; SMBs usually don't have the money for those.
a reason why RHV isn't being used as intended basically anywhere
RHV is being used as intended almost everywhere I've supported or installed it. No reason to use it anywhere else.
Basically any enterprise shop will have local storage for workloads where appropriate, and so oVirt ends up being a "onesy twosy" installation rather than a central management tool
Local storage "where appropriate" usually means extremely datapath latency sensitive workloads, and if those require local storage, they probably also require baremetal, and should not be virtualized. FC latencies compared to local SAS are negligible, and you will lose more by virtualizing such workloads than by placing their data on a fast SAN.
You don't have to resort to trying to make it personal - which shows an emotional response that makes no sense here, it suggests that you know it's a bad fit and that my point is correct.
No, I simply know by now that you will resort to the "I worked on Wall Street" argument, so I simply want to show you it will not fly.
This is super simple, it is extremely limited and while that is by design, it goes against the way that the product is intended.
How is it limited again? I already told you what the intended use case is; everything added later on is an afterthought, chasing after some of that OpenStack market really. If you want to manage a bunch of local hosts instead of an actual cluster, you don't run oVirt.
And trying to play off "enterprises can deploy it" as "enterprises use only it" doesn't hold up. You are ignoring what we are talking about to try to make oVirt look way better than it is.
This is BS. I don't argue that in this particular case, oVirt may not be the best tool for the job. Then I do tell you what it is really for, you even agree with that, and then you tell me I'm wrong. After agreeing with me. Your argument is "it is limited because it is limited", very persuasive, obviously.
That's not to say that it is bad, but looking into using it for the use case it is promoted for, then discovering that it's not really built to be the broadly useful tool that everyone seems to push it as, simply leaves it as a sad, limiting experience.
You had the wrong expectations, were disappointed, and you're blaming the product. Sounds like "I bought this damn expensive Ferrari, but I can't haul 5 tons of gravel with it, Ferraris obviously suck!".
isolated, HA-focused, low performance clusters
Why low performance?
enterprise multi-purpose workloads or similar) it doesn't work well
Maybe you should define what you think of as "enterprise workloads". And just to jump ahead, let's just say I'm absolutely certain I can find examples of F100 enterprises running the exact workload types oVirt is perfect for. Will that mean you don't consider them enterprises, because they don't fit your definition?
It's meant only for very niche workloads within any large business, and only for extremely isolated small businesses for whom all workloads fit into that niche.
Any large business will run multiple solutions anyway. You don't run a single vsphere setup for an F100 corporation, you don't even ONLY run vsphere, you probably will have multiple virtualization solutions, public and private clouds, baremetal, container management systems etc etc etc. oVirt cannot cover all of that. No solution can in fact. Your conclusion - oVirt is limited. Mine - they are all limited, so we should be using the best solution for the job, and a real enterprise can have more than one job, don't expect a single tool to fit all the niches.
@scottalanmiller a limitation is that it is limited. perfect!
oVirt is a system designed to manage a large cluster of KVM hosts using shared storage. Standalone hosts with local storage are not part of the use case. The fact that support for local storage was added is beside the point; it was done because it was a low-hanging feature, not because it is really needed or used much.
Whatever you think the enterprise needs, you are not the final authority on that. Fact is, RHV has a good install base in the enterprise, including your favourite Wall Street.
@scottalanmiller said in Email server options:
Sort of, but that's not quite the same. It's distributed in both cases, it is redundant in both cases. There are lots and lots of factors involved, not just "breaking it up into nodes." It's more complex than that. At some point, more smaller spindles is safer, but at some point fewer, larger ones are.
At what point are fewer, larger spindles safer? With more drives you get more spindles, reducing seek time, the main problem with magnetic drives. With more drives you can implement RAID with better redundancy levels - 10, 50, the EE variants etc. The only real downside is the fact that you are running more kit: you need more physical space, connectors, cables and power, and more parts might fail and need replacement (without affecting the system).
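As a back-of-the-envelope illustration of the spindle-count point (the per-drive IOPS figure is an assumed ballpark for 7.2K nearline drives, not a measurement, and RAID write penalties are ignored):

```python
# Rough comparison: same raw capacity from few large vs. many small spindles.
# 150 random IOPS per 7.2K spindle is an assumption for illustration only.
per_drive_iops = 150
capacity_tb = 48

few_big = capacity_tb // 12    # 4 x 12TB drives
many_small = capacity_tb // 4  # 12 x 4TB drives

print(f"{few_big} x 12TB: ~{few_big * per_drive_iops} aggregate IOPS")
print(f"{many_small} x 4TB: ~{many_small * per_drive_iops} aggregate IOPS")
# Same raw capacity, roughly 3x the random IOPS from the larger spindle count.
```

The exact numbers don't matter; the point is that aggregate random IOPS scales with spindle count, which is what the seek-time argument above comes down to.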
And you have to consider a lot of factors including drive fail rates, UREs, time to rebuild, time to replace, etc. It's a large equation.
Most of these factors, when dealing with spindles and not SSDs/NVMes favour the more/smaller idea.
For example, if your drives move 100 IOPS, then many small drives is likely to make sense. But if your drives move 10,000,000 IOPS, then two giant drives will likely make more sense (assuming equal failure risks.) Speed and failure rates are key here; if you don't consider them, you can't tell when more drives or fewer drives is safer.
10,000,000 IOPS? Are we still talking about spindles here?
@scottalanmiller said in Email server options:
Absolutely, although you have to consider the total number of spindles as well. Each additional spindle carries a risk factor, too.
Same idea as with distributing a load between a lot of small hosts versus running one big monolith.
@travisdh1 @scottalanmiller My point here is, huge drives sound great on paper in terms of $ per GB, but whenever possible I will always take a lot of smaller spindles over a few huge ones. When dealing with spindles, that is; SSDs and NVMes are a whole different story of course.
Imagine you're building a large data store with huge disks, because it feels like you're getting more for less that way. And assume a disk in your RAID5 takes X hours to rebuild. During that X you're as vulnerable as if you were running RAID0 - more vulnerable, in fact, because you have multiple disks from the same production series, with the same age and wear on them, so chances are high more will die simultaneously. The larger the disks, the higher the X, and 12TB will have you counting X in days, not hours, at least in a parity-based RAID.
You can always go for other RAID levels, with higher redundancy rates, but that also has downsides, both in price and performance. In short, YMMV, but I always advise taking factors besides the price per GB into consideration; that's something people tend to skip entirely.
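To put a hedged number on that X (assuming an optimistic, uncontended sequential rebuild rate of around 150 MB/s; real parity rebuilds under production load are usually far slower):

```python
# Optimistic lower bound for rebuild time: capacity divided by a flat
# sequential rebuild rate. Real RAID5 rebuilds under live I/O take far longer.
def rebuild_hours(capacity_tb: float, rebuild_mb_per_s: float) -> float:
    capacity_mb = capacity_tb * 1_000_000
    return capacity_mb / rebuild_mb_per_s / 3600

for size_tb in (4, 12):
    print(f"{size_tb}TB at 150 MB/s: ~{rebuild_hours(size_tb, 150):.0f} hours")
# 12TB lands near a full day even in this best case; with controller
# throttling and production I/O it easily stretches into days.
```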
@Pete-S said in Email server options:
Always go with 3.5" storage when you need some volume but not SSD speed.
Ultrastar 12TB 7.2K SAS-3 drives are about $400 each. 12TB RAID-1 becomes about $800 for 12TB storage. That's 6.7 cents per GB of data.
How long will it take for a RAID array to rebuild on a 12TB disk?
How did it (the Fedora installation) fail? Sounds weird to me; those machines are ~2007-ish, but they have decent SAS interfaces and okay-ish Xeons (for their time).
I would also count Zimbra NE and Zimbra free + Zimlets. The backup and push notifications are a huge boon, not to mention being able to delegate admin tasks per domain (I manage a company with 15 domains on a single Zimbra server).
@scottalanmiller said in ISP Failover with Cisco ASA:
That's mostly true. But Cisco considers it real Cisco and it shows their view of themselves. And that, I always think, is important. Cisco doesn't see themselves as an enterprise player. And I've been in sales meetings with Cisco and that definitely comes through when talking to them.
That's not what I got from my sales conversations with them. They were very explicit about real Cisco and the lesser sub-brands.
Having been at two huge banks that were burned by being willing to use UCS, Cisco and enterprise are two words I never put together. From networking to phones to servers, Cisco is consistently overpriced and underperforming.
I absolutely loved UCS, even wrote the original oVirt/RHV plugin for the VMFEX cards. They were ahead of their time with those boxes, but the cloud pretty much killed everything really cool and advanced about HW.
@scottalanmiller I can only relate my own experience with them, and while it's not as significant as my experience with server hardware or open source virt stuff, I've gone through several hundred units from various vendors over the years. My experience with Cisco has always been good. My experience with Juniper was pretty much on par. The same goes for Check Point. The rest... not so great.
When I do a consulting gig building a DC, I always try to balance budget-oriented solutions with hardware that is not going to be problematic. So when the client can afford Cisco, we take it. When not, well, we look for alternatives.
@scottalanmiller said in ISP Failover with Cisco ASA:
Meraki is actually a mid-level Cisco router. If you see problems on Meraki (and we all do), you are seeing Cisco issues. Cisco makes higher and lower level stuff under the Cisco brand. And a very specific range under the Cisco Meraki brand.
There's a reason I say Meraki (or Linksys) and not Cisco. Those may have been companies acquired by Cisco, but it's not the same tech, and I do not consider it real Cisco.