The only thing I've had problems with when using double NAT is that some phone systems get very, very cranky.
Other than that, just the normal downsides of not controlling your own external access.
I've seen this mentioned in a number of places recently, and it sounds like a good option to enable SSO. The open source/free version has useful features.
I just got it installed tonight. All I can say about it so far is that it is an easy install on Ubuntu Server using Docker. I'd imagine the Kubernetes version is also an easy install.
Just wondering if anyone else has used it, and if so, what you think of it?
@gjacobse said in UNRAID: Did it improve since 2017?:
I can't say I trust it... too few would suggest it, and the few that might mention it have regularly said no.
That said... what I really find interesting is that I have now seen it mentioned twice in almost as many days. Just seemed funny and needed some 'additional' exposure.
I have an end goal, and with that an expectation that it will be done similarly to how (sane) enterprise solutions would do it. In some regard, I enjoy the challenge of making a number of differently aged platforms work together, but if I want to extend my marketable skills, they need to be in line with the market.
Proxmox may not be common in day-to-day discussions, but it's been around long enough and has proven itself to be on par, in capability and scale, with most of the virtualization software out there. So I'm learning that, and rather enjoying it.
But storage is a failing point for my rack currently. With the ReadyNAS biting the dust about a year ago, I need to decide on a solution, and would rather build anything I put online.
I have several monster cases which could be used, or I could go with a prebuilt system... but those quickly exceed the permissible budget.
Leverage Proxmox and reuse those drives however you can along with it. There are many ways to utilize the drives you already have; it's just a matter of figuring out the best way to make it happen on your budget.
@scottalanmiller said in UNRAID: Did it improve since 2017?:
@gjacobse said in UNRAID: Did it improve since 2017?:
Co-worker came
UNRAID was a scam. Now they just resell stuff. No value.
This.
As with all NAS-type systems, you're really better off managing the storage yourself. NAS are really only for those who don't know enough to properly manage storage themselves.
I wonder what happened to Cloudflare this morning?
@dave247 said in Moving off VMware Hypervisor to something else - need input:
Another question: when I was researching Proxmox, someone mentioned that it doesn't fully support shared block storage currently. It was basically stated that Proxmox and others haven't come up with an equivalent to VMFS for shared block storage yet, so they are typically leveraging LVM to partition off portions of the disk for each VM and limit access to those regions to a single host at a time.
I had looked at this comparison matrix which shows that Proxmox does fully support shared storage, so I'm unclear on the exact specifics and if it really matters in my situation. We basically have an iSCSI storage controller for VM storage and then our ESXi hosts for compute (mentioned in my original post).
All I really care about if we move to Proxmox is that we can store VMs in our storage controller and use the hosts for compute, similar to how we're doing it with VMware today.
The short version is, those people don't know what they're talking about.
Those are two completely different things, with next to no similarities. VMFS is a shared filesystem (better compared to something like Gluster). LVM is a volume management layer that a filesystem sits on top of.
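If a picture helps, here's a rough toy model in Python (purely illustrative; none of these classes correspond to real Proxmox or VMware code): with thick LVM over shared iSCSI the LUN gets carved into one logical volume per VM disk and the cluster only ever lets one host activate a given LV, while with a clustered filesystem like VMFS every host mounts the same filesystem and the VM disks are just files with per-file locking.

```python
# Toy model only -- not Proxmox or VMware code; every class and name here is
# invented purely to illustrate the architectural difference.

class SharedLun:
    """One iSCSI LUN that every host in the cluster can see."""
    def __init__(self, size_gb):
        self.size_gb = size_gb
        self.allocated = {}            # vm disk name -> size in GB


class LvmOnSharedStorage:
    """LVM-style: carve the LUN into one logical volume per VM disk.
    No shared filesystem; the cluster just makes sure only one host
    has a given LV active (writable) at a time."""
    def __init__(self, lun):
        self.lun = lun
        self.active_on = {}            # lv name -> host using it

    def create_lv(self, name, size_gb):
        free = self.lun.size_gb - sum(self.lun.allocated.values())
        if size_gb > free:
            raise RuntimeError("volume group is out of space")
        self.lun.allocated[name] = size_gb

    def activate(self, name, host):
        owner = self.active_on.get(name)
        if owner is not None and owner != host:
            raise RuntimeError(f"{name} is already active on {owner}")
        self.active_on[name] = host    # single writer per LV


class ClusteredFilesystem:
    """VMFS/Gluster-style: every host mounts the same filesystem and the
    VM disks are just files; locking happens per file inside the FS."""
    def __init__(self, lun):
        self.lun = lun
        self.file_locks = {}           # file path -> host holding the lock

    def open_disk(self, path, host):
        owner = self.file_locks.get(path)
        if owner is not None and owner != host:
            raise RuntimeError(f"{path} is locked by {owner}")
        self.file_locks[path] = host


if __name__ == "__main__":
    lun = SharedLun(size_gb=1000)

    lvm = LvmOnSharedStorage(lun)
    lvm.create_lv("vm-101-disk-0", 100)
    lvm.activate("vm-101-disk-0", "host-a")       # fine
    # lvm.activate("vm-101-disk-0", "host-b")     # would raise: one host per LV

    vmfs = ClusteredFilesystem(lun)
    vmfs.open_disk("/vmfs/volumes/ds1/vm101.vmdk", "host-a")
```

Either way the storage ends up shared across hosts; the difference is just where the coordination lives.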
@dave247 said in Moving off VMware Hypervisor to something else - need input:
@scottalanmiller just out of curiosity, could you provide any arguments against using Hyper-V?
We are a 99.9% Windows PC & server shop where I work, so naturally some might suggest we use Microsoft's Hyper-V. I have used it a handful of times in the past, but it didn't seem very user-friendly and seemed to have issues at the time; granted, that was over 8 years ago.
There are reasons why not even Microsoft runs the entirety of their cloud services on their own platform.
@scottalanmiller said in Moving off VMware Hypervisor to something else - need input:
FoxRMM is working on centralized Proxmox backup monitoring, to be included in its next release too.
When does the rest of the world get a look at FoxRMM?
I'm in agreement with Scott here. There is a very short list of options, and Nutanix is not one of them.
Proxmox would be the primary choice (the backup server is really easy to work with as well), and XCP-ng if Proxmox can't be used.
Migrating from VMware to Proxmox is also really easy. I did a trial at a former workplace.
@scottalanmiller said in What Are You Doing Right Now:
@DustinB3403 said in What Are You Doing Right Now:
@travisdh1 said in What Are You Doing Right Now:
I had a fun night last night adding storage to a server. When I went to move the VM storage location, I found a checkpoint (Hyper-V, ugh) from 2018... Took a long while to coalesce.
This morning everything had finally coalesced and moved to the new storage array. Only took ~10 hours.
You're using Hyper-V? How's that been going and what management tools are you using?
I had some lunatic INSTALL it in the last two months! W.T.F.

@DustinB3403 said in What Are You Doing Right Now:
@travisdh1 said in What Are You Doing Right Now:
I had a fun night last night adding storage to a server. When I went to move the VM storage location, I found a checkpoint (Hyper-V, ugh) from 2018... Took a long while to coalesce.
This morning everything had finally coalesced and moved to the new storage array. Only took ~10 hours.
You're using Hyper-V? How's that been going and what management tools are you using?
Not by choice. Existing customers and just the built-in management tools.
I had a fun night last night adding storage to a server. When I went to move the VM storage location, I found a checkpoint (Hyper-V, ugh) from 2018... Took a long while to coalesce.
This morning everything had finally coalesced and moved to the new storage array. Only took ~10 hours.
@EddieJennings said in OVH Cloud, review after ~3 weeks use.:
Thank you for taking a chance for the rest of us
Of course.
It's working great for me because my TacticalRMM instance has way more memory than it needs.
About 3 weeks ago I asked about OVH Cloud (https://mangolassi.it/topic/26257/ovh-cloud-anyone-use-their-vps?_=1758244042664).
Since it appears nobody else has used it, here's my short take on it after 3 weeks.
TL;DR: You get a lot for your money in CPU and RAM, but the IOPS suck.
The management interface screen has everything you need to manage a VPS on it. It's clearly laid out and functional, which is what I want to see. Personal opinions vary of course, so here's a screenshot.

4 cores and 8 GB of RAM outclass all the other VPS providers I know of. It seems they are actually providing 2 cores / 4 threads.
CPU

RAM

Now the bad part: IOPS. It looks like it's limited to a single 1Gb link.

So at the moment, OVH Cloud is great for anything requiring more than minimal CPU/RAM, but it shouldn't be used for anything IOPS-dependent. It's working very well for my small TacticalRMM installation (fewer than 50 endpoints).
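To put a rough number on that 1Gb limit (my own back-of-the-envelope math, assuming 4K random I/O and ignoring protocol overhead, so real numbers will be a bit lower):

```python
# Best case for a storage path capped by a single 1Gb/s link.
# Assumptions: 4 KiB I/O size, zero protocol overhead (reality is lower).

LINK_GBPS = 1                          # advertised link speed, gigabits/sec
IO_SIZE_BYTES = 4 * 1024               # typical small random I/O

max_bytes_per_sec = LINK_GBPS * 1_000_000_000 / 8    # bits -> bytes
max_iops = max_bytes_per_sec / IO_SIZE_BYTES

print(f"Max throughput: ~{max_bytes_per_sec / 1_000_000:.0f} MB/s")   # ~125 MB/s
print(f"Max 4K IOPS:    ~{max_iops:,.0f}")                            # ~30,500
# That ceiling is a small fraction of what even a single local NVMe drive
# can deliver, which is why anything IOPS-heavy struggles on this setup.
```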
@Oksana said in RAID 5 vs RAID 6: Which One Is Actually Safe in 2025?:
Parity RAID is still one of the best ways to balance cost, performance, and redundancy. But the real question in 2025 isn’t how RAID works, it’s whether RAID 5 is still safe or if RAID 6 should be the new default.
Our latest article by Vladyslav Savchenko for StarWind explains how rebuild times, URE risk, and drive size impact reliability, so you know when single parity is fine and when dual parity is essential. Read more here: https://starwind.com/s/xj
@scottalanmiller You might want to chime in here.
We've covered the issues with parity RAID on old-style HDDs here so much. Nowhere in the article did I see mention of the glaring differences between HDD and SSD/NVMe.
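To put some rough numbers behind that (my own quick sketch using the spec-sheet URE ratings vendors commonly quote, roughly 1 per 10^14 bits for consumer HDDs and 1 per 10^17 bits for enterprise SSDs; real drives and real arrays vary):

```python
# Rough odds of hitting at least one unrecoverable read error (URE) while
# rebuilding a degraded RAID 5, i.e. while reading every surviving drive.
# Assumptions: spec-sheet URE rates (1 per 1e14 bits for consumer HDD,
# 1 per 1e17 bits for enterprise SSD); real-world behavior will differ.

def rebuild_ure_probability(drive_tb, surviving_drives, ure_per_bit):
    bits_read = drive_tb * 1e12 * 8 * surviving_drives  # full read of each survivor
    p_clean = (1 - ure_per_bit) ** bits_read             # chance of zero UREs
    return 1 - p_clean

# Example: 6-drive RAID 5 with 12 TB drives -> 5 surviving drives to read in full.
for label, ure in [("HDD (1e-14)", 1e-14), ("SSD (1e-17)", 1e-17)]:
    p = rebuild_ure_probability(drive_tb=12, surviving_drives=5, ure_per_bit=ure)
    print(f"{label}: ~{p * 100:.1f}% chance of a URE during rebuild")
# HDD comes out around 99%, SSD well under 1% -- which is exactly why the
# HDD vs SSD distinction matters more than RAID 5 vs RAID 6 alone.
```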
@scottalanmiller said in OVH Cloud, anyone use their VPS?:
We use Vultr and are very happy. We've moved away from TacticalRMM to FoxRMM, our in-house product (aka SodiumSuite).
Vultr is one I've used in the past, but the pricing is on par with Linode.
One major difference now that Linode has been bought by Akamai is the storage IOPS. I've run a couple of tests, and they are using some sort of SAN instead of VSAN now. IOPS always max out at what you'd expect from a 200Gb network connection.
I might have to try out OVH Cloud. The price for 4 vcores and 8 GB RAM starts at ~$5.00/month instead of the $25.00/month I'm currently paying at Linode.
I'm looking to get away from Linode for my TacticalRMM. Now that you can't "trick" TacticalRMM into installing/running on less than 4 GB RAM, it's costing a good bit more.
OVH Cloud VPS is giving a lot more resources for not much money right now. So I was wondering if anyone else has used them? Any concerns with migrating my TacticalRMM instance to them?
@Oksana said in VMware DRS: Smarter Resource Management for vSphere:
Managing VMware clusters by hand can quickly become overwhelming as workloads grow and shift. That’s where Distributed Resource Scheduler (DRS) steps in – keeping clusters stable, apps fast, and admins stress-free.
In our latest guide by Dmytro Malynka for StarWind, we break down everything you need to know about VMware DRS: how it works, its requirements, key features, and why it’s a must-have for medium and large clusters. Read more here: https://starwind.com/s/wg
DRS sounds like a great thing on the surface, before actually implementing it. In large clusters, VMs with heavy workloads get moved between hosts often, with the associated disruptions when memory gets locked. It takes what is already a poor situation and makes it worse!
I formerly worked at an employer that had a very large VMware cluster implemented with vSphere, vCloud, and DRS enabled. They were up to 50 hosts when I left, and constantly ran into issues because of DRS.