If we were going to give containers a number, it would be more like a Type -1 than a Type 3: containers are lighter than Type 1, not heavier than Type 2. They're still totally different, but Types 0, 1, and 2 go from lighter to heavier.
When we start looking, we'll start with the usual culprits like Veeam. ShadowProtect comes highly recommended to me by several folks (including the guy who just built the ReadyNAS)... We'd hit the other major players as well.
So cost is going to bite you in the ass, since you said there were concerns about licensing. But they are all valid options.
We don't care if it's a full install, agent-based, or hypervisor-based. It just needs to work.
Based on this I would think agent-based would be a decent option. But didn't you say you have something along the lines of 300 VMs? That might become tedious.
Yeah, we'd likely do the dedupe at the storage layer. Our current Nimble devices do this relatively well with our live data. Something like 1.3 or 1.4 to 1 compression is what I remember. It may be more or less.
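As a rough sketch of what a ratio in that range buys you on the backup target (all numbers here are made up for illustration, not measured):

```shell
#!/bin/sh
# Back-of-envelope capacity math at a 1.3:1 vs 1.4:1 reduction ratio.
# raw_tb is a hypothetical amount of backup data, purely illustrative.
raw_tb=10
awk -v raw="$raw_tb" 'BEGIN {
    # effective on-disk footprint = raw size / reduction ratio
    printf "At 1.3:1, %d TB of backups needs about %.1f TB on disk\n", raw, raw / 1.3
    printf "At 1.4:1, %d TB of backups needs about %.1f TB on disk\n", raw, raw / 1.4
}'
```

Not a huge difference at this scale, but it compounds as retention grows.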
If we went agent-based, we can push the agents via PDQ Deploy for Windows and a shell script or something for Linux. (Most of our production Linux systems are SLES 12)... If reboots are required, systems can be rebooted during our patch window... (5-7am every day).
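The Linux side could be as simple as a push loop like this. To be clear, the host names, the `backup-agent.rpm` package name, and the inventory source are all hypothetical placeholders, not a real product's installer; this is just the shape of it, with a dry-run flag so nothing happens by accident:

```shell
#!/bin/sh
# Sketch of a push-install loop for SLES boxes. HOSTS and the package
# name are placeholders -- a real run would pull hosts from inventory
# and install the actual vendor agent. DRY_RUN=1 only prints commands.
DRY_RUN=${DRY_RUN:-1}
HOSTS="sles-app01 sles-app02 sles-db01"

for host in $HOSTS; do
    cmd="ssh root@$host 'zypper --non-interactive install backup-agent.rpm'"
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "WOULD RUN: $cmd"
    else
        eval "$cmd"
    fi
done
```

Run it once with `DRY_RUN=1` to eyeball the commands, then flip the flag during the patch window.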
As an aside question, do you have it set up so that it recycles the storage after a certain time with Synology-to-Synology backup?
What do you mean recycles? It's not doing an offsite move-delete if that's what you mean, it's copying it in case either building is lost. Maybe I don't understand the question.
So when I set up the backup between Synology devices, I make sure that after a certain time/age the backup device deletes the older snapshots/backups.
Ah, got it. What are you using for backup software that you'd rather your backup software not delete it? Also, are you using Synology CLI for that? I don't know that I've noticed that option in the GUI as part of the task creation.
Glad to know this worked. I have had 3 different Synology NAS boxes over the past 6 years: an 1812+ that was just retired, an 1813+ still going after almost 5 years, and a new 3617xs, and I wondered what would happen if the box died. Never had any issues with any of them (knocking on wood).
Veeam DOES recommend avoiding low-end NAS devices, and recommends SAN over NAS because Veeam wants block protocols. These parts are true and we don't need to watch videos, as they are available in writing from @Rick-Vanover - we even have the author of the best practices here in the community!
[Attached screenshot: Screenshot from 2017-08-02 17-16-46.png]
The key here is "low end", which is an issue around support. The misleading bit is that NAS means server, so low-end servers are affected every bit as much in the same ways. The QNAP, Synology, ReadyNAS and other such devices are not actually NAS but unified storage: as much SAN as NAS. That Veeam recommends SAN instead of NAS is a protocol choice; it does not make those devices any less applicable. We should not be calling them NAS, as that is misleading. They are equally both.
If we really look at the guidance and consider what it could mean, the only real concern is "low end" and low end is always of some concern. Why spend so much on Veeam and Windows licensing and then get cheap on the hardware? You want solid storage hardware and solid support. But nothing here is telling us that there is anything wrong at all with these kinds of devices and certainly the issue is not some kind of corruption caused by the fact that they are in this product category.
Hrm, fast-clone. Probably time to try out a Btrfs based file server at home.
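Fast clone comes down to block cloning (reflinks) on the repository filesystem, and it's easy to see in action on a Btrfs mount (or XFS formatted with reflink support). A quick demo, using a made-up `.vbk`-style filename just for flavor; `--reflink=auto` falls back to a plain copy on filesystems without cloning:

```shell
#!/bin/sh
# Reflink demo: on Btrfs (or reflink-enabled XFS) the copy shares
# extents with the original, so it is near-instant and consumes no
# extra space until blocks diverge. On other filesystems,
# --reflink=auto silently does a normal copy instead.
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/full.vbk" bs=1M count=8 2>/dev/null
cp --reflink=auto "$tmpdir/full.vbk" "$tmpdir/clone.vbk"
cmp -s "$tmpdir/full.vbk" "$tmpdir/clone.vbk" && echo "clone matches original"
rm -rf "$tmpdir"
```

On a real Btrfs volume, `cp --reflink=always` is the stricter form: it errors out instead of falling back, so you know the clone actually happened.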
It's good stuff.
Yeah, I know Btrfs is the way to go, I just haven't tried it out yet myself. Starting out on IRIX with XFS back in the day makes me a bit too nostalgic.
I still use XFS for everything.
When will be the right time to switch to btrfs then? We know it's been stable for long enough that it's becoming the default in a number of distributions now, but has it really been battle tested well enough yet?
Also, should we maybe make another thread for the btrfs discussion?
The answer here is that you do not switch. You install a distro letting it do its native thing by default, unless you have an overarching, huge reason to override defaults. So you will get it when you install a new system that now has it as a default.
openSUSE, for example, has had it as the default for two years.
Really though, I prefer XFS for anything that isn't a storage machine. VMs need something mature, stable and light. XFS does that well.
But does your preference mean that you will override a default install's choice just because that is your preference?
Using anything but the default should have very clear reasons, because the first time somebody besides you has to troubleshoot it there will be big problems.
I would often, yes actually. XFS is not some odd, unsupported option; it's just not the default. It's still completely core to openSUSE's design. They simply had to pick which one they were going to use when someone did not choose one or the other, and they opted for extra features over lean design for those that don't know which they want, which I think makes sense. Just like CentOS opts for the simplicity of using root for administration instead of sudo, but makes it super easy to enable sudo. It's not the default, but it's fully supported. They just had to choose something as a default.
Only time and money; need in a business is always a function of money.
I mean all of them combined.
You are correct.
Need dictates the other two.
Well, the other two dictate need. Businesses aren't a "need" based thing. They have a goal: profits. Backup restore time is a discussion about time. So the technical piece gives us the time axis and that we are talking about a business gives us a cost one. That's it. The idea of "need" should never really come up in a business, businesses never need to do anything. They desire profits and all actions should reflect that. The concept of needs only serves to confuse people from the singular mission.
What Synology has done, to make this claim kinda legit, is look at what the disks "can" stream (which is more than is listed here) and add the "cap" of the network. So a contrived operation can push the drives to their throughput limit, but that number tells us nothing about performance (throughput is a useless metric for drives, which is why we don't measure them by it). It could be just two or three IOPS producing that limit. In the real world, that's not useful.
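The back-of-envelope math makes the gap obvious. The drive figures below are rough illustrative numbers for a typical 7200 RPM disk, not specs for any particular Synology model:

```shell
#!/bin/sh
# Illustrative only: a spinning disk that streams ~150 MB/s
# sequentially hits that cap with only ~150 large 1 MB IOs per
# second. The same disk does maybe ~150 random 4 KB IOPS, which is
# well under 1 MB/s. Big headline throughput, tiny real-world result.
awk 'BEGIN {
    seq_mbps   = 150   # sequential streaming cap (illustrative)
    io_size_mb = 1     # large sequential IO size in MB
    printf "Sequential: %d MB/s needs only %d IOPS at %d MB per IO\n",
           seq_mbps, seq_mbps / io_size_mb, io_size_mb
    rand_iops = 150    # random 4 KB IOPS (illustrative)
    rand_mbps = rand_iops * 4 / 1024
    printf "Random 4K: %d IOPS is only about %.1f MB/s\n", rand_iops, rand_mbps
}'
```

Same disk, roughly a 250x difference in delivered MB/s depending on the IO pattern - which is exactly why a streaming throughput cap is meaningless for backup workloads.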
Remember that this is backup. So if the backup system fails you have options like...
Taking a new backup from the live systems.
Offlining the limping array and taking a full backup of it before attempting a restore.
Doing a backup/restore rather than an array recovery.
All of these things make RAID 6's risks minimal. This isn't the only copy of anything, it's a backup. And it is not subject to availability risks (at least not in the way that live data is) so things that cause availability issues are not significant.