What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?
-
More on reflinks.
-
ZFS does not have reflinks, and there are no plans to add them. It's a Btrfs feature that was backported to XFS on Linux.
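For anyone who hasn't used them: a reflink is a copy-on-write clone of a file's data extents. A minimal sketch with GNU coreutils `cp` (an illustration, not from the thread; `--reflink=always` only works on a CoW-capable filesystem such as Btrfs or XFS formatted with `reflink=1`, while `--reflink=auto` falls back to an ordinary copy elsewhere):

```shell
# Make a reflink (copy-on-write) clone of a file.
# --reflink=always fails on filesystems without reflink support;
# --reflink=auto silently falls back to a normal copy there.
echo "hello reflink" > /tmp/original.txt
cp --reflink=auto /tmp/original.txt /tmp/clone.txt
# On a CoW filesystem the clone shares the original's data blocks until
# either file is modified, so the "copy" is instant and uses no extra
# data space.
cat /tmp/clone.txt
```

Running it with `--reflink=always` is a quick way to check whether a given filesystem actually supports reflinks: it errors out instead of falling back.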
-
@scottalanmiller said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
ZFS does not have reflinks, and there are no plans to add them. It's a Btrfs feature that was backported to XFS on Linux.
That's what I thought, but I didn't have the data to back it up.
-
ZFS has a lot of similar stuff built in; I don't think they want to do it two ways. It's not often that people want the extra reflink functionality.
-
@strongbad said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
ZFS has a lot of similar stuff built in; I don't think they want to do it two ways. It's not often that people want the extra reflink functionality.
Yeah. ZFS's deduplication functionality is good, just resource-intensive. I've talked to guys who build out large storage arrays using ZFS and deduplication, and it gets complicated (at least from my ZFS-novice point of view) if you want it to perform well.
-
@anthonyh said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
@strongbad said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
ZFS has a lot of similar stuff built in; I don't think they want to do it two ways. It's not often that people want the extra reflink functionality.
Yeah. ZFS's deduplication functionality is good, just resource-intensive. I've talked to guys who build out large storage arrays using ZFS and deduplication, and it gets complicated (at least from my ZFS-novice point of view) if you want it to perform well.
ZFS was never built for performance (Sun said this directly); it was built for low cost and giant scale with good reliability and durability. So it's not at all surprising that it doesn't handle performance well while doing a feature like dedupe.
It's also 13 years old and the granddaddy of its type of product.
-
@scottalanmiller said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
@tim_g said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
@scottalanmiller said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
@dbeato would need 256GB of RAM to attempt that with ZFS. That's a lot of RAM on a NAS.
How did you get 256GB of RAM needed?
That FreeNAS article recommends 5GB RAM per 1 TB of deduped data...
Considering he has 200 TB of data he'd want to dedupe, that's at least 1 TB of RAM to start. This is because dedup on ZFS/FreeNAS is much more RAM-intensive than on other file systems (and also because 200 TB is a ton of data).
What caused it to balloon so much recently? Traditionally it has been 1GB per 1TB.
The FreeBSD ZFS page stated up to 5 GB per 1 TB last time I checked.
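For reference, the arithmetic behind both figures traded above, as a quick sketch (the 1 GB/TB and 5 GB/TB ratios are the ones quoted in this thread, not an official sizing formula):

```python
# Back-of-the-envelope ZFS dedup RAM sizing using the two
# rule-of-thumb ratios mentioned in this thread.
def dedup_ram_gb(data_tb: int, gb_per_tb: int) -> int:
    """Estimated RAM in GB to hold the dedup table for data_tb TB of data."""
    return data_tb * gb_per_tb

print(dedup_ram_gb(200, 1))  # traditional 1 GB per TB rule: 200 GB
print(dedup_ram_gb(200, 5))  # FreeBSD/FreeNAS 5 GB per TB figure: 1000 GB, i.e. ~1 TB
```

That jump from 200 GB to roughly a terabyte of RAM is exactly the gap the question above is about.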
-
The StarWind dedup estimator could be useful here.
-
@scottalanmiller said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
More on reflinks.
It's already in Fedora. You don't need the sources any longer.
-
Here's duperemove in use. It's annoyingly verbose, so I can't get the output and the command in the same screenshot.
I ran:
/tmp/duperemove/duperemove -hdr --hashfile=tmp/stuff.hash /mnt
And got:
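For readers unfamiliar with duperemove, here is the same command annotated (flag meanings per the duperemove man page; the binary path and hashfile path are the ones from the post above, and the command is shown for illustration rather than to be run as-is):

```shell
# duperemove: find duplicate extents and, with -d, ask the kernel to
# deduplicate them so the duplicates share the same on-disk blocks.
#   -h              print sizes in human-readable units
#   -d              actually dedupe (without it, duperemove only
#                   scans and reports what it would do)
#   -r              recurse into subdirectories
#   --hashfile=...  keep block hashes in an on-disk file instead of
#                   RAM, so later runs can reuse them
/tmp/duperemove/duperemove -hdr --hashfile=tmp/stuff.hash /mnt
```

Note that this only works on filesystems whose kernel driver supports the dedupe ioctl (Btrfs, and XFS with reflink support), which ties back to the reflink discussion earlier in the thread.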