If the only goal is an open source file server (we call this a SAM-SD; there is a section of the forum just for that), then the most likely recommendations for an OS will be openSUSE Tumbleweed or Fedora. CentOS and Ubuntu are fine choices too. FreeBSD is excellent, but less well known.
You make it sound as though the wise choice here would be to install openSUSE etc. directly to the hardware.
So in a "read back" mode, this tells us that the data stored here isn't important, so there isn't really anything to worry about. If you do the expansion and it causes data loss, they can't be upset, since they don't see value in the data. This also tells us that the servers shouldn't be there: if they aren't backed up and they aren't a caching system, then you shouldn't have them at all.
That list makes hardware RAID sound safer than ZFS, which is probably not quite true. But it is the case that the average implementation of hardware RAID is quite a bit safer than the average implementation of ZFS software RAID. Hardware RAID "handles everything for you," protecting you from most bad decisions. ZFS leaves all the nitty-gritty details up to you, which makes it super, duper easy to mess something up and leave yourself vulnerable. This is exacerbated by the Cult of ZFS problem and the loads of misinformation swirling about its use. So the average person using ZFS is not even remotely prepared for what is needed to use it safely.
Some problems that we see people have when using ZFS without fully understanding storage:
Believing that ZFS doesn't use RAID (this is extremely common).
Believing that RAIDZ is magic, rather than a brand name, and that normal RAID concerns do not apply. So we often see people implement RAID 5 in reckless, insane situations using "it's RAIDZ" as an excuse, as if RAIDZ weren't just RAID 5 - literally just a brand name for RAID 5.
Treating features common to all RAID systems as "unique," and believing that ZFS has feature after feature of protection that makes protecting against storage failure unnecessary.
Not understanding the differences between hot swap and blind swap, and building systems that they will not know how to service should a drive fail.
Believing that ZFS, being magic, is not at risk from power loss, and failing to protect caches from power issues - something they are not normally used to dealing with, as hardware RAID does this for you.
Not understanding the CPU and memory needs of ZFS, especially with features like dedupe and RAIDZ3.
Ignoring common RAID knowledge and thinking that using ZFS means not needing mirroring technologies.
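To make the RAIDZ point above concrete, here is a minimal sketch using the standard `zpool` utility (the pool name and disk device names are hypothetical; adjust for your system). Single-parity RAIDZ is declared like any other vdev layout, and it carries the same one-disk-failure exposure as classic RAID 5:

```shell
# Hypothetical pool/disk names (da0-da2 in FreeBSD style).
# Single-parity RAIDZ: one disk's worth of parity, survives exactly one
# drive failure - all of the normal RAID 5 planning concerns still apply.
zpool create tank raidz da0 da1 da2

# A mirrored vdev (RAID 1 style) - ZFS does not remove the case for mirrors.
zpool create tank2 mirror da0 da1

# Inspect pool health and layout.
zpool status tank
```

Nothing about the `raidz` keyword changes the failure math: a three-disk RAIDZ1 vdev has the same rebuild exposure as a three-disk RAID 5 array on a hardware controller.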
It was amazing that Scott found it so fast. I was on the Windows side of things. Inside Windows, they were using the iSCSI initiator to connect to the FreeNAS. All of a sudden, Windows would just log a ton of iSCSI events and go down.
I looked up the events, and most people resolved them by putting the iSCSI traffic on a separate NIC. This happened two days in a row at about the same time each day. I was looking at snapshot, backup, and other job times when Scott found it in the FreeNAS logs.
SysAdm™ provides a new way to manage your Server, Desktop or Cloud-based systems. By exposing an API via encrypted REST or WebSockets, it is now possible to remotely control all aspects of your machine, including management of software, updates, boot environments, users, backups, and more. SysAdm™ is the answer for companies looking for a low cost, yet scalable solution that easily manages different segments of IT infrastructure to keep things running smoothly. TrueOS® has now embedded all local and remote control panel functionality into SysAdm™ so you can easily find and adjust any configurable system element from one place.
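As a rough sketch of what driving a remote REST API like that could look like from a script - note that the host, port, endpoint path, and auth scheme below are all placeholders, not SysAdm's actual API surface; consult the SysAdm documentation for the real endpoints:

```shell
# Hypothetical example only: every name below is a placeholder, not
# SysAdm's real API. The point is just that an encrypted REST interface
# means any HTTP client can manage the box remotely.
curl --silent \
     --header "Authorization: Bearer ${SYSADM_TOKEN}" \
     --request GET \
     "https://sysadm-host:port/some/endpoint"   # placeholder path
```

The same idea applies over the WebSocket transport, just with a persistent connection instead of one request per call.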
When you say "in a business capacity"... Do you mean any business or just certain sizes? What is the reasoning? I know you think the FreeNAS community can be very brash/vile at times based on some of your earlier posts to people asking about FN.
Basically you are getting a highly stateful system where support is critical, but the support options are crippled compared to just using FreeBSD directly. You are getting something easy to set up but difficult to support; if anything goes wrong, you are in very tough shape. And updates come a bit behind. So you have a number of small issues that all add up to a not very business-friendly product.
Updated our slave database servers to 10.3 a couple of weeks ago... we roll out in phases since we've got so many damn servers. It's always a nightmare. If there are no issues or additional patches over the next couple of weeks, we'll roll out to the primary database servers (they're all sets of master-master; the slaves are primarily for search, but they're also useful as a live test if everything passes staging, and they're a hot backup too if any issues arise) and then the web servers. We never go down, so it's a painfully slow process.
Looping back to this: in the past month I've worked with three different companies that all experienced significant data loss or downtime because of their choice of FreeNAS. Two suffered from not having front-loaded their engineering and were unable to support their servers during routine operations, which caused major outages along with significant repair costs; one company lost its data because of unnecessary bugs in the FreeNAS GUI code that would have been avoided had they simply been on FreeBSD.
Additionally, this past week FreeNAS 10 "Corral" was demonstrated to be so incredibly unstable that, a month after release, it had to be recalled and reverted to "beta" status indefinitely. For a trivial end-user application this would be bad; for a critical storage infrastructure component in which companies need to have rock-solid faith, it's unthinkable.