I had a sales guy reach out to me about Synology. Have you had any exposure to them?
StarWind Software CTO, Chief Architect & Co-Founder. Microsoft Most Valuable Professional [MVP] in Cluster 2014 & 2015. SMB3, NFS, iSCSI & FCoE.
Thus far, I've gotten the impression that people might prefer Dell, but they haven't exactly come out and said it.
I prefer Dell because I can get it refurb with full warranty through xByte. I've been hinting to them that we'd like them to offer SuperMicro as well, though.
xByte is an excellent option if a company doesn't need new!
Hey... my company has been relying on http://www.thinkmate.com/ for its server needs. This isn't a company I know much about when it comes to server hardware. Do any of you know much about them? Are they any good? What is your hardware brand of choice? I've used Dell in the past and have loved them, but I'm not sure if it's worth trying to make the change.
#servers #thinkmate #dell #hp
I WISH there was a built-in way in Windows Server to use RAM as cache. I think it's awesome that StarWind has it.
Well, it opens a big can of worms if done the wrong way. People don't understand that a block-level DRAM cache is dangerous without synchronous replication between nodes to keep the multiple copies of the cache coherent between independent "controller" nodes, plus a sort of log at the back end. They install StarWind on a single controller without a UPS, experience a power outage, lose a few GBs of transactions and... come blame us! :(
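A toy sketch of that failure mode (all class and variable names are illustrative only, nothing to do with StarWind's actual code): with a single controller, everything in the volatile cache dies with the node; with a synchronous partner, the write is only acknowledged once a second coherent copy exists.

```python
# Toy model of a write-back DRAM cache with and without a synchronously
# replicated partner node. Purely illustrative.
class ControllerNode:
    def __init__(self):
        self.dram_cache = {}  # fast but volatile write-back cache
        self.disk = {}        # persistent backing store (flushed lazily)

    def write(self, key, value, partner=None):
        self.dram_cache[key] = value
        if partner is not None:
            # Synchronous replication: don't acknowledge the write until
            # the partner also holds a coherent copy of the cached block.
            partner.dram_cache[key] = value

    def power_loss(self):
        # No UPS, no battery: every un-flushed block is gone.
        self.dram_cache.clear()

a, b = ControllerNode(), ControllerNode()
a.write("lba42", b"payload", partner=b)
a.power_loss()
# The block survives on the partner, so nothing is lost.
assert b.dram_cache["lba42"] == b"payload"
```

Drop the `partner` argument and that final assertion fails, which is exactly the single-controller-without-UPS scenario above.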
One of the biggest benefits of StarWind is that it uses a RAM cache in its SAN stack to give you millions of IOPS, instead of tens of thousands, for lots of operations.
Yes we do! Happy to help anybody who wants to play with it. Here:
Also, the need for the battery went away long ago. That's a 2000's problem.
Do what? This was a standard feature as recently as 2012 on the Dell T600 line.
Standard as a cheaper downgrade, right?
Standard as in it was the "Dell Recommended" option at the time.
And it looks like it was 2011.
And it looks like the R720xd I purchased from XByte to replace that server has the PERC H710P with a battery.
This page here says it uses NVCache. Why does it have a battery?
NV = battery-backed DRAM + flash for persistent storage. When a power outage is detected, the battery keeps the on-board microcontroller powered so it can take over the memory bus and copy the fast but volatile DRAM into the slow but non-volatile flash.
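A rough sketch of that sequence (hypothetical class, not the actual PERC firmware):

```python
# Toy model of an NVCache module: battery-backed DRAM that gets dumped
# to on-board flash when power drops. Purely illustrative.
class NVCacheModule:
    def __init__(self):
        self.dram = {}   # fast, volatile, battery-backed
        self.flash = {}  # slow, non-volatile

    def cache_write(self, lba, data):
        self.dram[lba] = data  # normal operation: writes land in DRAM

    def on_power_loss(self):
        # The battery keeps the microcontroller alive just long enough
        # to copy DRAM contents into flash before DRAM loses power.
        self.flash.update(self.dram)
        self.dram.clear()

    def on_power_restore(self):
        # Recover the dirty cache contents from flash back into DRAM.
        self.dram.update(self.flash)
        self.flash.clear()

nv = NVCacheModule()
nv.cache_write(42, b"dirty block")
nv.on_power_loss()
nv.on_power_restore()
assert nv.dram[42] == b"dirty block"
```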
Double caching seems dangerous, or at least like it would slow things down a little.
Not dangerous or slow. We do similar things for high speed websites. But in the opposite direction, of course.
Double caching is no more dangerous than single caching. If caching is dangerous, it's dangerous; if it is safe, it doesn't matter how many times you do it.
As for latency, it's not an issue and all hardware RAID systems like this do it under the hood because it's the only good way to add the SSD layer. You always want a RAM cache on the front end, and you need disks on the back end. The question is, do you want SSD in the middle or not.
If you think about the performance leap from spinners to SSD, it makes total sense to have SSD in front of them for cache. Even a little SSD helps the spinners a LOT. Same difference from SSD to RAM. RAM is orders of magnitude faster than SSD, just like SSD is over spinners. So the gains are similar. And RAM doesn't wear like SSD does, so it doesn't just speed things up but allows for fewer and more efficient writes and that means longer SSD life which actually adds reliability.
Also, cache allows writes to happen faster allowing more data to be protected before a failure.
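The RAM-in-front, SSD-in-the-middle, spinners-at-the-back layering described above can be sketched as a toy LRU hierarchy (illustrative only, not how any particular RAID firmware actually works):

```python
from collections import OrderedDict

class TieredCache:
    """Toy read cache: a small RAM tier in front of a larger SSD tier,
    with spinning disk as the backing store. Reads are served from the
    fastest tier that holds the block; evictions cascade downward."""

    def __init__(self, ram_slots, ssd_slots, disk):
        self.ram = OrderedDict()   # fastest tier, smallest
        self.ssd = OrderedDict()   # middle tier
        self.disk = disk           # slowest tier, holds everything
        self.ram_slots, self.ssd_slots = ram_slots, ssd_slots

    def read(self, key):
        if key in self.ram:                 # RAM hit: cheapest path
            self.ram.move_to_end(key)
            return self.ram[key], "ram"
        if key in self.ssd:                 # SSD hit: still fast
            val, tier = self.ssd[key], "ssd"
        else:                               # miss: go to the spinners
            val, tier = self.disk[key], "disk"
        self._promote(key, val)
        return val, tier

    def _promote(self, key, val):
        self.ram[key] = val
        if len(self.ram) > self.ram_slots:  # RAM full: demote LRU to SSD
            old_k, old_v = self.ram.popitem(last=False)
            self.ssd[old_k] = old_v
            if len(self.ssd) > self.ssd_slots:
                self.ssd.popitem(last=False)  # SSD full: drop to disk only

disk = {"a": 1, "b": 2}
tiers = TieredCache(ram_slots=1, ssd_slots=2, disk=disk)
tiers.read("a")   # first touch comes off the slow disk, then gets promoted
tiers.read("a")   # now a RAM hit
```

This is a read-side sketch; the write-back behavior (and the coherency concerns from earlier in the thread) is a separate problem on top of it.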
I have a main hypervisor running with a PERC H730P (2 GB cache), configured with write-back (which uses the 2 GB cache) for all volumes, even the slow HDD volume that SanDisk DAS Cache fronts with SSD.
But if someone has a H330 (without a cache), you can still use an SSD cache (such as with SanDisk DAS Cache) perfectly fine. You don't NEED to have the onboard RAID card cache. You can even write-through to your SSD cache volume.
Don't use DAS Cache. Western Digital has basically discontinued the product. If you rely on it within your infrastructure, it means something on your I/O path is going wrong ;(
I've no experience with erasure coding on VMware vSAN... but I know that it's production-worthy and safe with S2D. It gives you the same resiliency with better capacity efficiency. I believe it's nothing more than an algorithm... so I can't see it being any less safe or efficient when used with a different product.
I do know that all flash = better efficiency.
It actually should do a much better job. For some reason MSFT decided to cut off its own balls and stop at double parity, which is one linear parity sum plus one global parity, while it was possible to do general N => M erasure coding, the same way Azure and Ceph do.
Erasure coding - is it a safe for use in production on an all-flash array? I'm specifically talking about VMware's vSAN here, however the question is fairly broad.
The alternative is the one I'm most familiar and comfortable with: RAID 1(/10) mirrored to another node (or nodes) to provide simple, reliable fault tolerance. However, there are plenty of people talking up the new RAID5/RAID6 erasure coding features, as they substantially reduce overheads. Apparently there is far less risk of failure due to the much lower URE rates of flash storage.
I'm curious what you guys think? Is it a risky choice? @scottalanmiller has posted some passionate threads in the past about why R5/R6 is the devil (which I totally agree with), so where do you stand on erasure coding?
Verdict: you can use whatever you want in production; both solutions have many, many adopters, but neither of them was designed to run on flash (think about the Pure engine), because in that case the erasure coding should be done within the FTL (flash translation layer) on so-called OpenSSDs (or their equivalent, whatever Pure calls them).
Hope this helped :)
This product sells for $1,000 in my country, which has high taxes.
It supports the following:
Windows + Mac + Linux cross-platform file sharing
A multimedia hub for your photos, music and movies
Your complete backup solution
Support for RAID 0,1,5,6,10 and hard drive hot swapping
Equipped with 4 Ethernet ports that support failover and link aggregation
SMB 2/3 support increases Windows networking performance by 30%-50%
And while I am looking at its features, and even its VMware certification for NFS, I am pleasantly encouraged to buy this product for my company. But I can't help but wonder what something like a SAM-SD does differently. I mean, why can't I use a full-fledged OS on a good motherboard, fill it up with the same WD Red NAS HDDs, create RAID 10, and have the benefit of a full-fledged OS? I am certain CentOS 7 can do this. I reckon it would be a lot cheaper, for $1,000, to get a motherboard + CPU + case + RAM, and it should do more than this device with its crammed Linux, and my server could be updated faster and more easily. I'm wondering what the risks are, or does it all depend on how competent you are at doing this? If you're competent enough, going your own path and building your own server is better, right?
Sorry for any typos writing this with the monitor turned off, due to recent Epi-LASIK
The only reliable SMB3 stack comes from Microsoft, so be ready to spend $800 on Windows Server Standard (unless you're ready to abuse MSFT licensing and use the free Hyper-V Server for that purpose). Samba isn't getting there anytime soon, IMHO :(
Yeah, this is exactly what you want most probably. No hardware vendor will give you any flexibility SAM-SD has.