I installed Hyper-V Core, and I'm having a tough time configuring it... The server is at a remote location, I connect to the remote network via VPN, and I'm trying to use tools like Server Manager, Hyper-V Manager, and even 5nine. Server Manager itself works fine, but when I launch tools (such as Computer Management) from within Server Manager, I get random access denied messages, even after adding the server as a trusted host.
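Is there more to the workgroup setup than something like this? This is roughly the standard recipe on the management workstation; the server name and account below are placeholders:

```powershell
# On the management workstation (elevated); "HV-CORE01" is a placeholder name
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "HV-CORE01" -Concatenate -Force

# Cache credentials for the remote box so the MMC-based tools stop throwing access denied
cmdkey /add:HV-CORE01 /user:HV-CORE01\Administrator /pass

# And on the Hyper-V Core server itself, remoting has to be enabled
Enable-PSRemoting -Force
```

The `cmdkey` step is the one most people miss in a workgroup: TrustedHosts alone only covers WinRM, while Computer Management and friends also need stored credentials for the remote machine.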
Why are you doing things over a VPN? Stop doing that, that's likely your problem.
Even better, this sounds like an MSP office he is working from, so they probably have all these VPN connections to various clients open.
That's super scary, MSPs using VPNs is how malware is going to suddenly take over the world. Cross contamination all over the place.
What you are thinking of is my recommendation for supported drives that are part of the system itself if you are going for a warranty supported system like from Dell or HPE. Bringing your own drives would push you to vendors like SuperMicro where you can mix and match for the best performance, cost and features.
I want to ask why we can't/shouldn't use consumer class drives in a Dell or HPE server, but I think the answer might be - because if you're paying for that level of support, why are you not going all in?
Is that right?
i.e. if you want to run your own performance/cost factors, you're better off starting with a SuperMicro, is that what you're saying?
One of the confusing pieces here is that Linux actually does things more clearly, but the Windows world is so confusing that if you carry that confusion into the Linux world, it makes things harder. Windows rarely uses or discloses the names of its product components. So "Windows Software RAID" is used to describe part of the Windows OS itself. But what if you have software RAID on Windows that is not Windows Software RAID? Windows admins typically have no good terminology to discuss this, even though it is common. They just don't know what's going on and don't document it. But in Linux, we have the terms on hand all of the time (MD, ZFS, whatever). So the Linux side isn't as bad as it seems, but if you are used to a weird blend of generic names being used as if they were specifics in the Windows world, and assume that the Linux world is just as crazy, then it seems crazy.
That list makes hardware RAID sound safer than ZFS, which is probably not quite true. But it is the case that the average implementation of hardware RAID is quite a bit safer than the average implementation of ZFS software RAID. Hardware RAID "handles everything for you," protecting you from most bad decisions. ZFS leaves all the nitty-gritty details up to you, which makes it super, duper easy to mess something up and leave yourself vulnerable. This is exacerbated by the Cult of ZFS problem and the loads of misinformation swirling around its use. So the average person using ZFS is not even remotely prepared for what is needed to use it safely.
Some problems that we see people have when using ZFS without fully understanding storage:
Believing that ZFS doesn't use RAID (this is extremely common.)
Believing that RAIDZ is magic, rather than a brand name, and that normal RAID concerns do not apply. So we often see people implement RAID 5 in reckless, insane situations using "it's RAIDZ" as an excuse as if RAIDZ isn't just RAID 5 - literally just a brand name for RAID 5.
Treating features common to all RAID systems as "unique" and believing that ZFS has feature after feature of protection that makes protecting against storage failure unnecessary.
Not understanding hot swap and blind swap differences and creating systems that they do not know how to address should a drive fail.
Believing that ZFS, being magic, is not at risk from power loss, and failing to protect caches from power issues - something they are not normally used to dealing with because hardware RAID does this for them.
Not understanding the CPU and memory needs of ZFS, especially with features like dedupe and RAIDZ3.
Ignoring common RAID knowledge and thinking that using ZFS means not using mirroring technologies.
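To make the "RAIDZ is just RAID 5" point above concrete: single-parity RAID, whatever the brand name, is XOR parity. A toy sketch in plain Python (illustrative only, not actual ZFS code) shows why losing one stripe member is recoverable and losing two at once is fatal regardless of what the product is called:

```python
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR all data blocks together to form the parity block (RAID 5 / RAIDZ1 style)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(surviving: list[bytes]) -> bytes:
    """Any single missing block is just the XOR of everything that survived."""
    return parity(surviving)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data "disks"
p = parity(data)                     # one parity "disk"

# Lose "disk 1" and rebuild it from the remaining data blocks plus parity:
recovered = rebuild([data[0], data[2], p])
assert recovered == data[1]
```

Lose two members of the same stripe and there is no expression that recovers both, which is exactly the RAID 5 risk profile that the "it's RAIDZ" excuse tries to wave away.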
The most common RAIN approach that I see is taking all disks in the pool, noting their nodal presence and using mirroring to distribute the data so that data mirrors never go to the same disk and/or the same node. So a little like a networked RAID 1E but with more flexibility and the option to add nodal separation and performance testing so that data moves to where it is used.
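That placement rule can be sketched in a few lines. This is illustrative Python only, not any real RAIN product's algorithm: given disks tagged with their node, a mirror partner is any disk that shares neither the disk ID nor the node of the primary copy:

```python
def place_mirror(disks, primary):
    """Pick a mirror partner on a different node (and, obviously, a different disk)."""
    for candidate in disks:
        if candidate["id"] != primary["id"] and candidate["node"] != primary["node"]:
            return candidate
    raise RuntimeError("no valid mirror target: need disks on at least two nodes")

# Hypothetical two-node pool, two disks per node
disks = [
    {"id": "d0", "node": "node-a"},
    {"id": "d1", "node": "node-a"},
    {"id": "d2", "node": "node-b"},
    {"id": "d3", "node": "node-b"},
]

partner = place_mirror(disks, disks[0])
assert partner["node"] != "node-a"   # copies never land on the same node
```

A real implementation would also weigh free space and observed access patterns when choosing among the valid candidates, which is where the "data moves to where it is used" part comes in.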
Are you aware of any open source RAIN systems?
Gluster and Swift
I think Ceph and Lustre may be two others.
Lustre is RAIN, but is closed. Gluster was the open replacement for Lustre.
Just a quick search showed that Lustre was GPL 2.0, not sure if that is new or not.
Oh wow, must be new. It was crazy expensive in 2006 when we were really investigating it. That's awesome.
Ah looks like it went open source in 2010.
Oh cool, so I remember things well then. I'm just out of date. Gluster probably forced their hand, why would anyone consider Lustre when it was closed source? The answer was probably... they wouldn't and didn't.
Yep, I'd assume that was the case. Especially when it is a such a specific, and at the time, niche market.
And when Gluster went directly after them, even in name.
@scottalanmiller Technically a "virtual data room" is similar to a file server and is primarily accessed over a browser session. In addition to the standard file server features, it may also include features like:
Bulk watermarking of documents
"View only" mode for documents
Real time activity logs
Remote document shredding
Fenced view to protect documents from someone taking a photo of the monitor with document open
Q & A section and live discussions etc.
From a usage perspective, our "virtual data room" is shared between a bunch of firms (like insurers, lawyers, auditors, investors, engineering and construction companies etc) with different access levels who are working towards the completion of a specific project which may take 2-5 years to complete (please note, we are into wind/solar farm development). So I surely do not want all these guys on my file server doing crazy things like creating users and modifying originals.
Interesting. Seems like a mistake in terms. Why does being available over the "web" make it "virtual"? Seems like a marketing term. I don't see anyone but Citrix using it, and Citrix has a trend of totally making up and misusing terms. Citrix's use of "virtual" is the industry standard for "wrong." I have a feeling that this isn't a legitimate term. Looking at the Wikipedia entry for it, it looks very suspect. And the definition doesn't feel right - a specific access technology for something so general wouldn't be appropriate. And the lack of other products or vendors using the term for something so common and normal is suspicious. For example, SharePoint and Alfresco have been doing this forever, but never use the term.
@scottalanmiller I'm not quite at that max but pretty close, at 56TB in a single volume of local storage. Running a Win 2012 R2 VM on a Hyper-V 2012 R2 hypervisor. So far so good (2 yrs now). No issues other than that the initial backup was a real b1tch, but now that it's incremental, it's all good. No other caveats that I'm aware of.
So is it done? Does Matt understand and agree to the point that Scott was making?
Yes I believe so.
TL;DR attempt #1 #2 #3 #4 (counting edits)
RAID10 does not need hot spares
If you have spare slots you'd be better served by a larger array with more IOPS
The corner case (the one raised by the OP's question?) is whether hot spares would reduce the risk of array failure. The answer is absolutely yes: they will reduce the risk of failure.
The disagreement (I think?) was over whether that's necessary. We agreed that it isn't necessary to have any hot spares for RAID10 unless there are mitigating factors (examples: a remote colo with horrific access issues, an extremely risk-averse use case).
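The "spares reduce risk" half of that can be shown with a back-of-the-envelope sketch. All the numbers here are made-up assumptions (annualized failure rate, ship time, rebuild time), and it deliberately overstates absolute risk for RAID10 (where only the failed disk's mirror partner actually kills the array); the point is just the relative effect of shrinking the exposure window:

```python
def p_second_failure(remaining_disks, window_hours, afr=0.03):
    """Rough chance that any of the remaining disks fails during the exposure
    window, assuming independent failures at a constant annualized rate (AFR)."""
    p_one = afr * window_hours / (365 * 24)        # per-disk failure probability in window
    return 1 - (1 - p_one) ** remaining_disks

# Hypothetical 8-disk RAID10 with one disk down; window numbers are illustrative.
no_spare   = p_second_failure(7, window_hours=48 + 8)   # wait for a shipped disk, then rebuild
with_spare = p_second_failure(7, window_hours=8)        # rebuild starts immediately

assert with_spare < no_spare   # the spare shrinks the exposure window, so risk drops
```

Whether that reduction matters enough to give up a slot that could have been another active mirror pair is the "necessary?" question, and for most shops the answer is no.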
Also the needs of a SAN are different than the needs of a LAN. So you likely want different switches. I'd love Netgear Prosafe unmanaged on my SAN but would generally prefer Ubiquiti EdgeSwitches on my LAN.
Any opinion on Unifi Switches yet?
We use one in the lab and it's been great, but we aren't pushing its limits or anything.
I've received two quotes for new server hardware - one from our local reseller and one directly from Dell. As far as I can tell, the two quotes are identical spec-wise but the local reseller is almost $12k more expensive. Here are the two quotes:
Quote from Dell:
2x Dell PowerEdge R430 servers $6,665.60
HP Quote from local reseller:
2x HP ProLiant DL360 servers $7,266.00
2x Xeon E5-2630 v3 CPUs
64 GB RAM (unknown configuration)
1x HP MSA 2040 SAN $20,932.00
14x HP MSA 1.2 TB 10K SAS 2.5in drives
(includes $5,850 in labor, so the actual price is only $15,082)
1x Cisco Catalyst 2960-X gigabit switch $2,320.00
Is there any reason why I should choose the HP solution over the Dell solution? I will be running vSphere 6 on these servers. I'm not familiar with managing either server line so either way I'll be learning new management tools. When it comes to support I think I trust my local reseller more than Dell but $12k extra is hard to stomach just for that.
Unless the OP is restricted to 1U hosts, I would go with a quote from xByte for Dell R730xd servers with the same specs as in the quotes.
Multiply by 2, add Starwind's vSAN and a couple 10Gb NICs and he's done. Especially if only 2 hosts. Same(ish) price, way more reliability, better performance all around. I'd post that reco on SW but would likely get banned lol.
The one thing not mentioned is if there are other hosts connecting to the SAN.
I didn't get to make it up, but I have been watching the sessions and burning up my data plan. Thanks for posting these. I also wanted to take a minute to call out the guy sitting front and center who just surfed the internet the entire time SAM was talking.
I didn't even notice that, I'm going to look for it now.
Speaking of storage ...did the new licensing model for AetherStore come out yet?
Hey everyone! We haven’t officially released pricing for AetherStore 2.0 yet but as many of you know from MangoCon, AetherStore 2.0 will be Freemium. Exact pricing to be publicly announced soon. Hint: it’s going to be VERY affordable (think less expensive than Glacier).
We’ve already talked to several of you about installing the 2.0 Early Release, which we’ll start circulating soon. If you haven’t already talked to us and would like access to the Early Release PM me!
We are thinking of making a choco package for early release and pinging the package name around before it’s publicly available. How many of you use choco?