My money is on fake RAID you got there. So use md.
Also because you already use MD. :-)
You know what, I can even move the two disks to my new workstation and not have to transfer any data.
I think the answer may have been found... :D
Actually even better, I can break the RAID 1 on my current box and convert it to non-RAID, then move one of the disks to my new workstation and convert it back to RAID 1. If done right, I think I can do it all without any data loss too.
I'd prefer to just move the two disks and not have to transfer any data... but that's just me.
Yeah. I'll need to copy my home directory to the SSD in my current box before doing that though. That might be easier than breaking/re-creating the RAID. I'm not quite ready to "switch" yet. Hmm...
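If you do go the break-and-move route, the mdadm side is roughly this. A sketch only, assuming the array is /dev/md0 and the members are /dev/sdb1 and /dev/sdc1 (all hypothetical names; check /proc/mdstat and your mount points for the real ones):

```shell
# Old box: unmount and stop the array cleanly
umount /mnt/data          # hypothetical mount point
mdadm --stop /dev/md0

# Physically move one disk to the new box, then assemble it
# there as a degraded (one-member) mirror; --run starts it anyway
mdadm --assemble --run /dev/md0 /dev/sdb1

# When you're ready to be redundant again, add a second disk
# and let the mirror resync
mdadm /dev/md0 --add /dev/sdc1
cat /proc/mdstat          # watch the rebuild progress
```

Done in that order there's no point where the data exists on only zero copies, which is why the no-data-loss plan can work.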
Oh, one thing I was really curious about too, would it make sense to get an extra drive as a hot spare? Normally with OBR10 and spinning HDD, I'd put every spindle into the array, but since SSDs are going to give me plenty of IOPS and capacity, I didn't know if it is a good/bad idea with OBR5.
The rule of "no hot spare ever" for RAID 5 still applies. If you were going to do this, you would do OBR 6. OBR6 can make loads of sense here, but OBR5 + HS does not.
But it is up to you, depends on cost and risk aversion.
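For reference, going OBR6 with md instead of OBR5 + hot spare is basically a one-liner. Sketch assuming five SSDs at /dev/sdb1 through /dev/sdf1 (hypothetical device names):

```shell
# RAID 6 across five members: same spindle count as RAID 5 + hot spare,
# but both "extra" disks are actively providing redundancy instead of
# one sitting idle waiting for a failure
mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[bcdef]1
```

Same capacity as the RAID 5 + spare layout, but you can lose any two drives instead of one-then-hope-the-rebuild-finishes.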
There's an entire Ready Node selector on VMware's site - http://vsanreadynode.vmware.com/RN/RN. One thing we learned is that you can go single-socket with the hosts to save on vSphere and vSAN licenses. Just beef up the memory, because there will be some overhead for vSAN traffic that folks seem to overlook. I know we did.

Most vendors are pushing single socket these days, except for MS. That's why Scale starts on single socket, for example.
That actually makes sense. With 22-core/44-thread processors available, the need for more than a single processor is far less than it used to be.
Yes, it's a very common SMB mistake to buy two procs when that is almost never warranted in the SMB market. A single proc of the same total core count and speed is faster than two procs, and cheaper.
True lesson in this story is do not work on a production system with changes that could take it out of service when time is critical. Better to say, "Next time you have a long meeting at the end of a Friday, let me take a look at your computer."
I'd certainly love one of these for a home lab. That's for sure!
I can see the whole ROBO deployment scenario. Even in single/several use situations, like you mentioned... such as for specific critical high-performing services that won't outgrow it. Anywhere else, I'd feel stuck with it... what happens if you start to outgrow the resources? How scalable is it? That's what would determine its worth in place of regular 2U deployment servers.
Maybe I'm just not a blade type person.
It's really meant for contained scenarios where you know how big you will grow. For a normal SMB, the issue is growing into it, rather than outgrowing it.
The error says to contact support, so I'd start there. While waiting for them, I'd do routine testing... reseat everything, pull some memory out, etc. If it is not time sensitive, I'd just wait for Dell to get back to you.
I've never had to wait more than 5 minutes for Dell chat support.
Thankfully just a PERC, so pop out the old one, pop in the new, enjoy your coffee.
I know we've been over this before, but is there any way to ensure that doesn't blow a second drive? Or is that mainly age related, with a little bit of bad luck thrown in?
Ensure? No, that's never an option. But replacing a drive in mirrored RAID (RAID 1, RAID 10) does not increase the risk of another drive blowing. That's a problem with parity RAID, not mirrored. So you protect the "other" drives from blowing by choosing mirrored RAID from the outset.
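The mechanism is easy to see if you look at how much data a rebuild has to read off the surviving drives. A back-of-envelope sketch (my own illustrative numbers, not from this thread): a mirror rebuild copies from a single partner drive, while a parity rebuild has to read every surviving member end to end, which is where the extra stress on the "other" drives comes from.

```python
def rebuild_read_tb(level: str, drives: int, capacity_tb: float) -> float:
    """Rough total data read from surviving members during one rebuild."""
    if level in ("raid1", "raid10"):
        # Mirror rebuild: copy from the single surviving partner only.
        return capacity_tb
    if level in ("raid5", "raid6"):
        # Parity rebuild: every surviving member is read in full
        # to reconstruct the missing drive.
        return (drives - 1) * capacity_tb
    raise ValueError(f"unsupported level: {level}")

# Six 4 TB drives: RAID 5 rebuild reads 20 TB across the survivors,
# a RAID 10 rebuild reads just 4 TB from one partner.
print(rebuild_read_tb("raid5", 6, 4))   # 20
print(rebuild_read_tb("raid10", 6, 4))  # 4
```

More data read means more hours of sustained load on already-aged drives, which is why parity rebuilds are when the second failure tends to show up.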
I've received two quotes for new server hardware - one from our local reseller and one directly from Dell. As far as I can tell, the two quotes are identical spec-wise but the local reseller is almost $12k more expensive. Here are the two quotes:
Quote from Dell:
2x Dell PowerEdge R430 servers $6,665.60
HP Quote from local reseller:
2x HP ProLiant DL360 servers $7,266.00
2x Xeon E5-2630 v3 CPUs
64 GB RAM (unknown configuration)
1x HP MSA 2040 SAN $20,932.00
14x HP MSA 1.2 TB 10K SAS 2.5in drives
(includes $5,850 in labor, so the hardware itself is only $15,082)
1x Cisco Catalyst 2960-X gigabit switch $2,320.00
Is there any reason why I should choose the HP solution over the Dell solution? I will be running vSphere 6 on these servers. I'm not familiar with managing either server line so either way I'll be learning new management tools. When it comes to support I think I trust my local reseller more than Dell but $12k extra is hard to stomach just for that.
[Edit: CP Code M.]
Unless that OP is restricted to 1U hosts, I would go with a quote from Xbyte for a Dell R730xd with the same specs as in the quotes above.
Multiply by 2, add Starwind's vSAN and a couple 10Gb NICs and he's done. Especially if only 2 hosts. Same(ish) price, way more reliability, better performance all around. I'd post that reco on SW but would likely get banned lol.
The one thing not mentioned is if there are other hosts connecting to the SAN.
Right but with our software, they only support the default which is Gnome 3. And at $70K per seat per year for some of it, we don't stray too far away from that.
Okay, yeah. That's understandable. Set up Gnome 3 on one of your Home VMs and see how painful or not it is, lol.
Ha I've got a couple running here and I've tried different ways to get in. We can run apps with X2Go with the published apps list, but just no desktop. It's not a big deal, but some people like having the full desktop to work with.