Windows Failover Clustering Can't Add iSCSI Disk
-
@scottalanmiller said:
How do the Nimbles stack up against the big three: EMC, HDS and 3PAR? I've never used Nimble and hear people with good reports but I always find it hard to picture myself leaving the big three for enterprise block storage.
These here have been rock solid. We actually have 4 pairs of them just for our VMware infrastructure, and another pair for our Banner virtual infrastructure (VMware, Oracle, and the Banner software).
We had one blow out a drive last week, but it rebuilt over the weekend. Ours are configured with 3TB spinning-rust drives, plus some SSDs for caching and tiering (unsure of the sizes). Not sure what RAID level they use or whether it is something Nimble-specific, but we didn't have any issues resulting from the dead drive.
-
@Dashrender said:
@scottalanmiller said:
Ah, are you using it the super correct way to refer to the entirety of the block storage network? In which case you need to update your parlance to the current slang for "storage array"
Are you saying "storage array" has a specific meaning - 2 or more SANs replicating with redundant links, etc?
I mean that what we always call a SAN is actually a block storage array. Technically the word SAN refers to the entire network on which a storage array (or many of them) sits including the switches, HBAs and everything attached.
-
@scottalanmiller said:
@Dashrender said:
@scottalanmiller said:
Ah, are you using it the super correct way to refer to the entirety of the block storage network? In which case you need to update your parlance to the current slang for "storage array"
Are you saying "storage array" has a specific meaning - 2 or more SANs replicating with redundant links, etc?
I mean that what we always call a SAN is actually a block storage array. Technically the word SAN refers to the entire network on which a storage array (or many of them) sits including the switches, HBAs and everything attached.
Which is why I usually refer to a single device as a storage device, lol.
-
@dafyre said:
@scottalanmiller said:
@Dashrender said:
@scottalanmiller said:
Ah, are you using it the super correct way to refer to the entirety of the block storage network? In which case you need to update your parlance to the current slang for "storage array"
Are you saying "storage array" has a specific meaning - 2 or more SANs replicating with redundant links, etc?
I mean that what we always call a SAN is actually a block storage array. Technically the word SAN refers to the entire network on which a storage array (or many of them) sits including the switches, HBAs and everything attached.
Which is why I usually refer to a single device as a storage device, lol.
Yeah, ideally we all would. Or at least a block storage device. Because really a NAS is a file storage device, too. And the two together is a unified storage device. But everyone uses the slang of calling the box a SAN. Which is what causes the problem of having to explain that it is the use that makes it a SAN, DAS or NAS rather than the device. It's because we are all using the slang instead of the correct terms from the get-go.
-
@dafyre said:
@travisdh1 said:
@dafyre First absurdly dumb question here (hey, it's what I'm good at). By your use of "failover cluster," that means you're setting this up at the SQL Server level and not the OS level, right?
Not this set, no. This set will be done as a File Server. I figured out part of what I am doing wrong, but I am waiting on my boss to give me an available IP address, lol.
Good! I'll buy the first round if we ever meet up.
Also, looks like I need to dig into this DAG thing.
-
Looks like they are not using the DAG term, exactly, in SQL Server, although it is a database and they do call it an availability group. So it is a DAG in everything but the acronym.
-
@scottalanmiller said:
Looks like they are not using the DAG term, exactly, in SQL Server, although it is a database and they do call it an availability group. So it is a DAG in everything but the acronym.
Well, that makes a lot more sense now - I've only ever seen DAG used in relation to Exchange. But it totally makes sense that MS built AD and Exchange on SQL, and that it's the SQL layer that would be DAG'ed.
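For anyone following along, the SQL Server "availability group" being discussed is defined with T-SQL along roughly these lines. This is only a minimal sketch; the group name, database name, node names, and endpoint URLs here are invented placeholders, not anything from this thread:

```sql
-- Minimal sketch of creating an Always On availability group.
-- AgDemo, SalesDb, SQLNODE1/2, and the endpoint URLs are hypothetical.
-- Assumes both instances already have database mirroring endpoints
-- listening on port 5022 and the Always On feature enabled.
CREATE AVAILABILITY GROUP AgDemo
    FOR DATABASE SalesDb
    REPLICA ON
        'SQLNODE1' WITH (
            ENDPOINT_URL = 'TCP://sqlnode1.example.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC),
        'SQLNODE2' WITH (
            ENDPOINT_URL = 'TCP://sqlnode2.example.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC);
```

Availability groups sit on top of a Windows failover cluster but, unlike a classic failover cluster instance, they don't require shared iSCSI storage - each replica keeps its own copy of the database.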
-
@scottalanmiller said:
@Dashrender said:
aww.. so SQL has DAGs now too, eh?
Yes, actually SQL is the only thing that has them. AD and Exchange are using SQL under the hood. When they need HA, it is their SQL that primarily needs it. DAG is a SQL thing (AFAIK) and applies equally to all SQL-based products.
Related to SQL but not iSCSI (and not to hijack this thread): I upgraded all my servers to MySQL 5.7 this weekend after finding out about the GTID and channel features for multi-master replication. It works perfectly. This is a huge feature, making multi-homed MySQL all-master, all-active replication available in a community release. Someone should start a thread on this if there is not one already.
About the iSCSI: I recall seeing a similar error once and finding out I was doing something wrong. More specifically, I didn't have the allowed iSCSI initiators set correctly.
thx
-d
-
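For anyone curious, the GTID/channel setup mentioned above looks roughly like this on each node. This is a sketch only - the hostnames, channel name, and credentials are placeholders, and it assumes each node's my.cnf already has `gtid_mode=ON`, `enforce_gtid_consistency=ON`, `log_bin`, `log_slave_updates`, and a unique `server_id`:

```sql
-- Point this node at a peer master using GTID auto-positioning,
-- on a named channel (MySQL 5.7 multi-source replication).
-- 'db1.example.local' and the repl credentials are placeholders.
CHANGE MASTER TO
    MASTER_HOST = 'db1.example.local',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = '********',
    MASTER_AUTO_POSITION = 1
    FOR CHANNEL 'from-db1';

START SLAVE FOR CHANNEL 'from-db1';

-- Check replication health for that channel.
SHOW SLAVE STATUS FOR CHANNEL 'from-db1'\G
```

Repeat a `CHANGE MASTER ... FOR CHANNEL` block on each node for every peer it should pull from; the channels are what let one server replicate from several masters at once.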
@drewlander said:
@scottalanmiller said:
@Dashrender said:
aww.. so SQL has DAGs now too, eh?
Yes, actually SQL is the only thing that has them. AD and Exchange are using SQL under the hood. When they need HA, it is their SQL that primarily needs it. DAG is a SQL thing (AFAIK) and applies equally to all SQL-based products.
Related to SQL but not iSCSI (and not to hijack this thread): I upgraded all my servers to MySQL 5.7 this weekend after finding out about the GTID and channel features for multi-master replication. It works perfectly. This is a huge feature, making multi-homed MySQL all-master, all-active replication available in a community release. Someone should start a thread on this if there is not one already.
About the iSCSI: I recall seeing a similar error once and finding out I was doing something wrong. More specifically, I didn't have the allowed iSCSI initiators set correctly.
thx
-d
Nice to know about MySQL! I've never had to set up active/active MySQL before, so that's a good thing to know. 8-)
I know what you mean about the iSCSI initiators not set correctly, but those are all correct here -- I can mount the iSCSI storage on each server (one at a time) with no problem. Heck, even the Windows Cluster validation tests the storage and it works fine! lol.
-
I have a sneaking suspicion that the problem is that my two SQL Servers and the two other nodes I want to use are on different subnets (although I am unsure why that would matter).
My boss is going to get me an IP addy this morning, and I'll build a totally separate cluster for these two servers and see what happens.
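For what it's worth, nodes on different subnets can form a supported multi-subnet cluster, as long as the cluster gets a static address in each subnet. A rough PowerShell sketch of that build, with made-up node names and addresses (run elevated, with the Failover Clustering tools installed):

```powershell
# Sketch only: two file-server nodes on different subnets.
# FS1/FS2, FSCLUSTER, and both IPs are hypothetical placeholders.

# Run validation first - the report often names the exact storage issue.
Test-Cluster -Node FS1, FS2

# Supplying one static address per subnet makes the cluster
# multi-subnet aware; the name resource comes up when either IP does.
New-Cluster -Name FSCLUSTER -Node FS1, FS2 `
    -StaticAddress 10.0.1.50, 10.0.2.50
```

With a single address from one subnet, nodes in the other subnet can't bring cluster resources online, which would match the symptoms described above.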