SAN LUNs Do Not Act Like NAS Shares



  • So this is a split off from another discussion.

    @ntoxicator said:

    shit me for getting torn to shreds on here. Pissing contest.

    It's much easier to verbalize than to type out exact specifics.

    What I meant by "I just seen as Windows iSCSI initiator working much better; more manageable and not limited":

    Am I currently using the Windows iSCSI initiator? No.
    Do I wish I was using it? Yes.
    Why? Because I feel it would be easier to manage and connect an iSCSI LUN as localized storage and data storage. The larger 2TB storage holds all the Windows network shares and user profile data... that's the problem.

    @Dashrender said:

    So disconnect the LUN from the XenServer and connect it directly to ProxMox, then give that drive to the Windows VM. Does that not work in ProxMox?

    @scottalanmiller said:

    SANs do not work that way.



  • So the high-level issue here is that LUNs are raw storage, just like a hard drive. So any features and limitations of a hard drive carry through with them. If you want to disconnect a LUN from one server (pulling a hard drive) and connect it to another one (inserting a hard drive), you can certainly do that. But to use it, you need that drive to contain a filesystem and partition table that the new machine can talk to.

    So if you are doing this from, say, one Windows Server to another Windows Server of similar versioning, no problem. It will be like you inserted a new USB drive or whatever. You can see everything. Metadata may or may not match up based on other factors.

    Same with Linux: going from Linux to another Linux box with the same filesystem support, you are golden. Metadata concerns apply.

    Issues happen when you go between systems that likely do not have the same filesystem support: Linux to Windows, VMware to Hyper-V, etc. The new system doesn't support the filesystems from the other, and the metadata is definitely wrong.
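    The "does the new machine speak this filesystem?" check above can be sketched from the command line. A minimal, hedged example on a Linux host; the portal address, target IQN, and device name below are hypothetical placeholders, not values from this thread:

    ```shell
    # Hypothetical portal and target -- substitute your own values.
    PORTAL=192.168.1.50
    TARGET=iqn.2000-01.com.synology:rackstation.lun1

    # Attach the LUN to the new host (the "inserting a hard drive" step):
    iscsiadm -m discovery -t sendtargets -p "$PORTAL"
    iscsiadm -m node -T "$TARGET" -p "$PORTAL" --login

    # Before mounting anything, check whether this host can actually read the
    # partition table and filesystem the previous machine left on the LUN:
    lsblk -f          # block devices with detected filesystem types
    blkid /dev/sdb1   # e.g. TYPE="ext4" is readable here; TYPE="ntfs" needs NTFS support

    # Mount read-only first so a mismatched or dirty filesystem is not damaged:
    mount -o ro /dev/sdb1 /mnt/lun
    ```

    If `blkid` reports a filesystem the new host has no driver for, the "inserted drive" is visible but unreadable, which is exactly the cross-platform failure described above.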



  • In the case in the example, I've not tried this. It is XenServer to KVM. XenServer uses CentOS as its Dom0 and that supplies the drivers. KVM is Linux. So we are looking at Linux to Linux. Assuming compatible filesystem options and drivers, I think that it will work. But it is not a 100% guarantee that the files will all be as expected; it would require testing.



  • Ah, yes, I started to understand that from your continued posts in the other thread.

    But this is good to know.

    I was originally reading that the OP had set up an iSCSI connection to XenServer and XenServer was handling the formatting of the drive. Since XenServer is Linux, and the new VM host he was going to is also Linux based, I assumed you could move from one to the other with the previously mentioned metadata concern.



  • There is also now a question of whether or not the iSCSI connection is simply a passthrough on the XenServer and Windows might be managing the storage raw.



  • @Dashrender said:

    There is also now a question of whether or not the iSCSI connection is simply a passthrough on the XenServer and Windows might be managing the storage raw.

    It isn't. It works almost identically to how ESXi handles iSCSI. Just a file on a disk for the most part.
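    A hedged illustration of what "just a file on a disk" looks like from the XenServer command line: the iSCSI LUN is consumed by the hypervisor as a Storage Repository, and each VM disk is a VDI inside it, not something the guest's initiator ever sees. The `<sr-uuid>` below is a placeholder:

    ```shell
    # Run in dom0 on the XenServer host. <sr-uuid> is a placeholder.
    # The iSCSI LUN appears as a Storage Repository, not a guest-visible disk:
    xe sr-list type=lvmoiscsi

    # Each VM disk is a VDI carved out of that SR:
    xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,virtual-size

    # Under the hood the SR is an LVM volume group and each VDI a logical volume:
    lvs
    ```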



  • @Dashrender said:

    There is also now a question of whether or not the iSCSI connection is simply a passthrough on the XenServer and Windows might be managing the storage raw.

    Not if it attaches to the hypervisor, it's normally formatted in EXT3, EXT4 or XFS.



  • @scottalanmiller said:

    @Dashrender said:

    There is also now a question of whether or not the iSCSI connection is simply a passthrough on the XenServer and Windows might be managing the storage raw.

    Not if it attaches to the hypervisor, it's normally formatted in EXT3, EXT4 or XFS.

    Normally - sure, but doesn't have to be.

    Heck, even in the new thread talking solely about storage he seems to be saying one thing while acting upon another.

    At least coliver seems to know what is going on - my complete lack of ever touching XenServer is hurting my ability to follow along.



  • @Dashrender said:

    Not if it attaches to the hypervisor, it's normally formatted in EXT3, EXT4 or XFS.

    Normally - sure, but doesn't have to be.

    Doesn't it? I mean technically you could modify some things, but the officially supported list is pretty small. Are you suggesting that it might be something quirky?



  • @scottalanmiller said:

    @Dashrender said:

    Not if it attaches to the hypervisor, it's normally formatted in EXT3, EXT4 or XFS.

    Normally - sure, but doesn't have to be.

    Doesn't it? I mean technically you could modify some things, but the officially supported list is pretty small. Are you suggesting that it might be something quirky?

    No. Am I mistaken that you can connect an iSCSI LUN to an ESXi box and provide direct disk access to the VM, to the point where the hypervisor does nothing but pass the information from the VM to the iSCSI interface... and Windows literally handles everything as if the drive were directly connected to the Windows install? Sure, this isn't standard or even common anymore... maybe it's not possible anymore either.



  • @Dashrender said:

    @scottalanmiller said:

    @Dashrender said:

    Not if it attaches to the hypervisor, it's normally formatted in EXT3, EXT4 or XFS.

    Normally - sure, but doesn't have to be.

    Doesn't it? I mean technically you could modify some things, but the officially supported list is pretty small. Are you suggesting that it might be something quirky?

    No. Am I mistaken that you can connect an iSCSI LUN to an ESXi box and provide direct disk access to the VM, to the point where the hypervisor does nothing but pass the information from the VM to the iSCSI interface... and Windows literally handles everything as if the drive were directly connected to the Windows install? Sure, this isn't standard or even common anymore... maybe it's not possible anymore either.

    The discussion was about Xen and KVM, which only use those three.



  • @Dashrender said:

    Am I mistaken that you can connect an iSCSI LUN to an ESXi box and provide direct disk access to the VM, to the point where the hypervisor does nothing but pass the information from the VM to the iSCSI interface... and Windows literally handles everything as if the drive were directly connected to the Windows install? Sure, this isn't standard or even common anymore... maybe it's not possible anymore either.

    That would be weird. I am not aware of such a thing ever existing. Why would that exist when you can just do that without needing that functionality?



  • I think this goes to the "never" thing you mentioned in the other thread.


  • @Dashrender said:

    @scottalanmiller said:

    @Dashrender said:

    Not if it attaches to the hypervisor, it's normally formatted in EXT3, EXT4 or XFS.

    Normally - sure, but doesn't have to be.

    Doesn't it? I mean technically you could modify some things, but the officially supported list is pretty small. Are you suggesting that it might be something quirky?

    No. Am I mistaken that you can connect an iSCSI LUN to an ESXi box and provide direct disk access to the VM, to the point where the hypervisor does nothing but pass the information from the VM to the iSCSI interface... and Windows literally handles everything as if the drive were directly connected to the Windows install? Sure, this isn't standard or even common anymore... maybe it's not possible anymore either.

    I think you are confusing things. There's VMDirectPath in vSphere, but it does not do iSCSI passthrough from iSCSI initiators. That would be pointless; you'd put the iSCSI initiator in the VM if you needed that. The iSCSI is for connecting block-level storage to the hypervisor to store the VMDKs, etc., on. Passing through block-level storage is rather pointless and kinda defeats some of the benefits of virtualization.



  • I definitely wasn't thinking that one could pass the iSCSI through to the VM... You're right, it would be better to just connect iSCSI directly to the VM...

    The block level is what I was talking about... It's been years since I've heard about it... It seems things have changed to the point that there is little or no gain in performance in this.

    Good to know it has joined the never-do list.



  • VMDirectPath I/O is for getting direct access to PCI and PCIe devices. It's not for SAN, even though you can use it for that. It's really designed around things that need to be fast (SAN is not about fast as a technology). VMDP/IO is really for things like 10GigE passthrough or Fusion-io cards.

    There has never been a special technology or procedure for providing SAN connections to the VMs instead of through the hypervisor. You've always been able to do that; it has always been a pretty bad idea. It used to be a less bad idea than it is now, but it was never considered the way to handle things even a decade ago.





  • Just realized my thread was split. Going to read everyone's replies.



  • So what is the consensus here?

    Would it be better for me to pass my iSCSI LUN directly to Windows Server and use the Windows iSCSI initiator? Or is it just bad... and would it be better to leave Xen Server handling the entire iSCSI LUN?

    Right now, I have a total of two (2) iSCSI LUNs attached to Citrix Xen Server and the node.

    On LUN #1, I have a disk I created which is attached to my Windows Server 2008 R2 Domain Controller. This disk is nearly 1TB in size and almost full.

    I've been wanting to attach a new Storage Repository (NFS) to Citrix Xen Server... and then MIGRATE the ENTIRE 1TB disk to this NFS volume, as it would be much larger and easier to manage the raw disk file.

    However, then I thought... it would be WAY easier to just attach an iSCSI volume directly to Windows using the initiator. Then I can simply grow the iSCSI LUN through my Synology NAS and be done...



  • NOTE:

    Even if I move the ENTIRE disk (from the iSCSI LUN) to the new NFS Storage Repository, I would still need to EXPAND the disk associated with the Windows Server. This new disk would be on the new NFS Storage Repository.

    However, I've had issues with Windows Server. Every time I've expanded the disk within Citrix Xen Server... Windows Server sees the new growth as a separate drive and I have to use software to merge the partition sizes together.



  • @ntoxicator said:

    Would it be better for me to pass my iSCSI LUN directly to Windows Server and use the Windows iSCSI initiator? Or is it just bad... and would it be better to leave Xen Server handling the entire iSCSI LUN?

    That's just bad. Don't even consider that a possibility. This should always be handled by the hypervisor for all intents and purposes.



  • @ntoxicator said:

    However, then I thought... it would be WAY easier to just attach an iSCSI volume directly to Windows using the initiator. Then I can simply grow the iSCSI LUN through my Synology NAS and be done...

    What would make this easier? Using iSCSI at all, and giving guest VMs access to it, should never be easier. I'm missing something here. How would that happen?



  • @ntoxicator said:

    So what is the consensus here?

    Would it be better for me to pass my iSCSI LUN directly to Windows Server and use the Windows iSCSI initiator? Or is it just bad... and would it be better to leave Xen Server handling the entire iSCSI LUN?

    Right now, I have a total of two (2) iSCSI LUNs attached to Citrix Xen Server and the node.

    On LUN #1, I have a disk I created which is attached to my Windows Server 2008 R2 Domain Controller. This disk is nearly 1TB in size and almost full.

    I've been wanting to attach a new Storage Repository (NFS) to Citrix Xen Server... and then MIGRATE the ENTIRE 1TB disk to this NFS volume, as it would be much larger and easier to manage the raw disk file.

    However, then I thought... it would be WAY easier to just attach an iSCSI volume directly to Windows using the initiator. Then I can simply grow the iSCSI LUN through my Synology NAS and be done...

    Never* pass raw storage to your VMs; it defeats several advantages of virtualization without adding anything of value. Plus, the Windows iSCSI implementation is really not good; you would be introducing a lot of issues.

    *Of course there are times when you may want to do this but they are few and far between.



  • Thank you sir!

    I was just trying to keep it simple, with the possibility of less overhead to achieve better throughput on the 1GbE network.

    Any reason why Windows doesn't see the growth in the disk when I expand it at the hypervisor level? Will this still happen when moving to the NFS Storage Repository?



  • @ntoxicator said:

    However, I've had issues with Windows Server. Every time I've expanded the disk within Citrix Xen Server... Windows Server sees the new growth as a separate drive and I have to use software to merge the partition sizes together.

    This has nothing to do with XenServer (it's always XenServer, never Xen Server. The space matters; there are XenServer and Xen server and they mean different things.) This is purely about how Windows deals with growing block storage. Using iSCSI will be identical. Your issues around automatic resizing are purely inside of Windows; XenServer has no influence or control here.
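    For what it's worth, if the grown space shows up in Windows as unallocated space on the same disk (rather than a genuinely separate disk), the built-in diskpart tool can usually merge it without third-party software, even on Server 2008 R2. A hedged sketch; "volume 2" is a placeholder to be confirmed against the `list volume` output:

    ```shell
    rem Save as extend.txt on the Windows guest, then run:  diskpart /s extend.txt
    rem "select volume 2" is a placeholder -- confirm the number with "list volume".
    rescan
    list volume
    select volume 2
    extend
    ```

    The `extend` command only works when the unallocated space sits immediately after the selected volume, which is the usual layout after growing a virtual disk at the hypervisor.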



  • @ntoxicator said:

    I was just trying to keep it simple, with the possibility of less overhead to achieve better throughput on the 1GbE network.

    The network bottleneck remains identical either way. It's system overhead alone that varies, and it is very close to equal, as each is more efficient at a different stage of the process. But one gives full visibility and control; the other breaks the virtualization model unnecessarily.



  • Thank you again Scott!

    Now... what do you think would be the more reliable or simpler solution?

    Use XenServer to migrate (move) the disk to the new Storage Repository (NFS)? This will take several hours... and I'm worried that if something fails, the entire disk migration will be lost... or will XenServer copy block by block and, if anything fails, keep the disk on the original SR?

    Or should I just attach a new disk to the Windows Server VM (from the NFS Storage Repository) and manually copy all the files over using Microsoft's data copy utility, so the shared folders & file permissions are carried over?

    As I'll need to keep the same drive letter.
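    The "data copy utility" mentioned above is presumably robocopy; a hedged sketch of that copy, where the drive letters `D:` (old iSCSI-backed disk) and `E:` (new disk on the NFS SR) and the log path are hypothetical:

    ```shell
    :: Run on the Windows guest. D: and E: are placeholders.
    :: /E copies all subfolders (including empty ones); /COPYALL carries NTFS
    :: permissions, ownership and auditing info; /R:1 /W:1 keeps retries short.
    robocopy D:\ E:\ /E /COPYALL /R:1 /W:1 /LOG:C:\robocopy.log
    ```

    The SMB share definitions themselves live in the registry and point at a path, so swapping the drive letters afterwards (so the new disk takes over the old letter) keeps the shares and the same-drive-letter requirement intact.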



  • @ntoxicator said:

    Thank you again Scott!

    Now... what do you think would be the more reliable or simpler solution?

    Use XenServer to migrate (move) the disk to the new Storage Repository (NFS)? This will take several hours... and I'm worried that if something fails, the entire disk migration will be lost... or will XenServer copy block by block and, if anything fails, keep the disk on the original SR?

    Or should I just attach a new disk to the Windows Server VM (from the NFS Storage Repository) and manually copy all the files over using Microsoft's data copy utility, so the shared folders & file permissions are carried over?

    As I'll need to keep the same drive letter.

    Are you currently not doing backups of this system? While losing the data is an understandable concern, that risk should be tempered by having an offline copy of it somewhere.



  • The windows DC is backed up to Carbonite.

    I am backing up the LUNs on the Synology RackStation to a remote disk.



  • @ntoxicator said:

    The windows DC is backed up to Carbonite.

    I am backing up the LUNs on the Synology RackStation to a remote disk.

    Could you restore from that backup to the NFS storage and then add that to Windows? It would still have a potential network bottleneck and would still require downtime, but you wouldn't be as concerned about data dropping.

