SAN LUNs Do Not Act Like NAS Shares
-
@Dashrender said:
There is also now a question of whether or not the iSCSI connection is simply a passthrough on the XenServer and Windows might be managing the storage raw.
It isn't. It works almost identically to how ESXi handles iSCSI. Just a file on a disk for the most part.
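If you ever want to poke at it yourself, a rough sketch with the XenAPI Python bindings (the host address and credentials below are placeholders) shows each SR and the virtual disks that live on it:

```python
# Rough sketch, assuming the XenAPI Python bindings and a reachable pool
# master; the URL and credentials are placeholders.
import XenAPI

session = XenAPI.Session("https://xenserver.example.local")
session.xenapi.login_with_password("root", "password")
try:
    for sr_ref in session.xenapi.SR.get_all():
        sr = session.xenapi.SR.get_record(sr_ref)
        # e.g. type 'lvmoiscsi' for an iSCSI SR, 'nfs' for an NFS SR
        print(sr["name_label"], "(", sr["type"], ")")
        for vdi_ref in sr["VDIs"]:
            vdi = session.xenapi.VDI.get_record(vdi_ref)
            # Each VM disk is just a VDI carved out of the SR, not a raw
            # LUN handed to the guest.
            size_gb = int(vdi["virtual_size"]) // (1024 ** 3)
            print("   ", vdi["name_label"], size_gb, "GB")
finally:
    session.xenapi.session.logout()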
-
@Dashrender said:
There is also now a question of whether or not the iSCSI connection is simply a passthrough on the XenServer and Windows might be managing the storage raw.
Not if it attaches to the hypervisor, it's normally formatted in EXT3, EXT4 or XFS.
-
@scottalanmiller said:
@Dashrender said:
There is also now a question of whether or not the iSCSI connection is simply a passthrough on the XenServer and Windows might be managing the storage raw.
Not if it attaches to the hypervisor, it's normally formatted in EXT3, EXT4 or XFS.
Normally - sure, but doesn't have to be.
Heck, even in the new thread talking solely about storage he seems to be saying one thing while doing another.
At least coliver seems to know what is going on - my complete lack of experience with XenServer is hurting my ability to follow along.
-
@Dashrender said:
Not if it attaches to the hypervisor, it's normally formatted in EXT3, EXT4 or XFS.
Normally - sure, but doesn't have to be.
Doesn't it? I mean technically you could modify some things, but the officially supported list is pretty small. Are you suggesting that it might be something quirky?
-
@scottalanmiller said:
@Dashrender said:
Not if it attaches to the hypervisor, it's normally formatted in EXT3, EXT4 or XFS.
Normally - sure, but doesn't have to be.
Doesn't it? I mean technically you could modify some things, but the officially supported list is pretty small. Are you suggesting that it might be something quirky?
No. Am I mistaken that you can connect an iSCSI LUN to an ESXi box and provide direct disk access to the VM, to the point where the hypervisor does nothing but pass the information from the VM to the iSCSI interface... and Windows literally handles everything as if the drive were directly connected to the Windows install? Sure, this isn't standard or even common anymore... maybe it's not possible anymore either.
-
@Dashrender said:
@scottalanmiller said:
@Dashrender said:
Not if it attaches to the hypervisor, it's normally formatted in EXT3, EXT4 or XFS.
Normally - sure, but doesn't have to be.
Doesn't it? I mean technically you could modify some things, but the officially supported list is pretty small. Are you suggesting that it might be something quirky?
No. Am I mistaken that you can connect an iSCSI LUN to an ESXi box and provide direct disk access to the VM, to the point where the hypervisor does nothing but pass the information from the VM to the iSCSI interface... and Windows literally handles everything as if the drive were directly connected to the Windows install? Sure, this isn't standard or even common anymore... maybe it's not possible anymore either.
The discussion was about Xen and KVM, which only use those three.
-
@Dashrender said:
Am I mistaken that you can connect an iSCSI LUN to an ESXi box and provide direct disk access to the VM, to the point where the hypervisor does nothing but pass the information from the VM to the iSCSI interface... and Windows literally handles everything as if the drive were directly connected to the Windows install? Sure, this isn't standard or even common anymore... maybe it's not possible anymore either.
That would be weird. I am not aware of such a thing ever existing. Why would that exist when you can just connect the iSCSI LUN from inside the VM without needing any special passthrough functionality?
-
I think this goes to the "never" thing you mentioned in the other thread.
-
@Dashrender said:
@scottalanmiller said:
@Dashrender said:
Not if it attaches to the hypervisor, it's normally formatted in EXT3, EXT4 or XFS.
Normally - sure, but doesn't have to be.
Doesn't it? I mean technically you could modify some things, but the officially supported list is pretty small. Are you suggesting that it might be something quirky?
No. Am I mistaken that you can connect an iSCSI LUN to an ESXi box and provide direct disk access to the VM, to the point where the hypervisor does nothing but pass the information from the VM to the iSCSI interface... and Windows literally handles everything as if the drive were directly connected to the Windows install? Sure, this isn't standard or even common anymore... maybe it's not possible anymore either.
I think you are confusing things. There's VMDirectPath in vSphere, but it does not do iSCSI passthrough from iSCSI initiators. That would be pointless; you'd put the iSCSI initiator in the VM if you needed that. iSCSI is for connecting block-level storage to the hypervisor to store the VMDKs and so on. Passing block-level storage through to the guest is rather pointless and kinda defeats some of the benefits of virtualization.
-
I definitely wasn't thinking that one could pass the iSCSI through to the VM... You're right, it would be better to just connect iSCSI directly to the VM...
The block level is what I was talking about... It's been years since I've heard about it... It seems things have changed to the point that there is little or no performance gain in this.
Good to know it's joined the never-do list.
-
VMDirectPath I/O is for getting direct access to PCI and PCIe devices. It's not for SAN, even though you can use it for that. It's really designed around things that need to be fast (SAN is not about fast as a technology). VMDP/IO is really for things like 10GigE passthrough or FusionIO cards.
There has never been a technology or a procedure built around providing SAN connections to the VMs instead of through the hypervisor. You've always been able to do it (just run the iSCSI initiator inside the guest), but it has always been a pretty bad idea. It used to be a better idea than it is now, but it was never considered the way to handle things, even a decade ago.
-
Oh, and reference...
-
Just realized my thread was split. Going to read everyone's replies.
-
So what is the consensus here?
Would it be better for me to pass my iSCSI LUN directly to Windows Server and use the Windows iSCSI initiator? Or is that just bad... and would it be better to let Xen Server handle the entire iSCSI LUN?
As of right now, I have a total of two (2) iSCSI LUNs attached to Citrix Xen Server and the node.
On LUN #1, I have a disk I created which is attached to my Windows Server 2008 R2 Domain Controller. This disk is nearly 1TB in size and almost full.
I've been wanting to attach a new Storage Repository (NFS) to Citrix Xen Server... and then MIGRATE the ENTIRE 1TB disk to this NFS volume, as it would be much larger and easier to manage the raw disk file.
However, then I thought... it would be WAY easier to just attach an iSCSI volume directly to Windows using the initiator. Then I can simply grow the iSCSI LUN through my Synology NAS and be done...
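For reference, the migrate option above boils down to copying the virtual disk onto the new NFS SR. A rough sketch with the XenAPI Python bindings (the disk and SR name labels are placeholders) would look something like this:

```python
# Rough sketch, assuming the XenAPI Python bindings; the VDI and SR
# name labels are placeholders. Copy while the disk is offline, then
# attach the new VDI to the VM and retire the old one.
import XenAPI

session = XenAPI.Session("https://xenserver.example.local")
session.xenapi.login_with_password("root", "password")
try:
    vdi_ref = session.xenapi.VDI.get_by_name_label("DC data disk")[0]
    nfs_sr_ref = session.xenapi.SR.get_by_name_label("NFS SR")[0]
    # VDI.copy creates a copy of the disk on the target SR and returns
    # the new VDI reference.
    new_vdi_ref = session.xenapi.VDI.copy(vdi_ref, nfs_sr_ref)
    print("New VDI UUID:", session.xenapi.VDI.get_uuid(new_vdi_ref))
finally:
    session.xenapi.session.logout()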
-
NOTE:
Even if I move the ENTIRE disk (from the iSCSI LUN) to the new NFS Storage Repository, I would still need to EXPAND the disk associated with the Windows Server. This new disk would be on the new NFS Storage Repository.
However, I've had issues with Windows Server. Every time I've expanded the disk within Citrix Xen Server... Windows Server sees the new growth as a separate drive and I have to use software to merge the partition sizes together.
-
@ntoxicator said:
Would it be better for me to pass my iSCSI LUN directly to Windows Server and use the Windows iSCSI initiator? Or is that just bad... and would it be better to let Xen Server handle the entire iSCSI LUN?
That's just bad. Don't even consider that a possibility. This should always be handled by the hypervisor for all intents and purposes.
-
@ntoxicator said:
However, then I thought... it would be WAY easier to just attach an iSCSI volume directly to Windows using the initiator. Then I can simply grow the iSCSI LUN through my Synology NAS and be done...
What would make this easier? Using iSCSI at all and giving the guest VM direct access to it should never be easier. I'm missing something here. How would that happen?
-
@ntoxicator said:
So what is the consensus here?
Would it be better for me to pass my iSCSI LUN directly to Windows Server and use the Windows iSCSI initiator? Or is that just bad... and would it be better to let Xen Server handle the entire iSCSI LUN?
As of right now, I have a total of two (2) iSCSI LUNs attached to Citrix Xen Server and the node.
On LUN #1, I have a disk I created which is attached to my Windows Server 2008 R2 Domain Controller. This disk is nearly 1TB in size and almost full.
I've been wanting to attach a new Storage Repository (NFS) to Citrix Xen Server... and then MIGRATE the ENTIRE 1TB disk to this NFS volume, as it would be much larger and easier to manage the raw disk file.
However, then I thought... it would be WAY easier to just attach an iSCSI volume directly to Windows using the initiator. Then I can simply grow the iSCSI LUN through my Synology NAS and be done...
Never* pass raw storage to your VMs; it defeats several advantages of virtualization without adding anything of value. Plus, the Windows iSCSI implementation is really not good, so you would be introducing a lot of issues.
*Of course there are times when you may want to do this but they are few and far between.
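To make "let the hypervisor handle it" concrete: attaching the LUN at the XenServer level is just creating an SR against the Synology target. A rough sketch with the XenAPI Python bindings (the portal address, IQN, and SCSIid are placeholders you would normally get from an sr-probe) looks something like this:

```python
# Rough sketch, assuming the XenAPI Python bindings; the target address,
# IQN, and SCSIid are placeholders taken from an sr-probe of the Synology.
import XenAPI

session = XenAPI.Session("https://xenserver.example.local")
session.xenapi.login_with_password("root", "password")
try:
    host_ref = session.xenapi.host.get_all()[0]
    device_config = {
        "target": "192.168.1.50",                         # iSCSI portal on the NAS
        "targetIQN": "iqn.2000-01.com.synology:nas.lun1",  # placeholder IQN
        "SCSIid": "36001405aabbccddee001122334455",        # placeholder SCSI id
    }
    # 'lvmoiscsi' means XenServer owns the whole LUN and carves VDIs out
    # of it; the guests never run an iSCSI initiator themselves.
    sr_ref = session.xenapi.SR.create(
        host_ref, device_config, 0, "Synology iSCSI SR", "",
        "lvmoiscsi", "user", True, {})
    print("Created SR:", session.xenapi.SR.get_uuid(sr_ref))
finally:
    session.xenapi.session.logout()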
-
Thank you sir!
I was just trying to keep it simple, in the hope that less overhead would give better throughput on a 1GbE network.
Any reason why Windows doesn't see the growth in the disk when I expand it at the hypervisor level? Will this still happen when moving to an NFS Storage Repository?
-
@ntoxicator said:
However, I've had issues with Windows Server. Every time I've expanded the disk within Citrix Xen Server... Windows Server sees the new growth as a separate drive and I have to use software to merge the partition sizes together.
This has nothing to do with XenServer (it's always XenServer, never Xen Server; the space matters, as XenServer and Xen server mean different things). This is purely about how Windows deals with growing block storage. Using iSCSI will be identical. Your issues around automatic resizing are purely inside of Windows; XenServer has no influence or control here.
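For what it's worth, the hypervisor side of a grow is a single call. A rough sketch with the XenAPI Python bindings (the disk name label and new size are placeholders) would be:

```python
# Rough sketch, assuming the XenAPI Python bindings; the VDI name label
# and target size are placeholders. Detach the disk or shut the VM down
# first if the SR does not support online resize.
import XenAPI

session = XenAPI.Session("https://xenserver.example.local")
session.xenapi.login_with_password("root", "password")
try:
    vdi_ref = session.xenapi.VDI.get_by_name_label("DC data disk")[0]
    new_size = 2048 * 1024 ** 3  # grow the virtual disk to 2 TiB
    session.xenapi.VDI.resize(vdi_ref, str(new_size))
    # XenServer only grows the virtual disk. Inside Windows the new space
    # appears as unallocated space after the existing partition; you still
    # have to extend the volume into it (Disk Management -> Extend Volume,
    # or diskpart's "extend") regardless of whether the SR is iSCSI or NFS.
finally:
    session.xenapi.session.logout()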