SAN LUNs Do Not Act Like NAS Shares
-
@Dashrender said:
Am I mistaken that you can connect an iSCSI LUN to an ESXi box and provide direct disk access to the VM, to the point where the hypervisor does nothing but pass the information from the VM to the iSCSI interface... and Windows literally handles everything as if the drive was directly connected to the Windows install? Sure, this isn't standard or even common anymore... maybe it's not possible anymore either.
That would be weird. I am not aware of such a thing ever existing. Why would that exist when you can just do that from within the guest without needing any special functionality?
-
I think this goes on the "never" list you mentioned in the other thread.
-
@Dashrender said:
@scottalanmiller said:
@Dashrender said:
Not if it attaches to the hypervisor; it's normally formatted with EXT3, EXT4, or XFS.
Normally - sure, but doesn't have to be.
Doesn't it? I mean technically you could modify some things, but the officially supported list is pretty small. Are you suggesting that it might be something quirky?
No. Am I mistaken that you can connect an iSCSI LUN to an ESXi box and provide direct disk access to the VM, to the point where the hypervisor does nothing but pass the information from the VM to the iSCSI interface... and Windows literally handles everything as if the drive was directly connected to the Windows install? Sure, this isn't standard or even common anymore... maybe it's not possible anymore either.
I think you are confusing things. There's VMDirectPath in vSphere, but it does not do iSCSI pass-through from iSCSI initiators. That would be pointless; you'd put the iSCSI in the VM if you needed that. The iSCSI is for connecting block-level storage to the hypervisor to store the VMDKs and so on. Passing through block-level storage is rather pointless and kinda defeats some of the benefits of virtualization.
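For illustration: "putting the iSCSI in the VM" just means running the guest's own initiator. A minimal sketch, assuming a Windows Server 2012+ guest with the built-in iSCSI cmdlets available; the portal address and target IQN are placeholder values:

```python
import subprocess

# Placeholder values -- substitute your SAN's portal address and target IQN.
PORTAL_ADDRESS = "192.168.1.50"
TARGET_IQN = "iqn.2000-01.com.example:target1"

def powershell(command: str) -> str:
    """Run a PowerShell command inside the guest and return its output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Register the target portal, then log in persistently so the session
# survives reboots. The LUN then appears to Windows as a local disk.
powershell(f"New-IscsiTargetPortal -TargetPortalAddress {PORTAL_ADDRESS}")
powershell(f"Connect-IscsiTarget -NodeAddress {TARGET_IQN} -IsPersistent $true")
print(powershell("Get-Disk | Format-Table Number, FriendlyName, Size"))
```

Nothing hypervisor-specific is involved; the guest is simply an ordinary iSCSI client on the network.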
-
I definitely wasn't thinking that one could pass the iSCSI through to the VM... You're right, it would be better to just connect iSCSI directly to the VM...
The block level is what I was talking about... It's been years since I've heard about it... It seems things have changed to the point that there is little or no performance gain in this.
Good to know it's joined the never-do list.
-
VMDirectPath I/O is for getting direct access to PCI and PCIe devices. It's not for SAN, even though you can use it for that. It's really designed around things that need to be fast (SAN, as a technology, is not about fast). VMDP/IO is really for things like 10GigE pass-through or FusionIO cards.
There has never been a technology or a procedure built around providing SAN connections to the VMs instead of through the hypervisor. You've always been able to do that, but it has always been a pretty bad idea. It used to be a better idea than it is now, though it was never considered the way to handle things even a decade ago.
-
Oh, and reference...
-
Just realized my thread was split. Going to read everyone's replies.
-
So what is the consensus here?
Would it be better for me to pass my iSCSI LUN directly to Windows Server and use the Windows iSCSI initiator? Or is it just bad... and would it be better to let Xen Server handle the entire iSCSI LUN?
As of right now, I have a total of two (2) iSCSI LUNs attached to Citrix Xen Server on the node.
On LUN #1, I have a disk I created which is attached to my Windows Server 2008 R2 Domain Controller. This disk is nearly 1TB in size and almost full.
I've been wanting to attach a new Storage Repository (NFS) to Citrix Xen Server... and then MIGRATE the ENTIRE 1TB disk to this NFS volume, as it would be much larger and easier to manage the raw disk file.
However, then I thought... it would be WAY easier to just attach an iSCSI volume directly to Windows using the initiator. Then I can simply grow the iSCSI LUN through my Synology NAS and be done...
-
NOTE:
Even if I move the ENTIRE disk (from the iSCSI LUN) to the new NFS Storage Repository, I would still need to EXPAND the disk associated with the Windows Server. This new disk would be on the new NFS Storage Repository.
However, I've had issues with Windows Server. Every time I've expanded the disk within Citrix Xen Server... Windows Server sees the new growth as a separate drive and I have to use software to merge the partitions together.
-
@ntoxicator said:
Would it be better for me to pass my iSCSI LUN directly to Windows Server and use the Windows iSCSI initiator? Or is it just bad... and would it be better to let Xen Server handle the entire iSCSI LUN?
That's just bad. Don't even consider that a possibility. This should always be handled by the hypervisor for all intents and purposes.
-
@ntoxicator said:
However, then I thought... it would be WAY easier to just attach an iSCSI volume directly to Windows using the initiator. Then I can simply grow the iSCSI LUN through my Synology NAS and be done...
What would make this easier? Using iSCSI at all, and giving guest VMs access to it, should never be the easier option. I'm missing something here. How would that happen?
-
@ntoxicator said:
So what is the consensus here?
Would it be better for me to pass my iSCSI LUN directly to Windows Server and use the Windows iSCSI initiator? Or is it just bad... and would it be better to let Xen Server handle the entire iSCSI LUN?
As of right now, I have a total of two (2) iSCSI LUNs attached to Citrix Xen Server on the node.
On LUN #1, I have a disk I created which is attached to my Windows Server 2008 R2 Domain Controller. This disk is nearly 1TB in size and almost full.
I've been wanting to attach a new Storage Repository (NFS) to Citrix Xen Server... and then MIGRATE the ENTIRE 1TB disk to this NFS volume, as it would be much larger and easier to manage the raw disk file.
However, then I thought... it would be WAY easier to just attach an iSCSI volume directly to Windows using the initiator. Then I can simply grow the iSCSI LUN through my Synology NAS and be done...
Never* pass raw storage to your VMs; it defeats several advantages of virtualization without adding anything of value. Plus the Windows iSCSI implementation is really not good, so you would be introducing a lot of issues.
*Of course there are times when you may want to do this but they are few and far between.
-
Thank you sir!
I was just trying to keep it simple, with the possibility of less overhead to achieve better throughput on a 1GbE network.
Any reason why Windows doesn't see the growth in the disk when I expand it at the hypervisor level? Will this still happen after moving to the NFS Storage Repository?
-
@ntoxicator said:
However, I've had issues with Windows Server. Every time I've expanded the disk within Citrix Xen Server... Windows Server sees the new growth as a separate drive and I have to use software to merge the partitions together.
This has nothing to do with XenServer (it's always XenServer, never Xen Server; the space matters, as XenServer and Xen server mean different things). This is purely about how Windows deals with growing block storage. Using iSCSI will be identical. Your issues around automatic resizing are purely inside of Windows; XenServer has no influence or control here.
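For reference, the usual in-Windows fix after growing a virtual disk is to rescan so Windows notices the larger disk, then extend the volume into the unallocated space. A minimal sketch using diskpart, assuming the grown space appears as contiguous unallocated space and using a hypothetical drive letter:

```python
import subprocess
import tempfile

# Hypothetical drive letter of the data volume -- substitute your own.
VOLUME_LETTER = "E"

# diskpart script: rescan so Windows sees the grown disk, then extend the
# existing volume into the unallocated space that follows it.
script = f"rescan\nselect volume {VOLUME_LETTER}\nextend\n"

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    script_path = f.name

# diskpart only accepts scripted input via /s; run from an elevated prompt.
subprocess.run(["diskpart", "/s", script_path], check=True)
```

If Windows presents the growth as an entirely separate disk rather than unallocated space on the same disk, something other than a simple resize happened, and that is worth chasing down on the Windows side.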
-
@ntoxicator said:
I was just trying to keep it simple, with the possibility of less overhead to achieve better throughput on a 1GbE network.
The network bottleneck remains identical either way. Only the system overhead varies, and it is very close to equal, as each approach is more efficient at a different stage of the process. But one gives full visibility and control, while the other breaks the virtualization model unnecessarily.
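Back-of-envelope numbers on that wire-speed ceiling (the ~90% efficiency figure is a rough assumption):

```python
# 1GbE caps around 1 Gbit/s no matter whether the initiator runs in the
# hypervisor or in the guest, so the ceiling is the same either way.
link_bits_per_sec = 1_000_000_000
efficiency = 0.90                     # rough assumption after TCP/iSCSI framing
usable_bytes_per_sec = link_bits_per_sec / 8 * efficiency  # ~112 MB/s

disk_bytes = 1_000_000_000_000        # the ~1TB disk discussed in this thread
hours = disk_bytes / usable_bytes_per_sec / 3600
print(f"~{usable_bytes_per_sec / 1e6:.0f} MB/s usable, ~{hours:.1f} h per TB")
```

That works out to roughly two and a half hours per terabyte at best, regardless of where the initiator lives.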
-
Thank you again Scott!
Now... what do you think would be the more reliable or simpler solution?
Use XenServer to migrate (move) the disk to the new Storage Repository (NFS)? This will take several hours... and I'm worried that if something fails, the entire disk migration will be lost... or will XenServer copy block by block and, if any blocks fail, keep the disk on the original SR?
Or should I just attach a new disk to the Windows Server VM (from the NFS Storage Repository) and manually copy all the files over using Microsoft's data copy utility (see the sketch below), so the shared folders and file permissions are carried over?
As I'll need to keep the same drive letter.
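Assuming the Microsoft utility meant here is robocopy, a minimal sketch of a permission-preserving copy; the drive letters are placeholders. Note that robocopy carries NTFS ACLs over, but share definitions themselves are not files and would need to be re-created or exported separately:

```python
import subprocess

# Placeholder drive letters: old iSCSI-backed disk and new NFS-backed disk.
SOURCE = "E:\\"
DEST = "F:\\"

# /E        copy all subdirectories, including empty ones
# /COPYALL  copy data, attributes, timestamps, NTFS ACLs, owner, audit info
# /R:2 /W:5 cap retries so one locked file doesn't stall the whole job
subprocess.run(
    ["robocopy", SOURCE, DEST, "/E", "/COPYALL", "/R:2", "/W:5",
     "/LOG:C:\\robocopy-migration.log"],
    check=False,  # robocopy exit codes below 8 all indicate success
)
```

Keeping the same drive letter afterwards is then just a matter of swapping letters in Disk Management once the copy is verified.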
-
@ntoxicator said:
Thank you again Scott!
Now... what do you think would be the more reliable or simpler solution?
Use XenServer to migrate (move) the disk to the new Storage Repository (NFS)? This will take several hours... and I'm worried that if something fails, the entire disk migration will be lost... or will XenServer copy block by block and, if any blocks fail, keep the disk on the original SR?
Or should I just attach a new disk to the Windows Server VM (from the NFS Storage Repository) and manually copy all the files over using Microsoft's data copy utility, so the shared folders and file permissions are carried over?
As I'll need to keep the same drive letter.
Are you currently not doing backups of this system? While losing the data is an understandable concern, that risk should be tempered by having an offline copy of it somewhere.
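On the worry about losing the disk if a migration fails partway: one option is to do it as a copy rather than a move. A sketch using the xe CLI on the XenServer host, with placeholder UUIDs, assuming the VM is shut down so the VDI is not in use; vdi-copy writes a new VDI on the target SR and leaves the original untouched:

```python
import subprocess

# Placeholder UUIDs -- find the real ones with `xe vdi-list` and `xe sr-list`.
VDI_UUID = "aaaaaaaa-1111-2222-3333-444444444444"
NFS_SR_UUID = "bbbbbbbb-1111-2222-3333-444444444444"

def xe(*args: str) -> str:
    """Run an xe CLI command on the XenServer host and return its stdout."""
    return subprocess.run(["xe", *args], capture_output=True,
                          text=True, check=True).stdout.strip()

# vdi-copy creates a brand-new VDI on the destination SR; a failure partway
# leaves the source VDI exactly as it was on the original SR.
new_vdi_uuid = xe("vdi-copy", f"uuid={VDI_UUID}", f"sr-uuid={NFS_SR_UUID}")
print(f"Copy created as VDI {new_vdi_uuid}; attach and verify it before "
      "deleting the original.")
```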
-
The Windows DC is backed up to Carbonite.
I am backing up the LUNs on the Synology RackStation to a remote disk.
-
@ntoxicator said:
The Windows DC is backed up to Carbonite.
I am backing up the LUNs on the Synology RackStation to a remote disk.
Could you restore from that backup to the NFS storage and then add that to Windows? There would still be a potential network bottleneck and it would still require downtime, but you wouldn't be as concerned about data loss.
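If that route is taken, creating the NFS SR itself is a single command on the XenServer host; the server address and export path below are placeholders:

```python
import subprocess

# Placeholder NFS export details -- substitute the Synology's address and path.
NFS_SERVER = "192.168.1.50"
NFS_EXPORT = "/volume1/xenserver-sr"

# Create an NFS Storage Repository; disks (VDIs) created on it are stored as
# files on the export, which is what makes them easy to manage and resize.
subprocess.run(
    ["xe", "sr-create", "name-label=Synology NFS SR", "type=nfs",
     "content-type=user", "shared=true",
     f"device-config:server={NFS_SERVER}",
     f"device-config:serverpath={NFS_EXPORT}"],
    check=True,
)
```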
-
I probably could pull down the backup from Carbonite, as it's backing up the entire data partition. However, then comes the restore time.
The Synology LUN backup is just a LUN; it cannot export to NFS. So I would have to use Carbonite to restore.
I suppose all the options have their issues. No clean-cut solution.