    HyperV Partitioning

IT Discussion
• bnrstnr

Why do you need logging on a separate partition? Skip the partitioning altogether. I would only provision what you need, and do it all on the R10. No need to set up blank VMs unless you have a very specific reason to. I would use all the R10 space as needed, then if you run out of storage there you could start chipping away at the R1. Typically you'd just buy all the same hard drives and have OBR10 instead of having two different arrays; this would give you better performance, reliability, etc.

• Joel

This is how their tech team has requested it be configured.

• bnrstnr

Ah, you're looking for a walkthrough, not advice; sorry I missed that part.

It should be pretty straightforward, as you install right from the Hyper-V installer just like regular Windows. Is there a specific thing you're having trouble with?

• DustinB3403 @Joel

            @joel said in HyperV Partitioning:

This is how their tech team has requested it be configured.

Then their tech team doesn't understand the benefits of OBR10, or why splitting arrays like this was never a good thing.

As for the partitioning, this is something you'd have to do at the RAID controller, assuming it supports this.

• Joel

So I can create the VMs, no problem.
But I'm not so clear on how I give the VMs a C:\ drive from part of the R1 volume. I think I'm overcomplicating the best way to get it done.

• DustinB3403 @Joel

                @joel said in HyperV Partitioning:

So I can create the VMs, no problem.
But I'm not so clear on how I give the VMs a C:\ drive from part of the R1 volume. I think I'm overcomplicating the best way to get it done.

You would simply create a new disk and designate it as coming from that array (on that VM). On XenServer or ESXi you can specify which storage to use when creating disks; I'm almost 100% positive that this is doable on Hyper-V as well.
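As a rough PowerShell sketch of that on Hyper-V (the drive letter, path, and VM name here are assumptions for illustration, not from this thread):

    # Create a new VHDX on the volume backed by the R1 array (assumed to be E:)
    New-VHD -Path 'E:\VHDs\VM01-C.vhdx' -SizeBytes 60GB -Dynamic

    # Attach it to an existing VM (assumed name 'VM01')
    Add-VMHardDiskDrive -VMName 'VM01' -Path 'E:\VHDs\VM01-C.vhdx'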

• black3dynamite

                  https://www.altaro.com/hyper-v/hyper-v-small-business-sample-host-builds/

I've only set up Hyper-V one way: one big RAID 10, and then create two partitions from the hypervisor.
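A minimal PowerShell sketch of that second step, assuming Windows was installed to C: on the array (disk 0); the drive letter and label are made up:

    # Carve the space left after the C: partition into a second volume for VMs
    New-Partition -DiskNumber 0 -UseMaximumSize -DriveLetter D |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel 'VMs'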

• Obsolesce

When you create the VM in Hyper-V Manager, it asks you where you want to create the .vhdx disk for the VM... choose the C: drive of the host. Then, after the VM is created, you can create another disk for the VM on the RAID 10 in the VM settings.

As for what you are doing, your tech team is almost 100% wrong and they don't understand virtualization. You need to send them here.
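The same flow scripted in PowerShell, as a sketch (VM name, sizes, and paths are assumptions):

    # Create the VM with its OS disk on the host's C: drive
    New-VM -Name 'VM01' -MemoryStartupBytes 4GB -Generation 2 `
        -NewVHDPath 'C:\VMs\VM01\OS.vhdx' -NewVHDSizeBytes 60GB

    # Then give it a data disk that lives on the RAID 10 volume (assumed D:)
    New-VHD -Path 'D:\VHDs\VM01-Data.vhdx' -SizeBytes 500GB -Dynamic
    Add-VMHardDiskDrive -VMName 'VM01' -Path 'D:\VHDs\VM01-Data.vhdx'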

• DustinB3403 @Obsolesce

                      @tim_g said in HyperV Partitioning:

When you create the VM in Hyper-V Manager, it asks you where you want to create the .vhdx disk for the VM... choose the C: drive of the host. Then, after the VM is created, you can create another disk for the VM on the RAID 10 in the VM settings.

As for what you are doing, your tech team is almost 100% wrong and they don't understand virtualization. You need to send them here.

I'm specifically talking with @Joel in PM; unfortunately he is the outsourced IT in this case, and the client wants this system yesterday.

He understands what is wrong, and I've guided him to get things in line with what we'd do.

• bnrstnr

Your arrays should show up in Hyper-V with drive letters, just like in normal Windows. Like @Tim_G said, just pick which folder you want to store the VHDs in.
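And if you want new VMs to land on the big array by default, the host defaults can be pointed there; a one-liner sketch (D: is an assumed letter):

    # Make the RAID 10 volume the default home for new VMs and their VHDs
    Set-VMHost -VirtualMachinePath 'D:\VMs' -VirtualHardDiskPath 'D:\VHDs'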

• Jimmy9008 @DustinB3403

@dustinb3403 said in HyperV Partitioning:

@joel said in HyperV Partitioning:

This is how their tech team has requested it be configured.

Then their tech team doesn't understand the benefits of OBR10, or why splitting arrays like this was never a good thing.

As for the partitioning, this is something you'd have to do at the RAID controller, assuming it supports this.

2 x 1 TB in R1 = 1 TB usable
4 x 2 TB in R10 = 4 TB usable
Total = 5 TB usable.

With OBR10, all disks would drop to the smallest size available, so those 2 TB drives can only be used as 1 TB, right? So in OBR10, that's really only 6 x 1 TB, which is 3 TB usable.

Perhaps they understand OBR10, but can only use the disks they have, and need more than 3 TB.

Splitting is not a good thing, but if that's all they have, well... it's all they have.
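(In general, RAID 10 usable capacity = (n / 2) x smallest drive size, so mixing 1 TB and 2 TB drives in one array wastes half of each 2 TB drive.)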

• Obsolesce @Jimmy9008

@jimmy9008 said in HyperV Partitioning:

Perhaps they understand OBR10, but can only use the disks they have, and need more than 3 TB.

Splitting is not a good thing, but if that's all they have, well... it's all they have.

You would instead never have purchased the 1 TB drives, and would have gotten two more 2 TB drives to make a 6-drive RAID 10 for a total of 6 TB usable. 2 TB spinning rust is pretty close to the same cost as 1 TB.

• Obsolesce

@tim_g said in HyperV Partitioning:

You would instead never have purchased the 1 TB drives, and would have gotten two more 2 TB drives to make a 6-drive RAID 10 for a total of 6 TB usable.

To further expand on this: when installing Hyper-V Server, you would create your partitions then. A 60 GB C: partition, and the D: partition for the remaining space.

• Jimmy9008 @Obsolesce

@tim_g said in HyperV Partitioning:

To further expand on this: when installing Hyper-V Server, you would create your partitions then. A 60 GB C: partition, and the D: partition for the remaining space.

                                You missed my point.

• StorageNinja @DustinB3403

                                  @dustinb3403 said in HyperV Partitioning:

Then their tech team doesn't understand the benefits of OBR10, or why splitting arrays like this was never a good thing.

Running VMs on the same partition as your hypervisor, and having noisy-neighbor issues impact the hypervisor's ability to perform, can cause interesting race conditions. Now, if your hypervisor is embedded (can run from RAM once loaded), this isn't a big deal, but in the case of Hyper-V (which has a god-awful huge footprint) I wouldn't call this a bad idea.

• Obsolesce @StorageNinja

                                    @storageninja said in HyperV Partitioning:

                                    @dustinb3403 said in HyperV Partitioning:

Then their tech team doesn't understand the benefits of OBR10, or why splitting arrays like this was never a good thing.

Running VMs on the same partition as your hypervisor, and having noisy-neighbor issues impact the hypervisor's ability to perform, can cause interesting race conditions. Now, if your hypervisor is embedded (can run from RAM once loaded), this isn't a big deal, but in the case of Hyper-V (which has a god-awful huge footprint) I wouldn't call this a bad idea.

So there's no misunderstanding: I'm using the terms "above" and "below" as in hardware is at the bottom and VMs are at the top.

In Hyper-V, the hypervisor (Ring -1, minus one) runs below the Windows kernel (Ring 0). Hyper-V needs higher privilege than Ring 0, and needs dedicated access to the hardware. So it goes Ring 3 (VMs) --> Ring 0 (kernel mode: VMBus, VSPs, drivers) --> Ring -1 (the Hyper-V hypervisor) --> physical hardware.

Ring -1 (the Hyper-V hypervisor) sits below the Windows kernel, controlling all access to physical components.

Windows Server runs on top, and every VM runs beside it. The only thing Windows Server can do is manage the VMs using various components.

The Hyper-V hypervisor is only 20 MB. It runs in memory. Not sure what you mean by "god-awful huge footprint"?

To say you can run a VM on the same partition as the hypervisor is wrong. You can't do it.

Nobody is suggesting stashing a VM on the same partition as the hypervisor. What we are saying is to have one big RAID 10 with multiple partitions on it. And if one VM is so busy it's slowing down the rest... then that needs to be addressed separately. Nothing like that was mentioned.

This disk race condition is hypervisor agnostic, and happens between two or more VMs if one is too noisy.

If you have a super-busy, high-disk-I/O VM running on the same physical disk as another VM, it's going to slow down the other VM for sure, unless you enable QoS.
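For reference, per-disk Storage QoS can be set from PowerShell; a minimal sketch (the VM name and IOPS cap are made up):

    # Cap a noisy VM's first SCSI disk at 500 normalized (8 KB) IOPS
    Set-VMHardDiskDrive -VMName 'BusyVM' -ControllerType SCSI `
        -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 500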

• StorageNinja @Obsolesce

                                      @tim_g said in HyperV Partitioning:

Windows Server runs on top, and every VM runs beside it. The only thing Windows Server can do is manage the VMs using various components.
The Hyper-V hypervisor is only 20 MB. It runs in memory. Not sure what you mean by "god-awful huge footprint"?

If that VM is a pure control plane, then I can reboot or patch it without impacting network or storage IO on the other VMs, in the same way I can restart management agents on ESXi or KVM, right? If Hyper-V is handling the network and storage traffic 100%, then surely it must have its own driver stack, and not be dependent on the management VM for these functions, right?
Unless this has changed, you previously lost every VM on a host from a simple reboot of the management VM.

• StorageNinja @Obsolesce

                                        @tim_g said in HyperV Partitioning:

                                        This disk race condition is hypervisor agnostic, and happens between two or more VMs if one is too noisy.

The race condition happens because of IO components running on top of the lower level; if they lose communication with the scheduler, you get a race condition (this is arguably 10x worse on VSA systems, though). This is far more of an issue in systems that have IO pass-through VMs than in ones where the IO/networking driver stack is 100% in the hypervisor.

• travisdh1 @StorageNinja

                                          @storageninja said in HyperV Partitioning:

                                          @tim_g said in HyperV Partitioning:

Windows Server runs on top, and every VM runs beside it. The only thing Windows Server can do is manage the VMs using various components.
The Hyper-V hypervisor is only 20 MB. It runs in memory. Not sure what you mean by "god-awful huge footprint"?

If that VM is a pure control plane, then I can reboot or patch it without impacting network or storage IO on the other VMs, in the same way I can restart management agents on ESXi or KVM, right? If Hyper-V is handling the network and storage traffic 100%, then surely it must have its own driver stack, and not be dependent on the management VM for these functions, right?
Unless this has changed, you previously lost every VM on a host from a simple reboot of the management VM.

                                          You expect Microsoft to do things rationally, or correctly? That'd be a nice change of pace.

• StorageNinja @travisdh1

                                            @travisdh1 said in HyperV Partitioning:

                                            You expect Microsoft to do things rationally, or correctly? That'd be a nice change of pace.

My point is that things in the IO path go through that VM. They didn't want to write a full IO driver stack for Hyper-V, so they have the management VM handle that. Compute/memory doesn't go through it (that I know of), but network and disk IO do (otherwise Perfmon wouldn't work as a monitoring solution on the host).

AFAIK only ESXi uses a microkernel that has a fully isolated management-agent plane (it's actually just a BusyBox shell).
