PhlipElder
    • Profile
    • Following 0
    • Followers 3
    • Topics 28
    • Posts 913
    • Groups 0

    Posts

    • RE: HyperV Server - Raid Best Practices

      @scottalanmiller said in HyperV Server - Raid Best Practices:

      @PhlipElder said in HyperV Server - Raid Best Practices:

      @black3dynamite said in HyperV Server - Raid Best Practices:

      Wouldn't it be best to use the SSD for things like caching and the page file?

      Neither the host nor the guests should be paging. If they are, then there is a problem with the way things are set up, either on the host side or with in-guest resources.

      But there should be a page file for emergencies.

      Run on the host/node in elevated CMD:

      REM Disable automatic page file management
      wmic.exe computersystem where name="SERVERNAME" set AutomaticManagedPagefile=False
      REM Set a fixed-size page file (sizes are in MB)
      wmic.exe pagefileset where name="c:\\pagefile.sys" set InitialSize=4199,MaximumSize=4199
      REM Reboot immediately to apply
      shutdown -r -t 0
      

      The double backslash is required.

      For standalone hosts we set 8192 instead of 4199.

      Either MiniDump or Active Memory Dump is set. Helping to produce those dump files is about all the page file would be used for, AFAIK. A full dump would require a page file equal to installed RAM. That's nuts when we're deploying hosts/nodes with 512GB to 3TB of RAM in one node.
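
      For anyone on newer builds where WMIC is deprecated, here is a rough PowerShell equivalent of the commands above (my sketch, not from the original post; it assumes an elevated session and that the page file already lives at C:\pagefile.sys):

      # Disable automatic page file management on the local host
      Get-CimInstance Win32_ComputerSystem |
          Set-CimInstance -Property @{ AutomaticManagedPagefile = $false }

      # Pin the page file to a fixed size (values are in MB)
      Get-CimInstance Win32_PageFileSetting -Filter "Name='C:\\pagefile.sys'" |
          Set-CimInstance -Property @{ InitialSize = 4199; MaximumSize = 4199 }

      # Reboot to apply
      Restart-Computer -Force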

      posted in IT Discussion
      PhlipElder
    • RE: HyperV Server - Raid Best Practices

      @black3dynamite said in HyperV Server - Raid Best Practices:

      Wouldn't it be best to use the SSD for things like caching and the page file?

      Neither the host nor the guests should be paging. If they are, then there is a problem with the way things are set up, either on the host side or with in-guest resources.

      posted in IT Discussion
      PhlipElder
    • RE: HyperV Server - Raid Best Practices

      @Joel said in HyperV Server - Raid Best Practices:

      Hi guys.
      I'm torn between two setup scenarios for a new server:

      Option 1:
      2x 240GB SSD SATA 6Gb/s (for OS)
      4x 2TB 12Gb/s (for data)
      I was planning on using RAID 1 for the OS and then RAID 5/6 for the data storage.

      Option 2:
      6x 2TB drives in OBR10 for everything, then creating two partitions (1 for OS and 1 for data).

      Are there any better options? What would you do?

      The environment will be running Windows. The server (bare metal) will run Hyper-V Server and the data drive will house 3x VMs (1x SQL, 1x DC, and 1x file server).

      Thoughts welcomed and appreciated.

      I suggest using PerfMon to baseline IOPS, Throughput, Disk Latency, and Disk Queue Lengths on the current host to get a feel for pressure on the disk subsystem. That would make the decision-making process a bit simpler, as the future setup could be scoped to fit today's performance needs and scaled a bit for tomorrow's over the solution's lifetime.

      EDIT: PerfMon on the host also has guest counters that can further help to scope which VMs demand what.
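
      A minimal sketch of that kind of baseline with PowerShell's Get-Counter (my illustration; the counter set and sampling schedule are assumptions, adjust to taste):

      # Core physical-disk counters: IOPS, throughput, latency, and queue length
      $counters = @(
          '\PhysicalDisk(_Total)\Disk Transfers/sec',
          '\PhysicalDisk(_Total)\Disk Bytes/sec',
          '\PhysicalDisk(_Total)\Avg. Disk sec/Transfer',
          '\PhysicalDisk(_Total)\Avg. Disk Queue Length'
      )

      # Sample every 15 seconds for an hour and save a .blg to open in PerfMon
      Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 240 |
          Export-Counter -Path C:\PerfLogs\disk-baseline.blg -FileFormat BLG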

      posted in IT Discussion
      PhlipElder
    • RE: HyperV Server - Raid Best Practices

      @DustinB3403 said in HyperV Server - Raid Best Practices:

      @PhlipElder said in HyperV Server - Raid Best Practices:

      We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.

      It's set up with:
      2x 240GB Intel SSD DC S4500 RAID 1 for host OS
      2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs

      Here it is.

      Yeah, special case. Note in the quote "We have one of our boxes (R2208GZ4GC)..."

      The 1.9TB SSDs are ours and offer just enough space to work with for their setup, thus the 240GB pair for the host OS. We have another pair of 800GB Intel SSDs set aside, as we may actually need more space than anticipated.

      Since this is a recovery situation, we can't afford any extra time waiting on spindles. The server gets delivered this weekend and the cluster there gets rebuilt. It's a 2-node asymmetric setup (Intel R1208JP4OC with DataON DNS-1640 JBOD and 24x HGST 10K SAS spindles).

      We get our box back after the project is complete.

      posted in IT Discussion
      PhlipElder
    • RE: HyperV Server - Raid Best Practices

      @Dashrender said in HyperV Server - Raid Best Practices:

      @DustinB3403 said in HyperV Server - Raid Best Practices:

      @Joel said in HyperV Server - Raid Best Practices:

      Option 1:
      2x 240GB SSD SATA 6Gb/s (for OS)
      4x 2TB 12Gb/s (for data)
      I was planning on using RAID 1 for the OS and then RAID 5/6 for the data storage.

      Right there. Post #1

      That's Joel, not @PhlipElder

      Doh! šŸ˜„ SMH!

      posted in IT Discussion
      PhlipElder
    • RE: HyperV Server - Raid Best Practices

      @Dashrender said in HyperV Server - Raid Best Practices:

      @PhlipElder said in HyperV Server - Raid Best Practices:

      @DustinB3403 said in HyperV Server - Raid Best Practices:

      @PhlipElder said in HyperV Server - Raid Best Practices:

      @Dashrender said in HyperV Server - Raid Best Practices:

      @PhlipElder said in HyperV Server - Raid Best Practices:

      @Dashrender said in HyperV Server - Raid Best Practices:

      @PhlipElder said in HyperV Server - Raid Best Practices:

      @Dashrender said in HyperV Server - Raid Best Practices:

      @PhlipElder said in HyperV Server - Raid Best Practices:

      @DustinB3403 said in HyperV Server - Raid Best Practices:

      @PhlipElder said in HyperV Server - Raid Best Practices:

      We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.

      It's set up with:
      2x 240GB Intel SSD DC S4500 RAID 1 for host OS
      2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs

      It's working great for the seven or eight VMs currently stood up on it. RAID is provided by an Intel RMS25CB080 (IIRC) series module.

      What you have is wasted SSD performance, cost, and storage capacity.

      The performance pays for itself once they're in full swing, with little to no noticeable latency. And updates run a lot faster.

      Cost wise, it's not that much of a step.

      What? The performance of the hypervisor is meaningless - assuming that's what's on the SSDs and not the VMs. So yeah, the SSDs would be a waste.

      As for updates - how often are you updating the hypervisor that you care that much about how fast updates run on the hypervisor?

      I was thinking more for the guests than the host.

      A spindled RAID 1 for the host OS would be a real pain for updating the host. The CUs get yuge and time-consuming.

      EDIT: A pair of Intel SSD DC series SATA drives is not expensive at 240GB or smaller. In the overall scheme of things, the biggest costs on the host are the CPUs and memory, then the storage subsystem depending on setup.

      OK fine - the CUs for the hypervisor get big - and? you're likely only patching them monthly, if even. How much time are you saving by using SSD there compared to HDD? 1 min? maybe 5? That's a lot of time to justify the cost of SSDs, even if they are only $99/ea. Why have them at all? Why not just install the hypervisor on the RAID 10 your VMs are on, save yourself $200.

      Yeah I get it - it's a $3,000+ server, $200 is nothing... but it's still about 8%+... so it's not nothing...

      Y'all realize WD/HGST no longer makes 2.5" SAS spindles? Seagate won't be too much further down the road. They've reached the end of the road.

      As far as the dollar figure goes, where there's value there's value. That too is all in the eye of the beholder. Our clients see it as us delivering flash in our standalone and clustered systems.

      We shall need to agree to disagree.

      TTFN

      Wait - if having the hypervisor be fast for updating matters - wouldn't it be even more important to have the workloads themselves be faster too? How are you not justifying putting all the data on SSD?

      See my earlier recommendation. Our starting go-to for all servers has been 8x 10K SAS in RAID 6 with two logical disks with the aforementioned performance specifications.

      SSD in standalone hosts has been an option cost-wise for a few years now, depending on data volume.

      Ultimately, it's up to the customer/client. Here's $$$ for the spindle solution, here's $$$$ for the SSD solution, and here are the benefits of one over the other. They make the choice.

      But it sounds as if you're stating that a faster-booting hypervisor is some miracle baby jesus tech, when it's just sunk cost.

      Nope. The benefits of going solid-state are twofold for us and the customer for sure. But that's not the reason to deploy solid-state.

      I agree with Dustin - you make it seem like putting the hypervisor on SSD is something that matters - that it's a choice that could be good - and that's so rarely true. It's so rarely true that I personally wouldn't even consider it.

      Now - an all SSD or all HDD - that's totally a different conversation - definitely choose what is right for the customer (or what they choose is right for themselves)... but that is HUGELY different than the hypervisor being on SSD and the VMs being on HDD - that just seems like a complete waste of money.

      Point of clarification: We deploy all 10K SAS RAID 6 or we deploy all-flash.

      Please point out where it was said that we deploy SSD for host OS and HDD/Rust for VMs?

      posted in IT Discussion
      PhlipElder
    • RE: HyperV Server - Raid Best Practices

      @DustinB3403 said in HyperV Server - Raid Best Practices:

      But it sounds as if you're stating that a faster-booting hypervisor is some miracle baby jesus tech, when it's just sunk cost.

      Nope. The benefits of going solid-state are twofold for us and the customer for sure. But that's not the reason to deploy solid-state.

      posted in IT Discussion
      PhlipElder
    • RE: HyperV Server - Raid Best Practices

      @Dashrender said in HyperV Server - Raid Best Practices:

      Wait - if having the hypervisor be fast for updating matters - wouldn't it be even more important to have the workloads themselves be faster too? How are you not justifying putting all the data on SSD?

      See my earlier recommendation. Our starting go-to for all servers has been 8x 10K SAS in RAID 6 with two logical disks with the aforementioned performance specifications.

      SSD in standalone hosts has been an option cost-wise for a few years now, depending on data volume.

      Ultimately, it's up to the customer/client. Here's $$$ for the spindle solution, here's $$$$ for the SSD solution, and here are the benefits of one over the other. They make the choice.

      posted in IT Discussion
      PhlipElder
    • RE: HyperV Server - Raid Best Practices

      @Dashrender said in HyperV Server - Raid Best Practices:

      OK fine - the CUs for the hypervisor get big - and? you're likely only patching them monthly, if even. How much time are you saving by using SSD there compared to HDD? 1 min? maybe 5? That's a lot of time to justify the cost of SSDs, even if they are only $99/ea. Why have them at all? Why not just install the hypervisor on the RAID 10 your VMs are on, save yourself $200.

      Yeah I get it - it's a $3,000+ server, $200 is nothing... but it's still about 8%+... so it's not nothing...

      Y'all realize WD/HGST no longer makes 2.5" SAS spindles? Seagate won't be too much further down the road. They've reached the end of the road.

      As far as the dollar figure goes, where there's value there's value. That too is all in the eye of the beholder. Our clients see it as us delivering flash in our standalone and clustered systems.

      We shall need to agree to disagree.

      TTFN

      posted in IT Discussion
      PhlipElder
    • RE: HyperV Server - Raid Best Practices

      @DustinB3403 said in HyperV Server - Raid Best Practices:

      @PhlipElder it's still added cost for little to no gain.

      Try and justify this poor decision all you want. But it was and is still a poor decision.

      To each their own.

      posted in IT Discussion
      PhlipElder
    • RE: Why aren’t chip credit cards stopping ā€œcard presentā€ fraud in the US?

      @JaredBusch said in Why aren’t chip credit cards stopping ā€œcard presentā€ fraud in the US?:

      @Dashrender said in Why aren’t chip credit cards stopping ā€œcard presentā€ fraud in the US?:

      @nadnerB said in Why aren’t chip credit cards stopping ā€œcard presentā€ fraud in the US?:

      A significant majority of cards here in Au have a "tap 'n' go" feature. There are idiots that put a nail punch through the chip several times to "disable" the "tap 'n' go" feature and make their card "more secure"... which sends them right back to magnetic stripe swiping... #MeatwareMayhem

      Even when it's important to them, the end user refuses to educate themselves.

      While I'm not surprised to hear about hole punching - I've never heard about it - what, do they just not want to be more secure? Why kill the chip?

      Because part of the chip is RFID capabilities. Stupid humans still.

      Our CCs have the chip on one side and the RFID radio on the other. There's usually a little wave in the CC's plastic where the RFID chip is sitting below.

      posted in News
      PhlipElder
    • RE: HyperV Server - Raid Best Practices

      @Dashrender said in HyperV Server - Raid Best Practices:

      What? The performance of the hypervisor is meaningless - assuming that's what's on the SSDs and not the VMs. So yeah, the SSDs would be a waste.

      As for updates - how often are you updating the hypervisor that you care that much about how fast updates run on the hypervisor?

      I was thinking more for the guests than the host.

      A spindled RAID 1 for the host OS would be a real pain for updating the host. The CUs get yuge and time-consuming.

      EDIT: A pair of Intel SSD DC series SATA drives is not expensive at 240GB or smaller. In the overall scheme of things, the biggest costs on the host are the CPUs and memory, then the storage subsystem depending on setup.

      posted in IT Discussion
      PhlipElder
    • RE: HyperV Server - Raid Best Practices

      @DustinB3403 said in HyperV Server - Raid Best Practices:

      What you have is wasted SSD performance, cost, and storage capacity.

      The performance pays for itself once they're in full swing, with little to no noticeable latency. And updates run a lot faster.

      Cost wise, it's not that much of a step.

      posted in IT Discussion
      PhlipElder
    • RE: Why aren’t chip credit cards stopping ā€œcard presentā€ fraud in the US?

      @coliver said in Why aren’t chip credit cards stopping ā€œcard presentā€ fraud in the US?:

      @PhlipElder said in Why aren’t chip credit cards stopping ā€œcard presentā€ fraud in the US?:

      Swipe needs to be banned. Period.

      I would love a swipe + pin setup. I think that would be the best of all worlds. Fast, easy, secure.... for the most part.

      Nope. That magnetic stripe needs to disappear. Skimmers are easy. It's really tough to "skim" a chip setup.

      posted in News
      PhlipElder
    • RE: HyperV Server - Raid Best Practices

      We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.

      It's set up with:
      2x 240GB Intel SSD DC S4500 RAID 1 for host OS
      2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs

      It's working great for the seven or eight VMs currently stood up on it. RAID is provided by an Intel RMS25CB080 (IIRC) series module.

      posted in IT Discussion
      PhlipElder
    • RE: HyperV Server - Raid Best Practices

      If the server has eight 2.5" bays then fill them with 10K SAS in RAID 6.

      8x 10K SAS in RAID 6 sustains 800 MiB/s throughput and 250 to 450 IOPS per disk, depending on the storage stack format.
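
      (Back-of-envelope on that number, my own math rather than the post's: RAID 6 stripes reads across the six data shares of the eight members, so at roughly 130-140 MiB/s of sequential throughput per 10K spindle, 6 x ~135 MiB/s lands right around 800 MiB/s. Random writes carry the usual RAID 6 penalty of about six back-end operations each, which is why the per-disk IOPS range matters so much here.)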

      Set up two logical disks:
      1: 75GB for the host OS
      2: Balance for virtual machines.

      Use FIXED VHDX for the VM OS VHDX files, and the same for data VHDX files of 250GB or less.

      Then, use one DYNAMIC VHDX for the file server's data to expand into. This setup limits the performance degradation that would otherwise happen over time due to fragmentation. It also allows for better disaster recovery options, as moving around a 1TB VHDX with only 250GB in it would be painful.
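
      A quick sketch of that VHDX layout with the Hyper-V PowerShell module (paths, names, and sizes are mine for illustration):

      # Fixed VHDX for the VM OS disk and for data disks of 250GB or less
      New-VHD -Path 'D:\VMs\FS01\FS01-OS.vhdx' -SizeBytes 75GB -Fixed
      New-VHD -Path 'D:\VMs\FS01\FS01-Data.vhdx' -SizeBytes 250GB -Fixed

      # One dynamic VHDX for the file server's growing data set
      New-VHD -Path 'D:\VMs\FS01\FS01-Archive.vhdx' -SizeBytes 2TB -Dynamic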

      If there's a need for more performance, then go with at least four Intel SSD D3-S4610 series SATA in RAID 5 (more risky) or five in RAID 6 (less risky). We'd go for eight smaller SSDs versus five larger ones for more performance, using the above formula.

      Blog Post: Disaster Preparedness: KVM/IP + USB Flash = Recovery. Here's a Guide

      posted in IT Discussion
      PhlipElder
    • RE: Why aren’t chip credit cards stopping ā€œcard presentā€ fraud in the US?

      @mlnews said in Why aren’t chip credit cards stopping ā€œcard presentā€ fraud in the US?:

      Fraud is on the rise despite a move to chip cards.

      A security analysis firm called Gemini Advisory recently posted a report saying that credit card fraud is actually on the rise in the US. That's surprising, because the US is three years out from a big chip-based card rollout. Chip-based cards were supposed to limit card fraud in the US, which was out of control compared to similar fraud in countries that already used EMV (the name of the chip card standard)....

      I remember reading comments from the American payment industry folks that basically said Americans were too stupid to do Chip & PIN. We've had it here for a very long time, with TAP being a relatively recent addition. TAP is limited to $50 or $100 depending on merchant and product. It makes transactions faster than any other method.

      Swipe needs to be banned. Period.

      Next up: RFID protection wallets. A must-have for frequent travelers.

      posted in News
      PhlipElder
    • RE: Guide for VM Server Migration...

      @Obsolesce said in Guide for VM Server Migration...:

      Live migration works well.

      Indeed. Set up Kerberos Constrained Delegation between the old and new hosts, configure incoming live migration settings on the new host, and go for it. Shared Nothing Live Migration.

      It can also be done using self-issued certificates, thus avoiding the AD join and changes.
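
      For the Kerberos route, a minimal sketch with the Hyper-V cmdlets (host and VM names are placeholders; the constrained delegation itself is configured on the computer objects in AD):

      # On each host: allow live migrations and authenticate with Kerberos
      Enable-VMMigration
      Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

      # Shared Nothing Live Migration: move the VM and its storage in one shot
      Move-VM -Name 'FS01' -ComputerName 'OLDHOST' -DestinationHost 'NEWHOST' `
          -IncludeStorage -DestinationStoragePath 'D:\VMs\FS01'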

      posted in IT Discussion
      PhlipElder
    • RE: Guide for VM Server Migration...

      1: Export from the source and import on the test server (see the sketch below).

      2: Why not use Azure Backup to back up the production VMs and then use that backup to restore to the test server? That would be a great way to test the entire process.
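
      A rough sketch of option 1 (the VM name and paths are mine; the .vmcx file name is a GUID that comes from the export):

      # On the production host
      Export-VM -Name 'SQL01' -Path 'E:\Exports'

      # On the test server, after copying E:\Exports\SQL01 across
      # -Copy duplicates the files; -GenerateNewId avoids an ID clash with production
      Import-VM -Path 'E:\Exports\SQL01\Virtual Machines\<GUID>.vmcx' -Copy -GenerateNewId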

      posted in IT Discussion
      PhlipElder
    • RE: Hyper-V 2019

      @zachary715 said in Hyper-V 2019:

      Noob question. Does MS offer a Hardware Compatibility List for Hyper-V? How do you determine if your hardware will be compatible with each version of Hyper-V?

      www.windowsservercatalog.com <-- The hardware references are all in there.

      posted in IT Discussion
      PhlipElder
    • 1 / 1