What Are You Doing Right Now
-
@RojoLoco said in What Are You Doing Right Now:
@scottalanmiller said in What Are You Doing Right Now:
@RojoLoco said in What Are You Doing Right Now:
@Dashrender said in What Are You Doing Right Now:
@RojoLoco said in What Are You Doing Right Now:
@Dashrender said in What Are You Doing Right Now:
@RojoLoco said in What Are You Doing Right Now:
After lunch, the virtualization project begins! I think I have everything set to get Hyper-V 2016 installed to a USB drive. Still waiting on drives and RAM, but I might as well get going.
OH, is that now supported through documentation outside of OEMs?
I'm not sure, but I know how to find out. I saw an older thread on SW where @scottalanmiller and @Dashrender were discussing the pitfalls of installing Hyper-V to USB or SD... I found an article showing how to do it with Hyper-V 2016, so we will see.
We've had that same discussion here, along with @brrrbill, but it was around XenServer and how challenging it is to get the logs saved to another location so that nothing ever writes to the SD card.
I have a feeling I'll be putting a couple of drives in a RAID 1 for the hypervisor. Or would I be better off going OBR10 and a small partition?
OBR10 and small partition is the way to go.
Then that's the plan. I will probably install to these 2 spare drives to test; if that's successful, I'll make an image and then restore it to the small partition.
Am I way off base by ordering 8x 1.2TB / 10k RPM / 2.5" drives to fill this box? They were the biggest/fastest SAS drives I could find. That would give 4.8TB in a RAID 10.
You're asking the wrong question - or rather, looking at it wrong.
What will this box do? What is needed?
Perhaps you need 8 SSDs in RAID 10 because of your IOPS load, or perhaps you only need 2 drives in RAID 1, again because of IOPS load.
Granted, with a VM host it is much harder to know what you need beyond the currently expected workload, especially if you assume room for growth on that same host.
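As a rough sanity check on the drive question above, here is a minimal Python sketch; the drive count, capacity, and per-spindle IOPS figure are illustrative assumptions (typical published numbers for 10k SAS), not measurements from this environment.

```python
# Back-of-the-envelope RAID 10 sizing. All per-drive figures below are
# assumptions for illustration, not measured values.

def raid10_usable_tb(drives: int, drive_tb: float) -> float:
    """RAID 10 mirrors every drive, so usable capacity is half the raw total."""
    if drives < 4 or drives % 2:
        raise ValueError("RAID 10 needs an even number of drives, at least 4")
    return drives * drive_tb / 2

def raid10_read_iops(drives: int, per_drive_iops: int) -> int:
    """Random reads can come from either mirror member, so they scale with all spindles."""
    return drives * per_drive_iops

def raid10_write_iops(drives: int, per_drive_iops: int) -> int:
    """Every logical write lands on both mirror halves, so writes scale with half the spindles."""
    return drives * per_drive_iops // 2

if __name__ == "__main__":
    # 8 x 1.2 TB 10k SAS, assuming ~140 random IOPS per 10k spindle (rule of thumb)
    print("usable capacity (TB):", raid10_usable_tb(8, 1.2))   # -> 4.8
    print("approx. read IOPS:   ", raid10_read_iops(8, 140))   # -> 1120
    print("approx. write IOPS:  ", raid10_write_iops(8, 140))  # -> 560
```

Even as a ballpark, it shows why the IOPS question comes first: a single decent SSD can out-run the entire eight-spindle array on random I/O.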
-
@RojoLoco said in What Are You Doing Right Now:
Also, @scottalanmiller, how small should the hypervisor partition be? 20-30 GB?
I should start a thread....
Pretty small generally, but for Hyper-V 2016 I'm not sure.
-
@Dashrender said in What Are You Doing Right Now:
You're asking the wrong question - or rather, looking at it wrong.
What will this box do? What is needed?
Perhaps you need 8 SSDs in RAID 10 because of your IOPS load, or perhaps you only need 2 drives in RAID 1, again because of IOPS load.
Granted, with a VM host it is much harder to know what you need beyond the currently expected workload, especially if you assume room for growth on that same host.
Currently, this is simply a test environment / proof of concept for virtualizing our infrastructure, etc. Since I'm not sure exactly which current machines will get the P2V treatment, I'm not sure about IOPS. I have the link to the "how to measure IOPS in Windows" article, but I don't know what to run it on (we have an embarrassing number of physical machines). So for this box, I want maximum local storage while remaining somewhat cost efficient. So no flash yet. I have to show the bosses that these existing servers can be run virtually until they can be migrated to newer OSes or absorbed and decommissioned. They will buy 8 drives for this box before they will buy an HBA to connect to the gi-normous DAS storage box we have. Hopefully, the ability to virtualize will dazzle them into giving me the right number of physical hosts to run all (or most) of our current servers, and at that point I will be allocated a chunk of the ~80TB storage monster. But for now, I have an R430 with 8 drive bays that will house a number of different VHDs before we roll out a real solution.
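For the "what to run it on" problem, one low-effort option is to sample the built-in Windows disk counters on each candidate box before reaching for anything heavier. A minimal Python sketch, assuming it runs locally on the machine being measured and that a 60-second window is representative (both are assumptions):

```python
# Sample the "Disk Transfers/sec" performance counter (roughly total IOPS)
# with the built-in typeperf tool and report the average and the peak.
import csv
import io
import subprocess

COUNTER = r"\PhysicalDisk(_Total)\Disk Transfers/sec"

def sample_iops(samples: int = 60, interval_s: int = 1) -> list[float]:
    out = subprocess.run(
        ["typeperf", COUNTER, "-sc", str(samples), "-si", str(interval_s)],
        capture_output=True, text=True, check=True,
    ).stdout
    values = []
    for row in csv.reader(io.StringIO(out)):
        # Data rows look like "timestamp","123.45"; skip header and footer lines.
        if len(row) >= 2:
            try:
                values.append(float(row[1]))
            except ValueError:
                continue
    return values

if __name__ == "__main__":
    data = sample_iops()
    print(f"average IOPS: {sum(data) / len(data):.0f}   peak IOPS: {max(data):.0f}")
```

Run it during a busy window, not at lunch, or the numbers will flatter the hardware.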
-
Since it's a test box, why not just purchase an SSD or two, unless you need a ton of storage for testing?
-
@Dashrender said in What Are You Doing Right Now:
Since it's a test box, why not just purchase an SSD or two, unless you need a ton of storage for testing?
Our customer DBs are freaking huge, so much storage is needed. Besides a DC and Exchange server, most will be dev systems w/ big SQL databases. Besides, I want to have a badass test box for later.
-
@scottalanmiller said in What Are You Doing Right Now:
Pretty small generally, but for Hyper-V 2016 I'm not sure.
I saw articles saying at least 16 GB (for USB/SD), with 32 GB recommended.
I guess I can install to temporary drives and check the data size.
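If the test install happens first, a tiny script like this could confirm how much the boot volume actually consumes before committing to a partition size; the C:\ path is an assumption, so point it at whatever volume the hypervisor landed on:

```python
# Report total/used/free space on the hypervisor's boot volume so the
# partition size can be sanity-checked against the 16/32 GB guidance.
import shutil

def report_usage(volume: str = "C:\\") -> None:  # assumed volume; adjust as needed
    usage = shutil.disk_usage(volume)
    gib = 1024 ** 3
    print(f"{volume}  total {usage.total / gib:.1f} GiB | "
          f"used {usage.used / gib:.1f} GiB | free {usage.free / gib:.1f} GiB")

if __name__ == "__main__":
    report_usage()
```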
-
@RojoLoco said in What Are You Doing Right Now:
Currently, this is simply a test environment / proof of concept for virtualizing our infrastructure, etc. Since I'm not sure exactly which current machines will get the P2V treatment, I'm not sure about IOPS. I have the link to the "how to measure IOPS in Windows" article, but I don't know what to run it on (we have an embarrassing number of physical machines).
You need to do it on all of them that you plan to virtualize. The StarWind guys have access to DPACK, which you can install on all machines and then get a report on what your usage is across all sorts of stats.
So for this box, I want maximum local storage while remaining somewhat cost efficient.
Maximum? To what end? This still won't be cheap. Do you need 2 TB drives or 4? Etc.
I have to show the bosses that these existing servers can be run virtually until they can be migrated to newer OSes or absorbed and decommissioned.
Just curious, what's the plan for this? To P2V each one and put them in production to show it works?
HBA to connect to the gi-normous DAS storage box we have.
SAN/DAS eh - well, if you have the need.
-
@RojoLoco said in What Are You Doing Right Now:
Currently, this is simply a test environment / proof of concept for virtualizing our infrastructure, etc. Since I'm not sure exactly which current machines will get the P2V treatment, I'm not sure about IOPS. I have the link to the "how to measure IOPS in Windows" article, but I don't know what to run it on (we have an embarrassing number of physical machines).
A Dell DPACK would probably be a good idea.
-
@RojoLoco said in What Are You Doing Right Now:
Our customer DBs are freaking huge, so much storage is needed. Besides a DC and Exchange server, most will be dev systems w/ big SQL databases. Besides, I want to have a badass test box for later.
Well, in that case you definitely can't buy anything until you know the IOPS of the SQL box that uses the most IOPS, so you have at least that many available when testing that box and can keep things at the same level.
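If it helps, the current I/O rate of a SQL box can be sampled straight from the instance rather than from Windows. A rough sketch, assuming the pyodbc package, VIEW SERVER STATE permission, and a placeholder server name (sql-dev-01 is made up):

```python
# Estimate a SQL Server instance's disk I/O rate by sampling
# sys.dm_io_virtual_file_stats twice and diffing the cumulative counters.
import time
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql-dev-01;DATABASE=master;Trusted_Connection=yes;"  # placeholder server
)
QUERY = (
    "SELECT SUM(num_of_reads + num_of_writes) "
    "FROM sys.dm_io_virtual_file_stats(NULL, NULL);"
)

def total_ios(cursor) -> int:
    cursor.execute(QUERY)
    return int(cursor.fetchone()[0])

if __name__ == "__main__":
    window_s = 60
    conn = pyodbc.connect(CONN_STR)
    cur = conn.cursor()
    start = total_ios(cur)
    time.sleep(window_s)  # sample during a representative busy period
    end = total_ios(cur)
    conn.close()
    print(f"approx. database file IOPS over {window_s}s: {(end - start) / window_s:.0f}")
```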
-
@Dashrender said in What Are You Doing Right Now:
You need to do it on all of them that you plan to virtualize. The StarWind guys have access to DPACK, which you can install on all machines and then get a report on what your usage is across all sorts of stats.
I guess I need to do more with DPACK; I've barely scratched the surface with it.
This chassis only takes 2.5" drives, and I'm not sure I want to go SATA just to get more storage. Systems will be P2V'ed and spun up on the Hyper-V host. 4.8TB / 128GB RAM should let me do 4-5 systems at a time.
As far as the DAS goes, it is already here as part of the production environment, but it has lots of empty space that could be allocated for VMs.
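As a quick check on the "4-5 systems at a time" estimate, here is a toy allocation sketch; the per-VM sizes and host reserves below are made-up placeholders, not numbers from this thread:

```python
# How many VMs of a given size fit on the host, leaving some headroom for the
# hypervisor itself. Per-VM sizes and reserves are hypothetical placeholders.

HOST_RAM_GB = 128
HOST_STORAGE_TB = 4.8
RAM_RESERVE_GB = 8        # assumed headroom for the Hyper-V parent partition
STORAGE_RESERVE_TB = 0.3  # assumed headroom for checkpoints/config

def max_vms(vm_ram_gb: float, vm_disk_tb: float) -> int:
    by_ram = (HOST_RAM_GB - RAM_RESERVE_GB) // vm_ram_gb
    by_disk = (HOST_STORAGE_TB - STORAGE_RESERVE_TB) // vm_disk_tb
    return int(min(by_ram, by_disk))

if __name__ == "__main__":
    # e.g. chunky SQL dev VMs at 24 GB RAM and 0.9 TB of VHD each (placeholders)
    print("VMs that fit:", max_vms(24, 0.9))  # limited by whichever resource runs out first
```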
-
@Dashrender said in What Are You Doing Right Now:
Well, in that case you definitely can't buy anything until you know the IOPS of the SQL box that uses the most IOPS, so you have at least that many available when testing that box and can keep things at the same level.
The heaviest-usage SQL boxes will be the last things to get virtualized; I don't want to hear the bitching and moaning that would happen if 5 ms of latency suddenly appeared.
-
OK, switching gears - you are calling it a DAS. I'm curious, when does DAS become SAN?
-
@RojoLoco said in What Are You Doing Right Now:
The heaviest-usage SQL boxes will be the last things to get virtualized; I don't want to hear the bitching and moaning that would happen if 5 ms of latency suddenly appeared.
https://www.microsoft.com/en-gb/download/details.aspx?id=20163
edit: oops, old link - new link below
https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223
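For anyone driving DiskSpd from a script, a sketch along these lines works; the switches shown are common examples from memory (60 s, 8 KiB random, 30% writes, caching disabled), so double-check them against the documentation at the link, and the paths are placeholders. Point it at a scratch file on a volume you can afford to hammer.

```python
# Kick off a DiskSpd run and print its report. Switch values and paths are
# example assumptions; verify against the DiskSpd documentation before use.
import subprocess

DISKSPD = r"C:\Tools\DiskSpd\diskspd.exe"  # assumed install path
TARGET = r"D:\diskspd-test.dat"            # assumed scratch file on the array under test

CMD = [
    DISKSPD,
    "-c20G",   # create a 20 GiB test file
    "-d60",    # run for 60 seconds
    "-r",      # random I/O
    "-w30",    # 30% writes / 70% reads
    "-b8K",    # 8 KiB blocks (SQL-ish)
    "-t4",     # 4 worker threads
    "-o32",    # 32 outstanding I/Os per thread
    "-Sh",     # disable software and hardware write caching
    "-L",      # collect latency statistics
    TARGET,
]

if __name__ == "__main__":
    result = subprocess.run(CMD, capture_output=True, text=True)
    print(result.stdout)   # report includes per-thread and total IOPS plus latency
    if result.returncode != 0:
        print(result.stderr)
```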
-
@Dashrender said in What Are You Doing Right Now:
OK, switching gears - you are calling it a DAS. I'm curious, when does DAS become SAN?
When it is networked.
-
@Dashrender said in What Are You Doing Right Now:
OK, switching gears - you are calling it a DAS. I'm curious, when does DAS become SAN?
When you use iSCSI and a network to get to it. These are directly connected 12 Gb/s SAS cables, full tilt rock and roll. The only IP address is for the management interface.
-
@scottalanmiller said in What Are You Doing Right Now:
When it is networked.
Is Fibre Channel considered a network?
-
@Dashrender said in What Are You Doing Right Now:
Is Fibre Channel considered a network?
FC is a protocol. FC is DAS if it is direct, networked if there is a network.
-
DAS = Direct Attached Storage (e.g. no switch)
SAN = Storage Area Network (e.g. there is a switch)
-
SAS, SATA, USB, IEEE 1394, FC, FCoE, ATAoE, ZSAN... none of them is inherently DAS or SAN. All block protocols can be both. DAS vs. SAN is about how you use it.