Removing shared storage from a VMware environment
-
@Dashrender said:
Since your current solution is designed to be able to run everything on a single server, after you migrate most of that load to O365 I don't see why you wouldn't retire the second server completely.
By running two servers you have:
twice the cooling cost
twice the number of servers to manage/update
twice the power consumption
twice the amount of UPS capacity
And best of all, you'd have twice the storage to purchase and an extra 10 Gb card to buy.
According to Scott, these servers have something like 4 hours of downtime every 7-8 years, on average. Unless you really need to lower that downtime, the expense of those drives and everything else I listed is pretty high.
Interesting thought. It is really 1 of 7 servers in this location.
So a few bullet points to support the multiple servers:
- We are a 24/7 organization; we have users in multiple locations working at any time throughout the day. I will still need to service application and workstation authentication.
- Being 24/7 means I can't drop the whole thing for maintenance.
- The time managing 2-3 extra virtual machines is negligible
- This single server consumes 300 watts; the cost that adds, in exchange for being able to service everything without maintenance downtime, is again in my opinion negligible
- The business is still out on whether or not same sign-on is sufficient for Office 365 vs single sign-on. I think the same sign-on is sufficient, but if the business wants single sign-on then ADFS will need to be deployed and available to service O365 login requests.
I would agree with your solution in a smaller, single-location business -- it just wouldn't jibe with the way we operate.
-
@Dashrender said:
You mention that you're having performance issues today - do you know where those issues are coming from? Disk IO not enough? Production network not fast enough, etc?
It is definitely the storage network that is slowing us down. I am sharing 8 SATA spindles across too many virtual machines. Plus, MPIO on the 1 Gb side gets saturated quite frequently, but upgrading the controllers in the P2000 to 10 Gb iSCSI costs more than the SSDs I referenced above.
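As a rough sanity check on that bottleneck, here is a back-of-the-envelope IOPS estimate for the shared array. The per-spindle figure, RAID level, and read/write mix below are assumptions for illustration, not numbers from this thread:

```python
# Napkin estimate of what 8 shared SATA spindles can deliver.
# Assumed (not from the thread): ~75 IOPS per 7.2k SATA disk,
# RAID 10 (write penalty of 2), 70/30 read/write mix.
spindles = 8
iops_per_spindle = 75
read_ratio, write_ratio = 0.7, 0.3
write_penalty = 2

raw_iops = spindles * iops_per_spindle
effective_iops = raw_iops / (read_ratio + write_ratio * write_penalty)
print(f"raw: {raw_iops} IOPS, effective: {effective_iops:.0f} IOPS")
```

A few hundred effective IOPS shared across a whole cluster of VMs saturates quickly, which matches the symptoms described.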
-
@donaldlandru said:
Basis for the O365 first? Curious if there is benefit or other reasoning?
This will free up resources for the other VMs so that you're not running too close to the max with everything on one host.
Yes, the system was designed to hold both servers work load if required. Neither server is currently more than 30-40% utilized.
Okay, so everything on one host isn't such a big concern.
-
Keep in mind that with your VMware license you should be able to do Storage vMotion, etc., from the shared storage to the local storage on VMH-OPS1 after it gets rebuilt.
-
@donaldlandru said:
- We are a 24/7 organization; we have users in multiple locations working at any time throughout the day. I will still need to service application and workstation authentication.
Being 24/7 doesn't mean you can't afford downtime. @scottalanmiller has a lot of posts on this. It's about how much that downtime costs you, not about how often you work. We are a Fortune 100 and we have downtime. Heck, we have pretty regular momentary blips (once a month or so) with our Exchange systems.
-
@Jason said:
@donaldlandru said:
- We are a 24/7 organization; we have users in multiple locations working at any time throughout the day. I will still need to service application and workstation authentication.
Being 24/7 doesn't mean you can't afford downtime. @scottalanmiller has a lot of posts on this. It's about how much that downtime costs you, not about how often you work. We are a Fortune 100 and we have downtime. Heck, we have pretty regular momentary blips (once a month or so) with our Exchange systems.
Let's look at it from a different angle
- The hardware is already owned and only 3 years old; the only new cost is the $1600 for SSDs
- The software is already owned
- The "data center" is already built out and overcooled
To me, saying let's discard this server we already own and license in favor of creating outages for maintenance does not make any sense.
-
@donaldlandru said:
- Being 24/7 means I can't drop the whole thing for maintenance.
How much maintenance do you do? What is the annual downtime caused by VMware? Only VMware and hardware maintenance is assisted by having the second server.
-
@donaldlandru said:
To me, saying let's discard this server we already own and license in favor of creating outages for maintenance does not make any sense.
That might be true, but let's do a little napkin math...
- Why is it overcooled? That should be fixed regardless of anything else. Just wasting money.
- If you add heat, you still cool more, regardless of how much you cool now, correct? So that is more money.
- The power draw costs money.
- How much downtime does this prevent?
Add those together and see if it makes sense.
-
@scottalanmiller said:
@donaldlandru said:
- Being 24/7 means I can't drop the whole thing for maintenance.
How much maintenance do you do? What is the annual downtime caused by VMware? Only VMware and hardware maintenance is assisted by having the second server.
Assuming a non-DFSR file server, that would be assisted by this as well.
@donaldlandru , you said you have 7 servers. Can't you install a DC on one of those? Are any of those virtualized, or are they all bare metal?
-
@Dashrender said:
@scottalanmiller said:
@donaldlandru said:
- Being 24/7 means I can't drop the whole thing for maintenance.
How much maintenance do you do? What is the annual downtime caused by VMware? Only VMware and hardware maintenance is assisted by having the second server.
Assuming a non-DFSR file server, that would be assisted by this as well.
@donaldlandru , you said you have 7 servers. Can't you install a DC on one of those? Are any of those virtualized, or are they all bare metal?
DFSR would do it on a single physical host for software upgrades, too.
-
@scottalanmiller said:
DFSR would do it on a single physical host for software upgrades, too.
It would? How? If DFSR is only on one VM (or two VMs on the same host) and that host goes down (for maintenance, failure, whatever), wouldn't that data all be unavailable?
-
@scottalanmiller said:
@donaldlandru said:
- Being 24/7 means I can't drop the whole thing for maintenance.
How much maintenance do you do? What is the annual downtime caused by VMware? Only VMware and hardware maintenance is assisted by having the second server.
Now there is an aha moment, and it presents a question to bring back to the business: how much downtime is acceptable due to server hardware failure vs. spending an additional $1600 to eliminate all but a dual-server failure from impacting the services provided by these virtual machines (other disasters of course not included)?
-
@donaldlandru said:
@scottalanmiller said:
@donaldlandru said:
- Being 24/7 means I can't drop the whole thing for maintenance.
How much maintenance do you do? What is the annual downtime caused by VMware? Only VMware and hardware maintenance is assisted by having the second server.
Now there is an aha moment, and it presents a question to bring back to the business: how much downtime is acceptable due to server hardware failure vs. spending an additional $1600 to eliminate all but a dual-server failure from impacting the services provided by these virtual machines (other disasters of course not included)?
Exactly! That's why I mentioned the 4 hours of anticipated downtime over 7-8 years. If one server is expected to have only 4 hours of downtime over 7-8 years, is it worth spending $1600 plus heating/cooling/power/UPS, etc. to prevent those 4 hours?
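For context, the "4 hours over 7-8 years" figure can be translated into an availability number; using the midpoint of that 7-8 year range as an assumption:

```python
# Convert "4 hours of downtime over 7-8 years" into an availability figure.
years = 7.5                      # assumed midpoint of the 7-8 year estimate
hours_total = years * 365 * 24   # 65,700 hours
downtime_hours = 4
availability = 1 - downtime_hours / hours_total
print(f"availability: {availability:.5%}")
```

That works out to better than four nines, which is the baseline the extra $1600 of hardware would have to improve on.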
-
@donaldlandru said:
@scottalanmiller said:
@donaldlandru said:
- Being 24/7 means I can't drop the whole thing for maintenance.
How much maintenance do you do? What is the annual downtime caused by VMware? Only VMware and hardware maintenance is assisted by having the second server.
Now there is an aha moment, and it presents a question to bring back to the business: how much downtime is acceptable due to server hardware failure vs. spending an additional $1600 to eliminate all but a dual-server failure from impacting the services provided by these virtual machines (other disasters of course not included)?
$1600 up front plus $200 a month or whatever. That adds up over a five-year span: $200 a month for power and cooling is $2,400 a year, or $12,000 over five years. That's a total of $13,600, not including any effort from you or licensing or anything.
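Writing that estimate out (the $200/month is the rough "or whatever" figure from above, not a measured cost):

```python
# Five-year cost of keeping the second host, per the napkin estimate above.
ssd_upfront = 1600              # one-time SSD purchase
power_cooling_monthly = 200     # rough estimate, not a measured cost
years = 5

running_cost = power_cooling_monthly * 12 * years   # $12,000
total_cost = ssd_upfront + running_cost             # $13,600
print(f"five-year total: ${total_cost:,}")
```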
-
@Dashrender said:
@donaldlandru said:
@scottalanmiller said:
@donaldlandru said:
- Being 24/7 means I can't drop the whole thing for maintenance.
How much maintenance do you do? What is the annual downtime caused by VMware? Only VMware and hardware maintenance is assisted by having the second server.
Now there is an aha moment, and it presents a question to bring back to the business: how much downtime is acceptable due to server hardware failure vs. spending an additional $1600 to eliminate all but a dual-server failure from impacting the services provided by these virtual machines (other disasters of course not included)?
Exactly! That's why I mentioned the 4 hours of anticipated downtime over 7-8 years. If one server is expected to have only 4 hours of downtime over 7-8 years, is it worth spending $1600 plus heating/cooling/power/UPS, etc. to prevent those 4 hours?
The heating/cooling on this is probably an atypical situation: the building provides dedicated cooling but does not pass the cost through to our organization; it is included in the base lease. Even on an estimated usage basis, our lease is for 10 years and was just signed this year.
The UPS and power do come into play, but at 200 watts (for the one server in question) it is a small piece of the pie.
-
And you can still keep the second server for emergencies. It can be powered off and racked, just sitting on the shelf with VMware on an SD card. Should the main server die, swap the drives and fire it up. Downtime of ten minutes. So that is a HUGE risk mitigation right there for dirt cheap (free).
-
@donaldlandru said:
The UPS and power do come into play, but at 200 watts (for the one server in question) it is a small piece of the pie.
Size of the pie can be misleading. Absolute cost is what would matter in this instance.
-
@scottalanmiller said:
@donaldlandru said:
The UPS and power do come into play, but at 200 watts (for the one server in question) it is a small piece of the pie.
Size of the pie can be misleading. Absolute cost is what would matter in this instance.
200 W running 24/7 @ $0.12/kWh is about $210 annually.
This server is using 4% of the UPS; if I take that as a one-time cost, that is $200 worth of UPS capacity.
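Written out, the power math comes to roughly $210 a year, plus the one-time UPS share:

```python
# Annual electricity cost of a 200 W continuous draw at $0.12/kWh.
watts = 200
rate_per_kwh = 0.12

annual_kwh = watts / 1000 * 24 * 365   # 1,752 kWh per year
annual_cost = annual_kwh * rate_per_kwh
ups_share = 200                        # one-time: ~4% of the UPS cost
print(f"power: ${annual_cost:.2f}/yr, UPS share: ${ups_share} one-time")
```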
-
@donaldlandru said:
@scottalanmiller said:
@donaldlandru said:
The UPS and power do come into play, but at 200 watts (for the one server in question) it is a small piece of the pie.
Size of the pie can be misleading. Absolute cost is what would matter in this instance.
200 W running 24/7 @ $0.12/kWh is about $210 annually.
Why does 200 W seem so low?
-
@Dashrender said:
@donaldlandru said:
@scottalanmiller said:
@donaldlandru said:
The UPS and power do come into play, but at 200 watts (for the one server in question) it is a small piece of the pie.
Size of the pie can be misleading. Absolute cost is what would matter in this instance.
200 W running 24/7 @ $0.12/kWh is about $210 annually.
Why does 200 W seem so low?
That feels very low. You've got dual procs, I assume; that's normally over 200 W alone. Then there's the PSU and UPS overhead, plus the power draw of the SSDs, memory, fans, etc. It adds up. I can't imagine it coming in under 300-400 W.
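A rough component-level sum shows why 200 W feels low for a dual-socket host; the figures below are typical values assumed for illustration, not measurements of this particular server:

```python
# Ballpark draw for a loaded dual-socket server (assumed typical values).
component_watts = {
    "2x CPUs (~85 W each under load)": 170,
    "RAM (8 DIMMs x ~4 W)": 32,
    "disks/SSDs": 20,
    "fans, mainboard, NICs": 50,
}
components = sum(component_watts.values())   # 272 W at the components
at_the_wall = components / 0.90              # assume ~90% PSU efficiency
print(f"components: {components} W, at the wall: ~{at_the_wall:.0f} W")
```

Even with conservative component figures, the at-the-wall number lands in the 300 W range once PSU losses are included.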