VMware Community Homelabs
-
@FATeknollogee said in VMware Community Homelabs:
Why don't we see this from the other Type-1 groups?
One thing the VMware "community" is very good at is grass roots movement.
From @lamw on Twitter:
https://www.virtuallyghetto.com/2020/02/vmware-community-homelabs-project.html
Because there is this thing called the cloud, which is way cheaper and way more beneficial and efficient than spending thousands on hardware upfront.
Whatever it is, excluding some very specific scenarios, I'd rather learn and build it in Azure or AWS... You know, two birds, one stone.
-
@Obsolesce said in VMware Community Homelabs:
@FATeknollogee said in VMware Community Homelabs:
Why don't we see this from the other Type-1 groups?
One thing the VMware "community" is very good at is grass roots movement.
From @lamw on Twitter:
https://www.virtuallyghetto.com/2020/02/vmware-community-homelabs-project.html
Because there is this thing called the cloud, which is way cheaper and way more beneficial and efficient than spending thousands on hardware upfront.
Whatever it is, excluding some very specific scenarios, I'd rather learn and build it in Azure or AWS... You know, two birds, one stone.
The cloud is flexible but not cheap. It's only cheaper if your needs are very small or very dynamic. If you need performance or storage, cloud instances will get proportionally even more expensive.
And they know it of course. That's why they expand their services to make it harder to move the workload to another cloud provider or your own datacenter.
-
@Obsolesce said in VMware Community Homelabs:
Because there is this thing called the cloud, which is way cheaper and way more beneficial and efficient than spending thousands on hardware upfront.
That does nothing to teach platforms, though.
And not necessarily cheaper.
-
@Pete-S said in VMware Community Homelabs:
The cloud is flexible but not cheap. It's only cheaper if your needs are very small or very dynamic.
Exactly. For a lab of any scale (especially one without big uptime / HA / performance needs, which labs don't have), cloud is often the worst option (unless the cloud product itself is what you are trying to learn) because it will cost more and won't teach you the hardware and platform portions.
-
@Pete-S said in VMware Community Homelabs:
if your needs are very small or very dynamic.
Kind of like a home lab, right?
I know it all depends, which goes in all directions.
Generally, I lab things out for a little while and trash it. My costs typically never exceed a few bucks a month in Azure, and on AWS the free tier and costs are even better when buying those $300 credits for $20 or whatever it is. Mix and match. You get the most experience and bang for your buck.
So yeah, can be so much cheaper for even large needs.
Now, if you want a home lab to run "other" things... meh, then IMO it's not so much a test lab anymore. (Scott, not addressing your platform exceptions here)
-
@scottalanmiller said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
Because there is this thing called the cloud, which is way cheaper and way more beneficial and efficient than spending thousands on hardware upfront.
That does nothing to teach platforms, though.
Well, yeah, those would be the specific scenarios you didn't capture in what you quoted from me.
-
@Obsolesce said in VMware Community Homelabs:
Whatever it is, excluding some very specific scenarios, I'd rather learn and build it in Azure or AWS... You know, two birds, one stone.
I get that. I think, at least in the SMB space, it's the opposite, though. Azure and AWS are both rarely needed and easy to pick up when you do need them. But hardware and platforms you need all of the time.
If you're in the enterprise, then your way makes way more sense.
-
@Obsolesce said in VMware Community Homelabs:
Generally, I lab things out for a little while and trash it. My costs typically never exceed a few bucks a month in Azure, and on AWS the free tier and costs are even better when buying those $300 credits for $20 or whatever it is. Mix and match. You get the most experience and bang for your buck.
I've found, for me, that some of the best lab stuff is not setting up and tearing down, but setting up to keep operating. You get a whole different level of experience when you keep it running, patch it, maintain it, etc.
-
@scottalanmiller said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
Generally, I lab things out for a little while and trash it. My costs typically never exceed a few bucks a month in Azure, and on AWS the free tier and costs are even better when buying those $300 credits for $20 or whatever it is. Mix and match. You get the most experience and bang for your buck.
I've found, for me, that some of the best lab stuff is not setting up and tearing down, but setting up to keep operating. You get a whole different level of experience when you keep it running, patch it, maintain it, etc.
For my labs, I normally keep them running, especially if I don't normally use them for work.
-
So @Obsolesce is talking about cloud - AWS/Azure, but what about other VPS providers like Vultr? Compared to owning your own hardware, unless you have some fairly large workloads, these are generally pretty cheap - and this isn't even considering power/cooling, etc.
Of course, if your goal is to learn ESXi or KVM, etc., yeah, you're going to need some hardware for that.
-
Most IT people I know get their hardware for nothing where they work. So they have stuff like R710's at home.
So setting up a home lab isn't a cost thing, more a question of having the space.
-
@Pete-S said in VMware Community Homelabs:
Most IT people I know get their hardware for nothing where they work. So they have stuff like R710's at home.
So setting up a home lab isn't a cost thing, more a question of having the space.
The cost of power to run that R710 can likely make it more cost effective to run the VMs in a VPS. Plus that gets rid of the noise pollution.
-
@Dashrender said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
Most IT people I know get their hardware for nothing where they work. So they have stuff like R710's at home.
So setting up a home lab isn't a cost thing, more a question of having the space.
The cost of power to run that R710 can likely make it more cost effective to run the VMs in a VPS. Plus that gets rid of the noise pollution.
Depends on how many VMs you have. An R710 can easily handle twenty of those $5 Vultr VMs. That's $1200 per year for VMs in a VPS. No way you can rack up that amount of money in electricity on one server.
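Rough back-of-the-envelope, with assumed numbers (an R710 averaging around 200 W under light lab load and roughly $0.13/kWh - both assumptions, your figures will vary):

```python
# Back-of-the-envelope: yearly electricity for one home lab server vs.
# renting the same number of lab VMs as $5/month VPS instances.
# Every constant below is an assumption for illustration.

WATTS_AVG = 200           # assumed average draw of an R710 under light lab load
KWH_PRICE = 0.13          # assumed electricity price, USD per kWh
HOURS_PER_YEAR = 24 * 365

VPS_MONTHLY = 5           # assumed entry-level VPS price per month
VM_COUNT = 20             # lab VMs being compared

power_cost_year = WATTS_AVG / 1000 * HOURS_PER_YEAR * KWH_PRICE
vps_cost_year = VPS_MONTHLY * 12 * VM_COUNT

print(f"R710 electricity per year: ${power_cost_year:,.0f}")               # ~$228
print(f"{VM_COUNT} x ${VPS_MONTHLY} VPS per year: ${vps_cost_year:,.0f}")  # $1,200
```

Even if the box pulls twice that wattage, you're still well under the VPS bill.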
-
@Dashrender said in VMware Community Homelabs:
AWS/Azure, but what about other VPS providers like Vultr? Compared to owning your own hardware, unless you have some fairly large workloads, these are generally pretty cheap
Cost is similar to AWS or Azure. It's surprisingly not as cheap as it seems. If you are only talking about two temporary VMs, yeah, it's cheap. If you're talking about some number of long-term workloads, it gets costly quickly.
-
@Pete-S said in VMware Community Homelabs:
@Dashrender said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
Most IT people I know get their hardware for nothing where they work. So they have stuff like R710's at home.
So setting up a home lab isn't a cost thing, more a question of having the space.
The cost of power to run that R710 can likely make it more cost effective to run the VMs in a VPS. Plus that gets rid of the noise pollution.
Depends on how many VMs you have. An R710 can easily handle twenty of those $5 Vultr VMs. That's $1200 per year for VMs in a VPS.
No way you can rack up that in electricity.
Plus the needs of a lab VM are often very different from the needs of a production one. Prod needs fast disks and fast CPU, and "just enough" RAM. Labs need very little CPU and disk performance, but lots of RAM.
And just one workload like NextCloud could cost a fortune even on Vultr, but be nearly free on an R710.
We have old R510 units that could run 30+ VMs, easily. A good 50% more than @Pete-S is estimating. And adding RAM alone would allow us to up that number significantly.
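A minimal sketch of that RAM-bound math, with assumed figures (a 128 GB host and about 3 GB per lab VM - illustration only, not the specs of any particular box):

```python
# Why lab density is usually RAM-bound: a capacity estimate by memory alone.
# Host size and per-VM figures are assumptions, not specs of a real server.

host_ram_gb = 128             # assumed RAM in an older R510/R710 lab host
hypervisor_overhead_gb = 8    # assumed memory reserved for the hypervisor
per_vm_ram_gb = 3             # assumed average lab VM (many need only 1-2 GB)

usable_gb = host_ram_gb - hypervisor_overhead_gb
print(f"Roughly {usable_gb // per_vm_ram_gb} lab VMs fit by RAM alone")  # ~40
```

Double the RAM and the count roughly doubles, which is why adding memory moves that number far more than CPU ever would.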
-
@Pete-S said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@FATeknollogee said in VMware Community Homelabs:
Why don't we see this from the other Type-1 groups?
One thing the VMware "community" is very good at is grass roots movement.
From @lamw on Twitter:
https://www.virtuallyghetto.com/2020/02/vmware-community-homelabs-project.html
Because there is this thing called the cloud, which is way cheaper and way more beneficial and efficient than spending thousands on hardware upfront.
Whatever it is, excluding some very specific scenarios, I'd rather learn and build it in Azure or AWS... You know, two birds, one stone.
The cloud is flexible but not cheap. It's only cheaper if your needs are very small or very dynamic. If you need performance or storage, cloud instances will get proportionally even more expensive.
And they know it of course. That's why they expand their services to make it harder to move the workload to another cloud provider or your own datacenter.
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on site than in public cloud.
Even with CFD-type work where you'd normally have a cluster, you have to weigh the capex of the servers and the ongoing cost of the maintenance and utilities vs just spinning up 10-20 nodes when you need them. However, if you get into long-running solves, like 6 months or so, then it might be cheaper on site. It really depends on the workload.
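A hedged break-even sketch of that capex-vs-rental trade-off, where every figure is an assumption picked only to show the shape of the math:

```python
# Break-even sketch: owning a small cluster vs. renting cloud nodes on demand.
# All numbers are assumptions chosen only to illustrate the comparison.

cluster_capex = 60_000          # assumed purchase price of the cluster
cluster_opex_per_year = 12_000  # assumed power, cooling, maintenance per year
cluster_life_years = 4          # assumed depreciation period

node_count = 20                 # nodes needed per solve
node_hourly_rate = 1.50         # assumed cloud price per comparable node-hour

owning_per_year = cluster_capex / cluster_life_years + cluster_opex_per_year
renting_per_hour = node_count * node_hourly_rate

# Hours of full-cluster work per year at which renting costs the same as owning
breakeven_hours = owning_per_year / renting_per_hour
print(f"Break-even at ~{breakeven_hours:,.0f} cluster-hours per year "
      f"({breakeven_hours / (24 * 365):.0%} utilization)")
```

With those made-up numbers, owning wins once the cluster is busy more than roughly 900 hours a year; a 6-month solve blows past that easily, which is the long-running case above.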
-
@FATeknollogee said in VMware Community Homelabs:
Why don't we see this from the other Type-1 groups?
One thing the VMware "community" is very good at is grass roots movement.
From @lamw on Twitter:
https://www.virtuallyghetto.com/2020/02/vmware-community-homelabs-project.html
There's a ton of stuff out there on IRC, Reddit, Slack, Telegram, and other mediums for the other types of servers.
https://www.reddit.com/r/homelab/ I mean this is literally people just posting their home labs and specs. I'm not sure what else you want?
-
@stacksofplates said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@FATeknollogee said in VMware Community Homelabs:
Why don't we see this from the other Type-1 groups?
One thing the VMware "community" is very good at is grass roots movement.
From @lamw on Twitter:
https://www.virtuallyghetto.com/2020/02/vmware-community-homelabs-project.html
Because there is this thing called the cloud, which is way cheaper and way more beneficial and efficient than spending thousands on hardware upfront.
Whatever it is, excluding some very specific scenarios, I'd rather learn and build it in Azure or AWS... You know, two birds, one stone.
The cloud is flexible but not cheap. It's only cheaper if your needs are very small or very dynamic. If you need performance or storage, cloud instances will get proportionally even more expensive.
And they know it of course. That's why they expand their services to make it harder to move the workload to another cloud provider or your own datacenter.
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on site than in public cloud.
Even with CFD-type work where you'd normally have a cluster, you have to weigh the capex of the servers and the ongoing cost of the maintenance and utilities vs just spinning up 10-20 nodes when you need them. However, if you get into long-running solves, like 6 months or so, then it might be cheaper on site. It really depends on the workload.
What home lab is going to be serving 5 billion requests per day? You're talking production, not home lab.
-
@scottalanmiller said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
Generally, I lab things out for a little while and trash it. My costs typically never exceed a few bucks a month in Azure, and on AWS the free tier and costs are even better when buying those $300 credits for $20 or whatever it is. Mix and match. You get the most experience and bang for your buck.
I've found, for me, that some of the best lab stuff is not setting up and tearing down, but setting up to keep operating. You get a whole different level of experience when you keep it running, patch it, maintain it, etc.
I do that when it's something I'm using past the testing/labbing experience. But then at that point it's not so much a test lab anymore.
It's hard to keep something going that you never really use... you typically forget about it because patching can be automatic, and when it isn't, maintaining something you don't use much is kind of... I don't know, wasteful IMO, because you could be putting those resources toward something you will actively maintain and use while learning. (Given you are talking about a platform test lab, that hardware is dedicated to the purpose.) Perhaps it makes sense if it's a platform, like wanting to run OpenStack or something to get experience, since many large companies use that (not sure about SMB).
I do get the other side too. There are many things in SMB you can better lab or experience on your own hardware, because that's where most SMBs are coming from, and many either lack the need to move away from it, or lack the competence and culture to move to cloud.
Either way, it depends on where you want to go with your career and what environments you want to work with.