    dyasny (@dyasny)

    Reputation: 77 · Profile views: 331 · Topics: 1 · Posts: 387 · Best: 59 · Controversial: 1 · Followers: 0 · Following: 0 · Groups: 0
    Location: Canada


    Best posts made by dyasny

    • RE: Win10 vs Fedora 28: Boot speed

      @scottalanmiller this was actually always the case. MS ran into boot time problems a while ago, and their solution was to get into the UI as quickly as possible, while continuing to load everything in the background. This is why Windows is damn slow after you boot and slowly picks up speed as everything REALLY loads.

      The proper timing would be from getting out of POST and to being able to actually use the desktop, not to seeing the desktop.

      I remember several benchmarks being published some 10-15 years ago, showing several additional minutes of severe slowness even when your basic desktop seemed to be loaded.

      TLDR: this is an illusion or a dirty hack, depending on your preference in terminology 🙂

      posted in IT Discussion
    • RE: What is Virtualization?

      There is no "vSphere free offering", ESXi standalone is what you can run for free, with limitations.

      Also, QEMU/KVM is not necessarily one inseparable thing - KVM can be used on its own, and so can QEMU.

      And one last thing - in any article involving virtualization, it is important to explain the difference between a hypervisor and a full virtualization management product, as well as the many layers in between. vmkernel is not ESXi and is not vSphere, but people lump everything under VMware and then do silly comparisons.

      A pure hypervisor is nothing more than a driver for the AMD-V/Intel VT-x CPU extensions, and nothing else. To turn that into a usable VM you need an emulator for the rest of the hardware a VM has (which is where stuff like QEMU comes in), with various levels of optimized hardware emulation and physical hardware access (paravirtualized hardware). That is already two layers of software just to be able to run a VM. And we left out the fact that nothing can REALLY run on bare metal; metal needs drivers, so the "pure" hypervisor is really just one driver in a set of drivers, schedulers and supporting software, aka the kernel. Xen is one such kernel with the hypervisor included. Linux with KVM is another.

      On top of that you have the base management layer, so that you don't need to type in a 15-line-long command just to get a VM going - this is where stuff like libvirt and ESXi comes in. And then you get the datacenter-level management layer (vSphere, oVirt) or the IaaS management layer (OpenStack Nova, EC2, etc.).
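      For illustration, here is a minimal sketch of that base management layer using the libvirt Python bindings. It assumes libvirtd and QEMU/KVM are installed and libvirt-python is available; the VM name, memory size and disk path in the XML are placeholders.

          # Minimal sketch: the management layer (libvirt) drives the emulator (QEMU),
          # which in turn uses the kernel's hypervisor module (KVM). Assumes libvirtd
          # is running and libvirt-python is installed; the domain XML below is a
          # made-up, stripped-down example with a placeholder disk path.
          import libvirt

          DOMAIN_XML = """
          <domain type='kvm'>
            <name>demo-vm</name>
            <memory unit='MiB'>1024</memory>
            <vcpu>1</vcpu>
            <os><type arch='x86_64'>hvm</type></os>
            <devices>
              <disk type='file' device='disk'>
                <driver name='qemu' type='qcow2'/>
                <source file='/var/lib/libvirt/images/demo.qcow2'/>
                <target dev='vda' bus='virtio'/>
              </disk>
            </devices>
          </domain>
          """

          conn = libvirt.open("qemu:///system")  # connect to the local libvirt daemon
          dom = conn.createXML(DOMAIN_XML, 0)    # libvirt assembles the long QEMU command line for you
          print("started:", dom.name())
          conn.close()

      Started by hand, the same VM would be one very long qemu-system-x86_64 invocation; each layer above the KVM module adds management convenience, not virtualization itself.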

      posted in Self Promotion
    • RE: If you are new drop in say hello and introduce yourself please!

      @nerdydad hard to tell. I was born in Georgia (the real one, in the USSR, not the state 🙂 ), but lived most of my life in Israel, with a few years in the UK and Ireland.

      posted in Water Closet
    • RE: Testing oVirt...

      @DustinB3403 said in Testing oVirt...:

      No because I was so fed up with the instructions I close abandoned the test.

      Well, oVirt is an opensource and free project. You want it to be better, get involved, even by reporting bugs. I really don't get people who expect to just have everything perfectly served up and for free.

      Really, every time someone tells me this or that OSS project is bad, I ask for the links to the opened issues.

      posted in IT Discussion
    • RE: What Are You Doing Right Now

      @EddieJennings it only took 'em 20 years 🙂

      posted in Water Closet
    • RE: Testing oVirt...

      @scottalanmiller said in Testing oVirt...:

      Yeah, DevOps in finance is old hat. They've been doing that for quite a while.

      devops, config management, containers, kubernetes, a bunch of various big-data tech. When I see that mentioned, I can easily imagine what the structure of their currently developed software is - microservices all the way, no legacy involved.

      And if anyone but us two is reading this - DevOps isn't new, it's as ancient as companies like Ford and Toyota, ask any business major (think of that over your next smoothie, young hipsters)

      posted in IT Discussion
    • RE: If you are new drop in say hello and introduce yourself please!

      @scottalanmiller yeah, my visits were to Detroit (ugh, just ugh.) and Boston, which was actually pretty good, including pastries. I'll be in San Francisco next month, will see what I find there. Here in Canada, coffee and pastries aren't anything to write home about, but you need to know the right places - chains are notably bad, but some mom-and-pop shops can turn out to be very decent, and I'm pretty sure it's the same in the US (at least that was the case in Boston)

      posted in Water Closet
    • RE: Linux Storage Benchmark (IOPS)

      https://github.com/vladzcloudius/diskplorer

      This is a cool wrapper for FIO, written by a colleague of mine. FIO provides you with the maximums, while this tool will allow you to measure the optimal settings and actual disk capabilities.
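      For a rough idea of what such a wrapper does, here is a minimal sketch (not diskplorer itself) that drives fio across increasing queue depths and prints IOPS and mean latency. It assumes fio is installed; /dev/nvme0n1 is a placeholder for a test device, and the JSON field names can differ slightly between fio versions.

          # Rough sketch of a queue-depth sweep over fio (not diskplorer itself).
          # Assumes fio is installed; the device path is a placeholder.
          import json
          import subprocess

          DEVICE = "/dev/nvme0n1"  # placeholder -- point at a test device

          for iodepth in (1, 2, 4, 8, 16, 32, 64):
              result = subprocess.run(
                  ["fio", "--name=probe", f"--filename={DEVICE}", "--readonly",
                   "--rw=randread", "--bs=4k", "--direct=1", "--ioengine=libaio",
                   f"--iodepth={iodepth}", "--runtime=10", "--time_based",
                   "--output-format=json"],
                  capture_output=True, text=True, check=True,
              )
              read = json.loads(result.stdout)["jobs"][0]["read"]
              lat_us = read["clat_ns"]["mean"] / 1000.0
              print(f"iodepth={iodepth:3d}  iops={read['iops']:10.0f}  mean lat={lat_us:8.1f} us")

      Plotting latency against IOPS over the sweep shows where the disk stops scaling and starts queueing, which is the "optimal settings" point rather than the raw maximum.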

      posted in IT Discussion
    • RE: What Are You Drinking

      Since it's morning right now, I'm drinking coffee. Alternating between Tim Horton's dark roast (I'm in Canada, eh!) and Israeli Turkish black coffee (Turkish-grind arabica I buy in Israel when I visit there for work).

      When I do feel like a stiff drink, I usually go for Tullamore Dew Irish whiskey or Père Magloire calvados.

      posted in Water Closet
    • RE: Virt-Manager on multiple pc's

      @FATeknollogee if you have an ovirt-engine somewhere central that can reach all the other locations, you can create a datacentre per location and place standalone hosts in there, using local storage. You still get a single pane of glass to manage it all from one address, a centralized VM configuration store, and the option to scale out to additional sites or add hosts to a specific DC. I've run a RHV setup with ~300 hosts spread out across the world like this, and it was much easier than dealing with entirely standalone machines.
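      As a rough illustration of that layout, here is a sketch using the oVirt Python SDK (ovirt-engine-sdk-python); the engine URL, credentials, names, addresses and CPU type are all placeholders and may need adjusting for your engine version.

          # Sketch: one local-storage data centre per remote site, with a standalone
          # host in it, all driven from a central ovirt-engine. Uses ovirtsdk4;
          # URL, credentials, names, addresses and CPU type are placeholders.
          import ovirtsdk4 as sdk
          import ovirtsdk4.types as types

          conn = sdk.Connection(
              url="https://engine.example.com/ovirt-engine/api",
              username="admin@internal",
              password="changeme",
              insecure=True,  # point at the engine CA certificate in a real setup
          )
          system = conn.system_service()

          # A "local" data centre uses the host's local storage instead of shared storage.
          dc = system.data_centers_service().add(
              types.DataCenter(name="site-montreal", local=True)
          )
          cluster = system.clusters_service().add(
              types.Cluster(
                  name="site-montreal-cluster",
                  data_center=types.DataCenter(id=dc.id),
                  cpu=types.Cpu(type="Intel Skylake Client Family"),  # placeholder CPU type
              )
          )
          system.hosts_service().add(
              types.Host(
                  name="site-montreal-host01",
                  address="10.10.1.10",
                  root_password="changeme",
                  cluster=types.Cluster(id=cluster.id),
              )
          )
          conn.close()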

      posted in IT Discussion

    Latest posts made by dyasny

    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      Right, but we aren't talking about not sharing it. So, again, you are talking about something different. I'm not sure where you are getting lost, but you are talking about totally different things than everyone else. This has nothing to do with the discussion here.

      Good, then we are at least partially on the same page

      Again, didn't scale for you, but your failures do not extend to everyone else. I'm not sure why you feel it can't scale, but it does successfully for others. The common factor here is "your attempts have failed". You have to stop looking at that as a guide to what "can't be done."

      OK, what is the largest cluster size you can run with this starwind solution reliably? Their best practice doc is very careful about mentioning scale being a problem, although they call it "inconveniences".

      Again... you are not understanding that because something can be bad doesn't mean it is always bad. These are basic logical constructs. You are missing the basic logic that absolutely no amount of observation of failures makes other people's observations of success impossible.

      I am talking about a very basic thing - storage tasks require resources. Those resources need to come from somewhere. If you don't use dedicated boxes, you have to take resources away from your VMs. It is extremely simple.

      You are assuming automated rebalance.

      Automated or manually triggered - it's a costly operation. Even if you don't run a sync cycle but do a dumb data stream from a quiesced source, you will be pushing lots of data over several layers of hardware and protocols, and that does not come for free. When you replace a disk in a RAID array, you are going to suffer from performance degradation until the array is back in sync, because the hardware or software RAID system will be working hard to push all the missing data to the new disk in the best case, and will be generating a ton of parity and hashes on top of that in the worst. This does not come cheap.
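      As a back-of-the-envelope illustration of that cost (the drive size and rebuild rate below are made-up example numbers):

          # Back-of-the-envelope resync cost; drive size and rebuild rate are
          # made-up example numbers, ignoring parity/hash computation entirely.
          disk_tb = 8                # replaced drive, TB
          rebuild_mb_s = 200         # effective sustained rebuild rate, MB/s (assumption)
          seconds = disk_tb * 10**12 / (rebuild_mb_s * 10**6)
          print(f"~{seconds / 3600:.1f} hours of sustained rebuild traffic")  # ~11.1 hours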

      So you don't understand the pool risks and think that node risks alone exist and that the system as a whole carries no risks? This would explain a lot of the misconceptions around HC. The cluster itself carries risks, it's a single pool of software. Every platform vendor will tell you the same.

      I understand the risks, and losing just a storage node or just a hypervisor node is much less risk than losing both at once. I was hoping you would understand that, but I guess I shouldn't hope.

      Actually, that breaks the laws of physics. So obviously not true. SAN can't match speed or reliability of non-SAN. That's pure physics. You can't break the laws of math or physics just by saying so.

      Really? FC at lightspeed from a couple of yards away is significantly slower than local disk traffic? Are you sure we have the same physics in mind?

      HC has EVERY possible advantage of SAN by definition, it just has to, there is no way around it, but adds the advantages of reduction in risk points and adds the option of storage locality. Basic logic proves that HC has to be superior. You are constantly arguing demonstrably impossible "facts" as the basis for your conclusions. But everyone knows that that's impossible.

      You keep talking about your assumptions as if they are the one and only possible truth. They are not. HC cannot have the advantages of a SAN because the SAN is more than just a big JBOD (and even if it were, it has the advantage of being a much larger JBOD than you could ever hope to build on a single commodity server). A SAN has tons of added functionality which it deals with without loading the hosts. If you start implementing all of that in HC, you end up spending even more local host resources on non-workload needs. So either your "basic logic" is flawed, or you simply aren't able to accept that there might be points of view besides yours.

      basically we are having a time warp discussion back to 2007 when almost everyone truly believed that SANs actually were magic and did things that could not be explained or done without the label "SAN" involved and that physics or logic didn't apply.

      I'm not the one talking about "magic sauce" here, remember? I am actually talking about implementation specifics and how they are not simple (because I know these details and technologies well enough to discuss them and see no magic in them)

      I get it, storage can be confusing. But arguing against 15+ years of information that is well established and just acting like it hasn't happened and just reiterating the myths that have been solidly debunked and ignoring that this is all well covered ground just makes it seem crazy.

      Have you noticed how you never have any real arguments? Instead what I see is "this is known for N years!" and "this is the one and only logic!". I get it, being faced with solid technical arguments can be confusing, but please try to bear with me here instead of just defaulting to the usual non-arguments. Can you explain to me how keeping large storage volumes synchronized over a network has no overhead and consumes no host resources, please? It's a simple question, and I will not accept "magic" as an answer. Saying that pushing large amounts of data across a network comes at no cost is pretty much defying the laws of physics, so I'd like to know how exactly you expect to circumvent them.

      posted in Starwind
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      Maybe you are running into these problems due to bad products, planning or design, but you are hoisting problems you've had onto everyone else where we aren't experiencing these problems.

      You are "projecting". I'm not saying you or people you know having implemented HC and had it not work well. But you are confusing a bad implementation or planning with the architecture being bad. The two are different things.

      Or maybe I'm just speaking from experience, and there's plenty of it. Local storage that is not shared is great, but it doesn't scale and pretty much kills all the nice features you can have in a virtualized DC - live migration, HA, all those things you don't care about in SMBs, I suppose. Getting that storage replicated in a scalable fashion is hard; simple CBT pushed over the network (what the folks at Linbit basically do) does not scale. And hard tasks require resources, and those resources have to come from somewhere.

      Ceph and Gluster are both known to be bad for that, that should have been known going into the project. That someone didn't head you off at the pass shows that the mistakes and oversights were happening early on. We could easily have warned you that that wasn't meant to have good local performance.

      Mixing any distributed storage solution with any other workload is known to be bad, this is exactly what I'm saying. I've come into those projects when they were already implemented and got things working by breaking up those overloaded hosts into hardware that was doing one job and doing it well on either side.

      Okay... so the big question is... since this is not part of HC... why? Stop doing that. You can never have a rational, useful discussion about HC until you talk about HC and not something else.

      But I am, at least at scale. DRBD and any similar system does not scale. When things are small (SMB level again) this is peanuts, we can do anything because our tasks are smaller than the hardware we can get. What happens at scale though?

      No one anywhere recommends 200 nodes in a single cluster. If you think this is a good idea, SAN, SDS, HC, or otherwise, we are on different pages. That's a scale that literally no one, not MS, not StarWind, not VMware, not RedHat recommends as a single failure domain. DO they support it? Yup. Do they think you are crazy? Yup.

      200 nodes is small for the scale I typically deal with. Red Hat has solutions that can deal with this kind of scale easily. I know of a few other companies that do. MS, VMW and probably StarWind do not, because of the nature of their clustering implementation, but that's basically all about how you manage locking.

      Even in the enterprise, which you claim to know, they often use workloads scopes of this size for performance and safety. The larger the pool, the bigger the problems.

      Not really. In a large pool, a dead node simply gets easily replaced. The effect is very small.

      If you want to get into giant pools you have to pick your battles.

      I usually am in those numbers, but ok

      Let's talk reasonable size, like 10-80 nodes. If you need screaming performance that no SAN can match, then you are looking at StarWind network RAID which does that.

      OK, so we have a network RAID, and a bunch of blocks get streamed to other nodes when writes occur on one. When all there is is pushing blocks across, things are simple. What happens when a node dies and I suddenly have to rebalance the data distribution? How is consistency kept? How does the system decide which blocks get streamed where? Even in a 10-node cluster it would be plain stupid to keep all the data replicated everywhere; 10x the data on local disks would be too expensive.
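      A quick capacity calculation shows why (made-up numbers):

          # Usable capacity on a 10-node cluster with 10 TB of raw disk per node,
          # for different replication factors. Made-up numbers, ignoring all overheads.
          nodes, raw_per_node_tb = 10, 10
          raw_total = nodes * raw_per_node_tb
          for replicas in (2, 3, nodes):   # replicas == nodes means "copy everything everywhere"
              print(f"replication factor {replicas:2d}: ~{raw_total / replicas:.0f} TB usable")
          # replication factor  2: ~50 TB usable
          # replication factor  3: ~33 TB usable
          # replication factor 10: ~10 TB usable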

      If you just want cheap pooled storage, you look at CEPH (and there are accelerators for that to make it fast, if you need to).

      Here we have a distributed system which needs at least a core per RBD and 32GB of RAM just to get started properly. In the SMB space, I doubt you see many monstrous hypervisors with hundreds of cores, so what is left to run your actual VMs on?

      At truly giant scale, the only real benefit to totally external storage, is when speed and reliability are so unimportant that you are willing to sacrifice them to a huge degree to save a few dollars. But with HC's low cost today, even that is approaching the impossible.

      Having a good storage fabric alone can give you excellent speed and very low latencies, and as for reliability - you can build whatever you want on the SAN side, depending on your requirements. The only good thing about HC is local storage access, and it isn't really that far ahead of any decent fabric anyway, if at all.

      Because you don't have all those things. Large chunks over the network takes like no resources. No idea where you think the overhead comes from, but for most of us, things like copying data is not a high overhead activity. It's a dedicated network in most cases, with offload engines on the NICs, and things like tiering and such take extremely little overhead (if done well.) These just aren't CPU or RAM intensive activities.

      That is simply not true. Pushing large amounts of data over the network is not cheap, and that is just the simple streaming case. When you start running synchronizations and tiering, things get harder. And when you have to rebalance (which Ceph does often) you need even more resources. Yes, you can dedicate NICs to just that (and those NICs will not be there to provide more bandwidth to the workload traffic), but in order to push large amounts of data into the NICs you also need CPU cycles and RAM. It's CS 101, there are no free rides.

      Those aren't HA, so not applicable to the discussion. HCI is assumed to be replicating to other nodes, something those providers don't provide. They are stand alone compute nodes. Very different animal.

      My point exactly. If HC was so great, why wouldn't they be using it?

      posted in Starwind
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      @dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      HCI doesn't hold up in the real world. See, I can keep doing this too

      Except it does, tons of places are using it, and show me ANY example where it didn't shine... any. While somewhere one must exist, dollars to donuts you can't find one.

      I've worked with several Openstack and K8s clusters where the storage was local to the hypervisors, served from Ceph/Gluster/AFS/SheepDog. Horrible experience each time.

      posted in Starwind
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      This just doesn't hold up in the real world.

      HCI doesn't hold up in the real world. See, I can keep doing this too 🙂

      Most companies and workloads are not trying to do things where this makes sense at all.

      Maybe in the SMB space there is less planning and more "let's just deliver something and let someone else support it", but that doesn't work in the enterprise.

      All this SDS like this is doing is making new SAN pools, right back to the complexity, costs, and risks that we had before.

      Exactly. SDS or SANs - you can pick whatever you want and suits you best. But if you are going to be running a replicated distributed storage service on hardware that is already quite busy running VMs, you'll end up in trouble as soon as you go anywhere near capacity.

      If your design creates all this overhead, whether you address it in one box or many, chances are that design itself is the flaw. Not always, but generally. But whether you have RAID or RAIN, the overhead to do this stuff just isn't there when implemented well.

      How exactly can there be no overhead when you are synchronizing large chunks of blocks over the network? Especially with high replication factors? Add encryption to that, add all the extra logic for local tiering, and it's no wonder the minimum sizing for SDS is so high.

      Now sure, if we only look at totally garbage solutions, we can make any design seem like a problem. but you have to separate good design from good products. Bad products exist even in well designed solutions.

      Right, but because we measure it and there isn't means that there isn't. Just because you claim a load that no one has, which is all you are doing, doesn't make it real. You have created problems that no one faces and are acting like we are all impacted by them.

      More sophistry. Can you be more specific please, instead?

      You are literally claiming that contrary to all evidence, common sense, and industry knowledge, that software RAID is not just a huge load, but so large that we now need not only hardware RAID cards to do it, but entire hardware RAID servers!

      I'm not talking about simple software RAID, I'm talking about keeping a distributed storage system in sync, and then on top of that keep constantly rebalancing to satisfy the local tiering requirements. And while doing all that juggling, also ensure the system remains resilient to node failure. This is a lot of work, unless like @Dashrender says there is magic at play. I don't believe in magic.

      So to skip the rest of your quotes: in general, what you are saying is that a system which is essentially something like simple KVM with DRBD is the perfect solution. I am saying sure, for two nodes. How about 200?

      Do you think AWS/GCP/Azure are running HCI solutions for example?

      posted in Starwind
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      This is the myth. In most HCI it adds no appreciable load. As long as you believe that things like storage and networking are going to create a lot of load, yes, this is going to seem like a point of risk, although even then things like RAID cards fixed that in the era where that was true.

      But since it doesn't add load, and actually adds less load than splitting it out, this logic is backwards.

      I already answered that above. Just because you say it doesn't add any load, doesn't mean it doesn't.

      SDS isn't part of HC. This might be a root of your confusion. This is why some HC, like the one for whom the thread is about doesn't do this and just does RAID. Overhead is ridiculously low.

      How exactly do they deal with the HA side of things? With RAID, and a host going down, all the VMs using that host go down, RAID or not.

      posted in Starwind
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @DustinB3403 said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      That's a joke?

      You should have seen my British friend tell it 🙂

      posted in Starwind
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @Dashrender this is not about the basket/eggs thing. Consolidation is all well and good, but HCI adds a massive load on each host, and the resources for that load have to come from somewhere. SDS is not easy and it does demand CPU, RAM and network resources. SDN is just as bad. Lump it all onto the same host and you've got nowhere left to run VMs adequately - that's my point.

      There's a very old joke - a man is pulled over by a policeman for driving with one hand and hugging his girlfriend with the other. The policeman says "Sir, you are doing two things and both of them badly". This is exactly why HCI is wrong.

      Yes, if all you have is a single machine, you'll be lumping all your workloads on it. But if you are building a real datacenter, you'd better do the networking stack properly, using the right hardware: even if it's going to be some open source SDN like Calico and not a suitcase of money sent to Cisco, you should dedicate correctly spec'd hardware to it. The same goes for the storage stack - you want to run on commodity hardware using open source SDS software, be my guest, but dedicate those hosts to SDS and spec them out to fit the task. And the same goes for the workload-bearing machines, whether they are KVM hypervisors, a Docker swarm or an overpriced VMware cluster - that's immaterial.

      If you do the HCI thing, you cannot spec the hardware to the task. You end up running all of those services and workloads on the same set of hosts, and all those tasks share that hardware, either competing for resources or cutting available, unutilized resources away from where they could be needed.

      Yes, the nicer HCI systems can try to keep the data they serve balanced so that it is at least partially local to the workload, but in a properly built virtual DC this is not a problem. InfiniBand, FC and even FCoE make latency moot, and throughput can be much higher than over local SAS or even NVMe channels.

      posted in Starwind
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      HC was always a thing, though, that's the thing. That it got buzz is different. We've had HC all along, just people didn't call it anything.

      OK, just so we're on the same page here, are you saying we should simply install a bunch of localhosts and be done, for all the types of workloads out there?

      No, it's separating it that is the bad idea.

      No, it's mixing it that is the bad idea. See, I can also do this 🙂

      Separate means less performance and more points of failure.

      It would seem so, but in fact you already have to run those services (storage, networking, control plane) anyway, and they all consume resources - a lot of them. Then you dump the actual workload on the same hosts as well, so either you simply have much less to assign to the workload and the services, or they have to compete for those resources. Either is bad, and when one host fails, EVERYTHING on it fails. So you have to deal not just with a storage node outage, a controller outage or a hypervisor outage, but with all of them at the same time. How exactly is that better for performance and MTBF?

      It's just like hardware and software RAID... when tech is new you need unique hardware to offload it, over time, that goes away. This has happened, at this point, with the whole stack. And did long ago, there was just so much money in gouging people with SANs that every vendor clung to that as long as they could.

      I'm not saying SANs are the answer to everything, I'm saying loading all the infrastructure services plus the actual workload onto one host is insane. If you have a cluster of hosts providing FT SDN, another cluster providing FT SDS, and a cluster of hypervisors using those services to run workloads on the networking and storage provided, I'm all for it. That system can easily deal with an outage of any physical component without triggering chain reactions across the stack. But this is just software-defined infrastructure, not HCI.

      But putting those workloads outside of the server make it slower, costlier, and riskier. There's really no benefits.

      Again, I don't care much for appliance-like solutions. A SAN or a Ceph cluster, I can use either, hook it up to my hypervisors and use the provided block devices. But if you want me to run the (just for example here) Ceph RBD as well as the VMs and the SDN controller service on the same host - I will not take responsibility for such a setup.

      posted in Starwind
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @DustinB3403 said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      HCI isn't just shared storage. It's shared everything.

      Great, so we are also running the SDN controllers on all the hosts. Even an OVN controller is a huge resource hog. A Neutron controller in OpenStack is even worse. And then the big boys come in - have you tried to build an Arista setup?

      I am not talking theory here, I'm talking implementation, as someone who built datacenters and both public and private clouds at scale. Running the entire stack on each host, along with the actual workload is a horrible idea.

      What do you mean, mixing everything? The magic sauce is what makes tools like Starwinds vSAN an amazing tool.

      Sounds like marketing bs to me, sorry 🙂 Magic sauce? Really?

      It works with the hypervisor to manage all of your hosts from a single interface. Should any host go down, those resources are offline, but the VM's that may have been on there are moved to the remaining members of the HCI environment (of multiple physical hosts).

      Sounds like any decently built virtualized DC solution, from proxmox to ovirt to vcenter and xenserver. How is it "magic" exactly?

      The easiest way I can think to explain your rationale @dyasny is to pretend I'm building a server, but because I don't trust the RAID controller that I can purchase for my MB, I purchase a bunch of external disks, plug those into another MB and then attach that storage back to my server via iSCSI over the network.

      This is a ridiculous example. What you describe is: instead of having a server with a disk controller, disks, GPU and NICs, I'd install a single card that is a NIC, a GPU and can store data - so that instead of the PCI bus accessing each controller separately with better bandwidth, all the I/O and the different workloads are driven through a single PCI channel. And then use "magic" to install several of those hybrid monster cards in the hope of making them work better.

      How is this safer, more reliable and cheaper than just adding all of the physical resources into a single server? Then combining 2, 3 or however many of the identical servers together with some magic sauce and managing it from a single interface?

      There you go with the magic sauce koolaid again.

      posted in Starwind
    • RE: StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)

      @DustinB3403 said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):

      Hell your desktop or laptop is hyperconverged.

      Everything is self contained.

      Yup, this is all just marketing hype. In the real world, a standalone host is just a standalone host; it was before HCI was a thing and will be after.
      Also note, I always use the term HCI, not just HC, and I always mean exactly what it is being sold as - a way of building virtualized infrastructure so that the shared storage in use is provided by the same machines that host the workloads, off their internal drives. I could get into the networking aspect of things, but that would only make my point stronger - mixing everything on a single host is a bad idea.

      posted in Starwind