@scottalanmiller your emails from the forum started showing up in my inbox again. That, at least, is why I returned
Posts made by Aconboy
-
RE: What Are You Doing Right Now
-
RE: What Are You Doing Right Now
Working on fixing a broken installer for the 6.x kernel on 12th gen NUCs
-
RE: Scale Computing VS Proxmox
@scottalanmiller I gotta show up from time to time or people will think I am a myth
-
RE: Scale Computing VS Proxmox
@stacksofplates Yes, we did in 2019. There are a couple of ways that can be done. When you snapshot a VM, any disk in that snapshot can be mounted to any other VM, provided the logged-in user has a permissions level allowing it. That is actually part of the mechanism several backup vendors (Acronis, Storware, etc.) use to do agentless backups of Scale Computing VMs. If you haven't looked since 2018, it is worth another look, as a great many things have been added since then.
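For anyone curious what that flow looks like in practice, here is a minimal sketch of the snapshot-then-mount pattern driven from Python. The cluster URL, endpoint paths, payload fields, and UUIDs are all hypothetical placeholders, not the documented HyperCore API; it just illustrates the shape of an agentless backup.
```python
# Hypothetical sketch of an agentless-backup flow: snapshot a source VM,
# then attach one of the snapshot's disks to a backup "proxy" VM.
# Endpoint paths and payload fields below are assumptions for illustration --
# consult the real HyperCore REST API docs for the actual schema.
import requests

BASE = "https://hc3-cluster.example.com/rest/v1"  # hypothetical cluster URL
AUTH = ("backup_user", "secret")                  # needs snapshot/attach rights

SOURCE_VM = "11111111-1111-1111-1111-111111111111"  # placeholder UUIDs
PROXY_VM = "22222222-2222-2222-2222-222222222222"

session = requests.Session()
session.auth = AUTH
session.verify = False  # lab clusters often run self-signed certs

# 1. Snapshot the source VM (a crash-consistent point in time).
snap = session.post(
    f"{BASE}/VirDomainSnapshot",
    json={"domainUUID": SOURCE_VM, "label": "nightly-backup"},
).json()

# 2. Attach the snapshot's disk to the proxy VM read-only; the proxy can
#    then stream the blocks out without any agent inside the source guest.
session.post(
    f"{BASE}/VirDomainBlockDevice",
    json={
        "virDomainUUID": PROXY_VM,
        "sourceSnapshotUUID": snap["uuid"],
        "readOnly": True,
    },
)
```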
-
RE: What Are You Doing Right Now
Remembering that mangolassi.it is a thing and that I should likely check in, as it has been a minute since I dropped by
-
RE: Cost Study: 3 Node Scale vs. 3 Node VMware VSAN
@scottalanmiller - adding SimpliVity and Nutanix to the mix might be interesting, but you would additionally have to factor in the CPU/RAM resources consumed by their VSAs (the SAN implemented as a virtual machine, with protocol overhead). That would require higher-end processors and larger RAM footprints to run the same number of VMs at the same performance level as either Scale or VMware vSAN. Seems to me the density implications would have a direct impact on the bottom-line price of any VSA-based offering when comparing to the platforms that don't need one.
-
RE: Replacing the Dead IPOD, SAN Bit the Dust
@scottalanmiller said in Replacing the Dead IPOD, SAN Bit the Dust:
Looking at this thread, I would say that a Scale 1150 cluster would fit the bill nicely, and even with a single node for second-site DR, he would still likely be under $35k all-in
-
RE: Replacing the Dead IPOD, SAN Bit the Dust
@JaredBusch Not that much more expensive and far more reliable for the job at hand
-
RE: The VSA is the Ugly Result of Legacy Vendor Lock-Out
@thwr sure thing
The 1150 ships with a baseline of 8 Broadwell cores per node (E5-2620 v4), upgradable to the E5-2640 v4 with 10 cores per node. It ships with 64 GB RAM per node, upgradable to 256 GB; a 480 GB, 960 GB, or 1.92 TB eMLC SSD per node; and three 1, 2, or 4 TB NL-SAS drives per node. Each node has quad gigabit or quad 10GbE NICs. All features and functionality are included (HA, DR, multi-site replication, up to 5,982 snapshots per VM, auto-tiering with HEAT staging and destaging, and automatic prioritization of workload IO, to name a few). All 1150 nodes can be joined with all other Scale node families and generations, both forward and back, so upgrade paths are not artificially limited.
-
RE: The VSA is the Ugly Result of Legacy Vendor Lock-Out
@thwr @breffni-potter @travisdh1 - we have just released our 1150 platform, which brings all features and functionality, with both flash and spinning disk, in at a price point under $30k USD for a complete 3-node cluster.
-
RE: The VSA is the Ugly Result of Legacy Vendor Lock-Out
@Breffni-Potter It is absolutely genuine. For example, SimpliVity requires that a minimum of 48 GB RAM per node be reserved for their VSA on entry-level nodes, with the higher-end nodes taking 100 GB RAM per node for the VSA. On some of their older gear, the number was around 150 GB per node. With Nutanix, the number with all features turned off starts at 16 GB per node, but jumps to 32 GB or more per node as features are turned on. Same story with all the other VSA-based vendors. Basically, a VSA is not free: it is a virtualized SAN, and they run an instance of it on every node in their architectures, with the associated resource consumption. The VSA didn't eliminate the SAN; it virtualized it, then replicated it over and over. And that is just the RAM side of things. Then there is the CPU core usage associated with each VSA - cores and RAM going to run the VSAs instead of the actual workloads. In HC3, we not only eliminated the SAN, we did so without using a VSA at all, so those "reserved" resources go directly into actually running VMs, all while streamlining the IO path so that there is a dramatic reduction in the number of hops it takes to do something as simple as change a period to a comma.
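To make the cost concrete, here is a quick back-of-the-envelope calculation using the per-node figures above; the 256 GB node size and the 3-node cluster are assumptions purely for illustration.
```python
# Back-of-the-envelope VSA overhead using the per-node figures above.
# The 256 GB node size and 3-node cluster are assumptions for illustration.
NODES = 3
RAM_PER_NODE_GB = 256

vsa_reservations_gb = {
    "SimpliVity (entry)": 48,
    "SimpliVity (high end)": 100,
    "Nutanix (features on)": 32,
    "No VSA (HC3)": 0,
}

for vendor, vsa_gb in vsa_reservations_gb.items():
    reserved = vsa_gb * NODES                    # RAM the VSAs consume cluster-wide
    usable = (RAM_PER_NODE_GB - vsa_gb) * NODES  # RAM left for real workloads
    print(f"{vendor:22s} reserves {reserved:3d} GB, leaving {usable} GB for VMs")
```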
-
RE: New Scale HC3 Tiered Cluster Up in the Lab
@hobbit666 take a look at our new 1150 cluster - same feature set at a price point of $24,500 starting Wednesday...
-
RE: Scale Radically Changes Price Performance with Fully Automated Flash Tiering
@Breffni-Potter One of our founders, Jason Collier, is on the floor at SpiceWorld London as we speak
-
RE: New Scale HC3 Tiered Cluster Up in the Lab
@hobbit666 - Sit tight for a product announcement in about a month - if you liked this, you will really like that
-
RE: New Scale HC3 Tiered Cluster Up in the Lab
@hobbit666 the new 3-node 2150x starter cluster lists at 47k sterling - I'd be happy to chat it through with you if you like and show you one via WebEx
-
RE: Topics regarding Inverted Pyramids Of Doom
@coliver Had that happen in my lab - SPa failover to SPb... not so much...
-
RE: First Look at the Scale
@DustinB3403 Yeah, I confess I have a Minecraft server running as a VM on one of my clusters
-
RE: What Switches do you use?
@FiyaFly hopefully you aren't using the Juniper firewalls... http://thehackernews.com/2015/12/hacking-juniper-firewall-security.html?m=1
-
RE: What Switches do you use?
I have been putting the Mellanox SX1012 through its paces for the last couple of months and am impressed, to say the least. It is a 1U half-width switch with twelve 56GbE QSFP+ ports that break out into 48 ports of 10GbE/1GbE. Basically it is 1/10/40/56GbE for under 10 grand and has ZERO performance issues. http://www.mellanox.com/page/products_dyn?product_family=163 Iron Networks has it for ~5k - http://shop.ironnetworks.com/msx1012b-2brs?utm_source=google_shopping&gclid=Cj0KEQiA496zBRDoi5OY3p2xmaUBEiQArLNnK-nkxw0RsoMKFLQDuC3zgkhB0Fb0ZDrDLQ30Of5BHFgaAqON8P8HAQ
-
RE: First Look at the Scale
@ntoxicator the VirtIO drivers are mounted as a virtual CD, and you simply pick the versions appropriate for the VM during the OS install (Windows). For Linux, BSD, et al., the drivers are already in those kernels and have been since 2007 or so.
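As a quick sanity check from inside a Linux guest, a sketch like this lists the virtio devices the kernel has discovered and which driver claimed each one; the sysfs path is standard Linux, nothing HC3-specific.
```python
# List the virtio devices a Linux guest kernel has discovered.
# /sys/bus/virtio/devices is standard Linux sysfs, not HC3-specific;
# on a non-virtio (or non-Linux) machine the directory simply won't exist.
import os

VIRTIO_SYSFS = "/sys/bus/virtio/devices"

if os.path.isdir(VIRTIO_SYSFS):
    for dev in sorted(os.listdir(VIRTIO_SYSFS)):
        # Each device directory exposes its bound driver as a symlink.
        driver_link = os.path.join(VIRTIO_SYSFS, dev, "driver")
        driver = (os.path.basename(os.readlink(driver_link))
                  if os.path.islink(driver_link) else "unbound")
        print(f"{dev}: {driver}")
else:
    print("No virtio bus found - probably not running on virtio hardware.")
```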