@thwr sure thing
The 1150 ships with a baseline of 8 Broadwell cores per node (E5-2620 v4), upgradable to the E5-2640 v4 with 10 cores per node. It ships with 64 GB of RAM per node, upgradable to 256 GB. Storage is a 480 GB, 960 GB, or 1.92 TB eMLC SSD per node, plus three 1, 2, or 4 TB NL-SAS drives per node. Each node has quad gigabit or quad 10 GbE NICs. All features and functionality are included (HA, DR, multi-site replication, up to 5,982 snapshots per VM, and auto-tiering with HEAT staging/destaging and automatic prioritization of workload IO, to name a few). All 1150 nodes can be joined with every other Scale node family and generation, both forward and back, so upgrade paths are not artificially limited.
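For anyone pricing this out, the per-node figures above can be totaled with a quick sketch. A three-node baseline cluster is assumed here (the entry configuration is an assumption on my part); this only sums raw resources, since usable capacity after replication depends on the platform's data-protection settings:

```python
# Raw per-cluster totals for an assumed 3-node 1150 baseline build,
# using the per-node figures quoted above. Usable capacity after
# replication will be lower; this is just the raw sum.

NODE_COUNT = 3  # assumed entry-level cluster size

node = {
    "cores": 8,          # E5-2620 v4 baseline (10 with the E5-2640 v4)
    "ram_gb": 64,        # upgradable to 256 GB per node
    "ssd_gb": 480,       # 480 / 960 / 1920 GB eMLC options
    "nl_sas_tb": 3 * 1,  # three NL-SAS drives, 1/2/4 TB options
}

totals = {k: v * NODE_COUNT for k, v in node.items()}
print(totals)  # {'cores': 24, 'ram_gb': 192, 'ssd_gb': 1440, 'nl_sas_tb': 9}
```

Swap in the upgraded per-node numbers (10 cores, 256 GB, 1.92 TB SSD, 4 TB spindles) to see the top end of the same chassis.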

Best posts made by Aconboy
-
RE: The VSA is the Ugly Result of Legacy Vendor Lock-Out
-
RE: Cannot decide between 1U servers for growing company
@ntoxicator - for that kind of money, I could put in a cluster at the primary site and a cluster at a DR site, and have real-time replication with failover and failback (and likely still have beer money left over).
-
RE: First Look at the Scale
@ntoxicator the VirtIO drivers are mounted as a virtual CD, and you simply pick the versions appropriate for the VM during the OS install (Windows). For Linux, BSD, et al., the drivers are already in those kernels and have been since about 2007.
-
RE: Topics regarding Inverted Pyramids Of Doom
@coliver Had that happen in my lab: SPa failover to SPb... not so much.
-
RE: Replacing the Dead IPOD, SAN Bit the Dust
@scottalanmiller said in Replacing the Dead IPOD, SAN Bit the Dust:
Looking at this thread, I would say that a Scale 1150 cluster would fit the bill nicely, and even with a single node for second-site DR, he would still likely be under $35k all-in.
-
RE: Cost Study: 3 Node Scale vs. 3 Node VMware VSAN
@scottalanmiller - adding Simplivity and Nutanix to the mix might be interesting, but you would additionally have to factor in the CPU/RAM resources consumed by their VSAs (the SAN implemented as virtual machines, with protocol overhead), which would require higher-end procs and larger RAM footprints to run the same number of VMs at the same performance level as either Scale or VMware vSAN. Seems to me the density implications would have a direct impact on the bottom-line prices of any of the VSA-based offerings when compared to the platforms that don't need one.
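The density argument is easy to put in numbers. The VSA reservation below is a placeholder I picked for illustration, not a vendor-published figure; the point is just how a fixed per-node tax compounds across a cluster:

```python
# Illustrative only: how much VM-usable RAM a per-node VSA reservation
# eats across a small cluster. VSA_RAM_GB is an assumed placeholder,
# not a published spec for any vendor.

NODES = 3
RAM_PER_NODE_GB = 64   # 1150 baseline from the spec post
VSA_RAM_GB = 24        # assumed per-node VSA reservation

vm_ram_no_vsa = NODES * RAM_PER_NODE_GB
vm_ram_with_vsa = NODES * (RAM_PER_NODE_GB - VSA_RAM_GB)
overhead_pct = 100 * (vm_ram_no_vsa - vm_ram_with_vsa) / vm_ram_no_vsa

print(vm_ram_with_vsa, f"{overhead_pct:.1f}%")  # 120 37.5%
```

At the 64 GB baseline the hit is proportionally brutal, which is exactly why VSA-based platforms get quoted with bigger RAM footprints to start with.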
-
RE: Scale Computing VS Proxmox
@stacksofplates Yes, we did in 2019. There are a couple of ways that can be done. When you snapshot a VM, any disk in that snap can be mounted to any other VM, provided that the logged-in user has a permission level allowing it. That is actually part of the mechanism that several backup vendors (Acronis, Storware, etc.) use to do agentless backups of Scale Computing VMs. If you haven't taken a look since 2018, you should look again, as a great many things have been added since then.
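The flow those backup vendors rely on can be sketched as a tiny in-memory model: snapshot a VM, then attach one of the snapshot's disks to another VM (a backup proxy), gated by the caller's permission level. All names and the permission model here are illustrative, not the actual HC3 API:

```python
# Minimal in-memory sketch of the snapshot-disk-mount mechanism
# described above. Class, method, and role names are made up for
# illustration; the real platform exposes this differently.

class Cluster:
    def __init__(self):
        self.snapshots = {}  # snap_id -> list of disk ids in the snap
        self.attached = {}   # vm_id -> disks currently attached to it

    def snapshot_vm(self, vm_id, disks):
        """Record a point-in-time snapshot of a VM's disks."""
        snap_id = f"snap-{vm_id}"
        self.snapshots[snap_id] = list(disks)
        return snap_id

    def attach_snapshot_disk(self, user_role, snap_id, disk_id, target_vm):
        """Attach a snapshotted disk to another VM, if the user may."""
        if user_role != "admin":  # assumed permission model
            raise PermissionError("insufficient privileges")
        if disk_id not in self.snapshots[snap_id]:
            raise KeyError(disk_id)
        self.attached.setdefault(target_vm, []).append(disk_id)

c = Cluster()
snap = c.snapshot_vm("db01", ["disk-a", "disk-b"])
c.attach_snapshot_disk("admin", snap, "disk-a", "backup-proxy")
print(c.attached)  # {'backup-proxy': ['disk-a']}
```

The proxy VM then reads the attached disk directly, which is what makes the backup agentless from the guest's point of view.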