@scottalanmiller said in Apple Officially Releases their ARM M1 Powered Lineup:
USB4 is TB3 compatible.
Intel removed licensing for TB3 as part of an agreement to make it part of the USB4 spec.
@Dashrender said in vSAS - Value Serial Attached SCSI:
I'd ask the same about SAS vs NVMe chips... I mean, of course they might not cost different, but then again, they would be wildly apart.
100% of the time a SAS drive needs a SAS HBA or RAID controller to speak to a CPU (and possibly a SAS expander). In 90% of the NVMe configs you see, an NVMe drive talks straight to the PCI-E bus and the CPU, so this is apples/oranges.
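Not from the post above, but a rough way to see that topology difference for yourself on a Linux box: the sysfs path for each block device shows whether it hangs off a SCSI host on an HBA or sits directly on a PCIe function. A minimal sketch, assuming a Linux host with sysfs:

```
# Illustrative sketch (assumes Linux/sysfs): print the physical device path for
# each disk. SAS/SATA disks (sdX) resolve through a SCSI host on an HBA/RAID
# controller; NVMe namespaces (nvmeXnY) resolve to a PCIe function directly.
import glob
import os

for dev in sorted(glob.glob("/sys/block/sd*") + glob.glob("/sys/block/nvme*n*")):
    name = os.path.basename(dev)
    # "device" is a symlink into the hardware topology for this block device
    physical_path = os.path.realpath(os.path.join(dev, "device"))
    print(f"{name}: {physical_path}")
```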
@Dashrender said in vSAS - Value Serial Attached SCSI:
Can you hot swap NVMe?
Yes. There are basically two ways: Intel VMD, and the NVMe spec for hot swap. It's a work in progress, but yes, it's not impossible. Our HCL lists which NVMe drives we support it with.
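As a side note (not from the post): on the OS side, PCIe hotplug support shows up as slots the kernel registers under /sys/bus/pci/slots. A rough sketch, assuming a Linux host, that just lists them:

```
# Rough sketch, assuming a Linux host: list PCI slots the kernel has registered.
# Slots managed by a hotplug driver (e.g. pciehp) expose a "power" attribute.
import glob
import os

for slot in sorted(glob.glob("/sys/bus/pci/slots/*")):
    addr_file = os.path.join(slot, "address")
    if not os.path.isfile(addr_file):
        continue
    with open(addr_file) as f:
        address = f.read().strip()
    has_power_control = os.path.isfile(os.path.join(slot, "power"))
    print(f"slot {os.path.basename(slot)}: {address} hotplug_power_control={has_power_control}")
```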
Is there a backplane solution for NVMe?
There are backplanes that can take EITHER NVMe or SAS drives (HPE has them). Note, U.2 generally just plugs straight NVMe into the PCI-E bus, but there are crossbar solutions (they were a few hundred bucks last I saw, but they exist).
These might be reasons... plus cost is still a reason assuming NVMe drives are noticeably more expensive.
There are cheaper QLC NVMe drives, and there are high-endurance eMLC SAS drives that will outperform them on writes. Don't confuse the interface with drive speed.
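To put some rough numbers on the endurance point (these drive figures are made up for illustration, not real products): total write endurance is roughly capacity × DWPD × warranty days, so a cheap QLC NVMe drive can have far less write headroom than a high-endurance SAS drive despite the faster interface.

```
# Hypothetical numbers for illustration only; neither drive is a real product.
# Endurance in TB written (TBW) is roughly capacity_TB * DWPD * warranty_days.
def tbw(capacity_tb: float, dwpd: float, warranty_years: int = 5) -> float:
    return capacity_tb * dwpd * warranty_years * 365

qlc_nvme = tbw(capacity_tb=3.84, dwpd=0.3)   # assumed low-endurance QLC NVMe
emlc_sas = tbw(capacity_tb=3.84, dwpd=3.0)   # assumed high-endurance eMLC SAS

print(f"QLC NVMe (assumed 0.3 DWPD): {qlc_nvme:,.0f} TBW")
print(f"eMLC SAS (assumed 3.0 DWPD): {emlc_sas:,.0f} TBW")
```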
@Pete-S said in Apple Officially Releases their ARM M1 Powered Lineup:
For instance, ubuntu has had one or several mobile OS working on some devices. It's all ARM cpus. Have any of these ever worked on any Apple tablet or phone or watch or...?
Apple has APIs for bhyve. Technically I think we use them for some of our container runtime stuff so we can call metal etc. You may never see Linux run bare metal on MacBooks, but if it runs in a virtual machine that abstracts the hardware (i.e., in Fusion), do you really care?
@scottalanmiller said in Apple Officially Releases their ARM M1 Powered Lineup:
And on PowerPC before that.
and Motorola before that!
@marcinozga said in Apple Officially Releases their ARM M1 Powered Lineup:
That 16GB RAM limit though.... Previous gen Mac Mini went up to 64GB. And no 10Gbit ethernet
Ehhh, it's USB4, we'll have adapters in no time.
@Pete-S said in How Many HCI Nodes for the SMB:
But if you run a generic benchmark that is not designed to give inflated numbers, the situation is different.
Thankfully, I don't run generic benchmarks in production for a living.
The reality is that most CPU-intensive stuff takes advantage of at least some of the new offload extensions and libraries. Also, memory throughput is often the limiting factor for databases and other IO-intensive applications.
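Not part of the original post, but if you want to see which of those offload instruction sets a given host actually exposes, the kernel reports them as CPU flags. A small sketch, assuming an x86 Linux host (the flag names are the standard /proc/cpuinfo ones):

```
# Sketch, assuming an x86 Linux host: report which common offload/acceleration
# instruction-set flags the kernel exposes in /proc/cpuinfo.
INTERESTING = {"aes", "avx", "avx2", "avx512f", "sse4_2", "sha_ni"}

with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break
    else:
        flags = set()

for flag in sorted(INTERESTING):
    print(f"{flag}: {'yes' if flag in flags else 'no'}")
```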
@travisdh1 said in How Many HCI Nodes for the SMB:
How many SMBs actually use Oracle RAC or SAP HANA? Can't be many.
I know people with 20 employees and 400 Oracle databases, FWIW. There are a lot of smaller application providers who do niche SaaS stuff.
SAP is pulling Oracle support, and making everyone move to HANA going forward for their apps.
@scottalanmiller said in How Many HCI Nodes for the SMB:
The implication of two nodes is that it is still N+1. You just buy bigger nodes if necessary to keep it to two nodes.
If you're licensing Oracle RAC at $40K per core (list price; I know you'll pay less, but still) or SAP HANA (where you pay per TB of RAM), then scaling out to a larger cluster has some advantages in the N+1 math: the capacity reserved for HA protection is 50% on two nodes vs. 25% on four smaller nodes.
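The N+1 arithmetic is easy to make concrete (worked example, not from the post): with N nodes you hold back one node's worth of capacity for failover, i.e. 1/N of the cluster, so the HA "tax" shrinks as the cluster grows, which matters when per-core or per-TB licensing makes each node expensive.

```
# Worked example of the N+1 overhead mentioned above (illustrative math only).
def ha_reserve_fraction(nodes: int) -> float:
    # With N+1 sizing, one node's worth of capacity is held back for failover.
    return 1 / nodes

for nodes in (2, 3, 4):
    print(f"{nodes} nodes: {ha_reserve_fraction(nodes):.0%} of capacity reserved for HA")
# 2 nodes -> 50%, 3 nodes -> 33%, 4 nodes -> 25%
```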
@scottalanmiller said in Proxmox install for use with a ceph cluster:
Yes, but it blocks SMART so you never want to do it, it undermines the stability of the JBOD. There's always a standard controller on the MOBO for the JBOD connections.
It also blocks TRIM commands (not that I trust the Linux TRIM driver with ATA drives, given how many one-off exceptions to disable it they've had to write).
Operationally it's messy because on a drive failure you have to go in with PERCCLI etc. and rebuild the RAID 0s. We used to run this, but it was just a royal pain in the ass.