What's the use case?
For blades in general? It's hyperconverged infrastructure, hosting environments, container clusters, etc. Basically anywhere you want to cram as much as possible into the least amount of rack space.
I suppose - but damn - that seems like a HUGE amount of compute power next to a tiny amount of storage. If that's the setup you need - again, HUGE compute and tiny storage - then it's probably just fine.
I know what you mean, but it's not really that low. Consider that the server I linked to has 3.5" bays. So you can have 2 x 18TB drives (a standard enterprise size, in stock) per node, or 288 TB of raw storage per 3U chassis. A rack full of those will give you over 3 PB of disk, or 1.5 PB of SSDs (8TB each).
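The capacity math above works out if you assume an 8-node 3U chassis and roughly 12 chassis per rack (the node count and chassis-per-rack figures are my assumptions, not stated in the thread). A quick sketch:

```python
# Back-of-envelope raw capacity for a multi-node 3U chassis.
# Assumptions (mine, not from the thread): 8 nodes per 3U chassis,
# ~12 chassis per rack (36U of a 42U rack, leaving room for switching).

DRIVES_PER_NODE = 2
NODES_PER_CHASSIS = 8
CHASSIS_PER_RACK = 12

def raw_tb(drive_tb: int, chassis: int = 1) -> int:
    """Raw capacity in TB for a given drive size across N chassis."""
    return DRIVES_PER_NODE * NODES_PER_CHASSIS * drive_tb * chassis

print(raw_tb(18))                      # TB per chassis with 18TB HDDs -> 288
print(raw_tb(18, CHASSIS_PER_RACK))    # TB per rack, HDD -> 3456 (~3.5 PB)
print(raw_tb(8, CHASSIS_PER_RACK))     # TB per rack, 8TB SSD -> 1536 (~1.5 PB)
```

With those assumptions the numbers line up: 288 TB per chassis, a bit over 3 PB of HDD per rack, and about 1.5 PB with 8TB SSDs.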
There are other models too; some have 4 bays per node. So you have some options.
That storage ends up being soooo incredibly slow that the power of the CPUs seems like it would be wasted.
Now if all of the storage hangs off a single node, or is split between two or three nodes, then it starts looking more like a Scale box, only way smaller.
I'd be worried about only having two power supplies in there too. That might be folly on my part, but with that many drives/CPUs and only two PSUs?
Today you don't need a lot of spindles in an array to get speed. Storage would be blazing fast with, for example, two NVMe drives per node.
8TB is readily available, but you could get 16TB NVMe drives too.
Yeah, NVMe would be fast... Before looking more closely at your picture, I assumed it was limited to HDDs.
Which today would just be stupid... so my bad.