My name is Blake Rodier, and I am the Tier 1 Support Manager at Scale Computing. I have been with Scale Computing for almost 4 years now. I was previously a member of the support team, working my way up to Tier 3 Support Engineer before transitioning to management. On the support team we strive to provide the best experience anyone has had with technical support. If you have any questions for us, please let us know.
Best posts made by blakerodier
-
RE: Introductions
-
RE: Support Tips: Scale HC3 CPU Over Provisioning
@Kelly The available virtual cores you can assign to a particular VM match the number of logical threads (with hyperthreading) on the nodes. If your nodes are identically spec'd, there are no restrictions beyond that on the number of virtual CPUs you can assign to a particular VM.
Limiting the number of virtual cores to the smallest number of logical threads in the cluster is specifically intended to prevent over-provisioning problems; these automatic limits are in place to keep VMs from becoming unstable.
I'm really glad to hear you are purchasing a cluster and we look forward to working with you.
-
Support Tips: Scale HC3 CPU Over Provisioning
Hey all. I wanted to talk about a topic that may be confusing for some users: over-provisioning CPUs for VMs.
Scale Computing’s HC3 system, like any virtualization platform, has physical limitations when it comes to CPU resources. Each node in a cluster has a physical limit to the number of cores and threads it can present to guest OSs. If you choose to, you can allocate resources to your virtual machines based strictly on the number of threads available on a physical node.
If you have two 8-core hyperthreaded processors in a node, you have 32 threads, or logical CPUs, that you can assign for a 1-to-1 ratio on that node. In most instances there are CPU cycles that go unused in a 1-to-1 setup. HC3 can use processor scheduling to allow over-provisioning of CPUs, meaning that for the same 32 threads you could allocate 64 or 96 virtual CPUs and let VMs use idle cycles that would have been wasted in a 1-to-1 setup.
One restriction built into HC3 is that any individual VM is limited to the number of threads available on the lowest-capacity node in the cluster. If you have two nodes with 32 threads and one node with 16 threads, any one VM will only be able to allocate 16 virtual CPUs. This prevents a VM from being unable to run if it live-migrates to another node and tries to use all of its virtual CPUs. When deciding how to over-provision, it is best to start with conservative numbers and build up if there are no issues. If you start seeing high CPU spikes or long wait times, it may be time to turn down the amount of over-provisioning being used.
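To make the math above concrete, here is a small Python sketch of the thread counting and the per-VM cap; the node specs and the total of assigned vCPUs are made-up example numbers, not values pulled from an HC3 cluster.

```python
# A rough sketch of the vCPU math described above. The node specs and the total
# number of assigned vCPUs are hypothetical example values.

def logical_threads(sockets: int, cores_per_socket: int, hyperthreaded: bool = True) -> int:
    """Logical CPUs (threads) a node can present to guest OSs."""
    cores = sockets * cores_per_socket
    return cores * 2 if hyperthreaded else cores

# Example 3-node cluster: two nodes with dual 8-core CPUs, one with dual 4-core CPUs.
nodes = [
    logical_threads(sockets=2, cores_per_socket=8),   # 32 threads
    logical_threads(sockets=2, cores_per_socket=8),   # 32 threads
    logical_threads(sockets=2, cores_per_socket=4),   # 16 threads
]

# Any single VM is capped at the smallest node so it can always live-migrate anywhere.
max_vcpus_per_vm = min(nodes)             # 16

# Cluster-wide over-provisioning ratio: total vCPUs assigned across all VMs vs. physical threads.
total_threads = sum(nodes)                # 80
assigned_vcpus = 160                      # hypothetical total across all VMs
ratio = assigned_vcpus / total_threads    # 2.0, i.e. a 2:1 over-provisioning ratio

print(f"Per-VM vCPU cap: {max_vcpus_per_vm}")
print(f"Cluster over-provisioning ratio: {ratio:.1f}:1")
```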
-
Support Tips: Scale HC3 VirtIO Performance Drivers
Hey all. Another tip from support. This time we wanted to talk about performance drivers.
HC3 uses the KVM hypervisor, which can provide para-virtualized devices to the guest OS to decrease latency and improve performance for virtual devices. Virtio is the standard used by KVM. We recommend selecting performance drivers for any supported OS, which creates Virtio block devices. Emulated block devices are also supported for legacy operating systems.
Virtio driver support has been built into the Linux kernel since 2.6.25. Any Linux distro running a 2.6.25 or later kernel will natively support the Virtio network and storage block devices presented by HC3. On older kernels the Virtio modules can potentially be backported. Any modern Linux distro should be on a kernel version late enough to natively support Virtio.
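If you want to double-check a Linux guest, the kernel version and loaded modules are usually enough to tell. The sketch below is a generic Python check against standard Linux interfaces (platform.release() and /proc/modules), not an HC3 tool; keep in mind that some distros build Virtio into the kernel rather than shipping it as modules, so an empty module list does not necessarily mean support is missing.

```python
# Quick sanity check on a Linux guest: is the kernel new enough for built-in
# Virtio support (2.6.25+) and are any virtio modules currently loaded?
# Generic Linux checks only; this is not an HC3-specific tool.

import platform
import re
from pathlib import Path

def kernel_supports_virtio(release: str = "") -> bool:
    release = release or platform.release()            # e.g. "5.15.0-91-generic"
    match = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    if not match:
        return False
    version = tuple(int(part or 0) for part in match.groups())
    return version >= (2, 6, 25)

def loaded_virtio_modules() -> list:
    """Names of loaded modules starting with 'virtio' (e.g. virtio_blk, virtio_net)."""
    modules = Path("/proc/modules")
    if not modules.exists():
        return []
    return [line.split()[0] for line in modules.read_text().splitlines()
            if line.startswith("virtio")]

if __name__ == "__main__":
    print("Kernel new enough for Virtio:", kernel_supports_virtio())
    print("Loaded virtio modules:",
          loaded_virtio_modules() or "none found (Virtio may be built into the kernel)")
```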
Virtio drivers for Windows are available for desktop and server platforms starting with Windows XP and Windows Server 2003, and for every OS release since. Any OS older than XP or Server 2003 will have to use the emulated, non-performance block device type and will see decreased performance compared to more modern OSs.
At Scale Computing, we periodically update the Virtio performance drivers provided with HC3 via firmware updates. We recommend only using the included Virtio ISO or one provided by Scale Support; untested Virtio drivers could cause an inability to live-migrate VMs or other issues. New Virtio drivers are not automatically added to guest VMs: you will need to mount the ISO to the VM and manually install the updated drivers via Device Manager. You can also use Group Policy to roll out Virtio driver updates when they are available; we will have an application note outlining that process soon.
-
Support Tips: HEAT
Today I wanted to dig a little deeper into our new HyperCore Enhanced Automated Tiering (HEAT) feature. HEAT includes configurable SSD priority allocation at the individual virtual disk level through an easy-to-use slide bar in the HC3 interface, and intelligent data block priority based on block I/O heat mapping assessed using historical I/O information for each virtual disk.
By default, a new virtual disk is automatically assigned an SSD priority of 4 on a scale of 0 to 11. The scale, represented by the slide bar in the HC3 interface, is exponential: changing the priority of a VM virtual disk from 4 to 5 in the management UI doubles the priority of that data for SSD placement. This means even a single position change on the slide bar can provide a significant performance improvement and requires less experimenting with various settings.
The values of 0 and 11 on the slide bar are special. A setting of 0 eliminates SSD storage usage for that virtual disk (convenient for a static FTP drive, for example), while 11 changes the SSD priority by an order of magnitude, multiplying the priority of the virtual disk data for SSD placement by 10. When you turn it to 11, you really crank it.
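To show how the scale reads, here is a small Python sketch of the relative SSD-placement weight implied by the description above, normalized so the default priority of 4 equals 1.0. This is just an illustration of the slide bar behavior (each step doubles, 0 disables SSD, 11 is an order of magnitude beyond 10), not HyperCore's internal calculation.

```python
# Illustration only: relative SSD-placement weight for each slide bar setting,
# normalized so the default priority of 4 has a weight of 1.0. Based on the
# description above, not on HyperCore's actual internal math.

def relative_ssd_weight(priority: int) -> float:
    if not 0 <= priority <= 11:
        raise ValueError("SSD priority must be between 0 and 11")
    if priority == 0:
        return 0.0                       # no SSD placement for this virtual disk
    if priority == 11:
        return 10.0 * 2 ** (10 - 4)      # an order of magnitude beyond priority 10
    return float(2 ** (priority - 4))    # each step up or down doubles or halves

for p in (0, 3, 4, 5, 10, 11):
    print(f"priority {p:>2}: {relative_ssd_weight(p):g}x the default")
```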
Because of how HEAT works, it will automatically place the most accessed, or "hot," blocks on SSD storage and move less accessed, or "cold," blocks to spinning disks. This means leaving all disks at 4 provides a balanced and significant performance benefit to all disks equally, but you have the freedom to increase or decrease the priority based on your specific needs.
Support is always available if you need guidance or have questions about the best way to prioritize your disks on your HC3 system.