
    scale

    @scale

    Reputation: 807
    Profile views: 2.0k
    Topics: 189
    Posts: 309
    Best: 264
    Followers: 4
    Following: 0


    Best posts made by scale

    • 3-Node Minimum? Not So Fast

      http://blog.scalecomputing.com/3-node-minimum-not-so-fast/

      For a long time, when you purchased HC3, you were told there was a 3 node minimum. A minimum of 3 nodes is required to create a resilient, highly available cluster. HC3 architecture, based on this 3 node cluster design, prevents data loss even in the event of a whole node failure. Despite these compelling reasons to require 3 nodes, Scale Computing last week announced a new single node appliance configuration. Why now?

      Recent product updates have enhanced the replication and disaster recovery capabilities of HC3 to make a single node appliance a compelling solution in several scenarios. One such scenario is the distributed enterprise. Organizations with multiple remote or branch offices may not have the infrastructure requirements to warrant a 3 node cluster. Instead, they can benefit from a single node appliance as a right-sized solution for their infrastructure.


      In a remote or branch office, a single node can run a number of workloads and easily be managed remotely from a central office. In spite of the lack of clustered, local high availability, single nodes can easily be replicated for DR back to an HC3 cluster at the central office, giving them a high level of protection. Deploying single nodes in this way offers an infrastructure solution for distributed enterprise that is both simple and affordable.

      Another compelling scenario where the single node makes perfect sense is as a DR target for an HC3 cluster. Built-in replication can be configured quickly and without extra software to a single HC3 node located locally or remotely. While you will likely want the local high availability and data protection a 3-node cluster provides for primary production, a single node may suffice for a DR strategy where you only need to fail over your most critical VMs to continue operations temporarily. This use of a single node appliance is cost effective and provides a high level of protection for your business.

      [Image: replication diagram]

      Finally, although a single node has no clustered high availability, for very small environments the single node appliance can be deployed with a second appliance as a DR target, providing a level of data protection and availability acceptable to many small businesses. The ease of deployment, ease of management, and DR capabilities that make a full-blown HC3 cluster so appealing are the same reasons to love the single node appliance.

      Find out more about the single node appliance configuration (or as I like to call it, the SNAC-size HC3) in our press release and solution brief.

      posted in Scale Legion scale scale hc3 hyperconvergence virtualization
    • Explain Hyperconvergence Like I Am Five

      It’s supposedly the wave of the future, but you’re not sure if hyperconvergence is a new type of server architecture or the name of one of the Decepticons from Transformers.

      If you’ve always wanted to know what the heck hyperconvergence is all about, but want things explained from the beginning and without all the marketing buzzwords, then clear your calendar for January 21st at 11 a.m. CT!

      Join Scale Computing as they go back (WAY back) and explain hyperconvergence as if you were still five.

      Find out how:

      • Hyperconvergence is like Optimus Prime because it handles compute, storage, networking and virtualization by itself
      • Hyperconvergence simplifies data center operations, and lets users do more with less
      • Hyperconvergence helps organizations avoid fingerpointing among OS, hypervisor, and storage vendors

      Make sure you register for this exciting Spiceworks Webinar before it rolls out. You won't want to miss it!

      What questions do you have about hyperconvergence?

      posted in Self Promotion scale scale hc3 hyperconvergence virtualization storage rain spiceworks webinar
    • What do DDOS attacks mean for Cloud users?

      Last Friday, a DDOS attack disrupted major parts of the internet in both North America and Europe. The attack seems to have largely targeted DNS provider Dyn, disrupting access to major service providers such as Level 3, Zendesk, Okta, Github, Paypal, and more, according to sources like Gizmodo. This kind of botnet-driven DDOS attack is a harbinger of future attacks that can be carried out over an increasingly connected world of Internet of Things (IoT) and poorly secured devices.


      This disruption highlights a particular vulnerability for businesses that have chosen to rely on cloud-based services like IaaS, SaaS, or PaaS. The ability to connect to these services is critical to business operations, and even though the service may be running, if users cannot connect, it is considered downtime. What is particularly scary about these attacks for small and midmarket organizations is that they become victims of circumstance from attacks directed at larger targets.

      As the IoT becomes more of a reality, with more and more devices of questionable security joining the internet, the potential for these attacks, and their severity, can increase. I recently wrote about how to compare cloud computing and on-prem hyperconverged infrastructure (HCI) solutions, and one of the decision points was reliance on the internet. So it is not only a matter of ensuring a stable internet provider, but also of the stability of the internet in general, given the possibility of attacks targeting any number of different services.

      Organizations running services on-prem were not affected by this attack because it did not affect any internal network environments. Choosing to run infrastructure and services internally definitely mitigates the risk of outage from external forces like collateral damage from attacks on service providers. Many organizations that choose cloud services do so for simplicity and convenience because traditional IT infrastructure, even with virtualization, is complex and can be difficult to implement, particularly for small and midsize organizations. It has only been recently that hyperconverged infrastructure has made on-prem infrastructure as simple to use as the cloud.

      The future is still uncertain on how organizations will ultimately balance their IT infrastructure between on-prem and cloud in what is loosely called hybrid cloud. Likely it will simply continue to evolve as new technologies emerge. At the moment, however, organizations can choose easy-to-use hyperconverged infrastructure for increased security and stability, or go with cloud providers for completely hands-off management and third-party reliance.

      As I mentioned in my cloud vs. HCI article, there are valid reasons to go with either and the solution may likely be a combination of the two. Organizations should be aware that on-prem IT infrastructure no longer needs to be the complicated mess of server vendors, storage vendors, hypervisor vendors, and DR solution vendors. Hyperconverged infrastructure is a viable option for organizations of any size to keep services on-prem, stable, and secure against collateral DDOS damage.

      posted in Scale Legion scale scale blog ddos security hyperconvergence
    • Cloud Computing vs. Hyperconvergence

      As IT departments look to move beyond traditional virtualization into cloud and hyperconverged infrastructure (HCI) platforms, they have a lot to consider. There are many types of organizations with different IT needs, and it is important to determine whether those needs align more with cloud or with HCI. Before I dig into the differences, let me go over the similarities.

      Both cloud and HCI tend to offer a similar user experience highlighted by ease of use and simplicity. One of the key features of both is simplifying the creation of VMs by automatically managing the pools of resources. With cloud, the infrastructure is all but transparent as the actual physical host where the VM is running is far removed from the user. With live migration capabilities and auto provisioning of resources, HCI can provide nearly the same experience.

      As for storage, software defined storage pooling has made storage management practically as transparent in HCI as it is in cloud. In many ways, HCI is nearly a private cloud. Without the complexity of traditional underlying virtualization architecture, HCI makes infrastructure management turnkey and lets administrators focus on the workloads and applications, just like the cloud, but keeps everything on prem and out of the hands of a third party.

      Still, there are definite differences between cloud and HCI so let’s get to those. I like to approach these with a series of questions to help guide between cloud and on prem HCI.

      Is your business seasonal?

      If your business is seasonal, the pay-as-you-go Opex pricing model of cloud might make more sense, as might the bursting ability of cloud. If you need lots of computing power but only during short periods of the year, cloud might be best. If your business follows a more typical schedule of steady business throughout the year with some seasonal bumps, then an on prem Capex investment in HCI might be the best option.

      Do you already have IT staff?

      If you already have IT staff managing an existing infrastructure that you are looking to replace, an HCI solution will be both easy to implement and will allow your existing staff to change focus from infrastructure management to implementing better applications, services, and processes. If you are currently unstaffed for IT, cloud might be the way to go since you can get a number of cloud based application services for users with very little IT administration needed. You may need some resources to help make a variety of these services work together for your business, but it will likely be less than with an on prem solution.

      Do you need to meet regulatory compliance on data?

      If so, you are going to need to look into the implications of your data and services hosted and managed off site by a third party. You will be reliant on the cloud provider to provide the necessary security levels that meet compliance. With HCI, you have complete control and can implement any level of security because the solution is on prem.

      Do you favor Capex or Opex?

      Pretty simple here. Cloud is Opex. HCI can be Capex and is usually available as Opex as well through leasing options. The cloud Opex is going to be less predictable because many of the costs are based on dynamic usage, whereas the Opex with HCI should be completely predictable with a monthly leasing fee. Consider further that the Opex for HCI is usually in the form of lease-to-own, so it drops off dramatically once the lease period ends, as opposed to cloud Opex, which is perpetual.

      Can you rely on your internet connection?

      Cloud is 100% dependent on internet connectivity so if your internet connection is down, all of your cloud computing is unavailable. The internet connection becomes a single point of failure for cloud. With HCI, internet connection will not affect local access to applications and services.

      Do you trust third party services?

      If something goes wrong with cloud, you are dependent on the cloud provider to correct the issue. What if your small or medium sized cloud provider suddenly goes out of business? Whatever happens, you are helpless, waiting, like an airline passenger waiting on the tarmac for a last minute repair. With HCI, the solution is under your control and you can take action to get systems back online.

      Let me condense these into a little cheat sheet for you.

      [Image: cloud vs. HCI cheat sheet]

      One last consideration that I don’t like to put into the question category is the ability to escape the cloud if it doesn’t work out. Why don’t I like to make it a question? Maybe I just haven’t found the right way to ask it without making cloud sound like some kind of death trap for your data, and I’m not trying to throw cloud under the bus here. Cloud is a good solution where it fits. That being said, it is still a valid consideration.

      Most cloud providers have great onboarding services to get your data to the cloud more efficiently but they don’t have any equivalent to move you off. It is not in their best interest. Dragging all of your data back out of the cloud over your internet connection is not a project anyone would look forward to. If all of your critical data resides in the cloud, it might take a while to get it back on prem. With HCI it is already on prem so you can do whatever you like with it at local network speeds.

      I hope that helps those who have been considering a choice between cloud and HCI for their IT infrastructure. Until next time.

      posted in Scale Legion cloud hyperconvergence scale blog
    • Introducing the Single Node Scale HC3 Appliance

      Since the news was quietly leaked here the other day, we wanted to take a moment to tell you about the new, single node Scale HC3 appliance and officially answer questions as they may arise. Scale Computing now offers our Scale HC3 platform in a single node configuration. This allows customers to deploy Scale HC3 in situations where the capacity or high availability of a three node (or greater) cluster is not warranted, such as ROBO (remote office / branch office) locations, and for SMB or even SOHO (small office / home office) customers that cannot justify the cost of high availability but do want Scale's flexibility, support, and ease of use.

      The single node configuration comes with the same easy to use all inclusive management interface, complete support and advanced storage layer that you expect from Scale, just in a smaller package without high availability. We hope that this is of interest to many small businesses or company divisions that would benefit from all that Scale offers but have been unable to do so due to the entry point of having to have three nodes for a minimum cluster.

      Customers starting with a single node configuration will be able to transparently upgrade to three or more nodes when the time comes for them to grow their environments, as well.

      Single node configurations can replicate with other single node configurations (making a two node configuration possible in some ways) as well as with high availability cluster configurations, making them very well suited to remote offices.

      Pricing: An official price list is not available yet, but the single node configuration is priced the same as the per node prices of the existing cluster configurations. The single node configuration is only an update to our software to allow for single node operation, not new hardware, so the single nodes are the same as individual nodes in a cluster. The single node configuration, therefore, starts at just 33% of our normal starting price for a three node cluster.

      posted in Scale Legion scale scale hc3
    • Technology Becomes Obsolete. Saving Does Not.

      The list of technological innovations in IT that have already passed into obsolescence is long. You might recall some not-so-ancient technologies like the floppy disk, dot matrix printers, ZIP drives, the FAT file system, and cream-colored computer enclosures. Undoubtedly these are still being used somewhere by someone, but I hope not in your data center. No, the rest of us have moved on. Technologies always fade and get replaced by newer, better technologies. Saving money, on the other hand, never goes out of style.

      http://blog.scalecomputing.com/wp-content/uploads/2017/07/social-card-travel-money.jpg

      You see, when IT pros like you buy IT assets, you have to assume that the technology you are buying is going to be replaced in some number of years. Not replaced because it no longer operates. It gets replaced because it is no longer being manufactured or supported and has been replaced by newer, better, faster gear. This is IT. We accept this.

      The real question here is, are you spending too much money on the gear you are buying now when it is going to be replaced in a few years anyway? For decades, the answer has mostly been yes, and there are two reasons why: over-provisioning and complexity.

      Over-Provisioning

      When you are buying an IT solution, you know you are going to keep that solution for a minimum of 3-5 years before it gets replaced. Therefore you must attempt to forecast your needs 3-5 years out. This is practically impossible, but you try. Rather than risk under-provisioning, you over-provision to avoid having to upgrade or scale out. The process of acquiring new gear is difficult. There is budget approval, research, more guesstimating of future needs, implementation, and the risk of unforeseen disasters.

      But why is scaling out so difficult? Traditional IT architectures involve multiple vendors providing different components like servers, storage, hypervisors, disaster recovery, and more. There are many moving parts that might break when a new component is added into the mix. Software licensing may need to be upgraded to a higher, more expensive tier with infrastructure growth. You don’t want to have to worry about running out of CPU, RAM, storage, or any other compute resource because you don’t want to have to deal with upgrading or scaling out what you already have. It is too complex.

      Complexity

      Ok, I just explained how IT infrastructure can be complex with so many vendors and components. It can be downright fragile when it comes to introducing change. Complexity bites you when it comes to operational expenses as well: it requires more expertise and more training, and tasks become more time consuming. And what about feature complexity? Are you spending too much on features that you don’t need? I know I am guilty of this in a lot of ways.

      I own an iPhone. It has all kinds of features I don’t use. For example, I don’t use Bluetooth. I just don’t use external devices with my phone very often. But the feature is there and I paid for it. There are a bunch of apps and features on my phone I will likely never use, but all of those contributed to the price I paid for the phone, whether I use them or not.

      I also own quite a few tools at home that I may have only used once. Was it worth it to buy them and then hardly ever use them? There is the old saying, “It is better to have it and not need it than to need it and not have it.” There is some truth to that and maybe that is why I still own those tools. But unlike IT technologies, these tools may well be useful 10, 20, even 30 years from now.

      How much do you figure you could be overspending on features and functionality you may never use in some of the IT solutions you buy? Just because a solution is loaded with features and functionality does not necessarily mean it is the best solution for you. It probably just means it costs more. Maybe it also comes with a brand name that costs more. Are you really getting the right solution?

      There is a Better Way

      So you over-provision. You likely spend a lot to have resources and functionality that you may or may not ever use. Of course you need some overhead for normal operations, but you never really know how much you will need. Or you accidentally under-provision and end up spending too much upgrading and scaling out. Stop! There are better options.

      If you haven’t noticed lately, traditional Capex expenditures on IT infrastructure are under scrutiny and Opex is becoming more favorable. Pay-as-you-go models like cloud computing are gaining traction as a way to prevent over-provisioning expense. Still, cloud can be extremely costly especially if costs are not managed well. When you have nearly unlimited resources in an elastic cloud, it can be easy to overprovision resources you don’t need, and end up paying for them when no one is paying attention.

      Hyperconverged Infrastructure (HCI) is another option. Designed to be both simple to operate and to scale out, HCI lets you use just the resources you need and gives you the ability to scale out quickly and easily when needed. HCI combines servers, storage, virtualization, and even disaster recovery into a single appliance. Those appliances can then be clustered to pool resources, provide high availability, and become easy to scale out.

      HC3, from Scale Computing, is unique amongst HCI solutions in allowing HCI appliances to be mixed and matched within the same cluster. This means you have great flexibility in adding just the resources you need, whether it be more compute power like CPU and RAM, or more storage. It also helps future-proof your infrastructure by letting you add newer, bigger, faster appliances to a cluster while retiring or repurposing older appliances. It creates an IT infrastructure that can be easily and seamlessly scaled without having to rip and replace for future needs.

      The bottom line is that you can save a lot of money by avoiding complexity and over-provisioning. Why waste valuable revenue on a total cost of ownership (TCO) that is too high? At Scale Computing, we can help you analyze your TCO and figure out if there is a better way for you to be operating your IT infrastructure to lower costs. Let us know if you are ready to start saving. www.scalecomputing.com

      posted in Scale Legion scale scale hc3 scale blog
    • MS SQL Server Best Practice Guide on Scale HC3

      Below is a snippet of our Microsoft SQL Best Practices Reference Sheet available via our Customer and Partner support portals. This is designed to help any users on HC3 better understand common best practices with SQL maintenance and setup, as well as how to best utilize your HC3 system for your SQL servers.

      Pre-Installation and Migration Tasks

      • Spend time before transitioning to the HC3 system to understand your needs throughout the year. Month-end, quarter-end, and year-end activities could be more resource intensive than daily requirements. Plan to utilize the HC3 system's HEAT capabilities for high-utilization “seasons.”
      • Run the SQL Server Best Practices Analyzer on existing databases to look for possible improvements prior to migrating to the HC3 system.
      • Spend time testing SQL Server configurations prior to deploying to live operations on the HC3 system.
      • Don’t oversize your installation and deprive other VMs of necessary resources.
      • Make sure all applicable guest OS patches are applied before migration.

      Windows Guest Configuration

      • Make sure Receive Side Scaling (RSS) is enabled. It is configured to be enabled by default.
      • Format data and log file drives as NTFS with a 64 KB allocation unit size. To verify that your drive has been formatted properly, run fsutil fsinfo ntfsinfo from the command line (see the example after this list).
      • Set power management to High Performance in the guest OS.
      • Use 64-bit version of Windows guest OS.
      • Do not configure data or log file drives as Dynamic drives in disk management.
      • Add your SQL service account to the “Perform Volume Maintenance Task” in Windows Security Policy to use Instant File Initialization (IFI).
      • Reduce the size of your page files to the minimum possible. The OS should be configured with a sufficient amount of physical memory.
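
      To illustrate the drive-formatting bullet above, here is a minimal sketch from an elevated PowerShell prompt inside the guest. It assumes the data drive is E: and the volume label "SQLData" is just a placeholder; substitute your own values. Note that formatting destroys any existing data on the volume.

      # Format the data drive as NTFS with a 64 KB (65536 byte) allocation unit
      Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLData"

      # Verify the format: "Bytes Per Cluster" should report 65536
      fsutil fsinfo ntfsinfo E: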

      SQL Installation Guidelines

      • Keep the OS, data files, log files, and backups on separate drives so that you can assign a different HEAT flash priority to data and log file drives if necessary.
      • Don’t set databases to grow by a percentage. Use set increments (see the sketch after this list).
      • Be sure to right-size your database.
      • Use the 64-bit version of SQL Server.
      • Spread high IOPS databases across multiple VMs as opposed to multiple instances on the same SQL server.
      • Make sure all applicable SQL Server patches are applied.
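
      As a sketch of the fixed-increment growth guideline above, the setting can be applied from the command line with sqlcmd. The database and file names (SalesDB, SalesDB_log) and the 256 MB increment are hypothetical placeholders; use your own names and an increment appropriate to your workload.

      # Grow the log file in fixed 256 MB steps rather than by a percentage
      sqlcmd -S localhost -Q "ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log, FILEGROWTH = 256MB);"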

      Much more information in this guide is available using the following links (or by logging in and searching "SQL")

      Customer portal
      Partner portal

      posted in Scale Legion scale scale hc3 ms sql server database
    • Scale HC3 VirtIO Performance Drivers

      HC3 uses the KVM hypervisor, which can provide para-virtualized devices to the guest OS to decrease latency and improve performance for the virtual devices. Virtio is the standard used by KVM. We recommend selecting performance drivers, which create Virtio block devices, for any supported OS. Emulated block devices are also supported for legacy operating systems.

      Virtio driver support has been built into the Linux kernel since 2.6.25. Any Linux distro running a 2.6.25 or later kernel will natively support Virtio network and storage block devices presented by HC3. Older kernels can potentially have the Virtio modules backported. Any modern Linux distro should be on a kernel version late enough to natively support Virtio.
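
      If you want to confirm Virtio support in a given Linux guest before counting on it, a quick check inside the guest might look like this:

      # Kernel 2.6.25 or later has native Virtio support
      uname -r

      # Look for loaded Virtio modules (block, net, PCI transport)
      lsmod | grep virtio

      # Modules built directly into the kernel won't show in lsmod; check the kernel config instead
      grep -i virtio /boot/config-$(uname -r)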

      Virtio drivers for Windows OSs are available for guest and server platforms starting at Windows XP and Windows Server 2003. Any Windows OS beyond those will have Virtio driver support as well. Any OS older than XP or Server 2003 will have to use the emulated non-performance block device type and will experience decreased performance compared to more modern OSs.

      At Scale Computing, we periodically update the Virtio performance drivers provided with HC3 via firmware updates. We recommend only using the included Virtio ISO or one provided by Scale Support. Untested Virtio drivers could cause an inability to live migrate VMs, among other issues. New Virtio drivers will not be automatically added to guest VMs. You will need to mount the ISO to the VM and manually install the updated drivers via Device Manager. You can also utilize group policy to roll out updates of Virtio drivers when they are available.

      posted in Scale Legion scale scale hc3 virtio kvm virtualization hyperconvergence hyperconverged
    • Job Posting: Operations Coordinator at Scale Computing

      Link to application: https://boards.greenhouse.io/scalecomputing/jobs/257375#.V6H_UTsrLIU

      Ensure the following functions are executed with efficiency and accuracy. Document and keep up-to-date all processes pertaining to responsibilities. Create in-house process training or documents that are accessible to those outside of the Operations department, such as support and sales. Establish and report metrics on responsibilities to maintain minimum levels of operation and find efficiencies.

      Important skills & qualifications: 4 year degree or relevant experience. Proficiency in MS Office Suite products. Experience with CRM tool recommended. Ability to work in a fast paced, interrupt driven workplace while maintaining organization, attention to detail and flexibility.

      Responsibilities listed below can change at any time and other projects may be assigned as time and ability warrant. Responsibilities will be assigned on a gradual basis. Moderate lifting may be required. Position reports to the Director of Operations.

      Sales

      • Manage trade-in returns
      • Manage service contract renewals
      • Assign 3rd party licensing and work with Ops Specialist on renewals

      Support/Services

      • Place Support replacement requests within SLA
      • Follow up and process all return cases within 30 days of part shipment
      • Contact customers, and work with finance to invoice if needed, for outstanding returns
      • Follow up and process all Quality cases with contract manufacturer
      • Manage RMA returns that go back directly to contract manufacturer
      • Weekly/monthly/quarterly replacement shipping metrics
      • Reimage nodes as needed

      Shipping/Receiving

      • Ship sales/replacement requests
      • Assist marketing/accounting with other shipments if needed
      • Check in returns, manage daily DAM reporting
      • Resolve minor shipping issues with UPS
      • Schedule roadshow & migration cluster shipping

      Inventory

      • Maintain replacement part inventory and place purchase orders with contract manufacturer
      • Manage spare part dashboard on SalesForce.com
      • Review contract manufacturer portal for replacement order status
      • Track Scale Internal assets
      • Monthly Inventory Audit

      Asset Management – Asset Lifecycle maintenance

      • Maintain asset integrity when parts are shipped to/returned from Customers (i.e. update accounts, move entitlements, etc.)
      • Work with Sales Support Renewal Manager to ensure all Customer entitlements are accurate
      • Manage all internal inventory in Scale’s on-site warehouse

      Accounting/Finance

      • Upload DAM reporting to Confluence
      • Update weekly contract manufacturer stocking and shipment reports
      • Manage monthly RMA reconciliation
      • Review and approve UPS/third party logistics invoices

      Other

      • Record and manage contract manufacturer MQT (manufacturer quality tracker)
      • Place orders with 3rd party vendors as assigned
      • Weekly & monthly metrics pertaining to above responsibilities
      posted in Job Postings job job posting scale
    • Scale Radically Changes Price Performance with Fully Automated Flash Tiering

      Sorry for the Press Release copy, but we wanted to get a uniform announcement out about our new automated flash tiering technology and HEAT.

      Scale Computing Radically Changes Price-Performance in the Datacenter with Fully Automated Flash Tiering

      Scale Computing, the leader in hyperconverged technology across the mid-market, today announced the integration of flash-enabled automated storage tiering into its award-winning HC3 platform.

      This update to Scale’s converged HC3 system adds hybrid storage including SSD and spinning disk with HyperCore Enhanced Automated Tiering (HEAT). Scale’s HEAT technology uses a combination of built-in intelligence, data access patterns, and workload priority to automatically optimize data across disparate storage tiers within the cluster.

      “Hyperconvergence is nothing if not about simplicity and cost. But it is also about performance, especially in the SMB to mid-size enterprises where most, if not all workloads will simultaneously run on a single cluster of nodes,” said Arun Taneja, Founder and Consulting Analyst of the Taneja Group. “Introducing flash into a hard disk based system is easy; the question is how do you do it so that it maintains low cost and simplicity while boosting performance. This is what Scale has done in these new models. The only decision the IT admin and the business user need to make is to determine the importance of the application and its priority. After that flash is invisible to them. The only thing visible is better application performance. This is how it should be.”

      Scale Computing’s HC3 platform brings storage, servers, virtualization, and high availability together in a single, comprehensive system. With no virtualization software to license and no external storage to buy, HC3 solutions lower out-of-pocket costs and radically simplify the infrastructure needed to keep applications optimized and running.

      This update to the HC3 HyperCore storage architecture combines Scale’s HEAT technology with SSD-hybrid nodes that add a new tier of flash storage to new or existing HC3 clusters. HEAT technology combines intelligent automation with simple, granular tuning parameters to further define flash storage utilization on a per virtual disk basis for optimal performance.

      Through an easy-to-use slide bar, users can optionally tune flash priority allocation to more effectively utilize SSD storage where needed from no flash at all for a virtual disk, to virtually all flash by “turning it to 11.” Every workload is different and even a small amount of flash prioritization tuning, combined with the automated, intelligent I/O mapping, can have a big impact on the overall performance of flash storage in the HC3 cluster.

      Unlike other storage systems that use flash storage only for disk caching, Scale’s HC3 virtualization platform adds flash capacity and performance to the total storage pool. Customers will immediately and automatically take advantage of the flash I/O benefits without any special knowledge about flash storage.

      “Like any organization, we have applications that need maximum performance, applications where performance isn’t a priority, and still others where higher performance would be helpful but not mission critical,” said Mike O’Neil, Director of IT at Hydradyne. “But unlike some organizations, we weren’t in a position to dedicate the resources needed to support these differing workloads. With Scale, we will have an architecture in place that immediately and automatically allows VMs to take advantage of flash storage without us even thinking about storage or virtualization configuration.”

      Scale’s HyperCore architecture dramatically simplifies VM storage management without VSAs (Virtual Storage Appliances), SAN protocols and file system overhead. VMs have direct access to virtual disks, allowing all storage operations to occur as efficiently as possible. HyperCore applies logic to stripe data across multiple physical storage devices in the cluster to aggregate capacity and performance. The HyperCore backplane network lets any node and any VM access any disk and is performance optimized to scale as nodes are added.

      “With this release, we radically change the economics and maximize the value of flash storage for all customer segments, from the SMB to the enterprise,” said Jeff Ready, CEO of Scale Computing. “Many vendors use a flash write-cache as a way to mask otherwise sluggish performance. Instead, we have built an architecture that intelligently adjusts to the changing workloads in the datacenter, to maximize the performance value of flash storage in every environment.”

      Scale is deploying its new HEAT technology across the HC3 product line and is introducing a flash storage tier as part of its HC2150 and HC4150 appliances. Available in 4- or 8-drive units, Scale’s latest offerings include one 400 or 800GB SSD with three NL-SAS HDD in 1-6TB capacities and memory up to 256GB, or two 400 or 800GB SSD with 6 NL-SAS HDD in 1-2TB capacities and up to 512 GB memory respectively. Network connectivity for either system is achieved through two 10GbE SFP+ ports per node.

      The new products can be used to form new clusters, or they can be added to existing HC3 clusters. Existing workloads on those clusters will automatically utilize the new storage tier when the new nodes are added.

      For additional information or to purchase, interested parties can contact Scale Computing representatives at https://www.scalecomputing.com/scale-computing-pricing-and-quotes

      posted in Self Promotion scale scale hc3 scale heat scale hc3 hc2150 hyperconvergence hypercore

    Latest posts made by scale

    • The Intersections of Cloud, IoT, and Edge Computing

      https://www.scalecomputing.com/uploads/general-images/VennDiagram.png

      It is 2018 and although we’ve already accepted that cloud computing will not fully replace the IT datacenter, we are still discovering how the rise of IoT will divide computing between the cloud, the edge, and the datacenter. First, though, let me define cloud, IoT, and edge computing in the context of this discussion.

      Cloud computing, in this context, refers to internet-based computing services such as (but not limited to) AWS, Azure, Google Cloud Platform, and others. These are cloud computing resources that can be used to extend the computing resources of a datacenter and, in some cases, replace the computing resource needs of remote sites. Cloud computing may also include the use of cloud-based applications.

      IoT is the proliferation of micro-computing devices that send data to centralized computing resources for processing and analytics. IoT can encompass nearly any kind of computing device, from a common personal device like a phone or tablet, to a camera or GPS on a drone, to a sensor on a piece of manufacturing equipment.

      Edge computing is anywhere outside the datacenter where cloud computing cannot replace on-prem computing needs. Think of a mobile platform like a large ship or mobile oil/gas platform, a remote medical facility, a manufacturing facility, or a retail location. These sites may not have internet connectivity reliable enough for cloud computing to ensure the quality of service they require for on-site computing needs.

      Ok, now that we have the definitions in order, what does it all mean for IT? Well, I’ll give you the age old IT answer: It depends. There are too many scenarios to cover, but for any given scenario, there will likely be at least two of these three types of computing involved. Let’s talk about three examples where these different types of computing might be combined.

      Retail (Cloud + Edge)

      Probably the easiest example where cloud computing and edge computing are combined would be retail. Many retail operations include a combination of online sales and brick-and-mortar stores. The online sales component is often cloud-based, where the brick-and-mortar operations require on-prem systems that also connect with the online systems. A single store operation does not really meet the definition of edge computing since the one store, no matter how modest the computing systems, would be considered the datacenter. Larger operations with multiple store locations and a centralized office/datacenter, however, would definitely meet the definition.

      Retail locations might use a combination of cloud and edge computing for a number of different functions, but often it is desirable to have highly available, on-prem edge computing to make sure key point-of-sale systems remain functional and PCI compliant even if the internet connection is not. Connection to cloud-based applications or VMs may also be needed for store operations that are less sensitive to outages. IoT may also play a role in retail with digital devices in the hands of store associates, point-of-sale devices, security systems, or maybe even smart sensors on things like refrigerated cabinets.

      Agriculture (Cloud + IoT)

      When it comes to managing large areas of land, the limits of traditional networking come up short. Farmers are increasingly using IT to manage farm operations and a device or sensor on a piece of farm equipment in a field a mile away is probably going to be out of range of a wifi router but not a cellular tower. This is where IoT can intersect with cloud computing beyond the reach of on-prem infrastructure.

      Transmitting and analyzing data from the literal field can increase operational efficiency in agriculture and this can only be achieved in real-time by internet connected devices. Unlike with some other industry operations, a break in internet connection to the cloud will not cause crops to stop growing and probably not stop farmers from plowing, planting, or harvesting, but when connected, the data can help perform these tasks more efficiently.

      There are also scenarios where edge computing also fits into agriculture. For example, a dairy farm may have hundreds of sensors connected locally for monitoring milk production where data is collected on edge computing systems and then also sent on to the cloud for data analysis. There are no hard and fast rules on which technologies to use where, but rather simply choosing those that can do the job most effectively.

      Manufacturing (Edge + IoT)

      Manufacturing processes can range from extremely hazardous to fairly benign. A system failure or a loss of production can be extremely costly, whether it leads to a life threatening accident in a steel mill or 10,000 diecast fasteners that don’t fit. Manufacturing sites need reliable computing power to manage complex modern manufacturing processes, and these processes can include hundreds or thousands of sensors or other IoT devices for both safety and efficiency. Whether wired or wireless, these networks of sensors and devices require real-time monitoring and data processing where the cloud can’t quite fill the need.

      To maintain reliable production schedules, production facilities need reliable on-prem, edge computing resources that can gather IoT data and maintain the pace of production. These edge computing systems may also go on to send their data up to the cloud for further processing but they are still vital to maintaining production on the ground when network latency to cloud systems can be an issue.

      These three examples are just a few of the many IT environments that will combine cloud, IoT, and edge computing. Nearly every organization will have IT requirements now or in the future that encompass some or all of these infrastructure technologies.

      Summary

      Cloud, IoT, and edge computing all have very real and critical roles to play in both modern and future IT infrastructure. While their roles will continue to evolve, it is clear that each is a growing part of the IT industry and hybrid IT infrastructures that adopt and combine these different technologies will have a competitive advantage over those who do not. Just as these technologies continue to evolve, so will the ways they’ll continue to intersect.

      posted in Scale Legion iot cloud scale scale hc3 scale blog hyperconvergence hyperconverged edge computing
    • Best of Show - Midmarket CIO Forum

      This is not the first time I have blogged about winning awards at the Midmarket CIO Forum. Our midmarket customers and their peers seem to just naturally recognize the value of our infrastructure solutions. So maybe it isn't too surprising that when the Midmarket CIO Forum introduced a new Best of Show award this year, Scale Computing came out on top along with a win for Best Midmarket Strategy.

      https://www.scalecomputing.com/uploads/general-images/Best_in_Show_CIO.jpg

      I've been in the IT industry for 20 years now and have been involved in many award submissions over those years at all levels of the industry. I've also been in meetings and involved in projects designed to help win awards. If I have learned anything in those years, it is that setting out to win an award is a losing strategy.

      The only time I've been involved in award-winning solutions is when the only objective has been to provide a great solution to customers. The concept of winning in IT should go no further than making IT easier to implement, easier to manage, and cost less. That is what we strive to do at Scale Computing. The fact that we are recognized by industry CIOs is just icing on the cake.

      https://www.scalecomputing.com/uploads/general-images/Awards_CIO.jpg

      posted in Scale Legion scale scale hc3 midmarket cio forum conference awards hyperconvergence hyperconverged
    • Getting Educated on the Scale HC3

      More and more organizations are discovering for themselves how HC3 exceeds their expectations at transforming their IT infrastructure. We set out to change the way organizations thought about IT infrastructure, but so often we find that they don't really believe it until they see it.

      The Metropolitan School District of Wayne Township was no different. Although they vetted HC3 thoroughly against other solutions, they still weren't convinced that HC3 wasn't a risk. It wasn't until the HC3 solution was implemented that they realized they had gotten even more than they had hoped for.

      See for yourself in this video case study.

      Youtube Video

      We understand you may still not be convinced that HC3 can be so easy-to-use and so flexible that it can meet and even exceed your IT infrastructure needs. If not, join us on our weekly live demo and see for yourself why HC3 is everything we say it is and more. Click below to choose your demo time and date.

      posted in Scale Legion scale scale hc3 scale blog hyperconvergence hyperconverged
    • Scale: VM long term archival - Leveraging HC3 VM Export - NAS and Cloud Storage

      Many customers use the built-in HC3 VM export to supplement their regular backup / restore / replication / DR strategies. For those not familiar, HC3 export takes a specified VM, running or shut off, takes a point-in-time snapshot of that VM, lets you specify a remote SMB file server share (which currently must support SMB v1), and creates a fully independent copy of that VM snapshot on that share. The export will create a parent folder with the VM name, an XML file that contains all the configuration information about that VM such as number of vCPUs, RAM, NICs, etc., and a qcow2 format virtual disk file for each virtual disk in that VM. Naturally, there is a VM import function that uses all of that to reverse the process and recreate that VM and its data from those fully independent export files. (Also note that qcow2 is an open format, and there are a variety of tools that can convert to and from qcow2 and other virtual disk formats.)

      While the HC3 UI currently only allows exports to be submitted immediately (and remember they are done from a snapshot, so it's fine to export a running VM), the ScaleCare support team can and will set up simple scheduling of VM exports for you using some "under the hood" tools, even giving you some control of which VMs are exported using "tags" you can add and remove in the HC3 UI, and storing batch VM exports in a date-stamped directory structure on said SMB file share... hmm, guessing most of you can see where this is going. Well, there are a lot of different directions you might go with this depending on your needs.

      Could these VM export files be considered an extra level of backup? Sure! We have customers using monthly, weekly, or even nightly scheduled exports as exactly that.

      Could these exports be retained for long periods of time, even many years? Absolutely, and unlike plain data backups or archives, these are fully bootable VMs with not just the data but the right version of the OS and applications required to access and process that data.

      Where might you keep these export files? Well, there are all sorts of deep and cheap budget NAS storage options available, not to mention roll-your-own software solutions using commodity hardware if you want to go that route. I've heard of other customers using their "old" / retired production infrastructure (servers and storage) to house this export repository.

      Some other things I've personally played around with include storing exports on a Windows server VM with the built-in file system deduplication enabled. Obviously if you are storing lots of versions of the same VM and are able to deduplicate at a sub-file level, you could see very high deduplication rates.
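
      For anyone who wants to experiment with that deduplication angle, a minimal sketch on Windows Server (assuming the exports land on an E: volume; adjust the drive letter to taste) would be:

      # Install and enable Windows data deduplication on the export volume
      Install-WindowsFeature -Name FS-Data-Deduplication
      Enable-DedupVolume -Volume "E:" -UsageType Default

      # Later, check how much space the repeated exports are saving
      Get-DedupStatus -Volume "E:"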

      I've also played around with using cloud storage to achieve high capacity / off-site long term retention and will likely post more about some of these solutions in the future. From "file servers in the sky" to cloud storage gateway solutions available in the market, many could run as virtual appliances right on your HC3 system. Further, there are all sorts of low level tools to simply copy files from ground to cloud that an admin could script; for example, I've used azcopy. Although the AWS Cloud Storage Gateway only exposes storage as NFS and iSCSI, and is only released as a VMware VMDK or Hyper-V VHD, I have converted and run those virtual appliances on HC3 to provide a local gateway to AWS S3 cloud storage... hopefully AWS will fully support a native KVM version soon since they are converting their whole EC2 cloud back end to use KVM as the hypervisor.
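
      As a hedged example of that script-it-yourself approach (using the current v10 azcopy syntax; the local path, storage account, container, and SAS token below are all placeholders you would substitute):

      # Recursively push a date-stamped export folder up to Azure blob storage
      azcopy copy "D:\HC3Exports\2018-03-01" "https://<storageaccount>.blob.core.windows.net/<container>?<sas-token>" --recursive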

      I'm currently playing with a "preview" feature from Microsoft Azure called Azure File Sync that essentially provides "cloud tiering" to an on-prem Windows file server. So I do all my HC3 exports to that Windows file share, which in my case is running as a VM on HC3, and those files then get immediately "synced" to an Azure cloud-based file share, so I get rapid off-site and off-cluster protection. As those files get older and I start to fill my local file share, eventually older files are "stubbed" on my local file server to free that space and the data exists only in Azure, yet can be retrieved automatically if it is accessed. So conceptually you could store years' worth of HC3 exports on this Azure tiered share with only a small percentage of the overall storage needed on the ground. This feature is still in preview and there are a number of limitations on share size, etc. that exist today, but it is one interesting example that may be ready for prime time soon. https://azure.microsoft.com/en-us/blog/announcing-the-public-preview-for-azure-file-sync/

      Would be interested in hearing what products and solutions HC3 users are using or are interested in using for deep and cheap, long term archive storage...

      posted in Scale Legion scale scale hc3 storage archival hyperconvergence hyperconverged
    • Running Docker Containers in Scale HC3 VMs ... on Linux or Windows

      Let me begin by stating that I'm no docker / containers expert, but we've been getting an increasing number of questions about containers on HC3, as well as an increasing number of customers actually using containers in production, so I wanted to gather up some information, try some things out myself, and begin a discussion here.

      For years, you have been able to run Linux based containers (using docker and LXC) inside Linux VMs running on HC3. Nothing really fancy, and there are all sorts of guides on docker out there. But at a high level, on CentOS 7 for example, you simply run "yum install docker" and then "docker run hello-world" as root to run your first container (see the sketch below). So Linux based containers on Linux VMs running on HC3 - check!
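
      For anyone following along at home, the whole Linux side of this on a stock CentOS 7 VM is just a handful of commands (a minimal sketch; run as root or prefix with sudo):

      # Install docker from the CentOS repos, start and enable the daemon, then run the test container
      yum install -y docker
      systemctl start docker
      systemctl enable docker
      docker run hello-world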

      However, Microsoft recently introduced the ability to run Windows based containers (windows binaries) using the Windows Containers feature in Windows Server 2016 and Windows 10. We've had a few people ask about it or try it inside Windows VMs running on HC3 and have heard mixed results, generally around installation or the belief that nested virtualization (VTx) inside the VM was required. In my initial testing, I myself also saw mixed results, but I believe I've "cracked the code" to running docker for windows images on Windows VMs running on HC3.

      tl/dr: docker for windows needs the Windows OS to have a virtual switch configured, which is a component of the Windows Hyper-V role... if it's not installed, it will try to install Hyper-V, appear to work, but not really (and it can actually pretty badly mess up Windows, so don't do this on production VMs! Use snapshots, test, etc.) If you try to install Hyper-V using the add roles / features wizard inside an HC3 VM, it will complain that the CPU isn't VM capable because we don't pass the VTx flags into the guest OS (by design). The workaround seems to be to install the Hyper-V role using DISM (which doesn't seem to check the CPU flags), then configure a virtual switch (using either PowerShell or the Hyper-V Manager GUI), THEN install docker for Windows (selecting the option prompted to use Windows Containers). I'll give some steps and screenshots below.

      So step one would be to install the Hyper-V role and the tools needed to configure the virtual switch... (I expect there is a single command to install both in one step; see the DISM sketch below.)
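
      For reference, the DISM route might look like the following from an elevated prompt (a sketch only; feature names can vary by Windows SKU, so verify yours with "dism /online /get-features"):

      # Enable Hyper-V via DISM, which does not perform the wizard's CPU VTx check
      dism /online /enable-feature /featurename:Microsoft-Hyper-V /all /norestart

      # Also enable the management tools (PowerShell module and Hyper-V Manager GUI)
      dism /online /enable-feature /featurename:Microsoft-Hyper-V-Management-PowerShell /norestart
      dism /online /enable-feature /featurename:Microsoft-Hyper-V-Management-Clients /norestart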

      https://us.v-cdn.net/6029942/uploads/editor/ss/n91k3dababmf.png

      https://us.v-cdn.net/6029942/uploads/editor/dj/z6hholqkcmsv.png

      Next step is to configure a virtual switch... which I have done both using PowerShell and the Hyper-V Manager.

      https://us.v-cdn.net/6029942/uploads/editor/el/lqyd0ivhcvk0.png
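
      The PowerShell version of that step is a one-liner (the switch name here is just a placeholder; use whatever name you like):

      # Create an internal virtual switch that docker for windows can attach containers to
      New-VMSwitch -Name "DockerSwitch" -SwitchType Internal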

      At some point you also need to enable the Windows Containers feature as well, but it doesn't seem to matter when or how. I've done it using the GUI roles / features wizard; you could do it via PowerShell; and if you skip it and install docker for windows, at some point it will ask you to install it as well. The PowerShell command would be: Enable-WindowsOptionalFeature -Online -FeatureName containers -All

      I don't know if it was required, but I specifically selected to download and install from the Docker Edge channel to get the latest features as of March 2018. At a point during the install I was asked whether I wanted to switch to use the built-in Windows Container support, and I responded yes.

      https://us.v-cdn.net/6029942/uploads/editor/x2/xbeysqfnb9uj.png

      After the install I was able to run the windows version of hello-world and have also run the full microsoft/windowsservercore container with the "powershell" command. I've also tried other windows based containers including SQL Server 2016 ("docker search microsoft" is a good place to start).

      https://us.v-cdn.net/6029942/uploads/editor/7v/8n2387qyv73k.png

      One capability available to windows containers on physical machines is, instead of sharing the same base windows kernel, launching a new kernel inside a hyper-v VM for greater isolation (also known as hyper-v containers). Attempting to start a container with the --isolation=hyperv flag fails because that "level 2" VM can't be created using hyper-v.

      C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: container 0b2c3ccb877d0f250cb2a03c00a909838f998f01d65b03a031255927a9faa6d6 encountered an error during CreateContainer: failure in a Windows system call: No hypervisor is present on this system
      

      Trying to run Linux based docker containers on Windows also fails with various messages as expected.

      As always, I would love to hear from HC3 users about their thoughts / plans / use or questions around containers in general (hint: there are at least a few possible future features relating to containers on HC3 that we will be monitoring demand for from our customer base).

      posted in Scale Legion scale scale hc3 docker virtualization linux windows hyperconvergence hyperconverged
    • Migration on Tiered Clusters

      If you've ever imported a large amount of data to an HC3 tiered node cluster (a cluster with nodes containing SSDs), you've likely noticed an odd behavior. During data import the SSDs fill up much faster than any HDDs, but then their utilization goes back down seemingly on its own over time. Why?

      On a tiered cluster all new writes to virtual disks by default go through the SSD tier in order to improve performance. As blocks are determined to be hot or cold (highly active or relatively dormant) they are tiered accordingly and will either remain on SSD or will be moved down to the HDDs.

      This default behavior of prioritizing all writes to SSD may not be desirable for large data migrations to the HC3 cluster. It is possible to circumvent the behavior by setting the SSD priority level (the HEAT Priority in the HC3 web interface) for the virtual disk to 0 during the data migration. When an HC3 virtual disk's HEAT Priority is set to 0, all new writes on the virtual disk will be written to the spinning disks, bypassing the SSDs. Once you are finished with the migration, change the SSD priority to the desired level and the cluster will automatically detect the hot or cold blocks and begin tiering them appropriately.

      posted in Scale Legion scale scale hc3 hyperconvergence hyperconverged
    • Marketing Post About IT Infrastructure

      Introduction

      You’ve no doubt read countless blog posts or marketing emails that have tried to market some IT product or solution to you. Also, you are not a mindless consumer, so you have an idea of how these pieces of marketing content work. This is no exception.

      This introduction is the part of the post where I, as a content marketer, try to appeal to your emotions and get you hooked into reading more. I would likely say something like, ‘This week was the official start of the Spring season and we can now look forward to some warm weather, green trees, and outdoor fun!’ You know, something positive that almost anyone can relate to who lives far enough away from the equator.

      The Problem

      This is also the part of the post where I talk about some problem you likely have as an IT professional or IT organization. I might tell you some aspects of IT infrastructure are too complex, or that they are too expensive, or that you spend too much valuable time performing mundane tasks. Any or all of these are likely true to some degree, and you would want to know more.

      I’d next focus on one of these problem topics in more detail, explaining further how it may be affecting you personally. Let’s say I focus on complexity in infrastructure. I’d probably go with an assumption that you have a VMware virtualized environment with at least some servers and a SAN or NAS appliance. Fairly safe guess, right? Even if you don’t, I’m pretty sure you’ll know what I’m talking about. It is an extremely common IT infrastructure setup so why is it a problem, you may wonder.

      The truth is that you are already aware of the problems, although you may have taken them for granted. Therefore, it is my task to point them out to you. I’ll likely mention that your servers, storage, virtualization, and even disaster recovery solutions may all be from different vendors and as such, you have multiple sets of patches and updates to apply independently, multiple support organizations to work with, multiple maintenance contracts and license renewals to deal with, and lots of chances for these technologies to conflict.

      I might also talk about how difficult it can be to implement and integrate these different vendor solutions, or how the level of integration makes it hard to scale out or scale up. I might appeal to your emotions about how many nights and weekends you lose to work because of system patches and updates or needing to upgrade, replace, or scale out existing infrastructure. The more I talk about it, the more likely you are to realize how these problems may apply to your organization or you personally.

      The Solution

      My favorite part of the post. This is the part where I tell you that there is an answer to your problems and it is our HC3 solution from Scale Computing. But I am not going to do that this time. As true as it might be that HC3 will solve many of your IT infrastructure problems, it is just more effective for you to see it for yourself on a live demo or hear for yourself from one of our customers through a case study or through a personal customer referral.

      While it may be my job to market to you in this kind of format, sometimes it is just easier to let the product, backed by the hard work of our dedicated product, development, and support teams, speak for itself. After all, you don’t want to waste time being marketed to. You just want IT solutions that will make your organization more successful and make you an IT superstar.

      posted in Scale Legion scale scale hc3 infrastructure marketing social media
    • RE: Random Thread - Anything Goes

      @craig-theriac Thanks!

      posted in Water Closet
    • RE: Random Thread - Anything Goes

      @hobbit666 said in Random Thread - Anything Goes:

      But I want to look at hyperconverged like a 3 node scale deployment. So need a quick quote 😁😁

      Awesome, that's what we like to hear. Let me see what I can do.

      posted in Water Closet
    • RE: Scale Computing General News

      Idaho Farm Bureau chooses Scale HC3

      "Though they have different pressures and market demands, small and midsized insurers aren’t ignoring the rapid digital transformation sweeping the insurance industry.

      Idaho Farm Bureau is in the midst of upgrading the company’s IT infrastructure and core systems, initiatives that are allowing the multiline P&C insurer to glean insight and improve the customer experience using next-generation digital tools like drones, smarthome devices and geographical information systems (GIS).

      CIO Adam Waldron says that the company was looking to set up a hot disaster-recovery site when it contracted with Scale Computing’s HC3 for server, storage and virtualization. Now that Scale is up and running, the company is able to leverage the investment to support advanced digital efforts. Scale integrates with Google Cloud Platform to allow customers to tap into additional storage and running environments at the same time it continues to work on-premise.

      “We were a VMWare shop, but to get the capabilities [and hardware] we needed, it would double our licensing costs,” he says. “What it came down to was scale, rapid expansion [of our server space] when we needed it.”"...

      posted in Scale Legion