
    Posts

    • Scale UK Case Study: Penlon

      Location: Abingdon, Oxfordshire, UK
      Industry: Medical Manufacturing

      Key Challenges

      • Existing solution was difficult and complex to manage
      • Inability to easily migrate data
      • The legacy solution was outdated and updates were proving difficult
      • No ability to scale out
      • Concerns over the reliability of business continuity

      Scale Computing Solution
Penlon selected Scale Computing’s HC3 cluster to support over 40 virtual machines.

      Business Benefits

      • No licensing costs
      • Improved data centre capabilities
• Dramatically reduced management time
      • Added ability to scale out the IT infrastructure and plan IT budgets
      • Reduction in RPO and RTO
      • Complete business continuity for the IT environment

Penlon is a leading medical device manufacturing company based in Abingdon. Established in 1943, the company has a long-standing reputation for quality and service within the medical industry, manufacturing and distributing products and systems for anesthesia, intubation, oxygen therapy, and suction. Penlon operates internationally, with a presence in over 90 countries spanning Europe, North America, the Middle East, and Asia.

As a traditional medical manufacturing company, Penlon constantly reviews its product design, manufacturing processes, and IT systems. As part of this, Penlon reviewed its IT systems to deliver on two main objectives: simplified management and business continuity.

Having previously moved to virtualisation to save time and create a more streamlined IT environment, Penlon wanted to simplify the management of its infrastructure and reduce its complexity whilst guaranteeing business continuity for its customers.

      IT Challenges
Penlon had previously relied on a traditional VMware environment, but over time it proved too complex and difficult to manage. In particular, the complexity of the system meant that Penlon struggled to migrate data and install updates. Without regular updates, the company was left vulnerable to downtime and costly outages, as there was no way to ensure it had the most up-to-date environment.

Tony Serratore, IT Manager at Penlon, commented, “The systems were vastly difficult to manage and when it came to updates we had to ensure everything was in sync. If the system seemed to be working we would not even think about installing upgrades as it was too complex and came with risks. But this wasn’t a long-term solution.”

Not only was the existing VMware environment difficult to update, but the system was also high maintenance. The IT team had to allocate resources to ensure its smooth running, costing both time and money. “The choice was to either stay with our current system, running the risk of downtime, or look for a new solution that was simple, cost effective, and easy to use,” explained Serratore.

      Identifying the key requirements
      Of paramount importance, not just to the IT team but to the company, was the need to guarantee continuity. Penlon also wanted the added ability to scale, adding capacity as and when needed. Serratore commented, “As an international company Penlon is constantly looking to expand its business. However, planning budgets for IT infrastructure ahead can be difficult. We wanted a solution which would work with us and support our growing business, offering the flexibility and agility we needed.”

      Proving the concept
After evaluating the market and considering a number of other vendors, such as SimpliVity, Penlon opted for the Scale solution. The company was introduced to Scale Computing through reseller NAS UK and opted for a two-week proof of concept (POC) of the Scale Computing hyperconverged HC3 cluster. Serratore explained, “After registering our interest, we received a POC product within two days. Not only were we impressed with the technology but we felt Scale Computing would value us as a customer if we made that investment.”

After running a successful POC, Penlon opted for the HC4000 and HC1000 clusters, which offered disaster recovery, high availability, cloning, replication, and snapshots, providing complete business continuity.

      Enjoying scalability, simplified management and business continuity
      The HC3 clusters dramatically reduced management time, allowing the IT department to focus on other challenges rather than IT infrastructure. In addition, the technology offered scale-out architecture providing Penlon the room to expand. Serratore noted, “The Scale solution fits perfectly with our IT roadmap as we can add capacity as and when needed. We don’t need to over provision and can simply expand our environment when needed. With Scale we can align IT strategy with business growth.”

      “After implementing Scale, we reduced our management time by hours. Previously we would have spent time managing our VMware environment but the Scale solution is so easy to use we have been able to dramatically reduce management time,” continued Serratore. “Our RPO and RTO dramatically reduced from three days to a matter of minutes. We can now use this time to focus on other IT priorities making a real difference to the business.”

      “With Scale we have effectively been able to build a data centre in a server room, without cloud based services. The technology provides servers, storage and virtualisation in one solution with complete transparency,” concluded Serratore.

      Original Location: https://www.scalecomputing.com/case_studies/penlon/

      posted in Self Promotion scale scale hc3 hyperconvergence case study
    • Press Release: Scale Computing Radically Simplifies Disaster Recovery with Launch of DRaaS Offering

      Our public press release here: https://www.scalecomputing.com/press_releases/scale-computing-radically-simplifies-disaster-recovery-with-launch-of-draas-offering/

      Scale Computing Radically Simplifies Disaster Recovery with Launch of DRaaS Offering

      Scale Computing, the market leader in hyperconverged storage, server and virtualization solutions for midsized companies, today launched its ScaleCare Remote Recovery Service, a Disaster Recovery as a Service (DRaaS) offering that provides offsite protection for businesses at a price that fits the size and budget of their datacenter needs.

Building on the resiliency and high availability of the HC3 Virtualization Platform, ScaleCare Remote Recovery Service is the final layer of protection from Scale Computing needed to ensure business continuity for organizations of all sizes. ScaleCare Remote Recovery Service is a cost-effective alternative to backup and offsite shipping of physical media or third-party vendor hosted backup options. Built into the HC3 management interface, users can quickly and easily set up protection for any number of virtual machines to Scale Computing’s SSAE 16 SOC 2 certified, PCI compliant remote datacenter hosted by LightBound.

      “The ScaleCare Remote Recovery Service has put my mind at ease when it comes to recovery,” said David Reynolds, IT manager at Lectrodryer LLC. “Setting up automatic monthly, weekly, daily and minute snapshots of my VMs is unbelievably easy. All these are pushed to the cloud automatically and removed on the date you set them to expire. Highly recommended.”

      ScaleCare Remote Recovery Service provides all the services and support businesses need without having to manage and pay for a private remote disaster recovery site. Whether protecting only critical workloads or an entire HC3 environment, users pay for only the VM protection they need without any upfront capital expense.

Built on the snapshot technology already present in the HC3 HyperCore architecture, ScaleCare Remote Recovery Service allows users to customize their replication schedules to maximize protection, retention, and bandwidth efficiency. After the initial replica is made, only changed blocks are sent to the remote datacenter. Remote availability of failover VMs within minutes, failback to on-site local HC3 clusters, and rollbacks to point-in-time snapshots as needed provide ultimate data protection and availability.
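To make the changed-block idea concrete, here is a minimal sketch of snapshot-diff replication in Python. It is a toy model under our own assumptions (fixed 4KB blocks, per-block hashes, invented function names); the real HC3 snapshot internals are not public and certainly differ:

```python
# Illustrative sketch only: toy changed-block replication via per-block
# hashes. Stands in for real snapshot internals, which work differently.
import hashlib

BLOCK_SIZE = 4096

def block_hashes(data: bytes):
    """Hash each fixed-size block so changed blocks can be detected."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(prev_hashes, curr_data: bytes):
    """Return the (index, block) pairs that differ from the last snapshot."""
    curr_hashes = block_hashes(curr_data)
    delta = []
    for i, h in enumerate(curr_hashes):
        if i >= len(prev_hashes) or prev_hashes[i] != h:
            delta.append((i, curr_data[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]))
    return delta, curr_hashes

# First replica: every block ships to the remote site.
disk = b"A" * BLOCK_SIZE * 4
delta, baseline = changed_blocks([], disk)
print(f"initial replica: {len(delta)} blocks sent")      # 4 blocks

# Later snapshot: only the one modified block crosses the wire.
disk = disk[:BLOCK_SIZE] + b"B" * BLOCK_SIZE + disk[2 * BLOCK_SIZE:]
delta, baseline = changed_blocks(baseline, disk)
print(f"incremental replica: {len(delta)} blocks sent")  # 1 block
```

The point is simply that after the baseline replica, each cycle ships only blocks whose contents changed, keeping WAN usage proportional to churn rather than to disk size.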

      “Remote data protection is the best course of action organizations can take to alleviate the proverbial placing of all of their eggs in one basket, but hosting a site for disaster recovery purposes is often not something midrange companies are prepared to handle themselves,” said Jeff Ready, CEO and co-founder of Scale Computing. “With remote, continuous replication and failover features already baked into the HC3 Virtualization Platform, launching the ScaleCare Remote Recovery Service is an ideal way for us to ensure our customers have the remote disaster recovery they need without the costs and headaches of doing it themselves.”

      Pricing for ScaleCare Remote Recovery Service starts at $100 per month per virtual machine. For more information or to sign up for services, interested parties may contact Scale Computing via the company’s website at http://www.scalecomputing.com or by calling 1-877-SCALE-59.

Thanks for putting up with the press release postings, guys! 🙂

      posted in Self Promotion scale scale hc3 draas disaster recovery
    • RE: It's 10K Day

      Congrats to the community on a major milestone.

      posted in Announcements
    • RE: ThanksAJ Having a Tough Morning

      Glad to hear that you are okay, @thanksajdotcom

      posted in Water Closet
    • 7 Reasons Why I Work at Scale Computing

      I came from a background in software that has spanned software testing, systems engineering, product marketing, and product management, and this year my career journey brought me to Scale Computing as the Product Marketing Manager. During the few months I have been with Scale, I’ve been amazed by the hard work and innovation embodied in this organization. Here are some of the reasons I joined Scale and why I love working here.

      1 – Our Founding Mission

      Our founders are former IT administrators who understand the challenges faced by IT departments with limited budgets and staff. They wanted to reinvent IT infrastructure to solve those challenges and get IT focused on applications. That’s why they helped coin the term “hyperconverged infrastructure”.

      2 – Focus on the Administrator

Our product family, HC3, was designed from the start to address the needs of datacenters managed by as few as one administrator, combining the features and efficiency of enterprise solutions at any budget. HC3 scales from small to enterprise because its roots are planted in the needs of the individual administrator focused on keeping applications available.

      3 – Second to None Support

I firmly believe that good support is the cornerstone of successful IT solutions. Our world-class support includes not only hardware replacement but 24/7/365 phone support from qualified experts. We don’t offer any other level of support because we believe every customer, no matter their size or budget, deserves the same level of support.

      4 – 1500+ Customers, 5500+ Installs

Starting in 2008 and bringing HC3 to market in 2012, we’ve sold to customers in nearly every industry, including manufacturing, education, government, healthcare, finance, hotel/restaurant, and more. Customer success is our driving force. Our solution is driving that success.

      5 – Innovative Technology

We designed the HC3 solution from the ground up. Starting with the strengths of open source KVM virtualization, we developed our own operating system, HyperCore, which includes our own block-access, direct-attached storage system with SSD tiering for maximum storage efficiency. We believe that if it is worth doing, it is worth doing the right way.

      6 – Simplicity, Scalability, and Availability

These core ideas keep us focused on reducing the cost and management burden of deployment, software and firmware updates, and capacity scaling, and on minimizing planned and unplanned downtime. I believe in our goal to minimize the cost and management footprint of infrastructure to free up resources for application management and service delivery in IT.

      7 – Disaster Recovery, VDI, and Distributed Enterprise

      HC3 is more than just a simple infrastructure solution. It is an infrastructure platform that supports multiple use cases including disaster recovery sites, virtual desktop infrastructure, and remote office and branch office infrastructure. I love that the flexibility of HC3 allows it to be used in nearly every type of industry.

      Scale Computing is more than just an employer; it is a new passion for me. I hope you keep following my blog posts to learn more about the awesome things we are doing here at Scale and I hope we can help you bring your datacenter into the new hyperconvergence era.

      Originally posted on the Scale Blog: http://blog.scalecomputing.com/7-reasons-why-i-work-at-scale-computing/

      posted in Self Promotion scale scale blog
    • RE: Designing a Reliable Web Application

You can use a high availability platform (Scale HC3 would be an example, but it is only one of many options) to handle the failover of the web servers, which, as @scottalanmiller said, are normally read-only and don't have to worry about crash consistency.

For a database, you would ideally run at least one virtual machine on each of two different servers or cluster nodes and use something like "pinning" to guarantee that each database instance remains on separate hardware. Then you can use the database's own replication functionality to maintain data safety in the event of a hardware failure. (There is a rough sketch of the pinning idea at the end of this post.)

This would limit the effort necessary to protect the different functions, leaving only the database needing the additional effort. But it would not address load balancing for the application, only protection of availability.
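To illustrate the pinning idea, here is a minimal sketch of anti-affinity placement. The function, node names, and load metric are all invented for illustration; HC3's actual pinning mechanism may differ:

```python
# Illustrative sketch of anti-affinity ("pinning") placement: each database
# VM is held on a different node so a single hardware failure cannot take
# out both instances. Names and the load metric are invented.
def place_db_vms(vms, node_load):
    """Assign each VM to a distinct node, preferring the least-loaded ones."""
    if len(vms) > len(node_load):
        raise ValueError("need at least one node per database VM")
    # Least-loaded nodes first, one VM per node.
    nodes = sorted(node_load, key=node_load.get)
    return dict(zip(vms, nodes))

placement = place_db_vms(
    ["db-primary", "db-replica"],
    {"node-1": 0.62, "node-2": 0.35, "node-3": 0.48},
)
print(placement)  # {'db-primary': 'node-2', 'db-replica': 'node-3'}
```

With the two instances guaranteed to sit on separate hardware, the database's native replication (MySQL or PostgreSQL streaming replication, for example) takes care of keeping the data safe.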

      posted in IT Discussion
    • Hyperconvergence for the Distributed Enterprise

      IT departments face a variety of challenges but maybe none as challenging as managing multiple sites. Many organizations must provide IT services across dozens or even hundreds of small remote offices or facilities. One of the most common organizational structures for these distributed enterprises is a single large central datacenter where IT staff are located supporting multiple remote offices where personnel have little or no IT expertise.

These remote sites often need the same variety of application and data services as the central office, but on a smaller scale. To run these applications, these sites need multiple servers, storage solutions, and disaster recovery. With no IT staff on site, remote management is essential to cut down on the productivity cost of frequently sending IT staff to remote sites to troubleshoot issues. This is where the turnkey appliance approach of hyperconvergence shines.

      A hyperconverged infrastructure solution combines server, storage, and virtualization software into a single appliance that can be clustered for scalability and high availability. It eliminates the complexity of having disparate server hardware, storage hardware, and virtualization software from multiple vendors and having to try to replicate the complexity of that piecemeal solution at every site. Hyperconverged infrastructure provides a simple repeatable infrastructure out of the box. This approach makes it easy to scale out infrastructure at sites on demand from a single vendor.

At Scale Computing, we offer the HC3 solution that truly combines server, storage, virtualization, and even disaster recovery and high availability. We provide a large range of hardware configurations to support everything from very small implementations up to full enterprise datacenter infrastructure. Also, because any of these various node configurations can be mixed and matched with other nodes, you can quickly scale the infrastructure at a site with extra capacity and/or compute power as you need it.

HC3 management is all web-based, so sites can easily be managed remotely. From provisioning new virtual machines to opening consoles for each VM for simple, direct management from the central datacenter, it’s all in the web browser. There is even a reverse SSH tunnel available for ScaleCare support to provide additional remote management of lower-level software features in the hypervisor and storage system. Redundant hardware components and self-healing mean that hardware failures can be absorbed while applications remain available until IT staff or local staff can replace the failed components.

With HC3, replication is built in to provide disaster recovery and high availability back to the central datacenter in the event of an entire site failure. Virtual machines and applications can be back up and running within minutes, allowing remote connectivity from the remote site as needed. You can achieve both simplified infrastructure and remote high availability in a single solution from a single vendor. One back to pat or one throat to choke, as they say.

      If you want to learn more about how hyperconvergence can make distributed enterprise simpler and easier, talk to one of our hyperconvergence experts.

      Original article: http://blog.scalecomputing.com/hyperconvergence-for-the-distributed-enterprise/

      posted in Self Promotion scale scale hc3 hyperconvergence scalecare virtualization
    • RE: MangoLassians assemble to help out Spiceworks

      Hoping that everyone lands somewhere safely and gets to make the best of this opportunity.

      posted in Water Closet
    • RE: Is the Time for VMware in the SMB Over?

Can I mention the several major updates to Scale's KVM platform in that time period as well?

      posted in IT Discussion
    • RE: NTG Lab is moving!

      Excited to see the lab back up and running again in the new location!

      posted in IT Discussion
    • RE: Gaming - What's everyone playing / hosting / looking to play

      @wirestyle22 said in Gaming - What's everyone playing / hosting / looking to play:

      @scottalanmiller said in Gaming - What's everyone playing / hosting / looking to play:

      @wirestyle22 said in Gaming - What's everyone playing / hosting / looking to play:

      @scottalanmiller said in Gaming - What's everyone playing / hosting / looking to play:

      @wirestyle22 said in Gaming - What's everyone playing / hosting / looking to play:

      @scottalanmiller said in Gaming - What's everyone playing / hosting / looking to play:

      @nadnerB said in Gaming - What's everyone playing / hosting / looking to play:

      @IRJ said in Gaming - What's everyone playing / hosting / looking to play:

      @nadnerB said in Gaming - What's everyone playing / hosting / looking to play:

      Playing KOTOR 2. Hoping to finish it this time.

      It is a good game, but not as good as the original. I can't stand the old lady on it.

      Don't have the first one... yet

      I have the original on both Steam and on the NVidia Shield TV.

      How is the NVidia Shield working out for you btw?

      I don't have it with me at the moment but from what little I have used it, it is awesome.

      I'm interested in getting a Steam Link for some of my retro games.

      Steam Link is just remote access to your Steam computer. It doesn't enable any specific functionality.

      It would be nice to play everything on my TV. Especially when I have my little brother over etc. I like my guests downstairs instead of in my room.

Steam Link looks really interesting. I have high hopes that they will get it to be useful. I have not tried it yet but have been keeping an eye on it.

      posted in Water Closet
    • RE: What is the Upside to VMware to the SMB?

We certainly feel that KVM offers a high degree of value in the SMB space. It is powerful and flexible, with good options for kernel-level expansion like Scale's unique storage layer.

      posted in IT Discussion
    • 4 Hidden Infrastructure Costs for the SMB

Infrastructure complexity is not unique to enterprise datacenters. Just because a business or organization is small does not mean it is exempt from the feature needs of big enterprise datacenters. Small and mid-size organizations require fault tolerance, high availability, mobility, and flexibility as much as anyone. Unfortunately, the complexity of traditional datacenter and virtualization architecture hits the SMB the hardest. Here are 4 of the hidden costs that can cripple the SMB IT budget.

      1 – Training and Expertise

Setting up a standard virtualization infrastructure can be complex; it requires virtualization, networking, and storage expertise. In larger enterprises, expertise is often spread across dozens of admins through new hires, formal training, or consulting. However, in the SMB datacenter, with only a handful of admins (or even just one) and limited budgets, expertise can be harder to come by. Self-led training and research can take costly hours out of every week, and admins may only have time to achieve the minimum level of expertise needed to maintain an infrastructure, without the ability to optimize it. Lack of expertise affects infrastructure performance and stability, preventing the best possible return on infrastructure investment.

      2 – Support Run-Around

A standard virtualization infrastructure has components from a number of different vendors, including the storage vendor, server vendor, and hypervisor vendor, to name just the basics. Problems arising in the infrastructure are not always easy to diagnose, and with multiple vendors and vendor support centers in the mix, this can lead to a lot of finger pointing. Admins can spend hours if not days calling various support engineers from different vendors to pinpoint the issue. Long troubleshooting times can mean long outages and lost productivity because of vendor support run-around.

      3 – Admin Burn-Out

The complexity of standard virtualization environments, containing multiple vendor solutions and multiple layers of hardware and software, means longer nights and weekends performing maintenance tasks such as firmware updates, refreshing hardware, adding capacity, and dealing with outages caused by non-optimized architecture. Not to mention, admins of complex architectures cannot detach long enough to enjoy personal time off because of the risk of an outage. Administrators who have to spend long nights and weekends dealing with infrastructure issues are not as productive in daily tasks and have less energy and focus for initiatives to improve process and performance.

      4 – Brain Drain

Small IT shops are particularly susceptible to brain drain. The knowledge of all of the complex hardware configurations and application requirements is concentrated in a very small group, in some cases a single administrator. While those individuals are around, there is no problem, but when one leaves for whatever reason, there is a huge gap in knowledge which might never be filled. There can be huge costs involved in rebuilding that knowledge or redesigning systems to match the expertise of the remaining or replacement staff.

Although complexity has hidden costs for all small, medium, and enterprise datacenters, the complexity designed for the enterprise and inherited down by the SMB makes those costs more acute. When choosing an infrastructure solution for a small or mid-size datacenter, it is important to weigh these hidden costs against the cost of investing in solutions that offer automation and management that mitigate the need for specialized expertise, support run-around, and after-hours administration. Modern hyperconverged infrastructure like HC3 from Scale Computing offers simplicity, availability, and scalability to eliminate these hidden infrastructure costs.

      Original Article: http://blog.scalecomputing.com/4-hidden-infrastructure-costs-for-the-smb/

      posted in Self Promotion scale scale hc3
    • The VSA is the Ugly Result of Legacy Vendor Lock-Out

VMware and Hyper-V with the traditional Servers+Switches+SAN architecture – widely adopted by the enterprise and the large mid-market – works. It works relatively well, but it is complex (many moving parts, usually from different vendors), necessitates multiple layers of management (server, switch, SAN, hypervisor), and requires the use of storage protocols to function at all. Historically, this has meant either needing many people from several different IT disciplines to effectively virtualize and manage a VMware/Hyper-V based environment, or smaller companies taking a pass on virtualization, as the soft and hard costs associated with it put HA virtualization out of reach.

[Image: legacy-300x171.jpg]

With the advent of hyperconvergence in the modern datacenter, HCI vendors had a limited set of options when it came to the shared storage part of the equation. Lacking access to the VMkernel and NTOS kernels, they could either virtualize the entire SAN and run instances of it as a VM on each node in the HCI architecture (horribly inefficient), or move to hypervisors that aren’t from VMware or Microsoft. The first choice is what most took, even though it has a very high cost in terms of resource efficiency and IO path complexity, and nearly doubles the hardware requirements of the architecture. They did this for the sole reason that it was the only way to keep building their solutions on the legacy vendors, given that lock-out and lack of access. Likewise, they found this approach (known as VSA, or Virtual SAN Appliance) easier than tackling the truly difficult job of building an entire architecture from the ground up, clean-sheet style.

The VSA approach – virtualizing the SAN and its controllers – is also known as pulling the SAN into the servers. The VSA, or Virtual SAN Appliance, approach was developed to move the SAN up into the host servers through the use of a virtual machine on each box. This did in fact simplify implementation and management by eliminating the separate physical SAN (but not its resource requirements, storage protocols, or overhead – in actuality, it duplicates those bits of overhead on every node, turning one SAN into 3 or 4 or more). However, it didn’t do much to simplify the data path. In fact, quite the opposite: it complicated the path to disk by turning the IO path from:

application -> RAM -> disk

into:

application -> RAM -> hypervisor -> RAM -> SAN controller VM -> RAM -> hypervisor -> RAM -> write-cache SSD -> erasure code (SW R5/6) -> disk -> network to next node -> RAM -> hypervisor -> RAM -> SAN controller VM -> RAM -> hypervisor -> RAM -> write-cache SSD -> erasure code (SW R5/6) -> disk

This approach uses so many resources that one could run an entire SMB-to-midmarket datacenter on just the CPU and RAM allocated to these VSAs.

[Image: VSA-300x156.jpg]

This “stack dependent” approach did, in fact, speed up the time-to-market equation for the HCI vendors that implemented it, but due to the extra hardware requirements, the extra burden on the IO path, and the use of SSD/flash primarily as a caching mechanism for the now-tortured IO path, this approach still brought a solution in at a price point and complexity level out of reach of the modern SMB.

      HCI done the right way – HES

The right way to do an HCI architecture is to take the exact opposite path from all of the VSA-based vendors. From a design perspective, the goal of eliminating the dedicated storage servers, storage protocol overhead, resources consumed, and associated gear is met by moving the hypervisor directly into the OS of a clustered platform that runs storage directly in userspace adjacent to the kernel (known as HES, or in-kernel). This leverages direct I/O, thereby simplifying the architecture dramatically while regaining the efficiency originally promised by virtualization.

[Image: scribe-300x186.jpg]

This approach turns the IO path back into:

      application -> RAM -> disk -> backplane -> disk

This complete stack-owner approach, in addition to regaining the efficiency promised by HCI, allows features and functionality that historically had to be provided by third parties in the legacy and VSA approaches to be built directly into the platform. That enables true single-vendor solutions and radically simplifies the SMB/SME datacenter at all levels – lower cost of acquisition, lower TCO. This makes HCI affordable and approachable for the SMB and mid-market. It eliminates the extra hardware requirements, the overhead of the SAN, and the overhead of storage protocols and re-serialization of IO. It returns efficiency to the datacenter.

When the IO paths are compared side by side, the differences in overhead and efficiency become obvious, and the penalties and pain caused by legacy vendor lock-in really stand out: VSA-based approaches (in a basic 3-node implementation) use as much as 24 vCores and up to 300GB of RAM (depending on the vendor) just to power the VSAs and boot themselves, versus HES using a fraction of a core per node and 6GB of RAM total. Efficiency matters.
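As a back-of-the-envelope check on those numbers, here is the arithmetic (the totals are from the paragraph above; splitting them evenly across 3 nodes and reading "a fraction of a core" as 0.25 are our assumptions):

```python
# Rough per-node overhead implied by the figures above, for a 3-node cluster.
# Totals come from the article; 0.25 core/node is an assumed stand-in for
# "a fraction of a core".
NODES = 3
vsa_vcores, vsa_ram_gb = 24, 300          # VSA totals (upper bound)
hes_vcores, hes_ram_gb = 0.25 * NODES, 6  # HES totals (assumed core count)

print(f"VSA per node: {vsa_vcores / NODES:.0f} vCores, {vsa_ram_gb / NODES:.0f} GB RAM")
print(f"HES per node: {hes_vcores / NODES:.2f} vCores, {hes_ram_gb / NODES:.0f} GB RAM")
print(f"RAM overhead: {vsa_ram_gb / hes_ram_gb:.0f}x higher for the VSA approach")
```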

[Image: diff-300x123.jpg]

      Original post: http://blog.scalecomputing.com/the-vsa-is-the-ugly-result-of-legacy-vendor-lock-out/

      posted in Self Promotion scale scale hc3 scale blog hyperconvergence
    • RE: Scale Computing Brings First Fully Featured Sub-$25,000 Flash Solution to SMB Market

      @Dashrender said in Scale Computing Brings First Fully Featured Sub-$25,000 Flash Solution to SMB Market:

      @PSX_Defector said in Scale Computing Brings First Fully Featured Sub-$25,000 Flash Solution to SMB Market:

      SATA is perfectly fine for 90% of what people do. It's the 8% that need something more that would need SAS based while the last 2% will need PCI-E performance.

      With numbers like those, ML seems like an odd place to be talking/worrying about it. Also, are the last 10% really looking at a Scale Cluster? I suppose some percentage of them might be.

90% of what people do, not 90% of people. It's a much higher percentage of people. That's why we believe the Scale HC3 tiering system is such a good fit. It allows the majority of your storage to sit on the SATA drives, which are perfectly fast enough for 90% of your needs, and lets the 10% of your needs that require high-performance SSD sit there, without needing two different solutions.

And with our heat-mapping technology we help tune the workloads based on what is actually used, rather than forcing you to pick manually for all workloads. You can override this with manual priorities, but on its own it self-tunes. (There is a rough sketch of the idea below.)

So our hope is that the 90/10 split, which is a good way to think of it, actually makes Scale ideal for the majority of users because they have the 90/10 mix, rather than in spite of it.
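For the curious, here is a rough sketch of how heat-map tiering with manual priority overrides can work. It is purely illustrative, with invented names, a simple decay-based heat model, and made-up numbers; it is not the actual HyperCore algorithm:

```python
# Illustrative sketch only: a toy heat-map tiering policy. Accesses warm a
# block (scaled by a manual priority), periodic decay cools everything, and
# the hottest blocks live on SSD. Not Scale's actual implementation.
from collections import defaultdict

class HeatMapTiering:
    def __init__(self, decay=0.5, ssd_capacity_blocks=2):
        self.heat = defaultdict(float)            # per-block access heat
        self.decay = decay                        # cooling factor per interval
        self.ssd_capacity = ssd_capacity_blocks
        self.priority = defaultdict(lambda: 1.0)  # manual override weight

    def record_io(self, block):
        """Each access warms a block, scaled by its manual priority."""
        self.heat[block] += self.priority[block]

    def tick(self):
        """Periodic cool-down so blocks that go quiet drift back to SATA."""
        for block in self.heat:
            self.heat[block] *= self.decay

    def ssd_resident(self):
        """The hottest blocks, up to SSD capacity, are placed on flash."""
        ranked = sorted(self.heat, key=self.heat.get, reverse=True)
        return set(ranked[:self.ssd_capacity])

tier = HeatMapTiering()
tier.priority["db-index"] = 4.0   # manual boost, like a per-disk priority
for _ in range(10):
    tier.record_io("db-index")
tier.tick()                       # a quiet interval passes; heat decays
for _ in range(30):
    tier.record_io("file-share")
tier.record_io("archive")
print(tier.ssd_resident())        # {'db-index', 'file-share'}
```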

      posted in Self Promotion
    • RE: The Four Things That You Lose with Scale Computing HC3

      @Breffni-Potter said in The Four Things That You Lose with Scale Computing HC3:

      @scottalanmiller said

      It's all ethernet, so you could mix it together on a single switch and just VLAN them apart from each other.

      I'm trying to be blunt. 🙂

      Do I need a switch at all for the backplane or can they communicate directly? Do I need to factor in 10GigE switches for redundancy as well?

      Did we manage to answer your questions?

      posted in Self Promotion
    • Scale Makes Play for Nutanix Entry Level Market from El Reg

The Register has an article about our new entry-level cluster that was just announced: Scale Makes Play for Nutanix Entry Level Market

      "The HC1100 has 64GB of DRAM per node instead of the HC1000's 32GB; per-node CPU core count increases from four to six (1.7GHz Broadwell E5-2620 v4) cores, and the SATA disks change to four 1TB SAS 7,200rpm drives.

      The HC1150 nodes also have 64GB of DRAM, eight 2.1GHz Broadwell cores, three 1TB SAS disks, and a single 480GB SSD. Both HC1100 and HC1150 have two 1GbitE network ports, and their enclosures are 1U high."

      posted in Self Promotion scale scale hc3 nutanix hyperconvergence scale hc3 hc1150
    • Scale Awarded New Storage Patent

      http://www.storagenewsletter.com/rubriques/systems-raid-nas-san/scale-computing-assigned-patent/

      Scale Computing, Inc., Indianapolis, IN, has been assigned a patent (9,348,526) developed by White, Philip Andrew, and Hsieh, Hank T., San Francisco, CA, for a “placement engine for a block device.”

The abstract of the patent published by the U.S. Patent and Trademark Office states: “A system, method, and computer program product are provided for implementing a reliable placement engine for a block device. The method includes the steps of tracking one or more parameters associated with a plurality of real storage devices (RSDs), generating a plurality of RSD objects in a memory associated with a first node, generating a virtual storage device (VSD) object in the memory, and selecting one or more RSD objects in the plurality of RSD objects based on the one or more parameters. Each RSD object corresponds to a particular RSD in the plurality of RSDs. The method also includes the step of, for each RSD object in the one or more RSD objects, allocating a block of memory in the RSD associated with the RSD object to store data corresponding to a first block of memory associated with the VSD object.”

      The patent application was filed on March 28, 2014 (14/229,748).

      Not the most exciting thing for IT professionals, but we are pretty excited about the work done and that it has been recognized with a patent.
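To make the abstract a bit more concrete, here is a toy sketch of what a parameter-driven placement engine does: track per-device parameters, then pick the best real devices to back each virtual block. Every class, field, and policy here is invented for illustration; it is not the patented implementation:

```python
# Illustrative sketch only: a toy placement engine in the spirit of the
# patent abstract above (RSD/VSD terminology borrowed; everything else is
# invented for illustration).
from dataclasses import dataclass, field

@dataclass
class RSDObject:
    """In-memory stand-in for a real storage device (RSD)."""
    name: str
    free_blocks: int
    latency_ms: float

@dataclass
class VSDObject:
    """A virtual storage device whose blocks must land on real devices."""
    name: str
    placements: dict = field(default_factory=dict)  # block id -> RSD names

def place_block(vsd, block_id, rsds, copies=2):
    """Pick the 'copies' best RSDs by the tracked parameters and allocate."""
    candidates = [r for r in rsds if r.free_blocks > 0]
    if len(candidates) < copies:
        raise RuntimeError("not enough devices with free space")
    # Favor low latency, then most free space, across distinct devices.
    chosen = sorted(candidates, key=lambda r: (r.latency_ms, -r.free_blocks))[:copies]
    for r in chosen:
        r.free_blocks -= 1
    vsd.placements[block_id] = [r.name for r in chosen]
    return chosen

rsds = [RSDObject("rsd-ssd0", 100, 0.2),
        RSDObject("rsd-hdd0", 1000, 8.0),
        RSDObject("rsd-hdd1", 900, 8.5)]
vsd = VSDObject("vsd-vm-disk0")
place_block(vsd, block_id=0, rsds=rsds)
print(vsd.placements)  # {0: ['rsd-ssd0', 'rsd-hdd0']}
```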

      posted in Self Promotion scale storage scale hc3
    • RE: Random Thread - Anything Goes

      @DustinB3403 said in Random Thread - Anything Goes:

      @scottalanmiller and @Minion-Queen My unread icon is now doing the same thing as reported this morning.

      Same here on Chrome. I was wondering what was going on.

      posted in Water Closet
    • Scale HC3 New and Improved Real-Time Per VM Statistics

This one is actually from last month, but it never got posted and is still pretty relevant, so we're sharing it with you.

      When we designed HC3 clusters, we made them fault-tolerant and highly available so that you did not need to sit around all day staring at the HC3 web interface in case something went wrong. We designed HC3 so you could rest easy knowing your workloads were on a reliable infrastructure that didn’t need a babysitter. But still, when you need to manage your VM workloads on HC3, you need fast reliable data to make management decisions. That’s why we have implemented some new statistics along with our new storage features.

If you haven’t already heard the news (click here), we have integrated SSD flash storage into our already hyper-efficient software-defined storage layer. We knew this would make you even more curious about your per-VM IOPS, so we added that statistic both cluster-wide and per VM, refreshed continuously in real time.

Up until now you have been used to at-a-glance monitoring of CPU utilization, RAM utilization, and storage utilization for the cluster; now you will see the cluster-wide IOPS statistic right alongside what you were already seeing. For individual VMs, you will now see real-time statistics for both storage utilization and IOPS, right on the main web interface view.

[Image: Screenshot-2016-04-25-12.20.41.png]

Why are we doing this now? The new flash storage integration and automated tiering architecture allow you to tune the priority of flash utilization on the individual virtual disks in your VMs. Monitoring the IOPS for each VM will help guide you as you tune the virtual disks for maximum performance. You’ll not only see the benefits of the flash storage more clearly in the web interface, but you’ll also see the benefits of tuning specific workloads to make the best use of the flash storage in your cluster.
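If you're curious what a real-time IOPS figure actually measures, the usual approach is to sample a cumulative I/O counter at a fixed interval and divide the delta by the elapsed time. Here is a minimal sketch of that calculation; it is our own illustration, not HC3 code:

```python
# Illustrative only: deriving an IOPS figure from cumulative I/O counters
# sampled at intervals, the common approach behind real-time IOPS displays.

def iops(prev_ops: int, curr_ops: int, interval_s: float) -> float:
    """IOPS over a window = completed-operations delta / elapsed seconds."""
    return (curr_ops - prev_ops) / interval_s

# Two simulated counter samples taken one second apart.
sample_1 = 1_240_500   # cumulative ops at t = 0s
sample_2 = 1_242_700   # cumulative ops at t = 1s
print(f"{iops(sample_1, sample_2, 1.0):.0f} IOPS")  # -> 2200 IOPS
```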

Take advantage of these new statistics when you update your HyperCore software and you’ll see the benefit of monitoring your storage utilization at a more granular level. Talk to your ScaleCare support engineers to learn how to get this latest update.

      Original post: http://blog.scalecomputing.com/new-and-improved-real-time-per-vm-statistics/

      posted in Self Promotion scale storage iops hyperconvergence scale hc3 hypercore