scale
    • Profile
    • Following 0
    • Followers 4
    • Topics 189
    • Posts 309
    • Groups 0

    Posts

    • RE: Education Runs on the Scale HC3 Infrastructure

      https://www.scalecomputing.com/case_studies/st-richards-catholic-college/

      St Richard’s has a 50-year tradition of high-quality education as a foundation to build upon, with a goal to transition from print-based teaching to an environment of digital learning excellence.

      posted in Scale Legion
    • RE: Scale Webinar: Disaster Recovery Made Easy

      Sorry for taking a bit; I was able to find it for you. Here is the link:

      Disaster Recovery Made Easy from Scale Webinar

      posted in Self Promotion
    • RE: New Scale HC3 Tiered Cluster Up in the Lab

      We are very excited to get to announce this today.

      posted in IT Discussion
    • RE: Education Runs on the Scale HC3 Infrastructure

      https://www.scalecomputing.com/case_studies/auburn-university/

      Auburn University was established in 1856 as the East Alabama Male College, 20 years after the city of Auburn’s founding. With more than 25,000 students, Auburn University offers more than 140 degree options in 13 schools and colleges at the undergraduate, graduate and professional levels.

      Auburn’s schools and colleges include: College of Agriculture; College of Architecture, Design & Construction; Harbert College of Business; College of Education; Samuel Ginn College of Engineering; School of Forestry and Wildlife Sciences; Graduate School; Honors College; College of Human Sciences; College of Liberal Arts; School of Nursing; Harrison School of Pharmacy; College of Sciences and Mathematics; College of Veterinary Medicine

      posted in Scale Legion
    • RE: The Four Things That You Lose with Scale Computing HC3

      @Breffni-Potter said in The Four Things That You Lose with Scale Computing HC3:

      @scottalanmiller said

      It's all ethernet, so you could mix it together on a single switch and just VLAN them apart from each other.

      I'm trying to be blunt. 🙂

      Do I need a switch at all for the backplane or can they communicate directly? Do I need to factor in 10GigE switches for redundancy as well?

      Did we manage to answer your questions?

      posted in Self Promotion
    • RE: Is the Time for VMware in the SMB Over?

      Can I also mention the several major updates to Scale's KVM platform in that same time period?

      posted in IT Discussion
    • CIOReview on Simplifying Virtualization with Hyperconverged Infrastructure

      Sadly, no mention of Scale in this article, but CIOReview has a brief discussion on convergence and the move to hyperconvergence in their latest article on Simplifying Virtualization. They talk a little about the goals and how it is sometimes believed to have been derived from converged infrastructure.

      It is a very high-level overview, to be sure, not very technical, and they do some defining of terms on their own. But I think that it helps to give a good overview of the space.

      posted in Scale Legion hyperconvergence hyperconverged cioreview infrastructure
    • RE: Scale UK Case Study: Penlon

      There has been some discussion about Scale's presence in the UK market, so I really wanted to share this case study with you guys. Thanks, as always!

      posted in Self Promotion
    • RE: Designing a Reliable Web Application

      You can use a high availability platform (Scale HC3 would be an example, but it is only one of many options) to handle the failover of the web servers, which, as @scottalanmiller said, are normally read-only and don't have to worry about crash consistency.

      For the database, you would ideally want to run at least one virtual machine on each of two different servers or cluster nodes and use something like "pinning" to guarantee that each database instance remains on separate hardware. Then you can use the database's own replication functionality to maintain data safety in the event of a hardware failure.

      This would limit the effort necessary to protect the different functions, leaving only the database needing the additional effort. But it would not address load balancing for the application, only protection of availability.
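
      For illustration only, here is a minimal external availability probe in Python (standard library only; the hostnames, port, and health URL are made-up placeholders, nothing Scale-specific) that checks that the stateless web tier answers over HTTP and that each database VM, pinned to its own cluster node, accepts connections on its listener port. It sketches the availability-monitoring side of the design and does nothing about load balancing or the replication itself.

      ```python
      # Minimal availability probe (sketch only). Hostnames, ports, and the
      # health URL are hypothetical placeholders.
      import socket
      import urllib.request

      WEB_HEALTH_URL = "http://web.example.local/health"   # stateless web tier
      DB_INSTANCES = [
          ("db1.example.local", 3306),  # database VM pinned to cluster node 1
          ("db2.example.local", 3306),  # database VM pinned to cluster node 2
      ]

      def web_up(url, timeout=5):
          """Return True if the web front end answers with HTTP 200."""
          try:
              with urllib.request.urlopen(url, timeout=timeout) as resp:
                  return resp.status == 200
          except OSError:  # URLError/HTTPError are OSError subclasses
              return False

      def db_up(host, port, timeout=5):
          """Return True if the database VM accepts a TCP connection."""
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError:
              return False

      if __name__ == "__main__":
          print("web tier:", "up" if web_up(WEB_HEALTH_URL) else "DOWN")
          for host, port in DB_INSTANCES:
              print(f"{host}:{port}:", "up" if db_up(host, port) else "DOWN")
      ```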

      posted in IT Discussion
    • Scale HyperCore 7.2 Released

      Scale is happy to announce the general availability of our HyperCore 7.2 firmware for the HC3 hyperconverged platform.

      This is a rolling upgrade, meaning it is non-disruptive to running VMs. HyperCore versions 7.1.11 and above have a direct upgrade path to version 7.2.16. See the 7.2 Release Notes available in the Customer and Partner Portals for additional upgrade paths.

      New features include:

      • Windows 10/2016 Support
        • Allows users to install and operate Windows 10 and Server 2016 on the HC3 system
      • Virtual Disk Expansion on Clone
        • When cloning a VM, users have the option to expand the size of the virtual disks and apply the expanded disk capacity to the cloned VM
      • Updated Scale Tools
        • Included is an updated Scale Tools ISO file
          • Scale-virtio-win-0.1.126.sc00.iso
            • New virtIO (performance) storage and network drivers with an easy-to-use Windows installer wizard
            • Newly included is a lightweight agent for application consistent VM-level snapshots
      • New Bulk Actions
        • Added the ability to manually Snapshot, Clone, and Delete a group of VMs
          • You can select VMs by choosing the VM’s name or applied tags
      • Other Enhancements/Bug Fixes
        • High memory utilization no longer causes VM live migrations to take an exceedingly long time
        • The maximum console resolution for Windows 2012/R2 and above is no longer capped at 1024x768
        • When more than 90% of the storage capacity is in use, the system will automatically prevent new clones, snapshots, and VMs; a warning will be issued via email and displayed in the UI
        • Fixed an issue where clocksource failure could cause invalid virtual disk (VSD) lease expiration
        • The background disk scrubber now remedies latent errors that previously could cause issues
      posted in Scale Legion scale scale hc3 hypercore hypercore 7.2 hyperconvergence hyperconverged
    • Back to School – Infrastructure 101

      As a back-to-school theme, I thought I’d share my thoughts on infrastructure over a series of posts. Today’s topic is SAN.

      Storage Area Networking (SAN) is a technology that solved a real problem that existed a couple of decades ago. SANs have been a foundational piece of IT infrastructure architecture for a long time and have helped drive major innovations in storage. But how relevant are SANs today in the age of software-defined datacenters? Let’s talk about how we arrived at modern storage architecture.

      First, disk arrays were created to house more storage than could fit into a single server chassis. Storage needs were outpacing the capacity of individual disks and the limited disk slots available in servers. But adding more disk to a single server led to another issue: available storage capacity was trapped within each server. If Server A needed more storage and Server B had a surplus, the only way to redistribute was to physically remove a disk from Server B and add it to Server A. This was not always easy, because it might mean breaking up a RAID configuration, or there simply might not be controller capacity for the disk on Server A. It usually meant ending up with a lot of over-provisioned storage, ballooning the budget.
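
      As a quick, made-up illustration of that trapped capacity problem (the numbers below are hypothetical), notice how plenty of free space can exist in aggregate while the server that actually needs it can use almost none of it:

      ```python
      # Hypothetical numbers only: free space exists in aggregate, but each
      # server can use only the disks inside its own chassis.
      servers = {
          "A": {"capacity_tb": 4.0, "used_tb": 3.8},
          "B": {"capacity_tb": 4.0, "used_tb": 0.5},
      }

      total_free = sum(s["capacity_tb"] - s["used_tb"] for s in servers.values())
      free_on_a = servers["A"]["capacity_tb"] - servers["A"]["used_tb"]

      print(f"Free space across both servers: {total_free:.1f} TB")       # 3.7 TB
      print(f"Free space Server A can actually use: {free_on_a:.1f} TB")  # 0.2 TB
      ```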

      SANs solved this problem by making a pool of storage accessible to servers across a network. It was revolutionary because it allowed LUNs to be created and assigned more or less at will to servers across the network. The network was fibre channel in the beginning because ethernet LAN speeds were not quite up to snuff for disk I/O. It was expensive, and you needed fibre channel cards in each server that needed to connect to the SAN, but it still changed the way storage was planned in datacenters.

      Alongside SAN, you had Network Attached Storage (NAS), which had even more flexibility than SAN but lacked the full storage protocol capabilities of SAN or Direct Attached Storage. Still, NAS rose as a file sharing solution alongside SAN because it was less expensive and used ethernet.

      The next major innovation was iSCSI, which originally debuted before its time. The iSCSI protocol allowed SANs to be used over standard ethernet connections. Unfortunately, ethernet networks took a little longer to become fast enough for iSCSI to take off, but eventually it started to replace fibre channel networks for SAN as 1Gb and 10Gb networks became accessible. With iSCSI, SANs became even more accessible to all IT shops.
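
      As a small aside, the transport really is just ordinary TCP/IP; a sketch like the following (standard library Python, with a placeholder portal address) is enough to confirm that an iSCSI target is listening on the standard port 3260 over a plain ethernet network:

      ```python
      # Sketch only: the portal address is a hypothetical placeholder.
      # 3260 is the standard iSCSI target port.
      import socket

      def iscsi_portal_reachable(host, port=3260, timeout=5):
          """Return True if a TCP connection to the iSCSI portal succeeds."""
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError:
              return False

      print(iscsi_portal_reachable("192.168.10.50"))
      ```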

      The next hurdle for SAN technology was self-inflicted. The problem was that an administrator might now be managing two or more SANs on top of NAS and server-side Direct Attached Storage (DAS), and these different components did not necessarily play well together. There were so many SAN and NAS vendors using proprietary protocols and management tools that storage was once again a burden on IT. Then along came virtualization.

      The next innovation was virtual SAN technology. There were two virtualization paths that affected SANs. One path was trying to solve the storage management problem I had just mentioned, and the other path was trying to virtualize the SAN within hypervisors for server virtualization. These paths eventually crossed as virtualization became the standard.

      Virtual SAN technology initially grew from outside SAN, not within, because SAN was big business and virtual SAN technology threatened traditional SAN. When it came to server virtualization, though, virtualizing storage was a do-or-die imperative for SAN vendors. Outside of the SAN vendors, software solutions saw the possibility of using iSCSI protocols to place a layer of virtualization over SAN, NAS, and DAS and create a single, virtual pool of storage. This was a huge step forward in the accessibility of storage, but it came at a cost: you had to purchase the virtual SAN technology on top of the existing SAN infrastructure, and efficiency suffered because it effectively added another, or in some cases multiple, layers of I/O management and protocols on top of what already existed.

      When SANs (and NAS) were integrated into server virtualization, it was primarily done with Virtual Storage Appliances (VSAs), virtual servers running the virtual SAN software on top of the underlying SAN architecture. With at least one of these VSAs per virtual host, the virtual SAN architecture was consuming a lot of compute resources in the virtual infrastructure.

      So virtual SANs were a mess. If it hadn’t been for faster CPUs with more cores, cheaper RAM, and flash storage, virtual SANs would have been a non-starter based on I/O efficiency. Virtual SANs seemed to be the way things were going but what about that inefficiency? We are now seeing some interesting advances in software-defined storage that provide the same types of storage pooling as virtual SANs but without all of the layers of protocol and I/O management that make it so inefficient.

      With DAS, servers have direct access to the hardware layer of the storage, providing the most efficient I/O path outside of raw storage access. The direct attached methodology can be, and is being, used for storage pooling by some storage technologies such as HC3 from Scale Computing. All of the baggage that virtual SANs brought from traditional SAN architecture, and the multiple layers of protocol and management they added, doesn’t need to exist in a software-defined storage architecture that doesn’t rely on old SAN technology.

      SAN was once a brilliant solution to a real problem and had a good run of innovation and enabling the early stages of server virtualization. However, SAN is not the storage technology of the future and with the rise of hyperconvergence and cloud technologies, SAN is probably seeing its sunset on the horizon.

      Original Post: http://blog.scalecomputing.com/back-to-school-infrastructure-101/

      posted in Self Promotion scale scale blog san storage hyperconvergence
    • RE: Webinar: Dec 8th 2016 - Hyperconvergence versus Cloud Computing

      Awesome guys, have fun on the webinar!

      posted in IT Discussion
    • RE: Favorite Swag Tshirts

      I am a bit partial to the Scale shirts 🙂

      The modern, super soft tee shirts seem to be the universal favorites. People always love those.

      posted in MangoCon
    • Midwest Acoust-A-Fiber Case Study for Scale HC3

      https://www.scalecomputing.com/case_studies/midwest-acoust-a-fiber/


      “Going the other route, it would have likely taken triple that amount of time and a lot more in terms of configuration.” -Daniel Penrod, Manager Systems Administrator

      Manufacturing Success Story: Midwest Acoust-A-Fiber

      FAST FACTS:

      Midwest Acoust-A-Fiber is a leading manufacturer of fiber-engineered composites, including sound and heat shields for the automotive industry. Because the company is a tier one supplier to General Motors and DaimlerChrysler and a tier two supplier to the Ford Motor Company, maintaining uptime and reliability in the IT infrastructure with limited resources is vital to its success.

      INTRODUCTION:

      Faced with aging servers that had a propensity for failure, Midwest Acoust-A-Fiber was on the search for a turnkey virtualization solution that would improve the uptime and reliability in their IT infrastructure. At the same time, limited IT resources required that the solution be both easy to use and affordable.

      “Saving money was a primary driver for us,” said Daniel Penrod, Manager Systems Administrator.

      CHALLENGE:

      Midwest Acoust-A-Fiber received several competing quotes, including from HP and Dell, for piecemeal solutions that would require significant effort to set up and manage. In addition to servers, Midwest Acoust-A-Fiber would need to purchase a Storage Area Network (SAN) or Network Attached Storage (NAS) for shared storage, as well as license VMware to act as the hypervisor.

      “The problem with all of the solutions was the price when you coupled it with VMware,” said Penrod. “The initial purchase combined with the ongoing management costs were prohibitive given our limited IT budget and resources.”

      SOLUTION:

      Penrod eventually found an alternative solution in Scale Computing’s HC3. HC3 was built with the availability of a virtualized server and SAN, the scalability of a clustered infrastructure and the simplicity of a single server. By deploying HC3, Midwest Acoust-A-Fiber was able to realize the benefits of a fully virtualized environment without the added complexity of a typical virtualization deployment – all at a fraction of the cost of other alternatives.

      “The other solutions were as much as double the cost,” said Penrod.

      HC3 does not require storage protocols, networking or provisioning. On the storage side, there are no RAID sets, iSCSI targets or LUNs, multi-pathing, storage security, zoning or fabric for Midwest Acoust-A-Fiber to setup or manage. On the server side, they will never have to deal with the complexity of iSCSI initiators, host and VM file systems, server clusters and policies.

      “In a small shop, you have to keep it simple. The more complexity that can be hidden from the administrator, the better,” said Penrod.

      To create a new virtual machine, Midwest Acoust-A-Fiber simply assigns the resources necessary for the VM and loads the operating system. Using HC3, Penrod was able to save hours of unnecessary overhead in the deployment of new VMs.

      “Going the other route, it would have likely taken triple that amount of time and a lot more in terms of configuration,” said Penrod.

      The time savings from deploying HC3 have enabled the IT department at Midwest Acoust-A-Fiber to take on other much-needed projects, including the upgrade of Windows 2000 servers to a newer version.

      “I can now rebuild them in a side-by-side [virtualized] environment with little risk to the business,” said Penrod.

      posted in Scale Legion case study scale scale hc3 hyperconvergence hyperconverged
    • RE: Setup 3 node cluster

      @matthewaroth35 said in Setup 3 node cluster:

      Well i have scale 3 node cluster already .. I want to build another one like scale out of my old hp g7 servers

      Always great to get to "meet" our customers! Hope that it is working well for you. Sounds promising that you want more Scale functionality 🙂

      posted in IT Discussion
    • Cascade Lumber Company Case Study for Scale HC3

      https://www.scalecomputing.com/case_studies/cascade-lumber-company/


      "Scale HC3 is a perfect fit. HC3 simplifies server and storage virtualization and management. This gives technology directors time to focus on what's really important." -
      Joel Althoff, Cascade Lumber

      Manufacturing Success Story: Cascade Lumber and Manufacturing

      FAST FACTS:

      Cascade Lumber and Manufacturing is a manufacturer of wood and cold-formed steel wall components serving residential, commercial and agricultural markets in the Midwest. With two production sites, the majority of IT is administered remotely by a staff of one, requiring an infrastructure that is highly available and easily supported. Cascade prides itself on its use of computer technology throughout the estimating, design and manufacturing processes, the backbone of which is a solid IT infrastructure.

      INTRODUCTION:

      Cascade Lumber and Manufacturing fully understood the benefits of virtualization when they initially deployed Citrix XenServer Enterprise Edition, but had underestimated the complexity in both setup and maintenance of such a deployment. The environment necessitated a number of host servers with complex iSCSI initiators, server clusters and policies to be configured and managed. Cascade also had to deploy shared storage (SAN) with iSCSI targets and LUNs, multi-pathing and storage security to manage.

      CHALLENGE:

      While this infrastructure accomplished the reliability and high availability needed in Cascade’s environment, the cost and complexity of growing and maintaining the environment was prohibitive. “Trying to grow the capabilities of the infrastructure caused several headaches,” said Joel Althoff, IT Manager, Cascade Lumber and Manufacturing. “We needed the ability to grow without getting killed on the expense.” Growing the system meant adding storage capacity and computing host servers, as well as purchasing additional licenses of their hypervisor. Cascade needed to consolidate their IT infrastructure with a scalable and cost effective solution that was easy to manage.

      SOLUTION:

      Cascade came across Scale Computing’s HC3 – a hyperconverged solution that combines servers, storage, and virtualization into a product that delivers virtualized infrastructure-as-an-appliance. HC3 eliminates the need to purchase virtualization software, external servers and shared storage, resulting in significant reductions to cost and complexity.

      According to Althoff, “We felt HC3 was the simplest, most reliable and most cost-effective way of deploying our infrastructure. We have moved from three separate pieces to manage, down to just one and I can manage it from anywhere that has a web browser.”

      As a scale-out solution, additional compute or storage resources can be added to a cluster within minutes, with applications and data failing over between nodes in the event of equipment failure. This convergence has also resulted in significant time savings in managing Cascade’s infrastructure. “I would estimate around 20 percent savings in management time over what we had with our XenServer deployment,” said Althoff.

      HC3 was built with the availability of a virtualized server and SAN, the scalability of a clustered infrastructure and the simplicity of a single server. By deploying HC3, Cascade was able to realize the benefits of a fully virtualized environment without the complexity that existed in their prior virtualization deployment – all at a fraction of the cost of their prior solution.

      “Most small businesses that I know are afraid of virtualization. They are scared of SANs and the complexity that it brings. HC3 enables small to medium-sized business to jump into virtualization without all of the pitfalls and confusion,” says Althoff.

      posted in Scale Legion scale scale hc3 hyperconvergence hyperconverged case study
    • RE: XenServer 6.2 servers down. I have no Xen skill. Most likely networking? Help!

      Glad to hear that you are starting to be able to recover some of your data. Definitely let us know if we can help in any way!

      posted in IT Discussion
    • Poster Display Case Study for Scale HC3

      https://www.scalecomputing.com/case_studies/poster-display/


      “For IT departments in our market, HC3 could dominate. Scale is offering a product [HC3] that will stabilize and add insurance to their IT infrastructure and can grow as quickly, or as slowly, as their business grows.” - John Venter, IT Manager

      Manufacturing Success Story: Poster Display

      FAST FACTS:

      For more than 70 years, Poster Display Company has been driven by a single, overriding business philosophy: to use state of the art printing technology, company-wide innovation and human ingenuity to help its customers achieve increased sales through superior graphic solutions. With several diverse printers in their environment, the company relies heavily on a stable IT infrastructure to successfully deliver high quality, best-in-class graphics to each customer.

      INTRODUCTION:

      Prior to virtualizing, the IT environment of the Poster Display Company consisted of a handful of Microsoft Windows-based servers all with direct attached storage. The proposal to virtualize their infrastructure had been made a year prior, but the company struggled to justify the investment given the fact that their infrastructure, while extended well beyond the end of its useful life, was still operating effectively.

      CHALLENGE:

      John Venter, IT Manager at Poster Display Company, recalls asking himself the hypothetical question, “What if they fail?” (referring to the servers originally purchased in 2004). The hypothetical event then turned into reality when the motherboard in a critical server running an SQL database failed. “We were barely able to rebuild it. Had we lost all of that data, I would hate to think where we would be right now,” said Venter.

      “The problem with all of the solutions was the price when you coupled it with VMware,” said Venter. “The initial purchase combined with the ongoing management costs were prohibitive given our limited IT budget and resources.”

      “We needed a solution that could provide insurance against failure in order for our business to continue running effectively,” he continued.

      After the critical failure, it was much easier to justify investment in a highly available infrastructure, so the company set out to revisit the earlier proposal of two servers, a Dell SAN and VMware licensing. While this alternative met the requirement for high availability, the costs were still prohibitive. “The cost [of the VMware solution] was even higher with support factored in,” said Venter.

      SOLUTION:

      Poster Display Company was then introduced to Scale Computing’s HC3 – a ‘datacenter-in-a-box’ – integrating servers, storage, and virtualization into a single, highly available, easy-to-use and scalable system. In his initial review, Venter was impressed with the product’s ability to scale to the needs of his IT department at an affordable price. “You can buy what you need now and then easily add on later. It [HC3] helped us eliminate the concern that something won’t be able to grow with us as our business continues to grow,” said Venter.

      With no virtualization software to license, no external storage to buy and the hypervisor already integrated in the system, HC3 radically simplifies the infrastructure needed to keep applications running. HC3 makes the deployment and management of a highly available and scalable infrastructure as easy as managing a single server. “Going to a single dashboard to monitor the environment is something that is going to be very appealing in our market,” said Venter.

      When evaluating the total cost of ownership (TCO), small and midsized businesses implementing virtualization are able to realize greater cost savings when implementing HC3 compared to other solutions. “When you multiply out the years, the VMware option kept diverging from Scale over time,” said Venter.

      Starting at under $25,500 for a 3-node cluster, HC3 is ideal for first-time virtualizers and those that have avoided virtualizing due to the costs and complexities of the implementation and management. “For IT departments in our market, HC3 could dominate. Scale is offering a product [HC3] that will stabilize and add insurance to their IT infrastructure and can grow as quickly or as slowly as their business grows,” he concluded.

      posted in Scale Legion case study scale scale hc3 hyperconvergence hyperconverged
    • RE: Common paths to VDI?

      We (at Scale) have done a lot of work with Workspot for easy VDI solutions on Scale HC3. We have also done a lot of testing and validation around more traditional terminal services approaches like XenApp and Microsoft RDS. Both approaches have merit and vary in their value proposition, management, and approach. Of course, a lot of Scale customers use the "simple" VDI approach of just running Windows 8 or Windows 10 desktop VMs on top of their cluster and using the stock RDP options to connect to them; no special VDI products are needed if you want to go that route. There are free front ends for this approach as well; we know that someone here on MangoLassi has used Guacamole, instead of RDS, as a front-end connection aggregator for exactly that purpose.

      posted in IT Discussion
    • Hydradyne Case Study for Scale HC3

      Workspot and Scale have an interview video with Mike O'Neil, the Director of IT at Hydradyne:

      https://vimeo.com/190567760

      posted in Scale Legion case study scale scale hc3 hyperconvergence hyperconverged workspot