Best posts made by scottalanmiller
-
What is a HyperVisor, Spotlight on KVM - SW Webinar on April 12
I'll be one of the speakers on a Spiceworks webinar, "What is a Hypervisor," happening on Tuesday, April 12th.
-
NTG Picture Show
@ntg has been around a long time, and some of us have been around longer than others. I was going through some pictures that I have and decided that some history pics were needed.
Here is me long before my NTG days. The foot in the background is @art_of_shred which is why it makes the list.
Me the night before I did my first company road trip. I'd been with NTG for ten months but never traveled before. It was the sales presentation from that trip that ended up making NTG grow as a division so fast that it had to split from the parent company that had started it.
@Andw and I in the early 2000s road tripping to customers across the country.
Cool office shot.
-
Choosing an Enterprise Linux Server Distro
Choosing a Linux distribution for your server can be a confusing task. No guide could include all potential factors but here is a quick guide to understand what distros should be on your short list and when to consider which ones.
First, there are three key distros to consider for production servers: Suse and OpenSuse, CentOS and RHEL, and Ubuntu. Each of these is a heavily used, well tested and enterprise vendor-backed distro. Support is broad, both in traditional OEM terms as well as from communities and third party support vendors.
The biggest factors that you will use in deciding which distro to use will come down to non-distro factors:
- The distro with which you or your organization has the most experience is likely going to be the best choice unless the distro that you are used to lacks a key feature, update or has a compatibility problem.
- The distro primarily used by your application vendor can be a very large factor. In many cases it may make sense to choose the distro that the vendor themselves use, as it will have the most testing, documentation and the largest user base. (This is not always the case; MongoDB and Ubuntu is an important case in point, where the primary test base is also the one least kept up to date.)
- The distro(s) best supported by your platforms of choice, whether that is hardware, hypervisors or cloud providers. Considering how these will interact can be very important.
All other things being equal:
- RHEL / CentOS is generally considered the best all around server distribution. Mature, extremely stable and incredibly broad enterprise support. Great performance and features.
- Suse / OpenSuse is generally considered the best all around server distro outside of the US. Very mature and incredibly broad features and enterprise support. Largest focus on storage and clustering technologies making it unique when looking at building storage systems.
- Ubuntu is the newest enterprise option for Linux. Less mature than the other offerings and with a more cumbersome and problematic update cycle, Ubuntu is often the preferred option for pure cloud deployments and is seeing more and more use as a primary application platform. Generally you would only choose it based on the primary factors above and not on any current technical advantage.
Part of a series on Linux Systems Administration by Scott Alan Miller
-
Role play Session
That sounds naughty.
What about a session about how you can present ideas in a business to make a difference. So many IT folk are hampered by not getting their ideas listened to or accepted. What if we could change that?
Good practical "take home" knowledge that makes for career opportunities, too.
-
Installing Gluster on CentOS 7
Gluster, formerly GlusterFS, is the venerable Linux world scale out storage system. Red Hat bought the GlusterFS project in 2011 and has developed and managed it since then. Since Red Hat is the project sponsor, RHEL 7 or CentOS 7 is the obvious place for deploying Gluster. Gluster is the best known scale out storage system in the open source world and quite popular.
The first thing that we need is multiple VMs! That's right, Gluster doesn't do anything with only a single node. If you are on a platform like I am, we can template and clone our systems to make this faster and easier; I'll point out where to do that. So if you are doing this on a cluster (I'm on a Scale HC3 HC2100) where you can use imaging to clone your nodes, I will show where we can pause to do that.
I am just building small, demo nodes here. My standard layout is to use a 16GB base build and then add on my storage as an extra device, a 100GB device in this example, likely you would use something many times larger in production.
Now to log in and get started:
yum -y install wget epel-release
wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo
yum install glusterfs-server
pvcreate /dev/vdb
vgcreate vol_gluster /dev/vdb
lvcreate -l 100%FREE -n lv_gluster vol_gluster
mkfs.xfs /dev/mapper/vol_gluster-lv_gluster
mkdir -p /export/glusterdata
mount /dev/mapper/vol_gluster-lv_gluster /export/glusterdata
mkdir -p /export/glusterdata/brick
echo "/dev/mapper/vol_gluster-lv_gluster /export/glusterdata xfs defaults 0 0" >> /etc/fstab
systemctl start glusterd
systemctl enable glusterd
At this point we have built the basics and could create a template from which to clone new gluster nodes. If this was going to be for production, I would stop here and create this as an unused base template as you may want to add nodes, replace nodes, recover nodes or whatever rather often. Keep a clean template ready to go.
In our example here, I am only making two nodes, so I will continue to use the original to build gluster1. But first I am going to clone it, change the hostname (vi /etc/hostname), update the IP address (nmtui), and then I am ready to get started with the gluster2 node with minimal effort. If you don't have the ability to clone (maybe you are not building on a cluster) then you will need to repeat the above steps on each node.
Now once the second node is ready, back to the first node again:
gluster peer probe lab-lnx-gluster2
gluster volume create gv0 replica 2 lab-lnx-gluster1:/export/glusterdata/brick/ lab-lnx-gluster2:/export/glusterdata/brick/
mkdir /data
mount -t glusterfs lab-lnx-gluster1:/gv0 /data
Gluster is up and running! But before we start doing anything, over to the second node:
mkdir /data
mount -t glusterfs lab-lnx-gluster2:/gv0 /data
That's it, your Gluster storage cluster is up and running. Let's test it:
touch /data/test-file
Now go to each box and see if it is there!
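One optional finishing touch, shown here as a sketch using the hostnames from this example: the client mounts above will not survive a reboot on their own. An fstab entry along these lines on each node (mounting from that node's own hostname) makes the mount persistent; the _netdev option delays the mount until networking is up:

```
lab-lnx-gluster1:/gv0 /data glusterfs defaults,_netdev 0 0
```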
-
Stop Talking About Keeping Eggs in a Basket
We all know the old expression "Don't keep all of your eggs in one basket." It's a mantra, especially around IT circles. People use it when talking about only having a single server or any number of things; it seems like such a reliable adage that we rarely question its applicability or even its accuracy. The idea, of course, is that if we transport many eggs in a single basket and that basket falls or is stolen, we will have zero eggs. So instead we use two baskets, presumably one in each hand, so that if we drop one or someone steals one from us, at least we will have half of our eggs when we get home.
Of course the analogy is flawed, as all are, but it gets the idea across. But is the idea even a good one? With eggs and baskets, something that made you drop one basket would probably be likely to make you drop two as well; does having two baskets make you less likely to drop them? I'm not sure. If you are being robbed of eggs, having to watch over two baskets likely makes you less able to watch over or protect them. I'd say that if I really needed to get eggs home safely, a single basket seems to be easier when taking into account the reality of carrying baskets.
But I digress, the real issue is the theory behind the eggs and their baskets. The idea is feeding a family and in that scenario individual eggs are expendable, almost worthless. If you have a dozen eggs for your family, presumably that's plenty to eat and everyone will be full. We don't want our family to starve, and they need eggs. We fear that our basket of eggs won't make it home and everyone will starve. So we split up the eggs into two baskets knowing that the risk of losing some eggs is higher, but presumably the risk of losing all of the eggs is lower (debatable, but that's what the adage is meant to propose) and it is better to almost certainly not starve with just six eggs making it home than it is to risk a greater chance of getting none home and starving while attempting to have all dozen and have the family be full.
Often this "avoid total loss" strategy is applied to retirement finances, where total loss would be devastating but losing the ability to spend your golden years on luxury cruises and world tours would merely be sad; you could still have shelter and food. While logical there, even with finances the adage often runs afoul of intended goals, as financial diversification is more often a cause of financial ruin than a mitigating factor: financial research has shown that the most reliable strategies involve small diversification and heavy focus, and that heavy diversification would increase risk.
Taking this adage to IT is very dangerous. Remember that the fundamental idea behind the adage is that losing some or even a lot of our eggs doesn't really matter, as long as some amount survives. This does not apply to IT, not normally, anyway. Using the logic based around the idea that your systems or your data are expendable and getting "some" of them to survive is nearly as good as having "all" just does not apply. Our goal is to protect everything at a sensible cost, not to protect some at higher cost but risk much of it.
In IT we would expect the very opposite to be true - don't risk your eggs in two baskets; put them in one and watch over it carefully. Splitting up the eggs doesn't lower your risk of getting all of the eggs home, it raises it. If we can only afford one basket of eggs, keeping all of the eggs in that basket is generally best. If we feel that the need for the eggs is too high to risk on a single basket, we would not split up our basket of eggs but instead we would buy two full baskets of eggs and make the eggs redundant, not just the baskets. And we would probably get someone else to carry one of the baskets. And to take a different route home.
Rethink the eggs in a basket strategy. It sounds quaint and few people step back to think about its applicability. But beware of handy sounding phrases replacing proper cost and risk analysis as the situation is more complex than a simple adage can address. Your company's systems and data are more important than eggs.
-
Your Time Is Valuable
@guyinpv said in Home business ideas for transition out of 9-5?:
I feel like every in home job I do is a rip off. It's 20% work and 80% waiting for their Walmart special to catch up.
All IT people have to fix this thinking. That's not ripping someone off; that's someone making you waste your valuable time, and you need to be compensated for it. No electrician, contractor, plumber, doctor, lawyer or other professional who has to sit and wait because of customer decisions or limitations would ever feel that they were ripping off the customer for something that wasn't their own fault (and rarely even for things that are.) You ARE working and you DO deserve to be paid. You are mentally ripping yourself off. That their computer is slow, their Internet is slow or they live in the middle of nowhere is their decision alone, and the consequences of that decision belong to them, not to you.
-
Building Out XenServer 6.5 with USB Boot and Software RAID 10
Finding almost no concise guide to doing this online, and trying to help someone go through this in the simplest manner possible, I thought that it would be good to document the process. We typically don't see XenServer with software RAID, and when you do find guides for it, they assume that you are booting from the software RAID array as well, which adds a lot of unnecessary complication.
echo "modprobe raid10" > /etc/sysconfig/modules/raid.modules
modprobe raid10
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
cat /proc/mdstat
chmod a+x /etc/sysconfig/modules/raid.modules
mdadm --examine /dev/sd[b-e]
mdadm --detail /dev/md0
mkfs.ext3 /dev/md0
mkdir /data
mdadm --detail --scan --verbose >> /etc/mdadm.conf
xe sr-create type=ext device-config:device=/dev/md0 shared=false host-uuid=$(mdadm --detail /dev/md0 | grep UUID | cut -d' ' -f3) name-label="Local OBR10"
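For reference, the mdadm --detail --scan --verbose append above drops a record roughly like the following into /etc/mdadm.conf (the UUID and device names here are placeholders; your values will differ):

```
ARRAY /dev/md0 level=raid10 num-devices=4 metadata=1.2 UUID=<array-uuid>
   devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde
```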
-
RE: Your Time Is Valuable
This is a thing that I see constantly in the IT space but from no other profession or career category. A receptionist doesn't feel that he is ripping off the company because he sits idly at a desk all day. A waiter doesn't feel like he is ripping off the restaurant if no customers come in. Your time is important, very important, and someone is asking you to give it up for their benefit. It's a business transaction. No wonder people don't see IT as being valuable, we rarely see ourselves as valuable. If you hired anyone to do anything and asked them to just stand around waiting... you'd still have to pay them. That they use their time doing something perceived as valuable or perceived as "Just waiting" is totally irrelevant. The bottom line is you are tying up their time when they could be drinking a beer on the beach, but instead they are in your house or at your office doing your bidding. They need to get paid.
-
RE: Best. Post. Ever.
Okay, kudos to Sean (the main mod at SW) who stepped in and removed both David's (the junior mod) comments and mine, but welcomed me to post in another thread if needed to discuss it. I don't have too much of a problem with the "don't link to ML" rules, but when it's a question that needs an answer AND I don't link, and I do it under the rules, I don't think that they should complain. But that's all grey area. David's comments were just wrong and needed to be taken down, and Sean did that very quickly and handled it well.
-
Making Templates on a Scale HC3 Cluster
Templating, or making a base image of an OS, is an important means of standardization and speeding up the deployment of virtual machines on nearly any virtualization platform (especially cloud platforms.) Doing this on a Scale HC3 cluster is no different and is very easy.
Before we have any templates we need to make a gold reference VM from which to work. This is a one time manual virtual machine creation process (which can be automated through any usual means: e.g. Kickstart for Red Hat Linux.) We do this like any normal VM installation, by uploading an installation ISO to our Scale media library, creating a new VM and performing an installation.
At this point, once the VM is installed, we could make a template, but generally we are going to want to modify the base image before templating. For example, we will likely run updates to ensure that our template is fully patched before we use it as a base for other systems, reducing the number of patches to download and apply to the resulting VMs. Often we will want to add a root key or a local administration account and apply permissions to the base image, to make access and administration of the VMs easier and possibly automated for in-house scripts or agentless tools like Ansible. We might also include standard tools like glances, htop, sysstat, fail2ban or Chef, and enable a firewall.
Once our template is ready, we simply power it down and then use the clone button to make new VMs from our template. The template itself would remain powered down.
To make templates easier to use on the Scale HC3, I recommend making a tag for them called "templates" and keeping any and all templates in a separate group away from other VMs so that they are easily identifiable. And, of course, designate them by name such as by ending their name with "-template."
-
Linux Desktop for Learning Linux System Administration
It is very common that people desiring to learn Linux System Administration will jump first to installing a Linux desktop distribution to use on their desktop or laptop, rather than starting with a Linux system designed to be used as a server. There is nothing wrong with using a Linux desktop, of course. And there are many ways in which using one will make it easier or faster to work on other Linux systems (things like SSH, SFTP, SCP, X Windows and more built right into the shell without needing extra tools), but it is very important that the Linux desktop not become a crutch.
In the Windows world, as an example, we would never decide to learn about Windows Server System Administration and, instead of installing the latest Windows Server OS, start by installing a Windows desktop and going about using it on a daily basis. Of course, that is partially because nearly everyone already has Windows desktop experience. But more so it is because we recognize that opening applications on the desktop, writing an email, filling in a spreadsheet, surfing the web, playing video games and similar activities teach us essentially nothing about even basic, GUI-driven Windows administration, and that doing so would only distract us from our task of learning. If we were to talk about GUI-less Windows administration the chasm would be even more pronounced, as PowerShell and similar tools would generally never even come up in a conversation about Windows desktop use. The average Windows desktop user, one who uses nothing else, uses Windows desktops all day, every day, and has for decades, knows absolutely nothing about the tools, concepts and approaches of systems administration. The desktop is simply not a path to that learning.
In the Linux world this is even more pronounced, for three reasons.
First, the Linux desktop world is very varied and typically far more graphical and advanced than that of the Windows world. Using a Linux graphical desktop system will often abstract the OS away to a greater degree than will a Windows desktop. This is most dramatically seen on Android, where the ability to even determine that Linux is being used essentially does not exist.
Secondly, culture. In the Windows world it is seen as generally acceptable in nearly all circles to use, or at least to fall back to, graphical tools for system administration. The average Windows administrator, even a senior level one, can potentially survive without knowing the Command Shell, PowerShell, scripting or the like to any great degree. Many do use these tools, of course, but few have to. You have been able to, and very commonly did, build a career based on the use of graphical tools in the Windows world. In Linux (and UNIX in general) graphical tools exist as well in the same, general way. But culturally it is seen as incompetent and unacceptable to use them, for a variety of reasons (primarily reason #3 below.) This is so dramatic that it is simply assumed that all administration is done from a command line, and all testing, interviewing and job assumptions are based around this.
Thirdly, availability. In the Windows world, until very recently, it was assumed that all systems would have a GUI installed and even in 2016 with Microsoft pushing very, very hard to move administration away from GUI tools to the command line the majority of servers are still being optionally installed with a GUI and most remote management is done with a GUI as well. In the Linux (and all UNIX) world, this is very much not the case. Outside of special cases like terminal servers, it is simply assumed that Linux servers will have no GUI whatsoever. Not a local one, not a remote one. It is traditional, as well as economical, that servers be lean and not carry unnecessary code. This has, for decades, given Linux a major performance and deployment advantage over Windows based on this cultural difference alone. The GUI is often larger, on its own, than an entire server is otherwise.
For these reasons, while it is great to move to a Linux desktop, it is very important to remember that doing so should never be a "step" in learning Linux administration.
-
Linux: File Colors
If you are lucky enough to be working on a Linux system from a colour TTY session, then you likely get to see a lot of commands, such as ls, returning file results in colour! Lucky you. This makes it much easier to quickly see differences between files. On CentOS, we will get these standard colours:
- Executable files: Green
- Directories: Blue
- Graphical Image files: Magenta
- Symbolic / soft links: Cyan
- Pipes: Yellow
- Sockets: Magenta
- Orphaned symbolic links & missing links: Blinking bold white on red background
- Block device drivers: Bold yellow foreground on black background
- Archives or compressed files: Red
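These colours are not hard coded into the commands themselves; on GNU systems they are driven by the LS_COLORS environment variable, which the dircolors utility builds from a colour database. A quick way to inspect and experiment with them (a sketch, assuming GNU coreutils is present):

```shell
# Show the start of the default colour database shipped with dircolors
dircolors -p | head -n 12

# LS_COLORS overrides take effect immediately; di=01;35 would render
# directories in bold magenta instead of the usual blue
LS_COLORS='di=01;35' ls --color=always /
```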
-
RE: Well, that really, really sucks.
@wirestyle22 said in Well, that really, really sucks.:
@travisdh1 said in Well, that really, really sucks.:
The FDA is currently confiscating everything of value at work.
Most of you know I've been thinking about another job for a long time already, well, guess what!
My immediate plan is to incorporate my name as an LLC, so I can do some consulting work. I'd appreciate a heads up if any of you know of a good full time position.
Sorry you're in this kind of situation man. Idk where you are located but If I hear of anything opening up in NJ I will definitely let you know. The down side is you'd most likely live in NJ.
Better to be unemployed in Ohio....
-
Linux: Aliases
In Linux it is common to use the alias function in order to modify system behaviour in handy ways. Most Linux distros today ship with a stock list of alias modifications to make their systems easier, safer or just unique. The alias command is used to both display existing aliases and to create or modify our own aliases.
An alias in Linux is just what it sounds like, an alternative way to refer to something. Most commonly an alias might be used to reduce a common, complex command into something very short and simple or to make a common typo correct.
For example, it is not uncommon for a full time Windows Admin who only needs Linux sometimes to alias dir to ls so that they can accidentally type dir and get what they expected.
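As a quick demonstration of that dir example (a sketch; note that a non-interactive bash script only expands aliases after shopt -s expand_aliases, while interactive shells have it on by default):

```shell
# Allow alias expansion in a script context; interactive shells
# already have this enabled
shopt -s expand_aliases

# Give the Windows-style dir command a Linux meaning
alias dir='ls -l'

# Now the Windows habit produces a long listing
dir /tmp
```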
If we just use the alias command with no options, it will display the aliases currently set on our system. Let's see what the default aliases are on a stock CentOS 7 installation:
# alias
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l.='ls -d .* --color=auto'
alias ll='ls -l --color=auto'
alias ls='ls --color=auto'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'
As you can see, there are seven default aliases included in CentOS 7. Most of these are pretty basic, just modifying the default egrep, fgrep, grep and ls commands to have full colour by default. Nothing too exciting.
We can make our own aliases for whatever we want. Typically this is not recommended, but it is a facility that is available and should be understood by system administrators as it would be easy to hide or override functionality using aliases and if they are not understood it can lead to confusion.
Why We Do Not Recommend Aliases for Most System Administration Functions
This same reasoning will come up many times as a good general practice for SAs. By and large we do not want to customize our systems. System customizations are important for end users who need to be efficient at repetitive tasks and daily duties. End users can often take time to set up their environments, tweak settings and such to make their environments work as they want them to work. As System Admins, we rarely have this luxury. What is most important is that we cannot guarantee that we will have these tweaks and settings when things matter most - when something is broken. Whether it is a "safety" feature to catch our mistakes or a quick shortcut to a file we use, it is generally better that we learn to do things the standard way, because when it matters most, that may be the only option that we have.
We will make and remove a simple alias here to learn how it works. Our alias will take us directly to the /tmp directory.
# alias go2t="cd /tmp"
Not very practical, but it shows how an alias works. Now we can test it.
# go2t
# pwd
/tmp
That was easy. But we do not want to keep that hanging around. We can remove an alias with the unalias command, like so...
# unalias go2t
And the alias is gone. Run the alias command again and you will see that our go2t alias is not in the list, it has been erased.
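One caveat worth knowing: an alias made at the prompt like this lasts only for the current shell session. To keep an alias across logins you would typically append it to your shell's startup file; for bash that is ~/.bashrc (startup files vary by shell and distro, so treat this as a sketch):

```shell
# Add the alias to every future interactive bash session
echo "alias go2t='cd /tmp'" >> ~/.bashrc

# Pick it up in the current session as well
source ~/.bashrc
```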