I found it!!
https://community.spiceworks.com/topic/79629-do-not-have-access-to-hosted-email
You can finally learn everything that there is about programming printers.
@BBigford said in Boss I want to go to MangoCon....:
Man, I would love to go to this. But I don't get PTO, let alone paid travel to super awesome conventions.
Here is an idea... schedule MangoCon and just make sure that you have a new job before then. Easy peasy.
UNIX systems have a standard file-based permissions system that is simple, easy to learn and consistent across the entire UNIX family. It is not uncommon for more granular file permissions, by way of ACLs (Access Control Lists), to be applied as well, but those are more advanced and we will address them in a later lesson. For now we are going to focus on the standard UNIX permissions. You will find these on Linux, BSD, Solaris, etc.
There are three types of UNIX File System Permissions. They are:
Read (r)
Write (w)
Execute (x)
Each of these three permissions can be set for each of three "actors". These actors are:
User Owner (u)
Group Owner (g)
World, also called Other (o)
When we use the ls -l command we are shown the information above. Reading this information on UNIX can be a bit confusing and takes time to grow accustomed to. Typically we have a ten character field that tells us the file permission information. We will ignore the first character for the moment. Here is an example file:
# ls -l
total 0
-rwxr-x--- 1 scott scott 0 Feb 10 04:49 myfile
In this example we have one file, named "myfile". The first ten characters of the output are: -rwxr-x---. Very confusing indeed! How are we supposed to read that?
Well, the first character will be ignored, leaving us with nine characters. Each of these nine characters, or fields, corresponds to one set of permissions from above. The first three characters (after having skipped the first) correspond to the permissions for the User Owner, the second three characters correspond to the Group Owner and the final three characters correspond to the World.
The characters come in order: rwxrwxrwx. If the permission is granted, the letter appears. If the permission is not granted, a - appears instead as a placeholder.
So in our example, which is rwxr-x--- we can translate this into: [User Owner: rwx] [Group Owner: r-x] [World: ---]
This is a very common set of permissions to find. The file owner (me) has full permissions to do anything with the file. My group has read and execute permissions but cannot modify the file (no write.) Anyone who is not in my designated group or is not me (the group and the user owners have no need to be related, the user owner will not necessarily be a member of the owner group) has no permissions at all and cannot access the file.
In our example above, the user owner and the group owner fields are designated by scott and scott, confusingly. In this case, this is a user named scott and a group that has the same name (presumably because I am the only member.) This is the standard way in which CentOS works. But each UNIX environment can do its own thing.
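As a quick sanity check while you are getting used to reading these fields, GNU stat (the Linux version; BSD stat uses different flags) will print the same information in a more explicit form:
# stat -c '%A %U %G %n' myfile
-rwxr-x--- scott scott myfile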
It is also very common to have the default group for users be a group called "users" or something similar. The two most common approaches are either a general user group and everyone gets the same one, or each user gets their own group (generally with a name that matches their username and a GID that is the same as their UID.) So look for those two paradigms to be in place on any given server. Of course, you can change this and customize it as you see fit on your own servers, there is no need to use the default. But the general industry feeling is that the dedicated user group approach is the best default approach today.
Now that we understand the user owner, the group owner and the permissions, we can learn how to modify them. There are standard UNIX commands to handle these tasks:
chown - change the user owner of a file
chgrp - change the group owner of a file
chmod - change the permissions (mode) of a file
First we will change the owner from scott to root.
chown root myfile
Easy peasy. We could change the group from the scott group to the user group in a similar fashion.
chgrp user myfile
The chmod command can get a bit complicated. We are going to look at one way of using it here in this article and will reserve the "octal" (numeric) means of working with it for a separate article, as we want to get through our basic concepts here. You can use either method and most admins will use both, depending on what they want to do.
The chmod command uses a mode syntax of [users we want to modify][+-=][mode to change]. Sounds a bit confusing and it is just a little. But if we see it in action I think that you will find it relatively straightforward.
If we use the + modifier, we will add permissions that may or may not have already been applied. If we want to make a file executable for the group owner, we would use "g" to designate the group owner and "x" to indicate executable and "+" to denote adding the permission, like so:
chmod g+x myfile
If we wanted to remove a permission, we would use a minus sign, -. In this example we will remove any possible write "w" permissions from the group owner and the world (other) groups (which are "g" and "o").
chmod go-w myfile
Using the equals modifier we can tell the command exactly what we want the resulting permissions to be rather than using the plus or minus to "modify from what it currently is." The plus and minus are relative permissions, the equals is absolute.
chmod u=rwx myfile
There we go, we can now change the owner, change the group and set the permissions (modes) on a file to control security.
One additional quick trick is the "a" designation, for all. It refers to everyone: the user owner, group owner and other together. So if you wanted to set execute permission for everyone:
chmod a+x myfile
I promised to show how to use the chown command to change both the user and the group at the same time. Just use a colon like so:
chown root:accounting myfile
Directories. Thus far we have only talked about how UNIX Standard Permissions apply to normal files. But we need to understand how they apply to directories, too. Directories use the same permissions structure, but the permissions mean slightly different things there: read allows listing the directory's contents, write allows creating and deleting entries within it, and execute allows entering (traversing) the directory at all.
One additional feature of all of the commands that we have learned here, when applied to directories, is the ability to apply changes recursively. If you wanted to make "scott" the owner of a directory and everything that it contains, all at once, you could do so like this:
chown -R scott /var/scottscooldirectory
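The same -R flag works with chmod and chgrp as well. For example, to give the group owner read and execute throughout that same directory tree:
chmod -R g+rx /var/scottscooldirectory
Just be aware that a recursive chmod like this marks every file in the tree as executable too, so it needs a little care.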
So looking at the use cases, what we plan to do and the scale of the NTG Lab, it seems like the best course of action is going to be to make a single "NTG Lab" network on ZeroTier (using their hosted ZeroTier Central service) and let everyone connect to the lab that way. This is because we have the massive Scale HC3 and XS clusters, so the amount of communication between systems in the lab is rather enormous. This has upsides and downsides, of course. But it covers a lot of important ground.
Upsides:
Direct access to lab resources from your own desktop, with no jump box needed for most tasks.
A single flat network that covers all of the lab systems, including the Scale HC3 and XS clusters, at once.
Downsides:
Machines that join the network are exposed to the lab environment.
The lab sits somewhere between a public system and a private production one, so it has to be treated accordingly.
The good news is that ZeroTier security is pretty tight and the NTG Lab users all know each other, for the most part. This is not a publicly accessible system, but it is not a private production one either. The idea here is that instead of needing to access every resource through a jump box to do anything, which is somewhat slow and resource intensive, a lot of things can be done directly. For example, if you want to build an application that uses port 3001 on a VM, you can access it directly from your desktop's web browser. No need to log in through some other means first. If you want to consume a database connection directly from your desktop, same thing.
For those concerned that their desktops or laptops will become exposed to the lab environment, which is a reasonable concern, I recommend creating a lab access VM, which can be very light and secure (actually we'd all prefer that) and which is treated rather like a production system. A Linux desktop is really ideal here: lightweight, very functional and no licensing concerns. Then things like RDP, X2go, SSH, VNC or whatever can be used directly from that to access lab resources.
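For anyone who has not used ZeroTier before, here is roughly what connecting a CentOS 7 machine to the lab network looks like; the network ID below is a made-up placeholder, the real one comes from the network's page in ZeroTier Central:
curl -s https://install.zerotier.com | bash
zerotier-cli join 1234567890abcdef
zerotier-cli listnetworks
Once the new node is approved in ZeroTier Central it picks up its managed IP and can reach the rest of the lab.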
Rocket.Chat can be a little tricky to install and it took some work to find a solid procedure. This installation is for CentOS 7 or RHEL 7 Linux and uses a locally installed MongoDB 3.2 database server, but can easily be modified for a remote one; I tested with both. Rather than write out a series of steps, as this installation is a little weird, I just opted for a full install script. Copy the entire text to a file on your CentOS 7 box and run it. It will set up the necessary repos, grab the latest stable MongoDB and Rocket.Chat releases, handle Node and NPM versioning, configure SELinux (and leave it on!!), install everything, set the necessary environment variables for the future and fire up your new server. Even the firewall gets installed and configured. Run it and when you are done you should be able to log in from another machine's web browser right away and start using it!
Install Time: 1 minute
First we need a CentOS 7 Minimum install. I'm on a Scale HC3 cluster and so have a template that I set up that has just the basics for me. I replicate what is needed in the script, however, so a full basic install is all that you need.
#!/bin/bash
cat > /etc/yum.repos.d/mongodb-org-3.2.repo <<EOF
[mongodb-org-3.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/7Server/mongodb-org/stable/x86_64/
gpgcheck=0
enabled=1
EOF
yum install -y epel-release firewalld
yum install -y mongodb-org policycoreutils-python-2.2.5-20.el7.x86_64 nodejs
semanage port -a -t mongod_port_t -p tcp 27017
systemctl start mongod
systemctl enable mongod
# firewalld has to be running before firewall-cmd can configure it
systemctl start firewalld
systemctl enable firewalld
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --reload
yum -y install GraphicsMagick
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash
source ~/.bash_profile
nvm install v0.10.40
cd /opt
curl -L https://rocket.chat/releases/latest/download -o rocket.chat.tgz
tar zxvf rocket.chat.tgz
mv bundle Rocket.Chat
cd Rocket.Chat/programs/server
npm install inherits
npm install
cd ../..
npm install forever
echo "export ROOT_URL=http://rocket.lab.ntg.co/" >> ~/.bashrc
echo "export MONGO_URL=mongodb://localhost:27017/rocketchat" >> ~/.bashrc
echo "export PORT=80" >> ~/.bashrc
source ~/.bashrc
/opt/Rocket.Chat/node_modules/forever/bin/forever start /opt/Rocket.Chat/main.js
That's it. It builds itself and starts itself. One script, and so far it has worked every time.
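If you want to verify that it is running later, or bounce it, forever can do both using the same full path that the script uses:
/opt/Rocket.Chat/node_modules/forever/bin/forever list
/opt/Rocket.Chat/node_modules/forever/bin/forever restart /opt/Rocket.Chat/main.js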
Nothing like getting to the office and celebrating Christmas in the summer.... it's been a year since I've made it into the office so what do I get today....
It's been a good day.
I get asked this a lot and a recent response was rather thorough so I figured that I would post it here.
Question: I would like an explanation as to how you can have HA with local storage. As you say, "Local storage is faster, cheaper AND more reliable (closer to HA.) A SAN increases risk so moves you farther from HA. You CAN create HA using either as a building block, but SAN requires more work to get there as it is inherently more risky than local storage."
Answer: Well the first way to think about it is.... ask "How can you have HA with a SAN?"
The answer is, of course, that you can't. You can only talk about HA when you have two SANs which are replicated to each other and fail over transparently with no close coupling. So, since a SAN is nothing but local storage stretched precariously over a network, adding risk and bottlenecks, we can apply the same logic to local storage. How do you get HA with local storage? You have to replicate it, exactly like you do with a SAN. Remember that a SAN IS local storage - stretched over a network connection. So ANY feature that a SAN would give us, local storage gives us too, but better.
So just like you must replicate SANs to make a SAN a part of an HA solution, you replicate your local storage to make it HA. The difference is that local storage starts out safer, cheaper and faster than a SAN so you have a leg up on the game. And ergo, once replicated, it replicates faster, cheaper and more safely too.
Just as SAN replication is unique to the SAN in question, local storage replication is unique to the local storage in question. If you have XenServer, for example, local storage replication is provided by DRBD. If you are on KVM, same thing. If you are on Hyper-V, you use Starwind. If you are on VMware you lose the best features (it's just not up to par with its competitors here), but you can do Starwind for two nodes or VSAN at high cost.
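To give a feel for what that looks like in practice, here is a minimal sketch of a two node DRBD resource definition, written in the same heredoc style as the scripts elsewhere in this thread; the node names, backing disks and IPs are placeholders to swap for your own:
cat > /etc/drbd.d/r0.res <<EOF
resource r0 {
  protocol C;
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
EOF
Protocol C is synchronous replication: a write is not acknowledged until it has landed on both nodes, which is exactly the behavior you want underneath HA.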
It's the same tools that we used to make server clusters more reliable than SAN in the pre-virtualization world.
The important thing to remember is that SAN is nothing more than your local disk hanging off of a network connection. So once we know that, we know that SAN cannot add reliability, it only takes it away.
More on RLS or Replicated Local Storage.
Of course you can do things like use mainframe-level RAS technologies to make a SAN highly available in a single chassis, but any technology that does this comes from the server side and is cheaper there, so you could apply it to local storage as well; apples to apples, local remains more reliable.
@IRJ said:
It's our vendor who sells us PCs. So they are trying to get everyone to order PCs in a panic. Very shady tactics...
I think that they'd be called an ex-vendor if they pulled that here
@RojoLoco said in Non-IT News Thread:
@mlnews said in Non-IT News Thread:
This may be the strangest NPR reporting I've ever seen:
The NPR profile found bros to be mainly white and divided into several subsets - "jockish", "dudely", "preppy" and "stoner" - and at the centre of this nexus was none other than Mr Lochte.
"He's a jock. He has a stoner affect. He competes in a preppy sport. He tweets pics of him and his dudes doing bro-ass things. So you can see why Lochte is the platonic ideal of bro-dom," NPR said.
Well, now we have even more evidence to support the notion that rich, white, bro-dude, quasi-professional athletes are actually lying pieces of shit. I went to a private school, so I have known this to be fact since 6th grade. Glad everyone else is up to speed.
Same culture that hires people because they went to college rather than because they are skilled or capable or the best candidate. It's an extension of bro culture. Same "we are part of the same social group" thing.
This is a work in progress.
GrayLog is the open source competitor to the ELK stack. Like ELK, GrayLog relies on the ElasticSearch database (and additionally on MongoDB.) GrayLog offers some great features missing from ELK, such as user management, but brings some additional complexities as well and is a bit more difficult to get working on first install. The GrayLog documentation is rather lacking, leaving us mostly on our own if we are not prepared to use their premade VM images.
Building on CentOS 7 on our own gives us more flexibility. We will start by making a VM with two CPUs, eight gigs of RAM and, in this example, a terabyte of data storage space. For a demo install, 50-100GB is likely more than enough.
We clone our base VM but add more vCPU and memory.
An additional storage device is highly desirable.
#!/bin/bash
cat > /etc/yum.repos.d/elasticsearch.repo <<EOF
[elasticsearch-1.7]
name=Elasticsearch repository for 1.7.x packages
baseurl=http://packages.elastic.co/elasticsearch/1.7/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
cat > /etc/yum.repos.d/mongodb-org-3.2.repo <<EOF
[mongodb-org-3.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/7Server/mongodb-org/3.2/x86_64/
gpgcheck=0
enabled=1
EOF
cd /tmp
rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-1.3-repository-el7_latest.rpm
yum -y install wget firewalld epel-release
yum -y install nginx httpd-tools unzip glances htop java elasticsearch graylog-server graylog-web mongodb-org policycoreutils-python pwgen perl-Digest-SHA net-tools
semanage port -a -t mongod_port_t -p tcp 27017
systemctl start firewalld
systemctl enable firewalld
mv /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.old
echo 'network.host: localhost' > /etc/elasticsearch/elasticsearch.yml
systemctl start elasticsearch
systemctl enable elasticsearch
systemctl start mongod
systemctl enable mongod
You'll need the firewall open on port 9000 (the default) to see the web interface.
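Opening it follows the same firewall-cmd pattern used in the Rocket.Chat script above:
firewall-cmd --zone=public --add-port=9000/tcp --permanent
firewall-cmd --reload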
There are several configuration files that need changes made to get the system up and running and more to get logging happening.
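As a starting point, and likely the reason the script pulls in pwgen and perl-Digest-SHA, GrayLog will not start without a password_secret and a root_password_sha2 set in /etc/graylog/server/server.conf. Here is a minimal sketch of setting them; 'yourpassword' is an obvious placeholder:
SECRET=$(pwgen -N 1 -s 96)
HASH=$(echo -n 'yourpassword' | shasum -a 256 | awk '{print $1}')
sed -i "s/password_secret =.*/password_secret = $SECRET/" /etc/graylog/server/server.conf
sed -i "s/root_password_sha2 =.*/root_password_sha2 = $HASH/" /etc/graylog/server/server.conf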
In doing projects in the NTG Lab, I've been working with log management. I've always focused on ELK as the leading open source log management system but GrayLog comes up regularly as well. I've now installed both and have both running and am interested in comparing the two to understand the strengths and weaknesses.
First thing is, it is not the straightforward comparison that you would think. In both cases the solution is a stack, not a single product. In both cases the base of the stack, the core database, is ElasticSearch - a powerful, scalable, NoSQL database that handles easy clustering.
ELK is ElasticSearch, LogStash and Kibana as a stack. GrayLog is ElasticSearch and GrayLog or optionally, ElasticSearch, LogStash and GrayLog. If you choose to use LogStash, it really should be thought of as the ELK vs. ELG stacks with only the user interface being unique.
Some key differences thus far:
ELK is more up to date and runs on ElasticSearch 2. GrayLog is still limited to the older, but rather mature, ElasticSearch 1 products (we are testing on the latest ElasticSearch 1.7 system.)
Kibana is extremely difficult to use and is not intuitive at all. GrayLog seems to be easier to get initial reports out of.
Kibana does not have user management and relies on selling an additional, non-free, component to handle that. GrayLog includes user management (via a local MongoDB database) or will attach to LDAP or Active Directory for user management for free as part of the open source solution. There are no "paid add ons" with GrayLog.
First look seems like ELK is far easier to get logs into than GrayLog. But moving to ELG might fix this.
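As a concrete example on the GrayLog side, once you create a syslog input in the GrayLog web interface (assumed here to be a TCP input on port 1514; graylog.lab.ntg.co is a placeholder hostname for this lab), pointing rsyslog at it from any CentOS client takes two lines:
echo '*.* @@graylog.lab.ntg.co:1514' > /etc/rsyslog.d/graylog.conf
systemctl restart rsyslog
The @@ prefix tells rsyslog to forward over TCP; a single @ would use UDP.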
Google reached out to me while we were at MangoCon and asked to use my daughter's photograph in a video about their machine learning program. This is a several-year-old picture and how they found it specifically I have no idea. But we discussed it with them and her photo was used in this internal-use, YouTube-hosted video that is shown to incoming students going into or considering the program at Google.
Having learned about Netdata this morning here on ML it seemed like the perfect project for a sunny Sunday morning while the family was all still asleep. So here is the simple install on CentOS 7.
yum install zlib-devel gcc make git autoconf autogen automake pkgconfig
cd /opt
git clone https://github.com/firehol/netdata.git --depth=1
cd netdata
./netdata-installer.sh
That's it, Netdata is up and running. For me, I want to run this manually so as not to incur any performance hit when not in use. This is a real-time performance analysis tool, not a capacity planning or warning package, so having it run when not in use is not very useful.
If you have a local desktop you could navigate to http://localhost:19999/ to see the output. It's that easy. However, who has a Linux server like that? So instead we need to see this remotely. Using SSH this is very simple:
ssh you.host.com -L 19999:127.0.0.1:19999
Now from your local web browser just look at http://localhost:19999/ instead!
We have been hearing that BASH and SSH are being readied for Windows Server this year. On the surface this sounds great: PowerShell is convoluted and hard to use, remoting to Windows is difficult and non-standard, and these things will fix that. As someone coming from the UNIX world, these things are awesome. But I think that there are a lot of factors that Windows Admins have not considered.
In the past, the introduction of PowerShell, for example, was not all that disruptive. Sure, it changed things, but Windows Admins did not en masse run out and learn PowerShell or stop using the GUI. PS Remoting didn't make people start using PS remotely very often. PS has languished, even though it is very powerful and very capable. The learning curve is just steep and the usability is low.
Windows Server continues to suffer from a cultural push to use the GUI and, quite sadly, an administration community that is overpopulated with underqualified admins. It is so easy to admin a Windows Server, or to appear to, because of the ubiquitous GUI and socially accepted norms of administration based on decades of SMB-focused, GUI-centric culture, that anyone and everyone claims to be a Windows Admin. As many companies cannot evaluate who is and who is not doing a good job on Windows, this has driven salaries down; the value of the average Windows Admin is low and those that are good have little way to prove themselves and raise their perceived value.
How do BASH and SSH (and to a lesser degree the new Linux on Windows subsystem that replaces the old UNIX on Windows POSIX layer) change this? Essentially overnight, modern Windows Servers are going to look and feel just like Linux does. Most Windows Admins already confuse the BASH shell with Linux itself, so adding BASH to Windows is the same as turning Windows into Linux from that point of view (it, of course, is not). Add SSH to the mix and the remote access methods and tools from the Linux world, which are completely ubiquitous, will be available to the Windows world.
Stop for a second and think about what that means. Overnight.... the world and culture of the high cost, highly skilled, high efficiency Linux Administration space are going to be available to the Windows world. There is a very real possibility that GUIless Windows deployments will become the norm, that remote access will be pooled with the existing remote access of the Linux world, that servers will be administered together rather than in two groups and that the Windows Administration world may shift to the Linux one, more or less overnight.
Microsoft knows what it is doing: it is embracing an entire field of administrators that has long been unavailable to it, or that mocked it for lacking a strong CLI and remote access method. Those admins will suddenly be able to work on Windows as if it were native.
This may be the move that prepares Microsoft to recycle its ecosystem: to dump the existing global pool of administrators and shake up IT, bringing their own culture in line with their competition, making themselves far more viable for the world of cloud computing and removing decades of cruft that has collected around their culture and ecosystem.
BASH and SSH should be a wake up call to Windows Admins everywhere. Microsoft knows where the future is, and it isn't in the way that Windows has been running in the past. It's time for the Windows Admin world to evolve or, quite possibly, simply lose all relevancy and cease to exist.
Anyone else like learning foreign languages? Who needs a challenge to make it easier? Let's get some people involved and see how we can do. Post your accounts, let's link up and let's see who can score more points!