Building ELK on CentOS 7


  • Service Provider

    Okay, after much work, we finally have a working ELK install process for CentOS 7. It took some doing thanks to all of the configuration files that need to be created or modified. This is a long one; hopefully it will be useful.

    Here is a basic VM being created on a Scale HC3. You are going to want to start with at least two vCPUs and at least 4 GB of RAM; I'd recommend at least 6 GB, and 8 GB is a good starting point if you have the resources and will use this for more than a lab. Half a terabyte is a good starting point for disk space. It is heavily recommended that XFS be used.
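
    If you give the VM that much RAM, it is worth letting Elasticsearch use it too. A minimal sketch, to run once the script below has installed Elasticsearch, assuming the 2.x RPM layout; 4g here is just the usual "roughly half of RAM" guidance for an 8 GB box, not a hard rule:

    # ES_HEAP_SIZE ships commented out in /etc/sysconfig/elasticsearch on the 2.x RPM
    sed -i 's/^#\?ES_HEAP_SIZE=.*/ES_HEAP_SIZE=4g/' /etc/sysconfig/elasticsearch
    systemctl restart elasticsearch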

    ELK on Scale

    #!/bin/bash
    
    cd /tmp
    yum -y install wget firewalld epel-release
    yum -y install nginx httpd-tools unzip
    systemctl start firewalld
    systemctl enable firewalld
    wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm"
    yum -y install jdk-8u65-linux-x64.rpm
    rm jdk-8u65-linux-x64.rpm
    rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
    
    cat > /etc/yum.repos.d/elasticsearch.repo <<EOF
    [elasticsearch-2.x]
    name=Elasticsearch repository for 2.x packages
    baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
    gpgcheck=1
    gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    EOF
    
    yum -y install elasticsearch
    mv /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.old
    echo 'network.host: localhost' > /etc/elasticsearch/elasticsearch.yml
    systemctl start elasticsearch
    systemctl enable elasticsearch
    
    cat > /etc/yum.repos.d/kibana.repo <<EOF
    [kibana-4.4]
    name=Kibana repository for 4.4.x packages
    baseurl=http://packages.elastic.co/kibana/4.4/centos
    gpgcheck=1
    gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    EOF
    
    yum -y install kibana
    mv /opt/kibana/config/kibana.yml /opt/kibana/config/kibana.yml.old
    echo 'server.host: "localhost"' > /opt/kibana/config/kibana.yml
    systemctl start kibana
    systemctl enable kibana.service
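    # NOTE: the next command prompts interactively for the Kibana web login password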
    htpasswd -c /etc/nginx/htpasswd.users kibanauser
    setsebool -P httpd_can_network_connect 1
    mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.old
    
    cat > /etc/nginx/nginx.conf <<EOF
    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log;
    pid /run/nginx.pid;
    
    events {
        worker_connections 1024;
    }
    
    http {
        log_format  main  '\$remote_addr - \$remote_user [\$time_local] "\$request" '
                          '\$status \$body_bytes_sent "\$http_referer" '
                          '"\$http_user_agent" "\$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile            on;
        tcp_nopush          on;
        tcp_nodelay         on;
        keepalive_timeout   65;
        types_hash_max_size 2048;
    
        include             /etc/nginx/mime.types;
        default_type        application/octet-stream;
    
        include /etc/nginx/conf.d/*.conf;
    }
    EOF
    
    cat > /etc/nginx/conf.d/kibana.conf <<EOF
    server {
        listen 80;
    
        server_name example.com;
    
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/htpasswd.users;
    
        location / {
            proxy_pass http://localhost:5601;
            proxy_http_version 1.1;
            proxy_set_header Upgrade \$http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host \$host;
            proxy_cache_bypass \$http_upgrade;        
        }
    }
    EOF
    
    systemctl start nginx
    systemctl enable nginx
    systemctl start kibana
    systemctl restart nginx
    firewall-cmd --zone=public --add-port=80/tcp --permanent
    firewall-cmd --reload
    
    cat > /etc/yum.repos.d/logstash.repo <<EOF
    [logstash-2.2]
    name=logstash repository for 2.2 packages
    baseurl=http://packages.elasticsearch.org/logstash/2.2/centos
    gpgcheck=1
    gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
    enabled=1
    EOF
    
    yum -y install logstash
    # See the note below the script for generating these certificate files
    # cd /etc/pki/tls/
    # openssl req -subj '/CN=elk.lab.ntg.co/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
    
    cat > /etc/logstash/conf.d/02-beats-input.conf <<EOF
    input {
      beats {
        port => 5044
        ssl => true
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
      }
    }
    EOF
    
    cat > /etc/logstash/conf.d/10-syslog-filter.conf <<EOF
    filter {
      if [type] == "syslog" {
        grok {
          match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
          add_field => [ "received_at", "%{@timestamp}" ]
          add_field => [ "received_from", "%{host}" ]
        }
        syslog_pri { }
        date {
          match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
      }
    }
    EOF
    
    cat > /etc/logstash/conf.d/30-elasticsearch-output.conf <<EOF
    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        sniffing => true
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
      }
    }
    EOF
    
    service logstash configtest
    systemctl restart logstash
    systemctl enable logstash
    cd /tmp
    curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip
    unzip beats-dashboards-*.zip
    cd beats-dashboards-1.1.0
    ./load.sh
    cd /tmp
    curl -O https://raw.githubusercontent.com/elastic/filebeat/master/etc/filebeat.template.json
    curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat.template.json
    firewall-cmd --zone=public --add-port=5044/tcp --permanent
    firewall-cmd --reload
    systemctl restart logstash
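
    Once the script finishes, a few quick checks will confirm the pieces are actually listening. These are just sanity checks, not part of the install:

    # Elasticsearch should answer on 9200 with cluster info
    curl -s 'http://localhost:9200/?pretty'
    # Kibana directly, then through the nginx proxy (the proxy should ask for the htpasswd login)
    curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5601
    curl -s -o /dev/null -w '%{http_code}\n' http://localhost/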
    

    You will likely want to generate a server-side certificate for use with Logstash. This is not necessary depending on how you intend to use ELK, but for most common usages today, you will want to include this step:

    cd /etc/pki/tls/
    openssl req -subj '/CN=your.elk.fqdn.com/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
    

    This will generate the logstash-forwarder.crt file that we will see in another post.
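
    If you want to double-check what was generated before handing the certificate to clients, something like this works (same paths as above):

    # Confirm the CN and validity window of the new certificate
    openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -subject -dates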


  • Service Provider

    I have D&D tonight, but will be on this tomorrow.


  • Service Provider

    @JaredBusch said:

    I have D&D tonight, but will be on this tomorrow.

    Awesome. Let me know if you run into any problems. I tried hard to make this as scriptable as possible. I "think" that you can pop this all into a script and just run it, but there are so many moving parts that I'm wary to present it that way. This was stepped through on a vanilla build. One of the lines is just to verify configuration of the Logstash files and doesn't actually do anything. I also tried to compress the package installs into two lines at the beginning as much as possible.

    SELinux is addressed in there, no need to disable SELinux or any crap like that 🙂 This actually configures that properly (I hope.)



  • @scottalanmiller said:

    @JaredBusch said:

    I have D&D tonight, but will be on this tomorrow.

    Awesome. Let me know if you run into any problems. I tried hard to make this as scriptable as possible. I "think" that you can pop this all into a script and just run it, but there are so many moving parts that I'm wary to present it that way. This was stepped through on a vanilla build. One of the lines is just to verify configuration of the Logstash files and doesn't actually do anything. I also tried to compress the package installs into two lines at the beginning as much as possible.

    SELinux is addressed in there, no need to disable SELinux or any crap like that 🙂 This actually configures that properly (I hope.)

    This is definitely one of the more time consuming installs I've done. I need to work on an Ansible playbook for it. I install it so infrequently that it might not be worth it.



  • Eesh, I'm in over my head with this one. Might give it a crack at home but my goodness...



  • Nice! I'll try to give this a whirl at some point in the next couple of days.

    Thanks!



  • @scottalanmiller said:

    Half a terabyte is a good starting point for disk space.

    So much for me trying it - I might be lucky if I have 100 GB available for this. 😞


  • Service Provider

    @Dashrender said:

    @scottalanmiller said:

    Half a terabyte is a good starting point for disk space.

    So much for me trying it - I might be lucky if I have 100 GB available for this. 😞

    You can do that just to see what it looks like. 20GB will work for a very tiny test workload. But very tiny.
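
    If you do run it that small, it is easy to keep an eye on (and prune) index sizes with Elasticsearch's own API; the index names follow the filebeat-YYYY.MM.DD pattern that the Logstash output above creates, and the one deleted below is only an example:

    # Show per-index disk usage
    curl 'http://localhost:9200/_cat/indices?v'
    # Drop an old daily index when space gets tight
    curl -XDELETE 'http://localhost:9200/filebeat-2016.02.01'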


  • Service Provider

    Just tested on a fresh build and it works BEAUTIFULLY. I put it into a script and ran it instead of going line by line, and it worked on the first try, no problems. It stops in the middle and asks for a password; that could be moved to the end or something, but it works just fine and isn't so slow that you'd want to walk away. So I added a BASH script header. If you want, just copy/paste it into a text file and run it. Boom, done. Working ELK in a minute.
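
    If you go that route, running it is nothing more than this (the filename is arbitrary):

    # Paste the whole block into a file and run it as root
    vi elk-install.sh
    chmod +x elk-install.sh
    ./elk-install.sh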


  • Service Provider

    @scottalanmiller so what do you setup your disk partitioning like in CentOS 7?

    On a minimal install left to automatic partitioning, if you use a larger drive, it will create a separate partition for all the space after 50GB.

    This is highly annoying because I created a 127GB drive (the default in Hyper-V) and now 50GB is separate from all the rest.


  • Service Provider

    like this

    [root@elk ~]# df -h
    Filesystem                   Size  Used Avail Use% Mounted on
    /dev/mapper/centos_elk-root   50G  855M   50G   2% /
    devtmpfs                     906M     0  906M   0% /dev
    tmpfs                        916M     0  916M   0% /dev/shm
    tmpfs                        916M  8.3M  907M   1% /run
    tmpfs                        916M     0  916M   0% /sys/fs/cgroup
    /dev/sda2                    494M   98M  396M  20% /boot
    /dev/sda1                    200M  9.5M  191M   5% /boot/efi
    /dev/mapper/centos_elk-home   75G   33M   75G   1% /home
    tmpfs                        184M     0  184M   0% /run/user/0
    [root@elk ~]#
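
    If you just want that space back under / on a lab box, one blunt option is to fold the home LV into root. This wipes /home, so it only makes sense on a fresh build, and the VG/LV names here are taken from the df output above:

    # Destructive: remove the home LV and grow root into the freed space
    umount /home
    lvremove -f /dev/centos_elk/home
    lvextend -l +100%FREE /dev/centos_elk/root
    xfs_growfs /
    sed -i '/centos_elk-home/d' /etc/fstab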
    

  • Service Provider

    You should at least tell the user that you are asking for the kibana password.

    htpasswd -c /etc/nginx/htpasswd.users kibanauser

    (screenshot)
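
    A minimal way to make that prompt obvious, or to skip it entirely, keeping the same kibanauser name (the -b form takes the password on the command line, so mind your shell history):

    # Print a notice before the interactive prompt
    echo "Setting the password for the Kibana web login (user: kibanauser)"
    htpasswd -c /etc/nginx/htpasswd.users kibanauser

    # ...or set it non-interactively
    # htpasswd -b -c /etc/nginx/htpasswd.users kibanauser 'ChangeMe123'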


  • Service Provider

    I had this error.

    (screenshot)


  • Service Provider

    Looks like maybe you forgot to start firewalld?

      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   814  100   814    0     0   1370      0 --:--:-- --:--:-- --:--:--  1372
    {
      "acknowledged" : true
    }
    FirewallD is not running
    FirewallD is not running
    [root@elk ~]# yum install firewalld
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
     * base: mirror.oss.ou.edu
     * epel: fedora-epel.mirror.lstn.net
     * extras: centos.mirrors.wvstateu.edu
     * updates: centos.mirrors.wvstateu.edu
    Package firewalld-0.3.9-14.el7.noarch already installed and latest version
    Nothing to do
    [root@elk ~]# systemctl start firewalld
    [root@elk ~]# systemctl status firewalld
    ● firewalld.service - firewalld - dynamic firewall daemon
       Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
       Active: active (running) since Tue 2016-02-23 23:55:11 CST; 14s ago
     Main PID: 11482 (firewalld)
       CGroup: /system.slice/firewalld.service
               └─11482 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
    
    Feb 23 23:55:09 elk systemd[1]: Starting firewalld - dynamic firewall daemon...
    Feb 23 23:55:11 elk systemd[1]: Started firewalld - dynamic firewall daemon.
    [root@elk ~]#
    

  • Service Provider

    Yeah, you set it to install, but you never start or enable it.

    (screenshot)


  • Service Provider

    Line 109 needs to be commented out.

    (screenshot)

    Add this right after the yum install to fix the firewall:

    yum -y install wget firewalld epel-release
    systemctl enable firewalld
    systemctl start firewalld
    yum -y install nginx httpd-tools unzip
    

    I would just remove line 109; it serves no purpose.

    Edit: Some dumbass forgot to snapshot the image so he could repeat the install...


  • Service Provider

    Why lock out with .htaccess? There is no hint what is needed to log in here.

    (screenshot)

    I hate this level of authentication.

    Using kibanauser and the password I chose results in the Kibana setup screen.
    (screenshot)


  • Service Provider

    @JaredBusch said:

    @scottalanmiller so what do you setup your disk partitioning like in CentOS 7?

    If I'm doing this for production, I do 20GB for the OS and 200GB+ on a second VHD for the data. I put it all under LVM, make an XFS filesystem on the secondary mount, mount it to data, and make a symlink for the Elasticsearch database directory into there.
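
    Roughly, that looks like the sketch below; the device name (/dev/sdb), VG/LV names, and the /data mount point are just placeholders, and /var/lib/elasticsearch is the 2.x RPM default data path:

    # Put the second VHD under LVM with an XFS filesystem mounted at /data
    pvcreate /dev/sdb
    vgcreate vg_data /dev/sdb
    lvcreate -l 100%FREE -n lv_data vg_data
    mkfs.xfs /dev/vg_data/lv_data
    mkdir -p /data
    echo '/dev/vg_data/lv_data /data xfs defaults 0 0' >> /etc/fstab
    mount /data

    # Relocate the Elasticsearch data directory and symlink it back
    systemctl stop elasticsearch
    mv /var/lib/elasticsearch /data/elasticsearch
    ln -s /data/elasticsearch /var/lib/elasticsearch
    chown -R elasticsearch:elasticsearch /data/elasticsearch
    # With SELinux enforcing you may also want: restorecon -R /data/elasticsearch
    systemctl start elasticsearch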


  • Service Provider

    @JaredBusch said:

    Why lock out with .htaccess? There is no hint what is needed to log in here.

    It's how Digital Ocean does it as well. Kibana doesn't have a built in authentication scheme that I know of. HTAccess is very simple for someone to just get started.


  • Service Provider

    And simple to remove when you want to move to something else.
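
    For example, adding another login later or pulling basic auth out entirely is just (the user name here is only an example):

    # Add another web login to the same file - no -c, so the existing file is kept
    htpasswd /etc/nginx/htpasswd.users anotheruser

    # To drop basic auth later, remove the auth_basic and auth_basic_user_file
    # lines from /etc/nginx/conf.d/kibana.conf, then:
    systemctl reload nginx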


  • Service Provider

    @JaredBusch said:

    Line 109 needs to be commented out.

    (screenshot)

    Add this right after the yum install to fix the firewall:

    yum -y install wget firewalld epel-release
    systemctl enable firewalld
    systemctl start firewalld
    yum -y install nginx httpd-tools unzip
    

    I would just remove line 109; it serves no purpose.

    Edit: Some dumbass forgot to snapshot the image so he could repeat the install...

    Thanks. That was formatting I had originally put in before scripting it.


  • Service Provider

    @JaredBusch said:

    Looks like maybe you forgot to start firewalld?

    Fixed


  • Service Provider

    @scottalanmiller said:

    @JaredBusch said:

    @scottalanmiller so what do you setup your disk partitioning like in CentOS 7?

    If I'm doing this for production, I do 20GB for the OS and 200GB+ on a second VHD for the data. I put it all under LVM, make an XFS filesystem on the secondary mount, mount it to data, and make a symlink for the Elasticsearch database directory into there.

    So this means you need to make one of your Linux admin setups on drive settings, because that is not what CentOS does by default.



  • @JaredBusch said:

    @scottalanmiller said:

    @JaredBusch said:

    @scottalanmiller so what do you setup your disk partitioning like in CentOS 7?

    If I'm doing this for production, I do 20GB for the OS and 200GB+ on a second VHD for the data. I put it all under LVM, make an XFS filesystem on the secondary mount, mount it to data, and make a symlink for the Elasticsearch database directory into there.

    So this means you need to make one of your Linux admin setups on drive settings, because that is not what CentOS does by default.

    Would CentOS do what Scott does if you had two drives to give it during install, i.e. a 20 GB and a 200+ GB one? Would CentOS install the OS and everything fully on the 20 GB drive, and then just mount the 200 GB one at some mount point?


  • Service Provider

    @Dashrender said:

    @JaredBusch said:

    @scottalanmiller said:

    @JaredBusch said:

    @scottalanmiller so what do you setup your disk partitioning like in CentOS 7?

    If I'm doing this for production, I do 20GB for the OS and 200GB+ on a second VHD for the data. I put it all under LVM, make an XFS filesystem on the secondary mount, mount it to data, and make a symlink for the Elasticsearch database directory into there.

    So this means you need to make one of your Linux admin setups on drive settings, because that is not what CentOS does by default.

    Would CentOS do what Scott does if you had two drives to give it during install, i.e. a 20 GB and a 200+ GB one? Would CentOS install the OS and everything fully on the 20 GB drive, and then just mount the 200 GB one at some mount point?

    The answer is: not by default. It tries to make its own magic.

    You can see here that I created a 20GB and a 200GB VHDX and told the installer to handle it all for me.

    (screenshot)

    Guess what, you still end up with a 50GB and a 170GB partition scheme:

    [root@elk ~]# df -h
    Filesystem                   Size  Used Avail Use% Mounted on
    /dev/mapper/centos_elk-root   50G  882M   50G   2% /
    devtmpfs                     906M     0  906M   0% /dev
    tmpfs                        916M     0  916M   0% /dev/shm
    tmpfs                        916M  8.3M  907M   1% /run
    tmpfs                        916M     0  916M   0% /sys/fs/cgroup
    /dev/sda2                    494M   99M  395M  21% /boot
    /dev/sda1                    200M  9.5M  191M   5% /boot/efi
    /dev/mapper/centos_elk-home  168G   33M  168G   1% /home
    tmpfs                        184M     0  184M   0% /run/user/0
    [root@elk ~]#
    

  • Service Provider

    CentOS 7 has a thing for 50GB root mounts.


  • Service Provider

    Yeah, the defaults suck a bit.


  • Service Provider

    @scottalanmiller Why are you using Oracle's Java SDK and not just java from the repo?

    I had read in another write up on the install that it works fine even if it is not the "official" method.


  • Service Provider

    @JaredBusch said:

    @scottalanmiller Why are you using Oracle's Java SDK and not just java from the repo?

    I had read in another write up on the install that it works fine even if it is not the "official" method.

    Even if it "works", Elasticsearch tests against and only officially supports the Oracle one. I might be able to get the OpenJDK to work, but I'd hate to have it be buggy or problematic for someone down the line because I used one that wasn't tested against.
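
    For anyone who wants to try the repo version anyway, the swap is roughly this (again, not the combination Elasticsearch 2.x is officially tested against):

    # OpenJDK 8 from the stock CentOS 7 repos instead of the Oracle RPM
    yum -y install java-1.8.0-openjdk
    java -version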


  • Service Provider

    @scottalanmiller said:

    @JaredBusch said:

    @scottalanmiller Why are you using Oracle's Java SDK and not just java from the repo?

    I had read in another write up on the install that it works fine even if it is not the "official" method.

    Even if it "works", Elasticsearch tests against and only officially supports the Oracle one. I might be able to get the OpenJDK to work, but I'd hate to have it be buggy or problematic for someone down the line because I used one that wasn't tested against.

    I am not a fan of Oracle when it comes to Java.


