• Log & Alerts Management

    IT Discussion
    13
    1 Votes
    13 Posts
    924 Views
    hobbit666

    @coliver said in Log & Alerts Management:

    Graylog would be the solution for that.

    I recognise that name; I'll have to look into it again.

  • Zabbix and ELK integration?

    IT Discussion
    1
    1 Votes
    1 Posts
    350 Views
    No one has replied
  • 3 Votes
    1 Posts
    3k Views
    No one has replied
  • 1 Votes
    9 Posts
    1k Views
    scottalanmiller

    Tags added.

  • 0 Votes
    3 Posts
    1k Views
    NetworkNerd

    After I asked the Wazuh employee I had been speaking with about Kibana 5.6.3, the GitHub repo was updated to include it.

  • 1 Votes
    4 Posts
    2k Views
    A

    Hi @mhamed, if you have solved this step, I need your help because I'm currently working on the same project.

  • 3 Votes
    6 Posts
    1k Views
    scottalanmiller

    A little monitoring goes a long way 🙂

  • 6 Votes
    1 Posts
    2k Views
    No one has replied
  • 1 Votes
    110 Posts
    25k Views
    BRRABill

    @dafyre said in SysLog Forwarding for XenServer:

    @BRRABill said in SysLog Forwarding for XenServer:

    I am the new King of Open Source.

    Ha ha ha. How's that?

    It's my answer to anything.

    Need a new logging server? Open Source!

    Need a new XXXXXX? Open Source!

  • Visualize AWS Detailed Billing with ELK

    IT Discussion
    2
    2 Votes
    2 Posts
    2k Views
    scottalanmiller

    Neat

  • 0 Votes
    12 Posts
    4k Views
    scottalanmiller

    Getting lots of eyes on this thread today. Very interesting.

  • ELK stops accepting logs

    IT Discussion
    26
    2 Votes
    26 Posts
    8k Views
    Danp

    No issues in almost a week. I will probably drop the heap size back to 1g and see if it remains stable.
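
    For context, on these builds the heap is typically set through an environment variable rather than in elasticsearch.yml. A minimal sketch, assuming a package install of an older (2.x-era) Elasticsearch on CentOS that reads ES_HEAP_SIZE from /etc/sysconfig/elasticsearch (newer releases use -Xms/-Xmx in jvm.options instead):

    # Drop the heap back to 1g and restart Elasticsearch to apply it
    sudo sed -i 's/^#\?ES_HEAP_SIZE=.*/ES_HEAP_SIZE=1g/' /etc/sysconfig/elasticsearch
    sudo systemctl restart elasticsearch
    # Confirm the running JVM picked up the new limit
    curl -s 'localhost:9200/_nodes/stats/jvm?pretty' | grep heap_max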

  • 4 Votes
    11 Posts
    7k Views
    DustinB3403

    For anyone looking to do this, here is my filebeat.yml file.

    # Name of the registry file. Per default it is put in the current working
    # directory. In case the working directory is changed after when running
    # filebeat again, indexing starts from the beginning again.
    registry_file: /var/lib/filebeat/registry

    # Full Path to directory with additional prospector configuration files. Each file must end with .yml
    # These config files must have the full filebeat config part inside, but only
    # the prospector part is processed. All global options like spool_size are ignored.
    # The config_dir MUST point to a different directory then where the main filebeat config file is in.
    #config_dir:

    ###############################################################################
    ############################# Libbeat Config ##################################
    # Base config file used by all other beats for using libbeat features

    ############################# Output ##########################################

    # Configure what outputs to use when sending the data collected by the beat.
    # Multiple outputs may be used.
    output:

      ### Elasticsearch as output
      elasticsearch:
        # Array of hosts to connect to.
        # Scheme and port can be left out and will be set to the default (http and 9200)
        # In case you specify and additional path, the scheme is required: http://localhost:9200/path
        # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
        hosts: ["localhost:9200"]

        # Optional protocol and basic auth credentials.
        #protocol: "https"
        #username: "admin"
        #password: "s3cr3t"

        # Number of workers per Elasticsearch host.
        #worker: 1

        # Optional index name. The default is "filebeat" and generates
        # [filebeat-]YYYY.MM.DD keys.
        #index: "filebeat"

        # A template is used to set the mapping in Elasticsearch
        # By default template loading is disabled and no template is loaded.
        # These settings can be adjusted to load your own template or overwrite existing ones
        #template:

          # Template name. By default the template name is filebeat.
          #name: "filebeat"

          # Path to template file
          #path: "filebeat.template.json"

          # Overwrite existing template
          #overwrite: false

        # Optional HTTP Path
        #path: "/elasticsearch"

        # Proxy server url
        #proxy_url: http://proxy:3128

        # The number of times a particular Elasticsearch index operation is attempted. If
        # the indexing operation doesn't succeed after this many retries, the events are
        # dropped. The default is 3.
        #max_retries: 3

        # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
        # The default is 50.
        #bulk_max_size: 50

        # Configure http request timeout before failing an request to Elasticsearch.
        #timeout: 90

        # The number of seconds to wait for new events between two bulk API index requests.
        # If `bulk_max_size` is reached before this interval expires, addition bulk index
        # requests are made.
        #flush_interval: 1

        # Boolean that sets if the topology is kept in Elasticsearch. The default is
        # false. This option makes sense only for Packetbeat.
        #save_topology: false

        # The time to live in seconds for the topology information that is stored in
        # Elasticsearch. The default is 15 seconds.
        #topology_expire: 15

        # tls configuration. By default is off.
        #tls:
          # List of root certificates for HTTPS server verifications
          certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

          # Certificate for TLS client authentication
          #certificate: "/etc/pki/client/cert.pem"

          # Client Certificate Key
          #certificate_key: "/etc/pki/client/cert.key"

          # Controls whether the client verifies server certificates and host name.
          # If insecure is set to true, all server host names and certificates will be
          # accepted. In this mode TLS based connections are susceptible to
          # man-in-the-middle attacks. Use only for testing.
          #insecure: true

          # Configure cipher suites to be used for TLS connections
          #cipher_suites: []

          # Configure curve types for ECDHE based cipher suites
          #curve_types: []

          # Configure minimum TLS version allowed for connection to logstash
          #min_version: 1.0

          # Configure maximum TLS version allowed for connection to logstash
          #max_version: 1.2

      ### Logstash as output
      #logstash:
        # The Logstash hosts
        hosts: ["localhost:5044"]

        # Number of workers per Logstash host.
        #worker: 1

        # Set gzip compression level.
        #compression_level: 3

        # Optional load balance the events between the Logstash hosts
        #loadbalance: true

        # Optional index name. The default index name depends on the each beat.
        # For Packetbeat, the default is set to packetbeat, for Topbeat
        # top topbeat and for Filebeat to filebeat.
        #index: filebeat

        # Optional TLS. By default is off.
        #tls:
          # List of root certificates for HTTPS server verifications
          #certificate_authorities: ["/etc/pki/root/ca.pem"]

          # Certificate for TLS client authentication
          #certificate: "/etc/pki/client/cert.pem"

          # Client Certificate Key
          #certificate_key: "/etc/pki/client/cert.key"

          # Controls whether the client verifies server certificates and host name.
          # If insecure is set to true, all server host names and certificates will be
          # accepted. In this mode TLS based connections are susceptible to
          # man-in-the-middle attacks. Use only for testing.
          #insecure: true

          # Configure cipher suites to be used for TLS connections
          #cipher_suites: []

          # Configure curve types for ECDHE based cipher suites
          #curve_types: []

      ### File as output
      #file:
        # Path to the directory where to save the generated files. The option is mandatory.
        #path: "/tmp/filebeat"

        # Name of the generated files. The default is `filebeat` and it generates files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
        #filename: filebeat

        # Maximum size in kilobytes of each file. When this size is reached, the files are
        # rotated. The default value is 10 MB.
        #rotate_every_kb: 10000

        # Maximum number of files under path. When this number of files is reached, the
        # oldest file is deleted and the rest are shifted from last to first. The default
        # is 7 files.
        #number_of_files: 7

      ### Console output
      # console:
        # Pretty print json event
        #pretty: false

    ############################# Shipper #########################################

    shipper:
      # The name of the shipper that publishes the network data. It can be used to group
      # all the transactions sent by a single shipper in the web interface.
      # If this options is not defined, the hostname is used.
      #name:

      # The tags of the shipper are included in their own field with each
      # transaction published. Tags make it easy to group servers by different
      # logical properties.
      #tags: ["service-X", "web-tier"]

      # Uncomment the following if you want to ignore transactions created
      # by the server on which the shipper is installed. This option is useful
      # to remove duplicates if shippers are installed on multiple servers.
      #ignore_outgoing: true

      # How often (in seconds) shippers are publishing their IPs to the topology map.
      # The default is 10 seconds.
      #refresh_topology_freq: 10

      # Expiration time (in seconds) of the IPs published by a shipper to the topology map.
      # All the IPs will be deleted afterwards. Note, that the value must be higher than
      # refresh_topology_freq. The default is 15 seconds.
      #topology_expire: 15

      # Internal queue size for single events in processing pipeline
      #queue_size: 1000

      # Configure local GeoIP database support.
      # If no paths are not configured geoip is disabled.
      #geoip:
        #paths:
        # - "/usr/share/GeoIP/GeoLiteCity.dat"
        # - "/usr/local/var/GeoIP/GeoLiteCity.dat"

    ############################# Logging #########################################

    # There are three options for the log ouput: syslog, file, stderr.
    # Under Windos systems, the log files are per default sent to the file output,
    # under all other system per default to syslog.
    logging:

      # Send all logging output to syslog. On Windows default is false, otherwise
      # default is true.
      #to_syslog: true

      # Write all logging output to files. Beats automatically rotate files if rotateeverybytes
      # limit is reached.
      #to_files: false

      # To enable logging to files, to_files option has to be set to true
      files:
        # The directory where the log files will written to.
        #path: /var/log/mybeat

        # The name of the files where the logs are written to.
        #name: mybeat

        # Configure log file size limit. If limit is reached, log file will be
        # automatically rotated
        rotateeverybytes: 10485760 # = 10MB

        # Number of rotated log files to keep. Oldest files will be deleted first.
        #keepfiles: 7

      # Enable debug output for selected components. To enable all selectors use ["*"]
      # Other available selectors are beat, publish, service
      # Multiple selectors can be chained.
      #selectors: [ ]

      # Sets log level. The default log level is error.
      # Available log levels are: critical, error, warning, info, debug
      #level: error
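
    A hedged usage note, assuming the file above lives at /etc/filebeat/filebeat.yml on a systemd host: it can be sanity-checked and applied with something like the following.

    # Validate the YAML before restarting (the exact flag name varies between Filebeat releases)
    sudo filebeat -configtest -c /etc/filebeat/filebeat.yml
    # Apply the new configuration and keep Filebeat enabled at boot
    sudo systemctl restart filebeat
    sudo systemctl enable filebeat
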
  • 7 Votes
    30 Posts
    13k Views
    gotwf

    P.S.: While the ability to "pivot" seamlessly from, e.g., alert to metrics to log within a single UI is indeed attractive, the time-series data model of the PLG stack (Prometheus, Loki, Grafana) does not lend itself well to "The Tail at Scale" problem.

    https://www2.cs.duke.edu/courses/cps296.4/fall13/838-CloudPapers/dean_longtail.pdf

    IOW, it is all a lot more complex than one may initially imagine... lol.

  • ELK server is up, now how do I use it.

    IT Discussion
    15
    6 Votes
    15 Posts
    3k Views
    BRRABill

    I was playing a little bit with LOGG.LY today and I think I fried my brain.

    I'm trying to get my logs off my XS USB boot device so it doesn't get its brain fried.

    I'll be watching this ELK discussion to see how everyone does.
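
    For anyone else shipping XenServer logs off-box, a rough sketch of pointing a host's syslog at a remote collector from the XS console (the host UUID and the ELK server address are placeholders):

    # Find this host's UUID
    xe host-list
    # Tell XAPI where to forward syslog, then apply the change
    xe host-param-set uuid=<host-uuid> logging:syslog_destination=<elk-server-ip>
    xe host-syslog-reconfigure host-uuid=<host-uuid>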

  • 9 Votes
    43 Posts
    17k Views
    dafyre

    @scottalanmiller said in Building ELK on CentOS 7:

    @dafyre said in Building ELK on CentOS 7:

    So... I went through and ran the script and it seems to have worked fine... What next?

    Edit: To collect logs from the local server, I also had to install filebeat on this server. So I reckon I can now go and install it on all my other systems as well.

    Yes, install Filebeat and point it to ELK. Check my Filebeat article for more info.

    Didn't realize you had one. 8-) But I'm good now. Logs are collecting as we speak. Bonus: Fail2Ban and Apache logs also work great in ELK.
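
    A minimal sketch of that step on a CentOS 7 client; the package URL, version, and hostname are illustrative, so match them to whatever the ELK server is actually running:

    # Install the Filebeat package
    sudo yum install -y https://download.elastic.co/beats/filebeat/filebeat-1.3.1-x86_64.rpm
    # In /etc/filebeat/filebeat.yml, point the Logstash output at the ELK box, e.g.
    #   hosts: ["elk.example.com:5044"]
    sudo systemctl enable filebeat
    sudo systemctl start filebeat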

  • 1 Votes
    5 Posts
    3k Views
    stacksofplates

    After some more testing, it seems that enabling the output option in journald.conf has worked. I did restart it after I tried that, but nothing showed up at first. Now it's working. Not sure what changed, but at least it's working.
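
    If it helps anyone else, the setting in question is presumably the syslog forwarding switch in /etc/systemd/journald.conf; a rough sketch:

    # Hand journal entries to the local syslog daemon so rsyslog (and from there ELK) can see them
    sudo sed -i 's/^#\?ForwardToSyslog=.*/ForwardToSyslog=yes/' /etc/systemd/journald.conf
    sudo systemctl restart systemd-journald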

  • 4 Votes
    9 Posts
    4k Views
    stacksofplates

    They also forget about SELinux in their CentOS 7 docs. You need sudo setsebool -P httpd_can_network_connect 1 and possibly sudo chcon -R --type=httpd_sys_rw_content_t /opt/kibana.

    Up and running now.
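
    A quick way to confirm both changes stuck (a sketch; the path assumes Kibana is unpacked under /opt/kibana as above):

    # The boolean should report "on" and, thanks to -P, survive reboots
    getsebool httpd_can_network_connect
    # Check the file context that chcon applied
    ls -Zd /opt/kibana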

  • 2 Votes
    4 Posts
    2k Views
    scottalanmiller

    @JaredBusch said:

    I have never successfully gotten an ELK server up and running and ingesting logs. I really need to get on this.

    Digital Ocean has some great documentation on it. I love having an ELK server without any licensing limitations.

    The one really sad part, though, is that it's a single-user login out of the box and the user-management component, Shield, is non-free.
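
    The usual free workaround (and roughly what the DigitalOcean guides sketch out) is to put a reverse proxy with basic auth in front of Kibana; the package names, user name, and paths below are only illustrative:

    # nginx and htpasswd come from EPEL / httpd-tools on CentOS 7
    sudo yum install -y epel-release
    sudo yum install -y nginx httpd-tools
    # Create a password file and a user for the Kibana front end
    sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin
    # Then proxy Kibana (default port 5601) behind nginx with something like:
    #   auth_basic "Restricted Access";
    #   auth_basic_user_file /etc/nginx/htpasswd.users;
    #   proxy_pass http://localhost:5601;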

  • 4 Votes
    32 Posts
    12k Views
    scottalanmiller

    Here is the SAR report for the server. Remember, we are running at half the cores and half the memory that are recommended, mostly just as an experiment to see how much is really needed for things to be responsive. And so far, ingesting logs from five servers, it is working just fine. We will be adding more servers and keeping an eye on things to see how the performance holds up, and we will grow the server if we need to. We are trying to learn from this so that we will have better capacity information. But for a smaller company, it looks like a very small server will work just fine. No question that the server is busy, but now that it is up and running and no longer handling the initial setup, it's nowhere near fully loaded.

    02:25:01 PM     CPU     %user     %nice   %system   %iowait    %steal     %idle
    02:35:01 PM     all     12.91     19.61      4.53      0.37      0.00     62.59
    02:45:01 PM     all      2.68      6.86      2.34      0.20      0.00     87.91
    02:55:01 PM     all      2.73      6.42      2.25      0.21      0.00     88.40
    03:05:01 PM     all      2.26      9.77      2.07      0.19      0.00     85.71
    03:15:01 PM     all      3.56      6.49      2.57      0.30      0.00     87.07
    03:25:01 PM     all      3.52     12.39      2.90      0.26      0.00     80.93
    03:35:01 PM     all      2.97      6.45      2.37      0.27      0.00     87.95
    03:45:01 PM     all      2.54     11.15      2.17      0.17      0.00     83.97
    03:55:01 PM     all      1.44      5.42      1.69      0.10      0.00     91.35
    04:05:02 PM     all      0.98      4.86      1.52      0.06      0.00     92.58
    04:15:01 PM     all      1.54      5.07      1.75      0.09      0.00     91.54
    04:25:01 PM     all      1.52     10.37      1.91      0.11      0.00     86.10
    04:35:01 PM     all      3.74      6.99      2.65      0.23      0.00     86.38
    04:45:01 PM     all      3.11     10.70      2.42      0.24      0.00     83.53
    04:55:01 PM     all      1.02      5.07      1.59      0.05      0.00     92.26
    05:05:01 PM     all      1.76      5.64      1.89      0.15      0.00     90.57
    05:15:01 PM     all      0.93      9.27      1.64      0.05      0.00     88.11
    05:25:01 PM     all      1.71      5.45      1.86      0.13      0.00     90.85
    05:35:01 PM     all      2.58      5.40      2.24      0.14      0.00     89.64
    05:45:01 PM     all      4.18     11.75      2.92      0.25      0.00     80.90
    05:55:02 PM     all      3.16      5.85      2.13      0.26      0.00     88.60
    06:05:01 PM     all      3.54      6.36      2.32      0.20      0.00     87.58
    06:15:01 PM     all      3.14     10.63      2.14      0.16      0.00     83.92
    06:25:01 PM     all      4.87     11.22      3.27      0.24      0.00     80.40
    Average:        all      9.22     10.60      3.03      0.41      0.00     76.74
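
    For anyone who wants to pull the same numbers on their own box, this report comes straight from sysstat; a small sketch (the data file name depends on the day of the month and the distro's sa directory):

    # Today's 10-minute CPU averages
    sar -u
    # Or replay a specific day's collection file
    sar -u -f /var/log/sa/sa15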