• 1 Vote
    1 Post
    217 Views
    No one has replied
  • Centralized Log Management

    IT Discussion
    33
    0 Votes
    33 Posts
    3k Views
    scottalanmillerS

    @pete-s said in Centralized Log Management:

    Amazon is providing the service, not the software. So they don't need to adhere to GPL and similar licenses.

    Oh, they have to adhere; it's just that the license clearly states that there are no limits on use. So they were adhering perfectly.

    ELK was upset because they didn't like how the code was being used when run in production and wanted control over the use of their code, not over the reading or modification of it.

  • Log & Alerts Management

    IT Discussion
    13
    1 Vote
    13 Posts
    899 Views
    hobbit666H

    @coliver said in Log & Alerts Management:

    Graylog would be the solution for that.

    I recognise that name; I'll have to look into that again.

  • 1 Vote
    23 Posts
    3k Views
    IRJI

    @DustinB3403 said in Wazuh - operational and can add agents - now what:

    @IRJ said in Wazuh - operational and can add agents - now what:

    So you already filtered it. Just click Discover at the top right.

    Doh, that is so easy that I didn't even think that was it.

    @DustinB3403


  • 1 Vote
    1 Post
    282 Views
    No one has replied
  • 0 Votes
    51 Posts
    5k Views
    JaredBuschJ

    @zachary715 said in How to receive e-mail alerts from internal devices:

    Do you guys go beyond the SPF records and also implement DKIM or DMARC? I've looked into these briefly but not much. DKIM looks fairly straightforward with Office 365.

    I've checked them both. I will not implement DKIM anytime soon. It adds little on top of SPF.

    DMARC is a layer on top of SPF and/or DKIM; you cannot use DMARC without one of the others in place.

    All DMARC does is tell the recipient system what to do with a message that fails the SPF/DKIM check, instead of letting the recipient system decide on its own.
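
    To make the layering concrete, here is a rough sketch of the DNS TXT records involved; example.com, the selector name, and the report mailbox are placeholders rather than anything from this thread, and the SPF include shown assumes Office 365.

    ; SPF - which systems may send mail for the domain
    example.com.                       IN TXT "v=spf1 include:spf.protection.outlook.com -all"

    ; DKIM - public key published under a selector so recipients can verify signatures
    selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."

    ; DMARC - policy telling recipients what to do when SPF/DKIM alignment fails,
    ; plus where to send aggregate reports
    _dmarc.example.com.                IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"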

  • 4 Votes
    11 Posts
    7k Views
    DustinB3403D

    For anyone looking to do this, here is my filebeat.yml file.

    # Name of the registry file. Per default it is put in the current working
    # directory. In case the working directory is changed after when running
    # filebeat again, indexing starts from the beginning again.
    registry_file: /var/lib/filebeat/registry

    # Full Path to directory with additional prospector configuration files. Each file must end with .yml
    # These config files must have the full filebeat config part inside, but only
    # the prospector part is processed. All global options like spool_size are ignored.
    # The config_dir MUST point to a different directory then where the main filebeat config file is in.
    #config_dir:

    ###############################################################################
    ############################# Libbeat Config ##################################
    # Base config file used by all other beats for using libbeat features

    ############################# Output ##########################################

    # Configure what outputs to use when sending the data collected by the beat.
    # Multiple outputs may be used.
    output:

      ### Elasticsearch as output
      elasticsearch:
        # Array of hosts to connect to.
        # Scheme and port can be left out and will be set to the default (http and 9200)
        # In case you specify and additional path, the scheme is required: http://localhost:9200/path
        # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
        hosts: ["localhost:9200"]

        # Optional protocol and basic auth credentials.
        #protocol: "https"
        #username: "admin"
        #password: "s3cr3t"

        # Number of workers per Elasticsearch host.
        #worker: 1

        # Optional index name. The default is "filebeat" and generates
        # [filebeat-]YYYY.MM.DD keys.
        #index: "filebeat"

        # A template is used to set the mapping in Elasticsearch
        # By default template loading is disabled and no template is loaded.
        # These settings can be adjusted to load your own template or overwrite existing ones
        #template:

          # Template name. By default the template name is filebeat.
          #name: "filebeat"

          # Path to template file
          #path: "filebeat.template.json"

          # Overwrite existing template
          #overwrite: false

        # Optional HTTP Path
        #path: "/elasticsearch"

        # Proxy server url
        #proxy_url: http://proxy:3128

        # The number of times a particular Elasticsearch index operation is attempted. If
        # the indexing operation doesn't succeed after this many retries, the events are
        # dropped. The default is 3.
        #max_retries: 3

        # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
        # The default is 50.
        #bulk_max_size: 50

        # Configure http request timeout before failing an request to Elasticsearch.
        #timeout: 90

        # The number of seconds to wait for new events between two bulk API index requests.
        # If `bulk_max_size` is reached before this interval expires, addition bulk index
        # requests are made.
        #flush_interval: 1

        # Boolean that sets if the topology is kept in Elasticsearch. The default is
        # false. This option makes sense only for Packetbeat.
        #save_topology: false

        # The time to live in seconds for the topology information that is stored in
        # Elasticsearch. The default is 15 seconds.
        #topology_expire: 15

        # tls configuration. By default is off.
        #tls:
          # List of root certificates for HTTPS server verifications
          certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

          # Certificate for TLS client authentication
          #certificate: "/etc/pki/client/cert.pem"

          # Client Certificate Key
          #certificate_key: "/etc/pki/client/cert.key"

          # Controls whether the client verifies server certificates and host name.
          # If insecure is set to true, all server host names and certificates will be
          # accepted. In this mode TLS based connections are susceptible to
          # man-in-the-middle attacks. Use only for testing.
          #insecure: true

          # Configure cipher suites to be used for TLS connections
          #cipher_suites: []

          # Configure curve types for ECDHE based cipher suites
          #curve_types: []

          # Configure minimum TLS version allowed for connection to logstash
          #min_version: 1.0

          # Configure maximum TLS version allowed for connection to logstash
          #max_version: 1.2

      ### Logstash as output
      #logstash:
        # The Logstash hosts
        hosts: ["localhost:5044"]

        # Number of workers per Logstash host.
        #worker: 1

        # Set gzip compression level.
        #compression_level: 3

        # Optional load balance the events between the Logstash hosts
        #loadbalance: true

        # Optional index name. The default index name depends on the each beat.
        # For Packetbeat, the default is set to packetbeat, for Topbeat
        # top topbeat and for Filebeat to filebeat.
        #index: filebeat

        # Optional TLS. By default is off.
        #tls:
          # List of root certificates for HTTPS server verifications
          #certificate_authorities: ["/etc/pki/root/ca.pem"]

          # Certificate for TLS client authentication
          #certificate: "/etc/pki/client/cert.pem"

          # Client Certificate Key
          #certificate_key: "/etc/pki/client/cert.key"

          # Controls whether the client verifies server certificates and host name.
          # If insecure is set to true, all server host names and certificates will be
          # accepted. In this mode TLS based connections are susceptible to
          # man-in-the-middle attacks. Use only for testing.
          #insecure: true

          # Configure cipher suites to be used for TLS connections
          #cipher_suites: []

          # Configure curve types for ECDHE based cipher suites
          #curve_types: []

      ### File as output
      #file:
        # Path to the directory where to save the generated files. The option is mandatory.
        #path: "/tmp/filebeat"

        # Name of the generated files. The default is `filebeat` and it generates files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
        #filename: filebeat

        # Maximum size in kilobytes of each file. When this size is reached, the files are
        # rotated. The default value is 10 MB.
        #rotate_every_kb: 10000

        # Maximum number of files under path. When this number of files is reached, the
        # oldest file is deleted and the rest are shifted from last to first. The default
        # is 7 files.
        #number_of_files: 7

      ### Console output
      # console:
        # Pretty print json event
        #pretty: false

    ############################# Shipper #########################################

    shipper:
      # The name of the shipper that publishes the network data. It can be used to group
      # all the transactions sent by a single shipper in the web interface.
      # If this options is not defined, the hostname is used.
      #name:

      # The tags of the shipper are included in their own field with each
      # transaction published. Tags make it easy to group servers by different
      # logical properties.
      #tags: ["service-X", "web-tier"]

      # Uncomment the following if you want to ignore transactions created
      # by the server on which the shipper is installed. This option is useful
      # to remove duplicates if shippers are installed on multiple servers.
      #ignore_outgoing: true

      # How often (in seconds) shippers are publishing their IPs to the topology map.
      # The default is 10 seconds.
      #refresh_topology_freq: 10

      # Expiration time (in seconds) of the IPs published by a shipper to the topology map.
      # All the IPs will be deleted afterwards. Note, that the value must be higher than
      # refresh_topology_freq. The default is 15 seconds.
      #topology_expire: 15

      # Internal queue size for single events in processing pipeline
      #queue_size: 1000

      # Configure local GeoIP database support.
      # If no paths are not configured geoip is disabled.
      #geoip:
        #paths:
        # - "/usr/share/GeoIP/GeoLiteCity.dat"
        # - "/usr/local/var/GeoIP/GeoLiteCity.dat"

    ############################# Logging #########################################

    # There are three options for the log ouput: syslog, file, stderr.
    # Under Windos systems, the log files are per default sent to the file output,
    # under all other system per default to syslog.
    logging:

      # Send all logging output to syslog. On Windows default is false, otherwise
      # default is true.
      #to_syslog: true

      # Write all logging output to files. Beats automatically rotate files if rotateeverybytes
      # limit is reached.
      #to_files: false

      # To enable logging to files, to_files option has to be set to true
      files:
        # The directory where the log files will written to.
        #path: /var/log/mybeat

        # The name of the files where the logs are written to.
        #name: mybeat

        # Configure log file size limit. If limit is reached, log file will be
        # automatically rotated
        rotateeverybytes: 10485760 # = 10MB

        # Number of rotated log files to keep. Oldest files will be deleted first.
        #keepfiles: 7

      # Enable debug output for selected components. To enable all selectors use ["*"]
      # Other available selectors are beat, publish, service
      # Multiple selectors can be chained.
      #selectors: [ ]

      # Sets log level. The default log level is error.
      # Available log levels are: critical, error, warning, info, debug
      #level: error
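
    If you drop a file like this into place (typically /etc/filebeat/filebeat.yml when installed from the Elastic packages), something along these lines should pick it up; the paths and service name here assume the packaged install rather than anything specific from this post.

    # run in the foreground against the config to watch for errors
    sudo filebeat -c /etc/filebeat/filebeat.yml -e

    # or, once it looks sane, just restart the service
    sudo systemctl restart filebeat
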
  • 7 Votes
    30 Posts
    13k Views
    gotwfG

    P.S.: While the ability to "pivot" seamlessly from, e.g., alert to metrics to log within a single UI is indeed attractive, the time-series data model of the PLG stack (Prometheus, Loki, Grafana) does not lend itself well to "The Tail at Scale" problem.

    https://www2.cs.duke.edu/courses/cps296.4/fall13/838-CloudPapers/dean_longtail.pdf

    IOW, it is all a lot more complex than one may initially imagine... lol.

  • 7 Votes
    8 Posts
    5k Views
    scottalanmillerS

    Graylog has updated and no longer relies on the old version of ElasticSearch; it will use ElasticSearch 2 now. So it's time to revisit.

  • 9 Votes
    43 Posts
    16k Views
    dafyreD

    @scottalanmiller said in Building ELK on CentOS 7:

    @dafyre said in Building ELK on CentOS 7:

    So... I went through and ran the script and it seems to have worked fine... What next?

    Edit: To collect logs from the local server, I also had to install filebeat on this server. So I reckon I can now go and install it on all my other systems as well.

    Yes, install Filebeat and point it to ELK. Check my Filebeat article for more info.

    Didn't realize you had one. 8-) But I'm good now. Logs are collecting as we speak. Bonus: Fail2Ban and Apache logs also work great in ELK.
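
    For anyone following along later, here is a minimal sketch of what "install Filebeat and point it to ELK" boils down to in filebeat.yml (filebeat 1.x syntax, matching the full config posted above); the log paths and the Logstash host are placeholders for your own environment, not values from this thread.

    filebeat:
      prospectors:
        -
          # which local log files to ship
          paths:
            - /var/log/messages
            - /var/log/secure
          input_type: log
      registry_file: /var/lib/filebeat/registry

    output:
      logstash:
        # the ELK box running the Logstash Beats input
        hosts: ["elk.example.com:5044"]
        tls:
          # CA used to trust the Logstash certificate
          certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]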

  • 3 Votes
    2 Posts
    1k Views
    stacksofplatesS

    For a hosted solution, I've used Sealion before. The free offering isn't too bad for unimportant stuff (you only get 3 days of data retention).

  • 1 Vote
    5 Posts
    3k Views
    stacksofplatesS

    After some more testing, it seems enabling the output option in journald.conf has worked. I did restart it after I tried that, but it didn't show up right away. Now it's working. Not sure what changed, but at least it's working.
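
    In case it helps anyone hitting the same thing, the relevant setting lives in /etc/systemd/journald.conf; which forwarding option applies depends on where the logs are being shipped, so treat this as a guess at the knob being described rather than the exact change made above.

    # /etc/systemd/journald.conf
    [Journal]
    # hand everything journald collects to the local syslog socket,
    # where rsyslog (or similar) can forward it on to the log server
    ForwardToSyslog=yes

    Restarting journald afterwards (sudo systemctl restart systemd-journald) picks up the change.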

  • 0 Votes
    4 Posts
    3k Views
    nadnerBN

    If you are uncertain, http://urlquery.net can be helpful.

    It will report on what happens when you go to a particular URL.
    It can even (sometimes) give you a preview of the page.

  • Loggly Log Monitoring

    IT Discussion
    13
    3 Votes
    13 Posts
    2k Views
    scottalanmillerS

    So quite a few options.

  • Problems setting up an ELK stack

    IT Discussion
    6
    0 Votes
    6 Posts
    2k Views
    gjacobseG

    Sorry to hear of the difficulty setting this up. But as I am interested in doing the same... maybe I'll be able to skip those issues.

  • 0 Votes
    5 Posts
    2k Views
    Reid CooperR

    Have you started your logging project yet?

  • Anyone Play with Loggly Yet

    Water Closet
    8
    1 Vote
    8 Posts
    1k Views
    scottalanmillerS

    @Aaron-Studer said:

    Splunk Cloud
    http://www.splunk.com/goto/cloud

    Just launched.

    Ah, the competition is heating up.