When I was playing with graylog, I was using Beats
Care to elaborate?
Here is the sample config file:
Panic Soft
#NoFreeOnExit TRUE
define ROOT C:\Program Files (x86)\nxlog
define CERTDIR %ROOT%\cert
define CONFDIR %ROOT%\conf
define LOGDIR %ROOT%\data
define LOGFILE %LOGDIR%\nxlog.log
LogFile %LOGFILE%
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data

<Extension _syslog>
    Module xm_syslog
</Extension>

<Extension _gelf>
    Module xm_gelf
</Extension>

<Input in_eventlog>
    Module im_msvistalog
</Input>

<Input in_internal>
    Module im_internal
</Input>

<Processor p_2syslog>
    Module pm_transformer
    Exec $Hostname = hostname();
    OutputFormat syslog_rfc5424
</Processor>

<Output out>
    Module om_udp
    Host host-ip-address
    Port 12201
    # Exec to_syslog_snare();
    OutputType GELF_UDP
</Output>

<Route 1>
    Path in_internal, in_eventlog => p_2syslog => out
</Route>
And I do have an input set up in Graylog for GELF UDP using port 12201.
Not sure what else really needs to be set up, as the logging appears to be relatively successful:
2019-11-21 16:37:02 INFO nxlog-ce-2.10.2150 started
2019-11-21 16:37:03 WARNING Due to a limitation in the Windows EventLog subsystem, a query cannot contain more than 256 sources.
2019-11-21 16:37:03 WARNING The following sources are omitted to avoid exceeding the limit in the generated query: Microsoft-Windows-FeatureConfiguration/Operational Microsoft-Windows-Fault-Tolerant-Heap/Operational Microsoft-Windows-FailoverClustering-Manager/Admin Microsoft-Windows-EventCollector/Operational Microsoft-Windows-EnrollmentWebService/Admin Microsoft-Windows-EnrollmentPolicyWebService/Admin Microsoft-Windows-EDP-Audit-TCB/Admin Microsoft-Windows-EDP-Audit-Regular/Admin Microsoft-Windows-EDP-Application-Learning/Admin Microsoft-Windows-EapMethods-Ttls/Operational Microsoft-Windows-EapMethods-Sim/Operational Microsoft-Windows-EapMethods-RasTls/Operational Microsoft-Windows-EapMethods-RasChap/Operational Microsoft-Windows-EapHost/Operational Microsoft-Windows-DxgKrnl-Operational Microsoft-Windows-DxgKrnl-Admin Microsoft-Windows-DSC/Operational Microsoft-Windows-DSC/Admin Microsoft-Windows-DiskDiagnosticResolver/Operational Microsoft-Windows-DiskDiagnosticDataCollector/Operational Microsoft-Wind
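The 256-source warning is only NXLog telling you it trimmed the auto-generated query. If you want to silence it, im_msvistalog accepts an explicit QueryXML block listing just the channels you care about — a sketch, where the three channel names below are my assumption of what most people want, not something from your setup:

```
<Input in_eventlog>
    Module im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Application">*</Select>
                <Select Path="System">*</Select>
                <Select Path="Security">*</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>
```

With an explicit query NXLog no longer has to enumerate every registered source, so it stays well under the 256 limit.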
So there are a few options for Graylog and utilities to get the logs from Windows to Graylog (or anything else). One of the recommended tools is NXLog as it's FOSS.
And while I was able to get Graylog set up and installed, I can't for the life of me get my sample workstation to actually send any logs to my Graylog server.
Does anyone have any pointers on this?
Thanks everyone. Turns out they found Hyper-V 2016 and that is what they are going to use, so it's moot now. Now we get to migrate from VMware to Hyper-V.
But an old version of Hyper-V . . .
First off, this assumes you are using CentOS 7 (CentOS 8 may work, I just didn't have the time to get that ISO downloaded and troubleshoot all of these steps).
To start, update the OS so we're current and install some dependencies.
yum update -y
yum install java-1.8.0-openjdk-headless.x86_64
yum install epel-release
yum install pwgen
vi /etc/yum.repos.d/mongodb-org-4.0.repo
When you are modifying this repo file, add the below:
[mongodb-org-4.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc

Save and quit (:wq), then:

yum install mongodb-org
Enter 'Y' to confirm installation
systemctl daemon-reload
systemctl enable mongod.service
systemctl start mongod.service
ps aux | grep mongo
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vi /etc/yum.repos.d/elasticsearch.repo
Insert the below into this repo file so we can install Elasticsearch-OSS (because the licensing is better for us in this case).
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/oss-6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Save and quit this file
yum install elasticsearch-oss
vi /etc/elasticsearch/elasticsearch.yml
At the end of the file, add:
Save and quit this file
chkconfig --add elasticsearch
systemctl daemon-reload
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
ps aux | grep elastic
rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-3.0-repository_latest.rpm
yum install graylog-server
Now to set up the configuration file:
vi /etc/graylog/server/server.conf

From within vi, drop to a shell with :shell and generate a secret:

pwgen -N 1 -s 96
Copy whatever is generated and insert it as the value of "password_secret = ".
Next you need to set root_password_sha2, which is the password you will use to log in to the Graylog web console (make it user friendly).
From :shell again, hash the password you want to use:

echo -n "Enter Password: " && head -1 </dev/stdin | tr -d '\n' | sha256sum | cut -d" " -f1
Copy the hash into root_password_sha2.
Lastly, edit the timezone:
root_timezone = America/New_York
Save and quit this file
Ensure everything starts at boot:
chkconfig --add graylog-server
systemctl daemon-reload
systemctl enable graylog-server.service
systemctl start graylog-server.service
vi /etc/rsyslog.conf

At the end of the file, insert:

*.* @ip-addr-of-server:1514;RSYSLOG_SyslogProtocol23Format

Then restart rsyslog and redirect the privileged syslog port to 1514:

systemctl restart rsyslog
iptables -t nat -A PREROUTING -p tcp --dport 514 -j REDIRECT --to 1514
iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to 1514
Save these rules so they load at next boot:
iptables-save > /etc/sysconfig/iptables
Check to make sure we're listening on port 9000:
ss -nl | grep 9000
tcp LISTEN 0 128 [::ffff:127.0.0.1]:9000 [::]:*

vi /etc/graylog/server/server.conf
Edit the HTTP settings so you can actually access the web interface from anything on your LAN (or cloud):
http_bind_address = ip-addr-of-server:9000
Save and quit this file
systemctl restart graylog-server
Wait a minute for everything to start up.
Then check again to make sure port 9000 is listening on your server's IP; it should show like the example below:
ss -nl | grep 9000
tcp LISTEN 0 128 [::ffff:ip-addr-of-server]:9000 [::]:*
Add some firewall exceptions:
firewall-cmd --zone=public --add-port=9000/tcp --permanent
firewall-cmd --reload
At this point open a web browser, go to http://ip-addr-of-server:9000, and log in with 'admin' and whatever password you created above.
Time to update so we're current - I know @JaredBusch
sudo rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-3.1-repository_latest.rpm
yum clean all
yum install graylog-server
systemctl restart graylog-server
Re-login to your updated Graylog server and you can clear the alarm about being out of date.
From here all you need to do is set up your inputs.
Did you install the VirtualBox Guest Additions inside of your VM?
Is this using Oracle VirtualBox as the hypervisor? Then Kali Linux is the guest (or virtual machine)?
@Pete-S So the simplest way I can think to explain this would be like this.
You have a network share which is relatively organized
You create a compressed tarball of any folder on that share and then move that tarball to offsite storage.
How would I realistically get a hash of that folder pre- and post-tar-and-compression and have it make sense? They aren't the same thing, even if they contain the same contents.
Is it safe to assume that the gzip file is correct when it is created?
This is what I'm looking to verify
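One way to square this: don't hash the tarball against the folder, hash the folder's contents before archiving and again after a test extraction. Checksum every file, sort the list for a stable order, then hash the list itself; matching digests mean the contents round-tripped intact. A sketch, where "reports" and the /tmp/verify scratch path are placeholder names, not anything from your environment:

```shell
#!/bin/sh
# Hash a directory's contents: checksum every file, sort for a
# stable order, then hash the combined list into one digest.
hash_tree() {
    (cd "$1" && find . -type f -print0 | sort -z \
        | xargs -0 sha256sum | sha256sum | cut -d' ' -f1)
}

before=$(hash_tree reports)

# Create the tarball, then use gzip's built-in integrity check.
tar -czf reports.tar.gz reports
gzip -t reports.tar.gz   # non-zero exit means a corrupt archive

# Extract to a scratch location and hash the contents again.
mkdir -p /tmp/verify
tar -xzf reports.tar.gz -C /tmp/verify
after=$(hash_tree /tmp/verify/reports)

[ "$before" = "$after" ] && echo "contents verified" || echo "MISMATCH"
```

gzip -t answers the "is the gzip file correct when created" question (it verifies the CRC of the compressed stream), while the before/after digest answers the stronger question of whether the files inside still match the originals. Note this only covers file contents, not permissions or timestamps.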