Zabbix gone wild
-
@Mike-Davis said in Zabbix gone wild:
Last night I had a server that Zabbix seemed to think was fluctuating between 8% and 20.55% free disk space. This caused the trigger to generate 700+ emails. The emails started when Zabbix came out of maintenance mode. If it had gone over 20% and stayed there we would have gotten one email per hour, but since it kept toggling back and forth, it was kicking out two every minute. Has anyone ever had anything like this happen before? I'm still investigating.
Was the Free Space issue happening on one partition or two?
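Side note on the flapping itself: if the reported value really does hover around the 20% threshold, a trigger with hysteresis would cut the two-emails-a-minute behavior down to one alert. A rough sketch, assuming Zabbix 3.2 or later (which added separate recovery expressions) and the stock Windows agent key for percent free space; the hostname "rds01" and the thresholds are placeholders:

```
Problem expression (fires when C: free space drops below 20%):
{rds01:vfs.fs.size[C:,pfree].last()}<20

Recovery expression (only clears once it is back above 25%):
{rds01:vfs.fs.size[C:,pfree].last()}>25
```

On newer Zabbix releases the expression syntax is different, but the idea is the same: separate the problem threshold from the recovery threshold so a value bouncing around 20% doesn't toggle the trigger every minute.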
-
The alert was just set up for the drive.
-
Was the space as reported by the C drive fluctuating?
-
That's the odd thing. When I checked it, it was sitting at 12%. I just gave it more storage.
-
@Mike-Davis said in Zabbix gone wild:
That's the odd thing. When I checked it, it was sitting at 12%. I just gave it more storage.
Any kind of backup jobs or Scheduled tasks or anything? What does this server do?
-
That's right in the middle. Seems like it might have been growing and shrinking. That's a fairly common thing to have happen during certain processes.
-
@scottalanmiller said in Zabbix gone wild:
That's right in the middle. Seems like it might have been growing and shrinking. That's a fairly common thing to have happen during certain processes.
Unless it's a busy VM, +/- 12% at a time is kinda crazy.
-
@dafyre said in Zabbix gone wild:
@scottalanmiller said in Zabbix gone wild:
That's right in the middle. Seems like it might have been growing and shrinking. That's a fairly common thing to have happen during certain processes.
Unless it's a busy VM, +/- 12% at a time is kinda crazy.
Cache
-
@scottalanmiller said in Zabbix gone wild:
@dafyre said in Zabbix gone wild:
@scottalanmiller said in Zabbix gone wild:
That's right in the middle. Seems like it might have been growing and shrinking. That's a fairly common thing to have happen during certain processes.
Unless it's a busy VM, +/- 12% at a time is kinda crazy.
Cache
I could easily see 10-12% of RAM being a cache, but 10% of your disk space... depending on the size of the disk, that could still be huge (not to mention slow).
-
@dafyre said in Zabbix gone wild:
@scottalanmiller said in Zabbix gone wild:
@dafyre said in Zabbix gone wild:
@scottalanmiller said in Zabbix gone wild:
That's right in the middle. Seems like it might have been growing and shrinking. That's a fairly common thing to have happen during certain processes.
Unless it's a busy VM, +/- 12% at a time is kinda crazy.
Cache
I could easily see 10-12% of RAM being a cache, but 10% of your disk space... depending on the size of the disk, that could still be huge (not to mention slow).
What if it is a cache of logs being compressed or something similar?
-
Or temporary database tables?
-
The plot thickens. This is the 12-hour graph:
You can see when I added space, but it still keeps going up and down. I'm going to restart the Zabbix service.
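If it turns out to be the agent on that box rather than the Zabbix server, restarting just the agent service is less disruptive than bouncing the whole VM. A minimal sketch, assuming the default service name "Zabbix Agent" (it can differ depending on how the agent was installed):

```
# Run in an elevated PowerShell prompt on the monitored Windows server.
# "Zabbix Agent" is the default service name; adjust if the agent was
# registered under a different name.
Restart-Service -Name "Zabbix Agent"
Get-Service -Name "Zabbix Agent"   # confirm it came back up
```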
-
@Mike-Davis What kind of server is this?
-
@dafyre It's a remote desktop server. I have one user on it now. I'm going to bounce it in a little bit when they are finished.
-
Users could be doing nearly anything in that case.
-
@scottalanmiller said in Zabbix gone wild:
Users could be doing nearly anything in that case.
Temporary database tables... or a user causing issues... You can guess which one I'd pick as the problem.
-
@scottalanmiller I would agree, except I don't think anyone was on at 3:00 AM, and there is only one user on now, and that is another Admin.
-
@Mike-Davis said in Zabbix gone wild:
@scottalanmiller I would agree, except I don't think anyone was on at 3:00 AM, and there is only one user on now, and that is another Admin.
You can track RD sessions for that in Zabbix. You have to nab them with perf_counter.
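For example (a sketch, not tested against this box): the Windows "Terminal Services" performance object exposes session counters that the agent can read with the perf_counter item key. The exact counter path can vary by OS version and locale:

```
perf_counter["\Terminal Services\Active Sessions"]
perf_counter["\Terminal Services\Total Sessions"]
```

Graphing one of those next to the free-space item would show whether the dips line up with logons.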
-
The thing is, I'm on the server and I can see the free space, and it's not changing. The Zabbix graph thinks it is, though. I'll launch perfmon to see if it's changing faster than I can see it.
-
Perfmon shows it steady at 37% free. So it must be Zabbix.
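One way to split the difference between "the disk" and "Zabbix" is to query the agent directly from the Zabbix server and compare that with what perfmon reports. A sketch assuming passive checks are enabled and C: is the monitored drive; the IP is a placeholder for the RD server:

```
# Run from the Zabbix server (or proxy) that polls this host
zabbix_get -s 192.0.2.10 -k "vfs.fs.size[C:,pfree]"

# If this returns a steady ~37 while the graph keeps swinging,
# the agent is reporting correctly and the problem is further up
# the chain (server, pollers, or a duplicate host/item).
```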