Issue
I am working on adding some Nagios alerts to our system -- some of which will monitor the rate of certain events hitting the nginx/Apache logs (or parse values out of those logs). My approach so far is a simple shell script that tail -f's the log into a temporary file for about 25 seconds, kills the tail process, and then runs awk, etc. over the temp file. The goal is to get a 25-second "sample" of the log and then perform analysis on it.
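The temp-file approach described above might look roughly like the sketch below. The helper name, the log path, and the demo log are illustrative assumptions, not my actual script; the demo writes a fake access log so the sketch runs anywhere.

```shell
#!/bin/sh
# Hypothetical sketch of the temp-file sampling described above:
# tail -f the log into a temp file for N seconds, kill the tail,
# then run the analysis pass over the captured sample.

# sample_count LOG SECONDS PATTERN -> prints count of matching lines
sample_count() {
    log=$1; window=$2; pattern=$3
    sample=$(mktemp)
    tail -f "$log" > "$sample" &
    pid=$!
    sleep "$window"
    kill "$pid" 2>/dev/null
    grep -c "$pattern" "$sample"
    rm -f "$sample"
}

# Demo against a fake access log; in practice the first argument
# would be the real nginx/Apache log, e.g. /var/log/nginx/access.log.
fake=$(mktemp)
printf 'GET /serve/a\nGET /other\nGET /serve/b\n' > "$fake"
sample_count "$fake" 2 "/serve"    # two of the three lines mention /serve
rm -f "$fake"
```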
This is obviously less than ideal because of the extra disk I/O from the temp files. What I'd really like is an "enhanced" tail -f that terminates the pipe cleanly after a given number of seconds, e.g.:
tail -f --interval '5 seconds' | grep "/serve"
would tail the log for 5 seconds and show me all the lines containing "/serve".
I imagine I could whip up a Ruby script to do this pretty quickly, but I wanted to make sure there isn't a more Unixy way to accomplish it. At a high level, is there a better way to take samples of a log from the last N seconds? (And no, I'd rather not parse timestamps, etc.)
Solution
A slightly different approach:
(tail -f /var/log/messages & P=$!; sleep 5; kill "$P") | grep "/serve"
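If GNU coreutils' timeout(1) is available, the subshell-and-kill dance can be replaced with a single command that kills the tail once the window expires, so the pipe closes cleanly. The demo below writes a throwaway log so it is runnable anywhere; substitute the real log path in practice:

```shell
# Fake log standing in for the real one (e.g. /var/log/messages)
LOG=$(mktemp)
printf '10.0.0.1 GET /serve/img\n10.0.0.1 GET /index\n' > "$LOG"

# timeout sends tail a TERM after 2 seconds; grep sees EOF and exits
timeout 2 tail -f "$LOG" | grep "/serve"
rm -f "$LOG"
```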
Answered By - brian-brazil Answer Checked By - Mildred Charles (WPSolving Admin)