This is a short post about something I wrote a while ago. I wrote a monitoring tool (haven’t we all?) and wanted to add the ability to scan log files (in this case the alert log, among others), so this is the solution I came up with.
The project is on GitHub; you can find it here.
It’s quite short, and here are some technical details:
- I wanted to scan only lines that I haven’t scanned before
- I don’t know whether someone has cleared the file. If it was cleared, I have to start scanning from the beginning
- I wanted the option to scan multiple log files (in my case, the database alert log, the ASM alert log, and the RMAN full and incremental backup logs)
- I’m using an internal file where I keep some metadata info
- In order to support multiple log files, I assign each file a unique identifier (e.g. DBALERT for the database alert log, ASMALERT for the ASM alert log, etc.). This identifier is used as a key in the internal file
- To know if the file has been cleared, I decided to run md5 on the first 10 lines of the file (this is configurable). I assume that 10 lines are enough for that. This introduces one issue when the file fills very slowly: if the file has only 5 lines at scan time, appending more lines changes the hash, so the next scan will start from the beginning. In my case (backup logs, alert logs, etc.) this wasn’t a problem, so I didn’t handle it
- So the script gets the info from the internal file based on the key provided, then gets the md5 and the last line number from the last scan. If the md5 matches the current file’s md5, it returns the line number from the internal file; if not, it returns the first line
- In any case it will update the line number and the md5 in the internal file
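The steps above can be sketched roughly as follows. This is a minimal illustration, not the project’s actual code: the state file name, the JSON layout, and the function names are my own assumptions here.

```python
import hashlib
import json
import os

HEADER_LINES = 10                # lines hashed to detect a cleared file (configurable)
STATE_FILE = "scan_state.json"   # hypothetical internal metadata file

def header_md5(path, n=HEADER_LINES):
    """md5 of the first n lines of the log file."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for i, line in enumerate(f):
            if i >= n:
                break
            h.update(line)
    return h.hexdigest()

def new_lines(key, path):
    """Return the lines added since the last scan of the log identified by key."""
    state = {}
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            state = json.load(f)

    current_md5 = header_md5(path)
    entry = state.get(key, {})
    # If the header hash changed, assume the file was cleared: restart at line 0.
    start = entry["line"] if entry.get("md5") == current_md5 else 0

    with open(path) as f:
        lines = f.readlines()
    fresh = lines[start:]

    # In any case, update the line number and the md5 in the internal file.
    state[key] = {"md5": current_md5, "line": len(lines)}
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)
    return fresh
```

A call such as `new_lines("DBALERT", "/path/to/alert.log")` would then yield only the lines appended since the previous run, or the whole file if the header hash no longer matches.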
I hope this is clear and that it will be useful to some of you.