Graylog AI Summary: Daily error and security log summaries via Ollama

Graylog is great for centralising logs, but scanning through hundreds of error and security events every morning is tedious. Static alerts (e.g. in Grafana or OpenSearch) help when you know exactly what to look for, but they can’t match the quality of an LLM actually reading and summarising the logs: rules stay rigid and pattern-based, while an LLM can highlight what matters, connect the dots, and prioritise. I wanted a daily digest: fetch the relevant logs, have an LLM summarise them, and get the result in Telegram or Slack so I can decide quickly what needs attention.

That’s what graylog-ai-summary does: it pulls error and security-related logs from a Graylog stream, sends them to Ollama for a structured summary, and delivers the output to Telegram and/or Slack. You run it once per day (e.g. via a systemd timer) and get a short, readable report instead of opening Graylog and filtering by hand.

The problem

Logs are often only opened when something has already gone wrong – reactive instead of proactive. With Graylog you can define streams, run searches, and set up alerts, but for a daily “what happened?” overview you often end up:

  • Manually opening the right stream and time range
  • Filtering by severity (errors, warnings) or keywords (e.g. “unauthorized”, “login failed”)
  • Skimming long lists of raw messages

That’s time-consuming and easy to skip. I wanted something that runs automatically, selects the right logs (by severity and/or security keywords), and turns them into a concise summary I can read in a chat.

The concept

The flow is simple: Graylog → fetch and filter → Ollama → summary → Telegram or Slack.

flowchart LR
  A[Graylog<br/>error & security logs] --> B[Fetch & filter<br/>severity, keywords]
  B --> C[Ollama<br/>LLM summarises]
  C --> D[Summary<br/>readable report]
  D --> E[Telegram / Slack<br/>delivery]

You choose which logs go in (e.g. only errors and critical, plus any line containing security-related keywords). The script sends that slice to your Ollama instance; the model returns a short, structured summary. That summary is posted to your chosen channel or chat. No manual opening of Graylog, no rule engine to maintain – just a daily push that answers “what should I look at?” in one read.
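To make that flow concrete, here is a minimal sketch of the pipeline in Python. It is not the project's actual code: the stream query, the syslog-level cutoff, the keyword list, the prompt wording, and the model name are all assumptions, and the Graylog/Ollama/Telegram URLs are the standard defaults for those APIs.

```python
import base64
import json
import urllib.parse
import urllib.request

# Assumption: these keywords mark "security-related" lines; the real tool is configurable.
SECURITY_KEYWORDS = ("unauthorized", "login failed", "invalid token")

def is_relevant(message: str, level: int) -> bool:
    """Keep errors and critical messages (syslog level <= 3) plus security-flavoured lines."""
    if level <= 3:
        return True
    text = message.lower()
    return any(kw in text for kw in SECURITY_KEYWORDS)

def fetch_messages(graylog_url: str, token: str, stream_id: str) -> list[dict]:
    """Pull the last 24 h of messages from one stream via Graylog's REST search API."""
    params = urllib.parse.urlencode({
        "query": "*",
        "range": 86400,                      # relative range in seconds
        "filter": f"streams:{stream_id}",
    })
    req = urllib.request.Request(f"{graylog_url}/api/search/universal/relative?{params}")
    # Graylog API tokens are sent as Basic auth with the literal password "token".
    auth = base64.b64encode(f"{token}:token".encode()).decode()
    req.add_header("Authorization", f"Basic {auth}")
    req.add_header("Accept", "application/json")
    with urllib.request.urlopen(req) as resp:
        return [m["message"] for m in json.load(resp)["messages"]]

def build_prompt(lines: list[str]) -> str:
    """Ask the model for a short, structured digest (wording is illustrative)."""
    return (
        "Summarise these logs from the last 24 hours. Group by theme, "
        "highlight critical errors and security events, and end with "
        "recommended next steps:\n\n" + "\n".join(lines)
    )

def summarise(lines: list[str], ollama_url: str = "http://localhost:11434") -> str:
    """Send the filtered slice to Ollama's /api/generate endpoint."""
    payload = json.dumps({
        "model": "llama3",                   # assumption: any locally pulled model
        "prompt": build_prompt(lines),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        f"{ollama_url}/api/generate", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

def deliver_telegram(bot_token: str, chat_id: str, text: str) -> None:
    """Post the digest via the Telegram Bot API's sendMessage method."""
    payload = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    urllib.request.urlopen(
        f"https://api.telegram.org/bot{bot_token}/sendMessage", data=payload
    )
```

The only moving parts are three HTTP calls and one filter function; everything else (which levels count, which keywords, which model) is configuration.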

What the summary looks like

The LLM doesn’t just concatenate log lines. It can group by theme, highlight the most important events, and suggest priorities. A typical morning digest might look like:

  • 3 critical errors – database connection timeouts between 02:00 and 02:15; one app server affected.
  • Security-related: 2 failed login attempts (source IP X), 1 “invalid token” in the API stream.
  • Recommendation: Check DB connectivity and review the failed-login source; API token may need rotation.

That’s the kind of prioritised, human-readable overview that static threshold alerts rarely give you. You still decide what to do – but you start from a summary instead of a wall of raw log lines.

Why an LLM instead of more rules?

Rule-based alerts are good when you know the exact pattern (e.g. “alert if error count > 5”). They’re less good at answering “what actually happened and what matters most?” across many different log messages. An LLM can:

  • Summarise across many lines and reduce noise
  • Group by topic or severity without you defining every rule
  • Produce natural language you can read in a minute

You keep Graylog (and optionally Grafana/OpenSearch) for storage and real-time alerts; the daily LLM summary is an extra layer for proactive oversight.

Where to find it and how to run it

graylog-ai-summary is on GitHub: github.com/brsksh/graylog-ai-summary. The repo has a quickstart, configuration examples (Graylog API token, stream ID, Ollama URL, Slack/Telegram webhooks), and a systemd timer setup for a daily run. Clone it, add your credentials (e.g. via .env), run a dry run to test, then enable the timer if you want the digest every morning. For detailed options – which log levels, security keywords, model choice, SSL – the README in the repo has you covered.
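The repo ships its own unit files, but for orientation a minimal timer pair might look like this (paths, the entry-point name, and the 07:00 schedule are illustrative, not the repo's actual layout):

```ini
# /etc/systemd/system/graylog-ai-summary.service
[Unit]
Description=Daily Graylog AI summary

[Service]
Type=oneshot
EnvironmentFile=/opt/graylog-ai-summary/.env
ExecStart=/opt/graylog-ai-summary/venv/bin/python /opt/graylog-ai-summary/main.py

# /etc/systemd/system/graylog-ai-summary.timer
[Unit]
Description=Run the Graylog AI summary every morning

[Timer]
OnCalendar=*-*-* 07:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now graylog-ai-summary.timer`; `Persistent=true` means a missed run (e.g. the machine was off at 07:00) fires once on the next boot.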
