From DevOps to DevSecOps: Practical Lessons from the Trenches

There’s no shortage of DevSecOps buzzwords. You can bolt “Sec” onto any slide deck and call it a day. What I was missing for a long time was a practical mental model: how do you actually change your day‑to‑day work so that security is part of the pipeline, not an afterthought?

Recently I went through a DevSecOps certification that tried to answer exactly that. The course mixed theory (secure SDLC, threat modeling, OWASP Top 10, CI/CD hardening) with hands‑on work: Python scripts, a small Django shop, a REST API, an AI‑backed service, and infrastructure code. It ran over a couple of months, which was perfect: enough time between modules to build things, break them, and come back with questions. It was also simply a cool experience overall – good pacing, plenty of labs, and a strong focus on “let’s try this in code” instead of yet another slide deck.

Combined with my own projects (Vault deployments, Graylog AI summaries, and server setup), the course forced me to wire its ideas into something that actually runs.

This post is not a marketing page for the certificate. It’s a set of lessons that survived contact with reality, backed by code and repos I actually touched.

What the certificate really covered

On paper, the curriculum looked like this (simplified):

  • DevSecOps foundations
    • What DevSecOps actually is (and how it differs from classic DevOps).
    • Git / Git hosting, branching, reviews.
    • CI/CD basics with GitLab CI and GitHub Actions.
  • Secure coding & application security
    • OWASP Top 10: XSS, injection, auth issues, insecure direct object references, etc.
    • Practical labs with deliberately vulnerable apps (e.g. OWASP Juice Shop).
  • Python for automation
    • Building small tools to automate security‑related tasks.
    • Parsing logs, calling APIs, gluing services together.
  • APIs & services
    • Designing and securing REST APIs (authentication, rate limiting, input validation).
    • Logging and observability for services.
  • DevSecOps pipelines
    • Integrating SAST/DAST, dependency checks and container scanning.
    • Breaking builds on real issues instead of treating security as “best effort”.

The provider’s own roadmap broke this down into ten stages:

  1. Foundations – law & regulations, phases of attacks, KRITIS, security goals, ethical hacking, OWASP, DevSecOps, Git
  2. Python programming – programming concepts, OOP, error handling, testing, deployments
  3. Pentesting lab – tools in Python, ISO/OSI, TCP/UDP, ExifTool & QPDF, CVE exploits, Kali Linux
  4. Linux – console tools, SSH, Fail2ban, Metasploitable 2
  5. OWASP Juice Shop – CTF challenges, XSS, injection attacks
  6. Cloud – public/private/hybrid, containers, VMs, orchestration, scalability, WAF
  7. DevSecOps – Git, security by design, relation to DevOps, CI/CD pipelines, SAST/DAST, observability
  8. XSS – risk awareness, Google XSS Game, finding XSS vulnerabilities
  9. Media law – GDPR, data protection, right to one’s image, copyright, third‑country transfers
  10. Password cracking – salt & pepper, rainbow tables, dictionary attacks, brute force

The interesting part was how this played out in concrete repositories.

Course repos and what they taught in practice

The course worked with several reference projects and example repos, which I forked and extended. They covered different layers of a typical stack.

All of this happened in “real” Git workflows: I had to manage my own forks and branches, open pull requests against the training repos, and wait for mentors to review them. Feedback wasn’t optional – you’d iterate on a PR until the task was properly solved, and only then move on to the next module. That forced me to treat even small exercises like production code: clean commits, readable diffs, and fixes that actually addressed the comments.

Python tasks: automating checks and glue work

One repository was a Python tasks project used in the modules on automation. The exercises there were very down‑to‑earth:

  • Parse and normalize log files.
  • Call external APIs, handle timeouts and errors.
  • Write small command‑line tools that do one thing well.

Rewriting a few of those tasks with a DevSecOps mindset changed how I approached “quick scripts”:

  • Validate and sanitize any input, even in “internal” tools.
  • Always log with enough context to debug later (timestamps, source, correlation IDs where it makes sense).
  • Treat external calls (APIs, subprocesses) as untrusted and fail safely.

This directly fed into how I later implemented automation around Vault and Graylog: short Python glue code with defensive defaults instead of throwaway scripts.

Django shop: secure basics in a real web app

Another repository was a small Django e‑commerce app from the course. It’s not meant to be production‑ready, but it was perfect to practice:

  • Authentication and authorization:
    • Proper login/logout flows.
    • Distinguishing between normal users and admins.
  • Input validation and output encoding:
    • Where XSS can sneak in (template rendering, unsafe use of the safe filter, query parameters).
    • How Django’s built‑in protections help, and where you can still shoot yourself in the foot.
  • Defending against brute force and simple abuse:
    • Rate‑limiting login attempts.
    • Locking or slowing down suspicious accounts/IPs.
  • Secure configuration:
    • Handling SECRET_KEY, debug flags, allowed hosts, and CSRF correctly.
    • Moving environment‑specific settings out of the code.

Working through these tasks turned “I know OWASP Top 10 from slides” into “I can point to the line where this app is vulnerable, and I can fix it without breaking everything else.”

Truck signs API: REST security and observability

A separate API project for truck/traffic signs focused on service design:

  • Building a small REST API:
    • Endpoints to create, list and query resources.
    • Proper HTTP methods and status codes.
  • Securing endpoints:
    • Token or JWT‑based authentication.
    • Basic rate limiting and input validation.
  • Observability:
    • Structured logs per request (method, path, user, latency, status).
    • Error handling that surfaces enough detail in logs, but not in responses.

That repository was a great playground for integrating the DevSecOps pipeline lessons:

  • Run linters and tests on every push.
  • Add dependency scanning (e.g. Python packages) to the CI pipeline.
  • Fail the build when a high‑severity vulnerability is found.

You end up seeing your CI config as a policy document, not just a build script.
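A "policy as code" gate can be as small as this. The report shape below is a hypothetical example, not the actual schema of any particular scanner; adapt the field names to whatever pip-audit or your container scanner really emits.

```python
import json
import sys

def gate(report: dict, fail_on: str = "high") -> list[str]:
    """Return findings that violate policy; an empty list means the build may pass."""
    order = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    threshold = order[fail_on]
    return [
        f'{f["package"]}: {f["id"]} ({f["severity"]})'
        for f in report.get("findings", [])
        if order.get(f["severity"], 0) >= threshold
    ]

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as fh:
        violations = gate(json.load(fh))
    for v in violations:
        print("BLOCKED:", v)
    sys.exit(1 if violations else 0)  # nonzero exit fails the CI job
```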

AI chatbot starter: securing “AI‑flavoured” services

Another project was an AI chatbot starter (TypeScript/Node), used in a community event. It wasn’t purely about security, but it tied many ideas together:

  • Secret handling for API keys (LLM provider, vector store, etc.).
  • Dependency and container scanning for a Node/TypeScript stack.
  • Safe logging of prompts and responses (avoiding sensitive data in logs).
  • Rate limiting and basic abuse detection (to avoid your API key being burned by misuse).
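Two of those points fit in a few lines of Python. This is a sketch in my own words, not code from the chatbot starter: the redaction patterns are deliberately simple examples, and the limiter is a plain sliding window keyed by client.

```python
import re
import time
from collections import defaultdict, deque

# Illustrative patterns - extend with whatever your stack can actually leak.
REDACT = [
    (re.compile(r"sk-[A-Za-z0-9]{10,}"), "[REDACTED_API_KEY]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
]

def redact(text: str) -> str:
    """Scrub obvious secrets and PII from prompts before they hit the logs."""
    for pattern, replacement in REDACT:
        text = pattern.sub(replacement, text)
    return text

class RateLimiter:
    """Sliding-window limiter so one client can't burn the shared LLM API key."""
    def __init__(self, limit: int, window_s: float):
        self.limit, self.window_s = limit, window_s
        self.hits: dict[str, deque] = defaultdict(deque)

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        q = self.hits[client_id]
        while q and now - q[0] > self.window_s:  # drop hits outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```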

Working through this repository made it clear that “AI app” is just another web service from a DevSecOps perspective: the threat model changes slightly, but the fundamentals are the same.

Server setup & infra: hardening from the start

Outside the course, I already had a server setup automation repository for provisioning and configuring VMs. After the infrastructure and pipeline modules, I revisited it:

  • Wrap everything in Ansible roles and playbooks.
  • Add security headers, TLS, and firewall rules as part of the default configuration.
  • Standardize logging and log shipping (e.g. to Graylog).
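As a flavour of what "hardening as part of the default configuration" looks like, here is an illustrative Ansible excerpt. The role layout, file names and ports are my own examples, not taken from the actual repository.

```yaml
# roles/hardening/tasks/main.yml - illustrative excerpt, names are my own
- name: Deploy nginx security headers snippet
  ansible.builtin.template:
    src: security_headers.conf.j2
    dest: /etc/nginx/snippets/security_headers.conf
    mode: "0644"
  notify: reload nginx

- name: Allow only SSH and HTTPS through the firewall
  community.general.ufw:
    rule: allow
    port: "{{ item }}"
    proto: tcp
  loop: ["22", "443"]
```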

That’s where the line between “course project” and “real life” completely blurred: the same patterns I practiced on the training repositories now live in the setup for Vault, Graylog, and my own blog.

Hands-on security labs: thinking like an attacker

Besides the “build something” projects, a big part of the course was breaking things on purpose. That mix was important: it’s hard to protect against attacks you’ve never actually tried yourself.

Some of the labs and exercises included:

  • OWASP challenges for XSS – finding and exploiting cross‑site scripting in deliberately vulnerable apps, then fixing them.
  • Exploiting a KeePass weakness – using a known issue to extract secrets for testing, and understanding what a “password manager vuln” looks like in practice.
  • File metadata inspection – reading EXIF and other metadata to see what people accidentally leak in uploads.
  • Burp Suite request manipulation – intercepting and modifying HTTP traffic to bypass client‑side checks and poke at APIs.
  • URL and link parsing – spotting open redirects and dangerous URL handling edge cases.
  • Password hashing and cracking – seeing the difference between weak and strong hashes, and what “offline attacks” feel like.

All of this was done as guided exercises, with the explicit goal of building attacker intuition so you can defend better. Once you’ve successfully pulled data out of a misconfigured app or cracked a weak hash yourself, recommendations like “use proper hashing” or “don’t trust client‑side validation” stop being theory.

The nice part is that many of these ideas map directly into automation: SAST tools that catch obvious injection problems, dependency and container scans that surface known issues, or simple DAST checks in CI. You don’t have to be a full‑time pentester to start; you can bake basic security checks into the pipelines you already have.

1. Centralizing secrets instead of sprinkling them around

For a long time I treated secrets in my own stack as slightly more sensitive config — stored in Ansible vars, .env files, or CI settings. Secure “enough”, but spread over too many places with little visibility.

Looking at that with a DevSecOps mindset, one thing became obvious:

Secrets are assets. Treat them like data with a lifecycle, not just strings with *** in your logs.

In practice, that led to this stack:

  • Vault as the source of truth for secrets
    I deploy a dedicated Vault instance with Ansible:

    • Install binary.
    • Create data/config/log directories.
    • Write the Vault config.
    • Configure a systemd service.
  • No unseal keys or root tokens in automation
    Mirroring the course’s warnings:

    • Ansible deploys Vault and defines how it should run.
    • Initialization and unseal stay manual (or in very controlled scripts) so the critical keys never live in playbooks or CI logs.
  • Pulling the Ansible Vault password from Vault itself
    Instead of a long‑lived local password:

    • A wrapper script fetches the Ansible Vault password from Vault at runtime.
    • Access is logged and controlled via Vault policies.

Lesson: DevSecOps is not “add a secrets manager and move on”. It’s designing automation so privileged material never needs to live where it shouldn’t, and the dangerous operations (init, unseal, rotation) are explicitly not part of your everyday pipelines.

2. Turning logs into something humans actually read

When I revisited my setups from a DevSecOps angle, monitoring, logging and incident response quickly came back into focus. The idea was familiar: centralize logs, standardize formats, define alerts. In practice, my setups suffered from a classic problem:

Centralized logs nobody reads are only marginally better than no logs.

The APIs and Python tools I built – both in the course and on my own – pushed me to:

  • Emit structured logs.
  • Include security‑relevant context (who did what, from where).
  • Think about what you’d need in an incident.

I already had Graylog collecting logs, with streams and dashboards. Good for forensic work, rarely used proactively. That thinking directly inspired Graylog AI Summary:

  • Input:
    Error logs and security‑relevant events from Graylog (e.g. severity, security keywords).
  • Processing:
    Filtered slices go to an LLM (via Ollama) which produces:
    • A short narrative summary.
    • Grouped issues.
    • Rough priorities.
  • Output:
    The digest is sent to chat (Telegram/Slack). You don’t need to open Graylog to know whether something needs attention.
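The processing step can be sketched like this. The prompt-building part is a simplified version of the idea; summarize_with_ollama uses Ollama’s real /api/generate endpoint, but the model name and the shape of the Graylog records are assumptions, not the project’s actual code.

```python
import json
import urllib.request
from collections import Counter

def build_digest_prompt(records: list[dict]) -> str:
    """Compress raw error records into a compact prompt for the LLM."""
    by_source = Counter(r.get("source", "unknown") for r in records)
    counts = [f"{src}: {n} events" for src, n in by_source.most_common()]
    samples = [r.get("message", "")[:200] for r in records[:10]]
    return (
        "Summarize these errors, group related issues and suggest priorities.\n"
        "Counts per source:\n" + "\n".join(counts) +
        "\nSample messages:\n" + "\n".join(samples)
    )

def summarize_with_ollama(prompt: str, model: str = "llama3") -> str:
    """Send the prompt to a local Ollama instance (POST /api/generate)."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt,
                         "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["response"]
```

The returned summary then just needs to be posted to a chat webhook or bot API of your choice.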

Lesson: Don’t just centralize logs. Summarize and push them to where you already work. That’s the difference between “we have dashboards” and “we actually notice problems before users do.”

3. Automate the boring, guard the dangerous

The course repeated a mantra: automate everything you can, but not everything you could.

You see this pattern both in the training repositories and in my own:

  • In the Python tasks:

    • It’s great to auto‑parse logs and poke APIs.
    • It’s dangerous to auto‑remediate without guardrails.
  • In the Django shop and APIs:

    • Migrations, tests and basic checks should run automatically.
    • Deleting user data or rotating critical keys should not be silent side‑effects.
  • In my infra repositories:

    • Provisioning servers, configuring Vault and Graylog, setting headers: all automated.
    • Vault init/unseal, policy changes, and production DNS tweaks: consciously not automated away.

Lesson: DevSecOps isn’t “automate security away”. It’s drawing a hard line between what should be boringly automated and what should always involve deliberate human decisions.

4. Pipelines as security control points, not just build scripts

One of the course’s more opinionated messages about CI/CD was:

Your pipeline is a security control. Treat it as seriously as production.

This landed for me when wiring up pipelines for the course repositories and my own projects:

  • For a Python API:
    • Run formatters, linters and tests.
    • Add dependency scanning and fail on real vulnerabilities.
  • For the Django shop:
    • Run basic DAST against a test instance.
    • Check for missing migrations or obvious misconfigurations.
  • For the AI chatbot starter:
    • Scan dependencies and images.
    • Enforce that secrets are pulled from secure storage, not baked into configs.

In my own infra and blog stack, this turned into:

  • Separate pipelines for:
    • Build & test (no production credentials).
    • Deployment (limited, deploy‑only credentials).
  • Pipelines that refuse to deploy when:
    • Tests or security checks fail.
    • The artifact or image doesn’t meet basic policy (e.g. no high‑severity CVEs).
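A skeleton of that split, in GitLab CI terms (job names and scripts are illustrative, not my actual config):

```yaml
# .gitlab-ci.yml - illustrative skeleton, job and script names are my own
stages: [test, scan, deploy]

test:
  stage: test
  script:
    - ruff check . && pytest       # lint + tests on every push, no prod creds

dependency_scan:
  stage: scan
  script:
    - pip-audit                    # nonzero exit on known CVEs blocks the pipeline

deploy:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'   # deploy only from main
  script:
    - ./deploy.sh                  # uses a deploy-only token, not admin creds
```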

Lesson: Even for a one‑person stack, the pipeline is often the only programmable gate between “a commit” and “exposed to the internet”. Treat it as such.

5. Start small, stack improvements, don’t wait for permission

Looking back, the progression from “DevOps” to “DevSecOps” in my projects wasn’t a revolution, it was a sequence:

  1. Centralize secrets
    Deploy Vault, stop scattering credentials across configs and CI systems.
  2. Make logging actionable
    Graylog for centralization, then AI summaries to make it human‑friendly.
  3. Adopt GitOps and harden by default
    Keep infra and security config in Git, then bake security headers, TLS, firewall rules and sane defaults into that automation.
  4. Upgrade pipelines
    Treat CI/CD as policy and security control, not just as build glue.
  5. Keep learning through practice
    Python tasks, Django shop, APIs and intentionally vulnerable apps in the course – plus my own tools – all feeding back into better patterns.

The certification helped by giving names and structure to what I was already doing and where the gaps were. And because it ran over several months, there was enough time to apply things between modules, ask better questions in the next session, and see progress in real projects rather than just in lab exercises.

But the important part is: none of this required a huge team or permission from anyone.

You can start on any project you control:

  • Move one secret into a proper vault.
  • Add one meaningful security check to your pipeline.
  • Turn one noisy log source into a summary you’ll actually read.
  • Take one course repository (or toy app) and fix a real vulnerability end‑to‑end.

That’s DevSecOps in practice: not a title, but a series of small, deliberate upgrades to how you build and run software.

In the end, DevOps, GitOps and DevSecOps are not just sets of practices or tools – they’re ways of thinking. You don’t “install” them, you live them: how you design systems, how you write code, how you run infrastructure from Git, how you respond to incidents, and how you keep improving the security of what you ship every day.
