Many open source projects struggle with “AI slop”, and some have begun to judge a PR by whether it is AI-generated – treating it differently, or rejecting it outright, if it is. In my opinion, this is generally not the right move. What matters is whether a PR or patch is of good quality, and checks should focus on that.
Logging pipelines carry some of the most sensitive operational data inside modern infrastructure. If an attacker can read or manipulate those streams, they gain deep visibility into the system.
The Internet has always been a battlefield; it was even designed to be war-resistant. With commercialisation in the late 1990s, it also became attractive to a large number of additional malicious actors. Today, cyber warfare is in some respects fought even more intensely than a hot war, and it is important to keep up with bad actors. In the age of quantum computing, this also means supporting post-quantum cryptography (PQC).
So, “Digital Sovereignty.” The new “Kampfbegriff” (combat term), as they say here in Germany. It’s remarkable how quickly a single phrase can make you a tech hero or a protectionist villain before you’ve even had your morning coffee. Everyone tells me you must have it. No one quite agrees on what it is, but it sounds expensive, deeply political, and probably involves lots of PowerPoint slides with shiny shields on them.
Honestly, the whole “exclusionary” or “punitive” framing of the term just makes me… well, let’s go with “mildly disappointed.” For me, this shouldn’t be a grand ideological statement. It’s much simpler. Digital Sovereignty is just Freedom of Choice. That’s it. No more, no less. It’s the right to change your mind without your entire infrastructure collapsing like a house of cards.
If you are fighting low-quality AI code, or you think the whole thing is just marketing noise, here is a radical idea: fix the environment instead of blaming the tool. Doing AI right is not rocket science. It is mostly common sense. And, as so often, discipline.
I keep repeating this because it matters. There are three simple pillars that make AI code generation actually work. I can prove it: I do it every day.
We have been in “pushing new features out” mode since mid-2025. Observability stack, OpenTelemetry support, enhanced ETL tooling, centralized rate limiters, CI improvements, AI components, much better docs, and, and, and – a wealth of new features. Together with community contributors, that’s close to 900 commits. Big and small, some very big!
It’s strange – we have great new tools, but many folks use them so sloppily that the tools themselves get discredited. That, in turn, tends to mean serious users get into trouble. You can probably guess what I am talking about: AI tooling.
I am certainly not someone who jumps on the latest hype, so I resisted using AI for complex things for quite a long time – until it was ready, which for me was around summer 2025. That was for coding. For some documentation writing it was ready earlier (and I used it to cover my weak spots). Now, AI has evolved to also help with video and audio.
Remember – I have been in this business for quite a while. Have you ever wondered why I took up work on logging? Which, btw, was perceived as “pretty boring” in those days without real cyber attacks. I took the time to record one of my usual “highest-quality” videos to tell you the story of how WinSyslog and rsyslog were born. I hope you enjoy it.
You know, I like efficient processes. After all, that was one reason I wrote rsyslog – which, btw, is nowadays increasingly useful and cost-saving as an ETL/ingestion engine thanks to its speed. So, no surprise, I also like efficient workflows.
We strongly believe in CI. Especially with AI code generation, it is your ultimate safeguard. However, CI is costly, and AI review usually runs at most once per CI run.
So I have paired that review with local test execution and review – nowadays, of course, AI-assisted. I usually use CLI tools for their efficiency. As part of the post-build process, I have the AI run various checks; the last one is a full review. I often use cubic for that, because it gives me very good results.
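To make the idea concrete, here is a minimal sketch of what such a post-build gate can look like: cheap local checks run first, and only when they all pass is it worth spending money on the expensive full AI review. The script and the check commands are illustrative assumptions, not rsyslog’s actual tooling – substitute your own formatter, test runner, and review CLI.

```shell
#!/bin/sh
# Hypothetical post-build gate: run cheap local checks before the
# costly AI review step. Command names below are placeholders.
set -e

# run_check NAME CMD... : run a check, report PASS/FAIL, stop on failure
run_check() {
    name="$1"; shift
    if "$@"; then
        echo "PASS: $name"
    else
        echo "FAIL: $name"
        exit 1
    fi
}

run_check "formatting" true   # e.g. clang-format --dry-run -Werror src/*.c
run_check "unit tests" true   # e.g. make check
run_check "static analysis" true  # e.g. cppcheck src/

# Only after all cheap gates pass do we invoke the full AI review
# (e.g. a CLI review tool) - this keeps the expensive step rare.
echo "all local gates passed - ok to run full AI review"
```

The ordering is the whole point: each gate is cheaper than the one after it, so most broken builds fail fast and never reach the paid review.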
I keep seeing the same take pop up: “AI is overhyped. Mostly money burning.” Sure. There is hype. There is also a whole lot of low-effort “vibe coding” that produces low-quality output at impressive speed.
But there is also something else: if you treat AI as a serious engineering tool and are willing to do the unglamorous work, it can make a measurable difference and boost both productivity and quality.
So here is a concrete case study: agentic code work in rsyslog.
Many people know I am with Adiscon. They also know we do both open source and closed source software. That combination often raises eyebrows, and I occasionally get the same question: how do we manage this without open-core games, dual-licensing traps, or hidden agendas?
Blue and orange streams meet in a gear: open source and commercial work, shared concepts, clear boundaries. (Image: Rainer Gerhards via AI)