A few days ago, we posted a short announcement on the project site:
👉 New rsyslog AI Assistant — powered by DigitalOcean Gradient
This is the longer story: the plan, the first results, and what we think is sensible to do next.

Planned experiment, unexpectedly fast lift-off
This wasn’t a late-night “what if” hack. It was a planned, strategic experiment. We secured the right pieces up front so we could test an open-model assistant in a realistic environment, not a demo. Then the first try worked better than expected. Sensible reaction: show it early, call it a preview, learn from real users.
It runs on DigitalOcean Gradient with Llama 3.3 Instruct (70B). The interesting part isn’t the brand names; it’s the combination: open model + operational control. That is the first result that genuinely excites me. Open models are now strong enough to be useful for serious infrastructure topics. A year ago, I would have argued longer.
We’re still early. This is a preview, not a trophy.
Why Gradient (and what we actually gain)
Standing it up on Gradient was nearly as simple as creating a Custom GPT, only with knobs that actually matter: retrieval logic, prompt design, iteration speed. That means we can shape the system based on reality, not wishful thinking. Also helpful: we can switch models later (including commercial ones) without tearing down the wiring. No model lock-in gymnastics.
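The "no model lock-in" point can be sketched in code. This is a minimal illustration of the idea, not Gradient's actual API: if the assistant's plumbing talks to a small interface, the backing model becomes a one-line swap. All class and function names here are hypothetical.

```python
# Sketch of the swap-friendly wiring idea. Names are illustrative,
# not the real Gradient SDK or the project's actual code.
from typing import Protocol


class ChatModel(Protocol):
    """The only surface the assistant's plumbing depends on."""
    def complete(self, prompt: str) -> str: ...


class OpenModel:
    """Stand-in for an open-model endpoint (e.g. Llama 3.3 Instruct)."""
    def complete(self, prompt: str) -> str:
        return f"[open-model answer to: {prompt}]"


class CommercialModel:
    """Stand-in for a commercial model behind the same interface."""
    def complete(self, prompt: str) -> str:
        return f"[commercial answer to: {prompt}]"


def answer(model: ChatModel, question: str) -> str:
    # The caller never touches vendor specifics; switching models is a
    # change at construction time, not a rewiring of the assistant.
    return model.complete(question)
```

With this shape, `answer(OpenModel(), q)` and `answer(CommercialModel(), q)` differ only in which object is constructed; the retrieval and prompt logic above it stays untouched.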
And a practical note: the site integration is… boringly simple. A small script. Which is exactly the kind of boring you want in production. It makes a login-free assistant on rsyslog.com a realistic near-term step, not a “someday” slide.
We haven’t learned a ton from production usage yet. That’s deliberate. QA workflow, evaluation metrics, and knowledge enrichment are coming online next. We’ll tune, measure, repeat. (Not glamorous, but highly effective.)
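To make "tune, measure, repeat" concrete, here is a minimal sketch of what an evaluation loop can look like: question/keyword pairs scored against the assistant's answers. Everything here is an assumption for illustration; `ask_assistant`, the cases, and the keyword metric are hypothetical stand-ins, not the project's actual QA workflow.

```python
# Minimal eval-harness sketch. `ask_assistant` is a hypothetical
# placeholder for the deployed assistant's API; cases are illustrative.

def ask_assistant(question: str) -> str:
    """Hypothetical stand-in for a call to the real assistant."""
    canned = {
        "How do I enable TCP input?":
            'Load imtcp with module(load="imtcp") and add an input() '
            'listener on the desired port.',
    }
    return canned.get(question, "")


# Each case pairs a question with keywords a good answer should contain.
EVAL_CASES = [
    {"question": "How do I enable TCP input?",
     "expect": ["imtcp", "input("]},
]


def score(answer: str, expect: list[str]) -> float:
    """Fraction of expected keywords present in the answer."""
    hits = sum(1 for kw in expect if kw.lower() in answer.lower())
    return hits / len(expect)


def run_eval() -> float:
    """Mean keyword coverage across all cases: a number we can act on."""
    scores = [score(ask_assistant(c["question"]), c["expect"])
              for c in EVAL_CASES]
    return sum(scores) / len(scores)
```

Keyword coverage is a deliberately crude metric; the point is having any number that moves when retrieval or prompts change, so tuning becomes measurable instead of anecdotal.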
Sponsorships and sustainability
Thanks to DigitalOcean (DO) infrastructure sponsorship on Gradient and engineering time sponsored by Adiscon, we can observe real usage before deciding on any cost levers. If adoption grows very fast, we may need to introduce a sustainable model for business use; non-business use will remain free. The current sponsorships let us learn what’s actually needed before making any decisions.
Open models, real control
The headline insight so far is simple: open models work for our use case, and having full control over the stack aligns with how rsyslog has always been run. That opens a useful door: self-hosted or internal assistants that large orgs can operate on their own infrastructure, combining local knowledge with our curated content. Full sovereignty, no vendor dependency, and clean interfaces so components can be swapped, not worshipped. This direction also aligns with our broader ROSI (rsyslog open stack for information) thinking: open, composable, and swap-friendly.
Documentation modernization: the quiet hero
Our ongoing AI-oriented documentation restructuring is already paying off. Cleaner structure and better chunking produce a stronger knowledge base and noticeably better RAG behavior. This was the plan from the start: fix the docs so humans and machines both benefit. The assistant simply makes the effect visible.
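The "better chunking" effect is easy to illustrate. Below is a minimal sketch of heading-aware chunking for a RAG knowledge base, assuming Markdown-style docs; the function, size limit, and splitting strategy are illustrative, not the project's actual pipeline.

```python
# Sketch of heading-aware chunking: each chunk starts at a heading and
# carries its own context, which tends to improve retrieval quality.
# Parameters and strategy here are illustrative assumptions.

def chunk_by_heading(doc: str, max_chars: int = 800) -> list[str]:
    """Split a Markdown document at headings; split any oversized
    section further at blank-line paragraph boundaries."""
    sections: list[str] = []
    current: list[str] = []
    for line in doc.splitlines():
        if line.startswith("#") and current:
            sections.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        sections.append("\n".join(current))

    chunks: list[str] = []
    for sec in sections:
        if len(sec) <= max_chars:
            chunks.append(sec)
            continue
        # Oversized section: pack paragraphs up to the size limit.
        buf = ""
        for para in sec.split("\n\n"):
            if buf and len(buf) + len(para) > max_chars:
                chunks.append(buf)
                buf = para
            else:
                buf = buf + "\n\n" + para if buf else para
        if buf:
            chunks.append(buf)
    return chunks
```

This is exactly why cleaner document structure pays off twice: headings that help human readers navigate are the same boundaries that give the retriever coherent, self-describing chunks.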
What’s next
- Continuous refinements to the Gradient assistant: retrieval, context shaping, guardrails.
- QA and evaluation workflow with metrics we can act on.
- More advanced Gradient features explored in production-like conditions.
In parallel, we keep the ChatGPT assistant for stability and well-polished answers. It remains the right choice for many users today.
Medium term: move closer to a one-click, login-free assistant on rsyslog.com. Not a moonshot; just careful engineering.
Outlook
We’re aiming for practical, open, and boring-in-the-right-places. If you need to run this on-prem or strictly in your own cloud, the architecture should cooperate, not argue. That is the point.
No buzzwords required: open models, clear interfaces, measured iteration. If it helps you get work done at 03:00, we’re doing it right.