Introducing new team member

Good news: we have some new folks working on the rsyslog project. In a small mini-series of two blog postings I’d like to introduce them. I’ll start with Jan Gerhards, who already has some rsyslog-related material online.

Jan studies computer science at Stuttgart University and has occasionally worked on rsyslog for a while. This summer, he is doing a two-month internship at Adiscon, and he plans to continue working on rsyslog and logging-related projects in the future. Right now, he is looking into improving the mmanon plugin. He is also trying to improve debug logging, which I admit is my personal favorite (albeit it looks like this is the lower-priority project ;-)).

Would creating a simple Linux log file shipper make sense?

I am currently thinking about creating a very basic shipper for log files, but wonder if it really makes sense. I am especially concerned that good tools may already exist. Being lazy, I thought I'd ask for some wisdom from those in the know before investing more time searching for solutions and weighing their quality.

I’ve more than once read that logstash is far too heavy for a simple shipper, and I’ve also heard that rsyslog (albeit much lighter) is sometimes a bit heavy for the purpose. I think with reasonable effort we could create a tool that

  • monitors text files (much like imfile does) and pulls new entries from them
  • does NOT further process or transform these logs
  • sends the resulting entries to a very limited number of destinations (for starters, I’d say syslog protocol only)
  • with the focus on being very lightweight, intentionally not implementing anything complex.
Would this be useful for you? What would be the minimal feature set you need in order to make it useful? Does something like this already exist? Is it really needed or is a stripped-down rsyslog config sufficient?
I’d be grateful for any thoughts in this direction.
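
For comparison, here is a rough sketch of what such a stripped-down rsyslog configuration could look like (file name, tag, state file and target are placeholders, and it assumes no other inputs are loaded):

module(load="imfile")
input(type="imfile" File="/var/log/myapp.log" Tag="myapp:" stateFile="myapp-state")
action(type="omfwd" Target="central.example.com" Port="514" Protocol="tcp")

If such a config is already light enough for the use case, a dedicated tool may not add much value; if not, that tells us where the new tool would need to be leaner.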

rsyslog error reporting improved

Rsyslog provides many to-the-point error messages for config file and operational problems. These immensely help when troubleshooting issues. Unfortunately, many users never see them. The prime reason is that most distros never log syslog.* messages, so they are simply thrown away and invisible to the user. While we have been trying to make distros change their defaults, this has not been very successful. The result is a lot of user frustration and fruitless support work for the community — many things can very simply be resolved if only the error message is seen and acted on.
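
If your distro does not capture these messages, a single traditional selector line in rsyslog.conf is enough to record them (the target file name here is just an example):

syslog.* /var/log/rsyslog-internal.log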

We have now changed our approach to this. Starting with v8.21, rsyslog by default logs its messages via the syslog API instead of processing them internally. This is a big plus, especially on systems running the systemd journal: messages from rsyslogd will now show up when you run

$ systemctl status rsyslog.service

This is the place where error messages are expected nowadays and this is definitely a place where the typical administrator will see them. So while this change requires some config adjustment on a few exotic installations (more below), we expect it to generally improve the rsyslog user experience.

Along the same lines, we will also work on better error reporting, especially for TLS and queue-related issues, which come up frequently in rsyslog support discussions.

Some fine details on the change of behaviour:

Note: you can usually skip reading the rest of this post if you run only a single instance of rsyslog and do so with more or less default configuration.

The new behaviour has actually been available for a longer time; it just needed to be explicitly turned on in rsyslog.conf via

global(processInternalMessages="off")

Of course, distros didn’t do that by default. Also, it required rsyslog to be built with liblogging-stdlog, which many distros do not do. While our intent when we introduced this capability was to provide the better error logging we now have, it simply did not work out that way in practice. The original approach was meant to be less intrusive. The new method uses the native syslog() API if liblogging-stdlog is not available, so the setting always works (we even consider moving away from liblogging-stdlog, as we see it wasn’t really adopted). In essence, we have primarily changed the default setting of the “processInternalMessages” parameter. This means that by default, internal messages are no longer logged via the internal bridge to rsyslog but via the syslog() API call (either directly or via liblogging). For the typical single-rsyslogd-instance installation this is mostly unnoticeable (except for some additional latency). If multiple instances are run, only the “main” one (the one processing system log messages) will see all messages. To return to the old behaviour, do either of the following:

  1. add in rsyslog.conf:
    global(processInternalMessages="on")
  2. export the environment variable RSYSLOG_DFLT_LOG_INTERNAL=1. This will set a new default; the value can still be overridden via rsyslog.conf (method 1). Note that the environment variable must be set in your startup script (which one depends on your init system or systemd configuration; see the sketch below for a systemd example).
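
On systemd-based systems, one common way to set that variable persistently is a drop-in file for the rsyslog unit (the exact unit name and file path may differ between distros; this is only a sketch):

# /etc/systemd/system/rsyslog.service.d/internal-messages.conf
[Service]
Environment="RSYSLOG_DFLT_LOG_INTERNAL=1"

Run systemctl daemon-reload and restart rsyslog afterwards.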

Note that in most cases, even in multiple-instance setups, rsyslog error messages were thrown away. So even in this case the new behaviour is superior to the previous state – at least errors are now properly recorded. This also means that even in multiple-instance setups it often makes sense to keep the new default!

rsyslog’s master-candidate branch gone away

Thanks to the new, improved CI workflow, we no longer need to manually do a final check of pull requests. I have used the new system for roughly two weeks now without any problems. Consequently, I have just removed the master-candidate branch from our git (with a backup “just in case” currently remaining in the adiscon git repository).

Anyone contributing, please check the CI status of your PRs, as we can only merge things that pass the CI run. Note, though, that there is still a very limited set of tests which may falsely fail. Their number is shrinking, and I usually catch these relatively quickly and restart them. If in doubt, please add a comment to the PR and I’ll investigate.

Improvements in CI environment and workflow change

Roughly one and a half years ago we at the rsyslog project started to get serious about CI, at that time with Travis only. Kudos to Thomas D. “whissi” for suggesting this and helping us set up the initial system. In aid of CI, we have changed to a purely pull-request (PR) driven development model, and have had great success with that.

Over time, we have added more CI resources (thanks to Digital Ocean for capacity sponsorship!) and begun to use Buildbot to drive them. Buildbot is a great tool, and has helped us tremendously to further improve software quality. Unfortunately, though, it does not offer as close an integration into (GitHub) PRs as Travis does. This resulted in a workflow where we had all PRs initially checked by Travis and, if all went well, I manually merged them to the master-candidate branch, which Buildbot monitored. In those infrequent cases where the Buildbot tests detected problems, I needed to manually contact the PR submitters. This worked well, but required some effort on my part.

Over the past two weeks we designed and implemented a small script that integrates GitHub with Buildbot much like Travis does. In essence, a new PR (or an update to an existing one) now automatically initiates the Buildbot build AND the result is shown right on GitHub inside the PR. That’s pretty sweet as it a) keeps submitters informed of everything, b) provides even better coverage of multi-platform testing and c) saves me from a lot of manual labor. Note that at the moment we see some infrequent quirks from this system (like some Buildbot slaves not reporting, probably due to temporary network issues), but it already works much better than the old manual system. Also, I still have the capability to check things manually if there is a quirk.

As a consequence, we will change the workflow once again, removing the master-candidate branch from it. Now that each and every PR is checked with all the checks we have, there is no need for an interim step when finally merging.

rsyslog and liblognorm will switch to libfastjson as replacement for json-c

We have been using json-c for quite a while now and have had good success with it. However, recent problem reports and analysis indicate that we need to replace it in the future. Don’t get me wrong: json-c is a solid piece of software, but we most probably use it much more intensely than the json-c developers ever anticipated. That’s probably the actual root cause of why we need to switch.

A main problem spot is performance: various experiments, profiler runs, code reviews and experimental implementations have proven that json-c is a severe bottleneck. For example, in the evaluation of liblognorm v2 performance, we found that json-c calls accounted for more than 90% of processing time. Once we saw this, we dug down into the profiler and saw that the hashtable hash calculation as well as memory allocations took a large amount of overall processing time. We submitted an initial performance enhancement PR to json-c, which got merged. That already reduced processing time considerably. We continued on that path, resulting in a quite large second enhancement PR, which I withdrew due to disagreement with the json-c development lead.

A major problem for our application of json-c is that the hash table implementation beneath it is not a good match for our needs. We have been able to considerably speed it up by providing a new hash function (based on Perl’s hash function), but to really get the performance we need, we would have to replace that system. However, that is not possible, because json-c considers the hash tables part of its API. Actually, json-c considers each function, even internal ones, part of the API, so it is very hard to make any changes at all.

Json-c also aims at becoming fully JSON compliant. It currently is not, due to improper handling of NUL bytes, but the longer-term plan is to support them. While this is a good thing to do for a general json library, it is a performance killer for our use case. I know, because I faced the same problems with the libee implementation years ago, where we later ditched it in accordance with the CEE standards body board. I admit I also have some doubts whether that change in json-c will actually happen, as it IMHO requires a total revamp of the API.

Also, the json-c project releases far too infrequently (have a look at recent json-c releases; the last one was in April 2014). And then it takes the usual additional time lag for distros to pick up the new version. So even if we could successfully submit further performance-enhancing PRs to json-c, it would take an awful lot of time before we could actually use them. I would definitely not like to create private packages for rsyslog users, as this could break other parts of a system.

Finally, json-c contains a really bad race bug in reference counting, which can cause rsyslog to segfault under some conditions. A proposed fix was unfortunately not accepted by the json-c development lead, so this remains an open issue. Even if it were accepted, it would probably take a long time until the release of the fixed version and its availability in standard packages.

In conclusion and after a lot of thinking, we decided that it is best to fork json-c, which we then did. The new project is named libfastjson. As the name suggests, its focus is on performance. It will not try to be 100% JSON compliant. We will not support NUL characters in a standards-conformant way. I expect this not to be a big deal, as json-c never did this either, and the number of complaints seems to be very low. So libfastjson will not aim to be a general-purpose json library, but one that offers high performance at some functionality cost and works well in highly threaded applications. Note that we will also reduce the number of API functions and especially remove those that we do not need and that cost performance. Also, the data store will probably be changed from the current hashtable-only system to something more appropriate to our tasks.

Libfastjson already includes many performance enhancement changes and a solid fix for the reference counting bug. Before that bug surfaced, we had planned to release in the Feb..April 2016 time frame, together with liblognorm v2. Now this has changed, and we actually did a kind of emergency release (0.99.0) because of the race bug. The source tarball is already available. We are working on packages in the rsyslog repositories (Ubuntu is already done). Rsyslog packages are not yet built against it, but we may do a refresh after the holiday period.

Rsyslog 8.15.0 optionally builds against libfastjson (it is preferred if available). Due to the race bug, we have decided that rsyslog 8.16.0 will require libfastjson.

A side-note is due: we have been thinking about a replacement for the variable subsystem since summer or so. We envision capabilities even beyond what libfastjson can do. So we still consider this project and think it is useful. In regard to liblognorm, however, we need to provide a more generic interface, and libfastjson is a good match here. Also, we do not know how long it will take until we replace the variable system. We don’t even know if we actually can do it, time-wise.

rsyslog release policy issues

The usual end-of-the-year release policy discussion has begun on the rsyslog mailing list and I wanted to post some thoughts here for a broader audience and easy access in the future. Enjoy ;)


Up until ~15 months ago, we released when there was a need to. Need was defined as


– important enough (set of bugfixes)
– new functionality

This resulted in various releases. We had the stable/devel releases. Stable releases were rare, devel frequent.

Now, we have scheduled releases. Actually, a release is triggered when we hit a certain calendar date, irrespective of whether or not there is a need to release (there are always one or two minor fixes, so we will probably never experience a totally blank release). We have also switched to stable releases only, and done so without grief (basically because a) we have improved testing and b) users didn’t use devel at all).

I just dug into the old discussion. A good entry point is probably this here, where we talk about patches:

http://lists.adiscon.net/pipermail/rsyslog/2014-October/038796.html

The new system works reasonably well. It has its quirks, though. Let’s look at a concrete example:

8.14.0, to me, was an absolutely horrible release. The worst we have done in the past 2 to 3 years. I had worked hard on fixing some really bad race issues with JSON variables. On the Friday before the release I was ready to release that work, which would be really useful for folks that make heavy use of those variables. Then, over the weekend and Monday, it turned out that we might get unwanted regressions that weren’t detected earlier (NO testbench can mimic a heavily used production system, so let’s not get into the “we need better tests” blurb). The end result was that I pulled the plug on release day, and what we finally released was 8.13.0 plus a few small things. All problems with variables persisted. If I had had half a week to a week (I don’t remember exactly) more, we could have done a real release instead of the 8.13 re-incarnation. But, hey, we run on a schedule.


Now 8.15.0 fixes these problems (except for the json-c induced segfault, which we cannot fix in rsyslog). It also has all other “8.14” enhancements and fixes and so is actually worth 3 months of work. It is a *very heavy* release. Usually, I’d never ship such a fat release shortly before the holiday period. Not that I distrust it, and we really got some new testing capabilities (really, really much better), so it is probably the most solid release in a long time (besides the small quirk with the missing testbench files). But in general I don’t like to do releases when I know there are very limited resources available to deal with problems. That’s the old datacenter guy in me. But, again, hey, we run on a schedule.


There have been similar occasions in the past 14 months. That’s the downside. Thanks to the 6-week cycle, though, things usually do not get really bad.


The scheduled model has a lot of good things as well. First of all, everyone (users and contributors) knows when the next release will be. This also means you can promise to include something in a specific release. However, usually users know when the release happens, but not what will be part of it, so in a sense it’s not much better than before, IMO. The new model has advantages for me: fewer releases mean less work. Also, I no longer really need to think about when to do a release, which feature is important enough and so on. I just look at the calendar and know that, for example, on November 15th, 2016 we will have a release, no matter whether I am present, no matter what is done code-wise, etc. (we actually had, for the first time ever, a release while I was on vacation, and it went really well, as I learned later). That really eases my task.


All of this is based on the "we release every 6 weeks, interim releases happen only for emergencies and anything else may be pulled as patches" policy. If we now begin to say "this problem is inconvenient to ..{pick somebody}, we need to do a re-release", we get into trouble. I wonder which groups of "somebody" are important enough to grant non-emergency releases. Are only distro maintainers important enough? Probably not. So enterprise users? Mmmm.. maybe small enterprises as well? Who judges this? So let’s assume every user is as important as every other (an idea I really like). If I then look at my change logs, I think I would need to release more frequently. In essence, I would need to release again when it is needed, which is, surprise, the as-needed schedule.


Rsyslog is not a big enough project to run an even more complex release schedule. To keep things manageable for me, I need to release either


a) as-needed

b) on schedule (except for *true* emergencies)

And *that all* is the reason for my reluctance to break the release policy just because this time it is distro maintainers rather than end users who experience the bug.


I am currently tempted to switch back to “as-needed” mode, even though this means more work for me. 

moving towards liblognorm v2

The initial version of liblognorm v2 is almost ready. It offers many new features, like custom data types, a much easier rule description language, and potentially even greater performance (we have not yet verified this). As some of you know, I have worked very hard on liblognorm during the past weeks. I have now reached a very important milestone and will switch the git master branch to the new version. If things go smoothly enough, the initial release of liblognorm v2 will go along with the next rsyslog release. Daily builds will have it very soon.

Liblognorm v2 also contains the full v1 engine and thus is fully compatible with previous versions as far as rulebases are concerned. For more on compatibility, please read the compatibility document. In fact, by default the v1 engine is used. To opt in to the new features, you need to add the line

version=2

to the top of your rulebase. That is when you really need to check the compatibility document. This is also what brings you the enhancements.
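
As a minimal, illustrative sketch (the field names are made up, and the familiar percent-style field syntax should still be accepted by the v2 engine), a rulebase could start like this:

version=2
rule=:%host:word% %msg:rest%

Without the version=2 line, the same rulebase would be processed by the v1 engine.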

A couple of notes are due: while we are approaching the initial release, not all design goals have been met yet. Most importantly, we are feeding back user comments into the development process. As such, the v2 feature set is not 100% finalized yet. This means that we cannot yet fully guarantee that all constructs you use will remain compatible with versions released later.  But those that know us also know that this risk is minimal and, if it happens, will be easy to fix. The core concepts are ready and unlikely to change. Note that I will continue to actively work on v2 and more features will be upcoming in the next weeks.

The online doc should be updated within two days at the latest (the actual update date depends on when I can switch the git branch and how this interacts with the automatic doc generation scripts). I invite you to use the new version and am sure you will find it much easier to use and more powerful.

Note to developers: the v2 and v1 engine are very different. V2 is a complete rewrite of the core components. Nevertheless, v1 and v2 share some of the same file names. For many reasons, this means I need to rewrite the git master branch with the new version.

Also note that as of now, no new development happens to v1, this version is essentially dead. Very important fixes to the v1 engine will be applied to the v1 subsystem of v2.

Feedback on v2 is appreciated, please post issues or feature requests directly to liblognorm’s github trackers (if possible).

liblognorm’s “rest” parser now more useful

The liblognorm “rest” parser was introduced some time ago to handle cases where someone just wants to parse a partial message and keep all the “rest of it” in another field. I never was a big fan of this type of parser, but I accepted it because so many people asked for it. Practice, however, showed that my concerns were right: the “rest” parser has a very broad match, and those that used it often got very surprising results.

A key cause of this issue was that the rest parser had the same priority as other parsers, and most importantly a higher priority than a simple character match. So it was actually impossible to match some constant text that was at the same location as the “rest” parser.

I have now changed this so that the rest parser is always called last, if nothing else matches – neither any parser nor any constant text. This will make it work much more as you would expect. Still, I caution against using this parser, as it continues to provide a very broad match.
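
To illustrate (the rules and field names here are made up), consider a rulebase fragment like this:

rule=:%host:word% login ok
rule=:%host:word% %event:rest%

Previously, the broad %event:rest% field could win even for a message ending in the constant text “login ok”; with the change, the more specific constant text is tried first, and “rest” only catches what nothing else matches.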

Note that the way I have implemented this is not totally clean from a software engineering point of view, but very solid. A cleaner solution will occur during the scheduled rewrite of the algorithm (later in spring/summer).

Note that existing rulebases using “rest” may behave differently with the new algorithm. However, previously the result was more or less random, so any other change to the rulebase could also have caused different behaviour. So this is no compatibility break as there really is no compatibility to retain.

This will be released with 1.1.2, probably in early May. If you need it urgently, you can use a daily build.

Call for Log Samples

There is one big problem in research for better logging methods: no good logging sample repositories exist. Well, not even bad ones… I am currently taking some preliminary steps towards a new, better log normalization system. Among other things, it will contain a structure analyzer which will remove much of the manual burden of creating normalization rules. But, guess what: while the project looks very promising, the lack of log samples is a really big problem!

To solve that problem, I have set up a public log ingestor that you can simply send logs to. The system is reachable as follows:
hostname: logsubmit.rsyslog.com
port: 514
protocol: any flavor of syslog or other text data

If you run rsyslog, you can add this snippet to /etc/rsyslog.conf:
*.* @logsubmit.rsyslog.com
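The same can be expressed in the newer config style (this uses UDP, matching the legacy @ syntax above):
*.* action(type="omfwd" Target="logsubmit.rsyslog.com" Port="514" Protocol="udp")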
How did this idea materialize? During my talk at the German Unix User Group’s FFG 2015 conference last week in Stuttgart, I mentioned that problem, and Dirk Wetter had the idea to provide a log receiver that makes it very easy for people to contribute. There were some concerns that this may open up my server to DoS, and that of course is true. Nevertheless, I liked the idea, and so we set up a machine today. It may be DDoS’ed and other bad things may happen, but then we will have gained more experience. It’s split from the main systems, so that shouldn’t cause much harm.
For log contributors, please keep in mind that you are sending data to a public service, so it is probably not a great idea to do this for sensitive systems. But if we get enough data from uncritical systems, we can still gain a lot from it; most importantly, it helps us gain insight into structural log mining methods — which will also lead to the above-mentioned tool. All logs gathered by this method will be placed in the research log repository, which is currently hosted on github. It is licensed under BSD 2-clause in the hope that a sufficiently large and diverse data set is also of great value for other researchers (did I mention it is ultra-hard to find any log sample data sets?). If you are interested in contributing logs but would want to do so under NDA, that’s of course also possible. In that case, please just drop me an email to see how best to go forward with that.