We often receive requests for Debian packages. So far, we have not packaged for recent Debian releases, as the Debian maintainer, Michael Biebl, does an excellent job. Unlike us, he is a real expert on Debian policies and infrastructure.
docker group security risk
The Docker documentation spells out that there are security concerns with adding a user to the docker group. Unfortunately, it does not say precisely what the concern is. I guess that is a “security-by-obscurity” approach meant to avoid bad things. Practice shows this isn’t useful: the bad guys know anyway, and the casual user has a hard time understanding the actual risk involved.
The risk is considerable, so let me explain at least one of them (I have not tried to check security issues exhaustively): containers usually run as the root user, and this permits you to bypass permission checks on the host.
Let’s assume $USER is a member of the docker group but otherwise unprivileged, on a machine where docker has just been installed. He can then run:
$ docker run -v /etc:/malicious -ti --rm alpine
# cd /malicious
# vi sudoers
… edit, write …
# (press Ctrl-D to exit the container)
As such, the user can modify system configuration that he could not otherwise access. It’s a real risk. If you have a one-person “personal” machine/VM where the user has sudo permissions anyway … I’d say it’s not a real issue.
The story is different on, for example, a CI machine. It’s easy to inject bad code via public pull requests, and that code will then run on the CI platform. Usually (before Spectre/Meltdown…), this was guarded by the (low) permissions of the CI worker user (if you run CI with a sudo-enabled user … nothing has changed). When you enable that user to use docker, you get this new class of attack vector. Don’t get me wrong: I do NOT advocate against using docker in CI. Quite the opposite, it’s an excellent tool there. I just want to make you aware that you need to consider and mitigate another attack vector.
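If you want to check quickly whether a given account on a CI box is exposed this way, something along these lines works (a minimal sketch; the account name is whatever your CI worker runs as):

# does the current user belong to the docker group?
id -nG "$USER" | grep -qw docker && echo "WARNING: $USER can use docker (effectively root on this host)"
# list all members of the docker group
getent group docker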
rsyslog 8.31 – an important release
Today, we release rsyslog 8.31. This is probably one of the biggest releases in the past couple of years. While it also offers great new functionality, what is really important about it is its focus on further improved software quality.
Let’s dig into the details a bit. First, let’s mention some important new features:
- mmongodb has been greatly enhanced – among other things, it now uses the current, state-of-the-art client library and supports TLS and MongoDB replica sets … and more. Special thanks go to Jérémie Jourdin and Hugo Soszynski of Advens.
- omprog has been greatly improved and now provides full access to all of rsyslog’s action capabilities. A big thanks goes to Joan Sala.
- the KSI signature subsystem has been upgraded and now operates faster than ever. Thanks to Allan Park for his work.
- a seemingly small but important capability has been added to mmanon: it now supports IPv4 addresses embedded in IPv6 addresses. This is vital for achieving complete privacy. Thanks to Jan Gerhards for adding this.
- … and of course a couple of smaller additions (albeit no less important).
Even more important than the new features is the continued work on software quality:
- testbench dynamic tests have been extended
- coverage of different compilers and compiler options has been enhanced
- more modules are automatically scanned by static analysis
- daily Coverity scans were added to the QA system, which have proven to be a very useful addition
- more aggressive and automated testing with threading debuggers (valgrind’s helgrind and clang’s thread sanitizer) has been added, also with great success (a sample helgrind invocation is sketched after this list)
- as a result of these actions, we were able to find and fix many small software defects.
- and there have also been some big and important fixes, namely for imjournal, omelasticsearch, mmdblookup and the rsyslog core
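To give an idea of what such a threading-debugger run looks like, here is a minimal helgrind sketch (the rsyslogd path and the service name are assumptions that may differ on your system):

# stop the regular service, then run rsyslogd in the foreground under helgrind
sudo systemctl stop rsyslog.service
sudo valgrind --tool=helgrind /usr/sbin/rsyslogd -n
# potential data races are reported to stderr; press Ctrl-C to end the run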
The clang thread sanitizer
To build and run rsyslog under clang’s thread sanitizer, proceed as follows (a condensed shell sketch follows after the list):
- install the clang package (the OS package is usually good enough, but if you want to use clang 5.0, you can obtain packages from http://apt.llvm.org/)
- export CC=clang (or CC=clang-5.0 for the LLVM packages)
- export CFLAGS="-g -fsanitize=thread -fno-omit-frame-pointer"
- re-run configure (very important, otherwise the new CFLAGS are not picked up!)
- make clean (important, otherwise make does not detect that files need to be rebuilt due to the changed CFLAGS)
- make
- install as usual
- stop the rsyslog system service
- sudo -i (you usually need root privileges for a typical rsyslogd configuration)
- execute /path/to/rsyslogd -n …other options…
here, “/path/to” may not be required and often is just “/sbin” (so “/sbin/rsyslogd”);
“other options” is whatever is specified in your OS startup scripts, most often nothing
- let rsyslog run; the thread sanitizer will print messages to stdout/stderr (or nothing if all is well)
- press Ctrl-C to terminate the rsyslog run
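For convenience, here is the same procedure condensed into a rough shell sketch (the binary path and service name are assumptions and may differ on your distro):

# build and run rsyslog under clang's thread sanitizer (a sketch, not a drop-in script)
export CC=clang            # or CC=clang-5.0 for the LLVM packages
export CFLAGS="-g -fsanitize=thread -fno-omit-frame-pointer"
./configure                # re-run so the new CFLAGS take effect
make clean                 # force a full rebuild with the new flags
make
sudo make install
sudo systemctl stop rsyslog.service    # stop the regular system service first
sudo /path/to/rsyslogd -n              # run in the foreground; sanitizer output goes to stdout/stderr
# let it run for a while, then press Ctrl-C to terminate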
Automating Coverity Scan with a complex TravisCI build matrix
This is how you can automate Coverity Scan using Travis CI – especially if you have a complex build matrix (a rough shell sketch follows after the list):
- create an additional matrix entry you will exclusively use for submission to Coverity
- make sure you use your regular git master branch for the scans (so you can be sure you scan the real thing!)
- schedule a Travis CI cron job (daily, if permitted by project size and Coverity submission allowance)
- In that cron job, on the dedicated Coverity matrix entry:
- cache Coverity’s analysis tools on your own web site, download them from there during Travis CI VM preparation (Coverity doesn’t like too-frequent downloads)
- prepare your project for compilation as usual (autoreconf, configure, …) – ensure that you build all source units, as you want a full scan
- run Coverity’s cov-build tool according to the Coverity instructions (its output goes into the cov-int directory)
- tar the build result
- use the “manual” wget upload capability (documented on the Coverity submission web form); make sure you use a secure Travis CI environment variable for your Coverity token
- you will receive scan results via email as usual – if you like, automate email-to-issue creation for newly found defects
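Putting the pieces together, the build script for the dedicated Coverity matrix entry might look roughly like this (a sketch only; the DO_COVERITY variable, the mirror URL and the tarball names are assumptions, and the exact upload command is the one shown on your project’s Coverity submission form):

# act only in the dedicated Coverity matrix entry, and only for cron builds
if [ "$TRAVIS_EVENT_TYPE" = "cron" ] && [ "$DO_COVERITY" = "1" ]; then
    # fetch the analysis tools from your own mirror, not from Coverity itself
    wget -q https://your-mirror.example/cov-analysis-linux64.tar.gz
    tar xzf cov-analysis-linux64.tar.gz
    export PATH="$PWD/cov-analysis-linux64/bin:$PATH"

    # prepare and build the full project under Coverity's build wrapper
    autoreconf -fvi
    ./configure              # enable all modules so every source unit is scanned
    cov-build --dir cov-int make -j2

    # package the results; upload the tarball with the command shown on the
    # Coverity submission form, keeping the token in a secure Travis CI variable
    tar czf project-cov.tar.gz cov-int
fi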
Time for a better Version Numbering Scheme!
The traditional major.minor.patchlevel versioning scheme is no longer of real use:
- users want new features when they are ready, not when a new major version is crafted
- there is a fear of major-version-number increases in open source development, so major version bumps sometimes happen rather arbitrarily (see Linux, for example)
- distros fuel this fear of major-version-number increases because they are much more hesitant to accept new packages with an increased major version
Busy at the moment…
Some might have noticed that I am not as active as usual on the rsyslog project. As this will likely remain the case for at least the next couple of weeks, I’d like to give a short explanation of what is going on. Around the beginning of June I got involved in a political topic in my local village. It’s related to civil rights, and it really is a local thing, so there is little point in explaining the complex story. What is important is that the originally small thing grew larger and larger, and we now have to win a kind of election – which means rallies and the like. To make matters a little worse (in regard to my time…), I am one of the movement’s speakers and also serve as subject matter expert to our group (I have been following this topic for over 20 years now). To cut a long story short, that issue has increasingly been eating up my spare time, and we are currently at a point where little is left.
Usually, a large part of my spare time goes into rsyslog and related projects. Thankfully, Adiscon funds rsyslog development, so I can work on it during my office hours. However, during these office hours I am obliged to work on paid support cases and also on a limited number of things not directly related to rsyslog. Unfortunately, August (and early September) is the main holiday season in our region. As such, I also have few co-workers available to share rsyslog work with. And to make matters “worse”, I need to train new folks to get started with rsyslog work – one of them is doing a summer internship, so I need to work with him now. While new folks are always a good thing to have on a project (and I really appreciate it), this means a further reduction of my rsyslog time.
The bottom line is that, due to all these things together, I am not able to react to issues as quickly as I would like to. The political topic is expected to come to a conclusion – one way or the other – by the end of September. For personal reasons, I will not be able to work at all in early October (a long-planned out-of-office period), but I hope to be fully available again by mid-October. And the good news is that we will have a somewhat larger team by then, because Jan, who is doing the internship, will continue to work part-time on the project. Even better: Pascal will be with Adiscon for the next few months on a full-time basis and will be able to put considerable hours into rsyslog.
So while we have a temporary glitch in availability, I am confident we’ll recover from it in autumn, and we have very exciting work upcoming (for example, the TLS work Pascal has just announced). I also have a couple of very interesting suggestions that are currently being discussed with support contract customers.
All in all, I beg for your patience. And I am really thankful to all of our great community members who do excellent work on the rsyslog mailing list, GitHub and other places. Not to forget the great contributions we increasingly receive. Looking forward to many more years of productive syslogging!
Introducing new team member
Good news: we have some new folks working on the rsyslog project. In a mini-series of two blog postings I’d like to introduce them. I’ll start with Jan Gerhards, who already has some rsyslog-related material online.
Would creating a simple Linux log file shipper make sense?
I am currently thinking about creating a very basic shipper for log files, but wonder if it really makes sense. I am especially concerned that good tools may already exist. Being lazy, I thought I would ask for some wisdom from those in the know before investing more time in searching for solutions and weighing their quality.
I’ve read more than once that logstash is far too heavy for a simple shipper, and I’ve also heard that rsyslog is sometimes a bit heavy (albeit much lighter) for the purpose. I think with reasonable effort we could create a tool that
- monitors text files (much like imfile does) and pulls new entries from them
- does NOT further process or transform these logs
- sends the resulting entries to a very limited number of destinations (for starters, I’d say syslog protocol only)
- with the focus on being very lightweight, intentionally not implementing anything complex (for comparison, an equivalent rsyslog configuration is sketched below).
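For comparison, the functionality described above, expressed as a minimal rsyslog configuration (imfile plus plain syslog forwarding), looks roughly like this (a sketch; the file name, tag, ruleset name and target are placeholders):

module(load="imfile")

# monitor a text file and pull new entries from it
input(type="imfile" File="/var/log/myapp.log" Tag="myapp:" ruleset="shipper")

# send the entries on, unmodified, via the plain syslog protocol
ruleset(name="shipper") {
    action(type="omfwd" target="central.example.com" port="514" protocol="tcp")
}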
rsyslog error reporting improved
Rsyslog provides many to-the-point error messages for config file and operational problems. These help immensely when troubleshooting issues. Unfortunately, many users never see them. The prime reason is that most distros never log syslog.* messages, so they are simply thrown away and invisible to the user. While we have been trying to get distros to change their defaults, this has not been very successful. The result is a lot of user frustration and fruitless support work for the community – many things could be resolved very simply if only the error message were seen and acted on.
We have now changed our approach to this. Starting with v8.21, rsyslog by default logs its messages via the syslog API instead of processing them internally. This is a big plus, especially on systems running the systemd journal: messages from rsyslogd will now show up when you run
$ systemctl status rsyslog.service
This is where error messages are expected nowadays, and it is definitely a place where the typical administrator will see them. So while this change requires some config adjustment on a few exotic installations (more below), we expect it to generally improve the rsyslog user experience.
Along the same lines, we will also work on better error reporting, especially for TLS and queue-related issues, which come up frequently in rsyslog support discussions.
Some fine details on the change of behaviour:
Note: you can usually skip reading the rest of this post if you run only a single instance of rsyslog and do so with more or less default configuration.
The new behaviour has actually been available for a while; it needed to be explicitly turned on in rsyslog.conf via
global(processInternalMessages="off")
Of course, distros didn’t do that by default. Also, it required rsyslog to be built with liblogging-stdlog, which many distros do not do. While our intent when we introduced this capability was to provide the better error logging we now have, it simply did not work out that way in practice. The advantage of the original approach was that it was less intrusive. The new method uses the native syslog() API if liblogging-stdlog is not available, so the setting always works (we are even considering moving away from liblogging-stdlog, as we see it wasn’t really adopted). In essence, we have primarily changed the default setting for the “processInternalMessages” parameter. This means that by default, internal messages are no longer logged via the internal bridge to rsyslog but via the syslog() API call (either directly or via liblogging). For the typical single-rsyslogd-instance installation this is mostly unnoticeable (except for some additional latency). If multiple instances are run, only the “main” one (the one processing system log messages) will see all messages. To return to the old behaviour, do either of the following:
- add in rsyslog.conf:
global(processInternalMessages="on")
- export the environment variable RSYSLOG_DFLT_LOG_INTERNAL=1
This sets a new default – the value can still be overridden via rsyslog.conf (method 1). Note that the environment variable must be set in your startup script (which one depends on your init system or systemd configuration; see the sketch below).
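For the environment-variable route on a systemd-based system, a drop-in override is one way to do it (a sketch; the service name may differ on your distro):

# create a drop-in override for the rsyslog service
sudo systemctl edit rsyslog.service
# in the editor that opens, add:
#   [Service]
#   Environment="RSYSLOG_DFLT_LOG_INTERNAL=1"
# then restart the service so the new environment takes effect
sudo systemctl restart rsyslog.service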
Note that in most cases, even in multiple-instance setups, rsyslog error messages were previously thrown away. So even in this case the new behaviour is superior to the previous state – at least errors are now properly recorded. This also means that even in multiple-instance setups it often makes sense to keep the new default!