This short tutorial explains everyday service management. While it claims to address management of rsyslog, it actually describes the tools for all services. The tutorial is written for CentOS 7, but should work equally well on other systemd-based systems like CentOS 8, recent Fedora, recent Debian and recent Ubuntu. Continue reading “How to start, stop and query the status of rsyslog (on a systemd system)”
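As a quick preview of what the tutorial covers, the everyday commands on a systemd-based system look like this (shown for rsyslog, but the same pattern works for any service):

```
# query the current status of the service
systemctl status rsyslog

# stop and start the service
systemctl stop rsyslog
systemctl start rsyslog

# restart in one step, e.g. after a configuration change
systemctl restart rsyslog
```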
This tutorial explains how to configure rsyslog to send syslog messages over the network via TCP to a remote server. No advanced topics are covered. We use CentOS 7. This is part of an rsyslog tutorial series.
We will configure an end node (here: LR) to send messages via TCP to a remote syslog server. We do not apply local pre-filtering and we want to make only minimal changes to the CentOS 7 default configuration. In our base lab scenario, this will lead to the following configuration:
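As a sketch, the only change needed on the sender is one extra line in /etc/rsyslog.conf. The server name central.example.com and port 514 below are placeholders for your environment:

```
# forward all messages to the central server via plain TCP
# (central.example.com and 514 are placeholders)
*.* action(type="omfwd" target="central.example.com" port="514" protocol="tcp")
```

The older legacy syntax `*.* @@central.example.com:514` achieves the same; the two at-signs select TCP (a single one would mean UDP).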
This tutorial explains how to configure rsyslog to accept syslog messages over the network via TCP. No advanced topics are covered. We use CentOS 7. This is part of an rsyslog tutorial series.
We will configure the relay system to accept TCP-based syslog from remote endpoints. We do not, however, configure any sender to connect to it. In our base lab scenario, this will lead to the following configuration:
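A minimal sketch of the receiving side in /etc/rsyslog.conf; port 514 is the conventional choice, but any free port works:

```
# load the TCP syslog input module once ...
module(load="imtcp")
# ... and open a TCP listener on port 514
input(type="imtcp" port="514")
```

On CentOS 7 you will typically also need to open the chosen port in firewalld and, where SELinux is enforcing, allow rsyslog to bind to it.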
Note that we will accept incoming logs and store them in the same location as local logs. Handling them differently will be part of a later tutorial. Continue reading “rsyslog: configure syslog TCP reception”
While I was in Tallinn to give some lectures about syslog technology in general, and rsyslog in particular, I had the idea to use that opportunity to think about crafting rsyslog tutorials.
For the practical session at TALTECH IT-College I have identified a couple of typical configuration tasks. As experience shows, carrying them out successfully requires not only rsyslog knowledge but general sysadmin know-how as well. Continue reading “Tutorials for rsyslog”
Did you know? The rsyslog project offers a stable release every day! The world is changing and getting faster all the time, especially software. If a bug is fixed, you want to have the fix as soon as possible. Even more so if it is security related.
Development happens in active code. When we fix something, this is done in the so-called “master” branch. If you use the master branch, you are covered as soon as the fix is applied.
Traditionally there are “stable builds” which are released relatively infrequently, sometimes many months apart. In rsyslog’s case, the interval is only 6 weeks. For bugfixes, someone needs to backport the fix to that “stable build”. Backporting comes with its own risks, as the code is integrated into a version it was never written for.
We have much more frequently updated versions as well. They are crafted each day and contain the current latest and greatest, including all known fixes. These daily versions are usually considered experimental or development versions. Quite honestly, this is no longer the case.
Before I continue, please consider that rsyslog is a relatively small project. What is true for it may not be true for much larger ones.
In rsyslog, we have two important policies:
- new versions never break existing configurations (except for extremely important reasons) – this means you can always update to the latest version without risking that your config blows up
- we rely on our CI – if a change passes the testbench, it is basically good to go. To complement this, we also have manual reviews of critical changes. Only things we are pretty confident in go into the master branch.
What does that mean? First of all, the master branch actually is a stable version. The code has passed all of our checks and safeguards. We do not second-guess its stability; we simply continue to work on new features and bug fixes. Note that this strong position sometimes upsets contributors. We have had PRs that took months to ripen before they were finally good enough to be merged into master.
The second thing is that daily stable builds are built from master and so are also stable.
Now let’s consider what happens when it is time to create the 6-weekly stable release. Code-wise it’s pretty simple: as master is stable, we simply take the master branch and declare it the new stable version. It’s the same version as the daily build from that day. What is different is that the doc is consolidated and we prepare the files for package builds. Then, packages are built and tested. Except for the doc, one could just as well have used the daily stable build.
Think about it. What it ultimately means is that the 6-week “stable” release is just a way to avoid doing more frequent updates. But the daily build is actually as stable as the stable release.
In a world of rapidly moving development, using the daily stable build has a lot of advantages. Most importantly, one gets fixes as soon as possible. Not to mention new features.
I understand that the scheduled release may be the better option for some environments. But for most, the daily stable is actually to be preferred.
Please note: daily stable builds are currently only available for Ubuntu. With our efforts towards the openSUSE Build Service we aim to make them available for a wider range of platforms.
Some (very) large companies really believe in their purchasing power – to mutual disbenefit. I wanted to share an anonymized case with you, one that unfortunately is not totally uncommon.
The case: we got an inquiry from a large enterprise quite a while ago. They wanted support and help for a (as they said) large new product development that would probably be sold as a solution, with rsyslog being a small but not unimportant part of it. We put quite some effort, including teleconferences, into answering their initial questions about rsyslog and our services. When it then came to the actual purchase, the potential service volume began to shrink.
What first looked like a solid project to us ended in discussions about how to use the smallest possible support contract. Then, we were asked to provide quotes for an interesting amount of development hours (but without details about what to develop). In the end, all of this has vanished and we are at a very small support contract level. Still, we keep getting hints that there will be “large follow-up orders”. For the pretty small volume actually being discussed, we already had discussions and reviews of terms and conditions. Just to give you an idea: hiring a lawyer to evaluate the requirements would probably cost twice the overall purchase volume, or more.
Still, we are professionals. So we made changes to the agreements provided, avoiding, of course, everything that would put undue risk or cost on us. Not unexpectedly, this came back today for negotiation, along with a request for even more teleconferences. I need to mention that the setup effort to date was already larger than the intended purchase volume.
As such, this was our response (in italics):
many thanks for your mail. Unfortunately I need to tell you that Adiscon is no longer interested in pursuing this opportunity.
Please let me explain. It is our policy to not accept terms and conditions other than ours for purchase volumes as low as we are discussing here. It is by far more cost effective for us to skip these business opportunities than to try to engage. We have tried our best to accommodate your needs and provide help in getting this project going, but we are at a limit of what we can do for small purchases.
I know that you will now mention there may be large follow-up purchases. In our experience, this is actually very seldom the case, and so we also have the general policy of being very conservative in evaluating opportunity potential. The overwhelming experience is that customers with concrete plans always make much larger initial commitments.
We need to abide by our practice-proven policies, also to guard our customers. They trust us to provide great and reliable service. We can do so only if we restrict ourselves to mutually beneficial contracts.
I understand that you are limited by your policies in how far you can go. I understand that you may never make a larger commitment for an initial project as part of your policy. We fully understand that position. But in the end, everything boils down to incompatible policies and thus the inability to find common ground with reasonable effort. As such, we recommend looking for a different service provider.
Some of you may think it would have been professional to keep on negotiating. But I don’t think so. We may miss an opportunity, right. But our overwhelming experience is that projects initiated like this one usually fail, and cause a lot of harm while doing so. My personal experience is that if a large corporation is actually interested in services, they either
- provide a larger initial investment (they know what they need)
- do a test purchase of small value without the need for a full contract (they know it is inefficient to negotiate for 100 hours over a purchase of a couple of hours)
I really, really learnt that if a corporation does not want to give you a chance to provide a real quote for development needs and insists on a full contract for a small purchase, they do not really know what they are after. Or they are just after getting an unfair benefit. Or they are generally too hard to work with to make sense for a small service provider. None of this is the foundation for a great cooperation.
I think in such cases it is ethical to say “no”. It’s actually important to do so: our customers trust us to provide great value to them. And that is only possible if we do not engage in what looks like really bad deals.
Creating a pull request is simple. But creating a really good pull request seems to be a different beast. If you follow some simple rules, a great PR is easy to create:
- one PR, one feature (or bugfix)
- use one commit for feature PRs
- use two commits for bugfix PRs
- write descriptive commit messages
- provide documentation for user-visible changes
- keep your PR in shape while you work on it
PRs should include one commit per feature or bugfix – but not more. Fix-up commits in particular are really bad, and we try to automatically reject them.
A fix-up commit is one that fixes a previous commit within the same PR. The key point is that it does not correct an existing coding bug, but one that would have been introduced by the same PR. The proper thing to do is to squash it into the commit that introduced the mistake. It is best to not even create the fix-up commit in the first place: use “git commit --amend” when applying the fix.
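A minimal sketch of that workflow in a throwaway repository (all file names and messages are made up for illustration):

```shell
# create a throwaway repository for the demonstration
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email "dev@example.com"
git config user.name "Demo"

# the one commit of our (hypothetical) feature PR
echo "int x = 1;" > feature.c
git add feature.c
git commit -q -m "add feature x"

# oops, a mistake in that commit: fix the file, then fold the
# fix into the existing commit instead of adding a fix-up commit
echo "int x = 2;" > feature.c
git add feature.c
git commit -q --amend --no-edit

# history still contains exactly one clean commit
git rev-list --count HEAD    # prints 1
```

With `--no-edit`, the original commit message is kept; omit it if the message itself also needs fixing.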
There is a hard technical reason why fix-up commits are bad: git bisect provides an easy way to find regressions. When there are commits that do not build (or where tests fail), git bisect does not work. Continue reading “Squash your Pull Requests!”
Today’s release of rsyslog 8.1901.0 contains a small but important feature: the ability to specify a minimum batch size. It is much-needed for some outputs, with ElasticSearch (and ClickHouse) being prime examples. While I am happy I finally implemented it, I am also a bit ashamed it took me almost three and a half years since Radu Gheorghe proposed the feature in 2015.
Quick reminder on how rsyslog batches work: we receive messages and put them into queues. From these queues, we pull so-called batches (sets of messages) and have them processed by output modules. A batch can contain up to a given maximum number of messages (by default, and depending on the case, around 1024 or below). If there are that many messages inside the queue, a full batch is extracted and processed. If the queue does not contain that many, whatever it currently holds forms the batch. As such, a batch can contain as few as one message. Continue reading “Finally … rsyslog Minimum Batch Sizes”
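As a sketch of how this can be used, the feature is configured via queue parameters on an action. The parameter names below reflect my reading of the 8.1901.0 release, so treat them as assumptions and check the official documentation; the server name is a placeholder:

```
action(type="omelasticsearch"
       server="es.example.com"
       queue.type="linkedList"
       # upper bound: never pull more than 1024 messages per batch
       queue.dequeueBatchSize="1024"
       # lower bound: wait until at least 128 messages are queued ...
       queue.minDequeueBatchSize="128"
       # ... but at most 2000 ms, then process whatever is there
       queue.minDequeueBatchSize.timeout="2000")
```

The timeout matters: without it, a quiet system could delay messages indefinitely while waiting for the minimum batch to fill.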