Cologne municipal archive building collapsed


In Cologne, Germany, the municipal archive collapsed today at around 2pm. People are believed to be trapped in the building. It is feared that lives have been lost (according to Cologne newspaper Express, no deaths are known at this time [7:10p], but 9 people are missing [5:40p]). It was a typical business day and there were both clerks and customers inside the building. However, no official statement exists yet. According to Reuters (4:55p), an official said at least one person was injured, possibly others trapped in the collapsed building.

As some people told German media, there was subway construction work close to the collapsed buildings. Sources say subway workers ran out of the construction site and yelled. That led to some people fleeing the building. According to one eyewitness, some other, smaller buildings have also collapsed in the meantime (4:40p). The witness says the road sagged. According to Cologne radio station WDR, the building actually collapsed into a newly built subway tunnel. The road is said to be wide open, having also collapsed into the tunnel (~5p).

While this is speculation, it looks like the subway construction caused shifts of earth masses, which ultimately resulted in the collapse of the building. Cologne subway operator KVB says there was no major construction work below the building at the moment of the collapse. If that is true, the collapse may be the result of a larger chain of events (and hopefully the last in that chain…).

On German radio station SWR3, a neighbor said that a close-by church was close to collapse due to subway work. This situation seems to have been resolved in the meantime.

Last week, the site was part of the large Cologne carnival parade. One cannot imagine what might have happened if the collapse had occurred at that time.

Some pictures of the site before the incident:

The webcam I quote below seems to have been located right inside the collapsed building (speculation on my part). I have been able to connect to the webcam server five times now, and the picture is always the one below. I guess that was the last picture the webcam ever took. If so, the collapse happened shortly after 2:20pm:

Google maps for orientation (you can see it happened right in a densely populated area):

View Larger Map


The municipal archive was not only a historical building, it also held important historical documents (see description below). I guess that many of these documents have been lost, but I hope that many can be recovered. According to Cologne’s official web site, it was one of the largest municipal archives in Germany, holding original documents from over a thousand years of history. As it looks, there do not seem to have been any Roman artifacts inside the building.

Correction: the building itself was not historical, it was erected in 1971. There is a picture of it available at the German news site Spiegel online (you may need to go back and forth as they add pictures – this does not look like a permanent link).

Links:

I’ll stop compiling news now (6:20p); nothing really new has appeared in the past hour. I guess the situation needs to clear up first. Mainstream media will probably have good coverage tomorrow. If you hear anything interesting, please let me know (e.g. by commenting).

rsyslog doc – state of the art…

Most people agree that rsyslog is a decent and useful piece of software. However, most people (including me) also agree that the rsyslog documentation is, ahem, sub-optimal.

When I code, I always think “I’ll do the doc soon”. But when “soon” arrives, something else is in the way: yet another (justified) feature request, articles, and other projects (yes, they exist ;)). At least I try to convey the important concepts and backgrounds here in the blog, but you have a hard time if you intend to extract information on a specific feature from the blog. So: the doc is in a bad shape.

I just got an offer from a volunteer who would like to help with the doc. That may even be the start of a rsyslog doc team. In any case, that’s a fantastic opportunity. First of all, more doc means more and happier users. Secondly, I think it is very useful when someone other than me writes user doc. I can’t even envision the questions that a regular user may ask, and that is a problem for any manual I write.

I hope this collaboration manifests. In order to aid it, let me briefly describe what currently exists: www.rsyslog.com is driven by Postnuke for various reasons, the most important one being that I have a Postnuke wiz at hand, so I do not need to dig into any dirty details if I need something extra ;) Postnuke is a CMS, so dynamic content can be added and is easy for anyone else to edit. So far, we use the web site itself primarily for news announcements.

The real doc set is kept as HTML. We use a Postnuke module to integrate that static HTML into the CMS. The HTML doc set exists only once, right inside the rsyslog git tree. When I make changes, they automatically go into git and into the tarball, and I also copy them over to the web site. All of this is without any effort, which is good. The bottom line is that the HTML doc set needs to be modified by patches or by me pulling from someone else’s git archive (both of which I will happily do). I think it is good to have the HTML pages available in the tarball; previous discussion on the rsyslog mailing list showed that package maintainers think so, too.

There exist two man pages. They are extremely bad. They need to be hand-synced with the HTML pages and I almost always forget to do so. Man pages do not go onto the web (besides some very old copies I produced in a clumsy way). But they live in git and the tarball, too.

A partial effort was made to internationalize the doc set, based on the usage of docbook. I think this is a good approach and the work done so far is kept in the rsyslog docbook branch. However, the approach currently focuses on the man pages. I do not know if it will work for the HTML doc, too.

I find docbook a very interesting concept, but the learning curve is steep. I simply had not enough time yet to dig deeply into it to start any serious work with it (html and LaTeX are still king for me ;)).

We also have a few places with obviously user-contributed content, the most important one being the rsyslog wiki. It contains many useful things, among others config samples. The bad thing about the wiki is that there is only a single one. So it probably is not the place to describe things that are very version-dependent. Or is it, and I just have the wrong approach – correct me!

Worth mentioning is also the rsyslog knowledge base, which primarily focuses on dynamic content and discussions. But the search function is a very useful tool. Also, part of the larger knowledge base is devoted to gathering information on how to configure syslog devices, how to best react to messages and how to consolidate e.g. Windows events. This obviously is not direct rsyslog documentation, but I hope it is useful and will continue to grow even more useful.

Finally, there is the mailing list and most importantly the mailing list archive. While this is definitely not considered a documentation resource, the archive has a lot of valuable information and it may even be a starting point for creating “real” doc.

I hope this is a good and complete wrap-up of the doc situation. If I have forgotten anything or you’d like to tell me your thoughts: just use the comment function! :)

rsyslog now default on stable Debian

Hi all,

good news today. Actually, the good news already happened last Saturday. The Debian project announced the new stable Debian 5.0 release.

Finally having a new stable Debian is very good news in itself – congrats, Debian team. Your work is much appreciated!

But this time, this was even better news for me. Have a look at the detailed release notes and you’ll know why: Debian now comes with a new syslogd, finally replacing sysklogd. And, guess what – rsyslog is the daemon of choice! So it is time to celebrate for the rsyslog community, too.

There were a couple of good reasons for Debian to switch to rsyslog. Among others, an “active upstream” was part of the success – thanks for that, folks (though I tend to think that after the more or less unmaintained sysklogd package, it did not take much to be considered “active and responsive” ;)).

Special thanks go to Michael Biebl, who worked really hard to make rsyslog available on Debian. It is one thing to write a great syslogd, it is a totally different one to integrate it into a distro’s infrastructure. Michael has done a tremendous job, and I think this is his success at least as much as it is mine. He is very eager to get all the details right and has very often provided excellent advice to me. Michael, thanks for all of this and I hope you’ll share a virtual bottle of Champagne with me ;)

Also, the rsyslog community deserves sincere thanks. Without folks who spread the word and help others get rsyslog going, this project wouldn’t see the success it experiences today.

I am very happy to have rsyslog now running by default on Fedora and Debian, as well as a myriad of derivatives. Thanks to everyone who helped make this happen. So on to a nice, little celebration!

Thanks again,
Rainer

PS: promise: we’ll keep rsyslog in excellent shape and continue in our quest for a world-class syslog and event processing subsystem!

screwed up on LinkedIn ;)

A couple of days ago, I created a rsyslog group on LinkedIn. Then I was curious about what would happen. Well, nothing. Nothing at all. So I thought it was probably not the right time for such a thing.

And, surprise, surprise: today I browsed through LinkedIn and saw there were 16 join requests. Oops… there seems to be no email notification for them. Bad… Well, I approved all folks. If you were one of them and are now reading this blog post: please accept my apologies! Obviously, this was just another time I screwed up on the Internet…

To prevent any further such incidents, I have now set the group to automatically approve everyone who is interested in joining. That’s great for this type of group; actually, I am happy about everyone who comes along ;)

When does rsyslog close output files?

I had an interesting question on the rsyslog mailing list that boils down to when rsyslog closes output files. So I thought I’d talk a bit about it in my blog, too.

What we need to look at is when a file is closed.
It is closed when there is a need to. So, when is there need? There are currently three cases where that need arises:

a) HUP or restart
b) output channel max size logic
c) change in filename (dynafiles only)

I think a) needs no further explanation. Case b) should also be self-explanatory: if an output channel is set to a maximum size, and that size is reached, the file is closed and a new one re-opened. So for the time being let’s focus on case c):
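For illustration, cases b) and c) map to rsyslog.conf constructs roughly like the following (a sketch in the legacy config syntax; the file names and the rotation script path are made up):

```
# case b): output channel with a maximum file size; once the size
# is reached, the file is closed and the action script is invoked
$outchannel mychannel,/var/log/mylog,52428800,/usr/local/bin/rotate-mylog
*.* :omfile:$mychannel

# case c): dynamic file name generated from a template;
# a changing %HOSTNAME% means switching between output files
$template DynFile,"/var/log/%HOSTNAME%.log"
*.* ?DynFile
```

The size of the dynafile cache mentioned below can be tuned via the $DynaFileCacheSize directive.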

I simplified a bit. Actually, the file is not closed immediately when the file name changes. Instead, the file is kept open, in a kind of cache. So when the very same file name is used again, the file descriptor is taken from the cache and there is no need to call the open and close APIs (which are very time-consuming). The usual case is that something like HOSTNAME or TAG is used in dynamic file name generation. In these cases, it is quite common that a small set of different file names is written to. So with the cache logic, we can ensure good performance no matter in what order messages come in (generally, they appear in random order, and thus there is a large probability that the next message will go to a different file on a sufficiently busy system). A file is actually closed only if the cache runs out of space (or cases a) or b) above happen).

Let’s look at how this works. We have the following message sequence:


Host Msg
A M1
A M2
B Ma
A M3
B Mb

and we have a filename template, for simplicity, that consists of only %HOSTNAME%. What now happens is that with the first message the file “A” is opened. Obviously, messages M1 and M2 are written to file “A”. Now, Ma comes in from host B. If the name is newly evaluated, Ma is written to file B. Then, M3 again to file A and Mb to file B.

As you can see, the messages are put into the right files, and these files are only opened once. So far, they have not been closed (and will not be until case a) or b) happens), because we have just two file descriptors and those can easily be kept in the cache (the current default for the cache size is, I think, 100).

I hope this is useful information.

On the reliable plain tcp syslog issue … again

Today, I thought hard about the reliable plain TCP syslog issue. Remember? I have ranted numerous times on why “plain tcp syslog is not reliable” (this link points to the initial entry), and I have shown that by design it is not possible to build a 100% reliable logging system without application-level acks.

However, it hit me during my morning shower (when else?) that we can at least reduce the issue we have with the plain TCP syslog protocol. At the core of the issue is the local TCP stack’s send buffer. It enhances performance but also causes our app to not know exactly what has been transmitted and what has not. The larger the send buffer, the larger our “window of uncertainty” (WoU) about which messages made it to the remote end. So if we are prepared to sacrifice some performance, we can shrink this WoU. And we can do that simply by shrinking the send buffer. It’s so simple that I wonder why a shower was required…
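Shrinking the send buffer is essentially a single setsockopt() call. Here is a Python sketch of the idea (rsyslog itself would do the equivalent in C; the function name and the 4 KiB value are just illustrative):

```python
import socket

def make_small_wou_socket(sndbuf_bytes=4096):
    """Create a TCP socket with a deliberately small send buffer.
    With a smaller SO_SNDBUF, fewer unacknowledged bytes can sit in
    the kernel when the connection breaks - a smaller "window of
    uncertainty", bought at the cost of throughput."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, sndbuf_bytes)
    return s

sock = make_small_wou_socket()
# the kernel may round the value up (Linux, for example, doubles it
# for bookkeeping overhead), so read back what was actually set:
actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
sock.close()
```

Note that this does not remove the WoU: even with a tiny buffer, some bytes can be in flight and unacknowledged when the peer dies.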

In any case, I’ll follow that route in rsyslog in the next days. But please don’t get me wrong: plain TCP syslog will still not be reliable even if the idea works. It will just be less unreliable – but much less ;)

Low-End Windows Event Log Tool Released

Adiscon, my company, has just released EventConsolidator, an easy-to-use tool for Windows event log consolidation targeted at the small business market. Unlike our full-blown EventReporter and MonitorWare Agent products, this is a purely agentless solution to monitor the standard Windows Event Log files. Also, it does not (yet) store any events in a central repository but rather works on the native event logs of the Windows machines it monitors.

The tool comes with basic display and search abilities and some preconfigured reports. This is Adiscon’s first move into that market segment. I am very interested to see what feedback we get from that tool. We are very open to all customer suggestions. I have to admit that I argued it may be better to do the essentials first and then look at what people really need, rather than build a complex one-size-fits-all approach. So it will be interesting, at least for me, to see if that thought works out. Just for the record, EventConsolidator is commercial software, but a full-featured free trial is available from the EventConsolidator site.

US Citizen? Your credit is in doubt…

I was introduced to a very subtle effect of the Heartland breach. Remember, card processor Heartland has screwed up and, as some sources say, 100 million credit card numbers were stolen from them via a Trojan. That fact made big news and, among others, started a discussion about whether PCI has been proven useless. But there seem to be additional effects: US customers seem to have lost a lot of credibility in international shopping.

In Adiscon’s office, I heard today that we got a call from one of our card processors. Keep in mind that we are based in Germany. The card processor inquired about a recent transaction and asked us to check whether it could be credit card fraud. It was not, but he left us his phone number so that we could check with him in the future whenever we suspect fraud on a transaction.

This is quite unusual and immediately drew my attention. I gave that guy a call. He explained that they are routinely checking US credit card transactions because some problems have been seen recently with US cards. He explained to me that the processor would like to protect merchants, because “if you ship the goods and the cardholder protests the charge … weeks later … you will be charged back but unable to recover the goods” (good point, btw). So I asked if they were calling because of the Heartland breach. Not only, he said, but that would be an example (I deciphered this as a “yes”). So then I asked if they had not blacklisted the affected card numbers. Some statements followed, which I deciphered to mean “no”. So the cards are still active and seem to cause issues (why else would a card processor begin to call its merchants?).

I know that Heartland does not know exactly which card numbers have been stolen. But it is known that most probably any card processed within the past 10 months is highly suspect. So wouldn’t it have been fair security practice to put these cards on the blacklist and issue new ones to the cardholders? Sure, that would be inconvenient (read: costly) and, probably more important, would have shown everyone that someone had screwed up. But would that not be much better than putting both consumers and vendors at risk? Without automatic blacklisting, consumers need to pay much more attention to their credit card bills.

An interesting side-effect is that US customers seem to have lost credit outside of the US. For example, it was suggested to me that we check each US order in depth before delivering anything. If everyone else gets this advice, US customers will probably find shopping overseas quite inconvenient…

If you lose your credit card, you are legally required to call your card issuer and report that loss. As long as you do not notify them, you are liable. If, on the other hand, someone in the card industry loses your card (number), nobody seems to be liable: customers must check their statements and vendors must do in-depth checks (sigh) on their customers. Is this really good practice?

And what if a card is used to commit credit card fraud? No problem at all (for the card industry): either the cardholder will not notice it (and pay for the fraud) or the cardholder protests the charge, in which case the merchant needs to pay. The latter case involves some manual processing by the card industry: again, no problem! The merchant is charged a hefty protest fee. Looking at how hefty the fee is, it seems to be even profitable for the card industry if it takes that route.

Bottom line: who is responsible? Card industry (Heartland in this case). Who pays? Everyone else! Isn’t that a nice business model? Where is the motivation to keep such a system really secure?

I think that really calls into question whether the card industry is interested in security. PCI may not have failed (I tend to agree with Anton Chuvakin here). But it smells a bit like PCI and whatever other efforts cannot succeed, because they are not deployed in an honestly security-aware environment but rather in one that needs good excuses for sloppy security. As long as the card industry does not do the right thing as soon as it costs the card industry’s money, real security cannot be achieved.

Begun to roll out race patches…

I have now begun to roll out the rsyslog race patches. Before the weekend, I rolled out the patch for the debian_lenny and development branches, and today the beta branch followed. I am now looking forward to feedback from the field. The patch for v3-stable is ready (and available via git), but I’d like to get at least a bit more feedback before I do another stable release.

Wanna play? No, says the DRM!

Do you like DRM? Isn’t that a perfect thing to make sure you are properly licensed with all your music, movies and, of course, software? Well, folks like the EFF have strongly opposed DRM right from the beginning. One of their arguments has always been that DRM, if thought through to the end, would take away the user’s ability to do with his machine what he wants.

Now we see a perfect example. Grave Rose just posted a nice link on twitter: “Gears of War DRM screwup makes PC version unplayable”. It’s all about a DRM cert that seems to have expired, with the end result that the game no longer works. Thankfully, we do not (yet) have the full trusted computing platform in place, so you can still change your PC. This enabled users to set back their system clocks, and so the game worked again. rofl…

Granted, this is not a real DRM issue. Such an expiration date could be encoded in software long before DRM existed. With a good debugger, it is not too hard to remove it (of course, that’s not legal, and with DRM it is considerably more work to do…). But if we are forced to use more and more DRM, if we are forced to use hardware platforms that deny true admin access to their owner, and if we have legislation that outlaws helping yourself – won’t those issues become the norm?

For most of the time, you could rest assured that once you had installed something and did not change it, it was likely to run for eternity (well… somewhat). This seems to no longer hold true. The only true solution is to use as much open source as possible and say no to any DRM-enabled products.

As an interesting side-note, I am not sure if the poor gamers who set back their system clocks are in legal trouble: didn’t they try to circumvent a technical copy protection? Not sure about the DMCA, but in Germany you could argue that this is an illegal attack… Happy gaming!