rsyslog v6 stable released

Happy new year! Back from vacation, I started the new year by finally releasing the first stable rsyslog v6. While it had been ready for a while, I was hesitant to release it while so much else was going on, leaving little time to deal with any quirks that might show up (I know I all too often overlook a thing or two with such a release and, if so, it is good to be able to react fast). So today was finally the day. Let's wait and see if I failed somewhere.

The new release is a very important one for me. First of all, because it helps me get down to a decent set of versions I need to support. Doing all that v4..v6 work in parallel across multiple branches really became time-consuming (but manageable, thanks to git!). V4 has now been retired, and the number of branches is reduced to a minimum.
 
Very important is the new log classification capability. While it has been in the beta (and devel) versions for quite a while, I think people were hesitant to try it out. At the same time, this prevented us from developing it any further. With it now being part of a stable release, I hope that it will gain momentum.

Besides software availability, rule bases for common syslog messages are also important. Adiscon will assist with creating these rule bases upon request, and I hope that people will use that facility. All they need to do is provide sufficient information about what they want to normalize and some sample messages for us to work with. We will then see how to create the required rule base. This whole effort is based on liblognorm, so the technology can also be used by other projects (sagan is a good example, and it actually currently has more interest in that feature than rsyslog has).

Quick update on log normalization

I just wanted to give a heads-up on the status of log normalization. We have just released updated versions of libee and liblognorm. These provide important new features, like the capability to annotate events based on classification. This, among other things, brings the libraries more back in line with recent CEE developments. Also, the log normalizer tool is nearly ready for prime time. The "only" thing that is missing is a decent set of rule bases. Thankfully, sagan already has a couple. I guess that besides programming, obtaining rule bases is a major thing to target.

As soon as I find time (I hope soon!), I'll finalize some loose ends on the software side and get documentation online on how to use the normalizer tool. I think that with that, a great tool for everyday use in log analysis will become available. Feedback and collaboration are always appreciated!

log annotation with liblognorm

I have recently written about the concept of event (log) annotation and liblognorm. During the past days I have made up my mind and have begun implementing some of it today. In essence, rule bases will receive a new rule type, "annotate", which contains the annotation part. Here is a sample from my lab environment:


rule=logon:<%-:number%>1 %timestamp:date-rfc5424% %src-id:word% ...
annotate=logon:+action="login"

Note the tag name ("logon"), which appears in both lines. This is a liblognorm tag (not to be confused with a CEE tag!). This rule base tells the normalizer to append, according to the target format, the fields given in the annotate statement to any event that carries the tag in question ("logon" in our case).
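
To make this concrete, here is roughly what I expect the annotated result to look like. This is an illustrative name/value rendering only, not the exact output of any particular encoder, and the field values are made up:

timestamp="2012-01-04T10:15:30.000Z" src-id="host1" tags=["logon"] action="login"

Everything up to and including the tag set comes from the recognition step; the action field is appended by the annotation step because the event carries the "logon" tag.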

Today, I am extending the rule base parser to support the annotate rule. Over the next few days, I'll update the rest of the system. When this is done, I'll probably release that version so that you can try out the new functionality in your own environment.

liblognorm event annotation … and CEE

As you probably know, CEE is an effort driven by MITRE to support a common event expression format. Liblognorm is a log normalizer library (aka “network event normalizer”). One of its primary target formats is CEE.

For pure normalization needs, liblognorm extracts data fields from semi-structured log messages. The extracted fields are available inside a (basically) name/value property list. Liblognorm also makes it possible to classify messages, e.g. as being a logon or logoff message. For this classification, liblognorm provides so-called "tags". These are simple words (strings of characters) which can be specified by the user. Tags reside in a special property called "tags", but otherwise occupy a flat space (tags can easily be structured via punctuation).
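
As a small, purely illustrative example (a simplified rendering, not a specific output format), a recognized logon message might end up with properties like these:

user="frank" src-ip="192.0.2.1" tags=["logon"]

The user and src-ip fields are extracted from the message text, while the "logon" tag is assigned by classification.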

CEE takes a slightly different approach: while it shares the tag concept (actually, liblognorm inherited tags from an earlier version of CEE), CEE classifies tags into different tag types. For example, "logon" may be a tag, but it can only be used to describe an action(-field). As such, "logon" cannot be present by itself in a CEE log record; it must be given as the value of the action field (action="logon"). Also, CEE requires some other fields which may not be explicitly present in the original message, even though the information may implicitly be contained in it. To express such information entities (and tags in the CEE way), liblognorm needs the capability to add additional fields to an extracted event. Let us call this set of fields the "annotation" for easier future reference. Liblognorm needs to annotate the event so that the target format's (CEE) requirements are met. While I have been talking about CEE so far, I assume (and know from previous experience) that other formats may have similar requirements, albeit with different fields that need to be annotated.

The question is now: how to implement this in liblognorm? The initial idea was to include the annotation inside the normalization rule itself. That has a major drawback: if a rule base is to be used for CEE and some other format, the annotation may differ, and thus the same rule base cannot be used. These two rule bases would differ in just the annotation. So it seems more natural, and easier to maintain, to split the recognition rule from the annotation rule. In that setting, the message is recognized and classified by recognition rules, and the annotation is based on (different) annotation rules. That way, a single set of recognition rules can be used with multiple sets of annotation rules. Only the latter need to be redefined for different target formats (or systems).

This split-rule method is the direction I intend to take. In essence, the current "rule=" rule and its format will remain untouched. It will be augmented by "annotate=" rules, which contain the full annotation. The binding between the two will be done via classification (liblognorm tags): in the first step, the message is recognized, data is extracted and tags are assigned, just as it is currently done. Then a second step will be added. It traverses the tags and adds all annotations that are defined for the message's tag set. So the binding is on the tag set. Finally, it is probably necessary to add a third step that can remove unwanted fields. This step is probably target-format specific. For example, this step could eliminate the liblognorm tag set from an event if CEE compliance is desired, because CEE neither supports nor even permits an extra tag set.
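
To make the split a bit more tangible, here is a rough sketch of how such a rule base could look once the new rule type is in place (the annotate syntax is the one proposed above and may still change; the field lists are shortened):

# recognition rules: extract fields and assign tags
rule=logon:<%-:number%>1 %timestamp:date-rfc5424% %src-id:word% ...
rule=logoff:<%-:number%>1 %timestamp:date-rfc5424% %src-id:word% ...
# annotation rules for a CEE target format
annotate=logon:+action="logon"
annotate=logoff:+action="logoff"

A second annotation set for some other target format could reuse exactly the same recognition rules and differ only in its annotate statements.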

Feedback on this approach is appreciated. It is my hope to be able to implement this in the near future.

Paper on Log Normalization

I wanted to make all of you aware that I have posted a paper on log normalization. This was originally done in regard to CEE, but I noticed that the classification of different log sources and the way to handle them is of broader use. I hope you find the paper useful.

normalizing Apache Access logs to JSON, XML and syslog

I like to make up my mind based on examples, especially for complex issues. For a discussion we had on the CEE editorial board, I wanted a real-world example of a log file with many empty fields. An easy-to-grasp, well-understood and easy-to-parse example of such a file is the Apache access log. Thanks to Anton Chuvakin and his Public Security Log Sharing Site, I also had a few research samples at hand.

The Apache common log format is structured data. So there is no point in running it through a free-text normalization engine like liblognorm. Of course, it could process the data, but why use such complex technology? Instead, the decoder is now part of libee and receives a simple string describing which fields are present. It is called like this:

$ ./convert -exml -dapache -D "host identity user date request status size f1 useragent" < apache.org > apache.xml

The options specify the encoder and decoder, and the string after -D tells the converter the field names and their order. But now let the input and output speak for themselves:
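
For illustration only (this is a made-up line in the usual Apache combined format, not one of the actual research samples), an input line such as

192.0.2.10 - frank [10/Oct/2011:13:55:36 -0700] "GET /index.html HTTP/1.1" 200 2326 "http://www.example.com/start.html" "Mozilla/5.0"

would, per the -D string above, be decoded into name/value pairs roughly like host=192.0.2.10, identity=-, user=frank, date=10/Oct/2011:13:55:36 -0700, request=GET /index.html HTTP/1.1, status=200, size=2326, with f1 holding the referrer URL and useragent the quoted agent string.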

The converter works by calling the decoder, which creates an in-memory representation of the log data in a CEE-like object model. Then, that object model and the selected encoder are used to write the actual output format. That way, the conversion tool can convert between any structured log formats, as long as the necessary encoders and decoders are available. This greatly facilitates log processing in heterogeneous environments.
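
To illustrate the design idea in code (a conceptual sketch only; the types and function names below are made up and are NOT libee's actual API), the any-to-any conversion boils down to a shared in-memory event model with pluggable decoders and encoders:

/* Conceptual sketch of the decoder -> shared event model -> encoder idea.
 * All type and function names are made up for illustration; this is NOT
 * libee's real API. */
#include <stdio.h>
#include <string.h>

#define MAXFIELDS 16

struct event {                          /* shared in-memory representation */
    int n;
    char name[MAXFIELDS][32];
    char value[MAXFIELDS][256];
};

/* a toy "decoder": split a space-separated record according to a field list */
static void decode(const char **names, int nnames, char *line, struct event *ev)
{
    char *tok = strtok(line, " \n");
    for (ev->n = 0; tok != NULL && ev->n < nnames && ev->n < MAXFIELDS; ev->n++) {
        snprintf(ev->name[ev->n], sizeof ev->name[ev->n], "%s", names[ev->n]);
        snprintf(ev->value[ev->n], sizeof ev->value[ev->n], "%s", tok);
        tok = strtok(NULL, " \n");
    }
}

/* a toy "encoder": render the event model as (simplistic, unescaped) XML */
static void encode_xml(const struct event *ev, FILE *out)
{
    fputs("<event>", out);
    for (int i = 0; i < ev->n; i++)
        fprintf(out, "<%s>%s</%s>", ev->name[i], ev->value[i], ev->name[i]);
    fputs("</event>\n", out);
}

int main(void)
{
    const char *names[] = { "host", "user", "status" };
    char line[] = "192.0.2.10 frank 200";    /* toy input record */
    struct event ev;
    decode(names, 3, line, &ev);  /* any decoder can fill the model ...      */
    encode_xml(&ev, stdout);      /* ... and any encoder can render it again */
    return 0;
}

The point is that adding another output format only means adding another encoder; the decoders and the object model stay untouched.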

Note that liblognorm works similarly and, from libee's point of view, can be viewed as a decoder for unstructured text data (or, more precisely, for text data where the structure is well-hidden ;)).

log normalization – first results

At the beginning of this week I was pretty confident that I would not make my self-set deadline of one month to implement a first rough proof of concept of liblognorm, a log normalizing library. Fortunately, I made extremely good progress over the past two days, and I am now happy to say that I have such a proof of concept available. All of this can be seen by pulling from Adiscon's public git server: you need libestr, libee and liblognorm to make it work.

Right now, I'd like to provide a glimpse of how things work. Thanks to Anton Chuvakin and his Public Security Log Sharing Site, I got a couple of examples to play with (but I am still interested in more log samples, especially from Cisco devices). Out of the many, I took a random messages.log file written by sysklogd. This is my input file and can be seen here.

To normalize events, liblognorm needs to know which fields are present at which positions of the input file. It learns this via so-called "samples". Samples are very similar to the patterns used by virus scanners: just as virus patterns describe how a specific virus looks, log samples describe how a specific log line looks. Unlike virus patterns, though, I have crafted a format that is hopefully easy (enough) for sysadmins to understand, so that everyone can add relevant samples themselves. To support this, samples look relatively similar to actual log lines, and this is the reason I have termed them "log samples". Like log files, samples are stored in simple text files. For the initial test, I used a very small set of samples, available here. A production system will have many more samples, and I envision systems that have many (tens of?) thousands of samples loaded at the same time. If you look at the samples, pay special attention to the entities enclosed in '%' – these are field definitions, the rest is literal text.
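
To give an idea of what a sample looks like, a sample for a typical sshd message would essentially be the log line with the variable parts replaced by field definitions, roughly like this (an illustrative line I am sketching here, not taken from the linked sample file; the exact line prefix and the available field types are best taken from that file):

%date:date-rfc3164% %host:word% sshd[%pid:number%]: Accepted password for %user:word% from %ip:ipv4% port %port:number% ssh2

Everything outside the '%' entities is literal text that must match; each %name:type% entity describes one field to extract.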

The actual normalization is performed by the library's engine, which parses log lines, based on the samples, into fields. This creates an in-memory representation of the event, which can then be processed by the driving application or be written to some other media or to the network.

Liblognorm will come with a small tool called "the normalizer". It is a minimal library user: it loads a sample database, reads log lines from standard input, creates the in-memory event representation and then writes this representation to standard output in a standardized format. So far, it supports formats as they are expected for the upcoming CEE standard.
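
For reference, an invocation follows the usual filter pattern, something along these lines (file names here are placeholders, and the exact option for loading the sample database may differ in this early version):

$ normalizer -r sample.rulebase < messages.log > normalized.txt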

The result of a normalizer run on my test input file, based on the provided sample base, can be seen here. The output is actually a bit more verbose than described above, because it lists the to-be-normalized line as well. If you look at the properties I extracted, you'll probably notice that some do not make too much sense (maybe…). Also, a classification of the messages is missing. Don't worry about these aspects right now: it is a proof of concept, and these things will be addressed by future development (the classification, for example, will be based on the CEE taxonomy via tags).

I hope I was able to convey some of the power that is available with liblognorm. Of course, a "little bit" more work and time will be required to get it production-ready. Unfortunately, I will be unavailable for larger parts of the next two weeks (other work is now pressing, plus a long-awaited seminar ;)), but I will try to get liblognorm into the best shape possible as quickly as possible. In the meantime, if you like, feel free to have a look at its code or play with it. All of what I wrote can actually be done with the versions available in git.

Call for Log Samples

My log normalization effort has made good progress, and I have a very rough first proof of concept available. It takes a log sample database and transforms input log files into a CEE-like output format.
Now I am looking at ways to practice-test it. So I'd appreciate it if you could point me to some sources of log files. It needn't be terabytes, but the files should be anonymized and usable on the public Internet. For obvious reasons, it would be good if they came from widely deployed devices.
I would use a subset of these samples to extract usable sample database entries and see how they run through the normalizer.
Thanks,
Rainer

libee status update

Yesterday I reached some important milestones:
  • defined an internal simple event representation format “int” to create test cases
     http://doc.libee.org/classee__int.html
  • wrote a decoder “int” -> CEE
  • wrote a syslog encoder CEE->syslog
    (with enough escapes to be testable, but without all details)
  • wrote a small program (int2syslog) that ties these pieces together and can be used to visualize test cases in syslog representation
All is available from the libee git at http://git.adiscon.com/?p=libee.git;a=summary

logging and the C NUL problem

Again, I ran into the "C NUL problem", that is, the way C strings are terminated. Unfortunately, the creators of C represented strings as variable-length arrays of char without an explicitly stated size. Instead of a size property, a C string is terminated by a US-ASCII NUL character ('\0'). This works well enough in most cases, but has one serious drawback: the NUL character is a reserved character that cannot be part of any C string. So, for example, strlen("A\0B") equals one and not three, as one would usually expect.
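
A minimal, self-contained example (plain standard C) makes the problem tangible:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* three data bytes "A", NUL, "B" -- plus the implicit terminating NUL */
    const char buf[] = "A\0B";

    printf("strlen() sees %zu byte(s)\n", strlen(buf)); /* prints 1 */
    printf("sizeof yields %zu bytes\n", sizeof(buf));   /* prints 4: A, NUL, B, NUL */
    printf("%%s prints '%s'\n", buf);                   /* prints just 'A' */
    return 0;
}

Every routine that relies on NUL termination silently stops at the embedded NUL, which is exactly the problem described above.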

CERT has a good presentation of some of the more important problems associated with the standard C string handling functions. I do not intend to reproduce this here or elaborate on any further details, except to note that we get into big trouble if NUL characters are used, e.g., in logging data sets. We had this problem in the IETF syslog WG, where we permitted NUL to be part of the syslog message text, but permitted a receiver to escape it. This is far from an ideal solution, but we considered it good enough, especially as it keeps compatibility with existing toolset libraries.

Now, in CEE, we face the same challenge: the question is whether the in-memory representation of event fields should permit NUL characters. The correct technical answer is "yes, of course", but unfortunately it has a serious drawback that can affect adoption: if NULs are permitted, none of the string handling functions of the C runtime library can be used. This is, as said above, because the C runtime library is not able to handle NULs inside "standard" C strings. A potential solution would be to escape NULs when they enter the system. However, that would require an additional reserved character to do the escaping. So in any case, we would end up with a string that is different from what the "usual" runtime library routines expect.

Of course, this problem is not new, and many solutions have already been proven. The obvious solution is to replace the standard C string handling functions with octet-counting functions that do not require any reserved characters.
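
The core idea behind such octet-counting replacements is simple enough to sketch in a few lines of C (this is a conceptual illustration only, not the API of libestr or of any particular replacement library):

#include <stdlib.h>
#include <string.h>

/* a counted string: the length is stored explicitly, so NUL is just data */
struct cstr {
    size_t len;
    char  *buf;      /* holds len bytes; NOT NUL-terminated by contract */
};

static struct cstr *cstr_new(const char *data, size_t len)
{
    struct cstr *s = malloc(sizeof *s);
    if (s == NULL)
        return NULL;
    s->buf = malloc(len);
    if (s->buf == NULL) {
        free(s);
        return NULL;
    }
    memcpy(s->buf, data, len);   /* copies embedded NULs without truncation */
    s->len = len;
    return s;
}

static size_t cstr_len(const struct cstr *s) { return s->len; }

With such a representation, the three bytes of "A\0B" survive intact, but every operation on the string must go through length-aware functions; any detour through a NUL-terminated interface silently truncates the data again.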

A short, unweighted list of replacement string libraries is:

Note that some of them try to mimic standard C strings as part of their API. I consider this highly dangerous, because it provides a false sense of security. While the library can now handle strings with embedded NUL characters (like "A\0B"), everything after the first NUL will be discarded if the string is passed to a "regular" C runtime library string function (like printf!). So IMO this is a misfeature. A replacement library must explicitly avoid compatibility with the C runtime library in order to save the user from subtle issues, many of them resulting in security problems (think: information hiding).

Unfortunately, I could not identify any globally accepted string replacement library that is in widespread use. Despite its deficits, C programmers tend to use the plain old string functions present in the standard C runtime library.

So we are back to the original issue:

If CEE supports NUL characters inside strings, the C standard string library cannot be used, and there are also problems with a potentially large number of other toolsets. This can lead to a low acceptance rate.

But if CEE forbids NUL characters, data must be carefully checked when it enters the system. Most importantly, a string value like "A\0B" must NOT be accepted when it is handed in via an API. Experience tells us that implementors sometimes simply overlook such restrictions. So this mode opens up a number of very subtle bugs, including security issues.

I am very undecided about which route is best. Obviously, a sound technical solution is what we want. However, the best technical solution is irrelevant if nobody actually uses it. In that light, the second-best solution might be better. Comments, anyone?