On Executing a Ruleset in non-SIMD Mode

WARNING – WORK IN PROGRESS
This is probably inconsistent, not thought out, maybe even wrong. Use this information with care.

I wanted to share a complexity of executing an rsyslog ruleset in non-SIMD mode, which means that messages are processed one after another. This posting is kind of a think tank and may also later be useful for others to understand the design and design decisions. Note, however, that I do not guarantee that anything will be implemented as described here.

As you probably know, rsyslog supports transactions and, among other things, uses them to gain speed. This is supported by many modules, e.g. database modules or the tcp forwarder. In essence, what happens is:
  • transaction is begun
  • messages are fed to the action
  • transaction is committed

For most operations, success/failure information is only available after step 3 (because step 2 mainly buffers to-be-processed messages).
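To make these three steps more tangible, here is a minimal C sketch of how an output might batch messages between begin and commit. This is purely illustrative and not the actual rsyslog output-module API; all names are invented. The point it demonstrates is that the submit step merely buffers, so the real outcome only becomes known at commit time.

    /* Illustrative only -- not the real rsyslog plugin interface. */
    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_BATCH 128

    typedef struct {
        const char *buf[MAX_BATCH]; /* buffered, to-be-processed messages */
        int n;
    } txn_t;

    static void txn_begin(txn_t *t) { t->n = 0; }       /* step 1 */

    static void txn_submit(txn_t *t, const char *msg)   /* step 2: buffer only */
    {
        if (t->n < MAX_BATCH)
            t->buf[t->n++] = msg;
    }

    static int txn_commit(txn_t *t)                      /* step 3: real outcome */
    {
        for (int i = 0; i < t->n; ++i) {
            /* e.g. write to a database or TCP socket; may fail as a whole */
            if (printf("%s\n", t->buf[i]) < 0)
                return -1;   /* only now do we learn about failure */
        }
        t->n = 0;
        return 0;
    }

    int main(void)
    {
        txn_t t;
        txn_begin(&t);
        txn_submit(&t, "msg 1");
        txn_submit(&t, "msg 2");
        return txn_commit(&t) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }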

This plays very well with the current SIMD engine, which advances processing one statement after the other and then processes the batch as a whole. That means all three steps are done at one point, when the relevant statement is processed. This permits the engine to read the final result before advancing state to the next action.

Now envision a non-SIMD engine. It needs to start the transaction as soon as the first message is ready for processing and then submit messages as they are processed. However, in non-SIMD mode, each action will only carry out step 2 and then immediately advance to the next statement. This happens for each message. Only after ruleset processing is completed can the final commit occur.
This means that we do not have the actual outcome of an operation until ruleset processing has finished. Most importantly, the result of statement n is no longer available in statements s with s > n. Some features (like failover processing via execIfPreviousIsSuspended) depend on the previous statement's result, so this is a problem. Even more so, retrying an action is far from trivial, as by the time processing is done we no longer necessarily have the exact same information that was gathered during processing (later statements may have changed message state).
There are some solutions to these problems:
  1. If execIfPreviousIsSuspended is specified, the statement in front of it must be forced to commit after each message (which costs quite a lot of performance, but at least only if the user turns on that feature).
  2. To mitigate the performance loss of those auto-commits, we could add a new syntax which explicitly permits specifying failover actions. These could be linked to the primary statement and NOT be executed during regular rule engine execution. We would still need to buffer message pointers for the case that they are to be executed, but that's probably better. Most importantly, no other conditional logic would be supported, which makes processing them rather quick.
  3. The message object must support a kind of “copy on write” (which would even be very useful with the v7 engine, which also permits more updates than any pre-v8 engine was ever designed for…). This could be done by splitting the (traditional) immutable part of the message structure from things like variables; see the sketch after this list. Message modification modules modifying the “immutable” part would need to do a full copy, others would not (the usual case). Of course, this copy on update can make variable operations rather costly.
  4. Output modules could be forced to perform a kind of “retry commit”. But this is a bad option because it a) puts a (repetitive) burden on the output (in essence, the output faces exactly the same problems as the core engine, EXCEPT probably that it knows better which exact data items it needs, which is easy for the traditional template-based interface) and b) removes from the engine the ability to retry parts of the transaction. So this is not very appealing.
  5. In any case, the actual “action retry handling” should probably be applied to the commit interface rather than to the usual submit interface.
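To illustrate the copy-on-write idea from item 3, here is a hypothetical C sketch; the types and names are invented for this posting and are not taken from the rsyslog code base. The “immutable” message core is shared and reference-counted, while the variable part is copied per writer; only a module that modifies the core pays for a full copy.

    /* Hypothetical copy-on-write sketch; not actual rsyslog structures. */
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        int refcnt;
        char rawmsg[1024];   /* traditional, rarely modified message core */
    } msg_core_t;

    typedef struct {
        msg_core_t *core;    /* shared until someone must modify it */
        char vars[256];      /* per-instance variable state (simplified) */
    } msg_t;

    static msg_t *msg_clone_for_write(const msg_t *src, int modifies_core)
    {
        msg_t *m = malloc(sizeof(*m));
        if (m == NULL)
            return NULL;
        memcpy(m->vars, src->vars, sizeof(m->vars));
        if (modifies_core) {
            /* full copy: a message modification module touches the
             * "immutable" part -- the expensive, unusual case */
            m->core = malloc(sizeof(*m->core));
            if (m->core == NULL) {
                free(m);
                return NULL;
            }
            memcpy(m->core, src->core, sizeof(*m->core));
            m->core->refcnt = 1;
        } else {
            /* the usual case: just take another reference on the core
             * (a real implementation would need atomic refcounting) */
            m->core = src->core;
            m->core->refcnt++;
        }
        return m;
    }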

What complicates things even more is that we do not know what modules that use the message passing interface actually do with the messages. In current code, most of them are message modification modules. This means that in order for them to work correctly, they actually need to execute on the message “right in time”. And, of course, there is even more complexity in that each output may do partial commits at any time. The most common case probably is when it runs out of some buffer space.

To solve these issues, a two-step execution inside the rule system seems desirable:

  • execution phase
  • commit phase

Note that this two-phase approach is already very similar to what action queues do. However, this also means that action queues in pre-v8 can fall victim to race conditions if variables are heavily used.

In any case, using action queues to perform these two steps seems very natural and desirable. Unfortunately, there is still considerable overhead attached to this (mutex operations, required context switches), which makes it very unattractive. The end result of taking this path would probably be a reduced overall processing speed, something we really don't like to see. Also, failover processing would not work if following that path.

Execution Phase

  1. advance message state – message modification (mm) modules must be called immediately
  2. “shuffle” msgs to actions – the main concern here is to make sure that the action sees an immutable message object, at least in regard to what it needs from it (we may need to add an entry point to ask the action to extract whatever it actually needs and return a pointer to that – not necessary for simple string templates, to name a prominent example).
    Note that doAction is never called for non-mm modules during this phase.

At the end of execution phase, for each action we have an array of to-be-processed data.
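As a rough illustration of what such an array of to-be-processed data could look like, here is a small C sketch, again with invented names: during the execution phase, each non-mm action just stores a copy of the data it will later need (here, an already-rendered template string), so nothing else about the message has to be retained for it.

    /* Illustrative per-action buffer built during the execution phase. */
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char **items;    /* to-be-processed data, one entry per message */
        size_t n, cap;
    } action_batch_t;

    /* store a copy of the already-rendered template output for later commit */
    static int batch_add(action_batch_t *b, const char *rendered)
    {
        if (b->n == b->cap) {
            size_t newcap = b->cap ? b->cap * 2 : 16;
            char **p = realloc(b->items, newcap * sizeof(*p));
            if (p == NULL)
                return -1;
            b->items = p;
            b->cap = newcap;
        }
        b->items[b->n] = strdup(rendered);
        if (b->items[b->n] == NULL)
            return -1;
        b->n++;
        return 0;
    }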

Commit Phase
For each action that has data buffered, we submit that data to the action, performing all three transaction steps. This way, we can easily keep track of state advancement and action errors. It would be easy to implement dedicated failover processing at this stage (but this probably requires larger state info if the failover action differs from the primary one).
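Continuing the hypothetical txn_t and action_batch_t types from the sketches above, the commit phase for a single action might look like the following. Because all of an action's data is replayed in one place, retries, and the dedicated failover handling mentioned above, become straightforward.

    /* Sketch only; builds on the txn_t and action_batch_t sketches above. */
    static int commit_action(txn_t *primary, txn_t *failover,
                             const action_batch_t *b)
    {
        txn_begin(primary);                       /* step 1 */
        for (size_t i = 0; i < b->n; ++i)
            txn_submit(primary, b->items[i]);     /* step 2 */
        if (txn_commit(primary) == 0)             /* step 3: outcome known here */
            return 0;

        /* primary failed: replay exactly the same buffered data against a
         * failover action, if one was configured */
        if (failover != NULL) {
            txn_begin(failover);
            for (size_t i = 0; i < b->n; ++i)
                txn_submit(failover, b->items[i]);
            return txn_commit(failover);
        }
        return -1;
    }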

This two-phase approach somewhat resembles the original batching/SIMD idea of the pre-v8 engine. So it looks like that design was already quite close to what we need to do. I am still a bit undecided whether these engine changes are really worth it, but so far code clarity seems to be much better. Performance possibly as well, as the SIMD engine needed to carry a lot of state around, too.

I will now probably do a test implementation of the two-phase approach, albeit only for the traditional string interface.

Some ideas/results from the test implementation:

  • The structure used to store messages could -LATER- be made the structure that is actually queued in action queues, enabling faster (in-memory) performance and unified code.
  • Note: storing the rendered string template output (instead of the message object) between phases has the advantage that we do not need to keep the message immutable for this reason. It needs to be seen, though, whether that really is an advantage in the overall picture (the question is whether we can get to a point where we actually do NOT need to do copy-on-write; obviously this would be the case if only string templates are used).