ACLs, imudp and accepting messages

I am working again on moving DNS name resolution out of the input thread for those sources where it is potentially time-consuming and affects message acceptance rates. As it turned out, imudp currently seems to be the only such case.

While this is potentially easy to do, a problem is ACLs ($AllowedSender) that use host names rather than IP addresses. In order to check these ACLs, we need to do a DNS lookup. Especially in the case of UDP, such a lookup may actually cause message loss and thus may be abused by an attacker to cause a certain degree of denial of service (which also shows that these types of ACLs are not really a good idea, even though practice demands them).

In the light of this, I will now do something that sounds strange at first: I will always accept messages that require DNS lookups, enqueue them into the main queue, and do the name resolution AND the final name-based ACL check only on the queue consumer side. Please note that this will be done BEFORE the message content is parsed, so there is no chance that buffer overflow attacks can be carried out from non-authenticated hosts. The core idea is to move the lengthy, potentially message-loss causing code away from the input thread. The only questionable effect I can currently see is that queue space is potentially taken up by messages which will immediately be discarded and should not be there in the first place. At the extreme end, that could lead to loss of valid messages. But on the other hand, valid messages are more likely to be lost due to the DNS query overhead if I do the ACL check directly in the input thread.
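To make the intended consumer-side processing order concrete, here is a minimal sketch; all type and function names in it are illustrative assumptions, not rsyslog's actual internal API:

typedef struct msg msg_t;              /* opaque queued message */

void resolveSenderName(msg_t *pMsg);   /* the potentially slow DNS lookup,
                                        * now off the input thread */
int  isAllowedSender(msg_t *pMsg);     /* name-based $AllowedSender check */
void discardMsg(msg_t *pMsg);
void parseMsg(msg_t *pMsg);
void processMsg(msg_t *pMsg);

void consumeMsg(msg_t *pMsg)
{
    resolveSenderName(pMsg);
    if(!isAllowedSender(pMsg)) {
        discardMsg(pMsg);              /* dropped before any content parsing */
        return;
    }
    parseMsg(pMsg);                    /* parsing happens only for permitted senders */
    processMsg(pMsg);
}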

If anyone has an argument against this approach please let me know.

A solution for invalid syslog message formats…

In syslog, we traditionally have a myriad of message formats, causing lots of trouble in real-world deployments. There are a number of industry efforts underway trying to find a common format. To me, it currently does not look like any of them has gained the momentum needed to become “the” dominating standard, so it looks like we will need to live with various presentations of the same information for some more time.

Over the past two weeks, I have begun to make additions to rsyslog that will hopefully help improve this unfortunate situation. I know that I have no real cure to offer, but at least baby steps toward one. I have introduced so-called message parsers, which can be used to convert malformed messages into rsyslog’s well-formed internal structure.

Why is it not a solution? Because what I really introduced is an interface that permits writing different parsers for the myriad of devices. I have not provided a generic solution, so the individual parsers still need to be written. And secondly, I have not yet defined any standard properties beyond those specified in the recent IETF syslog RFC series, most importantly RFC5424.
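To illustrate the idea, here is a rough sketch of what such a parser interface could look like; the structure name and function signatures are my illustration, not rsyslog's actual plugin API:

#include <stddef.h>

typedef struct msg msg_t;      /* rsyslog's well-formed internal message object */

typedef struct msgParser {
    const char *name;          /* e.g. "rfc5424" or a device-specific parser */
    /* quick probe: can this parser make sense of the raw octets? */
    int (*canHandle)(const unsigned char *raw, size_t len);
    /* convert the raw octets into the internal structure; 0 on success */
    int (*parse)(const unsigned char *raw, size_t len, msg_t *pMsg);
} msgParser_t;

/* the engine would try registered parsers in turn until one accepts the message */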

So why do I hope this will lead to a long-term solution?
First of all, there are some hopes that the IETF effort will bring more standard items. Also, we could embed other specifications within the RFC5424 framework, so this could become the lingua franca of syslog message content over time. And secondly, I hope that rsyslog’s popularity will help in getting parsers at least for core RFC5424 information objects, which would be the basis for everything else. Now we have the capability to add custom parsers, and we have an interface that third parties can develop to (and do so with relative ease).

All in all, I think this development is a huge step in the right direction. The rest will be seen by history ;) To me, probably the most interesting question is whether we will actually attract third-party developers. If there are any, I’ll definitely help get them going with the rsyslog API.

next round of performance enhancement in rsyslog

Today, I made a very important change to rsyslog: rulesets now can have their own “main” queue. This doesn’t sound too exciting, but can offer dramatic performance improvements.

When rsyslog was initially created, it followed the idea that messages must be processed in the order they were received. To facilitate that, all inputs submitted messages to a single main message queue, from which the processing took place. So messages stayed in reception order. … Well, actually they stayed in “enqueued order”, because it depended on the OS scheduler whether input modules could really enqueue messages in the order they received them. If, for example, input A received two messages but was preempted by module B’s message reception, B’s data could hit the queue earlier than A’s. As rsyslog supported more and more concurrency, the order of messages became ever less important. The real cure for ordered delivery is to look at high-precision timestamps and build the sort order based on them (in the external log analyzer/viewer).

So, in essence, reception order never worked well, and the requirement to try to keep it has long been dropped. That also removed one important reason for the single main message queue. Still, it is convenient to have a single queue, as its parameters can be set once and for all.

But a single queue limits concurrency. In the parallel processing world, we try to partition the input data as much as possible so that the processing elements can independently work on the data partitions. All data received by a single input is a natural data partition. But the single main queue merged all these partitions again, and caused performance bottlenecks via lock contention. “Lock contention”, in simple words, means that threads needed to wait for exclusive access to the queue.

This has now been solved. Today, I added the ability to create ruleset-specific queues. In rsyslog, the user can decide which ruleset is bound to which inputs. For a highly parallel setup, each input should have its own ruleset, and each ruleset should define its own “main” queue. In that setting, inputs no longer block each other during queue access. On a busy system with many inputs, the results can be dramatic. And, more as a side-effect, each ruleset is now processed by its own dedicated rule processing thread, fully independent of the others.
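A small sketch may help show why this removes the contention; the structures below are simplified illustrations, not the actual rsyslog data structures:

#include <pthread.h>

typedef struct queue {
    pthread_mutex_t mut;       /* guards enqueue/dequeue on this queue only */
    /* ... message storage ... */
} queue_t;

typedef struct ruleset {
    queue_t *pQueue;           /* each ruleset may now own its own "main" queue */
} ruleset_t;

/* every enqueue locks only the queue of the ruleset the input is bound to;
 * inputs bound to different rulesets therefore never wait on the same mutex */
void enqueue(queue_t *q /* , msg_t *pMsg */)
{
    pthread_mutex_lock(&q->mut);
    /* ... add the message to q's storage ... */
    pthread_mutex_unlock(&q->mut);
}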

This design offers a lot of flexibility. But that is not enough. The next step I plan is to create the ability to submit a message to a different ruleset during processing. That way, hierarchies of rulesets can be created, and these rulesets can even be executed via separate thread pools, with different queue parameters and in full concurrency. And the best thing is that I currently think it will not be very hard to create the missing glue.

The only really bad thing is that the current configuration language is really not well-suited to handle that complexity (“really not” is not a typo for “not really”…). But I have no alternative but to take this route again, until I finally find time to create a new config language. The only good thing is that I am getting a better and better understanding of what this new language must be able to do, and it looks like my initial thoughts were not up to what is now required…

Canonical Paper on RSyslog

I just found out that Canonical (the company behind Ubuntu) wrote a nice paper on rsyslog, which also explains why Ubuntu chose rsyslog as its default syslogd.

The paper is well-written and well-researched, but rsyslog has also evolved while the paper was being written. So, in fact, it now offers even more features than described there.

And, obviously, I am glad to see Ubuntu move to rsyslog as well.

Another note on hard-to-find-bugs…

Before I began to write this blog post, I realized how long it had been since I last wrote anything! I promise to write in a more timely manner again, but the past weeks were merely a consolidation phase, ironing out bugs from the new releases.

I’d like to elaborate on one of these, one that really drove me crazy over the past few days. The problem was that omfile’s directory creation mode was sometimes set to zero (well, almost always in some configurations). What began as a minor nit turned into a real nightmare ;)

The issue was that the variable fDirCreateMode was always set to zero, except if it was written to, or simply displayed, at the start of module initialization. That sounded strange, but it seemed even stranger that moving the variable definition around in the source code (and thus presumably changing its memory location) changed nothing. So I came to a point where I used this code as a patch:

omfile.c from rsyslog git

Look at line 769. With that seemingly unrelated write, the variable stayed as expected. However, if I moved the write into a function, it no longer worked. Strange… After committing the patch, testing showed that the directory permissions now worked well BUT the file create mode now misbehaved in the same way.

I was stunned – and speechless. What followed were intense debugging sessions. I ended up finding the commit that introduced the problem, but still could not see why that commit would affect anything. After hours of debugging, I ended up with a stripped-down and almost codeless omfile, which still had the same problem. And it appeared and disappeared almost at random as code lines were moved in and out.

I once again checked the git history and then noticed that a few commits down the line, I had introduced another config variable for the I/O buffer size. Now I finally had the idea. The size-type config directives were introduced for file size restrictions, for which a regular 32-bit integer is not sufficiently large. Consequently, their config handler expects a pointer to a 64-bit integer. But, of course, I had provided only a pointer to a 32-bit int, so the config handler overwrote another 32 bits that happened to be adjacent to the address I provided.
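In essence, the bug boils down to something like the following sketch (a deliberately simplified illustration, not the actual omfile code; variable placement and the exact effect depend on the compiler):

#include <stdint.h>
#include <stdio.h>

static int32_t iIOBufSize     = 4096;  /* should have been a 64-bit "size" type */
static int32_t fDirCreateMode = 0700;  /* may happen to sit right next to it */

/* a size-type config handler expects a pointer to a 64-bit integer */
static void setSizeDirective(int64_t *pVal, int64_t newVal)
{
    *pVal = newVal;                    /* writes 8 bytes */
}

int main(void)
{
    /* the faulty registration handed it a pointer to a 32-bit variable */
    setSizeDirective((int64_t *)&iIOBufSize, 65536);
    /* on a little-endian system, the "extra" 32 bits can land in the
     * neighboring variable and zero it out, which is exactly the symptom seen */
    printf("fDirCreateMode is now 0%o\n", (unsigned) fDirCreateMode);
    return 0;
}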

This was clearly an error. But could it explain the behavior I saw? Not really… But the problem went away once I had corrected the issue. So I began to suspect that the compiler had re-ordered variables in memory in order to optimize access to them (maybe to get a better cache hit rate or whatever else). But hadn’t I compiled with -O0, so that no optimization should take place? I checked and realized that, due to a glitch in my lab setup, optimization was actually on, not turned off! So now I think I can explain the behavior, and theory as well as practice go hand in hand.

Really? What about the write and the debug print that made everything work? I guess these changes triggered some compiler optimization, the memory layout changed, and so the “extra” 32 bits pointed to some other variable. That also explains why the file creation mode was affected by my change, as well as why the bug reacted quite randomly to my test code changes.

So it looks like I finally resolved this issue.

Lessons learned? Re-check your lab environment, even if it has always worked well before. Be careful with assumptions about memory layout, as the optimizer seems to reorder variables heavily, and even single statements and statement sequences seem to make a big difference. I knew the compiler reorders things, but I did not have this clearly enough in mind to become skeptical about my lab setup.

And, as always, some assumptions limit your ability to really diagnose what is going on… May this be a reminder not only for me (I wonder how long it will last) but for others as well (which is why I thought a blog post made sense ;)).

rsyslog – what’s next?

I have not blogged much in recent weeks. I have had my nose deep inside the rsyslog code, adding new features (like an automatic zip file writer or the ability to spoof/forge UDP sender addresses) and enhancing performance (where I think I scored some major points ;)).

So it is time for an update. Where is rsyslog heading? With the many changes I made in the past two to three months, I think it is very important to let the code base stabilize. So I would prefer not to touch too much existing code for a while. Also, it is summertime and my summer vacation is not far away, which is another good argument that this is probably not the best time for big code changes (or do I like to break things before I go away…? ;)).

So, looking at what to do next, I would like to focus on improving the tool set that helps create rsyslog. That doesn’t mean direct improvements to the actual syslogd, but rather to the tools that help build and maintain it. The first major effort in this regard was adding an automated testbench. If you look at v3, I think it has around four automated tests (previous versions had none). With v5, we have over 20 subtests, each of which tests various cases, so in total we currently have around one hundred test cases automatically covered.

When I started with this in v3, it was a major effort, even though the number of covered cases was rather small. But getting started with a testbench meant I needed to evaluate ways to automate the tests and create them in the best possible way for rsyslog (which also means they must be convenient to use during development). At that point, I tried a lot of things and finally came up with the current set of tests. The initial testbench covered only a very limited set of use cases.

Since then, it has greatly improved, but there are still a lot of uncovered areas. But I now regularly add new tests, most often when I implement new features, change existing ones or hunt bugs. The process is now well understood and many tests can be added with relative ease (but not all of them; I have some test cases in the queue that require notable extensions to my current system, plus a bit … of the different toolset I will be talking about soon…).

Initially, I was rather skeptical whether the testbench would really pay off, especially after I saw the initial effort required (which I underestimated by far). But in the meantime I have become convinced. Especially the past couple of months have shown that the automated tests increase both development productivity (by reducing the number of manual tests that need to be done and spotting regressions early) and code quality (by detecting regressions that would otherwise have been overlooked).

Now I am in a similar situation in regard to performance testing as I was in regard to correctness testing: everything is done manually and with very low-level tools. Still, I was able to make good progress without tools. But I hope that tools will be as useful for performance testing as they were for bug hunting. Most importantly, my current performance testing covers only limited (though highly relevant) scenarios: those where getting sufficiently reliable numbers is possible with the limited capabilities I have. In particular, this means that almost all testing so far has been done with plain TCP syslog. While this still allows checking the core engine’s performance, it does not offer a clear view of e.g. UDP performance (which I really do not have right now). Also, the examples are artificial, and it would be useful to get more of a real-world performance benchmark.

Finally, performance benchmarks stress the engine, especially its multi-threading capabilities. So performance testing is also a good way to uncover those nasty threading bugs that one otherwise only detects when systems fail in production (and nobody then knows why…). So I also consider decent performance tests to be a plus for code quality. I even consider them very important for stabilizing e.g. the v5 engine, which so far has received only limited attention in practice. It looks like almost nobody has ever tried it. I know, because the initial v5 release had such a big memory leak that any serious tester would have come up and complained very quickly. A lack of test deployments makes it harder to mature the engine. I think that good stress tests (which all have a performance connotation) will help to somewhat mitigate this problem. As a side note, many of the bugs I have fixed were uncovered during my manual performance testing. This seems to prove the point.

So I am more or less convinced (if nothing more urgent shows up) to spend some time implementing performance tools and tests for rsyslog. I would also like to include a somewhat older idea of a “diagnostic front-end” that would be able to pull (and maybe modify) some of rsyslog’s parameters. I expect that as a side activity I’ll also gather performance data (at a minimum) or improve performance (preferably) in a couple of areas (UDP reception performance is at the top of the list). But improvements will only come after the basic tools have been written.

As with the testbench, that will mean that new features and enhancements will probably stall a bit in the coming weeks. This even more so as I do not intend to write the front-end in C (I personally do not consider C to be the language of choice for non-performance-critical interactive programs, especially looking at some of the portability issues – but YMMV…). So I will try to approach this with a Java app. I have to admit that I learned Java 8 to 10 years ago and never programmed much in it, so that will probably mean I’ll need to re-learn the language. But as I don’t consider this GUI to be something extremely critical, I don’t see any issues with me as a Java freshman doing it.

As a side note, I should probably mention that I am also involved in the phpLogCon project. So far, I am only part of the design team, but I have a number of really cool visualization features on my personal wish-list. If I ever get time to work on that (I hope for next year), I will probably need to do it in Java, so it doesn’t hurt to practice on a less demanding project. In that sense, I also hope to be able to set the stage for some future cool technology while I work on a current demand ;) It would also be interesting to visualize some of the performance counters, but that’s another story ;)

All in all, getting an interactive troubleshooting and analysis front-end has big potential, not only for testing but also for deployments and finding configuration bugs (which become more and more of an issue with the increasing complexity of the configuration). One could also envision that it could include a graphical configuration builder … as well as tools for setting up all those TLS certs. I don’t think I can do all of this now or in the next quarter. But I think now is the right time to begin working on a foundation that offers yet another big potential. Especially as it also serves the urgent need to get better testing for the engine, plus the desire to further improve its performance (my goal is no less than to provide the by-far fastest AND most reliable syslogd on this planet ;)).

Well, that’s it for now. I hope you like the idea of an additional performance-centric toolset (which of course also requires engine enhancements) and a GUI as much as I do. If you have comments or concerns, please let me know. I sincerely hope to begin a new round of capability enhancements with this move.

high-performance, low-precision time API under Linux?

This time, I raise a question in my blog. Suggestions, tips and full answers are very welcome.

In rsyslog, there are various situations where I only need low-resolution timestamps. By low resolution, I mean precise to within a second. Of course, this is provided by the time() API. However, time() is very slow – far too slow for many things I do in rsyslog. So far, I have been able to work around this problem by doing a time() call only every n-th time in places where I run in tight loops and know that this will not take me outside of my one-second window (well, to be precise, this is at least very unlikely and thus acceptable).
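The workaround looks roughly like this (a simplified sketch; the names are mine for illustration, not the actual rsyslog code):

#include <time.h>

#define TIME_REQUERY 16            /* how many iterations share one timestamp */

static time_t cachedTime;
static int    iCallsSinceQuery = TIME_REQUERY;   /* force a query on first use */

time_t getApproxTime(void)
{
    if(++iCallsSinceQuery >= TIME_REQUERY) {
        time(&cachedTime);         /* the expensive call, done only rarely */
        iCallsSinceQuery = 0;
    }
    return cachedTime;             /* may lag, but stays within the one-second
                                    * window as long as the loop is tight enough */
}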

However, this approach does not work for everything I am doing. Now I am facing the challenge once again, but this time in an area where the “query only every n-th time” approach does not work. I need the time in order to schedule asynchronous activities (like writing so-far unwritten buffers to disk). With them, there is no tight loop that provides me with some sense of timing, and so I simply do not know if half a second or half an hour has elapsed between calls – except when I do one of these costly time() calls.

A good workaround would be to define my own interval timer, waking me e.g. every second. Then I would not need absolute time but could do things based on these timer ticks. However, there is a lot of evil in this approach, too. Most importantly, it means rsyslogd will be active whenever the system is up, and running on a tick will prevent the operating system from switching the CPU to power-saving modes. So this option looks very dirty, too.

So what to do now? Is there any (decently portable) way to get a second-resolution current timestamp (or a tick counter) without actually running on a tick?

If I don’t find a better solution, I’ll probably be forced to run rsyslogd on a tick, which would not be a good thing from a power consumption point of view.

As I already said, feedback is greatly appreciated…

Edit: in case my description was a bit unclear: it is not so important that the timestamp is of low resolution. Of course, I prefer higher resolution, but I would be OK with lower resolution if that is faster.

The problem with time() and gettimeofday() is that they are quite slow. As an example, I can only do around 250,000 time()/gettimeofday() calls per second on my current development system. So each API call takes around 4 microseconds on that system. While this may not sound like much, it adds considerable runtime to each message being processed – especially if multiple calls are required due to the modular structure.

I have also thought about a single “lowres system time getter” inside rsyslog. However, that brings up problems with multi-threading. If one wants to be on the safe side, its entry points need to be guarded by mutexes, another inherently slow operation (depending on circumstances, the overhead can be even worse than time()). With atomic operations, things may improve. But even then, we run into the issue that we do not know if the last call was half a second or half an hour ago…
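For illustration, an atomics-based variant of such a getter could look like the sketch below (using C11 atomics; the names are mine, and it still does not answer the question of how long ago the cached value was last refreshed):

#include <stdatomic.h>
#include <time.h>

static _Atomic time_t lowresTime;   /* shared, cached second-resolution time */

/* called by whichever thread has just paid the price of a real time() call */
void lowresTimeUpdate(void)
{
    atomic_store_explicit(&lowresTime, time(NULL), memory_order_relaxed);
}

/* cheap, lock-free read for everybody else */
time_t lowresTimeGet(void)
{
    return atomic_load_explicit(&lowresTime, memory_order_relaxed);
}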

Another edit:
This is a recording from a basic test I did on one lab system:


[rgerhards@rf10up tests]# cat timecaller.c
#include <stdlib.h>
#include <time.h>
#include <sys/time.h>

int main(int argc, char* argv[])
{
    time_t tt;
    struct timeval tp;
    int i;

    for(i = 0 ; i < atoi(argv[1]) ; ++i) {
        // time(&tt);
        gettimeofday(&tp, NULL);
    }
}
[rgerhards@rf10up tests]# cc timecaller.c
[rgerhards@rf10up tests]# time ./a.out 100000

real 0m0.309s
user 0m0.004s
sys 0m0.294s

The runtime for the time() call is roughly equivalent (especially given the limited precision of the instrumentation). Please also note that we identified the slowness of the time() calls in autumn 2008, when doing performance optimization with the help of David Lang. David was the first to point to the time-consuming time() calls in strace. Reducing them made quite a difference.

Since then, I have tried to avoid time() calls at all costs.

introducing rsyslog v5

A new v5 version of rsyslog will be released today. Originally, I did not plan to start the v5 version before the end of the year (2009). But then we received sponsorship to enhance queue performance. And then we saw that an audit-grade queue subsystem was needed (audit-grade means that no message is ever lost, not even in fatal failure cases like sudden power loss).

The audit-grade queue subsystem in particular resulted in very large design changes to the queue engine. Their magnitude is so large that I assume we will need some time to stabilize it. Thus, I have decided to start a new v5 branch, which will feature the redesigned queue engine.

When we introduced the queue engine in early 2008 (in rsyslog v3), it took roughly three to five months until it got decently stable. With the magnitude of the changes we have made now, it will probably take some time again. It depends a bit on the actual feedback we receive from practice. Also, this time I have added lots of automated tests, so a lot of bugs should already have been caught. Also, during the next weeks I will focus on actual deployment scenarios, rather than things that may theoretically happen (the testbench covers many of those). So, all in all, I expect that the new queue engine will become production-ready faster than the v3 engine did.

Still, I think it is desirable to create a new major version branch for this change. So here we are, at v5. I will continue to develop functionality that does not necessarily need the new queue engine inside v4-devel. That way, we will have this functionality available both with the proven queue engine as well as with the new experimental one. Note that I cannot do this with a stable branch: by definition, stable branches never receive enhancements (as that would potentially destabilize the branch). So, for the time being and probably for a couple of months, we will have two development branches: the v4 as well as the v5 branch. With that, v5 will focus on the new queue engine plus any other additions that are made in v4.

rsyslog queue enhancements – status report

I thought I’d post a few thoughts about how far the rsyslog queue enhancements have evolved.

We started with the goal of increasing performance, especially for database outputs. As part of that endeavor, we designed and implemented message batches as the new processing entity. This approach was suggested by David Lang, who also offered very valuable feedback, suggestions and review of the relevant papers (not to mention actual testing) during the process. Then, we came to the conclusion that we needed a truly ultra-reliable queue, one that does not even lose messages in case of a sudden fatal failure (like a power failure without a UPS – or a failing UPS!). That led to further redesign and a lot of design work. All of this is very exciting.

Since last Friday, I have been working on the actual code. I have now updated the queue, the queue storage drivers and action processing. Most importantly, the rsyslog testbench once again runs successfully, even in DA queue mode. There are still a couple of things that need to be looked at, but I think most of the bulk work is done. What follows now is a careful look at the open issues plus a LOT more testing.

The testbench has improved much in the past three months, but it is still far from covering even the most important code areas. The various queueing scenarios in particular are not very well covered by it, mostly because it is rather complex to do so. Anyhow, I will now try not to do so many ad-hoc manual tests but rather see whether I can create more automated tests. While this is a lot more work, even the current testbench has proven extremely valuable during this major code change effort (which, let me re-iterate, is far from being fully completed). Without it, it would have been much harder to find the bugs that came up during the testbench runs. I think the time I invested in it has already paid off.

Let me end with a list of things I need to look at. That will at least help keep me focused and let you know what is extremely weak right now:

  • more tests
  • so far, the last batch is not freed until at least one more message comes in (permit DeleteProcessedBatch() to be called de-coupled)
  • cancel processing cleanup, decision if we should still support cancel processing entry points
  • configured discarding of messages on queue-full condition [at least add extra nElem counter]
  • make output actions support message-permanent failures (at least PostgreSQL output plugin) [also think about test cases for this]
  • double-check of action and action unit state processing
  • persisting of messages from memory queues during shutdown (testing)
  • Think about a new way of handling iDeqSlowdown (maybe during batch processing?)

ultra-reliable queueing in rsyslog

As part of the ongoing mailing list discussion on ultra-reliable queueing in rsyslog, I’d like to create another blog post from the discussion content (again, I hope this reference will be handy in the future).

The key point with ultra-reliable queues is that no message can be lost once it has been enqueued. In the current (v2, v3, v4 <= 4.1.2) releases of rsyslog, this is ensured as long as the system is guarded against a sudden loss of power (or similar disaster), and even then all but the most recently dequeued messages are safe.

To make queue operations ultra-reliable in that case, the queue needs to be run as a pure disk queue and a checkpoint interval of one needs to be used. This makes the queue reliable at the expense of performance. Note also that with a disk queue, only a single queue worker is permitted.

Now let’s look at a simplified scenario:

input -> queue -> output

This is not correct in that inputs never connect directly to outputs, but this detail is irrelevant for what I intend to say (replace “input” by “producer” and “output” by “consumer” if you’d prefer to have a fully consistent version).

Let’s say processing time is the cost we incur. If we look at it, the cost of the ultra-reliable disk queue dominates by far the combined cost of input and output. In most cases it dominates input+output cost so much that you can express the total cost as just the cost of the queue operation, without looking at anything else.

So the input needs to wait until the queue is ready to accept a new message. Once it has enqueued one, the output is notified, immediately acquires the queue lock and begins the dequeue operation. At the same time, the input has already finished input processing (as I said, this happens in virtually “no time” compared to the queue operation), so it needs to wait for the queue lock. Once the dequeue operation is finished, the output releases the lock and processes the message in virtually no time, too. The input then acquires the queue lock, and the whole story begins again from the start.

A small queue may build up depending on the OS scheduler, but I think most often, input and output will just wait for the queue to complete. In that sense, this mode is similar to DIRECT mode, except that a queue can build up when the action needs to be retried.

So to optimize such a scenario, the best thing to do is to write a totally new queue storage driver for such cases. Sequential files do not really work well when we have multiple producers running.

This is a major effort and even then we need to think about the implications I raised in regard to processing cost above.

First of all, rsyslog was never designed for this use case (preserving every message EVEN in case of a sudden power failure). When I introduced purely disk-based queues, this was done to support disk-assisted mode. I needed a queue type that permitted me to store things on disk if we run out of memory. As a “side-effect”, a pure disk mode became available as well (I never implemented it for its own sake). As it was there, I decided to expose this mode and made it user-configurable. I thought (probably correctly) that it could solve some need – a need that I’d consider “very exotic” (think about the reliance on an audit-grade protocol for this to really make sense). And I added the checkpoint capability because it seemed useful, even with disk-based queues, which could be guarded from total loss of messages by using a reasonable checkpoint interval. Again, a checkpoint interval of one is permitted just because this capability came “for free” and could be handy in some use cases.

The kiosk example we discussed in 2008 (?) on the mailing list looked like a good match for such an exotic environment. Sudden power loss was a real possibility, and the traffic volume was low. Bingo, perfect match.

However, I never thought about a reasonable high-volume system using disk-only queues. Considering the cost functions, such a system boils down to a DIRECT mode queue that just takes an exceptionally long time to process messages.

So probably the best approach for this situation would be to run the queue actually in direct mode. That removes the overwhelming cost of queue operations. Direct mode also ensures that the input receives an ack from the output [but there may be subtle issues which I need to check to make sure this is always the case, so do not take this for granted – but if it is not yet so, this should not be too complex to change]. With this approach, we have two issues left:

a) the output action may be so slow, that it actually is the dominating cost factor and not disk queue operation

b) the output action may block for an extended period of time (e.g. during a retry)

In case a), a disk queue makes sense, because its cost is irrelevant in this scenario. Indeed, it is irrelevant under all circumstances. As such, we can configure a disk-only action queue in that case. Note that this implies a *very* slow output.

Case b) is more complicated. We do NOT have any proper way to address it with the current code. The solution, IMHO, is to introduce a new queue mode “Disk Queue on Delay” which starts an ultra-reliable disk queue (preferably with a faster queue store driver) if and only if the action indicates that it will need extended processing time. This requires some changes to action processing, but the action state machine should be capable of handling that with relatively slight modifications [again, an educated guess, not a guarantee].

In that scenario, we run the action immediately whenever possible. Only if that is not possible do we take the (considerable) extra effort of buffering messages into a much slower on-disk queue. Note that such a mode only makes sense with audit-grade protocols and senders (which hold processing until the ACK has been received). As such, a busy system automatically slows down to the rate that the queue writer can handle. In this sense, the overall system (e.g. a financial trading system!) may be slowed down by the unavailability of a failing output (which in turn causes the extra and very high cost of disk queue operations). It needs to be considered whether that is an acceptable price.

The faster an ultra-reliable queue disk store driver performs, the more cases we can handle in the spirit of a) above. In theory, this can lead to elimination of b) cases.

Nevertheless, I hope I have shown that re-designing the queue (drivers) to support high throughput AND ultra-reliable operations AT THE SAME TIME is far from being a trivial task. To do it right, it involves some other changes too.