rsyslog performance: main and action queue workers

Rsyslog has both “main” message queues and action queues. [Actually, “main message queues” are queues created for a ruleset; “main message” is an old term that was preserved even though it is no longer accurate.]

By default, both queues are set to a maximum of one worker. The reason is that this is sufficient for many systems and it cannot lead to message reordering. If multiple workers are concurrently active, messages can obviously be reordered, as the order then depends, among other things, on thread scheduling.

So for now let’s assume that you want to utilize a multi-core machine. Then you most probably want to increase the maximum number of main message queue workers. The reason is that main queue workers process all filters for all rules inside the rule set, as well as full action processing for all actions that are not run on an asynchronous (action) queue. In typical setups, this offers ample opportunity for concurrency. Intense tests on the v5 engine have shown near-linear scalability up to 8 cores, with still good improvements for higher core counts (but increasing overhead). Later engines most probably do even better, but have not been rigorously performance-tested (doing that right is a big effort in itself).

Action queues have a much more limited concurrency potential because they do only a subset of the work (no filtering, for example; just generating the strings and executing the actual plugin action). The output module interface traditionally permits only one thread at a time to be present within the actual doAction() call of the plugin. This is to simplify writing output plugins, but would be needed in any case as most cannot properly handle truly concurrent operations (think for example about writing to sequential files or a TCP stream…). For most plugins, the doAction() part is what takes most processing time. HOWEVER, multiple threads on action queues can build strings concurrently, which can be a time-consuming operation (especially when regexes are involved). But even then it is hard to envision that more than two action queue worker threads can make much sense.

So the bottom line is that you need to increase the main queue worker threads to get much better performance. If you need to go further, you can fine-tune action queue worker threads, but that’s just that: fine-tuning.
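
To illustrate, here is a minimal sketch in legacy directive syntax (the directive names exist in v5, but the values are arbitrary examples; check them against the documentation of your version):

# main (ruleset) queue: filtering and all non-queued action
# processing happen here, so this is where extra workers pay off
$MainMsgQueueWorkerThreads 4

# an asynchronous action queue rarely benefits from more than one or
# two workers; these directives apply to the next action defined
# (here an example forward via plain TCP)
$ActionQueueType LinkedList
$ActionQueueWorkerThreads 2
*.* @@central.example.com:514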

Note that putting “fast” actions (like omfile) on an async action queue just to be able to specify action threads is the wrong thing to do in almost all situations: there is some inherent overhead with scheduling the action queue, and that overhead will most probably eat up any additional performance gain you get from the action queue (worse, I’d expect that it will usually slow things down).

Action queues are meant to be used with slow (database, network) and/or unreliable outputs.

For all other types of actions, even long-running ones, increasing the main queue worker threads makes much more sense, because this is where most concurrency is possible. So for “fast” actions, use direct action queues (aka “no queue”) and increase the main queue workers.

Finally, a special note on low-concurrency rulesets. Such rulesets have limited inherent concurrency. A typical example is a ruleset that consists of a single action. For obvious reasons, the number of things that can be done concurrently is then very limited. If it is a fast action, and there is little effort involved in producing the strings (most importantly no regex), it is very hard to gain extra concurrency, especially as a high overhead is involved with such fine-grained concurrency. In some cases, the output plugin may help. For example, omfile can do background writes, which will definitely help in such situations.
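
For example, a minimal sketch of that (the $OMFileAsyncWriting directive is available in recent v4/v5 omfile builds; verify against the doc of your version):

# let omfile write in the background (double buffering), so the
# worker thread does not block on disk i/o; applies to the file
# actions that follow
$OMFileAsyncWriting on
*.* /var/log/messages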

You are in somewhat better shape if the string builder is CPU-intensive, e.g. because it contains many and complex regexes. Remember that strings can be built in parallel to executing an action (if run on multiple threads). So in this case, it makes sense to increase the max number of threads. It makes even more sense to increase the default batch size. That is because the strings for the whole batch are built first, and then the action plugin is called. So the larger the batch, the larger these two chunks of work are. For a busy system, a batch size of 10,000 messages does not sound unreasonable in such cases. When it comes to which worker threads to increase, again increase the main queue workers, not the action ones. To reiterate: this gives the rsyslog core more concurrency to work with (larger chunks of CPU-bound activity) and avoids the extra overhead (though relatively small) of an async action queue.
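
To make that concrete, a minimal sketch (legacy directives; the 10,000 figure is just the illustrative value from above, not a general recommendation):

# large batches let the string builder work on a big chunk of
# messages before the action plugin is called
$MainMsgQueueDequeueBatchSize 10000
# and give the main queue enough workers to exploit that concurrency
$MainMsgQueueWorkerThreads 4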

I hope this clarifies worker thread settings a bit more.

performance of liblognorm/rsyslog parse tree

We are currently considering a new table lookup feature for rsyslog, and as usual, performance is a priority. As such, we intend to base the implementation on the liblognorm parse tree, which is highly efficient. In fact, the adaptation (maybe a partial rewrite) I have in mind for rsyslog actually has O(1) time complexity. This drew some discussion, as parse trees are usually considered to be O(log n). So I thought I’d quickly lay out why constant time complexity is possible.

To set some ground, let’s define the use case where this applies. We are talking about string matches, not regular expressions. In the case of liblognorm, we are also talking about exact string matches, and let me assume this for rsyslog’s lookup table functionality as well (at least in “string” mode). Note that this may be extended, which should be no problem, but I have yet to properly evaluate the effect on the parse tree. So let’s assume exact matches in this blog post. Basically, we have a set of strings and would like to see if there is an exact match (and pick up some value associated with that match). For example, we may have the strings “Berlin”, “Boston”, “Bogota” and “London”. That example is obviously extremely simplified, yet it will help us understand.

Whenever we talk about time complexity, we need to make sure which exact n we are talking about. When it comes to matching values, n is usually the number of different values. In our example, this n would be four. Of course, one could also look at string lengths, as the length obviously influences the complexity of the comparison operation. Let’s call this m; in the above sample m would be six (the length of the longest string). If we take the unusual step of including that m, a linear list would have time complexity O(n*m), a binary tree O(m log n) and rsyslog’s parse tree O(m). HOWEVER, there usually is an upper bound on string length, and for that reason m is taken as a constant in almost all cases. By definition, it will then no longer be mentioned in O notation, and so we end up with the usual view that a linear list is O(n), a binary tree O(log n) … and rsyslog’s parse tree O(1).

This is possible because rsyslog’s parse tree is based on the radix tree idea and kind of “compresses” the strings rather than storing the full string in every node. The conceptual in-memory representation of a tree based on the above values is as follows:

[root] - B --- O --- GOTA
  |      |     + --- STON
  |      +---- ERLIN
  + ---- LONDON

As you can see, the tree branches on the single characters that differ between the strings. So we do not have each full string at every tree level, but only the rest of what is different. A simple state machine works on this structure. On a conceptual level, the search string is processed character by character, so each character represents a state transition. For obvious reasons, there can be only as many state transitions as there are characters inside the longest string. This is what we called m above. The number of different strings, however, is totally irrelevant for the search, because it does not influence the number of state transitions required. That number is what we called n. So in conclusion, the time complexity depends only on m, which is treated as a constant. Consequently, we have O(1) complexity.

Just to stress the point once again: we do not iterate over the individual strings in a tree-like manner, instead we iterate over the index string characters themselves. This creates a very broad and shallow tree, even for a large number of index values.

On real hardware, the number of strings of course has some slight effect, especially when it comes to cache hit performance. While a small tree may fit into the available cache, a much larger tree obviously will not. However, from a theoretical point of view, this does not matter; we still have constant time for the lookup. From a practical perspective, it also does not matter much: any larger tree will have a larger memory requirement and thus a lower cache hit rate. In fact, rsyslog’s parse tree will probably perform better than the alternatives because of its compressed structure, which makes cache presence of at least the top-level elements more likely.

For rsyslog, that’s all I need to tell. For liblognorm, the story is a bit longer. It is only “almost” O(1). While the string matches themselves can be carried out in constant time, liblognorm also supports parsers to pull out specific parts of a log message. The current parsers are all O(1) as well, so this does not change the picture. However, state transitions involving a parser are no longer simple operations where the transition is immediately clear. If, for a given prefix, multiple parsers may apply, liblognorm must iterate through them until a matching one is found. This is no longer O(1), but depends on the number of parsers. Of course, we can claim that the number of parsers is still a (even small) constant, and so we can still say all of this is O(1) without violating the definition.

While this is true, there unfortunately comes another complexity with parsers: with constant strings, backtracking is never required, as we immediately know the exact state transition. With parsers, we have dynamic elements, and later text may turn out to be a non-match. In that case, we need to go back to parser detection and see if another parser also fits. If so, we need to give it a try and re-evaluate the parse tree from that point on. This process is called backtracking and can degrade the runtime to as bad as O(n^2), just like regular expressions. Obviously, the problem exists primarily if very generic parsers are used, like “word”. If, for example, parsers as specific as IPv4 are used, there is no chance for more than one parser matching, so the problem cannot occur. This is also why I caution against using too-generic parsers. Thankfully, very generic parsers are not used too frequently, and so we usually do not see this problem in practice. This is why I say liblognorm is “almost” O(1), even though it may degrade to O(n^2) in extreme cases. Practically speaking, these extreme cases will not occur.

The rsyslog parse tree does not use parsers inside the parse tree, just the fixed state transitions. As such, it actually is O(1). Oh, and yes: we should probably call the rsyslog “parse tree” a “lookup tree” or “search tree” instead, but for historical reasons it currently is named the way it is…

Using ElasticSearch Bulk Mode with rsyslog

Rsyslog’s omelasticsearch plugin now supports bulk mode. With bulk mode, message processing is much faster, especially if large loads are to be processed.

Bulk mode works with rsyslog’s batching capabilities. So it probably is a good idea to refresh some of the batching concepts. The core idea is that while we would like to process many messages at once, we do NOT want to hold up processing “just” because there are too few messages. So with batching, you set an upper limit on the batch size (the number of messages inside a batch). Let’s say the batch size is set to 32. When a new batch is to be processed, the queue worker tries to pull 32 messages off the queue. If there are 32 or more present, this is nice and all 32 are taken from the queue. But now let’s assume there are only 10 messages present inside the queue. In that case, the queue worker does not try to guess when the next 22 messages will arrive and wait for them. Instead, it just pulls the 10 already-present messages off the queue and these form the batch. When new messages arrive, they will be part of the next batch.

Now let’s look at the startup of a busy system. Lots of messages come in. Let’s assume they are submitted one by one (many sources submit multiple messages, but let’s focus on those cases that do not). If so, the first message is submitted and the queue worker is activated. Assuming this happens immediately and before any other message is submitted (actually unlikely!), it will initially create a batch of exactly one message and process that. In the meantime, more messages arrive and the queue fills. So when the first batch is completed, there are ample messages inside the queue. As such, the queue worker will pull the next set of 32 messages off the queue and form a new batch out of them. This continues as long as there are sufficient messages. Note that in practice the first batch will usually be larger than one and often be the max batch size, thanks to various inner workings I would not like to elaborate on in detail in this article. Large batch sizes with more than 1024 messages are not bad at all and may even help improve performance. When looking at a system performing with such large batches, you will likely see that partial batches are being created, simply because the queue does not contain more messages. This is not an indicator of a problem but shows that everything works perfectly!

The max batch size can be configured via

$ActionQueueDequeueBatchSize
and
$MainMsgQueueDequeueBatchSize

Note that the default sizes are very conservative (read: low), so you probably want to adjust them to some higher value. The best value depends on your workload, but 256 is probably a good starting point. If the action queue runs asynchronously (e.g. linkedlist mode, everything non-direct), the action queue batch size specifies the upper limit for the elasticsearch bulk submission.

To activate bulk mode, use

*.*     action(type="omelasticsearch"
           … other params …
           bulkmode="on")

The default is the more conservative “off”. Note that the action can of course be used with any type of filter, not just the catch-all “*.*”, which is only used as a sample here.
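
As a slightly fuller sketch, bulk mode can be combined with an asynchronous action queue, whose batch size then caps the size of each bulk submission. The parameter names below follow the omelasticsearch and queue documentation; the server, index and values are made-up examples:

*.*     action(type="omelasticsearch"
           server="localhost"
           searchIndex="syslog"
           bulkmode="on"
           queue.type="linkedlist"
           queue.dequeueBatchSize="256")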

rsyslog performance improvement rather impressive

I (think I ;)) have finished this round of rsyslog performance tuning. The result is rather impressive:

On my virtualized 4-core development environment (not exactly a high-end environment), I went from approx. 70,000 messages per second (mps) to approx. 280,000 mps. Note that these numbers do not necessarily represent a practical configuration, but I think they show the magnitude of the optimization. Also note that some complex configurations have far lower gain, because some things (like executing an action only n times within m seconds, or “message repeated n times” processing) require serialization of the message flow, and there is little we can gain in that case.

I plan to do an “official release” in the not so distant future. Next, I will first see which patches I have in my queue and then I’ll focus on the config language enhancement. That’s much more complex than just the format — I’ll blog the details hopefully either later today or tomorrow morning.

Getting Top Performance out of rsyslog

Rsyslog is lightning fast. However, the configuration influences speed very much. This blog post describes what offers optimal performance for the most recent v5 version.

I will update this blog post whenever there is news to share (at least this is the plan). This information will also hopefully flow into the rsyslog doc at some time.

  • do not use more than one ruleset within a single queue
  • do not use rate limiting unless absolutely necessary
  • use array-based queue modes
  • do not use
  • send data from different inputs to multiple queues
  • use “$ActionWriteAllMarkMessages on” for all actions where you can afford it (it really makes a difference!); see the sketch after this list
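
A minimal sketch applying two of these tips (legacy directives; values and the file path are illustrative only):

# array-based main message queue
$MainMsgQueueType FixedArray
# write all mark messages for the following action, not just those
# the action would otherwise suppress
$ActionWriteAllMarkMessages on
*.* /var/log/messages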

The following blog post also has some solid information on performance-influencing parameters: rsyslog evaluation. Note that it talks about a somewhat older rsyslog release. While it already quotes 250,000 messages per second, rsyslog 5.5.6 is quite a bit faster.

further improving tcp input performance

As one of the next things, I will be further improving rsyslog‘s tcp syslog input performance. As you know, rsyslog already has excellent performance (some sources, for example, quote 250,000 msgs per second). But, of course, there is room for improvement.

One such area is imtcp, the tcp syslog input module. It uses a single polling loop to obtain data from all senders. It is worth noting that the actual input module does NOT do very much, but hands the majority of work off to queue worker threads. However, it pulls the data from operating system buffers into our user space and also fills in some basic properties (like time of reception, remote peer and so on). Then, the message is pushed to the message queue, and on the other side of the queue the majority of processing happens (including things like parsing the message, which some would assume happens inside the receiving thread).

As can be seen in practice, this design scales pretty well in most cases. However, on a highly parallel system, it obviously limits the process of pulling data “off the wire” to a single CPU. If the rule set is then not very complex (and thus fast to process), the single-threadedness of the initial receiver becomes a bottleneck. On a couple of high-performance systems, we have seen this become the bottleneck, and I am now trying to address it.

Right now, I am looking for a good solution. There are two obvious ones:

a) start up a single thread for each connection
b) do a hybrid approach of what we currently do and a)

Even with 64-bit machines and NPTL, approach a) probably does not work well for a very large number of active sessions. Even worse, receiving messages from two different hosts would then require at least one context switch, and do so repeatedly. Context switches are quite expensive in terms of performance, and so are better avoided. Note that the current approach needs no context switch at all (for the part it does). On a system with many connections, I would be close to betting that the runtime required by the a)-approach’s context switching alone is more than what we need to do the processing with our current approach. So that seems to be a dead end.

So it looks like b) is the route to take, combining a (rather limited) number of threads with an event-driven reception loop. But how to best do that? A naive approach is to have one thread running the epoll() loop and have a pool of worker threads that actually pull the data off the wire. So the epoll loop would essentially just dispense to-be-processed file descriptors to the workers. HOWEVER, that also implies one context switch during processing, namely when the epoll loop thread activates a worker. Note that this situation is by far not as bad as in a): as we have a limited number of workers, and they are activated by the epoll thread, and that thread blocks when no workers are available, we have limited the level of concurrency. Note that limiting the concurrency level roughly to the number of CPUs available makes a lot of sense from a performance point of view (but not necessarily from a program simplicity and starvation-avoidance point of view – these concerns will be looked at, but for now I have a focused problem to solve).

One approach to this problem could be to further reduce the amount of work done in imtcp: if it no longer pulls data off the wire, but just places the file descriptor into a “message” object and submits that to the overall queue, modified queue processing could then take care of the rest. However, there are many subtle issues, including how to handle system shutdown and restart as well as disk queues. In short: that probably requires a full redesign, or at least considerable change. Anything less than that would probably result in another processing stage in front of the rule engine, as outlined initially (and thus require additional context switches).

So I went back to finding the optimal way to partition this problem. One (simple) approach is to partition the problem by tcp listeners. It would be fairly easy to run multiple listeners concurrently, where each of the listeners has its own (epoll/take data off the wire) loop running on the listener’s single thread. So in essence, it would be much like running two or more rsyslog instances, using the current code, concurrently. That approach obviously causes no additional context switches. But it has a major drawback: if the workload is spread unevenly between listeners, it may not provide sufficient parallelism to keep all CPU cores busy. However, if the workload is spread evenly enough, the approach can prevent starvation between listeners – but not between sessions of one listener. This problem is also not addressed by the current code, and there has never been any user complaint about that (or its potential effects). So one may conclude starvation is not an issue.

It looks like the usefulness of this approach depends strongly on the spread of workload between different listeners. Looking at a busy system, we need to focus on the number of highly active listeners in relation to the number of expectedly idle CPU cores i. That number i obviously must take into consideration any other processing requirements, both from rsyslog (parsing, rule processing, …) as well as all other processes the system is intended to run. So, in general, the number i is probably (much) lower than the total number of cores inside the system. If we now have a number l of listeners, we must look closely: if, among all listeners, l_h is the number of high-activity listeners, then it is sufficient to have i equal l_h: a few occasional wakeups from low-activity listeners do not really matter. However, if l_h is lower than i, or even just one, then we cannot fully utilize the system hardware. In that case, we would need to provide partitioning based on sessions, and there we see a similar scheme based on the view of low- and high-activity sessions.

But the real question is whether we can assume that most busy systems have a sufficient number of high-activity listeners, so that per-listener concurrency is sufficient to fully utilize the hardware. If that is the case, we can drastically improve potential message processing rates and still be able to keep the code simple. Even more concretely, the question is whether we are sufficiently sure this approach works well enough that we should implement it. Doing so could save considerable development effort, which could be put to better uses (like speeding up queue processing). BUT that development effort is wasted time if for a large enough number of systems we cannot see a benefit. And note that single-listener systems are not uncommon, a case where we would gain NO benefit at all.

I am actually somewhat undecided and would appreciate feedback on that matter.

Thanks in advance to all who provide it.
Rainer

Update: there is a long and very insightful discussion about this post on the rsyslog mailing list. All interested parties are strongly advised to read through it, it will definitely enhance your understanding. Please also note that based on that discussion the development focus shifted a bit.

rsyslog template plugins

As I have written yesterday, I am evaluating the use of “template modules” in rsyslog.

In that post, I mentioned that I’d expect a 5% speedup as proof that the new plugin type was worth considering. As it turns out, this method seems to provide a speedup of 5 to 6 percent, so it seems to be useful in its own right.

After I had written yesterday’s post, I checked what it would take to create a test environment. It turned out that it was not too hard to change the engine so that I could hardcode one of the default templates AND provide a vehicle to activate that code via the configuration file. Of course, we do not yet have full loadable modules, but I was able to create a proof of concept in a couple of hours and do some (mild) performance testing on it. The current code provides a vehicle to use a C-function-based template generator. It is activated by saying

$template tpl,=generator

where the equal sign indicates to use a C generator instead of the usual template string. The name that follows the equal sign will probably later become the actual module name, but is irrelevant right now. I then implemented a generator for the default file format in a very crude way, but I would expect that a real loadable module will not take considerably more processing time (just a very small amount of calling overhead after the initial config parsing stage). So with that experimental code, I could switch between the template-based default file format and the generator-based format, with the outcome being exactly the same.
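
For illustration, such a generator-based template would then presumably be referenced in an action just like any other template (the file path is only an example):

$template tpl,=generator
*.* /var/log/messages;tpl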

Having that capability, I ran a couple of performance tests. I have to admit I did not go to a real test environment, but rather used my (virtualized) standard development machine. Also, I ran the load generator inside the same box. So there were a lot of factors that influenced the performance, and this for sure was not a fully valid test. To make up for that, I ran several incarnations of the same test, with 1 to 10 million test messages. The results quite consistently reported a speedup between 5 and 6 percent achieved by the C template generator. Even though the test was crude, this consistently seen speedup is sufficient proof for me that native template generators actually have value in them. I have to admit that I had expected improvements in the 1 to 2 percent range, so 5 and more percent is considerable.

I committed the experimental branch to git, so everyone is free to review and test it themselves.

Now that I am convinced this is a useful addition, my next step will be to add proper code for template plugins (and, along the way, decide if they will actually be called template plugins — I guess library plugins could be used as well, with somewhat less effort and greater flexibility). Then, I will convert the canned templates into such generators and include them statically inside rsyslog (just like omfile and a couple of other modules are statically included inside rsyslog). I hope that in practice we will also see this potential speedup.

Another benefit is that any third party can write new generator functions. Of course, there is some code duplication inside such functions. But that should not be a big issue, especially as generator functions are usually expected to be rather small (but of course need not be so). If someone intends to write a set of complex generator functions, these can be written with a common core module whose utility functions are accessed by each of the generators. But this is not of my concern as of now.

Note that I will probably use very simple list data structures to keep track of the available generators. The reason is that after the initial config file parsing, access to these structures is no longer required and so there is no point in using a more advanced method.

I expect my effort to take a couple of days at most, but beware that Thursday is a public holiday over here in Germany and I may not work on the project on Thursday and Friday (depending, I have to admit, a little bit on the weather ;)).

rsyslog multithreading

From time to time, I receive questions on how many cores rsyslog can run on in a highly parallel system. Rsyslog is massively multi-threaded, but that does not necessarily mean that each configuration, and even each use case, can actually benefit from it.

The most important thing for gaining a speedup from parallelism is the ability to break down the workload (this is called “partitioning”) and distribute it to a set of threads, which can then work on each part in parallel.

For the partitioning to work well, the workload, and the configuration, must be “partitionable”. Let me give a counter-example. If you have a single sender and a single action (yes, this sometimes is the case!), there cannot be much parallelism. Such a config looks like this (using imtcp as an example here):

$TCPServerRun 10514
*.* /path/to/file

This cannot gain much, because we have one thread for the TCP receiver, one thread for the filtering and one for the output. With the queue engine, we can increase the number of threads that will work on filters in parallel, but these have almost nothing to do in any case. We cannot, however, walk into the output action in parallel, because a) the output plugin interface guarantees that only one thread hits a plugin at a time and b) it wouldn’t make much sense here in any case: what would it help if we hit the output twice and then needed to synchronize the file access? Not much…

So the bottom line is that a configuration like the one above is highly sequential in nature and consequently there is almost no gain from running some of the tasks concurrently. So, out of the box, rsyslog gains speedup from parallel processing in more complex cases, with more complex rules and many of them.

We are working to provide excellent speedup even for sequential configurations. But this is a long and complex road. For example, in v5 we have now de-coupled message parsing from the receiver thread, resulting in somewhat improved speedup for sequential configs like the one above. Also, we have added batching support in v5, which reduces some of the overhead involved with multiple threads (and thus reduces the gain we could potentially get from them). And in late v4 builds we introduced the ability to do double-buffered block i/o for output files, which can considerably reduce i/o overhead for high-end systems and also runs in pipeline mode, squeezing a bit more parallelism out of the sequential job.

So with the newer engines, we have been able to apply a basic processing pipeline that looks like

input -> parse & filter -> generate file data -> write

which can be done in parallel. Of course, the file write is action-specific, but I guess you get the idea. What you need to do, however, is configure all that. And even then, you cannot expect a four-fold speedup on a quad-core system. I’d say you can be happy if the speedup is around 2, depending on a lot of factors.

To get to higher speedups, the job must be made more parallel. One idea is to spread the input, e.g. run it on four ports, and then create four rulesets with ruleset queues, one for each of the inputs. Ideally, to solve the file bottleneck, these should write into four different files. While I did not have the opportunity to test this out in an actual deployment, it should yield a much larger speedup, because now we have four of these pipelines running in parallel, on partitioned data, with no need to synchronize between them.
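
A minimal sketch of such a partitioned setup, using the legacy directives available in v5 (ports, ruleset names and file paths are made-up examples; the same pattern repeats for the third and fourth listener):

$ModLoad imtcp

$Ruleset tcp1
$RulesetCreateMainQueue on
*.* /var/log/tcp1.log

$Ruleset tcp2
$RulesetCreateMainQueue on
*.* /var/log/tcp2.log

# bind each listener to its own ruleset (and thus its own queue)
$InputTCPServerBindRuleset tcp1
$InputTCPServerRun 10514
$InputTCPServerBindRuleset tcp2
$InputTCPServerRun 10515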

Well, almost… The bad news is that the current code base (5.5.0 as of this writing) unfortunately does not yet provide the ability to run the input on more than one thread. So if you have 1000 tcp connections, all of these need to be processed by a single thread (even though they may use different ports, that doesn’t matter…). It is not as bad as it sounds, because the input now is *very* quick (remember the parsing is done concurrently in a different thread [pool!]). But it still causes some loss of parallel processing where it is not strictly needed. My thinking is that we should either do a “one thread per connection” server (no longer such a big problem on 64-bit machines) or (better but even more effort) do a thread pool for pulling data from the connections. Unfortunately, I do not have time to tackle that beast, but maybe someone is interested in sponsoring that work (that would be *really* useful)?

As you can see, full speedup by using multiple cores is perfectly doable, but going to the max requires a lot of careful thinking. And, of course, I have to admit that the best features are present in the newest releases (somewhat naturally…). Obviously, there is some stability risk involved with them, but on the other hand I have had some very good success reports from some high-end sites; at least one of them has v5 already deployed in large-scale production.

I could only touch on the issue here, but I hope the information is useful. For further reading, I recommend both the doc on queues, as well as my explanation of how messages are processed in rsyslog. These documents are somewhat older and do not cover all details of pipeline processing (which simply did not exist at that time), but I think they will be very useful to read. And, yes, updating them is another thing on my too-long todo list…

rsyslog performance

Thanks to David Lang, I have been able to gather some performance data on rsyslog. More importantly, I have been able to improve rsyslog’s performance dramatically while working with David. He not only dispenses good advice, he also has a great test environment, which I lack. If you would like to see how things evolve, be sure to follow this (lengthy ;) thread: http://kb.monitorware.com/rsyslog-performance-t8691.html.

But you are probably interested in actual numbers.
The current v3-stable (3.18.x) manages to process around 22,000 messages per second (mps) with DNS name resolution turned on and about double that value without. That’s not bad, but obviously there is room for improvement.

Thanks to our combined effort, we have reached a state where we can process more than 100,000 mps, and there is an experimental version (applying some lock-free algorithms) that goes well beyond 200,000 mps. I am not yet sure if we will pursue the lock-free algorithm. There are ample additional ideas available and I am quite positive we can push the limit even further.

All numbers were tested with a minimal configuration (one udp input, one file output) on a capable multi-core machine. The numbers above are for sustained traffic rates. More messages can be accepted (and buffered) during bursts.