Here are the m.t server resource graphs for the past two weeks. You can see the point halfway through when we got the initial extra load, stumbled a bit while we reconfigured, and then levelled out.
We've had a few sudden spikes in load average that corresponded to bursts of Sidekiq jobs getting enqueued. I'm not sure where these jobs are coming from, but the new configuration is handling them super-smoothly 😋
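In case it's useful to other admins: the main knobs here are Sidekiq's thread count and queue weights, which live in config/sidekiq.yml. A rough sketch of what that file looks like — the numbers below are illustrative, not our exact settings:

```yaml
# config/sidekiq.yml — illustrative numbers, not the actual m.t values
:concurrency: 25        # worker threads per Sidekiq process (DB_POOL should be >= this)
:queues:                # weighted queues: a higher weight means the queue is checked proportionally more often
  - [default, 8]
  - [push, 6]
  - [ingress, 4]
  - [mailers, 2]
  - [pull]
  - [scheduler]
```

(Splitting queues across multiple Sidekiq processes is the other common lever, but the idea is the same.)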
Here's the graph of Sidekiq jobs processed/failed over the same two-week period. The peak, last Monday, was 1.7M jobs processed and 280k jobs failed.
During that first spike, before we reconfigured Sidekiq, failures increased proportionally more than successes did. My inference is that the slower processing rate led to more failures (and, in turn, more retries, which fed back into the queue backlog problem).
A lot of the retries in that period were due to Mastodon::RaceConditionError exceptions; other admins have reported the same problem here: https://github.com/mastodon/mastodon/issues/15525
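For context on why those retries pile up: many Mastodon workers wrap their work in a Redis lock and raise Mastodon::RaceConditionError when they can't acquire it, and Sidekiq then counts that as a failure and re-enqueues the job later with exponential backoff. A rough sketch of the pattern — paraphrased, not a verbatim excerpt from Mastodon's source; the worker name and retry count are placeholders:

```ruby
# Illustrative only: shows the lock-or-raise pattern that produces
# Mastodon::RaceConditionError, not Mastodon's actual worker code.
class ExampleDistributionWorker
  include Sidekiq::Worker

  # Failed jobs go to Sidekiq's retry set and are re-enqueued later with
  # exponential backoff, which is how failures feed back into the backlog.
  sidekiq_options retry: 5 # placeholder value

  def perform(status_id)
    RedisLock.acquire(redis: Redis.current, key: "distribute:#{status_id}") do |lock|
      if lock.acquired?
        # do the actual fan-out work here
      else
        # Another job already holds the lock for this status, so bail out
        # and let Sidekiq retry this job later.
        raise Mastodon::RaceConditionError
      end
    end
  end
end
```

So when processing slows down, jobs hold locks for longer, more of their siblings hit this branch, and the retry queue grows — which matches the failure spike in the graph.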
@ashfurrow Yeah, but the latter is easier when you have the money, so a lot of corporations do it that way. :^)