Here's the graph of Sidekiq jobs processed/failed over the same two-week period. The peak last Monday was 1.7M jobs processed and 280k jobs failed.
The increase in failures is proportionally larger than the increase in successes during that first spike, before we reconfigured Sidekiq. I'm inferring that a slower processing rate led to more failures (and subsequently more retries, which fed back into the queue backlog problem).
One other interesting thing to note: even though we've increased Sidekiq throughput, we've actually *decreased* the total number of connections open to Postgres (~300 before, ~160 now). I think this is due to Sidekiq's connection pooling.
This goes to show that making better use of existing resources is often better than simply throwing more resources at a problem.
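To make the pooling point concrete: each Sidekiq process keeps one ActiveRecord connection pool that all of its worker threads share, so the Postgres connection count scales with processes × pool size rather than with jobs processed. Here's a back-of-the-envelope sketch; every number in it is a placeholder, not our actual configuration:

```ruby
# Rough estimate of open Postgres connections.
# All of these values are made-up examples, not our real setup.
web_processes       = 2    # e.g. Puma workers
web_pool            = 10   # ActiveRecord pool per web process
sidekiq_processes   = 4    # Sidekiq processes after the reconfiguration
sidekiq_concurrency = 25   # threads per Sidekiq process (pool size should match)

# Connections are held per process pool, not per job, so throughput can rise
# while this total stays flat.
total = (web_processes * web_pool) + (sidekiq_processes * sidekiq_concurrency)
puts "expected Postgres connections: ~#{total}"  # => ~120 with these numbers
```

The actual pool size typically comes from `DB_POOL` (or the `pool:` setting in `database.yml`) and should match the Sidekiq concurrency, otherwise worker threads end up queuing for a database connection.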
@ashfurrow Yeah, like in journalism, where the choice is between "getting it first" and "getting it right", management in IT far too often decides to "hit the problem with more metal". More often unnecessarily than not, and I can tell some stories about that…
You did the right thing, and you also had the best support – I'm proud to be in "your house" here. Thanks for running the show – and thanks to the crew & patrons for helping you keep it running!
@ashfurrow Yeah, but the latter is easier when you have the money, so a lot of corporations do it that way. :^)
@ashfurrow s/often/almost always/
A lot of the retries in that period were due to Mastodon::RaceConditionError exceptions; other admins have reported the same problem here: https://github.com/mastodon/mastodon/issues/15525
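For anyone wondering how a race condition turns into failures and retries here: the general pattern (sketched below with a hypothetical worker and a deliberately simplified Redis lock, not Mastodon's actual implementation) is to raise Mastodon::RaceConditionError when a job can't acquire a lock, and let Sidekiq's retry machinery re-run it later rather than allowing two jobs to touch the same record at once:

```ruby
require "sidekiq"

# Stand-in for the real exception class defined in Mastodon.
module Mastodon
  class RaceConditionError < StandardError; end
end

# Hypothetical worker illustrating the pattern; the lock helper is a
# simplified sketch, not Mastodon's actual locking code.
class ExampleDistributionWorker
  include Sidekiq::Worker

  sidekiq_options retry: 5

  def perform(status_id)
    with_redis_lock("distribute:#{status_id}") do
      # ...work that must not run concurrently for the same status...
    end
  end

  private

  def with_redis_lock(key)
    # SET NX acts as a simple lock with a 60s safety expiry (assumes redis-rb).
    acquired = Sidekiq.redis { |redis| redis.set(key, 1, nx: true, ex: 60) }
    # Raising here is what produces the retries: Sidekiq catches the error,
    # counts the job as failed, and schedules it to run again later.
    raise Mastodon::RaceConditionError unless acquired

    yield
  ensure
    Sidekiq.redis { |redis| redis.del(key) } if acquired
  end
end
```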