Here are the DigitalOcean graphs for mastodon.technology over the past 7 days, and the past 24 hours. Yesterday required us to rebalance our resource utilization (before this week, we were over-provisioned and were preparing to decrease server resources to save costs 😅).
I think we're in an okay spot, but we don't have a lot of headroom left. Fingers crossed for today. We may need to close new registrations; if we make that decision, we'll be upfront about it.
This instance is funded by a Patreon – if you'd like to contribute, head over to https://www.patreon.com/ashfurrow
One year ago, this instance was running on Docker. This configuration had been officially supported, but discouraged, by the Mastodon maintainers – and for good reason. It introduced so many problems and so much overhead that I'm really grateful everyone peer-pressured me into migrating off it. Now the instance runs as standard Linux services. Otherwise, I'm positive it would be in tears right now 😆
43k jobs queued up. The default queue has a latency of 22 minutes, and pull is over two hours lol okay
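For context, a queue's "latency" here is the age of the oldest job still waiting in it. Sidekiq exposes this directly as `Sidekiq::Queue.new("default").latency` (which reads the oldest job's enqueued-at timestamp from Redis); here's a minimal standalone sketch of the same calculation – the helper name and sample timestamps are made up for illustration:

```ruby
require "time"

# Queue latency = how long the oldest waiting job has been sitting there.
# Sidekiq::Queue#latency does the equivalent by reading the oldest job's
# enqueued_at field from Redis; this version just takes raw timestamps.
def queue_latency(enqueued_ats, now: Time.now)
  return 0.0 if enqueued_ats.empty?
  now - enqueued_ats.min # seconds since the oldest job was enqueued
end

# A job enqueued 22 minutes (1320 s) before "now":
now = Time.at(2_000)
queue_latency([Time.at(680), Time.at(1_500)], now: now) # => 1320.0
```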
I see a few jobs taking a long time (>10 minutes) but most finish instantly. I’m trying to figure out where the next bottleneck to scale up is. How do I get the CPU cores doing more? PgBouncer? Multiple Sidekiq services? It seems like I have CPU headroom, so should I get more aggressive with my Postgres config? Is there a quick win without prolonged downtime? These are the questions on my mind.
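On the "multiple Sidekiq services" question: a common approach is to run extra Sidekiq processes pinned to specific queues, so that default jobs aren't stuck behind a clogged pull queue. A sketch of what a second systemd unit could look like – the paths, user, and thread counts are assumptions about a typical non-Docker Mastodon install, not our actual config:

```ini
# /etc/systemd/system/mastodon-sidekiq-pull.service (hypothetical example)
[Unit]
Description=mastodon-sidekiq (pull/push queues)
After=network.target

[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="RAILS_ENV=production"
# DB_POOL should match -c, or the process can starve for DB connections
Environment="DB_POOL=25"
# -c sets thread concurrency; each -q pins this process to those queues
ExecStart=/home/mastodon/.rbenv/shims/bundle exec sidekiq -c 25 -q pull -q push
Restart=always

[Install]
WantedBy=multi-user.target
```

Each additional process multiplies Postgres connections (concurrency × processes), which is exactly where PgBouncer starts to pay off.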
The instance stayed up overnight with the new config changes (always a concern haha). Sidekiq queues are empty and have already chewed through an average day's worth of jobs. I'm excited/terrified to see what happens today!
@ashfurrow the opposite feeling of a casino billboard with the jackpot value spinning higher and higher. Except, queue levels. ;)
This Mastodon instance is for people interested in technology. Discussions aren't limited to technology, because tech folks shouldn't be limited to technology either!