a cool thing about Mastodon is that it's literally open source, you can just go poking around and see how it works

a user recently asked me why they couldn't add another reaction to an instance announcement. turns out there's a maximum of eight distinct reactions: github.com/mastodon/mastodon/b
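
to get a feel for it, here's a minimal sketch of what that kind of cap can look like in a Rails model; the constant and validation below are illustrative, not the actual Mastodon code behind that link:

```ruby
# Illustrative only -- not the real Mastodon source. A per-announcement cap on
# distinct reaction emoji can be enforced with a custom model validation.
class AnnouncementReaction < ApplicationRecord
  REACTION_LIMIT = 8 # hypothetical constant name

  belongs_to :announcement
  belongs_to :account

  validate :reaction_limit_not_exceeded, on: :create

  private

  def reaction_limit_not_exceeded
    # Reacting with an emoji that's already present is fine; only a *new*
    # distinct emoji can push the announcement over the limit.
    existing = announcement.announcement_reactions.where.not(name: name).distinct.count(:name)
    errors.add(:name, 'would exceed the maximum number of distinct reactions') if existing >= REACTION_LIMIT
  end
end
```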

from an educational perspective, it's really useful to have a large, in-production Rails+JS app where anyone can pull down the code

it's great when the Mastodon upgrade notes are like "this update includes long-running migrations"

I was curious about this, since 3.5.2 is just a patch release. what could take so long? in this case, it's adding a single index to the statuses table (the largest table by a large margin): github.com/mastodon/mastodon/b
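
for context, a single-index migration on a huge table usually looks something like the sketch below; the column name is a placeholder, not the actual change from that commit. the slow part is Postgres reading every row in statuses to build the index:

```ruby
# Illustrative Rails migration -- the column is a placeholder, not the real
# 3.5.2 change. Building an index forces Postgres to scan the whole table,
# which is why it dominates upgrade time on a big statuses table.
class AddExampleIndexToStatuses < ActiveRecord::Migration[6.1]
  # CREATE INDEX CONCURRENTLY can't run inside a transaction, but it avoids
  # holding a long write lock on the table while the index builds.
  disable_ddl_transaction!

  def change
    add_index :statuses, :example_column, algorithm: :concurrently
  end
end
```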

this is the part of the Mastodon upgrade process where the load averages spike 😅

Ash Furrow boosted

(john hammond voice) i been hacked. all my dinos gone.

I regret to inform you that Clementine has turned into one of those “loves the faucet” cats :clementine:

Ash Furrow boosted

Ideal discography length

months and months of waiting, then suddenly, there it is… 

ankle sock weather

it's one search index rebuild, michael, how long could it take? 10 weeks?

eh whatever. worst case scenario is I have to re-build the search index.

One other interesting thing to note is that, even though we've increased the Sidekiq throughput, we've actually *decreased* the total number of connections open to Postgres (~300 before and 160 now). I think this is due to Sidekiq connection pooling.
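
Back-of-the-envelope version of why that works: each process holds up to its pool size in Postgres connections regardless of throughput, so the total is driven by processes * pool, not by how many jobs you push through. The numbers below are invented for illustration and aren't our actual configuration:

```ruby
# Rough arithmetic only -- process counts and pool sizes here are made up,
# not the real mastodon.technology setup. Each Rails/Sidekiq process opens up
# to `pool` connections, so total connections are roughly sum(processes * pool).
def total_connections(services)
  services.sum { |s| s[:processes] * s[:pool] }
end

before = [
  { name: 'puma',    processes: 2,  pool: 20 },
  { name: 'sidekiq', processes: 10, pool: 26 }, # many small processes, generous pools
]

after = [
  { name: 'puma',    processes: 2, pool: 20 },
  { name: 'sidekiq', processes: 3, pool: 40 }, # fewer processes, pools sized to concurrency
]

puts total_connections(before) # => 300
puts total_connections(after)  # => 160
```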

This goes to show that making better use of existing resources is often better than simply throwing more resources at a problem.

A lot of the retries in that time were due to Mastodon::RaceConditionError exceptions; other admins have reported the same problem here: github.com/mastodon/mastodon/i
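
For anyone wondering what that exception actually is: as I understand it, Mastodon wraps some worker critical sections in a Redis lock and raises Mastodon::RaceConditionError when a worker can't acquire the lock in time. Here's a simplified, generic sketch of that pattern, not the actual Mastodon implementation:

```ruby
# Simplified sketch of the Redis-lock pattern behind this class of error.
# Not Mastodon's actual code; names and timings are illustrative.
require 'redis'
require 'securerandom'

class RaceConditionError < StandardError; end

def with_redis_lock(redis, key, ttl_seconds: 15)
  token = SecureRandom.hex(16)
  # NX = "only set if the key doesn't already exist", i.e. acquire the lock.
  acquired = redis.set("lock:#{key}", token, nx: true, ex: ttl_seconds)
  raise RaceConditionError, "could not acquire lock for #{key}" unless acquired

  yield
ensure
  # Only release the lock if we still own it.
  redis.del("lock:#{key}") if acquired && redis.get("lock:#{key}") == token
end

# If two workers hit the same status at once, the second one raises, Sidekiq
# catches it and schedules a retry, and the attempt shows up as a failure.
with_redis_lock(Redis.new, 'distribute_status:12345') do
  # ...do the work...
end
```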

Here's the graph of Sidekiq jobs processed/failed over the same two-week time period. The peak last Monday is 1.7M jobs processed, 280k jobs failed.

The increase in failures is proportionally larger than the increase in successes during that first spike, before we reconfigured Sidekiq. I'm inferring that the slower processing rate led to more failures (and therefore more retries, which fed back into the queue backlog problem).
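
The feedback loop, roughly: a failed job lands in Sidekiq's retry set and comes back later, so when the workers are already saturated, every failure adds future work on top of the backlog. A toy model of that dynamic (all numbers invented):

```ruby
# Toy model of the failure/retry feedback loop -- every number here is made up.
# When arrivals exceed capacity, failures turn into retries, which grow the
# backlog even faster than the arrival rate alone would.
capacity_per_tick = 1_000   # jobs the workers can finish per tick
arrivals_per_tick = 1_200   # new jobs arriving per tick (more than capacity)
failure_rate      = 0.15    # fraction of attempted jobs that fail and will retry

backlog = 0
10.times do |tick|
  attempted = [backlog + arrivals_per_tick, capacity_per_tick].min
  failed    = (attempted * failure_rate).round
  backlog   = backlog + arrivals_per_tick - attempted + failed
  puts "tick #{tick}: attempted=#{attempted} failed=#{failed} backlog=#{backlog}"
end
```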

Here are the m.t server resource graphs for the past two weeks. You can see the point halfway through where the extra load first hit: we stumbled a bit while reconfiguring, then levelled out.

We've had a few sudden spikes in load averages that corresponded to sudden bursts of Sidekiq jobs getting enqueued. I'm not sure where these jobs are coming from, but the new configuration is handling them super-smoothly 😋

Ash Furrow boosted

spent the better part of the day theming my phone's home screen...

Ash Furrow boosted

has anyone made this joke yet 

@ashfurrow

I'd just like to interject for a moment. What you're referring to as Fediverse/Mastodon, is in fact, ActivityPub/Fediverse/Mastodon, or as I've recently taken to calling it, ActivityPub plus Fediverse plus Mastodon.
