POLL, please comment if you have an opinion:

Codeberg.org is being spammed by users signing up through one-time/disposable email services and TOR connections. These users spam projects with thousands of bogus issue comments, cause pain for project owners, and flood their notification email inboxes. Codeberg's SMTP reputation is also harmed.

We are considering disabling access via TOR and one-time email providers to maintain smooth operation for all users.

What do you think? Is there a better approach?
Please have your say.

@codeberg maybe unpopular, but have you considered a captcha in the form of a JavaScript miner? Basically requiring anyone to halt for 60 seconds when registering, and 5 seconds when posting, etc.

I do understand the practical problems (blockers, malware detection) and understand that people might have ideological issues with 'mining'.

But requiring a proof of work is defensible to your audience, I'd say.

@berkes You mean a captcha per issue? Hmm ... not sure what to think. Also, there are very legitimate use cases for creating issues via the API, for example from CI.

@codeberg technically, an API could require a proof-of-work. I'm not aware of existing libraries or implementations.

The client (including the web version) would then need to 'mine' some hashes before submitting a request (i.e. hashcash), acting as a captcha for both the API and the web. This costs a normal user a tiny amount of electricity and delay, but costs bots large amounts of resources.

And if those hashes then bring in some micropayments, it's a win-win.

@codeberg to clarify: each request needs such a PoW in a header or as part of the payload.

But this sounds like a big project on its own. Maybe others have built this already? It could even take the form of an HTTP proxy.
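I'm not aware of an existing library either, but the core mechanism is small. A minimal sketch in Go (the function names, difficulty value, and payload format are illustrative assumptions, not an existing implementation): the client brute-forces a nonce until the SHA-256 hash of the request payload plus nonce has enough leading zero bits; the server verifies with a single hash.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math/bits"
)

// leadingZeroBits counts the leading zero bits of a 32-byte hash.
func leadingZeroBits(h [32]byte) int {
	n := 0
	for _, b := range h {
		if b == 0 {
			n += 8
			continue
		}
		n += bits.LeadingZeros8(b)
		break
	}
	return n
}

// mine searches for a nonce such that sha256(payload || nonce)
// has at least `difficulty` leading zero bits. The client runs this.
func mine(payload []byte, difficulty int) uint64 {
	buf := make([]byte, 8)
	for nonce := uint64(0); ; nonce++ {
		binary.BigEndian.PutUint64(buf, nonce)
		h := sha256.Sum256(append(payload, buf...))
		if leadingZeroBits(h) >= difficulty {
			return nonce
		}
	}
}

// verify is what the server would run: a single hash, cheap to check.
func verify(payload []byte, nonce uint64, difficulty int) bool {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, nonce)
	h := sha256.Sum256(append(payload, buf...))
	return leadingZeroBits(h) >= difficulty
}

func main() {
	// Hypothetical payload: the request line of the API call being made.
	payload := []byte("POST /api/v1/repos/owner/repo/issues")
	nonce := mine(payload, 16) // ~65k hashes on average for the client
	fmt.Println("nonce:", nonce, "valid:", verify(payload, nonce, 16))
}
```

The asymmetry is the point: verification is one hash, while mining at 16 bits of difficulty costs tens of thousands, and the difficulty could be raised per-endpoint or per-reputation.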

@berkes It would be really interesting to build hashcash using a more modern PoW like Cuckoo Cycle and actually implement it in Gitea as DoS prevention.
For APIs it's a bit tricky to require your users to implement it themselves. If Gitea has client libs, it could be done.

@codeberg how about plain rate limiting? Like one request per 5 seconds.


@stevenroose @berkes Rate-limiting makes sense here, but we need to check carefully that interactive use cases like GitNex client apps are not impacted.

@stevenroose @berkes One request per 5 seconds is possibly a bit tight for interactive use; at the same time, one spam issue comment every 5 seconds is already quite a lot.

@codeberg @stevenroose rate-limiting based on a combination of session (ID in headers?), IP, and 'user' often works.


E.g. allow at most X requests per session, Y per user, and Z per IP, where Z > Y > X.

For Rails, I always use github.com/jeremy/rack-ratelim. There might be something in Go that can be integrated into Gitea, or an agnostic proxy that is as flexible and tunable.

Though a proxy has no knowledge of things like 'user' or 'customer'.

@codeberg also note that, typically, rate limiting for GET requests can be an order of magnitude more lenient than for PUT/PATCH/POST/DELETE.

E.g. where you say: per IP we allow 100,000 read (GET) requests per hour, but only 100 writes (POST etc.) per hour.
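As a sketch of that split (the budgets are just the example figures above, not recommended values), picking the hourly budget by HTTP method could look like:

```go
package main

import (
	"fmt"
	"net/http"
)

// methodLimit returns a hypothetical hourly request budget,
// treating reads far more leniently than writes.
func methodLimit(method string) int {
	switch method {
	case http.MethodGet, http.MethodHead:
		return 100000 // reads: generous
	default:
		return 100 // POST/PUT/PATCH/DELETE: tight
	}
}

func main() {
	fmt.Println("GET budget:", methodLimit(http.MethodGet))
	fmt.Println("POST budget:", methodLimit(http.MethodPost))
}
```

The per-method budget would then feed into whatever per-key limiter is in front of the application, so a scraper and a spammer hit different walls.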

@codeberg @berkes Actual clients could implement the hashcash. Though if a client like GitNex needs an API call every second to operate normally (or occasional bursts of 10 calls at once), it would also be negatively impacted.
