POLL, please comment if you have an opinion:
Codeberg.org is being spammed by users with one-time/disposable email addresses and TOR connections. These accounts flood projects with thousands of bogus issue comments, cause pain for project owners, and spam their notification email inboxes. Codeberg's SMTP reputation is harmed as well.
We are considering blocking access via TOR and one-time email providers to maintain smooth operation for all users.
What do you think? Is there a better approach?
Please have your say.
I do understand the practical problems (blockers, malware detection) and understand that people might have ideological issues with 'mining'.
But requiring a proof of work is defensible to your audience, I'd say.
@berkes You say a captcha per issue? Hmm ... not sure what to think; also, there are very legitimate use cases for creating issues via the API, for example from CI.
@codeberg technically, an API could require a proof-of-work. I'm not aware of existing libraries or implementations.
The client (including the web version) would then need to 'mine' some hashes before submitting a request (i.e. hashcash), acting as a captcha for both the API and the web. This costs a normal user a tiny amount of electricity and a small delay, but costs bots large amounts of resources.
And if those hashes then bring in some micropayments, it's a win-win.
@codeberg to clarify: each request needs such a PoW in a header or as part of the payload.
But this sounds like a big project on its own. Maybe others have built this already? It could even take the form of an HTTP proxy.
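To make the idea concrete: a hashcash-style scheme means the client searches for a nonce until the hash of the request-specific string meets a difficulty target, and the server only has to hash once to verify. The sketch below is a simplified illustration, not the real hashcash stamp format; the function names (`mint`, `verify`) and token layout are made up for this example.

```python
import hashlib
import itertools

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits in a hash digest."""
    n = int.from_bytes(digest, "big")
    return len(digest) * 8 - n.bit_length()

def mint(resource: str, bits: int = 20) -> str:
    """Client side: try nonces until sha256(resource:nonce)
    has at least `bits` leading zero bits. Expensive on purpose."""
    for nonce in itertools.count():
        token = f"{resource}:{nonce}"
        digest = hashlib.sha256(token.encode()).digest()
        if leading_zero_bits(digest) >= bits:
            return token

def verify(token: str, resource: str, bits: int = 20) -> bool:
    """Server side: one hash to check the stamp is valid
    and bound to the resource being requested."""
    if not token.startswith(resource + ":"):
        return False
    digest = hashlib.sha256(token.encode()).digest()
    return leading_zero_bits(digest) >= bits
```

A client would then attach the minted token to each request, e.g. in a custom header, and the server rejects requests whose token fails `verify`. The difficulty (`bits`) can be tuned per endpoint: higher for issue creation, lower for reads.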
@berkes It would be really interesting to build hashcash using a more modern PoW like Cuckoo Cycle and actually implement it in Gitea as DoS prevention.
For APIs it's a bit tricky to require your users to implement it themselves. If Gitea has client libs, it could be done.
@codeberg how about plain rate limiting? Like X requests per 5 seconds, Y per minute, and Z per hour, where Z > Y > X.
For Rails, I always use https://github.com/jeremy/rack-ratelimit. There might be something in Go that can be integrated into Gitea, or an agnostic proxy that is just as flexible and tunable.
Though a proxy has no knowledge of things like 'user' or 'customer'.
@codeberg also note that rate limits for GET requests can typically be an order of magnitude more lenient than for PUT/PATCH/POST/DELETE.
E.g. you say: per IP we allow 100,000 read (GET) requests per hour, but only 100 writes (POST etc.) per hour.