Video of my talk about an idea for decentralized, community CDNs is online:
media.ccc.de/v/mch2022-198-tru

> I want a Web where CDNs are unnecessary.
>
> Where different organizations, different website operators, can help each other out by hosting assets for each others' websites, thus spreading the load across many orgs in solidarity, instead of centralizing it in gatekeepers.

An older fedi thread about just that:
mastodon.technology/@rysiek/10

@rysiek Does this address Squid, and why it is or is not sufficient?

@dredmorbius Squid is a caching proxy, usable as a reverse proxy. It can be used for caching and microcaching, which I mention in this talk as generally a good idea.

But Squid will not help you if your website is down because all your edge nodes are overwhelmed, or because you are censored at the DNS level.

LibResilient works in visitors' browsers, and *will* help in such circumstances (depending on the config).

@rysiek I may be misreading / misremembering the origins of Squid, but my understanding was that Harvest (the project from which it originated) was meant to provide the sort of generalised caching system you seem to be getting at. (I need to view your preso; I'm about to, and it will hopefully reduce my ignorance...)

> In this paper we introduced the Harvest system, which addresses these problems through a combination of topic-specific content indexing made possible by a very efficient distributed information gathering architecture; topology-adaptive index replication and hierarchical object caching; and structure-preserving indexes, flexible search engines, and data type-specific manipulation and integration mechanisms.

codeontheroad.com/papers/Harve

AFAIR Akamai ended up productising much of this as a commercial service.

(Hazy distant understanding / memories.)

@rysiek And, having listened to the preso: LibResilient sounds a lot like what you were calling Samizdat at one point, or is building off of it. Browser + service workers with a server-side config for alternate origins.

The proxy cache concept (Squid) satisfies content requests by sourcing them through proxies. The original request goes through the proxy, either client-side (e.g., a local Squid caching proxy) or server-side.

What Squid configuration allows for is specification of peers for content.

A chief problem with Squid, as with other Web proxy tools (e.g., Junkbuster Proxy, if you remember that, or DansGuardian), is that they play poorly with TLS/SSL traffic. Certificate pinning and other mechanisms make this hard to accommodate with the modern Web ... unless, as with Cloudflare, the TLS request terminates with the caching provider. (Problematic.)

I don't know that Squid has been used to provide robust support for content access even where the origin is dead ... though that does seem to be its basic function in fact.

@dredmorbius Squid doesn't run in the browser, and unless a user runs it themselves, it does not offer what LibResilient offers. So Squid is not a comparable solution.

@rysiek Right.

I think we've had versions of this conversation before. You're looking at a sites-and-users in-the-browser solution.

Squid might operate independently of sites, and on a server basis. Mind, "server" might be a network of small home/office routers running OpenWRT or similar.

One thought which occurs to me is that Tor may have an interest, in that Tor is often blocked by websites, but could potentially offer greater utility if it could sideline-request and cache sites (via distributed proxy) in cases where the site itself fails to respond at all, or fails to respond normally. Yes, that last part means some case-by-case and whack-a-mole elements to this.

Samizdat / libResilient might be part of that sideline access.

Thoughts?

@dredmorbius not sure, really. LibResilient is client-side but website-deployed, as in: it is deployed on a website but operates in visitors' browsers.

Squid et al can be part of the picture as alternative endpoints, for example (I mention the reverse caching proxy option in my talk). But not sure how LibResilient could help Tor out, as LR is not a thing that is deployed with/in a browser.

@rysiek Tor would need access to sites blocking Tor exit points, to feed to its own (proposed by me) caching network.

A model for that might be:

1. Tor client makes a site request.
2. Tor exit point (TEP) notes that the request is denied.
3. TEP checks for a Tor Cache Instance of the requested content.
4. TEP communicates a request for alternative access (RFAA).
5. The request is farmed out to subscribing LibResilient host(s).
6. The LR client makes the request and feeds the result to the LR network.
7. The Tor network is included in this distribution, and caches and distributes the content (with expiry and other aspects).
8. TEP offers the alternative result to the requesting client in its result set. (This offer could be simultaneous with the RFAA, or come on the Tor Cache check.)
9. The client has the option of requesting the cached version instead of the origin.

Or something like that.

@dredmorbius yeah, but you don't need LibResilient for that. This can be implemented directly in Tor or Tor Browser. LR is specifically meant for regular websites.

@rysiek Tor traffic is (mostly) regular websites.

The problem Tor faces is that Tor exit nodes are both determinable and explicitly blocked or filtered.

LR clients effectively offer an alternative request network in normal-user IP space.

@dredmorbius but why would you implement that in JavaScript using Service Workers, if you're a Tor/Tor Browser developer?

LR is not a general proxy tool. It's a very specific tool, useful specifically for *websites that deploy it*, not for random tools that happen to interact with websites.

@dredmorbius either I am missing the point you are trying to make entirely, or I am completely failing at communicating what LR can and cannot do.

@rysiek There's a bit of my wanting to hear things and failing to grasp / recall what LR is doing.

Part of it, from my PoV, is that ... LR seems a somewhat peculiarly limited approach. I'm sure you've got your reasons / rationale.

For an example that's closer to what I'd described above, there's the way the Internet Archive seems to work, in which it farms out requests to consumer IP space. (You'll occasionally see this where websites auto-assign language based on IP geolocation, with the entirely predictable results. G+ was notorious for this.)

@dredmorbius the point of LibResilient is allowing website operators to deploy something that will keep their sites up and running *without requiring the visitors to do anything* (other than visiting the website).

There are plenty of tools that make it possible to access blocked websites for users who are open to using such tools. Tor Browser is one.

But there are very few tools that meet users where they already are. LibResilient aims to change that.

@rysiek Right. I'm getting that now (and/or again).

There's clearly a niche for that.

There's also ... a broader problem: a site that might face such issues has to preemptively recognise this and take measures (by implementing LR).

I've a few concerns with that, largely that 1) getting sufficient mindshare is hard, 2) bad actors might be more prone to adopt it, and 3) good actors might well fail to realise the need, or to complete the implementation.

Getting buy-in on even obviously good and useful technical concepts is bone-crushingly difficult.

I'd like to see an online-persistence mechanism that has fewer single-point roadblocks. E.g., the site doesn't have to specifically implement some library, but third parties might step up on that site's behalf.

(How clients become aware of that / those alternatives ... is another question, of course.)

This isn't a "you're wrong" but rather an "I've got a different idea / problem to solve" viewpoint.

File accordingly.

I'm continuing to watch with interest.

@rysiek That's ... the project-relational framework, yes, though I don't see the technical element I'd proposed reflected.

That said, thanks. You're doing my research for me, and I acknowledge that.
