Video of my talk about an idea for decentralized, community CDNs is online:
media.ccc.de/v/mch2022-198-tru

> I want a Web where CDNs are unnecessary.
>
> Where different organizations, different website operators, can help each other out by hosting assets for each others' websites, thus spreading the load across many orgs in solidarity, instead of centralizing it in gatekeepers.

An older fedi thread about just that:
mastodon.technology/@rysiek/10

@rysiek Does this address Squid, and why it is or is not sufficient?

@dredmorbius Squid is a caching reverse proxy. It can be used for caching and microcaching, which I mention in this talk as generally a good idea.

But Squid will not help you if your website is down because all your edge nodes are overwhelmed or because you are censored on DNS level.

LibResilient works in visitors' browsers, and *will* help in such circumstances (depending on the config).
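
To make "works in visitors' browsers" concrete, here is a minimal sketch of the fallback idea: a service worker that tries the origin first and retries the same path on mirrors run by cooperating orgs. This is a hypothetical illustration, not LibResilient's actual plugin API or config format, and the mirror hostnames are placeholders.

```typescript
// sw.ts — hypothetical sketch of browser-side fallback, not LibResilient's real code.

/// <reference lib="webworker" />
declare const self: ServiceWorkerGlobalScope;

// Placeholder mirrors; a real deployment would list actual cooperating hosts.
const ALTERNATE_ORIGINS = [
  "https://mirror-a.example.org",
  "https://mirror-b.example.net",
];

self.addEventListener("fetch", (event) => {
  event.respondWith(fetchWithFallback(event.request));
});

async function fetchWithFallback(request: Request): Promise<Response> {
  try {
    // Happy path: the origin is reachable and answers normally.
    const response = await fetch(request);
    if (response.ok) {
      return response;
    }
    throw new Error(`origin answered ${response.status}`);
  } catch {
    // Origin overwhelmed or DNS-blocked: try the same path on each mirror.
    const path = new URL(request.url).pathname;
    for (const origin of ALTERNATE_ORIGINS) {
      try {
        const mirrored = await fetch(origin + path, { mode: "cors" });
        if (mirrored.ok) {
          return mirrored;
        }
      } catch {
        // This mirror failed too; move on to the next one.
      }
    }
    return new Response("All origins unreachable", { status: 502 });
  }
}
```

Because the worker runs in the visitor's browser, the fallback still works when the origin's edge nodes are down or its DNS is blocked, which is exactly where a server-side cache can't help.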

@rysiek I may be misreading / misremembering the origins of Squid, but my understanding was that Harvest (the project from which it originated) was meant to provide the sort of generalised caching system you seem to be getting at. (I need to view your preso; I'm about to, and it will hopefully reduce my ignorance...)

> In this paper we introduced the Harvest system, which addresses these problems through a combination of topic-specific content indexing made possible by a very efficient distributed information gathering architecture; topology-adaptive index replication and hierarchical object caching; and structure-preserving indexes, flexible search engines, and data type-specific manipulation and integration mechanisms.

codeontheroad.com/papers/Harve

AFAIR Akamai ended up productionalising much of this as a commercial service.

(Hazy distant understanding / memories.)

@rysiek And, having listened to the preso: LibResilient sounds a lot like what you were calling Samizdat at one point, or is building off of it. Browser + web-workers with a server-side config for alternate origins.

The proxy cache concept (Squid) satisfies content requests by sourcing them through proxies. The original request goes through the proxy, either client-side (e.g., a local Squid caching proxy) or server-side.

What Squid configuration allows for is the specification of peers from which content can be fetched.
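
For illustration, peer specification in squid.conf looks roughly like this. The cache_peer directive is real Squid configuration; the hostnames and the particular options chosen are placeholders, a sketch rather than a recommended setup.

```
# Sketch of squid.conf peer specification (placeholder hostnames).
# Forward cache misses to a parent cache as a last resort:
cache_peer parent-cache.example.org parent 3128 3130 default
# Fetch objects from a sibling cache when it has them, without storing copies locally:
cache_peer sibling-cache.example.net sibling 3128 3130 proxy-only
```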

A chief problem with Squid, as with other Web proxy tools (e.g., Junkbuster Proxy, if you remember that, or Dansguardian), is that they play poorly with TLS/SSL traffic. Certificate pinning and other mechanisms make this hard to accommodate with the modern Web ... unless, as with Cloudflare, the TLS request terminates with the caching provider. (Problematic.)

I don't know that Squid has been used to provide robust support for content access even where the origin is dead ... though that does seem to be its basic function in fact.


@dredmorbius yes, LibResilient is an evolution of Samizdat. With time, Samizdat will just become a specific type of LibResilient deployment.
