
what kinda projects are out there working on git and software distribution tools? Are we working on federating NPM yet?

@specter the important question is: do you really want the vulnerabilities that come with not centralizing these?
@specter maven is that java nightmare, right? I don't think it's federated; it's still a central repo, from what I remember when auditing infra/dev environments at a previous job

@feld yeah, I'm not very familiar; I'd have to understand software signing practices and stuff better. but I thought the whole concept of repos in apt and maven and stuff was to decentralize the load. that, and it's already possible to host your own NPM registry; GitHub offers it as a service. GH is centralized, but in theory that means you could treat registries as repos, on whatever server, and add the ones you need to a project? idk
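
Like, a rough sketch of what I'm picturing (Python; the @myorg registry URL is made up, only registry.npmjs.org is real):

```python
import json
import urllib.parse
import urllib.request

# scope -> registry. The @myorg URL is a hypothetical self-hosted registry;
# the fallback is the real public one.
REGISTRIES = {
    "@myorg": "https://npm.myorg.example",
    "": "https://registry.npmjs.org",
}

def resolve(package: str, version: str) -> dict:
    """Look up an npm-style package in whichever registry owns its scope."""
    scope = package.split("/")[0] if package.startswith("@") else ""
    base = REGISTRIES.get(scope, REGISTRIES[""])
    # scoped names are URL-encoded as @scope%2Fname in registry URLs
    url = f"{base}/{urllib.parse.quote(package, safe='@')}"
    with urllib.request.urlopen(url) as resp:
        meta = json.load(resp)
    dist = meta["versions"][version]["dist"]
    return {"tarball": dist["tarball"], "integrity": dist.get("integrity")}

print(resolve("left-pad", "1.3.0"))
```

Which is basically what npm's scoped-registry config already does, just per-project instead of one global default.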

@specter package repos aren't "federated", whether it be npm or apt or maven. It's just an HTTP server somewhere; there's no concept of federation.

As for federating things like git: people need to ask how the trust model works. What's going to prevent someone from publishing vulnerable commits? Who gets to own the master/develop branches? What happens if someone tags a new release with malicious code inside?

If you have to change the model to only allow trusted servers to participate and you only give rights out to certain people to perform actions on certain areas of the repository, what have you actually gained other than massive complexity? You still need a trusted gatekeeper or team.

So if what you really want is replication of the repo for high availability, we already have that functionality.

If you want issues/wiki to not be forced to live on github.com as a single point of failure, someone should actually extend that functionality into git itself. Fossil did it like 15 years ago, but everyone keeps overlooking Fossil because of the Cult of Linus.

@feld yeah I guess to your point git itself is already decentralized; the supplementary development process is really where the problem lies. Perhaps then what we want is a federated GitHub, not git. And since GH is also an alt NPM, it'd be an all-inclusive software distribution platform. Maybe that's just self-hosted like GitLab, but maybe federation enables cross-org collaboration??

@specter

> Maybe that's just self-hosted like GitLab, but maybe federation enables cross-org collaboration??

So perhaps what we really want is for Pleroma/Mastodon to be OAuth-like identity providers: someone creates a GitHub/GitLab that replicates the code repo *and* issues/wiki, and you can log in to any copy of the repository with your fedi account via OAuth/OpenID/IndieAuth.
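
Hand-wavy sketch of that login flow: the forge ("gitfed.example") is imaginary, but the OAuth endpoints are the ones Mastodon actually exposes today:

```python
import json
import urllib.parse
import urllib.request

INSTANCE = "https://mastodon.example"          # the user's home instance
REDIRECT = "https://gitfed.example/callback"   # hypothetical forge callback

def post(url: str, data: dict) -> dict:
    """POST form-encoded data and decode the JSON reply."""
    body = urllib.parse.urlencode(data).encode()
    with urllib.request.urlopen(urllib.request.Request(url, data=body)) as r:
        return json.load(r)

# 1. The forge registers itself as an OAuth app on the user's instance.
app = post(f"{INSTANCE}/api/v1/apps", {
    "client_name": "gitfed",
    "redirect_uris": REDIRECT,
    "scopes": "read:accounts",
})

# 2. Send the user to their own instance to approve the login.
auth_url = f"{INSTANCE}/oauth/authorize?" + urllib.parse.urlencode({
    "client_id": app["client_id"],
    "redirect_uri": REDIRECT,
    "response_type": "code",
    "scope": "read:accounts",
})
print("visit:", auth_url)

def finish_login(code: str) -> dict:
    # 3. After the redirect back, exchange the code for a token...
    token = post(f"{INSTANCE}/oauth/token", {
        "grant_type": "authorization_code",
        "code": code,
        "client_id": app["client_id"],
        "client_secret": app["client_secret"],
        "redirect_uri": REDIRECT,
    })
    # 4. ...and ask the instance who this user actually is.
    req = urllib.request.Request(
        f"{INSTANCE}/api/v1/accounts/verify_credentials",
        headers={"Authorization": f"Bearer {token['access_token']}"},
    )
    with urllib.request.urlopen(req) as r:
        return json.load(r)  # acct, display name, etc. -> forge identity
```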

Now the only other problem I can foresee is: how the hell do you handle it when these servers hit a merge conflict while replicating their copy of the data?

@feld I'd try to keep git itself out of the federation, 'cause like we've said it's already decentralized; the problem is controlling the development process. Federated JIRA makes sense, I think, but I guess my original goal is for NPM and GH not to own distribution. Maybe federation doesn't need to be involved in that part either, simply each org self-hosting their distributables. The governance of common libs would be hashed out in that domain, i.e. which server vs. which username is providing this resource.

@specter

> Maybe federation doesn't need to be involved in that part either, simply each org self-hosting their distributables

Well, traditionally for apt/rpm/BSDs there has been a worldwide network of community-hosted mirrors, updated via rsync or ZFS snapshots to replicate the latest package dataset. That's always been sufficient.

I take issue with the idea of adding a ton of repos to my server. There are very few situations where I find that palatable; I do not like and do not trust the people running them. Just like PPAs in Ubuntu-land, or the EPEL repo: we banned them all for security reasons at a power company and required that any external package not provided by the main OS vendor be audited and imported into our own private package repo.

Reason being that those 3rd-party repos are always poorly secured and you never know who is allowed to publish packages. And then what happens when some idiot accidentally publishes a popular package in their own repo and it gets picked up automatically and installed... and worse, maybe it has malware in it... and you didn't realize the terrible dependency resolution code in your package manager willingly picked up the "newer" version of the package from another repo, because the change was hidden in a wall of text or suppressed by your automation tools?

You can't really "pin" packages to the first repo they came from without running into problems in Linux-land, because with a repo like EPEL you *must* update certain base packages (they're so old) to support newer versions of the software you actually want.
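
To make the failure mode concrete, here's a toy sketch in Python (no real package manager works exactly like this) of "highest version wins" across repos versus pinning to the origin repo:

```python
# (name, version, repo) -- imagine "sketchy-repo" is a 3rd-party repo that
# suddenly publishes a "newer" openssl.
AVAILABLE = [
    ("openssl", (1, 1, 1), "base-os"),
    ("openssl", (9, 9, 9), "sketchy-repo"),   # could be malware
]
INSTALLED_ORIGIN = {"openssl": "base-os"}

def upgrade_naive(name):
    # What most resolvers do: take the highest version, whatever repo it's in.
    return max((p for p in AVAILABLE if p[0] == name), key=lambda p: p[1])

def upgrade_pinned(name):
    # Origin-pinned: only consider the repo the package originally came from.
    origin = INSTALLED_ORIGIN[name]
    candidates = [p for p in AVAILABLE if p[0] == name and p[2] == origin]
    return max(candidates, key=lambda p: p[1])

print(upgrade_naive("openssl"))   # ('openssl', (9, 9, 9), 'sketchy-repo')
print(upgrade_pinned("openssl"))  # ('openssl', (1, 1, 1), 'base-os')
```

And EPEL is exactly where the pinned variant falls apart, because sometimes the 3rd-party repo legitimately *has* to replace base packages.
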
@specter when it comes to OS/packages/code you really want a simple trust model with as small of an attack surface as possible. You need trustworthy, competent humans who follow good security practices and cryptographically sign releases using good, modern tooling.

@feld is the main OS vendor not just a server you trust? You're always gonna have to vet your dependencies. Couldn't formalizing the process a bit (via ActivityPub extensions?) enable "third parties" to come online and establish trust more easily? If a project could fork Ubuntu's process by propping up an instance, then their modifications could be audited as a cleaner diff of procedure. I'm thinking maybe it levels the playing field? Auditing is inescapable though.

@specter

> is the main OS vendor not just a server you trust? You're always gonna have to vet your dependencies.

I don't know about you, but I've met the FreeBSD core dev team in person and I'm also a developer myself, involved in ports/packages. I only have to trust that the checksums for the OS install media match the ones in the published/signed announcement, and that the included crypto key for verifying the packages doesn't throw any error when I install packages.
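
That trust step is conceptually tiny; a minimal sketch (the filename and the announced digest are placeholders, not real release values):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a large file in chunks so the ISO doesn't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

announced = "0123abcd"  # placeholder: value from the signed announcement
actual = sha256_of("FreeBSD-RELEASE-amd64-disc1.iso")  # placeholder filename
assert actual == announced, "install media does not match the announcement!"
```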

In FreeBSD we only sign the repo, a SQLite file that contains the listing of all packages and their metadata, including the sha256 checksum of each package.
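
The idea, sketched in Python (the schema here is assumed for illustration, not pkg(8)'s actual one): once the catalog's signature checks out, every checksum inside it is trusted, so verifying a package before install is just a lookup and a hash:

```python
import hashlib
import sqlite3

def verify_package(catalog_db: str, pkg_file: str, name: str) -> bool:
    # Hypothetical schema: packages(name TEXT, version TEXT, sha256 TEXT)
    conn = sqlite3.connect(catalog_db)
    row = conn.execute(
        "SELECT sha256 FROM packages WHERE name = ?", (name,)
    ).fetchone()
    conn.close()
    if row is None:
        raise KeyError(f"{name} not in catalog")
    with open(pkg_file, "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()
    return actual == row[0]  # mismatch => refuse to install
```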

The packages are built from the ports tree, a big SVN repo that few people have write access to. Each port includes the sha256 of the source code tarball, which the maintainer has verified against what the 3rd-party software authors publish, where possible.
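
Those pins live in each port's distinfo file; the check is roughly this (a Python sketch, the real one lives in the ports framework's make targets):

```python
import hashlib
import re

def parse_distinfo(path: str) -> dict:
    """Collect pinned digests from lines like: SHA256 (foo-1.0.0.tar.gz) = <hex>"""
    pins = {}
    pattern = re.compile(r"^SHA256 \((.+)\) = ([0-9a-f]{64})$")
    with open(path) as f:
        for line in f:
            m = pattern.match(line.strip())
            if m:
                pins[m.group(1)] = m.group(2)
    return pins

def check_distfile(distinfo: str, tarball: str, distname: str) -> bool:
    pins = parse_distinfo(distinfo)
    with open(tarball, "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()
    return actual == pins[distname]
```

Any drift in the upstream tarball, malicious or otherwise, shows up as a hard failure here.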

We have issues with people fetching "release tarballs" from GitHub, because that feature is broken by design: the tarballs are generated on demand from specific commits in the git repo and cached regionally, so when the US and EU GitHub servers are running different OS versions, the same tarball can end up with different checksums.

This entire methodology has served us well, and we have even found hacked mirrors of open source projects because our builder or someone in the community reported checksum errors on foo-1.0.0.tgz, while Debian/Red Hat/etc. never validate the tarball's integrity.

All of this is run on centralized hosting by a core team who is paid/funded to keep it running.

It's a good model you can trust, but it does make it hard to grow the team when we don't let random contributors have direct access.

@specter tl;dr: this is all tough to do, but the best option we have is to get to know who is running the show, get involved, and build stronger community and developer relationships. Trust is hard, but we can't allow anonymous actors write access to things.

This is also why I can never trust the Docker model.