@rysiek "Someone said 'cloud' to my director, 2yrs later we are stuck with all this broken shit."

@sullybiker @rysiek I still haven't figured out how to upgrade stuff inside docker containers. Sure, I can `docker exec -it <container> sh`, but then I feel like it's just another LXC and the image version starts lying, etc...

@saxnot @sullybiker you don't upgrade stuff in docker containers; they are supposed to be transient. If you need a newer version of something, redeploy from a newer image.

This obviously has some security implications, so I mostly use Docker images I roll myself (and rebuild at each deployment). But combined with the compartmentalization Docker offers, it's a fair trade, I feel.
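As a rough sketch of that redeploy-instead-of-upgrade workflow (image, container, and volume names here are all made-up placeholders):

```shell
# Rebuild the image from an updated base image and Dockerfile,
# then replace the running container instead of patching inside it.
docker build --pull -t myapp:2024-06-01 .

# Swap the old container for one from the fresh image.
docker stop myapp && docker rm myapp
docker run -d --name myapp \
  --mount source=myapp-data,target=/var/lib/myapp \
  myapp:2024-06-01
# Persistent state lives in the "myapp-data" volume,
# so it survives the container being destroyed.
```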

@saxnot @sullybiker this is honestly the most important thing one needs to understand if one wants to feel at home working with containers (docker or other): a container is not supposed to be long-term, and is not supposed to run multiple services.

Any long-term data storage needs to be outside of it, and updates are handled by redeploying it.

Any additional service should be run in a separate container, using well-defined interfaces (network APIs, etc) for communication, to avoid surprising side-effects.
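A hedged sketch of what that looks like in a docker-compose.yml (service and volume names are hypothetical; only the shape matters):

```yaml
# docker-compose.yml — one service per container, state in a volume,
# communication over a well-defined network interface
services:
  app:
    image: myapp:latest        # placeholder image
    depends_on: [db]
    environment:
      DB_HOST: db              # talks to the database over the network API
    networks: [backend]

  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data   # long-term data lives outside the container
    networks: [backend]

volumes:
  db-data:

networks:
  backend:
```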

@rysiek @saxnot I did a lot of work with our security office after some of our research Docker containers kept getting (harmlessly) compromised because they had unpassworded MongoDB instances facing the internet. One lesson was that nobody understood the networking, in particular the relation to the host's netfilter rules.

@rysiek @saxnot This was back when those MongoDB images came like that by default, though.

@sullybiker @saxnot oh yeah, that's a huge issue. People assume containers are not exposed outside, and that's often *wrong*. I did spend quite a while understanding docker networking to be able to get my netfilter rules to actually, you know, work in these cases.

Know your tools. 🤷‍♀️
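For reference, the hook Docker gives you for this is the DOCKER-USER iptables chain, which is evaluated before Docker's own forwarding rules. A sketch, not a drop-in ruleset (the subnet is a hypothetical "trusted" range from TEST-NET-3):

```shell
# -I prepends, so these are issued in reverse of the final evaluation order.
# Final order: allow established traffic, allow the trusted subnet,
# drop everything else arriving on the external interface.
iptables -I DOCKER-USER -i eth0 -j DROP
iptables -I DOCKER-USER -i eth0 -s 203.0.113.0/24 -j ACCEPT
iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```

Rules added to plain INPUT/OUTPUT chains won't help here, because traffic to published container ports traverses the FORWARD chain instead.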

@sullybiker @saxnot on the other hand, yes, compartmentalization helps a lot with security here. A compromised container is not necessarily an end-of-the-world event, depending on what access it has to other containers and the underlying system.

That's something I *really* appreciate about containers: if you're doing things correctly, you end up with a set of well-separated basic services communicating via well-defined interfaces with no side-effects/side-channels. Way easier to audit and reason about.


*IF* you are doing things *correctly*...

At this stage it's almost like Sparta's response to Alexander's father (Philip II threatened what he would do *if* he entered Laconia; the Spartans replied: "If.").
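A sketch of what "doing things correctly" can look like in compose terms; the services are hypothetical, but the hardening keys are standard Compose options:

```yaml
services:
  web:
    image: myweb:latest        # placeholder
    read_only: true            # immutable root filesystem
    tmpfs: [/tmp]              # scratch space, since the rootfs is read-only
    cap_drop: [ALL]            # no extra kernel capabilities
    networks: [frontend, backend]

  app:
    image: myapp:latest        # placeholder
    networks: [backend]        # only reachable from "web", never directly

networks:
  frontend:
  backend:
    internal: true             # no route to the outside world at all
```

With this shape, compromising `web` still leaves the attacker without capabilities, without a writable filesystem, and with `app` reachable only over its defined interface.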

@sullybiker @saxnot

> People assume containers are not exposed outside, and that's often *wrong*.

PSA: Docker bypasses UFW. Docker writes its own iptables rules ahead of UFW's chains, so if your server uses UFW and you publish ports from Docker containers behind it, those ports are most likely still visible outside.

Most "newbie" guides (that I've seen) will recommend UFW, and most "selfhosted" software will recommend Docker. Combine them both, and you fail.

What a wonderful world!
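One hedged workaround, assuming the port doesn't need to be public at all: publish it only on loopback, so the iptables rules Docker generates never expose it externally, regardless of what UFW does.

```yaml
# docker-compose.yml fragment (hypothetical service)
services:
  db:
    image: mongo:7
    ports:
      - "127.0.0.1:27017:27017"   # reachable from the host only, not the internet
```

A bare `"27017:27017"` here would bind to 0.0.0.0 and punch straight past UFW; the explicit loopback address is the difference.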

@saxnot @sullybiker it makes sense in some situations and is an absolute clusterfuck in others. A hammer is not a screwdriver. :blobcatfingerguns:

@rysiek @saxnot I can see why they're so popular in DevOps, but they're also not well understood, at least in my organisation.

@rysiek @saxnot They scale well and if you run web applications they're fantastic.

@sullybiker @saxnot this is a common problem, because they so utterly go against many of the established assumptions/rules in systems administration.

But this does make sense in certain situations, and spending the time to understand them really pays off then.

@rysiek @saxnot Their immutable create/destroy nature is pretty cool, I think

@sullybiker @saxnot totally. And how they force you to really think about how pieces of your system communicate together, and come up with clear rules for that.
