The FOSS North conference has been delayed until November, but they're putting up lightning talks now and, in a week, some of the scheduled talks from the conference, free to view:

The top-level BearSSL Makefile is a thing of beauty:

# The lines below are a horrible hack that nonetheless works. On a
# "make" utility compatible with Single Unix v4 (this includes GNU and
# BSD make), the '\' at the end of a command line counts as an escape
# for the newline character, so the next line is still a comment.
# However, Microsoft's nmake.exe (that comes with Visual Studio) does
# not interpret the final '\' that way in a comment. The end result is
# that when using nmake.exe, this will include "mk/", whereas
# GNU/BSD make will include "mk/".

# \
!ifndef 0 # \
!include mk/ # \
!else
include mk/
# Extra hack for OpenBSD make.
ifndef: all
0: all
endif: all
# \
!endif

This is a nice illustration of the value of spreading disease cases over time, even if the resetting animation is a bit distracting.

Alright, now I'll need to implement delete_block_chain(root_block)…

flu, covid, and other communicable illnesses 

So it won't work. Which means that to make it work I'll need to either force-migrate the company to an AD domain or something else that can force-push CAs to client machines, or I'll have to figure out a way to automatically update DNS (hosted by some place with no automation support) to support letsencrypt DNS challenges.

Or perhaps easiest: refuse to work with Teams.


Once again browser developers make my life harder: Microsoft Teams (which is run in a browser) has a feature to add a website tab. But that requires the site to be https, and my internal website (with no connection to the Internet) doesn't have https.

This is what I return to after my vacation. Living a bit further south might not be a terrible idea?

Spoiler alert: It is surprising that the Internet works.

@colmmacc: Tuesday Technical Tweet Thread Time! Let's go on the roller coaster of what happens at a low level when a DNS server sends a 4,000-byte EDNS0 response to a client whose MTU is 1200 bytes. Confused already? Don't worry, we'll break it down. I promise it's super interesting.

@colmmacc: o.k. so DNS, the so-called "phonebook of the internet" (if you look the other way and ignore that that's a better metaphor for Google) ... ANYWAY ... DNS runs over the User Datagram Protocol (UDP).

@colmmacc: If you know anything about UDP it's that it's a "Fire and Forget" protocol, it can be lossy. From the perspective of an application, you send a packet, and it gets to the other end or it doesn't. If you want reliability, you have to retry yourself or have some kind of fallback.
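Since UDP gives no delivery guarantee, reliability is the application's job. Here's a minimal sketch of the "retry yourself" approach in Python; the retry count and timeout are illustrative choices, not anything from the thread:

```python
import socket
from typing import Optional

def udp_request(payload: bytes, addr, retries: int = 3, timeout: float = 2.0) -> Optional[bytes]:
    """Send a UDP datagram and wait for a reply, resending on timeout.

    UDP itself is fire-and-forget: the datagram arrives or it doesn't.
    Reliability, if you want it, is bolted on here as a fixed number
    of retries with a per-attempt timeout.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        for _ in range(retries):
            sock.sendto(payload, addr)
            try:
                reply, _ = sock.recvfrom(65535)
                return reply
            except socket.timeout:
                continue  # the request (or the reply) was lost; try again
        return None  # gave up -- the caller needs some kind of fallback
    finally:
        sock.close()
```

This is essentially what a DNS stub resolver does: send, wait, resend, and eventually give up or fall back.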

@colmmacc: UDP is a "layer 4" protocol. It runs on top of the Internet Protocol .. which is a "layer 3" protocol. Scare quotes because the OSI layering model is insanely wrong and confusing in modern contexts.

@colmmacc: Here's what an IP header looks like. Study it. There will be an exam.

@colmmacc: And here's a UDP header ...

@colmmacc: O.k. so back to DNS for a bit. So DNS is a request-response protocol. Requests are like "What's the IP address for" and responses are like "here's the IP addresses for".

@colmmacc: IP-based networks have limits for how much data they can handle in a single packet. Now if you study the IP and UDP headers, you'll see that they both have "length" fields, and these fields are 16 bits. So you'd think that we should be able to send up to 65535 bytes in a packet.
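Those 16-bit length fields are easy to see if you build a header by hand. A small sketch of packing and unpacking the 8-byte UDP header (four big-endian 16-bit fields; the port numbers below are arbitrary):

```python
import struct

# UDP header: source port, destination port, length, checksum --
# four 16-bit big-endian fields, 8 bytes total. The 16-bit length
# field is what caps a whole UDP datagram at 65,535 bytes.

def build_udp_header(src_port: int, dst_port: int, payload_len: int, checksum: int = 0) -> bytes:
    length = 8 + payload_len  # the header itself counts toward the length field
    assert length <= 0xFFFF, "won't fit in a 16-bit length field"
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

def parse_udp_header(data: bytes) -> dict:
    src, dst, length, checksum = struct.unpack("!HHHH", data[:8])
    return {"src": src, "dst": dst, "length": length, "checksum": checksum}
```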

@colmmacc: That would be way too simple. Historically, the size of the packets we could send has actually been limited by the underlying technology. Ethernet is often limited to 1500 bytes per packet, or up to 9,000 if you enable a special mode called "Jumboframes".

@colmmacc: Satellites, Radio systems, things like POS (Packet over SONET), ATM, DSL, Token Ring, and more. These could all have different MTUs that actually limit how much data you could send or receive in one packet.

@colmmacc: The underlying reasons are usually esoteric, like how long you're allowed to send signals for at the electrical level. Anyway, main point: different mediums = different MTUs. MTU stands for "Maximum Transmission Unit".

@colmmacc: Different networks can be linked though, and you can only get a packet through all of the links if it's <= the lowest MTU of any of the links, so you need a way to discover the whole path's MTU. We'll come back to this, I promise. Placeholder for now.
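In other words, the usable packet size for a whole path is just the minimum over its links. Trivial, but worth stating as code:

```python
def path_mtu(link_mtus):
    """A packet traverses every link on the path, so the biggest
    packet that fits everywhere unfragmented is bounded by the
    smallest link MTU along the way."""
    return min(link_mtus)
```

For example, a path over jumbo-frame Ethernet (9000), plain Ethernet (1500), and a 1200-byte tunnel is limited to 1200 bytes end to end.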

@colmmacc: There's also a minimum MTU that any IP network has to support. It's 576 bytes, everything has to handle that. So DNS traditionally takes a shortcut around path MTU discovery and just says "let's use UDP and just stick to the minimum".

@colmmacc: So it says "Requests and responses have to be smaller than 512 bytes", which, when you add the 8-byte UDP header and the 20-byte IP header, leaves 36 bytes of space for IP options. So it all fits.
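That size budget checks out against the 576-byte IPv4 minimum (using the standard header sizes: 20 bytes for IPv4 without options, 8 bytes for UDP):

```python
# The classic DNS-over-UDP size budget, against the 576-byte
# minimum datagram size every IPv4 host must be able to handle.
IP_MIN_DATAGRAM = 576
IP_HEADER = 20     # IPv4 header, no options
UDP_HEADER = 8
DNS_MAX_UDP = 512  # classic pre-EDNS0 DNS message limit

spare = IP_MIN_DATAGRAM - IP_HEADER - UDP_HEADER - DNS_MAX_UDP
# 36 bytes left over for IP options, so everything still fits.
```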

@colmmacc: Fitting a DNS request into 512 bytes isn't too hard, it's why DNS has a limit on how big domain names can be. But since humans have to type them sometimes, long ones would always be a pain. No big deal.

@colmmacc: Fitting an entire DNS response into 512 bytes was easy to start out with .. just a few IPs, but it's gotten harder over time. Email, Anti-Spam measures, IPv6, and DNSSEC (which is garbage, but that's a different topic) have all pushed the size of responses up and up.

@colmmacc: DNS has a built in mechanism to handle this, called truncation. If the response that a server needs to send is too big, it sends as much as it can and then marks a bit that means "This response was truncated".
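The truncation marker is a single bit (TC) inside the 16-bit flags field of the 12-byte DNS header. A sketch of checking it on a raw DNS message:

```python
import struct

TC_MASK = 0x0200  # the TC ("truncated") bit in the DNS header flags field

def is_truncated(dns_message: bytes) -> bool:
    """Check the truncation bit of a raw DNS message.

    The 12-byte DNS header is: ID (16 bits), flags (16 bits), then
    four 16-bit record counts. A set TC bit is the server's cue to
    the client: this answer was cut short, retry over TCP.
    """
    (flags,) = struct.unpack("!H", dns_message[2:4])
    return bool(flags & TC_MASK)
```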

@colmmacc: The requester then retries the request over TCP, instead of UDP. Because TCP is intended for "bigger" messages. Two problems with this: sometimes people block TCP DNS without realizing, and TCP still has to figure out what the path MTU is.

@colmmacc: So here's how path MTU discovery works for TCP. The TCP connection starts with its best guess of what the MTU is, and calculates a "Maximum Segment Size" from this. This isn't the TCP window. It's the maximum data that TCP will put in a single packet.
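The MSS arithmetic is straightforward: subtract the IP and TCP headers from the MTU guess. A sketch assuming IPv4 and no TCP options (real stacks adjust for options like timestamps):

```python
IP_HEADER = 20   # IPv4, no options
TCP_HEADER = 20  # base TCP header, no options

def mss_for_mtu(mtu: int) -> int:
    """Maximum Segment Size: the TCP payload that fits in one packet
    once IP and TCP headers are subtracted from the link MTU.
    (Distinct from the TCP window, which is about data in flight.)"""
    return mtu - IP_HEADER - TCP_HEADER
```

So a 1500-byte Ethernet MTU gives the familiar 1460-byte MSS, and a 1200-byte path gives 1160.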

@colmmacc: The client sends SYN, the server sends SYN|ACK, usual TCP handshake. These packets are small and rarely trigger path MTU discovery. For DNS, the request packet will be small too and likely won't trigger anything.

@colmmacc: But when the server sends the response, that will be big, maybe too big. But it sends it out anyway. That packet gets as far as it gets. If it gets to a link that has a smaller MTU, that link sends back an Internet Control Message Protocol (ICMP) message that says "MTU exceeded".

@colmmacc: It sends this back to the sender ... the DNS server in this case. And so that the sender has a clue about what even triggered the error, it includes the top part of the packet that triggered it.

@colmmacc: Basically the server tried to send a long letter by relay mail, and got to a relay that only has small envelopes. So that relay tore the original letter in half and sent back that half in a small envelope saying "I only have small envelopes, so try again, good luck".

@colmmacc: O.k. so now the sender knows a new lower MTU for that path, and it caches it. The kernel keeps a cache of all MTUs it knows for all destinations. Now it recomputes a new TCP MSS and resends the data but in smaller packets. Yay! This is part of why TCP is reliable.

@colmmacc: This whole process takes a damn while though; the client had to fall back to TCP, and then the path MTU discovery had to happen, and finally the client gets the response ... as long as TCP wasn't blocked to begin with.

@colmmacc: So the DNS folks are like "Wouldn't it be great if we could just send large responses over UDP". And it's true. It would. And so it was invented, as part of EDNS0, a general extension mechanism for DNS.

@colmmacc: With EDNS0 enabled, the client can include a little fake sort of record that says "hey, I support EDNS0, and also I'm good if you send me large UDP responses ... up to 4,000 bytes, say".
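That "little fake sort of record" is the OPT pseudo-record from RFC 6891, which repurposes the record's CLASS field to advertise the client's UDP payload size. A minimal sketch of building one (no EDNS options, extended RCODE and flags left zero):

```python
import struct

def edns0_opt(udp_payload_size: int = 4096) -> bytes:
    """Build a bare EDNS0 OPT pseudo-record for the additional section.

    Per RFC 6891: the name is the root (a single zero byte), TYPE is
    41 (OPT), and the CLASS field -- repurposed -- carries the largest
    UDP response the client is willing to receive. The 32-bit TTL
    field carries extended RCODE/flags, and RDLENGTH is zero when no
    EDNS options are attached.
    """
    name = b"\x00"  # root domain name
    return name + struct.pack("!HHIH", 41, udp_payload_size, 0, 0)
```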

@colmmacc: If a server also supports EDNS0, it can just send larger UDP responses, rather than truncating and falling back to TCP. We're now nearing the top of the rollercoaster.

@colmmacc: What happens when we send a large UDP response? Well as we saw earlier the UDP and IP headers both have length fields. The way it works out is that fragmentation actually happens at the IP layer ...

@colmmacc: Suppose we try to send a 4,000 byte UDP datagram over a 1500 byte MTU IP network, what happens is that it gets broken into 3 IP "fragments". The first contains the first part of the UDP message, including the UDP header itself, and the next packets follow on from there.

@colmmacc: Each fragment will have the same IP "identification" header (IPID) and different fragment offsets. It's the kernel's job to wait for all of the fragments to show up and pass them on to an application.
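The 4,000-byte example can be worked through directly. A sketch assuming IPv4 (20-byte header, no options): fragment payloads must be multiples of 8 bytes because the offset field counts 8-byte units, and only the first fragment carries the UDP header.

```python
def fragment_sizes(udp_payload: int, mtu: int, ip_header: int = 20, udp_header: int = 8):
    """Sketch of IPv4 fragmentation: split a UDP datagram into the
    (byte offset, payload size) pairs the kernel would emit.

    Each fragment carries up to (mtu - ip_header) bytes, rounded down
    to a multiple of 8 because fragment offsets are stored in 8-byte
    units. The actual header field would store offset // 8.
    """
    total = udp_header + udp_payload        # the full UDP datagram
    per_frag = (mtu - ip_header) // 8 * 8   # max payload per fragment
    frags, offset = [], 0
    while offset < total:
        size = min(per_frag, total - offset)
        frags.append((offset, size))
        offset += size
    return frags
```

For a 4,008-byte UDP datagram (4,000 bytes of DNS plus the 8-byte UDP header) over a 1500-byte MTU, this yields the three fragments mentioned above: 1480, 1480, and 1048 bytes of payload.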

@colmmacc: Path MTU discovery works the same as with TCP. If the first fragment is too big, the sender will get an error. But since applications usually don't retry UDP responses, it can also just show up as that message being entirely dropped.

@colmmacc: It also means that fragments of the message look strange to many parts of the network, like firewalls and switches. UDP packets without UDP headers! How are they supposed to know whether the packet is allowed or not? How is flow-switching supposed to work? The internet shrugs.

@colmmacc: We often end up building in the ability to relate these packets to one another in many places, and since that's expensive, they also have to be rate-limited. It's deeply messy.

@colmmacc: That's what's going on down there though, and understanding that full picture is key to diagnosing some common network mysteries. O.k. I have a team meeting now, more later!

@colmmacc: Meeting postponed! o.k. so what happens when the DNS server sends a 4K response and the MTU is 1200 bytes? Well the DNS server gets the error, and it fixes the next response, but that first one is lost. Super annoying. So the client has to retry, but then it works.

@colmmacc: Some more fun: network paths don't have to be symmetrical, so MTUs don't either. If the client needs to send a large amount of data, this whole process happens for them too. A sender and a receiver can legitimately end up with different limits towards one another.

@colmmacc: Because MTU discovery depends on state, and on ICMP messages being allowed, some folks do something like "MSS clamping" where for TCP they have the network actually meddle with the TCP connection (a little) to offer a different MSS to the other end.

@colmmacc: It is surprising that the Internet works.

Count votes on paper ballots. By hand.

a software professional

Poor taste in humor 

Software is like an onion. If you open up the inner layers you start crying

All transit in Wuhan being shut down is hard to imagine for anybody who doesn't know the city:

It is about the size of greater London, but replace the Thames with the Yangtze which is about 500m wide as it cuts the city in half. Now remove all buses, the Tube, trains and river ferries. Without transit, most of the 11 million people are essentially under house arrest.

Where does a mansplainer get his water?
From a well, actually.

So I was recently asked why I prefer to use free and open source software over more conventional and popular proprietary software and services.

A few years ago I was an avid Google user. I was deeply embedded in the Google ecosystem and used their products everywhere. I used Gmail for email, Google Calendar and Contacts for PIM, YouTube for entertainment, Google Newsstand for news, Android for mobile, and Chrome as my web browser.

I would upload all of my family photos to Google Photos and all of my personal documents to Google Drive (which were all in Google Docs format). I used Google Domains to register my domain names for websites where I would keep track of my users using Google Analytics and monetize them using Google AdSense.

I used Google Hangouts (one of Google’s previous messaging plays) to communicate with friends and family and Google Wallet (with debit card) to buy things online and in-store.

My home is covered with Google Homes (1 in my office, 1 in my bedroom, 1 in the main living area) which I would use to play music on my Google Play Music subscription and podcasts from Google Podcasts.

I have easily invested thousands of dollars into my Google account to buy movies, TV shows, apps, and Google hardware devices. This was truly the Google life.

Then one day, I received an email from Google that changed everything.

“Your account has been suspended”

Just the thing you want to wake up to in the morning. An email from Google saying that your account has been suspended due to a perceived Terms of Use violation. No prior warning. No appeals process. No number to call. Trying to sign in to your Google account yields an error and all of your connected devices are signed out. All of your Google data, your photos, emails, contacts, calendars, purchased movies and TV shows. All gone.

I nearly had a heart attack, until I saw that the Google account that had been suspended was in fact not my main personal Google account, but a throwaway Gmail account that I created years prior for a project. I hadn’t touched the other account since creation and forgot it existed. Apparently my personal Gmail was listed as the recovery address for the throwaway account and that’s why I received the termination email.

Although I was able to breathe a sigh of relief this time, the email was a wake-up call. I was forced to critically reevaluate my dependence on a single company for all the tech products and services in my life.

I found myself to be a frog in a heating pot of water and I made the decision that I was going to jump out.

Leaving Google

Today there are plenty of lists on the internet providing alternatives to Google services, such as this and this. The "DeGoogle" movement, however, was still in its infancy when I was making the move.

The first Google service I decided to drop was Gmail, the heart of my online identity. I migrated to Fastmail with my own domain in case I needed to move again (hint: glad I did, now I self host my email). Fastmail also provided calendar and contacts solutions so that took care of leaving Google Calendar and Contacts.

Here are some other alternatives that I moved to:

Gmail → Fastmail → Self-hosted (via Cloudron)
Google Contacts → Fastmail → Nextcloud Contacts
Google Calendar → Fastmail → Nextcloud Calendar
Google Search → Bing → DuckDuckGo
Google Maps → Bing Maps → OpenStreetMap and OsmAnd
Google Analytics → Matomo Analytics
Google Drive → Nextcloud Files
Google Photos → Nextcloud Files/Gallery
Google Docs → Collabora Office (Nextcloud integration) and LibreOffice
Google Play Music → Spotify / Plex → Spotify / Jellyfin
Google Play Movies/TV → Plex → Jellyfin
Google Play Audiobooks/Books → Audible/Kindle
Google Play Store (apps) → F-Droid / Aurora Store
Google Android → Lineage OS → Ubuntu Touch on PinePhone (coming soon?)
Google’s Android Apps → Simple Mobile Tools
Google Chrome → Mozilla Firefox
Google Domains → Hover
Google Hangouts → Matrix and Nextcloud Talk
Google Allo → Signal
Google Podcasts → PocketCasts → AntennaPod
Google Newsstand → RSS
Google Wallet → PayPal and Cash App
Google Voice → Ting Mobile

Migrating away from Google was not a fast or easy process. It took years to get where I am now, and there are still two Google services that I depend on: YouTube and Google Home.

Eventually, my Google Homes will grow old and become unsupported, at which point hopefully the Mycroft devices will have matured and become available for purchase. YouTube may never be replaced (although I do hope for projects like PeerTube to succeed), but I find the compromise of using only one or two Google services to be acceptable.

At this point losing my Google account due to a mistake in their machine learning would largely be inconsequential and my focus has shifted to leaving Amazon which I use for most of my shopping and cloud services.

The reason that I moved to mostly FOSS applications is that it seems to be the only software ecosystem where everything works seamlessly together and I don’t have to cede control to any single company. Alternatively I could have simply split my service usage up evenly across Google, Microsoft, Amazon, and Apple but I don’t feel that they would have worked as nicely together.

Overall I’m very happy with the open source ecosystem. I use Ubuntu with KDE on all of my computers and Android (no GApps) on my mobile phone. I’ve ordered the PinePhone “Brave Heart” and hope to one day be able to use it or one of its successors as a daily driver with Ubuntu Touch or Plasma Mobile.

I don’t want to give the impression that I exclusively use open source software either; I do use a number of proprietary apps, including Sublime Text, Typora, and Cloudron.

tech, accessibility

Mastodon for Tech Folks