I might stop doing my semi-regular #ItsAGraph / #NotATree / "tooling used by the software industry is fundamentally broken on a philosophical level" / "organizing code in plaintext files is incredibly, ridiculously wasteful" rants.
By accident, I found this:
...which covers 90% of the things I thought and ranted about over the last ~5 years, but better.
Seriously, go and read it.
And this seems to be a proper attempt to make a programming environment that doesn't suck: https://gtoolkit.com/
It's #Smalltalk, because of course it is.
Gonna push it to the front of my "to play with" list. I can live with learning me some Smalltalk, or any other language for that matter, if it lets me work in an environment that doesn't make me want to stab myself in the eyes with a dull spoon every single day.
I'm gonna say something blasphemous here: in the context of these fundamental issues, #Emacs also sucks hard. So does #Lisp in general. Yes, they're immensely more ergonomic, malleable and powerful than their more mainstream competition, but they're still hindered by the same fundamental issue: they have the nature of writing code in plaintext files deeply embedded in their DNA.
(And unfortunately, I don't see a way for Emacs to improve here, as long as text buffers are its fundamental concept.)
This is turning into an unexpected thread 🧵, sorry. But there's one idea I couldn't put properly into words until now:
The problem with our tooling isn't plaintext representation per se. The problem is that it's simultaneously:
1) the ultimate, canonical representation of a program - the "single source of truth",
2) the representation we work on directly when creating that program, and
3) usually the *only* representation we work on.
The resulting medium is not powerful enough to manage complexity efficiently.
Here's why this is a problem: it makes us commit up front to a single view of a program, emphasizing some concepts while making other - and often equally important - concepts implicit.
Because we have only one canonical representation of a program, it can support only a single way of understanding it.
The art of writing readable and maintainable code is necessary because of this: we can't express every concept properly at the same time, so we have to pick which ones to express clearly, and let the rest get smeared across the code.
🆕 A little addendum for the whole subtree under the parent I'm replying to.
AFAIK #AOP (aspect-oriented programming) wasn't well received by the #programming community at large. I'm going to study the arguments brought up against it in more detail, but so far my vague impression is that they're both correct and missing the point.
They're correct in that AOP is non-local - "spooky action at a distance" - making codebases harder to comprehend and debug without help from special tooling.
They're missing the point in that the real problem, IMO, is the plaintext, file-oriented, tree-structured form in which we write code - and for which we design our tools.
"Cross-cutting concerns" are, by definition, cross-cutting. Non-locality is an artifact of code format & tooling. When you turn a graph problem into a tree problem, you lose some edges.
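To make that concrete, here's a minimal Python sketch of the tension (the `logged` decorator and everything around it are my own toy illustration, not any real AOP framework):

```python
import functools

calls = []  # stand-in for a real log sink

def logged(fn):
    """A poor man's 'aspect': weaves logging around a function."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        calls.append(f"enter {fn.__name__}")
        result = fn(*args, **kwargs)
        calls.append(f"exit {fn.__name__}")
        return result
    return wrapper

# The logging POLICY lives in one place, but it acts non-locally on every
# decorated function - the "spooky action at a distance" critics point at.
@logged
def add(a, b):
    return a + b

@logged
def mul(a, b):
    return a * b

add(2, 3)
mul(2, 3)
```

Without the decorator, the same logging lines would have to be repeated inline in every function - that's the cross-cut; with it, the concern is centralized but invisible at the call site. Pick your poison.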
I'm attaching a screenshot of the code example from that article.
Notice the coloring. Purple parts run in the browser; red parts run on the server. Photon "magically" handles ensuring the two runtimes stay in sync and execute in lockstep, with as little overhead as possible. Hell, it's quite likely that their overhead is *lower* than typical server/client communication people roll by hand.
Photon is relevant to this thread in two ways.
One, their trick is to compile your code to an explicit DAG.
This is a good example of the #ItsAGraph #NotATree insight about code. #Photon compiles your function to a DAG, and its runtime ensures the DAG stays in sync between the server and the browser - both in terms of how it's executing and in terms of its very shape (this is both #Lisp-y and #React-y code; the DAG will change dynamically).
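A toy sketch of the "code as an explicit DAG" idea - this is my own illustration in Python, nothing like Photon's actual implementation:

```python
# Each Node is a vertex in a dataflow graph; edges are data dependencies.
class Node:
    def __init__(self, fn, *deps):
        self.fn = fn
        self.deps = deps

    def compute(self):
        # Walk dependencies first; a real runtime would cache values and
        # only recompute nodes whose inputs changed.
        return self.fn(*(d.compute() for d in self.deps))

# Source nodes; in Photon's world these could live on different runtimes:
price = Node(lambda: 100)
qty = Node(lambda: 3)

# `total` feeds TWO downstream nodes - a shared edge, so this is a DAG,
# not a tree. Flatten it to a tree and you duplicate the `total` subgraph.
total = Node(lambda p, q: p * q, price, qty)
with_tax = Node(lambda t: round(t * 1.1, 2), total)
report = Node(lambda t, w: (t, w), total, with_tax)
```

The point of the sketch: once the program *is* a graph, a runtime can walk it, split it across machines, or keep two copies in lockstep - things that are awkward when the only artifact is a call tree in a text file.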
Two, notice what this abstraction does: it eliminates the cross-cutting concern of server/client bookkeeping - the kind of bullshit code that makes up the majority of any codebase.
So, right abstraction + details handled by runtime (here implemented as a macro) = a disappearing cross-cutting concern.
#Photon focuses only on handling the client/server split issue. But what about all the other cross-cutting concerns?
I wonder, could this runtime and its intermediary DAG be extended to make other cross-cutting concerns evaporate?
Shmaybe. You still write your code as a tree, so trying to handle more than one concern this way will lead to a combinatorial explosion of possible code.
Curiously, the article mentions #functional #effect systems as a way to handle side effects. I feel I need to quickly educate myself about those; maybe they're one of the ingredients needed to effectively de-bullshitify functional code?
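As far as I understand them so far, the core trick of effect systems is "effects as data": your logic returns *descriptions* of side effects, and a separate interpreter - the runtime - actually performs them. A toy Python sketch of that intuition, all names mine:

```python
from dataclasses import dataclass

# Effect descriptions - plain data, no I/O:
@dataclass
class Log:
    message: str

@dataclass
class Pure:
    value: object

def business_logic(x):
    # No side effects happen here - we only return data describing
    # what SHOULD happen, plus the result.
    return [Log(f"got {x}"), Pure(x * 2)]

def run(effects, sink):
    # The interpreter decides HOW each effect is performed; swapping
    # `sink` changes the logging behaviour without touching the logic.
    result = None
    for e in effects:
        if isinstance(e, Log):
            sink.append(e.message)
        elif isinstance(e, Pure):
            result = e.value
    return result

sink = []
answer = run(business_logic(21), sink)
```

If that's roughly how real effect systems work, then the side-effect concern moves out of the function bodies and into the interpreter - which smells a lot like the "details handled by runtime" pattern above.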
Here's my thinking. All of the cross-cutting concerns I can think of are either side effects (logging, serialization, async/promises/futures/threading) or run their own parallel computational stream (error handling). Both of those need to be accounted for in my dream lang/runtime.
The traditional functional way of dealing with both is through monads, i.e. return values. This, as I may have already mentioned, is annoying even for the inherently functional cross-cutting concerns like error handling (Expected/Either); for side effects it's straight into bovine-excrement territory.
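For reference, a minimal Python sketch of the Either-style plumbing I mean - Ok/Err/bind are my own toy names, not any real library:

```python
from dataclasses import dataclass

@dataclass
class Ok:
    value: object

@dataclass
class Err:
    reason: str

def bind(r, fn):
    # The monadic step: short-circuit on Err, continue on Ok.
    return fn(r.value) if isinstance(r, Ok) else r

def parse(s):
    return Ok(int(s)) if s.isdigit() else Err(f"not a number: {s}")

def recip(n):
    return Ok(1 / n) if n != 0 else Err("division by zero")

# Every step of the pipeline has to be written in terms of bind - the
# error-handling concern is woven through the whole return-value chain:
good = bind(parse("4"), recip)   # Ok(0.25)
bad = bind(parse("0"), recip)    # Err("division by zero")
```

That's the annoyance in miniature: the concern never disappears, it just gets threaded through every return type in the codebase.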
Something I plan to investigate: could those "functional effects" become a way to neatly encode side effects, and - given those plus monads - could both be made to disappear from the code by an AOP-style runtime?