We already have solutions to that basic problem through a large number of social programs. UBI doesn't change the basic social contract; it just fixes a bunch of problems in how we try to achieve these societal goals.
Need-based programs create complicated bureaucracies, fickle providers, inefficient direct services, and frequently a negative rate of return when someone tries to better themselves or their family.
So I guess Andrew Yang wants a universal basic income pegged to the productivity of AI. Maybe that's just to counter AI's influence on the economy, or maybe you could think of it as a dividend on automation.
I don't personally see any good reason to link automation and UBI. I like UBI simply as a technical solution to a societal requirement: that in a modern wealthy society we don't want anyone to starve or be overly deprived, regardless of anything about the person.
Are there any superheroes that take Newton's third law (equal and opposite reaction) seriously? Like, you can imagine people being very strong, and I suppose you'd also have to assume various body parts can withstand extreme pressure, but you still need to support the weight you carry with something, and superhuman inertia doesn't really make sense.
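A back-of-the-envelope sketch of what I mean (the masses and foot area are my own assumptions):

```python
# A hero holding a 2,000 kg car overhead still has to transmit
# that weight through their body into whatever they're standing on.
g = 9.8            # m/s^2
m_car = 2000       # kg (assumed)
m_hero = 80        # kg (assumed)

ground_force = (m_car + m_hero) * g   # Newton's third law: the ground pushes back
foot_area = 2 * 0.02                  # m^2, two feet at ~200 cm^2 each (assumed)
pressure = ground_force / foot_area   # Pa

print(f"{ground_force/1000:.1f} kN on the ground, {pressure/1000:.0f} kPa underfoot")
# ~20.4 kN and ~510 kPa: fine for the invulnerable hero, not so fine for asphalt or a rooftop.
```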
I put up another blog post on a subject I've been thinking about a lot lately: http://www.ianbicking.org/blog/2019/03/open-source-doesnt-make-money-by-design.html
A retrospective on the things I would have liked to try in Firefox Test Pilot: http://www.ianbicking.org/blog/2019/03/firefox-experiments-i-would-have-liked.html
I guess to make it work you need a second chapter. Do you embrace your evil deeds? Lash back against whoever sent you on this journey? Create an army of minions that you throw against their attackers, only speeding their demise?
Video game concept: a simple shooting grinder where you collect treasure and experience points, and as you get more experience points you develop skills, until you eventually learn to understand your enemy's voice, hearing them say "No! You've killed my entire family, how could you?" or "Please, just leave us alone" or "Spare me, please"...
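A minimal sketch of that progression mechanic (everything here is invented for illustration):

```python
import random

# Hypothetical skill table: XP thresholds at which abilities unlock.
SKILLS = {
    0: "basic attack",
    100: "treasure sense",
    500: "rapid fire",
    2000: "comprehend enemy speech",   # the twist: the late-game "reward"
}

ENEMY_LINES = [
    "No! You've killed my entire family, how could you?",
    "Please, just leave us alone.",
    "Spare me, please...",
]

class Player:
    def __init__(self):
        self.xp = 0

    @property
    def skills(self):
        return [name for threshold, name in SKILLS.items() if self.xp >= threshold]

    def defeat_enemy(self, xp_reward=50):
        self.xp += xp_reward
        if "comprehend enemy speech" in self.skills:
            return random.choice(ENEMY_LINES)   # now you understand what they were saying
        return "(unintelligible snarling)"

player = Player()
for _ in range(50):                             # grind away...
    print(player.defeat_enemy())
```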
Do any programming languages define their own semantically aware patching tools? As I understand it, version control systems don't actually fix the diff, merge, or conflict-resolution algorithms they use; the line-based tools we all know are just the defaults. But you could do something else...? Even just for JSON (this could avoid many unimportant package.json conflicts). Or for ipynb.
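To make that concrete, here's a minimal sketch of a JSON-aware three-way merge (toy code of my own, not any existing tool):

```python
class MergeConflict(Exception):
    pass

def merge_json(base, ours, theirs):
    """Three-way merge on parsed JSON: take whichever side changed a value,
    recurse into objects, and only conflict when both sides changed the
    same leaf differently."""
    if ours == theirs or theirs == base:
        return ours
    if ours == base:
        return theirs
    if isinstance(base, dict) and isinstance(ours, dict) and isinstance(theirs, dict):
        merged = {}
        for key in dict.fromkeys([*base, *ours, *theirs]):  # union of keys, order-preserving
            in_o, in_t = key in ours, key in theirs
            if not in_o and not in_t:
                continue                      # deleted on both sides
            if in_o and in_t:
                merged[key] = merge_json(base.get(key), ours[key], theirs[key])
                continue
            survivor = ours[key] if in_o else theirs[key]
            if key not in base:
                merged[key] = survivor        # added on one side only
            elif base[key] != survivor:
                raise MergeConflict(f"{key!r}: deleted on one side, changed on the other")
            # else: deleted on one side, untouched on the other -> stays deleted
        return merged
    raise MergeConflict(f"both sides changed a value: {ours!r} vs {theirs!r}")
```

Git already lets you register a custom merge driver per file pattern via .gitattributes, so something like this could quietly auto-resolve most package.json version-bump collisions.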
Wisdom, the 68-year-old Albatross, has hatched another chick. Presumably a widow (probably many times over), she met her current mate (Akeakamai) at the age of 56. She's known to have at least 31 children, each of which takes over a year to raise (including 7 months of incubation).
I'm taking a class where we have several multi-hour chunks of time set aside to meet in small (5-person) cohorts and do our work. It's been nice: we get to discuss lots of details and talk through formats and approaches.
It struck me: I never do this. I spend eight hours at work every day, working on hard problems co-owned with other people, and we never spend hours working on solutions together. Minutes sometimes? Lots of coordination time. But working... no. And this seems normal. Why is it normal?
As I become familiar with an issue where people experience a diversity of subjective feelings, I can learn to frame my imaginative empathy more accurately. But there are always new frontiers; there's more for me to learn and more for others to learn, and we're all just somewhere on our journey... so I think I should keep this empathy trap in mind when interpreting other people's interpretations.
Obviously how I first frame the question, and whether I treat my own identity as fixed or variable, are going to radically change the outcome of my imagination. But this all happens really quickly: it's the very first moment of empathy, before I'm thinking rationally or even consciously. Next up, I just have to decide if I even want to invest any energy in considering my inference.
An aside in a Vi Hart video I was watching: if I see someone doing X and I want to figure out why, my first attempt will be to imagine myself doing X and ask why I might do that. Her example: if I see someone talking a lot about their gender identity, why might I imagine myself doing that? To... get attention? Because I'm relating this to my own lived experience, I'm not going to come up with much. (cont'd)
Separately I was thinking about libertarians vs. anarchists – they have many purportedly shared values, yet are far apart from each other. This definite vs. indefinite optimism (or maybe definite vs. indefinite good) could be a way of distinguishing them.
Libertarians are very indefinite: they studiously avoid making any assertion about what a good life is, or what anyone might want to do with their freedom. Anarchists... much more definite, with opinions about what freedom actually means.
I find Peter Thiel a disturbing figure, yet one with some interesting perspectives.
In this review – https://slatestarcodex.com/2019/01/31/book-review-zero-to-one/ (section IV) – he proposes a dichotomy:
"Definite optimism" is the belief things get better because of doing good things. Specific good things, like you figure out something that makes things better and you do it.
"Indefinite optimism" is structural: what is good? What is better? How do we make things better? Eh. We can only structure the world to hopefully ratchet up.