Since @lrhodes just recently suggested that there should be a feature to hide inaccessible posts, and @gemlog reminded me today that folks might not even realize their names and toots might be hard for some people to read unless they are told, I guess it is my duty now to write another round of educational posts.
Hmm, this isn't really my area of expertise ... maybe I should just go find some old one and reboost?
Nah, let's do this.
In this thread, I'll list a couple of suggestions for #Accessibility adjustments you can make to help out people who are #Blind, #VisuallyImpaired, or who for other reasons use a #ScreenReader. Boosts appreciated!
But before I start:
If you can't do these things, that's fine. It is not my intention to bash people with other disabilities. The ways our lives suck are different, but we should still get along. You all are, after all, awesome people!
And if you just don't feel like doing them, that's OK. We are used to it. :)
Some people on the Fedi can't see your memes, doggos, flowers, art. That doesn't mean we wouldn't want to enjoy them.
There are also those who have to browse on mobile data and have turned off image loading to save it. Images eat up quite a bit of bandwidth!
And there are those who have trouble figuring out what they are looking at. Maybe because of the way their brain works, maybe because your image isn't clear to everyone.
You can help all of those by writing a caption for your image.
It doesn't need to be an essay. Even just a few words will do. Enough to explain the joke, or the cute pose your kitten is making. Don't worry about it; just write something!
If you have trouble remembering to do so, and would like a reminder, follow @PleaseCaption
@ternarypulsar What would you suggest?
Emojis being read and described to us is a good thing. Screen readers that don't do so yet (like old versions of JAWS) are generally seen as bad.
And in most cases, graphics, links, buttons and the like each being on their own line is good, too. It makes things more readable, and pressable things easier to press.
NVDA, the screen reader I use on Windows, also has an option to use screen layout, which displays things closer to how you see them. I turn that on sometimes, but it can be a pain when, say, some badly made lists become just one long string of text. Most people keep it turned off, because they're used to doing things a particular way at this point.
@Mayana assuming (!) that emojis are encoded as utf8 or something else that makes them recognizable, the reader could detect repetition or interspersing, as a naive proposition.
I'd say that it's up to you to decide what would be a good way to handle them, and for authors of screen readers to implement those suggestions.
@ternarypulsar Hmm ... it does, in some cases. If you, say, put 5 🐘 emojis in a row in a word document, at least NVDA would read that as "5 Elephant".
But since emojis often show up as graphics around the internet and in messaging apps, that's harder there.
Yes, perhaps screen readers could constantly read ahead, do text prefetching, and use machine learning to guess how to best read it before you even get to it. But what someone is going to read next isn't always predictable.
And besides, I actually had a discussion about this with a friend yesterday, and we agreed that the biggest disadvantage would be the lag that might result. Screen readers, above all, have to be fast.
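For what it's worth, the repetition-collapsing behavior described above (NVDA reading five elephant emojis as "5 Elephant") could be sketched in a few lines of Python using only the stdlib. The function name is made up for illustration, and the check for the Unicode "Symbol, other" category is my own naive assumption about what counts as an emoji; a real screen reader would need something far more robust (and fast):

```python
import unicodedata
from itertools import groupby

def collapse_emoji_runs(text):
    """Collapse runs of the same emoji character into 'N Name'.

    e.g. five elephant emojis in a row -> "5 Elephant".
    Ordinary text is passed through unchanged.
    """
    out = []
    for ch, group in groupby(text):
        count = len(list(group))
        # 'So' = Symbol, other -- a crude stand-in for "is an emoji"
        if unicodedata.category(ch) == "So":
            name = unicodedata.name(ch, ch).title()
            out.append(f"{count} {name}" if count > 1 else name)
        else:
            out.append(ch * count)
    return "".join(out)
```

This only works when the emojis are actual Unicode characters, which is exactly the limitation mentioned above: once they're rendered as graphics, there's no character to inspect.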
@ternarypulsar Well, the feature does not exist yet, so I can't answer that question. :)
But considering how laggy screen readers can be in other areas, how often they (still!) crash, and how badly some features are implemented ...
I can't be as optimistic as you. :)
@Mayana fair enough, but it still seems more efficient to attempt to get those performance issues fixed than to convince the entire world to not use certain patterns.
FTR, I also find those patterns uncomfortable.
@ternarypulsar I am not trying to do that. See toot 2 in my thread.
I know most people will not change, and that's fine. It is not a huge problem, merely an inconvenience that I'd rather be without. The image description part is the only one that, imo, people *really* should adopt (but yes, yes, I know, image-describing AI).
Look, in the end, I am not a developer, so I can't help you much. If you are, and know Python, NVDA is open source:
As is, of course, Orca on Linux:
@Mayana My intention was not to suggest that you did, rather to emphasize that it might be very simple to implement (you wouldn't need AI for what I described)! And you can help by opening a feature request on one of those project sites!
@Mayana so you mentioned "graphic Winking Face", that's something that could be string matched pretty reliably for most pages, I'd assume
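As a naive sketch of that string-matching idea (the function name is mine, and it assumes the rendered page exposes emoji graphics as repeated alt-text phrases like "Winking Face Winking Face Winking Face"):

```python
import re

def collapse_repeated_phrases(text):
    """Collapse immediately repeated phrases into 'N phrase'.

    e.g. "Winking Face Winking Face Winking Face" -> "3 Winking Face".
    Note: this would also collapse any repeated words ("ha ha ha" -> "3 ha"),
    so a real implementation would want to restrict it to known emoji names.
    """
    pattern = re.compile(r"\b(.+?)(?:\s+\1)+\b")

    def repl(match):
        whole, phrase = match.group(0), match.group(1)
        return f"{whole.count(phrase)} {phrase}"

    return pattern.sub(repl, text)
```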