@storm @lorabe @danielst @devinprater @wizzwizz4 @csepp @pvagner I just donated 50 USD to puri.sm/fund-your-app/ to fund "Software Optimizations: Accessibility support/screen reader support with phosh/gtk4" @purism I think we need to find ways to fund a11y support in GTK4; sticking with older, unsupported versions is not going to be sustainable in the long term.

@storm @lorabe @danielst @devinprater @wizzwizz4 @csepp @pvagner @purism I just talked to someone at Purism and they are positive about supporting it, as it aligns with their goals. They are asking me for a list of priorities. I suggested the screen reader, but if you all, who need this more than I do, can create a prioritized list of accessibility features, I can share it with them.

@praveen @storm @lorabe @danielst @devinprater @wizzwizz4 @csepp In relation to @purism #librem5: the most prominent and most difficult feature to implement would be #accessibility-aware touch input support. In order to be productive, we need to be able to explore the screen content before activating touch controls.

@pvagner @praveen @storm @lorabe @danielst @devinprater @csepp @purism What would the UI for that be like? "Single tap reads, double tap activates"? (Would there be a clicking noise when you tap something, or does it just read straight away?)

From what I can tell, the stuff I've described wouldn't be that hard to implement, assuming a correct AT-SPI2 implementation in the application. In Firefox, you'd be able to "see through walls" (be told about things in hidden tabs) until that bug is fixed.
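For anyone unfamiliar with AT-SPI2, the "see through walls" effect falls out of how clients walk the accessible tree. A short sketch with pyatspi, flagging nodes that are exposed to assistive technology but not currently showing (the application name to match is an assumption):

```python
# Sketch: walk an application's AT-SPI2 tree and flag nodes that
# are exposed to assistive tech even though they are not currently
# visible, e.g. content in a hidden browser tab.
import pyatspi

def walk(node, depth=0):
    showing = node.getState().contains(pyatspi.STATE_SHOWING)
    marker = "" if showing else "  [hidden but exposed]"
    print("  " * depth + f"{node.getRoleName()}: {node.name!r}{marker}")
    for i in range(node.childCount):
        child = node.getChildAtIndex(i)
        if child is not None:
            walk(child, depth + 1)

desktop = pyatspi.Registry.getDesktop(0)
for app in desktop:
    if app is not None and app.name == "Firefox":  # assumed app name
        walk(app)
```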

@wizzwizz4 @praveen @storm @lorabe @danielst @devinprater @csepp @purism Single tap / touch / hover would read what's under the finger if there is enough text / accessibility support within the underlying control. Double tap should activate. There should also be a way to assign other touch gestures to screen reader actions, such as text review commands.
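To make the tap semantics concrete, here is a minimal in-process sketch using GTK4 widget picking and speech-dispatcher's Python bindings. It is illustrative only: a real screen reader runs out of process and observes apps through AT-SPI2, and the tooltip-as-spoken-label heuristic and the demo app are assumptions, not how Orca or Phosh would do it.

```python
# Illustrative sketch only: single tap speaks the control under the
# finger, double tap activates it. A real screen reader would sit
# outside the application and drive this through AT-SPI2.
import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk
import speechd  # speech-dispatcher Python bindings

tts = speechd.SSIPClient("tap-reader-demo")

def describe(widget):
    # The accessible role is part of GTK4's built-in a11y API;
    # falling back to the tooltip text as a spoken label is a
    # simplification for this sketch.
    role = widget.get_accessible_role()
    label = widget.get_tooltip_text() or type(widget).__name__
    return f"{label}, {role.value_nick}"

def on_pressed(gesture, n_press, x, y):
    target = gesture.get_widget().pick(x, y, Gtk.PickFlags.DEFAULT)
    if target is None:
        return
    if n_press == 1:
        tts.speak(describe(target))  # explore: read, don't act
    elif n_press == 2:
        target.activate()            # double tap: activate

def on_activate(app):
    win = Gtk.ApplicationWindow(application=app, title="Tap reader demo")
    win.set_child(Gtk.Button(label="Example", tooltip_text="Example button"))
    tap = Gtk.GestureClick()
    # Capture phase, so taps are seen before the target widget
    # handles them; a real implementation would also claim them.
    tap.set_propagation_phase(Gtk.PropagationPhase.CAPTURE)
    tap.connect("pressed", on_pressed)
    win.add_controller(tap)
    win.present()

app = Gtk.Application(application_id="org.example.TapReader")
app.connect("activate", on_activate)
app.run(None)
```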

@pvagner @praveen @wizzwizz4 @storm @lorabe@floss.social @devinprater@dragonscave.space @csepp @purism
(1/4) While Purism is overwhelmed, understaffed and underfunded, I could actually imagine that GTK4 makes a11y simpler in the long run. Why? Purism created libhandy (now libadwaita in GTK4), which provides consistent, complex, advanced, themeable controls that automatically adapt whole dialogs between mobile and desktop form factors.

@pvagner @praveen @wizzwizz4 @storm @lorabe@floss.social @devinprater@dragonscave.space @csepp @purism
(2/4) libadwaita controls know about their state; e.g. the settings dialog knows it's currently in the WiFi sub-dialog, even if the menu is hidden on mobile. Apps using those controls automatically benefit from all improvements there, be it default gestures or screen reader integration.

@pvagner @praveen @wizzwizz4 @storm @lorabe@floss.social @devinprater@dragonscave.space @csepp @purism
(3/4) Question: is one tap to read, two taps to click really a good approach? It implies you have to tap around a lot to find stuff. With libadwaita it should be possible to do something like "read out top-level items". For gnome-settings in desktop mode, it would read "menu" and "content: WiFi", indicating that WiFi is the selected menu item.

@pvagner @praveen @wizzwizz4 @storm @lorabe@floss.social @devinprater@dragonscave.space @csepp @purism
(4/4) In the mobile view, only the menu or the content is visible, starting with the menu. Thus, it would instead directly read out the available items, possibly assisted by gestures, e.g. tap: stop and re-read the current item; swipe up: read the previous one; swipe down: continue reading; swipe right: select. It would then continue by reading the top-level items inside WiFi, either the settings or groups of settings.
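A hedged sketch of the reading flow proposed in (3/4) and (4/4), built on the same AT-SPI2 tree: enumerate the top-level items of the current view, keep a cursor over them, and bind each gesture to a cursor action. Gesture recognition and speech output are stubbed; the pyatspi calls are real AT-SPI2 client API.

```python
# Sketch of the proposed reading flow: a cursor over the top-level
# items of the current view, driven by the gestures suggested above.
import pyatspi

class ReadingCursor:
    def __init__(self, container):
        # Top-level items of the current view, e.g. the rows of the
        # menu list in gnome-settings' mobile layout.
        self.items = [container.getChildAtIndex(i)
                      for i in range(container.childCount)]
        self.index = 0

    def speak(self, node):
        print(f"(speak) {node.getRoleName()}: {node.name!r}")  # stub

    def reread(self):               # tap: stop and re-read current item
        self.speak(self.items[self.index])

    def previous(self):             # swipe up: read previous item
        self.index = max(self.index - 1, 0)
        self.reread()

    def next(self, count=1):        # swipe down: continue; repeat to skip
        self.index = min(self.index + count, len(self.items) - 1)
        self.reread()

    def select(self):               # swipe right: select current item
        item = self.items[self.index]
        action = item.queryAction()  # AT-SPI2 action interface
        if action.nActions > 0:
            action.doAction(0)       # usually "activate" / "click"
```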

@danielst @praveen @wizzwizz4 @storm @lorabe @devinprater @csepp @purism Hover or single tap to explore and double tap to activate is the typical interaction model on iOS and Android so far. I may very well be misunderstanding, but what you are suggesting reads the whole screen at once and lets users influence that reading.


@pvagner @praveen @wizzwizz4 @storm @lorabe@floss.social @devinprater@dragonscave.space @csepp @purism
Yes and no. Possibly, different mechanisms can complement each other. BUT I'd take advantage of structural info, which is possibly more available in libadwaita apps than in some other frameworks. There might always be free-style content that needs exploring, but structured controls allow much more precise navigation, without the risk of missing a weirdly placed item.

@pvagner @praveen @wizzwizz4 @storm @lorabe@floss.social @devinprater@dragonscave.space @csepp @purism
For example, the label of an input field is not independent, but directly related to it.
In my example, you could also swipe down 5 times very quickly to skip reading 5 items if you already have a clue. And of course keyboard or speech recognition could be used to jump directly to relevant items. But the structural info "this is a menu" should help no matter what your favorite approach is.
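The label/input relationship mentioned above is exactly what AT-SPI2 relations encode: a well-behaved toolkit exposes a LABELLED_BY relation from the field to its label, so a reader can resolve it without guessing from layout. A minimal pyatspi sketch:

```python
# Sketch: resolve the label of an input field through AT-SPI2
# relations instead of guessing from on-screen position.
import pyatspi

def label_for(node):
    for relation in node.getRelationSet():
        if relation.getRelationType() == pyatspi.RELATION_LABELLED_BY:
            return relation.getTarget(0).name
    return None
```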

@danielst @pvagner @praveen @storm @lorabe @devinprater @csepp @purism Yeah, it already does that; that's standard for AT-SPI2 screen-readers (i.e. Orca, the only AT-SPI2 screen-reader). This is talking about an additional behaviour for touchscreens.

@danielst @pvagner @praveen @storm @lorabe @devinprater @csepp @purism Though mapping *gestures* to “keyboard navigation” is an interesting idea. It probably shouldn't be the default (since it's different to the way everything else works), but it would be cool as an option, I think.
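One way such an optional mode could be prototyped: translate recognised gestures into synthesized key events via AT-SPI2, letting Orca's existing keyboard navigation do the rest. A sketch under those assumptions: the gesture names and whatever recognises them are hypothetical, while generateKeyboardEvent is pyatspi's API for injecting key events.

```python
# Sketch: map touch gestures onto synthesized key events so the
# screen reader's existing keyboard navigation handles them. The
# gesture names (and the recogniser feeding them) are hypothetical.
import pyatspi

# X keysym values: Up, Down, Tab, Return
GESTURE_TO_KEYSYM = {
    "swipe_up":   0xff52,  # Up     -> previous item
    "swipe_down": 0xff54,  # Down   -> next item
    "swipe_left": 0xff09,  # Tab    -> next control (illustrative choice)
    "double_tap": 0xff0d,  # Return -> activate
}

def on_gesture(name):
    keysym = GESTURE_TO_KEYSYM.get(name)
    if keysym is not None:
        pyatspi.Registry.generateKeyboardEvent(keysym, None, pyatspi.KEY_SYM)
```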

@danielst
I think continuing this discussion at gitlab.gnome.org/World/Phosh/p would be better; if any info we discussed here is missing there, please add it.
@pvagner @wizzwizz4 @storm @lorabe @devinprater @csepp @purism
