Hacker News

I can't reply to bzbarsky for some reason, but:

I assumed we were talking about vision impairment, because that's what the comment I replied to mentioned. Of course you can implement whatever else you want as well.

I question this "semantic DOM" idea: the trend has been towards filling the DOM with tons of crap in order to make applications, not documents. Do accessibility agents even work well on JavaScript heavy sites today?

Accessibility can and will be had without the DOM; while it is a concern, it shouldn't prevent things like WebGL + asm.js apps on the web.




No idea why you couldn't reply to me, but....

My point is that visual impairment is not mutually exclusive with other impairment, even though people often assume it is, consciously or not. This is an extremely common failure mode, not just in this discussion.

And while of course you _can_ implement whatever else you want, in practice somehow almost no one ever does. Doubly so for the cases they don't think about or demographics they decide are too small to bother with.

How well accessibility agents work on JS heavy sites today really depends on the site. At one end of the spectrum, there are JS heavy sites that still use built-in form controls instead of inventing their own, have their text be text, and have their content in a reasonable order in the DOM tree. At the other end there are the people who are presenting their text as images (including canvas and webgl), building their own <select> equivalents, absolutely positioning things all over the place, etc. Those work a lot worse.
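The gap between those two ends of the spectrum is easy to see in markup. A minimal sketch (ids and option text are made up): the native control gets its role, keyboard handling, and screen-reader announcements from the browser for free, while the custom equivalent has to re-declare all of that with ARIA and keep it in sync from script.

```html
<!-- Native control: semantics and keyboard support are built in. -->
<label for="size">Size</label>
<select id="size">
  <option>Small</option>
  <option>Large</option>
</select>

<!-- Hand-rolled equivalent: every semantic the native control
     provided must be re-added manually (role, focusability,
     selection state) and updated from JavaScript on every change. -->
<div id="size-custom" role="listbox" tabindex="0"
     aria-label="Size" aria-activedescendant="opt-small">
  <div id="opt-small" role="option" aria-selected="true">Small</div>
  <div id="opt-large" role="option" aria-selected="false">Large</div>
</div>
```

Sites at the bad end of the spectrum typically ship the second version without the ARIA attributes at all, at which point a screen reader sees nothing but anonymous divs.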

You are of course right that accessibility can be had without the DOM, but "webgl" is not going to be it either. Accessibility for desktop apps typically comes from using OS-framework provided controls that have accessibility built in as an OS service; desktop apps that work in low-level GL calls typically end up just as not-accessible as your typical webgl page is today. So whatever you want to use instead of the DOM for accessibility purposes really will need to contain more high-level human-understandable information than which pixels are which colors or what the audio waveform is. At least until we develop good enough AI that it can translate between modalities on the fly.
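One existing mechanism for attaching that kind of higher-level information to a pixel surface is the canvas element's fallback content: the subtree inside the canvas is exposed to the accessibility tree even though it never renders visually. A sketch (the ids and text are invented for illustration; the app would have to update this subtree from script as its state changes):

```html
<!-- The rendered pixels are opaque to assistive technology.
     The fallback subtree below is what a screen reader sees,
     so it must mirror the app's actual state. -->
<canvas id="game" width="800" height="600">
  <p role="status">Score: 1200. Three enemies remaining.</p>
  <button id="pause">Pause</button>
</canvas>
```

This only works if the app diligently keeps the fallback DOM in sync, which is exactly the "almost no one ever does" problem from earlier in the thread.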


Speaking of AI, is it really that hard to OCR the images? I'm no expert, but I was under the impression that this was a solved problem.


> I can't reply to bzbarsky for some reason

There's a rate limit to stop threads from exploding.



