For people who are blind or visually impaired, JAWS is synonymous with the freedom to operate Windows PCs with a remarkable degree of control and precision, with output in speech and Braille. The keyboard-driven application makes it possible to navigate the GUI-based interfaces of websites and Windows programs. Anyone who has ever listened to someone proficient in JAWS (an acronym for “Job Access With Speech”) navigate a PC can’t help but marvel at the speed of the operator and the rapid-fire machine-voice responses from JAWS itself.
For some 25 years, JAWS has dominated the field of screen readers and is used by hundreds of thousands of people worldwide. It is inarguably one of the greatest achievements in modern assistive technology. We are delighted to announce that Glen Gordon, the architect of JAWS for over 25 years, is joining the agenda at Sight Tech Global, a virtual event (December 2-3) focused on how AI-related technologies will influence assistive technology and accessibility in the years ahead. Attendance is free and registration is open.
Blind since birth, Gordon developed his interest in accessibility out of what he calls “a selfish desire to use Windows at a time when it was not at all clear that graphical user interfaces could be made accessible.” He has an MBA from the UCLA Anderson School, and he learned software development through “the school of hard knocks and lots of frustration trying to use inaccessible software.” He is an audio and broadcasting buff and the host of FSCast, the podcast from Freedom Scientific.
The latest public beta release of JAWS offers a glimpse of the future for the storied software: it now responds to certain spoken user commands, a feature called “Voice Assist,” and provides more streamlined access to image descriptions, both thanks to AI technologies that the JAWS team at Freedom Scientific is applying in JAWS as well as in Fusion (which combines JAWS with ZoomText, a screen magnifier). These updates address two long-standing JAWS challenges: a keyboard command set whose sheer size intimidates some users, and image “alt tags” that don’t always adequately describe the image.
“The upcoming versions of JAWS, ZoomText, and Fusion use natural language processing to allow many screen reader commands to be performed verbally,” says Gordon. “You probably wouldn’t want to speak every command, but for the less common ones Voice Assist offers a way to minimize the key combinations that you need to learn.”
“Broadly speaking, we’re looking to make it easier for people to use a smaller command set to work efficiently. This fundamentally means making our products smarter, and being able to anticipate what a user wants and needs based on their prior actions. Getting there is an imprecise process and we’ll continue to rely on user feedback to help guide us towards what works best.”
The next generation of screen readers will take advantage of AI, among other technologies, and that will be a major topic at Sight Tech Global on December 2-3. Get your free pass now.
Sight Tech Global welcomes sponsors. Current sponsors include Verizon Media, Google, Waymo, Mojo Vision and Wells Fargo. The event is organized by volunteers and all proceeds from the event benefit The Vista Center for the Blind and Visually Impaired in Silicon Valley.
Pictured above: JAWS Architect Glen Gordon in his home audio studio.