
Facebook helps blind people 'see' what's in photographs; Facebook has unveiled a new AI-driven tool that gives visually-impaired users access to photos, following a project led by blind engineer Matt King. KATIE WRIGHT reports on how technology is making social media more accessible.

Byline: KATIE WRIGHT

FOR more than 15 years, screen readers have used text-to-speech technology to allow people who are blind or visually impaired to hear text on their computer screens.

But deciphering images has been much more difficult.

Emojis are accessible to screen readers because they have a Unicode definition attached, but photos aren't so simple - the sometimes hilarious results from captionbot.ai, the app that tries to guess what's happening in your photos, attest to that - which is why Facebook's latest unveiling is so significant.

Previously, a screen reader would just announce the word 'photo' and the name of the person who shared it, but now 'automatic alternative text' can recognise things like babies, beards, sports, food and whether a landscape is snowy, mountainous or a beach (so you can be sure exactly what kind of holiday your friend is bragging about).

Scroll over a photo of a forest, for example, and you'll hear "this photo may contain: outdoor, cloud, foliage, plant, tree".
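The article doesn't reveal how Facebook builds these descriptions internally, but the behaviour it describes - announcing only the concepts the recogniser is confident about, joined into a "may contain" sentence - can be sketched in a few lines. Everything below is illustrative: the tag names, confidence scores and threshold are hypothetical, not taken from Facebook's system.

```python
# Illustrative sketch (not Facebook's actual code): turning an image
# recogniser's tag predictions into an "automatic alternative text"
# string for a screen reader. Tags and scores here are made up.

CONFIDENCE_THRESHOLD = 0.8  # only announce concepts the model is sure about

def build_alt_text(predictions):
    """predictions: list of (tag, confidence) pairs from a recogniser."""
    tags = [tag for tag, score in predictions if score >= CONFIDENCE_THRESHOLD]
    if not tags:
        # Fall back to the old behaviour: just say it's a photo.
        return "Photo"
    return "This photo may contain: " + ", ".join(tags)

# Hypothetical predictions for a forest scene; "person" is below the
# threshold, so it is left out rather than risk a wrong description.
forest = [("outdoor", 0.97), ("cloud", 0.91), ("foliage", 0.88),
          ("plant", 0.93), ("tree", 0.95), ("person", 0.30)]
print(build_alt_text(forest))
# "This photo may contain: outdoor, cloud, foliage, plant, tree"
```

The hedged "may contain" wording and the confidence cut-off reflect the same design trade-off the article implies: an alt-text system that stays silent about uncertain guesses is more trustworthy than one that describes everything.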

Ten months in the making, the tool was made possible by developments in the social network's object-recognition technology, and is the first big project engineer Matt King has been part of at Facebook.

Matt, who lost his sight as a result of a degenerative condition called retinitis pigmentosa, told the BBC: "On Facebook, a lot of what happens is extremely visual. And, as somebody who's blind, you can really feel like you're left out of the conversation, like you're on the outside."

For the 285 million people in the world who are blind or severely visually impaired, the feature is a big step towards inclusivity, opening up the roughly two billion photos shared on Facebook, Instagram and WhatsApp every day.

Driven by Artificial Intelligence, the system has been 'taught' using millions of example images, and tested thoroughly, and it will continue to get better, providing richer and more accurate descriptions as it grows.

Available initially on iOS screen readers in English, software engineer Shaomei Wu says that automatic alt text will soon be rolled out to other languages and platforms.

She says: "While this technology is still nascent, tapping its current capabilities to describe photos is an important step toward providing our visually impaired community the same benefits and enjoyment that everyone else gets from photos."

CAPTION(S):

Food, forests and fun - soon anyone with a visual impairment will have a clearer idea of what's in photos

COPYRIGHT 2016 MGN Ltd.

Title Annotation:Features
Publication:Manchester Evening News (Manchester, United Kingdom)
Date:Apr 16, 2016