Beyond Access: Facebook’s Automated Image Descriptions and Disability Justice

This post was republished from Medium with the author’s permission.  

by Tasha Raella

Two weeks ago, Facebook launched Automatic Alternative Text, or AAT, a tool that provides automated image descriptions for blind users.

On the day the feature was released, the headlines were sensational, but Facebook’s accessibility team freely admits that AAT is in its infancy. The artificial intelligence algorithm it uses knows fewer than a hundred objects, and it only provides a description when it is 80 to 90% sure that it is correct. I am a blind Facebook user, and examples of image descriptions I have received so far include “Image may contain indoor,” “image may contain one person smiling,” and “image may contain hat.” As you can imagine, I am still just as unable to comment meaningfully on my friends’ photos as I was before this tool was released. Given this indisputable reality, why are most blind users so excited about this feature? And could examining the reasons behind this excitement reveal how image descriptions on Facebook are more than an access issue?

Matt King is a blind software engineer at Facebook who helped spearhead the development of AAT. In a comment on AppleVis, a popular forum for blind computer and iPhone users, he states that Facebook decided to go live with AAT now, as opposed to waiting until it became more sophisticated, based on data from blind users. Facebook’s research seemed to indicate that these users would find AAT valuable, even though the descriptions it generates are rudimentary. This finding is disconcerting, but not surprising.

In a 2011 blog post, Chris Hofstader, an assistive technology expert, bemoaned the fact that Google was being lauded by blind users for releasing a screenreader for Android, even though this screenreader did not include Web browser accessibility. Hofstader said that the release of this screenreader was analogous to General Motors deciding to stop creating innovative cars and instead choosing to release new vehicles based on Henry Ford's Model T. He coined the term Model T Syndrome to refer to blind people's effusive displays of gratitude when "a multi-billion dollar company does anything that may even be of marginal value to our community."

Many blind users’ reactions to AAT are a perfect example of Model T syndrome. Rather than criticizing Facebook for providing descriptions that are stilted at best and unhelpful at worst, most blind people I know are grateful that Facebook thought of us at all. The fact that many of us are so grateful merely for being noticed speaks to the pervasiveness of ableism in our society, and how easy it is to internalize oppressive narratives: that we are not worthy, that we must accept what we are given, that we must not complain, that shoddy accessibility is better than no accessibility. Many of us probably internalize these narratives because we fear being accused of biting the hand that feeds us.

To understand how internalized ableism is at play here, it is helpful to unpack King’s explanation of why Facebook chose to go the AI route for generating image descriptions. According to King, Facebook considered several solutions for making images more accessible to the blind, but ultimately chose the AI approach because they didn’t want to “add a lot of friction.” King explains, “We could probably require people when they upload a photo: ‘please describe this for blind people.’ It would drive people nuts — that would never work at scale.” King’s use of the word “friction” is particularly telling. What he seems to be saying here is that while Facebook recognizes that blind people should have access to information about images, that access should not inconvenience sighted users. Rather than questioning the assumption that providing image descriptions is a burden and that blind people’s access needs are blind people’s problem, Facebook is reinforcing the ableist status quo.

Mia Mingus, a disability justice activist, says that providing accessible bathrooms and wheelchair ramps is not enough. In order to create a truly just world, we must challenge what she calls the myth of independence. We should instead view access as “collective and interdependent.” In other words, creating an accessible world is everyone’s responsibility.

As it is currently implemented, Facebook's automated image description tool promotes independence, rather than interdependence. It sends the message, loud and clear, "Don't bother writing a description of your new baby. Our AI has it covered." In ten or twenty years, that might be the case, but not now. With existing technology, the only way to ensure full and meaningful access to images is to encourage sighted users to describe their photos. Perhaps, in time, Facebook's AI will learn from these descriptions. I have reached out to Facebook several times, explaining the value of human-generated image descriptions, but have not received a substantial response.

I, along with several other, like-minded blind users, urge Facebook to implement an approach to image description that is similar to Twitter’s, in that it offers a space for users to describe their photos. This approach would not only create a more inclusive Facebook, but it would also encourage us to imagine a world in which providing image descriptions is a pleasure, rather than a burden.

Advocates See Advances in Assistive Technology

Sun Sounds of Arizona
Mares Wright displays one of the radios Sun Sounds of Arizona gives out, in addition to its online live stream of radio programming. (Lily Altavena)

by Lily Altavena

In 2015, the non-profit Sun Sounds of Arizona is still giving out functional, '80s-looking radios to listeners after more than 30 years on the airwaves. Geared toward helping people with disabilities, the organization has volunteers who read everything from the Wall Street Journal to Playboy aloud.

But the tide is changing at Sun Sounds, where an online stream is gaining in popularity.

“I’m signing up more and more people who want to do it digitally,” spokesman Mares Wright said.

Assistive technology is going digital and making new strides. Those with disabilities don't have to look further than their smartphones to find a range of apps and accessories to improve quality of life. Some of this new technology was on display recently at "White Cane Day: A Resource Fair" in Tempe, Ariz., hosted by the city's diversity office. The event was aimed at connecting those in the community who are blind or have visual impairments to a variety of resources, according to Michele Stokes, the city of Tempe's Americans with Disabilities Act Compliance Specialist.


The Prodigi, an electronic magnifier displayed by Mike Perry of Low Vision Plus in Arizona. (Lily Altavena)

Mike Perry owns Low Vision Plus, which offers assistive devices for those with visual impairments. One of the vendors at the fair, he said he’s seen a lot of improvements in the past few years.

“The technology has just become so much better,” Perry said. “Now people can afford to get it, even if they don’t have a lot of money.”

At his table, he displayed an electronic magnifier called the Prodigi, which connects to a tablet to magnify and read words aloud. Perry estimates the Prodigi costs around $2,700 – thousands of dollars less than what its lower tech, 1990s counterpart would have cost.

Virginia Thompson, the assistive technology coordinator with the Arizona Center for the Blind and Visually Impaired, said she sees more technology useful for people with more than one disability. A home alert system might include flashing lights, vibrations and braille, for example.

Smartphones, too, are ripe for development in assistive technology. In Lyrics Guru, a video game app showcased at the fair, users have to guess the correct lyrics for a song. The company, Al Jones Corporation, is currently developing a voice-controlled version of the game for those who are blind or have low vision.

“From our understanding, there are not many video games out there for those with visual challenges,” Al Jones, the company’s CEO, said.

Events like "White Cane Day" are critical to the communities they reach, Stokes said. Especially if someone is in the process of losing their sight, finding a new tool might make a huge impact on their life, he said.

For Thompson, updates in assistive technology signal more inclusion for those with disabilities.

“We still have a long ways to go, but at least now deaf-blind people can be active in the community,” she said.

Fast Company

AT&T's Challenge To Developers: Inventive Apps For The Disabled

AT&T’s latest app challenge, done in partnership with NYU’s Assistive Technology and Ability Lab, offers $100,000 to developers who create new apps or devices specifically aimed at aiding those with disabilities. Submissions will be due at the beginning of July, and AT&T will announce the winner on July 26, the 25th anniversary of the Americans With Disabilities Act. Read more.


The Silencing of the Deaf: How high-tech implants are being blamed for killing an entire subculture.

Deaf culture is unique in that it is not usually inherited; it is shared and passed down. Though implants have been lauded as technological achievements for helping deaf children hear, parents who choose them are, the argument goes, essentially cutting their children off from experiencing this vibrant culture. Read more on Medium.


How can Siri help people with Autism?

One thing Apple likely didn't expect when it released its iPhone technology is how positively it would impact people with autism. Plusnet, a British internet service provider, describes how Siri, the ultimate virtual personal assistant, presents information from queries in such a digestible way that it can help people with autism process that material more easily. Read more.