
Why Do We Fix AI Bias But Ignore Accessibility Bias?

Posted in Accessibility News

Kalev Leetaru, Contributor, AI & Big Data
I write about the broad intersection of data and society.

“As the Web has become increasingly visual, with pages of text replaced by rich high-resolution imagery and video, it has become increasingly inaccessible to those with differing physical abilities who rely on accessibility software like screen readers.”

Silicon Valley has become obsessed with addressing AI bias. As deep learning algorithms have graduated from the academic research lab into the real world, awareness has grown of the implications of their innate biases as their limited Western training data has collided spectacularly with a globalized digital world.

Confronted with an increasingly skeptical public, relentless press coverage and growing policymaker interest, the deep learning community has responded with a wave of investments and initiatives focusing on how to address such bias.

This same digital transformation has brought with it an increasingly inaccessible Web that is creating an ever-greater digital divide for those with differing physical abilities, placing more and more of the world out of their reach.

Yet in stark contrast with the enormous resources being poured into combating AI bias, accessibility bias has received little attention, reminding us that only those biases most visible to the general public receive attention.

AI bias was once primarily the realm of the academic world, measured anecdotally and debated on the sidelines as a largely scholarly pursuit, with little interest from the mainstream deep learning community.

As deep learning has graduated into the real world, the implications of these biases have become more apparent to the general public. In particular, the extreme biases of the training datasets used to construct the modern era of deep learning have manifested themselves in the form of an AI-powered world that actively and severely discriminates against myriad demographics, cultures and geographies.

Facing mounting pressure from the public, press and policymakers, Silicon Valley has invested heavily in understanding and addressing AI bias.

Once a rarity, almost every major company working in deep learning today has at least some formal evaluation process to assess and mitigate bias in their AI algorithms.

Entire conferences are being held on AI bias and even mundane deep learning research papers are increasingly at least mentioning the issue of bias in their methodology sections.

Even the major deep learning development frameworks are beginning to release integrated bias mitigation workflows that can do everything from guiding developers towards best practices for minimizing bias to automatically assessing training data for discriminatory correlations, such as a seemingly innocuous variable turning out to be correlated with a protected attribute like race or gender.
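As an illustration only, not the API of any particular framework, a minimal sketch of such an automated screen might flag training-data columns that correlate strongly with a protected attribute; the column names and the 0.4 threshold below are assumptions chosen for the example.

```python
# Minimal sketch of an automated bias screen: flag training-data columns that
# correlate strongly with a protected attribute. Column names and the 0.4
# threshold are illustrative assumptions, not any framework's defaults.
import pandas as pd


def flag_proxy_features(df: pd.DataFrame, protected: str, threshold: float = 0.4):
    """Return columns whose absolute correlation with `protected` exceeds `threshold`."""
    # Encode the protected attribute numerically so it can be correlated
    # against the other numeric columns (a shortcut acceptable for a sketch).
    encoded = df.copy()
    encoded[protected] = encoded[protected].astype("category").cat.codes

    flagged = {}
    for column in encoded.select_dtypes(include="number").columns:
        if column == protected:
            continue
        corr = encoded[column].corr(encoded[protected])
        if abs(corr) > threshold:
            flagged[column] = round(corr, 2)
    return flagged


if __name__ == "__main__":
    # Tiny synthetic example: "zip_code_income_rank" is the innocuous-looking
    # variable that turns out to track the protected attribute.
    data = pd.DataFrame({
        "gender": ["f", "m", "f", "m", "f", "m", "f", "m"],
        "zip_code_income_rank": [1, 7, 2, 8, 1, 9, 2, 8],
        "account_age_days": [340, 355, 512, 401, 298, 377, 420, 365],
    })
    print(flag_proxy_features(data, protected="gender"))
    # e.g. {'zip_code_income_rank': 0.97} -> a proxy worth human review
```

A real workflow would go further, with one-hot encoding of categorical features, statistical significance testing and fairness metrics on model outputs, but the basic idea of hunting for proxies of protected attributes is the same.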

Yet for all of this interest and investment in AI bias, there has been little corresponding interest or investment in accessibility bias.

As the Web has become increasingly visual, with pages of text replaced by rich high-resolution imagery and video, it has become increasingly inaccessible to those with differing physical abilities who rely on accessibility software like screen readers.

An inspirational statement in the form of a textual tweet is accessible to all. The same statement in the form of an emotional image captioned only as “6 people, people standing,” with nothing else to lend context to its contents, is entirely meaningless to those reliant upon screen readers; when that image comes from an elected official, it cuts those constituents off from their own representatives.
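A minimal sketch of how a site audit could surface this kind of gap is below, checking an HTML page for images whose alt text is missing or carries no real information for a screen-reader user; the list of “uninformative” phrases is an assumption made for the example.

```python
# Minimal sketch of an accessibility audit: list images whose alt text is
# missing or carries no real information for a screen-reader user.
# The UNINFORMATIVE phrases are illustrative assumptions.
from html.parser import HTMLParser

UNINFORMATIVE = {"", "image", "photo", "picture", "6 people, people standing"}


class AltTextAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt = (attrs.get("alt") or "").strip().lower()
        if alt in UNINFORMATIVE:
            self.problems.append(attrs.get("src", "<no src>"))


if __name__ == "__main__":
    sample = """
    <p>An inspirational statement as text is accessible to all.</p>
    <img src="rally.jpg" alt="6 people, people standing">
    <img src="speech.jpg">
    <img src="chart.png" alt="Bar chart: broadband adoption by county, 2010-2018">
    """
    auditor = AltTextAuditor()
    auditor.feed(sample)
    print("Images needing better alt text:", auditor.problems)
    # -> ['rally.jpg', 'speech.jpg']; 'chart.png' passes because its alt text
    #    actually describes the content.
```

Checks like this catch only the mechanical half of the problem; writing alt text that actually conveys the meaning of an image still requires a human, or far better captioning algorithms than the ones producing “6 people, people standing.”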

Rather than step forward to mandate accessibility in the social media era as it did in the Web era, the US Government has instead stepped back, waiving its formerly sacrosanct requirements that official governmental publications be accessible and instead accepting that the digital government of the future will not be available to those with differing physical abilities.

Strangely, the same policymakers, companies, foundations and thought leaders who have elevated AI bias into a topic of international discussion appear to have little interest in accessibility bias.

The individuals and organizations arguing for new laws to mitigate AI bias remain silent when asked why they are simultaneously arguing against enforcing existing laws that would address accessibility bias.

The foundations, publications and thought leaders lavishing attention and resources on AI bias appear lost for words and funding when it comes to accessibility bias.

This stark difference in interest between AI bias and accessibility bias is unfortunately hardly unexpected.

Silicon Valley’s interest in AI bias did not arise of its own volition. Rather, its current investments came about only through considerable public pressure and the growing threat of governmental intervention.

In contrast, the governments that once actively intervened on behalf of accessibility have stepped back in the social media era, willing to accept the disenfranchisement of a whole swath of society as Congress’ interest in combating discrimination in the digital era wanes.

Putting this all together: even as Silicon Valley invests heavily in combating AI bias, it has shown little corresponding interest or investment in combating accessibility bias.

The unfortunate truth is that these differing levels of support come down to simple economics and visibility.

AI bias affects everyone and has a particularly strong effect on economic processes like the ad engines and online shopping bazaars that power the digital economy. Most importantly, AI bias is directly visible to the public, often in spectacular fashion.

In contrast, accessibility bias affects a smaller portion of the population. It is also largely invisible to the general public except for brief moments like this week’s Facebook image outage.

Is there any hope for a more accessible Web?

Given that some of the loudest voices in Congress warning about digital bias are themselves flagbearers of accessibility bias, relying extensively on the least accessible mediums for their official communications and declining to provide accessibility options for differently abled constituents, there sadly seems little room for optimism that things will get better anytime soon.

Perhaps the greatest hope for a more accessible Web comes back to economics. Simply put, as the Web becomes more visual, the vast world of content understanding algorithms that moderate, manipulate, mine and monetize the social landscape is failing. As companies rush to build better image mining algorithms for their own monetizing needs, a byproduct might be a more accessible Web.

In the end, the stark difference between our interests in AI bias and accessibility bias reminds us that only the most visible and economically impactful biases have any hope of being addressed in today’s monetized Web.

Original at https://www.forbes.com/sites/kalevleetaru/2019/07/06/why-do-we-fix-ai-bias-but-ignore-accessibility-bias/#58769e2c7902


Source: http://www.accessibilitynewsinternational.com/why-do-we-fix-ai-bias-but-ignore-accessibility-bias/
