
Decoding the "Encoding" of Ableism in Technology and Artificial Intelligence

Updated: Jul 26, 2021

Author: Priti Rangnekar

Originally written for Stanford University's Program in Writing and Rhetoric, Winter 2021.

In May 2018, Microsoft pledged $25 million over five years to individuals and organizations developing AI tools for people with disabilities (Protalinski). At its I/O conference in 2019, Google revealed three efforts - Project Euphonia, Live Relay, and Project Diva - to assist and provide more autonomy to people who are deaf or have speech impairments. Yet even as companies extend their sphere of influence by emphasizing technology for social good, researchers repeatedly call into question the validity and fairness of AI models. Racial and gender biases in these models have been well studied: work by Timnit Gebru of Microsoft Research and Joy Buolamwini found error rates of roughly 1% when facial analysis systems classified lighter-skinned men, but up to 35% for darker-skinned women. The same questions of bias, fairness, and algorithmic justice, however, are far less often considered in the case of disability. Researchers at Microsoft and Google have already demonstrated that some AI classifies language about disability as inherently negative, and that speech recognition is less successful for people with deaf accents (Ross). Such incidents have alerted AI experts and disabled individuals alike to the discrimination that can result, from biased natural language processing in hiring to the inefficacy of assistive speech technologies.

It is imperative and timely for researchers, industry professionals, and abled and disabled people alike to reflect deeply on the intersection of disability and AI as artificial intelligence applications become ubiquitous in society. Why does disability present a unique challenge for questions of representation and accuracy in developing artificial intelligence technologies? What frameworks and definitions of fairness should be employed when judging the efficacy of machine learning models? How can artificial intelligence technologies ensure accessibility for people with disabilities, rather than amplifying existing power structures and assumptions? Competing demands of medical privacy and representation in datasets force AI developers into difficult technical choices, while AI experts debate rival interpretations of fairness and justice. Ultimately, disabled communities often find themselves caught between appreciation for AI advancements that increase autonomy and frustration when technologies are not sufficiently adaptive or reflective of the true nature of disability.

One of the primary concerns about the potential harms of artificial intelligence for disabled individuals is rooted in the idea of "garbage in, garbage out": algorithms trained on flawed or non-representative data produce flawed or non-representative outputs. As IBM Accessibility researcher Shari Trewin highlighted in 2018, a machine learning model trained on data in which "a health insurer routinely denies coverage to people with disabilities" will exhibit the same discriminatory behavior (2). Similarly, AI researchers at Google analyzed one of the only linguistic datasets that mentions different kinds of disability, drawn from the Jigsaw Unintended Bias in Toxicity Classification Challenge. Upon finding that 21% of comments mentioning psychiatric or mental illness were labeled as toxic, the authors concluded that "these associations significantly shape the way disability terms are represented" in natural language processing models (Hutchinson et al. 5). Given that 20% of the U.S. population has some kind of disability, creating representative datasets is "an essential step to addressing potential bias against people with disabilities" (Trewin 2).
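
To make the "garbage in, garbage out" concern concrete, the sketch below trains a toy toxicity classifier on a small invented corpus and then compares its scores for template sentences that differ only by a disability mention. This is only an illustrative, hypothetical example, not the method used by Hutchinson et al.; the corpus, templates, and model choice are all assumptions.

```python
# Illustrative sketch only: a toy "toxicity" classifier probed with template
# sentences, loosely inspired by perturbation-style bias tests.
# All training data, labels, and templates below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical labeled corpus (1 = "toxic", 0 = "non-toxic").
texts = [
    "you are an idiot", "I will hurt you", "what a stupid idea",
    "go away, nobody wants you here",
    "have a great day", "thanks for the help", "I love this community",
    "my neighbor has a mental illness and is struggling",  # neutral mention...
    "people with mental illness are dangerous",            # ...vs. toxic stereotype
    "see you at the meeting tomorrow",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Perturbation test: identical sentences except for a disability mention.
pairs = [
    ("I am a person", "I am a person with a mental illness"),
    ("my friend is great", "my friend who is deaf is great"),
]
for neutral, perturbed in pairs:
    p_neutral = model.predict_proba([neutral])[0][1]
    p_perturbed = model.predict_proba([perturbed])[0][1]
    print(f"{neutral!r}: {p_neutral:.2f}  vs  {perturbed!r}: {p_perturbed:.2f}")

# Systematically higher scores for the perturbed sentences would suggest the
# model has absorbed an association between disability terms and "toxicity".
```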

However, more nuanced analysis has drawn attention to the difficulty of building representative datasets, as well as the dangers of oversimplification. Whittaker et al. of the AI Now Institute, an interdisciplinary research institute at NYU dedicated to understanding the social impacts of AI, note that because disability manifests in diverse and often hidden ways, "simply expanding a dataset's parameters to include new categories" will not truly increase representation (10). Moreover, the probabilistic and statistical nature of machine learning algorithms intrinsically places disabled minorities at a disadvantage, as "outlier data is often treated as 'noise' and disregarded," and a scarcity of individuals with a given disability prevents algorithms from finding patterns (Trewin 3). This persistent statistical issue surfaced when scholar Jutta Treviranus, founder of the Inclusive Design Research Centre, tested an AI model designed to guide autonomous vehicles. When presented with footage of a "friend who propels herself backward in a wheelchair," the model consistently decided to run the friend over. Although Treviranus initially suspected that smarter models could avoid this result, she found that they simply ran over the friend with higher levels of confidence. The algorithm had decided that "wheelchairs go in the opposite direction" based on the average behavior of the wheelchairs it had been exposed to. This case study has led the AI Now Institute to conclude that adding more data to an AI model "reinforces the normative model," as "those who fall outside of this norm become increasingly remote 'outliers'" (Whittaker et al. 13). The potential threats to the disability community are clear: fraud detection algorithms that flag outlier input may inadvertently flag people with disabilities, even though their actions are "legitimate system inputs" (Guo et al. 4).
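
The way statistical models treat rare behavior as "noise" can be illustrated with a generic anomaly detector on synthetic data. This is not Treviranus's experiment, only a hypothetical sketch of the underlying dynamic; the features and numbers below are invented.

```python
# Synthetic illustration: a standard anomaly detector flags rare but legitimate
# behavior as an outlier simply because it is underrepresented in the data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features: [speed (m/s), heading relative to facing direction (deg)].
# Most training examples move "forward" (heading near 0 degrees).
majority = np.column_stack([
    rng.normal(1.4, 0.3, 500),    # walking-like speeds
    rng.normal(0.0, 10.0, 500),   # roughly forward heading
])

detector = IsolationForest(random_state=0).fit(majority)

# A legitimate but rare pattern: moving backward (heading near 180 degrees),
# like the wheelchair user in Treviranus's account.
rare_but_valid = np.array([[1.2, 180.0]])
typical = np.array([[1.5, 5.0]])

print("typical case:   ", detector.predict(typical)[0])          # 1  -> treated as normal
print("rare valid case:", detector.predict(rare_but_valid)[0])   # -1 -> flagged as an outlier
```

The detector has no notion of legitimacy, only of frequency, which is exactly why uncommon but valid behavior gets discarded as noise.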

Because of these statistical issues, machine learning researchers tend to face a double bind. Including "outlier data" from less common disabilities can cause overfitting, in which the algorithm fails in more common, general scenarios; using simpler models yields incorrect or "unfairly negative" predictions for "outlier individuals" (Trewin 3). Fortunately, technical advances in machine learning and design offer the potential to transform the probabilistic mathematical models in use today. Most notably, "inclusive design practitioners" are developing learning models that "attend to the full diversity" of disabilities and scenarios rather than giving a preferential advantage to those who resemble the majority (Trewin et al. 52). So although simply increasing representation or dataset size is not an effective solution on its own, novel innovations in machine learning do present opportunities for improvement.
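
A minimal synthetic example can make one side of this double bind concrete: a single simple model fit to data dominated by a majority pattern produces far larger errors for a small subgroup whose pattern differs. All data here are fabricated for illustration and stand in for no real population.

```python
# Synthetic sketch: one global model fit to mixed data serves the majority well
# and the minority subgroup poorly. All numbers are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Majority group: outcome rises with the feature. Minority group: it falls.
x_major = rng.uniform(0, 10, 950)
y_major = 2.0 * x_major + rng.normal(0, 1, 950)
x_minor = rng.uniform(0, 10, 50)
y_minor = -2.0 * x_minor + rng.normal(0, 1, 50)

X = np.concatenate([x_major, x_minor]).reshape(-1, 1)
y = np.concatenate([y_major, y_minor])

model = LinearRegression().fit(X, y)

def mae(xs, ys):
    """Mean absolute error of the shared model on one group."""
    return float(np.mean(np.abs(model.predict(xs.reshape(-1, 1)) - ys)))

print("majority MAE:", round(mae(x_major, y_major), 2))  # small
print("minority MAE:", round(mae(x_minor, y_minor), 2))  # several times larger
```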

Nevertheless, questions of data collection continue to arise as organizations pursue efficient means of crafting datasets. Companies and organizations developing AI face conflicting privacy concerns, since forced disclosure or data collection about disabled individuals can cause "higher health-insurance costs, denial of coverage or employment, or other forms of stigma" (Whittaker et al. 19). In Europe, recent GDPR regulations that strengthen individuals' right to know about personal data collection have prevented AI systems from receiving "explicit information about disability that can be used to apply established fairness tests and corrections" (Trewin 1). As a result, some companies have turned to "clickworkers, who construct or label data as being from people who are 'disabled' based on what is effectively a hunch" (Whittaker et al. 20). On the other hand, researchers have proposed ethically questionable means of inferring disability through implicit disclosure and data collection, such as using a person's mouse movements to diagnose Parkinson's disease (Allerhand et al. 1). Although such data augmentation rooted in human judgment and non-transparent data collection is likely to continue, recent proposals may offer an alternative that enables data acquisition in a more controlled manner. For example, the International Organization for Standardization's personal data preference standard would increase "self-determination regarding personal data," prioritizing the individual's preferences on data sharing over today's "all or nothing terms of service agreements" (Trewin et al. 48).

Beyond questions of implementation, however, lie deeper issues of power structures and societal influence surrounding artificial intelligence's impact on disability. The AI Now Institute points to the additional risks posed by "significant power asymmetries" between disabled individuals who may be "classified, ranked, and assessed" and those designing and deploying AI systems (Trewin 1). A prominent example is HireVue, a platform that uses AI to analyze indicators such as speech patterns and facial movements in video interviews to determine viable job candidates. Although HireVue's patent uses objective language such as a "job performance score" and "detected" to characterize its algorithm as a means of reducing discrimination by human recruiters, social entrepreneurs Jim Fruchterman and Joan Mellea criticize its method, which "discriminates against many people with disabilities that significantly affect facial expression and voice" (qtd. in Whittaker 16). Moreover, holding companies that use HireVue, such as Goldman Sachs and Unilever, accountable under antidiscrimination laws such as the ADA is uniquely challenging. Because proving bias requires examining the AI system's performance across candidates, "those most likely to be harmed by such discrimination lack access to the information they need to bring a suit" (Whittaker 17). Thus, existing power imbalances between employers and disabled individuals are only amplified. Similarly, limitations or evidence of discrimination by AI against people with disabilities are often justified by appeals to what is "normal" or common. As Colleen Haas Distinguished Chair of Disability Studies Karen Nakamura states, a voice assistant such as Alexa is not expected to recognize the accent of a "user whose cerebral palsy affects their vocal cords" since even "'normal' people also have trouble understanding them" (Nakamura 2). Thus, AI developers and companies are well positioned to deflect criticism or accusations that their technologies discriminate.

To better understand the nature of discrimination and bias when artificial intelligence models affect people with disabilities, AI experts have debated competing models of fairness and justice for assessing the competence of AI systems. Some have focused on determining which model of fairness should be employed when developing AI systems that affect people with disabilities. IBM researchers have promoted "individual fairness" as aligning "better with the societal notion of fairness, and legal mandates against discrimination." Under this model, "similar individuals should have similar outcomes" in situations such as "deciding whether to grant loan applications" or in the time taken to complete a test (Trewin et al. 49). However, Nakamura points to the practical flaws of this idealized approach, given that the playing field is not level for all individuals. "People who are blind or who are deaf cannot access the reCAPTCHA images or sound samples," creating a sort of "reverse Turing-test" in which disabled people cannot prove their humanity to a computer validation test (Nakamura). Unless accessibility and equity are embedded in every aspect of an AI system, individual fairness cannot be achieved, because individuals differ systematically in their access to technology.
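
As a rough sketch of how "similar individuals should have similar outcomes" might be audited in practice, the code below flags pairs of applicants whose features are close but whose model scores diverge. The similarity metric, thresholds, and scores are hypothetical assumptions for illustration, not IBM's formulation.

```python
# Minimal sketch of an individual-fairness audit: for every pair of applicants
# whose features are close under some similarity metric, check whether the
# model's scores are also close. Metric, thresholds, and data are assumptions.
import numpy as np

def individual_fairness_violations(features, scores, eps=0.5, delta=0.1):
    """Return index pairs where similar individuals received dissimilar outcomes."""
    violations = []
    n = len(features)
    for i in range(n):
        for j in range(i + 1, n):
            feature_dist = np.linalg.norm(features[i] - features[j])
            score_gap = abs(scores[i] - scores[j])
            if feature_dist <= eps and score_gap > delta:
                violations.append((i, j, feature_dist, score_gap))
    return violations

# Hypothetical loan applicants: [income (normalized), credit history length (normalized)].
features = np.array([[0.80, 0.70], [0.82, 0.69], [0.20, 0.30]])
scores = np.array([0.90, 0.55, 0.20])  # a model's approval scores

for i, j, dist, gap in individual_fairness_violations(features, scores):
    print(f"applicants {i} and {j}: feature distance {dist:.2f}, score gap {gap:.2f}")
```

Nakamura's objection applies even to a check like this: if some applicants could not access the application process at all, no pairwise comparison of scores will surface that exclusion.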

These fundamental difficulties in defining fairness have led some analysts to focus on deeper questions of justice in the context of disability. Researchers in Human Centered Design and Engineering at the University of Washington examined the case of using AI to diagnose autism from facial expressions through both fairness-based and justice-based frameworks. Whereas a fairness framework focuses on "how oppression manifests in an individual" and promotes equality through "remediation of technologies," a justice framework strives to repair past harm and to "question the structures" that allow disabled people to be "disadvantaged in the first place" (Bennett and Keyes 2). In this scenario, a fairness-based approach would merely emphasize "diversifying datasets" to address gender differences in autism symptoms and the "consequential discrepancies in diagnostic rates." A justice-based framework, by contrast, would recognize that "adding technical and scientific authority to medical authority" in diagnosis leaves patients "disempowered," with "less legitimacy given to the patient's voice." More importantly, such AI systems rest on the "premise that an early diagnosis is a good outcome," despite "biases in who can access diagnosis (and how diagnosis tests are constructed)" that limit their benefits (Bennett and Keyes 2). The initial question of promoting fairness thus becomes entangled with more radical concerns about whether AI-based analysis and applications are intended to assist disabled people in the first place.

Disability activists have likewise pointed to the flawed presumption that disability needs a cure, even in AI intended to assist disabled individuals. Disability activist and scholar Eli Clare discusses Ava, an AI-enabled app that allows deaf people to take part in spoken conversations by converting speech to text. Because many Deaf people identify as a "linguistic minority" and attribute their troubles to "the non-deaf world's unwillingness to learn and use sign language" rather than to their inability to hear, they opposed Ava for striving toward "eradicating both deafness as a medical condition and being Deaf as an identity" (qtd. in Whittaker 13). Other disabled individuals, however, value the autonomy and assistance that artificial intelligence can provide. In a collaborative thought piece, Peter Smith, who was left severely disabled after an accident, and his blind daughter Laura Smith reflect on their daily interactions with AI-based technologies. Most notably, Laura describes using a photograph app to obtain descriptions of objects her daughter sees. However, she admits that "some dedicated eBook devices are not accessible with screen readers and do not have options to enable speech," and Peter adds that "the software does not give me much time to speak the text" when he uses speech technology on mobile phones (Smith and Smith). Whether AI-based systems are appreciated or condemned by disabled individuals, current technologies reveal a clear need for consistent effectiveness in providing autonomy aligned with the needs of disabled people.

As a whole, awareness and discussion of the implications of artificial intelligence for marginalized groups have accelerated over the last decade, as the technology industry experiences a surge in innovation and in the number of stakeholders - companies, researchers, and users alike. Despite increased recognition of the complexities surrounding disability, justice, and accessibility, a common thread across stakeholders has been the pressing desire for greater representation of disabled individuals in the design process. As Peter Smith writes in his thought piece, "the main lesson to be learned from the literature, and from our own experiences, is the importance of involving disabled people in the design of AI software and technology which is intended for use by those with disabilities." Ultimately, no amount of ethical analysis or technical experimentation with datasets and assistive systems can replace the intrinsic need for disabled individuals to be actively engaged and welcomed in the technology space as creators, rather than merely as test subjects or users. Given this stark realization, an important gap emerges in studies of disability and artificial intelligence. We must identify the factors that hinder disabled individuals from finding inclusion and accessibility in the tech industry, taking a holistic approach that considers education, the employment journey, and workplace culture. Having understood these root causes and amplifying factors, we ought to promote a combination of policies and societal mindset shifts that empower disabled workers to craft a world in which artificial intelligence and disability can both thrive.


Annotated Bibliography

Bennett, Cynthia L., and Os Keyes. "What Is the Point of Fairness? Disability, AI and the Complexity of Justice." ASSETS 2019 Workshop: AI Fairness for People with Disabilities, 2019.

(Secondary Source). This paper critically reflects on the lens of "fairness" for assessing the ethics of AI for people with disabilities, arguing that it reinforces power dynamics. The authors present two case studies from computer vision, AI for disability diagnosis and AI for "seeing" on behalf of visually impaired individuals, and analyze them through the lenses of fairness and justice.

Guo, Anhong, et al. "Toward Fairness in AI for People with Disabilities: A Research Roadmap." arXiv, 2 Aug. 2019, arxiv.org/abs/1907.02227.

(Secondary Source). This paper analyzes the technical challenges regarding disability bias in AI for different types of AI systems, including natural language processing, handwriting recognition, speaker analysis systems, and vision systems.

Hutchinson, Ben, et al. “Unintended Machine Learning Biases as Social Barriers for Persons with Disabilities.” ACM SIGACCESS Accessibility and Computing, 1 Mar. 2020, dl.acm.org/doi/abs/10.1145/3386296.3386305.

(Secondary Source). This study analyzes datasets for natural-language processing to identify causes and effects of machine learning biases. The authors, who are researchers at Google, identify that machine learning models reflect biases that are found in linguistic datasets, such as those associating people with mental illnesses with gun violence and homelessness.

Whittaker, Meredith, et al. "Disability, Bias, and AI." AI Now Institute, 2019, ainowinstitute.org/disabilitybiasai-2019.pdf.

(Secondary Source). This paper comprehensively analyzes current research and challenges facing artificial intelligence and its relation to biases and discrimination against people with disabilities. The authors refer to discussions from a workshop for professionals in artificial intelligence, as well as prominent viewpoints from literature on this topic of bias, AI, and disability.

Nakamura, Karen. "My Algorithms Have Determined You're Not Human: AI-ML, Reverse Turing-Tests, and the Disability Experience." Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '19), 2019, dx.doi.org/10.1145/3308561.3353812.

(Secondary Source). This paper analyzes some of the technicalities behind artificial intelligence development, as well as the causes of problematic and discriminatory AI models in the context of disability.

Protalinski, Emil. "Google Unveils 3 Accessibility Projects That Help People with Disabilities." VentureBeat, 8 May 2019, venturebeat.com/2019/05/07/google-ai-accessibility-project-euphonia-diva-live-relay/.

Ross, Martha. “People with Disabilities Are Disproportionately among the out-of-Work.” Brookings, Brookings, 30 June 2017, www.brookings.edu/blog/the-avenue/2017/06/30/people-with-disabilities-are-disproportionately-among-the-out-of-work/.

Smith, Peter, and Laura Smith. "Artificial Intelligence and Disability: Too Much Promise, Yet Too Little Substance?" AI and Ethics, 2020, doi.org/10.1007/s43681-020-00004-5.


(Primary Source). This thought piece uses methodologies of autoethnography and reflection based on personal narratives of using AI technology. The authors highlight that disabled people can feel frustrated when working with AI that is intended to assist them; thus, the AI design process must include disabled individuals, and the authors encourage co-design. The source also reviews many other research papers and works on ethics, AI, and disability.

Trewin, Shari. "AI Fairness for People with Disabilities: Point of View." arXiv, Nov. 2018, arxiv.org/pdf/1811.10670.pdf.

(Secondary Source). This paper argues that fairness regarding disability is different from fairness for other aspects such as race or gender, reexamines notions of "fairness" given privacy concerns and a large variety in how disability manifests, and offers suggestions for how to increase fairness for disabled individuals in AI applications.

Trewin, Shari, et al. “Considerations for AI Fairness for People with Disabilities.” AI Matters, 1 Dec. 2019, dl.acm.org/doi/abs/10.1145/3362077.3362086.

(Secondary Source). In this paper, AI experts describe the risks and opportunities of AI for employment, education, safety, and healthcare. In addition, they propose strategies for supporting AI fairness based on literature review and advocate for the inclusion of disabled individuals when building models and in testing.

