Analyzing the Challenges and Implications of Applying AI for Diagnosing Autism Spectrum Disorder
Author: Priti Rangnekar
Originally written for Stanford University's Program in Writing and Rhetoric, Winter 2021.
Introduction: Cogniable and the Era of AI for Diagnosis
As an educator at Learning Skills Academy, an intervention center in Haryana, India, Dr. Swati Kohli was no stranger to children with learning disabilities. Naturally, when her own son, Ekagra, began to show signs of "being different," she swiftly noticed and wanted to learn more. Upon consulting the head of the psychology department at a renowned institution, however, her concerns were merely dismissed. It was not until Ekagra turned four that he was diagnosed with autism spectrum disorder (Velayanikal). Kohli was overcome with guilt and regret, believing that the window had passed for intervention and therapy - which she believed could have addressed Ekagra's autism by capitalizing on his young brain's neuroplasticity, or capacity to change. Convinced that an earlier diagnosis might have improved Ekagra's life by providing him with access to medical care and therapy sessions, she and her husband committed themselves to ensuring that other families would not face the obstacles they had faced in enabling Ekagra to thrive. The couple returned to India and founded Cogniable, a company that develops artificial intelligence products for the "early detection and affordable management of autism spectrum disorder." After collecting data from 37 autistic children in India and the US, the couple, along with a team of computer vision and behavior analysis specialists, created an AI tool for detecting autism. With just a smartphone, parents can use the AI-based app, which analyzes videos of a guided play session and matches behavioral markers, such as eye contact, with similar actions in public video datasets (Velayanikal). The ultimate goal is to provide an autism diagnosis and, according to the Kohlis, potentially eliminate the need for expensive clinic visits or long travels to meet professionals before receiving a diagnosis.
Neither Ekagra's experience with the difficulty of ASD diagnosis nor the rise of AI as a diagnostic tool is an anomaly. In 2020, researchers at Rutgers University estimated that one-fourth of children with autism are undiagnosed, preventing them from receiving care, support, and services for living with autism (Wiggins et al.). Because traditional diagnosis is expensive and labor-intensive, "requir[ing] trained clinicians to administer, leading to long wait times for at-risk children," many researchers in AI and disability alike have pointed to the use of AI-based ASD diagnosis (Abbas et al.). Current technologies predominantly use one of the following to provide a more automated and rapid diagnosis: short online questionnaires for caregivers and healthcare providers, analysis of semi-structured videos of behavior by models trained on prior datasets of children with ASD, or AI-based robotic playmates that detect and classify factors such as eye gaze and movement (Abbas et al.; Erden).
Although limited, the primary conversation surrounding AI-based ASD diagnosis has centered on its statistical effectiveness as demonstrated in experiments, as well as differing viewpoints on the methodology of diagnosis. Numerous tech enthusiasts view AI as "a novel way to improve accuracy and effectiveness during the detection of autism," as machine learning can analyze large data sets to "recognize the phenotypes of autism" while reducing wait times (Erden et al.). In 2020, Cognoa, a startup at the intersection of AI and healthcare, announced that its AI-based behavioral diagnostic tools outperformed traditional methods during clinical trials (Scudellari). Some developers have even recommended that AI be able to provide a diagnosis of ASD without clinical involvement, especially "in small towns and rural areas where qualified personnel are very few" (Velayanikal). However, others argue that traditional methods should be improved rather than replaced by AI. For example, general health professionals have requested increased training to accurately recognize autism, given that most such training is currently directed to a minority of child psychiatrists (Skellern et al.). Other pediatricians recommend human-based diagnosis that emphasizes biological factors, noting that "where there is an intellectual disability, searching for metabolic and genetic causes is important." As a result, they suggest "full assessments" including genetic tests and brain scans (Keefe et al.). However, there are currently no effective tests for ASD based on biological markers, and recent machine learning research in this field has been relatively preliminary (Frye et al.). Despite the rise of AI-based ASD diagnosis that is not rooted in biological or genetic analysis, analysis of such technologies has been limited.
As researchers at the Department of Psychiatry at Seoul National University highlight, "no literature review has been conducted on the broad use of AI technology to distinguish individuals with ASD through an emphasis on behavioral aspects" (Song et al.).
Given that the CDC estimates that 1 in 54 children in the U.S. has been identified with ASD, it is crucial to analyze the ethical complications and implications that arise when AI is used for ASD diagnosis through non-biological or medical factors. In this paper, I seek to evaluate challenges of bias and effectiveness in the context of using AI for diagnosing ASD in children, along with critical perspectives regarding the societal implications of doing so. I first highlight the challenges presented for AI-based ASD diagnosis by the complexities surrounding the definition of ASD and its symptoms, especially in contrast with primarily physical disabilities. Following this explanation, I analyze considerations of algorithmic bias and representation in the dataset and algorithm development process. Then, I discuss how AI-based diagnosis strengthens existing power structures between companies, healthcare professionals, and individuals undergoing a diagnosis by amplifying controversies surrounding autonomy and intervention. Ultimately, these controversies necessitate a human-centered approach that prioritizes the humanity and involvement of those with ASD.
Encoding the Criteria for an ASD Diagnosis into Artificial Intelligence
ASD presents challenges for diagnostic AI technology due to inherent subjectivities in its definition, as well as the relative lack of objective means for diagnosis. The criteria for ASD have varied significantly over time, and no consensus currently exists on the precise requirements for diagnosis. According to the Diagnostic and Statistical Manual (DSM), autism spectrum disorders consist of a "broad range of differing experiences and behaviors that are present from birth," centering on "social communication and interaction, and repetitive and restricted behaviors." This definition has evolved since the late twentieth century, causing notable variations in whether an individual would be diagnosed as autistic. Released in 1994 and revised in 2000, the DSM-IV was the first DSM edition to characterize autism as a spectrum, featuring five conditions with distinct features. These included Rett syndrome, which affects movement and communication primarily in girls, as well as PDD-NOS for children who did not completely meet the criteria for autism but needed behavioral or developmental support (Zeldovich). On the other hand, the DSM-5, released in 2013, introduced the term "autism spectrum disorder" and required the presence of "persistent impairment in reciprocal social communication and social interaction" and "restricted, repetitive patterns of behavior" since early childhood. Due to these stricter criteria, as well as the elimination of Rett syndrome, children with milder traits, as well as girls, are now more likely to be excluded from an ASD diagnosis. Even today, disagreements exist regarding the criteria for ASD diagnosis, which are heavily dependent on culture and geographical region. Unlike the DSM-5, the International Classification of Diseases, which is used for diagnosing children in the United Kingdom and other countries, does not require a fixed number or combination of features for a diagnosis (Zeldovich).
Instead, clinicians are allowed to individually decide whether a child's traits match identifying features. Evidently, the accuracy of a diagnosis depends largely on factors beyond the technicalities of the assessment itself due to the "confusion about criteria, disputes about terminology, and multiple perspectives on the condition by clinical practitioners" (Erden et al.).
These subjectivities and ambiguities in the definition of autism diverge sharply from the objectivity available when defining disabilities with a more physical or biological basis. For example, diabetic retinopathy, an effect of diabetes that can cause blindness, is often detected after clinicians analyze the individual's retinal scans for visual signs of aneurysms or hemorrhages. As a result, artificial intelligence systems that rely on machine learning have been effective in increasing the automation and accuracy of detecting diabetic retinopathy. The IDx-DR system, which has been approved by the FDA to provide a diagnosis without a clinician, has been trained on massive datasets of retinal images and applies mathematical equations to describe lesions in the retina (Ravindran). On the other hand, as the National Center on Birth Defects and Developmental Disabilities explains, receiving an ASD diagnosis has rarely been a straightforward journey, as "there is no medical test, like a blood test, to diagnose the disorder." Instead, the traditional process of ASD diagnosis relies on three steps: parents and guardians monitoring the child's development in meeting typical milestones, questionnaires used for screening when parents or doctors notice abnormalities, and a comprehensive developmental evaluation featuring observation and structured testing of the child by a trained specialist. As a result, there is no objectified system for capturing and analyzing behavioral observational data, which is "based upon [the individual's] actions or subtle responses to social situations and their interpretation by the administrator." This reality of ASD diagnosis presents a stark contrast with the well-established protocols for collection and analysis of genetic and neuroimaging scans (Song et al.).
The implication of this subjectivity in definition, and of the lack of objective methods for diagnosing ASD, is that developers of diagnostic AI are forced to confront the question "what is autism spectrum disorder?" Thus, developers must make especially careful decisions regarding the data and algorithms upon which their diagnostic software is built.
Encoding ASD Diagnosis into Artificial Intelligence: Data, Algorithms, Accuracy
When crafting datasets for AI-based diagnosis, one of the primary concerns is rooted in the idea of "garbage in, garbage out": algorithms or systems trained on flawed or non-representative data inputs produce flawed or non-representative outputs. This challenge of unrepresentative training data is especially prominent in the case of ASD, which "varies widely in severity and everyday impairment." Despite the complex nature of this spectrum disorder, AI-based diagnostic models have erred towards simplistic classifications. In fact, the University of Huddersfield's comprehensive study of AI-based ASD diagnosis determined that the majority of AI-based ASD diagnosis tools used datasets with only two categories - ASD and non-ASD (Thabtah). This alarming disparity between the spectrum-based nature of ASD and the binary datasets used for training diagnostic AI is rooted in the fact that machine learning technologies performing binary classification are generally simpler and more accurate than those with multiple categories (Erden et al.). However, this simplicity of AI models makes them susceptible to error, especially when diagnosing individuals whose behaviors are milder or different from those of the autistic individuals in the dataset. AI lacks the ability to account for "nuance or flexibility," making it less likely to account for a child's individualized context or behaviors when comparing the child against a particular set of assessment criteria (Erden et al.). Thus, false positives and negatives present a more significant risk when using current forms of artificial intelligence for diagnosis.
AI-based companies are currently addressing this lack of representation for the entire autism spectrum by diversifying datasets. The expectation is that by specifically including cases of children with milder autism symptoms, or by increasing the proportion of female autism cases in datasets, diagnoses will become more accurate and combat current issues of lower diagnosis rates for girls (Bennett and Keyes). However, as Whittaker et al. of the AI Now Institute, an interdisciplinary research institute at NYU dedicated to understanding the social impacts of AI, highlight, "simply expanding a dataset's parameters to include new categories" will not truly increase representation (10). The probabilistic and statistical nature of machine learning algorithms intrinsically places minorities at a disadvantage: "outlier data is often treated as 'noise' and disregarded," and a lack of individuals matching a given child's behavior and features prevents the algorithm from finding patterns and providing an accurate diagnosis (Trewin 3). Due to these statistical issues, machine learning researchers tend to face a double bind. Including "outlier data" for less common presentations can cause overfitting, in which the algorithm fails on general, more common scenarios; using simpler models results in incorrect or "unfairly negative" predictions for "outlier individuals" (Trewin 3). In most cases, because researchers and industry focus on statistical accuracy when promoting new AI-based diagnosis technologies, AI developers will unfortunately take the latter path, sacrificing nuance in capturing the complexities of ASD in favor of overall accuracy.
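This double bind can be made concrete with a deliberately simplified sketch. Nothing here reflects any real diagnostic product: the "behavioral scores," labels, and single-threshold model are all invented for illustration. The point is only that a model tuned for overall accuracy can score well while misclassifying every member of a small subgroup whose presentation differs from the majority.

```python
# Illustrative sketch only (not any vendor's model): a single-threshold
# binary classifier "trained" on invented behavioral scores.

# Synthetic data: (score, label). Label 1 = ASD, 0 = non-ASD.
# Most ASD cases score high; a minority subgroup with milder
# presentations scores low, overlapping the non-ASD range.
majority_asd = [(8.0, 1)] * 40
minority_asd = [(1.5, 1)] * 5     # "outlier" presentations
non_asd      = [(2.0, 0)] * 55
data = majority_asd + minority_asd + non_asd

def accuracy(threshold, rows):
    """Fraction of rows where 'score >= threshold' matches the label."""
    correct = sum((score >= threshold) == bool(label) for score, label in rows)
    return correct / len(rows)

# "Train" by picking the threshold that maximizes overall accuracy.
candidates = sorted({score for score, _ in data})
best = max(candidates, key=lambda t: accuracy(t, data))

overall = accuracy(best, data)
subgroup = accuracy(best, minority_asd)
print(f"threshold={best}, overall={overall:.2f}, minority subgroup={subgroup:.2f}")
```

The threshold that maximizes overall accuracy (95%) classifies every minority-subgroup case incorrectly, mirroring how "outlier data is often treated as 'noise' and disregarded."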
Even when designing algorithms, AI models for diagnosing ASD are particularly prone to replicating the biases of their developers, psychologists, and clinicians. Typically, AI models for diagnosing ASD rely on training datasets consisting of question-based results from established diagnostic tools, such as the ADI-R (Erden et al.). The ADI-R presents 93 questions to parents or caregivers regarding the child's full developmental history. The answers are then given a rating score by the interviewer; if the child's scores meet the threshold, the child is diagnosed with autism spectrum disorder. Despite the ADI-R's reputation as the "gold standard" of standardized interview tools, it remains questionable due to its reliance on the interviewer's interpretation and rating of the parents' answers, which are influenced by clinicians' professional training, personal biases, and cultural background (Song et al.).
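A hypothetical sketch can illustrate how interviewer judgment enters such score-threshold instruments. The real ADI-R algorithm, items, and cutoffs are more involved; the ratings and cutoff below are invented purely to show the mechanism.

```python
# Hypothetical score-threshold instrument (NOT the actual ADI-R algorithm;
# item ratings and cutoff are invented for illustration).

def interview_diagnosis(item_ratings, cutoff=10):
    """Sum interviewer-assigned ratings; diagnose if the total meets the cutoff."""
    return sum(item_ratings) >= cutoff

# The same parental answers, rated by two interviewers whose judgment differs:
rater_a = [2, 2, 1, 2, 2, 2]   # total 11 -> meets cutoff
rater_b = [1, 2, 1, 2, 1, 2]   # total  9 -> below cutoff
print(interview_diagnosis(rater_a), interview_diagnosis(rater_b))
```

Identical answers can thus yield different diagnostic labels depending solely on the interviewer's ratings, and those labels then become the "ground truth" in the training datasets on which diagnostic AI is built.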
The usage of artificial intelligence, often touted as an objective and scientific tool, has demonstrated its ability to replicate these biases. Diagnostic AI models rooted in prejudiced notions of the disability, such as those based on psychologist Simon Baron-Cohen's paper portraying autism as an "extreme male brain," will overemphasize certain traits, such as the child's tendency to systemize and think scientifically rather than express empathy for fellow children. Such notions have been discredited due to the lack of evidence for the assumption that men and women demonstrate measurable differences in scientific thinking and social communication (Furfaro). Thus, although AI may not be subject to the personal biases, circumstances, or lack of professional training that can sway the judgments of a physician, if the datasets the AI relies on are "compiled with the contributions of biased physicians, then those biases can be further reified" (Erden et al.).
Furthermore, irrespective of the specific algorithms and techniques used for AI-based ASD diagnosis, a crucial aspect of any AI development process is determining its accuracy. In this regard, the most relevant case study is that of Cognoa. In late 2020, the pediatric behavioral health company began the process of seeking FDA clearance for its "breakthrough digital autism diagnostic device" after its "successful pivotal study" (Cognoa). Cognoa's AI-based device requires caregivers to complete a questionnaire regarding the child's behavior, upload two videos of the child using Cognoa's mobile app, and complete two primary care appointments. The company, which conducted its study for ten months from July 2019 to May 2020 at 14 sites across the United States, determined its device's accuracy by "comparing its diagnostic output with the clinical reference standard, consisting of a diagnosis made by a specialist clinician, based on DSM-5 criteria and validated by one or more reviewing specialist clinicians." Evidently, Cognoa's AI-based diagnostic tool is based on the behaviors and traits of children already diagnosed with autism, with the standard of accuracy for comparison being the diagnoses provided by clinicians. Therefore, the standards used for determining the accuracy of an AI-based diagnostic tool inherently assume that prior decisions by clinicians are accurate and should serve as the model.
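The accuracy comparison described above can be sketched in a few lines. This is not Cognoa's actual evaluation code; the labels are invented, and the metrics computed (overall agreement plus positive and negative percent agreement) are simply standard ways of quantifying agreement with a reference standard.

```python
# Illustrative sketch of "accuracy" as agreement with a clinician reference
# standard. All labels below are invented; 1 = ASD diagnosis, 0 = no diagnosis.
device_output   = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]  # AI device's diagnoses
reference_label = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0]  # specialist clinicians' diagnoses

pairs = list(zip(device_output, reference_label))
tp = sum(d == 1 and r == 1 for d, r in pairs)  # device and clinicians both say ASD
tn = sum(d == 0 and r == 0 for d, r in pairs)  # both say no ASD
fp = sum(d == 1 and r == 0 for d, r in pairs)  # device says ASD, clinicians disagree
fn = sum(d == 0 and r == 1 for d, r in pairs)  # device misses a clinician diagnosis

ppa = tp / (tp + fn)                    # positive percent agreement
npa = tn / (tn + fp)                    # negative percent agreement
overall_agreement = (tp + tn) / len(pairs)
print(ppa, npa, overall_agreement)
```

Whatever these numbers turn out to be, they measure agreement with the clinicians' labels rather than with any independent ground truth, so any systematic bias in the reference diagnoses is invisible to the metric itself.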
Beyond Coding and into Controlling: The Deeper Implications of AI-Based Diagnosis
It is evident that each stage of the development process - crafting datasets, developing algorithms, and determining accuracy - is mired in complications regarding representation and bias. However, the implications of these challenges in AI-based diagnosis extend beyond the AI model itself and into the realm of the autonomy and humanity of those with autism.
In any critical analysis of the implications of disability diagnosis, a crucial concept to discuss is that of the medical model of disability. The medical model of disability defines an illness or disability as the result of the individual's physical or mental impairment, thus characterizing disability as a defect within the individual. The model relies heavily on the assumption that the individual's disability will reduce her quality of life, so a just society should invest resources towards intervention and treatment of the individual's condition (Grover). However, this model has been severely criticized due to its characterization of disability as an intrinsically flawed phenomenon that is responsible for the disabled individual's "suffering," as well as its portrayal of medical professionals and able-bodied individuals as entities who can "address" or "cure" disability. Broadly, this model largely fails to consider the role of society in "disabling" individuals due to factors such as discriminatory practices, exclusionary design, and biased attitudes. The use of artificial intelligence for ASD diagnosis only exacerbates the medical model in the lives of disabled individuals.
In an attempt to design diagnostic AI tools that are objective, developers have begun placing an undue emphasis on identifying features and symptoms of deviation in a child's behavior for the AI models to detect when providing a diagnosis. A sample report of AI that used the "difference in face scanning patterns between ASD and non-ASD participants" as a factor for classification highlighted that "the most distinguishable characteristic was that the [non-ASD] group spent more time looking at the right eye, while the ASD group spent more time on the left eye" (Song et al.). Moreover, the report especially noted the accuracy rate of 88.51% in identifying this distinction between non-ASD and ASD children. Similarly, a study by Hilton et al. reported that "83% of individuals with ASD had lower motor composite scores than non-ASD individuals" as a justification for using motor movements as a classification factor for AI diagnosis. Clearly, due to the pattern-finding and feature detection that AI-based diagnosis requires, researchers are drawn to purposefully searching for abnormalities and pinpointing minute deviations between autistic children and "normal" children. Moreover, the unique weight assigned to quantified results, as evidenced by the values "88.51%" and "83%," grants credence to stereotypes and sweeping generalizations about autism. Ultimately, these statistics shift the rhetoric of autism from conversations about the individuality and personhood of autistic children to demonstrations and proofs about their flaws and symptoms, such as "absent social skills and emotional awareness" (Keyes). Even when developers intend to view autism as a difference rather than a deficit, the entrenchment of the medical model is further perpetuated in the dataset and algorithmic design process.
Most data and case studies regarding autism from the 1900s and early 2000s have largely focused on behaviors associated with autism as "deficits requiring remedies." Thus, although developers and clinicians may strive to use AI in a context "where neurodiversity is viewed as a difference" rather than to explicitly pinpoint flaws in an autistic child, the AI model cannot objectively account for the current circumstances, as it is heavily dependent on the aggregation of prior data and overarching context of autism as a deficit (Erden et. al).
The second important way in which AI exacerbates the medical model is in the diagnosis process itself. A relevant case study to consider in this regard is that of Cogniable, the company founded by the Kohlis after their experience with their autistic son Ekagra. Cogniable asks parents to "simply play with their child at home, based on a script [that Cogniable provides], and record that session" in the app, and hospital committees must "approve digital collection of video data remotely" (Velayanikal). Although this process may seem objective and data-oriented at first glance, it quickly becomes clear that parents and children severely lack autonomy and choice in the diagnostic process, as the parents' role is reduced to that of a data provider to a technology company that ultimately characterizes their child as autistic or "normal." Cogniable's targeted customers for its AI-based diagnosis app are predominantly parents and hospitals in rural areas with few trained professionals and few resources for parents to learn about autism spectrum disorder. Cogniable specifically highlights Bangladesh as a place where the company operates to make diagnosis more accessible; however, nearly 21% of Bangladesh's population lives below the poverty line with "no electricity and limited access to health care" even in major cities (Rural Recovery Portal). Thus, in reality, many parents and hospitals do not have access to technical education and insights about privacy and data collection, drastically limiting the accountability companies such as Cogniable have regarding how the videos of children are truly used.
This lack of autonomy and of the ability to hold AI-based diagnosis companies accountable also extends into the realm of transparency in the diagnosis process. It is widely recognized that AI-based decision making in any societal context compromises transparency and requirements for justification. While humans can be asked to "justify their conclusions if there is a lack of certainty or trust," prognostication scores produced by algorithms are viewed as "black boxes" that should be assumed to provide accurate answers (Beil et al.). In combining technical authority with medical authority, people subject to medical contexts are "even further disempowered, with even less legitimacy given to the patient's voice" (Bennett and Keyes). As a result, under the illusion of technology-produced accuracy, the biases of the developers creating the AI model and the clinicians interpreting its results can go unchecked. This lack of transparency in the diagnosis process, combined with the use of AI models that families in rural areas are expected to blindly trust as a form of rescue, leaves families and children with ASD dangerously in the dark. They are persuaded to provide recorded videos and deeply personal data about their children's behaviors; in return, they may not receive a proper justification for the diagnosis. Ultimately, AI-based diagnosis amplifies and normalizes "significant power asymmetries" between disabled individuals who may be "classified, ranked, and assessed" and those designing and deploying AI systems (Trewin 1).
Yet, the most disconcerting perpetuation of the medical model by AI-based diagnosis lies not in AI used purely for diagnosis, but in cases where the deeper motives of developers and clinicians merge explicit diagnosis with subtle intervention. Cogniable's digital assessment app not only provides a diagnosis but also enables the automatic development of a "customized therapy program" with "skills to be taught and the protocols to teach each skill" as an online manifestation of intervention to "address" autism. Most importantly, Cogniable's developer, Dr. Kohli, is herself an educator in applied behavior analysis, a highly controversial form of interventional therapy that seeks to make people with autism "normal" by training them through repetition and drills for over 40 hours a week, often against the child's will. Unfortunately, an app that initially seemed benevolent in promoting accessible diagnoses carries a deeper context of incentives surrounding treatment, and even elimination, of autism at an early age. Some researchers at the Adaptive Systems Research Group at the University of Hertfordshire have gone so far as to virtually replace the role of clinicians and parents in the diagnosis process with "robotic playmates" that detect a child's "eye-gaze, affect, and movement" while interacting with the child (Dautenhahn et al.). Although these robotic devices are primarily passive in their diagnostic interactions, they often contain a "learning mode" that adjusts their style of interaction by inferring the child's behavior, seamlessly "delivering therapeutic intervention in tandem with diagnostic assessments" (Lee et al.). Moreover, in many cases, the child and their family may not realize that intervention is occurring in the first place.
Once again, autistic children are left under the influence of developers and clinicians, as opportunities for the multi-way discussion and exchange among clinicians, parents, and children that exist in traditional forms of diagnosis are substantially reduced. Given this alarming conflation of diagnosis and treatment, it is imperative to realize that autism is not merely a diagnostic label (Bennett and Keyes). Unfortunately, it carries associations of incapability and divergence from the norm, which have led to a plethora of "harmful behavior change therapies" rooted in deep-seated assumptions about autism as a deficit needing identification and, ultimately, eradication.
The rise of AI-based diagnosis for autism spectrum disorder is unlikely to decelerate as companies such as Cogniable and Cognoa complete their clinical trials and seek FDA approval. However, it is ultimately up to society to recognize that AI-based diagnostic tools are no savior; they raise complications of their own, such as subjectivity in which definition of ASD to use, statistical bias, and the incorporation of the prejudices of developers and clinicians. The seemingly objective nature of AI technology, combined with developers' marketing efforts to cast AI-based diagnosis in the light of accessibility and service, will amplify its usage not only for diagnosis but also as a means of opening doors to intervention and treatment. While the simple path may be to follow this slippery slope that exacerbates the medical model of disability, the necessary route is one of reflection and advocacy. After questioning the assumption of disability as a deficit in which AI models are rooted, developers and healthcare professionals alike must grapple with questions that go beyond the implementation of AI: what is the purpose of diagnosis, and who should define what constitutes a disability such as autism? After all, current models of AI merely substitute for the role of clinicians in providing diagnoses; they do not increase the power given to autistic individuals to define their identity and path ahead independent of algorithmic predictions and recommendations. Thus, society must choose not merely to reach out and blindly promote such technologies, but also to call in and make the fields of healthcare and technology themselves welcoming to autistic individuals. It is only in an environment that values their insights and perspectives that autistic children and their families will be able to thrive.
Ultimately, we will have decoded not only the encoding of ASD diagnosis in AI, but also the encoding of beliefs and practices surrounding how society defines and understands disability.
Works Cited
Beil, Michael, et al. “Ethical Considerations about Artificial Intelligence for Prognostication in Intensive Care.” Intensive Care Medicine Experimental, Springer International Publishing, 10 Dec. 2019, www.ncbi.nlm.nih.gov/pmc/articles/PMC6904702/.
Bennett, Cynthia L., and Os Keyes. "What Is the Point of Fairness? Disability, AI and the Complexity of Justice." ASSETS 2019 Workshop on AI Fairness for People with Disabilities, 2019.
(Secondary Source). This paper critically reflects on the lens of "fairness" for assessing the ethics of AI for people with disabilities, as it reinforces power dynamics. The authors present two case studies from computer vision, including AI for disability diagnosis and AI for "seeing" to help visually impaired individuals, and analyze them through the two lenses of fairness and justice.
Erden, Yasemin J., et al. “Automating Autism Assessment: What AI Can Bring to the Diagnostic Process.” Journal of Evaluation in Clinical Practice, 2020, doi:10.1111/jep.13527.
Guo, Anhong, et al. “Toward Fairness in AI for People with Disabilities: A Research Roadmap.” ArXiv.org, 2 Aug. 2019, arxiv.org/abs/1907.02227.
(Secondary Source). This paper analyzes the technical challenges regarding disability bias in AI for different types of AI systems, including natural language processing, handwriting recognition, speaker analysis systems, and vision systems.
Hutchinson, Ben, et al. “Unintended Machine Learning Biases as Social Barriers for Persons with Disabilities.” ACM SIGACCESS Accessibility and Computing, 1 Mar. 2020, dl.acm.org/doi/abs/10.1145/3386296.3386305.
(Secondary Source). This study analyzes datasets for natural-language processing to identify causes and effects of machine learning biases. The authors, who are researchers at Google, identify that machine learning models reflect biases that are found in linguistic datasets, such as those associating people with mental illnesses with gun violence and homelessness.
Mills, Mara, and Meredith Whittaker. Disability, Bias, and AI. AI Now Institute Report, 2019, ainowinstitute.org/disabilitybiasai-2019.pdf.
(Secondary Source). This paper comprehensively analyzes current research and challenges facing artificial intelligence and its relation to biases and discrimination against people with disabilities. The authors refer to discussions from a workshop for professionals in artificial intelligence, as well as prominent viewpoints from literature on this topic of bias, AI, and disability.
Nakamura, Karen. "My Algorithms Have Determined You're Not Human: AI-ML, Reverse Turing-Tests, and the Disability Experience." ASSETS '19: The 21st International ACM SIGACCESS Conference on Computers and Accessibility, 2019, dx.doi.org/10.1145/3308561.3353812.
(Secondary Source). This paper analyzes some of the technicalities behind artificial intelligence development, as well as the causes of problematic and discriminatory AI models in the context of disability.
Protalinski, Emil. “Google Unveils 3 Accessibility Projects That Help People with Disabilities.” VentureBeat, VentureBeat, 8 May 2019, venturebeat.com/2019/05/07/google-ai-accessibility-project-euphonia-diva-live-relay/.
Ross, Martha. “People with Disabilities Are Disproportionately among the out-of-Work.” Brookings, Brookings, 30 June 2017, www.brookings.edu/blog/the-avenue/2017/06/30/people-with-disabilities-are-disproportionately-among-the-out-of-work/.
Smith, P., and L. Smith. "Artificial Intelligence and Disability: Too Much Promise, Yet Too Little Substance?" AI and Ethics, 2020, doi.org/10.1007/s43681-020-00004-5.
(Primary Source). This thought piece uses methodologies of autoethnography and reflection based on personal narratives of using AI technology. The author highlights that disabled people can feel frustrated when working with AI that is intended to assist them. Thus, the AI design process must include disabled individuals, and the author encourages co-design. This source also includes a review of many other research papers and works on the issue of ethics, AI, and disability.
“The Evolution of 'Autism' as a Diagnosis, Explained.” Spectrum, 2 Apr. 2020, www.spectrumnews.org/news/evolution-autism-diagnosis-explained/.
Trewin, Shari. AI Fairness for People with Disabilities: Point of View. Nov. 2018, arxiv.org/pdf/1811.10670.pdf.
(Secondary Source). This paper argues that fairness regarding disability is different from fairness for other aspects such as race or gender, reexamines notions of "fairness" given privacy concerns and a large variety in how disability manifests, and offers suggestions for how to increase fairness for disabled individuals in AI applications.
Trewin, Shari, et al. “Considerations for AI Fairness for People with Disabilities.” AI Matters, 1 Dec. 2019, dl.acm.org/doi/abs/10.1145/3362077.3362086.
(Secondary Source). In this paper, AI experts describe the risks and opportunities of AI for employment, education, safety, and healthcare. In addition, they propose strategies for supporting AI fairness based on literature review and advocate for the inclusion of disabled individuals when building models and in testing.
Velayanikal, Malavika. "Harnessing AI to Ease the Stress of Managing Autism." Mint, 18 Oct. 2020, www.livemint.com/news/business-of-life/harnessing-ai-to-ease-the-stress-of-managing-autism-11603040643437.html.