
Why, Machine, Why? The Explainability Puzzle of Artificial Intelligence

Anupama Vijayakumar

10 May 2023

There is a major problem with integrating AI into high-stakes decision making. Deep learning algorithms cannot explain themselves, nor can their thinking be understood by humans. Efforts toward achieving explainable AI seek to tackle this problem. But can explainable AI meet humanity’s expectations?


The ability of an individual to explain one's own actions has been a central organizing principle of human society since time immemorial. Beyond the rights and wrongs codified in law, the criminal justice system itself works on the basis of how the plaintiff's and the defendant's versions of events compare with each other. Arthur Conan Doyle's Sherlock Holmes relied on the Aristotelian art of deduction, paired with superhuman powers of observation and intelligence, to logically explain seemingly unsolvable crimes.


Governments in democracies are obligated to explain their decisions and policies to their citizens in rational terms. In stark contrast, what happens when intelligent machines that cannot explain their actions are increasingly integrated into high-stakes decision-making? Efforts to achieve explainable AI intend to tackle this very question.


The Problem


AI evolves by learning from the instructions and data fed to it by programmers. A new age of AI has been realized since 2012 through breakthroughs in deep learning, a subset of machine learning. Deep learning is based on artificial neural networks loosely modelled on the human brain and is regarded as the fundamental enabler of human-like AI.


Unlike traditional machine learning, where an algorithm is trained on features selected under the programmer's supervision, deep learning algorithms are fed vast amounts of raw data from which they identify patterns, reinforce the best pathways and learn on their own. For instance, Google DeepMind's AlphaGo taught itself to play the complex and intuitive game of Go through vast numbers of games played against other programs and against versions of itself. Although AlphaGo was never explicitly trained to face human players, it was able to beat the world's top Go players by making unpredictable moves that left them completely astounded.
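To make the idea of "learning on their own" concrete, below is a minimal, hypothetical sketch of self-play learning on the toy game of Nim, written in plain Python with a simple lookup table. It is not deep learning, and it is nowhere near AlphaGo's scale or method, but it illustrates the same principle: the programmer specifies only the rules and the learning loop, never the strategy.

```python
import random
from collections import defaultdict

# Toy stand-in for "learning by playing against itself": tabular learning on
# single-pile Nim (take 1-3 sticks; whoever takes the last stick wins).
# No strategy is programmed in; good moves are reinforced by trial and error.

PILE = 10
ACTIONS = [1, 2, 3]
Q = defaultdict(float)              # Q[(sticks_left, action)] -> learned value
ALPHA, EPSILON, EPISODES = 0.5, 0.2, 50_000

def choose(sticks):
    """Mostly pick the best known move, sometimes a random one (exploration)."""
    legal = [a for a in ACTIONS if a <= sticks]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(sticks, a)])

for _ in range(EPISODES):
    sticks, history = PILE, []      # (state, action) pairs from both players
    while sticks > 0:
        action = choose(sticks)
        history.append((sticks, action))
        sticks -= action
    reward = 1.0                    # the player who took the last stick wins
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward            # alternate between winner and loser

# With enough episodes, the learned policy tends to rediscover the classic
# strategy of leaving the opponent a multiple of four sticks.
for sticks in range(1, PILE + 1):
    best = max((a for a in ACTIONS if a <= sticks), key=lambda a: Q[(sticks, a)])
    print(f"{sticks} sticks left -> take {best}")
```

Replace the lookup table with a deep neural network and scale the number of games up enormously, and you are in the broad family of techniques behind systems like AlphaGo.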


In this way, deep learning algorithms are akin to black boxes: a human cannot interpret or explain how the algorithm arrived at a certain output or decision. The problem arises because the programmer's role in development is limited to designing the learning environment within which the algorithm teaches itself to operate. The processes the neural network employs to learn from that environment, and the parameters on which it bases its decisions, remain beyond access.




In fields such as healthcare and the military, the black box problem poses several challenges. A study published in The Lancet Digital Health in 2022 discussed the curious case of an algorithm that unexpectedly learned to identify a person's race from medical images, including X-rays, MRIs and CT scans. The doctors simply could not determine how the algorithm figured out race from these images. Various studies have also noted how algorithms widely employed in the US healthcare system have shown systematic bias against African Americans as well as disabled people, leaving hundreds of people without much-needed medical assistance.


The professionals involved in these cases could not really understand how the algorithm decided a person's eligibility for care. A common method used to guard against such algorithmic biases has been to leave out details of race and economic status from the data accessible to the machine. However, if deep learning algorithms start teaching themselves to determine race from medical images, there is a serious possibility that prevailing inequalities in society will be compounded.
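As a rough illustration of that "leave it out" approach, the hypothetical snippet below drops sensitive columns from a small, invented patient-records table before it ever reaches a model; the column names and values are made up for the example.

```python
import pandas as pd

# Hypothetical patient records; the columns and values are illustrative only.
records = pd.DataFrame({
    "age": [34, 71, 52],
    "prior_visits": [2, 9, 4],
    "chronic_conditions": [1, 3, 2],
    "race": ["A", "B", "A"],
    "income_bracket": ["low", "high", "mid"],
})

# The "leave it out" approach: strip the sensitive attributes before the data
# reaches the learning algorithm.
SENSITIVE = ["race", "income_bracket"]
training_data = records.drop(columns=SENSITIVE)
print(training_data.columns.tolist())   # ['age', 'prior_visits', 'chronic_conditions']

# Caveat from the paragraph above: omission alone is no guarantee of fairness.
# If the remaining features (or raw medical images) still encode race
# indirectly, a deep learning model can reconstruct it on its own.
```

As the Lancet study suggests, this kind of omission is only as good as the features that remain.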


The Solution?


In simple terms, explainable AI (XAI) seeks to peer into the black box to understand the machine's thought process. It seeks to build into an intelligent machine the ability to provide clear and understandable explanations for its decisions or actions. The end goal of XAI approaches is to make intelligent machines suitable for wider integration into society by making them more accountable for their actions.


Various approaches crafted by AI developers strive to probe how neural networks make decisions. Such approaches that peer into an algorithm's "brain" have been collectively referred to as AI neuroscience. Approaches within AI neuroscience are largely proofs of concept and are continuously evolving as the algorithms themselves improve in ways that remain unexplainable.


A dominant approach within this field is to employ counterfactual probes, which work by trial and error. Counterfactual probes provide varied inputs (text, images or other relevant data) to check whether the output changes. Local Interpretable Model-Agnostic Explanations (LIME) is an example of such a probe. In the case of a hiring algorithm that makes decisions based on certain keywords signifying an applicant's qualifications, LIME would delete or change certain words across several tests to see whether the algorithm still considered the applicant eligible.
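The snippet below is a minimal sketch of that perturbation idea. The scoring function is a made-up stand-in for the hiring model; a real probe such as LIME would query the actual black-box model, perturb many words at once, and fit a small interpretable model to the results rather than checking one deletion at a time.

```python
# A hypothetical black box: returns a score that the applicant is shortlisted.
# In reality this would be an opaque deep learning model, not a readable rule.
def hiring_model(resume_text: str) -> float:
    keywords = {"python": 0.3, "phd": 0.3, "leadership": 0.2}
    score = 0.0
    for word, weight in keywords.items():
        if word in resume_text.lower():
            score += weight
    return min(score, 1.0)

def word_deletion_probe(text: str, model, threshold: float = 0.6):
    """Delete one word at a time and record which deletions flip the decision."""
    words = text.split()
    base_decision = model(text) >= threshold
    influential = []
    for i in range(len(words)):
        perturbed = " ".join(words[:i] + words[i + 1:])
        if (model(perturbed) >= threshold) != base_decision:
            influential.append(words[i])
    return influential

resume = "PhD graduate with Python and leadership experience"
print(word_deletion_probe(resume, hiring_model))   # ['PhD', 'Python']
```

The words whose removal flips the decision serve, in effect, as the explanation: they are what the black box is actually paying attention to for this particular applicant.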


Programmers have also attempted to develop XAI tools by testing them on vintage video games. For instance, Mark Riedl, Professor at the Georgia Tech School of Interactive Computing, has attempted to do so using the game Frogger. Riedl and his team trained an algorithm to describe its actions in English while trying to get the frog across traffic and to a pond. The training relied on recordings of human gamers who were instructed to think aloud as they played. A separate algorithm was then trained to map the game-playing algorithm's moves to language modelled on those recordings. While the project achieved explainability to a certain extent, the algorithm was still limited in deciphering the exact intentions behind each move.


Right to an Explanation


Irrespective of the feasibility of XAI, explainability has been underlined as a common guiding principle within global approaches that seek to regulate AI. Its importance is heavily emphasised in the European Union's General Data Protection Regulation (GDPR), which grants an individual the right to question a firm on the decisions it makes using algorithms. Businesses in turn are liable to explain those decisions, failing which they would be deemed non-compliant with the GDPR.


Moreover, under Article 22, the GDPR grants an individual a right to human intervention: a human may be asked to review a system's decision to determine whether it is wrong. Meanwhile, the California Privacy Rights Act grants a consumer the right to understand and opt out of automated decision-making processes. An individual's right to an explanation is also central to Canada's Bill C-11, the country's proposed reform of its privacy laws.


The ultimate end goal of explainability may vary depending on the stakeholders involved and the domain it is applied to. XAI models built to explain medical diagnoses, for example, may not serve the purpose in law and order, because norms and decision-making mechanisms vary across domains. Moreover, stakeholders including programmers, firms and policymakers appear to have varied understandings and expectations of XAI.


In practice, engineering concerns have been noted to dominate the notion of explainability, with the concerns of impacted individuals and communities taking a back seat. This casts major doubt on whether XAI can be customised to be user-friendly in the way policies such as the GDPR expect. The rights the law seeks to guarantee would have no effect without reliable technical tools to back them up. In such cases, the only course of action might be to prohibit the use of AI in certain decision-making settings altogether.


Explainability is pivotal to ensuring that intelligent machines are reliable, humane and trustworthy. Explanations would have to be measured against standards of justifiability, ideally by human adjudicators, to ensure that the black box does not let biased and unfair decisions masquerade as objective ones. The interests of the global society may be best served by multiple stakeholders coming together to reconcile the feasibility of XAI with the expectations laid down in regulations. A call can then be taken on how fast machines should be brought into high-stakes decision-making.


Disclaimer: The article expresses the author's views on the matter and does not reflect the opinions and beliefs of any institution they belong to, or of Trivium Think Tank and the StraTechos website.

Anupama Vijayakumar


Anupama is the Editor-in-Chief of StraTechos. She is a Co-founder and Director of Trivium Think Tank. She is currently a Ph.D. candidate at the Department of Geopolitics and International Relations, Manipal Academy of Higher Education and is a recipient of the Government of India's UGC Senior Research Fellowship. Her doctoral research focuses on the interplay between technology and power in international relations.

