
Alongside Intelligence, can Ethics be Artificially Created Too?

Anupama Vijayakumar

1 November 2022

As the dawn of the Artificial Intelligence age beckons, can standards ensure that AI is 'ethical' or 'humane'? A technology that at once holds great potential to help humans address some of the biggest problems facing mankind, could also evolve as a threat if its development is left unregulated.


Robots and intelligent machines have captured the human imagination for decades. Whether in science fiction or in visual media, intelligent artificial systems have often featured in prominent roles. Yet their portrayals vary wildly across these plot lines. Fans of the Star Wars epics will have fond memories of assistant robots like R2-D2 or C-3PO. Those of the Terminator-franchise persuasion will recollect with dread the killer autonomous robots and the sentient system at their centre, Skynet.


Real-world developments in Artificial Intelligence (AI) are certainly some way off either of these extremes. Still, fast-paced advances in machine learning techniques have in recent years raised a pertinent debate: if humans do succeed in creating super-intelligent artificial systems, will those machines necessarily imbibe qualities such as empathy or ethics? In other words, can AI be made ‘humane’, ‘lawful’ and ‘trustworthy’?


The jury is still out on these important considerations. Advocates of AI, including most of the world’s tech majors, highlight the immense benefits such systems can bring by boosting productivity and overcoming hurdles that humans are ill-equipped for. On the other hand, a growing number of alarmists, including tech entrepreneur Elon Musk and philosopher Nick Bostrom, argue that intelligent machines will eventually cause the end of humanity as we know it. As AI algorithms become increasingly prevalent in providing day-to-day services, there have also been high-profile instances of rogue decision-making by such systems, with their inability to mirror key human attributes of fairness and justice being particularly worrisome.


At a time when equality, inclusiveness and the overcoming of systemic biases are buzzwords in most of the world’s major democracies, will an AI-enabled future help eliminate prevailing social evils or exacerbate them? This is not, and should not be, a question merely for a traditionally reticent tech community or for the opaque ivory towers of national security decision-making. Deliberations to set norms that will govern the way AI systems evolve will affect the entire global citizenry. As such, a whole-of-society approach is required at this juncture to ensure that the exponential integration of technology does not turn detrimental to humans.


Wait, isn’t technology unbiased?


At present, the most prominent worry humans have with the rise of comparably intelligent machines boils down to a form of existentialism. Would there be any jobs left for humans if machines could be trained to replace them? This is not merely a worry for those in blue-collar jobs such as factory workers, truckers or construction personnel – some of the groups which have organised across the world against the entry of AI and robotics into their domains. What of so-called upper management across industries, whose day-to-day work could be replaced in an instant by a system that can operate MS PowerPoint and Excel? Who will Human Resources ‘manage’ once humans are mostly out of the picture? Will there be a need for editors, reporters and columnists, whose jobs grow more dependent on technology with each passing day?


What of research and risk analysts or social scientists when faced with systems that can assimilate millions of data points in an instant and produce predictive analysis? Will even healthcare professionals be spared once robotics evolves to the point where the most complex of surgeries can be performed with no human assistance? Diagnosis could certainly be taken over by systems fed with decades of practical medical records, not to mention the data from medical texts and journals. Eventually, when machines evolve to the level where they can program themselves and increase their own efficiency, humans could find themselves unneeded even in the software and tech industry. At that point, what functions would mankind serve? And would machines – akin to Skynet, or Ultron if Marvel Comics is more your fancy – exercise their own judgement on the worth of human beings and act on that assessment?


This scenario, while set reasonably far in the future, is the driving force behind efforts to ensure that AI, as well as technologies at the intersection of AI and robotics, evolves to be pro-human. Such efforts are considered particularly crucial at a time when machine learning algorithms are enabling artificial neural networks to approximate aspects of how the human brain thinks. However, the quest for highly advanced forms of sentient AI carries quite a few visible drawbacks that could have far worse real-world implications in the short run than mere existential rhetoric. What makes these potential societal impacts all the more troublesome is that, unlike a mass lay-off of workers, such technology-driven systemic imbalances would set in slowly, hardly drawing any attention while exacerbating existing differences in society.


‘The Social Dilemma’, a 2020 Netflix docudrama, highlighted the potential impact a deeper integration of technology into people’s daily lives could have at multiple levels, from mental health to polarisation in society, even leading to widespread destabilisation across the globe. The film focuses on the impacts generated by the unabashed proliferation of social media networks – increasingly a major data source feeding the development of AI algorithms in the commercial sector. While one docudrama is certainly not enough to base a holistic opinion on, it is, let us say, a good gateway drug for today’s social media-addicted population to begin a much-required rehab process: questioning how much control we are willing to cede to unbridled tech majors.


Technological innovation and development since the information and communication revolution have been largely carried forward by the private sector. Technologies including social media are evolving to resemble classic dual-use technologies, with weaponisable attributes rearing their ugly heads from beneath a rosy veil of civilian use. Ethical concerns raised against the current trajectory of machine intelligence are many. The usage of users’ personal data by big tech companies such as Facebook, Google, Amazon and Netflix has been subject to sustained criticism on the grounds of privacy violations. The companies’ business model of “surveillance capitalism”, i.e., the manipulation of user behaviour and preferences through algorithms, has been termed highly unethical.


The phrase ‘data is the new oil’ is now commonplace in discussions of AI algorithms. Any machine learning system is only as good as the quality and quantity of the data fed in to ‘train’ it to carry out its objectives. Avoiding all forms of bias in training data, and rendering the programming transparent, occupy a central place in the discourse on data ethics. For instance, algorithms designed to support decision-making in various organisations have been criticised for the lack of means to ensure accountability. The use of algorithms in predictive analysis in particular has produced problematically biased outcomes, aggravating existing inequalities in society. AI systems have typically been shown to pick up on patent or latent biases in training data.
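To make that mechanism concrete, here is a minimal, entirely hypothetical sketch (in Python, assuming scikit-learn is available; the data and scenario are invented for illustration, not drawn from any real system). A model trained on historical hiring decisions that favoured one group learns to reproduce that favouritism, even for equally qualified candidates:

```python
# Toy illustration: a model trained on skewed historical data
# reproduces the skew. Hypothetical data, not a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)            # synthetic group membership (0 or 1)
qualification = rng.normal(0, 1, n)      # synthetic merit score

# Historical labels favour group 0 regardless of qualification.
hired = ((qualification + 1.5 * (group == 0)) > 0.5).astype(int)

X = np.column_stack([group, qualification])
model = LogisticRegression().fit(X, hired)

# Two equally qualified candidates, differing only in group:
candidates = np.array([[0, 1.0], [1, 1.0]])
print(model.predict_proba(candidates)[:, 1])  # group 0 scores markedly higher
```

The point is not the toy numbers but the pattern: nothing in the code is malicious, yet the historical skew passes straight through into the model’s scores.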


For instance, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), a risk-assessment algorithm used in the American criminal justice system to estimate the likelihood of a defendant committing further offences, was found to flag African-Americans as “almost twice as likely as whites to be labeled a higher risk”. A follow-up study found that white defendants whom the algorithm had labelled as lower risk went on to commit more future crimes. In another notable instance, Amazon discontinued an AI recruitment tool meant to automate its search for the most suitable employees. The algorithm, trained on the resumes of applicants who were predominantly male, was seen to disfavour female candidates. In both cases, the algorithm was riddled with cognitive or statistical biases, with the effect of perpetuating prevailing biases in society.
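The kind of disparity at issue can be checked with a simple audit. The sketch below is a generic illustration on synthetic data (not the actual COMPAS analysis): it compares false positive rates across two groups, the metric at the heart of the ProPublica findings – among people who did not reoffend, how often was each group still flagged as high risk?

```python
# Generic fairness audit on synthetic data: compare false positive
# rates across groups. Illustrative only, not the real COMPAS study.
import numpy as np

def false_positive_rate(y_true, y_pred, mask):
    """FPR among actual negatives (did not reoffend) within a group."""
    negatives = (y_true == 0) & mask
    return float(np.mean(y_pred[negatives] == 1))

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)     # 1 = actually reoffended (synthetic)
group = rng.integers(0, 2, 1000)      # two demographic groups (synthetic)
# A biased score: more likely to flag group 1 regardless of outcome.
y_pred = ((rng.random(1000) + 0.15 * group) > 0.6).astype(int)

for g in (0, 1):
    print(g, false_positive_rate(y_true, y_pred, group == g))
```

A large gap between the two printed rates is exactly the sort of evidence such audits surface: equal treatment on paper, unequal error rates in practice.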


The opacity of AI systems constitutes another major issue from the point of view of accountability. This is an inherent feature of machine learning, wherein the algorithm identifies patterns often unforeseen by the programmer, who may have labelled the data as they saw fit. When an AI’s analysis is used as a basis for decision-making, it is next to impossible for the programmer or the affected person to identify the basis on which the system arrived at a particular outcome. Renowned diplomat Henry Kissinger posits that democratic decision-making will be fundamentally affected if humans come to rely on a supposedly superior technology that cannot explain its decisions.
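A small sketch makes the problem tangible (again a hypothetical Python example, assuming scikit-learn): even with full access to a trained model, the best one can usually recover is a coarse ranking of which inputs matter overall, not why a specific decision went the way it did.

```python
# Opacity in practice: global feature importances say *which* inputs
# matter, but not *why* any single prediction was made. Hypothetical data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 10))
# The true rule is an interaction between features 0 and 1.
y = ((X[:, 0] * X[:, 1]) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print(np.round(model.feature_importances_, 2))  # a coarse global ranking...
print(model.predict(X[:1]))  # ...and a single opaque verdict, with no rationale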


However, government intervention to regulate how technologies such as machine learning evolve and are used has been minimal. Such conversations have been particularly hard to have in democracies, where the need to regulate civilians’ use of ICTs has been drowned out by narratives drawing comparisons to authoritarian regimes trampling over individual rights, freedom of expression and the right to privacy. Against this background, big tech has forged ahead like a messiah of everything good and positive in the world, making and breaking its own rules and casting regulation as the ultimate enemy. Many of the problems we face today have resulted from this unsavoury marriage of developers’ myopia and an absence of regulation.


AI offers immense potential to eliminate problems plaguing humanity, such as hunger and disease, and can make education truly universal. However, there is a real risk that it may instead aggravate destabilisation and the oppression of vulnerable groups if left as a tool at the whims and fancies of powerful political and commercial entities seeking to solidify their positions. Tackling issues of bias and transparency is the only way to ensure that AI evolves as an enabler of well-intentioned human endeavours. This can be achieved by bringing more stakeholders into building regulatory regimes and, more importantly, by building means through which non-experts can understand and question what goes on behind the workings of an algorithm. Biases need to be tackled by incorporating diverse attributes into the data used to train AI systems. Standardisation and regulation are the need of the hour as humanity moves toward an AI-enabled future.


Disclaimer: The article expresses the author’s views on the matter and does not reflect the opinions and beliefs of any institution they belong to or of Trivium Think Tank and the StraTechos website.

Anupama Vijayakumar


Anupama is the Editor-in-Chief of StraTechos. She is a Co-founder and Director of Trivium Think Tank. She is currently a Ph.D. candidate at the Department of Geopolitics and International Relations, Manipal Academy of Higher Education and is a recipient of the Government of India's UGC Senior Research Fellowship. Her doctoral research focuses on the interplay between technology and power in international relations.

