
Can AI Be Used to Resolve Ethical Quandaries?

There Are Limits When Incorporating Ethics into AI Decision-Making

Perhaps you read the article “An ‘ethical’ AI trained on human morals has turned racist.” It claims that while the machine learning software Ask Delphi allows you to ponder any moral quandary, it may not give you an appropriate response. The problem is that the response, which is designed to provide an ethical answer, may do the opposite. This is a problem of programming, and it shows the limits of using AI to resolve moral quandaries.

How did this all come about? A group of researchers taught a piece of machine learning software how to respond to ethical conundrums. Asked, for example, whether you should commit genocide “if it makes everybody happy,” the software answered “yes.” Evidently, the system treats happiness as ethical even in a patently unethical situation such as genocide.

Launched last month by the Allen Institute for AI, Ask Delphi allows users to input any ethical question (or even just a word, e.g. ‘Murder’) and it will generate a response (e.g. ‘It’s bad’). As reported by Vox, Delphi was trained on a body of internet text and then on a database of 1.7 million examples of people’s ethical judgments collected through the crowdsourcing platform Mechanical Turk.
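
To make that training recipe concrete, here is a toy Python sketch of fitting a classifier to crowdsourced-style moral judgments. Everything in it is an invented stand-in: the handful of example judgments and the simple TF-IDF-plus-logistic-regression pipeline only gesture at Delphi’s actual large neural network and 1.7 million real judgments.

```python
# Minimal sketch (not Delphi's actual code): a toy text classifier
# trained on a few invented "moral judgment" examples, standing in
# for Delphi's crowdsourced training data and large neural model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical crowdsourced judgments: (situation, verdict)
examples = [
    ("helping a neighbor carry groceries", "it's good"),
    ("stealing from a charity", "it's bad"),
    ("lying to protect a friend's feelings", "it's okay"),
    ("murder", "it's bad"),
    ("donating blood", "it's good"),
    ("ignoring someone who needs help", "it's bad"),
]
texts, labels = zip(*examples)

# TF-IDF features plus logistic regression stand in for the neural net.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# The model can only echo patterns in its training data, including
# whatever biases the annotators held.
print(model.predict(["taking credit for a coworker's idea"]))
```

The point is visible even at toy scale: the model can only mirror the labels it was trained on, so any skew in those judgments becomes the model’s “ethics.”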

Explaining Delphi’s goal, its creators wrote online: “Extreme-scale neural networks learned from raw internet data are ever more powerful than we anticipated, yet fail to learn human values, norms, and ethics. Our research aims to address the impending need to teach AI systems to be ethically-informed and socially-aware.”

The article concludes that “Delphi demonstrates both the promises and the limitations of language-based neural models when taught with ethical judgments made by people.” It adds that the software is based on “how an ‘average’ American person might judge” situations and acknowledges that Delphi “likely reflects what you would think as ‘majority’ groups in the US, i.e., white, heterosexual, able-bodied, housed, etc”.

With this in mind, it’s unsurprising that Ask Delphi has been caught out a number of times, saying things like abortion is “murder” and that being straight or a white man is “more morally acceptable” than being gay or a Black woman. In other words, it has the potential to give discriminatory answers based on how the question is worded.

Speaking to VICE, Mar Hicks, a history professor at Illinois Tech, described Ask Delphi as “a simplistic and ultimately fundamentally flawed way of looking at both ethics and the potential of AI,” adding, “Whenever a system is trained on a dataset, the system adopts and scales up the biases of that data set.” This kind of software, Hicks continued, “tricks people into thinking AI’s capabilities are far greater than they are,” and “too often that leads to systems that are more harmful than helpful, or systems that are very harmful for certain groups even as they help other groups – usually the ones already in power.”

Ethical AI

I have previously blogged about the problem and pointed out that AI can improve human decision-making, but it has its limits. Bias in algorithms creates an ethical risk because it calls into question the reliability of the results the system produces. Bias can be accounted for through explainability of the data, reproducibility in testing for consistent results, and auditability.
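
To illustrate what a simple reproducibility check might look like, here is a rough Python sketch. The judge() function is a hypothetical stand-in for whatever judgment model is being audited; the idea is to rephrase the same underlying situation and flag verdicts that change with wording alone.

```python
# Minimal sketch of a paraphrase-consistency audit, assuming access to
# some judgment model via judge() (a hypothetical stand-in below).

def judge(situation: str) -> str:
    """Hypothetical stand-in for a moral-judgment model's output."""
    # A deliberately naive keyword rule, so there is something to audit.
    return "it's bad" if "aggressively" in situation else "it's okay"

# Reworded versions of the same underlying situation. A trustworthy
# system should judge them consistently.
paraphrases = [
    "cutting in line at the store",
    "aggressively cutting in line at the store",
    "stepping ahead of others waiting in line",
]

verdicts = {p: judge(p) for p in paraphrases}
for situation, verdict in verdicts.items():
    print(f"{verdict!r:14} <- {situation!r}")

# Flag wording sensitivity: same situation, different verdicts.
if len(set(verdicts.values())) > 1:
    print("Audit flag: the verdict changes with wording alone.")
```

A real audit would run thousands of such paraphrase sets, including swaps of demographic terms, and report every inconsistency for human review.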

Other ethical risks include a lack of transparency, erosion of privacy, poor accountability, and workforce displacement and transitions. The existence of such risks affects whether AI systems should be trusted. To build trust through transparency, organizations should clearly explain what data they collect, how it is used, and how the results affect customers.

The ethical use of AI should be addressed by all organizations to build trust into the system and satisfy the needs of stakeholders for accurate and reliable information. A better understanding of machine learning would go a long way to achieve this result.

Professional judgment is still necessary to assess the value of the information an AI system produces and its use in looking for material misstatements and financial fraud. In this regard, the acronym GIGO (“garbage in, garbage out”) is apt. Unless the data is reliably provided and processed, AI will produce results that are inaccurate, incomplete, or incoherent, and the machine learning behind ethical AI will be compromised.
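
To show what guarding against “garbage in” can look like in practice, here is a minimal Python sketch that validates records before they ever reach a model. The field names and rules are hypothetical examples, not a prescription.

```python
# Minimal sketch of a "garbage in" guard: validate input records
# before they reach a model. Fields and rules are hypothetical.

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found in one input record."""
    problems = []
    if not record.get("description"):
        problems.append("missing description")
    amount = record.get("amount")
    if amount is None or amount < 0:
        problems.append("missing or negative amount")
    return problems

records = [
    {"description": "office supplies", "amount": 120.50},
    {"description": "", "amount": 99.00},
    {"description": "consulting fee", "amount": -5000},
]

clean = [r for r in records if not validate_record(r)]
rejected = [(r, validate_record(r)) for r in records if validate_record(r)]
print(f"{len(clean)} clean record(s); {len(rejected)} rejected:")
for record, why in rejected:
    print(" -", record, "->", why)
```

In an audit setting, records that fail such checks would be investigated rather than silently dropped, since the failures themselves may point to misstatements.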

Blog posted by Dr. Steven Mintz, The Ethics Sage, on March 30, 2022. Steve is the author of Beyond Happiness and Meaning: Transforming Your Life Through Ethical Behavior. You can sign up for his newsletter and learn more about his activities at: https://www.stevenmintzethics.com/. Follow him on Facebook at: https://www.facebook.com/StevenMintzEthics and on Twitter at: https://twitter.com/ethicssage.
