Ethical Risks of AI
04/18/2024
Creating an Ethical Framework to Mitigate Risk
Today's blog updates a previous post about the ethical risks of AI, adding a discussion of an article from the Harvard Business Review (HBR). A brief summary follows.
Concerns About Ethical Risks
According to the HBR article, for companies that use AI, managing risk is a top priority. Over 50% of executives report “major” or “extreme” concern about the ethical and reputational risks of AI in their organization, given its current level of preparedness for identifying and mitigating those risks. That means an AI ethical risk program with organization-wide buy-in is a precondition for deploying AI at all. Done well, raising awareness can both mitigate risks at the tactical level and support the successful implementation of a more general AI ethical risk program.
Building this awareness usually breaks down into three significant problems.
First, procurement officers are one of the greatest — and most overlooked — sources of AI ethical risk. AI vendors sell into almost every department in your organization, but especially HR, marketing, and finance. If your HR procurement officers don’t know how to ask the right questions to vet AI products, they may, for instance, import the risk of discriminating against protected sub-populations during the hiring process.
Second, senior leaders often don’t have the requisite knowledge for spotting ethical flaws in their organization’s AI, putting the company at risk, both reputationally and legally. For instance, if a product team is ready to deploy an AI but first needs the approval of an executive who knows little about the ethical risks of the product, the reputation of the brand (not to mention the executive) can be at high risk.
Third, an AI ethical risk program requires knowledgeable data scientists and engineers. If they don’t understand the ethical risks of AI, they may either fail to understand their new responsibilities as articulated in the program, or they may understand the responsibilities but not their importance, which in turn leads to not taking them seriously. On the other hand, people who understand the ethical, reputational, and legal risks of AI will also understand the importance of a program that systematically addresses those issues across the organization.
I have previously blogged about the problem of creating an ethical framework and processes for AI. AI can improve human decision-making, but it has its limits. The possibility exists that bias in algorithms can create an ethical risk that calls into question the reliability of the data produced by the system. Bias can be accounted for through explainability of the data, reproducibility in testing for consistent results, and auditability.
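To make "reproducibility in testing for consistent results" concrete, here is a minimal sketch of a reproducibility check; the synthetic dataset and the scikit-learn model choice are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: reproducibility check for a model's predictions.
# The synthetic dataset and model choice are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def train_and_predict(seed: int) -> np.ndarray:
    """Train a model under a fixed seed and return predicted probabilities."""
    X, y = make_classification(n_samples=500, n_features=8, random_state=seed)
    model = LogisticRegression(random_state=seed, max_iter=1000).fit(X, y)
    return model.predict_proba(X)[:, 1]

# Two runs with the same seed should agree; if they do not, the pipeline
# has a hidden source of nondeterminism that an auditor should chase down.
run_a = train_and_predict(seed=42)
run_b = train_and_predict(seed=42)
assert np.allclose(run_a, run_b), "model is not reproducible under a fixed seed"
print("Reproducibility check passed.")
```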
Ethics Risks
We often hear that mitigating bias in AI systems requires understanding the workings of sophisticated, autonomous algorithms so that unfair bias can be eliminated over time as those algorithms continue to evolve. Moreover, universal standards for fairness and trust should inform overall management policies for the ethical use of AI.
Other ethical risks include a lack of transparency, erosion of privacy, poor accountability, and workforce displacement and transitions. The existence of such risks affects whether AI systems should be trusted. To build trust through transparency, organizations should clearly explain what data they collect, how it is used, and how the results affect customers.
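As one way to operationalize that transparency, here is a minimal sketch of a machine-readable disclosure in the spirit of a "model card"; the system name, field names, and contact address are all hypothetical.

```python
# Minimal sketch: a machine-readable disclosure answering the three
# transparency questions above. All names and values are hypothetical.
import json

disclosure = {
    "system": "loan-screening-model",  # hypothetical system name
    "data_collected": ["income", "employment history", "credit utilization"],
    "how_data_is_used": "Scores applications for review; no automated denial.",
    "customer_impact": "A low score routes the application to a human reviewer.",
    "contact_for_questions": "ai-review@example.com",  # hypothetical address
}
print(json.dumps(disclosure, indent=2))
```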
Ethical Concerns in Business
The most prominent ethical concerns facing AI-powered businesses today include:
- Bias: Training data can skew AI’s objectivity and undermine its impartiality. This is a serious issue for companies, as AI-powered businesses can’t afford to accidentally publish materials based on harmful stereotypes.
- Misinformation: AI models can inadvertently spread falsehoods and misleading information. This is because large language models are essentially an “echo chamber” and are largely incapable of assessing the veracity of the data they are fed.
- Intellectual Property: Creatives around the world are up in arms regarding the rise of AI — and for good reason. Generative AI models use real artists’ and writers’ work without attribution or citation. This may land firms in hot legal water in the future.
Ethical Principles
Some organizations have developed their own ethical principles for dealing with AI. The following are KPMG’s five principles of ethics and AI:
- Transforming the workplace: Massive change in roles and tasks that define work, along with the rise of powerful analytic and automated decision-making, will cause job displacement and the need for retraining.
- Establishing oversight and governance: New regulations will establish guidelines for the ethical use of AI and protect the well-being of the public.
- Aligning cybersecurity and ethical AI: Autonomous algorithms give rise to cybersecurity risks and adversarial attacks that can contaminate algorithms by tampering with the data. KPMG reported in its 2019 CEO Outlook that 72 percent of U.S. CEOs agree that strong cybersecurity is critical to engender trust with their key stakeholders, compared with 15 percent in 2018.
- Mitigating bias: Understanding the workings of sophisticated, autonomous algorithms is essential to take steps to eliminate unfair bias over time as they continue to evolve.
- Increasing transparency: Universal standards for fairness and trust should inform overall management policies for the ethical use of AI.
Auditing AI Data
Auditing is the function of examining data to determine whether it is accurate and dependable, and whether the system used to generate it is operating as intended. Data that is biased will produce biased results. For example, a financial institution that grants mortgage loans to white applicants in larger numbers than to minority applicants may be biased. Perhaps the condition for approving mortgages is based on where applicants live, so that wealthy neighborhoods have priority in the selection of qualified borrowers. If the underlying data is biased in this way, machine-learning AI systems will unintentionally reproduce those results over time.
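To make the mortgage example concrete, here is a minimal sketch of one common audit check, the "four-fifths rule" for disparate impact; the group labels and approval counts are hypothetical.

```python
# Minimal sketch: four-fifths (80%) rule check on loan-approval rates.
# The group labels and approval counts below are hypothetical.
approvals = {
    # group: (applications, approvals)
    "group_a": (1000, 620),
    "group_b": (1000, 410),
}

rates = {g: ok / total for g, (total, ok) in approvals.items()}
benchmark = max(rates.values())

for group, rate in rates.items():
    ratio = rate / benchmark
    # A selection rate below 80% of the highest group's rate is commonly
    # treated as preliminary evidence of disparate impact.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.1%}, ratio {ratio:.2f} -> {flag}")
```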
AI auditing works well for a leasing firm with hundreds of lease contracts, given the need to verify that each one has been properly recorded either as an asset with future value or as an expense for the period. AI systems can help to quickly analyze complex contracts to make that determination, but the accounting standards must be encoded accurately so the system knows what to look for.
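As an illustration of encoding the accounting standards so the system knows what to look for, here is a minimal sketch using two simplified finance-lease tests in the spirit of ASC 842 (the common 75%-of-useful-life and 90%-of-fair-value rules of thumb); the thresholds and lease data are simplifications, not a complete implementation of the standard.

```python
# Minimal sketch: simplified finance-vs-operating lease classification.
# The two ratio tests below are common rules of thumb, not the full standard.
from dataclasses import dataclass

@dataclass
class Lease:
    transfers_ownership: bool
    term_years: float
    useful_life_years: float
    pv_of_payments: float
    fair_value: float

def classify(lease: Lease) -> str:
    if lease.transfers_ownership:
        return "finance"
    if lease.term_years / lease.useful_life_years >= 0.75:
        return "finance"  # lease covers most of the asset's useful life
    if lease.pv_of_payments / lease.fair_value >= 0.90:
        return "finance"  # payments substantially equal the asset's value
    return "operating"

print(classify(Lease(False, 8, 10, 450_000, 500_000)))  # -> finance
print(classify(Lease(False, 3, 10, 100_000, 500_000)))  # -> operating
```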
Fraud Detection
The biggest value of using AI in auditing is to detect fraud, the idea being to identify and catch anomalies. For example, a reimbursable expense submitted by an employee should be examined by tying it to a restaurant receipt. What if a receipt for exactly $100 is not based on food ordered but is instead a gift certificate for a friend or family member? Such a suspiciously round amount may raise a red flag in an AI-driven, machine-learning system in which all data is examined, unlike a more traditional data processing system that uses sampled data.
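A minimal sketch of one such red flag follows: a full-population scan for suspiciously round receipt amounts. The expense records are hypothetical, and a real system would combine many signals rather than this single rule.

```python
# Minimal sketch: flag suspiciously round expense amounts for review.
# The expense records are hypothetical; real systems combine many signals.
expenses = [
    {"employee": "E-101", "vendor": "Cafe Roma", "amount": 47.63},
    {"employee": "E-102", "vendor": "Bistro 22", "amount": 100.00},
    {"employee": "E-103", "vendor": "Deli Stop", "amount": 12.40},
    {"employee": "E-104", "vendor": "Grill House", "amount": 250.00},
]

def is_round_amount(amount: float) -> bool:
    """Exact multiples of $50 are unusual for restaurant meals and may
    indicate a gift certificate rather than food actually ordered."""
    return round(amount * 100) % 5000 == 0

for record in expenses:  # examine every record, not a sample
    if is_round_amount(record["amount"]):
        print(f"Review: {record['employee']} at {record['vendor']}, "
              f"${record['amount']:.2f}")
```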
Companies lose an estimated 5 percent of their revenue annually as a result of occupational fraud, according to the 2022 ACFE Report to the Nations. It turns out that the risk of occupational fraud is much higher than many managers and leaders realize. AI systems can analyze large amounts of data quickly and thoroughly to determine whether assets have been misappropriated.
AI systems can also have predictive value: through machine learning they can identify high-risk areas and events, and a well-designed accounting fraud prediction model can more accurately calculate the probability of future material misstatements in financial statements and improve the quality of audits.
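A minimal sketch of such a prediction model follows, assuming scikit-learn and labeled historical filings; the synthetic data and feature names are hypothetical stand-ins for the financial-statement ratios a real model would use.

```python
# Minimal sketch: misstatement-probability model on financial-statement
# features. The data is synthetic; the feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features: accruals ratio, receivables growth, margin change.
X = rng.normal(size=(n, 3))
# Synthetic labels: risk rises with the first two features (illustration only).
logits = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 2.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The output is a probability of material misstatement per filing, which
# an auditor can use to rank engagements for closer review.
probs = model.predict_proba(X_test)[:, 1]
print("Indices of highest-risk filings:", np.argsort(probs)[-5:])
```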
Using AI to examine all the financial data and determine whether financial fraud exists provides a big advantage over previous systems. It affords a higher level of assurance and reduces the risk of fraud.
Corporate Governance
Corporate governance is essential to develop and enforce policies, procedures and standards in AI systems. Chief ethics and compliance officers have a key role to play, including identifying ethical risks, managing those risks and ensuring compliance with standards.
Governance structures and processes should be implemented to manage and monitor the organization’s AI activities. The goal is to promote transparency and accountability while ensuring compliance with regulations and that ethical standards are met.
A research study by Genesys found that more than one-half of those surveyed say their companies do not currently have a written policy on the ethical use of AI, although 21 percent expressed a definite concern that their companies could use AI in an unethical manner. The survey included 1,103 employers and 4,207 employees regarding the current and future effects of AI on their workplaces. The 5,310 participants were drawn from six countries: the U.S., Germany, the U.K., Japan, Australia and New Zealand. Additional results include:
- 28 percent of employers are apprehensive their companies could face future liability for an unforeseen use of AI.
- 23 percent say there is currently a written corporate policy on the ethical use of AI.
- 40 percent of employers without a written AI ethics policy believe their companies should have one.
- 54 percent of employees believe their companies should have one.
HR 6580 Algorithmic Accountability Act of 2022
Algorithmic bias refers to systematic and repeatable decisions by computer systems that create unfair, discriminatory, or inequitable outcomes. Algorithmic accountability is the process of holding entities responsible when the algorithms they develop or operate make decisions that result in unfair outcomes.
Federal lawmakers reintroduced H.R. 6580, the Algorithmic Accountability Act of 2022, to reduce inequalities created by AI systems. The bill would require companies that use AI technology to assess the risk of their algorithms, mitigate negative impacts, and submit reports to the Federal Trade Commission (FTC). The FTC would oversee enforcement and publish information about the algorithms companies use in order to increase accountability and transparency.
“The Algorithmic Accountability Act would present the most significant challenges for businesses that have yet to establish any systems or processes to detect and mitigate algorithmic bias,” said Siobhan Hanna, managing director of global AI systems for TELUS International. “Entities that develop, acquire and utilize AI must be cognizant of the potential for biased decision making and outcomes resulting from its use.”
Conclusions
The ethical use of AI should be addressed by all organizations to build trust into the system and satisfy the needs of stakeholders for accurate and reliable information. A better understanding of machine learning would go a long way to achieving this result.
Professional judgment is still necessary in AI to decide on the value of the information produced by the system and its uses in looking for material misstatements and financial fraud. In this regard, the acronym GIGO (“garbage in, garbage out”) applies. Unless the data is reliably provided and processed, AI will produce results that are inaccurate, incomplete, or incoherent, and machine learning will be compromised with respect to ethical AI.
Posted by Steven Mintz, Ph.D., aka Ethics Sage, on April 18, 2024. You can sign up for his newsletter and learn more about his activities at: https://www.stevenmintzethics.com/.