
Business & Society: The Ethical Use of AI

Trust in Business and Accountability is the Key

According to the 2024 Bentley-Gallup Business in Society Report, a poll by Bentley University and Gallup released on September 12, 2024, a majority of Americans continue to see businesses as having a positive effect on people’s lives for the third consecutive year. The results are based on a Gallup poll of 5,835 U.S. adults, aged 18 or older, conducted from April 29 to May 6, 2024. However, one area where Americans remain concerned about the responsibility and accountability of business to society is the ethical use of Artificial Intelligence (AI).

Ethical Risks

Three-quarters of Americans say AI will reduce the total number of jobs in the country over the next 10 years, the same percentage that said so last year. Also similar to last year, 77% of adults do not trust businesses much (44%) or at all (33%) to use AI responsibly. Additionally, nearly seven in 10 of those who are extremely knowledgeable about AI have little to no trust in businesses to use AI responsibly.

AI can improve human decision-making, but it has its limits. Bias in algorithms can create an ethical risk that calls into question the reliability of the outputs the system produces. Bias can be accounted for through explainability of the data, reproducibility in testing for consistent results, and auditability.
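To make those three safeguards concrete, here is a minimal Python sketch of what a reproducible, auditable decision pipeline might look like. The model, field names and threshold are invented for illustration; a real system would log far more context.

```python
# Hypothetical sketch only: the scoring model, fields and threshold
# are assumptions, not any particular vendor's system.
import hashlib
import json
import random

def score_applicant(applicant: dict, seed: int = 42) -> float:
    """Toy stand-in for a model; seeded so results are reproducible."""
    rng = random.Random(seed + applicant["id"])
    return round(0.5 * applicant["income_norm"] + 0.5 * rng.random(), 3)

def audited_decision(applicant: dict, threshold: float = 0.6) -> dict:
    """Record inputs, output and a hash so each decision can be audited later."""
    score = score_applicant(applicant)
    record = {
        "inputs": applicant,
        "score": score,
        "approved": score >= threshold,
    }
    record["audit_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

applicant = {"id": 1, "income_norm": 0.8}
first = audited_decision(applicant)
second = audited_decision(applicant)
assert first == second  # reproducibility: same inputs, same decision
```

The seeded random source gives reproducibility, the stored inputs and score give a basis for explaining the decision, and the hash-stamped record gives auditors something to verify after the fact.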

Other ethical risks include a lack of transparency, erosion of privacy, poor accountability and workforce displacement and transitions. The existence of such risks affects whether AI systems should be trusted. To build trust through transparency, organizations should clearly explain what data they collect, how it is used and how the results affect customers.

I have previously blogged about the ethical risks when businesses use AI. The most prominent ethical risks facing AI-powered businesses today include:

  • Bias: Training data can skew AI’s objectivity and undermine its impartiality. This is a serious issue for companies, as AI-powered businesses can’t afford to accidentally publish materials based on harmful stereotypes.
  • Misinformation: AI models can inadvertently spread falsehoods and misleading information. This is because AI large language models are essentially an “echo chamber” and are largely incapable of assessing the veracity of the data they are fed.
  • Intellectual Property: Creatives around the world are outraged regarding the rise of AI — and for good reason. Generative AI models use real artists’ and writers’ work without attribution or citation. This may land firms in hot legal water in the future.

Ethical Principles

Some organizations have developed their own ethical principles for dealing with AI. The following is from KPMG:

  • Transforming the workplace: Massive change in roles and tasks that define work, along with the rise of powerful analytic and automated decision-making, will cause job displacement and the need for retraining.
  • Establishing oversight and governance: New regulations will establish guidelines for the ethical use of AI and protect the well-being of the public.
  • Aligning cybersecurity and ethical AI: Autonomous algorithms give rise to cybersecurity risks and adversarial attacks that can contaminate algorithms by tampering with the data. KPMG reported in its 2019 CEO Outlook that 72 percent of U.S. CEOs agree that strong cybersecurity is critical to engender trust with their key stakeholders, compared with 15 percent in 2018.
  • Mitigating bias: Understanding the workings of sophisticated, autonomous algorithms is essential to take steps to eliminate unfair bias over time as they continue to evolve.
  • Increasing transparency: Universal standards for fairness and trust should inform overall management policies for the ethical use of AI.

Corporate Governance

A research study by Genesys found that more than half of those surveyed say their companies do not currently have a written policy on the ethical use of AI, and 21 percent expressed definite concern that their companies could use AI in an unethical manner. The survey asked 1,103 employers and 4,207 employees about the current and future effects of AI on their workplaces. The 5,310 participants were drawn from six countries: the U.S., Germany, the U.K., Japan, Australia and New Zealand. Additional results include:

  • 28 percent of employers are apprehensive their companies could face future liability for an unforeseen use of AI.
  • 23 percent say there is currently a written corporate policy on the ethical use of AI.
  • 40 percent of employers without a written AI ethics policy believe their companies should have one.
  • 54 percent of employees believe their companies should have one.

Algorithmic Accountability Acts

Algorithmic bias refers to systematic and repeatable decisions by computer systems that create unfair, discriminatory or inequitable outcomes. Algorithmic accountability means holding entities responsible when the algorithms they develop or operate make decisions that result in unfair outcomes.
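As a simple illustration of how an unfair outcome can be measured, U.S. adverse-impact analysis commonly applies the "four-fifths rule": if one group's selection rate is less than 80 percent of another's, the disparity warrants scrutiny. The data below are invented; this is a sketch of the calculation, not a legal test.

```python
# Illustrative only: decision data is made up for this example.
def selection_rate(decisions):
    """Fraction of applicants in a group who were approved (1) vs. denied (0)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher one's."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# 1 = approved, 0 = denied, one entry per applicant in each demographic group
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approved
group_b = [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]   # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
if ratio < 0.8:
    print("Potential adverse impact: ratio is below the four-fifths threshold")
```

A ratio of 0.50, as here, is well under the 0.80 threshold and would be exactly the kind of repeatable, inequitable outcome algorithmic accountability is meant to surface.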

Federal lawmakers have been discussing legislation on the responsible use of AI for several years. In 2022, they reintroduced the Algorithmic Accountability Act of 2022 (H.R. 6580) to Congress to reduce inequalities in AI systems. The bill requires companies that use AI technology to assess the risk of their algorithms, mitigate negative impacts, and submit reports to the Federal Trade Commission (FTC). The FTC would oversee enforcement and publish information about the algorithms that companies use to increase accountability and transparency. “The Act would present the most significant challenges for businesses that have yet to establish any systems or processes to detect and mitigate algorithmic bias,” said Siobhan Hanna, managing director of global AI systems for TELUS International. “Entities that develop, acquire and utilize AI must be cognizant of the potential for biased decision making and outcomes resulting from its use.”

The Algorithmic Accountability Act of 2023 (HR 3369) requires companies to assess the impacts of the AI systems they use and sell, creates new transparency about when and how such systems are used, and empowers consumers to make informed choices when they interact with AI systems.

What the Bill Does:

  • Provides a baseline requirement that companies assess the impacts of automating critical decision-making, including decision processes that have already been automated.
  • Requires the Federal Trade Commission (FTC) to create regulations providing structured guidelines for assessment and reporting.
  • Ensures that companies that make critical decisions and those that build the technology are responsible for assessing the impact of decision processes.
  • Requires reporting of select impact-assessment documentation to the FTC.
  • Requires the FTC to publish an annual anonymized aggregate report on trends and to establish a repository of information where consumers and advocates can review which critical decisions have been automated by companies, along with information such as data sources, high-level metrics and how to contest decisions, where applicable.
  • Adds resources to the FTC to hire 75 staff and establishes a Bureau of Technology to enforce this Act and support the Commission in the technological aspects of its functions.
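The bill does not prescribe a reporting schema, but as a rough sketch, the documentation items listed above could be captured in a structure like the following. Every field name here is an assumption for illustration, not language from the bill.

```python
# Hypothetical schema: field names are assumptions based on the bill's
# reporting items (data sources, high-level metrics, contest process).
from dataclasses import dataclass, field, asdict

@dataclass
class ImpactAssessment:
    system_name: str
    critical_decision: str                        # e.g., lending, hiring, housing
    data_sources: list = field(default_factory=list)
    metrics: dict = field(default_factory=dict)   # high-level performance/bias metrics
    contest_process: str = ""                     # how consumers can contest decisions
    negative_impacts: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

assessment = ImpactAssessment(
    system_name="loan-screening-v2",
    critical_decision="consumer lending",
    data_sources=["credit bureau", "application form"],
    metrics={"approval_rate": 0.62, "disparate_impact_ratio": 0.85},
    contest_process="written appeal reviewed by a human underwriter",
    negative_impacts=["possible age-correlated score differences"],
    mitigations=["quarterly bias audit", "feature review"],
)
report = asdict(assessment)  # plain dict that could be serialized for a filing
```

Keeping assessments in a structured form like this is what would make the FTC's proposed aggregate reporting and public repository feasible in practice.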

Conclusions

The ethical use of AI should be addressed by all organizations to build trust into the system and satisfy stakeholders’ need for accurate and reliable information. Professional judgment is still necessary to assess the value of the information an AI system produces and its use in detecting material misstatements and financial fraud. H.R. 3369, the Artificial Intelligence Accountability Act (118th Congress, 2023-2024), attempts to do just that. Businesses need to support it to enhance regulatory oversight of AI before it creates serious security and use problems with unintended consequences that affect all segments of society.

Posted by Steven Mintz, aka Ethics Sage, on September 24, 2024. You can sign up for his newsletter and learn more about his activities at: https://www.stevenmintzethics.com/.

 
