
Ethics & AI: Unanswered Questions

Ethics in the Age of AI: Addressing Moral Challenges in the Era of Automation

I have previously blogged about the problem of creating an ethical framework and processes for AI. AI can improve human decision-making, but it has its limits. Bias in algorithms creates an ethical risk that calls into question the reliability of the results the system produces. Bias can be accounted for through explainability of the data, reproducibility in testing for consistent results, and auditability.
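For a concrete, if simplified, picture of what reproducibility and auditability can mean in practice, consider the sketch below. The `model.predict` interface, the inputs, and the logging format are all hypothetical; the point is only that repeated runs on the same input should agree, and that each result should be recorded so it can be reviewed later.

```python
import hashlib
import json

def audit_predictions(model, samples, runs=3):
    """Re-run a model on identical inputs and log the results.

    A basic reproducibility/auditability check: if repeated runs on the
    same input disagree, the outputs need review before they can be
    trusted, and the log gives auditors something concrete to inspect.
    """
    audit_log = []
    for sample in samples:
        outputs = [model.predict(sample) for _ in range(runs)]
        audit_log.append({
            "input_hash": hashlib.sha256(
                json.dumps(sample, sort_keys=True).encode()
            ).hexdigest(),
            "outputs": outputs,
            "consistent": all(o == outputs[0] for o in outputs),
        })
    return audit_log
```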

Other ethical risks include a lack of transparency, erosion of privacy, poor accountability and workforce displacement and transitions. The existence of such risks affects whether AI systems should be trusted. To build trust through transparency, organizations should clearly explain what data they collect, how it is used and how the results affect customers.

The ethical use of AI should be addressed by all organizations to build trust into the system and satisfy the needs of stakeholders for accurate and reliable information. A better understanding of machine learning would go a long way to achieve this result.

Artificial intelligence (AI) has been heralded as the next great equalizer. If leveraged correctly, AI could deliver us to an egalitarian future where students get the education they deserve and entrepreneurial minds receive the support they need.

However, the recent rise of AI does pose some serious ethical challenges, too. Even basic questions, like “Who decides how AI should be developed?” and “What purpose will large neural networks serve?”, raise a host of moral and legal concerns.

Navigating the choppy waters of AI ethics is tricky. However, as I have previously said, by melding empathy with ethics, we can steer a course that leads us to a more equitable future.

AI in Business

Artificial intelligence tools like GPT-4 and Tableau have revolutionized the way businesses generate new materials and analyze data. Today, 77% of all businesses are “exploring” AI, while 35% already use some form of artificial intelligence. These AI tools are clearly worth the investment. Even simple tools, like AI-powered chatbots, can handle monotonous tasks, free up employees’ time, and improve operational efficiency. However, these tools come with risks, too. The most prominent ethical concerns facing AI-powered businesses today include:

  • Bias: Training data can skew AI’s objectivity and undermine its impartiality. This is a serious issue for companies, as AI-powered businesses can’t afford to accidentally publish materials based on harmful stereotypes. (A minimal bias check is sketched after this list.)
  • Misinformation: AI models can inadvertently spread falsehoods and misleading information. This is because large language models are essentially an “echo chamber”: they are largely incapable of assessing the veracity of the data they are fed.
  • Intellectual Property: Creatives around the world are up in arms regarding the rise of AI — and for good reason. Generative AI models use real artists’ and writers’ work without attribution or citation. This may land firms in hot legal water in the future.
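To make the bias concern above more concrete, here is a minimal sketch of one common sanity check: comparing positive-outcome rates across groups in a training set. The column names and data are hypothetical, and a real fairness review would go much further than this single metric.

```python
import pandas as pd

def selection_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Compare positive-outcome rates across groups in a dataset.

    Large gaps between groups are a red flag that the data may encode
    the kind of skew described above and deserve closer review.
    """
    return df.groupby(group_col)[outcome_col].mean()

# Hypothetical historical hiring decisions used as training data.
data = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "M", "F"],
    "hired":  [0, 1, 0, 1, 1, 1],
})
print(selection_rate_by_group(data, "gender", "hired"))
```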

KPMG has identified five principles of ethics and AI:

  • Transforming the workplace: Massive change in roles and tasks that define work, along with the rise of powerful analytic and automated decision-making, will cause job displacement and the need for retraining.
  • Establishing oversight and governance: New regulations will establish guidelines for the ethical use of AI and protect the well-being of the public.
  • Aligning cybersecurity and ethical AI: Autonomous algorithms give rise to cybersecurity risks and adversarial attacks that can contaminate algorithms by tampering with the data. KPMG reported in its 2019 CEO Outlook that 72 percent of U.S. CEOs agree that strong cybersecurity is critical to engender trust with their key stakeholders, compared with 15 percent in 2018.
  • Mitigating bias: Understanding the workings of sophisticated, autonomous algorithms is essential to take steps to eliminate unfair bias over time as they continue to evolve.
  • Increasing transparency: Universal standards for fairness and trust should inform overall management policies for the ethical use of AI.

Modern businesses can navigate the ethical challenges associated with AI by creating policies that align with their mission and values. These ethical guidelines can include regulatory compliance details and “best practices” for employees to follow. A progressive approach to internal regulation is particularly important today, as governments are still catching up with the explosion of AI.

Data Privacy

Artificial intelligence tools draw from a deep pool of data when making decisions. These large data sets form the backbone of effective AI, as machine learning programs require a wealth of information to deliver accurate insights. However, collecting and utilizing consumer data raises serious ethical concerns.

Fortunately, firms can tighten up their cybersecurity and prevent identity-based breaches by utilizing AI. AI can combat cybercrime while championing data protection by automatically blocking known threats and identifying bot activity. More advanced AI programs can even detect suspicious activity and empower cybersecurity experts to take action before a breach occurs.
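Detection of suspicious activity along these lines is often implemented as anomaly detection over behavioral features. The sketch below, using scikit-learn’s IsolationForest on hypothetical login-activity features, is only meant to illustrate the idea, not a production pipeline.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical login-activity features; real systems use far richer signals.
events = pd.DataFrame({
    "requests_per_minute": [4, 6, 5, 300, 7, 5],
    "failed_logins": [0, 1, 0, 40, 0, 1],
    "distinct_ips": [1, 1, 1, 25, 1, 2],
})

# Flag statistical outliers (-1) for review by a human analyst.
detector = IsolationForest(contamination=0.2, random_state=0)
events["flagged"] = detector.fit_predict(events) == -1
print(events[events["flagged"]])
```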

Powerful machine learning programs can leverage data lakes to predict future threats, too. This is particularly important for firms that can’t afford to hire a fleet of cybersecurity professionals but still need a watertight network. Even simple changes, like automated “risk decisioning,” can flag unusual behavior and redirect malicious actors to a two-factor authentication page.
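As an illustration of what automated “risk decisioning” can look like, the toy rule below scores a session on a few signals and routes higher-risk sessions to two-factor authentication or blocks them outright. The signal names and thresholds are invented for the example.

```python
def decide_action(session: dict) -> str:
    """Toy risk-decisioning rule: route risky sessions to two-factor auth.

    Signal names and thresholds are invented for illustration, not a
    production policy.
    """
    score = 0
    if session.get("new_device"):
        score += 2
    if session.get("failed_logins", 0) >= 3:
        score += 3
    if session.get("geo_mismatch"):
        score += 2

    if score >= 5:
        return "block"
    if score >= 2:
        return "require_two_factor"
    return "allow"

print(decide_action({"failed_logins": 4}))                      # -> "require_two_factor"
print(decide_action({"new_device": True, "failed_logins": 4}))  # -> "block"
```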

Ownership

Artificial intelligence has the potential to reduce social inequality and promote egalitarian ideals. Some AI tools, like those used by Khan Academy and Duolingo, already offer learners the opportunity to access high-quality education.

Rose Luckin, Professor of Learner-Centred Design at University College London, explains that the disruptive potential of AI may lead to “more creativity and critical thinking [in education] and less memorization.” This can increase learner agency and ensure that students are equipped to “interpret facts and weigh up the evidence.”

However, Luckin also points out that “profit-driven imperatives of big tech companies” may be at odds with the social service that AI is supposed to provide. This reflects a larger ethical question that troubles even the most ardent AI supporters: who controls the development of AI?

Left unchecked, AI may exacerbate existing inequality and consolidate the class divide. Erik Brynjolfsson, director of the Stanford Digital Economy Lab, explains that, currently, AI is being developed to “simply replace workers, rather than extending human capabilities and allowing people to do new tasks.”

This is a serious issue, as simple automation tools may generate “greater inequality of income and wealth.” Brynjolfsson also explains that this may be why there are more billionaires in the U.S. today, despite the fact that real wages have fallen for many Americans.

Steering AI towards egalitarian goals may require some heavy-handed oversight. However, guiding the trajectory of AI is crucial to the long-term success of automation and economic growth. Governmental funding can also ensure that researchers are able to pursue emergent technology that serves a social good without having to worry about shareholder dividends. This will generate more jobs, improve employee productivity, and guard against the hoarding of wealth in big tech cities like San Francisco and Seattle.

Conclusion

Artificial intelligence has the potential to reshape the way we live. However, the rapid expansion of AI presents some serious ethical questions and moral challenges. Intervention and oversight are necessary to prevent bias and guard against further wealth inequality. These interventions may even increase productivity, as AI that serves egalitarian goals can help folks receive the education they deserve, regardless of their socio-economic background.

I would like to thank Charlie Fletcher for her important contributions to this blog. Charlie can be contacted at: [email protected].

Posted by Dr. Steven Mintz, aka Ethics Sage, on September 19, 2023. You can learn more about Steve’s activities by checking out his website at: https://www.stevenmintzethics.com/ and signing up for his newsletter.  
