Ethical Issues to Consider When Using ChatGPT

Controlling for the Risks

I have previously blogged about moral questions to consider in the use of ChatGPT.

It seems that virtually everyone is talking about its use. Educators fear that students will use it to write term papers, while those in the workplace have concerns about the security and privacy of their data. In these cases and others, there are ethical issues to consider. AI ethics managers have expressed concerns about privacy, manipulation, bias, the difficulty of understanding how these systems work, inequality, and labor displacement.

The purpose of this blog is to address some of the ethical questions in the use of ChatGPT. In a future blog I will address ethical uses in the workplace and in higher education.

What Can It Do?

As an AI language model, ChatGPT depends on the data it is fed to make inferences and return accurate information. Trained on a wide range of internet data, ChatGPT can help users answer questions, write articles, write program code, and engage in in-depth conversations on a substantial range of topics.
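
To make this concrete, here is a minimal sketch of posing a question to ChatGPT programmatically. It assumes the pre-1.0 openai Python package that was current when this was written; the model name and prompt are illustrative choices, not recommendations.

```python
# A minimal sketch of querying ChatGPT from code, assuming the pre-1.0
# "openai" Python package; the model name and prompt are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # keep real keys out of source code

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "In two sentences, what is an internal control?"}
    ],
)

# The reply text lives in the first choice's message.
print(response.choices[0].message.content)
```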

AI programs are designed to improve efficiency for performing basic tasks, including researching and writing. Some users are even asking ChatGPT to take on more complex forms of these tasks, including drafting emails to fire clients, crafting job descriptions, and writing company mission statements.

Writing for the Illinois CPA Society, Elizabeth Pittelkow, the VP of Finance for GigaOm, points out that “It is important to note that while ChatGPT can provide helpful suggestions, it is not as good at decision-making or personalizing scripts based on personality or organizational culture.” She adds that “An effective way to use ChatGPT and similar AI programs is to ensure a human or group of humans is reviewing the data, testing it, and implementing the results in a way that makes sense for the organization using it.” Job descriptions written by an AI program are one example: it is essential to build in internal controls by having at least one human “ensure the details make sense with what the organization does and does not do.”
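
One way to make that internal control concrete, shown in the sketch below, is to require a named human sign-off before any AI-generated draft is used. This is a generic illustration rather than any specific organization's process; the function names and the console-based approval step are assumptions.

```python
# An illustrative human-in-the-loop control: AI output is treated as a
# draft that cannot be used until a named reviewer approves it.
from dataclasses import dataclass

@dataclass
class ReviewedDraft:
    text: str
    reviewer: str
    approved: bool

def review_draft(draft: str, reviewer: str) -> ReviewedDraft:
    """Show the AI draft to a human and record their decision."""
    print("--- AI DRAFT ---")
    print(draft)
    decision = input(f"{reviewer}, approve this draft? (y/n): ").strip().lower()
    return ReviewedDraft(text=draft, reviewer=reviewer, approved=(decision == "y"))

def publish(result: ReviewedDraft) -> None:
    # The control: nothing ships without documented human approval.
    if not result.approved:
        raise ValueError(f"Draft rejected by {result.reviewer}; do not publish.")
    print(f"Published after review by {result.reviewer}.")

draft = "We are hiring a staff accountant who ..."  # e.g., AI-generated text
publish(review_draft(draft, reviewer="hiring.manager"))
```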

What Does ChatGPT Itself Say About the Ethical Issues?

Pittelkow decided to try ChatGPT and asked the bot if it could tell her more about the ethics of AI. In response, it readily explained that the field of AI ethics is concerned with the moral implications of developing and using the technology, citing topics such as bias and fairness, privacy, responsibility and accountability, job displacement, and algorithmic transparency.

She also asked how ChatGPT can be used ethically. The bot suggested that being respectful, avoiding the spread of misinformation, protecting personal information, and using the tool responsibly are all ethical practices to follow. Avoiding bias is a particular concern: the data entered into the system is only useful if impartiality can be assured.

Risks in Using ChatGPT

I have previously blogged about the ethical risks of using AI in general, and they apply to ChatGPT as well. AI can improve human decision-making, but it has its limits. Bias in algorithms creates an ethical risk that calls into question the reliability of the data produced by the system. Controlling for bias requires explainability, so that the generation of the data can be accounted for; reproducibility, so that testing yields consistent results; and auditability.
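
To illustrate the reproducibility point, one basic control is to run the same prompt several times, check whether the answers agree, and log every run for later audit. The sketch below assumes the pre-1.0 openai Python package; the model, prompt, and log format are illustrative assumptions.

```python
# An illustrative reproducibility-and-audit control: send the same
# prompt several times, log each answer, and report whether the model
# answered consistently. Model name and log format are examples only.
import json
import time
import openai

openai.api_key = "YOUR_API_KEY"

def reproducibility_check(prompt: str, runs: int = 3) -> bool:
    answers = []
    for i in range(runs):
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            temperature=0,  # reduce randomness for a fair comparison
            messages=[{"role": "user", "content": prompt}],
        )
        text = response.choices[0].message.content
        answers.append(text)
        # Append each run to an audit log for later review.
        with open("chatgpt_audit_log.jsonl", "a") as log:
            log.write(json.dumps(
                {"ts": time.time(), "run": i, "prompt": prompt, "answer": text}
            ) + "\n")
    return all(a == answers[0] for a in answers)

print(reproducibility_check("What is the current Illinois sales tax rate?"))
```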

Generative AI systems can give inaccurate or misleading results, both because prompts are too vague and because of poor data sources. The technology's limitations mean it can stumble even on relatively simple queries.

Other ethical risks include a lack of transparency, erosion of privacy, poor accountability, and workforce displacement and transitions. Whether AI systems deserve to be trusted depends on how these risks are managed. To build trust through transparency, organizations should clearly explain what data they collect, how it is used, and how the results affect customers.

Data security and privacy are important issues to consider in deciding whether to use ChatGPT, especially in the workplace. As an AI system, ChatGPT has access to vast amounts of data, including sensitive financial information, and there is a risk that this data could be compromised. It is important that strong security measures are in place to protect this data from unauthorized access.
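
One practical safeguard is to scrub obviously sensitive values from a prompt before it ever leaves the organization. The sketch below is a bare-bones illustration: the patterns shown (U.S. Social Security numbers, email addresses, card-like numbers) are assumptions for the example, and real deployments need far more thorough redaction.

```python
# An illustrative pre-submission scrub: redact obviously sensitive
# values before sending text to an external AI service. These patterns
# are examples only; production systems need thorough PII detection.
import re

REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each sensitive match with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

prompt = "Draft a letter to Jane Doe (jane@example.com), SSN 123-45-6789."
print(redact(prompt))
# -> Draft a letter to Jane Doe ([REDACTED EMAIL]), SSN [REDACTED SSN].
```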

One way OpenAI, ChatGPT's developer, is working to prevent the release of inappropriate content is by asking humans to flag content for it to ban. Of course, this method raises its own ethical considerations. Utilitarians would argue that it is ethical because the end justifies the means: the masses are spared harmful content because only a few human reviewers are exposed to it. More broadly, the value of processing large amounts of data and responding with answers can simplify workplace processes, but the possible displacement of workers needs to be considered.
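
OpenAI also offers an automated moderation endpoint that classifies text against its content policy, which can complement human flagging. A minimal sketch of screening text before showing it to users follows, again assuming the pre-1.0 openai Python package; routing flagged content to a human reviewer is an assumed policy, not part of the API.

```python
# A minimal sketch of screening text with OpenAI's moderation endpoint
# before displaying it (pre-1.0 "openai" package). Automated screening
# complements human flagging; it does not replace it.
import openai

openai.api_key = "YOUR_API_KEY"

def is_safe_to_display(text: str) -> bool:
    result = openai.Moderation.create(input=text)
    if result["results"][0]["flagged"]:
        # Assumed policy: route flagged content to a human reviewer.
        print("Content flagged by the moderation model; sending to review.")
        return False
    return True
```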

In terms of preventing unethical behaviors, such as users asking the program to write papers they then pass off as their own, some technology developers are creating AI specifically to combat nefarious uses of AI. One such tool is ZeroGPT, which can help people determine whether content was generated by a human or by AI.

Conclusions

The ethical use of AI should be addressed by all organizations to build trust in these systems and satisfy stakeholders' need for accurate and reliable information. A better understanding of machine learning would go a long way toward achieving this result.

Professional judgment is still necessary to assess the value of the information an AI system produces. Unless the data is reliably provided and processed, AI will produce results that are inaccurate, incomplete, or incoherent, and machine learning will be compromised as a result.

Blog posted by Dr. Steven Mintz on May 30, 2023. You can sign up for Steve’s newsletter and learn more about his activities on his website (https://www.stevenmintzethics.com/).
