AI-Washing Threatens A Company's Reputation

From Ethics Washing to AI Washing

I have previously blogged about ‘Ethics Washing’, both in general and in the workplace, pointing out that it refers to ethical window dressing: an organization pays lip service to ethics to make it seem as though it acts responsibly but does nothing to ensure that responsible behavior actually occurs in practice. Ethics is mainly for show. In other words, the organization claims to be ethical by pointing to its policies, but those policies may never be implemented in practice.

AI washing, or the use of false and misleading statements about artificial intelligence, has risen to a level that has prompted the government to act. Businesses interested in AI must balance their enthusiasm with careful attention to their public claims. Most important, they must have an ethical base to be believable and build trust with stakeholders.

Ethics Washing is the practice of fabricating or exaggerating a company’s commitment to equitable AI systems that work for everyone. An organization that practices it projects an image of promoting ethics for the good of all without backing that image up. Some point to Google’s experience in 2019, when it created an Artificial Intelligence (AI) ethics board only to disband it less than two weeks later.

In today’s blog I look at the practice of ‘AI Washing’, a derivative of Ethics Washing. Firms that exaggerate what they have done with AI may assume they will have a sufficient defense if questioned. Absent a comprehensive and cohesive set of metrics, it is difficult, to say the least, to measure compliance. False promises may be made precisely because the organization knows how hard it is to match practices to promises.

AI Washing Examples

The practice of AI washing occurs when a company exaggerates or distorts its AI capabilities, creating a false impression of the company’s technological expertise and future prospects. The SEC has increased its focus on this matter and issued guidance reminding companies that AI disclosures remain subject to the basic principles of the securities laws and should only be made with a reasonable basis, which should be disclosed to investors. In addition to this informal guidance and recent administrative proceedings, the SEC has used the comment letter process to request enhanced disclosure of AI claims in registration statements and periodic reports.

Many tech companies establish AI departments to create the appearance of monitoring AI activities, but these often serve more as a public relations strategy than as a genuine attempt to integrate AI oversight into organizational practices. AI ethics committees set up by tech companies often have limited power to enforce or implement their findings; they may be established purely as ‘window dressing’.

Examples of tech companies that have set up AI ethics committees or similar bodies include Microsoft’s AI, Ethics, and Effects in Engineering and Research (AETHER) Committee and Facebook’s Oversight Board, which reviews content-moderation decisions. These committees aim to address the complex and evolving ethical challenges posed by AI and strive to strike a balance between technological innovation and responsible use. However, their effectiveness and independence remain subjects of concern.

Another example is the Partnership on AI (PAI), a consortium of tech companies, including Facebook, Google, and others, aiming to address AI’s societal impact. The PAI has also faced issues related to transparency in its decision-making processes and a perceived lack of public input in its governance. These situations raise questions about the extent to which companies are willing to adhere to their ethical guidelines when they clash with business interests.

SEC’s Position

According to Cooley’s PubCo blog, in January 2024 the SEC was raising red flags about the practice of hyping AI policies without adequate documentation. Then-SEC Chair Gary Gensler cautioned companies against making false claims about their use of AI: “Companies should be ‘truthful about what they’re saying about their claims’….They should also ‘talk about the risks’ of using AI ‘and how they manage their risk,’” Gensler said.

Cooley suggests that claims about AI prospects should have a reasonable basis, and investors should be informed about that basis. When disclosing material risks about AI—and a company may face multiple risks, including operational, legal, and competitive—investors benefit from disclosures particularized to the company, not from boilerplate language. Companies should ask themselves basic questions, such as: ‘If we are discussing AI in earnings calls or having extensive discussions with the board, is it potentially material?’ These disclosure considerations may require companies to define for investors what they mean when referring to AI. For instance, how and where is it being used in the company? Is it being developed by the issuer or supplied by others?


Presto Automation Inc. Settlement

In mid-January 2025, the SEC announced that it had settled charges against Presto Automation Inc., a restaurant-technology company that was listed on the Nasdaq until September 2024, for making materially false and misleading statements about critical aspects of its flagship AI product, Presto Voice. Presto Voice employs AI-assisted speech recognition technology to automate aspects of drive-thru order taking at quick-service restaurants.

The SEC Order points out that the AI technology used in the product was not developed by Presto, at least not until September 2022; rather, the company deployed speech recognition technology owned and operated by a third party. But Presto failed to disclose in its SEC filings that it used the third party’s AI technology, rather than its own, to power all of the Presto Voice units it deployed commercially during that period. The SEC charged that Presto made materially misleading statements in violation of the Securities Act and the Securities Exchange Act and failed to maintain adequate disclosure controls; however, in light of its financial condition and remedial actions, the SEC imposed only a cease-and-desist order and no civil penalty.

Similarly, in April 2024 remarks at a Program on Corporate Compliance and Enforcement, then-SEC Enforcement Director Gurbir Grewal cited statistics showing the immense importance that investors were attributing to AI: “according to a recent survey, 61% of investors believe that faster adoption of AI is very, or extremely, important to a company’s value. That jumps to 85% when including investors who believe it is moderately important.” As a result, there may well be a “perfect storm…brewing around AI.”

Bloomberg Law Views

Bloomberg recommends steps that companies should take in the AI space.

Avoid hype. Vague descriptions, overstated claims, or a lack of clear understanding about AI’s functionalities can indicate AI washing. When using AI-powered technology, don’t race to promote it to investors or consumers; instead, focus on the technology’s substance. This means providing detailed, accurate descriptions of AI systems, including their specific functions, benefits, and limitations.

Develop and enforce robust AI governance. Establish a comprehensive AI governance program, involving collaboration across departments, including legal, technology, investor relations, and marketing. Each plays a role in ensuring AI-related claims are accurate and align with a company’s business strategy.

Ensure accurate public disclosures. Consider what information companies disclose to the public, investors, and competitors about their AI capabilities. Be clear about what their technology can and can’t do, paying special attention to the phrasing used in public disclosures.

Engage AI experts. Experts can play a key role in assessing and validating AI capabilities. It’s worth turning to an internal team of data scientists or engaging an external partner with deep AI expertise. Assessing third-party AI systems requires advanced technical capabilities, but devoting adequate time and resources to a proper evaluation can help protect businesses from financial losses and other risks.

By prioritizing transparency, responsible communication, and genuine innovation, companies can ensure integrity and contribute to a more trustworthy AI landscape. True innovation will be rewarded, helping the entire industry by further advancing technology, encouraging investment, and promoting confidence in various products.

Writing for Forbes online, Dr. Lance B. Eliot, a world-renowned expert on AI and Machine Learning, says that in addition to possible reputational harm there are “numerous legal ramifications [that can] bite them and their firm. One is that they didn’t do what they said they did and can be potentially legally held liable for their false claims. Moreover, AI practices might end up violating laws involving societally sensitive areas such as exhibiting undue biases and acting in discriminatory ways.”

Bloomberg Law writers state that “misrepresenting AI erodes consumer and investor trust by prioritizing short-term hype over long-term reputation building. Once lost, trust is hard to rebuild. Companies risk their credibility, and the veracity of other disclosures may be questioned. This damages relationships with partners, decreases consumer loyalty, and tarnishes brand image—and in a worst-case scenario, it could lead to class-action litigation.”

It is not surprising to me that the SEC and others have raised questions about the truthfulness of disclosures about AI. Companies have engaged in window dressing before, including with their ethics practices. It’s up to the government (i.e., the SEC) and Congress to establish regulations and laws that create a pathway to ethical practice. It’s also up to technology companies to establish their own guidelines and policies that put the needs of the public and investors ahead of all else, including their own interests.

Posted by Steven Mintz, aka Ethics Sage, on March 19, 2025. You can sign up for his newsletter and learn more about his activities at: https://www.stevenmintzethics.com/.