7 Rules to Avoid AI Ad Backlash and Use It Safely

Dec 2, 2025

The light and shadow of AI advertising

Introduction

The advent of generative AI has brought a dramatic transformation to the advertising industry. By sharply reducing production costs and turnaround time while enabling creative work that was previously unthinkable, it has become a powerful tool for many marketers.

At the same time, online backlash ("flaming") and legal trouble surrounding AI advertising keep recurring, and many companies hesitate to adopt it as a result. Careless use can seriously damage brand image. Managing these risks and harnessing AI's power safely has become a new skill demanded of modern marketers.

In this article, we analyze actual cases of backlash and explain seven major pitfalls inherent in AI advertising, along with concrete measures to avoid them. By the end, you will have the essential rules for using the power of AI safely, effectively, and successfully.

1. The Fatal Discomfort Caused by Human-Like AI Expressions

One of the most direct causes of backlash against AI advertising is the "uncanny valley" phenomenon. In August 2024, an AI-generated French fry advertisement released by McDonald's Japan drew significant criticism as a textbook example.

Many viewers noticed that the woman in the video appeared to have six fingers and other details that were human-like yet subtly off, prompting visceral reactions such as "creepy" and "disturbing." This is what psychology calls the uncanny valley: the human brain is extremely sensitive to small discrepancies in faces and bodies, and AI-generated figures that fall just short of looking human can evoke strong discomfort.

In food advertising in particular, where cleanliness and appetite appeal are paramount, this kind of discomfort can be fatal: it directly damages the product's image and reduces purchase intent.

Pro Tip: For now, marketers should avoid using AI to generate depictions of people. Limit AI to idea generation and to visuals that do not involve humans (scenery, objects, and so on), and have a human creator finish any work that features people. Adopting this "hybrid model" is the safest way to avoid this pitfall.

2. The "Training Data Issue" of AI Tools Is Not Just Someone Else's Problem: Copyright Infringement Risks Fall on User Companies

In 2023, Asahi Beer drew significant criticism when it emerged that "Stable Diffusion," the image-generation AI used in one of its campaigns, was the subject of a class-action lawsuit alleging copyright infringement. The case illustrates how serious copyright issues in AI advertising can be.

The crux of the problem is that many generative AI models are trained on copyrighted images scraped from the internet without permission. The legal risk borne by the companies that develop AI tools carries over directly to the brand and legal exposure of the advertisers who use those tools.

Japan's Copyright Act (Article 30-4) is lenient about using data for AI training, but other jurisdictions differ. In the United States, lawsuits over "fair use" are ongoing, and the EU recognizes a system that lets rights holders opt their works out of AI training. For global campaigns, marketers must account for the regulations of each market.

Japan's Agency for Cultural Affairs has indicated that if an AI-generated work is found to be similar to and reliant upon an existing copyrighted work, it may constitute copyright infringement, and that permission from the rights holder may be needed even when the resemblance is accidental.

Actionable Advice: To protect the company's credibility, marketers need to be extremely careful in selecting AI tools. Choose tools whose training data sources are transparent or that offer indemnification for intellectual-property (IP) claims. For example, a tool like "Adobe Firefly," trained on licensed Adobe Stock images, is currently among the safer options.

3. Disharmony with Brand Image: When Cost Reduction Undermines Customer Trust

AI advertising does not benefit every brand. In 2024, Japan Airlines (JAL) ran an advertisement using AI-generated images, and the choice clashed with its brand image, triggering backlash.

JAL's core brand values are "safety" and "trustworthiness," together with the "human touch in service" cultivated over many years. The use of AI in its advertising, however, led many consumers to see the company as cutting costs at the expense of customers. The issue was not the AI technology itself, but that an airline, in an industry where reliability is paramount, adopted the technology without weighing how well it aligned with its own brand values.

Careless reliance on AI can erode the trust relationship with customers and risks destroying the value a brand has built over many years.

Key Lesson: Before introducing AI advertising, marketers must first revisit their company's brand identity and carefully judge whether the look and feel of AI-generated creative, and the very act of using AI, contradict the value the brand promises its customers.

4. Silence Is the Worst Choice: Transparency in AI Use Is Key to Avoiding Backlash

In 2024, a campaign illustration posted by the official X account of Sugiyaku was suspected of being AI-generated, triggering backlash. The problem was made worse by the company's repeatedly ambiguous responses about whether AI had been used.

By avoiding a clear explanation, the official account deepened consumers' suspicions and created the impression that something was being concealed. According to a survey reported by Forbes Japan, about 55% of consumers said they could identify AI-generated content. More seriously, the same survey found that roughly half of consumers felt some degree of aversion to AI-generated advertisements. Hiding the use of AI is no longer a realistic strategy.

As consumer AI literacy increases, ambiguous responses or silence only serve to amplify doubts and intensify backlash.

Practical Takeaway: Transparency is essential to maintaining trust in AI advertising. When using AI, be candid rather than concealing it. A simple notice within the ad, such as "This advertisement was created with generative AI," can prevent unnecessary speculation and build trust with consumers.

5. Criticism for Underestimating Creators: AI as a Tool for "Collaboration" Rather Than "Replacement"

One of the most persistent criticisms of AI advertising is the ethical concern that it "takes away creators' jobs and underestimates their value." This is not merely an abstract ethical debate; it translates directly into concrete brand risk.

In the backlash case involving McDonald's Japan, the company had previously been praised for actively commissioning emerging illustrators, so the shift to AI was widely interpreted as a betrayal of those very creators.
