Law

Interview: Can AI Imitate Ghibli Style Without Breaking The Law?

Legal precedent specifically addressing AI-generated art mimicking copyrighted styles is limited, but recent cases indicate that Indian courts are closely scrutinising AI-related copyright issues, and stricter regulations may emerge in the future.

By Ritika Jain

8 April 2025 4:12 PM IST

The recent trend of social media users turning to OpenAI’s ChatGPT to create photos reminiscent of Studio Ghibli’s animation has raised copyright concerns and questions about the limits of “fair use” of an artist’s style.

The popularity of the “Ghiblification” of personal photos not only strained OpenAI’s servers, it also drove record sign-ups for ChatGPT, the company’s flagship product.

BOOM reached out to advocate Namarata Pahwa, who specialises in copyright law and art, to understand a significant concern: What material can AI use for training? More significantly, what limits govern the rights of an artist? Up to what point is an artist’s work protected, and at what point does it become mere “inspiration”, leaving the copyrighted work open to free use?

The following is an edited excerpt from the interview.

Is it legally permissible for AI models to be trained on copyrighted materials, such as Studio Ghibli's films, without explicit consent from the copyright holders?

No, it is not legally permissible to train AI models on copyrighted materials like Studio Ghibli's films without explicit consent. Such works are protected under copyright law, and using them without authorization may constitute infringement through unauthorized reproduction or creation of derivative works.

In India, while “fair dealing” is allowed for limited purposes such as research or criticism, its application to commercial AI training is unclear and likely falls outside permitted use. Courts are increasingly scrutinizing AI's use of copyrighted content, and using protected works like Ghibli films without permission poses significant legal risks.

To remain compliant, AI developers should use public domain content, licensed datasets, or secure formal permissions from copyright holders.

Does generating images in the “style” of Studio Ghibli constitute copyright infringement, even if specific characters or scenes are not replicated?

Under the Indian Copyright Act, 1957, copyright protects the specific expression of an idea, not the general “style” of an artwork. Therefore, generating AI images in the style of Studio Ghibli—without copying specific characters or scenes—does not automatically constitute copyright infringement.

However, legal risks remain. If the generated images closely resemble Ghibli’s original works, they could be considered derivative works, potentially infringing copyright under the “substantial similarity” test used by Indian courts. Additionally, under trademark law and the doctrine of passing off, commercial use of Ghibli-style images that mislead consumers into believing they are official products may be actionable.

India’s fair dealing exceptions are narrow and unlikely to protect such AI-generated art, especially for commercial use. While there is limited precedent, Indian courts are increasingly scrutinizing AI-related copyright issues, and more regulations are likely as the legal framework evolves.

How does copyright law differentiate between an artist drawing inspiration from a style and an AI replicating that style?

Indian copyright law distinguishes between human artistic inspiration and AI-style replication through several key considerations. 

The scale and scope of creation set them apart. While a human artist's inspired output is typically limited in volume, AI can generate vast quantities in a specific style, potentially saturating the market and impacting the economic interests of original creators. This scale amplifies legal concerns. Furthermore, Indian copyright law traditionally recognizes human beings as authors. The concept of AI authorship is contested, with current legal interpretations generally requiring human involvement for copyright protection.

Regardless of the creator, the legal test of “deceptively similar” remains crucial. If AI-generated output closely resembles protected elements of an original style, it could lead to infringement. Indian courts are actively defining “substantial similarity” in the context of AI art. The Indian Copyright Act, 1957, does not explicitly address AI-generated works, creating legal ambiguity. Additionally, India's more restrictive concept of “fair dealing” compared to the US's “fair use” further limits the permissible use of copyrighted styles. Recent court cases indicate that Indian courts are taking these issues seriously and are beginning to establish legal precedents.

Given that Studio Ghibli's founder, Hayao Miyazaki, has publicly opposed AI-generated art, could his stance influence any potential legal action against OpenAI? Is consent necessary in such cases, or is consent simply a concept in the age of AI?

Miyazaki’s outspoken opposition to AI-generated art, while not creating new legal rights, could significantly influence potential legal action against companies like OpenAI—especially in jurisdictions like India, where public sentiment and artists’ moral rights carry legal weight. Miyazaki’s stature and his condemnation of AI art may shape public opinion, encourage legal scrutiny, and even prompt legislative reforms. His statements could support arguments about the harm AI-generated art inflicts on artists’ reputations and the broader creative industry, especially in claims involving economic harm or unfair competition.

In India, moral rights under copyright law protect an artist’s reputation and prevent distortion of their work. Miyazaki’s stance could bolster arguments that AI-generated images mimicking Studio Ghibli’s style violate these rights, especially if used commercially, potentially misleading audiences and triggering “passing off” claims. His public disapproval further reinforces the strong identity associated with Ghibli’s aesthetic, increasing the likelihood of successful legal challenges.

Consent remains a central but unresolved issue in AI training. While some jurisdictions permit the use of copyrighted works without explicit consent under doctrines like fair use, this is legally contentious and subject to ongoing lawsuits. Courts have yet to reach consensus, but the growing number of licensing deals between AI firms and content creators reflects increasing recognition of consent’s importance. If courts or regulators eventually require consent for training data, AI companies may face legal obligations to license copyrighted material—including Studio Ghibli’s works.

Miyazaki’s influence could steer Studio Ghibli toward legal action or public condemnation of AI models that imitate its style. His views, combined with mounting creator backlash, could help drive legal and regulatory reforms mandating transparency, consent, and compensation in AI training—a shift that would reshape how companies like OpenAI operate in the creative space.

Could the widespread use of AI to replicate specific artistic styles lead to a re-evaluation of copyright laws to better protect artists and studios?

Currently, copyright law protects specific works but not artistic styles, creating a legal gray area that allows AI models to mimic an artist’s style without direct infringement. This loophole enables companies to profit from AI-generated images that closely resemble renowned styles—like Studio Ghibli's—without compensating the original creators. Critics argue this dilutes artistic identity, exploits creators, and leaves them with little legal protection.

As a result, there is growing pressure from artists, studios, and industry groups to reform copyright laws. Proposed changes include expanding protections to cover distinctive artistic styles, similar to how trademark law safeguards brand identities. This could make it illegal to use AI to imitate a protected style without permission. Another idea is to require AI developers to license or pay royalties when using copyrighted works in training datasets. Some also suggest leveraging trademark law if AI-generated content misleads consumers by resembling a studio’s visual identity.

Major studios like Disney and Warner Bros. are exploring legal measures to combat AI mimicry, and lawsuits have been filed against AI companies for training on copyrighted content without consent. Internationally, China has implemented regulations requiring AI-generated content to respect copyright laws, signaling possible global shifts.

Future reforms may include stronger protections for artistic styles, licensing frameworks, transparency about training data, and opt-out options for artists. Though these changes will take time and vary by country, AI is clearly driving legal systems to evolve. Studios like Studio Ghibli, which opposes AI-generated art, may play a key role in shaping these legal reforms.

How do international copyright laws vary in addressing AI-generated content that mimics the style of established artists or studios?

International copyright laws vary widely in addressing AI-generated content that mimics artistic styles. Most current laws protect specific works, not styles, but the rise of AI art is prompting global reassessment.

In the U.S., styles aren’t protected by copyright, and AI-generated content typically avoids infringement unless it directly copies a specific work. Fair use may permit AI training on copyrighted material, but lawsuits against companies like OpenAI and Stability AI could set new legal precedents. AI-generated works without significant human input are not eligible for copyright protection.

The EU doesn’t protect artistic styles either but enforces stricter transparency. The AI Act (2024) requires companies to disclose training data, and the EU Copyright Directive protects creators from unauthorized online use, giving artists more recourse. Countries like Germany and France have stronger copyright enforcement that may aid artists.

Japan allows AI training on copyrighted material without consent if it doesn’t harm the market value of the original. While styles aren’t protected, public pressure could spur reform.

China has implemented some of the world’s strictest AI content laws. Its 2023 regulations mandate copyright respect and prohibit training on copyrighted content without consent, offering strong protections for artists and studios.

In the UK, artistic styles aren’t protected, and AI-generated works lack copyright unless human authorship is involved. The UK is reviewing its policies, potentially leading to tighter regulations.

Canada, Australia, and South Korea are also considering reforms, while countries like India and Brazil still follow traditional copyright frameworks, leaving AI-generated art in a legal gray area.

Globally, while most laws don't currently protect artistic styles, AI is driving legal reevaluation. Future reforms may include requiring consent, royalties, and training data transparency to better protect creators.

Are there existing legal precedents where courts have ruled on cases involving AI-generated content and copyright infringement? Could you please elaborate with some examples?

The legal landscape around AI-generated content and copyright infringement is still evolving, but several key cases are beginning to shape how courts interpret these issues.

In Thaler v. Perlmutter (2023), a U.S. court ruled that AI-generated works without significant human input cannot be copyrighted. The case reaffirmed that only human authors can claim copyright, limiting protections for AI-generated art—especially relevant when style mimicry lacks human contribution.

In the high-profile Getty Images v. Stability AI (ongoing in the U.S. and UK), Getty accuses Stability AI of using millions of copyrighted images without permission to train its model. Stability AI defends the practice under fair use. A ruling in Getty’s favor could force AI firms to license training data; if Stability wins, it could affirm the legality of using copyrighted content without consent for training purposes.

Similarly, the Andersen v. Stability AI, Midjourney & DeviantArt (2023) class-action lawsuit involves artists claiming copyright infringement from AI models trained on their work. The outcome could determine whether AI mimicry of style or training without consent constitutes infringement.

In Zarya of the Dawn (2023), the U.S. Copyright Office granted copyright only to the human-created portions of a graphic novel, denying it for AI-generated images. This reinforces that AI outputs without human authorship aren't protected under current U.S. law.

China, however, has taken a different approach. In the Dreamwriter case (Shenzhen Tencent v. Shanghai Yingxun, 2020), a court ruled in favour of Tencent, holding that an article generated with its Dreamwriter AI software was protected by copyright and that its unauthorised reproduction infringed Tencent's intellectual property rights.

Overall, courts are trending toward denying copyright to AI-only creations while grappling with the legality of training on copyrighted material. Key rulings in the coming years could reshape how AI companies use creative content and how artists, including studios like Ghibli, can protect their work.

What measures can artists and studios take to protect their styles from being replicated by AI technologies without authorisation?

To protect their artistic styles from unauthorised AI replication, artists and studios can adopt a combination of technological, legal, and advocacy-based strategies. Technological tools offer an initial layer of defense. Visible or invisible watermarks and digital signatures can help establish ownership, although advanced AI may sometimes bypass these. Tools like Nightshade and Glaze allow artists to subtly alter their images, effectively “poisoning” them for AI training by misleading models and preventing accurate style replication. Additionally, blockchain technology and NFTs can serve as proof of authenticity and ownership, strengthening legal claims and preserving the value of original works.
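The digital signatures mentioned above rest on a simple idea: a content fingerprint proves a work existed in a given form, and a keyed signature proves who registered it. As a minimal illustration only, the Python sketch below uses the standard library to fingerprint an artwork's bytes and produce an HMAC signature that only the key holder can reproduce; the function names and key handling are illustrative assumptions, not the API of Nightshade, Glaze, or any other specific tool.

```python
import hashlib
import hmac


def fingerprint(artwork_bytes: bytes) -> str:
    """Content fingerprint: identical bytes always yield the same hash,
    so any alteration to the file changes the fingerprint."""
    return hashlib.sha256(artwork_bytes).hexdigest()


def sign(artwork_bytes: bytes, secret_key: bytes) -> str:
    """Keyed signature over the artwork that only the key holder
    can regenerate, useful as evidence of registration."""
    return hmac.new(secret_key, artwork_bytes, hashlib.sha256).hexdigest()


def verify(artwork_bytes: bytes, secret_key: bytes, signature: str) -> bool:
    """Check a claimed signature in constant time to avoid timing leaks."""
    return hmac.compare_digest(sign(artwork_bytes, secret_key), signature)
```

A tampered file fails verification, which is what makes such fingerprints useful as supporting evidence of ownership; real provenance systems layer public-key signatures and timestamping on top of this basic scheme.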

On the legal front, artists should use clear licensing agreements when sharing work online, explicitly outlining usage rights and restrictions. Creative Commons licenses can provide flexible frameworks for this. It’s also important to review platform terms of service and make use of opt-out options where available to prevent artwork from being scraped for AI training. In cases of clear infringement, legal action remains a viable, though often resource-intensive, route—especially when a unique artistic identity or trademark is misused.

Beyond individual protections, advocacy is key. Artists can raise awareness about AI’s impact on creativity and campaign for stronger copyright protections, particularly for styles that define a brand or studio. Engaging the public and policymakers through social media, industry organizations, and public discourse is essential for building momentum toward legal reform. Ultimately, a multifaceted, adaptive approach is necessary to ensure that artists retain control over their creative identity in an era of rapidly advancing AI.

Artists are worried about their work being unethically scraped. At this point, would something like a Spotify model work, where AI companies sign licensing and compensation agreements, much as Spotify does with record labels and media companies, for the reproduction of art?

The “Spotify model” of licensing and compensation could offer a viable framework for addressing the ethical concerns of artists whose work is used to train AI. Similar to how Spotify negotiates with record labels for music rights, AI companies could license artworks from artists, studios, or collecting societies, defining how the work may be used in training datasets and what compensation is owed. Payments could be structured as per-use fees, subscriptions for dataset access, or revenue-sharing agreements, potentially creating new income streams for artists.
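To make the revenue-sharing variant concrete, the sketch below shows a hypothetical pro-rata split of a licensing pool by usage counts, the same basic arithmetic streaming platforms use. The pool size, artist names, and usage figures are invented for illustration; real agreements would define "usage" far more carefully.

```python
def distribute_royalties(pool: float, usage_counts: dict[str, int]) -> dict[str, float]:
    """Split a fixed royalty pool among artists in proportion to how
    often each artist's work was used (e.g. sampled during training)."""
    total = sum(usage_counts.values())
    if total == 0:
        # No recorded usage: nothing to distribute.
        return {artist: 0.0 for artist in usage_counts}
    return {artist: pool * count / total for artist, count in usage_counts.items()}
```

Under this scheme an artist whose work accounts for 3 of 4 recorded uses would receive three-quarters of the pool, which also illustrates the criticism raised below: artists with low or opaque usage counts may see very small payouts.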

This approach would provide legal clarity and ethical legitimacy, replacing unauthorized scraping with formal agreements. It could also pave the way for the development of collecting societies for visual artists, similar to those in the music industry, to manage licensing and royalty distribution. However, significant challenges remain. As seen in the music world, artists may feel undercompensated, especially if royalties are low or opaque. Independent or lesser-known artists may struggle to negotiate fair deals, while larger studios could dominate licensing revenues.

Another issue is the difficulty of defining and tracking the use of an “artistic style” in AI-generated content, which complicates attribution and payment. Ensuring transparency in how AI companies use training data will be crucial to winning artists’ trust. While early efforts are underway to build licensing platforms for visual art, a fully realized, equitable “Spotify model” is still evolving. Nonetheless, as legal and public pressure grows, such a system may become essential in aligning AI development with creators’ rights and interests.

Does AI have the right to learn?

The question of whether AI possesses a “right to learn” is a multifaceted issue involving philosophical, ethical, and legal dimensions. In the Indian landscape today, the answer is not a simple yes or no. Instead of focusing on AI’s “right”, ethical discussions often emphasize the responsibility of those who develop and deploy AI to ensure that the learning process is ethical, taking into account data privacy, bias, and the impact on human creators. In India, the legality of AI training depends on compliance with copyright law and data protection regulations.

The Indian Copyright Act does not explicitly permit using copyrighted works for AI training, and the scope of “fair dealing” in this context remains legally unsettled. AI learning must also respect data privacy under evolving laws. Thus, AI’s ability to learn is not a right, but a regulated activity subject to existing human-centric legal frameworks.

Sam Altman said if you don’t allow fair use, then the AI race is over. What are the implications and how will that impact the rights of an artist?

Sam Altman’s statement that limiting fair use would end the “AI race” reflects the AI industry’s reliance on vast datasets, often containing copyrighted content, to train advanced models. He argues that restricting this access—through stricter copyright enforcement—could slow innovation and give countries with looser regulations a competitive edge.

If broad fair use is not allowed, AI companies could face legal risks, increased costs, and reduced access to diverse data, potentially leading to weaker AI models. They may be forced to rely on licensed or public domain content, limiting their capabilities and slowing progress.

For artists, this shift could offer more control and potential compensation through licensing agreements. It may help combat unauthorized use of their work and strengthen legal protections. However, managing these licenses—especially for independent artists—could be complex, with the risk of unequal compensation, similar to streaming platform models.

Altman’s stance highlights a fundamental conflict: accelerating AI innovation versus protecting creators’ rights. The outcome of this debate—especially regarding fair use—will significantly shape global AI development and artist protections. Legal systems worldwide, including India’s, are now under pressure to balance these competing priorities through clear, fair regulation.

What are the ethical concerns when one starts monetising from AI-generated art borrowed or “inspired” from copyrighted material, or material taken without consent?

Monetising AI-generated art inspired by copyrighted material without consent raises serious ethical concerns. Primarily, it exploits artists by using their work—often without permission or compensation—to train models that can mimic their styles. This undermines creative ownership and devalues the labor behind original art. The lack of transparency also misleads consumers, who may believe they’re purchasing human-made art.

Moreover, such practices risk cultural and artistic dilution, as AI lacks the lived experience and emotional depth that define authentic human expression. Widespread use of AI-generated art can displace human artists, threatening livelihoods and reducing opportunities in creative industries. The issue of intellectual property theft is also central—when AI outputs closely resemble copyrighted works, it blurs the line between inspiration and imitation.

Ethically, AI should serve as a tool to support, not replace, human creativity. Solutions include mandatory transparency (labeling AI-generated art), opt-out options for artists, and fair compensation models when copyrighted content is used for training. Without these safeguards, monetizing AI art risks harming artists, deceiving consumers, and eroding the integrity of the creative process.
