Quick summary: who acted and why this matters
The Content Overseas Distribution Association, CODA, which represents major Japanese rights holders including Studio Ghibli, Bandai Namco and Square Enix, has sent a formal letter asking OpenAI to stop using its members’ works to train AI models such as Sora 2. CODA says the way machine learning models reproduce or imitate copyrighted works may amount to copyright infringement. The group also argues that OpenAI’s current opt-out policy for training data likely conflicts with Japanese copyright law, which generally requires creators or rights holders to give prior permission.
This move follows public concern after the release of Sora 2, a video-generation model, which produced a surge of content that resembled well known Japanese intellectual property. Japan’s government also made a request to OpenAI asking the company to avoid replicating Japanese artwork, increasing pressure on the company to respond. The dispute highlights tensions between generative AI companies, creative industries, and national copyright laws.
What exactly happened
CODA, the Tokyo-based trade group that represents film studios, game publishers and other rights holders, wrote to OpenAI asking it to stop training Sora 2 and other models on materials owned by CODA members. The letter argues two core points:
- Machine learning can create outputs that replicate copyrighted works, which may constitute infringement.
- OpenAI’s opt-out approach to removing copyrighted material after the fact is inconsistent with Japanese law, which generally requires prior authorization from rights holders.
These claims follow public examples of AI image generators producing art in the style of specific Japanese creators and franchises. After Sora 2 launched, users reported a large volume of images that closely matched the look and feel of well known Japanese IP. The result was a formal request from Japan’s government asking OpenAI to stop producing outputs that replicate Japanese artwork.
Why this matters for ordinary readers
This is not only a dispute between big companies. It will affect how people use generative AI tools, how creators make a living, and how tech firms design their products. Key effects could include:
- Changes to how image and text models are trained, which could limit the range of styles they can reproduce.
- New options for creators to protect their work, including clearer opt-in licensing or stronger opt-out tools.
- Potential legal rulings or regulation in Japan that set precedents other countries could follow.
For everyday users, those changes may mean fewer instant imitations of famous styles in AI outputs. For creators and fans, it could mean more control over how beloved characters and designs are used by AI.
How OpenAI’s opt-out policy works, and the legal question in Japan
OpenAI has said it removes material from training sets when a rights holder requests it. That is an opt-out approach, where training proceeds unless a creator tells the company not to use their work. CODA says that under Japanese copyright rules, rights holders must give permission before their works are used in a new project like training an AI model. In short, CODA views opt-out as insufficient and likely unlawful.
The legal question centers on whether the copying and internal use of copyrighted works during model training is allowed without prior consent. CODA argues that the way a generative model stores and reproduces patterns could count as creating infringing copies. A formal interpretation by Japanese authorities, or a court decision, could change how AI companies build and deploy models that involve copyrighted content.
Examples that raised alarms
Reports and user posts showed Sora 2 generating images that evoked specific anime and game styles associated with Studio Ghibli and other Japanese creators. In previous cases, other AI models also produced outputs labeled or described as in the style of well known artists. Those examples triggered public outcry and amplified calls for stronger protections for visual artists and IP owners.
Technical options for companies that want to comply
If OpenAI and others need to align with Japanese law and rights holders’ wishes, there are several technical paths they could take. Each option involves trade-offs.
- Strict opt-in licensing. Companies would obtain licenses before training on specific works. This gives creators control, but it increases costs and slows model development.
- Better filtering and exclusion at scale. Firms could try to identify and remove copyrighted content before or during training. This is technically difficult and may not be perfect.
- Fine-tuning on licensed data. A base model could be trained on permissive data, while specific styles or franchises are reproduced only after explicit licensing for fine-tuning.
- Watermarking and usage limits. Outputs could be labeled, and commercial uses restricted for certain styles unless permissions are granted.
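To make the second option concrete, here is a minimal sketch of exclusion filtering against an opt-out registry. Everything in it is hypothetical: the registry, the corpus records, and the fingerprinting scheme are illustrative, not OpenAI's actual pipeline. It also shows the limitation the text mentions: exact-match hashing only catches identical files, so stylistic imitations and re-encoded copies slip through, which is why filtering at scale "may not be perfect."

```python
import hashlib

def content_fingerprint(data: bytes) -> str:
    """Hash raw asset bytes so excluded works can be matched exactly."""
    return hashlib.sha256(data).hexdigest()

def filter_training_corpus(corpus, excluded_fingerprints):
    """Drop any asset whose fingerprint appears in the exclusion registry."""
    return [
        item for item in corpus
        if content_fingerprint(item["bytes"]) not in excluded_fingerprints
    ]

# Hypothetical registry built from rights-holder removal requests.
registry = {content_fingerprint(b"ghibli-frame-001")}

corpus = [
    {"id": "a", "bytes": b"ghibli-frame-001"},    # matches registry: excluded
    {"id": "b", "bytes": b"public-domain-photo"}, # no match: kept
]

kept = filter_training_corpus(corpus, registry)
print([item["id"] for item in kept])  # prints ['b']
```

A production system would need fuzzier matching (perceptual hashes, learned similarity) to catch resized or re-encoded copies, which is exactly where the engineering difficulty lies.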
Potential legal and business outcomes
The most immediate possibilities are negotiation and policy changes rather than a sudden ban. CODA’s request could lead to:
- Direct licensing deals, where AI firms pay for access to Japanese content to train or fine-tune models.
- Legal guidance or court cases in Japan that clarify when model training counts as copying under copyright law.
- Regulatory responses that require more transparent data practices, or that set rules for how companies must obtain consent.
- Operational changes at AI companies that reduce the risk of creating outputs that mimic protected works too closely.
Wider effects outside Japan
What happens in Japan could influence other countries. If Japanese law or government action forces a move away from opt-out training, companies may adopt similar practices worldwide to avoid complexity. That could affect the speed of AI development and the range of styles available to users globally.
What this means for creators, fans, and everyday users
Creators may get stronger rights and more control over whether and how their work is used to train AI. That could increase licensing income for established IP holders, and give independent artists better options for excluding their work from training sets.
Fans who enjoy generative AI outputs in the style of favorite creators or franchises may see fewer instant results that mimic those styles. At the same time, clearer rules could reduce legal uncertainty for creators and platforms.
Key takeaways
- CODA, representing Studio Ghibli, Bandai Namco and Square Enix, asked OpenAI to stop using members’ work to train Sora 2 and other models.
- CODA claims machine learning replication can infringe copyright and that opt-out training likely violates Japanese law requiring prior permission.
- Sora 2 produced many images resembling Japanese IP, prompting a formal request from Japan’s government to OpenAI.
- Possible outcomes include licensing deals, new technical safeguards, legal rulings in Japan, and changes in how AI firms worldwide handle training data.
FAQ
Will OpenAI have to stop using all Japanese art to train models?
Not necessarily. The outcome depends on negotiations, legal interpretation, and possible regulation. CODA wants OpenAI to stop using member works without permission. OpenAI could respond with licenses, data removal, or technical safeguards.
Does this mean generative AI will get worse?
Models may change, but not necessarily for the worse. Removing specific copyrighted works could reduce the model’s ability to copy those styles exactly. At the same time, companies can train using licensed data and other public sources to keep creating useful tools.
Can an artist opt out now?
OpenAI and other companies offer ways for creators to request removal of their work from training sets, but CODA argues that opt-out is insufficient under Japanese law. The right path may be opt-in licensing or updated legal rules that clarify permissions ahead of training.
Conclusion
The request from CODA, backed by major Japanese rights holders including Studio Ghibli, Bandai Namco and Square Enix, puts new pressure on OpenAI to change how it uses copyrighted works for training. The issue brings copyright law, national policy and the technical details of model training into direct conflict. For everyday users and creators, the dispute may change what AI tools can do, who benefits from those tools, and how companies obtain the data that drives generative models. The next steps will likely be negotiation and legal clarification in Japan, and those outcomes could shape global practices for AI training and licensing.