The regulatory landscape for generative AI and copyright in Japan has shifted decisively in the first half of 2026, driven by a convergence of updated administrative guidance from the Japan Patent Office (JPO), Cabinet-level revisions to the Act on the Protection of Personal Information (APPI), and the EU‑Japan IP Action framework signalling cross-border enforcement alignment. For general counsel, in‑house IP teams and product leads at media, gaming and AI companies, the immediate challenge is no longer whether Japan permits text-and-data mining (TDM) for AI training (that question was largely answered by Article 30‑4 of the Copyright Act) but how to structure licences, performer releases and data-transfer controls so that lawful training does not become unlawful exploitation.
This guide delivers the practitioner-level checklists, model clause language and compliance workflows that competing commentary has so far left out, building on earlier coverage of the 2026 reforms to provide a single, actionable reference for cross-border teams.
Key takeaways:
Under the Japanese Copyright Act, Article 30‑4 provides the primary statutory basis for using copyrighted works in generative AI training. The provision permits exploitation of a copyrighted work without the rights holder’s authorisation where the use does not serve the purpose of enjoying the “thoughts or sentiments expressed” in that work, a threshold commonly referred to as the “non-enjoyment” test. The Agency for Cultural Affairs has elaborated on this standard in its General Understanding on AI and Copyright, confirming that computational analysis, information extraction and machine-learning training generally fall within the scope of permitted non-enjoyment uses.
The Article 30‑4 carve-out allows companies to ingest copyrighted text, images, audio and video into training pipelines without obtaining individual licences, provided that the purpose remains analytical rather than expressive. In practice, this covers computational analysis, information extraction and the preparatory reproduction and ingestion steps that precede model training.
The WIPO analysis on AI and copyright in Japan confirms that Japan’s TDM exception is among the broadest in any major jurisdiction, because Article 30‑4 does not impose a sector limitation (it applies equally to commercial and non-commercial actors) and does not require that the work be “lawfully accessed,” although separate legal theories, such as terms-of-service breach, may still constrain scraping activities.
The non-enjoyment test has a critical boundary: once an AI system generates outputs that substitute for the original copyrighted work’s market function, Article 30‑4 no longer shields the developer or deployer. Industry observers expect regulators to scrutinise such market-substitution scenarios with increasing rigour.
The practical implication for counsel is straightforward: lawful training does not guarantee lawful output. Every deployment pipeline must include a separate output-clearance step, independent of the training-data analysis.
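That separation can be operationalised as a discrete gate in the release pipeline. The sketch below is purely illustrative: the `clear_output` function, its field names and the similarity threshold are hypothetical, not an established standard or any regulator's prescribed method.

```python
from dataclasses import dataclass, field


@dataclass
class ClearanceResult:
    approved: bool
    reasons: list = field(default_factory=list)


def clear_output(output_text: str, similarity_score: float,
                 licensed: bool) -> ClearanceResult:
    """Hypothetical output-clearance gate, run as a separate step after
    generation and independently of any training-data analysis."""
    reasons = []
    # High similarity to a protected reference work blocks release unless
    # a content-to-output licence covers it (threshold is illustrative).
    if similarity_score >= 0.8 and not licensed:
        reasons.append("substantial similarity to unlicensed work")
    if not output_text.strip():
        reasons.append("empty output")
    return ClearanceResult(approved=not reasons, reasons=reasons)
```

The design point is that the gate takes no input from the training-data analysis: even a model trained entirely under Article 30‑4 still passes every output through this check.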
The April 2026 Cabinet-level revisions to the APPI have introduced stricter requirements for cross-border data transfers, with direct consequences for companies that train generative AI models on datasets containing personal information. Under the amended framework, any transfer of personal data to a jurisdiction that does not maintain an “equivalent level” of data protection requires one of several prescribed safeguards, and critically, the amendments have expanded the definition of personal information to encompass biometric data, behavioural profiles and certain pseudonymised datasets that are commonly used in AI training pipelines.
The APPI amendments clarify that model outputs may themselves constitute personal information if they are capable of identifying a specific individual, either directly or in combination with other readily available data. This means that a generative model trained on facial images, voice recordings or writing samples that produces outputs resembling identifiable individuals triggers APPI obligations at the output stage, not merely at the data-ingestion stage. Companies deploying such models must implement controls at both ends of the pipeline: input filtering to segregate personal data, and output monitoring to detect and flag identifiable content before distribution.
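As an illustration of controls at both ends of the pipeline, the sketch below uses simple pattern matching; a production system would rely on NER models and biometric detectors rather than regexes, and every function and pattern here is a hypothetical simplification.

```python
import re

# Illustrative patterns for personal identifiers; real deployments would
# use trained detectors, not regexes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{2,4}-\d{2,4}-\d{4}\b")  # common Japanese formats


def filter_inputs(records):
    """Input-side control: segregate records containing personal data
    before they reach the training pipeline."""
    clean, personal = [], []
    for r in records:
        (personal if EMAIL.search(r) or PHONE.search(r) else clean).append(r)
    return clean, personal


def flag_output(text):
    """Output-side control: flag identifiable content for review
    before distribution."""
    return bool(EMAIL.search(text) or PHONE.search(text))
```

Segregated input records would then follow the APPI consent or safeguard route, while flagged outputs would be held for human review rather than distributed automatically.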
Legal teams managing AI training-data pipelines in Japan that involve offshore model hosting, cloud computing or third-party annotation services should maintain an operational checklist reflecting the APPI amendments now in force, and keep it current as guidance evolves.
Early indications suggest that the PPC intends to conduct targeted audits of AI companies from late 2026, making proactive compliance documentation a practical priority rather than a theoretical exercise.
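One low-cost way to build that documentation trail is an append-only log of cross-border transfers. The sketch below assumes a JSON-lines file; the field names are illustrative and are not prescribed by the PPC.

```python
import datetime
import json


def log_transfer(dataset_id, destination_country, safeguard,
                 path="transfer_log.jsonl"):
    """Append an audit-ready record of one cross-border transfer.
    Field names are illustrative, not PPC-prescribed."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "dataset_id": dataset_id,
        "destination": destination_country,
        # e.g. consent, contractual clauses, adequacy finding
        "safeguard": safeguard,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return entry
```

An append-only record of which safeguard supported each transfer is the kind of contemporaneous evidence an audit would look for.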
Where Article 30‑4 does not cover a particular use, or where risk tolerance demands belt-and-braces protection, content licensing for AI becomes the primary risk-mitigation tool. The licensing strategy for generative-AI projects in Japan should follow a structured sequence: identify the assets to be licensed, map the rights holders, define the scope of the grant, and embed protective clauses that allocate risk between licensor and licensee.
| Licence Type | When to Use | Key Contractual Protections |
|---|---|---|
| Training‑only data licence | Large datasets where only internal model training is allowed, no downstream distribution | Explicit “training only” grant; no commercial output right; deletion/forgetting clauses; audit & provenance; indemnity for third‑party claims |
| Content‑to‑output licence | Publishers or platforms needing to distribute or commercialise AI outputs derived from licensed works | Output rights grant (explicit); moral‑rights waiver where possible; revenue share/royalty; representation that rights cleared |
| Performer/voice release | Use of performer images/voice for training or synthetic outputs (deepfakes, voice clones) | Full release for training & outputs; right to sub‑license; compensation terms; reputational damage indemnity |
Example language, for discussion only. These clauses require adaptation to specific transaction terms and Japanese-law review before execution.
When reviewing or drafting AI training licences, in-house counsel should screen for red flags such as over-broad output grants, missing deletion or forgetting obligations, and absent indemnities for third-party claims.
Japan’s Copyright Act grants performers exclusive rights over the recording, reproduction, distribution and public transmission of their performances. These rights exist independently of any copyright in the underlying work, which means that rights clearance for voice-deepfake projects must address performer rights as a distinct layer, not merely as an extension of the copyright licence for the underlying music, script or broadcast.
A comprehensive clearance workflow for voice and image synthesis must identify each layer of rights holder, including the performer, the owner of copyright in the underlying work and the producer of any source recording, and obtain a release from each.
Example language, for discussion only.
Voice Cloning Release (short form): “Performer grants Producer an irrevocable, worldwide licence to use Performer’s voice, vocal characteristics and speech patterns for the purpose of training voice-synthesis models and generating synthetic voice outputs. Performer acknowledges that synthetic outputs may not be distinguishable from Performer’s natural voice and consents to such use, subject to the compensation terms in Schedule [X].”
Deepfake Indemnity Clause (short form): “Producer shall indemnify Performer against any claim, loss or damage arising from the use of synthetic reproductions of Performer’s voice or likeness, including reputational harm, provided that the claim does not arise from Performer’s own breach of this Agreement.”
Administrative panels convened under the Agency for Cultural Affairs and the Japan Fair Trade Commission have signalled that unauthorised deepfake and voice-cloning uses may give rise to claims under tort law (Civil Code Article 709), personality-rights doctrine, and, where the performer is engaged in commercial endorsement, unfair competition law. The likely practical effect will be that damages awards encompass not only lost licensing fees but also compensation for reputational harm, which Japanese courts have historically assessed broadly in personality-rights cases.
The EU‑Japan IP Action, formalised in early 2026, establishes a bilateral framework for coordinated enforcement, standard-setting and mutual recognition in intellectual property matters, including the emerging intersection of copyright, data protection and AI. For multinationals operating AI development pipelines that span Japanese and European jurisdictions, the framework signals a trend toward regulatory convergence that demands proactive contractual planning.
Cross-border dataset licences should address jurisdiction, governing law and regulatory-cooperation triggers explicitly. Industry observers expect regulatory-cooperation, notification and data-localisation provisions to become standard in multinational AI licensing agreements.
Example language, for discussion only.
“This Addendum supplements the Dataset Licence Agreement between the parties to address cross-border transfers of Licensed Materials. Licensee shall process Licensed Materials transferred from Japan in compliance with the APPI and any applicable PPC guidance. Where Licensed Materials are transferred to or from a jurisdiction within the European Economic Area, Licensee shall additionally comply with the GDPR. Each party shall promptly notify the other of any regulatory inquiry or enforcement action concerning the Licensed Materials and shall cooperate in responding to such inquiry. In the event that a regulatory change materially restricts cross-border transfer, either party may invoke the data-localisation contingency set out in Schedule [Y].”
The following matrix maps common generative-AI use cases to the clearances, data-protection steps and contractual terms required under the current Japanese regulatory framework:
| Use Case | Copyright Clearance | APPI Consideration | Key Contract Terms |
|---|---|---|---|
| Internal R&D model training | Article 30‑4 likely sufficient; no output licence needed | DPIA if personal data included; standard vendor DPA | Training-only licence; deletion on termination; audit |
| Publish AI-generated content (text, image, music) | Content-to-output licence required; output-clearance review | Output monitoring for identifiable individuals | Output rights grant; attribution; indemnity; moral-rights waiver |
| Consumer app with voice synthesis | Performer release + content licence for underlying works | Biometric data safeguards; cross-border transfer controls | Voice cloning release; deepfake indemnity; data localisation clause |
| Cross-border dataset transfer (Japan → EEA/US) | Same as above per use case | PPC-prescribed SCCs; DPIA; transfer logs | Cross-border addendum; regulatory-cooperation clause; repatriation trigger |
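For product teams, the matrix can be mirrored as a machine-readable lookup so that each use case yields its checklist automatically. The keys and abbreviated strings below simply restate the table and are illustrative, not exhaustive legal requirements.

```python
# Machine-readable mirror of the clearance matrix above; strings abbreviate
# the table entries and do not replace legal review.
CLEARANCE_MATRIX = {
    "internal_rnd_training": {
        "copyright": "Article 30-4 likely sufficient; no output licence needed",
        "appi": "DPIA if personal data included; standard vendor DPA",
        "contract": ["training-only licence", "deletion on termination",
                     "audit"],
    },
    "publish_generated_content": {
        "copyright": "content-to-output licence; output-clearance review",
        "appi": "output monitoring for identifiable individuals",
        "contract": ["output rights grant", "attribution", "indemnity",
                     "moral-rights waiver"],
    },
    "consumer_voice_synthesis": {
        "copyright": "performer release plus content licence for underlying works",
        "appi": "biometric safeguards; cross-border transfer controls",
        "contract": ["voice cloning release", "deepfake indemnity",
                     "data localisation clause"],
    },
}


def required_steps(use_case: str) -> dict:
    """Return the clearance checklist for a known use case."""
    return CLEARANCE_MATRIX[use_case]
```

Encoding the matrix this way lets a project-intake form surface the right clearance questions before engineering work starts.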
Japanese copyright holders may pursue injunctive relief, damages and, in cases involving moral-rights violations, orders for corrective measures including public apology or retraction. Companies deploying generative AI should therefore have a documented response protocol for infringement claims in place before any claim arrives.
The Japan Fair Trade Commission’s report on generative AI has additionally flagged competition-law concerns where dominant platforms use their market position to extract training-data licences on unfair terms, signalling that rights holders may have recourse under unfair-trade-practice provisions as well as copyright law. Early indications suggest that enforcement activity will increase throughout the second half of 2026 as rights holders, particularly in the music, publishing and gaming sectors, test the boundaries of Article 30‑4 through formal complaints and litigation.
Under current Japanese copyright law, copyright subsists only in works that are “creative expressions of thoughts or sentiments.” Where a human author exercises creative choices in selecting prompts, curating outputs and making editorial decisions, the resulting work may qualify for copyright protection. Purely autonomous AI outputs, generated without meaningful human creative input, are unlikely to attract copyright protection. The practical advice for companies seeking to protect their AI-assisted works is to document the human creative contributions at each stage of the production process.
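That documentation habit can be systematised with a simple provenance record per production stage. The schema below is a hypothetical sketch, not a statutory requirement; the point is a contemporaneous log of human creative choices.

```python
import datetime
from dataclasses import asdict, dataclass, field


@dataclass
class CreativeStep:
    """One documented human contribution in an AI-assisted production
    (illustrative schema only)."""
    stage: str          # e.g. "prompting", "curation", "editing"
    author: str
    description: str    # the creative choice actually made
    timestamp: str = field(
        default_factory=lambda: datetime.datetime.now(
            datetime.timezone.utc).isoformat())


def provenance_record(steps):
    """Serialise the documented steps for inclusion in a rights file."""
    return [asdict(s) for s in steps]
```

A record like this, kept as the work is made rather than reconstructed later, is what supports the argument that the finished work reflects human creative expression.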
The 2026 regulatory environment in Japan demands that companies move beyond passive reliance on the Article 30‑4 safe harbour and adopt a proactive compliance posture, prioritising the clearance, licensing and data-transfer workflows set out above over the next quarter.
This guidance is general in nature and does not constitute legal advice. Companies should consult qualified counsel for advice tailored to their specific circumstances, jurisdictions and risk profiles.
This article was produced by Global Law Experts. For specialist advice on this topic, contact Chie Kasahara at Atsumi & Sakai, a member of the Global Law Experts network.