Govt Panel Weighs Blanket Licence for Generative AI Training on Copyrighted Works in India
The Union government has begun a formal review of India’s Copyright Act, 1957 to address the legal challenges posed by the rapid spread of generative AI tools. In a written reply in the Lok Sabha, Minister of State for Commerce and Industry Jitin Prasada confirmed that the move is aimed at clarifying how copyrighted works can be used to train AI systems and who owns the rights in AI‑generated content.
Expert Panel to Examine Generative AI–Copyright Interface
The Department for Promotion of Industry and Internal Trade (DPIIT) has set up an eight‑member expert committee to study the impact of generative AI on copyright law and recommend whether amendments are needed. The panel, constituted on 28 April 2025, has been asked to identify legal and policy challenges linked to AI, review the adequacy of existing statutory provisions and suggest changes where required.
According to the government, the committee’s terms of reference include examining how generative AI systems use copyrighted content, assessing whether current exceptions and limitations are sufficient, and drafting a working paper for public consultation. The committee has already completed Part 1 of this working paper, which focuses on the use of copyrighted works in the training of generative AI models.
Working Paper on AI Training and Copyright Law
Part 1 of the DPIIT working paper, released in November and now open for stakeholder feedback, analyses how AI developers scrape and ingest large volumes of text, images, audio, video and code for training. The paper concludes that using lawfully accessed copyrighted works for generative AI training engages copyright because it involves acts such as reproduction and adaptation, even if the training process is not visible to end‑users.
The working paper rejects the idea of a broad, open‑ended exception that would allow all AI training on copyrighted content without permission or payment. Instead, the majority of committee members favour a statutory or mandatory blanket licensing mechanism that would permit AI developers to train on lawfully accessed works in exchange for regulated remuneration to rightsholders.
Proposed Blanket Licence Model for Generative AI Training
Under the proposed hybrid licensing model, AI companies would obtain a non‑exclusive licence to use all lawfully accessed copyrighted works in India for training their models, subject to payment of statutory royalties. A central collecting entity, likely managed by rightsholder organisations, would collect fees from AI developers and distribute them among authors, publishers, music labels and other copyright owners.
The committee suggests that such a scheme could operate in a manner comparable to compulsory or statutory licences already familiar in broadcasting and music, while being adapted for the unique features of AI training. The paper emphasises that this approach aims to provide legal certainty for AI innovators, while ensuring that creators are not deprived of fair compensation when their works are used at scale by powerful models.
Part 2 to Tackle AI‑Generated Works
The government has clarified that the panel is now working on Part 2 of the paper, which will examine authorship, ownership and copyrightability of AI‑generated content. Key questions under consideration include whether purely AI‑generated outputs qualify for copyright protection, how much human involvement is necessary, and who should be treated as the author when multiple parties are involved.
This second phase is also expected to address the risk of AI outputs closely mimicking existing copyrighted works, such as artworks, photographs, music or news reports, raising concerns about derivative works and substantial similarity. Once completed, the two‑part paper is likely to form the basis of any amendments that the government may introduce to the Copyright Act or associated rules.
What the Review Aims to Achieve
The overarching objective of the exercise is to update India’s copyright framework so that it can handle AI‑driven uses of content without either stifling innovation or undermining creative industries. On one hand, policymakers recognise that cutting‑edge AI systems require access to vast amounts of data, including news, books and audio‑visual works, to remain globally competitive.
On the other hand, creators and content businesses have flagged unlicensed scraping and opaque training practices as a threat to existing revenue models, especially in sectors like media, music and publishing. By pushing a licensing‑led approach rather than a broad exception, the working paper attempts to structure a middle path that rewards creators while giving AI firms a predictable legal route to access data.
Impact on News Writing and Journalism
For journalists and news publishers, including digital portals, the day‑to‑day act of news writing remains protected as an ordinary “literary work” under the Copyright Act. The review does not seek to limit or regulate how reporters write, edit or publish news; instead, it focuses on how AI systems use those news reports as input for training.
News articles, editorials, explainers and analysis pieces are among the copyrighted works that AI developers routinely crawl and ingest to train large models. Under the blanket licence proposal, any lawfully accessible news story on a publisher’s website could be used to train generative AI models as a matter of right, but AI companies would be required to pay statutory remuneration through the proposed licensing framework.
Rights and Remuneration for News Publishers
This model could open a new revenue stream for news organisations, which have long complained that tech platforms benefit disproportionately from their content. Instead of negotiating one‑to‑one deals with every AI company, publishers would receive payments through a collecting society or similar body whenever their content is used in large‑scale AI training.
The trade‑off is that individual rightsholders might lose the ability to veto the use of lawfully accessible content for training, as long as the AI developer participates in the statutory licence. The balance between guaranteed remuneration and loss of granular control is likely to be a key fault‑line in consultations, particularly for bigger media houses that may prefer bespoke licensing arrangements.
Effects on Everyday Newsroom Practices
Routine newsroom practices such as quoting, summarising or critiquing other reports fall under existing fair dealing provisions for reporting of current events and are not the target of the AI‑focused review. Journalists can continue to quote from government replies, court orders or other outlets with appropriate attribution in line with current norms.
The more significant shift is likely to come from the way AI tools are integrated into newsrooms themselves. As long as a story is primarily human‑authored and AI is used only to assist with research, language optimisation or headline suggestions, it should continue to qualify as a human work with full copyright protection for the reporter or employer.
How AI‑Assisted Journalism May Be Treated
The unsettled area is heavily AI‑generated content, where tools draft large portions of text and the human user merely edits or approves it. Part 2 of the DPIIT working paper will try to clarify whether such outputs attract copyright at all, how to judge the level of human creativity involved, and whether bylines and contracts need to adapt.
For newsrooms, any future rules could impact how they claim rights over AI‑assisted stories, syndicate content, or pursue infringement against unauthorised use of their AI‑heavy pieces. The debate may also influence editorial policies on transparency, including whether outlets openly disclose the use of generative AI in producing certain formats such as explainers, recaps or headlines.
Who Stands to Gain or Lose
Large AI developers and platform companies are likely to face higher compliance and licensing costs if a mandatory blanket licence comes into force. While they gain legal certainty, they will have to factor in regular royalty payments, dataset documentation and possible transparency requirements about the sources used for training.
For creators and content businesses, including news organisations, the proposed regime offers potential monetary returns where currently there may be none. Smaller AI startups and open‑source projects could, however, find the cost and complexity of licensing a hurdle unless fees are structured in a way that supports innovation.
How India Compares with Global Approaches
Globally, jurisdictions are taking different routes to deal with generative AI and copyright. The European Union relies on text and data mining exceptions under its DSM Directive, combined with obligations under the EU AI Act for foundation models to document training data and ensure copyright compliance.
The United States is currently testing the limits of “fair use” through high‑profile lawsuits against AI companies, without a specific statutory exception for AI training. Japan, by contrast, has a relatively broad text and data mining exception that allows certain unlicensed uses for data analysis, which has been interpreted to support AI training in many contexts.
India’s proposed blanket licensing model stands out because it centres on compulsory remuneration instead of relying primarily on exceptions or a flexible fair use doctrine. Policymakers see this as a “middle path” that can unlock training data for AI companies while ensuring that writers, journalists, artists and other creators share in the economic value created.
What to Watch Next
The immediate next step is the completion and publication of Part 2 of the DPIIT working paper, followed by another round of stakeholder consultations. Any eventual amendments to the Copyright Act or new rules will depend on feedback from creators, tech firms, start‑ups, civil society and legal experts.
For the news industry, this process will determine whether AI training on news content becomes a regulated, remunerative activity and how AI‑assisted journalism is treated in Indian law. Until then, journalists can continue their existing practices, while keeping a close eye on how the balance between innovation and protection is being redrawn in Parliament and policy circles.