The global AI safety debate has escalated after the United Kingdom announced tighter rules for AI chatbots, India opened a major international summit on artificial intelligence, and China’s ByteDance faced copyright accusations over its latest AI video model.
In London, the British government said it would close a legal gap in online safety legislation after Elon Musk’s chatbot Grok was used to generate sexualised deepfake images. At the same time, world leaders gathered in New Delhi to discuss AI governance, while in Beijing, ByteDance pledged to strengthen safeguards following claims of large-scale copyright infringement.
Together, the developments highlight mounting international pressure on AI companies over issues ranging from child protection and misinformation to intellectual property rights and job disruption.
UK Moves to Regulate AI Chatbots Under Online Safety Act
Britain’s government confirmed on Monday that AI chatbots will be brought under the scope of the Online Safety Act, following public backlash over Grok’s ability to generate sexualised images of women and children from text prompts.
Prime Minister Keir Starmer said the government would act quickly to close what he described as a legal loophole. In a statement issued ahead of a speech, Starmer said chatbot providers would be required to comply with rules governing illegal content or face legal consequences.
Under the Online Safety Act, which came into force in July, platforms that host potentially harmful material must implement strict age verification measures. These include tools such as facial image checks or credit card verification.
The law already makes it illegal to create or share:
Non-consensual intimate images
Child sexual abuse material
Sexual deepfakes generated using AI
However, regulators noted that some AI chatbots were not fully covered if they allowed interaction only between a user and the system, without content sharing between users. The new measures are intended to address that gap.
Britain’s media regulator Ofcom opened an investigation in January into social media platform X, which hosts Grok, over potential failures to meet safety obligations. The country’s data protection authority has also launched a wider probe into X and xAI to examine whether personal data laws were breached in connection with the creation of sexualised deepfakes.
Starmer’s Labour government has also launched a consultation on a possible social media ban for children under 16 and is considering limits on features such as infinite scrolling.
The UK previously pledged in January 2025 to reduce regulatory barriers to attract AI investment and position the country as what Starmer called an “AI superpower.” The latest steps reflect the growing tension between innovation and regulation.
AI Impact Summit in India Focuses on Governance and Safety
At the same time, India is hosting the five-day AI Impact Summit in New Delhi, bringing together 20 national leaders and 45 ministerial delegations, alongside technology executives including OpenAI’s Sam Altman and Google’s Sundar Pichai.
The summit aims to produce a “shared roadmap for global AI governance and collaboration,” according to organisers. Indian Prime Minister Narendra Modi described the event as evidence of the country’s rapid progress in science and technology.
The meeting, the fourth of its kind after previous gatherings in Paris, Seoul and Britain’s Bletchley Park, is built around three themes described as “people, progress, planet.”
Key concerns on the agenda include:
Risks to child safety and the spread of AI-generated misinformation
The economic impact of automation and potential job losses
Environmental and energy implications of large-scale AI systems
Amba Kak, co-executive director of the AI Now Institute, questioned whether world leaders would take meaningful steps to hold AI companies accountable. She told AFP that previous voluntary industry commitments had largely relied on self-regulation.
Last year, dozens of countries signed a statement in Paris calling for AI to be developed in an open and ethical manner. The United States did not sign that declaration, with Vice President JD Vance warning that excessive regulation could damage a fast-growing sector.
India, which recently rose to third place globally in AI competitiveness according to Stanford University researchers, is seeking to position itself as a leading AI hub. However, experts note that it still trails the United States and China in infrastructure and research capabilities.
Concerns are also growing over employment. India’s large outsourcing and call centre industries may face disruption from advanced AI voice and assistant tools. Some market analysts have linked recent declines in outsourcing company shares to rapid improvements in AI systems.
ByteDance Faces Copyright Claims Over AI Video Model
In China, technology company ByteDance said it is working to strengthen safeguards after facing allegations of copyright infringement linked to its AI video generation model, Seedance 2.0.
The Motion Picture Association accused the model of unauthorised use of US copyrighted works “on a massive scale.” MPA chairman Charles Rivkin said the system lacked meaningful protections against infringement and called for it to cease operations.
The actors’ union SAG-AFTRA also condemned what it described as the unauthorised use of performers’ voices and likenesses.
Seedance 2.0, currently available only as a limited test version in China, has generated highly realistic video scenes featuring well-known film characters and actors. Some of these videos have attracted millions of views online.
In response, ByteDance told AFP that it respects intellectual property rights and is taking steps to strengthen safeguards to prevent unauthorised use of copyrighted material and likenesses.
Swiss consultancy CTOL Digital Solutions described Seedance 2.0 as one of the most advanced AI video generation systems currently available, claiming it outperformed competing models in testing.
Conclusion
The global AI safety debate is intensifying as governments, regulators and industry groups confront the rapid expansion of generative AI. From child protection laws in the UK to governance discussions in India and copyright disputes in China, policymakers face increasing pressure to balance innovation with accountability.