India has rolled out its final AI Governance Guidelines, a new framework that prioritizes coordination over heavy regulation. The guidelines, unveiled under the IndiaAI Mission, aim to spark innovation while keeping potential risks in check with practical, evidence‑based tools.
Key pieces of the plan include the AI Governance Group (AIGG), a Technology and Policy Expert Committee (TPEC), and the AI Safety Institute (AISI). Together they provide a whole‑of‑government approach that avoids a single, over‑centralised regulator. Instead, sectoral regulators retain the lead on enforcement and oversight, striking a balance between flexibility and accountability.
Nasscom, India’s leading IT trade association, praised the guidelines, noting that the framework embraces proportional, voluntary measures, graded liability and a non‑punitive incident system. “The guidelines are pragmatic, built on real incidents and designed to learn, adapt and avoid regulating imagined harms,” Nasscom said.
Legal experts working on the framework plan to rely on existing statutes, identify gaps and make targeted amendments rather than draft an entirely new AI law. This approach echoes Nasscom’s stance that a separate AI law is unnecessary at this stage.
The guidelines set out seven ethical principles and rest on six core governance pillars. An action plan with short-, medium- and long-term targets offers clear guidance for developers, industry players and regulators alike. All of this moves India toward the India‑AI Impact Summit 2026, where leaders will discuss responsible AI adoption across sectors.
By embedding flexibility, shared responsibility and evidence‑based risk management, the guidelines set a forward‑thinking standard for AI governance in India.
Source: ianslive