The Indian Ministry of Finance has announced a ban on the usage of AI tools and applications, including ChatGPT and DeepSeek, on official government devices. This directive, dated 29th January 2025, is intended to protect sensitive government data from potential security threats.
The notice, signed by Joint Secretary Pradeep Kumar Singh, warns that the use of AI-powered applications on workplace computers could compromise classified government information. To mitigate these risks, the ministry has instructed all employees to avoid using such tools on official devices.
This circular has gained approval from the Finance Secretary and has been circulated to major government departments, such as Revenue, Economic Affairs, Expenditure, Public Enterprises, DIPAM, and Financial Services.
The prohibition is part of a broader global concern regarding AI platforms managing sensitive data. Numerous AI models, like ChatGPT, process user inputs on external servers, raising alarms about data breaches or unauthorised access.
Similar restrictions on AI have been enacted by governments and corporations across the globe, with several private companies and international organisations already limiting the use of AI tools to guard against data leaks.
Although this directive bars AI applications on official devices, it does not clarify whether employees may use them on personal devices for work-related tasks. This approach suggests that the government is taking a prudent stance on AI implementation, prioritising data security over ease of use.
As AI tools gain traction in various workplaces, the future establishment of regulated AI use policies by the Indian government remains unclear. For the time being, officials within the finance ministry are required to rely on conventional methods, at least on their work computers.
Why the ban? The Indian Finance Ministry's decision to prohibit AI tools on official devices stems from security and confidentiality concerns. Here are some reasons why the government may be pursuing this measure:
1. Risk of data leaks
AI models such as ChatGPT and DeepSeek process user inputs on external servers, meaning any sensitive government information entered into these tools could potentially be stored, accessed, or misused. Given that government entities manage classified financial data, policy documents, and internal communications, even unintentional exposure could lead to significant risks.
2. Lack of control over AI models
In contrast to traditional software used in government offices, AI tools are cloud-based and controlled by private enterprises (for instance, OpenAI in the case of ChatGPT). The government lacks direct oversight of how these tools store or process information, heightening concerns about foreign access or cyber threats.
3. Compliance with data protection policies
India is striving to enhance data privacy legislation, including the Digital Personal Data Protection (DPDP) Act, 2023. Permitting AI tools on official devices without definitive regulations could result in violations of data protection policies, exposing government systems to vulnerabilities.