AI Tools and Privacy: Expert Insights
AI tools play a crucial role in daily activities such as research, coding, and note-taking. But for Harsh Varshney, a 31-year-old employee of Indian heritage at Google in New York, they also demand stringent privacy practices.
Varshney, who spent two years on Google’s privacy team and now works on the Chrome AI security team defending against hacking and AI-driven phishing, says AI tools provide substantial assistance when used carefully.
Four Key Privacy Practices for AI Use
Treating AI Like a Public Postcard
Varshney emphasizes treating AI interactions with caution. A misplaced sense of familiarity with AI, he notes, can lead people to divulge personal information they would normally keep private. He refrains from sharing sensitive details such as credit card numbers, Social Security numbers, home addresses, or medical information with public chatbots, to prevent potential data leakage.
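The habit of keeping sensitive details out of chatbot prompts can also be partly automated. As a minimal sketch (not anything Varshney or Google describes), the hypothetical `redact` helper below scrubs a few obvious identifier patterns from a prompt before it would ever reach a public chatbot; the regexes are illustrative, not exhaustive.

```python
import re

# Illustrative PII patterns only -- real redaction tooling needs far
# broader coverage (names, addresses, medical terms, etc.).
PII_PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("My SSN is 123-45-6789 and my card is 4111 1111 1111 1111."))
```

Running a prompt through a filter like this before pasting it into a public tool follows the same "public postcard" logic: anything the filter misses is still assumed to be readable by others.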
Choosing the Appropriate “Room”
Varshney advises users to consider the environment, or “room,” in which they are engaging with AI. He explains that enterprise AI tools, which typically do not learn from user conversations, are more suitable for workplace discussions. Varshney likens this to having a chat in a bustling coffee shop versus a confidential meeting in a private office. He prefers to use enterprise solutions for anything related to Google projects, including email revisions, instead of relying on public chatbots.
Deleting Chat Histories Regularly
Regularly deleting chat histories is another precaution Varshney takes. He has run into situations where enterprise tools recalled personal data he had forgotten having shared; an enterprise Gemini chatbot, for instance, once surfaced his exact home address. He also prefers “temporary chat” or incognito modes so that no data is retained in the first place.
Using Trusted Tools and Reviewing Privacy Settings
Varshney sticks to reputable tools such as Google’s Gemini, OpenAI’s ChatGPT, and Anthropic’s Claude, and reviews their privacy settings regularly. He advises checking each tool’s privacy policy and looking specifically for options about improving the model for everyone; disabling that setting keeps conversations from being used for further training.
Varshney concludes that while AI technology holds significant potential, it is essential to exercise caution to safeguard personal data and identities during its use.