AI technology is now essential for various everyday tasks such as research, coding, and note-taking. However, for Harsh Varshney, a 31-year-old Google employee of Indian origin based in New York, implementing strict privacy measures is crucial.
AI tools assist him with in-depth research, note-taking, coding, and online searches. He spent two years on Google’s privacy team and now works on the Chrome AI security team, safeguarding the browser from cyber threats and AI-enabled phishing attacks.
Varshney has identified four key habits to enhance personal data protection while engaging with AI.
Treat AI Like a Public Postcard
He views interactions with AI as akin to sending a public postcard. A false sense of familiarity can lead individuals to disclose sensitive information they would typically keep private. Therefore, he refrains from sharing credit card numbers, Social Security numbers, home addresses, or medical histories with public chatbots, since AI may retain this data and potentially expose it in the future.
Consider the “Room” You’re In
Varshney evaluates the environment he is in when using AI. Enterprise AI platforms, which generally do not train on user interactions, are more suitable for professional discussions. He likens the situation to chatting in a bustling coffee shop, where privacy is compromised, versus a confidential meeting within an office setting. Consequently, he avoids public chatbots for Google projects and prefers enterprise solutions, even for simple email edits.
Regularly Delete Chat History
He is diligent about clearing chat history on a routine basis, as some enterprise tools retain historical data. He was astonished when an enterprise Gemini chatbot recalled his exact address; he had not realised he shared it in an earlier query while refining an email. To mitigate this, he opts for temporary chat or incognito modes so the data is not stored.
Use Trusted Tools and Review Privacy Settings
Varshney limits his usage to well-regarded tools such as Google’s AI, OpenAI’s ChatGPT, and Anthropic’s Claude, and he reviews their privacy configurations. Users should read the privacy policies of any platform they use. In privacy settings, look for the option to opt out of ‘improving the model for everyone’; disabling it helps prevent personal conversations from being used for training.
AI technology has tremendous potential. Nevertheless, Varshney emphasises the need for vigilance to protect personal data and identities during its use.
