Highlights
Google’s Antigravity Restrictions Impacting Gemini AI Users
Google has restricted certain users' access to its Antigravity coding assistant and Gemini AI Ultra subscriptions. The affected users were reportedly misusing Gemini AI models through accounts associated with the open-source coding framework OpenClaw, violating Google's policies by engaging in harmful, abusive, or unauthorised activities.
Concerns Among Gemini Users
The restrictions have sparked debate among Gemini users over whether companies should have the authority to decide who can access their tools. With AI tools now essential for daily tasks such as coding, research, and business operations, such decisions can significantly affect user productivity.
OpenClaw’s Peter Steinberger Reacts
Peter Steinberger, the creator of OpenClaw, has openly condemned Google's actions as overly severe. In a post on X, he advised caution to those using Antigravity and hinted at withdrawing his support. Steinberger expressed frustration that while other companies communicate kindly about issues, Google opts for bans.
Response from Varun Mohan
In contrast, Varun Mohan, former CEO of Windsurf and an engineer at Google DeepMind, wrote on X that a significant uptick in malicious usage of the Antigravity framework had severely degraded service quality for legitimate users. Mohan clarified that the restrictions on OpenClaw users stemmed from misuse of the Antigravity backend.
Mohan explained that the company had to act swiftly to restrict access for those using the product improperly. He acknowledged that some users may not have realised their actions violated the terms of service and said a resolution would be found for them. He nonetheless emphasised the importance of fairness to genuine users, given the limited capacity available.
Anthropic’s Earlier Measures
Notably, Anthropic had earlier adjusted its consumer terms to prohibit the use of its OAuth tokens in external tools, including OpenClaw. OpenClaw has faced scrutiny for some time over the security risks associated with its open-source AI agent.
OpenAI’s Stand on OpenClaw
Despite the security concerns surrounding the OpenClaw framework, OpenAI has hired its creator, Peter Steinberger. OpenAI's Sam Altman remarked that Steinberger will play a crucial role in developing the next generation of personal agents, noting that his technical skills are regarded as instrumental to advancing future AI agent systems.
