Highlights
DeepSeek’s AI Model Sparks Controversy
Recently, the AI startup DeepSeek has garnered attention with its R1 model, which is said to rival the capabilities of OpenAI’s o1 reasoning model while reportedly being trained for just $5.6 million. This is a fraction of the $100 million or more that OpenAI is estimated to have invested in its own models. However, Google DeepMind’s CEO, Demis Hassabis, remains sceptical.
DeepSeek’s Claims Under Scrutiny
During the Artificial Intelligence Action Summit held in Paris, Hassabis noted the impressive achievements of DeepSeek but mentioned that their assertions might be “exaggerated and a little bit misleading.”
Hassabis highlighted that the reported $5.6 million likely reflects only the cost of the final training run, excluding the full cost of development, which includes crucial elements such as data collection, infrastructure, and earlier training iterations. He also suggested that DeepSeek may have drawn upon existing Western AI models to optimise its own, a critique previously levelled by OpenAI.
Following DeepSeek’s debut, OpenAI expressed its concerns, stating, “We know companies based in the People’s Republic of China—and others—are continually attempting to emulate the models of leading US AI firms.”
Is DeepSeek a True Breakthrough?
Despite DeepSeek’s notable performance, Google does not regard it as a significant leap in AI efficiency. Hassabis contended that Google’s Gemini models offer a better cost-to-performance ratio than DeepSeek’s, yet have not received comparable promotional attention.
He stated, “While it is impressive, it does not represent a revolutionary outlier on the efficiency scale.”
DeepSeek’s announcements have certainly incited discussion within the AI community. As the race for increasingly powerful and cost-effective AI models heats up, the ultimate measure will be long-term performance and genuine innovation rather than ambitious claims.