Highlights
GPT-4.1 Launch: Enhanced AI for All Users
GPT-4.1 is now accessible to a broader user base following OpenAI’s announcement that it is rolling out both the GPT-4.1 and GPT-4.1 Mini AI models in ChatGPT. Initially released through OpenAI’s API in April, these models are now available to subscribers of the ChatGPT Plus, Pro, and Team plans, while GPT-4.1 Mini can be accessed by both free and paid users. The company said on X (formerly Twitter) that the decision was made in response to considerable user interest.
According to the announcement, GPT-4.1 is directly available in ChatGPT as of today. The model stands out for its coding and instruction-following capabilities, and its speed makes it a strong substitute for OpenAI o3 and o4-mini for routine coding needs.
Removal of GPT-4o Mini
In this update, OpenAI has concurrently removed the GPT-4o Mini model from ChatGPT for all users. Earlier, on April 30, the GPT-4 model was also taken down to minimise confusion surrounding the available models.
Enhanced Performance of GPT-4.1
GPT-4.1 aims to deliver enhanced performance, particularly in coding and instruction-following tasks, compared to GPT-4o, while also running faster. The model is especially useful for software engineers writing or debugging code. However, OpenAI has clarified that GPT-4.1 is not classified as a frontier model: it does not introduce new modalities or significantly advanced capabilities, and therefore does not require the same stringent safety reporting as its counterparts.
Safety Standards and Improvements
Johannes Heidecke, Head of Safety Systems at OpenAI, mentioned, “GPT-4.1 builds on the safety work and mitigations developed for GPT-4o. Across our standard safety evaluations, GPT-4.1 performs at parity with GPT-4o, showing that improvements can be delivered without introducing new safety risks.” He also added that the model “doesn’t surpass o3 in intelligence.”
Commitment to Transparency
OpenAI previously faced substantial scrutiny for launching GPT-4.1 without an accompanying safety report, prompting concerns from the AI research community about diminishing transparency standards. In light of this, the company has pledged to disclose internal safety evaluation results more regularly through a new initiative, the Safety Evaluations Hub, which launched on Wednesday.