Tag: artificial intelligence

  • Unlocking Mind Power: Meta’s Groundbreaking AI for Thought-Based Typing, but with a Twist



    Meta’s Brain-Typing AI: The Future of Communication

    In 2017, Meta, then known as Facebook, envisioned an innovative concept: a brain-reading hat that would enable individuals to type merely by thinking. Years later, the company has indeed developed a similar technology, yet it remains far from ready for everyday use.

    Understanding Meta’s Brain-Typing System

    Meta’s brain-typing system harnesses artificial intelligence and neuroscience to interpret brain activity, allowing it to predict which keys a person intends to press based solely on their thoughts. However, the system has its limitations: it necessitates a substantial, costly machine and can only function within a meticulously controlled laboratory environment.

    How Meta’s Brain-Typing AI Operates

    As detailed by a recent article from MIT Technology Review, the technology employs a sophisticated brain scanner known as magnetoencephalography (MEG). This machine detects minuscule magnetic signals produced by brain activity. Due to its size and sensitivity, the scanner must be housed in a specially designed room to avoid disruptions from Earth’s magnetic field.

    Meta’s researchers have developed an AI model named Brain2Qwerty, which is programmed to analyse these brain signals. The AI has been trained to discern patterns in the data corresponding to specific letters as participants typed on a keyboard. With time, the accuracy of the system has improved to an impressive 80% in predicting which letter a participant is thinking about.

    The research involved 35 participants at a Spanish research facility. Each individual endured about 20 hours within the scanner, typing sentences while the AI examined their brain activity.
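    At its core, the decoding step is a supervised classification problem: collect many labelled sensor recordings per letter, then train a model that maps a new recording to the most likely letter. The sketch below is purely illustrative and is not Meta’s Brain2Qwerty (a deep network trained on real MEG data); it uses synthetic signals and a simple nearest-centroid decoder, with all sizes and names invented, just to show the shape of the task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the decoding task: each "letter" evokes a characteristic
# pattern across sensor channels, buried in noise. Dimensions are arbitrary.
N_CHANNELS, N_SAMPLES, N_LETTERS = 32, 50, 26

# Hypothetical per-letter "evoked response" templates (synthetic, not MEG data).
templates = rng.normal(size=(N_LETTERS, N_CHANNELS, N_SAMPLES))

def simulate_trial(letter, noise=1.0):
    """One noisy sensor recording while 'typing' a given letter."""
    return templates[letter] + rng.normal(scale=noise, size=(N_CHANNELS, N_SAMPLES))

def fit_centroids(trials, labels):
    """Average the training trials for each letter into a class centroid."""
    X = np.stack(trials)
    y = np.array(labels)
    return np.stack([X[y == k].mean(axis=0) for k in range(N_LETTERS)])

def predict(centroids, trial):
    """Pick the letter whose centroid is closest to the trial."""
    dists = ((centroids - trial) ** 2).sum(axis=(1, 2))
    return int(dists.argmin())

# Train on a few noisy repetitions per letter, then decode unseen trials.
train_labels = [k for k in range(N_LETTERS) for _ in range(5)]
train_trials = [simulate_trial(k) for k in train_labels]
centroids = fit_centroids(train_trials, train_labels)

test_labels = [k for k in range(N_LETTERS) for _ in range(4)]
preds = [predict(centroids, simulate_trial(k)) for k in test_labels]
accuracy = np.mean([p == y for p, y in zip(preds, test_labels)])
print(f"decoding accuracy: {accuracy:.0%}")
```

    On clean synthetic data a decoder like this scores near-perfectly; the hard part of the real system is that genuine MEG signals are far noisier and vary between people, which is why hours of per-participant training data were needed.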

    Challenges Facing Meta’s Brain-Typing Technology

    Despite the advances, Meta’s brain-typing system is far from being a market-ready product. The challenges include:

    • Massive and costly equipment: The MEG scanner is incredibly heavy, weighing approximately half a ton and costing around $2 million, making everyday use unfeasible.
    • Need for complete head stillness: Any minor movement renders the brain signals unreadable.
    • Insufficient accuracy: Although the system is state of the art, it still misinterprets about 32% of letters on average.

    Led by Jean-Rémi King, Meta’s research team is not focused on product development but rather aims to deepen the understanding of human intelligence. Gaining insights into how the brain structures language may enhance AI systems, including chatbots and various language models.

    The Future of Brain-Computer Interfaces

    While Meta’s brain-typing AI is not equipped for real-world applications yet, advancements in brain-computer interfaces (BCIs) are swiftly progressing. Currently, some of the most effective systems employ electrodes implanted in the brain, enabling individuals with paralysis to operate computers or communicate through synthetic voices.

    Innovative companies like Neuralink, established by Elon Musk, are working on brain implants designed to restore movement and communication capabilities for disabled individuals. Although Meta’s exploration is focused on non-invasive techniques, it still faces significant challenges before brain-controlled typing can become a reality.

    At this moment, Meta’s brain-typing AI stands as a remarkable scientific milestone, though it is likely to remain within the confines of research laboratories for the time being.

  • “Embrace the Future: Sunil Wahi of Oracle Champions the AI Revolution for Businesses”




    Artificial Intelligence Transformation: Oracle’s Innovative Strategy


    Artificial Intelligence (AI) is driving significant changes across global industries, with Oracle at the forefront of this shift. The company uses its Oracle Cloud Infrastructure (OCI) to integrate AI into its business applications, aiming to foster innovation and efficiency for its customers. At the recent Cloud World event in Mumbai, Sunil Wahi, Vice President of Applications Solution Engineering at Oracle Asia Pacific, shared insights into Oracle’s AI strategy, highlighting trends such as ‘agentic AI’, partnerships between humans and machines, and the expansive potential of these technologies across sectors.

    Oracle’s Vision for AI Integration

    Wahi noted that the AI landscape is swiftly advancing past simple predictive analytics. He mentioned that in 2025, a significant emphasis will be placed on workflows driven by AI agents, which will be capable of making decisions and supporting intricate functions. Oracle is proactively working on the development and implementation of these AI agents within its Fusion Cloud applications, with the intention of automating business processes to enhance overall productivity across multiple industries.

    Generative AI and Automation in Business Processes

    Wahi elaborated that generative AI and AI agents are opening the way to extensive automation of business processes. He cited the “touchless finance processes” in Oracle’s Fusion Cloud as an example of this trend that is already delivering notable productivity gains.

    Agentic AI in Practice

    Exploring the concept of “agentic AI,” Wahi provided a practical example from the manufacturing sector. He explained how AI agents assist manual workers on production floors by supplying relevant data and even foreseeing potential machine failures. These agents serve a purpose beyond mere automation; they are designed to empower employees, lessen manual tasks, and generate insightful, data-driven outcomes. Oracle is broadening this agentic framework across various fields, including finance, human resources, customer experience, and analytics.

    Human-Centric AI Strategy

    Wahi emphasised that Oracle’s AI strategy is centred on enhancing human-AI collaboration. He stated that their efforts are aimed at reinforcing human decision-making processes, incorporating “checkpoints” and workflow-driven methodologies that empower businesses to tailor automation levels while preserving human oversight. Wahi reiterated that human intervention will always play a crucial role in their approach, ensuring that clients retain control over automation extent.
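    The “checkpoint” idea Wahi describes amounts to a policy layer between an agent’s proposed action and its execution. The sketch below is a minimal illustration of that pattern, not Oracle’s implementation; every name and threshold in it is invented for the example.

```python
# Hypothetical sketch of a human-in-the-loop checkpoint: an agent proposes an
# action with a confidence score, and a configurable policy decides whether it
# runs automatically or waits for human sign-off.
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    confidence: float

def run_with_checkpoint(proposal, auto_threshold, approve):
    """Execute automatically above the threshold, otherwise ask a human."""
    if proposal.confidence >= auto_threshold:
        return f"auto-executed: {proposal.action}"
    if approve(proposal):  # stand-in for a real human-approval step
        return f"approved and executed: {proposal.action}"
    return f"held for review: {proposal.action}"

# A business tunes auto_threshold to set its preferred level of automation;
# here the reviewer (a lambda standing in for a person) approves the request.
result = run_with_checkpoint(Proposal("reorder part #112", 0.65),
                             auto_threshold=0.9,
                             approve=lambda p: p.confidence > 0.5)
print(result)
```

    Raising `auto_threshold` toward 1.0 routes everything through a human; lowering it increases automation, which mirrors the “tailor automation levels while preserving human oversight” idea in the text.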

    AI Disruption Potential in Healthcare and Finance

    Wahi identified healthcare and finance as two sectors ripe for disruptive AI advancements. He outlined Oracle’s innovations in healthcare—such as AI-driven clinical assistance and inventory management tools—that are designed to elevate patient care and streamline operations. In the finance sector, he pointed out “predictive cash management” solutions that forecast liquidity and guide users through agentic workflows aimed at optimising cash flow. Moreover, he remarked on the significant opportunities AI presents for Small and Medium Businesses (SMBs) by addressing workforce shortages and improving user experiences.

    Partnership with NVIDIA

    During the interview, Oracle’s strategic alliance with NVIDIA was also discussed. Wahi explained that NVIDIA’s advanced GPUs are enhancing Oracle’s infrastructure and capabilities for large language models, leading to faster processing and pre-built AI applications in Fusion. He noted that Oracle intends to focus on offering ready-made, embedded AI solutions within their Fusion applications, rather than obliging clients to develop their own AI models.

    Commitment to AI Development in India

    With India’s increasing focus on AI, bolstered by a recent budget announcement allocating ₹500 crore for an AI centre of excellence, Wahi affirmed Oracle’s dedication to supporting AI initiatives in India. He highlighted partnerships with local organisations such as the National Skills Development Corporation (NSDC) and various banks, underscoring a collaborative effort to stimulate AI innovation and adoption in the region.

    Encouragement for Businesses Embracing AI

    Wahi concluded by advising organisations that may be hesitant to adopt AI. He encouraged executives to take the plunge into AI, suggesting that the process may not be as complicated as it seems. Starting with pilot projects can help build confidence while showcasing the noticeable advantages that come with AI integration.

    Wahi’s insights present a vision of an AI-driven future, led by intelligent agents while firmly rooted in human-centric design. As Oracle continues to innovate and collaborate within India and globally, the company is paving the way for a future where AI empowers businesses across various industries to achieve unparalleled efficiency and growth.


  • South Korean Government Halts China’s DeepSeek Over Security Risks, Urges Vigilance



    South Korea’s Caution Against DeepSeek: A Focus on Security

    South Korea’s industry ministry has initiated a temporary restriction on employee access to the Chinese artificial intelligence startup, DeepSeek, due to prevailing security concerns. An official from the ministry announced this on Wednesday, highlighting the government’s call for careful consideration regarding generative AI services.

    Government Advisory on AI Services

    The government issued a directive on Tuesday, urging ministries and agencies to exercise caution when utilising AI services such as DeepSeek and ChatGPT in workplace environments, according to various officials.

    Korea Hydro & Nuclear Power, a state-run entity, stated it had restricted the usage of AI services, including DeepSeek, earlier this month.

    Defence and Foreign Ministries Take Action

    As per reports on Thursday, the defence ministry has also blocked access to DeepSeek on computers designated for military purposes. Furthermore, the foreign ministry has imposed limits on DeepSeek usage on computers linked to external networks. Although the ministry did not detail its security measures, it acknowledged that restrictions had been put in place.

    International Actions Concerning DeepSeek

    DeepSeek did not provide an immediate response to an email request for comments regarding these developments. It remains unclear if any specific actions have been taken against ChatGPT by the ministries involved.

    This ban places South Korea among the latest countries to issue warnings or restrictions concerning DeepSeek. Australia and Taiwan have also banned the AI service on all government devices this week due to perceived security threats posed by the Chinese startup.

    Earlier in January, Italy’s data protection authority mandated that DeepSeek block its chatbot within the nation after the company failed to address the regulator’s privacy policy concerns adequately.

    Various governments across Europe, the United States, and India are currently assessing the implications of utilising DeepSeek.

    South Korean Tech Firms Exercise Caution

    The South Korean information privacy watchdog intends to ask DeepSeek how it manages users’ personal information.

    DeepSeek’s recent release of advanced AI models last month has created significant waves in the technology sector. The company claims that its models either match or exceed the capabilities of those developed in the United States and are offered at a significantly lower cost.

    In light of security concerns, Kakao Corp, a leading chat application provider in South Korea, has advised its employees to avoid using DeepSeek. This guidance was given on Wednesday, the day after the company announced a collaboration with prominent generative AI firm OpenAI.

    South Korean technology firms are increasingly adopting a cautious approach towards the use of generative AI. SK Hynix, a producer of AI chips, has limited access to generative AI services and allowed use only when essential, as stated by a spokesperson.

    Naver, a significant web portal in South Korea, has also urged its employees to refrain from utilising generative AI services that store data externally.

  • “Google Reassesses Diversity Hiring in Wake of Trump’s Bold Stance”



    Google Reviews Diversity, Equity, and Inclusion Hiring Practices

    Google, the US-based search giant, announced on Thursday that it is dropping its goal of hiring more employees from under-represented groups as it reevaluates its diversity, equity, and inclusion (DEI) strategies.

    Changes in DEI Initiatives

    Fiona Cicconi, the chief people officer of Alphabet, Google’s parent company, communicated in an internal email that in 2020 the company had set ambitious hiring targets and aimed to expand its offices outside California and New York to better represent diverse communities. Cicconi further noted, as reported by Reuters, that moving forward the company would not be setting aspirational goals.

    Policy Review Related to Trump’s Administration

    Additionally, Google indicated that it is assessing policy changes implemented by the Trump administration that aimed to limit DEI initiatives in the federal sphere and among national contractors.

    Internal Groups Remain Active

    Despite this shift, the tech giant will continue its support for internal employee groups such as “Trans at Google,” “Black Googler Network,” and the “Disability Alliance.” These groups are designed to keep employees informed about critical decisions affecting products and policies.

    Impact of Presidential Orders

    This announcement coincides with the enforcement of a directive from US President Donald Trump that prohibited DEI initiatives. After taking office, Trump imposed restrictions on DEI programmes throughout the federal government and initiated a review of federal funding to ensure it was not allocated to similar initiatives.

    Employee Diversity Goals and Progress

    In the wake of the tragic police killings of George Floyd and other Black Americans in 2020, Google’s CEO Sundar Pichai stated that the company aimed to increase its hiring of leaders from under-represented groups by 30 percent by the year 2025.

    In 2021, the corporation started assessing executive performance based on team diversity and inclusion, following concerns raised by a leader in artificial intelligence research who was reportedly dismissed after voicing criticism about Google’s approaches.

    Achievements in Diversity

    As of 2024, Google’s chief diversity officer, Melonie Parker, disclosed to the BBC that the organisation had achieved approximately 60 percent of its five-year diversity goal.

    Other Tech Companies Follow Suit

    Google is not isolated in its decision to halt DEI hiring programmes; in January, Mark Zuckerberg’s Meta Platforms announced a similar cessation of its diversity, equity, and inclusion initiatives.

    Furthermore, Amazon has communicated its intent to scale back “outdated programmes and materials” pertaining to representation and inclusion in a memo addressed to its workforce.

    Critics argue that DEI initiatives can sometimes result in an unfair advantage for less qualified individuals over those who are more deserving, purely based on their association with disadvantaged groups.

  • Google Lifts Restrictions on AI Utilization for Military and Surveillance Purposes



    Google AI Principles Update: Shift in Focus on AI Applications

    Google has made a major update to its AI Principles, eliminating a portion that specifically outlined areas in which the company would not develop or implement artificial intelligence. The revised document, released on Tuesday, no longer includes prior commitments to avoid using AI for weaponry, surveillance, or any applications that could infringe on human rights.

    This change indicates that Google may be reevaluating its position on areas it previously restricted as competition within the AI sector escalates.

    The Background of Google’s AI Principles

    The AI Principles were first established in 2018, detailing Google’s philosophy on AI development with a strong emphasis on ethics, fairness, and accountability. Throughout the years, the document has been updated, but the four fundamental restrictions had remained intact—until this latest revision.

    A review of an archived version of the document on Wayback Machine shows that Google has eliminated the section titled “Applications we will not pursue.” This part had clearly stated that Google would not:

    • Develop AI technologies that could cause or are likely to cause overall harm
    • Engage in the creation of weapons or technologies that contribute directly to harm
    • Construct surveillance technologies that go against international standards
    • Produce AI systems that conflict with human rights and international law

    The removal of these commitments raises questions about whether Google is now willing to explore AI applications in defence, security, or surveillance fields.

    Insights from Google DeepMind Leaders

    Following the announcement, Google DeepMind’s CEO Demis Hassabis and Senior VP for Technology and Society James Manyika published a blog post outlining the company’s updated AI strategy.

    The blog emphasized the belief that democracies should spearhead AI development, guided by core principles of freedom, equality, and respect for human rights. It further stressed the importance of companies, governments, and organizations that uphold these values working together to develop AI responsibly, while also promoting national security.

    While Google did not explicitly mention plans to venture into military or surveillance AI, the lifting of restrictions indicates a possible policy shift amid intensifying global competition within the artificial intelligence domain.

    Global Context and Implications

    Google’s revision arrives at a crucial juncture when AI technologies are becoming increasingly integrated into national security and defence strategies on a global scale. Countries such as the US, China, and European nations are all heavily investing in AI-driven security and military initiatives. Google’s recent change may suggest its intentions to maintain a competitive edge in this rapid evolution.

    Additionally, the update corresponds with recent initiatives from the US government aimed at promoting public-private partnerships in AI advancement, particularly in realms like cybersecurity, autonomous systems, and intelligence analysis.

    However, critics are expressing concern that Google’s choice to abandon these ethical commitments could foster a lack of transparency, raising the potential for AI to be employed in ways that may jeopardise privacy and human rights.

    With this alteration, Google has opened the possibility for wider AI applications, yet it remains uncertain whether the company will actively seek defence contracts or engage in national security projects.

    Furthermore, this development comes as Google faces mounting challenges from competitors such as OpenAI, Microsoft, DeepSeek, and Anthropic, all of which are advancing generative AI, automation, and AI-based analytics.

  • Revolutionizing Reality: ByteDance Unveils the Cutting-Edge OmniHuman-1 Model for Deepfake AI Videos




    Deepfake AI: ByteDance’s OmniHuman-1 Revolutionizes Video Creation



    Deepfake AI technology has reached new heights with the introduction of OmniHuman-1 by ByteDance, the parent company of TikTok. This innovative deepfake AI can produce highly realistic videos using just a single image and audio input. TechCrunch reports that OmniHuman-1 is capable of seamless animations, adjusting body shapes, and even altering existing videos with impressive accuracy.

    Capabilities and Limitations of OmniHuman-1

    ByteDance’s OmniHuman-1 has undergone training with an extensive dataset of 19,000 hours of video. Although it produces remarkable results, the model is not without its flaws. It faces challenges when dealing with low-quality images and specific poses. Below are examples of videos created using the OmniHuman-1 model:

    One notable creation includes a TED Talk that was never actually delivered.

    Additionally, a deepfake incarnation of Albert Einstein’s lecture was generated by the model.

    Ethical Implications of Deepfake Technology

    The advancements in deepfake technology bring forth both creative opportunities and significant ethical concerns. As demonstrated by ByteDance’s OmniHuman-1, there is potential for innovative uses, yet it is imperative to recognize the accompanying risks.

    In South Korea, for example, the proliferation of deepfake pornography has led to the implementation of new laws that criminalize the creation, possession, and distribution of such content. Nevertheless, enforcing these regulations poses challenges, and advocates highlight the need to address underlying societal issues like misogyny to effectively combat the problem.

    Legal Considerations in the UK

    In the United Kingdom, Channel 4 has faced backlash for allegedly breaching the Sexual Offences Act 2003 by airing an AI-generated video depicting actress Scarlett Johansson without her consent. Legal experts warn that sharing nonconsensual deepfake content could potentially violate the law, underscoring the urgent need for clearer guidelines concerning AI-generated media.

    Global Response and Regulation of Deepfake Technology

    In reaction to the rising challenges posed by deepfakes, various jurisdictions are implementing regulations. The European Union has taken a significant step by approving the Artificial Intelligence Act in 2024, aimed at reforming legal frameworks related to AI with specific measures addressing deepfakes. However, the detection and prosecution of deepfake-related offenses remain complicated, which necessitates the continuous adaptation of legal systems to appropriately balance technological progress with justice and integrity.

    As the landscape of deepfake technology evolves, it is essential for legal systems, detection initiatives, and public awareness programs to progress alongside these developments in order to minimize associated risks effectively.


  • “Sam Altman Advocates for India to Pave the Way in Developing Compact AI Solutions”



    OpenAI CEO Sam Altman Highlights India’s Role in Artificial Intelligence Development

    On his second trip to India in two years, OpenAI CEO Sam Altman highlighted the country’s potential to lead in artificial intelligence, particularly in creating small and reasoning models. With rising competition from Google’s Gemini, DeepSeek, and other AI players, along with increasing global scrutiny over AI’s influence, Altman’s visit to India underscores the nation’s growing importance in the AI landscape.

    India’s Leadership in Building Reasoning Models

    Altman stated, “India should be a leader in building small models, especially reasoning models,” pointing out that while the costs associated with AI training are forecasted to escalate rapidly, the resulting intelligence and revenue are expected to increase substantially. He mentioned that current AI models are already nearing the capability to effectively tackle critical issues such as healthcare and education—areas where India stands to benefit significantly from AI-driven advancements.

    OpenAI’s Growing Presence in India

    India has become the second-largest market for OpenAI, reflecting the swift adoption of AI-powered technologies across the country. Altman encouraged India to “do everything within the AI stack,” suggesting that the nation should not only utilise AI but also actively engage in constructing and enhancing various components of the AI value chain.

    Recognizing Innovation in Indian Startups

    Altman expressed admiration for the achievements of Indian startups, researchers, and developers in AI innovation, noting, “It’s amazing to see what India has done so far.”

    Training Costs and Future Infrastructure Needs

    Addressing the rising expenses involved in training AI models, Altman remarked that although costs remain high, the cost per unit of AI intelligence is falling by roughly a factor of 10 each year. Nonetheless, he countered the idea that this decline would lessen the demand for AI hardware, suggesting that the need for AI infrastructure will persist and likely intensify as AI usage expands.
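    The scale of that claim is easy to see with a little compound arithmetic. The 10x-per-year figure is Altman’s remark; the dollar amounts below are arbitrary and purely illustrative.

```python
# Illustrative only: a constant 10x annual decline compounds as
# cost_today / 10**years.
def cost_after(years, today=1.0, annual_drop=10.0):
    """Projected unit cost after `years` of a constant annual decline."""
    return today / annual_drop ** years

# A task costing $1.00 today would cost a tenth of a cent within three years.
print([cost_after(n) for n in range(4)])
```

    Run over hardware lifetimes, a decline this steep means demand can grow faster than costs fall, which is consistent with Altman’s point that cheaper intelligence increases, rather than reduces, the need for infrastructure.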

    Clarifying Focus on Foundational Models

    He further clarified that his previous comments regarding foundational models had been misconstrued, possibly referring to earlier discussions about whether India should concentrate on developing its own large-scale AI models or utilise already available ones.

    Altman’s Asia-Pacific Tour Conclusion in Delhi

    Delhi served as the final destination of Altman’s Asia-Pacific tour, where he met with policymakers, AI researchers, and business executives. His visit coincides with India’s escalating AI ambitions, marked by initiatives such as Bhashini (for AI-driven language translation), government-supported AI computing infrastructure, and an increasing emphasis on AI regulation.

    Focus on AI Distillation and Model Efficiency

    While OpenAI continues to work on AI distillation to create smaller and more efficient models, Altman acknowledged that advancements in this area have not yet been “incredible.” His statements indicate that cost-effective, smaller models could represent a crucial domain in which India can excel, aligning perfectly with the nation’s aspirations in AI.

  • Ola CEO Bhavish Aggarwal’s Vision: Krutrim to Channel ₹10,000 Crore into AI Development by Next Year



    Krutrim: Bhavish Aggarwal’s AI Startup Revolutionising Indian Language Technology

    Krutrim, founded by Ola’s Bhavish Aggarwal, is making waves in the artificial intelligence sector with an impressive investment of ₹2,000 crore. Plans are underway to enhance this investment to ₹10,000 crore by the end of the next year. The startup has also set up an advanced AI lab and unveiled a range of innovative language models to improve AI capabilities for Indian languages.

    Innovative AI Language Models

    Among the new models launched by Krutrim are Krutrim-2 and Krutrim-1, large language models designed for broad AI applications. Additionally, there is Chitrarth-1, a vision-language model; Dhwani-1, which is dedicated to speech processing; and Krutrim Translate, which focuses on text-to-text translation. Furthermore, Krutrim has introduced BharatBench, a platform aimed at testing and benchmarking various AI models.

    Focus on Indian-Specific AI Challenges

    In a post on X (formerly Twitter), Aggarwal highlighted that the company’s mission is to create AI solutions specifically tailored for India, tackling challenges like diverse languages, limited data, and cultural subtleties. He announced segments of Krutrim’s work will be open-sourced, enabling developers and researchers to utilise its speech-to-text translation technologies.

    Supporting India’s Linguistic Diversity

    The models developed by Krutrim are trained using multilingual datasets and support ten Indian languages, including Hindi, Bengali, Telugu, Tamil, Marathi, Gujarati, Kannada, Malayalam, Odia, and Assamese, in addition to English.

    Advancing AI Initiatives in India

    This initiative coincides with India’s rapid progress in AI, including plans to create a large language model akin to China’s DeepSeek. As part of this ambitious undertaking, Krutrim is launching India’s inaugural GB200 AI supercomputer in collaboration with Nvidia, expected to go operational by March. Additionally, the company has rolled out Krutrim Cloud, a cloud-based service offering developers and businesses access to high-performance computing resources.

    Milestones and Future Plans

    Since its inception in 2023, Krutrim has successfully raised $50 million, achieving a remarkable valuation of $1 billion, making it the first AI startup in India to reach unicorn status by 2024. The funding round was spearheaded by Matrix Partners India, who have also previously invested in Aggarwal’s other ventures, Ola Cabs and Ola Electric.

  • India Set to Unveil Its Indigenous AI Foundation Model in Just 10 Months, Challenging ChatGPT and DeepSeek


    India is preparing to introduce its first domestically developed AI foundational model within the next 10 months, as announced by IT Minister Ashwini Vaishnaw. This declaration took place at a press briefing, marking a significant milestone in India’s aspirations for artificial intelligence. The initiative aims to enhance India’s role in the global AI arena, which is predominantly led by the US and China.

    Vaishnaw highlighted the government’s commitment to advancing AI through the India AI Mission, which received approval last year with a budget allocation of ₹10,000 crore. The mission’s objective is to provide researchers, startups, and academic institutions with access to AI technology and computing power, enabling them to play an active role in the industry.

    A crucial aspect of this initiative is the enhancement of computing infrastructure. Vaishnaw disclosed that India has already established a network of 18,000 high-end GPUs, with 10,000 designated for AI development. These resources will be made available to researchers, universities, and startups, aiming to reduce the financial challenges associated with AI innovation.

    Vaishnaw also provided a breakdown of the exact GPUs deployed in this project:

    • 12,896 GPUs: Nvidia H100
    • 1,480 GPUs: Nvidia H200
    • 742 AI Accelerators: AMD MI325X and MI300X

    Vaishnaw stated that the primary requirement for constructing AI models is computing power. He explained that those with substantial financial resources typically acquire it, hence the need to create a framework where access is guaranteed for all.

    By offering shared computing resources, the government aims to democratise AI development, following the principles established by the Digital India programme. This approach will allow Indian institutions to create foundational AI models, which serve as the basis for applications such as generative AI, machine learning tools, and automation technologies.

    With a completion deadline set at an “outer limit” of 10 months, India is making significant strides in a sector essential for economic advancement and technological independence.

  • Bhavish Aggarwal Unveils Krutrim AI Labs with a Whopping ₹2,000 Crore Investment


    Ola’s Founder Unveils Krutrim AI Labs for Indian-Focused AI Development

    Bhavish Aggarwal, the founder and CEO of Ola, has announced the establishment of Krutrim AI Labs, which aims to create artificial intelligence tailored specifically for India. An initial investment of ₹2,000 crore has been committed, with plans to increase this to ₹10,000 crore by the following year. This funding will facilitate the setup of the AI lab and the development of sophisticated AI models that cater to Indian users.

    Aggarwal stated that the Krutrim AI Lab will focus on tackling issues related to Indic languages, data scarcity, and cultural relevance. He noted on X (previously known as Twitter) that the team has been engaged in AI development for a year and is now making their research accessible to the open-source community, along with the release of various technical documents. Their primary objective is to enhance AI capabilities for Indian languages, address data scarcity, and respect cultural nuances.

    Open-Source AI Models and Benchmarks

    As part of its open-source efforts, Krutrim is launching several AI models, including:

    • Krutrim 2: An enhanced Large Language Model (LLM).
    • Chitrarth 1: A Vision Language Model.
    • Dhwani 1: A Speech Language Model.
    • Vyakhyarth 1: An Indic embedding model.
    • Krutrim Translate 1: A text-to-text translation model.

    Furthermore, Krutrim has also introduced BharatBench, a benchmark designed specifically for assessing the performance of Indic AI technologies.

    Building the First GB200 AI Supercomputer

    In collaboration with NVIDIA, Krutrim is in the process of constructing India’s first GB200 AI supercomputer, which is anticipated to be operational by March. This supercomputer is set to be the largest AI supercomputer in India by the year’s end.

    Aggarwal underscored that despite the rapid progress Krutrim has made within just a year, the journey for AI development in India is far from complete. By making its models open-source, the company aims to foster collaboration and contribute to the establishment of a robust AI ecosystem in the country.

    Funding and Investment Growth

    In December 2024, Aggarwal raised debt for his AI venture, Krutrim SI Designs, by pledging Ola Electric shares and issuing debentures. So far, Krutrim has raised around $75 million and attained unicorn status in January of the previous year. Notable investors include Z47 (formerly Matrix) and the Sarin Family, among others.