TMTPOST -- Google announced the release of its best model yet to everyone, in the face of the rise of Chinese artificial intelligence (AI) upstart DeepSeek.
Credit: Google
Google announced on Wednesday that its latest AI model suite, Gemini 2.0, is available to everyone starting that day. The launch includes three models: Gemini 2.0 Flash, for general use, which offers access to a 1 million token context window and up to 1,500 pages of file uploads; Gemini 2.0 Flash-Lite, a new variant that is Google's most cost-efficient model yet; and an experimental version of Gemini 2.0 Pro, its best model yet for coding performance and complex prompts. All three models are accessible through the Gemini API via Google AI Studio and on the Vertex AI platform. With the largest context window in the Gemini family, Gemini 2.0 Pro Experimental can process approximately 1.5 million words at once, and is also available to Gemini Advanced subscribers in the Gemini app.
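For developers, a minimal sketch of calling one of these models through the Gemini API with Google's Python SDK might look like the following; the model identifier "gemini-2.0-flash", the API key placeholder, and the prompt are illustrative assumptions rather than details from the announcement.

```python
# Minimal sketch: calling Gemini 2.0 Flash through the Gemini API.
# Model name and prompt are illustrative assumptions.
import google.generativeai as genai

# API key obtained from Google AI Studio
genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel("gemini-2.0-flash")
response = model.generate_content("Summarize the key points of this document in five bullets.")
print(response.text)
```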
Moreover, Google said Gemini app users can try Gemini 2.0 Flash Thinking Experimental for free on desktop and mobile starting Wednesday. The latest 2.0 experimental model is currently ranked as the world’s best model on LMArena, an open platform for crowdsourced AI benchmarking. Built on the speed and performance of 2.0 Flash, the model is trained to break down prompts into a series of steps to strengthen its reasoning capabilities and deliver better responses, according to Google. From the model’s thought process, users can see why it responded a certain way, what its assumptions were, and trace its line of reasoning.
All of these models will feature multimodal input with text output on release, with more modalities ready for general availability in the coming months, Google said.
The latest release marks Google’s further push into the agentic era, which it kicked off with the launch of an experimental version of Gemini 2.0 Flash in December. In a blog post that month, Google said Gemini 2.0 enables users to build new AI agents that bring us closer to the company’s vision of a universal assistant.
The release is also Google’s bid to rival newcomers like DeepSeek. The Gemini 2.0 Flash updated last week costs developers 10 cents per million tokens for text, image, and video inputs, while the new Gemini 2.0 Flash-Lite matches the performance of its predecessor, Gemini 1.5 Flash, at the same price and speed, charging just 0.75 cents per million tokens.
DeepSeek’s popular chatbot application, powered by its AI models, jumped to the No. 1 spot in app stores on January 26, dethroning OpenAI’s ChatGPT as the most downloaded free app in the U.S. on Apple’s App Store, and has maintained its lead since then.
What shocked Silicon Valley is that it took just $5.58 million for DeepSeek to train its V3 large language model (LLM). The startup claimed it used 2,048 Nvidia H800 chips, a downgraded version of Nvidia’s H100 chips designed to comply with U.S. export restrictions. In January, DeepSeek released the open-source DeepSeek-R1, a reasoning model that it claims delivers performance comparable to leading offerings like OpenAI’s o1 at a fraction of the cost. The model is up to 50 times cheaper to run than many U.S. AI models.