Google has announced the launch of Gemini 3 Flash, a new model in its Gemini 3 family that focuses on speed, efficiency, and lower cost. With this release, Google is expanding access to its next-generation AI across consumer apps, developer tools, and enterprise platforms.

Gemini 3 Flash follows the launch of Gemini 3 Pro and Gemini 3 Deep Think, which were introduced last month. According to Google, the Gemini 3 platform is already seeing heavy adoption, with more than one trillion tokens processed per day through its API. Developers and users are applying these models to tasks such as coding, simulations, game development, multimodal understanding, and complex reasoning.

Gemini 3 Flash is built on the same intelligence foundation as Gemini 3 Pro. The key difference is its focus on low latency and efficiency. Google says the model delivers Pro-level reasoning while responding much faster and using fewer resources. This makes it suitable for everyday tasks as well as more advanced agent-based workflows.

Google claims that Gemini 3 Flash performs strongly on several demanding benchmarks, reportedly rivaling larger models on complex reasoning and multimodal tests while outperforming the previous-generation Gemini 2.5 Pro in many areas. The company also highlights that Gemini 3 Flash uses about 30 percent fewer tokens on average for common tasks, which directly lowers costs.

Based on internal and third-party benchmarking, Google says Gemini 3 Flash is around three times faster than Gemini 2.5 Pro while costing significantly less. Pricing is set at $0.50 per million input tokens and $3 per million output tokens, positioning it as a more affordable option for large-scale use.
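As a quick illustration of how the quoted rates translate into spend, here is a minimal Python sketch. The per-token prices are taken from the announcement; the workload sizes in the example are hypothetical.

```python
# Quoted Gemini 3 Flash rates: $0.50 per million input tokens,
# $3.00 per million output tokens.
INPUT_RATE_PER_M = 0.50   # USD per 1M input tokens
OUTPUT_RATE_PER_M = 3.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a workload at the quoted rates."""
    return ((input_tokens / 1_000_000) * INPUT_RATE_PER_M
            + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M)

# Hypothetical workload: 10M input and 2M output tokens per day.
daily = estimate_cost(10_000_000, 2_000_000)
print(f"${daily:.2f} per day")  # → $11.00 per day
```

At these rates, output tokens dominate the bill for generation-heavy workloads, which is where the claimed 30 percent reduction in average token usage would matter most.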

For developers, Gemini 3 Flash is aimed at fast, iterative workflows. Google points to strong performance in coding-related benchmarks, including agent-based coding tasks. The model is designed to handle real-time applications such as interactive tools, video analysis, data extraction, and visual question answering, where both speed and reasoning quality matter.

Several companies, including JetBrains, Bridgewater Associates, and Figma, are already using Gemini 3 Flash in production, according to Google. These early users highlight its balance of inference speed, efficiency, and reasoning compared to larger and more expensive models. The model is available to enterprises through Vertex AI and Gemini Enterprise.

For general users, Gemini 3 Flash is rolling out as the default model in the Gemini app, replacing Gemini 2.5 Flash. This means users worldwide will get access to Gemini 3-level capabilities at no cost. The company also plans to make Gemini 3 Flash the default model for AI Mode in Google Search.