Google has launched Gemma 3, a family of open-source artificial intelligence (AI) models designed for high performance and accessibility. The announcement comes as the company steps up its efforts to compete with rivals including ChatGPT-maker OpenAI, Facebook and DeepSeek.
According to the company, the Gemma 3 models build on the success of the original Gemma family, which recently celebrated its first anniversary with over 100 million downloads. Gemma 3 aims to empower developers to create powerful AI applications that can run directly on devices, from smartphones to workstations.
“Gemma 3 delivers state-of-the-art performance for its size, outperforming Llama-405B, DeepSeek-V3 and o3-mini in preliminary human preference evaluations on LMArena’s leaderboard. This helps you to create engaging user experiences that can fit on a single GPU or TPU host,” the company said.
Gemma 3 is powered by the same tech as Gemini 2.0
Gemma 3, derived from the same research and technology powering Google's Gemini 2.0 models, is available in various sizes (1B, 4B, 12B, and 27B) to cater to diverse hardware and performance requirements.
As per Google, the models outperform competitors in preliminary human preference evaluations, and offer advanced text and visual reasoning capabilities, enabling developers to build applications that analyse images, text and short videos.
Key features of Gemma 3 include out-of-the-box support for over 35 languages and pretrained support for over 140 languages; a 128k-token context window for processing and understanding extensive information; and support for function calling and structured output to automate tasks and create agentic experiences.
Gemma 3 integrates with popular development tools and platforms, including Hugging Face Transformers, Ollama, JAX, Keras, PyTorch, Google AI Edge, Unsloth, vLLM, and Gemma.cpp. Developers can customise and deploy Gemma 3 using platforms like Google Colab and Vertex AI, and the models have been optimised for NVIDIA GPUs for maximum performance.
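As a rough illustration of the Hugging Face Transformers route mentioned above, a minimal sketch of text generation might look like the following. It assumes the text-only 1B instruction-tuned checkpoint is published on the Hugging Face Hub under an ID such as "google/gemma-3-1b-it" and that the developer has accepted the model licence; the exact checkpoint names should be checked on the Hub.

# Minimal sketch: running a Gemma 3 checkpoint with Hugging Face Transformers.
# The model ID below is an assumption for illustration; verify it on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-1b-it"  # assumed text-only, instruction-tuned 1B checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarise the key features of Gemma 3 in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The same checkpoint could instead be served through Ollama, vLLM or Gemma.cpp for local or high-throughput deployment, depending on the hardware available.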