Google expands Gemma family with compact 270M variant
Briefly

Gemma 3 270M is a new AI model from Google with 270 million parameters, optimized for task-specific fine-tuning. Of those parameters, 170 million are used for embeddings and 100 million for transformer blocks, paired with a large vocabulary of 256,000 tokens. The model targets high-volume tasks, offering significant operational cost savings and energy efficiency: it consumes only 0.75 percent of a smartphone battery over 25 conversations. It can be fine-tuned within hours and runs on-device, protecting sensitive information. The model is available across various platforms, including Hugging Face and Docker.
Google introduces Gemma 3 270M, a compact AI model with 270 million parameters, designed for task-specific fine-tuning and offering significantly lower operating costs than larger models.
The model's structure includes 170 million parameters for embeddings and 100 million for transformer blocks; its large 256,000-token vocabulary lets it handle specific and rare terms effectively.
In energy-efficiency tests on a Pixel 9 Pro SoC, the INT4-quantized version consumed only 0.75 percent of the battery over 25 conversations, making it the most power-efficient Gemma model yet.
The flexibility of Gemma 3 270M allows for rapid fine-tuning, enabling companies to optimize configurations within hours rather than days, while on-device execution keeps sensitive data local.
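The parameter split above can be sanity-checked with simple arithmetic: an embedding table holds one vector per vocabulary token, so its size is vocabulary size times hidden dimension. A minimal sketch, using the article's 256,000-token vocabulary and an assumed hidden dimension of 640 (the hidden dimension is not stated in the article):

```python
# Rough sanity check of the reported parameter split.
# VOCAB_SIZE comes from the article; HIDDEN_DIM is an assumption for illustration.
VOCAB_SIZE = 256_000   # tokens, per the article
HIDDEN_DIM = 640       # assumed embedding width (not stated in the article)

# One embedding vector per vocabulary token
embedding_params = VOCAB_SIZE * HIDDEN_DIM

print(f"{embedding_params / 1e6:.0f}M embedding parameters")  # ≈ 164M, near the ~170M reported
```

This shows why the vocabulary dominates the parameter budget in such a small model: the embedding table alone accounts for most of the 270 million parameters, leaving roughly 100 million for the transformer blocks.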
Read at Techzine Global