A study published in Frontiers in Communication finds that larger language models (LLMs) consume substantially more energy, and therefore produce greater carbon emissions, than simpler models. The researchers evaluated 14 open-source LLMs and found that those with higher accuracy generally caused more environmental harm. Although there are exceptions, the trend is clear: improving a model's capability amplifies its ecological footprint. The researchers emphasize that smaller models often suffice for simpler tasks, suggesting a need to balance AI advancement against environmental sustainability.
We don't always need the biggest, most heavily trained model to answer simple questions. Smaller models are often sufficient and less harmful to the environment.
As model size increases, models typically become more capable, but they also draw more electricity and produce more emissions.
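To make that relationship concrete, here is a minimal sketch of how per-query emissions can be estimated from energy use. The model names, per-query energy figures, and grid carbon-intensity factor below are illustrative assumptions, not values from the study.

```python
# Illustrative sketch: converting a model's per-query energy use into CO2.
# All numbers below are hypothetical placeholders, not figures from the study.

# Assumed grid carbon intensity: ~0.5 kg CO2 per kWh (varies widely by region).
CARBON_INTENSITY_KG_PER_KWH = 0.5

# Hypothetical per-query energy use, in watt-hours, for two model sizes.
ENERGY_WH_PER_QUERY = {
    "small-model": 0.3,   # placeholder value
    "large-model": 3.0,   # placeholder value
}

def co2_grams_per_query(model: str) -> float:
    """Convert a model's per-query energy use (Wh) into grams of CO2."""
    energy_kwh = ENERGY_WH_PER_QUERY[model] / 1000      # Wh -> kWh
    return energy_kwh * CARBON_INTENSITY_KG_PER_KWH * 1000  # kg -> g

for model in ENERGY_WH_PER_QUERY:
    print(f"{model}: {co2_grams_per_query(model):.2f} g CO2 per query")
```

Even with placeholder numbers, the arithmetic shows why a tenfold increase in per-query energy translates directly into a tenfold increase in emissions on the same grid.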