Thinking AI Models: 50x More CO2 for Little Benefit?

by Priyanka Patel

BERLIN, 2025-06-19 07:55:00

AI’s Carbon Footprint: A Hidden Cost

Large language models consume significant energy, contributing to substantial CO2 emissions, a fact often overlooked by users.

  • Reasoning-enabled models produce up to 50 times more CO2 emissions than concise response models.
  • The accuracy-sustainability trade-off is inherent in current LLM technologies.
  • Subject matter substantially impacts emissions, with complex topics driving up carbon footprints.

The environmental impact of using AI, especially large language models (LLMs), is significant. Research reveals that these technologies generate considerable CO2 emissions, with “thinking” models contributing substantially to this carbon footprint.

No matter what question we pose, an AI model always produces an answer. These responses rely on “tokens”: words or fragments of words that are converted into numbers the LLM can process. This conversion, along with the rest of the computation, generates CO2 emissions, a fact many users may not realize.

What are Tokens? Tokens are the essential units of data that LLMs process. Understanding tokens is key to understanding the energy consumption of these models.
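To make the idea concrete, here is a deliberately simplified toy tokenizer. Real LLMs use subword schemes such as byte-pair encoding, so this sketch is only an illustration of the core idea: text becomes a sequence of integer IDs, and longer outputs mean more IDs to compute.

```python
def toy_tokenize(text, vocab):
    """Map text to integer token IDs; unknown words fall back to characters.

    Illustrative only -- real tokenizers use learned subword vocabularies.
    """
    ids = []
    for word in text.lower().split():
        if word in vocab:
            ids.append(vocab[word])
        else:
            # Unknown word: assign a fresh ID to each character.
            for ch in word:
                ids.append(vocab.setdefault(ch, len(vocab)))
    return ids

vocab = {"what": 0, "are": 1, "tokens": 2}
print(toy_tokenize("What are tokens", vocab))  # [0, 1, 2]
```

Every token an LLM generates costs computation, which is why the “thinking” tokens discussed below matter for emissions.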

Researchers in Germany measured and compared the CO2 emissions of various pre-trained LLMs, using a standardized set of questions. The study, published in Frontiers in Communication, revealed a crucial aspect of AI: how it processes information directly influences its environmental impact.

‘Thinking’ vs. Concise: Emission Differences

The study evaluated 14 LLMs, ranging from 7 to 72 billion parameters, on 1,000 benchmark questions across different subjects. Parameters influence how LLMs learn and process information. On average, reasoning-enabled models generated 543.5 ‘thinking’ tokens per question, while concise models required only 37.7 tokens per question. These ‘thinking’ tokens are additional tokens that reasoning LLMs produce before delivering an answer, and a larger token footprint translates to higher CO2 emissions.
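The gap between those two averages is worth spelling out. Using only the per-question token counts reported in the study, a quick calculation shows the scale of the difference:

```python
# Average tokens generated per question, as reported in the study
thinking_tokens_per_q = 543.5  # reasoning-enabled models
concise_tokens_per_q = 37.7    # concise-answer models

ratio = thinking_tokens_per_q / concise_tokens_per_q
print(f"Reasoning models generate ~{ratio:.0f}x more tokens per question")  # ~14x
```

That roughly fourteenfold token overhead, multiplied across millions of queries, is what drives the emission differences the study measured.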

The most accurate model, the reasoning-enabled Cogito model with 70 billion parameters, reached 84.9% accuracy. However, it produced three times more CO2 emissions than similarly sized models that generated concise answers. “Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies,” said Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences and first author of the study. He also noted that “None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly.”

Accuracy vs. Sustainability: Is it possible to develop AI models that are both highly accurate and environmentally friendly? What innovations might bridge this gap?

Subject Matter’s Impact

The subject matter also affected the CO2 emissions significantly. Questions requiring extensive reasoning, such as those related to abstract algebra or beliefs, resulted in up to six times higher emissions than more straightforward topics, like high school history.

Did you know? CO2 equivalent is the unit used to measure the climate impact of various greenhouse gases.
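The callout above can be made concrete. Converting other greenhouse gases into CO2 equivalent multiplies each gas’s mass by its global-warming potential (GWP). The GWP values below are approximate 100-year figures from the IPCC’s Fifth Assessment Report, used here purely for illustration:

```python
def co2_equivalent(emissions_kg):
    """Total CO2-equivalent mass for a mix of greenhouse gases.

    GWP values are approximate 100-year figures (IPCC AR5):
    methane (CH4) ~28x CO2, nitrous oxide (N2O) ~265x CO2.
    """
    gwp = {"co2": 1, "ch4": 28, "n2o": 265}
    return sum(mass * gwp[gas] for gas, mass in emissions_kg.items())

# 100 kg of CO2 plus 2 kg of methane:
print(co2_equivalent({"co2": 100, "ch4": 2}))  # 156 kg CO2e
```

Expressing everything in CO2 equivalent lets studies like this one report a single comparable emissions number per model.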

Making Informed Choices

The researchers hope their findings will encourage users to make more informed decisions about AI use. According to Dauner, “Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power.”

The choice of model can significantly impact CO2 emissions. For example, having DeepSeek R1 (70 billion parameters) answer 600,000 questions would create CO2 emissions equal to a round-trip flight from London to New York. Meanwhile, Qwen 2.5 (72 billion parameters) can answer more than three times as many questions (about 1.9 million) with similar accuracy rates while generating the same emissions.
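A quick back-of-the-envelope check shows where the “more than three times” comparison comes from. The question counts are the article’s; the arithmetic is just a sketch:

```python
# Questions each model can answer for the same CO2 budget
# (roughly one London-New York round-trip flight's worth, per the study)
deepseek_r1_questions = 600_000
qwen_25_questions = 1_900_000

ratio = qwen_25_questions / deepseek_r1_questions
print(f"Qwen 2.5 answers ~{ratio:.1f}x as many questions "
      f"for the same emissions")  # ~3.2x
```

For a user, that ratio is effectively an efficiency figure: more useful answers per unit of CO2.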

London to New York: The CO2 emissions from using certain AI models can be comparable to significant real-world activities like transatlantic flights. This puts the energy consumption into perspective.

The study’s results could be influenced by the hardware used, by emission factors that vary regionally with the local energy grid mix, and by the specific models examined, all of which may limit how far the findings generalize.

“If users know the exact CO2 cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective and thoughtful about when and how they use these technologies,” Dauner concluded.

The Sustainable Path: AI Innovations on the Horizon

The carbon footprint of large language models, as we’ve seen, is a growing concern. The study highlighted the tension between accuracy and sustainability. However, the research also underscores the importance of innovation in mitigating these environmental impacts. What concrete steps are being taken to create more eco-friendly AI? Current research is focusing on several areas, including more efficient model architectures, improved hardware, and the use of renewable energy.

One promising area is the development of more energy-efficient AI models. Researchers are actively exploring new architectures that require fewer computational resources to achieve similar levels of performance. This can translate directly into reduced CO2 emissions. Advancements in hardware design are also playing a pivotal role: more efficient processors and specialized chips designed for AI tasks can significantly reduce the energy consumption of model training and inference.

What is Model Architecture? Model architecture refers to the structure and design of an AI model. Different architectures can have varying levels of computational efficiency.

Another crucial aspect of reducing AI’s carbon footprint involves the use of renewable energy sources. As data centers and computing facilities consume massive amounts of power, transitioning to green energy becomes imperative. Many companies are now investing in powering their AI infrastructure with renewable sources like solar and wind. This strategic shift significantly reduces the overall carbon footprint of AI operations.

These innovations are not just theoretical. Several organizations are actively working to address the environmental concerns raised by the increasing use of LLMs. For example, the MIT Generative AI Impact Consortium is dedicated to developing open-source generative AI solutions [[3]]. This collaborative effort across multiple disciplines aims to accelerate advancements in education, research, and industry. The consortium’s focus includes sustainable practices and green computing to help create eco-friendly models.

How can we choose the most eco-friendly options? It means being conscientious about when we use AI and finding ways to limit its environmental toll. The challenge lies in balancing capability against environmental cost.

Another effort in this field integrates AI into databases. MIT researchers have created an easy-to-use tool for performing complex statistical analyses on tabular data [[2]]. This system combines probabilistic AI models with SQL to produce faster, more accurate results, which can translate into less energy spent.

The most effective method of decreasing the carbon output of LLMs is through the research and deployment of more energy-efficient technologies. These developments include the use of modern AI architecture, energy-efficient processors, and a reliance on renewable energy sources.

Practical Steps to Reduce Your AI Carbon Footprint

While large-scale innovations are critically important, individual actions can also contribute to more sustainable AI practices. Here are several easy steps you can take right now:

  • Choose concise models: When choosing an LLM, opt for models that are designed to produce concise answers. “Thinking” models, as the research suggests, require more computational power.
  • Be specific with prompts: Craft specific and clear prompts to get the information you need. Vague prompts can lead to increased processing and higher energy consumption.
  • Limit heavy usage: Think before using high-capacity models for routine tasks. Use them only when necessary for complex tasks.
  • Consider model alternatives: Explore and compare different AI models based on their energy efficiency. Consider using smaller, more efficient models for less demanding tasks.
