AI Energy Usage

Environmental Impact of LLMs

Understanding Energy and Water Usage in Large Language Models (LLMs) and AI

As artificial intelligence, particularly large language models (LLMs) like GPT, continues to grow in capability and popularity, questions around the environmental impact of these technologies have become increasingly urgent. Two critical resources involved in the lifecycle of AI systems are energy and water. These resources are consumed at different stages of development and deployment, each with distinct implications for sustainability.

Energy Consumption in AI

Energy usage in AI is primarily concentrated in two phases: training and inference.

  • Training: Training a state-of-the-art LLM involves massive parallel computation sustained over weeks or months. For example, training GPT-3 is estimated to have consumed around 1,287 MWh of electricity—enough to power an average American home for over 120 years. Most of this energy goes into powering specialized accelerators such as GPUs and TPUs, along with the data-center and cooling infrastructure that supports them.
  • Inference: Once trained, running the model (inference) consumes much less energy per task, but at scale, inference can outweigh training energy consumption. Billions of queries per day across cloud platforms mean the cumulative energy costs remain substantial.
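The household comparison above is straightforward to check. The sketch below reproduces it from two figures: the published ~1,287 MWh training estimate and an assumed average U.S. household consumption of roughly 10.6 MWh per year (official figures vary slightly by year and source).

```python
# Back-of-envelope check: training energy expressed in household-years.
# Assumptions: 1,287 MWh training estimate (from the text) and ~10.6 MWh
# per year for an average U.S. home (approximate; varies by source).

TRAINING_ENERGY_MWH = 1_287
HOME_ANNUAL_MWH = 10.6

home_years = TRAINING_ENERGY_MWH / HOME_ANNUAL_MWH
print(f"Equivalent household-years of electricity: {home_years:.0f}")
# Roughly 121 household-years, consistent with "over 120 years" above.
```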

Water Usage and AI

Less discussed but equally important is the water footprint of AI. Water is used extensively in data centers for cooling, particularly when servers operate under high computational loads.

  • According to a 2023 University of California–Riverside study, training GPT-3 in Microsoft's U.S. data centers is estimated to have consumed roughly 700,000 liters of clean freshwater—comparable to the water required to manufacture several hundred cars.
  • Inference also contributes to water use, especially in regions that rely on evaporative cooling. The same study estimates that a large model consumes about 500 ml of water for roughly every 20–50 queries, depending on where and when the model is served.

Geographic and Temporal Variability

The environmental impact of LLMs also varies significantly depending on:

  • Data center location: Water usage and energy sources differ by region. Data centers in dry or drought-prone areas can place additional stress on local water supplies.
  • Time of day: Some companies have begun scheduling AI training at times when electricity grids are greener and water availability is higher.
  • Cooling technology: Facilities using direct air or liquid immersion cooling tend to have different resource profiles than those using evaporative methods.

Toward Sustainable AI

Reducing the resource intensity of AI requires a multi-pronged approach. Leading companies are investing in energy-efficient chips, renewable energy sourcing, and model optimization to decrease carbon and water footprints. Additionally, efforts to improve model efficiency—such as smaller distilled models and dynamic computation—offer promising paths forward.

As AI continues to scale globally, addressing its energy and water demands will be crucial for ensuring that progress in machine intelligence aligns with the broader goals of environmental sustainability and responsible innovation.

📚 Further Reading on AI's Environmental Impact