Methodology
How every number on this site is calculated, and where the data comes from.
Our Approach
Know Your Compute provides transparent, data-backed estimates of the environmental cost of AI inference.
We prioritise official disclosures from providers, then peer-reviewed academic research, then transparent estimation. We never present a guess as a fact — every figure on this site shows its confidence level.
All calculations cover inference only (the energy used when you interact with a model), not training. We focus on per-query impact because inference now accounts for 80–90% of AI's total energy consumption.
Data Source Tiers
Tier 1 — Verified
Official disclosures from providers:
- Google Gemini technical paper (August 2025)
- Mistral AI Life Cycle Assessment
- Sam Altman's ChatGPT energy disclosure (June 2025)
Tier 2 — Peer-reviewed
Published, peer-reviewed academic research.
Tier 3 — Estimated
Our estimation framework, based on:
- GPU system power specifications (NVIDIA H100, Google TPU)
- API performance benchmarks (latency, throughput)
- Regional environmental multipliers (PUE, WUE, CIF)
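As a sketch of how a published figure could carry its source tier (per the tiers above), here is a minimal Python representation; the class and field names are ours for illustration, not the site's actual schema, and the example value is hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class SourceTier(Enum):
    VERIFIED = 1       # Tier 1: official provider disclosure
    PEER_REVIEWED = 2  # Tier 2: published academic research
    ESTIMATED = 3      # Tier 3: our estimation framework

@dataclass
class EnergyFigure:
    model: str
    wh_per_query: float   # energy per query, watt-hours
    tier: SourceTier      # confidence level shown alongside the figure

# Hypothetical example record tagged as a Tier 1 (verified) figure
fig = EnergyFigure(model="example-model", wh_per_query=0.3, tier=SourceTier.VERIFIED)
```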
The Estimation Formula
For models without official disclosures, we estimate energy per query as:
E_query = (P_gpu × N_gpu × T_inference × PUE) / 3600
- E_query: energy per query in watt-hours (Wh)
- P_gpu: GPU system power in watts, including server overhead (~1.72× the GPU's rated TDP)
- N_gpu: number of GPUs needed per query (derived from model size and memory requirements)
- T_inference: inference time in seconds (from API benchmarks or throughput estimates)
- PUE: Power Usage Effectiveness of the data centre (1.0 = perfect, typical is 1.1–1.35)
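The formula above translates directly into code. A minimal Python sketch follows; the input values are illustrative (an 8-GPU, H100-class server at 700 W TDP with the ~1.72× system overhead noted above), not measured figures for any particular model:

```python
def energy_per_query_wh(p_gpu_w: float, n_gpu: int,
                        t_inference_s: float, pue: float) -> float:
    """Estimate inference energy per query in watt-hours.

    p_gpu_w       -- GPU system power in watts (incl. server overhead)
    n_gpu         -- number of GPUs serving the query
    t_inference_s -- inference time in seconds
    pue           -- data-centre Power Usage Effectiveness
    """
    # W x s gives joules; dividing by 3600 converts to watt-hours
    return (p_gpu_w * n_gpu * t_inference_s * pue) / 3600

# Illustrative inputs: 700 W TDP x 1.72 overhead, 8 GPUs,
# a 2-second response, PUE of 1.2
e_query = energy_per_query_wh(700 * 1.72, 8, 2.0, 1.2)
```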
CO₂ and water are then derived from energy using regional environmental data:
CO₂ (grams) = E_query × CIF / 1000
Water (mL) = E_query × (site_WUE + source_WUE)
Regional Environmental Data
Environmental impact varies significantly by inference location, so we apply region-specific factors: data-centre PUE, site and source water usage effectiveness (WUE), and the carbon intensity factor (CIF) of the local grid.
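Both conversions can be sketched as small Python helpers. The regional factors in the example are assumed for illustration only, not values from our dataset:

```python
def co2_grams(e_query_wh: float, cif_g_per_kwh: float) -> float:
    """Carbon per query: CIF is the regional grid's carbon
    intensity in gCO2/kWh; dividing by 1000 converts Wh to kWh."""
    return e_query_wh * cif_g_per_kwh / 1000

def water_ml(e_query_wh: float, site_wue_l_per_kwh: float,
             source_wue_l_per_kwh: float) -> float:
    """Water per query: WUE values are in litres/kWh, so
    Wh x L/kWh yields millilitres directly."""
    return e_query_wh * (site_wue_l_per_kwh + source_wue_l_per_kwh)

# Assumed example factors: CIF 400 gCO2/kWh, site WUE 0.2 L/kWh,
# source (generation) WUE 1.8 L/kWh, for a 6.4 Wh query
grams = co2_grams(6.4, 400)
millilitres = water_ml(6.4, 0.2, 1.8)
```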
Limitations
- We measure inference only, not training or hardware manufacturing
- Energy consumption varies 10–100× by prompt length and complexity
- Provider disclosures may undercount overhead (cooling, networking, storage)
- Self-hosted model data depends on your hardware and electricity source
- Water and CO₂ figures are derived from energy using regional averages — actual values vary by time of day and season
- We update data periodically; real-world efficiency improves continuously
Data last reviewed: March 2026