GPT-4.5 Environmental Impact
OpenAI's largest general-purpose chat model
All per-query figures below assume a medium-sized query (~1,000 tokens)
Energy per query: 20.5 Wh
CO2 per query: 8.0 g
Water per query: 80 mL
Processing location: Azure US East / Sweden
Provider: OpenAI
Category: Text / Chat
Grid carbon intensity: 450 g CO2/kWh (25% renewable)
Detailed Breakdown
Energy Consumption
GPT-4.5 is one of the most energy-intensive text models, consuming approximately 20.5 Wh per query: roughly 68x a Google search and 17x standard GPT-4o. The cost stems from its massive parameter count; as a dense decoder-only transformer, every generated token requires a full forward pass through all of the model's weights.
Power Source & Carbon
GPT-4.5 runs on Microsoft Azure, primarily in the US East (Virginia) and Sweden Central regions. Microsoft's electricity consumption nearly tripled from 10.8M MWh in 2020 to 29.8M MWh in 2024, driven substantially by AI workloads such as training and serving large models.
Water Usage
At approximately 80 mL per query, GPT-4.5's water consumption tracks its high energy use, since data-center cooling demand scales with heat output. Every 20 queries consume roughly 1.6 liters of water, more than three standard 500 mL bottles.
What does your GPT-4.5 usage cost the planet?
Use our calculator to estimate your personal environmental footprint based on how often you use GPT-4.5.
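The arithmetic behind such a calculator is simple scaling of the per-query figures above. A minimal sketch in Python (the function name and the 365-day-per-year assumption are illustrative, not the site's actual calculator):

```python
# Per-query figures from this page (medium query, ~1000 tokens).
WH_PER_QUERY = 20.5        # energy, watt-hours
CO2_G_PER_QUERY = 8.0      # carbon, grams
WATER_ML_PER_QUERY = 80.0  # water, milliliters

def yearly_footprint(queries_per_day: float) -> dict:
    """Scale the per-query figures to a year of usage (365 days assumed)."""
    queries = queries_per_day * 365
    return {
        "energy_kwh": queries * WH_PER_QUERY / 1000,
        "co2_kg": queries * CO2_G_PER_QUERY / 1000,
        "water_l": queries * WATER_ML_PER_QUERY / 1000,
    }

# Example: 10 queries per day for a year
print(yearly_footprint(10))
```

At 10 queries a day, a year of GPT-4.5 use works out to roughly 75 kWh of electricity, 29 kg of CO2, and 292 liters of water under these per-query assumptions.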
Frequently Asked Questions
How much energy does GPT-4.5 use per query?
Each GPT-4.5 query consumes approximately 20.5 Wh of energy. This is 68x more than a traditional Google search (~0.3 Wh).
What is GPT-4.5's carbon footprint?
Based on the carbon intensity of Azure US East / Sweden, each query produces approximately 8.0 g of CO2. The grid in this region has a carbon intensity of 450 g CO2/kWh with 25% renewable energy.
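As a sanity check, a naive grid-average estimate multiplies per-query energy by the grid's carbon intensity. This simple product comes out near 9.2 g, slightly above the published 8.0 g figure, which presumably applies additional adjustments (for example, matched renewable purchases); the sketch below is illustrative only:

```python
# Naive grid-average estimate: energy (kWh) x grid intensity (g CO2/kWh).
energy_kwh = 20.5 / 1000   # 20.5 Wh per query, converted to kWh
grid_intensity = 450.0     # g CO2/kWh, Azure US East / Sweden figure above
co2_g = energy_kwh * grid_intensity
print(co2_g)               # roughly 9.2 g per query
```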
How much water does GPT-4.5 use?
Each query consumes approximately 80 mL of water, primarily used for cooling the data centers that process the request.
How does GPT-4.5 compare to a Google search?
A GPT-4.5 query uses roughly 68x as much energy as a Google search: approximately 0.3 Wh for a search versus 20.5 Wh for GPT-4.5.
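The 68x figure follows directly from the two energy numbers; a one-line check (variable names are illustrative):

```python
# Ratio of the two per-query energy figures quoted on this page.
gpt45_wh = 20.5
google_search_wh = 0.3
print(round(gpt45_wh / google_search_wh))  # -> 68
```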
Technical Details
Architecture: Dense Transformer (decoder-only)
Context window: 128,000 tokens
Release date: 2025-02-27
Open source: No
Training data cutoff: 2024-10