In a landmark study released by Google on August 22, titled ‘Measuring the Environmental Impact of Delivering AI at Google Scale’, the company outlines a “first-of-its-kind full-stack methodology” to measure the footprint of AI inference. 

Rather than focusing only on the power drawn by chips, the paper accounts for “AI accelerators, CPUs, RAM, idle machines and data centre overheads such as cooling and power distribution.”

The results may surprise even AI’s fiercest critics. According to the paper, the median Gemini text prompt consumes just 0.24 watt-hours of energy, which “represents less energy than watching TV for nine seconds”. Each prompt also produces 0.03 grams of CO₂ equivalent (gCO₂e) and uses 0.26 millilitres of water or “about five drops”.
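The TV comparison is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming a typical television draws roughly 100 watts (a figure not given in the paper):

```python
# Sanity check on the "nine seconds of TV" comparison.
# Assumption (not from Google's paper): a typical TV draws ~100 W.
tv_power_w = 100                              # watts, assumed
seconds = 9
tv_energy_wh = tv_power_w * seconds / 3600    # convert watt-seconds to watt-hours

prompt_energy_wh = 0.24                       # median Gemini text prompt, per the paper

print(f"TV for 9 seconds:  {tv_energy_wh:.2f} Wh")
print(f"Gemini text prompt: {prompt_energy_wh:.2f} Wh")
```

Nine seconds of a 100 W television works out to 0.25 Wh, so the paper's 0.24 Wh figure is consistent with the comparison it draws.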

“These numbers are well below many previous estimates for AI inference,” the Google team wrote, positioning the findings as a counter to rising concerns that chatbots and assistants are quietly driving up global emissions.

Perhaps more striking are the efficiency improvements Google claims to have achieved over the past year. “Median Gemini text prompt energy use has decreased 33-fold and carbon emissions 44-fold, while output quality has been maintained or improved,” the paper states.

The report stresses that these figures apply only to text-based prompts on Gemini Apps. Multimedia tasks, such as generating images or video, remain far more resource-intensive. Nor does the study examine model training, the initial phase that often consumes orders of magnitude more compute and electricity.

Push for Transparency

For Google, the bigger play is setting a precedent for transparency. “We hope this methodology can inform broader industry standards and enable consistent, comparable reporting of AI’s environmental impact,” the authors noted.

The company argues that without a uniform framework, AI’s footprint will remain opaque, fuelling speculation and distrust. By publishing details on how it measures energy, water and emissions per query, Google is nudging peers like OpenAI, Anthropic and Meta to follow suit.

Still, not everyone is convinced. As The Verge pointed out, some experts argue that Google’s market-based emissions accounting could underestimate true carbon output compared to location-based methods that reflect regional grid intensity. Others highlight the report’s exclusion of indirect water use and the potential for a ‘Jevons paradox’ effect: even if each query becomes greener, the sheer explosion of AI usage could push overall demand and emissions higher.

Axios noted that “efficiency gains at the prompt level may obscure rising overall environmental costs”, particularly as AI moves into search, productivity tools, and consumer devices.

The study lands at a time when policymakers in Europe and the US are pressing tech firms to disclose AI’s hidden costs. By presenting a methodology and relatively small per-prompt footprint, Google is attempting to shape that conversation.

Whether this becomes an industry standard or another corporate benchmark remains to be seen. But for now, Google wants the world to know: asking Gemini a question might be greener than you think.

The post Google Claims Gemini’s AI Prompt Uses ‘Less Energy Than 9 Seconds of TV’ appeared first on Analytics India Magazine.