Taking a closer look at AI’s supposed energy apocalypse (2024)

Late last week, both Bloomberg and The Washington Post published stories focused on the ostensibly disastrous impact artificial intelligence is having on the power grid and on efforts to collectively reduce our use of fossil fuels. The high-profile pieces lean heavily on recent projections from Goldman Sachs and the International Energy Agency (IEA) to cast AI's "insatiable" demand for energy as an almost apocalyptic threat to our power infrastructure. The Post piece even cites anonymous "some [people]" in reporting that "some worry whether there will be enough electricity to meet [the power demands] from any source."

Digging into the best available numbers and projections, though, it's hard to see AI's current and near-future environmental impact in such a dire light. While generative AI models and tools can and will use a significant amount of energy, we shouldn't conflate AI energy usage with the larger and largely pre-existing energy usage of "data centers" as a whole. And just like any technology, whether that AI energy use is worthwhile depends largely on your wider opinion of the value of generative AI in the first place.

Not all data centers

While the headline focus of both Bloomberg and The Washington Post's recent pieces is on artificial intelligence, the actual numbers and projections cited in both pieces overwhelmingly focus on the energy used by Internet "data centers" as a whole. Long before generative AI became the current Silicon Valley buzzword, those data centers were already growing immensely in size and energy usage, powering everything from Amazon Web Services servers to online gaming services, Zoom video calls, and cloud storage and retrieval for billions of documents and photos, to name just a few of the more common uses.

The Post story acknowledges that these "nondescript warehouses packed with racks of servers that power the modern Internet have been around for decades." But in the very next sentence, the Post asserts that, today, data center energy use "is soaring because of AI." Bloomberg asks one source directly "why data centers were suddenly sucking up so much power" and gets back a blunt answer: "It’s AI... It’s 10 to 15 times the amount of electricity."

Unfortunately for Bloomberg, that quote is followed almost immediately by a chart that heavily undercuts the AI alarmism. That chart shows worldwide data center energy usage growing at a remarkably steady pace from about 100 TWh in 2012 to around 350 TWh in 2024. The vast majority of that energy usage growth came before 2022, when the launch of tools like Dall-E and ChatGPT largely set off the industry's current mania for generative AI. If you squint at Bloomberg's graph, you can almost see the growth in energy usage slowing down a bit since that momentous year for generative AI.

Determining precisely how much of that data center energy use goes specifically to generative AI is a difficult task, but Dutch researcher Alex de Vries found a clever way to get an estimate. In his study "The growing energy footprint of artificial intelligence," de Vries starts with estimates that Nvidia's specialized chips are responsible for about 95 percent of the market for generative AI calculations. He then uses Nvidia's projected production of 1.5 million AI servers in 2027—and the projected power usage for those servers—to estimate that the AI sector as a whole could use anywhere from 85 to 134 TWh of power per year by 2027.
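The shape of de Vries' back-of-envelope method is easy to reproduce: server count times per-server power draw times hours in a year. The per-server wattages below (roughly 6.5 kW and 10.2 kW, running around the clock) are illustrative assumptions chosen to land in the study's published range, not figures quoted from the paper itself:

```python
# Sketch of the estimate: servers x power draw x hours per year.
# Per-server wattages are illustrative assumptions, not the study's exact inputs.
NVIDIA_SERVERS_2027 = 1_500_000  # projected annual AI server production (from the study)
HOURS_PER_YEAR = 8760

def annual_twh(num_servers: int, watts_per_server: float) -> float:
    """Energy in terawatt-hours if every server runs at this draw all year."""
    watt_hours = num_servers * watts_per_server * HOURS_PER_YEAR
    return watt_hours / 1e12  # 1 TWh = 1e12 Wh

low = annual_twh(NVIDIA_SERVERS_2027, 6_500)    # ~6.5 kW per server
high = annual_twh(NVIDIA_SERVERS_2027, 10_200)  # ~10.2 kW per server
print(f"{low:.0f}-{high:.0f} TWh")  # lands in roughly the 85-134 TWh range
```

Note that this is a ceiling-style estimate: it assumes every server Nvidia ships runs at full draw continuously, which real-world utilization would reduce.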

To be sure, that is an immense amount of power, representing about 0.5 percent of projected electricity demand for the entire world (and an even greater share of the local energy mix in some common data center locations). But measured against other common worldwide uses of electricity, it's not a mind-boggling outlier. A 2018 study estimated that PC gaming as a whole accounted for 75 TWh of electricity use per year, to pick just one common human activity on the same general energy scale (and that's without counting console or mobile gamers).
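That "about 0.5 percent" figure checks out against rough global numbers. The 27,000 TWh world-demand figure below is an order-of-magnitude assumption used only to show the arithmetic, not a number from the article's sources:

```python
# Rough sanity check of the "about 0.5 percent of world demand" claim.
# WORLD_DEMAND_TWH is an order-of-magnitude assumption for annual global
# electricity consumption, not a figure from the cited reports.
WORLD_DEMAND_TWH = 27_000

for ai_twh in (85, 134):
    share = ai_twh / WORLD_DEMAND_TWH * 100
    print(f"{ai_twh} TWh -> {share:.2f}% of assumed world demand")
```

The high end of de Vries' range works out to roughly half a percent, matching the article's framing; the low end is closer to a third of a percent.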

More to the point, de Vries' AI energy estimates are only a small fraction of the 620 to 1,050 TWh that data centers as a whole are projected to use by 2026, according to the IEA's recent report. The vast majority of all that data center power will still be going to more mundane Internet infrastructure that we all take for granted (and which is not nearly as sexy of a headline bogeyman as "AI").
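The same arithmetic puts de Vries' AI estimate in context against the IEA's data-center projection. This sketch simply bounds the possible share (note the two projections target slightly different years, 2027 versus 2026, so treat the result as indicative):

```python
# Bounding AI's possible share of projected data-center energy use.
AI_RANGE_TWH = (85, 134)            # de Vries' estimate for 2027
DATACENTER_RANGE_TWH = (620, 1050)  # IEA projection for all data centers by 2026

lowest_share = AI_RANGE_TWH[0] / DATACENTER_RANGE_TWH[1] * 100   # smallest AI / largest total
highest_share = AI_RANGE_TWH[1] / DATACENTER_RANGE_TWH[0] * 100  # largest AI / smallest total
print(f"AI share of data-center energy: {lowest_share:.0f}%-{highest_share:.0f}%")
```

Even under the most AI-heavy pairing of the two ranges, generative AI stays well under a quarter of the projected data-center total, consistent with the article's point that most of that power serves ordinary Internet infrastructure.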
