How energy powers your AI work and fun: a step-by-step guide
""My advice for you is to start from something that you have created with AI and walk backwards of how it works," Remi Raphael, the first-ever chief AI officer of the nonprofit Electric Power Research Institute, told me. "Bits and bites. Electrons to microchips." Let's begin at the end: Whether it's a memo for your boss or a meme of your cat, it's all produced the same way. Let's focus on a cat image, because that's more fun."
"Step 8 (end goal): My giant Seattle cat image. I use an AI tool - like ChatGPT or Gemini - to write a prompt for an image of my cat made giant stretching next to Seattle's Space Needle. The platform delivers it to me, even though all the heavy lifting happens in a data center far away - just like the regular ol' Internet."
AI-generated content is produced by models hosted in remote data centers running large GPU clusters. Generating an image involves inference, which happens each time a user requests output, and relies on training, which occurred earlier and consumed far more energy. Data centers house GPUs in rows of servers and require massive cooling and substantial electricity to operate. Companies often rent this hardware from cloud providers and rely heavily on GPUs manufactured mainly by Nvidia. The hardware choices and the energy demands of training versus inference shape both the environmental footprint and the infrastructure economics of large AI models.
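The split the paragraph describes, a huge one-time training cost versus a small per-request inference cost, can be sketched as simple arithmetic. This is a minimal illustration only: every number below is a hypothetical placeholder, not a measured figure from the article or from any real model.

```python
# Back-of-the-envelope sketch of training vs. inference energy.
# All numbers are hypothetical placeholders for illustration only.

def total_inference_energy_kwh(energy_per_request_kwh: float, num_requests: int) -> float:
    """Ongoing cost: inference energy scales with the number of user requests."""
    return energy_per_request_kwh * num_requests

def requests_to_match_training(training_energy_kwh: float, energy_per_request_kwh: float) -> int:
    """How many inference requests it would take to equal the one-time training cost."""
    return round(training_energy_kwh / energy_per_request_kwh)

# Hypothetical placeholders: a large training run vs. a single image generation.
TRAINING_KWH = 1_000_000   # assumed one-time training cost
PER_REQUEST_KWH = 0.01     # assumed per-image inference cost

print(requests_to_match_training(TRAINING_KWH, PER_REQUEST_KWH))
```

Under these made-up assumptions, training dominates until the model has served an enormous number of requests, which is why the article notes that training "consumes far more energy" even though inference is what users actually trigger.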
Read at Axios