“To put that in context, researchers at Nvidia, the company that makes the specialised GPU processors now used in most machine-learning systems, came up with a massive natural-language model that was 24 times bigger than its predecessor and yet was only 34% better at its learning task. But here’s the really interesting bit. Training the final model took 512 V100 GPUs running continuously for 9.2 days. “Given the power requirements per card,” wrote one expert, “a back of the envelope estimate put the amount of energy used to train this model at over 3x the yearly energy consumption of the average American.” You don’t have to be Einstein to realise that machine learning can’t continue on its present path, especially given the industry’s frenetic assurances that tech giants are heading for an “AI everywhere” future.”

Source: Can the planet really afford the exorbitant power demands of machine learning? | John Naughton | Opinion | The Guardian
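
To see roughly where the "over 3x" figure could come from, here is a minimal back-of-the-envelope sketch. The per-card power draw and the average American's yearly electricity consumption are not stated in the excerpt; the values below (~300 W sustained per V100, ~10,700 kWh/year) are assumptions chosen for illustration, and they happen to reproduce a multiple of roughly 3x.

```python
# Hedged back-of-the-envelope reconstruction of the training-energy claim.
# Figures from the excerpt: 512 V100 GPUs running continuously for 9.2 days.
# Assumed (not in the excerpt): ~300 W sustained draw per V100 and
# ~10,700 kWh/year electricity use for an "average American".

GPUS = 512                     # V100 GPUs used for the final training run
DAYS = 9.2                     # continuous training time
WATTS_PER_GPU = 300            # assumed sustained power draw per card (W)
AVG_US_KWH_PER_YEAR = 10_700   # assumed yearly consumption of an average American

hours = DAYS * 24
energy_kwh = GPUS * WATTS_PER_GPU * hours / 1000  # W * h -> kWh

print(f"Training energy: {energy_kwh:,.0f} kWh")
print(f"Multiple of average American's yearly use: {energy_kwh / AVG_US_KWH_PER_YEAR:.1f}x")
```

With these assumed inputs the run comes to roughly 34,000 kWh, i.e. a bit over 3x the assumed yearly figure, which is consistent with the estimate quoted in the article; different assumptions for card power or per-person consumption would shift the multiple accordingly.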