“We identified four best practices that reduce energy and carbon emissions significantly — we call these the ‘4Ms’ — all of which are being used at Google today and are available to anyone using Google Cloud services.
Model. Selecting efficient ML model architectures, such as sparse models, can advance ML quality while reducing computation by 3x–10x.
Machine. Using processors and systems optimized for ML training, versus general-purpose processors, can improve performance and energy efficiency by 2x–5x.
Mechanization. Computing in the Cloud rather than on premise reduces energy usage and therefore emissions by 1.4x–2x. Cloud-based data centers are new, custom-designed warehouses equipped for energy efficiency for 50,000 servers, resulting in very good power usage effectiveness (PUE). On-premise data centers are often older and smaller and thus cannot amortize the cost of new energy-efficient cooling and power distribution systems.
Map Optimization. Moreover, the cloud lets customers pick the location with the cleanest energy, further reducing the gross carbon footprint by 5x–10x. While one might worry that map optimization could lead to the greenest locations quickly reaching maximum capacity, user demand for efficient data centers will result in continued advancement in green data center design and deployment.
These four practices together can reduce energy by 100x and emissions by 1000x.”
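The headline figures follow from multiplying the four per-practice factors. A minimal arithmetic sketch, pairing the upper end of each quoted range (the exact pairing of factors is our assumption, not spelled out in the excerpt):

```python
# Rough arithmetic sketch (not from the source): how the per-practice factors
# could compound to the quoted headline numbers, using the upper end of each range.
model_factor         = 10  # efficient architectures (3x-10x less computation)
machine_factor       = 5   # ML-optimized processors (2x-5x)
mechanization_factor = 2   # cloud vs. on-premise data centers (1.4x-2x)
map_factor           = 10  # locations with cleaner energy (5x-10x)

energy_reduction = model_factor * machine_factor * mechanization_factor
carbon_reduction = energy_reduction * map_factor

print(f"Energy reduction:    ~{energy_reduction}x")  # ~100x
print(f"Emissions reduction: ~{carbon_reduction}x")  # ~1000x
```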
“Now two teams of forensic linguists say their analysis of the Q texts shows that Mr. Furber, one of the first online commentators to call attention to the earliest messages, actually played the lead role in writing them. Sleuths hunting for the writer behind Q have increasingly overlooked Mr. Furber and focused their speculation on another QAnon booster: Ron Watkins, who operated a website where the Q messages began appearing in 2018 and is now running for Congress in Arizona. And the scientists say they found evidence to back up those suspicions as well. Mr. Watkins appears to have taken over from Mr. Furber at the beginning of 2018. Both deny writing as Q. The studies provide the first empirical evidence of who invented the toxic QAnon myth, and the scientists who conducted the studies said they hoped that unmasking the creators might weaken its hold over QAnon followers.”
“People are afraid to engage with uncertainty. They don’t know how to engage with uncertainty. And they worry about the politicization of uncertainty. But we’re hitting a tipping point. By not engaging with uncertainty, statistical imaginaries are increasingly disconnected from statistical practice, which is increasingly undermining statistical practice. And that threatens the ability to do statistical work in the first place. If we want data to matter, the science community must help push past the politicization of data and uncertainty to create a statistical imaginary that can engage the limitations of data.
The statistical imaginary of precise, perfect, and neutral data has been ruptured. There is no way to put the proverbial genie back in the bottle. Nothing good will come from attempting to find a new way to ignore uncertainty, noise, and error. The answer to responsible data use is not to repair an illusion. It’s to constructively envision and project a new statistical imaginary with eyes wide open. And this means that all who care about the future of data need to help ground our statistical imaginary in practice, in tools, and in knowledge. Responsible data science isn’t just about what you do, it’s about what you ensure all who work with data do.”
“Whatever development paths are chosen, one thing is clear to Florence G. Sell, professor of private law at the Université de Lorraine: “making court decisions publicly available, combined with advances in Big Data tools, will allow a much more global and in-depth view of how the justice system operates.” In the expert’s view, the judicial institution has every interest in seizing these tools to improve its quality and efficiency. And if it does not, “other actors, such as lawyers or startups, will: they will then be the ones at the forefront of an evolution that is, in any case, irreversible.””
“Professional portrait photographers are able to create compelling photographs by using specialized equipment, such as off-camera flashes and reflectors, and expert knowledge to capture just the right illumination of their subjects. In order to allow users to better emulate professional-looking portraits, we recently released Portrait Light, a new post-capture feature for the Pixel Camera and Google Photos apps that adds a simulated directional light source to portraits, with the directionality and intensity set to complement the lighting from the original photograph.”
“Using a machine-learning algorithm, MIT researchers have identified a powerful new antibiotic compound. In laboratory tests, the drug killed many of the world’s most problematic disease-causing bacteria, including some strains that are resistant to all known antibiotics. It also cleared infections in two different mouse models. The computer model, which can screen more than a hundred million chemical compounds in a matter of days, is designed to pick out potential antibiotics that kill bacteria using different mechanisms than those of existing drugs.”
“To put that in context, researchers at Nvidia, the company that makes the specialised GPU processors now used in most machine-learning systems, came up with a massive natural-language model that was 24 times bigger than its predecessor and yet was only 34% better at its learning task. But here’s the really interesting bit. Training the final model took 512 V100 GPUs running continuously for 9.2 days. “Given the power requirements per card,” wrote one expert, “a back-of-the-envelope estimate put the amount of energy used to train this model at over 3x the yearly energy consumption of the average American.” You don’t have to be Einstein to realise that machine learning can’t continue on its present path, especially given the industry’s frenetic assurances that tech giants are heading for an “AI everywhere” future.”
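A back-of-the-envelope sketch of how such an estimate can be reproduced. The per-card power draw (~300 W, roughly a V100’s TDP) and the comparison baseline (~10,600 kWh/year, roughly average US household electricity use) are our assumptions for illustration; the quoted expert’s exact figures and baseline are not given in the excerpt:

```python
# Back-of-the-envelope training-energy estimate; power and baseline figures are assumptions.
num_gpus      = 512    # from the excerpt
days          = 9.2    # from the excerpt
watts_per_gpu = 300    # assumed V100 power draw

energy_kwh = num_gpus * watts_per_gpu * days * 24 / 1000
baseline_kwh_per_year = 10_600  # assumed annual household electricity use

print(f"Training energy: ~{energy_kwh:,.0f} kWh")                        # ~33,900 kWh
print(f"Ratio to baseline: ~{energy_kwh / baseline_kwh_per_year:.1f}x")  # ~3.2x
```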
“We’ve observed agents discovering progressively more complex tool use while playing a simple game of hide-and-seek. Through training in our new simulated hide-and-seek environment, agents build a series of six distinct strategies and counterstrategies, some of which we did not know our environment supported. The self-supervised emergent complexity in this simple environment further suggests that multi-agent co-adaptation may one day produce extremely complex and intelligent behavior.”
“We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training.”