Tag: machine learning

At Saclay, the master's program that trains the elite of artificial intelligence specialists

“'It is very difficult to fight to keep them in an academic environment. One of my doctoral students did a summer internship at Tesla, which made him an offer before he had even finished his thesis,' he explains. The student declined, but once his doctorate was completed Tesla came back for him with a higher offer, close to 500,000 euros a year, not counting stock options. Matthieu Cord can amuse himself listing his PhD students who have left for DeepMind, Facebook and, above all, Apple: 'The best leave quickly once their thesis is done; those who start publishing are very soon on the radar of the tech giants, and after that it's over, we no longer keep them.' Once the young graduates have been 'absorbed', ties loosen, all the more so as some companies impose a form of code of silence on their researchers and employees.
Some students in the MVA class of 2022 are now also questioning their 'responsibility' and 'societal role' in the design of algorithms. A course on responsible machine learning was created at the start of the 2021 academic year to answer this aspiration; 60 students expressed interest in the roughly thirty places available. Mathis Clautier refuses to put his intelligence at the service of robotics intended for war. He is well aware that Boston Dynamics, a robotics start-up made famous by its humanoid robots and owned by Google from 2013 to 2017, had collaborated with the American defense research program, and that one of its quadruped 'Spot' robots made its debut with the French army in 2021.”

Source : At Saclay, the master's program that trains the elite of artificial intelligence specialists

Google AI Blog: Good News About the Carbon Footprint of Machine Learning Training

“We identified four best practices that reduce energy and carbon emissions significantly — we call these the “4Ms” — all of which are being used at Google today and are available to anyone using Google Cloud services.

  • Model. Selecting efficient ML model architectures, such as sparse models, can advance ML quality while reducing computation by 3x–10x.
  • Machine. Using processors and systems optimized for ML training, versus general-purpose processors, can improve performance and energy efficiency by 2x–5x.
  • Mechanization. Computing in the Cloud rather than on premise reduces energy usage and therefore emissions by 1.4x–2x. Cloud-based data centers are new, custom-designed warehouses equipped for energy efficiency for 50,000 servers, resulting in very good power usage effectiveness (PUE). On-premise data centers are often older and smaller and thus cannot amortize the cost of new energy-efficient cooling and power distribution systems.
  • Map Optimization. Moreover, the cloud lets customers pick the location with the cleanest energy, further reducing the gross carbon footprint by 5x–10x. While one might worry that map optimization could lead to the greenest locations quickly reaching maximum capacity, user demand for efficient data centers will result in continued advancement in green data center design and deployment.

These four practices together can reduce energy by 100x and emissions by 1000x.”

Source : Google AI Blog: Good News About the Carbon Footprint of Machine Learning Training
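
A rough way to read the headline numbers above is to multiply the individual factors through. The sketch below takes the upper end of each quoted range; which factors count toward energy versus emissions is my reading of the post, not Google's own accounting.

    # Multiplicative reading of the "4Ms" ranges quoted above (upper ends).
    # Assumption: model, machine and mechanization reduce the energy used,
    # while map optimization mainly shifts to cleaner energy, so it lowers
    # emissions without lowering energy use.
    model, machine, mechanization, map_opt = 10, 5, 2, 10   # from 3-10x, 2-5x, 1.4-2x, 5-10x

    energy_reduction = model * machine * mechanization       # 10 * 5 * 2 = 100x
    emissions_reduction = energy_reduction * map_opt         # 100 * 10 = 1000x
    print(f"energy ~{energy_reduction}x, emissions ~{emissions_reduction}x")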

Q on a flag

“Now two teams of forensic linguists say their analysis of the Q texts shows that Mr. Furber, one of the first online commentators to call attention to the earliest messages, actually played the lead role in writing them. Sleuths hunting for the writer behind Q have increasingly overlooked Mr. Furber and focused their speculation on another QAnon booster: Ron Watkins, who operated a website where the Q messages began appearing in 2018 and is now running for Congress in Arizona. And the scientists say they found evidence to back up those suspicions as well. Mr. Watkins appears to have taken over from Mr. Furber at the beginning of 2018. Both deny writing as Q. The studies provide the first empirical evidence of who invented the toxic QAnon myth, and the scientists who conducted the studies said they hoped that unmasking the creators might weaken its hold over QAnon followers.”

Source : Who Is Behind QAnon? Linguistic Detectives Find Fingerprints – The New York Times
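
The article describes stylometric analysis of the Q messages. As a hedged illustration of the general technique only (not the cited teams' actual methodology or data), a bare-bones authorship comparison can be done with character n-gram profiles and cosine similarity:

    # Toy stylometry: compare a disputed text against candidate authors using
    # character 3-gram frequency profiles and cosine similarity. Placeholder
    # strings stand in for real corpora.
    from collections import Counter
    from math import sqrt

    def profile(text, n=3):
        text = " ".join(text.lower().split())
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    def cosine(p, q):
        dot = sum(p[g] * q[g] for g in set(p) & set(q))
        norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
        return dot / norm if norm else 0.0

    candidates = {"author_a": "...known writing by author A...",
                  "author_b": "...known writing by author B..."}
    disputed = "...text of the disputed messages..."

    scores = {name: cosine(profile(disputed), profile(text)) for name, text in candidates.items()}
    print(max(scores, key=scores.get), scores)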

Statistical Imaginaries – by danah boyd

“People are afraid to engage with uncertainty. They don’t know how to engage with uncertainty. And they worry about the politicization of uncertainty. But we’re hitting a tipping point. By not engaging with uncertainty, statistical imaginaries are increasingly disconnected from statistical practice, which is increasingly undermining statistical practice. And that threatens the ability to do statistical work in the first place. If we want data to matter, the science community must help push past the politicization of data and uncertainty to create a statistical imaginary that can engage the limitations of data.
The statistical imaginary of precise, perfect, and neutral data has been ruptured. There is no way to put the proverbial genie back in the bottle. Nothing good will come from attempting to find a new way to ignore uncertainty, noise, and error. The answer to responsible data use is not to repair an illusion. It’s to constructively envision and project a new statistical imaginary with eyes wide open. And this means that all who care about the future of data need to help ground our statistical imaginary in practice, in tools, and in knowledge. Responsible data science isn’t just about what you do, it’s about what you ensure all who work with data do.”

Source : Statistical Imaginaries – by danah boyd
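
One concrete form of "engaging with uncertainty" is publishing an interval rather than a bare point estimate. A minimal sketch, illustrative only and not drawn from the essay, using a percentile bootstrap:

    # Report an estimate together with its uncertainty: a percentile bootstrap
    # confidence interval around a sample mean. Data values are placeholders.
    import random

    sample = [12, 15, 9, 22, 17, 14, 11, 19, 16, 13]
    boot_means = sorted(
        sum(random.choice(sample) for _ in sample) / len(sample)
        for _ in range(10_000)
    )
    low, high = boot_means[249], boot_means[9749]        # middle ~95% of resampled means
    point = sum(sample) / len(sample)
    print(f"estimate {point:.1f}, 95% bootstrap CI [{low:.1f}, {high:.1f}]")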

Racism, sexism: can AI eliminate discrimination in court cases?

“Whatever development paths are chosen, one thing is clear in the eyes of Florence G. Sell, professor of private law at the Université de Lorraine: 'making court decisions available, combined with advances in Big Data tools, will allow a much more global and in-depth view of how the justice system works.' For the expert, the judicial institution has every interest in taking up these tools to improve its quality and efficiency. And if it does not, 'other actors, such as lawyers or startups, will: they will then be the ones at the forefront of an evolution that is in any case irreversible.'”

Source : Racism, sexism: can AI eliminate discrimination in court cases?
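
As a hedged illustration of the kind of aggregate view that open court-decision data makes possible (the file and column names below are hypothetical, not a real dataset):

    # Toy aggregation over a table of court decisions: share of each outcome
    # per court. "decisions.csv", "court" and "outcome" are hypothetical names.
    import pandas as pd

    decisions = pd.read_csv("decisions.csv")
    summary = (decisions.groupby("court")["outcome"]
                        .value_counts(normalize=True)
                        .rename("share")
                        .reset_index())
    print(summary.head())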

Google AI Blog: Portrait Light: Enhancing Portrait Lighting with Machine Learning

“Professional portrait photographers are able to create compelling photographs by using specialized equipment, such as off-camera flashes and reflectors, and expert knowledge to capture just the right illumination of their subjects. In order to allow users to better emulate professional-looking portraits, we recently released Portrait Light, a new post-capture feature for the Pixel Camera and Google Photos apps that adds a simulated directional light source to portraits, with the directionality and intensity set to complement the lighting from the original photograph.”

Source : Google AI Blog: Portrait Light: Enhancing Portrait Lighting with Machine Learning
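
The passage describes adding a simulated directional light whose placement complements the existing illumination. Below is a minimal sketch of the shading step alone, assuming a per-pixel surface normal map is already available; in the actual feature the normals and the light placement are predicted by learned models, so this is not Google's implementation.

    # Add a synthetic directional (Lambertian) light to an image given
    # per-pixel surface normals. Inputs here are synthetic placeholders.
    import numpy as np

    def relight(image, normals, light_dir, intensity=0.4):
        """image: HxWx3 floats in [0, 1]; normals: HxWx3 unit vectors."""
        l = np.asarray(light_dir, dtype=float)
        l /= np.linalg.norm(l)
        diffuse = np.clip(normals @ l, 0.0, None)          # per-pixel n.l, clamped at 0
        gain = 1.0 + intensity * diffuse[..., None]         # brighten lit regions only
        return np.clip(image * gain, 0.0, 1.0)

    img = np.full((4, 4, 3), 0.5)                           # flat grey placeholder image
    normals = np.zeros((4, 4, 3)); normals[..., 2] = 1.0    # camera-facing surface
    out = relight(img, normals, light_dir=(0.5, 0.5, 1.0))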

MIT researchers used a machine-learning algorithm to identify a drug called halicin that kills many strains of bacteria. Halicin (top row) prevented the development of antibiotic resistance in E. coli, while ciprofloxacin (bottom row) did not.

“Using a machine-learning algorithm, MIT researchers have identified a powerful new antibiotic compound. In laboratory tests, the drug killed many of the world’s most problematic disease-causing bacteria, including some strains that are resistant to all known antibiotics. It also cleared infections in two different mouse models. The computer model, which can screen more than a hundred million chemical compounds in a matter of days, is designed to pick out potential antibiotics that kill bacteria using different mechanisms than those of existing drugs.”

Source : Artificial intelligence yields new antibiotic | MIT News
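
As a much-simplified sketch of model-based virtual screening in general (the MIT work used a graph neural network trained on growth-inhibition assays, not the fingerprint-and-random-forest stand-in below; the SMILES strings and labels are placeholders):

    # Train a classifier on molecules labeled for antibacterial activity, then
    # rank an unscreened library by predicted activity. Placeholder data only.
    import numpy as np
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem
    from sklearn.ensemble import RandomForestClassifier

    def featurize(smiles):
        mol = Chem.MolFromSmiles(smiles)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
        arr = np.zeros((2048,))
        DataStructs.ConvertToNumpyArray(fp, arr)
        return arr

    train_smiles = ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1"]   # placeholder molecules
    train_labels = [0, 0, 1]                                    # 1 = inhibits growth in the assay
    library = ["CCN", "c1ccncc1"]                               # unscreened candidates

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit([featurize(s) for s in train_smiles], train_labels)

    scores = model.predict_proba([featurize(s) for s in library])[:, 1]
    print(sorted(zip(library, scores), key=lambda x: -x[1]))    # top-ranked hits go to the lab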

“To put that in context, researchers at Nvidia, the company that makes the specialised GPU processors now used in most machine-learning systems, came up with a massive natural-language model that was 24 times bigger than its predecessor and yet was only 34% better at its learning task. But here’s the really interesting bit. Training the final model took 512 V100 GPUs running continuously for 9.2 days. “Given the power requirements per card,” wrote one expert, “a back of the envelope estimate put the amount of energy used to train this model at over 3x the yearly energy consumption of the average American.” You don’t have to be Einstein to realise that machine learning can’t continue on its present path, especially given the industry’s frenetic assurances that tech giants are heading for an “AI everywhere” future.”

Source : Can the planet really afford the exorbitant power demands of machine learning? | John Naughton | Opinion | The Guardian
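
The quoted estimate can be roughly reproduced from the numbers given. The per-card power (~300 W, the V100's rated board power) and the comparison baseline (about 10,500 kWh of electricity per year for an average US household) are assumptions of this sketch, since the expert's exact inputs are not in the article.

    # Back-of-the-envelope energy for the training run described above.
    gpus, days, watts_per_gpu = 512, 9.2, 300            # 300 W per V100 is an assumption

    kwh = gpus * (watts_per_gpu / 1000) * days * 24      # ~33,900 kWh for one run
    household_year_kwh = 10_500                          # rough US household annual electricity
    print(f"{kwh:,.0f} kWh, about {kwh / household_year_kwh:.1f}x a household's annual electricity")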

“ We’ve observed agents discovering progressively more complex tool use while playing a simple game of hide-and-seek. Through training in our new simulated hide-and-seek environment, agents build a series of six distinct strategies and counterstrategies, some of which we did not know our environment supported. The self-supervised emergent complexity in this simple environment further suggests that multi-agent co-adaptation may one day produce extremely complex and intelligent behavior.”

Source : Emergent Tool Use from Multi-Agent Interaction
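
As a cartoon of the co-adaptation idea only (not OpenAI's environment or its reinforcement-learning setup), two agents that alternately best-respond to each other already produce a strategy / counter-strategy cycle:

    # Toy alternating best responses between a hider and a seeker on a line of
    # cells: each round, one agent adapts to the other's latest strategy.
    CELLS = range(10)

    def hider_best_response(seeker_pos):
        return max(CELLS, key=lambda h: abs(h - seeker_pos))   # hide as far away as possible

    def seeker_best_response(hider_pos):
        return hider_pos                                       # search where the hider is

    hider, seeker = 0, 0
    for round_ in range(6):
        hider = hider_best_response(seeker)     # hider counters the seeker...
        seeker = seeker_best_response(hider)    # ...then the seeker counters back
        print(f"round {round_}: hider at {hider}, seeker at {seeker}")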
