Tag: deepmind


“This computational work represents a stunning advance on the protein-folding problem, a 50-year-old grand challenge in biology. It has occurred decades before many people in the field would have predicted. It will be exciting to see the many ways in which it will fundamentally change biological research.”

Professor Venki Ramakrishnan – Nobel Laureate and President of the Royal Society

“We trained this system on publicly available data consisting of ~170,000 protein structures from the protein data bank together with large databases containing protein sequences of unknown structure. It uses approximately 16 TPUv3s (which is 128 TPUv3 cores or roughly equivalent to ~100-200 GPUs) run over a few weeks, a relatively modest amount of compute in the context of most large state-of-the-art models used in machine learning today.”

Source : AlphaFold: a solution to a 50-year-old grand challenge in biology | DeepMind
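
A quick back-of-the-envelope unpacking of those compute figures (the 8-cores-per-board count is the standard Cloud TPU v3 configuration; the GPU range is the quote's own rough equivalence, not a benchmark):

```python
# Unpacking the training-compute figures quoted above.
tpu_v3_devices = 16
cores_per_device = 8                 # a Cloud TPU v3 board exposes 8 cores
total_cores = tpu_v3_devices * cores_per_device
assert total_cores == 128            # matches "128 TPUv3 cores"

# The quote's own rough equivalence: 128 cores ~ 100-200 GPUs,
# i.e. on the order of one GPU per TPU core.
print(100 / total_cores, 200 / total_cores)  # ~0.78 to ~1.56 GPUs per core
```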

“After training our agents for an additional week, we played against MaNa, one of the world’s strongest StarCraft II players, and among the 10 strongest Protoss players. AlphaStar again won by 5 games to 0, demonstrating strong micro and macro-strategic skills.” “I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn’t have expected,” MaNa said. “I’ve realised how much my gameplay relies on forcing mistakes and being able to exploit human reactions, so this has put the game in a whole new light for me. We’re all excited to see what comes next.”

Source : AlphaStar: Mastering the Real-Time Strategy Game StarCraft II | DeepMind

« The AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go ».

Source : [1712.01815] Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm
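
To make “tabula rasa reinforcement learning from self-play” concrete, here is a deliberately tiny, runnable sketch in the same spirit, though many orders of magnitude simpler than AlphaZero: a value table instead of a deep network, one-step greedy lookahead instead of Monte Carlo tree search, and the game of Nim instead of chess. Everything in it is illustrative; the agent is given nothing but the rules and improves purely from the outcomes of games against itself.

```python
# Toy tabula-rasa self-play: the agent knows only the rules of Nim
# (take 1, 2 or 3 stones; whoever takes the last stone wins) and learns
# a value table purely from the outcomes of games against itself.
import random

STONES, MOVES, EPSILON, LR = 15, (1, 2, 3), 0.1, 0.1
value = {}  # value[stones_left] = estimated value for the player to move

def best_move(stones):
    """Choose the move that leaves the opponent in the worst position."""
    legal = [m for m in MOVES if m <= stones]
    if random.random() < EPSILON:          # occasional exploration
        return random.choice(legal)
    return min(legal, key=lambda m: value.get(stones - m, 0.0))

def self_play_and_update():
    stones, player, visited = STONES, 0, []
    while stones > 0:
        visited.append((stones, player))
        stones -= best_move(stones)
        player ^= 1
    winner = player ^ 1                    # whoever took the last stone
    for state, p in visited:               # nudge each state toward outcome
        target = 1.0 if p == winner else -1.0
        v = value.get(state, 0.0)
        value[state] = v + LR * (target - v)

for _ in range(20000):
    self_play_and_update()

# Self-play rediscovers Nim's known structure: positions where
# stones % 4 == 0 are lost for the player to move.
print(sorted(s for s, v in value.items() if v < 0))  # typically [4, 8, 12]
```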

« Ingesting data, predicting trends, and suggesting solutions is almost perfectly suited to DeepMind’s neural network expertise. While the National Grid is surely aware of some potential optimisations, a more rigorous investigation by a DeepMind AI may uncover solutions that the grid’s human operators have never considered. One thing’s for certain: a system as large as the UK grid has millions of inefficiencies ».

Source : DeepMind in talks with National Grid to reduce UK energy use by 10% | Ars Technica UK

“On StarCraft II, unlike chess and Go, which have already been beaten, there really is a physical factor, tied in part to executing the in-game tasks, that will have to be taken into account. An arbitrary choice will therefore have to be made to limit the machine’s capabilities. That choice could weigh heavily on the results we expect from the AI.” – Pierre-Marie Humeau, aka YoGo

Source : Google DeepMind : les joueurs de StarCraft II se sentent-ils prêts à affronter une IA ? – Sciences – Numerama

Suleyman also argued that the kind of general AI we see in movies today probably won’t look anything like the general AI systems we will get decades from now. “When it comes to imagining what the future will be like, a lot of that is fun and entertaining, but it doesn’t bear a great deal of resemblance to the systems that we are building,” he said. “I can’t really think of a film that makes me think: yeah – AI looks like that.”

Source : DeepMind’s Mustafa Suleyman says general AI is still a long way off | TechCrunch

« The AI vastly outperformed a professional lip-reader who attempted to decipher 200 randomly selected clips from the data set. The professional annotated just 12.4 per cent of words without any error. But the AI annotated 46.8 per cent of all words in the March to September data set.
The AI system was trained using some 5,000 hours from six different TV programmes, including Newsnight, BBC Breakfast and Question Time. In total, the videos contained 118,000 sentences ».

Source : Google’s DeepMind AI can lip-read TV shows better than a pro | New Scientist

[Figure: data-centre PUE over time, with ML control switched on and off]

“Our machine learning system was able to consistently achieve a 40 percent reduction in the amount of energy used for cooling, which equates to a 15 percent reduction in overall PUE overhead after accounting for electrical losses and other non-cooling inefficiencies. It also produced the lowest PUE the site had ever seen.”

Source : Google DeepMind
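
To connect the two percentages: PUE is total facility energy divided by IT energy, so the “overhead” is everything above 1.0 (cooling, electrical losses, and so on). Taken together, the quoted figures imply cooling made up roughly 15/40 ≈ 37.5% of that overhead. A minimal sketch of the arithmetic, where the starting PUE of 1.12 is an assumed example rather than a number from the source:

```python
# Relating "40% less cooling energy" to "15% less PUE overhead".
# The starting PUE below is an illustrative assumption, not a source figure.
it_energy = 1.0                        # normalise the IT load to 1
pue = 1.12                             # assumed example PUE
overhead = pue - 1.0                   # cooling + electrical losses, etc.
cooling = overhead * (0.15 / 0.40)     # cooling share implied by the quote
other = overhead - cooling

new_pue = it_energy + other + cooling * (1 - 0.40)   # 40% cooling cut
overhead_drop = (overhead - (new_pue - 1.0)) / overhead
print(f"PUE: {pue:.3f} -> {new_pue:.3f}, overhead down {overhead_drop:.0%}")
# -> PUE: 1.120 -> 1.102, overhead down 15%
```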
