Tag: artificial intelligence (page 1 of 12)

“With a unified model for a large number of languages, we run the risk of being mediocre for each language, which makes the problem challenging. Moreover, it’s difficult to get human-annotated data for many of the languages. Although SynthText has been helpful as a way to bootstrap training, it’s not yet a replacement for human-annotated data sets. We are therefore exploring ways to bridge the domain gap between our synthetic engine and real-world distribution of text on images”.

Source : Rosetta: Understanding text in images and videos with machine learning – Facebook Code

“DARPA envisions a future in which machines are more than just tools that execute human-programmed rules or generalize from human-curated data sets. Rather, the machines DARPA envisions will function more as colleagues than as tools. Towards this end, DARPA research and development in human-machine symbiosis sets a goal to partner with machines. Enabling computing systems in this manner is of critical importance because sensor, information, and communication systems generate data at rates beyond which humans can assimilate, understand, and act. Incorporating these technologies in military systems that collaborate with warfighters will facilitate better decisions in complex, time-critical, battlefield environments; enable a shared understanding of massive, incomplete, and contradictory information; and empower unmanned systems to perform critical missions safely and with high degrees of autonomy. DARPA is focusing its investments on a third wave of AI that brings forth machines that understand and reason in context”.

Source : AI Next Campaign

“The Amazon Echo as an anatomical map of human labor, data and planetary resources By Kate Crawford and Vladan Joler”

Source : Anatomy of an AI System

“Google and DeepMind agreed to clearly delimit the scope of the optimisation algorithms in order to avoid an operational incident. ‘Our operators are always in control and can choose to exit AI control mode at any time. In these scenarios, the control system will transfer from AI control to the on-site rules and heuristics that define the automation industry today,’ the company explains. By following this path, Google has, it turns out, cut its electricity bill: on average, the company says it has achieved energy savings of 30% even though the mechanism has only been deployed for a few months”.

Source : IA : Google confie les clés du refroidissement de ses data centers à ses algorithmes – Tech – Numerama
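The fallback behaviour the quote describes, where exiting AI mode hands control back to the site's pre-existing rules and heuristics, amounts to a simple supervisory switch. A minimal, entirely hypothetical sketch (none of these function names or thresholds come from Google):

```python
def heuristic_setpoint(temp_c):
    """Stand-in for the on-site rules/heuristics that predate the AI."""
    return 18.0 if temp_c > 27.0 else 22.0

def ai_setpoint(temp_c):
    """Stand-in for the learned controller's recommendation."""
    return 19.5  # pretend this value came from a model

def cooling_setpoint(temp_c, ai_mode, ai_recommendation_valid):
    """Supervisory switch: operators stay in control. Any exit from AI
    mode, or an invalid recommendation, falls back to site heuristics."""
    if ai_mode and ai_recommendation_valid:
        return ai_setpoint(temp_c)
    return heuristic_setpoint(temp_c)

print(cooling_setpoint(28.0, ai_mode=True, ai_recommendation_valid=True))   # 19.5
print(cooling_setpoint(28.0, ai_mode=False, ai_recommendation_valid=True))  # 18.0
```

The point of the design is that the fallback path is the old, well-understood controller, so leaving AI mode is always safe.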

“And what about Apple? Well, Apple does not keep your conversations. They are processed and then deleted. This means you have no history”.

Source : Google Home, Amazon Echo : comment effacer tout ce que vous avez dit à votre assistant personnel – Tech – Numerama

“We don’t just want this to be an academically interesting result – we want it to be used in real treatment. So our paper also takes on one of the key barriers for AI in clinical practice: the “black box” problem. For most AI systems, it’s very hard to understand exactly why they make a recommendation. That’s a huge issue for clinicians and patients who need to understand the system’s reasoning, not just its output – the why as well as the what.
Our system takes a novel approach to this problem, combining two different neural networks with an easily interpretable representation between them. The first neural network, known as the segmentation network, analyses the OCT scan to provide a map of the different types of eye tissue and the features of disease it sees, such as haemorrhages, lesions, irregular fluid or other symptoms of eye disease. This map allows eyecare professionals to gain insight into the system’s “thinking.” The second network, known as the classification network, analyses this map to present clinicians with diagnoses and a referral recommendation. Crucially, the network expresses this recommendation as a percentage, allowing clinicians to assess the system’s confidence in its analysis”.

Source : A major milestone for the treatment of eye disease | DeepMind
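The two-stage design DeepMind describes, a segmentation network producing an interpretable tissue map that a separate classification network then reads, can be sketched in a few lines. This is an illustrative mock-up, not DeepMind's architecture: the class lists, random "networks", and the frequency read-out are all invented for the example; only the pipeline shape (scan → inspectable map → referral probabilities) follows the quote.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tissue/pathology classes a segmentation network could emit.
TISSUE_CLASSES = ["healthy", "haemorrhage", "lesion", "fluid"]
REFERRALS = ["observation", "routine", "urgent"]

def segmentation_network(oct_scan):
    """Stand-in for the first network: assigns each voxel of the OCT
    volume a tissue/pathology class, yielding the interpretable map."""
    logits = rng.normal(size=oct_scan.shape + (len(TISSUE_CLASSES),))
    return logits.argmax(axis=-1)  # per-voxel class indices

def classification_network(tissue_map):
    """Stand-in for the second network: consumes only the tissue map
    (never the raw scan) and returns referral probabilities."""
    freqs = np.bincount(tissue_map.ravel(), minlength=len(TISSUE_CLASSES))
    freqs = freqs / freqs.sum()
    weights = rng.normal(size=(len(TISSUE_CLASSES), len(REFERRALS)))
    scores = freqs @ weights
    probs = np.exp(scores - scores.max())   # softmax -> confidence values
    return probs / probs.sum()

scan = rng.normal(size=(8, 64, 64))         # toy OCT volume
tissue_map = segmentation_network(scan)      # clinicians can inspect this
probs = classification_network(tissue_map)   # confidence per referral
for label, p in zip(REFERRALS, probs):
    print(f"{label}: {p:.0%}")
```

The key property the quote emphasises is that the intermediate map is human-readable, so the hand-off between the two networks is where clinicians can audit the system's "thinking".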

Two women outdoors looking at a mobile device using facial recognition technology.

“Microsoft announced Tuesday that it has updated its facial recognition technology with significant improvements in the system’s ability to recognize gender across skin tones. That improvement addresses recent concerns that commercially available facial recognition technologies more accurately recognized gender of people with lighter skin tones than darker skin tones, and that they performed best on males with lighter skin and worst on females with darker skin.
With the new improvements, Microsoft said it was able to reduce the error rates for men and women with darker skin by up to 20 times. For all women, the company said the error rates were reduced by nine times. Overall, the company said that, with these improvements, they were able to significantly reduce accuracy differences across the demographics”.

Source : Microsoft improves facial recognition to perform well across all skin tones, genders

“Our team of five neural networks, OpenAI Five, has started to defeat amateur human teams at Dota 2. While today we play with restrictions, we aim to beat a team of top professionals at The International in August subject only to a limited set of heroes. We may not succeed: Dota 2 is one of the most popular and complex esports games in the world, with creative and motivated professionals who train year-round to earn part of Dota’s annual $40M prize pool.
OpenAI Five plays 180 years worth of games against itself every day, learning via self-play. It trains using a scaled-up version of Proximal Policy Optimization running on 256 GPUs and 128,000 CPU cores — a larger-scale version of the system we built to play the much-simpler solo variant of the game last year. Using a separate LSTM for each hero and no human data, it learns recognizable strategies. This indicates that reinforcement learning can yield long-term planning with large but achievable scale — without fundamental advances, contrary to our own expectations upon starting the project”.

Source : OpenAI Five
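The post says OpenAI Five trains with a scaled-up Proximal Policy Optimization. The heart of PPO, as published, is a clipped surrogate objective that stops any single update from moving the policy too far from the one that gathered the data. A generic numpy sketch of that objective (not OpenAI's code, and omitting the value and entropy terms of the full loss):

```python
import numpy as np

def ppo_clip_objective(new_logp, old_logp, advantages, clip_eps=0.2):
    """Clipped surrogate objective from the PPO paper: the probability
    ratio new_pi/old_pi is clipped to [1-eps, 1+eps], and the update
    maximises the elementwise minimum (a pessimistic bound)."""
    ratio = np.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return np.minimum(unclipped, clipped).mean()

# Toy check: a ratio of 3 on a positive advantage is capped at 1 + eps.
obj = ppo_clip_objective(np.log([3.0]), np.log([1.0]), np.array([1.0]))
print(obj)  # 1.2
```

In self-play, both the "old" and "new" log-probabilities come from successive versions of the same policy playing against itself; the clipping is what keeps that feedback loop stable at the 256-GPU scale the post describes.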

“Creating slow-motion footage is all about capturing a large number of frames per second. If you don’t record enough, it becomes choppy and unwatchable as soon as you slow down your video — unless, that is, you use artificial intelligence to imagine the extra frames”.

Source : Nvidia uses artificial intelligence to fake realistic slow-motion video – The Verge
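The naive alternative to the learned interpolation described above is a simple cross-fade between neighbouring frames. A minimal sketch of that baseline, for contrast (illustrative only; Nvidia's method uses a neural network that models motion rather than blending pixels):

```python
import numpy as np

def blend_interpolate(frame_a, frame_b, n_new):
    """Naive frame interpolation: linear cross-fade between two frames.
    Moving objects ghost in place instead of travelling, which is exactly
    the artifact a learned interpolator avoids by estimating motion."""
    out = []
    for i in range(1, n_new + 1):
        t = i / (n_new + 1)               # position between the frames
        out.append((1 - t) * frame_a + t * frame_b)
    return out

# 30 fps -> 240 fps requires 7 synthesised frames between each recorded pair.
a = np.zeros((4, 4))
b = np.ones((4, 4))
mid = blend_interpolate(a, b, 7)
print(len(mid), mid[3][0, 0])  # 7 0.5
```

The blend is why under-recorded footage looks choppy or ghostly when slowed down, and why "imagining" genuinely new frames is a learning problem rather than an averaging one.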

“The team at Oak Ridge says Summit is the first supercomputer designed from the ground up to run AI applications, such as machine learning and neural networks. It has over 27,000 GPU chips from Nvidia, whose products have supercharged plenty of AI applications, and also includes some of IBM’s Power9 chips, which the company launched last year specifically for AI workloads. There’s also an ultrafast communications link for shipping data between these silicon workhorses.
Bob Picciano of IBM says all this allows Summit to run some applications up to 10 times faster than Titan while using only 50 percent more electrical power. Among the AI-related projects slated to run on the new supercomputer is one that will crunch through huge volumes of written reports and medical images to try to identify possible relationships between genes and cancer. Another will try to identify genetic traits that could predispose people to opioid addiction and other afflictions”.

Source : The world’s most powerful supercomputer is tailor made for the AI era – MIT Technology Review


© 2018 no-Flux
