Tag: machine learning

Really?

“We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training.”

Source : Better Language Models and Their Implications

“Historically, it has provided only one translation for a query, even if the translation could have either a feminine or masculine form. So when the model produced one translation, it inadvertently replicated gender biases that already existed. For example: it would skew masculine for words like “strong” or “doctor,” and feminine for other words, like “nurse” or “beautiful.””

Source : Google is fixing gender bias in its Translate service

“With a unified model for a large number of languages, we run the risk of being mediocre for each language, which makes the problem challenging. Moreover, it’s difficult to get human-annotated data for many of the languages. Although SynthText has been helpful as a way to bootstrap training, it’s not yet a replacement for human-annotated data sets. We are therefore exploring ways to bridge the domain gap between our synthetic engine and real-world distribution of text on images”.

Source : Rosetta: Understanding text in images and videos with machine learning – Facebook Code


“OpenAI Five lost two games against top Dota 2 players at The International in Vancouver this week, maintaining a good chance of winning for the first 20-35 minutes of both games”.

Source : The International 2018: Results

“Built by creative agency Redpepper, There’s Waldo zeroes in and finds Waldo with a sniper-like accuracy. The metal robotic arm is a Raspberry Pi-controlled uArm Swift Pro which is equipped with a Vision Camera Kit that allows for facial recognition. The camera takes a photo of the page, which then uses OpenCV to find the possible Waldo faces in the photo. The faces are then sent to be analyzed by Google’s AutoML Vision service, which has been trained on photos of Waldo. If the robot determines a match with 95 percent confidence or higher, it’ll point to all the Waldos it can find on the page”.

Source : This robot uses AI to find Waldo, thereby ruining Where’s Waldo – The Verge
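The matching step described above (detect candidate faces, score each against a trained classifier, keep only matches at 95 percent confidence or higher) can be sketched in a few lines of Python. The candidate list and scores below are illustrative stand-ins for the outputs of the OpenCV face detector and Google's AutoML Vision service that the robot actually uses.

```python
# Minimal sketch of the There's Waldo matching step: filter candidate
# detections by classifier confidence, mirroring the robot's
# "95 percent or higher" rule. Bounding boxes are (x, y, w, h) tuples.

CONFIDENCE_THRESHOLD = 0.95

def find_waldos(candidates):
    """Return bounding boxes whose 'is Waldo' confidence meets the threshold.

    candidates: list of (bounding_box, confidence) pairs, where confidence
    is the classifier's probability that the detected face is Waldo.
    """
    return [box for box, confidence in candidates
            if confidence >= CONFIDENCE_THRESHOLD]

# Example: three faces detected on a page, two scored as likely Waldo.
detections = [((120, 40, 32, 32), 0.98),   # confident match
              ((300, 85, 30, 30), 0.62),   # background face
              ((410, 200, 28, 28), 0.97)]  # confident match

print(find_waldos(detections))  # the two boxes at >= 0.95 confidence
```

In the real system this filter sits between the AutoML Vision classifier and the arm controller, which then points at every surviving box.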

“Research at Netflix is aimed at improving various aspects of our business. Research applications span many areas including our personalization algorithms, content valuation, and streaming optimization. To maximize the impact of our research, we do not centralize research into a separate organization. Instead, we have many teams that pursue research in collaboration with business teams, engineering teams, and other researchers.
This allows for close partnerships between researchers and the business or engineering teams in each area. In addition, research that applies to the same methodological area or business area is shared and highlighted in discussion and debate forums to strengthen the work and its impact. These forums also serve to identify and motivate future research directions”.

Source : Netflix Research

«Today, we collectively and continuously document our city experience on social media platforms, shaping a virtual city image. Multiplicity reveals a novel view of this photographic landscape of attention and interests. How does Paris look as seen through the lens of thousands of photographers? What are the hotspots of attraction, what are the neglected corners? What are recurring poses and tropes? And how well do the published pictures reflect your personal view of the city?» – Via Nicolas Nova

Source : Truth & Beauty – Multiplicity

«MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud. The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words».

Source : Computer system transcribes words users “speak silently” | MIT News
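The final step of that pipeline — correlating a signal pattern with a word — can be illustrated with a toy nearest-neighbour lookup. The feature vectors and tiny vocabulary below are invented for illustration and bear no relation to MIT's actual model, which trains a neural network on real electrode data.

```python
# Toy illustration of mapping neuromuscular-signal features to words.
# A nearest-neighbour lookup over invented 3-dimensional feature vectors
# stands in for the trained machine-learning system.

import math

# Hypothetical prototype feature vector for each vocabulary word.
PROTOTYPES = {
    "yes":  (0.9, 0.1, 0.2),
    "no":   (0.1, 0.8, 0.3),
    "stop": (0.4, 0.4, 0.9),
}

def transcribe(signal):
    """Return the vocabulary word whose prototype is closest to `signal`."""
    return min(PROTOTYPES,
               key=lambda word: math.dist(PROTOTYPES[word], signal))

print(transcribe((0.85, 0.15, 0.25)))  # closest to the "yes" prototype
```

A real system replaces both the hand-picked prototypes and the distance rule with parameters learned from labelled recordings.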


© 2024 no-Flux
