Tag: bias (Page 3 of 5)

 

“By expanding datasets, what we mostly risk is making the most vulnerable populations easier to control and to surveil! ‘Egalitarian surveillance is not equality!’ On the contrary: the risk is producing harms that are even more disproportionate than they already are against the most marginalized and most vulnerable groups. These systems are ‘dangerous when they fail, harmful when they work.’ ‘Improving an unjust system can only create greater harm.’”

Source : Kate Crawford : « l’IA est une nouvelle ingénierie du pouvoir » | InternetActu.net


“80 to 90 per cent of the predictive assessment was based on the algorithms’ analysis of candidates’ use of language and verbal skills. “There are 350-ish features that we look at in language: do you use passive or active words? Do you talk about ‘I’ or ‘We.’ What is the word choice or sentence length? In doctors, you might expect a good one to use more technical language,” he said. “Then we look at the tone of voice. If someone speaks really slowly, you are probably not going to stay on the phone to buy something from them. If someone speaks at 400 words a minute, people are not going to understand them. Empathy is a piece of that.” The company says the technology is different to facial recognition and instead analyses expressions. Facial expressions assessed by the algorithms include brow furrowing, brow raising, eye widening or closing, lip tightening, chin raising and smiling, which are important in sales or other public-facing jobs.”

Source : AI used for first time in job interviews in UK to find best applicants
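The vendor has not published its feature pipeline, but the surface language features named in the quote (passive vs. active constructions, “I” vs. “we”, sentence length, speaking rate) are easy to picture. Below is a minimal sketch under that assumption; the function, inputs and heuristics are illustrative, not the company’s.

```python
import re

def language_features(transcript: str, duration_seconds: float) -> dict:
    """Toy surface features of the kind described in the quote.

    `transcript` and `duration_seconds` are hypothetical inputs; real
    systems reportedly use hundreds of features, audio and video included.
    """
    words = re.findall(r"[A-Za-z']+", transcript.lower())
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]

    first_person_singular = sum(w in {"i", "me", "my"} for w in words)
    first_person_plural = sum(w in {"we", "us", "our"} for w in words)
    # Very rough passive-voice proxy: was/were/been/being + word ending in -ed.
    passive_hits = len(re.findall(r"\b(?:was|were|been|being)\s+\w+ed\b", transcript.lower()))

    return {
        "words_per_minute": 60.0 * len(words) / max(duration_seconds, 1e-6),
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        "i_vs_we_ratio": first_person_singular / max(first_person_plural, 1),
        "passive_constructions": passive_hits,
    }

print(language_features("We shipped the fix. It was reviewed by the team.", 12.0))
```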

“Historically, it has provided only one translation for a query, even if the translation could have either a feminine or masculine form. So when the model produced one translation, it inadvertently replicated gender biases that already existed. For example: it would skew masculine for words like “strong” or “doctor,” and feminine for other words, like “nurse” or “beautiful.””

Source : Google is fixing gender bias in its Translate service
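The fix described in the article is to return both a feminine and a masculine form instead of a single guess. A minimal sketch of that idea, with a hypothetical hard-coded English-to-Spanish lookup standing in for the real translation model:

```python
# Hypothetical lookup standing in for a real English->Spanish model:
# return both gendered forms instead of silently picking one.
GENDERED_TRANSLATIONS = {
    "doctor": {"feminine": "doctora", "masculine": "doctor"},
    "nurse": {"feminine": "enfermera", "masculine": "enfermero"},
}

def translate(word: str) -> dict:
    forms = GENDERED_TRANSLATIONS.get(word)
    if forms is None:
        raise KeyError(f"no entry for {word!r} in this toy dictionary")
    return forms

print(translate("doctor"))  # {'feminine': 'doctora', 'masculine': 'doctor'}
```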

“Smart Compose is an example of what AI developers call natural language generation (NLG), in which computers learn to write sentences by studying patterns and relationships between words in literature, emails and web pages. A system shown billions of human sentences becomes adept at completing common phrases but is limited by generalities. Men have long dominated fields such as finance and science, for example, so the technology would conclude from the data that an investor or engineer is “he” or “him.” The issue trips up nearly every major tech company.”

Source : Fearful of bias, Google blocks gender-based pronouns from new AI tool | Reuters
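As a toy illustration of how next-word prediction absorbs such associations from nothing but co-occurrence counts, here is a minimal bigram sketch (not Smart Compose’s actual model) trained on a deliberately skewed corpus:

```python
from collections import Counter, defaultdict

# Deliberately skewed toy corpus: "engineer" co-occurs with "he" far more often.
corpus = (
    "the engineer said he would review it . " * 9
    + "the engineer said she would review it . "
).split()

# Bigram counts: next-word frequencies for each word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(word: str) -> str:
    """Return the most frequent continuation seen in the training data."""
    return bigrams[word].most_common(1)[0][0]

print(complete("said"))  # 'he' -- the skew in the data becomes the prediction
```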

recruiting automation

“The group created 500 computer models focused on specific job functions and locations. They taught each to recognize some 50,000 terms that showed up on past candidates’ resumes. The algorithms learned to assign little significance to skills that were common across IT applicants, such as the ability to write various computer codes, the people said. […] With the technology returning results almost at random, Amazon shut down the project.”

Source : Amazon scraps secret AI recruiting tool that showed bias against women | Reuters
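The behaviour Reuters describes, assigning little significance to terms that appear on nearly every resume, is essentially inverse-document-frequency weighting. A minimal sketch with invented toy resumes (not Amazon’s data or model):

```python
import math
from collections import Counter

# Toy resumes: a term present in every document ("python") gets a zero IDF
# weight, while rarer terms get higher weights -- mirroring the reported
# discounting of skills common across all applicants.
resumes = [
    "python sql leadership",
    "python java chess",
    "python sql captain",
]

docs = [set(r.split()) for r in resumes]
n_docs = len(docs)
doc_freq = Counter(term for d in docs for term in d)

idf = {term: math.log(n_docs / df) for term, df in doc_freq.items()}

for term in ("python", "chess"):
    print(term, round(idf[term], 3))
# python 0.0   -> present in every resume, carries no weight
# chess 1.099  -> distinctive term, weighted up
```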

A screenshot of the internal Google meeting published by Breitbart.

“It is a major media coup for Breitbart, an ultraconservative, pro-Trump site once run by Steve Bannon, who served as campaign manager for the victorious candidate. It is also sweet vindication for a broad swath of American Republicans, who for months have been accusing the big Silicon Valley companies of censoring conservative opinions. These grievances, largely unfounded, have reached President Donald Trump himself, who very recently accused Google, as it happens, of diminishing his presence in its search results.”

Source : Une vidéo montre le choc des dirigeants de Google après l’élection de Donald Trump

Two women outdoors looking at a mobile device using facial recognition technology.

“Microsoft announced Tuesday that it has updated its facial recognition technology with significant improvements in the system’s ability to recognize gender across skin tones. That improvement addresses recent concerns that commercially available facial recognition technologies more accurately recognized gender of people with lighter skin tones than darker skin tones, and that they performed best on males with lighter skin and worst on females with darker skin.
With the new improvements, Microsoft said it was able to reduce the error rates for men and women with darker skin by up to 20 times. For all women, the company said the error rates were reduced by nine times. Overall, the company said that, with these improvements, they were able to significantly reduce accuracy differences across the demographics”.

Source : Microsoft improves facial recognition to perform well across all skin tones, genders
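A reduction “by up to 20 times” only has meaning relative to per-group error rates. A minimal sketch of how such a disparity audit is computed, with invented counts (not Microsoft’s figures):

```python
# Invented counts for illustration only: per-group misclassifications
# out of per-group totals, before and after a model update.
groups = {
    "darker-skin women": {"before": (210, 1000), "after": (10, 1000)},
    "lighter-skin men":  {"before": (6, 1000),   "after": (3, 1000)},
}

for name, runs in groups.items():
    rates = {k: errors / total for k, (errors, total) in runs.items()}
    factor = rates["before"] / rates["after"]
    print(f"{name}: {rates['before']:.1%} -> {rates['after']:.1%} "
          f"({factor:.0f}x lower error rate)")
```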

Deep Learning

«DL will not disagree with any data and will not figure out the injustices in society; to it, everything is just “data to learn”. You would have to hire dedicated human staff to create fake, fair data depicting an ideal society where white people are arrested as often as Black people, where 50% of directors are women, and so on. But the cost of creating vast amounts of de-biased data, edited by human experts just to train a DL model, makes it not worth replacing humans with AI in the first place! Further, even if you had trained a DL model that really is fair, you have no evidence to convince a judge or a user of the fairness of any decision, since the DL will give no explanations».

Source : Deep Learning is not the AI future
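The passage argues that fairness would have to be engineered into the training data itself. One naive version of that idea is to resample the data so that every group appears at the same rate; a minimal sketch on a toy dataset (real de-biasing is far more involved):

```python
import random

random.seed(0)

# Toy labelled examples with a skewed group distribution.
data = [{"group": "A", "label": 1}] * 80 + [{"group": "B", "label": 1}] * 20

def rebalance(rows, key="group"):
    """Upsample minority groups so every group is equally represented."""
    by_group = {}
    for row in rows:
        by_group.setdefault(row[key], []).append(row)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

balanced = rebalance(data)
print({g: sum(r["group"] == g for r in balanced) for g in ("A", "B")})  # {'A': 80, 'B': 80}
```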
