Tag: bias

Google Vision API

“Google notes in its own AI principles that algorithms and datasets can reinforce bias: ‘We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.’ Google invited affected developers to comment on its discussion forums. Only one developer had commented at the time of writing, and complained the change was down to ‘political correctness.’ ‘I don’t think political correctness has room in APIs,’ the person wrote. ‘If I can 99% of the times identify if someone is a man or woman, then so can the algorithm. You don’t want to do it? Companies will go to other services.’”

Source : Google AI will no longer use gender labels like ‘woman’ or ‘man’ on images of people to avoid bias


“‘Notability [criteria] are notoriously byzantine, to say it kindly,’ the anonymous editor says. They hope the push to reform the guidelines will help compensate for the historic underrepresentation of women and minorities, since it’s not just women who find their path into Wikipedia blocked. ‘A lot of prejudice is unconscious and intersectional,’ says Lubbock. ‘Wikipedia is dealing not just with a gender inequality issue, but also racial and geographical inequalities.’”

Source : Female scientists’ pages keep disappearing from Wikipedia – what’s going on? | News | Chemistry World

“Changing algorithms is easier than changing people: software on computers can be updated; the “wetware” in our brains has so far proven much less pliable. None of this is meant to diminish the pitfalls and care needed in fixing algorithmic bias. But compared with the intransigence of human bias, it does look a great deal simpler. Discrimination by algorithm can be more readily discovered and more easily fixed.”

Source : Biased Algorithms Are Easier to Fix Than Biased People – The New York Times


“Goldman Sachs denied allegations of gender bias and said on Monday that it will reevaluate credit limits for Apple Card users on a case-by-case basis for customers who received lower credit lines than expected. “We have not and never will make decisions based on factors like gender,” Carey Halio, Goldman’s retail bank CEO, said in a statement. “In fact, we do not know your gender or marital status during the Apple Card application process.” Halio said that customers unsatisfied with their line should contact the company. “Based on additional information we may request, we will re-evaluate your credit line,” the statement said.”

Source : Goldman Sachs to reevaluate Apple Card credit limits after bias claim

As researchers and engineers, our goal is to make machine learning technology work for everyone.

via Google – YouTube

A Facebook survey…

Four ways to trust, one extreme way not to trust; what can one say…

« Correlation is not causation »

“Data dredging (also data fishing, data snooping, data butchery, and p-hacking) is the misuse of data analysis to find patterns in data that can be presented as statistically significant when in fact there is no real underlying effect. This is done by performing many statistical tests on the data and only paying attention to those that come back with significant results, instead of stating a single hypothesis about an underlying effect before the analysis and then conducting a single test for it.”

Source : Data dredging – Wikipedia
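
To make the mechanism concrete, here is a minimal Python sketch of data dredging: run many significance tests on pure noise and report only the “hits”. The number of tests, the threshold and all variable names are illustrative assumptions, not taken from the article.

```python
# Illustrative sketch of data dredging (p-hacking): test many unrelated
# "hypotheses" on pure noise and keep only the "significant" ones.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests = 100      # number of unrelated hypotheses tested
alpha = 0.05       # conventional significance threshold
n_samples = 30     # observations per group

false_positives = []
for i in range(n_tests):
    # Both groups are drawn from the SAME distribution: no real effect exists.
    group_a = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    group_b = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < alpha:
        false_positives.append((i, p_value))

# With alpha = 0.05, roughly 5 of the 100 tests come back "significant"
# by chance alone; reporting only those is the dredging.
print(f"{len(false_positives)} 'significant' results found in pure noise:")
for i, p in false_positives:
    print(f"  test {i}: p = {p:.3f}")
```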

 

“By broadening datasets, what we mainly risk is making the most vulnerable populations easier to control and to surveil! ‘Egalitarian surveillance is not equality!’ Quite the opposite! The risk is that harms to the smallest minority and most vulnerable groups become even more disproportionate than they already are! These systems are ‘dangerous when they fail, harmful when they work.’ ‘Improving an unjust system can only create greater harm.’”

Source : Kate Crawford : « l’IA est une nouvelle ingénierie du pouvoir » | InternetActu.net


“80 to 90 per cent of the predictive assessment was based on the algorithms’ analysis of candidates’ use of language and verbal skills. “There are 350-ish features that we look at in language: do you use passive or active words? Do you talk about ‘I’ or ‘We.’ What is the word choice or sentence length? In doctors, you might expect a good one to use more technical language,” he said. “Then we look at the tone of voice. If someone speaks really slowly, you are probably not going to stay on the phone to buy something from them. If someone speaks at 400 words a minute, people are not going to understand them. Empathy is a piece of that.” The company says the technology is different to facial recognition and instead analyses expressions. Facial expressions assessed by the algorithms include brow furrowing, brow raising, eye widening or closing, lip tightening, chin raising and smiling, which are important in sales or other public-facing jobs.”

Source : AI used for first time in job interviews in UK to find best applicants
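
For illustration only, here is a minimal Python sketch of the kind of surface language features the excerpt mentions (first-person pronoun use, sentence length, speaking rate). It is not the vendor’s actual system; every function name, feature and guard value here is an assumption.

```python
# Purely illustrative: crude surface features from an interview transcript.
# NOT the actual hiring model described in the article.
import re

def surface_features(transcript: str, duration_seconds: float) -> dict:
    """Compute a few simple text/voice features from a transcript."""
    words = re.findall(r"[A-Za-z']+", transcript.lower())
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]

    i_words = sum(1 for w in words if w == "i")
    we_words = sum(1 for w in words if w == "we")

    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "i_ratio": i_words / max(len(words), 1),
        "we_ratio": we_words / max(len(words), 1),
        # Speaking rate in words per minute; the article suggests that
        # both very slow and ~400 wpm delivery score poorly.
        "words_per_minute": 60.0 * len(words) / max(duration_seconds, 1e-6),
    }

if __name__ == "__main__":
    sample = "We shipped the project early. I handled the client calls myself."
    print(surface_features(sample, duration_seconds=5.0))
```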
