Tag: bias

Google AI will no longer use gender labels like ‘woman’ or ‘man’ on images of people to avoid bias

Google Vision API

“Google notes in its own AI principles that algorithms and datasets can reinforce bias: ‘We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.’ Google invited affected developers to comment on its discussion forums. Only one developer had commented at the time of writing, and complained the change was down to ‘political correctness.’ ‘I don’t think political correctness has room in APIs,’ the person wrote. ‘If I can 99% of the times identify if someone is a man or woman, then so can the algorithm. You don’t want to do it? Companies will go to other services.’”

Source : Google AI will no longer use gender labels like ‘woman’ or ‘man’ on images of people to avoid bias

Female scientists’ pages keep disappearing from Wikipedia – what’s going on?


“‘Notability guidelines are notoriously byzantine, to say it kindly,’ the anonymous editor says. They hope the push to reform the guidelines will help compensate for the historic underrepresentation of women and minorities, since it’s not just women who find their path into Wikipedia blocked. ‘A lot of prejudice is unconscious and intersectional,’ says Lubbock. ‘Wikipedia is dealing not just with a gender inequality issue, but also racial and geographical inequalities.’”

Source : Female scientists’ pages keep disappearing from Wikipedia – what’s going on? | News | Chemistry World

Biased Algorithms Are Easier to Fix Than Biased People

“Changing algorithms is easier than changing people: software on computers can be updated; the “wetware” in our brains has so far proven much less pliable. None of this is meant to diminish the pitfalls and care needed in fixing algorithmic bias. But compared with the intransigence of human bias, it does look a great deal simpler. Discrimination by algorithm can be more readily discovered and more easily fixed.”

Source : Biased Algorithms Are Easier to Fix Than Biased People – The New York Times

Goldman Sachs to reevaluate Apple Card credit limits after bias claim


“Goldman Sachs denied allegations of gender bias and said on Monday that it will reevaluate credit limits for Apple Card users on a case-by-case basis for customers who received lower credit lines than expected. ‘We have not and never will make decisions based on factors like gender,’ Carey Halio, Goldman’s retail bank CEO, said in a statement. ‘In fact, we do not know your gender or marital status during the Apple Card application process.’ Halio said that customers unsatisfied with their line should contact the company. ‘Based on additional information we may request, we will re-evaluate your credit line,’ the statement said.”

Source : Goldman Sachs to reevaluate Apple Card credit limits after bias claim

Data dredging – Wikipedia

“Data dredging (also data fishing, data snooping, data butchery, and p-hacking) is the misuse of data analysis to find patterns in data that can be presented as statistically significant when in fact there is no real underlying effect. This is done by performing many statistical tests on the data and only paying attention to those that come back with significant results, instead of stating a single hypothesis about an underlying effect before the analysis and then conducting a single test for it.”

Source : Data dredging – Wikipedia
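The mechanism the Wikipedia excerpt describes can be simulated in a few lines. The sketch below (an illustration, not taken from the article) draws both groups from the same distribution, so there is no real effect, yet running 200 two-sample tests at the 5% level still produces a handful of "significant" results; reporting only those is exactly data dredging. The two-sample z-test here is a normal approximation chosen to keep the example standard-library-only.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(0)
norm = NormalDist()

def two_sample_p(a, b):
    # Two-sample z-test (normal approximation): under the null hypothesis
    # of equal means, z is approximately standard normal.
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - norm.cdf(abs(z)))  # two-sided p-value

# Both groups come from the SAME distribution: any "effect" is noise.
tests = 200
pvalues = []
for _ in range(tests):
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    pvalues.append(two_sample_p(a, b))

significant = [p for p in pvalues if p < 0.05]
print(f"{len(significant)} of {tests} tests came back 'significant'")
```

By construction roughly 5% of the tests (about 10 of 200) clear the 0.05 threshold despite there being nothing to find; the honest procedure the quote describes is the opposite: state one hypothesis in advance and run one test.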

Kate Crawford : « l’IA est une nouvelle ingénierie du pouvoir »


“By broadening the datasets, we mostly risk making the most vulnerable populations easier to control and to surveil! ‘Egalitarian surveillance is not equality!’ On the contrary! The risk is of inflicting harms on minority and vulnerable groups that are even more disproportionate than they already are. These systems are ‘dangerous when they fail, harmful when they work.’ ‘Improving an unjust system can only create greater harm.’”

Source : Kate Crawford : « l’IA est une nouvelle ingénierie du pouvoir » | InternetActu.net


© 2020 no-Flux
