
Racisme, sexisme : les IA peuvent-elles supprimer les discriminations dans les affaires judiciaires ?


“Whatever development paths are chosen, one thing is clear to Florence G. Sell, professor of private law at the Université de Lorraine: ‘making court decisions publicly available, combined with advances in Big Data tools, will allow a much more global and in-depth view of how the justice system works.’ In her view, the judicial institution has every interest in seizing these tools to improve its quality and efficiency. And if it does not, ‘other actors, such as lawyers or startups, will: they will then be the ones at the forefront of an evolution that is in any case irreversible.’”

Source : Racisme, sexisme : les IA peuvent-elles supprimer les discriminations dans les affaires judiciaires ?

What puzzles and poker teach us about misinformation | Financial Times

“My advice is simply to take note of your emotional reaction to each headline, sound bite or statistical claim. Is it joy, rage, triumph? Fine. But having noticed it, keep thinking. You may find clarity emerges once your emotions have been acknowledged. So what do puzzles, poker, and misinformation have in common? Some puzzles — and some poker hands — require enormous intellectual resources to navigate, and the same is true of certain subtle statistical fallacies. But much of the time we fool ourselves in simple ways and for simple reasons. Slow down, calm down, and the battle for truth is already half won.”

Source : What puzzles and poker teach us about misinformation | Financial Times

“Twitter said it was looking into why the neural network it uses to generate photo previews apparently chooses to show white people’s faces more frequently than Black faces. Several Twitter users demonstrated the issue over the weekend, posting examples of posts that had a Black person’s face and a white person’s face. Twitter’s preview showed the white faces more often.”

Source : Twitter is looking into why its photo preview appears to favor white faces over Black faces – The Verge


“Google notes in its own AI principles that algorithms and datasets can reinforce bias: ‘We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.’ Google invited affected developers to comment on its discussion forums. Only one developer had commented at the time of writing, and complained the change was down to ‘political correctness.’ ‘I don’t think political correctness has room in APIs,’ the person wrote. ‘If I can 99% of the times identify if someone is a man or woman, then so can the algorithm. You don’t want to do it? Companies will go to other services.’”

Source : Google AI will no longer use gender labels like ‘woman’ or ‘man’ on images of people to avoid bias


“‘Notability guidelines are notoriously byzantine, to say it kindly,’ the anonymous editor says. They hope the push to reform the guidelines will help compensate for the historic underrepresentation of women and minorities, since it’s not just women who find their path into Wikipedia blocked. ‘A lot of prejudice is unconscious and intersectional,’ says Lubbock. ‘Wikipedia is dealing not just with a gender inequality issue, but also racial and geographical inequalities.’”

Source : Female scientists’ pages keep disappearing from Wikipedia – what’s going on? | News | Chemistry World

“Changing algorithms is easier than changing people: software on computers can be updated; the “wetware” in our brains has so far proven much less pliable. None of this is meant to diminish the pitfalls and care needed in fixing algorithmic bias. But compared with the intransigence of human bias, it does look a great deal simpler. Discrimination by algorithm can be more readily discovered and more easily fixed.”

Source : Biased Algorithms Are Easier to Fix Than Biased People – The New York Times


“Goldman Sachs denied allegations of gender bias and said on Monday that it will reevaluate credit limits for Apple Card users on a case-by-case basis for customers who received lower credit lines than expected. ‘We have not and never will make decisions based on factors like gender,’ Carey Halio, Goldman’s retail bank CEO, said in a statement. ‘In fact, we do not know your gender or marital status during the Apple Card application process.’ Halio said that customers unsatisfied with their line should contact the company. ‘Based on additional information we may request, we will re-evaluate your credit line,’ the statement said.”

Source : Goldman Sachs to reevaluate Apple Card credit limits after bias claim


© 2021 no-Flux
