“Historically, it has provided only one translation for a query, even if the translation could have either a feminine or masculine form. So when the model produced one translation, it inadvertently replicated gender biases that already existed. For example: it would skew masculine for words like “strong” or “doctor,” and feminine for other words, like “nurse” or “beautiful”.”
Source : Google is fixing gender bias in its Translate service
“Smart Compose is an example of what AI developers call natural language generation (NLG), in which computers learn to write sentences by studying patterns and relationships between words in literature, emails and web pages. A system shown billions of human sentences becomes adept at completing common phrases but is limited by generalities. Men have long dominated fields such as finance and science, for example, so the technology would conclude from the data that an investor or engineer is “he” or “him.” The issue trips up nearly every major tech company. ”
Source : Fearful of bias, Google blocks gender-based pronouns from new AI tool | Reuters
“The group created 500 computer models focused on specific job functions and locations. They taught each to recognize some 50,000 terms that showed up on past candidates’ resumes. The algorithms learned to assign little significance to skills that were common across IT applicants, such as the ability to write various computer codes, the people said. […] With the technology returning results almost at random, Amazon shut down the project.”
Source : Amazon scraps secret AI recruiting tool that showed bias against women | Reuters
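The excerpt says the Amazon models "learned to assign little significance to skills that were common across IT applicants." A standard way to get that effect is inverse-document-frequency weighting; the sketch below is purely illustrative of the idea, not Amazon's actual (undisclosed) pipeline.

```python
import math

def idf_weights(resumes):
    """Inverse-document-frequency weights: terms appearing on nearly
    every resume get weight close to zero; rarer terms weigh more."""
    n = len(resumes)
    vocab = {term for resume in resumes for term in resume}
    weights = {}
    for term in vocab:
        df = sum(1 for resume in resumes if term in resume)
        weights[term] = math.log(n / df)
    return weights

# Toy resume collection, each represented as a set of terms.
resumes = [
    {"python", "java", "leadership"},
    {"python", "java", "databases"},
    {"python", "java", "compilers"},
]
w = idf_weights(resumes)
# "python" and "java" appear on every resume, so their weight is
# log(3/3) = 0; rarer terms such as "compilers" carry more weight.
```

The trouble the article describes follows directly: once ubiquitous skills are weighted down, the model leans on whatever residual terms correlate with past hiring, which is where gendered signals crept in.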
“It is a major media coup for Breitbart, an ultraconservative, pro-Trump site once run by Steve Bannon, who was campaign manager for the winning candidate. It is also music to the ears of a large segment of American Republicans, who for months have accused the big Silicon Valley companies of censoring conservative opinions. These grievances, largely unfounded, have made their way up to President Donald Trump, who just recently accused Google, precisely, of diminishing his presence in search results.”
Source : Une vidéo montre le choc des dirigeants de Google après l’élection de Donald Trump [A video shows the shock of Google executives after Donald Trump’s election]
“Microsoft announced Tuesday that it has updated its facial recognition technology with significant improvements in the system’s ability to recognize gender across skin tones. That improvement addresses recent concerns that commercially available facial recognition technologies more accurately recognized gender of people with lighter skin tones than darker skin tones, and that they performed best on males with lighter skin and worst on females with darker skin.
With the new improvements, Microsoft said it was able to reduce the error rates for men and women with darker skin by up to 20 times. For all women, the company said the error rates were reduced by nine times. Overall, the company said that, with these improvements, it was able to significantly reduce accuracy differences across the demographics.”
Source : Microsoft improves facial recognition to perform well across all skin tones, genders
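Figures like "reduce the error rates … by up to 20 times" come from disaggregated evaluation: computing the error rate separately for each demographic group rather than one overall number. A minimal sketch of such an evaluation (the group labels and data below are hypothetical, not Microsoft's):

```python
def error_rates_by_group(predictions, labels, groups):
    """Misclassification rate per demographic group."""
    totals, errors = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        if pred != label:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Hypothetical gender-classification results tagged by skin-tone group.
preds  = ["m", "f", "f", "m", "f", "m", "f", "f"]
labels = ["m", "f", "m", "m", "f", "m", "f", "m"]
groups = ["light", "light", "dark", "light", "dark", "dark", "light", "dark"]

rates = error_rates_by_group(preds, labels, groups)
# An aggregate accuracy number would hide the gap between the groups;
# per-group rates make the disparity the article describes visible.
```

Reporting the worst-group and best-group rates side by side is what revealed the original problem, and also what lets a vendor quantify an improvement as "up to 20 times."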
“DL will not disagree with any data and will not figure out the injustices in society; to it, everything is just ‘data to learn.’ You would have to hire dedicated human staff to create fake, fair data depicting an ideal society where white people are arrested as often as black people, where 50% of directors are women, and so on. But the cost of creating vast amounts of de-biased data edited by human experts, just to train a DL model, makes it not worth replacing humans with AI in the first place! Further, even if you had trained a DL model that really is fair, you would have no evidence to convince a judge or a user of the fairness of any decision, since the DL will give no explanations.”
Source : Deep Learning is not the AI future
“The tech industry is no longer the passion play of a bunch of geeks trying to do cool shit in the world. It’s now the foundation of our democracy, economy, and information landscape.” – danah boyd
Source : Your Data is Being Manipulated – Data & Society: Points
“Google’s AI chief isn’t fretting about super-intelligent killer robots. Instead, John Giannandrea is concerned about the danger that may be lurking inside the machine-learning algorithms used to make millions of decisions every minute.”
Source : Forget Killer Robots—Bias Is the Real AI Danger – MIT Technology Review
“The process of mathematically defining “fair” decision-making metrics also forces us to pin down tradeoffs between fairness and accuracy that must be faced and have sometimes been swept under the carpet by policy-makers. It makes us rethink what it really means to treat all groups equally—in some cases equal treatment may only be possible by learning different group-specific criteria. There is an entirely new field emerging at the intersection of computer science, law, and ethics. It will not only lead to fairer algorithms, but also to algorithms which track accountability, and make clear which factors contributed to a decision. There’s much reason to be hopeful!” – Jennifer T. Chayes
Source : How Machine Learning Advances Will Improve the Fairness of Algorithms | HuffPost
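Chayes's point about "mathematically defining 'fair' decision-making metrics" can be made concrete with one of the standard criteria from the fairness literature, demographic parity: requiring that the rate of positive decisions be the same across groups. The data below is invented for illustration.

```python
def demographic_parity_gap(decisions, groups):
    """Difference between the highest and lowest positive-decision
    rates across groups. Zero means all groups are approved at the
    same rate (demographic parity)."""
    rates = {}
    for g in set(groups):
        picks = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(picks) / len(picks)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions (1 = approved) for two groups.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(decisions, groups)
# Group "a" is approved at 3/4 = 0.75, group "b" at 1/4 = 0.25,
# so the parity gap is 0.5.
```

The tradeoff Chayes mentions is that enforcing a gap of zero can conflict with maximizing accuracy, and with other fairness criteria such as equalized odds; which metric to enforce is exactly the kind of policy choice the quote says can no longer be swept under the carpet.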