Tag: bias (page 1 of 2)

recruiting automation

“The group created 500 computer models focused on specific job functions and locations. They taught each to recognize some 50,000 terms that showed up on past candidates’ resumes. The algorithms learned to assign little significance to skills that were common across IT applicants, such as the ability to write various computer codes, the people said. […] With the technology returning results almost at random, Amazon shut down the project.”

Source : Amazon scraps secret AI recruiting tool that showed bias against women | Reuters

A screenshot of Google's internal meeting, published by Breitbart.

“It is a major media coup for Breitbart, an ultraconservative, pro-Trump site once run by Steve Bannon, who served as campaign manager for the winning candidate. It is also music to the ears of a large segment of American Republicans, who for months have accused the big Silicon Valley companies of censoring conservative opinions. These grievances, largely unfounded, have made their way up to President Donald Trump, who very recently accused Google, precisely, of downplaying his presence in search results.”

Source : A video shows the shock of Google's executives after Donald Trump's election

Two women outdoors looking at a mobile device using facial recognition technology.

“Microsoft announced Tuesday that it has updated its facial recognition technology with significant improvements in the system’s ability to recognize gender across skin tones. That improvement addresses recent concerns that commercially available facial recognition technologies more accurately recognized gender of people with lighter skin tones than darker skin tones, and that they performed best on males with lighter skin and worst on females with darker skin.
With the new improvements, Microsoft said it was able to reduce the error rates for men and women with darker skin by up to 20 times. For all women, the company said the error rates were reduced by nine times. Overall, the company said that, with these improvements, they were able to significantly reduce accuracy differences across the demographics”.

Source : Microsoft improves facial recognition to perform well across all skin tones, genders
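The disparity Microsoft describes can be measured by breaking error rates down per demographic group and comparing them. A minimal sketch in Python; all labels, predictions, and group names below are invented for illustration and are not Microsoft's data:

```python
# Measuring classification error rates per demographic group.
# All labels and predictions below are invented for illustration.

def error_rate(labels, predictions):
    """Fraction of predictions that disagree with the true labels."""
    errors = sum(1 for y, p in zip(labels, predictions) if y != p)
    return errors / len(labels)

# (true gender labels, model predictions), grouped by skin tone
groups = {
    "lighter": ([1, 0, 1, 1, 0, 1, 0, 0], [1, 0, 1, 1, 0, 1, 0, 1]),
    "darker":  ([1, 0, 1, 1, 0, 1, 0, 0], [1, 1, 1, 0, 0, 1, 1, 0]),
}

rates = {g: error_rate(y, p) for g, (y, p) in groups.items()}
# ratio of the worst group's error rate to the best group's:
# this is the gap the article says was reduced "by up to 20 times"
disparity = rates["darker"] / rates["lighter"]
```

On this toy data the darker-skin group's error rate is three times the lighter-skin group's; shrinking that ratio toward 1 is what "reducing accuracy differences across the demographics" means in practice.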

Barack Obama and Prince Harry

«All of us in leadership have to find ways in which we can recreate a common space on the internet,» he said. «One of the dangers of the internet is that people can have entirely different realities. They can be just cocooned in information that reinforces their current biases.» – Barack Obama.

Source : Prince Harry interviews Barack Obama on Radio 4: Ex-President warns social media is corroding civil discourse

Deep Learning

«DL will not disagree with any data and will not figure out the injustices in society; to it, everything is just “data to learn”. You would have to hire a dedicated human staff to create fake fair data for an ideal society where white people are arrested as often as black people, where 50% of directors are women, and so on. But the cost of creating vast amounts of de-biased data edited by human experts, just to train a DL model, makes it not worth replacing humans with AI in the first place! Further, even if you had trained a DL model that really is fair, you would have no evidence to convince a judge or a user of the fairness of any decision, since the DL gives no explanations».

Source : Deep Learning is not the AI future
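The "fake fair data" the quote dismisses can also be approximated without hand-editing any records, by reweighting existing examples so that group membership and outcome look statistically independent to the learner (the reweighing idea of Kamiran & Calders). A sketch on invented toy records; the groups, outcomes, and counts are all illustrative:

```python
# Sketch of instance reweighing (Kamiran & Calders style): give each
# (group, outcome) pair a weight so that group and outcome appear
# statistically independent in the training data. Toy data, invented here.
from collections import Counter

records = [
    # (group, outcome): 1 = favourable decision
    ("a", 1), ("a", 1), ("a", 1), ("a", 0),
    ("b", 1), ("b", 0), ("b", 0), ("b", 0),
]

n = len(records)
group_counts = Counter(g for g, _ in records)
outcome_counts = Counter(y for _, y in records)
pair_counts = Counter(records)

def weight(group, outcome):
    """Expected count under independence divided by observed count."""
    expected = group_counts[group] * outcome_counts[outcome] / n
    return expected / pair_counts[(group, outcome)]

weights = {pair: weight(*pair) for pair in pair_counts}
```

With these weights, both groups' weighted favourable-outcome rates become equal (here, 50% each), which sidesteps the staffing cost the quote objects to, though it does nothing to address the quote's second point about explainability.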


The Anatomy of a Large-Scale Hypertextual Web Search Engine

«The tech industry is no longer the passion play of a bunch of geeks trying to do cool shit in the world. It’s now the foundation of our democracy, economy, and information landscape» – danah boyd.

Source : Your Data is Being Manipulated – Data & Society: Points

«Google’s AI chief isn’t fretting about super-intelligent killer robots. Instead, John Giannandrea is concerned about the danger that may be lurking inside the machine-learning algorithms used to make millions of decisions every minute».

Source : Forget Killer Robots—Bias Is the Real AI Danger – MIT Technology Review

« The process of mathematically defining “fair” decision-making metrics also forces us to pin down tradeoffs between fairness and accuracy that must be faced and have sometimes been swept under the carpet by policy-makers. It makes us rethink what it really means to treat all groups equally—in some cases equal treatment may only be possible by learning different group-specific criteria. There is an entirely new field emerging at the intersection of computer science, law, and ethics. It will not only lead to fairer algorithms, but also to algorithms which track accountability, and make clear which factors contributed to a decision. There’s much reason to be hopeful! » – Jennifer T. Chayes.

Source : How Machine Learning Advances Will Improve the Fairness of Algorithms | HuffPost
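Chayes's "group-specific criteria" can be made concrete: instead of one global score cutoff, each group gets its own threshold chosen so that selection rates match. A sketch with invented scores and group names:

```python
# Sketch: equalising selection rates with group-specific thresholds.
# Scores and group names are invented for illustration.

scores = {
    "a": [0.9, 0.8, 0.7, 0.4, 0.3, 0.2],
    "b": [0.6, 0.5, 0.4, 0.3, 0.2, 0.1],
}

def threshold_for_rate(group_scores, rate):
    """Cutoff selecting roughly `rate` of the group's candidates."""
    k = round(rate * len(group_scores))
    return sorted(group_scores, reverse=True)[k - 1]

# A single global cutoff of 0.5 selects 3/6 of group a but only 2/6 of b.
global_rates = {g: sum(s >= 0.5 for s in ss) / len(ss)
                for g, ss in scores.items()}

# Group-specific thresholds instead equalise selection at 50% per group.
thresholds = {g: threshold_for_rate(ss, 0.5) for g, ss in scores.items()}
```

This is exactly the fairness-accuracy tradeoff the quote says must be faced: the group-specific cutoffs equalise selection rates, but they do so by admitting lower-scoring candidates from one group than from the other.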

Percentage of adults per county who think …

Global warming maps

Source : How Americans Think About Climate Change, in Six Maps – The New York Times

We now live in environments that are less and less dangerous, more and more pacified, yet our brain still forces us to detect the slightest alarm signal, because, for millions of years, this strategy of lumping things together has served it far better than the subtlety of "maybe, we should check whether, really and beyond any reasonable doubt, there is actually fire beneath this smoke".
Let us not forget: we are the descendants of apes spooked by blades of grass stirred by the wind, because it took only one hyena bursting out of the bushes for their skeptical, phlegmatic companions to be wiped off the map. Today, green is the colour in which the human eye distinguishes the most shades, and we are still genetically predisposed to prefer a received idea over a fact patiently solidified by a difficult, tedious and often discouraging accumulation of evidence that demands literally counter-intuitive mental capacities.

Source : Why take offence at post-truth? It's our brain's default mode | Slate.fr


© 2018 no-Flux
