Tag: bias (page 1 of 2)

Barack Obama and Prince Harry

«All of us in leadership have to find ways in which we can recreate a common space on the internet,» he said. «One of the dangers of the internet is that people can have entirely different realities. They can be just cocooned in information that reinforces their current biases.» – Barack Obama.

Source : Prince Harry interviews Barack Obama on Radio 4: Ex-President warns social media is corroding civil discourse

Deep Learning

«DL will not disagree with any data and will not figure out the injustices in society: to it, everything is just “data to learn”. You would have to hire dedicated human staff to create fake, fair data describing an ideal society where white people are arrested as often as black people, where 50% of directors are women, and so on. But the cost of creating vast amounts of de-biased data curated by human experts, just to train a DL model, makes it not worth replacing humans with AI in the first place! Further, even if you had trained a DL model that really is fair, you would have no evidence with which to convince a judge or a user of the fairness of any decision, since the DL gives no explanations.»

Source : Deep Learning is not the AI future

The Anatomy of a Large-Scale Hypertextual Web Search Engine

«The tech industry is no longer the passion play of a bunch of geeks trying to do cool shit in the world. It’s now the foundation of our democracy, economy, and information landscape» – danah boyd.

Source : Your Data is Being Manipulated – Data & Society: Points

«Google’s AI chief isn’t fretting about super-intelligent killer robots. Instead, John Giannandrea is concerned about the danger that may be lurking inside the machine-learning algorithms used to make millions of decisions every minute».

Source : Forget Killer Robots—Bias Is the Real AI Danger – MIT Technology Review

«The process of mathematically defining “fair” decision-making metrics also forces us to pin down tradeoffs between fairness and accuracy that must be faced and have sometimes been swept under the carpet by policy-makers. It makes us rethink what it really means to treat all groups equally—in some cases equal treatment may only be possible by learning different group-specific criteria. There is an entirely new field emerging at the intersection of computer science, law, and ethics. It will not only lead to fairer algorithms, but also to algorithms which track accountability, and make clear which factors contributed to a decision. There’s much reason to be hopeful!» – Jennifer T. Chayes.

Source : How Machine Learning Advances Will Improve the Fairness of Algorithms | HuffPost
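One simple instance of the «mathematically defining “fair” decision-making metrics» idea in the quote above is demographic parity: comparing the rate of positive decisions a model gives to each group. The sketch below is a minimal illustration on hypothetical toy data, not a reconstruction of any method from the article.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    # Difference in positive-decision rates between the two groups:
    # 0 means both groups are accepted at the same rate.
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical data: two demographic groups and a model's accept/reject decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_pred = np.array([1, 1, 0, 1, 1, 0, 0, 0])

print(demographic_parity_diff(y_pred, group))  # 0.5 → far from parity
```

Forcing this number toward zero can cost accuracy, which is exactly the fairness–accuracy tradeoff the quote says must be faced explicitly; the «group-specific criteria» it mentions would correspond here to choosing a different decision threshold per group.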

[Global warming maps: percentage of adults per county who think …]

Source : How Americans Think About Climate Change, in Six Maps – The New York Times

We now live in environments that are less and less dangerous, more and more pacified, yet our brain still forces us to detect the slightest alarm signal, because for millions of years this strategy of lumping things together has served it far better than the subtlety of "maybe, we should check whether, really and beyond all reasonable doubt, there is actually fire at the foot of that smoke."
Let us not forget: we are the descendants of monkeys spooked by blades of grass stirred by the wind, for it took only a single hyena bursting out of the bushes for their skeptical, phlegmatic companions to be wiped off the map. Today, green is the color in which the human eye distinguishes the most shades, and we are still genetically predisposed to prefer a received idea over a fact patiently solidified by a difficult, laborious, and often discouraging accumulation of evidence that demands literally counter-intuitive mental capacities.

Source : Pourquoi s’offusquer de la post-vérité? C’est le mode par défaut de notre cerveau | Slate.fr

While machine-learning technology can offer unexpected insights and new forms of convenience, we must address the current implications for communities that have less power, for those who aren’t dominant in elite Silicon Valley circles. Currently the loudest voices debating the potential dangers of superintelligence are affluent white men, and, perhaps for them, the biggest threat is the rise of an artificially intelligent apex predator. But for those who already face marginalization or bias, the threats are here.

Source : Artificial Intelligence’s White Guy Problem – The New York Times

«The media did not question the opinion polls that confirmed their feeling that Donald Trump had no chance of winning. They portrayed the Trump supporters who still believed in his chances as cut off from reality. In the end, it was the other way around.»

Source : Comment la victoire de Donald Trump a-t-elle pu échapper aux sondages et aux médias ?

If AI learns language sufficiently well, it will also learn cultural associations that are offensive, objectionable, or harmful. At a high level, bias is meaning. “Debiasing” these machine models, while intriguing and technically interesting, necessarily harms meaning.

Source : Language necessarily contains human biases, and so will machines trained on language corpora
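The claim that «bias is meaning» can be made concrete with the kind of association test the source paper uses: measuring whether a word's embedding sits closer to one attribute set (e.g. pleasant words) than another. The sketch below uses tiny hand-made vectors as hypothetical stand-ins for real corpus-trained embeddings such as GloVe; the word choices and values are assumptions for illustration only.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # s(w, A, B): mean similarity of w to attribute set A minus to set B,
    # the per-word building block of a WEAT-style test
    return float(np.mean([cosine(w, a) for a in A]) -
                 np.mean([cosine(w, b) for b in B]))

# Toy 3-d "embeddings" (hypothetical, not trained on any corpus)
pleasant   = [np.array([1.0, 0.1, 0.0]), np.array([0.9, 0.2, 0.1])]
unpleasant = [np.array([0.0, 1.0, 0.1]), np.array([0.1, 0.9, 0.0])]
flower = np.array([0.95, 0.15, 0.05])  # points toward the "pleasant" cluster
insect = np.array([0.05, 0.95, 0.05])  # points toward the "unpleasant" cluster

print(association(flower, pleasant, unpleasant))  # positive → leans pleasant
print(association(insect, pleasant, unpleasant))  # negative → leans unpleasant
```

With real embeddings the same score surfaces the offensive associations the quote warns about (e.g. names or occupations leaning toward one attribute set), and "debiasing" means deliberately zeroing out such directions, which is why the authors argue it necessarily harms meaning.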


© 2018 no-Flux
