Tag: bias (Page 4 of 5)

« The process of mathematically defining “fair” decision-making metrics also forces us to pin down tradeoffs between fairness and accuracy that must be faced and have sometimes been swept under the carpet by policy-makers. It makes us rethink what it really means to treat all groups equally—in some cases equal treatment may only be possible by learning different group-specific criteria. There is an entirely new field emerging at the intersection of computer science, law, and ethics. It will not only lead to fairer algorithms, but also to algorithms which track accountability, and make clear which factors contributed to a decision. There’s much reason to be hopeful! » – Jennifer T. Chayes.

Source: How Machine Learning Advances Will Improve the Fairness of Algorithms | HuffPost

We now live in environments that are less and less dangerous, more and more pacified, yet our brain still compels us to detect the slightest alarm signal, because, for millions of years, this strategy of jumping to conclusions has served it far better than the subtlety of "maybe, we should check whether, really and beyond any reasonable doubt, there is actually fire beneath this smoke."
Let us not forget: we are the descendants of apes spooked by blades of grass stirred by the wind, for it took only one hyena bursting out of the bushes for their skeptical, phlegmatic companions to be wiped off the map. Today, green is the color in which the human eye distinguishes the most shades, and we are still genetically predisposed to prefer a received idea over a fact patiently solidified by a difficult, tedious, and often discouraging accumulation of evidence that demands mental capacities which are literally counter-intuitive.

Source: Pourquoi s’offusquer de la post-vérité? C’est le mode par défaut de notre cerveau | Slate.fr

While machine-learning technology can offer unexpected insights and new forms of convenience, we must address the current implications for communities that have less power, for those who aren’t dominant in elite Silicon Valley circles. Currently the loudest voices debating the potential dangers of superintelligence are affluent white men, and, perhaps for them, the biggest threat is the rise of an artificially intelligent apex predator. But for those who already face marginalization or bias, the threats are here.

Source: Artificial Intelligence’s White Guy Problem – The New York Times

According to sources, the Trending team’s editorial staff were alerted at 4pm that they were being fired—as the news of Facebook’s switch to algorithms first broke—and were asked to leave the building by 5pm.

However, removing human writers from Trending doesn’t necessarily eliminate bias. Human bias can be embedded into algorithms, and it is extremely difficult to strip out.

Source: Facebook (FB) fired its Trending editors, apparently trying to get rid of bias by getting rid of humans — Quartz

For the last couple of years, librarians have talked about a “20th century black hole” when trying to describe the effect that copyright has on making cultural heritage available online (the concept appears to have been first used publicly by Prof. James Boyle in a 2009 column for the Financial Times).
At Europeana, we are able to show the 20th century black hole in our dataset by looking at the temporal distribution of works within it. We did so in a first analysis in May 2012, and we have just repeated this exercise at the request of the European Commission, which is looking for evidence to assess the impact of copyright on the online availability of cultural heritage. Just as in 2012, we are seeing the concept of the 20th century black hole confirmed in our data.

Source: The missing decades: the 20th century black hole in Europeana – Europeana Professional


© 2024 no-Flux
