«Google’s AI chief isn’t fretting about super-intelligent killer robots. Instead, John Giannandrea is concerned about the danger that may be lurking inside the machine-learning algorithms used to make millions of decisions every minute».
« The process of mathematically defining “fair” decision-making metrics also forces us to pin down tradeoffs between fairness and accuracy that must be faced and have sometimes been swept under the carpet by policy-makers. It makes us rethink what it really means to treat all groups equally—in some cases equal treatment may only be possible by learning different group-specific criteria. There is an entirely new field emerging at the intersection of computer science, law, and ethics. It will not only lead to fairer algorithms, but also to algorithms which track accountability, and make clear which factors contributed to a decision. There’s much reason to be hopeful! » – Jennifer T. Chayes.
We now live in environments that are less and less dangerous, more and more pacified, yet our brain still forces us to detect the slightest alarm signal, because, for millions of years, this strategy of jumping to conclusions has served it far better than the subtlety of “maybe, we should check whether, really and beyond any reasonable doubt, there is actually fire beneath this smoke.”
Let us not forget: we are the descendants of apes spooked by blades of grass stirred by the wind, for it took only a single hyena bursting out of the bushes to wipe their skeptical, phlegmatic companions off the map. Today, green is the color in which the human eye distinguishes the most shades, and we are still genetically predisposed to prefer a received idea to a fact patiently solidified by a difficult, painful, and often discouraging accumulation of evidence demanding mental capacities that are literally counter-intuitive.
While machine-learning technology can offer unexpected insights and new forms of convenience, we must address the current implications for communities that have less power, for those who aren’t dominant in elite Silicon Valley circles. Currently the loudest voices debating the potential dangers of superintelligence are affluent white men, and, perhaps for them, the biggest threat is the rise of an artificially intelligent apex predator. But for those who already face marginalization or bias, the threats are here.
« The media did not question the opinion polls that confirmed their feeling that Donald Trump had no chance of winning. They portrayed the Trump supporters who still believed in his chances as cut off from reality. In the end, it was the other way around ».
If AI learns language sufficiently well, it will also learn cultural associations that are offensive, objectionable, or harmful. At a high level, bias is meaning. “Debiasing” these machine models, while intriguing and technically interesting, necessarily harms meaning.
According to sources, the Trending team’s editorial staff were alerted at 4pm that they were being fired—as the news of Facebook’s switch to algorithms first broke—and were asked to leave the building by 5pm.
However, removing human writers from Trending doesn’t necessarily eliminate bias. Human bias can be embedded into algorithms, and it can be extremely difficult to strip out.
From an Uber rider’s perspective, Keith says, a round-number surge like two times looks like the company is just slapping on a higher price tag because it’s raining. But when it’s 2.1 times as much, we assume there must be a complex algorithm (which there is) coming up with that figure. The ride, then, is surely worth 2.1 times as much.
For the last couple of years librarians have talked about a 20th century black hole when trying to describe the effect that copyright has on making cultural heritage available online (it appears that the concept was first used publicly by Prof. James Boyle in a 2009 column for the Financial Times).
At Europeana we are able to show the 20th century black hole in our dataset by looking at the temporal distribution of works within it. We did so in a first analysis in May 2012, and we have just repeated the exercise at the request of the European Commission, which is looking for evidence to assess the impact of copyright on the online availability of cultural heritage. Just as in 2012, we see the concept of the 20th century black hole confirmed in our data.
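The temporal-distribution analysis described above can be sketched in a few lines. This is a minimal illustration, not Europeana’s actual pipeline: it assumes each record has been reduced to a (title, year) pair, and it simply bins works by decade so that a dip in the in-copyright decades becomes visible.

```python
from collections import Counter

def decade_histogram(records):
    """Count works per decade from (title, year) records.

    Hypothetical input format: the real dataset carries much richer
    metadata; here each record is just a (title, year) pair.
    """
    counts = Counter((year // 10) * 10 for _, year in records)
    return dict(sorted(counts.items()))

# Toy sample: pre-1900 works dominate; the 20th-century "black hole"
# would appear as a dip in the in-copyright decades.
sample = [
    ("Engraving", 1880), ("Photograph", 1890), ("Map", 1890),
    ("Poster", 1910), ("Newsreel", 1950),
]
print(decade_histogram(sample))
```

Plotting such a histogram over the full dataset is what makes the drop-off after 1900 immediately visible.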