Tag: bias (Page 1 of 4)

Tessellated TikTok logos against a dark background.

“A member of the Stanford Behavioral Laboratory posted on a Prolific forum, “We have noticed a huge leap in the number of participants on the platform in the US Pool, from 40k to 80k. Which is great, however, now a lot of our studies have a gender skew where maybe 85% of participants are women. Plus the age has been averaging around 21.” Wayne State psychologist Hannah Schechter seems to have been the first person to crack the case. “This may be far-fetched,” she tweeted, linking to Frank’s video, “but given the timing, virality of the video, and the user’s follower demographics….” Long-standing Prolific survey-takers complained on Reddit that Frank had made it difficult to find paid surveys to take on the overrun platform.”

Source : A teenager on TikTok disrupted thousands of scientific studies with a single video – The Verge

AI Explorables | PAIR

“The rapidly increasing usage of machine learning raises complicated questions: How can we tell if models are fair? Why do models make the predictions that they do? What are the privacy implications of feeding enormous amounts of data into models? This ongoing series of interactive, formula-free essays will walk you through these important concepts.”

Source : AI Explorables | PAIR

LaMDA: our breakthrough conversation technology

An animation demonstrating how language is processed by LaMDA technology.

“These early results are encouraging, and we look forward to sharing more soon, but sensibleness and specificity aren’t the only qualities we’re looking for in models like LaMDA. We’re also exploring dimensions like “interestingness,” by assessing whether responses are insightful, unexpected or witty. Being Google, we also care a lot about factuality (that is, whether LaMDA sticks to facts, something language models often struggle with), and are investigating ways to ensure LaMDA’s responses aren’t just compelling but correct. But the most important question we ask ourselves when it comes to our technologies is whether they adhere to our AI Principles. Language might be one of humanity’s greatest tools, but like all tools it can be misused. Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information. And even when the language it’s trained on is carefully vetted, the model itself can still be put to ill use.  ”

Source : LaMDA: our breakthrough conversation technology

Using AI to help find answers to common skin conditions

“To make sure we’re building for everyone, our model accounts for factors like age, sex, race and skin types — from pale skin that does not tan to brown skin that rarely burns. We developed and fine-tuned our model with de-identified data encompassing around 65,000 images and case data of diagnosed skin conditions, millions of curated skin concern images and thousands of examples of healthy skin — all across different demographics.  Recently, the AI model that powers our tool successfully passed clinical validation, and the tool has been CE marked as a Class I medical device in the EU.”

Source : Using AI to help find answers to common skin conditions
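The excerpt above treats inclusiveness as something to verify rather than assume. A minimal sketch of that idea, using entirely hypothetical records rather than Google's data, is to report accuracy per skin-type subgroup instead of only in aggregate:

```python
from collections import defaultdict

# Hypothetical (skin type, true condition, predicted condition) records.
# They illustrate the bookkeeping only; they are not Google's data.
records = [
    ("I-II", "eczema", "eczema"),
    ("I-II", "psoriasis", "psoriasis"),
    ("III-IV", "eczema", "eczema"),
    ("III-IV", "acne", "eczema"),
    ("V-VI", "psoriasis", "acne"),
    ("V-VI", "acne", "acne"),
]

totals, correct = defaultdict(int), defaultdict(int)
for skin_type, y_true, y_pred in records:
    totals[skin_type] += 1
    correct[skin_type] += int(y_true == y_pred)

# Aggregate accuracy can hide a weak subgroup, so report both.
overall = sum(correct.values()) / sum(totals.values())
print(f"overall accuracy: {overall:.2f}")
for skin_type in sorted(totals):
    print(f"  {skin_type}: {correct[skin_type] / totals[skin_type]:.2f} (n={totals[skin_type]})")
```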

Racism, sexism: can AI eliminate discrimination in judicial cases?

A courtroom in a courthouse.

“Whatever development paths are chosen, one thing is clear in the eyes of Florence G. Sell, professor of private law at the Université de Lorraine: ‘making court decisions available, combined with advances in Big Data tools, will allow a much broader and deeper view of how the justice system operates.’ For the expert, the judicial institution has every interest in taking up these tools to improve its quality and efficiency. And if it does not, ‘other actors, such as lawyers or startups, will: they will then be the ones at the forefront of an evolution that is in any case irreversible.’”

Source : Racism, sexism: can AI eliminate discrimination in judicial cases?

What puzzles and poker teach us about misinformation | Financial Times

“My advice is simply to take note of your emotional reaction to each headline, sound bite or statistical claim. Is it joy, rage, triumph? Fine. But having noticed it, keep thinking. You may find clarity emerges once your emotions have been acknowledged. So what do puzzles, poker, and misinformation have in common? Some puzzles — and some poker hands — require enormous intellectual resources to navigate, and the same is true of certain subtle statistical fallacies. But much of the time we fool ourselves in simple ways and for simple reasons. Slow down, calm down, and the battle for truth is already half won.”

Source : What puzzles and poker teach us about misinformation | Financial Times

“Twitter said it was looking into why the neural network it uses to generate photo previews apparently chooses to show white people’s faces more frequently than Black faces. Several Twitter users demonstrated the issue over the weekend, posting examples of posts that had a Black person’s face and a white person’s face. Twitter’s preview showed the white faces more often.”

Source : Twitter is looking into why its photo preview appears to favor white faces over Black faces – The Verge
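The informal test described above lends itself to a simple quantitative version: for a set of composite images each containing one Black face and one white face, count which face the automatic crop keeps and compare the split to an unbiased 50/50 baseline. The counts below are made up for illustration, not measurements of Twitter's system:

```python
from scipy.stats import binomtest

# Hypothetical tally: in how many of the paired images did the preview
# crop keep the white face? (Illustrative numbers only.)
kept_white, n_pairs = 38, 50

# An unbiased cropper should keep each face about half the time.
result = binomtest(kept_white, n_pairs, p=0.5, alternative="two-sided")
print(f"white face kept in {kept_white}/{n_pairs} pairs "
      f"({kept_white / n_pairs:.0%}), p = {result.pvalue:.4f}")
```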

Google Vision API

“Google notes in its own AI principles that algorithms and datasets can reinforce bias: ‘We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.’ Google invited affected developers to comment on its discussion forums. Only one developer had commented at the time of writing, and complained the change was down to ‘political correctness.’ ‘I don’t think political correctness has room in APIs,’ the person wrote. ‘If I can 99% of the times identify if someone is a man or woman, then so can the algorithm. You don’t want to do it? Companies will go to other services.’”

Source : Google AI will no longer use gender labels like ‘woman’ or ‘man’ on images of people to avoid bias
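For context, the excerpt refers to the labels returned by Cloud Vision's label detection. A minimal sketch of such a call with the google-cloud-vision Python client (assuming credentials are configured and a local `photo.jpg` exists) would look like the following; after the change described above, a photo of a person should come back with non-gendered labels such as “Person”:

```python
from google.cloud import vision

# Assumes GOOGLE_APPLICATION_CREDENTIALS points at a valid service account key.
client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:  # any local image of a person
    image = vision.Image(content=f.read())

# Request generic labels describing the image content.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```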

“‘Notability guidelines are notoriously byzantine, to say it kindly,’ the anonymous editor says. They hope the push to reform the guidelines will help compensate for the historic underrepresentation of women and minorities, since it’s not just women who find their path into Wikipedia blocked. ‘A lot of prejudice is unconscious and intersectional,’ says Lubbock. ‘Wikipedia is dealing not just with a gender inequality issue, but also racial and geographical inequalities.’”

Source : Female scientists’ pages keep disappearing from Wikipedia – what’s going on? | News | Chemistry World
