Month: May 2023 (Page 3 of 3)

Google « We Have No Moat, And Neither Does OpenAI »


“At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given. A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc. etc. many of which build on each other. Most importantly, they have solved the scaling problem to the extent that anyone can tinker. Many of the new ideas are from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”

Source : Google « We Have No Moat, And Neither Does OpenAI »
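The memo's closing claim, that training and experimentation now fit on "one person, an evening, and a beefy laptop", refers to the recipe the open-source community converged on: a quantized, frozen base model plus a small low-rank adapter (LoRA). Below is a minimal sketch of that idea, assuming the Hugging Face transformers, peft and bitsandbytes libraries; the checkpoint name and hyperparameters are illustrative assumptions, not taken from the memo.

# Sketch only: 4-bit quantized base model + LoRA adapter (assumed setup, not from the memo)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base = "openlm-research/open_llama_7b"  # illustrative LLaMA-style open checkpoint

# Load the frozen base model in 4-bit precision so it fits in consumer GPU memory.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")

# Attach a small low-rank adapter: only these few million weights are trained.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the total parameters

Because only the adapter weights are updated while the quantized base stays frozen, fine-tuning an instruction-following variant becomes an evening's work on a single consumer GPU, which is the cost collapse the memo describes.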

How Europe wants to regulate artificial intelligence very quickly


“Their designers will, among other things, have to prevent the creation of illegal content and of summaries of copyright-protected data, and must not train algorithms on protected content. OpenAI, the publisher of ChatGPT, as well as its competitors, will also have to assess and limit risks and register in the EU database.
New bans have been decided on other AI systems: no emotion-recognition systems, and no scraping of biometric data from social media or video surveillance to build facial-recognition databases. The fields of application are therefore very broad.”

Source : How Europe wants to regulate artificial intelligence very quickly – Le Temps

30 years of a free and open Web | CERN

“Exactly 30 years ago, on 30 April 1993, CERN made an important announcement. Walter Hoogland and Helmut Weber, respectively the Director of Research and Director of Administration at the time, decided to publicly release the tool that Tim Berners-Lee had first proposed in 1989 to allow scientists and institutes working on CERN data all over the globe to share information accurately and quickly. Little did they know how much it would change the world. On this day in 1993, CERN released the World Wide Web to the public. Now, it is an integral feature of our daily lives: according to the International Telecommunications Union, more than 5 billion people, two thirds of the worldwide population, rely on the internet regularly for research, industry, communications and entertainment. “Most people would agree that the public release was the best thing we could have done, and that it was the source of the success of the World Wide Web,” says Walter Hoogland, co-signatory of the document that proclaimed the Web’s release, “apart from, of course, the World Wide Web itself!””

Source : 30 years of a free and open Web | CERN

‘The Godfather of AI’ Quits Google and Warns of Danger Ahead


“Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop.”

Source : ‘The Godfather of AI’ Quits Google and Warns of Danger Ahead – The New York Times
