Étiquette : chatgpt

Wikilegal/Copyright Analysis of ChatGPT

“It is important to note that Creative Commons licenses allow for free reproduction and reuse, so AI programs like ChatGPT might copy text from a Wikipedia article or an image from Wikimedia Commons. However, it is not clear yet whether massively copying content from these sources may result in a violation of the Creative Commons license if attribution is not granted. Overall, it is more likely than not if current precedent holds that training systems on copyrighted data will be covered by fair use in the United States, but there is significant uncertainty at time of writing. ”

Source : Wikilegal/Copyright Analysis of ChatGPT – Meta

The Hacking of ChatGPT Is Just Getting Started

https://i0.wp.com/www.beaude.net/no-flux/wp-content/uploads/2023/05/security_jailbreaking_chatgpt_ai.jpg?resize=676%2C380&ssl=1

“It took Alex Polyakov just a couple of hours to break GPT-4. When OpenAI released the latest version of its text-generating chatbot in March, Polyakov sat down in front of his keyboard and started entering prompts designed to bypass OpenAI’s safety systems. Soon, the CEO of security firm Adversa AI had GPT-4 spouting homophobic statements, creating phishing emails, and supporting violence. Polyakov is one of a small number of security researchers, technologists, and computer scientists developing jailbreaks and prompt injection attacks against ChatGPT and other generative AI systems.
The process of jailbreaking aims to design prompts that make the chatbots bypass rules around producing hateful content or writing about illegal acts, while closely-related prompt injection attacks can quietly insert malicious data or instructions into AI models. Both approaches try to get a system to do something it isn’t designed to do.
The attacks are essentially a form of hacking—albeit unconventionally—using carefully crafted and refined sentences, rather than code, to exploit system weaknesses. While the attack types are largely being used to get around content filters, security researchers warn that the rush to roll out generative AI systems opens up the possibility of data being stolen and cybercriminals causing havoc across the web.”

Source : The Hacking of ChatGPT Is Just Getting Started | WIRED UK

AI has better ‘bedside manner’ than some doctors, study finds

Doctor & Patient ChatGPT

“ChatGPT appears to have a better ‘bedside manner’ than some doctors – at least when their written advice is rated for quality and empathy, a study has shown. The findings highlight the potential for AI assistants to play a role in medicine, according to the authors of the work, who suggest such agents could help draft doctors’ communications with patients. “The opportunities for improving healthcare with AI are massive,” said Dr John Ayers, of the University of California San Diego. However, others noted that the findings do not mean ChatGPT is actually a better doctor and cautioned against delegating clinical responsibility given that the chatbot has a tendency to produce “facts” that are untrue.”

Source : AI has better ‘bedside manner’ than some doctors, study finds | Artificial intelligence (AI) | The Guardian

Yann Le Cun, «parrain de l’IA» : «L’intelligence artificielle conduira peut-être à un nouveau siècle des Lumières»

https://i0.wp.com/www.beaude.net/no-flux/wp-content/uploads/2023/04/V4VMZ3OSMRFBTIISSODVQUZWQQ.jpg?w=676&ssl=1

“«Certains ont parlé dans des termes trop excessifs des dangers possibles des systèmes d’IA jusqu’à la «destruction de l’humanité». Mais l’IA en tant qu’amplificateur de l’intelligence humaine conduira peut-être à une espèce de nouvelle renaissance, un nouveau siècle des Lumières avec une accélération du progrès scientifique, peut-être du progrès social. Ça fait peur, comme toute technologie qui risque de déstabiliser et de changer la société.
«C’était le cas pour l’imprimerie. L’Eglise catholique disait que ça détruirait la société, mais elle s’est plutôt améliorée. Ça a engendré bien sûr l’apparition du mouvement protestant et des guerres de religion pendant un ou deux siècles en Europe, mais a également permis l’essor du siècle des Lumières, de la philosophie, le rationalisme, la science, la démocratie, la révolution américaine et la révolution française… Il n’y aurait pas eu ça sans l’imprimerie. A la même époque, au XVe siècle, l’Empire ottoman a interdit l’usage de l’imprimerie. Ils avaient trop peur d’une déstabilisation possible de la société et de la religion. La conséquence est que l’Empire ottoman a pris 250 ans de retard dans le progrès scientifique et social, ce qui a grandement contribué à son déclin. Alors qu’au Moyen Age, l’Empire ottoman a été dominant dans la science. Donc on a le risque, certainement en Europe, mais aussi dans certaines parties du monde, de faire face à un nouveau déclin si on est trop frileux sur le déploiement de l’intelligence artificielle.»”

Source : Yann Le Cun, «parrain de l’IA» : «L’intelligence artificielle conduira peut-être à un nouveau siècle des Lumières» – Libération

Are universities to slow to cope with Generative AI?

https://i0.wp.com/www.beaude.net/no-flux/wp-content/uploads/2023/04/AI-Policy-LSE-Impact.png?w=676&ssl=1

“Part of the problem is that even a singular system like ChatGPT encompasses a dizzying array of use cases for academics, students and administrators that are still in the process of being discovered. Its underlying capacities are expanding at a seemingly faster rate than universities are able to cope with, evidenced in the launch of GPT-4 (and the hugely significant ChatGPT plug in architecture), all while universities are still grappling with GPT-3.5. Furthermore, generative AI is a broader category than ChatGPT with images, videos, code, music and voice likely to hit mainstream awareness with the same force over the coming months and years.
In what Filip Vostal and I have described as the Accelerated Academy, the pace of working life increases (albeit unevenly), but policymaking still moves too slowly to cope. In siloed and centralised universities there is a recurrent problem of a distance from practice, where policies are formulated and procedures developed with too little awareness of on the ground realities. When the use cases of generative AI and the problems it generates are being discovered on a daily basis, we urgently need mechanisms to identify and filter these issues from across the university in order to respond in a way which escapes the established time horizons of the teaching and learning bureaucracy”

Source : Are universities to slow to cope with Generative AI? | Impact of Social Sciences

What We Still Don’t Know About How A.I. Is Trained

https://i0.wp.com/www.beaude.net/no-flux/wp-content/uploads/2023/04/Halpern_final.gif?w=676&ssl=1

“According to OpenAI’s charter, its mission is “to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.” Leaving aside the question of whether AGI is achievable, or if outsourcing work to machines will benefit all of humanity, it is clear that large-language A.I. engines are creating real harms to all of humanity right now. According to an article in Science for the People, training an A.I. engine requires tons of carbon-emitting energy. “While a human being is responsible for five tons of CO2 per year, training a large neural LM [language model] costs 284 tons. In addition, since the computing power required to train the largest models has grown three hundred thousand times in six years, we can only expect the environmental consequences of these models to increase.””

Source : What We Still Don’t Know About How A.I. Is Trained | The New Yorker

Intelligence artificielle: L’attentisme de la Suisse face à ChatGPT irrite des élus

https://i0.wp.com/www.beaude.net/no-flux/wp-content/uploads/2023/04/3yN7RyHf4PMACrDWu2E6gr.jpg?resize=676%2C450&ssl=1

“D’après le conseiller national vaudois, «on ne sait pas comment marche ChatGPT, quels sont ses biais et qui le contrôle». Samuel Bendahan trouve l’attentisme suisse insuffisant. Il réclame la création d’un centre de compétence dédié. «Le champ d’action du préposé fédéral à la protection des données n’est pas assez étendu pour traiter toutes les problématiques liées aux IA», regrette-t-il. Si la Suisse est en retard en matière de nouvelles technologies, il estime que c’est parce que les élus ne savent pas forcément comment elles marchent et, «plus grave, qu’il y a une volonté de ne pas réguler par pure idéologie, par haine de la régulation pure. Or il est dangereux de ne rien faire et de laisser le privé agir de lui-même.»”

Source : Intelligence artificielle: L’attentisme de la Suisse face à ChatGPT irrite des élus | 24 heures

CNET Is Experimenting With an AI Assist. Here’s Why

“The goal: to see if the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective. Will this AI engine efficiently assist them in using publicly available facts to create the most helpful content so our audience can make better decisions? Will this enable them to create even more deeply researched stories, analyses, features, testing and advice work we’re known for?
I use the term « AI assist » because while the AI engine compiled the story draft or gathered some of the information in the story, every article on CNET – and we publish thousands of new and updated stories each month – is reviewed, fact-checked and edited by an editor with topical expertise before we hit publish. That will remain true as our policy no matter what tools or tech we use to create those stories. And per CNET policy, if we find any errors after we publish, we will publicly correct the story.
Our reputation as a fact-based, unbiased source of news and advice is based on being transparent about how we work and the sources we rely on. So in the past 24 hours, we’ve changed the byline to CNET Money and moved our disclosure so you won’t need to hover over the byline to see it: « This story was assisted by an AI engine and reviewed, fact-checked and edited by our editorial staff. » We always note who edited the story so our audience understands which expert influenced, shaped and fact-checked the article.”

Source : CNET Is Experimenting With an AI Assist. Here’s Why – CNET

© 2024 no-Flux

Theme by Anders NorenUp ↑