Tag: deep learning (Page 2 of 10)

Pause Giant AI Experiments: An Open Letter

“Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states that « At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models. » We agree. That point is now.”

Source : Pause Giant AI Experiments: An Open Letter – Future of Life Institute

GPT-4

“We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.”

Source : GPT-4
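As a rough illustration of the multimodal interface described in the announcement above (this sketch is not from OpenAI’s post), the minimal Python snippet below sends a text question plus an image to a GPT-4-class model through OpenAI’s chat completions API and prints the text that comes back. The model name, image URL and question are placeholders, and image input required separate access when GPT-4 launched.

# Illustrative sketch only: text + image in, text out.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY environment variable;
# the model name, image URL and question are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for any GPT-4-class model with vision support
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/figure.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)  # the model emits text only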

🌱 --seeds in Midjourney: What they are, why they’re useful, where to find them, & when/how to use them

“Why are --seeds useful in MJ? Using the same prompt + seed in MJ v4 (the current default) will produce identical images. This is VERY useful when building a prompt, b/c it lets you visualize the impact of any addition/change you make (like here, with lighting)”
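As a small, hypothetical illustration of that workflow: the two prompts below differ only in their lighting description, and because they reuse the same --seed value (and the same model version) the overall composition should stay comparable, so the effect of the lighting change is easy to isolate. The prompt wording and the seed value 1234 are made up for the example; /imagine, --v and --seed are Midjourney’s own parameters.

/imagine prompt: a red fox in a snowy forest, soft morning light --v 4 --seed 1234
/imagine prompt: a red fox in a snowy forest, dramatic neon lighting --v 4 --seed 1234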

China’s chatbots, like Baidu’s Ernie, grapple with tech and censorship

“But an AI bot cannot always distinguish between helpful and hateful content. According to George Washington University’s Ding, after ChatGPT was trained by digesting the 175 billion parameters that inform it, parent company OpenAI still needed to employ several dozen human contractors to teach it not to regurgitate racist and misogynist speech or to give instructions on how to do things like build a bomb. This human-trained version, called InstructGPT, is the framework behind the chat bot. No similar effort has been announced for Baidu’s Ernie Bot or any of the other Chinese projects in the works, Ding said. Even with a robust content management team in place at Baidu, it may not be enough. Zhao, the former Baidu employee, said the company originally dedicated just a handful of engineers to the development of its AI framework. “Baidu’s AI research was slowed by a lack of commitment in a risk-ridden field that promised little return in the short term,” she said.”

Source : China’s chatbots, like Baidu’s Ernie, grapple with tech and censorship – The Washington Post

ChatGPT allowed in International Baccalaureate essays

« While the prospect of ChatGPT-based cheating has alarmed teachers and the academic profession, Matt Glanville, the IB’s head of assessment principles and practice, said the chatbot should be embraced as “an extraordinary opportunity”. However, Glanville told the Times, the responses must be treated as any other source in essays.
“The clear line between using ChatGPT and providing original work is exactly the same as using ideas taken from other people or the internet. As with any quote or material adapted from another source, it must be credited in the body of the text and appropriately referenced in the bibliography,” he said.
The IB is taken by thousands of children every year in the UK at more than 120 schools. Glanville said essay writing would feature less prominently in the qualifications process in the future because of the rise of chatbot technology.
“Essay writing is, however, being profoundly challenged by the rise of new technology and there’s no doubt that it will have much less prominence in the future.”
He added: “When AI can essentially write an essay at the touch of a button, we need our pupils to master different skills, such as understanding if the essay is any good or if it has missed context, has used biased data or if it is lacking in creativity. These will be far more important skills than writing an essay, so the assessment tasks we set will need to reflect this.” »

Source : ChatGPT allowed in International Baccalaureate essays | English baccalaureate | The Guardian

Fighting ‘Woke AI,’ Musk Recruits Team to Develop OpenAI Rival

“Elon Musk has approached artificial intelligence researchers in recent weeks about forming a new research lab to develop an alternative to ChatGPT, the high-profile chatbot made by the startup OpenAI, according to two people with direct knowledge of the effort and a third person briefed on the conversations. In recent months Musk has repeatedly criticized OpenAI for installing safeguards that prevent ChatGPT from producing text that might offend users. Musk, who co-founded OpenAI in 2015 but has since cut ties with the startup, suggested last year that OpenAI’s technology was an example of “training AI to be woke.” His comments imply that a rival chatbot would have fewer restrictions on divisive subjects compared to ChatGPT and a related chatbot Microsoft recently launched. To spearhead the effort, Musk has been recruiting Igor Babuschkin, a researcher who recently left Alphabet’s DeepMind AI unit and specializes in the kind of machine-learning models that power chatbots like ChatGPT. ”

Source : Fighting ‘Woke AI,’ Musk Recruits Team to Develop OpenAI Rival — The Information

At Saclay, the master’s programme that trains the elite of artificial intelligence specialists

“« It is very hard to fight to keep them in an academic environment. One of my doctoral students did a summer internship at Tesla, which made him an offer before he had even finished his thesis », he explains. The student declined, but Tesla came back for him once his doctorate was completed, raising a job offer that already came to around 500,000 euros a year, not counting stock options. Matthieu Cord can amuse himself listing his PhD students who have left for Deepmind, Facebook and, above all, Apple: « The best leave quickly at the end of their thesis; those who start publishing are very soon on the radar of the digital giants and, after that, it’s over, we no longer keep them. » Once the young graduates have been « absorbed », ties loosen, all the more so as some companies impose a form of code of silence on their researchers and employees.
Some students in the MVA class of 2022 are now also questioning their « responsibility » and « societal role » in the design of algorithms. A course on responsible machine learning was created at the start of the 2021 academic year to answer this aspiration; 60 students expressed interest in around thirty available places. Mathis Clautier refuses to put his intelligence at the service of robotics intended for war. He is well aware that Boston Dynamics, a robotics start-up made famous by its humanoid robots and owned by Google from 2013 to 2017, had collaborated with the American defence research programme, and that one of its quadruped robots, « Spot », made its debut with the French army in 2021.”

Source : A Saclay, le master qui forme l’élite des spécialistes en intelligence artificielle

Google AI updates: Bard and new AI features in Search

“We’ve been working on an experimental conversational AI service, powered by LaMDA, that we’re calling Bard. And today, we’re taking another step forward by opening it up to trusted testers ahead of making it more widely available to the public in the coming weeks. Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models. It draws on information from the web to provide fresh, high-quality responses. Bard can be an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills.”

Source : Google AI updates: Bard and new AI features in Search

CNET Is Experimenting With an AI Assist. Here’s Why

“The goal: to see if the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective. Will this AI engine efficiently assist them in using publicly available facts to create the most helpful content so our audience can make better decisions? Will this enable them to create even more deeply researched stories, analyses, features, testing and advice work we’re known for?
I use the term « AI assist » because while the AI engine compiled the story draft or gathered some of the information in the story, every article on CNET – and we publish thousands of new and updated stories each month – is reviewed, fact-checked and edited by an editor with topical expertise before we hit publish. That will remain true as our policy no matter what tools or tech we use to create those stories. And per CNET policy, if we find any errors after we publish, we will publicly correct the story.
Our reputation as a fact-based, unbiased source of news and advice is based on being transparent about how we work and the sources we rely on. So in the past 24 hours, we’ve changed the byline to CNET Money and moved our disclosure so you won’t need to hover over the byline to see it: « This story was assisted by an AI engine and reviewed, fact-checked and edited by our editorial staff. » We always note who edited the story so our audience understands which expert influenced, shaped and fact-checked the article.”

Source : CNET Is Experimenting With an AI Assist. Here’s Why – CNET
