“Why are --seeds useful in MJ? Using the same prompt + seed in MJ v4 (the current default) will produce identical images. This is VERY useful when building a prompt, because it lets you visualize the impact of any addition or change you make (like here, with lighting)”
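The mechanics behind this are the same as seeding any pseudo-random sampler. A minimal Python sketch (a toy stand-in, not Midjourney's actual sampler; `generate_image` is a hypothetical function invented here for illustration):

```python
import random

def generate_image(prompt: str, seed: int, size: int = 4) -> list[list[int]]:
    """Toy stand-in for an image sampler: with a fixed seed, the
    pseudo-random noise (and hence the 'image') is fully deterministic."""
    rng = random.Random(f"{prompt}|{seed}")  # seed a local RNG, not the global one
    return [[rng.randrange(256) for _ in range(size)] for _ in range(size)]

# Same prompt + same seed -> identical output, so any edit to the prompt
# is the only thing that changes between runs.
a = generate_image("cat, cinematic lighting", seed=42)
b = generate_image("cat, cinematic lighting", seed=42)
c = generate_image("cat, cinematic lighting", seed=7)
assert a == b   # identical seed reproduces the image exactly
assert a != c   # a different seed gives different noise, hence a different image
```

This is why holding the seed fixed isolates the effect of a single prompt change, as the quote describes with lighting.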
“But an AI bot cannot always distinguish between helpful and hateful content. According to George Washington University’s Ding, after the 175-billion-parameter model behind ChatGPT was trained, parent company OpenAI still needed to employ several dozen human contractors to teach it not to regurgitate racist and misogynist speech or to give instructions on how to do things like build a bomb. This human-trained version, called InstructGPT, is the framework behind the chatbot. No similar effort has been announced for Baidu’s Ernie Bot or any of the other Chinese projects in the works, Ding said. Even with a robust content management team in place at Baidu, it may not be enough. Zhao, the former Baidu employee, said the company originally dedicated just a handful of engineers to the development of its AI framework. “Baidu’s AI research was slowed by a lack of commitment in a risk-ridden field that promised little return in the short term,” she said.”
« While the prospect of ChatGPT-based cheating has alarmed teachers and the academic profession, Matt Glanville, the IB’s head of assessment principles and practice, said the chatbot should be embraced as “an extraordinary opportunity”. However, Glanville told the Times, its responses must be treated like any other source in essays.
“The clear line between using ChatGPT and providing original work is exactly the same as using ideas taken from other people or the internet. As with any quote or material adapted from another source, it must be credited in the body of the text and appropriately referenced in the bibliography,” he said.
The IB is taken by thousands of children at more than 120 schools in the UK every year. Glanville said essay writing would feature less prominently in the qualifications process in the future because of the rise of chatbot technology.
“Essay writing is, however, being profoundly challenged by the rise of new technology and there’s no doubt that it will have much less prominence in the future.”
He added: “When AI can essentially write an essay at the touch of a button, we need our pupils to master different skills, such as understanding if the essay is any good or if it has missed context, has used biased data or if it is lacking in creativity. These will be far more important skills than writing an essay, so the assessment tasks we set will need to reflect this.” »
“Elon Musk has approached artificial intelligence researchers in recent weeks about forming a new research lab to develop an alternative to ChatGPT, the high-profile chatbot made by the startup OpenAI, according to two people with direct knowledge of the effort and a third person briefed on the conversations. In recent months Musk has repeatedly criticized OpenAI for installing safeguards that prevent ChatGPT from producing text that might offend users. Musk, who co-founded OpenAI in 2015 but has since cut ties with the startup, suggested last year that OpenAI’s technology was an example of “training AI to be woke.” His comments imply that a rival chatbot would have fewer restrictions on divisive subjects compared to ChatGPT and a related chatbot Microsoft recently launched. To spearhead the effort, Musk has been recruiting Igor Babuschkin, a researcher who recently left Alphabet’s DeepMind AI unit and specializes in the kind of machine-learning models that power chatbots like ChatGPT. ”
““It’s very hard to fight to keep them in an academic environment. One of my PhD students did a summer internship at Tesla, which made him an offer before he had even finished his thesis,” he explains. The student declined, but Tesla came back for him once his doctorate was completed, raising its offer, which was close to 500,000 euros a year, not counting stock options. Matthieu Cord can amuse himself listing his doctoral students who have left for Deepmind, Facebook and, above all, Apple: “The best leave quickly at the end of their thesis; those who start publishing are very soon on the radar of the digital giants, and after that it’s over, we can’t keep them.” Once the young graduates have been “absorbed,” ties loosen, all the more so since some companies impose a form of code of silence on their researchers and employees.
Some students in the MVA class of 2022 are now also questioning their “responsibility” and their “societal role” in the design of algorithms. A course on responsible machine learning was created at the start of the 2021 academic year to answer that aspiration; 60 students expressed interest in the roughly thirty places available. Mathis Clautier refuses to put his intelligence at the service of robotics intended for war. He is well aware that Boston Dynamics, a robotics start-up made famous by its humanoid robots, owned by Google from 2013 to 2017, had collaborated with the American defense research program, and that one of its quadruped robots, “Spot,” made its debut with the French army in 2021.”
“We’ve been working on an experimental conversational AI service, powered by LaMDA, that we’re calling Bard. And today, we’re taking another step forward by opening it up to trusted testers ahead of making it more widely available to the public in the coming weeks. Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models. It draws on information from the web to provide fresh, high-quality responses. Bard can be an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills.”
“The goal: to see if the tech can help our busy staff of reporters and editors with their job of covering topics from a 360-degree perspective. Will this AI engine efficiently assist them in using publicly available facts to create the most helpful content so our audience can make better decisions? Will it enable them to create even more of the deeply researched stories, analyses, features, testing and advice work we’re known for?
I use the term “AI assist” because while the AI engine compiled the story draft or gathered some of the information in the story, every article on CNET – and we publish thousands of new and updated stories each month – is reviewed, fact-checked and edited by an editor with topical expertise before we hit publish. That will remain true as our policy no matter what tools or tech we use to create those stories. And per CNET policy, if we find any errors after we publish, we will publicly correct the story.
Our reputation as a fact-based, unbiased source of news and advice is based on being transparent about how we work and the sources we rely on. So in the past 24 hours, we’ve changed the byline to CNET Money and moved our disclosure so you won’t need to hover over the byline to see it: “This story was assisted by an AI engine and reviewed, fact-checked and edited by our editorial staff.” We always note who edited the story so our audience understands which expert influenced, shaped and fact-checked the article.”
“We asked three artificial intelligences (DALL·E, MIDJOURNEY & STABLE DIFFUSION) to generate images. Laurence DEVILLER, professor of computer science and AI at Sorbonne Université, and Albertine MEUNIER, digital artist and Chevalier de la Légion d’Honneur: will they be able to tell them apart from human works?”
“DALL·E’s Edit feature already enables changes within a generated or uploaded image — a capability known as Inpainting. Now, with Outpainting, users can extend the original image, creating large-scale images in any aspect ratio. Outpainting takes into account the image’s existing visual elements — including shadows, reflections, and textures — to maintain the context of the original image.”
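The core idea of outpainting can be sketched without any image library: the original image is placed on a larger canvas whose new area is left transparent, and only those transparent pixels are generated, so the original content is preserved. A minimal Python illustration (this is not OpenAI's API; `extend_canvas` and the `None`-as-transparent convention are assumptions made here for illustration):

```python
Transparent = None  # stand-in for an alpha-0 pixel the model is allowed to fill

def extend_canvas(image, pad_left, pad_right, pad_top, pad_bottom):
    """Return a larger canvas: original pixels placed per the pads,
    everything else transparent (to be filled by generation)."""
    w = len(image[0])
    new_w = w + pad_left + pad_right
    canvas = [[Transparent] * new_w for _ in range(pad_top)]
    for row in image:
        canvas.append([Transparent] * pad_left + list(row) + [Transparent] * pad_right)
    canvas += [[Transparent] * new_w for _ in range(pad_bottom)]
    return canvas

original = [[1, 2], [3, 4]]                   # a tiny 2x2 "image"
canvas = extend_canvas(original, 1, 1, 1, 0)  # widen to 4x3: any aspect ratio
assert canvas[1][1:3] == [1, 2]               # original pixels are untouched
assert canvas[0] == [None] * 4                # new region awaits generation
```

Repeating this step outward, each pass conditioned on the pixels already present, is how arbitrarily large images in any aspect ratio are built up while keeping shadows, reflections and textures consistent with the original.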
“As part of Intel’s Responsible AI work, the company has productized FakeCatcher, a technology that can detect fake videos with a 96% accuracy rate. Intel’s deepfake detection platform is the world’s first real-time deepfake detector that returns results in milliseconds.
Most deep learning-based detectors look at raw data to try to find signs of inauthenticity and identify what is wrong with a video. In contrast, FakeCatcher looks for authentic clues in real videos, by assessing what makes us human: subtle “blood flow” in the pixels of a video. When our hearts pump blood, our veins change color. These blood flow signals are collected from all over the face, and algorithms translate these signals into spatiotemporal maps. Then, using deep learning, we can instantly detect whether a video is real or fake.”
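The pipeline described above can be sketched in miniature: sample a color signal from each face region in every frame, then stack those per-region time series into a spatiotemporal map for a classifier to inspect. A toy Python illustration (not Intel's actual algorithm; `region_mean`, `spatiotemporal_map`, and the synthetic frames are assumptions made here to show the data flow):

```python
def region_mean(frame, top, left, size):
    """Mean pixel value in a size-by-size patch: a crude per-region color signal."""
    vals = [frame[r][c] for r in range(top, top + size)
                        for c in range(left, left + size)]
    return sum(vals) / len(vals)

def spatiotemporal_map(frames, regions, size=2):
    """One row per face region, one column per frame: the map a deep
    classifier would inspect for a plausible pulse pattern."""
    return [[region_mean(f, r, c, size) for f in frames] for (r, c) in regions]

# Two synthetic 4x4 green-channel frames: the top-left region brightens
# slightly between frames, the way skin does with each heartbeat.
f0 = [[100] * 4 for _ in range(4)]
f1 = [row[:] for row in f0]
for r in range(2):
    for c in range(2):
        f1[r][c] += 2   # faint brightening: the "blood flow" cue

m = spatiotemporal_map([f0, f1], regions=[(0, 0), (2, 2)])
assert m[0] == [100.0, 102.0]   # pulsing region shows a changing signal
assert m[1] == [100.0, 100.0]   # static region stays flat
```

A real detector works on many more regions and frames and feeds the resulting map to a trained network, but the distinguishing cue is the same: authentic faces carry a coherent pulse signal, synthetic ones generally do not.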