“Today, we shared dozens of new additions and improvements, and reduced pricing across many parts of our platform. These include: New GPT-4 Turbo model that is more capable, cheaper and supports a 128K context window New Assistants API that makes it easier for developers to build their own assistive AI apps that have goals and can call models and tools New multimodal capabilities in the platform, including vision, image creation (DALL·E 3), and text-to-speech (TTS)”
“Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the 𝕏 platform. It will also answer spicy questions that are rejected by most other AI systems.Grok is still a very early beta product – the best we could do with 2 months of training – so expect it to improve rapidly with each passing week with your help.”
“Modern text-to-image systems have a tendency to ignore words or descriptions, forcing users to learn prompt engineering. DALL·E 3 represents a leap forward in our ability to generate images that exactly adhere to the text you provide.”
“Les modèles de langage «mettent en danger la capacité des auteurs de fiction à gagner leur vie, dans la mesure où ils permettent à n’importe qui de générer automatiquement et gratuitement (ou à très bas prix) des textes pour lesquels ils devraient autrement payer des auteurs», argumentent les avocats dans la plainte de mardi. Ils font aussi valoir que les outils d’IA générative peuvent servir à produire des contenus dérivés, qui imitent le style des écrivains. «De manière injuste et perverse, (…) la copie délibérée (du travail) des plaignants transforme donc leurs œuvres en moteurs de leur propre destruction», assène la plainte.”
“Google will have to postpone starting its artificial intelligence chatbot Bard in the European Union after its main data regulator in the bloc raised privacy concerns. The Irish Data Protection Commission said Tuesday that the tech giant had so far provided insufficient information about how its generative AI tool protects Europeans’ privacy to justify an EU launch. The Dublin-based authority is Google’s main European data supervisor under the bloc’s General Data Protection Regulation (GDPR). « Google recently informed the Data Protection Commission of its intention to launch Bard in the EU this week, » said Deputy Commissioner Graham Doyle. The watchdog « had not had any detailed briefing nor sight of a data protection impact assessment or any supporting documentation at this point. »”
“La grande faiblesse du système d’OpenAI est qu’il est situé hors du monde, sans lien avec le contexte social et physique. Les conversations avec ChatGPT sont purement linguistiques, sans aucun ancrage dans une réalité partagée.
Le modèle de langage que développera Apple va au contraire pouvoir bénéficier de données d’entrainement liées à des flux visuels et aux capteurs 3D d’Apple Vision Pro qui donnent une représentation très détaillée du contexte dans laquelle a lieu l’interaction, et du suivi du regard de l’utilisateur. Ce sont des informations d’une pertinence incroyable pour comprendre le sens des interactions conversationnelles.
Si le produit est un succès, Apple sera sans doute la seule entreprise au monde capable de lier les modèles de langue avec l’attention, l’intentionnalité et les compétences d’un locuteur situé dans un contexte physique et social. La fusion de ces informations pourrait donner lieu à un système d’intelligence artificielle encore plus puissant que ceux développés par toutes les autres entreprises de la Silicon Valley.”
“It took Alex Polyakov just a couple of hours to break GPT-4. When OpenAI released the latest version of its text-generating chatbot in March, Polyakov sat down in front of his keyboard and started entering prompts designed to bypass OpenAI’s safety systems. Soon, the CEO of security firm Adversa AI had GPT-4 spouting homophobic statements, creating phishing emails, and supporting violence. Polyakov is one of a small number of security researchers, technologists, and computer scientists developing jailbreaks and prompt injection attacks against ChatGPT and other generative AI systems.
The process of jailbreaking aims to design prompts that make the chatbots bypass rules around producing hateful content or writing about illegal acts, while closely-related prompt injection attacks can quietly insert malicious data or instructions into AI models. Both approaches try to get a system to do something it isn’t designed to do.
The attacks are essentially a form of hacking—albeit unconventionally—using carefully crafted and refined sentences, rather than code, to exploit system weaknesses. While the attack types are largely being used to get around content filters, security researchers warn that the rush to roll out generative AI systems opens up the possibility of data being stolen and cybercriminals causing havoc across the web.”
“At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given. A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc. etc. many of which build on each other. Most importantly, they have solved the scaling problem to the extent that anyone can tinker. Many of the new ideas are from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”
“Altman’s statement suggests that GPT-4 could be the last major advance to emerge from OpenAI’s strategy of making the models bigger and feeding them more data. He did not say what kind of research strategies or techniques might take its place. In the paper describing GPT-4, OpenAI says its estimates suggest diminishing returns on scaling up model size. Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them.
Nick Frosst, a cofounder at Cohere who previously worked on AI at Google, says Altman’s feeling that going bigger will not work indefinitely rings true. He, too, believes that progress on transformers, the type of machine learning model at the heart of GPT-4 and its rivals, lies beyond scaling. “There are lots of ways of making transformers way, way better and more useful, and lots of them don’t involve adding parameters to the model,” he says. Frosst says that new AI model designs, or architectures, and further tuning based on human feedback are promising directions that many researchers are already exploring.”
“Reddit has long had a symbiotic relationship with the search engines of companies like Google and Microsoft. The search engines “crawl” Reddit’s web pages in order to index information and make it available for search results. That crawling, or “scraping,” isn’t always welcome by every site on the internet. But Reddit has benefited by appearing higher in search results. The dynamic is different with L.L.M.s — they gobble as much data as they can to create new A.I. systems like the chatbots. Reddit believes its data is particularly valuable because it is continuously updated. That newness and relevance, Mr. Huffman said, is what large language modeling algorithms need to produce the best results. “More than any other place on the internet, Reddit is a home for authentic conversation,” Mr. Huffman said. “There’s a lot of stuff on the site that you’d only ever say in therapy, or A.A., or never at all.””