Tag: open ai

OpenAI’s CEO Once Bragged About His Hoard of Guns and Gas Masks

https://futurism.com/_next/image?url=https%3A%2F%2Fwp-assets.futurism.com%2F2023%2F02%2Fsam.jpg&w=2048&q=75

“The tech wunderkind explained to the assembled partygoers that he’s freaked by the concept of the world ending and wants to prepare to survive it. The two scenarios he gave as examples, and we promise we’re not making this up, were a “super contagious” lab-modified virus “being released” onto the world population and “AI that attacks us.” “I try not to think about it too much,” the OpenAI CEO told the reportedly uncomfortable startup founders surrounding him at that forgotten Silicon Valley gathering. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.” So yeah, that’s the guy who is in charge of the company that was initially founded with the philanthropic goal of promoting responsible AI, and which subsequently decided to go for-profit and is now making money hand over fist on its super-sophisticated neural networks that many fear will take their jobs. Do with that information what you will.”

Source : OpenAI’s CEO Once Bragged About His Hoard of Guns and Gas Masks

New AI classifier for indicating AI-written text is no longer available due to its low rate of accuracy

“As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.”

Source : New AI classifier for indicating AI-written text

Sam Altman, ChatGPT Creator and OpenAI CEO, Urges Senate for AI Regulation

“Some of the toughest questions and comments toward Mr. Altman came from Dr. Marcus, who noted OpenAI hasn’t been transparent about the data it uses to develop its systems. He expressed doubt in Mr. Altman’s prediction that new jobs will replace those killed off by A.I. “We have unprecedented opportunities here but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation and inherent unreliability,” Dr. Marcus said. Tech companies have argued that Congress should be careful with any broad rules that lump different kinds of A.I. together. In Tuesday’s hearing, Ms. Montgomery of IBM called for an A.I. law that is similar to Europe’s proposed regulations, which outlines various levels of risk. She called for rules that focus on specific uses, not regulating the technology itself.”

Source : Sam Altman, ChatGPT Creator and OpenAI CEO, Urges Senate for AI Regulation – The New York Times

Google “We Have No Moat, And Neither Does OpenAI”

https://i0.wp.com/www.beaude.net/no-flux/wp-content/uploads/2023/05/https3A2F2Fsubstack-post-media.s3.amazonaws.com2Fpublic2Fimages2F241fe3ef-3919-4a63-9c68-9e2e77cc2fc0_1366x588.webp?w=676&ssl=1

“At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given. A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc. etc. many of which build on each other. Most importantly, they have solved the scaling problem to the extent that anyone can tinker. Many of the new ideas are from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”

Source : Google “We Have No Moat, And Neither Does OpenAI”
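
The memo’s claim about vanishing barriers is concrete: with a quantized base model and a parameter-efficient method such as LoRA, fine-tuning a LLaMA-class model fits on a single consumer GPU. Below is a minimal sketch assuming the Hugging Face transformers, peft and bitsandbytes libraries; the checkpoint name and hyperparameters are illustrative assumptions, not something the memo specifies.

```python
# Hedged sketch: 4-bit quantized base model + LoRA adapters, the kind of
# "one person, an evening, and a beefy laptop" workflow the memo describes.
# The checkpoint name and settings below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "huggyllama/llama-7b"  # assumption: any LLaMA-style checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # quantization: 7B weights in a few GB
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_config)

lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # adapt only the attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # typically well under 1% of the full model
```

Instruction tuning would then proceed with an ordinary training loop over a small instruction dataset, which is exactly the evening-scale experiment the memo credits for the pace of open-source variants.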

OpenAI’s CEO Says the Age of Giant AI Models Is Already Over

https://i0.wp.com/www.beaude.net/no-flux/wp-content/uploads/2023/04/Sam-Altman-OpenAI-MIT-Business-1246870629.jpg?resize=676%2C451&ssl=1

“Altman’s statement suggests that GPT-4 could be the last major advance to emerge from OpenAI’s strategy of making the models bigger and feeding them more data. He did not say what kind of research strategies or techniques might take its place. In the paper describing GPT-4, OpenAI says its estimates suggest diminishing returns on scaling up model size. Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them.
Nick Frosst, a cofounder at Cohere who previously worked on AI at Google, says Altman’s feeling that going bigger will not work indefinitely rings true. He, too, believes that progress on transformers, the type of machine learning model at the heart of GPT-4 and its rivals, lies beyond scaling. “There are lots of ways of making transformers way, way better and more useful, and lots of them don’t involve adding parameters to the model,” he says. Frosst says that new AI model designs, or architectures, and further tuning based on human feedback are promising directions that many researchers are already exploring.”

Source : OpenAI’s CEO Says the Age of Giant AI Models Is Already Over | WIRED
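
To make “adding parameters to the model” concrete, here is a rough back-of-envelope estimate of how a decoder-only transformer’s size grows with width and depth. The 12·d² per-layer approximation ignores biases, layer norms and positional embeddings, and the example configurations are illustrative assumptions rather than official figures for any particular model.

```python
# Rough parameter count for a decoder-only transformer.
# Per layer: ~4*d^2 for attention (Q, K, V, output projections)
# plus ~8*d^2 for a 4*d-wide MLP, i.e. ~12*d^2; embeddings add vocab*d.
def approx_params(n_layers: int, d_model: int, vocab_size: int = 50_000) -> int:
    per_layer = 12 * d_model ** 2
    embeddings = vocab_size * d_model
    return n_layers * per_layer + embeddings

# Illustrative configurations (assumptions, not official specs):
for n_layers, d_model in [(12, 768), (48, 1600), (96, 12288)]:
    total = approx_params(n_layers, d_model)
    print(f"{n_layers:3d} layers, d_model={d_model:6d}: ~{total / 1e9:.1f}B parameters")
```

Widening from 1,600 to 12,288 multiplies the per-layer cost by roughly 60x, which is the kind of quadratic growth that eventually collides with the data-center limits Altman mentions.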

GPT-4

“We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.”

Source : GPT-4
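
The “image and text inputs, text outputs” description maps directly onto the chat API’s message format, where a single user message can mix text parts and image parts. A minimal sketch with the OpenAI Python SDK follows; the model name and image URL are assumptions for illustration.

```python
# Hedged sketch of a multimodal (image + text in, text out) chat request.
# Requires OPENAI_API_KEY in the environment; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any vision-capable GPT-4-class model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)  # text output only
```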

DALL·E: Introducing Outpainting (reminder)

https://i0.wp.com/www.beaude.net/no-flux/wp-content/uploads/2022/12/girl-with-a-pearl-earring.jpeg?w=676&ssl=1

“DALL·E’s Edit feature already enables changes within a generated or uploaded image — a capability known as Inpainting. Now, with Outpainting, users can extend the original image, creating large-scale images in any aspect ratio. Outpainting takes into account the image’s existing visual elements — including shadows, reflections, and textures — to maintain the context of the original image.”

Source : DALL·E: Introducing Outpainting
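
Both features are reachable through the images edit endpoint: transparent pixels mark the region to be generated, so inpainting supplies a mask inside the image, while outpainting can be approximated by pasting the original onto a larger transparent canvas. A sketch assuming the OpenAI Python SDK and Pillow follows; file names, canvas size and prompt are illustrative.

```python
# Hedged sketch: outpainting-style edit by padding the original image onto a
# larger transparent canvas, then letting the edit endpoint fill the border.
# File names, sizes and prompt are assumptions.
from PIL import Image
from openai import OpenAI

client = OpenAI()

original = Image.open("girl_with_a_pearl_earring.png").convert("RGBA")

# Larger transparent canvas; the transparent area is what gets generated.
canvas = Image.new("RGBA", (1024, 1024), (0, 0, 0, 0))
offset = ((1024 - original.width) // 2, (1024 - original.height) // 2)
canvas.paste(original, offset)
canvas.save("padded.png")

result = client.images.edit(
    image=open("padded.png", "rb"),
    prompt="Extend the painting's background, matching its lighting and texture",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)
```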

OpenAI - Jukebox

“We’re introducing Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles. We’re releasing the model weights and code, along with a tool to explore the generated samples.”

Source : Jukebox

Emergent Tool Use from Multi-Agent Interaction

“We’ve observed agents discovering progressively more complex tool use while playing a simple game of hide-and-seek. Through training in our new simulated hide-and-seek environment, agents build a series of six distinct strategies and counterstrategies, some of which we did not know our environment supported. The self-supervised emergent complexity in this simple environment further suggests that multi-agent co-adaptation may one day produce extremely complex and intelligent behavior.”

Source : Emergent Tool Use from Multi-Agent Interaction
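
The mechanism behind “strategies and counterstrategies” is co-adaptation: each agent’s learning target keeps shifting because the other agent is learning too. The toy sketch below is not OpenAI’s environment; it is a minimal one-shot hide-and-seek game with two independent Q-learners whose policies keep chasing each other, only to illustrate that dynamic.

```python
# Toy co-adaptation sketch (not the environment from the paper): a hider picks
# one of N spots, a seeker picks one, and the seeker is rewarded for a match.
# Two independent epsilon-greedy Q-learners keep adapting to each other.
import numpy as np

rng = np.random.default_rng(0)
N_SPOTS, EPISODES, ALPHA, EPS = 4, 20_000, 0.1, 0.1

q_hider = np.zeros(N_SPOTS)   # hider's value estimate for each hiding spot
q_seeker = np.zeros(N_SPOTS)  # seeker's value estimate for each spot to search

def act(q):
    """Epsilon-greedy choice over a one-state Q-table."""
    if rng.random() < EPS:
        return int(rng.integers(N_SPOTS))
    return int(np.argmax(q))

for _ in range(EPISODES):
    hide, seek = act(q_hider), act(q_seeker)
    seeker_reward = 1.0 if hide == seek else 0.0
    hider_reward = 1.0 - seeker_reward
    # Independent updates: each agent treats the other as part of the
    # environment, so its own "optimal" strategy keeps moving.
    q_hider[hide] += ALPHA * (hider_reward - q_hider[hide])
    q_seeker[seek] += ALPHA * (seeker_reward - q_seeker[seek])

print("hider preferences :", np.round(q_hider, 2))
print("seeker preferences:", np.round(q_seeker, 2))
```

Even in this tiny game the two policies never settle, each countering the other’s current favourite spot; the OpenAI result shows the same pressure producing qualitatively new behaviours once the environment is rich enough to support tools.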

Really?

“We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training.”

Source : Better Language Models and Their Implications
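
The “without task-specific training” claim is easy to try today, since the released GPT-2 weights are available through Hugging Face; the sketch below simply samples continuations from the smallest checkpoint. The prompt and decoding settings are illustrative assumptions.

```python
# Hedged sketch: sampling from the released GPT-2 weights via Hugging Face
# transformers. Prompt and decoding parameters are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In a shocking finding, scientists discovered a herd of unicorns"
outputs = generator(prompt, max_new_tokens=60, do_sample=True, top_k=50,
                    num_return_sequences=2)

for i, out in enumerate(outputs):
    print(f"--- sample {i} ---")
    print(out["generated_text"])
```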

