Tag: future (Page 1 of 6)

Millions of new materials discovered with deep learning – Google DeepMind

“To build a more sustainable future, we need new materials. GNoME has discovered 380,000 stable crystals that hold the potential to develop greener technologies – from better batteries for electric cars, to superconductors for more efficient computing.

Our research – and that of collaborators at the Berkeley Lab, Google Research, and teams around the world – shows the potential to use AI to guide materials discovery, experimentation, and synthesis. We hope that GNoME together with other AI tools can help revolutionize materials discovery today and shape the future of the field.”

Source: Millions of new materials discovered with deep learning – Google DeepMind

Frontier risk and preparedness

“To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge.
The team will help track, evaluate, forecast and protect against catastrophic risks spanning multiple categories including:

  • Individualized persuasion
  • Cybersecurity
  • Chemical, biological, radiological, and nuclear (CBRN) threats
  • Autonomous replication and adaptation (ARA)”

Source: Frontier risk and preparedness

OpenAI’s CEO Once Bragged About His Hoard of Guns and Gas Masks

“The tech wunderkind explained to the assembled partygoers that he’s freaked by the concept of the world ending and wants to prepare to survive it. The two scenarios he gave as examples, and we promise we’re not making this up, were a “super contagious” lab-modified virus “being released” onto the world population and “AI that attacks us.”

“I try not to think about it too much,” the OpenAI CEO told the reportedly uncomfortable startup founders surrounding him at that forgotten Silicon Valley gathering. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

So yeah, that’s the guy who is in charge of the company that was initially founded with the philanthropic goal of promoting responsible AI, and which subsequently decided to go for-profit and is now making money hand over fist on its super-sophisticated neural networks that many fear will take their jobs. Do with that information what you will.”

Source: OpenAI’s CEO Once Bragged About His Hoard of Guns and Gas Masks

How do tech bros plan to ride out Armageddon? Living it up on their private islands

Not for sale … part of the coast of the sovereign state of Nauru.

“I want to stress again that EA is a very serious and intelligent movement promoted by very serious and intelligent people because, to the untrained eye, it can sometimes look like a cult of unhinged narcissists. That Nauru project, for example? That wasn’t the only weird idea the folk at FTX had dreamed up in the name of effective altruism. According to the court filings, the FTX Foundation, the non-profit arm of FTX, had authorised a $300,000 (£230,000) grant to an individual to “write a book about how to figure out what humans’ utility function is (are)”. The foundation also made a $400,000 grant “to an entity that posted animated videos on YouTube related to ‘rationalist and [effective altruism] material’, including videos on ‘grabby aliens’”.
So there you go. Some of the best minds of our generation (or so they’d have you believe) are busying themselves with strategies on grabby aliens and Pacific island bunkers. Is this effective? Is this altruism? I can’t tell you for sure what the future of effective altruism is, but the road to hell is paved with good intentions.”

Source: How do tech bros plan to ride out Armageddon? Living it up on their private islands | Arwa Mahdawi | The Guardian

AI Is Doing a Terrible Job Trading Stocks in the Real World: “AI is limited to plagiarizing history”

“Eric Ghysels, an economics professor at the University of North Carolina at Chapel Hill, noted that while an AI can be speedier than human investors moment-to-moment, it’s sluggish to adapt to “paradigm-shifting events” like the war in Ukraine — or maybe even the rise of AI. Meaning, in his opinion, an AI can’t beat human investors over time. “Maybe one day it will, but for now AI is limited to plagiarizing history,” Ghysels told the WSJ.”

Source: AI Is Doing a Terrible Job Trading Stocks in the Real World

Google “We Have No Moat, And Neither Does OpenAI”

“At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given. A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc. etc. many of which build on each other. Most importantly, they have solved the scaling problem to the extent that anyone can tinker. Many of the new ideas are from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”

Source: Google “We Have No Moat, And Neither Does OpenAI”

‘The Godfather of AI’ Quits Google and Warns of Danger Ahead

Geoffrey Hinton, wearing a dark sweater.

“Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop.”

Source: ‘The Godfather of AI’ Quits Google and Warns of Danger Ahead – The New York Times
