“Elon Musk has approached artificial intelligence researchers in recent weeks about forming a new research lab to develop an alternative to ChatGPT, the high-profile chatbot made by the startup OpenAI, according to two people with direct knowledge of the effort and a third person briefed on the conversations. In recent months Musk has repeatedly criticized OpenAI for installing safeguards that prevent ChatGPT from producing text that might offend users. Musk, who co-founded OpenAI in 2015 but has since cut ties with the startup, suggested last year that OpenAI’s technology was an example of “training AI to be woke.” His comments imply that a rival chatbot would have fewer restrictions on divisive subjects compared to ChatGPT and a related chatbot Microsoft recently launched. To spearhead the effort, Musk has been recruiting Igor Babuschkin, a researcher who recently left Alphabet’s DeepMind AI unit and specializes in the kind of machine-learning models that power chatbots like ChatGPT. ”
“It is a phenomenon reported mostly by people over 35. Hairdressers have found themselves styling clients who no longer talk to them. Train conductors walk through carriages in which every passenger’s eyes are glued to a screen. Cashiers watch customers go by, phone wedged against their neck, talking with invisible interlocutors. Doctors observe waiting rooms where people still automatically sit at opposite ends, but no one breaks the ice anymore. It is the end of chitchat. Not of the big debates, but of small talk, as it is called in English, « de la pluie et du beau temps » in its French version, those little exchanges that in fact usually have little to do with the weather. “In the past, people would sometimes apologize to their train-seat neighbor when, after chatting, they took out a book. As if the default mode was to converse. Now the default mode is to be absorbed in your phone, and to apologize if you have to speak to someone,” explains Diouldé Chartier, whose agency D’Cap Research has conducted several observational studies of the behavior of SNCF passengers.”
“The goal: to see if the tech can help our busy staff of reporters and editors in their job of covering topics from a 360-degree perspective. Will this AI engine efficiently assist them in using publicly available facts to create the most helpful content, so our audience can make better decisions? Will it enable them to create even more of the deeply researched stories, analyses, features, testing and advice work we’re known for?
I use the term “AI assist” because while the AI engine compiled the story draft or gathered some of the information in the story, every article on CNET – and we publish thousands of new and updated stories each month – is reviewed, fact-checked and edited by an editor with topical expertise before we hit publish. That will remain true as our policy no matter what tools or tech we use to create those stories. And per CNET policy, if we find any errors after we publish, we will publicly correct the story.
Our reputation as a fact-based, unbiased source of news and advice is based on being transparent about how we work and the sources we rely on. So in the past 24 hours, we’ve changed the byline to CNET Money and moved our disclosure so you won’t need to hover over the byline to see it: “This story was assisted by an AI engine and reviewed, fact-checked and edited by our editorial staff.” We always note who edited the story so our audience understands which expert influenced, shaped and fact-checked the article.”
“A $450 million investment from Apple’s Advanced Manufacturing Fund provides the critical infrastructure that supports Emergency SOS via satellite for iPhone 14 models. Available to customers in the US and Canada beginning later this month, the new service will allow iPhone 14 and iPhone 14 Pro models to connect directly to a satellite, enabling messaging with emergency services when outside of cellular and Wi-Fi coverage. A majority of the funding goes to Globalstar, a global satellite service headquartered in Covington, Louisiana, with facilities across the US. Apple’s investment provides critical enhancements to Globalstar’s satellite network and ground stations, ensuring iPhone 14 users are able to connect to emergency services when off the grid. At Globalstar, more than 300 employees support the new service.”
“Augmented reality allows us to spend more time focusing on what matters in the real world, in our real lives. It can break down communication barriers — and help us better understand each other by making language visible. Watch what happens when we bring technologies like transcription and translation to your line of sight.”
“Emoji reactions have been available for years on other services like Facebook, as well as on Twitter itself within direct messages. But what’s interesting about Twitter’s emoji choices for its latest test is that none of them are especially negative. There’s no “Angry face” like you’ll find on Facebook, or “Thumbs down” like in Twitter’s direct message emoji reactions. Twitter explains that it decided against these negative emoji because people it surveyed said “they were concerned about receiving negative reactions to some of their thoughts.” A valid concern, given how toxic many conversations on Twitter can be.”
“Deepfakes have become more believable in recent years. In some cases, humans can no longer easily tell some of them apart from genuine images. Although detecting deepfakes remains a compelling challenge, their increasing sophistication opens up more potential lines of inquiry, such as: What happens when deepfakes are produced not just for amusement and awe, but for malicious intent on a grand scale? Today, we — in partnership with Michigan State University (MSU) — are presenting a research method of detecting and attributing deepfakes that relies on reverse engineering from a single AI-generated image to the generative model used to produce it. Our method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with.”
“These early results are encouraging, and we look forward to sharing more soon, but sensibleness and specificity aren’t the only qualities we’re looking for in models like LaMDA. We’re also exploring dimensions like “interestingness,” by assessing whether responses are insightful, unexpected or witty. Being Google, we also care a lot about factuality (that is, whether LaMDA sticks to facts, something language models often struggle with), and are investigating ways to ensure LaMDA’s responses aren’t just compelling but correct. But the most important question we ask ourselves when it comes to our technologies is whether they adhere to our AI Principles. Language might be one of humanity’s greatest tools, but like all tools it can be misused. Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information. And even when the language a model is trained on is carefully vetted, the model itself can still be put to ill use.”