“In a laudable but perilous attempt to combat online fraud, France is about to require browser makers to implement a technical capability straight out of dystopia. Article 6 of the SREN bill would force browser developers to build the means to mandatorily block websites appearing on a government-supplied list embedded directly in the browser. Such a measure would overturn decades of established content-moderation norms. It would also hand authoritarian governments a way to undermine the effectiveness of tools that can be used to circumvent censorship.”
“The proposal says the minor mode feature would try to prevent “internet addiction” by limiting children younger than 8 to 40 minutes of smartphone time a day. The time limit would increase with age, reaching two hours daily for those ages 16 to 18. Apps would also have to tailor their content for different age groups. Children younger than 3, for example, should be shown nursery rhymes and programs that can be watched with parents, according to documents from the Cyberspace Administration of China. Those between 8 and 12 could be offered videos about life skills, general knowledge, age-appropriate news and “entertainment content for positive guidance.””
“The proposed Article 35 of the Military Planning Law gives ANSSI the authority to install “technical markers” — hardware and software enabling the collection of user data on the networks of electronic communications operators and data center operators. This provision would grant ANSSI the authority to install surveillance capabilities in private data centers without due process, posing a grave risk to the civil liberties of both French and global Internet users. This appears to be in conflict not only with EU law but also with the OECD Declaration on Government Access to Personal Data Held by Private Sector Entities, which seeks to ensure that “government access should be carried out in a manner that is not excessive in relation to the legitimate aims and in accordance with legal standards of necessity, proportionality, reasonableness and other standards that protect against the risk of misuse and abuse, as set out in and interpreted within the country’s legal framework.””
“We find that foundation model providers unevenly comply with the stated requirements of the draft EU AI Act. Enacting and enforcing the EU AI Act will bring about significant positive change in the foundation model ecosystem. Foundation model providers’ compliance with requirements regarding copyright, energy, risk, and evaluation is especially poor, indicating areas where model providers can improve. Our assessment shows sharp divides along the boundary of open vs. closed releases: we believe that all providers can feasibly improve their conduct, independent of where they fall along this spectrum. Overall, our analysis speaks to a broader trend of waning transparency: providers should take action to collectively set industry standards that improve transparency, and policymakers should take action to ensure adequate transparency underlies this general-purpose technology.”
Source: Stanford CRFM
“Part of the problem is that even a singular system like ChatGPT encompasses a dizzying array of use cases for academics, students and administrators that are still in the process of being discovered. Its underlying capacities are expanding at a seemingly faster rate than universities are able to cope with, evidenced in the launch of GPT-4 (and the hugely significant ChatGPT plug-in architecture), all while universities are still grappling with GPT-3.5. Furthermore, generative AI is a broader category than ChatGPT, with images, videos, code, music and voice likely to hit mainstream awareness with the same force over the coming months and years.
In what Filip Vostal and I have described as the Accelerated Academy, the pace of working life increases (albeit unevenly), but policymaking still moves too slowly to cope. In siloed and centralised universities there is a recurrent problem of distance from practice, where policies are formulated and procedures developed with too little awareness of on-the-ground realities. When the use cases of generative AI and the problems it generates are being discovered on a daily basis, we urgently need mechanisms to identify and filter these issues from across the university in order to respond in a way which escapes the established time horizons of the teaching and learning bureaucracy.”
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.”
““We are the government of real rights, not of incantatory rights,” Sophie Cluzel proclaimed to justify the government’s timidity on the subject. But saying no to an idea at a given moment on the pretext that, at present, “no computer system will be able to implement it” is approaching the problem backwards. It is hiding behind a false idea of computing to justify a political decision. To index a right to the capabilities of our computer systems is to place code above the law.”
These many strengths have their limits. “The encyclopedia works because there are more well-intentioned people than dishonest ones. But if that ratio flips, the volunteers acting in their free time will struggle to keep Wikipedia in working order,” warns Pierre-Yves Beaudouin. While some pages lend themselves poorly to tampering, blind spots remain legion.
“We should avoid burdening our fellow citizens with a moral dilemma: would we be at fault if we did not download this application? Social pressure or a feeling of guilt could give rise to induced consent, indirectly coerced.”
“On a cool day late last September, half a dozen Chinese engineers walked into a conference room in the heart of Geneva’s UN district with a radical idea. They had one hour to persuade delegates from more than 40 countries of their vision: an alternative form of the internet, to replace the technological architecture that has underpinned the web for half a century. Whereas today’s internet is owned by everyone and no one, they were in the process of building something very different — a new infrastructure that could put power back in the hands of nation states, instead of individuals.”