Tag: governance

The French proposal to block websites through the browser will seriously harm the global open internet

“In a laudable but perilous attempt to fight online fraud, France is preparing to force browser makers to implement a technical capability straight out of a dystopia. Article 6 of the SREN bill would require browser developers to build the means to mandatorily block websites appearing on a government-supplied list embedded directly in the browser. Such a measure would overturn decades of established content-moderation norms. It would also give authoritarian governments a way to undermine the effectiveness of tools that can be used to circumvent censorship.”

Source : La proposition française de bloquer les sites web via le navigateur nuira gravement à l’internet ouvert mondial – Open Policy & Advocacy
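
To make the mechanism being criticised more concrete, here is a minimal sketch of what navigation-time blocking against a centrally supplied list could look like, written as a Manifest V2 WebExtension-style background script. The list URL, its JSON format and the refresh interval are illustrative assumptions, and a built-in browser feature would of course not literally be an extension; the SREN text does not prescribe any particular implementation.

```typescript
// Hypothetical illustration only: checking navigations against a centrally
// supplied blocklist. The list URL and JSON format are invented for the example.

declare const browser: any; // WebExtension runtime namespace (Firefox-style), assumed available

const BLOCKLIST_URL = "https://blocklist.example.gouv.fr/list.json"; // assumed endpoint

let blockedHosts: Set<string> = new Set();

// Fetch the current list and replace the in-memory set.
async function refreshBlocklist(): Promise<void> {
  const response = await fetch(BLOCKLIST_URL);
  const hosts: string[] = await response.json(); // assumed format: ["a.example", "b.example", ...]
  blockedHosts = new Set(hosts);
}

// Refresh hourly so list changes take effect without shipping a browser update.
refreshBlocklist().catch(console.error);
setInterval(() => refreshBlocklist().catch(console.error), 60 * 60 * 1000);

// Cancel any request whose hostname is on the list (Manifest V2 blocking webRequest).
browser.webRequest.onBeforeRequest.addListener(
  (details: { url: string }) => {
    const host = new URL(details.url).hostname;
    return blockedHosts.has(host) ? { cancel: true } : {};
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);
```

The lookup itself is unremarkable; what the post objects to is who controls the list and the fact that enforcement would be mandatory, with neither vendors nor users able to decline it.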

China Proposes ‘Minor Mode’ to Limit Kids’ Smartphone Use


“The proposal says the minor mode feature would try to prevent “internet addiction” by limiting children younger than 8 to 40 minutes of smartphone time a day. The time limit would increase with age, reaching two hours daily for those ages 16 to 18. Apps would also have to tailor their content for different age groups. Children younger than 3, for example, should be shown nursery rhymes and programs that can be watched with parents, according to documents from the Cyberspace Administration of China. Those between 8 and 12 could be offered videos about life skills, general knowledge, age-appropriate news and “entertainment content for positive guidance.””

Source : China Proposes ‘Minor Mode’ to Limit Kids’ Smartphone Use – The New York Times
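
As a rough illustration of how such an age-tiered limit might be represented, here is a small sketch encoding only the two figures quoted above; the tier boundaries, the data structure and the lookup function are assumptions made for the example, and the intermediate tiers the proposal mentions are deliberately left out because the excerpt does not give their values.

```typescript
// Sketch of an age-tiered daily screen-time limit, encoding only the two
// figures quoted in the excerpt. Intermediate tiers exist in the proposal
// ("the time limit would increase with age") but are not specified here,
// so the lookup returns undefined for those ages.

interface TimeTier {
  minAge: number;       // inclusive
  maxAge: number;       // inclusive
  dailyMinutes: number; // allowed smartphone time per day
}

const quotedTiers: TimeTier[] = [
  { minAge: 0, maxAge: 7, dailyMinutes: 40 },    // "children younger than 8"
  { minAge: 16, maxAge: 18, dailyMinutes: 120 }, // "two hours daily for those ages 16 to 18"
];

function dailyLimitMinutes(age: number): number | undefined {
  return quotedTiers.find((t) => age >= t.minAge && age <= t.maxAge)?.dailyMinutes;
}

console.log(dailyLimitMinutes(6));  // 40
console.log(dailyLimitMinutes(17)); // 120
```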

Concerns over DNS Blocking. June 23, 2023 | by Vinton Cerf

“The proposed Article 35 of the Military Planning Law gives ANSSI the authority to install “technical markers” — hardware and software enabling the collection of user data on the networks of electronic communications operators and data center operators. This provision would grant ANSSI the authority to install surveillance capabilities in private data centers without due process, posing a grave risk to the civil liberties of both French and global Internet users. This appears to be in conflict not only with EU law but also with the OECD Declaration on Government Access to Personal Data Held by Private Sector Entities, which seeks to ensure that “government access should be carried out in a manner that is not excessive in relation to the legitimate aims and in accordance with legal standards of necessity, proportionality, reasonableness and other standards that protect against the risk of misuse and abuse, as set out in and interpreted within the country’s legal framework.””

Source : Concerns over DNS Blocking | by Vinton Cerf | Medium

Do Foundation Model Providers Comply with the EU AI Act? – Stanford CRFM


“We find that foundation model providers unevenly comply with the stated requirements of the draft EU AI Act. Enacting and enforcing the EU AI Act will bring about significant positive change in the foundation model ecosystem. Foundation model providers’ compliance with requirements regarding copyright, energy, risk, and evaluation is especially poor, indicating areas where model providers can improve. Our assessment shows sharp divides along the boundary of open vs. closed releases: we believe that all providers can feasibly improve their conduct, independent of where they fall along this spectrum. Overall, our analysis speaks to a broader trend of waning transparency: providers should take action to collectively set industry standards that improve transparency, and policymakers should take action to ensure adequate transparency underlies this general-purpose technology.”

Source : Stanford CRFM

Are universities too slow to cope with Generative AI?


“Part of the problem is that even a singular system like ChatGPT encompasses a dizzying array of use cases for academics, students and administrators that are still in the process of being discovered. Its underlying capacities are expanding at a seemingly faster rate than universities are able to cope with, evidenced in the launch of GPT-4 (and the hugely significant ChatGPT plug-in architecture), all while universities are still grappling with GPT-3.5. Furthermore, generative AI is a broader category than ChatGPT, with images, videos, code, music and voice likely to hit mainstream awareness with the same force over the coming months and years.
In what Filip Vostal and I have described as the Accelerated Academy, the pace of working life increases (albeit unevenly), but policymaking still moves too slowly to cope. In siloed and centralised universities there is a recurrent problem of distance from practice, where policies are formulated and procedures developed with too little awareness of on-the-ground realities. When the use cases of generative AI and the problems it generates are being discovered on a daily basis, we urgently need mechanisms to identify and filter these issues from across the university in order to respond in a way which escapes the established time horizons of the teaching and learning bureaucracy.”

Source : Are universities too slow to cope with Generative AI? | Impact of Social Sciences

Pause Giant AI Experiments: An Open Letter

“Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.”

Source : Pause Giant AI Experiments: An Open Letter – Future of Life Institute

Allocation adultes handicapés (AAH): no, our “computer system” is not the problem

“‘We are the government of real rights, not of incantatory rights,’ Sophie Cluzel proclaimed, to justify the government’s timidity on the subject. But saying no to an idea at a given moment on the pretext that, at present, ‘no computer system will be able to implement it’ is getting the problem backwards. It means hiding behind a false idea of computing to justify a political decision. Indexing a right to the capabilities of our computer systems is putting code above the law.”

Source : Allocation adultes handicapés (AAH) : non, notre « système informatique » n’est pas le problème

“It can go wrong”: how Wikipedia protects itself against those who try to manipulate it

“These many strengths have their limits. ‘The encyclopedia works because there are more well-intentioned people than dishonest ones. But if that ratio flips, the volunteers who act in their free time will struggle to keep Wikipedia in good shape,’ warns Pierre-Yves Beaudouin. While some pages lend themselves poorly to tampering, blind spots remain legion.”

Source : « Ça peut mal tourner  » : comment Wikipédia se protège contre ceux qui tentent de le manipuler
