{"id":20562,"date":"2025-05-06T14:00:50","date_gmt":"2025-05-06T17:00:50","guid":{"rendered":"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/?p=20562"},"modified":"2025-05-06T12:51:04","modified_gmt":"2025-05-06T15:51:04","slug":"artificial-intelligence-accountability-policy-report","status":"publish","type":"post","link":"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/livros\/artificial-intelligence-accountability-policy-report\/","title":{"rendered":"Artificial Intelligence Accountability Policy Report"},"content":{"rendered":"<p><span style=\"color: #003366;\"><strong><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-20563\" src=\"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/wp-content\/uploads\/2025\/05\/artificial-intelligence-accountability-policy-report.jpg\" alt=\"Artificial Intelligence Accountability Policy Report\" width=\"145\" height=\"188\" srcset=\"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/wp-content\/uploads\/2025\/05\/artificial-intelligence-accountability-policy-report.jpg 145w, https:\/\/www.bibliotecadeseguranca.com.br\/fr\/wp-content\/uploads\/2025\/05\/artificial-intelligence-accountability-policy-report-116x150.jpg 116w\" sizes=\"auto, (max-width: 145px) 100vw, 145px\" \/>NTIA &#8211; National Telecommunications and Information Administration<\/strong><\/span><\/p>\n<p><span style=\"color: #003366;\"><strong>R\u00e9sum\u00e9:<\/strong><\/span> Artificial intelligence (AI) systems are rapidly becoming part of the fabric of everyday American life. From customer service to image generation to manufacturing, AI systems are everywhere.<br \/>\nAlongside their transformative potential for good, AI systems also pose risks of harm. These risks include inaccurate or false outputs; unlawful discriminatory algorithmic decision making; destruction of jobs and the dignity of work; and compromised privacy, safety, and security. 
Given their influence and ubiquity, these systems must be subject to security and operational mechanisms that mitigate risk and warrant stakeholder trust that they will not cause harm.<br \/>\nCommenters emphasized how AI accountability policies and mechanisms can play a key part in getting the best out of this technology. Participants in the AI ecosystem \u2013 including policymakers, industry, civil society, workers, researchers, and impacted community members \u2013 should be empowered to expose problems and potential risks, and to hold responsible entities to account.<\/p>\n<p><a href=\"https:\/\/www.bibliotecadeseguranca.com.br\/wp-content\/uploads\/2025\/05\/artificial-intelligence-accountability-policy-report.pdf\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-12143 size-full\" src=\"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/wp-content\/uploads\/2015\/03\/download_fr.gif\" alt=\"Download\" width=\"107\" height=\"25\" \/><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>NTIA &#8211; National Telecommunications and Information Administration Summary: Artificial intelligence (AI) systems are rapidly becoming part of the fabric of everyday American life. From customer service to image generation to manufacturing, AI systems are everywhere. Alongside their transformative potential for good, AI systems also pose risks of harm. 
These risks include inaccurate or false outputs; unlawful discriminatory algorithmic decision making; destruction of jobs and the dignity of work; and compromised&hellip; <\/p>\n","protected":false},"author":3,"featured_media":20563,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[51,132,6,19,112],"tags":[],"class_list":["post-20562","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-english","category-intelligence-artificelle","category-livros","category-risco-e-perdas","category-download"],"views":170,"_links":{"self":[{"href":"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/wp-json\/wp\/v2\/posts\/20562","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/wp-json\/wp\/v2\/comments?post=20562"}],"version-history":[{"count":1,"href":"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/wp-json\/wp\/v2\/posts\/20562\/revisions"}],"predecessor-version":[{"id":20564,"href":"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/wp-json\/wp\/v2\/posts\/20562\/revisions\/20564"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/wp-json\/wp\/v2\/media\/20563"}],"wp:attachment":[{"href":"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/wp-json\/wp\/v2\/media?parent=20562"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/wp-json\/wp\/v2\/categories?post=20562"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.bibliotecadeseguranca.com.br\/fr\/wp-json\/wp\/v2\/tags?post=20
562"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}