National policies for trustworthy AI

In the document SCOPING THE OECD AI PRINCIPLES (Page 22-25)

Governments should develop policies, in co-operation with all stakeholders, to promote trustworthy AI systems and achieve fair and beneficial outcomes for people and the planet, consistent with the principles above.

2.1. Investing in responsible AI research and development

Governments should consider and encourage long-term investments in interdisciplinary basic research and development to spur innovation in trustworthy AI that focuses on challenging technical issues as well as on AI-related social implications and policy issues.

Governments should in particular consider:

- Developing high-level frameworks to coordinate whole-of-government investments, especially in promising areas underserved by market-driven investments.

- Prioritising interdisciplinary research and development to address the ethical, legal and social implications of AI; cross-cutting issues such as bias, privacy, transparency, accountability and the safety of AI; and difficult technical challenges such as explainability.

- Using public procurement, promoting joint public and private procurement, and establishing flexible joint venture funding systems to spur market investment in responsible research and development, to encourage broad-based evolution of the market for AI-based solutions, and to foster diffusion of AI systems that benefit society across regions, firms and demographic groups.

2.2. Fostering an enabling digital ecosystem for AI

Governments should foster an enabling ecosystem, including digital technologies and infrastructure, competitive markets as well as mechanisms for sharing AI knowledge to support the development of trustworthy AI systems.

Governments should in particular consider:

- Investing in, and providing incentives to the private sector to invest in, AI enabling infrastructure and technologies such as high-speed broadband, computing power and data storage, as well as fostering entrepreneurship for trustworthy AI systems.

- Encouraging the sharing of AI knowledge through mechanisms such as open AI platforms and data sharing frameworks while respecting privacy, intellectual property and other rights.

2.3. Providing an agile [and controlled] policy environment for AI

Governments should provide an enabling policy environment to support the agile, safe and transparent transition from research and development to deployment and operation of trustworthy AI systems. To this effect, governments should review existing laws, regulations, policy frameworks and assessment mechanisms as they apply to AI and adapt them, or develop new ones as appropriate.

Governments should further encourage AI actors to comply with the applicable national frameworks and global standards.

Governments should in particular consider:

- Using experimentation, including regulatory sandboxes, innovation centres and policy labs, to provide a controlled environment in which AI systems can be tested.

- Encouraging stakeholders to develop or adapt, through an open and transparent process, codes of conduct, voluntary standards and best practices to guide AI actors throughout the AI lifecycle, including for monitoring, reporting, assessing and addressing harmful effects or misuse of AI systems.

- Establishing and encouraging public and private sector oversight mechanisms of AI systems, as appropriate, such as compliance reviews, audits, conformity assessments and certification schemes, while considering the specific needs of and constraints faced by SMEs.

- Establishing mechanisms for continuous monitoring, reporting, assessing and addressing the implications of AI systems that may pose significant risks or target vulnerable groups.

2.4. Building human capacity and preparing for job transformation

Governments should work closely with social partners, industry, academia and civil society to prepare for the transition in the world of work and empower people with the competences and skills necessary to use, interact with and work with AI.

They should ensure that AI deployment in society goes hand in hand with equipping workers fully for a fair transition and new opportunities in the labour markets. They should do so with a view to fostering entrepreneurship, creating quality jobs, making human work safer, more productive and more rewarding, and ensuring that no one is left behind.

Governments should in particular consider:

- Developing a policy framework conducive to the creation of new employment opportunities.

- Encouraging research on occupational and organisational changes to anticipate future skills needs and improve safety.

- Promoting a broad, flexible and equal-opportunity range of life-long education, technological literacy, skills and capacity-building measures that allow people and workers to engage successfully with AI systems across the breadth of applications.

- Developing schemes, including through social dialogue, for fair transition to support people whose current jobs may be significantly transformed by AI, with a focus on training, career guidance and social safeguard systems.

- Encouraging education institutions and employers to provide interdisciplinary education and training needed for trustworthy AI, from STEM to ethics, including through apprenticeships and reskilling programmes to train AI specialists, researchers, innovators, operators and workers.

2.5. International co-operation for trustworthy AI

Governments should actively co-operate at the international level, among themselves and with stakeholders in all countries, to invigorate inclusive and sustainable economic growth and well-being through trustworthy AI in all world regions, and to address global challenges.

They should work together transparently in all relevant global and regional fora to advance the adoption and implementation of these principles and progress on trustworthy AI.

Governments should in particular consider:

- Supporting international and cross-sectoral collaboration concerning these principles, including through open, global multi-stakeholder dialogues that can help build long-term expertise on trustworthy AI.

- Promoting cross-border collaboration for responsible AI innovation through sharing of AI knowledge, and maintaining [free] [transborder] flows of data with trust that safeguard security, privacy, human rights and democratic values.

- Encouraging the development of globally accepted practical technical standards, terminology, taxonomy, and measurement methodologies and indicators to guide international co-operation on trustworthy AI.

- Building AI capacity to bridge digital divides and to share the benefits of trustworthy AI among all countries.

[Provision on measurement to be added: Governments should encourage the development of internationally comparable metrics based on common measurement methodologies, standards and best practices to measure global activity related to AI research, development and deployment, and to gather the necessary evidence base to assess progress in the implementation of these principles.]
