
At the end of this report, we list the topics that we propose to explore in our forthcoming group discussions.

Why international cooperation on AI is important

Even more than many domains of science and engineering in the 21st century, the international AI landscape is deeply collaborative, especially when it comes to research, innovation, and standardization. There are several reasons to sustain and enhance international cooperation.

AI research and development is an increasingly complex and resource-intensive endeavor, in which scale is an important advantage. Several essential inputs used in the development of AI, including access to high-quality data (especially for supervised machine learning), large-scale computing capacity, knowledge, and talent, benefit from scale. Cooperation among governments and AI researchers and developers across national boundaries can maximize the advantage of scale and exploit comparative advantages for mutual benefit. An absence of international cooperation would lead to competitive and duplicative investments in AI capacity, creating unnecessary costs and leaving each government worse off in AI outcomes.

International cooperation based on commonly agreed democratic principles for responsible AI can help focus on responsible AI development and build trust. While much progress has been made aligning on responsible AI, there remain differences, even among Forum for Cooperation on AI (FCAI) participants. The next steps in AI governance involve translating AI principles into policy, regulatory frameworks, and standards. These will require deeper understanding of how AI works in practice and working through the operation of principles in specific contexts and in the face of inevitable tradeoffs, such as may arise when seeking AI that is both accurate and explainable. When it comes to regulation, divergent approaches can create barriers to innovation and diffusion. Effective cooperation will require concrete steps in specific areas, which the recommendations of this report aim to suggest.
At the same time, the work on developing global standards for AI has led to significant developments in various international bodies. These encompass both technical aspects of AI (in standards development organizations (SDOs) such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE), among others) and the ethical and policy dimensions of responsible AI. In addition, in 2018 the G-7 agreed to establish the Global Partnership on AI, a multistakeholder initiative working on projects to explore regulatory issues and opportunities for AI development. The Organization for Economic Cooperation and Development (OECD) launched the AI Policy Observatory to support and inform AI policy development. Several other international organizations have become active in developing proposed frameworks for responsible AI development.

In addition, there has been a proliferation of declarations and frameworks from public and private organizations aimed at guiding the development of responsible AI. While many of these focus on general principles, the past two years have seen efforts to put principles into operation through fully fledged policy frameworks. Canada’s directive on the use of AI in government, Singapore’s Model AI Governance Framework, Japan’s Social Principles of Human-Centric AI, and the U.K. guidance on understanding AI ethics and safety have been frontrunners in this sense; they were followed by the U.S. guidance to federal agencies on regulation of AI and an executive order on how these agencies should use AI. Most recently, the EU proposal for the adoption of a regulation on AI has marked the first attempt to introduce a comprehensive legislative scheme governing AI. Global corporate investment in AI has reportedly reached US$60 billion in 2020 and is projected to more than double by 2025.

In exploring how to align these various policymaking efforts, we focus on the most compelling reasons for stepping up international cooperation (the “why”), the issues and policy domains that appear most ready for enhanced collaboration (the “what”), and the instruments and forums that could be leveraged to achieve meaningful results in advancing international AI standards, regulatory cooperation, and joint R&D projects to tackle global challenges (the “how”).
