Final answer:
To build a new AI R&D framework, the G20 AI Principles should be consulted for their comprehensive approach to responsible AI development, including corporate responsibility, ethics and governance, and interdisciplinary collaboration.
Step-by-step explanation:
If you are attempting to build a new framework for the research and development (R&D) of AI, you might first look at the G20 AI Principles, which are closely modeled on the OECD AI Principles and emphasize responsible practice in this area. These principles are designed to guide the responsible development and use of AI through a multi-stakeholder approach. They are mindful of global economic impacts and technical advances while also addressing corporate responsibility and the potential dangers of AI.
The G20 AI Principles also stress diversifying the engineering core to include social scientists and cognitive scientists, enriching the development process with interdisciplinary insight. They further point toward creating ethics certification programs and acknowledging the full spectrum of views on the ethics and governance of AI.
These principles are part of a broader initiative to ensure that AI technologies are developed with transparency, accountability, and attention to societal impact, responding to concerns about privacy, security, job displacement, and the overarching ethical implications of artificial intelligence.