Joint Recommendation on Policies for the Development and Use of AI

David Farber[1]
Shumpei Kumon [2]

Note: The following statements are solely the views and opinions of the individual authors and do not necessarily reflect those of the organizations they are affiliated with.

1. Our Vision on AI

Expectations for AI

The accelerating development of AI (Artificial Intelligence) technologies, especially generative AI, has been truly remarkable in recent years. Their expanding social applications are expected to contribute significantly to solving the global threats and challenges that human society faces, such as poverty, conflict, suppression of human rights, infectious diseases, economic depression, environmental problems, and even infodemics[3]. In this sense, we fundamentally welcome further progress in the development of AI technologies.

The Risks Posed by AI

At the same time, however, we must not forget that AI technology, in its development and utilization, entails risks as great as its mighty potential. Therefore, we must continuously examine the risks of AI and dynamically formulate and implement necessary countermeasures based on the results of such analysis. This risk analysis and implementation of countermeasures, collectively called “AI Governance,” should be promoted on a global scale and thus inherently requires international cooperation.

There is no doubt that AI will have an extremely large social impact, perhaps no less than the advent of the Internet we experienced 30 years ago. The potential of AI should not be monopolized by a specific few, be it a country or a company, but should be shared as openly as possible so that all of humanity can enjoy its benefits, just as the Internet has been.

Positioning of Generative AI

In the shadow of the economic slump that followed the Great Recession of 2008, a “new accelerated growth process” began, quietly at first, but eventually in a manner that has excited and enthused a large number of people. This is the breakthrough in AI technology. So far, the “breakthrough” has occurred in two stages: the first is the elaboration of “deep learning,” a method of machine learning, and the second is “generative artificial intelligence,” as typified by the recent ChatGPT.

Distinguish between “Weak AI” and “Strong AI”

ChatGPT, a concrete example of generative AI, is an AI specialized in natural language processing and is considered a kind of intelligence in that it almost certainly passes the Turing Test (which may need re-statement). In other words, current generative AI has a certain level of usefulness when used in specific applications, but it still remains “Weak AI” or “Narrow AI.”

The real threat to our society would be the emergence of “Strong AI” with a “will” and an “identity,” but on several grounds we believe that AI at its current stage is not evolving in this direction, and that these two different types of AI should be distinguished very clearly.

In his 2017 “Maturation of Modernity and Emergence of a New Civilization: Human Civilization and Artificial Intelligence I” (NIRA Research Report), Kumon advocated the following: “Let specialized AI be the blessing for humanity.”[4]

If we succeed in coexistence [with specialized AI], modern civilization will succeed in superimposing itself on post-modern civilization while reaching true maturity and achieving finality. In a society in which human beings and AI with augmented human capabilities are paired up to work together and coexist peacefully while preserving human free will and autonomy, AI will become a blessing for mankind.[5]

Desirable social responses to AI at the present stage

Content created by generative AI is likely to contain unintentional errors or intentional fakes. Therefore, users should not unconditionally trust or naively use generative AI. In this sense, we propose widely promoting the establishment and dissemination of social rules on how and in what form generative AI should be used.

On the other hand, current generative AI is not considered to pose supernatural dangers such as a “revolt against humanity” as envisioned by so-called Singularity theory, and for the time being we believe it is important to pursue the possibilities offered by AI while appropriately regulating its outcomes in the marketplace and information space[6].

Expectations for G7 Leadership

In this regard, we welcome the fact that AI Governance will be one of the key agenda items at the G7 Summit to be held in Japan in May 2023. We hope that the G7 leadership will steer global AI Governance efforts in the right direction, minimizing AI risks and ensuring that AI will play a major role in making the human society of the future, the Information Society as it matures, more peaceful, prosperous, and pleasurable than it is today.

In this context, we welcome the U.S. government’s “AI Bill of Rights,”[7] the Japanese government’s “AI Strategy 2022,”[8] the European Union’s “AI Strategy (2018),”[9] and the Council of Europe’s efforts to develop an “AI Treaty,”[10] all of which place people’s rights and wellbeing as top priorities. We look forward to further progress in their harmonization and expansion to a global scale.

2. Specific Recommendations

In the following, we attempt to make more concrete recommendations on what should be done regarding the development and use of AI technologies, focusing on what we believe to be the minimum requirements at present.

Disclosure of information on technological development

If the development of AI technology progresses further, there is a non-zero possibility that autonomous AI (so-called “strong AI” or even “artificial superintelligence”[11]) with the will and purpose to compete with or even surpass humans will be realized around “2045” (Kurzweil). We do not call for a total ban on the use of “generative AI” or on the development of “artificial superintelligence,” but we do believe that it is essential to be more than adequately prepared. We should not take the risk of pursuing technological development alone before such preparation is in place.

However, development in technological fields should basically be promoted freely, and especially in the field of AI, where there are many unknowns, fixed laws and regulations that impede innovation should be avoided. To the extent possible, we call for securing a path of development based on the creativity of private entities and the choices of users in the marketplace and information space. When regulations are to be imposed, they should be limited to cases where clear damage or harm has been confirmed, and existing legal frameworks should be utilized as much as possible.

Accordingly, we call upon the entities promoting the technological development of AI to regularly and voluntarily disclose timely information on its status and prospects. We propose that governments and the relevant international organizations jointly establish certain standards for the content and methods of such public disclosure and enact a system that mandates the implementation of those standards. In formulating such a system, the disclosure obligations imposed on listed companies may offer a good reference to follow.

In this regard, we basically support open policy formation through requests for comments (RFCs), as indicated in the “AI Accountability Policy”[12] released by the U.S. National Telecommunications and Information Administration (NTIA) on April 11. We also welcome the fact that Japan has already promoted the implementation of the “AI Governance Guidelines”[13] through its Ministry of Economy, Trade and Industry, and suggest that other countries follow suit.

Establishment and promotion of AI Utilization Guidelines

Considering the current innovativeness of generative AI and its great potential, we believe it is extremely important for users to be fully aware of the capabilities and limitations of AI before making use of it. We do not take the position of unconditionally and comprehensively prohibiting the use of generative AI. AI providers should disclose and provide sufficient information to users in a timely manner in accordance with the “AI Governance Guidelines” mentioned above. At the same time, however, the responsibility of those who use the tools should also be clarified. It should be clearly confirmed that the users themselves must bear the responsibility for any negative consequences that may result from their incorrect use. This is because we believe that the real actors in the information society are the people (citizens or netizens) who utilize information technology.

More specifically, “Guidelines for AI Utilization” should be established and promoted. However, a single set of guidelines is not necessarily sufficient: different standards are needed according to, for example, the attributes of users (citizens, students, companies, etc.), their technological skills, and their purposes of use. Going forward, we propose a multi-stakeholder approach in which various entities such as governments, companies, citizens, and researchers work together to establish guidelines in their respective fields and promote their dissemination.

We offer these recommendations to the G7 Summit to be held in Japan, based on our review of the history of computer science and technology, of Internet technology, and of their application and diffusion, in the hope that the Information Society will develop in a more desirable direction for mankind.

To contact the authors, please send email to the following addresses:

Prof. David Farber:
admin@www.ccrc.ac.jp

Prof. Shumpei Kumon:
iza@anr.org

[1] Guest Professor (Global) and Co-Director, Cyber Civilization Research Center, Keio University

[2] Professor and Director, Institute for InfoSocionomics, Tama University

[3] https://www.who.int/health-topics/infodemic

[4] “Specialized AI” here is synonymous with “weak AI.”

[5] https://www.nira.or.jp/paper/report_1708.pdf (in Japanese only)

[6] In contrast to the “marketplace” in industrial society, Kumon named the information space that emerges in an information society “Intelplace.”

[7] https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf

[8] https://www8.cao.go.jp/cstp/ai/aistratagy2022en_ov.pdf

[9] https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018DC0237&from=EN

[10] https://rm.coe.int/cai-2023-01-revised-zero-draft-framework-convention-public/1680aa193f

[11] Nick Bostrom, Superintelligence (2014), and Tomohiro Inoue, Jinkou Chochinou (2017).

[12] https://ntia.gov/press-release/2023/ntia-seeks-public-input-boost-ai-accountability

[13] https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/20220128_report.html