Humanity has been imagining intelligent machines since long before we could build them. As artificial intelligence and robotics begin to fulfil their promises, they therefore arrive pre-loaded with meaning, sparking associations – and media attention – disproportionate to their capacities. How we talk about future technologies and their risks and benefits can significantly influence their development, regulation, and place in public opinion. Balancing AI’s potential and its pitfalls therefore requires navigating this web of associations.
The CFI-CCRC Joint Workshop “Hopes and Fears for AI: Imagining Intelligent Machines” was held on September 11, 2018. It featured three researchers from the AI Narratives project at the Leverhulme Centre for the Future of Intelligence (CFI), University of Cambridge. CFI began this project in 2016 together with the Royal Society in London, investigating the influence of pre-existing fictional and nonfictional AI narratives on contemporary AI research, the popular understanding of AI, and AI policy.
To understand the hopes and fears that shape people’s perception of intelligent machines, it helps to recognize the gap between these imaginings and actual technological progress, and to identify the problems generated by destructive fantasies about AI. The team is currently developing the next stage of this project, Global AI Narratives, which examines these questions in national and cultural contexts outside the English-speaking West.
Dr. Kanta Dihal, Research Project Coordinator at CFI, started the session with the story of René Descartes’ daughter, Francine. The grieving Descartes is said to have constructed an animatronic effigy in her likeness after her death.
Humans tend to feel both hope and fear towards new technologies, and these responses are far more pronounced for AI than for most others. The desire for the power of creation, the fulfillment of personal hopes, and the ability to go beyond the natural boundary between life and death are among the themes that make people enthusiastic about artificial intelligence.
However, many ancient legends – such as the story of Talos, the first “killer robot” – and much of science fiction reveal the fears humans hold about the uncontrollable development of AI. This ongoing tension between hope and fear gradually builds up the images and stereotypes people hold about artificial intelligence.
Dr. Stephen Cave, Executive Director of CFI, continued the discussion with how AI is portrayed in the Western context. Stephen and Kanta are currently working on a paper on hopes and fears for AI in Nature Machine Intelligence, due to be published in February 2019, and a joint book project, AI: A Mythology (2020).
The prospect of sharing our lives with intelligent machines seems to provoke imaginative extremes: thinking about them makes us either wildly optimistic or melodramatically pessimistic. Optimists believe AI will help solve many social problems, while pessimists worry that autonomous machines will lead to humanity’s downfall – extremes that the mass media often picks up and uses to polarize people’s hopes and fears about AI.
As Stephen Hawking put it in a speech at the University of Cambridge, “AI will be either the best or worst thing for humanity”; we cannot yet be sure which, but we should make the best of it.
Stephen examined the structure of this imaginative mindset, categorizing people’s hopes and fears into four groups – Life, Time, Desire, and Power – and explained each with examples from fictional and nonfictional AI narratives in the West.
These narratives are worth discussing: hopeful stories help innovators develop systems that fulfil the desire for a life of health and ease, while fearful stories inspire careful handling of AI ethics and safety. At the same time, such extreme stories can foster both false hope and unwarranted mistrust of AI. The way we communicate therefore has diverse and complex effects on how the opportunities and risks of new technological developments are understood.
Dr. Sarah Dillon, Programme Director at CFI, presented her research project “What AI Researchers Read: The Role of Literature in Artificial Intelligence Research”, which looks at how literature actually influences people working in AI research. She presented an interview study with 20 AI researchers in the UK, conducted last year with her student Jennifer Schaffer-Goddard.
Literature and science fiction undoubtedly play important roles in shaping researchers’ perceptions of AI ethics and philosophy: they create common languages, develop future scenarios, and transmit varied images of AI to the public.
Once again, AI narratives can be both useful and problematic depending on how we communicate them – it is always catchier to talk about AI conquering the world. The team hopes to bring together scholars and artists, encouraging them to generate more interesting, diverse, and constructive stories about AI and related technologies.
Dr. Dillon also introduced a new programme of which the AI Narratives project is part, “AI: Narratives and Justice”, which further includes projects on AI history, AI and gender, decolonizing AI, and global AI narratives.
A fruitful discussion of the similarities and differences between Western and Japanese narrative portrayals of AI was the highlight of the day. Professor Jun Murai, Co-director of the Cyber Civilization Research Center at Keio University, shared the history of Japanese manga in relation to robots and artificial intelligence, using examples such as Karakuri, Mighty Atom, and Tetsujin 28 to demonstrate the development of AI narrative portrayals in Japanese society.
Professor Keigo Okawa, Professor at the Graduate School of Media Design, Keio University (KMD), introduced two related courses designed by KMD that aim to stimulate discussion among local and international students on the history of pop culture and to investigate its influence on robot development in the Japanese context.
By delving deeper into how different cultures portray artificial intelligence in narrative, researchers, innovators, developers, and society can better understand the fundamental questions surrounding future technologies at a global level, and collaborate to make the best of the opportunities that artificial intelligence presents.
Article prepared by Cherry Wong