
CHAPTER 3 - The Philosophical, Poetic and Political

This is where we catapult from what we think we understand about the human-machine relationship to how we might rethink its future.

“Every single one of you is an AI,” asserted De Kai Wu, Professor of Computer Science and Engineering at the Hong Kong University of Science and Technology (HKUST). Like empathy, traditionally human-ascribed qualities - such as creativity, emotion recognition, and sympathy - are no longer the “comforting myths” that once defined what is uniquely human. Instead, as De Kai points out, an AI can simulate and implement these capacities perfectly well. Hence, he argues that what differentiates humans from machines is negligible. Take, for example, emotion recognition. Today, machines perform automatic emotion recognition with high accuracy thanks to multiple inputs such as facial expressions, speech intonation, and lexical choices. “The real differentiator is intentionality,” said De Kai, “and that’s true regardless of whether the agent is a human or a robot, because, remember, you are all AIs.”

Expanding on this, Rama Akkiraju, IBM Fellow, challenged the room to consider, “Why do we care? Why is it important for agents—computer agents, robotic agents—to express or demonstrate sympathy, empathy, or any of these things? Why is it not enough for them to simply understand the user that they’re interacting with?”

Songyee Yoon, CEO at NCSOFT, also cautioned against projecting qualities of trust, empathy, and moral standards onto a machine’s functionality and role within the relationship. “People think that because it’s an outcome of a computer or computational algorithm, it’s more accurate, more just, more moral,” said Yoon. “I think we are faced with the need to form a new relationship which has immense cognitive empathy but does not necessarily come with a moral standard similar to the one we are used to or rightly expect.”

Terms like “understanding,” “purpose,” and “intelligence” typically litter AI discussions. But when these terms were placed adjacent to artificial intimacy, participants exposed several nuanced and emerging challenges—surprisingly—about the human-to-human relationship. For example, intentionality helps shed light on the role big commercial enterprises can play in manipulating human-machine interactions to improve their revenue at the expense of consumer privacy, agency, and intimacy (more on this later). The crux is that while artificial intimacy concerns itself (currently) with the human-machine interaction, its second-order effects are already beginning to ripple through society in the form of rising mental health concerns correlated with the use of social media, particularly among younger users. To handle the powerful effects on the horizon, society must grapple with new modes of thinking and potential responses.

The Philosophical
No discussion of AI is devoid of its epistemological history. But what if this origin story had it wrong? What if the traditional European perspective—humans have intelligence, animals have instinct, and machines have mere mechanisms—is in fact a category mistake? Tobias Rees, Director of Transformations of the Human at the Berggruen Institute, posited just this:

“There’s a kind of promise to AI that the distinction between intelligence and mechanism or between organism and machine or between human and machine might be negligible or unimportant, that it might not actually be really something that matters.”

Rees argued that this historical ontology defines categorical boundaries driven by human exceptionalism, limiting our ability to “question how to build machines to which we can relate and that can relate to us.”

Instead, Rees offered an alternative approach. What if we situate AI within the natural living system in which every plant, animal, or being exhibits an intelligence specific to its own existence? The current excitement around neural net architectures and machine learning is an example, within the field of AI, of this shift from a symbolic-logic-driven (top-down) tradition to a more emergent, “bottom-up” approach to intelligence. Informed by Gregory Bateson’s Steps to an Ecology of Mind, Rees questioned: “What would it mean to build AI from the perspective of an ecology of the mind? To build AI that is part of the world and that has a world?”

This call to tear down longstanding ontological categories may seem like a merely intellectual exercise; but, as Rees expanded, there is real interest in designing experiments with companies to explore what it means to create future technologies that blur the hard boundaries between human and machine. What if companies instead created a true companion species, in which the human is no more exceptional than the machine? If done correctly, there could be tremendous opportunities to address areas such as health and wellness and the needs of a growing aging population.

Moreover, Rees and others asked us to consider modes of Eastern philosophy, such as classical Confucian or Taoist perspectives, in which the individual does not exist and, unlike in European philosophy, neither do reciprocity and symmetry. In an experiment with Sony in Hong Kong, Rees and his team are attempting to build a “Daoist AI” that is grounded in relationalism rather than intelligence to inform an agent’s ability to navigate the world. This three-year research project aims to explore the relevance of Chinese thought to AI, whether such an AI can be created, and its implications for a new political economy.

Similarly, Yukie Nagai, Project Professor at the University of Tokyo, noted research from the Osamu Sakura Laboratory, which seeks to understand public visualizations of AI/robots in Japan and other East Asian countries. Their work suggests that the West juxtaposes humans with non-human animals and with AI/robots, while the Eastern narrative, heavily influenced by Taoism, emphasizes a continuity between objects. “In the traditional Japanese pictures called Ukiyo-e, the mother and infant are sitting next to each other and sharing the same perspective,” explained Nagai, “whereas in European or American pictures, they are often face to face with each other. So, it seems that such cultural or historical differences in the human-human relationship and the human-machine relationship have a strong influence on how we perceive and how we accept machines.”

The takeaway: In order to successfully design, develop, understand, live in, and realize a future Society 5.0, we may need to substantially rethink how we approach these questions: Do we need to move towards a model rooted in Eastern philosophy? What of other philosophical traditions? And can such a momentous ontological shift keep pace with scientific discovery and technological development? At the moment, there are far more questions than answers. But it is imperative that AI continue to be an experimental space that pushes us to think of new possibilities for understanding the world, how it is organized, and what role and place humans have in relation to the natural world and to machines.

The Poetic
Poetry gives us another framework for considering a relationship with AI. As Italian novelist Umberto Eco wrote, “those things we cannot theorize about, we must narrate.” One powerful instantiation of this is the use of metaphor as both descriptive and prescriptive in AI design. “If we give metaphors or guidelines that essentially offer people choices when they’re deciding what to use AI for, it’s a way to have impact,” noted Tom Gruber of Humanistic AI. One approach is to position AI as either automating, augmenting, collaborating, or competing with the human—otherwise known as the “assistant.”

The second predominant metaphor is AI as an “invisible hand” or “Big Brother.” This metaphor conveys a powerful, omnipresent authority (think Facebook, Google, etc.) that manipulates individual behavior through surveillance and hyper-personalization. It places the individual at a disadvantage and cedes both agency and power to a third party. To counter, Gruber offered an alternative metaphor—“Big Mother”—which moves the human-AI relationship towards augmentation, collaboration, and empowerment. Instead of big data and surveillance, the metaphor conveys the need to prioritize self-awareness, self-nurturing, self-care, and independence.

We’ll dive into the economic challenges of adopting new metaphors—like Big Mother—in the following section. Lucas Dixon, Scientist at Google Research, also urged the group to think even more broadly. He offered three additional metaphors for consideration: 1) AI as “another sense”; 2) AI as a sub-personality of the person who uses it; and 3) AI as a relationship itself.

The takeaway: Metaphors, like Big Mother, can serve as powerful tools in the design and application of AI. If combined with a deliberate examination of the “intent” of the AI agent, then perhaps we are moving in the right direction. AI for good has always aimed to enhance our capacity to be better humans, individually. But is there also room for an AI to help us restore human intimacy? And can it help elevate us beyond merely individual needs? “If you take a step back and think just a little bit about why we build technology for the human race, usually we do it to shore up something we’re not good at, and oftentimes to amplify something we’d like to be better at,” described Gruber. “But today’s problems are human-created problems. . . so one goal is to develop something [an AI] that can help humans grow, that is better at knowing humanity’s weaknesses and strengths than humans. . . A new role for AI would be to help us be our better selves collectively.” The third and final critique—the political—focuses on where the metaphorical rubber hits the real-world road.

The Political
To the extent that change can be achieved in the ontological space or that we find success in shifting to positive metaphors, reality remains bound by economic, legal, and political concerns. Recently, we have witnessed an upswing of ethical principles, guidelines, and frameworks - from the OECD to the Global Partnership on AI to Responsible AI at Microsoft to the Universal Guidelines for AI. The idea of “human-centric” AI serves as a cornerstone for AI policy work, the Aspen Institute’s included. Big themes include fairness, accountability, transparency, democratic governance, and safety.

“As a general matter, the (U.S.) law is very good at dealing with relations between entities like government and citizens or companies and consumers,” noted Marc Rotenberg, President and Executive Director of the Electronic Privacy Information Center (EPIC). “But it’s actually not so good at dealing with the kind of intimate relations between private people. A lot of ethical discussions about the relations between people and machines fairly quickly work their way into consumer products, commercial applications, and government systems, where those frameworks suddenly become very important.” As mentioned above, Big Brother-style devices designed to interact with people capture enormous amounts of sensitive data. And, “It’s almost directly proportional to the degree of intimacy between the device and the person where the interaction takes place,” said Rotenberg.

Just consider the amount of information you’ve shared with Google via your email account, or with Amazon’s Alexa through your last voice command. The volume of personalized data is staggering. Yet, for the moment, these human-machine interactions are generally considered benign. Rather, issues such as corporate incentives, privacy violations, accountability, and liability begin to surface through the lens of artificial intimacy. Cyrus Hodes, Vice President at The Future Society, cautioned that we must be careful to clearly distinguish the utility functions at play in the concern around artificial intimacy: “Utility of an AI system that can understand as superhumans versus the utility of projecting empathy versus being fooled.”

But how do we respond to this as a society in terms of codes and regulations? Richard Whitt, President of the GLIA Foundation and former public policy director at Google, suggested, “One way to frame this is the proposition that there are net benefits to society and to individuals to have human intimacy.” What is involved in that often asymmetric power relationship between parties—elements such as trust, vulnerability, and mutuality—can be, and has been, dealt with in the common law for hundreds of years through the fiduciary duties of care and loyalty. Whitt suggested that fiduciary law offers one way to help ensure that the deeply intimate bonds that can be forged by these technologies are based on genuine relationship rather than outright exploitation. Songyee Yoon, CEO of NCSOFT, added, “We do not want our empathy to be exploited for commercial motivations of a third party (whether it is a corporation or a political party), and it is something that we need to be particularly aware of and concerned about, and we probably need to work on developing policy guidelines to protect ourselves from such attempts.”

Intentionality is a key co-concept to consider. If these software applications or devices serve larger for-profit enterprises, then the goals, motivations, and intent of the AI are driven by socioeconomic systems. For example, Kate Darling, Research Specialist at the MIT Media Lab, observed that in watching the movie Her, the most interesting question was not the main character’s emotional relationship with the AI, but what happens after: “What if the company exploited his willingness to pay for the emotional connection and demanded that in order to keep the software running, you have to pay a licensing fee of $10,000 a month? And he would have paid for it, right?”

Whitt identified two levels of infrastructure that warrant attention: the virtual, which includes AIs and other technology tools, and the human, operating behind the scenes. While the focus is often on the virtual applications or tool augmentations presented to us, the human side includes the commercial incentive structures and the specific terms of the social relationships that are embedded in the virtual layer. “Is it possible to have an AI that truly serves our interest, without also reckoning with the human infrastructure?” asked Whitt. “And can the current asymmetric platform/user arrangement still support a healthy relational approach between the AI and the user?”

Our economic reality is a key focal point. Specifically, drawing on the Big Mother metaphor, participants questioned what business models might incentivize its adoption. One specific area is challenging the key performance indicators currently used to determine success in different economic models. “I’d really like to have metrics like good [for human] autonomy and [helps people develop] great judgement,” said Lucas Dixon of Google Research. “But the problem is, how do we know whether the machine interaction is helping someone develop empathy or, in fact, restricting someone from that development of empathy? The general problem is that it’s hard to measure these things. How do we know? What’s the test?” Tom Gruber offered the example of a commercial training system for the military and corporations that leverages AI to understand visual, emotional, and voice patterns. This intelligent tutor adjusts for your personal learning curve; but the metric is not targeted ads. Instead, the metric is the joint performance of the human within an augmented learning environment.

By way of a metaphorical stool, De Kai offered another potential approach to tackling issues around the structure and stability of the liberal world order. He calls this effort the “democratization of empathy.” The first two legs of the stool are a scarcity mindset and the allocation of resources. The third leg is our ability to outrun the destructive technologies we invent. “I’m concerned that AI today is eliminating that third leg,” said De Kai. “There’s no stopping AI evolution. Even if one country or one group stops, others will continue. And yet, our cultural evolution is still plodding along at the same linear rate. Humanity needs cultural hyper-evolution at a pace that it’s never before witnessed.”

The takeaway: Like the philosophical and poetic critiques, the political gives way to more questions than answers. But looking at artificial intimacy through the political lens, we begin to see glimpses of areas in which our regulatory and economic structures may need re-tooling. Participants offered a variety of interventions, from changing the metrics of corporate governance, to increasing user autonomy through a fiduciary component, to finding new ways to support inter- and transdisciplinary collaborations. As with the two critiques above, experimentation will play a critical role. The remaining question is, who will lead?
