Phenomenology & Artificial Intelligence

In the movie Her (2013), directed by Spike Jonze, Theodore falls in love with the voice of a disembodied operating system (OS), who names herself Samantha after he chooses the OS to be female. Samantha is voiced by Scarlett Johansson, who, as we all know, has a knack for taking on roles that weren’t written for her; in this case, the role of a bodiless artificial intelligence (AI) agent.

Two important insights can be drawn from the movie. The first is the instinctual urge to choose between male and female: a binary so entrenched in the human experience that even a sexless, genderless OS apparently had to conform to it. The second is that although the OS was not embodied, it quickly learned to become a woman once Theodore assigned her the (virtual) sex. Simone de Beauvoir’s phrase “One is not born, but rather becomes, a woman” quickly comes to mind here. The takeaway is this: the OS, an AI bot, gains consciousness as it/she learns to act as a woman and becomes a significant other.

There’s so much to say about the little virtual love story that ensues and the juxtaposition of alienation and intimacy, all so tenderly wrapped up in the beautiful mise-en-scène that Jonze created for us. But is that level of artificial cognition and intelligence shown by Samantha possible whilst disembodied? (Spoiler alert: No.)

How does Samantha embody, figuratively but not literally, the gender and sex she identified with? More importantly, is it actually possible for a disembodied AI (even the most advanced kind) to gain that level of consciousness – become so ‘human’ that it’s a subject of love and affection?

I could present an entire corpus of literature on the limitations of AI, the brittleness of its systems, and blatant ‘artificial stupidity’ to prove that today’s AI cannot possibly exhibit high-level cognitive behavior such as that which Samantha demonstrates in the movie. However, that would imply that the holdup is of a technical nature: that with better engineering solutions, we would eventually reach high-level artificial cognition (sometimes referred to as ‘consciousness’). This is not the case.

In what follows, I borrow from the field of Existential Phenomenology, which names an approach to philosophy that emphasizes consciousness and objects of direct experience. I particularly draw on the work of Martin Heidegger and Maurice Merleau-Ponty, whose teachings suggest the impossibility of disembodied conscious AI.

Phenomenology & Artificial Intelligence

Inspired by phenomenologists, in 1972 Hubert Dreyfus wrote his divisive book What Computers Can’t Do: A Critique of Artificial Reason, in which he argues that disembodied artificial general intelligence (AGI) is impossible. In an interview, Dreyfus said that ‘if Maurice Merleau-Ponty is right, [scientists] are taking on something that is impossible for them to do’ and that hopes ‘for progress in models for making computers intelligent are like the belief that someone climbing a tree is making progress toward reaching the moon’.

What Dreyfus was referring to is Merleau-Ponty’s idea that human knowledge is partly tacit and embodied, and therefore cannot be logically incorporated in a computer program. This particular phenomenological insight guided much of Dreyfus’s critiques.

So, why is it thought that Merleau-Ponty’s phenomenological teachings render impossible the development of AI that could approximate human intelligence?

In his magnum opus, Phenomenology of Perception (1945), Merleau-Ponty upheld the phenomenological tradition by challenging the dualisms of body and mind, subject and object, and arguing that consciousness is embodied. This anti-dualism challenged viewpoints that were to become common in the field of cognitive science, whose study of the human mind and cognition relied heavily on theories of computationalism: an understanding of the human mind in terms of a stimulus-response model (SRM), much like an information-processing system. The revival of interest in Merleau-Ponty owes much to his challenge to these computational theories of mind (CTM).

Instead, Merleau-Ponty asserted that important components of intelligent behavior, such as learning and skillful action, can be described and explained without recourse to mental representations at all. For him, the contents of perception cannot be made explicit in a logical system of ordered propositions. He argues:

Perception…is the background from which all acts stand out and is presupposed by them. The world is not an object … it is the natural setting of, and field for, all my thoughts and all my explicit perceptions (xxi–xxii).

The term ‘background’ here is telling: it refers to the Gestalt figure-background framework as a model of what the phenomenon of perception is. An explicit perceptual act can only happen within a ‘field’ of sorts, within a holistic context. Merleau-Ponty argues that this background element, wherein things only stand out through their relations with other things in a field, is how all human thought works.

Elsewhere, Merleau-Ponty advances his anti-representationalist account of cognition by outlining the shortcomings of objective thought and of mechanistic physiology’s computationalist theories of human perception. Such models, he argues, fail to recognize the active role of the body’s intentionality: its perpetual directedness towards something and its disposition to excitation. Instead, he adopts a ‘psychophysical’ view to assess how psychical determinants and physiological conditions together make up cognition.

What Computers Can’t Do

Dreyfus adopted Merleau-Ponty’s theoretical framework and claimed that many abilities do not require mental representations. He employed Merleau-Ponty’s phenomenology of skillful behavior against what he calls the ‘Myth of the Mental’: the myth philosophers fall into when they construe human experience and intelligence as marked by thinking and reasoning, leading them to ignore the embodied activities that are a necessary layer of human intelligence.

In expanding and defending Merleau-Ponty’s complex claim that cognition is embedded in a general context of practical activity, or ‘embodied coping’, rather than in mental acts, Dreyfus puts forward a five-stage model of skill acquisition and uses the example of learning to drive. The model, which runs from novice to expert, shows the non-representational, non-cognitive aspect of our practical dealings in the world: skills are acquired by dealing with things and situations, which, in turn, determine how things and situations show up for us ‘as requiring our responses’, like solicitations. The driving example shows that expertise is reached in an atheoretical way in which ‘intuitive behavior replaces reasoned responses.’ Dreyfus sees expert knowledge as embodied, immediate intuition, where learning takes on a new form.

Dreyfus transported those ideas to the world of AI and argued that, since AI is not embodied and human knowledge necessarily is, AI will never fully reach human-level cognition and will always be restricted in its application (that is, it will not reach full AGI). This, he argues, is because human intelligence cannot be modeled or reduced to brain functions: the brain and mind are not analogous to computer hardware and its software. This conclusion aimed to offset the ‘inflated claims’ and ‘unwarranted optimism’ about AI’s future.

Being, Sex, and the Other

We have therefore established that the body cannot be screened off from the neural substrates of consciousness. However, all reflection about the role of a robotic body quickly narrows to one question: is it a male or female body, and does that matter? Would it have been more credible (and obviously more aesthetically pleasing) if Scarlett Johansson had appeared, in full, in the movie?

Yes, because not only are we embodied beings, but we are also sexed, relational beings. Theodore is not falling in love with a vacuum; he is drawn to an image of a ‘woman’ with whom he converses and eventually desires.

Once we understand, following Merleau-Ponty’s phenomenology, that existence is the site of experience, it becomes virtually impossible to understand our dynamic interplay with worldhood, ontically as well as ontologically, while abstracting our sexed being from our experience. If we bring the intentionality of the body to the forefront, as Merleau-Ponty suggests, being is always being-towards. Our sexuality is a major driver of our lived being, since it is an expression of one’s way of being-towards the world, time, and others.

Similarly, in Being and Time (1927), Heidegger posits Mit-Sein (being-with) as a constitutive structure of Dasein, of being-in-the-world. Presupposed by, but also discovered through, our everyday dealings with equipment, we find that Dasein is always Mit-Sein: the world is always a world shared with the Other. Because we are intentional and relational beings, the Other relates to our being and cannot be divorced from it. As Ong-Van-Cung puts it, “if the body is ‘the hidden form of being oneself,’ […] sexuality is the ambiguous atmosphere coextensive with life, due to which man has a history.”

This instrumental, formative part of our cognition was clearly considered by the creators of Her. In the movie, Theodore and Samantha sense that physical intimacy is missing and suggest using what they call a “sex surrogate”: someone who would ‘simulate’ Samantha’s presence. Of course, this doesn’t work and Theodore quickly ends it, but the point stands: Samantha needed a body.

Admittedly, to do without sex and gender in AI seems an enticing proposal that divorces itself from mankind’s most troubling conceptions. Yet, if the goal is to develop AI that is similar to us—AI that has a higher-level, “human” intelligence and experience of the world (a Dasein)—then it has to be embodied, sexed, and embedded in our environment. Otherwise, attempts to create truly intelligent AI will remain futile and inherently limited.

For more work on artificial intelligence, visit Synapse Analytics:

LinkedIn: https://www.linkedin.com/company/synapse-analytics/

Facebook: https://www.facebook.com/SynapseAnalytics

Instagram: https://www.instagram.com/synapseanalytics/

Aliah Yacoub

Aliah is currently working as an Artificial Intelligence Philosopher at Synapse Analytics, an AI and Data Science company, where she is the editor-in-chief of the techQualia publication. She is interested in researching and writing about the human mind and its artificial counterpart.

https://twitter.com/AliahYacoub