Let’s roll in the video cart for this post. We’ll start with this clip:

The official narrative that runs through this video is that robots are increasingly lifelike and capable of amplifying, or adding flourishes to, qualities we admire in living form: in this case ‘talent,’ but also efficiency, intelligence, agility, etc. It’s a story in which we are primed to marvel at their accomplishments and applaud their ever-amazing improvements. And we do — just look at the audience in this clip.

But look closer. The most critical part of the story in this video focuses our attention not on amazement at robotic antics, but on empathy, when one of the robots collapses. Do you really think this fail was accidental? It's more likely a narrative device used to generate relatability and overcome the creepiness we humans sense when robots become 'too perfect' or too similar to living beings. What better way to enlist this device than through performative vulnerability? Boston Dynamics serves it up steaming hot here.

Artists Sun Yuan and Peng Yu also leverage the device well in their piece titled Can't Help Myself, which was on display at the Guggenheim Museum in New York City. The work consisted of a glass-enclosed giant robotic arm programmed to perform 32 distinct movements, such as "ass shake," "scratch an itch," "jazz hands," and "bow and shake," to entertain viewers and anthropomorphize the sculpture. However, it was also programmed to sweep up a dark red fluid that continually leaked from its inner core. Over time the leak grew larger and harder for the arm to manage, shifting the robot's behavior from joyful dancing to the punishing work of capturing and retaining its vital fluids. Eventually, the artists decided to turn the robot off.

Trigger alert: it’s sad, really sad. Seriously, the first time I saw this, it hit deep.

Let's take a little diversion to think about what's happening with audiences in both of these examples. We'll start by borrowing from the Japanese concept of Wabi Sabi, which centers on an appreciation for the beauty found in imperfection. In practice, it highlights the aesthetic value of authenticity, including the passage of time, aging, and even decay. The 'wabi' half of the concept is expressed through rustic simplicity and understated elegance. The 'sabi' half refers to the beauty that comes with age and wear; it celebrates this through an appreciation for patinas, imperfections, and the stories they provoke.

Wabi Sabi can be seen as a deeply rooted substructure for what appears to be happening in audiences that become emotionally engaged when a robot's imperfections are surfaced. The pratfall effect from social psychology runs parallel to this: it suggests that a person's perceived attractiveness increases after they make a mistake, provided they are generally competent. In addition to the 'injured' robots in the clips above, we see this phenomenon leveraged in widely popular works like Wall-E (we feel sympathy for him in his futile struggle to clean up infinite piles of garbage on a post-apocalyptic Earth) and the novel Hum (humans feel sympathy for robots after witnessing them grow exhausted from perpetually delivering ads). Their vulnerabilities and imperfections are designed to evoke an endearing audience response, and it seems to work (maybe too well).

When Things Get Personal

While narratives like the ones we see above can be very effective when robots are observed from a distance (behind glass, on a stage, in a movie or novel), they are at risk of disintegrating when robot-human interactions become more intimate. In those settings, where humans shift from audience to participants or cohabitants, we start to see forms of human resistance, and even destruction, as both the seamlessness of embodied technology and the manufactured vulnerability of its ‘imperfections,’ shift from human fascination to ruptures in our understanding of consciousness, and ourselves. This enchanting machine that was once surprising and charming is now ‘all up in my business.’ So, we resist; but our resistance is layered…and very telling. Let’s take a look at a few examples.

You may remember hitchBOT, the hitchhiking robot that made headlines in 2014. It was designed to be temporarily adopted as a traveling companion, and passed along, as part of an experiment to learn how people interact with technology. From Wikipedia:

It was small and had a look the team described as "yard-sale chic," to evoke trust and empathy, and had a child's car seat base to be easily and safely transportable.

The robot could not walk – it completed its "hitchhiking" journeys by "asking" to be carried by those who picked it up. The robot could engage in basic conversations, discuss facts, and function as a robotic companion during travels in the vehicle of the driver who picked it up.

[and then...] In 2015, its attempt to hitchhike across the United States ended when it was stripped, dismembered, and decapitated in Philadelphia.

In this CBS story covering the robot’s demise, one of the creators is quoted saying that she “was most concerned about children who loved hitchBOT and followed it on social media. Her team doesn’t plan to release the last photo of it to protect young fans who might be distraught.” (!)

Let’s move on to some more mundane examples to dive deeper into the contrast between the ‘official’ narrative of robotics and everyday human interactions with them. Take a moment to watch these robots folding laundry or peeling an orange.

How easy is it to imagine these robots performing these tasks in your home? And how comfortable is it to imagine them taking up as much space as another human in your home to do so? Or maybe they perform these tasks while you sleep. A bit disconcerting, right?

Maybe the technology just "isn't there yet." Or maybe we humans still need more time to acclimate to their presence. Both are likely true, and I'm sure there are highly capable researchers addressing each.

However, even if those obstacles are overcome, problems remain. First, the subtext of human-robot interaction narratives positions humans as passive receivers of the wonders robots bestow upon us. Yet when we see them function in the real world, in OUR world (not the factory floor or the stage), they tend to frustrate and/or alienate us. Their presumed efficiencies often clash with our expectations, and confound more than they delight us.

Second, if we presume that our role is that of commander in a future populated with everyday robots, do they not become an embodied and constant reminder of how inefficient we humans are? If we’re being told that the highest value of executing any task is sleek, seamless, and serene efficiency, our flaws and idiosyncrasies are magnified by contrast. We’re none of those things, at least not consistently. Their very presence may seriously and continually challenge the value of our agency in the world. What’s to become of our pride in ‘mastery’ (regardless of how mundane it may be)?

Further, even if robots are pre-programmed to express performative vulnerability and concocted imperfections to win us over emotionally, might we not become even more resistant to their presence once these strategies are exposed?

Taken together, these considerations signal that more intimate interactions with robots are likely to trigger or amplify animus within us, which may in part explain some of the destructive behavior we've seen above. We resist their presence to highlight, or punctuate, the difference between robots and humans; to create greater distinction between the two embodied, yet conflicted, forces sharing common space and familiar tasks. Take a moment to check out the Flesh Fair from the movie A.I.:

This form of resistance seems more likely to increase than decrease as robotics that enlist narratives of perfection and performative vulnerability advance. Might we not eventually devise screening tests, harbor resentments, demand constraints, or…worse? This is classic 'Othering,' something our species can't seem to shake.

But resisting and Othering are only part of the equation. It's HOW robots go about their work that further exacerbates our troubled relationship with them in close proximity. A closer look at the video of a robot peeling an orange reveals that what the makers intend to function as a pair of hands quickly crosses into the uncanny: each hand appears to have a 'mind' of its own, a sort of hydra beast dissecting something that has no inherent meaning to it.

So, the simple task of peeling an orange starts to reveal an alien approach to the task that becomes unsettling for humans to witness. I wouldn’t be surprised if most people watching this are seized by the desire to grab the orange from the robot and peel it themselves — not to do it faster or more efficiently, but to put an end to witnessing the weird ‘logics’ embedded in each hand of the robot. This might be even more evident in the video showing a robot unlock a tool box on that same page.

Which leads us to the question of which is more disconcerting: robots that apply alienating logics as they take on the familiar tasks around us, or robots that perform familiar tasks with efficiencies that outstrip our capabilities? Or, perhaps worst of all, interacting with domestic robots that feign imperfection to position themselves as endearing? (Imagine the orange-peeling robot dropping a slice and then timidly apologizing for its 'mistake.' Eww.)

Perfection and Domesticity

In light of the unsettling relationships we may be forming with robots, let’s take a look at how our responses have already begun to take shape in domestic applications of robots when the narrative of efficient perfection alone is centered.

From my own research into technology in the home across many cultures, I can attest to a clear and constant tension between the goals we set for orderly households and the messy business of living. 'Magical' tech solutions have compelling value propositions that suit our aspirations, but our experiences with them are inevitably disappointing. Again and again, it quickly becomes evident that tech perfection and efficiency are at odds with domesticity.

Take, for example, the humble robot vacuum. It actually has quite a long history, but ask anyone about their experiences with one, and they will more than likely share a story of failure and frustration. Their expectations of efficient, automated, pristine cleanliness have more often than not been dashed, replaced by the new responsibility of tending to and rescuing an annoying robot.

This contrast is perhaps captured most poignantly in this photo, in which cleaning staff at a robot vacuum showroom use a good ol' broom to clean the floors:

Which leads me to ask: If narratives of robotics inherently conflict with the realities of domesticity, in what ways might they align more appropriately with our daily lives? I would argue that alignment is more likely when:

  • the consequences are low (we’re not expecting perfection or waiting for a robot to execute a critical task);
  • efficiency isn’t the point (we’re not comparing our capabilities to the robot’s);
  • opportunities for playful personification originate with humans (not from robots’ performative vulnerability);
  • the robot’s logics have metaphoric parallels (the robot either offers no clear indication of its own logic, or provides signals that reflect logics with which humans identify).

So, in the case of robot vacuums, even if the sales pitch (perfectly automated cleanliness) never fully materializes, it likely doesn’t matter that much if the floor isn’t as spotless as promised (low stakes). And, besides, its mishaps are sometimes amusing. Further, its ‘logics’ are largely cartographic and familiar (like mowing a lawn). What’s more, we seem especially keen to morph them into play things — a very telling hack, even if it is still just a vacuum in the end.

The Future is…Cute

So, where is all this going? If manufacturers are savvy, they will listen carefully to the signals consumers are sending and double down on autonomous robotic toys. As products positioned more for entertainment than for executing chores, they're simply a better 'fit' for us than the promises of efficiency, feigned imperfections, or uncanny logics that have triggered human resistance, Othering, and resentment. Instead, bots should enable low-stakes 'delight' functions, deprioritize efficiency, transparently facilitate personification, and avoid the discomfort of the uncanny by leveraging familiar human metaphors (the pet, the confidante, the class clown, etc.).

There are a few early entrants in this space now, chief among them Sony's robotic puppy, Aibo, and LivingAI's desktop 'pet,' Emo. However, both are far from the emotional, personality-packed companion we might imagine in an autonomous Wall-E who hangs out with us at home.

And what of those humanoid robotic servants like Tesla’s Optimus? Perhaps their presence is better suited for factory floors and warehouses. In short, know your place, bots — cuz, well, we’re not always an amicable species.
