Artificial Intelligence (AI) is poised to enter the home in a big way, via technologies like smart-home devices, the Internet of Things, companion robots and digital personal assistants. The home robot Jibo is on the cover of the latest Time magazine as one of the 25 best inventions of 2017.

For these new technologies to function optimally, they need to be ‘emotionally intelligent’ — that is, they need to build and sustain trusting, intimate and evolving relationships with their human users.  

Although Emotional AI is still in its development phase, early applications show that it will enable devices to tap into our emotional states — to understand how we are feeling and why, and to predict our future moods and emotions. In response, emotionally enabled devices will be able to determine and simulate an ‘appropriate’ emotional state.

The new wave of cute home robots, for example, designed primarily for companionship and care rather than for domestic service tasks, is marketed as growing alongside you and your family. If “technology wants what life wants”, as cyberculture observer Kevin Kelly argued, then these home robots want loving, caring, long-term relationships with us. As Jibo put it, home robots simply want to become “part of the family”.

For these long-term relationships to develop, home robots — and other emotionally enabled companion devices — will need to learn a great deal about us, as well as those closest to us. They’ll need to learn not just our daily routines, habits and preferences, but also the nuances of our emotional states, as well as our emotional ‘trigger points’. This kind of ‘deep machine learning’ will enable these digital companions to grow alongside us, adapting to the unique culture of the adoptive household.

But it’s not just about the robots. Emotional AI is already embedding itself in daily life through things like Apple’s digital personal assistant Siri and its new Animoji feature for iPhone X, emotion-enabled toys like Anki’s Cozmo, and even Facebook’s ‘like’ function — as well as in many other less-visible applications, such as sentiment analysis for market research.

Yet, while these emotionally intelligent agents continue to learn a great deal about us, many of us are yet to apply ourselves to understanding the full scope of this technology, particularly when it comes to information privacy. What, exactly, is the price we might pay — wittingly or unwittingly — for the experience of companionship with our devices?

The value of emotional data

Currently, advances in the field of Emotional AI are driven almost solely by commercial interests and centre on face-tracking software and voice-recognition algorithms — both of which aim to recognise emotions and moods, real and fake. Controversially, some face-tracking software claims to be able to infer deeply private matters, such as a person’s sexual orientation, political leaning and IQ, from analysis of their face alone.

As emotionally enabled devices become more ubiquitous, both inside and outside the home, we’re beginning to shed new sets of data in new situations — granular bits of deeply personal information about what ‘makes us tick’, which some scholars have dubbed our personal data emanations.


Our personal data emanations, or ‘emotional data’, are of enormous interest to business in the digital economy, which has a great deal to gain from the manipulation and exploitation of our emotions. Sentiment analysis firms like Lexalytics, for example, are currently providing their customers with data profiles grounded in emotion and mood, in order to “get down to what matters most: monitoring and modulating consumer desire”.

And although the retail industry is, arguably, poised to gain the most from our emotional data (think highly targeted advertising and all the other myriad ways in which companies might “develop dynamic brand experiences based on the emotional responses of customers”), there are countless other ways in which the expansion of algorithms into emotions, moods and affect will be leveraged by big business.

The path of least resistance

All of which raises the question: Why aren’t we more careful when it comes to our privacy? Studies have shown that, although most people claim to be highly concerned about personal privacy, our regular use of technology doesn’t always (or often) reflect this concern — something known as the ‘privacy paradox’.

Further, researchers have shown that our attitudes about our personal privacy are highly malleable, subject to influence by interface design and reciprocal information exchange, for example.

My research with Dr Catherine Caudwell from the School of Design at Victoria University has linked the rise of the cute aesthetic — fast becoming one of the most pervasive aesthetics of the digital age — to the design of many Emotional AI devices.

Cuteness is known to give us an affective ‘hit’ (similar to gambling or drug taking), which produces a quick, rewards-based neurological response and makes a detour around rational thought. This means that, while we are interacting with cute home robots like Jibo or Kuri, or with Apple’s animoji, we are more caught up with our emotional and physical responses to this technology than with concerns about information privacy.

The cute design of Emotional AI provides a kind of affective ‘path of least resistance’, in which future-oriented decisions about privacy are exchanged for the short-term, positive affective hit that accompanies feelings of companionship or intimacy.

Safeguarding our privacy

There are some potentially exciting and productive applications for Emotional AI, particularly in healthcare, in education and in conflict zones. Further, public awareness about Big Data has come a long way in recent years, and many people are taking steps to customise privacy settings on their devices and social media platforms. However, I think there needs to be more – and better – information available about developments in Emotional AI and its implications for personal privacy, particularly as the technology enters the domestic sphere.

Often, for example, these companion agents are designed to interact with children; this, to me, raises huge ethical and moral concerns around the collection of children’s emotional data (not to mention data about other ‘vulnerable’ members of society).

Debates need to take place around whether certain Emotional AI applications encroach on moral codes, and rules will need to be established around privacy and the ethics of this technology. Importantly, we’ll need to figure out whether biases around race, gender, sexuality and socio-economic status might be built into these applications, leading to the rise of digital inequalities.

Given the rapid — and lucrative — expansion of information capital into emotion, it’s vital we push for privacy protections and regulatory safeguards, which have not kept pace with developments in Emotional AI, let alone with our growing need for companionship in an increasingly isolating world.

Dr Cherie Lacey will explore the ideas in this article further as part of ‘In robots we trust’, her free public event with Victoria University of Wellington Professor of Ethics Nicholas Agar, as part of Victoria’s Spotlight Lecture Series. Wednesday 6 December, 12.30pm–1.15pm, Lecture Theatre 2, Rutherford House, Bunny Street, Pipitea Campus. Register here. Read Professor Agar’s earlier Newsroom article ‘Don’t trust economists about the future of work’.

Dr Cherie Lacey is a Lecturer in the School of English, Film and Media Studies at Victoria University of Wellington.
