Changemakers: Pamela Pavliscak on Emotionally Intelligent Technology
Pamela is a tech emotionographer whose work brings together empathy research, affective computing, and expressive interfaces to make technology more heartfelt and human-kind. She is a faculty member at Pratt Institute in NYC, adviser at the new studio Empathic, and author of the groundbreaking book Emotionally Intelligent Design. Pamela sat down with Andrew from ATIH to share her insights into the future of emotionally intelligent technology and the unique ethical challenges of this emerging field.
Andrew: How does your upbringing affect how you understand your relationship with technology today?
Pamela: I grew up in a family of engineers in the Detroit area. One episode from my childhood made a huge impression on me: seeing the clay models of cars at my dad’s workplace. The idea that a thoughtful team of real people was modeling something for other people to make part of their daily lives was something I hadn’t considered. It demonstrated, in this larger-than-life way, just how much design was a conduit between people, and how much care could go into that process.
What first inspired you to start thinking critically about the need for emotionally intelligent technology?
As a designer and researcher with a background in human-computer interaction, I didn’t start out studying emotion. But every time I looked at how people engaged with technology, there it was. Really, it started with a more personal experience. One day my youngest daughter disappeared into her room with Alexa. Feeling curious, and a little bit guilty, I listened in. I overheard her trying to banter and play as she would with a friend, and getting frustrated when it didn’t go as expected. Apparently, there was some cheating!
Around the same time, I encountered this new wave of technology that aims to understand human emotion. Emotional artificial intelligence can detect and interpret a tiny flicker of emotional signal through voice, face, text, or other biometrics. Virtual reality can help us develop our empathetic skills. And social bots can make our interactions feel more intuitive. I don’t think we can solve the emotion gap in technology just by adding more technology, but it certainly compels us to pay more attention to emotion. I’m convinced that future technology will be a—hopefully harmonious—blend of human and machine emotional intelligence.
You’ve said that we need technology that “respects what it means to be human.” Can you give examples of technologies or products that you feel meet this standard?
Technology that recognizes humans as complicated, ever-changing, emotional beings is rare. There are two barriers. First, we can find ourselves in a tech time warp, where we get recommendations based on what we recently did or said, which may not give us space to change and grow. Second, we can feel emotionally drained, or even manipulated. A lot of what we consider engaging design is built on a cycle of negative emotion. For instance, you might turn to a social media app when you are bored. Because the app is designed to temporarily “solve” your boredom in a way that is variable enough to keep you coming back for more, you start to feel guilty or even addicted. And, of course, we know some of the most addictive apps are fueled by more potent negative emotions. Anger is a high-activation emotion, and it spreads across social networks easily. This can leave us feeling agitated in the moment and with a kind of rage hangover later.
On the flip side, the most promising products augment human emotional intelligence. The idea that a little self-awareness can help us regulate our emotions is a core concept of emotional intelligence. I’m really encouraged by new wearables and apps that help us actively manage our mental health, especially those like Maslo that combine the self-evaluation of a diary with technology that detects emotion, for your own use or for sharing with a coach or therapist. Empathy, both emotional and cognitive, is fundamental to emotional intelligence, too. Technology that connects people with greater intimacy and compassion is an area where I think we’ll see a lot of innovation. BeMyEyes, an app that connects blind and partially sighted people with volunteers, is a great example.
You’ve been working with empathic technology, sometimes called emotional AI. What are some of the ethical challenges associated with it?
When we are developing technology that taps into something as personal and primal as emotion, it’s important to take great care. Emotional AI picks up biological signals, from brainwaves and pulse to skin temperature, tone of voice, facial expression, touch, and gait, and translates these into emotion concepts. People are naturally, and sensibly, cautious about this technology. In my research, I’ve been focused on bringing the concepts to life through simulation or co-creation, so we can understand what the implications might be.
There are a few big panic-button issues that come up again and again. One is privacy, or the concern that emotion tech will overtly or covertly detect what we are feeling and reveal too much about our emotional life in unintended or uncontrolled ways. Another is bias, or the worry that human emotion will be modeled in a way that is not inclusive, given the limitations of training data and a lack of sensitivity to cultural context. More philosophically, there’s the concern that the models themselves won’t reflect current trends in our understanding of emotion, which is grounded less in biology and more in personal, social, and cultural context. Finally, there are existential concerns, or the idea that emotion, like consciousness, is too complex and just too human to be modeled at all.
All of these are valid concerns! The good news is that technologists working in the field are very aware of them and generally welcome a cautious approach. Regulation may also give the field a chance to develop without pressure to use the technology in questionable ways. As a member of the IEEE P7014 standard working group for the ethical use of empathic technology, I’ve been encouraged to see an abundance of collective intelligence put into making sure it’s used for good and not harm.
What advice would you give to technologists who are looking to design more emotionally intelligent products?
One model that I’m shifting toward in my work is to think of designing a product as building a relationship. In a relationship, we have a period where we start to get to know each other, then we build trust as we go along, and we co-create shared ways of being together. This shifts the mental model we have of design in a lot of ways. It moves us toward thinking of people as connected actors with emotional needs and shared interests. It gives us a way to evolve our interactions with a product over time more intuitively. It also serves as an ethical guide grounded in care and a sense of responsibility toward each other.
You can connect with Pamela on Twitter and LinkedIn, and you can learn more about her work in her book, Emotionally Intelligent Design.