Injecting Empathy into AI - What To Know!
Empathy is usually held up as a uniquely human trait. But is it really?
In Philip K. Dick's 1968 novel, "Do Androids Dream of Electric Sheep?", the key test for detecting whether one was dealing with an android or a human being was based on measuring empathy. Empathy is one of those qualities, alongside creativity, intuition, and moral comprehension, that is systematically characterised as being specific to humans. So, is it impossible to inject empathy into artificial intelligence? Answering this question requires two fundamental elements:
1. to define exactly what is meant by empathy, and
2. to be able to evaluate and measure empathy.
What is empathy and why does it matter?
There are a multitude of definitions of empathy, with some dividing it into such categories as cognitive, affective, somatic, motivational, collective, parochial or compassionate empathy. So, there is no one universal understanding of it. Of course, empathy's a complex subject, much like humanity, emotions, and truth. At a topline level, empathy is about understanding the other person's thoughts, feelings, and experience. For the purposes of this discussion, I personally define empathy as having two varieties: cognitive and affective (aka feeling). This distinction is important because it allows that cognitive empathy can exist by itself. I have landed on it because I have found that I am only really good at understanding cognitively what someone else is thinking and feeling. I've rarely found myself feeling what someone else is feeling. It also matters because cognitive empathy is something that I can learn, whereas I don't think it's possible to teach me to feel.
Some empathy experts insist that cognition without the affective part is not empathy. While having both may lead to a stronger understanding, I believe that cognitive empathy has much to provide, especially in a business context, but also in any one-on-one relationship. For starters, the absence of cognitive empathy makes for a poor two-way connection. Thus, if we accept that cognitive empathy exists as a standalone form of empathy, and to the extent it’s something one can learn, it becomes possible to consider encoding it into a machine.
The feeling conundrum
Yet, it's clear that affective empathy is impossible for machines. While there's been incredible progress in teaching computers to detect emotions with nuance, and while we're forever refining anthropomorphised android faces that look like they are emitting emotions, it is just not feasible that a machine will feel as we do, and therefore it can't experience affective empathy. That said, it will be able to mimic feelings accurately and, in so doing, may give us the sense that it is being empathic with us.
On being empathic
A second consideration in applying empathy to AI is the distinction between empathy as a quality you bring to your decision-making process and the subject's perception of that empathy. In other words, there's an output and a reception of empathy. Both the output (from the emitter) and the input (by the receiver) are worth evaluating.
On the emission side, the person (or machine) may be exercising empathy in a way that is invisible to the receiver. It's easy to imagine cases where empathy is used but not perceived. For example, in the design process, empathy is a fabulously useful tool, but the end user may not explicitly attribute great design to the empathy exercised in the design shop. Take a shampoo bottle. You know when you're shampooing your hair and your hands become slippery because of the foam? Then when you go to grab the bottle again, it might slip out of your hand? That's because of the silicone in the formula. Well, what if we (used empathy and) designed a shampoo bottle with ribs to make it easier to hold on to in the shower? The person in the shower might not attribute the bottle's ribs to empathy, but it was surely present in the design process.
On the output side, it is worthwhile to evaluate empathy and, eventually, to attribute to it a result, a performance. This is being done already by certain companies in different capacities, for example in customer relationship management (e.g. Pegasystems) or customer service (e.g. DigitalGenius). AI is being used to help human agents formulate more effective messaging by proposing options that are scored for their empathic quotient, among other criteria. The agent is then able to select the most appropriate pre-loaded message. While it may seem curious to measure empathy as an output, it has merit in our daily lives, in that we human beings can intentionally learn to develop empathy, gather data, observe behaviour, listen actively and otherwise seek to understand the person or persons with whom we're interacting. And we can try to appreciate to what extent we've improved our own level of empathy. The same, then, can be done with artificial intelligence across various types of applications, such as chatbots, computer-aided design, automated answering services, and employee evaluation and surveillance systems. The trick will be in contextualising and labelling the different actions, and then scoring empathy in ways that make sense for the organisation using it.
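To make the idea concrete, here is a minimal sketch of what such a message-selection flow might look like. Everything in it is hypothetical: the `empathy_score` heuristic is a crude stand-in for whatever trained models vendors actually use, and the candidate replies are invented.

```python
# Hypothetical sketch: ranking candidate agent replies by an "empathic quotient".
# The scoring heuristic is a toy placeholder, not any vendor's actual model.

ACKNOWLEDGEMENT_CUES = ("i understand", "i'm sorry", "that sounds", "thank you for")

def empathy_score(message: str) -> float:
    """Toy heuristic: reward acknowledgement language and open questions,
    penalise blunt imperatives. A real system would use a trained model."""
    text = message.lower()
    score = sum(1.0 for cue in ACKNOWLEDGEMENT_CUES if cue in text)
    if "?" in text:                                  # invites the customer to elaborate
        score += 0.5
    if text.startswith(("you must", "you should")):  # reads as a command, not a dialogue
        score -= 1.0
    return score

def rank_candidates(candidates: list[str]) -> list[tuple[float, str]]:
    """Return candidate replies sorted from most to least 'empathic'."""
    return sorted(((empathy_score(c), c) for c in candidates), reverse=True)

if __name__ == "__main__":
    candidates = [
        "You must restart the router before calling us.",
        "I'm sorry you've been without internet all day. That sounds frustrating. "
        "Could you tell me what the router lights show?",
        "Restart the router.",
    ]
    for score, reply in rank_candidates(candidates):
        print(f"{score:+.1f}  {reply}")
```

The human agent then chooses from the top of the ranked list, which mirrors the pre-loaded-message workflow described above; in practice, the hard part is the scoring model, not the selection loop.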
Perceiving empathy
Meanwhile, there is also the perception of empathy by the one on the receiving end. To what extent is the receiver experiencing the empathy? In academic circles, there is much debate around the ability to measure empathy. It remains inconclusive, and precise measurement is broadly elusive. Most of the research tends to focus on measuring the reception rather than the emission of empathy. That's because we (as a society) tend to be interested in empathy for its beneficial effects on the receiver. But I firmly believe that both sides are worthy of evaluation.
Dissecting and measuring empathy
One of the keys to evaluating empathy is understanding the intention of the emitter. Why is he or she deploying empathy? As has been much chronicled, empathy can also be used for malevolent ends. It's a key tool for the sociopath. It can be used to manipulate, for instance in negotiations. Naturally, it's difficult to ascertain someone's genuine intention. One way to evaluate intention is to match it against the emitter's follow-up actions. Moreover, some of my fellow empathy enthusiasts consider empathy not useful without an accompanying (beneficial) action. It's worth pointing out that in Philip K. Dick's book, the machine only measures the emitter's empathic level. In any event, whether or not we are able to settle on the true empathy of a machine, the process of breaking it down and putting it into code will at the very least add to the body of work on what empathy is and how to measure it.
An empathic AI…
While the notion of an empathic robot or machine worries some people, I am adamant that some of the initiatives to make an empathic AI – especially when it concerns therapeutic applications – will be beneficial for society. Mental health issues are widely reported to be on the rise, and the supply of human therapists is not sufficient to meet the demand in many countries. It is my contention that, in today's distinctly crazed environment where people are overly concerned, if not infatuated, with themselves, people don't have the bandwidth, much less the desire, to listen to and help others. Surely, the rise in mental health conditions is tied to our penchant for being insular and self-involved. If we're not able or willing to make the effort to listen to and help one another, the opening – and the need – for an empathic AI seems evident. Whether or not the empathy is perceived, the effort of trying to make a therapeutic bot more empathic – supported by honest intentions, an appropriate ethical framework and a healthy business model – will be useful.
For the moment, we are still at the beginning of the journey toward building empathic AI. For the most part, the initiatives to encode empathy in AI are being applied to instances or cases where the stakes and consequences are not particularly significant. In other words, it's being applied in order to sell another widget rather than to handle a suicidal teenager. But it's a naturally messy and complex area of study. In any effort to encode empathy into AI, it will be important to break empathy down into discrete components that can be tagged for their empathic quotient, as sketched below. Every situation and interaction will have variables and a context. For the AI to become consistently and deeply empathic, it will require almost endless iterations to create well-worked and curated learning models that have access to sufficient data about the individual (and, therefore, their trust). For now, it only happens in spurts and under limited conditions.
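As a thought experiment, that tagging-and-scoring idea might be sketched as follows. The component labels, weights, and contexts are all invented for illustration; a real system would learn them from curated interaction data.

```python
# Illustrative sketch: tagging discrete interaction components with an
# "empathic quotient" (EQ) and weighting them by context. All labels and
# weights are invented for illustration, not drawn from any deployed system.

from dataclasses import dataclass

@dataclass
class Component:
    label: str   # e.g. "acknowledged feelings", "offered help"
    eq: float    # empathic quotient for this component, 0.0 to 1.0

# What counts as empathic differs by setting, so weights are context-specific.
CONTEXT_WEIGHTS = {
    "customer_service": {"acknowledged feelings": 0.5, "offered help": 0.3,
                         "active listening": 0.2},
    "therapy_bot":      {"acknowledged feelings": 0.3, "offered help": 0.2,
                         "active listening": 0.5},
}

def interaction_eq(components: list[Component], context: str) -> float:
    """Weighted empathic quotient of a single interaction in a given context."""
    weights = CONTEXT_WEIGHTS[context]
    return sum(weights.get(c.label, 0.0) * c.eq for c in components)

if __name__ == "__main__":
    turn = [
        Component("acknowledged feelings", 0.9),
        Component("offered help", 0.6),
        Component("active listening", 0.4),
    ]
    print(f"customer service EQ: {interaction_eq(turn, 'customer_service'):.2f}")
    print(f"therapy bot EQ:      {interaction_eq(turn, 'therapy_bot'):.2f}")
```

Even a toy like this makes the dependencies visible: the scores only mean something relative to a context, and the labels only exist because someone first broke the interaction down into components.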
It will be exciting to see how this AI develops. Sure, there will be mistakes and bad actors along the way. But that's true of just about every new tech…