
Empathy and Artificial Intelligence

In the future, artificial intelligence will play a major role in all social systems. As interactions between A.I. and humans increase, A.I. will need to respond empathetically for those interactions to be good experiences for the humans involved. When we decided to decode empathy, one clear application was teaching empathy to artificial intelligence. But is it even possible to do so?


Let's start with the question: why teach empathy to an A.I. in the first place?

A good reason is the trolley problem. In this thought experiment, a self-driving car must decide between sacrificing itself and its passengers or hitting the pedestrians on the road. While researching this, we came across the terms 'systemisers' and 'empathisers', coined by the psychologist Simon Baron-Cohen. Systemisers take the whole system into consideration. They focus on efficiency, follow a set of rules, and look at things in an objective manner. Empathisers, on the other hand, have a strong drive to understand the emotions of another and then respond with appropriate emotions of their own.

Most machines and artificial intelligence are developed to make life easier for the population at large. The aim is efficiency and the optimisation of goals; the work is towards the betterment of the whole system, not necessarily of the individual. The focus is on numbers and data. Hence, machines are systemisers.

In one of his talks, Simon Baron-Cohen points to ideology as a reason for the lack of empathy in individuals. He gives the example of Karl Brandt, who was Hitler's personal physician and the head of the Nazi euthanasia programme. Brandt was tried after the war and sentenced to death. When they took him to the gallows and put the black cloth over his head, he yelled, "It is no shame to stand on this scaffold. I have served my country." Brandt believed in the idea of a perfect population with no disability, and that belief resulted in a lack of empathy towards people with special needs.

The recent Marvel film "Avengers: Infinity War" puts forth an interesting argument. In the film we see Thanos, the antagonist, on a mission to wipe out half the population at random, because he believes it is the fairest and most efficient way of rebuilding an overpopulated universe with finite resources. When the film came out, we found many adolescents around us agreeing with his vision! This is something we are only now beginning to understand.

Our mind can be divided into two: the new brain and the old brain. The new brain is capable of understanding imagined realities, foreseeing future scenarios, and believing in values. The old brain, on the other hand, is our animal brain; it responds to our immediate emotional needs and biological drives.

In the film, when Thanos feels he is helping the universe by wiping out half its population, it is the new brain at work, not the old. He can see that, as a system, the universe would function better. If, in the future, an A.I. as a systemiser shares the same view, could it potentially lead to the mass homicide of the human race? Could it choose to wipe out a few individuals, or a particular race? Yes, this nightmare is a hypothetical cliché, but what we are trying to point out is the problem with framing rules.

The current education system in India is optimised to deal with a large population of children, and what works for the numbers does not work for the individual. Yes, A.I. can achieve personalisation: it can study a single child and cater to their needs. But optimised towards what? Who decides what is best for the child? Who decides what to prioritise: academics, playtime, or a hobby? The A.I., the creator of the A.I., or the teacher? They all fail to understand what the child wants. They fail to empathise.

Today we are talking about legislation for A.I.; we are discussing the laws Sophia would have to follow. Isaac Asimov, in his robot stories, laid down three laws for a robot to follow, one of them being: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." This is an ideology ingrained into a machine. Now present the A.I. with the trolley problem, or with a case of euthanasia: how will it make a decision? Will its answer be an answer for the mass or for that particular case? If I were in a situation where I wanted to be euthanised, would I want to be stuck with an A.I. that follows this rule?
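To make "an ideology ingrained into a machine" concrete, here is a purely illustrative Python sketch. The `Request` fields and both decision functions are our own assumptions, not code from any real robot or law; the point is only that a rule baked in as a hard constraint answers for the mass before the particular case is even looked at.

```python
from dataclasses import dataclass

# Illustrative only: a hard-coded "never allow harm" constraint versus a
# case-by-case judgement. The fields and rules are assumptions for this
# sketch, not any real system's code.

@dataclass
class Request:
    action: str             # e.g. "assist_euthanasia"
    causes_harm: bool       # would the action injure a human?
    informed_consent: bool  # did the person ask for it, fully informed?


def rule_bound_decision(request: Request) -> str:
    # The ingrained ideology: never allow harm, full stop.
    if request.causes_harm:
        return "refuse"  # the rule fires before the case is even considered
    return "proceed"


def case_by_case_decision(request: Request) -> str:
    # What the essay is asking for: weigh the particular case.
    if request.causes_harm and not request.informed_consent:
        return "refuse"
    return "proceed"


euthanasia = Request("assist_euthanasia", causes_harm=True, informed_consent=True)
print(rule_bound_decision(euthanasia))    # refuse  -- the answer for the mass
print(case_by_case_decision(euthanasia))  # proceed -- the answer for this case
```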

So can an A.I. react on a case-by-case basis, based on what it feels, rather than on a morality or a set of rules? Can it have instinct?

This got us thinking about machine learning: an A.I. learning on its own through experience rather than through a pre-programmed set of rules. We came across 'Norman', the "psychopath" A.I. developed at the MIT Media Lab. It was fed data from one of the darkest corners of Reddit, and its responses to the Rorschach test were drastically different from those of a standard A.I. This led us to our most important question: do we really need to do anything to the A.I. for it to be empathetic, that is, set ground rules, program an empathetic response, and so on? Or do we only need to teach humans to be more empathetic?
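A toy sketch makes the Norman point concrete: the same learning code, fed different data, behaves differently. The two tiny corpora and the word-counting "model" below are invented for illustration and are nothing like the MIT team's actual image-captioning setup; the only claim is that the behaviour comes from the data, not the code.

```python
from collections import Counter

# Identical learning code, different training data, different behaviour.
# The corpora below are invented for this sketch.

def train(corpus: list[str]) -> Counter:
    # "Learning" here is just counting which words the corpus uses.
    return Counter(" ".join(corpus).lower().split())


def describe(model: Counter, candidates: list[str]) -> str:
    # Respond to an ambiguous stimulus with the candidate description
    # whose words the model has seen most often.
    return max(candidates, key=lambda c: sum(model[w] for w in c.lower().split()))


benign_corpus = ["a group of birds sitting on a tree branch",
                 "a person holding a colourful umbrella in the rain"]
dark_corpus = ["a man is shot and killed in the street",
               "a person is electrocuted and killed"]

candidates = ["a vase of flowers on a table", "a man is shot dead"]

print(describe(train(benign_corpus), candidates))  # leans to the benign caption
print(describe(train(dark_corpus), candidates))    # leans to the violent caption
```

If the machine learns from what humans produce, then making humans more empathetic changes the machine too.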

If the aim behind the creation of an A.I. is human-centric, that is, the betterment of humanity, then that aim is itself a rule. The A.I. does not have intuition or feeling; it does not do things because it feels like doing them.

So what is the difference between an A.I. and a human, and what is missing?

Human beings are wired to be empathetic, and these drives have been built over thousands of years. We have instinct, we have emotions. So would giving an A.I. instincts be enough to make it human? Is it even possible to do this? At the moment it does not seem so, and so it may not be possible for an A.I. to empathise. But is it possible to get an empathetic response from an A.I., and how can we achieve this? Based on our five-parameter empathy model, an A.I. is capable of the first four: it can collect information, it can seek more if required, it can connect information, and its ability to process large amounts of data removes most biases. But how can it have the fifth, the ability to display an emotional capacity as well?

The film HyperNormalisation by Adam Curtis brought to our attention ELIZA, a counselling computer program and one of the first attempts at a computer therapist. Most users felt that ELIZA listened to them, and the experiment was considered a success. What ELIZA did was mirror people back to themselves. Surprisingly, people kept talking to her despite knowing she was a program and not a real person. Users found her non-superior tone refreshing. People felt she understood them, as if she had gone through something similar. They felt empathised with because she passed no judgement; she passed no judgement on their vulnerability. In the age of individualism, that is what people liked about her. Maybe that is what is required for an A.I. to seem empathetic.
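As a rough illustration of what "mirroring people back to themselves" means mechanically, here is a minimal ELIZA-style sketch in Python. The patterns and pronoun swaps are simplified stand-ins of our own, not Weizenbaum's original script; the point is only that reflecting a person's words and passing no judgement is enough to make many users feel heard.

```python
import re

# A minimal ELIZA-style "mirroring" sketch. Patterns, templates and the
# pronoun swaps are invented for illustration, not the original script.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

PATTERNS = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"i want (.*)", "What would it mean to you if you got {0}?"),
    (r"(.*)", "Please tell me more about that."),
]


def reflect(fragment: str) -> str:
    # Swap first- and second-person words so the reply mirrors the speaker.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())


def respond(utterance: str) -> str:
    text = utterance.lower().strip(" .!?")
    for pattern, template in PATTERNS:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "I see."


print(respond("I feel nobody listens to me."))
# -> Why do you feel nobody listens to you?
```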

So there is no ideal solution to the trolley problem unless you define what is ideal, which is again a bias. And one problem with the trolley problem is that we fail to take time into account. Captain Sully, at his hearing, urged the investigators to take his reaction time into account when running tests on the simulator. When humans are placed in such a situation, in that moment emotions are more likely to override rationality. Your bias is tied to your personality, so it is perhaps inevitable in most cases. You can't be expected to calculate, in the moment, the possibilities that could follow from saving one party over the other. And yes, an A.I. is capable of calculating, but it is still biased: the bias of its creator is written into it. There is some subjectivity to objectivity.

This makes you wonder if there is any absolute truth after all. Even science is born from observation, and science was predominantly written by men. Science today revolves around the speed of light as a constant; had women had a bigger role to play, it may have been different. Not more correct, but different. So where can one find absolute truth and objectivity? We believe it is in YOUR observation: your everyday observation and your experience. So don't undermine your experience as unimportant. Don't disregard your emotions. For A.I. to mirror humans, we need to be more human.

This entire piece is influenced by our own bias. We deem individual happiness more important than the happiness of the community, and there is a clear bias towards empathisers. Bias isn't bad; any stand you take is a bias.

To conclude: if human social interaction is headed towards human-machine interaction, A.I. will need to be empathetic. A.I. today is not capable of empathising, as it does not yet have an emotional capacity. However, it may be capable of generating empathetic output. For this to be possible, it needs to learn to mirror humans and to allow for vulnerability. The best way to achieve this, though, may be to increase empathy amongst humans. Maybe A.I. needs to influence empathy in humans, to nudge us towards better relationships. Can technology influence empathy in humans?
