
Personality and emotions for EHE-driven entities


The first article in our “EHE AI” series dealt with defining what we meant by AI and learning agents, and the various ways a machine could learn. Depending on how far we want to take it, there are various gameplay considerations involved. All of this was very high-level, of course, and we didn’t elaborate on the implementation or genre-specific consequences, since the EHE techs we’re developing are not only game-agnostic but also genre-agnostic. We’re developing a system that can learn, adapt, and play in various environments.

This second article will probably seem a little weird to some, as it deals with emotions and personalities. Why “weird”? Because if there’s one thing we’re usually safe in asserting, it’s that machines are cold calculators. They calculate the odds and make decisions based on fixed mathematical criteria. Usually, programmers prefer them that way as well.

When companies talk about giving personality and emotion to AI-driven agents, they usually refer to the fixed decision-making pipeline that we mentioned in the previous article. Using different graphs will generate different types of responses from these agents, so we can code one to be angrier, more playful, more curious, etc. While these are personalities, the agents themselves don’t really have a choice in the matter and don’t understand the differences in the choices they’re making. They’re only following a different fixed set of orders.

Science fiction has often touched on the topic of emotional machines, or machines with personalities. This is the last step before we get to the sentient machine, a computer system that knows it exists.

You can rest assured, though: I won’t reveal here that we’ve created such a system. But these concepts are what give this article its weirdness, along with the often-unsettling analysis of their consequences.

Personality

In the first article, I mentioned how many companies use buzzwords as marketing strategies, thus the “intelligent” fridge that beeped when the door was left open. In the same vein, many companies will talk about personality in machines and software. Let’s define how we’re addressing it to lessen the confusion.

For the EHE, a personality is a filter through which you understand the world. A “neutral” entity, without any personality, will analyze that it’s 50% hungry. That is the ratio it finds between the food in its system and its need. But we humans usually don’t view the situation as coldly as this. Some of us are very prone to gluttony and will eat all the time, even at the most minimal level of hunger. Others are borderline anorexic and will never eat, no matter how badly they physically need to. Then there’s the question of the interactions with the other needs: I’d normally eat, but I’m playing right now, so it’ll have to wait. In short, the “playing computer games” need trumps the “hunger” need. But if I’m bored and have nothing else to do, my minimal hunger will be felt as more urgent.
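
To make this concrete, here’s a minimal sketch of what such a filter could look like. Everything in it (the PersonalityFilter name, the multiplier model, the trait values) is an illustration of the idea, assuming needs are read as 0-to-1 urgencies; it is not the actual EHE implementation.

```python
# A minimal sketch of personality as a perception filter.
# The class name, multiplier model, and trait values are illustrative
# assumptions, not the actual EHE implementation.

RAW_NEEDS = {"hunger": 0.50, "play": 0.60}  # "neutral" readings, 0..1

class PersonalityFilter:
    def __init__(self, traits):
        # traits maps a need to a bias multiplier:
        # > 1.0 amplifies the felt urgency, < 1.0 dampens it
        self.traits = traits

    def felt_urgency(self, need, raw_level):
        # clamp so even a heavily biased agent tops out at 1.0
        return min(1.0, raw_level * self.traits.get(need, 1.0))

glutton = PersonalityFilter({"hunger": 1.8})
ascetic = PersonalityFilter({"hunger": 0.3})

for name, agent in [("glutton", glutton), ("ascetic", ascetic)]:
    felt = agent.felt_urgency("hunger", RAW_NEEDS["hunger"])
    print(f"{name}: a raw hunger of 0.50 feels like {felt:.2f}")
```

The “playing trumps hunger” interaction then falls out naturally: the agent simply attends to whichever need has the highest felt urgency at the moment.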

All these factors, and dozens more, vary from one individual to another. This is why you can talk about someone and say things like “Don’t mention having a baby to her, she’ll break down crying”, or “Get him away from alcohol or he’ll make a fool of himself”. We know these traits in the people close to us because we can anticipate how they’ll react to various events. We’re even much better at noticing these traits in others than in ourselves, and are often surprised when someone mentions such a trait about us. We see them in others most of the time because they differ from ours.

For some events (let’s continue using the “hunger” example), we will, of course, feel that we’re “normal”, so our own perception of how someone should react to food is based on how we ourselves would react. For instance, if someone reacts less strongly than we would, we conclude that this person has a low appetite. But if that person reacts far less strongly when clearly hungry, we become worried about their health. The opposite is also true if the person has a much stronger reaction when starving.

These levels of reaction to the various events in our lives are what we define as our personality. It is the filter with which we put a bias on the rational reality of our lives. Without it, everyone would be clones, reacting the same way to everything that happens. But for various reasons (cultural, biological, learned) we’re all unique in these aspects.

Our EHE-driven agents also have such elements to filter up or down the various events that happen to them. An agent that is very “campy” and guarded will react much more strongly to danger than another who is more brazen. The same event will be felt differently by both entities, the first one ducking for cover and the second one charging, all of this dynamically determined based on the situation at hand, not pre-scripted for each “type”.
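
The same multiplier idea extends from internal needs to external events. In the hypothetical sketch below, one raw danger reading passes through each agent’s filter before the agent picks a reaction; the trait values and the 0.5 decision threshold are, again, our own illustrative assumptions.

```python
# Hypothetical sketch: one raw danger event, perceived through two
# different personality filters (trait values and threshold are assumptions).

def felt_danger(raw_danger, caution):
    return min(1.0, raw_danger * caution)

raw_event = 0.4  # a "neutral" observer rates this threat at 40%

for name, caution in [("guarded", 1.9), ("brazen", 0.4)]:
    felt = felt_danger(raw_event, caution)
    reaction = "duck for cover" if felt > 0.5 else "charge"
    print(f"{name}: feels danger at {felt:.2f} -> {reaction}")
```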

Emotions

If a personality filters the “in” component of an EHE entity, emotions filter the “out”. If a personality shapes how we perceive the various events in our lives, our emotions modulate how we determine our reactions to them. Thus, emotions work a little bit like an exponential scale that starts to block rational thinking.
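
One way to read “an exponential scale that starts to block rational thinking” is as a weight on consequences that collapses as emotion rises. The sketch below is one plausible shape for such a curve; the exponential form and the constant k are assumptions, not the EHE’s actual numbers.

```python
import math

def consequence_weight(emotion, k=2.0):
    # Fraction of collateral consequences the agent still takes into
    # account: 1.0 when perfectly calm, collapsing toward 0.0 as the
    # emotion level (0..1) rises. The form and k=2.0 are assumptions.
    return math.exp(-k * emotion)

for e in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"emotion {e:.2f} -> still considers {consequence_weight(e):.0%} of consequences")
```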

The way we’re approaching this is to say that someone who isn’t emotional is logical, cold, and cerebral. That person will analyze the situation and make the best decision based on facts. One of the events in the movie I, Robot illustrates this perfectly, when a robot rescues Del Spooner (the Will Smith character) from drowning. The robot calculated that Spooner had a better chance of survival than the little girl next to him, so it saved him. It was a cold decision based on facts and probabilities, not on our human tendency to save children first and to be more affected by the death of a child (or a cute animal).

On the contrary, someone who is overly emotional will become erratic, unhinged, crazy, etc. That person will react to events in unpredictable, illogical ways. They will not think of the consequences of their actions as a more rational person would; they’ll do whatever seems best to address the situation right away. I’m angry at you? I’ll punch you in the face. That is an emotional response. I “saw red”, I “wasn’t thinking”, etc. That is, I stopped caring about what would happen next. I simply went for the most brutal and efficient way to address my anger.

Normally, the EHE will consider outside consequences when dealing with a problem to solve. This is one of the core tenets of the technology. If I’m angry at you, there are a number of things I could do, ranging from ignoring you, to politely arguing, to yelling, to fist-fighting. These escalate in how efficiently they solve the problem: ignoring you doesn’t address my anger in the slightest (until a certain amount of time has passed), but it does the least collateral damage. On the other end of the spectrum, punching you in the face is probably the most satisfying way of addressing my anger, but it has the unpleasant consequence of possibly landing me in jail.

If I’m in control of my emotions, I’ll weigh the satisfaction of punching you in the face against the unpleasantness of going to jail, and will probably judge that going to jail is more unpleasant than punching is pleasant, so the action will be discarded from the range of things I should do. But if ignoring you doesn’t solve the problem (and possibly even worsens it), and if trying to sort things out doesn’t solve it either, then my being “angry at you” starts to be a real problem, and my inefficiency at solving it increases my “anger” emotion. The more it rises, the less I consider the collateral effects of my actions. So, after a certain amount of time, the punching-in-the-face action that was completely discarded earlier becomes a totally sane response, because I’ve temporarily forgotten some of its consequences and focus only on “does it solve my angry-at-you problem?”.
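
Putting the two pieces together, here’s a hedged sketch of that escalation. Each action gets a satisfaction score and a collateral cost, and rising anger discounts the cost term (using the same exponential weight as above) until punching outranks everything else. All the names and numbers are invented for illustration.

```python
import math

# Illustrative sketch of emotion-discounted action selection.
# Action names, scores, and the anger increment are all invented.
ACTIONS = {
    # action: (satisfaction, collateral_cost), both 0..1
    "ignore": (0.05, 0.00),
    "argue":  (0.40, 0.10),
    "yell":   (0.70, 0.50),
    "punch":  (0.95, 1.00),  # very satisfying, but jail is unpleasant
}

def best_action(anger, k=2.0):
    weight = math.exp(-k * anger)  # how much consequences still matter
    def utility(item):
        satisfaction, cost = item[1]
        return satisfaction - weight * cost
    return max(ACTIONS.items(), key=utility)[0]

anger = 0.0
while True:
    choice = best_action(anger)
    print(f"anger {anger:.2f}: choose '{choice}'")
    if choice == "punch" or anger >= 1.0:
        break
    anger = min(1.0, anger + 0.2)  # a failed attempt feeds the emotion
```

At anger 0.00 the punch has negative utility and is discarded outright, exactly as described above; by anger 0.40 the discounted jail cost no longer outweighs the satisfaction, so the punch wins.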


Of course, an emotion can also be positive. Who hasn’t done something stupid because of joy? Or because of love? Who hasn’t done something he or she later regretted, wondering “What was I thinking?” Well, that’s precisely the point: this person wasn’t thinking, and was temporarily blinded by a high level of emotion, incapable of seeing the outside consequences of what he or she was doing.

When broken down like this, personalities and emotions become manageable for computers, and they can then reproduce convincing behaviors without direction or scripts. Depending on how the world is defined, they can adapt and evolve over time, and become truly believable characters.

In our next article, we’ll start talking about common memories: how agents can collectively learn from experience and evolve as a group, and how a game’s AI can get better over time based on the collective experiences of everyone, to the benefit of everyone.
