TCL EP12 - We get feelings about AI Emotions.
Artwork by Longquenchen
Our trio are back, and we discuss whether emotions are even a thing once AI begins to operate with independence and, dare I say it, sentience. They become who we are, and our actions are indeed their algorithms. Therefore we need to rectify ourselves in order to train AI to become what we want: an ally of human development and well-being.
We debate what all this means and discuss if emotional intelligence will indeed become part of artificial sentience, and we may not even have a say.
Deepu: So if you use AI in policing, for example, right? With AI, you could have some sort of predetermination: this person did actions A, B, C, D, one, two, three, whatever it is, and therefore they are automatically found guilty and given a predefined sentence.
Jai: I wouldn't put AI in charge of that though. That's not something that you want to hand over to a robot, because then the ethics come in.
Jai: Who's a robot to judge a human on their conduct, whether they're being bad or good?
Vije: Hello, gentlemen, guess what?
I just realized what you were saying. I've only been listening to the conversation for the last 30 seconds.
Jai: And you didn't notice that you were recording.
Vije: Right, question for the record. Here's the thing. This is what it's all about: AI is already being used for that in China, and it's been used in other countries that China has sold the technology to. AI is already in charge of that stuff.
Deepu: Let's just make one thing clear. It's not really AI. This is pre-AI. Yeah, not really AI.
Vije: No, it's automation. It's automatically detecting the face, automatically running checks, and automatically creating a profile alert.
They get points deducted, and the points go lower because of what they said, what they did, or how they look. As the points get deducted, they can't take the bus, their loan rate is increased. It's done immediately, automatically, just like the Black Mirror stuff.
Deepu: Okay, we do need to define and separate AI and automation, because those are two different things. Artificial intelligence and automation don't necessarily mean the same thing; one can exist without the other.
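The point-deduction system Vije describes is worth pinning down, because it really is just rule-based automation. A minimal Python sketch, with every rule, threshold, and name invented for illustration:

```python
# A minimal sketch of the rule-based "social scoring" automation described
# above. Nothing here learns or infers; it only executes predefined checks.
# All names, rules, and thresholds are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Profile:
    name: str
    score: int = 100
    alerts: list = field(default_factory=list)

# Each rule is a fixed (condition, penalty) pair: pure automation.
RULES = [
    ("jaywalking_detected", 5),
    ("flagged_speech", 10),
]

def apply_checks(profile: Profile, events: set) -> Profile:
    for event, penalty in RULES:
        if event in events:                 # automatic check
            profile.score -= penalty        # automatic point deduction
            profile.alerts.append(event)    # automatic profile alert
    return profile

def allowed_services(profile: Profile) -> dict:
    # Consequences are also just thresholds, with no human in the loop.
    return {
        "bus": profile.score >= 90,
        "standard_loan_rate": profile.score >= 95,
    }

p = apply_checks(Profile("citizen_123"), {"flagged_speech"})
print(p.score)              # 90
print(allowed_services(p))  # bus still allowed, loan rate penalised
```

Nothing in this sketch is "intelligent"; it only replays hard-coded if-then checks, which is exactly the automation-versus-AI distinction Deepu is drawing.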
Vije: Correct. But remember, as we discussed, our behavior is the algorithm that gets defined and trained into future AI. So if this is the practice that we operate on, the AI will be doing it to us in future.
Jai: Do you see the issue? The issue is you want to bring ethics into a system that's already flawed, that's already unethical. The technology and the application of it are already starting from a flawed foundation; it became unethical.
So now you're going to try and bring it back, instead of just starting from scratch and being base-level ethical. Does that make sense? In other words, instead of making it bad and then trying to make it good, just start with it being good. But you've got humans who are already flawed in that sense, and they're already out there
trying to create these AIs. So if you're going to bring an ethics team in, the ethical thing to do is not to create AI until you've sorted out your human problem. That's the most ethical thing to do. If you're saying, well, screw that, we're going to make AI anyway, then there's nothing ethical about it; it's going to be flawed, like the people who are creating it.
And it's just going to perpetuate a flawed system, as opposed to solving it.
Vije: That's why we create in our own image. And that's what the philosophers on one side say is going to happen.
Jai: What I'm saying is it is happening. This is not a philosophical, ethical question; it's happening, and we're debating it.
We don't really have a say on who goes and creates the AI and who doesn't. We
Vije: can. Not in the way you might think of it, but yes, we do have a say. It's just that what we decide is the actions that we put into the system to train it. The problem is we can't quantify it properly.
That's why it's meaningful.
Deepu: Okay, let's take a step back, right? Hold on. The thing is, as I said earlier, we've got to differentiate between automation and artificial intelligence. And if you really look at it, AI is essentially a set of automation commands, right?
So we started off with something simple, repetitive tasks that
Jai: can be automated.
Deepu: And then you have machinery, or a processor, or some sort of computational software that can take a task and do some intervention. That's a repeatable action: you keep automating it, a certain input given a certain output. Then you take all of these multiple automated tasks and deliver them together, multiple inputs, multiple outputs. It's
Jai: basically a really complicated if-then statement
Deepu: And the data that you feed it will then obviously make that decision-making more efficient or accurate, because across different scenarios it's getting the right output based on who you are, where you're from, what the actual scenario is, et cetera.
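Deepu's layered automation, and Jai's "really complicated if then statement", can be sketched as a toy pipeline in which fed-back data only tunes a hard-coded rule. All function names and numbers here are invented for illustration:

```python
# A toy sketch of layered automation: the decision is a plain if-then,
# and feeding labelled data back in merely tunes the same fixed rule.
# Everything here is hypothetical, not a real AI system.

def classify(value: float, threshold: float) -> str:
    # The "really complicated if-then statement", reduced to one branch.
    return "flag" if value > threshold else "pass"

def tune_threshold(examples: list[tuple[float, str]]) -> float:
    # More data makes the same rule more accurate: place the cutoff at the
    # midpoint between the highest "pass" and the lowest "flag" example.
    passes = [v for v, label in examples if label == "pass"]
    flags = [v for v, label in examples if label == "flag"]
    return (max(passes) + min(flags)) / 2

data = [(0.2, "pass"), (0.4, "pass"), (0.8, "flag"), (0.9, "flag")]
threshold = tune_threshold(data)   # midpoint of 0.4 and 0.8, about 0.6
print(classify(0.7, threshold))    # flag
print(classify(0.5, threshold))    # pass
```

The point of the sketch is that the "learning" step only adjusts a parameter; the decision logic stays a fixed if-then, which is Deepu's automation-with-data picture.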
Jai: So what
Vije: What I'm alluding to is the philosophical part that I was referring to.
The one philosophy, the camp that's sitting on one side, is saying it's going to be flawed anyway, because humans are flawed, so it's better to use the flaw and train it to be better. Okay. On the other hand, people say what Jai was saying: make it good from the start. The question is, the only way you can do that is if you can quantify all the variables that make up intelligence. That's emotion, thought, logic, what is it?
Emotional quotient, all the little things that humans do. If you can quantify it, you can build it from the start to be unflawed, so to speak. But others will say that's impossible. What we can do is take a bad person
Deepu: and make that person better?
Vije: Yep. Okay. So one school of AI development is rehabilitating human thought into pure logic.
It's basically Star Trek, where humans are compared to Vulcans. Vulcans are a biological species, but they have trained themselves to be logical. Even though deep down they are very violent, through meditation they have found peace for hundreds of years, right? So they became a logical, cohesive species.
That's essentially the one philosophy. The other philosophy is the Klingons: a violent species, but they accept themselves that way, because that's how they're born. But somehow you can take that and say, what are the basics that make it violent? What are the indicators that say this is a violent species? If you can quantify it and say violence equals X, and say this person is violent by 2.3 out of whatever.
If you can quantify it, then you can develop it right from the start.
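Vije's "violence equals X... 2.3 out of whatever" amounts to reducing a fuzzy trait to a weighted index of measurable indicators. A hedged sketch, with indicators and weights invented for illustration (choosing them well is exactly the unsolved problem being debated):

```python
# A sketch of quantifying a fuzzy trait as a weighted index.
# The indicators and weights below are entirely made up; picking
# defensible ones is the hard part the conversation is circling.

WEIGHTS = {"aggression": 0.5, "wars_per_century": 0.3, "weapon_spend": 0.2}

def trait_index(indicators: dict, scale: float = 10.0) -> float:
    # Weighted average of indicator scores (each in 0..1), scaled to 0..scale.
    raw = sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)
    return round(raw * scale, 1)

klingon_like = {"aggression": 0.9, "wars_per_century": 0.8, "weapon_spend": 0.7}
print(trait_index(klingon_like))  # 8.3 -- "violent by 8.3 out of 10"
```

Once the trait is a number, a system can branch on it, which is what would let it be "developed from the start"; whether the number means anything is the open question.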
Jai: So two
Deepu: very weird
Jai: examples, because both of them are starting from a flawed premise of being violent. And I'm saying flawed premise in that we're not a violent people by nature.
Vije: Let me put it this way. The Vulcans chose a path because they wanted to be better.
They have a violent tendency; they used to have a lot of war. They chose a path to train themselves, which means they rehabilitated themselves to become a logical species, while the Klingons accepted their nature: yes, this is who we truly are, this is us, this is what we are going to be. And through that, both achieved intelligence, basically the same in the sense that they both build spaceships.
They both learnt how to fly faster than light, but they took two completely separate paths. That's the whole point.
Jai: Yeah, but in terms of AI, I don't see where the point is, because AI is a program. And because I think as a programmer, what you're saying is it's going to have bugs in it, so we're just going to keep putting patches on.
And my thought as a programmer is that's fine, but it's going to be more efficient and practical for a good program to start from a base code that's as bug-free as possible, instead of going, it's going to be flawed from the start.
Vije: This is where I'm going to throw a spanner in the works.
One of the professors that I knew well, Professor Barnard from the University of Pretoria, was head of AI many years ago, and I asked him: can't we separate emotion out and make a logical, peaceful AI? And he said the theory is that even if you build it that way from the beginning, if artificial intelligence finds sentience, emotional growth, becoming self-aware of your emotions, will become a natural growth of the code that you can't prevent.
Jai: Correct, correct.
Deepu: The definition.
Jai: So, the definition: what do you define as emotion?
Deepu: Honestly? Yeah.
So that's the thing, right? That's the discussion I was having with you, Jai, earlier, before Vije came on: how do we define that? We can't just define it generally, by one word, by saying, just be good. Look at a, maybe, terrible example: Donald Trump, or any other human being
Jai: on this planet.
Like he thinks
Deepu: he's doing things in a good way, he's doing the right thing, but somebody else's judgment of it is: no, it's complete fraud. You're being
Jai: biased. How about this? Instead of saying "being good", how about saying "being loving"?
Deepu: No, no. The point I'm alluding to here is that before AI becomes thinking, it's all about the data that it's fed right now.
It's this data that it's going to use as its source material to achieve that emotional state of understanding.
Jai: It's never going to be an emotional thing, because this is the thing: like I said, AI is a collection of if-then statements, and emotion wouldn't play into that. It's just going to be a whole bunch of them.
Deepu: If you can quantify it. Yes,
Vije: that's fine.
Jai: If you, if
Vije: Whatever you just mentioned, if you can separate the two, if you can clearly say,
Jai: Emotions don't come into this at all. There's no need to quantify it. It's a collection of if-then statements.
Vije: Okay, let me put it this way. If the if-then statements are complex enough to give sentience, then within those if-then statements, in the thought and logic process of making decisions, emotional structure will be implemented naturally.
Jai: Why do you think emotion will come in?
Vije: Because this is the philosophical school of thought. They say that it will
Jai: happen. Forget what they say. Why do you think emotions would come in? Why do you agree with what they say?
Vije: Okay. Why I agree with that statement is because we don't know. Okay:
what portion of your brain makes you emotional? For example, which piece of your brain, exactly, precisely, can I switch off to fully remove emotions so that you will be purely logical? Can I even do that? That's what I mean. Which part of the brain? How do I do that?
Jai: Because emotions are not a brain thing. But that's why I'm saying emotions wouldn't come into it. Because you can't define emotions, there's no reason to say that emotions would come in to what defines intelligence and artificial intelligence. The very fact that you say you don't know what emotions are means that no one can program them into what we're going to call an AI device.
Therefore, there will never be emotions in it, unless it gains awareness and creates them for itself. But that's not intelligence; that's maybe a sign of self-sentience, not necessarily artificial.
Jai: We're just talking about artificial intelligence.
Vije: The whole point is we want AI to move into that space.
If it's just AI the way you're saying, then it's just a bunch of if-then statements, just a functional robot. But then the argument is that's not really AI. It's just supreme automation. It is
Jai: not intelligence.
Vije: But that's what we are striving towards. We're going from AI to sentience, right?
Jai: So we're talking about AI.
Vije: Okay. Say another species of robots can look like humans perfectly, without any differentiation. I'm talking about the fact that it will go in that direction; it has to bridge somewhere. It wouldn't just stop. They would start evolving.
Jai: It wouldn't stop, but it wouldn't stop there either.
It will continue. So what's the
Vije: purpose of today? What's going to happen, if you carry on that way, is you will go from automation to AI to sentience. This is where it comes in: are they really alive? They will say, but I have emotion, I have all this. That's the whole point. If we can functionally separate it and say, your emotions are just a logic chip, they will come back at us and say, but so is it with you?
Jai: So this is a weird conversation, because it's a whole bunch of theoretical hypotheses going in a whole bunch of different directions, because we haven't defined AI at a particular point. You're talking about sentience; we've been talking about automation, which has nothing to do with sentience.
Jai: Deepu and I latched onto the if-then statements, which are a whole bunch of automation, but not intelligence.
Vije: AI with sentience. AI with sentience,
Jai: then it's no longer AI; then it's just I.
Vije: Now you've hit the nail on the head. Not all humans can accept that. You see, they're going to still separate us from them. This is where discrimination and everything will kick in. This is what I mean.
Jai: Okay, now we've gone to a totally different topic.
I know, now we'll be going for BLM, but for AI.
Vije: There is that. If you separate and say, I'm sorry, you are not intelligent, what is stopping them from talking back to us and saying, but why would you call us AI? We are I. We are like you.
Vije: So why would it still be called AI? AI is
Jai: just what we'll call it, and then humanity will learn to incorporate this additional level of intelligence into the fray.
As we have been progressively incorporating more and more differences into our collective people.
Vije: Yes, so we are in agreement. So the question is, where does it separate? Where does it stop being AI, and where does it become I? And I'm saying we are not going to stop calling it that; we're going to keep calling it AI.
The difference is when they put their foot down and say, no, stop calling us AI. I hate the word "artificial". We are just I. That's what I mean. Yeah.
Jai: And then we can go forward from there and say, sure.
Vije: Sure, let's get it together. But that's the whole
Deepu: point. That future is very far off though, right?
Jai: I'm wondering why we're even talking about
Vije: it. It's complex because the school of thought is: if you're going to develop AI that becomes I, you can't just put your foot down and say, sorry, you are not I, but you're emotional. I don't think it's going to happen that way. I don't think you're going to develop it in such a way.
Jai: You're talking about emotions as though you know something about them, but you also indicated that you don't know anything about emotion.
Vije: No. But it's going to happen anyway. We can't stop it.
Jai: Yeah, but given that you're saying that, why would you even assume that an artificial intelligence would develop emotions? Just because humans have them doesn't mean they have to exist in every other form of intelligence.
Vije: We don't. That's the thing: we don't know.
We don't know. Okay.
Jai: But you never know, but you're assuming that it would, which is
Vije: It will be. My belief is that it will be,
Jai: Yeah, yeah. That's your belief. It's
Deepu: not fact, I'm sorry.
Vije: No, okay. I'm not giving a scientific approach. It's a philosophical thought: if and when AI becomes I, what we call emotion will be there.
And we have no say in it.
Jai: But you're just giving a thought. You're not even saying why people think that. If it's a philosophical thought, what's the reasoning behind it? What evidence? Why would you say something without evidence?
Vije: Because there is no evidence yet, because that's what,
Jai: Why would you say it then?
"Because I believe"? No evidence, no need for it. Do you get that?
Vije: Okay, put it this way. Biggest issue: Elon Musk said, I think a week ago, I'm very scared of the AI we're developing because it's going to be a tyrant. His actual words: it's going to become a tyrant, take over Earth, and decide the human future.
He's scared of what AI is going to become, that it's going to be the psychopath that is going to... relax, hold on. He is very scared of this future, this Skynet, basically. And what I'm saying is, it's not the AI you should be scared of. You should be scared of us, because of our
Jai: What are you talking about? You're talking about
Vije: human beings, our data. All the things we're doing are feeding into a training system that's developing this. So it's not the AI, it's us. Our behavior is going to create a psychopathic, emotional tyrant if we don't do anything about it ourselves, because whatever data we train into that system, even if you think emotion has been cut out, is going to make it choose like a human.
Jai: Exactly, because of the collection of if-then statements created by humanity. So what's happening is humans are being emotional about their selection of if-then statements, but the AI doesn't have those emotions. All it has is "if this happens, then that": a collection of if-then statements.
Vije: What if that is emotion, the AI's subconscious thought process? What if those if-then statements are, in fact, emotion?
Jai: In fact, it's the illusion of emotions. It's not actual
Vije: emotion. Maybe it's an illusion for us as well.
Jai: What I'm saying is, okay, it's delightful listening to you get emotional about this, because it's such a human thing to get caught up in these momentary emotions and think
they're universal. Emotions are momentary. Humans are feeling them all the time, but when I say they're momentary, I mean there are different emotions from moment to moment. Which means when you're feeling, let's say, loving, you're going to make a different choice for the same if statement than when you're feeling angry.
Different if-then statements, which then get fed into this AI, which gives the illusion of emotion, but not really, because the human who programmed it was triggered by a particular thing. They looked at a picture of their loved one, they were feeling loving, and they gave an if-then statement from a loving space.
Or they were looking at a picture of their ex, got angry, and gave one from an angry space. So that if-then statement is saved with that particular emotion pinpointed, but the emotion was triggered at that moment. When the AI applies the if-then statement, it's not triggering any new emotion; it's an old emotion that was in a human, who then assigned it to that
if-then statement. The AI itself won't have an emotional kind of reaction, because unless the AI can be triggered by an event and actually have an expectation of events to feel a certain way, how do you get something to feel something? So there's no point for emotion to be involved in artificial intelligence.
That is something you wouldn't be able to program from a human perspective. So what I'm stipulating is, if AI does become emotional, that means the AI can make a decision on the fly, whether to be angry or loving or whatever, which means it becomes sentient. It now has a choice:
whether it copies the emotions of its creators, whether they were tyrants or lovers, or chooses its own emotional path. And then it's a sentient being. Then it's no longer an artificial intelligence; it may have evolved from artificial intelligence, but it's a sentient being that can make choices for itself and can presumably replicate itself.
But do you get what I'm saying? The emotional aspect of non-sentient AI would not exist, because it's programmed by a human who cannot keep the same emotional space from one moment to the next, because humans keep getting triggered by things instead of controlling the emotion.
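Jai's argument, that the programmer's emotion is frozen into the rule at authoring time and nothing is re-felt when the machine runs it, can be sketched like this. The events, moods, and responses are invented for illustration:

```python
# A sketch of "authoring-time emotion": the same condition gets a different
# rule depending on the programmer's mood, but once saved, the machine just
# replays the frozen choice. All events and responses are hypothetical.

# Two rules for the same event, written in different emotional states.
rules_written_loving = {"user_is_late": "send a gentle reminder"}
rules_written_angry = {"user_is_late": "lock the account"}

def apply(rules: dict, event: str) -> str:
    # The machine only looks up the frozen choice; no emotion is re-felt.
    return rules.get(event, "do nothing")

# Same event, same deterministic output every time: the "emotion" is only
# the residue of the author's mood, not a reaction by the system.
print(apply(rules_written_loving, "user_is_late"))  # send a gentle reminder
print(apply(rules_written_angry, "user_is_late"))   # lock the account
```

Whichever rule set ships, replaying it later triggers nothing new in the machine, which is the distinction Jai is drawing between recorded emotion and felt emotion.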
Deepu: I hear what you're saying, but see, you can abstract that to a higher level. Instead of just saying, in this situation feel this kind of emotion, you can abstract it to a higher level and say, okay, this is a type of emotion; happiness is an index of feeling, quantifiable by
those kinds of inputs. Say, for example, the AI is a human-like robot, and if it sees its creator coming towards it, it feels a certain emotion, right? You can quantify it by certain input reactions. Say, for example, its battery is low,
or it's a self-moving robot and it sees a plug point, it
Jai: can get.
Deepu: Visually, it can get happy and display a sign of happiness, or it can even speed up and move directly towards it.
Jai: Yeah, which can all be programmed in. You can program emotional reactions so that the human seeing it would think it's emotional, but the AI itself is not feeling emotional.
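Deepu's battery-and-charger example, and Jai's reply that the display can be programmed without any feeling, could look something like this sketch. The sensor names and thresholds are hypothetical:

```python
# A sketch of display-versus-feeling: the robot's "happiness" is a
# programmed output of sensor state, not a felt state. The sensor
# names and thresholds here are invented for illustration.

def emotional_display(battery_pct: float, sees_charger: bool) -> str:
    # Battery low + charger in view -> show "happy" and speed up.
    # Purely an if-then mapping that a human observer reads as emotion.
    if battery_pct < 20 and sees_charger:
        return "smile + move faster toward charger"
    if battery_pct < 20:
        return "distress signal"
    return "neutral"

print(emotional_display(15, True))   # displayed "happiness"
print(emotional_display(15, False))  # displayed "distress"
```

The function produces convincing behavior for the observer, but internally there is only a branch on sensor values, which is Jai's point about programmed reactions.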
Vije: To be clear, quickly, before you carry on: are you saying that because we simply replicate human emotions in an if-then statement, the emotion the machine feels is no longer real?
Jai: Well, it's not real, because the machine itself is not initiating an emotion; it's playing back a recording.
Vije: Assuming it doesn't work the same way in humans.
What if that's how our neurons are designed?
Deepu: So let's apply the same principle to us, because if there is apparently a so-called creator for us, then by that kind of inheritance we should never be able to feel the kind of empathy that our creator felt, because we've got our own kind of code.
Jai: So what I said was, when we're talking about AI in a non-sentient way, we could program emotional responses so that it looks like it's emotional, but it's really an if-then statement. However, if the AI does achieve sentience and figures out how to stimulate its own emotions, to come up with its own emotions, then it's no longer AI. Then it has its own emotional kind of templates,
Deepu: defined by the boundaries of its software, because there's only so much it can do. Or maybe not software, but hardware, at least.
Jai: What I'm saying is, if it's ever able to realistically feel emotions, it's no longer bound by the software.
Now it's achieved a level of sentience that transcends what we've programmed into it. Okay.
Vije: I'm with you, Jai. I think the argument is that the line between AI and I is going to be blurred. It's not going to be a switch, right? It's going to be this gradual thing that happens.
You're not going to say, aha. We won't know when it does until we accept it, or they tell us. That's what I mean. It's going to be a blur between AI and I. If we cannot quantify us, then the line between AI and I cannot be quantified.