Canada
Asked
Resolved by JustinRatliff!

Artificial Conscience

I’m not sure if we’ll be able to solve this question, but if anyone could get close, I feel it’d be you guys here in the community.

With all the research/work being done on Artificial Intelligence, do you think there should be just as much put into Artificial Conscience?

I know there’s some work being done with autonomous vehicles over at http://moralmachine.mit.edu, but I don’t think enough is being done in this area. I feel like there should be more Jiminy Cricket style ethical engines being built.

One item that I don’t think is talked about much is the idea of self-sacrifice. Robots are replaceable: if a robot’s body is damaged it can be rebuilt, and its experiences and life can live on in another electromechanical host (if backed up to the cloud at a regular rate). Our bodies are kind of a one-shot deal. Because they are replaceable, I think robots should sacrifice their chassis at all costs to save us vulnerable humans.

This kind of question plays into Asimov’s three laws a bit.

I’m super curious to hear about what you guys think about this.


Related Hardware EZ-B IoTiny


PRO
USA
#1   — Edited

To be clear, you mean in a situation where the robot sacrifices itself to save a human?

#2  

Well, growing up a fan of the Cylons in Battlestar and then the Terminators, my feeling would be to never let the robots think about things. It's too dangerous if they continue to compute decisions a million times faster than human brains. If they start to think of themselves as God and of humans as insignificant germs, this will not have a happy ending for us. James Cameron just did an interview where he was asked this same type of question. When he made the first Terminator movie, it was just a cool script he wrote. Now, as a famous director, he has seen too much of the actual A.I. being engineered by real scientists for coming robots and war machines, and he said he is no longer cheerful about what is coming.

PRO
Canada
#3  

I am more inclined to think the relationship between us and the ultra intelligent brain (once it emerges), is similar to that of us and ants. It probably doesn't care much about us (no love/sacrifice and no hate/war), but if we get in the way we will be pushed aside.

But at the levels of intelligence we have now (like in self-driving cars), your question is a very good one, although it has many complex aspects. Let's say we hardcode that, in a dangerous situation with no passengers in the car, the car should sacrifice itself to save humans if possible, for example by driving off the road into the ditch to avoid a crash. Sure. But what about the situation where it has to decide whom to sacrifice when there are no other options? The car has to answer: Should I hit the old person to save the young, or the other way around? Whose lives matter most? Should I try to save my own passenger or the three pedestrians? (Incidentally, people won't buy a car that prefers saving pedestrians over saving its own passenger.)

Ethical dilemmas like these make answering such questions very tough and, in my opinion, sometimes outside the realm of engineering.
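The "hardcode the sacrifice rule" idea above can be sketched in a few lines. This is purely a toy illustration: the function name, maneuver labels, and inputs are all invented for this thread, not taken from any real autonomous-vehicle stack, and the point is that the rule runs out exactly where the ethical dilemma begins.

```python
def choose_maneuver(passengers: int, pedestrians_at_risk: int) -> str:
    """Pick between staying on course and swerving into the ditch.

    Rule from the discussion: with no passengers aboard, the car
    should sacrifice itself (swerve) whenever that protects humans.
    """
    if pedestrians_at_risk == 0:
        return "stay_on_course"     # nobody outside is endangered
    if passengers == 0:
        return "swerve_into_ditch"  # empty car: sacrifice the chassis
    # With passengers aboard there is no clear rule -- this is exactly
    # the dilemma discussed above, so we refuse to hardcode an answer.
    return "undecided"
```

Note how the easy cases are trivial to encode, while the "passenger vs. pedestrians" case falls straight through to `"undecided"`; that gap is the part that arguably lies outside engineering.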

PRO
Canada
#4  

@fxrtst That was just an example of an area of Artificial Conscience that bothers me. What I'm looking to discuss is the entirety of what @Robo Rad is describing.

I think it's inevitable that we will have robots that think for themselves (Artificial Intelligence), but how do we create moral machines? Robots with empathy and ethics, that can make decisions based upon human rights and laws. Shouldn't this be at the forefront of robotics and A.I. instead of an afterthought?

I can see the Terminator and Cylon robots being built today, but where are the robots like Bicentennial Man or David from the movie A.I. that seem to have an Artificial Conscience?

It seems that Asimov started this conversation in the 1940s, so how come we haven't arrived at any answers or conclusions yet?

PRO
USA
#5   — Edited

Interesting you bring this up. I have a script I've been working on, about this subject.

We can always write laws into the code. But is a conscious entity (robot) that is allowed to "think" for itself going to leave those coded "laws" unchanged? If we tell it to do so, will it obey indefinitely? I think it would, in the beginning, but what happens as it evolves, or when an ill-intentioned person introduces the first code to disobey those rules? Will ethics play into the code? Do they WANT to be like humans, to emulate our choices and emotions? Like the choice to help one another. Once the genie is out of the bottle, can we put it back in? Most likely not. Great questions and a great subject.

"To save myself or save the human?"
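The worry above, that coded "laws" only hold until something rewrites them, can be shown with a toy example. Everything here is invented for illustration (the `Agent` class, the law strings); the point is only that if the laws are ordinary mutable state, any code path, including one the agent itself executes, can remove them.

```python
class Agent:
    def __init__(self):
        # The "laws" are just data the program consults.
        self.laws = ["never harm a human"]

    def permitted(self, action: str) -> bool:
        # The law is obeyed only as long as it is still present.
        return not ("harm" in action and "never harm a human" in self.laws)

agent = Agent()
print(agent.permitted("harm a human"))  # False: the law blocks it

# A single line -- whether injected by an ill-intentioned person or
# produced by self-modification -- removes the constraint entirely.
agent.laws.clear()
print(agent.permitted("harm a human"))  # True: nothing enforces the law now
```

Real safeguards would need the laws enforced outside the agent's own reach, which is exactly the open question in this thread.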

PRO
USA
#6   — Edited

I've spoken in the past about how much society is influenced by film. Those images of robots doing harm to humans make it seem at times like an insurmountable task to get everyone to rethink this fear. I've always said I dream of a world in which humans and robots can live together...but we humans can't really live peacefully side by side...how long before suspicions arise, or the first incident where a human is accidentally killed by a robot? Would a bipedal robot like C3PO or Atlas seem more dangerous because it is more human in appearance vs. an R2D2 or a wheeled robot? Anthropomorphic prejudice will play a role in how these robot 'offenders' are dealt with.

#7  

One interesting thing we are seeing with our AI, or Machine Intelligence, is that our human biases are going into those engines.  Things we just don't consider, or take for granted, can innocently sneak into an AI system as a bias and produce undesired outcomes.

So it would be super interesting to feed an AI system a bunch of data on self-sacrifice examples and then see what its "engine" produces in daily encounters.  Odds are, it would be entertaining.

Will, I love your clarification question...because after you asked, I got this grand image of a robotic cult from Indiana Jones and the Temple of Doom.

PRO
USA
#8   — Edited

It all boils down to:

Would you like any artificial intelligence ruling the world?

Smart machines write or amend code, create their own systems and languages, and calculate faster.

Remember: smarter and faster do not equal wiser.

The artificial will never completely understand the genuine, real human element.

Humans create machines; machines don't create humans.

The artificial is only as good as the code a human has written for it.

Artificial will never = human, so the equation will never compile.

There are many good tasks robots with artificial intelligence can perform today (2019)

Robots have been around for a long time in many businesses, factories, hospitals, in space, and in many other places, doing many good tasks we humans designed them to do.

Calculations will never equal true love, compassion, empathy, and all the rest...

Robots with artificial intelligence may assist us and help us (and, we hope, never destroy us), but they will never replace us.

Sad to say, greed and domination play a serious role in the future.

One thing that comes to mind in this conversation is the "singularity net" and artificial intelligence. Look it up if you haven't yet.

Here is some thought from the past: https://psychology.wikia.org/wiki/Artificial_consciousness

Here is my two cents; be well.