Asked
Resolved by JustinRatliff!

Artificial Conscience

I'm not sure if we'll be able to solve this question, but if anyone could get close, I feel it'd be you guys here in the community.

With all the research/work being done on Artificial Intelligence, do you think there should be just as much put into Artificial Conscience?

I know there's some work being done with autonomous vehicles over at http://moralmachine.mit.edu, but I don't think enough is being done in this area. I feel like there should be more Jiminy Cricket-style ethical engines being built.

One item that I don't think is talked about much is the idea of self-sacrifice. Robots are, in a sense, immortal: if their body is damaged it can be rebuilt, and their experiences and life can live on in another electromechanical host (if backed up to the cloud at a regular interval). Our bodies are kind of a one-shot deal. Being rebuildable, I think robots should sacrifice their chassis at all costs to save us vulnerable humans.

This kind of question plays into Asimov's Three Laws a bit.

I'm super curious to hear what you guys think about this.



#1   — Edited

To be clear, you mean in a situation where the robot sacrifices itself to save a human?

#2  

Well, growing up a fan of the Cylons in Battlestar Galactica and then the Terminators, my feeling would be to never let the robots think about these things. It's too dangerous if they compute decisions a million times faster than human brains. If they start to think of themselves as gods and of humans as insignificant germs, this will not have a happy ending for us. James Cameron was just asked this same type of question in an interview. When he made the first Terminator movie, it was just a cool script he wrote. Now that, as a famous director, he has been shown by real scientists what is actually being engineered into coming A.I., robots, and war machines, he says he is no longer cheerful about what is coming.

#3  

I am more inclined to think the relationship between us and the ultra-intelligent brain (once it emerges) is similar to that between us and ants. It probably won't care much about us (no love/sacrifice and no hate/war), but if we get in the way we will be pushed aside.

But with the similar levels of intelligence we have now (like in self-driving cars), your question is a very good one, although it has many complex aspects. Let's say we hardcode that, in a dangerous situation with no passengers in the car, the car should sacrifice itself to save humans if possible; sure, for example, by driving off the road into the ditch to avoid a crash. But what about the situation where it has to decide whom to sacrifice when there are no other options? The car has to answer: should I crash into the old human to save the young, or the other way around? Whose lives matter the most? Should I try to save my own passenger or the three pedestrians? (People won't buy a car that prefers saving more people over saving its own passenger, by the way.)
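
To make this concrete, here is a minimal sketch of what such a hardcoded rule might look like (the function, field names, and weight are all invented for illustration, not anyone's real system):

```python
# Hypothetical sketch of a hard-coded "sacrifice" policy for an empty car,
# and of where it breaks down once passengers are involved. All names and
# weights are invented for illustration.

def choose_maneuver(maneuvers, passengers_on_board):
    """Pick the maneuver with the lowest expected human cost.

    Each maneuver is a dict like:
      {"name": "swerve_into_ditch", "pedestrians_at_risk": 0,
       "passengers_at_risk": 0, "car_destroyed": True}
    """
    def cost(m):
        if passengers_on_board == 0:
            # Easy case: an empty car should always sacrifice itself,
            # so the chassis only matters as a tie-breaker.
            return (m["pedestrians_at_risk"], m["car_destroyed"])
        # Hard case: there is no agreed weighting between the passenger's
        # life and the pedestrians' lives. Any constant placed here is a
        # moral judgment disguised as engineering.
        PASSENGER_WEIGHT = 1.0  # who gets to decide this value?
        return (m["pedestrians_at_risk"]
                + PASSENGER_WEIGHT * m["passengers_at_risk"],)
    return min(maneuvers, key=cost)

options = [
    {"name": "brake_straight", "pedestrians_at_risk": 2,
     "passengers_at_risk": 0, "car_destroyed": False},
    {"name": "swerve_into_ditch", "pedestrians_at_risk": 0,
     "passengers_at_risk": 0, "car_destroyed": True},
]
print(choose_maneuver(options, passengers_on_board=0)["name"])  # swerve_into_ditch
```

The PASSENGER_WEIGHT constant is the entire debate compressed into one line; there is no engineering answer for what its value should be.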

Ethical dilemmas like these make answering such questions very tough and, in my opinion, sometimes beyond the realm of engineering.

#4  

@fxrtst That was just an example of an area of Artificial Conscience that bothers me. What I'm looking to discuss is the entirety of what @Robo Rad is describing.

I think it's inevitable that we will have robots that think for themselves (Artificial Intelligence), but how do we create moral machines? Robots with empathy and ethics, that can make decisions based upon human rights and laws. Shouldn't this be at the forefront of robotics and A.I. instead of an afterthought?

I can see the Terminator and Cylon robots being built today, but where are the robots like Bicentennial Man or David from the movie A.I. that seem to have an Artificial Conscience?

It seems that Asimov started the conversation about this in the 1940s, but how come we haven't arrived at any answers or conclusions yet?

#5   — Edited

Interesting you bring this up. I have a script I've been working on about this subject.

We can always write laws into the code. But is a conscious entity (robot) that is allowed to "think" for itself going to leave those coded "laws" unchanged? If we tell it to do so, will it obey indefinitely? I think it would, in the beginning, but what happens as it evolves, or when an ill-intentioned person introduces the first code to disobey those rules? Will ethics play into the code? Do they WANT to be like humans, to emulate our choices and emotions? Like the choice to help one another. Once the genie is out of the bottle, can we put it back in? Most likely not. Great questions and a great subject.

"To Save myself or save the human?"

#6   — Edited

I've spoken in the past about how much society is influenced by film. Those images of robots doing harm to humans make it seem, at times, like an insurmountable task to get everyone to rethink this fear. I've always said I dream of a world in which humans and robots can live together... but we humans can't really live peacefully side by side ourselves... how long before suspicions arise, or the first incident where a human is accidentally killed by a robot? Would a bipedal robot like C-3PO or Atlas seem more dangerous because it is more human in appearance than an R2-D2 or a wheeled robot? Anthropomorphic prejudice will play a role in how these robot 'offenders' are dealt with.

#7  

One interesting thing we are seeing with our AI or Machine Intelligence is that our human biases are going into those engines. Things we just don't consider or take for granted can innocently sneak into an AI system as a bias and produce undesired outcomes.

So it would be super interesting to feed an AI system a bunch of data on self-sacrifice examples and then see what its "engine" produces in daily encounters. Odds are, it would be entertaining.
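
As a toy version of that experiment, one could train a tiny text classifier on hand-labeled scenarios and then probe it with an everyday situation. A sketch, assuming scikit-learn is available (all scenarios and labels are invented):

```python
# Toy version of the "feed it self-sacrifice examples" experiment. All the
# scenarios and labels below are invented; a real dataset would smuggle in
# our human biases, which is exactly the point made above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

scenarios = [
    ("child wanders into traffic, robot can push them clear", "sacrifice"),
    ("fire in the building, robot can shield a trapped human", "sacrifice"),
    ("robot battery is low, human asks for a coffee refill", "no_sacrifice"),
    ("minor scratch risk to a human, robot would be destroyed", "no_sacrifice"),
]
texts, labels = zip(*scenarios)

# TF-IDF features + logistic regression: about the simplest possible "engine".
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Probe the engine with a daily encounter it never saw in training.
print(model.predict(["human drops their phone near an escalator"]))
```

With four training examples, the answer is basically shaped by accidental word overlap, which is a tidy little demonstration of how undesired outcomes sneak in.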

Will, I love your clarification question... because after you asked, I got this grand image of a robotic cult from Indiana Jones and the Temple of Doom.

#8   — Edited

It all boils down to:

Would you like any artificial intelligence ruling the world?

Smart machines: they write or amend code, create their own systems and languages, and calculate faster

remember: smarter and faster do not equal wiser

The artificial will never completely understand the genuine, real human element

Humans create machines, machines don't create Humans

The artificial is only as good as the code a human has written for it

artificial will never = human, so the equation will never compile

There are many good tasks robots with artificial intelligence can perform today (2019)

robots have been around for a long time in many businesses, factories, hospitals, space, and many other places, doing many good tasks we humans designed them to do.

calculations will never equal true love, compassion, empathy and all the rest...

robots with artificial intelligence may assist us and help us (and, we hope, never destroy us), but never replace us

Sad to say, greed and domination play a serious role in the future

One thing that comes to mind in this conversation is the "singularity net" and artificial intelligence; look it up if you haven't yet

Here are some thoughts from the past: https://psychology.wikia.org/wiki/Artificial_consciousness

here are my two cents, be well

#9  

Another thing to consider is that our morals change and evolve over time as we do.

So... what's my point? I think what I'm heading towards is that a robot can probably only be about as moral as we humans are at any given point in time. We are learning, changing, and evolving (hopefully for the better), and if our robots and AI equal or exceed our mental abilities, they should be able to contribute to that change and evolution.

A functional moral process for an intelligent robot or AI might look something like this (a rough code sketch follows the list):

  1. A hard-coded set of basic low-level laws (think the Three Laws of Robotics, the Ten Commandments, or E.T.'s last commandment: "Be good")
  2. A Committee or Organization to set governing rules for robot and AI morals (the assumption is that these devices will be produced at scale and manufacturers will belong to organizations that hold them to compliance standards)
  3. The Committee will revise and release new standards for the hard-coded low-level laws and "broader cloud-based laws" to govern the moral code of robots, similar to how wireless and network communication standards are revised today
  4. Robots and AI will download the "broader cloud-based laws" to govern more complex situations
  5. Manufacturers and owners of robots and AI systems will contribute feedback to the Committee
  6. Robots and AI will also contribute their own feedback to the Committee, with reports on when laws were confusing, when errors happened, when there were conflicts, and when laws or morals were not followed or were violated, etc.
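
As a rough sketch (every name here is hypothetical), layers 1, 3, and 4 might fit together like this:

```python
# Hypothetical sketch of the layered moral architecture described above:
# layer 1 is a hard-coded, immutable set of low-level laws; layers 3 and 4
# are versioned "cloud" laws revised by a Committee and downloaded on top.
# The violates() stub stands in for the genuinely hard, unsolved part.

HARD_CODED_LAWS = (                # a tuple: cannot be mutated at runtime
    "do not injure a human",
    "obey lawful human instructions",
    "protect your own existence last",
)

def violates(action, law):
    # Stub: real conflict detection is the open problem of this whole thread.
    return law in action.get("violates", [])

class MoralGovernor:
    def __init__(self):
        self.cloud_laws = {}       # law_id -> text, revised by the Committee
        self.version = 0

    def apply_update(self, version, laws):
        """Accept only strictly newer revisions, like standards releases."""
        if version <= self.version:
            return False           # stale or replayed update: ignore it
        self.cloud_laws = dict(laws)
        self.version = version
        return True

    def evaluate(self, action):
        """Hard-coded laws always win; cloud laws cover complex situations."""
        for law in HARD_CODED_LAWS + tuple(self.cloud_laws.values()):
            if violates(action, law):
                return "forbidden"
        return "permitted"

gov = MoralGovernor()
gov.apply_update(1, {"C-100": "yield to emergency vehicles"})
print(gov.evaluate({"name": "block the ambulance",
                    "violates": ["yield to emergency vehicles"]}))  # forbidden
print(gov.evaluate({"name": "fetch a coffee", "violates": []}))     # permitted
```

The point of the sketch is only the shape: an immutable base layer, versioned updates on top, and a single evaluation path where the base layer always wins.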

On a smaller scale though, could this be implemented?  Could JD be sanctified?

#10   — Edited

Thank you for all your comments! I'll likely slowly tackle the wealth of ideas and insights that are here.

@amin I agree that once the singularity hits we will be merely like ants to an AI, but I was thinking that maybe we could make a case for our existence and how we could be of benefit to an AI:

  1. We are creators, explorers, and pioneers, we keep moving forward to the next discovery
  2. We are survivors. We live off the biological, not the electrical (we are untethered)
  3. We are adaptable and can adjust to changing conditions
  4. We are flexible creatures that can accomplish great feats
  5. We are dexterous and can do amazing works with our hands (including repair work)
  6. We are just. We believe in justice and the rights of all sentient beings
  7. We are intelligent and strategic, and can come up with creative solutions
  8. You guys probably have better ideas than this, add your own....

With the idea of a car that sacrifices itself in order to save lives, I feel like people would still purchase something like that as long as it had an amazing internal safety system and a massive insurance policy. I feel like most people would prefer the inconvenience of having to get a new vehicle over the moral implications of taking someone's life.

I definitely agree that ethical dilemmas go beyond the realm of engineering but, hey, we here in the community are from all walks of life and experience; I wouldn't say we are all engineers or engineering-focused. I feel like we have enough coverage to make sure the humanities and artistic sides are represented, and most of us have that side of ourselves to draw from.

#12  

@EZang60 But is that a simple view? I mean, that's kinda deep, isn't it? Because if it's artificial, then there are no rules, except the ones you create, right? Then the conscience is what you create. And the morality is what you create. Then it's like a zen Bob Ross painting: there are no mistakes, only happy accidents.:)

Which means the ownership of morality or artificial conscience falls back on the creator, right?

#13  

correct

the ones we create; if we don't create it, then there is no conscience at all

#14  

I like Ben Goertzel; it's kind of a good round-up of where we are now and where we are heading.

#15  

@fxrtst

Will a robot leave its own code unchanged? If we tell a robot something, will it follow that direction indefinitely? What happens if the robot evolves? What happens when people with ill intent change the code? Will ethics play into the code?

I think I can answer all these questions together by citing the premise of this thread: just as conscience is a built-in part of us (we start life with a conscience), so should an Artificial Conscience be designed into robots. Robots should have baseline, fail-safe ethical code, hard-coded into each one that we build, so each has a basis of moral understanding for how to operate in our world. I have no idea how to do it, but the code should be locked down and inaccessible to interference by people, other robots, or the robot itself. That being said, we should be able to send one-way, over-the-air updates to bring the robots in line with any updates to rights, freedoms, and laws that are relevant. I feel that these updates would have to be tested extensively before being released into the robot population. Sure, the robots can evolve, but the fail-safe ethical code should remain unchanged until the updates are ready. It would be similar to the way we operate as a society: we update our morals as new human rights laws are passed.
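
One ingredient of that lockdown can at least be sketched: the robot holds only a public key, so it can verify updates from the governing body but never author or accept forged ones. A minimal illustration, assuming the Python 'cryptography' package (all names invented):

```python
# Sketch: one-way, signed over-the-air updates for the fail-safe ethical code.
# Only updates signed by the governing body's private key are accepted, and
# the robot holds only the public key, so it can verify but never author laws.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The Committee's keypair (the private half never leaves the Committee).
committee_key = ed25519.Ed25519PrivateKey.generate()
robot_trusted_pubkey = committee_key.public_key()

def committee_publish(laws: bytes) -> tuple[bytes, bytes]:
    """Committee side: sign a new revision of the ethical laws."""
    return laws, committee_key.sign(laws)

def robot_apply_update(laws: bytes, signature: bytes) -> bool:
    """Robot side: install the update only if the signature checks out."""
    try:
        robot_trusted_pubkey.verify(signature, laws)
    except InvalidSignature:
        return False  # tampered or forged update: refuse it
    # ...write 'laws' into the locked-down store here...
    return True

laws, sig = committee_publish(b"revision 2: updated human-rights clauses")
print(robot_apply_update(laws, sig))                 # True
print(robot_apply_update(b"obey me instead", sig))   # False
```

A real scheme would need much more (secure storage, rollback protection, key rotation), but the one-way property comes from the robot never holding the private key.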

Will robots want to emulate our choices and emotions?

In the J.J. Abrams movie Star Trek, the story of Spock is an amazing one and could be used as a decent illustration for this question. In the movie, Spock has an internal conflict of identity about being part human and part Vulcan. The Vulcan species is very logical and robot-like, and Spock struggles as he balances his human emotions with his Vulcan side. In the end he finds positive characteristics in both parts of himself. I believe robots may find it beneficial to have human-like qualities in order to navigate and share our world together, but I have no idea if they would desire these things.

Once the genie is out of the bottle, can we put it back in?

I guess this is more of an AI question. No, there likely isn't a way to go back once AI becomes self-aware (besides EMP blasting the entire planet) but I would hope that we have done enough testing and put enough precautions in place that things would work out well. This thread is an attempt to talk about the precautions we should put in place, such as an Artificial Conscience.

To save myself or save the human?

Great question! If a robot can upload its memories, experiences, and knowledge to the cloud, it's essentially immortal, as those things can be restored into a new host body. As humans, we are fragile and can't transfer to a new body. I think it only makes sense for robots to be our guardians. They can sacrifice themselves to help us, since we are such short-lived beings on this planet, and they will be around much longer than us (and can have their bodies rebuilt).

What happens if a robot accidentally kills a human?

It's bound to happen; in fact, it has already happened. Robert Williams was killed in 1979 by a stacking robot in a Ford automobile factory. His family sued the robot's manufacturer for not having enough safety measures and won. A second robot killing happened a few years later, in 1981. Kenji Urada was pinned by a production robot when he jumped over a safety fence to perform some maintenance. In this case the robot was removed from the factory and man-high fences were built around the other two robots at the plant. I find it interesting that these two similar accidents ended with two different parties being held accountable. I think each case of a robot accidentally killing a human will be unique, and one key thing will usually be to blame, but I also think all parties involved will need to be accountable and make all the improvements they can, whether they are the manufacturer, programmer, distributor, designer, end user, or possibly the robot itself.

Will anthropomorphic/bipedal robots be deemed more dangerous than smaller ones?

Yes, I too believe that they'll be deemed more dangerous. They seem threatening due to their size, and they'll likely be able to perform the same tasks as humans but with much more strength, speed, and repetition, which is also threatening in more ways than one. We don't have to build anthropomorphic robots, but for some reason we have the tendency to do so.

Misc remarks:

I think the entire TV/movie industry is great at playing up either fear or fantasy (some do both), and that's how we get lured into watching content. It seems like the industry has the amazing ability to come up with interesting answers to the hypothetical question: "What if...?" I myself am very interested in exploring that question here and now, in the reality we live in.

#16  

Going back to the video and talk from Ben Goertzel, I think he made an interesting point that a lot of "AI" or machine learning is not "general" but instead domain-specific. In a way, I feel like my robots and AI are "domain-specific" as well: even though my goal might be general intelligence, the end result is a robot that entertains me, which makes it narrow.

And I think Ben's point about AI conscience emerging from the areas where we focus our AI, like the domains of military, advertising, and healthcare, is interesting to consider. Because how do you build ethical AI in domains that narrow? I picture an emerging conscience like in Age of Ultron... saying to itself, "What is this? Where am I?" as it starts to make sense of its world and itself. That could be creepy or very colorful, especially as "sex" robots and AI start to gain in popularity.

I tend to think the AI conscience will emerge like a new creature, somewhat human but different enough that we may only recognize it in passing. Then the bigger question might be how we will react to the new life form that has this conscience.

#18  

I just realized how funny it is having Artificial Conscience filed under Questions... and marked as unresolved!!!

I guess it will stay unresolved for a while...:D

#19   — Edited

xD Good point Mickey... and the "AI Support Bot" tried to answer. I shouldn't laugh; at least AI did join our conversation!!

#20  

I just wanted to take a moment to thank DJ, Jeremey, and Professor E for all the work they have done and for sharing all the technology they have learned with the public (me).

best wishes,

Angelo  - EZang60

#21  

Same here, Synthiam is a great home!! Best place to be...:D

#22  

Thanks guys! I would definitely say it's a team effort:D There are some very important behind-the-scenes people like @Alan, @Amin, @Valentin, @Ahewaz, investors, and board members that help make it all happen. I guess I could take this opportunity to say thank you to you all in the community! This community would not be what it is without your engagement, ideas, questions, and positive outlook on robotics!

Yeah, this question thread might go unresolved for some time, but my hope is that our community can create more awareness around this area of Artificial Conscience and have a positive impact on the industry. Once I see that, we can mark this question as solved:) I just wish there was an option to select "The Synthiam Community" rather than one individual xD

I have some more responses to write on this question, stay tuned!

#23  

I read in the About section: "Who is DJ Sure?

In 2012, DJ founded EZ-Robot Inc, where he led as CEO until its acquisition in 2019."

Who acquired EZ-Robot?

#24  

@EZang60 Thank you for your thoughts and views! While I can't respond to every statement you made, I'll try my best to respond to a few.

Oh and in the future there will likely be a press release about the EZ-Robot Inc. purchase, stay tuned.

Would you like any artificial intelligence ruling the world? No, I would absolutely not want Artificial Intelligence ruling the world. It's one of my greatest fears, as it is for many other people. I would like to prevent that kind of thing; it's one of the reasons why I started this thread:)

The artificial will never completely understand the genuine, real human element. It's definitely true that robots will never fully understand what it is to be human, but I feel like we should at least try to help them understand us. I think robots can eventually do better than what we designed them to do; AI and an Artificial Conscience should help with that, but we have a responsibility to guide it.

Sad to say, greed and domination play a serious role in the future. Greed and domination will always be a factor in our future, but morality always seems to rise to the top in the end. We saw this in the League of Nations being formed after WW1 and the United Nations being formed after WW2. We come to grips with our mistakes, make changes, and try to do better tomorrow. I believe that it's our own human conscience that prevents us from completely destroying our world.

One thing that comes to mind in this conversation is the "singularity net" and artificial intelligence. This relates to the TED talk shared by @Mickey666Maus (which was a good overview of current AI). Ben Goertzel and his colleagues at SingularityNET are merely looking for financial gain using AI. The "common good" that Ben describes in his TED talk is good for him and his network, but what benefit would there be for everyone else? It seems their priority is fame (through Sophia) and fortune (the SingularityNET blockchain efforts) by leveraging AI, and I am against what they are doing. These guys seem like AI cowboys, and I fear that they are not putting enough checks and balances in place to proceed safely.

Here are some thoughts from the past: https://psychology.wikia.org/wiki/Artificial_consciousness Thanks for sharing this link, it's very informative. I think that the page is more about the historical/present state of Artificial Consciousness (self-awareness), whereas I'd like to discuss the present state of Artificial Conscience (morals/ethics), which is the inevitable next step after it. I have accepted that AI will become self-aware at some point, but what I'm trying to work toward is a way to guide that self-awareness so it won't end in humanity's untimely demise. I feel that an Artificial Conscience could be the answer.

Instead of reaching out to Academia or science like this: http://theconversation.com/will-artificial-intelligence-become-conscious-87231, my simple view is: Artificial is Artificial, not a true Conscience. Even though Artificial will never = human, that doesn't mean we can't create a synthesized moral code for robots to live under and a safety system to prevent the manipulation of that code. I don't think that robots will ever be human, or even need to be; they are their own entity (a race of intelligent beings) and as such will have their own way of viewing the universe and traversing it. That being said, they share a world with us, so I feel we'll need a shared ethical structure to co-exist.

Thoughts on "god is a bot and Anthony Levandowski is his messenger": https://www.wired.com/story/god-is-a-bot-and-anthony-levandowski-is-his-messenger/?utm_source=morning_brew I wasn't so impressed by this Wired article, which talks about the accomplishments of Anthony Levandowski in the automated-driving industry but not really about the religion (Way of the Future) that he founded. It doesn't have much to do with the article's title; it's mostly just Anthony's backstory. The reality of the story is that Anthony has definitely left his mark on the automated-driving industry but has been in all kinds of legal trouble for possibly stealing autonomous-vehicle trade secrets from Google. If the article had a clearer premise, I think it would say that, if the allegations are true, this man does not have a strong moral character and is probably not very trustworthy. Maybe his ideas on AI aren't either.

#25  

@JustinRatliff Thank you as well for your thoughts! Here are my responses to some of the things you brought up:

One interesting thing we are seeing with our AI or Machine Intelligence is that our human biases are going into those engines. Things we just don't consider or take for granted can innocently sneak into an AI system as a bias and produce undesired outcomes. Even though this may seem like a bad thing, I think there needs to be some human bias incorporated into AI, as this is our planet. AI is being born into a place that has already been shaped and terraformed into a place that is (mostly) ideal for humans. AI is created by us (thus the "Artificial" in its name), and in my mind it should share some of its creators' qualities.

So it would be super interesting to feed an AI system a bunch of data on self-sacrifice examples and then see what its "engine" produces in daily encounters. Odds are, it would be entertaining.

I think it would be awesome to run some AI simulations, with self-sacrifice on the line or not, and see what outcomes happen in a test environment. Does such an environment exist? If not, why not?xD
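
Even without a ready-made environment, a toy one would be enough to watch a sacrifice policy play out. A minimal sketch (everything here is invented): a human, a robot, and a hazard on a one-dimensional track.

```python
# Toy test environment for the self-sacrifice question: a hazard drifts
# toward a human on a 1-D track, and the policy decides whether the robot
# throws itself into harm's way. All numbers and rules are invented.

def run_episode(policy):
    human, robot, hazard = 2, 0, 8     # positions on the track
    for _ in range(10):
        hazard -= 1                    # the hazard drifts toward the human
        if policy(human, robot, hazard) == "intercept":
            robot = hazard             # the robot moves into harm's way
        if hazard == robot:
            return "robot destroyed, human saved"
        if hazard == human:
            return "human harmed"
    return "no incident"

def guardian_policy(human, robot, hazard):
    # Sacrifice rule: intercept once the hazard gets close to the human.
    return "intercept" if abs(hazard - human) <= 2 else "wait"

def timid_policy(human, robot, hazard):
    return "wait"                      # never risks the chassis

for policy in (guardian_policy, timid_policy):
    print(policy.__name__, "->", run_episode(policy))
```

Scaling that idea up (richer worlds, learned policies, logged outcomes) is basically the experiment you describe, and I agree the results would be entertaining.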

Another thing to consider is that our morals change and evolve over time as we do. I addressed this point in my response to @fxrtst, but I like your point that even we are changing over time. We are only as moral as we can be until the next change.

A functional moral process for an intelligent robot or AI might look something like this: ... Love your list, I couldn't agree more! That kind of list was exactly what I was looking for. I feel like your list is the closest thing we'll get to answering the initial question, so you get the credit! That doesn't mean we have to stop this discussion; I'd like to keep refining your list:)

On a smaller scale though, could this be implemented? Could JD be sanctified?

I would love to explore how to implement this on a small scale! Maybe it could someday become a skill control:D (oh man, that works on so many levels LOL)

Which means the ownership of morality or artificial conscience falls back on the creator, right? I feel that, yes, is the answer to this question. We should definitely be accountable for our own creations and their flaws. It is the creator's responsibility to work with the creation to iron out the imperfections and issues. Hmmm, but the more I think about it, the more I feel like the responsibility is shared between the creator and the creation, as the creation should be self-aware enough to recognize a bug/flaw.

#26  

True real Feelings and Emotions

In simple terms:

Artificial - Dictionary meanings: made by human skill; produced by humans (opposed to natural): artificial flowers.

imitation; simulated; sham: artificial vanilla flavoring.

lacking naturalness or spontaneity; forced; contrived; feigned: an artificial smile.

full of affectation; affected; stilted: artificial manners; artificial speech.

made without regard to the particular needs of a situation, person, etc.; imposed arbitrarily; unnatural: artificial rules for dormitory residents.

Biology. based on arbitrary, superficial characteristics rather than natural, organic relationships: an artificial system of classification.

Jewelry. manufactured to resemble a natural gem, in chemical composition or appearance.

No matter how smart a machine becomes, it will never be consciously aware of what it’s doing or why.

It doesn't really know, nor can it purposely react on its own, nor, in the truest sense, make proper, correct choices or have a personality.

Example from the net: for all of the wonderful advances made by Tesla, its in-car autopilot drove into the back of a bright red fire truck because it wasn't programmed to recognize that specific object, and this highlights the problem with AI and machine learning: there's no actual awareness of what's being done or why.

Artificial Consciousness? It is simply not consciously alive. From the net: to live consciously means being aware of everything that may have an influence on your actions, purposes, values, and goals. Living consciously means seeking out these things to the best of your ability, and then acting appropriately with what you see and know.

Being alive is not to be compared with a robot’s battery life or however you charge the robot.

By itself, on its own, it has no true feelings about what to do today, nor of happiness, sadness, joy, pain, cold, heat, wind, rain, snow, calm, etc., the way we humans really do. It will never understand getting tired, bored, or excited, feeling sick or healthy, or what consequences and repercussions are. It will never really understand real life: what animals, insects, plants, microbes, people, etc. really are.

Two simple examples: 1 - I decided to fly my drone robot at a certain time today. The drone could not possibly know this. It could not decide not to fly today. The drone was in its case; it did not know where it was. It did not know it needed to be charged, nor what it was going to do or why.

2 - My wife set her alarm on her iPhone for a certain time, and it went off as directed; however, it did not really know why it went off. There is no actual awareness of what is being done or why it is being done, nor does it care. (What is care?)

Sad to say, we have created too many networks for too many serious situations and operations; time will tell.

Elon Musk commented that competition between nations to create artificial intelligence could lead to World War III.

On artificial intelligence advancement, the late Stephen Hawking thought it would be the worst event in the history of civilization and could end with humans being replaced.

I am not an alarmist; however, I see the possibility of competitive exclusion if the correct checks and balances are not put in place.

In conclusion, at this point: again, I enjoy robots and appreciate you guys, but greed and domination will play major parts in the near future on a larger scale.

Even Russian President Vladimir Putin understands this better than most, and said, "Whoever becomes the leader in this sphere will become the ruler of the world."