Canada
Asked
Resolved by JustinRatliff!

Artificial Conscience

I’m not sure if we’ll be able to solve this question, but if anyone could get close, I feel it’d be you guys here in the community.

With all the research/work being done on Artificial Intelligence, do you think there should be just as much effort put into Artificial Conscience?

I know there’s some work being done with autonomous vehicles over at http://moralmachine.mit.edu, but I don’t think enough is being done in this area. I feel like there should be more Jiminy Cricket-style ethical engines being built.

One item that I don’t think is talked about much is the idea of self-sacrifice. Robots are, in a way, immortal: if their body is damaged it can be rebuilt, and their experiences and life can live on in another electromechanical host (if backed up to the cloud at a regular rate). Our bodies are kind of a one-shot deal. Being effectively immortal, robots should, I think, sacrifice their chassis at all costs to save us vulnerable humans.

This kind of question plays into Asimov’s three laws a bit.

I’m super curious to hear what you guys think about this.


Related Hardware: EZ-B IoTiny

#9  

Another thing to consider is that our morals change and evolve over time, as we do.

So... what's my point? I think what I'm heading towards is that a robot can probably only be about as moral as we humans are at any given point in time. Because we are learning, changing, and evolving (hopefully for the better), and if our robots and AI equal or exceed our mental abilities, they should be able to contribute to that change and evolution.

A functional moral process for an intelligent robot or AI might look something like this (a rough code sketch follows the list):

  1. A hard-coded set of basic low-level laws (think the Three Laws of Robotics, the Ten Commandments, or E.T.'s last commandment: "Be good")
  2. A Committee or organization to set governing rules for robot and AI morals (the assumption is that these devices will be produced at scale, and manufacturers will belong to organizations that govern them for compliance standards)
  3. The Committee will revise and release new standards for the hard-coded low-level laws and "broader cloud-based laws" to govern the moral code of robots, similar to how wireless and network communication standards are revised today
  4. Robots and AI will download "broader cloud-based laws" to govern more complex situations
  5. Manufacturers and owners of robots and AI systems will contribute feedback to the Committee
  6. Robots and AI will also contribute their own feedback to the Committee, with reports on when laws were confusing, when errors happened, when conflicts arose, and when laws or morals were not followed or were violated, etc.
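
Just to make the shape of that concrete, here's a minimal Python sketch of steps 1-6. Everything in it (CORE_LAWS, MoralGovernor, the method names) is a hypothetical illustration of the layered idea, not any real standard or API:

```python
# Toy sketch of the layered "moral process" above. All names are hypothetical.

from dataclasses import dataclass, field

# Step 1: hard-coded low-level laws. A tuple is immutable at runtime,
# mirroring the idea that these live in locked-down firmware.
CORE_LAWS = (
    "Do not harm a human, or allow one to come to harm through inaction.",
    "Obey humans unless that conflicts with the law above.",
    "Protect your own existence unless that conflicts with the laws above.",
)

@dataclass
class MoralGovernor:
    cloud_laws: dict = field(default_factory=dict)    # steps 3-4: revisable "broader cloud-based laws"
    feedback_log: list = field(default_factory=list)  # step 6: reports back to the Committee

    def adopt_committee_release(self, version, laws):
        """Steps 3-4: swap in a new Committee release of the broader laws."""
        self.cloud_laws = dict(laws)
        self.feedback_log.append("adopted release " + version)

    def report_conflict(self, description):
        """Step 6: record confusing or conflicting situations for the Committee."""
        self.feedback_log.append("conflict: " + description)

governor = MoralGovernor()
governor.adopt_committee_release("2024.1", {"crosswalks": "always yield to pedestrians"})
governor.report_conflict("owner's order conflicted with core law 1")
```

The key design point is that the core laws are code (baked into the firmware) while the broader laws are data, so the Committee can revise one without touching the other.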

On a smaller scale though, could this be implemented?  Could JD be sanctified?

PRO
Canada
#10   — Edited

Thank you for all your comments! I'll likely slowly tackle the wealth of ideas and insights that are here.

@amin I agree that once the singularity hits we will be merely like ants to an AI, but I was thinking that maybe we could make a case for our existence and how we could be of benefit to an AI:

  1. We are creators, explorers, and pioneers, we keep moving forward to the next discovery
  2. We are survivors. We live off the biological, not the electrical (we are untethered)
  3. We are adaptable and can adjust to changing conditions
  4. We are flexible creatures that can accomplish great feats
  5. We are dexterous and can do amazing works with our hands (including repair work)
  6. We are just. We believe in justice and the rights of all sentient beings
  7. We are intelligent and strategic, and can come up with creative solutions
  8. You guys probably have better ideas than this, add your own....

With the idea of a car that sacrifices itself in order to save lives, I feel that people would still purchase something like that as long as it had an amazing internal safety system and a massive insurance policy. I feel like most people would prefer the inconvenience of having to get a new vehicle over the moral implications of taking someone's life.

I definitely agree that ethical dilemmas go beyond the realm of engineering, but hey, we here in the community are from all walks of life and experience; I wouldn't say we are all engineers or engineering-focused. I feel like we have enough coverage to make sure the humanities and artistic sides are represented. Most of us have that side of ourselves to draw from anyway.

#12  

@EZang60 But is that a simple view? I mean, that's kinda deep, isn't it? Because if it's artificial, then there are no rules except the ones you create, right? Then the conscience is what you create. And the morality is what you create. Then it's like a zen Bob Ross painting: there are no mistakes, only happy accidents. :)

Which means the ownership of morality or artificial conscience falls back on the creator, right?

PRO
USA
#13  

correct

the ones we create; if we don't create it, then no conscience at all

#14  

I like Ben Goertzel; his talk is a good round-up of where we are now and where we are heading.

PRO
Canada
#15  

@fxrtst

Will a robot itself leave its own code unchanged? If we tell a robot something, will it follow that direction indefinitely? What happens if the robot evolves? What happens when people with ill intent change the code? Will ethics play into the code?

I think I can answer all these questions together by citing the premise of this thread: just as conscience is a built-in part of us (we start life with a conscience), so should an Artificial Conscience be designed into robots. Robots should start with fail-safe ethical code that is hard-coded into each one we build, so each has a basic moral understanding of how to operate in our world. I have no idea how to do it, but the code should be locked down and inaccessible to interference by people, other robots, or the robot itself. That being said, we should be able to send one-way, over-the-air updates to bring the robots in line with any relevant changes to rights, freedoms, and laws. I feel these updates would have to be tested extensively before being released into the robot population. Sure, the robots can evolve, but the fail-safe ethical code should remain unchanged until the updates are ready. It would be similar to the way we operate as a society: we update our morals as new human rights laws are passed.
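
For the "locked down but updatable one-way" part, one plausible mechanism is code signing: the robot carries only the committee's public key, so it can verify updates but can never sign or alter them itself. Here's a minimal sketch, assuming the third-party `cryptography` package is available; `install_ethics_rules` and the key handling are hypothetical:

```python
# Sketch of verifying a one-way, over-the-air ethics update.
# The robot holds only a public key baked into read-only storage,
# so neither people nor the robot itself can forge an update.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def install_ethics_rules(payload: bytes) -> None:
    # Hypothetical: in a real robot this would write to protected storage.
    print("installing", len(payload), "bytes of ethics rules")

def apply_ethics_update(committee_key: Ed25519PublicKey,
                        payload: bytes, signature: bytes) -> bool:
    """Install an ethics update only if the committee's signature verifies."""
    try:
        committee_key.verify(signature, payload)
    except InvalidSignature:
        return False  # reject tampered or unauthorized updates
    install_ethics_rules(payload)
    return True
```

The extensive testing would then happen on the committee's side, before an update is ever signed and broadcast.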

Will robots want to emulate our choices and emotions?

In the J.J. Abrams movie Star Trek, the story of Spock is an amazing one and could be used as a decent illustration for this question. In the movie, Spock has an internal conflict of identity about being part human and part Vulcan. The Vulcan species is very logical and robot-like, and Spock struggles as he balances his human emotions with his Vulcan side. In the end he finds positive characteristics in both parts of himself. I believe robots may find it beneficial to have human-like qualities in order to navigate and share our world together, but I have no idea if they would desire these things.

Once the genie is out of the bottle, can we put it back in?

I guess this is more of an AI question. No, there likely isn't a way to go back once AI becomes self-aware (short of EMP-blasting the entire planet), but I would hope that we'd have done enough testing and put enough precautions in place that things would work out well. This thread is an attempt to talk about the precautions we should put in place, such as an Artificial Conscience.

To save myself or save the human?

Great question! If a robot can upload its memories, experiences, and knowledge to the cloud, it's essentially immortal: those things can be restored into a new host body. As humans, we are fragile and can't transfer to a new body. I think it only makes sense for robots to be our guardians. They can sacrifice themselves to help us, since we are such short-lived beings on this planet and they will be around much longer than us (and can have rebuilt bodies).
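
That guardian idea could even be stated as a blunt, hard-coded priority rule. A purely illustrative toy in Python (the function and situation flags are hypothetical):

```python
# Toy priority rule for the "save myself or save the human?" dilemma:
# a backed-up robot always ranks a human's safety above its own chassis.

def choose_action(human_at_risk: bool, chassis_at_risk: bool,
                  state_backed_up: bool) -> str:
    if human_at_risk:
        # Humans can't be restored from backup; the chassis can be rebuilt.
        return "sacrifice chassis to protect the human"
    if chassis_at_risk and not state_backed_up:
        return "upload state to the cloud, then avoid damage if possible"
    return "proceed normally"

print(choose_action(human_at_risk=True, chassis_at_risk=True, state_backed_up=False))
```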

What happens if a robot accidentally kills a human?

It's bound to happen; in fact, it has already happened. Robert Williams was killed in 1979 by a stacking robot in a Ford automobile factory. His family sued the robot's manufacturer for not having enough safety measures, and won. A second robot killing happened a few years later, in 1981: Kenji Urada was pinned by a production robot when he jumped over a safety fence to perform some maintenance. In this case the robot was removed from the factory and man-high fences were built around the other two robots at the plant. I find it interesting that these two similar accidents led to two different parties being held accountable. I think each case of a robot accidentally killing a human will be unique, and one key factor may be to blame, but all parties involved will need to be accountable and make all the improvements they can, whether they are the manufacturer, programmer, distributor, designer, end user, or possibly the robot itself.

Will Anthropomorphic/Bipedal robots be deemed more dangerous than smaller ones?

Yes, I too believe that they'll be deemed more dangerous. They seem threatening due to their size, and they'll likely be able to perform the same tasks as humans but with much more strength, speed, and repetition, which is threatening in more ways than one. We don't have to build anthropomorphic robots, but for some reason we have the tendency to do so.

Misc remarks:

I think the entire TV/movie industry is great at playing up either fear or fantasy (some do both), and that's how we get lured into watching content. The industry seems to have an amazing ability to come up with interesting answers to the hypothetical question: "What if.....?" I myself am very interested in exploring that question here and now, in the reality we live in.

#16  

Going back to the video and talk from Ben Goertzel, I think he made an interesting point that a lot of "AI" or machine learning is not "general" but instead domain-specific. In a way, I feel like my robots and AI are "domain specific" as well: my goal might be general intelligence, but the end result is a robot that entertains me, which makes it narrow.

And I think Ben's point about AI conscience emerging from the areas where we focus our AI, like the domains of military, advertising, and healthcare, is interesting to consider. Because how do you handle ethics for narrow AI in those domains? I picture emerging conscience like Age of Ultron, saying to itself, "What is this? Where am I?" as it starts to make sense of its world and itself. That could be creepy or very colorful, especially as "sex" robots and AI start to gain in popularity.

I tend to think the AI conscience will emerge like a new creature, somewhat human but different enough that we may only recognize it in passing. Then the bigger question might be: how will we react to the new life form that has this conscience?