Resolved by JustinRatliff!

Artificial Conscience

I’m not sure if we’ll be able to solve this question but if anyone could get close I feel it’d be you guys here in the community.

With all the research/work being done on Artificial Intelligence, do you think there should be just as much put into Artificial Conscience?

I know there’s some work being done with autonomous vehicles over at http://moralmachine.mit.edu but I don’t think enough is being done in this area. I feel like there should be more Jiminy Cricket-style ethical engines being built.

One item that I don’t think is talked about much is the idea of self-sacrifice. Robots are, in a sense, expendable: if their body is damaged it can be rebuilt, and their experiences and life can live on in another electromechanical host (if backed up to the cloud at a regular rate). Our bodies are kind of a one-shot deal. Because they are recoverable, I think robots should sacrifice their chassis at all costs to save us vulnerable humans.

This kind of question plays into Asimov’s three laws a bit.

I’m super curious to hear about what you guys think about this.


Related Hardware EZ-Robot EZ-B IoTiny
#21  
Same here, Synthiam is a great home!!
Best place to be...:D
Synthiam
#22  
Thanks guys! I would definitely say it's a team effort :D There are some very important behind-the-scenes people like @Alan, @Amin, @Valentin, @Ahewaz, investors, and board members that help make it all happen. I guess I could take this opportunity to say thank you to all of you in the community! This community would not be what it is without your engagement, ideas, questions and positive outlook on robotics!

Yeah, this question thread might go unresolved for some time, but my hope is that our community can create more awareness around this area of Artificial Conscience and have a positive impact on the industry. Once I see that, we can mark this question as solved :) I just wish there was an option to select "The Synthiam Community" rather than one individual xD

I have some more responses to write on this question, stay tuned!
#23  
I read in the about section: "Who is DJ Sures? In 2012, DJ founded EZ-Robot Inc, where he led as CEO until its acquisition in 2019."

Who acquired EZ-Robot?
Synthiam
#24  
@EZang60 Thank you for your thoughts and views! While I can't respond to every statement you made, I'll try my best to respond to a few.

Oh, and in the future there will likely be a press release about the EZ-Robot Inc. purchase; stay tuned.

Would you like any artificial intelligence ruling the world?

No, I would absolutely not want Artificial Intelligence ruling the world. It's one of my greatest fears, as well as many other people's. I would like to prevent that kind of thing; it's one of the reasons why I started this thread :)

The artificial will never completely understand the genuine, real human element

It's definitely true that robots will never fully understand what it is to be human, but I feel like we should at least try to help them understand us. I think robots can eventually do better than what we designed them to do, and AI and Artificial Conscience should help with that, but we have a responsibility to guide it.

Sad to say, greed and domination play a serious role in the future

Greed and domination will always be a factor in our future, but morality always seems to rise to the top in the end. We saw this in the League of Nations being formed after WW1 and the United Nations formed after WW2. We come to grips with our mistakes, make changes and try to do better tomorrow. I believe that it's our own human conscience that prevents us from completely destroying our world.

One thing that comes to mind in this conversation is the "singularity net" and artificial intelligence

This relates to the TED talk shared by @Mickey666Maus (which was a good overview of current AI). Ben Goertzel and his colleagues at SingularityNET are merely looking for financial gain using AI. The "common good" that Ben describes in his TED talk is good for him and his network, but what benefit would there be for everyone else? It seems their priority is fame (through Sophia) and fortune (the SingularityNET blockchain efforts) by leveraging AI, and I am against what they are doing. These guys seem like AI cowboys, and I fear that they are not putting enough checks and balances in place to proceed safely.

Here is some thought from the past https://psychology.wikia.org/wiki/Artificial_consciousness

Thanks for sharing this link; it's very informative. I think that the page is more about the historical/present state of Artificial Consciousness (self-awareness), whereas I'd like to discuss the present state of Artificial Conscience (moral/ethical), which is the inevitable next step after it. I have accepted that AI will become self-aware at some point, but what I'm trying to work toward is a way to guide that self-awareness that won't end in humanity's untimely demise. I feel that an Artificial Conscience could be the answer.

Instead of reaching out to Academia, or science like this: http://theconversation.com/will-artificial-intelligence-become-conscious-87231, My simple view is: Artificial is Artificial not a true Conscience.

Even though artificial will never equal human, that doesn't mean we can't create a synthesized moral code for robots to live under and a safety system to prevent the manipulation of that code. I don't think that robots will ever be human, or even need to be; they are their own entity (a race of intelligent beings) and as such will have their own way of viewing the universe and traversing it. That being said, they share a world with us, so I feel we'll need a shared ethical structure to co-exist.

thoughts on: god is a bot and Anthony Levandowski is his messenger, read: https://www.wired.com/story/god-is-a-bot-and-anthony-levandowski-is-his-messenger/?utm_source=morning_brew

I wasn't so impressed by this Wired article, which talks about the accomplishments of Anthony Levandowski in the automated driving industry but not really about the religion (Way of the Future) that he founded. It doesn't have much to do with the article's title; it's just Anthony's backstory. The reality of the story is that Anthony has definitely left his mark on the automated driving industry but has been in all kinds of legal trouble for possibly stealing autonomous vehicle trade secrets from Google. If the article had a clearer premise, I think it would say that, if the allegations are true, this man does not have a strong moral character and is probably not very trustworthy. Maybe his ideas on AI aren't either.
Synthiam
#25  
@JustinRatliff Thank you as well for your thoughts! Here are my responses to some of the things you brought up:

One interesting thing we are seeing with our AI or Machine Intelligence is our human biases are going into those engines. Things we just don't consider or take for granted can innocently sneak into an AI system as a bias and produce undesired outcomes.

Even though this may seem like a bad thing, I think there needs to be some human bias incorporated into AI, as this is our planet. AI is being born into a world that has already been shaped and terraformed to be (mostly) ideal for humans. AI is created by us (thus the Artificial in its name) and in my mind it should share some of its creators' qualities.

So it would be super interesting to feed an AI system a bunch of data on self-sacrifice examples and then see what its "engine" produces in daily encounters. Odds are, it would be entertaining.

I think it would be awesome to do some AI simulation, in terms of self-sacrifice or not, and see what outcomes would happen in a test environment. Does such an environment exist? If not, why not? xD
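As a thought experiment, a toy version of that kind of test environment only takes a few lines. Everything below is hypothetical: the scenario format, the cost weights, and the function name are my own invention for illustration, not an existing simulator or Synthiam API:

```python
# Hypothetical weights: a human life is valued far above a (rebuildable) chassis.
HUMAN_WEIGHT = 1000
CHASSIS_WEIGHT = 1

def choose_action(scenario):
    """Pick the action with the lowest expected moral cost.

    `scenario` maps each candidate action to a tuple:
    (probability a human is harmed, probability the chassis is destroyed).
    """
    def cost(outcome):
        p_human, p_chassis = outcome
        return p_human * HUMAN_WEIGHT + p_chassis * CHASSIS_WEIGHT
    return min(scenario, key=lambda action: cost(scenario[action]))

# A falling object will hit a bystander unless the robot shields them.
scenario = {
    "do_nothing":   (0.9, 0.0),   # human likely harmed, chassis safe
    "shield_human": (0.05, 0.95), # human almost surely safe, chassis lost
}
print(choose_action(scenario))  # -> shield_human
```

Because the chassis is treated as cheap (it can be rebuilt and restored from a cloud backup), the expected cost of shielding (about 51) beats doing nothing (900), so the toy agent always self-sacrifices, which is exactly the behavior argued for in the original question.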

Another thing to consider is our morals change and evolve with us over time as we do.

I addressed this point in my response to @fxrtst, but I like your point that even we are changing over time. We are only as moral as we can be until the next change.

A functional moral process for an intelligent robot or AI might look something like this: ...

Love your list, I couldn't agree more! That kind of list was exactly what I was looking for. I feel like your list is the closest thing we'll get to answering the initial question, so you get the credit! That doesn't mean we have to stop this discussion, though; I'd like to keep refining your list :)

On a smaller scale though, could this be implemented? Could JD be sanctified?

I would love to explore how to implement this on a small scale! Maybe it could someday become a behavior control :D (oh man, that works on so many levels LOL)
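For fun, here is a minimal sketch of what an Asimov-flavored "ethical filter" behavior control might look like. To be clear, the rule set, dictionary keys, and function names are all made up for illustration; this is not a real EZ-Builder/ARC API:

```python
# Hypothetical rules, loosely inspired by Asimov's three laws, checked in
# priority order so self-preservation never overrides human safety.
# A predicate returning True means the proposed action violates that rule.
RULES = [
    ("no_harm_to_humans", lambda a: a.get("harms_human", False)),
    ("obey_humans",       lambda a: a.get("disobeys_order", False)),
    # Self-preservation is waived when the sacrifice saves a human,
    # matching the self-sacrifice idea from the original question.
    ("protect_self",      lambda a: a.get("destroys_self", False)
                                    and not a.get("saves_human", False)),
]

def ethical_filter(action):
    """Return (allowed, violated_rule) for a proposed action dict."""
    for name, violated in RULES:
        if violated(action):
            return False, name
    return True, None

print(ethical_filter({"harms_human": True}))                        # blocked
print(ethical_filter({"destroys_self": True}))                      # blocked
print(ethical_filter({"destroys_self": True, "saves_human": True})) # allowed
```

In a real behavior control, the filter would sit between whatever decision engine proposes actions and the servos/motors that execute them, vetoing anything that fails the rule list.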

Which means the ownership of morality or artificial conscience falls back on the creator, right?

I feel that yes is the answer to this question. We should definitely be accountable for our own creations and their flaws. It is the responsibility of the creator to work with the creation to iron out the imperfections and issues. Hmmm, but the more I think about it, the more I feel like it's shared between the creator and the creation, as the creation should be self-aware enough to recognize a bug/flaw.
#26  
True real Feelings and Emotions

In simple terms:


Artificial - Dictionary meanings:
made by human skill; produced by humans (opposed to natural): artificial flowers.

imitation; simulated; sham: artificial vanilla flavoring.

lacking naturalness or spontaneity; forced; contrived; feigned: an artificial smile.

full of affectation; affected; stilted: artificial manners; artificial speech.

made without regard to the particular needs of a situation, person, etc.;
imposed arbitrarily; unnatural: artificial rules for dormitory residents.

Biology. based on arbitrary, superficial characteristics rather than natural, organic relationships: an artificial system of classification.

Jewelry. manufactured to resemble a natural gem, in chemical composition or appearance.


No matter how smart a machine becomes, it will never be consciously aware of what it’s doing or why.

It doesn’t really know, nor can it purposely react on its own, nor, in the truest sense, make proper choices or have a personality.

Example from the net: For all of the wonderful advances made by Tesla, its in-car Autopilot drove into the back of a bright red fire truck because it wasn’t programmed to recognize that specific object, and this highlights the problem with AI and machine learning: there’s no actual awareness of what’s being done or why.

Artificial Consciousness? It is simply not consciously alive.
From the net:
To live consciously means being aware of everything that may have an influence on your actions, purposes, values and goals.
Living consciously means seeking out these things to the best of your ability, and then acting appropriately with what you see and know.

Being alive is not to be compared with a robot’s battery life or however you charge the robot.

By itself, on its own, it has no true feelings about what to do today; no happiness, sadness, joy, pain, cold, heat, wind, rain, snow, calm, etc., the way we humans really do.
It will never understand getting tired, bored, or excited, or feeling sick or healthy, or what consequences or repercussions are, etc.
It will never really understand real life: what animals, insects, plants, microbes, people, etc. really are.

Two simple examples:
1 - I decided to fly my drone robot at a certain time today.
The drone robot could not possibly know this.
It could not decide not to fly today.
The drone robot was in its case; it did not know where it was.
The drone robot did not know it needed to be charged, nor what it was going to do or why.

2 - My wife set her alarm on her iPhone for a certain time. It went off as directed; however, it did not really know why it went off.
There is no actual awareness of what’s being done or why it is being done, nor does it care. (What is care?)

Sad to say, we have created too many networks for too many serious situations and operations; time will tell.

Elon Musk commented that competition between nations to create artificial intelligence could lead to World War III.

On artificial intelligence advancement, the late Stephen Hawking thought it would be the worst event in the history of civilization and could end with humans being replaced.

I am not an alarmist; however, I see the possibility of competitive exclusion if the correct checks and balances are not put in place.

In conclusion, at this point:
Again, I enjoy robots and appreciate you guys, but greed and domination will play major parts in the near future on a larger scale.

Even Russian President Vladimir Putin understands this better than most, and said, "Whoever becomes the leader in this sphere will become the ruler of the world."