
Artificial Conscience
I’m not sure we’ll be able to solve this question, but if anyone could get close, I feel it’d be you guys here in the community.
With all the research/work being done on Artificial Intelligence, do you think there should be just as much put into Artificial Conscience?
I know there’s some work being done with autonomous vehicles over at http://moralmachine.mit.edu, but I don’t think enough is being done in this area. I feel like there should be more Jiminy Cricket-style ethical engines being built.
One item that I don’t think is talked about much is the idea of self-sacrifice. Robots are, in a sense, expendable: if their body is damaged, it can be rebuilt, and their experiences and life can live on in another electromechanical host (if backed up to the cloud at a regular rate). Our bodies are kind of a one-shot deal. Being rebuildable, I think robots should sacrifice their chassis at all costs to save us vulnerable humans.
This kind of question plays into Asimov’s three laws a bit.
I’m super curious to hear what you guys think about this.
@JustinRatliff Thank you as well for your thoughts! Here are my responses to some of the things you brought up:
One interesting thing we are seeing with our AI or Machine Intelligence is that our human biases are going into those engines. Things we just don't consider, or take for granted, can innocently sneak into an AI system as a bias and produce undesired outcomes. Even though this may seem like a bad thing, I think there needs to be some human bias incorporated into AI, as this is our planet. AI is being born into a place that has already been shaped and terraformed into somewhere (mostly) ideal for humans. AI is created by us (thus the "Artificial" in its name), and in my mind it should share some of its creators' qualities.
So it would be super interesting to feed an AI system a bunch of data on self-sacrifice examples and then see what its "engine" produces in daily encounters. Odds are, it would be entertaining.
I think it would be awesome to run some AI simulations of self-sacrifice (or not) and see what outcomes would happen in a test environment. Does such an environment exist? If not, why not?
Another thing to consider is that our morals change and evolve over time as we do. I addressed this point in my response to @fxrtst, but I like your point that even we are changing over time. We are only as moral as we can be until the next change.
"A functional moral process for an intelligent robot or AI might look something like this: ..." Love your list, I couldn't agree more! That kind of list was exactly what I was looking for. I feel like your list is the closest thing we'll get to answering the initial question, so you get the credit! That doesn't mean we have to stop this discussion, though; I'd like to keep refining your list.
On a smaller scale though, could this be implemented? Could JD be sanctified?
I would love to explore how to implement this on a small scale! Maybe it could someday become a skill control.
(oh man, that works on so many levels LOL)
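For the small-scale implementation question above, here's one way a toy "ethical engine" could be sketched: a priority-ordered rule check, loosely in the spirit of Asimov's three laws, where protecting humans outranks self-preservation (so sacrificing the chassis is acceptable, since a backed-up robot can be restored to a new body). Everything here is hypothetical — the rules, dictionary keys, and `choose_action` function are illustrations, not any real robot or skill-control API.

```python
# Toy sketch of a priority-ordered "ethical engine" (all names hypothetical).
# Rules are evaluated in order, so protecting humans always outranks
# self-preservation; risking the chassis is allowed because a backed-up
# robot can be restored to another body.

def choose_action(situation, candidate_actions):
    """Return the first candidate action that passes every rule, or None."""
    rules = [  # highest priority first (loosely Asimov-inspired)
        lambda s, a: not a.get("harms_human", False),   # 1. never harm a human
        lambda s, a: not (s.get("human_in_danger")      # 2. don't stand by while
                          and a.get("ignores_human")),  #    a human is in danger
        lambda s, a: not a.get("disobeys_order", False),  # 3. obey orders
        # 4. self-preservation comes last, so it isn't even checked here:
        #    an action may risk the chassis and still be chosen.
    ]
    for action in candidate_actions:
        if all(rule(situation, action) for rule in rules):
            return action
    return None  # no acceptable action found

situation = {"human_in_danger": True}
actions = [
    {"name": "stay_safe", "ignores_human": True},     # protects chassis, fails rule 2
    {"name": "shield_human", "risks_chassis": True},  # sacrifices chassis, passes
]
print(choose_action(situation, actions)["name"])  # shield_human
```

The design choice worth noting is that ordering the rules, rather than weighting them, makes the priorities absolute: no amount of self-preservation can ever outweigh a human-safety rule.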
Which means the ownership of morality, or artificial conscience, falls back on the creator, right? I feel the answer to this question is yes. We should definitely be accountable for our own creations and their flaws; it is the creator's responsibility to work with the creation to iron out the imperfections and issues. Hmmm, but the more I think about it, the more I feel like it's shared between creator and creation, as the creation should be self-aware enough to recognize a bug/flaw.
True, Real Feelings and Emotions
In simple terms:
Artificial - dictionary meanings:
1. made by human skill; produced by humans (opposed to natural): artificial flowers.
2. imitation; simulated; sham: artificial vanilla flavoring.
3. lacking naturalness or spontaneity; forced; contrived; feigned: an artificial smile.
4. full of affectation; affected; stilted: artificial manners; artificial speech.
5. made without regard to the particular needs of a situation, person, etc.; imposed arbitrarily; unnatural: artificial rules for dormitory residents.
6. Biology. based on arbitrary, superficial characteristics rather than natural, organic relationships: an artificial system of classification.
7. Jewelry. manufactured to resemble a natural gem, in chemical composition or appearance.
No matter how smart a machine becomes, it will never be consciously aware of what it’s doing or why.
It doesn’t really know what it’s doing, can’t purposely react on its own, and can’t, in the truest sense, make proper choices or have a personality.
Example from the net: for all of the wonderful advances made by Tesla, its in-car autopilot drove into the back of a bright red fire truck because it wasn’t programmed to recognize that specific object. This highlights the problem with AI and machine learning: there’s no actual awareness of what’s being done or why.
Artificial Consciousness? It is simply not consciously alive. From the net: to live consciously means being aware of everything that may have an influence on your actions, purposes, values and goals. Living consciously means seeking out these things to the best of your ability, and then acting appropriately with what you see and know.
Being alive is not to be compared with a robot’s battery life, however the robot is charged.
By itself, on its own, a robot has no true feelings about what to do today; it cannot feel happy, sad, joy, pain, cold, heat, wind, rain, snow, calm, etc. the way we humans do. It will never understand getting tired, bored, or excited, or feeling sick or healthy, nor understand what consequences or repercussions are. It will never really understand real life: what animals, insects, plants, microbes, people, etc. really are.
Two simple examples: 1 - I decided to fly my drone robot at a certain time today. The drone robot could not possibly know this, and it could not decide not to fly today. It was in its case; it did not know where it was. It did not know it needed to be charged, nor what it was going to do or why.
2 - My wife set her alarm on her iPhone for a certain time, and it went off as directed; however, it did not really know why it went off. There is no actual awareness of what is being done or why it is being done, nor does it care. (What is care?)
Sad to say, we have created too many networks for too many serious situations and operations; time will tell.
Elon Musk commented that competition between nations to create artificial intelligence could lead to World War III.
On artificial intelligence advancement, the late Stephen Hawking thought it would be the worst event in the history of civilization and could end with humans being replaced.
I am not an alarmist; however, I see the possibility of competitive exclusion if the correct checks and balances are not put in place.
In conclusion, at this point: again, I enjoy robots and appreciate you guys, but greed and domination will play major parts in the near future on a larger scale.
Even Russian President Vladimir Putin understands this better than most, saying, "Whoever becomes the leader in this sphere will become the ruler of the world."