Peter Weyland and David

Started by bambi_burster, Mar 23, 2012, 09:38:56 AM


bambi_burster

I am interested in the idea that Peter Weyland has been frozen in stasis and placed on the Prometheus. It brings me to the conclusion that the Weyland company knows a hell of a lot more than it is letting on. This time, however, they are sending an expensive exploration crew rather than space truckers.
Now, if these Engineers introduced technology to our world or accelerated our development at a molecular level, imagine the possibilities for Weyland. His quest for greatness would be achieved. Perhaps David is programmed to collect information and examples of this technology at any cost, and the crew, again, is expendable. Perhaps I'm completely wrong, but the idea of this plot is exciting to me.

Game_Over_Man

#1
Fassbender and Theron pretty much intimated at WonderCon, during a post-interview interview, that David is not a good guy.

Despicable Dugong

#2
Quote
Weyland builds and successfully deploys thousands of Seventh Generation Davids into workplaces across the universe. Human acceptance of David 7 reaches an all-time high thanks to Weyland's highly classified emotional encoding technology; David 7 can accurately replicate most human emotions down to the tiniest nuance while consistently achieving all mission objectives.

https://www.weylandindustries.com/#/timeline

The David androids will be the ultimate company 'men.'

bambi_burster

#3
Interesting. Seems there is continuity with the films set later in the timeline regarding behavioural inhibitors or the Three Laws of Robotics. That certainly makes an android very dangerous.

Deuterium

#4
It might be appropriate to note that Asimov's original Laws of Robotics evolved somewhat over time. He added a Zeroth Law, which supersedes the original Three Laws: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Despite this axiomatisation of laws governing robots' relationship with humans and humanity, one can still conceive of scenarios in which the validity or consistency of the laws becomes problematic.

One example would be the following:

Given:  David is a morally "good" robot, and operates within the constraints set forth by the "Laws of Robotics".

Question:  What is David's course of action if he is put in a position where he must make a decision which may sacrifice the life of a single human (A), but potentially save the lives of two other humans (B & C)? To add more complexity to the scenario...what if David also knows that one of the other humans (B) is inherently "bad" and, if saved, might later commit violence against, or even murder, one or more other humans?

What does David do?

If he saves (A), then (B & C) die.

If he saves (B & C), then (A) dies...and there is a high probability that (B) will kill one or more humans.
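
To make the conflict concrete, here is a rough Java sketch. It is purely hypothetical; the names (ThreeLawsDilemma, RescueOption) and the harm counts are invented for this post. The point is simply that both options involve humans coming to harm through David's inaction, so the First Law by itself never picks a winner.

// Hypothetical sketch only: ThreeLawsDilemma and RescueOption are invented for this post.
import java.util.List;

public class ThreeLawsDilemma {

    // One option David could take, and how many humans die through his inaction if he takes it.
    record RescueOption(String description, int humansLostByInaction) {}

    // First Law check: an option is acceptable only if no human comes to harm through inaction.
    static boolean satisfiesFirstLaw(RescueOption option) {
        return option.humansLostByInaction() == 0;
    }

    public static void main(String[] args) {
        List<RescueOption> options = List.of(
            new RescueOption("Save A (B and C die)", 2),
            new RescueOption("Save B and C (A dies)", 1)
        );

        // Neither option satisfies the First Law outright, so the Laws alone
        // cannot decide; David is pushed into a judgement the Laws never defined.
        for (RescueOption option : options) {
            System.out.println(option.description()
                + " -> satisfies First Law? " + satisfiesFirstLaw(option));
        }
    }
}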

Eldritch

#5
He gets an InvalidActionException and freezes... just like any Java application out there  :P
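
Purely for the joke, something like this made-up Java snippet is what I'm picturing (InvalidActionException and resolveConflict are invented, obviously not real Weyland code): the conflict can't be resolved, the exception is never caught, and the poor android simply stops dead.

// Joke sketch only: InvalidActionException and resolveConflict are made up.
public class DavidRuntime {

    static class InvalidActionException extends RuntimeException {
        InvalidActionException(String message) { super(message); }
    }

    // Pretend conflict resolver: if every available action breaks the First Law,
    // there is nothing valid left to do.
    static void resolveConflict(boolean everyActionHarmsAHuman) {
        if (everyActionHarmsAHuman) {
            throw new InvalidActionException("No action satisfies the Laws of Robotics");
        }
    }

    public static void main(String[] args) {
        // The exception is never caught, so the main thread dies and David "freezes".
        resolveConflict(true);
    }
}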

Deuterium

#6
Quote from: Eldritch on Mar 23, 2012, 05:34:38 PM
He gets an InvalidActionException and freezes... just like any Java application out there  :P

;D

David gets the equivalent of the Blue Screen of Death.  I like it.

First Blood

#7
Quote from: Deuterium on Mar 23, 2012, 05:35:28 PM
Quote from: Eldritch on Mar 23, 2012, 05:34:38 PM
He gets an InvalidActionException and freezes... just like any Java application out there  :P

;D

David gets the equivalent of the Blue Screen of Death.  I like it.

http://www.youtube.com/watch?v=UUJBGjPeAdA#ws

Deuterium

#8
 ;D

aliennaire

#9
Quote from: bambi_burster on Mar 23, 2012, 09:38:56 AM
Perhaps David is programmed to collect information and examples of this technology at any cost, and the crew, again, is expendable. Perhaps I'm completely wrong, but the idea of this plot is exciting to me.
It would be a compelling twist if "at any cost" also meant that his majesty Peter Weyland's life is expendable as well, in David's eyes.

Deuterium, in your scenario, a robot running basic Three Laws firmware is bound to save human A, as the event of his sacrifice comes first. Also, a robot is not allowed to make judgements about people's behaviour, even if some of them will turn out to be evil in nature, because that only happens later. However, the Zeroth Law could override the robot's decision and give him some sort of free will, I guess.

Quote from: First Blood on Mar 23, 2012, 05:44:56 PM
http://www.youtube.com/watch?v=UUJBGjPeAdA#ws
Haha! He's gone undeniably mad! ;D

Deuterium

#10
Quote from: aliennaire on Mar 23, 2012, 06:49:18 PM
Deuterium, in your scenario, a robot running basic Three Laws firmware is bound to save human A, as the event of his sacrifice comes first. Also, a robot is not allowed to make judgements about people's behaviour, even if some of them will turn out to be evil in nature, because that only happens later. However, the Zeroth Law could override the robot's decision and give him some sort of free will, I guess.


It was my intention, in the thought experiment, that the notional threat to agent (A) was occurring concurrently with the threat to agents (B & C)...forcing David to make a decision on which party to save. In other words, all parties are in "peril" at the same moment in time. Nevertheless, it is not clear from the wording of the laws that temporal priority is taken into account. In other words, if David's action to help agent (A) automatically consigns agents (B & C) to a certain death, even if that occurs at a slightly later time, it still seems to pose a dilemma.

I do like your idea that the Zeroth Law might imply a certain freedom for David to make a judgement, perhaps even allowing free will and sentient/cognitive "intuition" to guide his decision.
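
If one wanted to picture that override, here is a toy Java sketch; all the names and numbers (ZerothLawOverride, the harm estimates) are invented for illustration. It ranks the choices by projected harm to humanity first, with the plain First Law head-count only breaking ties.

// Toy sketch of a Zeroth Law override: Action and its fields are invented for this post.
import java.util.Comparator;
import java.util.List;

public class ZerothLawOverride {

    // Estimated consequences of each choice in the earlier thought experiment.
    record Action(String description, int humansHarmedNow, double projectedHarmToHumanity) {}

    public static void main(String[] args) {
        List<Action> choices = List.of(
            new Action("Save A (B and C die)", 2, 0.1),
            new Action("Save B and C (A dies, B may kill again)", 1, 0.7)
        );

        // First Law alone: minimise the number of humans harmed right now.
        Action firstLawChoice = choices.stream()
            .min(Comparator.comparingInt(Action::humansHarmedNow))
            .orElseThrow();

        // Zeroth Law override: minimise projected harm to humanity first,
        // falling back to the First Law count only as a tie-breaker.
        Action zerothLawChoice = choices.stream()
            .min(Comparator.comparingDouble(Action::projectedHarmToHumanity)
                .thenComparingInt(Action::humansHarmedNow))
            .orElseThrow();

        System.out.println("First Law alone picks:  " + firstLawChoice.description());
        System.out.println("With Zeroth Law, picks: " + zerothLawChoice.description());
    }
}

With these made-up numbers, the plain First Law head-count favours saving B and C, while the Zeroth Law ordering flips the choice to saving A, which is exactly the sort of judgement call we were discussing.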

bambi_burster

#11
I guess in a way a synthetic person could conceivably have a clearer idea of morality towards human life. Then again, a synthetic's judgement isn't clouded by morality, so I guess it isn't fair to suggest its ideas are pure because of programming. I'm not forgetting emotional programming, but in the end its basic programming overrides any emotional attachment, real or otherwise.

Deuterium

#12
Quote from: bambi_burster on Mar 23, 2012, 08:53:33 PM
I guess in a way a synthetic person could conceivably have a clearer idea of morality towards human life. Then again, a synthetic's judgement isn't clouded by morality, so I guess it isn't fair to suggest its ideas are pure because of programming. I'm not forgetting emotional programming, but in the end its basic programming overrides any emotional attachment, real or otherwise.

I am not certain I would necessarily agree with that. If and when we develop a true Artificial Intelligence which carries all the hallmarks of human consciousness: self-awareness, self-reflection, cognitive recognition that other beings have consciousness of their own (which implies empathy), feelings, intentionality, and so on, then it would certainly not be unreasonable to expect such a creation to also exhibit such abstract qualities as "morality" and "ethics".

bambi_burster

#13
Good point. I'm an amateur on this subject, but I was always fascinated by the androids in the Alien films (not so much Call).

aliennaire

#14
Quote from: Deuterium on Mar 23, 2012, 08:49:32 PM
It was my intention, in the thought experiment, that the notional threat to agent (A) was occurring concurrently with the threat to agents (B & C)...forcing David to make a decision on which party to save. In other words, all parties are in "peril" at the same moment in time. Nevertheless, it is not clear from the wording of the laws that temporal priority is taken into account. In other words, if David's action to help agent (A) automatically consigns agents (B & C) to a certain death, even if that occurs at a slightly later time, it still seems to pose a dilemma.
Oh, I seem to have missed the condition of simultaneity of events, sorry... Well, in that case, the most rational decision for a robot would be to take person A's place himself, saving all of them, though probably only for a moment if, as hinted, person B could murder everyone afterwards. And if he has no way of taking the lethal blow meant for person A, because of distance, lack of time, or his skills, he would probably attempt to stop person B from killing person C without seriously maiming the former. Well, it seems rational to me; I mean, if I were to program David's reactions, I'd invent  :P some algorithm describing such a succession of steps, something like the sketch below. But it's all moot, though nevertheless interesting to think about.
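
Roughly the succession of steps I mean, as a made-up Java sketch (canShieldA, canRestrainBWithoutHarm and the rest are invented here, nothing from the films):

// Made-up sketch of the succession of steps described above; all names are invented.
public class DavidDecisionSteps {

    static String decide(boolean canShieldA, boolean canRestrainBWithoutHarm) {
        // Step 1: take A's place if physically possible; nobody has to die.
        if (canShieldA) {
            return "Shield A with his own body; everyone survives for now";
        }
        // Step 2: otherwise try to stop B before he can harm C,
        // using no more force than restraint requires.
        if (canRestrainBWithoutHarm) {
            return "Restrain B without serious injury; protect C";
        }
        // Step 3: no clean option left; back to the original dilemma.
        return "No action satisfies all three Laws; a judgement call is unavoidable";
    }

    public static void main(String[] args) {
        System.out.println(decide(false, true));
    }
}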
