It might be appropriate to note that Asimov's original Laws of Robotics evolved somewhat over time. He later added a Zeroth Law, which supersedes the original Three Laws.
Despite this axiomatization of the laws governing robots' relationship with humans and humanity, one can still conceive of scenarios in which the validity or consistency of the laws becomes problematic.
One example would be the following:
Given: David is a morally "good" robot, and operates within the constraints set forth by the "Laws of Robotics".
Question: What is David's course of action if he is put in a position where he must make a decision that may sacrifice the life of a single human (A), but potentially save the lives of two other humans (B & C)? To add more complexity to the scenario...what if David also knows that one of the other humans (B) is inherently "bad", and if saved, might commit future violence and/or murder against one or more other humans?
What does David do?
If he saves (A), then (B & C) die.
If he saves (B & C), then (A) dies...and there is a high probability that (B) will kill one or more humans.
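To make the tension concrete, here is a minimal sketch of the dilemma treated as an expected-harm calculation, roughly the kind of utilitarian reading a Zeroth-Law robot might attempt. The probability that (B) kills again and the number of future victims are assumptions invented purely for illustration; nothing in the Laws themselves supplies these numbers.

```python
# Illustrative sketch only: modeling David's choice as expected-harm minimization.
# p_b_kills and future_victims are assumed values, not anything from Asimov.

def expected_deaths_if_save_a() -> float:
    # Saving A means B and C die; B can do no future harm.
    return 2.0

def expected_deaths_if_save_bc(p_b_kills: float, future_victims: int = 1) -> float:
    # Saving B and C means A dies now, plus whatever harm B may cause later.
    return 1.0 + p_b_kills * future_victims

if __name__ == "__main__":
    p = 0.8  # assumed "high probability" that B kills again
    save_a = expected_deaths_if_save_a()
    save_bc = expected_deaths_if_save_bc(p_b_kills=p, future_victims=1)
    print(f"Expected deaths if David saves A:     {save_a:.2f}")
    print(f"Expected deaths if David saves B & C: {save_bc:.2f}")
    # With these assumed numbers, saving B & C still minimizes expected deaths,
    # yet the First Law forbids David from harming A or allowing A to come to
    # harm through inaction, so the arithmetic alone cannot resolve the conflict.
```

The point of the sketch is that even if David could quantify the outcomes, the First Law gives him no license to trade one certain death for a probabilistic saving of others; the conflict between the Laws remains.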