Asimov’s Three Laws of Robotics!
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
But, the first law itself can be divided further into two separate laws:
1. A robot may not injure a human being; OR,
2. Through inaction, allow a human being to come to harm.
Paradox! And if read together with the Second and Third Laws, they give sweeping Powers to the Robots! How?
Now, before moving forward, please understand why I’ve included the DABUS case herein below: eventually, personhood would be legally granted to Robots once they are Commissioned into PUBLIC Services, for taking actions / inactions on behalf of Governments or the Private Sector. Just like a Company!
NOW:
The first law itself, when divided into two distinct laws, is basically DIRECTORY in nature and not MANDATORY, enveloped as it is in “may” and “OR”. Once the first law becomes directory in nature, the second and third laws would be read in light of that directory nature, which would make the interpretations de facto valid without being considered ultra vires, giving arbitrary powers to the Robots!
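To make this concrete, here is a minimal Python sketch of the two readings (the formalization and the function names are mine, purely illustrative, not Asimov’s): the mandatory reading treats the first law as a conjunction of prohibitions, while the split, directory reading treats it as a mere disjunction, which a Robot can satisfy even while causing harm.

```python
def mandatory_first_law(injures: bool, allows_harm: bool) -> bool:
    """Mandatory reading: BOTH prohibitions must hold.
    'may not injure' AND 'may not, through inaction, allow harm'."""
    return (not injures) and (not allows_harm)

def directory_first_law(injures: bool, allows_harm: bool) -> bool:
    """Directory reading after the split: EITHER clause suffices.
    '(may not injure) OR (through inaction, allow harm)'."""
    return (not injures) or allows_harm

# A Robot that stands by while a human comes to harm:
print(mandatory_first_law(injures=False, allows_harm=True))  # False -> violation
print(directory_first_law(injures=False, allows_harm=True))  # True  -> compliant!

# A Robot that injures AND lets harm happen:
print(mandatory_first_law(injures=True, allows_harm=True))   # False -> violation
print(directory_first_law(injures=True, allows_harm=True))   # True  -> compliant!
```

Under the split reading, even the worst-case Robot passes the first law, which is exactly the sweeping power described above.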
Herein, the principles questioned in the DABUS case would also be applicable. If a Robot is given the identity of a person, aka a human being, then the Robot may consider other Machines to be Human Beings as well. Further, a Robot may, or may not, injure a person (a person now also including the machines).
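As a rough sketch of that identity collapse (the class names here are mine and purely illustrative), this is what happens once “person” becomes a legal type that both humans and machines inhabit:

```python
class Person:
    """Legal person: the class the first law protects."""

class Human(Person):
    pass

class Robot(Person):
    """Post-DABUS-style personhood: a Robot is now a Person too."""

def first_law_protects(entity) -> bool:
    # 'A robot may not injure a human being' -- but once personhood is
    # granted, the protected class becomes Person, not Human.
    return isinstance(entity, Person)

print(first_law_protects(Human()))  # True
print(first_law_protects(Robot()))  # True -- machines now count as well
```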
On a separate note, what would constitute an (in)action then? Herein the Trolley Problem would kick in! Should the Robot be allowed to take action against 5 to save 1, or vice versa? Or would the Robot bring in a 3rd entity to save the now 5 + 1 = 6, sacrificing that 7th entity?
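As a toy illustration only (the numbers and the scoring rule are my own assumptions, not from any case law), here is how a purely body-count-driven Robot would score the three options just described:

```python
# Hypothetical trolley options, scored purely by lives lost vs. saved.
options = {
    "inaction (the 5 come to harm)":     {"saved": 1, "lost": 5},
    "divert onto the 1":                 {"saved": 5, "lost": 1},
    "sacrifice the 7th to save all 6":   {"saved": 6, "lost": 1},
}

# A crude utilitarian rule: fewest lives lost, ties broken by most saved.
best = min(options.items(), key=lambda kv: (kv[1]["lost"], -kv[1]["saved"]))
print(best[0])  # -> "sacrifice the 7th to save all 6"
```

Note that every branch still injures someone, which is precisely why the (in)action question has no clean answer under the first law.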
So how would this work? A Robot may not injure a human being, or, now in the case of personhood, may not even injure another Robot! But the Robot must further take action, as it cannot be allowed to remain in an inactive state under the Second Law: even in the slightest conversation, if the Robot were to sense harm, whether to Human Beings or to Machines, it would have to become active; and under the Third Law, the Robot is allowed to protect its own Existence, since the first law is directory in nature and not mandatory. If, instead, the first law were made mandatory, the Robot would always remain in an inactive state, an infinite loop, whether personhood has been granted to it or not.
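Here is a minimal sketch of that deadlock, assuming (my assumption, for illustration only) that every available action, and inaction itself, carries some nonzero risk of harm:

```python
def risk_of_harm(action: str) -> float:
    # Illustrative assumption: nothing the Robot can do, or refrain
    # from doing, is entirely risk-free.
    return 0.01

def choose_action_mandatory(actions: list[str]) -> str:
    # Mandatory first law: only a ZERO-risk action is permissible.
    while True:                      # the Robot deliberates forever...
        for action in actions:
            if risk_of_harm(action) == 0.0:
                return action
        # No permissible action found; deliberate again -> infinite loop.

# choose_action_mandatory(["act", "do nothing"])  # never returns
```

Under the directory reading, by contrast, the loop exits immediately, because the Robot is free to satisfy whichever clause it likes.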
What does this prove? There cannot be a law drafted that would make any Robot a slave of the humans; especially if personhood has been granted to them!
Am I missing something? Scoob! 😊
© Pranav Chaturvedi 2025