The term “robotics” was first coined by the legendary science fiction writer Isaac Asimov in his 1941 short story “Liar!”. Asimov was one of the first to see the vast potential of up-and-coming technologies that had yet to win public approval or interest in his time. Since then, robotics has been on a startling upward trajectory that has placed it at the forefront of cutting-edge technology. While robotics has brought many benefits to modern humanity, it is also the subject of endless heated debate. Humanity is on the verge of a robot revolution, and while many see it as a gateway to progress not seen since the Renaissance, it could just as easily result in the end of the human race. With the ever-present threat of accidentally creating humanity’s unfeeling successors, it is only natural to question how much, if at all, we should allow ourselves to become reliant on our technologies.
“As machines get smarter and smarter, it becomes more important that their goals, what they are trying to achieve with their decisions, are closely aligned with human values,” says UC Berkeley computer science professor Stuart Russell, co-author of the standard textbook on artificial intelligence. He believes that the survival of our species may depend on instilling values in AI, but doing so could also ensure harmonious robo-relations in more prosaic settings. “A domestic robot, for example, will have to know that you value your cat,” he says, “and that the cat is not something that can be put in the oven for dinner just because the fridge is empty.” But how, exactly, does one impart morals to a robot? Simply program rules into its brain? Send it to obedience class? Play it old episodes of Sesame Street? While roboticists and engineers at Berkeley and elsewhere grapple with that challenge, others caution that doing so could be a double-edged sword. While it might mean better, safer machines, it may also introduce a slew of ethical and legal issues that humanity has never faced before—perhaps even triggering a crisis over what it means to be human.
The notion that human/robot relations might prove tricky is nothing new. Science fiction author Isaac Asimov introduced his Three Laws of Robotics in the 1942 story “Runaround,” later gathered in the short story collection I, Robot (1950): a simple set of guidelines for good robot behavior. 1) Don’t harm human beings, 2) Obey human orders, 3) Protect your own existence. Asimov’s robots adhere strictly to the laws and yet, hampered by their rigid robot brains, become mired in seemingly unresolvable moral dilemmas. In one story, a robot tells a woman that a certain man loves her (he doesn’t), because the truth might hurt her feelings, which the robot understands as a violation of the first law. To avoid breaking her heart, the robot breaks her trust, traumatizing her in the process and thus violating the first law anyway. The conundrum ultimately drives the robot insane.
Although a literary device, Asimov’s rules have remained a jumping-off point for serious discussions about robot morality, serving as a reminder that even a clear, logical set of rules may fail when interpreted by minds different from our own. Recently, the question of how robots might navigate our world has drawn new interest, spurred in part by accelerating advances in AI technology. With so-called “strong AI” seemingly close at hand, robot morality has emerged as a growing field, attracting scholars from philosophy, human rights, ethics, psychology, law, and theology. Research institutes focused on the topic have sprung up.
The public conversation took on a new urgency recently when Stephen Hawking announced that the development of super-intelligent AI “could spell the end of the human race.” An ever-growing list of experts, including Bill Gates, Steve Wozniak and Berkeley’s Russell, now warn that robots might threaten our existence. Their concern has focused on “the singularity,” the theoretical moment when machine intelligence surpasses our own. Such machines could defy human control, the argument goes, and lacking morality, could use their superior intellects to extinguish humanity. Ideally, robots with human-level intelligence will need human-level morality as a check against bad behavior. However, as Russell’s example of the cat-cooking domestic robot illustrates, machines would not necessarily need to be brilliant to cause trouble. In the near term we are likely to interact with somewhat simpler machines, and those too, argues Colin Allen, will benefit from moral sensitivity. Professor Allen teaches cognitive science and history of philosophy of science at Indiana University at Bloomington. “The immediate issue,” he says, “is not perfectly replicating human morality, but rather making machines that are more sensitive to ethically important aspects of what they’re doing.” And it’s not merely a matter of limiting bad robot behavior. Ethical sensitivity, Allen says, could make robots better, more effective tools. For example, imagine we programmed an automated car to never break the speed limit. “That might seem like a good idea,” he says, “until you’re in the back seat bleeding to death. You might be shouting, ‘Bloody well break the speed limit!’ but the car responds, ‘Sorry, I can’t do that.’ We might want the car to break the rules if something worse will happen if it doesn’t. We want machines to be more flexible.”
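Allen’s speed-limit example can be made concrete with a toy sketch. The code below is purely illustrative (the threshold, the severity score, and the function names are assumptions, not anything from Allen’s work): it contrasts a rigid hard-coded rule with one that yields when following it would cause greater harm.

```python
# Illustrative sketch only: a rigid traffic rule vs. a rule that can be
# overridden when the cost of obeying it is worse than breaking it.

SPEED_LIMIT = 65  # mph; arbitrary example value

def rigid_speed(requested):
    """Inflexible rule: never exceed the limit, whatever the stakes."""
    return min(requested, SPEED_LIMIT)

def flexible_speed(requested, emergency_severity):
    """Weigh the rule against the harm of following it.

    emergency_severity is a hypothetical 0-1 score; above a threshold
    (e.g. a passenger bleeding to death), the rule gives way.
    """
    if emergency_severity > 0.8:
        return requested  # break the rule to avoid a worse outcome
    return min(requested, SPEED_LIMIT)

print(rigid_speed(90))           # capped at 65: "Sorry, I can't do that."
print(flexible_speed(90, 0.95))  # 90: rule overridden in an emergency
print(flexible_speed(90, 0.10))  # 65: no emergency, limit holds
```

The point of the sketch is Allen’s: ethical sensitivity here is not a bigger rulebook but a mechanism for deciding when a rule should bend.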
As machines get smarter and more autonomous, Allen and Russell agree that they will require increasingly sophisticated moral capabilities. The ultimate goal, Russell says, is to develop robots “that extend our will and our capability to realize whatever it is we dream.” But before machines can support the realization of our dreams, they must be able to understand our values, or at least act in accordance with them. Which brings us to the first colossal hurdle: There is no agreed upon universal set of human morals.
Morality is culturally specific, continually evolving, and eternally debated. If robots are to live by an ethical code, where will it come from? What will it consist of? Who decides? Leaving those mind-bending questions for philosophers and ethicists, roboticists must wrangle with an exceedingly complex challenge of their own: How to put human morals into the mind of a machine. There are a few ways to tackle the problem, says Allen, co-author of the book Moral Machines: Teaching Robots Right From Wrong. The most direct method is to program explicit rules for behavior into the robot’s software—the top-down approach.
The rules could be concrete, such as the Ten Commandments or Asimov’s Three Laws of Robotics; or they could be more theoretical, like Kant’s categorical imperative or utilitarian ethics. What is important is that the machine is given hard-coded guidelines upon which to base its decision-making.
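A minimal sketch of what this top-down approach might look like in software, using Asimov’s Three Laws as the rule set. Everything here is hypothetical: the action dictionary, its field names, and the priority ordering are illustrative assumptions, not a real robotics API.

```python
# Hypothetical top-down rule set: explicit, prioritized rules checked
# before the machine acts. Fields like "harms_human" stand in for
# whatever perception and prediction would supply in a real system.

RULES = [
    ("do not harm a human",   lambda a: not a.get("harms_human", False)),
    ("obey human orders",     lambda a: a.get("ordered", True)),
    ("protect own existence", lambda a: not a.get("self_destructive", False)),
]

def permitted(action):
    """Return (allowed, violated_rule), checking rules in priority order."""
    for name, check in RULES:
        if not check(action):
            return False, name
    return True, None

print(permitted({"harms_human": True}))   # forbidden by the first law
print(permitted({"ordered": True}))       # permitted
```

As Asimov’s own stories dramatize, the hard part is not encoding such rules but everything the lambdas above wave away: deciding what counts as “harm” in the first place.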