About this sample
Words: 882 | Pages: 2 | 5 min read
Published: Dec 16, 2021
Innovation has always been both the greatest benefit and the greatest risk for humankind. For all its advantages, people rarely live long enough to see the full list of repercussions that follow from the continuous stream of “advancements” and “modifications” they invent. Mankind can, however, see this highlighted in the automobile industry. Though cars were invented in 1885, seat belts were not required until 1966; before that, many car manufacturers took pride in the “mobility” they offered their customers by omitting seat belts, without mentioning, or considering, the risks that came with their absence. The newest spectacle from the automobile industry is a vehicle that can pilot itself, yet neither the public nor the industry itself is fully prepared to handle the responsibilities and challenges that come with this new innovation.
The immediate assumption that comes with a self-driving car is, simply put, ease: the convenience of letting the machine take over while you enjoy the extra time on your hands to do whatever you find convenient. This mindset is not only idealistic but also riddled with a lack of responsibility. Take into consideration the fatal 2016 car crash that ended the partnership between Mobileye (a corporation that provides collision-avoidance assistance technology) and Tesla (a car manufacturer that specializes in electric vehicles). The crash occurred when a Tesla Model S failed to detect a semi-truck crossing in front of it. As tech and telecom writer Chris Neiger noted, “Neither company publicly blamed the other for the crash, but Tesla CEO Elon Musk made comments that weren’t complimentary to Mobileye following the accident” (Neiger). In situations like this, it is hard to determine who exactly is at fault, which makes it even harder to fix the issues that arise from unforeseen circumstances. This lack of responsibility extends beyond the automobile industry itself, as some owners of self-driving cars tend to abuse the autopilot function: “There have been a variety of debates and reports of people in Teslas on highways, asleep at the wheel, being driven by autopilot” (Templeton). When carelessness is shielded by the perceived security of the newest innovations, reckless driving becomes all the more convenient.
Another dilemma is far more complicated and far more philosophical: where should human beings draw the line of moral boundaries with self-driving cars? According to computer scientist and study co-author Iyad Rahwan, “People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots, and what we show here with data is that there are no universal rules” (Maxmen). Published by Nature Research (a leading international science publisher), the largest recorded survey on machine ethics asked over 2.3 million people across the world a very difficult question: what moral code should self-driving cars have? (Maxmen). It did this by laying out thirteen scenarios in which someone’s death was inevitable; respondents were asked whom to spare based on socio-economic standing, age, gender, and number of people. The survey reveals many gaps and problems in the construction of these machine morals. For instance, one ethical paradox found in the survey shows that “people said that they wanted an autonomous vehicle to protect pedestrians even if it meant sacrificing its passengers — but also that they would not buy self-driving vehicles programmed to act this way” (Maxmen).
In addition, the moral compass was skewed by cultural differences between countries and nationalities. Consider the responses on how a self-driving vehicle should react to a person crossing the road illegally. Respondents from countries with stricter government institutions (such as Japan and Finland) were more likely to choose to hit jaywalkers than respondents from countries with weaker government institutions (such as Nigeria and Pakistan). Another example of how culture affects people’s reactions can be seen in the socio-economic standing of the countries surveyed. In countries with high wealth disparity (such as Colombia), respondents tended to favor sparing those of higher financial status at the expense of those of lower status, while in countries without dramatic wealth disparity, responses could go either way. These moral norms differ depending on where people are from, highlighting that this is as much a human problem as it is a machine-learning or innovation problem. “We need to come up with a social consensus…about which risks we are willing to take” (Maxmen).
It is clear that self-driving cars should be regulated more strictly than human drivers.
These vehicles are said to have trouble navigating through crowds of pedestrians and could cause far worse accidents than human error typically would. In addition, self-driving cars may struggle with moral dilemmas: forced to choose between running into a group of pedestrians and killing them, or crashing into a building and killing all of the passengers. In the end, autonomous cars could create more risks than they eliminate if not regulated properly.