About this sample
Words: 908 | Pages: 2 | 5 min read | Published: Feb 13, 2024
The development of technology is aimed at making our lives easier, whether in the medical field, where advanced technology may detect early signs of disease, or in the automotive sector, where humans can travel by car without driving. The advancement of these machines and robots brings up many challenges. Ethical challenges are among the toughest to tackle because there is no single right answer: different people hold different opinions, and in a long chain of decisions there are bound to be conflicts of belief. Who is to say one belief is more right than another?
According to an article published in Technology Review, ethical dilemmas in operating a self-driving car are inescapable, and how they should be resolved differs from person to person. For example, if the car had to choose between hitting five people or one person in order to protect its passengers, different people would choose differently: some might weigh the age groups of those involved, while others might even risk their own lives for the sake of a pedestrian. Hence, the moral algorithm may need to vary for each customer. Customising each algorithm, however, requires a great deal of data, which can only be gathered if the customer answers many ethical decision-making questions. Only then can an algorithm that predicts the customer's decisions be prepared.
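To make this concrete, such a personalised moral algorithm can be viewed as a preference model fitted to the customer's survey answers. The following is a minimal sketch of that idea; the dilemma encoding, the sample data and the choice of a decision tree are all illustrative assumptions, not any manufacturer's actual method.

```python
# Sketch: fit a simple model to a customer's answers to hypothetical
# dilemma questions, then predict their preference in a new scenario.
from sklearn.tree import DecisionTreeClassifier

# Each question is encoded as (pedestrians_at_risk, passengers_at_risk,
# youngest_pedestrian_age); the label is the customer's chosen action.
survey_answers = [
    (5, 2, 30),  # customer chose to protect the pedestrians
    (1, 2, 70),  # customer chose to protect the passengers
    (3, 1, 8),   # customer chose to protect the pedestrians
    (1, 4, 45),  # customer chose to protect the passengers
]
choices = ["protect_pedestrians", "protect_passengers",
           "protect_pedestrians", "protect_passengers"]

model = DecisionTreeClassifier().fit(survey_answers, choices)

# Predict what this customer would want in an unseen dilemma.
print(model.predict([(4, 2, 25)]))
```

Even this toy version shows why so many survey answers would be needed: with only a handful of examples, the model's prediction in a new dilemma is little better than a guess.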
This raises another ethical challenge: if a crash were to occur injuring those around the vehicle instead of the passengers, is the buyer to blame, since the algorithm is based on the buyer's decisions? Another article, published on The Conversation, argues that thinking about extreme situations will not help decision-making in day-to-day situations, so research should focus on everyday challenges such as crosswalks rather than on severe scenarios. In fact, the authors question why the algorithms should be based on human decisions at all. Humans are imperfect creatures whose decisions are shaped by their own biases regarding skin colour, race, age and other factors; self-driving cars should be safer and drive more impartially than humans do.
There is no doubt that the development of artificial intelligence (AI) also raises ethical issues that need to be resolved before further advancement. One interesting issue raised by The Cambridge Handbook of Artificial Intelligence is the dilemma of an AI that appears to discriminate racially. The example given is an AI used by banks to approve mortgage loans that appears to discriminate against black applicants. Although it can be argued that the algorithm never even considers race, it cannot be denied that, statistically, black applicants receive fewer approvals than applicants of other races. One explanation is that the algorithm takes the applicant's address into account, and since communities tend to cluster geographically, address can act as a proxy for race. When developing this type of technology, it is important to take everything into consideration, including how the outcome will be perceived and the possibility of unintentional discrimination.
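The proxy effect is easy to demonstrate. In the toy audit below, race is never an input to the approval rule, yet approval rates still differ by group because neighbourhood correlates with group membership; all of the data and the rule itself are invented purely for illustration.

```python
# Sketch: a "race-blind" approval rule can still produce unequal
# approval rates when an input (neighbourhood) acts as a proxy.
from collections import defaultdict

# (neighbourhood, income, group) -- 'group' is recorded only for the
# audit; the approval rule never sees it.
applicants = [
    ("north", 60, "A"), ("north", 55, "A"), ("north", 40, "A"),
    ("south", 60, "B"), ("south", 55, "B"), ("south", 40, "B"),
]

def approve(neighbourhood, income):
    # Hypothetical learned rule: 'south' addresses historically
    # defaulted more, so a higher income is demanded there.
    threshold = 50 if neighbourhood == "north" else 58
    return income >= threshold

rates = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for neighbourhood, income, group in applicants:
    rates[group][0] += approve(neighbourhood, income)
    rates[group][1] += 1

for group, (approved, total) in rates.items():
    print(f"group {group}: {approved}/{total} approved")
# Group A is approved more often despite identical incomes.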
One way to overcome ethical challenges is to consider the most important features AI needs when developing the technology: transparency, predictability, being "robust against manipulation", and responsibility. Transparency is a main concern because it is crucial for finding the root cause of problems such as the one described above; frustration and feelings of injustice surface if nobody can find out why an algorithm's outcome appears discriminatory. Furthermore, AI systems learn from precedent cases in order to make better-informed decisions in the future; this is how the machines improve by themselves, without a creator needing to update the system periodically. In some cases, a new event may look similar to a past one yet require a completely different approach, so when put to the test, these AIs must constantly update their algorithms to fit the ethical dilemma currently before them.
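One simple form that transparency can take is a model whose decision can be decomposed feature by feature, so a disputed outcome can be traced to its root cause. The sketch below assumes a linear scoring model; the weights and feature names are invented for illustration.

```python
# Sketch: with a linear scoring model, each feature's contribution to a
# decision can be printed and inspected, supporting transparency.
weights = {"income": 0.6, "debt": -0.8, "years_at_address": 0.2}

def explain(applicant):
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())
    print(f"score = {score:.2f}")
    for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"  {feature:>16}: {c:+.2f}")
    return score >= 0

explain({"income": 1.2, "debt": 1.5, "years_at_address": 0.5})
```

Real systems are rarely this simple, which is exactly why the transparency criterion is hard to meet in practice: the more complex the model, the harder its outcomes are to trace.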
Another desirable feature for AI is to be "robust against manipulation". It is imperative that there are no loopholes in the system that ill-intentioned humans could exploit. This criterion is a standard requirement in information security, yet it is rarely considered in machine learning research. Last, and one of the most vital criteria, is responsibility. The problem with AI development is that production happens at such a scale that responsibility for the product is spread very widely. When a problem arises or the system fails, there is no single person to blame or to hold accountable: does the fault lie with the software designers, or with the directors who approved the design?
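One basic way to probe robustness against manipulation is to check whether tiny, deliberate tweaks to the inputs can flip a decision. The following sketch assumes a hypothetical approval rule and tolerance; both are invented for illustration.

```python
# Sketch: probe whether small adversarial nudges to the inputs can
# flip a decision -- a minimal robustness-against-manipulation check.
def decide(income, debt):
    return income - 0.8 * debt >= 0  # hypothetical approval rule

def is_robust(income, debt, epsilon=0.05):
    """Return False if nudging either input by <= epsilon flips the outcome."""
    base = decide(income, debt)
    for d_income in (-epsilon, 0, epsilon):
        for d_debt in (-epsilon, 0, epsilon):
            if decide(income + d_income, debt + d_debt) != base:
                return False
    return True

print(is_robust(0.82, 1.0))  # borderline case: easily manipulated
print(is_robust(2.00, 1.0))  # comfortably inside the decision region
```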
By considering all these important features, ethical questions may be answered with confidence. Although there are obvious challenges and conflicts in the advancement of these machines, progress will still ensue; without sacrifices made during the trial period, how can we improve? Customer feedback is one of the best methods of improving the technology. It is clear that AI, medical robots and autonomous vehicles have come a long way, and the use of this equipment will continue to grow until it becomes the norm in society. By then, different sorts of ethical challenges will arise, and the cycle will repeat itself.