
Robots as the Solution to Equality in the Job Interview Process


In an effort to understand the human mind, philosophers and scientists alike have looked to complex technology to help explain psychological phenomena. In medieval times, philosophers compared the brain to a hydraulic pump, a comparison largely shaped by the prevalence of hydraulic systems as a newly adopted innovation. During the mid-19th century, models of the brain resembled the telegraph, dubbed the “Victorian Internet,” as neural activation flowing over nerves was likened to information flowing over telegraph wires. Today, many view computers and robots as potential brain models, as evidenced by the popularization of the computational model of the mind and advances in artificial intelligence. While analogies offer a simple basis of comparison for the abundant mysteries of the brain, they can also render complex technology, and by proxy the brain, as magical and inaccessible (Anderson). As a result, our society glorifies technology as infallible and unbiased, and consequently we have created more roles for technology, specifically robots, to become ever more involved in our lives.


One human-occupied role that is beginning to show promise for robot replacement is the job interview process. In recent years, Australia’s La Trobe University has partnered with Japan’s NEC Corporation and Kyoto University to create communication robots with emotional intelligence that help conduct job interviews for companies. These robots can perceive facial expressions, speech, and body language to determine whether prospective employees are “emotionally fit and culturally compatible” (“Matilda the Robot”). The first robots were named Matilda and Jack; they have since been joined by the similar robots Sophie, Charles, and Betty, along with two additional unnamed robots (Nickless). Dr. Rajiv Khosla, director of La Trobe’s Research Centre for Computers, Communication, and Social Innovation, says that “IT [information technology] is such a pervasive part of our lives, we feel that if you bring devices like Sophie into an organisation it can improve the emotional well-being of individuals.” Computers and robots are often restricted to analyzing quantitative data, but communication robots like Matilda can analyze people and their qualitative, emotional properties. These emotionally intelligent robots show promising potential for eliminating inequality and bias in the employee selection process, but they will only be able to do so under specific parameters.

Emotionally intelligent robots may help reduce employment inequality because they do not hold implicit biases as humans do. Unfortunately, our prejudices often prevent us from making fair and equitable decisions, which is especially evident in the job interview process. In an interview, National Public Radio’s science correspondent Shankar Vedantam describes research on how bias shapes hiring decisions. In one study, researchers found that the time of day when an interview is conducted has a profound impact on whether a candidate is chosen for a job (Inskeep). This means that an aspect as seemingly inconsequential as circadian rhythm, one of our most primitive instincts, can sway our best judgment. Professional occupation serves as a primary means of income and an indicator of status. Given that importance, we should strive to create a fair system for all job applicants, but complete fairness may not be possible if human biases cannot be controlled.

Beyond basic physiological factors, these biases extend to racial prejudices as well. In 2013, John Nunley, Adam Pugh, Nicholas Romero, and Richard Seals conducted research to understand the job market for college graduates across racial boundaries. They submitted 9,400 online job applications on behalf of fake college graduates with variation across college majors, work experience, gender, and race. To indicate race, half of the applicants were given typically white-sounding names, such as “Cody Baker,” while the other half were given typically black-sounding names like “DeShawn Jefferson.” Despite equal qualifications among the fake applicants, the black applicants were 16% less likely to be called back for an interview (Arends). Therefore, racial prejudices, even if unintentional and unconscious, can create unfairness in the job interview process.

In light of these implicit biases that affect the employee selection process, robots are a viable option for conducting objective, fair job interviews. Even though robots are often thought of as machines for human convenience, they have the potential to equalize opportunities, especially in situations in which humans think and behave irrationally. Robots operate on logical algorithms, which let them adhere strictly to specific criteria without being swayed by irrational biases. Because a candidate’s credentials cannot always be measured quantitatively, and are therefore subject to qualitative biases, it may be fairest for candidates to be evaluated by an objective machine.
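To make the idea of “strict adherence to specific criteria” concrete, here is a minimal, hypothetical sketch in Python. It is not La Trobe’s actual system; the field names, required skills, and threshold are invented for illustration. The point is simply that the decision rule never consults identity attributes such as a candidate’s name, the very signal that drove the callback gap in the audit study described above.

```python
# Hypothetical sketch of criteria-only screening. Field names, skills, and
# thresholds are invented for illustration; this is not La Trobe's system.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str                  # identity attribute: stored, but never scored
    years_experience: float
    has_degree: bool
    skills: set

REQUIRED_SKILLS = {"data analysis", "communication"}  # assumed job criteria
MIN_EXPERIENCE = 2.0                                  # assumed threshold

def screen(candidate: Candidate) -> bool:
    """Return True if the candidate meets every stated criterion.

    The candidate's name (a proxy for race or gender in audit studies)
    is deliberately never consulted.
    """
    return (
        candidate.years_experience >= MIN_EXPERIENCE
        and candidate.has_degree
        and REQUIRED_SKILLS.issubset(candidate.skills)
    )

# Identical qualifications produce identical decisions, regardless of the name.
print(screen(Candidate("Cody Baker", 3.0, True, {"data analysis", "communication"})))
print(screen(Candidate("DeShawn Jefferson", 3.0, True, {"data analysis", "communication"})))
```

Under these assumptions, the two equally qualified applicants from the audit study receive the same decision, which is exactly the property a fair screening process should have.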

However, using robots with the goal of eliminating bias is not a panacea and must be approached with caution. While robots do act logically, they do so only within the parameters of their programmed algorithms. If a program is built on inherently biased data or assumptions, then the machine on which it runs will perpetuate that bias. In 2016, Amazon was accused of using a “racist algorithm” that excluded minority neighborhoods in major cities from its Prime Free Same-Day Delivery service while consistently offering the service to predominantly white neighborhoods. The algorithm’s data linking maximum profit to predominantly white neighborhoods was a direct result of decades of systemic racism, which produced stark residential divides between high-income, largely white neighborhoods and low-income, minority neighborhoods. Ironically, the excluded low-income neighborhoods would have benefited the most from free additional services, while the high-income neighborhoods that received them are more likely to have easy access to low-cost, quality goods. Although Amazon claimed it was simply following the data, which showed that the excluded neighborhoods would not be profitable (Gralla), it was ultimately using an algorithm built on socioeconomically biased data to perpetuate racist patterns.
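To see how a facially neutral rule can still reproduce a biased pattern, consider the toy sketch below. The area labels, revenue figures, and threshold are entirely invented and are not drawn from Amazon’s actual data or code; the sketch only illustrates the mechanism the paragraph describes: a rule that optimizes profit over historically segregated data ends up excluding the same neighborhoods that segregation already disadvantaged.

```python
# Toy illustration with invented numbers -- not Amazon's data or algorithm.
# The rule below never mentions race, yet it reproduces a racial pattern
# because its input (historical revenue) already reflects segregation.

historical_revenue = {        # past annual revenue per area (all values invented)
    "area_a": 120_000,        # hypothetical high-income, predominantly white area
    "area_b": 95_000,         # hypothetical high-income, predominantly white area
    "area_c": 30_000,         # hypothetical low-income, minority area
    "area_d": 28_000,         # hypothetical low-income, minority area
}

PROFIT_THRESHOLD = 50_000     # offer same-day delivery only where revenue is high

eligible = {
    area
    for area, revenue in historical_revenue.items()
    if revenue >= PROFIT_THRESHOLD
}

print(sorted(eligible))  # ['area_a', 'area_b'] -- the exclusion mirrors the biased input
```

Auditing which groups fall on each side of such a threshold, rather than trusting that “the data is neutral,” is the kind of oversight the rest of this essay argues for.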

Another similar, and perhaps more pertinent, example of biased programming is Microsoft’s Twitter chatbot experiment. In 2016, Microsoft released a chatbot named Tay, which was designed to interact with teenage Twitter users by imitating their language. Soon after its release, Twitter trolls coerced Tay into posting racist slurs and other derogatory statements. As Tay’s tweets grew more offensive, Microsoft disabled the program and released a statement of apology. In the statement, Peter Lee, Corporate Vice President at Microsoft Research, apologized for the lack of oversight of the program, saying, “AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical” (Fitzpatrick). Lee’s statement speaks to the widespread challenge of creating artificial intelligence that is not influenced by the very human biases it was created to avoid. Thus, communication robots are a viable option for creating a fairer interview process; however, it is imperative to recognize that robots are also susceptible to human biases. In the case of Amazon’s racist algorithm, the system relied on data that reflected patterns of residential segregation; in the case of Microsoft’s Tay, the chatbot mimicked the derogatory language of other Twitter users. Both cases illuminate the pervasive and multifaceted influence of human bias on artificial intelligence, which is often mistakenly regarded as objective and fair. Artificial intelligence is malleable and easily shaped by prejudice; thus, creating communication robots that do not reflect prejudice should be a top priority for La Trobe University and others who build similar machines.

As noted earlier, two goals of the communication robots are to check that potential employees are “emotionally fit and culturally compatible” (“Matilda the Robot”). But what does it mean to be “emotionally fit” or “culturally compatible”? A number of factors can affect how a person expresses emotion, such as cultural heritage, gender, and mental health, but La Trobe University’s statements are unclear about whether the communication robots account for these factors or whether they penalize those who do not fit the emotional template of an ideal candidate. If qualified candidates who are not native to a particular culture do not display normative body language, and consequently fail Matilda’s test on the grounds of cultural incompatibility, the implicit assumption is that people from other cultures should not be employed. As many American companies begin to embrace diverse workplaces, communication robots that screen interviewees against a single algorithmic template of an ideal candidate may hinder diversity rather than advance equality and progress. Unfortunately, the available information on La Trobe University’s communication robots is limited, and these questions cannot be answered concretely. Nevertheless, all companies that create artificial intelligence should strive for transparency and question themselves throughout the design process so that they help, rather than hinder, the push for equality by creating truly unbiased machines.


In conclusion, communication robots like Matilda show potential to advance equality in the search for employment. However, the algorithms on which they run should be monitored carefully, as artificial intelligence is easily influenced by human bias. To ensure that these robots promote fairness and equality, the tech industry should actively cultivate a diverse environment in which all kinds of people are represented, so that many voices can cross-check the innovation process and prevent incidents like Amazon’s racist algorithm and Microsoft’s chatbot Tay. Furthermore, the creators of Matilda should define exactly what it means to be “emotionally fit and culturally compatible” to ensure that some people are not arbitrarily given a significant advantage when interviewed by communication robots (“Matilda the Robot”). Recognizing the profound impact of human bias on artificial intelligence may help us understand technology more deeply, rather than merely admiring it as magic untouched by human flaws. Perhaps that recognition is a first step towards demystifying computer technology and, eventually, the human mind.
