About this sample
Words: 3259 | Pages: 7 | 17 min read
Published: Nov 20, 2018
This paper revolves around a conversation with the iPhone chatbot Siri. Its primary focus is to examine whether Siri can be considered a socially intelligent agent (SIA) or whether the chatbot is not intelligent. To do so, a transcription of a conversation between Siri and a human will be analysed. The transcription will be interpreted through the article "The Art of Designing Socially Intelligent Agents: Science, Fiction and the Human in the Loop" by Kerstin Dautenhahn, which examines approaches to designing socially intelligent agents. Dautenhahn elaborates on how new forms of interaction can arise from the interface between humans and their tools, and argues that socially intelligent agents can control their cognitive and social potential. Social agent technology combined with human social cognition can result in new forms of sociality. As such, one can establish that the continuous development of socially intelligent agents can bridge the gap between humans and agents. While this article argues that machines can be socially intelligent agents, the paper will also draw on a theory holding that machines are not socially intelligent agents. This theory is based on a thought experiment called "The Chinese Room Experiment" and will be explained below. In addition, part of the article "When a robot is social: Spatial arrangements and multimodal semiotic engagement in the practice of social robotics" by M. Alac, J. Movellan and F. Tanaka will be used.
The Chinese Room Experiment is a thought experiment devised by the American philosopher John Searle. The general idea is that a man is inside a room, and outside the room are some Chinese speakers. The man inside the room does not understand any Chinese, but he has an instruction book in English that tells him which symbols to send out in answer to a Chinese question. When the people outside the room send in Chinese symbols, the man inside the room can send out new Chinese symbols as an answer. To the people outside the room, it looks as if the man inside the room knows Chinese.
The point of the experiment is that it looks as if the man inside the room knows Chinese, but in reality he does not know any Chinese. He just has an instruction book of rules telling him how to answer, while he has no understanding of what he is saying. The man inside the room is an analogy for a machine: we might think that the machine grasps meanings, but it is just a computer following rules (Searle, 1984).
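To make Searle's analogy concrete, a minimal sketch can show how a program produces sensible-looking answers by pure symbol matching. This is purely illustrative: the rule book, its triggers and its canned answers below are invented for this example and are not Siri's actual rules.

```python
# A rule book in the spirit of the Chinese Room: the program maps input
# symbols to output symbols without understanding either side.
RULE_BOOK = {
    "how are you": "Good",
    "what are you doing": "Gazing out my window and penning some haikus.",
    "tell me about yourself": "I am just a humble virtual assistant.",
}

def respond(question: str) -> str:
    """Return a canned answer if any rule's trigger phrase appears in the input."""
    q = question.lower()
    for trigger, answer in RULE_BOOK.items():
        if trigger in q:
            return answer
    # No matching rule: the "man in the room" has no instruction for this.
    return "I do not know what you mean. How about a web search for it?"

print(respond("Hey, what are you doing?"))
```

The function can answer convincingly, yet nothing in it represents the meaning of any question; it only checks whether certain character sequences occur.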
The article by Dautenhahn (1998) elaborates that it is in human nature to interact appropriately with forms that are on their level. Therefore, a robot or piece of software that exhibits human characteristics is termed a desirable agent (Dautenhahn 1998: 8). From the perspective of cognitive technology, designing a desirable SIA requires looking into the interaction between biological and physical agents. When it comes to agent-human interaction, it is necessary to have a cognitive fit between the SIA and the human. As a result, a desirable SIA borrows characteristics of human social traits and intelligence such as embodiment, memories, believability, narrative construction and history grounding. All of the above attributes contribute to an SIA that is developed in terms of intelligence and hence a desirable agent (ibid: 8). The following analysis will take a closer look at embodiment, believability, history grounding and the ability to have memories in relation to the transcription with Siri.
Humans are accustomed to being in bodies (Dautenhahn, 1998). Anything that does not exist in bodily form is somewhat strange to human understanding; in fact, most individuals do not believe in things that they cannot see. Therefore, one important aspect of a fully functioning SIA, according to Dautenhahn, is having a body that helps it perform like a human being. An embodied SIA is in a better position to meet both human and social needs (ibid: 9). The article by Alac, Movellan and Tanaka (2011) confirms this view. It states that robots can fit into society if they are more like humans. According to the authors, to become a legible social actor, robots should have a body. The timing of the robot's movements, as well as its computational mechanism, plays a great role in creating a legible social actor, that is, one that performs beyond physical activities. Therefore, when designing a robot, it should possess features that humanize it. A social robot should have human features such as a head, hands and legs, to mention but a few (Alac, Movellan & Tanaka 2011: 894).
With this in mind, Siri fails to be a well-designed social robot. It does not have hands, legs or a head, and it does not look like a human being. Siri is a mobile phone that talks. Because it can make sounds and speak, one might argue that Siri has a mouth. According to the theories of Dautenhahn and of Alac, Movellan and Tanaka, however, it is still not an embodied SIA. Nevertheless, Siri tries to embody itself in the interaction when it says:
M: What are you doing?
S: Gazing out my window and penning some haikus.
To do this, Siri would have to have human organs such as eyes and hands. The answer also makes Siri seem more like a human being, because it claims to be doing human things that would require a body to perform. In this case, the embodiment works for Siri, perhaps because Siri talks through a mobile phone. A mobile phone is used by human beings to talk to other people, even though their bodies cannot be seen. If a friend called and said that he was "gazing out his window and penning some haikus", the user of the phone would not question whether it was true, because it is known that the friend is somewhere else, fully embodied. Therefore, Siri can trick the human into believing that it is embodied by using the sentence "gazing out my window (…)". This is because it is natural for the human mind to assume that people talking on a mobile phone are embodied.
According to Dautenhahn, for a social robot to be intelligent, it is crucial that it has stories to tell. The article says that stories provide a platform to deal with enormous human responsibilities when it comes to social interaction (Dautenhahn, 1998).
With this in mind, Siri fails to be an intelligent social agent. Following the transcription given, it is notable that Siri lacks stories. Siri is asked three times to tell something about itself, and even though it phrases its answers differently, it says almost the same thing every time: "I am Siri, but enough about me", "I am just a humble virtual assistant" and "I am Siri, your virtual assistant". The first two times it says something different, and the third time it combines both into one sentence. The story of Siri is that it is called Siri and that it is a humble virtual assistant. Interestingly, when it is asked to tell something more about itself, it answers: "That may be beyond my abilities at the moment."

However, Siri does say something more about itself later in the conversation. When it is asked whether it can be a friend to the human, it answers: "You have been my friend since day one, Julie." This says something about Siri's life history, because it refers to the time aspect of its life with "since day one". It seems odd that Siri first says it is not able to say more about itself and later in the conversation does exactly that. Here, The Chinese Room Experiment can be used to argue that Siri is not intelligent. Siri does not seem to understand what it is saying when asked to tell something more about itself, because it answers the question incorrectly. This might be because the chatbot is programmed to answer "that may be beyond my abilities at the moment" whenever codes such as "more" and "yourself" (or something similar) appear in the same sentence. It seems almost impossible that Siri understands the sentence "tell me something more about yourself" when it first declines and later in the conversation tells something more about itself. By this argument, Siri is not an intelligent social agent in this case.
Another important requirement for a social robot to be intelligent is the ability to have memories. Memory is crucial for humanizing a robot. Being able to use a name or refer to an event that has passed is essential for human interaction (Dautenhahn, 1998).
Siri certainly has a memory when it comes to using a name. It uses the name "Julie" several times in the interaction. It also seems to have some memory in the transcription of:
M: Okay tell me something more about yourself
S: I am just a humble virtual assistant
M: Tell me something more about yourself
S: That may be beyond my abilities
In this case, Siri remembers what the human just asked, so that it does not give the same answer again. However, it has just been argued above that the answer "that may be beyond my abilities" may be a programmed response triggered by certain codes in the human's question. If that is the case, Siri does not have a memory. Another point in the interaction where Siri seems to lack memory is the transcription of:
M: No, it does not matter, but I hope you can help me with something
S: Who, me?
M: Yes, you
S: Yup. It is me
The human starts by asking Siri if it can help with something, and Siri seems to need confirmation that the matter concerns it. The human answers yes, and one would expect Siri to remember the first question so that it could, for example, ask what it can help with. However, Siri answers in a way that seems strange to the human, as if it does not remember what the conversation is about. Again the argument from The Chinese Room Experiment can be applied. In this case, Siri does not seem to be an intelligent agent. It appears instead to be programmed to answer questions from codes or rules, without any real understanding of the conversation and what is going on.
There is a moment in the conversation where it would have been interesting to test Siri’s ability to have memories. This moment is when the human is asking questions about Los Angeles:
M: I am going to Los Angeles this weekend. Do you know what I should see in Los Angeles?
S: Here is what I found on the web for 'Do you know what I should see in Los Angeles?'
M: Do you know where I can eat something good in Los Angeles?
S: Yes I do, Julie. The best-rated option I found is Sushi Gen, a sushi place, which averages 4.5 stars and is rather pricey
Until now it has mainly been argued that Siri is not intelligent, because all of its answers seem to fit the argument from The Chinese Room Experiment. In this interaction, it would have been interesting if the human had not used the words "Los Angeles" in the second question, to see whether Siri could remember that the first question was about Los Angeles. The human could have asked: "Do you know where I can eat something good in the city?" When the words "Los Angeles" are used, they might simply act as a code for Siri. It is possible that Siri is programmed to answer in a certain way when words such as "eat" and "Los Angeles" appear in the same sentence.
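The claim that Siri keys on co-occurring words rather than conversational context can be illustrated with a small, hypothetical sketch. The trigger words and answers below are invented for illustration and are not Siri's actual rules:

```python
# Hypothetical stateless keyword matcher: it gives the restaurant answer
# only when "eat" and "los angeles" co-occur in the same sentence.
def answer(question: str) -> str:
    q = question.lower()
    if "eat" in q and "los angeles" in q:
        return "The best-rated option I found is Sushi Gen."
    # No memory of earlier turns: anything unmatched falls back to a web search.
    return "Here is what I found on the web for '" + question + "'"

print(answer("Do you know where I can eat something good in Los Angeles?"))
print(answer("Do you know where I can eat something good in the city?"))
```

Because the function keeps no state between calls, replacing "Los Angeles" with "the city" in the second question drops it to the web-search fallback, which mirrors the memory test proposed above.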
According to Dautenhahn (1998), the human social circle leans towards what is believable rather than what is not. Today, scientists are developing systems that help SIAs communicate with a human audience and thereby achieve believability. Developing believable SIAs makes the technology familiar to people, and by being believable, the SIA meets human social and cognitive needs (Dautenhahn 1998: 7). As stated earlier, humans interact better with people, things or forms that they are familiar with, and SIAs that exhibit life-like forms are engaging to interact with. Dautenhahn holds that humans will always be drawn to forms that display characteristics such as life, movement, speech and emotion, because these are life-like (ibid). According to various researchers, humans are drawn to believable agents because such agents seem in touch with reality. In fact, critics claim that believable agents cheat humans into a notion of caring and nurturing when they are just an exaggerated program (ibid).
All of the aspects mentioned above are part of this believability. If a robot has memories, histories and embodiment, it is more believable to the human and appears more like a socially intelligent agent. The first thing that makes Siri believable and human-like is that the machine has a name. It is called Siri, and that makes the machine more personal. It is much easier to connect with something that has a name, because it seems more real to the human. Although Siri has a name, which makes it human-like, Siri presents itself as a "virtual" assistant. The word "virtual" creates a distance to the machine, because it suddenly appears as something that is not human. If Siri described itself only as an assistant, it would seem more like a human. Another thing that makes Siri appear human-like is the transcription of:
M: What are you doing?
S: Gazing out my window and penning some haikus
Here Siri talks like a person with a life rather than just software in a phone. Also, when Siri answers "I feel good", it appears more human-like because it shows that it has feelings.
However, there are some parts of the conversation where Siri does not have believability. The whole transcription of:
M: I am doing a school assignment on you
S: I do not know what you mean by 'I am doing a school assignment on you'. How about a web search for it?
M: No it does not matter, but I hope you can help me with something
S: Who, me?
M: Yes, you
S: Yup, it is me
M: Can you help me with something?
S: (silence, no answer)
M: I can not hear you
S: If you can not, you can not
This part of the conversation seems strange, and it is not a successful social interaction. Siri appears not to understand the questions, and that makes it less believable. First of all, it does not know how to answer the human's first sentence. If the argument of The Chinese Room Experiment is applied, this might be because the machine does not have a code or rule for these words put together. As already mentioned, the same argument can be used for the rest of the exchange, since Siri seems to be programmed to answer certain things when certain words are put together. Siri does not appear to understand the conversation at this point. The same thing is going on in the transcription of:
M: So do you love me?
S: I am not allowed to
M: Why are you not allowed to love me?
S: Is that what you think?
M: Yes that is what I think
S: That is what I thought
Again, Siri seems to have no understanding of what is going on. At the same time, it is clear that Siri lacks memory, because it does not connect its answers to the first question about whether it loves the human. It just answers random things that have nothing to do with the question. If Siri had a memory, it might understand the conversation better and thereby answer like an intelligent machine. In this case, Siri again seems to be programmed to respond with particular sentences when certain words are put together. Thus, The Chinese Room Experiment once more offers the best account of Siri as a socially intelligent agent.
According to Alac, Movellan and Tanaka (2011), the functionality of a social robot becomes more elaborate when it is put in a social setting. Interaction requires a form of communication directed beyond oneself. Their article describes an experiment in which a robot and a child are placed in the same room, and both tend to show interest in similar ways. A robot needs to gain knowledge through exposure, just as humans do. As a result, it becomes simpler for the robot to interact with humans on a social level. However, if a robot is constrained, the chances of social interaction decrease (Alac, Movellan and Tanaka 2011: 897).
With this in mind, it seems that interacting with the human becomes simpler for Siri the longer the human keeps talking to it. Siri becomes more elaborate the more it gets used to the social setting it is in. At first, Siri is very short and precise:
M: Hi Siri
S: Hi
M: How are you today?
S: Good
The more comfortable Siri gets with the social setting, the more interesting it becomes to interact with. At first, Siri does not ask any questions, which makes the conversation uninteresting for the human. Later in the conversation, when Siri is more comfortable, it starts asking questions too, and the exchange begins to resemble a real social conversation. Besides that, the conversation between Siri and the human starts off as a formal conversation, but as it develops it becomes very personal. This is mainly because the person asks whether Siri can be their friend, which shows that humans also get more comfortable as they get used to the social setting. The conversation might have looked different if the human had asked Siri, as the very first question, whether it was a friend, because Siri would not yet have had a chance to get used to the social setting.
In the above paper, the analysis of the conversation between Siri and a human has tried to assess whether Siri is a socially intelligent agent. The analysis has mainly used the theory of Kerstin Dautenhahn, in which concepts such as embodiment, memories, histories and believability are essential for the development of a socially intelligent agent. The overall conclusion is that Siri is not a fully socially intelligent agent. The chatbot is not embodied, and it does not have a fully developed ability to retain memories. Besides that, it does not have much believability, and the history of Siri is not fully developed either. The argument from The Chinese Room Experiment seems more workable for understanding Siri as a robot: Siri is programmed to answer in certain ways when particular words come into play. Siri does not understand the conversation, but it can answer because it knows rules and symbols with which to process what the human is saying. By that argument, Siri is not a socially intelligent agent.