Words: 831 | Pages: 2 | 5 min read | Published: Aug 16, 2019
Raven's Progressive Matrices are a set of visual problems commonly used to measure intelligence. The goal is to design an agent that can solve these problems just as humans can. To design such an agent, I will be using a combination of semantic networks and generate & test. Semantic networks are a form of knowledge representation that consists of nodes, links, and link labels (Winston, 1977, p. 19). An agent can use this representation of the problem to discern what the missing figure is. To get started, the agent needs to represent each of the given figures, and then use generate & test to create what the answer should be, assuming the same transformations apply. The answer choice most similar to the generated figure is the one the agent chooses as its answer.
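As a rough illustration of this kind of representation, the sketch below shows one way a figure could be encoded as a semantic network in Python: each object is a node with a shape attribute, and relationships between objects are stored as labeled links. The class names (Node, Figure), the link labels, and the specific shapes are assumptions made for illustration, not a fixed design from the project.

```python
# A minimal sketch of a semantic-network figure representation.
# The class names, link labels, and shapes below are illustrative
# assumptions, not a fixed design.

class Node:
    """One object in a figure, e.g. a single shape."""
    def __init__(self, name, shape):
        self.name = name        # e.g. "x"
        self.shape = shape      # e.g. "triangle" (placeholder)
        self.links = []         # outgoing labeled links to other nodes

    def add_link(self, label, target):
        """Record a labeled relationship such as ("top-left-of", y)."""
        self.links.append((label, target))


class Figure:
    """A figure is simply the set of nodes it contains."""
    def __init__(self, nodes):
        self.nodes = {node.name: node for node in nodes}


# A figure with two objects, where x is top-left of y:
x = Node("x", "triangle")
y = Node("y", "square")
x.add_link("top-left-of", y)
figure = Figure([x, y])
```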
This problem is difficult because there isn't a single "correct" way to represent a Raven's Progressive Matrices problem with semantic networks.
Figure 1. Challenge Problem D-12 (Raven, J. 2003, p. 235)
For example, in the picture above from Challenge Problem D-12, it is not initially obvious what needs to be represented. It could be the name of each shape, the orientation of the objects in relation to each other, the number of objects, and many other possibilities. A simple semantic network to represent the transformations of the first row could be:
A: x top left of y
B: x top of y, y top of z
C: x top left of y, y left of z, z bottom left of w
A to B transformation: x remains the same; a new z which y is on top of
B to C transformation: x remains the same, y remains the same, a new w which z is on top of
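Reusing the Node and Figure classes from the sketch above, the first row under this spatial representation might be encoded as follows. The shape names are placeholders (this representation only cares about spatial relationships), and the way the transformations are written out as data is an assumption for illustration.

```python
# First row under the spatial representation (placeholder shapes).

# Figure A: x top left of y
x, y = Node("x", "shape1"), Node("y", "shape2")
x.add_link("top-left-of", y)
fig_a = Figure([x, y])

# Figure B: x top of y, y top of z
x, y, z = Node("x", "shape1"), Node("y", "shape2"), Node("z", "shape3")
x.add_link("top-of", y)
y.add_link("top-of", z)
fig_b = Figure([x, y, z])

# A -> B transformation: x remains the same; a new z that y is on top of.
a_to_b = [("unchanged", "x"), ("add", "z", "below", "y")]

# B -> C transformation: x and y remain; a new w that z is on top of.
b_to_c = [("unchanged", "x"), ("unchanged", "y"), ("add", "w", "below", "z")]
```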
Another representation that may work is:
A: x is the same as y
B: x is the same as y, y is the same as z
C: x is the same as y, y is the same as z, z is the same as w
A to B transformation: x and y's relationship stays the same; a new z maintains this relationship. x, y, z are all different shapes compared to the last figure's x, y.
B to C transformation: x, y, and z's relationship stays the same; a new w keeps this going. x, y, z, w are all different shapes compared to the last figure's x, y, z.
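The same row under this second, shape-identity representation might look like the sketch below, again reusing the Node and Figure classes from above. The "same-shape-as" link label and the "change-shape" step are illustrative names, not a defined vocabulary.

```python
# First row under the shape-identity representation.

# Figure A: x is the same shape as y.
x, y = Node("x", "circle"), Node("y", "circle")
x.add_link("same-shape-as", y)
fig_a = Figure([x, y])

# A -> B transformation: keep the x-y relationship, add a new z that
# continues it, and give every object a different shape than it had in
# the previous figure.
a_to_b = [
    ("keep-relation", "x", "same-shape-as", "y"),
    ("add", "z", "same-shape-as", "y"),
    ("change-shape", ["x", "y", "z"]),
]
```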
These two knowledge representations can lead to different answers when applied to the last row. Using the first rule:
G: x top left of y, top right of z
H: x top left of y, z top right of y, w top right of z
G to H transformation: new z top left of y, new w top right of z
H to ? transformation: new shape a top left of z, new shape b top right of a.
Using the second rule:
G: x is the same shape as y, y is the same shape as z
H: x is the same shape as y, y is the same shape as z, z is the same shape as w
G to H transformation: a new shape z, which is the same shape as y, and a new shape w, which is the same as z. The new x, y, z, w also have a different shape than the last figure's x, y.
H to ? transformation: a new shape a, which is the same shape as z, and a new shape b, which is the same as a. The new a, b, x, y, z, w also have a different shape than the last figure's x, y, z, w.
The first rule lets the agent choose any answer choice that satisfies the H to ? transformation, which is based on the spatial relationships between the new shapes, while the second rule lets it choose answers based on the shapes' identities and how they differ from the previous figure. Unfortunately, neither of these rules solves this question, which goes to show that there may be rules that seem to explain the transformations yet are not correct.
Generate & test mitigates this issue by generating all possible transformations and testing each one to see which transformation leads to the best answer choice. If multiple answer choices fit a specific transformation, the answer is chosen by a weighting scheme. For example, figure A and figure B may differ by a reflection, but could also differ by a rotation. The weights for the transformations are used to choose between the rotation of figure C and the reflection of figure C. In this project, I would weight reflection over rotation because it is a more specific, and therefore more distinctive, transformation.
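To make this concrete, here is a minimal sketch of generate & test with a weighting scheme, assuming the figures and answer choices are same-sized black-and-white PIL images. The transform set, the weight values, and the function names are my own illustrative choices, not the project's actual implementation.

```python
# A minimal sketch of generate & test over simple image transforms.
# Assumes figures and answer choices are same-sized black-and-white
# PIL images; transform set, weights, and names are illustrative.

from PIL import ImageOps

TRANSFORMS = {
    "reflect_horizontal": lambda im: ImageOps.mirror(im),
    "reflect_vertical":   lambda im: ImageOps.flip(im),
    "rotate_90":          lambda im: im.rotate(90),
    "identity":           lambda im: im,
}

# Reflection is weighted above rotation because it is the more specific
# transformation (the values themselves are arbitrary).
WEIGHTS = {
    "reflect_horizontal": 1.0,
    "reflect_vertical":   1.0,
    "rotate_90":          0.5,
    "identity":           0.25,
}

def similarity(im1, im2):
    """Fraction of matching pixels between two same-sized images."""
    pixels1, pixels2 = list(im1.getdata()), list(im2.getdata())
    return sum(a == b for a, b in zip(pixels1, pixels2)) / len(pixels1)

def solve(figure_a, figure_b, figure_c, answer_choices, weights=WEIGHTS):
    """Generate a candidate answer from figure C under each transform,
    then test every answer choice against it, weighted by how well the
    transform explains A -> B and by the transform's weight."""
    best_label, best_score = None, -1.0
    for name, transform in TRANSFORMS.items():
        fit = similarity(transform(figure_a), figure_b)  # explains A -> B?
        generated = transform(figure_c)                  # the generated answer
        for label, choice in answer_choices.items():
            score = weights[name] * fit * similarity(generated, choice)
            if score > best_score:
                best_label, best_score = label, score
    return best_label, best_score
```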
This proposed solution does have some faults; namely, the weighting of the transformations is arbitrary. Based on the data alone, there is no reason reflection should have a higher weight than rotation. This may hold for most cases, but it can lead to a completely wrong result later on. A possible solution is an even more generalized generate & test: rather than using a single weighting scheme, the agent can run generate & test under multiple weighting schemes and compare the best results from each.
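A sketch of that more generalized version, reusing the solve() function from the sketch above: run generate & test under several weighting schemes and keep whichever scheme's best answer scores highest. The scheme names and values are arbitrary illustrations.

```python
# Extending the sketch above: compare the best result from each of
# several weighting schemes. Scheme names and values are illustrative.

WEIGHT_SCHEMES = {
    "prefer_reflection": {"reflect_horizontal": 1.0, "reflect_vertical": 1.0,
                          "rotate_90": 0.5, "identity": 0.25},
    "prefer_rotation":   {"reflect_horizontal": 0.5, "reflect_vertical": 0.5,
                          "rotate_90": 1.0, "identity": 0.25},
}

def solve_generalized(figure_a, figure_b, figure_c, answer_choices):
    """Run solve() under every weighting scheme and keep the best."""
    results = []
    for scheme_name, weights in WEIGHT_SCHEMES.items():
        label, score = solve(figure_a, figure_b, figure_c,
                             answer_choices, weights)
        results.append((score, label, scheme_name))
    return max(results)  # (best score, its answer choice, its scheme)
```

Comparing across schemes does not remove the arbitrariness entirely, but it keeps a single hand-picked weighting from silently deciding every answer.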