About this sample
Words: 541 | Page: 1 | 3 min read
Published: Apr 30, 2020
K-Nearest Neighbor (KNN) is one of the most popular algorithms for pattern recognition. Many researchers have found that the KNN algorithm achieves very good performance in their experiments on different data sets. The traditional KNN classification algorithm has three limitations: (i) high computational complexity, because all the training samples are used for classification; (ii) performance that depends solely on the training set; and (iii) sensitivity to the selection of k.
Nearest neighbor search is one of the most popular learning and classification techniques, introduced by Fix and Hodges, and it has proved to be a simple and powerful recognition algorithm. Cover and Hart showed that the decision rule performs well even when no explicit knowledge of the data is available. A simple generalization of this method is the K-NN rule, in which a new pattern is classified into the class with the most members among its K nearest neighbors.
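The K-NN rule described above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation; the function name `knn_classify` and the toy data are assumptions introduced here for clarity.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points under Euclidean distance.

    `train` is a list of (point, label) pairs; `query` is a point.
    """
    # Sort all training samples by distance to the query point
    # (this is the source of the computational-complexity limitation).
    by_dist = sorted(train, key=lambda pl: math.dist(pl[0], query))
    # Take the labels of the k closest samples.
    k_labels = [label for _, label in by_dist[:k]]
    # The predicted class is the most common label among them.
    return Counter(k_labels).most_common(1)[0][0]

# Toy data: two well-separated clusters.
train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]
print(knn_classify(train, (0.5, 0.5), k=3))  # → A
print(knn_classify(train, (5.5, 5.5), k=3))  # → B
```

Note that every prediction scans the full training set, which is exactly the first limitation mentioned above.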
The efficiency of kNNC depends largely on the effective selection of the k nearest neighbors. A limitation of conventional kNNC is that once the criteria for selecting the k nearest neighbors are chosen, they remain unchanged. This rigidity is unsuitable for many real-life cases where we want to make a correct prediction or classification.

An instance is described in the database by a number of attributes and the corresponding values of those attributes, so the similarity between any two instances is determined by the similarity of their attribute values. But in real-life data, when we describe two instances and try to measure their similarity, similarities in different attributes do not carry the same weight with respect to a particular classification. Moreover, as more training data arrives over time, similarity in a particular attribute value may come to carry more or less importance than before. For example, suppose we are trying to predict the outcome of a soccer game from previous results. In that prediction, the venue and the weather play a very important role in the outcome of the game. But if in the future all soccer games are played in indoor stadiums, the weather will no longer have the same effect on the outcome.
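One way to make the point about unequal attribute importance concrete is a per-attribute weighted distance, where a weight vector scales each attribute's contribution. This is a hypothetical sketch: the function `weighted_knn_classify`, the weight values, and the toy data are illustrative assumptions, not part of the essay's source material.

```python
import math
from collections import Counter

def weighted_knn_classify(train, query, weights, k=3):
    """k-NN with a per-attribute weighted Euclidean distance.

    `weights[i]` scales the contribution of attribute i, so
    attributes known to matter more for a particular
    classification can dominate the neighbor selection.
    """
    def dist(point):
        # Weighted squared differences, attribute by attribute.
        return math.sqrt(sum(w * (a - b) ** 2
                             for w, a, b in zip(weights, point, query)))
    neighbors = sorted(train, key=lambda pl: dist(pl[0]))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# Toy example: attribute 0 is informative, attribute 1 is noise
# (e.g. weather for games played in an indoor stadium).
train = [((0, 9), "win"), ((1, 0), "win"),
         ((9, 8), "loss"), ((8, 1), "loss")]
# Setting the noisy attribute's weight to zero lets attribute 0 decide.
print(weighted_knn_classify(train, (1, 8), weights=(1.0, 0.0), k=3))  # → win
```

If the importance of an attribute changes as new training data accumulates, the weight vector can be re-estimated rather than fixed once, which addresses the rigidity described above.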