This algorithm can achieve a 99.6 percent detection rate on 9,825 images, assuming the edges of the license plate frame are clear and horizontal. Moreover, extracting characters from the binary image to locate the number-plate region is time-consuming, because every binary object has to be processed, and the method gives an incorrect result if other text is present in the image.
Greyscale images are images in which each pixel holds only a single value, carrying intensity information alone. They are also known as black-and-white or monochrome images, since they consist mostly of shades of grey; the intensity is scaled so that black has the lowest value and white the highest. We first convert the colour image into a greyscale image. The expression is: R = rgb2gray(p), where R is the greyscale image and p is the colour image.
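The conversion above can be sketched as follows. This is an illustrative pure-Python version, not the essay's MATLAB code; it uses the common luminosity weights (0.299, 0.587, 0.114), which are the same coefficients MATLAB's rgb2gray applies, and a nested list stands in for a real image array:

```python
def rgb_to_grey(image):
    """Convert an H x W image of (r, g, b) tuples to greyscale values."""
    grey = []
    for row in image:
        # Weighted sum of the three channels gives the intensity.
        grey.append([round(0.299 * r + 0.587 * g + 0.114 * b)
                     for (r, g, b) in row])
    return grey

# A 1 x 2 toy image: one pure-red pixel, one pure-white pixel.
img = [[(255, 0, 0), (255, 255, 255)]]
print(rgb_to_grey(img))  # red maps to ~76, white stays 255
```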
Colour processing is a fundamental step in image processing, and in plate recognition in particular, because most countries fix norms for plate colours and characters; in India, for example, vehicles must display black letters on a white background. Poor lighting conditions and the plate's location, however, degrade the output, which is why colour processing is needed to retrieve the characters accurately and with greater efficiency.
Before thresholding, the image must be converted to greyscale. Thresholding is performed to create a binary image. In adaptive thresholding, a threshold value is computed for each pixel from its local neighbourhood, and the pixel is replaced with a white pixel if its value falls below the threshold or a black pixel if it exceeds it. The threshold is derived from the local mean of the pixel intensities in a window of m × n pixels, offset by a constant β:

O(x, y) = 255 if I(x, y) < α(x, y) − β
O(x, y) = 0 if I(x, y) ≥ α(x, y) − β

where I and O are the input and output images respectively, α(x, y) is the local mean around (x, y), and β is a constant offset. The window size parameters m and n are chosen according to the character size in the region.
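A minimal sketch of this local-mean adaptive thresholding, in pure Python rather than the essay's environment: each pixel is compared against the mean of an m × n window centred on it, minus the offset β. Pixels darker than the local mean (the characters) become white (255) in the binary image, everything else black (0). The window size and β used here are illustrative choices, not values from the text:

```python
def adaptive_threshold(image, m=3, n=3, beta=2):
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the m x n neighbourhood, clipped at the borders.
            vals = [image[j][i]
                    for j in range(max(0, y - m // 2), min(h, y + m // 2 + 1))
                    for i in range(max(0, x - n // 2), min(w, x + n // 2 + 1))]
            alpha = sum(vals) / len(vals)  # local mean
            out[y][x] = 255 if image[y][x] < alpha - beta else 0
    return out

# A dark stroke (value 10) on a bright background (value 200):
img = [[200, 200, 200],
       [200,  10, 200],
       [200, 200, 200]]
print(adaptive_threshold(img))  # only the dark pixel becomes 255
```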
To expand the contrast of the image we perform histogram equalization. This contrast-stretching process increases the sharpness of the image. The grey-level histogram of an image is the distribution of its grey values, and histogram equalization is a popular method for improving the appearance of an image with very poor contrast. The process is divided into four steps: (i) form the running sum of the histogram values; (ii) divide these values by the total number of pixels to normalize them; (iii) multiply the normalized values by the highest grey-level value; (iv) map each pixel to its new grey level.
A median filter is used to remove unwanted noise from the image. In this method a 3×3 matrix is passed over the image; these dimensions can be adjusted according to the noise level. The process involves: (i) one pixel of the 3×3 window is chosen as the centre pixel; (ii) all the surrounding pixels are taken as its neighbourhood; (iii) these nine pixels are sorted from smallest to largest; (iv) the centre pixel is replaced by the fifth (median) element; (v) this procedure is applied to every pixel of the plate image.
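The five steps above can be sketched as follows: slide a 3×3 window over the image, sort the nine neighbourhood values, and replace the centre pixel with the fifth (median) element. Leaving border pixels unchanged is an illustrative simplification, not part of the original method:

```python
def median_filter(image):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # (i)-(iii) centre pixel plus its eight neighbours, sorted
            window = sorted(image[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            # (iv) the fifth element of nine is the median
            out[y][x] = window[4]
    return out

# A single salt-noise spike (255) in an otherwise uniform region:
img = [[10, 10, 10],
       [10, 255, 10],
       [10, 10, 10]]
print(median_filter(img))  # the spike is replaced by the median, 10
```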
Using MATLAB's regionprops function, the characters in the resulting number-plate region are segmented: regionprops returns the smallest bounding box that contains each character, and in this way the bounding boxes of all characters in the number plate are obtained.
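The idea behind that bounding-box extraction can be sketched in pure Python: find each connected component of the binary image by flood fill and record its extents. The function name, 4-connectivity, and (x0, y0, x1, y1) return format are our own illustrative choices, not MATLAB's API:

```python
def bounding_boxes(binary):
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # Flood-fill one component, tracking its extents.
                stack, x0, x1, y0, y1 = [(y, x)], x, x, y, y
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    x0, x1 = min(x0, cx), max(x1, cx)
                    y0, y1 = min(y0, cy), max(y1, cy)
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((x0, y0, x1, y1))
    return boxes

# Two separate blobs give two boxes, one per character-like component:
img = [[1, 0, 0, 1],
       [1, 0, 0, 1],
       [0, 0, 0, 1]]
print(bounding_boxes(img))
```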
In the feature-extraction process we find, mark, and save the features of the segmented number plate. To recognize the characters in number-plate images we use the zonal-density feature: the image is divided into zones and the object pixels in each zone are counted, the density of each zone being its object-pixel count. The total number of zones in the image equals the total number of features acquired from it. For 16 zonal-density features we divide a 32×32 image into 16 zones; to be divided into 16, 64, 128, or 256 zones, the image should be 32×32 pixels.
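A sketch of the zonal-density feature as described: a 32×32 binary character image is split into 16 zones of 8×8 pixels, and each feature is the density of object pixels in its zone (here normalized by the zone area, an illustrative choice):

```python
def zonal_density(binary, zones_per_side=4):
    size = len(binary)               # assume a square image, e.g. 32 x 32
    step = size // zones_per_side    # zone side length, e.g. 8
    features = []
    for zy in range(zones_per_side):
        for zx in range(zones_per_side):
            # Count object pixels in this zone and normalize by its area.
            count = sum(binary[y][x]
                        for y in range(zy * step, (zy + 1) * step)
                        for x in range(zx * step, (zx + 1) * step))
            features.append(count / (step * step))
    return features

# A 32 x 32 image whose top-left 8 x 8 zone is entirely object pixels:
img = [[1 if y < 8 and x < 8 else 0 for x in range(32)] for y in range(32)]
feats = zonal_density(img)
print(len(feats), feats[0], feats[1])  # 16 features; zone 0 full, zone 1 empty
```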