About this sample
Words: 1693 | Pages: 4 | 9 min read
Published: Mar 28, 2019
Having discussed the specific problem in Chapter 2 and surveyed the related research in the preceding chapters, it is now prudent to discuss the methods developed towards the solution of the denoising problem. For this purpose, this chapter develops a novel approach based on information-related alignment. The methodology uses pre- and post-filter banks to arrive at the solution, and the strategy of entropy minimization and entropy maximization is used to validate the results in the analysis of medical ultrasound images.
The central object in this approach is the image. From the classical point of view, an image is defined as something that can be perceived by the visual system [1, R. V. K. Reddy et al., 2016]. Image processing, however, deals with several classes of images, and not all of them are equally and directly perceptible to the human eye.
A digital grey-scale image can be modeled as a function on the discrete domain Ωd = [1, …, m] × [1, …, n] with the discrete range [0, …, 255]; it is typically represented by a two-dimensional m × n array.
Information is defined as the knowledge that the user gathers from the image about the facts or details of the subject of interest; it is the part of knowledge obtained from investigation or study.
The research paper "A Mathematical Theory of Communication", written by Claude Shannon in 1948, is widely accepted as the birth of the field now called information theory. Shannon used probability theory to model and describe information sources, treating the data produced by a source as a random variable. The measure of information depends upon the number of possible outcomes: if a message of length n is composed from s possible symbols, the measure of information is given by:
H = n log s = log s^n    …Eq. (3.1.a)
The larger the number of possible messages, the greater the amount of information. If only a single message is possible from an event, then that event is said to carry zero information, since
log 1 = 0    …Eq. (3.1.b)
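As a small illustration of Eqs. (3.1.a) and (3.1.b) (the message length and alphabet size below are hypothetical), the two forms of the measure agree numerically, and a single-outcome source carries no information:

```python
import math

def information_measure(n, s):
    """Measure of information for a message of length n over s symbols: H = n log s."""
    return n * math.log2(s)

# A message of length 8 over a binary alphabet carries 8 bits.
h = information_measure(8, 2)
assert abs(h - math.log2(2 ** 8)) < 1e-12  # n log s == log s^n

# A source with a single possible symbol (s = 1) carries no information: log 1 = 0.
assert information_measure(8, 1) == 0.0
```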
Shannon also defined the entropy of a system: if there are m events e1, e2, …, em with probabilities of occurrence p1, p2, …, pm, then the entropy is calculated as:
H = ∑ pi log(1/pi) = -∑ pi log pi    …Eq. (3.1.c)
where the information of each event is weighted by the probability of its occurrence.
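Eq. (3.1.c) can be sketched directly in Python; the probability values below are illustrative only:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p_i log2 p_i); zero-probability events contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform distribution over four events gives the maximum entropy: 2 bits.
assert shannon_entropy([0.25, 0.25, 0.25, 0.25]) == 2.0
# A certain event carries no information.
assert shannon_entropy([1.0]) == 0.0
```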
The Shannon entropy of an image is computed from the distribution of its grey-level values, represented by the grey-level histogram; the probability of each grey value is the number of times it occurs in the image divided by the total number of occurrences. Researchers have found that an image consisting of a single intensity has low entropy and thus contains little information, whereas an image with many intensities has higher entropy and carries a larger amount of information [10, J5, J. P. W. Pluim, 2003].
Shannon's formulation implies that the gain of information from an event is inversely proportional to the probability of its occurrence. According to [11, N. R. Pal, 1991], for an image I of size P × Q with grey values I(x, y) at position (x, y) belonging to the set of grey levels {0, 1, …, L-1}, the frequencies Nj of the grey levels satisfy:
∑_{j=0}^{L-1} Nj = PQ    …Eq. (3.3.a)
If p(xi) is the probability of a sequence xi of grey levels of length l, then the entropy is given as:
H = (1/l) ∑ p(xi) e^(1 - p(xi))    …Eq. (3.3.b)
Such entropy is called the global entropy.
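A minimal sketch of this global (exponential) entropy of Eq. (3.3.b) for sequence length l = 1, evaluated on an illustrative two-level distribution:

```python
import math

def exponential_entropy(probs):
    """Pal-style exponential (global) entropy for l = 1: H = sum(p_i * e^(1 - p_i))."""
    return sum(p * math.exp(1.0 - p) for p in probs)

# A degenerate (single-intensity) distribution gives the minimum value, 1.0.
assert exponential_entropy([1.0]) == 1.0
# An illustrative two-level uniform distribution: each term is 0.5 * e^0.5.
h = exponential_entropy([0.5, 0.5])
assert abs(h - math.exp(0.5)) < 1e-12
```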
Thus, the information present in an image is analyzed in terms of entropy, which measures its uncertainty.
The histogram of an image is a graph that shows, for each intensity value on the x-axis, the number of pixels having that intensity. For an 8-bit grey-scale image there are 256 possible intensities, and all 256 are displayed on the histogram. For colour images the histogram is three-dimensional, with separate axes for the R, G and B channels. The entropy of an 8-bit image is thus calculated over 256 quantization levels and is governed by the pixel counts N:
H(X) = -∑_{i=0}^{255} pi log pi    …(3.3.a)
pi = Ng/Np    …(3.3.b)
where Ng is the number of pixels at the corresponding grey level, Np is the total number of pixels in the image, and pi is the probability of occurrence of each grey-level intensity.
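The histogram-based computation of Eqs. (3.3.a)-(3.3.b) above can be sketched as follows; the images here are small hypothetical list-of-lists arrays:

```python
import math

def image_entropy(image, levels=256):
    """Entropy of a grey-scale image from its histogram, with p_i = N_g / N_p."""
    hist = [0] * levels
    for row in image:
        for g in row:
            hist[g] += 1
    total = sum(hist)
    return -sum((n / total) * math.log2(n / total) for n in hist if n > 0)

# A single-intensity image carries no information: entropy 0.
flat = [[128] * 4 for _ in range(4)]
assert image_entropy(flat) == 0.0

# Two intensities in equal proportion give 1 bit of entropy.
half = [[0, 255], [255, 0]]
assert image_entropy(half) == 1.0
```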
Since the information present in an image can be analyzed in terms of entropy, the entropy of an image decreases as the amount of information contained in it decreases.
For an 8-bit grey-scale image with 256 possible intensities, histogram equalization operates on the statistical distribution of the grey levels present in the image. It is a form of contrast enhancement used to increase the global contrast of the image: it adjusts the pixel values to obtain a better distribution, stretching the histogram of the image over the available intensity range.
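A minimal sketch of global histogram equalization via the cumulative distribution function; the mapping `round(cdf * (L - 1))` used here is one common convention, not the only one:

```python
def equalize_histogram(image, levels=256):
    """Global histogram equalization for an 8-bit grey-scale image (list of lists)."""
    hist = [0] * levels
    for row in image:
        for g in row:
            hist[g] += 1
    total = sum(hist)
    # Cumulative distribution function, then map each level onto the full range.
    cdf, running = [], 0
    for n in hist:
        running += n
        cdf.append(running / total)
    lut = [round(c * (levels - 1)) for c in cdf]
    return [[lut[g] for g in row] for row in image]

# A low-contrast image occupying only levels 100-103 is stretched over the full range.
out = equalize_histogram([[100, 101], [102, 103]])
assert out[1][1] == 255  # the brightest level maps to the top of the range
```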
Histogram equalization is one of the popular conventional methods for image enhancement. It redistributes the grey levels of the histogram of an image, often with a significant change in the brightness of the image. This leads to the limitations of the conventional method: loss of the originality of the image, loss of minute details, and over-enhancement. Many researchers have therefore worked on histogram equalization techniques and their variants. As reported in [5, E1, M. Kaur et al., 2013], there are various such variants. In Brightness Preserving Bi-Histogram Equalization (BBHE), the histogram of the input image is divided into two parts at a point XT, so that two histograms are generated over the ranges 0 to XT and XT+1 to XL-1; the two histograms are then equalized separately. In Dualistic Sub-Image Histogram Equalization (DSIHE), the input image is divided into two sub-images of equal area, i.e. with equal numbers of pixels. The brightness of the output image equals the average of the grey level separating the sub-images and the mean grey level. The researchers have also noted a drawback of DSIHE: it cannot produce a significant effect on image brightness.
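The BBHE splitting step described above can be sketched as follows; using the mean grey level as the split point XT and an independent CDF mapping per range are simplifying assumptions, not a reference implementation:

```python
def bbhe(image, levels=256):
    """Brightness Preserving Bi-Histogram Equalization (sketch): split the
    histogram at the mean grey level XT and equalize [0, XT] and
    [XT+1, L-1] independently, each onto its own output range."""
    pixels = [g for row in image for g in row]
    xt = int(sum(pixels) / len(pixels))  # split point: mean brightness

    def equalize_range(lo, hi):
        # Equalize only the pixels falling in [lo, hi], onto the output range [lo, hi].
        sub = [g for g in pixels if lo <= g <= hi]
        hist = [0] * levels
        for g in sub:
            hist[g] += 1
        lut, running = {}, 0
        for level in range(lo, hi + 1):
            running += hist[level]
            if sub:
                lut[level] = lo + round((hi - lo) * running / len(sub))
        return lut

    lut = {}
    lut.update(equalize_range(0, xt))
    lut.update(equalize_range(xt + 1, levels - 1))
    return [[lut[g] for g in row] for row in image]

# Dark pixels stay in the lower range, bright pixels in the upper range,
# which is what preserves the overall brightness.
out = bbhe([[10, 20], [200, 210]])
assert max(out[0]) <= 110 and min(out[1]) >= 111
```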
Minimum Mean Brightness Error Bi-Histogram Equalization (MMBEBHE) follows the same approach as BBHE and DSIHE, but with an explicit threshold. The method defines a threshold level for dividing the input image into sub-images: if Ti is the threshold level, the ranges of the two sub-images are I[0, Ti] and I[Ti+1, L-1]. MMBEBHE chooses the threshold so as to minimize the mean brightness error, and the histograms of the sub-images are then equalized. Recursive Mean Separate Histogram Equalization (RMSHE), a generalization of conventional histogram equalization, decomposes the input image recursively up to a defined scale r, resulting in 2^r sub-images.
Each sub-image is then enhanced independently by equalizing its own histogram. According to the authors of [13, M3, M. Kaur et al., 2011], for a scale value of r = 0 RMSHE generates no sub-images, while for r = 1 it works equivalently to the BBHE method. As the value of r increases, the tendency to preserve the brightness of the image increases.
Mean Brightness Preserving Histogram Equalization (MBPHE) follows conventional histogram equalization while tending to preserve the mean brightness of the image. Bisectional MBPHE can preserve brightness only if the histogram of the input image has a quasi-symmetrical distribution around its point of separation, a condition that often fails in real-time applications, where multi-sectional MBPHE performs better. Multi-sectional MBPHE divides the input histogram into R sub-histograms, for any positive integer R; the sub-histograms are created recursively, and each is finally equalized independently.
Dynamic Histogram Equalization (DHE) divides the histogram of the input image into sub-histograms until it can ensure that no portion of the image dominates and no portion remains for further sub-division; each sub-histogram is allotted a dynamic grey-level range, and all sub-histograms are then equalized independently. Brightness Preserving Dynamic Histogram Equalization (BPDHE), in turn, equalizes the mean intensity of the input image towards the mean intensity of the output image. This method divides the histogram at its local maxima, maps each portion to a dynamic range, and then normalizes the output intensity.
According to the authors, BPDHE works better than MBPHE and DHE. In Contrast Limited Adaptive Histogram Equalization (CLAHE), the algorithm first partitions the input image into contextual regions and then applies histogram equalization to each region. This approach reveals even more of the hidden features of the image. Since CLAHE works on small areas of the image, called tiles, the contrast of each tile is enhanced independently [14, H1, H. S. S. Ahmed, 2011]. Adjacent tiles are then combined by a process called bilinear interpolation. CLAHE overcomes the limitations of HE and adaptive HE, particularly in homogeneous areas: under HE and adaptive HE, homogeneous areas produce high peaks in the histogram of a contextual region, and in several cases pixels may also fall inside the grey-level slope.
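A heavily simplified sketch of the CLAHE idea, per-tile clipped histogram equalization; the bilinear blending of adjacent tiles described above is omitted for brevity, and `tile` and `clip_limit` are illustrative parameters:

```python
def clahe_simplified(image, tile=8, clip_limit=0.03, levels=256):
    """Simplified CLAHE sketch: clip each tile's histogram at a cap,
    redistribute the excess uniformly (the contrast limit that tames
    homogeneous-area peaks), then equalize the tile via the clipped CDF."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            coords = [(y, x) for y in range(ty, min(ty + tile, h))
                             for x in range(tx, min(tx + tile, w))]
            # Build the tile's histogram.
            hist = [0] * levels
            for y, x in coords:
                hist[image[y][x]] += 1
            # Clip peaks and redistribute the excess uniformly over all bins.
            cap = max(1, int(clip_limit * len(coords)))
            excess = sum(max(0, n - cap) for n in hist)
            hist = [min(n, cap) + excess / levels for n in hist]
            # Equalize within the tile via the clipped CDF.
            total = sum(hist)
            lut, running = [0] * levels, 0.0
            for g in range(levels):
                running += hist[g]
                lut[g] = round((levels - 1) * running / total)
            for y, x in coords:
                out[y][x] = lut[image[y][x]]
    return out

# A homogeneous tile is barely changed, instead of being blown out to
# white as plain per-tile HE would do: this is the effect of the clip limit.
out = clahe_simplified([[128] * 8 for _ in range(8)])
assert 120 <= out[0][0] <= 140
```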