About this sample
Words: 3110 | Pages: 7 | 16 min read
Published: Jan 21, 2020
Digital image acquisition, also known as digital imaging, is the formation of a digitally encoded visual representation of an object. Gathering information about an object or phenomenon without any physical contact with it is the essence of remote sensing. Jensen (2000) defines remote sensing as a technique for measuring information about an object without touching it (as cited in Liu, 2014). More specifically, the term refers to the use of satellite- or aircraft-based sensor technologies to detect and classify transmitted signals such as electromagnetic radiation. Satellite radiation sensors fall into two categories: passive and active. In a passive system, the Sun acts as the main source of electromagnetic radiation and the sensor measures what the scene reflects or emits, whereas an active system transmits its own energy downwards and detects the portion reflected back from the Earth. Several theories and methods describe how images are captured from satellites in terms of space and time.
To explain image acquisition in more detail: it refers to the act of retrieving an image from a hardware-based source for processing. Image acquisition is a crucial first step, because if no image is obtained, none of the subsequent steps can be performed and the image cannot be processed. Three elements are needed to convert incoming energy into a digital image: energy reflected from the object of interest, a sensing system that concentrates that energy, and a sensor device that can detect and measure it. In brief, the sensor material, driven by input electrical power, reacts with the detected energy and converts it into a voltage. The waveform of the output voltage is the sensor's response, and this analog response is then digitized to yield a digital quantity.
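The digitization step described above can be sketched as a simple quantizer. This is only an illustrative model, not any real sensor's API; the function name and parameter values are made up.

```python
def quantize(voltage, v_max=1.0, bits=8):
    """Map an analog sensor voltage in [0, v_max] to a digital level.

    A real analog-to-digital converter does this in hardware; this is
    just a toy model of the voltage-to-digital-quantity step.
    """
    levels = 2 ** bits                       # number of digital codes
    clamped = min(max(voltage, 0.0), v_max)  # sensors saturate outside range
    # Scale to [0, levels - 1] and truncate to an integer code.
    return min(int(clamped / v_max * levels), levels - 1)
```

For example, with 8 bits a mid-range voltage of 0.5 V maps to code 128, and any voltage at or above `v_max` saturates at 255.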
Mishra, Kumar and Shukla (2017) define image acquisition as the action of capturing images before they are analysed. It can be performed using several types of sensor. First, image acquisition can be done with a single sensor. One common, well-known example is the photodiode, which is made of silicon and produces an output voltage waveform proportional to the incident light. To improve the sensor's selectivity, placing a filter in front of it should be considered: with a filter present, the sensor output favours the band that the filter passes. To obtain a two-dimensional image with a single sensor, there must be relative motion between the sensor and the object in both the x and y directions; typically, rotation provides movement in one direction while linear motion provides movement along the perpendicular direction. Second, image acquisition can be done with a line sensor, also known as a sensor strip, which is used far more often than a single sensor. Here the individual sensors are arranged in a line to form a strip, so the strip images in one direction at a time, and motion perpendicular to the strip provides the other direction. Finally, a digital image can be acquired with an array sensor, in which individual sensors are arranged in a two-dimensional pattern. Large numbers of electromagnetic, and some ultrasonic, sensing elements are laid out in an ordered grid. This arrangement is found predominantly in digital cameras, where the most common sensor type is the CCD array, which also dominates other light-sensing devices.
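The single-sensor case can be sketched as a scanning loop: one detector is swept across the scene, and the two nested loops stand in for the rotational and linear motions described above. The `scene` callable is a hypothetical stand-in for the physical brightness at a point.

```python
def scan_scene(scene, rows, cols):
    """Build a 2D image with a single sensor by mechanical scanning.

    `scene(x, y)` models the brightness at a point; the outer loop
    models stepping perpendicular to the scan line, the inner loop
    models linear motion along it.
    """
    image = []
    for y in range(rows):          # step perpendicular to the scan line
        line = []
        for x in range(cols):      # linear motion along the scan line
            line.append(scene(x, y))
        image.append(line)
    return image

# A toy gradient scene, scanned into a 2-row, 3-column image.
img = scan_scene(lambda x, y: x + y, rows=2, cols=3)
```

A line sensor would replace the inner loop with a simultaneous read of a whole row, and an array sensor would capture the whole grid at once.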
Each sensor's response is proportional to the light energy projected onto its surface. Array sensors are widely used in astronomy and other applications that require low-noise images; the noise can be reduced by letting the sensor integrate the incoming light signal over minutes or even hours. A further advantage of the array sensor is that, because its arrangement is two-dimensional, a complete image can be acquired simply by focusing the energy pattern onto the surface of the array. Since the sensor array coincides with the focal plane, the result is proportional to the amount of light received at each sensor.
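The noise-reduction claim can be illustrated numerically: averaging many noisy readings, as a long integration does, shrinks zero-mean noise by roughly the square root of the number of samples. All numbers below are invented for the sketch.

```python
import random

def integrate_readings(true_signal, noise_std, n_samples, rng):
    """Average many noisy sensor readings, as a long exposure does.

    Each reading is the true signal plus zero-mean Gaussian noise;
    averaging n samples reduces the noise by roughly sqrt(n).
    """
    total = 0.0
    for _ in range(n_samples):
        total += true_signal + rng.gauss(0.0, noise_std)
    return total / n_samples

rng = random.Random(42)   # fixed seed so the sketch is repeatable
one_shot = integrate_readings(10.0, 2.0, 1, rng)        # a single noisy read
averaged = integrate_readings(10.0, 2.0, 10_000, rng)   # a long "exposure"
```

With 10,000 samples the standard error of the mean drops from 2.0 to about 0.02, so `averaged` sits very close to the true signal of 10.0.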
The image acquisition pipeline also includes image compression, a process whose objective is to produce a compact, dense digital representation of a signal. Compression techniques fall into two groups, depending on whether the original image can be reconstructed exactly from the compressed one: lossless compression techniques and lossy compression techniques. In lossless compression, the reconstructed data must match the original value for every sample. In lossy compression, which dominates image and video processing applications, the reconstructed data need not equal the original, so a limited amount of loss in the decoded values is permitted. Any compression process whose output differs from its input is, by definition, lossy.
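The lossless/lossy distinction can be demonstrated concretely. Below, `zlib` (a real lossless codec from the Python standard library) round-trips the data exactly, while the toy quantizer, a stand-in invented here rather than any real codec, discards information that cannot be recovered.

```python
import zlib

data = bytes(range(256)) * 4   # stand-in for raw image samples

# Lossless: decompression restores every sample exactly.
restored = zlib.decompress(zlib.compress(data))

def lossy_compress(samples, step=16):
    """Toy lossy scheme: round each sample down to a multiple of `step`.

    Real lossy codecs such as JPEG are far more elaborate; this only
    shows that some information is discarded irreversibly.
    """
    return bytes((s // step) * step for s in samples)

lossy = lossy_compress(data)
```

The lossless round trip satisfies `restored == data` sample for sample, whereas the lossy output differs from the input, though each sample stays within the quantization step of its original value.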
To obtain an image from a remote sensing satellite, several steps must be followed. Satellite image processing operations fall into four main groups: image rectification and restoration, image enhancement, image classification and, lastly, information extraction. The overall goal is to convert the raw image data into accurate data and to remove any noise or disturbance present. For this, the data should be recorded and kept in digital form suitable for storage on a computer or disk, and appropriate hardware, software and an image analysis system are also needed. Commercially developed packages for remote sensing image processing and analysis include SAGA GIS and InterImage. The first step, pre-processing, also known as image rectification and restoration, is crucial. Its aim is to ensure that the platform-specific radiometric and geometric data are accurate and, as far as possible, error-free. These prior operations are grouped into geometric and radiometric corrections. Radiometric corrections are needed because of variations in scene illumination, viewing geometry, atmospheric conditions, and sensor noise and response. Each of these depends on the specific sensor and platform used to acquire the data and on the conditions during acquisition, so the data must be adjusted before meaningful comparisons can be made between scenes. Examples of radiometric correction include compensating for sensor defects and removing unwanted sensor or atmospheric noise. Geometric corrections, in turn, address geometric distortion caused by variations in sensor-Earth geometry.
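A minimal sketch of a radiometric correction is the standard linear gain/offset model, which converts a raw digital number (DN) to at-sensor radiance. The coefficient values below are invented for illustration and are not taken from any real sensor's calibration file.

```python
def radiometric_correct(dn, gain, offset):
    """Convert a raw digital number to at-sensor radiance.

    The linear gain/offset model is the usual first step of
    radiometric calibration; gain and offset come from the
    sensor's calibration metadata (hypothetical values here).
    """
    return gain * dn + offset

# Apply the correction to a few raw pixel values.
radiance = [radiometric_correct(dn, gain=0.05, offset=1.0)
            for dn in (0, 100, 255)]
```

The same linear form is applied per band; later steps (atmospheric correction, geometric correction) then operate on these calibrated values.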
Noise in an image, such as systematic striping or banding and dropped lines, is caused by defects in the sensor response and in transmission, and these effects should be corrected before the next process is performed. Other errors cannot be corrected this way, so a geometric registration process must be carried out instead. Geometric registration identifies the image coordinates, in (row, column) form, of certain points known as Ground Control Points (GCPs). The GCPs in the distorted image are matched to their accurate positions on a map; this is known as image-to-map registration.
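The registration idea can be sketched in its simplest form: estimating a pure translation from GCP pairs. Real registration fits richer models (affine or polynomial warps), and the GCP coordinates below are hypothetical.

```python
def fit_translation(gcps):
    """Estimate a row/column shift from ground control point pairs.

    Each pair maps a (row, col) position in the distorted image to
    its true map position; the average offset is the best-fit
    translation. A pure shift is the simplest illustrative model.
    """
    n = len(gcps)
    d_row = sum(true_r - img_r
                for (img_r, img_c), (true_r, true_c) in gcps) / n
    d_col = sum(true_c - img_c
                for (img_r, img_c), (true_r, true_c) in gcps) / n
    return d_row, d_col

# Hypothetical GCPs: the image is shifted 3 rows down, 2 columns right.
gcps = [((10, 20), (13, 22)), ((40, 5), (43, 7)), ((25, 30), (28, 32))]
shift = fit_translation(gcps)
```

With consistent GCPs the recovered shift is exactly (3, 2); with noisy GCPs the averaging gives a least-squares estimate of the translation.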
The second step in satellite image processing is image enhancement. Its main objective is to improve the visual appearance of the imagery so as to aid visual interpretation and analysis. Common types of enhancement found in GIS and image processing tools include contrast enhancement, linear stretch, histogram equalization, density slicing and edge enhancement. Contrast enhancement adjusts image brightness to suit the display system: the original values are remapped so that more of the available range is used, strengthening the contrast between targets and background. A linear stretch recalibrates the original brightness values into a new distribution. In histogram equalization, the original brightness values are modified to form a uniform distribution of intensity, while density slicing maps intervals of brightness values onto discrete colours. Finally, edge enhancement strengthens contrast in a local region to emphasize transitions between regions of differing brightness. It is important to examine the image histograms before conducting any enhancement; the histogram is usually shown in three bands, red, green and blue.
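The linear stretch mentioned above can be sketched directly: the darkest input value is mapped to the bottom of the display range and the brightest to the top, with everything in between rescaled linearly. The pixel values are invented for the example.

```python
def linear_stretch(pixels, out_min=0, out_max=255):
    """Linearly remap pixel values to span the full display range.

    The darkest input becomes out_min and the brightest out_max,
    which is exactly the linear contrast stretch described above.
    """
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return [out_min] * len(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

# A low-contrast band confined to [50, 100] stretched over [0, 255].
stretched = linear_stretch([50, 60, 70, 80, 90, 100])
```

Histogram equalization differs in that the mapping follows the cumulative histogram rather than a straight line, so densely populated brightness ranges get spread out more.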
Image classification is the process of assigning land cover classes to pixels. For example, land cover data sets might be categorised into forest, urban, agriculture and other classes. There are three main image classification methods in remote sensing: unsupervised classification, supervised classification and object-based image analysis. Unsupervised and supervised classification are the most widely used by practitioners in the field, but object-based image analysis has become popular recently because it makes good use of high-resolution data. In unsupervised classification, pixels are first grouped into clusters according to their properties. This is the most basic technique, as it does not require any training samples; it involves only two steps, generating the clusters and assigning classes to them. In supervised classification, the analyst selects training samples that the software then applies to the entire image, in three steps: select training areas, generate signature files and classify. Finally, object-based classification groups pixels into segments of varying shape and size through a process called multi-resolution segmentation (or segment mean shift), forming image objects composed of similar pixels.
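The clustering at the heart of unsupervised classification can be sketched with a tiny one-dimensional k-means. Real tools cluster full multi-band spectral vectors, but the assign-then-update loop is the same; the pixel brightnesses below are invented.

```python
def kmeans_1d(values, centers, iters=10):
    """Tiny 1-D k-means: the core of unsupervised classification.

    Each value is assigned to its nearest cluster centre, then each
    centre is re-estimated as the mean of its members. These are
    the "generate clusters" and "assign" steps from the text.
    """
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:                                   # assign step
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        centers = [sum(c) / len(c) if c else centers[i]    # update step
                   for i, c in enumerate(clusters)]
    return centers

# Hypothetical pixel brightnesses forming two obvious groups
# (e.g. dark water vs bright bare soil).
centers = kmeans_1d([10, 12, 11, 200, 205, 198], centers=[0, 255])
```

The two recovered centres land on the means of the two natural groups; an analyst would then label those clusters with land cover classes after the fact, which is exactly what distinguishes unsupervised from supervised classification.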
The resulting image is composed of objects of different scales and sizes, and these objects are more meaningful than individual pixels because they represent true features in the image. With object-based classification, objects can be classified by texture, context and geometry, and they can be created and classified using multiple bands. However, choosing a higher-resolution image does not by itself guarantee a better land cover map; what matters is selecting the classification technique that yields a precise output. To obtain good land cover results, we should know when to use pixel-based methods (supervised and unsupervised classification) and when to choose object-based classification. The main factor to consider is spatial resolution, which refers to the number of pixels used in constructing an image: the more pixels, the higher the spatial resolution. For a low spatial resolution image, either pixel-based or object-based techniques will perform well, but for a high spatial resolution image, object-based classification provides more precise and accurate outputs. A case study conducted at the University of Arkansas comparing object-based and pixel-based classification found that object-based classification outperformed pixel-based classification, because it exploits both spectral and contextual information; it therefore achieves higher accuracy and yields results we can better rely on.
As mentioned before, remote sensing instruments are divided into passive and active types. Passive instruments detect only natural radiation, emitted by the scene or reflected from an external source; the most common external source is sunlight. Active instruments, by contrast, supply their own illumination and sense the reflected return. Passive instruments include the radiometer, imaging radiometer, spectrometer and spectroradiometer. A radiometer measures the strength of electromagnetic radiation in some band of the electromagnetic spectrum, while an imaging radiometer adds scanning capability to produce a two-dimensional array of pixels. A spectrometer is used to detect, measure and analyse the spectral content of electromagnetic radiation. Examples of active instruments are radar, the scatterometer, lidar and the laser altimeter. Radar (Radio Detection and Ranging) uses a transmitter operating at radio or microwave frequencies to emit electromagnetic radiation. A scatterometer is a high-frequency microwave radar designed to measure backscattered radiation. Lidar (Light Detection and Ranging) uses a laser to broadcast a light pulse and a receiver with a sensitive detector to measure the backscattered and reflected light. Lastly, a laser altimeter uses lidar to measure the height of the instrument platform above the surface.
There are two main types of satellite orbit: the geostationary orbit and the polar orbit. Geostationary orbits serve many applications, such as direct broadcast, communications and relay systems. Direct-broadcast TV in particular uses this orbit because the satellite remains over the same point throughout the day, so a receiving antenna, once pointed at the satellite, never needs to change direction. A geostationary satellite orbits in the same direction as the Earth's rotation with a period of approximately 24 hours; because it rotates at the same angular velocity as the Earth and in the same direction, it stays fixed relative to a point on the surface. Ensuring that it revolves at exactly the Earth's rotation rate requires knowing the Earth's exact rotation period. Geostationary satellites clearly cannot provide full global coverage, but a single one can see roughly 42% of the Earth's surface, so in theory three satellites spaced around the globe give complete coverage around the equator. Even in geostationary orbit, various forces still act on the satellite and slowly shift its position over time; these include the Earth's elliptical shape and the gravitational pull of the Sun and Moon, which gradually increase the orbital inclination. Such disturbances can be countered because satellites carry fuel for "station-keeping", which returns them to their desired position. One remaining issue with geostationary satellites, however, is the delay caused by the sheer length of the signal path.
This delay is noticeable, for example, in telephone conversations carried over satellite links. A familiar everyday case is a news reporter on a satellite link: when a question is asked from the studio, the reporter seems to take a moment before answering. This is why cables are preferred over satellites for long-distance communication, as the delay involved is much smaller.
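The size of this delay follows directly from the orbit geometry: Kepler's third law fixes the geostationary radius from the 24-hour (sidereal) period, and dividing the resulting altitude by the speed of light gives the light travel time. The constants below are standard textbook values.

```python
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
T = 86164.1              # sidereal day in seconds (one Earth rotation)
C = 299_792_458          # speed of light, m/s
R_EARTH = 6_371_000      # mean Earth radius, m

# Kepler's third law for a circular orbit: T^2 = 4*pi^2 * r^3 / GM,
# solved for the orbit radius r.
orbit_radius = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)   # ~42,164 km
altitude = orbit_radius - R_EARTH                          # ~35,786 km

one_way_delay = altitude / C        # ground station straight up, ~0.12 s
round_trip = 2 * one_way_delay      # up and back down, ~0.24 s
```

A studio-to-reporter exchange uses the link twice (question up and down, answer up and down), so the perceived pause approaches half a second, which matches the hesitation seen in live satellite interviews.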
A satellite that circles the Earth in an orbit perpendicular to the equatorial plane is known as a polar satellite. A polar satellite scans the entire Earth by successively passing over different points on the surface as the Earth rotates on its axis. Polar Orbiting Environmental Satellites (POES) are placed in circular sun-synchronous orbits at altitudes usually ranging from 700 to 800 kilometres. Their orbital period is about 98-102 minutes, so each satellite completes approximately 14 orbits per day. POES are very important for meteorological and geophysical work because of their well-calibrated channels and high-resolution global coverage. They are designed to stay in low Earth orbit while still covering high latitudes. This type of satellite is widely used for land mapping and for assessing the availability of useful land; in India, for example, the satellites Cartosat 1A and Cartosat 1B provide information on agricultural aspects, and Cartosat 2 is also involved in land mapping. Polar orbit satellites are also used for disaster management: SARSAT, a search and rescue satellite, was deployed after the Uttarakhand disaster with the main objective of finding missing people. There are several differences between the two types of satellite. First, since a polar orbiting satellite orbits lower than a geostationary one, its data resolution is higher. Polar satellites also provide the global coverage that is essential for Numerical Weather Prediction (NWP) and climate studies. Their disadvantage is that, because the satellite keeps orbiting, it cannot view one location continuously, whereas a geostationary satellite stays over the same spot and can make repeated observations of the same scene.
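The quoted figures for POES are mutually consistent, which can be checked with the same circular-orbit relation used for the geostationary case: Kepler's third law gives the period from the altitude.

```python
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000      # mean Earth radius, m

def orbital_period_minutes(altitude_km):
    """Circular-orbit period from Kepler's third law.

    Used here to check the text's figures: 700-800 km orbits
    should come out near the quoted 98-102 minutes.
    """
    r = R_EARTH + altitude_km * 1000
    return 2 * math.pi * math.sqrt(r**3 / GM) / 60

low = orbital_period_minutes(700)               # ~98.6 minutes
high = orbital_period_minutes(800)              # ~100.7 minutes
orbits_per_day = 24 * 60 / orbital_period_minutes(750)   # ~14.4
```

The computed periods fall inside the 98-102 minute range given above, and a mid-range 750 km orbit yields roughly 14 orbits per day, matching the text.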
Finally, because a geostationary satellite is so far from the Earth, it covers a large area, almost a quarter of the Earth's surface, and it offers a 24-hour view of a given region, which makes it ideal for broadcast applications. However, the great distance weakens the signal and introduces a delay, and because the satellite is centred above the equator it has difficulty broadcasting to the polar regions. The high orbit also means its spatial resolution is poorer than that of a polar orbiting satellite.