Monday, November 25, 2019
Image Processing and Enhancement Essay Example
Remote sensing (RS), also called Earth observation, refers to obtaining information about objects or areas at the Earth's surface without being in direct contact with them. Humans accomplish this task with the aid of sight, smell, or hearing, so remote sensing is day-to-day business for people. Remote sensing can be broadly defined as the collection and interpretation of information about an object, area, or event without being in physical contact with it. Remote-sensing data play a growing role in studies of natural and semi-natural environments, a role that stretches from visual interpretation to sophisticated extraction of information by advanced image analysis and statistical algorithms. In their raw form, as received from imaging sensors mounted on satellite platforms, remotely sensed data generally contain flaws or deficiencies with respect to a particular application. To extract basic information from remotely sensed data, these flaws or deficiencies must be removed or corrected. In this paper I describe some important general means of image correction. It is difficult to decide exactly what should be included under the heading of image correction, since the definition of what is, or is not, a deficiency in the data depends to a considerable extent on the use to which those data are to be put. I therefore discuss topics such as image preprocessing, digital images, image enhancement, and other subjects related to image correction and better image interpretation. The other idea raised and discussed in this paper is the relationship between vegetation indices and vegetation degradation as assessed with remotely sensed data.
2. Function of Image Preprocessing and Its Importance for Image Analysis

Image preprocessing means applying methods that correct deficiencies and remove flaws from an image before it is used for other purposes. Mather and Koch (2011) stated that in their raw form, as received from imaging sensors mounted on satellite platforms, remotely sensed data generally contain flaws or deficiencies with respect to a particular application.

Figure 3: Haze reduction (a: before haze removal; b: after haze removal).

B) Sun angle correction

Bakker et al. (2011) stated that the position of the sun relative to the Earth changes depending on the time of day and the day of the year. As a result, image data of different seasons are acquired under different solar illumination. An absolute correction involves dividing the DN values in the image data by the sine of the solar elevation angle. Figure 4 shows Landsat 7 ETM+ color-infrared composites acquired with different sun angles: (A) the left image was acquired with a sun elevation of 37° and (B) the right image with a sun elevation of 42°; the difference in reflectance is clearly visible. (C) The left image was corrected to match the right image.

Figure 4: Sun angle correction.

2.2 Geometric Correction

Geometric distortion is an error in the image caused either internally, by the geometry of the sensor, or externally, by the altitude of the sensor or the shape of the object. Supporting this idea, Kuznetsov et al. (2012) describe geometric distortion as an error between the actual image coordinates and the ideal image coordinates. Geometric distortion is classified into internal distortion, resulting from the geometry of the sensor, and external distortion, resulting from the altitude of the sensor or the shape of the object. To correct such geometric distortion in the image, we use different geometric correction methods.
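The absolute sun-angle correction described above, dividing DN values by the sine of the solar elevation angle, can be sketched as follows. This is a minimal illustration; the DN values and elevation angles are hypothetical.

```python
import numpy as np

def sun_angle_correct(dn, solar_elevation_deg):
    """Divide DN values by the sine of the solar elevation angle so that
    scenes acquired under different sun angles become radiometrically
    comparable (absolute sun-angle correction)."""
    return dn / np.sin(np.radians(solar_elevation_deg))

# Hypothetical DN values from a winter scene with a 37-degree sun elevation.
winter_dn = np.array([60.0, 80.0, 100.0])
corrected = sun_angle_correct(winter_dn, 37.0)
```

With the sun directly overhead (elevation 90°) the correction leaves values unchanged; at lower elevations it scales the values up to compensate for the weaker illumination.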
Murayama and Dassanayake (2010) stated that geometric corrections include correcting for geometric distortions due to sensor-Earth geometry variations and converting the data to real-world coordinates (e.g. latitude and longitude) on the Earth's surface. Conversion of the data to real-world coordinates is carried out by analyzing well-distributed ground control points (GCPs). This is done in two steps.

Georeferencing: This involves calculating the appropriate transformation from image to terrain coordinates. In Figure 5, ground control points are identified in recognizable locations in two images (a Landsat 30 m ETM+ image and a QuickBird 0.7 m natural-color image). These points should be static with respect to temporal change; in this case road intersections are the best source of GCPs, and features that move through time (e.g. shorelines) should be avoided where possible.

Figure 5: Georeferencing.

Geocoding: This step involves resampling the image to obtain a new image in which all pixels are correctly positioned within the terrain coordinate system. Resampling is used to determine the digital values to place in the new pixel locations of the corrected output image.

Figure 6: Geocoding.

According to Murayama and Dassanayake (2010), there are three resampling techniques: 1. nearest neighborhood, 2. bilinear interpolation, and 3. cubic convolution.

1. Nearest Neighborhood

According to Rees (2011), the nearest-neighbor approach uses the value of the closest input pixel for the output pixel value. To determine the nearest neighbor, the algorithm uses the inverse of the transformation matrix to calculate the image-file coordinates of the desired geographic coordinate. The pixel value occupying the closest image-file coordinate to the estimated coordinate is used for the output pixel value in the georeferenced image. This means the nearest pixel value has more influence than pixels farther away.
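Stepping back to the georeferencing step for a moment: fitting the image-to-terrain transformation through GCPs can be sketched as a least-squares affine fit. All coordinates below are hypothetical; the terrain coordinates are generated from a known 30 m-per-pixel transform purely so the fit can be checked.

```python
import numpy as np

# Hypothetical GCPs: pixel (col, row) positions of road intersections.
image_xy = np.array([[10.0, 10.0], [200.0, 15.0],
                     [20.0, 180.0], [190.0, 170.0]])

# Matching terrain coordinates (easting, northing).  In practice these come
# from a map or an already-georeferenced reference image; here they are
# generated from a known affine transform so the result is verifiable.
true_A = np.array([[30.0, 0.0],
                   [0.0, -30.0],
                   [500000.0, 4200000.0]])
terrain_xy = np.hstack([image_xy, np.ones((4, 1))]) @ true_A

def fit_affine(src, dst):
    """Least-squares fit of an affine mapping dst = [x, y, 1] @ A."""
    design = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return A

A = fit_affine(image_xy, terrain_xy)

# Map an arbitrary pixel into terrain coordinates.
easting, northing = np.array([10.0, 10.0, 1.0]) @ A
```

With four or more well-distributed GCPs the least-squares fit averages out small measurement errors in the individual points.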
Figure 7: Nearest neighborhood.

ADVANTAGES: Output values are the original input values; other resampling methods tend to average surrounding values. This may be an important consideration when discriminating between vegetation types or locating boundaries, and since the original data are retained, this method is recommended before classification. It is also easy to compute and therefore the fastest to use.

DISADVANTAGES: Produces a choppy, stair-stepped effect; the image has a rough appearance relative to the original unrectified data. Data values may be lost, while other values may be duplicated. Figure 7 shows an input file (orange) with a yellow output file superimposed. Input values closest to the center of each output cell are sent to the output file on the right; notice that values 13 and 22 are lost while values 14 and 24 are duplicated.

2. Bilinear Interpolation

The bilinear interpolation approach uses the weighted average of the nearest four pixels to compute the output pixel.

Figure 8: Bilinear interpolation.

ADVANTAGES: The stair-step effect caused by the nearest-neighbor approach is reduced, and the image looks smooth.

DISADVANTAGES: Alters the original data and reduces contrast by averaging neighboring values together. It is computationally more expensive than nearest neighbor.

3. Cubic Convolution

The cubic convolution approach uses the weighted average of the nearest sixteen pixels to compute the output pixel. The output is similar to bilinear interpolation, but the smoothing effect caused by averaging surrounding input pixel values is more dramatic.

Figure 9: Cubic convolution.

ADVANTAGES: The stair-step effect caused by the nearest-neighbor approach is reduced, and the image looks smooth.

DISADVANTAGES: Alters the original data and reduces contrast by averaging neighboring values together. It is computationally more expensive than nearest neighbor or bilinear interpolation.
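The contrast between nearest-neighbor and bilinear resampling can be sketched with two small sampling functions. This is a minimal illustration on a hypothetical 2x2 image, ignoring edge handling.

```python
import numpy as np

def nearest_neighbor(img, x, y):
    """Sample img at fractional coordinates by taking the closest pixel,
    so output values are always original input values."""
    return img[int(round(y)), int(round(x))]

def bilinear(img, x, y):
    """Sample img at fractional coordinates by a weighted average of the
    four surrounding pixels, which smooths the result."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] +
            dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] +
            dx * dy * img[y0 + 1, x0 + 1])

img = np.array([[10.0, 20.0],
                [30.0, 40.0]])

nn = nearest_neighbor(img, 0.4, 0.4)  # snaps to pixel (0, 0), keeping 10.0
bl = bilinear(img, 0.5, 0.5)          # averages all four pixels to 25.0
```

The sketch shows the trade-off described above: nearest neighbor preserves original DN values (good before classification), while bilinear interpolation produces smoother but averaged output.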
In general, image preprocessing is an essential step toward better image analysis and interpretation because it corrects the various types of image distortion. Similarly, Murayama and Dassanayake (2010) stated that preprocessing includes the data operations that normally precede further manipulation and analysis of the image data to extract specific information; these operations aim to correct distorted or degraded image data to create a more faithful representation of the original scene.

3. Digital Image Formats and Their Arrangement

According to the Visual Resource Centre, School of Humanities (2011), digital images are electronic representations of images that are stored on a computer. The most important thing to understand about digital images is that you cannot see them, and they have no physical size, until they are displayed on a screen or printed on paper. Until that point they are just a collection of numbers on the computer's hard drive that describe the individual elements of a picture and how they are arranged. These elements are called pixels, and they are arranged in a grid, with each pixel containing information about its color or intensity.

Band interleaved by line (BIL), band interleaved by pixel (BIP), and band sequential (BSQ) are often taken to be digital image formats, but this is not accurate; they are schemes for storing the actual pixel values of an image in a file.

Figure 10: Digital data format.

According to the ESRI resource center, BIL, BIP, and BSQ are three common methods of organizing image data for multiband images; they are not in themselves image formats but schemes for storing the actual pixel values of an image in a file. According to the Visual Resource Centre, School of Humanities (2010), there are four main file formats for images: TIFF, JPEG, PNG, and GIF.
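The three storage schemes mentioned above can be illustrated with a tiny hypothetical 2-band, 2-row, 3-column image; the same pixel values are simply laid out in the file in three different orders.

```python
import numpy as np

# A tiny 2-band, 2-row, 3-column image, indexed as (band, row, column).
bands = np.arange(12).reshape(2, 2, 3)

# BSQ (band sequential): all of band 1 first, then all of band 2.
bsq = bands.reshape(-1)

# BIL (band interleaved by line): row 1 of band 1, row 1 of band 2,
# then row 2 of band 1, row 2 of band 2, and so on.
bil = bands.transpose(1, 0, 2).reshape(-1)

# BIP (band interleaved by pixel): all band values for pixel 1,
# then all band values for pixel 2, and so on.
bip = bands.transpose(1, 2, 0).reshape(-1)
```

BSQ makes it cheap to read one whole band, BIP makes it cheap to read the full spectrum of one pixel, and BIL is a compromise between the two.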
TIFF (Tagged Image File Format). Description: TIFF images are usually used for master image files. They contain image information in a lossless format (i.e. no image information is lost when images are saved) and so tend to be fairly large. They are therefore a good format for archiving images, but the large file size makes TIFF unsuitable for web delivery or for use in presentation software such as PowerPoint. Good for: master copies of images, as all image information is retained when files are saved (lossless format). But: file sizes tend to be large due to the lossless format, so TIFF files are not suitable for web delivery or inclusion in PowerPoint presentations.

JPEG (Joint Photographic Experts Group). Description: This is the main format used for photographic images on the web. It is a 'lossy' format: images are compressed when saved, so image information is lost each time the image is edited and saved. The benefit of compression is a reduction in file size, but the downside is that if too much compression is applied, visible artefacts such as highlighting around areas of high contrast may occur. The following images show the effects on quality and file size of differing levels of compression on the same JPEG image; notice the blurring around the edges of the statue in the final image. Good for: web delivery of photographic images, due to the ability to compress images without too much loss of quality, giving smaller file sizes than TIFF. But: too much compression can lead to a loss of quality, so care is needed with the quality setting used when saving images.

GIF (Graphics Interchange Format). Description: Another format encountered on the Internet, GIF is usually used for icons or graphics that contain a limited range of flat colors. It is a lossless format (no information is lost when saving), but it has limited color capabilities and so is not suitable for displaying photographs.
Good for: web delivery of icons and graphics, due to small file size and lossless format. But: supports a limited range of colors, so it is only suitable for certain types of images.

PNG (Portable Network Graphics). Description: PNG is a relatively new web graphics format, designed primarily to replace GIF for use on the Internet and potentially to rival TIFF in the long term as an archival format, thanks to its better compression performance. Its main advantages over GIF are an improved lossless compression method and support for 'true color'. Although software support for the PNG format has been slow to develop, this is now beginning to change, and PNG may become a more common format in the future. Good for: web delivery, due to a lossless compression technique resulting in files of small size but high quality. But: JPEG gives better results for photographic images, and older web browsers and programs may not support PNG.

4. Purpose of Image Enhancement and Methods of Image Enhancement

4.1 Purpose of Image Enhancement

The purpose of image enhancement is to produce good contrast so that images can be visualized better, in order to understand or extract the intended information from them. Similarly, Vij and Singh (2008) describe image enhancement as the improvement of an image's appearance by increasing the dominance of some features or by decreasing ambiguity between different regions of the image. Image enhancement processes consist of a collection of techniques that seek to improve the visual appearance of an image or to convert the image to a form better suited for analysis by a human or machine. Shankar Ray (2011) likewise describes image enhancement as the modification of an image, by changing pixel brightness values, to improve its visual impact.
Image enhancement techniques derive the new brightness value for a pixel either from its existing value or from the brightness values of a set of surrounding pixels.

4.2 Methods of Image Enhancement

According to the Department of the US Army (2003), methods of image enhancement are classified into four groups: 1) contrast enhancement, 2) band ratios, 3) spatial filtering, and 4) principal components. The type of enhancement performed depends on the appearance of the original scene and the goal of the interpretation. This indicates that performing every enhancement method on one image may not be necessary; the choice of methods varies depending on the purpose for which the image is prepared and the type of information to be extracted from it.

1) Contrast enhancement. This type of enhancement is mostly used to increase the brightness range of the image by changing its DN values. According to Al-amri (2011), contrast is one of the most important quality factors in satellite images, and contrast enhancement is frequently referred to as one of the most important issues in image processing. Contrast stretching is an enhancement method that adjusts each pixel value to improve the visualization of structures in both the darkest and lightest portions of the image at the same time. There are different contrast-enhancement techniques, such as linear contrast stretching, histogram equalization, and histogram stretching; the main idea is as discussed above, even though each technique is performed slightly differently.

Figure 11: Contrast enhancement (before and after).

2) Band ratios. Contrast techniques help enhance images with brightness problems, but they cannot solve problems such as shadowing; that kind of enhancement is performed using band-ratio techniques.
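Before moving on to band ratios, the linear contrast stretch described above can be sketched as follows. This is a minimal illustration with hypothetical DN values.

```python
import numpy as np

def linear_stretch(img, out_min=0.0, out_max=255.0):
    """Linearly rescale DN values so the darkest pixel maps to out_min
    and the brightest to out_max, spreading the data over the full
    display range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) * (out_max - out_min) + out_min

# A low-contrast scene whose DN values span only 100..140.
dull = np.array([[100.0, 110.0],
                 [120.0, 140.0]])
stretched = linear_stretch(dull)
```

After the stretch the narrow 100-140 range occupies the full 0-255 display range, which is exactly the visual effect shown in the before/after figure.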
According to the Department of the US Army (2003), a band ratio is a commonly used band-arithmetic method in which one spectral band is divided by another spectral band. This simple method reduces the effect of shadowing caused by topography, highlights particular image elements, and accentuates temporal differences.

3) Spatial filtering. This type of enhancement is useful for suppressing over-exaggerated detail in specific parts of the image. Murayama and Dassanayake (2010) describe a spatial filter as one designed to emphasize larger, homogeneous areas of similar tone and reduce the smaller detail in an image, which serves to smooth the image's appearance. Low-pass filters are very useful for reducing random noise. It is occasionally advantageous to reduce the detail and exaggerate particular features in an image.

4) Principal components. According to the Department of the US Army (2003), principal component analysis (PCA) is a technique that transforms the pixel brightness values; the transformation compresses the data by drawing out the maximum covariance and removing correlated elements. Rees (2001) likewise states that the principal components of a multiband image are the set of linear combinations of the bands that are independent of, and uncorrelated with, one another.

5. Purpose and Methods of Image Transformation

5.1 Purpose of Image Transformation

Image transformation is a means of re-expressing an image in a different manner, which gives a chance to look at it in a better way. According to UNEP (2005), the term 'transform' refers to arithmetic operations: all the arithmetic operations that allow the generation of a new composite image from one, two, or more bands of multi-spectral, multi-temporal, multi-frequency (wavelength), multi-polarization, or multi-incidence-angle images. The resulting image may have properties that make it more suitable for a particular purpose than the original.
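The low-pass spatial filter described under 3) above can be sketched as a simple 3x3 mean filter; the noisy image below is hypothetical, and border pixels are left untouched for brevity.

```python
import numpy as np

def low_pass_3x3(img):
    """Smooth an image by replacing each interior pixel with the mean of
    its 3x3 neighborhood (a simple low-pass spatial filter)."""
    out = img.astype(float).copy()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = img[r - 1:r + 2, c - 1:c + 2].mean()
    return out

# A single noisy spike in an otherwise flat image is strongly damped.
img = np.full((3, 3), 10.0)
img[1, 1] = 100.0
smoothed = low_pass_3x3(img)
```

The spike of 100 is pulled down to the neighborhood mean of 20, which is how low-pass filtering suppresses random noise while leaving homogeneous areas unchanged.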
The purposes of image transformation include: 1) extraction of new information from the existing data, such as change detection, vegetation information, and geological information; 2) reduction of data dimensionality, improving storage and processing efficiency by reducing the number of bands and the processing time; and 3) production of a more physically relevant spectral feature space. Similarly, Mather and Koch (2011) describe an image transform as an operation that re-expresses, in a different and possibly more meaningful form, all or part of the information content of a multispectral or grey-scale image. From this we can understand that by applying image transformation techniques we can extract new information with the best visualization and minimum storage.

5.2 Methods of Image Transformation

Different writers classify the methods of image transformation differently according to the purpose of their study; for this paper I follow UNEP (2005). According to UNEP (2005), image transformation methods can be classified into six groups: 1. simple arithmetic operations; 2. empirically based image transformation; 3. principal component analysis; 4. multiple discriminant analysis; 5. hue, saturation, and intensity (HSI); and 6. Fourier transformation.

1. Simple Arithmetic Operations

These transformations apply one of the arithmetic operations (addition, subtraction, multiplication, or division) to two or more co-registered images of the same geographical area. The images may be separate spectral bands from a single MSS or TM data set, or individual bands from data sets imaged on different dates.

1.1 Image Addition

If multiple images of a given region are available for approximately the same date, and part of one image has some noise (spectral problems, haze, fog, cloud), then that part can be compensated for from the other available images.
1.2 Image Subtraction

To assess the degree of change in an area, two dates of co-registered images can be subtracted.

Figure 12: Change detection (October 1988 vs. May 1992).

1.3 Image Multiplication

If the analyst is interested in part of an image, that area can be extracted by multiplying it by 1 and the rest of the image by 0. This is applied mainly when the boundary of the area of interest is irregular.

1.4 Image Division (Image Ratioing)

Dividing the pixels in one image by the corresponding pixels in a second image is the most commonly used transformation. It is a very important technique because certain aspects of the shape of the spectral reflectance curves of different earth-surface cover types can be brought out by ratioing, and because undesirable effects on the recorded images, such as variable illumination resulting from variation in topography, can be reduced by ratioing.

2. Empirically Based Image Transformation

Experience with Landsat MSS data for agricultural areas, and the difficulties encountered in the use of ratio transforms and principal components, led to the development of image transformations based on observations.

2.1 Perpendicular Vegetation Index (PVI)

A plot of reflectance measured in the visible red band against reflectance in the near-infrared for a partly vegetated area yields a characteristic pattern; a soil line is fitted, and a pixel's vegetation signal is calculated as its distance from that line in the two-dimensional space.

2.2 Tasseled Cap Transformation

The PVI considers spectral variation in two of the four Landsat MSS bands and uses the distance from a soil line in the two-dimensional space defined by those two bands as a measure of biomass or green leaf area; the tasseled cap transformation builds on the same observations.

3. Principal Component Analysis (PCA)

Adjacent bands in multispectral scanner remotely sensed data (images) are generally correlated.
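The simple arithmetic operations from section 1 above can be sketched together on two hypothetical co-registered images; all DN values below are invented for illustration.

```python
import numpy as np

# Two hypothetical co-registered DN images of the same area on two dates.
date1 = np.array([[50.0, 60.0], [70.0, 80.0]])
date2 = np.array([[50.0, 90.0], [70.0, 20.0]])

# Subtraction: non-zero values flag pixels that changed between the dates.
change = date2 - date1

# Multiplication: a 0/1 mask extracts an irregular area of interest.
mask = np.array([[1.0, 0.0], [1.0, 1.0]])
extracted = date2 * mask

# Division (ratioing): an illumination factor that scales both images by
# the same amount cancels out in the ratio.
ratio = date2 / date1
```

In practice the same operations are applied band-by-band to full scenes, but the pixel-level logic is exactly this.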
Multi-band visible/NIR images of vegetated areas show a negative correlation between the NIR and visible red bands, and a positive correlation among the visible green and red bands. This is because the spectral characteristics of vegetation are such that as the vigor or greenness of the vegetation increases, red reflectance diminishes and NIR reflectance increases. The presence of correlation among the bands of optical reflected MSS images implies that there is redundancy in the data: some information is being repeated, and it is this repetition of information between bands that is reflected in the correlation. Principal component analysis removes such redundancy by compressing the data, drawing out the maximum covariance and removing the correlated elements.

4. Multiple Discriminant Analysis

This is an image transformation using linear functions called discriminant functions. They represent coordinate axes in the multi-dimensional space defined by the spectral bands making up the data. As in PCA, the relationship between the spectral bands and the discriminant-function axes is derived, and the coordinates of each individual pixel vector are computed in terms of the discriminant functions. A simple example: two groups of land with distinct spectral reflectance can be discriminated on the basis of measurements in this space. Some scientists think this transformation is suited only to special assignments, but despite that it has been found very useful in those special cases that cannot be solved without it.

5. Hue, Saturation, and Intensity (HSI)

Hue is an angular variable describing the direction of a color; saturation describes the lightness of the color (the amount of white in it, on a 0-255 scale); and intensity describes the color's strength. The formulas are:

I = R + G + B
H = (G - B) / (I - 3B)
S = (I - 3B) / I
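The HSI formulas above can be sketched directly in code. Note that this simple variant implicitly assumes blue is the smallest of the three components; full HSI conversions handle the other orderings separately, so treat this as an illustration of the formulas as given, not a general converter.

```python
def rgb_to_hsi(r, g, b):
    """Hue, saturation, intensity from the simple formulas in the text:
    I = R + G + B, H = (G - B) / (I - 3B), S = (I - 3B) / I.

    Assumes b is the smallest component (one sector of the full
    conversion)."""
    i = r + g + b
    h = (g - b) / (i - 3.0 * b)
    s = (i - 3.0 * b) / i
    return h, s, i

# A reddish pixel where blue is indeed the smallest component.
h, s, i = rgb_to_hsi(100.0, 80.0, 20.0)
```

A fully grey pixel (R = G = B) would give S = 0, matching the idea that saturation measures how far the color is from white/grey.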
6. Fourier Transformation

All five transformations discussed so far use the multidimensional (multi-band) space of remotely sensed data; the Fourier transformation uses a single band. The main idea of this transformation is that the grey-scale values forming a single image or band can be viewed as a 3-D surface: the row and column (x, y) spatial coordinates define two axes, and the grey-scale value (0-255) at each pixel gives the third dimension. The resulting image or product shows the frequency of certain features across the image; it is a kind of histogram of the image in 3-D.

6. Vegetation Indices and Their Relation to Vegetation Degradation

6.1 What Is a Vegetation Index?

According to Jackson and Huete (1991), a vegetation index is calculated from spectral data by combining two or more spectral bands. Vegetation indices are formed from combinations of several spectral values that are mathematically recombined in such a way as to yield a single value indicating the amount or vigor of vegetation within a pixel (Campbell, 1996, cited in Freitas et al., 2005).

6.2 Vegetation Indices and Degradation

The best-known vegetation index is the NDVI, the normalized difference vegetation index. It is a good means of assessing the amount of greenness in an area; conversely, a drop in NDVI indicates degradation of the area. For example, take a Bahir Dar image from the 1990 winter season, calculate the NDVI, and get a result of 0.7; ten years later, in 2000, take another image of the same season, calculate the NDVI, and get a result of 0.2.
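A sketch of the NDVI computation used in examples like the one above, with hypothetical reflectance values for a vegetation pixel, a bare-rock pixel, and a snow pixel:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Healthy vegetation reflects strongly in the near-infrared and absorbs
    red light, so NDVI approaches 1 over dense vegetation, stays near 0
    over bare rock or soil, and goes negative over snow or water.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Hypothetical reflectances: vegetation, bare rock, snow.
nir = np.array([0.50, 0.30, 0.40])
red = np.array([0.08, 0.25, 0.60])
values = ndvi(nir, red)
```

Comparing NDVI images of the same area and season across years, as in the Bahir Dar example, turns this per-pixel value into a simple degradation indicator.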
This indicates that in 1990 Bahir Dar was covered by green vegetation, while the 2000 image shows that most of the previously vegetated area has been degraded and is now bare rock. If the NDVI value approaches 1, the area has good vegetation cover; if it approaches 0, the area has little vegetation and is mostly bare rock; and if the NDVI value is negative, the area has no vegetation and is covered by snow or water.

7. Digital Image Classification

7.1 What Is a Digital Image