Thursday, November 17, 2016

Lab 6 Geometric Corrections

GEOG 338
Charlie Krueger
Lab 6: Geometric Correction

Goal and Background:
The goal of this lab is to introduce a crucial image preprocessing step known as geometric correction, which is used mostly in remote sensing. This process corrects errors in remotely sensed data that are usually caused by satellites or aircraft not maintaining a constant altitude, or by sensors drifting away from their primary focus area. To fix these images, researchers compare ground control points against an accurate base map and then resample the image so that the correct locations and appropriate pixel values can be calculated. This lab focused on developing skills with the two procedures used most often to correct satellite images: spatial interpolation and intensity interpolation, both of which were used in the lab. Spatial interpolation uses ground control point (GCP) pairs to establish a geometric coordinate transformation that fixes the location of pixels in the output image. Intensity interpolation extracts the brightness value at an x,y location in the original, distorted image and relocates it to the correct x,y coordinate location in the output image. There are also three different types of geometric correction: image-to-map rectification, image-to-image registration, and a hybrid approach. In this lab only the image-to-map and image-to-image methods were used to make geometric corrections.
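To make these two steps concrete, the short Python sketch below fits a first-order polynomial to a handful of GCP pairs (spatial interpolation) and then pulls a brightness value with a nearest neighbor rule (intensity interpolation). The coordinates are made up for illustration and are not the lab's data.

```python
# Minimal numpy sketch of spatial and intensity interpolation (made-up GCPs).
import numpy as np

# GCP pairs: (column, row) in the distorted image and matching map coordinates.
src = np.array([[120.0, 85.0], [940.0, 70.0], [150.0, 760.0], [900.0, 800.0]])
ref = np.array([[442000.0, 4638000.0], [444500.0, 4638050.0],
                [442050.0, 4636000.0], [444400.0, 4635900.0]])

# Spatial interpolation: solve x' = a0 + a1*x + a2*y (and likewise for y')
# by least squares, which is what a 1st order polynomial model amounts to.
A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
coef_x = np.linalg.lstsq(A, ref[:, 0], rcond=None)[0]
coef_y = np.linalg.lstsq(A, ref[:, 1], rcond=None)[0]

# Intensity interpolation (nearest neighbor): take the brightness value of the
# input pixel closest to the location the transformation maps back to.
def nearest_neighbor(image, col, row):
    return image[int(round(row)), int(round(col))]
```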

Methods:
The first section of the lab dealt with the city of Chicago and was an image-to-map rectification. Erdas Imagine was the program used in this lab and allowed the class to run all of the tools necessary to correct the images. In Imagine, two images of the Chicago area were viewed, one of which was the reference map used to correct the image that would be rectified. The image being corrected was selected, then the Multispectral toolbar was opened and Control Points was selected from it. After Control Points was selected, Polynomial was chosen in the Set Geometric Model dialog box.
Multispectral toolbar with Control Points Highlighted

After that, two boxes opened, both containing tools. One was the GCP Tool Reference Setup, which was left at the default, and then the reference image was selected. The second box that opened was the Multipoint Geometric Correction window, which is where the geometric correction actually took place. In this section the image only needed a 1st order polynomial equation to transform the image. This window contained the image being rectified and the reference image. First, all of the points that were already on the image were deleted because they were incorrect. Four pairs of GCPs were then created: the first three were placed manually on both images, while for the last one a click on only one image was needed to create a GCP on both. The Create GCP tool was used to place the points on both images. Since this image only needed a 1st order polynomial equation, only three point pairs had to be created before the status changed from "model has no solution" to "model solution is current"; once the model was solved, additional points only had to be placed on one image and the tool predicted their location on the other. After the points were plotted, the root mean square (RMS) error had to be examined. This represents how far apart the paired points on the two images are, and so how accurate the correction is. The ideal RMS total is 0.5 or below. By zooming in on the points and slowly moving them around, the RMS total was brought below one, which was acceptable for this intro lab. Next the Display Resample Image Dialog button was selected, which created the output image after it was saved in the student's Lab 6 folder.
Screenshot of the second image during the process of getting the RMS total lowered
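The RMS numbers reported in the GCP table can be reproduced with a couple of lines of Python; the residuals below are invented just to show the arithmetic.

```python
# RMS error from GCP residuals (made-up numbers).
import numpy as np

# X and Y residuals (predicted minus reference position) for four GCPs, in pixels.
x_res = np.array([0.21, -0.35, 0.10, 0.04])
y_res = np.array([-0.18, 0.22, -0.31, 0.12])

per_gcp_rms = np.sqrt(x_res**2 + y_res**2)           # contribution of each point
total_rms   = np.sqrt(np.mean(x_res**2 + y_res**2))  # the total to push below 0.5
print(per_gcp_rms, total_rms)
```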

            The next part of the lab was image-to-image registration, which used two images that looked similar; one was being corrected and the other served as the reference image. The Swipe tool was used to examine how the two images differed and how far they were offset from each other. Just like before, the Control Points tool was selected and the same process was started, except this time the polynomial order in the model properties was changed from 1 to 3. This means that a total of 10 GCPs had to be placed on the images. Just like the first pair of images, the points were added and then adjusted to get a low RMS total, and then the Display Resample Image Dialog button was selected. Once this was selected, the resample method was changed from nearest neighbor to bilinear interpolation because this method worked better for these images.
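The jump from 3 to 10 required GCPs follows directly from how many coefficients an order-n polynomial has, and bilinear interpolation smooths the output because each new pixel value is a distance-weighted average of the four nearest input pixels. A two-line check of the GCP counts:

```python
# Minimum GCPs for an order-n polynomial transformation: (n + 1)(n + 2) / 2.
for order in (1, 2, 3):
    print(order, (order + 1) * (order + 2) // 2)   # 1 -> 3, 2 -> 6, 3 -> 10
```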

Results:
        This was the image created from the Chicago images; the difference is that the coordinate system is more accurate after the geometric corrections. When zoomed in, lines in the output image were straighter than in the input.

Image rectified from the Chicago Images, Image-to-map 


     This was the output from the second set of images, and it did not turn out that well. The image is not accurate and does not match up with the reference image on the left side. Using the Swipe tool this is very obvious, but as it gets closer to the right side of the image the images start to match up. It is strange that one side matches the reference image and the other does not.
Image rectified from the second set of images, image-to-image

Sources:

Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey. Digital raster graphic (DRG) is from Illinois Geospatial Data Clearing House.

Thursday, November 10, 2016

Lab 5 LiDAR remote sensing

GEOG 338
Charlie Krueger
Lab 5: LiDAR remote sensing

Goal and Background:
The goal of this lab is for the class to gain knowledge about LiDAR data and what can be done with it. LiDAR stands for light detection and ranging, and it uses the light from a laser pulse to measure ranges to the Earth. These laser pulses help gather information about the shape of the Earth and the characteristics of its surface. Using LiDAR systems, scientists are able to examine both natural and manmade features in the environment, like buildings and bridges. LiDAR uses two different types of lasers that work better for different surfaces: a near-infrared laser to map land, and green light to measure seafloors and riverbeds because it can penetrate water. In this lab there were two specific objectives. The first was processing and retrieving different surface and terrain models; the second was using the models to create intensity images and other products derived from the point cloud. The data the class used in this lab was LiDAR point clouds in the LAS file format. LiDAR is an expanding part of the remote sensing field and is sure to produce many jobs in the future, so it was good for the class to become familiar with it.
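The ranging itself comes down to timing the laser pulse: the sensor records how long the pulse takes to bounce back, and half of that round trip multiplied by the speed of light gives the range. A tiny worked example with a made-up echo time:

```python
# Basic LiDAR ranging arithmetic (hypothetical echo time, not from the lab data).
c = 299_792_458.0        # speed of light, m/s
round_trip = 1e-6        # 1 microsecond between pulse and return
range_m = c * round_trip / 2
print(range_m)           # ~150 m to the reflecting surface
```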

Methods:
The first part of this lab was point cloud visualization in Erdas Imagine, a program used to look at data and make corrections to images. The lab instructed students to copy the Lab 5 folder and move it to a location where they could work with the data. The data was then opened in ArcMap, another program used to examine data and make changes to it.
The second part of the lab was to generate a LAS dataset and explore the LiDAR point clouds with ArcGIS/ArcMap. To start, the students had to create a new LAS dataset in their Lab 5 folder in ArcMap. Next, all of the data that was viewed in Erdas Imagine was copied and added to the LAS dataset. The statistics of the added data were then calculated and examined. The coordinate system information stored with the data was also examined, since it held valuable information for the lab, and then the coordinate system that would actually be used was set. The LAS dataset was then added to the ArcMap display so the data could be examined, and the layer properties were changed so the information would appear on screen. The LAS Dataset toolbar was very useful during this lab because it allowed quick changes to how the data was viewed; switching between elevation, aspect, slope, and contour displays was simple with this toolbar. The layer properties of the dataset could also be used to transform the data into what the user wanted to view. There the data filter could be customized even further by changing the classification codes and the returns that were displayed. Another interesting tool was the LAS dataset profile, which allowed a 3D-style view to be created from a selected area of the map.
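For reference, the same dataset creation can be scripted with arcpy; this is only a rough sketch, and the folder names, output path, and the UTM zone 15N spatial reference (EPSG 26915) are assumptions made for illustration, not the lab's actual setup.

```python
# Hedged arcpy sketch of building a LAS dataset with statistics (paths are placeholders).
import arcpy

las_folder  = r"C:\lab5\point_clouds"      # hypothetical folder of .las tiles
las_dataset = r"C:\lab5\eau_claire.lasd"   # hypothetical output LAS dataset

arcpy.CreateLasDataset_management(las_folder, las_dataset, "RECURSION",
                                  spatial_reference=arcpy.SpatialReference(26915),  # assumed CRS
                                  compute_stats="COMPUTE_STATS")
```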
The final section of this lab was the generation of LiDAR derivative products. Here, different views of the data were created using different tools in ArcMap. The following products were made in this section: a digital surface model (DSM) from the first returns, a digital terrain model (DTM), a hillshade of the DSM, and a hillshade of the DTM. The first tool used was LAS Dataset to Raster, and it took a while to run because of the large dataset it was processing. The output of this tool was then run through another tool called Hillshade. Also in this part, the lab instructed students to create a map by deriving a LiDAR intensity image from the point cloud. The same tools were used in this process; the setup was just a bit different, with the dataset set to display points instead of elevation (as in the last map) and filtered to first returns, while everything else stayed the same. The image was then exported to a different file format so that it could be viewed in ERDAS Imagine.
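A rough arcpy sketch of these derivative products is shown below; the cell size, binning choices, and file paths are assumptions made for illustration rather than the exact settings used in the lab dialogs.

```python
# Hedged sketch: DSM from first returns, a hillshade, and an intensity image.
import arcpy
arcpy.CheckOutExtension("3D")              # HillShade requires the 3D Analyst extension

lasd = r"C:\lab5\eau_claire.lasd"          # hypothetical LAS dataset from the last step

# Filter the LAS dataset layer to first returns, then rasterize elevation for the DSM.
arcpy.MakeLasDatasetLayer_management(lasd, "dsm_lyr", return_values=["1"])
arcpy.LasDatasetToRaster_conversion("dsm_lyr", r"C:\lab5\dsm.tif", "ELEVATION",
                                    "BINNING MAXIMUM LINEAR",
                                    sampling_type="CELLSIZE", sampling_value=2)

# Hillshade of the DSM with the default sun position.
arcpy.HillShade_3d(r"C:\lab5\dsm.tif", r"C:\lab5\dsm_hillshade.tif")

# Intensity image: the same rasterization tool with the value field set to INTENSITY.
arcpy.LasDatasetToRaster_conversion(lasd, r"C:\lab5\intensity.tif", "INTENSITY",
                                    "BINNING AVERAGE NONE", "INT",
                                    "CELLSIZE", 2)
```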

Results:
Digital Surface Model (DSM) First Return
Digital Terrain Model (DTM)

DSM Hill shade tool applied

DTM Hill shade tool applied


Intensity Image Created




Sources:

Lidar point cloud and Tile Index are from Eau Claire County, 2013. Eau Claire County shapefile is from Mastering ArcGIS 6th Edition data by Maribeth Price, 2014.

Tuesday, November 1, 2016

Lab 4 Miscellaneous image functions

GEOG 338
Charlie Krueger
Lab 4 Miscellaneous image functions

 Goals/Background:


There are multiple goals for this lab exercise. The first is to be able to select a study area from an image that is much larger than it, because researchers will often only want to focus on a smaller area of a larger satellite image. Another goal of this lab is to use tools in computer programs to improve the image, for example by taking a coarse resolution image and enhancing it so it better serves the visual purpose of the map, essentially recreating a higher resolution image from it. Another goal was to enhance images in different ways by performing radiometric enhancement techniques. One of these techniques is called haze reduction, which helps the image become clearer by removing haze.
Another goal involved a feature recently added to the software being used for the lab: linking the image viewer to Google Earth. Resampling was another goal of this lab, where a researcher changes the size of a pixel either up or down to get a better view of the research area. The next goal of the lab was very interesting to learn about, and this was image mosaicking. This is when two adjacent, overlapping satellite scenes that do not fit together well are joined, through mosaicking, into a single seamless image. The last goal of the lab was to detect change between images based on differences in brightness values.


Methods: 

The first section of the lab was creating an area of interest on a map, which helps researchers focus on a study area. This area of interest was created using an Inquire Box from the Raster toolbox. After the tool was run, the selected area was saved in a personal folder for later use. Next in this section, the area of interest was cut from the selected map and made into its own file. The next tool used was the Subset and Chip tool, which is also under the Raster tab in the program. This let the area of interest be placed onto another map, where it stood out because of the different zones of the images.
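Outside of the Imagine dialogs, the same kind of subset can be pulled with GDAL; the file names and pixel window below are placeholders, not the lab's images.

```python
# Hedged GDAL sketch of clipping a study area, analogous to an inquire-box subset.
from osgeo import gdal

# srcWin is (xoff, yoff, xsize, ysize) in pixels, much like an inquire box.
gdal.Translate("study_area.img", "full_scene.img", format="HFA",
               srcWin=[1500, 1200, 1024, 1024])
```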
            The next section used the Pan Sharpen tool, and under it the Resolution Merge option was selected. All of the necessary images were selected as input, and the output was a new image created by the tool. Nearest neighbor was the resampling technique selected. This tool created an image that was darker in color and higher in resolution.
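ERDAS's Resolution Merge offers several algorithms, so the snippet below is not the one used in the lab; it is a simplified Brovey-style merge that only shows the basic idea of rescaling multispectral bands by the higher resolution panchromatic band.

```python
# Simplified Brovey-style resolution merge (illustrative, not the ERDAS algorithm).
import numpy as np

def brovey_merge(red, green, blue, pan):
    """All inputs are float arrays already resampled to the panchromatic grid."""
    total = red + green + blue + 1e-6          # avoid divide-by-zero
    scale = pan / total
    return red * scale, green * scale, blue * scale
```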
            The next part was haze reduction, which used the Haze Reduction tool under the Radiometric menu. After the original file had been run through this tool, the image that was created was much more vibrant in color, and the outlines of objects became easier to distinguish from their surroundings.
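The Haze Reduction tool has its own internal algorithm; a related and much simpler idea is dark-object subtraction, sketched here for a single band only to show the general principle.

```python
# Dark-object subtraction as a simple stand-in for haze removal (one band).
import numpy as np

def dark_object_subtract(band, percentile=0.1):
    """Subtract the near-minimum value so deep shadows and clear water approach zero."""
    dark_value = np.percentile(band, percentile)
    return np.clip(band - dark_value, 0, None)
```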
            Google Earth was the star of this section of the lab, in which the viewer was linked with Google Earth. This was done by hitting the Connect to Google Earth button on the toolbar. Then Match GE to View was selected so the image viewer was looking at the same spot as Google Earth. Sync GE to View was then selected, after which every move made in the image viewer was replicated in Google Earth.
            Part five's goal was resampling, which is the reduction or increase of the size of pixels. First the Raster toolbar was selected, then Spatial was chosen, followed by the Resample Pixel Size tool. Two different methods were then applied to the same image: nearest neighbor and bilinear interpolation. The image from nearest neighbor showed no real visible change, while bilinear interpolation made the image smoother around edges.
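The difference between the two methods is easy to see on any array: nearest neighbor copies existing values, while bilinear interpolation averages the surrounding pixels. A quick scipy comparison on random data, assuming a 2x increase in resolution:

```python
# Nearest neighbor vs. bilinear resampling on an arbitrary array.
import numpy as np
from scipy import ndimage

band = np.random.randint(0, 255, size=(100, 100)).astype(np.float32)

nearest  = ndimage.zoom(band, 2, order=0)   # blocky; original values are kept
bilinear = ndimage.zoom(band, 2, order=1)   # smoother edges; values are interpolated
```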
            Mosaicking was the next task given in the lab, and this is the process of joining two adjacent, overlapping satellite scenes. The two images were added, and Mosaic Express was used first, followed by a more advanced method called MosaicPro. The files were added into Mosaic Express and not much else was changed in the process; the final image did not turn out that well and had very different colors across the seam. Next MosaicPro was used, and this was definitely a more technical method of mosaicking. MosaicPro was opened and the images were added, making sure to specify the image area options by clicking Compute Active Area and hitting Set. Once both images were in the program, a histogram matching tool was used to make sure the colors would match. Then, after hitting Process, the program created a much cleaner version of the images with consistent colors.
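Histogram matching is what makes the MosaicPro colors agree across the seam; a single-band sketch of the same idea with scikit-image is below (this is not the tool used in the lab, and the arrays here are random placeholders).

```python
# Single-band histogram matching, the idea behind MosaicPro's color balancing step.
import numpy as np
from skimage.exposure import match_histograms

scene_to_adjust = np.random.randint(0, 255, (500, 500)).astype(np.float64)
reference_scene = np.random.randint(30, 200, (500, 500)).astype(np.float64)

matched = match_histograms(scene_to_adjust, reference_scene)
```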
            Last was binary change detection using image differencing. Using the same scene captured in two very different years, the program was used to pick up on changes in the brightness of the pixels. The Raster tool was activated, the Functions tab was selected, and the Two Input Operators tool was chosen. The program was run, giving an output showing the change in the fourth layer of the images. Next in this section, a simple model was created to find the change between the two images. This model skewed the histogram of the images, so a correction had to be made, and a second model was built from the first to correct it. Finally, a product showing the change between the images came out and was brought over to ArcMap, another program, where the information was used to create a final map. This map showed the counties in which the change occurred, outlined in red for easy identification.
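The core of the differencing model is simple arithmetic: subtract the two dates band by band and flag pixels whose difference falls far from the mean. The sketch below uses a mean plus or minus 1.5 standard deviation threshold, which is a common convention assumed here rather than the exact value from the lab model.

```python
# Minimal binary change detection by image differencing (threshold is an assumption).
import numpy as np

def binary_change(band_t1, band_t2, k=1.5):
    diff = band_t2.astype(np.float64) - band_t1.astype(np.float64)
    upper = diff.mean() + k * diff.std()
    lower = diff.mean() - k * diff.std()
    return (diff > upper) | (diff < lower)   # True where change is flagged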

Results:
Results for Section 1. This was the area of interest after it was selected
 
Area of interest over the original map
The first attempt using Mosaic Express
Using MosaicPro there is no longer a difference between the images and they connect seamlessly
Final map after the image difference was extracted and made into its own layer


Sources: 


Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey. Shapefile is from Mastering ArcGIS 6th edition Dataset by Maribeth Price, McGraw Hill. 2014.