Tuesday, December 13, 2016

Lab 8 Spectral signature analysis & resource monitoring

GEOG 338
Charlie Krueger
Lab 8: Spectral Signature Analysis & Resource Monitoring

Goal and Background:

The goal of this lab was to give the class experience measuring spectral reflectance and interpreting the results. This was done on many different types of materials found on the Earth's surface, with samples taken from satellite images of Eau Claire and Chippewa counties. The lab instructed the students on how to collect spectral signatures from the samples, graph the samples in a program, and perform interpretations on them to see whether they pass the spectral separability test. Another lesson taught through this lab was monitoring the health of vegetation and soils using a band ratio technique. At the conclusion of this lab each student should be ready to analyze spectral signatures collected from the Earth's surface and to monitor the health of soils and vegetation.

Methods:

In the first part of this lab the class used a Landsat ETM+ image to collect and then analyze the spectral signatures of various Earth surfaces within Eau Claire and Chippewa counties. First the program Erdas Imagine was opened and the surface image was displayed on the screen. The tools used in this section were in the spectral area of Imagine. From the drawing tools the polygon tool was chosen so that an area inside one of the surfaces could be digitized; a small polygon was drawn on the surface to be sampled, which in the first section was open water, so Lake Wissota was selected for the class. Next, under the Raster tools, the Supervised button was clicked, opening a drop-down of options. The option selected for the lab was Signature Editor, which created a new window where all of the collected samples would be shown. The class name was then changed to reflect the type of surface collected, in this case open water. The Display Mean Plot button was selected next, opening a window with a line showing where the collected area's spectral reflectance was highest and lowest. This process was then repeated for 11 more types of Earth surfaces: moving water, forest, riparian vegetation, crops, urban grass, dry soil, moist soil, rock, asphalt highway, airport runway, and concrete surface. All of these were then compared on a single graph, which shows the differences among the surfaces. See results for the graph.
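The "signature mean" line that ERDAS plots is simply the per-band average of the pixels inside the digitized polygon. A minimal sketch of that calculation, using hypothetical DN values rather than the lab's actual data:

```python
import numpy as np

def signature_mean(pixels):
    """Per-band mean of the pixels sampled inside a training polygon.

    pixels: array of shape (n_pixels, n_bands) holding the DN values of
    every pixel inside the polygon. Returns one mean per band -- the
    line drawn in the Signature Editor's mean plot.
    """
    pixels = np.asarray(pixels, dtype=float)
    return pixels.mean(axis=0)

# Hypothetical DNs for a small open-water sample (6 reflective ETM+ bands):
water = np.array([
    [60, 48, 35, 20, 12, 8],
    [62, 50, 36, 21, 13, 9],
    [61, 49, 34, 19, 11, 7],
])
print(signature_mean(water))  # highest in the visible bands, falling toward the SWIR
```

Open water behaves this way because water absorbs strongly in the near- and mid-infrared, which is why its signature drops off so sharply compared with vegetation or soil.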


The second part of the lab was performing simple band ratios to assess the health of vegetation and soils. The vegetation image was loaded into the viewer, then Unsupervised was selected under the Raster tab and NDVI was chosen from the drop-down. This opened the Indices interface, where the input image and output area were selected and the sensor was changed to Landsat 7 Multispectral. After these changes were made, OK was clicked and the program created a new image. The image was then brought into ArcMap, a map-making program, and the layers were labeled so that a person on the street could understand the map. The same process was then run on soil health in the same area as the vegetation in the first process. The only difference was that Ferrous Minerals was selected under Function, because that was what was being examined. Again the image was moved over to ArcMap for analysis.
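Both indices the lab ran are simple band arithmetic. A sketch of the two ratios with NumPy, using hypothetical Landsat 7 DN values (NDVI is the standard formula; the SWIR/NIR form of the ferrous-minerals ratio is an assumption based on the common ERDAS index definition):

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red); values near +1 indicate dense,
    # healthy vegetation, values near 0 or below indicate soil or water.
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def ferrous_minerals(swir, nir):
    # Ferrous minerals index as a simple SWIR / NIR band ratio
    # (bands 5 / 4 on Landsat 7); higher values flag iron-bearing surfaces.
    swir, nir = np.asarray(swir, float), np.asarray(nir, float)
    return swir / nir

# Hypothetical DNs for two pixels: red (band 3), NIR (band 4), SWIR (band 5)
red  = np.array([30.0, 80.0])
nir  = np.array([90.0, 85.0])
swir = np.array([45.0, 120.0])
print(ndvi(nir, red))             # the vegetated first pixel scores much higher
print(ferrous_minerals(swir, nir))
```

The first pixel ratios like vegetation (high NIR relative to red), the second like a developed or mineral surface, which mirrors the east/west pattern seen in the result maps below.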

Results:

This was the first sample taken, of standing water, and this graph shows the signature mean of its spectral reflectance.

This is the image of all of the Earth surface samples on the signature mean plot graph. As can be seen, there are many differences in the spectral reflectance of these samples, and it was very interesting to see all the different samples next to each other.

This is the map created in ArcMap showing the differences in vegetation across Eau Claire and Chippewa counties. The map shows that vegetation is higher in the eastern section, where the development of larger cities is very limited, meaning that more vegetation is allowed to grow.

This final image shows how common ferrous minerals are in the sample areas. The image shows that ferrous minerals are most commonly found in the more developed areas. The area around the city of Eau Claire has more ferrous minerals because they were probably exposed during the development of the city.



Sources:

Satellite image is from Earth Resources Observation and Science Center, United States Geological Survey.


Tuesday, December 6, 2016

Lab 7 Remote Sensing of the Environment

GEOG 338
Charlie Krueger
Lab 7: Remote sensing of the Environment

Goal and Background:
The main goal of the assigned lab was to improve the skill of performing important corrections on aerial photos and satellite images. These images can have many issues when viewed by researchers, and being able to correct them improves the data gathered from the research. This lab was created to strengthen and sharpen skills such as calculating photographic scale, calculating relief displacement, and measuring the area and perimeter of objects in an image. The lab also introduced the class to performing orthorectification on images taken by satellites, and by the end the class would have the information to perform many different photo-correcting tasks.

Methods:
The first part of this lab focused on calculating the scale of nearly vertical aerial photos, which was covered in lecture and had been discussed prior to the lab. There were formulas needed to solve each of these problems, plus the information given in the problem. On both problems a measurement had to be made on the computer screen to get the information needed to solve the problem. This was a concern because the screen was difficult to measure and could give different results if a different person were to measure it. In this same section the program ERDAS Imagine was used to find the perimeter and area of a lagoon taken from a satellite image. The program's measurement tool was used; after that, all that was done was carefully going around the edge plotting points, and the answer was given once the whole lagoon had been surveyed.
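The two formulas behind this section can be sketched directly. Scale of a vertical photo is S = f / (H - h) and relief displacement is d = r * h / H; the numbers below are hypothetical, not the lab's actual problem values:

```python
def photo_scale(focal_length_ft, flying_height_ft, terrain_elev_ft):
    # Scale of a nearly vertical aerial photo: S = f / (H - h),
    # where H is flying height above datum and h is terrain elevation.
    return focal_length_ft / (flying_height_ft - terrain_elev_ft)

def relief_displacement(radial_dist, object_height, flying_height):
    # d = r * h / H: how far an object's top is displaced away from the
    # principal point, in the same units as the radial distance r.
    return radial_dist * object_height / flying_height

# Hypothetical inputs: a 152 mm lens (0.4987 ft), 20,000 ft flying
# height, terrain at 796 ft.
s = photo_scale(0.4987, 20000, 796)
print(f"1:{round(1 / s)}")                      # scale denominator
print(relief_displacement(2.0, 100, 4000))      # inches of displacement
```

This also shows why the on-screen measurement worried the class: the measured photo distance feeds directly into these ratios, so a small measuring error shifts the computed scale.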

The second part of this lab was stereoscopy, which is the generation of a three-dimensional image using an elevation model. The program used to create these 3D images was the ERDAS Imagine terrain tool Anaglyph, which took the two images selected in the lab and processed them to create the image. This was done twice in this section, with the second run creating a very interesting 3D image. The 3D effect could only be seen while wearing blue-and-red-lensed glasses, or 3D glasses.

The last section of the lab was orthorectification, which was the largest part of the lab and by far the most strenuous. This process takes two images that are close to each other in location but do not fit together, and runs programs so that they make one smooth image. To begin, the Toolbox button was selected, then Imagine Photogrammetry, which brought up a whole new dialog box. The model setup was done by selecting Polynomial-based Pushbroom and then SPOT Pushbroom. The coordinate system was then corrected so that the final images would line up with one another in the same system. Next the images were added with the Add Frame icon; with the image on the screen, the Show and Edit Frame Properties icon was selected, and once the properties were checked and confirmed the image could be edited. The Start Point Measurement tool was selected and the process of selecting ground control points began. In total 12 ground control points were selected between the reference image and the image being corrected. The reference image was there to make sure that the positions of the ground control points were correct and that the image would not be slanted in a strange way. After these points, another reference image was added along with another six ground control points; this second image served as an additional reference to make the final image even more accurate. Next the Reset Vertical Reference Source icon was selected, which allowed a Z value to be given to the images. The Z value is the height in the image and was taken from the DEM palm_springs_dem.img. Finally, after a long process, the Edit Triangulation Properties icon was selected and many items were changed there because of the specifics of the images used in the program. The triangulation was run, two output images were selected to be made from this program, and these were brought to the viewer, holding the end results.


Results:
Here is the image created from the stereoscopy process. This image was created by a computer program that was given specific images to combine into a 3D image. The process was simple because the lab laid out the specific details of how to create it.

Stereoscopy Image




This second image is from the orthorectification process of the lab, and it can be seen that there is a small black line that runs between the images. This goes away when the user zooms in using the Imagine program. This process was very detail-oriented and took a long time because of the selecting of the ground control points.

Orthorectification Image


Sources:
National Agriculture Imagery Program (NAIP) images are from United States Department of Agriculture, 2005.
Digital Elevation Model (DEM) for Eau Claire, WI is from United States Department of Agriculture Natural Resources Conservation Service, 2010.
Lidar-derived surface model (DSM) for sections of Eau Claire and Chippewa are from Eau Claire County and Chippewa County governments respectively.
Spot satellite images are from Erdas Imagine, 2009.
Digital elevation model (DEM) for Palm Spring, CA is from Erdas Imagine, 2009.
National Aerial Photography Program (NAPP) 2 meter images are from Erdas Imagine, 2009.

Thursday, November 17, 2016

Lab 6 Geometric Corrections

GEOG 338
Charlie Krueger
Lab 6: Geometric Correction

Goal and Background:
The goal of this lab is to give an introduction to a very crucial image preprocessing exercise known in the geographic world as geometric correction, which is used mostly in remote sensing. This process is the correction of errors in remotely sensed data; the errors are usually caused by satellites or aircraft not staying at a constant altitude, or by sensors being diverted from the primary focus. To fix these images researchers often compare ground control points against an accurate base map and then re-sample so that the locations and appropriate pixel values can be calculated. This lab focused on developing skills in the two major types of geometric correction used most on satellite images. The two procedures used when making these corrections are spatial interpolation and intensity interpolation, both of which were used in the lab. Spatial interpolation is when ground control point (GCP) pairs are used to establish a geometric coordinate transformation that is applied to fix the location of pixels in the output image. Intensity interpolation is the extraction of brightness values from an x,y location in the original but distorted image and their relocation to the correct x,y coordinate location in the output image. There are also three different types of geometric correction: image-to-map rectification, image-to-image registration, and the hybrid approach. In this lab only the image-to-map and image-to-image methods were used to make geometric corrections.

Methods:
The first section of the lab dealt with the city of Chicago and was an image-to-map rectification. Erdas Imagine was the program used in this lab, and it allowed the class to run all of the necessary tools to correct the images. In Imagine, two images of the Chicago area were viewed, one of which was the reference map used to correct the image that would be rectified. The image being corrected was selected, then the Multispectral toolbar, and then Control Points. After Control Points was selected, the user was to select Polynomial in the Set Geometric Model dialog box.
Multispectral toolbar with Control Points Highlighted

After that, two boxes opened, both containing tools. One was the GCP Tool Reference Setup, which was left at the default, and then the reference image was selected. The second box was the Multipoint Geometric Correction, the tool in which the geometric correction process would take place. In this section the image only needed a 1st-order polynomial equation to transform it. This window contained both the image being rectified and the reference image. First, all of the points already on the image were deleted because they were incorrect. For this image four pairs of GCPs were created: the first three were placed manually on both images, and the last only needed a click on one map to create a GCP on both. The Create GCP tool was used to create the points on both images. Since this image only needed a 1st-order polynomial equation, only three points had to be created for the model to report "Model solution is current" where before it had stated "Model has no solution." The reason only three points had to be added manually was the warp tool error that appears when too many points are added to both maps. After the points were plotted, the root mean square (RMS) error had to be examined. This represents how close the paired points on the two images are to being accurate. The ideal is a total RMS error of 0.5 or below, so by zooming in on the points and slowly moving them around, the RMS total was dropped below one, which was what was needed for this intro lab. Next the Display Resample Image Dialog button was selected, which created the image after it was saved in the student's Lab 6 folder.
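The RMS total the lab chased below 0.5 is just the root of the mean squared GCP residual. A small sketch of that computation, with hypothetical pixel coordinates:

```python
import math

def gcp_rmse(predicted, actual):
    """Total RMS error over a set of GCP pairs.

    predicted: (x, y) positions the fitted polynomial transform produces;
    actual: the corresponding reference-image positions. Each point's
    residual is sqrt(dx^2 + dy^2); the total RMS is the square root of
    the mean squared residual.
    """
    sq = [(px - ax) ** 2 + (py - ay) ** 2
          for (px, py), (ax, ay) in zip(predicted, actual)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical residuals for four GCPs, in pixels:
pred = [(100.2, 50.1), (200.0, 80.4), (300.3, 120.0), (400.1, 160.2)]
act  = [(100.0, 50.0), (200.0, 80.0), (300.0, 120.0), (400.0, 160.0)]
print(gcp_rmse(pred, act))  # about 0.30, under the 0.5-pixel target
```

Nudging any one GCP closer to its reference position shrinks its residual and pulls this total down, which is exactly what the zoom-and-drag step in the lab accomplishes.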
Screenshot of the second image in the process of getting the RMS total lowered

The next part of the lab was image-to-image registration, which used two images that looked similar, one being corrected and one being used as the reference image. The swipe tool was used to examine how these images differed and how far off they were from each other. Just as with the other images, the Control Points tool was selected and the same process was started, except this time the polynomial order in the model properties was changed from 1 to 3. This means that a total of 10 GCPs would have to be placed on the images. Just as with the first images, the points were added and then moved to get a low RMS total, and then the Display Resample Image Dialog button was selected. Once this was selected, the resample method was changed from nearest neighbor to bilinear interpolation because this method worked better for these images.

Results:
This was the image created from the Chicago images, and the difference is that the coordinate system is more accurate after the geometric corrections. When zoomed in, lines in the output image were straighter than in the input.

Image rectified from the Chicago Images, Image-to-map 


This was the output from the second set of images, and it did not turn out that well. The image is not accurate and does not match up with the reference image on its left side. Using the swipe tool this is very obvious, but closer to the right side of the image the images start to match up. It is strange that one side matches up with the reference image and the other does not.
Image rectified from the Second set of images Image-to-Image

Sources:

Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey. Digital raster graphic (DRG) is from Illinois Geospatial Data Clearing House.

Thursday, November 10, 2016

Lab 5 LiDAR remote sensing

GEOG 338
Charlie Krueger
Lab 5: LiDAR remote sensing

Goal and Background:
The goal of this lab was for the class to gain knowledge about LiDAR data and what can be done with it. LiDAR stands for light detection and ranging, and it uses the light from a laser pulse to measure ranges to the Earth. These laser pulses help gather information about the shape of the Earth and the characteristics of its surface. Using LiDAR systems, scientists are able to examine both the natural and manmade things in the environment, like buildings and bridges. LiDAR uses two different types of lasers that work better for different surfaces: a near-infrared laser to map land, and green light to measure seafloors and riverbeds because it can penetrate water. In this lab there were two specific objectives. The first was processing and retrieving different surface and terrain models; the second was using the models to create intensity images and other products that come from the point cloud. The data the class used in this lab was LiDAR point clouds in LAS file format. LiDAR is an expanding part of the remote sensing field and is sure to produce many jobs in the future, so it was good for the class to become familiar with using it.
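The "ranging" in light detection and ranging is a two-way travel-time calculation, which can be sketched in a few lines (the echo time below is a made-up example, not lab data):

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def lidar_range(echo_time_s):
    # The laser pulse travels to the surface and back, so the range
    # to the target is half the round-trip distance: R = c * t / 2.
    return C * echo_time_s / 2.0

# A return arriving 6.7 microseconds after the pulse was emitted:
print(lidar_range(6.7e-6))  # roughly 1004 m below the sensor
```

Each return recorded in the LAS files used later in the lab is ultimately one of these range measurements, georeferenced with the aircraft's GPS/IMU position.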

Methods:
The first part of this lab was point cloud visualization in Erdas Imagine, a program used to look at data and make corrections to images. The lab instructed the class to copy the Lab 5 folder and move it to a location where students could use the data. This data was then opened in ArcMap, another program used to examine data and make changes to it.
The second part of the lab was to generate a LAS dataset and explore LiDAR point clouds with ArcGIS/ArcMap. To start, in ArcMap the students had to create a new LAS dataset in the Lab 5 folder. Next all of the data viewed in Erdas Imagine was copied into the LAS dataset. The statistics of the added data were then calculated and examined in the lab. The coordinate system of the data was also examined, as it held valuable information for the lab, and then the actual coordinate system to be used was set. The LAS dataset was then placed onto the screen in ArcMap so the data could be examined, and the properties of the data were changed so that the information would come up on the screen and be viewed. The LAS Dataset toolbar was very useful during this lab because it allowed quick viewing changes: switching filters among elevation, aspect, slope, and contour was simple with this toolbar. The layer properties of the dataset could also be used to transform the data into what the user wanted to view; here the data filter could be customized even further by changing the classification codes and the returns. Another interesting tool was the LAS Dataset Profile, which allowed a 3D-type image to be created from a selected area of the map.
The final section of this lab was the generation of LiDAR derivative products. Here different views of the data were created using different tools in ArcMap. The following maps were made in this section: digital surface model (DSM) with first return, digital terrain model (DTM), hillshade of the DSM, and hillshade of the DTM. The first tool used was LAS Dataset to Raster, and it took a while because of the large dataset it was processing. Another tool, Hillshade, was then run on that output. This part of the lab also instructed the class to create a map by deriving a LiDAR intensity image from the point cloud. The same tools were used in this process; the setup was just a bit different, changing the dataset value from Elevation (as in the last map) to Points, with first return and everything else the same. The map image was then saved to a different file type so that it could be viewed in ERDAS Imagine.
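Conceptually, LAS Dataset to Raster bins the point cloud into grid cells and summarizes each cell. A toy NumPy sketch of a first-return DSM (keeping the highest elevation per cell; this is an illustration of the idea, not the ArcMap tool's actual algorithm, and the points are hypothetical):

```python
import numpy as np

def points_to_dsm(x, y, z, cell=1.0):
    """Grid a point cloud into a simple DSM: highest z per cell.

    A DSM keeps the tops of buildings and canopy, which is why the
    maximum (not the mean) elevation in each cell is retained. A DTM
    would instead be built from ground-classified returns only.
    """
    x, y, z = map(np.asarray, (x, y, z))
    col = ((x - x.min()) / cell).astype(int)
    row = ((y.max() - y) / cell).astype(int)   # row 0 at the top (north)
    dsm = np.full((row.max() + 1, col.max() + 1), np.nan)
    for r, c, h in zip(row, col, z):
        if np.isnan(dsm[r, c]) or h > dsm[r, c]:
            dsm[r, c] = h
    return dsm

# Hypothetical returns: two hits in one cell (canopy over ground) plus one more
x = [0.2, 0.4, 1.5]
y = [0.3, 0.6, 0.2]
z = [250.0, 262.0, 251.0]
print(points_to_dsm(x, y, z))  # the 262 m canopy return wins its cell
```

Swapping Elevation for Points (the intensity image step) would keep the same binning but summarize the return intensity attribute instead of z.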

Results:
Digital Surface Model (DSM) First Return
Digital Terrain Model (DTM) First Return

DSM Hill shade tool applied

DTM Hill shade tool applied


Intensity Image Created




Sources:

Lidar point cloud and Tile Index are from Eau Claire County, 2013.  Eau Claire County Shapefile is from Mastering ArcGIS 6th Edition data by Maribeth Price, 2014.

Tuesday, November 1, 2016

Lab 4 Miscellaneous image functions

GEOG 338
Charlie Krueger
Lab 4 Miscellaneous image functions

 Goals/Background:


There are multiple goals for this lab exercise. The first is to be able to select a study area from an image that is much larger than it, because much of the time researchers will only want to focus on a smaller area of a larger satellite image. Another goal of this lab is to use tools in computer programs to improve the image of the map. This process is done by taking things like coarse-resolution images and changing them to better serve the visual purpose of the image: taking an image and then recreating a higher-resolution image. One goal of this lab was to enhance images in different ways, like performing radiometric enhancement techniques. One of these techniques is called haze reduction, and it helps the image become clearer by removing the haze.
Another goal of the lab was fairly new to the software being used: linking the image viewer to Google Earth. Re-sampling was another goal of this lab, in which a researcher changes the size of a pixel either up or down to get a better view of the research area. The next goal of the lab was very interesting to learn about, and this was image mosaicking. This is when a pair of images from two adjacent satellite scenes do not fit together well, but through mosaicking they do. The last goal of the lab was to detect changes in images through changes in brightness values.


Methods: 

The first section of the lab was creating an area of interest on a map, which helps researchers focus on a study area. This area of interest was created using an Inquire Box from the Raster toolbox. After the tool was run, the selected area was saved in a personal folder to use later. Next in this section the area of interest was cut from the selected map and made into a personal file. The next tool used was the Subset and Chip tool, also under the Raster tab in the program. This let the area of interest be placed onto another map, where it stood out because of the different zone of the images.
The next section used the Pan Sharpen tool, under which Resolution Merge was selected. All the necessary images were selected for input, and the output was a new image created by the tool. Nearest Neighbor was the re-sampling technique selected for use. This tool created an image that was darker in color and higher in resolution.
The next part was haze reduction, which used the Haze Reduction tool under the Radiometric tab. After the original file was run through this tool, the resulting image was much more vibrant in color and the outlines of objects became easier to view against their surroundings.
Google Earth was the star of this section of the lab, in which the viewer was linked with Google Earth. This was completed by hitting the Connect to Google Earth button on the toolbar. Then Match GE to View was selected so the image viewer was looking at the same spot as Google Earth. Sync GE to View was then selected, and after that every move made in the image viewer was replicated by Google Earth.
In part five, resampling was the goal, which is the reduction or increase of the size of pixels. First the Raster toolbar was selected, then Spatial, followed by the Resample Pixel Size tool. Then two different methods were applied to the same image: nearest neighbor and bilinear interpolation. The image from nearest neighbor showed no real change, while bilinear interpolation made the image smoother around edges.
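The difference between the two methods comes down to how each output pixel is filled. A minimal sketch of both interpolators on a tiny hypothetical image:

```python
import numpy as np

def nearest(img, r, c):
    # Nearest neighbor: copy the single closest input pixel. Original
    # DN values are preserved exactly, but edges stay blocky.
    return img[int(round(r)), int(round(c))]

def bilinear(img, r, c):
    # Bilinear interpolation: distance-weighted average of the four
    # surrounding pixels. Edges come out smoother, but the output
    # contains new (averaged) DN values.
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    dr, dc = r - r0, c - c0
    top = img[r0, c0] * (1 - dc) + img[r0, c0 + 1] * dc
    bot = img[r0 + 1, c0] * (1 - dc) + img[r0 + 1, c0 + 1] * dc
    return top * (1 - dr) + bot * dr

img = np.array([[10.0, 20.0],
                [30.0, 40.0]])
print(nearest(img, 0.4, 0.4))   # snaps to the 10 at (0, 0)
print(bilinear(img, 0.5, 0.5))  # blends the four neighbors to 25
```

This is also why nearest neighbor was the safer default earlier in the lab when radiometric values mattered, while bilinear interpolation looked better visually here.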
Mosaicking was the next task given in the lab, and this was the process of joining two adjacent satellite scenes. The two images were added, and Mosaic Express was used first, followed by a more advanced method called MosaicPro. The files were added into Mosaic Express and not much else was changed in the process. The final image did not turn out that well and had very different colors. Next MosaicPro was used, and this was definitely a more technical method of mosaicking. MosaicPro was opened and the images were added, making sure to specify image area options by clicking Compute Active Area and hitting Set. Once both images were in the program, a histogram matching tool was used to make sure the colors would match. Then, upon hitting Process, the program created a much cleaner version of the images with consistent colors.
Last was binary change detection using image differencing. Using the same image taken in two very different years, a program was used to pick up on the changes in the brightness of the pixels. The Raster tool was activated, the Functions tab was selected, and the two input operators were set. The program was run, giving as output the change in the fourth layer of the images. Next in this section a simple model was created to find the change between the two images. This model skewed the histogram of the images, so a correction had to be made, and another simple model was created from the first to correct it. Finally, a product showing the change between the images came out and was brought over to ArcMap, another program, where the information was used to create a final map. This map showed the counties behind the change and outlined them in red for easy identification.
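The differencing-and-threshold logic of the model can be sketched with NumPy. This is an illustrative version, not the lab's actual model: it subtracts the two dates and flags pixels whose difference falls outside the mean plus or minus k standard deviations of the difference image (k = 1.5 is a common starting threshold, assumed here):

```python
import numpy as np

def change_mask(band_t1, band_t2, k=1.5):
    """Binary change detection by image differencing.

    Subtract the two dates, then flag pixels whose brightness
    difference lies outside mean +/- k * std of the difference image.
    Unchanged pixels cluster near the mean; real change sits in the tails.
    """
    diff = np.asarray(band_t2, float) - np.asarray(band_t1, float)
    lo = diff.mean() - k * diff.std()
    hi = diff.mean() + k * diff.std()
    return (diff < lo) | (diff > hi)

# Hypothetical band-4 DNs for six pixels at two dates; only the last
# pixel changes substantially (e.g. cleared land regrowing).
t1 = np.array([100, 102, 98, 101, 100, 60], float)
t2 = np.array([101, 100, 99, 100, 102, 130], float)
print(change_mask(t1, t2))  # only the last pixel is flagged as change
```

The flagged mask is what gets exported as its own layer and symbolized in red on the final ArcMap product.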

Results:
Results for Section 1. This was the area of interest after it was selected
 
Area of interest over the original map
The first attempt using Mosaicking Express
Using MosaicPro there is no longer a difference in the images, and they connect
Final Map after the image differences were taken out and made its own layer 


Sources: 


Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey. Shapefile is from Mastering ArcGIS 6th edition Dataset by Maribeth Price, McGraw Hill. 2014.