Tuesday, December 15, 2015

Spectral Signature Analysis

Introduction:
The main goal of this lab is to gain experience in the measurement and interpretation of the spectral reflectance of Earth surface and near-surface materials captured by satellite images. We were tasked with collecting spectral signatures from remotely sensed images, and using ERDAS Imagine software we were able to graph and perform analysis on these signatures.

Data:
The data provided to us in the lab folder was a Landsat ETM+ image that covers Eau Claire and other regions in Wisconsin and Minnesota. This was used to collect and analyze the spectral signatures of various Earth surface and near-surface features.

Methods:
To collect spectral reflectance, the Landsat ETM+ image was used. From this data we were to collect reflectance signatures from 12 different surface types found on Earth. These are the 12 signatures:

1. Standing water
2. Moving water
3. Vegetation
4. Riparian vegetation
5. Crops
6. Urban Grass
7. Dry Soil (uncultivated)
8. Moist Soil (uncultivated)
9. Rock
10. Asphalt highway
11. Airport Runway
12. Concrete Surface

The first signature that we collected was from Lake Wissota. In order to collect a signature we needed to digitize an area around it, which we did with the polygon digitizing tool. Next we clicked on the raster processing tools and opened the Signature Editor. Using the new signature from the area of interest we created a new class and renamed it Standing Water. The Display Mean Plot window then let us graph the spectral reflectance curve across the bands.

Standing Water 

After this signature was completed we collected the other AOIs by repeating the same method until all 12 signatures were done.
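The mean plot that the Signature Editor draws is essentially the average pixel value per band over the AOI. As a rough illustration (not the ERDAS implementation; the array shapes and toy values here are assumptions), the calculation looks like this in Python with NumPy:

```python
import numpy as np

def mean_signature(image, mask):
    """Mean digital number per band over an AOI mask.

    image: (bands, rows, cols) array of pixel values
    mask:  (rows, cols) boolean array marking the AOI pixels
    """
    return np.array([band[mask].mean() for band in image])

# Toy 2-band, 3x3 image with a 4-pixel AOI in the upper-left corner.
image = np.array([
    [[10, 10, 50], [10, 10, 50], [50, 50, 50]],   # band 1
    [[20, 20, 90], [20, 20, 90], [90, 90, 90]],   # band 2
])
mask = np.zeros((3, 3), dtype=bool)
mask[:2, :2] = True

print(mean_signature(image, mask))  # one mean value per band
```

Plotting those per-band means against band number gives exactly the kind of signature curve shown in the mean plot window.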

Conclusions
Plotting all the signatures together created an interesting view of the captures. The combined plot shows which signature reflects the most: in this case it was the dry soil signature, while standing water reflected the least. This is unsurprising, since dry soil reflects energy strongly due to its lack of moisture, while water absorbs much of the incoming energy, particularly in the infrared absorption bands.


All Captures on same Signature Mean Plot

Thursday, November 19, 2015

Image Geometric Correction Methods

Introduction

Geometric correction is an important procedure that is normally performed on satellite images as part of preprocessing, prior to the extraction of biophysical information. It is used to remove geometric distortion from an image so that the pixels sit in their proper planimetric positions. There are three types of geometric correction: image-to-map, image-to-image, and a hybrid approach that employs both. Geometric correction starts with a distorted image and a reference image. A set of ground control points is collected on the reference image and matched to the distorted image. Upon completion of the multipoint geometric correction the resulting image should be rectified.

Goals

In this lab we were tasked with two types of geometric correction: image-to-map rectification and image-to-image rectification. We perform both with a reference image to create a planimetric image. The images we had were skewed by systematic error: the mapping satellite followed a nadir path while the Earth continued to rotate from west to east, producing a skewed version of the image. By using a reference image we can rectify this.

Methods

ERDAS Imagine was used to create the rectified images for this project. The first objective was to create a rectified image using a 1st order polynomial equation in an image-to-map rectification. The distorted image is Chicago in digital raster format; the reference image is a USGS map of Chicago. Both images need to be imported into ERDAS. Upon importation, the Multispectral raster tool is selected and control points are inserted into the distorted and reference images. The rectification is a simple multipoint geometric correction using a first-order transformation, which means we need to enter a minimum of 3 GCPs.

The GCPs need to be inserted into both the distorted and reference images. Since we are using a first-order transformation, care needs to be taken not to insert more than 4 points, to avoid crashing the tool. After entering the GCPs in both images, the Root Mean Square (RMS) error needs to be below 2.0 for correct placement; this sometimes requires manual repositioning of the GCPs.
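The first-order transformation and its RMS error can be sketched outside of ERDAS. This is a minimal illustration, assuming GCPs as simple (x, y) coordinate pairs and a NumPy least-squares fit; it is not the ERDAS algorithm itself:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares first-order (affine) transform from GCP pairs.

    src, dst: (n, 2) arrays of (x, y) points, n >= 3.
    Returns a 2x3 coefficient matrix mapping src -> dst:
    x' = a0 + a1*x + a2*y ; y' = b0 + b1*x + b2*y
    """
    src = np.asarray(src, float)
    A = np.column_stack([np.ones(len(src)), src])   # design matrix [1, x, y]
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(dst, float), rcond=None)
    return coeffs.T

def rms_error(src, dst, coeffs):
    """Per-GCP residual distances and the total RMS error of the fit."""
    A = np.column_stack([np.ones(len(src)), np.asarray(src, float)])
    residuals = A @ coeffs.T - np.asarray(dst, float)
    per_gcp = np.hypot(residuals[:, 0], residuals[:, 1])
    return per_gcp, np.sqrt(np.mean(per_gcp ** 2))

# Four GCPs related by a pure shift of (5, -3): total RMS should be ~0.
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(5, -3), (15, -3), (5, 7), (15, 7)]
coeffs = fit_affine(src, dst)
per_gcp, total = rms_error(src, dst, coeffs)
print(total)
```

In practice, a GCP whose per-point residual pushes the total RMS above the 2.0 threshold is the one to reposition.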

RMS error correcting

The image can now be computed using the multipoint geometric correction tool. Since this image uses a 1st order transformation we can use nearest neighbor interpolation. Once the image is interpolated we can see how it has been moved into a geometrically correct position.
Geometrically corrected image on left, original on right

The next step is image-to-image rectification. This is done in much the same way as above, with some differences. Using a reference image of Sierra Leone with a distorted image of the same area, the image is rectified with the multispectral tool. For this image bilinear interpolation will be used, and more GCPs are required due to the skewness of the image. After adding the 12 needed GCPs we are ready to interpolate. Upon running the bilinear interpolation we have a corrected image of Sierra Leone.


Bilinear corrected image on left with original on right
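Bilinear interpolation itself is a weighted average of the four surrounding pixels. A minimal sketch (assuming a NumPy grid and in-bounds fractional coordinates) shows why it produces smoother output than nearest neighbor, which would simply snap to the closest pixel:

```python
import numpy as np

def bilinear_sample(image, x, y):
    """Bilinear interpolation of a 2D grid at fractional (x, y).

    x indexes columns, y indexes rows; (x, y) must lie strictly
    inside the grid so the four neighbors exist.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    # Blend along x on the top and bottom rows, then blend along y.
    top = image[y0, x0] * (1 - fx) + image[y0, x1] * fx
    bottom = image[y1, x0] * (1 - fx) + image[y1, x1] * fx
    return top * (1 - fy) + bottom * fy

grid = np.array([[0.0, 10.0],
                 [20.0, 30.0]])
print(bilinear_sample(grid, 0.5, 0.5))  # 15.0, the blend of all four pixels
```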

Results
Running geometric correction produces spatially correct images through interpolation. This is useful for creating undistorted images that can be evaluated without skewness.

Thursday, November 12, 2015

Using Lidar Data

Lab 5: Using Lidar Data

Introduction
To create accurate, high-resolution maps we need to use data beyond traditional Digital Elevation Models, which are becoming obsolete. The data we use instead is known as lidar, or Light Detection and Ranging. Lidar is interesting because it uses ultraviolet, visible, and near-infrared wavelengths to map physical features by emitting laser pulses toward the ground and measuring the return time. Using this data we can create high-resolution maps that are useful for agriculture, geologic processes, surveying, and even mapping of the ocean floor. Our application of lidar was to create basic hillshaded Digital Terrain Models (DTMs) and Digital Surface Models (DSMs).
Goals
The main goal of this lab exercise was to gain basic knowledge of lidar data structure and processing. We were required to process and retrieve various surface and terrain models, and to use the point cloud derived from the lidar data to create a variety of images and products.
Methods
To create these high-resolution maps we first downloaded the raw lidar data from the assigned lab folder. Using this data we created a new LAS dataset named Eau_Claire_City. After adding the LAS files to the dataset window we were able to assign a coordinate system for the XY and Z axes. When this data was projected in ArcMap we were able to see the point cloud returns. Using this raw lidar data we can symbolize the map by elevation, contour lines, aspect, and slope. We also used the lidar data to create interactive views with the Profile View tool on the LAS toolbar.

The power of lidar: returns from the Phoenix Park bridge

We were then able to create a DSM and DTM by manipulating the lidar data. We used ArcToolbox to create a raster from the lidar data by following the route:

Conversion Tools > To Raster > LAS Dataset to Raster

This allowed us to create a DSM based on the criteria we selected. For our models we used binning with the cell assignment set to Maximum and natural neighbor as the void-fill method, with a cell size of 2 meters per pixel for the raster. After the DSM was completed we applied a hillshade effect to it.
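The binning step with Maximum cell assignment can be sketched as follows. This is a toy illustration, assuming points as (x, y, z) triples and a grid origin at (0, 0); the real tool also performs the natural neighbor void fill, which is omitted here:

```python
import numpy as np

def bin_max(points, cell_size, nrows, ncols):
    """Grid lidar points by taking the maximum z per cell (DSM-style).

    points: (n, 3) array of (x, y, z); origin assumed at (0, 0),
    y increasing northward so row 0 is the top of the raster.
    Empty cells are left as NaN (a void-fill step would come next).
    """
    grid = np.full((nrows, ncols), np.nan)
    for x, y, z in points:
        col = int(x // cell_size)
        row = nrows - 1 - int(y // cell_size)
        if 0 <= row < nrows and 0 <= col < ncols:
            if np.isnan(grid[row, col]) or z > grid[row, col]:
                grid[row, col] = z
    return grid

# Two returns in the same 2 m cell: the higher (canopy) hit wins,
# which is exactly why Maximum assignment yields a surface model.
pts = np.array([[1.0, 1.0, 280.0], [1.5, 0.5, 291.0], [3.0, 1.0, 282.0]])
print(bin_max(pts, cell_size=2.0, nrows=1, ncols=2))
```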
DSM Raster before Hillshade
DSM after Hillshade
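The hillshade effect itself is a standard illumination formula applied to the elevation raster. A sketch of that formula follows, assuming the common 315° azimuth / 45° altitude sun position and a NumPy DEM; the ERDAS and ArcGIS implementations may differ in details:

```python
import numpy as np

def hillshade(dem, cell_size=2.0, azimuth=315.0, altitude=45.0):
    """Hillshade: illumination of each DEM cell by a sun at the
    given azimuth/altitude (degrees). Returns values in 0-255."""
    az = np.radians(360.0 - azimuth + 90.0)    # compass -> math convention
    zen = np.radians(90.0 - altitude)          # sun zenith angle
    dzdy, dzdx = np.gradient(dem, cell_size)   # elevation slopes per axis
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(dzdy, -dzdx)
    shaded = (np.cos(zen) * np.cos(slope)
              + np.sin(zen) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0, 1) * 255

# A flat surface is uniformly lit at cos(zenith), regardless of azimuth.
flat = np.full((4, 4), 300.0)
print(hillshade(flat)[0, 0])
```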

Next we created a DTM, which shows only the terrain of the study area. We followed the same route as for the DSM but switched to the Minimum cell assignment type and looked only at the ground returns of the lidar data. This ignores all returns except those classified as ground, effectively giving us a view of the terrain alone.

DTM Raster with Hillshade
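Filtering to ground returns is just a selection on the LAS classification codes (2 is ground and 6 is building in the ASPRS scheme). A minimal sketch with assumed toy points:

```python
import numpy as np

# LAS classification code 2 marks ground returns (ASPRS standard).
GROUND = 2

def ground_points(points, classification):
    """Keep only ground-classified returns for a DTM.

    points: (n, 3) xyz array; classification: (n,) array of LAS codes.
    """
    return points[classification == GROUND]

pts = np.array([[0.0, 0.0, 295.0],   # building roof return
                [1.0, 0.0, 280.0],   # ground return
                [2.0, 0.0, 281.0]])  # ground return
codes = np.array([6, 2, 2])          # 6 = building in the ASPRS scheme
print(ground_points(pts, codes))     # only the two ground returns survive
```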


We also created an intensity image, which is displayed in black and white. This is very helpful for identifying features where a lot of detail is needed.
Intensity Image

Results

The results involved creating new lidar-derived maps based on the DSM, DTM, and intensity data. All of these let us use lidar data to our advantage, which will be helpful for geologic practices, land use surveying, and slope analysis in the Eau Claire area.

Thursday, October 29, 2015

Remote Sensing Lab 4


Background:

In this report we were tasked with creating image functions that could be used later for remote sensing applications. The following images were created using the supplied content in the lab exercise. Using these images we were then tasked with manipulating the data to create new images, areas of interest, image fusions, enhancement techniques, and mosaics. All of these would create better areas of study or better applications to further our research in remote sensing.

Methods:

The first task was to create an area of interest (AOI). Using ERDAS Imagine, we selected an area of the image Eau_claire_2011.img by opening the raster tools and creating an Inquire Box around the area we wished to study further. We then created a subset of the image using the Create Subset Image tool, saved this data, and created a new area of study based on our Inquire Box.


Image fusion was the next task. A new area of interest is not usable if we cannot see the area we are working with, and unfortunately the AOI we were interested in did not have good spatial resolution. To create better resolution we used a pan-sharpening tool with two different images: the first had 30 meter resolution, while the second, panchromatic, image had 15 meter resolution. We used the raster tools followed by Pan Sharpen to create a Resolution Merge. This merge fused the multispectral and panchromatic inputs to create a sharper image for a better AOI, using the nearest neighbor algorithm for resampling. After the merge was complete we had an AOI with much better resolution.
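ERDAS's Resolution Merge supports several fusion methods. As one illustrative example (a Brovey-style transform, not necessarily what the tool computed here, with toy arrays as assumptions), pan-sharpening rescales each multispectral band by the high-resolution panchromatic detail:

```python
import numpy as np

def brovey(ms, pan):
    """Brovey-style pan-sharpening sketch.

    ms:  (3, rows, cols) multispectral bands already resampled to the
         panchromatic grid; pan: (rows, cols) high-resolution band.
    Each band is rescaled by pan / mean(ms), injecting pan detail
    while keeping the relative band proportions (colors) intact.
    """
    intensity = ms.mean(axis=0)
    return ms * pan / (intensity + 1e-9)   # small epsilon avoids div by 0

# Flat 3-band image: the spatial detail of pan is injected into each band.
ms = np.full((3, 2, 2), 100.0)
pan = np.array([[100.0, 200.0], [100.0, 50.0]])
sharp = brovey(ms, pan)
print(sharp[0])
```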

Here we see the new sharp image on the bottom right

We were finally ready to create an image mosaic using MosaicPro. MosaicPro allows us to take two images of adjacent areas that only partially overlap and merge them into a single map. To do this we opened Mosaic and then MosaicPro in a new window, then inserted the data we wanted to merge. With the two images loaded, we have to make sure the histograms match by selecting histogram matching over the overlap areas; this ensures the images blend together where they overlap. Finally we ran the model and created a new mosaic from our work. The new mosaic covers a greater area of study and is a more complete image than what we would accomplish with Mosaic Express.
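Histogram matching over the overlap can be sketched as mapping the source image's cumulative distribution onto the reference's. This is a generic illustration with assumed toy arrays, not MosaicPro's internal routine:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source pixel values so their CDF matches the reference's.

    Both arguments are 2D arrays; returns the remapped source.
    """
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, look up the reference value at that quantile.
    matched = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched[src_idx].reshape(source.shape)

# A dark image is brightened to follow the reference's distribution,
# so the seam between the two would no longer show a brightness jump.
dark = np.array([[0, 1], [2, 3]], dtype=float)
bright = np.array([[100, 101], [102, 103]], dtype=float)
print(match_histogram(dark, bright))
```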

Mosaic Pro

The last part of the lab was to find the difference between two images taken 20 years apart. To do this we needed images from 1991 and 2011. We built a basic model in the model builder in ERDAS Imagine that differenced the 2011 and 1991 images to find the changes between the two, which we verified by looking at the difference histogram. Once this function was complete we needed to isolate the areas that had changed. To do this we used the conditional function EITHER IF OR OTHERWISE, flagging pixels whose difference fell outside the +/- change/no-change range around the histogram peak; any pixel beyond that range was selected by the algorithm as changed. After these operations ran, the data was imported into ArcMap and a map was created showing the following data.
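The subtract-and-threshold logic of the change model can be sketched as follows, assuming NumPy arrays and a threshold of 1.5 standard deviations around the mean difference (the exact +/- range used in the lab model is an assumption here):

```python
import numpy as np

def change_mask(img_old, img_new, n_std=1.5):
    """Image differencing with a change/no-change threshold.

    Pixels whose difference lies more than n_std standard deviations
    from the mean difference are flagged as changed; everything inside
    that +/- range around the difference histogram's peak is treated
    as no change.
    """
    diff = img_new.astype(float) - img_old.astype(float)
    mu, sigma = diff.mean(), diff.std()
    return np.abs(diff - mu) > n_std * sigma

# One pixel brightens sharply between the two dates; only it is flagged.
old = np.array([[50, 50, 50, 50]] * 4, dtype=float)
new = old.copy()
new[0, 0] = 200.0
print(change_mask(old, new))
```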


Results:

The results of the mapping show that the areas marked in red have changed in the last 20 years. The results are interesting because the changed areas are not where we would normally have predicted. From the results we can see that the areas most affected were urban: in the last 20 years urban development could have created these changes, whether from buildings being built or demolished or from other infrastructure being constructed. We can also see red areas outside the designated urban zones. These could be crops being harvested, trees being logged, or development in the countryside such as new houses or farm fields being created. Finally, we could be seeing mines becoming more common in the country, since frac sand mining has been a booming industry in the past 5 years or so.