Tuesday, December 8, 2015

Lab 12- Volumetrics

Introduction:
    Volumetric analysis allows you to find the volume of a given sample in ArcMap along with Pix4D. There are many different tools that can help you define the volume of your sample. Volumetric analysis can be used for a variety of reasons; the one we are working on right now is finding the aggregate volume of dirt/rock piles at the Litchfield mine. There are different methods for calculating the volume of a sample; one is the cut/fill method. A cut and fill operation is a procedure in which the elevation of a landform surface is modified by the removal or addition of surface material. The cut/fill tool summarizes the areas and volumes of change. To run this tool you need images of the surface captured at two different times, allowing you to see the change. The tool then identifies regions of surface material removal, addition, and areas that haven't changed. This would be very helpful when calculating the volume of rock that needs to be removed from an area for a mine.
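As a rough illustration of the cut/fill idea (not the ArcMap tool itself), comparing two elevation grids of the same area can be sketched in a few lines of Python. The cell size and elevation values here are made up for the example:

```python
def cut_fill(before, after, cell_area=1.0):
    """Total cut and fill volumes between two elevation grids.

    `before` and `after` are 2D lists of elevations for the same cells;
    `cell_area` is the ground area of one cell (e.g. square meters).
    """
    cut = fill = 0.0
    for row_before, row_after in zip(before, after):
        for z_before, z_after in zip(row_before, row_after):
            dz = z_after - z_before
            if dz > 0:            # surface gained material here
                fill += dz * cell_area
            elif dz < 0:          # surface lost material here
                cut += -dz * cell_area
    return cut, fill

# Toy 2x2 grids: one cell was dug out 1 m, another built up 2 m.
before = [[10.0, 10.0], [10.0, 10.0]]
after = [[9.0, 10.0], [10.0, 12.0]]
print(cut_fill(before, after))  # cut = 1.0, fill = 2.0
```

Cells whose elevation didn't change fall into neither total, mirroring the tool's "no change" regions.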
    When performing a volumetric analysis, a few different types of data are needed. First you need images of the area. You can capture these easily using a UAS or have them provided for you. Along with the images, it is ideal to have GCP's for the area. As I discussed in prior labs, GCP's allow you to tie down your points on a map, giving you a more accurate reading of the surface. A UAS is ideal because many of them capture the pictures along with the GPS location of each picture. If you're working at a mine, many of these sites have permanent GCP's, which you can then tie down with the GPS locations gathered by your UAS. Using a UAS cuts down on field time while being relatively cheap.

Methods:
    When computing volumetrics it is essential to know the methods you are working with and how they can help. One of the tools used in this lab was the raster clip tool. This tool extracts an area based on a rectangle, polygon, points, mask, or even a circle. When you extract an area, the clip boundary inevitably cuts through cells. The tool needs to know whether each of those cells is inside or outside the area you are trying to incorporate, and the center of the cell decides this: if the center is inside the area the cell is included, while if it is outside the cell is left out.
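The cell-center rule can be sketched like this for a hypothetical rectangular clip (the tool itself resolves polygon, mask, and circle clips the same way):

```python
def cell_included(row, col, cell_size, origin_x, origin_y, clip):
    """Keep a raster cell only if its center lies inside the clip extent.

    Assumes the raster origin is its lower-left corner with rows counting
    upward; `clip` is an (xmin, ymin, xmax, ymax) rectangle.
    """
    center_x = origin_x + (col + 0.5) * cell_size
    center_y = origin_y + (row + 0.5) * cell_size
    xmin, ymin, xmax, ymax = clip
    return xmin <= center_x <= xmax and ymin <= center_y <= ymax

# A 1 m cell whose center sits inside the clip box is kept...
print(cell_included(0, 0, 1.0, 0.0, 0.0, (0.0, 0.0, 1.4, 1.4)))  # True
# ...while a cell the boundary cuts through, center outside, is dropped.
print(cell_included(1, 1, 1.0, 0.0, 0.0, (0.0, 0.0, 1.4, 1.4)))  # False
```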
    Another important method used is the raster to TIN tool. The purpose of this tool is to create a triangulated irregular network (TIN) whose surface doesn't deviate from the input raster by more than a specified z tolerance. The TIN gives you a 3D surface model by calculating elevations. A TIN also uses cell center points to fully cover the perimeter of the raster surface. The more cells it uses, the smoother the transition in elevation gradient will be.
figure 1
   
    The surface volume tool calculates the area and volume of the region between a surface and a reference plane. You can look for the volume above or below the plane. Figure 2 shows what it looks like when the reference plane is set to above, and figure 3 when it is set to below. When calculating for above, volume is only calculated between the plane and the underside of the surface. When calculating for below, volume is calculated between the plane and the topside of the surface.
figure 2
figure 3
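What "above" and "below" mean numerically can be sketched on a toy elevation grid (this is an illustration of the idea, not the actual tool):

```python
def surface_volume(grid, plane_z, cell_area=1.0, side="below"):
    """Volume between a reference plane and a gridded surface.

    side="below" sums the space under the plane where the surface dips
    beneath it; side="above" sums the space above the plane where the
    surface rises over it.
    """
    volume = 0.0
    for row in grid:
        for z in row:
            if side == "below" and z < plane_z:
                volume += (plane_z - z) * cell_area
            elif side == "above" and z > plane_z:
                volume += (z - plane_z) * cell_area
    return volume

# A toy 2x2 elevation grid with 1 m cells and the plane at 5 m:
grid = [[2.0, 4.0], [4.0, 6.0]]
print(surface_volume(grid, 5.0, side="below"))  # 5.0
print(surface_volume(grid, 5.0, side="above"))  # 1.0
```

Setting the plane at a mound's highest point and asking for "below" captures the whole mound, which is how we used the tool later in this lab.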
    Polygon volume and surface volume are essentially the same tool; however, polygon volume calculates the volume and surface area between a polygon and a terrain or TIN surface. Once the polygons are made, volume is only calculated for the areas where the polygons and the TIN overlap.

Methods II:
    The first step to calculating volume for the Litchfield mine was to bring the imagery of the aggregate piles of rock/dirt and the surrounding area into Pix4D. Using the volume tool in Pix4D, I was able to calculate the volume of three different mounds at the Litchfield mine. I chose three mounds that didn't connect to other mounds but stood alone, which allows for more accurate calculations. With the calculation tool in Pix4D, all I had to do was click around the outside of each mound. Pix4D would then calculate the volume so I could compare it to the volumetric analysis that I would later run in ArcMap. The first mound was the smallest of the three and came back with a volume of 21 cubic meters. Figure 4 shows the first mound I calculated.

The second mound that I calculated the volume for in Pix4D was much larger, and its elevation was much more defined in the picture. It was easier tracing the outside of the mound with the volume calculation tool, which gave me a more accurate reading. Figure 5 is the second mound that I calculated. The second mound had a volume of 3952 cubic meters, the largest of my three mounds. In the picture you can see just how big this mound is by the large dump truck parked right next to it. When trying to make comparisons in pictures, it really helps to have something whose size you know as a reference.
figure 5
The third and final mound I calculated in Pix4D was roughly half the size of the mound in figure 5. Although there is no visual aid for comparison, such as the dump truck, it is still easy to see that it falls in between the first and second mounds. Likewise, the volume also falls in between the first and second mounds. Figure 6 shows the mound that the volume was calculated for, along with the volume itself: 1977 cubic meters. Along with the volume, the cut and fill are also calculated, and not only in 3D but in 2D as well.
figure 6
    Upon completing all the volumetrics in Pix4D, it was time to compare them to the 3D Analyst results in ArcMap. This is where the tools mentioned in the first methods section come into play. The first step was to create the three mound feature classes in a geodatabase. The first tool used was the raster clip, which allowed me to clip each of the mounds and place them in the geodatabase. When clipping, it was key to leave enough area outside the mounds without clipping any part of the mounds off. I made sure the three mounds I chose were standing alone so there was no worry about including another mound in the volumetric analysis.
    The next tool used was the surface volume tool. The reference was set to below, because I took the highest point in each of the raster clips and wanted to find the volume below that. Before calculating the volume, it was key to use the information tool to find the highest elevation for each of the rasters. This value was put in as my z value when running the surface volume tool. The volume for the first mound raster was 179. I computed the surface volume for the second mound and got a volume of 6957, and for the third raster mound a volume of 3918.
    All of this volumetrics work was done with just one dataset each for surface volume and raster clip. The cut fill tool would need more than one dataset, because it lets one see the difference over time. For example, if one was looking at the Litchfield mine and wanted to see how the mounds grew or disappeared, one would need at least two datasets showing the difference between the two points in time. With the cut fill tool, it could then be shown where the mounds gained or lost area. This type of tool needs temporal data, or data that shows a change over time.
    The final process of this lab was to take our three raster clips and convert them into TINs. The raster to TIN tool allowed me to convert my raster of each mound into a defined TIN. Figure 7 is the output from the raster to TIN tool for mound 1.
figure 7
In figure 7 it is easy to see the different elevations for mound 1 from the TIN. It gives a 3D representation of elevation change while still being a 2D picture. After getting the TIN for mound 1, I had to run the 'add surface information' tool, which allowed me to add the z mean, or average elevation, for mound 1. The final step was to find the volume for figure 7. Using the polygon volume tool, I input my TIN from figure 7 and it gave me the output shown in figure 8.
figure 8
The output volume for mound 1 was 6.33 cubic meters. This is significantly different from both our raster result, assuming that is in feet, and the Pix4D result, which is also in meters. This would pose a problem.
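One possible source of the mismatch is linear units: if the raster's elevations and cell size were in feet, its volume would be in cubic feet, and a cubic foot is much smaller than a cubic meter. A quick check, assuming the mound 1 raster volume of 179 really is in cubic feet:

```python
FT_PER_M = 3.28084  # international feet per meter

def cubic_feet_to_cubic_meters(volume_ft3):
    """Convert a volume in cubic feet to cubic meters."""
    return volume_ft3 / FT_PER_M ** 3

# The mound 1 raster volume, if it is in cubic feet:
print(round(cubic_feet_to_cubic_meters(179.0), 2))  # about 5.07
```

That lands in the same ballpark as the 6.33 cubic meters from the polygon volume tool, which at least suggests units are part of the story.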
    For the second mound, the raster to TIN tool was also run, giving the output in figure 9. Again it is easy to see the elevation change in a TIN from the color gradient.
figure 9
After making the TIN, it was time again to add the surface information for the mean z score. I could then run the final tool, polygon volume, which gave the output in figure 10.
figure 10
This output gave a volume of 785 cubic meters. Once again this is significantly different compared to my raster volume of 6957 and my Pix4D volume of 3952 cubic meters. I have run the volumetric analysis multiple times and still have not figured out why the volumes are so drastically different from one another.
    On the third and final mound, I again used the raster to TIN tool, allowing for a 3D representation of that area. This is the middle mound in volume and area compared to the other two. Figure 11 is the output TIN from running the raster to TIN tool.
figure 11
I personally think figure 11 is the best at showing elevation change; it is on a constant upslope judging by the color scheme used. After reviewing this TIN, I added the surface information for the z mean score, allowing me to run the polygon volume tool. After tracing the surface area around the outside of the mound with the polygon tool, I got the output shown in figure 12.
figure 12
The final volume captured from the polygon volume tool for my third TIN was 515 cubic meters. Again this was different compared to my other volumes from the raster and Pix4D.
    After completing this lab it is easy to see that there is error in calculating volumetrics. With all the readings coming back differently, it is hard to say which one is right and which is wrong. Volumetrics can be an essential piece when trying to find the volumes of important surface features such as those at a mine. When looking at mines in the future and ways of computing the volumes of aggregate mounds, I believe UAS will be viewed as a step forward. However, one problem the use of UAS could cause is for the mine workers whose jobs are on the line while competing against these aerial systems; it could even cause people to lose their jobs. With more people getting into UAS, mine companies would be smart to look at the backgrounds of the workers they hire, especially if the field is taking a turn toward computing volumetrics at a lower cost.

   


Tuesday, November 17, 2015

Lab 11- GCP's

Overview:
    In this week's lab we were introduced again to the Pix4D software; however, this week we used GCP's with the software to tie down our pictures and create true orthomosaics. Pix4D can process projects with or without GCP's, but GCP's give the project higher global accuracy when processing the images. When adding GCP's, there are several different ways to go about it depending on: whether the initial images are geolocated, the coordinate system of the original images, and the coordinate system of the GCP's. I will go into greater depth about these three different methods.
    The first method I want to discuss is used when the image geolocation and the GCP's have a known coordinate system that is already in the Pix4D database. Although they may be in different coordinate systems, the software is able to do a conversion between the two. This is the most common case, and it allows you to mark the GCP's on the images with little manual intervention. Still, since this method does require the manual step of marking the GCP's in the images, it is not recommended for overnight processing. This is the method we used in lab 11.
 figure 1
    The second method can be used when the initial images are without geolocation, the initial images are geolocated in a local coordinate system, or the GCP's are in a local coordinate system. Once again with this method you have to manually mark the GCP's on the images, allowing for better image clarification. Since it requires a manual step, it is advised not to let this process run overnight, as only the first initial processing will be done up to the point of manually marking the GCP's.
 figure 2
    The third and final method of adding GCP's works for any case, no matter the coordinate system of the images or GCP's, but it does require more time to mark the GCP's on the images. After importing the images and GCP's, the processing can be done without any intervention by the user. This makes it the best choice for overnight processing.
figure 3
    The next step in creating a true orthomosaic with your GCP's is choosing a coordinate system. When creating a new project, the select output coordinate system window is displayed. The output coordinate system does not need to be the same as that of the images or GCP's; however, it is recommended that the output coordinate system match the GCP's coordinate system. The default coordinate system for images is WGS84. It is easier to display your GCP's on your images when the coordinate systems match, allowing for better accuracy in your pictures.

Methods:
    Now that you have a basic overview of the different methods of adding GCP's and choosing a proper coordinate system for your images, I will talk about the processing we did in this lab. There were two parts to this lab, creating two sets of images: one with GCP's and one without. I will first talk about adding GCP's and manually tying them down to the images, along with the quality reports associated with them.
    The very first thing I needed to do was create a new project in Pix4D. This let me save the project into my folder so I could easily access the reports and images. I then added the images I wanted to use: 342 images from the flight mission over the south middle school pond and surrounding area. The camera we used was the Canon PowerShot SX260, with a default coordinate system of WGS84. As I talked about earlier, this is the default coordinate system in the camera, but we wanted to change it. Since we are looking at a relatively small area, we used the North American Datum (NAD83) Zone 15 North, which singles the area out to a coordinate system that fits the Wisconsin area very well. After I added the images and changed the coordinate system, I was ready to move on to adding the GCP's. This part was tricky at first because there were a few steps to jump through, but I was able to accomplish the task. In figure 4, on the right-hand side menu in the lower portion, you can see 5 small boxes. I checked the GCP box, then right clicked on it and went into the GCP/manual tie point manager.
 figure 4
The new window that opened allowed me to import the GCP's that we took while in the field for the area of interest. One small change when importing the GCP's was that I had to change the 'coordinates order' box to Y,X,Z. I was then able to go into my flight folder and import our six GCP's from the mission flight. This showed exactly where my GCP's were located in the Pix4D window. (figure 5)
 figure 5
After having imported all my images and the GCP's, I was ready to run the initial processing. On the lower bar in figure 5 you can see three boxes, one of which is the initial processing box. I had to uncheck the other two boxes so only the initial processing would run, then hit start, located just below that. The initial processing took close to 45 minutes to run and gave us a quality report. The quality report (figure 6) gives an overview of the project after the initial processing is run: all kinds of information about the area covered, the images, the flight path, and, importantly, the GCP's and their errors.
figure 6

After I looked over the quality report and the initial processing was completed, I was ready to go back into the GCP/manual tie point manager under the GCP box on the lower right-hand side. In the new window, I clicked on the rayCloud Editor button in the lower left. This brought me to another window that had all my GCP's referenced in it, where I could manually adjust exactly where the center was on every GCP. The more images I did, the closer the software adjusted my cursor to the center of the GCP each time. A safe number of times to adjust each GCP was around 10. I went a little wild with a few, getting into the 20-adjustment range, but this just led to better accuracy of my tie points. After I was happy with the number of adjustments to my GCP's, I clicked okay, which brought me back to the map view screen. I checked box two, point cloud and mesh, and box three, DSM, orthomosaic and index, in the lower tab. I was now ready to finish running my project. After each step was completed it gave me another quality report, this time including GCP information such as errors and locations. (figures 7, 8)
 figure 7

 
figure 8

    When I finished running the last two boxes, my final orthomosaic came through. (figure 9) The final orthomosaic showed all the GCP's on it, as well as the flight path and the exact tie points to each image. At first it came across as very jumbled, but that was just because of all the lines showing the tie points. Once I unchecked the tie point box and had the triangle mesh created, it was easy to see exactly what I was looking at.
figure 9
In figure 9 you can see exactly the area that we planned to have made into an orthomosaic. The triangular area with the pond in the middle has the GCP's on the path along the outside; you can faintly see some of them. You can also see our cars parked in the lower right-hand corner of the image, with the first GCP just in front of them. After completing the orthomosaic and the GCP processing, and looking over the quality reports, I wanted to create a flight animation of the orthomosaic to do justice to the area we are looking at. Figure 10 shows the flight animation.


 figure 10

    After completing everything with this orthomosaic and the GCP's, it was time to continue on with the lab and create another orthomosaic. This orthomosaic was of the same area, but this time I didn't use any GCP's when creating it. The reason was to see just how far off the images and flight path are without GCP's to tie the images down. One way to see whether there are any GCP's is by looking at the quality report on the first page under 'Quality check', 'georeferencing'. This will tell you exactly whether you have any GCP's in your orthomosaic. (figure 11)
 figure 11
You can see that it says it is georeferenced but does not contain any GCP's. This is one of many ways to tell whether you have GCP's in your orthomosaic. Also, since I did not include GCP's in the ortho, the flight path is way off and doesn't follow the area we wanted to take pictures of; it goes way out to the west. This is because we did not use GCP's. The GCP's allow those images from the flight path to be pulled back into the designated area of interest. (figure 12)
 figure 12

    The only reason we ran two different orthomosaics was to see exactly how much GCP's can contribute to the accuracy of the ortho image. After completing this lab I realize that it is almost always vital to use GCP's when conducting a flight mission. This allows for better accuracy along with making sure you're showing exactly the area you want. Without Pix4D we wouldn't be able to show the difference between the two orthoimages, as they appear very similar, but with a quality report, Pix4D, and the flight mission, I can see why it is vital to use GCP's. We have now used Pix4D twice in lab and have still only covered the basics: GCP's, creating orthomosaics, and using the measurement tools. There is still a lot to learn from this software, as it can shed light on things we never thought imaginable.




Tuesday, November 10, 2015

Lab #10- Construction of a point cloud data set, true orthomosaic, and digital surface model using Pix4D software

Overview:
    Lab 10 was the introduction to the Pix4D software and some of the features it has to offer. Pix4D is an image processing software based on finding thousands of common points between images. Pix4D uses key points, points on two images that overlap and align, to create a 3D image. With high overlap between pictures, Pix4D will find more key points, which leads to a more accurate 3D image. The recommended overlap for quality 3D images is at least 75% frontal and 60% side overlap. This overlap is recommended for most cases; however, small changes should be made for other types of terrain. For instance, when taking images of an agricultural field it would be smart to have a higher overlap, because much of the field is generally similar, which makes it harder for Pix4D to find key points. When processing images of large uniform areas (water, sand, snow, fields) it is important to always use a high overlap and have the exposure settings set properly to gain as much contrast as possible. Following these quick tips will allow for a better 3D image.
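To make the overlap numbers concrete, the spacing between exposures falls out of the image footprint and the overlap fraction. The footprint dimensions below are hypothetical, chosen only to illustrate the arithmetic:

```python
def photo_spacing(footprint_along_m, footprint_across_m,
                  frontal_overlap, side_overlap):
    """Spacing between exposures along a flight line, and between
    adjacent flight lines, for a given image footprint and overlap
    fractions (0-1)."""
    along = footprint_along_m * (1.0 - frontal_overlap)
    across = footprint_across_m * (1.0 - side_overlap)
    return along, across

# A hypothetical 100 m x 75 m footprint at the recommended 75% / 60% overlap:
along, across = photo_spacing(100.0, 75.0, 0.75, 0.60)
print(round(along, 1), round(across, 1))  # 25.0 30.0
```

Raising the overlap shrinks the spacing, which is why uniform terrain (where key points are scarce) costs more photos per area.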
    Pix4D can also process images from multiple flights. When designing your flight plans, however, you need to make sure each plan captures the images with enough overlap, along with enough overlap between the two flight plans. Also, it is smart to take the images from the two flight plans under the same conditions, which leads to better quality images. You don't want to take images one day under clear, sunny skies and then gather the next flight's images when it is cold, cloudy, and raining. That wouldn't give good contrast between your images, and you would see the difference.
    One topic that always comes up when dealing with mapping is GCP's (ground control points). Pix4D does not require GCP's for processing, but they do significantly increase the absolute accuracy of the project. In projects with geolocated images, GCP's increase the accuracy along with placing the model at the exact position on the earth. Basically, it is smart to use GCP's whenever possible, especially for projects that need high quality reports.
    The final product I want to talk about when using Pix4D is the quality report. The quality report summarizes all the information that went into Pix4D's processing: the GCP's, how many images were taken and used, the coordinate system, any check points added, and many other details. A quality report allows you to break down the information that was put into Pix4D to see if you can gain better optimization for your images.

Methods:
    Since this was our first time using Pix4D, the process of computing the information to create our orthomosaics seemed out of this world at first. I was very confused, but as the process went on I gained a grasp of how Pix4D worked and the ways to use it.
    To begin, we needed to create a new project in Pix4D. This was very simple; we created it just like any other project by going to project in the upper tab and choosing create new project. This allowed us to save our projects into our lab folder for the class. Then we were ready to add our images. One thing to note is that all our images added just fine when we used the Sony SX260 camera; they were already geolocated with orientation, meaning all the images had the tie points already.
Figure 1
We were asked to create two different mosaics, however: one with the Sony SX260 images and another with the Gems images. The Gems images were more difficult to work with, as they weren't geolocated. We had to go in and geolocate them with the export file-RGB from the Gems imagery folder.
Figure 2
After geolocating, 219 out of the 220 images were properly geolocated. Although one was not located, that is still alright since there are so many images to work with. Now, if we were only using, say, 15 images and one or two were missing, we would have a problem.
    We then went through a few pop-up screens asking what projection we wanted and whether we wanted it in meters or feet; we left these all at the default and proceeded to the map. Our next screen looked like this:
Figure 3
Before continuing, under the layers box on the right-hand side of the screen we turned on the GCP's and both of the processing areas. We could now click the start box on the lower portion of the screen. Depending on how fast your computer is, how many images you're processing, and the image quality, it could take from a couple of minutes to several hours to compute all the data. After the slow process of waiting for Pix4D to compute your images, you will eventually end up with an image.
Figure 4
Now, the image above looks scrambled and not very pleasing to the eye. All the big green dots are where the images were taken throughout the flight plan, while the blue dots are the geolocated points, since this was taken with the SX260 camera. To make this image make sense you need to locate the 'Triangle Meshes' box on the left side display. Once you click on it, it will compute for a few minutes and an image will appear that makes sense to you.
Figure 5
Now you can see an orthomosaic image that makes sense. I turned off the cameras box on the left-hand side so the green and blue dots disappeared. Upon creating this image we were asked to do four more things for this lab: find the surface area of an object, find the volume of an object, measure the length or width of some object, and make a video animation of our image. Three of these functions are located under the measure tab on the upper portion of Pix4D, while the video animation is located right next to the measure icon. Upon creating the measurements we needed, I could export them individually and save them as shapefiles, allowing me to bring them into ArcMap to create maps. Video animation was a little tricky at first, but the dialogue box that pops up when you click the video animation button gives step-by-step instructions to create a video of your choosing.
Video 1
With the video being the last step we had to complete in Pix4D, we had now accomplished what we wanted in it. Our next step would be to create maps in ArcMap showing the shapefiles we created from the measurements and the differences between the SX260 camera positions and the Gems imagery.
    I created four maps in ArcMap, allowing me to show the difference between the two cameras we used. When taking the images with the SX260 camera, the camera must have been slightly tilted, giving us an image at an angle. This didn't show up on the maps we created, but I noticed it in Pix4D. The first two maps I created were from the Gems images: a simple mosaic of the soccer fields, then another with my measurements on the mosaic. In both my Gems maps and the SX260 maps, my measurements were of the same structures, which allowed me to compare the two measurements. Although the angle was slightly different with the SX260, the measurements remained the same.

Figure 6

Figure 7

Figure 8

Figure 9
One difference you can see between the Gems images and the SX260 images is the soccer field area that is pictured. The reason there is a difference is that we used a different flight path for each camera. We could have uploaded and used the same flight path if we so chose, but decided to create a new flight path for each camera. Roughly the same area is pictured, so it's not a big deal, but if this were to compare images for a multi-million dollar project, I would have used the same flight path for both cameras, giving the exact same area pictured.
    In the end, this software can open the doors to much more than what was performed in this lab. Only a small section was covered: how to make orthomosaic maps and use the measuring and video animation tools. These are just the basics of what can be done with the software. With these tools in hand and the software manual to read through, the possibilities are endless. From mapping tunnels, to using GCP's, to mapping standing buildings, structures, and interiors, all these functions lead to endless possibilities for improving map making and reports for business while opening a vast array of new jobs.
 
 

Monday, October 19, 2015

Lab #6- Using the Gems software to construct geotiffs, and to field check your Gems data.


    Lab 6 was processing the flight mission and data from our field activity five around the pavilion on the Bollinger soccer fields. To process the images we took, we used the Gems software. Gems stands for geo-localization and mosaicking system. Basically, this software can weave together all of the images we gathered into a form that makes sense to the human eye, letting us see one big picture of the study area. When processing the images, the software uses two types of mosaics: fast and fine. Fast mosaics throw the acquired images down as fast as possible given the predicted alignment based on the navigation data from the sensor payload. Fine mosaics perform additional computer vision image processing techniques to finely align the imagery, which takes longer to process. When possible you want to use a fine mosaic, which gives a better image of the study area.
    Along with the fast and fine mosaics, the software also produces an array of different color schemes for your images. There were five different color schemes we used in this lab: RGB-fine, NDVI-FC1, NDVI-FC2, NDVI-mono, and Mono-fine. RGB stands for red, green, blue: basically how the human eye would view an image in color. The two types of NDVI imagery are for viewing vegetation. In FC1 colors, the redder the area, the more water is being emitted. Since we are viewing grass and pavement, we want the area where the grass is to be redder, showing that the grass is healthier. FC2 colors are just the opposite: the greener the area, the healthier it is and the more water is being emitted. To me, the FC2 colors make more sense in this case because healthy grass is naturally green. We also used two schemes of mono colors, which are black and white, with the whiter areas being healthier. This only begins to scratch the surface of what the Gems software can do.
Since this is our first lab with Gems, I am looking forward to gaining a better understanding of what this software can really do.
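For reference, NDVI itself is a simple per-pixel ratio of near-infrared to red reflectance; the reflectance values below are illustrative examples, not readings from our imagery:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel.

    Healthy vegetation reflects strongly in near-infrared and absorbs
    red light, so NDVI approaches 1 over vigorous grass and sits near
    zero (or below) over pavement and water.
    """
    if nir + red == 0:
        return 0.0  # avoid dividing by zero on empty pixels
    return (nir - red) / (nir + red)

print(round(ndvi(0.50, 0.10), 2))  # grass-like pixel: 0.67
print(round(ndvi(0.20, 0.18), 2))  # pavement-like pixel: 0.05
```

The FC1, FC2, and mono schemes the software produces are just different color ramps draped over this same underlying index.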
    To begin my work in the Gems software I first had to upload our mission plan to the software. Once I uploaded the mission, I could run the NDVI initialization, which gives me the two FC1 and FC2 color schemes, along with my NDVI-mono pictures. Since our original images were taken in mono and RGB from our platforms, those image files are already in our folders. After running the NDVI initialization and getting our files, we can generate our mosaics. Generating the mosaics gives two different files: an overview file and a tif file. What we really want here is the tif files, which we can upload into ArcMap to create whatever maps we please. A tif file is traditionally opened in Photoshop, but can also be loaded into ArcMap. A tif file would look like this.
One important step when running your mosaics is to check the fine alignment box along with the NDVI and default color map boxes. This will give you better mosaics.
    The final step before creating your maps is to export your data to Pix4D. Pix4D gives you the lat/long and altitude of each image. This allows you to stitch together all the images onto a satellite base map.
    Now comes the fun part of creating your maps for your study area. You can make any number of maps depending on what you are trying to show. For the maps I created I wanted to show the different NDVI maps, the RGB map, and the mono maps. Along with these five maps I also wanted to include a map of the study area without the Gems imagery on it.
    The first map I created was the RGB-Fine map. This is a simple map of the study area. When I laid my RGB .tif on top of the base map, the image was much clearer and it was easier to see the definition of the shed in the center. One problem I had in the beginning with all my maps was that the .tifs had a white area around the outside of the study area, which made them hard to match up with the base maps. This was a simple fix using the mask tool in the ArcMap toolbox. Once I created one mask for the RGB map, I could reuse it on each of the other maps.
    The next map I created was the NDVI-FC1 map. This is the scheme where redder areas are emitting more water and are therefore healthier. In my map the grass came across as orange, showing that it isn't emitting enough water to be at peak health, yet is still healthy enough to sustain green grass. This is the power of NDVI imaging: although the grass may appear healthy and green to the human eye, the NDVI sensor lets you see things differently and gives you a different perspective on vegetation. My guess for why the grass wasn't portrayed as red is that we conducted this mission in the fall, when the grass was entering its dormant stage.
    My next map was of the NDVI-FC2 .tif. This is the same image as the FC1 map but with the color scheme flipped. Here the green areas are healthier and emitting more water, the red areas are pavement, and the small yellow patches within the red are rock beds. You can see how different the FC1 and FC2 images look from each other, but in the end it's just a different color scheme. Earlier I said the FC2 colors make more sense to me because grass is naturally green, and in FC2 the green areas are the grass. That could change, say, if you're looking at vegetation in a lake or some other scenario.
    The fourth map I created was the NDVI-Mono map. This map did not use a fine mosaic, but instead used a fast mosaic. You can tell the difference between the two when they're next to each other, and you can see how they are stitched together slightly differently. In the mono color scheme, the white areas are the grass while the black is the pavement and cement.
    The final map I created was the Mono-Fine map. Unlike the last one, this map did use the fine mosaic, allowing for a better picture. Again, the white areas are the grass while the dark areas are the pavement and the building in the middle. You can easily see the difference in the stitching techniques when comparing these two maps side by side.
    The last image I included is of the study area itself. I wanted it as a reference so you can see what the area looked like before I laid my different .tifs over it. I feel it is important to include a reference of the study area for this purpose.
    My final product is a layout of all six maps: one of just the study area, along with the five maps I created using the Gems software and ArcMap. This layout shows what the different renderings all look like side by side. I could have created any number of other maps depending on what I wanted to show.
    I was very new to the Gems software, as this was the first time I have ever used it to process data. I have only begun to scratch the surface of what the software can actually do, and I feel it is very useful. By being able to process mission plans and stitch photos together, this software can do just about whatever you want with the data.
    I am really interested in the NDVI applications of the software, especially showing healthy versus unhealthy vegetation. This summer I worked for a country club and talked with my boss about this course I was taking. I already had a grasp on what NDVI imaging could do and was trying to think of ways to gear it toward a golf course. After seeing how this software can process data from mission plans and put images together, I feel it could do a lot for the golfing world. That is only one small area where this software shines; there are plenty of other things it can do. I am really looking forward to learning more about the software, and I give it a thumbs up from my standpoint.








Wednesday, October 14, 2015

Field Activity #5- Obliques for 3D Model Construction

    In our fifth field activity we carried out a mission using oblique imagery for 3D model construction. This is the first time we have used oblique imagery in the field; in all of our other field activities we have used nadir imagery. Nadir imagery is when the camera is pointed directly at the ground, giving a direct overhead view of the area of interest. Oblique imagery is slightly different: part of the horizon is in the picture as well, typically at close to a 45 degree angle, giving a side view of the area you are imaging. Oblique images are perfect for capturing the side view of the AOI, which lets us process the images into a 3D model.
    The study area for this field activity was the soccer fields, like many of our previous field activities. We took oblique images of the shed at the soccer fields using both the Iris and Gem platforms. It was a perfect day for flying, with no clouds and barely enough wind to move the flag.
    In this field activity we used two different platforms. For our first mission we used the Iris platform with a GoPro camera mounted on it. The GoPro doesn't have GPS attached, so we would have to use GCP's to tie down our data when making a map of the shed. With the Iris we used the structure scan mode in Mission Planner on the tablet, which let us set our parameters for the mission. We set the mission to oblique images and started the picture taking at 15 meters with 4 meter intervals. What I mean is that the mission was flown in a corkscrew fashion: as the platform circled the outside of the shed, it rose 4 meters every time around. The images went up to 26 meters high and were taken at a 2 second camera interval. After the platform reached the 26 meter mark, it used cross hatching to get every nook in the roof. Upon completing this mission, we broke out the Gem platform. We did not use a mission plan for this platform, but instead flew it manually. This let us start at a lower height and adjust the oblique angle to gain better detail on the lower section of the building along with the roof. We took turns flying around the building so everyone got the opportunity to take pictures and fly a multi-rotor platform.
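The corkscrew pattern above can be sketched as a set of circular orbits that step up in altitude and cap out at the top. This is only an illustration of the geometry, not how Mission Planner actually generates its structure scan waypoints; the center, radius, and points-per-orbit values are made up:

```python
import math

def corkscrew_waypoints(center_x, center_y, radius,
                        start_alt=15.0, top_alt=26.0, step=4.0,
                        points_per_orbit=12):
    """One ring of waypoints per altitude level: 15 m, 19 m, 23 m,
    then a final ring capped at the 26 m top altitude."""
    alts, alt = [], start_alt
    while alt < top_alt:
        alts.append(alt)
        alt += step
    alts.append(top_alt)  # finish at the top instead of overshooting it

    waypoints = []
    for a in alts:
        for i in range(points_per_orbit):
            ang = 2 * math.pi * i / points_per_orbit
            waypoints.append((center_x + radius * math.cos(ang),
                              center_y + radius * math.sin(ang), a))
    return waypoints

wps = corkscrew_waypoints(0.0, 0.0, radius=10.0)
print(len(wps), wps[0][2], wps[-1][2])  # → 48 15.0 26.0
```

With a 2 second photo interval, the slower the platform circles each ring, the more side views of the shed end up in the photo set.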
    We have not yet learned how to process this imagery; that will be covered in future labs as the weather turns cold. Thinking about the difference between oblique and nadir imagery and their different purposes, I feel oblique imagery has more of an upside than nadir, especially for this field activity. For constructing a 3D model of a building it poses many benefits compared to just having an overhead view. This was the first time we used oblique imagery, and I look forward to seeing the differences once we have processed it compared to nadir imagery.
    
    

Tuesday, October 6, 2015

Field Activity #4- Gathering Ground Control Points using various GPS Devices


     In our fourth field activity we were introduced to Ground Control Points (GCP's). GCP's are used to improve the quality of your aerial imagery acquisitions. When gathered properly they can produce data with sub-meter, and even millimeter, accuracy. Likewise, if you do not gather your GCP's properly, they can diminish the quality of your accuracy. Another reason we want to use GCP's is that they let us tie down our imagery to a given coordinate system when the digital sensor doesn't have GPS on it (GoPro, Canon S110). It is important to have a coordinate system when displaying our imagery, especially for survey quality data.
     When looking at our survey area it is important to place our GCP's in spots visible to the UAS camera. You don't want them placed under trees or other obstructions blocking the field of view. When placing GCP's in the survey area it is important to spread them out over the entire field. The closer a GCP is to the edge of the survey field, the more distorted it becomes, so we want one or two near the edges but not right on them, with the rest placed throughout the interior of the field. Another point worth bringing up is elevation. If there is a hill in your survey area it is important to place extra GCP's on and around it; this helps with capturing the elevation of that area. The rule of thumb is a minimum of three GCP's in your survey field, while more are recommended for better quality. GCP's are very time consuming because you have to chart and mark where they all are. It is vital to have good field notes to look back on when you are sketching your survey area with your GCP's; this lets you check for errors in your recordings later. Pictured below is a sketch of our survey area that I drew in the field while labeling my GCP's.

    We placed six GCP's over the survey area in this field activity. We spaced them out relatively evenly throughout the survey field while making sure they weren't too close to the boundaries, to avoid distortion. Upon placing our GCP's we also recorded them with a dual-frequency survey-grade GPS. This GPS was our gold standard, as it will get accuracy down to millimeters. This was the first time we were introduced to this method of recording GCP's, so we were all relatively new to it. Important things when dealing with this GPS were making sure it was in the exact center of the GCP and that it was level; this gave us the most accurate reading. When we recorded a GCP, the GPS told us the horizontal and vertical precision along with the exact coordinates. This would let us tie our GCP's down to a coordinate system when uploading the images. Pictured below is the gold-standard GPS we used to record our GCP's, along with the first GCP we placed in the survey field.
    The second method we used when collecting our GCP's was the Bad Elf GNSS Surveyor. This GPS can produce sub-meter accuracy. It is a relatively small unit that we laid in the center of the GCP on the ground, then connected to a tablet app that let us record the point. We could also put field notes into the tablet, letting us note the exact area for the GCP along with anything else worth recording. Pictured below is the Bad Elf GNSS Surveyor we used in the field. You can see it is about the size of a stopwatch, but it is a very good device for collecting GCP locations when teamed with a tablet app.
 
    The final way we collected our GCP's was with a mobile phone. The reason we used a mobile phone GPS is that when we analyze the data later this semester we can see the difference in accuracy between the three different GPS units. We know the survey-grade GPS will be the most accurate, but we want to see just how inaccurate a mobile phone GPS actually is. In today's society people rely on their mobile phones every day when traveling, and it would be good information to show just how inaccurate these devices are.
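One simple way to make that comparison later is to treat the survey-grade fix as truth and compute the horizontal distance from it to each of the other fixes. The haversine formula below does that; the two coordinate pairs are invented for illustration, not our actual recorded points:

```python
import math

def horizontal_error_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two GPS fixes, in meters."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# hypothetical fixes for the same GCP
survey = (44.7970000, -91.5000000)   # survey-grade fix, treated as truth
phone = (44.7970310, -91.5000420)    # phone fix, a few meters off

print(round(horizontal_error_m(*survey, *phone), 1), "m")
```

Running this for each GCP and each device would put a hard number on just how far off the phone and the Bad Elf are from the gold standard.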
    The final part of our field activity was to carry out a flight mission over our GCP's and survey area. We used Mission Planner to plan a mission and were able to fly over our survey area. Although it was getting relatively dark when we carried out the mission, approximately 6:30 pm, we were still able to capture the GCP's over the survey field. Now that the mission is flown, we can analyze the data and tie our GCP's down to a coordinate system using the three different collection methods. We have not done this yet, but I look forward to analyzing the data to see how accurate or inaccurate each collection method was. The picture below is of our first GCP, taken on the first pass of our flight mission.

     When collecting our GCP's, the easiest and fastest way in my opinion was with the Bad Elf GPS. All we needed was the tablet and the Bad Elf, and it took about 10 seconds for a GCP to be collected. I also liked that we were able to put field notes right into the tablet, which could prove very helpful when analyzing the data. The second fastest method was the mobile phone: just by placing it in the middle of the GCP and snapping a picture you capture the point. The only downfall is that it is the least accurate. The slowest but most effective way was the survey-grade GPS. Although it was slow and you have to be precise, you also get the most accurate GCP's. The reason GCP's are so time consuming is that you have to scan the entire survey field and determine the proper places to collect them, which also depends on how large the field is and how much elevation change it has. Another factor is the means of collection: whether you are using just your mobile phone or a survey-grade GPS plays into how much time you put in. When talking about commercial and survey-grade work, permanent GCP's come to mind. These need to be collected weekly, and sometimes even daily, at places like mines, excavation sites, or areas of washout. With permanent GCP's you know exactly where your last recordings came from and can go back to that exact spot. A permanent GCP doesn't necessarily mean putting something in the ground; you could use a sprinkler head or another object you know will not change by the next time you come to collect your points.
     Upon completing this field activity I was fairly educated on how to collect GCP's and the importance of them when dealing with survey grade quality mapping. Without GCP's your imagery could be well off your intended survey zone leaving you with poor quality images and mapping.

Wednesday, September 30, 2015

Field Activity #3- Conducting operations with multi-rotor UAS

    In field activities two and three we carried out mission planning both in the computer lab and in the field, as a whole class rather than in our groups of three to four members. In this field activity we carried out a mission plan in the field while also having a multi-rotor UAS fly the mission and capture pictures from our plan. This was the first time we actually put a UAS in the air on a mission and had it fly the plan. It was great experience for future labs, as well as for learning how to carry out missions in our groups.
     All of the flying we have conducted so far has been around the Bollinger soccer fields. This area has no obstructions, with plenty of buffer space if the UAS goes astray. The platform we used was the Matrix, flown on 9/23/15. The wind was calm with altostratus clouds in the sky, a setup for perfect flying conditions. Since we are flying on public soccer fields, we always have to be aware of pedestrians in the area. Every time we have been at the fields there have been soccer practices going on, so we need to make sure we stay far enough away that we don't put anyone in harm's way. Having a large class helps with this; more eyes are always better when flying.
    When we started this field activity we were broken into our groups of three or four, giving us a pilot, a spotter, and a person at the communication center. We began by sitting down and plotting out our mission in the field. We chose a small area over one of the soccer fields that we wanted to fly, covering a building on the edge of our mission area. Since our mission area was close to a square, we did not have to adjust the angle of the flight lines the way we would have if our AOI were a rectangle. We chose the speed we wanted the Matrix to fly at, along with the picture count and height. We used two Canon cameras mounted on the bottom of the Matrix, set to take pictures every three seconds. This made sure we had plenty of pictures of the AOI along with plenty of overlap.
    We had our height set at 60 feet, which seemed to work well with the Canon cameras we used. Since the mission was flown on autopilot, we knew the entire AOI was going to be covered. This was the first time we flew with autopilot, and I was amazed at the accuracy with which the Matrix flew the mission. I was the spotter for my group, and I watched an experienced pilot take off, loiter, and flip it into auto. One of the most important steps is that while in loiter you communicate with the comms center to make sure everything checks out. You don't want to flip into auto from loiter if all of a sudden you drop a bunch of your satellites; that could cause a problem with the mission or send your UAS all over the place. The Matrix platform was the perfect size for the mission we flew and the conditions. It didn't have to expend much energy making turns in our mission because there was barely any wind, which allowed our pictures to come out with great quality.
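Why a three-second interval gives plenty of overlap can be roughed out from the geometry: the camera's ground footprint shrinks as you fly lower, and the platform advances a fixed distance between shots. Everything below except the 60 ft altitude and 3 s interval is an assumption, since I didn't record the exact speed or lens field of view:

```python
import math

def forward_overlap(alt_m, fov_deg, speed_ms, interval_s):
    """Fraction of each photo shared with the next, along the flight track."""
    # ground distance covered along-track by one photo
    footprint = 2 * alt_m * math.tan(math.radians(fov_deg) / 2)
    advance = speed_ms * interval_s  # distance flown between shots
    return max(0.0, 1 - advance / footprint)

alt = 60 * 0.3048  # 60 ft converted to meters
# assumed: a 60-degree along-track field of view and 5 m/s cruise speed
ov = forward_overlap(alt, fov_deg=60.0, speed_ms=5.0, interval_s=3.0)
print(f"{ov:.0%}")  # → 29%
```

Flying slower, lower frequency is the trade-off: halving the speed in this sketch pushes the forward overlap well above 50%, which is why speed and interval get chosen together in the mission plan.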
    I was also able to see how the communication center worked while watching another group in our class perform those duties. I have worked the communication center before, but I found that by watching another group I was able to plan ahead for the next task as I tried to catch any steps they missed. One step I found that can make or break your mission is reading your waypoints more than once in Mission Planner. This makes sure your UAS gets the right waypoints rather than being sent astray.
    In conclusion, it was great experience to plan our own mission in our groups and to put a UAS in the air to fly it; this was the first time we put the two together in our groups. It is essential to go over every part of your mission in fine detail so it is carried out smoothly. The more detailed you are before you put your UAS in the air, the less likely you are to have problems with the mission in the air. We also learned about loitering and the autopilot functions on our TX controllers. The more we use the controllers, the better understanding I have of where each function is and when it is proper to be in auto, loiter, or standard controls. The more experience we get with mission planning, the more I understand how to relate this to a real-world job. The autopilot function is a great tool when carrying out many missions over a large area, while standard flying is needed when there are obstacles in the area or you need to fly a detailed area at lower elevation, say in between houses or buildings.

Wednesday, September 23, 2015

Field Activity #2- Mission Planning/ Pre-flight

    As we continue to move forward with this class we are getting closer to the fun part: flying the actual planes! Although we have all been putting time in on the flight simulators, and may think we can just take a plane out and fly it right now, there are a few things we need to do before actual takeoff. Before every flight we have a pre-flight checklist. This checklist is designed to have us check every single piece of the plane so it can fly safely and do the things we want it to do. Along with the pre-flight checklist, we also have what's called mission planning. In mission planning we can upload a flight path from the mission planning software on our computer so our plane flies the exact mission we plan. The missions we upload to our UAS have it flying on autopilot; if there is a time when we need to take over for safety measures, we can do so.
    The mission planning software looks a lot like ArcMap with a satellite imagery overlay. You can reference the area you are going to be flying along with the surrounding area to devise a good mission plan that gives you adequate room to take off and land as well. There are purple dots on the screen marking no-fly zones. It is advised to stay well away from these areas; you don't want to be contacted by the FAA for flying too close to a no-fly zone. In the mission planning controls we are able to map an area we want the UAS to fly. We can have all sorts of patterns: boxes, squares, circles, random lines, etc. When dealing with an area we want photographed in our mission, we can set how many pictures per second, the angle of the pattern depending on wind, the grid and overlap of our pictures, down to the exact camera we are going to be using. It is critical to have a good mission plan in place when you want to go out and fly. By doing this you can ensure you are going to be safe along with ethical when you are flying.
    After we have made our mission plan inside on the computer, we are ready to go outside and do our pre-flight checks. There are many different items on the UAS and computer we need to go over, so we have created a checklist to ensure we don't miss any steps, which allows us to have a safe and successful flight. This has now turned into a three-person job with a pilot, spotter, and computer monitor. The first step is hooking up your communications from the computer to the modem. The modem lets the computer send the mission to the UAS so it can fly the mission we desire. Typically the modem is attached to a wonder stick, which raises the modem into the air for a better connection with the UAS. The first section of the checklist deals with flight prep.
    We want to record the date, time, platform, and weather. When collecting the weather information we record the temperature, but most importantly the wind speed and wind direction. This tells us whether it is safe to fly or not. The next ten steps of the flight prep deal with looking over the UAS. We want to make sure all the connections, from electrical, frame, motor, props, batteries, and antennae, are tightly secured and undamaged. It is smart to use a small screwdriver to go over all of these areas to make sure nothing is loose or wobbling. We want to inspect the props for cracks; even a minor crack could cause a prop failure, which would make it dangerous to fly. When balancing the battery, we want to pick up the UAS with our fingertips to check that it balances evenly. If the UAS is not balanced evenly, it can fly out of control and be unsafe, so it is important to have the battery balanced in the middle of your UAS. After completing all of these checks you can turn your TX transmitter on, making sure your throttle joystick is in the off position.
    The next steps deal with powering up your UAS and mission plan. When you have completed a step on the checklist, you put an "x" through it.
We want to double check that the modem is properly connected to the computer. Typically you want to use the blue USB ports on the side of the computer, not the white ones; the blue ports are typically faster and give a better connection. We can now connect our UAS to the base station. We also want to see how much battery is on the computer; imagine flying your UAS on a mission and then your computer dies, which could lead to a bad ending to your mission. The next step is very important: we need to check the battery voltage on our UAS. If the voltage is below 12 volts we need to use a different battery. We also need to check the voltage on our TX transmitters, and we want this as close to maximum as possible; again, imagine flying and then your controls cut out because you forgot to check your TX battery. That would make it very hard to land your UAS. When checking for satellites, you can find the count in red in the lower left-hand corner of your mission plan on the computer. It is advised not to fly with fewer than six satellites, and the more above that the better. We now want to upload our mission to the UAS while making sure the mission area is secure. You don't want to be flying over a playground taking pictures while an elementary school is outside playing; this comes back to flying ethics and being safe. If you are questioning whether it's safe to fly, more than likely it is not.
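These power and satellite checks boil down to a few simple thresholds, so the go/no-go logic of the checklist can be sketched in code. The 12 V and six-satellite cutoffs come from the checklist above; the function name and TX check are my own invention for illustration:

```python
def preflight_go_nogo(battery_v, tx_ok, satellites,
                      min_battery_v=12.0, min_sats=6):
    """Every check must pass before arming; any failure scrubs the flight."""
    problems = []
    if battery_v < min_battery_v:
        problems.append(f"flight battery low: {battery_v} V < {min_battery_v} V")
    if not tx_ok:
        problems.append("TX transmitter battery not near maximum")
    if satellites < min_sats:
        problems.append(f"too few satellites: {satellites} < {min_sats}")
    return len(problems) == 0, problems

go, issues = preflight_go_nogo(battery_v=12.4, tx_ok=True, satellites=9)
print(go, issues)  # → True []
```

The point of structuring it this way is that one failed check is enough to hold the launch, which matches the spirit of the paper checklist: you never arm with an unchecked box.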
    Now that we have gone through the pre-flight checklist it is time to start the takeoff sequence. It can't be stressed enough that when you are the pilot you concentrate on following the UAS in the sky along with the spotter. You need to watch for obstructions that could get in the way, shifts in wind direction, and pedestrians on the ground. While the pilot and spotter concentrate on the flying, the communications person needs to stay focused on the computer and the mission plan. They need to watch that the satellites stay connected, follow the path of the UAS on the computer screen, and check that the mission plan is being carried out. If at any time the communications person sees a problem with the plan, they need to tell the pilot and spotter so the mission can be aborted.
The most important item for takeoff is that the area is secure and clear of spectators. You don't want to be taxiing down the runway when a person is walking down the same runway; that could lead to injuries to the person and damage to your UAS. Make sure all spectators are clear and your TX transmitter throttle is all the way back. You can now flip the switch on your platform to arm it; when you do, you will typically hear beeping that tells you the platform is armed. You can now deactivate your kill switch. Once your TX is armed you are in control and ready to lift your UAS off the ground. Since we were using a multicopter, we raise it off the ground and have it loiter, or hover, in position. While your UAS is loitering you want to talk to the communications post: check again your satellite signals, the weather, and the waypoints on your mission. Once communications checks out, you are ready to bring your UAS up to the designated height and switch it to autopilot. Once you flip the switch to auto, your UAS will take off and fly the mission you uploaded. Make sure the pilot and spotter keep eyes on the UAS while the communications post continues to talk and double-check the satellites as the mission is carried out. Once the mission is complete and you are ready to bring the UAS down, check the area again for spectators. You don't want someone walking under your UAS while it has spinning blades and is landing; that could turn messy.
    Once back on the ground you are ready for the post-landing checklist. You want to disconnect from the base station while powering down your UAS. Once the UAS is secure and motor functions are off with the kill switch activated, check your entire platform again to see if any wires or frame connections came loose during the mission. I would retighten everything right away; this gives you a baseline for when you decide to carry out another mission. You then disconnect your battery so you can charge it, and turn your TX transmitter off. You have now successfully completed your mission.
    The final steps of the checklist deal with transferring your pictures to your computer and analyzing the data. We have not carried out this step individually yet, but we will in future missions. This checklist has been built up over many missions; it is never perfect and always has room to grow. By changing or adding items on the checklist, you are making sure you act as safely as possible when carrying out a mission. It's always safety first: the safety of you, the pedestrians and spectators around you, and the surrounding area. It doesn't matter if your UAS crashes or you have to abort a mission, as long as you stay safe in the field.