Repository containing code from research on Convolutional Neural Networks for detecting the extent and origin of burned land areas in the Amazon Rainforest.
In this project, a model based on the U-Net architecture was trained for semantic segmentation. Given two images of an area, one prior to and one after a set of fires, the model generates a mask with three classes: non-burned areas (black pixels on the mask), burned areas that were forest prior to the fire (dark green pixels), and burned areas that were pasture prior to the fire (light green pixels).
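The class-to-color convention above can be sketched as a small helper that paints a predicted class map into an RGB mask. The exact RGB values below are illustrative assumptions, not necessarily the ones used in the repository:

```python
import numpy as np

# Hypothetical color mapping for the 3 output classes.
# The RGB triplets are illustrative, not taken from the repo.
CLASS_COLORS = {
    0: (0, 0, 0),        # Non-burned area            -> black
    1: (0, 100, 0),      # Burned, previously forest  -> dark green
    2: (144, 238, 144),  # Burned, previously pasture -> light green
}

def class_map_to_rgb(class_map: np.ndarray) -> np.ndarray:
    """Convert an (H, W) array of class indices into an (H, W, 3) RGB mask."""
    rgb = np.zeros((*class_map.shape, 3), dtype=np.uint8)
    for cls, color in CLASS_COLORS.items():
        rgb[class_map == cls] = color
    return rgb
```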
The following table shows the metrics of the model for each class:
Class | Accuracy | Precision | Recall | F1 |
---|---|---|---|---|
Not Burned | 0.931086 | 0.976243 | 0.934025 | 0.954668 |
Forest | 0.984059 | 0.842559 | 0.740085 | 0.788004 |
Pasture | 0.925765 | 0.745628 | 0.902305 | 0.816518 |
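The per-class metrics in the table are standard one-vs-rest scores. A minimal sketch of how they can be computed from flattened ground-truth and predicted masks (this is a generic implementation, not necessarily the evaluation code used in the research):

```python
import numpy as np

def per_class_metrics(y_true, y_pred, cls):
    """One-vs-rest accuracy, precision, recall, and F1 for a single class."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == cls) & (y_true == cls))  # true positives
    fp = np.sum((y_pred == cls) & (y_true != cls))  # false positives
    fn = np.sum((y_pred != cls) & (y_true == cls))  # false negatives
    tn = np.sum((y_pred != cls) & (y_true != cls))  # true negatives
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```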
Example of the input and output expected from the neural net:
- Input: image of an area prior to a set of fires.
- Input: image of the same area, after the set of fires.
- Output: mask generated from both images above, representing the extent and origin of all burned land present in the post-fire image.
Open a terminal and run:

```
python get_prediction.py [path_raster_before_fires] [path_raster_after_fires] [resulting_mask_name]
```

Don't forget to include the file extension in `resulting_mask_name` (e.g. `final_mask.png`). Note that both rasters must have the same dimensions for the code to work.
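The same-dimensions requirement can be verified up front before running the prediction. A minimal sketch of such a check, assuming the rasters have already been loaded as NumPy arrays (how `get_prediction.py` loads and validates them internally is not shown in this README):

```python
import numpy as np

def check_same_dimensions(before: np.ndarray, after: np.ndarray) -> None:
    """Raise a clear error if the two rasters differ in height or width."""
    if before.shape[:2] != after.shape[:2]:
        raise ValueError(
            f"Raster dimensions differ: {before.shape[:2]} vs {after.shape[:2]}"
        )
```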