lucas-batier/mowing-detection

Mowing/Grazing detection

Automatic detection of mowing and grazing from Sentinel images in the French Alps through deep learning approaches.

Process

The following process describes how to collect the data, process them and predict mowing, from my end-of-study project at the Laboratoire d'Écologie Alpine (LECA) in Grenoble. It is written for computer scientists, or at least for people who know how to use Python 3 and install the needed packages with pip.

  1. Download preprocessed S1 tiled and S2 corrected images from PEPS
  2. Organize downloaded images in the predefined folder architecture
  3. Compute vegetation indices from S2 images
  4. Compute and filter the groundtruth and the dataset from reference files and Sentinel images
  5. Launch the learning and testing of the multimodal-temporal detector

1 Downloading the preprocessed images

Criteria selection

  1. Go to the PEPS Explore tab

SENTINEL-1

  1. Select the region of interest by drawing on the map, then select the criteria in the item list on the left (example below for the Écrins National Park region of interest, years 2018-2019)

(screenshot: S1 criteria selection)

  2. Select the products you need by checking the boxes in the list below the map, then validate the selection

(screenshot: S1 list selection)

SENTINEL-2

  1. Select the region of interest by tile, then select the criteria in the item list on the left (example below for tile 31TGK, years 2018-2019)

(screenshot: S2 criteria selection)

  2. Select the products you need by checking the boxes in the list below the map, then validate the selection

(screenshot: S2 list selection)

Preprocessing

  1. Go to the processing center by clicking on the gears icon at the top right of the page
  2. Select the processing you need (S1Tiling for S1, MAJA for S2)
  3. Select the products you want to process
  4. Launch the processing

Downloading

  1. The processing state can be checked from the processing center
  2. When the processing is done, download the resulting products

2 Folder architecture

The folder architecture is essential for the further processing steps, such as the vegetation index computation.

  1. Save the downloaded products (images or folders of images) according to the following folder architecture
parentfolder/tile/year/SENTINEL-X/products

Example for SENTINEL-1, tile 31TGK and year 2018

parentfolder/31TGK/2018/SENTINEL-1/s1*.tiff

Example for SENTINEL-2, tile 31TGK and year 2019

parentfolder/31TGK/2019/SENTINEL-2/SENTINEL2*/

NB: the SENTINEL-2 products are folders while the SENTINEL-1 products are single images

  2. Move the Scripts/ directory of this repository into the parent folder
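Before launching the scripts, the layout can be sanity-checked with a few lines of Python. This is a minimal sketch; the `product_dir` and `check_layout` helpers below are illustrative, not part of this repository:

```python
import os

def product_dir(parent, tile, year, sensor):
    """Build the expected product directory for one tile/year/sensor,
    following the parentfolder/tile/year/SENTINEL-X/ layout above."""
    assert sensor in ("SENTINEL-1", "SENTINEL-2")
    return os.path.join(parent, tile, str(year), sensor)

def check_layout(parent, tiles, years):
    """Return the expected directories that are missing under `parent`."""
    missing = []
    for tile in tiles:
        for year in years:
            for sensor in ("SENTINEL-1", "SENTINEL-2"):
                d = product_dir(parent, tile, year, sensor)
                if not os.path.isdir(d):
                    missing.append(d)
    return missing
```

An empty result from `check_layout` means every tile/year/sensor directory is in place.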

3 Vegetation index

To compute the vegetation index images, use the vegindex.py script. Depending on the number of S2 images it may take a while (about 1 day for 2 years of images). It also uses a lot of memory (multiple bands are opened and written at the same time); the script releases memory at several points, but with a machine having less than 16 GB of RAM it will still crash at this resolution (10980 x 10980).

EXAMPLE for tile 31TGK on year 2018 and 2019

python3 vegindex.py ../31TGK/2018/SENTINEL-2/* ../31TGK/2019/SENTINEL-2/*
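The exact list of indices computed by vegindex.py is not reproduced here, but as an illustration, a vegetation index such as NDVI is computed pixel by pixel from the S2 red (B4) and near-infrared (B8) bands. A minimal pure-Python sketch (real code would operate on full raster bands, e.g. with numpy or rasterio; the `ndvi` helper is an assumption, not the repository's implementation):

```python
def ndvi(red, nir, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), computed pixel by pixel.

    `red` and `nir` are 2-D lists of surface reflectances with the
    same shape; `eps` avoids division by zero on empty pixels.
    """
    return [[(n - r) / (n + r + eps) for r, n in zip(r_row, n_row)]
            for r_row, n_row in zip(red, nir)]
```

For example, a pixel with red reflectance 0.1 and NIR reflectance 0.5 yields an NDVI of about 0.67, typical of dense vegetation.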

4 Groundtruth and Dataset

Reference data

The aim of our references is to collect class 1 (mowed) and class 0 (not mowed) lands.

Class 1 (Mowed)

The Delphine DB gathers the mowing areas collected across the Écrins National Park.

Class 0 (Not Mowed)

The CNES provides the OSO land cover map, which gives useful classes (woodlands, grasslands and meadows) for class 0. It can be downloaded here

Class merging

The groundtruth image is built with QGIS, downloadable here

Open QGIS:

Class 1

  1. Add the Delphine DB raster (DB/Delphine/Mowing.img) as a layer (Layer - Add Layer - Add Raster Layer)


  2. Right-click on the Delphine DB layer in the layers panel (bottom left) and go to Export - Save As...


  3. In the popup menu:
    1. Choose GEOTIFF as file format
    2. Write mowing1.tif as file name
    3. Choose EPSG:32631 - WGS 84 / UTM zone 31N as reference system
    4. Define the Extent coordinates as 5000040N, 809760E, 699960W and 4890240S
    5. Force the resolution to 10 x 10
    6. Click on OK
    7. Move the exported file into DB/Mowing/


Class 0

  1. Add the OSO raster (DB/OSO/OSC_20XX.tif) as a layer (Layer - Add Layer - Add Raster Layer)
  2. Right click on the OSO layer in the bottom left, go to Export - Save As...
  3. In the popup menu: do exactly the same as for class 1, but write mowing0.tif as the file name


Date grid

The date grid is created by searching for a SENTINEL-1 / SENTINEL-2 correspondence (given their high revisit frequencies, SENTINEL-1 images are matched to SENTINEL-2 dates by a nearest-neighbour approximation)

dategrid.py script:

Input

  • --folder path to the parentfolder containing tile/year/SENTINEL-X/*

Output [All in the script folder]

  • dategrid.csv date grid giving the correspondence between SENTINEL-1 and SENTINEL-2

Example

python3 dategrid.py --folder path/to/parentfolder
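The nearest-neighbour pairing described above can be sketched as follows. The `nearest_s1_dates` helper is hypothetical, not the repository's implementation; real acquisition dates would be parsed from the product filenames:

```python
from datetime import date

def nearest_s1_dates(s2_dates, s1_dates):
    """Pair each SENTINEL-2 acquisition date with the SENTINEL-1
    date closest in time (nearest-neighbour approximation)."""
    return [(d2, min(s1_dates, key=lambda d1: abs((d1 - d2).days)))
            for d2 in s2_dates]
```

For instance, an S2 acquisition on 2018-01-10 with S1 passes on 2018-01-08 and 2018-01-14 would be paired with the January 8 pass.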

Parcel filtering and dataset computation

The parcel filtering and dataset creation are performed in 6 steps, which can be run separately in order, or all at once with filtering_n_dataset.py

  1. Select only meadows, grasslands and woody moorlands in the OSO images, and formalize the class 1 images (1_prefilter.py)
  2. Remove overlapping parcels (2_overlayremoval.py). NB: the images are also resized to decrease the computation time of the next steps
  3. Remove parcels that are too small (< 1 hectare) or too big (> 100 hectares), crop the remaining ones and save the final parcels image (3_sizeremoval.py)
  4. Compute the groundtruth vector (4_groundtruthvect.py)
  5. Compute the contextual dataset (5_contextualdataset.py); for now this only contains altitude
  6. Compute the modal dataset (6_modaldataset.py). This step is very long (almost 1 week for 4000 parcels with 17 modes on 144 images, i.e. 2 years at a 5-day frequency), so it is a good idea to run it separately
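The size filter of step 3 follows directly from the 10 m Sentinel pixel grid: one pixel covers 100 m², so 1 hectare corresponds to 100 pixels. A minimal sketch of the rule (the `keep_parcel` helper is illustrative, not the actual 3_sizeremoval.py logic):

```python
def keep_parcel(pixel_count, pixel_size_m=10.0, min_ha=1.0, max_ha=100.0):
    """Size filter: keep parcels between 1 and 100 hectares.

    With the 10 m Sentinel grid one pixel covers 100 m^2, so
    1 ha = 100 pixels and 100 ha = 10,000 pixels.
    """
    area_ha = pixel_count * pixel_size_m ** 2 / 10_000.0
    return min_ha <= area_ha <= max_ha
```

A 99-pixel parcel (0.99 ha) is dropped, while a 100-pixel parcel (exactly 1 ha) is kept.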

filtering_n_dataset.py script:

Input

  • --class0 path to the unmowed parcel image
  • --class1 path to the mowed parcel image
  • --altitude path to the altitude map image
  • --datesgrid path to the dates grid CSV file
  • --tiledir path to the tile directory (e.g. 31TGK shown in section 2 Folder architecture)

Output [All in the script folder]

  • parcels.tif image containing the parcels' locations and labels (from 1 to the number of parcels)
  • grt.npy groundtruth vector containing the parcels' groundtruth values (1 for mowed, 0 for unmowed)

Example

python3 filtering_n_dataset.py --class0 path/to/mowing0.tif --class1 path/to/mowing1.tif --altitude path/to/alti.tif --datesgrid path/to/dategrid.csv --tiledir path/to/31TGK

5 Learning and testing

Launch the learning and testing of the model with the model.py script:

Input

  • --mode path to the modal dataset array
  • --context path to the contextual dataset array
  • --labels path to the groundtruth labels array

Output

  • AUC results for each fold

Example

python3 model.py --mode path/to/mode.npy --context path/to/context.npy --labels path/to/labels.npy
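The per-fold AUC reported by the script can be computed with the rank-sum (Mann-Whitney) formulation of the ROC area. A minimal pure-Python sketch, illustrative only (the repository may instead rely on a library such as scikit-learn):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic.

    `labels` are 0/1 groundtruth values, `scores` the detector
    outputs; each mowed/unmowed pair where the mowed parcel scores
    higher counts as a win, with half credit for ties.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 means the detector is no better than chance; 1.0 means it ranks every mowed parcel above every unmowed one.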

NB: This model was built in a rather coarse manner (in less than 2 months); do not rely on it too heavily.
