The project was conducted by
You can check out our webpage for an overview and an interactive demo.
To install dependencies, run:

make setup

in the root directory of the project.
.
│
├───data
│   ├───clean           Contains the merged dataset
│   ├───processed       Contains the processed data
│   └───raw             Contains the raw data
│
├───models              Saved models during testing
│
├───notebooks           Notebooks related to the project
│
├───src                 Python source code and scripts
│
└───visualizations      Data and model visualizations
We have provided a sample of flight and weather data between 2016 and 2019, as well as a complete training set built from it. However, the data can also be obtained and processed in the following manner.
To obtain weather data, navigate to src/data/weather and run the scraper.py script. Then run the processor.py script to process the raw data.
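As a rough illustration of the scrape-then-process step (the actual logic lives in src/data/weather; the field names and cleaning rules below are made up for the example), a processor typically parses the raw rows, drops incomplete observations, and casts numeric fields:

```python
import csv
import io

# Hypothetical raw scraper output -- the real schema is defined by
# src/data/weather/scraper.py, not by this example.
RAW = """station,timestamp,temp_c,wind_mps
ENGM,2018-01-01T06:00,-4.0,3.2
ENGM,2018-01-01T07:00,,5.1
"""

def process_weather(raw_text):
    """Parse raw rows, skip observations with missing values, cast numerics."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_text)):
        if not row["temp_c"]:  # incomplete observation: drop it
            continue
        row["temp_c"] = float(row["temp_c"])
        row["wind_mps"] = float(row["wind_mps"])
        rows.append(row)
    return rows

processed = process_weather(RAW)
```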
Because of NDA restrictions, flight records are provided as-is in the repository. Contact Avinor for a data agreement.
Then run the processor.py script in src/data/flights to process the flight records.
Once the processed flight records and weather data exist in data/processed, run the src/data/cleaner/main.py script to merge them into the final dataset.
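Conceptually, the merge step joins each flight record with the matching weather observation. The sketch below is only illustrative: the real join keys and schema are defined in src/data/cleaner/main.py, and the dicts here are synthetic stand-ins.

```python
# Synthetic stand-ins for processed flight and weather data (hypothetical keys).
flights = [
    {"airport": "ENGM", "hour": "2018-01-01T06:00", "delay_min": 12},
    {"airport": "ENGM", "hour": "2018-01-01T07:00", "delay_min": 0},
]
weather = {
    ("ENGM", "2018-01-01T06:00"): {"temp_c": -4.0, "wind_mps": 3.2},
}

def merge(flights, weather):
    """Attach weather features to each flight; drop flights with no match."""
    merged = []
    for f in flights:
        obs = weather.get((f["airport"], f["hour"]))
        if obs is None:  # no weather observation for this flight's hour
            continue
        merged.append({**f, **obs})
    return merged

dataset = merge(flights, weather)
```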
In this project, we have a main pipeline for model experimentation and evaluation, located in src/pipeline.
The pipeline has a models folder that contains a separate playground for each method/model. The models are imported into the main program, where each is provided data and run. The run function inside each model allows for experimentation in that model's domain.
To run the pipeline, simply run main.py inside the pipeline folder. To run a specific model playground, uncomment it in the main function.
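The playground pattern described above can be sketched as follows. The function names and data format are illustrative assumptions, not the repository's actual API:

```python
# Each playground exposes a run function; main calls the ones that are
# uncommented. Names and data are hypothetical.
def run_baseline(data):
    """Trivial playground: predict the mean delay for every flight."""
    mean = sum(data) / len(data)
    return [mean] * len(data)

def run_other_model(data):
    """Placeholder for a second model playground."""
    return data

def main(data):
    results = {}
    results["baseline"] = run_baseline(data)
    # results["other"] = run_other_model(data)  # uncomment to run this playground
    return results

out = main([10, 0, 5])
```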