The project uses Poetry to create an isolated environment and manage package dependencies for Python. Poetry is a tool for dependency management and packaging in Python.
You will need an official distribution of Python version ^3.7. Follow the instructions on this page to install Poetry: https://python-poetry.org/docs/#system-requirements
Linux / macOS:
curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python
Windows (PowerShell):
(Invoke-WebRequest -Uri https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py -UseBasicParsing).Content | python
This project uses a virtual environment to isolate package dependencies. To create a virtual environment and install required packages, run the following from a shell prompt:
$ poetry install
You'll also need to create a local .env file by copying the .env.template to store local configuration options. This is a one-time operation on first setup:
$ cp .env.template .env # (first time only)
The .env file is used by Flask to set environment variables when running flask run. This enables things like development mode (which also enables features like hot reloading when you make a file change).
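For orientation, the .env file is a list of simple KEY=VALUE lines. The hand-rolled parser below is only a sketch to illustrate the format — Flask (with python-dotenv installed) loads the file for you, so this is not the project's actual loading code:

```python
import os

def load_env(path=".env"):
    """Read simple KEY=VALUE lines into os.environ, skipping blanks and # comments.
    Existing environment variables are not overwritten."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Any variable already set in your shell takes precedence over the file, which mirrors python-dotenv's default behaviour.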
Navigate to https://github.com/settings/applications/new and fill in the form. (Make sure you're logged into your GitHub account!)
- Name the application e.g. DevOps ToDo App (Local)
- Set HomePage URL to: "http://localhost:5000/"
- Set Authorization callback URL to: "http://localhost:5000/login/callback"
- Copy the "Client ID" and set the CLIENT_ID to this value in the .env file
- Click on "Generate a new client secret", copy the secret and set the CLIENT_SECRET to this value in the .env file
The application runs with OAuth enabled by default; to disable authentication, set LOGIN_DISABLED=True in your .env file.
There are two authorisation roles:
• reader - These users can view to-dos but cannot change them or create new ones
• writer - These users can also change existing to-dos or create new ones
By default, all users have read-only permissions in the app unless they are added to the hardcoded writer_access list in user.py.
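The role check can be pictured as in the sketch below; the usernames are placeholders, and the real hardcoded list is writer_access in user.py:

```python
# Hypothetical sketch of the reader/writer role check described above;
# the usernames here are placeholders, not real accounts.
writer_access = ["example-user-1", "example-user-2"]

def get_role(username):
    """Writers can create and change to-dos; everyone else is read-only."""
    return "writer" if username in writer_access else "reader"
```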
This app is configured to send logs to an external service called Loggly, available at https://www.loggly.com/
Step 1: Sign up to Loggly by creating a free trial account. Step 2: Obtain a new "Customer Token". Step 3: Copy the value of the token and create the following environment variables in the .env file.
- LOGGLY_TOKEN - Customer token
- LOGGLY_TAG - Loggly tag value
- LOG_LEVEL - Options: ERROR, WARNING, INFO or DEBUG
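A sketch of how these variables might be read at startup. The endpoint shown is Loggly's standard HTTP/S event URL format, but the function and logger names here are illustrative, not the app's actual code:

```python
import logging
import os

def configure_logging():
    """Read Loggly settings from the environment and build the event endpoint URL.
    LOG_LEVEL falls back to INFO if unset or unrecognised."""
    token = os.environ.get("LOGGLY_TOKEN", "")
    tag = os.environ.get("LOGGLY_TAG", "todo-app")
    level_name = os.environ.get("LOG_LEVEL", "INFO")

    logger = logging.getLogger("todo-app")
    logger.setLevel(getattr(logging, level_name, logging.INFO))

    # Loggly's HTTP/S event endpoint embeds the customer token and tag in the URL.
    endpoint = f"https://logs-01.loggly.com/inputs/{token}/tag/{tag}/"
    return logger, endpoint
```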
Once all dependencies have been installed, start the Flask app in development mode within the Poetry environment by running:
$ poetry run flask run
You should see output similar to the following:
* Serving Flask app "app" (lazy loading)
* Environment: development
* Debug mode: on
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with fsevents reloader
* Debugger is active!
* Debugger PIN: 226-556-590
Now visit http://localhost:5000/ in your web browser to view the app.
This application is configured to use a MongoDB cluster which was created using the 'free to use' MongoDB Atlas service. When creating the cluster, please select the 'username and password' authentication method and make sure you add your local IP address when prompted (there's a button to do this automatically) so you can access the cluster from your local machine.
To run the app with MongoDB, copy the contents of .env.template into a .env file and update the relevant environment variables. You will also need to update the MongoDB cluster name in dbClientUri in index.py
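The connection string typically takes the Atlas SRV form sketched below; the environment variable names here are assumptions for illustration — match them to what dbClientUri in index.py actually reads:

```python
import os

def build_db_client_uri():
    """Assemble a MongoDB Atlas SRV connection string from environment variables.
    The variable names are illustrative; check .env.template for the real ones."""
    user = os.environ["MONGO_USER"]
    password = os.environ["MONGO_PASSWORD"]
    cluster = os.environ["MONGO_CLUSTER"]  # e.g. cluster0.abcde.mongodb.net
    return f"mongodb+srv://{user}:{password}@{cluster}/?retryWrites=true&w=majority"
```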
You can run this on a VM by running vagrant up at the root. Once the command has finished, you can visit http://localhost:5000/ in your web browser to view the app, as above. Any logs from this run are saved to applogs.txt.
During first-time setup, run the following commands:
Dev: To run the app on Docker in development mode (with hot reloading), run docker-compose up --build dev-app
Prod: To run the app on Docker in production mode, run docker-compose up --build prod-app
In subsequent runs you can omit the --build flag. Once again the app can then be found at http://localhost:8080/
You will need a Chromedriver.exe
file at the project root and Chrome installed.
To run all tests, run pytest.
To run unit tests, run pytest tests.
To run integration tests, run pytest tests_integration.
To run end-to-end tests, run pytest tests_e2e.
To run the tests in a Docker container, first run docker build --target test --tag test . to build the container, then:
- docker run test tests to run all the unit tests
- docker run test tests_integration to run all the integration tests
- docker run --env-file .env test tests_e2e to run all the end-to-end tests
- docker run --env-file .env test to run all the tests
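For orientation, a unit test discovered by pytest under tests/ might look like the sketch below; the to-do item structure is an assumption for illustration, not taken from the real app:

```python
# Hypothetical unit test in the style pytest collects from tests/;
# the dictionary shape of a to-do item here is assumed.
def make_todo(title, status="To Do"):
    return {"title": title, "status": status}

def complete(item):
    item["status"] = "Done"
    return item

def test_complete_marks_item_done():
    assert complete(make_todo("Buy milk"))["status"] == "Done"
```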
A collection of C4 model architecture diagrams has been created to visualise the hierarchy of abstractions in the application. These include a Context, Container, and Component diagram. They can be viewed at https://app.diagram.net.
This web application is hosted on Azure. The underlying infrastructure has been created using Terraform, an open source "IaC" tool which allows us to use declarative code to describe the desired "end-state" infrastructure for running an application.
To make any changes to the infrastructure, you should edit the terraform files and then apply the changes. Avoid making changes directly on the Azure portal.
The workflows of Terraform are built on top of five key steps: Write, Init, Plan, Apply, and Destroy. See below for details:
- Run terraform init to initialize the working directory containing Terraform configuration files. It is safe to run this command multiple times.
- Make changes to your Terraform code.
- Create an execution plan with terraform plan; this is a handy way to check whether the execution plan matches your expectations without making any changes to real resources or to the state.
- Apply your changes by running terraform apply, which creates or introduces changes to real infrastructure.
- To destroy infrastructure governed by Terraform, run terraform destroy.
This application uses Azure Blob Storage to store remote state. The setup for this was done by running \scripts\StoreTfStateInAzureStorage.sh. There is no need to execute this script again unless the state needs to be set up again.
Minikube is a utility you can use to run Kubernetes (k8s) on your local machine. It creates a single node cluster contained in a virtual machine (VM). This cluster lets you demo Kubernetes operations without requiring the time and resource-consuming installation of full-blown K8s.
The instructions below will show you how to run the To-Do app on Kubernetes using minikube.
- Run minikube start in an admin terminal to spin up your minikube cluster.
- Before we can get our To-Do app running on minikube, we'll need to create a Docker image for our Pod. Run the command below:
  docker build --target production --tag todo-app:prod .
- Copy the contents of secret.yaml.template to secret.yaml and populate the required variables with encoded values. The secret.yaml file has been added to .gitignore to prevent anyone from accidentally committing sensitive configuration to source control.
- Deploy the To-Do App on Kubernetes by applying the Kubernetes manifest files below with kubectl. Kubectl uses the Kubernetes API to interact with the cluster.
  cd minikube
  kubectl apply -f deployment.yaml
  kubectl apply -f service.yaml
  kubectl apply -f configmap.yaml
  kubectl apply -f secret.yaml
- After each deployment, run the command below to link up our minikube Service with a port on localhost:
  kubectl port-forward service/module-14 5000:80
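The "encoded values" that go into secret.yaml must be base64-encoded, since Kubernetes Secret data fields expect base64. A minimal helper for producing them, using only the Python standard library:

```python
import base64

def encode_secret(value):
    """base64-encode a string for use as a data value in secret.yaml."""
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

# Example: encode_secret("secret") returns "c2VjcmV0"
```

You can achieve the same thing from a shell with echo -n 'secret' | base64; the -n matters, as a trailing newline changes the encoding.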