This is the JustFix.nyc Tenant Platform.
In addition to this README, please feel free to consult the project wiki, which contains details on the project's principles and architecture, development tips, and more.
Note: It's highly recommended that you follow the Developing with Docker instructions, as this makes development much easier. But if you'd really rather set everything up without Docker, read on!
You'll need Python 3.8.0 and pipenv, as well as Node 10, yarn, and Git Large File Storage (LFS). You will also need to set up Postgres version 10 or later.
If you didn't have Git LFS installed before cloning the repository, you can obtain the repository's large files by running `git lfs pull`.
First create an environment file and optionally edit it as you see fit:

```
cp .justfix-env.sample .justfix-env
```
Since you're not using Docker, you will want to point `DATABASE_URL` at your Postgres instance, using the database URL schema.
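As a hypothetical illustration (the credentials and database name below are placeholders, not values from this project), a Postgres database URL breaks down into components that can be inspected with Python's standard library:

```python
from urllib.parse import urlparse

# A placeholder URL in the postgres:// schema; the user, password,
# host, port, and database name here are all hypothetical.
url = urlparse("postgres://myuser:mypassword@localhost:5432/tenants2")

print(url.scheme)    # postgres
print(url.hostname)  # localhost
print(url.port)      # 5432
print(url.path)      # /tenants2
```

The value you put in `.justfix-env` should follow this same shape, using your local Postgres credentials.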
Then set up the front-end and configure it to continuously re-build itself as you change the source code:

```
yarn
yarn start
```
Then, in a separate terminal, you'll want to instantiate your Python virtual environment and enter it:

```
pipenv install --dev --python 3.8
pipenv shell
```
(Note that if you're on Windows and have bash, you might want to run `pipenv run bash` instead of `pipenv shell`, to avoid a bug whereby command-line history doesn't work with cmd.exe.)
Then start the app:

```
python manage.py migrate
python manage.py runserver
```
Then visit http://localhost:8000/ in your browser.
You'll want to create an admin user account to access the app's Admin Site. Django provides this functionality out of the box, so from the repository root, run the Django command for creating a superuser:

```
python manage.py createsuperuser
```

Follow the prompts in your terminal to set up the account. Once it's created, visit http://localhost:8000/admin and log in with your new credentials to access the Admin Site.
Some of this project's dependencies are cumbersome to install on some platforms, so they're not installed by default.
However, they are present in the Docker development environment (described below), and they are required to develop some functionality, as well as for production deployment. They can be installed via:

```
pipenv run pip install -r requirements.production.txt
```
These dependencies are described below.
WeasyPrint is used for PDF generation. If it's not installed during development, then any PDF-related functionality will fail.
Instructions for installing it can be found on the WeasyPrint installation docs.
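If WeasyPrint is installed, generating a PDF comes down to a small API call. The sketch below is illustrative only (the function and HTML string are not from this repository); the import is kept inside the function so code that never touches PDFs still loads when WeasyPrint is absent:

```python
# Minimal sketch of WeasyPrint's core PDF API. The import is kept
# inside the function so the rest of the app can run without
# WeasyPrint installed; only PDF-related calls will then fail.
def render_pdf(html: str) -> bytes:
    from weasyprint import HTML
    return HTML(string=html).write_pdf()

# render_pdf("<h1>Hello</h1>") would return the PDF document as bytes.
```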
To run the back-end Python/Django tests, use:

```
pytest
```

To run the front-end Node/TypeScript tests, use:

```
yarn test
```
You can also use `yarn test:watch` to have Jest continuously watch the front-end tests for changes and re-run them as needed.
For help on environment variables related to the Django app, run:

```
python manage.py envhelp
```

Alternatively, you can examine `project/justfix_environment.py`.
For the Node front-end:

- `NODE_ENV` can be set to `production` for production, or any other value for development.
- See `frontend/webpack/webpack-defined-globals.d.ts` for more values.
Some data that is shared between the front-end and back-end is in the `common-data/` directory. The back-end generally reads this data in JSON format, while the front-end reads a TypeScript file that is generated from the JSON.
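To illustrate the general idea (this is a sketch only, not the actual logic of commondatabuilder, and the `usStates` data is hypothetical), converting a piece of shared JSON into a TypeScript source file might look like:

```python
import json

# Sketch only: the real conversion is performed by commondatabuilder
# and configured in common-data/config.ts. This just illustrates
# turning shared JSON into a TypeScript constant declaration.
def json_to_ts(name: str, json_text: str) -> str:
    data = json.loads(json_text)
    return f"export const {name} = {json.dumps(data, indent=2)};\n"

ts = json_to_ts("usStates", '["NY", "NJ"]')
print(ts)
```

The generated file can then be imported by front-end code with full type information, while the back-end keeps reading the original JSON.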
A utility called `commondatabuilder` is used to generate the TypeScript file. To execute it, run:

```
node commondatabuilder.js
```
You will need to run this whenever you make any changes to the underlying JSON files.
If you need to add a new common data file, see `common-data/config.ts`, which defines how the conversion from JSON to TypeScript occurs.
The communication between server and client occurs via GraphQL and has been structured for type safety. This means that we'll get notified if there's ever a mismatch between the server's schema and the queries the client is generating.
The server uses Graphene-Django for its GraphQL needs.
The JSON representation of its schema is in `schema.json` and is automatically regenerated by the development server, though developers can manually regenerate it via `python manage.py graphql_schema` if needed.
Client-side GraphQL code is generated as follows:

- Raw queries are in `frontend/lib/queries/` and given a `.graphql` extension. Currently, they must consist of one query, mutation, or fragment that has the same name as the base name of the file. For instance, if the file is called `SimpleQuery.graphql`, then the contained query should be called `SimpleQuery`, e.g.:

  ```graphql
  query SimpleQuery($thing: String) {
      hello(thing: $thing)
  }
  ```

- Some GraphQL queries are automatically generated based on the configuration in `frontend/lib/queries/autogen-config.toml`.
- The querybuilder, which runs as part of `yarn start`, will notice changes to any of these raw queries, `autogen-config.toml`, or the server's `schema.json`, and do the following:
  - It automatically generates any GraphQL queries that need generating.
  - It runs Apollo Code Generation to validate the raw queries against the server's GraphQL schema and create TypeScript interfaces for them.
  - For queries and mutations, it adds a function to the TypeScript interfaces that is responsible for performing the query in a type-safe way.
  - The resultant TypeScript interfaces and/or function is written to a file that is created next to the original `.graphql` file (e.g., `SimpleQuery.ts`).

  If the developer prefers not to rely on `yarn start` to automatically rebuild queries for them, they can also manually run `node querybuilder.js`.
- At this point the developer can import the final TS file and use the query.
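The file-name/operation-name convention above could be checked mechanically. The following is a sketch (not part of the repository's tooling) that uses a regex to compare a `.graphql` file's first operation name against its base name:

```python
import re
from pathlib import Path

# Sketch (not part of the repo's tooling): verify that a .graphql
# file's single query/mutation/fragment shares the file's base name,
# per the convention described above.
def operation_matches_filename(path: str, source: str) -> bool:
    base = Path(path).stem
    match = re.search(r"\b(?:query|mutation|fragment)\s+(\w+)", source)
    return match is not None and match.group(1) == base

print(operation_matches_filename(
    "SimpleQuery.graphql",
    "query SimpleQuery($thing: String) { hello(thing: $thing) }",
))  # True for a conforming file
```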
You can alternatively develop the app via Docker, which means you don't have to install any dependencies. However, Docker itself takes some time to learn.
As with the non-Docker setup, you'll first want to create an environment file and optionally edit it as you see fit:

```
cp .justfix-env.sample .justfix-env
```
Then, to set everything up, run:

```
bash docker-update.sh
```

Then run:

```
docker-compose up
```
This will start up all services; you can then visit http://localhost:8000/ to access the app.
Whenever you update your repository via e.g. `git pull` or `git checkout`, you should update your containers by running:

```
bash docker-update.sh
```
If your Docker setup appears to be in an irredeemable state and `bash docker-update.sh` doesn't fix it, or if you just want to free up extra disk space used up by the app, you can destroy everything by running:

```
docker-compose down -v
```

Note that this may delete all the data in your instance's database. At this point you can re-run `bash docker-update.sh` to set everything up again.
To access the app container, run:

```
docker-compose run app bash
```

This will run an interactive bash session inside the main app container. In this container, the `/tenants2` directory is mapped to the root of the repository on your host; you can run any command, like `python manage.py` or `pytest`, from there. In particular, this bash session is where you can create an admin user to access the app's Admin Site.
Development, production, and our continuous integration pipeline (CircleCI) use a built image from the `Dockerfile` on Docker Hub as their base to ensure dev/prod parity.
Changes to the `Dockerfile` should be pretty infrequent, as it defines the lowest level of our application's software stack, such as its Linux distribution. However, changes do occasionally need to be made.
Whenever you change the `Dockerfile`, you will need to push the new version to Docker Hub and change the tag in a few files to correspond to the new version you've pushed. To push your new version, you will need to:
- Come up with a unique tag name, preferably one that isn't already taken. (While you can use an existing one, it's recommended that you create a new one so that other pull requests using the existing one don't break.) For the rest of these instructions we'll assume your new tag is called `0.1`.
- Run `docker build -t justfixnyc/tenants2_base:0.1 .` to build the new image.
- Run `docker push justfixnyc/tenants2_base:0.1` to push the new image to Docker Hub.
- In `Dockerfile.web`, `docker-services.yml`, and `.circleci/config.yml`, edit the references to `justfixnyc/tenants2_base` to point to the new tag.
The app uses the twelve-factor methodology, so deploying it should be relatively straightforward.
At the time of this writing, however, the app's runtime environment does need both Python and Node to execute properly, which could complicate matters.
A Python 3 script, `deploy.py`, is located in the repository's root directory and can assist with deployment. It has no dependencies other than Python 3.
It's possible to deploy to Heroku using their Container Registry and Runtime. To build and push the container to their registry, run:

```
python3 deploy.py heroku
```

You'll likely want to use Heroku Postgres as your database backend.
The codebase has a number of optional integrations with third-party services and data sources. Run `python manage.py envhelp` for a listing of all environment variables related to them.
You can load all the NYCHA offices into the database via:

```
python manage.py loadnycha nycha/data/Block-and-Lot-Guide-08272018.csv
```
Once imported, any users from NYCHA who file a letter of complaint will automatically have their landlord address populated.
Note that the CSV loaded by this command was originally generated by the JustFixNYC/nycha-scraper tool. That tool can be re-run to create new CSV files that may be more up-to-date than the one in this repository.
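If you want to inspect a CSV like this before loading it, Python's standard `csv` module is enough. The column names in this sketch are purely hypothetical; consult the real Block-and-Lot-Guide CSV for its actual header row:

```python
import csv
import io

# Purely illustrative: a tiny CSV in the spirit of the NYCHA
# block-and-lot guide. The column names are hypothetical; the real
# file's header row may differ.
sample = io.StringIO(
    "DEVELOPMENT,BLOCK,LOT\n"
    "SAMPLE HOUSES,123,45\n"
)
rows = list(csv.DictReader(sample))
print(rows[0]["DEVELOPMENT"])  # SAMPLE HOUSES
```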
The tenant assistance directory, known within the project as `findhelp`, needs shapefiles of New York City geographic regions to allow staff to define the catchment areas of tenant resources. These shapefiles can be loaded via the following command:

```
python manage.py loadfindhelpdata
```
The shapefile data is stored within the repository using Git LFS and has the following provenance:

- `findhelp/data/ZIP_CODE_040114` - https://data.cityofnewyork.us/Business/Zip-Code-Boundaries/i8iw-xf4u
- `findhelp/data/Borough-Boundaries.geojson` - https://data.cityofnewyork.us/City-Government/Borough-Boundaries/tqmj-j8zm
- `findhelp/data/Community-Districts.geojson` - https://data.cityofnewyork.us/City-Government/Community-Districts/yfnk-k7r4
- `findhelp/data/ZillowNeighborhoods-NY` - https://www.zillow.com/howto/api/neighborhood-boundaries.htm
You can optionally integrate the app with Celery to ensure that some long-running tasks will not cause web requests to time out.
If you're using Docker, Celery isn't enabled by default. To enable it, you need to extend the default Docker Compose configuration with `docker-compose.celery.yml`.
For details on this, see Docker's documentation on Multiple Compose files.
For example, to start up all services with Celery integration enabled, you can run:

```
docker-compose -f docker-compose.yml -f docker-compose.celery.yml up
```