Demonstrate a Dockerized microservice that uses PyTorch with CUDA acceleration
docker-ce v18.06.0-ce + nvidia-docker2
pytorch
pip install docker-compose (not available in the conda repositories)
Create performance tests, and work out how to measure performance differences between versions
Deploy to GCP and verify GPU acceleration (a minimal GPU check is sketched below)
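A minimal CUDA sanity check along these lines can confirm that PyTorch actually sees the GPU; this script is illustrative only and not part of the repo:

# verify_gpu.py -- illustrative CUDA sanity check, not part of the repo
import torch

def main():
    available = torch.cuda.is_available()
    print(f"CUDA available: {available}")
    if available:
        print(f"Device: {torch.cuda.get_device_name(0)}")
        # A tiny matmul on the GPU proves that kernels really execute on the device.
        x = torch.rand(1024, 1024, device="cuda")
        y = x @ x
        torch.cuda.synchronize()
        print(f"Matmul result norm: {y.norm().item():.2f}")

if __name__ == "__main__":
    main()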
Install in developer mode from source
python setup.py develop
Start the web server manually
python pytorch_server/pytorch_server.py
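The server module itself isn't reproduced here; a minimal sketch of what pytorch_server/pytorch_server.py might look like follows, assuming a Flask app exposing /health and /predict on port 8080 (the framework, routes, and port are all assumptions, not taken from the repo):

# Hypothetical sketch of pytorch_server/pytorch_server.py -- the real module may differ
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

@app.route("/health")
def health():
    # Report whether the service is up and whether CUDA is visible.
    return jsonify(status="ok", cuda=torch.cuda.is_available())

@app.route("/predict", methods=["POST"])
def predict():
    # Toy "model": sum the posted numbers on the selected device.
    values = request.get_json(force=True).get("values", [])
    result = torch.tensor(values, dtype=torch.float32, device=DEVICE).sum().item()
    return jsonify(result=result)

def main():
    app.run(host="0.0.0.0", port=8080)

if __name__ == "__main__":
    main()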
Run tests against the web server
python -m unittest functional_tests/service_test.py
pytest functional_tests/service_test.py
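A functional test could exercise the running server along these lines; the endpoint names and port carry over the assumptions from the server sketch above:

# Hypothetical sketch of functional_tests/service_test.py -- endpoints and port are assumptions
import unittest
import requests

BASE_URL = "http://localhost:8080"

class ServiceTest(unittest.TestCase):
    def test_health(self):
        resp = requests.get(f"{BASE_URL}/health", timeout=5)
        self.assertEqual(resp.status_code, 200)
        self.assertEqual(resp.json()["status"], "ok")

    def test_predict(self):
        resp = requests.post(f"{BASE_URL}/predict", json={"values": [1, 2, 3]}, timeout=5)
        self.assertEqual(resp.status_code, 200)
        self.assertAlmostEqual(resp.json()["result"], 6.0)

if __name__ == "__main__":
    unittest.main()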
Install the package
python setup.py install
python setup.py develop
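The pytorch_server and pytorch_cli commands used below come from console_scripts entry points; a minimal setup.py might wire them up like this (the package layout, dependencies, and entry-point targets are assumptions):

# Hypothetical sketch of setup.py -- metadata and entry-point targets are assumptions
from setuptools import setup, find_packages

setup(
    name="pytorch_server",
    version="0.1.0",
    packages=find_packages(),
    install_requires=["torch", "flask", "requests"],
    entry_points={
        "console_scripts": [
            # Installed so that `pytorch_server start` and `pytorch_cli test service`
            # are on the PATH after `python setup.py install` or `develop`.
            # The targets would need to parse subcommands like "start" and "test".
            "pytorch_server = pytorch_server.pytorch_server:main",
            "pytorch_cli = pytorch_server.cli:main",
        ]
    },
)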
Start the web server with the CLI
pytorch_server start
Run functional tests
pytorch_cli test service
pytest functional_tests/service_test.py
To validate that the Docker container and docker-compose work properly
Run the container with docker-compose and test it with service_test
docker-compose build
docker-compose up
pytest integration_tests/service_test.py
container_test runs docker-compose before each test case and executes the test against the running container
python -m unittest container_tests/container_test.py
pytest container_tests/container_test.py
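A container test might drive docker-compose from the test itself, for example as below; the compose invocation, port, and endpoint are assumptions:

# Hypothetical sketch of container_tests/container_test.py -- invocation and URL are assumptions
import subprocess
import time
import unittest
import requests

BASE_URL = "http://localhost:8080"

class ContainerTest(unittest.TestCase):
    def setUp(self):
        # Bring the container up in the background before each test case.
        subprocess.run(["docker-compose", "up", "-d"], check=True)
        self._wait_for_service()

    def tearDown(self):
        subprocess.run(["docker-compose", "down"], check=True)

    def _wait_for_service(self, timeout=60):
        # Poll the health endpoint until the service answers or the timeout expires.
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                if requests.get(f"{BASE_URL}/health", timeout=2).status_code == 200:
                    return
            except requests.ConnectionError:
                pass
            time.sleep(1)
        self.fail("service did not become ready in time")

    def test_predict_in_container(self):
        resp = requests.post(f"{BASE_URL}/predict", json={"values": [1, 2, 3]}, timeout=5)
        self.assertEqual(resp.status_code, 200)

if __name__ == "__main__":
    unittest.main()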
Sometimes necessary if the container tests are failing
docker-compose down --rmi local
docker-compose build