Creative Platform Using Public AI Models Through Postman

Note: This repo draws material directly from this Medium article and the IBM MAX model documentation.

These APIs will only be running from March 11th, 2019 to March 22nd, 2019, or until compute credits are depleted.

Using the Platform

To use the platform:

[Screenshot: Instructions-1]

  • On the left, select the app you want to use, e.g. style-transfer-mosaic

[Screenshot: Instructions-2]

  • Go to the Body tab
  • Next to the image key, choose the image file you wish to submit to the app
  • Click Send

[Screenshot: Instructions-3]

  • If you get an error indicating that Postman could not get a response, try resizing the image to be smaller
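The same request can also be sent from the command line if Postman misbehaves. The example below is a hypothetical sketch: <app-url> stands in for the endpoint configured in the Postman collection, and the /model/predict path with an image form field is the convention used by the IBM MAX models (e.g. style-transfer-mosaic); other apps may use a different path.

# Hypothetical curl equivalent of the Postman request; adjust the URL and path to the app
curl -X POST -F "image=@path/to/photo.jpg" http://<app-url>/model/predict -o output.jpg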

To run your own implementation of the models on Google Cloud Platform's Kubernetes Engine, follow the steps below.

Creating the Platform

Accounts

If you do not have accounts with these services (Google Cloud Platform and Postman), create them. You will need compute credits for Google Cloud Platform.

Creating the APIs

If you wish to create an API for use with the platform, you can use this framework to transform your model into an API, though the author notes it may not be ideal for production. The code for the app has been cloned to code/keras-app.

The framework mentioned above was used to create the Docker image of the ImageNet classifier. From here on out, we will assume that any Docker images of interest are already APIs and available on Docker Hub or GitHub.
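For reference, building and publishing such an image looks roughly like the sketch below, assuming a Dockerfile is present in code/keras-app (the Docker Hub username is a placeholder):

# Build the API image from the cloned app and push it to Docker Hub
cd code/keras-app
docker build -t <dockerhub-user>/keras-app .
docker push <dockerhub-user>/keras-app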

Start your Kubernetes cluster

Open your Google Cloud Platform Console and go to Kubernetes Engine.

Create a cluster with 2 nodes, each with 4 vCPUs and 15 GB of RAM.

Open the cloud shell from the console.
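If you prefer to create the cluster from the cloud shell instead of the console UI, a roughly equivalent sketch is shown below; the cluster name and zone are placeholders, and n1-standard-4 corresponds to 4 vCPUs with 15 GB of RAM per node.

# Create a 2-node cluster of n1-standard-4 machines (4 vCPUs, 15 GB RAM each)
gcloud container clusters create creative-platform --num-nodes=2 --machine-type=n1-standard-4 --zone=us-central1-a

# Point kubectl at the new cluster
gcloud container clusters get-credentials creative-platform --zone=us-central1-a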

Obtaining the Docker Images of Interest

We will be running the following models in our platform: the Keras ImageNet classifier (keras-app) and the IBM MAX Fast Neural Style Transfer model.

To deploy the Image Net classifier, run

kubectl run keras-app --image=oazarate/keras-app --port 5000

Then check the status with kubectl get pods until the pod is READY

kubectl get pods

Once it is ready, expose the port to make it publicly available

kubectl expose deployment keras-app --type=LoadBalancer --port 80 --target-port 5000

Then run kubectl get service to find the external IP.

This IP points to the app and is later used in the creation of the JSON object for Postman.
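As a quick sanity check, you can hit the classifier directly; this assumes the keras-app follows its framework's convention of a /predict route that accepts a multipart image field, and the file name is a placeholder.

# Send a test image to the classifier through the load balancer (port 80)
curl -X POST -F "image=@dog.jpg" http://<EXTERNAL-IP>/predict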

To use the IBM MAX models, use:

kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Fast-Neural-Style-Transfer/master/max-fast-neural-style-transfer.yaml

However, as before, the app is only available locally. In this case, we create a service called my-service that exposes it

kubectl expose deployment max-fast-neural-style-transfer --type=LoadBalancer --name=my-service

You can then find the external IP of the service with kubectl get service
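Because no --port was passed to kubectl expose, the service re-uses the container's port 5000. Assuming the usual IBM MAX API layout (Swagger docs at the root and a /model/predict endpoint that accepts a multipart image field), a quick test looks like:

# Test the style transfer model; the response should be the stylized image
curl -X POST -F "image=@photo.jpg" http://<EXTERNAL-IP>:5000/model/predict -o stylized.jpg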

About

Creative platform application for Northwestern EECS 496-7 Advanced Deep Learning
