The Renku platform is under very active development and should be considered highly volatile.
The Renku API gateway connects the different Renku clients to the various Renku backend services (GitLab, JupyterHub, etc.). Currently, it mainly mediates the communication between the Renku web UI and GitLab.
In order to get an instance of Renku up and running, clone the main Renku repository and follow these instructions.
Once you have an instance of Renku running locally, you could modify the gateway code
and restart the platform through the `make minikube-deploy` command to see the
changes. However, this makes for a very poor development experience, as the deployment
process is optimized for production.
Instead, we recommend connecting to your minikube (or any other Kubernetes cluster) through
telepresence. Once telepresence is installed, create a Python environment and install
the necessary Python dependencies by running `pipenv install`. Then, start a
telepresence shell through `make dev` and launch a development server by executing
the prompted command inside the telepresence shell.
The gateway in the development setting is now available under the IP address of your
minikube cluster (`$(minikube ip)/api`), and you should see requests from the
Renku UI appear in the logs.
So what is happening here? The command `make dev` launches telepresence, which
swaps the renku-gateway service in your minikube deployment for a locally running version of
the gateway served by a Flask development server. This gives you live updates on code changes
in a minikube deployment!
You can run the tests with:

```shell
$ pipenv run pytest
```
The simplest way to deploy the gateway is using Helm charts and setting the needed values. If you prefer to run the Docker image directly, here is the list of all environment variables that can be set, with their default values.
| Variable name | Description | Default value |
|---|---|---|
| `HOST_NAME` | The URL of this service. | `http://gateway.renku.build` |
| `GATEWAY_SECRET_KEY` | The key used, among others, to encrypt session cookies. This parameter is mandatory and there is no default. | |
| `GATEWAY_ALLOW_ORIGIN` | CORS configuration listing all domains allowed to use the gateway. Use `"*"` to allow all. | `""` |
| `GATEWAY_REDIS_HOST` | The hostname/IP of the Redis instance used for persisting sessions. | `renku-gw-redis` |
| `RENKU_ENDPOINT` | The URL of the Renku UI. | `http://renku.build` |
| `GITLAB_URL` | The URL of the GitLab instance to proxy. | `http://gitlab.renku.build` |
| `GITLAB_PASS` | A GitLab personal access token with `api` and `sudo` scopes. | `dummy-secret` |
| `GITLAB_CLIENT_ID` | The client ID for the gateway in GitLab. | `renku-ui` |
| `GITLAB_CLIENT_SECRET` | The corresponding client secret. | `no-secret-needed` |
| `JUPYTERHUB_URL` | The URL of the JupyterHub instance. | `{{HOST_NAME}}/jupyterhub` |
| `JUPYTERHUB_CLIENT_ID` | The client ID for the gateway in JupyterHub. This corresponds to the service `oauth_client_id`. | `gateway` |
| `JUPYTERHUB_CLIENT_SECRET` | The client secret for the gateway in JupyterHub. This corresponds to the service `api_token`. | `dummy-secret` |
| `KEYCLOAK_URL` | The URL of the Keycloak instance. | `http://keycloak.renku.build:8080` |
| `OIDC_CLIENT_ID` | The client ID for the gateway in Keycloak. | `gateway` |
| `OIDC_CLIENT_SECRET` | The client secret for the gateway in Keycloak. | `dummy-secret` |
| `GATEWAY_SERVICE_PREFIX` | The URL prefix for the gateway. | `/` |
| `GATEWAY_ENDPOINT_CONFIG_FILE` | The JSON definition of the API proxying endpoints. | `endpoints.json` |
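As a rough sketch, a direct Docker invocation would wire these variables in with `-e` flags. Note that the image name, port, and values below are illustrative assumptions, not the published chart defaults:

```shell
# Image name, port, and values are examples only -- adapt to your deployment.
docker run --rm -p 5000:5000 \
  -e HOST_NAME=http://gateway.renku.build \
  -e GATEWAY_SECRET_KEY="$(openssl rand -hex 32)" \
  -e GATEWAY_ALLOW_ORIGIN="*" \
  -e GATEWAY_REDIS_HOST=renku-gw-redis \
  -e GITLAB_URL=http://gitlab.renku.build \
  renku/renku-gateway:latest
```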
To collect the user's tokens from the various backend services, the gateway uses the OAuth2/OIDC protocol and redirects the user to each of them.
To allow server-side sessions, the gateway relies on Redis.
If you want to add more services behind the gateway, you can easily configure the mapping in `endpoints.json` (or point to another configuration file).
This part is still a work in progress to make it plug and play, but the idea is to add the necessary HTTP endpoints for the login/redirect/tokens for the external service and start the process by redirecting from the last service (at the moment Keycloak -> GitLab -> JupyterHub). You can take the `gitlab_auth.py` or `jupyterhub_auth.py` files as examples and implement the `/auth/<your service>/login`, `/auth/<your service>/token` and `/auth/<your service>/logout` endpoints. You can then populate the Redis cache with the collected tokens that identify the user and can be used for authorization towards some API.
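At its core, a `/auth/<your service>/login` endpoint builds the provider's OAuth2 authorization URL and redirects the user there, with the `redirect_uri` pointing back at the gateway's `/auth/<your service>/token` endpoint so the chain can continue. A minimal, framework-free sketch (the helper name, parameters, and authorize path are illustrative, not the gateway's actual API):

```python
from urllib.parse import urlencode

def build_authorize_url(provider_base, client_id, gateway_host, service, state):
    """Build the OAuth2 authorization-code URL a login endpoint redirects to.

    All arguments are illustrative; the real gateway derives them from its
    configuration and the user's session.
    """
    params = {
        "client_id": client_id,
        # Send the provider back to the gateway's token endpoint for this service.
        "redirect_uri": f"{gateway_host}/auth/{service}/token",
        "response_type": "code",  # standard authorization-code flow
        "state": state,           # CSRF protection, echoed back by the provider
    }
    return f"{provider_base}/oauth/authorize?{urlencode(params)}"

url = build_authorize_url(
    "http://gitlab.renku.build", "renku-ui",
    "http://gateway.renku.build", "gitlab", "abc123",
)
print(url)
```

The `/auth/<your service>/token` endpoint would then exchange the returned code for a token and, as described above, store it in Redis before redirecting onwards.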
If your backend API needs a specific authentication/authorization method, you can write an auth processor, like `GitlabUserToken`, `JupyterhubUserToken` or `KeycloakAccessToken`.
By implementing a class extending the base processor, you can pre-process the incoming request and/or the returned response. Have a look at `gitlab_processor.py` as a starting example.
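Conceptually, such a processor wraps the proxied call and rewrites the outgoing request (and, if needed, the response) on the user's behalf. The class and method names below are illustrative stand-ins for the gateway's actual base class, not its real API:

```python
class AuthProcessor:
    """Illustrative base: pre-process the request, post-process the response."""

    def process_request(self, headers: dict, token: str) -> dict:
        return headers  # default: pass the request headers through unchanged

    def process_response(self, body: str) -> str:
        return body  # default: pass the response body through unchanged


class GitlabTokenProcessor(AuthProcessor):
    """Swap the gateway session for the user's GitLab token (cf. GitlabUserToken)."""

    def process_request(self, headers: dict, token: str) -> dict:
        headers = dict(headers)  # copy so we don't mutate the caller's dict
        headers["Authorization"] = f"Bearer {token}"
        return headers


proc = GitlabTokenProcessor()
out = proc.process_request({"Accept": "application/json"}, "xyz")
print(out["Authorization"])  # -> Bearer xyz
```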