# Because the following command points all docker commands in this shell at the
# minikube docker daemon, images built afterwards end up inside minikube.
# You can undo the change by starting a new shell.
eval $(minikube docker-env)
# Then build the images. They should now be present inside minikube.
```
# Run in Production
We run the lab on Google Cloud in production. The hosting platform is not fully settled at the moment. There is still a project with the goal to move it to a cheaper and more open platform.
_So the following steps are specific to GKE._
# Run on Digital Ocean (DO)
We run the lab on Digital Ocean in production. We had test environments on OpenShift, GKE, AWS and OpenStack, so it is possible to run the lab on a different platform.
_So the following steps are specific to DO. You may adapt them for another Kubernetes provider. We run the lab in the same cluster as the geolinker, orgalinker and the tagcloud._
## IP
We need a static IP to connect to the lab.
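How the static IP is provisioned depends on the provider; on DO it is usually the address of the load balancer that Kubernetes creates in front of the cluster. Once the proxy service exists you can look the address up and point the lab's DNS record at it (the service name `lab-proxy` below is a placeholder, not taken from this repository):

```bash
# Show the external IP assigned to the load balancer (service name is a placeholder).
kubectl -n lab get svc lab-proxy \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```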
## Create namespaces
We use a namespace `lab` and a namespace `lab-runner`. In `lab` we host the node-proxy, the node-proxy-frontend and a lab-proxy. Those are the control units that manage the lab. In `lab-runner` we run the containers of the users.
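Both namespaces can be created up front:

```bash
kubectl create namespace lab
kubectl create namespace lab-runner
```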
The API needs access to the Kubernetes API, so we need a secret that provides the tokens for this access.
## Keycloak secrets
We use Keycloak for SSO. In Keycloak we need two clients: one for our JS app and one for our backend servers (node-proxy, lab-proxy). Both client secrets are injected into the containers via k8s secrets.
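A minimal sketch of how such a secret could be created; the secret name, keys and values below are placeholders and have to match what the deployment manifests in this repository expect:

```bash
# Placeholder names: adjust secret name, keys and values to the deployment manifests.
kubectl -n lab create secret generic keycloak-client \
  --from-literal=client-id=lab-backend \
  --from-literal=client-secret=<secret-from-keycloak>
```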
The API container needs access to the Kubernetes API, so we add a service account and role binding that allow the API container to start/stop apps in the `lab-runner` namespace.
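As a sketch (the account and role names are assumptions; the real manifests live in this repository), the service account, role and role binding could be created like this:

```bash
# Placeholder names; restrict the verbs/resources to what the API actually needs.
kubectl -n lab create serviceaccount node-proxy-api
kubectl -n lab-runner create role lab-runner-manager \
  --verb=get,list,watch,create,delete \
  --resource=pods,deployments,services
kubectl -n lab-runner create rolebinding lab-runner-manager \
  --role=lab-runner-manager \
  --serviceaccount=lab:node-proxy-api
```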
The API needs access to Keycloak to verify authentication. We save these credentials in a secret as well.
## acme-issuer
In our cluster we issue Let's Encrypt certificates to secure our connections. Have a look at [deploy-geolinker](https://source.dodis.ch/histhub/deploy-geolinker/wikis/deployment-manual#cert-manager).
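For reference, a cert-manager ACME issuer typically looks like the sketch below; the issuer name, email and solver are placeholders, and the exact apiVersion depends on the installed cert-manager release, so follow the linked manual for the actual setup:

```bash
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.org            # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx                # adjust to the cluster's ingress controller
EOF
```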
For the deployment we need to adjust the config options in the [30-api/40-node-proxy-deployment.yml](30-api/40-node-proxy-deployment.yml) configuration. For documentation of the available options check the [node-proxy project](https://source.dodis.ch/histhub/node-proxy).
### Access to K8s (from the API)
The API needs access to K8s to start and configure the pods for the users, so we need to provide some configuration to allow this access. Be aware: from a security point of view this is not good practice! Perhaps we will find another way, e.g. a service account that provides the credentials to the app.
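The service account alternative mentioned above would roughly mean letting the API pod use an in-cluster token instead of a kubeconfig; a sketch, assuming the deployment and account names from the earlier examples (both are placeholders):

```bash
# Sketch: let the API pod use the in-cluster service account token instead of a kubeconfig.
kubectl -n lab patch deployment node-proxy \
  --patch '{"spec":{"template":{"spec":{"serviceAccountName":"node-proxy-api"}}}}'
```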
#### kubeconfig secret
This secret provides the kubeconfig for the k8s client.
```bash
# Create the kubeconfig secret (the secret name is a placeholder; use the name the
# API deployment expects). You also need to include all the files linked in the
# config, e.g. certificates and so on.
kubectl -n lab create secret generic kubeconfig --from-file=config=$HOME/.kube/config
```
We need to authenticate with Google via an auth service. See also https://github.com/googleapis/google-auth-library-nodejs#choosing-the-correct-credential-type-automatically
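In practice this usually means providing a service-account key to the container, e.g. via a secret and the `GOOGLE_APPLICATION_CREDENTIALS` environment variable; the secret name, key file and mount path below are placeholders:

```bash
# Placeholder names: create a secret from a GCP service-account key and mount it into
# the API pod, then point GOOGLE_APPLICATION_CREDENTIALS at the mounted file.
kubectl -n lab create secret generic gcp-service-account \
  --from-file=key.json=./gcp-service-account-key.json
# In the deployment: mount the secret (e.g. at /var/secrets/google) and set
#   GOOGLE_APPLICATION_CREDENTIALS=/var/secrets/google/key.json
```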
The frontend provides a simple UI to start and stop containers. If you change the domain, you may need to adjust the environments in the [node-proxy-frontend](https://source.dodis.ch/histhub/node-proxy).
```bash
kubectl -n lab apply -f do/30-api/
```
### Bugs
The current GKE engine has a bug when taking over the readiness probe from the container. It tries to get a 200 status code from a request to `/`, but we redirect to Keycloak. For that reason we defined a readiness probe, but GKE ignores it.
Log in to https://console.cloud.google.com/compute/healthChecks/ and modify the health check's request path from `/` to `/ping`.
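The same change can presumably be made from the command line; the health check name is a placeholder you have to look up first, and legacy checks created by older GKE ingresses live under `gcloud compute http-health-checks` instead:

```bash
# List the health checks created by GKE for the ingress, then change the request path.
gcloud compute health-checks list
gcloud compute health-checks update http <health-check-name> --request-path=/ping
```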
## Add cert manager
The cert-manager is a Helm chart that provides a simple integration of Let's Encrypt into k8s. It's not so simple... The main problem is that GKE uses a custom ingress controller. Let's Encrypt will create its own controller on a different IP and then the validation doesn't work. That's why we don't use annotations on the ingress config, but first get the cert and then add it to the ingress. You need to issue a cert for all your services.
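As a sketch of this "issue first, then reference" approach (all names and hostnames below are placeholders), a Certificate resource produces a TLS secret that the ingress can then reference in its `tls` section:

```bash
kubectl -n lab apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: lab-tls
spec:
  secretName: lab-tls            # referenced later in the ingress "tls" section
  issuerRef:
    name: letsencrypt-prod       # the issuer from the acme-issuer section
    kind: ClusterIssuer
  dnsNames:
    - lab.example.org            # placeholder hostname
EOF
```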
## Proxy
The proxy handles connections and authorisation for the user containers.