Commit 17442df0 authored by tobinski

Upgrade deployment for digital ocean

parent 8fec9893
Pipeline #4364 failed with stages in 1 minute and 22 seconds
.idea/
do/secrets/
# Run on Digital Ocean (DO)
We run the lab on Digital Ocean in production. We also had test environments on OpenShift, GKE, AWS and OpenStack, so it is possible to run the lab on a different platform.
_The following steps are specific to DO. You may adapt them for another Kubernetes provider. We run the lab in the same cluster as the geolinker, the orgalinker and the tagcloud._
## Create namespaces
We use a namespace `lab` and a namespace `lab-runner`. In `lab` we host the node-proxy, the node-proxy-frontend and the lab-proxy; these are the control units that manage the lab. In `lab-runner` we run the users' containers.
```bash
kubectl apply -f do/10-namespace/namespace.yml
```
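A quick check that both namespaces were created (illustrative only):
```bash
# both namespaces should be listed as Active
kubectl get namespace lab lab-runner
```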
## Pull Secret
We need several secrets to deploy the app. First of all we need access to our registry; for that we generate a pull secret:
```bash
kubectl -n lab create secret docker-registry dodis \
  --docker-server=source.dodis.ch:4577 \
  --docker-username=tobias.steiner@dodis.ch \
  --docker-password=<your-pword> \
  --docker-email=tobias.steiner@dodis.ch
```
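The deployments reference this secret by name via `imagePullSecrets`; as a reminder, a pod spec that pulls from our registry looks roughly like this excerpt (the `dodis` name matches the secret created above):
```yaml
spec:
  containers:
  - name: api
    image: source.dodis.ch:4577/histhub/node-proxy:latest
  imagePullSecrets:
  - name: dodis
```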
## Keycloak secrets
We use Keycloak for SSO. In Keycloak we need two clients, one for our js-app and one for our backend servers (node-proxy, lab-proxy). Both secrets are injected into the containers via k8s secrets:
```bash
# backend
kubectl -n lab create secret generic keycloak-config --from-file=keycloak.json=/tmp/keycloak-backend.json
# frontend
kubectl -n lab create secret generic keycloak-config-frontend --from-file=keycloak.json=/tmp/keycloak-frontend.json
```
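The two keycloak.json files are typically the client adapter configs taken from the corresponding Keycloak clients; they are not part of this repository. Once created, a quick sanity check:
```bash
# both secrets should exist in the lab namespace
kubectl -n lab get secret keycloak-config keycloak-config-frontend
```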
## Service account
The api-container needs access to the Kubernetes API, so we add a service account and role bindings that allow the api-container to start and stop apps in the `lab-runner` namespace.
```bash
kubectl -n lab apply -f do/20-serviceaccount/*
```
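Before deploying the api you can verify the bindings by impersonating the service account (`lab-sa`, as defined in `do/20-serviceaccount/`):
```bash
# should answer "yes" once the role bindings are applied
kubectl -n lab-runner auth can-i create deployments \
  --as=system:serviceaccount:lab:lab-sa
```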
## acme-issuer
In our cluster we use Let's Encrypt to secure our connections. Have a look at [deploy-geolinker](https://source.dodis.ch/histhub/deploy-geolinker/wikis/deployment-manual#cert-manager).
```bash
# you need to serve the charts locally (https://source.dodis.ch/histhub/charts)
helm upgrade --install lab-issuer local/acme-issuer -f vars/acme-issuer-values.yaml
```
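One way to serve the charts locally is Helm 2's built-in chart server; this is only a sketch and assumes the chart lives at `charts/acme-issuer` in the charts repository:
```bash
# package the chart and serve it as the "local" repo (Helm 2)
git clone https://source.dodis.ch/histhub/charts.git
helm package charts/acme-issuer -d ./chart-repo
helm repo index ./chart-repo
helm serve --repo-path ./chart-repo &
helm repo add local http://127.0.0.1:8879/charts  # skip if a local repo is already configured
```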
## Api
For the deployment we need to adjust the config options in the [30-api/40-node-proxy-deployment.yml](30-api/40-node-proxy-deployment.yml) configuration. For documentation of the available options check the [node-proxy project](https://source.dodis.ch/histhub/node-proxy).
```bash
kubectl -n lab apply -f do/30-api/*
```
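After applying the manifests, wait for the api to become ready (the deployment is named `api`):
```bash
# blocks until the node-proxy deployment has rolled out
kubectl -n lab rollout status deployment/api
```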
## Frontend
The frontend provides a simple UI to start and stop containers. If you change the domain you may need to adjust the environments in the [node-proxy-frontend](https://source.dodis.ch/histhub/node-proxy).
```bash
kubectl -n lab apply -f do/30-api/*
```
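A quick check that the frontend came up alongside the api:
```bash
# list the control components running in the lab namespace
kubectl -n lab get pods
```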
## Proxy
The proxy handles the connection to and the authorisation of the user containers.
```bash
kubectl -n lab apply -f do/40-lab-proxy/*
```
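Once everything is applied, the ingresses show under which hosts the api, frontend and proxy are exposed and which TLS secrets they use:
```bash
kubectl -n lab get ingress
```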
## Todo:
* create helm chart
---
# volume for the mongo data
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
name: mongo-claim0
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
status: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
io.kompose.service: api
name: api
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: api
spec:
containers:
- name: api
# for local usage with minikube
image: source.dodis.ch:4577/histhub/node-proxy:latest
imagePullPolicy: Never
resources:
limits:
memory: "512Mi"
cpu: "0.3"
env:
- name: COOKIE_DOMAIN
value: 192.168.99.100
- name: DOMAIN
value: histhub.localhost
- name: CORS_ALLOW_ORIGIN
value: 'http://frontend.127.0.0.1.nip.io:4200'
- name: DATABASE_SERVICE_NAME
value: mongo-svc
- name: MONGODB_USER
value:
- name: MONGODB_PASSWORD
value:
- name: MONGODB_DATABASE
value: histhub
- name: DRIVER
value: kubernetes
- name: DNS_RESOLVER
value: dnsResolverkube
- name: NAMESPACE
value: default
# imagePullSecrets:
# - name: dodis
restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
name: api-svc
spec:
type: "NodePort"
ports:
- port: 80
targetPort: 3000
selector:
io.kompose.service: api
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: "false"
name: api-ingress
spec:
rules:
- host: api.192.168.99.100.nip.io
http:
paths:
- path: /
backend:
serviceName: api-svc
servicePort: 80
---
apiVersion: v1
kind: Namespace
metadata:
name: lab
---
apiVersion: v1
kind: Namespace
metadata:
name: lab-runner
---
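# service account used by the api (node-proxy) to start and stop the user apps in the lab-runner namespace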
apiVersion: v1
kind: ServiceAccount
metadata:
name: lab-sa
namespace: lab
---
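# the roles below grant create/delete/patch/list on the resources the user apps need
# (deployments, persistentvolumeclaims, services, ingresses) in lab-runner; the cluster role additionally covers persistentvolumes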
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: lab-deployment
namespace: lab-runner
rules:
- apiGroups:
- "apps"
resources:
- "deployments"
verbs:
- "create"
- "delete"
- "patch"
- "list"
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: lab-pvc
namespace: lab-runner
rules:
- apiGroups:
- ""
resources:
- "persistentvolumeclaims"
verbs:
- "create"
- "delete"
- "patch"
- "list"
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: lab-svc
namespace: lab-runner
rules:
- apiGroups:
- ""
resources:
- "services"
verbs:
- "create"
- "delete"
- "patch"
- "list"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: lab-ingress
namespace: lab-runner
rules:
- apiGroups:
- "networking.k8s.io"
resources:
- "ingresses"
verbs:
- "create"
- "delete"
- "patch"
- "list"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: lab-pv
rules:
- apiGroups:
- ""
resources:
- "persistentvolumes"
verbs:
- "create"
- "delete"
- "patch"
- "list"
---
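# bind each role (and the persistentvolumes cluster role) to the lab-sa service account from the lab namespace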
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: lab-deployment
namespace: lab-runner
roleRef:
kind: Role
name: lab-deployment
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: lab-sa
namespace: lab
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: lab-pvc
namespace: lab-runner
roleRef:
kind: Role
name: lab-pvc
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: lab-sa
namespace: lab
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: lab-svc
namespace: lab-runner
roleRef:
kind: Role
name: lab-svc
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: lab-sa
namespace: lab
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: lab-ingress
namespace: lab-runner
roleRef:
kind: Role
name: lab-ingress
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: lab-sa
namespace: lab
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: lab-pv
roleRef:
kind: ClusterRole
name: lab-pv
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: lab-sa
namespace: lab
@@ -10,5 +10,5 @@ spec:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
status: {}
---
# the mongo DB deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: mongo
name: mongo
spec:
replicas: 1
@@ -28,7 +14,7 @@ spec:
metadata:
creationTimestamp: null
labels:
app: mongo
spec:
containers:
- name: api-mongo
@@ -49,15 +35,3 @@ spec:
persistentVolumeClaim:
claimName: mongo-claim0
status: {}
---
apiVersion: v1
kind: Service
metadata:
name: mongo-svc
spec:
ports:
- port: 27017
targetPort: 27017
selector:
app: mongo
@@ -3,7 +3,7 @@ apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: api
name: api
spec:
replicas: 1
@@ -11,14 +11,14 @@ spec:
type: Recreate
template:
metadata:
labels:
app: api
spec:
serviceAccountName: lab-sa
containers:
- name: api
imagePullPolicy: Always
image: source.dodis.ch:4577/histhub/node-proxy:14c7ae5f9da41c1a0d27c518a6560f42b11c6e3f
resources:
limits:
memory: "512Mi"
@@ -43,18 +43,14 @@ spec:
- name: DNS_RESOLVER
value: dnsResolverkube
- name: NAMESPACE
value: lab-runner
- name: APP_CHECK_INTERVAL
value: "3600000"
- name: NAMESPACE_APPS
value: "lab-runner"
- name: NODE_ENV
value: "production"
volumeMounts:
- name: "keycloak-config"
mountPath: "/app/keycloak.json"
subPath: keycloak.json
@@ -73,13 +69,8 @@ spec:
- name: dodis
restartPolicy: Always
volumes:
- name: "keycloak-config"
secret:
secretName: "keycloak-config"
status: {}
@@ -4,9 +4,8 @@ kind: Service
metadata:
name: api-svc
spec:
ports:
- port: 80
targetPort: 3000
selector:
app: api
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: node-proxy-api
annotations:
cert-manager.io/issuer: letsencrypt-prod
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/tls-acme: "true"
spec:
tls: