# Elasticsearch
This is a custom Elasticsearch image that uses the official Elastic image as a base and adds the S3 repository plugin for snapshots. The image follows the plugin-installation best practice from [elastic.co](https://github.com/elastic/helm-charts/blob/master/elasticsearch/README.md#how-to-install-plugins).

## Build
```bash
docker build --build-arg elasticsearch_version=7.9.3 .
```
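
If the image should be pulled by the cluster, it is convenient to tag and push it in the same step. The registry and tag below are placeholders, not part of this repo; adjust them to your setup.

```bash
# hypothetical registry and tag -- replace with your own
docker build --build-arg elasticsearch_version=7.9.3 \
  -t registry.example.com/elasticsearch-s3:7.9.3 .
docker push registry.example.com/elasticsearch-s3:7.9.3
```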

## Spaces, the S3 of DigitalOcean
To store snapshots we use Spaces, DigitalOcean's S3-compatible object storage. Log in to your DO dashboard and create a new Space. In the Space settings, disable file listing for anonymous users. Under the API menu item you can create a new access key and secret key for Spaces; this key pair is required for the backup.
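
Before wiring the key pair into Elasticsearch, you can sanity-check it against the Space with any S3-compatible client. The sketch below assumes the AWS CLI is installed and uses the bucket name and `fra1` endpoint from the repository config further down.

```bash
# export the Spaces key pair created above
export AWS_ACCESS_KEY_ID='<your spaces access key>'
export AWS_SECRET_ACCESS_KEY='<your spaces secret key>'

# list the Space through the DigitalOcean endpoint to confirm the keys work
aws s3 ls s3://metagrid-backup-es-7 --endpoint-url https://fra1.digitaloceanspaces.com
```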

## Snapshot key
We use the Elasticsearch keystore to store the S3 access key and secret, as [recommended by elastic](https://github.com/elastic/helm-charts/blob/master/elasticsearch/README.md#how-to-use-the-keystore). Check the deployment of the es-cluster to see how we use the secrets.

```bash
# paste the Spaces access key and secret created above between the quotes
kubectl create secret generic s3-access-key --from-literal=s3.client.default.access_key=''
kubectl create secret generic s3-secret-key --from-literal=s3.client.default.secret_key=''
```
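
Once the pods are running with these secrets mounted as keystore entries, you can check that the keys actually landed in the Elasticsearch keystore. The pod name below assumes the chart's default `elasticsearch-master` naming; adjust it to your release.

```bash
# list the keystore entries inside a running node; expect the two s3.client.default.* keys
kubectl exec -it elasticsearch-master-0 -- bin/elasticsearch-keystore list
```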

## Configure the S3 snapshot repository
To use the [s3 plugin](https://www.elastic.co/guide/en/elasticsearch/plugins/6.8/repository-s3-repository.html) with DigitalOcean we need to configure an S3 repository for the backups. We use the official Helm chart; check out the documentation on [snapshots](https://github.com/elastic/helm-charts/tree/master/elasticsearch#how-to-enable-snapshotting).
We need to set the chunk and buffer sizes so that we do not trigger [DO's rate limit](https://discuss.elastic.co/t/snapshot-to-aws-s3-fails-es-1-5-2-aws-cloud-plugin-2-5-1/49541/2).
```bash
kubectl port-forward service/elasticsearch-master-headless 9200:9200
curl --location --request PUT 'localhost:9200/_snapshot/metagrid-backup-es-7' \
--header 'Content-Type: application/json' \
--data-raw '{
  "type": "s3",
  "settings": {
    "endpoint": "fra1.digitaloceanspaces.com",
    "bucket": "metagrid-backup-es-7",
    "chunk_size": "1gb",
    "max_restore_bytes_per_sec": "8000mb",
    "max_retries": "30",
    "buffer_size": "100mb",
    "max_snapshot_bytes_per_sec": "1000mb"
  }
}'
```
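
After creating the repository you can ask Elasticsearch to confirm that all nodes can reach the Space. Both calls below are standard snapshot APIs and assume the port-forward from above is still running.

```bash
# show the stored repository settings
curl --location --request GET 'localhost:9200/_snapshot/metagrid-backup-es-7'

# ask every node to verify access to the repository
curl --location --request POST 'localhost:9200/_snapshot/metagrid-backup-es-7/_verify'
```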

## Take an S3 snapshot
Once the repository is configured, take a snapshot. The URL below is the URL-encoded form of the date-math name `<snapshot-{now/d}>`, so each snapshot is named after the current date.
```bash
kubectl port-forward service/elasticsearch-master-headless 9200:9200
curl --location --request PUT 'localhost:9200/_snapshot/metagrid-backup-es-7/%3Csnapshot-%7Bnow%2Fd%7D%3E' \
--header 'Content-Type: application/json'
```
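
If you prefer the request to block until the snapshot is done (handy in scripts or cron jobs), the snapshot API accepts a `wait_for_completion` query parameter:

```bash
curl --location --request PUT 'localhost:9200/_snapshot/metagrid-backup-es-7/%3Csnapshot-%7Bnow%2Fd%7D%3E?wait_for_completion=true' \
--header 'Content-Type: application/json'
```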

## Check snapshot status
You may want to know when the snapshot is finished.
```bash
curl --location --request GET 'localhost:9200/_snapshot/metagrid-backup-es-7/%3Csnapshot-%7Bnow%2Fd%7D%3E' \
--header 'Content-Type: application/json'
```
You can also log in to the DigitalOcean dashboard and check the Space for new files.
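
For completeness, a sketch of how a restore would look. Listing the repository shows the resolved snapshot names; the restore call is the standard `_restore` API, and the snapshot name below is only a placeholder. Note that a restore fails if an open index with the same name already exists in the cluster.

```bash
# list all snapshots in the repository to get the resolved names
curl --location --request GET 'localhost:9200/_snapshot/metagrid-backup-es-7/_all'

# restore a snapshot (replace snapshot-2020.11.01 with a name from the listing above)
curl --location --request POST 'localhost:9200/_snapshot/metagrid-backup-es-7/snapshot-2020.11.01/_restore'
```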