# Geolinker common
This simple lib just bundles common modules for the geolinker. Through a core instance you can access kafka-consumers, kafka-producers, reporter, logger, config and uriBuilder. These tools are used in most of the streaming apps and allow us to stop replicating code - at least a bit.
## Config
All streaming apps provide a common config-file. In this file you can configure the different modules and then access an instance through the core. The config makes use of the [nconf](https://www.npmjs.com/package/nconf) module. A typical config looks like this:
```json
{
"kafka": {
......@@ -50,7 +50,7 @@ All streaming apps provides a common config-file. Over this file you can configu
```
* broker = A string or an array of kafka brokers
* schema-registry = The schema registry url with protocol
* fetchAllVersions = Fetch all versions of the schema or just the oldest one
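For illustration, a kafka block covering just the options above might look like this (the broker address and registry URL are placeholders, not defaults of the lib):
```json
{
  "kafka": {
    "broker": "kafka-1:9092",
    "schema-registry": "http://schema-registry:8081",
    "fetchAllVersions": false
  }
}
```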
### kafka producer config
You can configure several producers in one file. You need to prefix the config with the name of the producer. In the following example you configure an `example` producer.
```json
{
  ...
  }
}
```
If you just need a single producer for the app (common case), you can omit the prefix and just go with the following config:
```json
{
"producer": {
......@@ -89,9 +89,9 @@ If you just need a single producer for the app (common case), you can leave the
```
* config = An object with producer config options from librdkafka
* batch.num.messages = Number of messages in a batch. If this is too big, the app will run out of memory
* message.send.max.retries = Retries for unsuccessful writes
* retry.backoff.ms = Backoff time for retries
* message.max.bytes = Max length of the message
* topic = A string or an array of names for topics
* partition = The number of the partition. -1 will autoselect the partition
* author: The name of the author of a message (rarely used)
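Pulling the options above together, a single-producer block might look roughly like this (topic name and values are illustrative placeholders, not defaults from the lib):
```json
{
  "producer": {
    "config": {
      "batch.num.messages": 1000,
      "message.send.max.retries": 5,
      "retry.backoff.ms": 200,
      "message.max.bytes": 1000000
    },
    "topic": "example-topic",
    "partition": -1,
    "author": "example-app"
  }
}
```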
### kafka consumer config
Consumers are configured the same way: prefix the config with the name of the consumer, or omit the prefix if you just need a single consumer for the app (common case).
* group.id = The id of the consumer group
* socket.keepalive.enable = Keep the socket open
* enable.auto.commit = Commit the offset automatically or manually
* queued.max.messages.kbytes = Max size of the locally queued messages in kbytes
* topic = An object of topics (we should refactor this)
* stream
* request.required.acks: Should the kafka producer wait for acks
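As a sketch only: assuming the consumer block mirrors the producer block's nesting (a `config` object for the librdkafka options, plus `topic` and `stream`), a single-consumer config could look like this. The exact nesting and the shape of the topic object are assumptions, and all values are placeholders:
```json
{
  "consumer": {
    "config": {
      "group.id": "example-group",
      "socket.keepalive.enable": true,
      "enable.auto.commit": true,
      "queued.max.messages.kbytes": 100
    },
    "topic": {
      "topics": "example-topic"
    },
    "stream": {
      "request.required.acks": 1
    }
  }
}
```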
### Log (winston) config
As a logger we use [winston](https://www.npmjs.com/package/winston). We use two transports to deliver logs: one to stdout and one into a file. For the file we need a configured log-dir.
```json
{
"log-dir": "/tmp"
}
```
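Conceptually the wiring looks like the sketch below (winston 3-style API; the real setup lives inside the lib, and the log filename is an assumption):
```js
const path = require('path');
const winston = require('winston');

// Two transports as described above: one to stdout, one into a file
// inside the configured log-dir.
const logDir = '/tmp'; // value of the "log-dir" config key
const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: path.join(logDir, 'app.log') }),
  ],
});

logger.info('logger is wired up');
```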
Todo: We should delete the file transport. This creates conflicts in containers.
### Reporter config (deprecated)
The reporter is deprecated. We will delete it in the next version.
The status reporter needs to get an address from the config to report to.
```json
{
"reporter": {
......@@ -160,12 +162,14 @@ The status reporter need to get an address from the config to report.
* url: The url of the reporter
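Filled in, the block is just this (the URL is a placeholder):
```json
{
  "reporter": {
    "url": "http://reporter.example.com"
  }
}
```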
## Core
The `core` is a container with instances or factories for the services. You can inject the core into a class to gain access to the services, so you get one instance across the project.
```js
const core = Core.init(name, configFilePath);
```
To init the core you need to provide the name of the app and the path to the config file.
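A class that needs the services can then take the core as a constructor argument instead of creating its own instances. The class name, the app name, the config path and the `getLogger()` accessor below are assumptions for illustration, not part of the documented API:
```js
const core = Core.init('example-app', './config.json');

// Hypothetical consumer of the core: it only depends on the injected
// container, so the whole app shares the same service instances.
class ExampleWorker {
  constructor(core) {
    this.core = core;
    this.logger = core.getLogger(); // accessor name is an assumption
  }

  run() {
    this.logger.info('worker started');
  }
}

new ExampleWorker(core).run();
```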
## Reporter (deprecated)
The reporter is deprecated. We will delete it in the next version.
The reporter collects some basic metrics about the app. It helps us to debug and to see the throughput of the app.
```js
const core = Core.init(name, configFilePath);
// ...
core.getReporter(10000, 'example').setDataIn(10);
core.getReporter('example').setDataOut(5);
```
## Kafka
The kafka class uses the [kafka-avro](https://github.com/waldophotos/kafka-avro) module to provide kafka and [avro schemas](https://avro.apache.org/docs/1.8.1/spec.html). We just unified the method to configure and init kafka, consumers and producers.
The class gets the configuration from `nconf`. There we can configure all details for each consumer and producer.
```js
const core = Core.init(name, configFilePath);
// get a kafka instance with some basic config
// ...
kafka.init().then(async () => {
  // ...
});
```
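For context, this is roughly how the underlying kafka-avro module is used on its own; the wrapper essentially moves this setup behind the config file. Broker, registry URL, topic and payload below are placeholders:
```js
const KafkaAvro = require('kafka-avro');

const kafkaAvro = new KafkaAvro({
  kafkaBroker: 'kafka-1:9092',                   // placeholder broker
  schemaRegistry: 'http://schema-registry:8081', // placeholder registry url
});

kafkaAvro.init()
  .then(() => kafkaAvro.getProducer({ 'batch.num.messages': 1000 }))
  .then((producer) => {
    // producer is a node-rdkafka producer; the value is avro-serialized
    // against the schema registered for the topic.
    producer.produce('example-topic', -1, { id: 1 }, 'example-key');
  });
```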
## Logger
We use [winston](https://www.npmjs.com/package/winston) as a logger module. To get access to the same logger instance across the app, we init and provide it in the core.
```js
const core = Core.init(name, configFilePath);
// get the logger instance
// ...
```

## Streams
The lib also ships a Transformable stream to report throughput to the reporter, plus the streams below.
### Stream StreamProducerPreparer
A Transformable stream to prepare a message for kafka-avro
### Stream Debug
A Transformable stream to log the content of the stream to console.
### Stream StreamFilterTransformer
A transformable stream to manipulate data with a simple callback
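These streams build on Node's `stream.Transform`. The sketch below shows the generic pattern of such a callback-driven transform; it is not the lib's actual StreamFilterTransformer API, and the field name in the usage comment is made up:
```js
const { Transform } = require('stream');

// Generic callback-driven transform, similar in spirit to the streams above:
// the callback decides how (or whether) a chunk is passed on.
const filterTransformer = (callback) => new Transform({
  objectMode: true,
  transform(chunk, encoding, done) {
    const result = callback(chunk);
    if (result !== undefined) {
      this.push(result); // keep transformed chunks
    }
    done(); // chunks the callback rejected are simply dropped
  },
});

// Usage: only pass on records that carry a geoname id (field name is hypothetical)
// source.pipe(filterTransformer((record) => record.geonameId ? record : undefined)).pipe(sink);
```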
## Tests
There are some tests (`npm run test`) for the lib, but we need to improve them and let them cover the full lib.