Last updated: 2020-12-28 14:55:46

klog-filebeat overview

Filebeat is an open source log collector developed by Elastic. You can deploy Filebeat on a server to collect the server's logs and ship them to a specified system, such as Elasticsearch, Kafka, or Logstash. Kingsoft Cloud extends Filebeat with additional features and provides klog-filebeat, an agent tailored for KLog. klog-filebeat provides the following features:

  • Exports logs to KLog.
    • Exports logs to multiple projects and LogPools.
    • Allows you to configure a filter for each LogPool to export only logs that meet conditions to the specified LogPool.
    • Allows you to specify the fields to export.
    • Dynamically loads the AccessKeyID and SecretAccessKey.
  • Uses Grok to parse logs and convert text to JSON objects.

For more information about the features of open source Filebeat, see the Filebeat documentation.

Download klog-filebeat

  • Click here to download the latest version of klog-filebeat for Linux.
  • You can also run the following command on your server to download klog-filebeat:
wget "https://ks3-cn-beijing.ksyun.com/klog/filebeat/klog-filebeat.latest.tar.gz"

Install klog-filebeat

Decompress the package to a directory of your choice.

tar xvf klog-filebeat.latest.tar.gz

Run klog-filebeat

Run the following command to start klog-filebeat:

./filebeat -e

Configure klog-filebeat

klog-filebeat uses the following two configuration files: filebeat.yml and credential.ini. Both files are provided in the klog-filebeat package that you download. Decompress the package to obtain the two files.

credential.ini

You need to configure the AccessKeyID and SecretAccessKey of your Kingsoft Cloud account in this file. Example:

access_key = AKLT6-bcdesfg-ajsIjfiejI9
secret_key = Jf2390j9E9finfiDIOFJFD8483+dfsDVdOCTFazCnDEAw+mxA/7Lfeh3ugErwoKKb5wNOIei==
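Because credential.ini holds your account secrets, it is worth restricting the file to its owner. A minimal sketch (the key values below are placeholders, not real credentials):

```shell
# Create the credential file with placeholder values and restrict access
# to the file owner, since it contains account secrets.
cat > credential.ini <<'EOF'
access_key = YOUR_ACCESS_KEY_ID
secret_key = YOUR_SECRET_ACCESS_KEY
EOF
chmod 600 credential.ini
```

klog-filebeat periodically checks the file's modification time (see check_interval below), so you can rotate keys by editing this file without restarting the agent.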

filebeat.yml

This is the main configuration file. Example:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    # The log file to be collected.
    - /var/log/*.log

output.klog:
  endpoint: https://klog-cn-beijing.ksyun.com
  region: BEIJING
  credential:
    path: credential.ini
    check_interval: 60
  targets:
    - project_name: yourProjectName
      pool_name: yourPoolName
      send_fields:
        - key: message

For more information about the output.klog configuration, see the following complete sample filebeat.yml file. For more information about other configurations, see the Filebeat documentation.

When you export data to KLog, the modules feature of Filebeat is unavailable.

Complete filebeat.yml file

The following code shows a complete sample filebeat.yml file and describes the configuration items.

# ------------------------------ Filebeat Input ----------------------------
filebeat.inputs:
- type: log
  enabled: true

  # The log file to be collected.
  paths:
    - /your/app/log/path.log

  # If your logs are printed in the JSON format, you can set json.keys_under_root to true or false. klog-filebeat automatically parses the JSON content.
  # If you set this field to false, the parsed map object is stored in the json field of the filebeat event object.
  # If you set this field to true, the fields of the parsed map object are directly stored in the filebeat event object.
  # If you delete this field, your logs are stored in text in the message field of the filebeat event object.
  json.keys_under_root: false
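  # For example, for the JSON log line {"level":"info","msg":"started"}:
  #   false: the event contains the fields json.level and json.msg
  #   true:  the event contains the top-level fields level and msg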

# ------------------------------ KLog Output -------------------------------

# The configuration specific to klog-filebeat, which tells klog-filebeat to send collected logs to the KLog server.
output.klog:
  # The KLog endpoint for receiving logs. If an internal endpoint is used, use HTTP.
  endpoint: https://klog-cn-beijing.ksyun.com
  region: BEIJING

  # The maximum number of logs sent at a time. Default value: 2048.
  bulk_max_size: 2048

  # The number of workers used to send logs. Default value: 1.
  worker_num: 1

  # The mode in which logs are compressed. Default value: lz4. To disable compression, set this field to none.
  compress_method: lz4

  # The file that contains the AccessKeyID and SecretAccessKey of your Kingsoft Cloud account.
  credential:
    # The file path.
    path: credential.ini

    # The interval of checking the .ini file modification time. Unit: seconds. If the .ini file is modified, it is reloaded.
    check_interval: 60

  # The KLog projects and LogPools that receive logs. Each target represents a pair of project and LogPool.
  # You can set multiple targets and configure the same project and LogPool in different targets.
  targets:
      # The name of the project.
    - project_name: yourProjectName

      # Use the value of the yourField1 field as the name of the project.
      # Set either project_name or project_name_from_field. If you set both, project_name_from_field prevails.
      # project_name_from_field: yourField1

      # The name of the LogPool.
      pool_name: yourPoolName

      # Use the value of the yourField2 field as the name of the LogPool.
      # Set either pool_name or pool_name_from_field. If you set both, pool_name_from_field prevails.
      # pool_name_from_field: yourField2

      # The fields to be sent. You can specify multiple fields. If you do not specify any field, all fields of an event are sent.
      send_fields:
        # The name of the field to be sent. The period (.) is not allowed in a field name.
        - key: message

          # If a field is of the map type, you can set skip_root to true to send only the descendant fields of the field. Default value: false.
          skip_root: false

      # The filters. You can set multiple filters. Logs that match all the filters are sent to this target. If no filter is configured, all logs are sent.
      filters:
          # The field to filter.
        - key: message

          # The threshold. The value can be a string or number. All numbers are converted to float64 internally.
          value: ""

          # The relational operator. Supported operators: >, >=, <, <=, ==, !=, exists, not_exists, contains (for strings only), and not_contain (for strings only).
          operator: "!="
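          # For example, to keep only server errors from JSON logs that
          # carry a numeric status field (hypothetical field name), you
          # could instead use:
          #   - key: json.status
          #     value: 500
          #     operator: ">="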

# ------------------------------ New Processors of KLog --------------------------
processors:
  # The grok processor can parse text logs to JSON objects.
  # Grok is based on regular expressions, so enabling this processor increases the resource consumption of klog-filebeat.
  - grok:
      # The field to parse.
      source_field: message

      # The field to which the parsing result is stored.
      target_field: yourField3

      # The expression used for parsing.
      pattern: "%{IPORHOST:client} %{WORD:method} %{URIPATHPARAM:request} %{INT:size} %{NUMBER:duration}"
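      # For example, the pattern above parses the log line
      #   55.3.244.1 GET /index.html 15824 0.043
      # into the fields client, method, request, size, and duration.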

      # Optional. The custom expression used for parsing.
      pattern_definitions:
        MYPATTERN: '\d+'
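As a sketch of how a custom expression might be used (assuming custom definitions behave as in standard Grok, where a defined name can be referenced in `pattern` like a built-in; the log format below is hypothetical):

```yaml
  - grok:
      source_field: message
      target_field: timing
      # MYPATTERN is the custom expression '\d+' defined above and is
      # referenced here like a built-in pattern.
      pattern: "request took %{MYPATTERN:millis} ms"
```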

filebeat event

The filebeat event is an important concept in the data processing of Filebeat.

Each log collected by Filebeat is represented as an event in Filebeat. Filebeat reads, converts, and adds log fields by operating on the fields of the event. The following code shows a simple event:

{
	"@timestamp": "2020-09-21T03:08:07.682Z",
	"@metadata": {
		"beat": "filebeat",
		"type": "_doc",
		"version": "7.9.1"
	},
	"log": {
		"offset": 423019,
		"file": {
			"path": "/path/to/your.log"
		}
	},
	"message": "2020-09-21 03:08:07.682 [INFO][47] ipsets.go 304: Finished resync family=\"inet\" numInconsistenciesFound=0 resyncDuration=2.18885ms",
	"input": {
		"type": "log"
	},
	"ecs": {
		"version": "1.5.0"
	},
	"host": {
		"name": "filebeat-cwssh"
	},
	"agent": {
		"name": "filebeat-cwssh",
		"type": "filebeat",
		"version": "7.9.1",
		"hostname": "filebeat-cwssh",
		"ephemeral_id": "decf7010-9f78-41f7-85b6-7a0cc5ba4115",
		"id": "7b2bc79a-9da8-4038-ba55-993af4d9ac71"
	}
}

Filebeat uses the message field to store raw log content. When you configure the filebeat.yml file, you can specify event fields in formats such as message, log.file.path, host.name, and json.field1.
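For example, this nested-field syntax can be used when routing logs. The following sketch sends only logs collected from one specific file; the project name, LogPool name, and file path are placeholders, and the target and filter fields follow the sample filebeat.yml above:

```yaml
output.klog:
  targets:
    - project_name: yourProjectName
      pool_name: yourPoolName
      # Keep only events read from this file (placeholder path).
      filters:
        - key: log.file.path
          value: "/var/log/app/access.log"
          operator: "=="
```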
