Installation
With our solution for ElasticSearch and OpenSearch, you can start forwarding logs from your clusters in under 10 minutes, including forwarding metadata-enriched container logs, host logs, and audit logs. You can request an evaluation license that is valid for 30 days.
Install Collectord for Kubernetes / OpenShift
Note for clusters running Collectord for Splunk
If you are already running a Collectord instance on your clusters with Splunk output, there are some important details to consider:
- Collectord for ElasticSearch deploys in the same namespace collectorforkubernetes, and it does not affect the existing Collectord instance for Splunk output in any way.
- By default, Collectord for ElasticSearch deploys with annotationSubdomain set to elasticsearch, which means that it will not pick up annotations defined for Splunk with the collectord.io prefix, only annotations defined as elasticsearch.collectord.io (see the short sketch after this list).
- The licensing mechanism is the same for both Collectord instances, so you can use the same license key for both. Running multiple Collectord instances on the same cluster does not multiply your licensing usage.
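For illustration, a pod carrying both annotation prefixes could look like the sketch below; the annotation name logs-index and its values are hypothetical placeholders, see the Annotations page for the actual annotation names.

apiVersion: v1
kind: Pod
metadata:
  name: example
  annotations:
    # hypothetical annotation name, for illustration only:
    # read by the Collectord instance configured for Splunk output
    collectord.io/logs-index: 'kubernetes'
    # read by this instance, because annotationSubdomain = elasticsearch
    elasticsearch.collectord.io/logs-index: 'logs-collectord'
spec:
  containers:
    - name: example
      image: busybox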
Installation
1. Download configuration
Use the latest Kubernetes configuration file collectorforkubernetes-elasticsearch.yaml. This configuration deploys multiple workloads under the collectorforkubernetes namespace.
2. Configure Collectord
Open it in your favorite editor and specify one or more ElasticSearch hosts for ingestion. Review and accept the license agreement and include the license key (request an evaluation license key with this automated form).
[general]
acceptLicense = false
license =
fields.orchestrator.cluster.name = -

...

# ElasticSearch output
[output.elasticsearch]
host =
authorizationBasicUsername =
authorizationBasicPassword =
insecure = false
For example
[general]
acceptLicense = true
license = ...
fields.orchestrator.cluster.name = development

...

# ElasticSearch output
[output.elasticsearch]
host = https://elasticsearch:9200
authorizationBasicUsername = elastic
authorizationBasicPassword = elastic
insecure = true
3. Additional Configurations
- If you are planning to deploy Collectord on a cluster that has been running for a while and has a lot of logs stored on disk, Collectord will forward all of those logs, which can disturb your cluster. Under [general] you can configure the values thruputPerSecond or tooOldEvents to limit how much log data you want to forward per second and which events Collectord should skip (see the first sketch after this list).
- Collectord submits to ElasticSearch a new index lifecycle policy logs-collectord, which deletes indices older than 30 days. Please review the content of the file es-default-index-lifecycle-management-policy.json and adjust it to your needs (a generic example is shown after this list).
- Collectord submits to ElasticSearch new index templates for the data streams logs-collectord-${COLLECTORD_VERSION} and logs-collectord-failed-${COLLECTORD_VERSION}. Please review the content of the files es-default-index-template.json and es-failed-index-template.json and adjust them to your needs.
- The index logs-collectord-failed-${COLLECTORD_VERSION} is used to store logs that failed to be ingested into the default ElasticSearch index. This error can be caused by an incorrect mapping, for example when you change the type of a field and ElasticSearch can no longer process it.
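A minimal sketch of the throughput settings under [general]; the values below are illustrative assumptions, not defaults, so adjust them for your cluster:

[general]
...

# forward at most this amount of log data per second (illustrative value)
thruputPerSecond = 512Kb

# skip events older than 7 days instead of trying to backfill them (illustrative value)
tooOldEvents = 168h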
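For reference, a generic ElasticSearch index lifecycle policy that deletes indices older than 30 days looks similar to the sketch below; the shipped es-default-index-lifecycle-management-policy.json may differ in details, so review the actual file before adjusting it.

{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}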
4. Apply the configuration
Apply this change to your Kubernetes cluster with kubectl
$ kubectl apply -f ./collectorforkubernetes-elasticsearch.yaml
Verify the workloads.
$ kubectl get all --namespace collectorforkubernetes
Give it a few moments to download the image and start the containers. After all the pods are running, go to ElasticSearch or OpenSearch and you should see the data.
By default, Collectord forwards container logs, host logs (including syslog), and audit logs (if enabled).
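If the data does not show up, the Collectord pod logs usually explain why. The pod name below is a placeholder; use one of the names returned by kubectl get pods.

$ kubectl get pods --namespace collectorforkubernetes
$ kubectl logs --namespace collectorforkubernetes <collectord-pod-name>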
ElasticSearch configuration
You can start using ElasticSearch right away and see the logs under Observability -> Logs.
OpenSearch configuration
If you are using OpenSearch and this is a fresh installation, you need to create an index pattern to be able to view the logs.
- Go to Management -> Stack Management -> Index Patterns -> Create Index Pattern.
- Use logs-collectord-* as the index pattern name.
- Click Next step.
- Select @timestamp as the time field name.
- Click Create index pattern.
Links
- Installation - Forwarding container logs, application logs, host logs and audit logs. Test our solution with the embedded 30-day evaluation license.
- Collectord Configuration - Collectord configuration reference for Kubernetes and OpenShift clusters.
- Annotations - Changing a type and format of messages forwarded from namespaces, workloads and pods. Forwarding application logs. Multi-line container logs. Fields extraction for application and container logs (including timestamp extractions). Hiding sensitive data, stripping terminal escape codes and colors.
- Troubleshooting - FAQ and the common questions.
- License agreement
- Pricing
- Contact