Configurations for Splunk HTTP Event Collector
- Configurations for Splunk HTTP Event Collector
- Configure HTTP Event Collector secure connection
- HTTP Event Collector incorrect index behavior
- Using proxy for HTTP Event Collector
- Using multiple HTTP Event Collector endpoints for Load Balancing and Fail-over
- Enable indexer acknowledgement
- Client certificates for collector
- Support for multiple Splunk clusters
- Links
Configure HTTP Event Collector secure connection
Splunk uses self-signed certificates by default. The collector provides various configuration options for setting up how it should connect to HTTP Event Collector.
Configure a trusted SSL connection with the self-signed certificate
If you are using the Splunk self-signed certificate, you can copy the server CA certificate from $SPLUNK_HOME/etc/auth/cacert.pem and create a secret from it.
oc --namespace collectorforopenshift create secret generic splunk-cacert --from-file=./cacert.pem
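To confirm which name to use later for caName, you can inspect the certificate subject, for example with openssl (a quick local check, assuming openssl is installed; it is not part of the collector setup itself):

openssl x509 -in ./cacert.pem -noout -subject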
For every collectorforopenshift workload (2 DaemonSets and 1 Deployment), you need to attach this secret as a volume.
...
volumeMounts:
  - name: splunk-cacert
    mountPath: "/splunk-cacert/"
    readOnly: true
...
volumes:
  - name: splunk-cacert
    secret:
      secretName: splunk-cacert
...
And update the ConfigMap under the [output.splunk] section.
[output.splunk]

# Allow invalid SSL server certificate
insecure = false

# Path to CA certificate
caPath = /splunk-cacert/cacert.pem

# CA Name to verify
caName = SplunkServerDefaultCert
In this configuration, we define the path to the CA certificate that the collector should trust and the name of the server specified in the certificate, which is SplunkServerDefaultCert in the case of the default self-signed certificate.
After applying this update, the collector uses a trusted SSL connection to HTTP Event Collector.
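As a rough verification sketch (not part of the original setup; pod names depend on your installation), you can recreate the collector pods so they pick up the updated ConfigMap and the mounted secret, and then check any collector pod for certificate errors:

# recreate the collector pods managed by the DaemonSets and Deployment
oc --namespace collectorforopenshift delete pods --all
# check the logs of any collector pod for SSL/certificate errors
oc --namespace collectorforopenshift logs <collector-pod-name>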
HTTP Event Collector incorrect index behavior
HTTP Event Collector rejects payloads with indexes that the specified token is not allowed to write to. When you override indexes with annotations, it is a very common mistake to misspell the index name or forget to allow the token to write to that index in Splunk.
The collector provides the incorrectIndexBehavior configuration option to control how these errors are handled.
- RedirectToDefault - the default behavior, which forwards events with an incorrect index to the default index of the HTTP Event Collector.
- Drop - drops events with an incorrect index.
- Retry - keeps retrying. Some pipelines, like process stats, can be blocked for the whole host with this configuration.
You can specify the behavior with the configuration.
[output.splunk]
incorrectIndexBehavior = Drop
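A typical way to run into these errors is an index override with an annotation. For example, if you route a project to a dedicated index (a sketch; the annotation key is documented in the Annotations guide, and the project and index names below are placeholders), make sure the token is allowed to write to that index:

oc annotate namespace my-project collectord.io/index=openshift_prod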
Using proxy for HTTP Event Collector
If you need to use a proxy for HTTP Event Collector, you can define that in the configuration. If you are using an SSL connection, you need to include the certificate used by the proxy as well (similar to how we attach the certificate for Splunk above).
[output.splunk]
url = https://hec.example.com:8088/services/collector/event/1.0
token = B5A79AAD-D822-46CC-80D1-819F80D7BFB0
proxyUrl = http://proxy.example:4321
caPath = /proxy-cert/proxy-ca.pem
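As a sketch of the proxy certificate step (the secret and file names are examples chosen to match the caPath above), you can create a secret from the proxy CA certificate and mount it at /proxy-cert/ in every collector workload, the same way the Splunk CA certificate is mounted above:

oc --namespace collectorforopenshift create secret generic proxy-cert --from-file=./proxy-ca.pem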
Using multiple HTTP Event Collector endpoints for Load Balancing and Fail-over
The collector can accept multiple HTTP Event Collector URLs for load balancing (in case you are using multiple hosts with the same configuration) and for fail-over.
The collector provides you with 3 different algorithms for URL selection:
- random - choose a random URL on the first selection and after each failure (connection error or HTTP status code >= 500)
- round-robin - choose URLs starting from the first one and move to the next on each failure (connection error or HTTP status code >= 500)
- random-with-round-robin - choose a random URL on the first selection and after that continue in round-robin order on each failure (connection error or HTTP status code >= 500)

The default value is random-with-round-robin.
[output.splunk]
urls.0 = https://hec1.example.com:8088/services/collector/event/1.0
urls.1 = https://hec2.example.com:8088/services/collector/event/1.0
urls.2 = https://hec3.example.com:8088/services/collector/event/1.0
urlSelection = random-with-round-robin
token = B5A79AAD-D822-46CC-80D1-819F80D7BFB0
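If you prefer a strict fail-over order instead of random selection, a minimal variation of the same configuration (same keys as above, only the selection algorithm changes) could look like this, trying hec1 first and moving to hec2 only on failure:

[output.splunk]
urls.0 = https://hec1.example.com:8088/services/collector/event/1.0
urls.1 = https://hec2.example.com:8088/services/collector/event/1.0
urlSelection = round-robin
token = B5A79AAD-D822-46CC-80D1-819F80D7BFB0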
Enable indexer acknowledgement
HTTP Event Collector provides indexer acknowledgment, which lets you know when a payload has not only been accepted by HTTP Event Collector but also written to the indexers. Enabling this feature can significantly reduce the performance of the clients, including the collector. But if you need guarantees for data delivery, you can enable it for the HTTP Event Collector token and in the collector configuration.
[general]
acceptLicense = true

[output.splunk]
url = https://hec.example.com:8088/services/collector/event/1.0
ackUrl = https://hec.example.com:8088/services/collector/ack
token = B5A79AAD-D822-46CC-80D1-819F80D7BFB0
ackEnabled = true
ackTimeout = 3m
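On the Splunk side, acknowledgement also has to be enabled for the token, either in Splunk Web (HTTP Event Collector token settings) or in inputs.conf. A minimal sketch, assuming a token stanza named http://collectorforopenshift (the stanza name is an example):

# inputs.conf on the HTTP Event Collector endpoint
[http://collectorforopenshift]
token = B5A79AAD-D822-46CC-80D1-819F80D7BFB0
useACK = 1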
Client certificates for collector
If you secure your HTTP Event Collector endpoint by requiring client certificates, you can embed them in the image and provide the configuration to use them.
[output.splunk]
url = https://hec.example.com:8088/services/collector/event/1.0
token = B5A79AAD-D822-46CC-80D1-819F80D7BFB0
clientCertPath = /client-cert/client-cert.pem
clientKeyPath = /client-cert/client-cert.key
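Instead of embedding the certificates in the image, you could also mount them from a secret, similar to the CA certificate above. A minimal sketch (the secret and file names are examples chosen to match the paths in the configuration); mount the secret at /client-cert/ in all three collector workloads:

oc --namespace collectorforopenshift create secret generic client-cert --from-file=./client-cert.pem --from-file=./client-cert.key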
Support for multiple Splunk clusters
If you need to forward logs from the same OpenShift cluster to multiple Splunk clusters, you can configure additional Splunk outputs in the configuration.
[output.splunk::prod1]
url = https://prod1.hec.example.com:8088/services/collector/event/1.0
token = AF420832-F61B-480F-86B3-CCB5D37F7D0D
All other configuration values are taken from the default output output.splunk.
You can then override the output for Pods or Projects with the annotation collectord.io/output=splunk::prod1.
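For example, to route everything from one project to the prod1 output (the project name is a placeholder):

oc annotate namespace my-project collectord.io/output=splunk::prod1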
Links
- Installation
  - Start monitoring your OpenShift environments in under 10 minutes.
  - Automatically forward host, container and application logs.
  - Test our solution with the embedded 30 days evaluation license.
- Collector Configuration
  - Collector configuration reference.
- Annotations
  - Changing index, source, sourcetype for namespaces, workloads and pods.
  - Forwarding application logs.
  - Multi-line container logs.
  - Fields extraction for application and container logs (including timestamp extractions).
  - Hiding sensitive data, stripping terminal escape codes and colors.
  - Forwarding Prometheus metrics from Pods.
- Audit Logs
  - Configure audit logs.
  - Forwarding audit logs.
- Prometheus metrics
  - Collect metrics from control plane (etcd cluster, API server, kubelet, scheduler, controller).
  - Configure collector to forward metrics from the services in Prometheus format.
- Configuring Splunk Indexes
  - Using a non-default HTTP Event Collector index.
  - Configure the Splunk application to use indexes that are not searchable by default.
- Splunk fields extraction for container logs
  - Configure search-time fields extractions for container logs.
  - Container logs source pattern.
- Configurations for Splunk HTTP Event Collector
  - Configure multiple HTTP Event Collector endpoints for Load Balancing and Fail-over.
  - Secure HTTP Event Collector endpoint.
  - Configure the Proxy for HTTP Event Collector endpoint.
- Monitoring multiple clusters
  - Learn how you can monitor multiple clusters.
  - Learn how to set up ACL in Splunk.
- Streaming OpenShift Objects from the API Server
  - Learn how you can stream all changes from the OpenShift API Server.
  - Stream changes and objects from OpenShift API Server, including Pods, Deployments or ConfigMaps.
- License Server
  - Learn how you can configure remote License URL for Collectord.
- Monitoring GPU
- Alerts
- Troubleshooting
- Release History
- Upgrade instructions
- Security
- FAQ and the common questions
- License agreement
- Pricing
- Contact