- Logging with Fluentd
- Before you begin
- Set up Fluentd
- Example Fluentd, Elasticsearch, Kibana Stack
- Configure Istio
- View the new logs
- Cleanup
- See also
Logging with Fluentd
This task shows how to configure Istio to create custom log entries and send them to a Fluentd daemon. Fluentd is an open source log collector that supports many data outputs and has a pluggable architecture. One popular logging backend is Elasticsearch, with Kibana as a viewer. At the end of this task, a new log stream will be enabled, sending logs to an example Fluentd / Elasticsearch / Kibana stack.
The Bookinfo sample application is used as the example application throughout this task.
Before you begin
- Install Istio in your cluster and deploy an application. This task assumes that Mixer is set up in a default configuration (--configDefaultNamespace=istio-system). If you use a different value, update the configuration and commands in this task to match the value.
Set up Fluentd
In your cluster, you may already have a Fluentd daemon set running, such as the add-on described here and here, or something specific to your cluster provider. This is likely configured to send logs to an Elasticsearch system or logging provider.
You may use these Fluentd daemons, or any other Fluentd daemon you have set up, as long as they are listening for forwarded logs, and Istio’s Mixer is able to connect to them. In order for Mixer to connect to a running Fluentd daemon, you may need to add a service for Fluentd. The Fluentd configuration to listen for forwarded logs is:
<source>
  type forward
</source>
The full details of connecting Mixer to all possible Fluentd configurations are beyond the scope of this task.
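As an illustration only, a minimal Service fronting an existing Fluentd daemon set might look like the following sketch; the name, namespace, and selector here are assumptions and must be adjusted to match your own deployment:

apiVersion: v1
kind: Service
metadata:
  name: fluentd        # assumed name for your existing Fluentd daemon set
  namespace: logging   # assumed namespace
spec:
  ports:
  - name: fluentd-forward
    port: 24224        # default Fluentd forward port
    protocol: TCP
  selector:
    app: fluentd       # must match the labels on your Fluentd pods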
Example Fluentd, Elasticsearch, Kibana Stack
For the purposes of this task, you may deploy the example stack provided. This stack includes Fluentd, Elasticsearch, and Kibana in a non-production-ready set of Services and Deployments, all in a new Namespace called logging.
Save the following as logging-stack.yaml.
# Logging Namespace. All below are a part of this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
# Elasticsearch Service
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    app: elasticsearch
---
# Elasticsearch Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      containers:
      - image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.1
        name: elasticsearch
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch
          mountPath: /data
      volumes:
      - name: elasticsearch
        emptyDir: {}
---
# Fluentd Service
apiVersion: v1
kind: Service
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    app: fluentd-es
spec:
  ports:
  - name: fluentd-tcp
    port: 24224
    protocol: TCP
    targetPort: 24224
  - name: fluentd-udp
    port: 24224
    protocol: UDP
    targetPort: 24224
  selector:
    app: fluentd-es
---
# Fluentd Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    app: fluentd-es
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fluentd-es
  template:
    metadata:
      labels:
        app: fluentd-es
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      containers:
      - name: fluentd-es
        image: gcr.io/google-containers/fluentd-elasticsearch:v2.0.1
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: config-volume
          mountPath: /etc/fluent/config.d
      terminationGracePeriodSeconds: 30
      volumes:
      - name: config-volume
        configMap:
          name: fluentd-es-config
---
# Fluentd ConfigMap, contains config files.
kind: ConfigMap
apiVersion: v1
data:
  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      type forward
    </source>
  output.conf: |-
    <match **>
      type elasticsearch
      log_level info
      include_tag_key true
      host elasticsearch
      port 9200
      logstash_format true
      # Set the chunk limits.
      buffer_chunk_limit 2M
      buffer_queue_limit 8
      flush_interval 5s
      # Never wait longer than 30 seconds between retries.
      max_retry_wait 30
      # Disable the limit on the number of retries (retry forever).
      disable_retry_limit
      # Use multiple threads for processing.
      num_threads 2
    </match>
metadata:
  name: fluentd-es-config
  namespace: logging
---
# Kibana Service
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    app: kibana
---
# Kibana Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.1.1
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
Create the resources:
$ kubectl apply -f logging-stack.yaml
namespace "logging" created
service "elasticsearch" created
deployment "elasticsearch" created
service "fluentd-es" created
deployment "fluentd-es" created
configmap "fluentd-es-config" created
service "kibana" created
deployment "kibana" created
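Before continuing, you can confirm that the pods in the new namespace reach the Running state, using standard kubectl (the output will vary by cluster):

$ kubectl -n logging get pods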
Configure Istio
Now that there is a running Fluentd daemon, configure Istio with a new log type, and send those logs to the listening daemon. Apply a YAML file with configuration for the log stream that Istio will generate and collect automatically:
$ kubectl apply -f samples/bookinfo/telemetry/fluentd-istio.yaml
If you use Istio 1.1.2 or prior, please use the following configuration instead:
$ kubectl apply -f samples/bookinfo/telemetry/fluentd-istio-crd.yaml
Notice that the address: "fluentd-es.logging:24224" line in the handler configuration points to the Fluentd daemon we set up in the example stack.
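For reference, the applied file defines three Mixer resources: an instance built from the logentry template, a handler using the fluentd adapter, and a rule binding them. The sketch below illustrates the shape of that configuration; the exact contents ship with your Istio release, so treat the field values here as illustrative rather than authoritative:

# Instance: a logentry assembled from request attributes
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: newlog
  namespace: istio-system
spec:
  compiledTemplate: logentry
  params:
    severity: '"info"'
    timestamp: request.time
    variables:
      source: source.labels["app"] | source.workload.name | "unknown"
      destination: destination.labels["app"] | destination.workload.name | "unknown"
      responseCode: response.code | 0
      latency: response.duration | "0ms"
    monitored_resource_type: '"UNSPECIFIED"'
---
# Handler: the fluentd adapter pointed at the example stack
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: handler
  namespace: istio-system
spec:
  compiledAdapter: fluentd
  params:
    address: "fluentd-es.logging:24224"
---
# Rule: route every request's newlog instance to the fluentd handler
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: newlogtofluentd
  namespace: istio-system
spec:
  match: "true" # match for all requests
  actions:
  - handler: handler
    instances: [ newlog ]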
View the new logs
- Send traffic to the sample application.
For the Bookinfo sample, visit http://$GATEWAY_URL/productpage in your web browser or issue the following command:
$ curl http://$GATEWAY_URL/productpage
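If $GATEWAY_URL is not already set from an earlier task, one common way to derive it, assuming an istio-ingressgateway Service of type LoadBalancer with an external IP, is the following (adjust for NodePort or other environments):

$ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT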
- In a Kubernetes environment, set up port-forwarding for Kibana by executing the following command:
$ kubectl -n logging port-forward $(kubectl -n logging get pod -l app=kibana -o jsonpath='{.items[0].metadata.name}') 5601:5601 &
Leave the command running. Press Ctrl-C to exit when done accessing the Kibana UI.
- Navigate to the Kibana UI and click “Set up index patterns” in the top right.
- Use * as the index pattern, and click “Next step.”
- Select @timestamp as the Time Filter field name, and click “Create index pattern.”
- Now click “Discover” on the left menu, and start exploring the logs generated.
Cleanup
- Remove the new telemetry configuration:
$ kubectl delete -f samples/bookinfo/telemetry/fluentd-istio.yaml
If you are using Istio 1.1.2 or prior:
$ kubectl delete -f samples/bookinfo/telemetry/fluentd-istio-crd.yaml
- Remove the example Fluentd, Elasticsearch, Kibana stack:
$ kubectl delete -f logging-stack.yaml
- Remove any kubectl port-forward processes that may still be running:
$ killall kubectl
- If you are not planning to explore any follow-on tasks, refer to the Bookinfo cleanup instructions to shut down the application.
See also
Mixer and the SPOF Myth
Improving availability and reducing latency.
Mixer Adapter Model
Provides an overview of Mixer's plug-in architecture.
Collecting Logs
This task shows you how to configure Istio to collect and customize logs.
Collecting Metrics
This task shows you how to configure Istio to collect and customize metrics.
Collecting Metrics for TCP services
This task shows you how to configure Istio to collect metrics for TCP services.
Getting Envoy's Access Logs
This task shows you how to configure Envoy proxies to print access log to their standard output.
