# ELK Stack on Kubernetes (Minikube) with Ansible

A fully automated deployment of the ELK (Elasticsearch, Logstash, Kibana) observability stack on Kubernetes using Minikube, managed by Ansible. Includes an NGINX reverse proxy for SSL/TLS termination and Authentik for SSO (OpenID Connect) authentication to Kibana.
## Tech Stack & Versions
| Component | Version | Purpose |
|---|---|---|
| Ansible | 2.15+ | Deployment automation |
| Minikube | Latest | Local Kubernetes cluster |
| Elasticsearch | 8.17.0 | Search & analytics engine |
| Logstash | 8.17.0 | Data ingestion (JDBC from MySQL) |
| Kibana | 8.17.0 | Visualization dashboard |
| NGINX | 1.27 | SSL reverse proxy for Kibana |
| Authentik | 2024.12 | SSO/OIDC identity provider |
| MySQL | 8.4 | Data source with demo database |
| PostgreSQL | 16 | Authentik backend database |
| Redis | 7 | Authentik cache/message broker |
## Prerequisites

An Ubuntu 22.04+ system (bare-metal or VM) with at least 8 GB RAM, 4 CPU cores, and 40 GB of disk. Ansible 2.15+, Docker, and internet access are required.

Install Ansible if it is not already present:

```bash
sudo apt update && sudo apt install -y software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install -y ansible
```
## Project Structure

NOTE: The `ansible` folder is available here, and the Bash scripts can be found in this repo.
```text
elk-k8s-deployment/
├── README.md                       ← You are here
├── ansible/
│   ├── inventory/
│   │   └── hosts.yml               ← Inventory & variables
│   └── playbooks/
│       ├── deploy.yml              ← Main deployment playbook
│       └── teardown.yml            ← Cleanup playbook
├── k8s/
│   ├── namespaces/
│   │   └── namespaces.yaml         ← elk + database namespaces
│   ├── mysql/
│   │   ├── configmap.yaml          ← Init SQL with dummy data
│   │   ├── secret.yaml             ← MySQL credentials
│   │   └── deployment.yaml         ← Deployment, PVC, Service
│   ├── elasticsearch/
│   │   ├── elasticsearch.yaml      ← StatefulSet, ConfigMap, Service
│   │   └── setup-job.yaml          ← Post-deploy user/role setup
│   ├── logstash/
│   │   └── logstash.yaml           ← Deployment, pipelines, Service
│   ├── kibana/
│   │   └── kibana.yaml             ← Deployment, ConfigMap, Service
│   ├── nginx/
│   │   └── nginx.yaml              ← Reverse proxy, SSL, NodePort
│   └── authentik/
│       └── authentik.yaml          ← PG, Redis, Server, Worker
├── scripts/
│   ├── generate-certs.sh           ← CA + component certificates
│   └── configure-authentik-oidc.sh ← OIDC provider setup via API
└── certs/                          ← Generated certificates (gitignored)
```
## Step-by-Step Deployment

### Step 1 — Clone and Prepare

```bash
cd elk-k8s-deployment
chmod +x scripts/*.sh
```
Review and adjust variables in `ansible/inventory/hosts.yml` — especially passwords if deploying beyond a lab.
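The exact variable names depend on the playbooks in this repo; the excerpt below is a hypothetical sketch of the kind of values worth reviewing before a run (names and defaults are illustrative, not the actual file contents):

```yaml
# Hypothetical hosts.yml excerpt — match the real variable names in the repo
all:
  hosts:
    localhost:
      ansible_connection: local
  vars:
    elastic_password: "ElasticP@ss2024!"   # change for anything beyond a lab
    kibana_hostname: kibana.elk.local
    minikube_cpus: 4
    minikube_memory: 8192
```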
### Step 2 — Run the Full Deployment

The single Ansible playbook handles everything from installing prerequisites to starting all services:

```bash
cd ansible
ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml --ask-become-pass
```
This executes the following phases in order:
- Prerequisites — Installs Docker, kubectl, and Minikube
- Certificates — Generates a custom CA and TLS certs for all components
- Minikube — Starts the cluster with 4 CPUs and 8 GB RAM
- Namespaces — Creates the `elk` and `database` namespaces
- Secrets — Loads TLS certificates into Kubernetes secrets
- MySQL — Deploys MySQL with the `elkdemo` database and sample data
- Elasticsearch — Deploys a single-node cluster with X-Pack security
- ES Setup — Runs a Job to configure users, roles, and index templates
- Authentik — Deploys PostgreSQL, Redis, and the Authentik server and worker
- Kibana — Deploys Kibana configured for OIDC + basic auth
- Logstash — Deploys Logstash with two JDBC pipelines from MySQL
- NGINX — Deploys the SSL reverse proxy (NodePort 30443)
- DNS — Adds `kibana.elk.local` to `/etc/hosts`
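After the play finishes, you can block until the workloads have settled before moving on. A minimal sketch, assuming the StatefulSet is named `elasticsearch` (matching the `elasticsearch-0` pod shown later) and the remaining components are Deployments:

```shell
# Wait for the Elasticsearch StatefulSet rollout, then for every Deployment
# in both namespaces to report Available (10-minute ceiling each)
kubectl rollout status statefulset/elasticsearch -n elk --timeout=600s
kubectl wait deployment --all --for=condition=Available -n elk --timeout=600s
kubectl wait deployment --all --for=condition=Available -n database --timeout=600s
```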
### Step 3 — Configure Authentik OIDC Provider

After all pods are running, configure the OIDC provider in Authentik:

```bash
# Port-forward Authentik
kubectl port-forward svc/authentik-server 9000:80 -n elk &

# Run the configuration script
bash scripts/configure-authentik-oidc.sh http://localhost:9000

# Stop the port-forward
kill %1
```
This creates an OIDC provider with client ID `kibana` and links it to a Kibana application in Authentik.
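To sanity-check the result, you can fetch the provider's OIDC discovery document while the port-forward is still active. Authentik serves it per application slug; the `kibana` slug below is an assumption based on the client ID — adjust it if the script registers the application under a different name:

```shell
# Fetch the OpenID Connect discovery document for the kibana application
# (slug "kibana" is an assumption; check the Authentik admin UI if this 404s)
curl -s http://localhost:9000/application/o/kibana/.well-known/openid-configuration
```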
### Step 4 — Verify the Deployment

```bash
# Check all pods
kubectl get pods -n elk
kubectl get pods -n database

# Expected output — all pods should be Running:
#   elk namespace:      elasticsearch-0, kibana-xxx, logstash-xxx,
#                       nginx-xxx, authentik-server-xxx, authentik-worker-xxx,
#                       authentik-postgresql-xxx, authentik-redis-xxx
#   database namespace: mysql-xxx
```
### Step 5 — Access the Services

| Service | URL | Credentials |
|---|---|---|
| Kibana (via NGINX) | https://kibana.elk.local:30443 | elastic / ElasticP@ss2024! |
| Authentik Admin | https://authentik.elk.local:30944 | akadmin / AdminP@ss2024! |
| Elasticsearch (direct) | `kubectl port-forward svc/elasticsearch 9200:9200 -n elk` | elastic / ElasticP@ss2024! |
Since self-signed certificates are used, you will need to accept the browser security warning or import certs/ca.crt into your browser's trusted certificate store.
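Alternatively, you can verify the proxy's certificate chain non-interactively with the generated CA, without touching the browser at all:

```shell
# Should print an HTTP status code (e.g. 200 or 302) instead of a TLS error,
# confirming the NGINX certificate chains to certs/ca.crt
curl --cacert certs/ca.crt -s -o /dev/null -w '%{http_code}\n' https://kibana.elk.local:30443/
```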
## Running Individual Phases

To re-run only specific phases, use Ansible tags:

```bash
# Regenerate certificates only
ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml --tags certs,secrets

# Redeploy only Logstash
ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml --tags logstash

# Redeploy Elasticsearch and re-run setup
ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml --tags elasticsearch,es-setup
```
Available tags: `prereqs`, `certs`, `minikube`, `namespaces`, `secrets`, `mysql`, `elasticsearch`, `es-setup`, `kibana`, `logstash`, `nginx`, `authentik`, `dns`.
## Data Pipeline Details

Logstash runs two JDBC pipelines that pull data from MySQL across namespaces (`mysql.database.svc.cluster.local`):

`employees` pipeline — Runs every 2 minutes and tracks the `updated_at` column for incremental syncing. Enriches records with a `salary_band` field (junior / mid / senior). Writes to `employees-YYYY.MM` indices.

`access_logs` pipeline — Runs every minute and tracks the `created_at` column. Enriches records with `status_category` (success / redirect / client_error / server_error) and `performance_tier` (fast / normal / slow / critical). Writes to `access-logs-YYYY.MM` indices.
Both pipelines use the MySQL Connector/J 8.3.0 JDBC driver, which is downloaded automatically via an init container.
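To confirm both pipelines are actually writing documents, you can list their monthly indices through the Elasticsearch API (assumes the `kubectl port-forward svc/elasticsearch 9200:9200 -n elk` from the access table is active):

```shell
# List indices produced by the two pipelines, with doc counts and sizes
curl --cacert certs/ca.crt -u elastic:ElasticP@ss2024! \
  "https://localhost:9200/_cat/indices/employees-*,access-logs-*?v"
```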
## Authentik SSO Integration

The integration uses OpenID Connect between Authentik (the identity provider) and Elasticsearch/Kibana (the relying party).

Flow: the user visits Kibana → selects "Log in with Authentik SSO" → is redirected to Authentik → authenticates → is redirected back to Kibana with an OIDC token → Elasticsearch validates the token and maps the user to the `superuser` role.
To adjust the role mapping (e.g., map specific groups to specific roles), edit the `authentik_users` role mapping in Elasticsearch:

```bash
curl --cacert certs/ca.crt -u elastic:ElasticP@ss2024! \
  -X PUT "https://localhost:9200/_security/role_mapping/authentik_users" \
  -H "Content-Type: application/json" -d '{
    "roles": ["kibana_admin"],
    "enabled": true,
    "rules": { "field": { "realm.name": "authentik" } }
  }'
```
## Using Custom CA-Signed Certificates

The included `generate-certs.sh` creates a self-signed CA for lab use. To use certificates signed by your organization's CA:

- Place your CA certificate, component certificates, and keys in the `certs/` directory with the same filenames (`ca.crt`, `elasticsearch.crt`, `elasticsearch.key`, etc.)
- Generate the PKCS12 keystore for Elasticsearch:

  ```bash
  openssl pkcs12 -export -in certs/elasticsearch.crt -inkey certs/elasticsearch.key \
    -CAfile certs/ca.crt -chain -out certs/elasticsearch.p12 -passout pass:changeit
  cp certs/elasticsearch.p12 certs/elastic-http.p12
  ```

- Re-run the secrets and affected component phases:

  ```bash
  ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml \
    --tags secrets,elasticsearch,kibana,logstash,nginx
  ```
Ensure the SANs (Subject Alternative Names) in your certificates match the service DNS names used in the cluster (e.g., elasticsearch.elk.svc.cluster.local).
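If you need to produce a certificate carrying the right SANs, the sketch below shows one way with OpenSSL. It is self-signed purely as a stand-in for your CA's signing step; file names and the SAN list are illustrative and should be adapted to your PKI:

```shell
# Illustrative: an OpenSSL config whose SANs match the in-cluster DNS names
cat > es-san.cnf <<'EOF'
[req]
distinguished_name = dn
req_extensions = v3_req
prompt = no
[dn]
CN = elasticsearch
[v3_req]
subjectAltName = @alt_names
[alt_names]
DNS.1 = elasticsearch
DNS.2 = elasticsearch.elk.svc.cluster.local
DNS.3 = localhost
IP.1 = 127.0.0.1
EOF

# Generate a key and a self-signed cert with those SANs
# (in production, generate a CSR here and have your CA sign it instead)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout elasticsearch.key -out elasticsearch.crt \
  -config es-san.cnf -extensions v3_req

# Confirm the SANs made it into the certificate
openssl x509 -in elasticsearch.crt -noout -ext subjectAltName
```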
## Teardown

```bash
cd ansible

# Remove all K8s resources but keep Minikube running
ansible-playbook -i inventory/hosts.yml playbooks/teardown.yml --ask-become-pass

# Remove everything including Minikube
ansible-playbook -i inventory/hosts.yml playbooks/teardown.yml --ask-become-pass -e stop_minikube=true
```
## Troubleshooting

### Pod stuck in CrashLoopBackOff

```bash
kubectl logs <pod-name> -n elk --previous
kubectl describe pod <pod-name> -n elk
```
### Elasticsearch fails to start (vm.max_map_count)

```bash
# On the host (required for Elasticsearch)
sudo sysctl -w vm.max_map_count=262144
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
```
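With the Docker driver, Minikube shares the host kernel, so the host-level sysctl above is what counts; with a VM driver the value must hold inside the node itself. A quick check and (non-persistent) fix inside the node:

```shell
# Inspect the value inside the Minikube node
minikube ssh -- sysctl vm.max_map_count
# Set it there if needed (does not survive a node restart)
minikube ssh -- sudo sysctl -w vm.max_map_count=262144
```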
### Logstash cannot connect to MySQL

```bash
# Verify cross-namespace DNS resolution
kubectl run -it --rm debug --image=busybox -n elk -- nslookup mysql.database.svc.cluster.local
```
### Certificate issues

```bash
# Verify certificate SANs
openssl x509 -in certs/elasticsearch.crt -noout -text | grep -A1 "Subject Alternative Name"
```
### Reset the Elasticsearch setup job

```bash
kubectl delete job elasticsearch-setup -n elk
kubectl apply -f k8s/elasticsearch/setup-job.yaml
```
## Security Notes

This deployment uses demonstration credentials. For any non-lab use:

- Replace all passwords in `ansible/inventory/hosts.yml` and the corresponding K8s secrets
- Use Ansible Vault to encrypt sensitive variables: `ansible-vault encrypt_string 'password' --name 'elastic_password'`
- Use CA-signed certificates from your organization's PKI
- Restrict the Elasticsearch `superuser` role mapping to specific OIDC groups
- Enable Kubernetes NetworkPolicies to isolate namespace traffic
- Set Kubernetes resource quotas appropriate to your cluster
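As a starting point for the NetworkPolicy item above, here is a sketch that limits ingress into the `database` namespace to traffic from the `elk` namespace. It assumes the `kubernetes.io/metadata.name` label that recent Kubernetes versions set on namespaces automatically; verify your namespace labels before relying on it:

```shell
# Sketch: only pods in the elk namespace may reach pods in database
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-elk-to-database
  namespace: database
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: elk
EOF
```

Note that NetworkPolicies only take effect if the cluster's CNI enforces them; with Minikube this typically means starting the cluster with a policy-capable CNI such as Calico.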