elk-k8s-deployment/README.md (new file, 319 lines)

# ELK Stack on Kubernetes (Minikube) with Ansible

A fully automated deployment of the ELK (Elasticsearch, Logstash, Kibana) observability stack on Kubernetes using Minikube, managed by Ansible. Includes an NGINX reverse proxy for SSL/TLS termination and Authentik for SSO (OpenID Connect) authentication to Kibana.

---

## Architecture Overview

```
┌─────────────────────────────────────────────────────────────────────────┐
│ Minikube Cluster │
│ │
│ ┌─── Namespace: elk ─────────────────────────────────────────────────┐ │
│ │ │ │
│ │ ┌──────────┐ SSL ┌──────────┐ ┌────────────────┐ │ │
│ │ │ NGINX │◄────────►│ Kibana │◄───────►│ Elasticsearch │ │ │
│ │ │ Proxy │ :443 │ :5601 │ :9200 │ (single │ │ │
│ │ │ :30443 │ │ │ │ node) │ │ │
│ │ └──────────┘ └──────────┘ └────────────────┘ │ │
│ │ │ ▲ ▲ │ │
│ │ │ :9443 │ OIDC │ │ │
│ │ ▼ │ │ │ │
│ │ ┌──────────────┐ ┌─────┴──────┐ ┌─────┴──────┐ │ │
│ │ │ Authentik │ │ Authentik │ │ Logstash │ │ │
│ │ │ PostgreSQL │◄──►│ Server + │ │ (JDBC │ │ │
│ │ │ + Redis │ │ Worker │ │ pipelines)│ │ │
│ │ └──────────────┘ └────────────┘ └─────┬──────┘ │ │
│ │ │ │ │
│ └─────────────────────────────────────────────────────┼─────────────┘ │
│ │ │
│ ┌─── Namespace: database ──────────────────────────────┼─────────────┐ │
│ │ ▼ │ │
│ │ ┌────────────┐ │ │
│ │ │ MySQL │ │ │
│ │ │ 8.4 │ │ │
│ │ │ (elkdemo) │ │ │
│ │ └────────────┘ │ │
│ └────────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
```

## Tech Stack & Versions

| Component     | Version | Purpose                          |
|:--------------|:--------|:---------------------------------|
| Ansible       | 2.15+   | Deployment automation            |
| Minikube      | Latest  | Local Kubernetes cluster         |
| Elasticsearch | 8.17.0  | Search & analytics engine        |
| Logstash      | 8.17.0  | Data ingestion (JDBC from MySQL) |
| Kibana        | 8.17.0  | Visualization dashboard          |
| NGINX         | 1.27    | SSL reverse proxy for Kibana     |
| Authentik     | 2024.12 | SSO/OIDC identity provider       |
| MySQL         | 8.4     | Data source with demo database   |
| PostgreSQL    | 16      | Authentik backend database       |
| Redis         | 7       | Authentik cache/message broker   |

---

## Prerequisites

An Ubuntu 22.04+ system (bare-metal or VM) with at least 8 GB RAM, 4 CPU cores, and 40 GB of disk. Ansible 2.15+, Docker, and internet access are required.

Install Ansible if it is not already present:

```bash
sudo apt update && sudo apt install -y software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install -y ansible
```
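
Before running anything, a quick sanity check against the minimums above can save a failed Minikube start later. A minimal sketch, assuming a Linux host with GNU coreutils:

```bash
# Sanity-check host resources against the stated minimums
# (4 CPU cores, 8 GB RAM, 40 GB free disk).
cpus=$(nproc)
mem_gb=$(free -g | awk '/^Mem:/ {print $2}')
disk_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')
echo "CPUs: ${cpus}, RAM: ${mem_gb} GB, free disk: ${disk_gb} GB"
[ "$cpus" -ge 4 ] || echo "WARN: fewer than 4 CPU cores"
[ "$mem_gb" -ge 7 ] || echo "WARN: less than 8 GB RAM"   # free -g rounds down
[ "$disk_gb" -ge 40 ] || echo "WARN: less than 40 GB free on /"
```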

---

## Project Structure

```
elk-k8s-deployment/
├── README.md                          ← You are here
├── ansible/
│   ├── inventory/
│   │   └── hosts.yml                  ← Inventory & variables
│   └── playbooks/
│       ├── deploy.yml                 ← Main deployment playbook
│       └── teardown.yml               ← Cleanup playbook
├── k8s/
│   ├── namespaces/
│   │   └── namespaces.yaml            ← elk + database namespaces
│   ├── mysql/
│   │   ├── configmap.yaml             ← Init SQL with dummy data
│   │   ├── secret.yaml                ← MySQL credentials
│   │   └── deployment.yaml            ← Deployment, PVC, Service
│   ├── elasticsearch/
│   │   ├── elasticsearch.yaml         ← StatefulSet, ConfigMap, Service
│   │   └── setup-job.yaml             ← Post-deploy user/role setup
│   ├── logstash/
│   │   └── logstash.yaml              ← Deployment, pipelines, Service
│   ├── kibana/
│   │   └── kibana.yaml                ← Deployment, ConfigMap, Service
│   ├── nginx/
│   │   └── nginx.yaml                 ← Reverse proxy, SSL, NodePort
│   └── authentik/
│       └── authentik.yaml             ← PG, Redis, Server, Worker
├── scripts/
│   ├── generate-certs.sh              ← CA + component certificates
│   └── configure-authentik-oidc.sh    ← OIDC provider setup via API
└── certs/                             ← Generated certificates (gitignored)
```

---

## Step-by-Step Deployment

### Step 1 — Clone and Prepare

```bash
cd elk-k8s-deployment
chmod +x scripts/*.sh
```

Review and adjust variables in `ansible/inventory/hosts.yml` — especially passwords if deploying beyond a lab.

### Step 2 — Run the Full Deployment

The single Ansible playbook handles everything from installing prerequisites to starting all services:

```bash
cd ansible
ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml --ask-become-pass
```

This executes the following phases in order:

1. **Prerequisites** — Installs Docker, kubectl, and Minikube
2. **Certificates** — Generates a custom CA and TLS certs for all components
3. **Minikube** — Starts the cluster with 4 CPUs, 8 GB RAM
4. **Namespaces** — Creates `elk` and `database` namespaces
5. **Secrets** — Loads TLS certificates into Kubernetes secrets
6. **MySQL** — Deploys MySQL with the `elkdemo` database and sample data
7. **Elasticsearch** — Deploys a single-node cluster with X-Pack security
8. **ES Setup** — Runs a Job to configure users, roles, and index templates
9. **Authentik** — Deploys PostgreSQL, Redis, Authentik server and worker
10. **Kibana** — Deploys Kibana configured for OIDC + basic auth
11. **Logstash** — Deploys Logstash with two JDBC pipelines from MySQL
12. **NGINX** — Deploys the SSL reverse proxy (NodePort 30443)
13. **DNS** — Adds `kibana.elk.local` to `/etc/hosts`

### Step 3 — Configure Authentik OIDC Provider

After all pods are running, configure the OIDC provider in Authentik:

```bash
# Port-forward Authentik
kubectl port-forward svc/authentik-server 9000:80 -n elk &

# Run the configuration script
bash scripts/configure-authentik-oidc.sh http://localhost:9000

# Stop the port-forward
kill %1
```

This creates an OIDC provider with client ID `kibana` and links it to a Kibana application in Authentik.
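
For orientation, the result in the Authentik admin UI looks roughly like the sketch below. Only the client ID comes from this README; the client type and redirect URI are assumptions about what the script configures, so check the script itself for the authoritative values:

```
# Illustrative only — actual values are set by configure-authentik-oidc.sh
Provider type : OAuth2/OpenID Connect
Client type   : Confidential                                              (assumed)
Client ID     : kibana
Redirect URI  : https://kibana.elk.local:30443/api/security/oidc/callback (assumed)
Application   : Kibana
```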

### Step 4 — Verify the Deployment

```bash
# Check all pods
kubectl get pods -n elk
kubectl get pods -n database

# Expected output — all pods should be Running:
#   elk namespace:      elasticsearch-0, kibana-xxx, logstash-xxx,
#                       nginx-xxx, authentik-server-xxx, authentik-worker-xxx,
#                       authentik-postgresql-xxx, authentik-redis-xxx
#   database namespace: mysql-xxx
```

### Step 5 — Access the Services

| Service                | URL                                                       | Credentials                |
|:-----------------------|:----------------------------------------------------------|:---------------------------|
| Kibana (via NGINX)     | https://kibana.elk.local:30443                            | elastic / ElasticP@ss2024! |
| Authentik Admin        | https://authentik.elk.local:30944                         | akadmin / AdminP@ss2024!   |
| Elasticsearch (direct) | `kubectl port-forward svc/elasticsearch 9200:9200 -n elk` | elastic / ElasticP@ss2024! |

Since self-signed certificates are used, you will need to accept the browser security warning or import `certs/ca.crt` into your browser's trusted certificate store.

---

## Running Individual Phases

To re-run only specific phases, use Ansible tags:

```bash
# Regenerate certificates only
ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml --tags certs,secrets

# Redeploy only Logstash
ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml --tags logstash

# Redeploy Elasticsearch and re-run setup
ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml --tags elasticsearch,es-setup
```

Available tags: `prereqs`, `certs`, `minikube`, `namespaces`, `secrets`, `mysql`, `elasticsearch`, `es-setup`, `kibana`, `logstash`, `nginx`, `authentik`, `dns`.

---

## Data Pipeline Details

Logstash runs two JDBC pipelines that pull data from MySQL across namespaces (`mysql.database.svc.cluster.local`):

**employees pipeline** — Runs every 2 minutes and tracks the `updated_at` column for incremental syncing. Enriches records with a `salary_band` field (junior / mid / senior). Writes to `employees-YYYY.MM` indices.

**access_logs pipeline** — Runs every minute and tracks the `created_at` column. Enriches records with `status_category` (success/redirect/client_error/server_error) and `performance_tier` (fast/normal/slow/critical). Writes to `access-logs-YYYY.MM` indices.

Both pipelines use the MySQL Connector/J 8.3.0 JDBC driver, which is downloaded automatically by an init container.
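
The authoritative pipeline definitions live in `k8s/logstash/logstash.yaml`. As a rough sketch, the employees pipeline's input/output follow this shape (standard Logstash `jdbc` plugin options; the `logstash` MySQL user and the `employees` table name are assumptions inferred from the inventory variables):

```
# Sketch only — see k8s/logstash/logstash.yaml for the real pipeline
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://mysql.database.svc.cluster.local:3306/elkdemo"
    jdbc_user              => "logstash"              # assumed user name
    jdbc_driver_class      => "com.mysql.cj.jdbc.Driver"
    schedule               => "*/2 * * * *"           # every 2 minutes
    use_column_value       => true
    tracking_column        => "updated_at"
    tracking_column_type   => "timestamp"
    statement              => "SELECT * FROM employees WHERE updated_at > :sql_last_value"
  }
}
output {
  elasticsearch {
    hosts => ["https://elasticsearch:9200"]
    index => "employees-%{+YYYY.MM}"
  }
}
```

The `:sql_last_value` placeholder is how the plugin implements the incremental sync described above: Logstash persists the last-seen `updated_at` value between runs and substitutes it into the query.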

---

## Authentik SSO Integration

The integration uses OpenID Connect between Authentik (the identity provider) and Elasticsearch/Kibana (the relying party).

**Flow**: The user visits Kibana → selects "Log in with Authentik SSO" → is redirected to Authentik → authenticates → is redirected back to Kibana with an OIDC token → Elasticsearch validates the token and maps the user to the `superuser` role.
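
On the Elasticsearch side, this flow is driven by an OIDC realm in `elasticsearch.yml`. The setting names below are Elasticsearch's standard OIDC realm settings, but the concrete endpoint URLs and claim choice are illustrative assumptions — the deployed values live in `k8s/elasticsearch/elasticsearch.yaml`:

```
# Sketch of an OIDC realm in elasticsearch.yml — endpoint values assumed
xpack.security.authc.realms.oidc.authentik:
  order: 2
  rp.client_id: "kibana"
  rp.response_type: "code"
  rp.redirect_uri: "https://kibana.elk.local:30443/api/security/oidc/callback"
  op.issuer: "https://authentik.elk.local:30944/application/o/kibana/"
  op.authorization_endpoint: "https://authentik.elk.local:30944/application/o/authorize/"
  op.token_endpoint: "https://authentik.elk.local:30944/application/o/token/"
  op.jwkset_path: "https://authentik.elk.local:30944/application/o/kibana/jwks/"
  claims.principal: "preferred_username"
```

The realm name (`authentik` here) is what the role-mapping rule below matches on via `realm.name`.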

To adjust the role mapping (e.g., map specific groups to specific roles), edit the `authentik_users` role mapping in Elasticsearch:

```bash
curl --cacert certs/ca.crt -u elastic:ElasticP@ss2024! \
  -X PUT "https://localhost:9200/_security/role_mapping/authentik_users" \
  -H "Content-Type: application/json" -d '{
    "roles": ["kibana_admin"],
    "enabled": true,
    "rules": { "field": { "realm.name": "authentik" } }
  }'
```

---

## Using Custom CA-Signed Certificates

The included `generate-certs.sh` creates a self-signed CA for lab use. To use certificates signed by your organization's CA:

1. Place your CA certificate, component certificates, and keys in the `certs/` directory with the same filenames (`ca.crt`, `elasticsearch.crt`, `elasticsearch.key`, etc.)
2. Generate the PKCS12 keystore for Elasticsearch:

   ```bash
   openssl pkcs12 -export -in certs/elasticsearch.crt -inkey certs/elasticsearch.key \
     -CAfile certs/ca.crt -chain -out certs/elasticsearch.p12 -passout pass:changeit
   cp certs/elasticsearch.p12 certs/elastic-http.p12
   ```

3. Re-run the secrets and affected component phases:

   ```bash
   ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml \
     --tags secrets,elasticsearch,kibana,logstash,nginx
   ```

Ensure the SANs (Subject Alternative Names) in your certificates match the service DNS names used in the cluster (e.g., `elasticsearch.elk.svc.cluster.local`).
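
The SAN and chain checks can be rehearsed without touching the cluster. This self-contained sketch mints a throwaway CA and a leaf certificate carrying the in-cluster SAN, then runs the same `openssl verify` / SAN inspection you would apply to your real certificates:

```bash
# Throwaway CA + leaf cert with the in-cluster SAN, then chain and SAN checks.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -subj "/CN=demo-ca" 2>/dev/null
openssl req -newkey rsa:2048 -nodes \
  -keyout "$tmp/es.key" -out "$tmp/es.csr" -subj "/CN=elasticsearch" 2>/dev/null
printf 'subjectAltName=DNS:elasticsearch,DNS:elasticsearch.elk.svc.cluster.local\n' > "$tmp/san.cnf"
openssl x509 -req -in "$tmp/es.csr" -CA "$tmp/ca.crt" -CAkey "$tmp/ca.key" \
  -CAcreateserial -days 1 -extfile "$tmp/san.cnf" -out "$tmp/es.crt" 2>/dev/null
verify_out=$(openssl verify -CAfile "$tmp/ca.crt" "$tmp/es.crt")
san_out=$(openssl x509 -in "$tmp/es.crt" -noout -ext subjectAltName)
echo "$verify_out"   # "<path>: OK" when the chain verifies
echo "$san_out"
```

Swap in your real `certs/ca.crt` and component certificates to run the same checks before loading them into the cluster secret.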

---

## Teardown

```bash
cd ansible

# Remove all K8s resources but keep Minikube running
ansible-playbook -i inventory/hosts.yml playbooks/teardown.yml --ask-become-pass

# Remove all K8s resources and also stop Minikube
ansible-playbook -i inventory/hosts.yml playbooks/teardown.yml --ask-become-pass -e stop_minikube=true
```

---

## Troubleshooting

**Pod stuck in CrashLoopBackOff**
```bash
kubectl logs <pod-name> -n elk --previous
kubectl describe pod <pod-name> -n elk
```

**Elasticsearch fails to start (vm.max_map_count)**
```bash
# On the host (required for Elasticsearch)
sudo sysctl -w vm.max_map_count=262144
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
```

**Logstash cannot connect to MySQL**
```bash
# Verify cross-namespace DNS resolution
kubectl run -it --rm debug --image=busybox -n elk -- nslookup mysql.database.svc.cluster.local
```

**Certificate issues**
```bash
# Verify certificate SANs
openssl x509 -in certs/elasticsearch.crt -noout -text | grep -A1 "Subject Alternative Name"
```

**Reset Elasticsearch setup job**
```bash
kubectl delete job elasticsearch-setup -n elk
kubectl apply -f k8s/elasticsearch/setup-job.yaml
```

---

## Security Notes

This deployment uses demonstration credentials. For any non-lab use:

- Replace all passwords in `ansible/inventory/hosts.yml` and corresponding K8s secrets
- Use Ansible Vault to encrypt sensitive variables: `ansible-vault encrypt_string 'password' --name 'elastic_password'`
- Use CA-signed certificates from your organization's PKI
- Restrict the Elasticsearch `superuser` role mapping to specific OIDC groups
- Enable Kubernetes NetworkPolicies to isolate namespace traffic
- Set Kubernetes resource quotas appropriate to your cluster
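
For the first bullet, `openssl rand` is a quick way to mint replacements for the demo passwords. A sketch (variable names mirror `ansible/inventory/hosts.yml`; adjust lengths to your policy):

```bash
# Generate strong replacements for the demo credentials in hosts.yml.
elastic_password=$(openssl rand -base64 18)
kibana_system_password=$(openssl rand -base64 18)
authentik_secret_key=$(openssl rand -base64 48)   # Authentik needs a long secret key
printf 'elastic_password: "%s"\n' "$elastic_password"
printf 'kibana_system_password: "%s"\n' "$kibana_system_password"
printf 'authentik_secret_key: "%s"\n' "$authentik_secret_key"
```

Feed the values into `ansible-vault encrypt_string` rather than committing them in plain text.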

elk-k8s-deployment/ansible/inventory/hosts.yml (new file, 36 lines)

all:
  hosts:
    localhost:
      ansible_connection: local
      ansible_python_interpreter: /usr/bin/python3

  vars:
    # --- Project paths ---
    project_dir: "{{ playbook_dir }}/.."
    k8s_manifests_dir: "{{ project_dir }}/k8s"
    scripts_dir: "{{ project_dir }}/scripts"
    certs_dir: "{{ project_dir }}/certs"

    # --- Minikube ---
    minikube_cpus: 4
    minikube_memory: 8192
    minikube_disk_size: "40g"
    minikube_driver: docker

    # --- Domain ---
    domain: elk.local

    # --- ELK versions ---
    elk_version: "8.17.0"
    nginx_version: "1.27"
    authentik_version: "2024.12"
    mysql_version: "8.4"

    # --- Passwords (override in vault for production) ---
    elastic_password: "ElasticP@ss2024!"
    kibana_system_password: "KibanaP@ss2024!"
    mysql_root_password: "rootpassword"
    logstash_mysql_password: "logstash_password"
    authentik_pg_password: "AuthPgP@ss2024!"
    authentik_secret_key: "Sup3rS3cretK3yForAuth3nt1k2024ThatIsLongEnough!"
    authentik_bootstrap_password: "AdminP@ss2024!"

elk-k8s-deployment/ansible/playbooks/deploy.yml (new file, 351 lines)

---
# =============================================================================
# ELK Stack on Kubernetes (Minikube) — Main Deployment Playbook
# =============================================================================
# Usage:
#   ansible-playbook -i inventory/hosts.yml playbooks/deploy.yml
#
# Tags:
#   --tags prereqs         Install system dependencies
#   --tags certs           Generate TLS certificates
#   --tags minikube        Start/configure Minikube
#   --tags namespaces      Create K8s namespaces
#   --tags secrets         Create TLS secrets in K8s
#   --tags mysql           Deploy MySQL
#   --tags elasticsearch   Deploy Elasticsearch
#   --tags es-setup        Run Elasticsearch post-setup
#   --tags kibana          Deploy Kibana
#   --tags logstash        Deploy Logstash
#   --tags nginx           Deploy NGINX proxy
#   --tags authentik       Deploy Authentik SSO
#   --tags dns             Configure /etc/hosts
# =============================================================================

- name: Deploy ELK Stack on Kubernetes
  hosts: localhost
  connection: local
  gather_facts: true

  tasks:
    # =======================================================================
    # PHASE 1 — Prerequisites
    # =======================================================================
    - name: Install system prerequisites
      tags: [prereqs]
      block:
        - name: Update apt cache
          become: true
          ansible.builtin.apt:
            update_cache: true
            cache_valid_time: 3600

        - name: Install required packages
          become: true
          ansible.builtin.apt:
            name:
              - curl
              - apt-transport-https
              - ca-certificates
              - gnupg
              - lsb-release
              - openssl
              - docker.io
              - conntrack
            state: present

        - name: Ensure Docker service is running
          become: true
          ansible.builtin.systemd:
            name: docker
            state: started
            enabled: true

        - name: Add current user to docker group
          become: true
          ansible.builtin.user:
            name: "{{ ansible_user_id }}"
            groups: docker
            append: true

        - name: Install kubectl
          become: true
          ansible.builtin.shell: |
            if ! command -v kubectl >/dev/null 2>&1; then
              curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
              install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
              rm -f kubectl
            fi
          args:
            creates: /usr/local/bin/kubectl

        - name: Install minikube
          become: true
          ansible.builtin.shell: |
            if ! command -v minikube >/dev/null 2>&1; then
              curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
              install minikube-linux-amd64 /usr/local/bin/minikube
              rm -f minikube-linux-amd64
            fi
          args:
            creates: /usr/local/bin/minikube

    # =======================================================================
    # PHASE 2 — TLS Certificates
    # =======================================================================
    - name: Generate TLS certificates
      tags: [certs]
      block:
        - name: Create certs directory
          ansible.builtin.file:
            path: "{{ certs_dir }}"
            state: directory
            mode: "0755"

        - name: Run certificate generation script
          ansible.builtin.command:
            cmd: bash "{{ scripts_dir }}/generate-certs.sh" "{{ certs_dir }}"
            creates: "{{ certs_dir }}/ca.crt"

    # =======================================================================
    # PHASE 3 — Minikube
    # =======================================================================
    - name: Start and configure Minikube
      tags: [minikube]
      block:
        - name: Check if Minikube is running
          # Plain "minikube status"; the "when:" below looks for 'Running' in stdout
          ansible.builtin.command: minikube status
          register: minikube_status
          ignore_errors: true
          changed_when: false

        - name: Start Minikube
          ansible.builtin.command: >
            minikube start
            --cpus={{ minikube_cpus }}
            --memory={{ minikube_memory }}
            --disk-size={{ minikube_disk_size }}
            --driver={{ minikube_driver }}
            --addons=default-storageclass,storage-provisioner
          when: minikube_status.rc != 0 or 'Running' not in minikube_status.stdout

        - name: Enable ingress addon
          ansible.builtin.command: minikube addons enable ingress
          changed_when: false

    # =======================================================================
    # PHASE 4 — Kubernetes Namespaces
    # =======================================================================
    - name: Create Kubernetes namespaces
      tags: [namespaces]
      ansible.builtin.command: kubectl apply -f "{{ k8s_manifests_dir }}/namespaces/namespaces.yaml"
      changed_when: false

    # =======================================================================
    # PHASE 5 — TLS Secrets in Kubernetes
    # =======================================================================
    - name: Create TLS secrets
      tags: [secrets]
      block:
        - name: Delete existing elk-certificates secret (if any)
          ansible.builtin.command: kubectl delete secret elk-certificates -n elk --ignore-not-found
          changed_when: false

        - name: Create elk-certificates secret in elk namespace
          ansible.builtin.command: >
            kubectl create secret generic elk-certificates -n elk
            --from-file=ca.crt={{ certs_dir }}/ca.crt
            --from-file=ca.key={{ certs_dir }}/ca.key
            --from-file=elasticsearch.crt={{ certs_dir }}/elasticsearch.crt
            --from-file=elasticsearch.key={{ certs_dir }}/elasticsearch.key
            --from-file=elasticsearch.p12={{ certs_dir }}/elasticsearch.p12
            --from-file=http.p12={{ certs_dir }}/elastic-http.p12
            --from-file=kibana.crt={{ certs_dir }}/kibana.crt
            --from-file=kibana.key={{ certs_dir }}/kibana.key
            --from-file=logstash.crt={{ certs_dir }}/logstash.crt
            --from-file=logstash.key={{ certs_dir }}/logstash.key
            --from-file=nginx.crt={{ certs_dir }}/nginx.crt
            --from-file=nginx.key={{ certs_dir }}/nginx.key
            --from-file=authentik.crt={{ certs_dir }}/authentik.crt
            --from-file=authentik.key={{ certs_dir }}/authentik.key

    # =======================================================================
    # PHASE 6 — MySQL (database namespace)
    # =======================================================================
    - name: Deploy MySQL
      tags: [mysql]
      block:
        - name: Apply MySQL manifests
          ansible.builtin.command: kubectl apply -f "{{ k8s_manifests_dir }}/mysql/"
          changed_when: false

        - name: Wait for MySQL to be ready
          ansible.builtin.command: >
            kubectl wait --for=condition=ready pod -l app=mysql
            -n database --timeout=180s
          changed_when: false

    # =======================================================================
    # PHASE 7 — Elasticsearch
    # =======================================================================
    - name: Deploy Elasticsearch
      tags: [elasticsearch]
      block:
        - name: Apply Elasticsearch manifests
          ansible.builtin.command: kubectl apply -f "{{ k8s_manifests_dir }}/elasticsearch/elasticsearch.yaml"
          changed_when: false

        - name: Wait for Elasticsearch to be ready
          ansible.builtin.command: >
            kubectl wait --for=condition=ready pod -l app=elasticsearch
            -n elk --timeout=300s
          changed_when: false

    # =======================================================================
    # PHASE 8 — Elasticsearch post-setup
    # =======================================================================
    - name: Run Elasticsearch setup job
      tags: [es-setup]
      block:
        - name: Delete previous setup job (if any)
          ansible.builtin.command: kubectl delete job elasticsearch-setup -n elk --ignore-not-found
          changed_when: false

        - name: Apply setup job
          ansible.builtin.command: kubectl apply -f "{{ k8s_manifests_dir }}/elasticsearch/setup-job.yaml"
          changed_when: false

        - name: Wait for setup job completion
          ansible.builtin.command: >
            kubectl wait --for=condition=complete job/elasticsearch-setup
            -n elk --timeout=120s
          changed_when: false

    # =======================================================================
    # PHASE 9 — Authentik SSO
    # =======================================================================
    - name: Deploy Authentik SSO
      tags: [authentik]
      block:
        - name: Apply Authentik manifests
          ansible.builtin.command: kubectl apply -f "{{ k8s_manifests_dir }}/authentik/"
          changed_when: false

        - name: Wait for Authentik PostgreSQL
          ansible.builtin.command: >
            kubectl wait --for=condition=ready pod -l app=authentik-postgresql
            -n elk --timeout=120s
          changed_when: false

        - name: Wait for Authentik Redis
          ansible.builtin.command: >
            kubectl wait --for=condition=ready pod -l app=authentik-redis
            -n elk --timeout=60s
          changed_when: false

        - name: Wait for Authentik Server
          ansible.builtin.command: >
            kubectl wait --for=condition=ready pod -l app=authentik-server
            -n elk --timeout=180s
          changed_when: false

    # =======================================================================
    # PHASE 10 — Kibana
    # =======================================================================
    - name: Deploy Kibana
      tags: [kibana]
      block:
        - name: Apply Kibana manifests
          ansible.builtin.command: kubectl apply -f "{{ k8s_manifests_dir }}/kibana/"
          changed_when: false

        - name: Wait for Kibana to be ready
          ansible.builtin.command: >
            kubectl wait --for=condition=ready pod -l app=kibana
            -n elk --timeout=300s
          changed_when: false

    # =======================================================================
    # PHASE 11 — Logstash
    # =======================================================================
    - name: Deploy Logstash
      tags: [logstash]
      block:
        - name: Apply Logstash manifests
          ansible.builtin.command: kubectl apply -f "{{ k8s_manifests_dir }}/logstash/"
          changed_when: false

        - name: Wait for Logstash to be ready
          ansible.builtin.command: >
            kubectl wait --for=condition=ready pod -l app=logstash
            -n elk --timeout=180s
          changed_when: false

    # =======================================================================
    # PHASE 12 — NGINX Proxy
    # =======================================================================
    - name: Deploy NGINX reverse proxy
      tags: [nginx]
      block:
        - name: Apply NGINX manifests
          ansible.builtin.command: kubectl apply -f "{{ k8s_manifests_dir }}/nginx/"
          changed_when: false

        - name: Wait for NGINX to be ready
          ansible.builtin.command: >
            kubectl wait --for=condition=ready pod -l app=nginx
            -n elk --timeout=60s
          changed_when: false

    # =======================================================================
    # PHASE 13 — DNS / /etc/hosts
    # =======================================================================
    - name: Configure local DNS
      tags: [dns]
      block:
        - name: Get Minikube IP
          ansible.builtin.command: minikube ip
          register: minikube_ip
          changed_when: false

        - name: Add kibana.elk.local to /etc/hosts
          become: true
          ansible.builtin.lineinfile:
            path: /etc/hosts
            regexp: '.*kibana\.elk\.local.*'
            line: "{{ minikube_ip.stdout }} kibana.elk.local authentik.elk.local"
            state: present

    # =======================================================================
    # FINAL — Summary
    # =======================================================================
    - name: Deployment summary
      tags: [always]
      block:
        - name: Get Minikube IP for summary
          ansible.builtin.command: minikube ip
          register: minikube_ip_final
          changed_when: false
          ignore_errors: true

        - name: Print access information
          ansible.builtin.debug:
            msg: |
              ╔══════════════════════════════════════════════════════════════╗
              ║              ELK Stack Deployment Complete!                  ║
              ╠══════════════════════════════════════════════════════════════╣
              ║                                                              ║
              ║  Minikube IP: {{ minikube_ip_final.stdout | default('N/A') }}
              ║                                                              ║
              ║  Kibana (NGINX proxy):                                       ║
              ║    https://kibana.elk.local:30443                            ║
              ║    User: elastic / ElasticP@ss2024!                          ║
              ║                                                              ║
              ║  Authentik Admin:                                            ║
              ║    https://authentik.elk.local:30944                         ║
              ║    User: akadmin / AdminP@ss2024!                            ║
              ║                                                              ║
              ║  Elasticsearch:                                              ║
              ║    kubectl port-forward svc/elasticsearch 9200:9200 -n elk   ║
              ║    User: elastic / ElasticP@ss2024!                          ║
              ║                                                              ║
              ╚══════════════════════════════════════════════════════════════╝

elk-k8s-deployment/ansible/playbooks/teardown.yml (new file, 38 lines)

---
# =============================================================================
# Teardown Playbook — Remove the entire ELK stack deployment
# =============================================================================
# Usage:
#   ansible-playbook -i inventory/hosts.yml playbooks/teardown.yml
# =============================================================================

- name: Teardown ELK Stack
  hosts: localhost
  connection: local
  gather_facts: false

  tasks:
    - name: Delete elk namespace (removes all ELK resources)
      ansible.builtin.command: kubectl delete namespace elk --ignore-not-found --timeout=120s
      changed_when: false

    - name: Delete database namespace (removes MySQL)
      ansible.builtin.command: kubectl delete namespace database --ignore-not-found --timeout=120s
      changed_when: false

    - name: Remove /etc/hosts entries
      become: true
      ansible.builtin.lineinfile:
        path: /etc/hosts
        regexp: '.*kibana\.elk\.local.*'
        state: absent

    - name: Optionally stop Minikube
      ansible.builtin.command: minikube stop
      ignore_errors: true
      changed_when: false
      when: stop_minikube | default(false) | bool

    - name: Print teardown complete
      ansible.builtin.debug:
        msg: "ELK stack teardown complete. Run with -e stop_minikube=true to also stop Minikube."

elk-k8s-deployment/k8s/authentik/authentik.yaml (new file, 305 lines)

# =============================================================================
# Authentik SSO — PostgreSQL, Redis, Server, Worker
# All deployed in the elk namespace
# =============================================================================

# --- PostgreSQL for Authentik ---
apiVersion: v1
kind: Secret
metadata:
  name: authentik-db-secret
  namespace: elk
type: Opaque
stringData:
  POSTGRES_DB: "authentik"
  POSTGRES_USER: "authentik"
  POSTGRES_PASSWORD: "AuthPgP@ss2024!"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: authentik-postgresql
  namespace: elk
  labels:
    app: authentik-postgresql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: authentik-postgresql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: authentik-postgresql
    spec:
      containers:
        - name: postgresql
          image: postgres:16-alpine
          ports:
            - containerPort: 5432
          envFrom:
            - secretRef:
                name: authentik-db-secret
          volumeMounts:
            - name: pg-data
              mountPath: /var/lib/postgresql/data
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "250m"
      volumes:
        - name: pg-data
          persistentVolumeClaim:
            claimName: authentik-pg-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: authentik-pg-pvc
  namespace: elk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: authentik-postgresql
  namespace: elk
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    app: authentik-postgresql

---
# --- Redis for Authentik ---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: authentik-redis
  namespace: elk
  labels:
    app: authentik-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: authentik-redis
  template:
    metadata:
      labels:
        app: authentik-redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          ports:
            - containerPort: 6379
          command: ["redis-server", "--save", "60", "1", "--loglevel", "warning"]
          resources:
            requests:
              memory: "128Mi"
              cpu: "50m"
            limits:
              memory: "256Mi"
              cpu: "100m"
---
apiVersion: v1
kind: Service
metadata:
  name: authentik-redis
  namespace: elk
spec:
  type: ClusterIP
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: authentik-redis

---
# --- Authentik Secret ---
apiVersion: v1
kind: Secret
metadata:
  name: authentik-secret
  namespace: elk
type: Opaque
stringData:
  AUTHENTIK_SECRET_KEY: "Sup3rS3cretK3yForAuth3nt1k2024ThatIsLongEnough!"
  AUTHENTIK_POSTGRESQL__PASSWORD: "AuthPgP@ss2024!"
  AUTHENTIK_BOOTSTRAP_PASSWORD: "AdminP@ss2024!"
  AUTHENTIK_BOOTSTRAP_EMAIL: "admin@elk.local"
  AUTHENTIK_BOOTSTRAP_TOKEN: "bootstrap-token-elk-lab-2024"

---
# --- Authentik Server ---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: authentik-server
  namespace: elk
  labels:
    app: authentik-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: authentik-server
  template:
    metadata:
      labels:
        app: authentik-server
    spec:
      containers:
        - name: authentik-server
          image: ghcr.io/goauthentik/server:2024.12
          command: ["ak", "server"]
          ports:
            - containerPort: 9000
              name: http
            - containerPort: 9443
              name: https
          env:
            - name: AUTHENTIK_REDIS__HOST
              value: "authentik-redis"
            - name: AUTHENTIK_POSTGRESQL__HOST
              value: "authentik-postgresql"
            - name: AUTHENTIK_POSTGRESQL__USER
              value: "authentik"
            - name: AUTHENTIK_POSTGRESQL__NAME
              value: "authentik"
            - name: AUTHENTIK_POSTGRESQL__PASSWORD
              valueFrom:
                secretKeyRef:
                  name: authentik-secret
                  key: AUTHENTIK_POSTGRESQL__PASSWORD
            - name: AUTHENTIK_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: authentik-secret
                  key: AUTHENTIK_SECRET_KEY
            - name: AUTHENTIK_BOOTSTRAP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: authentik-secret
                  key: AUTHENTIK_BOOTSTRAP_PASSWORD
            - name: AUTHENTIK_BOOTSTRAP_EMAIL
              valueFrom:
                secretKeyRef:
                  name: authentik-secret
                  key: AUTHENTIK_BOOTSTRAP_EMAIL
            - name: AUTHENTIK_BOOTSTRAP_TOKEN
              valueFrom:
                secretKeyRef:
                  name: authentik-secret
                  key: AUTHENTIK_BOOTSTRAP_TOKEN
          volumeMounts:
            - name: authentik-certs
              mountPath: /certs
              readOnly: true
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              path: /-/health/ready/
              port: 9000
            initialDelaySeconds: 30
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /-/health/live/
              port: 9000
            initialDelaySeconds: 50
            periodSeconds: 30
      volumes:
        - name: authentik-certs
          secret:
            secretName: elk-certificates
---
apiVersion: v1
kind: Service
metadata:
  name: authentik-server
  namespace: elk
  labels:
    app: authentik-server
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 9000
      protocol: TCP
      name: http
    - port: 443
      targetPort: 9443
      protocol: TCP
      name: https
  selector:
    app: authentik-server

---
# --- Authentik Worker ---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: authentik-worker
  namespace: elk
  labels:
    app: authentik-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: authentik-worker
  template:
    metadata:
      labels:
        app: authentik-worker
    spec:
      containers:
        - name: authentik-worker
          image: ghcr.io/goauthentik/server:2024.12
          command: ["ak", "worker"]
          env:
            - name: AUTHENTIK_REDIS__HOST
              value: "authentik-redis"
            - name: AUTHENTIK_POSTGRESQL__HOST
              value: "authentik-postgresql"
            - name: AUTHENTIK_POSTGRESQL__USER
              value: "authentik"
            - name: AUTHENTIK_POSTGRESQL__NAME
              value: "authentik"
            - name: AUTHENTIK_POSTGRESQL__PASSWORD
              valueFrom:
                secretKeyRef:
                  name: authentik-secret
                  key: AUTHENTIK_POSTGRESQL__PASSWORD
            - name: AUTHENTIK_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: authentik-secret
                  key: AUTHENTIK_SECRET_KEY
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
195
elk-k8s-deployment/k8s/elasticsearch/elasticsearch.yaml
Normal file
@@ -0,0 +1,195 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-config
  namespace: elk
data:
  elasticsearch.yml: |
    cluster.name: elk-lab
    node.name: elasticsearch-0
    network.host: 0.0.0.0
    discovery.type: single-node

    # --- Security (X-Pack) ---
    xpack.security.enabled: true
    xpack.security.enrollment.enabled: false

    # Transport TLS (node-to-node)
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elasticsearch.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elasticsearch.p12

    # HTTP TLS (client-to-node)
    xpack.security.http.ssl.enabled: true
    xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/http.p12
    xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/http.p12

    # Token service for Kibana
    xpack.security.authc.token.enabled: true

    # OIDC realm for Authentik SSO
    xpack.security.authc.realms.oidc.authentik:
      order: 2
      rp.client_id: "kibana"
      rp.response_type: "code"
      rp.redirect_uri: "https://kibana.elk.local/api/security/oidc/callback"
      op.issuer: "https://authentik-server.elk.svc.cluster.local/application/o/kibana/"
      op.authorization_endpoint: "https://authentik-server.elk.svc.cluster.local/application/o/authorize/"
      op.token_endpoint: "https://authentik-server.elk.svc.cluster.local/application/o/token/"
      op.userinfo_endpoint: "https://authentik-server.elk.svc.cluster.local/application/o/userinfo/"
      op.jwkset_path: "https://authentik-server.elk.svc.cluster.local/application/o/kibana/jwks/"
      claims.principal: "preferred_username"
      claims.groups: "groups"
      claims.name: "name"
      claims.mail: "email"
      ssl.certificate_authorities: ["/usr/share/elasticsearch/config/certs/ca.crt"]

    xpack.security.authc.realms.native.native1:
      order: 0

    xpack.security.authc.realms.file.file1:
      order: 1
---
apiVersion: v1
kind: Secret
metadata:
  name: elasticsearch-credentials
  namespace: elk
type: Opaque
stringData:
  ELASTIC_PASSWORD: "ElasticP@ss2024!"
  ES_KEYSTORE_PASS: "changeit"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: elk
  labels:
    app: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      securityContext:
        fsGroup: 1000
      initContainers:
        - name: set-permissions
          image: busybox:1.36
          command: ['sh', '-c', 'chown -R 1000:1000 /usr/share/elasticsearch/data']
          volumeMounts:
            - name: es-data
              mountPath: /usr/share/elasticsearch/data
        - name: set-vm-max-map-count
          image: busybox:1.36
          command: ['sysctl', '-w', 'vm.max_map_count=262144']
          securityContext:
            privileged: true
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:8.17.0
          ports:
            - containerPort: 9200
              name: http
            - containerPort: 9300
              name: transport
          env:
            - name: ELASTIC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: elasticsearch-credentials
                  key: ELASTIC_PASSWORD
            - name: ES_JAVA_OPTS
              value: "-Xms1g -Xmx1g"
            - name: xpack.security.transport.ssl.keystore.secure_password
              valueFrom:
                secretKeyRef:
                  name: elasticsearch-credentials
                  key: ES_KEYSTORE_PASS
            - name: xpack.security.transport.ssl.truststore.secure_password
              valueFrom:
                secretKeyRef:
                  name: elasticsearch-credentials
                  key: ES_KEYSTORE_PASS
            - name: xpack.security.http.ssl.keystore.secure_password
              valueFrom:
                secretKeyRef:
                  name: elasticsearch-credentials
                  key: ES_KEYSTORE_PASS
            - name: xpack.security.http.ssl.truststore.secure_password
              valueFrom:
                secretKeyRef:
                  name: elasticsearch-credentials
                  key: ES_KEYSTORE_PASS
          volumeMounts:
            - name: es-data
              mountPath: /usr/share/elasticsearch/data
            - name: es-config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml
            - name: es-certs
              mountPath: /usr/share/elasticsearch/config/certs
              readOnly: true
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
            limits:
              memory: "3Gi"
              cpu: "1000m"
          readinessProbe:
            exec:
              command:
                - bash
                - -c
                - |
                  curl -s --cacert /usr/share/elasticsearch/config/certs/ca.crt \
                    -u "elastic:${ELASTIC_PASSWORD}" \
                    https://localhost:9200/_cluster/health | grep -qE '"status":"(green|yellow)"'
            initialDelaySeconds: 60
            periodSeconds: 15
            timeoutSeconds: 10
      volumes:
        - name: es-config
          configMap:
            name: elasticsearch-config
        - name: es-certs
          secret:
            secretName: elk-certificates
  volumeClaimTemplates:
    - metadata:
        name: es-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: elk
  labels:
    app: elasticsearch
spec:
  type: ClusterIP
  ports:
    - port: 9200
      targetPort: 9200
      protocol: TCP
      name: http
    - port: 9300
      targetPort: 9300
      protocol: TCP
      name: transport
  selector:
    app: elasticsearch
140
elk-k8s-deployment/k8s/elasticsearch/setup-job.yaml
Normal file
@@ -0,0 +1,140 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: elasticsearch-setup
  namespace: elk
spec:
  backoffLimit: 5
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: setup
          image: curlimages/curl:8.5.0
          command:
            - sh
            - -c
            - |
              set -e
              ES_URL="https://elasticsearch:9200"
              CA_CERT="/certs/ca.crt"
              ELASTIC_PASS="ElasticP@ss2024!"
              KIBANA_PASS="KibanaP@ss2024!"

              echo "=== Waiting for Elasticsearch to be ready ==="
              until curl -s --cacert $CA_CERT -u "elastic:${ELASTIC_PASS}" \
                  "${ES_URL}/_cluster/health" | grep -qE '"status":"(green|yellow)"'; do
                echo "Waiting for Elasticsearch..."
                sleep 10
              done
              echo "Elasticsearch is ready!"

              echo "=== Setting kibana_system password ==="
              curl -s --cacert $CA_CERT -u "elastic:${ELASTIC_PASS}" \
                -X POST "${ES_URL}/_security/user/kibana_system/_password" \
                -H "Content-Type: application/json" \
                -d "{\"password\": \"${KIBANA_PASS}\"}"

              echo ""
              echo "=== Creating OIDC role mapping ==="
              curl -s --cacert $CA_CERT -u "elastic:${ELASTIC_PASS}" \
                -X PUT "${ES_URL}/_security/role_mapping/authentik_users" \
                -H "Content-Type: application/json" \
                -d '{
                  "roles": ["superuser"],
                  "enabled": true,
                  "rules": {
                    "field": {
                      "realm.name": "authentik"
                    }
                  },
                  "metadata": {
                    "description": "Map Authentik SSO users to superuser role"
                  }
                }'

              echo ""
              echo "=== Creating logstash_writer role ==="
              curl -s --cacert $CA_CERT -u "elastic:${ELASTIC_PASS}" \
                -X PUT "${ES_URL}/_security/role/logstash_writer" \
                -H "Content-Type: application/json" \
                -d '{
                  "cluster": ["manage_index_templates", "monitor", "manage_ilm"],
                  "indices": [
                    {
                      "names": ["employees-*", "access-logs-*", "logstash-*"],
                      "privileges": ["write", "create", "create_index", "manage", "auto_configure"]
                    }
                  ]
                }'

              echo ""
              echo "=== Creating index templates ==="
              curl -s --cacert $CA_CERT -u "elastic:${ELASTIC_PASS}" \
                -X PUT "${ES_URL}/_index_template/employees-template" \
                -H "Content-Type: application/json" \
                -d '{
                  "index_patterns": ["employees-*"],
                  "template": {
                    "settings": {
                      "number_of_shards": 1,
                      "number_of_replicas": 0
                    },
                    "mappings": {
                      "properties": {
                        "id":          { "type": "integer" },
                        "first_name":  { "type": "keyword" },
                        "last_name":   { "type": "keyword" },
                        "email":       { "type": "keyword" },
                        "department":  { "type": "keyword" },
                        "salary":      { "type": "float" },
                        "salary_band": { "type": "keyword" },
                        "hire_date":   { "type": "date" },
                        "is_active":   { "type": "boolean" },
                        "updated_at":  { "type": "date" }
                      }
                    }
                  }
                }'

              echo ""
              curl -s --cacert $CA_CERT -u "elastic:${ELASTIC_PASS}" \
                -X PUT "${ES_URL}/_index_template/access-logs-template" \
                -H "Content-Type: application/json" \
                -d '{
                  "index_patterns": ["access-logs-*"],
                  "template": {
                    "settings": {
                      "number_of_shards": 1,
                      "number_of_replicas": 0
                    },
                    "mappings": {
                      "properties": {
                        "id":               { "type": "integer" },
                        "employee_id":      { "type": "integer" },
                        "first_name":       { "type": "keyword" },
                        "last_name":        { "type": "keyword" },
                        "department":       { "type": "keyword" },
                        "action":           { "type": "keyword" },
                        "resource":         { "type": "keyword" },
                        "ip_address":       { "type": "ip" },
                        "status_code":      { "type": "integer" },
                        "status_category":  { "type": "keyword" },
                        "response_time_ms": { "type": "integer" },
                        "performance_tier": { "type": "keyword" },
                        "created_at":       { "type": "date" }
                      }
                    }
                  }
                }'

              echo ""
              echo "=== Setup complete! ==="
          volumeMounts:
            - name: certs
              mountPath: /certs
              readOnly: true
      volumes:
        - name: certs
          secret:
            secretName: elk-certificates
124
elk-k8s-deployment/k8s/kibana/kibana.yaml
Normal file
@@ -0,0 +1,124 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-config
  namespace: elk
data:
  kibana.yml: |
    server.name: kibana
    server.host: "0.0.0.0"
    server.port: 5601

    # --- Elasticsearch connection ---
    elasticsearch.hosts: ["https://elasticsearch:9200"]
    elasticsearch.username: "kibana_system"
    elasticsearch.password: "${KIBANA_SYSTEM_PASSWORD}"
    elasticsearch.ssl.certificateAuthorities: ["/usr/share/kibana/config/certs/ca.crt"]

    # --- Kibana TLS (internal, behind NGINX) ---
    server.ssl.enabled: false

    # --- OIDC authentication via Authentik ---
    xpack.security.authc.providers:
      oidc.authentik:
        order: 0
        realm: "authentik"
        description: "Log in with Authentik SSO"
      basic.basic1:
        order: 1
        description: "Log in with username/password"

    # --- Public-facing URL (through NGINX proxy) ---
    server.publicBaseUrl: "https://kibana.elk.local"

    # --- Encryption keys ---
    xpack.security.encryptionKey: "a]eSC7?L8dq$Kvkw3p^uS2Wa!bNf9GYz"
    xpack.encryptedSavedObjects.encryptionKey: "R4nd0mStr1ng0f32Ch4r4ctersHer3!!"
    xpack.reporting.encryptionKey: "An0therR4nd0m32Ch4racterStr1ng!!"

    # --- Monitoring ---
    monitoring.ui.enabled: true
---
apiVersion: v1
kind: Secret
metadata:
  name: kibana-secret
  namespace: elk
type: Opaque
stringData:
  KIBANA_SYSTEM_PASSWORD: "KibanaP@ss2024!"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: elk
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:8.17.0
          ports:
            - containerPort: 5601
              name: http
          env:
            - name: KIBANA_SYSTEM_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: kibana-secret
                  key: KIBANA_SYSTEM_PASSWORD
          volumeMounts:
            - name: kibana-config
              mountPath: /usr/share/kibana/config/kibana.yml
              subPath: kibana.yml
            - name: kibana-certs
              mountPath: /usr/share/kibana/config/certs
              readOnly: true
          resources:
            requests:
              memory: "1Gi"
              cpu: "250m"
            limits:
              memory: "2Gi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              path: /api/status
              port: 5601
            initialDelaySeconds: 60
            periodSeconds: 15
            timeoutSeconds: 10
      volumes:
        - name: kibana-config
          configMap:
            name: kibana-config
        - name: kibana-certs
          secret:
            secretName: elk-certificates
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: elk
  labels:
    app: kibana
spec:
  type: ClusterIP
  ports:
    - port: 5601
      targetPort: 5601
      protocol: TCP
      name: http
  selector:
    app: kibana
267
elk-k8s-deployment/k8s/logstash/logstash.yaml
Normal file
@@ -0,0 +1,267 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: elk
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    http.port: 9600
    xpack.monitoring.enabled: true
    xpack.monitoring.elasticsearch.hosts: ["https://elasticsearch:9200"]
    xpack.monitoring.elasticsearch.username: "elastic"
    xpack.monitoring.elasticsearch.password: "${ES_PASSWORD}"
    xpack.monitoring.elasticsearch.ssl.certificate_authority: "/usr/share/logstash/config/certs/ca.crt"

  pipelines.yml: |
    - pipeline.id: mysql-employees
      path.config: "/usr/share/logstash/pipeline/employees.conf"
      pipeline.workers: 1
    - pipeline.id: mysql-access-logs
      path.config: "/usr/share/logstash/pipeline/access_logs.conf"
      pipeline.workers: 1
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline
  namespace: elk
data:
  employees.conf: |
    input {
      jdbc {
        jdbc_driver_library => "/usr/share/logstash/vendor/mysql-connector-j.jar"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        jdbc_connection_string => "jdbc:mysql://mysql.database.svc.cluster.local:3306/elkdemo"
        jdbc_user => "logstash"
        jdbc_password => "logstash_password"
        schedule => "*/2 * * * *"
        statement => "SELECT * FROM employees WHERE updated_at > :sql_last_value ORDER BY updated_at ASC"
        use_column_value => true
        tracking_column => "updated_at"
        tracking_column_type => "timestamp"
        last_run_metadata_path => "/usr/share/logstash/data/.logstash_employees_last_run"
        type => "employee"
        tags => ["mysql", "employees"]
      }
    }

    filter {
      mutate {
        remove_field => ["@version"]
        add_field => { "[@metadata][index_name]" => "employees" }
      }
      if [salary] {
        ruby {
          code => '
            salary = event.get("salary").to_f
            if salary >= 100000
              event.set("salary_band", "senior")
            elsif salary >= 75000
              event.set("salary_band", "mid")
            else
              event.set("salary_band", "junior")
            end
          '
        }
      }
    }

    output {
      elasticsearch {
        hosts => ["https://elasticsearch:9200"]
        user => "elastic"
        password => "${ES_PASSWORD}"
        ssl_enabled => true
        ssl_certificate_authorities => ["/usr/share/logstash/config/certs/ca.crt"]
        index => "employees-%{+YYYY.MM}"
        document_id => "%{id}"
        action => "index"
      }
      stdout {
        codec => dots
      }
    }

  access_logs.conf: |
    input {
      jdbc {
        jdbc_driver_library => "/usr/share/logstash/vendor/mysql-connector-j.jar"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        jdbc_connection_string => "jdbc:mysql://mysql.database.svc.cluster.local:3306/elkdemo"
        jdbc_user => "logstash"
        jdbc_password => "logstash_password"
        schedule => "*/1 * * * *"
        statement => "SELECT a.*, e.first_name, e.last_name, e.department FROM access_logs a JOIN employees e ON a.employee_id = e.id WHERE a.created_at > :sql_last_value ORDER BY a.created_at ASC"
        use_column_value => true
        tracking_column => "created_at"
        tracking_column_type => "timestamp"
        last_run_metadata_path => "/usr/share/logstash/data/.logstash_access_logs_last_run"
        type => "access_log"
        tags => ["mysql", "access_logs"]
      }
    }

    filter {
      mutate {
        remove_field => ["@version"]
        add_field => { "[@metadata][index_name]" => "access-logs" }
      }
      if [status_code] {
        if [status_code] >= 200 and [status_code] < 300 {
          mutate { add_field => { "status_category" => "success" } }
        } else if [status_code] >= 300 and [status_code] < 400 {
          mutate { add_field => { "status_category" => "redirect" } }
        } else if [status_code] >= 400 and [status_code] < 500 {
          mutate { add_field => { "status_category" => "client_error" } }
        } else if [status_code] >= 500 {
          mutate { add_field => { "status_category" => "server_error" } }
        }
      }
      if [response_time_ms] {
        ruby {
          code => '
            rt = event.get("response_time_ms").to_i
            if rt <= 50
              event.set("performance_tier", "fast")
            elsif rt <= 200
              event.set("performance_tier", "normal")
            elsif rt <= 1000
              event.set("performance_tier", "slow")
            else
              event.set("performance_tier", "critical")
            end
          '
        }
      }
    }

    output {
      elasticsearch {
        hosts => ["https://elasticsearch:9200"]
        user => "elastic"
        password => "${ES_PASSWORD}"
        ssl_enabled => true
        ssl_certificate_authorities => ["/usr/share/logstash/config/certs/ca.crt"]
        index => "access-logs-%{+YYYY.MM}"
        document_id => "%{id}"
        action => "index"
      }
      stdout {
        codec => rubydebug
      }
    }
---
apiVersion: v1
kind: Secret
metadata:
  name: logstash-secret
  namespace: elk
type: Opaque
stringData:
  ES_PASSWORD: "ElasticP@ss2024!"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: elk
  labels:
    app: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      initContainers:
        # Download MySQL JDBC driver
        - name: download-jdbc-driver
          image: curlimages/curl:8.5.0
          command:
            - sh
            - -c
            - |
              curl -L -o /jdbc-driver/mysql-connector-j.jar \
                "https://repo1.maven.org/maven2/com/mysql/mysql-connector-j/8.3.0/mysql-connector-j-8.3.0.jar"
          volumeMounts:
            - name: jdbc-driver
              mountPath: /jdbc-driver
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:8.17.0
          ports:
            - containerPort: 9600
              name: monitoring
            - containerPort: 5044
              name: beats
          env:
            - name: ES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: logstash-secret
                  key: ES_PASSWORD
            - name: LS_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
          volumeMounts:
            - name: logstash-config
              mountPath: /usr/share/logstash/config/logstash.yml
              subPath: logstash.yml
            - name: logstash-config
              mountPath: /usr/share/logstash/config/pipelines.yml
              subPath: pipelines.yml
            - name: logstash-pipeline
              mountPath: /usr/share/logstash/pipeline
            - name: logstash-certs
              mountPath: /usr/share/logstash/config/certs
              readOnly: true
            - name: jdbc-driver
              mountPath: /usr/share/logstash/vendor
            - name: logstash-data
              mountPath: /usr/share/logstash/data
          resources:
            requests:
              memory: "1Gi"
              cpu: "250m"
            limits:
              memory: "2Gi"
              cpu: "500m"
      volumes:
        - name: logstash-config
          configMap:
            name: logstash-config
        - name: logstash-pipeline
          configMap:
            name: logstash-pipeline
        - name: logstash-certs
          secret:
            secretName: elk-certificates
        - name: jdbc-driver
          emptyDir: {}
        - name: logstash-data
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: elk
  labels:
    app: logstash
spec:
  type: ClusterIP
  ports:
    - port: 9600
      targetPort: 9600
      protocol: TCP
      name: monitoring
    - port: 5044
      targetPort: 5044
      protocol: TCP
      name: beats
  selector:
    app: logstash
67
elk-k8s-deployment/k8s/mysql/configmap.yaml
Normal file
@@ -0,0 +1,67 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-init-db
  namespace: database
data:
  init.sql: |
    CREATE DATABASE IF NOT EXISTS elkdemo;
    USE elkdemo;

    CREATE TABLE IF NOT EXISTS employees (
      id INT AUTO_INCREMENT PRIMARY KEY,
      first_name VARCHAR(50) NOT NULL,
      last_name VARCHAR(50) NOT NULL,
      email VARCHAR(100) NOT NULL,
      department VARCHAR(50) NOT NULL,
      salary DECIMAL(10,2) NOT NULL,
      hire_date DATE NOT NULL,
      is_active BOOLEAN DEFAULT TRUE,
      updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
    );

    CREATE TABLE IF NOT EXISTS access_logs (
      id INT AUTO_INCREMENT PRIMARY KEY,
      employee_id INT,
      action VARCHAR(100) NOT NULL,
      resource VARCHAR(200) NOT NULL,
      ip_address VARCHAR(45),
      status_code INT,
      response_time_ms INT,
      created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
      FOREIGN KEY (employee_id) REFERENCES employees(id)
    );

    INSERT INTO employees (first_name, last_name, email, department, salary, hire_date, is_active) VALUES
    ('Alice', 'Johnson', 'alice.johnson@example.com', 'Engineering', 95000.00, '2021-03-15', TRUE),
    ('Bob', 'Smith', 'bob.smith@example.com', 'Marketing', 72000.00, '2020-07-22', TRUE),
    ('Charlie', 'Williams', 'charlie.w@example.com', 'Engineering', 105000.00, '2019-01-10', TRUE),
    ('Diana', 'Brown', 'diana.brown@example.com', 'HR', 68000.00, '2022-05-01', TRUE),
    ('Eve', 'Davis', 'eve.davis@example.com', 'Finance', 88000.00, '2020-11-30', TRUE),
    ('Frank', 'Miller', 'frank.miller@example.com', 'Engineering', 110000.00, '2018-09-14', FALSE),
    ('Grace', 'Wilson', 'grace.wilson@example.com', 'Marketing', 76000.00, '2021-08-20', TRUE),
    ('Henry', 'Moore', 'henry.moore@example.com', 'Finance', 92000.00, '2019-04-05', TRUE),
    ('Ivy', 'Taylor', 'ivy.taylor@example.com', 'HR', 65000.00, '2023-01-15', TRUE),
    ('Jack', 'Anderson', 'jack.anderson@example.com', 'Engineering', 98000.00, '2022-02-28', TRUE);

    INSERT INTO access_logs (employee_id, action, resource, ip_address, status_code, response_time_ms) VALUES
    (1, 'GET', '/api/projects', '10.0.1.15', 200, 45),
    (1, 'POST', '/api/projects/new', '10.0.1.15', 201, 120),
    (2, 'GET', '/api/campaigns', '10.0.2.20', 200, 67),
    (3, 'PUT', '/api/deployments/12', '10.0.1.30', 200, 230),
    (3, 'DELETE', '/api/cache', '10.0.1.30', 204, 15),
    (4, 'GET', '/api/employees', '10.0.3.10', 200, 89),
    (5, 'GET', '/api/reports/q3', '10.0.4.5', 200, 340),
    (5, 'POST', '/api/reports/q4', '10.0.4.5', 403, 12),
    (6, 'GET', '/api/admin/settings', '10.0.1.50', 401, 8),
    (7, 'GET', '/api/campaigns/15', '10.0.2.25', 404, 22),
    (8, 'POST', '/api/invoices', '10.0.4.8', 201, 178),
    (9, 'GET', '/api/employees/self', '10.0.3.15', 200, 34),
    (10,'POST', '/api/code-review', '10.0.1.60', 200, 567),
    (1, 'GET', '/api/projects/5', '10.0.1.15', 200, 51),
    (3, 'POST', '/api/deployments', '10.0.1.30', 500, 2100);

    -- Grant read access for logstash user
    CREATE USER IF NOT EXISTS 'logstash'@'%' IDENTIFIED BY 'logstash_password';
    GRANT SELECT ON elkdemo.* TO 'logstash'@'%';
    FLUSH PRIVILEGES;
102
elk-k8s-deployment/k8s/mysql/deployment.yaml
Normal file
@@ -0,0 +1,102 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: database
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.4
          ports:
            - containerPort: 3306
              name: mysql
          envFrom:
            - secretRef:
                name: mysql-secret
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql
            - name: mysql-init
              mountPath: /docker-entrypoint-initdb.d
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
          readinessProbe:
            exec:
              command:
                - mysqladmin
                - ping
                - -h
                - localhost
                - -u
                - root
                - -prootpassword
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - mysqladmin
                - ping
                - -h
                - localhost
                - -u
                - root
                - -prootpassword
            initialDelaySeconds: 60
            periodSeconds: 20
            timeoutSeconds: 5
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: mysql-pvc
        - name: mysql-init
          configMap:
            name: mysql-init-db
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: database
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: database
  labels:
    app: mysql
spec:
  type: ClusterIP
  ports:
    - port: 3306
      targetPort: 3306
      protocol: TCP
      name: mysql
  selector:
    app: mysql
9
elk-k8s-deployment/k8s/mysql/secret.yaml
Normal file
@@ -0,0 +1,9 @@
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  namespace: database
type: Opaque
stringData:
  MYSQL_ROOT_PASSWORD: "rootpassword"
  MYSQL_DATABASE: "elkdemo"
13
elk-k8s-deployment/k8s/namespaces/namespaces.yaml
Normal file
@@ -0,0 +1,13 @@
apiVersion: v1
kind: Namespace
metadata:
  name: elk
  labels:
    app.kubernetes.io/part-of: elk-stack
---
apiVersion: v1
kind: Namespace
metadata:
  name: database
  labels:
    app.kubernetes.io/part-of: elk-stack
186
elk-k8s-deployment/k8s/nginx/nginx.yaml
Normal file
@@ -0,0 +1,186 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: elk
data:
  nginx.conf: |
    worker_processes auto;
    error_log /var/log/nginx/error.log warn;
    pid /tmp/nginx.pid;

    events {
      worker_connections 1024;
    }

    http {
      include /etc/nginx/mime.types;
      default_type application/octet-stream;
      sendfile on;
      keepalive_timeout 65;
      client_max_body_size 50m;

      # Logging
      log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent"';
      access_log /var/log/nginx/access.log main;

      # --- Redirect HTTP to HTTPS ---
      server {
        listen 80;
        server_name kibana.elk.local;
        return 301 https://$host$request_uri;
      }

      # --- Kibana HTTPS Proxy ---
      server {
        listen 443 ssl;
        server_name kibana.elk.local;

        ssl_certificate /etc/nginx/certs/nginx.crt;
        ssl_certificate_key /etc/nginx/certs/nginx.key;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;

        # Security headers
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

        location / {
          proxy_pass http://kibana:5601;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_set_header X-Forwarded-Host $host;
          proxy_set_header X-Forwarded-Port $server_port;

          # WebSocket support (for Kibana live features)
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "upgrade";

          proxy_buffering off;
          proxy_read_timeout 90;
        }
      }

      # --- Authentik HTTPS Proxy ---
      server {
        listen 9443 ssl;
        server_name authentik.elk.local;

        ssl_certificate /etc/nginx/certs/authentik.crt;
        ssl_certificate_key /etc/nginx/certs/authentik.key;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        location / {
          proxy_pass http://authentik-server:80;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;

          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "upgrade";
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: elk
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
              name: http
            - containerPort: 443
              name: https
            - containerPort: 9443
              name: authentik-https
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
            - name: nginx-certs
              mountPath: /etc/nginx/certs
              readOnly: true
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 20
      volumes:
        - name: nginx-config
          configMap:
            name: nginx-config
        - name: nginx-certs
          secret:
            secretName: elk-certificates
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: elk
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      nodePort: 30443
      protocol: TCP
      name: https
    - port: 9443
      targetPort: 9443
      nodePort: 30944
      protocol: TCP
      name: authentik-https
  selector:
    app: nginx
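For the `server_name` directives in the ConfigMap to match requests from the host machine, the hostnames must resolve to the Minikube node IP (the address below is illustrative; substitute the output of `minikube ip`):

```
# /etc/hosts
192.168.49.2  kibana.elk.local
192.168.49.2  authentik.elk.local
```

Kibana is then reachable at `https://kibana.elk.local:30443` and Authentik at `https://authentik.elk.local:30944` through the NodePort service.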
115
elk-k8s-deployment/scripts/configure-authentik-oidc.sh
Normal file
@@ -0,0 +1,115 @@
#!/usr/bin/env bash
# =============================================================================
# configure-authentik-oidc.sh
# Configures Authentik with an OIDC Provider and Application for Kibana SSO.
# Run AFTER Authentik is fully started and accessible.
# =============================================================================
set -euo pipefail

AUTHENTIK_URL="${1:-http://localhost:9000}"
BOOTSTRAP_TOKEN="${2:-bootstrap-token-elk-lab-2024}"
KIBANA_URL="https://kibana.elk.local:30443"

AUTH_HEADER="Authorization: Bearer ${BOOTSTRAP_TOKEN}"
CT="Content-Type: application/json"

echo "=== Configuring Authentik OIDC for Kibana ==="
echo "Authentik URL: ${AUTHENTIK_URL}"

# Wait for Authentik
echo ">>> Waiting for Authentik API..."
until curl -sf "${AUTHENTIK_URL}/-/health/ready/" > /dev/null 2>&1; do
  echo "    Waiting..."
  sleep 5
done
echo "    Authentik is ready!"

# --- Step 1: Create a certificate-key pair (optional, for signed JWTs) ---
echo ">>> Creating certificate key pair..."
CERT_RESP=$(curl -sf -X POST "${AUTHENTIK_URL}/api/v3/crypto/certificatekeypairs/generate/" \
  -H "${AUTH_HEADER}" -H "${CT}" \
  -d '{
    "common_name": "kibana-oidc-signing",
    "subject_alt_name": "kibana.elk.local",
    "validity_days": 365
  }' 2>/dev/null || echo '{}')
CERT_ID=$(echo "$CERT_RESP" | python3 -c "import sys,json; print(json.load(sys.stdin).get('pk',''))" 2>/dev/null || echo "")
echo "    Certificate ID: ${CERT_ID:-skipped}"

# --- Step 2: Check scope mappings (informational; the defaults usually suffice) ---
echo ">>> Checking scope mappings..."
SCOPES_RESP=$(curl -sf "${AUTHENTIK_URL}/api/v3/propertymappings/scope/?ordering=scope_name" \
  -H "${AUTH_HEADER}" 2>/dev/null || echo '{"results":[]}')

# --- Step 3: Create OIDC Provider ---
echo ">>> Creating OIDC Provider for Kibana..."
# Render signing_key as a quoted JSON string, or JSON null when Step 1 was skipped.
if [ -n "$CERT_ID" ]; then
  SIGNING_KEY="\"${CERT_ID}\""
else
  SIGNING_KEY="null"
fi
PROVIDER_BODY=$(cat <<PROVIDER_JSON
{
  "name": "Kibana OIDC Provider",
  "authorization_flow": "default-provider-authorization-implicit-consent",
  "client_type": "confidential",
  "client_id": "kibana",
  "client_secret": "kibana-client-secret-2024",
  "redirect_uris": "${KIBANA_URL}/api/security/oidc/callback",
  "signing_key": ${SIGNING_KEY},
  "sub_mode": "user_username",
  "issuer_mode": "per_provider",
  "access_code_validity": "minutes=1",
  "access_token_validity": "minutes=5",
  "refresh_token_validity": "days=30"
}
PROVIDER_JSON
)

PROVIDER_RESP=$(curl -sf -X POST "${AUTHENTIK_URL}/api/v3/providers/oauth2/" \
  -H "${AUTH_HEADER}" -H "${CT}" \
  -d "${PROVIDER_BODY}" 2>/dev/null || echo '{}')
PROVIDER_ID=$(echo "$PROVIDER_RESP" | python3 -c "import sys,json; print(json.load(sys.stdin).get('pk',''))" 2>/dev/null || echo "")

if [ -z "$PROVIDER_ID" ]; then
  echo "    Provider may already exist, looking it up..."
  PROVIDER_ID=$(curl -sf "${AUTHENTIK_URL}/api/v3/providers/oauth2/?search=Kibana" \
    -H "${AUTH_HEADER}" | python3 -c "import sys,json; r=json.load(sys.stdin)['results']; print(r[0]['pk'] if r else '')" 2>/dev/null || echo "")
fi
if [ -z "$PROVIDER_ID" ]; then
  echo "ERROR: could not create or find the Kibana OIDC provider." >&2
  exit 1
fi
echo "    Provider ID: ${PROVIDER_ID}"

# --- Step 4: Create Application ---
echo ">>> Creating Kibana Application..."
APP_RESP=$(curl -sf -X POST "${AUTHENTIK_URL}/api/v3/core/applications/" \
  -H "${AUTH_HEADER}" -H "${CT}" \
  -d "{
    \"name\": \"Kibana\",
    \"slug\": \"kibana\",
    \"provider\": ${PROVIDER_ID},
    \"meta_launch_url\": \"${KIBANA_URL}\",
    \"meta_description\": \"ELK Stack - Kibana Dashboard\",
    \"policy_engine_mode\": \"any\"
  }" 2>/dev/null || echo '{}')
APP_SLUG=$(echo "$APP_RESP" | python3 -c "import sys,json; print(json.load(sys.stdin).get('slug',''))" 2>/dev/null || echo "")

if [ -z "$APP_SLUG" ]; then
  echo "    Application may already exist."
  APP_SLUG="kibana"
fi
echo "    Application slug: ${APP_SLUG}"

echo ""
echo "╔══════════════════════════════════════════════════════════════╗"
echo "║  Authentik OIDC Configuration Complete!                      ║"
echo "╠══════════════════════════════════════════════════════════════╣"
echo "║                                                              ║"
echo "║  OIDC Provider:  Kibana OIDC Provider                        ║"
echo "║  Client ID:      kibana                                      ║"
echo "║  Client Secret:  kibana-client-secret-2024                   ║"
echo "║  Application:    kibana                                      ║"
echo "║                                                              ║"
echo "║  Issuer URL:                                                 ║"
echo "║    ${AUTHENTIK_URL}/application/o/kibana/                    ║"
echo "║                                                              ║"
echo "║  Endpoints:                                                  ║"
echo "║    Authorization: .../application/o/authorize/               ║"
echo "║    Token:         .../application/o/token/                   ║"
echo "║    UserInfo:      .../application/o/userinfo/                ║"
echo "║    JWKS:          .../application/o/kibana/jwks/             ║"
echo "║                                                              ║"
echo "╚══════════════════════════════════════════════════════════════╝"
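The trickiest part of the provider request is emitting `signing_key` as either a quoted JSON string or a bare JSON `null`, depending on whether the certificate step succeeded. The pattern can be exercised in isolation (function name is illustrative):

```shell
#!/usr/bin/env bash
# Render a shell variable as a JSON value: quoted string when set, null when empty.
render_signing_key() {
  local cert_id="$1"
  if [ -n "$cert_id" ]; then
    printf '"%s"' "$cert_id"
  else
    printf 'null'
  fi
}

echo "{\"signing_key\": $(render_signing_key "")}"     # {"signing_key": null}
echo "{\"signing_key\": $(render_signing_key "42")}"   # {"signing_key": "42"}
```

An explicit branch is easier to audit than chaining `${VAR:+...}${VAR:-...}` expansions, which silently produce invalid JSON (both halves expand) when the variable is set.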
118
elk-k8s-deployment/scripts/generate-certs.sh
Normal file
@@ -0,0 +1,118 @@
#!/usr/bin/env bash
# =============================================================================
# generate-certs.sh — Generate a custom CA and TLS certificates for the ELK stack
# =============================================================================
set -euo pipefail

CERT_DIR="${1:-./certs}"
DAYS_VALID=825
CA_SUBJECT="/C=US/ST=State/L=City/O=ELK-Lab/OU=Infrastructure/CN=ELK-Lab-CA"
DOMAIN="elk.local"

mkdir -p "${CERT_DIR}"

echo ">>> Generating custom CA..."
openssl genrsa -out "${CERT_DIR}/ca.key" 4096
openssl req -x509 -new -nodes \
  -key "${CERT_DIR}/ca.key" \
  -sha256 -days ${DAYS_VALID} \
  -out "${CERT_DIR}/ca.crt" \
  -subj "${CA_SUBJECT}"

# --- Generate a certificate signed by the CA ---
generate_cert() {
  local NAME="$1"
  local CN="$2"
  local SANS="$3"

  echo ">>> Generating certificate for ${NAME} (CN=${CN})..."

  openssl genrsa -out "${CERT_DIR}/${NAME}.key" 2048

  cat > "${CERT_DIR}/${NAME}.cnf" <<SSLCNF
[req]
distinguished_name = req_dn
req_extensions     = v3_req
prompt             = no

[req_dn]
C  = US
ST = State
L  = City
O  = ELK-Lab
OU = ${NAME}
CN = ${CN}

[v3_req]
basicConstraints = CA:FALSE
keyUsage         = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName   = ${SANS}
SSLCNF

  openssl req -new -nodes \
    -key "${CERT_DIR}/${NAME}.key" \
    -out "${CERT_DIR}/${NAME}.csr" \
    -config "${CERT_DIR}/${NAME}.cnf"

  openssl x509 -req \
    -in "${CERT_DIR}/${NAME}.csr" \
    -CA "${CERT_DIR}/ca.crt" \
    -CAkey "${CERT_DIR}/ca.key" \
    -CAcreateserial \
    -out "${CERT_DIR}/${NAME}.crt" \
    -days ${DAYS_VALID} \
    -sha256 \
    -extensions v3_req \
    -extfile "${CERT_DIR}/${NAME}.cnf"

  rm -f "${CERT_DIR}/${NAME}.csr" "${CERT_DIR}/${NAME}.cnf"
}

# --- Elasticsearch ---
generate_cert "elasticsearch" "elasticsearch" \
  "DNS:elasticsearch,DNS:elasticsearch.elk.svc.cluster.local,DNS:elasticsearch.elk.svc,DNS:localhost,IP:127.0.0.1"

# --- Kibana ---
generate_cert "kibana" "kibana" \
  "DNS:kibana,DNS:kibana.elk.svc.cluster.local,DNS:kibana.elk.svc,DNS:localhost,IP:127.0.0.1"

# --- Logstash ---
generate_cert "logstash" "logstash" \
  "DNS:logstash,DNS:logstash.elk.svc.cluster.local,DNS:logstash.elk.svc,DNS:localhost,IP:127.0.0.1"

# --- NGINX ---
generate_cert "nginx" "kibana.${DOMAIN}" \
  "DNS:kibana.${DOMAIN},DNS:nginx,DNS:nginx.elk.svc.cluster.local,DNS:localhost,IP:127.0.0.1"

# --- Authentik ---
generate_cert "authentik" "authentik.${DOMAIN}" \
  "DNS:authentik.${DOMAIN},DNS:authentik,DNS:authentik-server,DNS:authentik-server.elk.svc.cluster.local,DNS:localhost,IP:127.0.0.1"

# --- Create Elasticsearch PKCS12 keystore ---
echo ">>> Creating Elasticsearch PKCS12 keystore..."
openssl pkcs12 -export \
  -in "${CERT_DIR}/elasticsearch.crt" \
  -inkey "${CERT_DIR}/elasticsearch.key" \
  -CAfile "${CERT_DIR}/ca.crt" \
  -chain \
  -out "${CERT_DIR}/elasticsearch.p12" \
  -passout pass:changeit

# --- The Elasticsearch HTTP keystore reuses the same cert/key pair ---
cp "${CERT_DIR}/elasticsearch.p12" "${CERT_DIR}/elastic-http.p12"

# --- Clean up the CA serial file ---
rm -f "${CERT_DIR}/ca.srl"

echo ""
echo "=== Certificates generated in ${CERT_DIR}/ ==="
ls -la "${CERT_DIR}/"
echo ""
echo "CA certificate:      ${CERT_DIR}/ca.crt"
echo "CA private key:      ${CERT_DIR}/ca.key"
echo "Elasticsearch cert:  ${CERT_DIR}/elasticsearch.crt"
echo "Kibana cert:         ${CERT_DIR}/kibana.crt"
echo "Logstash cert:       ${CERT_DIR}/logstash.crt"
echo "NGINX cert:          ${CERT_DIR}/nginx.crt"
echo "Authentik cert:      ${CERT_DIR}/authentik.crt"
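The CA-signs-leaf flow the script relies on can be sanity-checked end-to-end with the same openssl primitives, on a throwaway CA in a temp directory (self-contained; nothing here touches the script's output, and the subject names are illustrative):

```shell
#!/usr/bin/env bash
# Self-contained check of the CA -> CSR -> signed-leaf flow used by generate-certs.sh.
set -euo pipefail
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT

# Throwaway CA (self-signed)
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 1 \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -subj "/CN=Test-CA" 2>/dev/null

# Leaf key + CSR, then sign with the CA (mirrors generate_cert, minus the SAN config)
openssl req -newkey rsa:2048 -nodes \
  -keyout "$tmp/leaf.key" -out "$tmp/leaf.csr" -subj "/CN=kibana.elk.local" 2>/dev/null
openssl x509 -req -in "$tmp/leaf.csr" -CA "$tmp/ca.crt" -CAkey "$tmp/ca.key" \
  -CAcreateserial -days 1 -sha256 -out "$tmp/leaf.crt" 2>/dev/null

# The leaf must verify against the CA that signed it
openssl verify -CAfile "$tmp/ca.crt" "$tmp/leaf.crt"
```

Against the real output, the equivalent checks are `openssl verify -CAfile certs/ca.crt certs/nginx.crt` and inspecting the SANs with `openssl x509 -in certs/nginx.crt -noout -text`.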