add the media stack

249
README.md
Normal file
@@ -0,0 +1,249 @@
# Media Stack Kubernetes Deployment

A complete self-hosted media solution deployed on Kubernetes.

## Stack Components

| Component | Purpose | Port | Image |
|-----------|---------|------|-------|
| **Jellyfin** | Media server + Live TV frontend | 8096 | jellyfin/jellyfin:10.11.5 |
| **Sonarr** | TV show management | 8989 | lscr.io/linuxserver/sonarr:latest |
| **Radarr** | Movie management | 7878 | lscr.io/linuxserver/radarr:latest |
| **Lidarr** | Music management | 8686 | lscr.io/linuxserver/lidarr:latest |
| **Prowlarr** | Indexer management | 9696 | lscr.io/linuxserver/prowlarr:latest |
| **qBittorrent** | Download client | 8080 | lscr.io/linuxserver/qbittorrent:latest |
| **Dispatcharr** | IPTV/M3U proxy for live TV | 9191 | ghcr.io/dispatcharr/dispatcharr:latest |

## Network Configuration

- Network: `10.0.0.0/24`
- K8s Control Plane: `10.0.0.69` (k8scontrol)
- K8s Workers: `10.0.0.70-73` (k8sworker1-3)
- NFS Server: `10.0.0.230`
- Kubernetes Version: 1.35

## NFS Share Structure

Your NFS server (`10.0.0.230`) should have these directories:

```
/srv/nfs/media/
├── config/
│   ├── jellyfin/
│   ├── sonarr/
│   ├── radarr/
│   ├── lidarr/
│   ├── prowlarr/
│   ├── qbittorrent/
│   └── dispatcharr/
├── downloads/
│   ├── complete/
│   └── incomplete/
├── media/
│   ├── movies/
│   ├── tv/
│   └── music/
└── transcode/
```

## Prerequisites

### 1. Configure the NFS Server (10.0.0.230)

```bash
# On the NFS server
sudo mkdir -p /srv/nfs/media/{config/{jellyfin,sonarr,radarr,lidarr,prowlarr,qbittorrent,dispatcharr},downloads/{complete,incomplete},media/{movies,tv,music},transcode}

# Set permissions (adjust UID/GID as needed - 1000:1000 is common)
sudo chown -R 1000:1000 /srv/nfs/media

# Add to /etc/exports
echo "/srv/nfs/media 10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports

# Apply exports
sudo exportfs -ra
```
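The single `mkdir -p` above relies on bash brace expansion. Before running it, you can preview exactly which directories it will create by swapping `mkdir -p` for `printf`:

```bash
# Preview the 13 directories the brace expansion produces (makes no changes)
printf '%s\n' /srv/nfs/media/{config/{jellyfin,sonarr,radarr,lidarr,prowlarr,qbittorrent,dispatcharr},downloads/{complete,incomplete},media/{movies,tv,music},transcode}
```

The list should match the tree in the NFS Share Structure section, one path per line.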
### 2. Install NFS Client on K8s Nodes

```bash
# On each worker node (k8sworker1-3)
sudo apt-get update && sudo apt-get install -y nfs-common
```

### 3. Verify NFS Connectivity

```bash
# From any worker node
showmount -e 10.0.0.230
```
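Assuming the export line from step 1, the output should look something like:

```
Export list for 10.0.0.230:
/srv/nfs/media 10.0.0.0/24
```

If the export is missing, re-check `/etc/exports` and re-run `sudo exportfs -ra` on the server.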
## Deployment

### Manual Deployment (in order)

**Without VPN:**

```bash
# Create namespace
kubectl apply -f base/namespace.yaml

# Create storage
kubectl apply -f base/nfs-storage.yaml

# Create config
kubectl apply -f base/configmap.yaml

# Create PVCs
kubectl apply -f base/pvcs.yaml

# Deploy applications
kubectl apply -f base/jellyfin.yaml
kubectl apply -f base/sonarr.yaml
kubectl apply -f base/radarr.yaml
kubectl apply -f base/lidarr.yaml
kubectl apply -f base/prowlarr.yaml
kubectl apply -f base/qbittorrent.yaml
kubectl apply -f base/dispatcharr.yaml
```

**With VPN:**

First configure the VPN credentials: download a WireGuard `.conf` file from your Mullvad account and copy its private key and WireGuard address into `base/vpn/mullvad-secret.yaml`.

```bash
# Create namespace
kubectl apply -f base/namespace.yaml

# Create storage
kubectl apply -f base/nfs-storage.yaml

# Create config
kubectl apply -f base/configmap.yaml

# Create VPN-specific resources
kubectl apply -f base/vpn/gluetun-config.yaml
kubectl apply -f base/vpn/qbittorrent-init-configmap.yaml
kubectl apply -f base/vpn/mullvad-secret.yaml

# Create PVCs
kubectl apply -f base/pvcs.yaml

# Deploy non-VPN applications
kubectl apply -f base/jellyfin.yaml
kubectl apply -f base/sonarr.yaml
kubectl apply -f base/radarr.yaml
kubectl apply -f base/lidarr.yaml

# Deploy VPN-enabled applications
kubectl apply -f base/vpn/prowlarr-vpn.yaml
kubectl apply -f base/vpn/qbittorrent-vpn.yaml
kubectl apply -f base/vpn/dispatcharr-vpn.yaml
```

### Automated Deployment

A [deploy.sh](link) script automates the same sequence:

**Without VPN:**
```bash
./deploy.sh
```

**With VPN (recommended for qBittorrent, Prowlarr, Dispatcharr):**
```bash
./deploy.sh --vpn
```
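The script itself is not reproduced here, but a minimal sketch of what it needs to do follows directly from the manual steps above (`plan_deploy` is a hypothetical helper name, and the `echo` keeps the sketch from touching a real cluster):

```bash
#!/usr/bin/env bash
# Sketch only - the real deploy.sh may differ. Emits the manifests in
# dependency order; pass --vpn to swap in the Gluetun-wrapped variants.
set -euo pipefail

plan_deploy() {
  local vpn="${1:-}"
  local manifests=(base/namespace.yaml base/nfs-storage.yaml base/configmap.yaml)
  if [ "$vpn" = "--vpn" ]; then
    manifests+=(base/vpn/gluetun-config.yaml
                base/vpn/qbittorrent-init-configmap.yaml
                base/vpn/mullvad-secret.yaml)
  fi
  manifests+=(base/pvcs.yaml base/jellyfin.yaml base/sonarr.yaml
              base/radarr.yaml base/lidarr.yaml)
  if [ "$vpn" = "--vpn" ]; then
    manifests+=(base/vpn/prowlarr-vpn.yaml base/vpn/qbittorrent-vpn.yaml
                base/vpn/dispatcharr-vpn.yaml)
  else
    manifests+=(base/prowlarr.yaml base/qbittorrent.yaml base/dispatcharr.yaml)
  fi
  printf '%s\n' "${manifests[@]}"
}

# Apply each manifest in order (kubectl must point at the right cluster)
plan_deploy "${1:-}" | while read -r m; do
  echo "kubectl apply -f $m"   # drop the echo to actually apply
done
```

The ordering matters: namespace before everything, storage and ConfigMaps before PVCs, PVCs before the Deployments that mount them.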
To change the VPN location, amend `SERVER_CITIES` in the `base/vpn/gluetun-config.yaml` file.

To enable port forwarding:

1. Go to your mullvad.net account and navigate to `account/ports`
2. Add a port for your WireGuard key
3. Set `FIREWALL_VPN_INPUT_PORTS` to the assigned port in the `base/vpn/gluetun-config.yaml` file
4. Set qBittorrent's incoming connection port to the same value

## Accessing Services

After deployment, services are available via NodePort:

| Service | URL |
|---------|-----|
| Jellyfin | http://10.0.0.70:30096 |
| Sonarr | http://10.0.0.70:30989 |
| Radarr | http://10.0.0.70:30878 |
| Lidarr | http://10.0.0.70:30686 |
| Prowlarr | http://10.0.0.70:30696 |
| qBittorrent | http://10.0.0.70:30080 |
| Dispatcharr | http://10.0.0.70:30191 |

*Replace 10.0.0.70 with any worker node IP*
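The NodePorts in this table follow a simple convention: 30000 plus the last three digits of the service's internal port (which also explains 30359 for Jellyfin discovery and 30881 for the torrent port). A quick sanity check:

```bash
# NodePort convention used by these manifests: nodePort = 30000 + port % 1000
for port in 8096 8989 7878 8686 9696 8080 9191; do
  echo "$port -> $((30000 + port % 1000))"
done
```

Each line should reproduce the port pair from the table above, e.g. `8096 -> 30096`.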
## Post-Deployment Configuration

### 1. qBittorrent
- Check the container logs for the temporary password: `kubectl logs -n media deployment/qbittorrent`
- Log in and change the password immediately
- Set the download paths to `/downloads/complete` and `/downloads/incomplete`

### 2. Prowlarr
- Add your indexers
- Connect to Sonarr, Radarr, and Lidarr via Settings → Apps
- Use the internal service names: `http://sonarr:8989`, `http://radarr:7878`, `http://lidarr:8686`

### 3. Sonarr/Radarr/Lidarr
- Add qBittorrent as the download client: `http://qbittorrent:8080`
- Set the media library paths: `/media/tv`, `/media/movies`, `/media/music`
- Configure quality profiles and root folders

### 4. Jellyfin
- Run the initial setup wizard
- Add media libraries pointing to `/media/movies`, `/media/tv`, `/media/music`
- For Live TV: Settings → Live TV → Add Tuner Device → HDHomeRun
- Use Dispatcharr's HDHomeRun URL: `http://dispatcharr:9191`

### 5. Dispatcharr
- Add your M3U playlist sources
- Configure EPG sources
- Enable the channels you want to watch

## Useful Commands

```bash
# Restart a deployment
kubectl rollout restart deployment/jellyfin -n media

# Scale everything down (for maintenance)
kubectl scale deployment --all -n media --replicas=0

# Scale back up
kubectl scale deployment --all -n media --replicas=1
```

## Troubleshooting

### Permission Denied Errors
- Ensure PUID/PGID in the ConfigMap match the NFS share ownership
- Check that `no_root_squash` is set in the NFS exports

## Upgrading

To upgrade images:

```bash
# Update a single deployment
kubectl set image deployment/jellyfin -n media jellyfin=jellyfin/jellyfin:NEW_VERSION

# Or edit the YAML and reapply
kubectl apply -f base/jellyfin.yaml
```

## Cleanup

```bash
# Delete everything
kubectl delete namespace media

# This removes all deployments, services, and PVCs
# PVs and NFS data will remain
```
29
base/configmap.yaml
Normal file
@@ -0,0 +1,29 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: media-config
  namespace: media
  labels:
    app.kubernetes.io/name: media-stack
data:
  # User/Group IDs - should match NFS share ownership
  PUID: "1000"
  PGID: "1000"

  # Timezone - adjust to your location
  TZ: "Europe/London"

  # Common paths (used inside containers)
  DOWNLOADS_PATH: "/downloads"
  MEDIA_PATH: "/media"

  # qBittorrent specific
  WEBUI_PORT: "8080"
  TORRENTING_PORT: "6881"

  # Dispatcharr specific
  DISPATCHARR_ENV: "aio"
  REDIS_HOST: "localhost"
  CELERY_BROKER_URL: "redis://localhost:6379/0"
  DISPATCHARR_LOG_LEVEL: "info"
89
base/dispatcharr.yaml
Normal file
@@ -0,0 +1,89 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dispatcharr
  namespace: media
  labels:
    app: dispatcharr
    app.kubernetes.io/name: dispatcharr
    app.kubernetes.io/component: iptv-proxy
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: dispatcharr
  template:
    metadata:
      labels:
        app: dispatcharr
    spec:
      containers:
        - name: dispatcharr
          image: ghcr.io/dispatcharr/dispatcharr:latest
          ports:
            - name: http
              containerPort: 9191
              protocol: TCP
          env:
            - name: DISPATCHARR_ENV
              value: "aio"
            - name: REDIS_HOST
              value: "localhost"
            - name: CELERY_BROKER_URL
              value: "redis://localhost:6379/0"
            - name: DISPATCHARR_LOG_LEVEL
              value: "info"
            - name: TZ
              valueFrom:
                configMapKeyRef:
                  name: media-config
                  key: TZ
          volumeMounts:
            - name: config
              mountPath: /data
              subPath: dispatcharr
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
          livenessProbe:
            tcpSocket:
              port: http
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            tcpSocket:
              port: http
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: media-config

---
apiVersion: v1
kind: Service
metadata:
  name: dispatcharr
  namespace: media
  labels:
    app: dispatcharr
spec:
  type: NodePort
  selector:
    app: dispatcharr
  ports:
    - name: http
      port: 9191
      targetPort: 9191
      nodePort: 30191
      protocol: TCP
105
base/jellyfin.yaml
Normal file
@@ -0,0 +1,105 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
  namespace: media
  labels:
    app: jellyfin
    app.kubernetes.io/name: jellyfin
    app.kubernetes.io/component: media-server
spec:
  replicas: 1
  strategy:
    type: Recreate  # Required for single-instance apps with persistent storage
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
        - name: jellyfin
          image: jellyfin/jellyfin:10.11.5
          ports:
            - name: http
              containerPort: 8096
              protocol: TCP
            - name: discovery
              containerPort: 7359
              protocol: UDP
            - name: dlna
              containerPort: 1900
              protocol: UDP
          env:
            - name: JELLYFIN_PublishedServerUrl
              value: "http://jellyfin:8096"
          envFrom:
            - configMapRef:
                name: media-config
          volumeMounts:
            - name: config
              mountPath: /config
              subPath: jellyfin
            - name: media
              mountPath: /media
              readOnly: true
            - name: transcode
              mountPath: /config/transcodes
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "4Gi"
              cpu: "4000m"
          livenessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: media-config
        - name: media
          persistentVolumeClaim:
            claimName: media-library
        - name: transcode
          persistentVolumeClaim:
            claimName: media-transcode

---
apiVersion: v1
kind: Service
metadata:
  name: jellyfin
  namespace: media
  labels:
    app: jellyfin
spec:
  type: NodePort
  selector:
    app: jellyfin
  ports:
    - name: http
      port: 8096
      targetPort: 8096
      nodePort: 30096
      protocol: TCP
    - name: discovery
      port: 7359
      targetPort: 7359
      nodePort: 30359
      protocol: UDP
90
base/lidarr.yaml
Normal file
@@ -0,0 +1,90 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lidarr
  namespace: media
  labels:
    app: lidarr
    app.kubernetes.io/name: lidarr
    app.kubernetes.io/component: music-management
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: lidarr
  template:
    metadata:
      labels:
        app: lidarr
    spec:
      containers:
        - name: lidarr
          image: lscr.io/linuxserver/lidarr:latest
          ports:
            - name: http
              containerPort: 8686
              protocol: TCP
          envFrom:
            - configMapRef:
                name: media-config
          volumeMounts:
            - name: config
              mountPath: /config
              subPath: lidarr
            - name: downloads
              mountPath: /downloads
            - name: media
              mountPath: /media
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
          livenessProbe:
            httpGet:
              path: /ping
              port: http
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /ping
              port: http
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: media-config
        - name: downloads
          persistentVolumeClaim:
            claimName: media-downloads
        - name: media
          persistentVolumeClaim:
            claimName: media-library

---
apiVersion: v1
kind: Service
metadata:
  name: lidarr
  namespace: media
  labels:
    app: lidarr
spec:
  type: NodePort
  selector:
    app: lidarr
  ports:
    - name: http
      port: 8686
      targetPort: 8686
      nodePort: 30686
      protocol: TCP
11
base/namespace.yaml
Normal file
@@ -0,0 +1,11 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: media
  labels:
    app.kubernetes.io/name: media-stack
    app.kubernetes.io/description: "Self-hosted_media_server_stack"
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
89
base/nfs-storage.yaml
Normal file
@@ -0,0 +1,89 @@
---
# StorageClass for NFS (optional, for dynamic provisioning reference)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-media
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain

---
# PV for app configs - each app gets its own subdirectory
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-config-pv
  labels:
    type: nfs
    usage: config
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-media
  nfs:
    server: 10.0.0.230
    path: /srv/nfs/media/config

---
# PV for downloads (shared between qbittorrent and *arr apps)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-downloads-pv
  labels:
    type: nfs
    usage: downloads
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-media
  nfs:
    server: 10.0.0.230
    path: /srv/nfs/media/downloads

---
# PV for media library (shared between jellyfin and *arr apps)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-library-pv
  labels:
    type: nfs
    usage: media
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-media
  nfs:
    server: 10.0.0.230
    path: /srv/nfs/media/media

---
# PV for Jellyfin transcode cache
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-transcode-pv
  labels:
    type: nfs
    usage: transcode
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-media
  nfs:
    server: 10.0.0.230
    path: /srv/nfs/media/transcode
80
base/prowlarr.yaml
Normal file
@@ -0,0 +1,80 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prowlarr
  namespace: media
  labels:
    app: prowlarr
    app.kubernetes.io/name: prowlarr
    app.kubernetes.io/component: indexer-management
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: prowlarr
  template:
    metadata:
      labels:
        app: prowlarr
    spec:
      containers:
        - name: prowlarr
          image: lscr.io/linuxserver/prowlarr:latest
          ports:
            - name: http
              containerPort: 9696
              protocol: TCP
          envFrom:
            - configMapRef:
                name: media-config
          volumeMounts:
            - name: config
              mountPath: /config
              subPath: prowlarr
          resources:
            requests:
              memory: "128Mi"
              cpu: "50m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /ping
              port: http
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /ping
              port: http
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: media-config

---
apiVersion: v1
kind: Service
metadata:
  name: prowlarr
  namespace: media
  labels:
    app: prowlarr
spec:
  type: NodePort
  selector:
    app: prowlarr
  ports:
    - name: http
      port: 9696
      targetPort: 9696
      nodePort: 30696
      protocol: TCP
83
base/pvcs.yaml
Normal file
@@ -0,0 +1,83 @@
---
# Config PVC - shared, each app mounts a subdirectory
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-config
  namespace: media
  labels:
    app.kubernetes.io/name: media-stack
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-media
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      type: nfs
      usage: config

---
# Downloads PVC - shared between qbittorrent and *arr apps
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-downloads
  namespace: media
  labels:
    app.kubernetes.io/name: media-stack
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-media
  resources:
    requests:
      storage: 20Gi
  selector:
    matchLabels:
      type: nfs
      usage: downloads

---
# Media library PVC - shared between jellyfin and *arr apps
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-library
  namespace: media
  labels:
    app.kubernetes.io/name: media-stack
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-media
  resources:
    requests:
      storage: 20Gi
  selector:
    matchLabels:
      type: nfs
      usage: media

---
# Transcode cache PVC - Jellyfin only
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-transcode
  namespace: media
  labels:
    app.kubernetes.io/name: media-stack
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-media
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: nfs
      usage: transcode
100
base/qbittorrent.yaml
Normal file
@@ -0,0 +1,100 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qbittorrent
  namespace: media
  labels:
    app: qbittorrent
    app.kubernetes.io/name: qbittorrent
    app.kubernetes.io/component: download-client
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: qbittorrent
  template:
    metadata:
      labels:
        app: qbittorrent
    spec:
      containers:
        - name: qbittorrent
          image: lscr.io/linuxserver/qbittorrent:latest
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
            - name: torrent-tcp
              containerPort: 6881
              protocol: TCP
            - name: torrent-udp
              containerPort: 6881
              protocol: UDP
          envFrom:
            - configMapRef:
                name: media-config
          volumeMounts:
            - name: config
              mountPath: /config
              subPath: qbittorrent
            - name: downloads
              mountPath: /downloads
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
          # qBittorrent doesn't have a health endpoint, so we check if the port is open
          livenessProbe:
            tcpSocket:
              port: http
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            tcpSocket:
              port: http
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: media-config
        - name: downloads
          persistentVolumeClaim:
            claimName: media-downloads

---
apiVersion: v1
kind: Service
metadata:
  name: qbittorrent
  namespace: media
  labels:
    app: qbittorrent
spec:
  type: NodePort
  selector:
    app: qbittorrent
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      nodePort: 30080
      protocol: TCP
    - name: torrent-tcp
      port: 6881
      targetPort: 6881
      nodePort: 30881
      protocol: TCP
    - name: torrent-udp
      port: 6881
      targetPort: 6881
      nodePort: 30881
      protocol: UDP
90
base/radarr.yaml
Normal file
@@ -0,0 +1,90 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: radarr
  namespace: media
  labels:
    app: radarr
    app.kubernetes.io/name: radarr
    app.kubernetes.io/component: movie-management
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: radarr
  template:
    metadata:
      labels:
        app: radarr
    spec:
      containers:
        - name: radarr
          image: lscr.io/linuxserver/radarr:latest
          ports:
            - name: http
              containerPort: 7878
              protocol: TCP
          envFrom:
            - configMapRef:
                name: media-config
          volumeMounts:
            - name: config
              mountPath: /config
              subPath: radarr
            - name: downloads
              mountPath: /downloads
            - name: media
              mountPath: /media
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
          livenessProbe:
            httpGet:
              path: /ping
              port: http
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /ping
              port: http
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: media-config
        - name: downloads
          persistentVolumeClaim:
            claimName: media-downloads
        - name: media
          persistentVolumeClaim:
            claimName: media-library

---
apiVersion: v1
kind: Service
metadata:
  name: radarr
  namespace: media
  labels:
    app: radarr
spec:
  type: NodePort
  selector:
    app: radarr
  ports:
    - name: http
      port: 7878
      targetPort: 7878
      nodePort: 30878
      protocol: TCP
90
base/sonarr.yaml
Normal file
@@ -0,0 +1,90 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarr
  namespace: media
  labels:
    app: sonarr
    app.kubernetes.io/name: sonarr
    app.kubernetes.io/component: tv-management
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: sonarr
  template:
    metadata:
      labels:
        app: sonarr
    spec:
      containers:
        - name: sonarr
          image: lscr.io/linuxserver/sonarr:latest
          ports:
            - name: http
              containerPort: 8989
              protocol: TCP
          envFrom:
            - configMapRef:
                name: media-config
          volumeMounts:
            - name: config
              mountPath: /config
              subPath: sonarr
            - name: downloads
              mountPath: /downloads
            - name: media
              mountPath: /media
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
          livenessProbe:
            httpGet:
              path: /ping
              port: http
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /ping
              port: http
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: media-config
        - name: downloads
          persistentVolumeClaim:
            claimName: media-downloads
        - name: media
          persistentVolumeClaim:
            claimName: media-library

---
apiVersion: v1
kind: Service
metadata:
  name: sonarr
  namespace: media
  labels:
    app: sonarr
spec:
  type: NodePort
  selector:
    app: sonarr
  ports:
    - name: http
      port: 8989
      targetPort: 8989
      nodePort: 30989
      protocol: TCP
160
base/vpn/dispatcharr-vpn.yaml
Normal file
@@ -0,0 +1,160 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dispatcharr
  namespace: media
  labels:
    app: dispatcharr
    app.kubernetes.io/name: dispatcharr
    app.kubernetes.io/component: iptv-proxy
    vpn: "true"
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: dispatcharr
  template:
    metadata:
      labels:
        app: dispatcharr
        vpn: "true"
    spec:
      containers:
        # Gluetun VPN Sidecar
        - name: gluetun
          image: qmcgaw/gluetun:latest
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
          env:
            - name: TZ
              valueFrom:
                configMapKeyRef:
                  name: media-config
                  key: TZ
            - name: WIREGUARD_PRIVATE_KEY
              valueFrom:
                secretKeyRef:
                  name: mullvad-vpn
                  key: WIREGUARD_PRIVATE_KEY
            - name: WIREGUARD_ADDRESSES
              valueFrom:
                secretKeyRef:
                  name: mullvad-vpn
                  key: WIREGUARD_ADDRESSES
            # Dispatcharr needs to serve streams to Jellyfin inside cluster
            - name: FIREWALL_OUTBOUND_SUBNETS
              value: "10.0.0.0/24,10.96.0.0/12"
          envFrom:
            - configMapRef:
                name: gluetun-config
          volumeMounts:
            - name: tun-device
              mountPath: /dev/net/tun
            - name: gluetun-data
              mountPath: /gluetun
          ports:
            - name: http
              containerPort: 9191
              protocol: TCP
            - name: http-proxy
              containerPort: 8888
              protocol: TCP
          resources:
            requests:
              memory: "128Mi"
              cpu: "50m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            exec:
              command:
                - /gluetun-entrypoint
                - healthcheck
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            exec:
              command:
                - /gluetun-entrypoint
                - healthcheck
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5

        # Dispatcharr
        - name: dispatcharr
          image: ghcr.io/dispatcharr/dispatcharr:latest
          env:
            - name: DISPATCHARR_ENV
              value: "aio"
            - name: REDIS_HOST
              value: "localhost"
            - name: CELERY_BROKER_URL
              value: "redis://localhost:6379/0"
            - name: DISPATCHARR_LOG_LEVEL
              value: "info"
            - name: TZ
              valueFrom:
                configMapKeyRef:
                  name: media-config
                  key: TZ
          volumeMounts:
            - name: config
              mountPath: /data
              subPath: dispatcharr
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "1Gi"
              cpu: "500m"
          livenessProbe:
            tcpSocket:
              port: 9191
            initialDelaySeconds: 90
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            tcpSocket:
              port: 9191
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5

      volumes:
        - name: tun-device
          hostPath:
            path: /dev/net/tun
            type: CharDevice
        - name: gluetun-data
          emptyDir: {}
        - name: config
          persistentVolumeClaim:
            claimName: media-config

---
apiVersion: v1
kind: Service
metadata:
  name: dispatcharr
  namespace: media
  labels:
    app: dispatcharr
spec:
  type: NodePort
  selector:
    app: dispatcharr
  ports:
    - name: http
      port: 9191
      targetPort: 9191
      nodePort: 30191
      protocol: TCP
47
base/vpn/gluetun-config.yaml
Normal file
@@ -0,0 +1,47 @@
---
# Gluetun VPN Configuration
# Shared configuration for all pods using Gluetun sidecar
apiVersion: v1
kind: ConfigMap
metadata:
  name: gluetun-config
  namespace: media
  labels:
    app.kubernetes.io/name: media-stack
    app.kubernetes.io/component: vpn
data:
  # VPN Provider
  VPN_SERVICE_PROVIDER: "mullvad"
  VPN_TYPE: "wireguard"

  # Server selection - adjust to your preferred location
  # Options: See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md
  # Examples: "London", "Amsterdam", "New York City", "Stockholm"
  SERVER_CITIES: "London"

  # Kill switch - blocks all traffic if VPN drops (highly recommended)
  FIREWALL: "on"

  # DNS over TLS for privacy
  DOT: "on"

  # Block malicious domains
  BLOCK_MALICIOUS: "on"
  BLOCK_SURVEILLANCE: "on"
  BLOCK_ADS: "off"

  # Health check settings
  HEALTH_VPN_DURATION_INITIAL: "30s"
  HEALTH_VPN_DURATION_ADDITION: "5s"

  # Logging
  LOG_LEVEL: "info"

  # HTTP Proxy (optional - can be used by other apps)
  HTTPPROXY: "on"
  HTTPPROXY_LOG: "off"

  # Firewall input ports - for qBittorrent incoming connections
  # If you have Mullvad port forwarding enabled, set this to your forwarded port
  # Get a port at: https://mullvad.net/en/account/ports
  # FIREWALL_VPN_INPUT_PORTS: "12345"
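Since this ConfigMap turns on Gluetun's HTTP proxy (`HTTPPROXY: "on"`, listening on 8888 in every sidecar), other workloads could route selected traffic through a tunnel without running their own sidecar. A sketch only: no Service in this commit exposes port 8888, and the `gluetun-proxy` name and `app: qbittorrent` selector are assumptions for illustration.

```yaml
# Hypothetical Service exposing one Gluetun sidecar's HTTP proxy in-cluster.
apiVersion: v1
kind: Service
metadata:
  name: gluetun-proxy
  namespace: media
spec:
  selector:
    app: qbittorrent
  ports:
    - name: http-proxy
      port: 8888
      targetPort: 8888
      protocol: TCP
```

A client pod would then opt in by setting `HTTP_PROXY=http://gluetun-proxy.media.svc.cluster.local:8888`.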
35
base/vpn/mullvad-secret.yaml
Normal file
@@ -0,0 +1,35 @@
---
# Mullvad VPN Secret
#
# IMPORTANT: You must fill in your Mullvad WireGuard credentials before applying!
#
# To get these values:
# 1. Go to https://mullvad.net/en/account/wireguard-config
# 2. Generate a new WireGuard configuration
# 3. Download and extract the .conf file
# 4. Copy the PrivateKey and Address values from the [Interface] section
#
# Example .conf file contents:
# [Interface]
# PrivateKey = wOEI9rqqbDwnN8/Bpp22sVz48T71vJ4fYmFWujulwUU=
# Address = 10.64.222.21/32,fc00:bbbb:bbbb:bb01::1/128
#
# Use the PrivateKey value for WIREGUARD_PRIVATE_KEY
# Use the IPv4 Address (first one, e.g., 10.64.222.21/32) for WIREGUARD_ADDRESSES
#
apiVersion: v1
kind: Secret
metadata:
  name: mullvad-vpn
  namespace: media
  labels:
    app.kubernetes.io/name: media-stack
    app.kubernetes.io/component: vpn
type: Opaque
stringData:
  # Your WireGuard private key from the Mullvad config file
  # DO NOT use the key shown on the Mullvad website - it must come from the .conf file
  WIREGUARD_PRIVATE_KEY: "123abv..."

  # Your WireGuard address from the Mullvad config file (IPv4 only, with /32)
  WIREGUARD_ADDRESSES: "10.1.1.200/32"
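The manual steps in the comment block above can be scripted. A minimal sketch, assuming the downloaded file is saved as `mullvad.conf`; the sample contents written here are the example values from the comment, not real credentials.

```shell
#!/bin/sh
# Sketch: extract WIREGUARD_PRIVATE_KEY / WIREGUARD_ADDRESSES from a
# Mullvad WireGuard .conf. File name and contents are illustrative.
cat > mullvad.conf <<'EOF'
[Interface]
PrivateKey = wOEI9rqqbDwnN8/Bpp22sVz48T71vJ4fYmFWujulwUU=
Address = 10.64.222.21/32,fc00:bbbb:bbbb:bb01::1/128
EOF

# PrivateKey line -> everything after "= "
WIREGUARD_PRIVATE_KEY=$(sed -n 's/^PrivateKey *= *//p' mullvad.conf)
# Address line -> first (IPv4) entry only
WIREGUARD_ADDRESSES=$(sed -n 's/^Address *= *//p' mullvad.conf | cut -d, -f1)

echo "$WIREGUARD_PRIVATE_KEY"
echo "$WIREGUARD_ADDRESSES"
```

Instead of editing this manifest, the extracted values could also be applied directly with `kubectl create secret generic mullvad-vpn -n media --from-literal=WIREGUARD_PRIVATE_KEY="$WIREGUARD_PRIVATE_KEY" --from-literal=WIREGUARD_ADDRESSES="$WIREGUARD_ADDRESSES"`, which keeps the key out of the repo.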
152
base/vpn/prowlarr-vpn.yaml
Normal file
@@ -0,0 +1,152 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prowlarr
  namespace: media
  labels:
    app: prowlarr
    app.kubernetes.io/name: prowlarr
    app.kubernetes.io/component: indexer-management
    vpn: "true"
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: prowlarr
  template:
    metadata:
      labels:
        app: prowlarr
        vpn: "true"
    spec:
      containers:
        # Gluetun VPN Sidecar
        - name: gluetun
          image: qmcgaw/gluetun:latest
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
          env:
            - name: TZ
              valueFrom:
                configMapKeyRef:
                  name: media-config
                  key: TZ
            - name: WIREGUARD_PRIVATE_KEY
              valueFrom:
                secretKeyRef:
                  name: mullvad-vpn
                  key: WIREGUARD_PRIVATE_KEY
            - name: WIREGUARD_ADDRESSES
              valueFrom:
                secretKeyRef:
                  name: mullvad-vpn
                  key: WIREGUARD_ADDRESSES
            # Prowlarr needs to reach other *arr apps inside cluster
            # Add cluster network to firewall outbound subnets
            - name: FIREWALL_OUTBOUND_SUBNETS
              value: "10.0.0.0/24,10.96.0.0/12"
          envFrom:
            - configMapRef:
                name: gluetun-config
          volumeMounts:
            - name: tun-device
              mountPath: /dev/net/tun
            - name: gluetun-data
              mountPath: /gluetun
          ports:
            - name: http
              containerPort: 9696
              protocol: TCP
            - name: http-proxy
              containerPort: 8888
              protocol: TCP
          resources:
            requests:
              memory: "128Mi"
              cpu: "50m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            exec:
              command:
                - /gluetun-entrypoint
                - healthcheck
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            exec:
              command:
                - /gluetun-entrypoint
                - healthcheck
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5

        # Prowlarr
        - name: prowlarr
          image: lscr.io/linuxserver/prowlarr:latest
          envFrom:
            - configMapRef:
                name: media-config
          volumeMounts:
            - name: config
              mountPath: /config
              subPath: prowlarr
          resources:
            requests:
              memory: "128Mi"
              cpu: "50m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /ping
              port: 9696
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /ping
              port: 9696
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5

      volumes:
        - name: tun-device
          hostPath:
            path: /dev/net/tun
            type: CharDevice
        - name: gluetun-data
          emptyDir: {}
        - name: config
          persistentVolumeClaim:
            claimName: media-config

---
apiVersion: v1
kind: Service
metadata:
  name: prowlarr
  namespace: media
  labels:
    app: prowlarr
spec:
  type: NodePort
  selector:
    app: prowlarr
  ports:
    - name: http
      port: 9696
      targetPort: 9696
      nodePort: 30696
      protocol: TCP
17
base/vpn/qbittorrent-init-configmap.yaml
Normal file
@@ -0,0 +1,17 @@
apiVersion: v1
data:
  qbittorrent-init.sh: |+
    #!/bin/bash
    # Wait for config file to exist
    while [ ! -f /config/qBittorrent/qBittorrent.conf ]; do
      sleep 1
    done

    # Add WebUI settings if they don't exist
    # (fixed-string grep: the key contains a literal backslash, which a
    # regex grep would swallow and then never match)
    grep -qF 'WebUI\CSRFProtection' /config/qBittorrent/qBittorrent.conf || \
      sed -i '/^\[Preferences\]/a WebUI\\CSRFProtection=false\nWebUI\\ClickjackingProtection=false\nWebUI\\HostHeaderValidation=false\nWebUI\\LocalHostAuth=false' /config/qBittorrent/qBittorrent.conf

kind: ConfigMap
metadata:
  name: qbittorrent-init-script
  namespace: media
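To see exactly what the init script writes, the same grep-guarded `sed` can be replayed against a throwaway sample. The file name and starting contents below are made up; a fixed-string (`-F`) grep is used because the key contains a literal backslash, and GNU sed is assumed (as in the linuxserver container) for `-i` and `\n` handling in the `a` command.

```shell
#!/bin/sh
# Sketch: replay the init script's guarded insert on a local sample file.
CONF=sample-qBittorrent.conf
cat > "$CONF" <<'EOF'
[Preferences]
WebUI\Port=8080
EOF

# Only insert once: skip if the key is already present
grep -qF 'WebUI\CSRFProtection' "$CONF" || \
  sed -i '/^\[Preferences\]/a WebUI\\CSRFProtection=false\nWebUI\\ClickjackingProtection=false\nWebUI\\HostHeaderValidation=false\nWebUI\\LocalHostAuth=false' "$CONF"

cat "$CONF"
```

Running it a second time is a no-op: the grep guard sees the inserted key and short-circuits the `sed`, so pod restarts don't accumulate duplicate settings.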
173
base/vpn/qbittorrent-vpn.yaml
Normal file
@@ -0,0 +1,173 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qbittorrent
  namespace: media
  labels:
    app: qbittorrent
    app.kubernetes.io/name: qbittorrent
    app.kubernetes.io/component: download-client
    vpn: "true"
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: qbittorrent
  template:
    metadata:
      labels:
        app: qbittorrent
        vpn: "true"
    spec:
      containers:
        # Gluetun VPN Sidecar - MUST be first container
        - name: gluetun
          image: qmcgaw/gluetun:latest
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
          env:
            - name: TZ
              valueFrom:
                configMapKeyRef:
                  name: media-config
                  key: TZ
            # Mullvad credentials from secret
            - name: WIREGUARD_PRIVATE_KEY
              valueFrom:
                secretKeyRef:
                  name: mullvad-vpn
                  key: WIREGUARD_PRIVATE_KEY
            - name: WIREGUARD_ADDRESSES
              valueFrom:
                secretKeyRef:
                  name: mullvad-vpn
                  key: WIREGUARD_ADDRESSES
          envFrom:
            - configMapRef:
                name: gluetun-config
          # Gluetun needs /dev/net/tun
          volumeMounts:
            - name: tun-device
              mountPath: /dev/net/tun
            - name: gluetun-data
              mountPath: /gluetun
          # All ports must be on the Gluetun container since it owns the network
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
            - name: torrent-tcp
              containerPort: 6881
              protocol: TCP
            - name: torrent-udp
              containerPort: 6881
              protocol: UDP
            - name: http-proxy
              containerPort: 8888
              protocol: TCP
          resources:
            requests:
              memory: "128Mi"
              cpu: "50m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            exec:
              command:
                - /gluetun-entrypoint
                - healthcheck
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            exec:
              command:
                - /gluetun-entrypoint
                - healthcheck
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5

        # qBittorrent - uses Gluetun's network namespace
        - name: qbittorrent
          image: lscr.io/linuxserver/qbittorrent:latest
          envFrom:
            - configMapRef:
                name: media-config
          volumeMounts:
            - name: config
              mountPath: /config
              subPath: qbittorrent
            - name: downloads
              mountPath: /downloads
            - name: init-script
              mountPath: /custom-cont-init.d/qbittorrent-init.sh
              subPath: qbittorrent-init.sh
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
          # Note: No ports here - they're on the gluetun container
          # Health check via localhost since we share a network namespace
          livenessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5

      volumes:
        - name: tun-device
          hostPath:
            path: /dev/net/tun
            type: CharDevice
        - name: gluetun-data
          emptyDir: {}
        - name: config
          persistentVolumeClaim:
            claimName: media-config
        - name: downloads
          persistentVolumeClaim:
            claimName: media-downloads
        - name: init-script
          configMap:
            name: qbittorrent-init-script
            defaultMode: 0755

---
apiVersion: v1
kind: Service
metadata:
  name: qbittorrent
  namespace: media
  labels:
    app: qbittorrent
spec:
  type: NodePort
  selector:
    app: qbittorrent
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      nodePort: 30080
      protocol: TCP
  # Note: Torrent ports are not exposed externally when using VPN
  # Peers will connect via VPN IP, not node IP