Test Deployment Notes
What follows is my, potentially slightly incoherent, stream of consciousness as I try to teach myself Kubernetes
First Deployment
- Minikube running with QEMU driver on local machine
- Used the following deployment and service YAML:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
  labels:
    app: uptime-kuma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma
          ports:
            - containerPort: 3001
---
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma-service
spec:
  selector:
    app: uptime-kuma
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 3001
      targetPort: 3001
      nodePort: 30000
```
- Ran `kubectl apply -f uptime-kuma-deployment.yml`
- Got “MK_UNIMPLEMENTED” error when running `minikube service uptime-kuma-service` to get the URL. Google showed that `minikube service` was unavailable when using QEMU. Saw an option to use Docker instead.
- Ran `minikube delete` and redeployed minikube with `minikube start --driver=docker`
- Re-ran `kubectl apply`
- The `minikube service` command gave URL http://192.168.49.2:30000, which brought up uptime-kuma in the browser
Adding Volumes for Persistent Storage
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
  labels:
    app: uptime-kuma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma
          ports:
            - containerPort: 3001
          volumeMounts: # Volume must be created along with volumeMount (see volumes below)
            - name: uptime-kuma-data
              mountPath: /app/data # Path within the container, like the right side of a docker bind mount -- /tmp/data:/app/data
      volumes: # Defines a volume that uses an existing PVC (defined below)
        - name: uptime-kuma-data
          persistentVolumeClaim:
            claimName: uptime-kuma-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma-service
spec:
  selector:
    app: uptime-kuma
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 3001
      targetPort: 3001
      nodePort: 30000
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: uptime-kuma-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  storageClassName: standard
  hostPath:
    path: /tmp/uptime/data # Path on the host
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-kuma-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard
  volumeName: uptime-kuma-pv
```
- Was confused by the need for `storageClassName`, so I tried commenting it out and redeploying:
    - Pod was stuck in Pending status
    - `kubectl describe pod` showed:

```
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  4m10s  default-scheduler  0/1 nodes are available: persistentvolumeclaim "uptime-kuma-pvc" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  4m9s   default-scheduler  0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
```
- `kubectl describe pvc uptime-kuma-pvc` showed:

```
Events:
  Type     Reason          Age                  From                         Message
  ----     ------          ----                 ----                         -------
  Warning  VolumeMismatch  7s (x26 over 6m20s)  persistentvolume-controller  Cannot bind to requested volume "uptime-kuma-pv": storageClassName does not match
```
- `kubectl describe pv uptime-kuma-pv` shows that uptime-kuma-pv has no storage class assigned:

```
Name:         uptime-kuma-pv
Labels:       <none>
Annotations:  <none>
Finalizers:   [kubernetes.io/pv-protection]
StorageClass:
Status:       Available
```
- The docs state the following: “You can create a PersistentVolumeClaim without specifying a `storageClassName` for the new PVC, and you can do so even when no default StorageClass exists in your cluster. In this case, the new PVC creates as you defined it, and the `storageClassName` of that PVC remains unset until a default becomes available.”
- Checked what my default storage class is with `kubectl get storageclass` (I didn’t set this… must be a minikube default):

```
NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  2d
```

- It appears that a PVC will be set to the default storage class if one is not specified, but the same is not true for a PV. The docs also state: “A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.”
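To make the distinction concrete, here is a minimal sketch (hypothetical names, not config I actually deployed) of a class-less PV paired with a PVC that explicitly requests no class. On a cluster with a default StorageClass, the PVC has to set `storageClassName: ""` explicitly, because leaving the field out entirely lets the default class get injected and binding fails with the same VolumeMismatch error as above:

```yaml
# Hypothetical example: a PV with no storageClassName has "no class".
apiVersion: v1
kind: PersistentVolume
metadata:
  name: classless-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /tmp/classless
---
# The PVC must request "no class" with an explicit empty string; omitting
# the field would get the default ("standard" on minikube) filled in.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: classless-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
  volumeName: classless-pv
```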
- After understanding this, I ran `kubectl apply -f uptime-kuma-deployment.yml` with the above configuration.
- Ran `minikube service uptime-kuma-service` to pull it up in the browser
- Created an uptime-kuma user and created a monitor for google.com
- Ran `kubectl delete deployment uptime-kuma` to delete all pods, then re-ran `kubectl apply -f uptime-kuma-deployment`, then opened it in the browser again with `minikube service uptime-kuma-service`
- My user and monitor still exist after recreating the deployment, so data was successfully persisted with a PV and PVC! 🎉🎉🎉
Using an SMB share as persistent storage
- Config files are in the folder `remote-storage-smb-deployment`
- Created a share on TrueNAS called ‘kubernetes-testing’
- Installed the SMB CSI driver — https://github.com/kubernetes-csi/csi-driver-smb/tree/master
- Generated a secret to store share credentials with `kubectl create secret generic smbcreds --from-literal username=USERNAME --from-literal password="PASSWORD"`
    - Note: the `generic` flag denotes that the secret being created is an ‘Opaque’ secret (arbitrary, user-defined data). There are other secret types for specific use cases such as docker registry credentials or TLS certificates. https://kubernetes.io/docs/concepts/configuration/secret/
- Used this video from the YouTube channel Jim’s Garage for reference for the PV and PVC configs: https://www.youtube.com/watch?v=3S5oeB2qhyg&t=318s
- Got a permission denied error on the pod, appears to be when mounting smb share in fstab
- Ran the following to verify the contents of the secret: `kubectl get secret smbcreds -o jsonpath='{.data}'`, which gave the following output: `{"password":"MXFhQFdTM2Vk","username":"a3VzZXI="}`
- These values are base64 encoded, so I ran the password through an online decoder and found that it appears to have been cut off after 9 characters.
- Found out the reason it was cut off: inside double quotes the shell still interprets special characters. Since the password was a keyboard waterfall, 1qa@WS3ed$RF5tg, the shell expanded everything after the dollar sign as a variable (which was obviously empty). Single quotes would have passed the string through literally.
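The truncation is easy to reproduce in any POSIX shell (this is a sketch, assuming `base64` is on PATH; the password is the one from above):

```shell
unset RF5tg   # make sure the "variable" is empty, as it was in my shell

# Double quotes still allow $-expansion: $RF5tg expands to nothing,
# so the string silently becomes "1qa@WS3ed".
double="1qa@WS3ed$RF5tg"
# Single quotes pass every character through literally.
single='1qa@WS3ed$RF5tg'

echo "double-quoted: $double"   # -> double-quoted: 1qa@WS3ed
echo "single-quoted: $single"   # -> single-quoted: 1qa@WS3ed$RF5tg

# Secret values are stored base64-encoded; decode to check what actually landed:
echo 'MXFhQFdTM2Vk' | base64 -d   # -> 1qa@WS3ed (the truncated password)
```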
- After fixing this, everything appears to deploy without errors, however connecting with minikube service gives ‘ERR_CONNECTION_REFUSED’
- Checking pod logs, it appears the app can’t connect to SQLite because the database is locked
- Switched to an nginx image rather than uptime-kuma; you wouldn’t want a SQLite DB on an SMB share anyway.
- Gave a mountPath of /usr/share/nginx/html
- I thought if nothing existed for this volume nginx would serve a default splash page. It didn’t, presumably because the empty mount shadows the image’s built-in html directory, so there was no index file and I got 403 Forbidden
- Added a hello-world HTML file to the share and then ran `kubectl rollout restart deployment nginx` to restart the pod(s)
- It worked! 🎉
- Scaled pods up from 1 to 2 and re-ran `kubectl apply`, and it still works! 🎉
- Data is thrown into the share like data soup (no sub-directories based on pod, PVC, etc.)
- Added this storage class config based on the docs:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  source: //10.0.30.11/kubernetes-testing
  # if csi.storage.k8s.io/provisioner-secret is provided, will create a sub directory
  # with PV name under source
  csi.storage.k8s.io/provisioner-secret-name: smbcreds
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: smbcreds
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete # available values: Delete, Retain
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
```
- Also removed the PersistentVolume config; volumes were then dynamically provisioned, and sub-directories were created on the share named after the dynamically provisioned volumes (e.g. pvc-646faec2-c9cc-4a75-9b3a-4b6c9742f339)
- I wanted a sub-directory created based on the deployment name rather than the difficult-to-remember PVC names.
- I found two ways to do this:
    - Add the `subDir` parameter to the StorageClass config. This will create a single sub-directory that every PVC will use rather than a unique folder based on the PVC name.
    - Add `subPath` in the container template config under `volumeMounts`; this will put the contents of that volume underneath that sub-folder.
    - Both of these methods can be used at the same time
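A sketch of where the two settings live (the `nginx` names here are assumptions for illustration, not the exact config I deployed):

```yaml
# StorageClass side: subDir gives every PVC of this class one shared folder
# under the share root instead of a pvc-<uuid> directory each.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  source: //10.0.30.11/kubernetes-testing
  subDir: nginx   # assumed name; single sub-directory shared by every PVC
  csi.storage.k8s.io/provisioner-secret-name: smbcreds
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: smbcreds
  csi.storage.k8s.io/node-stage-secret-namespace: default
---
# Pod template side: subPath nests this mount one level deeper inside the
# volume, so files end up under <share>/nginx/html. Fragment of the
# Deployment's container spec:
#
#   volumeMounts:
#     - name: nginx-data
#       mountPath: /usr/share/nginx/html
#       subPath: html
```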
- Added both of these configurations, then attempted to add an additional heimdall deployment. Same issue with database locking; decided to try deploying it as a StatefulSet.
- When deleting heimdall’s PVC, it deleted the entire subDir defined in the storage path
- Found out the reason for this: the default of the `onDelete` parameter is to delete the directories when the volume is deleted. Because the StorageClass’s reclaim policy was Delete, the volume was deleted when I deleted the PVC, and therefore the CSI driver deleted the directories on the share as well.
- https://github.com/kubernetes-csi/csi-driver-smb/blob/master/docs/driver-parameters.md
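To keep the data on the share, both knobs can be flipped. A sketch, assuming the `onDelete` values (`delete`/`retain`) described in the driver-parameters doc linked above:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb-retain
provisioner: smb.csi.k8s.io
parameters:
  source: //10.0.30.11/kubernetes-testing
  onDelete: retain   # keep the sub-directory on the share when a volume is deleted
  csi.storage.k8s.io/provisioner-secret-name: smbcreds
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: smbcreds
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Retain   # also keep the PV object itself when its PVC is deleted
```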