Running the CSI Provider on a K8s Cluster in VMC
One reason Kubernetes is being adopted at such tremendous speed is its ability to provide persistent storage to the containerised applications running on it. Kubernetes provides a standard API that allows different storage systems to be exposed as volume plugins. API primitives like StorageClasses and PersistentVolumes offer the same kind of declarative management for storage that we know from Kubernetes itself.
In the past, volume plugins that were part of the core Kubernetes codebase were used to provide persistent storage, but this came at a cost and introduced a lot of challenges:
- Different providers introduced their own plugins, so the codebase kept growing and growing
- Bugs could be introduced with this new code, potentially crashing core Kubernetes components like the kubelet
- Those plugins needed to be tested with every release, which slowed down the release of Kubernetes itself and introduced dependencies between the volume plugins and the K8s release
To standardise the interface and enable vendors to integrate their storage plugins with Kubernetes independently of the actual Kubernetes release, the Container Storage Interface (CSI) was introduced. CSI is vendor neutral, provides only the specification, and covers only the control plane.
CSI uses remote procedure calls (gRPC) to define a programmatic interface for standard volume operations. The important point here is that these operations are no longer part of the container orchestrator's codebase.
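The specification groups these calls into three gRPC services: the Identity service (e.g. GetPluginInfo), the Controller service (e.g. CreateVolume, DeleteVolume, ControllerPublishVolume) and the Node service (e.g. NodeStageVolume, NodePublishVolume). A vendor driver implements these services, while Kubernetes sidecar containers such as the external-provisioner and external-attacher translate K8s API objects into the corresponding RPC calls.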
The drawing below is a logical overview of the necessary components and the high-level interaction between them.
In this tutorial we will leverage csi-vsphere, which enables us to provision persistent volumes on demand. For current K8s versions, the CSI plugin is the preferred way to consume vSphere storage.
Prerequisites:
- vSphere > 6.5
- K8s > 1.14 (I'm using 1.15.2)
- govc installed and configured - please see This Post (you can verify connectivity as shown below)
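Before going any further, you can make sure govc can reach your VMC vCenter, assuming the GOVC_URL, GOVC_USERNAME and GOVC_PASSWORD environment variables are exported as described in the previous post:
govc about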
The first step is to get the vsphere-csi-driver.
We're going to clone the vSphere CSI driver repository to your nodes. You can clone it on all of them or distribute it via scp:
git clone https://github.com/kubernetes-sigs/vsphere-csi-driver.git
Enable Disk UUID for all your Nodes
If you followed my previous post you have most likely already enabled this setting. If you haven't, please set disk.enableUUID=1 on every node VM:
govc vm.change -vm k8s-master01 -e "disk.enableUUID=1"
govc vm.change -vm k8s-worker01 -e "disk.enableUUID=1"
govc vm.change -vm k8s-worker02 -e "disk.enableUUID=1"
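To double-check that the advanced setting is in place, you can inspect a VM's ExtraConfig (note that the setting only takes effect once the VM has been power-cycled):
govc vm.info -e k8s-master01 | grep disk.enableUUID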
Providing the vSphere Credentials to the CSI-Plugin
There are two options for providing our vCenter credentials to csi-vsphere:
- Using a K8s Secret
- Within the vsphere.conf
In our case we will put the credentials directly into the vsphere.conf, since we're working in a lab environment. In a real environment we would of course store the credentials in a K8s Secret. The vsphere.conf looks like this:
[Global]
user = "firstname.lastname@example.org"
password = "Highlysecure-XXXX-"
port = "443"
insecure-flag = "1"
datacenters = "SDDC-Datacenter"

[Workspace]
server = "10.56.224.4"
datacenter = "SDDC-Datacenter"
default-datastore = "WorkloadDatastore"
resourcepool-path = "SDDC-Datacenter/host/Cluster-1/Resources/Compute-ResourcePool"
folder = "adess"
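The deployment manifests read this configuration from a Kubernetes secret. A minimal sketch of creating it, assuming the manifests reference a secret named vsphere-config-secret in kube-system (check the volume definitions in manifests/1.14/deploy for the exact secret name and file key they expect):
kubectl create secret generic vsphere-config-secret --from-file=csi-vsphere.conf=vsphere.conf --namespace=kube-system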
Create the RBAC roles
Next we need to create the necessary RBAC roles:
ubuntu@k8s-master01:~/vsphere-csi-driver$ kubectl create -f manifests/1.14/rbac
Woohoo! We're now ready to deploy csi-vsphere.
Deploy it from the cloned git repo. It's mandatory that the cluster was set up with kubeadm for this to work properly:
kubectl create -f manifests/1.14/deploy
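Before moving on, verify that the driver pods come up (the exact pod names may differ slightly between driver versions):
kubectl get pods --namespace=kube-system | grep csi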
Next we need to create a StorageClass.
Adjust the values according to your deployment. For a deployment on VMC it is absolutely mandatory to select the WorkloadDatastore, as it is the only datastore the restricted cloudadmin user is allowed to place workloads on.
ubuntu@k8s-master01:~/csi$ cat sc.yaml
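A minimal sketch of what sc.yaml can look like; the class name vsan matches the PVC output further below, the provisioner name is the one registered by the vSphere CSI driver, and the datastoreurl placeholder must be replaced with the URL of your WorkloadDatastore (visible in the output of govc datastore.info WorkloadDatastore):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan
provisioner: csi.vsphere.vmware.com
parameters:
  # Replace with the URL of your WorkloadDatastore
  datastoreurl: "ds:///vmfs/volumes/vsan:<your-datastore-uuid>/"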
# Apply to create our StorageClass:
kubectl apply -f sc.yaml
After registering the StorageClass with the K8s API we can now create a PersistentVolumeClaim, for example like the one sketched below.
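A minimal claim matching the output further down, assuming it is saved as pvc.yaml; it requests a 2Gi ReadWriteOnce volume from our vsan StorageClass:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: adess-pvc2g
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsan
  resources:
    requests:
      storage: 2Gi
# Apply to create the claim:
kubectl apply -f pvc.yaml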
Let's check whether our volume was created accordingly. If the status is "Bound", it has been created successfully. At the same time you should see a vmdk created in vCenter.
ubuntu@k8s-master01:~/csi$ kubectl get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
adess-pvc2g   Bound    pvc-75199f1d-1e9a-4290-bf77-eb7475ba327a   2Gi        RWO            vsan           5s
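If you want to see the dynamically provisioned PersistentVolume backing the claim, list the PVs as well:
kubectl get pv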
That's it - stay tuned. In the next post we will leverage this to deploy a stateful Kubernetes application!