Persistent Storage on Kubernetes for Azure
Updated: Apr 22
The advent of containers and microservices greatly benefits developers and users alike. It is now simple to package software that a user can spin up and run in Docker Desktop, on a Linux box running the Docker daemon, or within a container orchestration engine such as Kubernetes.
Creating applications in this environment has some architectural challenges as well, particularly if you have data that you want to persist. Containers are, by definition, ephemeral and immutable, so when you restart them, they are recreated from the specified image. Connecting to an external database can hold most of the structured persistent data you need. However, there are also times when you want to keep persistent unstructured data or local file storage. At C2 Labs, we have an application that needs to keep file uploads outside of the database, which posed an additional challenge.
Docker natively supports this use case through the use of volumes. However, working in a cloud environment introduces even more complexity. This post walks through how to set up and use persistent storage in Kubernetes running on Azure (NOTE: a later post will cover the same within AWS).
I recently worked through the various pieces of this, overcame some problems, and want to share the key lessons learned so this works for you.
The first thing to know is that there are a couple of types of storage on Azure that you can use as a persistent volume in Kubernetes. You need to select the correct type for your use case, and we will recommend the more scalable option.
One thing to understand up front is whether your storage will always be mapped to a single pod or could be mapped to multiple pods. The multiple-pod scenario includes simply scaling to additional replicas of the same pod for redundancy, load balancing, or even upgrades. Unless you are 100% sure you will only ever map to a single pod, we highly recommend going with the multiple-pod storage options. The rest of this article deals with setting up persistent storage volumes that can be mapped to multiple pods, and it requires at least a basic working knowledge of Kubernetes, along with the ability and permissions to run kubectl on your Kubernetes cluster.
If you are running Azure Kubernetes Service (AKS), Azure provides two separate provisioners as part of the K8S StorageClass: Azure Disks and Azure Files. Both of these options can dynamically provision your storage. Since Azure Disks does not support access to the storage by multiple containers at a time, it is HIGHLY RECOMMENDED that you use Azure Files.
The first step is to set up your StorageClass which selects the type of storage from Azure. Copy the text below and save it to a file: azure-files-sc.yaml.
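The original code listing did not survive, so here is a minimal sketch of the manifest based on the notes below (the StorageClass name azure-file matches what the PVC references later; skuName can be changed as discussed):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-file
# Use the Azure Files provisioner so multiple pods can mount the volume
provisioner: kubernetes.io/azure-file
# Allow the provisioned volumes to be expanded later
allowVolumeExpansion: true
parameters:
  # Least expensive Azure storage SKU; see the options listed below
  skuName: Standard_LRS
```

The AKS documentation linked below also shows optional mountOptions (for example, dir_mode and file_mode) you can add to control permissions on the mounted share.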
As you look at this file, there are a few things to note:
provisioner: kubernetes.io/azure-file - This config specifies the Azure Files provisioner, which is highly recommended as discussed above.
allowVolumeExpansion: true - This config specifies that the volumes using this StorageClass can later be expanded, which we want to allow for increasing storage later, if necessary.
skuName: Standard_LRS - This is the least expensive option from Azure and sufficient for most application use cases. If your app has a higher IOPS requirement, you can select a more performant (and expensive) option.
More details here: https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv
Standard_LRS - standard locally redundant storage (LRS)
Standard_GRS - standard geo-redundant storage (GRS)
Standard_RAGRS - standard read-access geo-redundant storage (RA-GRS)
Premium_LRS - premium locally redundant storage (LRS)
Now that you have created the file, apply it:
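Assuming you saved the file with the name above:

```shell
kubectl apply -f azure-files-sc.yaml
```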
Ensure that it was successfully created:
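List the StorageClasses in the cluster:

```shell
kubectl get storageclass
```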
You should see output similar to:
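The exact columns and values vary with your kubectl version and cluster, but it should look roughly like:

```
NAME                PROVISIONER                AGE
azure-file          kubernetes.io/azure-file   1m
default (default)   kubernetes.io/azure-disk   10d
managed-premium     kubernetes.io/azure-disk   10d
```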
A couple of the items (default and managed-premium) are there natively (note: this could be different in your environment or as things change in Azure); azure-file is the new one.
Now that you have the StorageClass, you are ready to configure your Persistent Volume Claim (PVC). In this configuration, AKS will automatically configure the Persistent Volume (PV), so you do not have to explicitly perform that step. Copy the text below and save it to a file: test-azure-pvc.yaml:
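The listing for this file was also lost; a sketch consistent with the notes below looks like this (the name test-files is referenced later when mounting the volume):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-files
  namespace: default
spec:
  # ReadWriteMany lets multiple pods write to the same volume
  accessModes:
    - ReadWriteMany
  # Must match the StorageClass created above
  storageClassName: azure-file
  resources:
    requests:
      # Initial size; can be expanded later, but never shrunk
      storage: 1Gi
```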
As you look at this file, there are a few things to note:
namespace: default - This specifies the default namespace. If you are using a different namespace for this PVC and your pods, specify it here.
ReadWriteMany - This config allows multiple pods to write to the same PVC. If you did NOT use the azure-file provisioner/StorageClass, this will cause errors.
storageClassName: azure-file - This config is the name of the StorageClass configured above. You do not need to edit this, unless you changed it in the above steps.
storage: 1Gi - This is the initial amount of storage for your file store. We have chosen 1 GB, but you can configure it to meet your needs. Note that since we allowed volume expansion above, you can expand it later. However, you can NOT shrink it once it is configured.
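Apply the file just as you did for the StorageClass:

```shell
kubectl apply -f test-azure-pvc.yaml
```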
Ensure the PVC has been created:
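For example (the volume name is generated by AKS, so yours will differ):

```shell
kubectl get pvc test-files
```

```
NAME         STATUS   VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-files   Bound    pvc-<generated>   1Gi        RWX            azure-file     1m
```

Once the STATUS shows Bound, the underlying Azure Files share has been provisioned.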
This storage should now be ready for use within your pods. Simply map it with the following configuration in the container spec of your Deployment file:
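A sketch of the relevant portion of the pod spec (the container name, image, and volume name here are illustrative placeholders; mountPath and claimName match the notes below):

```yaml
spec:
  containers:
    - name: test-app        # placeholder; use your application's name
      image: nginx          # placeholder; use your application's image
      volumeMounts:
        - name: test-volume
          mountPath: "/test/files"
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: test-files   # the PVC created above
```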
A couple things to note here:
mountPath: "/test/files" - This is where you want the storage mounted within your pod/container.
claimName: test-files - This refers to the name of the PVC you set up previously.
That is it! You now have a PVC mapped to Azure Files that can be mounted to multiple pods. Create a deployment with 2 replicas of the above spec or simply create 2 deployments each pointed to the same storage.
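Putting it all together, a two-replica Deployment wrapping the spec above might look like this (names, labels, and the image are illustrative placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
  namespace: default
spec:
  replicas: 2               # both replicas share the same Azure Files storage
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
        - name: test-app
          image: nginx      # placeholder; use your application's image
          volumeMounts:
            - name: test-volume
              mountPath: "/test/files"
      volumes:
        - name: test-volume
          persistentVolumeClaim:
            claimName: test-files
```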
At C2 Labs, we love to work on challenging problems such as this. We serve our clients as a Digital Transformation partner, ensuring their projects are successful from beginning to operation hand-offs. We would love to talk to you more about the exciting challenges your organization is facing and how we can help you fundamentally transform IT to Take Back Control. Please CONTACT US to learn more.