In our previous post, Persistent Storage on Kubernetes for Azure, we discussed setting up persistent storage for Azure. This blog post expands on that tutorial to show how to accomplish the same task within Amazon Web Services (AWS). NOTE: If you are having trouble with Kubernetes in the cloud, feel free to reach out to C2 Labs at any time if you have questions or would like us to help with your Azure, AWS, or other cloud deployments.
As a reminder, Docker natively supports persistent storage through the use of volumes. However, working in a cloud environment introduces even more complexity. This post walks you through how to set up and use persistent storage in Kubernetes running on AWS.
Just like with Azure, there are a couple of types of AWS storage that you can use as a persistent volume in Kubernetes. You need to select the correct type for your use case, but we will recommend the more scalable option.
As with Azure, you need to decide whether your storage will only be mapped to a single pod or whether it could be mapped to multiple pods. This is a common requirement if you want to scale your application to additional replicas of the same Pod spec for redundancy, load balancing, or upgrades. Unless you are 100% sure you are going to map to a single pod, we highly recommend going with the multiple-pod option. The rest of this article will deal with setting up persistent storage volumes with the ability to map them to multiple pods.
As prerequisites, the rest of this article requires at least a basic working knowledge of Kubernetes and the permissions to run kubectl against your Kubernetes cluster.
AWS
If you are running Amazon's Elastic Kubernetes Service (EKS), things get a little harder than with Azure. Unfortunately, AWS does not currently have a built-in storage class for Kubernetes that allows multiple pods to read/write to the same storage. If you have a single pod, you can use Elastic Block Store (EBS). For multiple pods, you must use Elastic File System (EFS). Additionally, you must manually configure your EFS filesystem in AWS rather than having EKS do it for you.
There are two ways to utilize storage on EKS:
Utilize the Amazon-supported CSI driver to leverage a StorageClass (https://aws.amazon.com/about-aws/whats-new/2019/09/amazon-eks-announces-beta-release-of-amazon-efs-csi-driver/)
Direct NFS mount
We will discuss both options. However, please note that using the CSI driver requires more resources, as it deploys a DaemonSet, which runs several pods on each of your nodes. We found we could NOT get this running on the AWS free-tier nodes, even when we were running only a small test pod to utilize the storage. Our pod would never get scheduled due to resource limitations. So, if you are going to use the CSI driver, we recommend using at least medium-sized nodes on AWS. The Direct NFS option works on smaller nodes, as there is no overhead from running the CSI driver.
The following guidance is provided for setting up your storage:
Create your EFS storage in the same VPC as your Kubernetes cluster. Only static provisioning of EFS is currently supported, so you must manually provision the storage prior to the next steps (see the CLI sketch after this list).
When creating the EFS filesystem, make sure it is accessible from the Kubernetes cluster. This can be achieved by creating the filesystem inside the same VPC as the Kubernetes cluster or using VPC peering. It is recommended to have it in the same VPC as K8s to simplify the installation.
Permissions and settings are detailed here: https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html
Grab the filesystem ID, as you will need it later.
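If you prefer the CLI, here is a minimal sketch of that provisioning step; the creation token is arbitrary, and the [SubnetID] and [SecurityGroupID] placeholders are assumptions for a subnet your nodes run in and a security group that allows inbound NFS (port 2049) from them:

# Create the EFS filesystem; note the FileSystemId in the output
aws efs create-file-system --creation-token c2labs-efs

# Make it reachable from the cluster (repeat for each subnet your nodes use)
aws efs create-mount-target \
    --file-system-id [FileSystemID] \
    --subnet-id [SubnetID] \
    --security-groups [SecurityGroupID]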
Option 1: CSI Driver
Amazon Announcement: https://aws.amazon.com/about-aws/whats-new/2019/09/amazon-eks-announces-beta-release-of-amazon-efs-csi-driver/
GitHub Repo: https://github.com/kubernetes-sigs/aws-efs-csi-driver
Deploy the CSI Driver:
kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"
Create a StorageClass manifest file, named: aws-efs-csi-sc.yaml
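Here is a minimal sketch of that manifest, based on the examples in the GitHub repo above; the class name efs-sc is our own choice and simply needs to match the PV and PVC that follow:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com    # the EFS CSI driver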
Deploy the StorageClass:
kubectl apply -f aws-efs-csi-sc.yaml
Create the PersistentVolume manifest file, named: c2labs-aws-csi-pv.yaml. Insert your [FileSystemID] from when you set up the storage above.
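A sketch of that PV, modeled on the driver's static-provisioning examples; the capacity value is a required field but, as noted below for the PVC, is not enforced by EFS:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: c2labs-aws-csi-pv
spec:
  capacity:
    storage: 5Gi              # required field, not enforced by EFS
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany           # allows mounting into multiple pods
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: [FileSystemID]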
Deploy the Persistent Volume (PV):
kubectl apply -f c2labs-aws-csi-pv.yaml
Create the PersistentVolumeClaim manifest file, named: c2labs-aws-csi-pvc.yaml. Insert the [Namespace] that you are using; if you are deploying to default, you can use default here or remove the line. You can change the name to whatever you want to use in your pods; for this example, we used c2labs-files. Note that AWS does not limit your storage based on the value requested here, so you can put nearly anything; you will pay for the amount you actually consume in your pods.
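A matching PVC sketch, assuming the efs-sc StorageClass from above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: c2labs-files
  namespace: [Namespace]
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi            # not enforced; you pay for actual usage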
Deploy the Persistent Volume Claim (PVC):
kubectl apply -f c2labs-aws-csi-pvc.yaml
You are now able to use c2labs-files in any of your pod definitions.
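Before moving on, you can confirm the claim bound successfully; the PVC should show a STATUS of Bound:

kubectl get pv,pvc -n [Namespace]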
Option 2: Direct NFS
Create the PersistentVolume manifest file, named: c2labs-aws-nfs-pv.yaml. Insert your [FileSystemID] and [Region] from when you set up the storage above.
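A sketch of that PV; EFS exposes a standard NFS endpoint at [FileSystemID].efs.[Region].amazonaws.com, which is all this option needs:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: c2labs-aws-nfs-pv
spec:
  capacity:
    storage: 5Gi              # required field, not enforced by EFS
  accessModes:
    - ReadWriteMany           # allows mounting into multiple pods
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: [FileSystemID].efs.[Region].amazonaws.com
    path: "/"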
Deploy the Persistent Volume (PV):
kubectl apply -f c2labs-aws-nfs-pv.yaml
Create the PersistentVolumeClaim manifest file, named: c2labs-aws-nfs-pvc.yaml. As with the CSI option, insert the [Namespace] that you are using (or remove the line if deploying to default), and change the name to whatever you want to use in your pods; for this example, we used c2labs-files. Again, AWS does not limit your storage based on the requested value, and you pay for the amount you actually consume.
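A matching PVC sketch; since no StorageClass is involved here, an empty storageClassName disables dynamic provisioning and volumeName pins the claim to the static PV above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: c2labs-files
  namespace: [Namespace]
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""              # disable dynamic provisioning
  volumeName: c2labs-aws-nfs-pv     # bind directly to the static PV
  resources:
    requests:
      storage: 5Gi                  # not enforced; you pay for actual usage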
Deploy the Persistent Volume Claim (PVC):
kubectl apply -f c2labs-aws-nfs-pvc.yaml
You are now able to use c2labs-files in any of your pod definitions.
Using the Storage
After implementing either of the above options, you should have storage ready to use in your pods. Simply map it with the following entries in the container spec of your Deployment manifest file:
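Here is a minimal Deployment sketch wiring the PVC in; the efs-test name, labels, and nginx image are placeholders for your own application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: efs-test
  namespace: [Namespace]
spec:
  replicas: 2
  selector:
    matchLabels:
      app: efs-test
  template:
    metadata:
      labels:
        app: efs-test
    spec:
      containers:
      - name: app
        image: nginx                      # placeholder image
        volumeMounts:
        - name: persistent-storage
          mountPath: "/test/files"        # where the storage appears in the container
      volumes:
      - name: persistent-storage
        persistentVolumeClaim:
          claimName: c2labs-files         # the PVC created earlier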
A couple of things to note here:
mountPath: "/test/files" - This is wherever you want the storage mounted within your pod/container.
claimName: c2labs-files - This refers to the name of the PVC you set up previously.
That is it! You now have a PVC mapped to AWS Elastic File System that can be mounted to multiple pods. Create a deployment with 2 replicas of the above spec, or simply create 2 deployments, each pointed at the same storage.
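To convince yourself the storage really is shared, write a file from one replica and read it from another; the pod names below come from our hypothetical efs-test Deployment, so substitute your own:

kubectl get pods -n [Namespace] -l app=efs-test
kubectl exec [Pod1Name] -n [Namespace] -- sh -c 'echo hello > /test/files/hello.txt'
kubectl exec [Pod2Name] -n [Namespace] -- cat /test/files/hello.txt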
At C2 Labs, we love to work on challenging problems such as this. We serve our clients as a Digital Transformation partner, ensuring their projects are successful from beginning to operational hand-offs. We would love to talk to you more about the exciting challenges your organization is facing and how we can help you fundamentally transform IT to Take Back Control. Please CONTACT US to learn more.