Deployment automation of shared storage PVs on EKS using AWS EFS and CSI Driver
20 Jan 2023


Kubernetes has established itself as the go-to orchestrator over the last few years, which is the primary reason almost all the leading cloud providers now offer their own managed Kubernetes services. One such widely popular service is Amazon Elastic Kubernetes Service (EKS), which lets users efficiently run Kubernetes on the AWS cloud and on-premises. The Amazon Elastic File System (EFS) Container Storage Interface (CSI) Driver provides an interface that allows Kubernetes clusters running on AWS to manage the lifecycle of EFS file systems.

What is AWS EKS?

Amazon EKS is a fully managed Kubernetes service that lets you run upstream Kubernetes on AWS with a highly available control plane spread across multiple AWS Availability Zones. EKS can automatically scale and manage the infrastructure resources that back your Kubernetes clusters on AWS.

Many organizations deploy applications on EKS clusters. When a Deployment or StatefulSet runs multiple replicas, data often needs to be transferred or shared between pods. For example, if you deploy multiple replicas of a Jenkins server or a WordPress website, the same storage must be shared across all of the application's pods. Likewise, when scaling applications with the Horizontal Pod Autoscaler (HPA), workloads that need a shared location for application data require a file system that can be mounted across Availability Zones, nodes, and pods.

How can we create shared storage on AWS to deploy on EKS?

Amazon EFS provides a simple, scalable, fully managed elastic shared file system. With Amazon EFS, you can deploy shared file systems without the need to provision and manage capacity to accommodate growing storage requirements. EKS can use EFS file systems to create shared storage between pods within and across AWS Availability Zones.

EFS can also help make Kubernetes applications scalable and highly available, because all data written to EFS is replicated across multiple AWS Availability Zones. Scaled-out pods can therefore share the same data whenever they depend on a common data set.

What is an EFS CSI Driver?

Amazon's EFS Container Storage Interface (CSI) Driver provides a CSI interface that lets Kubernetes clusters running on AWS manage the lifecycle of Amazon EFS file systems. The EFS CSI Driver supports both static and dynamic provisioning of PVs. For static provisioning, the AWS EFS file system must first be created manually on AWS; after that, the Driver can mount it inside a container as a volume. Dynamic provisioning, on the other hand, creates an EFS access point for each PV, although this approach has certain limitations, listed below.
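
For static provisioning, a minimal PersistentVolume sketch looks like the following. The PV name efs-static-pv is hypothetical, and fs-xxxxxxxx stands for the ID of a file system created manually beforehand:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-static-pv            # hypothetical name for this sketch
spec:
  capacity:
    storage: 5Gi                 # required by Kubernetes; EFS itself is elastic
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxx    # ID of the manually created EFS file system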

EFS CSI Driver architecture

Limitations of the EFS CSI Driver

  • The Amazon EFS CSI Driver isn't compatible with Windows-based container images.
  • Dynamic persistent volume provisioning is not supported with Fargate nodes.

Before we move on to the actual deployment automation of shared storage PVs on EKS using AWS EFS and the EFS CSI Driver, let's look at the prerequisites.

Prerequisites

  • An existing Amazon EKS cluster
  • An IAM OIDC provider created and configured for the EKS cluster (a command sketch follows this list)
  • Command-line tool eksctl installed
  • Command-line tool kubectl installed
  • Command-line tool awscli installed
  • Command-line tool helm installed
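
If the OIDC provider is not set up yet, a minimal sketch using eksctl (with your cluster name and region substituted) looks like this:

eksctl utils associate-iam-oidc-provider \
  --cluster <cluster-name> \
  --region <region> \
  --approve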

Clone the automation repo opcito-blogs/efs-csi-provisioner

git clone git@github.com:opcito-blogs/efs-csi-provisioner.git

Create IAM Policy and IAM Role to integrate EFS with EKS

Create an IAM policy and attach it to an IAM role. The policy allows the Amazon EFS driver to interact with your file system; more specifically, it allows the CSI Driver's service account to make calls to AWS APIs on your behalf. The automation script below creates both the policy and the role:

./deploy.sh --action create_role --region <region> --cluster-name <cluster-name>
Example: ./deploy.sh --action create_role --region us-east-1 --cluster-name opcito-eks
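
As a quick sanity check, you can list the account's customer-managed policies and look for the one the script just created (the exact policy name depends on the script):

aws iam list-policies --scope Local --query "Policies[].PolicyName" --output table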

Next, create and configure the EFS file system using the automation script, which runs Terraform infrastructure-as-code under the hood. The script creates an EFS file system, security groups, and the network mount targets that let EFS be mounted in the provided VPC ID and the region specified in the command.

./deploy.sh --action create_efs --efs-name <name> --vpc-id <vpc-xxxxx> --region <region>
Example: ./deploy.sh --action create_efs --efs-name opcito-efs --vpc-id vpc-43434344 --region us-east-1

Running this command creates the EFS file system and prints output like the sample below. Note the File System ID; you will use it to create the storage class later.

##### Use the following information to deploy Storage Class #####

File System ID ====> fs-455555555
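
As a quick check, the AWS CLI can confirm the file system exists and return its ID (using the opcito-efs name from the example above):

aws efs describe-file-systems --region us-east-1 \
  --query "FileSystems[?Name=='opcito-efs'].FileSystemId" --output text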

Install the EFS CSI Driver on your EKS cluster using the following automation script:

./deploy.sh --action deploy_csi --region <region> --cluster-name <cluster-name>

This deploys a Helm chart from https://kubernetes-sigs.github.io/aws-efs-csi-driver into your cluster.
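
Once the script finishes, verify that the driver pods are running; the label below is the one the upstream chart applies:

kubectl get pods -n kube-system -l "app.kubernetes.io/name=aws-efs-csi-driver"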

Alternatively, you can deploy the driver directly with Helm, as shown below:

Add Helm Repo

helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/

Update Helm Repo

helm repo update 

Install Release

helm upgrade -i aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system \
  --set image.repository=602401143452.dkr.ecr.region-code.amazonaws.com/eks/aws-efs-csi-driver \
  --set controller.serviceAccount.create=false \
  --set controller.serviceAccount.name=efs-csi-controller-sa

Replace region-code in the image repository with the AWS Region of your cluster.
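
Because controller.serviceAccount.create=false tells the chart to reuse an existing service account, you can confirm that efs-csi-controller-sa exists in kube-system and carries the IAM role annotation created earlier:

kubectl describe serviceaccount efs-csi-controller-sa -n kube-system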

Create a Storage Class to provision dynamic PVCs

Create a storage class for EFS in EKS. Replace <file-system-id> with the File System ID obtained above and save the manifest as storageclass.yaml:

kind: StorageClass 
apiVersion: storage.k8s.io/v1 
metadata: 
  name: efs-sc 
provisioner: efs.csi.aws.com 
parameters: 
  provisioningMode: efs-ap 
  fileSystemId: <file-system-id> 
  directoryPerms: "700" 
  gidRangeStart: "1000" # optional 
  gidRangeEnd: "2000" # optional 
  basePath: "/dynamic_provisioning" # optional 

kubectl apply -f storageclass.yaml
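
Confirm the storage class was registered:

kubectl get storageclass efs-sc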

Use the efs-sc storage class to create a dynamic PVC backed by EFS, as shown below:

pvc-pod.yaml

--- 
apiVersion: v1 
kind: PersistentVolumeClaim 
metadata: 
  name: efs-claim 
spec: 
  accessModes: 
    - ReadWriteMany 
  storageClassName: efs-sc 
  resources: 
    requests: 
      storage: 5Gi 
--- 
apiVersion: v1 
kind: Pod 
metadata: 
  name: efs-app 
spec: 
  containers: 
    - name: app 
      image: centos 
      command: ["/bin/sh"] 
      args: ["-c", "while true; do echo $(date -u) >> /data/out; sleep 5; done"] 
      volumeMounts: 
        - name: persistent-storage 
          mountPath: /data 
  volumes: 
    - name: persistent-storage 
      persistentVolumeClaim: 
        claimName: efs-claim 

kubectl apply -f pvc-pod.yaml
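
Once applied, the PVC should bind and the pod should start appending timestamps to the shared volume; a quick sanity check:

kubectl get pvc efs-claim
kubectl exec efs-app -- tail -n 3 /data/out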

This is how you can use the EFS CSI Driver to mount EFS on EKS as dynamic Persistent Volumes. PVCs created with the efs-sc storage class mount EFS as PVs in pods, and the driver creates a new folder for each Persistent Volume inside the EFS root folder. Because the volumes are ReadWriteMany, multiple pods can mount the same claim; a sketch follows.
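
For instance, this hypothetical second pod (reader-app is a name chosen for this sketch) mounts the same efs-claim and tails the file that efs-app writes:

apiVersion: v1
kind: Pod
metadata:
  name: reader-app
spec:
  containers:
    - name: reader
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do tail -n 1 /data/out; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: efs-claim

Feel free to share your thoughts on this automation process, and if you have any queries, let me know in the comments below.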
