Sizing AWS EFS accordingly


AWS Elastic File System (EFS) is a great tool for shared storage in Auto Scaling group scenarios. There are two throughput modes to choose from for your file system: Bursting Throughput and Provisioned Throughput. With Bursting Throughput mode, throughput on Amazon EFS scales with the size of your file system in the standard storage class. EFS performance is well documented in AWS's knowledge base, so we won't get too in-depth here.

One caveat with Bursting Throughput that we'll discuss in this post is the bursting limitation for small file systems.

(Table: Bursting Throughput limits by file system size, from the AWS EFS performance documentation, listing File System Size (GiB), Baseline Aggregate Throughput (MiB/s), burst throughput, and the allowed % of time bursting per day.)

As the table shows, the smaller the File System Size (GiB), the lower the Baseline Aggregate Throughput (MiB/s).
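The relationship is linear: Bursting Throughput mode provisions a baseline of 50 MiB/s per TiB of data in the standard storage class. A quick back-of-envelope calculation (a sketch; the per-TiB rate is the documented one, the sizes are just examples):

# Baseline scales linearly: 50 MiB/s per TiB = 50/1024 MiB/s per GiB
awk -v gib=256 'BEGIN { printf "%.1f MiB/s\n", gib * 50 / 1024 }'   # -> 12.5 MiB/s
awk -v gib=10  'BEGIN { printf "%.1f MiB/s\n", gib * 50 / 1024 }'   # -> 0.5 MiB/s

By the same math, a 10 GiB file system gets roughly 0.5 MiB/s of baseline, which is why small file systems burn through their burst credits so quickly.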

To ensure a workable initial baseline aggregate throughput, you'll need to increase the metered size of the file system by padding it with dummy data using dd. Once you have an EFS file system created and mounted (in this example, at /efs), use the following command to grow it to 256 GiB:

cd /efs
# Write 256 GiB of random data (262144 one-MiB blocks) in the background
sudo nohup dd if=/dev/urandom of=256gib.img bs=1024k count=262144 status=progress &
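While dd runs, you can verify the metered size locally and keep an eye on your remaining burst credits in CloudWatch. A minimal check, assuming the AWS CLI is configured and fs-12345678 stands in for your file system ID:

# Local view of the metered file system size
df -h /efs
# Remaining burst credits over the past hour (BurstCreditBalance is a
# standard EFS CloudWatch metric; the file system ID below is a placeholder)
aws cloudwatch get-metric-statistics \
  --namespace AWS/EFS \
  --metric-name BurstCreditBalance \
  --dimensions Name=FileSystemId,Value=fs-12345678 \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 --statistics Average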

The target size can be changed by adjusting the count= argument, but bear in mind the table above: the padded size also determines the allowed % of time for bursting. A parameterized version of the command is sketched below.
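For repeat use, the same command can be wrapped in a small helper that takes the mount point and target size. This is a hypothetical sketch; pad_efs and the pad-file naming are made up for illustration:

# Pad an EFS mount to a target size in GiB (hypothetical helper)
pad_efs() {
  local mount_point="$1"   # e.g. /efs
  local target_gib="$2"    # e.g. 512
  # bs=1024k writes 1 MiB blocks, so count = target GiB * 1024
  sudo nohup dd if=/dev/urandom of="${mount_point}/pad-${target_gib}gib.img" \
    bs=1024k count=$((target_gib * 1024)) status=progress &
}
pad_efs /efs 512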

Deployed applications that keep their data on an EFS mount, such as a Jenkins controller with its home directory on the share, will benefit from this. The Jenkins /workspace path, for example, generates many bursty writes, and the volume grows with the job/pipeline/project count.
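For reference, here is a minimal sketch of such a mount over NFSv4.1, using the mount options AWS recommends (fs-12345678 and us-east-1 are placeholders; the amazon-efs-utils mount helper is an alternative):

# Mount the EFS file system as the Jenkins home directory
sudo mkdir -p /var/lib/jenkins
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /var/lib/jenkins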