» Post Updates
- 2024-09-30: Added a second approach using a sidecar container.
- 2024-09-30: Added a third approach using nfsmount.
» Introduction
Kubernetes has a wide range of CSI drivers for mounting remote storage. I was looking to mount Hetzner Cloud Object Storage, but unfortunately, the available options were not to my satisfaction. There are several options for mounting S3-compatible storage, but few of them offer transparent encryption, and if they do, you are stuck with that tool. If I have to be stuck with a tool, why not a very popular one I can use almost everywhere? When I can mount volumes with rclone and its crypt module in Kubernetes, I can use the same settings for mounting the bucket on my other machines, and that was my goal. Unfortunately, there is no rclone CSI driver yet, so I went with the hacky solution of directly using the host path of the Node.
Two articles guided the way for this post: Kubernetes shared storage with S3 backend and Mounting S3 bucket in docker containers on kubernetes.
With the following example, we should be able to use all rclone-supported remotes, such as S3, SFTP, SMB, etc.
» Approach 1: DaemonSet
With the DaemonSet approach, you can mount the same data in multiple Pods across different namespaces.
» Deployment
» Namespace
First, let us create a dedicated rclone namespace:
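A minimal Namespace manifest could look like this (a sketch; only the name follows from the text above):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: rclone
```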
The Secret contains an rclone configuration just as you would normally write it. When using encryption, the crypt entry's remote must point to the unencrypted config entry.
» Secret
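A sketch of such a Secret is shown below. The remote names (s3-remote, s3-crypt), the bucket name, the endpoint, and the Secret name are assumptions for illustration; the crypt passwords must be obscured values as produced by rclone obscure, and the crypt remote points at the unencrypted S3 entry:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: rclone-config
  namespace: rclone
stringData:
  rclone.conf: |
    [s3-remote]
    type = s3
    provider = Other
    endpoint = <S3_ENDPOINT>
    access_key_id = <ACCESS_KEY>
    secret_access_key = <SECRET_KEY>

    [s3-crypt]
    type = crypt
    remote = s3-remote:my-bucket
    password = <OBSCURED_PASSWORD>
```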
» DaemonSet
The DaemonSet will execute the rclone mount command on every node of your Kubernetes cluster, so no matter on which node a Pod starts, the mount will be there.
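A sketch of such a DaemonSet, assuming the rclone/rclone image (which reads its config from /config/rclone/rclone.conf), the s3-crypt remote from the Secret sketch, and /mnt/rclone as the host path; the image, paths, and flags are assumptions you would adapt:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rclone-mount
  namespace: rclone
spec:
  selector:
    matchLabels:
      app: rclone-mount
  template:
    metadata:
      labels:
        app: rclone-mount
    spec:
      containers:
        - name: rclone
          image: rclone/rclone:latest
          args:
            - mount
            - "s3-crypt:"
            - /mnt/rclone/my-bucket
            - --allow-other
            - --vfs-cache-mode=writes
          securityContext:
            privileged: true  # required for FUSE and Bidirectional propagation
          volumeMounts:
            - name: config
              mountPath: /config/rclone
            - name: host-mount
              mountPath: /mnt/rclone
              # propagate the FUSE mount back to the node
              mountPropagation: Bidirectional
      volumes:
        - name: config
          secret:
            secretName: rclone-config
        - name: host-mount
          hostPath:
            path: /mnt/rclone
            type: DirectoryOrCreate
```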
Now one or more rclone Pods should be running, each having mounted the S3 bucket to the host path of its Node.
» Example Deployment
Finally, we can use this host path in other Pods to access our S3 bucket. Those Pods don't need to be in the rclone namespace, as they do not reference anything there; the only reference is the volume from the host path.
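A sketch of a consuming Deployment, assuming the /mnt/rclone/my-bucket host path from the DaemonSet sketch; the app name and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: alpine:3
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: my-bucket
              mountPath: /my-bucket
              # pick up the mount even if rclone (re)mounts after this Pod starts
              mountPropagation: HostToContainer
      volumes:
        - name: my-bucket
          hostPath:
            path: /mnt/rclone/my-bucket
```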
When you shell into the container, you should be able to see your unencrypted files in the /my-bucket directory.
» Troubleshooting
» Transport endpoint is not connected
When your Pod fails to start with CreateContainerError, check the events at the bottom of the output after executing:
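For example (substitute your Pod name and namespace):

```shell
kubectl describe pod <POD_NAME> -n <NAMESPACE>
```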
When you see the following error:
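The message in question contains the error named in the heading above, roughly:

```
transport endpoint is not connected
```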
It can happen that the mount was not properly unmounted while playing with the settings and restarting the Pods from the DaemonSet. If so, shell into your Node and unmount it manually (umount <PATH>).
It should be possible to fix this with a preStop lifecycle handler (inspired by kube-rclone, which I found after writing the first version of this post).
» Approach 2: Sidecar Container
With the sidecar container, you can mount the data only in this specific Pod by using an emptyDir volume.
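A sketch of such a Pod, reusing the rclone-config Secret and the s3-crypt remote assumed earlier; the rclone sidecar mounts into a shared emptyDir with Bidirectional propagation, and the app container sees it via HostToContainer:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-rclone
spec:
  containers:
    - name: rclone
      image: rclone/rclone:latest
      args: ["mount", "s3-crypt:", "/data", "--allow-other"]
      securityContext:
        privileged: true  # needed for FUSE and Bidirectional propagation
      volumeMounts:
        - name: config
          mountPath: /config/rclone
        - name: data
          mountPath: /data
          mountPropagation: Bidirectional
    - name: app
      image: alpine:3
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /my-bucket
          mountPropagation: HostToContainer
  volumes:
    - name: config
      secret:
        secretName: rclone-config
    - name: data
      emptyDir: {}
```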
» Approach 3: nfsmount
I’ve figured out that sharing the mount in Kubernetes performs very poorly when seeking in bigger files. I’ve tried various of the available caching options, but with no success. I’m not sure whether it is the decryption or something else. After spending several hours, I tried it with rclone nfsmount.
Due to laziness, I will just paste the relevant parts of the manifest.
We adjust our rclone mount container to use nfsmount instead. I had to specify the mount options manually to add nolock.
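A sketch of the adjusted container spec, keeping the remote and paths assumed in the DaemonSet sketch; the extra NFS mount option is passed via rclone's -o flag:

```yaml
containers:
  - name: rclone
    image: rclone/rclone:latest
    args:
      - nfsmount
      - "s3-crypt:"
      - /mnt/rclone/my-bucket
      - --allow-other
      - "-o"
      - nolock
```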
This increased the performance drastically for my use case, and it now behaves as I would have expected from the beginning. Unfortunately, this approach also comes with the disadvantage that NFS does not support inotify, so your applications won’t be able to detect changes based on this feature.
» Summary
When you try to access the data without the crypt module from rclone (e.g. with the minio client or the Hetzner Cloud Console), you will only see encrypted files and filenames.
With rclone we have a very versatile tool which not only allows us to mount S3-compatible remotes, but all other supported protocols too, with all available features such as encryption. Especially with the encryption feature, I didn’t want to use a tool which only runs on Kubernetes; I can use rclone on other machines too, I just have to copy the rclone config.
While examining the deployment manifests, you have probably already noticed a big bummer: we are running the containers in privileged mode. This is necessary for mounting on the host path, as well as for using the Bidirectional mount propagation. Privileged means the container has full access to your Node! This is a huge disadvantage and security concern, so you have to weigh up whether it is worth it for you.