
Set up GlusterFS cluster

Warning
GlusterFS is not recommended for deploying Elasticsearch; see link1 and link2 for more background. The only approach we found that sounded close to what we want is link3, but we had no time to look into it in more detail.

1. Create LVs on Bare Metal

Create a 100GB logical volume on each server with the following command:

sudo lvcreate -L 100g <name-of-the-volume-group>

The name of the volume group can be found under the VG column in the output of the following command:

sudo lvs
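
For illustration, the output looks something like this (the volume group name here is hypothetical):

  LV    VG      Attr       LSize
  lvol0 pegasus -wi-a----- 100.00g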

2. Create LVs on AWS

2.1. Command-line way

Assuming the EBS volume is attached as /dev/xvdf, create a volume group on it and a 9GB logical volume inside it:

sudo vgcreate worker3 /dev/xvdf
sudo lvcreate -L 9g worker3
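
The resulting device then shows up under /dev/mapper; lvol0 below is LVM's default LV name, so adjust if you named the LV explicitly:

ls -l /dev/mapper/worker3-lvol0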

Add the disk volume device, on the same line, for each host in your GlusterFS cluster inventory, e.g. disk_volume_device_1=/dev/mapper/ent--vg-lvol0.

Sample inventory:

[all]
pegasus      ansible_host=139.91.23.5 ip=139.91.23.5 disk_volume_device_1=/dev/mapper/pegasus--vg-lvol0
ent      ansible_host=139.91.23.8 ip=139.91.23.8 disk_volume_device_1=/dev/mapper/ent--vg-lvol0
[gfs-cluster:children]
kube-node

[network-storage:children]
gfs-cluster

Basically, for each Kubernetes PV you want to add to GlusterFS, you will need to follow this procedure:

  1. Add extra EBS volumes and attach them to existing EC2 instances (on bare metal, create extra LVs similarly)

  2. Add the device mapping to the Ansible inventory, incrementing the disk_volume_device number, as in the sample below
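
For example, after attaching a second volume to pegasus, its inventory line might become (the second device name is hypothetical):

pegasus      ansible_host=139.91.23.5 ip=139.91.23.5 disk_volume_device_1=/dev/mapper/pegasus--vg-lvol0 disk_volume_device_2=/dev/mapper/pegasus--vg-lvol1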

2.2. Troubleshooting

2.2.1. Allow non-root users to write to GlusterFS volumes

As long as your container runs as root, you're fine. Alas, that's not the case for all containers (hello, Elasticsearch!).

We have encountered what seems to be a pretty common issue.

One solution:

set the owner and group id on the gluster volume, and set Access Control Lists on the node.
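
A minimal sketch of what that could look like, assuming gid 1000 and the volume mounted at /mnt/gluster on the node (both hypothetical):

gluster volume set $VOLUME storage.owner-uid 1000
gluster volume set $VOLUME storage.owner-gid 1000
sudo setfacl -R -m g:1000:rwx /mnt/gluster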

Second solution:

chown ???

Third solution:

gluster volume set $VOLUME server.allow-insecure on

(this allows clients to connect from non-privileged ports, which non-root client processes have to use)

In the end, we managed to overcome this by using both of the following. On the PVC:

  annotations:
    pv.beta.kubernetes.io/gid: "1234"

AND adding the following security context on the spec of the Pod (not the container) that is using the PVC:

  securityContext:
    supplementalGroups: [1000]
    fsGroup: 1000
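
To make the placement concrete, here is a minimal sketch of both pieces together; the PVC/Pod names, image, storage size, and mount path are all hypothetical:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-data                  # hypothetical name
  annotations:
    pv.beta.kubernetes.io/gid: "1234"
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 9Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: es                       # hypothetical name
spec:
  securityContext:               # Pod-level, not under containers[]
    supplementalGroups: [1000]
    fsGroup: 1000
  containers:
  - name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4   # hypothetical image
    volumeMounts:
    - name: data
      mountPath: /usr/share/elasticsearch/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: es-data         # the PVC carrying the gid annotation above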


With all this done, you’re now ready to start Deploying on Kubernetes!