
Kubernetes: There seems to be some sort of writing protection on the docker image #419

Open
HakunMatat4 opened this issue Jul 14, 2023 · 8 comments

@HakunMatat4

Image: phpmyadmin:5.2.1

I am playing with phpMyAdmin and MariaDB on my homelab Kubernetes cluster, and although it all works, the Docker image doesn't seem to accept environment variables:

[screenshot]

But none of those values are being passed over; in fact, TZ is set to TZ=UTC.

MariaDB pod, which is set up just like the above:

[screenshot]

printenv does show those env vars, and I can run mysql -u user -p and it all works:

MARIADB_USER
MARIADB_PASSWORD
MARIADB_ROOT_PASSWORD

I can access it from phpmyadmin just fine but something isn't 100% right.

[screenshot]

I tried to mount its /var/www/html as a persistent volume, but it does not accept that either: the volume stays empty after mounting, which is why I think there is some write restriction on this image.
MariaDB does mount its volume; I have deleted it a dozen times while playing around and the data persists as expected.

Thank you

@williamdes
Member

This user seems to have got it working: #294 (comment)

I still haven't had time to try Kubernetes myself.

@HakunMatat4
Author

@williamdes I got somewhere.

phpmyadmin:latest did accept the envs, but I am not a big fan of running latest images.
Even then it won't mount the volume; you get access-denied issues, so if I want to make changes I must build my own Docker image.

Something isn't right with the image permissions; even in that other ticket you mentioned, you can see that the user stopped mounting a volume to get it working.

I was running phpmyadmin:5.2.1, so I downgraded to phpmyadmin:5.2 and it is accepting the envs now.
When accessing it for the first time, it auto-logs in using the envs provided; no further login is required.

[screenshot]

# printenv | grep PMA
PMA_PASSWORD=<value>
PMA_HOST=mariadb-service.mariadb.svc.cluster.local
PMA_USER=<value>
PMA_PORT=40000
PMA_ARBITRARY=1

This is my whole Terraform template, and it works fine on my homelab Kubernetes cluster.

resource "kubernetes_service" "phpmyadmin_service" {
  metadata {
    name      = "${var.app}-service"
    namespace = var.namespace
    labels = {
      app = var.app
    }
  }
  spec {
    selector = {
      app  = var.app
      tier = "app"
    }
    type = "LoadBalancer"
    port {
      port        = 80
      target_port = 80
    }
  }
}


resource "kubernetes_deployment" "phpmyadmin_deployment" {
  metadata {
    name = var.app
    namespace = var.namespace   
    labels = {
      app = var.app
      tier = "app"
    }
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = var.app
        tier = "app"
      }
    }
    template {
      metadata {
        labels = {
          app = var.app
          tier = "app"         
        }
      }
      spec {
        container {
          name = var.app
          image = var.image
          port {
            container_port = 80
          }                                                                          
          env_from {
            secret_ref {
              name = "mariadb-user"
            }
          }
          env_from {
            secret_ref {
              name = "mariadb-password"
            }
          }
          env {
            name  = "PMA_ARBITRARY"
            value = "1"
          }
          env {
            name  = "PMA_HOST"
            value = "mariadb-service.mariadb.svc.cluster.local"
          }
          env {
            name  = "PMA_PORT"
            value = "40000"
          }
        }                      
      }
    }
  }
}
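For reference, the env_from blocks assume Secrets named mariadb-user and mariadb-password already exist in the namespace. A minimal sketch of one of them for the Terraform kubernetes provider; the PMA_USER key follows the image's documented variable names, while var.db_user is a made-up input, not part of the template above:

```hcl
# Sketch only: the Secret's keys become env var names via env_from, so they
# must match what the phpMyAdmin image reads (PMA_USER is documented;
# var.db_user is an assumed input variable).
resource "kubernetes_secret" "mariadb_user" {
  metadata {
    name      = "mariadb-user"
    namespace = var.namespace
  }
  data = {
    PMA_USER = var.db_user
  }
}
```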

Thank you

@williamdes
Member

Let's keep this one open until we solve most of it and have a working documentation for users.

What changes do you want to make?

I was running phpmyadmin:5.2.1 so I downgraded to phpmyadmin:5.2 and it is accepting the envs now.

This makes no sense, it's probably the opposite. And it's an upgrade then
Is 5.2.1 tag working better?
By the way, using latest or another tag gives you the same software and container. I mean, latest is only a shortcut name :)

@HakunMatat4
Author

This makes no sense, it's probably the opposite. And it's an upgrade then

I should have explained better lol

For security reasons it is always good practice to use a pinned tag rather than latest; that is why I went with 5.2.1 instead of latest.

By the way, using latest or another tag gives you the same software and container. I mean, latest is only a shortcut name :)

Hmm, not really: if I redeploy this template it will run 5.2, while latest will always pick the newest version, which I believe is 5.2.1.
I usually check the vulnerabilities and build my own image with the fix, but yeah, I won't get into that lol


So to wrap up: I am running phpmyadmin:5.2 at the moment because the vars are passed through as expected and the auto-login works by reading the envs; 5.2.1 didn't work.
The only pending bit is the volume: if I set a PersistentVolumeClaim to keep the files, I get an access-denied error.
I can live with that for now; it is a homelab, after all.

@HakunMatat4
Author

To give you more context:

[screenshot]

Declare the persistentvolumeclaim:

resource "kubernetes_persistent_volume" "phpmyadmin_config_pv" {
  metadata {
    name = "phpmyadmin-config-pv"
  }
  spec {
    capacity = {
      storage = "5Gi"
    }
    access_modes = ["ReadWriteOnce"]
    persistent_volume_source {
      host_path {
        path = "/persistent_volume/phpmyadmin/"
      }
    }
  }
}

resource "kubernetes_persistent_volume_claim" "phpmyadmin_config_pvc" {
  metadata {
    name = "phpmyadmin-config-pvc"
    namespace = var.namespace    
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "5Gi"
      }
    }
    volume_name = kubernetes_persistent_volume.phpmyadmin_config_pv.metadata.0.name
  }
}

Mount it:

      volume_mount {
        name       = "lib-mysql"
        mount_path = "/var/www/html"
      }



      volume {
        name = "lib-mysql"
        persistent_volume_claim {
          claim_name = "phpmyadmin-config-pvc"
        }
      } 

Empty directory:

root@node01:/persistent_volume/phpmyadmin# ls
root@node01:/persistent_volume/phpmyadmin# 
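A guess at the access-denied cause (not confirmed in this thread): Apache in the image serves files as www-data, which is UID/GID 33 in Debian-based images, while a hostPath directory is typically owned by root. Note that Kubernetes does not apply fsGroup to hostPath volumes, so with this particular PV the most direct test is to chown /persistent_volume/phpmyadmin to 33:33 on the node; with other storage backends, a pod-level fsGroup in the deployment's template spec would be the Terraform-level knob:

```hcl
# Sketch only: goes inside template { spec { ... } }, next to the container
# block. fs_group = 33 assumes the image's Apache runs as www-data (UID/GID
# 33 in Debian-based images) and that the volume backend honors fsGroup
# (hostPath does not).
security_context {
  fs_group = 33
}
```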

@williamdes
Member

Thank you, I will have another look at this
But the tag 5.2.1 is exactly the same as 5.2, at the moment. I do not get why it's not working
You are not using the fpm version, right?

Also, why do you want to mount a volume?
The container manages its content itself :)
That said, a while ago it would have filled your volume with the contents; we still need to bring that behavior back.

@HakunMatat4
Author

HakunMatat4 commented Jul 14, 2023

You are not using the fpm version, right?

That is correct, I'm using the normal version.

Also, why do you want to mount a volume?
The container manages its content itself :)

I mean, why not?? hahaha
Heimdall Dashboard, for example: to upload a higher-quality background image, you need to edit its config file and restart the container.

Well, a Kubernetes Deployment will automatically roll out a stock container if you do that.
Also, if you update the container version, any changes will be lost; that is why you want to mount the application volume, so changes remain no matter what happens to the container.
This is normal practice in Docker just as it is in Kubernetes, whether I'm running my homelab or the company's GCP/AWS Kubernetes cluster.

@williamdes
Member

Okay, so mounting the volume on the apache2 version is most probably a bad idea, as it was not made for this use case. You are mostly creating problems for yourself ^^

Also, if you update the container version, any changes will be lost; that is why you want to mount the application volume, so changes remain no matter what happens to the container.

For this, there is no need to save any changes, as nothing changes in the container :)
You are safe dropping the volume for /var/www/html.
The only volumes you should mount are the ones you can find in the documentation.
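For what it's worth, the customization point the image README documents is a single user config file rather than the whole web root. In Terraform that might look roughly like this; the volume name and ConfigMap name are illustrative, only the mount path comes from the documentation:

```hcl
# Sketch only: mounts a user config file at the path the image reads.
# "pma-config" and "phpmyadmin-user-config" are made-up names; the
# volume_mount belongs in the container block, the volume in the pod spec.
volume_mount {
  name       = "pma-config"
  mount_path = "/etc/phpmyadmin/config.user.inc.php"
  sub_path   = "config.user.inc.php"
}

volume {
  name = "pma-config"
  config_map {
    name = "phpmyadmin-user-config"
  }
}
```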

Can you drop the lib-mysql volume and let me know if everything works?
I hope I did not miss an aspect of Kubernetes.
