
Nextcloud service is not starting - jitsi and router seem to work fine #48

Open · Wollipolli opened this issue Aug 3, 2020 · 6 comments


Wollipolli commented Aug 3, 2020

First of all, thanks for this awesome repo!
I have some issues with the Nextcloud instance; maybe someone with more experience can help?

I want to install a Nextcloud and Jitsi instance on my Zotac ZBOX CA621 nano with 16GB RAM and a 265GB SSD behind a Fritzbox.
I registered 3 (free) subdomains from noip.com (*.ddns.net) and pointed them to the IP of my Fritzbox, which has port forwarding set up to the zbox. I installed Ubuntu Server 20.04 LTS, performed updates, and cloned the repo.
I then followed the instructions and ended up with a running router and Jitsi server. However, the Nextcloud instance is not running. I set up the name and domain as for the others, but the Let's Encrypt certificate is missing. When I open the Nextcloud address in Firefox and confirm the security exception for the missing HTTPS certificate, the page simply contains: "404 page not found".

Where could I have made a mistake?
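
In case it matters, this is how the DNS and forwarding side can be sanity-checked from outside; the subdomain below is just a placeholder for the real one:

nslookup nextcloudsubdomain.ddns.net          # placeholder name - should resolve to the Fritzbox's public IP
curl -vk https://nextcloudsubdomain.ddns.net/ # -k ignores the missing/untrusted certificate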

Edit:
I found something in the events log. Possibly it doesn't like the values I set?

$ kubectl get pods

NAME                                              READY   STATUS             RESTARTS   AGE
nextcloud-team-nextcloud-db-64d687b6f9-t6b8p      0/1     Pending            0          25h
nextcloud-team-nextcloud-nc-69f7654bf-jc8r6       0/2     Pending            0          25h
video-team-video-jvb-559b8f7585-6cxqq             1/1     Running            1          25h
svclb-traefik-zxz2l                               2/2     Running            2          22h
video-team-video-jicofo-587b7c8c8b-9z97x          1/1     Running            1          25h
nextcloud-team-nextcloud-redis-86d7f4ffff-jhkgc   1/1     Running            1          25h
traefik-6bc99d5778-d477v                          1/1     Running            1          22h
landingpage-5d8dc7cd5c-jvtvf                      1/1     Running            1          22h
video-team-video-web-6655f6fff4-8jxp7             1/1     Running            1          25h
video-team-video-prosody-6769ff54d7-7tt8c         0/1     CrashLoopBackOff   129        25h

$ kubectl get events

LAST SEEN   TYPE      REASON                OBJECT                                             MESSAGE
<unknown>   Warning   FailedScheduling      pod/nextcloud-team-nextcloud-nc-69f7654bf-jc8r6    persistentvolumeclaim "nextcloud-team-nextcloud-nc-data" not found
<unknown>   Warning   FailedScheduling      pod/nextcloud-team-nextcloud-db-64d687b6f9-t6b8p   persistentvolumeclaim "nextcloud-team-nextcloud-nc-db" not found
10m         Warning   FailedPostStartHook   pod/video-team-video-prosody-6769ff54d7-7tt8c      Exec lifecycle hook ([/bin/bash -c sleep 60; prosodyctl --config /config/prosody.cfg.lua register meetmaster jitsisubdomain.ddns.net password]) for Container "prosody" in Pod "video-team-video-prosody-6769ff54d7-7tt8c_default(8208a497-f090-40e7-9101-25ce7aaba746)" failed - error: command '/bin/bash -c sleep 60; prosodyctl --config /config/prosody.cfg.lua register meetmaster jitsisubdomain.ddns.net password' exited with 1: , message: "Error: Account creation/modification not supported.\n"
5m39s       Normal    Pulled                pod/video-team-video-prosody-6769ff54d7-7tt8c      Container image "jitsi/prosody" already present on machine
51s         Warning   BackOff               pod/video-team-video-prosody-6769ff54d7-7tt8c      Back-off restarting failed container
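
The two FailedScheduling warnings point at PersistentVolumeClaims that do not exist. A quick way to check the claims and the volumes behind them (names taken from the events above):

kubectl get pvc
kubectl get pv
kubectl describe pvc nextcloud-team-nextcloud-nc-data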


ghost commented Aug 3, 2020

I'd suggest starting over. Nextcloud didn't create its storage containers.

Is there really enough space? Maybe a wrong LVM setup? What's the output of:

df -h

And your jitsi container seems to exist twice...
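
You could also check whether the cluster has a storage class that can actually provision those claims; a sketch assuming k3s (the svclb-traefik pod suggests it), which normally ships the local-path provisioner as its default:

kubectl get storageclass
kubectl -n kube-system get pods | grep local-path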


Wollipolli commented Aug 3, 2020

Thanks! That was it.
I did not know Nextcloud expects a separate pre-formatted storage partition.
I thought this would happen during the setup.
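
For anyone else hitting this, a minimal sketch of preparing such a partition, assuming a spare partition /dev/sda3 and a mount point of /storage; both are just examples, the repo's docs define what it actually expects:

sudo mkfs.ext4 /dev/sda3                                             # example device, adjust to your disk layout
sudo mkdir -p /storage                                               # example mount point
sudo mount /dev/sda3 /storage
echo '/dev/sda3 /storage ext4 defaults 0 2' | sudo tee -a /etc/fstab # mount again on boot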

Not sure about the Jitsi problem though.
If I uninstall and reinstall, it shows the same warning.
How can any other "jitsi/prosody" container be there?


ghost commented Aug 3, 2020

You've set user auth in jitsi - but your config seems to be wrong. Please paste your config without passwords...

Exec lifecycle hook ([/bin/bash -c sleep 60; prosodyctl --config /config/prosody.cfg.lua register meetmaster jitsisubdomain.ddns.net password]) for Container "prosody" in Pod "video-team-video-prosody-6769ff54d7-7tt8c_default(8208a497-f090-40e7-9101-25ce7aaba746)" failed - error: command '/bin/bash -c sleep 60; prosodyctl --config /config/prosody.cfg.lua register meetmaster jitsisubdomain.ddns.net password' exited with 1: , message: "Error: Account creation/modification not supported.\n
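
prosodyctl usually prints "Account creation/modification not supported" when the virtual host's authentication provider does not allow creating accounts (for example anonymous auth). You can check what actually ended up in the generated config inside the pod:

kubectl exec -it video-team-video-prosody-6769ff54d7-7tt8c -- grep -i -n authentication /config/prosody.cfg.lua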


Wollipolli commented Aug 3, 2020

I had already replaced the domain and password with "jitsisubdomain.ddns.net" and "password" before posting.

app:
  name: mysubdomain
  domain: ddns.net
  pullpolicy: IfNotPresent # set to Always for auto updates

auth:
  enabled: true
  guests: true
  # internal auth
  type: internal
  admin:
    user: myadminname
    password: "mypassword"
  # ldap auth - remove above "type: internal" auth to use it
  #type: ldap
  #ldapauthmethod: bind
  #ldapurl: ldap://LDAP_SERVER
  #ldapusetls: 1
  #ldapstarttls: 1 # needs LDAP_VERSION 3
  #ldaptlscacertfile:
  #ldaptlscacertdir:
  #ldaptlscheckpeer:
  #ldapbase: OU=users,DC=domain,DC=local
  #ldapbinddn: CN=ldap user,OU=svc_users,DC=domain,DC=local
  #ldapbindpw: VerySecretPassword
  #ldapfilter: (&(&(|(objectclass=person)))(|(samaccountname=%uid)(|(mailPrimaryAddress=%uid)(mail=%uid))))
  #ldapversion: 3 # can break helm upgrade

logLevel: "info"
hideWelcomePage: true
# Remove following # to use different stun servers
# stun:
#  server: stun.stunprotocol.org:3478, stun.services.mozilla.com:3478
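
For reference, the register command from the lifecycle hook can also be rerun by hand inside the prosody pod to see the error directly (same user/domain/password placeholders as above):

kubectl exec -it video-team-video-prosody-6769ff54d7-7tt8c -- prosodyctl --config /config/prosody.cfg.lua register meetmaster jitsisubdomain.ddns.net password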


ghost commented Aug 3, 2020

I would try to run:

helm upgrade video team-video --values values-video.yaml

And:

kubectl get pods
kubectl logs -f video-team-video-prosody-XXXXXXXXX 
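
And then watch whether prosody comes up cleanly and whether the post-start hook still fails:

kubectl get pods -w
kubectl describe pod video-team-video-prosody-XXXXXXXXX   # look for FailedPostStartHook events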

Wollipolli commented

Interesting. Now it works.
Thanks heaps!
