Server fatal exception: KeyError: 'projects' #1027
Just re-launched with the docker image rebuilt to get the latest devpi* component versions:
but it's again looping on: ...
+ devpi-server --role master --serverdir /devpi-server/server --secretfile /devpi-server/secret --host 0.0.0.0 --port 3141 --keyfs-cache-size 32768 --mirror-cache-expiry 3600 --replica-max-retries 3 --max-request-body-size 2684354560 --theme /usr/local/lib/python3.10/site-packages/devpi_semantic_ui
2024-02-20 21:39:20,486 INFO NOCTX Loading node info from /devpi-server/server/.nodeinfo
2024-02-20 21:39:20,487 INFO NOCTX wrote nodeinfo to: /devpi-server/server/.nodeinfo
2024-02-20 21:39:20,517 INFO NOCTX running with role 'master'
2024-02-20 21:39:20,522 INFO NOCTX Using secret file '/devpi-server/secret'.
2024-02-20 21:39:23,130 INFO NOCTX Found plugin devpi-web-4.2.1.
2024-02-20 21:39:23,132 INFO NOCTX Found plugin devpi-private-mirrors-0.0.3.
2024-02-20 21:39:23,138 INFO NOCTX Found plugin devpi-semantic-ui-0.2.2.
2024-02-20 21:39:23,139 INFO NOCTX Found plugin devpi-findlinks-3.0.0.
2024-02-20 21:39:23,236 INFO NOCTX Using /devpi-server/server/.indices for Whoosh index files.
2024-02-20 21:39:23,267 INFO [ASYN] Starting asyncio event loop
2024-02-20 21:39:23,277 INFO NOCTX devpi-server version: 6.10.0
2024-02-20 21:39:23,277 INFO NOCTX serverdir: /devpi-server/server
2024-02-20 21:39:23,277 INFO NOCTX uuid: 664115d193f8492883523ca4e54e097e
2024-02-20 21:39:23,277 INFO NOCTX serving at url: http://0.0.0.0:3141 (might be http://[0.0.0.0]:3141 for IPv6)
2024-02-20 21:39:23,277 INFO NOCTX using 50 threads
2024-02-20 21:39:23,277 INFO NOCTX bug tracker: https://github.com/devpi/devpi/issues
2024-02-20 21:39:23,278 INFO NOCTX Hit Ctrl-C to quit.
2024-02-20 21:39:23,290 INFO Serving on http://0.0.0.0:3141
2024-02-20 21:39:28,447 INFO [req0] GET /+changelog/25877-
2024-02-20 21:39:28,672 INFO [IDX] Indexer queue size ~ 26
2024-02-20 21:39:33,689 INFO [IDX] Indexer queue size ~ 13
2024-02-20 21:40:02,135 INFO [req1] GET /+changelog/25877-
2024-02-20 21:40:32,922 INFO [req2] GET /+changelog/25877-
2024-02-20 21:40:34,002 INFO [IDX] Indexer queue size ~ 1
2024-02-20 21:41:03,203 INFO [req3] GET /+changelog/25877-
2024-02-20 21:41:33,724 INFO [req4] GET /+changelog/25877-
EDIT: and still crashing the same way:
Also, as extra information: I'm running 2 replica instances replicating from that master (role) server.
Well, after an nth restart following the same crash/error, I now see:
Has this resolved itself? (Though I restarted one of the replicas in the meantime, FWIW.) EDIT: unfortunately, the same KeyError / stacktrace appeared just after I posted this. But now:
Maybe at some point it will recover completely?
The replicas are polling for the latest serial (the GET /+changelog/... requests in your log). The traceback is unrelated to that. It is triggered by the search indexing. If you don't use the search functionality, you could disable it with --indexer-backend=null. It seems like the …
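For reference, a minimal sketch of the suggested change, reusing the start command from the log above with --indexer-backend=null added (all other option values are copied verbatim from that log; adjust for your own setup):

```sh
# Sketch: the master start command from the log, with search indexing disabled.
# Only --indexer-backend=null is new; everything else is copied from the log.
devpi-server --role master \
  --serverdir /devpi-server/server \
  --secretfile /devpi-server/secret \
  --host 0.0.0.0 --port 3141 \
  --keyfs-cache-size 32768 \
  --mirror-cache-expiry 3600 \
  --replica-max-retries 3 \
  --max-request-body-size 2684354560 \
  --theme /usr/local/lib/python3.10/site-packages/devpi_semantic_ui \
  --indexer-backend=null
```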
I put that on my master (and restarted it). Is the master the correct place for it? Or do I have to do this on the replica(s), or on both?
I can access this URL on all my instances (master + the 2 replicas). But on the replicas, if I click the refresh button / run devpi refresh, I then get the following in the logs of the related replica:
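For reference, the refresh that triggers this can be reproduced from the command line with devpi-client; a rough sketch with a placeholder replica URL (bokeh is the project discussed later in the thread):

```sh
# Sketch: point devpi-client at the replica's root/pypi mirror index
# (placeholder hostname) and invalidate the cached release links for a
# project -- the CLI equivalent of the web UI refresh button.
devpi use http://replica-1.example.internal:3141/root/pypi
devpi refresh bokeh
```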
I put --indexer-backend=null on one of my replicas (and restarted it), but I get the same traceback as above if I try the refresh afterwards. EDIT: since I put --indexer-backend=null on the master (and restarted it with that), it apparently no longer crashes.. :) Already a good step. :)
Sorry, I forgot to answer this one: yes, I can access/download pypi.org / that specific URL without any issue from within my master container.
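For anyone checking the same thing, a small sketch of such a connectivity test (the container name is a placeholder; the thread only confirms that pypi.org is reachable from the master container):

```sh
# Sketch: verify outbound access to pypi.org from inside the master container.
# "devpi-master" is a placeholder container name.
docker exec devpi-master python3 -c \
  "import urllib.request; print(urllib.request.urlopen('https://pypi.org/simple/bokeh/').status)"
```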
That all looks really weird. The traceback in …
Hi @fschulze, for info: my master instance is now OK: no more recurring crash/exception at all, and also no more stacktrace/exception when I refresh the (root/pypi/) bokeh project, either via the web UI page or via the devpi refresh CLI command. It seems to just succeed. But from my 2 replicas I still get the same error.
Replica log:
You updated the versions in the meantime; could you give me the full current list, including waitress? It might be some recent change somewhere. The replicas forward the POST to the primary, so maybe something is going on in the communication between the replica and the master for just that case.
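A sketch of how that list could be collected from every node in one go (container names are placeholders):

```sh
# Sketch: dump the versions relevant to this issue from the master and both
# replicas. Container names are placeholders.
for c in devpi-master devpi-replica-1 devpi-replica-2; do
  echo "== $c =="
  docker exec "$c" pip list --format=freeze | grep -Ei 'devpi|waitress|whoosh|pyramid'
done
```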
replica pip list:
So, there was a waitress 3.0.0 release at the beginning of February with some changes which might be related. I still have to investigate further, but you could try to install the last version before that in the meantime. |
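A sketch of how that downgrade could be tried inside the affected container/image (2.1.2 is the last release before 3.0.0, as confirmed in the next comment):

```sh
# Sketch: install the last waitress release before 3.0.0 and restart the
# server to see whether the behaviour changes.
pip install 'waitress<3'
# or, pinned explicitly:
pip install waitress==2.1.2
```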
@fschulze But I have the latest version of everything in the replica pip list above.. :/ Well, at the moment:
The second replica has a few more outdated packages, but waitress is not among them: I have 3.0.0 on all my nodes. Second replica:
And my master has the same outdated packages as my first replica here. I gave the previous waitress version (2.1.2) a try on my first replica, but I get the same exception/traceback if I try to refresh root/pypi/bokeh.
Extra info: I spawned a totally fresh/new replica (which fully synced with the master), tried refreshing bokeh on it, and I still get the same stacktrace/exception as before. :/
OS + python/pip versions
pip check + pip list:
Issue:
I have had a devpi server instance running for ~1-2 years. Last Friday it went down for some reason.
Today, after a restart of the host machine, I see in the logs that it hits a "fatal" exception which makes the server stop (and get restarted thanks to the docker container restart policy):
And that keeps recurring a few minutes after each server restart (always via docker container restart), after a few minutes of such GET /+changelog/25877- requests (with sometimes some GETs from my clients).
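While debugging a crash/restart loop like this, the container logs can be followed across restarts and filtered for the fatal lines; a sketch (container name is a placeholder):

```sh
# Sketch: follow the devpi container's logs across docker-policy restarts and
# highlight traceback / fatal-exception lines. "devpi-server" is a
# placeholder container name.
docker logs -f devpi-server 2>&1 | grep -E --line-buffered 'Traceback|KeyError|fatal'
```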
What can be done?
Thank you.
Extra info: