Utterly crazy cpu usage #998
Comments
Quick follow-up after looking through the code a bit: it seems the filesystem monitoring defaults to polling only, instead of using an OS-native implementation when available? If the polling observer were replaced with the standard observer (https://pythonhosted.org/watchdog/api.html#module-watchdog.observers), it should reduce the processing overhead. It will still fall back to the polling implementation if the native ones aren't available.
I've noticed the indexing mode also chews up a ton of CPU time. I haven't checked yet, but if all files are being hashed using the approach here: https://pythonhosted.org/watchdog/api.html#module-watchdog.observers, it may be a good idea, for files below a specific size threshold, to swap that out for a function that reads the whole file in one go and hashes it once it's in memory, instead of reading 1024 bytes at a time. This should reduce the number of filesystem operations.
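The suggestion above can be sketched as follows. The 4 MiB threshold and the choice of SHA-256 are illustrative assumptions, not Maestral's actual parameters:

```python
import hashlib
import os

# Hypothetical cutoff below which a file is read in one go: 4 MiB.
SMALL_FILE_THRESHOLD = 4 * 1024 * 1024


def hash_file(path: str, chunk_size: int = 1024) -> str:
    """Hash a file, reading small files in one syscall and large files in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        if os.path.getsize(path) <= SMALL_FILE_THRESHOLD:
            # One read(), then a single hash update: fewer filesystem actions.
            h.update(f.read())
        else:
            # Stream larger files so memory usage stays bounded.
            while chunk := f.read(chunk_size):
                h.update(chunk)
    return h.hexdigest()
```

Either path produces the same digest; the threshold only trades memory for syscall count.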
I'm not very familiar with the Docker setup; that was an external contribution. If run natively, Maestral will use either FSEvents on macOS or inotify on Linux, and the idle CPU usage is close to 0%. It will only fall back to polling if the platform is neither of those:
maestral/src/maestral/fsevents/__init__.py, lines 25 to 30 at 5f0cf08
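The fallback logic described above can be paraphrased as a small dispatch function. The backend names follow watchdog's module names; this is a sketch of the idea, not the actual code in `fsevents/__init__.py`:

```python
import platform


def preferred_backend(system=None) -> str:
    """Return the watchdog observer backend expected for a platform.

    Native backends keep idle CPU near 0%; polling is only the last resort.
    """
    system = system or platform.system()
    native = {
        "Darwin": "fsevents",                   # macOS FSEvents
        "Linux": "inotify",                     # Linux inotify
        "Windows": "read_directory_changes",    # Windows native API
    }
    return native.get(system, "polling")
```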
Regarding hashing: this is done in chunks of 65536 bytes, which has been a good tradeoff between CPU and memory usage. Note that multiple files may be hashed in parallel to better distribute the load across CPU cores. Finally, the config file has settings for max bandwidth and CPU usage, and Maestral will throttle its work and transfer speeds to stay below both. Could you check whether you can replicate this behavior outside of a Docker image?
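Chunked hashing with parallel workers, as described, might look like the following sketch. The 64 KiB chunk size comes from the comment above; SHA-256, the thread pool, and the worker count are illustrative assumptions:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 65536  # 64 KiB chunks, as mentioned above


def hash_one(path: str) -> str:
    """Hash a single file in 64 KiB chunks to bound memory usage."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            h.update(chunk)
    return h.hexdigest()


def hash_many(paths, max_workers: int = 4) -> dict:
    """Hash several files in parallel to spread load across CPU cores."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(paths, pool.map(hash_one, paths)))
```

Threads work here despite the GIL because hashing releases it during `update()` on large buffers and the reads are I/O-bound.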
Ah, I hadn't noticed that in the init module and had found the polling observer elsewhere. My bad! I had been trying to avoid setting up an environment locally, but I'll replicate my Docker config to test, and if things behave more normally natively, I'll see if I can figure out what's going squirrely with the Docker one.
OK, I've got Maestral configured as a local install in a venv now and it is working. Unfortunately, even with max CPU set to 5%, the indexing step is running at 1100% CPU usage. I'm going to try using a cgroup to manage the max CPU and see if that works.
OK, so I wrote a new systemd user unit file and forced it to restrain CPU usage via a cgroup, and my laptop is no longer trying to burn holes in my desk!
I may consider bumping up the total amount of CPU it can use, but for now it's working in the background and chugging along. I'll dig into the Docker container in a bit; there are ways to do something similar there, and a compose file would be a more robust way of handling things long term.

edit: quick note for anybody using this: this is a user-space systemd file, it would go into

second note: moved to 20% and the sync and update finished in about an hour; it's now just idling and not really hurting at all. Working solid.
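For reference, a user unit along these lines would live under `~/.config/systemd/user/`. The file name, paths, and the 20% quota below are examples, not the poster's exact file; `CPUQuota=` is the systemd directive that applies the cgroup CPU cap:

```ini
# Hypothetical example: ~/.config/systemd/user/maestral.service
[Unit]
Description=Maestral daemon

[Service]
# Path assumes the venv install described above; adjust to your setup.
ExecStart=%h/venv/bin/maestral start -f
# cgroup CPU cap: 100% equals one full core, so 20% is a fifth of one core.
CPUQuota=20%
Restart=on-failure

[Install]
WantedBy=default.target
```

It can then be enabled with `systemctl --user enable --now maestral.service`.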
Describe the bug
I converted an existing Dropbox folder on Fedora Linux to a Maestral Docker container. All went well from a sync perspective until it seemed to consider itself 'done' with the sync and went into standard running mode. At this point it pins the OS at around 80-90% of the CPU. I've got a 6-core, 12-thread laptop and it's currently sitting at 910% CPU usage.
To Reproduce
I'm not sure there's anything specific needed to replicate this beyond running the Docker version as a background process. The only additional thing I've done is add a KDE widget to monitor the output of
docker exec -t maestral maestral status
which runs every 5 seconds.
Expected behaviour
This shouldn't be using more than 2-3% CPU when idle, ideally far less than that.
System: