ceph logs are not pruned/cleaned up for components that are no longer scheduled on a node #14202

Open
jhoblitt opened this issue May 14, 2024 · 13 comments

@jhoblitt
Contributor

Is this a bug report or feature request?

  • Bug Report

Deviation from expected behavior:

The size of the logs in the hostpath /var/lib/rook/rook-ceph/log continues to grow unbounded over time.

Expected behavior:

That there would eventually be a high-water mark for log usage that isn't exceeded.

How to reproduce it (minimal and precise):

Basically, run a rook-ceph cluster for years and have various components (mons, rgw, etc.) be rescheduled between nodes because of drains.

File(s) to submit:

# ls -la
total 14969036
drwxr-xr-x  2  167  167       20480 May 14 17:50 .
drwxr-xr-x 16 root root        4096 Oct 20  2022 ..
-rw-r--r--  1  167  167 14482448724 May 14 17:11 ceph-client.admin.log
-rw-r--r--  1  167  167     8931360 Sep  1  2022 ceph-client.rgw.ceph.objectstore.a.log
-rw-r--r--  1  167  167      532283 Sep  1  2022 ceph-client.rgw.ceph.objectstore.a.log.1.gz
-rw-r--r--  1  167  167      533066 Aug 31  2022 ceph-client.rgw.ceph.objectstore.a.log.2.gz
-rw-r--r--  1  167  167      531411 Aug 30  2022 ceph-client.rgw.ceph.objectstore.a.log.3.gz
-rw-r--r--  1  167  167      531368 Aug 29  2022 ceph-client.rgw.ceph.objectstore.a.log.4.gz
-rw-r--r--  1  167  167      533529 Aug 28  2022 ceph-client.rgw.ceph.objectstore.a.log.5.gz
-rw-r--r--  1  167  167      530242 Aug 27  2022 ceph-client.rgw.ceph.objectstore.a.log.6.gz
-rw-r--r--  1  167  167      101967 Aug 26  2022 ceph-client.rgw.ceph.objectstore.a.log.7.gz
-rw-r--r--  1  167  167     3676163 Feb 29 23:26 ceph-client.rgw.lfa.a.log
-rw-r--r--  1  167  167      231207 Feb 29 00:12 ceph-client.rgw.lfa.a.log.1.gz
-rw-r--r--  1  167  167      232068 Feb 28 00:12 ceph-client.rgw.lfa.a.log.2.gz
-rw-r--r--  1  167  167      231650 Feb 27 00:12 ceph-client.rgw.lfa.a.log.3.gz
-rw-r--r--  1  167  167      231540 Feb 26 00:12 ceph-client.rgw.lfa.a.log.4.gz
-rw-r--r--  1  167  167      230579 Feb 25 00:12 ceph-client.rgw.lfa.a.log.5.gz
-rw-r--r--  1  167  167      231025 Feb 24 00:12 ceph-client.rgw.lfa.a.log.6.gz
-rw-r--r--  1  167  167      231346 Feb 23 00:12 ceph-client.rgw.lfa.a.log.7.gz
-rw-r--r--  1  167  167      159016 May 14 18:39 ceph-client.rgw.rubintv.a.log
-rw-r--r--  1  167  167     1068078 Jul  7  2023 ceph-mds.auxtel-c.log
-rw-r--r--  1  167  167       43160 Jul  7  2023 ceph-mds.auxtel-c.log.1.gz
-rw-r--r--  1  167  167       35724 Jul  6  2023 ceph-mds.auxtel-c.log.2.gz
-rw-r--r--  1  167  167       34912 Jul  5  2023 ceph-mds.auxtel-c.log.3.gz
-rw-r--r--  1  167  167       35139 Jul  4  2023 ceph-mds.auxtel-c.log.4.gz
-rw-r--r--  1  167  167       35504 Jul  3  2023 ceph-mds.auxtel-c.log.5.gz
-rw-r--r--  1  167  167       35284 Jul  2  2023 ceph-mds.auxtel-c.log.6.gz
-rw-r--r--  1  167  167       34911 Jul  1  2023 ceph-mds.auxtel-c.log.7.gz
-rw-r--r--  1  167  167      951463 Jul  7  2023 ceph-mds.auxtel-d.log
-rw-r--r--  1  167  167       38119 Jul  7  2023 ceph-mds.auxtel-d.log.1.gz
-rw-r--r--  1  167  167       35378 Jul  6  2023 ceph-mds.auxtel-d.log.2.gz
-rw-r--r--  1  167  167       34699 Jul  5  2023 ceph-mds.auxtel-d.log.3.gz
-rw-r--r--  1  167  167       34978 Jul  4  2023 ceph-mds.auxtel-d.log.4.gz
-rw-r--r--  1  167  167       35326 Jul  3  2023 ceph-mds.auxtel-d.log.5.gz
-rw-r--r--  1  167  167       35129 Jul  2  2023 ceph-mds.auxtel-d.log.6.gz
-rw-r--r--  1  167  167       34647 Jul  1  2023 ceph-mds.auxtel-d.log.7.gz
-rw-r--r--  1  167  167     2142946 Jul  7  2023 ceph-mds.auxtel-e.log
-rw-r--r--  1  167  167       36112 Jul  7  2023 ceph-mds.auxtel-e.log.1.gz
-rw-r--r--  1  167  167       35653 Jul  6  2023 ceph-mds.auxtel-e.log.2.gz
-rw-r--r--  1  167  167       33808 Jul  5  2023 ceph-mds.auxtel-e.log.3.gz
-rw-r--r--  1  167  167       34209 Jul  4  2023 ceph-mds.auxtel-e.log.4.gz
-rw-r--r--  1  167  167       34152 Jul  3  2023 ceph-mds.auxtel-e.log.5.gz
-rw-r--r--  1  167  167       34098 Jul  2  2023 ceph-mds.auxtel-e.log.6.gz
-rw-r--r--  1  167  167       33972 Jul  1  2023 ceph-mds.auxtel-e.log.7.gz
-rw-r--r--  1  167  167     2146716 Jul  7  2023 ceph-mds.auxtel-f.log
-rw-r--r--  1  167  167       36540 Jul  7  2023 ceph-mds.auxtel-f.log.1.gz
-rw-r--r--  1  167  167       35068 Jul  6  2023 ceph-mds.auxtel-f.log.2.gz
-rw-r--r--  1  167  167       34124 Jul  5  2023 ceph-mds.auxtel-f.log.3.gz
-rw-r--r--  1  167  167       34191 Jul  4  2023 ceph-mds.auxtel-f.log.4.gz
-rw-r--r--  1  167  167       34602 Jul  3  2023 ceph-mds.auxtel-f.log.5.gz
-rw-r--r--  1  167  167       34190 Jul  2  2023 ceph-mds.auxtel-f.log.6.gz
-rw-r--r--  1  167  167       33851 Jul  1  2023 ceph-mds.auxtel-f.log.7.gz
-rw-r--r--  1  167  167      745819 Jul 24  2023 ceph-mds.comcam-d.log
-rw-r--r--  1  167  167       34116 Jul 24  2023 ceph-mds.comcam-d.log.1.gz
-rw-r--r--  1  167  167       33791 Jul 23  2023 ceph-mds.comcam-d.log.2.gz
-rw-r--r--  1  167  167       34149 Jul 22  2023 ceph-mds.comcam-d.log.3.gz
-rw-r--r--  1  167  167       33939 Jul 21  2023 ceph-mds.comcam-d.log.4.gz
-rw-r--r--  1  167  167       34237 Jul 20  2023 ceph-mds.comcam-d.log.5.gz
-rw-r--r--  1  167  167       34119 Jul 19  2023 ceph-mds.comcam-d.log.6.gz
-rw-r--r--  1  167  167       34044 Jul 18  2023 ceph-mds.comcam-d.log.7.gz
-rw-r--r--  1  167  167      755018 May 14 17:54 ceph-mds.comcam-f.log
-rw-r--r--  1  167  167       36328 May 14 00:13 ceph-mds.comcam-f.log.1.gz
-rw-r--r--  1  167  167       36263 May 13 00:13 ceph-mds.comcam-f.log.2.gz
-rw-r--r--  1  167  167       36246 May 12 00:13 ceph-mds.comcam-f.log.3.gz
-rw-r--r--  1  167  167       36225 May 11 00:13 ceph-mds.comcam-f.log.4.gz
-rw-r--r--  1  167  167       36249 May 10 00:13 ceph-mds.comcam-f.log.5.gz
-rw-r--r--  1  167  167       36268 May  9 00:13 ceph-mds.comcam-f.log.6.gz
-rw-r--r--  1  167  167       36469 May  8 00:13 ceph-mds.comcam-f.log.7.gz
-rw-r--r--  1  167  167      715165 Jul 24  2023 ceph-mds.jhome-a.log
-rw-r--r--  1  167  167       35081 Jul 24  2023 ceph-mds.jhome-a.log.1.gz
-rw-r--r--  1  167  167       35229 Jul 23  2023 ceph-mds.jhome-a.log.2.gz
-rw-r--r--  1  167  167       37466 Jul 22  2023 ceph-mds.jhome-a.log.3.gz
-rw-r--r--  1  167  167       37683 Jul 21  2023 ceph-mds.jhome-a.log.4.gz
-rw-r--r--  1  167  167       37734 Jul 20  2023 ceph-mds.jhome-a.log.5.gz
-rw-r--r--  1  167  167       37721 Jul 19  2023 ceph-mds.jhome-a.log.6.gz
-rw-r--r--  1  167  167       37863 Jul 18  2023 ceph-mds.jhome-a.log.7.gz
-rw-r--r--  1  167  167       54283 Apr  2  2022 ceph-mds.jhome-b.log
-rw-r--r--  1  167  167       34301 Apr  2  2022 ceph-mds.jhome-b.log.1.gz
-rw-r--r--  1  167  167       34910 Apr  1  2022 ceph-mds.jhome-b.log.2.gz
-rw-r--r--  1  167  167       34546 Mar 31  2022 ceph-mds.jhome-b.log.3.gz
-rw-r--r--  1  167  167       37012 Mar 30  2022 ceph-mds.jhome-b.log.4.gz
-rw-r--r--  1  167  167       34377 Mar 29  2022 ceph-mds.jhome-b.log.5.gz
-rw-r--r--  1  167  167       34030 Mar 28  2022 ceph-mds.jhome-b.log.6.gz
-rw-r--r--  1  167  167       33510 Mar 27  2022 ceph-mds.jhome-b.log.7.gz
-rw-r--r--  1  167  167      753276 Aug 18  2022 ceph-mds.jhome-c.log
-rw-r--r--  1  167  167       35043 Aug 17  2022 ceph-mds.jhome-c.log.1.gz
-rw-r--r--  1  167  167       35376 Aug 16  2022 ceph-mds.jhome-c.log.2.gz
-rw-r--r--  1  167  167       35474 Aug 15  2022 ceph-mds.jhome-c.log.3.gz
-rw-r--r--  1  167  167       33331 Aug 14  2022 ceph-mds.jhome-c.log.4.gz
-rw-r--r--  1  167  167       33608 Aug 13  2022 ceph-mds.jhome-c.log.5.gz
-rw-r--r--  1  167  167       34220 Aug 12  2022 ceph-mds.jhome-c.log.6.gz
-rw-r--r--  1  167  167       34100 Aug 11  2022 ceph-mds.jhome-c.log.7.gz
-rw-r--r--  1  167  167      780726 Jun  1  2023 ceph-mds.jhome-d.log
-rw-r--r--  1  167  167       33423 Jun  1  2023 ceph-mds.jhome-d.log.1.gz
-rw-r--r--  1  167  167       33744 May 31  2023 ceph-mds.jhome-d.log.2.gz
-rw-r--r--  1  167  167       33290 May 30  2023 ceph-mds.jhome-d.log.3.gz
-rw-r--r--  1  167  167       33298 May 29  2023 ceph-mds.jhome-d.log.4.gz
-rw-r--r--  1  167  167       34226 May 28  2023 ceph-mds.jhome-d.log.5.gz
-rw-r--r--  1  167  167       34202 May 27  2023 ceph-mds.jhome-d.log.6.gz
-rw-r--r--  1  167  167       34036 May 26  2023 ceph-mds.jhome-d.log.7.gz
-rw-r--r--  1  167  167      730800 Jul 24  2023 ceph-mds.jhome-e.log
-rw-r--r--  1  167  167       33585 Jul 24  2023 ceph-mds.jhome-e.log.1.gz
-rw-r--r--  1  167  167       33949 Jul 23  2023 ceph-mds.jhome-e.log.2.gz
-rw-r--r--  1  167  167       36022 Jul 22  2023 ceph-mds.jhome-e.log.3.gz
-rw-r--r--  1  167  167       36353 Jul 21  2023 ceph-mds.jhome-e.log.4.gz
-rw-r--r--  1  167  167       36305 Jul 20  2023 ceph-mds.jhome-e.log.5.gz
-rw-r--r--  1  167  167       36478 Jul 19  2023 ceph-mds.jhome-e.log.6.gz
-rw-r--r--  1  167  167       36637 Jul 18  2023 ceph-mds.jhome-e.log.7.gz
-rw-r--r--  1  167  167      944232 Aug 18  2022 ceph-mds.jhome-f.log
-rw-r--r--  1  167  167       35346 Aug 17  2022 ceph-mds.jhome-f.log.1.gz
-rw-r--r--  1  167  167       35959 Aug 16  2022 ceph-mds.jhome-f.log.2.gz
-rw-r--r--  1  167  167       34187 Aug 15  2022 ceph-mds.jhome-f.log.3.gz
-rw-r--r--  1  167  167       33133 Aug 14  2022 ceph-mds.jhome-f.log.4.gz
-rw-r--r--  1  167  167       33470 Aug 13  2022 ceph-mds.jhome-f.log.5.gz
-rw-r--r--  1  167  167       34707 Aug 12  2022 ceph-mds.jhome-f.log.6.gz
-rw-r--r--  1  167  167       33382 Aug 11  2022 ceph-mds.jhome-f.log.7.gz
-rw-r--r--  1  167  167      554586 May 14 14:04 ceph-mds.lsstdata-a.log
-rw-r--r--  1  167  167       36673 May 14 00:13 ceph-mds.lsstdata-a.log.1.gz
-rw-r--r--  1  167  167       36735 May 13 00:13 ceph-mds.lsstdata-a.log.2.gz
-rw-r--r--  1  167  167       36649 May 12 00:13 ceph-mds.lsstdata-a.log.3.gz
-rw-r--r--  1  167  167       36637 May 11 00:13 ceph-mds.lsstdata-a.log.4.gz
-rw-r--r--  1  167  167       36735 May 10 00:13 ceph-mds.lsstdata-a.log.5.gz
-rw-r--r--  1  167  167       36662 May  9 00:13 ceph-mds.lsstdata-a.log.6.gz
-rw-r--r--  1  167  167       36800 May  8 00:13 ceph-mds.lsstdata-a.log.7.gz
-rw-r--r--  1  167  167      742785 Jul 24  2023 ceph-mds.lsstdata-b.log
-rw-r--r--  1  167  167       33723 Jul 24  2023 ceph-mds.lsstdata-b.log.1.gz
-rw-r--r--  1  167  167       33475 Jul 23  2023 ceph-mds.lsstdata-b.log.2.gz
-rw-r--r--  1  167  167       33794 Jul 22  2023 ceph-mds.lsstdata-b.log.3.gz
-rw-r--r--  1  167  167       33739 Jul 21  2023 ceph-mds.lsstdata-b.log.4.gz
-rw-r--r--  1  167  167       33859 Jul 20  2023 ceph-mds.lsstdata-b.log.5.gz
-rw-r--r--  1  167  167       33923 Jul 19  2023 ceph-mds.lsstdata-b.log.6.gz
-rw-r--r--  1  167  167       33833 Jul 18  2023 ceph-mds.lsstdata-b.log.7.gz
-rw-r--r--  1  167  167      743925 Jul 24  2023 ceph-mds.lsstdata-c.log
-rw-r--r--  1  167  167       33973 Jul 24  2023 ceph-mds.lsstdata-c.log.1.gz
-rw-r--r--  1  167  167       33953 Jul 23  2023 ceph-mds.lsstdata-c.log.2.gz
-rw-r--r--  1  167  167       33845 Jul 22  2023 ceph-mds.lsstdata-c.log.3.gz
-rw-r--r--  1  167  167       34095 Jul 21  2023 ceph-mds.lsstdata-c.log.4.gz
-rw-r--r--  1  167  167       33995 Jul 20  2023 ceph-mds.lsstdata-c.log.5.gz
-rw-r--r--  1  167  167       34252 Jul 19  2023 ceph-mds.lsstdata-c.log.6.gz
-rw-r--r--  1  167  167       34117 Jul 18  2023 ceph-mds.lsstdata-c.log.7.gz
-rw-r--r--  1  167  167      796743 Jun  1  2023 ceph-mds.lsstdata-d.log
-rw-r--r--  1  167  167       33069 Jun  1  2023 ceph-mds.lsstdata-d.log.1.gz
-rw-r--r--  1  167  167       33040 May 31  2023 ceph-mds.lsstdata-d.log.2.gz
-rw-r--r--  1  167  167       33042 May 30  2023 ceph-mds.lsstdata-d.log.3.gz
-rw-r--r--  1  167  167       33130 May 29  2023 ceph-mds.lsstdata-d.log.4.gz
-rw-r--r--  1  167  167       33342 May 28  2023 ceph-mds.lsstdata-d.log.5.gz
-rw-r--r--  1  167  167       33206 May 27  2023 ceph-mds.lsstdata-d.log.6.gz
-rw-r--r--  1  167  167       33284 May 26  2023 ceph-mds.lsstdata-d.log.7.gz
-rw-r--r--  1  167  167      768310 May 14 18:30 ceph-mds.lsstdata-e.log
-rw-r--r--  1  167  167       36351 May 14 00:13 ceph-mds.lsstdata-e.log.1.gz
-rw-r--r--  1  167  167       36533 May 13 00:13 ceph-mds.lsstdata-e.log.2.gz
-rw-r--r--  1  167  167       36357 May 12 00:13 ceph-mds.lsstdata-e.log.3.gz
-rw-r--r--  1  167  167       36383 May 11 00:13 ceph-mds.lsstdata-e.log.4.gz
-rw-r--r--  1  167  167       36360 May 10 00:13 ceph-mds.lsstdata-e.log.5.gz
-rw-r--r--  1  167  167       36434 May  9 00:12 ceph-mds.lsstdata-e.log.6.gz
-rw-r--r--  1  167  167       36361 May  8 00:12 ceph-mds.lsstdata-e.log.7.gz
-rw-r--r--  1  167  167      769488 May 14 18:30 ceph-mds.lsstdata-f.log
-rw-r--r--  1  167  167       36567 May 14 00:13 ceph-mds.lsstdata-f.log.1.gz
-rw-r--r--  1  167  167       36635 May 13 00:13 ceph-mds.lsstdata-f.log.2.gz
-rw-r--r--  1  167  167       36335 May 12 00:13 ceph-mds.lsstdata-f.log.3.gz
-rw-r--r--  1  167  167       36604 May 11 00:13 ceph-mds.lsstdata-f.log.4.gz
-rw-r--r--  1  167  167       36556 May 10 00:12 ceph-mds.lsstdata-f.log.5.gz
-rw-r--r--  1  167  167       36503 May  9 00:12 ceph-mds.lsstdata-f.log.6.gz
-rw-r--r--  1  167  167       36378 May  8 00:12 ceph-mds.lsstdata-f.log.7.gz
-rw-r--r--  1  167  167      747270 Jul 24  2023 ceph-mds.obs-env-a.log
-rw-r--r--  1  167  167       33902 Jul 24  2023 ceph-mds.obs-env-a.log.1.gz
-rw-r--r--  1  167  167       33713 Jul 23  2023 ceph-mds.obs-env-a.log.2.gz
-rw-r--r--  1  167  167       34126 Jul 22  2023 ceph-mds.obs-env-a.log.3.gz
-rw-r--r--  1  167  167       34211 Jul 21  2023 ceph-mds.obs-env-a.log.4.gz
-rw-r--r--  1  167  167       33994 Jul 20  2023 ceph-mds.obs-env-a.log.5.gz
-rw-r--r--  1  167  167       34295 Jul 19  2023 ceph-mds.obs-env-a.log.6.gz
-rw-r--r--  1  167  167       33983 Jul 18  2023 ceph-mds.obs-env-a.log.7.gz
-rw-r--r--  1  167  167      748490 Jul 24  2023 ceph-mds.obs-env-b.log
-rw-r--r--  1  167  167       33929 Jul 24  2023 ceph-mds.obs-env-b.log.1.gz
-rw-r--r--  1  167  167       33873 Jul 23  2023 ceph-mds.obs-env-b.log.2.gz
-rw-r--r--  1  167  167       34501 Jul 22  2023 ceph-mds.obs-env-b.log.3.gz
-rw-r--r--  1  167  167       34384 Jul 21  2023 ceph-mds.obs-env-b.log.4.gz
-rw-r--r--  1  167  167       34277 Jul 20  2023 ceph-mds.obs-env-b.log.5.gz
-rw-r--r--  1  167  167       34216 Jul 19  2023 ceph-mds.obs-env-b.log.6.gz
-rw-r--r--  1  167  167       34417 Jul 18  2023 ceph-mds.obs-env-b.log.7.gz
-rw-r--r--  1  167  167      751466 Jul 24  2023 ceph-mds.obs-env-c.log
-rw-r--r--  1  167  167       33624 Jul 24  2023 ceph-mds.obs-env-c.log.1.gz
-rw-r--r--  1  167  167       33800 Jul 23  2023 ceph-mds.obs-env-c.log.2.gz
-rw-r--r--  1  167  167       33911 Jul 22  2023 ceph-mds.obs-env-c.log.3.gz
-rw-r--r--  1  167  167       34040 Jul 21  2023 ceph-mds.obs-env-c.log.4.gz
-rw-r--r--  1  167  167       33815 Jul 20  2023 ceph-mds.obs-env-c.log.5.gz
-rw-r--r--  1  167  167       33996 Jul 19  2023 ceph-mds.obs-env-c.log.6.gz
-rw-r--r--  1  167  167       33810 Jul 18  2023 ceph-mds.obs-env-c.log.7.gz
-rw-r--r--  1  167  167      755033 Jul 24  2023 ceph-mds.obs-env-d.log
-rw-r--r--  1  167  167       34770 Jul 24  2023 ceph-mds.obs-env-d.log.1.gz
-rw-r--r--  1  167  167       34745 Jul 23  2023 ceph-mds.obs-env-d.log.2.gz
-rw-r--r--  1  167  167       35083 Jul 22  2023 ceph-mds.obs-env-d.log.3.gz
-rw-r--r--  1  167  167       34974 Jul 21  2023 ceph-mds.obs-env-d.log.4.gz
-rw-r--r--  1  167  167       34816 Jul 20  2023 ceph-mds.obs-env-d.log.5.gz
-rw-r--r--  1  167  167       35025 Jul 19  2023 ceph-mds.obs-env-d.log.6.gz
-rw-r--r--  1  167  167       34997 Jul 18  2023 ceph-mds.obs-env-d.log.7.gz
-rw-r--r--  1  167  167      622592 Aug  7  2023 ceph-mds.obs-env-e.log
-rw-r--r--  1  167  167       34257 Aug  7  2023 ceph-mds.obs-env-e.log.1.gz
-rw-r--r--  1  167  167       34204 Aug  6  2023 ceph-mds.obs-env-e.log.2.gz
-rw-r--r--  1  167  167       34282 Aug  5  2023 ceph-mds.obs-env-e.log.3.gz
-rw-r--r--  1  167  167       34081 Aug  4  2023 ceph-mds.obs-env-e.log.4.gz
-rw-r--r--  1  167  167       34238 Aug  3  2023 ceph-mds.obs-env-e.log.5.gz
-rw-r--r--  1  167  167       34492 Aug  2  2023 ceph-mds.obs-env-e.log.6.gz
-rw-r--r--  1  167  167       34142 Aug  1  2023 ceph-mds.obs-env-e.log.7.gz
-rw-r--r--  1  167  167      756409 Jul 24  2023 ceph-mds.obs-env-f.log
-rw-r--r--  1  167  167       34135 Jul 24  2023 ceph-mds.obs-env-f.log.1.gz
-rw-r--r--  1  167  167       34209 Jul 23  2023 ceph-mds.obs-env-f.log.2.gz
-rw-r--r--  1  167  167       34630 Jul 22  2023 ceph-mds.obs-env-f.log.3.gz
-rw-r--r--  1  167  167       34374 Jul 21  2023 ceph-mds.obs-env-f.log.4.gz
-rw-r--r--  1  167  167       34428 Jul 20  2023 ceph-mds.obs-env-f.log.5.gz
-rw-r--r--  1  167  167       34416 Jul 19  2023 ceph-mds.obs-env-f.log.6.gz
-rw-r--r--  1  167  167       34434 Jul 18  2023 ceph-mds.obs-env-f.log.7.gz
-rw-r--r--  1  167  167      743586 Jul 24  2023 ceph-mds.project-a.log
-rw-r--r--  1  167  167       34442 Jul 24  2023 ceph-mds.project-a.log.1.gz
-rw-r--r--  1  167  167       34290 Jul 23  2023 ceph-mds.project-a.log.2.gz
-rw-r--r--  1  167  167       34723 Jul 22  2023 ceph-mds.project-a.log.3.gz
-rw-r--r--  1  167  167       34550 Jul 21  2023 ceph-mds.project-a.log.4.gz
-rw-r--r--  1  167  167       34887 Jul 20  2023 ceph-mds.project-a.log.5.gz
-rw-r--r--  1  167  167       34958 Jul 19  2023 ceph-mds.project-a.log.6.gz
-rw-r--r--  1  167  167       34660 Jul 18  2023 ceph-mds.project-a.log.7.gz
-rw-r--r--  1  167  167     2557721 May 14 18:39 ceph-mds.project-b.log
-rw-r--r--  1  167  167       55090 May 14 00:13 ceph-mds.project-b.log.1.gz
-rw-r--r--  1  167  167       55195 May 13 00:13 ceph-mds.project-b.log.2.gz
-rw-r--r--  1  167  167       54242 May 12 00:13 ceph-mds.project-b.log.3.gz
-rw-r--r--  1  167  167       55411 May 11 00:13 ceph-mds.project-b.log.4.gz
-rw-r--r--  1  167  167       55463 May 10 00:13 ceph-mds.project-b.log.5.gz
-rw-r--r--  1  167  167       55235 May  9 00:13 ceph-mds.project-b.log.6.gz
-rw-r--r--  1  167  167       54713 May  8 00:12 ceph-mds.project-b.log.7.gz
-rw-r--r--  1  167  167      881492 Apr 11  2023 ceph-mds.project-c.log
-rw-r--r--  1  167  167       32840 Apr 11  2023 ceph-mds.project-c.log.1.gz
-rw-r--r--  1  167  167       33105 Apr 10  2023 ceph-mds.project-c.log.2.gz
-rw-r--r--  1  167  167       33205 Apr  9  2023 ceph-mds.project-c.log.3.gz
-rw-r--r--  1  167  167       33172 Apr  8  2023 ceph-mds.project-c.log.4.gz
-rw-r--r--  1  167  167       32976 Apr  7  2023 ceph-mds.project-c.log.5.gz
-rw-r--r--  1  167  167       33156 Apr  6  2023 ceph-mds.project-c.log.6.gz
-rw-r--r--  1  167  167       33037 Apr  5  2023 ceph-mds.project-c.log.7.gz
-rw-r--r--  1  167  167     3429013 May 14 18:39 ceph-mds.project-d.log
-rw-r--r--  1  167  167       55267 May 14 00:13 ceph-mds.project-d.log.1.gz
-rw-r--r--  1  167  167       55187 May 13 00:13 ceph-mds.project-d.log.2.gz
-rw-r--r--  1  167  167       54158 May 12 00:13 ceph-mds.project-d.log.3.gz
-rw-r--r--  1  167  167       55206 May 11 00:13 ceph-mds.project-d.log.4.gz
-rw-r--r--  1  167  167       55224 May 10 00:13 ceph-mds.project-d.log.5.gz
-rw-r--r--  1  167  167       55033 May  9 00:13 ceph-mds.project-d.log.6.gz
-rw-r--r--  1  167  167       54622 May  8 00:12 ceph-mds.project-d.log.7.gz
-rw-r--r--  1  167  167      932803 May 14 18:39 ceph-mds.project-e.log
-rw-r--r--  1  167  167       54918 May 14 00:13 ceph-mds.project-e.log.1.gz
-rw-r--r--  1  167  167       55047 May 13 00:13 ceph-mds.project-e.log.2.gz
-rw-r--r--  1  167  167       54137 May 12 00:13 ceph-mds.project-e.log.3.gz
-rw-r--r--  1  167  167       55269 May 11 00:13 ceph-mds.project-e.log.4.gz
-rw-r--r--  1  167  167       55134 May 10 00:13 ceph-mds.project-e.log.5.gz
-rw-r--r--  1  167  167       55272 May  9 00:13 ceph-mds.project-e.log.6.gz
-rw-r--r--  1  167  167       54587 May  8 00:12 ceph-mds.project-e.log.7.gz
-rw-r--r--  1  167  167      755306 Jul 24  2023 ceph-mds.project-f.log
-rw-r--r--  1  167  167       34214 Jul 24  2023 ceph-mds.project-f.log.1.gz
-rw-r--r--  1  167  167       34146 Jul 23  2023 ceph-mds.project-f.log.2.gz
-rw-r--r--  1  167  167       34579 Jul 22  2023 ceph-mds.project-f.log.3.gz
-rw-r--r--  1  167  167       34258 Jul 21  2023 ceph-mds.project-f.log.4.gz
-rw-r--r--  1  167  167       34795 Jul 20  2023 ceph-mds.project-f.log.5.gz
-rw-r--r--  1  167  167       34735 Jul 19  2023 ceph-mds.project-f.log.6.gz
-rw-r--r--  1  167  167       34439 Jul 18  2023 ceph-mds.project-f.log.7.gz
-rw-r--r--  1  167  167      877864 Apr 11  2023 ceph-mds.scratch-a.log
-rw-r--r--  1  167  167       33237 Apr 11  2023 ceph-mds.scratch-a.log.1.gz
-rw-r--r--  1  167  167       33516 Apr 10  2023 ceph-mds.scratch-a.log.2.gz
-rw-r--r--  1  167  167       33441 Apr  9  2023 ceph-mds.scratch-a.log.3.gz
-rw-r--r--  1  167  167       33278 Apr  8  2023 ceph-mds.scratch-a.log.4.gz
-rw-r--r--  1  167  167       33502 Apr  7  2023 ceph-mds.scratch-a.log.5.gz
-rw-r--r--  1  167  167       33533 Apr  6  2023 ceph-mds.scratch-a.log.6.gz
-rw-r--r--  1  167  167       33404 Apr  5  2023 ceph-mds.scratch-a.log.7.gz
-rw-r--r--  1  167  167       99860 Jun  7  2022 ceph-mds.scratch-b.log
-rw-r--r--  1  167  167       36653 Jun  7  2022 ceph-mds.scratch-b.log.1.gz
-rw-r--r--  1  167  167       37026 Jun  6  2022 ceph-mds.scratch-b.log.2.gz
-rw-r--r--  1  167  167       36984 Jun  5  2022 ceph-mds.scratch-b.log.3.gz
-rw-r--r--  1  167  167       34992 Jun  4  2022 ceph-mds.scratch-b.log.4.gz
-rw-r--r--  1  167  167      105992 Jun  3  2022 ceph-mds.scratch-b.log.5.gz
-rw-r--r--  1  167  167       36529 Jun  1  2022 ceph-mds.scratch-b.log.6.gz
-rw-r--r--  1  167  167       42532 May 31  2022 ceph-mds.scratch-b.log.7.gz
-rw-r--r--  1  167  167      761882 May 14 18:19 ceph-mds.scratch-c.log
-rw-r--r--  1  167  167       36608 May 14 00:13 ceph-mds.scratch-c.log.1.gz
-rw-r--r--  1  167  167       36351 May 13 00:13 ceph-mds.scratch-c.log.2.gz
-rw-r--r--  1  167  167       36469 May 12 00:13 ceph-mds.scratch-c.log.3.gz
-rw-r--r--  1  167  167       36410 May 11 00:13 ceph-mds.scratch-c.log.4.gz
-rw-r--r--  1  167  167       36983 May 10 00:13 ceph-mds.scratch-c.log.5.gz
-rw-r--r--  1  167  167       36581 May  9 00:13 ceph-mds.scratch-c.log.6.gz
-rw-r--r--  1  167  167       36326 May  8 00:13 ceph-mds.scratch-c.log.7.gz
-rw-r--r--  1  167  167      766905 Jun  2  2022 ceph-mds.scratch-d.log
-rw-r--r--  1  167  167       32971 Jun  1  2022 ceph-mds.scratch-d.log.1.gz
-rw-r--r--  1  167  167       39216 May 31  2022 ceph-mds.scratch-d.log.2.gz
-rw-r--r--  1  167  167       38536 May 30  2022 ceph-mds.scratch-d.log.3.gz
-rw-r--r--  1  167  167       36533 May 29  2022 ceph-mds.scratch-d.log.4.gz
-rw-r--r--  1  167  167       33746 May 28  2022 ceph-mds.scratch-d.log.5.gz
-rw-r--r--  1  167  167       33871 May 27  2022 ceph-mds.scratch-d.log.6.gz
-rw-r--r--  1  167  167       33402 May 26  2022 ceph-mds.scratch-d.log.7.gz
-rw-r--r--  1  167  167       99750 Jun  7  2022 ceph-mds.scratch-e.log
-rw-r--r--  1  167  167       35436 Jun  7  2022 ceph-mds.scratch-e.log.1.gz
-rw-r--r--  1  167  167       35559 Jun  6  2022 ceph-mds.scratch-e.log.2.gz
-rw-r--r--  1  167  167       35229 Jun  5  2022 ceph-mds.scratch-e.log.3.gz
-rw-r--r--  1  167  167       34062 Jun  4  2022 ceph-mds.scratch-e.log.4.gz
-rw-r--r--  1  167  167       79816 Jun  3  2022 ceph-mds.scratch-e.log.5.gz
-rw-r--r--  1  167  167       36438 Jun  1  2022 ceph-mds.scratch-e.log.6.gz
-rw-r--r--  1  167  167       42603 May 31  2022 ceph-mds.scratch-e.log.7.gz
-rw-r--r--  1  167  167      936810 May 14 18:19 ceph-mds.scratch-f.log
-rw-r--r--  1  167  167       36397 May 14 00:13 ceph-mds.scratch-f.log.1.gz
-rw-r--r--  1  167  167       36401 May 13 00:13 ceph-mds.scratch-f.log.2.gz
-rw-r--r--  1  167  167       36218 May 12 00:13 ceph-mds.scratch-f.log.3.gz
-rw-r--r--  1  167  167       36437 May 11 00:13 ceph-mds.scratch-f.log.4.gz
-rw-r--r--  1  167  167       36890 May 10 00:13 ceph-mds.scratch-f.log.5.gz
-rw-r--r--  1  167  167       36360 May  9 00:13 ceph-mds.scratch-f.log.6.gz
-rw-r--r--  1  167  167       36249 May  8 00:13 ceph-mds.scratch-f.log.7.gz
-rw-r--r--  1  167  167    19396432 Jun  1  2023 ceph-mgr.a.log
-rw-r--r--  1  167  167     1274987 Jun  1  2023 ceph-mgr.a.log.1.gz
-rw-r--r--  1  167  167     1262210 May 31  2023 ceph-mgr.a.log.2.gz
-rw-r--r--  1  167  167     1279718 May 30  2023 ceph-mgr.a.log.3.gz
-rw-r--r--  1  167  167     1276592 May 29  2023 ceph-mgr.a.log.4.gz
-rw-r--r--  1  167  167     1282592 May 28  2023 ceph-mgr.a.log.5.gz
-rw-r--r--  1  167  167     1284520 May 27  2023 ceph-mgr.a.log.6.gz
-rw-r--r--  1  167  167     1291835 May 26  2023 ceph-mgr.a.log.7.gz
-rw-r--r--  1  167  167    26011729 May 14 17:40 ceph-mgr.b.log
-rw-r--r--  1  167  167     1762923 May 14 00:13 ceph-mgr.b.log.1.gz
-rw-r--r--  1  167  167     1739531 May 13 00:13 ceph-mgr.b.log.2.gz
-rw-r--r--  1  167  167     1767059 May 12 00:13 ceph-mgr.b.log.3.gz
-rw-r--r--  1  167  167     1779754 May 11 00:13 ceph-mgr.b.log.4.gz
-rw-r--r--  1  167  167     1788308 May 10 00:13 ceph-mgr.b.log.5.gz
-rw-r--r--  1  167  167     1779844 May  9 00:13 ceph-mgr.b.log.6.gz
-rw-r--r--  1  167  167     1774757 May  8 00:13 ceph-mgr.b.log.7.gz
-rw-r--r--  1  167  167    17802927 Feb 10  2022 ceph-mon.a.log
-rw-r--r--  1  167  167     1403159 Feb  9  2022 ceph-mon.a.log.1.gz
-rw-r--r--  1  167  167     2160233 Feb  8  2022 ceph-mon.a.log.2.gz
-rw-r--r--  1  167  167     1382238 Feb  7  2022 ceph-mon.a.log.3.gz
-rw-r--r--  1  167  167     1309700 Feb  6  2022 ceph-mon.a.log.4.gz
-rw-r--r--  1  167  167     1315143 Feb  5  2022 ceph-mon.a.log.5.gz
-rw-r--r--  1  167  167     1362418 Feb  4  2022 ceph-mon.a.log.6.gz
-rw-r--r--  1  167  167     1300114 Feb  3  2022 ceph-mon.a.log.7.gz
-rw-r--r--  1  167  167     7395676 May 13  2022 ceph-mon.e.log
-rw-r--r--  1  167  167    19116669 Sep 24  2022 ceph-mon.g.log
-rw-r--r--  1  167  167     1460662 Sep 24  2022 ceph-mon.g.log.1.gz
-rw-r--r--  1  167  167     1571631 Sep 23  2022 ceph-mon.g.log.2.gz
-rw-r--r--  1  167  167     1470157 Sep 22  2022 ceph-mon.g.log.3.gz
-rw-r--r--  1  167  167     1473463 Sep 21  2022 ceph-mon.g.log.4.gz
-rw-r--r--  1  167  167     1430203 Sep 20  2022 ceph-mon.g.log.5.gz
-rw-r--r--  1  167  167     1409030 Sep 19  2022 ceph-mon.g.log.6.gz
-rw-r--r--  1  167  167     1420278 Sep 18  2022 ceph-mon.g.log.7.gz
-rw-r--r--  1  167  167    14070819 Aug  7  2023 ceph-mon.n.log
-rw-r--r--  1  167  167     1466629 Aug  7  2023 ceph-mon.n.log.1.gz
-rw-r--r--  1  167  167     1469772 Aug  6  2023 ceph-mon.n.log.2.gz
-rw-r--r--  1  167  167     1480570 Aug  5  2023 ceph-mon.n.log.3.gz
-rw-r--r--  1  167  167     1421555 Aug  4  2023 ceph-mon.n.log.4.gz
-rw-r--r--  1  167  167     1434916 Aug  3  2023 ceph-mon.n.log.5.gz
-rw-r--r--  1  167  167     1709128 Aug  2  2023 ceph-mon.n.log.6.gz
-rw-r--r--  1  167  167     1440394 Aug  1  2023 ceph-mon.n.log.7.gz
-rw-r--r--  1  167  167     6740366 May 14 18:39 ceph-mon.u.log
-rw-r--r--  1  167  167    16221573 May 14 18:36 ceph-osd.0.log
-rw-r--r--  1  167  167      162486 May 14 00:13 ceph-osd.0.log.1.gz
-rw-r--r--  1  167  167      163611 May 13 00:13 ceph-osd.0.log.2.gz
-rw-r--r--  1  167  167      157908 May 12 00:13 ceph-osd.0.log.3.gz
-rw-r--r--  1  167  167      159201 May 11 00:13 ceph-osd.0.log.4.gz
-rw-r--r--  1  167  167      163634 May 10 00:13 ceph-osd.0.log.5.gz
-rw-r--r--  1  167  167      157966 May  9 00:13 ceph-osd.0.log.6.gz
-rw-r--r--  1  167  167      154373 May  8 00:13 ceph-osd.0.log.7.gz
-rw-r--r--  1  167  167    17989291 May 14 18:39 ceph-osd.11.log
-rw-r--r--  1  167  167      155480 May 14 00:13 ceph-osd.11.log.1.gz
-rw-r--r--  1  167  167      162610 May 13 00:13 ceph-osd.11.log.2.gz
-rw-r--r--  1  167  167      170404 May 12 00:13 ceph-osd.11.log.3.gz
-rw-r--r--  1  167  167      160270 May 11 00:13 ceph-osd.11.log.4.gz
-rw-r--r--  1  167  167      161765 May 10 00:13 ceph-osd.11.log.5.gz
-rw-r--r--  1  167  167      164143 May  9 00:13 ceph-osd.11.log.6.gz
-rw-r--r--  1  167  167      162437 May  8 00:13 ceph-osd.11.log.7.gz
-rw-r--r--  1  167  167    16989771 May 14 18:37 ceph-osd.17.log
-rw-r--r--  1  167  167      173359 May 14 00:13 ceph-osd.17.log.1.gz
-rw-r--r--  1  167  167      166573 May 13 00:13 ceph-osd.17.log.2.gz
-rw-r--r--  1  167  167      169435 May 12 00:13 ceph-osd.17.log.3.gz
-rw-r--r--  1  167  167      161822 May 11 00:13 ceph-osd.17.log.4.gz
-rw-r--r--  1  167  167      165643 May 10 00:13 ceph-osd.17.log.5.gz
-rw-r--r--  1  167  167      170443 May  9 00:13 ceph-osd.17.log.6.gz
-rw-r--r--  1  167  167      166488 May  8 00:13 ceph-osd.17.log.7.gz
-rw-r--r--  1  167  167    16719190 May 14 18:39 ceph-osd.24.log
-rw-r--r--  1  167  167      166005 May 14 00:13 ceph-osd.24.log.1.gz
-rw-r--r--  1  167  167      165600 May 13 00:13 ceph-osd.24.log.2.gz
-rw-r--r--  1  167  167      164283 May 12 00:13 ceph-osd.24.log.3.gz
-rw-r--r--  1  167  167      167982 May 11 00:13 ceph-osd.24.log.4.gz
-rw-r--r--  1  167  167      169249 May 10 00:13 ceph-osd.24.log.5.gz
-rw-r--r--  1  167  167      157408 May  9 00:13 ceph-osd.24.log.6.gz
-rw-r--r--  1  167  167      171751 May  8 00:13 ceph-osd.24.log.7.gz
-rw-r--r--  1  167  167    16957752 May 14 18:39 ceph-osd.26.log
-rw-r--r--  1  167  167      158335 May 14 00:13 ceph-osd.26.log.1.gz
-rw-r--r--  1  167  167      163783 May 13 00:13 ceph-osd.26.log.2.gz
-rw-r--r--  1  167  167      159128 May 12 00:13 ceph-osd.26.log.3.gz
-rw-r--r--  1  167  167      154048 May 11 00:13 ceph-osd.26.log.4.gz
-rw-r--r--  1  167  167      166889 May 10 00:13 ceph-osd.26.log.5.gz
-rw-r--r--  1  167  167      164556 May  9 00:13 ceph-osd.26.log.6.gz
-rw-r--r--  1  167  167      156648 May  8 00:13 ceph-osd.26.log.7.gz
-rw-r--r--  1  167  167    16945240 May 14 18:39 ceph-osd.28.log
-rw-r--r--  1  167  167      166821 May 14 00:13 ceph-osd.28.log.1.gz
-rw-r--r--  1  167  167      170518 May 13 00:13 ceph-osd.28.log.2.gz
-rw-r--r--  1  167  167      168134 May 12 00:13 ceph-osd.28.log.3.gz
-rw-r--r--  1  167  167      175178 May 11 00:13 ceph-osd.28.log.4.gz
-rw-r--r--  1  167  167      162544 May 10 00:13 ceph-osd.28.log.5.gz
-rw-r--r--  1  167  167      161880 May  9 00:13 ceph-osd.28.log.6.gz
-rw-r--r--  1  167  167      169788 May  8 00:13 ceph-osd.28.log.7.gz
-rw-r--r--  1  167  167    16612881 May 14 18:39 ceph-osd.32.log
-rw-r--r--  1  167  167      155558 May 14 00:13 ceph-osd.32.log.1.gz
-rw-r--r--  1  167  167      162066 May 13 00:13 ceph-osd.32.log.2.gz
-rw-r--r--  1  167  167      160335 May 12 00:13 ceph-osd.32.log.3.gz
-rw-r--r--  1  167  167      168682 May 11 00:13 ceph-osd.32.log.4.gz
-rw-r--r--  1  167  167      152200 May 10 00:13 ceph-osd.32.log.5.gz
-rw-r--r--  1  167  167      168176 May  9 00:13 ceph-osd.32.log.6.gz
-rw-r--r--  1  167  167      157446 May  8 00:13 ceph-osd.32.log.7.gz
-rw-r--r--  1  167  167    16564845 May 14 18:39 ceph-osd.48.log
-rw-r--r--  1  167  167      160303 May 14 00:13 ceph-osd.48.log.1.gz
-rw-r--r--  1  167  167      156306 May 13 00:13 ceph-osd.48.log.2.gz
-rw-r--r--  1  167  167      152983 May 12 00:13 ceph-osd.48.log.3.gz
-rw-r--r--  1  167  167      163541 May 11 00:13 ceph-osd.48.log.4.gz
-rw-r--r--  1  167  167      164086 May 10 00:13 ceph-osd.48.log.5.gz
-rw-r--r--  1  167  167      151135 May  9 00:13 ceph-osd.48.log.6.gz
-rw-r--r--  1  167  167      165746 May  8 00:13 ceph-osd.48.log.7.gz
-rw-r--r--  1  167  167    16614131 May 14 18:35 ceph-osd.50.log
-rw-r--r--  1  167  167      148024 May 14 00:13 ceph-osd.50.log.1.gz
-rw-r--r--  1  167  167      152481 May 13 00:13 ceph-osd.50.log.2.gz
-rw-r--r--  1  167  167      154504 May 12 00:13 ceph-osd.50.log.3.gz
-rw-r--r--  1  167  167      157741 May 11 00:13 ceph-osd.50.log.4.gz
-rw-r--r--  1  167  167      150317 May 10 00:13 ceph-osd.50.log.5.gz
-rw-r--r--  1  167  167      151277 May  9 00:13 ceph-osd.50.log.6.gz
-rw-r--r--  1  167  167      152838 May  8 00:13 ceph-osd.50.log.7.gz
-rw-r--r--  1  167  167    16815018 May 14 18:35 ceph-osd.54.log
-rw-r--r--  1  167  167      158165 May 14 00:13 ceph-osd.54.log.1.gz
-rw-r--r--  1  167  167      158983 May 13 00:13 ceph-osd.54.log.2.gz
-rw-r--r--  1  167  167      151592 May 12 00:13 ceph-osd.54.log.3.gz
-rw-r--r--  1  167  167      160796 May 11 00:13 ceph-osd.54.log.4.gz
-rw-r--r--  1  167  167      170588 May 10 00:13 ceph-osd.54.log.5.gz
-rw-r--r--  1  167  167      155643 May  9 00:13 ceph-osd.54.log.6.gz
-rw-r--r--  1  167  167      158554 May  8 00:13 ceph-osd.54.log.7.gz
-rw-r--r--  1  167  167    16793344 May 14 18:39 ceph-osd.59.log
-rw-r--r--  1  167  167      193111 May 14 00:13 ceph-osd.59.log.1.gz
-rw-r--r--  1  167  167      192726 May 13 00:13 ceph-osd.59.log.2.gz
-rw-r--r--  1  167  167      195929 May 12 00:13 ceph-osd.59.log.3.gz
-rw-r--r--  1  167  167      189323 May 11 00:13 ceph-osd.59.log.4.gz
-rw-r--r--  1  167  167      194411 May 10 00:13 ceph-osd.59.log.5.gz
-rw-r--r--  1  167  167      193781 May  9 00:13 ceph-osd.59.log.6.gz
-rw-r--r--  1  167  167      194772 May  8 00:13 ceph-osd.59.log.7.gz
-rw-r--r--  1  167  167    16686581 May 14 18:39 ceph-osd.6.log
-rw-r--r--  1  167  167      175411 May 14 00:13 ceph-osd.6.log.1.gz
-rw-r--r--  1  167  167      164813 May 13 00:13 ceph-osd.6.log.2.gz
-rw-r--r--  1  167  167      160007 May 12 00:13 ceph-osd.6.log.3.gz
-rw-r--r--  1  167  167      163748 May 11 00:13 ceph-osd.6.log.4.gz
-rw-r--r--  1  167  167      165586 May 10 00:13 ceph-osd.6.log.5.gz
-rw-r--r--  1  167  167      167513 May  9 00:13 ceph-osd.6.log.6.gz
-rw-r--r--  1  167  167      163979 May  8 00:13 ceph-osd.6.log.7.gz
-rw-r--r--  1  167  167    10533605 Jan 12  2023 ceph-volume.log
-rw-------  1  167  167  1908944896 Feb  9  2022 core.13
  • Cluster CR (custom resource), typically called cluster.yaml, if necessary
apiVersion: v1
items:
- apiVersion: ceph.rook.io/v1
  kind: CephCluster
  metadata:
    annotations:
      meta.helm.sh/release-name: rook-ceph-cluster
      meta.helm.sh/release-namespace: rook-ceph
      objectset.rio.cattle.io/id: default-pillan-fleet-s-tu-c-pillan-rook-ceph-cluster
    creationTimestamp: "2021-11-19T17:06:40Z"
    finalizers:
    - cephcluster.ceph.rook.io
    generation: 14
    labels:
      app.kubernetes.io/managed-by: Helm
      objectset.rio.cattle.io/hash: 8719f080c0b9ff76bf4dde33b851bf342788ba35
    name: rook-ceph
    namespace: rook-ceph
    resourceVersion: "818600583"
    uid: e642c361-1530-4fbe-a0fa-d7a38d8bcd2e
  spec:
    cephVersion:
      image: quay.io/ceph/ceph:v17.2.6
    cleanupPolicy:
      sanitizeDisks:
        dataSource: zero
        iteration: 1
        method: quick
    crashCollector: {}
    dashboard:
      enabled: true
      ssl: false
    dataDirHostPath: /var/lib/rook
    disruptionManagement:
      managePodBudgets: true
      osdMaintenanceTimeout: 30
      pgHealthCheckTimeout: 30
    external: {}
    healthCheck:
      daemonHealth:
        mon:
          interval: 45s
        osd:
          interval: 1m0s
        status:
          interval: 1m0s
      livenessProbe:
        mgr: {}
        mon: {}
        osd: {}
    labels:
      monitoring:
        lsst.io/monitor: "true"
    logCollector:
      enabled: true
      maxLogSize: 500M
      periodicity: 1d
    mgr:
      allowMultiplePerNode: false
      count: 2
      modules:
      - enabled: true
        name: pg_autoscaler
    mon:
      count: 5
    monitoring:
      enabled: true
    network:
      connections:
        compression:
          enabled: false
        encryption:
          enabled: false
        requireMsgr2: false
    placement:
      all:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: role
                operator: In
                values:
                - storage-node
        tolerations:
        - effect: NoSchedule
          key: role
          operator: Equal
          value: storage-node
    priorityClassNames:
      mgr: system-cluster-critical
      mon: system-node-critical
      osd: system-node-critical
    resources:
      cleanup:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 100Mi
      crashcollector:
        limits:
          cpu: 500m
          memory: 60Mi
        requests:
          cpu: 100m
          memory: 60Mi
      exporter:
        limits:
          memory: 128Mi
        requests:
          cpu: 50m
          memory: 50Mi
      logcollector:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 100m
          memory: 100Mi
      mgr:
        limits:
          cpu: "1"
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 512Mi
      mgr-sidecar:
        limits:
          cpu: 500m
          memory: 100Mi
        requests:
          cpu: 100m
          memory: 40Mi
      mon:
        limits:
          cpu: "2"
          memory: 2Gi
        requests:
          cpu: "1"
          memory: 1Gi
      osd:
        limits:
          cpu: "2"
          memory: 12Gi
        requests:
          cpu: "1"
          memory: 8Gi
      prepareosd:
        limits:
          cpu: 500m
          memory: 400Mi
        requests:
          cpu: 500m
          memory: 50Mi
    security:
      kms: {}
    storage:
      config:
        osdsPerDevice: "4"
      nodes:
      - devices:
        - name: /dev/disk/by-id/nvme-Samsung_SSD_983_DCT_1.92TB_S48BNG0MB01685F
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQLB1T9HAJR-00007_S439NC0RA01816
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQL21T9HCJR-00A07_S64GNE0T503390
        name: pillan01
      - devices:
        - name: /dev/disk/by-id/nvme-Samsung_SSD_983_DCT_1.92TB_S48BNG0MB01695D
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQLB1T9HAJR-00007_S439NC0RA01819
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQL21T9HCJR-00A07_S64GNE0T503393
        name: pillan02
      - devices:
        - name: /dev/disk/by-id/nvme-Samsung_SSD_983_DCT_1.92TB_S48BNG0MB01690H
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQLB1T9HAJR-00007_S439NC0R803375
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQL21T9HCJR-00A07_S64GNE0T503391
        name: pillan03
      - devices:
        - name: /dev/disk/by-id/nvme-Samsung_SSD_983_DCT_1.92TB_S48BNG0MB01744N
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQLB1T9HAJR-00007_S439NC0R803387
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQL21T9HCJR-00A07_S64GNE0T503394
        name: pillan04
      - devices:
        - name: /dev/disk/by-id/nvme-Samsung_SSD_983_DCT_1.92TB_S48BNG0MB00905W
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQL21T9HCJR-00A07_S64GNE0T503385
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQLB1T9HAJR-00007_S439NC0R803373
        name: pillan05
      - devices:
        - name: /dev/disk/by-id/nvme-Samsung_SSD_983_DCT_1.92TB_S48BNG0MB01747H
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQLB1T9HAJR-00007_S439NC0R803392
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQL21T9HCJR-00A07_S64GNE0T503395
        name: pillan06
      - devices:
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQLB1T9HAJR-00007_S439NC0R803389
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQL21T9HCJR-00A07_S64GNE0T504111
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQL21T9HCJR-00A07_S64GNE0T503387
        name: pillan07
      - devices:
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQLB1T9HAJR-00007_S439NC0R803386
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQL21T9HCJR-00A07_S64GNE0T503386
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQL21T9HCJR-00A07_S64GNE0T503388
        name: pillan08
      - devices:
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQLB1T9HAJR-00007_S439NC0R803379
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQL21T9HCJR-00A07_S64GNE0T503389
        - name: /dev/disk/by-id/nvme-SAMSUNG_MZQL21T9HCJR-00A07_S64GNE0T503404
        name: pillan09
      useAllDevices: false
    waitTimeoutForHealthyOSDInMinutes: 10
  status:
    ceph:
      capacity:
        bytesAvailable: 28711521767424
        bytesTotal: 51850002825216
        bytesUsed: 23138481057792
        lastUpdated: "2024-05-14T18:46:21Z"
      fsid: e60c272e-6043-4f98-a7ea-85f3c1a207da
      health: HEALTH_OK
      lastChanged: "2024-05-14T18:40:15Z"
      lastChecked: "2024-05-14T18:46:21Z"
      previousHealth: HEALTH_WARN
      versions:
        mds:
          ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable): 48
        mgr:
          ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable): 2
        mon:
          ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable): 5
        osd:
          ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable): 108
        overall:
          ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable): 172
        rgw:
          ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable): 9
    conditions:
    - lastHeartbeatTime: "2024-05-14T18:46:22Z"
      lastTransitionTime: "2024-05-14T16:00:34Z"
      message: Cluster created successfully
      reason: ClusterCreated
      status: "True"
      type: Ready
    message: Cluster created successfully
    observedGeneration: 14
    phase: Ready
    state: Created
    storage:
      deviceClasses:
      - name: nvme
      osd:
        storeType:
          "": 108
    version:
      image: quay.io/ceph/ceph:v17.2.6
      version: 17.2.6-0
kind: List
metadata:
  resourceVersion: ""

Logs to submit:

  • Operator's logs, if necessary

  • Crashing pod(s) logs, if necessary

    To get logs, use kubectl -n <namespace> logs <pod name>
    When pasting logs, always surround them with backticks or use the insert code button from the GitHub UI.
    Read GitHub documentation if you need help.

Cluster Status to submit:

  • Output of kubectl commands, if necessary

    To get the health of the cluster, use kubectl rook-ceph health
    To get the status of the cluster, use kubectl rook-ceph ceph status
    For more details, see the Rook kubectl Plugin

Environment:

  • Rook version (use rook version inside of a Rook Pod): 1.3-ish through 1.14.3
@jhoblitt jhoblitt added the bug label May 14, 2024
@BlaineEXE
Member

This is definitely a challenging edge condition. On one hand, when a component moves to another node, we don't necessarily want to immediately delete that component's logs, in case the move to another node was caused by a failure someone would want to debug.

And at the same time, I do think that since Rook is keeping and rotating on-disk logs, Rook should also take care to clean up any old lingering log files with some regularity. Relying on user environments to do cleanup would risk a too-many-cooks scenario that could end up messing with the existing log rotation mechanism.

Logs are currently managed and rotated via a container sidecar mechanism. To handle broader-scope cleanup, Rook would probably need to run a rotator daemon on each node. Or Rook could periodically run a rotation job on every node.

If Rook were to implement either mechanism, I feel like that would basically make the existing log rotator sidecar containers obsolete. Overall, that is probably a good thing, since those sidecars each eat up a portion of node CPU/mem resources. A per-node rotator daemonset would be more resource efficient.

A Job that runs on a cron would be even more resource efficient, but Rook would have to create a Job per node, which would be harder to implement in code. A good initial implementation could use a daemonset.

I think Seb added the collector sidecars initially, but @subhamkrai made some modifications in recent memory. Let's get his and @travisn 's inputs here also.

At first glance, I don't think we can put a super high priority on work for this. However, there are likely other users who have encountered (or are about to encounter) this scenario themselves. And I think that being able to reduce overall resource consumption is a compelling factor. For me, I'd like to see this make it into 1.15 or 1.16 if possible, and this seems like something we could consider backporting to 1.14.

@travisn
Member

travisn commented May 14, 2024

Without requiring any code changes, what if we had a script running in a daemonset (added to the examples folder) that does the following:

  • Mount the hostpath: /var/lib/rook/rook-ceph/log
  • Periodically delete all log files older than a month

Users who notice this issue could choose to deploy the daemonset.
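
For concreteness, a minimal sketch of the loop such a daemonset container could run, assuming it mounts the node's /var/lib/rook/rook-ceph/log hostpath at /ceph-logs and uses a 30-day retention threshold (both the mount point and the threshold are illustrative, not agreed-upon defaults):

#!/bin/sh
# Hypothetical cleanup loop for the proposed daemonset container.
# LOG_DIR is the container-side mount of the node's /var/lib/rook/rook-ceph/log hostpath.
LOG_DIR=${LOG_DIR:-/ceph-logs}
MAX_AGE_DAYS=${MAX_AGE_DAYS:-30}   # illustrative retention threshold
while true; do
  # Remove plain and rotated/compressed log files older than the threshold.
  find "$LOG_DIR" -type f -mtime +"$MAX_AGE_DAYS" -print -delete || true
  sleep 86400   # once per day
done

The trailing || true keeps the loop alive if a file disappears between being listed and being deleted.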

@jhoblitt
Contributor Author

@BlaineEXE I think it's fair to say this isn't a high priority. In my env, this issue takes months to years to become noticeable. However, it has occurred on several different clusters, and it manifests as a node becoming tainted with disk-pressure. It has happened enough times that I'm considering implementing a periodic cleanup external to k8s, e.g. a cron job that removes files older than 30 days.
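
For illustration only, a host-level cron entry along those lines could look like the following; the path matches the hostpath from the report above, while the 30-day threshold and the /etc/cron.d location are assumptions:

# /etc/cron.d/rook-ceph-log-cleanup (hypothetical): daily cleanup on each storage node
0 3 * * * root find /var/lib/rook/rook-ceph/log -type f -mtime +30 -delete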

@travisn I suspect that a daemonset is the best that can be done without adding a dependency on an external controller. It doesn't sound like daemonjob is coming anytime soon: kubernetes/kubernetes#115716

I use a role: storage-node label on storage nodes, which should be usable to restrict a daemonset to only the relevant nodes.

@Madhu-1
Member

Madhu-1 commented May 15, 2024

@travisn @BlaineEXE @jhoblitt Additionally, most log rotation tools have an option to keep logs for a specific duration or count, e.g. keep logs for the last week or keep the last 5 log files. Should we have that mechanism as well?

@parth-gr
Member

@Madhu-1 we do have the periodicity setting https://github.com/rook/rook/blob/master/deploy/examples/cluster.yaml#L139 added to the log rotation.

I think the problem above is for the case where the pod got re-scheduled to another node, so the older log files on the old node aren't deleted for that daemon...

@Madhu-1
Member

Madhu-1 commented May 15, 2024

@Madhu-1 we do have the periodicity setting https://github.com/rook/rook/blob/master/deploy/examples/cluster.yaml#L139 added to the log rotation.

I think the problem above is for the case where the pod got re-scheduled to another node, so the older log files on the old node aren't deleted for that daemon...

@parth-gr is that used to delete the old logs as well after a certain time? Yes, I agree the re-scheduling case is problematic and needs to be looked into as well.

@parth-gr
Member

is that used to delete the old logs as well after a certain time?

Yes, that's correct, it deletes them at a certain interval.

@parth-gr
Member

A Job that runs on a cron would be even more resource efficient, but Rook would have to create a Job per node, which would be harder to implement in code. A good initial implementation could use a daemonset.

That would be a good idea... we can add a yaml in the examples folder which does this simpler cleaning job, and later Rook can run it alongside the logrotate if needed.

@travisn
Member

travisn commented May 15, 2024

A very simple option would also be to update the logrotate sidecar script so that it periodically deletes really old files. Then we don't need any redesign or separate daemonset. The only case that doesn't cover is if no ceph daemons are running on a node, but that doesn't seem worth worrying about.

@BlaineEXE
Member

We learned in the huddle today that logrotate has a mechanism that sends a HUP signal to a process after it rotates that process's logs, telling the process to reopen its log files. This prevents files from getting corrupted, to my understanding. I think what this means is that we truly do need the sidecar containers for logrotate to work well with ceph.

So perhaps it would be a good thing to have the script also periodically check the fuller-scope log directory to delete files older than some criteria, like 3-6 months. This assumes that the sidecars have access to (or can be given access to) the full log dir.

I'm not sure how configurable logrotate is in that respect though. Is it possible to tell it to rotate /log/path/daemon-A, and then go do old file cleanup across the entirety of /log/path?

Maybe Travis' suggestion from here to have a daemonset with a cleanup script would be good if we have trouble getting logrotate to do the wide-scale cleanup.

@travisn
Member

travisn commented May 15, 2024

So perhaps it would be a good thing to have the script also periodically check the fuller-scope log directory to delete files older than some criteria, like 3-6 months. This assumes that the sidecars have access to (or can be given access to) the full log dir.

Agreed, this is what I was trying to suggest with my previous comment. We should have some flexibility in our script, independent of whether logrotate has such an option.

@jhoblitt
Contributor Author

@Madhu-1 As an operator, the main concern I have is that the total space used by logs doesn't grow unbounded, IOW doesn't require periodic human intervention. The specific case here is not caused by log rotation failing for running pods, but by logs for pods which no longer exist (on that node) never being cleaned up, which over time leads to unbounded growth.

@travisn Another, possibly cheap, solution would be for the operator to launch a job once per day, on nodes which have any rook/ceph component running on them, that deletes files over a certain age threshold.

@BlaineEXE
Member

BlaineEXE commented May 15, 2024

To aid in discussion, this is what the default logrotate file looks like in the ceph container.

[root@d95e20ded1c6 /]# cat /etc/logrotate.d/ceph
/var/log/ceph/*.log {
    rotate 7
    daily
    compress
    sharedscripts
    postrotate
        killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse radosgw rbd-mirror cephfs-mirror || pkill -1 -x "ceph-mon|ceph-mgr|ceph-mds|ceph-osd|ceph-fuse|radosgw|rbd-mirror|cephfs-mirror" || true
    endscript
    missingok
    notifempty
    su root ceph
}

Then Rook does some sed replacements to make it apply only to the current daemon, applies some of the user config overrides, and then calls logrotate via a script every 15 mins.

cronLogRotate = `
CEPH_CLIENT_ID=%s
PERIODICITY=%s
LOG_ROTATE_CEPH_FILE=/etc/logrotate.d/ceph
LOG_MAX_SIZE=%s
ROTATE=%s
# edit the logrotate file to only rotate a specific daemon log
# otherwise we will logrotate log files without reloading certain daemons
# this might happen when multiple daemons run on the same machine
sed -i "s|*.log|$CEPH_CLIENT_ID.log|" "$LOG_ROTATE_CEPH_FILE"
# replace default daily with given user input
sed --in-place "s/daily/$PERIODICITY/g" "$LOG_ROTATE_CEPH_FILE"
# replace rotate count, default 7 for all ceph daemons other than rbd-mirror
sed --in-place "s/rotate 7/rotate $ROTATE/g" "$LOG_ROTATE_CEPH_FILE"
if [ "$LOG_MAX_SIZE" != "0" ]; then
# adding maxsize $LOG_MAX_SIZE at the 4th line of the logrotate config file with 4 spaces to maintain indentation
sed --in-place "4i \ \ \ \ maxsize $LOG_MAX_SIZE" "$LOG_ROTATE_CEPH_FILE"
fi
while true; do
# we don't force the logrorate but we let the logrotate binary handle the rotation based on user's input for periodicity and size
logrotate --verbose "$LOG_ROTATE_CEPH_FILE"
sleep 15m
done
`

I believe all log collector pods have /var/log/ceph mounted without a specific subdir, so I think Travis's suggestion that the scripts could be tailored to do overall cleanup would be fairly straightforward.

I would expect that there will be race conditions where old files are being cleaned up by 2 log rotators at once, but we can probably just ensure that the script doesn't return an error in that case. I think it's fine if the log cleanup is a best-effort task -- eventually the cleanup should succeed, as long as it isn't failing for some non-race reason.
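
Assuming the sidecars do share that /var/log/ceph mount, a sketch of how the tail of the script above might be extended with a best-effort cleanup pass (the 90-day threshold is illustrative, and the stderr redirect plus || true tolerate two sidecars racing to delete the same file):

while true; do
# unchanged: let the logrotate binary handle rotation per the user's periodicity and size settings
logrotate --verbose "$LOG_ROTATE_CEPH_FILE"
# hypothetical addition: prune anything in the shared log dir older than the retention threshold
find /var/log/ceph -type f -mtime +90 -delete 2>/dev/null || true
sleep 15m
done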

@BlaineEXE BlaineEXE added this to To do in v1.15 via automation May 31, 2024