
Random invalid session and inconsistent service accounts #19510

Open

pschichtel opened this issue Apr 15, 2024 · 46 comments

Comments

@pschichtel

pschichtel commented Apr 15, 2024

This is a follow up to #19217 and #19201

After my vacation I just verified the state of the minio installation again after the previous issues.

Expected Behavior

Once logged in I'd expect not to randomly receive "invalid session" warnings or to get randomly logged out when navigating to certain pages (e.g. the Site Replication config page).

I would also expect to see the same service accounts on my root user every time I refresh the Access Keys page (or when directly accessing /api/v1/service-accounts).

Current Behavior

I randomly get invalid session responses ("The Access Key Id you provided does not exist in our records.") from the backend, and on some pages that leads to a redirect to the login page.

I also get a different list of service accounts every time I refresh; sometimes it doesn't even include the site-replicator-0 account, which would explain why I'm still seeing #19217. Actually, in my tests just now, refreshing /api/v1/service-accounts a bunch of times, I rarely get all 4 service accounts.

The backup site still occasionally logs this as in #19217:

minio-1  | API: SRPeerBucketOps(bucket=154a22a1-8dca-4e64-98d8-687376a04d32)
minio-1  | Time: 16:58:03 UTC 04/15/2024
minio-1  | DeploymentID: bc54da3b-88f4-4a0d-a9d4-2365bf5a0d80
minio-1  | RequestID: 17C68294B9A6D50A
minio-1  | RemoteHost: 
minio-1  | Host: 
minio-1  | UserAgent: MinIO (linux; amd64) madmin-go/2.0.0
minio-1  | Error: Site replication error(s): 
minio-1  | 'ConfigureReplication' on site Production (bf3123cf-9753-4ea4-a46f-535599899c4c): failed(Backup->Production: Bucket target creation error: Remote service endpoint offline, target bucket: 154a22a1-8dca-4e64-98d8-687376a04d32 or remote service credentials: site-replicator-0 invalid 
minio-1  | 	The Access Key Id you provided does not exist in our records.) (*errors.errorString)
minio-1  |        4: internal/logger/logger.go:259:logger.LogIf()
minio-1  |        3: cmd/logging.go:30:cmd.adminLogIf()
minio-1  |        2: cmd/admin-handlers-site-replication.go:142:cmd.adminAPIHandlers.SRPeerBucketOps()
minio-1  |        1: net/http/server.go:2136:http.HandlerFunc.ServeHTTP()

Steps to Reproduce (for bugs)

I'm still not sure how I arrived at this state; I assume by enabling site replication.

I've checked that KES is working on both the production and the backup site. At this point I'm not even able to disable site replication on the production site, because I constantly get logged out (redirected to the login page).

The single-node backup instance does not show this behavior. There, I never get invalid session responses, I always get the same 4 service accounts on the root user (including site-replicator-0), and I can also access the Site Replication page.

Context

It makes using the minio console difficult. I assume replication from backup to production would not work reliably (or would be a lot slower), but that's not currently something I need to do.

Interestingly, mcli admin user svcacct list production admin always returns the complete list of service accounts for my root user, although not always in the same order, but that doesn't matter. S3 clients in general don't seem to be affected, at least not functionally.

To elaborate on the setup:

2 sites:

  1. site (production): 5 nodes, each with 1 disk, deployed via minio-operator to k8s, kes configured against a vault running in the same k8s
  2. site (backup): 1 node with 1 disk, deployed via docker-compose, kes configured with filesystem, containing the necessary keys from vault (to decouple the backup site from the k8s).

The keys between the KES deployments are identical (replicated files from the production site can be decrypted on the backup site). The production KES setup is responsive and can successfully access the vault (I created and deleted a test key to confirm).
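
For reference, site replication between two deployments like these is typically paired up with mc admin replicate add, roughly as follows (a sketch with placeholder endpoints and credentials; not necessarily the exact commands used here):

 $ mcli alias set production https://<production-endpoint> <root-user> <root-password>
 $ mcli alias set backup https://<backup-endpoint> <root-user> <root-password>
 $ mcli admin replicate add production backup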

Your Environment

  • Version used (minio --version): RELEASE.2024-04-06T05-26-02Z
  • Server setup and configuration: deployed by operator (5.0.14), replicating to a single-node setup on the same version deployed with docker-compose.
  • Operating System and version (uname -a): Linux 6.1.0-17-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.69-1 (2023-12-30) x86_64 GNU/Linux
@jiuker
Contributor

jiuker commented Apr 16, 2024

@pschichtel Do you only have one KES server for production?

@pschichtel
Author

@jiuker production has 3, backup has 1

@jiuker
Contributor

jiuker commented Apr 16, 2024

@jiuker production has 3, backup has 1

Do the 3 KES servers have the same keys for production? @pschichtel

@pschichtel
Author

They are all connected to the same vault (with a dedicated V2 KV engine for minio), so I'd assume so. How can I check?

@jiuker
Contributor

jiuker commented Apr 16, 2024

Could you check whether the key site-replicator-0's value keeps changing? @pschichtel

@pschichtel
Author

Not sure what you mean

@jiuker
Contributor

jiuker commented Apr 16, 2024

Not sure what you mean

Check for overlapping value assignments between two clients

@pschichtel
Author

Sorry for being confused!

Could you check whether the key site-replicator-0's value keeps changing?

By key, do you mean a KES key or an access key/secret access key? There is no "site-replicator-0" KES key, so I assume access key. What do you mean by "value" then?

Check for overlapping value assignments between two clients

What do you mean by "value assignments"? And what clients?

I just checked with mcli again (mcli admin user svcacct info production site-replicator-0), and every now and then I get mcli: <ERROR> Unable to get information of the specified service account. The specified service account is not found (Specified service account does not exist)., so I guess there must be an instance that doesn't have the account. Weirdly, I didn't notice that yesterday. If it doesn't fail with that error, it consistently returns this:

AccessKey: site-replicator-0
ParentUser: root-user
Status: on
Name: 
Description: 
Policy: implied
Expiration: no-expiry

@jiuker
Contributor

jiuker commented Apr 16, 2024

I only found one strange case: when I open two minio login pages in one browser at the same time, one of them returns {"detailedMessage":"Access Denied.","message":"invalid session"}, but I can access both normally with two separate browsers. I don't know if it's similar to your case? @pschichtel

@pschichtel
Author

pschichtel commented Apr 16, 2024

@jiuker I have the occasional case where the page after login stays blank, because /api/v1/session returns invalid session / The Access Key Id you provided does not exist in our records.. I don't need two tabs/browsers for that.

I think your case sounds like a race condition on the cookies/localStorage/sessionStorage shared between browser tabs, which are not shared between browsers.

@jiuker
Contributor

jiuker commented Apr 16, 2024

@jiuker I have the occasional case where the page after login stays blank, because /api/v1/session returns invalid session / The Access Key Id you provided does not exist in our records.. I don't need two tabs/browsers for that.

I think your case sounds like a race condition on the cookies/localStorage/sessionStorage shared between browser tabs, which are not shared between browsers.

Yeah. It goes back to the login page when /api/v1/session returns {"detailedMessage":"Access Denied.","message":"invalid session"} on a page refresh. It looks like your case, so I guess it could be a console issue. Which page are you viewing when that happens? @pschichtel

@pschichtel
Author

@jiuker I don't think it is limited to a specific page; I've seen it happen on several different pages.

So I guess that could be a console issue.

I'm not so sure anymore, because I get errors with mcli too, and that doesn't go through the console, right?

@harshavardhana
Member

We can't reproduce any of the issues reported here.

@pschichtel
Author

pschichtel commented Apr 26, 2024

How can I properly clear replication settings from both sites? Then I could test the production cluster without site replication and see if that helps.

@poornas
Contributor

poornas commented Apr 26, 2024

 $ mc admin replicate remove sitea siteb  --force
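
With the aliases used in this thread that would be roughly the following (a sketch; checking the current pairing first with mc admin replicate info, and assuming the configured site names match the aliases):

 $ mc admin replicate info production
 $ mc admin replicate remove production backup --force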

@pschichtel
Author

I just noticed that even the replication rules on buckets are completely inconsistent from refresh to refresh.

@poornas thanks, I'll try that next week.

@pschichtel
Author

Bucket versioning is also affected. It seems like everything somehow related to site replication is completely inconsistent between the nodes of the production cluster. It also seems to have gotten worse since I checked last week.

@pschichtel
Author

@poornas I removed the backup site from replication and it's all fine now. Should the site-replicator-0 account disappear? Or should I clean that account up before re-enabling replication?

@jiuker
Contributor

jiuker commented Apr 29, 2024

When you remove site replication, the site-replicator-0 account should disappear. You cannot remove that account; it's an internal account.

@pschichtel
Author

Are you saying it should automatically disappear after removing site replication? Because it hasn't so far, neither on the production site nor on the backup site. Neither site has any other replication rules.

So I'll delete the service accounts to have a clean state.
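
Roughly like this, for the record (a sketch; assuming mcli's svcacct rm subcommand and the production/backup aliases):

 $ mcli admin user svcacct rm production site-replicator-0
 $ mcli admin user svcacct rm backup site-replicator-0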

@jiuker
Contributor

jiuker commented Apr 29, 2024

Are you saying it should automatically disappear after removing site replication? Because it hasn't so far, neither on the production site nor on the backup site. Neither site has any other replication rules.

So I'll delete the service accounts to have a clean state.

Yeah, it should disappear. If not, you can try deleting it; I couldn't reproduce your case.

@pschichtel
Author

I removed the accounts. I'll upgrade both instances to the latest release now and then set up replication again in the evening.

@pschichtel
Author

I remember @harshavardhana saying something about this in a past issue: the /v1/service-accounts endpoint is rather slow (400-900 ms "wait" time in the browser). Given that this is a small cluster (5 nodes), only 3 service accounts exist, and my connection is basically local, this feels noticeably slow in the UI. This is still the case even after disabling replication. Is the timing within a normal range, or would this be worth investigating? I originally thought this was caused by the replication problem, but apparently it isn't.

@harshavardhana
Member

I remember @harshavardhana saying something about this in a past issue: the /v1/service-accounts endpoint is rather slow (400-900 ms "wait" time in the browser). Given that this is a small cluster (5 nodes), only 3 service accounts exist, and my connection is basically local, this feels noticeably slow in the UI. This is still the case even after disabling replication. Is the timing within a normal range, or would this be worth investigating? I originally thought this was caused by the replication problem, but apparently it isn't.

Can you share mc admin trace -a output while browsing this element in the UI?

@pschichtel
Author

It's currently pretty noisy; I can do that in the evening. Or is there a way to filter its output?

@klauspost
Contributor

Tracing only the call itself will not show what is going on, so filtering is not too feasible here.

You can try --response-duration=20ms to filter out the fastest requests - but it may not have what we need.
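
For example (a sketch; production is the alias used earlier in this thread):

 $ mc admin trace -a --response-duration=20ms production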

@pschichtel
Author

@harshavardhana https://gist.github.com/pschichtel/c62d0eb9e46adb5472bf103a4b9cac85

I filtered out a bunch of log lines obviously related to S3 file accesses.

@pschichtel
Author

OK, so I wiped my backup site and enabled replication again. Replication completed overnight. /service-accounts is still slow (possibly even a bit slower, but I'm not sure), but so far I haven't observed any invalid session responses, and the list of service accounts in production seems to be consistent.

@harshavardhana
Member

2024-04-29T18:42:17.368 [200 OK] admin.ListServiceAccounts production-2.prod-hl.minio.svc.cluster.local:9000/minio/admin/v3/list-service-accounts?user=  10.16.101.254    70.091ms     ⇣  70.085266ms  ↑ 242 B ↓ 259 B
2024-04-29T18:42:17.512 [200 OK] admin.InfoServiceAccount production-1.prod-hl.minio.svc.cluster.local:9000/minio/admin/v3/info-service-account?accessKey=access-key-c  10.16.101.254    33.786ms     ⇣  33.778568ms  ↑ 242 B ↓ 330 B
2024-04-29T18:42:17.621 [200 OK] admin.InfoServiceAccount production-0.prod-hl.minio.svc.cluster.local:9000/minio/admin/v3/info-service-account?accessKey=access-key-a  10.16.101.254    70.136ms     ⇣  70.129695ms  ↑ 242 B ↓ 516 B
2024-04-29T18:42:17.765 [200 OK] admin.InfoServiceAccount production-3.prod-hl.minio.svc.cluster.local:9000/minio/admin/v3/info-service-account?accessKey=access-key-b  10.16.101.254    73.995ms     ⇣  73.985804ms  ↑ 242 B ↓ 611 B

Looks like a wrong implementation in the Console UI: it requests the info for each key serially, one at a time.

@pschichtel
Author

That's what I thought when I saw the trace lines. Should I create an issue over at minio/console, or will you handle that "internally"?

@harshavardhana
Member

Will handle it internally.

@pschichtel
Author

I've just applied yesterday's minio release to the sites, and that hasn't reintroduced the issue either. I'll monitor this for the rest of the week, and if it stays without issues I'll close the issue.

@harshavardhana
Member

Checking in: how are things going, @pschichtel?

@harshavardhana
Member

Will handle it internally.

Sent PRs for this; the newer console release will handle these changes without making double the calls.

@harshavardhana
Member

Closing this since I haven't heard back and am assuming this has been resolved.

@pschichtel
Author

Yeah I haven't had issues so far.

harshavardhana added the "fixed in latest release (this issue is already fixed and upgrade is recommended)" label on May 4, 2024
@pschichtel
Author

@harshavardhana Now, after the upgrade to RELEASE.2024-05-01T01-11-10Z, I once again have the issue that service accounts seem to be inconsistent between nodes (/api/v1/service-accounts reports different sets). However, this time I don't have any "invalid session" issues so far. At least the site-replicator-0 service account is consistently contained in the response of the service accounts API, and so far the backup site is not complaining either.

I wonder if this is something caused by the upgrade process of the operator?

Should I open a new issue for this?

@pschichtel
Author

At least it seems to be limited to the console API; running mcli admin user svcacct list production root continuously in a loop hasn't shown any differences except for order.
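
The loop is roughly this (a sketch; hashing the sorted output so ordering differences don't show up as changes):

while true; do
  mcli admin user svcacct list production root | sort | md5sum
  sleep 1
done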

@pschichtel
Author

pschichtel commented May 10, 2024

@harshavardhana After upgrading to RELEASE.2024-05-10T01-41-38Z today, the console is once again barely usable due to random invalid session responses and inconsistent service accounts. S3 seems unaffected so far (site replication to backup is unaffected, and mcli admin user svcacct list production root has been showing consistent results for a while now).

I'm now convinced that the upgrade as performed by the operator causes or at least worsens this issue.

Should I create a new issue, possibly over at minio/console?

@harshavardhana
Member

Generally I don't expect this to occur unless someone actively wipes your credentials.

@harshavardhana
Member

Can you collect the backend .minio.sys folders from all nodes on both sites and share them with us?

harshavardhana added the "waiting for info" label and removed the "fixed in latest release (this issue is already fixed and upgrade is recommended)" label on May 10, 2024
@pschichtel
Author

@harshavardhana It seems there is quite a bit of information in there that I don't think I can just share. Is it possible to limit the requested files? Otherwise I'd first have to check internally whether it's OK to share this stuff.

What I noticed while poking around:

  • deleting a service account via the minio client works and is correctly reflected in the /service-accounts response (in those cases that don't fail with "invalid session")
  • it doesn't matter which of the nodes receives the request, they all randomly fail with invalid session (I assume they internally redistribute the requests?); see the per-node check sketched after this list
  • the service account files in .minio.sys seem reasonably similar
  • none of the service accounts have expiry dates, but in the /service-accounts endpoint some say "expiration": "0001-01-01T00:00:00Z" and others say "expiration": "1970-01-01T00:00:00Z". Not sure what that is about.
  • the /service-accounts endpoint is currently all or nothing: either it shows invalid session or it shows all service accounts. I'd say ~2/3 of the time it fails with invalid session.
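
To check each node directly, something like the following should work from inside the cluster (a sketch: the pod addresses come from the trace output above, the scheme and root credentials are placeholders, and prod-0..prod-4 are throwaway aliases):

# point a throwaway alias at each pod and list the root user's service accounts per node
for i in 0 1 2 3 4; do
  mcli alias set "prod-$i" "https://production-$i.prod-hl.minio.svc.cluster.local:9000" "<root-user>" "<root-password>"
  echo "--- production-$i"
  mcli admin user svcacct list "prod-$i" root
done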

@pschichtel
Author

Ha... I found the offender. I went through the pods one by one (from last to first, similar to the sts controller), deleted each one, and let the sts controller recreate it. Between each pod I repeatedly checked the /service-accounts endpoint. Pods 4, 3, and 2 did nothing; restarting pod 1 completely resolved the issue.

@harshavardhana
Member

Ha... I found the offender. I went through the pods one by one (from last to first, similar to the sts controller), deleted each one, and let the sts controller recreate it. Between each pod I repeatedly checked the /service-accounts endpoint. Pods 4, 3, and 2 did nothing; restarting pod 1 completely resolved the issue.

Do you have logs from this pod from before deleting it?

@pschichtel
Author

I do, but I don't think there was anything of interest. I'll check...

@pschichtel
Author

Here you go: production-1-logs.txt

I noticed that the cluster lost quorum once. The log file, btw, includes both the update and my restart that fixed the problem.
