Erroneous behaviour when cancelling a program #134
jaxxstorm added the kind/bug (Some behavior is incorrect or out of spec) and needs-triage (Needs attention from the triage team) labels on Apr 7, 2023
Potentially related to #124.

Need to actually try it out, but just from looking, it looks like the parent process is handling and cancelling on the SIGINT, while child processes are still being killed?
tgummerer added a commit to pulumi/pulumi that referenced this issue on Dec 13, 2023
These tests are pretty close to hitting the timeout during regular runs (e.g. one successful run I picked randomly took ~194000ms). The GitHub Actions runners don't have super reliable performance, so we can easily get pushed over this limit.

To make matters worse, if we hit this timeout at just the right time, the test doesn't exit cleanly (potentially related to pulumi/pulumi-dotnet#134); I've seen a `pulumi preview` process stick around in that case, preventing the tests from shutting down. This means we have to wait until the CI job times out after an hour before the failure is reported.

I'm not sure if we only got close to this timeout recently, or if this is a longer-standing issue, but it reproduces well for me locally. Ideally the tests would be faster, but this is still "only" ~5 min, which is much faster than other tests, and should hopefully reduce the number of times we need to go through the merge queue, saving a lot of time there.
github-merge-queue bot pushed a commit to pulumi/pulumi that referenced this issue on Dec 13, 2023 (same commit message as above; fixes pulumi/pulumi#14842)
lukehoban added the kind/enhancement (Improvements or new features) label and removed the kind/bug (Some behavior is incorrect or out of spec) label on Feb 5, 2024
What happened?
When running a Pulumi program, especially with the automation API, sending a single Ctrl+C to the execution hangs the program indefinitely.

Expected Behavior
When sending a Ctrl+C to a Pulumi invocation, I'd expect the program to try to shut down gracefully and terminate.
Steps to reproduce
Customer sent the following program that exhibits this behaviour
Output of `pulumi about`

N/A
Additional context
No response
Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).