Show progress when waiting #111

Open
evmar opened this issue Sep 15, 2011 · 8 comments · May be fixed by #2312

Comments

@evmar
Collaborator

evmar commented Sep 15, 2011

When we've been waiting for commands for more than a second with no output, print some "still waiting for xyz" text.

@sorbits
Contributor

sorbits commented Aug 4, 2019

Here’s what I have been using locally (for 6 years): sorbits@192fed9

@jhasse
Collaborator

jhasse commented Aug 4, 2019

It would be better to delay re-printing the status until we have waited a certain amount of time; that would also let us handle the case where more than one edge is stalling the build. This requires some refactoring, though, since waiting for tasks is currently done without the ability to specify a timeout.

This needs to be done before we can merge it. See #1405 (comment).
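
A minimal, self-contained sketch of that timeout idea (not ninja's actual subprocess code; the names and the simulated edge are illustrative): wait for the next finished command in one-second slices and re-print a "still waiting" line whenever a slice elapses with nothing finished.

```cpp
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <thread>

int main() {
  std::mutex m;
  std::condition_variable cv;
  bool finished = false;

  // Simulate one slow edge, e.g. a 3.5 second link step.
  std::thread slow_edge([&] {
    std::this_thread::sleep_for(std::chrono::milliseconds(3500));
    std::lock_guard<std::mutex> lock(m);
    finished = true;
    cv.notify_one();
  });

  std::unique_lock<std::mutex> lock(m);
  // Wait in 1-second slices; every timeout is a chance to refresh the status.
  while (!cv.wait_for(lock, std::chrono::seconds(1), [&] { return finished; }))
    std::printf("still waiting for LINK my_binary\n");

  std::printf("[1/1] LINK my_binary\n");
  slow_edge.join();
}
```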

@sorbits
Contributor

sorbits commented Aug 4, 2019

This needs to be done before we can merge it. See #1405 (comment).

My patch only prints status when there is a single task running, so for a normal concurrent build, you only get extra status printed when you have everything waiting for a single task, not when the build is running along fine.

I just tested a build with 365 edges needing rebuild, and it printed status 367 times, i.e. only 2 additional times: once for linking (which is slow with clang’s LTO option) and once for codesigning (which requires a network connection to the timestamp server).
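
Roughly, the rule could look like this (a paraphrased sketch, not the actual code in sorbits@192fed9): when an edge finishes and exactly one edge is still running, print that edge's description so the last line on screen names the job everything else is waiting on.

```cpp
#include <cstdio>
#include <string>
#include <vector>

void OnEdgeFinished(std::vector<std::string>& running,
                    const std::string& finished) {
  // Drop the finished edge from the running set.
  for (auto it = running.begin(); it != running.end(); ++it) {
    if (*it == finished) { running.erase(it); break; }
  }
  // Only one job left: everything is waiting on it, so say so.
  if (running.size() == 1)
    std::printf("waiting for %s\n", running.front().c_str());
}

int main() {
  std::vector<std::string> running = {"CXX foo.o", "CODESIGN app"};
  OnEdgeFinished(running, "CXX foo.o");  // prints: waiting for CODESIGN app
}
```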

@jhasse
Collaborator

jhasse commented Aug 4, 2019

Oh I didn't notice, sorry.

That does mean, though, that the problem persists if two jobs are stalling the build, right?

@sorbits
Contributor

sorbits commented Aug 4, 2019

That does mean, though, that the problem persists if two jobs are stalling the build, right?

Correct. The patch does not address that situation. I have found that builds stall either because of an issue with a single job or because of an issue with all the jobs; my patch addresses the former, and in case of the latter I think ninja’s behavior is already correct.

Personally I would want as few extra status lines as possible, since I often run ninja outside a terminal, where lines are not overwritten. But I do want the last line to reflect what is actually going on when the build is stalled, which I have defined as everything waiting for a single job.

But if you want to generalize it to multiple jobs, I would suggest the following (for all outputs, not just smart terminal):

If all of the below

  1. There are fewer jobs running than capacity (-j)
  2. All jobs running have been running for more than n seconds (n = 1 for smart terminal, n = 3 for non-smart terminal)
  3. The last status line printed refers to a job that is no longer running

Then

Print status line showing the job most recently started

I pick the most recently started job because if we are dealing with a set of extraordinarily slow jobs, that is the one we can assume will be running the longest, and thus the status printed will remain valid for the longest amount of time.
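
An illustrative sketch of those three conditions (data structures and names invented for the example, not taken from ninja):

```cpp
#include <algorithm>
#include <cstdio>
#include <ctime>
#include <string>
#include <vector>

struct RunningJob {
  std::string description;
  time_t start_time;
};

void MaybePrintStalledStatus(const std::vector<RunningJob>& running,
                             int capacity,                  // the -j value
                             int min_seconds,               // 1 smart, 3 dumb
                             const std::string& last_printed,
                             time_t now) {
  // 1. Fewer jobs running than capacity.
  if (running.empty() || (int)running.size() >= capacity)
    return;
  // 2. Every running job has been running for more than min_seconds.
  for (const RunningJob& job : running)
    if (now - job.start_time <= min_seconds)
      return;
  // 3. The last status line printed refers to a job that is no longer running.
  for (const RunningJob& job : running)
    if (job.description == last_printed)
      return;
  // Print the most recently started job: it will likely run the longest, so
  // the printed line stays accurate for the longest time.
  const RunningJob& newest = *std::max_element(
      running.begin(), running.end(),
      [](const RunningJob& a, const RunningJob& b) {
        return a.start_time < b.start_time;
      });
  std::printf("waiting for %s\n", newest.description.c_str());
}

int main() {
  time_t now = time(nullptr);
  std::vector<RunningJob> running = {{"LINK app", now - 10},
                                     {"CODESIGN app", now - 5}};
  MaybePrintStalledStatus(running, /*capacity=*/8, /*min_seconds=*/3,
                          /*last_printed=*/"CXX bar.o", now);
}
```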

@orgads
Contributor

orgads commented Jan 28, 2023

I created a prototype that prints all the running jobs, a bit similar to bazel or docker compose output.

It still needs a lot of polish (fix job numbers, improve alignment for long lines, keep the last line(s), handle errors, and more), but what do you think of the general idea?

If you like it, I'll try to complete it (as opt-in of course) and will push a PR.

ninja-live.mp4
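
For reference, a rough sketch of what such a multi-line live display could look like on a smart terminal (purely illustrative; this is not the prototype from the video): redraw one line per running job by moving the cursor back up over the previously drawn block.

```cpp
#include <chrono>
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

// Returns how many terminal lines the block now occupies.
size_t DrawRunningJobs(const std::vector<std::string>& jobs,
                       size_t prev_lines) {
  if (prev_lines)
    std::printf("\x1b[%zuA", prev_lines);       // cursor up to the block start
  size_t lines = 0;
  for (const std::string& job : jobs) {
    std::printf("\x1b[2K  %s\n", job.c_str());  // clear the line, redraw it
    ++lines;
  }
  for (; lines < prev_lines; ++lines)
    std::printf("\x1b[2K\n");                   // blank rows for finished jobs
  std::fflush(stdout);
  return lines;
}

int main() {
  std::vector<std::string> jobs = {"CXX a.o", "CXX b.o", "LINK app"};
  size_t prev = 0;
  for (int tick = 0; tick < 3; ++tick) {
    prev = DrawRunningJobs(jobs, prev);
    std::this_thread::sleep_for(std::chrono::milliseconds(300));
    if (!jobs.empty()) jobs.pop_back();         // simulate a job finishing
  }
}
```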

@orenbenkiki

For me it would be sufficient if ninja printed the list of active tasks every time that list changed, even if on a separate line. This seems to require less complex logic than some of the above proposals, but would result in more lines. I opened this proposal as #2249, which isn't quite a duplicate of the issue here but has the same motivation.

Personally I'm fine with the extra lines #2249 would create, but if not, then the proposals here with a 1s timeout to eliminate most of these lines are also a solution.

BTW - when there's only one task running, it would be "very nice" if its output were not buffered, even if the task isn't explicitly in the console pool.

This way it would always be clear what is running, and tracing output would be delayed the least amount of time.
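
A tiny sketch of that behaviour (a hypothetical helper, not #2249's actual implementation): append one plain line listing all active tasks each time the set changes; no terminal tricks needed, so it also works when output is redirected.

```cpp
#include <cstdio>
#include <set>
#include <string>

// Print the full set of active tasks as one appended line.
void PrintActiveTasks(const std::set<std::string>& active) {
  std::string line = "[running:";
  for (const std::string& task : active)
    line += " " + task;
  line += "]";
  std::printf("%s\n", line.c_str());
}

int main() {
  std::set<std::string> active;
  active.insert("CXX a.o");   PrintActiveTasks(active);
  active.insert("LINK app");  PrintActiveTasks(active);
  active.erase("CXX a.o");    PrintActiveTasks(active);
}
```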

ecatmur added a commit to ecatmur/ninja that referenced this issue Aug 2, 2023
In the terminal, when an edge finishes, update the status to display the first-started (longest-running) action. Usually this will be immediately overwritten by the next started action, but if there is a bottleneck in the build, or if the build is reaching the end, this ensures that the console displays an action that is actually running and not one that has completed.
This ensures that the user attributes any delay to the (or an) action that is actually causing the slowdown, and not unfairly to an action that has already finished just because it happened to be the last one started.

To make this clear, also change the default NINJA_STATUS to include the current actual parallelism level; this will drop from the number of cores towards 1 as the bottleneck or the end of the build approaches.

The implementation stores currently running nodes in a deque, with NULL for interior completed nodes. This should be fairly efficient and allows future extension, e.g. displaying multiple parallel action names, as in ninja-build#2249 and @orgads's prototype for ninja-build#111.

Partially resolves ninja-build#111 (should be enough for most purposes).
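
A simplified sketch of the bookkeeping that commit message describes (types condensed for illustration; not the actual code in the linked PR): running edges are kept in a deque in start order, finished interior edges are nulled out rather than erased, and completed entries are popped from the front so the front is always the oldest edge still running.

```cpp
#include <algorithm>
#include <cstdio>
#include <deque>
#include <string>

struct Edge { std::string description; };

std::deque<Edge*> running;

void OnEdgeStarted(Edge* e) { running.push_back(e); }

void OnEdgeFinished(Edge* e) {
  // Null out the slot instead of erasing it, so the deque stays in start
  // order and interior removal stays cheap.
  std::replace(running.begin(), running.end(), e, static_cast<Edge*>(nullptr));
  // Drop completed edges from the front; the new front is the longest-running
  // edge still in flight, which is what the status line should show.
  while (!running.empty() && running.front() == nullptr)
    running.pop_front();
  // Count only real entries for the parallelism level in the status line.
  size_t active = std::count_if(running.begin(), running.end(),
                                [](Edge* edge) { return edge != nullptr; });
  if (!running.empty())
    std::printf("[%zu running] %s\n", active,
                running.front()->description.c_str());
}

int main() {
  Edge a{"CXX a.o"}, b{"CXX b.o"}, c{"LINK app"};
  OnEdgeStarted(&a); OnEdgeStarted(&b); OnEdgeStarted(&c);
  OnEdgeFinished(&b);  // prints: [2 running] CXX a.o
  OnEdgeFinished(&a);  // prints: [1 running] LINK app
}
```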
@ecatmur ecatmur linked a pull request Aug 2, 2023 that will close this issue
@jhasse jhasse added this to the 2.0.0 milestone Apr 11, 2024
@esnosy

esnosy commented May 23, 2024

same problem here
