way to get resource consumption of individual jobs #5798
@vito This is also needed by us, as we discussed in last week's meeting.
I've been thinking about this a bit this morning, and I'm particularly interested in finding solutions that require no code or configuration changes to Concourse itself. https://docs.docker.com/config/containers/runmetrics/#metrics-from-cgroups-memory-cpu-block-io contains some good discussion of getting per-container CPU and memory consumption in docker. It turns out that garden creates cgroupfs mounts at structured paths very similar to docker's.

Let's suppose I want to see how many CPU shares are in use by a pipeline, where I have also sorted the containers by worker. I could then read the resource consumption in a few different ways.
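As a sketch of the cgroupfs approach (the file names follow cgroup v1 conventions; the exact paths garden uses, and the sample values below, are assumptions for illustration):

```python
# Sketch: parse cgroup v1 accounting files for one container's CPU and
# memory use. Garden's real cgroup layout may differ; the example file
# contents below are hypothetical.

def parse_cpuacct_usage(text: str) -> int:
    """cpuacct.usage holds total CPU time consumed, in nanoseconds."""
    return int(text.strip())

def parse_memory_stat(text: str) -> dict:
    """memory.stat holds one 'key value' pair per line."""
    stats = {}
    for line in text.splitlines():
        key, value = line.split()
        stats[key] = int(value)
    return stats

# Contents as they might appear under a container's cgroup directories:
cpu_ns = parse_cpuacct_usage("123456789\n")
mem = parse_memory_stat("rss 104857600\ncache 52428800\n")
print(cpu_ns / 1e9)         # CPU seconds consumed
print(mem["rss"] // 2**20)  # resident set size in MiB
```

In a real collector the strings would come from reading the stat files under each container's cgroup directory and joining on the container handle.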
Something that feels reasonable would be to follow the example of kube-state-metrics or flight_recorder and build a configurable prometheus exporter that does this basic joining, munging, and aggregating. It could expose a prometheus endpoint with aggregate CPU data per pipeline (and perhaps aggregate CPU data for all check containers as well).
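A minimal sketch of that exporter's core, assuming per-container records already joined with pipeline metadata (the metric and field names here are my own invention, not anything Concourse emits):

```python
# Sketch: aggregate per-container CPU usage into per-pipeline totals and
# render them in the Prometheus text exposition format. The 'pipeline' and
# 'cpu_seconds' fields are hypothetical; in practice they would come from
# joining ATC container metadata with cgroup reads.

from collections import defaultdict

def aggregate_cpu_by_pipeline(containers):
    """containers: iterable of dicts with 'pipeline' and 'cpu_seconds' keys."""
    totals = defaultdict(float)
    for c in containers:
        totals[c["pipeline"]] += c["cpu_seconds"]
    return dict(totals)

def render_prometheus(totals, metric="concourse_pipeline_cpu_seconds_total"):
    """Emit one sample per pipeline in Prometheus text format."""
    lines = ["# TYPE {} counter".format(metric)]
    for pipeline, seconds in sorted(totals.items()):
        lines.append('{}{{pipeline="{}"}} {}'.format(metric, pipeline, seconds))
    return "\n".join(lines)

containers = [
    {"pipeline": "deploy", "cpu_seconds": 1.5},
    {"pipeline": "deploy", "cpu_seconds": 2.5},
    {"pipeline": "test", "cpu_seconds": 0.5},
]
print(render_prometheus(aggregate_cpu_by_pipeline(containers)))
```

Exposing the rendered text over HTTP would make it scrapeable by any standard prometheus server.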
Maybe it would be deployed as a
The ideas I just presented don't work particularly well for collecting resource consumption down to individual builds, because that is potentially very high-cardinality data. Data that specific and abundant probably makes more sense to model as a series of events, e.g. sent to a logging system.
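To illustrate the event-based framing (the event shape and field names are hypothetical, not an existing Concourse format), each finished build could emit one JSON line carrying its usage totals:

```python
# Sketch: per-build resource usage modeled as discrete events rather than
# metrics, keeping high-cardinality build IDs out of the metrics system.
# Field names are hypothetical; events would be shipped to a logging
# backend as JSON lines.

import json

def build_usage_event(pipeline, job, build_id, cpu_seconds, max_rss_bytes):
    """Serialize one build's resource usage as a JSON event."""
    return json.dumps({
        "event": "build_resource_usage",
        "pipeline": pipeline,
        "job": job,
        "build": build_id,
        "cpu_seconds": cpu_seconds,
        "max_rss_bytes": max_rss_bytes,
    }, sort_keys=True)

print(build_usage_event("deploy", "unit", 1234, 3.2, 104857600))
```

A logging system can then aggregate these after the fact without any metrics backend having to hold a label per build.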
Hello All,
Is there any way to get the resource utilization (CPU, memory) of individual jobs? Does Concourse emit these metrics?
BRs, Gowrisankar