
Consider using k8s workload name as the service.name #1423

Open

dashpole opened this issue Feb 28, 2024 · 3 comments
Labels: enhancement

Comments


dashpole commented Feb 28, 2024

Feature Request

This may be a question for semantic conventions, but I figured I would start here.

The k8s demo currently uses the app.kubernetes.io/component label as the service.name resource attribute:

```yaml
- name: OTEL_SERVICE_NAME
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.labels['app.kubernetes.io/component']
```

This differs from the behavior of the OpenTelemetry Operator, which uses the workload (Deployment, StatefulSet, etc.) name as the service.name resource attribute.

It would be nice if there were a standard way to default the service name in k8s environments that both the operator and the k8s demo used.

@open-telemetry/semconv-k8s-approvers

dashpole added the enhancement label Feb 28, 2024

puckpuck commented Mar 1, 2024

To confirm I'm reading that code correctly: it uses the K8s Deployment name for the service.name attribute?


jinja2 commented Mar 1, 2024

There was a similar discussion in this now-closed PR on semantic-conventions. We were looking at a different set of standard labels in that PR than the ones being discussed here. We should create a new issue to discuss this in semantic-conventions, but it might be a little difficult to standardize given the number of options.

Re: what should be our recommended way of getting the service.name?

On the one hand, the name of the controller (Deployment/StatefulSet) might not reflect the logical service name. For example, we split a single installation of a database cluster into one StatefulSet per zone to have better operational control in case a zone goes down. In this case, the three StatefulSets belong to the same logical service, and our service.name-equivalent attribute is set to the same value (which we get from a label) for pods from all three. Another, more common use case where a logical service is likely to run as part of multiple deployments is the canary/stable deployment pattern.

On the other hand, the standard k8s labels are only recommendations, so they might not be available on workloads at all, which is never the case for the controller name. I think the recommendation in sem-conv should be the k8s recommended label (we seem to have 3 options here, but imo app.kubernetes.io/instance is closer to a logical service than the others), falling back to the controller name if the label is not available.


puckpuck commented Mar 1, 2024

I wonder if part of the problem here is that we prefix every service name with the chart's release name. If we removed that, each component's service name would more closely align with its deployment name.
