
tb-node kafka topics being reset every minute #101

Open
aistisdev opened this issue Jul 18, 2023 · 1 comment
aistisdev commented Jul 18, 2023

Description

The tb-node statefulset pod log keeps printing that the topicId changed. It prints this for every topic ID roughly every 60 seconds. From my understanding this happens when a topic is being recreated with the same id. Is this expected behaviour?

Steps to Reproduce

  1. Create a completely empty cluster
  2. Create an empty demo ThingsBoard PostgreSQL database
  3. kubectl apply -f thirdparty.yml
  4. kubectl apply -f tb-services.yml
  5. Once initialized, tb-node starts posting these messages every 60 seconds (see the log tail sketch below).
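
A quick way to watch for these messages is to tail the tb-node pod log and filter for the metadata reset line. This is only a minimal sketch: the pod name tb-node-0 is taken from the attached log, and the "thingsboard" namespace is an assumption, so adjust it to your deployment.

    # follow the tb-node pod log and show only the topicId reset messages
    # (namespace "thingsboard" is an assumption; use the namespace from your manifests)
    kubectl logs -f tb-node-0 -n thingsboard | grep "associated topicId changed"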

Expected Behavior

tb-node should not keep printing these messages for the same Kafka topics every 60 seconds.

Actual Behavior

The messages look like this:

....
2023-07-18 11:40:47,193 [kafka-consumer-stats-6-thread-1] INFO  org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-stats-loader-client, groupId=consumer-stats-loader-client-group] Resetting the last seen epoch of partition tb_core.1-0 to 0 since the associated topicId changed from null to fwMO8nYKRQGjNtbuUtspfQ
2023-07-18 11:40:47,193 [kafka-consumer-stats-6-thread-1] INFO  org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-stats-loader-client, groupId=consumer-stats-loader-client-group] Resetting the last seen epoch of partition tb_core.5-0 to 0 since the associated topicId changed from null to YMBDSCF2R7SrrgxblL5p7A
2023-07-18 11:40:47,193 [kafka-consumer-stats-6-thread-1] INFO  org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-stats-loader-client, groupId=consumer-stats-loader-client-group] Resetting the last seen epoch of partition tb_core.7-0 to 0 since the associated topicId changed from null to zR53ZGknRoWhibRlgVM3cQ
....

Full tb-node log:
tb-node-0-log.txt

Environment

  • Operating System: Ubuntu Server 22.04
  • Kubernetes: k3s cluster v1.27.3+k3s1
  • tb-node version: 3.4.2
  • Kafka version: wurstmeister/kafka:2.13-2.8.1 (I had the same problem with wurstmeister/kafka:2.12-2.2.1)
  • Cluster nodes: 1 master node, 3 worker nodes

Additional Information

Transport nodes don't seem to have the same behaviour at the start, but sometimes they also start spamming these kinds of messages for some topics after a couple of weeks or months of deployment.

aistisdev (Author)

It seems like it's possible to control how often this information is printed via this tb-node statefulset config:

        - name: TB_QUEUE_KAFKA_CONSUMER_STATS_MIN_PRINT_INTERVAL_MS
          value: "600000"

Though it's still not clear whether the topicId metadata changing in the consumer is due to some bug in the logging, or whether it's a real problem with the Kafka setup.
