
HDFS-17504. DN process should exit when BPServiceActor exit #6792

Open
wants to merge 1 commit into trunk

Conversation

zhuzilong2013
Contributor

Description of PR

Refer to HDFS-17504.
BPServiceActor is a very important thread. In a non-HA cluster, the exit of the BPServiceActor thread causes the DN process to exit; in an HA cluster, however, it does not.
I found that HDFS-15651 causes the BPServiceActor thread to exit with its runningState set to RunningState.EXITED instead of RunningState.FAILED, which can be confusing during troubleshooting.
I believe the DN process should exit when the BPServiceActor's flag is set to RunningState.FAILED, because at that point the DN cannot recover and re-establish a heartbeat connection with the ANN on its own.
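The policy being proposed can be sketched as follows. This is a minimal, self-contained simulation for illustration only; the class and method names (`BPServiceActorExitPolicy`, `shouldTerminateDatanode`) are hypothetical and not part of the actual Hadoop source.

```java
// Sketch of the policy this PR argues for: if a BPServiceActor ends in
// RunningState.FAILED (it cannot re-establish a heartbeat with the
// NameNode on its own), the DataNode process should shut down.
// Hypothetical names; not the actual Hadoop implementation.
public class BPServiceActorExitPolicy {

    // Mirrors the states discussed in HDFS-15651 and this PR.
    public enum RunningState { CONNECTING, RUNNING, INIT_FAILED, FAILED, EXITED }

    // FAILED means the actor gave up permanently; under the proposed
    // policy only that state should take the whole DN process down.
    public static boolean shouldTerminateDatanode(RunningState state) {
        return state == RunningState.FAILED;
    }

    public static void main(String[] args) {
        System.out.println(shouldTerminateDatanode(RunningState.FAILED)); // true
        System.out.println(shouldTerminateDatanode(RunningState.EXITED)); // false
    }
}
```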

How was this patch tested?

For code changes:

  • Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
  • Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
  • If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
  • If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
_ Prechecks _
+1 💚 dupname 0m 00s No case conflicting files found.
+0 🆗 spotbugs 0m 00s spotbugs executables are not available.
+0 🆗 codespell 0m 00s codespell was not available.
+0 🆗 detsecrets 0m 01s detect-secrets was not available.
+1 💚 @author 0m 00s The patch does not contain any @author tags.
-1 ❌ test4tests 0m 00s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ trunk Compile Tests _
+1 💚 mvninstall 90m 48s trunk passed
+1 💚 compile 6m 16s trunk passed
+1 💚 checkstyle 4m 54s trunk passed
+1 💚 mvnsite 6m 52s trunk passed
+1 💚 javadoc 6m 23s trunk passed
+1 💚 shadedclient 152m 26s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+1 💚 mvninstall 4m 50s the patch passed
+1 💚 compile 3m 39s the patch passed
+1 💚 javac 3m 39s the patch passed
+1 💚 blanks 0m 01s The patch has no blanks issues.
+1 💚 checkstyle 2m 31s the patch passed
+1 💚 mvnsite 4m 34s the patch passed
+1 💚 javadoc 3m 34s the patch passed
+1 💚 shadedclient 160m 18s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 asflicense 5m 31s The patch does not generate ASF License warnings.
431m 38s
Subsystem Report/Notes
GITHUB PR #6792
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname MINGW64_NT-10.0-17763 eede48f9ec0f 3.4.10-87d57229.x86_64 2024-02-14 20:17 UTC x86_64 Msys
Build tool maven
Personality /c/hadoop/dev-support/bin/hadoop.sh
git revision trunk / 3de213f
Default Java Azul Systems, Inc.-1.8.0_332-b09
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6792/1/testReport/
modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6792/1/console
versions git=2.44.0.windows.1
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@zhuzilong2013
Contributor Author

Hi @Hexiaoqiao, could you please help review this PR when you are free? Thanks.

@Hexiaoqiao
Contributor

@zhuzilong2013 Thanks for your report and contribution! IMO, different BPServiceActors are independent of each other; if the DN process exits because of a single BPServiceActor issue, it will increase the number of dead DataNodes from the whole-cluster view, which I don't think is appropriate in a Federation architecture. On another note, maybe we could add a BPServiceActor count metric to monitor whether the BPServiceActors are working fine? Thanks again.

@vinayakumarb
Contributor

> @zhuzilong2013 Thanks for your report and contribution! IMO, different BPServiceActors are independent of each other; if the DN process exits because of a single BPServiceActor issue, it will increase the number of dead DataNodes from the whole-cluster view, which I don't think is appropriate in a Federation architecture. On another note, maybe we could add a BPServiceActor count metric to monitor whether the BPServiceActors are working fine? Thanks again.

+1
One BPServiceActor reports to one NameNode. In the HA case, if one of the NameNodes cannot be reached for some reason, the DN can continue to report to the available NameNode.

Moreover, if all BPServiceActors of a BPOfferService (i.e., the connections to all NameNodes belonging to the same namespace) have exited, the BPOfferService also shuts down.

When all such BPOfferServices have shut down (in the federation case there will be multiple), the DataNode automatically initiates its own shutdown.

Refer to the DataNode.join() method.
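The existing shutdown cascade described above can be sketched as a small simulation: each BPOfferService goes down once all of its actors (one per NameNode of that namespace) have exited, and the DataNode stops once every BPOfferService is gone. All names here (`ShutdownCascade`, `DataNodeSim`, etc.) are hypothetical illustrations, not Hadoop's actual classes.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative simulation of the shutdown cascade: actors exit ->
// BPOfferService shuts down -> DataNode shuts down (cf. DataNode.join()).
// Hypothetical names; not the actual Hadoop source.
public class ShutdownCascade {

    static class BPOfferService {
        int aliveActors;                    // actors still heartbeating
        BPOfferService(int actors) { this.aliveActors = actors; }
        void actorExited() { aliveActors--; }
        boolean isAlive() { return aliveActors > 0; }
    }

    static class DataNodeSim {
        final List<BPOfferService> services = new ArrayList<>();
        // DN keeps running while at least one namespace is still served.
        boolean shouldKeepRunning() {
            return services.stream().anyMatch(BPOfferService::isAlive);
        }
    }

    public static void main(String[] args) {
        // Federation with two namespaces, each HA (two NameNodes -> two actors).
        DataNodeSim dn = new DataNodeSim();
        BPOfferService ns1 = new BPOfferService(2);
        BPOfferService ns2 = new BPOfferService(2);
        dn.services.add(ns1);
        dn.services.add(ns2);

        ns1.actorExited();                          // one NN of ns1 unreachable
        System.out.println(dn.shouldKeepRunning()); // true: ns1 still has an actor

        ns1.actorExited();                          // ns1 fully gone
        System.out.println(dn.shouldKeepRunning()); // true: ns2 still alive

        ns2.actorExited();
        ns2.actorExited();                          // all services down
        System.out.println(dn.shouldKeepRunning()); // false: DN would shut down
    }
}
```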
