Unknown protocol: org.apache.hadoop.ha.HAServiceProtocol
I encountered this error in a YARN HA setup:
yarn rmadmin -getServiceState rm1
active
yarn rmadmin -getServiceState rm2
Operation failed: Unknown protocol: org.apache.hadoop.ha.HAServiceProtocol
I traced the error to the snippet that throws it, but it didn't tell me much on its own.
// hadoop-rel-release-3.4.1\hadoop-common-project\hadoop-common\src\main\java\org\apache\hadoop\ipc\ProtobufRpcEngine2.java
VerProtocolImpl highest = server.getHighestSupportedProtocol(
    RPC.RpcKind.RPC_PROTOCOL_BUFFER, protoName);
if (highest == null) {
  throw new RpcNoSuchProtocolException(
      "Unknown protocol: " + protoName);
}
The interface HAServiceProtocol in the package org.apache.hadoop.ha should be resolvable from the classpath, so the problem I was facing was unlikely to be caused by a missing dependency.
# export HADOOP_HOME=/data/hadoop-3.4.1
hadoop classpath # /data/hadoop-3.4.1/share/hadoop/hdfs/*
ls share/hadoop/hdfs/hadoop-hdfs-3.4.1.jar
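To rule out a missing class more directly, you can grep a jar listing for the class file. The helper below is a small sketch; the jar path is an assumption about a standard Hadoop 3.4.1 layout (HAServiceProtocol ships in hadoop-common rather than hadoop-hdfs):

```shell
# Sketch: check whether a class appears in a jar listing.
# On a real node you would feed it the output of, e.g.:
#   jar tf "$HADOOP_HOME/share/hadoop/common/hadoop-common-3.4.1.jar"
has_class() {  # usage: <jar listing on stdin> | has_class <class file path>
  grep -qF "$1" && echo present || echo missing
}

# Simulated listing for illustration (one entry from hadoop-common):
printf 'org/apache/hadoop/ha/HAServiceProtocol.class\n' \
  | has_class org/apache/hadoop/ha/HAServiceProtocol.class
```

If this prints `missing` against the real jar, the classpath really is the problem; in my case it printed `present`, which pushed me to look elsewhere.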
Then I noticed that JAVA_HOME on the failing node was slightly different from the other two nodes. It was set to /usr/local/jdk1.8.0_333 instead of /usr/local/jdk1.8.0_361 ...
After aligning JAVA_HOME across the nodes, yarn rmadmin -getServiceState worked as expected.
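A quick way to catch this class of drift is to compare the setting across all nodes and assert the values are identical. This is a sketch with hypothetical values; on a cluster you would gather each value with something like `ssh "$node" 'echo $JAVA_HOME'`:

```shell
# Sketch: flag configuration drift by checking that all values match.
all_same() {
  [ "$(printf '%s\n' "$@" | sort -u | wc -l)" -eq 1 ]
}

# Hypothetical values collected from three nodes:
if all_same /usr/local/jdk1.8.0_361 /usr/local/jdk1.8.0_361 /usr/local/jdk1.8.0_333; then
  echo "JAVA_HOME consistent"
else
  echo "JAVA_HOME differs across nodes"
fi
```

The same pattern works for any per-node setting you suspect (HADOOP_HOME, PATH entries, JDK patch level).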
Operation category READ is not supported in state standby
I was trying to submit a job using yarn jar when an error occurred. At first I thought it was related to the YARN ResourceManager, but then I noticed the detail in the error message: "at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation".
The reason for this error is that Hadoop resolves the NameNode URI from the fs.defaultFS property in core-site.xml. In my case, the URI in the configuration file still pointed to the old NameNode, which was no longer available.