Problem description
I have installed a Hadoop 3.1.0 cluster on 4 Linux machines: hadoop1 (master), hadoop2, hadoop3, and hadoop4.
I ran start-dfs.sh and start-yarn.sh, but jps shows only the NameNode and DataNodes running. The SecondaryNameNode, the NodeManagers, and the ResourceManager all failed to start. I tried a few fixes; this is where I am now. How do I configure and start the SecondaryNameNode, the NodeManagers, and the ResourceManager?
The SecondaryNameNode log says
java.net.BindException: Port in use: hadoop1:9000
...
Caused by: java.net.BindException: Address already in use
...
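For context, the bind failure happens because core-site.xml points the NameNode RPC endpoint at the same host:port. The fragment below is reconstructed from the description in the solution section, not copied from the actual file, so the exact wording may differ:

```xml
<!-- core-site.xml: the NameNode already binds hadoop1:9000 via fs.defaultFS,
     so the SecondaryNameNode cannot reuse that port -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop1:9000</value>
</property>
```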
The NodeManager and ResourceManager logs say
2021-02-21 03:29:03,463 WARN org.eclipse.jetty.webapp.WebAppContext: Failed startup of context o.e.j.w.WebAppContext@51d719bc{/,file:///tmp/jetty-0.0.0.0-8042-node-_-any-8548809575065892553.dir/webapp/,UNAVAILABLE}{/node}
com.google.inject.ProvisionException: Unable to provision, see the following errors:
1) Error injecting constructor, java.lang.NoClassDefFoundError: javax/activation/DataSource
at org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver.<init>(JAXBContextResolver.java:52)
at org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer$NMWebApp.setup(WebServer.java:153)
while locating org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver
My hdfs-site.xml:
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop1:9000</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/app/hadoop/hadoop-3.1.0/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/app/hadoop/hadoop-3.1.0/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
yarn-site.xml:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop1:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop1:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop1:8088</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>1024</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>1</value>
</property>
workers:
hadoop1
hadoop2
hadoop3
hadoop4
/etc/hosts:
192.168.0.111 hadoop1
192.168.0.112 hadoop2
192.168.0.113 hadoop3
192.168.0.114 hadoop4
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Solution
I had installed JDK 15.0.2, which does not work with Hadoop 3.1.0: the javax.activation module was removed from the JDK in Java 11 (JEP 320), which is what causes the NoClassDefFoundError: javax/activation/DataSource. I installed JDK 8 and changed JAVA_HOME, and the NodeManagers and ResourceManager started fine.
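Concretely, the switch is done by pointing the Hadoop daemons at the Java 8 installation in hadoop-env.sh. The install path below is only an example; use wherever your JDK 8 actually lives:

```shell
# etc/hadoop/hadoop-env.sh — example path, adjust to your JDK 8 install
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
```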
As for the SecondaryNameNode, I had set hadoop1:9000 for both fs.defaultFS and dfs.namenode.secondary.http-address, which caused the port conflict. I changed the secondary address to port 9001 and everything works!
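The fixed property in hdfs-site.xml, with the secondary HTTP address moved off the NameNode's RPC port:

```xml
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>hadoop1:9001</value>
</property>
```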