Datanode not starting on Windows 10 for Hadoop 3.1.3

Problem description

I am trying to start the datanode and namenode for Hadoop 3.1.3 on Windows 10, and I have placed the required winutils.exe and hadoop.dll in the bin folder as well as in the System32 folder. But I am still getting the exception below from the datanode:


2021-04-09 13:45:24,464 INFO checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/c:/Sankha/Study/hadoop-3.1.3/hadoop-3.1.3/data/datanode
2021-04-09 13:45:24,691 WARN checker.StorageLocationChecker: Exception checking StorageLocation [DISK]file:/c:/Sankha/Study/hadoop-3.1.3/hadoop-3.1.3/data/datanode
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat;
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.getStat(NativeIO.java:455)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNativeIO(RawLocalFileSystem.java:796)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:710)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:678)
        at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
        at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
        at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:239)
        at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:52)
        at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:142)
        at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
        at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
        at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
2021-04-09 13:45:24,706 ERROR datanode.DataNode: Exception in secureMain
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
        at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:231)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2799)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2714)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2756)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2900)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2924)
2021-04-09 13:45:24,765 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volume failures tolerated: 0
2021-04-09 13:45:24,816 INFO datanode.DataNode: SHUTDOWN_MSG:

I know there are other threads on this same issue. I went through them and tried everything suggested:

running as administrator, and getting the correct winutils.exe and hadoop.dll and placing them in hadoop/bin as well as in the Windows System32 folder. But nothing worked.
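Since the trace shows one specific entry point (NativeIO$POSIX.stat) failing to link rather than hadoop.dll failing to load at all, the DLL may be getting found but built against a different Hadoop version. A minimal sketch to probe that, assuming hadoop-common 3.1.3 is on the classpath (the class name NativeCheck is just illustrative, not part of Hadoop):

NativeCheck.java
-----------------
import org.apache.hadoop.io.nativeio.NativeIO;
import org.apache.hadoop.util.NativeCodeLoader;

// Probes the native layer that the DataNode's disk check depends on.
public class NativeCheck {
    public static void main(String[] args) {
        // True only if hadoop.dll was found on java.library.path and loaded.
        System.out.println("native lib loaded:  " + NativeCodeLoader.isNativeCodeLoaded());
        // True only if Hadoop's JNI extensions initialized; a version-mismatched
        // hadoop.dll can pass the load step yet still be missing individual
        // entry points, such as the NativeIO$POSIX.stat from the trace above.
        System.out.println("NativeIO available: " + NativeIO.isAvailable());
    }
}

The hadoop checknative command reports similar information from the shell.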

Below are my configuration XML files:

yarn-site.xml
--------------------
<configuration>

<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>

mapred-site.xml
-------------------
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

hdfs-site.xml
------------------
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/Study/hadoop-3.1.3/hadoop-3.1.3/data/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/Study/hadoop-3.1.3/hadoop-3.1.3/data/datanode</value>
    </property> 
    <property>
        <name>dfs.datanode.failed.volumes.tolerated</name>
        <value>0</value>
    </property> 
</configuration>
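One thing worth noting: the directories are configured as /Study/..., yet the log above shows the check running against c:/Sankha/Study/..., so the paths are evidently being resolved relative to something other than the drive root. Writing them as fully qualified file: URIs removes that ambiguity; the sketch below just reuses the path reported in the log:

hdfs-site.xml (fully qualified paths, sketch)
------------------
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///C:/Sankha/Study/hadoop-3.1.3/hadoop-3.1.3/data/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///C:/Sankha/Study/hadoop-3.1.3/hadoop-3.1.3/data/datanode</value>
    </property>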

core-site.xml
-----------------
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
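For completeness, the standard Windows startup sequence for these two daemons (assuming %HADOOP_HOME%\bin and %HADOOP_HOME%\sbin are on PATH) is:

startup commands
-----------------
:: One-time only: initialize the NameNode metadata directory.
hdfs namenode -format
:: Start the NameNode and DataNode daemons.
start-dfs.cmd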
