HDFS DataNode fails to start on Windows 10

Problem description

I get an error when starting my Hadoop node with

start-dfs.cmd

My configuration:

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>

<property>
    <name>dfs.namenode.name.dir</name>
    <value>/hadoop/data/namenode</value>
</property>

<property>
    <name>dfs.datanode.data.dir</name>
    <value>C:/hadoop/data/datanode</value>
</property>

<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
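One detail worth noting in the configuration above: dfs.namenode.name.dir has no drive letter, while dfs.datanode.data.dir does. For reference, a consistent sketch of both properties is shown below (the C:/hadoop paths are taken from the post and are an assumption, not a verified fix):

```xml
<!-- Hypothetical, consistent hdfs-site.xml fragment; paths assume C:/hadoop as in the post -->
<property>
    <name>dfs.namenode.name.dir</name>
    <value>C:/hadoop/data/namenode</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>C:/hadoop/data/datanode</value>
</property>
```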

Error

    2021-05-25 21:20:34,259 INFO checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/C:/hadoop/data/datanode
    2021-05-25 21:20:34,299 WARN checker.StorageLocationChecker: Exception checking StorageLocation [DISK]file:/C:/hadoop/data/datanode
    java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat;
            at org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Native Method)
            at org.apache.hadoop.io.nativeio.NativeIO$POSIX.getStat(NativeIO.java:608)
            at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNativeIO(RawLocalFileSystem.java:823)
            at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:737)
            at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:705)
            at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
            at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
            at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
            at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:239)
            at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:52)
            at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:142)
            at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
            at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
            at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
            at java.lang.Thread.run(Thread.java:748)
    2021-05-25 21:20:34,307 ERROR datanode.DataNode: Exception in secureMain
    org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
            at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:231)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2806)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2721)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2763)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2907)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2931)
    2021-05-25 21:20:34,307 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volume failures tolerated: 0
    2021-05-25 21:20:34,315 INFO datanode.DataNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down DataNode at DESKTOP-TQPMKPM/172.18.160.1
    ************************************************************/

I have tried many different configurations, such as setting a minimum threshold for tolerated volume failures.

I have also tried countless seemingly similar solutions from Stack Overflow, but I cannot get it to work!
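For context, the UnsatisfiedLinkError in the log means the JVM could not bind NativeIO$POSIX.stat to a native implementation; on Windows, Hadoop's NativeIO expects that binding to come from a native library (hadoop.dll) visible on java.library.path. The sketch below reproduces the same failure mode; it assumes no library named "hadoop" is actually present on java.library.path, which is not confirmed by the post:

```java
public class NativeLinkDemo {
    public static void main(String[] args) {
        try {
            // Ask the JVM to bind a native library by name, similar to what
            // Hadoop's NativeIO does for its native support library.
            System.loadLibrary("hadoop");
            System.out.println("native library loaded");
        } catch (UnsatisfiedLinkError e) {
            // Same error class as in the DataNode log above.
            System.out.println("link failed: " + e.getMessage());
        }
    }
}
```

Because UnsatisfiedLinkError is an Error rather than a checked exception, nothing in the DataNode's storage check recovers from it; the volume is simply marked failed, which then trips the "Too many failed volumes" check and shuts the DataNode down.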

Solution

No working solution for this problem has been found yet; the editor is still searching and collecting.

If you have found a good solution, please send it to the editor together with a link to this post.

Editor's email: dio#foxmail.com (replace # with @)