Java HDFS write error on a Hadoop Docker cluster: there are 3 datanode(s) running and 3 node(s) are excluded in this operation

Problem Description

I am running a Hadoop cluster on Docker, and when I try to write to HDFS from Java I get the error below. I don't know what is causing it:

Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/javadeveloperzone/javareadwriteexample/read_write_hdfs_example.txt could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2219)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2789)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:574)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:999)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2915)

  1. This is the code I use to write to HDFS. Monitoring from http://localhost:9870/explorer.html#/ while the code runs, I can see that the directory and file are created, but the file size stays 0:

 public static void writeFiletoHDFS() throws IOException {
        Configuration configuration = new Configuration();
        configuration.set("fs.defaultFS", "hdfs://localhost:9000");
        FileSystem fileSystem = FileSystem.get(configuration);
        // Create a path
        String fileName = "read_write_hdfs_example.txt";
        Path hdfsWritePath = new Path("/user/javadeveloperzone/javareadwriteexample/" + fileName);
        FSDataOutputStream fsDataOutputStream = fileSystem.create(hdfsWritePath, true);
        BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(fsDataOutputStream, StandardCharsets.UTF_8));
        bufferedWriter.write("Java API to write data in HDFS");
        bufferedWriter.newLine();
        bufferedWriter.close();
        fileSystem.close();
    }

Repository: https://github.com/nsquare-jdzone/hadoop-examples/tree/master/ReadWriteHDFSExample, from this tutorial: https://javadeveloperzone.com/hadoop/java-read-write-files-hdfs-example/

  2. The Docker cluster was set up following this tutorial: https://clubhouse.io/developer-how-to/how-to-set-up-a-hadoop-cluster-in-docker/

To summarize, that tutorial uses the Big Data Europe repository (https://github.com/big-data-europe/docker-hadoop) and adds its own docker-compose.yml so that the cluster runs multiple datanodes instead of a single one. The tutorial's version is behind the current Big Data Europe repository, so I changed my docker-compose.yml to follow the current version.
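For what it's worth, the buffered-write pattern used in writeFiletoHDFS() is the standard java.io idiom and works against any OutputStream. The sketch below (JDK only, no Hadoop dependency; the class name and the temporary file are illustrative, with a local file standing in for FSDataOutputStream) exercises the same UTF-8 BufferedWriter wrapping and shows the bytes reach disk, which is consistent with the 0-byte HDFS file being a client-to-datanode connectivity problem rather than a bug in the write code itself:

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class LocalWriteSketch {

    // Same wrapping as writeFiletoHDFS(): raw OutputStream -> UTF-8
    // OutputStreamWriter -> BufferedWriter, then write one line.
    // Returns the file contents read back, so the write can be checked.
    static String writeAndReadBack(Path target) throws IOException {
        try (BufferedWriter bufferedWriter = new BufferedWriter(
                new OutputStreamWriter(Files.newOutputStream(target), StandardCharsets.UTF_8))) {
            bufferedWriter.write("Java API to write data in HDFS");
            bufferedWriter.newLine();
        }
        return new String(Files.readAllBytes(target), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("read_write_hdfs_example", ".txt");
        // Prints the line that was written, proving the stream was flushed to disk.
        System.out.print(writeAndReadBack(tmp));
        Files.delete(tmp);
    }
}
```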

Any help would be greatly appreciated.

Solution

No working solution has been found for this problem yet.
