Could not transfer artifact io.confluent:kafka-connect-storage-common-parent:pom:6.0.0-SNAPSHOT from/to confluent (${confluent.maven.repo})

Problem Description

This is my first attempt at Kafka Connect, and I want to connect SAP S/4HANA to Hive. I created the SAP S/4 source Kafka connector using:

https://github.com/SAP/kafka-connect-sap

However, I was unable to create the HDFS sink connector. The problem is related to the pom file.

I have already tried mvn clean package, but I get this error:

[INFO] Scanning for projects...
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[FATAL] Non-resolvable parent POM for io.confluent:kafka-connect-hdfs:[unknown-version]: Could not transfer artifact io.confluent:kafka-connect-storage-common-parent:pom:6.0.0-SNAPSHOT from/to confluent (${confluent.maven.repo}): Cannot access ${confluent.maven.repo} with type default using the available connector factories: BasicRepositoryConnectorFactory and 'parent.relativePath' points at wrong local POM @ line 22, column 13
 @
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR]   The project io.confluent:kafka-connect-hdfs:[unknown-version] (/home/dev_dice_sa/test-dicestreaming/config/kafka-connect-hdfs/pom.xml) has 1 error
[ERROR]     Non-resolvable parent POM for io.confluent:kafka-connect-hdfs:[unknown-version]: Could not transfer artifact io.confluent:kafka-connect-storage-common-parent:pom:6.0.0-SNAPSHOT from/to confluent (${confluent.maven.repo}): Cannot access ${confluent.maven.repo} with type default using the available connector factories: BasicRepositoryConnectorFactory and 'parent.relativePath' points at wrong local POM @ line 22, column 13: Cannot access ${confluent.maven.repo} using the registered transporter factories: WagonTransporterFactory: Unsupported transport protocol -> [Help 2]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
[ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/UnresolvableModelException

Here is my pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!--
  ~ Copyright 2018 Confluent Inc.
  ~
  ~ Licensed under the Confluent Community License (the "License"); you may not use
  ~ this file except in compliance with the License.  You may obtain a copy of the
  ~ License at
  ~
  ~ http://www.confluent.io/confluent-community-license
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
  ~ WARRANTIES OF ANY KIND, either express or implied.  See the License for the
  ~ specific language governing permissions and limitations under the License.
  -->
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>io.confluent</groupId>
        <artifactId>kafka-connect-storage-common-parent</artifactId>
        <version>6.0.0-SNAPSHOT</version>
    </parent>

    <artifactId>kafka-connect-hdfs</artifactId>
    <packaging>jar</packaging>
    <name>kafka-connect-hdfs</name>
    <organization>
        <name>Confluent, Inc.</name>
        <url>http://confluent.io</url>
    </organization>
    <url>http://confluent.io</url>
    <description>
        A Kafka Connect HDFS connector for copying data between Kafka and Hadoop HDFS.
    </description>

    <licenses>
        <license>
            <name>Confluent Community License</name>
            <url>http://www.confluent.io/confluent-community-license</url>
            <distribution>repo</distribution>
        </license>
    </licenses>

    <scm>
        <connection>scm:git:git://github.com/confluentinc/kafka-connect-hdfs.git</connection>
        <developerConnection>scm:git:git@github.com:confluentinc/kafka-connect-hdfs.git</developerConnection>
        <url>https://github.com/confluentinc/kafka-connect-hdfs</url>
        <tag>HEAD</tag>
    </scm>

    <properties>
        <confluent.maven.repo>http://packages.confluent.io/maven/</confluent.maven.repo>
        <apacheds-jdbm1.version>2.0.0-M2</apacheds-jdbm1.version>
        <kafka.connect.maven.plugin.version>0.11.1</kafka.connect.maven.plugin.version>
    </properties>

    <repositories>
        <repository>
            <id>confluent</id>
            <name>Confluent</name>
            <url>${confluent.maven.repo}</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>connect-api</artifactId>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>connect-json</artifactId>
            <version>${kafka.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>io.confluent</groupId>
            <artifactId>kafka-connect-storage-common</artifactId>
            <version>${confluent.version}</version>
        </dependency>
        <dependency>
            <groupId>io.confluent</groupId>
            <artifactId>kafka-connect-storage-core</artifactId>
            <version>${confluent.version}</version>
        </dependency>
        <dependency>
            <groupId>io.confluent</groupId>
            <artifactId>kafka-connect-storage-format</artifactId>
            <version>${confluent.version}</version>
        </dependency>
        <dependency>
            <groupId>io.confluent</groupId>
            <artifactId>kafka-connect-storage-partitioner</artifactId>
            <version>${confluent.version}</version>
        </dependency>
        <dependency>
            <groupId>io.confluent</groupId>
            <artifactId>kafka-connect-storage-wal</artifactId>
            <version>${confluent.version}</version>
        </dependency>
        <dependency>
            <groupId>io.confluent</groupId>
            <artifactId>kafka-connect-storage-hive</artifactId>
            <version>${confluent.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>io.netty</groupId>
                    <artifactId>netty</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>io.netty</groupId>
                    <artifactId>netty-all</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>com.github.spotbugs</groupId>
            <artifactId>spotbugs-annotations</artifactId>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-minicluster</artifactId>
            <version>${hadoop.version}</version>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>io.netty</groupId>
                    <artifactId>netty</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>io.netty</groupId>
                    <artifactId>netty-all</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-minikdc</artifactId>
            <version>${hadoop.version}</version>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <!-- exclude this from hadoop minikdc as the minikdc depends on
                    the apacheds-jdbm1 bundle, which is not available in maven central-->
                    <groupId>org.apache.directory.jdbm</groupId>
                    <artifactId>apacheds-jdbm1</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <!-- add this to satisfy the dependency requirement of apacheds-jdbm1-->
            <groupId>org.apache.directory.jdbm</groupId>
            <artifactId>apacheds-jdbm1</artifactId>
            <version>${apacheds-jdbm1.version}</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>io.confluent</groupId>
                <version>${kafka.connect.maven.plugin.version}</version>
                <artifactId>kafka-connect-maven-plugin</artifactId>
                <executions>
                    <execution>
                        <goals>
                            <goal>kafka-connect</goal>
                        </goals>
                        <configuration>
                            <title>Kafka Connect HDFS</title>
                            <documentationUrl>https://docs.confluent.io/${project.version}/connect/connect-hdfs/docs/index.html</documentationUrl>
                            <description>
                                The HDFS connector allows you to export data from Kafka topics to HDFS files in a variety of formats and integrates with Hive to make data immediately available for querying with HiveQL.

                                The connector periodically polls data from Kafka and writes them to HDFS. The data from each Kafka topic is partitioned by the provided partitioner and divided into chunks. Each chunk of data is represented as an HDFS file with topic, Kafka partition, start and end offsets of this data chunk in the filename. If no partitioner is specified in the configuration, the default partitioner which preserves the Kafka partitioning is used. The size of each data chunk is determined by the number of records written to HDFS, the time written to HDFS and schema compatibility.

                                The HDFS connector integrates with Hive and when it is enabled, the connector automatically creates an external Hive partitioned table for each Kafka topic and updates the table according to the available data in HDFS.
                            </description>

                            <supportProviderName>Confluent, Inc.</supportProviderName>
                            <supportSummary>Confluent supports the HDFS sink connector alongside community members as part of its Confluent Platform offering.</supportSummary>
                            <supportUrl>https://docs.confluent.io/current/</supportUrl>
                            <supportLogo>logos/confluent.png</supportLogo>

                            <ownerUsername>confluentinc</ownerUsername>
                            <ownerType>organization</ownerType>
                            <ownerName>Confluent, Inc.</ownerName>
                            <ownerUrl>https://confluent.io/</ownerUrl>
                            <ownerLogo>logos/confluent.png</ownerLogo>

                            <dockerNamespace>confluentinc</dockerNamespace>
                            <dockerName>cp-kafka-connect</dockerName>
                            <dockerTag>${project.version}</dockerTag>

                            <componentTypes>
                                <componentType>sink</componentType>
                            </componentTypes>

                            <tags>
                                <tag>hadoop</tag>
                                <tag>hdfs</tag>
                                <tag>hive</tag>
                            </tags>

                            <confluentControlCenterIntegration>true</confluentControlCenterIntegration>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <compilerArgs>
                        <arg>-Xlint:all,-deprecation,-processing</arg>
                        <arg>-Werror</arg>
                    </compilerArgs>
                    <showWarnings>true</showWarnings>
                    <showDeprecation>false</showDeprecation>
                </configuration>
            </plugin>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <descriptors>
                        <descriptor>src/assembly/development.xml</descriptor>
                        <descriptor>src/assembly/package.xml</descriptor>
                    </descriptors>
                    <attach>false</attach>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <configuration>
                    <reuseForks>false</reuseForks>
                    <forkCount>1</forkCount>
                    <systemPropertyVariables>
                        <datanucleus.schema.autoCreateAll>true</datanucleus.schema.autoCreateAll>
                    </systemPropertyVariables>
                    <additionalClasspathElements>
                        <!--puts hive-site.xml on classpath - w/o HMS tables are not created-->
                        <additionalClasspathElement>src/test/resources/conf</additionalClasspathElement>
                    </additionalClasspathElements>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-checkstyle-plugin</artifactId>
                <executions>
                    <execution>
                        <id>validate</id>
                        <phase>validate</phase>
                        <configuration>
                            <suppressionsLocation>checkstyle/suppressions.xml</suppressionsLocation>
                        </configuration>
                        <goals>
                            <goal>check</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <artifactId>maven-clean-plugin</artifactId>
                <version>3.0.0</version>
                <configuration>
                    <filesets>
                        <fileset>
                            <directory>.</directory>
                            <includes>
                                <include>derby.log</include>
                                <include>metastore_db/</include>
                            </includes>
                        </fileset>
                    </filesets>
                </configuration>
            </plugin>
        </plugins>

        <resources>
            <resource>
                <directory>src/main/resources</directory>
                <filtering>true</filtering>
            </resource>
        </resources>
    </build>

    <profiles>
        <profile>
            <id>standalone</id>
            <build>
                <plugins>
                    <plugin>
                        <artifactId>maven-assembly-plugin</artifactId>
                        <configuration>
                            <descriptors>
                                <descriptor>src/assembly/standalone.xml</descriptor>
                            </descriptors>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>
    </profiles>
</project>

I have already looked at this link:

https://github.com/confluentinc/kafka-connect-hdfs/wiki/FAQ

However, I was not able to follow it. Please help me.

Solution

I suggest that you download the existing Confluent Platform, which already includes the HDFS connector.
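
For example, if you already have a Confluent Platform installation, the bundled Confluent Hub client can install the connector directly. This is a minimal sketch; the "latest" version selector is just one choice, and you can pin a specific version instead:

    # Install the HDFS sink connector from Confluent Hub into an
    # existing Confluent Platform installation.
    confluent-hub install confluentinc/kafka-connect-hdfs:latest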

Otherwise, check out a release tag instead of just the master branch when building the project; the master branch's pom points at the 6.0.0-SNAPSHOT parent, which Maven cannot resolve here, whereas a release tag pins a published parent version (see the sketch below).
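
A minimal sketch of that approach; the v5.5.1 tag below is an assumption, so list the tags and pick whichever matches your Confluent Platform version:

    # Build from a release tag instead of master so the parent POM
    # resolves to a published (non-SNAPSHOT) version.
    git clone https://github.com/confluentinc/kafka-connect-hdfs.git
    cd kafka-connect-hdfs
    git tag --list          # show the available release tags
    git checkout v5.5.1     # assumed tag; substitute one from the list above
    mvn clean package -DskipTests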
