Problem description
Please help me! I can't find a solution anywhere.
I have been at this for several weeks trying to get Apache HBase running on Windows 10, but I can't. The HMaster starts fine. I installed XAMPP to get my localhost working, although beyond installing it I don't really know what I'm doing with it.
When I start hbase shell, I get the following error:
ERROR zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 4 attempts
2020-09-24 14:13:22,363 ERROR zookeeper.ZooKeeperWatcher: hconnection-0x62db38910x0, quorum=localhost:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
When I start hadoop or hdfs, the master starts fine. But when I enter any further command, for example hadoop -version or hdfs -namenode, I get the following error:
Fehler: Hauptklasse NG konnte nicht gefunden oder geladen werden
(Error: could not find or load main class NG)
I have no idea where these files are supposed to reference any NG class; I can't find anything in the corresponding configuration files. I suspect it was linked to my username: my user folder used to be "Cris NG", and the space made Java unable to read the path. In any case, I changed my username and all directories so that there is no space anymore, but the error persists.
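On Windows, "could not find or load main class NG" typically means that an unquoted path containing a space (here the old "Cris NG" user folder) was split on the space, so the token after it ("NG") was handed to java as the class name. Renaming the folder does not update environment variables that still hold the old value, so it may be worth scanning them. A minimal sketch (the variable names checked are only the usual suspects, not something the question confirms):

```python
import os

def paths_with_spaces(names=("JAVA_HOME", "HADOOP_HOME", "HBASE_HOME")):
    """Return (variable, entry) pairs whose path entries contain a space."""
    hits = []
    for name in names:
        for part in os.environ.get(name, "").split(os.pathsep):
            if " " in part:
                hits.append((name, part))
    return hits

# Any hit here is a candidate for the "main class NG" symptom:
# an unquoted space splits the java command line at that point.
print(paths_with_spaces())
```

If a hit shows up, pointing the variable at a space-free path (or re-quoting it in hadoop-env.cmd) is the usual fix.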
When I start zkCli, I get the following error:
[myid:localhost:2181] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1156] - SASL config status: Will not attempt to authenticate using SASL (unknown error)
[myid:localhost:2181] - WARN [main-SendThread(localhost:2181):ClientCnxn$SendThread@1272] - Session 0x0 for server localhost/0:0:0:0:0:0:0:1:2181, closing socket connection. Attempting reconnect except it is a SessionExpiredException.
java.net.ConnectException: Connection refused: no further information
I think something is wrong with the port or server settings, but I don't know how to configure them correctly. I have tried several options I read about online, but none of them changed the error.
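A "Connection refused" on localhost:2181 almost always means no ZooKeeper process is actually listening on that port, rather than a subtle misconfiguration. Before editing any config, it is quick to check the port directly; a minimal sketch (host and port taken from the quorum settings in the question):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 2181 is ZooKeeper's default client port; zkCli's "Connection refused"
# suggests this will print False, i.e. nothing is listening there yet.
print("zookeeper on localhost:2181:", port_open("localhost", 2181))
```

If this prints False while HMaster claims to be up, the ZooKeeper that HBase expects was never started (or is bound to a different port).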
This is what my hbase-site.xml configuration looks like:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:/home/hadoop/HBase/HFiles</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
    <description>Property from ZooKeeper's config zoo.cfg.
    The port at which the clients will connect.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
    <description>Comma separated list of servers in the ZooKeeper Quorum.
    For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
    By default this is set to localhost for local and pseudo-distributed modes
    of operation. For a fully-distributed setup, this should be set to a full
    list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
    this is the list of servers which we will start/stop ZooKeeper on.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/zookeeper</value>
    <description>Property from ZooKeeper's config zoo.cfg.
    The directory where the snapshot is stored.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/zookeeper</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>false</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
</configuration>
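For what it's worth, this file defines hbase.rootdir twice (file:/home/... and hdfs://localhost:9000/...) and hbase.zookeeper.property.dataDir twice; when a property is repeated, the last occurrence wins, so the earlier values are dead weight. Moreover, /home/hadoop and /usr/local are Unix paths that do not exist on a Windows machine. A minimal standalone-mode sketch for Windows might look like the following (the C:/ paths are placeholders, not paths from the question; with hbase.cluster.distributed=false, HBase manages its own embedded ZooKeeper, so no separate zoo.cfg quorum is required):

```xml
<configuration>
  <!-- hedged sketch: adjust the placeholder paths to your actual layout -->
  <property>
    <name>hbase.rootdir</name>
    <value>file:///C:/software/hbase/data</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>C:/software/hbase/zookeeper</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>false</value>
  </property>
</configuration>
```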
This is what my zoo.cfg file looks like:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=C:\software\hbase\bin\zookeeper\dataDir
# the port at which the clients will connect
clientPort=2181
server.1=localhost:80:3888
server.2=localhost:80:3888
server.3=localhost:80:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
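Two things stand out in this zoo.cfg. First, server.1, server.2 and server.3 all resolve to localhost:80:3888, so the three supposed quorum members collide on the same host and ports (and 80 is not a usable peer port here), which prevents the ensemble from ever forming. Second, for a single local instance no server.N lines are needed at all; ZooKeeper then runs standalone. A hedged single-node sketch (dataDir is a placeholder, not a path confirmed by the question):

```properties
# minimal standalone zoo.cfg sketch -- adjust dataDir to your layout
tickTime=2000
dataDir=C:/software/zookeeper/data
clientPort=2181
# no server.N lines: a standalone ZooKeeper needs none; and if HBase runs
# with hbase.cluster.distributed=false it starts its own embedded
# ZooKeeper, so a separate ZooKeeper install may be unnecessary entirely
```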