CentOS 64-bit Hadoop fully distributed installation

#####
#### Installing a fully distributed Hadoop cluster
#####

#### Software and system versions:
####
hadoop-1.2.1
Java version 1.7.0_79
CentOS 64-bit

#### Preparation
####
Under /home/hadoop/: mkdir Cloud
Put the Java and Hadoop packages in /home/hadoop/Cloud

#### Configure static IPs
####
master	192.168.116.100
slave1	192.168.116.110
slave2	192.168.116.120

#### Set the machine names (all done as root)
####
su root
vim /etc/hosts
Below the existing entries, add (IP, then whitespace/tab, then name):
192.168.116.100	master
192.168.116.110	slave1
192.168.116.120	slave2
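The /etc/hosts edit above has to be repeated on all three nodes, so it is worth scripting idempotently. A minimal sketch; the `add_cluster_hosts` helper and its file-path parameter are my own illustration (the parameter lets you dry-run against a copy before touching the real /etc/hosts):

```shell
# Append the cluster name mappings to a hosts file, skipping any
# hostname that is already present, so re-running is harmless.
add_cluster_hosts() {
  hosts_file="$1"
  for entry in "192.168.116.100 master" \
               "192.168.116.110 slave1" \
               "192.168.116.120 slave2"; do
    name="${entry#* }"
    grep -qw "$name" "$hosts_file" || echo "$entry" >> "$hosts_file"
  done
}

# Real use on each node (as root): add_cluster_hosts /etc/hosts
```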

On master:
vim /etc/hostname
master
shutdown -r now    (reboot the machine)

On slave1:
vim /etc/hostname
slave1
shutdown -r now

On slave2:
vim /etc/hostname
slave2
shutdown -r now

#### Install OpenSSH
####
su root
yum install openssh
Then, as the hadoop user on every node (the keys must live under /home/hadoop/.ssh):
ssh-keygen -t rsa
Press Enter through all the prompts
Send slave1's and slave2's public keys to master:
scp /home/hadoop/.ssh/id_rsa.pub hadoop@master:~/.ssh/slave1.pub
scp /home/hadoop/.ssh/id_rsa.pub hadoop@master:~/.ssh/slave2.pub
On master: cd .ssh/
cat id_rsa.pub >> authorized_keys
cat slave1.pub >> authorized_keys
cat slave2.pub >> authorized_keys
Send the combined key file back to slave1 and slave2:
scp authorized_keys hadoop@slave1:~/.ssh/
scp authorized_keys hadoop@slave2:~/.ssh/

ssh slave1
ssh slave2
ssh master
Answer yes when prompted
Passwordless SSH login is now configured
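Once the keys are distributed, a short loop confirms every node really accepts a non-interactive login. A sketch only: the `check_nodes` helper is my own, and the connect command is a parameter (in real use it would be `ssh -o BatchMode=yes`, which fails instead of prompting when a node still wants a password):

```shell
# Try a non-interactive command on each node; a node that still
# prompts for a password shows up as FAILED instead of hanging.
check_nodes() {
  connect="$1"; shift
  for h in "$@"; do
    if $connect "$h" true >/dev/null 2>&1; then
      echo "ok: $h"
    else
      echo "FAILED: $h"
      return 1
    fi
  done
}

# Real use: check_nodes "ssh -o BatchMode=yes" master slave1 slave2
```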


#### Set JAVA_HOME and HADOOP_HOME
####
su root
vim /etc/profile
Add:
export JAVA_HOME=/home/hadoop/Cloud/jdk1.7.0_79
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/home/hadoop/Cloud/hadoop-1.2.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Then source /etc/profile
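After sourcing the profile, it is worth confirming the variables actually resolved before going on. A tiny sketch; `check_env` is my own illustrative helper, not part of the original steps:

```shell
# Report any named environment variable that is empty or unset;
# returns non-zero if anything is missing.
check_env() {
  status=0
  for v in "$@"; do
    eval "val=\${$v}"
    if [ -z "$val" ]; then
      echo "unset: $v"
      status=1
    fi
  done
  return $status
}

# Real use after `source /etc/profile`:
#   check_env JAVA_HOME HADOOP_HOME && java -version && hadoop version
```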



#### Configure the Hadoop files
####
Under /home/hadoop/Cloud/hadoop-1.2.1/conf:
vim masters and enter:
master

vim slaves and enter:
master
slave1
slave2

vim hadoop-env.sh and add:
export JAVA_HOME=/home/hadoop/Cloud/jdk1.7.0_79
export HADOOP_HOME_WARN_SUPPRESS="TRUE"
Then source hadoop-env.sh

vim core-site.xml and enter:
###################################core
<configuration>
  <property>
    <name>io.native.lib.available</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
    <final>true</final>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/Cloud/workspace/temp</value>
  </property>
</configuration>
############################core

vim hdfs-site.xml and enter:
##############################hdfs
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/Cloud/workspace/hdfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/Cloud/workspace/hdfs/data</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
##################################hdfs

vim mapred-site.xml and enter:

####################################mapred
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>
######################################mapred

The Hadoop configuration is now complete
Send hadoop to slave1 and slave2:
scp -r hadoop-1.2.1 hadoop@slave1:~/Cloud/
scp -r hadoop-1.2.1 hadoop@slave2:~/Cloud/
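The paths referenced in core-site.xml and hdfs-site.xml under /home/hadoop/Cloud/workspace are not necessarily created for you, so pre-creating them on every node avoids surprises at format and startup time. A sketch under that assumption; `make_workspace` is my own helper, with the base path as a parameter:

```shell
# Create the temp/name/data directories the XML configs point at.
# Run on master, slave1 and slave2.
make_workspace() {
  base="$1"
  mkdir -p "$base/workspace/temp" \
           "$base/workspace/hdfs/name" \
           "$base/workspace/hdfs/data"
}

# Real use: make_workspace /home/hadoop/Cloud
```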

######## Now Hadoop can be started
########
First format the namenode:
hadoop namenode -format    (since hadoop-env.sh and the system environment were set up earlier, this can be run from any directory)
If the log shows no errors, continue:
start-all.sh
Then, if everything came up, you will see:
[hadoop@master ~]$ jps
8330 JobTracker
8452 TaskTracker
8246 SecondaryNameNode
8125 DataNode
8000 NameNode
8598 Jps
[hadoop@master ~]$ ssh slave1
Last login: Thu Jan 12 07:08:06 2017 from master
[hadoop@slave1 ~]$ jps
3885 DataNode
3970 TaskTracker
4078 Jps
[hadoop@slave1 ~]$ ssh slave2
Last login: Thu Jan 12 07:20:45 2017 from master
[hadoop@slave2 ~]$ jps
2853 TaskTracker
2771 DataNode
2960 Jps
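Eyeballing jps output on three machines is easy to get wrong, so the check above can be automated by diffing the output against the daemon list expected on each node. A sketch; `missing_daemons` is my own helper:

```shell
# Print every expected daemon name that does not appear in the given
# jps output; prints nothing when all expected daemons are running.
missing_daemons() {
  out="$1"; shift
  for d in "$@"; do
    echo "$out" | grep -qw "$d" || echo "$d"
  done
}

# Real use on a slave:  missing_daemons "$(jps)" DataNode TaskTracker
# On master:            missing_daemons "$(jps)" NameNode SecondaryNameNode \
#                                       JobTracker DataNode TaskTracker
```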
At this point, the fully distributed Hadoop configuration is complete.
The Hadoop web UI ports are:
localhost:50030/ for the JobTracker
localhost:50070/ for the NameNode
localhost:50060/ for the TaskTracker

And so begins the one-way road into big data...
