Hadoop 2.6.0 Cluster Installation
Prerequisites
- JDK 1.8 (64-bit) or newer installed
- Passwordless SSH login set up between the cluster nodes (a sketch follows this list)
- The Hadoop tarball hadoop-2.6.0.tar.gz
- Operating system: Ubuntu 15, 64-bit
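For reference, a minimal passwordless-SSH setup run from s0; this is a sketch, assuming the hostnames s0–s4 from the next section already resolve and that ssh-copy-id is available:

```bash
# Generate a key pair without a passphrase, then push the public key
# to every node, including s0 itself (prompts for each password once).
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
for host in s0 s1 s2 s3 s4; do
    ssh-copy-id "$host"
done
```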
The cluster environment is as follows
- Node layout
s0 is the master (control) node:
192.168.0.110 s0
192.168.0.111 s1
192.168.0.112 s2
192.168.0.113 s3
192.168.0.114 s4
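If the cluster has no DNS, these mappings usually go into /etc/hosts on every node; a sketch, assuming root privileges and no conflicting entries:

```bash
# Append the cluster host mappings to /etc/hosts.
cat >> /etc/hosts <<'EOF'
192.168.0.110 s0
192.168.0.111 s1
192.168.0.112 s2
192.168.0.113 s3
192.168.0.114 s4
EOF
```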
Configuration notes
Do all of the configuration on s0 first, then sync it to every node in the cluster; the configuration, including the installation path, is identical on every node (a sync sketch follows).
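A sketch of that sync step, to be run on s0 once every section below is done; it assumes rsync is installed on all nodes (scp -r works as well):

```bash
# Mirror the fully configured installation to each worker,
# keeping the identical path on every node.
for host in s1 s2 s3 s4; do
    rsync -a /opt/modules/bigdata/hadoop/hadoop-2.6.0 "$host":/opt/modules/bigdata/hadoop/
done
```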
Installation steps
- Unpack the Hadoop tarball
tar -zxvf hadoop-2.6.0.tar.gz -C /opt/modules/bigdata/hadoop
That is, $HADOOP_HOME is /opt/modules/bigdata/hadoop/hadoop-2.6.0.
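Spelled out, with the target directory created first and a quick sanity check afterwards (a sketch; hadoop version needs a working JAVA_HOME, which the next step sets):

```bash
mkdir -p /opt/modules/bigdata/hadoop
tar -zxvf hadoop-2.6.0.tar.gz -C /opt/modules/bigdata/hadoop
/opt/modules/bigdata/hadoop/hadoop-2.6.0/bin/hadoop version
```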
Edit the configuration files
- Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh
Add:
export JAVA_HOME=/opt/modules/environment/jdk/jdk1.8.0_65
export HADOOP_HOME=/opt/modules/bigdata/hadoop/hadoop-2.6.0
- Edit $HADOOP_HOME/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://s0:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/modules/bigdata/hadoop/hadoop-2.6.0/tmp/hadoop-${user.name}</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>hadoop.native.lib</name>
        <value>true</value>
        <description>Should native hadoop libraries, if present, be used.</description>
    </property>
</configuration>
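A quick way to confirm the file is being picked up, assuming HADOOP_HOME is exported as above:

```bash
# Should print hdfs://s0:9000
$HADOOP_HOME/bin/hdfs getconf -confKey fs.defaultFS
```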
- Edit $HADOOP_HOME/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>s2:50090</value>
        <description>The secondary namenode http server address and port.</description>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>${hadoop.tmp.dir}/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>${hadoop.tmp.dir}/dfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/namesecondary</value>
        <description>Determines where on the local filesystem the DFS secondary
        name node should store the temporary images to merge. If this is a
        comma-delimited list of directories then the image is replicated in all
        of the directories for redundancy.</description>
    </property>
</configuration>
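The same check works here; the expected values follow from the file above:

```bash
$HADOOP_HOME/bin/hdfs getconf -confKey dfs.replication   # expect 2
$HADOOP_HOME/bin/hdfs getconf -secondaryNameNodes        # expect s2
```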
- Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
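A stock 2.6.0 tarball ships only a template for this file, so it is normally created first:

```bash
cp $HADOOP_HOME/etc/hadoop/mapred-site.xml.template \
   $HADOOP_HOME/etc/hadoop/mapred-site.xml
```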
- Edit $HADOOP_HOME/etc/hadoop/slaves, listing the worker hosts one per line:
s1
s2
s3
s4
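With the slaves file in place, the stock start scripts launch a DataNode and NodeManager on each listed host. A first-run sketch, executed on s0 after syncing everything to the other nodes (formatting erases any existing HDFS metadata):

```bash
$HADOOP_HOME/bin/hdfs namenode -format   # one-time, on the master only
$HADOOP_HOME/sbin/start-dfs.sh           # NameNode on s0, DataNodes on s1-s4
$HADOOP_HOME/sbin/start-yarn.sh          # ResourceManager and NodeManagers
jps                                      # verify the expected daemons are up
```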