Hadoop 2.3.0: single-node pseudo-distributed and multi-node distributed configuration
2020-11-09 07:33:45 Editor: 小采
Setup: a MacBook running VirtualBox 4.3.6, with Ubuntu 13.10 installed in VirtualBox. For the multi-node environment, configure one machine fully, then clone it to get two more, for three machines in total.

1. Configure the Environment

Bash: sudo apt-get install -y openjdk-7-jdk openssh-server

sudo addgroup hadoop

sudo adduser --ingroup hadoop hadoop # create password

sudo visudo

hadoop ALL=(ALL) ALL # hadoop user can use sudo

su - hadoop # need password

ssh-keygen -t rsa -P "" # Enter file (/home/hadoop/.ssh/id_rsa)

cat /home/hadoop/.ssh/id_rsa.pub >> /home/hadoop/.ssh/authorized_keys
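If key-based login later still prompts for a password, the usual culprit is permissions on the .ssh directory. A minimal sketch of the modes sshd expects, rehearsed on a throwaway directory rather than the real /home/hadoop/.ssh:

```shell
# sshd normally refuses key auth if .ssh or authorized_keys is group/world
# writable. Demonstrate the expected modes on a scratch directory:
d=$(mktemp -d)
mkdir -p "$d/.ssh"
touch "$d/.ssh/authorized_keys"
chmod 700 "$d/.ssh"
chmod 600 "$d/.ssh/authorized_keys"
stat -c '%a' "$d/.ssh" "$d/.ssh/authorized_keys"   # prints 700 then 600
rm -rf "$d"
```

The same modes should be applied to /home/hadoop/.ssh and its authorized_keys on every node.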

wget http://apache.fayea.com/apache-mirror/hadoop/common/hadoop-2.3.0/hadoop-2.3.0.tar.gz

tar zxvf hadoop-2.3.0.tar.gz

sudo cp -r hadoop-2.3.0/ /opt

cd /opt

sudo ln -s hadoop-2.3.0 hadoop

sudo chown -R hadoop:hadoop hadoop-2.3.0

sed -i '$a \\nexport JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd' hadoop/etc/hadoop/hadoop-env.sh
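The `sed '$a ...'` append is easy to get wrong, so it can help to rehearse it on a scratch file before touching the real hadoop-env.sh (the temporary file below is purely illustrative):

```shell
# GNU sed's '$a' appends text after the last line of the file; the export
# therefore lands at the end, where hadoop-env.sh picks it up.
tmp=$(mktemp)
printf '# existing hadoop-env.sh content\n' > "$tmp"
sed -i '$a \\nexport JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd' "$tmp"
grep -c 'export JAVA_HOME' "$tmp"   # prints 1
rm -f "$tmp"
```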

2. Configure the Hadoop Single-Node Environment

cp mapred-site.xml.template mapred-site.xml

vi mapred-site.xml

<property>
  <name>mapreduce.cluster.temp.dir</name>
  <value></value>
  <description>No description</description>
  <final>true</final>
</property>

<property>
  <name>mapreduce.cluster.local.dir</name>
  <value></value>
  <description>No description</description>
  <final>true</final>
</property>

vi yarn-site.xml

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>127.0.0.1:8021</value>
  <description>host is the hostname of the resource manager and port is the port on which the NodeManagers contact the Resource Manager.</description>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>127.0.0.1:8022</value>
  <description>host is the hostname of the resourcemanager and port is the port on which the Applications in the cluster talk to the Resource Manager.</description>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  <description>In case you do not want to use the default scheduler</description>
</property>

<property>
  <name>yarn.resourcemanager.address</name>
  <value>127.0.0.1:8023</value>
  <description>the host is the hostname of the ResourceManager and the port is the port on which the clients can talk to the Resource Manager.</description>
</property>

<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value></value>
  <description>the local directories used by the nodemanager</description>
</property>

<property>
  <name>yarn.nodemanager.address</name>
  <value>0.0.0.0:8041</value>
  <description>the nodemanagers bind to this port</description>
</property>

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>10240</value>
  <description>the amount of memory on the NodeManager in MB</description>
</property>

<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/app-logs</value>
  <description>directory on hdfs where the application logs are moved to</description>
</property>

<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value></value>
  <description>the directories used by Nodemanagers as log directories</description>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
  <description>shuffle service that needs to be set for Map Reduce to run</description>
</property>

Additional configuration:

mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://127.0.0.1:9000</value>
</property>

hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

Bash: cd /opt/hadoop

bin/hdfs namenode -format

sbin/hadoop-daemon.sh start namenode

sbin/hadoop-daemon.sh start datanode

sbin/yarn-daemon.sh start resourcemanager

sbin/yarn-daemon.sh start nodemanager

jps

# Run a job on this node

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar pi 5 10

3. Runtime Problems

14/01/04 05:38:22 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8023. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

netstat -atnp # the listening ports show up as tcp6

Solution:

cat /proc/sys/net/ipv6/conf/all/disable_ipv6 # 0 means ipv6 is on, 1 means off

cat /proc/sys/net/ipv6/conf/lo/disable_ipv6

cat /proc/sys/net/ipv6/conf/default/disable_ipv6

ip a | grep inet6 # any output means IPv6 is still on

vi /etc/sysctl.conf

net.ipv6.conf.all.disable_ipv6=1

net.ipv6.conf.default.disable_ipv6=1

net.ipv6.conf.lo.disable_ipv6=1

sudo sysctl -p # applies the settings without a reboot

sudo /etc/init.d/networking restart

4. Cluster setup

Config /opt/hadoop/etc/hadoop/{hadoop-env.sh, yarn-env.sh}

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd

cd /opt/hadoop

mkdir -p tmp/{data,name} # on every node. name on namenode, data on datanode
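The brace expansion in the mkdir above creates both subdirectories in one call; a scratch-directory sketch (the path is illustrative, not part of the setup):

```shell
# mkdir -p tmp/{data,name} expands (in bash) to "mkdir -p tmp/data tmp/name".
base=$(mktemp -d)
cd "$base"
mkdir -p tmp/{data,name}
ls tmp   # lists data and name
cd - >/dev/null
rm -rf "$base"
```

Note that brace expansion is a bash feature; under a plain POSIX sh the literal directory `{data,name}` would be created instead.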

vi /etc/hosts # hostname also changed on each node

192.168.1.110 cloud1

192.168.1.112 cloud2

192.168.1.114 cloud3

vi /opt/hadoop/etc/hadoop/slaves

cloud2

cloud3

core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://cloud1:9000</value>
</property>

<property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

Note: reportedly the directory in dfs.datanode.data.dir must be emptied first, otherwise the DataNode will not start.

hdfs-site.xml

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/opt/hadoop/name</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/opt/hadoop/data</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

yarn-site.xml

<property>
  <name>yarn.resourcemanager.address</name>
  <value>cloud1:8032</value>
  <description>ResourceManager host:port for clients to submit jobs.</description>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>cloud1:8030</value>
  <description>ResourceManager host:port for ApplicationMasters to talk to Scheduler to obtain resources.</description>
</property>

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>cloud1:8031</value>
  <description>ResourceManager host:port for NodeManagers.</description>
</property>

<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>cloud1:8033</value>
  <description>ResourceManager host:port for administrative commands.</description>
</property>

<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>cloud1:8088</value>
  <description>ResourceManager web-ui host:port.</description>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  <description>In case you do not want to use the default scheduler</description>
</property>

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>10240</value>
  <description>the amount of memory on the NodeManager in MB</description>
</property>

<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value></value>
  <description>the local directories used by the nodemanager</description>
</property>

<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value></value>
  <description>the directories used by Nodemanagers as log directories</description>
</property>

<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/app-logs</value>
  <description>directory on hdfs where the application logs are moved to</description>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
  <description>shuffle service that needs to be set for Map Reduce to run</description>
</property>

<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<property>
  <name>mapreduce.jobhistory.address</name>
  <value>cloud1:10020</value>
</property>

<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>cloud1:19888</value>
</property>

cd /opt/hadoop/

bin/hdfs namenode -format

sbin/start-dfs.sh # cloud1 NameNode SecondaryNameNode, cloud2 and cloud3 DataNode

sbin/start-yarn.sh # cloud1 ResourceManager, cloud2 and cloud3 NodeManager

jps

Check cluster status: bin/hdfs dfsadmin -report

Check file blocks: bin/hdfs fsck / -files -blocks

NameNode HDFS web UI: http://192.168.1.110:50070

ResourceManager web UI: http://192.168.1.110:8088

bin/hdfs dfs -mkdir /input

bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar randomwriter input

5. Questions:

Q: 14/01/05 23:59:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

A: The shared libraries under /opt/hadoop/lib/native/ are 32-bit; replace them with 64-bit builds.

Q: ssh login prompts "Are you sure you want to continue connecting (yes/no)?" How can this be avoided?

A: Edit /etc/ssh/ssh_config and change "# StrictHostKeyChecking ask" to "StrictHostKeyChecking no".
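Rather than disabling host-key checking system-wide, a narrower alternative (an assumption, not part of the original setup) is a per-host entry in the hadoop user's ~/.ssh/config, limited to the cluster hosts:

Host cloud1 cloud2 cloud3
    StrictHostKeyChecking no

This keeps the default "ask" behavior for all other destinations.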

Q: The DataNodes on the two slaves cannot join the cluster.

A: Remove the lines containing 127.0.1.1 or localhost from /etc/hosts.
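The hosts-file fix can be scripted; this sketch runs the deletion against a scratch copy rather than the real /etc/hosts, so the result can be inspected first:

```shell
# Delete any 127.0.1.1 line (the entry Ubuntu adds for the local hostname,
# which can make a DataNode register with a loopback address).
tmp=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 cloud2\n192.168.1.110 cloud1\n' > "$tmp"
sed -i '/^127\.0\.1\.1/d' "$tmp"
grep -c '^127\.0\.1\.1' "$tmp" || true   # prints 0: no such line remains
rm -f "$tmp"
```

Once verified, the same sed command can be applied (with sudo) to /etc/hosts on each slave.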
