HBase + Hadoop Installation and Deployment
2020-11-09 13:21:42  Editor: 小采

Multiple RedHat Linux guests were installed under VMware. The steps below were pieced together from various online sources; followed in order, they get everything installed without trouble.

1. Create the user

groupadd bigdata

useradd -g bigdata hadoop

passwd hadoop
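A quick sanity check (not in the original steps; the uid/gid numbers will vary):

id hadoop    # should show the hadoop user with primary group bigdata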

2. Set up the JDK and environment variables

vi /etc/profile


export JAVA_HOME=/usr/lib/java-1.7.0_07

export CLASSPATH=.

export HADOOP_HOME=/home/hadoop/hadoop

export HBASE_HOME=/home/hadoop/hbase

export HADOOP_MAPARED_HOME=${HADOOP_HOME}

export HADOOP_COMMON_HOME=${HADOOP_HOME}

export HADOOP_HDFS_HOME=${HADOOP_HOME}

export YARN_HOME=${HADOOP_HOME}

export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export HBASE_CONF_DIR=${HBASE_HOME}/conf

export ZK_HOME=/home/hadoop/zookeeper

export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HADOOP_HOME/sbin:$ZK_HOME/bin:$PATH


source /etc/profile

chmod 777 -R /usr/lib/java-1.7.0_07
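After sourcing /etc/profile, the new environment can be spot-checked (a quick verification, not part of the original steps):

java -version        # should report a 1.7.0 JDK from /usr/lib/java-1.7.0_07
echo $HADOOP_HOME    # should print /home/hadoop/hadoop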

3. Edit /etc/hosts

vi /etc/hosts

Add:

172.16.254.215   master
172.16.254.216   salve1
172.16.254.217   salve2
172.16.254.218   salve3
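To confirm the entries resolve, each slave can be pinged by name from the master (a simple check, not in the original):

ping -c 1 salve1
ping -c 1 salve2
ping -c 1 salve3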

4. Passwordless SSH

On the 215 server (master):

su - root

vi /etc/ssh/sshd_config

Make sure it contains the following lines:

RSAAuthentication yes

PubkeyAuthentication yes

AuthorizedKeysFile      .ssh/authorized_keys

Restart sshd:

service sshd restart


su - hadoop

ssh-keygen -t rsa

cd /home/hadoop/.ssh

cat id_rsa.pub >> authorized_keys

chmod 600 authorized_keys

On 216, 217, and 218, run:

mkdir /home/hadoop/.ssh

chmod 700 /home/hadoop/.ssh

On 215 (master), run:

scp id_rsa.pub hadoop@salve1:/home/hadoop/.ssh/

scp id_rsa.pub hadoop@salve2:/home/hadoop/.ssh/

scp id_rsa.pub hadoop@salve3:/home/hadoop/.ssh/

On 216, 217, and 218, run:

cat /home/hadoop/.ssh/id_rsa.pub >> /home/hadoop/.ssh/authorized_keys

chmod 600 /home/hadoop/.ssh/authorized_keys
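Passwordless login can now be verified from 215 as the hadoop user (a simple check; each command should print the slave's hostname without asking for a password):

ssh salve1 hostname
ssh salve2 hostname
ssh salve3 hostname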

5. Set up Hadoop, HBase, and ZooKeeper

su - hadoop

mkdir /home/hadoop/hadoop

mkdir /home/hadoop/hbase

mkdir /home/hadoop/zookeeper


cp -r /home/hadoop/soft/hadoop-2.0.1-alpha/* /home/hadoop/hadoop/

cp -r /home/hadoop/soft/hbase-0.95.0-hadoop2/* /home/hadoop/hbase/

cp -r /home/hadoop/soft/zookeeper-3.4.5/* /home/hadoop/zookeeper/

1) Hadoop configuration

vi /home/hadoop/hadoop/etc/hadoop/hadoop-env.sh

Modify:

export JAVA_HOME=/usr/lib/java-1.7.0_07

vi /home/hadoop/hadoop/etc/hadoop/core-site.xml

Add the following properties inside <configuration>:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://172.16.254.215:9000</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>172.16.254.215</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>

vi /home/hadoop/hadoop/etc/hadoop/slaves

Add (the master is not used as a slave):

salve1

salve2

salve3


vi /home/hadoop/hadoop/etc/hadoop/hdfs-site.xml

Add the following properties inside <configuration>:

<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hadoop/hdfs/name</value>
  <final>true</final>
</property>
<property>
  <name>dfs.federation.nameservice.id</name>
  <value>ns1</value>
</property>
<property>
  <name>dfs.namenode.backup.address.ns1</name>
  <value>172.16.254.215:50100</value>
</property>
<property>
  <name>dfs.namenode.backup.http-address.ns1</name>
  <value>172.16.254.215:50105</value>
</property>
<property>
  <name>dfs.federation.nameservices</name>
  <value>ns1</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1</name>
  <value>172.16.254.215:9000</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns2</name>
  <value>172.16.254.215:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns1</name>
  <value>172.16.254.215:23001</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns2</name>
  <value>172.16.254.215:13001</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hadoop/hdfs/data</value>
  <final>true</final>
</property>
<property>
  <name>dfs.namenode.secondary.http-address.ns1</name>
  <value>172.16.254.215:23002</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address.ns2</name>
  <value>172.16.254.215:23003</value>
</property>

vi /home/hadoop/hadoop/etc/hadoop/yarn-site.xml

Add the following properties inside <configuration>:

<property>
  <name>yarn.resourcemanager.address</name>
  <value>172.16.254.215:18040</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>172.16.254.215:18030</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>172.16.254.215:18088</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>172.16.254.215:18025</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>172.16.254.215:18141</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce.shuffle</value>
</property>

2) HBase configuration

vi /home/hadoop/hbase/conf/hbase-site.xml

Add the following properties inside <configuration>:

<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://172.16.254.215:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.config.read.zookeeper.config</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master</name>
  <value>master</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>salve1,salve2,salve3</value>
</property>
<property>
  <name>zookeeper.session.timeout</name>
  <value>60000</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>/home/hadoop/hbase/tmp</value>
  <description>Temporary directory on the local filesystem.</description>
</property>
<property>
  <name>hbase.client.keyvalue.maxsize</name>
  <value>10485760</value>
</property>

vi /home/hadoop/hbase/conf/regionservers

Add:

salve1

salve2

salve3

vi /home/hadoop/hbase/conf/hbase-env.sh

Modify:

export JAVA_HOME=/usr/lib/java-1.7.0_07

export HBASE_MANAGES_ZK=false

3) ZooKeeper configuration

vi /home/hadoop/zookeeper/conf/zoo.cfg

Add:

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/home/hadoop/zookeeper/data

clientPort=2181

server.1=salve1:2888:3888

server.2=salve2:2888:3888

server.3=salve3:2888:3888

Copy /home/hadoop/zookeeper/conf/zoo.cfg to /home/hadoop/hbase/.
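A minimal sketch of that copy (assumption: since HBASE_CONF_DIR points at /home/hadoop/hbase/conf and hbase.config.read.zookeeper.config is true, placing zoo.cfg under conf/ is what HBase will actually read):

cp /home/hadoop/zookeeper/conf/zoo.cfg /home/hadoop/hbase/conf/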

4) Sync from master to the slaves

scp -r /home/hadoop/hadoop hadoop@salve1:/home/hadoop
scp -r /home/hadoop/hbase hadoop@salve1:/home/hadoop
scp -r /home/hadoop/zookeeper hadoop@salve1:/home/hadoop

scp -r /home/hadoop/hadoop hadoop@salve2:/home/hadoop
scp -r /home/hadoop/hbase hadoop@salve2:/home/hadoop
scp -r /home/hadoop/zookeeper hadoop@salve2:/home/hadoop

scp -r /home/hadoop/hadoop hadoop@salve3:/home/hadoop
scp -r /home/hadoop/hbase hadoop@salve3:/home/hadoop
scp -r /home/hadoop/zookeeper hadoop@salve3:/home/hadoop

Set the ZooKeeper myid on salve1, salve2, and salve3 respectively (create /home/hadoop/zookeeper/data first if it does not exist):

echo "1" > /home/hadoop/zookeeper/data/myid    # on salve1
echo "2" > /home/hadoop/zookeeper/data/myid    # on salve2
echo "3" > /home/hadoop/zookeeper/data/myid    # on salve3

5) Testing

Test Hadoop:

hadoop namenode -format -clusterid clustername


start-all.sh
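Assuming start-all.sh came up cleanly, jps gives a quick view of the running daemons (a rough check, not in the original; the exact process list can differ by version):

jps    # on master: expect roughly NameNode and ResourceManager
jps    # on each salve (e.g. ssh salve1 jps): expect roughly DataNode and NodeManager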

hadoop fs -ls hdfs://172.16.254.215:9000/

hadoop fs -mkdir hdfs://172.16.254.215:9000/hbase

//hadoop fs -copyFromLocal ./install.log hdfs://172.16.254.215:9000/testfolder

//hadoop fs -ls hdfs://172.16.254.215:9000/testfolder

//hadoop fs -put /usr/hadoop/hadoop-2.0.1-alpha/*.txt hdfs://172.16.254.215:9000/testfolder

//cd /usr/hadoop/hadoop-2.0.1-alpha/share/hadoop/mapreduce

//hadoop jar hadoop-mapreduce-examples-2.0.1-alpha.jar wordcount hdfs://172.16.254.215:9000/testfolder hdfs://172.16.254.215:9000/output

//hadoop fs -ls hdfs://172.16.254.215:9000/output

//hadoop fs -cat hdfs://172.16.254.215:9000/output/part-r-00000

Start ZooKeeper on salve1, salve2, and salve3:

zkServer.sh start
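Once all three nodes are started, the ensemble can be checked on each of them (a simple verification, not in the original):

zkServer.sh status    # one node should report Mode: leader, the other two Mode: follower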

Start HBase:

start-hbase.sh

Enter the HBase shell:

hbase shell

Test HBase:

list

create 'student','name','address'

put 'student','1','name','tom'

get 'student','1'
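A couple of extra shell commands round out the smoke test (optional; exact output formatting varies by HBase version):

scan 'student'     # should list row '1' with the value 'tom' under the name family
count 'student'    # should report 1 row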
