By Amol1984 (Systems Engineer, YuSYS) · 2018-06-28 21:14

Build notes: 4 VMware VMs + RHEL 5.8 + JRE 1.7 + Apache Hadoop 2.6.5


0=== JRE installation and configuration

upload jre-7u45-linux-x64.tar.gz
gzip -d jre-7u45-linux-x64.tar.gz
tar -xvf jre-7u45-linux-x64.tar

vi /etc/profile
export JAVA_HOME=/root/soft/jre1.7.0_45
export JRE_HOME=/root/soft/jre1.7.0_45/jre
export PATH=$PATH:/root/soft/jre1.7.0_45/bin
export CLASSPATH=./:/root/soft/jre1.7.0_45/lib:/root/soft/jre1.7.0_45/jre/lib

source /etc/profile (or reboot the system)
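A quick sanity check that the new JRE is the one on the PATH (the version line below is what jre-7u45 typically reports; the exact build string may differ):

java -version
# java version "1.7.0_45"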

[root@master soft]# scp jre-7u45-linux-i586 root@192.168.100.129:/root/soft
[root@master soft]# scp jre-7u45-linux-i586 root@192.168.100.130:/root/soft
[root@master soft]# scp jre-7u45-linux-i586 root@192.168.100.131:/root/soft

PS: repeat the steps above to configure the JRE on every virtual machine.

0=== Preparation

Create a user and group named hadoop on every machine:
groupadd hadoop
useradd hadoop -g hadoop
passwd hadoop
chown -R hadoop:hadoop /home/hadoop

0.1=== Passwordless SSH setup

The goal of this step is passwordless public-key login: the master can SSH into the slave nodes without typing a password. See the SSH documentation if you want more background. The setup steps are as follows:

On the master machine, switch to the hadoop user:

su hadoop
cd /home/hadoop/

Generate the public/private key pair:

ssh-keygen -t rsa

Just press Enter at every prompt.

cd .ssh
cat id_rsa.pub > authorized_keys
chmod go-wx authorized_keys

Copy the public key to /home/hadoop/ on each of the other nodes:

ssh-copy-id -i ~/.ssh/id_rsa.pub hmaster2
ssh-copy-id -i ~/.ssh/id_rsa.pub hslave1
ssh-copy-id -i ~/.ssh/id_rsa.pub hslave2

Verify the logins from hmaster:

ssh hadoop@hmaster2
ssh hadoop@hslave1
ssh hadoop@hslave2

That completes the passwordless SSH setup. As the hadoop user on the master you can now run:
ssh hslave1 # if this lands you in hslave1's hadoop account without a password prompt, it works
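To check all three logins at once, BatchMode makes ssh fail instead of prompting, so any node that still asks for a password shows up immediately:

for h in hmaster2 hslave1 hslave2; do
    ssh -o BatchMode=yes hadoop@$h hostname   # should print each node's hostname
done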

su - hadoop
vi .bash_profile

export PS1='\u@\h>'   # prompt becomes user@host>, e.g. hadoop@hmaster>

1=== Host name resolution

vi /etc/hosts

192.168.100.128 hmaster
192.168.100.129 hmaster2
192.168.100.130 hslave1
192.168.100.131 hslave2
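Every node needs the same hosts file; one way to push it out from hmaster (assuming root SSH access to the other machines):

for ip in 192.168.100.129 192.168.100.130 192.168.100.131; do
    scp /etc/hosts root@$ip:/etc/hosts
done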

2=== Network configuration

vi /etc/rc.local    (each machine keeps only the line with its own address)

ifconfig eth0 192.168.100.128 netmask 255.255.255.0   # hmaster
ifconfig eth0 192.168.100.129 netmask 255.255.255.0   # hmaster2
ifconfig eth0 192.168.100.130 netmask 255.255.255.0   # hslave1
ifconfig eth0 192.168.100.131 netmask 255.255.255.0   # hslave2

Set the IP address and netmask:
ifconfig eth0 192.168.5.40 netmask 255.255.255.0
Set the default gateway:
route add default gw 192.168.100.1
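To confirm the settings took effect on each node (assuming the gateway 192.168.100.1 answers ping):

ifconfig eth0              # should show this node's own 192.168.100.x address
ping -c 3 192.168.100.1    # default gateway
ping -c 3 hslave1          # name resolution through /etc/hosts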

3=== Download and install Hadoop

First download the Hadoop release tarball from:

http://mirror.bit.edu.cn/apache/hadoop/common/
wget http://mirror.bit.edu.cn/apache/hadoop/common

Upload the Hadoop tarball to the target server, then extract it:

tar -xzvf hadoop-*.tar.gz

Configure the Hadoop environment in /etc/profile:

vi /etc/profile
export HADOOP_HOME=/root/soft/hadoop3.1.0/hadoop-3.1.0
export PATH=${HADOOP_HOME}/bin:$PATH

Install Hadoop on the other machines:

mkdir /root/soft/hadoop3.1.0
scp -v hadoop-3.1.0.tar.gz root@192.168.100.129:/root/soft/hadoop3.1.0
scp -v hadoop-3.1.0.tar.gz root@192.168.100.130:/root/soft/hadoop3.1.0
scp -v hadoop-3.1.0.tar.gz root@192.168.100.131:/root/soft/hadoop3.1.0

...

tar -xzvf hadoop-3.1.0.tar.gz
vi /etc/profile
export HADOOP_HOME=/root/soft/hadoop3.1.0/hadoop-3.1.0
export PATH=${HADOOP_HOME}/bin:$PATH
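After sourcing /etc/profile, a quick check that the expected build is the one on the PATH:

source /etc/profile
hadoop version    # should report the version that was just unpacked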

Configure the Hadoop configuration files:

1) Modify hadoop-env.sh and set JAVA_HOME:
export JAVA_HOME=/root/soft/jre1.7.0_45

2) Edit the Hadoop environment file .bashrc:
vi /root/soft/hadoop3.1.0/hadoop-3.1.0/.bashrc

export HADOOP_PREFIX=/root/soft/hadoop3.1.0/hadoop-3.1.0
export HADOOP_HOME=$HADOOP_PREFIX
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export PATH=$PATH:$HADOOP_PREFIX/sbin:$HADOOP_PREFIX/bin
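As written, this .bashrc lives inside the Hadoop directory, so it is not read automatically; it has to be sourced explicitly (or the same exports added to ~/.bashrc of the user who runs Hadoop):

. /root/soft/hadoop3.1.0/hadoop-3.1.0/.bashrc
which hdfs start-dfs.sh    # both should now resolve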

3) Configure all the other cluster machines the same way.
...

Edit the Hadoop configuration files and then copy them to the cluster machines.

a) On the master, edit core-site.xml and then copy it to the slave machines:

vi $HADOOP_HOME/etc/hadoop/core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hmaster:9000/</value>
  </property>
</configuration>

b) Copy the configuration to the other machines:
scp -v $HADOOP_HOME/etc/hadoop/core-site.xml root@hmaster2:$HADOOP_HOME/etc/hadoop/
scp -v $HADOOP_HOME/etc/hadoop/core-site.xml root@hslave1:$HADOOP_HOME/etc/hadoop/
scp -v $HADOOP_HOME/etc/hadoop/core-site.xml root@hslave2:$HADOOP_HOME/etc/hadoop/
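To confirm every node sees the same default filesystem, hdfs getconf reads the local configuration:

hdfs getconf -confKey fs.defaultFS    # expect hdfs://hmaster:9000/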

c) On the master, edit hdfs-site.xml to configure the NameNode and DataNode directories:
mkdir -p /home/hadoop/namenode
mkdir -p /home/hadoop/datanode

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <!-- note: the NameNode storage property is dfs.namenode.name.dir, not dfs.namenode.data.dir -->
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/datanode</value>
  </property>
</configuration>

d) On hslave1 and hslave2, edit hdfs-site.xml to configure the DataNode:
mkdir -p /home/hadoop/datanode

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/datanode</value>
  </property>
</configuration>

e) On hmaster, configure mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value> <!-- and not local (!) -->
  </property>
</configuration>

f) On hmaster, configure yarn-site.xml:

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hmaster</value>
  </property>
  <property>
    <name>yarn.nodemanager.hostname</name>
    <value>hmaster</value> <!-- each node uses its own hostname here -->
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

g) On hslave1 and hslave2, configure yarn-site.xml:

<configuration>
  <property>
    <name>yarn.nodemanager.hostname</name>
    <value>hslave1</value> <!-- each node's own hostname: hslave1 on hslave1, hslave2 on hslave2 -->
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
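If the slave NodeManagers never register with the ResourceManager, the usual cause is that yarn.resourcemanager.hostname (which defaults to 0.0.0.0) is not set on the slaves; adding the same property to their yarn-site.xml as on the master fixes that:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>hmaster</value>
</property>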

h) On hmaster, edit the slaves file:
mkdir $HADOOP_HOME/conf
vi $HADOOP_HOME/conf/slaves
hslave1
hslave2
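Note that the 2.x start scripts read the worker list from $HADOOP_CONF_DIR rather than from a conf/ directory, so it is safer to keep (or copy) the file there; in Hadoop 3.x the file is called workers instead of slaves:

vi $HADOOP_HOME/etc/hadoop/slaves
hslave1
hslave2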

i) Fix the directory ownership:
cd $HADOOP_HOME/..
chown -R hadoop:hadoop ./hadoop-3.1.0
chown -R hadoop:hadoop /home/hadoop

j) Add the hadoop user to the root group so that it can run the Java programs installed under the root user's home directory:
vi /etc/group
root:x:0:root,hadoop

4. On hmaster, format HDFS:
hadoop namenode -format
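As the warning in the pitfalls section below shows, the bin/hadoop form of this command is deprecated on 2.x; the equivalent, non-deprecated form is:

hdfs namenode -format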

5. Start Hadoop: from $HADOOP_HOME/sbin run ./start-all.sh, then check what is running with jps.
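On 2.x, start-all.sh itself prints a deprecation notice; starting the two layers separately does the same job:

$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh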

== On hmaster

jps

25734 ResourceManager
25900 NodeManager
25263 DataNode
25147 NameNode
945 Jps
25494 SecondaryNameNode

== On the slaves

jps

15708 DataNode
19171 Jps

6. Test Hadoop: copy a local file to HDFS and count its words with the bundled wordcount example.

a) Leave safe mode:
hdfs dfsadmin -safemode leave

b) Upload a file test.txt

Create test.txt in the local directory /home/hadoop/:
vi /home/hadoop/test.txt

Create an input directory on HDFS:
hdfs dfs -mkdir /input

Copy test.txt into the input directory:
hdfs dfs -copyFromLocal /home/hadoop/test.txt /input/test.txt

View test.txt on HDFS:
hdfs dfs -cat /input/test.txt |head

c) Run the bundled wordcount example and write the result to /output1:

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar wordcount /input/test.txt /output1

List the result files under /output1:

hdfs dfs -ls -R /output1

View the word counts:
hdfs dfs -cat /output1/part-r-00000|head
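The MapReduce example refuses to run if the output directory already exists, so remove it before re-running wordcount:

hdfs dfs -rm -r /output1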

--------- Pitfalls hit during the build ----------
Running hadoop namenode -format as the hadoop user on hmaster failed with the following error:
hadoop@hmaster>hadoop namenode -format
WARNING: Use of this script to execute namenode is deprecated.
WARNING: Attempting to execute replacement "hdfs namenode" instead.

Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/hadoop/hdfs/server/namenode/NameNode : Unsupported major.minor version 52.0

    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(Unknown Source)
    at java.security.SecureClassLoader.defineClass(Unknown Source)
    at java.net.URLClassLoader.defineClass(Unknown Source)
    at java.net.URLClassLoader.access$100(Unknown Source)
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at sun.launcher.LauncherHelper.checkAndLoadMain(Unknown Source)

hadoop@hmaster>

My first thought was that the hadoop user lacked permission to create the filesystem, so I tried root instead. (In hindsight, "Unsupported major.minor version 52.0" means the classes were compiled for Java 8: Hadoop 3.1.0 requires Java 8, while this cluster runs JRE 1.7, so the real fix was dropping back to Hadoop 2.6.5, which is exactly what the log below shows running.)
[root@hmaster hadoop]# hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

18/06/20 21:51:01 INFO namenode.NameNode: STARTUP_MSG:
/**
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hmaster/192.168.100.128
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.6.5
STARTUP_MSG: classpath = /root/soft/hadoop-2.6.5/etc/hadoop:/root/soft/hadoop-2.6.5/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar: ... :/root/soft/hadoop-2.6.5/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://github.com/apache/hadoop.git -r e8c9fe0b4c252caf2ebf1464220599650f119997; compiled by 'sjlee' on 2016-10-02T23:43Z
STARTUP_MSG: java = 1.7.0_45
**/
18/06/20 21:51:02 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/06/20 21:51:02 INFO namenode.NameNode: createNameNode [-format]
Java HotSpot(TM) Client VM warning: You have loaded library /root/soft/hadoop-2.6.5/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
18/06/20 21:51:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-dce0b5d9-2cf3-48e0-bbf2-cc3da4f5af7a
18/06/20 21:51:11 INFO namenode.FSNamesystem: No KeyProvider found.
18/06/20 21:51:11 INFO namenode.FSNamesystem: fsLock is fair:true
18/06/20 21:51:12 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/06/20 21:51:12 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/06/20 21:51:12 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18/06/20 21:51:12 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jun 20 21:51:12
18/06/20 21:51:12 INFO util.GSet: Computing capacity for map BlocksMap
18/06/20 21:51:12 INFO util.GSet: VM type = 32-bit
18/06/20 21:51:12 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
18/06/20 21:51:12 INFO util.GSet: capacity = 2^22 = 4194304 entries
18/06/20 21:51:18 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18/06/20 21:51:18 INFO blockmanagement.BlockManager: defaultReplication = 3
18/06/20 21:51:18 INFO blockmanagement.BlockManager: maxReplication = 512
18/06/20 21:51:18 INFO blockmanagement.BlockManager: minReplication = 1
18/06/20 21:51:18 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
18/06/20 21:51:18 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/06/20 21:51:18 INFO blockmanagement.BlockManager: encryptDataTransfer = false
18/06/20 21:51:18 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
18/06/20 21:51:18 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
18/06/20 21:51:18 INFO namenode.FSNamesystem: supergroup = supergroup
18/06/20 21:51:18 INFO namenode.FSNamesystem: isPermissionEnabled = false
18/06/20 21:51:18 INFO namenode.FSNamesystem: HA Enabled: false
18/06/20 21:51:18 INFO namenode.FSNamesystem: Append Enabled: true
18/06/20 21:51:22 INFO util.GSet: Computing capacity for map INodeMap
18/06/20 21:51:22 INFO util.GSet: VM type = 32-bit
18/06/20 21:51:22 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
18/06/20 21:51:22 INFO util.GSet: capacity = 2^21 = 2097152 entries
18/06/20 21:51:23 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/06/20 21:51:23 INFO util.GSet: Computing capacity for map cachedBlocks
18/06/20 21:51:23 INFO util.GSet: VM type = 32-bit
18/06/20 21:51:23 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
18/06/20 21:51:23 INFO util.GSet: capacity = 2^19 = 524288 entries
18/06/20 21:51:24 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18/06/20 21:51:24 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/06/20 21:51:24 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
18/06/20 21:51:24 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/06/20 21:51:24 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/06/20 21:51:24 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/06/20 21:51:24 INFO util.GSet: VM type = 32-bit
18/06/20 21:51:24 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
18/06/20 21:51:24 INFO util.GSet: capacity = 2^16 = 65536 entries
18/06/20 21:51:24 INFO namenode.NNConf: ACLs enabled? false
18/06/20 21:51:24 INFO namenode.NNConf: XAttrs enabled? true
18/06/20 21:51:24 INFO namenode.NNConf: Maximum size of an xattr: 16384
18/06/20 21:51:29 INFO namenode.FSImage: Allocated new BlockPoolId: BP-2115813706-192.168.100.128-1529502685638
18/06/20 21:51:29 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
18/06/20 21:51:29 INFO namenode.FSImageFormatProtobuf: Saving image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
18/06/20 21:51:31 INFO namenode.FSImageFormatProtobuf: Image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 320 bytes saved in 1 seconds.
18/06/20 21:51:31 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/06/20 21:51:31 INFO util.ExitUtil: Exiting with status 0
18/06/20 21:51:31 INFO namenode.NameNode: SHUTDOWN_MSG:
/**
SHUTDOWN_MSG: Shutting down NameNode at hmaster/192.168.100.128
**/
[root@hmaster hadoop]#
