
[Big Data Cluster Setup - Apache] Deploying the Big Data Cluster Components with Apache Releases

Posted: 2024-03-14 07:21:23


1)大数据环境统一1.1.设置主机名和域名映射1.2.关闭服务器防火墙和Selinux1.3.服务器免密登陆1.4.所有机器时钟同步1.5.JDK安装 2)MySQL2.1.将MySQL的rpm文件导入服务器中2.2.安装rpm文件2.3.卸载mariadb2.4.启动MySQL2.5.登录MySQL2.6.设置MySQL权限 3)Zookeeper3.1.下载3.2.上传与解压3.3.修改配置文件3.4.添加myid配置3.5.安装包分发并修改myid的值3.6.所有机器启动ZK服务 4)Hadoop4.1.下载4.2.解压4.3.分发hadoop4.4.配置/etc/profile4.5.准备目录4.6.配置Hadoop配置文件4.7.准备native-lib4.8.启动Hadoop4.9.验证 5)Hive5.1.下载5.2.解压并重命名5.3.修改Hive的配置文件5.4.Hive的交互方式 6)Sqoop6.1.解压安装并更改名字6.2.拷贝mysql的jdbc驱动包到lib目录下6.3.配置文件6.4.测试6.5.语句 7)HBase7.1.上传解压HBase安装包7.2.修改HBase配置文件7.3.配置环境变量7.4.复制jar包到lib7.5.修改regionservers文件7.6.分发安装包与配置文件7.7.搭建HBase高可用7.8.解决hbase/filter错误7.9.解决sqoop的lib库中没有hbase的相应jar包7.10.启动HBase7.11.验证Hbase是否启动成功7.12.WebUI 8)Elasticsearch8.1.创建普通用户8.2.为普通用户itcast添加sudo权限8.3.上传压缩包并压缩8.4.修改配置文件8.5.将安装包分发到其他服务器上面8.6.其他节点修改es配置文件8.7.修改系统配置,解决启动时候的问题8.8.启动ES服务8.9.Elasticsearch-head插件8.9.1.安装nodejs8.9.2.本地安装 9)Spark9.1.下载9.2.Local安装9.3.Standalone集群安装9.4.Standalone HA 搭建9.5.Spark On Yarn9.6.启动9.7.WebUI 10.Kafka10.1.准备如下目录10.2.下载10.3.上传压缩包并解压10.4.配置环境变量10.5.分发安装包10.6.修改Kafka配置文件10.6.1.目录重命名10.6.2.修改配置文件10.6.3.配置详解 10.7.启动 11)Flink11.1.下载11.2.Local安装11.3.Standalone集群安装11.4.Standalone HA搭建11.5.Flink On Yarn11.6.WebUI

1) Common Environment Setup

1.1. Set Hostnames and Hostname Mapping

1. Configure the hostname of each virtual machine:

vim /etc/hostname

Hostname of the first host:  5gcsp-bigdata-svr1
Hostname of the second host: 5gcsp-bigdata-svr2
Hostname of the third host:  5gcsp-bigdata-svr3
Hostname of the fourth host: 5gcsp-bigdata-svr4
Hostname of the fifth host:  5gcsp-bigdata-svr5

2. Configure the hostname mapping on every server, as sketched below

vim /etc/hosts
# format: <ip> <hostname>
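A minimal example of what the /etc/hosts entries might look like; the IP addresses below are placeholders and must be replaced with the real addresses of your five servers:

192.168.88.161 5gcsp-bigdata-svr1
192.168.88.162 5gcsp-bigdata-svr2
192.168.88.163 5gcsp-bigdata-svr3
192.168.88.164 5gcsp-bigdata-svr4
192.168.88.165 5gcsp-bigdata-svr5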

1.2. Disable the Firewall and SELinux

1. Disable the firewall on every machine

systemctl stop firewalld.service      # stop firewalld
systemctl disable firewalld.service   # keep firewalld from starting at boot
systemctl status firewalld.service    # check the firewall status afterwards

2. Disable SELinux on every machine

vim /etc/selinux/config
# change the setting to:
SELINUX=disabled

Reboot:

# a reboot is required after changing the SELinux setting
reboot

1.3. Passwordless SSH Between Servers

1. On every machine, run the following command to generate a key pair; press Enter three times at the prompts

ssh-keygen -t rsa

2. Copy the public key of every machine to the first machine; run this command on every machine

ssh-copy-id 5gcsp-bigdata-svr1

3. Copy the aggregated authorized_keys file from the first machine to the other machines; run the following on the first machine (you will be asked to type yes and enter the remote password)

scp /root/.ssh/authorized_keys 5gcsp-bigdata-svr2:/root/.ssh
scp /root/.ssh/authorized_keys 5gcsp-bigdata-svr3:/root/.ssh
scp /root/.ssh/authorized_keys 5gcsp-bigdata-svr4:/root/.ssh
scp /root/.ssh/authorized_keys 5gcsp-bigdata-svr5:/root/.ssh

4. Test it: from any host you can now log in to another host with ssh <hostname>; type exit to log out

ssh 5gcsp-bigdata-svr2
exit

1.4. Clock Synchronization on All Machines

Create a cron job:

crontab -e

Then enter the following so that every minute the machine contacts the Aliyun time server and synchronizes its clock (the NTP server address is missing here; see the sketch below):

*/1 * * * * /usr/sbin/ntpdate -u ;
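A minimal crontab entry, assuming Aliyun's public NTP endpoint ntp.aliyun.com as the time source; substitute whatever NTP server your environment actually uses:

# assumed NTP server; replace with your own time source if needed
*/1 * * * * /usr/sbin/ntpdate -u ntp.aliyun.com > /dev/null 2>&1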

1.5. JDK Installation

1. Create the directories on every server

mkdir -p /export/software   # directory for installation packages
mkdir -p /export/servers    # directory for installed software

2. Go into /export/software and upload the JDK package jdk-8u241-linux-x64.tar.gz

3. Extract the package into /export/servers

tar -zxvf jdk-8u241-linux-x64.tar.gz -C /export/servers

4. Configure the JDK environment variables; the export command turns shell variables into environment variables

Step 1: vi /etc/profile
Step 2: move the cursor to the end of the file with the arrow keys
Step 3: press i and add the following (adjust the paths to your own installation):

#set java environment
JAVA_HOME=/export/servers/jdk1.8.0_241
CLASSPATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH

Step 4: press Esc, then :wq to save and quit

5. Reload the environment variables

source /etc/profile

6. Verify that the JDK is installed correctly

java -version
# or
javac -version

2) MySQL

2.1. Upload the MySQL rpm Files to the Server

cd /export/software

2.2. Install the rpm Files

Run the following commands in order:

rpm -ivh mysql-community-common-5.7.26-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-5.7.26-1.el7.x86_64.rpm --nodeps --force
rpm -ivh mysql-community-client-5.7.26-1.el7.x86_64.rpm
rpm -ivh mysql-community-server-5.7.26-1.el7.x86_64.rpm --nodeps --force

2.3. Remove mariadb

CentOS 7 ships with mariadb, which conflicts with a MySQL installation, so mariadb needs to be removed first:

# Check whether other MySQL packages are already installed; if so, remove them the same way before reinstalling
rpm -qa | grep -i mysql
# Check whether mariadb is installed; if so, remove it to avoid conflicts with MySQL
rpm -qa | grep mariadb
rpm -e <mariadb-package-name> --nodeps
# Check again; it should be gone
rpm -qa | grep mariadb

2.4. Start MySQL

service mysqld status   # check whether it is running
service mysqld start    # start it
service mysqld status   # check again

2.5. Log in to MySQL

1. Find the temporary password

grep "password" /var/log/mysqld.log
# example temporary password: K3-JrYp5S2)7

2. Log in to MySQL

mysql -uroot -p

3. Change the password

# relax the MySQL password policy
set global validate_password_policy=0;
set global validate_password_length=1;
# reset the password
alter user 'root'@'localhost' identified by '123456';
flush privileges;

2.6. Grant MySQL Privileges

create database scm DEFAULT CHARACTER SET utf8;
# If a database upgrade causes the statements below to fail, run:
# mysql_upgrade -u root -p123456
grant all PRIVILEGES on *.* TO 'root'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;
grant all PRIVILEGES on *.* TO 'root'@'localhost' IDENTIFIED BY '123456' WITH GRANT OPTION;
grant all PRIVILEGES on *.* TO 'root'@'5gcsp-bigdata-svr1' IDENTIFIED BY '123456' WITH GRANT OPTION;
flush privileges;
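A quick, optional check that the grants took effect; a minimal sketch assuming the password 123456 configured above:

# list the grants for the remote root account
mysql -uroot -p123456 -e "SHOW GRANTS FOR 'root'@'%';"
# verify that a connection through the hostname works
mysql -h 5gcsp-bigdata-svr1 -uroot -p123456 -e "SELECT VERSION();"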

3) ZooKeeper

3.1. Download

/dist/zookeeper/

3.2. Upload and Extract

Extract the ZooKeeper archive into /export/servers and prepare to install it:

cd /export/software
tar -zxvf zookeeper-3.4.6.tar.gz -C /export/servers/

3.3. Edit the Configuration File

cd /export/servers/zookeeper-3.4.6/conf/
cp zoo_sample.cfg zoo.cfg
mkdir -p /export/servers/zookeeper-3.4.6/zkdatas/
vim zoo.cfg

Set the following:

# ZooKeeper data directory
dataDir=/export/servers/zookeeper-3.4.6/zkdatas
# number of snapshots to retain
autopurge.snapRetainCount=3
# purge interval in hours
autopurge.purgeInterval=1
# cluster server addresses
server.1=5gcsp-bigdata-svr1:2888:3888
server.2=5gcsp-bigdata-svr2:2888:3888
server.3=5gcsp-bigdata-svr3:2888:3888
server.4=5gcsp-bigdata-svr4:2888:3888
server.5=5gcsp-bigdata-svr5:2888:3888

3.4. Add the myid File

On the first server, create a file named myid under /export/servers/zookeeper-3.4.6/zkdatas/:

echo 1 > /export/servers/zookeeper-3.4.6/zkdatas/myid

3.5. Distribute the Installation and Update myid

1. On the first machine, run:

scp -r /export/servers/zookeeper-3.4.6/ 5gcsp-bigdata-svr2:/export/servers/
scp -r /export/servers/zookeeper-3.4.6/ 5gcsp-bigdata-svr3:/export/servers/
scp -r /export/servers/zookeeper-3.4.6/ 5gcsp-bigdata-svr4:/export/servers/
scp -r /export/servers/zookeeper-3.4.6/ 5gcsp-bigdata-svr5:/export/servers/

2. On the second machine, set myid to 2

echo 2 > /export/servers/zookeeper-3.4.6/zkdatas/myid

3. On the third machine, set myid to 3

echo 3 > /export/servers/zookeeper-3.4.6/zkdatas/myid

4. On the fourth machine, set myid to 4

echo 4 > /export/servers/zookeeper-3.4.6/zkdatas/myid

5. On the fifth machine, set myid to 5

echo 5 > /export/servers/zookeeper-3.4.6/zkdatas/myid

3.6. Start ZooKeeper on All Machines

1. Run this command on every machine

/export/servers/zookeeper-3.4.6/bin/zkServer.sh start

2. Check the startup status on each host

/export/servers/zookeeper-3.4.6/bin/zkServer.sh status
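Optionally, a quick sanity check once every node is up; a minimal sketch assuming the paths used above. One node should report leader mode and the rest follower mode, and the CLI should be able to list the root znode:

# expected in the status output: "Mode: leader" on one node, "Mode: follower" on the others
/export/servers/zookeeper-3.4.6/bin/zkServer.sh status
# connect with the ZooKeeper CLI and list the root znode
/export/servers/zookeeper-3.4.6/bin/zkCli.sh -server 5gcsp-bigdata-svr1:2181
ls /
quit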

4) Hadoop

4.1. Download

Link: /s/154nyt3GBOTon_shvJ_DUlg

Extraction code: kyun

4.2. Extract

On the 5gcsp-bigdata-svr1 node, run:

# extract Hadoop into /export/servers
tar -zxvf hadoop-2.7.5.tar.gz -C /export/servers/

4.3. Distribute Hadoop

On 5gcsp-bigdata-svr1, run:

scp -r /export/servers/hadoop-2.7.5 5gcsp-bigdata-svr2:/export/servers/
scp -r /export/servers/hadoop-2.7.5 5gcsp-bigdata-svr3:/export/servers/
scp -r /export/servers/hadoop-2.7.5 5gcsp-bigdata-svr4:/export/servers/
scp -r /export/servers/hadoop-2.7.5 5gcsp-bigdata-svr5:/export/servers/

4.4. Configure /etc/profile

1. On 5gcsp-bigdata-svr1, append the following to /etc/profile:

export JAVA_HOME=/usr/local/jdk1.8.0_191
export HADOOP_HOME=/export/servers/hadoop-2.7.5
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_LOG_DIR=$HADOOP_HOME/logs/yarn
export HADOOP_LOG_DIR=$HADOOP_HOME/logs/hdfs
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib/native"
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$PATH

2. Distribute this file to every machine:

scp /etc/profile 5gcsp-bigdata-svr2:/etc/
scp /etc/profile 5gcsp-bigdata-svr3:/etc/
scp /etc/profile 5gcsp-bigdata-svr4:/etc/
scp /etc/profile 5gcsp-bigdata-svr5:/etc/

3. On every machine, run:

source /etc/profile

4.5. Prepare Directories

On 5gcsp-bigdata-svr1, run:

mkdir -p /data/namenode-data
mkdir -p /data/nm-local
mkdir -p /data/nm-log

4.6. Edit the Hadoop Configuration Files

Configure these on the 5gcsp-bigdata-svr1 machine.

1. hadoop-env.sh

Add the following:

export JAVA_HOME=/usr/local/jdk1.8.0_191
export HADOOP_HOME=/export/servers/hadoop-2.7.5
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_LOG_DIR=$HADOOP_HOME/logs/yarn
export HADOOP_LOG_DIR=$HADOOP_HOME/logs/hdfs

2. core-site.xml

Add inside the configuration block:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://5gcsp-bigdata-svr1:8020</value>
</property>
<property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
</property>

3. hdfs-site.xml

<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/namenode-data</value>
  <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
</property>
<property>
  <name>dfs.namenode.hosts</name>
  <value>5gcsp-bigdata-svr2,5gcsp-bigdata-svr3,5gcsp-bigdata-svr4,5gcsp-bigdata-svr5</value>
  <description>List of permitted DataNodes.</description>
</property>
<property>
  <name>dfs.blocksize</name>
  <value>268435456</value>
</property>
<property>
  <name>dfs.namenode.handler.count</name>
  <value>100</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/dn-data-1,/data/dn-data-2,/data/dn-data-3,/data/dn-data-4,/data/dn-data-5,/data/dn-data-6,/data/dn-data-7,/data/dn-data-8</value>
  <description>DataNode data dir</description>
</property>

4. yarn-env.sh

Add:

export JAVA_HOME=/usr/local/jdk1.8.0_191
export HADOOP_HOME=/export/servers/hadoop-2.7.5
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_LOG_DIR=$HADOOP_HOME/logs/yarn
export HADOOP_LOG_DIR=$HADOOP_HOME/logs/hdfs

5. yarn-site.xml

Add inside the configuration block:

<property>
  <name>yarn.log.server.url</name>
  <value>http://5gcsp-bigdata-svr1:19888/jobhistory/logs</value>
</property>
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
  <description>Configuration to enable or disable log aggregation</description>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/tmp/logs</value>
  <description>Configuration to enable or disable log aggregation IN HDFS</description>
</property>
<!-- Site specific YARN configuration properties -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>5gcsp-bigdata-svr1</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data/nm-local</value>
  <description>Comma-separated list of paths on the local filesystem where intermediate data is written.</description>
</property>
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/data/nm-log</value>
  <description>Comma-separated list of paths on the local filesystem where logs are written.</description>
</property>
<property>
  <name>yarn.nodemanager.log.retain-seconds</name>
  <value>10800</value>
  <description>Default time (in seconds) to retain log files on the NodeManager. Only applicable if log-aggregation is disabled.</description>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
  <description>Shuffle service that needs to be set for Map Reduce applications.</description>
</property>

6. mapred-env.sh

Add:

export JAVA_HOME=/usr/local/jdk1.8.0_191

7. mapred-site.xml

Add inside the configuration block:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>5gcsp-bigdata-svr1:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>5gcsp-bigdata-svr1:19888</value>
</property>
<property>
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
  <value>/tmp/mr-history/tmp</value>
</property>
<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>/tmp/mr-history/done</value>
</property>

8. slaves

Set it to:

5gcsp-bigdata-svr2
5gcsp-bigdata-svr3
5gcsp-bigdata-svr4
5gcsp-bigdata-svr5

9. Distribute the Configuration

Copy the edited configuration files to every machine:

scp -r /export/servers/hadoop-2.7.5/etc/hadoop/* 5gcsp-bigdata-svr2:/export/servers/hadoop-2.7.5/etc/hadoop/
scp -r /export/servers/hadoop-2.7.5/etc/hadoop/* 5gcsp-bigdata-svr3:/export/servers/hadoop-2.7.5/etc/hadoop/
scp -r /export/servers/hadoop-2.7.5/etc/hadoop/* 5gcsp-bigdata-svr4:/export/servers/hadoop-2.7.5/etc/hadoop/
scp -r /export/servers/hadoop-2.7.5/etc/hadoop/* 5gcsp-bigdata-svr5:/export/servers/hadoop-2.7.5/etc/hadoop/

4.7. Prepare the Native Libraries

Upload hadoop-2.6.0+cdh5.14.4+2785-1.cdh5.14.4.p0.4.el6.x86_64.rpm and run the following on every node:

# in the directory containing hadoop-2.6.0+cdh5.14.4+2785-1.cdh5.14.4.p0.4.el6.x86_64.rpm, extract it:
rpm2cpio hadoop-2.6.0+cdh5.14.4+2785-1.cdh5.14.4.p0.4.el6.x86_64.rpm | cpio -div
# if other nodes do not have the rpm file, scp it over first
# go into the extracted path usr/lib/hadoop/lib/native and run:
cp -d * $HADOOP_HOME/lib/native/
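Optionally, once the libraries are copied you can ask Hadoop itself which native libraries it can load; a minimal check:

# reports whether libhadoop, zlib, snappy, lz4, etc. were picked up from lib/native
hadoop checknative -a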

4.8. Start Hadoop

1. On the first machine (where the NameNode runs), format the NameNode

hadoop namenode -format

2. Start HDFS and YARN

/export/servers/hadoop-2.7.5/sbin/start-dfs.sh
/export/servers/hadoop-2.7.5/sbin/start-yarn.sh

3. Or start everything at once

start-all.sh

4. Start the JobHistory server

mr-jobhistory-daemon.sh start historyserver

4.9. Verify

# HDFS web UI
http://IP:50070
# YARN web UI
http://IP:8088
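Besides the web UIs, a few command-line checks confirm the daemons are up; a minimal sketch using the standard Hadoop tools:

# list the Java daemons on each node (NameNode/DataNode/ResourceManager/NodeManager, etc.)
jps
# HDFS should report live DataNodes and their capacity
hdfs dfsadmin -report
# YARN should list the registered NodeManagers
yarn node -list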

5) Hive

5.1. Download

/dist/hive/

5.2. Extract and Rename

cd /export/software
tar -zxvf apache-hive-2.1.0-bin.tar.gz -C /export/servers
cd /export/servers
mv apache-hive-2.1.0-bin hive-2.1.0

5.3. Edit the Hive Configuration Files

1. hive-env.sh

cd /export/servers/hive-2.1.0/conf
cp hive-env.sh.template hive-env.sh
vim hive-env.sh

Set the following:

HADOOP_HOME=/export/servers/hadoop-2.7.5
export HIVE_CONF_DIR=/export/servers/hive-2.1.0/conf

2. hive-site.xml

cd /export/servers/hive-2.1.0/conf
vim hive-site.xml

Add the following to the file:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://5gcsp-bigdata-svr1:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>
  <property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.server2.thrift.bind.host</name>
    <value>5gcsp-bigdata-svr1</value>
  </property>
</configuration>

3. Upload the MySQL JDBC Driver

Upload the MySQL JDBC driver jar into Hive's lib directory:

cd /export/servers/hive-2.1.0/lib

Upload mysql-connector-java-5.1.38.jar into this directory.

4. Copy the Standalone JDBC jar

Copy hive-jdbc-2.1.0-standalone.jar from hive-2.1.0/jdbc/ into hive-2.1.0/lib/:

cp /export/servers/hive-2.1.0/jdbc/hive-jdbc-2.1.0-standalone.jar /export/servers/hive-2.1.0/lib/

5. Configure the Hive Environment Variables

On the Hive node, run the following to configure Hive's environment variables:

vim /etc/profile

Add:

export HIVE_HOME=/export/servers/hive-2.1.0
export PATH=$HIVE_HOME/bin:$PATH

5.4. Ways to Interact with Hive

1. bin/hive

cd /export/servers/hive-2.1.0/
bin/hive

Create a database:

create database mytest;
show databases;

Note: if after starting Hive the hive database and its tables do not appear in MySQL, run the following:

# manually initialize the metastore schema
schematool -dbType mysql -initSchema
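If you want to confirm the metastore schema really landed in MySQL, a quick hedged check (assumes the MySQL root password 123456 configured earlier):

# the hive database should now contain metastore tables such as DBS and TBLS
mysql -uroot -p123456 -e "USE hive; SHOW TABLES;"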

2. Interacting with SQL statements or SQL scripts

Run HQL directly without entering the Hive client:

cd /export/servers/hive-2.1.0/
bin/hive -e "create database mytest"

Or write the HQL statements into a SQL script and run that:

cd /export/servers
vim hive.sql

Script contents:

create database mytest2;
use mytest2;
create table stu(id int, name string);

Run the SQL script with hive -f:

bin/hive -f /export/servers/hive.sql

3. Beeline Client

Hive's second-generation client is beeline. Beeline does not talk to the metastore service directly; it needs a separately started hiveserver2 service. On the server where Hive runs, first start the metastore service, then start hiveserver2:

nohup /export/servers/hive-2.1.0/bin/hive --service metastore &
nohup /export/servers/hive-2.1.0/bin/hive --service hiveserver2 &

On the node where Hive is installed, connect with the beeline client:

/export/servers/hive-2.1.0/bin/beeline

Follow the prompts:

[root@node3 ~]# /export/server/hive-2.1.0/bin/beeline
which: no hbase in (:/export/server/hive-2.1.0/bin::/export/server/hadoop-2.7.5/bin:/export/server/hadoop-2.7.5/sbin::/export/server/jdk1.8.0_241/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/export/server/mysql-5.7.29/bin:/root/bin)
Beeline version 2.1.0 by Apache Hive
beeline> !connect jdbc:hive2://5gcsp-bigdata-svr1:10000
Connecting to jdbc:hive2://node3:10000
Enter username for jdbc:hive2://node3:10000: root
Enter password for jdbc:hive2://node3:10000: 123456

Note: if the following error is reported, edit Hadoop's core-site.xml.

Error message: User: root is not allowed to impersonate root

Fix: add the following to core-site.xml on the first machine:

<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>

After adding it, send core-site.xml to the other machines:

cd /export/servers/hadoop-2.7.5/etc/hadoop
scp core-site.xml 5gcsp-bigdata-svr2:$PWD
scp core-site.xml 5gcsp-bigdata-svr3:$PWD
scp core-site.xml 5gcsp-bigdata-svr4:$PWD
scp core-site.xml 5gcsp-bigdata-svr5:$PWD

Restart the services and connect again; the beeline connection should now succeed.

6) Sqoop

6.1. Extract and Rename

tar -zxvf sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz -C /export/servers/
cd /export/servers/
mv sqoop-1.4.6.bin__hadoop-2.0.4-alpha sqoop

6.2. Copy the MySQL JDBC Driver into the lib Directory

cd /export/servers/sqoop/lib
# copy (or upload) the MySQL JDBC driver jar, e.g. mysql-connector-java-5.1.38.jar, into this directory

6.3. Configuration File

cd /export/servers/sqoop/conf
cp sqoop-env-template.sh sqoop-env.sh
vim sqoop-env.sh

# edit the configuration file:
#Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/export/servers/hadoop-2.7.5
#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/export/servers/hadoop-2.7.5
#set the path to where bin/hbase is available
#export HBASE_HOME=
#Set the path to where bin/hive is available
export HIVE_HOME=/export/servers/hive-2.1.0
#Set the path for where zookeper config dir is
#export ZOOCFGDIR=

6.4. Test

cd /export/servers/sqoop/bin
sqoop-version

6.5. Example Statements

# create a Hive table with the same structure as the MySQL table
sqoop create-hive-table \
--connect jdbc:mysql://5gcsp-bigdata-svr1:3306/test \
--table emp \
--username root \
--password 123456 \
--hive-table sqooptohive.emp

# import data from the MySQL table into Hive
sqoop import \
--connect jdbc:mysql://5gcsp-bigdata-svr1:3306/test \
--username root \
--password 123456 \
--table emp \
--hive-table sqooptohive.emp \
--hive-import \
-m 1
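Before running the imports, a quick hedged connectivity check is often useful; the credentials below simply reuse those from the examples above:

# confirm Sqoop can reach MySQL through the JDBC driver
sqoop list-databases \
--connect jdbc:mysql://5gcsp-bigdata-svr1:3306 \
--username root \
--password 123456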

7) HBase

7.1. Upload and Extract the HBase Package

tar -zxvf hbase-1.6.0-bin.tar.gz -C /export/servers/

7.2. Edit the HBase Configuration Files

1. hbase-env.sh

cd /export/servers/hbase-1.6.0/conf
vim hbase-env.sh
# around line 28
export JAVA_HOME=/usr/local/jdk1.8.0_191
export HBASE_MANAGES_ZK=false

2. hbase-site.xml

vim hbase-site.xml

<configuration>
  <!-- where HBase stores its data in HDFS -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://5gcsp-bigdata-svr1:8020/hbase</value>
  </property>
  <!-- HBase run mode: false = standalone, true = distributed.
       If false, HBase and ZooKeeper run in the same JVM -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- ZooKeeper quorum -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>5gcsp-bigdata-svr1,5gcsp-bigdata-svr2,5gcsp-bigdata-svr3,5gcsp-bigdata-svr4,5gcsp-bigdata-svr5</value>
  </property>
  <!-- where ZooKeeper stores its snapshots -->
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/export/servers/zookeeper-3.4.6/zkdatas</value>
  </property>
  <!-- for version 2.1+ in distributed mode, set to false -->
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
</configuration>

7.3. Configure Environment Variables

# configure the HBase environment variables
vim /etc/profile
export HBASE_HOME=/export/servers/hbase-1.6.0
export PATH=$PATH:${HBASE_HOME}/bin:${HBASE_HOME}/sbin
# load the environment variables
source /etc/profile

7.4. Copy the htrace jar into lib

Whether this step is needed depends on the version: check whether lib already contains htrace-core-3.1.0-incubating.jar; if it does, skip this step.

cp $HBASE_HOME/lib/client-facing-thirdparty/htrace-core-3.1.0-incubating.jar $HBASE_HOME/lib/

7.5. Edit the regionservers File

vim regionservers

5gcsp-bigdata-svr1
5gcsp-bigdata-svr2
5gcsp-bigdata-svr3
5gcsp-bigdata-svr4
5gcsp-bigdata-svr5

7.6. Distribute the Installation and Configuration

cd /export/servers
scp -r hbase-1.6.0/ 5gcsp-bigdata-svr2:$PWD
scp -r hbase-1.6.0/ 5gcsp-bigdata-svr3:$PWD
scp -r hbase-1.6.0/ 5gcsp-bigdata-svr4:$PWD
scp -r hbase-1.6.0/ 5gcsp-bigdata-svr5:$PWD

On the remaining nodes, configure and load the environment variables:

# configure the HBase environment variables
vim /etc/profile
export HBASE_HOME=/export/servers/hbase-1.6.0
export PATH=$PATH:${HBASE_HOME}/bin:${HBASE_HOME}/sbin
# load the environment variables
source /etc/profile

7.7. Set Up HBase High Availability

1. Create a backup-masters file in HBase's conf directory

cd /export/servers/hbase-1.6.0/conf/
touch backup-masters

2. Add the backup master nodes to the file

vim backup-masters

5gcsp-bigdata-svr2
5gcsp-bigdata-svr3

3. Distribute backup-masters to all server nodes

scp backup-masters 5gcsp-bigdata-svr2:$PWD
scp backup-masters 5gcsp-bigdata-svr3:$PWD
scp backup-masters 5gcsp-bigdata-svr4:$PWD
scp backup-masters 5gcsp-bigdata-svr5:$PWD

7.8. Fixing the hbase/filter Error

If, when using HBase together with Sqoop later on, you hit "Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/filter/Filter", resolve it as follows:

1. Stop all Hadoop processes

7.9. Fixing Missing HBase jars in Sqoop's lib

ERROR tool.ImportTool: Error during import: HBase jars are not present in classpath, cannot import to HBase!

Cause: Sqoop's lib directory does not contain the HBase jars.

Fix: copy hbase-annotations.jar, hbase-common.jar and hbase-protocol.jar from HBase's lib directory into Sqoop's lib directory; if the problem persists, copy all of the jars from HBase's lib into Sqoop's lib.

cd /export/servers/hbase-1.6.0/lib
cp * /export/servers/sqoop/lib
# if prompted to overwrite a file, answer n

7.10. Start HBase

cd /export/servers
# start ZooKeeper
./start-zk.sh
# start Hadoop
start-dfs.sh
# start HBase
start-hbase.sh

7.11. Verify that HBase Started

# start the hbase shell client and run "status"
hbase shell

[root@node1 onekey]# hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/export/server/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/export/server/hbase-1.6.0/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See /codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
Version 2.1.0, re1673bb0bbfea21d6e5dba73e013b09b8b49b89b, Tue Jul 10 17:26:48 CST
Took 0.0034 seconds
Ignoring executable-hooks-1.6.0 because its extensions are not built. Try: gem pristine executable-hooks --version 1.6.0
Ignoring gem-wrappers-1.4.0 because its extensions are not built. Try: gem pristine gem-wrappers --version 1.4.0
2.4.1 :001 > status
1 active master, 0 backup masters, 3 servers, 0 dead, 0.6667 average load
Took 0.4562 seconds
2.4.1 :002 >

7.12. WebUI

http://5gcsp-bigdata-svr1:16010/master-status
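A short optional smoke test inside the hbase shell; the table and column family names are only illustrative:

create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:msg', 'hello'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'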

8) Elasticsearch

8.1. Create a Regular User

Run as root on all machines:

useradd itcast
passwd itcast

8.2. Give the itcast User sudo Privileges

On all machines, run visudo as root and grant the itcast user privileges:

visudo
# around line 100
itcast ALL=(ALL) ALL

8.3. Upload and Extract the Package

Perform the following as root; the directory must be created on every machine:

mkdir -p /export/servers/es
chown -R itcast:itcast /export/servers/es

Download the ES package, upload it to /export/software on 5gcsp-bigdata-svr1, and extract it as the itcast user:

# extract Elasticsearch
cd /export/software/
tar -zvxf elasticsearch-7.6.1-linux-x86_64.tar.gz -C /export/servers/es/

8.4. Edit the Configuration Files

1. elasticsearch.yml

On 5gcsp-bigdata-svr1, edit the configuration file as the itcast user:

cd /export/servers/es/elasticsearch-7.6.1/config
mkdir -p /export/servers/es/elasticsearch-7.6.1/log
mkdir -p /export/servers/es/elasticsearch-7.6.1/data
rm -rf elasticsearch.yml
vim elasticsearch.yml

cluster.name: itcast-es
node.name: 5gcsp-bigdata-svr1
path.data: /export/servers/es/elasticsearch-7.6.1/data
path.logs: /export/servers/es/elasticsearch-7.6.1/log
network.host: 5gcsp-bigdata-svr1
http.port: 9200
discovery.seed_hosts: ["5gcsp-bigdata-svr1", "5gcsp-bigdata-svr2", "5gcsp-bigdata-svr3", "5gcsp-bigdata-svr4", "5gcsp-bigdata-svr5"]
cluster.initial_master_nodes: ["5gcsp-bigdata-svr1", "5gcsp-bigdata-svr2"]
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"

2. jvm.options

As the itcast user, adjust the JVM heap size; tune the values to the amount of memory on your servers:

cd /export/servers/es/elasticsearch-7.6.1/config
vim jvm.options

-Xms2g
-Xmx2g

8.5. Distribute the Installation to the Other Servers

As the itcast user, copy the installation to the other servers:

cd /export/servers/es/
scp -r elasticsearch-7.6.1/ 5gcsp-bigdata-svr2:$PWD
scp -r elasticsearch-7.6.1/ 5gcsp-bigdata-svr3:$PWD
scp -r elasticsearch-7.6.1/ 5gcsp-bigdata-svr4:$PWD
scp -r elasticsearch-7.6.1/ 5gcsp-bigdata-svr5:$PWD

8.6. Edit the ES Configuration on the Other Nodes

As the itcast user, edit the ES configuration on each remaining node, changing node.name and network.host to that node's hostname (shown here for the second node; do the same on the others):

cd /export/servers/es/elasticsearch-7.6.1/config
mkdir -p /export/servers/es/elasticsearch-7.6.1/log
mkdir -p /export/servers/es/elasticsearch-7.6.1/data
rm -rf elasticsearch.yml
vim elasticsearch.yml

cluster.name: itcast-es
node.name: 5gcsp-bigdata-svr2
path.data: /export/servers/es/elasticsearch-7.6.1/data
path.logs: /export/servers/es/elasticsearch-7.6.1/log
network.host: 5gcsp-bigdata-svr2
http.port: 9200
discovery.seed_hosts: ["5gcsp-bigdata-svr1", "5gcsp-bigdata-svr2", "5gcsp-bigdata-svr3", "5gcsp-bigdata-svr4", "5gcsp-bigdata-svr5"]
cluster.initial_master_nodes: ["5gcsp-bigdata-svr1", "5gcsp-bigdata-svr2"]
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"

8.7. Adjust System Settings to Avoid Startup Errors

1. Maximum number of open files for regular users

Run as the itcast user on all machines:

sudo vi /etc/security/limits.conf

# add the following:
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096

2. Thread limit for regular users

Run as the itcast user on all machines:

# CentOS 6
sudo vi /etc/security/limits.d/90-nproc.conf
# CentOS 7
sudo vi /etc/security/limits.d/20-nproc.conf

# find:
* soft nproc 1024
# and change it to:
* soft nproc 4096

3. Increase the virtual memory map count for regular users

Run as the itcast user on all machines:

# option 1: temporary, lost when the session ends (fine for test environments)
sudo sysctl -w vm.max_map_count=262144

# option 2: permanent (use this in production)
sudo vim /etc/sysctl.conf
# add the following line at the end:
vm.max_map_count=262144

Note: after the three changes above, reconnect your SSH session (SecureCRT or Xshell) for them to take effect.

8.8. Start the ES Service

nohup /export/servers/es/elasticsearch-7.6.1/bin/elasticsearch 2>&1 &

After it starts, jps shows the Elasticsearch process, and the HTTP endpoint is reachable at:

http://5gcsp-bigdata-svr1:9200/?pretty

Note: if the service fails to start on a machine, check the error logs under /export/servers/es/elasticsearch-7.6.1/log on that machine.
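For a quick check that the whole cluster formed, the standard cluster APIs can be queried from any node; a minimal sketch with curl:

# overall health: status should be green (or yellow) and number_of_nodes should be 5
curl "http://5gcsp-bigdata-svr1:9200/_cluster/health?pretty"
# list the nodes that joined the cluster
curl "http://5gcsp-bigdata-svr1:9200/_cat/nodes?v"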

8.9. Elasticsearch-head Plugin

8.9.1. Install Node.js

1. On the first machine, download the package and extract it

cd ~
wget /mirrors/node/v8.1.0/node-v8.1.0-linux-x64.tar.gz
tar -zxvf node-v8.1.0-linux-x64.tar.gz -C /export/servers/es/

2. Create symlinks

Run the following to create the symlinks:

sudo ln -s /export/servers/es/node-v8.1.0-linux-x64/lib/node_modules/npm/bin/npm-cli.js /usr/local/bin/npm
sudo ln -s /export/servers/es/node-v8.1.0-linux-x64/bin/node /usr/local/bin/node

3. Update the environment variables

Add the environment variables on the server:

sudo vim /etc/profile

export NODE_HOME=/export/servers/es/node-v8.1.0-linux-x64
export PATH=$PATH:$NODE_HOME/bin

# reload the environment variables after editing
source /etc/profile

4. Verify the installation

Run the following to verify it works:

node -v
npm -v

8.9.2. Local Installation

1. Upload the package

Upload the pre-compiled package elasticsearch-head-compile-after.tar.gz to /export/software on the machine.

2. Extract the package

Run the following to extract it:

cd /export/software
tar -zxvf elasticsearch-head-compile-after.tar.gz -C /export/servers/es/

3. Edit Gruntfile.js on the first machine

cd /export/servers/es/elasticsearch-head
vim Gruntfile.js

# around line 93, find:
hostname: '192.168.100.100',
# and change it to the hostname of the current machine

4. Edit app.js on the first machine

cd /export/servers/es/elasticsearch-head/_site
vim app.js
# type :4354 in vim to jump to line 4354
# change http://localhost:9200 to http://5gcsp-bigdata-svr1:9200

5. Fix the "cluster not connected" problem

Open elasticsearch.yml in the Elasticsearch config directory and add the following at the end of the file:

cd /export/servers/es/elasticsearch-7.6.1/config
vim elasticsearch.yml

# add at the end of the file:
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"

6. Start the head service

Start the elasticsearch-head plugin:

cd /export/servers/es/elasticsearch-head/node_modules/grunt/bin/

# start in the foreground
./grunt server
# or start in the background
nohup ./grunt server >/dev/null 2>&1 &

The UI is served on port 9100.

7. Stopping the elasticsearch-head process

Run the following to find the elasticsearch-head process, then kill it with kill -9:

netstat -nltp | grep 9100
kill -9 8328   # replace 8328 with the PID found above

9) Spark

9.1. Download

/apache/spark/releases

/downloads.html

/dist/spark/spark-2.4.5/

9.2. Local Installation

# extract the package
tar -zxvf spark-2.4.7-bin-hadoop2.7.tgz -C /export/servers
# create a symlink to make future upgrades easier
ln -s /export/servers/spark-2.4.7-bin-hadoop2.7 /export/servers/spark
# if you run into permission problems you can chown to root for convenience while learning;
# in real deployments use the user and permissions assigned by operations
chown -R root /export/servers/spark-2.4.7-bin-hadoop2.7
chgrp -R root /export/servers/spark-2.4.7-bin-hadoop2.7
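To confirm the local installation works, the bundled SparkPi example can be submitted in local mode; a minimal sketch (the examples jar name may differ slightly depending on the exact build):

# interactive check
/export/servers/spark/bin/spark-shell --master local[2]
# or submit the bundled SparkPi example
/export/servers/spark/bin/spark-submit \
  --master local[2] \
  --class org.apache.spark.examples.SparkPi \
  /export/servers/spark/examples/jars/spark-examples_2.11-2.4.7.jar 10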

9.3. Standalone Cluster Installation

1. Edit slaves

# go to the conf directory
cd /export/servers/spark/conf
# rename the template
mv slaves.template slaves
vim slaves
# contents:
5gcsp-bigdata-svr2
5gcsp-bigdata-svr3

2. Edit spark-env.sh

# go to the conf directory
cd /export/servers/spark/conf
# rename the template
mv spark-env.sh.template spark-env.sh
# edit the file
vim spark-env.sh

Set the following:

## JAVA installation directory
JAVA_HOME=/usr/local/jdk1.8.0_191
## Hadoop configuration directory, for reading files on HDFS and running against the YARN cluster
HADOOP_CONF_DIR=/export/servers/hadoop-2.7.5/etc/hadoop
YARN_CONF_DIR=/export/servers/hadoop-2.7.5/etc/hadoop
## Spark Master host and the port jobs are submitted to
export SPARK_MASTER_HOST=5gcsp-bigdata-svr1
export SPARK_MASTER_PORT=7077
SPARK_MASTER_WEBUI_PORT=8080
SPARK_WORKER_CORES=1
SPARK_WORKER_MEMORY=1g

3. Distribute

cd /export/servers/
scp -r spark-2.4.7-bin-hadoop2.7 root@5gcsp-bigdata-svr2:$PWD
scp -r spark-2.4.7-bin-hadoop2.7 root@5gcsp-bigdata-svr3:$PWD
scp -r spark-2.4.7-bin-hadoop2.7 root@5gcsp-bigdata-svr4:$PWD
scp -r spark-2.4.7-bin-hadoop2.7 root@5gcsp-bigdata-svr5:$PWD
## create the symlink on each node
ln -s /export/servers/spark-2.4.7-bin-hadoop2.7 /export/servers/spark

9.4. Standalone HA Setup

1. Configure on the master node

vim /export/servers/spark/conf/spark-env.sh

Comment out or delete the SPARK_MASTER_HOST line:

# SPARK_MASTER_HOST=5gcsp-bigdata-svr1

Add the following:

SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=5gcsp-bigdata-svr1:2181,5gcsp-bigdata-svr2:2181,5gcsp-bigdata-svr3:2181,5gcsp-bigdata-svr4:2181,5gcsp-bigdata-svr5:2181 -Dspark.deploy.zookeeper.dir=/spark-ha"

2. Distribute spark-env.sh to the cluster

cd /export/servers/spark/conf
scp -r spark-env.sh root@5gcsp-bigdata-svr2:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr3:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr4:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr5:$PWD
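The original does not show how the HA masters are brought up; a minimal sketch using the standard standalone scripts shipped with Spark:

# on 5gcsp-bigdata-svr1: start the Master and all Workers listed in slaves
/export/servers/spark/sbin/start-all.sh
# on the node chosen as the standby (for example 5gcsp-bigdata-svr2): start a second Master
/export/servers/spark/sbin/start-master.sh
# ZooKeeper then elects one Master as ALIVE and keeps the other in STANDBY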

9.5. Spark on YARN

1. Edit spark-env.sh

cd /export/servers/spark/conf
vim /export/servers/spark/conf/spark-env.sh

## add the following
## Hadoop configuration directory, for reading files on HDFS and running against the YARN cluster
HADOOP_CONF_DIR=/export/servers/hadoop-2.7.5/etc/hadoop
YARN_CONF_DIR=/export/servers/hadoop-2.7.5/etc/hadoop

Sync:

cd /export/servers/spark/conf
scp -r spark-env.sh root@5gcsp-bigdata-svr2:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr3:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr4:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr5:$PWD

2. Integrate the YARN history server and disable the memory checks

Edit on the master node:

cd /export/servers/hadoop/etc/hadoop
vim /export/servers/hadoop/etc/hadoop/yarn-site.xml

Add:

<!-- YARN cluster memory allocation -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>20480</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>
<!-- how long aggregated logs are kept in HDFS -->
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>
<!-- disable the YARN memory checks -->
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

3. Add the proxy server configuration to yarn-site.xml

<property>
  <name>yarn.web-proxy.address</name>
  <value>5gcsp-bigdata-svr1:8089</value>
</property>

Sync:

cd /export/servers/hadoop-2.7.5/etc/hadoop
scp -r yarn-site.xml root@5gcsp-bigdata-svr2:$PWD
scp -r yarn-site.xml root@5gcsp-bigdata-svr3:$PWD
scp -r yarn-site.xml root@5gcsp-bigdata-svr4:$PWD
scp -r yarn-site.xml root@5gcsp-bigdata-svr5:$PWD

4. Configure the Spark history server

## go to the conf directory
cd /export/servers/spark/conf
## rename the template
mv spark-defaults.conf.template spark-defaults.conf
vim spark-defaults.conf

Add:

spark.eventLog.enabled            true
spark.eventLog.dir                hdfs://5gcsp-bigdata-svr1:8020/sparklog/
spark.eventLog.compress           true
spark.yarn.historyServer.address  5gcsp-bigdata-svr1:18080

5. Edit spark-env.sh

# go to the conf directory
cd /export/servers/spark/conf
# edit the file
vim spark-env.sh

Add the following:

## Spark history server settings
SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://5gcsp-bigdata-svr1:8020/sparklog/ -Dspark.history.fs.cleaner.enabled=true"

Note: the sparklog directory has to be created manually:

hadoop fs -mkdir -p /sparklog

6. Set the log level

## go to the conf directory
cd /export/servers/spark/conf
## rename the log properties template
mv log4j.properties.template log4j.properties
## change the log level
vim log4j.properties
# change INFO to WARN

Sync:

cd /export/servers/spark/conf
scp -r spark-env.sh root@5gcsp-bigdata-svr2:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr3:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr4:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr5:$PWD
scp -r spark-defaults.conf root@5gcsp-bigdata-svr2:$PWD
scp -r spark-defaults.conf root@5gcsp-bigdata-svr3:$PWD
scp -r spark-defaults.conf root@5gcsp-bigdata-svr4:$PWD
scp -r spark-defaults.conf root@5gcsp-bigdata-svr5:$PWD
scp -r log4j.properties root@5gcsp-bigdata-svr2:$PWD
scp -r log4j.properties root@5gcsp-bigdata-svr3:$PWD
scp -r log4j.properties root@5gcsp-bigdata-svr4:$PWD
scp -r log4j.properties root@5gcsp-bigdata-svr5:$PWD

7. Configure the Spark jars on HDFS

## create a directory on HDFS for the Spark jars
hadoop fs -mkdir -p /spark/jars/
## upload all jars from $SPARK_HOME/jars
hadoop fs -put /export/servers/spark/jars/* /spark/jars/

Add the Spark jar location to spark-defaults.conf:

vim /export/servers/spark/conf/spark-defaults.conf

spark.yarn.jars  hdfs://5gcsp-bigdata-svr1:8020/spark/jars/*

Sync:

cd /export/servers/spark/conf
scp -r spark-defaults.conf root@5gcsp-bigdata-svr2:$PWD
scp -r spark-defaults.conf root@5gcsp-bigdata-svr3:$PWD
scp -r spark-defaults.conf root@5gcsp-bigdata-svr4:$PWD
scp -r spark-defaults.conf root@5gcsp-bigdata-svr5:$PWD

9.6. Start

Note: Spark depends on Hadoop, so Hadoop must be started before Spark.

## start the HDFS and YARN services
start-all.sh
## start the MR HistoryServer service on the first node
mr-jobhistory-daemon.sh start historyserver
## start the Spark HistoryServer service on the first node
/export/servers/spark/sbin/start-history-server.sh
## start the YARN ProxyServer service
/export/servers/hadoop-2.7.5/sbin/yarn-daemon.sh start proxyserver
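With everything running, a Spark-on-YARN submission can be verified with the bundled SparkPi example; a minimal sketch (the examples jar name may vary with the exact build):

/export/servers/spark/bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  /export/servers/spark/examples/jars/spark-examples_2.11-2.4.7.jar 10
# the finished application then shows up in the YARN UI (port 8088) and the Spark history server (port 18080)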

9.7.WebUI

http://5gcsp-bigdata-svr1:18080/

10) Kafka

10.1. Prepare the Directories

# installation packages: /export/software
# installed software:    /export/servers
# data:                  /export/data
# logs:                  /export/logs

# create them if they do not exist:
mkdir -p /export/servers/
mkdir -p /export/software/
mkdir -p /export/data/
mkdir -p /export/logs/

10.2. Download

/dist/kafka/

/dyn/closer.cgi?path=/kafka/1.0.0/kafka_2.11-1.0.0.tgz

10.3. Upload and Extract the Package

tar -zxvf kafka_2.11-1.0.0.tgz -C /export/servers/
cd /export/servers/
mv kafka_2.11-1.0.0 kafka

10.4. Configure Environment Variables

vim /etc/profile

#KAFKA_HOME
export KAFKA_HOME=/export/servers/kafka
export PATH=$PATH:$KAFKA_HOME/bin

source /etc/profile

10.5. Distribute the Installation

scp -r /export/servers/kafka 5gcsp-bigdata-svr2:/export/servers
scp -r /export/servers/kafka 5gcsp-bigdata-svr3:/export/servers
scp -r /export/servers/kafka 5gcsp-bigdata-svr4:/export/servers
scp -r /export/servers/kafka 5gcsp-bigdata-svr5:/export/servers

scp /etc/profile 5gcsp-bigdata-svr2:/etc/profile
scp /etc/profile 5gcsp-bigdata-svr3:/etc/profile
scp /etc/profile 5gcsp-bigdata-svr4:/etc/profile
scp /etc/profile 5gcsp-bigdata-svr5:/etc/profile

# on each node:
source /etc/profile

10.6. Edit the Kafka Configuration File

10.6.1. Back Up the Original Configuration File

mv /export/servers/kafka/config/server.properties /export/servers/kafka/config/server.properties.bak
vim /export/servers/kafka/config/server.properties

10.6.2. Edit the Configuration File

Six settings mainly need to be changed:

1) broker.id — every Kafka broker needs its own unique id
2) log.dirs — where the data is stored
3) zookeeper.connect — the ZooKeeper connection string
4) delete.topic.enable — whether topics can actually be deleted
5) host.name — the hostname of the machine
6) listeners=PLAINTEXT://5gcsp-bigdata-svr1:9092 — adjusted per host

1. Edit server.properties on the first machine

vim /export/servers/kafka/config/server.properties
# clear the file first (ggdG or :%d in vim), then add the following:

# broker.id must be unique on every broker (0 here, 1 on the second machine, and so on)
broker.id=0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/export/data/kafka/kafka-logs
num.partitions=4
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=5gcsp-bigdata-svr1:2181,5gcsp-bigdata-svr2:2181,5gcsp-bigdata-svr3:2181,5gcsp-bigdata-svr4:2181,5gcsp-bigdata-svr5:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
host.name=5gcsp-bigdata-svr1

2. Edit server.properties on the second machine

Same contents as above, except for the per-broker settings:

broker.id=1
host.name=5gcsp-bigdata-svr2

3. Edit server.properties on the third machine

broker.id=2
host.name=5gcsp-bigdata-svr3

Do the same on the fourth and fifth machines, changing only the per-broker settings.

10.6.3. Configuration Reference

# unique ID of this Kafka broker
broker.id=0
# allow topics to actually be deleted
delete.topic.enable=true
# number of network request handler threads
num.network.threads=10
# number of disk I/O threads
num.io.threads=20
# socket send buffer size in bytes
socket.send.buffer.bytes=1024000
# socket receive buffer size in bytes
socket.receive.buffer.bytes=1024000
# maximum request size in bytes
socket.request.max.bytes=1048576000
# where the message logs are stored
log.dirs=/export/data/kafka/kafka-logs
# default number of partitions per topic
num.partitions=4
# topic retention time in hours
log.retention.hours=168
# ZooKeeper connection string
zookeeper.connect=5gcsp-bigdata-svr1:2181,5gcsp-bigdata-svr2:2181,5gcsp-bigdata-svr3:2181
# ZooKeeper connection timeout
zookeeper.connection.timeout.ms=60000

10.7. Start

Start ZooKeeper first.

Then start Kafka on every broker machine:

# foreground
/export/servers/kafka/bin/kafka-server-start.sh /export/servers/kafka/config/server.properties
# background
nohup /export/servers/kafka/bin/kafka-server-start.sh /export/servers/kafka/config/server.properties >/dev/null 2>&1 &
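A short optional smoke test with the console tools shipped with Kafka 1.0 (the topic name is only illustrative; topic management in this version goes through the ZooKeeper address):

# create and list a test topic
kafka-topics.sh --create --zookeeper 5gcsp-bigdata-svr1:2181 --replication-factor 2 --partitions 3 --topic smoke_test
kafka-topics.sh --list --zookeeper 5gcsp-bigdata-svr1:2181
# produce a few messages in one terminal...
kafka-console-producer.sh --broker-list 5gcsp-bigdata-svr1:9092 --topic smoke_test
# ...and consume them from another
kafka-console-consumer.sh --bootstrap-server 5gcsp-bigdata-svr1:9092 --topic smoke_test --from-beginning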

11) Flink

11.1. Download

/dist/flink/

11.2. Local Installation

1. Upload the package to the chosen directory on 5gcsp-bigdata-svr1

2. Extract it

tar -zxvf flink-1.12.0-bin-scala_2.12.tgz

3. If there are permission problems, fix the ownership

chown -R root:root /export/servers/flink-1.12.0

4. Rename it or create a symlink

mv flink-1.12.0 flink
# or
ln -s /export/servers/flink-1.12.0 /export/servers/flink
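To check the local installation, the bundled batch WordCount example can be run against the local standalone cluster; a minimal sketch:

/export/servers/flink/bin/start-cluster.sh
# runs WordCount on built-in sample data and prints the result
/export/servers/flink/bin/flink run /export/servers/flink/examples/batch/WordCount.jar
/export/servers/flink/bin/stop-cluster.sh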

11.3. Standalone Cluster Installation

1. Edit flink-conf.yaml

vim /export/servers/flink/conf/flink-conf.yaml

jobmanager.rpc.address: 5gcsp-bigdata-svr1
taskmanager.numberOfTaskSlots: 2
web.submit.enable: true

# history server
jobmanager.archive.fs.dir: hdfs://5gcsp-bigdata-svr1:8020/flink/completed-jobs/
historyserver.web.address: 5gcsp-bigdata-svr1
historyserver.web.port: 8082
historyserver.archive.fs.dir: hdfs://5gcsp-bigdata-svr1:8020/flink/completed-jobs/

2. Edit masters

vim /export/servers/flink/conf/masters

5gcsp-bigdata-svr1:8081

3. Edit workers

vim /export/servers/flink/conf/workers

5gcsp-bigdata-svr1
5gcsp-bigdata-svr2
5gcsp-bigdata-svr3
5gcsp-bigdata-svr4
5gcsp-bigdata-svr5

4. Add the HADOOP_CONF_DIR environment variable

vim /etc/profile

export HADOOP_CONF_DIR=/export/servers/hadoop/etc/hadoop

5. Distribute

cd /export/servers
scp -r /export/servers/flink 5gcsp-bigdata-svr2:/export/servers/flink
scp -r /export/servers/flink 5gcsp-bigdata-svr3:/export/servers/flink
scp -r /export/servers/flink 5gcsp-bigdata-svr4:/export/servers/flink
scp -r /export/servers/flink 5gcsp-bigdata-svr5:/export/servers/flink

scp /etc/profile 5gcsp-bigdata-svr2:/etc/profile
scp /etc/profile 5gcsp-bigdata-svr3:/etc/profile
scp /etc/profile 5gcsp-bigdata-svr4:/etc/profile
scp /etc/profile 5gcsp-bigdata-svr5:/etc/profile

# on each node:
source /etc/profile

11.4. Standalone HA Setup

1. Start ZooKeeper

zkServer.sh status
zkServer.sh stop
zkServer.sh start

2. Start HDFS

/export/servers/hadoop/sbin/start-dfs.sh

3. Stop the Flink cluster

/export/servers/flink/bin/stop-cluster.sh

4. Edit flink-conf.yaml

vim /export/servers/flink/conf/flink-conf.yaml

# add the following:
state.backend: filesystem
state.backend.fs.checkpointdir: hdfs://5gcsp-bigdata-svr1:8020/flink-checkpoints
high-availability: zookeeper
high-availability.storageDir: hdfs://5gcsp-bigdata-svr1:8020/flink/ha/
high-availability.zookeeper.quorum: 5gcsp-bigdata-svr1:2181,5gcsp-bigdata-svr2:2181,5gcsp-bigdata-svr3:2181,5gcsp-bigdata-svr4:2181,5gcsp-bigdata-svr5:2181

What these settings mean:

# use the filesystem state backend for snapshots
state.backend: filesystem
# enable checkpoints and store the snapshots in HDFS
state.backend.fs.checkpointdir: hdfs://5gcsp-bigdata-svr1:8020/flink-checkpoints
# use ZooKeeper for high availability
high-availability: zookeeper
# store the JobManager metadata in HDFS
high-availability.storageDir: hdfs://5gcsp-bigdata-svr1:8020/flink/ha/
# ZooKeeper cluster addresses
high-availability.zookeeper.quorum: 5gcsp-bigdata-svr1:2181,5gcsp-bigdata-svr2:2181,5gcsp-bigdata-svr3:2181

5. Edit masters

vim /export/servers/flink/conf/masters

5gcsp-bigdata-svr1:8081
5gcsp-bigdata-svr2:8081

6. Sync

scp -r /export/servers/flink/conf/flink-conf.yaml 5gcsp-bigdata-svr2:/export/servers/flink/conf/
scp -r /export/servers/flink/conf/flink-conf.yaml 5gcsp-bigdata-svr3:/export/servers/flink/conf/
scp -r /export/servers/flink/conf/flink-conf.yaml 5gcsp-bigdata-svr4:/export/servers/flink/conf/
scp -r /export/servers/flink/conf/flink-conf.yaml 5gcsp-bigdata-svr5:/export/servers/flink/conf/
scp -r /export/servers/flink/conf/masters 5gcsp-bigdata-svr2:/export/servers/flink/conf/
scp -r /export/servers/flink/conf/masters 5gcsp-bigdata-svr3:/export/servers/flink/conf/
scp -r /export/servers/flink/conf/masters 5gcsp-bigdata-svr4:/export/servers/flink/conf/
scp -r /export/servers/flink/conf/masters 5gcsp-bigdata-svr5:/export/servers/flink/conf/

7. Edit flink-conf.yaml on 5gcsp-bigdata-svr2

vim /export/servers/flink/conf/flink-conf.yaml

jobmanager.rpc.address: 5gcsp-bigdata-svr2

8. Restart the Flink cluster from 5gcsp-bigdata-svr1

/export/servers/flink/bin/stop-cluster.sh
/export/servers/flink/bin/start-cluster.sh

9. Check the logs; an error shows up

cat /export/servers/flink/log/flink-root-standalonesession-0-5gcsp-bigdata-svr1.log

10. Download the Hadoop shaded jar, place it in Flink's lib directory and distribute it so that Flink can talk to Hadoop

Download: /downloads.html

Place it in the lib directory:

cd /export/servers/flink/lib

11. Distribute

scp flink-shaded-hadoop-2-uber-2.7.5-10.0.jar 5gcsp-bigdata-svr2:/export/servers/flink/lib
scp flink-shaded-hadoop-2-uber-2.7.5-10.0.jar 5gcsp-bigdata-svr3:/export/servers/flink/lib
scp flink-shaded-hadoop-2-uber-2.7.5-10.0.jar 5gcsp-bigdata-svr4:/export/servers/flink/lib
scp flink-shaded-hadoop-2-uber-2.7.5-10.0.jar 5gcsp-bigdata-svr5:/export/servers/flink/lib

12. Restart the Flink cluster from 5gcsp-bigdata-svr1

/export/servers/flink/bin/start-cluster.sh
# jps now shows the processes are up

11.5. Flink on YARN

1. Disable the YARN memory checks

vim /export/servers/hadoop-2.7.5/etc/hadoop/yarn-site.xml

# add:
<!-- disable the YARN memory checks -->
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

2. Sync

scp -r /export/servers/hadoop/etc/hadoop/yarn-site.xml 5gcsp-bigdata-svr2:/export/servers/hadoop/etc/hadoop/yarn-site.xml
scp -r /export/servers/hadoop/etc/hadoop/yarn-site.xml 5gcsp-bigdata-svr3:/export/servers/hadoop/etc/hadoop/yarn-site.xml
scp -r /export/servers/hadoop/etc/hadoop/yarn-site.xml 5gcsp-bigdata-svr4:/export/servers/hadoop/etc/hadoop/yarn-site.xml
scp -r /export/servers/hadoop/etc/hadoop/yarn-site.xml 5gcsp-bigdata-svr5:/export/servers/hadoop/etc/hadoop/yarn-site.xml

3. Restart YARN

/export/servers/hadoop/sbin/stop-yarn.sh
/export/servers/hadoop/sbin/start-yarn.sh
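Once YARN is back up, a Flink-on-YARN submission can be verified with the bundled WordCount example; a minimal sketch using the per-job and session modes available in Flink 1.12:

# per-job mode: YARN spins up a cluster just for this job
/export/servers/flink/bin/flink run -m yarn-cluster /export/servers/flink/examples/batch/WordCount.jar
# or start a long-running YARN session in the background and submit jobs to it
/export/servers/flink/bin/yarn-session.sh -d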

11.6.WebUI

http://5gcsp-bigdata-svr1:8081/#/overview
