
HDP 2.5: Changing the Cluster's IP Addresses

Date: 2020-08-05 21:00:22


Scenario:

Linux CentOS 6.9, Ambari + HDP + Kerberos

The cluster currently has three nodes and everything runs normally. Because the customer's IP addresses changed, all of the original IPs need to be replaced across the board.

Note: first back up the data under the DataNode data directories.

1. Stop all services from the Ambari UI

2. Update the hosts files (Windows/Linux)

(1) Edit /etc/hosts on Linux (must be done on every node):

[root@hdp39 network-scripts]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1        localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.22.103 hdp39
192.168.22.40  hdp40
192.168.22.41  hdp41

Note: only the IP addresses are changed, not the hostnames. The existing environment refers to the hostnames in many places, so as long as the hostnames stay the same, none of those places need further changes. (The original hosts file mapped the same hostnames to the old addresses.)
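Since every node needs the identical file, one way to push the change out is a small loop (a sketch; assumes root SSH access from hdp39 to the other nodes and that all nodes share the same /etc/hosts):

for h in hdp40 hdp41; do
  scp /etc/hosts root@$h:/etc/hosts   # copy the updated hosts file to each remaining node
done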

(2) Edit the Windows hosts file

C:\Windows\System32\drivers\etc\hosts

(3) Update the interface configuration that holds the current IP

The file here is /etc/sysconfig/network-scripts/ifcfg-Auto_eth0. To find which configuration file carries the current address:

[root@hdp39 network-scripts]# cd /etc/sysconfig/network-scripts
[root@hdp39 network-scripts]# grep -rn 192.168.22.39
ifcfg-Auto_eth0_bak:4:IPADDR=192.168.22.39
Binary file .ifcfg-Auto_eth0.swp matches
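Editing IPADDR by hand works; an in-place substitution does the same thing (a sketch; the file name and the old/new addresses are hdp39's from above):

[root@hdp39 network-scripts]# sed -i 's/^IPADDR=192.168.22.39/IPADDR=192.168.22.103/' ifcfg-Auto_eth0   # swap the old address for the new one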

(4) Restart the network service

Command: service network restart (or /etc/init.d/network restart)

[root@hdp39 network-scripts]# service network restart
Shutting down interface Auto_eth0:  Device state: 3 (disconnected)  [  OK  ]
Shutting down interface Auto_eth0_bak:  [  OK  ]
Shutting down interface eth0:  [  OK  ]
Shutting down loopback interface:  [  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface Auto_eth0:  Active connection state: activating
Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/2
state: activated
Connection activated  [  OK  ]
Bringing up interface Auto_eth0_bak:  Active connection state: activated
Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/3  [  OK  ]
Bringing up interface eth0:  Error: No suitable device found: no device found for connection 'System eth0'.  [FAILED]

Fix: remove the cached udev rules (udev regenerates them at the next boot):

cd /etc/udev/rules.d
rm 70-persistent-cd.rules

Verify that the network is reachable again:

ping
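For example, from hdp39 (hostnames as mapped in /etc/hosts above):

ping hdp40
ping hdp41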

(5) Reboot the server

init 6 or reboot

(6) Check whether the firewall is running

[root@hdp39 ~]# service iptables status
iptables: Firewall is not running.
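If it is running, stop it and keep it from coming back at the next boot (standard CentOS 6 commands; Pitfall 1 below is exactly this firewall reappearing after a reboot):

service iptables stop    # stop the firewall now
chkconfig iptables off   # do not start it again at boot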

(7) Restart LDAP

[root@hdp39 ~]# find / -name 'ldap.sh'
/usr/hdp/2.5.3.0-37/knox/bin/ldap.sh
[root@hdp39 bin]# ./ldap.sh start
Starting LDAP succeeded with PID 5330.

(8) Start Ambari and log in

Start Ambari:

ambari-server start

Then open Ambari in a browser:

http://<your.ambari.server>:8080

Because Ambari had been configured with Knox SSO, the browser automatically redirects to the Knox SSO login page. To keep it from landing in Knox, quickly stop the browser before the redirect completes, so that it stays on the Ambari home page, whose URL is http://<your.ambari.server>:8080.
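Racing the redirect is fragile. An alternative (not what the author did, and only if your Ambari version supports it) is to disable the SSO configuration on the Ambari side and restart:

ambari-server setup-sso   # interactive prompts; answer so that SSO is disabled for the Ambari UI
ambari-server restart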

(9) From the Ambari UI, update every configuration that contains a hard-coded IP address.

Update the configurations of YARN, HDFS, MapReduce, ZooKeeper, and the other components wherever an IP address is written in.
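To locate leftover occurrences of an old address before editing (a sketch; 192.168.22.39 is hdp39's former IP, and the paths are the usual HDP config symlinks -- this only finds the spots, the actual edits should still be made in the Ambari UI so they persist):

grep -rn "192.168.22.39" /etc/hadoop/conf /etc/zookeeper/conf /etc/hive/conf 2>/dev/null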

Pitfalls encountered:

1. After updating Ambari and the hosts files, the Ambari home page could not be reached after rebooting the servers

Cause 1: the firewall was re-enabled by the reboot.

Cause 2: the LDAP service that Knox depends on was not started.

Reference: the official Knox documentation

/books/knox-0-12-0/user-guide.html

[root@hdp39 bin]# ./ldap.sh start

Starting LDAP succeeded with PID 46528.

Fix: while the cluster is still running normally, stop Knox first, and then change the Knox host's IP.

2. Ambari Metrics failed to start

The error reported:

org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultPhoenixDataSource: Unable to connect to HBase store using Phoenix.
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=SYSTEM.CATALOG

This is usually due to AMS Data being corrupt.

1. Shut down the Ambari Metrics Monitors and the Collector via Ambari.

2. Clear out the /var/lib/ambari-metrics-collector directory for a fresh restart.

3. From Ambari -> Ambari Metrics -> Config -> Advanced ams-hbase-site, get the hbase.rootdir and hbase-tmp directories.

4. Delete or move the hbase-tmp and hbase.rootdir directories to an archive folder.

5. Start AMS.

All services come back online and the graphs start to display after a few minutes.
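A sketch of steps 2-4 on the Collector host (the two HBase paths below are placeholders -- use the actual values shown under Advanced ams-hbase-site; the archive location is an arbitrary choice):

mv /var/lib/ambari-metrics-collector /var/lib/ambari-metrics-collector.bak   # step 2: archive the Collector state instead of deleting it
mkdir -p /tmp/ams-archive                                                    # arbitrary archive folder
mv /path/to/hbase.rootdir /path/to/hbase-tmp /tmp/ams-archive/               # step 4: placeholder paths taken from step 3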

Reference:

/articles/11805/how-to-solve-ambari-metrics-corrupted-data.html

3. The dfs.data.dir setting prevented the DataNode from starting

The error:

The log (less /var/log/hadoop/hdfs/hadoop-hdfs-datanode-hdp40.log) shows:

-12-20 12:51:21,201 WARN  datanode.DataNode (DataNode.java:checkStorageLocations(2524)) - Invalid dfs.datanode.data.dir /bigdata/hadoop/hdfs/data :
java.io.FileNotFoundException: File file:/bigdata/hadoop/hdfs/data does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:624)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:850)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:614)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:422)
    at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2479)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2521)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2503)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2395)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2442)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2623)
    at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:77)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
-12-20 12:51:21,203 ERROR datanode.DataNode (DataNode.java:secureMain(2630)) - Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/bigdata/hadoop/hdfs/data"
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2530)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2503)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2395)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2442)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2623)
    at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:77)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
-12-20 12:51:21,204 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
-12-20 12:51:21,208 INFO  datanode.DataNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:

Fix:

(1) Confirm that after the IP change the directory really no longer exists on that machine:

[root@hdp40 hdfs]# cd /bigdata/hadoop/hdfs/data
-bash: cd: /bigdata/hadoop/hdfs/data: No such file or directory

(2) Create the directory by hand:

mkdir -p /bigdata/hadoop/hdfs/data

(3) Change the owner and group of the directory and all files under it:

[root@hdp40 hdfs]# chown -R -v hdfs:hadoop data
changed ownership of `data' to hdfs:hadoop
[root@hdp40 hdfs]# ll
total 4
drwxr-xr-x 2 hdfs hadoop 4096 Dec 20 16:07 data

If the permissions are not 755, grant them:

[root@hdp40 hdfs]# chmod -R 755 data/

Note: the steps above must be performed on every machine that serves as a DataNode; the sketch below runs them on all nodes in one pass.
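The same three steps pushed to all DataNodes (a sketch; assumes root SSH access and that hdp40 and hdp41 are the DataNode hosts -- adjust the list to match your cluster):

for h in hdp40 hdp41; do
  ssh root@$h 'mkdir -p /bigdata/hadoop/hdfs/data && chown -R hdfs:hadoop /bigdata/hadoop/hdfs/data && chmod -R 755 /bigdata/hadoop/hdfs/data'   # recreate, re-own, re-permission the data dir
done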

After completing the steps above:

Restart the DataNode; it now runs normally. (Oddly, my first restart still failed, but the second one started fine, just a little slowly.)

4. Ranger failed to start

The error:

Watcher org.apache.solr.common.cloud.ConnectionManager@7b1083f6 name:ZooKeeperConnection Watcher:hdp39:2181,hdp40:2181,hdp41:2181/infra-solr got event WatchedEvent state:AuthFailed type:None path:null
zkClient received AuthFailed
makePath: /configs/ranger_audits/managed-schema
Error uploading file /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf/managed-schema to zookeeper path /configs/ranger_audits/managed-schema
java.io.IOException: Error uploading file /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf/managed-schema to zookeeper path /configs/ranger_audits/managed-schema
    at org.apache.solr.common.cloud.ZkConfigManager$1.visitFile(ZkConfigManager.java:69)
    at org.apache.solr.common.cloud.ZkConfigManager$1.visitFile(ZkConfigManager.java:59)
    at java.nio.file.Files.walkFileTree(Files.java:2670)
    at java.nio.file.Files.walkFileTree(Files.java:2742)
    at org.apache.solr.common.cloud.ZkConfigManager.uploadToZK(ZkConfigManager.java:59)
    at org.apache.solr.common.cloud.ZkConfigManager.uploadConfigDir(ZkConfigManager.java:121)
    at org.apache.ambari.logsearch.solr.commands.UploadConfigZkCommand.executeZkConfigCommand(UploadConfigZkCommand.java:39)
    at org.apache.ambari.logsearch.solr.commands.UploadConfigZkCommand.executeZkConfigCommand(UploadConfigZkCommand.java:29)
    at org.apache.ambari.logsearch.solr.commands.AbstractZookeeperConfigCommand.executeZkCommand(AbstractZookeeperConfigCommand.java:38)
    at org.apache.ambari.logsearch.solr.commands.AbstractZookeeperRetryCommand.createAndProcessRequest(AbstractZookeeperRetryCommand.java:38)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.uploadConfiguration(AmbariSolrCloudClient.java:218)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:469)
Caused by: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /configs
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
    at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:294)
    at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:291)
    at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
    at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:291)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:486)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:408)
    at org.apache.solr.common.cloud.ZkConfigManager$1.visitFile(ZkConfigManager.java:67)
    ... 13 more

Fix:

The log above shows a Kerberos-related authentication failure (AuthFailed), and the first suspect for that is clock skew between the servers; synchronizing the time across all servers solved it.
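A minimal resync on CentOS 6 (a sketch; assumes the ntp packages are installed, and pool.ntp.org stands in for whatever time source the cluster actually uses):

ntpdate pool.ntp.org   # one-shot clock sync; run on every node before ntpd is started
service ntpd start     # keep the clock in sync from here on
chkconfig ntpd on      # start ntpd at boot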

5. Some services start successfully and then immediately fail

For a service that starts and then stops again almost right away, first check whether an instance of it is already running and was never shut down, or whether its port is occupied. For example, the Hive Metastore started and then stopped immediately; checking its port showed a service already listening on it. In that case, kill the stale process and restart, as sketched below.
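A quick check for the Hive Metastore case (a sketch; 9083 is the Metastore's default port -- substitute whatever port the failing service uses):

netstat -tnlp | grep 9083   # show the PID currently holding the Metastore port
kill -9 <pid>               # kill the stale process, then restart the service from Ambari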
