Big Data Flink Advanced (9): Building the Basic Cluster Environment

2023-03-31 09:42:54  Source: Tencent Cloud

node3
192.168.179.7  node4
192.168.179.8  node5

2. Install and Configure HDFS

2.1 Install the dependency required for HDFS HA automatic failover on every node

# psmisc provides the fuser command, which the sshfence fencing method configured below relies on
yum -y install psmisc

2.2 Upload the downloaded Hadoop installation package to node1 and extract it

[root@node1 software]# tar -zxvf ./hadoop-3.3.4.tar.gz

2.3 Configure the Hadoop environment variables on node1

[root@node1 software]# vim /etc/profile
export HADOOP_HOME=/software/hadoop-3.3.4/
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# Make the configuration take effect
source /etc/profile
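To confirm the environment variables took effect, a quick check is to print the Hadoop version (this assumes /etc/profile has been sourced as above; it is a verification hint, not one of the original steps):

[root@node1 software]# hadoop version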

2.4 Configure the hadoop-env.sh file under $HADOOP_HOME/etc/hadoop

# Set JAVA_HOME
export JAVA_HOME=/usr/java/jdk1.8.0_181-amd64/

2.5 Configure the hdfs-site.xml file under $HADOOP_HOME/etc/hadoop

<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>node1:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>node2:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>node1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>node2:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node3:8485;node4:8485;node5:8485/mycluster</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/data/journal/node/local/data</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>
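The sshfence method above works by having one NameNode SSH into the other and kill the stale NameNode process (using fuser from the psmisc package installed in 2.1), so it assumes passwordless SSH between node1 and node2 with the key at /root/.ssh/id_rsa. If that has not been set up yet, a minimal sketch (run the mirror-image commands on node2 as well):

[root@node1 ~]# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
[root@node1 ~]# ssh-copy-id root@node2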

2.6 Configure $HADOOP_HOME/etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/data/hadoop/</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node3:2181,node4:2181,node5:2181</value>
    </property>
</configuration>
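ha.zookeeper.quorum assumes a ZooKeeper ensemble is already running on node3, node4 and node5. A quick way to confirm each server is up, assuming zkServer.sh is on the PATH of those nodes:

[root@node3 ~]# zkServer.sh status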

2.7 Configure $HADOOP_HOME/etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>mycluster</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>node1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>node2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>node1:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>node2:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>node3:2181,node4:2181,node5:2181</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
</configuration>
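With ResourceManager HA enabled above, only one of rm1/rm2 is active at any time. Once YARN has been started later on, the HA state can be checked with yarn rmadmin (a verification hint, not one of the original steps):

[root@node1 ~]# yarn rmadmin -getServiceState rm1
[root@node1 ~]# yarn rmadmin -getServiceState rm2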

2.8 Configure $HADOOP_HOME/etc/hadoop/mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

2.9 Configure the $HADOOP_HOME/etc/hadoop/workers file

[root@node1 ~]# vim /software/hadoop-3.3.4/etc/hadoop/workers
node3
node4
node5

2.10 Add the following parameters at the top of both $HADOOP_HOME/sbin/start-dfs.sh and stop-dfs.sh to prevent startup errors

HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_ZKFC_USER=root

2.11 Add the following parameters at the top of both $HADOOP_HOME/sbin/start-yarn.sh and stop-yarn.sh to prevent startup errors

YARN_RESOURCEMANAGER_USER=root
YARN_NODEMANAGER_USER=root

2.12 Distribute the configured Hadoop installation to the other 4 nodes

[root@node1 ~]# scp -r /software/hadoop-3.3.4 node2:/software/
[root@node1 ~]# scp -r /software/hadoop-3.3.4 node3:/software/
[root@node1 ~]# scp -r /software/hadoop-3.3.4 node4:/software/
[root@node1 ~]# scp -r /software/hadoop-3.3.4 node5:/software/

Alternatively, you can extract the installation package on each of the other nodes first and then send only the configuration files, which is faster; see the sketch below.
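For example, assuming hadoop-3.3.4 has already been extracted under /software on the other nodes, copying just the configuration directory might look like this (node2 shown; repeat for node3, node4 and node5):

[root@node1 ~]# scp -r /software/hadoop-3.3.4/etc/hadoop node2:/software/hadoop-3.3.4/etc/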
