Deploying a Hadoop Single Node Cluster on Deepin via the GUI

Upgrade the operating system and software

Press Ctrl+Alt+T to open a terminal window.

Update the apt package index:

sudo apt update

Upgrade the system and installed software:

sudo apt -y dist-upgrade

A reboot is recommended after the upgrade.
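If you prefer to reboot from the terminal rather than the desktop menu, the following single command does it (it ends your session immediately):

sudo reboot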

Enable the SSH service

Open the file manager.

Go to the system disk and locate the etc directory.

Right-click the etc directory on the system disk and choose "Open as administrator".

Enter your password.

You now have the etc directory open with administrator privileges.

Enter the ssh directory.

In the /etc/ssh directory, edit the ssh_config file.

Original file content:

# This is the ssh client system-wide configuration file.  See
# ssh_config(5) for more information.  This file provides defaults for
# users, and the values can be changed in per-user configuration files
# or on the command line.

# Configuration data is parsed as follows:
#  1. command line options
#  2. user-specific file
#  3. system-wide file
# Any configuration value is only changed the first time it is set.
# Thus, host-specific definitions should be at the beginning of the
# configuration file, and defaults at the end.

# Site-wide defaults for some commonly used options.  For a comprehensive
# list of available options, their meanings and defaults, please see the
# ssh_config(5) man page.

Host *
#   ForwardAgent no
#   ForwardX11 no
#   ForwardX11Trusted yes
#   PasswordAuthentication yes
#   HostbasedAuthentication no
#   GSSAPIAuthentication no
#   GSSAPIDelegateCredentials no
#   GSSAPIKeyExchange no
#   GSSAPITrustDNS no
#   BatchMode no
#   CheckHostIP yes
#   AddressFamily any
#   ConnectTimeout 0
#   StrictHostKeyChecking ask
#   IdentityFile ~/.ssh/id_rsa
#   IdentityFile ~/.ssh/id_dsa
#   IdentityFile ~/.ssh/id_ecdsa
#   IdentityFile ~/.ssh/id_ed25519
#   Port 22
#   Protocol 2
#   Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc
#   MACs hmac-md5,hmac-sha1,umac-64@openssh.com
#   EscapeChar ~
#   Tunnel no
#   TunnelDevice any:any
#   PermitLocalCommand no
#   VisualHostKey no
#   ProxyCommand ssh -q -W %h:%p gateway.example.com
#   RekeyLimit 1G 1h
    SendEnv LANG LC_*
    HashKnownHosts yes
    GSSAPIAuthentication yes

In /etc/ssh/ssh_config, remove the # in front of Port 22 (the line "#   Port 22" shown above).

The rest of the file stays as it is; after removing the comment, that part of the Host * block reads:

#   IdentityFile ~/.ssh/id_ed25519
    Port 22
#   Protocol 2

Restart the SSH service:

sudo systemctl restart ssh

Enable the SSH service at boot:

sudo systemctl enable ssh
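The whole SSH step can also be done from the terminal instead of the file manager; a minimal sketch, assuming the stock /etc/ssh/ssh_config shown earlier:

# uncomment "#   Port 22" in the client config edited above
sudo sed -i 's/^#   Port 22/    Port 22/' /etc/ssh/ssh_config
# restart the service and enable it at boot
sudo systemctl restart ssh
sudo systemctl enable ssh
# confirm the daemon is running and listening on port 22
sudo systemctl status ssh
ss -tln | grep ':22'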

Upload the installation packages

Drag the JDK and Hadoop packages into the Deepin window.

Cut the packages from the desktop and move them to the designated locations under the user's home directory:

The JDK goes in /home/lhz/opt/java/jdk

Hadoop goes in /home/lhz/opt

Create the directory with the following command:

mkdir -p /home/lhz/opt/java/jdk

Extract the installation packages

Select the JDK package, right-click, and choose "Extract to current folder".

Select the extracted JDK directory, right-click, and choose "Rename".

Rename the extracted JDK directory to jdk-8.

Select the Hadoop package, right-click, and choose "Extract to current folder".

Select the extracted Hadoop directory, right-click, and choose "Rename".

Rename the extracted Hadoop directory to hadoop-3.
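If you would rather extract and rename from the terminal, a sketch along these lines works; the archive and extracted directory names below are placeholders, so substitute the exact names of the packages you downloaded:

# extract and rename the JDK (archive and directory names are placeholders)
tar -zxf ~/Desktop/jdk-8uXXX-linux-x64.tar.gz -C /home/lhz/opt/java/jdk
mv /home/lhz/opt/java/jdk/jdk1.8.0_XXX /home/lhz/opt/java/jdk/jdk-8
# extract and rename Hadoop (archive and directory names are placeholders)
tar -zxf ~/Desktop/hadoop-3.X.X.tar.gz -C /home/lhz/opt
mv /home/lhz/opt/hadoop-3.X.X /home/lhz/opt/hadoop-3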

Configure environment variables

Open the file manager and go to the current user's home directory.

Press Ctrl+H to show hidden files.

Edit the .bashrc file.

Append the following to the end of .bashrc:

export JAVA_HOME=/home/lhz/opt/java/jdk/jdk-8

export HADOOP_HOME=/home/lhz/opt/hadoop-3

export HADOOP_INSTALL=${HADOOP_HOME}
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export HDFS_NAMENODE_USER=lhz
export HDFS_DATANODE_USER=lhz
export HDFS_ZKFC_USER=lhz
export HDFS_JOURNALNODE_USER=lhz

export YARN_RESOURCEMANAGER_USER=lhz
export YARN_NODEMANAGER_USER=lhz

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
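After saving .bashrc, reload it in the current shell and confirm that both tools are on the PATH; a quick sanity check:

source ~/.bashrc
java -version
hadoop version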


Set a static IP address

Right-click the network icon and choose "Network settings".

Click "Network details" to view the current network information.

Click "Wired connection", then click the right-arrow icon on the right.

Open the IPv4 drop-down list and select "Manual".

Fill in the IP address, netmask, gateway, and DNS according to your network.

Change the hostname

In the terminal, run:

sudo hostnamectl set-hostname hadoop
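You can confirm that the new name took effect with either of the following (the change also appears in the prompt of a newly opened terminal):

hostnamectl
hostname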

Edit the hosts file

Open the file manager and go to the system disk.

Locate the etc directory, right-click it, and choose "Open as administrator".

Enter your password.

You now have the etc directory open with administrator privileges.

Locate /etc/hosts and set the mapping to:

192.168.171.129 hadoop
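To confirm that the mapping resolves as expected:

getent hosts hadoop
ping -c 3 hadoop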

Reboot the system.

Edit the Hadoop configuration files

The Hadoop configuration files live in the etc/hadoop directory under the extracted Hadoop directory (see the command sketch after the list below).

Modify the following configuration files:

  • hadoop-env.sh
  • core-site.xml
  • hdfs-site.xml
  • workers
  • mapred-site.xml
  • yarn-site.xml
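All of the files listed above sit in the same directory, so it is convenient to work from there; a sketch, assuming Hadoop was extracted to /home/lhz/opt/hadoop-3 as above:

cd /home/lhz/opt/hadoop-3/etc/hadoop
ls hadoop-env.sh core-site.xml hdfs-site.xml workers mapred-site.xml yarn-site.xml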

Append the following to the end of hadoop-env.sh:

export JAVA_HOME=/home/lhz/opt/java/jdk/jdk-8
export HDFS_NAMENODE_USER=lhz
export HDFS_DATANODE_USER=lhz
export HDFS_ZKFC_USER=lhz
export HDFS_JOURNALNODE_USER=lhz

export YARN_RESOURCEMANAGER_USER=lhz
export YARN_NODEMANAGER_USER=lhz

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/lhz/hadoop_data</value>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>lhz</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>hadoop.proxyuser.lhz.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.lhz.groups</name>
        <value>*</value>
    </property>
</configuration>

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

workers

hadoop

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
    </property>
</configuration>

yarn-site.xml

<?xml version="1.0"?>
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>

Configure passwordless SSH login

Generate a local key pair and append the public key to the authorized_keys file:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# or
ssh-copy-id hadoop
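To verify that passwordless login works, connect to the host once (the very first connection may still ask you to accept the host key):

ssh hadoop
# if no password prompt appears, key-based login is working; type exit to return
exit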

Initialize Hadoop

# Format the filesystem
hdfs namenode -format
# Start the NameNode, SecondaryNameNode, and DataNode
start-dfs.sh
# Check the running processes
jps
# Seeing the DataNode, SecondaryNameNode, and NameNode processes means the start-up succeeded

# Start the ResourceManager and NodeManager daemons
start-yarn.sh
# Check the running processes again
jps
# Seeing DataNode, NodeManager, SecondaryNameNode, NameNode, and ResourceManager (five processes) means the start-up succeeded
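As an optional smoke test once everything is up, create a directory in HDFS and list the root; the directory name here is just an example:

hdfs dfs -mkdir -p /user/lhz
hdfs dfs -ls /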

Important:

# Before shutting down the machine, stop the services in order
stop-yarn.sh
stop-dfs.sh
# After booting, start the services in order
start-dfs.sh
start-yarn.sh

or

# Stop all services before shutting down
stop-all.sh
# Start all services after booting
start-all.sh
# After starting or stopping, check with jps that the processes are in the expected state before doing anything else

Access the web UI in a browser

http://localhost:9870
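With the default YARN settings, the ResourceManager also serves a web UI, typically on port 8088:

http://localhost:8088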

Tags: hadoop, hdfs, big data

Reposted from: https://blog.csdn.net/qq_24330181/article/details/132581039. Copyright belongs to the original author, 李昊哲小课.
