WARNING: Use of this script to start HDFS daemons is deprecated.
WARNING: Attempting to execute replacement "hdfs --daemon start" instead.
ERROR: Cannot set priority of datanode process 10603
Scenario: the error above appears when starting the DataNode.
Check the log:
JSVC_HOME is not set or set incorrectly. jsvc is required to run secure
or privileged daemons. Please download and install jsvc from
http://archive.apache.org/dist/commons/daemon/binaries/
and set JSVC_HOME to the directory containing the jsvc binary.
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63373
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 65536
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
Fix: edit hadoop-env.sh and comment out the HDFS_DATANODE_SECURE_USER export:
vi hadoop-env.sh
#export HDFS_DATANODE_SECURE_USER=root
The DataNode now starts successfully. With that export commented out, the DataNode no longer tries to start as a privileged (jsvc) daemon, so the missing jsvc install reported above is no longer a problem.
Explanation from the official documentation:
Secure DataNode
Because the DataNode data transfer protocol does not use the Hadoop RPC framework, DataNodes must authenticate themselves using privileged ports which are specified by dfs.datanode.address and dfs.datanode.http.address. This authentication is based on the assumption that the attacker won’t be able to get root privileges on DataNode hosts.
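In hdfs-site.xml, those privileged ports correspond to settings along the lines of the sketch below; the 1004/1006 values are conventional example ports below 1024, not values taken from this cluster's actual configuration:

<!-- Privileged data transfer port (below 1024); binding it requires the root + jsvc startup path -->
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:1004</value>
</property>
<!-- Privileged HTTP port for the DataNode web UI -->
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1006</value>
</property>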
When you execute the hdfs datanode command as root, the server process binds privileged ports at first, then drops privilege and runs as the user account specified by HDFS_DATANODE_SECURE_USER. This startup process uses the jsvc program installed to JSVC_HOME. You must specify HDFS_DATANODE_SECURE_USER and JSVC_HOME as environment variables on start up (in hadoop-env.sh).
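In other words, if you actually want this secure startup path (instead of commenting it out as above), hadoop-env.sh needs both variables. A minimal sketch, where the hdfs account and the /opt/jsvc path are illustrative assumptions:

# hadoop-env.sh -- secure DataNode startup via jsvc (the datanode is launched as root)
# Unprivileged account the daemon drops to after binding the privileged ports
export HDFS_DATANODE_SECURE_USER=hdfs
# Directory containing the jsvc binary (see the JSVC_HOME message in the log above)
export JSVC_HOME=/opt/jsvc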
As of version 2.6.0, SASL can be used to authenticate the data transfer protocol. In this configuration, it is no longer required for secured clusters to start the DataNode as root using jsvc and bind to privileged ports. To enable SASL on data transfer protocol, set dfs.data.transfer.protection in hdfs-site.xml. A SASL enabled DataNode can be started in secure mode in following two ways:
1. Set a non-privileged port for dfs.datanode.address.
2. Set dfs.http.policy to HTTPS_ONLY or set dfs.datanode.http.address to a privileged port and make sure the HDFS_DATANODE_SECURE_USER and JSVC_HOME environment variables are specified properly as environment variables on start up (in hadoop-env.sh).
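For the SASL route (option 1 above), a minimal hdfs-site.xml sketch could look like the following; the protection level authentication and the non-privileged port 9866 (the Hadoop 3 default) are illustrative choices, not values from the original post:

<!-- Enable SASL on the data transfer protocol: authentication, integrity, or privacy -->
<property>
  <name>dfs.data.transfer.protection</name>
  <value>authentication</value>
</property>
<!-- Non-privileged DataNode port, so the root/jsvc startup path is no longer needed -->
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:9866</value>
</property>
<!-- Serve the DataNode web endpoint over HTTPS only -->
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>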
In order to migrate an existing cluster that used root authentication to start using SASL instead, first ensure that version 2.6.0 or later has been deployed to all cluster nodes as well as any external applications that need to connect to the cluster. Only versions 2.6.0 and later of the HDFS client can connect to a DataNode that uses SASL for authentication of data transfer protocol, so it is vital that all callers have the correct version before migrating. After version 2.6.0 or later has been deployed everywhere, update configuration of any external applications to enable SASL. If an HDFS client is enabled for SASL, then it can connect successfully to a DataNode running with either root authentication or SASL authentication. Changing configuration for all clients guarantees that subsequent configuration changes on DataNodes will not disrupt the applications. Finally, each individual DataNode can be migrated by changing its configuration and restarting. It is acceptable to have a mix of some DataNodes running with root authentication and some DataNodes running with SASL authentication temporarily during this migration period, because an HDFS client enabled for SASL can connect to both.