Prerequisites
Before configuring HDFS on Debian, ensure the following prerequisites are met:
- A Debian system (preferably Debian 10/11) with root or sudo access.
- Java Development Kit (JDK) 8 or higher installed (OpenJDK is recommended):
  sudo apt update && sudo apt install -y openjdk-11-jdk
- SSH service enabled and configured for passwordless login (required for Hadoop node communication):
  sudo apt install -y openssh-server
  ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  chmod 600 ~/.ssh/authorized_keys
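A quick way to confirm both prerequisites before moving on; this is a sketch for a single-node setup (it checks SSH to localhost only, so adjust the host for multi-node clusters):

```shell
# Report the installed Java major version. `java -version` prints e.g.
# `openjdk version "11.0.20"` on stderr; strip the legacy `1.` prefix so
# both Java 8 ("1.8.0_x") and 11+ report correctly.
if command -v java >/dev/null 2>&1; then
    java -version 2>&1 | awk -F'"' '/version/ {split($2, v, "."); print "Java major version: " (v[1] == "1" ? v[2] : v[1])}'
else
    echo "Java not found; install openjdk-11-jdk first" >&2
fi

# BatchMode makes ssh fail immediately instead of prompting for a password.
if ssh -o BatchMode=yes -o ConnectTimeout=5 localhost true 2>/dev/null; then
    echo "Passwordless SSH: OK"
else
    echo "Passwordless SSH: NOT configured" >&2
fi
```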
Step 1: Download and Install Hadoop
- Download the latest stable Hadoop release from the Apache website. For example:
  wget https://downloads.apache.org/hadoop/common/hadoop-3.3.6/hadoop-3.3.6.tar.gz
- Extract the tarball to a dedicated directory (e.g., /usr/local/hadoop):
  sudo tar -xzvf hadoop-3.3.6.tar.gz -C /usr/local/
  sudo mv /usr/local/hadoop-3.3.6 /usr/local/hadoop
- Set directory ownership to the current user (replace your_username with your actual username):
  sudo chown -R your_username:your_username /usr/local/hadoop
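Before extracting, the download can optionally be verified against Apache's published checksum. This is a sketch; the `.sha512` URL assumes the checksum file sits next to the tarball on the mirror, and older releases may have moved to archive.apache.org:

```shell
# Fetch the published checksum and verify the tarball; sha512sum exits
# non-zero if the file does not match, so a corrupted download fails loudly.
wget https://downloads.apache.org/hadoop/common/hadoop-3.3.6/hadoop-3.3.6.tar.gz.sha512
sha512sum -c hadoop-3.3.6.tar.gz.sha512
```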
Step 2: Configure Environment Variables
- Edit the ~/.bashrc file to add Hadoop-specific environment variables:
  nano ~/.bashrc
- Append the following lines to the end of the file:
  export HADOOP_HOME=/usr/local/hadoop
  export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
  export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64  # Adjust based on your JDK installation path
- Apply the changes to the current session:
  source ~/.bashrc
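A sanity check after sourcing the file; this sketch assumes the paths used in the exports above:

```shell
# Print the two variables (":-unset" shows a marker if a variable is empty
# or missing) and confirm the hadoop binary is now reachable on PATH.
echo "HADOOP_HOME = ${HADOOP_HOME:-unset}"
echo "JAVA_HOME   = ${JAVA_HOME:-unset}"
if command -v hadoop >/dev/null 2>&1; then
    hadoop version | head -n 1
else
    echo "hadoop is not on PATH; re-check the exports above" >&2
fi
```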
Step 3: Configure HDFS Core Files
All Hadoop configuration files are located in $HADOOP_HOME/etc/hadoop. Modify the following files to define HDFS behavior:
- core-site.xml: Specifies the default file system and temporary directory.
  <configuration>
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://namenode:9000</value>
    </property>
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/var/lib/hadoop/tmp</value>
    </property>
  </configuration>
- hdfs-site.xml: Configures HDFS replication, NameNode/DataNode directories, and optional secondary NameNode settings.
  <configuration>
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>/data/hadoop/hdfs/namenode</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/data/hadoop/hdfs/datanode</value>
    </property>
    <property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>secondarynamenode:50090</value>
    </property>
  </configuration>
- mapred-site.xml: Configures the MapReduce framework (use YARN as the resource manager).
  <configuration>
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
  </configuration>
- yarn-site.xml: Configures YARN (Yet Another Resource Negotiator) for resource management.
  <configuration>
    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>resourcemanager</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
  </configuration>
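The directories referenced in core-site.xml and hdfs-site.xml do not exist by default, and the daemons fail on first start without them. A sketch for creating them up front (paths match the sample configs above; ownership assumes a single-user setup):

```shell
# Create the NameNode metadata, DataNode block-storage, and temp directories,
# then hand each one to the current user so the daemons can write to it.
for dir in /data/hadoop/hdfs/namenode /data/hadoop/hdfs/datanode /var/lib/hadoop/tmp; do
    sudo mkdir -p "$dir"
    sudo chown -R "$(whoami)":"$(whoami)" "$dir"
done
```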
Step 4: Format the NameNode
The NameNode must be formatted once before starting HDFS (this initializes the metadata storage). Run the following command on the NameNode machine:
hdfs namenode -format
Note: Formatting erases all existing HDFS data. Only run this command on a new cluster or if you need to reset the NameNode.
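To avoid re-formatting an existing cluster by accident, the command can be guarded on the VERSION file that a formatted NameNode writes into its metadata directory (the path assumes the dfs.namenode.name.dir value from hdfs-site.xml above):

```shell
# A formatted NameNode leaves a current/VERSION file in its metadata dir;
# only run the destructive format when that file is absent.
NN_DIR=/data/hadoop/hdfs/namenode
if [ ! -f "$NN_DIR/current/VERSION" ]; then
    hdfs namenode -format
else
    echo "NameNode already formatted; skipping format." >&2
fi
```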
Step 5: Start HDFS Services
- Start the HDFS daemons (NameNode and DataNodes) using the start-dfs.sh script (run from the NameNode):
  $HADOOP_HOME/sbin/start-dfs.sh
- Verify that the services are running by checking the process list:
  jps
  You should see the following processes:
  - NameNode: Manages HDFS metadata.
  - DataNode: Stores actual data blocks.
  - SecondaryNameNode (optional): Assists with NameNode metadata management.
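The jps check above can be scripted so a missing daemon stands out; a minimal sketch (jps ships with the JDK):

```shell
# Flag each expected HDFS daemon as running or missing. grep -w matches
# whole words only, so "NameNode" does not also match "SecondaryNameNode".
procs=$(jps)
for daemon in NameNode DataNode SecondaryNameNode; do
    if printf '%s\n' "$procs" | grep -qw "$daemon"; then
        echo "$daemon: running"
    else
        echo "$daemon: NOT running" >&2
    fi
done
```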
Step 6: Validate the Configuration
- Check the HDFS status using the web interface (default port: 9870 for Hadoop 3.x):
  Open a browser and navigate to http://<namenode-host>:9870
- Run basic HDFS commands to verify functionality:
  hdfs dfs -mkdir -p /test                      # Create a test directory
  hdfs dfs -put /path/to/local/file.txt /test   # Upload a local file to HDFS
  hdfs dfs -ls /test                            # List contents of the test directory
  hdfs dfs -cat /test/file.txt                  # View the uploaded file
Troubleshooting Tips
- Permission Denied: Ensure the Hadoop directories (e.g., /data/hadoop/hdfs/namenode) have the correct ownership (hadoop:hadoop or your username).
- Port Conflicts: Verify that ports such as 9000 (NameNode RPC), 9870 (Web UI, Hadoop 3.x), and 50090 (Secondary NameNode) are not blocked by the firewall.
- Java Issues: Confirm JAVA_HOME is set correctly in hadoop-env.sh (located in $HADOOP_HOME/etc/hadoop).
- NameNode Not Starting: Check the NameNode logs (located in $HADOOP_HOME/logs) for errors; common issues include formatting errors or an incorrect fs.defaultFS configuration.
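For the port-conflict case, a sketch using ss from iproute2 (preinstalled on Debian) to see what, if anything, already holds the relevant ports:

```shell
# List listeners on the HDFS ports used in this guide. The grep pattern
# anchors on ":<port> " (with a trailing space) so 9000 does not also
# match longer port numbers such as 90001.
for port in 9000 9870 50090; do
    echo "--- port $port ---"
    ss -tln 2>/dev/null | grep ":$port " || echo "nothing listening on port $port"
done
```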