
How to Install JDK 8 on Linux? A Step-by-Step Guide to Single-Node Hadoop 3.2.1

Source: CSDN | 2022-12-21 13:47:57

Environment

CentOS 8
JDK 8
Hadoop 3.2.1

Installing JDK 8

Download JDK 8 from the Oracle website: https://www.oracle.com/java/technologies/javase-jdk8-downloads.html#license-lightbox

Downloading requires registering and logging in to Oracle, and the Oracle site can be slow. If it is too slow, you can also download from my Baidu Cloud share: https://pan.baidu.com/s/10AhpoVK12KrdKgrTVxyqRA (extraction code: phif)

We will install everything under the home directory.

First, create a data folder under /home/hadoop, then a JdkInstall folder inside data:

mkdir data
cd data
mkdir JdkInstall

Move the JDK 8 package jdk-8u241-linux-x64.tar.gz into /home/hadoop/data/JdkInstall and extract it, then create a symlink jdk8 under /home/hadoop/data pointing to /home/hadoop/data/JdkInstall/jdk1.8.0_241:

mv jdk-8u241-linux-x64.tar.gz /home/hadoop/data/JdkInstall
cd /home/hadoop/data/JdkInstall
tar zxvf jdk-8u241-linux-x64.tar.gz
ln -s /home/hadoop/data/JdkInstall/jdk1.8.0_241 /home/hadoop/data/jdk8
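The point of the symlink is that the versioned directory name never leaks into your configuration, so a future JDK upgrade only needs the link repointed. A minimal sandbox sketch of this pattern (the paths below live under a throwaway mktemp directory, not the real install locations):

```shell
set -e
# Throwaway sandbox standing in for /home/hadoop/data
base=$(mktemp -d)
mkdir -p "$base/JdkInstall/jdk1.8.0_241"
# Version-neutral link, same shape as the ln -s command above
ln -s "$base/JdkInstall/jdk1.8.0_241" "$base/jdk8"
# The link resolves to the versioned directory
readlink "$base/jdk8"
rm -rf "$base"
```

Upgrading later would then be a single `ln -sfn` to a new jdk1.8.0_xxx directory, with no config changes.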

Next, add the JAVA_HOME environment variable and append its bin directory to PATH:

vim ~/.bashrc

Add the following lines (JAVA_HOME points at the jdk8 symlink we just created):

export JAVA_HOME=/home/hadoop/data/jdk8
export PATH=$PATH:$JAVA_HOME/bin

Press Esc, then type :wq to save and exit. Run source ~/.bashrc to make the changes take effect, then check the Java version:

java -version

You should see output like:

java version "1.8.0_241"
Java(TM) SE Runtime Environment (build 1.8.0_241-b07)
Java HotSpot(TM) 64-Bit Server VM (build 25.241-b07, mixed mode)

Installing Single-Node Hadoop 3.2.1

Download Hadoop 3.2.1 from the Apache mirror: http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz ; Baidu Cloud link: https://pan.baidu.com/s/14SSrCbWZeNG2jiJV884qxQ (extraction code: fvpb)

Create a HadoopInstall folder inside the data folder:

cd /home/hadoop/data
mkdir HadoopInstall

Move the Hadoop 3.2.1 package hadoop-3.2.1.tar.gz into /home/hadoop/data/HadoopInstall and extract it:

mv hadoop-3.2.1.tar.gz /home/hadoop/data/HadoopInstall
cd /home/hadoop/data/HadoopInstall
tar zxvf hadoop-3.2.1.tar.gz

Create a hadoop symlink under /home/hadoop/data pointing to hadoop-3.2.1, and also copy Hadoop's configuration directory out to /home/hadoop/data, renaming it hadoop-config:

ln -s /home/hadoop/data/HadoopInstall/hadoop-3.2.1 /home/hadoop/data/hadoop
cp -r /home/hadoop/data/hadoop/etc/hadoop /home/hadoop/data/hadoop-config

Go to /home/hadoop/data/hadoop and check the Hadoop version:

cd /home/hadoop/data/hadoop
./bin/hadoop version

This prints:

Hadoop 3.2.1
Source code repository https://gitbox.apache.org/repos/asf/hadoop.git -r b3cbbb467e22ea829b3808f4b7b01d07e0bf3842
Compiled by rohithsharmaks on 2019-09-10T15:56Z
Compiled with protoc 2.5.0
From source with checksum 776eaf9eee9c0ffc370bcbc1888737
This command was run using /home/hadoop/data/HadoopInstall/hadoop-3.2.1/share/hadoop/common/hadoop-common-3.2.1.jar

Configure the environment variables:

vim ~/.bashrc

Add:

export HADOOP_HOME=/home/hadoop/data/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=/home/hadoop/data/hadoop-config

Then apply the changes:

source ~/.bashrc

Configuration files:

core-site.xml: common properties
hdfs-site.xml: HDFS daemon configuration
mapred-site.xml: MapReduce daemon configuration
yarn-site.xml: resource-scheduling (YARN) configuration

Edit core-site.xml and set the following:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/data/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

hadoop.tmp.dir: the base directory for Hadoop's data storage. Setting it is recommended; if left unset, Hadoop falls back to the default /tmp/hadoop-hadoop, which is wiped on every reboot, forcing you to re-run the format step or hit errors.
fs.defaultFS: the default filesystem URI; HDFS clients need this parameter to connect.

Edit hdfs-site.xml and set the following:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/data/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/data/hadoop/hdfs/data</value>
  </property>
</configuration>

dfs.replication: number of replicas per data block
dfs.name.dir: storage directory for the NameNode
dfs.data.dir: storage directory for the DataNode

Edit mapred-site.xml and set the following:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>$HADOOP_HOME/share/hadoop/mapreduce/*:$HADOOP_HOME/share/hadoop/mapreduce/lib/*</value>
  </property>
</configuration>

Edit yarn-site.xml and set the following:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.1.100</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME</value>
  </property>
</configuration>

Format the HDFS filesystem:

hadoop namenode -format

You should see log output like the following (abridged):

STARTUP_MSG:   build = https://gitbox.apache.org/repos/asf/hadoop.git -r b3cbbb467e22ea829b3808f4b7b01d07e0bf3842; compiled by "rohithsharmaks" on 2019-09-10T15:56Z
STARTUP_MSG:   java = 1.8.0_241
************************************************************/
2020-02-27 00:57:46,530 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2020-02-27 00:57:46,571 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-e789e32a-5f1c-45dd-ad98-f6a076748c89
2020-02-27 00:57:46,838 INFO namenode.FSNamesystem: HA Enabled: false
2020-02-27 00:57:46,883 INFO blockmanagement.BlockManager: defaultReplication         = 1
...
2020-02-27 00:57:46,978 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1403611423-127.0.0.1-1582736266974
2020-02-27 00:57:46,983 INFO common.Storage: Storage directory /home/hadoop/data/HadoopInstall/hadoop-3.2.1/hdfs/name has been successfully formatted.
2020-02-27 00:57:46,998 INFO namenode.FSImageFormatProtobuf: Saving image file /home/hadoop/data/HadoopInstall/hadoop-3.2.1/hdfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2020-02-27 00:57:47,045 INFO namenode.FSImageFormatProtobuf: Image file /home/hadoop/data/HadoopInstall/hadoop-3.2.1/hdfs/name/current/fsimage.ckpt_0000000000000000000 of size 401 bytes saved in 0 seconds .
2020-02-27 00:57:47,053 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/

The key line is "has been successfully formatted".

Start the Hadoop daemons:

start-all.sh

Use jps to check which daemons are running:

8864 NameNode
10038 Jps
9258 SecondaryNameNode
9850 NodeManager
9036 DataNode
9519 ResourceManager
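When scripting this check, it is worth asserting that every expected daemon is actually present rather than eyeballing the list. A minimal sketch (the jps output below is the sample pasted above, used as a stand-in; on a live node substitute `jps_out=$(jps)`):

```shell
# Sample jps output standing in for a live `jps` run.
jps_out="8864 NameNode
10038 Jps
9258 SecondaryNameNode
9850 NodeManager
9036 DataNode
9519 ResourceManager"

# Report any of the five Hadoop/YARN daemons that is missing.
# -w matches whole words, so "NameNode" does not match "SecondaryNameNode".
for daemon in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  if echo "$jps_out" | grep -qw "$daemon"; then
    echo "$daemon: running"
  else
    echo "$daemon: MISSING" >&2
  fi
done
```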

View detailed HDFS status (this form still works but is deprecated; hdfs dfsadmin -report is the current equivalent, as the warning in the output shows):

hadoop dfsadmin -report

Which produces:

WARNING: Use of this script to execute dfsadmin is deprecated.
WARNING: Attempting to execute replacement "hdfs dfsadmin" instead.

Configured Capacity: 436053315584 (406.11 GB)
Present Capacity: 402692866048 (375.04 GB)
DFS Remaining: 402692857856 (375.04 GB)
DFS Used: 8192 (8 KB)
DFS Used%: 0.00%
Replicated Blocks:
        Under replicated blocks: 0
        Blocks with corrupt replicas: 0
        Missing blocks: 0
        Missing blocks (with replication factor 1): 0
        Low redundancy blocks with highest priority to recover: 0
        Pending deletion blocks: 0
Erasure Coded Block Groups:
        Low redundancy block groups: 0
        Block groups with corrupt internal blocks: 0
        Missing block groups: 0
        Low redundancy blocks with highest priority to recover: 0
        Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 127.0.0.1:9866 (localhost)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 436053315584 (406.11 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 33360449536 (31.07 GB)
DFS Remaining: 402692857856 (375.04 GB)
DFS Used%: 0.00%
DFS Remaining%: 92.35%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Feb 27 01:21:45 CST 2020
Last Block Report: Thu Feb 27 01:18:45 CST 2020
Num of Blocks: 0

Finally, you can check Hadoop and YARN status through the web UIs: with default ports, the HDFS NameNode UI is at http://localhost:9870 and the YARN ResourceManager UI at http://localhost:8088.
