HDFS will be upgraded to version 2.9.2-1.0.2
Upgrade impact: NameNode active/standby failover and DataNode rolling upgrades
Upgrade window: expected to last about 4 days
The week-long HDFS upgrade has now been completed. We sincerely apologize for the intermittent slowdowns it caused during the upgrade window. The cluster is now running stably on HDFS 2.9.
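For anyone who wants to spot-check the cluster state from a client node after the upgrade, here is a minimal sketch using the standard Hadoop CLI. It assumes the hadoop-client binaries already used by the jobs below are on PATH, and the NameNode IDs nn1/nn2 are placeholders for whatever IDs are actually configured for the hz-cluster9 nameservice:

# Quick post-upgrade checks (sketch; nn1/nn2 are assumed NameNode IDs)
hadoop version                        # should report the 2.9.2-based build
hdfs haadmin -getServiceState nn1     # which NameNode is active after the failover
hdfs haadmin -getServiceState nn2
hdfs dfsadmin -rollingUpgrade query   # verifies no rolling upgrade is still pending
hdfs dfsadmin -report | head -n 20    # quick look at DataNode registration status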
Launching Job 2 out of 4
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1560304876299_2765370, Tracking URL = http://hadoop2249.jd.163.org:8088/proxy/application_1560304876299_2765370/
Kill Command = /home/hadoop/hadoop-client/bin/hadoop job -kill job_1560304876299_2765370
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2019-07-02 07:03:24,647 Stage-2 map = 0%, reduce = 0%
2019-07-02 07:03:34,064 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 3.22 sec
2019-07-02 07:03:46,609 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 8.76 sec
MapReduce Total cumulative CPU time: 8 seconds 760 msec
Ended Job = job_1560304876299_2765370
Stage-5 is selected by condition resolver.
Stage-4 is filtered out by condition resolver.
Stage-6 is filtered out by condition resolver.
Moving data to directory hdfs://hz-cluster9/tmp/hive/public/.hive-staging_hive_2019-07-02_07-01-08_937_7632443381017040440-1/-ext-10000
Loading data to table portal.adm_device_version_gray_lm_day partition (day=20190701)
chgrp: changing ownership of 'hdfs://hz-cluster9/user/portal/ADM/SDK/ADM_DEVICE_VERSION_GRAY_LM_DAY/day=20190701/000000_0': User null does not belong to hdfs
MapReduce Jobs Launched:
Stage-Stage-1: Map: 23 Reduce: 28 Cumulative CPU: 693.07 sec HDFS Read: 1155880366 HDFS Write: 2688 SUCCESS
Stage-Stage-2: Map: 1 Reduce: 1 Cumulative CPU: 8.76 sec HDFS Read: 38557 HDFS Write: 163 SUCCESS
Total MapReduce CPU Time Spent: 11 minutes 41 seconds 830 msec
Query ID = hadoop_20190620070057_18cf7c77-4d9f-4b2f-8163-e8630c8e9bd8
Total jobs = 4
Launching Job 1 out of 4
Number of reduce tasks not specified. Estimated from input data size: 29
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1560304876299_1110798, Tracking URL = http://hadoop2249.jd.163.org:8088/proxy/application_1560304876299_1110798/
Kill Command = /home/hadoop/hadoop-client/bin/hadoop job -kill job_1560304876299_1110798
Hadoop job information for Stage-1: number of mappers: 23; number of reducers: 29
2019-06-20 07:01:26,814 Stage-1 map = 0%, reduce = 0%
2019-06-20 07:01:40,790 Stage-1 map = 17%, reduce = 0%, Cumulative CPU 33.56 sec
2019-06-20 07:01:41,846 Stage-1 map = 35%, reduce = 0%, Cumulative CPU 70.78 sec
2019-06-20 07:01:42,899 Stage-1 map = 39%, reduce = 0%, Cumulative CPU 81.92 sec
2019-06-20 07:01:43,953 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 94.7 sec
2019-06-20 07:01:45,005 Stage-1 map = 52%, reduce = 0%, Cumulative CPU 121.72 sec
2019-06-20 07:01:46,059 Stage-1 map = 65%, reduce = 0%, Cumulative CPU 159.58 sec
2019-06-20 07:01:47,111 Stage-1 map = 74%, reduce = 0%, Cumulative CPU 188.98 sec
2019-06-20 07:01:48,163 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 208.05 sec
2019-06-20 07:01:49,215 Stage-1 map = 83%, reduce = 0%, Cumulative CPU 300.1 sec
2019-06-20 07:01:53,428 Stage-1 map = 84%, reduce = 0%, Cumulative CPU 300.1 sec
2019-06-20 07:01:55,530 Stage-1 map = 86%, reduce = 0%, Cumulative CPU 333.57 sec
2019-06-20 07:01:56,580 Stage-1 map = 87%, reduce = 0%, Cumulative CPU 336.84 sec
2019-06-20 07:01:59,730 Stage-1 map = 89%, reduce = 0%, Cumulative CPU 342.65 sec
2019-06-20 07:02:01,831 Stage-1 map = 95%, reduce = 0%, Cumulative CPU 366.47 sec
2019-06-20 07:02:02,881 Stage-1 map = 96%, reduce = 0%, Cumulative CPU 368.19 sec
2019-06-20 07:02:20,147 Stage-1 map = 96%, reduce = 1%, Cumulative CPU 391.02 sec
2019-06-20 07:02:21,196 Stage-1 map = 96%, reduce = 4%, Cumulative CPU 393.57 sec
2019-06-20 07:02:22,244 Stage-1 map = 96%, reduce = 9%, Cumulative CPU 396.77 sec
2019-06-20 07:02:23,293 Stage-1 map = 96%, reduce = 21%, Cumulative CPU 406.67 sec
2019-06-20 07:02:24,342 Stage-1 map = 96%, reduce = 22%, Cumulative CPU 407.82 sec
2019-06-20 07:02:25,390 Stage-1 map = 96%, reduce = 32%, Cumulative CPU 423.8 sec
2019-06-20 07:02:31,679 Stage-1 map = 99%, reduce = 32%, Cumulative CPU 436.16 sec
2019-06-20 07:02:35,914 Stage-1 map = 100%, reduce = 32%, Cumulative CPU 442.78 sec
2019-06-20 07:02:36,962 Stage-1 map = 100%, reduce = 34%, Cumulative CPU 446.25 sec
2019-06-20 07:02:38,009 Stage-1 map = 100%, reduce = 43%, Cumulative CPU 458.27 sec
2019-06-20 07:02:39,058 Stage-1 map = 100%, reduce = 44%, Cumulative CPU 461.81 sec
2019-06-20 07:02:40,105 Stage-1 map = 100%, reduce = 50%, Cumulative CPU 501.33 sec
2019-06-20 07:02:41,152 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 593.97 sec
2019-06-20 07:02:42,200 Stage-1 map = 100%, reduce = 71%, Cumulative CPU 656.88 sec
2019-06-20 07:02:43,255 Stage-1 map = 100%, reduce = 80%, Cumulative CPU 740.49 sec
2019-06-20 07:02:44,303 Stage-1 map = 100%, reduce = 91%, Cumulative CPU 847.97 sec
2019-06-20 07:02:45,350 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 905.88 sec
2019-06-20 07:02:46,398 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 908.45 sec
MapReduce Total cumulative CPU time: 15 minutes 8 seconds 450 msec
Ended Job = job_1560304876299_1110798
Launching Job 2 out of 4
Number of reduce tasks not specified. Estimated from input data size: 4
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1560304876299_1111042, Tracking URL = http://hadoop2249.jd.163.org:8088/proxy/application_1560304876299_1111042/
Kill Command = /home/hadoop/hadoop-client/bin/hadoop job -kill job_1560304876299_1111042
Hadoop job information for Stage-2: number of mappers: 2; number of reducers: 4
2019-06-20 07:03:13,268 Stage-2 map = 0%, reduce = 0%
2019-06-20 07:03:33,140 Stage-2 map = 5%, reduce = 0%, Cumulative CPU 16.21 sec
2019-06-20 07:03:34,188 Stage-2 map = 9%, reduce = 0%, Cumulative CPU 29.74 sec
2019-06-20 07:03:39,424 Stage-2 map = 14%, reduce = 0%, Cumulative CPU 36.78 sec
2019-06-20 07:03:40,471 Stage-2 map = 21%, reduce = 0%, Cumulative CPU 44.45 sec
2019-06-20 07:03:45,711 Stage-2 map = 24%, reduce = 0%, Cumulative CPU 52.39 sec
2019-06-20 07:03:46,759 Stage-2 map = 32%, reduce = 0%, Cumulative CPU 59.48 sec
2019-06-20 07:03:50,996 Stage-2 map = 37%, reduce = 0%, Cumulative CPU 66.91 sec
2019-06-20 07:03:52,041 Stage-2 map = 44%, reduce = 0%, Cumulative CPU 74.26 sec
2019-06-20 07:03:57,270 Stage-2 map = 48%, reduce = 0%, Cumulative CPU 81.13 sec
2019-06-20 07:03:58,317 Stage-2 map = 51%, reduce = 0%, Cumulative CPU 88.22 sec
2019-06-20 07:04:03,545 Stage-2 map = 56%, reduce = 0%, Cumulative CPU 96.27 sec
2019-06-20 07:04:04,592 Stage-2 map = 72%, reduce = 0%, Cumulative CPU 104.62 sec
2019-06-20 07:04:06,683 Stage-2 map = 77%, reduce = 0%, Cumulative CPU 106.83 sec
2019-06-20 07:04:09,891 Stage-2 map = 80%, reduce = 0%, Cumulative CPU 113.5 sec
2019-06-20 07:04:15,121 Stage-2 map = 85%, reduce = 0%, Cumulative CPU 121.9 sec
2019-06-20 07:04:21,405 Stage-2 map = 93%, reduce = 0%, Cumulative CPU 127.99 sec
2019-06-20 07:04:23,496 Stage-2 map = 93%, reduce = 4%, Cumulative CPU 128.64 sec
2019-06-20 07:04:26,635 Stage-2 map = 93%, reduce = 8%, Cumulative CPU 129.22 sec
2019-06-20 07:04:27,682 Stage-2 map = 99%, reduce = 13%, Cumulative CPU 134.86 sec
2019-06-20 07:04:28,727 Stage-2 map = 100%, reduce = 13%, Cumulative CPU 136.27 sec
2019-06-20 07:04:29,772 Stage-2 map = 100%, reduce = 22%, Cumulative CPU 138.29 sec
2019-06-20 07:04:31,864 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 141.61 sec
2019-06-20 07:04:33,955 Stage-2 map = 100%, reduce = 45%, Cumulative CPU 147.02 sec
2019-06-20 07:04:36,044 Stage-2 map = 100%, reduce = 66%, Cumulative CPU 165.59 sec
2019-06-20 07:04:38,134 Stage-2 map = 100%, reduce = 70%, Cumulative CPU 177.15 sec
2019-06-20 07:04:40,223 Stage-2 map = 100%, reduce = 74%, Cumulative CPU 188.89 sec
2019-06-20 07:04:42,313 Stage-2 map = 100%, reduce = 80%, Cumulative CPU 207.67 sec
2019-06-20 07:04:44,454 Stage-2 map = 100%, reduce = 84%, Cumulative CPU 213.91 sec
2019-06-20 07:04:45,499 Stage-2 map = 100%, reduce = 88%, Cumulative CPU 220.12 sec
2019-06-20 07:04:46,544 Stage-2 map = 100%, reduce = 91%, Cumulative CPU 224.33 sec
2019-06-20 07:04:47,590 Stage-2 map = 100%, reduce = 95%, Cumulative CPU 232.15 sec
2019-06-20 07:04:48,636 Stage-2 map = 100%, reduce = 97%, Cumulative CPU 235.94 sec
2019-06-20 07:04:52,813 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 240.89 sec
MapReduce Total cumulative CPU time: 4 minutes 0 seconds 890 msec
Ended Job = job_1560304876299_1111042
Stage-5 is filtered out by condition resolver.
Stage-4 is selected by condition resolver.
Stage-6 is filtered out by condition resolver.
Launching Job 4 out of 4
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1560304876299_1111348, Tracking URL = http://hadoop2249.jd.163.org:8088/proxy/application_1560304876299_1111348/
Kill Command = /home/hadoop/hadoop-client/bin/hadoop job -kill job_1560304876299_1111348
Hadoop job information for Stage-4: number of mappers: 1; number of reducers: 0
2019-06-20 07:05:13,734 Stage-4 map = 0%, reduce = 0%
2019-06-20 07:05:26,392 Stage-4 map = 100%, reduce = 0%, Cumulative CPU 9.02 sec
MapReduce Total cumulative CPU time: 9 seconds 20 msec
Ended Job = job_1560304876299_1111348
Loading data to table portal.adm_device_version_gray_lm_day partition (day=20190619)
MapReduce Jobs Launched:
Stage-Stage-1: Map: 23 Reduce: 29 Cumulative CPU: 908.45 sec HDFS Read: 1496152720 HDFS Write: 814948451 SUCCESS
Stage-Stage-2: Map: 2 Reduce: 4 Cumulative CPU: 240.89 sec HDFS Read: 815008439 HDFS Write: 14627954 SUCCESS
Stage-Stage-4: Map: 1 Cumulative CPU: 9.02 sec HDFS Read: 14632161 HDFS Write: 14627277 SUCCESS
Total MapReduce CPU Time Spent: 19 minutes 18 seconds 360 msec
OK
Time taken: 273.461 seconds
job ok
2019-06-20 07:05:31 CST adm_device_version_gray_lm_day INFO - Process completed successfully in 317 seconds.
2019-06-20 07:05:31 CST adm_device_version_gray_lm_day INFO - Find application information of application_1560304876299_1111042: {
applicationId: application_1560304876299_1111042
type: MAPREDUCE
name: insert overwrite tabl...ce_model
,os_version(Stage-2)
originTrackingURI: http://hadoop2249.jd.163.org:19888/jobhistory/job/job_1560304876299_1111042
}
2019-06-20 07:05:31 CST adm_device_version_gray_lm_day INFO - Find application information of application_1560304876299_1110798: {
applicationId: application_1560304876299_1110798
type: MAPREDUCE
name: insert overwrite tabl...ce_model
,os_version(Stage-1)
originTrackingURI: http://hadoop2249.jd.163.org:19888/jobhistory/job/job_1560304876299_1110798
}
2019-06-20 07:05:31 CST adm_device_version_gray_lm_day INFO - Find application information of application_1560304876299_1111348: {
applicationId: application_1560304876299_1111348
type: MAPREDUCE
name: insert overwrite tabl...ce_model
,os_version(Stage-4)
originTrackingURI: http://hadoop2249.jd.163.org:19888/jobhistory/job/job_1560304876299_1111348
}
2019-06-20 07:05:31 CST adm_device_version_gray_lm_day INFO - Finishing job adm_device_version_gray_lm_day attempt: 0 at 1560985531859 with status SUCCEEDED
Today I suddenly found that the table's DFS path has no data; the log above is from the job run.
Still investigating the cause.
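As a minimal sketch of the kind of checks involved (assuming a client with both the hive and hdfs CLIs; the table, partition, and path below are taken from the log above and may not match the metastore's registered location exactly):

# Compare the metastore's view of the partition with what is actually on HDFS
hive -e "SHOW PARTITIONS portal.adm_device_version_gray_lm_day;"
hive -e "DESCRIBE FORMATTED portal.adm_device_version_gray_lm_day PARTITION (day='20190701');"   # drop the quotes if day is an int partition column
# Path taken from the chgrp warning in the log above
hdfs dfs -ls hdfs://hz-cluster9/user/portal/ADM/SDK/ADM_DEVICE_VERSION_GRAY_LM_DAY/day=20190701/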