操屁眼的视频在线免费看,日本在线综合一区二区,久久在线观看免费视频,欧美日韩精品久久综

新聞資訊

    Requirement:
    Install and configure Kerberos authentication for a newly built Hadoop cluster and Hive deployment.

    Versions:
    centos 7.7
    hadoop 2.7.6
    hive 1.2.2

    Deployment plan:
    192.168.216.111 hadoop01 namenode, resourcemanager, datanode, nodemanager, hive, KDC server
    192.168.216.112 hadoop02 datanode, nodemanager, secondarynamenode, Kerberos client
    192.168.216.113 hadoop03 datanode, nodemanager, Kerberos client

    Chapter 1 Kerberos authentication

    1.1 Kerberos overview

        Kerberos is a network authentication protocol designed to provide strong authentication for client/server applications by means of secret-key cryptography. The authentication process does not rely on the host operating system's authentication, requires no trust based on host addresses, does not assume physical security of every host on the network, and assumes that packets traveling on the network can be read, modified, and injected at will. Under these conditions, Kerberos acts as a trusted third-party authentication service and performs authentication using conventional cryptographic techniques (such as shared secret keys).


    1.2 How Kerberos authentication works

    Kerberos revolves around tickets. A ticket is similar to a driver's license: it identifies a person and the class of vehicle they are allowed to drive.

    Kerberos is an authentication protocol based on symmetric-key cryptography. It acts as an independent, trusted third-party authentication service that other services can rely on for identity verification, and it supports SSO (once a client has authenticated, it can access multiple services such as HBase/HDFS).

    The Kerberos protocol has two main phases: in the first, the KDC authenticates the Client; in the second, the Service authenticates the Client, as outlined below.

    Terminology:

    KDC: the Kerberos server program; the Key Distribution Center, responsible for issuing tickets and recording grants.
    Client: the user (principal) that needs to access a service; both the KDC and the Service authenticate it.
    Service: a service with Kerberos integrated, such as HDFS/YARN/HBase.
    principal: every user or service added to the KDC gets a principal entry, of the form primary/instance@REALM (see the examples after this list).
    TGT: Ticket Granting Ticket.
    SGT: Service Granting Ticket.
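    For illustration, here are a few principals in the form used throughout this document (the HIVE.COM realm and host names are the ones defined in the deployment plan above):

    # service principals: <service name>/<host>@<REALM>
    hdfs/hadoop01@HIVE.COM
    yarn/hadoop02@HIVE.COM
    HTTP/hadoop03@HIVE.COM
    # an administrative user principal
    admin/admin@HIVE.COM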

    Authentication steps:

    • The KDC authenticates the Client
      Before a client user (principal) can access a Kerberos-enabled service, it must first authenticate with the KDC.
      If authentication succeeds, the client receives a TGT (Ticket Granting Ticket), which it can then use when accessing Kerberos-enabled services.
    • The Service authenticates the Client
      Once the user holds a TGT, it can go on to access a Service. It presents the TGT together with the name of the service it wants (e.g. HDFS) to the KDC and obtains an SGT (Service Granting Ticket), then uses the SGT to access the Service. The Service uses the information carried in the SGT to authenticate the Client; once that succeeds, the Client can use the Service normally.


    1.3 Installing and deploying Kerberos

    1.3.1 Installing the Kerberos server (KDC)

    [root@hadoop01 ~]# yum install -y krb5-server krb5-libs krb5-workstation
    Or, alternatively:
    yum install -y krb5-server krb5-libs krb5-auth-dialog krb5-workstation

    1.3.2 Installing the Kerberos clients

    The client packages only need to be installed on the Hadoop worker nodes.
    [root@hadoop02 ~]# yum install -y krb5-libs krb5-workstation
    [root@hadoop03 ~]# yum install -y krb5-libs krb5-workstation


    1.3.3 Configuring the KDC

    This only needs to be modified on the node where the Kerberos server is installed.

    [root@hadoop01 ~]# vi /var/kerberos/krb5kdc/kdc.conf
    Change the contents as follows:
    [kdcdefaults]
     kdc_ports=88
     kdc_tcp_ports=88
    ?
    [realms]
    # EXAMPLE.COM={
    #  #master_key_type=aes256-cts
    #  acl_file=/var/kerberos/krb5kdc/kadm5.acl
    #  dict_file=/usr/share/dict/words
    #  admin_keytab=/var/kerberos/krb5kdc/kadm5.keytab
    #  supported_enctypes=aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
    # }
    ?
     HIVE.COM={
      #master_key_type=aes256-cts
      acl_file=/var/kerberos/krb5kdc/kadm5.acl
      dict_file=/usr/share/dict/words
      admin_keytab=/var/kerberos/krb5kdc/kadm5.keytab
      max_renewable_life=7d
      supported_enctypes=aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
     }

    Notes on the configuration:

    HIVE.COM: the realm being defined. The name is arbitrary; Kerberos supports multiple realms, and realm names are conventionally written in upper case.
    master_key_type and supported_enctypes default to aes256-cts. Because Java needs additional JCE policy jars to use aes256-cts, it is not used here.
    acl_file: defines the admin users' permissions. The file format is
    Kerberos_principal permissions [target_principal] [restrictions], and wildcards are supported.
    admin_keytab: the keytab used by the KDC for verification.
    supported_enctypes: the supported encryption types. Note that aes256-cts has been removed.


    1.3.4 Configuring krb5.conf

    krb5.conf must be configured on both the Kerberos server and the clients.
    Kerberos server configuration:
    [root@hadoop01 ~]# vi /etc/krb5.conf

    Replace the contents with the following:
    # Configuration snippets may be placed in this directory as well
    includedir /etc/krb5.conf.d/
    ?
    [logging]
     default=FILE:/var/log/krb5libs.log
     kdc=FILE:/var/log/krb5kdc.log
     admin_server=FILE:/var/log/kadmind.log
    ?
    [libdefaults]
    # dns_lookup_realm=false
    # ticket_lifetime=24h
    # renew_lifetime=7d
    # forwardable=true
    # rdns=false
    # pkinit_anchors=/etc/pki/tls/certs/ca-bundle.crt
    ## default_realm=EXAMPLE.COM
    # default_ccache_name=KEYRING:persistent:%{uid}
     default_realm=HIVE.COM
     dns_lookup_realm=false
     dns_lookup_kdc=false
     ticket_lifetime=24h
     renew_lifetime=7d
     forwardable=true
     clockskew=120
     udp_preference_limit=1
    ?
    [realms]
    # EXAMPLE.COM={
    #  kdc=kerberos.example.com
    #  admin_server=kerberos.example.com
    # }
     HIVE.COM={
      kdc=hadoop01
      admin_server=hadoop01
     }
    ?
    [domain_realm]
    # .example.com=EXAMPLE.COM
    # example.com=EXAMPLE.COM
     .hive.com=HIVE.COM
     hive.com=HIVE.COM
     
     
    Kerberos client configuration:
    [root@hadoop02 ~]# vi /etc/krb5.conf
    Same contents as above.
    [root@hadoop03 ~]# vi /etc/krb5.conf
    Same contents as above.

    Notes on the configuration:

    [logging]: where the server-side logs are written.
    udp_preference_limit=1: disables UDP, which avoids a known Hadoop issue.
    ticket_lifetime: how long a ticket is valid, typically 24 hours.
    renew_lifetime: the maximum period over which a ticket can be renewed, typically one week. Once a ticket has expired, subsequent access to Kerberos-secured services fails.
    clockskew: the tolerance, in seconds, for ticket timestamps that do not exactly match the host's system clock; tickets outside this tolerance are rejected.
    Change the realm from the default EXAMPLE.COM to the value you want to define, e.g. HIVE.COM. In particular, the following parameters need to be changed:
    default_realm: the default realm, e.g. HIVE.COM.
    kdc: the location of the KDC, given as a host name.
    admin_server: the location of the admin server, given as a host name.
    default_domain: the default domain (set to the domain corresponding to the master host, e.g. hive.com).


    1.3.5 ACL permissions for the database administrator

    Configure the database administrator permissions. This is done on the Kerberos server.

    [root@hadoop01 ~]# vi /var/kerberos/krb5kdc/kadm5.acl
    Change it as follows:
    */admin@HIVE.COM        *

    Notes on the configuration:

    More details on the kadm5.acl file format can be found in the kadm5.acl documentation.
    There are two ways to administer the KDC database: one is run directly on the KDC host and can open the database without a password; the other requires an account and password and can be used from other hosts. The two commands are:
    kadmin.local: must be run on the KDC server itself; administers the database without a password
    kadmin: can be run on any host within the KDC's realm, but requires the administrator password
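    For example, once the admin/admin principal from section 1.3.6.3 exists, the two tools can be invoked roughly as follows:

    # on the KDC host itself, no password required
    [root@hadoop01 ~]# kadmin.local
    # from any host in the realm, authenticating as the admin principal
    [root@hadoop02 ~]# kadmin -p admin/admin@HIVE.COM
    Password for admin/admin@HIVE.COM: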


    1.3.6 Kerberos service operations

    1.3.6.1 Creating the Kerberos database

    Creating the Kerberos database requires setting an administrator password. On success, a set of files is generated under /var/kerberos/krb5kdc/. If the database needs to be re-created, first delete the principal* files under /var/kerberos/krb5kdc.

    Command to run on the Kerberos server:

    [root@hadoop01 ~]# kdb5_util create -s -r HIVE.COM

    Enter the KDC master password and make sure to remember it. Here I set it to root; just enter the same value twice.
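    After a successful create, the directory should contain files along the following lines (exact names can vary slightly between krb5 versions):

    [root@hadoop01 ~]# ls /var/kerberos/krb5kdc/
    kadm5.acl  kdc.conf  principal  principal.kadm5  principal.kadm5.lock  principal.ok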


    1.3.6.2 Starting Kerberos at boot

    Run the following on the Kerberos server.

    [root@hadoop01 ~]# chkconfig krb5kdc on
    [root@hadoop01 ~]# chkconfig kadmin on
    [root@hadoop01 ~]# service krb5kdc start
    [root@hadoop01 ~]# service kadmin start
    [root@hadoop01 ~]# service krb5kdc status


    1.3.6.3 Creating the Kerberos administrator

    Run the following command on the Kerberos server.

    After entering kadmin.local, add the principal: addprinc admin/admin@HIVE.COM.
    [root@hadoop01 ~]# kadmin.local
    Authenticating as principal root/admin@HIVE.COM with password.
    Then follow the prompts:

    Enter the principal and its password (type the same password twice; I used root).

    Finally exit with q, quit, or exit.
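    A typical session looks roughly like this (prompts paraphrased; choose your own password):

    kadmin.local:  addprinc admin/admin@HIVE.COM
    WARNING: no policy specified for admin/admin@HIVE.COM; defaulting to no policy
    Enter password for principal "admin/admin@HIVE.COM":
    Re-enter password for principal "admin/admin@HIVE.COM":
    Principal "admin/admin@HIVE.COM" created.
    kadmin.local:  q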


    Chapter 2 Configuring Kerberos for the Hadoop cluster

    Some concepts:
    A Kerberos principal marks a unique identity within the Kerberos system.
    Kerberos assigns tickets to a Kerberos principal so that it can access Hadoop services secured by Kerberos.
    For Hadoop, principals take the form username/fully.qualified.domain.name@YOUR-REALM.COM.

    A keytab is a file containing principals and their encrypted keys. A keytab file is unique to each host, because the key incorporates the hostname. Keytabs let a host authenticate a principal to Kerberos without human interaction and without storing a plain-text password. Because anyone who can read a keytab file can authenticate as the principal it contains, keytab files must be stored carefully and be readable by as few users as possible.
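    For instance, once a keytab has been generated (they are created in section 2.2 below), logging in with it instead of a password and checking the result looks like this:

    # authenticate using the keytab rather than an interactive password
    kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab hdfs/hadoop01@HIVE.COM
    # confirm a TGT was obtained for that principal
    klist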

    Configuring Kerberos for Hive assumes the Hadoop cluster is already Kerberos-enabled, so we configure the Hadoop cluster's authentication first.

    2.1 Creating users

    The users below are created with passwords equal to their usernames; you can set whatever you like.
    # create the hadoop user
    [root@hadoop01 hadoop]# useradd hadoop
    [root@hadoop01 hadoop]# passwd hadoop

    [root@hadoop02 hadoop]# useradd hadoop
    [root@hadoop02 hadoop]# passwd hadoop

    [root@hadoop03 hadoop]# useradd hadoop
    [root@hadoop03 hadoop]# passwd hadoop

    # create the yarn user; give it a user ID below 1000:
    [root@hadoop01 ~]# useradd -u 502 yarn -g hadoop
    # and set its password with passwd
    [root@hadoop01 ~]# passwd yarn
    passwd yarn (enter the new password)

    # create the hdfs user
    [root@hadoop01 hadoop]# useradd hdfs -g hadoop
    [root@hadoop01 hadoop]# passwd hdfs

    [root@hadoop02 hadoop]# useradd hdfs -g hadoop
    [root@hadoop02 hadoop]# passwd hdfs

    [root@hadoop03 hadoop]# useradd hdfs -g hadoop
    [root@hadoop03 hadoop]# passwd hdfs

    # create the HTTP user
    [root@hadoop01 hadoop]# useradd HTTP
    [root@hadoop01 hadoop]# passwd HTTP

    [root@hadoop02 hadoop]# useradd HTTP
    [root@hadoop02 hadoop]# passwd HTTP

    [root@hadoop03 hadoop]# useradd HTTP
    [root@hadoop03 hadoop]# passwd HTTP

    2.2 創(chuàng)建 kerberos的普通用戶及密鑰文件,為配置 YARN kerberos security 時,各節(jié)點可以相互訪問用


    在服務(wù)端節(jié)點的root用戶下分別執(zhí)行以下命令:
    ?
    [root@hadoop01 ~]# cd /var/kerberos/krb5kdc/
    # log in to the admin tool
    [root@hadoop01 krb5kdc]# kadmin.local
    #創(chuàng)建用戶
    addprinc -randkey yarn/hadoop01@HIVE.COM
    addprinc -randkey yarn/hadoop02@HIVE.COM
    addprinc -randkey yarn/hadoop03@HIVE.COM
    addprinc -randkey hdfs/hadoop01@HIVE.COM
    addprinc -randkey hdfs/hadoop02@HIVE.COM
    addprinc -randkey hdfs/hadoop03@HIVE.COM
    addprinc -randkey HTTP/hadoop01@HIVE.COM
    addprinc -randkey HTTP/hadoop02@HIVE.COM
    addprinc -randkey HTTP/hadoop03@HIVE.COM
    # generate the keytab files (written to the current directory)
    [root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k yarn.keytab  yarn/hadoop01@HIVE.COM"
    [root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k yarn.keytab  yarn/hadoop02@HIVE.COM"
    [root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k yarn.keytab  yarn/hadoop03@HIVE.COM"
    ?
    [root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k HTTP.keytab  HTTP/hadoop01@HIVE.COM"
    [root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k HTTP.keytab  HTTP/hadoop02@HIVE.COM"
    [root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k HTTP.keytab  HTTP/hadoop03@HIVE.COM"
    ?
    [root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k hdfs-unmerged.keytab hdfs/hadoop01@HIVE.COM"
    [root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k hdfs-unmerged.keytab  hdfs/hadoop02@HIVE.COM"
    [root@hadoop01 krb5kdc]# kadmin.local -q "xst  -k hdfs-unmerged.keytab hdfs/hadoop03@HIVE.COM"

    # merge them into a single keytab; in ktutil, rkt reads a keytab and wkt writes one out
    [root@hadoop01 krb5kdc]# ktutil
    ktutil:  rkt hdfs-unmerged.keytab
    ktutil:  rkt HTTP.keytab
    ktutil:  rkt yarn.keytab
    ktutil:  wkt hdfs.keytab
    ktutil:  q
    Note: the text after "ktutil:" is what you type.

    # inspect the result
    [root@hadoop01 krb5kdc]# klist -ket  hdfs.keytab
    Keytab name: FILE:hdfs.keytab
    KVNO Timestamp           Principal
    ---- ------------------- ------------------------------------------------------
       3 04/14/2020 15:48:21 hdfs/hadoop01@HIVE.COM (aes128-cts-hmac-sha1-96)
       3 04/14/2020 15:48:21 hdfs/hadoop01@HIVE.COM (des3-cbc-sha1)
       3 04/14/2020 15:48:21 hdfs/hadoop01@HIVE.COM (arcfour-hmac)
       3 04/14/2020 15:48:21 hdfs/hadoop01@HIVE.COM (camellia256-cts-cmac)
       3 04/14/2020 15:48:21 hdfs/hadoop01@HIVE.COM (camellia128-cts-cmac)
       3 04/14/2020 15:48:21 hdfs/hadoop01@HIVE.COM (des-hmac-sha1)
       3 04/14/2020 15:48:21 hdfs/hadoop01@HIVE.COM (des-cbc-md5)
       3 04/14/2020 15:48:21 hdfs/hadoop02@HIVE.COM (aes128-cts-hmac-sha1-96)
       3 04/14/2020 15:48:21 hdfs/hadoop02@HIVE.COM (des3-cbc-sha1)
       3 04/14/2020 15:48:21 hdfs/hadoop02@HIVE.COM (arcfour-hmac)
       3 04/14/2020 15:48:21 hdfs/hadoop02@HIVE.COM (camellia256-cts-cmac)
       3 04/14/2020 15:48:21 hdfs/hadoop02@HIVE.COM (camellia128-cts-cmac)
       3 04/14/2020 15:48:21 hdfs/hadoop02@HIVE.COM (des-hmac-sha1)
       3 04/14/2020 15:48:21 hdfs/hadoop02@HIVE.COM (des-cbc-md5)
       8 04/14/2020 15:48:21 HTTP/hadoop03@HIVE.COM (aes128-cts-hmac-sha1-96)
       8 04/14/2020 15:48:21 HTTP/hadoop03@HIVE.COM (des3-cbc-sha1)
       8 04/14/2020 15:48:21 HTTP/hadoop03@HIVE.COM (arcfour-hmac)
       8 04/14/2020 15:48:21 HTTP/hadoop03@HIVE.COM (camellia256-cts-cmac)
       8 04/14/2020 15:48:21 HTTP/hadoop03@HIVE.COM (camellia128-cts-cmac)
       8 04/14/2020 15:48:21 HTTP/hadoop03@HIVE.COM (des-hmac-sha1)
       8 04/14/2020 15:48:21 HTTP/hadoop03@HIVE.COM (des-cbc-md5)
       6 04/14/2020 15:48:21 HTTP/hadoop01@HIVE.COM (aes128-cts-hmac-sha1-96)
       6 04/14/2020 15:48:21 HTTP/hadoop01@HIVE.COM (des3-cbc-sha1)
       6 04/14/2020 15:48:21 HTTP/hadoop01@HIVE.COM (arcfour-hmac)
       6 04/14/2020 15:48:21 HTTP/hadoop01@HIVE.COM (camellia256-cts-cmac)
       6 04/14/2020 15:48:21 HTTP/hadoop01@HIVE.COM (camellia128-cts-cmac)
       6 04/14/2020 15:48:21 HTTP/hadoop01@HIVE.COM (des-hmac-sha1)
       6 04/14/2020 15:48:21 HTTP/hadoop01@HIVE.COM (des-cbc-md5)
       6 04/14/2020 15:48:21 HTTP/hadoop02@HIVE.COM (aes128-cts-hmac-sha1-96)
       6 04/14/2020 15:48:21 HTTP/hadoop02@HIVE.COM (des3-cbc-sha1)
       6 04/14/2020 15:48:21 HTTP/hadoop02@HIVE.COM (arcfour-hmac)
       6 04/14/2020 15:48:21 HTTP/hadoop02@HIVE.COM (camellia256-cts-cmac)
       6 04/14/2020 15:48:21 HTTP/hadoop02@HIVE.COM (camellia128-cts-cmac)
       6 04/14/2020 15:48:21 HTTP/hadoop02@HIVE.COM (des-hmac-sha1)
       6 04/14/2020 15:48:21 HTTP/hadoop02@HIVE.COM (des-cbc-md5)
       7 04/14/2020 15:48:21 HTTP/hadoop03@HIVE.COM (aes128-cts-hmac-sha1-96)
       7 04/14/2020 15:48:21 HTTP/hadoop03@HIVE.COM (des3-cbc-sha1)
       7 04/14/2020 15:48:21 HTTP/hadoop03@HIVE.COM (arcfour-hmac)
       7 04/14/2020 15:48:21 HTTP/hadoop03@HIVE.COM (camellia256-cts-cmac)
       7 04/14/2020 15:48:21 HTTP/hadoop03@HIVE.COM (camellia128-cts-cmac)
       7 04/14/2020 15:48:21 HTTP/hadoop03@HIVE.COM (des-hmac-sha1)
       7 04/14/2020 15:48:21 HTTP/hadoop03@HIVE.COM (des-cbc-md5)
       4 04/14/2020 15:48:21 yarn/hadoop01@HIVE.COM (aes128-cts-hmac-sha1-96)
       4 04/14/2020 15:48:21 yarn/hadoop01@HIVE.COM (des3-cbc-sha1)
       4 04/14/2020 15:48:21 yarn/hadoop01@HIVE.COM (arcfour-hmac)
       4 04/14/2020 15:48:21 yarn/hadoop01@HIVE.COM (camellia256-cts-cmac)
       4 04/14/2020 15:48:21 yarn/hadoop01@HIVE.COM (camellia128-cts-cmac)
       4 04/14/2020 15:48:21 yarn/hadoop01@HIVE.COM (des-hmac-sha1)
       4 04/14/2020 15:48:21 yarn/hadoop01@HIVE.COM (des-cbc-md5)
       4 04/14/2020 15:48:21 yarn/hadoop02@HIVE.COM (aes128-cts-hmac-sha1-96)
       4 04/14/2020 15:48:21 yarn/hadoop02@HIVE.COM (des3-cbc-sha1)
       4 04/14/2020 15:48:21 yarn/hadoop02@HIVE.COM (arcfour-hmac)
       4 04/14/2020 15:48:21 yarn/hadoop02@HIVE.COM (camellia256-cts-cmac)
       4 04/14/2020 15:48:21 yarn/hadoop02@HIVE.COM (camellia128-cts-cmac)
       4 04/14/2020 15:48:21 yarn/hadoop02@HIVE.COM (des-hmac-sha1)
       4 04/14/2020 15:48:21 yarn/hadoop02@HIVE.COM (des-cbc-md5)
       4 04/14/2020 15:48:21 yarn/hadoop03@HIVE.COM (aes128-cts-hmac-sha1-96)
       4 04/14/2020 15:48:21 yarn/hadoop03@HIVE.COM (des3-cbc-sha1)
       4 04/14/2020 15:48:21 yarn/hadoop03@HIVE.COM (arcfour-hmac)
       4 04/14/2020 15:48:21 yarn/hadoop03@HIVE.COM (camellia256-cts-cmac)
       4 04/14/2020 15:48:21 yarn/hadoop03@HIVE.COM (camellia128-cts-cmac)
       4 04/14/2020 15:48:21 yarn/hadoop03@HIVE.COM (des-hmac-sha1)
       4 04/14/2020 15:48:21 yarn/hadoop03@HIVE.COM (des-cbc-md5)

    Copy the generated hdfs.keytab into the Hadoop configuration directory and set its owner and permissions. Keytab login failures are common later on, and the file permissions are the first thing to check.

    [root@hadoop01 krb5kdc]# cp ./hdfs.keytab /usr/local/hadoop-2.7.6/etc/hadoop/
    [root@hadoop01 krb5kdc]# cd /usr/local/hadoop-2.7.6/etc/hadoop/
    [root@hadoop01 krb5kdc]# chown hdfs:hadoop hdfs.keytab && chmod 400 hdfs.keytab


    2.3 Configuring the Hadoop cluster

    core-site.xml configuration:

    <!-- add the following -->
    <property>
        <name>hadoop.security.authorization</name>
        <value>true</value>
    </property>
    <property>
        <name>hadoop.security.authentication</name>
        <value>kerberos</value>
    </property>

    yarn-site.xml

    <!-- add the following; skip this property if memory is limited
    <property>
          <name>yarn.nodemanager.resource.memory-mb</name>
          <value>1024</value>
    </property>
    -->
    <!-- ResourceManager security configs -->
    <property>
      <name>yarn.resourcemanager.keytab</name>
      <value>/usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab</value>
    </property>
    <property>
      <name>yarn.resourcemanager.principal</name>
      <value>hdfs/_HOST@HIVE.COM</value>
    </property>
    <!-- NodeManager security configs -->
    <property>
      <name>yarn.nodemanager.keytab</name>
      <value>/usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab</value>
    </property>
    <property>
      <name>yarn.nodemanager.principal</name>
      <value>hdfs/_HOST@HIVE.COM</value>
    </property>
    <property>
      <name>yarn.nodemanager.container-executor.class</name>
      <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
    </property>
    <property>
      <name>yarn.nodemanager.linux-container-executor.group</name>
      <value>yarn</value>
    </property>
    <property>
      <name>yarn.resourcemanager.proxy-user-privileges.enabled</name>
      <value>true</value>
    </property>
    <property>
      <name>yarn.nodemanager.local-dirs</name>
      <value>/usr/local/hadoop-2.7.6/tmp/nm-local-dir</value>
    </property>

    hdfs-site.xml

    <!-- add the following -->
    <property>
      <name>dfs.block.access.token.enable</name>
      <value>true</value>
    </property>
    <property>  
      <name>dfs.datanode.data.dir.perm</name>  
      <value>700</value>  
    </property>
    <property>
      <name>dfs.namenode.keytab.file</name>
      <value>/usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab</value>
    </property>
    <property>
      <name>dfs.namenode.kerberos.principal</name>
      <value>hdfs/_HOST@HIVE.COM</value>
    </property>
    <property>
      <name>dfs.namenode.kerberos.https.principal</name>
      <value>HTTP/_HOST@HIVE.COM</value>
    </property>
    <property>
      <name>dfs.datanode.address</name>
      <value>0.0.0.0:1004</value>
    </property>
    <property>
      <name>dfs.datanode.http.address</name>
      <value>0.0.0.0:1006</value>
    </property>
    <property>
      <name>dfs.datanode.keytab.file</name>
      <value>/usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab</value>
    </property>
    <property>
      <name>dfs.datanode.kerberos.principal</name>
      <value>hdfs/_HOST@HIVE.COM</value>
    </property>
    <property>
      <name>dfs.datanode.kerberos.https.principal</name>
      <value>HTTP/_HOST@HIVE.COM</value>
    </property>

    <property>
      <name>dfs.webhdfs.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>dfs.web.authentication.kerberos.principal</name>
      <value>HTTP/_HOST@HIVE.COM</value>
    </property>

    <property>
      <name>dfs.web.authentication.kerberos.keytab</name>
      <value>/usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab</value>
    </property>

    <property>
      <name>dfs.secondary.namenode.keytab.file</name>
      <value>/usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab</value>
    </property>

    <property>
      <name>dfs.secondary.namenode.kerberos.principal</name>
      <value>hdfs/_HOST@HIVE.COM</value>
    </property>

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/usr/local/hadoop-2.7.6/tmp</value>
    </property>

    mapred-site.xml:

    <!-- add the following -->
    <property>
      <name>mapreduce.jobhistory.keytab</name>
      <value>/usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab</value>
    </property>
    <property>
      <name>mapreduce.jobhistory.principal</name>
      <value>hdfs/_HOST@HIVE.COM</value>
    </property>
    <property>
      <name>mapreduce.jobhistory.http.policy</name>
      <value>HTTPS_ONLY</value>
    </property>


    container-executor.cfg

    # overwrite the file with the following content
    yarn.nodemanager.linux-container-executor.group=hadoop

    #configured value of yarn.nodemanager.linux-container-executor.group

    banned.users=hdfs

    #comma separated list of users who can not run applications

    min.user.id=0

    #Prevent other super-users

    allowed.system.users=root,yarn,hdfs,mapred,nobody

    ##comma separated list of system users who CAN run applications


    2.4 Building and installing JSVC

    With secure DataNodes configured, the DataNode must be started with root privileges, so hadoop-env.sh has to be modified, jsvc has to be installed, and a newly built commons-daemon jar has to replace the one under $HADOOP_HOME/share/hadoop/hdfs/lib.
    Otherwise startup fails with "Cannot start secure DataNode without configuring either privileged resources".

    The full error when starting the DataNode:

    2020-04-14 15:56:35,164 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
    java.lang.RuntimeException: Cannot start secure DataNode without configuring either privileged resources or SASL RPC data transfer protection and SSL for HTTP.  Using privileged resources in combination with SASL RPC data transfer protection is not supported.
            at org.apache.hadoop.hdfs.server.datanode.DataNode.checkSecureConfig(DataNode.java:1208)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1108)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:429)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2414)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2301)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2348)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2530)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2554)
    2020-04-14 15:56:35,173 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
    2020-04-14 15:56:35,179 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

    2.4.1 Downloading the packages

    Download and extract commons-daemon-1.2.2-src.tar.gz and commons-daemon-1.2.2-bin.tar.gz.

    2.4.2 Installation steps

    [root@hadoop01 hadoop]# cd /usr/local
    [root@hadoop01 local]# cd ./JSVC_packages/
    [root@hadoop01 JSVC_packages]# wget http://apache.fayea.com//commons/daemon/source/commons-daemon-1.2.2-src.tar.gz
    [root@hadoop01 JSVC_packages]# wget http://apache.fayea.com//commons/daemon/binaries/commons-daemon-1.2.2-bin.tar.gz
    [root@hadoop01 JSVC_packages]# tar xf commons-daemon-1.2.2-bin.tar.gz
    [root@hadoop01 JSVC_packages]# tar xf commons-daemon-1.2.2-src.tar.gz
    ?
    [root@hadoop01 JSVC_packages]# ll
    total 472
    drwxr-xr-x. 3 root root    278 Apr 14 16:25 commons-daemon-1.2.2
    -rw-r--r--. 1 root root 179626 Apr 14 16:24 commons-daemon-1.2.2-bin.tar.gz
    drwxr-xr-x. 3 root root    180 Apr 14 16:25 commons-daemon-1.2.2-src
    -rw-r--r--. 1 root root 301538 Apr 14 16:24 commons-daemon-1.2.2-src.tar.gz
    ?
    # build jsvc and copy it to the target directory
    [root@hadoop01 JSVC_packages]# cd commons-daemon-1.2.2-src/src/native/unix/
    [root@hadoop01 unix]# ./configure
    [root@hadoop01 unix]# make
    [root@hadoop01 unix]# cp ./jsvc /usr/local/hadoop-2.7.6/libexec/

    # copy commons-daemon-1.2.2.jar into place
    [root@hadoop01 unix]# cd /usr/local/JSVC_packages/commons-daemon-1.2.2/
    [root@hadoop01 commons-daemon-1.2.2]# cp /usr/local/hadoop-2.7.6/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar /usr/local/hadoop-2.7.6/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar.bak
    ?
    [root@hadoop01 commons-daemon-1.2.2]# cp ./commons-daemon-1.2.2.jar /usr/local/hadoop-2.7.6/share/hadoop/hdfs/lib/


    [root@hadoop01 commons-daemon-1.2.2]# cd /usr/local/hadoop-2.7.6/share/hadoop/hdfs/lib/
    [root@hadoop01 lib]# chown hdfs:hadoop commons-daemon-1.2.2.jar


    2.4.3 hadoop-env.sh

    [root@hadoop01 hadoop-2.7.6]# vi ./etc/hadoop/hadoop-env.sh

    Append the following:
    export HADOOP_SECURE_DN_USER=hdfs
    export JSVC_HOME=/usr/local/hadoop-2.7.6/libexec/


    2.5 Distributing to the other servers

    [root@hadoop01 local]# scp -r /usr/local/hadoop-2.7.6/ hadoop02:/usr/local/

    [root@hadoop01 local]# scp -r /usr/local/hadoop-2.7.6/ hadoop03:/usr/local/


    2.6 Starting the Hadoop cluster


    [root@hadoop01 hadoop-2.7.6]# kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab hdfs/hadoop01@HIVE.COM
    [root@hadoop02 hadoop-2.7.6]# kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab hdfs/hadoop02@HIVE.COM
    [root@hadoop03 hadoop-2.7.6]# kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab hdfs/hadoop03@HIVE.COM

    [root@hadoop02 krb5kdc]# cd /usr/local/hadoop-2.7.6/etc/hadoop/
    [root@hadoop02 krb5kdc]# chown hdfs:hadoop hdfs.keytab && chmod 400 hdfs.keytab

    [root@hadoop03 krb5kdc]# cd /usr/local/hadoop-2.7.6/etc/hadoop/
    [root@hadoop03 krb5kdc]# chown hdfs:hadoop hdfs.keytab && chmod 400 hdfs.keytab

    [root@hadoop01 hadoop-2.7.6]# klist
    Ticket cache: FILE:/tmp/krb5cc_0
    Default principal: hdfs/hadoop01@HIVE.COM
    ?
    Valid starting       Expires              Service principal
    04/14/2020 16:49:17  04/15/2020 16:49:17  krbtgt/HIVE.COM@HIVE.COM
            renew until 04/21/2020 16:49:17
            
     
     
     
     [root@hadoop02 ~]# useradd hdfs
     [root@hadoop02 hadoop-2.7.6]# passwd hdfs
     [root@hadoop03 ~]# useradd hdfs
     [root@hadoop03 hadoop-2.7.6]# passwd hdfs

    # start HDFS, as root
    [root@hadoop01 hadoop-2.7.6]# start-dfs.sh
    # start the secure DataNodes, as root
    [root@hadoop01 hadoop-2.7.6]# start-secure-dns.sh
    # start YARN, as root (tested and working)
    [root@hadoop01 hadoop-2.7.6]# start-yarn.sh
    # start the history server, as root
    [root@hadoop01 hadoop-2.7.6]# mr-jobhistory-daemon.sh start historyserver


    Stopping the cluster:
    # stop the secure DataNodes (switch to root)
    [root@hadoop01 hadoop-2.7.6]# stop-secure-dns.sh
    # stop HDFS
    [root@hadoop01 hadoop-2.7.6]# stop-dfs.sh

    # stop YARN, as root (tested and working)
    [root@hadoop01 hadoop-2.7.6]# stop-yarn.sh



    2.7 Testing the Hadoop cluster

    HDFS web UI: http://hadoop01:50070

    YARN web UI: http://hadoop01:8088

    HDFS test:

    [root@hadoop01 hadoop-2.7.6]# hdfs dfs -ls /
    [root@hadoop01 hadoop-2.7.6]# hdfs dfs -put /home/words /
    [root@hadoop01 hadoop-2.7.6]# hdfs dfs -cat /words
    hello qianfeng
    hello flink
    wuhan jiayou hello wuhan wuhan hroe


    # As shown below, a user that has not obtained Kerberos credentials cannot access the HDFS file system
    [hdfs@hadoop02 hadoop]$ hdfs dfs -cat /words
    20/04/15 15:04:41 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    cat: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "hadoop02/192.168.216.112"; destination host is: "hadoop01":9000;

    # Solution:
    [hdfs@hadoop02 hadoop]$ kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab hdfs/hadoop02@HIVE.COM
    [hdfs@hadoop02 hadoop]$ hdfs dfs -cat /words
    hello qianfeng
    hello flink
    wuhan jiayou hello wuhan wuhan hroe


    YARN test:

    [root@hadoop01 hadoop-2.7.6]# kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab yarn/hadoop01@HIVE.COM
    ?
    [root@hadoop01 hadoop-2.7.6]# yarn jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.6.jar wordcount /words /out/00

    Error 1:
    20/04/15 23:42:45 INFO mapreduce.Job: Job job_1586934815492_0008 failed with state FAILED due to: Application application_1586934815492_0008 failed 2 times due to AM Container for appattempt_1586934815492_0008_000002 exited with  exitCode: -1000
    For more detailed output, check application tracking page:http://hadoop01:8088/cluster/app/application_1586934815492_0008Then, click on links to logs of each attempt.
    Diagnostics: Application application_1586934815492_0008 initialization failed (exitCode=255) with output: Requested user hdfs is banned

    Error 2:
    Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
    Solution:
    Configure a temporary directory (hadoop.tmp.dir) in hdfs-site.xml.
    Also configure the NodeManager local directory in yarn-site.xml, using the same base path as in hdfs-site.xml with a fixed suffix appended, as shown in the snippet below.
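    For reference, this corresponds to the two properties already shown in the configuration above (hadoop.tmp.dir in hdfs-site.xml and yarn.nodemanager.local-dirs in yarn-site.xml):

    <!-- hdfs-site.xml -->
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/usr/local/hadoop-2.7.6/tmp</value>
    </property>
    <!-- yarn-site.xml -->
    <property>
      <name>yarn.nodemanager.local-dirs</name>
      <value>/usr/local/hadoop-2.7.6/tmp/nm-local-dir</value>
    </property>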

    # Test again:
    [root@hadoop01 hadoop-2.7.6]# yarn jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.6.jar wordcount /words /out/02
    20/04/16 02:55:38 INFO client.RMProxy: Connecting to ResourceManager at hadoop01/192.168.216.111:8032
    20/04/16 02:55:38 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 61 for yarn on 192.168.216.111:9000
    20/04/16 02:55:38 INFO security.TokenCache: Got dt for hdfs://hadoop01:9000; Kind: HDFS_DELEGATION_TOKEN, Service: 192.168.216.111:9000, Ident: (HDFS_DELEGATION_TOKEN token 61 for yarn)
    20/04/16 02:55:39 INFO input.FileInputFormat: Total input paths to process : 1
    20/04/16 02:55:39 INFO mapreduce.JobSubmitter: number of splits:1
    20/04/16 02:55:39 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1586976916277_0001
    20/04/16 02:55:39 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: 192.168.216.111:9000, Ident: (HDFS_DELEGATION_TOKEN token 61 for yarn)
    20/04/16 02:55:41 INFO impl.YarnClientImpl: Submitted application application_1586976916277_0001
    20/04/16 02:55:41 INFO mapreduce.Job: The url to track the job: http://hadoop01:8088/proxy/application_1586976916277_0001/
    20/04/16 02:55:41 INFO mapreduce.Job: Running job: job_1586976916277_0001
    20/04/16 02:56:11 INFO mapreduce.Job: Job job_1586976916277_0001 running in uber mode : false
    20/04/16 02:56:11 INFO mapreduce.Job:  map 0% reduce 0%
    20/04/16 02:56:13 INFO mapreduce.Job: Task Id : attempt_1586976916277_0001_m_000000_0, Status : FAILED
    Application application_1586976916277_0001 initialization failed (exitCode=20) with output: main : command provided 0
    main : user is yarn
    main : requested yarn user is yarn
    Permission mismatch for /usr/local/hadoop-2.7.6/tmp/nm-local-dir for caller uid: 0, owner uid: 502.
    Couldn't get userdir directory for yarn.
    20/04/16 02:56:20 INFO mapreduce.Job:  map 100% reduce 0%
    20/04/16 02:56:28 INFO mapreduce.Job:  map 100% reduce 100%
    20/04/16 02:56:28 INFO mapreduce.Job: Job job_1586976916277_0001 completed successfully
    20/04/16 02:56:28 INFO mapreduce.Job: Counters: 51
            File System Counters
                    FILE: Number of bytes read=81
                    FILE: Number of bytes written=251479
                    FILE: Number of read operations=0
                    FILE: Number of large read operations=0
                    FILE: Number of write operations=0
                    HDFS: Number of bytes read=154
                    HDFS: Number of bytes written=51
                    HDFS: Number of read operations=6
                    HDFS: Number of large read operations=0
                    HDFS: Number of write operations=2
            Job Counters
                    Failed map tasks=1
                    Launched map tasks=2
                    Launched reduce tasks=1
                    Other local map tasks=1
                    Data-local map tasks=1
                    Total time spent by all maps in occupied slots (ms)=4531
                    Total time spent by all reduces in occupied slots (ms)=3913
                    Total time spent by all map tasks (ms)=4531
                    Total time spent by all reduce tasks (ms)=3913
                    Total vcore-milliseconds taken by all map tasks=4531
                    Total vcore-milliseconds taken by all reduce tasks=3913
                    Total megabyte-milliseconds taken by all map tasks=4639744
                    Total megabyte-milliseconds taken by all reduce tasks=4006912
            Map-Reduce Framework
                    Map input records=3
                    Map output records=10
                    Map output bytes=103
                    Map output materialized bytes=81
                    Input split bytes=91
                    Combine input records=10
                    Combine output records=6
                    Reduce input groups=6
                    Reduce shuffle bytes=81
                    Reduce input records=6
                    Reduce output records=6
                    Spilled Records=12
                    Shuffled Maps=1
                    Failed Shuffles=0
                    Merged Map outputs=1
                    GC time elapsed (ms)=192
                    CPU time spent (ms)=2120
                    Physical memory (bytes) snapshot=441053184
                    Virtual memory (bytes) snapshot=4211007488
                    Total committed heap usage (bytes)=277348352
            Shuffle Errors
                    BAD_ID=0
                    CONNECTION=0
                    IO_ERROR=0
                    WRONG_LENGTH=0
                    WRONG_MAP=0
                    WRONG_REDUCE=0
            File Input Format Counters
                    Bytes Read=63
            File Output Format Counters
                    Bytes Written=51


    Error 1:

    2020-04-15 14:38:36,457 INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user hdfs/hadoop02@HIVE.COM using keytab file /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab
    2020-04-15 14:38:36,961 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid dfs.datanode.data.dir /home/hdfs/hadoopdata/dfs/data :

    Solution (skip this if the requirements below are already met):
    Step 1:
    [root@hadoop02 ~]#  useradd hdfs -g hadoop
    [root@hadoop02 ~]#  passwd hdfs

    [root@hadoop03 ~]#  useradd hdfs -g hadoop
    [root@hadoop03 ~]#  passwd hdfs

    Step 2 (run on whichever node reports the error):
    [root@hadoop02 hadoop]# chown -R hdfs:hadoop /home/hdfs/hadoopdata/
    [root@hadoop03 hadoop]# chown -R hdfs:hadoop /home/hdfs/hadoopdata/


    Error 2:

    Error when starting the DataNode:
    java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/home/hdfs/hadoopdata/dfs/data"

    Solution (if the directory was never created by hand, simply create it):
    [root@hadoop02 hadoop-2.7.6]# mkdir -p /home/hdfs/hadoopdata/dfs/data
    [root@hadoop03 hadoop-2.7.6]# mkdir -p /home/hdfs/hadoopdata/dfs/data


    Error 3:

    Error when starting YARN:
    Caused by: java.io.IOException: Login failure for hdfs/hadoop03@HIVE.COM from keytab /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user

    Solution (run on whichever node reports the error):
    [root@hadoop02 hadoop-2.7.6]# kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab hdfs/hadoop02@HIVE.COM
    [root@hadoop03 hadoop-2.7.6]# kinit -k -t /usr/local/hadoop-2.7.6/etc/hadoop/hdfs.keytab hdfs/hadoop03@HIVE.COM


    Error 4:

    Error when starting YARN:
    Caused by: ExitCodeException exitCode=24: File /usr/local/hadoop-2.7.6/etc/hadoop/container-executor.cfg must be owned by root, but is owned by 20415

    Change container-executor.cfg and all of its parent directories to be owned by root:root:
    [root@hadoop01 hadoop-2.7.6]# chown  root:root /usr/local/hadoop-2.7.6/etc/
    [root@hadoop01 hadoop-2.7.6]# chown  root:root /usr/local/hadoop-2.7.6/etc/hadoop/
    [root@hadoop01 hadoop-2.7.6]# chown  root:root /usr/local/hadoop-2.7.6/etc/hadoop/container-executor.cfg


    Error 5:

    Error when starting YARN:
    Caused by: ExitCodeException exitCode=22: Invalid permissions on container-executor binary.

    Solution:
    [root@hadoop01 hadoop-2.7.6]# chown root:hadoop $HADOOP_HOME/bin/container-executor
    [root@hadoop01 hadoop-2.7.6]# chmod 6050 $HADOOP_HOME/bin/container-executor

    [root@hadoop02 hadoop-2.7.6]# chown root:hadoop $HADOOP_HOME/bin/container-executor
    [root@hadoop02 hadoop-2.7.6]# chmod 6050 $HADOOP_HOME/bin/container-executor

    [root@hadoop03 hadoop-2.7.6]# chown root:hadoop $HADOOP_HOME/bin/container-executor
    [root@hadoop03 hadoop-2.7.6]# chmod 6050 $HADOOP_HOME/bin/container-executor


    Error 6:

    # error when running the example job
    java.io.IOException: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=1536, maxMemory=1024


    # solution: modify yarn-site.xml:
    <property>
          <name>yarn.nodemanager.resource.memory-mb</name>
          <value>2048</value>
    </property>

    # distribute it to the other servers:
    [root@hadoop01 hadoop-2.7.6]# scp -r ./etc/hadoop/yarn-site.xml hadoop02:/usr/local/hadoop-2.7.6/etc/hadoop/
    [root@hadoop01 hadoop-2.7.6]# scp -r ./etc/hadoop/yarn-site.xml hadoop03:/usr/local/hadoop-2.7.6/etc/hadoop/

    # restart the YARN services
    [root@hadoop01 hadoop-2.7.6]# start-yarn.sh



    Chapter 3 Configuring Kerberos for Hive

    3.1 Creating the hive user

    # create the hive user:
    [root@hadoop01 hive-1.2.2]# useradd -u 503 hive -g hadoop
    [root@hadoop01 hive-1.2.2]# passwd hive (enter the new password; mine is hive)


    3.2 Generating the keytab

    Run the following as root on the master node, i.e. the KDC server node:

    [root@hadoop01 hive-1.2.2]# cd /var/kerberos/krb5kdc/
    [root@hadoop01 krb5kdc]# kadmin.local -q "addprinc -randkey hive/hadoop01@HIVE.COM"
    [root@hadoop01 krb5kdc]# kadmin.local -q "xst -k hive.keytab hive/hadoop01@HIVE.COM"
    # inspect it
    [root@hadoop01 krb5kdc]# klist -ket hive.keytab
    Keytab name: FILE:hive.keytab
    KVNO Timestamp           Principal
    ---- ------------------- ------------------------------------------------------
       2 04/15/2020 23:52:46 hive/hadoop01@HIVE.COM (aes128-cts-hmac-sha1-96)
       2 04/15/2020 23:52:46 hive/hadoop01@HIVE.COM (des3-cbc-sha1)
       2 04/15/2020 23:52:46 hive/hadoop01@HIVE.COM (arcfour-hmac)
       2 04/15/2020 23:52:46 hive/hadoop01@HIVE.COM (camellia256-cts-cmac)
       2 04/15/2020 23:52:46 hive/hadoop01@HIVE.COM (camellia128-cts-cmac)
       2 04/15/2020 23:52:46 hive/hadoop01@HIVE.COM (des-hmac-sha1)
       2 04/15/2020 23:52:46 hive/hadoop01@HIVE.COM (des-cbc-md5)


    # copy hive.keytab into Hive's configuration directory:
    [root@hadoop01 krb5kdc]# cp hive.keytab /usr/local/hive-1.2.2/conf/
    # set ownership and permissions
    [root@hadoop01 krb5kdc]# cd /usr/local/hive-1.2.2/conf/
    [root@hadoop01 conf]# chown hive:hadoop hive.keytab && chmod 400 hive.keytab

    Because a keytab is effectively a permanent credential that needs no password (it becomes invalid if the principal's password is changed in the KDC), any other user with read access to the file could impersonate the principal it contains when accessing hadoop. The keytab must therefore be readable only by its owner (mode 0400).
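    It is also worth verifying at this point that the keytab can actually be used to log in (the same command is used again for the beeline test in section 3.5):

    [root@hadoop01 conf]# kinit -k -t /usr/local/hive-1.2.2/conf/hive.keytab hive/hadoop01@HIVE.COM
    [root@hadoop01 conf]# klist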

    3.3 Modifying the configuration files

    hive-site.xml:

    [root@hadoop01 hive-1.2.2]# vi ./conf/hive-site.xml
    <!-- add the following -->
    <property>
        <name>hive.server2.authentication</name>
        <value>KERBEROS</value>
      </property>
      <property>
        <name>hive.server2.authentication.kerberos.principal</name>
        <value>hive/_HOST@HIVE.COM</value>
      </property>
    <property>
      <name>hive.server2.authentication.kerberos.keytab</name>
      <value>/usr/local/hive-1.2.2/conf/hive.keytab</value>
    </property>
    ?
    <property>
      <name>hive.metastore.sasl.enabled</name>
      <value>true</value>
    </property>
    <property>
      <name>hive.metastore.kerberos.keytab.file</name>
      <value>/usr/local/hive-1.2.2/conf/hive.keytab</value>
    </property>
    <property>
      <name>hive.metastore.kerberos.principal</name>
      <value>hive/_HOST@HIVE.COM</value>
    </property>

    core-site.xml:

    [root@hadoop01 hive-1.2.2]# vi ../hadoop-2.7.6/etc/hadoop/core-site.xml
    <!-- add the following -->
    <property>
        <name>hadoop.proxyuser.hive.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hive.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hdfs.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hdfs.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.HTTP.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.HTTP.groups</name>
        <value>*</value>
    </property>


    # after adding these, sync the file to the other servers
    [root@hadoop01 hive-1.2.2]# scp -r ../hadoop-2.7.6/etc/hadoop/core-site.xml hadoop02:/usr/local/hadoop-2.7.6/etc/hadoop/
    [root@hadoop01 hive-1.2.2]# scp -r ../hadoop-2.7.6/etc/hadoop/core-site.xml hadoop03:/usr/local/hadoop-2.7.6/etc/hadoop/

    3.4 Starting Hive

    [root@hadoop01 hive-1.2.2]# nohup hive --service metastore >> metastore.log 2>&1 &
    [root@hadoop01 hive-1.2.2]# nohup hive --service hiveserver2 >> hiveserver2.log 2>&1 &

    ## these can also be run as the hive user.

    3.5 Connection tests

    Connecting with the hive CLI

    [root@hadoop01 hive-1.2.2]# hive
    ?
    Logging initialized using configuration in file:/opt/apache-hive-1.2.1-bin/conf/hive-log4j.properties
    hive> 
    ?
    Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: GSS initiate failed
    ?
    2020-04-16 00:47:11,335 ERROR [main]: transport.TSaslTransport (TSaslTransport.java:open(315)) - SASL negotiation failure
    javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)]
    ?

    Connecting with beeline

    After Kerberos is enabled, every new session must log in first: kinit -k -t /usr/local/hive-1.2.2/conf/hive.keytab hive/hadoop01@HIVE.COM

    [root@hadoop01 hive-1.2.2]# kinit -k -t /usr/local/hive-1.2.2/conf/hive.keytab hive/hadoop01@HIVE.COM

    [root@hadoop01 hive-1.2.2]# beeline
    Beeline version 1.2.2 by Apache Hive
    beeline> !connect jdbc:hive2://hadoop01:10000/default;principal=hive/hadoop01@HIVE.COM
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/usr/local/hbase-1.2.1/lib/phoenix-4.14.1-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
    Connecting to jdbc:hive2://hadoop01:10000/default;principal=hive/hadoop01@HIVE.COM
    Enter username for jdbc:hive2://hadoop01:10000/default;principal=hive/hadoop01@HIVE.COM: hive
    Enter password for jdbc:hive2://hadoop01:10000/default;principal=hive/hadoop01@HIVE.COM: ****
    Connected to: Apache Hive (version 1.2.2)
    Driver: Hive JDBC (version 1.2.2)
    Transaction isolation: TRANSACTION_REPEATABLE_READ
    0: jdbc:hive2://hadoop01:10000/default> show databases;
    The username and password entered here are the ones chosen when the hive user was originally created; for this test they are hive/hive.
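    As a convenience, the same connection can also be opened non-interactively by passing the JDBC URL directly to beeline (a sketch; adjust the URL to your environment):

    [root@hadoop01 hive-1.2.2]# beeline -u "jdbc:hive2://hadoop01:10000/default;principal=hive/hadoop01@HIVE.COM"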


    3.6 Hive operation tests

    [root@hadoop01 hive-1.2.2]# hive

    create table if not exists u1(
    uid int,
    age int
    )
    row format delimited fields terminated by ','
    ;

    Data file:
    [root@hadoop01 hive-1.2.2]# vi /home/u1
    1,18
    2,20
    3,20
    4,32
    5,18
    6.20

    # load the data
    load data local inpath '/home/u1' into table u1;

    # query
    hive> select * from u1;
    chmod: changing permissions of 'hdfs://hadoop01:9000/tmp/hive/hive/e9a76813-5c64-47f7-9a2b-5d7876111786/hive_2020-04-16_01-18-41_393_8778198899588815011-1/-mr-10000': Permission denied: user=hive, access=EXECUTE, inode="/tmp":hdfs:supergroup:drwx------
    OK
    1       18
    2       20
    3       20
    4       32
    5       18
    6       NULL
    hive> select count(*) from u1;
    Query ID=root_20200416025824_e9adc8a8-7052-4ee9-8924-bf735461484b
    Total jobs=1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapreduce.job.reduces=<number>
    Starting Job=job_1586976916277_0002, Tracking URL=http://hadoop01:8088/proxy/application_1586976916277_0002/
    Kill Command=/usr/local/hadoop-2.7.6//bin/hadoop job  -kill job_1586976916277_0002
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
    2020-04-16 02:58:39,528 Stage-1 map=0%,  reduce=0%
    2020-04-16 02:58:45,992 Stage-1 map=100%,  reduce=0%, Cumulative CPU 2.03 sec
    2020-04-16 02:58:52,547 Stage-1 map=100%,  reduce=100%, Cumulative CPU 4.51 sec
    MapReduce Total cumulative CPU time: 4 seconds 510 msec
    Ended Job=job_1586976916277_0002
    MapReduce Jobs Launched:
    Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 4.51 sec   HDFS Read: 6381 HDFS Write: 2 SUCCESS
    Total MapReduce CPU Time Spent: 4 seconds 510 msec
    OK
    6
    Time taken: 30.518 seconds, Fetched: 1 row(s)
    hive>

    This completes the Kerberos authentication configuration for Hive!

網(wǎng)站首頁   |    關(guān)于我們   |    公司新聞   |    產(chǎn)品方案   |    用戶案例   |    售后服務(wù)   |    合作伙伴   |    人才招聘   |   

友情鏈接: 餐飲加盟

地址:北京市海淀區(qū)    電話:010-     郵箱:@126.com

備案號:冀ICP備2024067069號-3 北京科技有限公司版權(quán)所有