
Failed to create or upgrade OLR

August 15, 2013

    Installing Oracle 11g RAC is much like Oracle 10g (Clusterware): once the Grid Infrastructure software is installed, you run orainstroot.sh and root.sh. If the server uses a newer AMD chip that Oracle does not recognize, congratulations, you are in for another intimate encounter with a patch; the error reported is "Failed to create or upgrade OLR". Hit by it? Read on...

1. Installation environment
  Operating system (Oracle Linux 5.5, 32-bit)
  [root@node1 ~]# cat /etc/issue
  Enterprise Linux Enterprise Linux Server release 5.5 (Carthage)
  Kernel \r on an \m

  Oracle version
  Oracle 11g R2 RAC (32-bit)

  Host system
  Windows 7 64-bit + VMware Server 2.0.2
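
  Since the underlying bug lies in how Oracle identifies newer AMD processors, it can be worth checking
  what CPU the guest actually reports before deciding whether the patch is needed; for example:

  # show the CPU model the virtual machine exposes; the bug only affects newer AMD chips
  [root@node1 ~]# grep -m1 'model name' /proc/cpuinfo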

2. Reproducing the error
  [root@node1 ~]# /u01/app/11.2.0/grid/root.sh 
  Running Oracle 11g root.sh script...
  
  The following environment variables are set as:
      ORACLE_OWNER= grid
      ORACLE_HOME=  /u01/app/11.2.0/grid
  
  Enter the full pathname of the local bin directory: [/usr/local/bin]: 
     Copying dbhome to /usr/local/bin ...
     Copying oraenv to /usr/local/bin ...
     Copying coraenv to /usr/local/bin ...
  
  Creating /etc/oratab file...
  Entries will be added to the /etc/oratab file as needed by
  Database Configuration Assistant when a database is created
  Finished running generic part of root.sh script.
  Now product-specific root actions will be performed.
  2012-12-12 21:20:04: Parsing the host name
  2012-12-12 21:20:04: Checking for super user privileges
  2012-12-12 21:20:04: User has super user privileges
  Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
  Creating trace directory
  Failure with signal 11 from command: /u01/app/11.2.0/grid/bin/ocrconfig -local -upgrade grid oinstall
  Failed to create or upgrade OLR

  # Check the log files
  [grid@node1 ~]$ cd $ORACLE_HOME/log/node1
  [grid@node1 node1]$ pwd
  /u01/app/11.2.0/grid/log/node1
  [grid@node1 node1]$ ls
  admin  agent  alertnode1.log  client  crsd  cssd  ctssd  diskmon  evmd  gipcd  gnsd  gpnpd  mdnsd  ohasd  racg  srvm
  [grid@node1 node1]$ tail -30 alertnode1.log
  Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
  2012-12-12 21:20:06.347
  [client(14059)]CRS-2106:The OLR location /u01/app/11.2.0/grid/cdata/node1.olr is inaccessible.
   Details in /u01/app/11.2.0/grid/log/node1/client/ocrconfig_14059.log.
  # The detailed client log referenced in the alert entry can also be inspected (see the command below); its contents are omitted here
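
  A quick way to pull up that per-command client log, using the path reported in the alert entry above
  (the numeric suffix is the client PID and differs on every run):

  [grid@node1 node1]$ tail -30 client/ocrconfig_14059.log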

3. Analysis
  MetaLink note [ID 1068212.1] describes this problem and attributes it to unpublished bug 8670579,
  which is triggered because Oracle fails to recognize the newer AMD chips. Sigh...

  Let's look at the fix (excerpted from the note).

  Cause
  
  Unpublished bug 8670579 which relates to the identification of newer AMD chips and therefore only affect platforms 
  using the newer AMD chips.
  
  Solution
  
  If the error occurs during the installation of the GRID Infrastructure the patch has to be applied, 
  before the root.sh Script in the installation is run:
  
  a.) Run a installation (grid/runInstaller) to the prompt where it requests to run orainstroot.sh and <GRID_HOME>/root.sh
  b.) Run orainstroot.sh on all nodes, but not root.sh
  c.) Open another session with the Oracle User and apply Patch 8670579 on all nodes (with opatch apply).
  d.) Continue with the root.sh from the installation.
  
  If you hit this bug, and have already started root.sh then:
  - Deconfigure Clusterware on the failed host with <GRID_HOME>/install/rootcrs.pl -deconfig -force
  - Install the Patch as Oracle User (opatch apply)
  - Rerun root.sh

  The note above describes two situations:
  a. orainstroot.sh has been run on all nodes, but root.sh has not been run yet:
    open another session as the grid user,
    apply patch 8670579 on all nodes with opatch,
    then run root.sh.
  b. root.sh has already been run:
    deconfigure the previous configuration as root,
    apply the patch as the grid user,
    then rerun root.sh (a command sketch follows below).
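
  For situation b, a minimal command sketch using the Grid home and patch location from this walkthrough
  (note that in an 11.2 Grid home the deconfig script lives under crs/install/):

  # as root on the node where root.sh failed: remove the partial Clusterware configuration
  [root@node1 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
  # as grid: apply patch 8670579 to the Grid home
  [grid@node1 ~]$ /u01/app/11.2.0/grid/OPatch/opatch apply /inst_src/patch8670579/8670579
  # as root: rerun root.sh
  [root@node1 ~]# /u01/app/11.2.0/grid/root.sh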

4. Fixing the problem
  # My case is situation b (root.sh has already been run), so deconfigure first
  # Note: my patch files are located under /inst_src/patch8670579
  [grid@node1 grid]$ pwd
  /u01/app/11.2.0/grid
  [grid@node1 grid]$ cd OPatch/
  [grid@node1 OPatch]$ ./opatch apply /inst_src/patch8670579/8670579
  Invoking OPatch 11.1.0.6.6
  
  Oracle Interim Patch Installer version 11.1.0.6.6
  Copyright (c) 2009, Oracle Corporation.  All rights reserved.
  
  Oracle Home       : /u01/app/11.2.0/grid
  Central Inventory : /u01/app/oraInventory
     from           : /etc/oraInst.loc
  OPatch version    : 11.1.0.6.6
  OUI version       : 11.2.0.1.0
  OUI location      : /u01/app/11.2.0/grid/oui
  Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2012-12-13_11-39-32AM.log
  
  Patch history file: /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch_history.txt
  --------------------------------------------------------------------------------
  The patch has more than one Archive Action but there is no Make Action.
  --------------------------------------------------------------------------------
  ApplySession applying interim patch '8670579' to OH '/u01/app/11.2.0/grid'
  
  Running prerequisite checks...
  
  OPatch detected the node list and the local node from the inventory.  
  OPatch will patch the local system then propagate the patch to the remote nodes.
  
  This node is part of an Oracle Real Application Cluster.
  Remote nodes: 'node2' 
  Local node: 'node1'
  Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
  (Oracle Home = '/u01/app/11.2.0/grid')
  
  Is the local system ready for patching? [y|n]
  y
  User Responded with: Y
  Backing up files and inventory (not for auto-rollback) for the Oracle Home
  Backing up files affected by the patch '8670579' for restore. This might take a while...
  Backing up files affected by the patch '8670579' for rollback. This might take a while...
  
  Patching component oracle.network.rsf, 11.2.0.1.0...
  Updating archive file "/u01/app/11.2.0/grid/lib/libnnz11.a"  with "lib/libnnz11.a/ahseteco.o"
  Updating archive file "/u01/app/11.2.0/grid/lib/libnnz11.a"  with "lib/libnnz11.a/am11rkg.o"
  Updating archive file "/u01/app/11.2.0/grid/lib/libnnz11.a"  with "lib/libnnz11.a/amsha.o"
  Updating archive file "/u01/app/11.2.0/grid/lib/libnnz11.a"  with "lib/libnnz11.a/cpui32.o"
  Updating archive file "/u01/app/11.2.0/grid/lib/libnnz11.a"  with "lib/libnnz11.a/sha.o"
  Updating archive file "/u01/app/11.2.0/grid/lib/libnnz11.a"  with "lib/libnnz11.a/x931rand.o"
  Updating archive file "/u01/app/11.2.0/grid/lib/libnnz11.a"  with "lib/libnnz11.a/am11dkg.o"
  Updating archive file "/u01/app/11.2.0/grid/lib/libnnz11.a"  with "lib/libnnz11.a/am931rnd.o"
  Updating archive file "/u01/app/11.2.0/grid/lib/libnnz11.a"  with "lib/libnnz11.a/amsharnd.o"
  Updating archive file "/u01/app/11.2.0/grid/lib/libnnz11.a"  with "lib/libnnz11.a/ghash.o"
  Updating archive file "/u01/app/11.2.0/grid/lib/libnnz11.a"  with "lib/libnnz11.a/shacomm.o"
  Copying file to "/u01/app/11.2.0/grid/lib/libnnz11.so"
  ApplySession adding interim patch '8670579' to inventory
  
  Verifying the update...
  Inventory check OK: Patch ID 8670579 is registered in Oracle Home inventory with proper meta-data.
  Files check OK: Files from Patch ID 8670579 are present in Oracle Home.
  
  The local system has been patched.  You can restart Oracle instances on it.
  # The messages above show the patch has been applied successfully on the local node
  Patching in rolling mode.
  
  The node 'node2' will be patched next.
  
  Please shutdown Oracle instances running out of this ORACLE_HOME on 'node2'.
  (Oracle Home = '/u01/app/11.2.0/grid')
  
  Is the node ready for patching? [y|n]
  # OPatch asks whether node2 is ready for patching; I answered n, hence the error below
  # I chose to patch each node individually instead
  OPatch failed with error code 130
  
  #Author: Robinson
  #Blog  : http://blog.csdn.net/robinson_0612
  
  # Next, patch the second node on its own (sketched below), then rerun root.sh; it sails through from there...
    # Before patching, check the ORACLE_HOME environment variable and run perl -v; the perl version should be higher than 5.00503
    # The 32-bit Oracle 11g release seems to have quite a few issues, so install the 64-bit version for testing whenever possible. One more note: after installing the Oracle database software, patch 8670579 must be applied to that home as well, otherwise dbca fails.
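
  A sketch of patching node2 on its own, assuming the patch has been unzipped to the same
  /inst_src/patch8670579 path there; opatch's -local flag limits the apply to the node it runs on,
  and the environment checks mentioned above are worth doing first:

  # on node2, as grid: confirm the environment opatch will use
  [grid@node2 ~]$ echo $ORACLE_HOME        # should point to /u01/app/11.2.0/grid
  [grid@node2 ~]$ perl -v | head -2        # version should be higher than 5.00503
  # apply the patch to this node only
  [grid@node2 ~]$ cd /u01/app/11.2.0/grid/OPatch
  [grid@node2 OPatch]$ ./opatch apply -local /inst_src/patch8670579/8670579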
    
5. Appendix: [ID 1068212.1]
  GRID INSTALLATION ROOT.SH fails with Failure with signal 11 Failed to create or upgrade OLR [ID 1068212.1]
  --------------------------------------------------------------------------------
  Modified: Mar 21, 2012   Type: PROBLEM   Status: PUBLISHED   Priority: 3
  
  In this Document
    Symptoms
    Cause
    Solution
    References
  --------------------------------------------------------------------------------
  
  Applies to: 
  Oracle Server - Enterprise Edition - Version: 11.2.0.1.0 and later   [Release: 11.2 and later ]
  Information in this document applies to any platform.
  
  Symptoms
  
  During installation of Grid Infrastructure root.sh and de-install fail with:
  
  "Failure with signal 11 from command: <grid home>/bin/ocrconfig -local -upgrade oracle oinstall
  Failed to create or upgrade OLR"
  
  And in the alert<node>.log of clusterware (<CRS_HOME>/log/<node>/):
  
  [client(2294)]CRS-2106:The OLR location <grid home>/cdata/bumucsvm5.olr is inaccessible. 
  Details in <grid home>/log/<node>/client/ocrconfig_nnnn.log.
  
  And in this log:
  Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
  2010-02-26 14:36:48.183: [ OCRCONF][3047065280]ocrconfig starts...
  2010-02-26 14:36:48.184: [ OCRCONF][3047065280]Upgrading OCR data
  2010-02-26 14:36:48.185: [ OCROSD][3047065280]utread:3: Problem reading buffer 9ea9000 buflen 4096 retval 0 phy_offset 102400 retry 0
  2010-02-26 14:36:48.185: [ OCROSD][3047065280]utread:3: Problem reading buffer 9ea9000 buflen 4096 retval 0 phy_offset 102400 retry 1
  2010-02-26 14:36:48.185: [ OCROSD][3047065280]utread:3: Problem reading buffer 9ea9000 buflen 4096 retval 0 phy_offset 102400 retry 2
  2010-02-26 14:36:48.185: [ OCROSD][3047065280]utread:3: Problem reading buffer 9ea9000 buflen 4096 retval 0 phy_offset 102400 retry 3
  2010-02-26 14:36:48.185: [ OCROSD][3047065280]utread:3: Problem reading buffer 9ea9000 buflen 4096 retval 0 phy_offset 102400 retry 4
  2010-02-26 14:36:48.185: [ OCROSD][3047065280]utread:3: Problem reading buffer 9ea9000 buflen 4096 retval 0 phy_offset 102400 retry 5
  2010-02-26 14:36:48.185: [ OCRRAW][3047065280]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
  2010-02-26 14:36:48.185: [ OCRRAW][3047065280]proprioini: all disks are not OCR/OLR formatted
  2010-02-26 14:36:48.185: [ OCRRAW][3047065280]proprinit: Could not open raw device
  2010-02-26 14:36:48.186: [ default][3047065280]a_init:7!: Backend init unsuccessful : [26]
  2010-02-26 14:36:48.186: [ OCRCONF][3047065280]Exporting OCR data to [OCRUPGRADEFILE]
  2010-02-26 14:36:48.187: [ OCRAPI][3047065280]a_init:7!: Backend init unsuccessful : [33
  
  Another indication for this error is that deinstall will fail with the following error:
  ######################## CHECK OPERATION START ########################
  Install check configuration START
  
  #
  # An unexpected error has been detected by HotSpot Virtual Machine:
  #
  # SIGSEGV (0xb) at pc=0x87af135e, pid=2767, tid=3086526144
  #
  # Java VM: Java HotSpot(TM) Server VM (1.5.0_17-b02 mixed mode)
  # Problematic frame:
  # C [libnnz11.so+0x3c35e]
  #
  # An error report file with more information is saved as hs_err_pidnnnn.log
  #
  # If you would like to submit a bug report, please visit:
  # http://java.sun.com/webapps/bugreport/crash.jsp
  #
  
  Cause
  
  Unpublished bug 8670579 which relates to the identification of newer AMD chips and therefore 
  only affect platforms using the newer AMD chips.
  
  Solution
  
  If the error occurs during the installation of the GRID Infrastructure the patch has to be applied, 
  before the root.sh Script in the installation is run:
  
  a.) Run a installation (grid/runInstaller) to the prompt where it requests to run orainstroot.sh and <GRID_HOME>/root.sh
  b.) Run orainstroot.sh on all nodes, but not root.sh
  c.) Open another session with the Oracle User and apply Patch 8670579 on all nodes (with opatch apply).
  d.) Continue with the root.sh from the installation.
  
  If you hit this bug, and have already started root.sh then:
  - Deconfigure Clusterware on the failed host with <GRID_HOME>/install/rootcrs.pl -deconfig -force
  - Install the Patch as Oracle User (opatch apply)
  - Rerun root.sh
  
  References
  BUG:9166347 - GRID INSTALLATION ROOT.SH AND DEINSTALL FAIL WITH HOTSPOT VIRTUAL MACHINE SIGSEG
  NOTE:942076.1 - X86 DBCA, NETCA GIVE JAVA HOTSPOT ERROR IF ON X86_64 HARDWARE
  NOTE:957903.1 - 11gR2 OUI Crashes: An Unexpected Error Has Been Detected By Java HotSpot Virtual Machine, libjvm.so

 
