
11gR2 Clusterware Key Facts

  • 11gR2 Clusterware is required to be up and running prior to installing an
    11gR2 Real Application Clusters database.
  • The GRID home consists of the Oracle Clusterware and ASM.  ASM should not be
    in a separate home.
  • The 11gR2 Clusterware can be installed in "Standalone" mode for ASM and/or
    "Oracle Restart" single node support. This clusterware is a subset of the full
    clusterware described in this document.
  • The 11gR2 Clusterware can be run by itself or on top of vendor clusterware.
    See the certification matrix for certified combinations. Ref: Note 184875.1
    "How To Check The Certification Matrix for Real Application Clusters".
  • The GRID Home and the RAC/DB Home must be installed in different locations.
  • The 11gR2 Clusterware requires shared OCR and voting files.  These can be
    stored in ASM or on a cluster filesystem.
  • The OCR is backed up automatically every 4 hours to
    <GRID_HOME>/cdata/<cluster name>/ and can be restored via ocrconfig (see the
    command sketch after this list).
  • The voting file is backed up into the OCR at every configuration change and
    can be restored via crsctl. 
  • The 11gR2 Clusterware requires at least one private network for inter-node
    communication and at least one public network for external communication.
    Several virtual IPs need to be registered with DNS: the node VIPs (one per
    node) and the SCAN VIPs (up to 3).  This can be done manually by your network
    administrator, or you can optionally configure GNS (Grid Naming Service) in
    the Oracle Clusterware to handle this for you (note that GNS requires its own
    VIP).  A verification sketch appears after this list.
  • A SCAN (Single Client Access Name) is provided for clients to connect to.
    For more information on SCAN see Note 887522.1 and/or
    http://www.oracle.com/technology/products/database/clustering/pdf/scan.pdf
    (a sample client connect string is included in the sketch after this list).
  • The root.sh script at the end of the clusterware installation starts the
    clusterware stack.  For information on troubleshooting root.sh issues see
    Note 1053970.1 (a quick post-root.sh status check is sketched after this
    list).
  • Only one set of clusterware daemons can be running per node. 
  • On Unix, the clusterware stack is started via the init.ohasd script
    referenced in /etc/inittab with "respawn" (a sample entry is shown after
    this list).
  • A node can be evicted (rebooted) if it is deemed to be unhealthy.  This is
    done so that the health of the entire cluster can be maintained.  For more
    information see Note 1050693.1 "Troubleshooting 11.2 Clusterware Node
    Evictions (Reboots)".
  • Either have vendor time synchronization software (such as NTP) fully
    configured and running, or have it not configured at all and let CTSS handle
    time synchronization.  See Note 1054006.1 for more information (a quick CTSS
    check is shown after this list).
  • If installing DB homes for a lower version, you will need to pin the nodes
    in the clusterware or you will see ORA-29702 errors.  See Note 946332.1 for
    more info (a pinning sketch follows after this list).
  • The clusterware stack can be started by either booting the machine, running
    "crsctl start crs" to start the clusterware stack on the local node, or
    running "crsctl start cluster -all" to start the clusterware on all nodes.
    Note that crsctl is in the <GRID_HOME>/bin directory.
  • The clusterware stack can be stopped by either shutting down the machine,
    running "crsctl stop crs" to stop the clusterware stack on the local node, or
    running "crsctl stop cluster -all" to stop the clusterware on all nodes (the
    exact commands are sketched after this list).  Note that crsctl is in the
    <GRID_HOME>/bin directory.
  • Killing clusterware daemons is not supported.
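
As a quick sketch of the OCR and voting-file bullets above, the following commands check the OCR, list its automatic backups, and list the voting files. They are standard 11.2 tools from <GRID_HOME>/bin; run the ocr* commands as root. The restore commands are shown only as comments because a restore requires (parts of) the stack to be down; follow the referenced notes for the full procedure.

    # Check OCR integrity and list automatic/manual OCR backups
    ocrcheck
    ocrconfig -showbackup

    # Take an on-demand OCR backup
    ocrconfig -manualbackup

    # List the voting files currently in use
    crsctl query css votedisk

    # Restore sketches (see the MOS notes for the supported procedure):
    #   ocrconfig -restore <backup_file>
    #   crsctl replace votedisk <+DISKGROUP_or_path>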
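
To verify the network configuration described above (public/private interfaces, node VIPs, SCAN and optionally GNS), a few srvctl/oifcfg queries are useful. A minimal sketch, assuming <GRID_HOME>/bin is in the PATH:

    # Public/private interface definitions known to the clusterware
    oifcfg getif

    # Node VIPs and other node applications
    srvctl config nodeapps

    # SCAN VIPs and SCAN listeners
    srvctl config scan
    srvctl config scan_listener

    # GNS configuration (only meaningful if GNS was configured)
    srvctl config gns

Clients then connect through the SCAN rather than individual node names, for example with EZConnect (the SCAN and service names below are purely illustrative):

    sqlplus scott@//cluster-scan.example.com:1521/orcl_svc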
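
After root.sh completes on a node (or when troubleshooting it), the state of the stack can be checked as sketched below. The alert log path shown is the usual 11.2 default and may differ on your system:

    # Overall health of CRS, CSS and EVM on the local node
    crsctl check crs

    # Lower-stack ("init") resources brought up by OHASD
    crsctl stat res -t -init

    # Clusterware alert log (typical 11.2 location)
    #   <GRID_HOME>/log/<hostname>/alert<hostname>.log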
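
The init.ohasd reference mentioned above typically looks like the following /etc/inittab line on Linux; run levels and redirection details can vary by platform and version:

    h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null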
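
Whether CTSS is actively synchronizing time or merely running in observer mode (because NTP is configured) can be checked with:

    # Reports active or observer mode for the Cluster Time Synchronization Service
    crsctl check ctss

    # Cluster-wide clock synchronization check
    cluvfy comp clocksync -n all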
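
Node pinning for pre-11.2 database homes, as referenced above, is done with crsctl; olsnodes shows the current pin state. The node name below is illustrative:

    # Show node numbers and pinned/unpinned state
    olsnodes -n -t

    # Pin a node so a pre-11.2 database can run on it (run as root)
    crsctl pin css -n racnode1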
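
The start/stop bullets above map to the following crsctl invocations, run as root from <GRID_HOME>/bin:

    # Start/stop the full stack, including OHASD, on the local node
    crsctl start crs
    crsctl stop crs

    # Start/stop the clusterware on all nodes (OHASD must already be running on them)
    crsctl start cluster -all
    crsctl stop cluster -all

    # Current status of the clusterware-managed resources
    crsctl stat res -t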

 
