
Oracle arraysize and fetch size Parameters and Performance Tuning Notes

October 10, 2013

1. Parameter Description

1.1 The arraysize Parameter

       Oracle SQL*Plus has many settings; I covered them in an earlier blog post:

              A Summary of Common Oracle SQL*Plus Commands

              http://blog.csdn.net/xujinyang/article/details/6823232

 

       While discussing a problem with owind yesterday, he stressed this parameter; some testing confirmed that it does have a significant impact on performance.

       Arraysize specifies how many rows SQL*Plus will fetch in a call. The number n can be between 1 and 5000.

 

       arraysize defines how many rows are returned to the client per fetch: once arraysize rows have been scanned, the scan pauses, the data is sent to the client, and scanning then continues.

       Each of these round trips shows up in the statistics as SQL*Net roundtrips to/from client. The default arraysize is 15, and that creates a problem: a block usually holds far more than 15 rows, so fetching 15 rows at a time means the same block has to be visited again on the next fetch, and a single block may end up being read many times.

 

       These repeated visits inflate consistent gets and physical reads. The physical reads part is easy to understand: the more scanning that is done, the more likely a physical read becomes.

       consistent gets counts blocks read in consistent mode, and this is where undo comes in. To keep query results consistent, when a long-running query reaches a block that was changed after the query started (and the change is not yet committed), Oracle uses undo to build a CR block, which can be thought of as the block's image as of an earlier point in time; the data the query returns is therefore consistent.

       So the more blocks that are scanned repeatedly, the more CR blocks may need to be built, the more often undo is read, and the higher consistent gets climbs.

 

       Because each fetch breaks off part-way through a block, that block has to be read again on the next fetch, which adds logical reads. Raising arraysize therefore cuts both the number of round trips and the logical reads; the test in section 2 below demonstrates this.

 

       For more on CR blocks, see my blog post:

       Notes on CR (consistent read) blocks create

       http://blog.csdn.net/xujinyang/article/details/6823237

 

       In short, if arraysize is set too low, it drives up physical reads, consistent gets, and the number of SQL*Net roundtrips to/from client.

 

Making the arraysize setting permanent:

       The parameter can be saved in the glogin.sql or login.sql file so that it takes effect permanently, without having to set it in every session.

 

-- Check the default value

SYS@anqing2(rac2)> show arraysize

arraysize 15

 

-- Change arraysize manually

SYS@anqing2(rac2)> set arraysize 100

SYS@anqing2(rac2)> show arraysize

arraysize 100

 

-- Edit glogin.sql

[oracle@rac2 admin]$ pwd

/u01/app/oracle/product/10.2.0/db_1/sqlplus/admin

[oracle@rac2 admin]$ ls

glogin.sql  help  iplus  libisqlplus.def  libsqlplus.def  plustrce.sql  pupbld.sql

 

Add the following to glogin.sql:

set arraysize 5000

 

-- Log in again and check

SYS@anqing2(rac2)> show arraysize

arraysize 5000

 

1.2 The fetch size Parameter

       Both arraysize and fetch size are client-side parameters and are set on the client. arraysize is set in SQL*Plus; when a program connects to the database, the equivalent setting is the fetch size, and it plays the same role. The JDBC fetch size defaults to 10; raising it to around 50 is usually enough, and setting it too high just wastes memory.

 

       The JDBC fetch size gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed. For large queries that return many objects, you can configure the row fetch size used in the query to improve performance by reducing the number of database hits required to satisfy the selection criteria. Most JDBC drivers (including Oracle's) default to a fetch size of 10, so if you are reading 1000 objects, increasing the fetch size to 256 can significantly reduce the time required to fetch the query's results. The optimal fetch size is not always obvious; usually, a fetch size of one half or one quarter of the total expected result size works well. Note that if you are unsure of the result set size, setting the fetch size too large or too small can decrease performance.

 

In this example application, I print out the default fetch size and then increase it to 50 using the setFetchSize(int) method of a Statement object. When the query is executed, the JDBC driver retrieves the first 50 rows from the database (or all rows if fewer than 50 satisfy the selection criteria). As you iterate over those first 50 rows, each call to rset.next() returns a row from local memory; the driver does not need to go back to the database. When you try to access the fifty-first row (assuming more than 50 rows satisfy the selection criteria), the driver fetches another 50 rows. In this way, 100 rows are returned with only two database hits.

 

Alternatively, you can use the setMaxRows() method to set an upper limit on the number of rows any ResultSet produced by that Statement can contain; rows beyond the limit are silently dropped. A value of zero means no limit, and zero is the default.
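
To make this concrete, here is a minimal JDBC sketch along the lines of the example described above, assuming an Oracle JDBC driver on the classpath. The connection URL, user name and password are placeholders you would replace; the dave table is the one created in the test in section 2.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FetchSizeDemo {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details -- adjust for your own environment.
        String url = "jdbc:oracle:thin:@//localhost:1521/orcl";

        try (Connection conn = DriverManager.getConnection(url, "dave", "dave");
             Statement stmt = conn.createStatement()) {

            System.out.println("Default fetch size: " + stmt.getFetchSize());

            // Hint the driver to pull 50 rows per round trip
            // instead of the Oracle default of 10.
            stmt.setFetchSize(50);

            // Optional: cap any result set from this statement at 1000 rows;
            // 0 (the default) means no limit.
            stmt.setMaxRows(1000);

            try (ResultSet rset = stmt.executeQuery("SELECT id, name FROM dave")) {
                int rows = 0;
                while (rset.next()) {   // rows 1-50 come from the driver's local buffer;
                    rows++;             // row 51 triggers the next database round trip
                }
                System.out.println("Fetched " + rows + " rows");
            }
        }
    }
}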

 

The following link is a fuller example of configuring the fetch size in JDBC:

http://www.idevelopment.info/data/Programming/java/jdbc/FetchSize.java

 

2. Related Tests

       How many rows fit in a block also depends on row size: the more data each row holds, the fewer rows there are per block.

 

How many rows each block actually holds can be determined from the rowid.

       For more on Oracle rowids, see my blog post:

                     An Introduction to Oracle Rowid

                     http://blog.csdn.net/xujinyang/article/details/6829751

      

A rowid has the format OOOOOOFFFBBBBBBRRR, where:

(1)OOOOOO

       The data object number identifies the segment (data object AAAPec in Example 12-1). A data object number is assigned to every database segment. Schema objects in the same segment, such as a table cluster, have the same data object number.

(2)FFF

       The tablespace-relative data file number identifies the data file that contains the row (file AAF in Example 12-1).

(3)BBBBBB

       The data block number identifies the block that contains the row (block AAAABS in Example 12-1). Block numbers are relative to their data file, not their tablespace. Thus, two rows with identical block numbers could reside in different data files
of the same tablespace.

(4)RRR

       The row number identifies the row in the block (row AAA in Example 12-1).

 

DAVE@anqing2(rac2)> create table dave as select * from sys.ta where rownum<10000;

Table created.

 

-- Check how many blocks the table occupies

DAVE@anqing2(rac2)> select owner,extents,segment_name,blocks from dba_segments where segment_name='DAVE' and owner='DAVE';

 

OWNER    EXTENTS SEGMENT_NAME       BLOCKS

---------- ---------- -------------------- ----------

DAVE       3  DAVE                   24

 

From these figures, roughly 10,000 rows occupy 24 blocks, which works out to about 417 rows per block on average. The actual distribution, however, turns out to be different.

 

-- The table structure is very simple

DAVE@anqing2(rac2)> desc dave;

 Name                       Null?    Type

 ----------------------------------------- -------- ----------------------------

 ID                                NUMBER

 NAME                            VARCHAR2(10)

 

-- Look at the rowid format

DAVE@anqing2(rac2)> select rowid from dave where rownum=1;

ROWID

------------------

AAANXzAAHAAAAAMAAA

 

-- Count how many rows each block holds:

/* Formatted on 2011/7/1 14:59:56 (QP5 v5.163.1008.3004) */

  SELECT prerid, COUNT (rid) rid

    FROM (SELECT SUBSTR (ROWID, 1, 15) prerid, ROWID rid FROM dave)

GROUP BY prerid;

 

DAVE@anqing2(rac2)> select  prerid,count(rid) rid from (select  substr(rowid,1,15) prerid,rowid rid from dave) group by  prerid;

 

PRERID                                RID

------------------------------ ----------

AAANXzAAHAAAAAa                       517

AAANXzAAHAAAAAf                       517

AAANXzAAHAAAAAP                       517

AAANXzAAHAAAAAU                       517

AAANXzAAHAAAAAW                       517

AAANXzAAHAAAAAX                       517

AAANXzAAHAAAAAM                       524

AAANXzAAHAAAAAO                       517

AAANXzAAHAAAAAQ                       517

AAANXzAAHAAAAAS                       517

AAANXzAAHAAAAAY                       517

AAANXzAAHAAAAAR                       517

AAANXzAAHAAAAAg                       169

AAANXzAAHAAAAAN                       517

AAANXzAAHAAAAAT                       517

AAANXzAAHAAAAAV                       517

AAANXzAAHAAAAAb                       517

AAANXzAAHAAAAAe                       517

AAANXzAAHAAAAAc                       517

AAANXzAAHAAAAAd                       517

 

20 rows selected.

 

-- The query returns only 20 rows, i.e. the data actually sits in just 20 blocks, with the per-block row counts shown above. Because the table's rows are so small, each block holds a large number of them.

 

       Earlier, though, we saw the table occupying 24 blocks. The following query shows where the 24 blocks come from:

 

DAVE@anqing2(rac2)> select extent_id,block_id,blocks from dba_extents where owner='DAVE' and segment_name='DAVE';

 

 EXTENT_ID   BLOCK_ID     BLOCKS

---------- ---------- ----------

         0          9          8

         1         17          8

         2         25          8

 

Three extents were allocated here, each consisting of 8 blocks, which gives the 24 blocks; the blocks that contain no rows are segment overhead or simply not yet used.

 

       Under the default settings, with arraysize at 15, reading a single block takes roughly 517/15 ≈ 35 fetch calls, and every call revisits that block, which translates into more consistent gets and physical reads. Let's verify.

 

DAVE@anqing2(rac2)> set autot traceonly stat

DAVE@anqing2(rac2)> select * from dave where rownum<518;

-- One block holds 517 rows, so this query reads roughly one block's worth of data.

517 rows selected.

 

Statistics

----------------------------------------------------------

          7  recursive calls

          0  db block gets

         87  consistent gets

          0  physical reads

          0  redo size

       9354  bytes sent via SQL*Net to client

        774  bytes received via SQL*Net from client

         36  SQL*Net roundtrips to/from client

          0  sorts (memory)

          0  sorts (disk)

        517  rows processed

-- Note the SQL*Net roundtrips to/from client figure: with the default arraysize we estimated about 35 round trips to read this block; the actual number is 36, roughly ceil(517/15) + 1, the extra trip coming from the initial call.

 

Now set arraysize and run the query again:

 

DAVE@anqing2(rac2)> set arraysize 10000

SP2-0267: arraysize option 10000 out of range (1 through 5000)

-- arraysize is capped at 5000

 

DAVE@anqing2(rac2)> set arraysize 5000

DAVE@anqing2(rac2)> select * from dave where rownum<518;

 

517 rows selected.

 

Statistics

----------------------------------------------------------

          0  recursive calls

          0  db block gets

          5  consistent gets

          0  physical reads

          0  redo size

       5036  bytes sent via SQL*Net to client

        400  bytes received via SQL*Net from client

          2  SQL*Net roundtrips to/from client

          0  sorts (memory)

          0  sorts (disk)

        517  rows processed

 

Comparison:

consistent gets: from 87 down to 5.

SQL*Net roundtrips to/from client: from 36 down to 2 (again about ceil(517/5000) + 1).

 

The larger the data volume, the more pronounced the performance gain from this kind of tuning.
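
If every statement in an application should benefit, the fetch size can also be raised connection-wide instead of per statement. As far as I know, the Oracle thin driver accepts a defaultRowPrefetch connection property for this; treat the property name as an assumption and check your driver version's documentation. A sketch, again with placeholder URL and credentials:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PrefetchDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "dave");          // placeholder credentials
        props.setProperty("password", "dave");
        // Assumed Oracle-specific property: rows prefetched per round trip for every
        // statement on this connection (in place of the driver default of 10).
        props.setProperty("defaultRowPrefetch", "50");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/orcl", props)) {
            // Statements created from this connection now use the larger prefetch
            // unless they override it with setFetchSize().
            System.out.println("Connected, autocommit = " + conn.getAutoCommit());
        }
    }
}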

 
