ORA-00240 Error Triggered by an Instance Crash
2020-11-09 10:44:20 Editor: 小采

1. Environment Description

OS: AIX 6.1

Oracle: 11.2.0.3.0 RAC

2. The Incident

The minicomputer hosting database NODE2 crashed. The instance should have failed over normally to NODE1, but the failover failed, and the problem was only resolved by restarting the system.

3. Incident Analysis

On the day after the incident we analyzed the database alert log. The log shows that after instance NODE2 went down, RAC had already begun the instance failover steps, but during the failover it hit ORA-00240 and ORA-29770 errors, so the failover did not succeed at the time. The detailed log analysis follows.

At the time, the database went through roughly the following key steps:


1. Beginning instance recovery of 1 threads

The surviving instance begins local instance recovery of the crashed instance.

2. Started redo application at

Thread 2: logseq 4556, block 368380

Redo application starts from online redo log sequence 4556.

3. Completed instance recovery at

Thread 2: logseq 4556, block 376983, scn 3502123313

Recovery through redo log sequence 4556 completed successfully.

4. Redo thread 2 internally disabled at seq 4557 (SMON)

When the database was about to recover log sequence 4557, it hit a failure.

5. ORA-00240: control file enqueue held for more than 120 seconds

The ORA-00240 error begins to appear in the log, indicating that the control file enqueue had been held for more than 120 seconds.

6. ORA-29770: global enqueue process DIA0 (OSID 12517556) is hung for more than 300 seconds

Incident details in:

Immediately afterwards, the ORA-29770 error appears: database process DIA0 had been hung for more than 5 minutes.

During the roughly 20 minutes of the instance outage that day, the system's monitoring scripts and similar jobs could not run at all. Analysis of the Oracle AWR report shows that during the outage window, CPU resources on the surviving node NODE1 were almost completely exhausted.
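
For reference, host CPU pressure on each RAC node can also be sanity-checked from inside the database. Below is a minimal sketch using the standard GV$OSSTAT view (available in 11g); the statistics are cumulative since instance startup, so a proper measurement would sample twice and take the difference.

-- Quick per-instance view of host CPU statistics (cumulative values).
SELECT inst_id, stat_name, value
FROM   gv$osstat
WHERE  stat_name IN ('NUM_CPUS', 'BUSY_TIME', 'IDLE_TIME', 'LOAD')
ORDER  BY inst_id, stat_name;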

From analysis and inference: DIA0 is a diagnostic process whose main job is handling database deadlocks and hung processes. What the log shows is that this process detected the control file enqueue being held for more than 120 seconds, treated it as abnormal behavior, and began to handle the problem; but because the system's CPU resources were exhausted at the time, DIA0 itself took more than 5 minutes to work through the issue, which produced the ORA-29770 error.

We therefore concluded that the main cause of this failed instance failover was the ORA-00240 error. Using the trace file named in the log, we examined the specific cause of the error recorded there.
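
If you need to locate the alert log and trace files referenced by such messages, a minimal sketch using the standard 11g view V$DIAG_INFO (run on the surviving instance) looks like the following; the actual directory paths will of course differ per environment.

-- Show the ADR locations of the alert log and trace directory for this instance.
SELECT name, value
FROM   v$diag_info
WHERE  name IN ('Diag Alert', 'Diag Trace', 'Default Trace File');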

Analysis of the trace file shows that the main reason the control file enqueue was held for more than 120 seconds was the 'KSV master wait' event, which accounted for 2 minutes and 03 seconds.
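
As a cross-check on the trace analysis, the same waits can usually be confirmed from the cumulative system wait statistics. A minimal sketch using GV$SYSTEM_EVENT is shown below; the event names are the ones involved in this incident, and TIME_WAITED_MICRO is reported in microseconds.

-- Cumulative waits for the events involved in this incident, per RAC instance.
SELECT inst_id, event, total_waits, time_waited_micro
FROM   gv$system_event
WHERE  event IN ('KSV master wait', 'ASM file metadata operation',
                 'control file sequential read')
ORDER  BY inst_id, time_waited_micro DESC;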

4. Conclusion of the Incident Analysis

Based on the symptoms and log evidence above, a search of Oracle's official MetaLink shows that this is a known bug [BUG ID 1308282.1].

Below is the explanation of this bug from the official MetaLink document:


High 'ksv master wait' And 'ASM File Metadata Operation' Waits In Non-Exadata 11g

Symptoms

High waits for 'ksv master wait' while doing an ASM file metadata operation were reported when a data migration utility was running. This wait was also seen for a drop of a tablespace.

The AWR showed the top events were CPU (> 100%), with 'ASM file metadata operation' (7%).

Cause

Event 'KSV master wait' indicates the process on the RDBMS side is waiting for a reply from a process on the ASM side. In 11g, the parameter cell_offload_processing is set to TRUE. Although that parameter is not applicable for non-Exadata databases, it caused ASM to try to deliver smart-scan results. The issue was reported in Bug 11800170 - ASM IN KSV WAIT AFTER APPLICATION OF 11.2.0.2 GRID PSU.

After applying the workaround for this issue (see Solution below), a drop of a tablespace that used to take 13 minutes took 4 seconds.

Solution

The following solutions are available for non-Exadata databases:

For the quickest solution, use the workaround. The workaround does not negatively impact non-Exadata databases. This parameter is to be set on the database instance.

alter system set cell_offload_processing = false;

Upgrade to 12.1, when available. OR

Apply the 11.2.0.3 patch set OR

Apply one-off Patch 11800170, if available for your RDBMS and Grid Homes

Note: At the time this note was written (March 2011), neither 12.1 nor 11.2.0.3 were available.

The quickest solution given in the official document is to change the Oracle parameter cell_offload_processing to false.
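
A minimal sketch of how the workaround might be verified and applied across the RAC instances is shown below. The parameter name comes from the note above; the SCOPE/SID clause is the usual way to apply a change to all instances when an spfile is in use, and should be validated against your own environment before running.

-- Check the current value of the parameter on every RAC instance.
SELECT inst_id, name, value
FROM   gv$parameter
WHERE  name = 'cell_offload_processing';

-- Apply the documented workaround on all instances (spfile assumed).
ALTER SYSTEM SET cell_offload_processing = FALSE SCOPE=BOTH SID='*';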
