Tuesday, January 29, 2013

Auto-Start Standby Database



1. Enable autostart of the Standalone Grid Infrastructure - High Availability Services (HAS)
host > $CRS_HOME/bin/crsctl config has
CRS-4621: Oracle High Availability Services autostart is disabled.
host > $CRS_HOME/bin/crsctl enable has
CRS-4622: Oracle High Availability Services autostart is enabled.

2. Change auto_start to “always” and the default startup mode to “mount”
srvctl modify database -d stndbydb -s mount
crsctl modify resource ora.stndbydb.db -attr "AUTO_START=always"
Test with crs_stat -p
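
A quick way to confirm both settings took effect (a sketch; assumes the standby's resource is ora.stndbydb.db, as above):

$CRS_HOME/bin/crsctl status resource ora.stndbydb.db -p | grep -E 'AUTO_START|USR_ORA_OPEN_MODE'
srvctl config database -d stndbydb | grep -i "start option"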

How to Trace SQL in the Current Session

Trace SQL in the current session:

alter session set timed_statistics=true;
alter session set max_dump_file_size=unlimited;
alter session set tracefile_identifier='20Jul2012_special_trace_1';
alter session set events '10046 TRACE NAME CONTEXT FOREVER,level 12';

..Run some things..

select 'close cursor' from dual; --any new statement closes the prior cursor so its row source (STAT) lines are dumped
alter session set events '10046 trace name context off';


The trace file will be in $ORACLE_BASE/diag/rdbms/<db_name>/<instance_name>/trace
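
If you lose track of the file name, this should point straight at it (11g+; a minimal sketch):

select value from v$diag_info where name = 'Default Trace File';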


ASM Debugging Info


ASM Debugging Information

If you have an ASM issue, you'll want these commands handy.

cat /etc/*release
uname -a
rpm -qa|grep oracleasm
/usr/sbin/oracleasm configure
/sbin/modinfo oracleasm
/etc/init.d/oracleasm status
/usr/sbin/oracleasm-discover
oracleasm scandisks
oracleasm listdisks
ls -l /dev/oracleasm/disks
/sbin/blkid
ls -l /dev/mpath/*
ls -l /dev/mapper/*
ls -l /dev/dm-*

another approach...
1. uname -a
2. rpm -qa | grep oracleasm
3. cat /etc/sysconfig/oracleasm
4. upload /var/log/oracleasm
5. cat /proc/partitions
6. ls -la /dev/oracleasm/disks
7. /etc/init.d/oracleasm scandisks
8. /etc/init.d/oracleasm listdisks
9. Run "sosreport" from command line. That will generate a bzip file. Attach it to SR
10. Send me following command outputs
a. cat /proc/partitions |grep sd|while read a b c d;do echo -n $d$'\t'" scsi_id=";(echo $d|tr -d [:digit:]|xargs -i scsi_id -g -s /block/{})done
b. blkid|grep sd.*oracleasm|while read a b;do echo -n $a$b" scsi_id=";(echo $a|tr -d [:digit:]|tr -d [:]|cut -d"/" -f3|xargs -i scsi_id -g -s /block/{})done;
11. You can also verify that the disk has the correct header, as follows:
# dd if=/dev/path/to/disk bs=16 skip=2 count=1 | hexdump -C
example:
# dd if=/dev/mapper/data0p1 bs=16 skip=2 count=1 | hexdump -C
1+0 records in
1+0 records out
16 bytes (16 B) copied, 0.037821 seconds, 0.4 kB/s
00000000 4f 52 43 4c 44 49 53 4b 44 41 54 41 30 00 00 00 |ORCLDISKDATA0...|
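
If hexdump isn't handy, kfed (shipped in the Grid Infrastructure home) should show the same ORCLDISK tag in the header's provision string; a sketch, with the home path and device name as placeholders:

$GRID_HOME/bin/kfed read /dev/mapper/data0p1 | grep -i provstr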

Reference note: Troubleshooting a multi-node ASMLib installation (Doc ID 811457.1)

RAC - Relocating VIP and SCAN


Failover VIP (on the destination node)
./crs_relocate [vip resource name]

The VIP will now go where it's configured to be

Failover SCAN
srvctl relocate scan -i [LISTENER_NUMBER] -n [DESTINATION_NODE_NAME]
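
To confirm where things landed afterwards (a sketch, not part of the relocate itself):

srvctl status scan
srvctl status scan_listener
crsctl status resource -t | grep -i vip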

Shared TNSNAMES.ora is not supported



Oracle does not Support Shared tnsnames.ora

SR Reply: "Well, technically there is no official support for UNC (Universal Naming Convention eg. \\path\file) in the tnsnames files. I have seen customers try this and some manage to get it to work and some do not. My tests have always lead to it not working so I personally do not believe those that have claimed it worked when I used the same syntax and it fails.

Officially we do not support it because the values used for UNC are restricted values.


See this link; specifically, the // characters.

Those are used by our coding logic for EZConnect connections, so we have logic in place for those values to be used for a different purpose.
That is just one of the many reasons it's not supported (not the only reason, just an FYI).

So I cannot really provide the proper syntax as it technically does not exist. 
My apologies.

Also, so you know, technically we do not officially support shared tnsnames files on remote directories.


The ifile and even TNS_ADMIN were designed to handle multiple Oracle homes on the same node. They were never intended or supported to work over a network. It does work under most circumstances, but it is not something we recommend or support. For shared usage over the network, we suggest an LDAP server as the intended feature to use."
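
In other words, keep a local tnsnames.ora per host; IFILE can still factor the common aliases out into another local file (the path below is just an example), and LDAP is the supported option for true central sharing:

# tnsnames.ora on each host
IFILE = /u01/app/oracle/network/admin/common_tnsnames.ora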

How FULL are the BLOCKS in my TABLE?


Table Block Space Usage:

set serveroutput on size 100000
declare
 v_unformatted_blocks number;
 v_unformatted_bytes number;
 v_fs1_blocks number;
 v_fs1_bytes number;
 v_fs2_blocks number;
 v_fs2_bytes number;
 v_fs3_blocks number;
 v_fs3_bytes number;
 v_fs4_blocks number;
 v_fs4_bytes number;
 v_full_blocks number;
 v_full_bytes number;
 begin
  dbms_space.space_usage (
   '&TABLEOWNER',        --object owner
   '&TABLENAME',         --object name
   'TABLE',              --object type TABLE, INDEX, or "TABLE PARTITION" 
   v_unformatted_blocks,
   v_unformatted_bytes,
   v_fs1_blocks,
   v_fs1_bytes,
   v_fs2_blocks,
   v_fs2_bytes,
   v_fs3_blocks,
   v_fs3_bytes,
   v_fs4_blocks,
   v_fs4_bytes,
   v_full_blocks,
   v_full_bytes
--'&PARTITIONNAME',
);
  dbms_output.put_line('Unformatted Blocks = '||v_unformatted_blocks);
  dbms_output.put_line('FS1 Blocks   = '||v_fs1_blocks);
  dbms_output.put_line('FS2 Blocks   = '||v_fs2_blocks);
  dbms_output.put_line('FS3 Blocks   = '||v_fs3_blocks);
  dbms_output.put_line('FS4 Blocks   = '||v_fs4_blocks);
  dbms_output.put_line('Full Blocks  = '||v_full_blocks);
 end;
/

Sample Output:
Unformatted Blocks = 16
FS1 Blocks   = 42   <-- 0-25% free space
FS2 Blocks   = 31   <-- 25-50% free space
FS3 Blocks   = 35   <-- 50-75% free space
FS4 Blocks   = 4651 <-- 75-100% free space
Full Blocks  = 99448
 
Shrinking options:
-- Enable row movement.
ALTER TABLE scott.emp ENABLE ROW MOVEMENT;
-- Recover space and amend the high water mark (HWM).
ALTER TABLE scott.emp SHRINK SPACE;
-- Recover space, but don't amend the high water mark (HWM).
ALTER TABLE scott.emp SHRINK SPACE COMPACT;
-- Recover space for the object and all dependent objects.
ALTER TABLE scott.emp SHRINK SPACE CASCADE;
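
A typical end-to-end sequence (a sketch; regathering stats afterwards is a habit of mine, not something the shrink itself requires):

ALTER TABLE scott.emp ENABLE ROW MOVEMENT;
ALTER TABLE scott.emp SHRINK SPACE COMPACT;  -- online, leaves the HWM alone
ALTER TABLE scott.emp SHRINK SPACE;          -- short lock while the HWM moves down
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT','EMP');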

RAC Cluster Name



$CRS_HOME/bin/cemutlo -n
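
olsnodes can report the same thing (11.2+):

$CRS_HOME/bin/olsnodes -c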

DML Activity by Object (including statistics)


This is a helpful script to see DML activity on tables.

By default, once the "% Updates" value for an object exceeds 10%, the automatic stats job considers its statistics stale and will re-gather them.

First, flush DML activity stats to dba_tab_modifications:

exec dbms_stats.flush_database_monitoring_info;

set linesize 140
set pagesize 50
col table_partition heading 'Table.Partition' format a40
col analyzed heading 'Last Analyzed'
col num_rows heading '# Rows' format 99,999,999,999
col tot_updates heading 'Total DMLs' format 99,999,999,999
col truncd heading 'Truncated?'
col pct_updates heading '%|Updates' format 999.99
col ts heading 'Last DML'
select table_name||decode(partition_name,null,'','.'||partition_name) table_partition,
       to_char(last_analyzed,'MM/DD/YY HH24:MI') analyzed,
       num_rows,
       tot_updates,
       to_char(timestamp,'MM/DD/YY HH24:MI') ts,
       to_number(perc_updates) pct_updates,
       decode(truncated,'NO','','Yes       ') truncd
  from (select a.*,
               nvl(decode(num_rows, 0, '-1', 100 * tot_updates / num_rows), -1) perc_updates
          from (select (select num_rows
                         from dba_tables
                        where dba_tables.table_name = DBA_TAB_MODIFICATIONS.table_name
                          and DBA_TAB_MODIFICATIONS.table_owner = dba_tables.owner) num_rows,
                       (select last_analyzed
                          from dba_tables
                         where dba_tables.table_name = DBA_TAB_MODIFICATIONS.table_name
                           and DBA_TAB_MODIFICATIONS.table_owner = dba_tables.owner) last_analyzed,
                       (inserts + updates + deletes) tot_updates,
                       DBA_TAB_MODIFICATIONS.*
                  from sys.DBA_TAB_MODIFICATIONS
               ) a
       ) b
 where perc_updates > 5
   and table_owner = '&SCHEMA'
 order by last_analyzed desc
/
exit
/


SAMPLE OUTPUT:

Table.Partition      Last Analyzed           # Rows      Total DMLs Last DML       Updates Truncated?
-------------------- -------------- --------------- --------------- -------------- ------- ----------
AQ$_QUEUE_TABLES     03/12/13 14:32              11               1 03/12/13 14:36    9.09
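
Once a table shows up with a high "% Updates", you can regather its stats right away rather than waiting for the maintenance window (a sketch; substitute the owner and table name):

exec dbms_stats.gather_table_stats(ownname => '&SCHEMA', tabname => '&TABLENAME', cascade => true);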


Manually Run Stats Gathering Job


Run Stats Gathering Job:
exec DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC()

Notes:
The GATHER_DATABASE_STATS_JOB_PROC procedure collects statistics on database objects when the object has no previously gathered statistics or the existing statistics are stale because the underlying object has been modified significantly (more than 10% of the rows).

The DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC is an internal procedure, but it operates in a very similar fashion to the DBMS_STATS.GATHER_DATABASE_STATS procedure using the GATHER AUTO option. 

The primary difference is that the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC procedure prioritizes the database objects that require statistics, so that those objects which most need updated statistics are processed first. 

This ensures that the most-needed statistics are gathered before the maintenance window closes.
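
To check that the automatic task is enabled and has been running (11g; a sketch):

select client_name, status
  from dba_autotask_client
 where client_name = 'auto optimizer stats collection';

select job_status, job_start_time, job_duration
  from dba_autotask_job_history
 where client_name = 'auto optimizer stats collection'
 order by job_start_time desc;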

SQL Query for Oracle Redo Log Files


Handy SQL when manipulating logfile groups

set pagesize 30
set linesize 100
column member format a60
column status format a10
column size_MB format '999,999'
select group#,thread#,members,status,bytes/1024/1024 size_MB from v$log;
select group#,member from v$logfile order by group#;


select group#, thread#, status, bytes/1024/1024 size_MB from v$standby_log;
--show parameter db_create_online_log_dest
--alter system switch all logfile;
--alter database add logfile thread &thread group &group size &size;
--alter database drop logfile group &group;
--alter database drop logfile member '&member';


Sample Output:
    GROUP#    THREAD#    MEMBERS STATUS      SIZE_MB
---------- ---------- ---------- ---------- --------
         1          1          2 INACTIVE      4,096
         2          1          2 INACTIVE      4,096
         3          1          2 CURRENT       4,096
         6          2          2 INACTIVE      4,096
         7          2          2 CURRENT       4,096
         8          2          2 INACTIVE      4,096



    GROUP# MEMBER
---------- ------------------------------------------------------------
         1 +MY_DATA/mydb/onlinelog/group_1.324.807662009
         1 +MY_FRA/mydb/onlinelog/group_1.2611.807662031
         2 +MY_DATA/mydb/onlinelog/group_2.336.807662061
         2 +MY_FRA/mydb/onlinelog/group_2.2601.807662085
         3 +MY_DATA/mydb/onlinelog/group_3.321.807662115
         3 +MY_FRA/mydb/onlinelog/group_3.2607.807662137
...snip...
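
The usual reason to reach for the commented add/drop commands is resizing the online redo logs: add new groups at the target size, switch until the old groups go INACTIVE, then drop them. A sketch (group numbers and size are placeholders):

alter database add logfile thread 1 group 11 size 4096M;
alter database add logfile thread 1 group 12 size 4096M;
alter system switch logfile;
alter system checkpoint;
-- repeat the switch/checkpoint until the old group shows INACTIVE in v$log, then:
alter database drop logfile group 1;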


Database Uptime


Database Uptime (11g):
col d_name heading 'Database' format a8
col v_logon_time heading 'Startup'
col dh_uptime heading 'Uptime' format a30
select upper(sys_context('USERENV','DB_NAME')) d_name,
       to_char(logon_time,'DD-MON-YYYY hh24:mi:ss') v_logon_time,
       to_char(trunc(sysdate-logon_time,0))||' days, '||trunc(((sysdate-logon_time)-floor(sysdate-logon_time))*24)||' Hours' dh_uptime
  from sys.v_$session
 where sid=1 /* pmon session */
/

Sample Output:
Database Startup              Uptime
-------- -------------------- ------------------------------
MYDB     05-JUL-2012 21:42:24 32 days, 13 Hours
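
An alternative that doesn't depend on SID 1 being PMON (a sketch):

select instance_name,
       to_char(startup_time,'DD-MON-YYYY hh24:mi:ss') startup,
       trunc(sysdate-startup_time)||' days, '||trunc(mod(sysdate-startup_time,1)*24)||' Hours' uptime
  from v$instance
/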

Top Waits by Object


SQL to Identify Objects Creating Cluster-wide bottlenecks (in the past 24 hours)
set linesize 140
set pagesize 50
col sample_time format a26
col event format a30
col object format a45
--col num_sql heading '# SQL' format 9,999
select
       ash.sql_id,
--       count(distinct ash.sql_id) Num_SQL,
       ash.event,
       ash.current_obj#,
       o.object_type,
       o.owner||'.'||o.object_name||'.'||o.subobject_name object,
       count(*)
  from gv$active_session_history ash,
       all_objects o
 where ash.current_obj# = o.object_id
   and ash.current_obj# != -1
   and ash.event is not null
   and ash.sample_time between  sysdate - 1 and sysdate
--   and ash.sample_time between  sysdate - 4 and sysdate - 3
--   and to_date ('24-SEP-2010 14:28:00','DD-MON-YYYY HH24:MI:SS') and to_date ('24-SEP-2010 14:29:59','DD-MON-YYYY HH24:MI:SS')
 group by
       ash.sql_id,
       ash.event,
       ash.current_obj#,
       o.object_type,
       o.owner||'.'||o.object_name||'.'||o.subobject_name
having count(*) > 20
 order by count(*) desc
/
exit
/
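
To drill into one of the sql_ids it flags (a sketch; assumes the cursor is still in the shared pool):

select * from table(dbms_xplan.display_cursor('&SQL_ID', null, 'TYPICAL'));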

Track RMAN Job Process via gv$session_longops


Nice way to track RMAN Channel worker progress

Also useful: watch -n 10 'sqlplus -s usr/pwd @[this script].sql'

set linesize 120
column pct_done format '999.99'
column opname format a35
column time_left format a15
column started format a15
select
  sid,
  opname,
  to_char(start_time,'DD-MON HH24:MI') started,
  round(totalwork-sofar) blocks_left,
 (sofar/totalwork) * 100 pct_done,
  to_char(to_date(time_remaining,'sssss'),'hh24:mi:ss') time_left
from
   gv$session_longops
where
   totalwork > sofar
AND
   opname NOT LIKE '%aggregate%'
AND
   opname like 'RMAN%'
order by 2;
exit;
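
For the job-level view after the fact, v$rman_backup_job_details is handy (11g+; a sketch):

col status format a25
select session_key,
       status,
       to_char(start_time,'DD-MON HH24:MI') started,
       to_char(end_time,'DD-MON HH24:MI') ended,
       round(input_bytes/1024/1024/1024,1) input_gb,
       round(output_bytes/1024/1024/1024,1) output_gb
  from v$rman_backup_job_details
 order by start_time desc;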

Historical Blocking Locks

To Investigate Recent Blocking Locks (after the dust settles)

set pagesize 50
set linesize 120
col sql_id format a15
col inst_id format '9'
col sql_text format a50
col module format a10
col blocker_ses format '999999'
col blocker_ser format '999999'

------------------------------------------------------------------
--IN CHRONOLOGICAL ORDER (which is probably what you want anyways)
------------------------------------------------------------------
 SELECT distinct
        a.sql_id ,
        to_char(a.sql_exec_start,'DD-Mon HH24:MI') sql_start,
        a.inst_id,
        a.blocking_session blocker_ses,
        a.blocking_session_serial# blocker_ser,
        a.user_id,
        s.sql_text,
        a.module
 FROM  GV$ACTIVE_SESSION_HISTORY a,
       gv$sql s
 where a.sql_id=s.sql_id
   and blocking_session is not null
   and a.user_id <> 0 --  exclude SYS user
   and a.sample_time > sysdate - 1
 order by sql_start
/
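
To see what the blocker itself was doing at the time, feed the blocker_ses/blocker_ser values back into ASH (a sketch):

select to_char(sample_time,'DD-Mon HH24:MI:SS') sample_time,
       session_state,
       event,
       sql_id
  from gv$active_session_history
 where session_id = &BLOCKER_SES
   and session_serial# = &BLOCKER_SER
 order by sample_time
/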


Query ASM Disks and Diskgroups

List all ASM devices:
/etc/init.d/oracleasm querydisk -d `/etc/init.d/oracleasm listdisks -d` |
 cut -f2,10,11 -d" " | perl -pe 's/"(.*)".*\[(.*), *(.*)\]/$1 $2 $3/g;'


List all AVAILABLE ASM disks:
col path format a20
col header_status format a13
col os_mb format 999,999,999 heading 'Size (MB)'
SELECT inst_id, 
       path, 
       header_status, 
       os_mb 
  FROM GV$ASM_DISK 
 WHERE header_status in ('FORMER','PROVISIONED')
 ORDER BY path,
          inst_id;

Sample Output:
INST_ID  PATH                 HEADER_STATUS    Size (MB)
-------- -------------------- ------------- ------------
       1 ORCL:DISK25          FORMER             524,294
       2 ORCL:DISK25          FORMER             524,294
       1 ORCL:DISK26          FORMER             524,294
       2 ORCL:DISK26          FORMER             524,294
       1 ORCL:DISK30          FORMER             524,294
       2 ORCL:DISK30          FORMER             524,294
       1 ORCL:DISK33          PROVISIONED        524,294
       2 ORCL:DISK33          PROVISIONED        524,294


List All ASM DISKGROUPS:
set pagesize 60
set linesize 132
column aa format 99999 heading "DiskGroup"
column ab format a15 heading "DiskGroup"
column ac format a20 heading "Disk"
column ad format a15 heading "DiskGroup State"
column ae format a15 heading "Disk State"
break on ab skip 1
select substr(to_char(a.group_number),1,5) aa, substr(a.name,1,15) ab, substr(b.name,1,20) ac , b.total_mb, b.free_mb, a.state ad, b.state ae from
v$asm_diskgroup a, v$asm_disk b
where a.group_number = b.group_number
--and a.group_number = 4
order by 2,3
/

Sample Output:
DiskG DiskGroup       Disk              TOTAL_MB    FREE_MB DiskGroup State Disk State
----- --------------- --------------- ---------- ---------- --------------- ---------------
1     DATA            DATA1               517893     183780 MOUNTED         NORMAL
1                     DATA2               517893     183783 MOUNTED         NORMAL
1                     DATA3               517893     183781 MOUNTED         NORMAL
1                     DATA4               517893     183781 MOUNTED         NORMAL
1                     DATA5               517893     183780 MOUNTED         NORMAL
2     FRA             FRA1                517893     426005 MOUNTED         NORMAL
3     VOTING          VOTING                8631       8235 MOUNTED         NORMAL


Rebalance Operations:
11g
select inst_id, 
       operation, 
       state, 
       power, 
       sofar, 
       est_work, 
       est_rate, 
       est_minutes 
  from gv$asm_operation 
 order by inst_id, state
/

12c
select inst_id, 
       pass,
       state, 
       power, 
       sofar, 
       est_work, 
       est_rate, 
       est_minutes 
  from gv$asm_operation 
 order by inst_id, state
/

12c added a "COMPACT" pass to improve disk seek performance.

Sample Output:
   INST_ID OPERA STAT      POWER      SOFAR   EST_WORK   EST_RATE EST_MINUTES
---------- ----- ---- ---------- ---------- ---------- ---------- -----------
         1 REBAL RUN           5     314121     314121       1029           0
         1 REBAL WAIT          5
         2 REBAL RUN           5       9724     188000       1289         138
         2 REBAL WAIT          5
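
To start or re-prioritize a rebalance by hand (a sketch; the diskgroup name is a placeholder):

ALTER DISKGROUP DATA REBALANCE POWER 8;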

Query Disk Compatibility:
col COMPATIBILITY form a10
col DATABASE_COMPATIBILITY form a10
col NAME form a20
select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;
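
Raising compatibility is one-way (it can't be lowered afterwards), but the syntax is simple; a sketch with placeholder values:

ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm'   = '11.2';
ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '11.2';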

Additional reference:
How To Gather & Backup ASM/ACFS Metadata In A Formatted Manner version 10.1, 10.2, 11.1, 11.2 and 12.1? (Doc ID 470211.1)

ASM Diskgroup Space Used / Free


SQL:
set linesize 140
col group_number heading 'Diskgroup|Number' format 999
col diskgroup heading 'Name' format a20
col total_mb heading 'Allocated (MB)' format 999,999,999
col free_mb heading 'Available (MB)' format 999,999,999
col tot_used heading 'Used (MB)' format 999,999,999
col pct_used heading '% Used' format 999
col pct_free heading '% Free' format 999
select group_number,
       name diskgroup,
       total_mb,
       free_mb,
       total_mb-free_mb tot_used,
       pct_used,
       pct_free
  from (select group_number,name,total_mb,free_mb,
             round(((total_mb-nvl(free_mb,0))/decode(total_mb,0,1,total_mb))*100) pct_used,
             round((free_mb/total_mb)*100) pct_free
      from v$asm_diskgroup
      where total_mb >0
      order by pct_free
     )
/

SAMPLE OUTPUT:
Diskgroup
   Number Name            Allocated (MB) Available (MB)   Used (MB) % Used % Free
--------- --------------- -------------- -------------- ----------- ------ ------
        2 DATA2                5,767,234      1,860,008   3,907,226     68     32
        1 DATA1                5,767,234      1,996,305   3,770,929     65     35
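
For NORMAL or HIGH redundancy diskgroups, USABLE_FILE_MB (which accounts for mirroring and the space needed to survive a disk failure) is the number that really matters; a sketch:

col usable_file_mb format 999,999,999
select name, type, total_mb, free_mb, required_mirror_free_mb, usable_file_mb
  from v$asm_diskgroup
 where total_mb > 0
 order by name
/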

Objects in Data Buffers

Objects in Data Buffers:

set pages 50
set linesize 110
spool blocks.lst
ttitle 'Contents of Data Buffers'
drop table t1;
create table t1 as
select
   o.object_name    object_name,
--   o.subobject_name subobject_name,
   o.object_type    object_type,
   count(1)         num_blocks
from
   dba_objects  o,
   v$bh         bh
where
   o.object_id  = bh.objd
and
   o.owner not in ('SYS','SYSTEM')
group by
   o.object_name,
--   o.subobject_name,
   o.object_type
order by
   count(1) desc
/

column c1 heading "Object|Name"                 format a30
--column c1a heading "Partition|Name"             format a15
column c2 heading "Object|Type"                 format a16
column c3 heading "Number of|Blocks"            format 999,999,999,999
column c3a heading "Size (MB)|32k blocks"       format 999,999,999
column c4 heading "Percentage|of object|data blocks|in Buffer" format 999
select
   object_name       c1,
--   subobject_name    c1a,
   object_type       c2,
   num_blocks        c3,
   (num_blocks*32)/1024 c3a,
   (num_blocks/decode(sum(blocks), 0, .001, sum(blocks)))*100 c4
from
   t1,
   dba_segments s
where
   s.segment_name = t1.object_name
and
   num_blocks > 10
group by
   object_name,
--   subobject_name,
   object_type,
   num_blocks
order by
   num_blocks desc
/
exit
/
 
Sample Output:
Mon Aug 06                                                                       page    1
                                 Contents of Data Buffers
                                                                               Percentage
                                                                                of object
Object                         Object                  Number of    Size (MB) data blocks
Name                           Type                       Blocks   32k blocks   in Buffer
------------------------------ ---------------- ---------------- ------------ -----------
...
38 rows selected.
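
A variant without the scratch table, in case you can't create one; it joins on data_object_id, which is what v$bh.objd actually holds (a sketch):

select o.object_name,
       o.object_type,
       count(*) num_blocks
  from dba_objects o,
       v$bh bh
 where o.data_object_id = bh.objd
   and o.owner not in ('SYS','SYSTEM')
 group by o.object_name, o.object_type
having count(*) > 10
 order by count(*) desc
/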

SGA Usage Report


SGA Usage Report:
break on report
compute sum of mb on report
compute sum of inuse on report
set pagesize 50
col mb format 999,999
col inuse format 999,999
select name,
       round(sum(mb),1) mb,
       round(sum(inuse),1) inuse
  from (select case when name = 'buffer_cache'
                    then 'db_cache_size'
                    when name = 'log_buffer'
                    then 'log_buffer'
                    else pool
                end name,
                bytes/1024/1024 mb,
                case when name <> 'free memory'
                     then bytes/1024/1024
                end inuse
           from v$sgastat
       )
 group by name
 order by mb desc
/
exit
/

Sample Output:
NAME                MB    INUSE
------------- -------- --------
db_cache_size   85,504   85,504
shared pool      6,144    3,879
streams pool       512      256
large pool         256        1
java pool          256
log_buffer          98       98
                     2        2
              -------- --------
sum             92,773   89,741

Instance Memory Usage


Instance Memory Usage:
set linesize 100
set pagesize 50
col component format a35
col size_mb format 999,999
select component,
       current_size/1024/1024 size_mb
  from v$memory_dynamic_components
 order by current_size desc
/
!free
exit
/

Sample Output:
COMPONENT                            SIZE_MB
----------------------------------- --------
SGA Target                            92,160
DEFAULT buffer cache                  85,504
PGA Target                            30,720
shared pool                            5,376
streams pool                             256
java pool                                256
large pool                               256
RECYCLE buffer cache                       0
DEFAULT 2K buffer cache                    0
DEFAULT 4K buffer cache                    0
DEFAULT 8K buffer cache                    0
KEEP buffer cache                          0
DEFAULT 32K buffer cache                   0
Shared IO Pool                             0
ASM Buffer Cache                           0
DEFAULT 16K buffer cache                   0

16 rows selected.

Validate RAC Networking


1. Validate RAC Networking
 a. Record IP’s and node names
  i. Run /sbin/ifconfig
  ii. Note private, public IP’s.
  iii. Example:
   1. node name : l6312
   2. Public IP : 10.118.49.25
   3. Private IP: 10.255.255.25
   4. node name : l6313
   5. Public IP : 10.118.49.26
   6. Private IP: 10.255.255.26
  iv. Run nslookup [scan name]
   1. Note IP’s returned
 b. Verify Multicast (11.2.0.2 RAC specific) on all nodes
  i. /bin/netstat -in
  ii. Look for: eth0 and eth1, MTU = 1500
  iii. /sbin/ifconfig
  iv. Look for: “MULTICAST MTU:1500”
 c. Test public IP’s on all nodes
  i. /bin/ping -s 1500 -c 2 -i [IP]
   1. Ping node1 => node1
   2. Ping node2 => node2
   3. Ping node1 => node2, etc
   4. Ping node2 => node1, etc
 d. Test private IP’s on all nodes
  i. /bin/ping -s 1500 -c 2 -I [IP]
   1. Ping node1 => node1
   2. Ping node2 => node2
   3. Ping node1 => node2, etc
   4. Ping node2 => node1, etc
 e. Test private IP’s traceroute
  i. /bin/traceroute -s [local private IP] -r -F [remote private IP] 1472
   1. Look for ONLY 1 hop to the remote private IP
 f. Test VIP’s
  i. /bin/ping -c 2 [VIP name]  from all to all
   1. Ping node1 => node1 vip
   2. Ping node2 => node2 vip
   3. Ping node1 => node2 vip, etc
   4. Ping node2 => node1 vip, etc
  ii. Note: These VIP’s could be on different nodes after clusterware is up
  iii. Look for: Successful pings
 g. Test DNS setup
  i. /usr/bin/nslookup
   1. nslookup from node1,node2,etc. => VIP name
   2. nslookup from node1,node2,etc. => SCAN name
 h. Verify name resolution order
  i. grep ^hosts /etc/nsswitch.conf
  ii. Look for : “files dns”
 i. Verify /etc/hosts (check for all cluster members)
  i. grep [node1 hostname] /etc/hosts
  ii. grep [node2 hostname] /etc/hosts
  iii. grep [node1 VIP name] /etc/hosts
  iv. grep [node2 VIP name] /etc/hosts
  v. grep [node1 IP] /etc/hosts
  vi. grep [node2 IP] /etc/hosts
  vii. grep [node1 VIP] /etc/hosts
  viii. grep [node2 VIP] /etc/hosts
 j. Verify scan is not in /etc/hosts
  i. grep [scan name] /etc/hosts
  ii. grep [SCAN IP] /etc/hosts (rerun for each SCAN IP)
2. Test CRS
 a. Restart nodes one at a time
  i. Verify that resources fail-over
  ii. Verify that resources actually restart on the restarted node
   1. crsctl status resource -t
    a. check for gds,vip,listener,db(s),services,etc.
  iii. Follow crs logs, look for unexpected errors
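
cluvfy automates most of the checks above and is worth running as the grid owner once the stack is up (a sketch):

$CRS_HOME/bin/cluvfy comp nodecon -n all -verbose
$CRS_HOME/bin/cluvfy comp scan -verbose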