Tuesday, December 29, 2015

11gR2 RAC Administration Commands



1. Checking CRS status:

The two commands below are generally used to check the status of CRS on the local node and on all nodes of the cluster.

crsctl check crs ==> checks the status of the Clusterware stack on the local node.

# pwd
/u01/app/11.2.0.3/grid/bin
21:34:56 root@its0003: /u01/app/11.2.0.3/grid/bin
# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
21:35:12 root@its0003: /u01/app/11.2.0.3/grid/bin
#
crsctl check cluster ==> checks the status of the Clusterware stack (CRS, CSS, and EVM); add -all to check all nodes of the cluster.

21:35:12 root@its0003: /u01/app/11.2.0.3/grid/bin
# ./crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
21:39:57 root@its0003: /u01/app/11.2.0.3/grid/bin
#
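To check the stack on every node with a single command, the -all option can be used (a short sketch; run it from the Grid home bin directory as in the examples above):

# ./crsctl check cluster -all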

2. Viewing Cluster Name:

Below are a few ways to get the name of the cluster.

# ./cemutlo -n
itsrac01
22:01:01 root@its0003: /u01/app/11.2.0.3/grid/bin
#

or

Oracle creates a directory with the name of the cluster under $ORA_CRS_HOME/cdata. You can get the name of the cluster from this directory as well.
22:06:00 root@its0003: /u01/app/11.2.0.3/grid/cdata
# ls
its0003  its0003.olr  itsrac01  localhost
22:06:18 root@its0003: /u01/app/11.2.0.3/grid/cdata
#

or

olsnodes -c  ==> displays the name of the cluster.
# ./olsnodes -c
itsrac01
06:28:57 root@its0003: /u01/app/11.2.0.3/grid/bin
#


3. Viewing the Number of Nodes Configured in the Cluster:

The command below displays the number of nodes registered in the cluster. It can also display other information; see the usage details below.
olsnodes -n -s
# ./olsnodes -n -s
its0001      1       Active
its0002      2       Active
its0003      3       Active
06:25:53 root@its0003:
Usage: olsnodes [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] | [-c] ] [-g] [-v]
        where
                -n print node number with the node name
                -p print private interconnect address for the local node
                -i print virtual IP address with the node name
                <node> print information for the specified node
                -l print information for the local node
                -s print node status - active or inactive
                -t print node type - pinned or unpinned
                -g turn on logging
                -v Run in debug mode; use at direction of Oracle Support only.
                -c print clusterware name
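The flags can be combined; for example, the sketch below (built only from the usage shown above) prints the node number, VIP, status, and node type together:

# ./olsnodes -n -i -s -t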


4. Votedisk Information:

The command below displays the voting disks configured in the cluster.
crsctl query css votedisk
# ./crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   039f2497bfbf4f63bfb6ba0455c69921 (ORCL:OCR02) [OCR_VOTE]
Located 1 voting disk(s).
06:37:07 root@its0003: /u01/app/11.2.0.3/grid/bin
#

- Check ocssd.log for voting disk issues:
$ grep voting <grid_home>/log/<hostname>/cssd/ocssd.log

5. Viewing OCR Disk Information:

The command below displays the OCR locations configured in the cluster, along with the OCR version and storage usage.
A minimum of 1 and a maximum of 5 OCR copies are possible. Run this command as the root user; if you run it as the oracle user, you get the message "logical corruption check bypassed due to non-privileged user".

- Use the cluvfy utility or the ocrcheck command to check the integrity of the OCR.
# cluvfy comp ocr -n all -verbose
or

ocrcheck
# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       5320
         Available space (kbytes) :     256800
         ID                       :  828879957
         Device/File Name         :  +OCR_VOTE
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded
06:42:03 root@its0003: /u01/app/11.2.0.3/grid/bin
#

or
- To determine the location of the OCR:
$ cat /etc/oracle/ocr.loc
ocrconfig_loc=+DATA
local_only=FALSE
 

6. Various Timeout Settings in Cluster:

Disktimeout:
The maximum allowed disk I/O latency, in seconds, from a node to the voting disk. The default value is 200. (Disk I/O)

Misscount:
The maximum allowed network latency, in seconds, for node-to-node heartbeats over the interconnect. The default value is 60 seconds on Linux and 30 seconds on other Unix platforms. (Network I/O)
Misscount < Disktimeout
NOTE: Do not change these values without contacting Oracle Support; incorrect settings can cause logical corruption of the data.
IF (Disk IO time > Disktimeout) OR (Network IO time > Misscount) THEN
   REBOOT NODE
ELSE
   DO NOT REBOOT
END IF;

crsctl get css disktimeout
crsctl get css misscount
crsctl get css reboottime

Disktimeout:
# ./crsctl get css disktimeout
CRS-4678: Successful get disktimeout 200 for Cluster Synchronization Services.
06:48:54 root@its0003: /u01/app/11.2.0.3/grid/bin
#

Misscount:
# ./crsctl get css misscount
CRS-4678: Successful get misscount 60 for Cluster Synchronization Services.
06:49:26 root@its0003: /u01/app/11.2.0.3/grid/bin
#
- You can change the misscount value as shown below:
# ./crsctl set css misscount 80
Configuration parameter misscount is now set to 80

# ./crsctl get css misscount
80
- To set misscount back to its default value:
crsctl unset css misscount
# ./crsctl unset css misscount
Configuration parameter misscount is reset to default operation value.
# ./crsctl get css misscount
60

Reboottime:
# ./crsctl get css reboottime
3
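A small sketch to print all three CSS settings in one pass (run as root from the Grid home bin directory, as in the examples above):

# for p in disktimeout misscount reboottime; do ./crsctl get css $p; done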

7. OCR and Voting Disks Info.
OCR: It is created at the time of Grid Infrastructure installation. It stores the information used to manage Oracle Clusterware and its components, such as the RAC database, listener, VIP, SCAN IP, and services.
A minimum of 1 and a maximum of 5 OCR copies are possible.
Voting Disk: It manages node membership information. Each voting disk must be accessible by all nodes in the cluster. If a node stops sending heartbeats to the other nodes or to the voting disk, that node is evicted from the cluster.
A minimum of 1 and a maximum of 15 voting disk copies are possible.

New Facts in 11gR2:
• We can store the OCR and voting disks on ASM or on a certified cluster file system.
• We can dynamically add or replace voting disks and the OCR.
• Backing up the voting disks with the “dd” command is not supported.
• The voting disks and the OCR can be kept in the same disk group or in different disk groups.
• Automatic backups of the voting disks and the OCR are kept together in a single file.
• Automatic backups of the voting disks and the OCR are taken every four hours, at the end of the day, and at the end of the week.
• You must have root (or sudo) privileges to manage them.
OCR and Voting Disk Backup:
In 11g Release 2 you no longer have to back up the voting disks separately, because they are included in all OCR backups (automatic and manual).

- OCR backups are made to the GRID_HOME/cdata/<cluster name> directory on the node performing the backups. These backups are named as follows:
  4-hour backups (3 max): backup00.ocr, backup01.ocr, and backup02.ocr
  Daily backups (2 max): day.ocr and day_.ocr
  Weekly backups (2 max): week.ocr and week_.ocr

- Note that RMAN does not back up the OCR.
- You can use the ocrconfig command to view the current OCR backups, as seen in this example.

- Check the automatic OCR backups using the command below:
# ./ocrconfig -showbackup auto
its0002     2014/06/04 05:43:16     /u01/app/11.2.0.3/grid/cdata/itsrac01/backup00.ocr
its0002     2014/06/04 01:43:14     /u01/app/11.2.0.3/grid/cdata/itsrac01/backup01.ocr
its0002     2014/06/03 21:43:14     /u01/app/11.2.0.3/grid/cdata/itsrac01/backup02.ocr
its0002     2014/06/02 09:43:07     /u01/app/11.2.0.3/grid/cdata/itsrac01/day.ocr
its0002     2014/05/22 09:42:21     /u01/app/11.2.0.3/grid/cdata/itsrac01/week.ocr
09:30:05 root@its0003: /u01/app/11.2.0.3/grid/bin
#

- One thing to be aware of is that if your cluster is shut down, the automatic backups will not occur (nor will the purging).
- If you need to back up the OCR immediately (for example, you have made a number of cluster-related changes), you can use the ocrconfig command to perform a manual backup:
ocrconfig -manualbackup

- You can list the manual backups with the ocrconfig command too:

 ocrconfig -showbackup manual

# ./ocrconfig -showbackup manual
its0002     2013/07/03 19:31:44     /u01/app/11.2.0.3/grid/cdata/itsrac01/backup_20130703_193144.ocr
its0002     2013/07/01 15:52:04     /u01/app/11.2.0.3/grid/cdata/itsrac01/backup_20130701_155204.ocr
09:34:24
root@itsolx0003: /u01/app/11.2.0.3/grid/bin
#
- ocrconfig also supports the creation of a logical backup of the OCR, as seen here:

 ocrconfig -export /tmp/ocr.exp

- It is recommended that the OCR backup location be on a shared file system and that the cluster be configured to write the backups to that file system. To change the location of the OCR backups, you can use the ocrconfig command as seen in this example:
 ocrconfig -backuploc /u01/app/oracle/ocrloc

- Note that the ASM Cluster File System (ACFS) does not support storage of OCR backups.
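A minimal wrapper for taking an on-demand backup plus a logical export with a timestamped name (a sketch only; the Grid home and export directory below are assumptions to adjust for your environment):

#!/bin/bash
# Take a manual OCR backup and a logical export (run as root).
GRID_HOME=/u01/app/11.2.0.3/grid            # assumed Grid home
EXPDIR=/u01/app/oracle/ocr_exports          # assumed export directory on shared storage
STAMP=$(date +%Y%m%d_%H%M)
$GRID_HOME/bin/ocrconfig -manualbackup
$GRID_HOME/bin/ocrconfig -export $EXPDIR/ocr_${STAMP}.exp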

Add/Remove Votedisks

- To add or remove voting disks on non-Automatic Storage Management (ASM) storage, use the following commands:
# crsctl delete css votedisk path_to_voting_disk
# crsctl add css votedisk path_to_voting_disk

 - To add a voting disk to ASM:
# crsctl replace votedisk +asm_disk_group

- Use the crsctl replace votedisk command to replace a voting disk on ASM. You do not have to delete any voting disks from ASM using this command.
Restoring the OCR
- If you back it up, there might come a time to restore it. Recovering the OCR from the physical backups is fairly straightforward; just follow these steps:
 1. Locate the OCR backup using the ocrconfig command.
  ocrconfig -showbackup
 2. Stop Oracle Clusterware (on all nodes)
  crsctl stop cluster -all
 3. Stop CRS on all nodes
  crsctl stop crs   ----> this stops CRS only on the node where it is executed, so run it on each node.
 4. Restore the OCR backup (physical) with the ocrconfig command.
  ocrconfig -restore {path_to_backup/backup_file_to_restore}
 5. Restart CRS
  crsctl start crs
 6. Check the integrity of the newly restored OCR:
  cluvfy comp ocr -n all

 You can also restore the OCR using a logical backup as seen here:

1. Locate your logical backup.

2. Stop Oracle Clusterware (on all nodes)
   crsctl stop cluster -all
3. Stop CRS on all nodes
   crsctl stop crs
4. Restore the OCR from the logical backup (export) with the ocrconfig command.
   ocrconfig -import /tmp/export_file.fil
5. Restart CRS
   crsctl start crs
6. Check the integrity of the newly restored OCR:
   cluvfy comp ocr -n all

- If you are upgrading to Oracle Database 11g you can migrate your voting disks to ASM easily with the crsctl replace command.
- You can also use the crsctl query command to locate the voting disks as seen in this example:
  crsctl query css votedisk
- You can also migrate voting disks between NAS and ASM or ASM to NAS using the crsctl replace command.
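For example, moving the voting disks from ASM to NAS could look like the sketch below (the NAS paths are hypothetical; the target files must already exist and be accessible from all nodes):

# crsctl replace votedisk /nas/cluster/vote01 /nas/cluster/vote02 /nas/cluster/vote03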

To check the Clusterware version:
$ crsctl query crs activeversion
Oracle Clusterware active version on cluster is [11.2.0.1.0]
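To compare this with the software version installed on the local node, crsctl also provides:
$ crsctl query crs softwareversion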

Troubleshooting Oracle Clusterware

Oracle Clusterware Main Log Files:

  • Cluster Ready Service (CRS) logs are in <Grid_Home>/log/<hostname>/crsd/. The crsd.log file is archived every 10 MB (crsd.l01, crsd.l02,...)
  • Cluster Synchronization Service (CSS) logs are in <Grid_Home>/log/<hostname>/cssd/. The cssd.log file is archived every 20 MB (cssd.l01, cssd.l02,...)
  • Event Manager (EVM) logs are in <Grid_Home>/log/<hostname>/evmd.
  • SRVM (srvctl) and OCR (ocrdump, ocrconfig, ocrcheck) logs are in <Grid_Home>/log/<hostname>/client/ and $ORACLE_HOME/log/<hostname>/client/.
  • Important Oracle Clusterware alerts can be found in alert<nodename>.log in the <Grid_Home>/log/<hostname> directory.
  • Oracle Cluster Registry tools (ocrdump, ocrcheck, ocrconfig) logs can be found in <Grid_Home>/log/<hostname>/client.
  • In addition, important Automatic Storage Management (ASM)-related trace and alert information can be found in the <Grid_Base>/diag/asm/+asm/+ASMn directory, specifically the log and trace directories.
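A quick sweep of the main locations listed above can save time when a problem is first reported (a rough sketch; the Grid home path is this cluster's and the alert log name assumes the short hostname, so adjust as needed):

GRID_HOME=/u01/app/11.2.0.3/grid
HOST=$(hostname -s)
# Clusterware alert log is usually the first place to look
tail -50 $GRID_HOME/log/$HOST/alert${HOST}.log
# Recent errors in the CRSD and CSSD logs
grep -i "error" $GRID_HOME/log/$HOST/crsd/crsd.log  | tail -20
grep -i "error" $GRID_HOME/log/$HOST/cssd/ocssd.log | tail -20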

Diagnostics Collection Script:

- Use the diagcollection.pl script to collect diagnostic information from an Oracle Grid Infrastructure installation. The diagnostics provide additional information so that Oracle Support can resolve problems. This script is located in <Grid_Home>/bin.
/u01/app/11.2.0/grid/bin/diagcollection.pl --collect

# /u01/app/11.2.0/grid/bin/diagcollection.pl --collect
Production Copyright 2004, 2008, Oracle.  All rights reserved
Cluster Ready Services (CRS) diagnostic collection tool
The following diagnostic archives will be created in the local directory.
crsData_host01_20090729_1013.tar.gz -> logs,traces and cores from CRS home. Note: core files will be packaged only with the --core option.
ocrData_host01_20090729_1013.tar.gz -> ocrdump, ocrcheck etc
coreData_host01_20090729_1013.tar.gz -> contents of CRS core files
osData_host01_20090729_1013.tar.gz -> logs from Operating System
....
  - To check CSS modules:

$ crsctl lsmodules css
The following are the Cluster Synchronization Services modules: CSSD COMCRS COMMNS CLSF SKGFD
To enable tracing for cluvfy, netca, and srvctl, set SRVM_TRACE to TRUE:

$ export SRVM_TRACE=TRUE
$ srvctl config database -d orcl > /tmp/srvctl.trc
$ cat /tmp/srvctl.trc
...
[main] [ 2009-09-16 00:58:53.197 EDT ] [CRSNativeResult.addRIAttr:139]  addRIAttr: name 'ora.orcl.db 3 1', 'USR_ORA_INST_NAME@SERVERNAME(host01)':'orcl1'
[main] [ 2009-09-16 00:58:53.197 EDT ] [CRSNativeResult.addRIAttr:139]  addRIAttr: name 'ora.orcl.db 3 1', 'USR_ORA_INST_NAME@SERVERNAME(host02)':'orcl2'
[main] [ 2009-09-16 00:58:53.198 EDT ] [CRSNativeResult.addRIAttr:139]  addRIAttr: name 'ora.orcl.db 3 1', 'USR_ORA_INST_NAME@SERVERNAME(host03)':'orcl3'
[main] [ 2009-09-16 00:58:53.198 EDT ] [CRSNative.searchEntities:857]  found 3 ntitie
...
Cluster Verify Components:

CVU supports the notion of component verification. The verifications in this category are not associated with any specific stage. A component can range from basic, such as free disk space, to complex (spanning over multiple subcomponents), such as the Oracle Clusterware stack. Availability, integrity, or any other specific behavior of a cluster component can be verified.

You can list verifiable CVU components with the cluvfy comp -list command:

$ cluvfy comp -list

nodereach - Checks node reachability
peer - Compares properties with peers
nodecon - Checks node connectivity
ha - Checks HA integrity
cfs - Checks CFS integrity
asm - Checks ASM integrity
ssa - Checks shared storage
acfs - Checks ACFS integrity
space - Checks space availability
olr - Checks OLR integrity
sys - Checks minimum requirements
gpnp - Checks GPnP integrity
clu - Checks cluster integrity
gns - Checks GNS integrity
clumgr - Checks cluster manager integrity
scan - Checks SCAN configuration
ocr - Checks OCR integrity
ohasd - Checks OHASD integrity
admprv - Checks administrative privileges
crs - Checks CRS integrity
software - Checks software distribution
vdisk - Checks Voting Disk Udev settings
clocksync - Checks clock synchronization
nodeapp - Checks node applications’ existence

Note: For manual installation, you need to install CVU on only one node. CVU deploys itself on remote nodes during executions that require access to remote nodes.
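The same pattern works for any component in the list; for instance, a clock synchronization check across all nodes (a sketch using the same flags as the CRS example that follows):

$ cluvfy comp clocksync -n all -verbose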

Cluster Verify Output: Example

$ cluvfy comp crs -n all -verbose
Verifying CRS integrity
Checking CRS integrity...
The Oracle clusterware is healthy on node "host03"
The Oracle clusterware is healthy on node "host02"
The Oracle clusterware is healthy on node "host01"
CRS integrity check passed
Verification of CRS integrity was successful.


 Write a shell script to copy log files before they wrap:

# Script to archive log files before wrapping occurs
# Written for CSS logs. Modify for other log file types.
CSSLOGDIR=/u01/app/11.2.0/grid/log/host01/cssd
while [ 1 -ne 0 ]; do
   CSSFILE=/tmp/css_`date +%m%d%y"_"%H%M`.tar
   tar -cf $CSSFILE $CSSLOGDIR/*
   sleep 300
done
exit


Processes That Can Reboot Nodes:

The following processes can evict nodes from the cluster or cause a node reboot:
  • hangcheck-timer: Monitors for machine hangs and pauses (it is not required in 11gR2 but required for 11gR1)
  • oclskd: Is used by CSS to reboot a node based on requests from other nodes in the cluster
  • ocssd: Monitors internode health status
Note: While the hangcheck-timer module is still required for Oracle Database 11g Release 1 RAC databases, it is no longer needed for Oracle Database 11g Release 2 RAC.

Determining Which Process Caused Reboot:
Most of the time, the process writes error messages to its log file when a reboot is required.

- ocssd
  - /var/log/messages
  - <Grid_Home>/log/<hostname>/cssd/ocssd.log
- oclskd
  - <Grid_Home>/log/<hostname>/client/oclskd.log
- hangcheck-timer
  - /var/log/messages
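A rough sketch for scanning these locations for eviction or reboot clues (the search string is only a starting point; replace <Grid_Home> and <hostname> as in the paths above):

# grep -i "evict" /var/log/messages | tail -20
# grep -i "evict" <Grid_Home>/log/<hostname>/cssd/ocssd.log | tail -20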

Using diagwait for Eviction Troubleshooting:
When a node is evicted on a busy system, the OS may not have had time to flush logs and trace files before reboot.
- Use the diagwait CSS attribute to allow more time.
- It does not guarantee that logs will be written.
- The recommended value is 13 seconds.
- A clusterwide outage is required to change it.
- It is not enabled by default.
- To enable:

# crsctl set css diagwait 13 -force
- To Disable:
# crsctl unset css diagwait
Using ocrdump to View Logical Contents of the OCR:
- The ocrdump utility can be used to view OCR contents for troubleshooting. It enables you to view the logical information by writing the contents to a file or displaying them on stdout in a readable format.

- To dump the OCR contents into a text file for reading:
[grid]$ ocrdump filename_with_limited_results.txt
[root]# ocrdump filename_with_full_results.txt

- To dump the OCR contents for a specific key:
# ocrdump -keyname SYSTEM.language

- To dump the OCR contents to stdout in XML format:
# ocrdump -stdout -xml

- To dump the contents of an OCR backup file:
# ocrdump -backupfile week.ocr

- If the ocrdump command is issued without any options, the default file name of OCRDUMPFILE will be written to the current directory, provided that the directory is writable.

- To determine all the changes that have occurred in the OCR over the previous week, locate the automatic backup from the previous week and compare it to a dump of the current OCR as follows:

# ocrdump
# ocrdump -stdout -backupfile week.ocr | diff - OCRDUMPFILE

Checking the Integrity of the OCR:

- Use the ocrcheck command to check OCR integrity.
$ ocrcheck
Status of Oracle Cluster Registry is as follows :
    Version                  :          2
    Total space (kbytes)     :     275980
    Used space (kbytes)      :       2824
    Available space (kbytes) :     273156
    ID                       : 1274772838
    Device/File Name         :  +DATA1
                        Device/File integrity check succeeded
    Device/File Name         :  +DATA2
                        Device/File integrity check succeeded
    Cluster registry integrity check succeeded
  Logical corruption check succeeded

OCR-Related Tools for Debugging:

OCR tools:
 - ocrdump
 - ocrconfig
 - ocrcheck
 - srvctl
Logs are generated in the following directory:
<Grid_Home>/log/<hostname>/client/
Debugging is controlled through the following file:
<Grid_Home>/srvm/admin/ocrlog.ini

- These utilities create log files in <Grid_Home>/log/<hostname>/client/. To change the amount of logging, edit the <Grid_Home>/srvm/admin/ocrlog.ini file.

- The default logging level, mesg_logging_level, is 0, which means minimal logging: only error conditions are logged. You can change this setting to 3 or 5 for more detailed logging information.
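As a sketch (assuming the simple key = value entries described above; check the existing contents of the file before editing), raising the level and confirming it could look like:

$ grep mesg_logging_level <Grid_Home>/srvm/admin/ocrlog.ini
mesg_logging_level = 5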

expdp or exp were failing with ORA-39126 SYS.KUPW$WORKER

expdp and exp were failing with the errors below:

ORA-39126: Worker unexpected fatal error in KUPW$WORKER.FETCH_XML_OBJECTS [PROCACT_SCHEMA:"OPS$ORACLE"]
ORA-19051: Cannot use fast path insert for this XMLType table
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 9001
----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
70000007befb878     20462  package body SYS.KUPW$WORKER
70000007befb878      9028  package body SYS.KUPW$WORKER
70000007befb878     10935  package body SYS.KUPW$WORKER
70000007befb878      2728  package body SYS.KUPW$WORKER
70000007befb878      9697  package body SYS.KUPW$WORKER
70000007befb878      1775  package body SYS.KUPW$WORKER
700000072fa8930         2  anonymous block

Reason:
When the 11g database was started, LD_LIBRARY_PATH and LIBPATH were pointing to the 10g home. (This can happen to any database if, at startup time, these library variables do not point to that database's ORACLE_HOME.)

- In my case the server has both 11g and 10g Oracle homes, and at the time of the 11g database startup the library variables were pointing to 10g.

Fix:
I set the variables below explicitly and restarted the database, which fixed the issue:
ushux37$ export LIBPATH=/u01/app/oracle/product/11.2.0/lib:/u01/app/oracle/product/11.2.0/lib32
ushux37$ export LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0/lib:/u01/app/oracle/product/11.2.0/network/lib

Setting the parameter "OPTIMIZER_INDEX_COST_ADJ" in 11g

- We migrated one of our critical production databases from 10g to 11g (11.2.0) successfully, and the application team confirmed that everything was working as expected. After about a week, the application team reported that a few SQL statements (large SELECTs) were running very slowly: in 10g they completed in 2.5 minutes, but in 11g they did not finish even after 10 minutes.

- We started working on this issue and found the cause: the Oracle optimizer parameter "OPTIMIZER_INDEX_COST_ADJ", which by default has a value of 100 in 11g.

- This value means that the optimizer considers a full table scan to have the same cost as
  using an index. Consequently, queries were not using the indexes at all and were getting slower
  and slower as the tables became larger.

- We tested "OPTIMIZER_INDEX_COST_ADJ" with different values (20, 15, and 10) and observed a massive performance improvement with the value 10: the query that completed in 2.5 minutes in 10g finished in just 28 seconds in 11g after setting this parameter to 10.

Note: The default for this parameter is 100 percent, at which the optimizer evaluates index access paths at the regular cost. Any other value makes the optimizer evaluate the access path at that percentage of the regular cost. For example, a setting of 50 makes the index access path look half as expensive as normal.
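For reference, a sketch of how the change could be applied dynamically (the value 10 is simply what worked in this case; the SCOPE/SID clauses assume an SPFILE in a RAC environment, so test in your own setup first):

SQL> ALTER SYSTEM SET optimizer_index_cost_adj = 10 SCOPE=BOTH SID='*';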

11gR2 ASM Commands and Details

ASM
ASM Parameters:
==> There are three groupings of parameters for an ASM instance, listed below.
- INSTANCE_TYPE=ASM is the only mandatory parameter setting.
- There are a number of ASM-specific parameters; these have names starting with ASM_.
- Some database parameters are valid for ASM.
  For example, MEMORY_TARGET


ASM_DISKGROUPS:
- ASM_DISKGROUPS specifies a list of disk group names that ASM automatically mounts at instance startup.
- It is automatically modified when disk groups are added, deleted, mounted, or unmounted if an SPFILE is used.
- It must be manually adjusted if a PFILE is used (except when ASMCA is used to create a new disk group).

Disk Groups Mounted at ASM startup:

==> At startup, the Oracle ASM instance attempts to mount the following disk groups:
- Disk groups specified in the ASM_DISKGROUPS initialization parameter
- Disk group used by Cluster Synchronization Services (CSS) for voting files
- Disk groups used by Oracle Clusterware for the Oracle Cluster Registry (OCR)
- Disk group used by the Oracle ASM instance to store the ASM server parameter file (SPFILE)

ASM_POWER_LIMIT:

- The ASM_POWER_LIMIT initialization parameter specifies the default power for disk rebalancing.
 - The default is 1, meaning ASM conducts rebalancing operations using minimal system resources.
- Allowable range is 0 to 11.
- 0 disables rebalancing operations.
- Lower values use fewer system resources but result in slower rebalancing operations.
- Higher values use more system resources to achieve faster rebalancing operations.
- It can be set dynamically using ALTER SYSTEM or ALTER SESSION.
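For example, a sketch of raising the default and of running a one-off rebalance at a higher power (the disk group name DATA is an assumption):

SQL> ALTER SYSTEM SET asm_power_limit = 4;
SQL> ALTER DISKGROUP data REBALANCE POWER 8;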

CLUSTER_DATABASE:

- CLUSTER_DATABASE specifies whether or not storage clustering is enabled.
- Set CLUSTER_DATABASE = TRUE for multiple ASM instances to access the same ASM disks concurrently.
- Clustered ASM can support clustered (RAC) and single-instance (non-RAC) databases.

MEMORY_TARGET:

- MEMORY_TARGET specifies the total memory used by an ASM instance.
- Oracle strongly recommends that you use automatic memory management for ASM.
- All other memory-related instance parameters are automatically adjusted based on MEMORY_TARGET.
- The default value of 272 MB is suitable for most environments. It can be increased dynamically using ALTER SYSTEM.

Adjusting ASM Instance Parameters in SPFILEs:

- The server parameter file (SPFILE) is a binary file that cannot be edited using a text editor.
- Use Oracle Enterprise Manager or the ALTER SYSTEM SQL command to adjust ASM instance parameter settings in an SPFILE.
- For example, to adjust your SPFILE so that your ASM environment discovers only Oracle ASMLib disks, you could execute:
SQL> ALTER SYSTEM SET ASM_DISKSTRING='ORCL:*' SID='*' SCOPE=SPFILE;

Note: In a clustered ASM environment, SPFILEs should reside in ASM or on a cluster file system.

Starting and Stopping ASM Instances by Using srvctl:

- The Server Control utility (srvctl) can be used to start and stop ASM instances.
One node at a time:
$ srvctl start asm -n host01
$ srvctl start asm -n host02
$ srvctl status asm -n host01
ASM is running on host01.
$ srvctl status asm -n host02
ASM is running on host02.

All nodes simultaneously:
$ srvctl stop asm
$ srvctl status asm -n host01
ASM is not running on host01.
$ srvctl status asm
ASM is not running on host01,host02.

Starting and Stopping ASM Instances by Using ASMCA and ASMCMD:

- The ASMCA utility allows you to start and stop an ASM instance. The ASMCMD utility also includes the ability to start and stop ASM instances with the following commands:
$ asmcmd
ASMCMD [+] > shutdown
ASMCMD [+] > shutdown --immediate
ASMCMD [+] > shutdown --abort
ASMCMD> startup --nomount --pfile asm_init.ora
ASMCMD> startup --mount

Note: Oracle Clusterware is a client of ASM when the OCR files and voting files are in ASM disk groups. Stopping the Oracle Clusterware services includes stopping ASM.

Starting and Stopping the ASM Listeners:

Using the lsnrctl utility:
$ lsnrctl start listener
LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 08-OCT-2009 22:44:22
Copyright (c) 1991, 2009, Oracle.  All rights reserved.
Starting /u01/app/11.2.0/grid/bin/tnslsnr: please wait...
... Intermediate output removed ...
The command completed successfully
$

Using the srvctl utility:

$ srvctl start listener -n host01
$

ASM Dynamic Performance Views:

- The ASM instance hosts memory-based metadata tables presented as dynamic performance views.
- These views are accessed by ASM utilities to retrieve metadata-only information using SQL.
- The ASM instance provides many dedicated ASM-related views, such as:
V$ASM_ALIAS
V$ASM_ATTRIBUTE
V$ASM_CLIENT
V$ASM_DISK
V$ASM_DISK_IOSTAT
V$ASM_DISK_STAT
V$ASM_DISKGROUP
V$ASM_DISKGROUP_STAT
V$ASM_FILE
V$ASM_OPERATION
V$ASM_TEMPLATE
V$ASM_ACFSVOLUME
V$ASM_FILESYSTEM
Note: The V$ASM_* views exist in both ASM and database instances. The rows returned will vary.


- Following is a typical example of a query that retrieves information about disks in a particular disk group:


SQL> SELECT G.NAME DISK_GROUP, D.NAME, D.STATE, D.TOTAL_MB,
2 D.FREE_MB
3 FROM V$ASM_DISK D, V$ASM_DISKGROUP G
4 WHERE D.GROUP_NUMBER = G.GROUP_NUMBER
5 AND G.NAME = 'DATA';
DISK_GROUP NAME STATE TOTAL_MB FREE_MB
---------- ------ -------- ---------- ----------
DATA SDE5 NORMAL 977 136
DATA SDE6 NORMAL 977 134
DATA SDE7 NORMAL 977 135
DATA SDE10 NORMAL 977 150
DATA SDE8 NORMAL 977 150
DATA SDE9 NORMAL 977 150

Tracing Remote Session and tkprof

Methods to set Event 10046:
==========================
Method 1 :
syntax: exec dbms_system.set_ev(l_sid, l_serial#, 10046, l_level, '');
SQL > execute dbms_system.set_ev(10,224,10046,8,'');

Method 2:
SQL> oradebug unlimit
SQL> oradebug setospid 10927
SQL> oradebug event 10046 trace name context forever, level 8                 ------- this turns tracing on
SQL> oradebug event 10046 trace name context off                              ------- this turns tracing off
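To find the trace file that oradebug is writing for the attached process, the following can be run while still attached:
SQL> oradebug tracefile_name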

TKPROF:
tkprof clndv10_ora_21234002.trc clndv10_ora_21234002.txt table=system.plan_table explain=system/manager sys=no waits=yes sort=prsela, exeela, fchela
Note: in tkprof, sys=no means it will not show the SQL statements that belong to the SYS user.

Generating DDL from an export dump file or checking whether an expdp dump file is corrupted

- We can generate DDL from an export dump file using the command below; it will not import any data into the database, it only generates the DDL and saves it to a file.
- What does the sqlfile option do?
- With the sqlfile option, the import does not load any data into the database.
- It writes all DDL statements (which would be executed if an import were performed) into the file named in the command.
- Because this reads the entire Data Pump export dump file, it will also report if any corruption is detected.
 nohup impdp / dumpfile=expdp_hsdv04_maximo7_2Jul2014.dmp logfile=Impdp_maximo7_into_hsdv01_07Jul14.log schemas=MAXIMO7 directory=dumpdir REMAP_TABLESPACE=MAXTEMP:TEMP sqlfile=Impdp_dump_file_corruption_check.sql &