1. Checking CRS status:
The two commands below are generally used to check the status of CRS on the local node and on all nodes of the cluster.
crsctl check crs ==> checks the status of the clusterware stack on the local node.
# pwd
/u01/app/11.2.0.3/grid/bin
21:34:56 root@its0003: /u01/app/11.2.0.3/grid/bin
# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
21:35:12 root@its0003: /u01/app/11.2.0.3/grid/bin
#
crsctl check cluster ==> checks the status of the cluster services (CRS, CSS, and EVM).
21:35:12 root@its0003: /u01/app/11.2.0.3/grid/bin
# ./crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
21:39:57 root@its0003: /u01/app/11.2.0.3/grid/bin
#
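To check every node of the cluster in one shot, crsctl check cluster also accepts the -all option (or -n <node> for a specific node); a minimal sketch, output omitted:
# ./crsctl check cluster -all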
2. Viewing Cluster Name:
Below are a few ways to get the name of the cluster.
cemutlo -n ==> displays the name of the cluster.
# ./cemutlo -n
itsrac01
22:01:01 root@its0003: /u01/app/11.2.0.3/grid/bin
#
or
Oracle creates a directory with the name of the cluster under $ORA_CRS_HOME/cdata, so you can get the name of the cluster from this directory as well.
22:06:00 root@its0003: /u01/app/11.2.0.3/grid/cdata
# ls
its0003  its0003.olr  itsrac01  localhost
22:06:18 root@its0003: /u01/app/11.2.0.3/grid/cdata
#
or
olsnodes -c ==> displays the name of the cluster.
# ./olsnodes -c
itsrac01
06:28:57 root@its0003: /u01/app/11.2.0.3/grid/bin
#
3. Viewing the Number of Nodes Configured in the Cluster:
The command below displays the nodes registered in the cluster along with their node numbers and status. It can display other information as well; see the usage details below.
olsnodes -n -s
# ./olsnodes -n -s
its0001 1       Active
its0002 2       Active
its0003 3       Active
06:25:53 root@its0003:
Usage: olsnodes [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] | [-c] ] [-g] [-v]
        where
                -n       print node number with the node name
                -p       print private interconnect address for the local node
                -i       print virtual IP address with the node name
                <node>   print information for the specified node
                -l       print information for the local node
                -s       print node status - active or inactive
                -t       print node type - pinned or unpinned
                -g       turn on logging
                -v       Run in debug mode; use at direction of Oracle Support only.
                -c       print clusterware name
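For a quick count of active nodes you can pipe olsnodes through standard shell tools; a minimal sketch that simply counts the lines whose status column reads "Active":
# ./olsnodes -n -s | grep -c Active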
4. Votedisk Information:
The command below displays the voting disks configured in the cluster.
crsctl query css votedisk
# ./crsctl query css votedisk
##  STATE    File Universal Id                   File Name      Disk group
--  -----    -----------------                   ---------      ----------
 1. ONLINE   039f2497bfbf4f63bfb6ba0455c69921    (ORCL:OCR02)   [OCR_VOTE]
Located 1 voting disk(s).
#
- Check the ocssd.log file for voting disk issues:
$ grep voting <grid_home>/log/<hostname>/cssd/ocssd.log
5. Viewing OCR Disk Information:
The command below displays the number of OCR files configured in the cluster, along with the OCR version and storage usage information.
A minimum of 1 and a maximum of 5 copies of the OCR are possible. We need to run this command as the root user; if we run it as the oracle user we get the message "logical corruption check bypassed due to non-privileged user".
- Use the cluvfy utility or the ocrcheck command to check the integrity of the OCR.
# cluvfy comp ocr -n all -verbose
or
ocrcheck
# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       5320
         Available space (kbytes) :     256800
         ID                       :  828879957
         Device/File Name         :  +OCR_VOTE
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

06:42:03 root@its0003: /u01/app/11.2.0.3/grid/bin
#
or
- To determine the location of the OCR:
$ cat /etc/oracle/ocr.loc
ocrconfig_loc=+DATA
local_only=FALSE
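Assuming your release's ocrcheck supports the -config option, that also reports the configured OCR location without running the full integrity checks; a quick sketch:
# ./ocrcheck -config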
6. Various Timeout Settings in the Cluster:
Disktimeout:
Disk latency in seconds from node to votedisk. Default value is 200. (Disk I/O)
Misscount:
Network latency in seconds from node to node (interconnect). Default value is 60 seconds on Linux and 30 seconds on other Unix platforms. (Network I/O)
Misscount < Disktimeout
NOTE: Do not change these values without contacting Oracle Support. Doing so may cause logical corruption of the data.
IF (Disk IO Time > Disktimeout) OR (Network IO time > Misscount)
THEN
    REBOOT NODE
ELSE
    DO NOT REBOOT
END IF;
crsctl get css disktimeout
crsctl get css misscount
crsctl get css reboottime
Disktimeout:
# ./crsctl get css disktimeout
CRS-4678: Successful get disktimeout 200 for Cluster Synchronization Services.
06:48:54 root@its0003: /u01/app/11.2.0.3/grid/bin
#
Misscount:
# ./crsctl get css misscount
CRS-4678: Successful get misscount 60 for Cluster Synchronization Services.
06:49:26 root@its0003: /u01/app/11.2.0.3/grid/bin
#
- You can change the misscount value as shown below.
# ./crsctl set css misscount 80
Configuration parameter misscount is now set to 80
# ./crsctl get css misscount
80
- Setting the value of misscount back to its default value:
crsctl unset css misscount
# ./crsctl unset css misscount
Configuration parameter misscount is reset to default operation value.
# ./crsctl get css misscount
60
Reboottime:
# ./crsctl get css reboottime
3
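To capture all three CSS settings in one pass (for example before and after a change window), a simple shell loop over the same commands is enough; a minimal sketch run as root from the Grid home bin directory:
# for p in misscount disktimeout reboottime; do ./crsctl get css $p; done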
7. OCR and Voting Disk Info:
OCR: It is created at the time of Grid installation. It stores the information needed to manage Oracle Clusterware and its components, such as the RAC database, listener, VIP, SCAN IP and services.
A minimum of 1 and a maximum of 5 copies of the OCR are possible.
Voting Disk: It manages information about node membership. Each voting disk must be accessible by all nodes in the cluster. If any node fails to pass a heartbeat to the other nodes or to the voting disk, that node is evicted by the voting disk.
A minimum of 1 and a maximum of 15 copies of the voting disk are possible.
New Facts in 11gR2:
• We can store the OCR and voting disk on ASM or on a certified cluster file system.
• We can dynamically add or replace voting disks and the OCR.
• Backing up the voting disk using the "dd" command is not supported.
• The voting disk and OCR can be kept in the same disk group or in different disk groups.
• The voting disk and OCR automatic backups are kept together in a single file.
• Automatic backups of the voting disk and OCR happen every four hours, at the end of the day, and at the end of the week.
• You must have a root or sudo-privileged account to manage them.
OCR and Voting Disk Backup:
In 11g Release 2 you no longer have to take a separate backup of the voting disks, as they are included in all OCR backups (automatic and manual).
- OCR backups are made to the GRID_HOME/cdata/<cluster name> directory on the node performing the backups. These backups are named as follows:
4-hour backups (3 max) – backup00.ocr, backup01.ocr, and backup02.ocr
Daily backups (2 max) – day.ocr and day_.ocr
Weekly backups (2 max) – week.ocr and week_.ocr
- Note that RMAN does not back up the OCR.
- You can use the ocrconfig command to view the current OCR backups. Check the automatic backups of the OCR using the command below:
# ./ocrconfig -showbackup auto

its0002     2014/06/04 05:43:16     /u01/app/11.2.0.3/grid/cdata/itsrac01/backup00.ocr
its0002     2014/06/04 01:43:14     /u01/app/11.2.0.3/grid/cdata/itsrac01/backup01.ocr
its0002     2014/06/03 21:43:14     /u01/app/11.2.0.3/grid/cdata/itsrac01/backup02.ocr
its0002     2014/06/02 09:43:07     /u01/app/11.2.0.3/grid/cdata/itsrac01/day.ocr
its0002     2014/05/22 09:42:21     /u01/app/11.2.0.3/grid/cdata/itsrac01/week.ocr

09:30:05 root@its0003: /u01/app/11.2.0.3/grid/bin
#
- One thing to be aware of is that if your cluster is shut down, the automatic backups will not occur (nor will the purging of old backups).
- If you feel that you need to back up the OCR immediately (for example, you have made a number of cluster-related changes), then you can use the ocrconfig command to perform a manual backup:
ocrconfig -manualbackup
- You can list the manual backups with the ocrconfig command too:
ocrconfig -showbackup manual
# ./ocrconfig -showbackup manual

its0002     2013/07/03 19:31:44     /u01/app/11.2.0.3/grid/cdata/itsrac01/backup_20130703_193144.ocr
its0002     2013/07/01 15:52:04     /u01/app/11.2.0.3/grid/cdata/itsrac01/backup_20130701_155204.ocr

09:34:24 root@itsolx0003: /u01/app/11.2.0.3/grid/bin
#
- ocrconfig also supports the creation of a logical backup of the OCR, as seen here:
ocrconfig -export /tmp/ocr.exp
- It is recommended that the OCR backup location be on a shared file system and that the cluster be configured to write the backups to that file system. To change the location of the OCR backups, you can use the ocrconfig command as seen in this example:
ocrconfig -backuploc /u01/app/oracle/ocrloc
- Note that the ASM Cluster File System (ACFS) does not support storage of OCR backups.
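Logical exports are not scheduled automatically, so one option is to drive ocrconfig -export from root's crontab on one node; a minimal sketch, where /backup/ocr is a hypothetical shared location (the % characters must be escaped in a crontab entry):
# root crontab entry - daily logical export of the OCR at 01:00
0 1 * * * /u01/app/11.2.0.3/grid/bin/ocrconfig -export /backup/ocr/ocr_$(date +\%Y\%m\%d).exp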
Add/Remove Votedisks:
- To add or remove voting disks on non-Automatic Storage Management (ASM) storage, use the following commands:
# crsctl delete css votedisk path_to_voting_disk
# crsctl add css votedisk path_to_voting_disk
- To add a voting disk to ASM:
# crsctl replace votedisk +asm_disk_group
- Use the crsctl replace votedisk command to replace a voting disk on ASM. You do not have to delete any voting disks from ASM before using this command.
Restoring the OCR:
- If you back it up, there might come a time when you need to restore it. Recovering the OCR from the physical backups is fairly straightforward; just follow these steps:
1. Locate the OCR backup using the ocrconfig command.
ocrconfig -showbackup
2. Stop Oracle Clusterware on all nodes.
crsctl stop cluster -all
3. Stop CRS on all nodes.
crsctl stop crs ----> stops CRS only on the node where it is executed, so run it on each node.
4. Restore the OCR backup (physical) with the ocrconfig command.
ocrconfig -restore {path_to_backup/backup_file_to_restore}
5. Restart CRS.
crsctl start crs
6. Check the integrity of the newly restored OCR:
cluvfy comp ocr -n all
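Put together, a physical restore run as root from the Grid home bin directory might look like the sketch below; the backup file shown is just the latest automatic backup from the listing above, so substitute your own:
# ./ocrconfig -showbackup
# ./crsctl stop cluster -all
# ./crsctl stop crs                (run on every node)
# ./ocrconfig -restore /u01/app/11.2.0.3/grid/cdata/itsrac01/backup00.ocr
# ./crsctl start crs               (run on every node)
# ./cluvfy comp ocr -n all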
You can also restore the OCR using a logical backup, as seen here:
1. Locate your logical backup.
2. Stop Oracle Clusterware on all nodes.
crsctl stop cluster -all
3. Stop CRS on all nodes.
crsctl stop crs
4. Restore the OCR backup (logical) with the ocrconfig command.
ocrconfig -import /tmp/export_file.fil
5. Restart CRS.
crsctl start crs
6. Check the integrity of the newly restored OCR:
cluvfy comp ocr -n all
- If you are upgrading to Oracle Database 11g, you can migrate your voting disks to ASM easily with the crsctl replace votedisk command.
- You can also use the crsctl query command to locate the voting disks, as seen in this example:
crsctl query css votedisk
- You can also migrate voting disks between NAS and ASM, or from ASM to NAS, using the crsctl replace votedisk command.
To check the clusterware version:
$ crsctl query crs activeversion
Oracle Clusterware active version on cluster is [11.2.0.1.0]
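The active version reported above reflects the lowest version running in the cluster; to see the software version installed on a particular node you can also run the sketch below (node name taken from the examples above):
$ crsctl query crs softwareversion its0003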