Monday, August 5, 2019

Tech Topics




CALLS_TO_KCMGAS     -      http://www.dba-oracle.com/m_calls_to_kcmgas.htm

Huge Pages

Without huge pages, the Linux kernel manages the SGA of the Oracle database in 4 KB pages. For large SGA sizes, the number of pages the kernel has to track becomes huge, bloating the page tables and TLB. With the huge pages feature, the Linux kernel can use 2 MB pages (up to 1 GB), which shrinks the page tables and improves performance.
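
A quick sketch of checking and sizing huge pages (the page count of 2048 below is only illustrative; Oracle's hugepages_settings.sh script from MOS computes the exact value for a running instance):

$ grep -i hugepages /proc/meminfo
$ sysctl -w vm.nr_hugepages=2048     # persist the setting in /etc/sysctl.conf

From 11.2.0.2 onwards, the use_large_pages=ONLY init parameter can be used to force the SGA to be allocated from huge pages.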

TCP Vs UDP

Transmission Control Protocol sends packets to the recipient, waits for an acknowledgement, and only then sends further packets. The communication is connection-oriented and ensures the recipient has received the packets fully.
User Datagram Protocol doesn't wait for packet acknowledgements; the client keeps sending packets to the recipient. This reduces transmission overhead and is used for broadcasts.


Relink sqlplus or Oracle binaries
Relinking is the process of rebuilding the Oracle executables after an OS-level patch or upgrade changes libraries on the host. Relinking is done by running the relink command, which internally runs make commands; make can be run manually as well. Each product's lib directory holds its makefile, e.g. $ORACLE_HOME/rdbms/lib/ins_rdbms.mk; for sqlplus, run make -f ins_sqlplus.mk install from $ORACLE_HOME/sqlplus/lib.
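
A minimal sketch of both approaches:

$ cd $ORACLE_HOME/bin
$ relink all                        # rebuilds all executables; log is written under $ORACLE_HOME/install
# or manually, for a single product such as sqlplus:
$ cd $ORACLE_HOME/sqlplus/lib
$ make -f ins_sqlplus.mk install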

Enabling Trace - Different Ways

alter session set events '10046 trace name context forever, level 12';
alter session set events '10046 trace name context off';

Another method is to trace using ORADEBUG commands
SQL> ORADEBUG SETMYPID; -- Debug current session.
SQL> ORADEBUG SETOSPID 1234; -- Debug session with the specified OS process.
SQL> ORADEBUG SETORAPID 123456; -- Debug session with the specified Oracle process ID.

SQL> ORADEBUG EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12;
SQL> ORADEBUG TRACEFILE_NAME; -- Display the current trace file.
SQL> ORADEBUG EVENT 10046 TRACE NAME CONTEXT OFF;
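
DBMS_MONITOR (10g onwards) is yet another way to get the same level 12 style trace; the sid/serial# values below are placeholders:

SQL> exec dbms_monitor.session_trace_enable(session_id => 1234, serial_num => 56789, waits => TRUE, binds => TRUE);
SQL> exec dbms_monitor.session_trace_disable(session_id => 1234, serial_num => 56789);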

Indexes
6 types of indexes
BTree - the default balanced-tree index
Compressed BTree - B-tree with key compression to avoid storing repeated leading key values
Bitmap - stores a bitmap per key value; suited to low-cardinality columns and data-warehouse workloads
Reverse BTree - key bytes are reversed to spread sequential inserts across leaf blocks
Function-based - indexes the result of an expression or function
Index Organized Tables - the entire table is stored in a B-tree structure, ordered by primary key


PSU Patches
dba_registry_sqlpatch shows the list of all SQL patches applied
dba_registry_history records upgrade, downgrade and patch actions
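
For example (12c; the column lists are assumed from the standard dictionary definitions):

SQL> select patch_id, action, status, action_time from dba_registry_sqlpatch order by action_time;
SQL> select action_time, action, version, comments from dba_registry_history order by action_time;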

Checking whether Database initialization parameter is static or dynamic
v$parameter can be queried; the ISSYS_MODIFIABLE column identifies it: IMMEDIATE or DEFERRED means the parameter is dynamic, FALSE means a restart is needed.
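
For example:

SQL> select name, value, issys_modifiable from v$parameter where name in ('sga_target','processes');
-- sga_target returns IMMEDIATE (dynamic); processes returns FALSE (restart needed)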

Oracle Multi-tenant Architecture

Oracle Management Database
This is an Oracle database (MGMTDB) which, from 12c onwards, replaced the Berkeley DB repository in the clusterware and is used to store Grid-specific information such as cluster health information.

Row Chaining and Row Migration
Row chaining - when a row is too large to fit in a single block, its contents are spread across multiple blocks, each piece pointing to the next, resulting in row chaining.
Row Migration - when an update grows a row beyond the free space in its current block, the entire row moves to another block and the original block keeps a pointer to the new location (the rowid does not change).


OPatchAuto

OPatchAuto allows application of multiple database patches placed under a common directory pointed to by the baseDir parameter

Check whether a patch is rolling or not
opatch query -is_rolling_patch

OPatch conflict check when the DB is down
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir

GUARANTEED RESTORE POINTS
Guaranteed restore points allow the database to flash back to the SCN at which the restore point was created, irrespective of the value of the db_flashback_retention_target parameter: Oracle retains enough flashback logs to revert to that SCN. We can then use FLASHBACK DATABASE TO RESTORE POINT, or RECOVER DATABASE UNTIL, to go back.
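
A minimal sketch (the restore point name is illustrative):

SQL> create restore point before_upgrade guarantee flashback database;
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to restore point before_upgrade;
SQL> alter database open resetlogs;
SQL> drop restore point before_upgrade;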


BLOCK CHANGE TRACKING

Block change tracking process is used to identify the blocks which have changes since the last incremental backup.

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

select status from V$BLOCK_CHANGE_TRACKING;
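
The tracking file destination can also be specified explicitly (the path below is illustrative), and v$block_change_tracking shows the file and its size:

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/oradata/PROD/bct.chg' REUSE;
select filename, status, bytes from V$BLOCK_CHANGE_TRACKING;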

The size of the change tracking file is proportional to the size of the database and the number of enabled threads of redo. The size is not related to the frequency of updates to the database.

Typically, the space required for block change tracking is approximately 1/30,000 of the size of the data blocks to be tracked. The following two factors may cause the file to be larger than this estimate suggests:
  • To avoid overhead of allocating space as database grows, the change tracking file size starts at 10MB, and new space is allocated in 10MB increments. Thus, for any database up to approximately 300GB the file size is no smaller than 10MB, for up to approximately 600GB the file size is no smaller than 20MB, and so on.
  • For each datafile, a minimum of 320K of space is allocated in the change tracking file, regardless of the size of the file. Thus, if you have a large number of relatively small datafiles, the change tracking file is larger than for databases with a smaller number of larger datafiles containing the same data.

OBSOLETE BACKUP VS EXPIRED BACKUP
RMAN marks a backup as expired if the backup piece or file is not present on the backup tape or location. 
RMAN marks a backup as obsolete if the backup is beyond the retention policy defined for the backups (As specified in RMAN CONFIGURE)


OCR BACKUP

$ sudo ocrconfig -export /oracle/export/CRS/OCR_Backup -s online

LIST THE AVAILABLE OCR DISKS

$ ls -ltr /dev/mapper/lbcdt_201-202*ocr*
brw-rw---- 1 oracle dba 253, 135 Jan 17 06:37 /dev/mapper/lbcdt_201-202_ocrmirr_003_1000M
brw-rw---- 1 root   dba 253, 283 Jan 17 06:37 /dev/mapper/lbcdt_201-202_ocrmirr_002_1000M
brw-rw---- 1 root   dba 253, 308 Jan 17 06:37 /dev/mapper/lbcdt_201-202_ocr_001_1000M
$

Note: since this is 10g CRS, a maximum of 1 OCR mirror disk is allowed. Starting with version 11.2, we can have a maximum of 4 OCR mirror disks (5 OCR locations in total).

CHECK VOTING DISK CONFIGURATION

$ sudo crsctl query css votedisk
0.     0    /dev/mapper/lbcdt_201-202_vote_003_1000M
1.     0    /dev/mapper/lbcdt_201-202_votemirr_004_1000M

TAKE VOTING DISK BACKUP

Use dd commands (10g/11.1 only; from 11.2 onwards the voting disk contents are backed up automatically along with the OCR, and manual dd backups are no longer supported).
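
For example, using one of the disks listed above (the backup location is illustrative):

$ dd if=/dev/mapper/lbcdt_201-202_vote_003_1000M of=/backup/votedisk_003.bak bs=4k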

CRS Integrity Check
===================

Go to CRS_HOME/bin, e.g. /oracle/product/crs_home/bin

./crsctl check crs
./olsnodes -n
./ocrcheck

LIST THE AVAILABLE VOTING DISK

$ ls -ltr /dev/mapper/lbcdt_201-202*vote*

ADDING ADDITIONAL VOTING DISK

$ sudo crsctl add css votedisk /dev/mapper/lbcdt_201-202_votemirr_005_1000M -force
Now formatting voting disk: /dev/mapper/lbcdt_201-202_votemirr_005_1000M
successful addition of votedisk /dev/mapper/lbcdt_201-202_votemirr_005_1000M.
$ sudo crsctl query css votedisk
0.     0    /dev/mapper/lbcdt_201-202_vote_003_1000M
1.     0    /dev/mapper/lbcdt_201-202_votemirr_004_1000M
2.     0    /dev/mapper/lbcdt_201-202_votemirr_005_1000M

located 3 votedisk(s).

ASM


Key initialization parameters for an ASM instance are PROCESSES, ASM_DISKSTRING, ASM_DISKGROUPS and INSTANCE_TYPE (INSTANCE_TYPE=ASM is the only strictly mandatory one).
The ASM software version should always be equal to or higher than the database version.

Striping - a process to balance load across all disks in a diskgroup and to reduce I/O latency
Coarse - stripe size of 1 MB. Not used for control files, redo logs and flashback files.
Fine - stripe size of 128 KB. Used for control files, redo logs and flashback files.

Mirroring - storing data at multiple places to provide protection from failure
Normal - 1 additional copy of each extent
High - 2 additional copies of each extent
External - Third party mirroring 

ASMLib is an optional utility to manage ASM devices

CRS Startup with ASM

  1. OHASD starts
  2. OHASD refers to the OLR (Oracle Local Registry) stored on the local filesystem to look for the conf files to start.
  3. OHASD brings up GPNPD and CSSD
  4. CSSD accesses the GPnP profile stored on the local filesystem to get the bootstrap data:
    1. ASM diskstring - to identify the ASM disks that need to be scanned to fetch the further information required to start CRS
    2. ASM spfile location - name of the diskgroup containing the ASM spfile
    3. Voting disk location - name of the disk that holds the voting disk information
  5. CSSD scans the headers of all disks that hold voting disk information and, using the data in the disk headers, completes its initialization and joins the cluster.
  6. ASM starts up by reading its spfile directly from disk, parsing the disk header to locate the blocks that hold the spfile contents
  7. Once ASM is up, the diskgroups are mounted
  8. CRS is able to locate the OCR and gets started
  9. All Clusterware services are started after CRS starts

GPNP Profile is an xml file located under GRID_HOME/gpnp/hostname/profiles/peer/profile.xml used for storing spfile information, OCR location, ASM Diskstring etc.

kfed read <disk device> | grep -F "spfile" can be used to scan a disk header for the spfile location
The dd command can be used to dump the ASM spfile from the disk directly

RMAN


TARGET DATABASE - A Target Database is the primary database that will be backed up for standby database creation. In RMAN's terminology, the term target database identifies the database that is undergoing a backup, restore or recovery operation by Recovery Manager.

AUXILIARY DATABASE - An Auxiliary Database is a standby database that will be created as a result of duplicating the target database. In RMAN's terminology, the auxiliary instance is the instance to which RMAN connects in order to execute the DUPLICATE command.

CHANNEL - A Channel is a communication pipeline between a RMAN executable and a target or auxiliary database. A channel consists of a server session on the target or auxiliary database and a data stream from the database to the backup device or vice-versa. RMAN console sends commands to the database using this channel, and the server session running on the database executes the command on behalf of Recovery Manager. Some degree of parallelism during the backup or restore operation can be achieved using multiple channels.

AUTOMATIC CHANNEL ALLOCATION  - RMAN Channels can be configured to use a set of default attributes for each operation when a channel is not allocated manually. This set of channels is persistent and can be configured using the CONFIGURE command. When such a set of channels is pre-defined, it is called automatic channel allocation. By default, RMAN configures a channel of device type, DISK, to be used for automatic channel allocation.

MANUAL CHANNEL ALLOCATION   - As the name suggests, a channel can be configured manually for special needs such as increasing the degree of parallelism. Channels can be allocated manually by using the ALLOCATE CHANNEL command in the RUN block of RMAN statement.

DUPLICATE COMMAND - To restore a backup set or image copy, the DUPLICATE command can be used. The duplicate command always performs a complete restoration of the target database. Using this command, a database can be restored on the same host as the target database or on a remote host. A recovery catalog is not needed to use the duplicate command; it can be used with the control file of the target database serving as the repository.

BACKUP SET - Recovery Manager backs up the datafiles, control file, archived log files, and server parameter files in a RMAN specific format called backup pieces. A set of one or more such backup pieces makes up a backup set. A backup set is created using the BACKUP command.

IMAGE COPY - As opposed to the backup set, an image copy is not a RMAN specific format. It is a replica of an actual file. Image copies are created using the COPY command.

SET NEWNAME   - This command can be used to rename the data files to be restored to a new location. It is equivalent to the db_file_name_convert  parameter of the server initialization parameter file. Combination of the SET NEWNAME and SWITCH command is the equivalent of the ALTER DATABASE RENAME FILE statement. 

CONFIGURE AUXNAME   - CONFIGURE AUXNAME is equivalent to the SET NEWNAME  command, except that the CONFIGURE AUXNAME is persistent, whereas, the SET NEWNAME command must be used every time the DBA wants to rename a data file. It is necessary to connect to the recovery catalog in order to use the CONFIGURE AUXNAME command. 

DORECOVER   - When the DUPLICATE command is specified with the DORECOVER option, it starts recovery after restoration. The recovery is performed using all available archived redo logs and incremental backups. This is the recommended option while creating standby databases using recovery manager. This will save the extra step of recovering the standby database once it is created.

NOFILENAMECHECK  - It is an option for the DUPLICATE command. When NOFILENAMECHECK is used with the DUPLICATE command, RMAN does not validate the filenames during restoration. If the primary database and the standby database are on the same host, this option should not be used.


Backup commands:

Backup database;
Backup database plus archivelog;     -     Backup database and archivelogs
Backup as copy database;               -     Create Image copy of database
format     -     Used to specific the format for backup pieces
tag          -     Used to label the backup
backup incremental level 0 database;
backup incremental level 1 cumulative database;
backup incremental level 1 database;     - default is differential incremental
backup incremental level 1 for recover of copy with tag <tag> database;     - rolls forward an image copy taken with that tag (incrementally updated backup; see the script after this list)
backup for recover of copy datafile copy     -     incremental backup where RMAN automatically creates the datafile copy if it does not already exist
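
A typical incrementally updated backup script combining the two steps (the tag name is illustrative):

RUN {
  RECOVER COPY OF DATABASE WITH TAG 'incr_upd';
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_upd' DATABASE;
}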

Validate :

RMAN VALIDATE is used to check for corruption in datafiles.

v$database_block_corruption is populated with every corruption found
DB_BLOCK_CHECKSUM - when set to TYPICAL, the database computes and verifies a checksum for each block as it is written to disk.

backup .. validate is used to validate the datafiles that are about to be backed up, without actually taking the backup.

backup validate database archivelog all;
validate check logical database;
validate datafile 1;
validate backupset 3;

Validation can be run in parallel by passing the SECTION SIZE parameter: RMAN splits the datafile into sections of the given size and validates the sections in parallel (see the example below).
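
For example (11g syntax):

RMAN> VALIDATE DATAFILE 1 SECTION SIZE 500M;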

Crosscheck:

Crosscheck synchronizes the logical records of RMAN backups and copies with the files on the storage media. If a backup is on disk, RMAN checks the file header; if it is on tape, RMAN queries the media manager to confirm the backups and copies still exist. Crosscheck thus keeps the RMAN repository/recovery catalog updated with the current status of each file.

Crosscheck backup;
crosscheck copy;


RAC


Split brain scenario
This is the scenario where 2 or more nodes in a RAC cluster lose connectivity between themselves and start acting independently. To handle this scenario, Oracle clusterware uses the voting disks to decide which nodes will remain in the cluster and which will be evicted. Based on the voting result, OPROCD (OCSSD) fences the node which didn't acquire the voting disk. The heartbeat between cluster nodes is maintained by the LMON, LMS and LCK processes; when one of these processes receives an IPC error, eviction is performed.

Undo Management in RAC

In a RAC environment, each instance has its own undo tablespace, since transaction processing happens independently on each instance.

Log file sync
=============

Whenever a commit is issued, the process issuing it creates a redo record containing the commit opcodes and copies that record into the log buffer. It then signals LGWR to write the log buffer contents. LGWR writes from the log buffer to the log file and signals the user process back, completing the commit; the commit is considered successful only after the LGWR write succeeds.
After posting LGWR, the user or background process waits for LGWR to signal back, with a 1 second timeout. The user process charges this wait time to the 'log file sync' event.

Causes of log file sync waits include:
1. Disk I/O performance, since LGWR has to write to the log files
2. CPU resources being low
3. Memory being low

INI_TRANS & MAXTRANS
====================

select ini_trans,max_trans from dba_tables where owner='XXCSS_O' and table_name in ('XXCSS_NGSC_ES_OP_INFO');

INI_TRANS - Every database block holds a block header, and that header contains a small structure called the interested transaction list (ITL). Any session trying to update the database block has to take an entry in this list. This parameter specifies the initial number of ITL slots, i.e. the minimum number of transactions that can concurrently update the database block.
MAX_TRANS - This parameter specifies the maximum number of transactions that can concurrently update the database block.

GC buffer cache full Issues
===========================

When a session needs a block that is located in another instance's cache, the block has to be shipped across the interconnect into the local buffer cache. If any other session tries to access the same block while it is in transit, it has to wait; this wait event is 'gc buffer busy'.
v$waitstat holds information about the buffer busy wait events.

Run the below SQL to find the status of GC Buffer cache

cd /usr/tools/oracle/Standard/onCall/core
@sw

Run the below SQL to check the status of various wait events.
@event_like.sql

or

select a.sid,a.serial#,a.schemaname,a.username,b.event,a.status from v$session a,v$session_wait b where b.event='buffer busy waits' and a.sid=b.sid;

Latch free wait event
=====================

A latch is a fast lock obtained for a short duration. If a latch on an object is not available, the session sleeps for a short period and retries the operation later. The time it sleeps is charged to the "latch free" wait event.

v$latch holds information about all the latches. v$session_wait holds information about the waiting sessions.

P1 column in v$session_wait specifies the latch address and P2 refers to latch number.

select s.sid,s.event,s.state,l.wait_time,l.waits_holding_latch,l.name from v$session_wait s, v$latch l where l.latch#=s.P2

There are different types of latch free waits, along with some commonly seen related wait events. They are as follows:

cache buffers chains
--------------------

gcs remote message
------------------

row cache lock
--------------

jobq slave wait
---------------

PX Deq: reap credit
-------------------

SQL*Net message to client
-------------------------

DB File Scattered Read
======================

This wait event occurs when a process reads multiple contiguous blocks from a datafile in a single I/O. It happens mostly during full table scans.

DB File Sequential Read
=======================

This wait event occurs when a process reads a single block from disk, typically while accessing an index or fetching a table row by rowid.

Enqueues
========

This wait event occurs when a session tries to access an object which has been locked by another session. There are around 40 different types of enqueues. Some are described below:

TX - This enqueue is a transaction lock and is typically caused by incorrect application logic or table setup issues.
TM - This enqueue represents a DML lock and is generally due to application issues, particularly if foreign key constraints have not been indexed.
ST - When Oracle performs space management operations (such as allocating temporary segments for a sort, or allocating extents for a table), the user session waits on the 'ST' enqueue.

CF enqueue – The CF enqueue is a Control File enqueue (a.k.a. enq: CF – contention) and happens during parallel access to the control files.  The CF enqueue can be seen during any action that requires reading the control file, such as redo log archiving, redo log switches and begin backup commands.

CI Enqueue - The CI enqueue is the Cross Instance enqueue (a.k.a. enq: CI – contention) and happens when a session executes a cross-instance call such as a query over a database link.

FB enqueue – This is the Format Block enqueue, used only when data blocks are using ASSM (Automatic Segment Space Management, i.e. bitmapped freelists).  As we might expect, common FB enqueues relate to buffer busy conditions, especially since ASSM tends to cause performance problems under heavy DML load.

HV enqueue – The HV enqueue (a.k.a. enq: HV - contention) is similar to the HW enqueue but for parallel direct path INSERTs.

HW enqueue – The HW High Water enqueue (a.k.a. enq: HW – contention) occurs when competing processing are inserting into the same table and are trying to increase the high water mark of a table simultaneously. The HW enqueue can sometimes be removed by adding freelists or moving the segment to ASSM.

KO enqueue – The KO enqueue (a.k.a. enq: KO - fast object checkpoint) is seen in Oracle STAR transformations, and high KO enqueue waits can indicate a sub-optimal DBWR background process.

PE enqueue – The PE enqueue (a.k.a. enq: PE – contention) is the Parameter Enqueue, which happens after “alter system” or “alter session” statements.

PS enqueue – The PS enqueue is the Parallel Slave synchronization enqueue (a.k.a enq: PS – contention), which is only seen with Oracle parallel query.  The PS enqueue happens when pre-processing problems occur when allocating the factotum (slave) processes for OPQ.

RO Enqueue – The RO enqueue is the Reuse Object enqueue and is a cross-instance enqueue related to truncate table and drop table DDL operations.

SQ enqueue – The SQ enqueue is the Sequence Cache enqueue (a.k.a. enq: SQ – contention) is used to serialize access to Oracle sequences.

SS enqueue – The SS enqueue is the Sort Segment enqueue (a.k.a. enq: SS – contention); these occur when a process is performing a large sort operation.

ST enqueue – The ST enqueue can be seen in a partitioned environment when a large number of partitions are created simultaneously.

TC enqueue – The TC enqueue is related to the DBWR background process and occur when “alter tablespace” commands are issued.  You will also see the TC enqueue when doing parallel full-table scans where rows are accessed directly, without being loaded into the data buffer cache.

TM enqueue – The TM enqueue related to Transaction Management (a.k.a. enq: TM - contention) and can be seen when tables are explicitly locked with reorganization activities that require locking of a table.

TQ enqueue – The TQ enqueue is the Queue Table enqueue (a.k.a. enq: TQ - DDL contention) and happens during Data Pump (export/import) operations.

TS enqueue – The TS enqueue is the temporary segment enqueue (a.k.a. enq: TS – contention) and these enqueues happen during disk sort operations.

TT enqueue – The TT enqueue (a.k.a. enq: TT – contention) is used to avoid deadlocks in parallel tablespace operations.  The TT enqueue can be seen with parallel create tablespace and parallel point in time recovery (PITR)

TX enqueue – The TX enqueue is the transaction enqueue (a.k.a. enq: TX – contention) and is commonly related to buffer busy waits, in conditions where multiple transactions attempt to update the same data blocks.

enq: TX - row lock contention
enq: TX - allocate ITL entry

UL enqueue – The UL enqueue is a User Lock enqueue (a.k.a. enq: UL – contention) and happens when a lock is requested in dbms_lock.request.  The UL enqueue can be seen in Oracle Data Pump.

US Enqueue - The US enqueue happens with Oracle automatic undo management, when undo segments are moved online and offline.

LogFile Sync
============

The log file sync wait event is the time a foreground process spends waiting for its redo to be flushed. It can be assumed here that LGWR is always active and does not have to be woken up by the system dispatcher. The log file sync wait can be broken into:

The ‘redo write time’: this is the total elapsed time of the write from the redo log buffer to the current redo log file (in centiseconds).
The “log file parallel write” is basically the time for the log write I/O to complete

LGWR may have some post-processing to do, then signals the waiting foreground process that the write has completed. The foreground process is finally woken up by the system dispatcher. This completes the 'log file sync' wait.


CRS

crsd          -     Oracle clusterware process which is used to maintain the resources configured in cluster in OCR
ocssd        -     CSSD is responsible for cluster membership of nodes in the cluster. it is done through voting disk
oprocd      -     OProcd ensures that nodes behaving abnormally are fenced from the cluster. CSSDAgent in 11gR2
evmd        -     Event Manager Daemon takes care of publishing the events generated by CRSD
acms        -     Atomic Controlfile to Memory Service ensures that distributed SGA updates are committed on all instances.
gtx0         -     Global Transaction Process takes care of handling support for distributed (XA) transactions
lmon        -     Global Enqueue Service Monitor monitors the global resources and reconfigures GES/GCS in case of any transitions
lmd          -     Global Enqueue Service Daemon manages incoming enqueue requests from other instances. 
lms          -     Global Cache Service Process handles transfer of blocks in the cache fusion
lck0          -     Instance Enqueue Process manages non-cache-fusion requests such as row cache and library cache events.
RMSn       -     Oracle RAC Management Process manages creation of resources when instances join the cluster.
RSMN       -     Remote Slave Monitor handles creation of slave processes for coordination between other nodes.

FAN     -      Fast Application Notification is a feature of Oracle RAC that delivers notifications of cluster status changes to subscribers. Applications can be designed to listen for FAN events and react accordingly.

TAF     -       Transparent Application Failover is a feature of Oracle Call Interface (OCI) which allows applications to automatically reconnect if the connection to the database fails. There are 2 failover methods - BASIC and PRECONNECT. With BASIC, connections fail over to the secondary only when the primary connection fails. With PRECONNECT, the secondary connection is established at the same time as the primary; this reduces failover time because the backup connection already exists. FAILOVER TYPE=SESSION or SELECT defines what is failed over: with SESSION, only the session is re-established on a surviving node; with SELECT, open SELECT statements are also replayed and fetching resumes where it left off.
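
A sketch of a TAF-enabled tnsnames.ora entry (all names, hosts and the service are placeholders):

MYDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
      (LOAD_BALANCE = yes)
    )
    (CONNECT_DATA =
      (SERVICE_NAME = mysvc)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )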


FAILING-OVER VIP

11g CRS:
./crsctl relocate resource ora.service.vip -n host2

Data Guard

Standby Redo Logs and usage

Physical Standby
Logical Standby - v$logstdby, DBA_LOGSTDBY_* views
Snapshot Standby - 

Max Performance - Default protection mode. Redo is shipped to the standby asynchronously, so commits on the primary are not delayed.
Max Availability - Transactions don't commit until the redo is written to the online redo log and synchronously to at least one standby. If no standby can be written to synchronously, the primary continues to operate (effectively falling back to asynchronous shipping) until a synchronous standby write is possible again.
Max Protection - Transactions don't commit until the redo is written to the online redo logs and synchronously to at least one standby. If no standby can be written to synchronously, the primary database is shut down to protect the data.


select protection_mode from v$database;
alter database set standby database to maximize (availability | protection | performance);

Checking on inconsistent parameters
show database verbose '' 'InconsistentLogXptProps'
The above command lists inconsistencies between the redo transport parameter settings at the database (memory) level and the broker level

Setting up a Physical Standby

1. Ensure primary database is in archivelog mode
2. Verify the primary database has the proper initialization parameters (a sample is sketched after this list):
     log_archive_dest_2, log_archive_format, db_name, db_unique_name, remote_login_password_file, fal_server, standby_file_management
3. Setup the TNS entries for primary and standby in tnsnames.ora of primary
4. Create a standby controlfile from the primary and also create a pfile from the primary
5. Change the db_unique_name, fal_server and log_archive_dest_2 parameters in the pfile.
6. Copy the password file, standby controlfile and pfile from the primary to the standby
7. Connect to the primary database as target and to the standby as auxiliary, then run the RMAN duplicate command:
8. duplicate target database for standby from active database nofilenamecheck;
9. Enable redo apply (start MRP)
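
A sketch of the key primary-side parameters from step 2 (the db_unique_names prod/stby are placeholders):

log_archive_dest_2='SERVICE=stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby'
fal_server=stby
standby_file_management=AUTO

And on the standby, redo apply from step 9:

SQL> alter database recover managed standby database disconnect from session;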



STARTING OBSERVER AND MVDC COMMANDS

Below commands are to be run at the DGMGRL prompt

enable fast_start failover     -     To start the FSFO
start observer     - Starts the Observer
convert database 'mydb_sby' to snapshot standby ;     -     To open standby to users
Snapshot standby databases continue to receive redo from the primary database but do not apply it; the redo simply queues up on the standby server until the database is converted back to a physical standby.
turn off/on the redo transport service for all standby databases
Primary  DGMGRL> edit database prod1 set state=transport-off;
         DGMGRL> edit database prod1 set state=transport-on;
turn off/on the apply state
Standby    DGMGRL> edit database prod1dr set state=apply-off;
           DGMGRL> edit database prod1dr set state=apply-on;
put a database into a real-time query mode
Standby  DGMGRL> edit database prod1dr set state=apply-off;
         sql> alter database open read only;
         DGMGRL> edit database prod1dr set state=apply-on;
switchover to 'ABCPRD_PRIMARY';
convert database 'mydb_sby' to physical standby;     -     To close standby to users
edit database 'mydb_sby' set state='APPLY-OFF';     -     Stops application of redo on standby

Switchover failed with the below error
Error: ORA-16644: apply instance not available

In that scenario, edited the DGMGRL configuration to explicitly mention the apply instance as below
edit database 'ABCPRD_PRIMARY' set state='APPLY-ON' with apply instance ='ABCPRD1';

SQL*Plus query to check the status of the archive destinations:
column ID format 999
column DB_status format a10
column Archive_dest format a40
column Error format a30 wrap
set linesize 200
set pause on
SELECT inst_id,DEST_ID "ID", STATUS "DB_status", DESTINATION "Archive_dest", ERROR "Error"
FROM GV$ARCHIVE_DEST WHERE DEST_ID <=4;

2PC


11g
Wait Events

ASH REPORT

ASH stands for Active Session History. An ASH report details statistics from the in-memory performance monitoring tables. The report provides:
1. Top User Events (frequent wait events)
2. Details to the wait events
3. Top Queries
4. Top Sessions
5. Top Blocking Sessions
6. Top DB Objects (Note: be wary of the QRTZ_LOCKS result. This table is what Quartz, our third-party Java scheduler, uses as a locking  mechanism. As a result, any contention is intentional and will not affect  performance).
7. Activity Over Time

AWR REPORT

In Oracle, AWR stands for Automatic Workload Repository. Like ASH, the AWR contains statistics on the database in snapshots through time. The difference is that AWR records survive instance restart. You can find various sections such as SQL ordered by Elapsed Time, SQL ordered by CPU Time, Background Wait Events, Foreground Wait Events, SQL Text, and SQL ordered by Executions.

Foreground Vs Background Wait Events

Who is doing the waiting determines the wait event type. If I run a query and it waits on an event, it is a foreground wait event. If SMON runs a query and waits, it is a background wait event. If I issue a commit, my session waits on log file sync (foreground), which internally waits on LGWR completing the I/O (a background event).

ADDM

In Oracle, ADDM stands for Automatic Database Diagnostic Monitor. An ADDM report details potential performance bottlenecks and recommends actions that can be performed to alleviate the problems. The report provides:
Findings
Recommendations
Estimated Impact on Performance

SQL OUTLINE

Collection of hints stored in the OL$, OL$HINTS and OL$NODES tables. Oracle uses the information in these tables to reproduce the execution plan.

SQL PROFILE


A SQL profile is a set of auxiliary information specific to a SQL statement. Conceptually, a SQL profile is to a SQL statement what statistics are to a table or index. The database can use the auxiliary information to improve execution plans; SQL profiles merely guide the optimizer to a better plan rather than forcing one.

SQL PLAN BASELINE


SQL plan baselines are one or more accepted SQL plans (stored as plan outlines/hints). Oracle maintains a SQL plan history which holds the plans generated for each repeatable statement parsed by the CBO. All SQL plan history and SQL plan baseline information is stored in the SQL Management Base (SMB), which resides in the SYSAUX tablespace.

Any SQL plan can be part of SQL Plan Baseline by either of below 2 processes

Capturing - the initial load of plans into the baseline; this can be automatic or manual. With the dynamic parameter OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES set to TRUE, Oracle places the first plan of each repeatable statement into the SQL plan baseline and subsequent plans into the SQL plan history. The manual process loads plans through the DBMS_SPM package; all manually loaded plans are placed directly into the SQL plan baseline.

Evolving - the process by which better plans move up from the SQL plan history into the SQL plan baseline; this is also either automatic or manual. The Automatic SQL Tuning job that runs in the maintenance window lets Oracle evaluate plans for a statement and promote the best plan into the SQL plan baseline. Manually, the SQL Tuning Advisor can be executed, producing new plans/recommendations that may be moved into the SQL plan baseline.

The CBO follows the algorithm below to decide whether to use a plan from the SQL plan baseline:

Is OPTIMIZER_USE_SQL_PLAN_BASELINES=TRUE?
  YES -> Is the generated plan already in the SQL plan history?
    YES -> Is it an accepted plan in the SQL plan baseline?
      YES -> Use the generated plan
      NO  -> Use the best accepted plan from the SQL plan baseline
    NO  -> Add the generated plan to the SQL plan history and use the best accepted baseline plan
  NO -> Use the generated plan
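
A sketch of manually loading a plan from the cursor cache into the baseline (the sql_id is a placeholder):

SET SERVEROUTPUT ON
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '4xz98bc2h1k9p');
  DBMS_OUTPUT.PUT_LINE(n || ' plan(s) loaded');
END;
/
select sql_handle, plan_name, enabled, accepted from dba_sql_plan_baselines;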



Apps Upgrade (11.5.10.2 to R12.1.1)

1. Verify the CPU/RAM requirements for Upgrade from 11.5.10.2 to R12.1.1
2. Ensure OATM Model is being used.
3. Database should be 10g or 11.2.0.4
4. Ensure TEMP tablespace is locally managed. If not, recreate the TEMP tablespace
5. Convert the application to multi-org enabled. This enables the option to define multiple operating units and sets of books.
     adadmin -> Maintain Application Database Entities -> Convert to Multi-org
     This populates the org_id column in multi-org tables with the operating unit defined in the MO: Operating Unit profile option
     Sets the multi_org_flag in the fnd_product_groups table to Y
     Runs the "Replicate Seed Data" program for each OU
6. Download and run the TUMS utility (The Upgrade Manual Steps) to generate a report of the steps which could be safely ignored in the upgrade activity.
7. TUMS script will be present in $AD_TOP/patch/115/sql/adtums.sql
8. Run AD Preparation Scripts - download the patch from Metalink
     adgencons.sql - generates adcrtbsp.sql
     adcrtbsp.sql - run as the SYSTEM user; creates the new tablespaces, allocates unlimited tablespace quota to all Applications users, and updates the fnd_product_installations table with the correct index tablespace information
     adgrants.sql - grants privileges on selected SYS objects to the application; this should be run from $ORACLE_HOME/appsutil/admin
9. Gather Schema Statistics to be run in FE
10. Backup the database and applications
11. Install R12.1 software using rapidwiz (upgrade)
12. Enable Maintenance mode
13. Apply AD minipack patch for R12.AD.B.1
14. Apply American English Upgrade patch
15. Disable maintenance mode.
16. Generate and copy the appsutil folder from the old Oracle home and run autoconfig


OATM

• APPS_TS_TX_DATA - This tablespace holds the transactional tables of all Oracle Applications products. For example, GL_JE_HEADERS is part of APPS_TS_TX_DATA.
• APPS_TS_TX_IDX - All the indexes on the product tables are kept under this tablespace.
• APPS_TS_SEED - The seeded data that is setup and reference data tables and indexes form this tablespace. For example your FND_DATABASES table would reside in the APPS_TS_SEED tablespace.
• APPS_TS_INTERFACE - All the interface tables are kept in this tablespace for example the GL_INTERFACE table.
• APPS_TS_SUMMARY - All objects that record summary information are grouped under this tablespace.
• APPS_TS_NOLOGGING - This tablespace contains the materialized views that are not used for summary purposes and other temporary objects that do not require redo log entries.
• APPS_TS_QUEUES - With the support for advanced queuing in Oracle Applications, the advanced queue tables and related objects form a part of this tablespace.
• APPS_TS_MEDIA - This tablespace holds multimedia objects like graphics, sound recordings and spatial data.
• APPS_TS_ARCHIVE - Tables that are obsolete in the current release of Oracle Applications 11i are stored here. These tables are preserved to maintain backward compatibility of custom programs or migration scripts.
• UNDO - The undo tablespace is used since automatic undo management is enabled by default in 11.5.10. It acts as a replacement for manually managed rollback segments.
• TEMP - The Temp tablespace is the default temporary tablespace for Oracle Applications.
• SYSTEM - This is the SYSTEM tablespace used by the Oracle Database.


DB Upgrade


9.2.0.6 to 10.2.0.2

1. Install the 10g Oracle software only, and after that install the required patch sets.
2. Run cr9data.pl to retain the Language and Territory information which came with Oracle 9i.
3. Point ORA_NLS10 parameter to the new 9idata directory.
4. Start the 9i database and run Pre-Upgrade script (utlu102i.sql) under 10g Oracle_home rdbms/admin directory.
5. Review the output of the Pre-Upgrade Tool
6. Point to 10G home and start the database in upgrade mode.
7. Create SYSAUX tablespace
8. Run catupgrd.sql, which runs all the required upgrade scripts.
9. Run utlu102s.sql from $ORACLE_HOME/rdbms/admin to show the status of the upgrade.
10. Shutdown Immediate/Startup
11. Compile invalid objects using utlrp.sql
12. Modify the memory parameters as required.

10g to 11g

1. Install 11g Oracle Software
2. Run the pre-upgrade information script - utlu111i.sql from $ORACLE_HOME/rdbms/admin of the 11g Oracle home - against the 10g database.
3. Review the suggestions/recommendations from the output of the upgrade information script. It checks for version, timezone, stats
4. Implement the suggestions and re-run the script.
5. Shutdown 10g database.
6. Source and start the 11g database in upgrade mode.
7. Run catupgrd.sql from $ORACLE_HOME/rdbms/admin directory
8. After upgrade completes, database is automatically brought down.
9. Start the database and run upgrade status utility utlu111s.sql from $ORACLE_HOME/rdbms/admin directory to validate the status of the components.
10. Run post upgrade steps - catuppst.sql from $ORACLE_HOME/rdbms/admin
11. Compile invalid objects.

12c Upgrade

1. Take a backup of the existing database and install the 12c database software only.
2. Run the Pre-Upgrade Information tool - preupgrd.sql - on the existing Oracle 11g database. It checks version compatibility, parameters, components, tablespace adequacy, process count, the OLAP catalog (Online Analytical Processing), timezone, recycle bin, statistics etc.
3. The Pre-Upgrade tool generates 3 output files under $ORACLE_HOME/cfgtoollogs/preupgrade: preupgrd.log, preupgrade_fixups.sql and postupgrade_fixups.sql
4. Review the preupgrd.log to ensure that there are no errors reported. Review the recommended checks and perform them prior to upgrade and post upgrade
5. Preupgrade_fixups.sql should be run before the upgrade activity.
6. Run the preupgrade_fixups.sql on the existing 11g database.
7. Review the suggested modifications by the fixup script and fix them.
8. Bring down the 11g database.
9. Copy the parameter file and password file from 11g to 12c oracle home
10. Edit the oracle home in the /etc/oratab
11. Set the environment for 12c database and start it in upgrade mode.
12. Run the Parallel upgrade utility catctl.pl (Catalog Control) under $RDBMS_ADMIN (Default parallelism is 4)
perl catctl.pl -n 8 catupgrd.sql
This runs a total of 728 files (.plb and .sql), driven through 9 driver files and 56 phases of execution.
13. After the upgrade completes, run utlu121s.sql to check the status of the upgrade (in normal startup mode)
14. If there are errors, evaluate them and run catuppst.sql, which performs the post-upgrade tasks that must run once the database is out of upgrade mode.
15. Run postupgrade_fixups.sql to complete any remaining post-upgrade steps
16. Check for invalid objects and confirm the database status using utlu121s.sql.

ERP Upgrade
Cloning
Patching

Oracle 12c Features

1. Pluggable databases - instead of having multiple databases or schemas for different applications, pluggable databases make it possible to keep several databases inside one container, which makes management easier.
2. Redaction policy - allows masking of data in tables so that only authorized users see the real values; an extension of FGAC.
3. FETCH FIRST x ROWS ONLY syntax for top-N queries
4. Adaptive query optimization and online stats gathering - the optimizer can make runtime adjustments to execution plans and gather statistics online.
5. VARCHAR2 maximum size increased from 4K to 32K
6. Identity columns introduced as a replacement for explicit sequences.
7. Multiple indexes on the same column set, but only one of them can be used at a time.
8. Invisible columns.


Migrate Oracle R11 Solaris to Linux

1. Perform the Pre-requisite software version checks on source/target 
2. Apply the recent AD patch
3. Run auto-config on the source system
4. Run adpreclone on the source system.
5. Run Maintain Snapshot information
6. Run the techstack inventory utility: txkInventory.pl -txkTop=$APPLTMP -contextfile= -appspass=apps -outfile=file.html - generates a report of the patch levels of the Oracle homes
7. Generate and upload the customer-specific updates by running $AD_TOP/bin/adgenpsf.pl and upload the output adgenpsf.txt to Metalink
8. Copy APPL_TOP, COMMON_TOP/clone, java, _pages, webapps and utl to the target system
9. Generate the target system application context file by running adclonectx.pl migrate contextfile=. New mount points will be prompted for by the tool
10. Run rapidwiz on the new host to install only the techstack. This will install the 806 and iAS homes
11. Run the INSTE8_SETUP phase of autoconfig to create the environment files needed for autopatch
./adconfig run=INSTE8_SETUP contextfile=
12. Apply the customer-specific updates patch provided by Oracle based on the manifest file.
13. Regenerate filesystem objects - forms, menus, messages
14. Log out, log in again and run autoconfig to propagate the changes.


Block Corruption/Recovery


Corrupted blocks can be identified using the below

1. DBVerify utility - dbv file=file_name start= end= logfile=<>
2. Analyze table ... validate structure;
    Analyze table ... validate structure cascade; -- to be used to validate the structure with indexes
    Analyze index ... validate structure; -- validate the index structure
3. DB_BLOCK_CHECKING - Oracle validates blocks for self-consistency as they are modified. It adds CPU overhead on the host. Valid settings are OFF (or FALSE), LOW, MEDIUM and FULL (or TRUE).
4. RMAN validate, backup validate, restore validate
     validate datafile <>
     validate tablespace <>
     validate database
     All block corruptions are inserted into v$database_block_corruption view.

Recovery of corrupted blocks
1. Block Media Recovery (BMR) can be used in Oracle to recover just the corrupted blocks rather than the entire datafile (see the example after this list):
     blockrecover datafile <file#> block <block#>;
2. DBMS_REPAIR can be used to work around block corruption; it stores data about corrupt blocks in 2 tables - REPAIR_TABLE and ORPHAN_KEY_TABLE.
     DBMS_REPAIR.CHECK_OBJECT               -     checks the object for corruption
     DBMS_REPAIR.FIX_CORRUPT_BLOCKS   -     marks the blocks as corrupt
     DBMS_REPAIR.DUMP_ORPHAN_KEYS     -     reports index entries that point to corrupt blocks
     DBMS_REPAIR.REBUILD_FREELISTS
     DBMS_REPAIR.SKIP_CORRUPT_BLOCKS
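
A sketch of block media recovery from step 1 (the file/block numbers are placeholders):

RMAN> recover datafile 4 block 23;     # 11g syntax; 10g uses BLOCKRECOVER DATAFILE 4 BLOCK 23;
RMAN> recover corruption list;         # repairs every block listed in v$database_block_corruption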





PCP

Check for purpose. How it is configured in a RAC environment

1. Modify the context file and change APPLDCP parameter to YES and run autoconfig on the CM nodes.
2. Verify whether FNDSM and FNDFS entries are populated in tnsnames.ora and listener.ora
3. Verify the FE listener status by running adalnctl.sh
4. Change the profile option Concurrent: PCP Instance Check to ON.

Adding a Node to a Shared Application Tier File System

1.Prepare existing node. Execute Rapid Clone on the existing node. As the APPLMGR user, log in to the node that is sharing the file system and execute the following commands:
$ cd /admin/scripts/
$ perl adpreclone.pl appsTier

2.Make the Applications files accessible. Mount the shared file system disk to the Application node that you want to add. For example, after mounting the shared file system disk, the following file system is visible from both the application nodes:

Shared COMMON_TOP: /d01/shared_mount/apps/apps_st/viscomn
Shared APPL_TOP: /d01/shared_mount/apps/apps_st/visappl
Shared OracleAS 10.1.2 ORACLE_HOME: /d01/shared_mount/apps/tech_st/visora/10.1.2
Shared OracleAS 10.1.3 ORACLE_HOME: /d01/shared_mount/apps/tech_st/visora/10.1.3

Note: User ID and group ID should be consistent across nodes in a shared file system to avoid file access permission issues. You must retain the same absolute path for the mount points of the shared file system on each node. The value for the context variable "s_atName" must be same in the context file for all the application tier nodes.

3.Configure the node to be added. You will have to copy the Applications context file for the existing node to the current node if the Applications context file for the existing node is not accessible from the current node. Execute the commands listed below

As the APPLMGR user, log in to the node that you want to add.
$ cd $COMMON_TOP/clone/bin

$ perl adclonectx.pl addnode \
contextfile=/SLOTS/slot03/appmgr/inst/apps/txkr12d1/appl/admin/txkr12d1.xml

The above command will create a new context file for the node you are adding to the multi-node system. The next step is to run the AutoConfig utility, which will generate the new Instance Home for this node and update the required tables in the database.

$ perl $AD_TOP/bin/adconfig.pl \
contextfile=

The next step is to run the AutoConfig utility on all the other nodes so that the required configuration files on those are updated with the new node information.

AD Splicer

Oracle Applications provides quite a few utilities aimed at effective maintenance and monitoring of an Oracle Applications instance.
AD Splicer is a utility used to add an off-cycle product into Oracle Applications.

Products released between the base releases of 11i are referred to as off-cycle products. Once AD Splicer is used to splice these off-cycle products into Oracle Applications, they are recognized by other AD utilities such as AD Patch as valid Oracle Applications products for a given release.

In a multi-node environment adsplicer should be used on all the APPL_TOPs. It is recommended to run AD Splice on the Admin node first.

Before using AD Splice you are required to add the new tablespace for the new products that are going to be added. If you have migrated to the Oracle Applications new tablespace model (OATM), you need to create the APPS_TS_TOOLS tablespace manually before using AD Splicer.

AD Splicer makes use of the following control files to add the new off-cycle product into Oracle Applications.

[prod]prod.txt : This is a product definition file which contains the language independent information about the product.
[prod]terr.txt : This is also a product definition file containing language-specific information about the new product.

Both these product definition files should not be edited manually.

For example the product definition files for the off-cycle product Web ADI are bneterr.txt and bneprod.txt

newprods.txt : This file is also known as the product configuration file. You can optionally edit this file.

This file acts as a template for configuring the new product to be added. It comprises a list of parameters, including:

product - The product being added, e.g. bne. This must not be changed.

base_product_top - This is the base path where the files for the new product will be placed, following the Oracle Applications filesystem architecture. The default value is APPL_TOP.

oracle_schema - This parameter refers to the Oracle schema which is the owner of the new product to be added.

sizing_factor - The sizing factor which is to be followed for table and index creation.

main_tspace - The tablespace for product tables

index_tspace - The tablespace for product indexes

temp_tspace - The tablespace for Oracle schema’s temporary segments

default_tspace - This refers to the default tablespace of the Oracle schema owning the product.

All the above mentioned control files, namely prod.txt, terr.txt and newprods.txt, must be placed in $APPL_TOP/admin so that adsplice is able to read them.
Like other AD utilities, AD Splicer needs to connect to your database instance using the SYSTEM and APPLSYS accounts, so the database and listener must be running while using AD Splice.
An important fact to note is that products added through AD Splice are visible to other AD utilities except AutoUpgrade.

Rapid Clone Methodologies


You need to prepare the  Source system so that only configuration files and not the whole file system is copied.

Pre-requiste Steps

To start with cloning, you first need to check some pre-requisites; certain utilities must be at the required version:

1. Oracle Universal Installer should be of 2.2.0.19 version
2. Perl should be of 5.005 version minimum
3. JRE 1.1.8 and 1.3.1
4. JDK 1.3.1
5. Zip utility of version 2.3 minimum
6. Latest Autoconfig Templates patches are applied. TXK-O is the latest till date.
7. Latest Rapid Clone patches should be applied. Latest is ADX.F till date.

After checking the pre-requisites now your system is ready to clone.

Preparing the Source system:
---------------------------

This step will create the staged clone directory which will be having the driver files and configuration file of the source.

You have to run the adpreclone.pl script once for the application tier and once for the database tier.

Let's first look at the database tier.

On Source system, run following as ORACLE user

cd <RDBMS ORACLE_HOME>/appsutil/scripts
perl ./adpreclone.pl dbTier

After running this, it goes through two stages:

dbTechstack and Database

It will create the staged clone area at $ORACLE_HOME/appsutil/clone. This clone directory has the following main directories:
data, jre, bin, db, html, context

The dbTechStack stage prepares the database techstack:

-Creates template files at $ORACLE_HOME/appsutil/template
-Creates driver files at $ORACLE_HOME/appsutil/driver/instconf.drv
-Converts inventory from binary to xml

The database stage prepares the database. Major activities include:

-Create database control file script
$Oracle_Home/appsutil/clone/context/data/stage/addbhomsrc.xml
$Oracle_Home/appsutil/clone/context/data/stage/adcrdb.zip
adcrdbclone.sql

-Generates database creation driver file
$Oracle_Home/appsutil/clone/context/data/driver/data.drv

-Copy JDBC Libraries
$Oracle_Home/appsutil/clone/jlib/classes111.zip

So all this happens on database side

Now on the application tier side, it goes almost the same way, in two stages.

As the APPLMGR user on the application tier, run:
cd $COMMON_TOP/admin/scripts
perl ./adpreclone.pl appsTier

It will also create the staged clone directory at $COMMON_TOP/clone.
It goes through two stages. Let's see them one by one.

atTechStack, that is, the application tier techstack:

- Creates template files for
Oracle_iAS_Home/appsutil/template
Oracle_806_Home/appsutil/template

-Creates Techstack driver files for
Oracle_iAS_Home/appsutil/driver/instconf.drv
Oracle_806_Home/appsutil/driver/instconf.drv

appltop preparation

-It will create application top driver file
$COMMON_TOP/clone/appl/driver/appl.drv

-Copy JDBC libraries
$COMMON_TOP/clone/jlib/classes111.zip

So this all happens in dressing the source system

After then we need to copy the files.

Copy following files

APPL_TOP
OA_HTML ($COMMON_TOP/html)
JAVA_TOP
COMMON_TOP/util
COMMON_TOP/clone
8.0.6 ORACLE_HOME
iAS ORACLE_HOME
DB ORACLE_HOME
dbf files

After this stage you need to configure the target node. Basically you will run adcfgclone.pl, once for the database tier and once for the application tier.
Let's go into a bit more detail:

First we will see for the database tier

On the target node you will run:
cd ORACLE_DB_HOME/appsutil/clone/bin
perl adcfgclone.pl dbTier

Again there are two stages :

First we will look at dbTechstack.
It uses the driver files and templates which were created at the source and have been copied to the target.

Following scripts are run
-adchkutl.sh
-adclonectx.pl to create the context file
-runInstallConfigDriver, located in $Oracle_Home/appsutil/driver/instconf.drv
-Relinking via $Oracle_Home/appsutil/install/adlnkoh.sh

Now for the database stage:
-Driver file $Oracle_Home/appsutil/clone/context/data/driver/data.drv
-Database creation from adcrdb.zip
-Autoconfig is run
-Control file creation via adcrdbclone.sql

Now on the application tier, run from $COMMON_TOP/clone/bin:
perl adcfgclone.pl appsTier

TechStack stage:
-Creates the context file for the target using adclonectx.pl
-Run driver files
$Oracle_806_Home/appsutil/driver/instconf.drv
$Oracle_iAS_Home/appsutil/driver/instconf.drv

Relinking of Oracle Home
$Oracle_806_Home/bin/adlnk806.sh
$Oracle_iAS_Home/bin/adlnkiAS.sh

At the end, for the ApplTop stage, it runs the appl.drv driver and then autoconfig.

This fully configures the application that was copied.

At the end you have to perform some finishing tasks:

-Update Printer settings
-Update Workflow Configuration settings
-Update Profile options

This completes your application cloning.

Custom Applications Top Creation


Steps to create a custom top are as below
1.Make the directory structure for the custom application as below
cd $APPL_TOP
mkdir fz
mkdir fz/11.5.0
mkdir fz/11.5.0/admin
mkdir fz/11.5.0/admin/sql
mkdir fz/11.5.0/admin/odf
mkdir fz/11.5.0/sql
mkdir fz/11.5.0/bin
mkdir fz/11.5.0/reports
mkdir fz/11.5.0/reports/US
mkdir fz/11.5.0/forms
mkdir fz/11.5.0/forms/US
mkdir fz/11.5.0/$APPLLIB
mkdir fz/11.5.0/$APPLLOG
mkdir fz/11.5.0/$APPLOUT

2.Add the custom module into the environment: apply the ADX.E.1 patch and add the entry to topfile.txt

3.Create a new tablespace.

4.Create new user/schema.

5.Register the new schema by going to Application -> Register
    Provide the application name, short name, base path and description.

6.Register the new user by going to Security -> User -> Register

7.Add the application to a Data Group by going to Security -> User -> Datagroup

8.Create a new Request Group, Menu, Responsibility and assign the responsibility to the user.


Difference Between APPS,APPLSYS and APPLSYSPUB

APPS        :    It is the universal schema and contains synonyms to all objects in the database.
                 It doesn't have application tables of its own, but it does own the packages and procedures.

APPLSYS        :    This schema contains all the tables for administration purposes, e.g. for FND, AOL etc.

APPLSYSPUB    :    This schema is responsible for password checking. It is used for the initial forms login: when a user logs in with his credentials, the APPS schema password is needed to validate them. To obtain it, APPLSYSPUB/PUB is used to connect to the DB and acquire the required privileges.

Passwords for APPS and APPLSYS should be same

JVM Tuning Tips

1. Use one JVM per 2 CPU
2. No more than one JVM/CPU
3. No more than 100 concurrent users per JVM
4. For Forms users, 1 JVM per 250 users max
5. Enforce parallel GC with 2 threads: -XX:+UseParallelGC -XX:ParallelGCThreads=2
6. Using 2 GC threads is enough for a heap size below 1 GB; beyond 1 GB of heap, use 4 GC threads.
7. Forms in socket mode is more efficient than servlet mode. R12 Default is servlet mode. R11 default is socket mode.
8. To decide on the number of JVMs needed, take the lower of the values (Number of cores or concurrent users/100)

wdbsvr.app - contains the database connection details used by the mod_plsql component of Apache to connect to the database.

Concurrent Manager Tuning Tips

1. To maximize throughput for jobs with parallel workers, reduce the sleep time of CRM. Default is 60 sec.
2. Increase cache size to pull requests into queue twice as that of target processes.
3. Create dedicated concurrent managers for short-running and long-running programs. This helps reduce the queue length.
4. Purge FND tables regularly.
5. Use specialization rules to schedule programs in the right window, so resource-intensive programs don't all run at the same time.

Concurrent-program-level trace is Level 8 (with waits, no binds)

Concurrent Program Running Long

1. Enable a program-level trace: Concurrent -> Program -> Define
2. Go to the user_dump_dest directory, where the trace file will be generated.
An alternate way to enable a SQL trace on a session:
exec dbms_system.set_sql_trace_in_session(sid, serial#, true);
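
The sid and serial# can be looked up in v$session first; the numbers below are illustrative:

SQL> select sid, serial#, module from v$session where username = 'APPS';
SQL> exec dbms_system.set_sql_trace_in_session(123, 4567, true);
SQL> exec dbms_system.set_sql_trace_in_session(123, 4567, false); -- disable when done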


SQL Performance 

Concurrent Program Running Long

Trace Level in Program Level Trace
When the Enable Trace checkbox is selected on the program definition, a Level 8 trace is generated.

Concurrent Program Tuning

FND Diag Trace is written to FND_LOG_MESSAGES table

11g Vs 10g differences

High CPU situation

Tuning Database Memory

memory_max_target  - Static parameter which sets the upper limit of RAM the database can use; memory_target cannot be raised above it without a restart.
memory_target          - Dynamic parameter which specifies how much RAM the database actually uses across all memory components (SGA + PGA).

sga_target                 - If AMM is enabled and sga_target is set to 0, then by default 60% of memory_target is allocated to the SGA and 40% to the PGA.

pga_aggregate_target - If AMM is enabled and pga_aggregate_target is set, then this value becomes the minimum PGA size.



SGA Target Vs Memory Target Vs SGA Max Size

SGA_TARGET specifies the target size of the SGA (automatic SGA memory management).
MEMORY_TARGET, if set, means AMM is enabled; Oracle controls the SGA and PGA sizes by adjusting them automatically within this total.
SGA_MAX_SIZE is the upper ceiling for SGA_TARGET.
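
A minimal sketch of enabling AMM; the sizes are illustrative, and memory_max_target is static, so a restart is needed for it to take effect:

SQL> alter system set memory_max_target = 4G scope = spfile;
-- restart the instance so memory_max_target takes effect, then:
SQL> alter system set memory_target = 3G;
SQL> alter system set sga_target = 0;
SQL> alter system set pga_aggregate_target = 0;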

R12.2 Technology Component Changes

10.1.2 is still retained to host Forms/Reports.
10.1.3 is replaced with Fusion Middleware 11.1.1.4 with WLS 10.3.4, which hosts OHS.
The default database version is 11.2.
The Oracle JSP compiler is replaced by the WebLogic JSP compiler.

R12.2 Cloning Procedure

1. Run adpreclone on the source run filesystem.
2. Copy the run filesystem from source to target.
3. Run adcfgclone on the target run filesystem.
4. Bring up the services and validate the health.
5. Bring down the services.
6. Run adpreclone on the run filesystem of the target system. The WebLogic server needs to be up.
7. Copy the run filesystem (fs) to the patch filesystem.
8. Unset the environment and run adcfgclone from the patch filesystem's clone/bin directory.
9. Source the run filesystem environment and start the services (see the command sketch below).
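
A hedged sketch of the main commands behind these steps, assuming the standard EBS environment variables are set:

On the source run fs (step 1):   perl $ADMIN_SCRIPTS_HOME/adpreclone.pl appsTier
On the target run fs (step 3):   perl $COMMON_TOP/clone/bin/adcfgclone.pl appsTier
On the target patch fs (step 8): perl <patch fs COMMON_TOP>/clone/bin/adcfgclone.pl appsTier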

R12.2 ADOP

ADOP has 5 standard phases, plus two special phases (abort and fs_clone).
prepare - Syncs the patch filesystem with the run filesystem and creates a patch cycle ID.
          - Checks whether cleanup needs to be run, in case the previous ADOP session did not run cleanup after cutover.
          - Checks for the existence of the concurrent program "Online Patching in Progress", which runs while the patching activity is going on.
          - Checks that the patch service exists. The patch service is needed by ADOP to connect to the patch edition of the database; the DB service_names parameter should include it.
          - Runs txkADOPPreparePhaseSynchronize.pl to sync patches applied to the run APPL_TOP but not to the patch APPL_TOP.
          - Creates a patch edition in the database.

apply - Patches are actually applied to the patch edition of the database and to the patch filesystem in this phase. apply_mode=online|downtime|hotpatch
     restart=yes makes the patch continue from the failed step.
     abandon=yes cleans up the previously failed patch step and starts a new cycle.
     input_file=<> gives the location of a defaults file, for running in non-interactive mode.
If any post-patch steps need to be performed, where they run depends on the patch application mode: in normal online mode, run them from the patch filesystem after the apply phase; in downtime or hotpatch mode, run them on the run fs after the apply phase.

finalize - This phase performs steps that allow the fastest possible cutover: compiling invalid objects and generating derived objects. Until this phase, an abort can still be performed.
     finalize_mode=quick|full - quick mode skips non-essential steps; full mode additionally gathers statistics on key data dictionary tables for improved performance. The default is quick.

cutover - Brings down application services, promotes the patch filesystem to the run filesystem, promotes the patch database edition to the run database edition, performs maintenance tasks, and brings the application services back up.
          - Shuts down the ICM and waits for existing concurrent programs to finish. This can take a while; the ICM can be terminated from another shell.
          - Shuts down the application tier services.
          - Cuts over the database - promotes the patch edition to the run edition using adzdpmgr.pl.
          - Cuts over the fs - promotes the patch fs to the run fs and switches the FILE_EDITION env value in the patch and run environments.
          - Starts the application services.
          - The ADZDPATCH concurrent program is stopped.

     mtrestart = yes|no - whether middle tier services should be restarted. The default is yes.
     cm_wait = <mins> - specifies how many minutes adop will wait for CM services to go down. If the time elapses, the remaining requests are killed and cutover proceeds.

cleanup - This phase cleans up the application and database for the next patching cycle.
     cleanup_mode=full|standard|quick. Quick skips some non-essential steps to reduce execution time; it removes crossedition triggers and seed data. Standard additionally drops obsolete objects from old editions; this is the default mode. Full drops all obsolete code and data from previous editions.

abort - This is a special phase which can be run any time before the cutover phase.
          - Confirms the existence of an online patch cycle.
          - Checks for the existence of the patch edition and drops it.
          - Cancels the ADZDPATCH concurrent program.
          - Deletes the rows from the AD_ADOP_SESSIONS and AD_ADOP_SESSION_PATCHES tables.
          - A full cleanup should be run after an abort.

fs_clone - This phase is used to sync up the run and patch filesystems. It is useful when the patch filesystem has become corrupted or has drifted out of sync with the run filesystem.
There can be multiple apply and finalize phases, but one finalize must be invoked prior to cutover.
There are 6 different log levels in ADOP.
statement - used for debugging.
procedure - used for debugging high-level procedures.
event - used to display informational messages during normal processing. This is the default option.
warning - displays internal errors noticed during the patching activity which do not impact the activity.
error - indicates an action failed and should be reviewed by the user.
unexpected - indicates an unrecoverable error which halts processing and requires manual intervention.

adop -status and adop -status -detail give information about an adop session.
downtime mode is a new addition in AD Delta 5, where patching is applied only to the run filesystem, with increased downtime. Restrictions apply.

To reduce the downtime duration in the cutover phase, do the below:

1. Invoke the finalize phase explicitly so that the cutover phase does not have to do it.
2. Bring down the concurrent managers before the cutover phase begins, so that shutting down the services does not have to wait for running programs to complete.
3. Ensure no long-running concurrent programs are scheduled during the patching activity.
4. Ensure more parallel workers are used.

fs_clone is invoked as adop phase=fs_clone, which copies the run filesystem over the patch filesystem.
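
Putting the phases together, a standard online patching cycle looks like this (the patch number is illustrative):

adop phase=prepare
adop phase=apply patches=123456
adop phase=finalize
adop phase=cutover
adop phase=cleanup

In downtime mode, the same patch would be applied with a single command: adop phase=apply patches=123456 apply_mode=downtime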


R12.2 Log file locations

Apache Logs

$IAS_ORACLE_HOME/instances/*/diagnostics/logs/OHS/EBS_web_*/*log

OPMN Log
$IAS_ORACLE_HOME/instances/*/diagnostics/logs/OPMN/opmn/*

Weblogic Logs

$IAS_ORACLE_HOME/../wlserver_10.3/common/nodemanager
$EBS_DOMAIN_HOME/servers/oa*/logs/*
$EBS_DOMAIN_HOME/servers/forms*/logs/*
$EBS_DOMAIN_HOME/servers/AdminServer/logs/*
$EBS_DOMAIN_HOME/sysman/log/*

ADDING/REMOVING MANAGED SERVERS

To add or remove a managed server, 2 commands are executed: one to add the managed server, and the other to update OHS about the existence of the new server in the files mod_wl_ohs.conf and apps.conf.

adProvisionEBS.pl ebs-Create-ManagedServer -contextfile <> -managedsrvname=<> -servicetype= -managedsrvport=<> -logfile=<>
txkSetAppsConf.pl -configoption= -contextfile <> -oacore=:



FND_CONC_CLONE.SETUP_CLEAN Vs CMClean

FND_CONC_CLONE.SETUP_CLEAN is used after cloning to clean up source node entries in tables such as FND_NODES and FND_CONCURRENT_QUEUES on the target node.
cmclean.sql is used to clean up pending/running request information from the backend tables when the CM services did not go down cleanly.
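
For reference, a typical post-clone invocation, run as the APPS user; AutoConfig is then run on each node to repopulate FND_NODES:

SQL> exec FND_CONC_CLONE.SETUP_CLEAN;
SQL> commit;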

9i Vs 10g vs 11g

10g

Statspack was replaced by AWR
ADDM was introduced
ASM was introduced
DBMS_Job replaced by DBMS_Scheduler
New OEM interface
exp/imp superseded by Data Pump (expdp/impdp)
multiple TEMP tablespaces option

11g

OCRCheck - Checks the integrity of OCR 
OCRConfig - Used to perform the backup of OCR and voting disks
crsctl query css votedisk - checks the status of votedisks
OCR & Voting disk reside in ASM
SCAN
srvctl can manage scan and other components
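
For example, SCAN and its listeners can be inspected with srvctl:

srvctl config scan
srvctl status scan_listener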


RAC Configuration
storage options for RAC
how is ip configuration done in RAC
redo log - active/inactive - usage - configuration
patching - procedure
forms triggering sequence

Topics to cover
performance tuning
SCN scenario
Block
Security Vault, Data Vault, FGA, Database Firewall

Tracing options
SQL baseline, outline, profiles
DNFS

ADOP cutover failed. what to do.

ADOP each phase activities in detail.
what happens when fndcpass is run
dependent and independent managed server
managed server independence?
If a managed server is started in the absence of the admin server, it is called managed server independence (MSI). This happens when the admin server is not reachable; the managed server uses its local copy of the configuration to boot.
how does autoconfig connect to database
why is fs-clone needed
difference between production mode and development mode in Weblogic server
FND_CONC_CLONE.CLEAN()


TEMP_UNDO_ENABLED
Setting this parameter to TRUE allows Oracle to manage the undo for temporary objects (e.g., global temporary tables) separately, storing it in the temporary tablespace instead of the undo tablespace and reducing redo generation.
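
It is available from 12c and can be set at the session or system level:

SQL> alter session set temp_undo_enabled = true;
SQL> alter system set temp_undo_enabled = true;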


ABCPRD ADOP MVIEW ISSUE


Issue :

ABCPRD - MView Refresh Failing With "ORA-38802 edition does not exist" After Online Patching Window

Issue Details :

20+ materialized views in ABCPRD were found invalid and their refresh was failing with the below error.

SQL> exec dbms_refresh.refresh('"APPS"."ABC_CONTACT_MV"');
BEGIN dbms_refresh.refresh('"APPS"."ABC_CONTACT_MV"'); END;
ERROR at line 1:
ORA-38802: edition does not exist
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2821
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 3058
ORA-06512: at "SYS.DBMS_IREFRESH", line 687
ORA-06512: at "SYS.DBMS_REFRESH", line 195
ORA-06512: at line 1

How we reached the STF/LTF :

The DBA team had run a full cleanup as part of the adop online patching cycle for the Feb'18 release.

The full cleanup completed successfully, dropping all the old editions, since we had run actualize before.

The cutover/cleanup makes several attempts to upgrade the objects, but some do not get validated because the mviews were dependent on database links that were invalid at the time.

There is a known Oracle database bug (Bug 25643931 - ORA-38802: EDITION DOES NOT EXIST WHEN REFRESHING MVIEW) which was fixed in CSFPRD as part of the Oct'17 PSU patching.

After reviewing the logs, Oracle Support (SR #3-16847585381) requested us to run the commands below.

--> alter materialized view ABC_QUAL_CD_MV evaluate using current edition;

--> execute dbms_mview.refresh('APPS.ABC_QUAL_CD_MV');

Long Term Fix :


--> ALWAYS review the adzdallddls output and look for these errors; we have repeatedly come back to this same issue and it has been the root of many problems.

--> Identify the invalids and manually run "evaluate using current edition" for those that failed and show invalid.

RCA Note :

A materialized view is a noneditioned object that can specify an evaluation edition, thereby enabling it to depend on editioned objects.

To display the evaluation editions and unusable editions of materialized views, use the static data dictionary views *_MVIEWS (described in Oracle Database Reference).

Dropping the evaluation edition invalidates the materialized view.

The DB team explains that the mviews may have been created with the 'evaluate using edition' option (which can be verified by querying the EVALUATION_EDITION, UNUSABLE_BEFORE and UNUSABLE_BEGINNING columns in DBA_MVIEWS); since those editions were no longer available, the mview refresh was bound to encounter this error.
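
A query along these lines can be used to spot the affected mviews (the owner is illustrative):

SQL> select mview_name, evaluation_edition, unusable_before, unusable_beginning from dba_mviews where owner = 'APPS';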

PSU PATCHING


PSU patching is a 2-step process.
  1. Apply the patch to upgrade the Oracle home hosting the database. This is done using the OPatch utility.
    1. Ensure you have the latest OPatch version compatible with the PSU release.
    2. Download and unzip the PSU patch.
    3. Ensure the database and listener on all nodes are down.
    4. Check for conflicts and prereqs using opatch prereq CheckConflictAgainstOHWithDetail -oh -ph
    5. Apply the patch using opatch apply.
    6. OPatch takes care of unarchiving the .o files into the respective Oracle binaries and relinks them.
  2. Update the database objects with the modified PSU files.
    1. This is done using the catbundle.sql script under $ORACLE_HOME/rdbms/admin.
    2. SQL> @catbundle.sql PSU apply
    3. This script reads the bundledata_.xml file, which comes as part of each patch, to understand what has changed since the last patch application, and executes the respective changes on the database objects.
    4. This generates 2 files - catbundle___APPLY.sql and catbundle___ROLLBACK.sql.
    5. We need to execute the one matching the mode we are in, apply or rollback.
  3. Starting with 12c, catbundle.sql is replaced with datapatch, which is present in the OPatch directory.
  4. After executing OPatch, we can bring up the database and run datapatch, which automatically takes care of apply or rollback mode based on the changes reflected in dba_registry_history.
  5. datapatch -verbose and datapatch -prereq are different execution types (see the sketch below).
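
So on 12c, the post-OPatch step reduces to running datapatch with the database open, for example:

cd $ORACLE_HOME/OPatch
./datapatch -verbose
./datapatch -prereq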
Before the CR (advance preparation) :
1) Make sure the patching activity on the lifecycle non-prod databases is completed without any issues.

2) Export OPatch:
 export PATH=$PATH:$ORACLE_HOME/OPatch
3) Upgrade the OPatch utility (non-downtime, if not already the latest) across all the nodes.
Check the OPatch version using opatch version.
Download patch 6880880 from Oracle Support.
Copy it to the server.
mv $ORACLE_HOME/OPatch $ORACLE_HOME/OPatch__old
unzip
Check the OPatch version again using opatch version.
4) Check the version of the latest PSU/JVM applied in the database.
$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i -E 'DATABASE PSU|DATABASE PATCH SET UPDATE'
5) Copy the required PSU and JVM patches to the server, unzip them, and perform the conflict analysis as detailed below.
cd
opatch prereq CheckConflictAgainstOHWithDetail -invPtrLoc $ORACLE_HOME/oraInst.loc -ph ./
6) Check if higher-version/solution patches are available for the conflicting patches. If yes, copy them to the server; else raise a service request with Oracle.
7) For 12c databases, if the Grid one-off patch is not applied, raise an incident with SA/hosting and get the conflict analysis done.
/oracle/product/grid_home/OPatch/opatchauto apply /21232394 -analyze -ocmrf /ocm.rsp
8) For 12c databases, check the mandatory patches - scope them if not applied.
9) Check that sufficient space exists under the ORACLE_HOME location. Do not create softlinks to files/directories or zip files/directories to fix a space crunch in the ORACLE_HOME binaries (except for .patch_storage in dump). Instead, report the shortage to the primary DBA and request sufficient space.
10) If .patch_storage is not a softlink to the /oracle/dump directory, move $ORACLE_HOME/.patch_storage to /oracle/dump and create a softlink for it in $ORACLE_HOME.
11) Create CR, scope for release (EIR), send outage communication and follow up for approvals.
12) Before the Production Weekend, Check the backup timings and if it is conflicting with the CR window - get it commented and pre-fired.
Just before the CR Implementation :
1) Check and make sure the CR is fully approved.
2) Take an Oracle Home binary tar backup to /oracle/export or /oracle/admin (make sure it is removed a week after the activity).
cd $ORACLE_HOME
tar -czvf /oracle_home.tar *
3) Run DB related Prechecks.
4) Check and enable Blackouts in EM.
5) Run Doctor DB, Instaspect, Environment Verification, Qualys Vulnerability Tool, DBStatus tool & Check Remote Connectivity.
During CR window :
1) Bring down the database with stop_oracle.
2) Apply PSU, JVM and all required One-Off Patches (Conflicting Patches + Mandatory Patches in case of 12C).
cd
opatch apply -local
3) Apply GRID Mandatory One-Off Patch with the help of SA for 12C (if not applied).
/oracle/product/grid_home/OPatch/opatchauto apply /21232394 -ocmrf /ocm.rsp
4) Run the post-patch scripts for PSU and JVM as suggested by Oracle (postinstall.sql/catbundle.sql/datapatch -verbose), as per the database version, either in normal mode or upgrade mode per the patch readme.
5) Startup the database manually and make sure there are no errors in alert log and then stop the database.
6) Startup the database with start_oracle.
7) Check if any new invalids are created. If yes, compile them with utlrp.
@?/rdbms/admin/utlrp.sql
8) Upgrade RMAN CATALOG
rman catalog username/password@alias
RMAN> UPGRADE CATALOG;
9) Disable the VPC_CONTEXT_TRG trigger in RCATPROD - dbc-prd1-1001 (if it was enabled while running upgrade catalog).
select owner,trigger_name,status from dba_triggers where trigger_name='VPC_CONTEXT_TRG' and owner='';
alter trigger .VPC_CONTEXT_TRG disable;
select owner,trigger_name,status from dba_triggers where trigger_name='VPC_CONTEXT_TRG' and owner='';
10) Run DB related Postchecks and validate the same with Prechecks across all the nodes.
11) Check JDK Version in all instances.
12C : select dbms_java.get_jdk_version() from dual;
11G : select dbms_java.longname('foo') from dual;
12) Check and end Blackouts in EM.
13) Run Doctor DB, Instaspect, Environment Verification, Qualys Vulnerability Tool, DBStatus tool.
14) If any new issues are found in Instaspect due to the CR, fix them within the CR window (DNFS, softlinks etc.) and re-run Instaspect to cross-check.
15) If any new vulnerabilities reported, follow the fix mentioned in Qualys Wiki.
16) Monitor the alert log for some time and make sure no new ORA- errors are generated.
17) If any new ORA- errors appear, raise a Jira with the primary DBA and an SR with Oracle.
18) Send mail to SA to uncomment backup, if it was commented by us before CR.
19) Send out CR completion mail to the concerned teams, if any mail chain exists.

Issues & Solutions:

Issue :
adoacorectl.sh: exiting with status 206

-> Process (index=1,uid=1772111186,pid=19594)
    failed to start a managed process after the maximum retry limit
    Log:
    /cwdev1001/inst/apps/DBDEV_host1140/logs/ora/10.1.3/opmn/HTTP_Server~1.log

When the HTTP server log was checked, the below error was noticed:
/dbdev1001/apps/tech_st/10.1.3/Apache/Apache/bin/httpd: error while loading shared libraries: libdb.so.2: cannot open shared object file: No such file or directory


Fix :
libdb.so.2 was missing.
As the root user, create a softlink pointing to the actual library file:
ln -s /usr/lib/libgdbm.so.2.0.0 /usr/lib/libdb.so.2


Issue :
MRP is not applying the archives and reporting the below error in the standby alert log.
MRP has not responded for more than 600 seconds. Possible that primary not shipping logs, check log_archive_dest_state params on primary.

Fix: 
The issue could be related to insufficient storage IO on the host. Also, as per Metalink, this is a bug in 11.2.0.4 that is fixed via patch or in release 12.1.

Issue :
The Workflow Notification Mailer was not starting, and neither was its container, the Workflow Mailer Service. No errors were reported in the Workflow logs.

Fix : 

On checking the CM node, there were stale Workflow processes running under the name GSMServiceController. We killed those processes and restarted the container from the UI, and the mailer came up.


Issue :  

ORA-27300: OS system dependent operation:sendmsg failed with status: 105
ORA-27301: OS failure message: No buffer space available

Cause:
This issue appears to occur because the loopback interface, which is used for local communication, consumes progressively more memory, depleting the host's memory until it either hangs or is evicted.

Solution: 
The solution is to:
a.    reduce the MTU of the loopback (lo) interface from 65K to 16K
b.    set the parameter vm.min_free_kbytes to 32M in sysctl.conf

Issue : 
While applying a CPU patch on R12.1.3, the below error occurred.

sqlplus -s APPS/***** @/applpt12/apps/apps_st/appl/ad/12.0.0/patch/115/sql/adrevokegrants.sql &systempwd
Connected.
Connected.
  sys.ad_grants.cleanup;
  *
ERROR at line 2:
ORA-06550: line 2, column 3:
PLS-00201: identifier 'SYS.AD_GRANTS' must be declared
ORA-06550: line 2, column 3:
PL/SQL: Statement ignored


Solution:

Copy adgrants.sql from $APPL_TOP/admin to the RDBMS tier and run it as below:
sqlplus /nolog @adgrants.sql &apps_username

Issue :
Starting of Apps listener fails with below error
TNS-01153: Failed to process string:

Solution:
This issue is due to libnlsnr10.a being corrupted. Copy the file from $ORACLE_HOME/network/lib/ of a working instance and relink the executable:
cp $ORACLE_HOME/network/lib/libnlsnr10.a
make -f ins_net_server.mk itnslsnr

Issue:

While doing RMAN incremental restore, recover database was failing with the below error
ORA-19573: cannot obtain exclusive enqueue for datafile 1

Solution:

The database should be in mounted state, not in open mode. To fix this, cancel the media recovery to ensure that no processes are accessing the datafiles:
alter database recover managed standby database cancel;

and then run recover database again.
