Questions :

http://expertoracle.com/2014/03/07/oracle-data-guard-interview-questions/
https://www.oracle-dba-online.com/unix_oracle_dba_interview_questions.htm
https://www.oracle.com/technetwork/articles/hunter-rac11gr2-iscsi-088677.html
https://oracle-base.com/articles/12c/articles-12c

PGA : private memory region that contains the data and control information for a server process
Library Cache : cache in the shared pool that stores parsed SQL and PL/SQL; used during parse and validation.
Buffer Cache : cache for data blocks read from datafiles

ASH : history of recent active session samples
AWR : used to store DB statistics snapshots (STATISTICS_LEVEL = TYPICAL). We can run a compare-period report
ADDM : Automatic Database Diagnostic Monitor; provides recommendations for performance improvement by comparing two snapshots

ASH : 
Samples of active session wait information are taken once per second, providing near real-time information
part of the Diagnostics Pack
DBA_HIST_ACTIVE_SESS_HISTORY : persisted samples; every 10th sample is kept, so the effective sample time is 10 seconds
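A sketch of how ASH is typically queried (the view and columns are standard, but the 15-minute window and grouping are illustrative):

```sql
-- Top wait events sampled by ASH over the last 15 minutes
-- (querying v$active_session_history requires the Diagnostics Pack)
SELECT event,
       COUNT(*) AS samples          -- each sample ~= 1 second of wait time
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - INTERVAL '15' MINUTE
AND    session_state = 'WAITING'
GROUP  BY event
ORDER  BY samples DESC;
```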


AWR is used to collect performance statistics

  1. Wait events 
  2. Active Session History (ASH) statistics from the V$ACTIVE_SESSION_HISTORY view.
  3. Object usage statistics.
  4. Resource intensive SQL statements

Repository is a source of information for

  1.  ADDM Automatic Database Diagnostic Monitor
  2.  SQL Tuning Advisor
  3.  Undo Advisor
  4.  Segment Advisor


ADDM analyzes data in the Automatic Workload Repository (AWR) to identify potential performance bottlenecks. It locates the root cause and provides recommendations for correcting the problem.




IOT : PK data and a few columns are stored in the index leaf blocks ; CREATE TABLE ..... ORGANIZATION INDEX INCLUDING col OVERFLOW
DBVerify : to check corruption in database ; $ dbv file=data01.dbf logfile=verify.log blocksize=8192 feedback=100

alter index index_name rebuild online;


High water mark is the maximum number of database blocks a segment has used so far. This mark cannot be reset by DELETE operations; TRUNCATE will reset the HWM.
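A quick sketch of the DELETE vs TRUNCATE behaviour described above (table name is illustrative):

```sql
CREATE TABLE hwm_demo AS SELECT * FROM all_objects;

DELETE FROM hwm_demo;     -- rows gone, but blocks below the HWM stay allocated
COMMIT;
-- a full table scan still reads every block up to the old HWM

TRUNCATE TABLE hwm_demo;  -- resets the HWM back to the start of the segment
```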

Stop one node for reboot :

https://alllinuxdba.wordpress.com/2014/08/01/how-to-stop-one-node-in-a-two-node-oracle-rac-cluster/


CREATE A PHYSICAL STANDBY DATABASE :

https://alllinuxdba.wordpress.com/category/oracle/create-a-physical-standby-database-11gr2/


Segment Advisor :
dbms_advisor.create_task
 dbms_advisor.create_object
   dbms_advisor.set_task_parameter
     dbms_advisor.execute_task



LMT - CREATE TABLESPACE ... EXTENT MANAGEMENT LOCAL AUTOALLOCATE : tablespace manages its own free and used space within a bitmap structure
               Reduces contention on data dictionary tables
               exec dbms_space_admin.Tablespace_Migrate_To_Local('ts1');
               Not valid for LMT - NEXT, PCTINCREASE, MINEXTENTS, MAXEXTENTS, and DEFAULT STORAGE

Extent management can be LOCAL or DICTIONARY(contention)  and size can be  UNIFORM or AUTOALLOCATE

AUTOALLOCATE  - extent sizes are managed by Oracle which dynamically adjust the size of an extent.
AU_SIZE (ASM allocation unit) = 1 MB by default (4 MB in Exadata)
Up to 10g, ASM file extent size is uniform = 1 AU
From 11g onwards extents are variable-sized: first AU_SIZE, then 4*AU_SIZE, then 16*AU_SIZE as the file grows


UNIFORM - extent allocation is of fixed uniform size ; minimizes fragmentation

AUTOEXTEND - for a non-OMF datafile, specify the NEXT increment ; for OMF (i.e. where "db_create_file_dest" is configured) AUTOEXTEND defaults to ON with an increment of 100MB

SEGMENT SPACE MANAGEMENT - AUTO eliminates the need to specify and tune PCTUSED and FREELISTS. Free space of each block within the segment is maintained automatically using bitmaps



In RAC , ALTER SYSTEM KILL SESSION 'sid,serial#,@inst_id';


Flashback :

SELECT * FROM emp           AS OF TIMESTAMP  TO_TIMESTAMP('2007-06-07 10:00:00', 'YYYY-MM-DD HH:MI:SS')

Select versions_operation, * from EMP versions between timestamp to_timestamp('  ') and to_timestamp('  ')

FLASHBACK TABLE emp TO TIMESTAMP TO_TIMESTAMP('2007-06-19 09:30:00', 'YYYY-MM-DD HH24:MI:SS');      -- can go back as far as UNDO_RETENTION

Flashback table emp to before drop RENAME TO EMP_VER_1;


select log_mode,flashback_on from v$database;
SQL> flashback database to scn 930717;
rewind the whole database to a target time, SCN, or log sequence number
RMAN uses flashback logs
faster than a RESTORE operation followed by point-in-time recovery

SQL> create restore point test_rest1 guarantee flashback database;
SQL> FLASHBACK database TO RESTORE POINT test1;
SQL> alter database open resetlogs;
FLASHBACK TABLE emp TO RESTORE POINT test1;



Locks : 

Locks are used to guarantee the integrity of data being accessed concurrently by multiple users. Locks can be acquired in exclusive or share mode.

DML Locks 

Row Locks (TX) :

To prevent multiple transactions from modifying the same row.
Whenever a transaction obtains row locks for a row, it also acquires a table lock  to prevent DDL operations 
INSERT, UPDATE, and DELETE and SELECT... FOR UPDATE 

Table Locks (TM) :

Acquired to ensure Table not dropped while DML running.

row share (RS), 
row exclusive (RX),    --  for updating, inserting, or deleting
share (S), 
share row exclusive (SRX),   -- others can select but prohibit  updating row or locking table
exclusive (X). :  Only one transaction can acquire e.g to drop


DDL Locks(TM) :
Exclusive DDL Locks : ALTER TABLE 
Share     DDL Locks : CREATE PROCEDURE/PACKAGE/FUNCTION/ TRIGGER/TABLE

Latches :


Statement-Level Read Consistency   : guarantees that data returned by a query comes from a single point in time, i.e. the time when the query began.
Transaction-Level Read Consistency : all queries within the same transaction are consistent with respect to a single point in time, i.e. when the transaction began


Exclusive : SELECT ... FOR UPDATE  gets exclusive row lock

v$lock
TX row transaction lock
TM DML lock.

UL user-defined lock –DBMS_LOCK
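The TM/TX/UL locks above can be inspected via v$lock; a minimal sketch (the join and filter are illustrative):

```sql
-- Holders (block = 1) and waiters (request > 0) for DML locks
SELECT s.sid, s.serial#, s.username,
       l.type, l.lmode, l.request, l.block
FROM   v$lock    l
JOIN   v$session s ON s.sid = l.sid
WHERE  l.type IN ('TM', 'TX')
ORDER  BY l.block DESC;
```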


Profile  :


CREATE PROFILE  enduser  LIMIT
CPU_PER_SESSION           60000
LOGICAL_READS_PER_SESSION 1000
CONNECT_TIME              30
IDLE_TIME
PRIVATE_SGA               102400
CPU_PER_CALL              UNLIMITED
COMPOSITE LIMIT           60000000
FAILED_LOGIN_ATTEMPTS       3
PASSWORD_LIFE_TIME              90
PASSWORD_REUSE_TIME             180
PASSWORD_LOCK_TIME              3
PASSWORD_GRACE_TIME             3
Verify_function_one ;


ASM :

V$ASM_DISK

HEADER_STATUS : CANDIDATE/PROVISIONED , MEMBER  , FORMER         Others : UNKNOWN , CONFLICT  , FOREIGN(not created using ASM)

MODE_STATUS   : ONLINE, OFFLINE, UNKNOWN


ASM_DISKSTRING


Dataguard Process :

LGWR : writes the redo log buffer to the online redo log file. With SYNC transport it can also transmit redo to the remote destination as it is generated.
ARCH : archives the online redo log file on the primary at log switch and ships it to the remote destination
LNS : on primary database directly captures redo from redo log buffer or reads Online redo log files
RTS : Redo Transport Services transfer of redo data from the production database to stdby
RFS : Remote File Server Process: receives the redo from LGWR/ARCH and write to standby redo logfiles
ARC  : archive the standby redo log files
MRP  : Managed Recovery Process applies redo entries from standby redo log files or archived redo logs

Fetch Archive Log (FAL) : to re-transmit the archived log files

With SYNC , Log Writer (LGWR) of primary with LNS  and RFS writes to standby redo logfiles


Split Brain :

Condition which occurs on communication failure between the nodes due to issues with the interconnect (NETWORK HEARTBEAT).
When acknowledgement is not received from the other node, both rush to the voting disks to register and claim membership.
Whichever node is able to access more than half the number of voting disks is considered part of the cluster,
and the other is evicted to maintain the integrity of the cluster.

Each node has to register within a short timeout period (DISK HEARTBEAT) to the voting disks. If it fails, the node is considered
unhealthy and may be rebooted to protect the database using a KILL BLOCK.


Start/stop RAC :


crsctl stop cluster -all           -- all nodes

crsctl stop crs           crsctl stop cluster           -- local node
or
srvctl stop listener -n node1
srvctl stop database -d RACDB
srvctl stop asm -n node1 -f
srvctl stop nodeapps -n node1 -f
crsctl stop crs

crsctl start crs
srvctl start asm -n node1
srvctl start database -d oradb


Active Data Guard  : enables read-only access to a physical standby
Snapshot Standby   : convert the standby to read/write for testing and then back to a physical standby again.

OLR : http://sandeepnandhadba.blogspot.com/2014/07/oracle-local-registry-olr-in-rac-11gr2.html
OCR : http://sandeepnandhadba.blogspot.com/2014/07/oracle-cluster-registry-ocr-in-11gr2.html
VD : http://sandeepnandhadba.blogspot.com/2014/07/voting-disks-in-oracle-11gr2-rac.html
Startup : http://marthadba.blogspot.com/2016/03/clusterware-startup-process-step-by-step.html
B/G proc: http://marthadba.blogspot.com/2016/03/oracle-10g-background-processes.html  http://marthadba.blogspot.com/2016/04/oracle-rac-architecture-10g-and-11g.html
Waits :


Question : http://marthadba.blogspot.com/2016/05/oracle-rac-dba-interview-questions-and_25.html


Q 1. Row  Migration  and Row Chaining 

Row  Migration :

 Row size increases on update, so the entire row is moved to a new block, leaving behind a pointer (rowid of the new block) in the original block.
        Migrated rows are caused by updates;
Index reads will cause additional I/Os on migrated rows


Solution for Migrated rows : ALTER TABLE MOVE   -- lets you modify any of the storage attributes of the table ; Use higher PCTFREE to avoid future migration
ALTER TABLE row_mig_chain_demo MOVE
    PCTFREE 20
    PCTUSED 40
    STORAGE (INITIAL 20K
             NEXT 40K
             MINEXTENTS 2
             MAXEXTENTS 20
             PCTINCREASE 0);

     ALTER INDEX SYS_C003228 REBUILD;    -- Required since indexes become unusable after Alter table move

Solution for already migrated : dbms_redefinition utility

Row Chaining :

Chained rows are caused by either inserts or updates of LONG, BLOB, CLOB, LONG RAW columns, i.e. tables whose row size exceeds the block size. A row too large to fit into a single database block will have data in two or more blocks.

ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;-- and then check chain_cnt/num_rows in the user_tables

Create table CHAINED_ROWS ; ANALYZE TABLE row_mig_chain_demo LIST CHAINED ROWS into CHAINED_ROWS;  Select * from CHAINED_ROWS

Solution for chaining : Use large block size to store large columns LONG,BLOB, CLOB,LONG RAW

PCTFREE - The percentage of space reserved for future update of existing data.
PCTUSED - The percentage of minimum space used for insertion of new row data ; determines when the block gets back into the FREELISTS structure.
FREELIST - Structure where Oracle maintains a list of all free available blocks.
When free space falls below PCTFREE, the block is removed from the FREELIST. It is re-listed again when used space drops below the PCTUSED value.
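A sketch of setting these thresholds at table creation (values and table name are illustrative; PCTUSED/FREELISTS only apply under manual segment space management):

```sql
CREATE TABLE emp_demo (
  empno NUMBER,
  ename VARCHAR2(30)
)
PCTFREE 20    -- keep 20% of each block free for future updates (helps avoid row migration)
PCTUSED 40;   -- block returns to the freelist once used space drops below 40%
```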

Q 2 : Fragmentation 

Table fragmentation solution :


Wastage of space due to creation of "pockets" of free space within the table which cannot be reused.

As data is updated and/or deleted, pockets of empty space are created, leading to fragmented free space which is not used to hold new data but impacts performance


Export or CTAS
Shrink or move
redefinition

1.Export and import the table  or CTAS

2.ALTER TABLE . SHRINK SPACE [COMPACT] [CASCADE].  // alter table xxx deallocate unused space;

3.Alter table move (to another tablespace, or same tablespace) and rebuild indexes.
   requires alter table <table_name> enable row movement;

4. dbms_redefinition - This procedure will reorganize a table while it remains online for updates.

Tablespace fragmentation Solution :


Caused due to drop table, shrink tables, move tables, rebuilding indexes.
Not major performance impact except if it appears in backup

honey-combing and bubbling

honeycomb fragmentation : fix with "alter tablespace xxx coalesce"
bubble fragmentation    : avoid by using locally-managed tablespaces

Disk fragmentation Solution :

  Present raw discs to ASM


Q3 : deadlock
Two or more sessions are waiting for data locked by each other.
The statement that detects the deadlock is rolled back.
Increasing INITRANS can help. INITRANS and MAXTRANS are used to control concurrent access (ITL slots) to the same block.

Q4 : snapshot too old
If the before image/rollback information of a committed transaction is overwritten in the UNDO tablespace, and that before image is required by another query to provide read-consistent data, then the query will get ORA-1555: snapshot too old
Increase UNDO size or commit less often in cursor loop
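A sketch of sizing undo against ORA-1555 using v$undostat (a standard view; the arithmetic is illustrative):

```sql
-- Longest query seen and peak undo consumption per 10-minute bucket;
-- UNDO_RETENTION should at least cover the longest-running query
SELECT MAX(maxquerylen) AS longest_query_sec,
       MAX(undoblks)    AS max_undo_blocks
FROM   v$undostat;
```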




Q6 : Change in Query performance

Explain plan changed
Performance Advisory
Generate stats

SQLAccess advisor  : recommends missing indexes and missing materialized views

SQL Tuning Sets : Consists of queries and corresponding execution statistics DBMS_SQLTUNE.CREATE_SQLSET  // DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY /
DBMS_SQLTUNE.CREATE_TUNING_TASK // dbms_sqltune.execute_tuning_task


SQL Tuning Advisor : may recommend SQL profile which can be accepted dbms_sqltune.accept_sql_profile
SQL Plan Management
SQL Access Advisor

SQL Performance Advisor

Q7 : Joins

Nested loops better when a large table is joined with a small table.

Hash Joins are preferable when 2 large tables need to be joined.

Q8 : Flashback database


Q6 : Patch conflict








DBMS_REDEFINITION - provides a substantial increase in table availability. A primary key or rowid is needed, or alter table <> add constraint <> using index <>
EXEC DBMS_REDEFINITION.can_redef_table('TEST', 'REDEF_TAB');
create the interim table  TEST.REDEF_TAB2
EXEC DBMS_REDEFINITION.start_redef_table('TEST', 'REDEF_TAB', 'REDEF_TAB2');
EXEC DBMS_REDEFINITION.sync_interim_table('TEST', 'REDEF_TAB', 'REDEF_TAB2');
ALTER TABLE test.redef_tab2 ADD (CONSTRAINT redef_tab2_pk PRIMARY KEY (id));
or
    DBMS_REDEFINITION.copy_table_dependents(copy_indexes,copy_triggers,copy_constraints,copy_privileges,copy_statistics)
    triggers and views are both marked as invalid which can be ALTER...COMPILE
We have register_dependent_object to registers/unregister a dependent object
EXEC DBMS_REDEFINITION.finish_redef_table('TEST', 'REDEF_TAB', 'REDEF_TAB2');
DROP TABLE TEST.REDEF_TAB2;
ALTER TABLE TEST.REDEF_TAB RENAME CONSTRAINT

   Note : In 12c, You don’t have to create interim table either.
  dbms_redefinition.redef_table ( uname=>'FOO', tname=>'Tablename', table_part_tablespace=>'NewTablspaceUSERS');

   ALTER TABLE … MOVE PARTITION … ONLINE



Datafile Movement - In 12c, alter database move datafile '...' to '...'; even the SYSTEM tablespace's datafile can be moved in 12c while the DB is online;




Delete :
On delete, space is not released back to the OS. Oracle reuses this space when new rows are inserted in the block.
New rows are inserted only when used space falls below PCTUSED.
Then rows are inserted as long as PCTFREE remains free for updates


Data block :
Smallest DB logical unit ,data block size should be a multiple of the operating system's block size within the maximum (port-specific) limit to avoid unnecessary I/O.
data block is filled to the limit determined by PCTFREE

Max bigfile datafile size (TB) = block size in K * 4 :
8K blocks - 32TB / 16K blocks - 64TB / 32K blocks - 128TB

Extent :
deallocate unused extents in segments and return them to the tablespace


Clustered Tables :
Data segment is created for the cluster and clustered tables store information in that segment.
When you drop one table ,data segment remains for the other tables in the cluster, and no extents are deallocated.
A table cluster is a group of tables that share common columns and are often used together in queries.
So it's better to store all rows for each table in the same data blocks.
This helps to reduce I/O with Oracle cluster tables
Multi-table Index Cluster Tables
Single-table Index Cluster Table


Oracle Performance :

There are 4 plan stability features in the database

Stored Outlines can be collected by the database at runtime. They can exactly match a SQL text or force match. However, they are deprecated from 11g.
SQL Patches for manually inserting specific hints into a specific SQL ID
SQL Profiles are also a mechanism for inserting specific hints into a specific SQL statement, but can also do forced matching. Requires the Database Tuning Pack to be licensed, which is not the case with the other mechanisms.
SQL Baselines can be collected by the database at runtime either on request or automatically. They feed real experience of previous executions of a statement back into subsequent executions. They match by SQL_ID.

To inject hints into SQL statements at run-time without changing the code.
Part of the SQL Repair Advisor; free (no Tuning Pack licence required) and internally based on SQL profiles;
DBMS_SQLDIAG.CREATE_SQL_PATCH(sql_id, hint_text)
DBMS_SQLDIAG.ALTER_SQL_PATCH(name=>'my_sql_patch', attribute_name=>'STATUS', value=>'DISABLED');
DBMS_SQLDIAG.DROP_SQL_PATCH(name=>'my_sql_patch')


dbms_sqltune:

 Package helps to tune SQL statements in the SQL Tuning Advisor by creating a SQL TUNING TASK using SQL Tuning Sets. A SQL Tuning Set contains one or more SQL statements combined with their execution statistics, schema and bind variables.

STS can be loaded from AWR , another STS, or the cursor cache.
                      SELECT_CURSOR_CACHE // SELECT_SQLSET // SELECT_WORKLOAD_REPOSITORY

When we run SQL Tuning Advisor against a SQL statement or sql_id, it provides tuning recommendations that can be applied to that query to improve performance. It might suggest creating a few indexes or accepting a SQL profile.

    dbms_sqltune.create_sqlset(sqlset_name => 'SQLSET1', populate_cursor => dbms_sqltune.select_cursor_cache );
SELECT * FROM   TABLE(DBMS_SQLTUNE.select_sqlset ('test_sql_tuning_set'));
dbms_sqltune.create_tuning_task (sqlset_name => 'SQLSET1', task_name => 'TASK1');
DBMS_SQLTUNE.create_tuning_task
DBMS_SQLTUNE.execute_tuning_task
DBMS_SQLTUNE.report_tuning_task
DBMS_SQLTUNE.accept_sql_profile


SQL Profile    : Part of STA, it helps the optimizer (with optimizer statistics, data set, bind variable values etc.) to select the best plan without constraining it to any specific plan. Basically uses hints.
SQL profiles are reactive (created after a performance issue occurred) and only help to correct optimizer cost estimates. They do not guarantee the same plan each time the statement is parsed.

DBMS_SQLTUNE.create_tuning_task
DBMS_SQLTUNE.execute_tuning_task
DBMS_SQLTUNE.report_tuning_task
DBMS_SQLTUNE.accept_sql_profile

Automatic SQL Tuning with SQL profile: 

   During the maintenance window, if a new profile shows a threefold improvement and ACCEPT_SQL_PROFILES=TRUE, then
   the DB accepts the SQL profile; otherwise it reports the recommendation to create a SQL profile.
   DBMS_SQLTUNE.REPORT_AUTO_TUNING_TASK 
   DBA_ADVISOR with select and ADVISOR privileges.

Category with SQL profiles :

By default, all profiles are in the DEFAULT category. All sessions with the SQLTUNE_CATEGORY parameter set to DEFAULT can use the profile. Change the category to DEV for a profile you want to test, and only sessions with SQLTUNE_CATEGORY=DEV can use it.

Transporting a SQL profile from test to production(DBMS_SQLTUNE.PACK_STGTAB_SQLPROF)

SQL plan baselines : 

Oracle uses SQL plan management using baselines to prevent plan regressions and permit the optimizer to consider new plans.

consists of a set of accepted plans for a SQL statement;
the best plan from this set is selected, or if a different plan is found it is added to the plan history but not used until verified.
Baselines are proactive and reproduce a specific plan. They feed real experience of previous executions of a statement back into subsequent executions.

With SQL Plan Management (SPM) in 11g, the stored outline approach is deprecated and two parameters are used
optimizer_capture_sql_plan_baselines =  TRUE   --  creates initial plan baselines for new statements not already in the plan history
optimizer_use_sql_plan_baselines   = TRUE      -- if false, the database does not use any plan baselines in the database

dbms_spm.load_plans_from_cursor_cache   --extract plan
dbms_spm.load_plans_from_sqlset
dbms_spm.evolve_sql_plan_baseline             -- test newplan against verified plan


Types of Plans : Accepted Plans (available for use) ; Enabled Plans (eligible for use) ; Fixed Plans (preferred plans)

SQL plan history : contains both SQL plan baselines and unaccepted plans

DBMS_SPM used to load plans from  1. STS  2. shared SQL Area or 3. Staging table

SPM Evolve Advisor :  evolves recently added plans into the SQL plan baseline. When ACCEPT_PLANS is true, it automatically accepts all plans recommended by the task
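A sketch of inspecting and evolving baselines with DBMS_SPM (the &sql_handle substitution variable is illustrative):

```sql
-- Which plans are captured, and which are accepted/fixed
SELECT sql_handle, plan_name, enabled, accepted, fixed
FROM   dba_sql_plan_baselines;

-- Verify an unaccepted plan against the accepted ones
SET SERVEROUTPUT ON
DECLARE
  rpt CLOB;
BEGIN
  rpt := DBMS_SPM.evolve_sql_plan_baseline(sql_handle => '&sql_handle');
  DBMS_OUTPUT.put_line(rpt);
END;
/
```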



Expdp/Impdp :


ADD_FILE :  Add additional dump files.  ADD_FILE=hr2.dmp, dpump_dir2:hr3.dmp
NETWORK_LINK - specify database link to the source; objects imported directly from source without being written to a dump file.
flashback_time - expdp ..... flashback_time=\"to_timestamp\(\'09-05-2011 09:00:00\', \'DD-MM-YYYY HH24:MI:SS\'\)\"
FILESIZE - Specify the size of each dump file in units of bytes.
CLUSTER - distributing the export job across Oracle RAC instances. Use SERVICE_NAME with the CLUSTER=Y  to run only on instances defined for that service
REMAP -
   REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO  -- Pkg convert empno and store new values in the dumpfile e.g credit card no.
   remap_schema=SCOTT:TEST
   REMAP_TABLE=HR.EMPLOYEES:EMPS     -- to rename table
   REMAP_TABLESPACE
ENCRYPTION - Encrypt dump file  ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY
ENCRYPTION_PASSWORD - Password key for dump file
ENCRYPTION_MODE = PASSWORD ,TRANSPARENT , DUAL
ESTIMATE_ONLY - Calculate job estimates without performing the export
SQLFILE - Write all the SQL DDL to a specified file.

TARGET_EDITION (11.2) -
VERSION (10.1) -     ---helps to import into earlier release of Oracle( 12c to 11g)

EXIT_CLIENT
CONTINUE_CLIENT

STATUS
START_JOB
STOP_JOB
KILL_JOB - Detach and delete job.

USER_DATAPUMP_JOBS view or the V$SESSION_LONGOPS view


UNDO :

Rollback a particular transaction
Recover database from crash (roll back uncommitted transactions) or recover from logical corruption
Provide read consistency
For Flashback feature : flashback query to go back in time  specified by UNDO_RETENTION parameter

  With a fixed size, the UNDO_RETENTION initialization parameter is ignored unless retention guarantee is enabled; the DB provides the longest possible retention
  With AUTOEXTEND, the retention period is slightly longer than the longest-running query or UNDO_RETENTION, whichever is greater

If space is not enough, the DB starts reusing UNEXPIRED undo blocks, which causes the "snapshot too old" error for some SQL.
UNEXPIRED undo blocks (previous UNDO activity) automatically become EXPIRED based on the undo retention period
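The EXPIRED/UNEXPIRED state of undo extents can be checked directly; a sketch:

```sql
-- How much undo is ACTIVE, UNEXPIRED (still within retention) or EXPIRED (reusable)
SELECT status,
       COUNT(*)                    AS extents,
       ROUND(SUM(bytes)/1024/1024) AS mb
FROM   dba_undo_extents
GROUP  BY status;
```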


SQL> create undo tablespace UNDOTBS_NEW datafile '/u01/data/undo_new01.dbf' size 1g;
SQL> alter system set undo_tablespace=UNDOTBS_NEW scope=both;


RETENTION GUARANTEE       : unexpired extents not overwritten. Useful for Flashback Query
RETENTION NOGUARANTEE : unexpired extents  can be consumed if necessary

We can create more than one undo tablespace but only one can be active at a time

Bigfile tablespaces :

stores its data in a single datafile
Supported only for LMT with ASSM
Simplifies datafile management with OMF and ASM(Automated Storage Management)

CREATE BIGFILE TABLESPACE user_tbs DATAFILE '/u02/oradata/grid/user_tbs01.dbf' SIZE 1024M;
ALTER TABLESPACE user_tbs RESIZE 10G;
"alter tablespace xxx resize datafile" syntax. alloed on Big tablespace.

Temporary tablespace :

Used for sorting data on disk when the sort operation is too large to be done in RAM (e.g. hash joins);
 used for ORDER BY, GROUP BY and CREATE INDEX; SQL queries are less likely to run out of space.

DEFAULT TEMPORARY tablespace : alter database default temporary tablespace temp02;

Multiple Temporary Tablespaces allowed

No fragmentation - "All extents of temporary tablespaces are of uniform size"



Reclaim space :

Use Segment Advisory

Row space used within a table can be found out by
dbms_space.space_usage
avg_row_len*no. of rows
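A sketch of calling dbms_space.space_usage for an ASSM segment (the owner/table names are illustrative):

```sql
SET SERVEROUTPUT ON
DECLARE
  v_unf NUMBER;  v_unf_b NUMBER;
  v_fs1 NUMBER;  v_fs1_b NUMBER;   -- blocks 0-25% free
  v_fs2 NUMBER;  v_fs2_b NUMBER;   -- blocks 25-50% free
  v_fs3 NUMBER;  v_fs3_b NUMBER;   -- blocks 50-75% free
  v_fs4 NUMBER;  v_fs4_b NUMBER;   -- blocks 75-100% free
  v_full NUMBER; v_full_b NUMBER;
BEGIN
  DBMS_SPACE.space_usage('SCOTT', 'EMP', 'TABLE',
    v_unf, v_unf_b, v_fs1, v_fs1_b, v_fs2, v_fs2_b,
    v_fs3, v_fs3_b, v_fs4, v_fs4_b, v_full, v_full_b);
  DBMS_OUTPUT.put_line('full='||v_full||' mostly-empty(75-100% free)='||v_fs4);
END;
/
```

Many mostly-empty blocks below the HWM suggest the segment is a shrink/move candidate.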

Tablespace should use automatic segment space management (for shrink); otherwise use online table redefinition.

CTAS

1. Data pump (expdp, impdp)  or CTAS

2. Alter table <> shrink space cascade;
     ALTER TABLE scott.emp SHRINK SPACE; -- amend HWM
     ALTER TABLE scott.emp SHRINK SPACE COMPACT;     -- don't amend HWM
     ALTER TABLESPACE TEMP SHRINK SPACE ; (optionally add KEEP 512M or similar)

3. Alter table move TABLESPACE new_ts;
     alter index ind_big_stuff rebuild TABLESPACE new_ts;

4. Online reorg
      dbms_redefinition - This procedure will reorganize a table while it remains online for updates

5. Coalesce / deallocate
     alter table xxx coalesce;       -- puts together discontinuous fragmented extents
     alter index xxx coalesce; -- coalesces all leaf blocks within same branch of b-tree

    alter table xxx deallocate unused space; --removes space above the high-water mark
    alter index xxx deallocate unused space;

6. Resize datafile
     ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/file1.dbf' RESIZE 100M;


     exec dbms_space_admin.drop_empty_segments(schema_name=>'HR');

     PURGE TABLESPACE table_space_name;




Incremental backup :

backup blocks changed since a specified previous backup: differential incremental (the default) backs up since the most recent level 1 or 0; cumulative incremental backs up since the most recent level 0

A LEVEL 0 incremental backup is a complete backup, just like a full backup. It's the base for subsequent incremental backups
A LEVEL 1 (differential) incremental backup copies all blocks changed since the last LEVEL 0 or LEVEL 1 backup.
A LEVEL 1 CUMULATIVE backup copies all blocks changed since the last LEVEL 0 backup. Provides faster recovery time

RMAN> backup incremental level 1 database;

RMAN reads SCN of each data block and compares it to the checkpoint SCN of the parent incremental backup.
If it's greater than or equal to checkpoint SCN of the parent incremental backup, then RMAN copies the block.

If enough disk space is available, use cumulative incremental backups when recovery time is more important than backup space

block change tracking  : helps to identify changed blocks in datafiles without scanning the full datafiles.
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;



Oracle Advanced Compression :

Basic Table Compression in Enterprise Edition 11g


DBMS_COMPRESSION -  Compression Advisor

CREATE TABLE emp (..) Row store compress Basic  -- after initial load, no compression in DML
CREATE TABLE emp (..) Row store compress Advanced
ALTER TABLE …      ROW STORE COMPRESS ADVANCED    -- compress only future DML
ALTER TABLE … MOVE ROW STORE COMPRESS ADVANCED    -- compress existing data and future DML


Create smallfile tablespace ... default compress for oltp;
Create smallfile tablespace ... default compress for all operations;
CREATE TABLE emp (..) COMPRESS FOR OLTP;
ALTER TABLE … COMPRESS FOR OLTP
ALTER TABLE … MOVE COMPRESS FOR OLTP
create table mytab (col1 number) COMPRESS FOR ALL OPERATIONS; -- 11g
create table fred  (col1 number) COMPRESS FOR DIRECT_LOAD OPERATIONS;
alter table fred move COMPRESS / NOCOMPRESS;
ALTER TABLE test_tab MOVE PARTITION test_tab_q2 COMPRESS;



OLTP table compression (11g) is part of the Advanced Compression Option; in 12c it is called Advanced Row Compression.

No major Performance degradation as Oracle can read compressed blocks directly without having to first uncompress the block.


3. Online Redefinition (DBMS_REDEFINITION)

ALTER TABLE MOVE PARTITION...COMPRESS – Will maintain indexes
ALTER TABLE MOVE PARTITION...UPDATE INDEXES clause --Global indexes maintained

       Not for tables with LOB or more than 255 columns


Hybrid Columnar Compression (HCC) : best suited for tables that are not or lightly modified,
CREATE TABLE emp (..) Column store compress For Query
CREATE TABLE emp (..) Column store compress For Archive

Automatic Data Optimization (ADO) policy :
  Create policies that automate data compression and data movement.Heat Map track the usage of this table over time.
DBMS_ILM

conditions : no access | no modification | creation time

ALTER TABLE orders ILM ADD POLICY
ROW STORE COMPRESS ADVANCED ROW
AFTER 2 DAYS OF NO MODIFICATION;

ALTER TABLE sales ILM ADD POLICY
COLUMN STORE COMPRESS FOR QUERY ROW
AFTER 2 DAYS OF NO MODIFICATION;

ALTER TABLE orders ILM ADD POLICY
COLUMN STORE COMPRESS FOR ARCHIVE HIGH SEGMENT
AFTER 90 DAYS OF NO MODIFICATION;


RMAN Compression : LOW, MEDIUM, and HIGH.  backup data is compressed before it is written to disk

Data Pump compression : No need to decompress a dump file before importing  ALL | DATA_ONLY | METADATA_ONLY(default)

DG redo log compression : redo data transmitted in compressed format to reduce network bandwidth and transmission time



Patch : 

11g SQL> @catbundle.sql psu apply -- replaced with datapatch


12c Unix>$ORACLE_HOME/OPatch/datapatch -verbose

$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
$ $ORACLE_HOME/OPatch/opatch apply
$ $ORACLE_HOME/OPatch/opatch lsinventory

Restart and apply datapatch
$ sqlplus / as sysdba
SQL> startup
SQL> alter pluggable database all open


$ $ORACLE_HOME/OPatch/datapatch -verbose -- adds entries to both the DBA_REGISTRY_HISTORY and DBA_REGISTRY_SQLPATCH views

SQL> SELECT patch_id, status FROM dba_registry_sqlpatch;

After creating new databases 12.1.0.1 and Oracle 12.1.0.2 always run $ORACLE_HOME/OPatch/datapatch -verbose
Since Oracle Database 12.2.0.1 , it's not required to run datapatch after creation

Patching Steps :

opatch version
opatch lsinventory -detail -oh /u01/app/oracle/12.1.0.2/grid
opatch query -is_rolling_patch | grep rolling
opatchauto apply /u01/27468957 -oh /u01/app/oracle/product/12.1.0.2/db_1 -analyze  or opatch prereq CheckConflictAgainstOHWithDetail
opatchauto apply /u01/27468957 -oh /u01/app/oracle/product/12.1.0.2/db_1



dbms_profiler : 

Procedures to capture performance information from PL/SQL; to detect where in the code time is being spent.

SQL> exec dbms_profiler.start_profiler ('Test');
SQL> exec dbms_profiler.flush_data();
SQL> exec dbms_profiler.stop_profiler();







Corruption : 

DB_BLOCK_CHECKING = HIGH | LOW

DBMS_REPAIR.check_object and
DBMS_REPAIR.fix_corrupt_blocks
 DBMS_REPAIR.dump_orphan_keys ( -- rebuild index If the orphan key count is greater than 0
DBMS_REPAIR.rebuild_freelists (
DBMS_REPAIR.skip_corrupt_blocks (  -- DML will ignore corrupt blocks

RMAN > VALIDATE DATABASE;

RMAN > BLOCKRECOVER DATAFILE 3 BLOCK 121;



Partitions :

Range Partitioning :  based on a range of values.
List Partitioning : list of values
Auto-List Partitioning
Hash Partitioning
Composite Partitioning
Multi-Column Range Partitioning
Interval Partitioning : define interval to create future partition
Reference Partitioning : partitioning using a parent-child relationship
Virtual Column Based Partitioning
Interval Reference Partitioning
Range Partitioned Hash Cluster : Allows hash clusters to be partitioned by ranges

Local partitioned index :

Key value for the local index must be identical to key value for the table partition
Number of partitions in the index have to match the number of partitions in the base table.
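A minimal sketch of the local vs global distinction (table, partition and index names are illustrative):

```sql
CREATE TABLE sales_demo (
  sale_id NUMBER,
  sale_dt DATE
)
PARTITION BY RANGE (sale_dt) (
  PARTITION p2023 VALUES LESS THAN (DATE '2024-01-01'),
  PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01')
);

-- LOCAL: equipartitioned with the table, one index partition per table partition
CREATE INDEX sales_dt_ix ON sales_demo (sale_dt) LOCAL;

-- GLOBAL: partitioned independently of the table's partitioning key
CREATE INDEX sales_id_gix ON sales_demo (sale_id)
GLOBAL PARTITION BY HASH (sale_id) PARTITIONS 4;
```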

Global partitioned index :

partition key is independent of Table partition key.
global partitioned index can have as many partitions for the index as desired.
Enforcing uniqueness using PK or unique index on the table will force you to create global indexes.
when a table partition is dropped as part of a reorganization, the entire global index will be affected.


Parallel :

ALTER SESSION ENABLE/DISABLE PARALLEL DML;

INSERT /*+ ENABLE_PARALLEL_DML */ …

INSERT /*+ APPEND PARALLEL (t3,2) */ INTO t3 SELECT * FROM t4
DELETE /*+ PARALLEL (t2,2) */ FROM t2;

DOP of 3 for a CREATE TABLE AS SELECT statement

Can be used for UPDATE, MERGE, and DELETE
Parallel DML is not supported on a table with bitmap indexes if the table is not partitioned.
Parallel DML operations cannot be done on tables with triggers.

https://www.akadia.com/services/ora_parallel_processing.html



Switchover Steps :

Primary Side

SQL> alter system archive log current;

SQL> alter database commit to switchover to standby with session shutdown;

SQL> shutdown immediate;
SQL> startup mount;


Data Guard Side


SQL> alter database recover managed standby database cancel;

SQL> alter database commit to switchover to primary with session shutdown;

SQL> shutdown immediate;
SQL> startup;


Primary Side

SQL> alter database recover managed standby database [ using current logfile ] disconnect;


If standby redo log files exist, real-time apply can be used with the commands below.
SQL> alter database open read only;
SQL> alter database recover managed standby database using current logfile disconnect;


Dataguard Failover(with least amount of data loss) :

In MAXIMUM PROTECTION or MAXIMUM AVAILABILITY :

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;

Allow the MRP to finish applying the redo data from the archived redo log file and the standby redo log files.

SQL> alter database recover managed standby database finish;
           ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;


In MAXIMUM PERFORMANCE :(may not have standby redo logs configured) :

Resolve any archive gap:
SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE '..';
manually copy online redo logs from original primary and register.
SQL> alter database recover managed standby database finish;
SQL> alter database COMMIT TO SWITCHOVER TO PRIMARY;


Without considering above mentioned data recovery options

SQL> recover standby database until cancel/finish
SQL> alter database activate standby database;
SQL> ALTER DATABASE OPEN;

Reconvert failed Primary to standby :

NewPri-SQL> SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;
New-Stdby-SQL> STARTUP MOUNT;
New-Stdby-SQL> FLASHBACK DATABASE TO SCN standby_became_primary_scn;
New-Stdby-SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
New-Stdby-SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;




SESSIONS/DB Hang :

The SESSIONS parameter is derived from the value of the PROCESSES parameter unless you override it:

PROCESSES = operating-system dependent
SESSIONS = (1.5 * PROCESSES) + 22      -- 11g
TRANSACTIONS = 1.1 * SESSIONS

v$resource_limit
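To see how close the instance is to these limits, V$RESOURCE_LIMIT can be queried, e.g.:

```sql
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('processes', 'sessions');
```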

sqlplus -prelim "/as sysdba"

Oracle Managed Files :

DB_CREATE_FILE_DEST - sets the default location of data files and temp files
DB_CREATE_ONLINE_LOG_DEST_n - sets the default location of online redo log files and control files
DB_RECOVERY_FILE_DEST - sets the location of the fast recovery area (flashback logs, archived logs, RMAN backups)


Commands :

Create tablespace test datafile 'c:\oracle\test.dbf' size 2G autoextend on maxsize unlimited
EXTENT MANAGEMENT LOCAL uniform size 1M
segment space management AUTO;

Specify the EXTENT MANAGEMENT LOCAL clause if you want to specify either the AUTOALLOCATE clause or the UNIFORM clause

AUTOALLOCATE causes tablespace to be system managed with a minimum extent size of 64K

alter tablespace test add datafile 'c:\oracle\test02.dbf' size 2G autoextend ON next 100m maxsize 5G;
drop   tablespace test including contents and datafiles;


CREATE TEMPORARY TABLESPACE lmtemp2
TEMPFILE '/u02/oracle/data/lmtemp201.dbf' SIZE 50M
TABLESPACE GROUP group1;

ALTER TABLESPACE lmtemp2 TABLESPACE GROUP group2;

ALTER DATABASE sample DEFAULT TEMPORARY TABLESPACE group2;

select DBMS_METADATA.GET_DDL('TABLESPACE','USERS') from dual;







General :

1. Diff. between Auto Allocate on and auto extend on.
2.Uniform size in Storage and ASM
3. Row Migration and Row Chaining
4.Defragmentation. How it happens at table level.
5. Table redefinition methods. (All 4 ways)
6.What is dbms_sqltune
6.Diff between sql profile and sql base line
7. Explain CLUSTER, ENCRYPTION, FLASHBACK, REMAP parameters in expdp/impdp
8.Rman backup Full and Point In time recovery explain.
9.What is Flashback query, Flashback table, Flashback database differences.
10.Undo tablespace use and syntax.
11.Temporary tablespace and benefits.
24.What is table partition.
26.what dml causes table fragmentation.
27. What are locks.
33.How does rman do the incremental backup.
34.Syntax for Tablespace creation and datafile addition.

16.Advanced level compression techniques in oracle
17. Explain object and DML level operation of parallelism.


RAC :

12. DG background processes, explain steps of switch over and failover.
13. For a RAC setup of one database with 3 instances, tell the background processes that runs in Dataguard Physical Standby?
14.Start and stop commands for database in RAC and stand alone.
15. Syntax to kill session in RAC and Standalone.


Space:

22. What is big file in oracle and what are the max sizes for 8kb block and 64kb block size?
28. You delete rows and it deletes from block how far can new rows be filled again.
29.How much space can a data block be filled in 1 GB datafile..
35.Shrinking db files and tablespace?
36. What are the ways to reclaim space?
37. Pre-requisites for table shrinking.
38.When row deletion happens what happens to the space.
39. What are the block spaces allocated for datafiles.
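For question 22: a bigfile tablespace has a single datafile that can address 2^32 blocks, versus 2^22 blocks per datafile in a smallfile tablespace (note Oracle's maximum block size is 32K, not 64K). The arithmetic can be checked directly:

```sql
SELECT POWER(2,32) * 8192  / POWER(1024,4) AS bigfile_8k_tb,    -- 32 TB
       POWER(2,32) * 32768 / POWER(1024,4) AS bigfile_32k_tb,   -- 128 TB
       POWER(2,22) * 8192  / POWER(1024,3) AS smallfile_8k_gb   -- 32 GB
FROM dual;
```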


Performance :

25.what is sql id and where we use?
19.Whats sql advisory report and how to post the best execution plan. And how to compare yesterday and today's report.
20 . What is dbms_profile and execution plan,sql baseline.
21. Whats is ASH, AWR and ADDM for?
23.What sql tuning utility you use?
16. If max sessions for the database is set to 300 but more users connect and some don't log out, which parameters control this and where?
17.At 3pm a table was modified incorrectly how to go back to it with scn and time.
18. A query ran yesterday for 2mins but takes 4hrs today, tell me the sequence how your troubleshooting will proceed.

Patch :

30. Difference between 11g and 12c patching.
31. What is the latest patch released.
32. Tell me steps of patching.
40. What is the command for applying patch. Full command.




DataGuard Broker :

$dgmgrl sys/pwd@oltp

DGMGRL> CREATE CONFIGURATION 'dg' AS PRIMARY DATABASE IS 'prod' CONNECT IDENTIFIER IS prod.foo.com;
DGMGRL> ADD DATABASE 'testdb'     AS CONNECT IDENTIFIER IS testdb MAINTAINED AS PHYSICAL;

DGMGRL> START OBSERVER;
DGMGRL> SWITCHOVER TO "standbyDbname";
DGMGRL> FAILOVER  TO "standbyDbname";
DGMGRL> CONVERT DATABASE 'devdb' to SNAPSHOT STANDBY;
DGMGRL> CONVERT DATABASE 'devdb' to PHYSICAL STANDBY;


Fast-Start Failover(FSFO) :

Fast-start failover allows the Data Guard broker to fail over a failed primary database automatically to a predetermined standby database.
If the primary is unavailable for connection, after FastStartFailoverThreshold (default 30 seconds) the observer initiates a failover.


FastStartFailoverThreshold : set on Primary and Secondary
DGMGRL> edit database 'pitt' set property 'FastStartFailoverTarget'='cosp';
DGMGRL> edit database 'cosp' set property 'FastStartFailoverTarget'='pitt';


FastStartFailoverLagLimit :  DGMGRL> edit database 'cosp' set property 'FastStartFailoverLagLimit'='60';

DGMGRL> ENABLE FAST_START FAILOVER CONDITION 'Inaccessible Logfile';
DGMGRL> ENABLE FAST_START FAILOVER CONDITION 'Datafile Offline';
DGMGRL> ENABLE FAST_START FAILOVER CONDITION 'Corrupted Controlfile';
DGMGRL> ENABLE FAST_START FAILOVER CONDITION 'Stuck Archiver';


DGMGRL> show fast_start failover
DGMGRL> start observer file='/tmp/fsfo_mydg.dat'
DGMGRL> enable fast_start failover
DGMGRL> SHOW CONFIGURATION;
DGMGRL> show database orcl_primary


SQL Performance Analyzer : 

Compare workload performance before and after a change (optimizer version upgrade, parameter change, new index, etc.)

DBMS_SQLPA.create_analysis_task(sqlset_name => 'spa_test_sqlset');
DBMS_SQLPA.execute_analysis_task( task_name => :v_task, execution_type => 'test execute',execution_name  => 'before_change');
    Make a change like create Index and DBMS_STATS.gather_table_stats
DBMS_SQLPA.execute_analysis_task( task_name => :v_task, execution_type => 'test execute',execution_name  => 'after_change');
SELECT DBMS_SQLPA.report_analysis_task(:v_task, 'HTML', 'ALL')

To compare SQL Tuning sets : execution_type => 'convert sqlset'
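The fragments above fit together roughly as follows; the SQL tuning set name and bind variable are assumptions, and 'compare performance' is the execution type that produces the comparison:

```sql
VARIABLE v_task VARCHAR2(64)

BEGIN
  :v_task := DBMS_SQLPA.create_analysis_task(sqlset_name => 'spa_test_sqlset');

  DBMS_SQLPA.execute_analysis_task(
    task_name      => :v_task,
    execution_type => 'test execute',
    execution_name => 'before_change');

  -- make the change here (e.g. create an index, gather stats) ...

  DBMS_SQLPA.execute_analysis_task(
    task_name      => :v_task,
    execution_type => 'test execute',
    execution_name => 'after_change');

  DBMS_SQLPA.execute_analysis_task(
    task_name      => :v_task,
    execution_type => 'compare performance',
    execution_name => 'comparison');
END;
/

SET LONG 1000000 PAGESIZE 0
SELECT DBMS_SQLPA.report_analysis_task(:v_task, 'HTML', 'ALL') FROM dual;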


Reports change in execution plan :

If the SQL statement has regressed, tune the SQL
DBMS_SQLTUNE.CREATE_TUNING_TASK( spa_task_name => 'my_spa_task', spa_task_owner => 'immchan', spa_compare_exec => 'my_exec_compare');
DBMS_SQLTUNE.EXECUTE_TUNING_TASK(spa_task_name => 'my_spa_task');

To tune in the environment other than Production

DBMS_SQLTUNE.CREATE_SQLSET('SUB_STS1', 'test purpose');
DBMS_SQLTUNE.SELECT_SQLPA_TASK( task_name  => 'SPA_TASK1', execution_name => 'COMP', level_filter => 'REGRESSED'))
DBMS_SQLTUNE.LOAD_SQLSET('SUB_STS1', sqlset_cur);


DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name => 'STG_TAB1', schema_name => 'JOHNDOE', tablespace_name => 'TBS_1',  db_version => DBMS_SQLTUNE.STS_STGTAB_11_1_VERSION);
DBMS_SQLTUNE.PACK_STGTAB_SQLSET( sqlset_name => 'SUB_STS1', sqlset_owner => 'JOHNDOE', staging_table_name => 'STG_TAB1', staging_schema_owner => 'JOHNDOE', db_version => DBMS_SQLTUNE.STS_STGTAB_11_1_VERSION);
DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(sqlset_name =>'SUB_STS1', staging_table_name => 'STG_TAB1',  replace => TRUE);


DBMS_SQLTUNE.CREATE_TUNING_TASK(sqlset_name  => sts_name,  sqlset_owner => sts_owner,  task_name    => tune_task_name);
DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER(:tname,   'APPLY_CAPTURED_COMPILENV', 'FALSE');
DBMS_SQLTUNE.EXECUTE_TUNING_TASK(tname);



DBA_ADVISOR_TASKS
DBA_ADVISOR_SQLPLANS
DBA_ADVISOR_SQLSTATS
DBA_ADVISOR_FINDINGS


Wallet // Keystore for TDE : 

Location : $ORACLE_BASE/admin/DB_UNIQUE_NAME/wallet, or in sqlnet.ora via WALLET_LOCATION / ENCRYPTION_WALLET_LOCATION

ENCRYPTION_WALLET_LOCATION =  (SOURCE =(METHOD = FILE)(METHOD_DATA =    (DIRECTORY = /u01/app/oracle/admin/$ORACLE_SID/encryption_keystore/)))

Create keystore

ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY myPassword;
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY myPassword WITH BACKUP CONTAINER=ALL;   -- creates the master encryption key

SELECT * FROM v$encryption_wallet;

Then we can encrypt individual columns in tables or a whole tablespace

CREATE TABLESPACE encrypted_ts DATAFILE SIZE 128K AUTOEXTEND ON NEXT 64K ENCRYPTION USING 'AES256' DEFAULT STORAGE(ENCRYPT);
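Column-level encryption, as an alternative to encrypting the whole tablespace, looks like the sketch below (the table and column names are hypothetical):

```sql
CREATE TABLE payment_info (
  cust_id NUMBER,
  card_no VARCHAR2(19) ENCRYPT USING 'AES256'   -- only this column is encrypted
);
```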


Note :
If PDB/CDB restarted, keystore must be opened
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY myPassword CONTAINER=ALL;

Auto-Login Keystores :
  ADMINISTER KEY MANAGEMENT CREATE LOCAL AUTO_LOGIN KEYSTORE FROM KEYSTORE '/u01/app/oracle/admin/cdb1/encryption_keystore/' IDENTIFIED BY myPassword;



Wallet (secure external password store) : stores credentials so you can connect using only a tnsnames alias

mkstore -wrl /oracle/admin/DBNAME/wallet -create

SQLNET.WALLET_OVERRIDE = TRUE
WALLET_LOCATION=(
  SOURCE=(METHOD=FILE)
  (METHOD_DATA=(DIRECTORY=/oracle/admin/DBNAME/wallet))
)

Create TNS alias
For this TNS alias, add the username and password:

mkstore -wrl /oracle/admin/DBNAME/wallet -createCredential DBFS dbfs_admin p@ssw0rd


sqlplus /@DBFS

RAC Installation Steps :

https://www.oracle.com/technetwork/articles/hunter-rac11gr2-iscsi-088677.html


Post Installation # Verify Oracle Grid Infrastructure and Database Configuration

@?/rdbms/admin/utlrp.sql
[oracle@racnode1 ~]$  sqlplus / as sysdba ; SQL>  alter system set cluster_database=false scope=spfile sid='racdb1';
[oracle@racnode1 ~]$  srvctl stop database -d racdb

[oracle@racnode1 ~]$  sqlplus / as sysdba ; startup mount ; SQL>  alter database archivelog; SQL>  alter system set cluster_database=true scope=spfile sid='racdb1'; SQL> shutdown immediate
[oracle@racnode1 ~]$  srvctl start database -d racdb

crsctl check cluster -- CRS, CSS, Event Manager online

srvctl status database -d racdb
srvctl status instance -d racdb -i racdb1

srvctl status nodeapps -- VIPs for racnode1 and racnode2 ; Network ; GSD ; ONS and eONS
srvctl config nodeapps

srvctl config database
srvctl config database -d racdb -a

srvctl status asm
srvctl config asm -a

srvctl status listener
srvctl config listener -a

srvctl status scan
srvctl config scan

srvctl status vip -n racnode1
srvctl status vip -n racnode2

srvctl config vip -n racnode1
srvctl config vip -n racnode2
srvctl config nodeapps -a -g -s -l
[oracle@racnode1 ~]$  cluvfy comp clocksync -verbose

SELECT  inst_id   , instance_number inst_no   , instance_name inst_name   , parallel   ,
status   , database_status db_status   , active_state state   , host_name host
FROM gv$instance ORDER BY inst_id;
select name from v$datafile union select member from v$logfile union select name from v$controlfile union select name from v$tempfile;

SELECT path FROM   v$asm_disk;





==== Pre-requisite =====

Oracle Preinstallation RPM (pre-included from 5.2 onwards)
# yum install oracle-database-server-12cR2-preinstall


# rpm -qa | grep ssh -- confirm that SSH packages are installed. Ensure stty commands are removed from the login profiles.

Oracle highly recommends deploying the Unbreakable Enterprise Kernel in your Oracle Linux environment


# cat /etc/redhat-release


# rpm -q package_name
# rpm -iv cvuqdisk-1.0.10-1.rpm -- Without cvuqdisk, Cluster Verification Utility cannot discover shared disks.
                                -- If you do not use an Oracle Preinstallation RPM, install the cvuqdisk RPM.

Verifying the Disk I/O Scheduler

# cat /sys/block/${ASM_DISK}/queue/scheduler
noop [deadline] cfq

Network Time Protocol for Cluster Time Synchronization should be in place.

Either all the nodes use only IPv4, or all the nodes use only IPv6.

Ensure that the private IP addresses are reachable only by the cluster nodes and that no firewall blocks the interconnect.
Oracle mDNS daemon uses multicasting on all interfaces to communicate with other nodes in the cluster.


The VIP is on the same subnet as your public interface
VIP  is registered in the grid naming service (GNS), the DNS, or in a hosts file.

Grid Naming Service (GNS) : uses mDNS  to enable the cluster to map host names and IP addresses dynamically as nodes are added and
                            removed from the cluster, without requiring additional host configuration in the DNS.
To use GNS, you must specify a static IP address for the GNS VIP address.
name resolution requests to the cluster are delegated to the GNS, listening on the GNS virtual IP address.
DNS must be configured to delegate resolution requests for cluster names to the GNS.


Each node must have at least one public network adapter or network interface cards (NIC).
Each node must have at least one private network adapter or network interface cards (NIC).  (RDS) with Infiniband // User datagram protocol (UDP) with network adapters and switches



Every node's private interface(interconnects) must be on the same subnet and must connect to every node of the cluster.

If using multiple interfaces(Redundant Interconnect) , Each private interface should be on a different subnet


for VIP failover, you need to enable ARP(Address Resolution Protocol)


NOTE : To enable Oracle Flex Cluster, you must at least configure a GNS virtual IP address.


public IP addresses, VIP addresses, and SCAN addresses in the cluster should be on the same subnet.

any node that does not have access to an absolute majority of voting files (more than half) is restarted.


A flex disk group can consolidate files with different redundancy requirements into a single disk group.


A quorum failure group is a special type of failure group that is used to store the Oracle Clusterware voting files.
The quorum failure group is used to ensure that a quorum of the specified failure groups are available.
Disks in the quorum failure group do not contain user data



Grid Infrastructure Management Repository (GIMR), or the Management Database (MGMTDB) :
 GIMR is a multitenant database with a pluggable database (PDB) for the GIMR of each cluster.
Real time performance data the Cluster Health Monitor collects
Fault, diagnosis, and metric data the Cluster Health Advisor collects
Cluster-wide events about all resources that Oracle Clusterware collects
CPU architecture data for Quality of Service Management (QoS)



Oracle ASM Filter Driver (Oracle ASMFD) :  Rejects write I/O requests that are not issued by Oracle software.
Metadata required for Rapid Home Provisioning




========================

Q5 :   Query performance : Adaptive query optimization

Plans are created using optimizer statistics. If the stats are stale or unhealthy, they can lead to bad plans.
Even during query execution, the optimizer may find that the plan generated with the latest available stats is not efficient given actual conditions.
The optimizer can compute a better plan by judging the effectiveness of the current plan during query execution.

The first execution of a SQL statement is done as usual.
The current plan's estimates are compared with the actuals (E-Rows vs A-Rows).
If they vary significantly, the optimizer looks for a replacement plan on the next execution and marks the original plan as IS_REOPTIMIZABLE.
SQL plan directives are automatically created and stored in the SYSAUX tablespace. DBA_SQL_PLAN_DIRECTIVES
On the next execution, SQL plan directives trigger dynamic sampling for the tables in the query.
Dynamic sampling results are held in the SQL plan directive repository and can even be used by other queries. OPTIMIZER_DYNAMIC_SAMPLING --> 11
A SQL plan directive that is no longer necessary is automatically purged after 53 weeks.

In 12.2, adaptive functionality is controlled   by OPTIMIZER_ADAPTIVE_PLANS / OPTIMIZER_ADAPTIVE_STATISTICS (obsolete : OPTIMIZER_ADAPTIVE_FEATURES)
If OPTIMIZER_ADAPTIVE_STATISTICS is set to TRUE, adaptive statistics features used if OPTIMIZER_FEATURES_ENABLE is set to 12.1.0.1 or above.
If OPTIMIZER_ADAPTIVE_STATISTICS is set to FALSE, adaptive statistics features not used ;SQL plan directives created but not used to refine plans.

Set OPTIMIZER_ADAPTIVE_REPORTING_ONLY = TRUE to check how many SQL statements would be affected by the new adaptive plans.
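Cursors that have been flagged for reoptimization can be seen in V$SQL, e.g.:

```sql
SELECT sql_id, child_number, is_reoptimizable, is_resolved_adaptive_plan
FROM   v$sql
WHERE  is_reoptimizable = 'Y';
```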


INCREMENTAL_STALENESS : USE_STALE_PERCENT - a partition's statistics are not considered stale if the change percentage is below this value.
USE_LOCKED_STATS - partitions with locked statistics are never considered stale, regardless of changes.



Difference between central/global inventory and local inventory


What are the components of Grid control?

OMS (Oracle Management Server)
OMR (Oracle Management Repository)
OEM Agent


What is the use of root.sh & oraInstRoot.sh?
Ans:
Changes ownership & permissions of oraInventory
Creating oratab file in the /etc directory
In RAC, starts the clusterware stack



What is the difference between SYSDBA, SYSOPER and SYSASM?

SYSOPER can’t create and drop database.
SYSOPER can’t do incomplete recovery.
SYSOPER can’t change character set.
SYSOPER can’t CREATE DISKGROUP, ADD/DROP/RESIZE DISK

SYSASM can do anything SYSDBA can do.




How you will find out fragmentation of index?

AUTO_SPACE_ADVISOR_JOB will run in daily maintenance window and report fragmented indexes/Tables.

analyze index <index_name> validate structure;

This populates the table ‘index_stats’. It should be noted that this table contains only one row and therefore only one index can be analysed at a time.

An index should be considered for rebuilding under any of the following conditions:

* The percentage of deleted rows exceeds 30% of the total, i.e. if del_lf_rows / lf_rows > 0.3.
* If the ‘HEIGHT’ is greater than 4.
* If number of rows in index (‘LF_ROWS’) is significantly smaller than ‘LF_BLKS’ it indicate large number of deletes and needs to be rebuilt.
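The checks above can be run as follows (the index name is an assumption; note that ANALYZE ... VALIDATE STRUCTURE without the ONLINE keyword locks the table, while ONLINE does not populate index_stats):

```sql
ANALYZE INDEX emp_pk VALIDATE STRUCTURE;

SELECT name, height, lf_rows, lf_blks, del_lf_rows,
       ROUND(del_lf_rows / NULLIF(lf_rows, 0) * 100, 2) AS pct_deleted
FROM   index_stats;   -- holds only the most recently analyzed index
```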


What is fencing?

When a cluster node fails, I/O fencing prevents updates by the failed instance by detecting the failure and preventing split-brain in the cluster.
This methodology is called I/O fencing, sometimes also disk fencing or failure fencing.

What are nodeapps?

VIP, listener, ONS, GSD


What are the wait events in RAC?
Ans:
gc buffer busy
gc buffer busy acquire
gc current request
gc cr request
gc cr failure
gc current block lost
gc cr block lost
gc current block corrupt
gc cr block corrupt
gc current block busy
gc cr block busy
gc current block congested
gc cr block congested.
gc current block 2-way
gc cr block 2-way
gc current block 3-way
gc cr block 3-way
(gc current/cr block n-way, n is number of nodes)
gc current grant 2-way
gc cr grant 2-way
gc current grant busy
gc current grant congested
gc cr grant congested
gc cr multi block read
gc current multi block request
gc cr multi block request
gc cr block build time
gc current block flush time
gc cr block flush time
gc current block send time
gc cr block send time
gc current block pin time
gc domain validation
gc current retry
ges inquiry response
gcs log flush sync