Basic Level Oracle Interview Questions


1)    Go through Oracle Architecture.

2)     How does query parsing happen? Explain the process completely. (For a quick refresher, search YouTube for "Oracle architecture in less than 10 minutes"; it is a 4-minute video.)

When Oracle gets a SQL query, it needs to execute some tasks before being able to really execute the query. These tasks make up what is called parsing. Oracle uses a shared memory area named the shared pool to cache parsed statements. Whenever a statement is executed, Oracle evaluates the statement in terms of syntax, validity of the objects being referenced, and the privileges of the user.

A parse is either soft (the statement is already parsed and available in memory) or hard (all parsing steps must be carried out).

3)   What is shared server and dedicated server?
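
In a dedicated server configuration each user session gets its own dedicated server process; in a shared server configuration many sessions share a small pool of server processes, with dispatchers routing requests between them. A quick way to check which mode a session is using (the SERVER column shows DEDICATED or SHARED):

SQL>select username, program, server from v$session;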

 

4)   What is tnsnames.ora? Where is it located? How to configure tns entries? If the file is missing, how will you bring it back?

The tnsnames.ora is a SQL*Net configuration file that normally resides in the ORACLE_HOME/network/admin directory.  Specifically, tnsnames.ora defines database addresses so that connections can be made to them.

The basic format of the tnsnames.ora file is:

net_service_name=
  (description=
    (address=(protocol_address_information))
    (connect_data=
      (service_name=service_name)))

A sample tnsnames.ora entry might look like the following:

ORATST=
  (description=
    (address_list=
      (address = (protocol = TCP)(host = fu.bar)(port = 1521))
    )
    (connect_data =
      (service_name = oratst)
    )
  )
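
Once an entry is configured, you can verify that the name resolves and the listener is reachable (using the ORATST entry above):

$ tnsping ORATST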

5)     What is sqlnet.ora?

The sqlnet.ora file is the profile configuration file, and it resides on the client machines and the database server.

The sqlnet.ora file is created by running the Network Configuration Assistant (NETCA).  The sqlnet.ora is a text file that contains basic configuration details used by SQL*Net.  It is normally found in the ORACLE_HOME/network/admin directory on Windows systems and in $ORACLE_HOME/network/admin on UNIX systems.
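
A minimal sqlnet.ora sketch (parameter values are illustrative):

NAMES.DIRECTORY_PATH = (TNSNAMES, EZCONNECT)
SQLNET.EXPIRE_TIME = 10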

6)     What is listener.ora? Where is it located?

The listener.ora file contains the server side network configuration parameters.  It is usually located in the ORACLE_HOME/network/admin directory.

LISTENER=
  (description=
    (address_list=
      (address=(protocol=tcp)(host=fu-bar)(port=1521))
      (address=(protocol=ipc)(key=extproc))))
sid_list_listener=
  (sid_list=
    (sid_desc=
      (sid_name=plsextproc)
      (oracle_home=/oracle11g)
      (program=extproc)))

 

$ lsnrctl

LSNRCTL for Solaris: Version 9.2.0.1.0 - Production on 30-JAN-2003 11:54:13

(c) Copyright 1998 Oracle Corporation.  All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> help

The following lsnrctl operations are available

An asterisk (*) denotes a modifier or extended command:

start               stop                status

services            version             reload

save_config         trace               spawn

dbsnmp_start        dbsnmp_stop         dbsnmp_status

change_password     quit                exit

set*                show*

7)     How to find the running listener?

$ ps -ef | grep tns

8)     How to check the listener services?

services:   Displays each service available, along with the connection history.

$ lsnrctl

LSNRCTL for Solaris: Version 9.2.0.1.0 - Production on 30-JAN-2003 11:54:13

(c) Copyright 1998 Oracle Corporation.  All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> services

9)     How you will stop/start the listener services?

$lsnrctl start {listener_name}

$lsnrctl stop {listener_name}

10)     How to find how many databases are configured on your server?

$ more /etc/oratab → this file lists the databases installed on the server

11)     How to find how many instances are currently running on your server?

$ ps -ef | grep pmon (or smon)

 

12)     What are pfile & spfile? Where are they located and what do the files look like?

The Oracle spfile is a binary representation of the text-based init.ora file (pfile). By default, a new Oracle9i database starts from a pfile, so the spfile must be created from the pfile at the SQL prompt.  The spfile is created using the CREATE SPFILE statement; this requires connecting as SYSDBA.

SQL> connect system/manager as sysdba

SQL> CREATE SPFILE FROM PFILE;

This command creates the spfile in the default location ($ORACLE_HOME/dbs on UNIX, $ORACLE_HOME/database on Windows). However, you can fully qualify the path names in the CREATE SPFILE statement:

CREATE SPFILE='/u01/admin/prod/pfile/file_mydb.ora'

FROM

PFILE='/u01/admin/prod/pfile/initprod.ora';

 

Warning - after an spfile is created, you may encounter an error when you bounce the database. To get around this, reconnect as SYSDBA and use the STARTUP command.

 

The addition of the spfile has changed the search path for the Oracle startup deck. Oracle9i now uses the following path: 

 

   1. Search for the spfile$ORACLE_SID.ora file in the default location;

   2. Look for spfile.ora; and

   3. Search for the pfile named init$ORACLE_SID.ora.

13) How to set the value of a static or dynamic initialization parameter? What are static and dynamic parameters?

ALTER SESSION SET parameter_name = value 

ALTER SYSTEM SET parameter_name = value [DEFERRED]

Use the SET clause of the ALTER SYSTEM statement to set or change initialization parameter values. A dynamic parameter can be changed while the instance is running; a static parameter can only be recorded in the spfile and takes effect at the next restart. Additionally, the SCOPE clause specifies the scope of a change as described below:

SCOPE = SPFILE

(For both static and dynamic parameters, changes are recorded in the spfile, to be given effect in the next restart.)

SCOPE = MEMORY

(For dynamic parameters, changes are applied in memory only. No static parameter change is allowed.)

SCOPE = BOTH

(For dynamic parameters, the change is applied in both the server parameter file and memory. No static parameter change is allowed.)
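
For example (db_cache_size is a dynamic parameter, processes is a static one):

SQL>alter system set db_cache_size=512M scope=both;

SQL>alter system set processes=500 scope=spfile;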

14)     What is control file?

The control file of the database is a binary file that contains a great deal of database information: the database name, the names and locations of the datafiles and redo log files, the current log sequence number, and checkpoint information. Oracle cannot function without valid control files.

Because the control file is so important, Oracle allows you to maintain duplicate copies of the control file. When you have more than one control file, then you are said to be multiplexing your control files. It is a good practice to put these multiple copies on different disks to protect the control file. Later in this document, we will demonstrate how to do this.

15)     How you will multiplex the control file?

SQL>alter system set control_files='location1','location2' scope=spfile;

SQL>shutdown immediate;

$ cp 'old_location' 'new_location'

SQL>startup

SQL>create pfile from spfile;

16)     How will you enable controlfile autobackup?
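
A minimal sketch using RMAN (the same command appears again under question 41):

RMAN>configure controlfile autobackup on;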

17)     If the control file deleted how you will recover? With multiplexing & without multiplexing?
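
In outline: with multiplexing, shut the instance down, copy a surviving control file copy over the lost one at the O/S level, and start up. Without multiplexing, restore it from a backup; a sketch, assuming controlfile autobackup is configured:

RMAN>startup nomount;

RMAN>restore controlfile from autobackup;

RMAN>alter database mount;

RMAN>recover database;

RMAN>alter database open resetlogs;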

18)     How you will take the online control file backup?

RMAN> Backup current controlfile;

19)     Basic unix commands?

20)     Syntax for moving the file from server to server?
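
For example, with scp (host and path names are illustrative):

$ scp /u01/backup/exp_full.dmp oracle@remote_host:/u01/backup/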

21)     How to find the background processes running on the server?
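
On UNIX, the instance background processes all carry an ora_ prefix:

$ ps -ef | grep ora_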

22)     How to find the CPU utilization?

We can use the following commands to find the CPU utilization:

·     top

·     prstat

24)     What is redolog?

Every Oracle database must have at least 2 redo logfile groups. Oracle writes all change-making statements (everything except SELECT) to the logfiles. This is done because Oracle performs deferred batch writes: it does not write changes to disk per statement; instead it performs writes in batches. So when a user updates a row, Oracle changes the row in the db_buffer_cache, records the statement in the redo log buffer, and tells the user that the row is updated. The row has not actually been written back to the datafile yet: LGWR flushes the redo to the online logfiles (at commit, or at the latest every 3 seconds), and DBWn writes the changed block to the datafile later. This is known as deferred batch writes.

Since Oracle defers writing to the datafiles, there is a chance of power failure or system crash before the changed rows are written to disk. That is why Oracle records the changes in the redo logfiles: in case of power failure or system crash, Oracle can reapply the statements the next time you open the database.

25)     How to add/delete the new logfile group?

To add a new Redo Logfile group to the database give the following command

SQL>alter database add logfile group 3

'/u01/oracle/ica/log3.ora' size 10M;

To add new member to an existing group give the following command

SQL>alter database add logfile member

'/u01/oracle/ica/log11.ora' to group 1;

You can drop a member from a log group only if the group has more than one member and it is not the current group. If you want to drop members from the current group, force a log switch, or wait for a log switch to occur so another group becomes current. To force a log switch, give the following command:

SQL>alter system switch logfile;

The following command can be used to drop a logfile member

SQL>alter database drop logfile member '/u01/oracle/ica/log11.ora';

Similarly, you can drop a logfile group only if the database has more than two groups and it is not the current group.

SQL>alter database drop logfile group 3;

Note: When you drop logfiles, the files are not deleted from disk. You have to use an O/S command to delete the files from disk.

You cannot resize logfiles. If you want to resize a logfile, create a new logfile group with the new size and subsequently drop the old logfile group.

To Rename or Relocate Logfiles perform the following steps

For example, suppose you want to move a logfile from '/u01/oracle/ica/log1.ora' to '/u02/oracle/ica/log1.ora'. Then do the following:

Steps

1.      Shutdown the database

SQL>shutdown immediate; 

2.      Move the logfile from Old location to new location using operating system command

$mv /u01/oracle/ica/log1.ora  /u02/oracle/ica/log1.ora 

3.      Start and mount the database

SQL>startup mount 

4.      Now give the following command to change the location in controlfile

SQL>alter database rename file '/u01/oracle/ica/log1.ora' to '/u02/oracle/ica/log1.ora';

5.      Open the database

SQL>alter database open;

 

26)     How to resize the redo log size?
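
Redo logfiles cannot be resized directly (see question 25); instead, create a new group with the desired size, switch out of the old group, and drop it. A sketch reusing the commands above (paths and group numbers are illustrative):

SQL>alter database add logfile group 4 '/u01/oracle/ica/log4.ora' size 50m;

SQL>alter system switch logfile;

SQL>alter database drop logfile group 3;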

27)     How to enable archive log?

SQL>shutdown immediate;

SQL>startup mount;

SQL>alter database archivelog;

SQL>alter database open;

SQL>archive log list; → verify the current log mode

(To disable archiving again, run "alter database noarchivelog" while the database is mounted.)

28)     If an online redo log is deleted, how will you recover? With a multiplexed member & without?
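
In outline: if one member of a multiplexed group is lost, the database stays up; drop the lost member and add a fresh one. If a whole inactive group is lost, it can be re-initialized with the CLEAR LOGFILE command. A sketch (paths and group numbers are illustrative):

SQL>alter database drop logfile member '/u01/oracle/ica/log11.ora';

SQL>alter database add logfile member '/u01/oracle/ica/log11.ora' to group 1;

SQL>alter database clear unarchived logfile group 3;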

29)     How to find the corrupted blocks?

Using the views:

V$DATABASE_BLOCK_CORRUPTION

V$DATAFILE_HEADER (where RECOVER = 'YES')

30)     How to resize the datafile?

SQL>alter database datafile '/location' resize 100m;

31)     How to add the new datafile?

SQL>alter tablespace tablespace_name add datafile '/location' size 100m;

32)     How to find the datafile location? (Table name/view)

SQL>select name from v$datafile;

SQL>select file_name from dba_data_files;

33)     How to backup the particular datafile?

Using cold or hot backup;

SQL>alter tablespace data begin backup;

$ cp '/location.dbf' '/backup_location'

SQL>alter tablespace data end backup;

Using rman

RMAN>backup datafile 4;

34)     How you move datafile one location to another location?

SQL>alter tablespace data offline;

$ cp 'old_location' 'new_location'

SQL>alter database rename file 'old_location' to 'new_location'; → changes the datafile location in the control file

SQL>alter tablespace data online;

(Within RMAN, SET NEWNAME FOR DATAFILE ... and SWITCH DATAFILE ALL achieve the same control-file update as part of a restore to a new location.)

35)     Different types of tablespaces?

·     System tablespaces

·     Sysaux tablespaces

·     Temporary tablespaces

·     Undo Tablespaces

·     Bigfile tablespaces

·     Smallfile tablespaces

36)     What is cronjob? How to find the cron entries? And how(Command) to edit the cron entries?

To schedule a job at O/S level we can schedule a job in crontab

To edit a crontab is crontab –e

To list crontabs available for a particular user is crontab –l
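
A sample crontab entry (fields: minute hour day month weekday command; the script path is illustrative):

30 2 * * * /home/oracle/scripts/daily_backup.sh > /tmp/daily_backup.log 2>&1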

37)     Types of segments?

Oracle has 11 types of segments and you can display them by querying the dba_segments view:

SQL> select distinct segment_type from dba_segments; 
 
SEGMENT_TYPE 
------------------ 
LOBINDEX 
INDEX PARTITION 
TABLE PARTITION 
NESTED TABLE 
ROLLBACK 
LOB PARTITION 
LOBSEGMENT 
INDEX 
TABLE 
CLUSTER 
TYPE2 UNDO

38)     Types of backup?

There are two types of backup

·     Physical backup (file-level copies of the datafiles, control files and redo logs, taken cold or hot, or via RMAN)

·     Logical backup (object-level exports taken with exp/expdp)

 

39)     What is an AWR report? Where is the script located?

An AWR (Automatic Workload Repository) report summarizes database performance statistics between two snapshots. The report script is located at $ORACLE_HOME/rdbms/admin/awrrpt.sql.
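
The report is generated by running the script from SQL*Plus and answering its prompts (report type and snapshot range):

SQL>@$ORACLE_HOME/rdbms/admin/awrrpt.sql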

40)     Difference between exp and expdp? Learn both commands and the syntax for exp & imp.

The differences between exp and expdp include the following:

exp uses the conventional path by default, while expdp uses the direct path by default.

An exp dump file can be written on any machine (including the client), while expdp dump files can only be written on the database server (through a directory object).

41)     What is RMAN?

RMAN is an Oracle built-in utility to back up the database. RMAN is also known as Recovery Manager.

RMAN comes with the Oracle RDBMS and it is free. This section covers:

* Backing up your database with RMAN

* Restoring your database with RMAN

db_recovery_file_dest - determines the location of the flash recovery area

db_recovery_file_dest_size - determines how much space can be used by Oracle in the flash recovery area

SQL>alter system set db_recovery_file_dest_size=2G;

SQL>alter system set db_recovery_file_dest='/u01/app/oracle/backup';

(The size parameter must be set before the destination.)

We will also want to set a couple of RMAN settings before we do our first backup. Start RMAN as detailed earlier in this section with:

$ rman target /

Now, we want to configure the following:

* Automatic backups of the control file and SPFILE.

* Retention policy to a redundancy of 2

Next, we can use the RMAN configure command to configure these settings as seen in this example:

-- Configure the retention policy to redundancy of 2.

-- This means RMAN will try to keep 2 copies of the database backups.

 

 

RMAN> configure retention policy to redundancy 2;

 

 

-- Configure automated backups of the control file and SPFILE.

 

 

RMAN>configure controlfile autobackup on;

42)     How to create user/schema? Grant for the user?

create user user_name identified by password;

grant create session to user_name; → without this privilege the user cannot connect to the database

grant select on owner.table_name to user_name; → gives the user select access on another owner's table

drop user user_name cascade; → drops the user along with that user's objects/tables

43)     What is alert log? Where its located?

SQL>show parameter background_dump_dest → the output shows the alert logfile location

The alert logfile captures all major changes made to the database, such as the following:

creating a tablespace, resizing a tablespace, switching logfiles. Important errors such as ORA-00600 and ORA-07445 are also captured in the alert log.

44)     For expdp how u will create the new directory? (Sql statement)

SQL>create directory directory_name as '/location';

SQL>grant read,write on directory directory_name to user_name;

SQL>select directory_name,directory_path from dba_directories;

45)     What are sga_target and sga_max_size?

sga_max_size sets the maximum size of the SGA and is fixed at instance startup; sga_target sets the actual (target) size of the SGA, enables Automatic Shared Memory Management, and can be resized online up to sga_max_size.

SQL>show parameter sga_max_size;

SQL>show parameter sga_target;

46)     What is parallelization?
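
Parallelization means having multiple worker processes operate on a single statement or job at once. For example, Data Pump can parallelize an export (see the PARALLEL keyword in the expdp help output later in this document); the %U wildcard gives each worker its own dump file:

$ expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott%U.dmp PARALLEL=4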

47)     How you will reset the password of user?

SQL>select username,account_status from dba_users where username='DINESH';

SQL>alter user dinesh identified by dinesh123 account unlock password expire;

48)     How you will grant access to user?

SQL>grant create session to dinesh; → granting a system privilege

SQL>grant select on emp to dinesh; → granting an object privilege

SQL>grant role_name to dinesh;

49)     What is role? How u will set and grant the role?

A role is a named combination of system and object privileges.

SQL>create role role_name;

SQL>grant create session, create table to role_name;

SQL>grant select on stanly.emp to role_name;

SQL>grant role_name to dinesh;

50)     What is checkpoint?

The checkpoint process is responsible for updating file headers in the database datafiles. A checkpoint occurs when Oracle moves new or updated blocks (called dirty blocks) from the RAM buffer cache to the database datafiles. A checkpoint keeps the database buffer cache and the database datafiles synchronized. This synchronization is part of the mechanism that Oracle uses to ensure that your database can always be recovered.
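
A checkpoint can also be forced manually:

SQL>alter system checkpoint;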

51)     How to find the db timezone?

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Wed Aug 17 10:42:16 2011

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select u.name || '.' || o.name || '.' || c.name "Col TSLTZ"
  2  from sys.obj$ o, sys.col$ c, sys.user$ u
  3  where c.type# = 231
  4  and o.obj# = c.obj#
  5  and u.user# = o.owner#;

no rows selected

SQL> select DBTIMEZONE from dual;

DBTIME
------
+00:00

SQL> alter database set time_zone='Asia/Karachi';

Database altered.

SQL> select DBTIMEZONE from dual;

DBTIME
------
+00:00

SQL> startup force;
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size                  2220200 bytes
Variable Size             725618520 bytes
Database Buffers          335544320 bytes
Redo Buffers                5554176 bytes
Database mounted.
Database opened.

SQL> select DBTIMEZONE from dual;

DBTIMEZONE
------------
Asia/Karachi

(The first query checks for TIMESTAMP WITH LOCAL TIME ZONE columns; the database time zone cannot be changed if any exist. Note that the new value only shows up after the instance is restarted.)

 

103)     What is ASM? ASM (Automatic Storage Management) is Oracle's integrated volume manager and file system for database files.

104)     What is a db link? And how to create one? (Covered in the Database Link section below.)

105)     How to create & drop schema?

SQL>create user dinesh identified by dinesh123 default tablespace dinesh temporary tablespace temp quota unlimited on dinesh;

SQL>drop user dinesh cascade;

106)     How to find the temp tablespace?
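
One way (the same database_properties query style appears in the temporary tablespace section near the end of this document):

SQL>select property_value from database_properties where property_name = 'DEFAULT_TEMP_TABLESPACE';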

UMASK :

  1. A umask of 022 allows only you to write data, but anyone can read data.

  2. A umask of 077 is good for a completely private system. No other user can read or write your data if umask is set to 077.

  3. A umask of 002 is good when you share data with other users in the same group. Members of your group can create and modify data files; those outside your group can read data file, but cannot modify it. Set your umask to 007 to completely exclude users who are not group members.

 

108) Export import Q/A.

1.  What is the Import/ Export Utility ?

 

Export (exp), Import (imp) are Oracle utilities which allow you to write data in an ORACLE-binary format from the database into operating system files and to read data back from those operating system files.

2.  Which are the Import/ Export modes ?

 

a) Full export/import

       The EXP_FULL_DATABASE and IMP_FULL_DATABASE roles, respectively, are needed to perform a full export/import. Use the FULL export parameter for a full export.

b) Tablespace
Use the TABLESPACES export parameter for a tablespace export.

 
c) User
This mode can be used to export and import all objects that belong to a user. Use the OWNER export parameter and the FROMUSER import parameter for a user (owner) export-import.


d) Table
Specific tables (or partitions) can be exported/imported with table export mode. Use the TABLES export parameter for a table export/import.

 

3. Is it possible to exp/ imp to multiple files?

 

Yes, it is possible. Here is an example:

exp SCOTT/TIGER FILE=C:\backup\File1.dmp,C:\backup\File2.dmp LOG=C:\backup\scott.log

 

 

4.  How we can use exp/ imp when we have 2 different Oracle database versions?

  • exp must be of the lower version

  • imp must match the target version

 5. What do I have to do before importing database objects?

 

Before importing database objects, we have to drop or truncate the objects, if not, the data will be added to the objects. If the sequences are not dropped, the sequences will generate inconsistent values.  If there are any constraints on the target table, the constraints should be disabled during the import and enabled after import.

 

6.  Is it possible to import a table into a different tablespace?

 By default, NO, because there is no tablespace parameter for the import operation.

However this could be done in the following manner:

  • (re)create the table in another tablespace (the table will be empty)

  • import the table using INDEXFILE parameter (the import is not done, but a file which contains the indexes creation is generated)

  • modify this script to create the indexes in the tablespace we want

  • import the table using IGNORE=y option (because the table exists)

  • recreate the indexes

    Here is an example of using INDEXFILE (a minimal sketch; file and table names are illustrative):
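
    $ imp scott/tiger FILE=emp.dmp TABLES=emp INDEXFILE=emp_indexes.sql

    (This writes the CREATE statements to emp_indexes.sql instead of performing the import.)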

     

    7.  In which cases imp/exp is used ?

  • Eliminate database fragmentation

  • Schema refresh (move the schema from one database to another)

  • Detect database corruption. Ensure that all the data can be read (if the data can be read that means there is no block corruption)

  • Transporting tablespaces between databases

  • Backup database objects

    8.  How can we improve the EXP/IMP performance?

  • Import the table using INDEXFILE parameter (the import is not done, but a file which contains the indexes creation is generated), import the data and recreate the indexes

  • Store the dump file to be imported on a separate physical disk from the oracle data files

  • If there are any constraints on the target table, the constraints should be disabled during the import and enabled after import

  • Set the BUFFER parameter to a high value (ex. BUFFER=30000000 (~30MB)  ) and COMMIT =y  or set COMMIT=n (is the default behavior: import commits after each table is loaded, however, this use a lot of the rollback segments or undo space for huge tables.)

  • use the direct path export (DIRECT=y)

  • (if possible) Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the init<SID>.ora file

  • (if possible) Set the LOG_BUFFER to a big value and restart oracle.

    10) Which are the common IMP/EXP problems?

  • ORA-00001: Unique constraint ... violated - Perhaps you are importing duplicate rows. Use IGNORE=N to skip tables that already exist (imp will give an error if the object is re-created), or the table could be dropped/truncated and re-imported if we need to do a table refresh.

  • IMP-00015: Statement failed ... object already exists... - Use the IGNORE=Y import parameter to ignore these errors, but be careful as you might end up with duplicate rows.

  • ORA-01555: Snapshot too old - Ask your users to STOP working while you are exporting or use parameter CONSISTENT=NO (However this option could create possible referential problems, because the tables are not exported from one snapshot in time).

  • ORA-01562: Failed to extend rollback segment - Create bigger rollback segments or set parameter COMMIT=Y (with an appropriate BUFFER parameter) while importing.

    11) What are the Differences between Data Pump impdp and import utility?  I know that import does the same things as impdp, but I'm not clear on the differences.

  • Data Pump does not use the BUFFERS parameter
  • Data Pump export represents the data in XML format
  • A Data Pump schema import will recreate the user and execute all of the associated security privileges (grants, user password history).
  • Data Pump's parallel processing feature is dynamic. You can connect to a Data Pump job that is currently running and dynamically alter the number of parallel processes.
  • Data Pump will recreate the user, whereas the old imp utility required the DBA to create the user ID before importing.

 

Original Import Parameter → Comparable Data Pump Import Parameter

BUFFER → a comparable parameter is not needed
CHARSET → a comparable parameter is not needed
COMMIT → a comparable parameter is not supported
COMPILE → a comparable parameter is not supported
CONSTRAINTS → EXCLUDE=CONSTRAINT
DATAFILES → TRANSPORT_DATAFILES
DESTROY → REUSE_DATAFILES
FEEDBACK → STATUS
FILE → DUMPFILE
FILESIZE → not necessary; it is included in the dump file set
FROMUSER → SCHEMAS
FULL → FULL
GRANTS → EXCLUDE=GRANT
HELP → HELP
IGNORE → TABLE_EXISTS_ACTION
INDEXES → EXCLUDE=INDEX
INDEXFILE → SQLFILE with INCLUDE INDEX
LOG → LOGFILE
PARFILE → PARFILE
RECORDLENGTH → a comparable parameter is not needed
RESUMABLE → a comparable parameter is not needed; it is automatically defaulted for privileged users
RESUMABLE_NAME → a comparable parameter is not needed; it is automatically defaulted for privileged users
RESUMABLE_TIMEOUT → a comparable parameter is not needed; it is automatically defaulted for privileged users
ROWS=N → CONTENT=METADATA_ONLY
ROWS=Y → CONTENT=ALL
SHOW → SQLFILE
SKIP_UNUSABLE_INDEXES → SKIP_UNUSABLE_INDEXES
STATISTICS → a comparable parameter is not needed; if the source table has statistics, they are imported
STREAMS_CONFIGURATION → STREAMS_CONFIGURATION
STREAMS_INSTANTIATION → a comparable parameter is not needed
TABLES → TABLES
TABLESPACES → this parameter still exists, but some of its functionality is now performed using the TRANSPORT_TABLESPACES parameter
TOID_NOVALIDATE → a comparable command is not needed; OIDs are no longer used for type validation
TOUSER → REMAP_SCHEMA
TRANSPORT_TABLESPACE → TRANSPORT_TABLESPACES (see command description)
TTS_OWNERS → a comparable parameter is not needed because the information is stored in the dump file set
USERID → a comparable parameter is not needed; this information is supplied as the username/password when you invoke Import
VOLSIZE → a comparable parameter is not needed because tapes are not supported

 

bash-3.2$ expdp help=y

 

Export: Release 11.2.0.4.0 - Production on Mon Feb 15 07:35:42 2016

 

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

 

The Data Pump export utility provides a mechanism for transferring data objects between Oracle databases. The utility is invoked with the following command

  Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Export runs by entering the 'expdp' command followed  by various parameters. To specify parameters, you use keywords:

   Format:  expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)

   Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott

   or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

----------------------------------------------------------------------------

The available keywords and their descriptions follow. Default values are listed  within square brackets.

ATTACH

Attach to an existing job. For example, ATTACH=job_name.

CLUSTER

Utilize cluster resources and distribute workers across the Oracle RAC. Valid keyword values are: [Y] and N.

COMPRESSION

Reduce the size of a dump file.

Valid keyword values are: ALL, DATA_ONLY, [METADATA_ONLY] and NONE.

CONTENT

Specifies data to unload.

Valid keyword values are: [ALL], DATA_ONLY and METADATA_ONLY.

DATA_OPTIONS

Data layer option flags.

Valid keyword values are: XML_CLOBS.

DIRECTORY

Directory object to be used for dump and log files.

DUMPFILE

Specify list of destination dump file names [expdat.dmp]. For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.

ENCRYPTION

Encrypt part or all of a dump file. Valid keyword values are: ALL, DATA_ONLY, ENCRYPTED_COLUMNS_ONLY, METADATA_ONLY and NONE.

ENCRYPTION_ALGORITHM

Specify how encryption should be done. Valid keyword values are: [AES128], AES192 and AES256.

ENCRYPTION_MODE

Method of generating encryption key. Valid keyword values are: DUAL, PASSWORD and [TRANSPARENT].

ENCRYPTION_PASSWORD

Password key for creating encrypted data within a dump file.

ESTIMATE

Calculate job estimates.

Valid keyword values are: [BLOCKS] and STATISTICS.

ESTIMATE_ONLY

Calculate job estimates without performing the export.

EXCLUDE

Exclude specific object types.

For example, EXCLUDE=SCHEMA:"='HR'".

FILESIZE

Specify the size of each dump file in units of bytes.

FLASHBACK_SCN

SCN used to reset session snapshot.

FLASHBACK_TIME

Time used to find the closest corresponding SCN value.

FULL

Export entire database [N].

HELP

Display Help messages [N].

INCLUDE

Include specific object types.

For example, INCLUDE=TABLE_DATA.

JOB_NAME

Name of export job to create.

LOGFILE

Specify log file name [export.log].

NETWORK_LINK

Name of remote database link to the source system.

NOLOGFILE

Do not write log file [N].

PARALLEL

Change the number of active workers for current job.

PARFILE

Specify parameter file name.

QUERY

Predicate clause used to export a subset of a table.

For example, QUERY=employees:"WHERE department_id > 10".

REMAP_DATA

Specify a data conversion function. For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.

REUSE_DUMPFILES

Overwrite destination dump file if it exists [N].

SAMPLE

Percentage of data to be exported.

SCHEMAS

List of schemas to export [login schema].

SERVICE_NAME

Name of an active Service and associated resource group to constrain Oracle RAC resources.

SOURCE_EDITION

Edition to be used for extracting metadata.

STATUS

Frequency (secs) job status is to be monitored where the default [0] will show new status when available.

TABLES

Identifies a list of tables to export.

For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_1995.

TABLESPACES

Identifies a list of tablespaces to export.

TRANSPORTABLE

Specify whether the transportable method can be used. Valid keyword values are: ALWAYS and [NEVER].

TRANSPORT_FULL_CHECK

Verify storage segments of all tables [N].

TRANSPORT_TABLESPACES

List of tablespaces from which metadata will be unloaded.

VERSION

Version of objects to export.

Valid keyword values are: [COMPATIBLE], LATEST or any valid database version.

-----------------------------------------------------------------------------

The following commands are valid while in interactive mode. Note: abbreviations are allowed.

ADD_FILE

Add dumpfile to dumpfile set.

CONTINUE_CLIENT

Return to logging mode. Job will be restarted if idle.

EXIT_CLIENT

Quit client session and leave job running.

FILESIZE

Default filesize (bytes) for subsequent ADD_FILE commands.

HELP

Summarize interactive commands.

KILL_JOB

Detach and delete job.

PARALLEL

Change the number of active workers for current job.

REUSE_DUMPFILES

Overwrite destination dump file if it exists [N].

START_JOB

Start or resume current job.

Valid keyword values are: SKIP_CURRENT.

STATUS

Frequency (secs) job status is to be monitored where

the default [0] will show new status when available.

STOP_JOB

Orderly shutdown of job execution and exits the client.

Valid keyword values are: IMMEDIATE.

 

http://www.datadisk.co.uk/html_docs/oracle/data_pump.htm (data pump)

Conventional path Export
Conventional path Export uses the SQL SELECT statement to extract data from tables. Data is read from disk into the buffer cache, and rows are transferred to the evaluating buffer. The data, after passing expression evaluation, is transferred to the Export client, which then writes the data into the export file. 
 Direct path Export
When using a direct path Export, the data is read from disk directly into the export session's program global area (PGA): the rows are transferred directly to the Export session's private buffer. This also means that the SQL command-processing layer (evaluation buffer) can be bypassed, because the data is already in the format that Export expects. As a result, unnecessary data conversion is avoided. The data is transferred to the Export client, which then writes the data into the export file. 

The parameter DIRECT specifies whether you use the direct path Export (DIRECT=Y) or the conventional path Export (DIRECT=N).

 

External Tables Path

Uses the database buffer cache; the export acts as a SELECT statement into a dump file, and during import the rows are reconstructed as INSERT statements, so the whole process is like a normal SELECT/INSERT job. Both undo and redo are generated, and a normal COMMIT is used, just as a DML statement would.

Useful Views

DBA_DATAPUMP_JOBS

summary information of all currently running data pump jobs

DBA_DATAPUMP_SESSIONS

displays the user currently running data pump jobs

V$SESSION_LONGOPS

display information like totalwork, sofar, units and opname

 

Database Link

A database link is a schema object in one database that enables you to access objects in another database. The other database need not be an Oracle Database system; however, to access non-Oracle systems you must use Oracle Heterogeneous Services.
 Three link types are supported.

  • Private database link - belongs to a specific schema of a database. Only the owner of a private database link can use it.

  • Public database link - all users in the database can use it.

  • Global database link - defined in an OID or Oracle Names Server. Anyone on the network can use it.

Create db_link   : CREATE DATABASE LINK remotedb CONNECT TO scott IDENTIFIED BY tiger USING 'tns_conn_str';

For Dropping: DROP DATABASE LINK remotedb;

Close DBlink: ALTER SESSION CLOSE DATABASE LINK <link_name>;
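
Once created, a link is used by suffixing object names with @link_name (the table name is illustrative):

SQL>select * from emp@remotedb;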

 

SYNTAX.

CREATE [SHARED][PUBLIC] DATABASE LINK link_name

     [CONNECT TO CURRENT_USER]

        [USING 'connect_string']

 

   CREATE [SHARED][PUBLIC] DATABASE LINK link_name

      [CONNECT TO user IDENTIFIED BY password]

      [AUTHENTICATED BY user IDENTIFIED BY password]

         [USING 'connect_string']

 

Drop a database link when you are not the owner

When you are owner of the link you can drop it without problem but if you are a DBA and want to purge database, let's say after restoring test database from live database, you can use the proxy user feature so you can connect to the database link owner without knowing or changing its password (below MICHEL is the DBA).

SQL> CONNECT test/test

Connected.

TEST> CREATE DATABASE LINK mika@loop CONNECT TO test IDENTIFIED BY test USING 'mika';

Database link created.

TEST> CONNECT michel/michel

Connected.

MICHEL> ALTER USER test GRANT CONNECT THROUGH michel ;

User altered.

MICHEL> -- Connect to TEST through MICHEL account and so with MICHEL's password

MICHEL> CONNECT michel[test]/michel

Connected.

TEST> SHOW USER

USER is "TEST"

TEST> DROP DATABASE LINK mika@loop;

Database link dropped.

TEST> CONNECT michel/michel

Connected.

MICHEL> ALTER USER test REVOKE CONNECT THROUGH michel ;

User altered.

user_db_links
all_db_links
dba_db_links

Display what database links are created/available

 

Undo Data

Undo data provides read consistency

 

UNDO_MANAGEMENT (default manual)

This is the only mandatory parameter and can be set to either auto or manual.

UNDO_TABLESPACE
(default undo tablespace)

This specifies the undo tablespace to be used; of course, the tablespace needs to be an undo tablespace. If you do not set this value, Oracle will automatically pick the one available. If no undo tablespace exists then Oracle will use the system tablespace, which is not a good idea (always create one).

UNDO_RETENTION
(seconds)
Once a transaction commits, the undo data for that transaction stays in the undo tablespace until space is required, in which case it will be overwritten.

Undo data can be in 3 states (state - what it is - when the undo data can be overwritten):

  • Uncommitted undo information - undo data that supports active transactions and is required in the event of rollback - never overwritten

  • Committed undo information (unexpired) - also known as unexpired undo, required to support the undo_retention interval - overwritten after the undo_retention period or under undo tablespace space pressure, unless the guarantee option is set (see below)

  • Expired undo information - undo information that is no longer needed - can always be overwritten

Undo Sizing

undo tablespace size = UR * UPS * DB_BLOCK_SIZE

UR = undo retention (system parameter undo_retention)
UPS = maximum undo blocks used per second (obtained from v$undostat)
DB_BLOCK_SIZE = the default block size (obtained from dba_tablespaces)
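
For example, with undo_retention = 900 seconds, a peak of 100 undo blocks per second from v$undostat, and an 8K block size, the required size is roughly 900 * 100 * 8192 bytes ≈ 700 MB.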

Undo Commands

Undo System Management

Management

alter system set undo_management=auto;

Setting

alter system set undo_tablespace = 'undotbs02';

Retention

alter system set undo_retention = 43200; (it's in seconds)

Undo Control

Creating

create undo tablespace undotbs2 datafile 'c:\oracle\undo02.dbf' size 2G;

Removing

drop undo tablespace undotbs02;

guarantee

alter tablespace undotbs02 retention guarantee;
alter tablespace undotbs02 retention noguarantee;

See current undo blocks

select begin_time, undotsn, undoblks, activeblks, unexpiredblks, expiredblks from v$undostat;

Contains snapshots of v$undostat (used to obtain the oldest undo available)

select begin_time, undotsn, undoblks, activeblks, unexpiredblks, expiredblks from dba_hist_undostat;

NOTE: If your current undo_retention period is 6 days, then the oldest undo data in dba_hist_undostat should be 6 days old.

Useful Views

DBA_ROLLBACK_SEGS

describes rollback segments

DBA_TABLESPACES

describes all tablespaces in the database

DBA_UNDO_EXTENTS

describes the extents comprising the segments in all undo tablespaces in the database

DBA_HIST_UNDOSTAT

displays the history of histograms of statistical data to show how well the system is working. The available statistics include undo space consumption, transaction concurrency, and length of queries executed in the instance. This view contains snapshots of V$UNDOSTAT.

V$UNDOSTAT

displays a histogram of statistical data to show how well the system is working. The available statistics include undo space consumption, transaction concurrency, and length of queries executed in the instance. You can use this view to estimate the amount of undo space required for the current workload. Oracle uses this view to tune undo usage in the system. The view returns null values if the system is in manual undo management mode.

V$ROLLNAME

lists the names of all online rollback segments. It can only be accessed when the database is open.

V$ROLLSTAT

contains rollback segment statistics

V$TRANSACTION

lists the active transactions in the system

Flashback Architecture

There are a number of flashback levels

row level

flashback query, flashback versions query, flashback transaction query

table level

flashback table, flashback drop

database level

flashback database

  • Flashback query - retrieves data from a past point in time

  • Flashback versions query - shows you different versions of data rows, plus start and end times of a particular transaction that created that row

  • Flashback transaction query - lets you retrieve historical data for a given transaction and the SQL code to undo the transaction.

  • Flashback table - recovers a table to its state at a past point in time, without having to perform a point in time recovery

    There are two other flashback technologies that do not use the undo data; they use flashback logs and the recycle bin instead.

  • flashback database - restore the whole database back to a point in time.
  • flashback drop - allows you to reverse the effects of a drop table statement, without resorting to a point-in-time recovery

DBMS_FLASHBACK, flashback query, flashback table, flashback versions query, flashback transaction query and SELECT ... AS OF ... statements all use the undo segments. Flashback database uses the flashback logs, and flashback drop uses the recycle bin.

User Management

There are 4 main accounts created during install: sys, system, sysman and dbsnmp.

SYS: Owns all internal tables

SYSTEM: Has additional tables and views

SYSMAN: Used by OEM to monitor and gather performance stats, which are stored in the sysaux tablespace

DBSNMP: Same as sys but for the OEM, owns all internal tables in the sysaux tablespace.

Creating

create user vallep identified by password;
create user vallep identified by password default tablespace users temporary tablespace temp quota 100m on users;

Remove

drop user vallep;
drop user vallep cascade;

Note: the cascade option will remove all the users objects as well.

Alter

alter user vallep identified by newpassword;
alter user vallep quota 200m on users;

Password options

alter user vallep identified by password;
alter user vallep identified externally;
alter user vallep identified globally as extname;

Note:
identified by - the password will be kept in the data dictionary
identified externally - authentication will be performed by the O/S
identified globally as extname - authentication will be performed by an external app, i.e. RADIUS

A user is only allowed to change his/her own password.

Expire password

alter user vallep password expire;

Lock/unlock

alter user vallep account lock;
alter user vallep account unlock;

Connecting

grant create session to vallep;

Note: this allows the user to connect to the database

Revoke access

revoke create session from vallep;

Quota

alter user vallep quota 100m on users;
alter user vallep quota unlimited on users; 
grant unlimited tablespace to vallep;

Kill a users session

select username, sid, serial# from v$session; 
alter system kill session '<session_id>,<session-serial>';

User connection type
(dedicated/shared)

select username, program, server from v$session;

Useful Views

DBA_USERS

describes all users of the database

DBA_TS_QUOTAS

describes tablespace quotas for all users

V$SESSION

lists session information for each current session

By default, Oracle passwords are sent in clear text across the network; set the following parameters to encrypt the password between the client and server.

Server

dblink_encrypt_login = true

Client

ora_encrypt_login = true

Profiles

Profiles are used to limit a user's resources; they can also enforce password management rules. Only the DBA can change profiles.

  • connect_time - limits session to number of minutes
  • cpu_per_call - limits cpu time by any single database call
  • cpu_per_session - limit cpu by session
  • idle_time - limit session to idle time, allows user to rollback or commit before logging off
  • logical_reads_per_call - caps the amount of work by any single database call
  • logical_reads_per_session - caps the amount of work by any session
  • private_sga - limits memory when using shared servers
  • sessions_per_user - limits the number of sessions a user can have
  • composite_limit - calculated by cpu_per_session, logical_reads_per_session, connection_time and private_sga

The security features that the profile can also manage are

  • failed_login_attempts - number of times a user can enter the wrong password before the account is locked
  • password_lock_time - if the above is breached, the account is locked for this number of days
  • password_life_time - number of days a password can remain in force
  • password_grace_time - number of days after password_life_time during which the user is warned but can still log in before being forced to change the password
  • password_reuse_time - number of days before a password can be reused
  • password_reuse_max - number of password changes required before a password can be reused
  • password_verify_function - allows a function to be used to verify a password

Creating

create profile user_profile limit sessions_per_user 5;

Remove

drop profile user_profile cascade;

Note: any users using the dropped profile will be automatically assigned the default profile

Setting a limit

alter profile user_profile limit idle_time 30;

Displaying current resource limits

select * from user_resource_limits;

Displaying current password limits

select * from user_password_limits;

Displaying profile

select * from dba_profiles where profile = 'USER_PROFILE';

Assign a profile

alter user vallep profile user_profile;

Useful Views

USER_RESOURCE_LIMITS

displays the resource limits for the current user.

USER_PASSWORD_LIMITS

describes the password profile parameters that are assigned to the user.

DBA_PROFILES

displays all profiles and their limits

Before resource limits in profiles are enforced, you must set the following system parameter (password limits are always enforced):

Enable resource limits

alter system set resource_limit = true scope = both;

Disable resource limits

alter system set resource_limit = false scope = both;

Data Access (Roles and Privileges)

There are two basic kinds of privileges, system and object; using the GRANT and REVOKE commands, privileges can be given to and taken away from a user.

System Privileges

  • advisor
  • alter database
  • alter system
  • audit system
  • create database link
  • create table
  • create any index
  • create session
  • create tablespace
  • create, alter and drop user
  • insert any table
     

Granting

grant create session to vallep;
grant create tablespace to vallep;
grant create user, alter user, drop user to vallep;

Revoking

revoke create session from vallep;
revoke create tablespace from vallep;

Allow user to also grant this privilege

grant create session to vallep with admin option;

Note: now vallep can also grant this privilege

Useful Views

SYSTEM_PRIVILEGE_MAP

table to list all system privileges

DBA_USERS

provides information about users

DBA_SYS_PRIVS

see who has system privileges

There are two very powerful system privileges, SYSDBA and SYSOPER; you cannot grant these privileges to a role, and you cannot use WITH ADMIN OPTION with them.

SYSOPER

perform startup and shutdown operations
mount/dismount and open/close the database 
use alter database commands (BACKUP, ARCHIVE, LOG AND RECOVER) 
perform archiving and recovery operations
create a spfile

SYSDBA

All the SYSOPER privileges

use the create database command
all system privileges with admin option

Object Privileges

Object privileges are privileges on database objects which allows a user to perform some action on a specific table, view, sequence, etc. You can use the following SQL statements when you grant object privileges

  • alter
  • select
  • delete
  • execute
  • insert
  • references
  • index
  • Object privileges are possible on the object types below; column-only privileges are also possible

table

select, insert, update, delete, alter, debug, index and references

views

select, insert, update, delete, debug and references

sequences

select and alter

Functions, procedures, packages

debug and execute

  • As with system privileges, the grantee can be allowed to pass the privilege on, using WITH GRANT OPTION.

Grant

grant select, insert, delete, update on employees to vallep;
grant select on employees to public;

grant update (product_id) on products to vallep;

Revoke

revoke select, insert, delete, update on employees from vallep;
revoke select on employees from public;

revoke update (product_id) on products from vallep;

Allow user to also grant this privilege

grant select on employees to vallep with grant option;

Useful Views

DBA_TAB_PRIVS

show users table privileges

DBA_COL_PRIVS

show users columns privileges

Roles

It can be very difficult to keep track of each user's privileges. Oracle addresses this problem with roles, which are named sets of privileges that can be assigned to users. Roles are sets of privileges that can be granted or taken away in one go, using GRANT or REVOKE. A user will, by default, use the default role unless assigned another role; you can assign more than one role to a user, and he/she can switch roles during a session.

  • dba - all system privileges with admin option
  • connect role - this now only has the create session privilege
  • resource role - create cluster, index type, operator, procedure, sequence, table, trigger and type
  • exp_full_database - used for data pump export
  • imp_full_database - used for data pump import

creating

create role test_role identified by <password>;

Note: the password is optional, you can also use externally or globally authentication

removing

drop role test_role;

adding privileges to role

grant select on HR.employees to test_role;
grant exp_full_database to test_role;

removing privileges from role

revoke select on HR.employees from test_role;
revoke exp_full_database from test_role;

adding a role to a role

grant dba to test_role;

Note: the DBA role is very powerful; be careful giving it out to anyone

granting a role to a user

grant test_role to vallep;
grant test_role to vallep with admin option;

revoking a role from a user

revoke test_role from vallep;

list roles/privileges

select * from session_roles;
select * from session_privs;

setting session role

set role test_role identified by <password>;

set default

alter user vallep default role test_role

Useful Views

DBA_ROLES

list all the roles

DBA_ROLE_PRIVS

lists the users granted roles

Note: useful columns are with admin option, default role

ROLE_SYS_PRIVS

lists the roles system privileges and what roles have other roles within them

ROLE_TAB_PRIVS

lists the roles table privileges

ROLE_ROLE_PRIVS

lists what other roles the role has (roles within roles)

SESSION_ROLES

lists current role in use.

SESSION_PRIVS

show privileges currently enabled for the user

You can disable a user's role by inserting a row into the table product_user_profile in the system schema.

disable specific role for user

insert into product_user_profile (
  product, userid, attribute, char_value)
  values ('SQL*Plus', 'VALLEP', 'ROLES', 'TEST_ROLE')
;

enable specific role for user

delete from product_user_profile
  where userid = 'VALLEP'
  and char_value = 'TEST_ROLE'
;

Tablespaces

Oracle has 4 different types of tablespaces

  • Permanent - uses data files and normally contains the system (data dictionary) and users data
  • Temporary - is used to store objects for the duration of a user’s session, temp files are used to create temporary tablespaces
  • Undo - is a permanent type of tablespace that are used to store undo data which if required would undo changes of data by users
  • Read only - is a permanent tablespace that can only be read, no writes can take place, but the tablespace can be made read/write.

Every oracle database has at least two tablespaces

  • System - is a permanent tablespace and contains the vital data dictionary (metadata about the database)
  • Sysaux - is an auxiliary tablespaces and contains performance statistics collected by the database.

Tablespace Management

 

Locally (default)

Extents are the basic unit of a tablespace and are managed in bitmaps that are kept within the data file header for all the blocks within that data file. For example, if a tablespace is made up of 128KB extents, each 128KB extent is represented by a bit in the extent bitmap for this file; the bitmap values indicate whether the extent is used or free. The bitmap is updated when the extent changes; no data dictionary tables are updated, which increases performance.

Extents are tracked via bitmaps not using recursive SQL which means a performance improvement.

Locally managed tablespaces cannot be converted into dictionary managed ones. The benefits of using a locally managed tablespace:

  • relieves contention on the system tablespace
  • free extents are not managed by the data dictionary
  • no need to specify storage parameters

Dictionary Managed

The extent allocation is managed by the data dictionary, and thus updating the extent information requires access to the data dictionary; on heavily used systems this can cause a performance drop.

extents are tracked via FET$ and UET$ using recursive SQL.

Dictionary managed tablespaces can be converted to a locally managed one.

Extent Management

Anytime an object needs to grow in size, space is added to that object in extents. When you are using locally managed tablespaces, there are two options for how the extent size is managed

Autoallocate (default)

This means the extents will vary in size: the first extents start at 64KB and are progressively increased up to 64MB by the database. The database automatically decides what size each new extent will be based on segment growth patterns.

Autoallocate is useful if you aren't sure about growth rate of an object and you let oracle decide.

Uniform

Create the extents the same size by specifying the size when creating the tablespace.

This is default for temporary tablespace but not available for undo tablespaces.

Be careful with uniform as it can waste space; use this option when you know what the growth rate of the objects is going to be.

Segment Space Management

Segment space management is how Oracle deals with free space within an Oracle data block. The segment space management you specify at tablespace creation time applies to all segments you later create in the tablespace.

Oracle uses two methods to deal with free space

Manual

Oracle manages the free space in the data blocks by using free lists and a pair of storage parameters, PCTFREE and PCTUSED. When the free space in a block drops to the PCTFREE threshold, the block is removed from the freelist; when the used space falls below the PCTUSED threshold, the block is placed back on the freelist. Oracle has to perform a lot of work maintaining these lists, and a slowdown in performance can occur when you are making lots of changes to blocks, as Oracle needs to keep checking the block thresholds.

Automatic (default)

Oracle does not use freelist when using automatic mode, Instead oracle uses bitmaps. A bitmap which is contained in a bitmap block, indicates whether free space in a data block is below 25%, between 25%-50%, between 50%-75% or above 75%. For an index block the bitmaps can tell you whether the blocks are empty or formatted. Bitmaps do use additional space but this is less than 1% for most large objects.

The performance gain from using automatic segment management can be quite striking.

Permanent Tablespaces

Tablespaces can be either small tablespaces or big tablespaces

  • Small tablespace - The tablespace can be made up of a number of data files each of which can be quite large in size
  • Big tablespace - The tablespace will only be made up of one data file and this can get extremely large.

Tablespace commands

Creating

create tablespace test datafile 'c:\oracle\test.dbf' size 2G;
create tablespace test datafile 'c:\oracle\test.dbf' size 2G autoextend on maxsize unlimited extent management local uniform size 1M;
create bigfile tablespace test datafile 'c:\oracle\bigfile.dbf' size 2G;

Creating non-standard block size

## Note: if the block size is different from db_block_size then make sure to set a db_nk_cache_size
alter system set db_16k_cache_size = 5M;
create tablespace test datafile 'c:\oracle\test.dbf' size 2G blocksize 16K;

Removing

drop tablespace test;
drop tablespace test including contents and datafiles; (removes the contents and the physical data files)

Modifying

alter tablespace test rename to test99;
alter tablespace test [offline|online];
alter tablespace test [read only|read write];
alter tablespace test [begin backup | end backup];

Note: use v$backup to see tablespace is in backup mode (see below)

Adding data files

alter tablespace test add datafile 'c:\oracle\test02.dbf' size 2G;

Dropping data files

alter tablespace test drop datafile 'c:\oracle\test02.dbf';

Autoextending

See Datafile commands below

Rename a data file

## offline the tablespace, then rename at the O/S level, then perform the command below
alter tablespace test rename datafile 'c:\oracle\test.dbf' to 'c:\oracle\test99.dbf';

Tablespace management

create tablespace test datafile 'c:\oracle\test.dbf' size 2G extent management dictionary;

Extent management

create tablespace test datafile 'c:\oracle\test.dbf' size 2G extent management local uniform size 1M;

Segment Space management

create tablespace test datafile 'c:\oracle\test.dbf' size 2G segment space management manual;

Display default tablespace

select property_value from database_properties where property_name = 'DEFAULT_PERMANENT_TABLESPACE';

Set default tablespace

alter database default tablespace users;

Display default tablespace type

select property_value from database_properties where property_name = 'DEFAULT_TBS_TYPE';

Set default tablespace type

alter database set default bigfile tablespace;
alter database set default smallfile tablespace;

Get properties of an existing tablespace

set long 1000000 
select DBMS_METADATA.GET_DDL('TABLESPACE','USERS') from dual;

Free Space

select tablespace_name, round(sum(bytes/1024/1024),1) "FREE MB" from dba_free_space group by tablespace_name;

Display backup mode

select tablespace_name, b.status from dba_data_files a, v$backup b where a.file_id = b.file#;

Useful Views

DBA_TABLESPACES

describes all tablespaces in the database

DBA_DATA_FILES

describes database files

DBA_TABLESPACE_GROUPS

describes all tablespace groups in the database

DBA_SEGMENTS

describes the storage allocated for all segments in the database

DBA_FREE_SPACE

describes the free extents in all tablespaces in the database

V$TABLESPACE

displays tablespace information from the control file

V$BACKUP

displays the backup status of all online datafiles

DATABASE_PROPERTIES

lists Permanent database properties

Datafile Commands

Resizing

alter database datafile 'c:\oracle\test.dbf' resize 3G;

Offlining

alter database datafile 'c:\oracle\test.dbf' offline;

Note: the database must be in ARCHIVELOG mode; the datafile will need media recovery before it can be brought back online

Onlining

alter database datafile 'c:\oracle\test.dbf' online;

Renaming

alter database rename file 'c:\oracle\test.dbf' to 'c:\oracle\test99.dbf';

Autoextend

alter database datafile 'c:\oracle\test.dbf' autoextend on;
alter database datafile 'c:\oracle\test.dbf' autoextend off;

select file_name, autoextensible from dba_data_files;

Temporary tablespaces

Temporary tablespaces are used for sort operations such as order by, group by and create index. A default temporary tablespace is required when the SYSTEM tablespace is locally managed. From Oracle 10g you can create temporary tablespace groups, which means you can use multiple temporary tablespaces simultaneously.

The benefits of using a temporary tablespace group are

  • SQL queries are less likely to run out of space
  • You can specify multiple default temporary tablespaces at the db level
  • Parallel execution can utilize multiple temporary tablespaces
  • A single user can simultaneously use multiple temp tablespaces in different sessions.

Temporary tablespace commands

Creating (without a group)

create temporary tablespace temp tempfile 'c:\oracle\temp.dbf' size 2G autoextend on;

Creating (with a group)

create temporary tablespace temp tempfile 'c:\oracle\temp.dbf' size 2G tablespace group tempgrp;

Adding to temp group

alter tablespace temp02 tablespace group tempgrp;

Note: if no group exists oracle will create it

Removing from temp group

alter tablespace temp02 tablespace group '';

Displaying temp groups

select group_name, tablespace_name from dba_tablespace_groups;

Make user use temp group

alter user vallep temporary tablespace tempgrp;

Display default temp tbs

select property_value from database_properties where property_name = 'DEFAULT_TEMPORARY_TABLESPACE';

set default temp tbs

alter database default temporary tablespace temp02;

Display free temp space

select tablespace_name, sum(bytes_used), sum(bytes_free) from v$temp_space_header group by tablespace_name;

Who is using temp segments

SELECT b.tablespace,
  ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
  a.sid||','||a.serial# SID_SERIAL,
  a.username,
  a.program
FROM sys.v_$session a,
  sys.v_$sort_usage b,
  sys.v_$parameter p
WHERE p.name = 'db_block_size'
  AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;

Useful Views

DBA_TEMP_FILES

describes database temporary files

DBA_TABLESPACE_GROUPS

describes all tablespace groups in the database

V$SORT_SEGMENT

Contains information about every sort segment in a given instance. The view is only updated when the tablespace is of the temporary type.

V$TEMPSEG_USAGE

describes temporary segment usage

Oracle Tables

Dual Table: The dual table belongs to the sys schema and is created automatically when the data dictionary is created.

Heaped Organised Tables

This is the normal, standard table. Data is managed in a heap-like fashion: a heap is a bunch of space that is used in a somewhat random fashion.

Index Organised Tables

Here, a table is stored in an index structure (B-Tree). The rows themselves are stored in order, unlike a heaped table; the order is determined by the primary key.

Clustered Tables

Many tables may be stored and joined together, with many tables sharing the same data block; data that contains the same cluster key value is physically stored together. The data is clustered around the cluster key value, and the cluster key is built using a B-Tree index.

Partitioned tables

You can divide a large amount of data into subtables called partitions, according to various criteria. Partitioning is especially useful in data warehousing.

These are the four main table types in Oracle.

Table names can be 1-30 characters in length and must start with a letter; you can use _ (underscore), # (hash) or $ (dollar) in the name.

Oracle uses namespaces, which means the same name cannot be used for two objects within the same namespace. Below is a list of the namespaces within which object names must be unique.

namespace1

tables, views, sequences, private synonyms, procedures, functions, packages, materialized views, user-defined types

namespace2

tablespaces, indexes, constraints, clusters, triggers, database links, dimensions, roles, public synonyms, profiles and pfiles.

 

Table Sizing:  OEM or you can use the DBMS_SPACE package to check the space allocation of your table.

High Water Mark: If you view a table as a flat structure, a series of blocks laid one after the other in a line from left to right, the high water mark would be the right-most block that ever contained data. Over a period of time the high water mark rises with the amount of data added. However, if data is deleted there may be empty blocks under the high water mark until the object is rebuilt or truncated (the TRUNCATE command resets the high water mark when deleting rows).

Freelist: The freelist is where Oracle keeps track of blocks under the high water mark for objects that have free space on them. Each object will have at least one freelist associated with it. You can increase Oracle performance by creating additional freelists on objects to reduce contention. There are four parameters that can affect performance (see the sketch after these definitions):

FREELIST

Every table manages the blocks it has allocated in the heap on a freelist. A table may have more than one freelist. Increase the number of freelists if a table is heavily used.

PCTFREE

A measure of how full a block can be made during an insert. Once the block has less free space than PCTFREE, it will no longer be used for inserts.

PCTUSED

A measure of how empty a block must become before it will be used for inserts again. Once the block is less full than PCTUSED, it will be available for inserts again.

INITRANS

The number of transaction slots initially allocated to a block. If set too low this can cause concurrency issues in a block that is used by many users. See locking for more information.
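As a minimal sketch (the table name is made up; note that PCTUSED and FREELISTS apply only to manually managed segments and are ignored under ASSM):

create table emp_heap (empno number, ename varchar2(15))
pctfree 10              -- keep 10% of each block free for updates
pctused 40              -- block rejoins the freelist when under 40% used
initrans 2              -- two transaction slots pre-allocated per block
storage (freelists 2)   -- two freelists to reduce insert contention
tablespace users;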

Heap Organized Tables: This is the standard Oracle table; data is managed in a heaped fashion (no order).

Table

Creating

create table emp (
  empno number(5) primary key,
  ename varchar2(15) not null,
  national_insurance varchar2(9) not null,
  job varchar2(25),
  manager number(5),
  deptno number(3) not null constraint dept_fkey references hr.dept(dept_id))
tablespace users;

Creating with CTAS

create table emp_new parallel (degree 4) nologging as select * from emp;

Note: CTAS = create table as select; load the data using multiple processes and do not log the changes to the redo logs (minimal information will always be logged).

Removing (restorable)

drop table emp cascade constraints;

Note: The table can be retrieved by the 'flashback table' command

Removing (permanently)

drop table emp purge;

Renaming a table

alter table emp rename to employees;

Moving a table

alter table emp move new_tablespace;

Note: do this when you want to change any storage parameters; also remember that the row ids will change, so indexes will have to be re-created or rebuilt.

Emptying a table

truncate table emp;

Note: there is no rollback from this command; it is a DDL command.

Restoring a table after a drop

flashback table emp to before drop;

Note: see flashback table, this will not bring back a truncated table.

Columns

adding columns

alter table emp add(retired char(1));

removing columns

alter table emp drop(retired);

Marking column/s as unused

alter table emp set unused (retired); 
select * from user_unused_col_tabs;

Note: once a column is marked as unused there is no way to get it back other than a restore.

removing unused columns

alter table emp drop unused columns;

Removing unused columns and checkpointing

alter table emp drop unused columns checkpoint 10000;

Note: this stops the undo tablespace from filling up by checkpointing after every 10,000 rows.

Renaming a column

alter table emp rename column retired to dead;

Rows

Count number of rows

select count(*) from emp;

Delete duplicate rows

delete from t
 where rowid in (select rid
                   from (select rowid rid,
                                row_number() over
                                  (partition by cust_seg_nbr order by rowid) rn
                           from t)
                  where rn <> 1)
/

Temporary Tables: Temporary tables are used to hold result sets, either for the duration of a transaction or a session. The data held in a temporary table is only ever visible to the current session - no other session will ever see another session's data. A session will never block another session by using its temporary table.

Benefits of temp tables

  • Temporary tables reduce the amount of redo activity generated by transactions
  • Temporary tables can be indexed to improve performance
  • Sessions can update, insert and delete in temporary tables just like normal tables
  • Data is automatically removed at the end of the session or transaction
  • Table constraints can be used on temporary tables
  • Temporary tables can be used by a number of users, all seeing their data only.
  • Minimal amount of locking of temporary tables means more efficient query processing
  • The table structure persists after the data is removed, so the table can be used again (see the sketch below).
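A minimal sketch of both flavours (the table and column names are made up):

-- transaction-scoped: rows disappear at commit
create global temporary table gtt_txn (id number, payload varchar2(100))
on commit delete rows;

-- session-scoped: rows persist until the session ends
create global temporary table gtt_sess (id number, payload varchar2(100))
on commit preserve rows;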

Index Organized Tables: An IOT is a table stored in an index structure (B-Tree); the rows are stored in an ordered fashion, sorted by the primary key. When using overflow for an IOT, additional data that cannot fit into the index block is stored in an overflow segment, with a pointer from the index entry to the additional data in the row. Getting the right mix of data on the index block versus data in the overflow segment is the most critical part of an IOT setup. Consider the freelists as these can affect the table; pctfree is not that important and pctused doesn't normally come into play. When using an overflow segment, pctfree and pctused have the same meaning as they do for a heaped table.

The differences between a regular Oracle table and an IOT:

Regular Oracle table                 IOT
Physical ROWIDs                      Logical ROWIDs
Uniquely identified by ROWID         Uniquely identified by primary key
Unique constraints allowed           Unique constraints not allowed
Can contain LONG and LOB data        Can't contain LONG data
Allowed in table clusters            Not allowed in table clusters
Larger space requirements            Smaller space requirements
Slower data access                   Faster data access
Do not order data                    Order data

To use secondary bitmap indexes on an IOT you need to create another segment to map the table. The mapping table is a heap-organized table that stores the logical rowids of the index-organized table; each mapping table stores one logical rowid for the corresponding index-organized table row. The bitmap indexes are in fact built on this mapping table, not the underlying IOT.

IOT Rules:

  • An IOT can have additional indexes defined on other columns in the table
  • An IOT without secondary indexes is stored as a single segment
  • The space requirement for an IOT is reduced because the data is stored in the same segment as the index and therefore no physical ROWID is required.
  • An IOT can be rebuilt without locking the table
  • An IOT must be created with a non-deferrable primary key constraint; this is used to create the index that holds the table
  • An IOT cannot be a clustered table
  • Composite partitioning cannot be applied to an IOT
  • Columns of type long cannot be defined in an IOT
  • Access via a primary key is fast but secondary keys can be slow.
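A minimal sketch of an IOT with an overflow segment (names and thresholds are illustrative):

create table emp_iot (
  empno number primary key,    -- the mandatory primary key builds the B-Tree
  ename varchar2(15),
  notes varchar2(500))
organization index
pctthreshold 20                -- rows using more than 20% of a block overflow
including ename                -- columns after ename go to the overflow segment
overflow tablespace users;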

External Tables: Oracle allows the use of external tables, that is, tables whose data resides in external operating system files. The definition of an external table is created in the data dictionary, which allows you to load its data into other Oracle tables; no database space is actually used by the external table. If you drop an external table you are only removing the definition from the dictionary; the data remains in the O/S file, yet you can query the table just like any other table. One small note: you cannot index an external table. They are heavily used in data warehousing environments.
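A minimal sketch (the directory path, file name and columns are made up for illustration):

-- the directory object tells Oracle where the O/S file lives
create directory ext_dir as '/tmp';

-- only the definition is stored in the dictionary; the data stays in emp.csv
create table emp_ext (
  empno number,
  ename varchar2(15))
organization external (
  type oracle_loader
  default directory ext_dir
  access parameters (
    records delimited by newline
    fields terminated by ',')
  location ('emp.csv'));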

Nested Tables: A nested table is one of two collection types in Oracle; it is very similar to a child table in a traditional parent/child table pair. It gives the illusion that each row in the parent table has its own child table, i.e. if there are 100 rows in the parent table there are virtually 100 nested tables. There are two ways to use a nested table: in PL/SQL code as a way to extend the PL/SQL language, and as a physical storage mechanism for persistent storage of collections (see the sketch after this list). They are very rarely used as a storage mechanism, due to the following reasons:

  • The overhead of the RAW columns that are added. The parent table will have an extra 16 byte RAW for each nested table.
  • The overhead of the unique constraint on the parent table, when it already typically has a unique constraint.
  • The nested table is not easily used by itself without unsupported constructs. It can be un-nested for queries but not mass updates.
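A minimal sketch of nested table storage (the type, table and storage names are made up):

create type phone_t as table of varchar2(20);
/

-- each person row appears to own its own table of phone numbers;
-- the rows are physically stored in the phones_nt segment
create table person (
  name   varchar2(30),
  phones phone_t)
nested table phones store as phones_nt;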

range partitioning

used for data that can be separated into ranges based on some criterion i.e. date, part number, sequence.

hash partitioning

if data in a range partition is not evenly distributed (lots of data for one particular date) then performance decreases. Hash partitioning uses Oracle's hashing algorithms to assign a hash value to each row's partitioning key and place the row in the appropriate partition. Hopefully the data will then be evenly distributed across the partitions.

list partitioning

used for data that can be separated into lists based on some criterion, i.e. city, territory. Again, data can be unevenly distributed across the partitions.

composite range-hash partitioning

basically a combination of range and hash partitioning, using the hash step to make sure that the data is evenly distributed across the partitions

composite range-list partitioning

basically a combination of range and list. First partition on a range of values then break up the first set of partitions using a list of discrete values.

Partitioned Tables: Oracle tables can be quite large; partitioning basically means dividing the table up into smaller chunks. All the partitions share the same logical definition, column definitions and constraints. Performance is greatly improved as you only search the relevant partitions of the table during a query. Partitions can be kept on different disks to further increase performance. You can back up, index and load data into partitions independently of each other. There are five ways to partition data, as described above.

 

Examples

range partitioning

create table sales_data (
ticket_no number,
sale_year int not null,
sale_month int not null,
sale_day int not null)
partition by range (sale_year, sale_month, sale_day)
(partition sales_q1 values less than (2007, 04, 01) tablespace ts1,
partition sales_q2 values less than (2007, 07, 01) tablespace ts2,
partition sales_q3 values less than (2007, 10, 01) tablespace ts3,
partition sales_q4 values less than (2008, 01, 01) tablespace ts4);

Note: ideally each of the tablespaces should be on its own disk for increased performance

Hash partitioning

create table sales_data (
ticket_no number,
sale_year int not null,
sale_month int not null,
sale_day int not null)
partition by hash (ticket_no)
partitions 4
store in (ts1, ts2, ts3, ts4);

Note: you have no control on where the data is put

List Partitioning

create table sales_data (
ticket_no number,
sale_year int not null,
sale_month int not null,
sale_day int not null,
destination_city char(3),
start_city char(3))
partition by list (start_city)
(partition north_sales values ('newcastle', 'sunderland', 'leeds') tablespace ts1,
partition south_sales values ('brighton', 'bournemouth', 'cornwall') tablespace ts2,
partition east_sales values ('norwich', 'ipswitch', 'yarmouth') tablespace ts3,
partition west_sales values ('birmingham', 'cardiff', 'coventry') tablespace ts4);

composite range-hash partitioning

create table football_kit (equipno number, equipname varchar(32), price number)
partition by range (equipno) subpartition by hash(equipname)
subpartitions 8 store in (ts1, ts2, ts3, ts4)
(partition p1 values less than (1000),
partition p2 values less than (2000),
partition p3 values less than (3000),
partition p4 values less than (MAXVALUE));

composite range-list partitioning

create table quarterly_sales_data (
ticket_no number,
sale_year int not null,
sale_month int not null,
sale_day int not null,
destination_city char(3),
start_city char(3))
partition by range (sale_day)
subpartition by list (start_city)
(partition q1_2007 values less than (to_date('1-apr-2007','dd-mon-yyyy')) tablespace ts1
(subpartition q12007_north_sales values ('newcastle', 'sunderland', 'leeds'),
subpartition q12007_south_sales values ('brighton', 'bournemouth', 'cornwall'),
subpartition q12007_east_sales values ('norwich', 'ipswitch', 'yarmouth'),
subpartition q12007_west_sales values ('birmingham', 'cardiff', 'coventry')
),
partition q2_2007 values less than (to_date('1-jul-2007','dd-mon-yyyy')) tablespace ts1
(subpartition q22007_north_sales values ('newcastle', 'sunderland', 'leeds'),
subpartition q22007_south_sales values ('brighton', 'bournemouth', 'cornwall'),
subpartition q22007_east_sales values ('norwich', 'ipswitch', 'yarmouth'),
subpartition q22007_west_sales values ('birmingham', 'cardiff', 'coventry')
),
partition q3_2007 values less than (to_date('1-oct-2007','dd-mon-yyyy')) tablespace ts1
(subpartition q32007_north_sales values ('newcastle', 'sunderland', 'leeds'),
subpartition q32007_south_sales values ('brighton', 'bournemouth', 'cornwall'),
subpartition q32007_east_sales values ('norwich', 'ipswitch', 'yarmouth'),
subpartition q32007_west_sales values ('birmingham', 'cardiff', 'coventry')
),
partition q4_2007 values less than (to_date('1-jan-2008','dd-mon-yyyy')) tablespace ts1
(subpartition q42007_north_sales values ('newcastle', 'sunderland', 'leeds'),
subpartition q42007_south_sales values ('brighton', 'bournemouth', 'cornwall'),
subpartition q42007_east_sales values ('norwich', 'ipswitch', 'yarmouth'),
subpartition q42007_west_sales values ('birmingham', 'cardiff', 'coventry')
)
);

 

Partition Maintenance

Adding partitions

alter table sales_data
add partition sales_quarters values less than (to_date('1-apr-2007','dd-mon-yyyy')) tablespace ts4;

Splitting partitions

alter table sales_data split partition ticket_sales01 at (2000) into
(partition ticket_sales01A, partition ticket_sales01B);

Merging partitions

alter table ticket_sales merge partitions ticket_sales01A, ticket_sales01B into partition ticket_sales02;

Renaming partitions

alter table ticket_sales rename partition ticket_sales01B to ticket_sales01C;

Exchanging partitions

alter table ticket_sales exchange partition ticket_sales02 with table ticket_sales03;

Note : This enables you to convert a regular nonpartitioned table into a partition of a partitioned table.

Dropping partitions

Alter table sales_data drop partition sales_quarters;

Note: if you have data in the partitions you intend to drop, you need to use the 'update global indexes' clause with the preceding statement; otherwise all global indexes will become invalidated. Local indexes will still be okay because they're mapped directly to the affected partitions only.
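For example, using the table from the examples above:

alter table sales_data drop partition sales_quarters update global indexes;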

Coalescing partitions

alter table sales_data coalesce partition;

Note: you can coalesce hash and list partitioned tables

Index Clustered Tables

A cluster is a way to store a group of tables that share some common columns in the same database blocks and to store related data together on the same block; the goal is to reduce disk I/O and therefore increase access speed when you join the tables together. You should not use a clustered table when:

  • the tables are heavily modified - an index cluster has certain negative performance side effects on DML
  • you frequently full scan one of the tables - as all the tables in the cluster will be scanned, full scans will take longer
  • you frequently need to truncate and load the table - tables in clusters cannot be truncated, since the cluster stores more than one table on a block; you must delete the rows instead.

Permissions

grant create cluster to test02;

Creating

# Create the cluster 
create cluster emp_dept( deptno number (3)) tablespace users;

# Create the cluster index (assumed name; required before rows can be inserted)
create index emp_dept_idx on cluster emp_dept;

# Create the two tables that are part of the cluster
create table dept ( deptno number (3) primary key) cluster emp_dept (deptno);

create table emp ( empno number (5) primary key, ename varchar2(15), deptno number (3) references dept) cluster emp_dept (deptno);

Removing

drop cluster emp_dept including tables;

Cluster Information

select table_name, tablespace_name, cluster_name from user_tables; (only see tables, not cluster) 
select segment_name, tablespace_name, segment_type from user_segments; (only see the cluster, not tables)

Hash Cluster Tables

Hash cluster tables are similar to index cluster tables except that a cluster index is not used; the data is the index in this case. The cluster key is hashed into a block address and the data is expected to be there. The important things to really understand are:

  • The hash cluster space is allocated right from the beginning. Oracle will take your HASHKEYS/trunc(blocksize/SIZE) and will allocate that space right away; as soon as the first table is put in the cluster, any full scan will hit every allocated block. This is different from every other table.
  • The number of HASHKEYS in a hash cluster is fixed; performance can be affected by unintended hash collisions if it was set too low.
  • Range scanning on the cluster key is not available. The table will be full scanned if you use a range on the cluster key.

Hash clusters are suitable when:

  • You know how many rows the table will have over its life, or a reasonable upper bound; if at all possible you should never have to rebuild the table.
  • DML, especially inserts, is light. Updates do not introduce overhead unless you update the HASHKEY, which would not be a good idea as it would cause the row to migrate.
  • You access the data by the HASHKEY value constantly. For example, you have a table of parts and the parts are always accessed by part number. Lookup tables are appropriate for hash clusters.

Creating

# Create the hash cluster 
create cluster emp_dept(deptno number (3)) tablespace users hash is deptno hashkeys 200;

# Create the two tables that are part of the cluster
create table dept ( deptno number (3) primary key) cluster emp_dept (deptno);

create table emp ( empno number (5) primary key, ename varchar2(15), deptno number (3) references dept) cluster emp_dept (deptno);

Removing

drop cluster emp_dept including tables;

Useful Views

DBA_TABLES: describes all relational tables in the database

DBA_TAB_COLUMNS: describes the columns of all tables, views, and clusters in the database

DBA_UNUSED_COL_TABS: lists all tables that have columns marked as unused.

DBA_TAB_COMMENTS : displays comments on all tables and views in the database

DBA_COL_COMMENTS: displays comments on the columns of all tables and views in the database

DBA_TAB_PARTITIONS: provides the partition-level partitioning information, partition storage parameters, and partition statistics determined by ANALYZE statements for all partitions in the database.

Automatic Segment Space Management (ASSM)

ASSM is a simpler and more efficient way of managing space within a segment. It completely eliminates any need to specify and tune the pctused, freelists and freelist groups storage parameters for schema objects created in the tablespace; if any of these attributes are specified, they are ignored. If you have space issues in a tablespace you can shrink tables and move indexes while online. MMON will send an alert for any tablespace whose thresholds have been exceeded. To shrink table segments you can use the commands below: cascade will shrink both the table and its indexes (only these objects are affected), while compact will stop before moving the High Water Mark (HWM), thus not reclaiming space. This may be useful because moving the HWM locks the table, which may impact users.

True statements of a shrink operation:

  • Segment shrink is allowed only on segments whose space is automatically managed
  • Heap-organised and index-organised tables can be shrunk (clustered tables cannot)
  • ROW MOVEMENT must be enabled for heap-organised tables
  • Chained rows may be repaired during a segment shrink operation
  • Triggers are not fired during a segment shrink operation.
  • Tables with ROWID-based materialized views cannot be shrunk, as the materialized views cannot be maintained
  • IOT mapping tables and overflow segments cannot be shrunk
  • Tables with function-based indexes cannot be shrunk

There are two phases in a segment shrink operation:

Compaction phase

During compaction the rows are compacted and moved towards the left side of the segment. The HWM remains the same, so the free space is still not available. DML is still possible while the object is being compacted.

Adjustment of HWM
(release free space)

This adjustment is very short: Oracle lowers the HWM and releases the free space. Oracle locks the object in exclusive mode while the HWM is being lowered, meaning that no DML operations can take place.

A shrink could fail because the tablespace is not locally managed or does not have ASSM (automatic segment space management) enabled, because the table has a LONG column, because row movement has not been enabled, or because it is a clustered table. There is no tool to display how often a segment is used, but using the segment advisor in OEM you can get shrink advice; alternatively you can perform any of the commands below.

Determine amount of free space

declare
  l_fs1_bytes number;
  l_fs2_bytes number;
  l_fs3_bytes number;
  l_fs4_bytes number;
  l_fs1_blocks number;
  l_fs2_blocks number;
  l_fs3_blocks number;
  l_fs4_blocks number;
  l_full_bytes number;
  l_full_blocks number;
  l_unformatted_bytes number;
  l_unformatted_blocks number;
begin
dbms_space.space_usage(
  segment_owner => user,
  segment_name => 'BOOKINGS',
  segment_type => 'TABLE',
  fs1_bytes => l_fs1_bytes,
  fs1_blocks => l_fs1_blocks,
  fs2_bytes => l_fs2_bytes,
  fs2_blocks => l_fs2_blocks,
  fs3_bytes => l_fs3_bytes,
  fs3_blocks => l_fs3_blocks,
  fs4_bytes => l_fs4_bytes,
  fs4_blocks => l_fs4_blocks,
  full_bytes => l_full_bytes,
  full_blocks => l_full_blocks,
  unformatted_blocks => l_unformatted_blocks,
  unformatted_bytes => l_unformatted_bytes
);
dbms_output.put_line(' FS1 Blocks = '||l_fs1_blocks||' Bytes = '||l_fs1_bytes);
dbms_output.put_line(' FS2 Blocks = '||l_fs2_blocks||' Bytes = '||l_fs2_bytes);
dbms_output.put_line(' FS3 Blocks = '||l_fs3_blocks||' Bytes = '||l_fs3_bytes);
dbms_output.put_line(' FS4 Blocks = '||l_fs4_blocks||' Bytes = '||l_fs4_bytes);
dbms_output.put_line('Full Blocks = '||l_full_blocks||' Bytes = '||l_full_bytes);
end;
/

Enable row movement

alter table <table> enable row movement;

Note: required to perform a shrink operation

recover space from table and indexes and amend the HWM

alter table <table> shrink space cascade;

Note: this will lock the table

recover space and don't amend the HWM

alter table <table> shrink space compact;

recover space from table and indexes and don't amend the HWM

alter table <table> shrink space compact cascade;

deallocate unused extents from table

alter table test03 deallocate unused;
alter table test03 deallocate unused keep 10M;

Shrinking index segments is required because when an index entry is deleted the space remains assigned to the index. First you need to analyze the index and then query the index_stats view. Remember the space is not removed but is available for reuse; to get the space back you must rebuild the index.

Validate index

analyze index test_indx validate structure;

 

select lf_rows_len, del_lf_rows from index_stats where name = 'IND1';

 

select pct_used from index_stats where name = 'EMP_INDX';

Shrink index

alter index test_indx shrink space;

Coalesce space

alter index emp_indx coalesce;

Note: cannot move the index to another tablespace; does not require additional disk space; coalesces index leaf blocks within each branch

Rebuild index (online)

alter index test_indx rebuild online;

Note: uses a journal table to store any new rows and then applies them to the new index; can move the index to another tablespace; requires double the space to rebuild online; creates a new tree and adjusts the tree height

Monitor index usage

alter index test_indx monitoring usage; (turn on monitoring)
alter index test_indx nomonitoring usage; (turn off monitoring)

select index_name, table_name, monitoring, used, start_monitoring from v$object_usage where index_name = 'TEST_IDX';

SGA 
(System Global Area): a large area of memory that all the Oracle background processes access.

PGA 
(Process Global Area) :This is memory that is private to a single process or thread and is not accessible by any other process or thread

UGA 
(User Global Area): This is memory that is associated with your session; it can be found in the PGA or the SGA depending on whether you are connected to the database via shared server or dedicated server:

Shared Server - the UGA will be in the SGA
Dedicated Server - the UGA will be in the PGA
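As a quick check (a sketch using the standard dynamic views), you can display your own session's UGA and PGA consumption:

select sn.name, st.value
  from v$mystat st
  join v$statname sn on sn.statistic# = st.statistic#
 where sn.name in ('session uga memory', 'session pga memory');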

The shared pool:

The library cache includes the shared SQL area, private SQL areas, PL/SQL procedures and packages, and control structures such as locks and library cache handles. Oracle code is first parsed, then executed, and the parsed code is stored in the library cache. Oracle first checks the library cache to see if there is an already parsed, ready-to-execute form of the statement; if there is, CPU time is reduced considerably - this is called a soft parse. If Oracle has to parse the statement, this is called a hard parse. If there is not enough room in the cache Oracle will remove older parsed code; obviously it is better to keep as much parsed code in the library cache as possible. Keep an eye on missed cache hits, which are an indication that a lot of hard parsing is going on. 

The dictionary cache is a collection of database tables and views containing information about the database, its structures, privileges and users. When statements are issued Oracle will check permissions, access rights, etc. and will obtain this information from its dictionary cache; if the information is not in the cache it has to be read in from disk and placed into the cache. The more information held in the cache, the less Oracle has to access the slow disks.

The parameter SHARED_POOL_SIZE is used to determine the size of the shared pool; there is no way to size the two caches independently, you can only adjust the shared pool as a whole.

The shared pool uses an LRU (least recently used) list to maintain what is held in it; see the buffer cache below for more details on the LRU. You can clear down the shared pool area by using the following command:

    alter system flush shared_pool;

Buffer cache

This area holds copies of data blocks read from the datafiles. The cache maintains two lists: the write list and the least recently used (LRU) list. The write list holds dirty buffers, which contain modified data not yet written to disk.

The LRU list contains the following buffer types:

  • free buffers - hold no useful data and can be reused

  • pinned buffers - actively being used by user sessions

  • dirty buffers - contain data that has been read from disk and modified but not yet written to disk

It is the database writer's job to make sure there are enough free buffers available to user sessions; if not, it will write dirty buffers out to disk to free up the cache.

There are 3 buffer pools:

  • Default buffer pool, which is everything not assigned to the keep or recycle buffer pools, DB_CACHE_SIZE

  • Keep buffer pool, which keeps the data in memory (the goal is to keep warm/hot blocks in the pool for as long as possible), DB_KEEP_CACHE_SIZE

  • Recycle buffer pool, which removes data immediately from the cache after use (the goal here is to age out a block as soon as it is no longer needed), DB_RECYCLE_CACHE_SIZE

The standard block size is determined by DB_CACHE_SIZE; if tablespaces are created with a different block size then you must also create a cache entry to match that block size:

    DB_2K_CACHE_SIZE (used with tablespace block size of 2k) 
    DB_4K_CACHE_SIZE (used with tablespace block size of 4k)
    DB_8K_CACHE_SIZE (used with tablespace block size of 8k) 
    DB_16K_CACHE_SIZE (used with tablespace block size of 16k) 
    DB_32K_CACHE_SIZE (used with tablespace block size of 32k)

The buffer cache hit ratio is used to determine if the buffer cache is sized correctly; the higher the value, the more is being read from the cache:

    hit rate = (1 - (physical reads / logical reads)) * 100

You can clear down the buffer cache by using the following command:

    alter system flush buffer_cache;

Redo buffer

The redo log buffer is where data that needs to be written to the online redo logs is cached temporarily before it is written to disk; this area is normally less than a couple of megabytes in size. The entries contain the information necessary to reconstruct/redo changes made by the INSERT, UPDATE, DELETE, CREATE, ALTER and DROP commands. The contents of this buffer are flushed:

  • Every three seconds
  • Whenever someone commits a transaction
  • When it gets one-third full or contains 1MB of cached redo log data.
  • When LGWR is asked to switch logs

Use the LOG_BUFFER parameter to adjust it, but be careful about increasing it too much: a larger buffer will reduce your I/O but commits will take longer. 

Large Pool

This is an optional memory area that provides large areas of memory for:

  • Shared Server - to allocate the UGA region in the SGA
  • Parallel execution of statements - to allow for the allocation of inter-process message buffers, used to coordinate the parallel query servers.
  • Backup - for RMAN disk I/O buffers

The large pool is basically a non-cached version of the shared pool. Use the LARGE_POOL_SIZE parameter to adjust it.

Java Pool

Used to execute Java code within the database. Use the JAVA_POOL_SIZE parameter to adjust it (the default is 20MB).

Streams Pool

Streams are used for enabling data sharing between databases or application environments. Use the STREAMS_POOL_SIZE parameter to adjust it.

PGA: The PGA (Process Global Area) is a specific piece of memory that is associated with a single process or thread; it is not accessible by any other process or thread. Note that each of Oracle's background processes has a PGA area. Oracle creates a PGA area for each user session; this area holds data and control information and is used exclusively by that session. A user's cursors and sort operations are all stored in the PGA. In a dedicated server configuration the PGA also contains the UGA:

UGA (User Global Area): The UGA is your session state information; this area of memory is accessed by your current session. Depending on the connection type, the UGA can be located in the SGA (shared server), where it is accessible by any of the shared server processes; because a dedicated connection does not use shared servers, in that case the memory is located in the PGA. Two parameters control PGA memory:

  • workarea_size_policy - you can set this option to manual or auto (the default)
  • pga_aggregate_target - controls how much memory to allocate to the PGA in total.

Display background process PGA memory usage

select program, pga_used_mem, pga_alloc_mem, pga_max_mem from v$process;

 

A data block contains:

HEADER: contains information regarding the type of block (a table block, index block, etc.), transaction information regarding active and past transactions on the block, and the address (location) of the block on disk

 

System Change (Commit) Number (SCN)

The SCN is an important quantifier that Oracle uses to keep track of its state at any given point in time. The SCN is used to keep track of all changes within the database; it's a logical timestamp that is used by Oracle to order events that have occurred within the database. SCNs are increasing sequence numbers and are used in redo logs to confirm that transactions have been committed; all SCNs are unique. SCNs are used in crash recovery: the control file maintains an SCN for each data file, and if the data files are out of sync after a crash Oracle can reapply the redo log information to bring the database back up to the point of the crash. You can even take the database back in time to a specific SCN (or point in time).
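For example, the current SCN can be displayed with either of the following:

select current_scn from v$database;
select dbms_flashback.get_system_change_number from dual;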

Checkpoints

Checkpoints are important events that synchronize the database buffer cache and the datafiles; they are used for recovery. A checkpoint is used as a starting point for recovery: it is a framework that enables the writing of dirty blocks to disk based on a System Change (Commit) Number (SCN, see above) and a Redo Byte Address (RBA) validation algorithm, and it limits the number of blocks to recover.

The checkpoint collects all the dirty buffers and writes them to disk, the SCN is associated with a specific RBA in the log, which is used to determine when all the buffers have been written.

The LGWR process writes redo information from the redo log buffer to the online redo logs when:

  • a user commits a transaction
  • the redo log buffer becomes 1/3 full
  • the redo buffer contains 1MB of changed records
  • a log file switch occurs

The log group can be in one of four states:

1) Current: the log group that is actively being written to.

2) Active: the files in the log group are required for instance recovery.

3) Inactive: the files in the log group are not required for instance recovery and can be overwritten.

4) Unused: the log group has never been written to; a new group.

 

A log file can be in one of four states:

Invalid: the file is corrupt or missing.

Stale: the log file is new and has never been used.

Deleted: the log file is no longer being used.

Blank: the log file is currently being used.

Log group and log files commands

Configuration

Creating new log group

alter database add logfile group 4 ('c:\oracle\redo4a.log','c:\oracle\redo4b.log') size 10M;

Adding new log file to existing group

alter database add logfile member 'c:\oracle\redo3c.log' to group 3;

Renaming log file in existing group

shutdown the database
rename the file at O/S level
startup the database in mount mode
alter database rename file 'old name' to 'new name';
open the database
backup the controlfile

Drop log group

alter database drop logfile group 3;

Drop log file from existing group

alter database drop logfile member 'c:\oracle\redo3c.log';

Maintaining

Clearing Log groups

alter database clear logfile group 3;
alter database clear unarchived logfile group 3;

Note: use the unarchived option when a log group has not been archived

Logswitch and Checkpointing

alter system checkpoint;

alter system switch logfile;
alter system archive log current;
alter system archive log all;

# The differences between them are:
switch logfile - switches the logfile and returns the prompt immediately; archiving takes place in the background
log current - switches the logfile and returns the prompt only when the logfile has been successfully archived
log all - archives all full log files

Note: checkpoints are discussed above

Display the redo usage

select le.leseq "Current log sequence No",
  100*cp.cpodr_bno/le.lesiz "Percent Full",
  cp.cpodr_bno "Current Block No",
  le.lesiz "Size of Log in Blocks"
from x$kcccp cp, x$kccle le
where le.leseq =CP.cpodr_seq
and bitand(le.leflg,24) = 8
/

Useful Views

V$LOG

displays log file information from the control file.

V$LOGFILE

contains information about redo log files.

 

Archived Logs

When a redo log file fills up, and before it can be used again, the file is archived for safe keeping; this archive file, together with the other redo log files, can recover a database to any point in time. It is best practice to turn on ARCHIVELOG mode, which performs the archiving automatically.

The archived log files can be written to a number of destinations (up to 10 locations, even to a standby database); using the parameters log_archive_dest_n and log_archive_min_succeed_dest you can control how Oracle writes its archived log files.

Configuration

Enabling

alter system set log_archive_dest_1 = 'location=c:\oracle\archive' scope=spfile;
alter system set log_archive_format = 'arch_%d_%t_%r_%s.log' scope=spfile;

shutdown database
startup database in mount mode
alter database archivelog;
startup database in open mode

Archive format options
%r - resetlogs ID (required parameter) 
%s - log sequence number (required parameter) 
%t - thread number (required parameter) 
%d - database ID (not required)

Disabling

alter database noarchivelog;

Displaying

archive log list;
select name, log_mode from v$database;
select archiver from v$instance;

Maintenance

Display system parameters

show parameter log_archive_dest
show parameter log_archive_format 
show parameter log_archive_min_succeed_dest

Useful Views

V$ARCHIVED_LOG

Display the archived log files

V$INSTANCE

Display if database is in archive mode

V$DATABASE

Display if database is in archive mode

 

Oracle Processes:

Process Monitor

PMON

Responsible for cleaning up after abnormally terminated connections. 
Responsible for monitoring other server processes and restarting them if necessary
Registers the instance with the listener dynamically (dynamic service registration).
Restarts failed server processes and dispatcher processes

System Monitor

SMON

Temporary space cleanup
Crash recovery upon restart 
Coalescing free space
Recovering transactions active against unavailable files
Instance recovery of failed node in OPS (Oracle Parallel Server)
Cleans up OBJ$ (low-level data dictionary)
Shrinks rollback segments
Offlines rollback segments

Other processes call the SMON process when required.

Distributed database recovery

RECO

Recovers transactions that are left in a prepared state because of a crash or loss of connection during a two-phase commit.

Checkpoint process

CKPT

The checkpoint process is charged with instructing the database block writers to write the database buffer cache to disk; it then updates the data file headers and control file to indicate when the checkpoint was performed. There is a relationship between checkpoints and recovery time: the more checkpointing, the less recovery time is needed when a crash occurs.

The CKPT process does not perform the checkpoint itself but assists with the checkpointing process by updating the file headers of the data files.

A checkpointing process involves the following:

  • Flushing the redo log buffers to the redo log files
  • Writing a checkpoint record to the redo log file
  • Flushing the database buffer cache to the data files
  • Updating the data file headers and control files after the checkpoint completes

Database block writer

DBWn

Responsible for writing dirty blocks to disk when free space within the database buffer cache is low. It uses the LRU (Least Recently Used) algorithm, which retains data in memory based on how long it has been since someone last asked for that data. The database buffer cache is flushed to disk:

  • when the database issues a checkpoint
  • when a server process can't find a clean reusable buffer after checking a threshold number of buffers
  • every 3 seconds
  • when a user process has searched too long for a free buffer while reading a buffer into the buffer cache
  • when the instance is shut down
  • when a tablespace is put in backup mode or taken offline
  • when a segment is dropped

If you have multiple CPUs then it is advisable to run multiple database writers. Use the DB_WRITER_PROCESSES parameter to increase the number of database writers; the instance has to be restarted for it to take effect.
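For example (the value 4 is illustrative; the parameter is static, hence scope=spfile and a restart):

alter system set db_writer_processes = 4 scope=spfile;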

Log writer

LGWR

Responsible for flushing to disk the contents of the redo log buffer located in the SGA. Both committed and uncommitted changes are written to the redo log buffer. The redo log buffer is flushed to disk before the data blocks are written to disk. The redo log buffer is flushed to disk

  • every 3 seconds
  • whenever a user commits a transaction
  • when the redo log buffer is a third full or contains 1 Mb of buffered data
  • before the DBWn process writes when a checkpoint occurs

Archive process

ARCn

Used when the database is in archivelog mode: it copies an online redo log file to another location when LGWR fills it, and these archived log files can be used to perform media recovery. There can be a maximum of ten archive processes running, ARC0-ARC9. The LOG_ARCHIVE_MAX_PROCESSES parameter determines how many archive processes will be started (the default is 1).
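This parameter is dynamic, so it can be raised on a running instance (the value is illustrative):

alter system set log_archive_max_processes = 4;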

Manageability Monitor

MMON

Collects statistics to help the database manage itself. The MMON process collects the AWR (automatic workload repository) snapshot information which is used by the ADDM (automatic database diagnostic monitor), also MMON issues alerts when database thresholds are exceeded.

Manageability Monitor Light

MMNL

The process flushes ASH information to disk when the buffer is full; it also captures session history and database metrics.

Memory Manager

MMAN

Uses the metrics collected to determine the ideal distribution of memory within Oracle. It constantly monitors the database and adjusts the memory allocations according to workload.

Job Queue Coordination

CJQ0

Used to schedule and run user jobs. It spawns job queue slave processes (J000-J999) which actually run the job.

Job Queue Process

J000-J999

These processes are what actually run the scheduled jobs requested by CJQ0.

File Mapping Monitor

FMON

Maps files to intermediate storage layers and physical devices. The results are exposed through the file mapping views, populated via the DBMS_STORAGE_MAP package. Generally the 3rd-party LVM (logical volume manager) supplier will provide a driver to map to.

Recovery Writer

RVWR

This process is started when you implement flashback logging; it logs the before image (taken from the flashback buffers) of an Oracle block before it is changed, and this is written to the flashback log files.

Change Tracking Writer

CTWR

This process tracks any data blocks that have changed; RMAN can then use this to speed up backups, as it no longer needs to read the entire data file to see what has changed.

Queue Monitor Coordinator

QMNC

Spawns and coordinates queue slave processes.

Block server process

BSP

Used in OPS; keeps each server's SGA in the cluster consistent with the others.

Lock monitor process

LMON

Used in OPS and monitors all instances in a cluster to detect a failure of an instance.

Lock manager daemon

LMD

Used in OPS and controls the global locks and global resources for the block buffer cache in a clustered environment.

Lock process

LCKn

Used in OPS and is the same as the LMD daemon but handles requests for all global resources other than database block buffers

Dispatcher process

Dnnn

Dispatcher processes that are used when using a shared server environment

Shared Server process

Snnn

Shared Server processes that are used when using a shared server environment

Oracle process spawner

PSP0

Process spawner has the job of creating and managing other Oracle processes.

Oracle shadow process

SHAD

Oracle's shadow process, could not find much on this process

Streams Advanced Queuing process

q000 - q???

I believe this is something to do with Oracle Streams Advanced Queuing

 

Useful Views

V$BGPROCESS

displays information about the background processes

V$PROCESS

contains information about the currently active processes

 

Password file (orapwd utility) in Oracle: the password file stores the passwords of users with administrative privileges.

If the DBA wants to start up an Oracle instance, there must be a way for Oracle to authenticate the DBA. Obviously, the DBA password cannot be stored in the database, because Oracle cannot access the database before the instance is started up. Therefore the authentication of the DBA must happen outside of the database. There are two distinct mechanisms to authenticate the DBA:

(i) Using the password file or

(ii) Through the operating system (groups): any OS user in the dba group can log in as SYSDBA.

 

REMOTE_LOGIN_PASSWORDFILE: The init parameter REMOTE_LOGIN_PASSWORDFILE specifies whether a password file is used to authenticate the Oracle DBA. If it is set either to SHARED or EXCLUSIVE, a password file will be used.

REMOTE_LOGIN_PASSWORDFILE is a static initialization parameter and therefore cannot be changed without bouncing the database. Below are the values.

NONE - Oracle ignores the password file if it exists i.e. no privileged connections are allowed over non secure connections. If REMOTE_LOGIN_PASSWORDFILE is set to EXCLUSIVE or SHARED and the password file is missing, this is equivalent to setting REMOTE_LOGIN_PASSWORDFILE to NONE.

EXCLUSIVE (default) - Password file is exclusively used by only one (instance of the) database. Any user can be added to the password file. Only an EXCLUSIVE file can be modified. EXCLUSIVE password file enables you to add, modify, and delete users. It also enables you to change the SYS password with the ALTER USER command.

SHARED - The password file is shared among databases. A SHARED password file can be used by multiple databases running on the same server, or multiple instances of an Oracle Real Application Clusters (RAC) database. However, the only user that can be added/authenticated is SYS.

A SHARED password file cannot be modified i.e. you cannot add users to a SHARED password file. Any attempt to do so or to change the password of SYS or other users with the SYSDBA or SYSOPER or SYSASM (this is from Oracle 11g) privileges generates an error. All users needing SYSDBA or SYSOPER or SYSASM system privileges must be added to the password file when REMOTE_LOGIN_PASSWORDFILE is set to EXCLUSIVE. After all users are added, you can change REMOTE_LOGIN_PASSWORDFILE to SHARED.

ORAPWD: The Oracle orapwd utility assists the DBA while granting SYSDBA, SYSOPER and SYSASM privileges to other users. By default, SYS is the only user that has SYSDBA and SYSOPER privileges. Creating a password file, via orapwd, enables remote users to connect with administrative privileges.

$ orapwd file=password_file_name [password=the_password] [entries=n] [force=Y|N] [ignorecase=Y|N] [nosysdba=Y|N]
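For example, to create a password file, grant an administrative privilege and verify it (the file name and user are illustrative):

$ orapwd file=$ORACLE_HOME/dbs/orapwORCL password=MySecret entries=5

SQL> grant sysdba to scott;
SQL> select username, sysdba, sysoper from v$pwfile_users;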

 

Active Session History (ASH)

Oracle collects Active Session History (ASH) statistics (mostly wait statistics for different events) for all active sessions every second from v$session and stores them in a circular FIFO buffer in the SGA. ASH records are very recent session history, within the last 5-10 minutes. If the buffer fills up before the AWR flushes it (by default every hour), the MMNL process (Manageability Monitor Light) flushes the data to disk, where it is stored in dba_hist_active_session_history. A sample query follows the list below.

ASH collects the following:

  • SQL_ID
  • SID
  • Client ID, Service ID
  • Program, Module, Action
  • Object, File, Block
  • Wait event number, actual wait time (if session is waiting)

Useful Views

V$ACTIVE_SESSION_HISTORY

stores the recent session history of every active session, collected every second from v$session; this is the circular buffer, so older information is overwritten.

V$SESSION

this view holds all the sessions information (72 columns of info)

DBA_HIST_ACTIVE_SESSION_HISTORY

provides historical information about recent active session history; it is basically made up of snapshots from v$active_session_history.

 

ASH Report: report is in $ORACLE_HOME/rdbms/admin/ashrpt.sql

Data Integrity (CONSTRAINTS)

 

Oracle enforces important business rules via integrity constraints, which enforce the business rules associated with your database and prevent the entry of invalid information into tables.

 

A CONSTRAINT can be one of the following:

a column-level constraint

Column-level constraints refer to a single column in the table and do not specify a column name (except check constraints). They refer to the column that they follow.

a table-level constraint

Table-level constraints refer to one or more columns in the table. Table-level constraints specify the names of the columns to which they apply. Table-level CHECK constraints can refer to 0 or more columns in the table.

Primary key constraints

A primary key is basically a key which is Not Null and Unique

Not Null constraints

A table's column cannot be Null; it must contain a value

Check constraints

Ensures that a table's column is within some parameters that you have specified, for example an employee's salary must not exceed £100,000.

Unique constraints

Ensures the uniqueness of the rows in the table, for example a national insurance number would be unique to each employee

Referential integrity constraints

Ensure that values for certain important columns make sense by cross-referencing other tables, for example confirming that a department exists in the department table. The referencing column is known as a foreign key; the table containing it is the child table, while the referenced data is held in the parent table.

It is possible for a table to have a self-referential integrity constraint.
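A minimal sketch showing each constraint type (the table, column and constraint names are made up; it assumes a dept table exists):

create table emp (
  empno  number(5) constraint emp_pk primary key,                    -- primary key
  ename  varchar2(15) not null,                                      -- not null
  ni_no  varchar2(9) constraint emp_ni_uq unique,                    -- unique
  salary number(8,2) constraint emp_sal_ck check (salary <= 100000), -- check
  deptno number(3) constraint emp_dept_fk references dept(deptno));  -- referential (foreign key)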

 

Views: A view is a virtual table consisting of a stored query; it contains no data. A view does not physically exist - it is basically a definition stored within the data dictionary (many of the DBA_ views are exactly that). Views can be created in your own schema (you need the CREATE VIEW privilege) or in someone else's schema (you need the CREATE ANY VIEW privilege).

Creating

create view test_view as select employee_id, first_name, last_name from employee where manager_id = 122;

Removing

drop view test_view;

Compile

alter view test_view compile;

Using a view

select * from test_view;

Note: The sql statement defined by the view will be run.

Check for invalid views

select object_name, status from dba_objects where status = 'INVALID' and object_type = 'VIEW';

Display source code of view

select view_name, text from user_views;

Display view definition

select * from v$fixed_view_definition where view_name = 'V$SESSION';

Useful Views

DBA_VIEWS

describes all views in the database

DBA_CATALOG

lists all indexes, tables, views, clusters, synonyms, and sequences in the database

V$FIXED_VIEW_DEFINITION

contains the definitions of all the fixed views

Views: Views are the virtual projection of an output query, or a dynamic view of the data in a database that is presented to the user whenever requested. Operations performed through views directly affect the data in the base table; they are therefore subject to the integrity constraints and triggers of the base table. Because views are a virtual projection of a query result they do not take up any storage area, which helps reduce memory usage and encourages the use of shared SQL.

Materialized Views: Materialized views, on the other hand, are non-virtual schema objects and a common part of data warehousing, primarily used for summarizing, pre-computing, replicating and distributing data. Materialized views provide indirect access to the data of the base table because they create a separate segment to store the results of their query. The segment created by a materialized view takes up storage space and is saved either in the same database as its base table or in a different database.

Views benefits:

1) Commonality of code: Since a view is based on one common set of SQL, it is less likely to require parsing when called, because the basic underlying SQL is always the same. However, since you can add additional where clauses when calling a view, you still need to use bind variables; additional where clauses without a bind variable can still cause a hard parse!

2) Security: Views have long been used to hide the tables that actually contain the data you are querying. Also, views can be used to restrict the columns that a given user has access to. Using views for security on less complex databases is probably not a bad thing. As databases become more complex, this solution becomes harder to scale and other solutions will be needed.

3) Predicate pushing: Oracle supports pushing predicates supplied in the outer query down into the view's underlying query.
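For example, assuming the TEST_VIEW created earlier, a predicate supplied when querying the view can be pushed by the optimizer into the view's defining query, so the filter is applied against EMPLOYEE directly rather than to the full result of the view:

select * from test_view where last_name = 'SMITH';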

 

 

Synonyms

A synonym is an alias for any table, view, materialized view, sequence, procedure, function, or package. 
A public synonym is owned by the user group PUBLIC and every user in a database can access it. 
A private synonym is in the schema of a specific user who has control over its availability to others.

Synonyms are used to:
- Mask the real name and owner of a schema object 
- Provide global (public) access to a schema object
- Provide location transparency for tables, views, or program units of a remote database.
- Simplify SQL statements for database users

e.g. without a synonym, querying the table PATIENT_REFERRALS requires the schema-qualified name:

SELECT * FROM MySchema.PATIENT_REFERRALS;

Creating a public synonym removes the need for the schema qualifier:

CREATE PUBLIC SYNONYM referrals FOR MySchema.PATIENT_REFERRALS;

After the public synonym is created, you can query with a simple SQL statement:

SELECT * FROM referrals;
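A private synonym works the same way but is visible only within the owning schema (or to users granted access to the underlying object). Dropping a synonym removes only the alias, never the object itself. A minimal sketch:

-- private synonym, visible only in the creating schema
CREATE SYNONYM my_referrals FOR MySchema.PATIENT_REFERRALS;

-- removing synonyms leaves the underlying table untouched
DROP SYNONYM my_referrals;
DROP PUBLIC SYNONYM referrals;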

 

===============================================================

 

Oracle Alter System
Version 12.1.0.2

 

General Information


Purpose

Quote from the online docs at http://docs.oracle.com:
"Use the ALTER SYSTEM statement to dynamically alter your Oracle Database instance. The settings stay in effect as long as the database is mounted. When you use the ALTER SYSTEM statement in a multitenant container database (CDB), you can specify some clauses to alter the CDB as a whole and other clauses to alter a specific pluggable database (PDB)"

Data Dictionary

conn / as sysdba

desc v$parameter

SELECT issys_modifiable, COUNT(*)
FROM v$parameter
GROUP BY issys_modifiable;

SELECT ispdb_modifiable, COUNT(*)
FROM v$parameter
GROUP BY ispdb_modifiable;

SELECT isinstance_modifiable, COUNT(*)
FROM v$parameter
GROUP BY isinstance_modifiable;

-- parameters modified since instance startup
SELECT ismodified, COUNT(*)
FROM v$parameter
GROUP BY ismodified;

SELECT name, value
FROM v$parameter
WHERE ismodified = 'MODIFIED';

-- parameters whose specified values Oracle adjusted to more suitable values
SELECT isadjusted, COUNT(*)
FROM v$parameter
GROUP BY isadjusted;

SELECT name, value
FROM v$parameter
WHERE isadjusted = 'TRUE';

-- deprecated parameters: make sure you are not setting any
col name format a32
col value format a49

SELECT name, value
FROM v$parameter
WHERE isdeprecated = 'TRUE';

System Privileges

ALTER SYSTEM

 

 

 

Archive Log

ALTER SYSTEM <archivelog clause>: see the ARCHIVE LOG clause in the Oracle SQL Language Reference.

Checkpoint

ALTER SYSTEM CHECKPOINT [GLOBAL];

ALTER SYSTEM CHECKPOINT;

Check Datafiles

-- only valid with a RAC configuration: GLOBAL indicates all instances
ALTER SYSTEM CHECK DATAFILES <GLOBAL | LOCAL>;

ALTER SYSTEM CHECK DATAFILES LOCAL;

Commit

ALTER SYSTEM SET <commit clause>: see the Oracle documentation on the COMMIT_LOGGING and COMMIT_WAIT parameters.

Distributed Recovery

ALTER SYSTEM <ENABLE | DISABLE> DISTRIBUTED RECOVERY;

ALTER SYSTEM ENABLE DISTRIBUTED RECOVERY;
ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;

Disconnect Session

ALTER SYSTEM DISCONNECT SESSION '<sid>,<serial#>' [POST_TRANSACTION] [<IMMEDIATE | NOREPLAY>];

set linesize 121

SELECT sid, serial#, osuser, username, schemaname
FROM v$session
WHERE sid = (SELECT sid FROM v$mystat WHERE rownum = 1);

ALTER SYSTEM DISCONNECT SESSION '8,694' POST_TRANSACTION;

Flush Buffer Cache

ALTER SYSTEM FLUSH BUFFER_CACHE;

ALTER SYSTEM FLUSH BUFFER_CACHE;

Flush Redo

ALTER SYSTEM FLUSH REDO TO <target_db_name> [NO CONFIRM APPLY | CONFIRM APPLY];

SELECT name
FROM v$database;

ALTER DATABASE CLOSE;

ALTER SYSTEM FLUSH REDO TO orabase CONFIRM APPLY;

Flush Shared Pool

ALTER SYSTEM FLUSH SHARED_POOL;

ALTER SYSTEM FLUSH SHARED_POOL;

Kill Session

ALTER SYSTEM KILL SESSION '<sid>,<serial#>[,@<instance_number>]' [<IMMEDIATE | NOREPLAY>];

set linesize 121

SELECT sid, serial#, osuser, username, schemaname
FROM v$session
WHERE sid = (SELECT sid FROM v$mystat WHERE rownum = 1);

ALTER SYSTEM KILL SESSION '8,694';

ALTER SYSTEM KILL SESSION '8,694,@2';

ALTER SYSTEM KILL SESSION '8,694,@2' IMMEDIATE;

Determine if a killed session is rolling back a transaction

This shows the user whose transaction is being rolled back. XIDUSN identifies the undo (rollback) segment in use; to track progress, watch USED_UREC across repeated runs: if it is decreasing the rollback is progressing, and if it is increasing the transaction is still doing work.

SELECT a.sid, a.username, b.xidusn rollback_seg_no,
b.used_urec undo_records, b.used_ublk undo_blocks
FROM gv$session a, gv$transaction b
WHERE a.saddr = b.ses_addr;

Local Listener Registration

ALTER SYSTEM SET LOCAL_LISTENER='<listener_name>';

ALTER SYSTEM SET LOCAL_LISTENER = 'CDB_LISTENER';

Quiesce

ALTER SYSTEM <QUIESCE RESTRICTED | UNQUIESCE>;

-- only valid in conjunction with DBMS_RESOURCE_MANAGER
ALTER SYSTEM QUIESCE RESTRICTED;
ALTER SYSTEM UNQUIESCE;

Register with the listener

ALTER SYSTEM REGISTER;

ALTER SYSTEM REGISTER;

Relocate Client

ALTER SYSTEM <relocate client clause> is relevant only with Oracle Flex ASM and is not covered in the Library

Remote Listener Registration

ALTER SYSTEM SET REMOTE_LISTENER='<listener_name>';

ALTER SYSTEM SET REMOTE_LISTENER = 'DG_LISTENER';

Restricted Session

ALTER SYSTEM <ENABLE | DISABLE> RESTRICTED SESSION;

ALTER SYSTEM ENABLE RESTRICTED SESSION;
ALTER SYSTEM DISABLE RESTRICTED SESSION;

Resume

ALTER SYSTEM RESUME;

ALTER SYSTEM RESUME;

Rolling Migration

-- only valid in conjunction with ASM
ALTER SYSTEM <rolling migration clause>: see the Oracle ASM documentation.

Shutdown Dispatcher

-- only valid with a shared server configuration (arguably best avoided), so not covered in detail here
ALTER SYSTEM SHUTDOWN [IMMEDIATE] '<dispatcher_name>';

Suspend

ALTER SYSTEM SUSPEND;

ALTER SYSTEM SUSPEND;

Switch Logfile

ALTER SYSTEM SWITCH LOGFILE;

ALTER SYSTEM SWITCH LOGFILE;

Wallet

ALTER SYSTEM SET <encryption wallet clause>: see the Oracle wallet / Transparent Data Encryption documentation.

 

Set Parameter Value

Full Syntax

ALTER SYSTEM SET <parameter_name> = <value> [CONTAINER=<ALL | CURRENT>] [DEFERRED]
[SID = '<sid_name>'] [SCOPE = <BOTH | MEMORY | SPFILE>] [COMMENT='<comment_string>'];

Comment

Allows a comment string to be associated with this change to the parameter value. If SCOPE=SPFILE is specified, the comment is written to the server parameter file.

ALTER SYSTEM SET <parameter_name> = <value> [COMMENT = '<comment_string>'];

ALTER SYSTEM SET audit_sys_operations=TRUE
COMMENT='Begin auditing SYS'
CONTAINER = ALL
SCOPE=SPFILE;
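When SCOPE=SPFILE is used, the comment is recorded in the server parameter file and can be read back from the UPDATE_COMMENT column of V$SPPARAMETER:

SELECT name, value, update_comment
FROM v$spparameter
WHERE name = 'audit_sys_operations';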

 

Container

In a CDB, specify this clause to control whether the change applies to all containers or only to the current PDB.

ALTER SYSTEM SET <parameter_name> = <value> [CONTAINER=<ALL | CURRENT>];

conn sys@pdbdev as sysdba

ALTER SYSTEM SET plsql_code_type=NATIVE
CONTAINER=CURRENT;
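Only parameters flagged as PDB-modifiable can be set at the PDB level; this can be verified in V$PARAMETER before attempting the change:

SELECT name, ispdb_modifiable
FROM v$parameter
WHERE name = 'plsql_code_type';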

 

Deferred

Many parameters can be set with the DEFERRED keyword so that the change applies only to sessions created after the change; existing sessions, including the current one, are unaffected.

ALTER SYSTEM SET <parameter_name> = <value> [DEFERRED];

conn sys@pdbdev as sysdba

ALTER SYSTEM SET plsql_code_type=NATIVE DEFERRED;
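DEFERRED is in fact mandatory for parameters whose ISSYS_MODIFIABLE value is DEFERRED; those parameters can be listed from V$PARAMETER:

-- parameters that can only be changed system-wide with DEFERRED
SELECT name
FROM v$parameter
WHERE issys_modifiable = 'DEFERRED';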

 

Reset

Returns the value of a parameter to its default value.

ALTER SYSTEM RESET <parameter_name>  SCOPE = SPFILE;
ALTER SYSTEM RESET <parameter_name>  [SID = '<sid_name | *>'];

ALTER SYSTEM RESET plsql_code_type SCOPE=SPFILE;

ALTER SYSTEM RESET plsql_code_type SID='*';
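Whether a parameter still has an explicit entry in the SPFILE, and therefore whether the RESET took effect, can be checked via the ISSPECIFIED column of V$SPPARAMETER:

SELECT name, value, isspecified
FROM v$spparameter
WHERE name = 'plsql_code_type';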

 

Scope

Note

Scope specifies where the change is recorded and when it takes effect. SCOPE is only meaningful for databases started with an SPFILE; for databases started with a PFILE, all ALTER SYSTEM changes affect memory only.

MEMORY indicates that the change is made in memory, takes effect immediately, and persists until the database is shut down.

SPFILE indicates that the change is made in the server parameter file only and will take effect after the database is restarted.

BOTH indicates that the change is made in memory and in the server parameter file.

ALTER SYSTEM SET <parameter_name> = <value> [SCOPE = <BOTH | MEMORY | SPFILE>];

ALTER SYSTEM SET  plsql_code_type = 'NATIVE'
SCOPE=MEMORY;

ALTER SYSTEM SET audit_sys_operations=TRUE
SCOPE=SPFILE;

ALTER SYSTEM SET plsql_code_type = 'NATIVE'
SCOPE=BOTH;
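After a change, the in-memory value and the SPFILE value can be compared side by side; when SCOPE=SPFILE was used they will differ until the instance is restarted:

-- current in-memory value
SELECT name, value FROM v$parameter WHERE name = 'plsql_code_type';

-- value recorded in the server parameter file
SELECT name, value FROM v$spparameter WHERE name = 'plsql_code_type';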

 

SID

Used on a RAC cluster to identify a specific instance; '*' means all instances.

ALTER SYSTEM SET <parameter_name> = <value> [SID = '<sid_name | *>'];

conn / as sysdba

SELECT instance_name
FROM v$instance;

ALTER SYSTEM SET plsql_code_type = 'NATIVE'
SID = 'orabase';
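On RAC, the per-instance values can then be confirmed from GV$PARAMETER, which adds an INST_ID column:

SELECT inst_id, name, value
FROM gv$parameter
WHERE name = 'plsql_code_type'
ORDER BY inst_id;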

 

Related Demos

Is SGA sizing truly dynamic?

SQL> show parameter sga_max_size
SQL> show sga

Total System Global Area 2505338880 bytes
Fixed Size 2405760 bytes
Variable Size 637536896 bytes
Database Buffers 1845493760 bytes
Redo Buffers 19902464 bytes

SQL> ALTER SYSTEM SET sga_max_size=2400M SCOPE=SPFILE;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.

Total System Global Area 2505338880 bytes
Fixed Size 2405760 bytes
Variable Size 671091328 bytes
Database Buffers 1811939328 bytes
Redo Buffers 19902464 bytes
Database mounted.
Database opened.

SQL> ALTER SYSTEM SET sga_max_size=180m;
*
ERROR at line 1:
ORA-02095: specified initialization parameter cannot be modified

This proves two things. First, if you set SGA_MAX_SIZE to 2400M then the next time you start your instance, a full 2.5GB of RAM is used by the SGA (give or take a bit of rounding up to the next granule boundary). Yet nothing was changed for the shared pool, the buffer cache or the large pool, so the *used* part of the SGA can only be roughly the same as it was before, as evidenced by the before and after values displayed for the "Database Buffers" and "Redo Buffers" components.

Secondly, how "dynamic" is the SGA when the parameter that sizes it, SGA_MAX_SIZE, cannot itself be altered dynamically? As the failed attempt to set it to 180M demonstrates, it can only be modified with the SCOPE=SPFILE clause, requiring an instance restart before the new value is read.

Ergo: SGA_MAX_SIZE is not actually dynamic. And SGA_MAX_SIZE steals all of its memory from the operating system regardless of what your caches and pools are set to.

Different operating systems, for example Solaris, may behave differently.
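By contrast, SGA_TARGET (used by Automatic Shared Memory Management) can be resized dynamically, provided the new value does not exceed SGA_MAX_SIZE. A minimal sketch, assuming the 2400M SGA_MAX_SIZE set above:

SQL> show parameter sga_target

SQL> ALTER SYSTEM SET sga_target=2000M SCOPE=BOTH;

System altered.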

 

How to change an INIT parameter in oracle database.

Step 1: Take a backup of the SPFILE and PFILE.

 

    $cd $ORACLE_HOME/dbs

    $cp  initEDI.ora initEDI.ora_bkp

    $cp spfileEDI.ora spfileEDI.ora_bkp

 

Step 2: create pfile from spfile.

    

    SQL>create pfile from spfile;

 

Step 3: Update the LOCAL_LISTENER parameter in the pfile and start up the database.

        Edit initEDI.ora and replace the local_listener entry with the one below.

local_listener='(Address=(Protocol=TCP)(Port=1521)(Host=yourhost.com))'

 

         SQL>startup pfile=initEDI.ora

Step 4: Create the spfile from pfile.

           SQL>create spfile from pfile;

Step 5: Shutdown the database and startup using the spfile.

            SQL>shutdown immediate;

            SQL>startup
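Finally, it is worth confirming that the instance came up using the SPFILE and that the new value is in effect (output will vary by environment):

            SQL>show parameter spfile

            SQL>show parameter local_listener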
